JEP 514: Ahead-of-Time Command-Line Ergonomics

https://openjdk.org/jeps/514


TL;DR

AOT cache = “pre-initialized JVM state for classes”

“The cache represents a JVM that has already done class loading and linking, so future runs can skip directly to that state.”

The key idea is:

  • in JEP 483 you had:

    • record

    • then create

  • in JEP 514 those two steps become one simpler command with -XX:AOTCacheOutput=...

JEP 483: Ahead-of-Time Class Loading & Linking

RUN:

scripts/jep483/run-aot-classloading.sh

and then run it again:

scripts/jep483/run-aot-classloading.sh

Under the hood, the JVM orchestrates and executes the old steps:

Step 2: Creating AOT cache in ONE command (JEP 514)...
Picked up JAVA_TOOL_OPTIONS: -Djava.class.path=target/miniapp-only.jar
 -Xlog:aot*=info:file=target/logs/aot-build.log 
 -XX:AOTCacheOutput=target/app.aot
 -XX:AOTConfiguration=target/app.aot.config
 -XX:AOTMode=create

TAKE A LOOK AT app-build.out: a new JVM is started to create the cache from a clean state.

JDK_AOT_VM_OPTIONS: an environment variable for passing extra JVM options to the second phase (cache creation).
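A sketch of how that variable could be used (the jar path and main class `com.example.Main` are placeholders, not from this project). Per JEP 514, options in JDK_AOT_VM_OPTIONS apply only to the second, cache-creation JVM, not to the training run:

```shell
# Give the cache-creation JVM a larger heap and its own log,
# without affecting the training run itself
export JDK_AOT_VM_OPTIONS="-Xmx4g -Xlog:aot*=info:file=target/logs/aot-create.log"

# One command: training run + cache creation (JEP 514)
java -XX:AOTCacheOutput=target/app.aot -cp target/miniapp-only.jar com.example.Main
```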

TIMING EXPLANATION

real time (0.13 → 0.09) is the total wall-clock time, meaning how long the application takes from start to finish from the user’s perspective. This is the most important metric. In this case, startup is about 30% faster, which shows the real benefit of AOT.

user time (0.18 → 0.16) is the CPU time spent executing Java code and JVM logic. It decreased slightly because the JVM no longer needs to perform as much work during startup, such as class loading, bytecode verification, and linking.

sys time (0.06 → 0.05) is the time spent in the operating system (for example file access, memory operations, classpath scanning). It also decreased slightly because fewer classes need to be read and processed at runtime.

The key improvement comes from skipping JVM startup work. With AOT cache, classes are already preloaded, verified, and linked, so the JVM can reuse that state instead of rebuilding it.

Even though user and sys times decreased only slightly, together they reduce the overall real time noticeably. This is because startup consists of many small operations, and AOT eliminates a large portion of them.
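The numbers above come from the shell's `time` builtin. A minimal way to reproduce such a comparison yourself (jar path, cache path, and main class are illustrative placeholders, and a JDK with AOT cache support is assumed):

```shell
# Baseline: cold start, the JVM loads, verifies, and links classes itself
time java -cp target/miniapp-only.jar com.example.Main

# With the AOT cache: the JVM reuses the preloaded, linked class state
time java -XX:AOTCache=target/app.aot -cp target/miniapp-only.jar com.example.Main
```

Run each variant a few times and compare the `real` lines; single runs are noisy at this scale.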

Important takeaway: AOT does not make your business logic faster. It reduces the work the JVM must do before your code starts running, which improves startup time.

LONGER EXPLANATION JEP 483 vs JEP 514

JEP 483

Introduced the feature itself:

  • training run

  • record AOT config

  • create cache

  • run with cache

In practice, your script does exactly that with separate steps:

  • -XX:AOTMode=record

  • -XX:AOTConfiguration=...

  • -XX:AOTMode=create

  • -XX:AOTCache=...

  • then later -XX:AOTMode=on / -XX:AOTCache=...

That matches the JEP 483 two-step flow: first record, then create, then run with the cache.
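Spelled out as commands, the JEP 483 flow looks roughly like this (jar path, config/cache names, and the main class are placeholders):

```shell
# 1. Training run: record which classes get loaded and linked
java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf \
     -cp app.jar com.example.Main

# 2. Create the cache from the recorded configuration
#    (no main class needed; the JVM exits after writing the cache)
java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf \
     -XX:AOTCache=app.aot -cp app.jar

# 3. Production runs start from the cached, pre-linked state
java -XX:AOTCache=app.aot -cp app.jar com.example.Main
```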

JEP 514

Does not introduce a new kind of cache. It just adds a more ergonomic command:
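Per JEP 514, roughly (jar and main class here are placeholders):

```shell
java -XX:AOTCacheOutput=app.aot -cp app.jar com.example.Main
```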

That single command internally does:

  1. training run (record)

  2. cache creation (create)

and uses a temporary config file automatically.

When two steps are needed

As the JEP doc states:

For example, if you intend to deploy an application to small instances in a cloud then you could do the training run on a small instance but create the AOT cache on a large instance. That way the training run reflects the deployment environment, but the creation of the AOT cache can leverage the additional CPU cores and memory of the large instance.

Such a division of labor may become more important as Leyden AOT optimizations become more complex. For example, some future AOT optimization might require minutes of time to create a cache on a small instance, but just seconds on a large instance.
