This project is actively maintained; see GitHub releases for the latest versions.
Java Latency Benchmark Harness (JLBH) is a tool that allows you to benchmark your code running in context, rather than in a microbenchmark. An excellent introduction can be found in the series of articles listed under Further Reading. The requirements document contains detailed feature descriptions.
For terminology used throughout the project, see the Glossary (section 7).
Since those articles were written, the main change has been to allow JLBH to be installed on an event loop rather than running in its own thread. To do this, use the JLBH.eventLoopHandler method rather than JLBH.start.
| Note | The JLBH harness itself runs on a single thread; benchmarked code may spawn threads, but the harness thread is unique. | 
For example, you can run the benchmark on your own event loop:
```java
EventLoop eventLoop = new MediumEventLoop(null, "el", Pauser.busy(), true, null);
eventLoop.start();

JLBH jlbh = new JLBH(options);
jlbh.eventLoopHandler(eventLoop);
```

This installs JLBH onto the supplied loop instead of starting its own thread.
To run a simple benchmark, disable accounting for coordinated omission as follows:
```java
JLBHOptions options = new JLBHOptions()
        .warmUpIterations(100_000)
        .iterations(1_000_000)
        .throughput(1_000_000)
        .accountForCoordinatedOmission(false) // disable correction
        .jlbhTask(myTask);
new JLBH(options).start();
```

A minimal demonstration is provided in src/test/java/net/openhft/chronicle/jlbh/ExampleJLBHMain.java, showing how to run the harness from the command line.
Configure JLBH using the JLBHOptions builder:
```java
JLBHOptions options = new JLBHOptions()
        .throughput(50_000)
        .iterations(1_000_000)
        .jlbhTask(myTask);
new JLBH(options).start();
```

Commonly used option methods include:
- .throughput(int): set the target number of iterations per time unit
- .iterations(long): set the number of iterations to execute
- .timeout(long): abort the benchmark if no samples are produced for the given number of milliseconds
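A short sketch combining the options above (myTask is a placeholder for your JLBHTask implementation):

```java
JLBHOptions options = new JLBHOptions()
        .throughput(50_000)     // target iterations per time unit
        .iterations(1_000_000)  // measured iterations to execute
        .timeout(10_000)        // abort if no sample arrives for 10 s
        .jlbhTask(myTask);
new JLBH(options).start();
```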
For a full list of configuration parameters see src/main/java/net/openhft/chronicle/jlbh/JLBHOptions.java.
For a visual overview of how a benchmark progresses, see the benchmark lifecycle diagram.
JLBH can optionally track operating-system jitter by running a background thread that records scheduler delays as a separate probe. This is useful when investigating latency spikes caused by kernel activity. The feature is enabled by default but incurs some overhead; disable it via recordOSJitter(false) when you want to avoid the extra monitoring. The feature is listed in the requirements specification around lines 52 and 56 of project-requirements.adoc.
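For example, a minimal sketch of a run with jitter monitoring switched off (myTask again stands in for your JLBHTask):

```java
JLBHOptions options = new JLBHOptions()
        .warmUpIterations(50_000)
        .iterations(500_000)
        .throughput(100_000)
        .recordOSJitter(false) // do not start the background jitter-monitor thread
        .jlbhTask(myTask);
new JLBH(options).start();
```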
JLBH lets you time additional stages of your benchmark using addProbe(String). The returned NanoSampler records its own histogram.
```java
import net.openhft.chronicle.core.util.NanoSampler;

class ProbedTask implements JLBHTask {
    private JLBH jlbh;
    private NanoSampler stage;

    @Override
    public void init(JLBH jlbh) {
        this.jlbh = jlbh;
        stage = jlbh.addProbe("my-stage"); // separate histogram for this stage
    }

    @Override
    public void run(long startTimeNs) {
        long stepStart = System.nanoTime();
        // ... benchmarked stage ...
        stage.sampleNanos(System.nanoTime() - stepStart); // stage latency
        jlbh.sample(System.nanoTime() - startTimeNs);     // end-to-end latency
    }
}
```

For a runnable example see src/test/java/net/openhft/chronicle/jlbh/SimpleOSJitterBenchmark.java.
CPU affinity can be configured using the OpenHFT Affinity library as noted in the requirements document.
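A minimal sketch, assuming the OpenHFT Affinity library (net.openhft:affinity) is on the classpath; AffinityLock.acquireLock() pins the current thread to an available CPU and releases it when closed:

```java
import net.openhft.affinity.AffinityLock;

// Pin the benchmark thread to a reserved CPU for the duration of the run
try (AffinityLock lock = AffinityLock.acquireLock()) {
    new JLBH(options).start();
}
```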
Histogramming of recorded latencies is precise. As specified in the jlbh-requirements, JLBH generates high-resolution histograms of at least 35 bits. This retains fine-grained resolution over a wide dynamic range, so the reported percentiles closely reflect true end-to-end timings.
The project ships with several example tasks and tests located under src/test/java/net/openhft/chronicle/jlbh. They can serve as a starting point when writing your own benchmarks.
| Class | Purpose |
|---|---|
|  | Basic demonstration harness |
|  | Helper fixtures used in deterministic tests |
|  | Example integration test |
|  | Unit test showing result extraction |
|  | Tests percentile distribution logic |
|  | Simple benchmark that does no work |
|  | Tests percentile summaries |
|  | Minimal benchmark example |
|  | Demonstrates OS jitter recording |
|  | Example of serialising results |
The following table summarises the main non-functional quality attributes derived from the Software Requirements Specification.
| Area | Requirement | 
|---|---|
| Performance | Overhead per sample must remain below 100 ns when no additional probes are active. Histogram generation must support ≥200 M iterations without heap pressure. | 
| Reliability | Harness must abort gracefully on interruptions or sample time-outs, and immutable result objects ensure thread-safe publication. | 
| Usability | Fluent builder API with sensible defaults yields a runnable benchmark in ≤10 LOC. ASCII table outputs are human-readable and CI-friendly. | 
| Portability | Pure-Java codebase with runtime-detected JDK optimisations; no native compilation. | 
| Maintainability | ≥80 % unit-test coverage and adherence to Chronicle coding standards validated by SonarCloud. | 
| Security | No executable deserialisation; harness operates in-process. Users remain responsible for securing benchmarked code. | 
- What is JLBH
- What was the motivation for JLBH
- Differences between JMH and JLBH
- Quick start guide
The output of a short JLBH run may look similar to the following:
```
-------------------------------- SUMMARY (end to end) us -------------------------
Percentile   run1         run2         run3      % Variation
50.0:            8.07         8.07         6.10        17.69
90.0:           11.66        11.66         9.71        11.82
99.0:           12.46        12.46        10.51        11.02
worst:          12.56        12.56        10.61        10.93
```
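Latencies in the summary are quoted in microseconds (us). Each runN column shows the percentile values observed in one run, and the % Variation column summarises how much those values differ between runs, which makes unstable results easy to spot.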
- A side by side example using JMH and JLBH for Date serialisation
- Measuring Date serialisation in a microbenchmark
- Measuring Date serialisation as part of a proper application
- How to add a probe to your JLBH benchmark
- Understanding the importance of measuring code in context
Coordinated omission occurs when latency measurements ignore the time requests wait to be serviced, leading to unrealistically low results.
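As a worked illustration (hypothetical numbers): suppose requests are intended to go out every 1 ms and one of them stalls the system for 100 ms. Measuring from the actual send time records a single 100 ms sample and ~99 apparently fast ones, whereas measuring from the intended send time also captures the delay of the ~99 requests queued behind the stall, with latencies stepping down from ~99 ms. Enabling accountForCoordinatedOmission(true) makes JLBH measure from the intended start time.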
- Running JLBH with and without accounting for coordinated omission
- An example illustrating the numerical impact of coordinated omission
- A discussion about flow control
The Affects of Throughput on Latency (note: the blog post deliberately uses "Affects" in its title while discussing the effects of throughput on latency).
- A discussion about the effects of throughput on latency
- How to use JLBH to measure TCP loopback
- Adding probes to test both halves of the TCP round trip
- Watching the effect of increasing throughput on latency
- Understanding that you have to drop throughput to achieve good latencies at high percentiles
- Using JLBH to benchmark QuickFIX
- Observing how QuickFix latencies degrade through the percentiles
- Comparing QuickFIX with Chronicle FIX
- Extract latency percentiles, as demonstrated in net.openhft.chronicle.jlbh.JLBHTest::shouldProvideResultData, for xUnit tests (see the sketch after this list).
- Run them on a production-like CI server.
- When local hardware is adequate, execute them along with other tests.
- Integrate these benchmarks into the TDD cycle to expose latency-related design issues early.
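A minimal sketch of result extraction, assuming the JLBHResultConsumer API exercised by JLBHTest::shouldProvideResultData (the method names below follow that test and should be checked against the current sources):

```java
JLBHResultConsumer resultConsumer = JLBHResultConsumer.newThreadSafeInstance();
new JLBH(options, System.out, resultConsumer).start();

// Once the run completes, pull the end-to-end percentile summary of the last run
JLBHResult.RunResult lastRun = resultConsumer.get().endToEnd().summaryOfLastRun();
System.out.println("p90 = " + lastRun.get90thPercentile());
```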
JLBH performs a warm-up phase before measurements start. A rule of thumb is to warm up for about 30% of the measured iterations; adjust this with warmUpIterations(int) as needed.
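For example, a sketch applying that rule of thumb (myTask is a placeholder):

```java
int iterations = 1_000_000;
JLBHOptions options = new JLBHOptions()
        .iterations(iterations)
        .warmUpIterations(iterations * 3 / 10) // ~30% of the measured iterations
        .throughput(100_000)
        .jlbhTask(myTask);
new JLBH(options).start();
```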
The code is licensed under the terms described in the Apache 2.0 License. Releases are available on GitHub.