NegativeArraySizeException in HprofInMemoryIndex for large heap dumps (~7GB) #2807

@hurricup

Description

When opening a large heap dump (~6.9 GB) with HprofHeapGraph.openHeapGraph(), Shark crashes with a NegativeArraySizeException in UnsortedByteEntries.append. This appears to be an int overflow in the internal index builder when the heap dump contains a very large number of objects.

Stack trace

java.lang.NegativeArraySizeException: -1545189232
	at shark.internal.UnsortedByteEntries.append(UnsortedByteEntries.kt:32)
	at shark.internal.HprofInMemoryIndex$Builder.onHprofRecord(HprofInMemoryIndex.kt:592)
	at shark.StreamingHprofReader.readRecords(StreamingHprofReader.kt:246)
	at shark.internal.HprofInMemoryIndex$Companion.indexHprof(HprofInMemoryIndex.kt:777)
	at shark.HprofIndex$Companion.indexRecordsOf(HprofIndex.kt:34)
	at shark.HprofHeapGraph$Companion.openHeapGraph(HprofHeapGraph.kt:428)

Environment

  • Shark version: 2.14 (com.squareup.leakcanary:shark-graph:2.14)
  • JDK: JBR 21.0.8 (64-bit)
  • Heap dump: ~6.9 GB, standard JVM hprof (JAVA PROFILE 1.0.2), from IntelliJ-based IDE
  • OS: Linux x86_64

Analysis

The negative array size -1545189232 suggests an int overflow when computing a buffer/array size in UnsortedByteEntries.append. Smaller heap dumps (up to ~4.2 GB / 54M objects) work fine. This specific dump is ~6.9 GB and likely exceeds an int-sized capacity calculation.
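The reported size is numerically consistent with a doubling growth strategy overflowing `Int`: 1,374,889,032 × 2 wraps to exactly -1,545,189,232. A minimal sketch of the arithmetic (the doubling growth factor and the capacity value are assumptions for illustration, not confirmed from `UnsortedByteEntries`' source):

```kotlin
fun main() {
    // Hypothetical: assume the backing ByteArray doubles its capacity when full.
    val currentCapacity = 1_374_889_032 // bytes currently allocated (assumed value)
    val doubled = currentCapacity * 2   // exceeds Int.MAX_VALUE (2_147_483_647)
    println(doubled)                    // wraps around to -1_545_189_232
    // ByteArray(doubled) would then throw NegativeArraySizeException
}
```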

Reproducing

Any sufficiently large hprof file (likely >4 GB, or with enough object records to push the index's byte buffer past Int.MAX_VALUE) should trigger this. The simplest reproduction:

```kotlin
import java.io.File
import shark.HprofHeapGraph.Companion.openHeapGraph

val file = File("large-heap-dump.hprof")
file.openHeapGraph().use { graph ->
    // NegativeArraySizeException is thrown during indexing, before this block runs
}
```
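A possible direction for a fix is to compute the grown capacity in `Long` and clamp it to the JVM's array size limit. This is only a sketch of the idea, not Shark's actual code; the function and parameter names here are hypothetical:

```kotlin
// Sketch of an overflow-safe growth strategy for a byte-backed entry store.
// Names are hypothetical; UnsortedByteEntries' internals may differ.
fun grownCapacity(currentCapacity: Int, bytesPerEntry: Int): Int {
    val doubled = currentCapacity.toLong() * 2 // compute in Long to avoid Int overflow
    val maxArraySize = Int.MAX_VALUE - 8       // conservative JVM array size limit
    if (doubled > maxArraySize) {
        // Cannot grow further within a single ByteArray; fail with a clear
        // message instead of a NegativeArraySizeException deep in append().
        require(currentCapacity < maxArraySize) { "index exceeds max array size" }
        // Round down so the capacity stays a whole number of entries.
        return maxArraySize - (maxArraySize % bytesPerEntry)
    }
    return doubled.toInt()
}
```

Even with this clamp, dumps whose index genuinely needs more than ~2 GB would require a segmented or `Long`-indexed backing store to open at all.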
