Description
When opening a large heap dump (~6.9 GB) with HprofHeapGraph.openHeapGraph(), Shark crashes with a NegativeArraySizeException in UnsortedByteEntries.append. This appears to be an int overflow in the internal index builder when the heap dump contains a very large number of objects.
Stack trace
java.lang.NegativeArraySizeException: -1545189232
at shark.internal.UnsortedByteEntries.append(UnsortedByteEntries.kt:32)
at shark.internal.HprofInMemoryIndex$Builder.onHprofRecord(HprofInMemoryIndex.kt:592)
at shark.StreamingHprofReader.readRecords(StreamingHprofReader.kt:246)
at shark.internal.HprofInMemoryIndex$Companion.indexHprof(HprofInMemoryIndex.kt:777)
at shark.HprofIndex$Companion.indexRecordsOf(HprofIndex.kt:34)
at shark.HprofHeapGraph$Companion.openHeapGraph(HprofHeapGraph.kt:428)
Environment
- Shark version: 2.14 (com.squareup.leakcanary:shark-graph:2.14)
- JDK: JBR 21.0.8 (64-bit)
- Heap dump: ~6.9 GB, standard JVM hprof (JAVA PROFILE 1.0.2), from IntelliJ-based IDE
- OS: Linux x86_64
Analysis
The negative array size -1545189232 suggests an int overflow when computing a buffer/array size in UnsortedByteEntries.append. Smaller heap dumps (up to ~4.2 GB / 54M objects) work fine. This specific dump is ~6.9 GB and likely exceeds an int-sized capacity calculation.
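To illustrate the class of failure (this is not Shark's actual internal code, and the entry count and per-entry byte width below are made-up numbers chosen only to show the arithmetic): multiplying an object count by a per-entry byte size in Int arithmetic silently wraps negative once the product exceeds 2^31 - 1, and that negative value then reaches a ByteArray allocation.

```kotlin
fun main() {
    // Hypothetical numbers: ~100M objects at ~28 bytes of index data each.
    val entryCount = 100_000_000
    val bytesPerEntry = 28
    // Int multiplication wraps past 2^31 - 1 instead of failing loudly.
    val size = entryCount * bytesPerEntry
    println(size) // -1494967296: same class of value as the reported -1545189232
    // ByteArray(size) would throw NegativeArraySizeException here.

    // Doing the same computation in Long stays correct.
    val fixed = entryCount.toLong() * bytesPerEntry
    println(fixed) // 2800000000
}
```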
Reproducing
Any sufficiently large hprof file (likely >4GB or with enough records to overflow an int counter) should trigger this. The simplest reproduction:
import java.io.File
import shark.HprofHeapGraph.Companion.openHeapGraph

val file = File("large-heap-dump.hprof")
file.openHeapGraph().use { graph ->
  // crashes in index building before reaching this point
}
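Until this is fixed upstream, a caller-side guard can at least fail with a clear message instead of the opaque NegativeArraySizeException. This is a sketch only: `canIndexSafely` and its 4 GB threshold are assumptions based on the observation above that dumps up to ~4.2 GB indexed successfully, not a documented Shark limit.

```kotlin
import java.io.File

// Hypothetical pre-flight check (not part of Shark's API): refuse dumps
// large enough that the in-memory index is likely to overflow an Int.
// The threshold is a guess derived from the sizes reported in this issue.
fun canIndexSafely(file: File, maxBytes: Long = 4L * 1024 * 1024 * 1024): Boolean =
    file.length() <= maxBytes

fun main() {
    val file = File("large-heap-dump.hprof")
    require(canIndexSafely(file)) {
        "Heap dump is ${file.length()} bytes; dumps this large currently " +
            "overflow Shark's index capacity calculation (see this issue)."
    }
}
```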