Hornet SSL Oddity
The Consumer and Producer programs are adapted from the ping example that ships with Hornet -- they use the key and trust stores used by Datomic.
You can run the consumer and producer as follows:
git clone git://github.com/Datomic/datomic-java-examples.git
# from two different shells
mvn exec:java -Dexec.mainClass=hornet.samples.PingConsumer
mvn exec:java -Dexec.mainClass=hornet.samples.PingProducer
If you run the consumer first, and then the producer, they will attempt to loop forever, with the consumer consuming messages from the producer. In my tests, this loop fails in under a minute, after fewer than a few hundred pings.
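For orientation, here is a rough sketch of what the producer side of such a ping loop looks like against the HornetQ core client API. This is not the actual code in this repository: the host, port, queue name, and store paths are placeholders, and the consumer side mirrors it with createConsumer/receive.

import java.util.HashMap;
import java.util.Map;

import org.hornetq.api.core.TransportConfiguration;
import org.hornetq.api.core.client.ClientMessage;
import org.hornetq.api.core.client.ClientProducer;
import org.hornetq.api.core.client.ClientSession;
import org.hornetq.api.core.client.ClientSessionFactory;
import org.hornetq.api.core.client.HornetQClient;
import org.hornetq.api.core.client.ServerLocator;
import org.hornetq.core.remoting.impl.netty.NettyConnectorFactory;

public class PingProducerSketch {
    public static void main(String[] args) throws Exception {
        // Netty transport configuration; the SSL settings are the part this write-up is about
        Map<String, Object> m = new HashMap<String, Object>();
        m.put("host", "localhost");                    // placeholder
        m.put("port", 5445);                           // placeholder
        m.put("ssl-enabled", true);
        m.put("key-store-path", "keystore.jks");       // placeholder
        m.put("key-store-password", "secret");         // placeholder
        m.put("trust-store-path", "truststore.jks");   // placeholder
        m.put("trust-store-password", "secret");       // placeholder

        TransportConfiguration tc =
            new TransportConfiguration(NettyConnectorFactory.class.getName(), m);
        ServerLocator locator = HornetQClient.createServerLocatorWithoutHA(tc);
        ClientSessionFactory factory = locator.createSessionFactory();
        ClientSession session = factory.createSession();
        ClientProducer producer = session.createProducer("ping.queue"); // placeholder queue

        long n = 0;
        while (true) {                                 // loop forever, sending pings
            ClientMessage msg = session.createMessage(false);
            msg.getBodyBuffer().writeString("ping " + n++);
            producer.send(msg);
        }
    }
}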
Two errors jump out at me from the logs:
2012-12-02 09:33:02.630 DEBUG default o.jboss.netty.handler.ssl.SslHandler - Failed to clean up SSLEngine.
javax.net.ssl.SSLException: Inbound closed before receiving peer's close_notify: possible truncation attack?
2012-12-02 09:33:02.645 DEBUG default o.jboss.netty.handler.ssl.SslHandler - Swallowing an exception raised while writing non-app data
java.nio.channels.ClosedChannelException: null
These errors suggest some kind of lifecycle bug. Interestingly, the errors disappear if you run the same test without SSL:
If you run the same Consumer and Producer above, but change each file's m.put("ssl-enabled", true); to false, the example no longer fails. The attached log shows a run of more than 10K pings, two orders of magnitude more than the number of pings needed to make the SSL example above fall over.
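In other words, the only change is to flip the SSL flag in the transport configuration map of both programs:

// in both PingConsumer and PingProducer
m.put("ssl-enabled", false);   // was: m.put("ssl-enabled", true);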
If I return to the SSL-enabled code and turn on detailed SSL logging with -Djavax.net.debug=all, the problem seems to go away. This furthers my impression that there is some kind of timing problem around closing SSL sessions, and that the additional overhead of detailed SSL logging is enough to mask the problem.
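For example, since exec:java runs the main class inside the Maven JVM by default, the JSSE debug property can be passed on the mvn command line:

# consumer with detailed SSL logging enabled
mvn exec:java -Dexec.mainClass=hornet.samples.PingConsumer -Djavax.net.debug=all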
This question on StackOverflow might be related. The poster sees the same ClosedChannelException, but only under heavy load. No resolution. :-(