[SNMP4J] DefaultTcpTransportMapping: socket disconnect causes timeouts for all

Ladd ladd at codemettle.com
Mon Jun 2 19:37:49 CEST 2014


I've got an SNMP manager built with v2.3.0, and I'm seeing identical
behavior with v2.1.0, on Java 1.6.  There's one polling engine polling two
agents, and I'm talking to both over TCP with DefaultTcpTransportMapping.
Everything's running on Windows on the same LAN, and I see the same behavior
with SNMPv1 and v3.
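
For reference, the shared session is set up essentially like this (a
simplified sketch; error handling omitted):

  import org.snmp4j.Snmp;
  import org.snmp4j.TransportMapping;
  import org.snmp4j.smi.TcpAddress;
  import org.snmp4j.transport.DefaultTcpTransportMapping;

  // one TCP transport and one Snmp session, shared by both pollers
  TransportMapping<TcpAddress> transport = new DefaultTcpTransportMapping();
  Snmp snmp = new Snmp(transport);
  snmp.listen(); // starts the transport's receive thread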

Both agents are identical: they're sent the same polls at the same rate and
return the same data.  I'm polling one column of a table every 2000ms, and
the normal response time is around 500ms.  The timeout is 5000ms.
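
Each agent has its own target, configured along these lines (the community
string and address here are placeholders):

  import org.snmp4j.CommunityTarget;
  import org.snmp4j.mp.SnmpConstants;
  import org.snmp4j.smi.GenericAddress;
  import org.snmp4j.smi.OctetString;

  // one target per agent, with the 5000ms timeout described above
  CommunityTarget target = new CommunityTarget();
  target.setCommunity(new OctetString("public"));              // placeholder
  target.setAddress(GenericAddress.parse("tcp:10.0.0.1/161")); // placeholder
  target.setVersion(SnmpConstants.version1);
  target.setTimeout(5000);
  target.setRetries(0);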

Everything runs great until one of the agents is stopped.  Depending on how
it's stopped, I see one of these two exceptions:

  [o.s.t.DefaultTcpTransportMapping] - java.io.IOException: An existing
  connection was forcibly closed by the remote host

  [o.s.t.DefaultTcpTransportMapping] - java.io.IOException: Connection
  reset by peer

The problem I'm having is that communication with the other, still-running
agent then starts timing out as well.

Both pollers share the same Snmp and DefaultTcpTransportMapping instance,
and the same TableUtils instance is used to run getTable synchronously.
Each agent is polled from its own thread.
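
Each polling thread runs essentially this against the shared TableUtils
(the column OID is just a placeholder):

  import java.util.List;
  import org.snmp4j.smi.OID;
  import org.snmp4j.util.DefaultPDUFactory;
  import org.snmp4j.util.TableEvent;
  import org.snmp4j.util.TableUtils;

  // shared by both polling threads
  TableUtils tableUtils = new TableUtils(snmp, new DefaultPDUFactory());

  // synchronous table retrieval, repeated every 2000ms per agent
  List<TableEvent> rows = tableUtils.getTable(target,
      new OID[] { new OID("1.3.6.1.2.1.2.2.1.10") }, // placeholder column
      null, null);                                   // no index bounds
  for (TableEvent row : rows) {
      if (row.isError()) {
          // this is where the timeouts show up for the healthy agent
          System.err.println(row.getErrorMessage());
      }
  }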

Any advice on how I can isolate the root cause of the timeouts against the
still-running agent?

Thanks!
