[JDEV] Detecting client/server disconnect?

Oliver Jones oj at world.std.com
Tue Apr 10 09:17:20 CDT 2001


At 10:30 PM 4/6/01 -0700, Robert Temple wrote:
>This has been a minor issue for us.

It was minor for us too, until we switched over to handling our Jabber 
connections through a load-balancer and it suddenly turned into a major 
issue.  That's because the load-balancer is aggressive about cleaning up 
idle connections.

>  People think they are connected
>or they think someone else is connected but really their socket connection
>was severed and the client and/or the server don't know about it.  It
>sure would be nice if this was fixed in the protocol.  I'm not sure how
>something like this would be backwards compatible...

The server-side keepalive I implemented does seem to be backwards 
compatible.  I believe the Winjab client-side keepalive is backwards 
compatible -- hey, Winjab works fine!
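
For anyone who wants to do likewise: a keepalive like this can be as
simple as dribbling a single whitespace character down the XML stream
every minute or so.  XML parsers ignore whitespace between stanzas, so
a peer that knows nothing about the keepalive just discards it --
which is exactly why the trick is backwards compatible.  Here's a
rough sketch in Python (the function name and interval are my
illustration, not actual Winjab or server code):

    import threading

    def start_keepalive(sock, interval=60.0):
        """Send a single space on the XML stream every `interval`
        seconds.  The peer's XML parser ignores it; a failed send
        means the connection is dead."""
        stop = threading.Event()

        def loop():
            while not stop.wait(interval):
                try:
                    sock.sendall(b" ")   # ignored by the peer's parser
                except OSError:          # send failed: peer is gone
                    stop.set()

        threading.Thread(target=loop, daemon=True).start()
        return stop                      # caller calls stop.set() to quit

The same idea works server-side: send the space from the server, and
treat a failed send (or a long silence from the client) as a dead
connection to be cleaned up.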

>  Is that important at this stage?

I believe this issue is tremendously important at this stage for the 
following reasons:

(1) Many corporate gateways (e.g. the IP masquerading stuff in Linux,
and SOCKS proxy servers) time out idle TCP flows after a few minutes.

(2) A scarce resource in any highly scaled-up Jabber implementation is
sockets on the server.  Even if you get up to 20,000 connections on a
single box, this amounts to $0.15 per connection if you pay $3000 for
the box (a typical price for a dual-processor 800MHz no-name Linux
rackmount with plenty of memory and spindle space).  You want to scale
to hundreds of thousands of users?  You can't waste connections.

With keepalive defined in the session-layer architecture, the server
implementation can scavenge idle sockets and re-use them by
disconnecting connections that haven't been heard from recently.  You
may be able to get TCP keepalive to do some of this; that's fine.
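
To make that concrete, here's a rough sketch of both halves in Python
-- an application-level scavenger keyed off last-activity times, plus
the setsockopt calls that turn on kernel TCP keepalive.  All the names
and timeouts here are mine, purely for illustration:

    import socket
    import time

    IDLE_LIMIT = 180.0        # seconds of silence before we scavenge

    last_heard = {}           # socket -> time we last read from it

    def note_activity(sock):
        last_heard[sock] = time.monotonic()

    def scavenge_idle():
        # Disconnect sockets that haven't been heard from recently,
        # freeing them up for new sessions.
        now = time.monotonic()
        for sock, t in list(last_heard.items()):
            if now - t > IDLE_LIMIT:
                sock.close()
                del last_heard[sock]

    def enable_tcp_keepalive(sock):
        # SO_KEEPALIVE is portable; the timing knobs below are
        # Linux-specific.  Without them the kernel defaults are
        # measured in hours -- too slow to scavenge anything.
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
        if hasattr(socket, "TCP_KEEPIDLE"):
            sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 120)
            sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 30)
            sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 4)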

This suggestion comes out of experience trying to scale up Jabber (using 
jpolld).

Ollie Jones