Adjust TCP keepalive

Is there a means to adjust the rate at which TCP keepalives are sent? The imp seems to send one about every 20 seconds, which, while probably needed for some NATs, is needlessly fast for my application and adds up to a significant amount of data transfer on a metered, extremely low-volume cellular data plan.

There’s currently no means of adjusting it, but that’s a very good idea. They should be sent after 57s of idle time, then every 15s until replied to. And, sadly, yes: we did indeed find NATs that dropped the imp’s connection unless we did that.
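On a Linux/POSIX stack the equivalent per-socket knobs would look something like the sketch below, using the timings above. This is illustrative only: the imp doesn’t run a POSIX stack, and the probe count is an assumption rather than an imp value.

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Sketch: per-socket TCP keepalive settings on Linux, with the
     * timings quoted above (57s idle, then a probe every 15s). */
    int enable_keepalive(int fd)
    {
        int on     = 1;
        int idle   = 57;  /* seconds of idle before the first probe */
        int intvl  = 15;  /* seconds between unanswered probes */
        int probes = 4;   /* assumed: unanswered probes before giving up */

        if (setsockopt(fd, SOL_SOCKET,  SO_KEEPALIVE,  &on,     sizeof on)     < 0 ||
            setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,   sizeof idle)   < 0 ||
            setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl,  sizeof intvl)  < 0 ||
            setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &probes, sizeof probes) < 0)
            return -1;
        return 0;
    }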

Peter

Ah, I see the 57s timer now.

I’m also seeing the Imp server sending TCP keepalives, sometimes right after the imp has sent one; that’s what I initially read as a 20-30s gap between keepalives. A keepalive in one direction generates bidirectional activity because of the ACK, which I would think would keep most NAT sessions open. Did you find it necessary to initiate keepalives in both directions to cope with some (arguably broken) NATs?

Yes, to allow the server to know when the client has gone away in an unclean fashion (eg power yanked).

I’m not sure of the server period off the top of my head, but it should indeed be set slightly above the client period, so that the client’s keepalive packets satisfy the server’s default keepalive requirements.

I’m definitely seeing them going both ways. The server->client period does appear to be longer (~75s), but it’s sending them regardless of whether it has received one from the client recently.
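For what it’s worth, 75s is the stock Linux default for net.ipv4.tcp_keepalive_intvl, so my guess (and it is only a guess) is that the server side is running a near-stock Linux keepalive configuration.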

I really need to get this adjusted if at all possible. It adds up to something like 30MB/mo of transfer per device if you just let it sit there and do its thing. Disconnecting (or deepsleeping) removes the keepalives of course, but it also sacrifices interactivity, and the overhead on reconnection seems quite high.
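Rough arithmetic, with packet sizes that are my own estimate: a bare keepalive probe plus its ACK is about 2 × 60B at the IP layer, and with both ends probing there’s an exchange roughly every 20s, i.e. ~6B/s, or ~15MB over a 30-day month before any cellular framing; add the carrier’s per-packet overhead and accounting granularity and ~30MB/mo is easy to reach.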

I don’t believe this will be something we’ll have user-settable. There’s too much risk of people breaking things here (eg they turn it up high and then we get blamed for devices falling offline).

If the server is still sending keepalives even though it’s seen keepalives from the client end, then that’s a Linux TCP stack issue and unlikely to be something we want to fiddle with.

I don’t believe this will be something we’ll have user-settable.

Agreed, it’s not as if the Squirrel code is in a position to know the right setting – it’s not a property of the model, it’s a (possibly time-varying) property of the individual connection. Unfortunately, the imp has no way of determining the right setting other than by observing whether connections get dropped, i.e. any sort of auto-detection of the interval would itself introduce connection drops whenever the required interval really is short.
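To make that concrete: the only signal available to an auto-tuner is whether the connection survived an idle period, so probing for a longer interval risks exactly the failure it’s trying to avoid. A hypothetical search might look like this (connection_survived_idle() is a made-up stand-in for “stay silent for N seconds and see whether the connection is still up”, not anything in the imp firmware):

    #include <stdbool.h>

    /* Hypothetical helper: stay silent for idle_seconds, then report
     * whether the connection (i.e. the NAT mapping) survived. */
    bool connection_survived_idle(int idle_seconds);

    /* Binary search for the longest keepalive interval a given NAT will
     * tolerate. Every probe above the NAT's real timeout is, by
     * definition, a dropped connection plus an expensive cellular
     * reconnect, which is exactly the cost being objected to. */
    int find_max_keepalive_interval(int lo, int hi)
    {
        while (lo < hi) {
            int mid = lo + (hi - lo + 1) / 2;  /* round up so the search makes progress */
            if (connection_survived_idle(mid))
                lo = mid;      /* mapping held: a longer interval may work */
            else
                hi = mid - 1;  /* mapping dropped: this probe cost us a drop */
        }
        return lo;
    }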

Peter