Is there a maximum limit on sending data in a single transaction from device to agent?

I’m sending my sensor data from the device to the agent. I can send 1600 samples (each sample is 6 bytes), but as soon as I increase the sample count, say to 2000, the device just hangs and the server log says the device disconnected.
I’m using ~42% of memory for device code.
I’ve put imp.getmemoryfree() before making the call to the agent.

For 1600 samples, free memory is 53608 bytes.
For 2000 samples, free memory is 51208 bytes.
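The check itself is just a log line immediately before the send, roughly like this (the message name "sample_data" is whatever your agent handler listens for):

```squirrel
// Log free memory immediately before handing the data to the agent
server.log("Free memory before send: " + imp.getmemoryfree() + " bytes");
agent.send("sample_data", sample_data);
```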

I’ve been having this issue for quite a long time now. Earlier the cap was 2600 samples, back when my code size was under 30% (I don’t remember the exact number).
We are looking for a sample count of ~10000 in our final product, so we definitely need some help here.

Moreover, I’d like to know:
-> What is the WiFi buffer size?
-> How much RAM does the imp have?

Can you give a bit more info on what format your sensor data is in? Is it a blob (in which case 2000*6 bytes will be fine) or a table or…? The data has to be marshalled to be sent, which is very low overhead for a blob but higher for a table.

Right now the server only accepts ~16kB in a single packet, I believe, but that limit has recently been raised (though I don’t think the change has been deployed yet).

The imp001/002 has 128kB of RAM (the imp003 has 192kB), of which you get about 82kB (imp003: over 130kB). Data sent upstream is marshalled on the fly, but it still takes buffer space (for the data, the encryption, and then the TCP buffers), so transient usage during sends is higher.

The TCP transmit buffers are 3kB by default, but this can be resized for higher send throughput, see
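A minimal sketch of the resize, using imp.setsendbuffersize() from the imp API. Since the buffer can only be grown (not shrunk back), it makes sense to set it once, up-front:

```squirrel
// Enlarge the TCP send buffer once, at start-of-day, before any big uploads.
// imp.setsendbuffersize() takes the new size in bytes.
imp.setsendbuffersize(12 * 1024);  // e.g. room for an ~11kB serialized payload
```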

My upload data looks like this:
local sample_data = {}
sample_data.mote_device_id <- mote_device_id
sample_data.mote_mac_address <- mote_mac_address
sample_data.channel_id <- channel_id
sample_data.time_stamp <- time_stamp
sample_data.num_samples <- num_samples
sample_data.samples <- val
sample_data.fifo_overflow_error <- fifo_overflow_error
sample_data.voltage <- voltage
sample_data.rssi <- rssi
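For context, the samples field (val) is filled with fixed-width blob writes, something like the sketch below. The 3×16-bit-per-sample layout and the readings array are just an illustration of how 6 bytes per sample could be packed, not my exact field layout:

```squirrel
// Hypothetical packing: each 6-byte sample stored as three signed 16-bit
// values (e.g. x/y/z axes). 's' is blob.writen()'s signed 16-bit type code.
local num_samples = 1600;
local val = blob(num_samples * 6);
for (local i = 0; i < num_samples; i++) {
    val.writen(readings[i].x, 's');
    val.writen(readings[i].y, 's');
    val.writen(readings[i].z, 's');
}
```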

Here val is a blob which has all my samples.
How do I calculate the serialized size of the table that is transmitted?

With the TCP transmit buffer size at 3kB, does that mean the data sent in one packet is 3kB?
In the case of 1600 samples, I’ll be sending 1600*6 bytes of sample data + the rest of the data in the table + extra bytes for serialization, which comes to around 10-11 kB. So is this data broken into packets of 3kB and sent?
Meanwhile, let me try increasing the buffer size and do some testing.

Is it a good approach to increase the buffer size just before sending the data and reduce it back to the default once sent?
Or should it be set at the beginning and kept the same throughout the run?

The default TCP output window is 3KB. The packet size is smaller: 576 bytes TCP segment size, so packets of 600 bytes or so.

If you send 11KB in one go, it will be split into 576-byte segments and sent. After (by default) 3KB, the output window will fill up, and the write operation will stall until the data starts getting ACKed by the other end, freeing up space in the output window for further 576-byte segments. But as long as at least (11-3=)8KB of it gets ACKed before the overall timeout (30s), no error will occur.
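Spelling that arithmetic out, using the numbers from the explanation above (11KB payload, 3KB window, 576-byte segments):

```squirrel
// Rough arithmetic for an 11KB send with default settings
local payload = 11 * 1024;   // bytes queued by the write
local window  = 3 * 1024;    // default TCP output window
local segment = 576;         // TCP maximum segment size

local segments = (payload + segment - 1) / segment;  // integer ceiling
server.log("Sent as " + segments + " segments of up to " + segment + " bytes");
// The first ~3KB fills the window immediately; the rest waits on ACKs.
server.log((payload - window) + " bytes are gated on ACKs from the far end");
```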

As documented under, you currently can’t reduce the output window size, only increase it. So if you need this facility, set it up-front to the biggest you’ll ever need.