Limit size of an agent array

I want to store data samples in an array on the agent, which I pull to a server every so often. I clear the array after pulling the data. To avoid running out of agent memory, I want to cast off the oldest data points when the array reaches a certain size, say 20,000 points. If I use “if (array.len() > 20000) array.remove(0)”, will that free up a data location and limit the agent’s memory use? Also, I use “array.push(val)” to add to the array. I assume it’s pushed onto the end of the array.

Yes, that should work. Push adds to the end, and remove(0) will remove the first element.
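In sketch form (the array name, the cap, and addSample() are placeholders for your own code):

const MAX_SAMPLES = 20000;
::samples <- [];

function addSample(val) {
    // Drop the oldest sample once the cap is reached, then append the new one
    if (::samples.len() >= MAX_SAMPLES) ::samples.remove(0);
    ::samples.push(val);
}

Bear in mind remove(0) shifts every remaining element down one slot, so it’s O(n) per call; at this size and a modest sample rate that shouldn’t matter.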

I have run out of memory. I don’t know if it’s the imp or the agent that has run out. I suspect the agent, since it’s the one storing data.
I did add the line:
“if (::pressureHistory.len() > 20000) ::pressureHistory.remove(0);” to prevent overflow. Was 20,000 too big?
Is there a way to halt the agent and dissect the memory use?

You can print the free memory with imp.getmemoryfree(). I don’t know how big your pressureHistory entries are, but the agent has a 1MB memory limit.
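For example, you can log the free memory before and after pushing a batch of samples to estimate the per-sample cost (a rough sketch; the table contents are placeholders):

local before = imp.getmemoryfree();
for (local i = 0; i < 1000; i++) {
    ::pressureHistory.push({ height = 0.0, timestamp = time() });
}
local after = imp.getmemoryfree();
server.log(format("~%d bytes per sample", (before - after) / 1000));

The same call works in device code, so you can check the imp side the same way.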

I’ll try free mem on both the imp and the agent.

20K points was way too many. I can hold 5K or so before running out. That’s about a day’s worth. I can pull the data over daily. Not what I wanted, but it’ll work fine.

One thing, though: I assume getmemoryfree() returns the number of bytes. I calculate 240 bytes/sample in my case. That seems like a lot. I’m storing a Unix timestamp and a float, which should be 32 + 32 bits, i.e. 8 bytes. Even with data-structure overhead, it’s a big jump to 240.

Yeah, that does seem a bit much, though it’s an array of objects, so there will be per-object overhead too, which could account for some of this (every object carries overhead for allocation, reference counting, etc.).

If you store as two arrays, one for timestamp and one for float, how does that work out?
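Something like this, as a sketch (the names and the cap are placeholders):

const MAX_SAMPLES = 5000;
::timestamps <- [];
::heights <- [];

function addSample(t, h) {
    if (::timestamps.len() >= MAX_SAMPLES) {
        // Trim both arrays so they stay aligned index-for-index
        ::timestamps.remove(0);
        ::heights.remove(0);
    }
    ::timestamps.push(t);
    ::heights.push(h);
}

Arrays of plain integers and floats carry much less per-entry overhead than one table per sample, and http.jsonencode() handles arrays directly.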

I’m using http.jsonencode() in the agent to send the history table to my server.
This line:
"::pressureHistory.push( { height=z.height timestamp=z.timestamp } ) ;" is probably why 240 bytes are used per data entry. I think a new table with a new hash is created with each call.
To reduce the table overhead, I created a class to define the data structure once:
class heightData {
    constructor(h, t) {
        timestamp = t;
        height = h;
    }

    height = null;
    timestamp = null;
}
Then I create new instances and push them into the history:
::pressureHistory.push(heightData(z.height, z.timestamp));
This reduces the bytes/record to 28. Much better…
but…
jsonencode() doesn’t handle the class entries as a table.
Is there a class method I can define to help jsonencode() know how to encode the data?
If not, I’ll try two tables, send each over, then align them on the server.

Maybe I should be using http.base64encode()?

That won’t help. I use a table defined like a JSON object, and that works with http.jsonencode(). I wonder if the issue is that a class is NOT serialisable.

The docs hint at this: “Conversely, the following Squirrel items are not serialisable…” — classes and class instances are on that list.

// This works for me
local vars = {
    "p1_op": "" + _p1_op + "",
    "v1_op": "" + _v1_op + "",
    "v2_op": "" + _v2_op + "",
    "rssi": "" + _rssi + "",
    "vdd": "" + _vdd + "",
    "bssid": "" + _bssid + "",
    "mac": "" + _mac + ""
};
local jvars = http.jsonencode(vars);
res.send(200, jvars);

No, I’m afraid there aren’t “JSON helpers” of the kind you want.

The likely solution is to encode the JSON yourself when you want to send it, by iterating over the array of class instances and building a string. The issue you may run into is how big that string gets :)
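A minimal sketch of that, assuming the heightData class and ::pressureHistory array from above:

function encodeHistory() {
    local json = "[";
    foreach (i, rec in ::pressureHistory) {
        if (i > 0) json += ",";
        // Build each record by hand, since jsonencode() won't touch class instances
        json += format("{\"height\":%f,\"timestamp\":%d}", rec.height, rec.timestamp);
    }
    return json + "]";
}

Alternatively, copy each instance into a plain table just before encoding — tables serialise fine — at the cost of briefly holding two copies of the history in memory.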