Memory use in blob operations

I was looking into optimising the memory use of some code that runs on 001 & 002 HW (hence quite RAM limited) and wanted to check some coding good practices used in the eImp libraries. In the SpiFlash lib, I found this code snippet, which made me wonder how temporary memory allocation is done internally:

```
// spiflash.read(integer, integer) - Copies data from the SPI flash and returns it as a series of bytes.
function read(addr, bytes) {
    // Throw error if disabled
    if (!_enabled) throw SPI_NOT_ENABLED;

    _spi_w(format(SPIFLASH_READ, (addr >> 16) & 0xFF, (addr >> 8) & 0xFF, addr & 0xFF));
    local readBlob = _spi.readblob(bytes);

    return readBlob;
}

// spiflash.readintoblob(integer, blob, integer) - Copies data from the SPI flash storage into a pre-existing blob.
function readintoblob(addr, data, bytes) {
    data.writeblob(read(addr, bytes));
}
```

Question: the way readintoblob() is implemented is different from what I would expect. I would have expected an almost complete copy of the read() code, but with `data.writeblob(_spi.readblob(bytes))` instead of `local readBlob = _spi.readblob(bytes);`. Would that be a more optimal use of RAM, because there's one less temp blob needed, or doesn't that make a difference (because, for instance, the writeblob() function allocates the same temporary memory under the hood)? Especially when large chunks of data are read from the EEPROM, allocating one extra temp blob of e.g. 5K bytes can make a serious difference.

Either way, a new blob is created and then written into another blob; _spi.readblob() has no facility to read data into an existing blob, so it always creates a new object even if it's called "in-line".
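To make that concrete, here is a sketch of the hypothetical in-line variant from the question (not the library's actual code); the comments explain why its peak RAM use is the same as the shipped version:

```
// Hypothetical alternative readintoblob() with no named local blob.
function readintoblob(addr, data, bytes) {
    if (!_enabled) throw SPI_NOT_ENABLED;
    _spi_w(format(SPIFLASH_READ, (addr >> 16) & 0xFF, (addr >> 8) & 0xFF, addr & 0xFF));

    // _spi.readblob() still allocates a fresh 'bytes'-sized blob here,
    // even though it is never bound to a local variable. During the
    // writeblob() call the temporary blob and the destination blob
    // exist simultaneously, exactly as in the library version; only
    // the point at which the temporary becomes garbage differs.
    data.writeblob(_spi.readblob(bytes));
}
```

So dropping the local variable saves a slot on the stack, not a blob's worth of RAM.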

In order to lower the demand for RAM, would it make sense to partition the reading of large blocks of data from an EEPROM, to avoid having to temporarily allocate big blocks of temp data (e.g. split reading a 4K block into 16x256 bytes)? There would probably be a performance hit, but it could avoid out-of-memory crashes during large EEPROM reads (the occurrence of which triggered this optimisation exercise). What's your view?
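Something like this sketch, building on the library's existing read() (the function name and the 256-byte chunk size are my own choices, not part of the library):

```
// Sketch: read a large region in fixed-size chunks so that only one
// small temporary blob exists at any moment (256 bytes assumed here).
const SPIFLASH_READ_CHUNK = 256;

function readintoblob_chunked(addr, data, bytes) {
    local remaining = bytes;
    local offset = 0;
    while (remaining > 0) {
        local n = (remaining < SPIFLASH_READ_CHUNK) ? remaining : SPIFLASH_READ_CHUNK;
        // Each read() allocates only an n-byte temporary blob, which
        // becomes garbage as soon as writeblob() has copied it out.
        data.writeblob(read(addr + offset, n));
        offset += n;
        remaining -= n;
    }
}
```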

Depends how tight on RAM you are, and what performance criteria you're trying to hit, I guess?

How much free memory do you have?

Around 10-15K, which poses a problem if big chunks of memory are allocated.
That will be a thing of the past when we move to the IMP005 over time, but it's problematic now. Performance isn't important: maybe one R/W operation every few minutes. Avoiding out-of-memory restarts is the key here.
