How does buffering between imp and agent work?

Consider a simple bridge where an application pushes packets of bytes to the agent, the agent pushes the packets to the imp, and the imp pushes them out a hardware port (e.g. UART or SPI). The application may push packets at a faster rate than the imp can push them out the hardware port, necessitating backward flow control. Assume the packets are contained in HTTP requests from the application to the agent and in serialized blobs sent from the agent to the imp.

In order to understand how best to implement the flow control, it is necessary to understand how incoming packets are buffered on both the agent and the imp.

In the simple case, the http.onrequest handler in the agent would convert the HTTP data into a blob and send it to the imp via device.send. The imp would receive the blob via agent.on and, in the code executed by the agent.on handler, write the data out the hardware port.
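Concretely, that simple case would look something like this (a sketch only; the “packet” message name, the UART choice and the baud rate are all illustrative):

```
// ---- Agent code: forward each HTTP request body to the device ----
http.onrequest(function(request, response) {
    local payload = blob(request.body.len());
    payload.writestring(request.body);     // copy the raw body into a blob
    device.send("packet", payload);        // hand the packet to the imp
    response.send(200, "OK");
});

// ---- Device code: write each received blob out the hardware port ----
hardware.uart57.configure(115200, 8, PARITY_NONE, 1, NO_CTSRTS);

agent.on("packet", function(payload) {
    // write() returns once the data has been handed to the UART, so a
    // slow port stalls this handler and anything queued behind it
    hardware.uart57.write(payload);
});
```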

The process of outputting the data to the hardware port could be slower than the rate of incoming packets. My question is about where your system buffers or queues data in this scenario:

  1. Is there a queue of incoming HTTP requests ahead of the agent’s http.onrequest execution, for the case where sending the encoded blob to the imp is slower than the incoming packet rate?

  2. Is there a queue of incoming messages (events) ahead of the imp’s agent.on execution, for the case where sending the blob out the hardware port is slower than the incoming blob rate?

If so, how big are the queues (especially any that are imp-hosted), and is there a way to query their size? This information could be used to apply back pressure.

An alternative is to decouple the interfaces on both the agent and the imp, but that requires managing queues directly and more sophisticated code execution, and is much more complex.
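For illustration, the decoupled device side might look roughly like this (just a sketch; the queue here is unbounded, which is exactly the part that needs real management):

```
// Device code: agent.on only enqueues; a separate drain loop writes out
txQueue <- [];
draining <- false;

function drain() {
    if (txQueue.len() == 0) { draining = false; return; }
    hardware.uart57.write(txQueue.remove(0));
    // Yield back to the event loop between packets so other
    // agent messages and timers can still be processed
    imp.wakeup(0, drain);
}

agent.on("packet", function(payload) {
    txQueue.push(payload);    // unbounded here - real code needs a cap
    if (!draining) {
        draining = true;
        imp.wakeup(0, drain);
    }
});
```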

The queueing from the agent to the imp is largely limited by TCP buffers; if the imp is busy (e.g. in an agent.on handler doing a SPI write), then no incoming packets will be processed by the imp in that period, which means no TCP ACKs and, eventually, back-pressure on the agent (it will block in a call to device.send).

The issue with this is that you may then have problems getting out-of-band messages to the imp, because the pipe will be full of data that you need to consume.

Many of our agent<>device examples use a “push/pull” scheme, e.g. the Lala example code. The agent sends a message to the device telling it how much data to expect; the device then sends “pull” messages to the agent, each of which causes the agent to push the next block to the device. This way, both stay in sync and you’re not filling TCP buffers, so you can send other messages to and fro without blocking.
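In outline, that scheme looks something like the sketch below. The message names (“expect”, “pull”, “push”) are made up for illustration; this is not the actual Lala code:

```
// ---- Agent code ----
chunks <- [];   // blocks waiting to be pushed to the device

function startTransfer(data, chunkSize) {
    chunks = [];
    data.seek(0);
    while (!data.eos()) {
        chunks.push(data.readblob(chunkSize));
    }
    device.send("expect", chunks.len());   // tell the device how much is coming
}

device.on("pull", function(ignored) {
    // Only send the next block when the device asks for it
    if (chunks.len() > 0) device.send("push", chunks.remove(0));
});

// e.g. startTransfer(someBlob, 1024);

// ---- Device code ----
remaining <- 0;

agent.on("expect", function(count) {
    remaining = count;
    agent.send("pull", 0);                 // request the first block
});

agent.on("push", function(chunk) {
    hardware.uart57.write(chunk);          // consume the block; blocking is fine now
    remaining--;
    if (remaining > 0) agent.send("pull", 0);
});
```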

The imp’s advertised TCP window is small (under 10kB, as I remember). For streaming upwards from the imp, throughput is very much limited by the send-buffer size divided by the round-trip latency, because the imp has to hold packets until they’ve been ACKed by the server. So we have a new API, post rel23, which allows the user to allocate more space to TCP transmit buffers to support more demanding use cases.
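If I recall correctly the buffer-sizing call is imp.setsendbuffersize(), but treat that name as an assumption and check the release notes; usage on the device would be something like:

```
// Device code: request a larger TCP send buffer before streaming upwards.
// NOTE: imp.setsendbuffersize() is an assumption here - check the API docs
// for your release. It returns the size actually granted.
local granted = imp.setsendbuffersize(16 * 1024);
server.log("TCP send buffer is now " + granted + " bytes");
```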

I’m not the best person to ask about how the incoming agent requests get queued though… I’ll let someone else here weigh in on that!

What’s the Lala example code? A search of your wiki brings up nothing.

Because it’s not on the wiki. Example code is spread around all over the web, which is really frustrating. Hopefully the new developer community manager can collate everything into one defined place.

But in the meantime, you can find Lala here:

A big part of reworking the devwiki is going to be collecting and clarifying a lot of the example code that’s spread out.

Thanks for digging that up @sjm. For those who don’t know, the Lala is an impee based on the imp module (as opposed to the card), designed for working with audio. Tom has a bit more work to do on it before we post reference designs and code on the wiki. I believe we’re also going to have some content on the blog about the Lala board once it’s all ready to go :)

Thanks sjm. That’s very helpful. It’s instructive to see a real application.