Imp.onidle behavior

When giving the imp a “heartbeat” connection, you want to guarantee that the imp always checks in at some interval. If you are using RETURN_ON_ERROR, as I am, you need to make sure that at the end of every possible execution scope a server.sleepfor or similar is called. Otherwise the imp will be stuck in lala-land forever, never sleeping and never reconnecting, unless you’ve defined some interrupt or you have physical access to the imp. This makes imp programming very error prone IMO.
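One way to guard against a code path that forgets to sleep is a coarse watchdog timer armed at wake. This is just a sketch of that idea, not part of the imp API – the constants and the tradeoff (it can cut legitimate long-running work short unless you cancel it with imp.cancelwakeup before sleeping deliberately) are mine:

```squirrel
// Illustrative names, not from the imp API:
const WATCHDOG_TIMEOUT = 120;   // max seconds we allow the app to stay awake
const CHECKIN_INTERVAL = 3600;  // normal heartbeat interval

// Armed once at wake; if no other code path has put the imp to sleep
// by the time this fires, force a sleep so the heartbeat survives.
watchdog <- imp.wakeup(WATCHDOG_TIMEOUT, function() {
    server.sleepfor(CHECKIN_INTERVAL);
});

// When sleeping intentionally, disarm it first:
// imp.cancelwakeup(watchdog);
```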

I thought about using imp.onidle to protect me against such pitfalls, but it’s no use, because imp.onidle is called even if there is a pending server.connect somewhere. So, if I tell the imp to sleep in the onidle callback, my server.connect callback will never get its moment in the sun.
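To make the pitfall concrete, here’s a minimal sketch of what I mean (illustrative, not code you’d ship):

```squirrel
server.connect(function(reason) {
    server.log("connected");   // may never run...
}, 30);

imp.onidle(function() {
    // ...because the imp can go idle, and this deep sleep fires,
    // while the connect attempt above is still pending.
    server.sleepfor(3600);
});
```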

Wouldn’t imp.onidle be much more useful if it waited for any registered server.connect events too? I understand it would be difficult to change the API, but perhaps a new API call could handle this important pitfall? imp.beforesleep? I bet it’s a common source of error – tracking down the end of every possible execution context can be very tricky.

onidle() is not intended to be used just for sleeping - it’s to run routines that are low priority, to be dealt with when there’s nothing else more important to do.

Generally you’d have a flow something like:

function connectcallback(reason) { /* do work needed on a connection */ imp.onidle(function() { server.sleepfor(600); }); }
server.connect(connectcallback, 30);
foreground_work();

This obviously assumes that foreground_work() is blocking, because as soon as the connect callback fires, the device will sleep.

A more complex flow might be:

function connectcallback(reason) { /* do work needed on a connection */ imp.onidle(function() { server.sleepfor(600); }); }
function workdone() { server.connect(connectcallback, 30); }
do_work(workdone);

…ie, you kick off some work on a wake, and when the work is complete you call workdone, which kicks off a connect. When connected the connect callback sends the data and sleeps.

Even more complex is kicking off a connect in the background at the same time as your work is being done (to optimize power), and then only sending & sleeping when work is done and the data to send has been prepared.
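That overlapped flow might look something like this sketch, assuming a do_work(callback) helper and globals to track completion (the names and structure here are illustrative, not a definitive implementation):

```squirrel
workResult <- null;
connected  <- false;

function trySendAndSleep() {
    // Only send and sleep once BOTH the work and the connection are done
    if (workResult != null && connected) {
        agent.send("result", workResult);
        imp.onidle(function() { server.sleepfor(3600); });
    }
}

// Kick off the connect and the work in parallel
server.connect(function(reason) {
    if (reason == SERVER_CONNECTED) connected = true;
    trySendAndSleep();
}, 30);

do_work(function(result) {
    workResult = result;
    trySendAndSleep();
});
```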

Maybe post the code or describe the problem you’re trying to solve?

(generally, if you’re doing work with the imp offline then debugging can get hard as there’s no logging output; here’s a guide to help with that: https://www.electricimp.com/docs/resources/disconnecteddebugging/ )

The problem I’m trying to solve is preventing the imp from going into a permanently awake/disconnected state, which can happen if every execution scope isn’t carefully checked to put the imp to sleep, as I described in the OP.

It’s important to know what the imp is doing while it is disconnected. Setting up a debug UART can be very helpful here. The alternative is to log state changes to an internal list and send the list to the agent (or use server.log) when the imp reconnects. That way, you can get some confidence that your state machine is dealing with all possible situations.
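A minimal sketch of that buffered-logging idea – the names (logBuffer, offlineLog) are illustrative, not a library:

```squirrel
logBuffer <- [];

// Record a state change with a timestamp while offline
function offlineLog(msg) {
    logBuffer.append({ t = hardware.millis(), m = msg });
}

// On (re)connect, flush the buffer to the agent
function onConnected(reason) {
    if (reason == SERVER_CONNECTED) {
        agent.send("offline_logs", logBuffer);
        logBuffer = [];
    }
}
```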

I have debugged offline plenty of times before, through UART and the nv table. That’s beside the point I’m trying to make. I’m suggesting a safer mechanism, similar to imp.onidle, that would protect beginners and seasoned developers with larger codebases from this potentially very costly and serious error that is easy to make.

I agree with you that imp.onidle must be used with care. I have a large code base running on the device and a mature state machine for managing wifi connections. When imp.onidle() is called, it doesn’t tell me anything useful with respect to the state of wifi, so I avoid making any changes to wifi/connection/imp state with it.

Looking at the documentation for imp.onidle(), I see that it does suggest that you can use server.sleepfor() etc when imp.onidle() is called. I think that this would only make sense if you are doing everything synchronously. The moment you have asynchronous tasks running, you do need to have a firm grasp of the state of everything before considering a sleep call. Maybe there needs to be a warning about that.
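One way to keep that firm grasp is to count outstanding async tasks and only register the sleeping idle handler when nothing is pending. A sketch under that assumption (the counter and helper names are mine):

```squirrel
pendingTasks <- 0;

function taskStarted() { pendingTasks++; }

function taskFinished() {
    pendingTasks--;
    // Only safe to sleep once every async task has reported back
    if (pendingTasks == 0) {
        imp.onidle(function() { server.sleepfor(3600); });
    }
}
```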