Yes, I considered physically wiring something into a pin change handler. Seemed like too crude a hack to pursue if another way is possible.
Imagine A = B + C, where B and C are relatively involved functions. Each takes input from real-time data streams: one a changing voltage, the other a serial data stream (requiring parsing). But, I want A to be output at a known, fixed rate, independent of how long either B or C takes.
I want to produce code that plays nice. Besides producing A as output, my code also has other responsibilities, so I don’t simply want A to be produced as fast as possible. That would unnecessarily hog CPU cycles from these other tasks, including the O/S itself.
So, in lieu of impOS being multi-threaded in nature, I’ve come up with the following way to throttle things. Not sure it’s the best way. Say I’m happy with an A update rate of P/sec, my process frequency (say, P = 200 Hz). A must output 200x/sec, or close to it. I have B produced by the imp’s Sampler feature at 2000 Hz — 10x oversampling. All good so far. C is busy parsing the serial stream for validly framed data (which it may or may not find). That’s called via a wakeup timer at Bx/sec (not really, though). Each continuously updates a global table with its results. I know B and C each take less time to complete than their respective periods. Then A, via imp.wakeup(1/P, A), simply polls the table and does its stuff.
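For concreteness, here’s a minimal Squirrel sketch of that arrangement as I understand it. All the names (state, samplesReady, parseSerial, produceA), the pin, the buffer sizes, and the sample decoding are placeholders of my own — it’s the shape of the scheme, not a drop-in implementation:

```
const P = 200.0;                        // desired A output rate, Hz
state <- { b = 0.0, c = null };         // shared results table

// B: Sampler fills buffers at 2000 Hz; callback stores the latest reading
function samplesReady(buffer, length) {
    if (length > 0) {
        state.b = buffer.readn('w');    // placeholder decode of one sample
    }
}
hardware.sampler.configure(hardware.pin1, 2000, [blob(2), blob(2)], samplesReady);

// C: parse whatever serial data has arrived, then re-arm its own timer
function parseSerial() {
    // ... scan the UART for a validly framed packet, update state.c ...
    imp.wakeup(0.05, parseSerial);      // re-arm; subject to the wakeup rate cap
}

// A: poll the table at (ideally) P Hz and emit the result
function produceA() {
    imp.wakeup(1.0 / P, produceA);      // this is where the cap bites
    if (state.c != null) {
        local a = state.b + state.c;    // A = B + C
        // ... output a at the fixed rate ...
    }
}

hardware.sampler.start();
parseSerial();
produceA();
```

Each producer only ever writes its slot in the table and A only ever reads, so with Squirrel’s single-threaded event model there’s no locking to worry about — the throttling is entirely in the timer periods.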
This all seems to work quite well. I can partition the CPU costs by adjusting the callback and wakeup frequencies to achieve whatever real-time responsiveness I require. At least, if wakeup didn’t hit the snooze button so much…
So, now do you see where I’m stuck? Even though imp.wakeup has centisecond resolution, it can’t fire things more often than 50x/sec, and, regardless, is not the best way to provision the above anyway, methinks.
Even if I changed C to a uart event-based model, I can’t get A to fire at 200x/sec (or even 60x/sec for that matter).
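For reference, the event-based version of C I mean would look roughly like this — the uart57 choice, baud rate, and framing logic are placeholders, but the callback form of uart.configure() is the standard one. It solves C’s scheduling nicely, yet does nothing for A’s 200 Hz problem:

```
uart <- hardware.uart57;

function onRx() {
    local b = uart.read();
    while (b != -1) {
        // ... feed b into the frame parser; update the global table
        //     when a validly framed packet completes ...
        b = uart.read();
    }
}

// Callback fires whenever received bytes are available — no polling timer
uart.configure(115200, 8, PARITY_NONE, 1, NO_CTSRTS, onRx);
```

So C can become purely event-driven, but A still needs some timer-like mechanism that can actually fire at 200 Hz.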