Pin state change edge detect

When using the pin state change callback to detect handshaking pulses in our communication between the imp and an external uP, we experience a lot of missed messages. Since we want to detect a rising edge, we are using code for the callback such as:

function HostToImpMsgHandler() {
	if (sync.read() == 1) {		// rising edge indicating frame was just received ('sync' = handshake pin)
		uartRxPacket.type = ...;	// (value elided)
		uartRxPacket.len = uartRxPacket.payload.tell();
		// ...
	}
}
The Sync pulse is, however, only 19msec wide, so we assume the callback, not being a true interrupt, sometimes executes too late, fails the level check at the beginning of the callback, and therefore doesn't process the message. Is this possible?
It would be better if the state change callback differentiated between a rising edge and a falling edge (eg 2 different callbacks, or a parameter indicating the direction).
For now we're going to detect on the falling edge, as the low-level period between messages is mostly > 50msec. Would that be enough to guarantee catching every edge?
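For context, a minimal sketch of the falling-edge variant we're considering (the pin name and frame handler are placeholders, not our actual code):

```squirrel
// 'sync' is a placeholder for whichever pin carries the handshake signal
sync <- hardware.pin1;

function syncChanged() {
    // the state-change callback fires on both edges; read() tells us which
    if (sync.read() == 0) {
        // falling edge: the line stays low for > 50msec between frames,
        // so even a somewhat late callback should still see it low
        processFrame();  // placeholder for our frame handling
    }
}

sync.configure(DIGITAL_IN, syncChanged);
```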

The edge is certainly caught, but yes, the handler may not run until much later - it depends on what else your code is doing at the time.

Generally, though, using an out-of-band signal to tell code that there's data in a buffer - then reading blindly - is suboptimal for resilience. Is there any reason you can't process the data stream from PcUart in RX data callbacks? You'd need a header and length indication, but that shouldn't be hard to deal with.

eg look at _rx_byte() here
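The per-byte approach would look something like this sketch (the frame format, uart name and dispatch function are assumptions, not the actual protocol):

```squirrel
// assumed frame format: one type byte, one length byte, then <length> payload bytes
const S_TYPE = 0; const S_LEN = 1; const S_PAYLOAD = 2;
state <- S_TYPE;
pktType <- 0;
pktLen <- 0;
payload <- blob();

function onRxData() {
    local b = pcUart.read();            // 'pcUart': placeholder uart
    while (b != -1) {                   // drain everything currently in the FIFO
        switch (state) {
            case S_TYPE:
                pktType = b; state = S_LEN;
                break;
            case S_LEN:
                pktLen = b; payload = blob(pktLen);
                state = (pktLen > 0) ? S_PAYLOAD : S_TYPE;
                if (pktLen == 0) dispatch(pktType, payload);  // 'dispatch': placeholder handler
                break;
            case S_PAYLOAD:
                payload.writen(b, 'b');
                if (payload.tell() == pktLen) {
                    dispatch(pktType, payload);
                    state = S_TYPE;
                }
                break;
        }
        b = pcUart.read();
    }
}
```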

We've been doing that for years, but experienced problems getting past 57600 baud: the Squirrel VM simply couldn't keep up, as the callback literally executes for every char received - see some of my earlier posts on the subject. The only way to get to higher speeds seems to be not triggering on every char received and instead using uart.readblob() at the end of the frame, emptying the FIFO in one go. It's a binary stream of data, so no particular char can be used to indicate end of frame (which you can, btw, only detect by inspecting every byte received), and the gap between frames is sometimes not very big, between 50 and 100msec. We've tried all sorts of mechanisms to detect the frame gap, but none really worked flawlessly.
That's why we introduced a HW sync signal to indicate end-of-frame, but now we've hit this latency problem on the callback. I think we could cope if we could differentiate based on which edge was detected…
I'm going to leave out the check for the rising edge. That means the falling edge will also trigger it, but assuming the callbacks for the two edges execute in order, by the time the falling edge gets processed the FIFO is empty, which we can detect as well. The only requirement then is that a frame gets processed within 100msec (the inter-frame period), which is already better than a required max latency of 19msec. The header does contain a length, so we can avoid reading part of the next message if we're slightly late.
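In sketch form, the edge-agnostic handler would just probe the FIFO (uart name and read size are placeholders):

```squirrel
function syncChanged() {
    // no level check: this runs on both edges, in order
    local first = syncUart.read();      // 'syncUart': placeholder uart
    if (first == -1) return;            // FIFO empty - this was the falling edge

    // rising edge with data present: drain the rest of the FIFO in one go
    local data = blob();
    data.writen(first, 'b');            // keep the byte we already consumed
    data.writeblob(syncUart.readblob(80));  // 80: placeholder read size
    // ...parse 'data' using the length field in the header
}
```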

The VM can keep up with higher speeds - but we do recommend emptying the buffer at every callback. We have had customers running at 1Mbit/s on an 003, for example. It does depend on what else you're doing with the data, though.

There’s no issue with processing a binary stream - as long as you have a length in the header, that can be used to determine the end of the frame.

If you just don't check the edge and no data is present, then your first byte read will return -1 and you can abort there?

Indeed. That's what we'll do. With the check for the rising edge in the code, if execution was delayed by more than 19msec we didn't just miss handling a frame, we also ran into trouble with the next one. We'll deal with that by introducing some more intelligence in parsing the content of the FIFO (instead of blindly assuming all content belongs to one frame). It shouldn't be too difficult, as the header indeed contains all the necessary info. We simply never thought of it, being used to the 'interrupt way' of working, where the response to a trigger is only delayed by a few us…
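That parsing step could be sketched as below, under the same assumed [type][len][payload] header (all names hypothetical):

```squirrel
// data: a blob read from the FIFO that may span more than one frame
function parseFrames(data) {
    local i = 0;
    while (i + 2 <= data.len()) {               // need at least a full header
        local pktType = data[i];
        local pktLen  = data[i + 1];
        if (i + 2 + pktLen > data.len()) break; // partial frame: keep for the next pass
        data.seek(i + 2);
        dispatch(pktType, data.readblob(pktLen)); // 'dispatch': placeholder handler
        i += 2 + pktLen;                        // advance to the next frame, if any
    }
    return i;   // bytes consumed; caller keeps the remainder for the next read
}
```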