Having done a pair of large LED displays, the next imp project I want to tackle is replacing the PIC on my Nixie Clock.
All the PIC is doing is keeping time and driving a pair of HV5622 chips: 32-bit shift registers with open-drain outputs.
So the hardware interface is pretty straightforward, but I’m wondering about time(). I know that I’ll need to run my own timers for anything I want to have happen on a sub-second scale, but it’s not clear to me when the RTC in the imp gets set, and how often it’s updated/corrected.
Being the sort of person who has a nixie clock, I’d like it to be fairly accurate — something in the tens of milliseconds.
Is the RTC in that range, or do I need to do more work on my own to get a good timebase?
With software calibration using the server as a time source, the imp can stay accurate to within 1ms/hour of drift, even without a WiFi connection.
You get time() by adding a constant to millis().
The RTC is updated on the first connect after a sleep or cold boot. We may well be updating it on all connects in the future, though.
It’s also currently set with only 1s resolution, so it can be off by up to ±1s (e.g. if the second ticks over immediately after the value is sent to the imp).
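The "constant plus millis()" idea above is easy to sketch. This is an illustrative Python mock-up, not imp Squirrel code — `millis()` here is a stand-in for the imp's monotonic millisecond counter, and the class/field names are my own:

```python
import time

def millis():
    # Stand-in for the imp's free-running millisecond counter
    return int(time.monotonic() * 1000)

class SoftClock:
    """Keeps absolute time as a constant offset added to a monotonic counter."""
    def __init__(self):
        self.offset_ms = 0  # server_time_ms - millis(), captured at sync

    def sync(self, server_time_ms):
        # Record the difference once; millis() keeps running on its own.
        self.offset_ms = server_time_ms - millis()

    def now_ms(self):
        # Absolute time = stored offset + current counter value.
        return self.offset_ms + millis()

clock = SoftClock()
clock.sync(1_700_000_000_000)  # pretend the server reported this many ms since epoch
current = clock.now_ms()       # tracks the server's time from here on
```

Anything sub-second then hangs off `millis()` directly, with the offset only touched when the server supplies a fresh timestamp.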
Chip-scale atomic clocks are not cheap, even compared to Nixie-tube clocks – but you would get on Slashdot…
Probably easier than writing an NTP client in the imp.
No need for an NTP client. Once you’ve synced and then lose your WiFi connection, the imp holds absolute time with under 1ms/hour of drift. The code is simple if you’re interested, and you can resync as often as you like whenever you have a connection. Does shallow sleep keep millis() running? I guess deep sleep does not?
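The calibration trick being described can be sketched like this: two server syncs measure how fast the local counter runs relative to the server, and later readings are scaled by that rate so accuracy holds offline. Again a Python illustration with hypothetical names, not the poster's actual code; the counter is injectable so the drift math is visible:

```python
import time

def millis():
    # Stand-in for the imp's monotonic millisecond counter
    return int(time.monotonic() * 1000)

class CalibratedClock:
    """Estimates the local counter's drift rate from two server syncs,
    then extrapolates corrected absolute time between connections."""
    def __init__(self, counter=millis):
        self.counter = counter
        self.t0_server = None  # server time at last sync (ms)
        self.t0_local = None   # counter value at last sync
        self.rate = 1.0        # server ms elapsed per local ms elapsed

    def sync(self, server_ms):
        local = self.counter()
        if self.t0_server is not None and local > self.t0_local:
            # Ratio of server elapsed to local elapsed over the sync
            # interval = correction factor for the local oscillator
            self.rate = (server_ms - self.t0_server) / (local - self.t0_local)
        self.t0_server = server_ms
        self.t0_local = local

    def now_ms(self):
        # Extrapolate from the last sync using the calibrated rate
        return self.t0_server + (self.counter() - self.t0_local) * self.rate
```

The longer the interval between the two syncs, the better the rate estimate, since the fixed ±1s quantization on each server timestamp is amortized over a larger elapsed span.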
@sbright33 - I now have an imp driving the nixie clock display, so I’d be interested in your high-accuracy clock code.