time() drifts about 5 ms/minute

Compared to both millis() and the server time, which agree with each other.
Do you need some simple code to illustrate?
Or do you believe me?

That’s roughly 7 seconds of drift per day (5 ms/min × 1440 min).

The server timebase is NTP, so it should remain accurate over the long term. The timebase for millis() and micros() (and, indirectly, imp.wakeup() and imp.sleep()) is not “correct”: it’s a crystal with a quoted accuracy of 30ppm, so expect to see up to about 2ms/min or 3s/day drift. The timebase for time() (and server.sleepuntil()) is a separate crystal with a quoted accuracy of 20ppm, though, so it shouldn’t be drifting by more than about 1.2ms/min or 2s/day.

So there does seem to be something up with your test imp’s RTC timebase, but do note that you might also be placing too much faith in the millis() timebase.


I guess I’m just lucky with millis(): I don’t notice even 2 ms of drift after an hour. If that statement is true, then my specific hardware’s time() drifts far beyond its spec of 1.2 ms/min.

Obviously millis() starts over at zero when you wake up or boot, and obviously time() syncs at power-up. But does time() ever jump to match the server time after that, or will it drift for weeks?

Interesting: millis() does not reset to 0 when you boot by loading new code.

Maybe millis() should work like the Linux jiffies counter, and start off at something like -30,000 just to make sure that everyone’s code deals with it wrapping.

Yes, time() jumps to match the server time every time the imp boots (including waking from server.sleepfor), and every time it gets new Squirrel (the Play or Run button in the code editor).


You’ve misunderstood: millis() does not reset; it just keeps counting when you press Run. How does the server stay in sync with NTP? Will it jump by a second every day? In other words, how often does it sync with NTP, and by how many seconds does it move when it syncs? I realize I’m way into the details. But wait until you see what I’ve achieved!

Ah, what I meant about millis() was that people shouldn’t notice or care about its absolute value (even when attempting to time it versus the RTC timebase), just about the difference between two readings. It should not be used as a “time since last booted”, because it wraps after only 25 days, which in some applications is a perfectly valid length of time for an imp to run after booting.

Whenever the imp requests new code from the server (at bootup, or when requested to by the code-editor), time() is overwritten with the server’s idea of the current time. The server’s timebase is precise, 'cos it’s NTP, but is only transferred with one-second precision, and no attempt is made to account for any delays the message may encounter between the server and the imp.


I understand how it goes between the server and the imp. I’ve been coding non-stop for days on this project. I’m curious about the server’s accuracy. 100 ms? How often is the server updated by NTP?

I’m aware of the delay between them. I’m aware of the 1 sec precision. My question concerns the timing of the change from 3:39:00 to 3:39:01. It appears to be very accurate.

The server uses the normal Linux NTP implementation, which continually makes tiny adjustments to the clock rate in order to match the upstream timebases. It doesn’t really make sense to ask “how often” it’s updated. (Google it a bit for more details.) In my experience, the Linux NTP implementation keeps PC clocks within a few milliseconds of datum UTC.


Thanks so much for helping! I have found that to be true.