The problem of using SINGLE with TIMER in Linux.
Something that has caused a number of issues over the years when I helped people port their code to Linux is that they used the SINGLE datatype in their timing code. This would cause strange bugs. My solution each time was to just change their datatype to DOUBLE. It was always curious to me, because this issue doesn't occur under Windows.
Well, today in discussion with Mysoft, it has become clear exactly why this is. On Windows, TIMER() returns the number of seconds since the machine was booted. On Linux, it returns the number of seconds since the Unix Epoch. The number of seconds since the Epoch is so huge that it cannot be stored in a SINGLE with any decent accuracy. It can't even store to one second of accuracy, let alone milliseconds or nanoseconds.
So, I would like to request that the behaviour of TIMER() on Linux be changed to match that of Windows. This should be possible by reading /proc/uptime (or a similar code-based solution) on the first call to TIMER() and calculating an adjustment value that will be applied, each time TIMER() is called, to the result from gettimeofday() before it is converted to double and returned.
This will then mean the SINGLE/timing issue will go away (as long as people don't have uptimes of many, many years, I assume) and TIMER() will function the same on both Windows and Linux, by essentially returning the uptime.
Of course there are still issues involved with the way that TIMER() works, but this would help a lot to mitigate a problem that is rather unintuitive.
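A rough, user-level sketch of that idea, just to illustrate it (this is not the actual rtlib change; the names init_boot_timer and boot_timer are made up, and it assumes a readable /proc/uptime):

' compute a boot-time offset once, then subtract it so values look like "seconds since boot"
Dim Shared As Double timer_offset

Sub init_boot_timer()
    Dim As String ln
    Dim As Integer f = FreeFile()
    If Open("/proc/uptime" For Input As #f) = 0 Then
        Line Input #f, ln                  ' first field of /proc/uptime is the uptime in seconds
        Close #f
        timer_offset = Timer - Val(ln)     ' Val() stops parsing at the first space
    End If
End Sub

Function boot_timer() As Double
    ' same resolution as Timer, but small enough that a SINGLE keeps sub-second accuracy for a while
    Return Timer - timer_offset
End Function

init_boot_timer()
Print boot_timer()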
I think this is a reasonable idea, since the "base point" of timer is essentially undefined in FB, so we can make it what we like.
www.freebasic.net/wiki/keypgTimer seems to indicate this is a problem for DOS too.
(It would be good (in some ways) to have a QB-compatible timer() function (daytimer()?) that does return the number of seconds since midnight, just for cross-platform consistency. I remember looking into this once, but didn't find a way of accurately finding the point of midnight on Windows, apart from loop-polling time() for up to a second.)
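Purely as an illustration, a rough sketch of what such a daytimer() could look like at one-second resolution, built from the Time string (hh:mm:ss); daytimer is just the placeholder name from above, and aligning sub-second fractions of Timer to midnight is exactly the hard part, so it is left out here:

' whole seconds since local midnight, derived from Time ("hh:mm:ss"); 1-second resolution only
Function daytimer() As Double
    Dim As String t = Time
    Return Val(Left(t, 2)) * 3600 + Val(Mid(t, 4, 2)) * 60 + Val(Mid(t, 7, 2))
End Function

Print daytimer()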
Long ago I too had a problem with the old QB4.5 timer doing funny things at midnight. It was because there was no timer tick exactly at midnight, since the RTC was based on the 32.768 kHz crystal while the Ticks were derived by division of the NTSC colour burst 14.318 MHz crystal.
Counting the number of seconds since midnight takes 16.4 bits; someone trying to count them in a 16-bit register may only become aware of the problem at 18h:12m:15s, when overflow first causes a problem.
We have to assume that a PC supported by a UPS will run beyond one year without cold starts. The number of seconds in a year is 31622400, which requires 24.91 bits. That will only just fit in a single because there are 24 bits plus one implicit bit = 25 bits. There can be no sub-second resolution if singles are used for Timer over one year. Counting 18Hz Ticks in a Single will fail after only 3 weeks.
For this reason the use of a single to handle the value of Timer is bad practice and should always be changed immediately to a double.
The question then becomes: is resolution lost by Timer on Unix systems when using Doubles? To get one microsecond resolution requires 6 digits per second and 7 digits per year = 13 digits. Double has 15 digits, so we have 2 spare digits. That gives us 99 years from the start of the Unix epoch. So it seems that no modification of the Timer function is actually needed for Timer on Unix systems if Double precision is used.
(Going through the numbers more carefully; 1,000,000 micro sec per sec = 19.932 bits. 86,400 sec per day = 16.399 bits. 366 days per year = 8.516 bits. The total is 44.846 bits per year. A Double supports a maximum of 53 bits so we have 53 - 44.846 = 8.154 bits unused. This is sufficient for about 285 years.)
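A quick way to see the loss described above (1.3e9 is just a stand-in for a seconds-since-the-Epoch value):

' a SINGLE has 24 significand bits, so near 1.3e9 its spacing is 128 seconds
' and a half-second increment simply disappears
Dim As Single s1 = 1.3e9
Dim As Single s2 = s1 + 0.5
Dim As Double d1 = 1.3e9
Dim As Double d2 = d1 + 0.5
Print s2 - s1    ' prints 0: the half second was lost
Print d2 - d1    ' prints 0.5: the DOUBLE keeps it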
Thanks, MichaelW, for identifying my erroneous and sloppy guesswork; you make a valuable reference.
So; Counting Ticks at 18Hz in a Single may fail after only 10 days, not 3 weeks.
I believe my conclusions for Timer on any platform remain intact;
Midnight should not be used as an origin.
Singles will always be dangerous.
Doubles will always be sufficient.
Any modification of the Timer origin would further hide the danger of using Singles with Timer.
Maybe it would be advantageous if the assignment of Timer to a Single threw a compiler warning.
"Maybe it would be advantageous if the assignment of Timer to a Single threw a compiler warning."
I don't really see this as feasible, unless it's the ultra-simple case of:
dim as single t1 = timer()
People can easily subvert it.
"It would be good (in some ways) to have a QB-compatible timer() function (daytimer()?) that does return the number of seconds since midnight, just for cross-platform consistency."
I disagree with the idea of going back to the seconds-since-midnight method, as I expect this will cause even more bugs, because it only takes a program to be running for a day before it hits a reset, and that can cause people problems when comparing to old stored values. At least with time since boot, this problem is kicked into the long grass. Perhaps that's even more dangerous though, as people are less likely to discover potential issues during their testing phase.
Some good points are brought up in this thread, some of which I was thinking about when I said "there are still issues involved with the way that TIMER() works".
It's a bit of a battle in my mind; in a way I'd rather the current TIMER() were scrapped and one introduced that worked more like the Windows/Linux OS functions do: (long) integer values that wrap around correctly, which give greater precision and are less prone to comparison errors at the wrap-around. But this is BASIC, and there is the QB legacy to consider.
I'd seriously like the platforms to be in harmony though, whichever the chosen method is.
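For what it's worth, a sketch of the wrap-around-correct comparison style mentioned above, using a hypothetical 32-bit millisecond counter (get_ticks below is only a stand-in for something like GetTickCount):

' placeholder 32-bit millisecond counter; keeping only the low 32 bits makes it wrap about every 49.7 days
Function get_ticks() As ULong
    Return CULng(CULngInt(Timer * 1000) Mod 4294967296ULL)
End Function

Dim As ULong t0 = get_ticks()
Sleep 250
Dim As ULong elapsed = get_ticks() - t0    ' unsigned subtraction stays correct across one wrap
If elapsed >= 200 Then Print "at least 200 ms elapsed"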
The hidden “loss of resolution” problem yetifoot flagged in this thread will continue so long as beginners or legacy code uses Single with Timer. We could add a big warning to the Timer documentation, but that won't flag legacy code, nor beginners who learn slowly from their own mistakes.
The only way I can see to detect misuse is to generate a warning when Timer is implicitly converted from Double to Single. It is not necessary to detect every complex occurrence of misuse. It only requires that one case exists and is detected in any program for the problem to be brought to the attention of the programmer.
' example;
Dim as Single t1
t1 = timer() ' this will implicitly convert to single and could throw a warning.
Then people lacking wisdom can easily subvert the warning by using;
t1 = Csng(Timer) ' this can explicitly convert to single and so not throw a warning.
Why not support two timers - the current one as is, and a microsecond timer that returns a 64-bit signed integer containing the elapsed microseconds from boot? I don't know about Linux, but for Windows a microsecond is reasonably close to the ~2 microsecond effective resolution that I typically get for the Windows high-resolution timer, capturing the timer count from a tight loop coded in assembly. Regarding the potential for overflow, for the most recent versions of Windows that I have tested (2000 and XP) the performance frequency was 3579545 Hz, versus 1193182 Hz for Windows 9x.
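Purely as an illustration of that suggestion, a Windows-only sketch (usTimer is a made-up name here; a Linux build would presumably use clock_gettime(CLOCK_MONOTONIC) instead):

' 64-bit microsecond timer built on the Windows high-resolution counter
#include "windows.bi"

Function usTimer() As LongInt
    Static As LARGE_INTEGER freq
    Dim As LARGE_INTEGER count
    If freq.QuadPart = 0 Then QueryPerformanceFrequency(@freq)
    QueryPerformanceCounter(@count)
    ' split the division so count * 1000000 cannot overflow 64 bits
    Return (count.QuadPart \ freq.QuadPart) * 1000000 + _
           ((count.QuadPart Mod freq.QuadPart) * 1000000) \ freq.QuadPart
End Function

Print usTimer()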
@MichaelW: This is the one that makes most sense to me... especially because we do consider legacy. It's safer to do this anyway, IMO:
var t = msTimer() '' No, not Microsoft