Timeslice for cpu

General FreeBASIC programming questions.
Dinosaur
Posts: 1481
Joined: Jul 24, 2005 1:13
Location: Hervey Bay (.au)

Timeslice for cpu

Post by Dinosaur »

Hi all

I have finally seen that giving time slices back to the cpu has benefit.
With continuous usb transmissions, the timing becomes more erratic.
If I yield for about a mSec, it stabilises to within 4 mSec instead of 20 mSec.

In CGUI there is a flag, "CguiYieldTimeSlice(1)", but it seems too long (2.5 mSec).
If I put "Sleep 1" in every loop of my program, the scan time (to run my process) goes from 3 microseconds
to a millisecond or more.

Can someone tell me what the minimum "Time Slice" is that is needed by the cpu,
and is there another way, other than Sleep, to give time back to the cpu?

Something like creating a "dummy Interrupt" or a specific system call.

Regards
D.J.Peters
Posts: 8586
Joined: May 28, 2005 3:28
Contact:

Re: Timeslice for cpu

Post by D.J.Peters »

First of all (as far as I know), neither Windows nor Linux really has a sleep() or any other kind of delay().

There are only functions that take a timeout value (given in milliseconds).

Sleep can be a call to select(), a call that can wait for file or network socket events.

If you (or Sleep()) call select() without any event mask but with a timeout value, it will wait X ms and then return to the caller.
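
For example, on Linux you can make that call yourself: select() with no event masks and only a timeout simply waits and then returns. A minimal sketch, declaring select() by hand and assuming the usual C layout of timeval (the trailing underscore avoids FB's Select keyword):

Code: Select all

type timeval
    tv_sec  as integer   '' seconds (matches C long on Linux)
    tv_usec as integer   '' microseconds
end type

extern "c"
    declare function select_ alias "select" ( byval nfds as long, _
        byval readfds as any ptr, byval writefds as any ptr, _
        byval exceptfds as any ptr, byval tmo as timeval ptr ) as long
end extern

dim as any ptr np              '' null pointer for the unused fd sets
dim tv as timeval
tv.tv_sec  = 0
tv.tv_usec = 2000              '' ask for roughly 2 ms
select_( 0, np, np, np, @tv )  '' no fds: returns once the timeout expires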

On Windows there are select() and MsgWaitForMultipleObjects(); 15 ms is the lowest resolution for a (stable) timeout value.

The exception is the WM_TIMER message: the lowest resolution for SetTimer() is 10 ms.

In other words, a real CPU idle time via sleep(15) on Windows and sleep(10) on Linux is realistic.

Shorter delays must be made with stupid empty CPU loops: for i = 1 to XXX : next ...
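
If a busy wait is unavoidable, a Timer-based loop is a variant of the same idea that does not depend on CPU speed; it still burns a whole core while it waits:

Code: Select all

'' spin until roughly usec microseconds have elapsed (pure busy wait)
sub spinWait( byval usec as double )
    dim as double t0 = Timer
    while (Timer - t0) * 1000000 < usec
        '' do nothing
    wend
end sub

spinWait( 500 )   '' about half a millisecond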

Neither Windows nor Linux is a realtime operating system.

Joshy
Last edited by D.J.Peters on Feb 26, 2016 20:01, edited 1 time in total.
Dinosaur
Posts: 1481
Joined: Jul 24, 2005 1:13
Location: Hervey Bay (.au)

Re: Timeslice for cpu

Post by Dinosaur »

Hi All

Thanks for the reply Joshy.
I have read a lot about priorities and "niceness" settings, but they do not solve the erratic timing.
Mind you, the device I am talking to also contributes to the problem.
Running the same software but using a different I/O board, shows a higher level of stability.
I know we are not using a "Real Time" OS, but I am comparing with previous systems.

I have run the same software on FreeDos, Win XPe and now Linux.
But never saw a problem until Linux.

Also, I have never used the Sleep statement in DOS or Win XPe,
and I could clock an output and watch it on an oscilloscope (without sync)
and only see a very slow drift. (Power control using Phase Angle Firing of SCR's.)
That was usb to a UBW32 board.

So, in other words, I have never seen any problems from not giving back cpu cycles.
But now with Linux / usb I am seeing it.

Running a Quad Core system, do the cpus even need to be given time slices these days?
In the early days memory refresh was critical, but now ????

Obviously the usb is affected.

Regards
MichaelW
Posts: 3500
Joined: May 16, 2006 22:34
Location: USA

Re: Timeslice for cpu

Post by MichaelW »

I have doubts that the CPU is the problem, because it's much, much faster than your I/O board.
Dinosaur
Posts: 1481
Joined: Jul 24, 2005 1:13
Location: Hervey Bay (.au)

Re: Timeslice for cpu

Post by Dinosaur »

Hi All

Michael, I agree that the cpu would not be the problem, unless it has a need to have cpu cycles yielded to it.
I don't seem to be able to get an answer on that.

However, I have "solved" my problem of erratic usb response times.
Reading from the port (even when the complete string is waiting) is the biggest culprit in response time variation.
Trying to Tx within 5 or 6 mSec after a previous Rx causes variations.

Firstly I removed all Sleep statements.
I shifted all my logic & Time setting to immediately after the completion of the transmission.
So the Start time for my critical output is set once the Tx is done & verified.
Then 30 mSec before the completion of the Critical Time, I prevent ALL usb activity.
Then at the completion of the critical output time, the first Tx allowed is the cancellation of the output.

Have had that running for hours (at 40 cycles per minute) and have not seen 1 mSec variation of the 200 mSec output time.
Also tried to break it by interspersing 50 mSec output pulses, but all good.
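
Roughly, the sequence described above looks like this (names and constants are illustrative only, not taken from the real program):

Code: Select all

const CRITICAL_MS = 200              '' critical output time
const GUARD_MS    = 30               '' no usb traffic inside this window

dim as double  t0         = Timer    '' set once the Tx is done and verified
dim as integer usbAllowed = -1       '' -1 = true

do
    dim as double elapsed = (Timer - t0) * 1000
    if elapsed >= CRITICAL_MS - GUARD_MS then usbAllowed = 0
    if elapsed >= CRITICAL_MS then
        usbAllowed = -1              '' first Tx allowed is the cancellation
        exit do
    end if
    '' ...other work here; usb I/O only while usbAllowed is true...
loop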

Reading the web on usb latency showed this to be a widespread concern, particularly with audiophiles.

Luckily I am able to adapt my logic to overcome the problem.

Regards
marcov
Posts: 3462
Joined: Jun 16, 2005 9:45
Location: Netherlands
Contact:

Re: Timeslice for cpu

Post by marcov »

D.J.Peters wrote:At first (and so far I know) the OS windows or linux does not have any sleep() or any other kind of delay() in real.
Linux and *nix in general:

http://linux.die.net/man/3/usleep
http://linux.die.net/man/2/nanosleep
http://linux.die.net/man/2/clock_nanosleep

windows:
https://msdn.microsoft.com/en-us/librar ... 85%29.aspx

for more granularity:
https://msdn.microsoft.com/en-us/librar ... 85%29.aspx

However, the actual sleep time may vary, and usually depends on the granularity of the scheduler's timing. Typically it is better on systems with HPET timers in the uncore (read: anything after Core 2).

On modern systems it is usually better to work event driven as much as possible; finely grained polling is not efficient on pre-emptive systems. If it really must be done, move all polling to a separate thread and increase its scheduler precision.
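
A rough sketch of the "move polling to a separate thread" idea with FB's own threading primitives; pollDevice() here is only a placeholder for whatever check is needed (for example Loc() on the port), not a real API:

Code: Select all

dim shared as any ptr  gLock
dim shared as integer  gDataReady

function pollDevice() as integer
    '' placeholder: return non-zero when data is waiting
    return 0
end function

sub pollThread( byval userdata as any ptr )
    do
        if pollDevice() then
            mutexlock( gLock )
            gDataReady = -1
            mutexunlock( gLock )
        end if
        sleep 1, 1                   '' yield without blocking the main thread
    loop
end sub

gLock = mutexcreate()
dim as any ptr hPoll = threadcreate( @pollThread )

do
    mutexlock( gLock )
    dim as integer ready = gDataReady : gDataReady = 0
    mutexunlock( gLock )
    if ready then
        '' ...handle the new data...
    end if
    sleep 10, 1
loop until len(inkey)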
Dinosaur
Posts: 1481
Joined: Jul 24, 2005 1:13
Location: Hervey Bay (.au)

Re: Timeslice for cpu

Post by Dinosaur »

Hi All

Thank you for that reply Marcov, but is a sleep actually needed?

I know the load on the cpu is reduced by introducing sleeps (and it will run cooler), but other than that, is there an actual need?

Also, I saw these usleep etc. whilst reading on the subject, but are they accessible from within FB?

Regards
dodicat
Posts: 7983
Joined: Jan 10, 2006 20:30
Location: Scotland

Re: Timeslice for cpu

Post by dodicat »

usleep can be captured alright, but it seems to be limited to 999999 microseconds.
Any more than that, it doesn't do. (Win XP 32 bit)

Code: Select all


extern "c"
declare function usleep( as ulong) as long '=0 when working
end extern

print timer
print usleep(999999)
print timer
 
print
print timer
print usleep(500000)
print timer
 
Dinosaur
Posts: 1481
Joined: Jul 24, 2005 1:13
Location: Hervey Bay (.au)

Re: Timeslice for cpu

Post by Dinosaur »

Hi All

Thanks for that example dodicat, I will try putting 500 uSec sleeps in and see what it does.

Regards
MichaelW
Posts: 3500
Joined: May 16, 2006 22:34
Location: USA

Re: Timeslice for cpu

Post by MichaelW »

Dinosaur wrote: Michael, I agree that the cpu would not be the problem ,unless it has a need to have cpu cycles yielded to it.
I don't seem to be able to get an answer on that.
Under Windows, and running on a multicore processor, if you max out your process priority, so no other (user) process will be able to preempt it, and your code does not overheat any part of the core it's running on, it should run continuously.

Scheduling Priorities
SetPriorityClass
SetThreadPriority

Code: Select all

#include once "windows.bi"

''-------------------------------------------------
'' The combination of these two calls will set the
'' process priority to its maximum value, 31.
''-------------------------------------------------
SetPriorityClass( GetCurrentProcess(), REALTIME_PRIORITY_CLASS )
SetThreadPriority( GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL )

''-------------------------------------------------
'' The combination of these two calls will set the
'' process priority to its normal value, 8.
''-------------------------------------------------
SetPriorityClass( GetCurrentProcess(), NORMAL_PRIORITY_CLASS )
SetThreadPriority( GetCurrentThread(), THREAD_PRIORITY_NORMAL )

dodicat
Posts: 7983
Joined: Jan 10, 2006 20:30
Location: Scotland

Re: Timeslice for cpu

Post by dodicat »

Dinosaur
Looks as though you'll not be too impressed with usleep.
Seems that its working range is 1000 µseconds to 999999 µseconds.

Code: Select all


extern "c"
declare function usleep( as ulong) as long '=0 when working
end extern


Function framecounter() As long
    Var t1=Timer,t2=t1
    Static As Double t3,frames,answer
    frames+=1
    If (t2-t3)>=1 Then t3=t2:answer=frames:frames=0
    Function= answer
End Function
screen 19
do
    screenlock
    cls
    draw string(50,50),"fps = " &framecounter
    screenunlock
    'sleep 1
    usleep(1000)
    loop until len(inkey)
 
srvaldez
Posts: 3379
Joined: Sep 25, 2005 21:54

Re: Timeslice for cpu

Post by srvaldez »

dodicat
That may vary between different operating systems; I tested on OS X up to 999999999 (that's over 16 minutes) and it appears to work ok.
Dinosaur
Posts: 1481
Joined: Jul 24, 2005 1:13
Location: Hervey Bay (.au)

Re: Timeslice for cpu

Post by Dinosaur »

Hi All

Michael, unfortunately I don't have a Windows system available to test on, but what is interesting is that Linux
must have some inbuilt protection there.

Running without any sleeps on a 4 core system:
When continuously updating usb IO, the scan time of my program is about 16 microseconds.
Without usb activity it is about 4 microseconds.
The cpu loads (using System Monitor):
CPU1 = 20 to 40%
CPU2 = 20 to 30%
CPU3 = 20 to 30%
CPU4 = 15 to 25%

So, IF I was hogging all the cpu cycles, I would not be able to start any other program, such as System Monitor.

Dodicat, you guessed right, the delay is very unreliable on my system.
On an Acer Laptop running Linux Mint it shows between 800 and 950.
On my Fitlet it sits steady at 562 ???

Thanks to all you guys for the help and suggestions.
Now all I have to do is VERIFY that the actual output is stable, and I will know that when I run it in the factory.

Regards
marcov
Posts: 3462
Joined: Jun 16, 2005 9:45
Location: Netherlands
Contact:

Re: Timeslice for cpu

Post by marcov »

Dinosaur wrote: Thank you for that reply Marcov, but is a sleep actually needed.?
Is polling really needed? First formulate a problem, then start to exclude options.

In general sleep is a second-rate solution, since it still implies a polling concept (you return from sleep without knowing for sure that there is work to be done, and worse, if there is work to be done, it could have come in at 10% of the sleep's period).

Therefore the first option is to work entirely event based.
Dinosaur wrote: I know the load on the cpu is reduced by introducing sleeps (and it will run cooler), but other than that, is there an actual need?
Letting other threads run with lower latency.
Dinosaur wrote: Also, I saw these usleep etc. whilst reading on the subject, but are they accessible from within FB?
All of these are fairly procedural (though the Windows RT timers require a message pump), so if they are not defined for FB, declaring them should not be too hard.
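
For instance, nanosleep() from the links above can be declared by hand in a few lines, much like the usleep declaration shown further up; the timespec layout below follows the usual C definition and is an assumption, not something taken from an FB header:

Code: Select all

type timespec
    tv_sec  as integer   '' seconds (matches C long on Linux)
    tv_nsec as integer   '' nanoseconds
end type

extern "c"
    declare function nanosleep( byval request as timespec ptr, _
                                byval remain as timespec ptr ) as long
end extern

dim as timespec ptr norem    '' null: we don't need the remaining time
dim ts as timespec
ts.tv_sec  = 0
ts.tv_nsec = 500000              '' ask for about half a millisecond
print nanosleep( @ts, norem )    '' 0 when it completed without interruption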

So the main question is how to convert whatever your core workload is into something that generates an event, or allows a blocking read. First solve the core problem, and only then finish the rest. Why do you want to sleep at all in an apparently GUI (and thus event driven) program?
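
To illustrate the event idea in FB terms, here is a bare-bones sketch (not the actual program's structure): a reader thread signals a condition variable once a reply is in, and the consumer blocks in CondWait instead of polling:

Code: Select all

dim shared as any ptr  gLock, gCond
dim shared as integer  gHaveReply

sub readerThread( byval p as any ptr )
    '' a real version would do a blocking read of the port here
    sleep 100, 1                 '' stand-in for the time a reply takes
    mutexlock( gLock )
    gHaveReply = -1
    condsignal( gCond )
    mutexunlock( gLock )
end sub

gLock = mutexcreate()
gCond = condcreate()
dim as any ptr hReader = threadcreate( @readerThread )

'' consumer: sleeps inside the OS until signalled, no polling loop
mutexlock( gLock )
while gHaveReply = 0
    condwait( gCond, gLock )
wend
gHaveReply = 0
mutexunlock( gLock )
print "reply received"

threadwait( hReader )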
Dinosaur
Posts: 1481
Joined: Jul 24, 2005 1:13
Location: Hervey Bay (.au)

Re: Timeslice for cpu

Post by Dinosaur »

Hi All
marcov wrote: First formulate a problem, then start to exclude options.
Marcov, I like the way you think, although let's replace "Formulate" with "Identify",
but that may just be something lost in the translation. :)
marcov wrote: Is polling really needed?
The only place where I recognise that I am polling is:

Code: Select all

If Loc(.Handle) > 0 Then
Is it even possible to make that event driven? And please show me how.
marcov wrote: In general sleep is a second-rate solution,
I agree, and that is why I said that I have never used Sleep statements.
However, I did experience better usb response times by putting in a 1 mSec Sleep.
Ignoring that practice, it should still only mean a possible timing error of 1 mSec.
marcov wrote: Why do you want to sleep at all in an apparently GUI (and thus event driven) program?
Yes, CGUI is totally event driven, but I will try all sorts of bad practices to get the end result I am looking for,
and that is a repeatable response time from a usb transaction.

If my program is the only app running, and I am controlling the traffic on the usb, I don't see WHY I should get variations in response times
from the hardware in the order of 10 to 20 mSec.

I have achieved the sub mSec response on my latest IO hardware:
http://www.online-devices.com/newsitem.aspx?id=46

Then when I tried to duplicate the result on previous hardware:
Adam-4561 (usb to Rs485) to Adam-4055 I/O
it still has that margin of error.
So, perhaps I have to conclude that in this case the external hardware is to blame.

Regards