A multi-threaded process contains two or more parts that can run concurrently. Each part of such a program is called a thread, and each thread defines a separate path of execution.
What is the difference between a thread and a process?
Process:
- Intuitively, a process is nothing more than a program loaded into memory and running, together with the whole run-time environment (all the resources) associated with it.
- In other words, while your program sits on the hard disk of your machine it is just a program (a simple application), but once it is executed (loaded into memory), a set of resources (the memory space occupied, the processor registers used and their status, the owner of the process, the permitted operations, ...) is assigned to it and it changes its name: it becomes a process.
- So to simplify: a process is a program in execution.
Thread:
- A thread is a portion of a process, and the smallest sequential unit of instructions that can be scheduled independently by the operating system's scheduler.
- It is simply an execution unit composed of a set of instructions contained in the process. See it as the smallest task or operation within the process that is manageable by the scheduler.
- A thread shares information such as the data segment, the code segment, ..., with its peer threads spawned by the same base-thread (see below in post), while it has its own registers, stack, ....
- Obviously, a process can be composed of one or more threads; it all depends on the programmer. A process with a single thread is said to be single-threaded; otherwise it is multi-threaded.
What kind of algorithm can be applied to handle critical sections?
A critical section is a part of a multi-threaded program that must be executed as an atomic action (no concurrency with other threads executing similar actions):
- It is a piece of a program that requires mutual exclusion of access.
- Typically, the critical section accesses a shared resource, such as a data structure, a peripheral device, or a network connection, that does not allow multiple concurrent accesses.
When a program starts up, one thread begins running immediately. This is usually called the "main" thread of the program, because it is the one that is executed when the program begins:
- It is the thread from which the user may spawn other "child" threads (which in turn may spawn their own "sub-child" threads).
- Often, it must be the last thread to finish execution because it performs various shutdown actions (just as a "child" thread must do with respect to any "sub-child" threads it has spawned).
- But other than that, it can also compete (with its own critical sections) with all the other threads explicitly spawned by the user.
The method to ensure exclusive execution of a critical section may be designed as either an asynchronous or a synchronous algorithm applied to the threads.
1) ASYNCHRONOUS ALGORITHM EXAMPLES
1.1) Asynchronous algorithm using one flag (a boolean variable) per thread
Setting its own flag to 'true' means that the thread wants to take exclusive control of access to the shared resource (when the shared resource access is ended, the thread resets its own flag to 'false').
1.1.1) Asynchronous wait for expected condition, including a sleep in order to release CPU time
After setting its own flag to 'true', the thread waits until all the other flags are set to 'false' before entering its critical section.
When shared resource access is ended, the thread resets its own flag to 'false'.
Algorithm:
Code: Select all
' my_thread_flag = true
' Do
' | Sleep 5, 1
' Loop Until number_of_thread_flag_true = 1
' .....
' Critical section of code
' .....
' my_thread_flag = false
1.1.2) Asynchronous jump if not expected condition
After setting its own flag to 'true', the thread verifies that all the other flags are set to 'false' before entering its critical section; otherwise the thread skips its critical section and continues.
In both cases (after running its critical section or after skipping it), the thread resets its own flag to 'false'.
Algorithm:
Code: Select all
' my_thread_flag = true
' If number_of_thread_flag_true = 1 Then
' | .....
' | Critical section of code
' | .....
' End If
' my_thread_flag = false
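As a rough sketch of this skip variant in Python (the standard threading module stands in for FreeBASIC threads; all names and counts here are illustrative, not from the original post). Because each thread raises its own flag before reading the others', two threads can never both observe that they are alone, so no increment is lost; a thread that sees another flag raised simply skips its turn:

```python
import threading
import time

NUM_THREADS = 3
ATTEMPTS = 50

flags = [False] * NUM_THREADS   # one flag per thread
shared_counter = 0              # the shared resource
successes = [0] * NUM_THREADS   # how many times each thread really entered

def worker(my_id):
    global shared_counter
    for _ in range(ATTEMPTS):
        flags[my_id] = True            # my_thread_flag = true
        if sum(flags) == 1:            # am I the only one asking?
            shared_counter += 1        # critical section of code
            successes[my_id] += 1
        # otherwise: skip the critical section and continue
        flags[my_id] = False           # my_thread_flag = false
        time.sleep(0.001)              # simulate other work between attempts

def run_demo():
    global shared_counter
    shared_counter = 0
    flags[:] = [False] * NUM_THREADS
    successes[:] = [0] * NUM_THREADS
    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(NUM_THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # if mutual exclusion held, no increment was lost
    return shared_counter, sum(successes)

if __name__ == "__main__":
    print(run_demo())
```

Note that a thread may skip many attempts under contention; this variant trades guaranteed progress for never blocking.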
1.1.3) Asynchronous repeat awake then sleep, until expected condition
After setting its own flag to 'true', the thread verifies that all the other flags are set to 'false' before entering its critical section; otherwise the thread resets its own flag to 'false' and sleeps before making a new attempt.
When shared resource access is ended, the thread resets its own flag to 'false'.
Algorithm:
Code: Select all
' Do
' | my_thread_flag = true
' | If number_of_thread_flag_true = 1 Then
' | | Exit Do
' | End If
' | my_thread_flag = false
' | Sleep 5, 1
' Loop
' .....
' Critical section of code
' .....
' my_thread_flag = false
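The retry-with-back-off variant above can be sketched in Python as follows (illustrative names; FreeBASIC's Sleep replaced by time.sleep). Lowering the flag before sleeping is what avoids the deadlock that would occur if two threads raised their flags simultaneously and both waited:

```python
import threading
import time

NUM_THREADS = 3
ROUNDS = 10

flags = [False] * NUM_THREADS   # one flag per thread
shared_counter = 0              # the shared resource

def worker(my_id):
    global shared_counter
    for _ in range(ROUNDS):
        while True:
            flags[my_id] = True        # my_thread_flag = true
            if sum(flags) == 1:        # only my flag raised: enter
                break
            flags[my_id] = False       # back off ...
            time.sleep(0.001)          # ... and release CPU before retrying
        shared_counter += 1            # critical section of code
        flags[my_id] = False           # my_thread_flag = false

def run_demo():
    global shared_counter
    shared_counter = 0
    flags[:] = [False] * NUM_THREADS
    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(NUM_THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return shared_counter

if __name__ == "__main__":
    print(run_demo())
```

Every increment happens inside the flag-protected section, so the final counter equals the total number of critical-section entries.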
1.2) Asynchronous algorithm using one mutex for all threads
By acquiring the mutex lock, the thread takes exclusive control of access to the shared resource.
When shared resource access is ended, the thread unlocks the mutex.
Algorithm:
Code: Select all
' Mutexlock
' | .....
' | Critical section of code
' | .....
' Mutexunlock
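A minimal Python sketch of the single-mutex algorithm (threading.Lock plays the role of FreeBASIC's Mutexcreate/Mutexlock/Mutexunlock; thread and iteration counts are arbitrary):

```python
import threading

NUM_THREADS = 4
ITERATIONS = 10_000

mutex = threading.Lock()   # one mutex shared by all threads
shared_counter = 0         # the shared resource

def worker():
    global shared_counter
    for _ in range(ITERATIONS):
        with mutex:                  # Mutexlock
            shared_counter += 1      # critical section of code
        # leaving the 'with' block is the Mutexunlock

def run_demo():
    global shared_counter
    shared_counter = 0
    threads = [threading.Thread(target=worker) for _ in range(NUM_THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return shared_counter

if __name__ == "__main__":
    print(run_demo())
```

Unlike the flag algorithms, the blocked threads are parked by the operating system instead of polling, so no explicit sleep is needed.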
2) SYNCHRONOUS ALGORITHM EXAMPLES
2.1) Synchronous algorithm using a priority number among the threads
The thread which has the priority runs its critical section, then passes the priority to the next thread.
2.1.1) Synchronous wait for expected condition, including a sleep in order to release CPU time
The thread waits for its turn before executing its critical section.
When shared resource access is ended, the thread passes the priority to the next thread in the thread number list.
Algorithm:
Code: Select all
' While thread_priority_number <> my_number
' | Sleep 5, 1
' Wend
' .....
' Critical section of code
' .....
' thread_priority_number = next thread_priority_number
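A Python sketch of this turn-passing loop (illustrative names and counts; each thread runs its critical section several times, always in thread-number order):

```python
import threading
import time

NUM_THREADS = 3
ROUNDS = 3

thread_priority_number = 0   # whose turn it is
order = []                   # records the sequence of critical-section entries

def worker(my_number):
    global thread_priority_number
    for _ in range(ROUNDS):
        while thread_priority_number != my_number:
            time.sleep(0.001)               # Sleep, to release CPU time
        order.append(my_number)             # critical section of code
        # pass the priority to the next thread in the thread number list
        thread_priority_number = (my_number + 1) % NUM_THREADS

def run_demo():
    global thread_priority_number
    thread_priority_number = 0
    order.clear()
    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(NUM_THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return order

if __name__ == "__main__":
    print(run_demo())
```

Only the turn-holder ever writes the priority number, so no extra lock is needed around it here.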
2.1.2) Synchronous wait for expected condition, including a condwait then a condbroadcast (and mutex) in order to release CPU time
The thread waits for its turn and also for a condition signal (condwait) before executing its critical section.
When shared resource access is ended, the thread passes the priority to the next thread in the thread number list, then sends a condition signal to all other threads (condbroadcast).
Algorithm:
Code: Select all
' Mutexlock
' | While thread_priority_number <> my_number
' | | Condwait
' | Wend
' | .....
' | Critical section of code
' | .....
' | thread_priority_number = next thread_priority_number
' | Condbroadcast
' Mutexunlock
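In Python the same condwait/condbroadcast pattern can be sketched with threading.Condition, which bundles the mutex and the condition variable together (names and counts are illustrative):

```python
import threading

NUM_THREADS = 3
ROUNDS = 3

cond = threading.Condition()   # bundles the mutex and the condition variable
thread_priority_number = 0     # whose turn it is
order = []                     # records the sequence of critical-section entries

def worker(my_number):
    global thread_priority_number
    for _ in range(ROUNDS):
        with cond:                                   # Mutexlock
            while thread_priority_number != my_number:
                cond.wait()                          # Condwait (frees the mutex while blocked)
            order.append(my_number)                  # critical section of code
            thread_priority_number = (my_number + 1) % NUM_THREADS
            cond.notify_all()                        # Condbroadcast
        # leaving the 'with' block is the Mutexunlock

def run_demo():
    global thread_priority_number
    thread_priority_number = 0
    order.clear()
    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(NUM_THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return order

if __name__ == "__main__":
    print(run_demo())
```

Compared with 2.1.1, the waiting threads consume no CPU at all: they sleep inside cond.wait() until the broadcast wakes them to re-test the turn number.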
2.2) Synchronous algorithm using a mutex for each thread, by self lock and mutual unlock
When one thread has run its critical section, it unlocks the mutex of the next thread and attempts to re-obtain its own mutex.
At initialization all the mutexes are locked, and the main thread enters its critical section directly (the main thread's code is slightly different from that of the other threads, with the self-lock moved to the end).
Algorithm for main thread (#0):
Code: Select all
' | .....
' | Critical section of code
' | .....
' Mutexunlock(next thread mutex (#0+1))
' Mutexlock(own thread mutex (#0))
Algorithm for the other threads (#N):
Code: Select all
' Mutexlock(own thread mutex (#N))
' | .....
' | Critical section of code
' | .....
' Mutexunlock(next thread mutex (#N+1))
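This relay of per-thread mutexes can be sketched in Python as below (illustrative names; two rounds per thread). It relies on the fact that a plain threading.Lock, unlike an RLock, may be released by a different thread than the one that acquired it:

```python
import threading

NUM_THREADS = 3
ROUNDS = 2

locks = []    # one mutex per thread, (re)built by run_demo()
order = []    # records the sequence of critical-section entries

def main_worker():
    # thread #0: critical section first, self lock pushed at the end
    for _ in range(ROUNDS):
        order.append(0)                # critical section of code
        locks[1].release()             # Mutexunlock(next thread mutex (#0+1))
        locks[0].acquire()             # Mutexlock(own thread mutex (#0))

def worker(my_number):
    nxt = (my_number + 1) % NUM_THREADS
    for _ in range(ROUNDS):
        locks[my_number].acquire()     # Mutexlock(own thread mutex (#N))
        order.append(my_number)        # critical section of code
        locks[nxt].release()           # Mutexunlock(next thread mutex (#N+1))

def run_demo():
    order.clear()
    locks[:] = [threading.Lock() for _ in range(NUM_THREADS)]
    for lk in locks:
        lk.acquire()                   # at initialization all mutexes are locked
    threads = [threading.Thread(target=main_worker)]
    threads += [threading.Thread(target=worker, args=(i,))
                for i in range(1, NUM_THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return order

if __name__ == "__main__":
    print(run_demo())
```

The last thread in the chain releases thread #0's mutex, which lets the main worker's trailing acquire succeed and the round-robin continue (or terminate after the final round).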
See all the examples in the following post.