
lock (sharedObj1)
{
    ...
    lock (sharedObj2)
    {
        ...
    }
}
Note that the order of the locks in the Thread2Work method has been changed to match the order in Thread1Work. First a lock is acquired on sharedObj1, then a lock is acquired on sharedObj2.
Here is the revised version of the complete code listing:
class DeadlockDemo
{
    private static readonly object sharedObj1 = new();
    private static readonly object sharedObj2 = new();

    public static void Execute()
    {
        Thread thread1 = new Thread(Thread1Work);
        Thread thread2 = new Thread(Thread2Work);
        thread1.Start();
        thread2.Start();
        thread1.Join();
        thread2.Join();
        Console.WriteLine("Finished execution.");
    }

    static void Thread1Work()
    {
        lock (sharedObj1)
        {
            Console.WriteLine("Thread 1 has acquired a lock on shared resource 1. " +
                "It is now waiting to acquire a lock on resource 2.");
            Thread.Sleep(1000);
            lock (sharedObj2)
            {
                Console.WriteLine("Thread 1 acquired a lock on resource 2.");
            }
        }
    }

    static void Thread2Work()
    {
        lock (sharedObj1)
        {
            Console.WriteLine("Thread 2 has acquired a lock on shared resource 1. " +
                "It is now waiting to acquire a lock on resource 2.");
            Thread.Sleep(1000);
            lock (sharedObj2)
            {
                Console.WriteLine("Thread 2 acquired a lock on resource 2.");
            }
        }
    }
}
Refer to the original and revised code listings. In the original listing, the threads Thread1Work and Thread2Work immediately acquire locks on sharedObj1 and sharedObj2, respectively. Then Thread1Work is suspended until Thread2Work releases sharedObj2. Similarly, Thread2Work is suspended until Thread1Work releases sharedObj1. Because the two threads acquire locks on the two shared objects in opposite orders, the result is a circular dependency and hence a deadlock.
In the revised listing, the two threads acquire locks on the two shared objects in the same order, which rules out any possibility of a circular dependency. Thus the revised code listing shows how you can resolve a deadlock scenario in your application by ensuring that all threads acquire locks in a consistent order.
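One way to make that ordering rule harder to break is to route every acquisition of both locks through a single helper, so no thread can ever take the locks in the opposite order. The following is a minimal sketch of that idea, not part of the article's code; the WithBothLocks helper and its names are illustrative assumptions.

using System;
using System.Threading;

class OrderedLocking
{
    private static readonly object resourceA = new();
    private static readonly object resourceB = new();

    // Every caller acquires resourceA before resourceB, so opposite orderings cannot occur.
    private static void WithBothLocks(Action criticalSection)
    {
        lock (resourceA)
        {
            lock (resourceB)
            {
                criticalSection();
            }
        }
    }

    public static void Execute()
    {
        Thread thread1 = new Thread(() => WithBothLocks(() =>
            Console.WriteLine("Thread 1 is using both resources.")));
        Thread thread2 = new Thread(() => WithBothLocks(() =>
            Console.WriteLine("Thread 2 is using both resources.")));

        thread1.Start();
        thread2.Start();
        thread1.Join();
        thread2.Join();
    }
}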
Best practices for thread synchronization
While it is often necessary to synchronize access to shared resources in an application, you must use thread synchronization with care. By following Microsoft's best practices you can avoid deadlocks when working with thread synchronization. Here are a few things to keep in mind:
- When using the lock keyword, or the System.Threading.Lock object in C# 13, use an object of a private or protected reference type to identify the shared resource. The object used to identify a shared resource can be any arbitrary class instance (see the sketch after this list).
- Avoid using immutable types in your lock statements. For example, locking on string objects may cause deadlocks due to interning (because interned strings are essentially global).
- Avoid using a lock on an object that is publicly accessible.
- Avoid using statements like lock(this) to implement synchronization. If the this object is publicly accessible, a deadlock could result.
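The sketch below illustrates these guidelines: a private, dedicated lock object rather than lock(this), a public object, or a string. It assumes .NET 9 and C# 13 for the System.Threading.Lock field; the Account class and its members are illustrative, not from the article.

using System.Threading;

class Account
{
    // A private, dedicated lock object that no outside code can take a lock on.
    private readonly object balanceLock = new();

    // In C# 13 / .NET 9, the dedicated System.Threading.Lock type can be used instead of object.
    private readonly Lock auditLock = new();

    private decimal balance;

    public void Deposit(decimal amount)
    {
        lock (balanceLock)   // not lock(this), not a public object, not a string
        {
            balance += amount;
        }
    }

    public void Audit()
    {
        lock (auditLock)     // the lock statement recognizes System.Threading.Lock
        {
            // ... read state for auditing ...
        }
    }
}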
Note that you can use immutable types to implement thread safety without having to write code that uses the lock keyword. Another way to achieve thread safety is to use local variables to confine your mutable data to a single thread. Local variables and objects are always confined to one thread. In other words, because shared data is the root cause of race conditions, you can eliminate race conditions by confining your mutable data. However, confinement defeats the purpose of multithreading, so it will be useful only in certain cases.
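The following is a minimal sketch, not part of the article's code, of these two lock-free approaches: sharing only immutable data between threads, and confining mutable data to local variables. The Settings record and field names are illustrative assumptions.

using System;
using System.Threading;

class ConfinementDemo
{
    // An immutable record: once constructed it never changes, so any number of
    // threads can read it safely without locks.
    private record Settings(int Retries, TimeSpan Timeout);

    private static readonly Settings shared = new(Retries: 3, Timeout: TimeSpan.FromSeconds(5));

    public static void Execute()
    {
        Thread worker = new Thread(() =>
        {
            // Confinement: 'total' is a local variable visible only to this thread,
            // so mutating it can never cause a race condition.
            int total = 0;
            for (int i = 0; i < shared.Retries; i++)
            {
                total += i;
            }
            Console.WriteLine($"Worker computed {total} using immutable settings.");
        });

        worker.Start();
        worker.Join();
    }
}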