Do I need a lock in C# when reading/writing a variable?

Code example:

    Int i = 0;
    Timer timer = new Timer();
    timer.Interval = 500;
    timer.backFunc += readVar;
    Thread th = new Thread();
    th.backFunc += setVar;
    th.Run();
    timer.Run();

    void readVar() { Console.WriteLine(i); }
    void setVar()  { i++; }

    4 answers

    Update:

    Look, there is a simple rule: if a variable is accessed from several threads (it does not matter whether for reading or for writing), every such access needs to be protected with a lock. Violating this rule does not always produce an immediately visible wrong result; much worse, the incorrect result may show up only very rarely. As a consequence you get hard-to-reproduce bugs.

    This rule holds even when the variable is “small” (for example, an int) and can be written in a single operation. For the exact reasons, see the answer below, the other answers, and the discussion in the comments.

    In your case, the timer callbacks and the code that modifies the variable do not run on the same thread, so a lock is needed.

    This rule is not absolute: there are cases where a lock can indeed be avoided. But to do that you need to understand the subtleties of modern processors, optimizers and the specific hardware, and it is very easy to get it wrong. So my advice is: do not try to avoid locks at all costs, the game is usually not worth the candle. Worse still, testing will often not reveal the problems in such code.
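    Applied to the code from the question, the rule looks roughly like this (a minimal sketch; the Counter class and the _sync field are my own names, not part of the original code):

    class Counter
    {
        private int i;
        private readonly object _sync = new object();   // guards every access to i

        public void readVar()
        {
            int copy;
            lock (_sync)          // the reader takes the same lock as the writer
                copy = i;
            Console.WriteLine(copy);
        }

        public void setVar()
        {
            lock (_sync)          // the read-modify-write of i++ happens entirely under the lock
                i++;
        }
    }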


    An important case where locks are not needed is using the TPL and async/await for concurrent access without offloading the work to another thread.

    Example:

    int x = 0;

    async Task RunVariableChange(int howManyTimes)
    {
        for (int i = 0; i < howManyTimes; i++)
        {
            await Task.Delay(500);
            x++;
        }
    }

    async Task ReadVariable(CancellationToken ct)
    {
        try
        {
            while (true)
            {
                await Task.Delay(750, ct);
                Console.WriteLine(x);
            }
        }
        catch (TaskCanceledException)
        {
            // do nothing, this exception is expected
        }
    }

    // usage (*)
    var cts = new CancellationTokenSource();
    var readTask = ReadVariable(cts.Token);
    await RunVariableChange(20);
    cts.Cancel();
    await readTask;

    If the code at (*) runs on the UI thread, then every access to the variable happens on that thread, and no lock is needed. In fact, there is no multithreaded code here at all! (However, locks are needed again if the tasks are launched off the UI thread, for example via Task.Run.)
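    For contrast, a rough sketch of the Task.Run variant (the sync object and the loop counts are my own choices, not from the answer), where the variable really is touched from several threads and therefore does need the lock:

    int x = 0;
    object sync = new object();                 // protects x

    // The increment loop now runs on thread-pool threads,
    // so x is accessed from more than one thread.
    var changer = Task.Run(async () =>
    {
        for (int n = 0; n < 20; n++)
        {
            await Task.Delay(500);
            lock (sync) { x++; }                // writer side
        }
    });

    for (int n = 0; n < 10; n++)
    {
        await Task.Delay(750);
        int snapshot;
        lock (sync) { snapshot = x; }           // reader side takes the same lock
        Console.WriteLine(snapshot);
    }

    await changer;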

    In addition, using a local variable in an async method is safe even if the method “jumps” from thread to thread, since every await is a synchronization point.

    (Therefore, try to use a “functional” approach: work with local rather than shared variables, take input as parameters, and hand back the result as the return value of the function; this reduces the need for locks.)
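    A sketch of what that might look like for the example above (the method name CountChanges is hypothetical): the counter is a local variable, the input arrives as a parameter, and the result leaves as the return value, so no other code ever observes the intermediate state:

    async Task<int> CountChanges(int howManyTimes)
    {
        int local = 0;                 // visible only inside this method
        for (int n = 0; n < howManyTimes; n++)
        {
            await Task.Delay(500);     // the method may resume on a different thread here...
            local++;                   // ...but each await is a synchronization point, so this is safe
        }
        return local;                  // the caller gets the finished result
    }

    // usage:
    int result = await CountChanges(20);
    Console.WriteLine(result);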


    Old answer:

    Yes.

    If the code modifies a variable, then multithreaded access to it requires the protection of a memory barrier, for example a lock.

    Otherwise, both the optimizer and the processor's cache are entitled to assume that the variable does not change.


    An example with the processor cache: say thread #1 runs on processor #1 and thread #2 runs on processor #2. Thread #1 writes a new value of the variable. That value lands in the cache of processor #1 but is not immediately “flushed” to main memory, since synchronizing with memory is a very slow process. Now thread #2 reads the value through the cache of processor #2, and the value there is stale!

    Forcing a “flush” of the processor cache to memory is one of the effects that lock produces.


    Why is this effect rarely seen in practice? Because a cache flush can also happen for external reasons, for example on a context switch.


    I hope I understood the purpose of your code correctly; I had to guess, because neither Thread nor Timer has an event called backFunc.

    • 2
      In the case of the original example (one writer, one reader), it suffices to mark the field volatile. - andreycha
    • 2
      @AlexanderPetrov: No, lock does much more, for example a memory barrier. Do read the articles mentioned in the comments. - VladD
    • 1
      @z668: I've updated the answer. - VladD
    • 1
      @VladD, judging by what I have seen here and on other resources, most developers do not know about this (extremely important, by the way) aspect. Almost everyone uses plain fields from multiple threads without any synchronization. I think this nuance deserves to be split out into a separate article so that the majority stop repeating these mistakes. - Alexis
    • 1
      @z668: The x86 platform forgives many mistakes, so many errors go unnoticed. Writing it up separately makes sense, yes. - VladD

    For some reason the previous answers dove straight into the jungle of memory models, which hardly anyone really understands anyway. IMHO there is a much simpler reason why synchronization is needed when incrementing a variable.

    The point is that i++ is not an atomic operation but a triple: read-modify-write. Let's rewrite the setVar method as follows:

    void setVar()
    {
        var tempI = i;
        // <-- the gap: another thread can run right here!
        i = tempI + 1;
    }

    Now it should be clear that if there is more than one thread (and the presence of a timer already implies more than one thread), the OS can switch the execution context at the line marked “the gap”. The result is the following:

    Initial moment in time: i == 42
    Thread 1: reads i, stores 42 in tempI
              the context switches to the other thread
    Thread 2: reads i, stores 42 in tempI
              writes tempI + 1 to i, so i is now 43
              the context switches back to the first thread
    Thread 1: (its tempI still holds the same old value, 42)
              writes tempI + 1 to i, so i equals 43!

    This is how we lost one increment.

    So our task is to make the increment, i.e. the read-modify-write operation, atomic (that is, indivisible from the point of view of the executing threads). We also need the threads to read the most up-to-date value each time, but in simple cases (without any lock-free magic, which, judging by the question, is not worth bothering with yet) both problems are solved in one go.

    Most modern runtimes provide libraries for atomically incrementing integers. In the .NET Framework there is the Interlocked class, which contains a number of such methods, in particular Increment(ref int), which is what we need here.

    An alternative is to wrap access to the shared variable in a synchronization primitive, for example the lock construct.

    Here is an example that shows why synchronization is important:

    private static int _lockLessCount;
    private static int _lockFreeCount;
    private static int _lockBasedCount;
    private static object _countLock = new object();

    static void Main(string[] args)
    {
        int numberOfIterations = 1000000;

        var t1 = Task.Run(() =>
        {
            for (int n = 0; n < numberOfIterations; n++)
                _lockLessCount++;
        });
        var t2 = Task.Run(() =>
        {
            for (int n = 0; n < numberOfIterations; n++)
                _lockLessCount++;
        });
        Task.WaitAll(t1, t2);

        var t3 = Task.Run(() =>
        {
            for (int n = 0; n < numberOfIterations; n++)
                Interlocked.Increment(ref _lockFreeCount);
        });
        var t4 = Task.Run(() =>
        {
            for (int n = 0; n < numberOfIterations; n++)
                Interlocked.Increment(ref _lockFreeCount);
        });
        Task.WaitAll(t3, t4);

        var t5 = Task.Run(() =>
        {
            for (int n = 0; n < numberOfIterations; n++)
                lock (_countLock) { _lockBasedCount++; }
        });
        var t6 = Task.Run(() =>
        {
            for (int n = 0; n < numberOfIterations; n++)
                lock (_countLock) { _lockBasedCount++; }
        });
        Task.WaitAll(t5, t6);

        Console.WriteLine("Iterations: {0}", numberOfIterations);
        Console.WriteLine("LockLessCount: {0}", _lockLessCount);
        Console.WriteLine("LockFreeCount: {0}", _lockFreeCount);
        Console.WriteLine("LockBasedCount: {0}", _lockBasedCount);
        Console.WriteLine("Done!");
    }

    If synchronization were not needed, we would see the same value for all three counters when running this, but that is not the case.

    Here is the output from my local computer:

    Iterations: 1000000
    LockLessCount: 1611802
    LockFreeCount: 2000000
    LockBasedCount: 2000000
    Done!

    As you can see, with “plain” access we lost almost 400 thousand increments!

    • Interlocked.Increment guarantees only atomicity. It will not propagate changes to another thread. Here is Eric's article on why synchronization is also needed when reading: blog.coverity.com/2014/03/12/can-skip-lock-reading-integer - VladD
    • Yes, I don't argue with that :) Well, I did miss the reading aspect. My point is that, judging by the level of the question, it makes no sense to bring the memory model into the answers; the OP first needs to be given an understanding of the problem. - Sergey Teplyakov
    • 1
      Point taken! I'll update the answer. - VladD

    No, lock is absolutely not required in your code. One operation does still need to be changed, though: the setVar method should look like this:

    void setVar()
    {
        Interlocked.Increment(ref i);
    }

    With this change, your code is protected from races.


    The only drawback of this approach is the following: suppose setVar runs on thread 1 (T1) while readVar runs on thread 2 (T2). Suppose setVar has already been called 5 times and moved i through the values [1,2,3,4,5]. Now readVar gets its turn and may read any of those values. The only restriction is that readVar cannot read a value older than one already read on that thread. For example, if the first call to readVar read 3, then the next call can only read one of [3,4,5] and no other value.

    But that is all in theory: on x86 and amd64 processors readVar always reads the latest value. On ARM it does not always. .NET does not run anywhere else, as far as I know.
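    If you do not want to rely on that platform-specific behavior, the reading side can be made explicit with System.Threading.Volatile (a sketch only; this is the low-lock variant the comments below refer to as VolatileRead):

    void readVar()
    {
        // Volatile.Read has acquire semantics: the JIT may not cache i in a register,
        // and operations that follow cannot be reordered before this read.
        Console.WriteLine(Volatile.Read(ref i));
    }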

    lock serializes access to the variable, so it guarantees that a read always observes the last value written under the lock (provided, of course, that all write operations are also done under the lock). This is not free, and in general code that uses lock may be slower. But it is easier to reason about.

    • Wrong. If you want to go down to that low level, you need at least a VolatileRead as well. - VladD
    • @VladD, what's wrong? The above code is completely correct. Yes, there is no guarantee of reading the last value, but was this required? - ixSci
    • Interlocked.Increment, according to the documentation, makes the increment atomic. It does not automatically lead to cache coherence. - VladD
    • @VladD, what does coherence have to do with it? What are we synchronizing? We have one variable that needs to be incremented and read, with T1 and T2, where the increment happens on T1 and the reads on T2 (T2 after T1). We have no guarantee that the result of T1 will be read on T2 (actually, on x86 we do, so this situation is only possible on ARM among the platforms .NET supports). But sooner or later the result of T1 will reach readVar() anyway (and fairly quickly). - ixSci
    • ... and if it is not fast enough, should we insert a Sleep(100)? - VladD

    In Sergey Teplyakov's last example I also tried volatile (presumably on the counter field; see the sketch at the end of this answer) and got:

    Iterations: 1000000
    LockLessCount: 1483306
    LockFreeCount: 2000000
    LockBasedCount: 2000000
    LockVolatileCount: 65866
    Done!

    I also tried putting a breakpoint on Console.WriteLine("Done!") and saw:

    LockLessCount: 2000000
    LockVolatileCount: 137415

    IMHO:

    1. The debugger does not catch such errors.

    2. volatile seems to help only with reads from another thread (a sketch of the presumed setup follows below).
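    My reconstruction of that extra measurement (an assumption, not the original code): the counter field is simply declared volatile and still incremented with ++, which remains a non-atomic read-modify-write, so increments can still be lost just as in the lockless case:

    private static volatile int _lockVolatileCount;

    // presumably added after Task.WaitAll(t5, t6) in Sergey Teplyakov's example
    var t7 = Task.Run(() =>
    {
        for (int n = 0; n < numberOfIterations; n++)
            _lockVolatileCount++;   // volatile affects visibility/ordering only; ++ is still three steps
    });
    var t8 = Task.Run(() =>
    {
        for (int n = 0; n < numberOfIterations; n++)
            _lockVolatileCount++;
    });
    Task.WaitAll(t7, t8);
    Console.WriteLine("LockVolatileCount: {0}", _lockVolatileCount);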