Hello. In my project I use EntityFramework. There is a server to which clients connect:

- Each connection runs in a separate thread.
- In each connection, data is read from the database, modified, and written back (in one transaction).

I ran into the problem that two users can change the same data and get a race condition: both read A=1 and do A++, so A=2 is written instead of A=3 (one of the two increments is lost).

The only solution I have found is to allow only a single connection, but in that case user 2 has to wait for user 1.

What is the way out of this situation?

    2 answers

    In EF, this problem is solved by adding a concurrency token - for example, a value that changes every time the row is modified. Then, when saving, each user's code will check whether the row has changed since it was loaded:

     using System.ComponentModel.DataAnnotations;

     public class Department
     {
         public int DepartmentID { get; set; }

         [StringLength(50, MinimumLength = 3)]
         public string Name { get; set; }

         // field for tracking when the row was last changed
         [Timestamp]
         public byte[] RowVersion { get; set; }
     }

    or, instead of the [Timestamp] attribute, with the fluent configuration:

     modelBuilder.Entity<Department>()
         .Property(p => p.RowVersion)
         .IsConcurrencyToken();

    EF will add a check that RowVersion has not changed to every generated UPDATE and DELETE statement:

     UPDATE ... SET A = 2 WHERE Id = @id AND RowVersion = @originalRowVersion

    The second possible solution is to put the [ConcurrencyCheck] attribute on the "dangerous" properties. Then, on Update / Delete, the values of the marked properties will be checked. That is, in your case the update will look like

     UPDATE ... SET A = 2 WHERE ID = @id AND A = 1 
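
    For illustration, a minimal sketch of such an entity (the Counter / A names are illustrative, not taken from the question):

     public class Counter
     {
         public int Id { get; set; }

         // [ConcurrencyCheck] (System.ComponentModel.DataAnnotations) makes EF add
         // the property's original value to the WHERE clause of UPDATE / DELETE.
         [ConcurrencyCheck]
         public int A { get; set; }
     }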

    The first user's update will succeed. For the second, SQL will update 0 rows; EF will detect this and throw a DbUpdateConcurrencyException or OptimisticConcurrencyException, which you will have to handle (for example, by retrying the second user's action).
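
    A minimal sketch of such handling with EF6, assuming a Counter entity with an int property A as above (the retry simply reloads the fresh values and re-applies the increment):

     // requires: using System.Linq; using System.Data.Entity.Infrastructure;
     bool saved = false;
     while (!saved)
     {
         try
         {
             context.SaveChanges();
             saved = true;
         }
         catch (DbUpdateConcurrencyException ex)
         {
             // Someone else changed the row: reload its current values
             // and re-apply our increment before trying to save again.
             var entry = ex.Entries.Single();
             entry.Reload();
             ((Counter)entry.Entity).A++;
         }
     }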

    The mechanism itself and the details of handling it are described in great detail in the standard tutorial Handling Concurrency with the Entity Framework 6.

    • And how does EF handle pessimistic (that is, essentially the familiar kind of) concurrency? The article has only general words about it. - VladD
    • We need a solution at the transaction or context level - GLeBaTi
    • @VladD It doesn't, really. Pessimistic is the worst variant of concurrency control and is hardly applicable anywhere, because in the worst case it effectively turns the application into a single-user one. So if it is "familiar", that is only for WinForms / WPF. On the web, pessimistic locking is practically impossible to implement properly, because a user can simply walk away and never finish, which causes deadlock-like problems with locks that are never released. - PashaPash
    • @GLeBaTi At the level of a plain transaction this cannot be done - none of the standard isolation levels forbids concurrent reads. In pure SQL it is done with an ordinary SELECT ... WITH (UPDLOCK) or HOLDLOCK, and with the corresponding workarounds if you use EF. - PashaPash
    • @GLeBaTi There is also a rather ugly variant with a forced fake update of the row (or of an auxiliary table) before selecting it, just to take a lock. Or with creating a special lock table and issuing something like SELECT NULL FROM TestTable WITH (UPDLOCK) before reading the data (see the sketch after these comments). But this is essentially a crutch and by no means a best practice. - PashaPash
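
    A rough sketch of that pessimistic-lock workaround with EF6 (MyDbContext, the Counters table/DbSet and the id variable are assumptions, not code from the question):

     using (var context = new MyDbContext())
     using (var tx = context.Database.BeginTransaction())
     {
         // Pessimistic lock: take an update lock on the row before reading it,
         // so a competing transaction blocks here instead of racing on A++.
         context.Database.ExecuteSqlCommand(
             "SELECT NULL FROM Counters WITH (UPDLOCK) WHERE Id = @p0", id);

         var counter = context.Counters.Find(id);
         counter.A++;
         context.SaveChanges();

         tx.Commit(); // the lock is released when the transaction ends
     }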

    If you use IsolationLevel.Serializable for the transaction, then the changes of the thread that commits first are applied. The latecomer gets an exception (saying that the resource has been changed by another transaction).

    (In the screenshot, on the left, thread 1 simply terminated, because it is a background thread.)
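
    A rough sketch of that approach with EF6 (MyDbContext / Counters are assumed names); each thread does its read-modify-write inside a Serializable transaction and treats a failed commit as a signal to retry:

     // requires: using System.Data; (for IsolationLevel)
     using (var context = new MyDbContext())
     using (var tx = context.Database.BeginTransaction(IsolationLevel.Serializable))
     {
         var counter = context.Counters.Find(id);
         counter.A++;
         context.SaveChanges();
         // The thread that commits first wins; the latecomer gets an exception
         // (for example, it is chosen as a deadlock victim) and should retry the block.
         tx.Commit();
     }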

    • The same behavior applies to SNAPSHOT ISOLATION, which is more predictable in terms of locking. - PashaPash