The situation is as follows:

```php
$transaction = Yii::app()->db->beginTransaction();
$condition = $model->getCondition(); // an ordinary SELECT over a bunch of tables
if ($condition) {
    // ... a pile of function calls, API calls, changes to the user's balance;
    // as a result, $model->getCondition() will no longer return the same result
}
$transaction->commit();
```

But if the user fires 10-100 requests at the same time, the code inside `if ($condition) {}` gets executed several times, which is unacceptable. So I need a read lock inside the transaction, so that the second request waits until the first transaction completes and only then gets a response to `$model->getCondition()` — an already updated one. I re-read a bunch of material in the manual and found only that `transaction-isolation = serializable` should be set in the MySQL configuration. But that did not help :( So the question is: what do I do?

P.S. The tables are InnoDB, of course.

  • Have you tried SELECT ... FOR UPDATE? - Zhukov Roman
  • I have. It does not help: if you click quickly a few times, the code inside the if can still run twice. - anagamin
  • As one option, you could lock the tables with LOCK TABLES before starting all the operations and unlock them afterwards. But there is a major drawback: all users' read queries will be blocked as well. - Finies
  • Is it possible to lock only the rows that participate in the query (even when the result is empty, the query spans several tables, uses GROUP BY, and so on)? And can this be automated somehow? That is, I don't always know which tables a transaction touches, and I'd rather not smear locking code everywhere — just declare something once at the beginning? - anagamin
  • And anyway, why don't transactions work the way they are supposed to? SERIALIZABLE isolation seems like it should do exactly what I need, no? Why doesn't it? - anagamin
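One likely reason SELECT ... FOR UPDATE "did not help" here (an assumption — the actual query is not shown in the question): InnoDB row locks attach only to the index records the SELECT actually reads, and the gap locks taken over an empty range are compatible between transactions. So a condition computed from an aggregate over several tables, or from an empty result set, can succeed in two connections at once. The `payments` table and `$userId` below are made up for illustration:

```php
<?php
// Hypothetical illustration: FOR UPDATE locks the rows it reads, nothing more.
// If getCondition() boils down to something like this, two parallel
// connections can both run the SELECT when the matching row set is empty.
$count = Yii::app()->db->createCommand(
    'SELECT COUNT(*) FROM payments WHERE user_id = :id FOR UPDATE'
)->queryScalar([':id' => $userId]);
// A locking read over zero rows leaves only gap locks, and gap locks held by
// two transactions over the same gap do not conflict with each other —
// so both requests can see $count == 0 and both enter the if-branch.
```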

1 answer

The problem here is not transaction isolation in the database. If the requests arrive at the same time, several SELECTs (as many as there are connections to the database) are launched simultaneously, and all of them return the result from the as-yet-unchanged database. So the condition holds for all of them, and they all start changing the database at once. To avoid this you need double-checked locking with a manual lock (either in the database or in memory). After checking the condition, the thread should try to acquire the lock (which is a unique resource, a monitor), then check whether it succeeded — and only if it did, continue with the data changes, not forgetting to release the unique resource in a finally block.
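The double-checked locking described above can be sketched with MySQL's named advisory locks, GET_LOCK()/RELEASE_LOCK(), which need no extra files or services. This is a sketch, not the answerer's exact code: the lock name, the 10-second timeout, and the try/finally shape (PHP 5.5+) are my assumptions:

```php
<?php
// Sketch: double-checked locking via a MySQL named lock.
// $model, $userId and the lock name are assumptions, not from the question.
$lockName = 'balance_op_user_' . $userId; // one advisory lock per user

$transaction = Yii::app()->db->beginTransaction();
try {
    if ($model->getCondition()) {                        // first (cheap) check
        // Wait up to 10 s for the named lock; GET_LOCK returns 1 on success.
        $got = Yii::app()->db
            ->createCommand('SELECT GET_LOCK(:name, 10)')
            ->queryScalar([':name' => $lockName]);
        if ($got == 1) {
            try {
                if ($model->getCondition()) {            // second check, under the lock
                    // ... balance changes, API calls, etc.
                }
            } finally {
                Yii::app()->db
                    ->createCommand('SELECT RELEASE_LOCK(:name)')
                    ->execute([':name' => $lockName]);
            }
        }
    }
    $transaction->commit();
} catch (Exception $e) {
    $transaction->rollback();
    throw $e;
}
```

One property worth noting: a GET_LOCK lock is tied to the connection, so if the process crashes the server releases it automatically — which addresses the "the lock hangs for months" worry about file-based locks below.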

  • Got it, thanks :) It is strange that such a banal problem has to be solved with such a crutch. And SELECT FOR UPDATE seems to have been created exactly for this, yet it does not cope. - anagamin
  • So what can you trust, then? The database cannot be trusted, because the SELECTs can execute simultaneously regardless of any LOCK TABLES, transactions, SELECT FOR UPDATE and so on — so a local lock would be just as much of a mess. Keeping a separate file (per user / per operation that needs a lock) is idiocy: there would be tens of thousands of such files, and, as practice shows, every few months something crashes and the lock hangs. Sessions, then? But it is just as unclear how they behave concurrently. Or spin up Redis or something like that. Are there no other options? - anagamin