I have a piece of code that updates or adds a record in the database: it first runs a SELECT to look the record up, and if it exists it does an UPDATE, otherwise an INSERT.
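The gist of the logic is roughly this (a simplified sketch, not the exact script from the pastebin link below; the connection credentials and values are placeholders):

<?php
// Simplified sketch of the check-then-write logic (the real test script is at the pastebin link).
$mysqli = new mysqli('localhost', 'user', 'password', 'testdb');

$code = 123;        // lookup value coming from the request
$data = 'payload';  // data to store

// Step 1: check whether a record with this code already exists.
$stmt = $mysqli->prepare('SELECT id FROM test_overload WHERE code = ?');
$stmt->bind_param('i', $code);
$stmt->execute();
$row = $stmt->get_result()->fetch_assoc();

if ($row) {
    // Step 2a: record found, update it.
    $upd = $mysqli->prepare('UPDATE test_overload SET data = ? WHERE id = ?');
    $upd->bind_param('si', $data, $row['id']);
    $upd->execute();
} else {
    // Step 2b: record not found, insert a new one.
    $ins = $mysqli->prepare('INSERT INTO test_overload (code, data) VALUES (?, ?)');
    $ins->bind_param('is', $code, $data);
    $ins->execute();
}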
The problem is this: if the script is flooded with concurrent requests, duplicate records start to appear. That is, as I understand it, the SELECT queries are somehow being grouped or cached?
Can anyone explain this behavior?
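Or is it simply that two parallel requests interleave? My guess at a possible timeline (hypothetical, the values are made up):

Request A: SELECT id FROM test_overload WHERE code = 123;   -- finds nothing
Request B: SELECT id FROM test_overload WHERE code = 123;   -- also finds nothing, A has not inserted yet
Request A: INSERT INTO test_overload (code, data) VALUES (123, '...');
Request B: INSERT INTO test_overload (code, data) VALUES (123, '...');  -- second copy of the same record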
It is clear that duplicates could be avoided by adding a unique index to the table, but according to the technical specification the field used to look up the record for updating is not fixed: it can be any field, depending on the particular request to the script.
This problem came up on a real production project, but for testing I was able to reproduce it with this script: https://pastebin.com/BjR8tXzC and ApacheBench:
ab -n 1000 -c 10 http://site.com/test-overload.php

Test table (SHOW CREATE TABLE test_overload;):
CREATE TABLE `test_overload` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `code` int(11) unsigned NOT NULL DEFAULT '0',
  `data` text NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8

Is it possible to somehow change the behavior so that duplicates do not appear, other than with INSERT ... ON DUPLICATE KEY UPDATE ...? Maybe some MySQL setting?
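In case it matters: I would also be fine with serializing the critical section explicitly. Would something like a MySQL named lock work? A rough, untested sketch (GET_LOCK()/RELEASE_LOCK() are built-in MySQL functions; the lock name and credentials are placeholders):

<?php
// Sketch: serializing the check-then-write block with a MySQL named lock so that
// only one request at a time can run the SELECT + INSERT/UPDATE sequence.
$mysqli = new mysqli('localhost', 'user', 'password', 'testdb');

// Acquire an application-level named lock, waiting up to 10 seconds for it.
$mysqli->query("SELECT GET_LOCK('test_overload_write', 10)");

// ... the same SELECT, then UPDATE or INSERT logic as in the sketch above ...

// Release the lock so the next waiting request can proceed.
$mysqli->query("SELECT RELEASE_LOCK('test_overload_write')");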