The script worked fine until now. It used to run in about 10 seconds; now it takes 30 seconds to load and returns a 500 Internal Server Error or

 Fatal error: Uncaught exception 'PDOException' with message 'SQLSTATE[HY000]: General error: 2006 MySQL server has gone away' in /home/admin/web/site.com/public_html/sql.php:360
 Stack trace:
 #0 /home/admin/web/site.com/public_html/sql.php(410): PDOStatement->execute(Array)
 #1 {main}
   thrown in /home/admin/web/site.com/public_html/sql.php on line 410

Line 410:

 $sql = "INSERT INTO table SET " . pdoSet($allowed, $values);
 $stm = $pdo->prepare($sql);
 $stm->execute($values); // 410

I added the following lines to the /etc/mysql/my.cnf file:

 max_allowed_packet=64M
 query_cache_limit=6M

And changed this setting from 30 to 3000:

 wait_timeout=3000
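
To confirm the new values actually took effect after restarting MySQL, a minimal check over the same PDO connection can help (a sketch; $pdo is assumed to be the connection object from the script above):

 // Sketch: read back the server variables changed in my.cnf.
 // Assumes $pdo is the PDO connection already used by the script.
 $stm = $pdo->query("SHOW VARIABLES WHERE Variable_name IN
     ('wait_timeout', 'max_allowed_packet', 'query_cache_limit')");
 foreach ($stm->fetchAll(PDO::FETCH_KEY_PAIR) as $name => $value) {
     echo "$name = $value\n";
 }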

Now the parser works, but for some reason it takes about 40-50 seconds instead of the usual 10. What could be wrong?

  • Maybe you should use transactions? (see the sketch after these comments) - androschuk
  • I don't know, maybe, but everything was fine without transactions - Roman Vulchyn
  • @RomanVulchyn well, maybe for some reason something is not being selected in the parser, and that is what gets passed to the query, i.e. garbage is being passed in, hence the error. Why not check the logs to see what actually arrives in the parser? - Alexey Shimansky
  • @AlexeyShimansky, clearly explained, thank you, I will take a look - Roman Vulchyn
  • @RomanVulchyn how many records are in the table? The problem is most likely the large volume. - olegatro
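
A minimal sketch of the transaction approach androschuk mentions in the first comment, assuming the parser calls the INSERT from line 410 in a loop; $rows here is a hypothetical array of the parsed $values arrays:

 $pdo->beginTransaction();
 try {
     foreach ($rows as $values) {
         // same query building and execute() call as on line 410
         $sql = "INSERT INTO table SET " . pdoSet($allowed, $values);
         $stm = $pdo->prepare($sql);
         $stm->execute($values);
     }
     $pdo->commit(); // one commit for the whole batch instead of one per row
 } catch (PDOException $e) {
     $pdo->rollBack(); // undo the whole batch if any row fails
     throw $e;
 }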

1 answer

As a table in the database grows in size, inserts into it can slow down. This is especially noticeable when the table has many indexes, since each of them has to be updated on every insert, update, or delete of a record. If there really are a lot of indexes, try dropping the ones that are unused or rarely used.
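
For example, a rough way to see which indexes the table carries and to drop one that turns out to be unused (the table and index names below are placeholders, not taken from the question):

 // Sketch: list the indexes on the table, then drop an unused one.
 // `table` and `idx_unused` are placeholder names.
 foreach ($pdo->query("SHOW INDEX FROM `table`") as $row) {
     echo $row['Key_name'] . " covers " . $row['Column_name'] . "\n";
 }
 $pdo->exec("ALTER TABLE `table` DROP INDEX `idx_unused`"); // fewer indexes to maintain per INSERT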

Another option is to reduce the size of the table into which the data is inserted, or to insert the data first into a small staging (aggregating) table and then transfer it to the main one in bulk with a multi-row INSERT query. That staging table does not even have to be in MySQL: it can live in a fast in-memory NoSQL store such as Redis or Memcached.
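
As an illustration of the final step, a sketch of flushing a batch of accumulated rows into the main table with a single multi-row INSERT; the $buffer array and the col1/col2 column names are hypothetical:

 // Sketch: accumulate parsed rows first, then write them in one query
 // instead of issuing an INSERT per row. $buffer, col1 and col2 are
 // placeholders for whatever the parser actually produces.
 $placeholders = [];
 $params = [];
 foreach ($buffer as $row) {
     $placeholders[] = "(?, ?)";
     $params[] = $row['col1'];
     $params[] = $row['col2'];
 }
 if ($placeholders) {
     $sql = "INSERT INTO `table` (col1, col2) VALUES " . implode(", ", $placeholders);
     $pdo->prepare($sql)->execute($params);
 }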

The directives you set in my.cnf will not help you:

 max_allowed_packet=64M
 query_cache_limit=6M

max_allowed_packet sets the maximum size of a single query, and query_cache_limit sets the maximum size of a result set that the query cache will store: anything larger than that is simply not cached. Your problem, however, is with INSERT queries, which are not cached at all.

If you are dealing with InnoDB tables, it is better to try increasing innodb_buffer_pool_size so that the whole database fits into RAM. You can also play with the number of log files (innodb_log_files_in_group) and the size of the transaction log (innodb_log_file_size): writes land in the log first and only later are flushed to the main table. However, if all your logs sit on the same disk, this will not give much of an effect. What can give an effect is setting innodb_flush_log_at_trx_commit to 0, so that transactions are flushed to disk once per second rather than after every query.
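
A possible starting point in my.cnf, with purely illustrative values that need to be adjusted to the amount of RAM and the size of the data:

 innodb_buffer_pool_size=1G
 innodb_log_files_in_group=2
 innodb_log_file_size=256M
 innodb_flush_log_at_trx_commit=0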