MySQL can do this optimization for me itself, through its partitioning mechanism: the table file is physically split into several files according to a chosen attribute, so queries that only need one part of the data run many times faster. The main thing is not to overdo the number of partitions.
ALTER TABLE `om_log` PARTITION BY RANGE (date) PARTITIONS 6 (
    PARTITION less2015 VALUES LESS THAN (UNIX_TIMESTAMP('2015-01-01')),
    PARTITION less2016 VALUES LESS THAN (UNIX_TIMESTAMP('2016-01-01')),
    PARTITION less2017 VALUES LESS THAN (UNIX_TIMESTAMP('2017-01-01')),
    PARTITION less2018 VALUES LESS THAN (UNIX_TIMESTAMP('2018-01-01')),
    PARTITION less2019 VALUES LESS THAN (UNIX_TIMESTAMP('2019-01-01')),
    PARTITION other VALUES LESS THAN (MAXVALUE)
);
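To confirm the optimizer really prunes down to a single partition, a check along these lines can be run (a sketch, assuming that the date column stores integer unix timestamps, as the ALTER above implies):

```sql
-- A range query that should fall entirely inside the less2019 partition
EXPLAIN PARTITIONS
SELECT *
FROM om_log
WHERE date >= UNIX_TIMESTAMP('2018-01-01')
  AND date <  UNIX_TIMESTAMP('2019-01-01');
-- The "partitions" column of the plan should list only less2019;
-- on MySQL 5.7+ a plain EXPLAIN already shows this column.
```

If the plan lists all six partitions instead, the WHERE clause is not written against the partitioning expression and pruning is not happening.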
For now I have created the partitions with some headroom, but ideally new partitions should be added by a cron job when needed, for example at the end of each year, while the partition holding data from 5 years ago gets dropped.
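The yearly maintenance that such a cron job would perform might look like this (a sketch under the schema above; the new partition name less2020 is an illustrative example):

```sql
-- Split the catch-all partition so the new year gets its own range
ALTER TABLE om_log REORGANIZE PARTITION other INTO (
    PARTITION less2020 VALUES LESS THAN (UNIX_TIMESTAMP('2020-01-01')),
    PARTITION other VALUES LESS THAN (MAXVALUE)
);

-- Drop the partition that is now more than 5 years old;
-- this removes its rows instantly, with no row-by-row DELETE
ALTER TABLE om_log DROP PARTITION less2015;
```

REORGANIZE PARTITION only rewrites the catch-all partition, so it stays cheap as long as new rows have not yet piled up in it.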
Partitioning an existing, actively used table in place is not an option: the ALTER acquires a table lock, and that takes the site down. Instead, you need to create a new, already-partitioned table, switch the site over to it, and then move all the old records into it in small batches.
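That switch-then-backfill procedure can be sketched as follows (the id batching column is a hypothetical auto-increment key, not taken from the original post):

```sql
-- 1. Create an empty structural copy and partition it while it holds
--    no data, which is fast and does not touch the live table
CREATE TABLE om_log_new LIKE om_log;
ALTER TABLE om_log_new PARTITION BY RANGE (date) (
    PARTITION less2019 VALUES LESS THAN (UNIX_TIMESTAMP('2019-01-01')),
    PARTITION other VALUES LESS THAN (MAXVALUE)
);

-- 2. Atomically swap the tables; the site now writes to the
--    partitioned table and the old data sits in om_log_old
RENAME TABLE om_log TO om_log_old, om_log_new TO om_log;

-- 3. Backfill history in small batches (repeat, advancing the range,
--    until om_log_old is exhausted), keeping each lock short
INSERT INTO om_log
SELECT * FROM om_log_old
WHERE id BETWEEN 1 AND 10000;
```

RENAME TABLE swaps both names in one atomic operation, so the site never sees a missing table during the switch.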
The problem has now been solved by adding a composite index! Query time is 2-5 seconds, a far more acceptable figure than the former 90 seconds.
- The count field is not indexed, but when I tried to add an index to it, my local MySQL crashed. The table now holds 9,049,853 rows; I will add the EXPLAIN output now. - Makarenko_I_V
- Will count always be ides? - Makarenko_I_V