As a result, I settled on a solution at the application-logic level. The main database always holds only the 0.5–1 million most recent records; an archiver script runs once a day and moves everything older into an archive. Users are initially served only the most recent data from the main database (this covers 99% of requests), and only if that is not enough does a search of the archive follow.
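A minimal sketch of what such a nightly archiver could look like in SQL; the `news` / `news_archive` table names, the `updated` column, and the 90-day cutoff are assumptions for illustration, not taken from the original answer:

```sql
-- Hypothetical daily archiver: copy old rows into a structurally identical
-- archive table, then remove them from the main table.
START TRANSACTION;

INSERT INTO news_archive
SELECT *
FROM news
WHERE updated < DATE_SUB(NOW(), INTERVAL 90 DAY);

DELETE FROM news
WHERE updated < DATE_SUB(NOW(), INTERVAL 90 DAY);

COMMIT;
```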
PS In addition, when the user starts scrolling through the feed, the query fetches not 10 but 50 records. The client displays the first batch of 10 from that request right away, keeps the remaining 4 batches in an array, and outputs them as needed; when the buffered data runs out, a new request is made. I don't display all 50 at once so as not to slow down the browser (there is a lot of graphics). Something similar is done on vk and a number of other large sites.
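In SQL terms, the feed request behind this might look roughly like the sketch below (table and column names are assumed); the client then slices the 50 rows into five batches of 10 on its own:

```sql
-- Hypothetical feed query: fetch 5 "pages" (50 rows) in one round trip.
-- The client shows the first 10 immediately and buffers the other 40,
-- issuing a new request only when the buffer is exhausted.
SELECT id, type, title, updated
FROM news
WHERE type = 3            -- feed type requested by the user (assumed)
ORDER BY updated DESC
LIMIT 50 OFFSET 0;        -- the next request would use OFFSET 50, then 100, ...
```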
PPS The final solution at the MySQL level is as follows. I manually tested queries against both a simple and a composite index, forcing each explicitly (USE INDEX). The best performance came from a composite index (for example, (type, update)), whose use is forced explicitly in the application code depending on the specific kind of query (by default MySQL sometimes does not choose the most efficient index). I was surprised by the discrepancy between EXPLAIN output and actual performance: a query using a simple index (EXPLAIN shows rows = 10) ran hundreds of times slower than one using the composite index, for which EXPLAIN reported rows in the millions.
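For illustration, this is roughly what creating the composite index and comparing the optimizer's plan could look like; the `news` table and the `updated` column name are assumptions:

```sql
-- Composite index from the example in the text: (type, updated).
ALTER TABLE news ADD INDEX idx_type_updated (type, updated);

-- Plan chosen by the optimizer on its own:
EXPLAIN SELECT id, title
FROM news
WHERE type = 3
ORDER BY updated DESC
LIMIT 10;

-- Plan with the composite index forced:
EXPLAIN SELECT id, title
FROM news USE INDEX (idx_type_updated)
WHERE type = 3
ORDER BY updated DESC
LIMIT 10;
```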
PPPS In general, the problem was the MySQL engine choosing the wrong index for the lookup: in some cases it used only a simple index where a composite one was better, and in other cases it searched two indexes at once and merged the results (index merge). After specifying USE INDEX manually for each particular case, performance improved many times over.
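The per-case hinting might look like the sketch below, with a different USE INDEX for each query shape in the application code; the index and column names here are assumptions:

```sql
-- Feed filtered by type: force the composite (type, updated) index
-- instead of letting the optimizer fall back to a simple index or an
-- index_merge of two simple indexes.
SELECT id, title
FROM news USE INDEX (idx_type_updated)
WHERE type = 3
ORDER BY updated DESC
LIMIT 10;

-- Unfiltered feed: force the plain index on the timestamp column.
SELECT id, title
FROM news USE INDEX (idx_updated)
ORDER BY updated DESC
LIMIT 10;
```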