There is a project in our company (on Laravel 5.6, if that matters) whose job is logging; with 500 thousand records, loading old records during pagination becomes unbearably slow. What optimizations can I apply? I created a field where I write the project id, year, and month, and indexed it, but it is still the same single project generating most of the logs. I thought about partitioning the table, but by what criterion? By months again? The growth will not stop at this stage anyway, because that one project keeps logging. Another server could help, of course, but our company is small and is unlikely to rent one. What could be the way out of this situation?
1 answer
You need to add an index on the milliseconds field and base all selection conditions on that field.
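In Laravel, such an index could be added with a migration along these lines (a sketch; the `logs` table and `milliseconds` column names are taken from the answer, the class name is illustrative):

```php
<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

class AddMillisecondsIndexToLogs extends Migration
{
    // Adds a plain B-tree index so that range conditions and
    // ORDER BY on `milliseconds` can be served from the index.
    public function up()
    {
        Schema::table('logs', function (Blueprint $table) {
            $table->index('milliseconds');
        });
    }

    public function down()
    {
        Schema::table('logs', function (Blueprint $table) {
            $table->dropIndex(['milliseconds']);
        });
    }
}
```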
For example, to select entries for the month of August, we make the request

SELECT * FROM logs WHERE milliseconds < :start ORDER BY milliseconds DESC LIMIT 20;

The first time, pass :start as a timestamp in milliseconds:

$d = new DateTime("2018-09-01");
$start = $d->format('U') . $d->format('v');

To display the second page, remember the value of milliseconds in the last record of the first page and pass it as :start. Pagination will not turn out "standard": there is only the ability to go to the next page.
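A self-contained version of that cursor computation (note that DateTime needs `new`, and the 'v' millisecond format specifier requires PHP ≥ 7.0; the UTC timezone is an assumption for reproducibility):

```php
<?php
// Compute the millisecond cursor for the start of September,
// i.e. the upper bound when paging through August's log entries.
$d = new DateTime("2018-09-01", new DateTimeZone("UTC"));

// 'U' = Unix seconds since the epoch, 'v' = the millisecond part
// (here "000"); concatenated they give milliseconds since the epoch.
$start = (int) ($d->format('U') . $d->format('v'));

echo $start; // 1535760000000 for 2018-09-01 00:00:00 UTC
```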
But all of this will work fast and will put almost no load on the database.
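The query from the answer can be sketched with Laravel's query builder as keyset ("seek") pagination; this is an illustration, not the asker's actual code, and assumes an indexed BIGINT `milliseconds` column holding a Unix timestamp in milliseconds:

```php
<?php

use Illuminate\Support\Facades\DB;

// Fetch one page of log entries older than $cursor, newest first.
// $cursor is the `milliseconds` value of the last row on the
// previous page (or "now" in milliseconds for the first page).
function logsPage(int $cursor, int $perPage = 20)
{
    return DB::table('logs')
        ->where('milliseconds', '<', $cursor)
        ->orderByDesc('milliseconds')
        ->limit($perPage)
        ->get();
}

// Usage: the next page's cursor comes from the current page,
// e.g. $next = $rows->last()->milliseconds;
```

Because the WHERE condition and the ORDER BY both use the indexed column, each page is a cheap index range scan regardless of how deep into the history you go, unlike OFFSET-based pagination.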
- I didn't quite understand the part about milliseconds; that field stores just the millisecond part of the time at the moment of recording. Laravel doesn't really want to write milliseconds into the created_at field, so I moved them to a separate field (they are purely informational). - Ruslan Mirzapulatov
- @RuslanMirzapulatov A Unix timestamp in milliseconds? - Yuriy Prokopets
- @RuslanMirzapulatov You could use the created_at field, but two or more entries may fall within the same second. That is why I assumed a full Unix timestamp in milliseconds is written to the milliseconds field. - Yuriy Prokopets
- That was possible, but the catch is that the framework then gives no way to retrieve the milliseconds: on a select it returns the timestamps without them. - Ruslan Mirzapulatov
- @RuslanMirzapulatov It all comes down to one solution: add an index on the record date in the database, and paginate by date, not by page number. Then it will work quickly. You could go further and build a page-number pager on top of that key, but then you would need a scheduled script that indexes the pages - a more complex system with caches outside the database. - Goncharov Alexander