There is a file with 300,000 lines. I need to process each line and add a value to the database. I use the standard PHP function:
$read = fopen('file.csv', 'r');
while (($str = fgetcsv($read, 8000, ',')) !== FALSE):
    // ... parsing the file
endwhile;

The file itself is processed very quickly, in about 0.6-0.9 seconds.
But as soon as I add a check on each line or on a value, the processing time grows noticeably. That is understandable: running a check on 300k lines takes time.
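For reference, the per-row check looks roughly like this; the actual condition is not shown in the question, so the ctype_digit() test on the first column below is only a hypothetical placeholder:

$read = fopen('file.csv', 'r');
while (($str = fgetcsv($read, 8000, ',')) !== FALSE):
    // Hypothetical check: skip rows whose first column is not a numeric id.
    // The real validation in the script may be different.
    if (!isset($str[0]) || !ctype_digit($str[0])) {
        continue;
    }
    // ... the rest of the parsing
endwhile;
fclose($read);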
Then I also need to write all these lines to the database, so I batch the queries: several updates are combined and sent as a single query.
$sql_add = "UPDATE `table` SET `prof` = '1,4,6,8' WHERE `id` IN(2,3,4,1,1,2233,3321,1,3,2... сюда еще дописывается в среднем до 1000 айдишников)"; $query = Db::query($sql_add); $query->closeCursor();# не ждать ответа от запроса And thanks to such combined requests - the number of requests decreases. But all the same, about 1,250 requests come to 100k records, and my script hangs for a long time, right up to the timeout.
How are such large files usually parsed, and how can the data be added to the database quickly, given that most of the time is spent on sending queries and checking the data?