    $table = 'tab';
    $del = mysql_query("TRUNCATE TABLE `".$table."`");
    $fp = fopen("base.csv", "r");
    while ($data = fgetcsv($fp, 0, ";")) {
        $num = count($data);
        for ($c = 0; $c < $num; $c++) {
            if ($del=false) {break; echo 111;}
            $result = mysql_query('insert into tab (id1,id2,a3,a35,a5,p2,p3,p4,6a6,hf3) values ("'.$data[$c].'","'.$data[$c+1].'","'.$data[$c+2].'","'.$data[$c+3].'","'.$data[$c+4].'","'.$data[$c+5].'","'.$data[$c+6].'","'.$data[$c+7].'","'.$data[$c+8].'","'.$data[$c+9].'")');
            $c = $c + 9;
        }
    }
    fclose($fp);

Fewer rows end up in the database than the csv file actually contains: importing through phpMyAdmin gives 13000+, but with this script only 10980. Tell me, am I doing something wrong, or is there an easier way?

  • Could it be dropping on a timeout, by any chance? - korok

2 answers

In general, the way you work with the database here is very suboptimal.

Let's try this, and then look at the result.

    $table = 'tab';
    $del = mysql_query("TRUNCATE TABLE `".$table."`");
    if (!$fp = fopen("base.csv", "rb")) die('Cannot open the file');
    $query = 'insert into '.$table.' (id1,id2,a3,a35,a5,p2,p3,p4,6a6,hf3) VALUES ';
    $n = 0;
    while ($data = fgetcsv($fp, 0, ";")) {
        if (count($data) < 10) continue;
        if ($n > 0) $query .= ', ';
        $n++;
        $query .= '("'.$data[0].'", "'.$data[1].'", "'.$data[2].'", "'.$data[3].'", "'.$data[4].'", "'.$data[5].'", "'.$data[6].'", "'.$data[7].'", "'.$data[8].'", "'.$data[9].'")';
    }
    fclose($fp);
    $result = mysql_query($query);
    echo $result ? 'Records added: '.$n : 'Error in query: '.mysql_error();

UPD

That is an unfortunate error: your connection is being dropped. If you have access to the MySQL config, you need to increase the max_allowed_packet variable (I suggest 64M) and wait_timeout (I suggest 60 seconds). Also, in PHP, put the following before the query:

    set_time_limit(0);
    ini_set('max_execution_time', 0);
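On the MySQL side, if you can edit the server config, the two variables mentioned above are set roughly like this (the values are just the ones suggested here; the server has to be restarted for them to take effect):

    # my.cnf / my.ini, [mysqld] section; the exact file location depends on the installation
    [mysqld]
    max_allowed_packet = 64M   # allow large multi-row INSERT packets
    wait_timeout       = 60    # seconds before an idle connection is dropped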

Because it can go like this: the file takes 28 seconds to process, the query takes another 10, and at the 30-second mark Apache kills the script.

  • Error in query: MySQL server has gone away - Only_fallen
  • Updated. By the way, before the rewrite the error may have been the same, just caused by the sheer number of queries. P.S.: if the above does not work or you cannot change the settings, there are two options: split the csv file into two smaller ones (up to 8 MB each), or switch hosting so that you can change the settings (see the sketch after these comments for a way to avoid splitting the file). I chose the first one for myself in one project. - Sh4dow
  • Error in query: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'Rug "(24181)", "+", "2.85", "2.78", "2.70", "2.63", "0", " Parity "), (" 28 "," 8 ' at line 1. That is what the database said - Only_fallen
  • Maybe there is an easier way to get the csv file into the database? - Only_fallen
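An alternative to physically splitting the csv file is to flush the accumulated INSERT every few hundred rows, so that no single query grows past max_allowed_packet. A rough sketch in the same old mysql_* style as the answer above; the batch size of 500 and the flush_batch helper are only an illustration, and an already open connection is assumed:

    // Sketch: batched multi-row INSERTs so no single query exceeds max_allowed_packet.
    // Assumes the same table/columns as above and an open mysql_* connection.
    $table = 'tab';
    $head  = 'INSERT INTO '.$table.' (id1,id2,a3,a35,a5,p2,p3,p4,6a6,hf3) VALUES ';
    $batch = array();

    function flush_batch($head, &$batch) {
        if (!$batch) return;
        if (!mysql_query($head . implode(', ', $batch)))
            die('Error in query: ' . mysql_error());
        $batch = array();
    }

    if (!$fp = fopen('base.csv', 'rb')) die('Cannot open the file');
    while ($data = fgetcsv($fp, 0, ';')) {
        if (count($data) < 10) continue;
        // escape the values instead of pasting them in raw
        $row = array_map('mysql_real_escape_string', array_slice($data, 0, 10));
        $batch[] = '("'.implode('", "', $row).'")';
        if (count($batch) >= 500) flush_batch($head, $batch);  // arbitrary batch size
    }
    flush_batch($head, $batch);  // send whatever is left
    fclose($fp);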

Perhaps using LOAD DATA INFILE will help; it is considered the fastest way.
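For example, something along these lines for the same base.csv (a sketch only: LOCAL requires local infile to be allowed on both client and server, and without LOCAL the file must be readable by the MySQL server and you need the FILE privilege):

    // Sketch: let MySQL parse the semicolon-separated, double-quoted csv itself.
    $sql = "LOAD DATA LOCAL INFILE 'base.csv'
            INTO TABLE tab
            FIELDS TERMINATED BY ';' OPTIONALLY ENCLOSED BY '\"'
            LINES TERMINATED BY '\\n'
            (id1,id2,a3,a35,a5,p2,p3,p4,6a6,hf3)";
    if (!mysql_query($sql)) die('Error in query: '.mysql_error());

The same statement can also be run directly from the MySQL console or phpMyAdmin, without any PHP at all.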