Good afternoon. I have a PHP script that consists of 3 parts, each parsing a different site. The problem is that if the first site is down, then for some reason the rest don't get parsed. This error is issued:

Fatal error: Maximum execution time of 30 seconds exceeded in Z:\home\localhost\www\script.php on line 34

and nothing more. Please tell me how to avoid this.

  • niki-timofe, I read it, but unfortunately didn't understand it, because I'm a complete zero in PHP. I noticed that if you run the script from the console, it carries on normally to the other sites even when the first one is down. But through the web interface I get this error. - Guy
  • @Guy, if you don't know the basics of OOP, then there's no helping you... - Niki-Timofe
  • @niki-timofe, and where does OOP come into this? Been rereading Habr or something? - Ilya Pirogov

4 Answers

Judging by the curl tag, cURL is being used. If so, you need to configure the cURL options so that you don't wait for a response indefinitely, but only until a timeout. Namely, CURLOPT_CONNECTTIMEOUT and CURLOPT_TIMEOUT.

Maybe I'm wrong and the word curl ended up in the tags by accident? :) Then post your code. We will help even if you don't know OOP.
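A minimal sketch of the timeout approach (the fetch() helper and the URLs are placeholders of mine, not the asker's actual code):

    <?php
    // Fetch one site with explicit timeouts, so a down site
    // cannot stall the whole script.
    function fetch($url) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5); // give up connecting after 5 s
        curl_setopt($ch, CURLOPT_TIMEOUT, 10);       // give up entirely after 10 s
        $html = curl_exec($ch);
        if ($html === false) {
            error_log(curl_error($ch)); // note the failure...
            curl_close($ch);
            return null;                // ...and skip this site instead of dying
        }
        curl_close($ch);
        return $html;
    }

    // A failure on one site no longer stops the others.
    foreach (array('http://site1.example', 'http://site2.example', 'http://site3.example') as $url) {
        $html = fetch($url);
        if ($html !== null) {
            // ... parse $html ...
        }
    }

With both timeouts set, the worst case per site is bounded (here 10 seconds), so three sites fit comfortably inside the default 30-second limit.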

    Either in php.ini:

      max_execution_time = 120 

    Or in .htaccess:

     # the module name here assumes PHP 5; adjust it to your mod_php version
     <IfModule mod_php5.c>
         php_value max_execution_time 120
     </IfModule>

    If PHP is not running in safe mode, then directly in the code:

     set_time_limit(120); 
    • Am I the only one who thinks it's enough to set a timeout on the socket that reads the downed site? :) - Sh4dow
    • @KiTE, but won't the error just be postponed by 90 seconds?? - Niki-Timofe
    • Your script stops on a timeout; that is the problem. One solution is to calculate the maximum time required to process the given number of pages and set that as max_execution_time . - KiTE
    • @Sh4dow, yes, a cURL session has the CURLOPT_TIMEOUT parameter, and it should be used. But, as I understand it, the topic starter processes several pages sequentially in one pass, and sooner or later he will run into max_execution_time anyway (a sketch of one workaround follows below). - KiTE
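
    A sketch of that budgeting idea: set_time_limit() restarts the timeout counter each time it is called, so calling it at the top of every iteration gives each site its own time window. The URLs and the 40-second figure are illustrative assumptions, not the asker's values:

      <?php
      // Illustrative only: each site gets its own fresh time budget,
      // so one slow site exhausts its own window, not the whole script's.
      $sites = array('http://site1.example', 'http://site2.example', 'http://site3.example');
      foreach ($sites as $url) {
          set_time_limit(40); // restart the timer: 40 s for this site alone
          // ... fetch and parse $url here ...
      }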

    Read about exception handling. A sketch of how that keeps the loop going is below.
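
    A sketch of that approach, with a hypothetical parseSite() standing in for the asker's parsing code: each site is wrapped in try/catch, so one failure is logged and the loop moves on.

      <?php
      // Hypothetical stand-in for the real parsing code.
      function parseSite($url) {
          $html = @file_get_contents($url);
          if ($html === false) {
              throw new Exception("Could not download $url");
          }
          // ... actual parsing would go here ...
      }

      foreach (array('http://site1.example', 'http://site2.example') as $url) {
          try {
              parseSite($url);
          } catch (Exception $e) {
              error_log($e->getMessage()); // record the failure and move on
          }
      }

    Note that file_get_contents() honours default_socket_timeout, so a down site is given up on after that timeout rather than hanging forever.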

      Or maybe it's easier, before parsing a site, to check whether it's down, say by requesting just its headers? And if it is down, simply exclude it and not even try to download it... (a sketch is below)
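
      A sketch of that check (the isUp() helper and the URL are placeholders of mine): a HEAD request via CURLOPT_NOBODY with short timeouts tells us whether the site answers at all before we try to download and parse it.

        <?php
        // Quick liveness probe: HEAD request with tight timeouts.
        function isUp($url) {
            $ch = curl_init($url);
            curl_setopt($ch, CURLOPT_NOBODY, true);         // HEAD: headers only, no body
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 3);
            curl_setopt($ch, CURLOPT_TIMEOUT, 5);
            $ok = curl_exec($ch) !== false
                && curl_getinfo($ch, CURLINFO_HTTP_CODE) < 400;
            curl_close($ch);
            return $ok;
        }

        if (isUp('http://site1.example')) {
            // ... download and parse the site ...
        }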