I need the sites to parse to be read from a .txt file, but I don't know whether this can be implemented and, most importantly, how to do it.

What I found: several options. If you need to get a page from a remote server:

    $handle = curl_init();
    curl_setopt($handle, CURLOPT_URL, "http://www.example.com/");
    curl_setopt($handle, CURLOPT_RETURNTRANSFER, true);
    $homepage = curl_exec($handle);
    curl_close($handle);
    echo $homepage;

But, as the example above shows, it fetches only one specific site, while I have a whole list of URLs in a .txt file. I was thinking of finding something like:

    $url = 'file.txt';
    $curlCh = curl_init();
    curl_setopt($curlCh, CURLOPT_URL, $url);

    1 answer

    First wrap the cURL code in a user-defined function, then read the text file into an array, and then go through that array in a loop, passing each value to the wrapper function. Finally, print the resulting array in whatever form is convenient for you. For example:

     // Array of links, read from the text file
     $links = ['http://www.example.com', 'http://www.spravkaweb.ru/mysql/sql'];
     $content = [];
     foreach ($links as $link) {
         $content[] = get_data($link);
     }
     echo '<pre>', print_r($content, true), '</pre>';

     function get_data($url) {
         $ch = curl_init();
         curl_setopt($ch, CURLOPT_URL, $url);
         curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
         curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
         $homepage = curl_exec($ch);
         curl_close($ch);
         return $homepage;
     }
    • Hello! Thank you very much for answering my question; however, since I am a novice coder, I couldn't understand everything. My main problem is to put everything from the txt file into an array and then process it correctly. Here is the code I found: myphpwiki.blogspot.com/2017/12/urls-filegetcontents.html. But - Cruze Fan
    • @Cruze Fan Well then you need to read the manual section on interacting with the file system. - Edward
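
The step the asker is stuck on, reading the URL list from the text file into an array, can be done with PHP's built-in `file()` function. A minimal sketch combining it with the answer's cURL wrapper is below; the filename `links.txt` and the sample URLs are assumptions for illustration (the sketch writes the sample file itself just to be self-contained), and the php-curl extension must be installed:

```php
<?php
// For demonstration only: create a sample links.txt with one URL per line.
// In real use this file would already exist.
file_put_contents("links.txt", "http://www.example.com\nhttp://www.spravkaweb.ru/mysql/sql\n");

// The cURL wrapper from the answer, with a timeout added so a dead
// host does not hang the loop. Returns the page body, or false on failure.
function get_data($url) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);
    $page = curl_exec($ch);
    curl_close($ch);
    return $page;
}

// file() reads the whole file into an array, one element per line;
// the flags strip trailing newlines and skip blank lines.
$links = file("links.txt", FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

$content = [];
foreach ($links as $link) {
    $content[] = get_data($link);
}

echo '<pre>', print_r($content, true), '</pre>';
```

Each element of `$content` then holds the HTML of the corresponding URL (or `false` if the fetch failed), ready for whatever parsing comes next.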