I need the sites to parse to be taken from a .txt file, but I don't know whether this can be implemented and, most importantly, how to do it.
What I found: there are several options. If you need to get a page from a remote server:

```php
$handle = curl_init();
curl_setopt($handle, CURLOPT_URL, "http://www.example.com/");
curl_setopt($handle, CURLOPT_RETURNTRANSFER, true);
$homepage = curl_exec($handle);
curl_close($handle);
echo $homepage;
```

But, as you can see in the example above, only one specific site is used, while I have a whole list of URLs in a .txt file. I was thinking of finding something like:
```php
$url = 'file.txt';
$curlCh = curl_init();
curl_setopt($curlCh, CURLOPT_URL, $url);
```
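As far as I understand, the .txt file can't be passed to cURL directly; instead I would have to read the list of URLs from it in PHP (for example with `file()`) and then run the cURL request in a loop, once per URL. Here is a rough sketch of what I have in mind (the file name `urls.txt` and the variable names are just my placeholders, I'm not sure this is the right way):

```php
// Read the URL list: one URL per line, skipping blank lines (urls.txt is a placeholder name)
$urls = file('urls.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

foreach ($urls as $url) {
    // Same single-URL cURL pattern as above, repeated for each line of the file
    $handle = curl_init();
    curl_setopt($handle, CURLOPT_URL, $url);
    curl_setopt($handle, CURLOPT_RETURNTRANSFER, true);
    $page = curl_exec($handle);

    if ($page === false) {
        echo "Failed to fetch $url: " . curl_error($handle) . PHP_EOL;
    } else {
        // $page now holds the HTML of the current URL; parsing would happen here
        echo "Fetched " . strlen($page) . " bytes from $url" . PHP_EOL;
    }

    curl_close($handle);
}
```

Would this kind of loop be the correct approach, or is there a better way to feed a list of URLs from a .txt file into cURL?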