Using the cURL library is possible. I recently parsed a site where everything is loaded via AJAX with jQuery. If you fetch a page with cURL and simply echo it to the browser, the result will look broken, because the client that made the request is your server (cURL), not the visitor's browser, and cross-domain requests won't work from the page. So in addition to cURL you should take a library such as Simple HTML DOM and extract the data on the server side.
$curl = curl_init();
curl_setopt($curl, CURLOPT_FAILONERROR, 1);
curl_setopt($curl, CURLOPT_FOLLOWLOCATION, 1); // follow redirects
curl_setopt($curl, CURLOPT_TIMEOUT, 10); // time out after 10 seconds
curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1); // return the response as a string instead of printing it
curl_setopt($curl, CURLOPT_URL, "https://ya.ru/");
curl_setopt($curl, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows; U; Windows NT 5.1; ru; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5 GTB6");
curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, false); // skip SSL certificate checks
curl_setopt($curl, CURLOPT_SSL_VERIFYHOST, false);
$data = curl_exec($curl);
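Before parsing, it is worth checking that the request actually succeeded and releasing the handle. A minimal sketch using the standard cURL error functions:

if ($data === false) {
    // curl_error() returns a human-readable description of the last error
    die('cURL request failed: ' . curl_error($curl));
}
$httpCode = curl_getinfo($curl, CURLINFO_HTTP_CODE); // HTTP status code of the response
curl_close($curl); // free the handle once the response body is in $data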
You do not need to echo $data. At this point your client (the server) has received the page. Now download the library from http://simplehtmldom.sourceforge.net/ (there is a usage guide, for example, at http://zubuntu.ru/php-simple-html-dom-parser/). That is, we continue like this:
include 'simple_html_dom.php'; // the Simple HTML DOM library file
$html = str_get_html($data);
$result = $html->find('div.spisok span'); // returns an array of matching elements
Then we iterate over the results:
foreach ($result as $one) {
    echo $one; // we can already output what we found
}
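Usually you need not the whole element but its text or an attribute. Simple HTML DOM exposes these as properties; here is a small sketch (the div.spisok span selector is just the example from above, and the a.item-link selector for attributes is hypothetical):

foreach ($result as $one) {
    echo $one->plaintext; // text content without HTML tags
    echo $one->innertext; // inner HTML of the element
}
// attributes are available as properties; e.g. for links (hypothetical selector):
foreach ($html->find('a.item-link') as $link) {
    echo $link->href; // value of the href attribute
}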
Following this principle, you locate the data and loop over it. The important part is to determine the "coordinates" of the data, i.e. the CSS selectors, accurately.
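To pin down the "coordinates" precisely, Simple HTML DOM lets you pass an index as the second argument of find() to get a single element, and call find() again on an element you have already found. A sketch with hypothetical selectors:

// second argument of find() returns a single element instead of an array
$title = $html->find('div#content h1', 0); // hypothetical selector: first <h1> inside div#content
if ($title) {
    echo $title->plaintext;
}
// nested lookups also work: search inside an already found element
$firstRow = $html->find('table.prices tr', 1); // hypothetical selector: skip the header row
if ($firstRow) {
    echo $firstRow->find('td', 0)->plaintext; // first cell of that row
}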