I am writing a parser in PHP that should copy all publications from a site and display them on my own site (this is not content theft; I have the site owner's permission).
I have already written code that collects the list of publications from the main page (title, photo, and short text). Now I need to parse the content of each publication, so I started by parsing the links to all publications on the main page.
Next I need to write a function that follows these links and parses the content of each publication.
Please show, with an example, how to parse the text behind each link.
<?php
header('Content-type: text/html; charset=utf-8');
require 'phpQuery.php';

// Helper: dump a value inside <pre> for readable debug output
function print_arr($arr) {
    echo '<pre>' . print_r($arr, true) . '</pre>';
}

$url  = 'http://lifemomentt.blogspot.com/';
$file = file_get_contents($url);
$doc  = phpQuery::newDocument($file);

foreach ($doc->find('.blog-posts .post-outer .post') as $article) {
    $article = pq($article);

    $text = $article->find('.entry-title a')->html(); // parse the titles of all publications
    print_arr($text);

    $texturl = $article->find('.entry-title a')->attr('href'); // parse the links to all publications
    echo $texturl;
}
?>
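For reference, here is a minimal sketch of one possible approach: fetch each publication URL with file_get_contents(), load the HTML into a new phpQuery document, and select the post body. The function name parse_publication and the .entry-title / .post-body selectors are assumptions based on a typical Blogger template, not something confirmed for this particular site; adjust them to match the actual markup.

<?php
require 'phpQuery.php';

// Hypothetical helper: download a single publication page and return its
// title and full HTML body. The selectors (.entry-title, .post-body) are
// assumptions about the Blogger template and may need adjusting.
function parse_publication($texturl) {
    $html = file_get_contents($texturl);      // fetch the publication page
    if ($html === false) {
        return null;                          // network error: skip this post
    }

    $page = phpQuery::newDocumentHTML($html); // build a phpQuery document from the page

    return [
        'url'   => $texturl,
        'title' => trim($page->find('.entry-title')->text()),
        'body'  => $page->find('.post-body')->html(), // full post content as HTML
    ];
}
?>

Inside the existing foreach loop you could then call something like $post = parse_publication($texturl); and store or output $post['body'] for each publication, assuming those selectors match the target pages.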