A rather tricky question about PHP and SQL. Roughly speaking, there is a huge query to the database along these lines:

select book.*, lol as l, puk as p, boom as b,
       group_concat( separator '|') as id, ..........................
from xxx as px
left join ...............
left join .................
right join ................
left join ...................
where book>8000
group by x.id

It returns roughly 50,000 rows as arrays. PHP code then has to process the result in a loop:

for ($i = 0; $i < $n; $i++) {
    if ($dsf[$i] == 'лол') {
        // print
    } elseif ($dsf[$i] == 'ололо') {
        // print
    } else {
        continue;
    }
}

Here is how I rewrote all of the above.

select * from (
    select book.*, lol as l, puk as p, boom as b,
           group_concat( separator '|') as id, ..........................
    from xxx as px
    left join ...............
    left join .................
    right join ................
    left join ...................
    where book>8000
    group by x.id
) as zzz
where xz = 'лол' or xz = 'ололо';

for ($i = 0; $i < $n; $i++) {
    if ($dsf[$i] == 'лол') {
        // print
    }
    if ($dsf[$i] == 'ололо') {
        // print
    }
}

Note that in the second version the condition is checked in SQL and then again in PHP. Unlike the first version, the second processes fewer rows and does not use continue.
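As a minimal sketch of the second approach (the $rows array and its xz key are stand-ins invented here, since the real column names are elided in the question), the PHP side only needs to dispatch on a value the database has already filtered:

```php
<?php
// Hypothetical result set: the SQL WHERE clause has already dropped
// every row whose xz is neither 'лол' nor 'ололо'.
$rows = [
    ['xz' => 'лол',   'id' => 1],
    ['xz' => 'ололо', 'id' => 2],
];

foreach ($rows as $row) {
    if ($row['xz'] === 'лол') {
        echo "lol row: {$row['id']}\n";
    } elseif ($row['xz'] === 'ололо') {
        echo "ololo row: {$row['id']}\n";
    }
    // No else/continue branch: every row already matched in SQL,
    // so the loop body runs useful work on every iteration.
}
```

The point of the sketch is that the PHP-side check becomes a pure dispatch between the two print branches, not a filter over 50,000 rows.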

Question:

Which version actually runs faster, and which one is more correct?

  • How would you do it: process everything in the database, or in PHP? To me the answer is obvious: why load up memory when everything can be processed in the database as quickly as possible. What do you think? - fddddddg
  • Both versions have the same constructs in the loop. I do not see the difference, apart from the filter you changed in the second query. - Vfvtnjd
  • Not seeing the difference means knowing PHP and SQL at the level of a twelve-day tutorial, or not knowing them at all. - fddddddg
  • I prefer to solve the problem at the database server level, which is what you did in the second version. The database should be responsible for the data, PHP for the logic; to each its own. - Vfvtnjd
  • If xz is indexed, the second query will work faster. - Vfvtnjd
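Following up on the last comment: a sketch, assuming xz comes from a column on one of the base tables (the table name here is the question's placeholder xxx), of the index that would let the filter avoid scanning the whole result:

```sql
-- Hypothetical: index the column behind xz so that
-- WHERE xz = 'лол' OR xz = 'ололо' can use an index range scan
-- instead of examining all ~50,000 grouped rows.
CREATE INDEX idx_xz ON xxx (xz);
```

Whether the optimizer can actually use this index depends on whether the filter is applied to the base table or to the derived table zzz; pushing the condition into the inner query (see the answers below) gives it the best chance.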

2 answers

  1. Using right join is undesirable. The optimizer rewrites it into a left join anyway, so it is better to write a left join directly.
  2. Do not use select book.*: it is better to list explicitly every field you need. You only have to do it once, and there is always a benefit.
  3. Naturally, trimming the result at the database level is better: less data is transferred, and data transfer is the bottleneck.
  4. What prevents you from filtering right in the query itself? where book > 8000 AND (xz = 'лол' OR xz = 'ололо') group by x.id; This will be the fastest option.
  5. See EXPLAIN EXTENDED.

    Why guess? Just run EXPLAIN on the query and the server itself will tell you which version is faster.

    It all depends on the available indexes, the layout of the tables, and so on.
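Putting points 4 and 5 together, here is a sketch of the single-pass query and the plan check. The table and column names are the question's placeholders, and the joins elided in the question stay elided:

```sql
-- Point 4: push the filter into the original WHERE clause,
-- so no wrapping subselect is needed and fewer rows are grouped.
SELECT book.*, lol AS l, puk AS p, boom AS b
FROM xxx AS px
-- ... the original left joins, elided in the question ...
WHERE book > 8000
  AND (xz = 'лол' OR xz = 'ололо')
GROUP BY x.id;

-- Point 5: prefix the same statement with EXPLAIN (or EXPLAIN
-- EXTENDED on older MySQL) to see which indexes and which join
-- order the server actually chooses.
EXPLAIN SELECT ...;
```

Comparing the EXPLAIN output of this query against the subselect version settles the "which is faster" question empirically instead of by guesswork.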