Is it possible to return only rows n through m of a query's result set, the way MySQL does it by appending LIMIT n, m to the query?

There is a workaround that wraps the query in nested queries, but it is unsuitable because of the load such queries put on the DBMS even with medium-sized tables:

select * from (select rownum as rn,* from ... where ...) d where d.rn between n and m 

And tracing shows that the whole subquery result is kept in memory. Is there some way to substitute n and m into the inner (nested) query itself?

  • select * from ... where rownum > 10 and rownum < 20 Well, or maybe I misunderstood something :) - Alex Silaev
  • It won't work that way; the query will return 0 rows. The logic is this: rownum is assigned as rows are fetched, so the first candidate row gets rownum = 1, fails the rownum > 10 condition and is discarded; the next candidate row is then again the "first" one with rownum = 1, and so on. The result is zero rows. - org
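As the comment explains, a bare rownum > n predicate can never be true, because rownum is assigned only to rows that pass the filter. A minimal sketch against the data-dictionary view ALL_TABLES (the table choice is purely illustrative):

```sql
-- Returns 0 rows: every candidate row is offered rownum = 1,
-- fails the rownum > 10 test, and is thrown away.
select * from all_tables where rownum > 10 and rownum < 20;

-- Works: rownum is frozen into the column rn inside the subquery,
-- so the outer filter sees stable values.
select *
  from (select rownum rn, t.* from all_tables t)
 where rn > 10 and rn < 20;
```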

3 answers

Hi, guys from 2011. I'm writing from 2015, and now we do this:

 SELECT * FROM my_table OFFSET 10 ROWS FETCH NEXT 5 ROWS ONLY; 
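Note that OFFSET/FETCH (the row-limiting clause, available since Oracle 12c) returns an unpredictable set of rows unless the query is sorted; a sketch with an explicit sort (my_table is the table from the line above, and its id column is a hypothetical name):

```sql
-- Rows 11-15 of the result set, deterministically, thanks to ORDER BY
SELECT *
  FROM my_table
 ORDER BY id
OFFSET 10 ROWS FETCH NEXT 5 ROWS ONLY;
```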
     DECLARE
       CURSOR c_merge IS SELECT * FROM DUAL;
       TYPE c_merge_row_type IS TABLE OF c_merge%ROWTYPE;
       rec_merge c_merge_row_type;
     BEGIN
       OPEN c_merge;
       LOOP
         -- set the size of the "chunk" to read here
         FETCH c_merge BULK COLLECT INTO rec_merge LIMIT 50;
         FOR i IN 1 .. rec_merge.COUNT LOOP
           NULL; -- do something
         END LOOP;
         EXIT WHEN c_merge%NOTFOUND;
       END LOOP;
       CLOSE c_merge;
     END;

    Added from comment.

    This code contains two nested loops. The outer one returns the records in chunks:

    • 0-50
    • 51-100
    • 101-150, etc.

    The inner loop then processes each record individually. If you want to process only records 100 through 150, add a counter:

    • counter := 0, records 0-50
    • counter := 1, records 51-100
    • counter := 2, records 101-150

    UPD: the BULK COLLECT option is not very convenient if you need to process only the last "chunk" of data; that problem is easily solved by adding or changing the ORDER BY clause of the original query.
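The counter idea from this answer can be sketched as follows (the cursor query over ALL_TABLES and the processing body are placeholders; the chunk numbering follows the list above):

```sql
DECLARE
  CURSOR c_merge IS SELECT * FROM all_tables ORDER BY table_name;
  TYPE c_merge_row_type IS TABLE OF c_merge%ROWTYPE;
  rec_merge c_merge_row_type;
  counter   PLS_INTEGER := 0;
BEGIN
  OPEN c_merge;
  LOOP
    FETCH c_merge BULK COLLECT INTO rec_merge LIMIT 50;
    -- counter = 2 corresponds to records 101-150
    IF counter = 2 THEN
      FOR i IN 1 .. rec_merge.COUNT LOOP
        NULL; -- process only this chunk
      END LOOP;
    END IF;
    counter := counter + 1;
    EXIT WHEN c_merge%NOTFOUND;
  END LOOP;
  CLOSE c_merge;
END;
/
```

Note that this still fetches every chunk preceding the one you want, which is exactly the objection raised in the comments.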

    • And how do I specify where to start? For example, I need to output records 100 through 150 of the query! - org
    • This is not a good option. If my counterparty table has 1,500,000 records, what do I do when I need to select the last ones? - org
    • Worth noting, I don't quite understand you: this solution lets you process all the data in the set. Do you have a specific task of processing only records with rownum 100-150 (for example) out of the whole set? - jmu
    • In that case it really is more convenient to specify the range you need right in the query's condition via rownum. - jmu

    Read more here: returning part of a sorted result set.

    1. Get records 10 through 19 of a query over the ALL_TABLES view, sorted by the TABLE_NAME column:
     select o.*
       from (select rownum rw, o.*
               from (select o.* from all_tables o order by table_name) o
              where rownum < 20) o
      where o.rw >= 10;
    2. Return part of the sorted result set via the analytic function ROW_NUMBER():
     select o.*
       from (select o.*, row_number() over (order by o.table_name) rw
               from all_tables o) o
      where o.rw >= 10 and o.rw < 20;
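On Oracle 12c and later, the same range can also be expressed with the row-limiting clause shown in the other answer (a sketch; rows 10 through 19 sorted by TABLE_NAME, matching the two examples above):

```sql
-- Skip the first 9 rows, then return the next 10 (rows 10-19)
select *
  from all_tables
 order by table_name
offset 9 rows fetch next 10 rows only;
```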
    • My question is almost the same (I would change it like this): select * from (select rownum as rn, * from ... where ... rn < m) d where d.rn >= n. I want to do it without subqueries!) - org
    • Then the only option is the procedure given in the previous answer; if you compile it, it will be even faster. - gympi
    • A nested query is nothing to be afraid of. Oracle has a very efficient query optimizer; it will unnest the subquery and do everything right, so there is nothing to worry about. - cy6erGn0m
    • In that procedure I can't figure out how to start from position n. Rather, I gather it proposes to fetch and discard the leading rows, which is not an option! - org
    • With a nested query Oracle does unnest it, but it still executes the whole query, so it doesn't work for large tables. For example: counterparties for 1,500,000 users, with a 50-100 MB dump and 100-1,000 hits per second... - org