Good afternoon. There is a database (Transbase) to which I connect via ADO.NET. This database does not support the SQL LIMIT statement, but I need to implement paginated output (fetching a given number of rows with a configurable starting offset for a table), since the tables in this database occupy gigabytes and loading an entire table at once is not an option. The database does support cursors, so one could track the offset in code with an iterator. But if, for example, the user wants to look at the rows starting from the 100,000th, you have to spin the loop until it reaches that offset, which wastes CPU time and creates unnecessary load.
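
To make the problem concrete, here is a minimal sketch of that skip-loop approach, written against the generic ADO.NET base classes (the Transbase provider's concrete classes are not shown, and the table name is a placeholder):

 using System.Collections.Generic;
 using System.Data.Common;

 static class NaivePaging
 {
     // Naive client-side pagination over a DataReader: every row before
     // the offset is transferred and discarded, so the cost grows
     // linearly with the offset -- which is exactly the problem above.
     public static List<object[]> ReadPage(DbConnection connection, string table,
                                           int offset, int pageSize)
     {
         using (var command = connection.CreateCommand())
         {
             // No LIMIT/OFFSET available, so the statement selects everything.
             command.CommandText = "SELECT * FROM " + table;

             using (var reader = command.ExecuteReader())
             {
                 // The wasteful part: spin through 'offset' rows just to drop them.
                 for (int skipped = 0; skipped < offset && reader.Read(); skipped++)
                 {
                 }

                 var page = new List<object[]>();
                 for (int taken = 0; taken < pageSize && reader.Read(); taken++)
                 {
                     var row = new object[reader.FieldCount];
                     reader.GetValues(row);
                     page.Add(row);
                 }
                 return page; // disposing the reader abandons the remaining rows
             }
         }
     }
 }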

How can paginated output from the database still be organized without the SQL LIMIT statement?

  • Which DBMS are you working with? - Ruslan_K
  • @Ruslan_K Transbase. - DeusExMachine
  • Here they write that you can use FIRST. Transbase also seems to support window functions; if that is true, the offset can be done with row_number() over. - Alexander Petrov
  • Yes, FIRST is useless in this context. Transbase does support window functions (WindowFunction), but I have not found a way to do this; the documentation says they apply only to the SELECT statement. - DeusExMachine
  • In general, Transbase window functions do not support row_number(), only: window_function_name ::= { AVG | COUNT | DENSE_RANK | MAX | MIN | RANK | SUM } - DeusExMachine

2 answers

You can do this: take the primary keys from the user's selection on the table and insert them into an intermediate table that has an IDENTITY column as its primary key. To display a page, query the intermediate table with the IDENTITY field BETWEEN the required values, joining the main table on its primary key. A sketch of this idea follows below.
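
If I understand the proposal correctly, a sketch might look like the following. All object names are invented, the numbering is done on the client while streaming the keys once (to avoid guessing at Transbase's auto-increment/IDENTITY syntax), and a single integer key is assumed for brevity (the composite-key objection in the comments still applies):

 using System;
 using System.Collections.Generic;
 using System.Data.Common;

 static class PageIndex
 {
     // Build a numbered intermediate table of primary keys once;
     // every page afterwards is a cheap indexed range lookup.
     public static void Build(DbConnection connection)
     {
         using (var ddl = connection.CreateCommand())
         {
             ddl.CommandText = "CREATE TABLE page_index " +
                               "(row_no INTEGER NOT NULL PRIMARY KEY, pk_value INTEGER NOT NULL)";
             ddl.ExecuteNonQuery();
         }

         // One narrow pass over the big table: primary keys only.
         var keys = new List<long>();
         using (var select = connection.CreateCommand())
         {
             select.CommandText = "SELECT id FROM USER_NAME.TABLE_NAME ORDER BY id";
             using (var reader = select.ExecuteReader())
                 while (reader.Read())
                     keys.Add(Convert.ToInt64(reader.GetValue(0)));
         }

         // Number and store the keys. (Batching the inserts is advisable
         // in practice; one statement per row keeps the sketch short.)
         for (int i = 0; i < keys.Count; i++)
         {
             using (var insert = connection.CreateCommand())
             {
                 insert.CommandText = "INSERT INTO page_index (row_no, pk_value) " +
                                      "VALUES (" + i + ", " + keys[i] + ")";
                 insert.ExecuteNonQuery();
             }
         }
     }

     // A page then becomes an indexed range lookup joined back to the data:
     //   SELECT t.*
     //   FROM page_index p JOIN USER_NAME.TABLE_NAME t ON t.id = p.pk_value
     //   WHERE p.row_no BETWEEN 100000 AND 100099
     //   ORDER BY p.row_no
 }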

  • As far as I understand, this option is not very suitable because it involves copying the entire table, which can take gigabytes. - DeusExMachine
  • In addition, Transbase supports composite (multi-column) primary keys, which further bloats the SQL queries needed to fetch rows at a large offset. - DeusExMachine
  • Why the entire table? Only the primary keys. Granted, this is slow and cumbersome, but I don't see any alternatives in the absence of a capability built into the engine. - minamoto
  • Perhaps I misunderstood you: I don't quite see how, for example, I would quickly jump from the 100th row to the 100,000th. Loop the cursor (DataReader) again and read up to the offset? Then this is not a clean solution, with needless overhead. As for previously read records, I can store them in a DataTable; that is not critical at all. What matters to me is jumping quickly to another offset, as LIMIT does, not fast access to already-read rows. It seems there is no way other than cursors in Transbase, which is sad. - DeusExMachine
  • What loop? If you initially put all the key values into a numbered table, then when you move, you simply fetch records 100,000 through 100,100 from that table. - minamoto

In the end, I figured out how to do it:

 SELECT *
 FROM (SELECT column_1, column_2, column_3,
              RANK() OVER (ORDER BY column_1, column_2, column_3) AS RANK_OFFSET
       FROM USER_NAME.TABLE_NAME)
 WHERE RANK_OFFSET BETWEEN 10 AND 30
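
For completeness, issuing this query from ADO.NET might look roughly like this (column and table names are the placeholders from the query above; the page bounds are plain integers, so they are formatted into the statement directly rather than guessing at the provider's parameter-placeholder syntax):

 using System.Collections.Generic;
 using System.Data.Common;

 static class RankPaging
 {
     // Fetch one page using the RANK()-based query shown above.
     // The bounds are trusted integers; user-supplied string criteria
     // should go through the provider's parameter objects instead.
     public static List<object[]> ReadPage(DbConnection connection,
                                           int fromRank, int toRank)
     {
         using (var command = connection.CreateCommand())
         {
             command.CommandText =
                 "SELECT * FROM " +
                 "(SELECT column_1, column_2, column_3, " +
                 "        RANK() OVER (ORDER BY column_1, column_2, column_3) AS RANK_OFFSET " +
                 " FROM USER_NAME.TABLE_NAME) " +
                 "WHERE RANK_OFFSET BETWEEN " + fromRank + " AND " + toRank;

             var page = new List<object[]>();
             using (var reader = command.ExecuteReader())
             {
                 while (reader.Read())
                 {
                     var row = new object[reader.FieldCount];
                     reader.GetValues(row);
                     page.Add(row);
                 }
             }
             return page;
         }
     }
 }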

But there are a couple of problems:

  1. Since window functions are evaluated last, their result cannot be referenced in the WHERE clause at the same query level, which is why the subquery is needed and the ENTIRE (!) table gets ranked and scanned. (A keyset-style workaround that at least bounds the scan is sketched after this list.)
  2. RANK assigns the same value to ties, so to get a unique rank you either have to order by enough columns to make the ordering unique (and computing RANK over a large table undoubtedly eats up considerable resources), or order by UNIQUE columns.
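
A standard mitigation worth noting here (keyset or "seek" pagination, not something Transbase-specific): if a page is addressed by the last ordering key seen on the previous page instead of by a numeric offset, the WHERE clause can bound the scan before any ranking happens, and the client simply stops reading after one page. A sketch, again with placeholder names and a single-column unique key assumed:

 using System.Collections.Generic;
 using System.Data.Common;

 static class KeysetPaging
 {
     // Continue after the last key of the previous page. The predicate
     // can use an index on column_1, so the engine does not need to
     // rank or scan the whole table; the client stops after pageSize rows.
     public static List<object[]> ReadNextPage(DbConnection connection,
                                               long lastSeenKey, int pageSize)
     {
         using (var command = connection.CreateCommand())
         {
             command.CommandText =
                 "SELECT * FROM USER_NAME.TABLE_NAME " +
                 "WHERE column_1 > " + lastSeenKey + " " +
                 "ORDER BY column_1";

             var page = new List<object[]>();
             using (var reader = command.ExecuteReader())
             {
                 while (page.Count < pageSize && reader.Read())
                 {
                     var row = new object[reader.FieldCount];
                     reader.GetValues(row);
                     page.Add(row);
                 }
             } // disposing the reader abandons the rest of the result set
             return page;
         }
     }
 }

The trade-off is that this supports next/previous navigation from a known position, not a direct jump to an arbitrary offset.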
  • This method works, though very slowly. And in general, the hellish 32-bit interface of the almost-40-year-old Transbase leaves no better choice than migrating to a more modern SQL database. - Sergio Belevskij