Some programmers can estimate the complexity of their algorithms in O-notation off the top of their head, whether it's C# or SQL. How do they do it? Share sources, or explain it in simple terms. Thanks.

  • In fact, I'm more than sure that an algorithm can't be evaluated just like that. For example, quicksort's complexity is not at all obvious at first glance. For the simplest things, experience alone is enough: most “algorithms” are estimated simply by counting nested loops. And the question is too general, in my opinion. - pavel
  • By the basic growth rates of running time (to one of which the algorithm's cost must be reduced). There are 8 of them; the smaller one is preferred. - Deadooshka
  • How do you evaluate it by the number of nested loops? - Vladimir Smirnov

2 Answers

As the question, so the answer... :)

You estimate the number of operations. Often this leads to recurrence relations that can be more or less difficult to solve, but there are also ready-made general solutions, so many algorithms are not that hard to evaluate.
However, keep in mind that when assessing an algorithm's complexity you need to consider the data structures used. For example, in Dijkstra's algorithm a lot depends on how the priority queue is implemented, and in graph algorithms generally a lot depends on the graph representation: adjacency lists or an adjacency matrix...
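
To make the data-structure point concrete, here is a minimal sketch (my own illustration, not from the original answer) of Dijkstra's algorithm using Python's `heapq` as the priority queue and an adjacency-list graph; with these choices the complexity is O((V + E) log V), whereas a linear scan for the minimum over an adjacency matrix would give O(V²):

```python
import heapq

def dijkstra(adj, source):
    """Shortest distances from source.

    adj: adjacency list, adj[u] = [(v, weight), ...]
    Binary heap + adjacency lists: O((V + E) log V).
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)            # O(log V) per pop
        if d > dist.get(u, float('inf')):
            continue                          # stale heap entry, skip
        for v, w in adj.get(u, []):           # each edge relaxed once
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))  # O(log V) per push
    return dist
```

The same algorithm with a matrix representation and a linear search for the closest unvisited vertex is the classic O(V²) variant: same idea, different data structures, different complexity.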

As a simple example, take the usual multiplication of n×n matrices: each element of the new matrix is obtained by multiplying and summing the products of the elements of the corresponding row and column. The row/column length is n, so computing one element of the resulting matrix takes O(n) operations. There are n² elements in total, so the complexity of the algorithm is O(n³).
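
That calculation as a sketch in Python (my own illustration): three nested loops of length n each give the O(n³) bound directly:

```python
def matmul(a, b):
    """Naive multiplication of two n x n matrices (lists of lists)."""
    n = len(a)
    c = [[0] * n for _ in range(n)]
    for i in range(n):            # n rows
        for j in range(n):        # n columns -> n^2 elements total
            for k in range(n):    # O(n) work per element
                c[i][j] += a[i][k] * b[k][j]
    return c                      # total: O(n^3)
```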

The topic is very extensive, so I recommend turning to the literature, especially T. Cormen, C. Leiserson, R. Rivest, C. Stein - Introduction to Algorithms, 3rd ed., 2013.

  • Is there some simplified model for estimating complexity in your head, say a way of approximately composing the complexity functions for any straight-line program? I've started reading Cormen. - Vladimir Smirnov
  • Let's just say I find it difficult to answer in general. Usually you count loops, memory operations... No, I won't undertake to answer your question. It seems to me this will come to you with experience. - Harry
  • A confident +1 for Cormen. I'll add that an algorithm's complexity is estimated on the assumption that you get the worst, most unfavorable data set for this algorithm: one that requires the maximum possible number of iterations. - ParanoidPanda

As with other "offhand" (intuition-based) estimates, the essence lies in already-acquired experience: in having building blocks that you can operate with unconsciously in your chosen area, thanks to deliberate practice.

Most code has a simple algorithmic structure, and if you know the estimates for the common blocks (algorithms and operations on data structures in your area), then the complexity of the code is obvious. In C++, the complexity of the standard algorithms is explicitly specified. In many cases, knowing which of the three iterator categories the input falls into (random access / RandomAccessIterator, sequential / ForwardIterator, single-pass / InputIterator) is already enough to estimate an algorithm's complexity.
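
The same idea outside C++ (a hedged Python analogue, my own example): often the container's category alone tells you the cost, without reading the implementation:

```python
items_list = list(range(10**6))
items_set = set(items_list)

# Membership test on a list scans elements: O(n).
found = 999_999 in items_list

# Membership test on a hash set: O(1) on average.
found = 999_999 in items_set

# Indexing a list is O(1) (random access); reaching the k-th item of a
# generator costs O(k) (single-pass, like an InputIterator).
```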

You may not even know how something is implemented internally. For example, if an algorithm requires sorting random data at some step, it is reasonable to assume O(n log n) for a comparison-based sort, regardless of the specific implementation. Or, when querying a table in a database with many rows (when it makes sense to talk about big O at all), you can expect a good implementation to use an index, turning the search from O(n) into O(log n). When in doubt, measure.
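
A minimal measurement sketch (my own, assuming CPython and its built-in sort): double the input size and watch how the running time grows. Roughly 2x per doubling suggests O(n), slightly more than 2x suggests O(n log n), and 4x suggests O(n²):

```python
import random
import time

def running_time(n):
    data = [random.random() for _ in range(n)]
    start = time.perf_counter()
    sorted(data)                       # the operation under test
    return time.perf_counter() - start

for n in (10**5, 2 * 10**5, 4 * 10**5):
    print(n, running_time(n))
# For O(n log n), the time slightly more than doubles when n doubles.
```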

On the other hand, even outwardly similar simple code examples may have different complexity.
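
For instance (my own Python illustration): these two loops look almost identical, but one is O(n) and the other is O(n²), because inserting at the front of a Python list shifts all existing elements:

```python
n = 10**5

# O(n) overall: append is amortized O(1) per operation.
a = []
for i in range(n):
    a.append(i)

# O(n^2) overall: insert(0, ...) shifts the whole list, O(len) each time.
b = []
for i in range(n):
    b.insert(0, i)
```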


To find or verify an intuitive answer, you can construct recurrence relations or partial sums that can be computed with a computer. Since O(c*n) == O(n), O(n*n + n) == O(n*n), and other simplifying transformations hold, many algorithms can be reduced to a small number of base cases. The process requires care but is rather straightforward (especially with tools like WolframAlpha, Maple, Maxima, or SymPy). See: How to find the time complexity of an algorithm.
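
For example, checking a recurrence with SymPy, one of the tools mentioned above (a minimal sketch, assuming the recurrence T(n) = T(n-1) + n, i.e., one pass that does O(n) extra work per step):

```python
from sympy import Function, rsolve, symbols

n = symbols('n', integer=True)
T = Function('T')

# T(n) = T(n-1) + n with T(0) = 0, written as "expression == 0".
solution = rsolve(T(n) - T(n - 1) - n, T(n), {T(0): 0})
print(solution)  # n*(n + 1)/2, i.e. the algorithm is O(n**2)
```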

Accordingly, there are cases when concentrated effort with the obvious approaches yields no result; then it's worth stepping away for a while and switching to other tasks. Insight may come at the most unexpected moment (but that is already beyond "offhand").


Look at which algorithms are used in the tasks that interest you: new algorithms with better complexity do not appear every day.

Start with the simplest code in your language or framework and find out its complexity (for example, "deleting an element at a given index from an array"). Knowing the complexity of elementary constructions, find the complexity of the code blocks (made up of those constructions) that you encounter often.
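
Sticking with that example (a sketch of my own): deleting by index from an array is O(n - i), because the tail elements are shifted left one by one:

```python
def delete_at(arr, i):
    """Remove arr[i] in place by shifting the tail left: O(n - i)."""
    for j in range(i, len(arr) - 1):
        arr[j] = arr[j + 1]   # one shift per trailing element
    arr.pop()                 # drop the now-duplicated last element, O(1)

data = [10, 20, 30, 40]
delete_at(data, 1)
print(data)  # [10, 30, 40]; the built-in list.pop(i) has the same cost
```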

You can also go the opposite way: start with higher-level code and gradually descend through the levels of abstraction until you reach known blocks (adding fixed-size numbers that fit in a machine word is O(1); if an arbitrary number n is taken, it is O(log n), proportional to the number of bits in the number). See the time complexity table.
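
A quick check of that lowest-level block (my own sketch, relying on Python's arbitrary-precision integers): adding numbers that fit in a machine word is effectively O(1), while adding n-bit numbers costs O(n), i.e., proportional to the bit length:

```python
import time

def add_time(bits, repeats=1000):
    a = (1 << bits) - 1          # a number that is `bits` bits long
    b = (1 << bits) - 1
    start = time.perf_counter()
    for _ in range(repeats):
        _ = a + b                # big-int addition: O(bits)
    return time.perf_counter() - start

for bits in (64, 10**5, 2 * 10**5):
    print(bits, add_time(bits))
# Time grows roughly linearly in the bit length of the operands.
```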

Practice until you can estimate offhand the complexity of most of the everyday code you are interested in.