There should be a total of O(log n) calls to these operations.
Crap. The best possible complexity for a sorting algorithm is O(n); anything better simply does not happen. Maybe O(n log n)?
============
Suppose we have some subsequence that is already sorted. We take the adjacent "new" element and reverse it together with half of the sorted subsequence, after which only the sorted piece needs to be re-examined. We check this piece for sortedness. If it is sorted, we process, by the same algorithm, the new element plus the second sorted half (halve and check); if not, the second quarter of the first half. So, by repeated halving, we "embed" this new element into its place and obtain a sorted sequence one element longer. This is the first sub-algorithm, a pseudo-implementation of bubble sort, used for short blocks of no more than 4 elements.
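If I read this right, the halving step is essentially a binary insertion of the new element into the sorted run. A minimal sketch under that reading (the function name `embed` and the plain-list representation are my assumptions, not the author's):

```python
def embed(seq, start, end):
    # Sub-algorithm 1 (as I understand it): insert seq[end] into the
    # sorted run seq[start:end] by repeated halving, so that
    # seq[start:end+1] comes out sorted. Returns the landing position.
    new = seq[end]
    lo, hi = start, end
    while lo < hi:                     # each pass halves the candidate region
        mid = (lo + hi) // 2
        if seq[mid] <= new:
            lo = mid + 1               # the element belongs in the upper half
        else:
            hi = mid                   # ... or in the lower half
    seq[lo + 1:end + 1] = seq[lo:end]  # shift the tail right by one slot
    seq[lo] = new
    return lo
```

Each call costs O(log k) comparisons on a run of length k, but the shift at the end can move up to k elements.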
Second sub-algorithm. There are two sorted subsequences, one immediately after the other, of lengths M and N. We process the first subsequence plus the first element of the second one using the first sub-algorithm. The output is two adjacent sorted subsequences of lengths M+1 and N-1, and, in addition, we now know the part of the first subsequence into which the next element of the second one must be inserted. It is almost an emulation of a merge.
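Continuing the sketch, and reusing `embed` from above: since each insertion returns its position, the search window for the next element of the second run only ever shrinks, which is the "known part" mentioned above.

```python
def merge_adjacent(seq, start, mid, end):
    # Sub-algorithm 2 (my reading): merge the adjacent sorted runs
    # seq[start:mid] and seq[mid:end] in place. Each embed turns runs
    # of lengths (M, N) into (M+1, N-1), and its return value bounds
    # the halving search for the next element.
    lo = start
    for i in range(mid, end):
        lo = embed(seq, lo, i)

data = [2, 5, 8, 1, 6, 9]
merge_adjacent(data, 0, 3, 6)
print(data)  # [1, 2, 5, 6, 8, 9]
```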
I do not dare to estimate the final complexity (I'm not a pro), but it will come out at least as good as O(n^2), and most likely better.
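For what it's worth, here is my own guess at how the two sub-algorithms compose into a full sort, reusing both sketches above: blocks of at most 4 elements are built by sub-algorithm 1, then neighbouring runs are merged pairwise by sub-algorithm 2. Counting comparisons alone this behaves like O(n log n), but the shifts inside `embed` can each move O(n) elements, so worst-case moves are O(n^2), which seems consistent with the estimate above.

```python
import random

def full_sort(seq, block=4):
    n = len(seq)
    # Phase 1: sub-algorithm 1 on short blocks (at most `block` elements).
    for start in range(0, n, block):
        for i in range(start + 1, min(start + block, n)):
            embed(seq, start, i)
    # Phase 2: sub-algorithm 2 on adjacent runs, doubling run length per pass.
    width = block
    while width < n:
        for start in range(0, n - width, 2 * width):
            merge_adjacent(seq, start, start + width,
                           min(start + 2 * width, n))
        width *= 2

data = [random.randrange(1000) for _ in range(200)]
full_sort(data)
assert data == sorted(data)
```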