It seems to me that there is no single boundary: many things can be optimized almost indefinitely. For example, redundant copies can be eliminated with move semantics, computations can be shifted to compile time with "template magic", and the cost of string concatenation can be avoided by reserving memory in advance. Partly this is a problem of the language and its leaky abstractions, and partly of an optimizer that is not smart enough. A minimal sketch of these three techniques follows below.
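Here is a small, self-contained sketch of the three techniques just mentioned. The helper names `store` and `join` are hypothetical, and a plain `constexpr` function stands in for heavier template metaprogramming:

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Compile-time computation (a simple stand-in for "template magic"):
// the factorial is evaluated by the compiler, not at runtime.
constexpr unsigned long long factorial(unsigned n) {
    return n <= 1 ? 1ULL : n * factorial(n - 1);
}
static_assert(factorial(10) == 3628800ULL, "evaluated at compile time");

// Eliminating a redundant copy with move semantics: the argument is
// moved into the container instead of being copied a second time.
void store(std::vector<std::string>& log, std::string line) {
    log.push_back(std::move(line));  // move, no extra allocation of the characters
}

// Avoiding repeated reallocations during concatenation by reserving memory up front.
std::string join(const std::vector<std::string>& parts) {
    std::size_t total = 0;
    for (const auto& p : parts) total += p.size();

    std::string result;
    result.reserve(total);           // one allocation instead of one per append
    for (const auto& p : parts) result += p;
    return result;
}
```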
Everyone sets their own limits based on their aesthetic preferences and practical tasks. For example, if the code is business logic, low-level optimizations look out of place there and needlessly complicate and slow down development.
Since the language is huge, programmers usually know only the part of the overall picture they encounter in their own work. Those who deal with low-level code and bit twiddling are often poorly versed in the semantics of exceptions or in multithreading. Those who build lock-free structures and know the memory model down to its subtleties may still be shaky on struct packing or template magic. And those who can compute md5 at compile time may be unaware of the object layout and overhead of multiple inheritance, or get lost in the details of name resolution.
The C++ language places a strong emphasis on "small" efficiencies, so it seems to me that when developing in it, it is worth paying more attention to this kind of optimization.