As far as I know, synchronized can slow down method execution dozens if not hundreds of times. What causes such a huge loss in speed, and are there any alternatives to thread synchronization?
3 answers
The loss occurs only in a multithreaded environment; on a single thread there is no difference. The point is that to execute a synchronized block, all threads have to reach the place where it begins and then take turns executing it, one thread at a time.
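To see this, here is a minimal sketch (not a rigorous benchmark; a tool like JMH would be the proper way to measure) that times the same synchronized increment first on one thread and then on several threads contending for the same monitor. The class and method names are mine, purely for illustration.

public class ContentionDemo {
    private long counter = 0;

    private synchronized void increment() {
        counter++;
    }

    // Runs the given number of threads, each performing the same number of
    // synchronized increments, and returns the elapsed time in milliseconds.
    private static long run(ContentionDemo demo, int threads, int incrementsPerThread)
            throws InterruptedException {
        Thread[] workers = new Thread[threads];
        long start = System.nanoTime();
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < incrementsPerThread; j++) {
                    demo.increment();
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join();
        }
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        int perThread = 5_000_000;
        System.out.println("1 thread : " + run(new ContentionDemo(), 1, perThread) + " ms");
        System.out.println("8 threads: " + run(new ContentionDemo(), 8, perThread) + " ms");
    }
}

On a single thread the synchronized increment is cheap; with 8 threads the same work is serialized on one monitor, and the total time grows accordingly.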
There is no single answer to the question about alternatives to synchronization: the question is too general, and the answer varies greatly depending on the data and the implementation. There are blocking algorithms, non-blocking algorithms, and transactional memory. You have to understand the specific task, how the data is represented, and how it is processed; ideally, each piece of data should be processed independently, and then the threads will not need to synchronize at all.
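As one concrete example of a non-blocking approach, the counter from the sketch above could be built on java.util.concurrent.atomic instead of a monitor; the class name here is hypothetical.

import java.util.concurrent.atomic.AtomicLong;

// A non-blocking counter: AtomicLong relies on a hardware compare-and-swap
// loop, so threads never block each other while incrementing.
public class NonBlockingCounter {
    private final AtomicLong counter = new AtomicLong();

    public long increment() {
        return counter.incrementAndGet(); // lock-free under the hood
    }

    public long current() {
        return counter.get();
    }
}

Whether this is faster than synchronized still depends on the workload, which is exactly why the question has no general answer.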
As the authors of the other answers have already noted, the main reason for the slowdown is that a synchronized method/block can be executed by only one thread at a time, while the others have to wait. But there are other factors that contribute much less to the slowdown and yet can still be noticeable in algorithms with high performance requirements.

The first such factor is the overhead of acquiring and releasing the monitor. For a biased monitor it is minimal and fits into a few hardware instructions, but under high contention the virtual machine has to do significantly more work, including actions that involve system calls and therefore context switches.

The second factor is safepoints. In HotSpot they can only be global, so inflating or deflating a monitor briefly stops absolutely all JVM threads, including the service threads.

The third factor, at an even lower level, is the need to use a memory barrier when acquiring the monitor. This reduces the opportunities for reordering optimizations and forces the processors to synchronize their caches.
That is why, when writing multithreaded programs, it is so important to minimize shared state or make it immutable. Immutability and confinement are the best alternatives to synchronization.
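A small sketch of both alternatives, with names I made up for illustration: an immutable value object can be shared between threads without any locking, and thread confinement keeps mutable state private to each thread so it never needs protection.

// Immutability: the object never changes, so it is safe to share freely.
final class Point {
    private final int x;
    private final int y;

    Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    int x() { return x; }
    int y() { return y; }

    // "Mutation" produces a new object instead of changing shared state.
    Point translate(int dx, int dy) {
        return new Point(x + dx, y + dy);
    }
}

// Confinement: each thread gets its own mutable, non-thread-safe StringBuilder,
// so the state is never actually shared and no lock is needed.
class ConfinedFormatter {
    private static final ThreadLocal<StringBuilder> BUFFER =
            ThreadLocal.withInitial(StringBuilder::new);

    static String format(Point p) {
        StringBuilder sb = BUFFER.get();
        sb.setLength(0); // reuse the per-thread buffer
        return sb.append('(').append(p.x()).append(", ").append(p.y()).append(')').toString();
    }
}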
A lock protects a code segment, allowing only one thread to execute that code at a time.
The lock manages the threads that are trying to enter the protected code segment.
A condition object manages the threads that have entered the protected code segment but cannot proceed with their work yet.
The longer the block of code that the threads have to access in turn takes to execute, the worse it gets. BUT if the code really cannot be executed from several threads at the same time, there is no other way out: it has to be synchronized. synchronized is simply a convenient tool that Java provides. If there is a lot of such code, it is better to use more flexible tools: the Lock and Condition interfaces were added in Java SE 5.0 to give programmers a high degree of control over locking.
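As a sketch of how Lock and Condition fit together, here is a tiny bounded queue (the class name is mine): the condition objects park threads that have entered the protected section but cannot proceed yet, which is exactly the situation described above.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedQueue<T> {
    private final Deque<T> items = new ArrayDeque<>();
    private final int capacity;
    private final Lock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();

    public BoundedQueue(int capacity) {
        this.capacity = capacity;
    }

    public void put(T item) throws InterruptedException {
        lock.lock();
        try {
            while (items.size() == capacity) {
                notFull.await();      // wait until there is room
            }
            items.addLast(item);
            notEmpty.signal();        // wake a waiting consumer
        } finally {
            lock.unlock();
        }
    }

    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty()) {
                notEmpty.await();     // wait until there is something to take
            }
            T item = items.removeFirst();
            notFull.signal();         // wake a waiting producer
            return item;
        } finally {
            lock.unlock();
        }
    }
}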
Java also has the volatile keyword, which guarantees that writes to a variable become visible to other threads, although it does not provide mutual exclusion. You can read about some of its pitfalls here. There is also a good article that explains how it all works:
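A small sketch of what volatile gives you, visibility rather than locking; the class name is made up for illustration. Without volatile, the worker thread might never observe the write to the flag made by another thread.

public class StoppableWorker implements Runnable {
    private volatile boolean running = true;

    @Override
    public void run() {
        while (running) {
            // ... do a unit of work ...
        }
    }

    public void stop() {
        running = false; // this write is guaranteed to become visible to the worker
    }
}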
The synchronized keyword in Java, explained in simple terms
I think it should be clear that if this is a server accessed by thousands of clients and there is some element that cannot be changed by several clients at the same time, they will have to queue up. The same goes for an application in which, say, 8 threads hit a block of code that takes 1 second to execute: instead of 1 second, all the threads together will take a little more than 8 seconds. This is certainly not a precise explanation, but it is good enough. Just keep in mind that if synchronization is needed, you either use synchronized, which keeps the code more readable but applies broadly, possibly to parts of the code where synchronization is not actually needed, or you use other tools for more targeted effects, as in the sketch below.
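A sketch of that trade-off, with hypothetical names: synchronizing a whole method also serializes work that does not touch shared state, while a narrower synchronized block only protects what actually needs it.

public class ReportBuilder {
    private final StringBuilder log = new StringBuilder(); // shared, needs protection

    // Heavy-handed: the expensive, purely local computation is serialized too.
    public synchronized void processAndLogCoarse(int[] data) {
        long result = expensiveLocalComputation(data);
        log.append(result).append('\n');
    }

    // Targeted: only the access to shared state is synchronized,
    // so the expensive part runs in parallel across threads.
    public void processAndLogFine(int[] data) {
        long result = expensiveLocalComputation(data);
        synchronized (this) {
            log.append(result).append('\n');
        }
    }

    private long expensiveLocalComputation(int[] data) {
        long sum = 0;
        for (int value : data) {
            sum += (long) value * value;
        }
        return sum;
    }
}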