Suppose I have an object of some type that is read from multiple threads much more often than it is modified, and there are two implementation options:

a) a plain `Object obj` field, with a `ReadWriteLock` guarding the read and modify methods respectively;

b) a `volatile Object obj` field; the modify method takes a plain `Lock`, creates a new object, and replaces the `obj` reference, while the read method is unsynchronized (apart from the volatile read of the reference).

Which option is preferable, which will be faster, and why?

If I chose option (b) and knew for certain that my application would run on a server with several x86-64 processors, could I drop `volatile` entirely, thereby violating the JMM but relying on the processors' cache-coherence protocols? Would a core of the second processor see the data modified by a core of the first, and would this give at least some performance gain?
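For concreteness, option (b) might be sketched like this (a minimal illustration of the pattern described above; the class and method names are my own, not from the question):

```java
import java.util.concurrent.locks.ReentrantLock;

class VolatilePublish {
    // volatile guarantees that readers see the latest published reference.
    private volatile Object obj = new Object();
    private final ReentrantLock writeLock = new ReentrantLock();

    // Unsynchronized read: only a volatile read of the reference.
    Object read() {
        return obj;
    }

    // Writers serialize on the lock, build a new object, and publish it
    // by replacing the reference (a volatile write).
    void update(Object newValue) {
        writeLock.lock();
        try {
            obj = newValue;
        } finally {
            writeLock.unlock();
        }
    }
}
```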
- If your modification consists of creating a new object rather than mutating the existing one in place, then of course option (b) (or `AtomicReference`, which is almost the same). - Russtam
- Well, you can rely on the strong x86 memory model, but then you will have to somehow explain to the optimizer that it is not allowed to cache the value. - VladD
- Thanks for the answers. 1. Is `AtomicReference` (CAS) the same thing as locking? Descheduling a thread from a core is more expensive than spinning a few iterations in a loop, though how many is "a few" is of course the big question. And what difference does it make for data consistency whether I create a new object or mutate the existing one, as I understand it? - slippery
- 2. So if the data in the caches of the other processor cores is marked invalid when some thread modifies it, and the hardware takes care of that, why should I have to explain anything to the optimizer? For example, I have a large map with 10 writes per 10,000 reads, and above all I want to avoid synchronization entirely so that reads are as fast as possible. Can I deliberately write code that is incorrect in terms of the JMM in such cases? - slippery
1 answer
An interesting question. I'll try to answer, although I don't claim to be correct; I'd be glad to see remarks in the comments.
1) Which option will be faster? It is hard to say, because everything depends heavily on your use case: the number of writers and readers, the frequency of writes and reads, whether "honest" synchronization is needed or how critical it is, and so on. In general you need to measure and write tests. I wrote the following code for that:
For the variant with a plain `Lock`:

```java
private Object object = new Object();
private final ReentrantLock lock = new ReentrantLock();

Supplier<Object> reader = () -> {
    while (lock.isLocked()) {
        // spin while a write is in progress
    }
    return object;
};

Function<Integer, Object> writer = number -> {
    lock.lock();
    object = number;
    lock.unlock();
    return object;
};
```
As far as I know, `isLocked()` and `unlock()` are related by *happens-before*, so there is no need for `volatile` on the `object` field.
The variant with `ReentrantReadWriteLock` is quite simple:

```java
private Object object = new Object();
private final ReentrantReadWriteLock readWriteLock = new ReentrantReadWriteLock();

Supplier<Object> reader = () -> {
    Object result;
    readWriteLock.readLock().lock();
    result = object;
    readWriteLock.readLock().unlock();
    return result;
};

Function<Integer, Object> writer = number -> {
    readWriteLock.writeLock().lock();
    object = number;
    readWriteLock.writeLock().unlock();
    return object;
};
```
The results are as follows: with a reader-to-writer ratio of 0.8, the variant with the plain `lock` turns out to be 2.5-4 times faster (full test code with jmh).
2) From the measurements I obtained, I conclude that removing `volatile` will not play a significant role. It is better to look toward some alternative synchronization mechanisms.
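One such alternative (my suggestion, not something named in the answer) is `java.util.concurrent.locks.StampedLock`, whose optimistic read mode suits a read-mostly workload because the common read path does not write to shared lock state at all. A minimal sketch (class and method names are hypothetical):

```java
import java.util.concurrent.locks.StampedLock;

class OptimisticBox {
    private final StampedLock sl = new StampedLock();
    private Object value = new Object();

    Object read() {
        long stamp = sl.tryOptimisticRead(); // no write to shared lock state
        Object result = value;
        if (!sl.validate(stamp)) {           // a writer intervened; fall back
            stamp = sl.readLock();
            try {
                result = value;
            } finally {
                sl.unlockRead(stamp);
            }
        }
        return result;
    }

    void write(Object v) {
        long stamp = sl.writeLock();
        try {
            value = v;
        } finally {
            sl.unlockWrite(stamp);
        }
    }
}
```

Note that `StampedLock` is not reentrant, so this trade-off only makes sense when the lock is confined to short, non-nested critical sections.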
- And in option N1, is there any point in establishing happens-before through `isLocked()` and `unlock()`? As far as I understand, the volatile write-read of the variable itself would provide it, and synchronization on write is only needed to avoid a lost update. Maybe I'm wrong. - slippery
- It seems so. `isLocked()` is not strictly needed there if the `object` reference is volatile, but in the example I tried to achieve the same behavior as the RW lock: while a write is in progress, reading is forbidden. - Artem Konovalov
- I see, thank you very much for the answer and for the chance to debate the topic) It would be interesting not to replicate the RWLock principle, but simply to follow the implementation principle of `CopyOnWriteArrayList`. - slippery
- CAS performance will be directly proportional to the number of readers. Interesting, this needs measuring =) - Artem Konovalov
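The copy-on-write idea mentioned above can be sketched with `AtomicReference` and a CAS retry loop (an illustration of the principle only, not the actual `CopyOnWriteArrayList` source, which uses a lock on the write path instead of CAS):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

class CopyOnWriteBox {
    private final AtomicReference<List<Integer>> ref =
            new AtomicReference<>(Collections.emptyList());

    // Lock-free read: a single atomic read of the reference;
    // the returned list is immutable, so no further synchronization is needed.
    List<Integer> snapshot() {
        return ref.get();
    }

    // Writer copies the current list, modifies the copy, and tries to
    // swap it in with CAS; it retries if another writer got there first.
    void add(int x) {
        while (true) {
            List<Integer> current = ref.get();
            List<Integer> next = new ArrayList<>(current);
            next.add(x);
            if (ref.compareAndSet(current, Collections.unmodifiableList(next))) {
                return;
            }
        }
    }
}
```

Reads stay cheap regardless of contention; it is the writers that pay for the copy and, under contention, for CAS retries.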