I’m worried about the correctness of a seemingly standard pattern for raising an event in C# (at least prior to version 6):
```csharp
EventHandler localCopy = SomeEvent;
if (localCopy != null)
    localCopy(this, args);
```

I have read Eric Lippert's article, Events and races, and I know that this pattern has a problem with invoking stale handlers. But I am more concerned about the memory-model question: is the JIT/compiler allowed to eliminate the local copy and rewrite the code as

```csharp
if (SomeEvent != null)
    SomeEvent(this, args);
```

with the resulting possibility of a NullReferenceException?
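For what it's worth, here is a sketch (class and member names are mine, not from any of the cited sources) of two ways to make the single read explicit regardless of what the optimizer is allowed to do: the C# 6 null-conditional operator, or a `Volatile.Read` of the backing delegate field:

```csharp
using System;
using System.Threading;

class Publisher
{
    public event EventHandler SomeEvent;

    // C# 6+: the ?. operator compiles to a single read of the
    // delegate field followed by a null-guarded invocation.
    public void RaiseCsharp6() => SomeEvent?.Invoke(this, EventArgs.Empty);

    // Pre-C# 6 alternative: Volatile.Read (.NET 4.5+) is a volatile
    // read, i.e. a critical execution point the optimizer may not elide.
    public void RaiseVolatile()
    {
        // Inside the declaring class, the event name denotes the
        // compiler-generated backing field, so ref access is allowed.
        EventHandler localCopy = Volatile.Read(ref SomeEvent);
        if (localCopy != null)
            localCopy(this, EventArgs.Empty);
    }
}
```

Neither method throws when no handler is attached, and neither leaves the compiler room to re-read the field between the null check and the invocation.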
According to the C# language specification, §3.10:

> The critical execution points at which the order of these side effects must be preserved are references to volatile fields (§10.5.3), lock statements (§8.12), and thread creation and termination.
Thus, the code in question contains no critical execution points, so the optimizer is not constrained by them.
Jon Skeet's answer on the topic (2009) says:

> The JIT isn't allowed to perform the optimization you're talking about because of the condition. I know this was raised as a possibility some time ago, but it's not valid. (I asked either Joe Duffy or Vance Morrison; I don't remember exactly which.)
But the comments refer to this post (2008), Events and Threads (Part 4), which on our topic says that the CLR 2.0 JIT (and presumably later versions?) must not introduce reads or writes beyond those present in the code, so on Microsoft .NET there should be no problem.
[By the way, I do not understand why a ban on introducing additional reads of a field proves the correctness of the pattern in question. Can't the optimizer simply notice that the value of SomeEvent was already read into another local variable earlier and eliminate exactly one of the two reads? That seems like a legitimate optimization.]
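To make the bracketed worry concrete, here is a sketch (all names hypothetical) of the two-reads-merged-into-one transformation I have in mind. Merging is the removal of a read, not the introduction of one, so the "no new reads" rule alone does not obviously forbid it:

```csharp
using System;

class CoalescingSketch
{
    public static EventHandler SomeEvent; // hypothetical shared field

    static void DoSomething(EventHandler h) { /* placeholder */ }

    public static void TwoReads(object sender)
    {
        EventHandler first = SomeEvent;      // read #1
        DoSomething(first);

        // Read #2: may the optimizer replace this read with `first`,
        // i.e. eliminate one of the two reads of the same field?
        // That would be read *removal*, which the "no introduced
        // reads" guarantee says nothing about.
        EventHandler localCopy = SomeEvent;
        if (localCopy != null)
            localCopy(sender, EventArgs.Empty);
    }
}
```

Note that in this particular shape the merge is harmless (localCopy is still a single snapshot); the question is why the cited guarantee rules out the harmful direction.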
Further, Igor Ostrovsky's MSDN article (2012), The C# Memory Model in Theory and Practice, states:

> Optimizations that do not change the order: some optimizations can add or remove certain memory operations. For example, the compiler can replace repeated reads of a field with a single read. Similarly, if code reads a field and stores the value into a local variable and then repeatedly reads the variable, the compiler could choose to repeatedly read the field instead.
>
> Because the ECMA C# specification does not rule out these non-reordering optimizations, they are presumably allowed. In fact (as I will discuss in the second part), the JIT compiler does perform these types of optimizations.
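Applied to the pattern in question, the second transformation Ostrovsky describes is exactly the feared rewrite. A sketch (hypothetical names; whether any real JIT actually applies this to a delegate field is the open question):

```csharp
using System;

class OstrovskySketch
{
    public static EventHandler SomeEvent; // hypothetical shared field

    // Source as written: a single read of the field, snapshotted
    // into a local, then null-checked and invoked.
    public static void AsWritten(object sender)
    {
        EventHandler localCopy = SomeEvent;
        if (localCopy != null)
            localCopy(sender, EventArgs.Empty);
    }

    // After the "read the field instead of the local" rewrite, the
    // two uses of localCopy become two independent field reads. A
    // concurrent `SomeEvent = null` between them would then cause
    // a NullReferenceException.
    public static void AfterRewrite(object sender)
    {
        if (SomeEvent != null)                  // field read #1
            SomeEvent(sender, EventArgs.Empty); // field read #2
    }
}
```

In a single-threaded run the two methods are observably identical; the difference only matters when another thread mutates SomeEvent between the two reads of the rewritten form.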
This seems to contradict Jon Skeet's answer.
So the question is:
- Is the pattern under discussion valid in the current Microsoft implementation of .NET?
- Is the pattern guaranteed to be valid in other .NET implementations (for example, Mono), especially when running on exotic processor architectures?
- What exactly (the C# specification? the CLR specification? implementation details of the current CLR version?) guarantees the validity of the pattern?

Any normative references on the topic are welcome.