I’m worried about the correctness of a seemingly standard pattern for raising an event in C# (at least up to version 6):

EventHandler localCopy = SomeEvent;
if (localCopy != null)
    localCopy(this, args);

I have read Eric Lippert’s article Events and races, and I know that this pattern has a problem with calling stale handlers, but I am more concerned with the memory-model side of it: is the JIT/compiler allowed to throw away the local copy and rewrite the code as

if (SomeEvent != null)
    SomeEvent(this, args);

with the possibility of a NullReferenceException .
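
To spell out the feared interleaving (my own illustration; the rewritten form and the second thread are hypothetical):

// Thread A raises the event using the rewritten form:
if (SomeEvent != null)        // read #1: sees a non-null delegate
{
    // ... at this point Thread B removes the last handler:
    //     publisher.SomeEvent -= handler;
    SomeEvent(this, args);    // read #2: now null -> NullReferenceException
}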

According to the C# language specification, §3.10,

The critical execution points at which the order of these side effects must be preserved are references to volatile fields (§10.5.3), lock statements (§8.12), and thread creation and termination.

Thus, there are no critical execution points in the code in question, so the optimizer is not constrained by them.

Jon Skeet’s answer on the topic (2009) says (in my translation):

The JIT isn’t allowed to perform the optimization you are talking about, because of the condition. I know this was raised as a concern some time ago, but such an optimization is not valid. (I asked either Joe Duffy or Vance Morrison about this; I don’t remember exactly which.)

However, the comments there refer to this post (2008): Events and Threads (Part 4), which says on our topic that the CLR 2.0 JIT (and presumably later versions?) must not introduce reads or writes beyond the existing ones, so with Microsoft .NET there should be no problem.

[By the way, I do not understand why the ban on introducing additional reads of a field proves the correctness of the pattern in question. Couldn’t the optimizer simply notice that the value of SomeEvent was already read into another local variable earlier, and throw out exactly one of the reads? That seems like a legitimate optimization.]

Further, Igor Ostrovsky’s MSDN article (2012), The C# Memory Model in Theory and Practice, states (my translation):

Optimizations that do not reorder operations. Some optimizations can introduce or eliminate memory operations. For example, the compiler may replace repeated reads of a field with a single read. Or, if code reads a field, stores its value in a local variable, and then reads the variable, the compiler may decide to read the field instead.

Since the ECMA C# specification does not prohibit optimizations that do not reorder operations, they must be allowed. In fact (as I will discuss in Part 2), the JIT compiler does perform this kind of optimization.
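
To make the first half of the quote concrete, here is a minimal sketch (my own illustration, not taken from the article) of the kind of read fusion it describes, using a hypothetical stop flag:

class Worker
{
    private bool _stop;   // ordinary, non-volatile field

    public void Run()
    {
        // The source code reads _stop on every iteration, but the JIT may
        // legally fuse the repeated reads into a single read hoisted before
        // the loop, so a write of true made by another thread might never
        // be observed here.
        while (!_stop)
        {
            // do work
        }
    }

    public void RequestStop()
    {
        _stop = true;     // intended to be called from another thread
    }
}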

The quoted claim seems to contradict Jon Skeet’s answer.

So the question is:

  1. Is the pattern under discussion valid in the current Microsoft implementation of .NET?
  2. Is it guaranteed that the pattern is valid in competing .NET implementations (for example, Mono), especially when running on exotic processors?
  3. What exactly (the C# specification? the CLR specification? implementation details of the current CLR version?) guarantees the validity of the pattern?

Any normative references on the topic are welcome.

  • I suspect that this only matters when the value is read into a local variable in a loop (it is hard to imagine an optimization that could otherwise detect, at compile time, that the read is repeated), but this is only speculation. If that is your case, then most likely the only reliable way to escape the problem is [MethodImpl(MethodImplOptions.NoOptimization)] (see the sketch after these comments). - Geslot
  • @Geslot: So in your opinion, is the pattern incorrect in general, in any .NET implementation? - VladD
  • I repeat that this is only speculation, but since you ask, my opinion is that this pattern cannot be guaranteed safe in all cases in C# 5, because the article dealt with optimizations in the .NET Framework 4.5. - Geslot
  • @Geslot: Got it, thanks! - VladD
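
For reference, a minimal sketch (a hypothetical Publisher class of my own) of how the attribute mentioned in the first comment would be applied to the method that raises the event:

using System;
using System.Runtime.CompilerServices;

class Publisher
{
    public event EventHandler SomeEvent;

    // NoOptimization asks the JIT not to optimize this method; it is the
    // blunt escape hatch suggested in the comment above.
    [MethodImpl(MethodImplOptions.NoOptimization)]
    protected void OnSomeEvent(EventArgs args)
    {
        EventHandler localCopy = SomeEvent;
        if (localCopy != null)
            localCopy(this, args);
    }
}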

1 answer

A good question, which showed, at least to me, that the situation with standards in the .NET camp is deplorable. Well then, let’s get on with the question.

Short:

  1. Yes, this pattern is correct and safe in the Microsoft implementation.
  2. No, there are no guarantees regarding Mono. Moreover, I could not find any information at all about the memory model used in Mono.
  3. The guarantee is provided by the implementation, namely the memory model introduced in Microsoft .NET 2.0 (details below). No widely known changes have been made to it since then.

Now let's look at why. To transform

EventHandler localCopy = SomeEvent;
if (localCopy != null)
    localCopy(this, args);

into

 SomeEvent(this, args); 

the compiler has to get rid of the check somehow. It could do this by reading SomeEvent (read number one) and making sure it is really non-null, but then it would have to read SomeEvent again (read number two) in order to invoke the event. Thus, the transformed version contains two reads of SomeEvent, compared to a single read in the original. Leaving aside the fact that this would not be a very clever optimization (usually reads are removed, not added), it is also prohibited. Where? Attention: the MSDN article from October 2006, Understand the Impact of Low-Lock Techniques in Multithreaded Apps, describes the changes made to the memory model in CLR 2.0.

The article is large, but we are only interested in the following:

  1. All the rules of the ECMA model, in particular the ECMA rules for volatile.
  2. Reads and writes cannot be introduced (see the sketch after this list).
  3. A read can only be removed if it is adjacent to another read of the same location from the same thread; the same holds for writes. Rule 5 can be used to make reads or writes adjacent before applying this rule.
  4. Writes cannot move past other writes from the same thread.
  5. Reads can only move earlier in time, but never past a write to the same memory location from the same thread.
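
Tying rule 2 back to the pattern in question, the rewrite discussed above would roughly look like this (my own sketch of the reasoning, not actual JIT output):

// Original: exactly one read of SomeEvent.
EventHandler localCopy = SomeEvent;   // the only read
if (localCopy != null)
    localCopy(this, args);

// Hypothetical rewrite without the local copy: the null check and the
// invocation each read the field, i.e. a read has been introduced,
// which rule 2 forbids. Another thread could set SomeEvent to null
// between the two reads.
if (SomeEvent != null)                // read #1
    SomeEvent(this, args);            // read #2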

Since this article was written by one of the CLR developers, and given that many book authors refer to it, it can be considered a de facto standard.

True, I found another interesting piece of reading among the more recent MSDN articles: The C# Memory Model in Theory and Practice, Part 2, where a certain Igor Ostrovsky writes the following:

Just as the compiler sometimes fuses multiple reads into one, it can also split a single read into multiple reads. In the .NET Framework 4.5, read introduction happens only under rather rare, specific circumstances. However, it does sometimes happen.

And so on further in the text. But since I did not find any “CLR memory model 3” or anything newer, we may conclude that nothing has changed and that dear Igor is simply mistaken.


Another piece of evidence that nothing has changed since 2006 is the fact that SomeEvent?.Invoke(...) is compiled into exactly the kind of code presented in the question, and it is this pattern that is currently the dominant one.
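
For completeness, a sketch of that equivalence (the exact lowering is a compiler implementation detail, but it behaves like the following):

// C# 6 null-conditional invocation:
SomeEvent?.Invoke(this, args);

// behaves like the pattern from the question: the field is read exactly once.
EventHandler tmp = SomeEvent;
if (tmp != null)
    tmp(this, args);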

  • Comments are not intended for extended discussion; conversation moved to chat . - PashaPash