I made a memory dump of my asynchronous network application and walked through it with the excellent DebugDiag 2 Analysis utility. It reported an error and a warning; here are their contents:


Error:

In NetworkApplication.DMP GC is running in this process. The thread that triggered the GC is 330.

Warning:

This thread is waiting for the .NET garbage collection to finish. Thread 330 triggered the garbage collection; that thread itself is not a problem. The following threads have pre-emptive GC disabled: 330.

89.45% of threads blocked (534 threads).


The recommendations the tool gave:

Error:

The reported thread stacks may be inaccurate while a GC is in progress, and the triggering thread itself is not the problem; the real issue is that garbage collections happen too often. See the referenced ASP.NET case study on high CPU in GC.

Warning:

Threads with pre-emptive GC disabled are blocked so that the GC can run. Review the referenced .NET blog post.


For context: the application is asynchronous and runs on ThreadPool threads with default settings. .NET version: 4.6.2 x64, switched to RyuJIT for better performance (it did not work well in its early versions). Maybe this is even a plus, because this version of the framework exposes new APIs for managing the garbage collector. OS: Windows Server 2012 R2 x64.
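For reference, a hedged sketch of the GC-control APIs that became available around this framework version (GCSettings latency modes and GC.TryStartNoGCRegion, added in .NET Framework 4.6); whether they actually help depends on the workload, and the 64 MB budget below is an arbitrary example value:

```csharp
using System;
using System.Runtime;

static class GcControlSketch
{
    static void Example()
    {
        // Prefer fewer blocking full collections at the cost of a larger working set.
        GCSettings.LatencyMode = GCLatencyMode.SustainedLowLatency;

        // .NET 4.6+: try to reserve an allocation budget and suppress GC for a critical section.
        if (GC.TryStartNoGCRegion(64 * 1024 * 1024))
        {
            try
            {
                // ... latency-critical work that must not be interrupted by a GC ...
            }
            finally
            {
                // EndNoGCRegion throws if the runtime already had to exit the region,
                // so check that we are still inside it.
                if (GCSettings.LatencyMode == GCLatencyMode.NoGCRegion)
                    GC.EndNoGCRegion();
            }
        }
    }
}
```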

Question: how do you fight the GC in a case like this (when it starts to stall badly)? What needs to be optimized? What advice can you give, or how have you personally dealt with similar problems (it is also interesting to learn from someone else's experience)?

P.S. Posting sample code would not be practical; the application is large and there is a lot of code.

    1 answer

    IMHO, don't fight the GC. Instead, try to avoid unnecessary, unjustified memory traffic. The topic is non-trivial, but there are two main tips:

    1. Avoid unnecessary "aging" of objects (promotions from Gen0 -> Gen1 -> Gen2);
    2. Avoid frequently creating and re-creating objects that end up on the LOH (Large Object Heap, which holds objects larger than 85,000 bytes) - see the sketch right after this list.
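A minimal sketch of tip 2, assuming a hypothetical reader type (the name, buffer size, and API are invented for the example): allocate one large buffer up front and reuse it instead of allocating a fresh array over 85,000 bytes per operation, each of which would land on the LOH.

```csharp
using System.IO;

// Hypothetical type illustrating buffer reuse.
sealed class FrameReader
{
    // Allocated once and reused; a new byte[128 * 1024] per call would go straight to the LOH.
    private readonly byte[] _buffer = new byte[128 * 1024];

    public int ReadFrame(Stream stream)
    {
        return stream.Read(_buffer, 0, _buffer.Length);
    }
}
```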

    P.S.: Try analyzing your process dump with the WinDBG utility using the !DumpHeap -stat command (you can read more on MSDN). Look at which object types are suspiciously numerous, then trace what keeps those objects alive (the !GCRoot command).
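For illustration, the typical SOS command sequence for this kind of analysis; the <MT> and <address> placeholders are taken from the previous command's output (.loadby sos clr loads the SOS extension, !DumpHeap -stat prints per-type counts and sizes, !DumpHeap -mt lists instances of a suspicious type, !GCRoot shows what keeps an instance alive):

```
.loadby sos clr
!DumpHeap -stat
!DumpHeap -mt <MT>
!GCRoot <address>
```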

    • +1 Profile, measure, and fix. You can also look at JetBrains' write-ups, since they actively fight memory traffic in ReSharper. - andreycha
    • @andreycha, after studying the problem I'd add that LOH compaction helps a lot (available since .NET 4.5.1), and before anyone blames my architecture, I'll note that many LOH objects are generated by the framework itself, in SslStream for example. - Alexis
    • @Alexis hmm, compaction will reduce the memory footprint, but GC time will go up. And as I understand it, your problem is precisely the frequent and long collections? - andreycha
    • @andreycha, you are right, but there are actually two problems. The first is that over long runs the uncompacted LOH gradually degrades performance, so I now compact the LOH in a loop after every few thousand iterations (the exact API is sketched after these comments). The second problem is, yes, the very frequent collections: I removed all the LINQ and did all sorts of optimizations (watched a bunch of the JetBrains videos), and it got better, but it still stalls badly. I think this is just how .NET is. If you have faced something similar, can you advise whether it is worth optimizing further, or is it easier in the long run to rewrite it in, say, C++? - Alexis
    • @Alexis as an additional analysis tool I can recommend Heap Allocations Viewer if you use ReSharper, or its Roslyn-based analogue if you do not. Sometimes it helps to find the "continuations", as JetBrains calls them :). In general, all the GC and memory problems I have run into were solved with profilers and small code changes. I don't know how deep into the investigation you already are, so I can't advise on rewriting. - andreycha
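For readers looking for the exact LOH compaction API discussed in the comments above (available since .NET 4.5.1), a minimal sketch; the flag applies to the next blocking full collection and then resets:

```csharp
using System;
using System.Runtime;

static class LohCompactionSketch
{
    static void CompactLargeObjectHeap()
    {
        // Request a one-time LOH compaction; it happens on the next blocking gen2 collection.
        GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
        GC.Collect();
    }
}
```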