In short, I have the following class:
class Metric
{
    public Word Word1;
    public Word Word2;
    public int Simil;

    public Metric(Word word1, Word word2)
    {
        Word1 = word1;
        Word2 = word2;
    }

    public override int GetHashCode()
    {
        return Word1.GetHashCode() + Word2.GetHashCode();
    }
}

public class Word
{
    public string Text;

    public Word(string text)
    {
        Text = text;
    }

    public override int GetHashCode()
    {
        return Text.GetHashCode();
    }
}

The class contains data about the similarity of two words.
There is another class that contains a field of type HashSet<Metric>.
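For context, it is used roughly like this (the container class, method names, and the similarity calculation here are only illustrative, not my exact code):

using System.Collections.Generic;

class SimilarityStore
{
    // The field mentioned above: the set of word pairs processed so far.
    private readonly HashSet<Metric> _metrics = new HashSet<Metric>();

    public void Process(Word a, Word b)
    {
        var pair = new Metric(a, b);

        // Add returns false when an equal item is already in the set,
        // so the similarity for a pair should be computed only once.
        // Note: HashSet deduplicates by value only if Metric also overrides
        // Equals; overriding GetHashCode alone is not sufficient.
        if (_metrics.Add(pair))
        {
            pair.Simil = ComputeSimilarity(a, b);
        }
    }

    private int ComputeSimilarity(Word a, Word b)
    {
        // Placeholder; the real similarity calculation is done elsewhere.
        return 0;
    }
}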
When the collection grows to several million items, adding new elements becomes noticeably slower than it was at the start.
Is there any way to avoid this? In theory a LinkedList would stay fast for insertions, but then I would lose the uniqueness check on the word pairs, which I rely on to avoid recalculating the same pair twice.
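For comparison, the LinkedList variant I have in mind would look roughly like this (again only a sketch); appending stays O(1) no matter how large the list gets, but the only built-in way to check whether a pair was already processed is a linear scan:

using System.Collections.Generic;

class SimilarityStoreList
{
    private readonly LinkedList<Metric> _metrics = new LinkedList<Metric>();

    public void Process(Word a, Word b)
    {
        var pair = new Metric(a, b);

        // O(1) append regardless of how many items are already stored...
        _metrics.AddLast(pair);

        // ...but a duplicate check such as _metrics.Contains(pair) would be
        // an O(n) scan over millions of nodes, which defeats the purpose.
    }
}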