I have a Django app using mongoengine. The application frequently performs aggregate database queries, and every field that appears in a $match stage is indexed. If necessary, I can attach examples of such queries.
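A minimal sketch of the kind of pipeline I mean (the model, collection, and field names here are invented for illustration; the real queries look similar):

```python
# Sketch of a typical aggregation; "category" stands in for one of the
# indexed fields used in $match (names are hypothetical).
pipeline = [
    {"$match": {"category": "news"}},   # $match on an indexed field
    {"$sort": {"created_at": -1}},      # sort before limiting
    {"$limit": 50},                     # each query returns at most 50 documents
]

# With mongoengine, such a pipeline would be run on a queryset, e.g.:
# results = list(Article.objects.aggregate(*pipeline))
```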
My problem is that I don't understand how caching works. After several queries to the database, the mongod process eats up all available memory and then crashes. The documentation says that when memory runs out, old pages should be evicted on an LRU basis and new ones put in their place.
Version 3.2, the storage engine is MMAPv1. As written in the documentation, for normal caching it is enough to have roughly the "working set" in RAM, i.e. dataSize + indexSize. In my case that is about 370 MB (~290 MB dataSize, ~80 MB indexSize).
My db.stats():
{
    "db" : "random_db",
    "collections" : 14,
    "objects" : 155613,
    "avgObjSize" : 1859.8720672437394,
    "dataSize" : 289420272,
    "storageSize" : 338821120,
    "numExtents" : 32,
    "indexes" : 22,
    "indexSize" : 82512192,
    "fileSize" : 1006632960,
    "nsSizeMB" : 16,
    "extentFreeList" : { "num" : 25, "totalSize" : 258314240 },
    "dataFileVersion" : { "major" : 4, "minor" : 22 },
    "ok" : 1
}

The VPS has 1 GB of RAM, and I also enabled 1 GB of swap. In practice, those 2 GB are enough for only about 70 queries, each of which returns just 50 documents (I use $limit in the pipeline). In other words, roughly 3-4 thousand documents in the cache take up more than 2 GB, even though the entire database, including indexes, weighs less than 400 MB.
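For reference, the working-set estimate computed from the db.stats() numbers above:

```python
# Working-set estimate from the db.stats() output above (values in bytes)
data_size = 289420272    # dataSize
index_size = 82512192    # indexSize

working_set_mb = (data_size + index_size) / 1024.0 / 1024.0
print(round(working_set_mb))  # ~355 MB -- well under even the 1 GB of RAM
```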
My db.serverStatus().mem:
{
    "bits" : 64,
    "resident" : 112, // roughly this value right after the service starts, and this is what grows
    "virtual" : 2523,
    "supported" : true,
    "mapped" : 1136,
    "mappedWithJournal" : 2272
}

So, supposedly, the cache cannot occupy more than 2523 MB. In practice that is not what happens. The question seems simple, but I couldn't find anything more specific in the documentation; everywhere it just says "how it should be".