My suggestion is NoSQL inside SQL: make one table with two fields (key and value) and work with that. Such a system also scales easily. To be honest, I have not implemented this exact scheme in practice myself; the projects I have done used Redis alone, Redis + Oracle, and Redis + MySQL. Redis can persist to disk, but it is still limited by RAM.
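A minimal sketch of that "NoSQL inside SQL" idea, using Python's built-in sqlite3 as a stand-in for whatever database you use. The table and column names (kv, k, v) are my own illustrative choices, not anything from a real project:

```python
import sqlite3

# One table, two fields: a key-value store inside a relational database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")

def kv_set(key, value):
    # INSERT OR REPLACE acts as an upsert keyed on the primary key.
    conn.execute("INSERT OR REPLACE INTO kv (k, v) VALUES (?, ?)", (key, value))

def kv_get(key):
    row = conn.execute("SELECT v FROM kv WHERE k = ?", (key,)).fetchone()
    return row[0] if row else None

kv_set("user:42:plan", "premium")
print(kv_get("user:42:plan"))  # premium
```

Scaling then amounts to sharding this one table by key, which is why the scheme stays simple.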
There was one project, a personal-account portal for an ISP's customers, where Redis held nothing but reference data.
If all of your business logic fits into 8 GB of server RAM, or into 2×8 GB across two servers, then Redis alone can work; then again, 16 GB of data will not overload a properly designed MySQL either.
Many people naively believe that adding a cache and stuffing everything into it will solve all their problems. I don't want to insult anyone, but if you stop load-testing once the tests no longer fall over, sooner or later you will hit an unrecoverable situation. Just in case, plan for the cache being switched off and the system still working without it.
For the scheme with a database plus a cache, I recommend the following (from real experience: about four projects done, and a fourth like this one in progress now):
- Develop a data model for all the business logic. Try to design it so that INSERT, UPDATE and SELECT operations do not mix on the same tables. To explain: if a table only ever receives INSERTs, strip it of indexes. Try to avoid UPDATE altogether, or at worst restrict it to a single field (a foreign key), and write new values as new rows in another table. An example: some guys built a project that kept collapsing under heavy load (a four-tier architecture with an application server in Delphi; it died at that tier, which could not cope with the flood of UPDATE transactions). They too decided to cache everything, but it did little good, and they claimed they simply could not model their data in NoSQL. In reality all the slowness came from UPDATEs in the ordinary relational database, while NoSQL could not hold the volume (keep the RAM limit in mind). I helped them redesign their storage; their schema was as relational as it gets, and my advice was to get rid of UPDATE and replace everything with INSERT. So far it runs fine without leaning on Redis, though Redis is still there for intermediate odds and ends (reference data and settings).
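The "replace UPDATE with INSERT" move above can be sketched like this, again with sqlite3 and made-up table names: instead of updating a value in place, append a new row referencing the entity by one foreign-key field, and read the latest row back.

```python
import sqlite3

# Append-only storage sketch: no UPDATE, only INSERT.
# Schema and names are illustrative assumptions, not from the original post.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute(
    "CREATE TABLE balance_history ("
    " id INTEGER PRIMARY KEY,"   # monotonically increasing rowid
    " user_id INTEGER,"          # the single foreign-key field
    " balance INTEGER)"
)
conn.execute("INSERT INTO users (id, name) VALUES (1, 'alice')")

def set_balance(user_id, balance):
    # Append a new row instead of updating the old one; an insert-only
    # table can get by with fewer indexes and no row-level contention.
    conn.execute(
        "INSERT INTO balance_history (user_id, balance) VALUES (?, ?)",
        (user_id, balance),
    )

def current_balance(user_id):
    row = conn.execute(
        "SELECT balance FROM balance_history WHERE user_id = ? "
        "ORDER BY id DESC LIMIT 1",
        (user_id,),
    ).fetchone()
    return row[0] if row else None

set_balance(1, 100)
set_balance(1, 250)
print(current_balance(1))  # 250
```

The trade-off is that reads must pick the latest row and old rows accumulate, so you periodically archive history; but the hot write path becomes pure INSERT.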
- With that model in hand you get a much clearer picture of how much data moves through the system. It is easy to estimate, even in bytes: record size multiplied by user activity and by the number of users. From that you can see immediately what belongs in the cache, and when the cache will overflow and everything will collapse. Often only the immutable reference data fits, and the associations between entities do not. I have seen guys shove everything from the database into Redis; my theory held, everything flew for them, right up until it didn't, and then it collapsed. Besides, some hot spots are better optimized in the database itself. Those guys made their decision on the strength of a successful previous project, but they were looking at the wrong thing: there they had cached not data from the database but the content already rendered for the user (HTML pages).
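The byte-level estimate above is just arithmetic; all the numbers here are made-up assumptions for illustration, not measurements from any project:

```python
# Back-of-envelope cache sizing: record size x records per user x users.
record_bytes = 512          # assumed average size of one cached record
records_per_user = 20       # assumed records an active user touches
active_users = 100_000      # assumed concurrent audience

working_set = record_bytes * records_per_user * active_users
print(f"{working_set / 2**30:.1f} GiB")  # compare against available RAM
```

If the result is already near your RAM budget before you add associations and Redis's own overhead, you know in advance where the collapse will come from.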
- Do not use Redis or memcached on their own. Or, at the extreme, use them together with a database whose tables carry no indexes, so that INSERTs fly. Startup will take a bit longer, but whether that matters depends on your priorities.
PS: The experience is real, but my answers lack precise terminology and smoothly connected sentences because my thoughts outrun my typing, so sorry for the spelling.