Trivial question.

The online store's database stores absolutely everything: pages, menu items, controller names, products, categories, links, etc. Since the database will be updated infrequently, a query-caching module seems worthwhile. I searched Google and read the forums, and decided to do something similar to memcached:

  1. Static tables (menus, controllers) are serialized right away and written to a file.
  2. All user-driven queries (for example, a product search by filters) are cached: the SELECT result (an array) is serialized and written to a file whose name is a hash of the query itself. If a file with that hash already exists, the database is not queried; the data is read from the file, deserialized back into an array, and sent to the browser.
  3. When an administrator changes a table, the cache files are destroyed entirely; the next request, not finding a cache file, gets fresh data from the database and "tells" the server to create an updated cache file from it.
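The three steps above can be sketched as follows. This is a minimal illustration in Python (the actual engine is presumably PHP); the cache directory name, the JSON serialization, and the `run_query` callback are assumptions for the sketch, not part of the original design:

```python
import hashlib
import json
import os

CACHE_DIR = "cache"  # hypothetical cache directory


def cache_key(sql, params=()):
    """Hash the query text plus its parameters into a file-safe label."""
    raw = sql + "|" + json.dumps(list(params))
    return hashlib.md5(raw.encode("utf-8")).hexdigest()


def cached_select(sql, params, run_query):
    """Return the cached result if present; otherwise run the query and cache it."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    path = os.path.join(CACHE_DIR, cache_key(sql, params) + ".json")
    if os.path.exists(path):
        with open(path, encoding="utf-8") as f:
            return json.load(f)       # cache hit: no database round-trip
    rows = run_query(sql, params)     # cache miss: hit the database
    with open(path, "w", encoding="utf-8") as f:
        json.dump(rows, f)            # serialize the result set to disk
    return rows


def invalidate_cache():
    """Called when the administrator changes a table: drop every cache file."""
    for name in os.listdir(CACHE_DIR):
        os.remove(os.path.join(CACHE_DIR, name))
```

Keeping all storage access inside one function like `cached_select` also makes it easy to swap the file backend for memcached later, which is what the wrapper suggestion in the comments amounts to.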

How reasonable is this solution? I will consider all comments and advice.

  • 1. Why do you want to write to a file rather than to memcached? 2. Is MySQL's built-in query cache unsuitable for some reason? Not effective enough? - cheops
  • Probably not; I have no time to learn memcached, and the whole engine is almost completely written... - Deus
  • With memcached you don't touch a slow disk - all operations happen in fast RAM, and if you want, you can move it to a separate server. In any case, try to implement it through a wrapper so that later you can simply swap the storage. - cheops
  • It is not the raw database data that should be cached, but chunks of HTML ready for output. And the database queries themselves should be optimized so that there is little difference between reading the data from a file on disk and re-querying the database. If that doesn't help, then memcached. Learning it takes no longer than writing your own save-to-file code. - Mike
  • "Ready-made chunks of HTML" means cached static pages; but if a user searches for goods with several filters, the result will be an array, not HTML code... - Deus

1 answer

MySQL itself is smart enough to cache query results! As long as the tables involved in the query do not change, and as long as the allocated RAM is sufficient, a repeated query will be answered instantly.

With memcached or any other shared memory you achieve exactly the same result: speed in exchange for memory. Try giving the DB more memory first; this is the easiest way to cache!
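Concretely, the query cache is controlled by a few server variables, set in `my.cnf`. The values below are purely illustrative; note that the query cache was deprecated in MySQL 5.7.20 and removed entirely in 8.0, so this applies to 5.7 and earlier only:

```ini
# my.cnf — illustrative values, not a recommendation
[mysqld]
query_cache_type  = 1      # 0 = off, 1 = on, 2 = on demand (SELECT SQL_CACHE ...)
query_cache_size  = 64M    # total memory reserved for cached result sets
query_cache_limit = 1M     # results larger than this are never cached
```

After changing these, `SHOW STATUS LIKE 'Qcache%'` shows hit/insert counters, which is the statistic mentioned in the comments below for judging whether the cache actually pays off.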

  • You write that MySQL is smart enough, but for some reason almost all engines use some kind of "shared memory". I wonder, how can I give the DB more memory? - Deus
  • Well, first, put more RAM in the database server. Then tune the settings: dev.mysql.com/doc/refman/5.7/en/query-cache-configuration.html dba.stackexchange.com/questions/7344/… - artoodetoo
  • This refers to the MySQL query cache; a fixed amount of memory can be allocated for it. But to tell whether it is effective in your conditions, you need to look at the statistics - it may turn out to be wasting resources (more inserts than reads, or roughly equal numbers, which is also bad). Plus it does not cache parameterized queries, which is why people often resort to an external cache. - cheops
  • When changes occur frequently, any cache is ineffective. But MySQL tracks the validity of its cache itself, whereas with an external cache that becomes your concern - extra room for slowdowns and bugs. You can even run into a race condition. - artoodetoo
  • @artoodetoo, I use stmt not for the efficiency of prepared queries, but to automatically escape "dangerous" characters. A simplified example: $stmt->prepare("INSERT INTO t VALUES ('$id', '$name')"); $stmt->execute();. Does MySQL cache such queries (with no "?" placeholders in the SQL)? - Deus