Memcache is blazingly fast out of the box; there is nothing to optimize there. The only thing you can reasonably complain about performance-wise is the time it takes to connect to the memcached server, and that kind of problem is solved by a competent sysadmin, not by code.
It is also necessary to take into account that there are different types of data involved (around 20-25 in total).
So why not delegate the question of converting data types and "pushing anything into the cache" to a service layer inside the PHP code? As a rule, everything that goes beyond int and string is stored as JSON, and the layer through which caching happens (for example, in any PHP framework) hands everything back to you in the right format right away. Why doesn't that approach suit you?
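A minimal sketch of what such a layer might look like, assuming the standard memcached PECL extension; the class name, the "json:" prefix and the server address are all illustrative, not from any framework:

```php
<?php
// Thin layer that hides format conversion from the caller (illustrative only).
class JsonCacheLayer
{
    public function __construct(private Memcached $backend) {}

    public function set(string $key, mixed $value, int $ttl = 0): bool
    {
        // Anything beyond int/string goes into the cache as JSON.
        if (!is_int($value) && !is_string($value)) {
            $value = 'json:' . json_encode($value);
        }
        return $this->backend->set($key, $value, $ttl);
    }

    public function get(string $key): mixed
    {
        $value = $this->backend->get($key);
        if (is_string($value) && str_starts_with($value, 'json:')) {
            return json_decode(substr($value, 5), true);
        }
        return $value;
    }
}

// Usage: the caller never thinks about formats.
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211); // address/port are an assumption
$cache = new JsonCacheLayer($mc);
$cache->set('user:42', ['name' => 'Ivan', 'roles' => ['admin']]);
$user = $cache->get('user:42');
```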
Is there a mechanism in PHP, or in one of the caching systems, that lets you set a maximum number of records and assign different behaviors to the blocks where they are stored?
All of this is handled by "cache layers", for example Zend\Cache or yii\caching. You can write your own adapter class or trait that extends an existing framework class and adds the behavior you need. What's more, one adapter can use several cache backends (by "backend" I mean the cache service: memcache, SQL, Tarantool, file, APC, ...).
For example: where you need an "unbounded" cache that survives a reboot and you have an admin, use Tarantool; where there is no admin, use SQL; where only a small volume is needed, use memcache; where no scaling is required and bare-bones simplicity will do, use plain files. This way you can hit any performance target regardless of the data formats in the cache; other factors matter far more.
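A rough sketch of what such an adapter could look like; the CacheBackend interface, the class names and the record limit are invented here for illustration and are not taken from any framework:

```php
<?php
// Hypothetical backend contract (a real framework would provide its own).
interface CacheBackend
{
    public function get(string $key): mixed;
    public function set(string $key, mixed $value, int $ttl): bool;
    public function delete(string $key): bool;
}

// One adapter, several backends: a "hot" backend (e.g. memcache) capped at a
// maximum number of records, with a durable backend (SQL, Tarantool, files, ...)
// behind it.
class TieredCache
{
    /** @var string[] keys currently held in the hot backend */
    private array $hotKeys = [];

    public function __construct(
        private CacheBackend $hot,
        private CacheBackend $durable,
        private int $maxHotRecords = 1000,
    ) {}

    public function set(string $key, mixed $value, int $ttl = 3600): bool
    {
        if (count($this->hotKeys) >= $this->maxHotRecords) {
            // Evict the oldest hot entry once the record limit is reached.
            $this->hot->delete(array_shift($this->hotKeys));
        }
        $this->hotKeys[] = $key;
        $this->hot->set($key, $value, $ttl);
        return $this->durable->set($key, $value, $ttl);
    }

    public function get(string $key): mixed
    {
        // Try the fast backend first, fall back to the durable one.
        return $this->hot->get($key) ?? $this->durable->get($key);
    }
}
```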
If there are performance losses relative to some "ideal cache benchmark", they are clearly not caused by memory fragmentation. Remember that memcache is a service you talk to over TCP: you have to connect to it, spend time on the send/receive round trip, assemble packets, and pay all the other costs of TCP; and if the memcached server lives on a different machine from the application server, you also wait on the network. All of that eats up an order of magnitude more CPU cycles than memcached spends extracting fragmented data.
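You can see this for yourself with a quick, rough measurement; this sketch assumes the memcached PECL extension and a server listening locally on the default port 11211:

```php
<?php
// Compare one network round trip against one local format conversion.
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211); // assumed local server

$payload = ['id' => 42, 'tags' => range(1, 50)];
$mc->set('demo', json_encode($payload));

// Network round trip: send + receive for a single get().
$start = hrtime(true);
$raw = $mc->get('demo');
$networkNs = hrtime(true) - $start;

// Format conversion: decoding the same JSON locally.
$start = hrtime(true);
$decoded = json_decode($raw, true);
$decodeNs = hrtime(true) - $start;

printf("round trip: %d ns, json_decode: %d ns\n", $networkNs, $decodeNs);
// On typical setups the round trip dominates the decode by a wide margin.
```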
Besides, it is a service, and a service that takes the whole question of optimizing and storing your data off your hands: you don't need to think about it. Unless, of course, you want to write your own "better" cache service, but that is an entirely different question. Yes, there will be a cost for converting formats, but you pay it in any case, so there is no point dancing around it.
And the main thing to remember is that CPU power usually costs pennies compared to the time programmers burn chasing ideal performance for their task :)