The main problem with files is that you end up reading everything at once and then pulling out the piece you actually need, so speed is lost because the volume of data loaded is large.
No, not quite :) You don't have to store everything inside the files that "emulate" the database. You can store there, say, just the identifiers of the files holding the content, and pull the data out of them as needed.
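A minimal sketch of that idea in Python, assuming a hypothetical layout with a small `index.json` mapping identifiers to metadata plus one JSON file per record (names and paths here are illustrative, not from any particular CMS). The index stays tiny, and the heavy content files are only read when a specific record is requested:

```python
import json
from pathlib import Path

DATA_DIR = Path("data")            # assumed layout: data/index.json + data/items/<id>.json
INDEX_FILE = DATA_DIR / "index.json"

def load_index():
    """Read only the small index file: id -> lightweight metadata (title, date, ...)."""
    with INDEX_FILE.open(encoding="utf-8") as f:
        return json.load(f)

def load_item(item_id):
    """Fetch the full content of a single record only when it is actually needed."""
    with (DATA_DIR / "items" / f"{item_id}.json").open(encoding="utf-8") as f:
        return json.load(f)

# Usage: scan the lightweight index, then pull just the one heavy record.
index = load_index()
wanted_id = next(i for i, meta in index.items() if meta["title"] == "Hello")
record = load_item(wanted_id)
```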
But relatively complex logic is already a problem. Without a database you can get stuck on a selection with a tricky condition, with joins, with grouping, the kind of thing SQL does with one hand tied behind its back...
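To make the contrast concrete, here is a rough sketch with hypothetical users/orders data (as it might come out of flat files) of a join plus grouping done by hand, next to the single SQL statement that would replace it:

```python
from collections import defaultdict

# Hypothetical data, as it might look after loading a couple of flat files.
users  = [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]
orders = [{"user_id": 1, "total": 10}, {"user_id": 1, "total": 5}, {"user_id": 2, "total": 7}]

# Manual "join" + "group by": build a lookup table, then aggregate per user.
names = {u["id"]: u["name"] for u in users}
totals = defaultdict(int)
for o in orders:
    totals[names[o["user_id"]]] += o["total"]

print(dict(totals))  # {'Alice': 15, 'Bob': 7}

# The same thing in SQL is a single statement:
#   SELECT u.name, SUM(o.total)
#   FROM users u LEFT JOIN orders o ON o.user_id = u.id
#   GROUP BY u.name;
```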
As a curious example, take a look at GetSimple CMS. Everything there is organized as XML storage, and hey, it lives and even works :)
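For a flavor of what working with such an XML page store can look like, here is a hedged sketch; the file layout and element names (`data/pages/*.xml` with `<title>` and `<content>`) are assumptions for illustration and not necessarily GetSimple's exact schema:

```python
import xml.etree.ElementTree as ET
from pathlib import Path

PAGES_DIR = Path("data/pages")   # assumed location; the real layout may differ

def load_pages():
    """Parse every page file into a plain dict; perfectly fine for a small site."""
    pages = []
    for xml_file in PAGES_DIR.glob("*.xml"):
        root = ET.parse(xml_file).getroot()
        pages.append({
            "slug": xml_file.stem,
            "title": root.findtext("title", default=""),
            "content": root.findtext("content", default=""),
        })
    return pages

for page in load_pages():
    print(page["slug"], "-", page["title"])
```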