Which is better and faster?

A million records in one table, or 400 records in each of 2,500 tables? For context: should I store each user's inventory in its own table, or dump everything into one table and pull it out by user id?

  • 7
    I'm certainly not an expert, but you would need very weighty arguments to justify creating a separate set of tables for each user. A million records is not such a large number. - teran
  • I took a million as an example; it may be more. The point is that this data is constantly deleted and re-written. Wouldn't it be better to update a separate table rather than one huge one? - hunter
  • 6
    A million records in one table, or 400 records in 2,500 tables? Every table beyond the first is a rake waiting to be stepped on. Do you really want 2,499 rakes carefully laid out along your path? - Akina
  • Well, there will constantly be data manipulation (deletion and re-insertion). Won't this slow down SELECTs? Plus, the auto-increment ID will grow indefinitely. And it may be that more than one user performs these manipulations at the same time... - hunter
  • @hunter, anything that changes often is usually better kept in a separate table and linked to the main record. - maxkrasnov

2 Answers

If you are worried that a table with a million records will be overloaded, you are worrying in vain: MySQL can handle millions of records.

Here are some tips if you need to work with big data:

  • Use indexes. Indexes slow down inserts somewhat but speed up lookups; with them, your SELECT / DELETE queries will run faster (see the sketch after this list).
  • Break large SELECT result sets into chunks in your application. For example, a PHP array of 300,000 records from the database took up 250 MB of memory in my case. Splitting the data into small batches with LIMIT ... OFFSET let me save memory and put less load on the database (shown in the same sketch below).
  • If you expect working with millions of records to be the norm for your application, take a look at MongoDB: it copes better with big data, though worse with small volumes.
  • Read about SELECT INTO OUTFILE and LOAD DATA INFILE if you are interested in bulk export and import of large data sets; a second sketch follows after this list.
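A minimal sketch of the first two tips, assuming a hypothetical inventory table (the table and column names are illustrative, not from the question):

```sql
-- Hypothetical inventory table: one row per user/item pair.
CREATE TABLE inventory (
    id      BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    user_id INT UNSIGNED    NOT NULL,
    item_id INT UNSIGNED    NOT NULL,
    qty     INT UNSIGNED    NOT NULL DEFAULT 1,
    PRIMARY KEY (id),
    KEY idx_user (user_id)  -- index so per-user lookups avoid a full scan
) ENGINE=InnoDB;

-- Per-user lookup uses idx_user instead of scanning a million rows.
SELECT item_id, qty FROM inventory WHERE user_id = 42;

-- Reading in fixed-size chunks keeps the client's memory bounded;
-- the application advances the offset until no rows come back.
SELECT id, user_id, item_id, qty
FROM inventory
ORDER BY id
LIMIT 10000 OFFSET 0;  -- next pass: OFFSET 10000, then 20000, ...
```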
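And a sketch of the bulk export/import statements from the last tip, assuming the same hypothetical table and that the server's secure_file_priv setting permits writing to the chosen directory:

```sql
-- Export the whole table to a CSV file on the server host.
SELECT id, user_id, item_id, qty
FROM inventory
INTO OUTFILE '/var/lib/mysql-files/inventory.csv'
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n';

-- Bulk-load it back; much faster than row-by-row INSERTs.
LOAD DATA INFILE '/var/lib/mysql-files/inventory.csv'
INTO TABLE inventory
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n';
```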

I agree with sanmai and will add one more thing. You definitely do not need to create such a huge number of tables: aside from being irrational, it greatly complicates application logic (creating new tables on the fly, keeping relationships between tables up to date). Instead, use a link table that stores the relationship between a user and an item: one link, one row.
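A minimal sketch of such a link table (names are illustrative, not from the question):

```sql
-- One row per (user, item) link. The composite primary key means InnoDB
-- clusters each user's rows together, so per-user reads stay cheap even
-- with millions of rows in the table.
CREATE TABLE user_items (
    user_id  INT UNSIGNED NOT NULL,
    item_id  INT UNSIGNED NOT NULL,
    quantity INT UNSIGNED NOT NULL DEFAULT 1,
    PRIMARY KEY (user_id, item_id),
    KEY idx_item (item_id)  -- supports "who owns item X" queries
) ENGINE=InnoDB;

-- Fetch one user's whole inventory from the single shared table.
SELECT item_id, quantity FROM user_items WHERE user_id = 42;
```

Note the composite primary key also sidesteps the worry about an endlessly growing auto-increment ID: the natural (user_id, item_id) pair identifies each link.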

  • 1
    "If you expect working with millions of records to be the norm for your application, take a look at MongoDB: it copes better with big data, though worse with small volumes." This statement is unfounded and incorrect. - E_p
  • 1
    @MikeSprindzhuk switching to MongoDB really makes sense only once you already have at least a cluster of database servers. If the project fits on one server, you are better off with MySQL / MariaDB. - Dmitry Maslennikov

Each new table consumes server resources to maintain an open file handle. Even if you raise open_files_limit in the usual way, server resources will still be spent on things that are not needed. In addition, the buffer pool can start behaving in interesting ways. So unless you know for certain that one table will be worse, it is better to keep homogeneous data on one server in one table.
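To see this cost in practice, one can inspect the relevant server variables and counters; a sketch using standard MySQL statements:

```sql
-- Hard cap on file descriptors the server may hold open.
SHOW VARIABLES LIKE 'open_files_limit';

-- How many tables the server keeps open in its cache at once.
SHOW VARIABLES LIKE 'table_open_cache';

-- Open_tables = tables open right now; Opened_tables growing quickly
-- means the cache is thrashing, which thousands of tables tend to cause.
SHOW GLOBAL STATUS LIKE 'Open%tables';
```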

A million records is a trifle these days.

  • And what if, for example, there are 10 billion records? - hunter
  • 1
    @hunter Up to 100 million records you can safely store everything in one table; beyond that, you can think about distributing the load (past 100 million you will likely need sharding, because even a single table will weigh a lot). - Dmitry Maslennikov