I do not quite understand how DBMSs, especially relational ones, ensure transactionality, consistency, and integrity of the data they store.

For example, what happens if:

  • Power is lost;
  • A system call fails, for example a write to a file;
  • The process is killed by the operating system for one reason or another;
  • The storage medium is physically damaged (some clusters/sectors fail).

Even if we consider a database that stores its data not as tens or hundreds of interdependent tables but as plain <key-value> records, I can hardly imagine how the data can be protected from corruption given such a range of possible failures.

For example, here is what I can imagine:

  • You can make full backups (which may themselves turn out to be corrupted after they are saved), but as the amount of data grows this approach becomes less and less practical;
  • You can store the data in a redundant encoding, which, as I understand it, is what file systems and storage media already do to protect data from minor damage on the order of a few bits or bytes (see the sketch after this list).
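
To make the second idea a bit more concrete: as far as I know, many engines store a checksum next to every page or record and verify it on read, so silent corruption is at least detected (repairing it then relies on a replica, a backup, or a log). Below is a minimal sketch in C of that detection step; the record layout and function names are made up purely for illustration, and a bitwise CRC-32 is used only as an example of a redundancy code.

    /* Sketch: detecting a damaged record with a per-record checksum.
     * record_t, crc32_of and record_is_valid are invented names. */
    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    /* Plain bitwise CRC-32 (no lookup table) -- slow but short. */
    static uint32_t crc32_of(const void *buf, size_t len) {
        const uint8_t *p = buf;
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= p[i];
            for (int b = 0; b < 8; b++)
                crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
        }
        return ~crc;
    }

    typedef struct {
        char     payload[64];  /* the actual key/value bytes            */
        uint32_t crc;          /* checksum of payload, stored alongside */
    } record_t;

    /* On read: a mismatch means the record was damaged on the medium. */
    static int record_is_valid(const record_t *r) {
        return crc32_of(r->payload, sizeof r->payload) == r->crc;
    }

    int main(void) {
        record_t r;
        memset(&r, 0, sizeof r);
        strcpy(r.payload, "key=42;value=hello");
        r.crc = crc32_of(r.payload, sizeof r.payload);

        r.payload[3] ^= 0x01;  /* simulate a single flipped bit on disk */
        printf("record valid after bit flip: %d\n", record_is_valid(&r));
        return 0;
    }

Of course this only detects the damage; it does not by itself say where an intact copy comes from.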

This is where my ideas end.

If we also take into account that in modern systems written data travels a very long way (the process's buffer, the OS buffer, the drive's buffer), everything becomes even more complicated.
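
To make that path concrete with plain C calls: fwrite() only copies the bytes into the process's stdio buffer, fflush() hands them to the OS page cache, and only fsync() asks the kernel (and, on most systems, the drive) to put them on stable storage. A minimal sketch, assuming a POSIX system; the function name and error handling are just illustrative:

    #include <stdio.h>
    #include <unistd.h>   /* fsync() -- POSIX, not standard C */

    /* Append len bytes to path and report success only once every
     * buffering layer has (as far as we can ask) passed them on. */
    int write_durably(const char *path, const void *buf, size_t len) {
        FILE *f = fopen(path, "ab");
        if (!f) return -1;

        /* 1. fwrite(): bytes land in the process's stdio buffer only.     */
        if (fwrite(buf, 1, len, f) != len) { fclose(f); return -1; }

        /* 2. fflush(): bytes move to the OS page cache -- still just RAM. */
        if (fflush(f) != 0) { fclose(f); return -1; }

        /* 3. fsync(): the OS (and usually the drive) is asked to write
         *    them to the medium; only after this returns can the data be
         *    expected to survive a power loss.                            */
        if (fsync(fileno(f)) != 0) { fclose(f); return -1; }

        return fclose(f) == 0 ? 0 : -1;
    }

So each layer in the chain has its own "flush" step, and a crash before the last one simply loses whatever was still sitting in RAM.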

How is all this actually done? Are special tools used for it, or can a full-fledged DBMS be implemented using only standard C/C++ facilities: fopen(), fwrite(), and so on?
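
As a concrete illustration of what I mean by "only standard tools": presumably a DBMS would also have to control the order in which things reach the disk, for example making sure a description of a change is durable before any commit marker is, so that a crash in between does not leave a half-applied but "committed" transaction. A rough sketch of that ordering (file name, record format, and function names are all invented here):

    #include <stdio.h>
    #include <unistd.h>   /* fsync() -- POSIX, not standard C */

    static int flush_to_disk(FILE *f) {
        return (fflush(f) == 0 && fsync(fileno(f)) == 0) ? 0 : -1;
    }

    /* Append one change record and then a commit marker, forcing the
     * change to stable storage before the marker is even written. */
    int commit_change(const char *log_path, const char *change) {
        FILE *log = fopen(log_path, "ab");
        if (!log) return -1;

        /* 1. The change itself must be durable first.                  */
        if (fprintf(log, "CHANGE %s\n", change) < 0 ||
            flush_to_disk(log) != 0) { fclose(log); return -1; }

        /* 2. Only then the commit marker; if the crash happens before
         *    this record is durable, recovery can treat the whole
         *    transaction as never having happened.                     */
        if (fprintf(log, "COMMIT\n") < 0 ||
            flush_to_disk(log) != 0) { fclose(log); return -1; }

        return fclose(log);
    }

Whether such sequencing on top of fopen()/fwrite()/fsync() is really all that production systems rely on, or whether they need something lower-level, is exactly what I am asking.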

  • They don't really ensure it themselves at all. Everything is provided by backup power, backups, and other external means. – Enikeyschik
