I am learning Java. The literature and many other resources say that buffered input is the most efficient in terms of performance, but nowhere is it explained how that efficiency is actually achieved.

Can anyone enlighten me on this?

    1 answer

    When you use a BufferedReader, you make fewer requests to the underlying Reader, which in turn makes fewer requests to the InputStream.

    For example, suppose you read a file character by character. For each character, your Reader calls the read method on the FileInputStream, which in turn calls a low-level operating system function, which asks the driver for the data, and the driver moves the hard drive head and performs the actual read. And so on for every single character.
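    As a minimal sketch of such an unbuffered loop (the file name "data.txt" and the class name are hypothetical, chosen only for illustration), here is a byte-by-byte read through a plain FileInputStream, where every read() call goes all the way down to the operating system:

        import java.io.FileInputStream;
        import java.io.IOException;

        public class UnbufferedReadDemo {
            public static void main(String[] args) throws IOException {
                // "data.txt" is just an example path.
                try (FileInputStream in = new FileInputStream("data.txt")) {
                    int b;
                    // Each read() here is a separate native call, so the
                    // whole chain described above is traversed for every
                    // single byte that is read.
                    while ((b = in.read()) != -1) {
                        // process the byte
                    }
                }
            }
        }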

    But if you use a BufferedReader, it immediately reads 8 KB (the default buffer size) into memory, and subsequent character-by-character reads no longer trigger that long chain of calls all the way down to a physical read from the disk.
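    A sketch of the same character loop with buffering added (again with a hypothetical "data.txt"); the default buffer is 8192 characters, and a different size can be passed to the constructor:

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;

        public class BufferedReadDemo {
            public static void main(String[] args) throws IOException {
                // Default buffer size is 8192 chars; a custom size could be
                // given via: new BufferedReader(new FileReader("data.txt"), 64 * 1024)
                try (BufferedReader reader = new BufferedReader(new FileReader("data.txt"))) {
                    int ch;
                    // Most read() calls are served from the in-memory buffer;
                    // the underlying FileReader is asked for more data only
                    // when the buffer runs out, roughly once per 8 KB of input.
                    while ((ch = reader.read()) != -1) {
                        // process the character
                    }
                }
            }
        }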

    Naturally, if you already read the data in large chunks, a BufferedReader will not speed things up.
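    For example, reading into a large char array already amortizes the per-call overhead on its own, so an extra buffering layer adds little. A sketch under the same hypothetical file name:

        import java.io.FileReader;
        import java.io.IOException;

        public class ChunkedReadDemo {
            public static void main(String[] args) throws IOException {
                char[] chunk = new char[1 << 20]; // about a million characters per read call
                try (FileReader reader = new FileReader("data.txt")) {
                    int n;
                    // One call brings in a large block at a time, so the
                    // per-call overhead is already negligible without a buffer.
                    while ((n = reader.read(chunk, 0, chunk.length)) != -1) {
                        // process chunk[0..n)
                    }
                }
            }
        }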

    In reality everything is more complicated, because the OS or the driver may also buffer the data being read, and you cannot know for sure how a specific InputStream works or whether it has an internal buffer of its own.

    • "Naturally, if you already read the data in large chunks" - read with what, if not a BufferedReader? Does that mean there is a faster way of reading? And how does a BufferedReader work with the console in the first place (what is the difference from Scanner)? If it's not too much trouble ... - Valentyn Hruzytskyi
    • @ВалерійГруцький It is a popular misconception that a BufferedReader always speeds up reading. If you read a file in huge chunks, for example 1 MB each, you will not notice any difference. More precisely, such reading will be slightly slower than with a plain Reader (extra function calls, loops, byte copying, etc.). There is no faster method. As for the console, I can't answer. - Zergatul
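    A rough way to check this claim yourself is a timing sketch along these lines (the file name "big.txt" and the class are hypothetical; a single run is skewed by JIT warm-up and OS file caching, so treat the numbers as indicative only):

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;
        import java.io.Reader;

        public class ChunkTimingSketch {
            // Reads the whole stream in large chunks (the "huge chunks"
            // scenario from the comment above) and returns elapsed nanoseconds.
            static long time(Reader reader) throws IOException {
                char[] chunk = new char[1 << 20];
                long start = System.nanoTime();
                while (reader.read(chunk, 0, chunk.length) != -1) {
                    // discard the data; only the read time is measured
                }
                return System.nanoTime() - start;
            }

            public static void main(String[] args) throws IOException {
                // "big.txt" is just an example; use any sufficiently large file.
                try (Reader plain = new FileReader("big.txt")) {
                    System.out.println("plain reader:    " + time(plain) + " ns");
                }
                try (Reader buffered = new BufferedReader(new FileReader("big.txt"))) {
                    System.out.println("buffered reader: " + time(buffered) + " ns");
                }
            }
        }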