There is a task to organize file transfer from a client (a program installed on the user's PC) to a server. Everything should be secure, and information about each uploaded file should be added to a database, which leads to the following question.

  1. What protocol would you recommend using? (Other than HTTP/HTTPS.)

I lean toward SFTP (it gives me security for free), but I don't understand how, in that case, to notify the database about a transferred file. Note also that a transferred file can be larger than 10 GB; anything less won't do.

In principle, this is similar to services like Dropbox and Google Drive. As far as I know, Dropbox uses a proprietary protocol, and similar services (Drive, etc.) keep theirs undisclosed; I haven't found any information about them on the Internet.
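One common way to close the "how does the database learn about the file" gap with SFTP: a server-side process (triggered by cron, inotify, or an SSH `ForceCommand` hook) scans the upload directory after transfers complete and records each file's metadata. A minimal sketch, with `sqlite3` standing in for the real database; the table name and columns are assumptions for illustration:

```python
import hashlib
import os
import sqlite3
import tempfile

def record_upload(db, path):
    """Insert one uploaded file's metadata; the schema is an assumption."""
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in 1 MiB pieces so a multi-GB file never has to fit in RAM.
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            sha.update(chunk)
    db.execute(
        "INSERT INTO uploads(name, size, sha256) VALUES (?, ?, ?)",
        (os.path.basename(path), os.path.getsize(path), sha.hexdigest()),
    )

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE uploads(name TEXT, size INTEGER, sha256 TEXT)")

# Simulate a file that has just finished arriving in the SFTP upload directory.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "report.bin")
    with open(path, "wb") as f:
        f.write(b"payload" * 1000)
    record_upload(db, path)

name, size, digest = db.execute("SELECT name, size, sha256 FROM uploads").fetchone()
print(name, size)
```

The key detail for large files is that only the hash state, never the whole file, is held in memory; the same chunked loop works unchanged at 10 GB.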

  • If these services used SFTP, they would most likely still have their own servers that process incoming files in their own way. Note that, as far as I know, these programs transfer files from the client in chunks. - lampa
  • I'm not claiming they use SFTP; in fact, I believe they don't. I also think the files are transferred in chunks. - Sever
  • According to the English Wikipedia, Dropbox uses librsync, which means it transfers the differences between files in blocks. Another option is simply to use rsync over SSH. - eigenein
  • Hmm, now this is interesting; I'll dig into it right away. Thanks. - Sever
  • @eigenein, please post your comment as an answer so I can accept it, because this is exactly what I need. You made this suggestion earlier than the respected ToRcH565, so I would like to accept your answer. - Sever
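The librsync approach mentioned in the comments rests on a weak rolling checksum: the checksum of a window can be updated in O(1) as the window slides one byte, so matching blocks can be found at any offset without rehashing from scratch. A self-contained sketch of an rsync-style rolling checksum (the 16-bit modulus and the a/b split follow rsync's published scheme; this is an illustration, not librsync's actual code):

```python
MOD = 1 << 16  # rsync's weak checksum works modulo 2^16

def weak_checksum(block):
    """Compute the two-part weak checksum of a block from scratch."""
    a = sum(block) % MOD
    b = sum((len(block) - i) * byte for i, byte in enumerate(block)) % MOD
    return (b << 16) | a

def roll(checksum, out_byte, in_byte, block_len):
    """Slide the window one byte: drop out_byte, append in_byte, in O(1)."""
    a = checksum & 0xFFFF
    b = checksum >> 16
    a = (a - out_byte + in_byte) % MOD
    b = (b - block_len * out_byte + a) % MOD
    return (b << 16) | a

data = bytes(range(256)) * 4
n = 64  # window (block) size for the demo

# Verify that rolling updates agree with recomputing from scratch.
c = weak_checksum(data[0:n])
for i in range(1, 100):
    c = roll(c, data[i - 1], data[i - 1 + n], n)
    assert c == weak_checksum(data[i:i + n])
```

In the real protocol, a match on this cheap checksum is then confirmed with a strong hash (MD5 in classic rsync) before the block is reused instead of retransmitted.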

2 answers

Implement a client-server application (both the client and the server) with its own block-based file transfer protocol, using a symmetric encryption algorithm (any ready-made third-party one) with a session-dependent key, and transfer only the changed blocks of a file (you can track changes with per-chunk hashes).

Security guaranteed
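The "track changed blocks with hashes" part of this answer can be sketched by keeping a per-chunk digest manifest on the server and re-sending only the chunks whose digests differ. The chunk size and function names below are illustrative; a real deployment would use a much larger chunk (e.g. 4 MiB):

```python
import hashlib

CHUNK = 4  # tiny chunk size so the demo is readable; use megabytes in practice

def manifest(data):
    """Per-chunk SHA-256 digests for fixed-size chunks of the data."""
    return [hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)]

def changed_chunks(old, new):
    """Indices of chunks the client must re-send (modified or newly appended)."""
    old_m, new_m = manifest(old), manifest(new)
    return [i for i, h in enumerate(new_m)
            if i >= len(old_m) or old_m[i] != h]

v1 = b"AAAABBBBCCCC"
v2 = b"AAAAbbbbCCCCDDDD"   # chunk 1 modified, chunk 3 appended

print(changed_chunks(v1, v2))  # → [1, 3]
```

With fixed-offset chunks like these, an insertion near the start of a file shifts every later chunk and defeats the comparison; that is exactly the problem the rolling-checksum approach in rsync/librsync solves.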

    (except http / https)


    In general, I picture it like this: a web application runs on the server, holds a connection to the database, and listens for HTTPS. When a POST arrives, it streams the file from the request body into the database.

    If the task is just to get files into a database, MongoDB, for example, supports a file store at the database level (GridFS). You can set up an SSH tunnel or a VPN and upload via mongofiles.
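    The streamed-POST idea from this answer can be sketched with nothing but the standard library. The handler below reads the request body in small pieces, so even a 10 GB upload never has to fit in memory; the in-memory dict stands in for the real database, and all names are assumptions for the sketch:

```python
import hashlib
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# In-memory dict standing in for the real database (assumption for the sketch).
uploads = {}

class UploadHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        name = self.path.lstrip("/") or "unnamed"
        remaining = int(self.headers["Content-Length"])
        sha = hashlib.sha256()
        size = 0
        # Read the body in 64 KiB pieces: memory use stays flat for huge files.
        while remaining > 0:
            chunk = self.rfile.read(min(64 * 1024, remaining))
            if not chunk:
                break
            sha.update(chunk)
            size += len(chunk)
            remaining -= len(chunk)
        uploads[name] = {"size": size, "sha256": sha.hexdigest()}
        self.send_response(201)
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the demo output quiet

server = HTTPServer(("127.0.0.1", 0), UploadHandler)  # port 0: pick a free port
port = server.server_address[1]
t = threading.Thread(target=server.handle_request)  # serve exactly one request
t.start()

body = b"x" * 100_000
req = urllib.request.Request(
    f"http://127.0.0.1:{port}/demo.bin", data=body, method="POST")
with urllib.request.urlopen(req) as resp:
    status = resp.getcode()
t.join()
server.server_close()
print(status, uploads["demo.bin"]["size"])
```

    The demo uses plain HTTP on localhost for brevity; in the scenario the answer describes, TLS would sit in front (HTTPS), and the handler would write chunks into the database instead of a dict.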

    • There will be a lot of connections, and files of 10 GB and more are unlikely to transfer reliably over HTTP; web servers will struggle with the load. - Sever
    • Are there any benchmarks or speed tests? Are there fundamental differences in resource cost between HTTP and non-HTTP connections? - eigenein
      Thank you for kindly trying to help me, I really do appreciate it, but let's not go that far, please. I'm not trying to prove anything; I'm asking for advice from people who have already built something like this or are knowledgeable in the field (and in no way am I questioning your knowledge). But I will not use HTTP for this purpose; I said as much in the question. - Sever