I tried writing it like this:

 var task1 = Task.Factory.StartNew(() => { File.Copy(Path, NewName1, true); });
 var task2 = Task.Factory.StartNew(() => { File.Copy(Path, NewName2, true); });

But it throws an exception saying the file is in use by another process: an unhandled exception of type System.IO.IOException in mscorlib.dll — "The process cannot access the file 'D:\output\log4.txt' because it is being used by another process." Is there a way to copy a file to several destinations simultaneously, or are there any options to speed this process up? Thanks.

  • 2
    If there are no specific requirements, then doing it sequentially in a single thread will definitely be faster. - user3195373
  • 2
    If we are talking about writing to the same logical drive, then yes, processing the files sequentially will be faster. However, if the writes go to different logical drives, the picture changes completely. - Pavel Dmitrenko
  • 3
    Are the NewName values the same? Show the real code. - VladD
  • 1
    @PavelDmitrenko If it's not too much trouble, could you briefly explain why? - user3195373
  • 1
    @user3195373 If you write to one physical drive, simultaneous writes will only slow everything down: in general, on an HDD the disk heads will keep repositioning as they attempt to write fragments to different physical areas of the disk. - Pavel Dmitrenko
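The sequential approach the commenters recommend for a single physical drive can be sketched as follows (a minimal illustration, not from the thread; the class and method names are mine):

```csharp
using System.IO;

public static class SequentialCopy
{
    // Copies src to each destination one after another. On a single
    // physical drive this avoids the head-seek contention that parallel
    // writes cause, which is why the commenters expect it to be faster.
    public static void CopyToAll(string src, params string[] dsts)
    {
        foreach (var dst in dsts)
            File.Copy(src, dst, overwrite: true);
    }
}
```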

2 Answers

An implementation via FileStream (depending on the typical file size, it makes sense to experiment with the buffer size):

 public void ParallelCopy(string src, params string[] dsts)
 {
     Parallel.ForEach(dsts, new ParallelOptions(), dstOne =>
     {
         using (FileStream source = new FileStream(src, FileMode.Open, FileAccess.Read, FileShare.Read))
         using (FileStream destination = new FileStream(dstOne, FileMode.Create))
         {
             var buffer = new byte[4096];
             int read;
             while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
             {
                 destination.Write(buffer, 0, read);
             }
         }
     });
 }

Using:

 ParallelCopy(@"x:\source.file", @"c:\destination1.file", @"d:\destination2.file"); 
  • And what was the buffer size chosen based on? The file size can be anywhere from 1 to 100 MB - Alexander Puzanov
  • 1
    It can only be determined from real-world conditions, so in this case treat it as essentially arbitrary. - Pavel Dmitrenko
  • Here is a good discussion of the optimal buffer size. The gist is that 4096 bytes is considered a reasonable middle-ground value. - Pavel Dmitrenko
  • @PavelDmitrenko 4096 bytes is the default logical cluster size in NTFS, which is why it turns out to be optimal: that amount of data is read or written in a single read or write operation (backed by the disk buffer). If the logical disk's cluster size is different, the optimal buffer size changes too. - rdorn
  • @PavelDmitrenko The comments there indicate that a larger buffer is used specifically for copying. - Athari
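Since the comments agree the best buffer size depends on the actual hardware, one way to "experiment" as the answer suggests is to time a copy at several buffer sizes. A minimal sketch (the class and method names are mine; results vary with disk type and OS cache state):

```csharp
using System.Diagnostics;
using System.IO;

public static class BufferProbe
{
    // Copies src to dst with the given buffer size and returns the
    // elapsed time in milliseconds, so different sizes can be compared
    // on the target hardware.
    public static long TimeCopy(string src, string dst, int bufferSize)
    {
        var sw = Stopwatch.StartNew();
        using (var source = new FileStream(src, FileMode.Open, FileAccess.Read))
        using (var destination = new FileStream(dst, FileMode.Create))
        {
            var buffer = new byte[bufferSize];
            int read;
            while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
                destination.Write(buffer, 0, read);
        }
        sw.Stop();
        return sw.ElapsedMilliseconds;
    }
}
```

Calling `TimeCopy` with, say, 4096, 65536, and 1048576 on a representative file gives concrete numbers instead of a guess. Note the OS file cache can skew repeated runs on the same file.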

An even simpler approach:

 public void ParallelCopy(string src, params string[] dsts)
 {
     var bytes = File.ReadAllBytes(src);
     Parallel.ForEach(dsts, d => File.WriteAllBytes(d, bytes));
 }
  • +1, but only if the file fits in memory - VladD
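For files that do not fit in memory, a possible middle ground (not from either answer; the names here are mine) is to read the source in one pass and write each chunk to all destinations, so the source file is read only once regardless of how many copies are made:

```csharp
using System.IO;
using System.Linq;

public static class FanOut
{
    // Reads src once, chunk by chunk, and writes each chunk to every
    // destination stream. Avoids both re-reading the source per copy
    // and loading the whole file into memory. 81920 is the default
    // buffer size used by Stream.CopyTo; adjust to taste.
    public static void Copy(string src, params string[] dsts)
    {
        var outputs = dsts
            .Select(d => new FileStream(d, FileMode.Create, FileAccess.Write))
            .ToArray();
        try
        {
            using (var source = new FileStream(src, FileMode.Open, FileAccess.Read, FileShare.Read))
            {
                var buffer = new byte[81920];
                int read;
                while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
                    foreach (var output in outputs)
                        output.Write(buffer, 0, read);
            }
        }
        finally
        {
            foreach (var output in outputs)
                output.Dispose();
        }
    }
}
```

On a single physical drive the writes still compete for the disk heads, as the commenters noted, but the source is touched only once and memory use stays bounded by the buffer size.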