
How I patch the Universe


Habr has plenty of articles about game development, but very few of them cover "backstage" topics. One such topic is organizing the delivery of the game itself to a large number of users over a long period (a year, two, three). Although the task may seem trivial to some, I decided to share my experience of stepping on rakes in this area on one specific project. If you are interested, welcome under the cut.

A small digression about information disclosure. Most companies guard jealously against their "internal kitchen" becoming public. Why, I do not know, but so it is. On this particular project, The Universim, I was lucky: Alex Koshelkov, CEO of Crytivo Inc. (previously Crytivo Games Inc.), turned out to be entirely reasonable about such matters, so I have the opportunity to share my experience with the rest of you.

A little about the patcher itself


I have been involved in game development for a long time: on some projects as a game designer and programmer, on others as a blend of sysadmin and programmer (I do not like the term devops, as it does not accurately reflect the essence of the tasks I perform on such projects).

At the end of 2013 (it is frightening how time flies) I started thinking about delivering new versions (builds) to users. Of course, there were many existing solutions for this task at the time, but the desire to make my own product and a craving for "bicycle building" won out. Besides, I wanted to explore C# more deeply, so I decided to make my own patcher. Looking ahead, I will say the project was a success: more than a dozen companies have used it in their projects, and some asked for a version tailored to their wishes.

The classic solution involves creating delta packages (diffs) from version to version. However, this approach inconveniences both the player-testers and the developers. In one scheme, to get the latest version of the game you must walk the entire chain of updates: the player has to sequentially download a certain amount of data that they will never use, while the developer has to keep on their server (or servers) a pile of obsolete data that some player might one day need.

In another scheme, you download a single patch from your version straight to the latest, but then the developer has to maintain that whole zoo of patches. Some patch-system implementations also require specific software and logic running on the servers, which creates yet another headache for developers. On top of that, game developers often do not want to do anything not directly related to developing the game itself. I will say more: most are not experts in setting up content-distribution servers; it is simply not their area of activity.

With all this in mind, I wanted to come up with a solution that would be as simple as possible both for users (who want to get playing faster, not dance around with patches for different versions) and for developers, who need to write the game rather than figure out what failed to update for the next user and why.

Knowing how some data-synchronization protocols work (the data is analyzed on the client, and only the changes are transmitted from the server), I decided to use the same approach.
Besides, in practice many game files change only slightly from version to version over the course of development: a texture here, a model there, some sounds.

As a result, it seemed logical to treat each file in the game directory as a set of data blocks. When the next version is released, the game build is analyzed, a block map is built, and the game files themselves are compressed block by block. The client analyzes the blocks it already has and downloads only the difference.

Initially the patcher was planned as a Unity3D module; however, one nasty detail surfaced that made me reconsider. The fact is that Unity3D is an application (engine) completely independent of your code, and while the engine is running a whole bunch of files are held open, which creates problems when you want to update them.

In Unix-like systems, overwriting an open file (unless it is explicitly locked) poses no problem, but on Windows this kind of trick does not work without a dance with a tambourine. That is why I made the patcher a separate application that loads nothing except the system libraries. De facto, the patcher turned out to be a utility completely independent of the Unity3D engine, which did not prevent me, however, from adding it to the Unity3D store.

Patcher algorithm


So, the developers release new versions at regular intervals, and the players want to get those versions. The developer's goal is to support this process with minimal costs and minimal headaches for the players.

From the developer’s side


When preparing a patch, the algorithm for the actions of the patcher looks like this:

○ Create a tree of game files with their attributes and SHA512 checksums.
○ For each file:
► Break content into blocks.
► Save the SHA256 checksum.
► Compress the block and add it to the block map of the file.
► Save the block address in the index.
○ Save the tree of files with their checksums.
○ Save version file.

The developer then needs to upload the resulting files to the server.
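The developer-side steps can be sketched roughly as follows. This is a minimal Python illustration of the idea (the real patcher is written in C#); the function names and the exact index layout are my assumptions, not the patcher's actual format:

```python
import hashlib
import zlib

BLOCK_SIZE = 64 * 1024  # 64K blocks, the default size discussed below

def build_patch(file_bytes):
    """Split one file into blocks, hash and compress each block,
    and record every compressed block's address in an index."""
    index = []            # hypothetical entry: (block sha256, offset, compressed size)
    compressed_map = []   # compressed blocks, concatenated into the block-map file
    offset = 0
    for i in range(0, len(file_bytes), BLOCK_SIZE):
        block = file_bytes[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        packed = zlib.compress(block)
        index.append((digest, offset, len(packed)))
        compressed_map.append(packed)
        offset += len(packed)
    file_hash = hashlib.sha512(file_bytes).hexdigest()  # whole-file checksum for the tree
    return file_hash, index, b"".join(compressed_map)

data = bytes(200_000)  # a dummy "game file" of zeros
file_hash, index, block_map = build_patch(data)
print(len(index))  # 200_000 bytes -> 4 blocks of up to 64K
```

The version file and the file tree are then just these per-file hashes and indexes serialized to disk and uploaded alongside the compressed block maps.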

From the player's side


On the client, the patcher does the following:
○ Copies itself to a file with a different name, so that the patcher executable itself can be updated if necessary. Control is then transferred to the copy, and the original exits.
○ It downloads the version file and compares it with the local version file.
○ If the comparison reveals no difference, we have the latest version and can play. If there is a difference, move on to the next step.
○ It downloads a tree of files with their checksums.
○ For each file in the tree from the server:
► If the file exists, it computes its checksum (SHA512). If not, it treats the file as present but empty (consisting entirely of zeros) and computes the checksum of that.
► If the local file's checksum does not match that of the file in the latest version:
► Creates a local block map and compares it with the block map from the server.
► For each local block that differs from the remote one, downloads the compressed block from the server and overwrites it locally.
○ If there are no errors, it updates the version file.
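The block comparison at the heart of these steps boils down to something like the following Python sketch (again, the real patcher is C#; the names are mine, and the handling of short or missing files is simplified):

```python
import hashlib

BLOCK_SIZE = 64 * 1024

def blocks_to_fetch(local_bytes, remote_index):
    """Return the numbers of blocks whose local SHA256 differs from the
    remote block map; only these blocks are then downloaded."""
    need = []
    for n, remote_hash in enumerate(remote_index):
        block = local_bytes[n * BLOCK_SIZE:(n + 1) * BLOCK_SIZE]
        if hashlib.sha256(block).hexdigest() != remote_hash:
            need.append(n)
    return need

# new version on the server vs. a local copy with one changed byte
new = bytes(3 * BLOCK_SIZE)
old = bytearray(new)
old[BLOCK_SIZE + 10] = 1  # change a byte inside block 1
remote_index = [hashlib.sha256(new[i:i + BLOCK_SIZE]).hexdigest()
                for i in range(0, len(new), BLOCK_SIZE)]
print(blocks_to_fetch(bytes(old), remote_index))  # [1]
```

Only block 1 is reported, so a one-byte change in a multi-gigabyte build costs the player a single 64K (compressed) block.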

I made the data block size a multiple of 1024 bytes; after a number of tests I decided it was easiest to operate with 64K blocks, although the code kept its generality:

```csharp
#region DQPatcher class
public class DQPatcher
{
    // some internal constants
    // 1 minute timeout by default
    private const int DEFAULT_NETWORK_TIMEOUT = 60000;

    // maximum number of compressed blocks, which we will download at once
    private const UInt16 MAX_COMPRESSED_BLOCKS = 1000;

    // default block size, you can use range from 4k to 64k,
    // depending on average size of your files in the project tree
    private const uint DEFAULT_BLOCK_SIZE = 64 * 1024;
    ...

    #region public constants and vars section
    // X * 1024 bytes by default for patch creation
    public static uint blockSize = DEFAULT_BLOCK_SIZE;
    ...
    #endregion
    ....
```

If you make the blocks small, the client downloads less when there are few changes. However, another problem arises: the index file grows in inverse proportion to the block size, i.e. if we operate with 8K blocks, the index file will be 8 times larger than with 64K blocks.
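To put rough numbers on this trade-off, here is a small calculation assuming a hypothetical index entry of about 40 bytes per block (a 32-byte SHA256 plus an 8-byte offset; the patcher's actual entry format may differ):

```python
# hypothetical index entry: 32-byte SHA256 + 8-byte block offset
ENTRY_SIZE = 32 + 8
FILE_SIZE = 1 << 30  # a 1 GiB game file

for block_size in (8 * 1024, 64 * 1024):
    n_blocks = FILE_SIZE // block_size
    index_kib = n_blocks * ENTRY_SIZE // 1024
    print(f"{block_size // 1024}K blocks: {n_blocks} entries, ~{index_kib} KiB of index")
# 8K blocks -> 131072 entries (~5120 KiB); 64K -> 16384 entries (~640 KiB): 8x smaller
```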

I chose SHA256/512 for blocks and files for the following reasons: their speed differs only slightly from the (obsolete) MD5/SHA1, and the blocks and files have to be read anyway, while the probability of collisions for SHA256/512 is significantly lower than for MD5/SHA1. To be completely pedantic, a collision is still possible even here, but the probability is so small that it can be neglected.

Additionally, the client takes into account the following points:
► Data blocks can shift between versions, i.e. locally a block is number 10 while on the server it is number 12, or vice versa. This is taken into account so that no unnecessary data is downloaded.
► Blocks are requested not one at a time but in groups: the client tries to combine the ranges of the needed blocks and requests them from the server using the Range header. This also minimizes server load:

```csharp
// get compressed remote blocks of data and return it to the caller
// Note: we always operating with compressed data, so all offsets are in the _compressed_ data file!!
// Throw an exception, if fetching compressed blocks failed
public byte[] GetRemoteBlocks(string remoteName, UInt64 startByteRange, UInt64 endByteRange)
{
    if (verboseOutput) Console.Error.WriteLine("Getting partial content for [" + remoteName + "]");
    if (verboseOutput) Console.Error.WriteLine("Range is [" + startByteRange + "-" + endByteRange + "]");
    int bufferSize = 1024;
    byte[] remoteData;
    byte[] buffer = new byte[bufferSize];
    HttpWebRequest httpRequest = (HttpWebRequest)WebRequest.Create(remoteName);
    httpRequest.KeepAlive = true;
    httpRequest.AddRange((int)startByteRange, (int)endByteRange);
    httpRequest.Method = WebRequestMethods.Http.Get;
    httpRequest.ReadWriteTimeout = this.networkTimeout;
    try
    {
        // Get back the HTTP response for web server
        HttpWebResponse httpResponse = (HttpWebResponse)httpRequest.GetResponse();
        if (verboseOutput) Console.Error.WriteLine("Got partial content length: " + httpResponse.ContentLength);
        remoteData = new byte[httpResponse.ContentLength];
        Stream httpResponseStream = httpResponse.GetResponseStream();
        if (!((httpResponse.StatusCode == HttpStatusCode.OK) || (httpResponse.StatusCode == HttpStatusCode.PartialContent)))
        {
            // raise an exception, we expect partial content here
            RemoteDataDownloadingException pe = new RemoteDataDownloadingException("While getting remote blocks:\n" + httpResponse.StatusDescription);
            throw pe;
        }
        int bytesRead = 0;
        int rOffset = 0;
        while ((bytesRead = httpResponseStream.Read(buffer, 0, bufferSize)) > 0)
        {
            // if (verboseOutput) Console.Error.WriteLine("Got [" + bytesRead + "] bytes of remote data block.");
            Array.Copy(buffer, 0, remoteData, rOffset, bytesRead);
            rOffset += bytesRead;
        }
        if (verboseOutput) Console.Error.WriteLine("Total got: [" + rOffset + "] bytes");
        httpResponse.Close();
    }
    catch (Exception ex)
    {
        if (verboseOutput) Console.Error.WriteLine(ex.ToString());
        PatchException pe = new PatchException("Unable to fetch URI " + remoteName, ex);
        throw pe;
    }
    return remoteData;
}
```
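The range-merging itself can be illustrated with a short Python sketch (the real implementation is C#; this only shows the idea of coalescing adjacent block numbers into runs, each of which maps onto a single Range request):

```python
def coalesce(block_numbers):
    """Merge a sorted list of needed block numbers into contiguous
    (first, last) runs; each run becomes one HTTP Range request."""
    runs = []
    for n in block_numbers:
        if runs and n == runs[-1][1] + 1:
            runs[-1][1] = n          # extend the current run
        else:
            runs.append([n, n])      # start a new run
    return [tuple(r) for r in runs]

print(coalesce([2, 3, 4, 7, 9, 10]))  # [(2, 4), (7, 7), (9, 10)]
```

The byte offsets for each run are then taken from the compressed-block index, since, as the comment in the code above stresses, all offsets refer to the compressed data file.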

As a result, the client can be interrupted at any time, and on the next start it de facto continues its work rather than downloading everything from scratch.

Here you can watch a video illustrating the patcher's work on the Angry Bots example project:


How the game universe was patched


In September 2015, Alex Koshelkov contacted me and offered me a place on the project: they needed a solution that would deliver monthly updates to 30-odd thousand players. The initial size of the game archive was 600 megabytes. Before contacting me they had tried to build their own version on Electron, but everything ran into the same open-files problem (by the way, the current version of Electron can handle this) and a few others. Also, none of the developers understood how it should all work: I was shown several home-grown designs, and the server side was missing entirely; they wanted to deal with it after all the other tasks were solved.

Additionally, we had to solve the problem of players leaking keys. The keys were for the Steam platform, although the game itself was not yet publicly available on Steam. Distribution strictly by key was required; there was a chance that players would share a key with friends, but that could be neglected, since once the game appeared on Steam each key could be activated only once.

In the normal version of the patcher, the data tree for the patch looks like this:
 ./
 |-- linux
 |   |-- 1.0.0
 |   `-- version.txt
 |-- macosx
 |   |-- 1.0.0
 |   `-- version.txt
 `-- windows
     |-- 1.0.0
     `-- version.txt


I needed to make sure that only those with the right key had access.

I came up with the following solution: for each key we take its hash (SHA1) and use it as the path to the patch data on the server. On the server, the patch data is moved one level above the docroot, and inside the docroot we add symbolic links (symlinks) to the directory with the patch data. The symlinks are named after the key hashes, only split into several levels (to ease the load on the file system), i.e. the hash 0f99e50314d63c30271 ... ... ade71963e7ff is represented as
 ./0f/99/e5/0314d63c30271.....ade71963e7ff -> /full/path/to/patch-data/

Thus, whoever maintains the update servers never needs to be given the keys themselves: it is enough to hand over their hashes, which are completely useless to the players.

To add new keys (or revoke old ones), it is enough to add or remove the corresponding symbolic link.
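The whole key-hash scheme fits into a few lines. Here is a Python sketch of how such a symlink layout could be maintained; the three-level split follows the example above, while the function names and paths are my own illustration:

```python
import hashlib
import os

def key_symlink_path(key, docroot):
    """Hash a key and split the hash into a nested path,
    e.g. 0f99e5... -> docroot/0f/99/e5/<rest of the hash>."""
    h = hashlib.sha1(key.encode()).hexdigest()
    return os.path.join(docroot, h[0:2], h[2:4], h[4:6], h[6:])

def grant_access(key, docroot, patch_data_dir):
    """Create the symlink for a key; revoking access is just removing it."""
    link = key_symlink_path(key, docroot)
    os.makedirs(os.path.dirname(link), exist_ok=True)
    os.symlink(patch_data_dir, link)

print(key_symlink_path("DEMO-KEY", "/var/www/patches"))
```

A client that knows its key computes the same SHA1 and requests patch data under that path; anyone without a valid key simply gets a 404.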

With this implementation no explicit key verification is performed anywhere; a 404 error on the client means that the key is invalid (or has been deactivated).

It should be noted that key-based access is not full-fledged DRM protection; it is simply a restriction for the (closed) alpha and beta testing phases. Brute force is easily cut off by the web server itself (at least in Nginx, which I use).

In the launch month, about 2.5 TB of traffic was served on the first day alone; in the months that followed, roughly the same amount was served per month on average:

[graph: monthly traffic]

So, if you plan to distribute a lot of content, it is best to calculate in advance how much it will cost you. In my personal experience, the cheapest traffic comes from European hosters, and the most expensive (I would say "golden") from Amazon and Google.

In practice, the average annual traffic savings on The Universim project are enormous: compare the figures above. Of course, if the user does not have the game at all, or has a very outdated copy, no miracle happens and a lot of data has to be downloaded from the server; starting from scratch means slightly more than the size of the game archive. With monthly updates, however, things look very good. The American mirror served a little over 10 TB of traffic in less than six months; without the patcher, this figure would have been far higher.

This is what one year of the project's traffic looks like:

[graph: one year of project traffic]

A few words about the most memorable "rakes" we stepped on while working on the custom patcher for The Universim:

● The biggest trouble came from antivirus software. They really do not like applications that download something from the Internet, modify files (including executables), and then try to launch what they downloaded. Some antiviruses did not merely block access to local files: they also wedged themselves into connections to the update server, interfering directly with the data the client downloaded. The solution was a valid digital signature for the patcher, which dramatically reduces antivirus paranoia, while using HTTPS instead of HTTP quickly eliminates some of the errors caused by antivirus curiosity.

● Update progress. Many users (and customers) want to see update progress. You have to improvise, since it is not always possible to show reliable progress without doing extra work. Nor can an exact completion time be displayed, since the patcher does not know in advance which files will need updating.

● A huge number of users in the United States had rather low connection speeds to servers in Europe. Moving an update server to the US solved this problem; for users on other continents we kept the server in Germany. Incidentally, US traffic is much more expensive than European, in some cases dozens of times.

● Apple frowns upon this method of installing applications. The official policy is that applications should be installed only from their store, but the trouble is that applications in alpha and beta testing are not allowed into the store, let alone selling raw early-access builds there. So you have to write instructions for the dances required on Macs. The option with AppAnnie (they were still independent back then) was rejected because of the limit on the number of testers.

● Working with the network is quite unpredictable. So that the application does not give up immediately, I had to introduce an error counter: nine caught exceptions are enough to tell the user confidently that they have network problems.

● 32-bit OSs limit the size of memory-mapped files (MMF) per thread and per process. The first versions of the patcher used MMF to speed things up, but since game resource files can be huge, I had to abandon that approach and use regular file streams. There was no particular loss of performance, by the way, most likely thanks to OS read-ahead.

● Be prepared for users to complain. No matter how good your product is, there will always be someone dissatisfied, and the more users your product has (The Universim currently has more than 50 thousand), the more complaints you will get in absolute numbers. As a percentage it is a tiny share, but in absolute terms...
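The error-counter idea from the network "rake" above can be sketched like this (Python for illustration; the threshold constant and the exception types are assumptions, not the patcher's actual C# code):

```python
MAX_NETWORK_ERRORS = 9  # hypothetical threshold, matching the counter described above

def fetch_with_error_counter(fetch, max_errors=MAX_NETWORK_ERRORS):
    """Retry a flaky network operation, giving up only after the
    error counter overflows."""
    errors = 0
    while True:
        try:
            return fetch()
        except OSError:
            errors += 1
            if errors >= max_errors:
                raise RuntimeError("Persistent network problems, most likely on the user's side")

# a stand-in for a download that fails twice, then succeeds
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError("timeout")
    return "data"

print(fetch_with_error_counter(flaky))  # data
```

Transient hiccups are absorbed silently; only a sustained streak of failures is reported to the user as a network problem.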

Although the project was a success overall, it has some drawbacks:

● Even though I separated all the core logic from the start, the GUI part differs between the Mac and Windows implementations. The Linux version caused no problems; issues arose mostly only with the monolithic build that does not require the Mono Runtime Environment (MRE). Since distributing such executables requires an additional license, we decided to abandon monolithic builds and simply require MRE to be present. The Linux version differs from the Windows version only in its support for file attributes specific to *nix systems. For my second project, which will be more than just a patcher, I plan to use a modular approach: a core process running in the background and managed through a local interface, with the controls implemented in Electron or the like (or simply in a browser), with any bells and whistles you please. Before complaining about the distribution size of such applications, look at the size of games: demo (!) versions of some take up 5 or more gigabytes in the archive.

● The structures currently used do not save space when the game is released for 3 platforms: de facto you have to keep 3 copies of almost identical data, even if compressed.

● The current version of the patcher does not cache its work: on every run, all checksums of all files are recalculated. The time could be reduced significantly if the patcher cached results for files already on the client, but there is a dilemma: if a file becomes corrupted (or goes missing) while a cache entry for it is kept, the patcher will skip it, which will cause problems.

● The current version cannot work with multiple servers simultaneously (unless you do DNS round-robin). I would like to move to a "torrent-like" technology so that multiple servers can be used at once. Using clients as a data source is not on the table, as it raises many legal issues and is easier to abandon from the outset.

● If you want to restrict access to updates, you will have to implement that logic yourself. It is hard to call this a drawback, though, since everyone has their own wishes regarding restrictions; the simplest key-based restriction, with no server-side logic at all, is quite easy to build, as I showed above.

● The patcher handles only one project at a time. If you want to build something like Steam, you need a whole content delivery system, and that is an entirely different project.

I plan to release the patcher itself publicly after the "second generation" is implemented: a game content delivery system that will include not only the evolved patcher but also a telemetry module (developers need to know what the players are doing), a Cloud Saves module, and some other modules.

If you have a non-commercial project and need a patcher, write to me with details about your project and I will give you a copy for free. There will be no links here, since this is not the "I am promoting" hub.

I will be happy to answer your questions.

Source: https://habr.com/ru/post/440870/