Virtual desktops (VDI) would seem to have a huge advantage over employees' usual personal computers. In reality, though, the virtual environment needs very powerful, and therefore expensive, hardware to run at an adequate speed, so that VDI does not become a daily irritant for staff. One of the bottlenecks in a VDI infrastructure is data storage: its performance largely determines whether the technology is worth adopting at all.
VDI technology significantly simplifies and automates the management of the IT infrastructure behind employees' workplaces. The difference from traditional workstations and laptops is especially noticeable with a large staff. In addition, VDI is an excellent tool for securely delivering employees' work tools (so-called desktop-as-a-service) from any location, which makes it extremely popular in organizations with high security requirements and/or an extensive branch network (medicine, banking, education, retail, etc.).
If the VDI infrastructure is deployed properly, it quickly starts saving the company money by drastically cutting maintenance costs and reducing the number of vulnerable points thanks to the consolidation of IT equipment and software. At the same time, it becomes far more flexible to add new workplaces, wherever they are located.
Since VDI implies a multitude of users performing their work simultaneously, the underlying IT infrastructure requires not only high I/O performance but also ultra-low latency. If things start to lag anywhere, users will almost certainly react negatively and demand a return to traditional workstations, which means a complete failure of the VDI rollout.
According to IDC reports, most failed VDI implementations were related to storage performance: the storage systems simply could not provide high-speed I/O for the entire infrastructure. So choosing storage with the required performance is perhaps the most important task when designing a VDI, and, it should be noted, one of the most expensive.
Spindle and hybrid (mechanical disks + flash) storage systems, as practice has shown, are not suitable for large-scale VDI projects, since they rely on caching I/O requests to reduce latency. But the cache is not infinite: as soon as it fills up, latency spikes, users rebel, and the project is effectively dead. In fact, the latency requirements of VDI are so strict that even traditional RAID-based All Flash systems with NVMe or NVRAM caching cannot cope with the load.
In such projects, AccelStor All Flash systems, which have no cache at all, are most welcome: all data is written to the SSDs immediately, after some on-the-fly rearrangement (synchronous writes). The absence of a cache means consistent performance, both in IOPS and throughput and in latency, where the figures are already very impressive.
RAID technology was developed in the late 1980s exclusively for traditional spinning disks, and today it remains perhaps the most widely and successfully used technology in data storage. But building All Flash systems on RAID runs into at least two problems, discussed below.
AccelStor's All Flash systems work in a completely different way, since they are designed specifically for the efficient use of solid-state drives. They are built on the proprietary FlexiRemap technology, which uses neither cache nor RAID. Ten years of research and more than 45 patents went into unlocking the full potential of flash memory. Backed by the IT giant Toshiba Memory and a Best of Show award at the Flash Memory Summit 2016, FlexiRemap technology is truly revolutionary.
The FlexiRemap® technology got its name from the way it redistributes (remaps) data blocks before writing them to the SSDs. Data is rearranged into sequential chains and written to the drives in multiples of 4 KB, i.e. in the mode most comfortable for an SSD. This approach achieves very high performance (up to 700K IOPS for random write, up to 1.1M IOPS for random read + write) without using a cache.
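FlexiRemap itself is proprietary, but the general idea of remapping random writes into a sequential log can be illustrated with a toy sketch (the class and its fields are hypothetical, purely for illustration):

```python
BLOCK = 4096  # 4 KB, the write granularity mentioned above

class RemapLayer:
    """Toy log-structured remapper: random logical writes become
    sequential physical appends, tracked by a mapping table."""
    def __init__(self):
        self.mapping = {}    # logical block number -> physical block number
        self.next_phys = 0   # head of the sequential log

    def write(self, logical_block: int, data: bytes) -> int:
        assert len(data) == BLOCK, "writes happen in whole 4 KB blocks"
        phys = self.next_phys              # always append at the log head
        self.mapping[logical_block] = phys # remember where the block landed
        self.next_phys += 1
        return phys

# Scattered logical addresses land on consecutive physical blocks:
r = RemapLayer()
placements = [r.write(lba, b"\x00" * BLOCK) for lba in (907, 12, 451)]
# placements == [0, 1, 2]
```

A real implementation would also handle overwrites, garbage collection of stale blocks, and persistence of the mapping table; the point here is only that the SSD sees sequential, 4 KB-aligned writes regardless of the host's access pattern.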
Most RAID-based storage systems use expensive SAS SSDs, which are required for dual-controller operation, yet utilize only about 10% of their performance. FlexiRemap®, combined with a shared-nothing cluster architecture, uses cheaper SATA SSDs and utilizes about 90% of their performance, which ultimately means a lower total cost of ownership (TCO) and a faster return on investment (ROI).
In addition, SSDs have a predictable aging cycle tied to the amount of data written to them. Unlike RAID systems, which carry a huge write overhead, FlexiRemap® technology writes each piece of data only once, extending the service life of the drives and further reducing TCO.
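The effect of write overhead on drive life is simple arithmetic. A rough sketch, with entirely hypothetical endurance and workload figures (check the actual drive datasheet):

```python
# Hypothetical figures for illustration only.
TBW = 1_400                  # rated endurance of the drive, TB written
DAILY_HOST_WRITES_TB = 2.0   # host writes per day, TB

def lifespan_years(write_amplification: float) -> float:
    """Years until the endurance budget (TBW) is exhausted,
    given how many physical writes each host write costs."""
    return TBW / (DAILY_HOST_WRITES_TB * write_amplification * 365)

# Writing each block once (WA ~ 1) vs a stack that triples writes
# with parity and cache traffic (WA ~ 3, an illustrative value):
print(round(lifespan_years(1.0), 1))  # 1.9 (years)
print(round(lifespan_years(3.0), 1))  # 0.6 (years)
```

The ratio is linear: halving write amplification doubles the time until the drives age out, which is exactly why a write-once design translates into lower TCO.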
So AccelStor's All Flash systems with FlexiRemap® technology are a perfect fit for projects with intensive I/O and extremely low latency requirements, such as VDI.
AccelStor uses a license-free model for its devices: the customer gets access to all features (clones, snapshots, replication, deduplication, etc.) immediately and forever. Moreover, technical support for the devices (and, of course, the software) covers the entire life of the array, not just the warranty period or its paid extension. Here, too, the TCO can be significantly reduced.
The deduplication algorithm (FlexiDedupe) reduces the storage space the data occupies. For VDI with linked clones, the resulting compression ratio can easily reach 10:1. As a result, a fully populated array, combined with deduplication and thin provisioning, can bring the cost of one active workplace down to around USD 30.
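To see what a 10:1 ratio means in practice, here is the capacity arithmetic for a pool like the one tested below (500 desktops with 60 GB disks, as described in the test setup; the numbers are otherwise illustrative):

```python
# Sizing sketch: how much physical space a deduplicated VDI pool needs.
desktops = 500
image_gb = 60        # per-desktop virtual disk, thin-provisioned
dedupe_ratio = 10    # 10:1 for linked clones, per the text above

logical_tb = desktops * image_gb / 1000   # what the desktops address
physical_tb = logical_tb / dedupe_ratio   # what the array must store

print(logical_tb)    # 30.0 TB logical
print(physical_tb)   # 3.0 TB physical
```

Dividing the array price by 500 seats against ~3 TB of physical capacity, rather than 30 TB, is what pushes the per-seat cost down to the figure quoted.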
AccelStor's All Flash NeoSapphire™ systems were tested with the Login VSI suite, an excellent tool for measuring the performance and scalability of a VDI infrastructure.
The test used 500 desktops in linked-clone mode in VMware Horizon View. All of them resided on a NeoSapphire P710 array (24x SSD, 10G iSCSI, single node, 700K IOPS sustained write) across 5 volumes of 2 TB each, plus a separate replica volume. The load was generated by an 8-node high-density platform from Supermicro. A pair of separate servers and another All Flash array (also from AccelStor) hosted the VDI infrastructure and the test monitoring systems.
The desktops were virtual machines running Windows 10 Pro (build 1709) configured with 2x vCPU, 3.5 GB RAM (100% reserved), and a 60 GB disk. Two load profiles were used, designated Knowledge and Power in Login VSI terms; they differ in how heavily they stress compute resources and I/O.
| Worker | Knowledge | Power |
|---|---|---|
| Description | Well-balanced stress load with high consumption of CPU, RAM, and I/O resources | Very heavy load with maximum stress on the system, requiring very large CPU, RAM, and I/O resources |
| Software used | Adobe Reader, Freemind/Java, Internet Explorer, MS Excel, MS Outlook, MS PowerPoint, MS Word, Photo Viewer, 7-Zip | Same set, plus simultaneous installation of multiple applications |
| Note | | Larger files and higher-resolution graphics were used |
To put the typical workload into numbers, here is the approximate resource consumption for each profile (percentages are relative to the Knowledge profile):
| Worker | Apps open | CPU usage | Disk reads | Disk writes | IOPS | Memory | vCPU |
|---|---|---|---|---|---|---|---|
| Knowledge | 5-9 | 100% | 100% | 100% | 8.5 | 1.5 GB | 2 vCPU |
| Power | 8-12 | 119% | 133% | 123% | 10.8 | 2 GB | 2 vCPU |
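The per-desktop IOPS figures above make it easy to estimate the aggregate load a 500-desktop pool places on the array. A quick sizing sketch (the boot-storm multiplier is an assumption for illustration, not a Login VSI figure):

```python
# Per-desktop steady-state IOPS from the profile table above.
profiles = {"Knowledge": 8.5, "Power": 10.8}
desktops = 500
boot_storm_factor = 10  # hypothetical spike when many VMs boot at once

for name, per_desktop_iops in profiles.items():
    steady = desktops * per_desktop_iops
    peak = steady * boot_storm_factor
    print(f"{name}: steady ~{steady:,.0f} IOPS, "
          f"boot-storm peak ~{peak:,.0f} IOPS")
```

Even the hypothetical boot-storm peak of a few tens of thousands of IOPS sits far below the array's 700K sustained-write rating, which is why latency stays flat across all the test phases below.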
Different modes of operation of the VDI infrastructure were tested, each placing a heavy load on the All Flash array. Regardless of the load mode, response times inside the virtual machines remained comfortable; in other words, users would feel at ease working in such an environment.
| Pool of 500 desktops in linked-clone mode | Execution time | Average latency |
|---|---|---|
| Provisioning | ~32 min | 0.59 ms |
| Booting | ~2 min | 0.68 ms |
| Login VSI full test (login) | 50 min | 0.46 ms |
| 30-minute steady state | 30 min | 0.56 ms |
| Power off | ~7 min | 0.45 ms |
| Pool refresh | ~14 min | 0.27 ms |
Full test results are available on the Login VSI and AccelStor websites.
When storage performance really matters and an All Flash array sits at the center of the VDI infrastructure, systems based on FlexiRemap® technology offer the best price/performance ratio. Enterprise SATA SSDs, a built-in algorithm that extends their service life, and the consistently low latency of the device as a whole make them a natural fit for large-scale VDI projects.
Source: https://habr.com/ru/post/438792/