SAS Connectors: Unprecedented Serial Compatibility

Introduction

Look at modern motherboards (or even some older platforms). Do they need a special RAID controller? Most motherboards have several 3 Gb/s SATA ports, as well as audio jacks and network adapters, and most modern chipsets, such as AMD A75 and Intel Z68, support SATA at 6 Gb/s. With so much support from the chipset, a powerful processor, and plenty of I/O ports, do you really need an additional storage card with a separate controller?

In most cases, ordinary users can create RAID 0, 1, 5, and even 10 arrays using the motherboard's built-in SATA ports and software, and obtain very high performance. But when a more complex RAID level such as 30, 50, or 60 is required, or a higher level of disk management or scalability, the chipset's controller may no longer cope. In such cases, professional-grade solutions are needed.
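The RAID levels mentioned above trade capacity for redundancy in different ways. A minimal sketch of that trade-off, using hypothetical drive counts and sizes chosen only for illustration:

```python
# Illustrative helper: usable capacity for common RAID levels.
# Drive count and size are example values, not figures from the article.

def usable_capacity(level, drives, size_gb):
    """Return usable capacity in GB for a few common RAID levels."""
    if level == 0:        # striping: all capacity, no redundancy
        return drives * size_gb
    if level in (1, 10):  # mirroring / striped mirrors: half the raw capacity
        return drives * size_gb // 2
    if level == 5:        # single parity: one drive's worth is lost
        return (drives - 1) * size_gb
    if level == 6:        # double parity: two drives' worth is lost
        return (drives - 2) * size_gb
    raise ValueError(f"unsupported RAID level: {level}")

# Eight 147 GB drives, as in the test arrays later in this article:
for level in (0, 1, 5, 6, 10):
    print(level, usable_capacity(level, 8, 147))
```

The pattern generalizes: every step up in fault tolerance costs one or more drives' worth of capacity.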

In such cases, you are no longer limited to SATA storage. A large number of dedicated cards support SAS (Serial Attached SCSI) or Fibre Channel (FC) drives, and each of these interfaces brings its own advantages.

SAS and FC for professional RAID solutions

Each of the three interfaces (SATA, SAS and FC) has its pros and cons; none of them can be unconditionally called the best. The strengths of SATA-based drives are high capacity and low price, combined with high data transfer rates. SAS drives are renowned for their reliability, scalability, and high I/O rates. FC storage systems provide a constant and very high data transfer rate. Some companies still use Ultra SCSI solutions, which can handle up to 16 devices (one controller and 15 drives). However, bandwidth in that case does not exceed 320 MB/s (for Ultra-320 SCSI), which cannot compete with more modern solutions.

Ultra SCSI was long the standard for professional enterprise storage. However, SAS is gaining popularity, as it offers not only significantly more bandwidth but also greater flexibility in mixed SAS/SATA systems, allowing you to balance cost, performance, availability and capacity even within a single JBOD (just a bunch of disks). In addition, many SAS drives have two ports for redundancy: if one controller card fails, switching the drive to the other controller avoids bringing down the entire system. SAS thus ensures high reliability at the system level.

Moreover, SAS is not just a point-to-point protocol between a controller and a storage device. It supports up to 255 storage devices per SAS port when an expander is used. With a two-tier structure of SAS expanders, it is theoretically possible to attach 255 x 255 (a little more than 65,000) storage devices to one SAS channel, provided, of course, that the controller can support such a large number of devices.
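The fan-out figure quoted above is a simple multiplication; a back-of-the-envelope check:

```python
# Expander fan-out quoted in the text: up to 255 devices per expander,
# and a two-tier expander topology multiplies the limits together.

DEVICES_PER_EXPANDER = 255

two_tier = DEVICES_PER_EXPANDER * DEVICES_PER_EXPANDER
print(two_tier)  # 65025 -- "a little more than 65,000"
```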

Adaptec, Areca, HighPoint, and LSI: Four SAS RAID Controller Tests

In this benchmark, we examine the performance of modern SAS RAID controllers, which are represented by four products: Adaptec RAID 6805, Areca ARC-1880i, HighPoint RocketRAID 2720SGL and LSI MegaRAID 9265-8i.

Why SAS and not FC? On the one hand, SAS is by far the most interesting and relevant architecture; it provides features such as zoning that are very attractive to professional users. On the other hand, FC's role in the professional market is declining, with some analysts even predicting its complete demise based on the number of hard drives shipped. According to IDC, the future of FC looks rather bleak, while SAS hard drives could claim 72% of the enterprise hard drive market by 2014.

Adaptec RAID 6805

Chip maker PMC-Sierra launched the "Adaptec by PMC" Series 6 RAID controller family in late 2010. Series 6 controller cards are based on the dual-core SRC 8x6G ROC (RAID on Chip) controller, which supports 512 MB of cache and 6 Gb/s per SAS port. There are three low-profile models: the Adaptec RAID 6405 (four internal ports), the Adaptec RAID 6445 (four internal and four external ports), and the one we tested, the Adaptec RAID 6805 with eight internal ports, costing about $460.

All models support JBOD and all levels of RAID - 0, 1, 1E, 5, 5EE, 6, 10, 50 and 60.

Connected to the system via a PCI Express 2.0 x8 interface, the Adaptec RAID 6805 supports up to 256 devices through SAS expanders. According to the manufacturer's specifications, the sustained data transfer rate can reach 2 GB/s, with peaks of up to 4.8 GB/s on the aggregated SAS ports and 4 GB/s on the PCI Express interface; the last figure is the theoretical maximum for an eight-lane PCI Express 2.0 link.

ZMCP: cache protection without maintenance

Our test unit came with an Adaptec Flash Module 600, which implements Zero Maintenance Cache Protection (ZMCP) instead of a legacy Battery Backup Unit (BBU). The ZMCP module is a 4 GB NAND flash unit used to back up the controller cache in the event of a power outage.

Because copying from cache to flash is very fast, Adaptec uses capacitors rather than batteries to bridge the power. Capacitors have the advantage that they can last as long as the card itself, while backup batteries need to be replaced every few years. In addition, once copied to flash memory, the data can be stored there for years. With a battery, by comparison, you usually have about three days before the cached information is lost, which forces a rush to recover the data. As the name suggests, ZMCP survives power failures with zero maintenance.


Performance

The Adaptec RAID 6805 loses out in our RAID 0 streaming read/write tests, although RAID 0 is not the typical case for a business that needs data protection (it might well be used for a video rendering workstation). Sequential reads come in at 640 MB/s and sequential writes at 680 MB/s; on both counts the LSI MegaRAID 9265-8i takes the top spot. The Adaptec RAID 6805 does better in the RAID 5, 6 and 10 tests, but is not the absolute leader there either. In an SSD-only configuration the Adaptec controller reaches up to 530 MB/s, but is outperformed by the Areca and LSI controllers.

The Adaptec card automatically recognizes what it calls a HybridRAID configuration, a mix of HDDs and SSDs, offering RAID 1 and 10 in this mode. Here the card should outperform its competitors thanks to special read/write algorithms: reads are routed to the SSD, while writes go to both the hard drives and the SSD. Read operations should thus perform as in an SSD-only system, while writes perform no worse than on an array of hard drives.
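A toy model of that routing policy for a hybrid mirror. This is purely illustrative; class and method names are invented for the sketch and are not Adaptec's API:

```python
# Toy model of hybrid (HDD + SSD) RAID 1 routing as described above:
# writes land on both mirror sides, reads are served from the SSD side.

class HybridMirror:
    def __init__(self):
        self.ssd = {}   # fast mirror member
        self.hdd = {}   # capacity mirror member

    def write(self, block, data):
        # Writes go to both members so the mirror stays consistent.
        self.ssd[block] = data
        self.hdd[block] = data

    def read(self, block):
        # Reads come from the SSD member for SSD-like latency.
        return self.ssd[block]

m = HybridMirror()
m.write(0, b"payload")
print(m.read(0))  # served from the SSD side
```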

However, our test results do not bear out the theory. With the exception of the Web server benchmark, where the hybrid system's transfer rate holds up, the SSD-plus-HDD hybrid cannot come close to the speed of an SSD-only system.

The Adaptec controller performs much better in the HDD I/O performance tests. Regardless of the benchmark (database, file server, Web server or workstation), the RAID 6805 keeps pace with the Areca ARC-1880i and LSI MegaRAID 9265-8i, taking first or second place; only the HighPoint RocketRAID 2720SGL trails in the I/O tests. Replace the hard drives with SSDs, however, and the LSI MegaRAID 9265-8i pulls significantly ahead of the other three controllers.

Software installation and RAID setup

Adaptec and LSI have well-organized and easy-to-use RAID management tools. Management tools allow administrators to access controllers remotely over the network.

Installing an array

Areca ARC-1880i

Areca is also entering the 6 Gb/s SAS RAID controller market with the ARC-1880 series. According to the manufacturer, target applications range from NAS appliances and storage servers to HPC, redundancy, security and cloud computing.

The ARC-1880i sample we tested, with eight internal SAS ports and eight PCI Express 2.0 lanes, can be purchased for $580. The low-profile card, the only one in our round-up with an active cooler, is built around an 800 MHz ROC with 512 MB of DDR2-800 data cache. Using SAS expanders, the Areca ARC-1880i supports up to 128 storage devices. To preserve the contents of the cache during a power failure, an optional battery pack can be added.

In addition to single mode and JBOD, the controller supports RAID levels 0, 1, 1E, 3, 5, 6, 10, 30, 50, and 60.

Performance

The Areca ARC-1880i performs well in the RAID 0 read/write tests, reaching 960 MB/s reads and 900 MB/s writes; only the LSI MegaRAID 9265-8i is faster here. The Areca controller does not disappoint in the other benchmarks either: with both hard drives and SSDs it consistently competes with the test winners. Although it led in only one benchmark (sequential reads in RAID 10), it posted very high results there, a read speed of 793 MB/s, while the fastest competitor, the LSI MegaRAID 9265-8i, managed only 572 MB/s.

However, sequential throughput is only one part of the picture; the other is I/O performance. The Areca ARC-1880i excels here as well, competing on equal terms with the Adaptec RAID 6805 and LSI MegaRAID 9265-8i. As with the transfer rate tests, the Areca controller also won one of the I/O tests, the Web server benchmark: it dominates at RAID 0, 5 and 6, while the Adaptec 6805 takes the lead in RAID 10, leaving the Areca controller a close second.

Web GUI and setting options

Like the HighPoint RocketRAID 2720SGL, the Areca ARC-1880i is conveniently web-based and easy to set up.

Installing an array

HighPoint RocketRAID 2720SGL

The HighPoint RocketRAID 2720SGL is a SAS RAID controller with eight internal SATA/SAS ports, each supporting 6Gb/s. According to the manufacturer, this low-profile card is aimed at storage systems for small and medium businesses and workstations. The key component of the card is the Marvell 9485 RAID controller. The main competitive advantages are its small size and 8-lane PCIe 2.0 interface.

In addition to JBOD, the card supports RAID 0, 1, 5, 6, 10, and 50.

In addition to the model we tested, there are four more models in the low-profile HighPoint 2700 series: the RocketRAID 2710, 2711, 2721 and 2722, which differ mainly in the type (internal/external) and number (4 to 8) of ports. Our tests used the cheapest of these RAID controllers, the RocketRAID 2720SGL ($170). All cables for the controller are purchased separately.

Performance

When sequentially reading/writing to a RAID 0 array of eight Fujitsu MBA3147RC drives, the HighPoint RocketRAID 2720SGL achieves an excellent read speed of 971 MB/s, second only to the LSI MegaRAID 9265-8i. Its write speed of 697 MB/s is not as impressive, but it still beats the Adaptec RAID 6805. The RocketRAID 2720SGL also shows a wide spread of results: with RAID 5 and 6 it outperforms the other cards, but with RAID 10 its read speed drops to 485 MB/s, the lowest of the four samples tested. Sequential writes in RAID 10 are even worse, at only 198 MB/s.

This controller is clearly not made for SSDs. Read speed tops out at 332 MB/s and write speed at 273 MB/s; even the Adaptec RAID 6805, which also does not excel with SSDs, performs twice as well. HighPoint is therefore no competitor for the two cards that really shine with SSDs, the Areca ARC-1880i and LSI MegaRAID 9265-8i, which are at least three times faster.

We have already said all the good there is to say about HighPoint's I/O performance: the RocketRAID 2720SGL ranks last in our tests across all four Iometer benchmarks. The HighPoint controller remains reasonably competitive in the Web server benchmark, but loses significantly to the competition in the other three. This becomes most apparent in the SSD tests, where the RocketRAID 2720SGL is clearly not optimized for SSDs and does not exploit their advantage over HDDs. For example, it achieves 17,378 IOPS in the database benchmark, while the LSI MegaRAID 9265-8i outperforms it roughly fourfold with 75,037 IOPS.

Web GUI and array settings

The RocketRAID 2720SGL web interface is convenient and easy to use. All RAID parameters are easily set.

Installing an array

LSI MegaRAID 9265-8i

LSI positions the MegaRAID 9265-8i as a device for the SMB market, suitable for cloud and other business applications. The MegaRAID 9265-8i is the most expensive controller in our test ($630), but as the tests show, the money buys real benefits. Before presenting the results, let's discuss the card's technical features and the FastPath and CacheCade software options.

The LSI MegaRAID 9265-8i uses a dual-core LSI SAS2208 ROC attached via an eight-lane PCIe 2.0 interface. The "8" at the end of the device name indicates eight internal SATA/SAS ports, each supporting 6 Gb/s. Up to 128 storage devices can be connected to the controller via SAS expanders. The LSI card carries 1 GB of DDR3-1333 cache and supports RAID levels 0, 1, 5, 6, 10 and 60.

Configuring Software and RAID, FastPath and CacheCade

LSI claims that FastPath can significantly speed up I/O when SSDs are attached. According to LSI, FastPath works with any SSD, increasing the write and read performance of an SSD-based RAID array by 2.5x and 2x respectively, up to 465,000 IOPS. We were not able to verify this figure; however, the card got the most out of our five SSDs even without FastPath.

The next option for the MegaRAID 9265-8i is called CacheCade, which lets you use an SSD as cache for an array of hard drives. According to LSI, this can speed up reads by up to 50 times, depending on the size of the working data set, the application and the usage pattern. We tested it on a RAID 5 array of seven hard drives plus one SSD used as cache. Compared to a RAID 5 array of eight hard drives, CacheCade improved not only I/O speed but overall performance (the smaller the constantly used data set, the bigger the gain). Using 25 GB of test data, we measured 3,877 IOPS in Iometer's Web server template, while the plain hard drive array managed only 894 IOPS.
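The CacheCade numbers above translate into a speedup factor worth making explicit:

```python
# Speedup implied by the CacheCade measurement quoted in the text.

cachecade_iops = 3877   # RAID 5, 7 HDDs + 1 SSD cache, Web server template
hdd_only_iops = 894     # RAID 5, 8 HDDs, same template

print(round(cachecade_iops / hdd_only_iops, 1))  # about 4.3x faster
```

A 25 GB working set fits largely in the SSD cache, which is why the gain is so large; a working set much bigger than the cache would shrink it.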

Performance

In the end, the LSI MegaRAID 9265-8i turns out to be the fastest in I/O of all the SAS RAID controllers in this review. In sequential read/write operations, however, the controller shows mixed results, since its sequential throughput depends strongly on the RAID level used. With hard drives at RAID 0 we measured a sequential read speed of 1080 MB/s (significantly higher than the competition) and a sequential write speed of 927 MB/s, also faster than the competition. But at RAID 5 and 6 the LSI controller trails all of its competitors, beating them only in RAID 10. In the SSD RAID test, the LSI MegaRAID 9265-8i demonstrates the best sequential write performance (752 MB/s), and only the Areca ARC-1880i surpasses it in sequential reads.

If you're looking for an SSD-focused RAID controller with high I/O performance, the LSI controller is the leader. With few exceptions it takes first place in our file server, Web server and workstation I/O tests; when the RAID array consists of SSDs, LSI's competitors can't match it. For example, in the workstation benchmark the MegaRAID 9265-8i reaches 70,172 IOPS, while the second-place Areca ARC-1880i lags almost two times behind at 36,975 IOPS.

RAID Software and Array Installation

As with Adaptec, LSI has convenient tools for managing the RAID array through the controller. Here are some screenshots:

Software for CacheCade

RAID Software

Installing an array

Comparison table and test bench configuration

Manufacturer: Adaptec | Areca
Product: RAID 6805 | ARC-1880i
Form factor: Low profile (MD2) | Low profile (MD2)
Number of SAS ports: 8 | 8
SAS bandwidth per port: 6 Gb/s (SAS 2.0) | 6 Gb/s (SAS 2.0)
Internal SAS ports: 2x SFF-8087 | 2x SFF-8087
External SAS ports: None | None
Cache memory: 512 MB DDR2-667 | 512 MB DDR2-800
Host interface: PCIe 2.0 (x8) | PCIe 2.0 (x8)
XOR engine and clock speed: PMC-Sierra PM8013, n/a | n/a, 800 MHz
Supported RAID levels: 0, 1, 1E, 5, 5EE, 6, 10, 50, 60 | 0, 1, 1E, 3, 5, 6, 10, 30, 50, 60
Supported operating systems: Windows 7, Windows Server 2008/2008 R2, Windows Server 2003/2003 R2, Windows Vista, VMware ESX Classic 4.x (vSphere), Red Hat Enterprise Linux (RHEL), SUSE Linux Enterprise Server (SLES), Sun Solaris 10 x86, FreeBSD, Debian Linux, Ubuntu Linux | Windows 7/2008/Vista/XP/2003, Linux, FreeBSD, Solaris 10/11 x86/x86_64, Mac OS X 10.4.x/10.5.x/10.6.x, VMware 4.x
Battery: No | Optional
Fan: No | Yes

Manufacturer: HighPoint | LSI
Product: RocketRAID 2720SGL | MegaRAID 9265-8i
Form factor: Low profile (MD2) | Low profile (MD2)
Number of SAS ports: 8 | 8
SAS bandwidth per port: 6 Gb/s (SAS 2.0) | 6 Gb/s (SAS 2.0)
Internal SAS ports: 2x SFF-8087 | 2x SFF-8087
External SAS ports: None | None
Cache memory: n/a | 1 GB DDR3-1333
Host interface: PCIe 2.0 (x8) | PCIe 2.0 (x8)
XOR engine and clock speed: Marvell 9485, n/a | LSI SAS2208, 800 MHz
Supported RAID levels: 0, 1, 5, 6, 10, 50 | 0, 1, 5, 6, 10, 60
Supported operating systems: Windows 2000/XP/2003/2008/Vista/7, RHEL/CentOS, SLES, OpenSuSE, Fedora Core, Debian, Ubuntu, FreeBSD up to 7.2 | Microsoft Windows Vista/2008/Server 2003/2000/XP, Linux, Solaris (x86), NetWare, FreeBSD, VMware
Battery: No | Optional
Fan: No | No

Test configuration

We connected eight Fujitsu MBA3147RC SAS hard drives (147 GB each) to the RAID controllers and ran benchmarks at RAID levels 0, 5, 6 and 10. SSD tests were carried out with five Samsung SS1605 drives.

Hardware
CPU Intel Core i7-920 (Bloomfield) 45 nm, 2.66 GHz, 8 MB shared L3 cache
Motherboard (LGA 1366) Supermicro X8SAX, Revision: 1.0, Chipset Intel X58 + ICH10R, BIOS: 1.0B
Controller LSI MegaRAID 9280-24i4e
Firmware: v12.12.0-0037
Driver: v4.32.0.64
RAM 3 x 1 GB DDR3-1333 Corsair CM3X1024-1333C9DHX
HDD Seagate NL35 400 GB, ST3400832NS, 7200 rpm, SATA 1.5 Gb/s, 8 MB cache
Power Supply OCZ EliteXstream 800W, OCZ800EXS-EU
Benchmarks
Performance CrystalDiskMark 3
I/O performance Iometer 2006.07.27
File Server Benchmark
Web Server Benchmark
Database Benchmark
Workstation Benchmark
Streaming Reads
Streaming Writes
4k Random Reads
4k Random Writes
Software and drivers
Operating system Windows 7 Ultimate

Test results

I/O performance in RAID 0 and 5

The benchmarks in RAID 0 show no significant difference between the RAID controllers, with the exception of the HighPoint RocketRAID 2720SGL.




The benchmark in RAID 5 does not help the HighPoint controller regain its lost ground. Unlike the benchmark in RAID 0, all three faster controllers show their strengths and weaknesses more clearly here.




I/O performance in RAID 6 and 10

LSI has optimized its MegaRAID 9265 controller for database, file server and workstation workloads. All controllers handle the Web server benchmark well, showing roughly the same performance.




In the RAID 10 variant, Adaptec and LSI are vying for the top spot, with the HighPoint RocketRAID 2720SGL in last place.




SSD I/O performance

The LSI MegaRAID 9265 leads the way here, taking full advantage of solid-state storage systems.




Bandwidth in RAID 0, 5 and degraded RAID 5

The LSI MegaRAID 9265 easily leads this benchmark. The Adaptec RAID 6805 is far behind.


The HighPoint RocketRAID 2720SGL without cache does a good job of sequential operations in RAID 5. Other controllers are not much inferior to it either.


Degraded RAID 5


Bandwidth in RAID 6, 10 and degraded RAID 6

As with RAID 5, the HighPoint RocketRAID 2720SGL demonstrates the highest throughput for RAID 6, leaving the Areca ARC-1880i in second place. The impression is that the LSI MegaRAID 9265-8i simply does not like RAID 6.


Degraded RAID 6


Here the LSI MegaRAID 9265-8i shows itself in a good light, although it yields first place to the Areca ARC-1880i.

LSI CacheCade




What is the best 6Gb/s SAS controller?

In general, all four SAS RAID controllers we tested performed well. All have the necessary functionality, and all can be successfully used in entry-level and mid-range servers. In addition to solid performance, they also provide important features such as mixed SAS and SATA environments and scalability through SAS expanders. All four controllers support the SAS 2.0 standard, which increases throughput from 3 Gb/s to 6 Gb/s per port and introduces new features such as SAS zoning, which allows many controllers to access storage resources through a single SAS expander.

Despite such similarities as a low-profile form factor, an eight-lane PCI Express interface and eight SAS 2.0 ports, each controller has its own strengths and weaknesses, analyzing which you can make recommendations for their optimal use.

So, the fastest controller is the LSI MegaRAID 9265-8i, especially in terms of I/O performance, although it has some weaknesses, in particular middling sequential performance at RAID 5 and 6. The MegaRAID 9265-8i leads most benchmarks and is an excellent professional-level solution. Its $630 price is the highest here, which should not be forgotten, but for that money you get a great controller that outperforms its competitors, especially with SSDs, and whose performance becomes especially valuable when connecting large storage systems. What's more, you can increase the performance of the LSI MegaRAID 9265-8i with FastPath or CacheCade, which of course cost extra.

The Adaptec RAID 6805 and Areca ARC-1880i controllers show comparable performance and are similar in price ($460 and $540). Both work well across the various benchmarks. The Adaptec controller delivers slightly better performance than the Areca, and it also offers the much-requested Zero Maintenance Cache Protection (ZMCP) feature, which replaces conventional battery-based power failure protection and allows operations to continue.

The HighPoint RocketRAID 2720SGL sells for just $170, much cheaper than the other three controllers we tested. Its performance is quite sufficient if you work with conventional hard drives, although it falls short of the Adaptec and Areca controllers. But you should not use this controller with SSDs.

With the arrival of a sufficiently wide range of Serial Attached SCSI (SAS) peripherals, we can speak of the corporate environment beginning its transition to the new technology. But SAS is not only the recognized successor to UltraSCSI; it also opens up new areas of use, raising system scalability to previously unthinkable heights. We decided to demonstrate the potential of SAS by taking a closer look at the technology, host adapters, hard drives, and storage systems.

SAS is not a completely new technology: it takes the best of both worlds. The first part of SAS is serial communication, which requires fewer physical wires and pins. The transition from parallel to serial transmission made it possible to get rid of the shared bus. Although the current SAS specifications define throughput of 300 MB/s per port, which is less than the 320 MB/s of UltraSCSI, replacing a shared bus with point-to-point connections is a significant advantage. The second part of SAS is the SCSI protocol itself, which remains powerful and popular.

SAS can also use a large set of RAID types. Giants such as Adaptec and LSI Logic offer advanced feature sets for expansion, migration, nesting and more in their products, including RAID arrays distributed across multiple controllers and drives.

Finally, most of the operations mentioned can already be performed on the fly. Worth noting here are the excellent products from AMCC/3ware, Areca and Broadcom/RAIDCore, which brought enterprise-class features to the SATA world.

Compared to SATA, the traditional SCSI implementation is losing ground on all fronts except high-end enterprise solutions. SATA offers suitable hard drives at a good price and a wide range of solutions. And let's not forget another smart feature of SAS: it gets along easily with existing SATA infrastructure, since SAS host adapters readily accept SATA drives. A SAS drive, however, cannot be connected to a SATA adapter.


Source: Adaptec.

First, let us turn to the history of SAS. The SCSI standard ("Small Computer System Interface") has always been regarded as a professional bus for connecting drives and some other devices to computers; hard drives for servers and workstations still use SCSI technology. Unlike the mass-market ATA standard, which allows only two drives per port, SCSI allows up to 15 devices on one bus and offers a powerful command protocol. Devices must have a unique SCSI ID, which can be assigned either manually or through the SCAM (SCSI Configured AutoMatically) protocol. Since IDs on the buses of two or more SCSI adapters may not be unique, Logical Unit Numbers (LUNs) were added to help identify devices in complex SCSI environments.

SCSI hardware is more flexible and reliable than ATA (a standard also called IDE, Integrated Drive Electronics). Devices can be connected both inside and outside the computer, and the cable can be up to 12 m long if it is properly terminated (to avoid signal reflections). As SCSI evolved, numerous standards emerged specifying different bus widths, clock speeds, connectors, and signal voltages (Fast, Wide, Ultra, Ultra Wide, Ultra2, Ultra2 Wide, Ultra3, Ultra320 SCSI). Fortunately, they all use the same command set.

Any SCSI exchange takes place between an initiator (the host adapter) sending commands and a target (the drive) responding to them. Immediately after receiving a command, the target sends a so-called sense code (busy, error or free), from which the initiator knows whether it will receive the desired response.

The SCSI protocol specifies almost 60 different commands. They are divided into four categories: non-data, bi-directional, read data, and write data.

The limitations of SCSI start to show when you add drives to the bus. Today it is hardly possible to find a single hard drive that can saturate the 320 MB/s of Ultra320 SCSI, but five or more drives on one bus is another matter entirely. One option is to add a second host adapter for load balancing, but that comes at a cost. Cables are also a problem: twisted 80-wire cables are very expensive. And if you want hot-swapping of drives, that is, easy replacement of a failed drive, special hardware (a backplane) is required.

Of course, it is best to place the drives in separate enclosures or modules, which are usually hot-swappable and come with other nice management features. As a result, more professional SCSI solutions have appeared on the market, but they all cost a lot, which is why the SATA standard has developed so rapidly in recent years. And although SATA will never meet the needs of high-end enterprise systems, this standard perfectly complements SAS in creating scalable solutions for next-generation network environments.


SAS does not use a common bus for multiple devices. Source: Adaptec.

SATA


On the left is the SATA connector for data transfer. On the right is the power connector. There are enough pins to supply 3.3V, 5V, and 12V voltages to each SATA drive.

The SATA standard has been on the market for several years and is now in its second generation. SATA I offered 1.5 Gb/s throughput over two serial differential pairs (transmit and receive) using low-voltage differential signaling. The physical layer uses 8b/10b encoding (10 transmitted bits per 8 bits of data), which yields a maximum interface throughput of 150 MB/s. After SATA moved to 300 MB/s, many began to call the new standard SATA II, although SATA-IO (the SATA International Organization) had planned to add more features first and only then call it SATA II. Hence the latest specification is called SATA 2.5; it includes SATA extensions such as Native Command Queuing (NCQ), eSATA (external SATA), port multipliers (up to four drives per port), and so on. These additional SATA features, however, are optional for both the controller and the hard drive.
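The 150 MB/s and 300 MB/s figures follow directly from the line rate and 8b/10b encoding, as a quick calculation shows:

```python
# Effective throughput from line rate under 8b/10b encoding:
# 10 transmitted bits carry one data byte.

def effective_mb_s(line_rate_gbps):
    return line_rate_gbps * 1000 / 10

print(effective_mb_s(1.5))  # 150.0 MB/s for SATA I
print(effective_mb_s(3.0))  # 300.0 MB/s for second-generation SATA
```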

Let's hope that SATA III at 600 MB/s will still be released in 2007.

Where parallel ATA (UltraATA) cables were limited to 46 cm, SATA cables can be up to 1 m long, and eSATA cables twice that. Instead of 40 or 80 wires, serial transmission requires only a few conductors, so SATA cables are very narrow, easy to route inside a computer case, and obstruct airflow far less. Each SATA port serves a single device, making it a point-to-point interface.


SATA connectors for data and power provide separate plugs.

SAS


The signaling protocol here is the same as that of SATA. Source: Adaptec.

A nice feature of Serial Attached SCSI is that the technology supports both SCSI and SATA: SAS or SATA drives (or both) can be connected to SAS controllers. SAS drives, however, cannot work with SATA controllers, since they use the Serial SCSI Protocol (SSP). Like SATA, SAS follows the point-to-point connection principle for drives (300 MB/s today), and thanks to SAS expanders, more drives can be connected than there are SAS ports. SAS hard drives have two ports, each with its own unique SAS ID, so you can use two physical connections for redundancy by attaching the drive to two different hosts. Thanks to the SATA Tunneling Protocol (STP), SAS controllers can communicate with SATA drives connected to an expander.


Source: Adaptec.



Source: Adaptec.



Source: Adaptec.

Of course, a single physical connection between a SAS expander and the host controller would be a bottleneck, so the standard provides wide SAS ports. A wide port groups multiple SAS connections into a single link between any two SAS devices (usually between a host controller and an expander). The number of connections within the link can be increased as requirements dictate, but redundant connections are not supported, and no loops or rings are allowed.
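The wide-port arithmetic is straightforward: aggregate bandwidth scales linearly with the number of links, at the 300 MB/s per-link rate of the first-generation SAS discussed here:

```python
# Wide-port bandwidth: several SAS links aggregated into one logical port.

PER_LINK_MB_S = 300   # first-generation SAS per-link rate

for links in (1, 2, 4):
    print(f"{links}-wide port: {links * PER_LINK_MB_S} MB/s")
```

A 4-wide port (the common "4x" connection shown in the diagrams) thus carries 1200 MB/s between the controller and the expander.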


Source: Adaptec.

Future implementations of SAS will add 600 and 1200 MB/s of bandwidth per port. Hard drive performance will certainly not grow in the same proportion, but it will become more practical to drive expanders from a small number of ports.



Devices called "fan-out" and "edge" devices are expanders, but only the main fan-out expander can tie the SAS domain together (see the 4x connection in the center of the diagram). Up to 128 physical connections are allowed per edge expander, and you can use wide ports and/or connect further expanders and drives. The topology can be quite complex, yet flexible and powerful. Source: Adaptec.



Source: Adaptec.

The backplane is the basic building block of any storage system that needs hot-plugging. SAS expanders are therefore often paired with sophisticated drive enclosures (whether in the same case or not). Typically, a single link connects a simple enclosure to a host adapter, while enclosures with built-in expanders naturally rely on multi-link connections.

Three types of cables and connectors have been developed for SAS. SFF-8484 is a multilane internal cable that connects the host adapter to the backplane; the same can be achieved by fanning this cable out at one end into several separate SAS connectors (see the illustration below). SFF-8482 is the connector through which a drive attaches to a single SAS interface. Finally, SFF-8470 is an external multilane cable up to six meters long.


Source: Adaptec.


SFF-8470 cable for external multilink SAS connections.


The SFF-8484 multilane cable: four SAS channels/ports pass through one connector.


SFF-8484 cable that allows you to connect four SATA drives.

SAS as part of SAN solutions

Why do we need all this information? Most users will never come close to the SAS topology discussed above. But SAS is more than a next-generation interface for professional hard drives, although it is ideal for building simple and complex RAID arrays based on one or more RAID controllers. SAS is capable of more: it is a point-to-point serial interface that scales easily as links are added between any two SAS devices. SAS drives come with two ports, so you can connect one port through an expander to a host system and use the other to create a redundant path to a second host system (or another expander).

Communication between SAS adapters and expanders (as well as between two expanders) can be as wide as there are available SAS ports. Expanders are usually rackmount systems that can accommodate a large number of drives, and the possible connection of SAS to a higher device in the hierarchy (for example, a host controller) is limited only by the capabilities of the expander.

With such a rich and functional infrastructure, SAS allows you to build complex storage topologies rather than just dedicated hard drives or separate network storage. Here "complex" does not mean hard to work with: SAS configurations consist of simple disk enclosures or ones built around expanders. Any SAS link can be scaled up or down to match bandwidth requirements, and you can combine fast SAS hard drives with high-capacity SATA models. Together with powerful RAID controllers, this makes it easy to set up, expand, or reconfigure data arrays - both in terms of RAID level and hardware.

All of this becomes even more important when you consider how fast corporate storage is growing. Today everyone is talking about the SAN - the storage area network. It implies a decentralized storage subsystem in which traditional servers use physically remote storage. Over existing Gigabit Ethernet or Fibre Channel networks, a slightly modified SCSI protocol is run, encapsulated in network packets (iSCSI - Internet SCSI). A system ranging from a single hard drive to complex nested RAID arrays becomes a so-called target and is bound to an initiator (the host system), which treats the target as if it were a locally attached drive.

iSCSI, of course, allows you to build a strategy for storage growth, data organization, and access control. We gain another level of flexibility by decoupling storage from the servers it was directly attached to, allowing any storage subsystem to become an iSCSI target. Moving to remote storage makes the system independent of individual storage servers (a dangerous point of failure) and improves hardware manageability, while from the software's point of view the storage is still "inside" the server. The iSCSI target and initiator can sit side by side, on different floors, or in different rooms or buildings - it all depends on the quality and speed of the IP connection between them. From this point of view it is worth noting that without a fast, low-latency link, a SAN is poorly suited to demanding online applications such as databases.

2.5" SAS hard drives

2.5" hard drives for the professional sector are still perceived as a novelty. Quite some time ago we reviewed the first such drive, Seagate's 2.5" Ultra320 Savvio, which left a good impression. All 2.5" SCSI drives use a 10,000 rpm spindle speed, but they fall short of the performance of 3.5" hard drives at the same spindle speed: the outer tracks of 3.5" models move at a higher linear speed, which yields a higher data transfer rate.

The advantage of small hard drives is not capacity: today they top out at 73 GB, while 3.5" enterprise-class drives already reach 300 GB. In many applications, however, performance per unit of physical volume or energy efficiency matters more. The more hard drives you use, the more performance you reap - paired with the appropriate infrastructure, of course. At the same time, 2.5" drives consume almost half as much energy as their 3.5" competitors, so in performance per watt (I/O operations per watt) the 2.5" form factor gives very good results.
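The performance-per-watt argument can be illustrated numerically; the IOPS and power figures below are assumed round numbers for illustration, not measured values from this article:

```python
# Hedged illustration of IOPS per watt for 2.5" vs 3.5" drives.
# The per-drive IOPS and wattage are assumed ballpark values.
drives = {
    "2.5-inch 10K rpm": {"iops": 300, "watts": 8},
    "3.5-inch 10K rpm": {"iops": 320, "watts": 15},
}

for name, d in drives.items():
    print(f"{name}: {d['iops'] / d['watts']:.1f} IOPS per watt")
```

Even if a 3.5" drive delivers slightly more raw IOPS, the 2.5" drive's lower power draw gives it a clear lead in IOPS per watt.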

If you need capacity above all, then 3.5" 10,000 rpm drives are unlikely to be the best choice. The fact is that 3.5" SATA hard drives provide 66% more capacity (500 GB instead of 300 GB per drive) while keeping performance at an acceptable level. Many hard drive manufacturers offer SATA models rated for 24/7 operation, and prices have fallen to a minimum. Reliability concerns can be addressed by keeping spare drives on hand for immediate replacement in the array.

The MAY line represents Fujitsu's current generation of 2.5" drives for the professional sector. The spindle speed is 10,025 rpm, and capacities are 36.7 and 73.5 GB. All drives come with an 8 MB cache and an average seek time of 4.0 ms for reads and 4.5 ms for writes. As we already mentioned, a nice feature of 2.5" hard drives is reduced power consumption: a 2.5" drive typically saves at least 60% of the energy of a 3.5" drive.

3.5" SAS hard drives

MAX is Fujitsu's current line of high-performance 15,000 rpm hard drives, so the name fits perfectly. Unlike the 2.5" drives, here we get a full 16 MB of cache and short average seek times of 3.3 ms for reads and 3.8 ms for writes. Fujitsu offers 36.7, 73.4, and 146 GB models (with one, two, and four platters).

Fluid dynamic bearings have made their way to enterprise-class hard drives, so the new models are significantly quieter than the previous ones at 15,000 rpm. Of course, such hard drives should be properly cooled, and the equipment provides this too.

Hitachi Global Storage Technologies also offers its own line of high-performance solutions. The UltraStar 15K147 runs at 15,000 rpm and has a 16 MB cache, just like the Fujitsu drives, but the platter configuration is different: the 36.7 GB model uses two platters instead of one, and the 73.4 GB model uses three instead of two. This implies a lower data density, but such a design allows the drive to avoid the inner, slowest areas of the platters. As a result, the heads have to move less, which gives a better average access time.

Hitachi also offers 36.7GB, 73.4GB, and 147GB models with a claimed seek (read) time of 3.7ms.

Although Maxtor has already become part of Seagate, the company's product lines are still preserved. The manufacturer offers 36, 73 and 147 GB models, all of which feature a 15,000 rpm spindle speed and 16 MB cache. The company claims an average seek time of 3.4ms for reads and 3.8ms for writes.

The Cheetah has long been associated with high performance hard drives. Seagate was able to instill a similar association with the release of the Barracuda in the desktop segment, offering the first 7200 RPM desktop drive in 2000.

The Cheetah is available in 36.7 GB, 73.4 GB, and 146.8 GB models. All feature a 15,000 rpm spindle speed and an 8 MB cache. The average seek time is 3.5 ms for reads and 4.0 ms for writes.

Host adapters

Unlike SATA controllers, SAS components can only be found on server-grade motherboards or as expansion cards for PCI-X or PCI Express. Going a step further to RAID (Redundant Array of Inexpensive Disks) controllers, these are sold mostly as individual cards due to their complexity. RAID cards contain not only the controller itself, but also a chip that accelerates the calculation of redundancy information (an XOR engine), as well as cache memory. A small amount of memory is sometimes soldered onto the card (most often 128 MB), but some cards allow the amount to be expanded using a DIMM or SO-DIMM.

When choosing a host adapter or RAID controller, you should clearly define what you need. The range of new devices is growing just before our eyes. Simple multiport host adapters will cost relatively little, while powerful RAID cards will cost a lot. Consider where you will place your drives: external storage requires at least one external slot. Rack servers typically require low profile cards.

If you need RAID, then decide whether you will use hardware acceleration. Some RAID cards take CPU resources for XOR calculations for RAID 5 or 6 arrays; others use their own XOR hardware engine. RAID acceleration is recommended for environments where the server does more than store data, such as databases or web servers.

All of the host adapter cards covered in this article support 300 MB/s per SAS port and allow very flexible storage infrastructures. External ports no longer surprise anyone, and all of the cards support both SAS and SATA hard drives. All three cards use the PCI-X interface, but PCI Express versions are already in development.

In this article we looked at cards with eight ports, but the number of connected drives is not limited to that: with an external SAS expander, you can attach large storage enclosures. As long as a 4-lane connection is sufficient, the number of hard drives can grow to 122. Because of the performance cost of calculating RAID 5 or RAID 6 parity, typical external RAID enclosures will not saturate the four-lane bandwidth even with a large number of drives.
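A rough sketch of why a four-lane link is rarely the bottleneck; the per-drive streaming rate below is an assumed typical value, not a specification:

```python
# Rough sketch: drives needed to saturate one four-lane SAS connection.
# The per-drive sequential rate is an assumed typical value.
LANES = 4
PER_LANE_MBPS = 300
DRIVE_MBPS = 80  # assumed sustained throughput of one drive

link_mbps = LANES * PER_LANE_MBPS
drives_to_saturate = link_mbps / DRIVE_MBPS
print(f"Link: {link_mbps} MB/s, ~{drives_to_saturate:.0f} drives to saturate it")
```

With parity calculation eating into throughput, even enclosures holding more drives than this in practice fall short of the link's ceiling.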

The 48300 is a SAS host adapter designed for the PCI-X bus. The server market is still dominated by PCI-X, although more and more motherboards come with PCI Express interfaces.

The Adaptec SAS 48300 uses a 133 MHz PCI-X interface, giving a throughput of 1.06 GB/s - fast enough, as long as the PCI-X bus is not loaded by other devices. If a slower device shares the bus, all other PCI-X cards drop to its speed; for this reason, several PCI-X controllers are sometimes installed on a board.
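The 1.06 GB/s figure follows directly from the bus width and clock:

```python
# Where the 1.06 GB/s PCI-X 133 figure comes from:
# a 64-bit (8-byte) data path clocked at 133 MHz.
BUS_WIDTH_BYTES = 8   # 64-bit PCI-X data path
CLOCK_MHZ = 133

bandwidth_mbps = BUS_WIDTH_BYTES * CLOCK_MHZ
print(f"PCI-X 133: {bandwidth_mbps} MB/s = {bandwidth_mbps / 1000:.2f} GB/s")
```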

Adaptec positions the SAS 48300 for midrange and low-end servers and workstations. The suggested retail price is $360, which is quite reasonable. Adaptec's HostRAID feature is supported, providing basic RAID levels: 0, 1, and 10. The card offers an external four-lane SFF-8470 connection, as well as an internal SFF-8484 connector paired with a cable for four SAS devices - eight ports in total.

The card fits into a 2U rack server when a low-profile slot cover is installed. The package also includes a CD with a driver, a quick installation guide, and an internal SAS cable through which up to four system drives can be connected to the card.

Another SAS player, LSI Logic, sent us the SAS3442X PCI-X host adapter, a direct competitor to the Adaptec SAS 48300. It comes with eight SAS ports split between two quad-lane interfaces; the "heart" of the card is the LSI SAS1068 chip. One of the interfaces is intended for internal devices, the other for external DAS (Direct Attached Storage). The board uses the PCI-X 133 bus interface.

As usual, 300 MB/s interface is supported for SATA and SAS drives. There are 16 LEDs on the controller board. Eight of them are simple activity LEDs, and eight more are designed to report a system malfunction.

The LSI SAS3442X is a low profile card, so it fits easily into any 2U rack server.

Note driver support for Linux, Netware 5.1 and 6, Windows 2000 and Server 2003 (x64), Windows XP (x64) and Solaris up to 2.10. Unlike Adaptec, LSI chose not to add support for any RAID modes.

RAID adapters

The SAS RAID 4800SAS is Adaptec's solution for more complex SAS environments, suitable for application servers, streaming servers, and more. Once again we have an eight-port card, with one external four-lane SAS connection and two internal four-lane interfaces; if the external connection is used, only one of the internal four-lane interfaces remains available.

The card is also designed for the PCI-X 133 bus, which provides enough bandwidth for even the most demanding RAID configurations.

As far as RAID modes are concerned, the SAS RAID 4800 easily outperforms its "younger brother": RAID levels 0, 1, 10, 5, and 50 are supported out of the box, given enough drives. Unlike with the 48300, Adaptec includes two SAS cables, so you can connect eight hard drives to the controller right away. Also unlike the 48300, the card requires a full-size PCI-X slot.

If you upgrade the card with the Adaptec Advanced Data Protection Suite, you gain dual-redundancy RAID modes (6, 60) as well as a range of enterprise-class features: striped mirror (RAID 1E), hot space (RAID 5EE), and copyback hot spare. The Adaptec Storage Manager utility has a browser-like interface and can manage all Adaptec adapters.

Adaptec provides drivers for Windows Server 2003 (and x64), Windows 2000 Server, Windows XP (x64), Novell Netware, Red Hat Enterprise Linux 3 and 4, SuSe Linux Enterprise Server 8 and 9, and FreeBSD.

SAS snap-ins

The 335SAS is a four-bay enclosure for SAS or SATA drives, but it must be connected to a SAS controller. Thanks to a 120 mm fan, the drives are well cooled. The enclosure also requires two Molex power plugs.

Adaptec includes an I2C cable that can be used to monitor the enclosure via a suitable controller, although this does not work with SAS drives. An additional LED cable signals drive activity, but again only for SATA drives. The package also includes an internal SAS cable for four drives, so one four-lane cable is enough to connect all the drives. If you want to use SATA drives, you will have to use SAS-to-SATA adapters.

The retail price of $369 is not cheap. But you will get a solid and reliable solution.

SAS storage

The SANbloc S50 is a 12-drive enterprise-class solution: a 2U rackmount enclosure that connects to SAS controllers, and one of the best examples of scalable SAS storage. The 12 drives can be SAS, SATA, or a mix of both types. The built-in expander can use one or two quad-lane SAS interfaces to connect the S50 to a host adapter or RAID controller. As befits a clearly professional solution, it is equipped with two redundant power supplies.

If you have already purchased an Adaptec SAS host adapter, you can easily connect it to the S50 and manage the drives using Adaptec Storage Manager. With 500 GB SATA hard drives installed you get 6 TB of storage; with 300 GB SAS drives, the capacity is 3.6 TB. Since the expander connects to the host controller over two four-lane interfaces, throughput reaches 2.4 GB/s, more than enough for an array of any type: even 12 drives in RAID 0 deliver a maximum of only about 1.1 GB/s. In the middle of this year, Adaptec promises a slightly modified version with two independent SAS I/O modules.
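The capacity and bandwidth figures above can be reproduced with a few lines:

```python
# Reproducing the SANbloc S50 arithmetic from the text above.
BAYS = 12
sata_gb, sas_gb = 500, 300
print(f"SATA capacity: {BAYS * sata_gb / 1000:.1f} TB")  # 6.0 TB
print(f"SAS capacity:  {BAYS * sas_gb / 1000:.1f} TB")   # 3.6 TB

lanes, per_lane_mbps = 2 * 4, 300  # two four-lane links to the host
print(f"Host link bandwidth: {lanes * per_lane_mbps / 1000:.1f} GB/s")  # 2.4 GB/s
```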

The SANbloc S50 includes automatic monitoring and automatic fan-speed control. The device is quite loud, so we were relieved to return it after the tests were completed. A drive-failure message is sent to the controller via SES-2 (SCSI Enclosure Services) or via the physical I2C interface.

Operating temperatures are 5-55°C for the drives and 0-40°C for the enclosure.

At the start of our tests we got a peak throughput of just 610 MB/s. After changing the cable between the S50 and the Adaptec host controller, we were able to reach 760 MB/s. We used seven hard drives in RAID 0 to load the system; increasing the number of drives did not increase throughput.

Test configuration

System hardware
Processors: 2x Intel Xeon (Nocona core), 3.6 GHz, FSB800, 1 MB L2 cache
Platform: Asus NCL-DS (Socket 604), Intel E7520 chipset, BIOS 1005
Memory: Corsair CM72DD512AR-400 (DDR2-400 ECC, registered), 2x 512 MB, CL3-3-3-10
System hard drive: Western Digital Caviar WD1200JB, 120 GB, 7,200 rpm, 8 MB cache, UltraATA/100

Drive controllers
Intel 82801EB UltraATA/100 (ICH5)
Promise SATA 300TX4, driver 1.0.0.33
Adaptec AIC-7902B Ultra320, driver 3.0
Adaptec 48300 8-port PCI-X SAS, driver 1.1.5472
Adaptec 4800 8-port PCI-X SAS, driver 5.1.0.8360, firmware 5.1.0.8375
LSI Logic SAS3442X 8-port PCI-X SAS, driver 1.21.05, BIOS 6.01

Enclosures
4-bay hot-swap internal enclosure
2U 12-drive SAS/SATA JBOD

Network: Broadcom BCM5721 Gigabit Ethernet
Video card: integrated ATI RageXL, 8 MB

Tests
Low-level performance: h2benchw 3.6
I/O performance: IOMeter 2003.05.10 (file server, web server, database, and workstation benchmarks)

System software and drivers
OS: Microsoft Windows Server 2003 Enterprise Edition, Service Pack 1
Platform driver: Intel Chipset Installation Utility 7.0.0.1025
Graphics driver: Windows default

After examining several new SAS hard drives, three related controllers, and two enclosures, it became clear that SAS is indeed a promising technology. A look at the SAS technical documentation shows why: not only is SAS the serial successor to SCSI (fast, convenient, and easy to use), it also offers a level of scalability and infrastructure growth that makes Ultra320 SCSI solutions look like the stone age.

And the compatibility is just great. If you're planning to buy professional SATA hardware for your server, SAS is worth a look. Any SAS controller or accessory is compatible with both SAS and SATA hard drives. Therefore, you can create both a high-performance SAS environment and a capacious SATA environment - or both.

Convenient support for external storage is another important advantage of SAS. Where SATA storage relies on either proprietary solutions or a single SATA/eSATA link, the SAS interface allows bandwidth to be increased in groups of four SAS links. As a result, bandwidth can grow with application demands instead of being capped at UltraSCSI's 320 MB/s or SATA's 300 MB/s. Moreover, SAS expanders allow a whole hierarchy of SAS devices, giving administrators more freedom of action.

The evolution of SAS devices will not end there. The UltraSCSI interface can be considered obsolete and due to be slowly written off; the industry is unlikely to improve it beyond supporting existing implementations. New hard drives, the latest storage and enclosure models, and interface speed increases to 600 MB/s and then 1200 MB/s are all headed for SAS.

What should a modern storage infrastructure look like? With SAS available, the days of UltraSCSI are numbered: the serial version is a logical step forward and does everything better than its predecessor, so the choice between UltraSCSI and SAS is obvious. Choosing between SAS and SATA is somewhat harder, but looking ahead, SAS components still come out on top. Indeed, for maximum performance or scalability, there is no alternative to SAS today.

Hard drive for the server, features of choice

The hard drive is the most valuable component in any computer, because it stores the information the computer and the user work with. Every time a person sits down at a personal computer, he expects the operating system to boot and the hard drive to serve up his data from its depths. When we are talking about a hard drive, or an array of them, inside a server, there are tens, hundreds, or thousands of such users expecting access to personal or work data, and their quiet work, recreation, and entertainment all depend on the devices that store it. From this comparison alone it is clear that the demands placed on home-class and industrial-class hard drives are unequal: in the first case one user works with the drive, in the second, thousands. The server drive must therefore be many times more reliable, faster, and more stable, because many users depend on it. This article discusses the types of hard drives used in the corporate sector and the design features that allow them to achieve the highest reliability and performance.

SAS and SATA drives - so similar and so different

Until recently, the standards for industrial-grade and household hard drives differed significantly and were incompatible - SCSI and IDE. Now the situation has changed: the vast majority of hard drives on the market are SATA and SAS (Serial Attached SCSI) drives. The SAS connector is versatile and form-factor compatible with SATA. This allows a SAS system to directly accept both high-speed but lower-capacity SAS drives (up to 300 GB at the time of writing) and slower but far more capacious SATA drives (up to 2 TB at the time of writing). Thus, in one disk subsystem it is possible to combine mission-critical applications that require high performance and fast data access with more economical applications that need a lower cost per gigabyte.

This interoperability benefits both backplane manufacturers and end users by reducing hardware and engineering costs.

That is, both SAS devices and SATA devices can be connected to SAS connectors, and only SATA devices can be connected to SATA connectors.

SAS and SATA - high speed and large capacity. What to choose?

SAS disks, which replaced SCSI disks, fully inherited their defining characteristics: spindle speed (15,000 rpm) and standard capacities (36, 74, 147, and 300 GB). However, SAS technology itself differs significantly from SCSI. The main differences and features: the SAS interface uses point-to-point connections - each device is connected to the controller by a dedicated channel - whereas SCSI works over a shared bus.

SAS supports a large number of devices (up to 16,384), while the SCSI interface supports 8, 16, or 32 devices on the bus.

The SAS interface supports data transfer rates of 1.5, 3, or 6 Gb/s per device, while SCSI bus bandwidth is not dedicated to each device but shared among all of them.

SAS supports the connection of slower SATA devices.

SAS configurations are much easier to assemble and install. Such a system is easier to scale. In addition, SAS hard drives inherited the reliability of SCSI hard drives.
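The practical effect of the shared bus versus point-to-point links can be sketched like this (the Ultra-320 figure comes from earlier in the article; device counts are illustrative):

```python
# Shared-bus SCSI vs point-to-point SAS: per-device bandwidth sketch.
ULTRA320_MBPS = 320   # shared across every device on the parallel bus
SAS_LANE_MBPS = 300   # dedicated to each device over its own 3 Gb/s link

for devices in (2, 8, 15):
    shared = ULTRA320_MBPS / devices
    print(f"{devices} devices: SCSI {shared:.0f} MB/s each, "
          f"SAS {SAS_LANE_MBPS} MB/s each")
```

The more devices hang on a SCSI bus, the thinner each one's slice becomes, while every SAS device keeps its full dedicated link.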

When choosing a disk subsystem - SAS or SATA, you need to be guided by what functions will be performed by the server or workstation. To do this, you need to decide on the following questions:

1. How many simultaneous, diverse requests will the disk subsystem process? If many, your clear choice is SAS. Likewise, if your system will serve a large number of users, choose SAS.

2. How much information will be stored on the disk subsystem of your server or workstation? If more than 1-1.5 TB, you should pay attention to a system based on SATA hard drives.

3. What is the budget allocated for the purchase of a server or workstation? It should be remembered that in addition to SAS disks, you will need a SAS controller, which also needs to be taken into account.

4. Do you plan, over time, to grow data volumes, performance, or the fault tolerance of the system? If so, you need a SAS-based disk subsystem: it is easier to scale and more reliable.

5. Will your server host mission-critical data and applications? Then your choice is heavy-duty SAS drives.
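As a toy illustration, the five questions above could be folded into a decision helper; the weighting here is our own invention, not a formal sizing methodology:

```python
# Toy decision helper mirroring the five questions above.
# The voting scheme is an illustrative assumption.
def suggest_interface(many_concurrent_requests: bool,
                      capacity_tb: float,
                      tight_budget: bool,
                      needs_scaling: bool,
                      mission_critical: bool) -> str:
    """Return 'SAS' or 'SATA' based on simple majority of hints."""
    sas_votes = sum([many_concurrent_requests, needs_scaling, mission_critical])
    sata_votes = sum([capacity_tb > 1.5, tight_budget])
    return "SAS" if sas_votes >= sata_votes else "SATA"

# A loaded database server vs a bulk file archive:
print(suggest_interface(True, 0.5, False, True, True))    # SAS
print(suggest_interface(False, 4.0, True, False, False))  # SATA
```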

A reliable disk subsystem is not only high-quality hard disks from a well-known manufacturer, but also an external disk controller; controllers will be discussed in one of the following articles. Now let us consider SATA drives: what types exist and which should be used when building server systems.

SATA drives: consumer and industrial sector

SATA drives are used everywhere, from consumer electronics and home computers to high-performance workstations and servers, and they fall into distinct classes: drives for consumer appliances, with low heat dissipation and power consumption and consequently low performance; middle-class drives for home computers; and drives for high-performance systems. In this article we consider the class of hard drives for high-performance systems and servers.

Performance characteristics | Server-class HDD | Desktop-class HDD
Rotational speed | 7,200 rpm (nominal) | 7,200 rpm (nominal)
Cache size | - | -
Average latency | 4.20 ms (nominal) | 6.35 ms (nominal)
Transfer rate, reads from drive cache (Serial ATA) | 3 Gb/s maximum | 3 Gb/s maximum

Physical characteristics | Server-class HDD | Desktop-class HDD
Formatted capacity | 1,000,204 MB | 1,000,204 MB
Interface | SATA 3 Gb/s | SATA 3 Gb/s
User-addressable sectors | 1,953,525,168 | 1,953,525,168
Height | 25.4 mm | 25.4 mm
Length | 147 mm | 147 mm
Width | 101.6 mm | 101.6 mm
Weight | 0.69 kg | 0.69 kg

Shock resistance | Server-class HDD | Desktop-class HDD
Operating | 65 G, 2 ms | 30 G, 2 ms
Non-operating | 250 G, 2 ms | 250 G, 2 ms

Temperature | Server-class HDD | Desktop-class HDD
Operating | 0°C to 60°C | 0°C to 50°C
Non-operating | -40°C to 70°C | -40°C to 70°C

Relative humidity | Server-class HDD | Desktop-class HDD
Operating | 5-95% | -
Non-operating | 5-95% | 5-95%

Vibration, operating | Server-class HDD | Desktop-class HDD
Linear | 20-300 Hz, 0.75 g (0 to peak) | 22-330 Hz, 0.75 g (0 to peak)
Random | 0.004 g/Hz (10-300 Hz) | 0.005 g/Hz (10-300 Hz)

Vibration, non-operating | Server-class HDD | Desktop-class HDD
Low frequency | 0.05 g/Hz (10-300 Hz) | 0.05 g/Hz (10-300 Hz)
High frequency | 20-500 Hz, 4.0 G (0 to peak) | -

The table shows the characteristics of hard drives from one of the leading manufacturers: one column gives data for a server-class SATA hard drive, the other for a conventional desktop SATA hard drive.
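As a side note, the "average latency" figure for a drive is its average rotational latency, i.e. half a revolution at the spindle speed; the server drive's 4.20 ms nominal value is close to the half-revolution figure for 7,200 rpm:

```python
# Average rotational latency is half a revolution at the spindle speed.
def avg_rotational_latency_ms(rpm: int) -> float:
    return 0.5 * 60_000 / rpm  # 60,000 ms per minute, half a turn

print(f"7,200 rpm:  {avg_rotational_latency_ms(7200):.2f} ms")   # ~4.17 ms
print(f"15,000 rpm: {avg_rotational_latency_ms(15000):.2f} ms")  # 2.00 ms
```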

From the table we can see that the disks differ not only in performance but also in operational characteristics that directly affect the service life and reliable operation of the drive, even though outwardly these hard drives hardly differ. Let us consider which technologies and features make this possible:

A reinforced spindle, which some manufacturers fix at both ends, reducing the influence of external vibration and helping position the head assembly precisely during read and write operations.

Special intelligent technologies that account for both linear and angular vibration, reducing head-positioning time and increasing disk performance by up to 60%.

A time-limited error-recovery feature tuned for RAID operation, which prevents hard drives from dropping out of RAID arrays - a failure mode characteristic of conventional drives.

Head height adjustment combined with technology that prevents contact with the platter surface, significantly increasing the life of the disk.

A wide range of self-diagnostic functions that predict in advance when the hard drive will fail and warn the user, leaving time to copy information to a backup drive.

Features that reduce the rate of unrecoverable read errors, increasing the reliability of a server hard drive compared to conventional drives.
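To see why a lower unrecoverable-read-error (URE) rate matters, one can estimate the probability of hitting at least one URE while reading a full 1 TB drive; the 10^-14 and 10^-15 per-bit error rates below are typical datasheet classes, assumed here for illustration:

```python
# Probability of at least one unrecoverable read error (URE) when
# reading a full 1 TB drive. Error rates are assumed datasheet classes.
TB_BITS = 1e12 * 8  # bits in 1 TB

def p_error(ure_per_bit: float, bits_read: float) -> float:
    """P(at least one URE) over `bits_read` independent bit reads."""
    return 1 - (1 - ure_per_bit) ** bits_read

print(f"Desktop class (1e-14):    {p_error(1e-14, TB_BITS):.1%}")
print(f"Enterprise class (1e-15): {p_error(1e-15, TB_BITS):.1%}")
```

The roughly tenfold gap in error rate translates into a similar gap in the chance of a failed full-disk read, which is exactly the scenario a RAID rebuild creates.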

Speaking practically, we can confidently say that specialized hard drives "behave" much better in servers. Technical support receives many times fewer calls about unstable RAID arrays and hard drive failures. Manufacturers also support the server segment of hard drives much faster than conventional drives, because the industrial sector is a priority for any storage manufacturer - it is where the most advanced technologies guarding your information are used.

Analogue of SAS disks:

Hard drives from Western Digital VelociRaptor. These 10K RPM drives are equipped with a SATA 6 Gb/s interface and 64 MB of cache. The MTBF of these drives is 1.4 million hours.
More details on the manufacturer's website www.wd.com

You can order an assembly of a server based on SAS or an analogue of SAS hard drives in our Status company in St. Petersburg, you can also buy or order SAS hard drives in St. Petersburg:

  • call +7-812-385-55-66 in St. Petersburg
  • write to the address
  • Leave an application on our website on the page "Online application"

Little has changed over the past two years:

  • Supermicro is ditching the proprietary "flipped" UIO form factor for controllers. Details will be below.
  • LSI 2108 (SAS2 RAID with 512MB cache) and LSI 2008 (SAS2 HBA with optional RAID support) are still in service. Products based on these chips, both from LSI and from OEM partners, are well debugged and are still relevant.
  • Then came the LSI 2208 (the same SAS2 RAID with the LSI MegaRAID stack, only with a dual-core processor and 1024 MB of cache) and the LSI 2308 (an improved version of the LSI 2008 with a faster processor and PCI-E 3.0 support).

Transition from UIO to WIO

As you remember, UIO boards are ordinary PCI-E x8 boards with all the components mounted on the reverse side, i.e. facing up when the board is installed in the left riser. This form factor was needed to fit a board into the lowest slot of the server, which allowed four boards to be placed in the left riser. UIO is not only a form factor for expansion boards; it also covers cases designed for the risers, the risers themselves, and motherboards of a special form factor with a cutout for the bottom expansion slot and slots for installing risers.
This solution had two problems. First, the non-standard form factor limited the customer's choice, since only a few SAS, InfiniBand, and Ethernet controllers exist in the UIO form factor. Second, the riser slots do not carry enough PCI-E lanes - only 36, of which just 24 go to the left riser, clearly not enough for four PCI-E x8 boards.
What is WIO? First, it turned out to be possible to place four boards in the left riser without having to flip them component-side down, and risers for regular boards appeared (RSC-R2UU-A4E8+). Then the lane shortage was solved (there are now 80) by using slots with a higher pin density.
UIO riser RSC-R2UU-UA3E8+
WIO riser RSC-R2UW-4E8

Results:
  • WIO risers cannot be installed in UIO motherboards (eg X8DTU-F).
  • UIO risers cannot be installed in new WIO boards.
  • There are WIO risers (for the motherboard) that provide a UIO slot for cards, in case you still have UIO controllers. They are used in Socket B2 platforms (6027B-URF, 1027B-URF, 6017B-URF).
  • New controllers in the UIO form factor will not appear. For example, the USAS2LP-H8iR controller on the LSI 2108 chip will be the last one, there will be no LSI 2208 for UIO - only a regular MD2 with PCI-E x8.

PCI-E controllers

At the moment, three varieties are relevant: RAID controllers based on the LSI 2108/2208 and HBAs based on the LSI 2308. There is also an exotic SAS2 HBA, the AOC-SAS2LP-MV8 on a Marvell 9480 chip, but we will not cover it here due to its rarity. Most use cases for internal SAS HBAs are storage with ZFS under FreeBSD and various flavors of Solaris; because these operating systems support them without problems, the choice falls on the LSI 2008/2308 in practically 100% of cases.
LSI 2108
In addition to the UIO-format AOC-USAS2LP-H8iR mentioned above, two more controllers were added:

AOC-SAS2LP-H8iR
LSI 2108, SAS2 RAID 0/1/5/6/10/50/60, 512 MB cache, 8 internal ports (2x SFF-8087). An analogue of the LSI 9260-8i controller, but manufactured by Supermicro; there are minor differences in board layout, and the price is $40-50 lower than LSI's. All additional LSI options are supported: activation keys for FastPath and CacheCade 2.0, plus battery cache protection - LSIiBBU07 and LSIiBBU08 (BBU08 is now preferable: it has an extended temperature range and comes with a cable for remote mounting).
Despite the arrival of more powerful controllers based on the LSI 2208, the LSI 2108 remains relevant thanks to its lower price. Its performance with conventional HDDs is sufficient in any scenario; its IOPS ceiling when working with SSDs is 150,000, which is more than enough for most budget solutions.

AOC-SAS2LP-H4iR
LSI 2108, SAS2 RAID 0/1/5/6/10/50/60, 512 MB cache, 4 internal + 4 external ports. It is an analogue of the LSI 9280-4i4e controller. It is convenient in expander chassis, since you don't have to route the expander's output outside to connect additional JBODs, or in 1U chassis for 4 disks when you need to allow for growing the number of disks later. It supports the same BBUs and activation keys.
LSI 2208

AOC-S2208L-H8iR
LSI 2208, SAS2 RAID 0/1/5/6/10/50/60, 1024 MB cache, 8 internal ports (2 SFF-8087 connectors). It is an analogue of the LSI 9271-8i controller. The LSI 2208 is a further development of the LSI 2108: the processor became dual-core, which raised the IOPS ceiling to 465,000; PCI-E 3.0 support was added; and the cache was increased to 1 GB.
The controller supports BBU09 battery cache protection and CacheVault flash protection. Supermicro supplies them under part numbers BTR-0022L-LSI00279 and BTR-0024L-LSI00297, but it is easier to buy them through the LSI sales channel (the second half of each part number is the native LSI part number). MegaRAID Advanced Software Options activation keys are also supported, part numbers AOC-SAS2-FSPT-ESW (FastPath) and AOCCHCD-PRO2-KEY (CacheCade Pro 2.0).
LSI 2308 (HBA)

AOC-S2308L-L8i and AOC-S2308L-L8e
LSI 2308, SAS2 HBA (RAID 0/1/1E with the IR firmware), 8 internal ports (2 SFF-8087 connectors). These are the same controller shipped with different firmware: the AOC-S2308L-L8e carries IT firmware (a pure HBA), while the AOC-S2308L-L8i carries IR firmware (supporting RAID 0/1/1E). The difference is that the L8i can run either IR or IT firmware, whereas the L8e can run only IT - flashing it to IR is locked. It is an analogue of the LSI 9207-8i controller. Differences from the LSI 2008: a faster chip (800 MHz, raising the IOPS ceiling to 650 thousand) and PCI-E 3.0 support. Applications: software RAID (ZFS, for example) and budget servers.
There will be no cheap RAID-5-capable controllers based on this chip (the iMR stack; among off-the-shelf controllers, that is the LSI 9240).

Onboard controllers

In its latest products (X9 boards and the platforms built on them), Supermicro denotes an onboard SAS2 controller from LSI with the digit "7" in the part number, while the digit "3" indicates chipset SAS (Intel C600). The numbering just doesn't distinguish between the LSI 2208 and 2308, so be careful when choosing a board.
  • The LSI 2208-based controller soldered onto motherboards is limited to a maximum of 16 disks. If you add a 17th, it simply will not be detected, and the MSM log will show the message "PD is not supported". This is offset by a significantly lower price: for example, the bundle "X9DRHi-F + external LSI 9271-8i controller" will cost about $500 more than an X9DRH-7F with the LSI 2208 on board. Bypassing this limitation by crossflashing to LSI 9271 will not work: flashing a different SBR block, as was done with the LSI 2108, does not help here.
  • Another feature is the lack of support for CacheVault modules, there is simply not enough space on the boards for a special connector, so only BBU09 is supported. The ability to install the BBU09 depends on the enclosure used. For example, the LSI 2208 is used in the 7127R-S6 blade servers, there is a BBU connector, but to mount the module itself, you need an additional MCP-640-00068-0N Battery Holder Bracket.
  • Flashing the SAS HBA (LSI 2308) firmware now takes extra effort: under DOS, sas2flash.exe does not start on any of the boards with the LSI 2308, failing with the error "Failed to initialize PAL" (the usual workaround is to flash from the UEFI shell).

Controllers in Twin and FatTwin platforms

Some 2U Twin² platforms come in several versions with different types of controllers. For example:
  • 2027TR-HTRF+ - Chipset SATA
  • 2027TR-H70RF+ - LSI 2008
  • 2027TR-H71RF+ - LSI 2108
  • 2027TR-H72RF+ - LSI 2208
Such diversity is ensured by the fact that the controllers are placed on a special backplane that connects to a special slot on the motherboard and to the disk backplane.
BPN-ADP-SAS2-H6IR (LSI 2108)


BPN-ADP-S2208L-H6iR (LSI 2208)

BPN-ADP-SAS2-L6i (LSI 2008)

Supermicro xxxBE16/xxxBE26 Enclosures

Another topic directly related to controllers is the updated chassis with expander backplanes. Versions have appeared with an additional cage for two 2.5" disks on the rear panel of the chassis, intended for a dedicated boot disk (or boot mirror). Of course, the system can also be booted from a small volume carved out of another disk group, or from extra disks mounted inside the chassis (in 846-series chassis you can install additional brackets for one 3.5" or two 2.5" drives), but the updated versions are much more convenient:




Moreover, these additional disks do not need to be connected specifically to the chipset SATA controller. Using the SFF8087->4xSATA cable, you can connect to the main SAS controller through the expander's SAS output.
P.S. Hope the information was helpful. Remember that the most complete information and technical support for products from Supermicro, LSI, Adaptec by PMC, and other vendors is available from True System.

RAID 6, 5, 1, and 0 array tests with Hitachi SAS-2 drives

Apparently, the days when a decent professional 8-port RAID controller cost quite impressive money are gone. Today there are Serial Attached SCSI (SAS) solutions that are very attractive in price, functionality, and performance alike. This review covers one of them.

Controller LSI MegaRAID SAS 9260-8i

Earlier we wrote about the second-generation SAS interface with its 6 Gb/s transfer rate and the very cheap 8-port LSI SAS 9211-8i HBA, designed for entry-level storage built on the simplest RAID arrays of SAS and SATA drives. The LSI MegaRAID SAS 9260-8i is a class higher: it is equipped with a more powerful processor that computes level 5, 6, 50 and 60 arrays in hardware (RAID-on-Chip, ROC), as well as a substantial 512 MB of onboard SDRAM for efficient data caching. This controller also supports 6 Gb/s SAS and SATA, and the adapter itself uses the PCI Express x8 2.0 bus (5 GT/s per lane), which is theoretically almost enough to satisfy eight high-speed SAS ports. And all this at a retail price of around $500, only a couple of hundred more than the budget LSI SAS 9211-8i. The manufacturer itself, incidentally, places this solution in its MegaRAID Value Line, that is, among the economical solutions.




LSI MegaRAID SAS 9260-8i 8-port SAS controller and its SAS2108 processor with DDR2 memory

The LSI SAS 9260-8i board has a low profile (MD2 form factor), carries two internal Mini-SAS 4X connectors (each allows up to 4 SAS drives to be connected directly, or more via port multipliers), is designed for the PCI Express x8 2.0 bus, and supports RAID levels 0, 1, 5, 6, 10, 50 and 60, dynamic SAS functionality, and so on. The LSI SAS 9260-8i can be installed both in 1U and 2U rack servers (mid- and high-end) and in ATX and Slim-ATX cases (workstations). RAID is handled in hardware by the built-in LSI SAS2108 processor (PowerPC core at 800 MHz), equipped with 512 MB of DDR2-800 memory with ECC support. LSI promises processor data rates of up to 2.8 GB/s for reading and up to 1.8 GB/s for writing. The adapter's rich feature set includes Online Capacity Expansion (OCE) and Online RAID Level Migration (RLM) (growing volumes and changing array types on the fly), SafeStore Encryption Services and Instant Secure Erase (on-disk data encryption and secure data deletion), support for solid-state drives (SSD Guard technology), and more. An optional battery module is available for this controller (with it installed, the maximum operating temperature should not exceed +44.5 °C).

LSI SAS 9260-8i Controller Key Specifications

System interface: PCI Express x8 2.0 (5 GT/s), Bus Master DMA
Disk interface: SAS-2, 6 Gb/s (supports SSP, SMP, STP, and SATA protocols)
Number of SAS ports: 8 (2 x4 Mini-SAS SFF8087), up to 128 drives via port multipliers
RAID support: levels 0, 1, 5, 6, 10, 50, 60
CPU: LSI SAS2108 ROC (PowerPC @ 800 MHz)
Built-in cache: 512 MB ECC DDR2 800 MHz
Power consumption: no more than 24 W (+3.3 V and +12 V from the PCIe slot)
Operating/storage temperature range: 0…+60 °C / −45…+105 °C
Form factor, dimensions: MD2 low-profile, 168×64.4 mm
MTBF: >2 million hours
Manufacturer's warranty: 3 years

Typical applications for the LSI MegaRAID SAS 9260-8i include video stations of all kinds (video on demand, video surveillance, video creation and editing, medical imaging), high-performance computing and digital data archives, and various servers (file, web, mail, database). In short, the vast majority of tasks encountered in small and medium-sized businesses.

In a white-and-orange box with the frivolously smiling, toothy face of a lady on the cover (apparently to better lure bearded system administrators and stern system builders) there are the controller board, brackets for mounting it in ATX, Slim-ATX and similar cases, two 4-drive cables with a Mini-SAS connector on one end and ordinary SATA connectors (without power) on the other (for connecting up to 8 drives to the controller), and a CD with PDF documentation and drivers for numerous versions of Windows, Linux (SuSE and RedHat), Solaris, and VMware.


LSI MegaRAID SAS 9260-8i boxed controller package (MegaRAID Advanced Services Hardware Key mini card is available upon separate request)

LSI MegaRAID Advanced Services software technologies are available for the LSI MegaRAID SAS 9260-8i controller with a special hardware key (sold separately): MegaRAID Recovery, MegaRAID CacheCade, MegaRAID FastPath, and LSI SafeStore Encryption Services (their examination is beyond the scope of this article). In particular, for raising the performance of an array of traditional hard disks (HDDs) with the help of a solid-state drive (SSD) added to the system, MegaRAID CacheCade will be useful: the SSD acts as a second-level cache for the HDD array (an analogue of a hybrid HDD), in some cases boosting disk-subsystem performance by up to 50 times. Also of interest is MegaRAID FastPath, which reduces the SAS2108 processor's I/O latency (by disabling HDD-specific optimizations) and thereby speeds up an array of multiple solid-state drives connected directly to the SAS 9260-8i ports.

It is more convenient to configure and maintain the controller and its arrays from the management application in the operating system (the settings in the controller's own BIOS Setup menu are not rich enough; only basic functions are available). In particular, the manager lets you organize any array and set its operating policies (caching, etc.) in a few mouse clicks - see the screenshots.




Example screenshots of the Windows manager for configuring RAID levels 5 (top) and 1 (bottom).

Testing

To explore the base performance of the LSI MegaRAID SAS 9260-8i (without the MegaRAID Advanced Services hardware key and the related technologies), we used five high-performance SAS drives with a 15K rpm spindle speed and SAS-2 (6 Gb/s) support: the Hitachi Ultrastar 15K600 HUS156030VLS600 with a capacity of 300 GB.


Hitachi Ultrastar 15K600 hard drive without top cover

This allows us to test all the basic array levels (RAID 6, 5, 10, 0 and 1), and not only with the minimum number of disks for each, but also "for growth", that is, with a disk added on the second of the ROC chip's two 4-channel SAS ports. Note that the hero of this article has a simplified sibling, the 4-port LSI MegaRAID SAS 9260-4i, built on the same components, so our tests of 4-disk arrays apply equally to it.

The maximum payload sequential read/write speed of the Hitachi HUS156030VLS600 is about 200 MB/s (see the chart). The average random access time for reads (per the specifications) is 5.4 ms, and the built-in buffer is 64 MB.


Hitachi Ultrastar 15K600 HUS156030VLS600 sequential read/write speed chart

The test system was based on an Intel Xeon 3120 processor, a motherboard on the Intel P45 chipset, and 2 GB of DDR2-800 memory. The SAS controller was installed in a PCI Express x16 v2.0 slot. The tests were carried out under Windows XP SP3 Professional and Windows 7 Ultimate SP1 x86 (clean English versions), since their server counterparts (Windows 2003 and 2008, respectively) do not allow some of the benchmarks and scripts we used to run. The tests used were AIDA64, ATTO Disk Benchmark 2.46, Intel IOmeter 2006, Intel NAS Performance Toolkit 1.7.1, C'T H2BenchW 4.13/4.16, HD Tach RW 3.0.4.0, and Futuremark's PCMark Vantage and PCMark05. The tests were carried out both on unallocated volumes (IOmeter, H2BenchW, AIDA64) and on formatted partitions. In the latter case (for NASPT and PCMark), results were taken both at the physical beginning of the array and at its middle (array volumes of the maximum available capacity were divided into two equal logical partitions). This lets us evaluate the performance of the solutions more adequately, since the fastest initial sections of a volume, on which most reviewers run their file benchmarks, often do not reflect the situation on the rest of the disk, which can also be used very actively in real work.

All tests were performed five times and the results were averaged. We will discuss our updated methodology for evaluating professional disk solutions in more detail in a separate article.

It remains to add that in this test we used controller firmware version 12.12.0-0036 and driver version 4.32.0.32. Write and read caching was enabled for all arrays and drives. Perhaps the use of more recent firmware and drivers spared us the oddities seen in early tests of this same controller; in our case no such incidents were observed. However, we also do not include in our suite the FC-Test 1.0 script, which is rather dubious in terms of result reliability (behavior which in certain cases those same colleagues are inclined to call "confusion, vacillation and unpredictability"), since we have repeatedly seen it fail on some file patterns (in particular, sets of many small files under 100 KB).

The charts below show results for 8 array configurations:

  1. RAID 0 of 5 disks;
  2. RAID 0 of 4 drives;
  3. RAID 5 of 5 disks;
  4. RAID 5 of 4 drives;
  5. RAID 6 of 5 disks;
  6. RAID 6 of 4 drives;
  7. RAID 1 of 4 drives;
  8. RAID 1 of 2 drives.

By a four-disk RAID 1 array (see the screenshot above), LSI obviously means a stripe + mirror array, usually referred to as RAID 10 (this is also confirmed by the test results).
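The stripe-plus-mirror layout behind this "four-disk RAID 1" can be sketched with a toy mapping function (the function and its block-to-pair scheme are our illustration, not the controller's actual addressing):

```python
# Hypothetical sketch of a 4-disk "RAID 1" behaving as RAID 10 (a stripe of
# mirrors): logical stripes alternate between mirror pairs, and each pair
# holds two identical copies of its stripes.

def raid10_layout(logical_block: int, n_disks: int = 4) -> tuple[int, int]:
    """Return the pair of physical disks holding a given logical stripe."""
    n_pairs = n_disks // 2
    pair = logical_block % n_pairs        # striping across the mirror pairs
    return (2 * pair, 2 * pair + 1)       # both disks of the pair hold the data

# Consecutive logical stripes alternate between pair (0,1) and pair (2,3):
print([raid10_layout(i) for i in range(4)])   # [(0, 1), (2, 3), (0, 1), (2, 3)]
```

This is why the test results look like RAID 10: reads can be striped across both pairs (and served from either copy), while every write costs two physical writes.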

Test results

In order not to overload the review page with a countless array of diagrams, sometimes uninformative and tiresome (a sin certain "rabid colleagues" are often guilty of :)), we have summarized the detailed results of some tests in a table. Those who wish to analyze the fine points of our results (for example, to find out how the test subjects behave in the tasks most critical to them) can do so on their own. We will focus on the most important and key results, and on the averaged indicators.

First, let's look at the results of "purely physical" tests.

The average random access time for a read on a single Hitachi Ultrastar 15K600 HUS156030VLS600 drive is 5.5 ms. However, when organizing them into arrays, this indicator changes slightly: it decreases (due to effective caching in the LSI SAS9260 controller) for “mirror” arrays and increases for all the others. The largest increase (about 6%) is observed for arrays of level 6, since the controller has to access the largest number of disks at the same time (three for RAID 6, two for RAID 5 and one for RAID 0, since access in this test occurs in blocks of only 512 bytes, which is significantly less than the size of array striping blocks).

The situation with random write access to the arrays (in 512-byte blocks) is much more interesting. For a single disk this parameter is about 2.9 ms (without caching in the host controller), but in arrays on the LSI SAS9260 controller we see a significant drop in this figure thanks to good write caching in the controller's 512 MB SDRAM buffer. Interestingly, the most dramatic effect is seen on the RAID 0 arrays (random write access time drops by almost an order of magnitude compared to a single drive)! This should undoubtedly benefit the performance of such arrays in a number of server tasks. At the same time, even on arrays with XOR calculations (that is, a high load on the SAS2108 processor), random write accesses cause no obvious performance drop, again thanks to the powerful controller cache. Naturally, RAID 6 is slightly slower here than RAID 5, but the difference between them is essentially insignificant. In this test I was somewhat surprised by the behavior of the simple two-disk "mirror", which showed the slowest random write access (perhaps a "feature" of this controller's microcode).
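To make the XOR arithmetic mentioned above concrete, here is an illustrative sketch (not controller code, just the underlying math): RAID 5 parity is the byte-wise XOR of a stripe's data blocks, which is exactly what lets any single lost block be rebuilt from the survivors.

```python
# Illustrative sketch: RAID 5 parity is the byte-wise XOR of the data blocks,
# so any one lost block can be rebuilt from the remaining blocks plus parity.
# (RAID 6 adds a second, independently computed syndrome on top of this.)

from functools import reduce

def xor_parity(blocks: list[bytes]) -> bytes:
    """Byte-wise XOR across equal-sized blocks of one stripe."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data blocks of one stripe
parity = xor_parity(data)            # what the RAID engine writes to the parity disk

# Simulate losing block 1 and rebuilding it from the survivors plus parity:
rebuilt = xor_parity([data[0], data[2], parity])
assert rebuilt == data[1]
print("rebuilt:", rebuilt)           # rebuilt: b'BBBB'
```

This also shows why small random writes are expensive for parity arrays: updating one data block means recomputing and rewriting the parity block as well, which is precisely the load the controller's write-back cache absorbs.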

The linear (sequential) read and write speed graphs (in large blocks) show nothing peculiar for any of the arrays (with controller write caching enabled they are almost identical for reading and writing), and all of them scale with the number of disks participating in parallel in the "useful" process. That is, for the five-disk RAID 0 the speed quintuples relative to a single disk (reaching 1 GB/s!), for the five-disk RAID 5 it quadruples, for RAID 6 it triples, for a RAID 1 of four disks it doubles, and the simple mirror duplicates the graphs of a single disk. This pattern is clearly visible, in particular, in the maximum read and write speeds on real large (256 MB) files in large blocks (from 256 KB to 2 MB), which we illustrate with a diagram from the ATTO Disk Benchmark 2.46 test (the results of this test under Windows 7 and XP are almost identical).
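The scaling rule above can be written down as a rough model: each level streams from a different number of "useful" disks. A minimal sketch (the 200 MB/s single-drive figure is the Hitachi 15K600 number from this review; the function name is ours):

```python
# Rough model of sequential-speed scaling: each RAID level streams data from a
# different number of "useful" disks. single_mb_s = 200 is the per-drive figure
# for the Hitachi Ultrastar 15K600 quoted in this review.

def seq_speed(level: str, n_disks: int, single_mb_s: float = 200.0) -> float:
    useful = {
        "raid0":  n_disks,        # all disks carry data
        "raid5":  n_disks - 1,    # one disk's worth of capacity goes to parity
        "raid6":  n_disks - 2,    # two disks' worth goes to parity
        "raid10": n_disks // 2,   # half the disks are mirror copies
        "raid1":  1,              # a plain mirror streams like one disk
    }[level]
    return useful * single_mb_s

print(seq_speed("raid0", 5))   # 1000.0 MB/s, the ~1 GB/s seen in the review
print(seq_speed("raid6", 5))   # 600.0 MB/s
```

This matches the "quintuples / quadruples / triples / doubles" progression described above for the 5-disk arrays.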

Here, only reading files on the five-disk RAID 6 array unexpectedly fell out of the overall picture (the results were rechecked repeatedly). However, when reading in 64 KB blocks this array does reach its 600 MB/s, so let's write this off as a "feature" of the current firmware. Note also that writing real files is slightly faster thanks to caching in the large controller buffer, and the difference from reading grows more noticeable the lower the array's real linear speed.

As for the interface speed, usually measured by buffer writes and reads (repeated accesses to the same disk volume address), here it turned out to be practically the same for all arrays, because the controller cache was enabled for them (see the table). Thus, write performance for every participant in our test came to approximately 2430 MB/s. Note that the PCI Express x8 2.0 bus theoretically provides 40 Gbit/s, or 5 GB/s; in terms of useful data, however, the theoretical limit is lower, 4 GB/s, so in our case the controller really was operating on version 2.0 of the PCIe bus. The 2.4 GB/s we measured is thus evidently the real bandwidth of the controller's onboard memory (DDR2-800 on a 32-bit data bus, as the configuration of the ECC chips on the board indicates, theoretically gives up to 3.2 GB/s). When reading arrays, caching is not as "all-embracing" as when writing, so the "interface" speed measured by the utilities is usually lower than the read speed of the controller's cache memory (typically 2.1 GB/s for level 5 and 6 arrays), and in some cases it "drops" to the read speed of the hard drives' own buffers (about 400 MB/s for a single drive, see the graph above) multiplied by the number of "parallel" drives in the array (this is exactly the case for RAID 0 and 1 in our results).
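The bus and memory figures above are easy to sanity-check: PCIe 2.0 uses 8b/10b encoding, so only 80% of the raw per-lane rate carries payload, and DDR2-800 on a 32-bit bus moves 4 bytes per transfer. A back-of-the-envelope sketch:

```python
# Back-of-the-envelope check of the bandwidth figures quoted above.
# PCIe 2.0 signals at 5 GT/s per lane with 8b/10b encoding (80% payload);
# DDR2-800 on a 32-bit bus transfers 800 MT/s * 4 bytes.

lanes = 8
raw_gtps = 5.0                          # PCIe 2.0, GT/s per lane
pcie_gbit_s = lanes * raw_gtps * 0.8    # usable Gbit/s after 8b/10b overhead
pcie_gbyte_s = pcie_gbit_s / 8          # -> 4.0 GB/s, the practical bus ceiling

ddr2_gbyte_s = 800e6 * 4 / 1e9          # DDR2-800 x 32 bit -> 3.2 GB/s

print(pcie_gbyte_s, ddr2_gbyte_s)       # 4.0 3.2
```

So the measured 2430 MB/s sits comfortably under the 3.2 GB/s DDR2 ceiling, which is itself below the 4 GB/s PCIe 2.0 payload limit, consistent with the conclusion that the controller's onboard memory is the bottleneck.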

Well, we have sorted out the "physics" to a first approximation; it is time to move on to the "lyrics", that is, to tests with "real" application workloads. By the way, it will be interesting to find out whether array performance scales as linearly on complex user tasks as it does on reading and writing large files (see the ATTO diagram a little above). The inquisitive reader, I hope, has already been able to predict the answer.

As a "salad" before the "lyrical" part of our meal, we serve the desktop disk tests from the PCMark Vantage and PCMark05 suites (under Windows 7 and XP, respectively), as well as a similar "track-based" application test from the H2BenchW 4.13 package of the authoritative German magazine C'T. Yes, these tests were originally designed to evaluate hard drives for desktops and low-cost workstations. They emulate on the disks the execution of typical tasks of an advanced personal computer: working with video and audio, Photoshop, antivirus scans, games, swap files, installing applications, copying and writing files, and so on. Their results in the context of this article should therefore not be taken as the ultimate truth: after all, other tasks are more often performed on multi-disk arrays. Nevertheless, given that the manufacturer itself positions this RAID controller for relatively inexpensive solutions as well, this class of test tasks is quite capable of characterizing a fair share of the applications that will actually run on such arrays (the same work with video, professional graphics processing, OS and application swapping, file copying, antivirus, etc.). Therefore, the importance of these three comprehensive benchmarks in our overall suite should not be underestimated.

In the popular PCMark Vantage, on average (see the diagram), we observe a very remarkable fact: the performance of this multi-disk solution hardly depends on the type of array used! Within certain limits, this conclusion also holds for all the individual test tracks (task types) in the PCMark Vantage and PCMark05 suites (see the table for details). This may mean either that the controller's firmware algorithms (for cache and disks) take almost no account of how applications of this type operate, or that the bulk of these tasks executes in the controller's own cache memory (most likely we are seeing a combination of the two factors). However, for the latter case (execution of the tracks largely in the RAID controller's cache), the average performance of the solutions is not that high: compare these figures with the test results of some "desktop" ("chipset") 4-disk RAID 0 and RAID 5 arrays and of inexpensive single SSDs on a 3 Gb/s SATA bus (see our review). If against a simple "chipset" 4-disk RAID 0 (built, moreover, on hard drives half as fast as the Hitachi Ultrastar 15K600 used here) the LSI SAS9260 arrays are less than twice as fast in the PCMark tests, then against even a far-from-fastest "budget" single SSD they all definitely lose! The PCMark05 disk test paints a similar picture (see the table; there is no point in drawing a separate diagram for it).

A similar picture (with some reservations) for the LSI SAS9260-based arrays can be seen in another "track-based" application benchmark, C'T H2BenchW 4.13. Here only the two structurally slowest arrays (the 4-disk RAID 6 and the simple "mirror") lag noticeably behind all the rest, whose performance evidently reaches that "sufficient" level at which it is no longer limited by the disk subsystem but by the efficiency of the SAS2108 processor and the controller cache on these complex access sequences. And in this context we can be pleased that on tasks of this class the performance of the LSI SAS9260 arrays hardly depends on the array type (RAID 0, 5, 6 or 10), which allows more reliable solutions to be used without sacrificing final performance.

However, the good times do not last forever: if we change the tests and check how the arrays handle real files on an NTFS file system, the picture changes dramatically. Thus, in the Intel NASPT 1.7 test, many of whose "pre-installed" scenarios relate quite directly to tasks typical of computers equipped with an LSI MegaRAID SAS9260-8i controller, the arrays line up much as they did in the ATTO test when reading and writing large files: their speed grows in proportion to their "linear" speed.

In this chart we show the average of all NASPT tests and patterns, while the table gives the detailed results. Let me emphasize that we ran NASPT both under Windows XP (as numerous reviewers usually do) and under Windows 7 (which, owing to certain peculiarities of this test, is done less often). The fact is that "Seven" (and its "big brother" Windows 2008 Server) uses more aggressive caching algorithms of its own when working with files than XP does. In addition, copying of large files in "Seven" proceeds mainly in 1 MB blocks (XP, as a rule, operates in 64 KB blocks). As a result, the "file-based" Intel NASPT test produces significantly different results under Windows XP and Windows 7: under the latter they are much higher, sometimes more than twice as high! Incidentally, we compared the results of NASPT (and the other tests in our suite) under Windows 7 with 1 GB and with 2 GB of installed system memory (there are reports that with large amounts of system memory, Windows 7 caches disk operations more aggressively and NASPT results climb even higher), but within the measurement error we found no difference.

We leave the debate about which OS (in terms of caching policies, etc.) is “better” to test disks and RAID controllers for the discussion thread of this article. We believe that it is necessary to test drives and solutions based on them in conditions that are as close as possible to the real situations of their operation. That is why, in our opinion, the results obtained by us for both operating systems are of equal value.

But back to the NASPT average-performance chart. As you can see, the difference between the fastest and slowest arrays tested here averages a little under three times. That is not the five-fold gap seen when reading and writing large files, but it is still very noticeable. The arrays line up essentially in proportion to their linear speed, and that is encouraging: it means the LSI SAS2108 processor handles the data quite briskly, creating almost no bottlenecks even when level 5 and 6 arrays are working actively.

In fairness, it should be noted that NASPT also has patterns (2 of the 12) in which we see the same picture as in PCMark and H2BenchW, namely that all the tested arrays perform almost identically! These are Office Productivity and Dir Copy to NAS (see the table). This is especially evident under Windows 7, although under Windows XP the trend toward "convergence" (compared to the other patterns) is also obvious. Conversely, PCMark and H2BenchW do have patterns where array performance grows in proportion to linear speed. So things are not as simple and unambiguous as some might like.

At first, I wanted to discuss a chart with the overall performance of arrays, averaged over all application tests (PCMark + H2BenchW + NASPT + ATTO), that is, this one:

However, there is not much to discuss here: we see that the behavior of arrays on the LSI SAS9260 controller in tests emulating particular applications can vary dramatically depending on the scenario. It is therefore better to draw conclusions about the benefit of a particular configuration from the tasks you actually intend to run. And one more professional tool can help significantly here: synthetic IOmeter patterns that emulate one load or another on the storage system.

Tests in IOmeter

In this case we will omit discussion of the numerous patterns that painstakingly measure speed as a function of access block size, percentage of writes, percentage of random accesses, and so on. That is, in essence, pure synthetics yielding little practical information and of interest mainly in theory, and we have already clarified the main practical points regarding the "physics" above. It is more important to focus on patterns that emulate real work: servers of various types, plus file operations.

To emulate servers such as a file server, web server, and database server, we used the well-known patterns of the same names proposed in their time by Intel and StorageReview.com. For all cases we tested the arrays with command queue depths (QD) from 1 to 256, doubling at each step.

In the Database pattern, which performs random disk accesses in 8 KB blocks across the entire array, one can observe a significant advantage for the arrays without parity (that is, RAID 0 and 1) at a command queue depth of 4 or higher, while all parity-checked arrays (RAID 5 and 6) demonstrate very similar performance (despite the twofold difference between them in linear access speed). The situation is easily explained: all the parity arrays showed similar values in the average-random-access-time tests (see the diagram above), and it is this parameter that mainly determines performance in this test. It is interesting that the performance of all arrays grows almost linearly with command queue depth up to 128, and only at QD=256 can one see hints of saturation in some cases. The maximum performance of the parity arrays at QD=256 was about 1100 IOps (operations per second), that is, the LSI SAS2108 processor spends less than 1 ms processing each 8 KB portion of data (about 10 million single-byte XOR operations per second for RAID 6; and, of course, the processor also performs other I/O and cache tasks in parallel).
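The "less than 1 ms per operation" claim follows directly from the IOPS figure; a quick arithmetic sketch (the variable names are ours):

```python
# Sanity check on the figures quoted above: at ~1100 IOps the controller
# completes one 8 KB parity I/O roughly every 1/1100 of a second, and the
# resulting random-access throughput is modest compared to linear speeds.

iops = 1100
service_ms = 1000 / iops               # ~0.91 ms per 8 KB operation
throughput_mb_s = iops * 8 / 1024      # ~8.6 MB/s of random 8 KB traffic

print(round(service_ms, 2), round(throughput_mb_s, 1))   # 0.91 8.6
```

The second number also illustrates why random small-block server loads are judged in IOps rather than MB/s: even a fast parity array moves only single-digit megabytes per second under this pattern.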

In the File Server pattern, which uses blocks of various sizes for random read and write accesses across the array's entire volume, we observe a picture similar to the Database one, with the difference that here the five-disk parity arrays (RAID 5 and 6) noticeably outperform their 4-disk counterparts while demonstrating almost identical performance to each other (about 1200 IOps at QD=256)! Apparently, adding a fifth drive on the second of the controller's two 4-lane SAS ports somehow optimizes the computational load on the processor (through the I/O operations?). It might be worth comparing 4-disk arrays with the drives connected in pairs to the controller's different Mini-SAS connectors in order to identify the optimal configuration for arrays on the LSI SAS9260, but that is a task for another article.

In the web server pattern, where, according to the intention of its creators, there are no disk write operations as a class (and hence the calculation of XOR functions for writing), the picture becomes even more interesting. The fact is that all three five-disk arrays from our set (RAID 0, 5 and 6) show identical performance here, despite the noticeable difference between them in terms of linear reading and parity calculations! By the way, the same three arrays, but of 4 disks, are also identical in speed to each other! And only RAID 1 (and 10) falls out of the picture. Why this happens is difficult to judge. It is possible that the controller has very efficient algorithms for selecting "good drives" (that is, those of five or four drives from which the necessary data comes first), which in the case of RAID 5 and 6 increases the likelihood of data arriving from the platters earlier, preparing the processor in advance for necessary calculations (think of the deep command queue and the large DDR2-800 buffer). And this can ultimately compensate for the delay associated with XOR calculations and equalize them in “chance” with “simple” RAID 0. In any case, the LSI SAS9260 controller can only be praised for its extremely high results (about 1700 IOps for 5-disk arrays with QD=256) in the Web Server pattern for arrays with parity. Unfortunately, the fly in the ointment was the very poor performance of the two-disk “mirror” in all these server patterns.

The Web Server results are echoed by our own pattern, which emulates random reading of small (64 KB) files across the entire array space.

Again, the results cluster into groups: all 5-disk arrays are identical in speed and lead our "race"; the 4-disk RAID 0, 5 and 6 are likewise indistinguishable from one another in performance; and only the mirrored arrays stand apart (incidentally, the 4-disk mirror, that is, RAID 10, is faster than all other 4-disk arrays, apparently thanks to the same "best drive" selection algorithm). We emphasize that these regularities hold only at large command queue depths; at a small queue (QD=1-2), the situation and the leaders can be completely different.

Everything changes when servers work with large files. With today's "heavier" content and newer "optimized" operating systems such as Windows 7, Windows Server 2008, etc., working with megabyte-sized files and 1 MB data blocks is becoming increasingly important. Here our new pattern, which emulates random reading of 1 MB files across the entire disk (details of the new patterns will be described in a separate article on methodology), comes in handy for a fuller assessment of the server potential of the LSI SAS9260 controller.

As you can see, the 4-disk mirror (RAID 10) here leaves the rest no hope, clearly dominating at every command queue depth. Its performance also grows linearly with queue depth at first, but at QD=16 it saturates (at about 200 MB/s). A little "later" (at QD=32) performance saturates for the slower arrays in this test, among which "silver" and "bronze" go to the RAID 0 arrays, while the parity arrays end up as outsiders, losing even to the two-disk RAID 1, which performs unexpectedly well. This leads us to conclude that even on reads, the XOR computational load on the LSI SAS2108 processor is very burdensome when working with large, randomly placed files and blocks, and for RAID 6, where it effectively doubles, sometimes even exorbitant: the performance of such arrays barely exceeds 100 MB/s, that is, 6-8 times lower than with linear reading! The "redundant" RAID 10 is clearly the more profitable choice here.
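The XOR load discussed above can be illustrated with a minimal sketch. This is purely illustrative (our own example, not the controller's firmware): the SAS2108 computes parity in hardware, and RAID 6 adds a second syndrome Q computed over GF(2^8), roughly doubling the work relative to the single XOR parity shown here.

```python
# Minimal illustration of RAID 5 single parity: P is the byte-wise XOR
# of the data strips, so any one lost strip can be rebuilt from the rest.
# (Illustrative sketch only; real controllers do this in hardware, and
# RAID 6 adds a second, Galois-field syndrome on top of this.)

def xor_parity(strips: list[bytes]) -> bytes:
    """Byte-wise XOR of equal-length strips."""
    parity = bytearray(len(strips[0]))
    for strip in strips:
        for i, b in enumerate(strip):
            parity[i] ^= b
    return bytes(parity)

def rebuild(surviving: list[bytes], parity: bytes) -> bytes:
    """XOR of the parity with all surviving strips recovers the lost one."""
    return xor_parity(surviving + [parity])

data = [b"\x01\x02", b"\x10\x20", b"\x0f\x0f"]  # three data strips
p = xor_parity(data)
# Simulate losing the middle strip and recovering it:
assert rebuild([data[0], data[2]], p) == data[1]
```

Every full-stripe write must recompute P (and, for RAID 6, Q as well), which is why write-heavy and large-block patterns stress the controller's processor so much.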

With random writes of small files, the picture again differs strikingly from what we saw earlier.

Here the performance of the arrays is practically independent of command queue depth (evidently the huge cache of the LSI SAS9260 controller and the sizable caches of the hard drives themselves are at work), but it changes dramatically with array type! The undisputed leaders are the processor-"undemanding" RAID 0 arrays, with "bronze", at more than a twofold deficit to the leader, going to RAID 10. All parity arrays form a very tight group together with the two-disk mirror, trailing the leaders threefold. Yes, this is definitely a heavy load on the controller's processor, but frankly, I did not expect such a "failure" from the SAS2108. Sometimes even a software RAID 5 on a "chipset" SATA controller (with Windows caching and parity computed by the host CPU) can work faster... Still, the controller steadily delivers "its" 440-500 IOps; compare this with the average-write-access-time diagram at the beginning of the results section.
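To relate these IOps figures to throughput, a rough back-of-the-envelope conversion helps. We assume here (our assumption, for illustration) that every operation transfers a full 64 KB block, the small-file size used in these patterns:

```python
# Rough conversion from IOps to MB/s, assuming each operation moves a
# full 64 KB block (our illustrative assumption for the small-file patterns).
BLOCK_KB = 64

def iops_to_mbps(iops: float, block_kb: int = BLOCK_KB) -> float:
    """Throughput in MB/s for a given IOps rate and block size in KB."""
    return iops * block_kb / 1024

for iops in (440, 500):
    print(f"{iops} IOps @ {BLOCK_KB} KB -> {iops_to_mbps(iops):.2f} MB/s")
```

Under that assumption, 440-500 IOps corresponds to only about 27-31 MB/s, which makes the gap to RAID 0 on small random writes easy to appreciate.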

Switching to random writes of large 1 MB files raises the absolute speeds (for RAID 0, almost to the values seen for random reads of such files, that is, 180-190 MB/s), but the overall picture remains much the same: parity arrays are many times slower than RAID 0.

The RAID 10 curve is curious: its performance drops slightly as command queue depth increases, an effect none of the other arrays show. The two-disk mirror again looks modest here.

Now let's look at patterns in which reads and writes occur in equal proportion. Such loads are typical, in particular, of some video servers, of active copying/duplication/backup of files within the same array, and of defragmentation.

First, 64 KB files accessed randomly across the entire array.

Here, some similarity with the results of the DataBase pattern is obvious, although the absolute speeds of arrays are three times higher, and even with QD=256, some performance saturation is already noticeable. A higher (compared to the DataBase pattern) percentage of write operations in this case leads to the fact that arrays with parity and a two-disk “mirror” become obvious outsiders, significantly inferior in speed to RAID 0 and 10 arrays.

When switching to 1 MB files, this picture generally persists, although absolute speeds roughly triple and RAID 10 becomes as fast as the 4-disk stripe, which is good news.

The last pattern in this article is sequential (as opposed to random) reading and writing of large files.

Here many arrays manage to accelerate to very decent speeds in the region of 300 MB/s. And although the gap between the leader (RAID 0) and the outsider (the two-disk RAID 1) remains more than twofold (note that for purely linear reads or writes this gap is fivefold!), RAID 5's place in the top three, and the improved showing of the other XOR arrays, cannot fail to be encouraging. After all, judging by the list of applications LSI itself gives for this controller (see the beginning of the article), many target tasks will use exactly this access pattern. That is definitely worth keeping in mind.

In conclusion, here is a final diagram averaging the results of all the IOmeter test patterns above (geometrically over all patterns and command queues, without weighting). Curiously, if the results within each pattern are instead averaged arithmetically with weights of 0.8, 0.6, 0.4 and 0.2 for command queues of 32, 64, 128 and 256 respectively (which roughly reflects how often such queue depths occur in real drive workloads), then the final normalized performance index across all patterns coincides with the geometric mean to within 1%.
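The two averaging schemes compared above can be sketched as follows. The IOps figures here are invented purely for illustration; the weights 0.8/0.6/0.4/0.2 for QD=32/64/128/256 are the ones named in the text:

```python
import math

# Hypothetical per-pattern results at four queue depths (figures invented
# purely to illustrate the two averaging schemes compared in the text).
results = {32: 900.0, 64: 1100.0, 128: 1200.0, 256: 1250.0}
weights = {32: 0.8, 64: 0.6, 128: 0.4, 256: 0.2}

# Scheme 1: plain geometric mean over all queue depths (no weights).
geo = math.prod(results.values()) ** (1 / len(results))

# Scheme 2: weighted arithmetic mean, emphasizing shallower queues.
wam = sum(weights[q] * v for q, v in results.items()) / sum(weights.values())

print(f"geometric mean: {geo:.0f} IOps, weighted mean: {wam:.0f} IOps")
```

Because the weighting favors the shallower queues, where performance is lower, the weighted mean lands close to the geometric mean, which is itself pulled below the arithmetic average; this is consistent with the within-1% agreement the article reports.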

So, the overall average across our IOmeter patterns shows there is no escaping the underlying "physics and math": RAID 0 and 10 are clearly in the lead. For the parity arrays, no miracle occurred: although the LSI SAS2108 processor demonstrates decent performance in some cases, on the whole it cannot bring such arrays up to the level of a simple "stripe". It is interesting, meanwhile, that the 5-disk configurations clearly gain over the 4-disk ones. In particular, the 5-disk RAID 6 is unequivocally faster than the 4-disk RAID 5, although in terms of "physics" (random access time and linear access speed) they are practically identical. The two-disk mirror also disappointed (on average it is equivalent to a 4-disk RAID 6, even though a mirror requires no XOR calculations at all). Then again, a simple mirror is obviously not the target array for a fairly powerful 8-port SAS controller with a large cache and a powerful onboard processor. :)

Price Information

The LSI MegaRAID SAS 9260-8i 8-port SAS controller with its full bundle is offered at around $500, which can be considered quite attractive; its simplified 4-port counterpart is even cheaper. A more precise current average retail price of the device in Moscow at the time you read this article:

LSI SAS 9260-8i — $571
LSI SAS 9260-4i — $386

Conclusion

Summing up, we will not risk giving one-size-fits-all recommendations on the 8-port LSI MegaRAID SAS9260-8i controller. Everyone should draw their own conclusions about whether to use it, and which arrays to configure with it, strictly from the class of tasks they intend to run. In some cases (on some tasks) this inexpensive "megamonster" can show outstanding performance even on arrays with double parity (RAID 6 and 60), while in other situations the speed of its RAID 5 and 6 clearly leaves much to be desired, and the only (almost universal) salvation is a RAID 10 array, which can be organized with nearly the same success on cheaper controllers. That said, it is often thanks to the processor and cache memory of the SAS9260-8i that a RAID 10 array here behaves no slower than a stripe of the same number of disks while providing high reliability. What you should definitely avoid with the SAS9260-8i are the two-disk mirror and the 4-disk RAID 5 and 6: these are clearly suboptimal configurations for this controller.

Thanks to Hitachi Global Storage Technologies
for hard drives provided for testing.
