Netgear ReadyNAS 716 Review: 10GBase-T in a Desktop NAS
by Ganesh T S on January 1, 2014 3:00 PM EST - Posted in
- Enterprise
- NAS
- NetGear
- 10G Ethernet
Testbed Setup and Testing Methodology
Evaluation of NAS performance under both single- and multiple-client scenarios is done using the SMB / SOHO NAS testbed we described earlier. Tower / desktop form factor NAS units are usually tested with Western Digital RE drives (WD4000FYYZ). However, the presence of 10GbE on the ReadyNAS 716 meant that SSDs had to be used to bring out the maximum possible performance. Therefore, evaluation of the Netgear RN716X was done by setting up a RAID-5 volume with six OCZ Vector 120 GB SSDs.
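A quick back-of-the-envelope sketch of why SSDs were necessary: the numbers below are rough assumptions (the ~500 MB/s per-drive sequential read figure is a ballpark for SATA SSDs of this class, not a measured value), but they show that a six-drive SSD RAID-5 volume can comfortably out-run even a 10GbE link, so the network rather than the storage sets the ceiling under test.

```python
# Rough, illustrative numbers only; the per-SSD throughput is an assumed ballpark.

NUM_SSDS = 6
SSD_CAPACITY_GB = 120
SSD_SEQ_READ_MBPS = 500            # assumed sequential read per drive, MB/s

usable_capacity_gb = (NUM_SSDS - 1) * SSD_CAPACITY_GB   # RAID-5: one drive's worth goes to parity
array_read_ceiling = NUM_SSDS * SSD_SEQ_READ_MBPS       # reads stripe across all members
ten_gbe_line_rate = 10_000 / 8                          # 10 Gb/s ~= 1250 MB/s, before protocol overhead

print(f"Usable RAID-5 capacity : {usable_capacity_gb} GB")      # 600 GB
print(f"Array read ceiling     : ~{array_read_ceiling} MB/s")   # ~3000 MB/s
print(f"10GbE line rate        : ~{ten_gbe_line_rate:.0f} MB/s")
```

Even with generous allowances for parity computation and SMB overhead, the array can outpace a 10GbE link, which is why the usual RE hard drives were swapped for SSDs in this review.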
AnandTech NAS Testbed Configuration
Motherboard | Asus Z9PE-D8 WS Dual LGA2011 SSI-EEB
CPU | 2 x Intel Xeon E5-2630L
Coolers | 2 x Dynatron R17
Memory | G.Skill RipjawsZ F3-12800CL10Q2-64GBZL (8x8GB) CAS 10-10-10-30
OS Drive | OCZ Technology Vertex 4 128GB
Secondary Drive | OCZ Technology Vertex 4 128GB
Tertiary Drive | OCZ RevoDrive Hybrid (1TB HDD + 100GB NAND)
Other Drives | 12 x OCZ Technology Vertex 4 64GB (Offline in the Host OS)
Network Cards | 6 x Intel ESA I-340 Quad-GbE Port Network Adapter
Chassis | SilverStoneTek Raven RV03
PSU | SilverStoneTek Strider Plus Gold Evolution 850W
OS | Windows Server 2008 R2
Network Switch | Netgear ProSafe GSM7352S-200
Thank You!
We thank the following companies for helping us out with our NAS testbed:
- Thanks to Intel for the Xeon E5-2630L CPUs and the ESA I-340 quad port network adapters
- Thanks to Asus for the Z9PE-D8 WS dual LGA 2011 workstation motherboard
- Thanks to Dynatron for the R17 coolers
- Thanks to G.Skill for the RipjawsZ 64GB DDR3 DRAM kit
- Thanks to OCZ Technology for the two 128GB Vertex 4 SSDs, twelve 64GB Vertex 4 SSDs and the RevoDrive Hybrid
- Thanks to SilverStone for the Raven RV03 chassis and the 850W Strider Gold Evolution PSU
- Thanks to Netgear for the ProSafe GSM7352S-200 L3 48-port Gigabit Switch with 10 GbE capabilities.
Netgear XS712T
Our primary testbed switch, the GSM 7352S, doesn't support 10GBase-T. Its 10GbE ports are SFP+, requiring copper direct-attach cables. We could have opted for SFP+ to 10GBase-T converters, but, keeping in mind the growing popularity of 10GBase-T, a dedicated 10GBase-T switch made more sense. Netgear came forward with the XS712T, a 12-port 10GBase-T switch. The unit also has two SFP+ copper ports to allow stacking / uplinking.
In our testbed, the SFP+ ports on both the GSM 7352S and the XS712T are link-aggregated and connected to each other. The GSM 7352S acts as a DHCP server and provides an IP to the XS712T. The 10GBase-T ports of the NAS were also connected to the XS712T (which acts as a DHCP forwarder), and they obtained IP addresses in the same subnet as the virtual machines connected to the ports of the GSM 7352S. For teaming purposes, link trap and STP mode were enabled, the mode was set to 802.3ad dynamic link aggregation, and the hash mode was set to 'Src/Dest MAC, VLAN, EType, Incoming Port'.
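The hash mode determines which physical member of the aggregated link carries a given frame. Netgear does not publish the exact algorithm, so the sketch below is only a simplified illustration (the XOR-and-modulo scheme, the field handling, and the two-member trunk are assumptions) of how a 'Src/Dest MAC, VLAN, EType, Incoming Port' policy keeps all frames of one flow on a single member while spreading different flows across the trunk.

```python
# Simplified illustration of an 802.3ad-style transmit hash.
# The real switch algorithm is not published; the XOR/modulo scheme below is an assumption.

def lag_member(src_mac: str, dst_mac: str, vlan: int, ethertype: int,
               ingress_port: int, num_members: int = 2) -> int:
    """Pick a LAG member index from the fields named in the switch's hash mode."""
    def mac_bits(mac: str) -> int:
        return int(mac.replace(":", ""), 16)

    key = mac_bits(src_mac) ^ mac_bits(dst_mac) ^ vlan ^ ethertype ^ ingress_port
    return key % num_members  # every frame of a given flow hashes to the same member

# Two clients with different MACs may land on different members, spreading load:
print(lag_member("00:1b:21:aa:bb:01", "a0:21:b7:cc:dd:10", vlan=1,
                 ethertype=0x0800, ingress_port=3))
print(lag_member("00:1b:21:aa:bb:02", "a0:21:b7:cc:dd:10", vlan=1,
                 ethertype=0x0800, ingress_port=4))
```

The practical consequence is that a single client-to-NAS stream never exceeds the speed of one member link; aggregation only helps when multiple flows with differing hash inputs are active, which is exactly the multi-client scenario this testbed exercises.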
24 Comments
Guspaz - Wednesday, January 1, 2014 - link
Yikes, that's a highly questionable decision, to go with btrfs instead of ZFS as the default file system. ZFS has been in production use for seven years now, proven through widespread deployments and available on every *nix platform you can think of, while btrfs is still beta quality (without even an official stable release) and nowhere near feature-competitive with ZFS...

JDG1980 - Wednesday, January 1, 2014 - link
Agreed. This is a full-fledged Xeon PC with ECC RAM, so why not go with ZFS? It would seem to be the obvious choice for a high-quality, time-tested software RAID system.

By the way, it would really be better if you listed the suggested retail price on the first page of reviews along with the other specs. (A quick Google search seems to indicate that the street price is $2500-$3000.)
Runiteshark - Wednesday, January 1, 2014 - link
Probably because it takes a bit more effort to get ZFS running in Linux than btrfs, but not that much. It recently went stable and has been working just fine on a 72-bay Supermicro chassis I have had in test for the past 3 months. All this being said, why didn't they just go with a BSD solution?

nafhan - Thursday, January 2, 2014 - link
While BTRFS has been supported as a root file system in SLE and Oracle Linux since 2012, ZFS is not available from the vendor on either (even though Solaris is owned by Oracle). That's probably it right there.

shodanshok - Friday, January 3, 2014 - link
I agree. While BTRFS is quite stable now, considering the critical role assigned to a filesystem I would go with an FS with a proven track record (and fsck). Moreover, being a CoW filesystem, BTRFS tends to be extremely fragmentation-prone in some circumstances, basically any time a file rewrite is required, for example by a database or a virtual machine (though I think a NAS unit like this is primarily assigned an archiving role).

SirGCal - Wednesday, January 1, 2014 - link
Yup, I have two 8-disk systems myself. One running a hardware LSI controller for RAID 6 and one using ZFS for the same effective protection. Sure, the hardware controller is actually a tiny bit faster at hard reads, but for the $600 price tag, so what. All of my current systems are going to be ZFS. These arrays in a box are interesting until they decide to go with some other pooling system... If there is a real comparable reason and argument for BTRFS instead of ZFS, I'd like to see it.

Runiteshark - Wednesday, January 1, 2014 - link
I tested btrfs recently with a large disk array (read: 45 4TB drives) and the performance was very poor. Ended up going with JFS and shunned XFS because it's not stable in the event of power issues.

shodanshok - Friday, January 3, 2014 - link
Hi,

From my understanding, JFS and JFS2 have been more or less unsupported for some time now.
What problem did you have with XFS? It is designed to manage the exact case you describe: a lot of space spread over a lot of spinning disks. When using XFS, the only two things that can lead to data loss are:
1. no barrier/FUA support in the disk/controller combo
2. an application that rewrites files with truncate and does _not_ use fsync
Case n.1 is common to all filesystems: if your disk lies about cache flushes, then no filesystem can save you. The only thing that can somewhat lessen the risk is journal checksumming, which is implemented in XFS, EXT4 and BTRFS, but I don't know about JFS.
Case n.2 is really an application shortcoming, but the EXT4 and BTRFS choice here is the more sensible one: detect such corner cases and apply a work-around. Anyway, with applications that properly use fsync, XFS is rock stable.
Regards.
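For readers wondering what "rewriting files with truncate without fsync" looks like in practice, here is a minimal sketch (Python used purely for illustration; the file name is hypothetical) contrasting the fragile pattern with the write-temp-file / fsync / atomic-rename approach that stays safe on any of the filesystems discussed above.

```python
import os

def unsafe_rewrite(path: str, data: bytes) -> None:
    # Fragile: opening with "wb" truncates the old contents first; a crash before
    # the new data reaches disk (no fsync) can leave an empty or partial file.
    with open(path, "wb") as f:
        f.write(data)

def safe_rewrite(path: str, data: bytes) -> None:
    # Safer: write a temporary file, flush it to stable storage, then atomically
    # rename it over the original, so either the old or the new version survives.
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())      # force the data out of the OS cache
    os.replace(tmp, path)         # atomic rename on POSIX filesystems

safe_rewrite("settings.conf", b"key=value\n")  # hypothetical file name
```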
Runiteshark - Wednesday, January 1, 2014 - link
On one hand, I'm happy that 10G is slowly becoming more prevalent in the con/prosumer-grade market; however, products like this make my head hurt. The performance that you were able to get out of this host was nothing short of embarrassing, and could have easily been handled by a single gigabit link. I think this primarily stems from vendors still using software RAID without good quality HBAs. You can most certainly have a fantastic software solution that is high performance without a real RAID controller or even a high-end HBA, however it requires you use Ceph or ZFS.

The performance you are seeing out of this is actually very similar to an HP MicroServer that I have running FreeNAS with 2GB of RAM, LAGG gigabit ports, 4 x 4TB 7200rpm Seagates + a 32GB USB3 OS drive; granted, the entire unit cost no more than $1800 and only has 4 slots instead of 6. Without a doubt, if I was going to build something bigger, I'd use a Supermicro X9DR7-TF+ (same as what I use in production, for $800), get a decent chassis and the LSI BBU, and have support for up to 16 drives with 2 10G ports on an Intel X540 chipset, which all together would still be significantly less than this solution, and obviously blow the performance of this out of the water.
hpglow - Wednesday, January 1, 2014 - link
Runiteshark, not good at reading or converting bits to bytes? With some of the tests pushing over 600 MB/sec, a 1G Ethernet port would be saturated more than 4 times over, not including packet overhead. A 1Gb Ethernet port is good for only 125 MB/sec.
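For reference, the arithmetic behind this reply, as a small sketch (line-rate figures only; real-world throughput is lower once Ethernet, IP and SMB overheads are counted):

```python
# Line-rate sanity check for the exchange above (ignores protocol overhead).
gbe_limit_mb_s = 1_000 / 8        # 1 Gb/s  -> 125 MB/s
observed_mb_s = 600               # throughput figure quoted from the review's results

print(observed_mb_s / gbe_limit_mb_s)   # ~4.8, i.e. over 4x what one GbE link can carry
```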