Micron Launches 9400 NVMe Series: U.3 SSDs for Data Center Workloads
by Ganesh T S on January 9, 2023 9:20 AM EST - Posted in
- SSDs
- Storage
- Micron
- Enterprise SSDs
- 176-layer
Micron is taking the wraps off their latest data center SSD offering today. The 9400 NVMe Series builds upon the success of Micron's third-generation 9300 series introduced back in Q2 2019. The 9300 series adopted the U.2 form factor with a PCIe 3.0 x4 interface and utilized the company's 64L 3D TLC NAND. With a maximum capacity of 15.36 TB, it matched the highest-capacity HDDs of the time on raw storage (with obviously much higher performance numbers). Over the past couple of years, the data center has moved towards PCIe 4.0 and U.3 in a bid to keep up with performance requirements and unify NVMe, SAS, and SATA support. With these in mind, Micron is releasing the 9400 NVMe series of U.3 SSDs with a PCIe 4.0 x4 interface using their now-mature 176L 3D TLC NAND. Increased capacity per die also enables Micron to offer 2.5" U.3 drives in capacities of up to 30.72 TB, effectively doubling capacity per rack over the previous generation.
Similar to the 9300 NVMe series, the 9400 NVMe series is also optimized for data-intensive workloads and comes in two versions - the 9400 PRO and 9400 MAX. The Micron 9400 PRO is optimized for read-intensive workloads (1 DWPD), while the Micron 9400 MAX is meant for mixed use (3 DWPD). The maximum capacity points are 30.72 TB and 25.60 TB respectively. The specifications of the two drive families are summarized in the table below.
Micron 9400 NVMe Enterprise SSDs

| | 9400 PRO | 9400 MAX |
|---|---|---|
| Form Factor | U.3 2.5" 15mm | U.3 2.5" 15mm |
| Interface | PCIe 4.0 x4, NVMe 1.4 | PCIe 4.0 x4, NVMe 1.4 |
| Capacities | 7.68 TB / 15.36 TB / 30.72 TB | 6.4 TB / 12.8 TB / 25.6 TB |
| NAND | Micron 176L 3D TLC | Micron 176L 3D TLC |
| Sequential Read | 7000 MBps | 7000 MBps |
| Sequential Write | 7000 MBps | 7000 MBps |
| Random Read (4 KB) | 1.6M IOPS (7.68 TB and 15.36 TB), 1.5M IOPS (30.72 TB) | 1.6M IOPS (6.4 TB and 12.8 TB), 1.5M IOPS (25.6 TB) |
| Random Write (4 KB) | 300K IOPS | 600K IOPS (6.4 TB and 12.8 TB), 550K IOPS (25.6 TB) |
| Power (Operating) | 14-21 W (7.68 TB), 16-25 W (15.36 TB), 17-25 W (30.72 TB) | 14-21 W (6.4 TB), 16-24 W (12.8 TB), 17-25 W (25.6 TB) |
| Power (Idle) | ? W | ? W |
| Write Endurance | 1 DWPD | 3 DWPD |
| Warranty | 5 years | 5 years |
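To put the endurance ratings in perspective, converting DWPD into total bytes written over the 5-year warranty is simple arithmetic. A quick illustrative sketch follows, using the figures from the table above; Micron's official TBW ratings may be rounded differently.

```python
# Rough conversion of DWPD ratings into total lifetime writes over the
# 5-year warranty period. Figures come from the spec table above; Micron's
# official TBW numbers may differ slightly due to rounding.

def lifetime_writes_pb(capacity_tb: float, dwpd: float, years: int = 5) -> float:
    """Total petabytes written if the drive absorbs `dwpd` full writes every day."""
    return capacity_tb * dwpd * 365 * years / 1000

print(f"9400 PRO 30.72TB @ 1 DWPD: {lifetime_writes_pb(30.72, 1):.1f} PB")  # ~56 PB
print(f"9400 MAX 25.6TB  @ 3 DWPD: {lifetime_writes_pb(25.6, 3):.1f} PB")   # ~140 PB
```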
The 9400 NVMe SSD series is already in volume production for AI / ML and other HPC workloads. The move to a faster interface, along with higher-performance NAND, enables a 77% improvement in random IOPS per watt over the previous generation. Micron is also claiming better all-round performance across a variety of workloads compared to competing enterprise SSDs.
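The IOPS-per-watt metric itself is easy to derive from the spec table. The sketch below only shows how the metric is computed for the 9400; the 9300-series baseline figures are not reproduced in this article, so this is not a verification of the 77% claim.

```python
# Illustrative IOPS-per-watt calculation from the spec table above.
# The 9300-series baseline isn't listed here, so this only shows how the
# metric is derived, not a verification of the 77% generational claim.

def iops_per_watt(random_read_iops: int, power_w: float) -> float:
    return random_read_iops / power_w

# 9400 PRO 7.68TB: 1.6M random read IOPS over a 14-21 W operating range
print(f"{iops_per_watt(1_600_000, 21):,.0f} - {iops_per_watt(1_600_000, 14):,.0f} IOPS/W")
# -> roughly 76,190 - 114,286 IOPS/W depending on load
```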
The Micron 9400 PRO goes up against the Solidigm D7-P5520, Samsung PM1733, and the Kioxia CM6-R. The Solidigm D7-P5520 is handicapped by lower capacity points (due to its use of 144L TLC) and trails the 9400 PRO in everything but sequential reads. The Samsung PM1733 also tops out at 15.36 TB, with performance numbers similar to those of the Solidigm model. The Kioxia CM6-R is the only other U.3 SSD with capacities up to 30.72 TB; however, its performance numbers lag well behind the 9400 PRO's across the board.
The Micron 9400 MAX has competition from the Solidigm D7-P5620, Samsung PM1735, and the Kioxia CM6-V. Except for sequential reads, the Solidigm D7-P5620 lags the 9400 MAX in both performance and capacity points. The PM1735 is only available in an HHHL AIC form factor and uses a PCIe 4.0 x8 interface, so despite its 8 GBps sequential read performance, it can't be deployed in a manner similar to the 9400 MAX. The Kioxia CM6-V tops out at 12.8 TB and has lower performance numbers than the 9400 MAX.
Despite not being the first to launch 32TB-class SSDs into the data center market, Micron has ensured that their eventual offering provides top-tier performance across a variety of workloads compared to the competition. We hope to present some hands-on performance numbers for the SSD in the coming weeks.
18 Comments
RZLNIE - Tuesday, January 10, 2023 - link
I suggest the Micron 9400 PRO and MAX should go up against the Kioxia CM7-R and CM7-V, whose briefs haven't been released yet. The performance of the Kioxia CM6-R and CM6-V is worse than that of the Micron 9400 series. Moreover, they run very hot. And predictably, Memblaze will introduce their new product soon after Micron's.

dwillmore - Tuesday, January 10, 2023 - link
I was part way through the article before I realized these were just flash drives, not crosspoint. It was the capacity figures that triggered the realization. No way someone's going to put 30.7TB of that in a drive. You can put stuff in space for less than that would cost. And who would want a tiny U.2 PCI-E v4x4 'straw' to drink it through?

ganeshts - Tuesday, January 10, 2023 - link
This is for the datacenter market. Very different requirements. The key driver here is the IOPS. Think of a server supporting 50 different engineers doing a project compile. Lots of small files being read simultaneously by different users. All of them can be served from this one single disk. Sequential bandwidth is only one part of the story. Random IOPS is key in a lot of other scenarios - databases, ML training, OLTP, etc.

dwillmore - Tuesday, January 10, 2023 - link
Wow, what was your hint? Was it the U.3 profile? Maybe the title of the article? What I'm referring to is the ratio between drive size and either total daily data written or write speed. In other terms, how long--at full speed--does it take to exceed the DWPD or to fill a drive? The larger the drive and the slower the interface, the worse the ratio.
Not every workload is the same in the datacenter, you know. You need to make sure you're using hardware that's appropriate for your needs. So, understanding where these drives fit in that arena is important. Hence my "large drive, small interface" comment. That tells us where in the spectrum of performance these drives sit.
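For a rough sense of that ratio, here is an illustrative calculation using the rated figures from the spec table, assuming sustained sequential writes at full speed:

```python
# Illustrative only: how quickly sustained sequential writes at the rated
# 7 GB/s use up one day's DWPD budget, using the spec-table figures.

def hours_to_exhaust_daily_budget(capacity_tb: float, dwpd: float,
                                  write_gb_per_s: float = 7.0) -> float:
    daily_budget_gb = capacity_tb * 1000 * dwpd
    return daily_budget_gb / write_gb_per_s / 3600

# 9400 MAX 25.6TB at 3 DWPD: the daily budget goes in ~3 hours of flat-out writes
print(f"{hours_to_exhaust_daily_budget(25.6, 3):.1f} h")   # ~3.0 h
# For the 1 DWPD 9400 PRO, the daily budget is exactly one full-drive write
```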
schujj07 - Tuesday, January 10, 2023 - link
Don't forget that these will be used in a SAN, so the writes will be spread across more drives. According to VMware's vSAN documentation, even a lowly 1 DWPD drive could be a write caching drive if the capacity is high enough. I can tell you that at 7.68TB this would fall into their Performance Class F (2nd highest, needs 350k+ random IOPS for the highest level) and Endurance Class D (highest level, for 7.3PB+ write endurance). Basically it would qualify for being a write cache drive for the highest performance vSANs. Also, you will run into storage network bottlenecks before PCIe bus bottlenecks in an array. Basically, if you have 24x PCIe 4 SSDs you would need quad 400 Gbps connections to be able to take all the possible storage bandwidth.
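A rough sketch of the arithmetic behind that last comparison, assuming all 24 drives stream sequentially at the rated 7 GB/s at once:

```python
# Back-of-envelope check of the "24x PCIe 4.0 SSDs vs quad 400 Gbps" comparison.
# Assumes every drive streams at its rated 7 GB/s simultaneously (worst case).

drives = 24
per_drive_gb_per_s = 7.0                          # rated sequential throughput
aggregate_gbps = drives * per_drive_gb_per_s * 8  # bytes -> bits

print(f"{aggregate_gbps:.0f} Gbps -> {aggregate_gbps / 400:.1f}x 400 Gbps links")
# -> 1344 Gbps, i.e. roughly 3.4 (so you'd provision four) 400 Gbps uplinks
```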
mode_13h - Wednesday, March 15, 2023 - link

> The larger the drive and the slower the interface, the worse the ratio.

Okay, but the sequential write speed is 7 GB/s and the max capacity is 30700 GB. That means you can fill the largest drive in just 1.22 hours, which is massively better than the *days* it takes to fill a 20 TB enterprise hard drive. So, what's the problem with these SSDs?
Back before NVMe, a SATA drive you could fill in the same amount of time would be just 2.6 TB, and I'm pretty sure there were already enterprise drives bigger than that. You can currently buy SATA drives in sizes of 4 TB or bigger, in fact.
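For what it's worth, those numbers check out. A quick sketch using the spec table's 7 GB/s figure and a nominal ~600 MB/s for SATA III:

```python
# Quick check of the fill-time comparison in the comment above.
# Assumes sustained writes at the rated speed; SATA is taken at its nominal ~600 MB/s.

nvme_capacity_gb = 30_720    # 30.72 TB Micron 9400 PRO
nvme_write_gb_per_s = 7.0    # rated sequential write

fill_seconds = nvme_capacity_gb / nvme_write_gb_per_s
print(f"Fill time: {fill_seconds / 3600:.2f} h")                              # ~1.22 h

sata_gb_per_s = 0.6          # SATA III ceiling
print(f"SATA-era equivalent: {fill_seconds * sata_gb_per_s / 1000:.2f} TB")   # ~2.63 TB
```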
wojtow - Tuesday, January 10, 2023 - link
The pricing is not unreasonable for what you get. As for bandwidth, in the enterprise you're also likely to put the drives into a RAID array, and your bandwidth is aggregated across all the drives in the array for sufficiently busy workloads.

schujj07 - Tuesday, January 10, 2023 - link
You will be limited by your network speed before the bus speed.

Threska - Wednesday, January 11, 2023 - link
Assuming it crosses the network. For those with a monster server those numbers would be impressive.

https://youtu.be/4TwfM3s2Wdw
schujj07 - Wednesday, January 11, 2023 - link
In this case it would be your storage network. 99.9% of all applications are run either on a VM or in a container. It is getting rarer by the day that a company runs even their largest DBs on a physical appliance. Doing it in a VM is the better solution.