I just wanted to share a few things I've learned about the lspci command, as it's helped me understand how my NVMe storage devices connect. The build is based on an Asus Prime H310T R2.0 motherboard with an i5-8400, 32GB of DDR4-2666, and a pair of SATA drives for the OS. I currently use this as a working 'temporary backup' location, to host my Steam library, to run PiHole, and to run a Windows VM (hosting legacy game servers!!!). The whole thing idles at 19W to 26W with just the PiHole and Windows VM in use. I've also added a 2.5 GbE NIC in the 'WiFi' slot, which allows quick access to my Steam library, while the quite probably more reliable onboard 1 GbE port is used for management and other 'services'...

The main storage comes from 4 x WD NVMe drives mounted on the PCIe card (2 on each side). The card is a SU-EM5204(A2), about which a quick 'google' reveals a variety of conflicting information, so let's see what we can learn from the lspci command...

The card appears to use an ASMedia ASM1806, which (according to the ASMedia website) is a PCIe Gen2 switch with 2 upstream and 4 downstream ports. The 'upstream' port reports that the link is capable (LnkCap) of 5GT/s (PCIe gen 2) over 8 lanes (weird?), however the link status (LnkSta) shows that it is connected over only 2 lanes, noting that the link width is 'downgraded'. This is to be expected, as I can see from the physical interface that the card is only wired for an x2 connection, and I also know that the motherboard only presents 2 lanes to the NVMe port (where the PCIe riser is connected).

If I run the same command on the downstream ports, I can see that each one presents and connects 1 PCIe gen 2 lane to its respective NVMe SSD. Looking at one of the NVMe SSDs, we can see that they are capable of PCIe gen 4 (16GT/s) x4 (4 lanes), but are operating at PCIe gen 2 (5GT/s) x1 only.

Putting all this together, I now understand that each drive connects to the PCIe switch at PCIe gen 2 x1, and the switch then connects to the CPU at PCIe gen 2 x2. This means the 4 drives are sharing 10GT/s (less switching overheads) back to the CPU, and as 1 of the 4 drives is for redundancy / parity, only 75% of that is usable data, so I'm probably getting about 7GT/s to the array.

Next I'm going to look at how to actually test the internal performance, right after I've found a way of recovering my data from the HDDs you may have noticed (now disconnected) in the first pic...
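For anyone who wants to poke at their own setup, commands along these lines are what surface the fields mentioned above; the bus address used here (03:00.0) is just a placeholder, so substitute whatever addresses lspci reports on your system:

  # tree view: shows the switch's upstream/downstream ports and the NVMe drives hanging off them
  lspci -tv

  # capability (LnkCap) vs. negotiated link (LnkSta) speed and width for a given device
  sudo lspci -vv -s 03:00.0 | grep -E 'LnkCap:|LnkSta:'

As a rough sanity check on the numbers: PCIe gen 2 carries about 500MB/s of real data per lane after encoding overhead, so the x2 uplink tops out around 1GB/s shared between all four drives, or roughly 750MB/s of 'usable' data once one drive's worth of parity is taken off.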