

I just wanted to share a few things I've learned about the lspci command, as it's helped me understand how my NVMe storage devices connect.

The build is based on an Asus Prime H310T R2.0 motherboard with an i5-8400, 32GB of DDR4-2666, and a pair of SATA drives for the OS.

I currently use this as a working 'temporary backup' location, to host my Steam library, and to run PiHole and a Windows VM (hosting legacy game servers!!!)

The whole thing idles at 19W to 26W with just the PiHole and Windows VM in use.


I've also added a 2.5 GbE NIC in the 'WiFi' slot, which allows quick access to my Steam library, while the quite probably more reliable onboard 1 GbE port is used for management and other 'services'...

 

The main storage comes from 4 x WD NVMe drives mounted on the PCIe card (2 on each side).


The PCIe card is a SU-EM5204(A2), for which a quick 'google' reveals a variety of conflicting information, so let's see what we can learn from the lspci command...
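
If anyone wants to poke at a similar card, the basic listing is the place to start. Something like the below (the grep is just how I'd narrow things down, your device IDs and output will obviously differ):

lspci -nn                      # list every PCI/PCIe device with its [vendor:device] IDs
lspci -nn | grep -i asmedia    # narrow it down to the ASMedia bridge functions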


The card appears to use an ASMedia ASM1806, which (according to the ASMedia website) is a PCIe Gen2 switch with 2 upstream and 4 downstream ports.
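
The tree view is the easiest way to see that switch layout, with one upstream port hanging off the motherboard's x2 link and four downstream ports fanning out to the drives. This is just a sketch of the command, not my exact output:

lspci -tv    # bus topology as a tree, with device names, so the bridge and everything behind it is visible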


The 'upstream' port reports that the link is capable (LnkCap) of 5GT/s (PCIe Gen2) over 8 lanes (weird?), however the link status (LnkSta) shows that it is connected over only 2, noting that the link width is 'downgraded'. This is to be expected, as I can see from the physical interface that the card is only wired for an x2 connection, and I also know that the motherboard only presents 2 lanes to the NVMe port (where the PCIe riser is connected).
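
If you want to pull just the link fields out for a given port, something like this works. Note that 02:00.0 is only a placeholder for whatever address the switch's upstream port enumerates as on your system, and you'll generally need root to read the full capability registers:

sudo lspci -vv -s 02:00.0 | grep -E 'LnkCap|LnkSta'    # capability vs. negotiated link speed/width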


If I run the same command on one of the downstream ports, I can see that each one presents and connects a single PCIe V2 lane to its respective NVMe SSD.


Looking at one of the NVMe SSDs, we can see that it is capable of PCIe V4 (16GT/s) x4 (4 lanes), but is operating at PCIe V2 (5GT/s) over a single lane only.
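
You can also read the negotiated speed and width straight from sysfs without decoding lspci output, for example for the first drive (nvme0 is just an example name, substitute your own device):

cat /sys/class/nvme/nvme0/device/current_link_speed    # negotiated speed, e.g. '5.0 GT/s PCIe'
cat /sys/class/nvme/nvme0/device/current_link_width    # negotiated width, e.g. '1'
cat /sys/class/nvme/nvme0/device/max_link_speed        # what the drive itself is capable of
cat /sys/class/nvme/nvme0/device/max_link_width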

 

Putting all this together, I now understand that each drive connects to the PCIe switch at PCIe V2 x1, and the switch then connects to the CPU at PCIe V2 x2.

This means that the 4 drives are sharing 10GT/s (less switching overheads) back to the CPU, and as 1 of the 4 drives is for redundancy / parity, only 75% of that is usable data, so I'm probably getting about 7GT/s to the array.
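
As a rough sanity check on that (assuming PCIe 2.0's 8b/10b encoding and ignoring protocol and switching overhead, so very much a back-of-the-envelope sketch rather than a benchmark):

awk 'BEGIN {
  link   = 5 * 2 * 0.8 / 8 * 1000    # 5 GT/s x 2 lanes, 8b/10b encoding, converted to MB/s
  usable = link * 0.75               # 1 of the 4 drives holds parity, so ~75% is user data
  printf "uplink ~%.0f MB/s, of which ~%.0f MB/s is user data\n", link, usable
}'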

Next I'm going to look at how to actually test the internal performance, right after I've found a way of recovering my data from the HDDs you may have noticed (now disconnected) in the first pic...

 

11 hours ago, Todd Miller said:

As an aside, is all your storage NVME?  Is it generally accepted that that type of storage is stable enough for a NAS with all the writing and rewriting?  Just curious.

Hi Todd, In brief, yes and yes (I think...).

I was using 2 SATA SSDs for 'quick' storage and 4 SSHDs (500GB HDDs with onboard 8GB flash, yeah they were a thing...) for bulk storage. This was stable enough but not very quick (when running VMs) and the power draw felt a little excessive.

 

I have considered write endurance, but I think that, as with anything, as long as you are aware of what you're building it for, you can select suitable components.

Take, for example, the offerings from WD: the WD Green 1TB claims an endurance rating of 80TBW, whereas the Blue version offers 600 and the Red boasts 2000.

From what I can see this is on par with SATA SSDs (as they essentially use the same or similar flash technology), and I wouldn't even compare it to HDDs, as most of the newer ones I've seen use SMR, which is not suitable for NAS applications no matter how the manufacturer tries to sell it...

If you compare these based on TBW alone (as PCIe link width and speed are irrelevant in this case), then although the Red is double the price of its less reliable counterparts, it's supposedly 25 times more durable than the Green, so 'good value' if the TBW rating is your main consideration.

 

In this version of my build I've used WD SN740 drives, which have an advertised endurance of 200TBW. These drives came from new laptops that were being upgraded, so they didn't cost me much 😉 and are ideal for this proof of concept / learning experience.
