RobotsNeverDie Posted November 30, 2024

Terms I suggest you understand before reading this: Network Interface Card (NIC), PCIe, Ethernet.

I use my main PC for video editing. Like most people, I only have a 1 Gbit port on my PC and NAS, and all my local networking gear (switches, routers) also only supports 1 Gbit. This wasn't cutting it for me when transferring video files back and forth from my NAS. The 1 Gbit connection for the rest of my house was fine, since it's plenty fast for the rest of my devices; their most intensive use would be 4K HDR Plex streams or web browsing.

Just as Ethernet is the most popular protocol for wired network connections today, there is a technology called InfiniBand, a networking protocol heavily used in enterprise gear. Mellanox (now owned by Nvidia) has a product line called ConnectX; it's what they call their InfiniBand gear. Currently ConnectX-8 is their latest iteration, supporting 800 Gbit. Since this is the latest tech, those cards are still very expensive. The older generations get dumped on eBay in bulk when corporations upgrade their systems. The one I use is ConnectX-3, a line that supports up to 56 Gbit, though the two cards I purchased support up to 40 Gbit. These cards are PCIe 3.0 and use an x8 slot. They will fit in an x4 slot but will be limited to about 32 Gbit, as each PCIe 3.0 lane provides roughly 8 Gbps. The specific cards I run are the dual-port Mellanox MCX354-QCBT.

Normally with fiber optic cables you would need a transceiver. These are what plug into the ports on fiber optic networking gear to convert the signals into light. The reason they are not just built into networking gear is that they come in different strengths depending on cable length. If you use a transceiver made for distances over 14 miles on a 100 ft cable, you'll burn out the transceiver on the other end. Transceivers also come in different form factors to fit different ports; the Mellanox cards I linked above use a QSFP+ port. The form factors are not interchangeable: SFP with SFP, SFP+ with SFP+, QSFP+ with QSFP+, QSFP28 with QSFP28.

Now, my NAS and my PC are in the same room, so I can simplify my solution by using a direct attach cable (DAC). This is a cable that already has transceivers on the ends and plugs directly into my ConnectX-3 cards. DACs use copper wire instead of fiber optics, so they are limited to about 20 feet. Any longer than that and you will need transceivers and a fiber optic cable. This is the DAC I used, but you can find cheaper options in varying lengths.

Once I get my new drives to build my HexOS NAS, I will write an actual how-to guide on setting everything up in Windows and HexOS, as you need to manually set the IP addresses for these two cards to be able to talk to each other over the DAC. I have had this setup for a little over a year now, so you may be able to find faster NICs, such as ConnectX-4 cards.

Additional info for more advanced users: on some of these ConnectX cards you can change the network protocol used by the ports. My specific cards can run in InfiniBand or Ethernet mode. They support 10GbE, so with a QSFP+ to RJ45 transceiver or media converter you can connect them to your 10 Gb Ethernet switch.
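As a rough illustration of the "manually set the IP addresses" step, here is a minimal Python sketch for sanity-checking the point-to-point link once both cards have static addresses. The 10.0.0.1 / 10.0.0.2 addresses and port 5201 are hypothetical examples, not values from the post, and a single Python TCP stream won't saturate 40 Gbit; a dedicated tool such as iperf3 is the usual way to measure the real ceiling.

```python
# Minimal point-to-point throughput check over the DAC link.
# Assumes the two ConnectX-3 ports have hand-assigned static IPs,
# e.g. 10.0.0.1 on the NAS and 10.0.0.2 on the PC (hypothetical).
import socket
import sys
import time

PORT = 5201               # arbitrary test port (assumption)
CHUNK = 4 * 1024 * 1024   # 4 MiB per send/recv call
DURATION = 10             # seconds the sender keeps pushing data

def receiver(bind_ip: str) -> None:
    """Run on one machine: accept a connection and discard incoming data."""
    with socket.create_server((bind_ip, PORT)) as srv:
        conn, peer = srv.accept()
        total, start = 0, time.time()
        with conn:
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                total += len(data)
        elapsed = time.time() - start
        print(f"received {total / 1e9:.2f} GB from {peer[0]} "
              f"at {total * 8 / elapsed / 1e9:.2f} Gbit/s")

def sender(target_ip: str) -> None:
    """Run on the other machine: stream zero-filled buffers for DURATION seconds."""
    payload = b"\x00" * CHUNK
    total, start = 0, time.time()
    with socket.create_connection((target_ip, PORT)) as conn:
        while time.time() - start < DURATION:
            conn.sendall(payload)
            total += len(payload)
    elapsed = time.time() - start
    print(f"sent {total * 8 / elapsed / 1e9:.2f} Gbit/s over {elapsed:.1f} s")

if __name__ == "__main__":
    # usage: python linktest.py recv 10.0.0.1   (on the NAS)
    #        python linktest.py send 10.0.0.1   (on the PC)
    mode, ip = sys.argv[1], sys.argv[2]
    receiver(ip) if mode == "recv" else sender(ip)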
Theo Posted November 30, 2024

@RobotsNeverDie Stupid question, but wouldn't using Thunderbolt 4/5 simplify all this, especially as your systems are in the same room?
RobotsNeverDie (Author) Posted November 30, 2024

1 hour ago, Theo said:
@RobotsNeverDie Stupid question, but wouldn't using Thunderbolt 4/5 simplify all this, especially as your systems are in the same room?

That depends. Do you already have two Thunderbolt-capable systems? Then yes. If you need to buy Thunderbolt add-on cards, then no. It will be significantly more expensive and essentially the same amount of work (install 2 PCIe cards and plug in 1 cable).
Theo Posted November 30, 2024

My mistake. When I looked before, the PCIe cards seemed pretty cheap; just realised they were only 20 Gbps for ~$20.
Guentha Posted December 1, 2024

ConnectX-3s are awesome cards. I use them in my cluster: 2 dual-port cards in each server, bonded into a 160 Gb interface to a pair of SX6036 switches. I run Ethernet mode because it's simpler and I can't actually use that much bandwidth, but the cluster likes speedy redundant links. Who would have thought 40 Gb would be cheaper than 10 Gb? Pretty sure I have spent more on DAC cables than on cards and switches.

Not all cards are equal: there are IB cards, Ethernet cards, and VPI cards.
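For anyone reproducing a bonded setup like the one described above, here is a small Python sketch that reads the Linux bonding driver's status file to confirm the mode and each member link's state and speed. It assumes a Linux host with the bonding driver loaded and a bond named "bond0" (hypothetical; adjust to your interface name), and only summarizes what the kernel already reports.

```python
# Summarize a Linux bond from /proc/net/bonding/<name>.
# Assumes the bonding driver is in use; "bond0" is a hypothetical name.
from pathlib import Path

def bond_summary(bond: str = "bond0") -> None:
    """Print bonding mode plus link state and speed for each slave interface."""
    text = Path("/proc/net/bonding", bond).read_text()
    wanted = ("Bonding Mode", "Slave Interface", "MII Status", "Speed")
    for line in text.splitlines():
        if line.split(":")[0].strip() in wanted:
            print(line.strip())

if __name__ == "__main__":
    bond_summary()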
TayschrennSedai Posted December 1, 2024

IMHO, if you're running those speeds you might as well be running iSCSI or go with an FC connection instead of SMB. But that's just my two cents.
Axel Becker Posted January 22

@RobotsNeverDie have you already written the guide on how to install the Mellanox cards and set up the IPs? I am a total noob and could really use some guidance. I have the ConnectX-3 Pro in my NAS and my Windows PC.