PCIe lanes(?) x16, x8, etc.

  • PCIe lanes(?) x16, x8, etc.

    Does the number of PCIe lanes have any effect on RT performance?
    I see different mobos run their slots at different widths (x16, x8, etc.) once you add more than a couple of cards.

    Anyone know?

    thanks

  • #2
    We are running a node with 4 Titan X cards that sit in x16 slots, but physically the CPU has only 40 PCIe lanes, so in the best case each card is fed with x10 bandwidth.
    It's running great; I don't see any slowdown between 1 card at x16 and 4 cards at a theoretical x10. With 4 cards it scales linearly and is 4 times faster than with 1 card.
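
    To make that arithmetic concrete, here's a tiny back-of-the-envelope sketch (Python, purely illustrative; the 40-lane budget and x16 slots are the figures from this post):

        # Best-case lane budget: 40 CPU PCIe lanes shared across 4 cards
        # means each card gets at most x10, even when sitting in an x16 slot.
        cpu_lanes = 40    # high-end i7/Xeon lane budget mentioned above
        cards = 4
        slot_width = 16   # physical slot width

        per_card = min(slot_width, cpu_lanes // cards)
        print(f"best-case lanes per card: x{per_card}")  # -> x10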

    That said, last year I did some tests with a PCIe extender that drops from x16 to x1, and I did notice a slowdown, but only during the refining stage when the noise threshold kicks in, because a lot of data is sent back and forth.
    I didn't see any slowdown when I told RT to render for a fixed time without any noise threshold, since there the info is sent back and forth in chunks rather than at a tiny constant rate. For example, the same render with a noise threshold took 2m30s over x16 and 3m20s over x1; without the noise threshold, no difference.
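
    Working those numbers out, that's roughly a one-third slowdown at x1, and only when the noise threshold keeps the bus busy (a quick check, purely illustrative):

        # Same render, with noise threshold: 2m30s over x16 vs 3m20s over x1.
        t_x16 = 2 * 60 + 30   # seconds
        t_x1 = 3 * 60 + 20
        print(f"x1 slowdown vs x16: {t_x1 / t_x16 - 1:.0%}")  # -> 33%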

    Now, on an everyday basis, who wouldn't use the noise threshold? It's what makes V-Ray so powerful. But x1 is far from x8, so my guess is that even at x8 you should be pretty good. In the end it also depends on your CPU, as most high-end CPUs today only provide 40 PCIe lanes per CPU, for both i7s and Xeons.
    If you really want to push it with multi-GPU rigs, you should go for dual Xeons, but if you ask me, you'd never see any difference between x8, x10 and x16.

    Stan
    3LP Team



  • #3
    Thanks for the insight.
    I'm looking into PCIe riser cables (I think you mentioned them in another thread).
    I want to put in 4 x Titan X and then 1 more card for the viewport (a Quadro).



  • #4
    We bought an Asus X99 WS motherboard; it has 7 PCIe x16 slots, and we tried to push more than 4 cards through risers, 7 in fact.
    It seems that on the new X99 platform, and especially with Asus boards, risers with more than a 20 cm cable don't work correctly, so for the moment we haven't been able to use them. Just be careful with risers, as you might end up not being able to use them.
    I still have hope though, but it takes time to R&D and we have loads on our plate ATM.
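
    For anyone fighting the same riser issues: one way to spot a riser that has silently negotiated a degraded link is to compare each GPU's current vs. maximum PCIe width and generation. A minimal sketch (Python, assuming an NVIDIA driver with nvidia-smi on the PATH; the query fields are standard nvidia-smi ones):

        import subprocess

        # A healthy x16 riser should report width 16 / 16; a flaky one
        # often comes up as x8, x4 or x1.
        fields = ("index,name,pcie.link.gen.current,pcie.link.gen.max,"
                  "pcie.link.width.current,pcie.link.width.max")
        out = subprocess.run(
            ["nvidia-smi", f"--query-gpu={fields}", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        ).stdout
        for line in out.strip().splitlines():
            print(line)

    Note that some cards drop link width at idle to save power, so run the check while the GPUs are under load.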

    Stan
    3LP Team



  • #5
    Originally posted by 3LP View Post
    but it takes time to R&D and we have loads on our plate ATM
    Yep, I hear you.

    Riser cables seem to be pretty "non-standard" as well, in that they seem to be difficult to find in the usual component shops.



  • #6
    I'd imagine with the right custom case design a 15-20 cm riser should be sufficient? One row of cards plugged into the board, the next row directly above the first on a rail, with the risers going down between the first row of cards.

    I've also been considering options for a GPU-dense system. With a waterblock attached, a Titan X is essentially a 1-slot card, but unfortunately the DVI plug riser and watercooling connectors take it back to 2 slots.

    I'd imagine if you were brutal, the DVI plug could be "removed" (eek) and some custom plumbing might solve the watercooling connector issue.

    If you were being *really* creative, a double-sided waterblock that cools the front of one card and the back of the next could be designed, so you could literally stack the cards into a solid block of GPU powah.

    I'd also imagine the cost of getting custom waterblocks made might just be higher than the cost of the GPUs themselves.
    Last edited by super gnu; 28-08-2015, 12:42 AM.



  • #7
    Hehe, that would be some great engineering, but it would probably be a lot cheaper to just get 2 separate systems with 4 GPUs each. That would give you some extra CPU power at the same time as well.

    As to PCIe risers, they vary a lot in quality, and most issues emerge when using multiple risers close to each other. Are you using shielded or unshielded risers? I would definitely go for shielded ones when clustering GPUs. I would also recommend against placing 4 GPUs on the board directly and using risers in the slots between them; that way the riser cables will block the airflow between cards that are probably already running hot in there.
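
    On the airflow point, it's easy to keep an eye on whether a dense stack is cooking itself; for example, a small polling loop (Python, again assuming nvidia-smi is available; the temperature field is a standard query field):

        import subprocess, time

        # Poll GPU core temperatures once a second during a render;
        # sustained readings near the throttle point (low 80s C on a
        # Titan X) suggest the cards are choking each other's airflow.
        while True:
            out = subprocess.run(
                ["nvidia-smi", "--query-gpu=index,temperature.gpu",
                 "--format=csv,noheader,nounits"],
                capture_output=True, text=True, check=True,
            ).stdout
            print(out.strip().replace("\n", "  |  "))
            time.sleep(1)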



  • #8
    We have finally made it work with the new riser we bought, and it's working perfectly: no speed decrease, and stable.
    We just purchased the other 6, so we should have our total build done by the end of next week if everything goes well.
    As a reminder, this will be a node with 7 Titan X cards, and when they are OCed (as I have them in two nodes ATM) they are actually slightly faster than an NVIDIA VCA (cf. the benchmark chart).

    I'll keep you guys posted,

    Cheers
    Stan
    3LP Team



  • #9
    Originally posted by 3LP View Post
    I'll keep you guys posted,
    Yes please
