V-Ray RT on 16+ cards? How many cards are supported?


    Hello,

    I am still working on my setup, seeing that 7-8 cards weren't enough for my needs. I would like to ask whether PCI-e splitting works efficiently with V-Ray RT or not. I am thinking about 16-32 6GB cards. I would also like to ask if V-Ray RT supports peer-to-peer transfer, so the multi-GPU system can transfer data between the cards without going through the processor.

    Here is the link for the PCI-e splitter (it's not a cheap, simple splitter):
    http://www.magma.com/expressbox-16-basic

    Thank You!

  • #2
    The bus is PCIe 3.0 and the cards are PCIe 3.0... the internal bandwidth of the splitter I don't know...

    • #3
      Info

      I found the info:
      PCI Express v2.0

      14 slots, x8 PCIe
      2 slots, x16 PCIe

      • #4
        14 slots at x8 PCIe? What motherboard can run that?

        You can use distributed RT rendering with GPUs, so just hook up a few machines together.
        CGI - Freelancer - Available for work

        www.dariuszmakowski.com - come and look

        • #5
          Originally posted by losbellos73 View Post
          I would also like to ask if V-Ray RT supports peer-to-peer transfer, so the multi-GPU system can transfer data between the cards without going through the processor.
          V-Ray does not really transfer any information between the GPUs themselves; all transfers are from/to the main memory.

          As DADAL pointed out, it might be a lot simpler to use another machine for DR.

          Best regards,
          Vlado
          I only act like I know everything, Rogers.

          • #6
            That was just another point.

            Dadal didn't seem to read my post. It's an external PCI-e splitter with its own power supply, etc.
            There is a Windows 7 driver for it, so I guess Windows sees the cards, but I am not sure that V-Ray RT would work with it.
            Again, here is the link:

            http://www.magma.com/expressbox-16-basic

            • #7
              Originally posted by losbellos73 View Post
              There is a Windows 7 driver for it, so I guess Windows sees the cards, but I am not sure that V-Ray RT would work with it.
              Again, here is the link: http://www.magma.com/expressbox-16-basic
              We haven't tested it here, so I don't really know. I can only say that V-Ray RT will use all CUDA devices that are accessible to it through the nVidia drivers. However, I don't know for example if there is any performance penalty for the extender. I know that with some older extenders, the GPU performance was quite a bit lower than if the GPUs are directly plugged into the machine. Whether this is the case for this particular extender, I don't know - it would be best to test. Why don't you ask those guys directly if they have some actual numbers?
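
              For anyone who wants to check what such an extender actually exposes before committing, a minimal standalone test using the plain CUDA runtime API (nothing V-Ray specific, just a sketch) would be something like this:

              Code:
              // Sketch: list every CUDA device the nVidia driver currently exposes.
              // Build with e.g.: nvcc list_devices.cu -o list_devices (file name is arbitrary)
              #include <cstdio>
              #include <cuda_runtime.h>

              int main() {
                  int count = 0;
                  cudaError_t err = cudaGetDeviceCount(&count);
                  if (err != cudaSuccess) {
                      printf("CUDA error: %s\n", cudaGetErrorString(err));
                      return 1;
                  }
                  printf("CUDA devices visible to the driver: %d\n", count);
                  for (int i = 0; i < count; ++i) {
                      cudaDeviceProp prop;
                      cudaGetDeviceProperties(&prop, i);
                      // pciBusID shows where each card sits, handy when GPUs live in an expander.
                      printf("  [%d] %s, %lu MB, PCI bus %d\n",
                             i, prop.name, (unsigned long)(prop.totalGlobalMem >> 20), prop.pciBusID);
                  }
                  return 0;
              }

              If all the cards in the box show up in that list, V-Ray RT should at least be able to see them; whether they perform well over the shared link is the separate question.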

              Best regards,
              Vlado
              I only act like I know everything, Rogers.

              • #8
                I'll ask them and then we'll see... I will post back.

                • #9
                  No, I think Dadal has a very good point. That thing may have 14 PCIe x8 slots, but they are all connected to either 1 or 2 PCIe x8 or x16 slots in your PC.

                  This means you are sharing the bandwidth usually reserved for 2 cards between 14. Having extra slots doesn't magically give your motherboard more connection bandwidth.

                  • #10
                    As far as I know, each chipset has a number of PCIe lanes that can be used for GPUs/accessories (see http://www.intel.com/Assets/PDF/prod...duct-brief.pdf for example; the X58 there has 36 lanes). Now, if you have 14 GPUs and want to run them at full speed, you need 14 GPUs x 16 lanes = 224 lanes on the chipset to fully utilize your GPU power. I have never met a motherboard with that many lanes. However, the product you show us appears to give your machine more lanes, so it has its own chipset that manages them. I'm just not sure whether that's really what it does, and if it is, how well it can do it...

                    It looks like I could connect around 7 boxes like this to my EVGA SR-2, which would result in around 112 GPUs. I can't imagine the data transfer and all that could be fully utilized... it's dark magic in my eyes at this moment.
                    Last edited by Dariusz Makowski (Dadal); 23-09-2013, 08:22 AM.
                    CGI - Freelancer - Available for work

                    www.dariuszmakowski.com - come and look

                    • #11
                      This box doesn't give you any more lanes; it just divides them up between however many cards you put in there.

                      PCIe works like that... you can have an x16 PCIe slot that only has x1 bandwidth; in fact, IIRC you can also plug an x1 card into an x16 slot, leaving most of the slot empty. It's quite flexible.

                      It's also not clear how many lanes you have to work with (it mentions 2 host cards at one point, but also mentions a single x16 interface to the host).

                      Worst case scenario you have 16 PCIe lanes split between the cards in this box; best case, 32.

                      Not something I'd even consider, to be honest. Unless your work involves almost no data transfer to and from the cards, this will be a massive bottleneck compared to some nice fat motherboards decked out with as many GPUs as possible.
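
                      Just to put rough numbers on that (purely back-of-envelope, assuming roughly 500 MB/s per PCIe 2.0 lane and 14 cards in the box; none of this is from Magma's specs):

                      Code:
                      // Back-of-envelope: host bandwidth per card when one PCIe 2.0 uplink is shared.
                      #include <cstdio>
                      #include <initializer_list>

                      int main() {
                          const double mb_per_lane_gen2 = 500.0;   // rough PCIe 2.0 figure per lane, per direction
                          const int cards = 14;                    // cards sharing the uplink
                          for (int uplink_lanes : {16, 32}) {      // worst case x16, best case 2 x x16
                              double total_mb = uplink_lanes * mb_per_lane_gen2;
                              printf("%2d uplink lanes: %.0f MB/s total, ~%.0f MB/s per card\n",
                                     uplink_lanes, total_mb, total_mb / cards);
                          }
                          return 0;
                      }

                      In the worst case each card ends up with only a few hundred MB/s to the host, which is the bottleneck described above.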

                      • #12
                        I will ask and see if they can install a V-Ray RT demo and a Max...

                        • #13
                          Yeah, I would be interested to see some benchmarks from those guys as well.

                          But guys, you should consider what a PCI Express v3 lane gives you as bandwidth:

                          http://en.wikipedia.org/wiki/PCI_Express

                          So on 16 PCIe 3.0 lanes you've got 15.75 GB/s of bandwidth in each direction, which means just under 1 GB/s per lane in each direction.
                          Just saying, that's more than a CD, or about a quarter of a DVD, of data transferred each second per lane.

                          I don't know how much data actually needs to be transferred (maybe Vlado can enlighten us on this one), but I reckon it's still an enormous amount of data.
                          So if you run a GPU on a single PCIe lane (an x16 slot divided between 16 cards), each card has 985 MB/s of bandwidth. That's roughly 1 GB/s, comparable to a 10 Gbit network card.

                          In the end, it all depends on how much data needs to be transferred to each card for:

                          1) the initialization phase (the scene mesh, the textures, and all the other stuff that needs to be sent to the GPU; this could be a few GB, so a few seconds, which up to here is not a big issue even on a single PCIe lane, I'm guessing);

                          2) the progressive render, which keeps passing data between the CPU and the GPU.

                          I don't know if we need to consider latency; on a network card it's already minimal, and inside a PCIe lane I think it has to be so small that it's insignificant. Am I wrong?

                          Loads of Bitcoin farms run on ATI cards over x1 PCIe lanes; I don't know how closely that relates to what RT is doing, but just as a side note...
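
                          To put very rough numbers on point 1 (the scene size and link speeds below are just guesses for illustration, not measurements):

                          Code:
                          // Illustrative only: time to push a scene of a given size to one GPU
                          // at a few hypothetical per-card link speeds.
                          #include <cstdio>

                          int main() {
                              const double scene_gb = 4.0;                       // made-up scene + textures size
                              const double link_mb_s[] = {985.0, 500.0, 250.0};  // x1 gen3, x1 gen2, shared/worse
                              for (double bw : link_mb_s) {
                                  double seconds = scene_gb * 1024.0 / bw;
                                  printf("%4.0f MB/s per card -> ~%.1f s to upload a %.1f GB scene\n",
                                         bw, seconds, scene_gb);
                              }
                              return 0;
                          }

                          So the initial upload is a matter of seconds even on a slow link; it's the continuous traffic during progressive rendering that is harder to judge.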

                          Stan
                          3LP Team

                          • #14
                            BTW, the new ASRock mobos have 2 chips handling PCI-e and more lanes. Each x16 slot (there are 4 slots with real 16-lane PCI-e) will divide, so you could get 16 cards with 4 lanes each.
                            It probably doesn't work exactly like this, but it's a good rough calculation for now; and of course divide the theoretical bandwidth by 2 and you still end up with usable bandwidth on 16 cards.

                            It's another story that there is about $2000 extra cost for each slot; that's roughly the price of another machine with about 7 cards... Especially in this ASRock case it's worth thinking about, since it's 1x x16 + 6x x8 lanes... but the compatibility and not needing slow networks... might be worth it. We'll see.

                            Otherwise, 10G network cards cost about $1000 per machine, and if I want more machines, even the 10G switch is costly too...

                            • #15
                              Originally posted by losbellos73 View Post
                              BTW, the new ASRock mobos have 2 chips handling PCI-e and more lanes. Each x16 slot (there are 4 slots with real 16-lane PCI-e) will divide, so you could get 16 cards with 4 lanes each.
                              It probably doesn't work exactly like this, but it's a good rough calculation for now; and of course divide the theoretical bandwidth by 2 and you still end up with usable bandwidth on 16 cards.

                              It's another story that there is about $2000 extra cost for each slot; that's roughly the price of another machine with about 7 cards... Especially in this ASRock case it's worth thinking about, since it's 1x x16 + 6x x8 lanes... but the compatibility and not needing slow networks... might be worth it. We'll see.

                              Otherwise, 10G network cards cost about $1000 per machine, and if I want more machines, even the 10G switch is costly too...
                              There are 10G network cards that cost around £100, from what I remember...
                              CGI - Freelancer - Available for work

                              www.dariuszmakowski.com - come and look
