SEDI Concept - Search for Extra Distributing Intelligence


  • SEDI Concept - Search for Extra Distributing Intelligence

    Want to put this on the wishlist, but it's really not part of V-Ray. I'm just throwing a concept out there to get some wheels turning. I came up with the idea in a dream last night (yes, I do have strangely productive dreams). I wish I were a knowledgeable programmer, because I'd get the groundwork running myself. I think it's very feasible, though it would take a significant amount of volunteering on the 3D community's behalf. It is an idea that would revolutionize movies, gaming, and culture in many unseen ways. Perhaps something that would be taken for granted some day, just like the www itself. Any ideas/input on this would be welcome:


    SEDI - Search for Extra Distributing Intelligence.

    Just a thought. A pun/take on SETI (in case one is not familiar with this):

    http://setiathome.ssl.berkeley.edu/

    Why not use other people's computers across the www to render animations while their machines are on but not in use? I personally wouldn't have any particular use for this (it would be overkill for what I do), but I'm sure plenty of studios and indie animators out there would appreciate the extra rendering power when in a crunch. Instead of a render farm of 30 computers, you could possibly obtain 300-500 computers for free. Ideally there would be a plugin for all the different 3D programs out there, but being a Max user myself, I'd rather root for the home team.

    If there's a lack of goodwill incentive in the community to install something like this, perhaps it could turn into a client relationship where an individual receives some small royalty for letting an organization access their SEDI setup. But it would have to be very cheap for everyone (I'm very wary of profit-making abuse on this point). Payment would be based on the number of computers, available hours, available RAM, and processing power.

    I'd prefer this technology to be non-profit and open-source (as I'm sure most people would). Perhaps access could be based on the amount of time a client makes their machine available to the public, in relation to the amount of time they themselves want access to the network. A give-to-get situation.

    There would also have to be safety switches installed, such as the ability to limit how much of the CPU can be used over a period of time. I wouldn't want my personal computer to have a meltdown. And I guess there would have to be mirroring/re-rendering in case someone suffers a power outage or their kids decide to go 'surfing' in the middle of the night. Also, there would have to be a reasonably strong level of encryption for the file transfers.
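A 'safety switch' like the one described could be as simple as duty-cycle throttling: the client does one small unit of work, then sleeps long enough to keep its average CPU use under a cap. A minimal Python sketch; the function name and the 50% default are illustrative assumptions, not any real client's API:

```python
import time

def throttled_map(task_fn, chunks, max_cpu_fraction=0.5):
    """Apply task_fn to each work chunk, idling between chunks so the
    host's long-run CPU use stays near max_cpu_fraction of one core.
    Hypothetical 'safety switch' for a volunteer render client."""
    results = []
    for chunk in chunks:
        start = time.monotonic()
        results.append(task_fn(chunk))  # one small unit of render work
        busy = time.monotonic() - start
        # Idle long enough that busy / (busy + idle) == max_cpu_fraction
        time.sleep(busy * (1.0 - max_cpu_fraction) / max_cpu_fraction)
    return results
```

A real client would also want to watch temperature and back off when the user is active, but the duty-cycle idea is the core of it.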

    -jujubee
    LunarStudio Architectural Renderings
    HDRSource HDR & sIBL Libraries
    Lunarlog - LunarStudio and HDRSource Blog

  • #2
    I've had this idea before; I don't know if anyone would actually implement it, though. But here's a great way to manage it:

    By letting others render on your machines you accrue credits on the central server, and then you can cash in your credits for higher priority on your submitted jobs.
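That credit scheme can be modeled in a few lines. A toy Python sketch, assuming one central ledger where rendering for others earns credits and banked credits translate into queue priority (the class and method names are invented for illustration):

```python
class CreditLedger:
    """Toy model of the proposed credit system: hosts earn credits by
    rendering others' jobs, then cash them in as priority on their own."""

    def __init__(self):
        self.credits = {}  # host name -> banked credits

    def record_render(self, host, cpu_hours, rate=1.0):
        """Credit a host for cpu_hours of rendering done for others."""
        self.credits[host] = self.credits.get(host, 0.0) + cpu_hours * rate

    def priority(self, host):
        """More banked credits means a higher place in the job queue."""
        return self.credits.get(host, 0.0)


ledger = CreditLedger()
ledger.record_render("alice", 10.0)  # alice rendered 10 CPU-hours for others
ledger.record_render("bob", 2.0)
# alice's submitted jobs now outrank bob's in the queue
```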



    • #3
      Dynedain- that's a really good idea. Would kinda even up the playing field.

      One more thing I forgot to mention above- you'd also have to test for computer reliability/internet connection speeds.

      I wish there was some sort of way to get this off the ground. Unfortunately I don't have the programming skills to be able to contribute much to a project of this nature.



      • #4
        Yeah, I have no programming skills of this nature either.

        The nice thing about the credit idea is that you are essentially 'banking' your processor cycles so you can use them later.

        It's a lot like having a solar-powered home connected to the city electrical grid. In the US, the electric company is required to buy any excess power you generate, so you essentially produce power during the day and then retrieve power at night, using the power grid/company as storage (sort of). Credits for rendering would work the same way.



        • #5
          Jujubee, what did you have in mind for file transfer? My scenes are almost always over 50 MB, which would take quite a while to send. There are also huge issues with compatibility and licenses.

          I don't see this idea being feasible until everyone has HIGH-speed internet connections, and by high I mean HIGH.

          /Thomas
          www.suurland.com
          www.cg-source.com
          www.hdri-locations.com



          • #6
            Ever tried remote network rendering? It's a nightmare. Here are some issues I have come up against:

            1. Software setup. With Max, and I'm sure others as well, you need to install the program on all the render slaves. That also includes plugins and any third-party software. For third-party renderers like Mental Ray and RenderMan, where you pay per CPU, you also have licensing issues.

            2. Storage. Some of these programs are quite large. I can't imagine people would be too happy with you taking up all their hard disk space. And think about how much space you'll need for the rendered frames. We are doing a job that requires about 200 GB for the beauty pass alone. Not many desktop machines would have that free.

            3. Bandwidth. Imagine trying to pull that same data back to your file server across the net with the current bandwidth available to consumers.

            4. Hardware. 3D render slaves require a higher-spec computer than your average mum and dad would buy. Anything less than a 12-month-old P3 with 512 MB of RAM is not worth adding to the farm.

            5. Hardware/software conflicts. Varying flavours of OS and hardware configurations will make troubleshooting a failed job a nightmare. For example, we recently sent out a job where SpeedTree trees came back smaller on some frames than others. It was an issue with regional settings. The same can be said of other plugins that use decimals: US settings use the point (.) whereas European settings use the comma (,). Fancy your chances of telling others to change the settings on their computer to match yours?

            6. Access. When a job fails, you need to be able to access the machines, perhaps reboot them or find out why they failed. Imagine the warm welcome you'd get from your render host when you turn up at 2am wanting access to the render slave that's in their daughter's room.

            My point is that it's bad enough trying to manage a render farm locally, under controlled conditions, with computers you are familiar with. Your idea is not new; in fact, it has been around for some time now. The reason it has not been implemented is not that no one has thought of it before, but that it would take a miracle to make it work.

            Vu Nguyen
            -------------------------
            www.loftanimation.com.au



            • #7
              You've all got valid points, and I can't say I've had as much experience as all of you. What sets 99% of the population apart from those who forge ahead is not the ideas, but the ability to follow through on them. I can be a lame duck, and I'll be the first to admit my shortcomings. When I posted, I didn't think it was a completely original idea. I'd like it to be, but chances are if you have an 'original' idea, then 20 other people already have it too. I just haven't seen it written up anywhere. And no, I don't think we need a miracle. I think we need a bunch of really creative programmers willing to volunteer time and dedication to something they will most likely never see a single dime from. And I don't think it would be an easy task.

              Those technical problems aside, I still think it can be accomplished. Perhaps I'm a little overly optimistic, but some day you will see something like this. You may not think it possible now, but it's not out of reach. I don't think the 'average' person will be installing this kind of software for the next 15 years, at least until processing power and bandwidth become non-issues. Perhaps by then there will be no need for distributed computing at all. But I can foresee it involving mainly people already in the field, more than likely those with higher-end systems. I believe a very high percentage of people on this forum already run screaming machines with the fastest connections currently available.

              As for the future, new technologies already in research will enable us to transfer large files easily. This may be several years away. For now, this system would have to assess what someone's connection speeds are like as well as their reliability.

              As for hard drive space, there must be a way to segment files into smaller portions; there's no need to send one huge file containing large amounts of unneeded data to a remote site. No need to send an 80 MB bitmap when only a sixteenth of it is going to be used. A more efficient, dedicated parser should be able to send out localized portions of frames; with a 'smart distribution' system you never have to ship an entire frame. These files can be reduced, and they can even be mirrored as a fail-safe.
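The 'send only the needed sixteenth of a bitmap' idea is straightforward tile arithmetic. A sketch in Python, assuming the scheduler knows which pixel region a remote render bucket will actually sample (the region and tile parameters here are invented for illustration):

```python
def tiles_needed(region, tile_size):
    """Given the pixel region (x0, y0, x1, y1) a remote bucket samples,
    return the grid tiles of a texture that must be sent, instead of
    shipping the whole bitmap. Illustrative sketch only."""
    x0, y0, x1, y1 = region
    tx0, ty0 = x0 // tile_size, y0 // tile_size
    tx1, ty1 = (x1 - 1) // tile_size, (y1 - 1) // tile_size
    return [(tx, ty) for ty in range(ty0, ty1 + 1)
                     for tx in range(tx0, tx1 + 1)]

# A bucket touching pixels (100,100)-(160,130) of a texture cut into 64 px tiles:
print(tiles_needed((100, 100, 160, 130), 64))
# -> [(1, 1), (2, 1), (1, 2), (2, 2)]  (four tiles, not the whole map)
```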

              As for licensing, this is the whole point of the idea. There wouldn't be a need for licensing, as this would ideally be open-source. It would have to be a distribution engine that operates as a 'freeware' plugin anyone can install. I'm sure there would be some issues where it conflicts with other rendering packages. But given the option, 99.9% of the people in this field would switch over in a heartbeat if this technology were available and reliable. Those vendors would have no choice but to eventually cave in to the demands of the greater population. Tell me you wouldn't use this if it were highly reliable and available.

              As for access, if something fails, then at least it's mirrored across another ten machines; there's no need to pay anyone a visit. And as for hardware/software conflicts, I'm sure someone out there could write more extensible code that properly translates requests across different systems. It sounds like the current software just wasn't written with those issues in mind. It's nothing that isn't possible.

              You can't approach it from the perspective that it's impossible. I can think of many things that are truly impossible; something like this is within reach. The technology to handle and segment large packets of streaming data already exists.



              • #8
                The problems with software, version control, job size, and everything else can be solved fairly easily by using the SETI or distributed.net models. Just send out the pure math in chunks for the distributed machines to render; don't rely on each machine to do a whole frame or pass or anything like that.
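In the SETI/distributed.net model, each machine gets a small, self-contained work unit, and every unit goes out to more than one host so a dropped machine can't stall the job. A hypothetical sketch of how a frame might be cut into such units (the field names are invented for illustration, not from any real system):

```python
import hashlib
import json

def make_work_units(frame, width, height, bucket=64, redundancy=2):
    """Split one frame into small work units, each issued to
    `redundancy` different volunteer hosts for fault tolerance."""
    units = []
    for y in range(0, height, bucket):
        for x in range(0, width, bucket):
            payload = {"frame": frame, "x": x, "y": y,
                       "w": min(bucket, width - x),
                       "h": min(bucket, height - y)}
            # Stable id so duplicate results can be matched up later
            uid = hashlib.sha1(
                json.dumps(payload, sort_keys=True).encode()).hexdigest()[:8]
            for copy in range(redundancy):
                units.append({"id": uid, "copy": copy, **payload})
    return units

units = make_work_units(frame=1, width=640, height=480)
# 10 x 8 buckets, two copies of each -> 160 work units
```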



                • #9
                  I think the whole system could be universal if every renderer had a custom-tailored plugin that splits the render load into chunks so tiny and so universal that it makes sense to send them over the internet to other computers.

                  However, I see two drawbacks:

                  1.) Making the chunks universally processable by any computer (equipped only with a universal client, without renderer-specific plugins) would mean a lot of preprocessing before sending. Each chunk would basically be a bundle of computations for the client to chew on, whose results only make sense to the server that issued them (which would at least solve the security issue). Such simplified chunks would be necessary because the colour of a given pixel depends on the rest of the scene, especially when algorithms like raytracing for reflections and refractions, or radiosity, are used; and given the amount of data involved and the speed of internet connections, transferring the whole scene wouldn't make much sense. So anything beyond simple equations is, I think, ruled out from the start. This might still work technically, but it brings us to...

                  2.) The amount of preprocessing (splitting and partially digesting the scene data into low-level chunks, then recollecting the finished data from the clients and feeding it back into the main rendering pipeline) and the network management required would soon become the bottleneck, and probably consume more processing power than rendering the image itself. I think for the same reason there is currently no point in using more than about 10 machines for distributed rendering with V-Ray.
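The bottleneck argument above can be made concrete with a back-of-envelope check: a chunk is only worth distributing if shipping it, plus the splitting/merging overhead, costs less than just rendering it locally. All the numbers below are illustrative assumptions about period-typical connections, not measurements:

```python
def worth_distributing(chunk_kb, render_s, uplink_kbps=256, overhead_s=0.5):
    """True if sending a chunk over the wire (plus fixed server-side
    split/merge overhead) is cheaper than rendering it locally.
    Defaults are illustrative assumptions, not benchmarks."""
    transfer_s = chunk_kb * 8 / uplink_kbps  # KB -> kilobits / kbps
    return transfer_s + overhead_s < render_s

# A 500 KB chunk that takes 60 s to render, on a 256 kbps uplink:
print(worth_distributing(500, 60))  # True: ~16 s of overhead < 60 s of rendering
# The same chunk when it only takes 10 s to render locally:
print(worth_distributing(500, 10))  # False: the overhead dominates
```

The crossover moves with bandwidth, which is exactly why the thread keeps coming back to connection speed.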

                  However, I'd like to be proven wrong.

                  Stefan
                  Stefan Kubicek
                  www.keyvis.at

