Distributed Render with Max 2017 Phoenix not working

  • Distributed Render with Max 2017 Phoenix not working

    Trying a new install of Max 2017 with Vray 3.40.01 and Phoenix 1606126821 from nightly phoenixFD_adv_22501_max2017_vray_30_x64_26821.
    Slaves have the same new install of Max 2017 running on trial, the same process used for the last few years.
    Used the same downloaded installation executable on the workstation and all render slaves. Tried installing as a render slave, then tried as a full workstation.

    Buckets processed by the local host render correctly, but the buckets handled by the distributed render slaves are blank; the only results are the background and other geometry.
    Generated the sim with a UNC server cache location. Made sure the server folder is accessible from the slaves. Verified share security access and that the antivirus is okay (a quick access check from a slave is sketched at the end of this post).

    Tried sending a render job to the backburner manager and server on Slave1, using only the local host. Slave1 rendered properly. Thinking this might set up the cache folders, I tried again from the workstation using distributed render, but again the buckets processed by Slave1 were blank.

    Any ideas? I'm sure I am forgetting something.
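    For the "made sure server folder is accessible" step, this is a minimal sketch of the read check I run on each slave while logged in as the account that will render; the share path and the .aur extension are placeholders for my own setup:
    Code:
    # Quick sanity check from a slave: can this account actually list and read
    # the Phoenix cache files over UNC? Path and extension are placeholders.
    import os

    CACHE_UNC = r"\\Workstation\PhoenixCache"   # hypothetical share/path

    try:
        files = [f for f in os.listdir(CACHE_UNC) if f.lower().endswith(".aur")]
        print("Visible cache files:", len(files))
        if files:
            # Listing can succeed while read access is still denied,
            # so try to actually read a few bytes from one cache file.
            with open(os.path.join(CACHE_UNC, files[0]), "rb") as fh:
                fh.read(16)
            print("Read test OK")
    except OSError as err:
        print("Cache share not accessible from this account:", err)
    Of course, run interactively this only proves access for the logged-in user, not for whatever account a registered service ends up running under.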

  • #2
    I think this is a permission issue; we have already had similar complaints. When you check the visibility you use the browser, and for some reason the permissions are different.
    ______________________________________________
    VRScans developer



    • #3
      That's what I thought. So last night on Slave1 I did a clean uninstall of the Chaos and Autodesk products, did a manual clean of the folders, etc. Reinstalled the Max 2017 trial, vray, and phoenix nightly 26821. Registered VRaySpawner 2017 as a service on Slave1 and made sure that the service is running; VRLService is also running (a quick way to script that check is at the end of this post).

      This morning I made a small water sim with the cache in a Workstation folder, using a UNC address in the output simulation cache save path, and set the same UNC address in the input preview & render cache path.
      First thing was to render via backburner on the Slave1 manager and Slave1 server. It renders perfectly, so I believe Render1 has permissions to the cache path on the Workstation. Then I immediately switched to rendering on the Workstation with distributed render, using only the Render1 server with Use local host on, and turned on Transfer missing assets and Use Cached Assets. Buckets handled by the Workstation have water; buckets handled by Render1 are blank.
      Attached are Phoenix logs and render results.
      Any thoughts?
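      For reference, a quick way to script the "is the service running" check on each slave; the service names below are what they are called on my install, so verify them in services.msc first:
      Code:
      # Check that the spawner and license services are actually running on a slave.
      # Service names are an assumption for my install - verify in services.msc.
      import subprocess

      for service in ("VRaySpawner 2017", "VRLService"):
          result = subprocess.run(["sc", "query", service],
                                  capture_output=True, text=True)
          state = "RUNNING" if "RUNNING" in result.stdout else "not running / not found"
          print(service + ": " + state)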



      • #4
        the DR log seems to be incomplete, is it really like this?
        ______________________________________________
        VRScans developer



        • #5
          I copied the actual file to a folder and then renamed it to attach it to this post. I will run the test again tonight and verify.
          I'm guessing this is a UNC issue with the DR slaves.



          • #6
            Confirmed the Phoenix log file is odd. See attached.



            • #7
              Update:
              On Slave1, I tried shutting off the VRaySpawner service and instead manually launching the V-Ray DR Spawner from the Start menu. Then I ran a DR render from the Workstation using only that slave, and it rendered the water sim okay.

              Edit:
              Updated the workstation and slaves to Vray 3.40.02 and Phoenix nightly 26822. No change.

              Update:
              All slaves work if I launch DR Spawner on each machine. No luck getting the registered spawner service to work on any machine.
              Didn't we have similar issues with the 2016 version?
              Last edited by rporter8555; 13-06-2016, 07:55 PM.



              • #8
                I have never had good results (or they are hit and miss) with anything like render management apps or vray spawners etc. when run as a service; I always run the app as normal now and have not had any issues.
                Adam Trowers



                • #9
                  actually the odd file can be a hint too, this can only happen if two Max instances are somehow started simultaneously.
                  perhaps this may be a direction, not sure. we have to reproduce this in order to say more.
                  ______________________________________________
                  VRScans developer



                  • #10
                    Thanks for the information. This time I was careful not to run both the service and the application at the same time, and got better log files.

                    I ran a test where I watched Task Manager to be certain whether max processes were running or not, and captured the Phoenix log files. I registered the spawner service on Slave1 and saw max running as a background process, then started a DR render assignment from the workstation (which failed) and captured the Phoenix log file. Then I stopped the service and watched the max background process close. Then I launched the DR Spawner and saw max run as an application, started a render assignment (success), closed the DR Spawner, and captured the Phoenix log file. See attachments. Hope this helps. (A scripted version of the process check is sketched below.)
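                    In case anyone wants to repeat this test, this is roughly how the "is a max process still alive" check can be scripted instead of watching Task Manager; a rough sketch using psutil, where the process name fragment is an assumption for my install:
                    Code:
                    # Look for leftover 3ds Max processes between spawner runs, so the
                    # service and the application are never running at the same time.
                    # Requires: pip install psutil. Process name is an assumption.
                    import psutil

                    def find_max_processes(name_fragment="3dsmax"):
                        hits = []
                        for proc in psutil.process_iter(["pid", "name"]):
                            name = (proc.info["name"] or "").lower()
                            if name_fragment in name:
                                hits.append((proc.info["pid"], proc.info["name"]))
                        return hits

                    running = find_max_processes()
                    if running:
                        for pid, name in running:
                            print("Still running:", name, "pid", pid)
                    else:
                        print("No 3ds Max processes found - safe to start the next test.")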



                    • #11
                      Flipbook: I don't disagree. I ran my farm successfully with the 2016 versions and spawner services, but now with the 2017 upgrades I am debugging again.
                      Last edited by rporter8555; 14-06-2016, 02:29 AM.



                      • #12
                        the first one seems to be ok, the second one has no loading of the cache, i mean there is no attempt to load it at all.
                        ______________________________________________
                        VRScans developer



                        • #13
                          I had the same problem, and my IT dept ended up creating a dummy account for the spawner and backburner services on the slaves and gave that account access to the folders holding my maps and Phoenix cache files.
                          Cheers,
                          -dave
                          ■ ASUS ROG STRIX X399-E - 1950X ■ ASUS ROG STRIX X399-E - 2990WX ■ ASUS PRIME X399 - 2990WX ■ GIGABYTE AORUS X399 - 2990WX ■ ASUS Maximus Extreme XI with i9-9900k ■
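                          If anyone wants to script that permissions step across several slaves, a rough sketch of granting such a dummy account read access to the map and cache folders; the account name and paths are made-up examples, and the rights string should be adjusted to whatever your setup actually needs:
                          Code:
                          # Sketch: grant a hypothetical render-service account read access to the
                          # folders holding maps and Phoenix caches, using the standard icacls tool.
                          # Account name and folder paths are placeholders.
                          import subprocess

                          SERVICE_ACCOUNT = r"FARM\render_svc"      # hypothetical dummy account
                          FOLDERS = [r"D:\Projects\Maps", r"D:\Projects\PhoenixCache"]

                          for folder in FOLDERS:
                              # (OI)(CI)RX = read/execute, inherited by child files and folders
                              subprocess.run(
                                  ["icacls", folder, "/grant", SERVICE_ACCOUNT + ":(OI)(CI)RX"],
                                  check=True,
                              )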



                          • #14
                            That solved it.

                            Originally posted by Syclone1 View Post
                            I had the same problem and my IT dept ended up creating a dummy account for the spawner and backburner services on the slaves and gave that account access to the folders holding my maps and phoenix cache files.
                            Syclone1, thank you! Your tip reminded me that I needed to go into the properties dialog of the VRaySpawner service and set the Log On to my local admin account, and also to set the Recovery to restart the service after the first two failures.
                            Now all slaves in the farm are running with registered services and are handling the water sim fast. They restart automatically after machine restarts without having to go into each one to launch the spawner application.

                            Edit: A backburner job is now also processing the water sim, with DR working very well on all slaves. All is well.
                            Last edited by rporter8555; 15-06-2016, 07:15 PM.
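                            For anyone who has to do this on several slaves, the same two changes (Log On account and Recovery) can be scripted instead of clicking through the service properties on each machine; a rough sketch using the built-in sc tool, where the service name, account, and password are placeholders for my own setup:
                            Code:
                            # Sketch: set the spawner service's log-on account and failure recovery,
                            # equivalent to the Log On and Recovery tabs in services.msc.
                            # Service name, account, and password are placeholders.
                            import subprocess

                            SERVICE = "VRaySpawner 2017"   # check the real name with: sc query state= all
                            ACCOUNT = r".\localadmin"      # hypothetical local admin account
                            PASSWORD = "change-me"

                            # Set the account the service logs on with.
                            subprocess.run(
                                ["sc", "config", SERVICE, "obj=", ACCOUNT, "password=", PASSWORD],
                                check=True,
                            )

                            # Restart automatically after the first two failures (60 s delay),
                            # then stop trying; reset the failure count after a day.
                            subprocess.run(
                                ["sc", "failure", SERVICE, "reset=", "86400",
                                 "actions=", "restart/60000/restart/60000//"],
                                check=True,
                            )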



                            • #15
                              would be good to keep track of this thread, you are not the first user hitting this problem
                              ______________________________________________
                              VRScans developer

