attaching by list, commit charge and gc()

  • attaching by list, commit charge and gc()

    Hi All,

    I have a bunch of leaves that I baked into geometry from PFlow, but they were separate objects, so I started attaching them together to make one mesh. When I did this, my commit charge jumped and kept climbing the more I attached. The only way I could fix it was to type "gc()" into the MAXScript listener; then the commit charge would reset itself back down.

    If I let the commit charge climb, it went as high as 9000 MB and my computer started to crawl.

    Running Windows XP x64 and Max 2008 64-bit. I'm starting to hate XP; it's so old now. If I try to render something too complex on my machine, it craps itself and I basically have to do a full restart to clear the system. I wish I knew why.

    I've been using Windows 7 at home, which I have to say I like... though I've not used Max on it yet, so I'm reserving full judgment until then.

    Still, it'd be great to know what the problem is. Windows is setting the pagefile size automatically, but it seems to get out of control.
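
    The workaround described above (attach in a loop, call gc() periodically) can be sketched like this. This is a minimal, hypothetical example: "leafNodes" and the batch size are assumptions, not anything from the original post.

    ```maxscript
    -- Hedged sketch: attach an array of leaf objects into one mesh,
    -- calling gc() every N attaches so the commit charge doesn't balloon.
    -- leafNodes is a hypothetical array of the baked leaf objects.
    fn attachWithGC leafNodes gcEvery:50 =
    (
        local target = leafNodes[1]
        convertToMesh target
        for i = 2 to leafNodes.count do
        (
            meshop.attach target leafNodes[i]
            -- release the memory that repeated attaches hold on to
            if mod i gcEvery == 0 do gc()
        )
        target
    )
    ```

    Calling it as `attachWithGC (selection as array)` would collapse the current selection; how often to gc() is a speed/memory trade-off.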

  • #2
    I think it's more a Max problem than XP's. If you convert your base mesh to an Editable Poly before attaching the leaves, your memory usage will remain more or less flat.
    Dan Brew
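
    Dan's suggestion can be sketched as follows. The object name is hypothetical; the point is just that the base object is an Editable Poly before any attaching happens, so polyop.attach is used instead of mesh attach.

    ```maxscript
    -- Hedged sketch: convert the base object to Editable_Poly first,
    -- then attach the rest of the selection to it with polyop.attach.
    baseMesh = $Leaf_Base   -- hypothetical scene object name
    convertToPoly baseMesh
    -- collect first so the iteration isn't affected by scene changes
    leaves = for o in selection where o != baseMesh collect o
    for leaf in leaves do polyop.attach baseMesh leaf
    ```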



    • #3
      Are you doing all of this manually? If so, there are a number of attach scripts available that automate the entire process and stop Max grinding to a halt.

      Off the top of my head, www.neilblevins.com has one in his SoulburnScripts pack.
      MDI Digital
      moonjam



      • #4
        I found the same thing in my leaf script: if you attach the meshes with no gc() call, your copy of Max very quickly fills up memory and crashes. Hence I put a gc() call in every loop, which I'm sure slows things down a huge amount. Admittedly I wrote it on Max 9 32-bit, so it could theoretically work fine under 64-bit with a lot of RAM. Either way, the attach command is a memory hog.
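
        A per-loop gc() like the one described above is the slow-but-safe extreme. A cheaper middle ground, sketched below under the assumption of Max 9 or later (where gc() accepts the light: keyword), is a light collection every N iterations:

        ```maxscript
        -- Hedged sketch: rather than a full gc() on every attach (slow),
        -- do a light collection every N iterations. gc light:true skips
        -- the heap defragmentation pass, so it is considerably cheaper.
        fn attachLoop objs every:25 =
        (
            local base = objs[1]
            convertToPoly base
            for i = 2 to objs.count do
            (
                polyop.attach base objs[i]
                if mod i every == 0 do gc light:true
            )
            base
        )
        ```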



        • #5
          Found this in a search. A couple of pointers from the last time I had to do this: the optimum grouping of objects to collapse is the square root of the raw number of objects being collapsed together (credit to some Dave Stewart case-study testing).

          Shutting undo off is pretty much required, as Max keeps an undo snapshot (memory!) after each individual attach. That's why the Collapse utility blows up after a few hundred objects.

          I did a little helper struct to do this and supply some feedback while collapsing; in my case it'll take down 10k objects in ~1.5 minutes without too much RAM thrashing.

          It's not bulletproof. I went further with error checking and material handling in the current version, but haven't gotten around to separating it from its parent scripts again yet.
          Dave Buchhofer. // Vsaiwrk
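
          The two pointers above (batches of roughly sqrt(n), undo disabled) can be sketched like this. This is not Dave's actual struct, just a minimal illustration of the grouping idea:

          ```maxscript
          -- Hedged sketch: collapse objects in groups of ~sqrt(n) with undo
          -- off, then attach the partial results together in a second pass.
          fn collapseInGroups objs =
          (
              local groupSize = amax 1 ((sqrt objs.count) as integer)
              local partials = #()
              with undo off
              (
                  for g = 1 to objs.count by groupSize do
                  (
                      local base = objs[g]
                      convertToPoly base
                      local last = amin (g + groupSize - 1) objs.count
                      for i = g + 1 to last do polyop.attach base objs[i]
                      append partials base
                  )
                  -- second pass: merge the partial results into one object
                  for i = 2 to partials.count do
                      polyop.attach partials[1] partials[i]
                  gc()
              )
              partials[1]
          )
          ```

          With undo on, each attach would snapshot the growing target object, which is exactly the memory blow-up the post describes.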



          • #6
            Dave,
            Thanks a lot for this script. I've just tested it on an array of 100,000 objects and it worked fine (Max 2009). Memory was stable and there were no crashes (I was using boxes, though). I often work with Revit models, and dealing with tens of thousands of objects was a big chore.
            You could probably improve it a bit (e.g. automatically select objects from the same families and attach them based on material properties, etc.), and this could become a valid Revit-to-Max importer.

            I noticed that it uses only one CPU. Pardon my profound ignorance of how Max works internally, but is it possible to, let's say, divide the whole array into a number of groups, assign each group to one CPU, and finally attach them into one object? That would, I guess, speed things up quite a bit?

            Thanks again for a very useful script.

            Zoran



            • #7
              Yeah, I've got quite a few improvements I've made to it, but I guess a lot of those fall under the 'done at work, shouldn't give them away' clause, lol.

              Glad it was helpful. Hope someday they fix this whole Revit-to-Max insanity.
              Dave Buchhofer. // Vsaiwrk



              • #8
                Originally posted by zoranm View Post
                I noticed that it uses only one CPU. Pardon my profound ignorance of how Max works internally, but is it possible to, let's say, divide the whole array into a number of groups, assign each group to one CPU, and finally attach them into one object? That would, I guess, speed things up quite a bit?
                Zoran
                Not possible with MAXScript, AFAIK.
                It's all single-threaded, on the same thread as the UI.
                It is possible to fire off separate threads with dotNet, but that only works for something like math routines.
                As soon as you manipulate scene nodes, it's all on the same thread again.
                Marc Lorenz
                www.marclorenz.com
                www.facebook.com/marclorenzvisualization
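
                Marc's point can be illustrated with a .NET BackgroundWorker, which MAXScript can drive via dotNet. This is a hedged sketch: the background handler must stick to pure computation, because touching scene nodes from a worker thread is not safe (exact handler-threading behavior also varies by Max version):

                ```maxscript
                -- Hedged sketch: run pure math on a background thread via
                -- dotNet. Any actual attach work must stay on the main
                -- MAXScript/UI thread, as described above.
                bw = dotNetObject "System.ComponentModel.BackgroundWorker"
                fn onDoWork sender args =
                (
                    -- safe: arithmetic only, no scene-node access
                    local s = 0.0
                    for i = 1 to 1000000 do s += sqrt i
                )
                fn onDone sender args = format "background math finished\n"
                dotNet.addEventHandler bw "DoWork" onDoWork
                dotNet.addEventHandler bw "RunWorkerCompleted" onDone
                bw.RunWorkerAsync()
                ```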
