AI Animation


  • #31
    Pretty special!
    Bobby Parker
    www.bobby-parker.com
    e-mail: info@bobby-parker.com
    phone: 2188206812

    My current hardware setup:
    • Ryzen 9 5900x CPU
    • 128gb Vengeance RGB Pro RAM
    • NVIDIA GeForce RTX 4090 X2
    • Windows 11 Pro



    • #32
      Originally posted by rusteberg:
      This was put together within a week. Generated almost an hour of footage while trying to maintain consistent color grading and character reference. Obviously there’s artifacts that people (especially here) will notice, but not nearly as weird (and at times “inspiring”) as some of the stuff it generated. I created a one page script and fed it to text to speech, then chopped it all up and composed:

      Very nice, and a cool script/story too. Can you explain a bit more about the process? Did you generate images and references first, or did you feed it real images and references to generate the first frames? And afterwards image + prompt to video?
      A.

      ---------------------
      www.digitaltwins.be



      • #33
        Excellent little vignette. The script of course is what lifts it; very tight and perfectly impactful.
        Realistically the visuals could have been hand-drawn simplicity, full-CGI Pixar-esque, your own style that we've seen, or filmed for real...it still would have worked imo, because of the script.

        That it's AI is of course the point of the exercise, and in those terms it works well enough, but as you said, the errors are glaring for anyone with an eye for detail. It'll improve...probably already has with a paid version. There's a reason film/TV is buying into it big right now. I recently read that James Cameron now has a big stake in Stable something or other. Watch this space
        https://www.behance.net/bartgelin



        • #34
          I generated thousands of reference images. Those images have prompt metadata in them, which helps direct the first frame of each shot when fed to the video generator. You can feed it regular photos all day long and it will come up with some really wild shit until it starts to better recognize what it is seeing. For instance: "Oh, that's an arm. Arm: this is what I've seen an arm do. Oh, that's a horse: this is what I've seen a horse do."

          Facial features are strictly limited to real human reference. I've had to hack drawn lips by photoshopping human eyes and a nose into the drawn phoneme in order for it to go "OK, it's got eyes and a nose, that's gotta be the mouth" (that was when using the Rudrabha method a while back; I have not tested it recently). If it's something too "abstract", then it has a hard time doing anything with it at all and will consistently morph it into the closest thing it can reference once it locks in on a noise pattern.

          Hope that helps explain the process and the limitations I've come across while working with it.
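For anyone wondering where that prompt metadata lives: image generators commonly stash the prompt in a PNG tEXt chunk (Stable Diffusion tools, for example, often use a keyword like "parameters", though the keyword varies by tool and the poster doesn't name his generator). A minimal stdlib-only sketch of pulling that metadata back out of a PNG file's bytes; the function name and example keyword are illustrative:

```python
import struct

def read_png_text_chunks(data: bytes) -> dict:
    """Return {keyword: text} for every tEXt chunk in a PNG byte string."""
    if data[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG file")
    out, pos = {}, 8
    while pos + 8 <= len(data):
        # Each chunk: 4-byte big-endian length, 4-byte type, payload, 4-byte CRC.
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        payload = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            # tEXt payload is: keyword, NUL separator, latin-1 text.
            key, _, text = payload.partition(b"\x00")
            out[key.decode("latin-1")] = text.decode("latin-1")
        if ctype == b"IEND":
            break
        pos += 12 + length
    return out
```

Opening a generated image in a hex viewer (or with a snippet like this) is a quick way to see exactly what prompt text a given tool embeds.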



          • #35
            Originally posted by glorybound:

            I have heard that for 30 years, yet I am still here.
            This ^
            I'm so tired of these pessimistic predictions, and I don't know why people make them. Is the goal to make people worry about their jobs? For what purpose?
            The secret lies in adaptation. The base knowledge that every user here has about 3D software, like compositing and post-production, will always give us an advantage over casually generated AI slop. Concerning archviz, I think AI will greatly help with materials, vegetation and light calculations.

            glorybound, your AI animation is impressive. I try stuff like that on Pixverse, and I have also used Runway; the results are always fascinating. But they fall apart quickly, and I could never use them for work.
            What you generated here could be used as a quick sequence in a longer animation, for example.



            • #36
              I want to chime in. After the first AI hype, it is now being standardized in almost all organizations around the globe. You're not hearing about it in the same hyped way as before simply because it has gone mainstream. The world is being flooded with startup companies offering AI services, and some of them are actually delivering good results.

              But with all this, I think there are some key takeaways:
              • Not everyone is interested in learning prompts and putting in the effort to output visuals. People have lives and interests.
              • For commissioned work, we all know that changes are a huge thing, both last-minute changes and modifications months or years after the initial delivery. How should that work in a pipeline where you shortcut yourself into an early bliss? Opening a project file and changing a tiny specified detail, then outputting new visuals from several angles with the updated detail, will always be important. How do you do that with AI? I'm sure specialized solutions will come for this as well, but in the end you still need to spend time making it happen, both pipeline-wise and production-wise.
              • Another thing is copyright. I still believe there could be copyright issues, since the models are trained on all kinds of data.
              • AI will not bring value to the table by itself. You need to spend time and effort, and in the end it is a shortcut to a good result that you can't replicate if you need to change the design or produce more angles.
              • I compare it with environmental effects vs cars. Should we stop using cars, or should we put effort into making cars meet new environmental standards? I mean, 3D programs and technology are getting faster. One of the most important things for me, and what actually brings value to the company, is project files. You can reuse assets, you can change specific things, etc. With render times minimized, I don't really see why we should use generative AI at all, since it does take time. Enhancing grass and people is a thing, though.
              • However, I am not so sure about the future of 3ds Max, as it still lacks essential functionality. One example is GIS. In a future not so far away, drone footage will be obsolete, and digital terrain data will be the only thing needed. Twinmotion has this support, and unfortunately it is a very important thing, so I can clearly see Epic winning this game. The future is data-driven, also with GIS data in mind. We need to show/stream the visible map out to the entire horizon of our imagery. At some point we will be forced to use Twinmotion or something similar for this, and that is a process I fear a bit, since we will need to maintain a higher polycount in our models to compete on that exact point. 3ds Max is becoming too niche and is moving away from archviz.
              • People in every type of job are adapting to AI. Students will soon no longer know a world without AI support. So yes, it's here, and it will stay here with growing force. It's a rat race of businesses empowering their incompetent people with advantages. But bear in mind what happens when this is no longer an edge for the firm. What will give you an edge then?
              So in reality, I'm not sure what kind of software I will use for my work in the future. I think 3ds Max is falling behind. But ever-faster render speeds are coming with ever-better GPUs, and so the glory of generative AI is drowning a bit in that.
              Other people can perhaps use generative AI to replace their creative skills, or because they are lazy in general. And lazy people do not hold on to customers for long.

              Still, I am a bit concerned about the massive adoption of AI-supported workflows in the big CAD packages, which enables architects to finally do our job again (after we took it from them in the early-to-mid 2000s). It is not really hard to create eye-candy imagery without any in-depth value, which is enough for many customers, but it will not give people any edge.
              Last edited by jon_berntsen; 28-01-2025, 02:37 AM.



              • #37
                I watched a tech podcast where they talked about AI in their world. Several of the guests were tech writers, and they all said they spend more time fixing AI text than writing it. I see the same thing in my work. I use AI on almost all of my work, and I delete 90% of it; what I keep, I set to a low opacity. Right now, AI seems to be entry-level at best. For quick ideas and fictitious artwork it might be good. However, considering the cost, it is a significant failure. OpenAI is losing millions at their $200-per-month subscription, and I don't know about you, but I am not willing to pay anything close to that for such limited benefits. The money will stop flowing once the investors see that they will not get a return, and then all the AI costs will skyrocket.


                • #38
                  Well, at some things it is actually quite good, like writing small MaxScripts to make my workflow faster. Sometimes it can also figure out technical issues by laying out everything that could be going wrong: a problem in my network that I'd had for about a year, which no amount of googling or asking on Reddit solved, was answered in less than 3 minutes.

                  And for archviz, depending on which AI you use (I tried a lot of them), I couldn't find anything better than Magnific's creative upscaler for making my images 'blend' a bit better and look a little more realistic. Granted, it fucks up some things, but for greenery, concrete, tarmac (sometimes), curbs, or shitty Google Street View / satellite imagery, it makes everything blend more nicely and adds a nice bit of dirt. But you need to find a good workflow to make it work.

                  I'm about to try the new DeepSeek R1, and if it's as good as they say, then I won't need ChatGPT for the coming year for sure. I tried running it locally with Ollama in AnythingLLM, and it seems to run fine and quite "fast", actually, on a 3080 Ti. It's funny because you can actually see it think, and the thought process really resembles that of a person.
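For anyone who wants to script against a model running locally like this, Ollama serves a small HTTP API on its default port 11434. A minimal sketch using only the standard library (the model tag "deepseek-r1" and the helper names are assumptions; it presumes the model was already pulled with `ollama pull`):

```python
import json
import urllib.request

OLLAMA_HOST = "http://localhost:11434"  # Ollama's default local port

def build_generate_payload(prompt: str, model: str = "deepseek-r1") -> dict:
    # stream=False asks the server for one complete JSON reply
    # instead of a stream of partial tokens.
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(prompt: str, model: str = "deepseek-r1") -> str:
    """Send a prompt to a local Ollama server and return the reply text."""
    body = json.dumps(build_generate_payload(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_HOST + "/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

This is the same endpoint front-ends like AnythingLLM talk to, which is why a model pulled once is available to all of them.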


                  • #39
                    I agree that Magnific.ai is good at things like that, which is all I use it for. Using masks in Stable Diffusion gives me control over specific things like plant species. I wanted to try ComfyUI, but it seems too complicated to install, so it's a pass. As far as coding goes, I haven't tried. But again, is something like Magnific.ai going to be worth $200+ a month? If it can't be cracked and pirated, it might be worth it.



                    • #40
                      Originally posted by glorybound:
                      I agree that Magnific.ai is good at things like that, which is all I use it for. Using masks in Stable Diffusion gives me control over specific things like plant species. I wanted to try ComfyUI, but it seems too complicated to install, so it's a pass. As far as coding goes, I haven't tried. But again, is something like Magnific.ai going to be worth $200+ a month? If it can't be cracked and pirated, it might be worth it.
                      I got a good deal on Black Friday, so I'm set for a year, but in general I find the pricing ridiculous, since I don't use all the credits every month.


                      • #41
                        I think credits are taking over from subscriptions. I watched a YouTube video on a Photoshop AI killer, and everything was credit-based. You can work on an image using their AI, but exporting it from their software costs credits. Imagine Chaos letting you render as much as you want but making you pay for every image you save - insane!


                        • #42
                          Originally posted by glorybound:
                          I think credits are taking over from subscriptions. I watched a YouTube video on a Photoshop AI killer, and everything was credit-based. You can work on an image using their AI, but exporting it from their software costs credits. Imagine Chaos letting you render as much as you want but making you pay for every image you save - insane!
                          Don't give them any ideas...