AI Animation

  • #31
    Pretty special!
    Bobby Parker
    www.bobby-parker.com
    e-mail: info@bobby-parker.com
    phone: 2188206812

    My current hardware setup:
    • Ryzen 9 5900X CPU
    • 128 GB Vengeance RGB Pro RAM
    • NVIDIA GeForce RTX 4090 ×2
    • Windows 11 Pro

  • #32
    Originally posted by rusteberg:
    This was put together within a week. Generated almost an hour of footage while trying to maintain consistent color grading and character reference. Obviously there are artifacts that people (especially here) will notice, but not nearly as weird (and at times "inspiring") as some of the stuff it generated. I created a one-page script and fed it to text-to-speech, then chopped it all up and composed.

    Very nice, also a cool script/story. Can you explain a bit more about the process? Did you first generate images and references, or did you feed it real images and references to generate the first frames? And afterwards image + prompt to video?
    A.

    ---------------------
    www.digitaltwins.be

  • #33
    Excellent little vignette. The script, of course, is what lifts it; very tight and perfectly impactful.
    Realistically, the visuals could have been hand-drawn simplicity, full-CGI Pixar-esque, your own style that we've seen, or filmed for real... it still would have worked, imo, because of the script.

    That it's AI is of course the point of the exercise, and in those terms it works well enough, but as you said, the errors are glaring for anyone with an eye for detail. It'll improve... probably already has with a paid version. There's a reason film/TV is buying into it big right now. I recently read that James Cameron now has a big stake in Stable-something-or-other. Watch this space.
    https://www.behance.net/bartgelin

  • #34
    I generated thousands of reference images. Those images have prompt metadata in them, which helps direct the first frame of each shot when fed to the video generator. You can feed it regular photos all day long and it will come up with some really wild shit until it starts to better recognize what it is seeing. For instance: "Oh, that's an arm. Arm: this is what I've seen an arm do. Oh, that's a horse: this is what I've seen a horse do."

    Facial features are strictly limited to real human reference. I've had to hack drawn lips by photoshopping human eyes and a nose into the drawn phoneme in order for it to go "OK, it's got eyes and a nose, that's gotta be the mouth" (that was when using the Rudrabha method a while back; I have not tested it recently). If it's something too "abstract", then it has a hard time doing anything with it at all and will consistently morph into the closest thing it can reference once it locks in on a noise pattern. Hope that helps explain the process and the limitations I've come across while working with it.
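
    In rough Python, that first step might look like the sketch below. It assumes the references are PNGs whose prompts live in the file's text chunks (the way Automatic1111/ComfyUI-style tools save them); generate_shot() is a hypothetical stand-in, since the post doesn't name the actual video generator.

    [CODE]
    # Read the prompt embedded in each PNG reference image, then hand
    # image + prompt to the image-to-video step (placeholder below).
    from pathlib import Path

    from PIL import Image  # pip install Pillow

    def read_embedded_prompt(png_path: Path) -> str:
        """Try the text-chunk keys common AI image tools write."""
        with Image.open(png_path) as img:
            meta = getattr(img, "text", {}) or {}
        return meta.get("parameters") or meta.get("prompt", "")

    def generate_shot(first_frame: Path, prompt: str) -> None:
        # Hypothetical stand-in for the actual image + prompt -> video call.
        print(f"shot seeded by {first_frame.name}: {prompt[:60]}")

    for ref in sorted(Path("reference_images").glob("*.png")):
        prompt = read_embedded_prompt(ref)
        if prompt:
            generate_shot(first_frame=ref, prompt=prompt)
    [/CODE]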
