More failures after finally getting a red stripe. The first 10 seconds of this is more or less what I want, just without the red piping. The one time I actually got red stripes (at the -:15 mark) is, based on close inspection after breaking out the individual frames, I believe the pair of stripes I requested, just touching along the edge.
The last 10 secs of this compilation are a pair of prompts attempting to spread the two red stripes apart by putting a white stripe between them. That's, in fact, what my original image & a lot of the earlier videos in this sequence showed, with ANOTHER stripe of white below that, slightly wider.
Beginning to wonder if it is even *possible* to use text prompts to create a match to what I was earlier getting from the original image, and then from the final frames of each previous clip. I've not yet found a combination of Stable Diffusion models and prompts that reliably gets the stripes to appear even on those single frames. I need to try loading one of those frames into Pixlr to see if I can get what I want that way.
I think the next time I get enough energy to do a video clip, I'm gonna tell Tramma to hold that pose while I slowly zoom out to a long shot. I'm fairly confident that I can *eventually* browbeat 1 of the 4 or 5 AI image editors I can access into duplicating that dratted piping. That will give me a starting image, & I can either just accept the jump-cut, or MAYBE the idiot AI will deign to listen to me for a change & begin a clip with a wipe or dissolve or something.
DDG desperately needs a real AI video editor, not that I've found 1 online that's really worth much yet. Runway seems to be headed in the right direction, & there are some high-dollar apps out there that could do what I'd need to do to fix the first 10 seconds of this compilation. The Replacer-video extension on my local Automatic1111 should in theory be able to do it, if I could ever figure out the masking & prompting, or I could just bite the bullet & hand-mask 313 frames. I've also seen a workflow for ComfyUI that appears to be capable of doing what I want, but that would require me learning how to use the interface as well as that workflow.
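As an aside on the hand-masking problem: since the piping I'm chasing is a specific color, a rough per-frame mask could in principle be generated automatically by color thresholding rather than painted by hand. The sketch below is NOT any tool's actual pipeline, just a minimal illustration of the idea; the `red_mask` function, its threshold parameters, and the nested-list frame format are all my own hypothetical stand-ins (a real run would load each frame PNG with Pillow or OpenCV and save the mask back out).

```python
# Hypothetical sketch: auto-generate a rough binary mask for strongly
# red pixels ("red piping") in one frame, instead of hand-masking it.
# A frame here is a list of rows, each row a list of (R, G, B) tuples.

def red_mask(frame, r_min=150, dominance=60):
    """Return a 0/255 mask marking pixels whose red channel is strong
    (>= r_min) and clearly dominates green and blue (>= dominance)."""
    mask = []
    for row in frame:
        mask_row = []
        for (r, g, b) in row:
            is_red = r >= r_min and (r - max(g, b)) >= dominance
            mask_row.append(255 if is_red else 0)
        mask.append(mask_row)
    return mask

# Tiny 2x3 test frame: two saturated red pixels among white/grey/black.
frame = [
    [(255, 255, 255), (200, 30, 20), (0, 0, 0)],
    [(120, 120, 120), (255, 40, 60), (90, 10, 5)],
]
print(red_mask(frame))  # middle column flagged as red in both rows
```

Batch-applying something like this over all 313 frames would at least yield draft masks to clean up, which is far less tedious than painting each one from scratch.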
Which, of course, would leave me with a clip that can't be brought back in triumph to DDG anyway, since we can't import videos to use for guide or prompt purposes. I document these thoughts in the descriptions on the odd chance that a) anyone actually reading them might learn something from my struggles, or b) the developers might by some miracle read some of this & realize just how badly they need to add a scripting system and/or (preferably and) more video-editing capability than simply pasting clips together - and DOCUMENTATION, software needs DOCS.
The point here is that AI promised to do away with a lot of the drudgery of creative work with images. The reality is that it has simply shifted the drudgery from untold hours manipulating masks, layers, & pixels to MORE hours chasing prompts with essentially zero guidance beyond our own pages of notes on failures.