SDXL, SD-CN, Gen-2, AnimateDiff, SadTalker, ModelScope text-to-video, Warp Fusion, Deforum, Photoshop Generative Fill, Infinite Zoom A1111, RunwayML automatic background removal, FILM interpolation, and Topaz upscaling.
Sounds like I need to branch out from Midjourney and try more things!
Yeah I never liked Midjourney much tbh.
It’s high fidelity, but it pushes everything towards the same aesthetic, which I find very bland.
My cousin recently did a project training various artists’ work in Stable Diffusion, and he did some of my Bryce art. There’s something kind of charming about the fidelity of it.
Asked Midjourney for FX pedals; now I want to start a boutique pedal company.
What FX would you put into these?
Those first ones.
I think AI does synthesizers etc. better than it does anything else. Some of them are incredible.
Nice. I could glitch those goldfish up with my Hypno.
Runway Gen-2 Motion Brush
https://x.com/runwayml/status/1723033256067489937?s=52&t=7ZGdKqhYE-nOmf0SvyPTWw
Definitely these ones… really like the look of them.
I would probably go with a 5-step sequencer of some sort on each, some sort of p-lockable parameters for the FX, maybe reverb, delay, distortion, kinda the usual suspects, or some sort of granular FX? Or… hang on a second, I think I’m starting to describe a trimmed-down version of an Elektron machine in pedal form.
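Just to make the p-lock idea concrete, here’s a rough Python sketch of a 5-step sequencer where each step can override the base FX parameters. The parameter names and ranges are made up for illustration, not taken from any real pedal or Elektron firmware.

```python
from dataclasses import dataclass, field

@dataclass
class FXParams:
    """Base FX parameters for the pedal (hypothetical names/ranges)."""
    reverb_mix: float = 0.2      # 0..1 wet/dry
    delay_time_ms: float = 350.0  # delay time in milliseconds
    drive: float = 0.0            # 0..1 distortion amount

@dataclass
class Step:
    active: bool = True
    # Parameter locks: per-step overrides of the base params,
    # e.g. {"drive": 0.8} cranks distortion on this step only.
    locks: dict = field(default_factory=dict)

@dataclass
class Sequencer:
    base: FXParams = field(default_factory=FXParams)
    steps: list = field(default_factory=lambda: [Step() for _ in range(5)])

    def params_for(self, step_index: int) -> FXParams:
        """Resolve the effective FX params for a step: the base values
        plus any parameter locks set on that step."""
        step = self.steps[step_index % len(self.steps)]
        resolved = FXParams(**vars(self.base))
        for name, value in step.locks.items():
            setattr(resolved, name, value)
        return resolved

# Example: lock heavy drive on step 3 and a long delay on step 5.
seq = Sequencer()
seq.steps[2].locks["drive"] = 0.8
seq.steps[4].locks["delay_time_ms"] = 700.0
print(vars(seq.params_for(2)))  # drive jumps to 0.8 on this step only
```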
I’ve been revisiting some old work made with Blender + CLIP Guided Diffusion, upscaling it and adding more detail using Stable Diffusion and ControlNets.
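For anyone curious, that kind of detail pass can be sketched with the diffusers library using img2img plus the tile ControlNet; the model IDs and settings below are just illustrative guesses at a starting point, not my exact setup.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
from diffusers.utils import load_image

# The tile ControlNet is commonly used to add detail to SD 1.5 renders.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1e_sd15_tile", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The old Blender / CLIP-guided render, resized to the target resolution first.
source = load_image("old_render.png").resize((1024, 1024))

result = pipe(
    prompt="highly detailed render, sharp focus",
    image=source,          # img2img init image
    control_image=source,  # tile ControlNet conditions on the same image
    strength=0.4,          # low strength keeps the original composition
    num_inference_steps=30,
).images[0]
result.save("detailed_render.png")
```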
I think one of the dead giveaways of (current) AI art that I encounter is the mismatch between the off-center edginess of the subject or concept and the totally polished lighting, photorealistic execution, and painstaking technical detail.
I like the “haunted b/w stuff” that the.vape.noise, aka Petr Valek, is squeezing out of AI.