
Fallenmystic's AI-assisted works

Nice results! When you write that you're working on improving the workflow, I'd also be interested to read exactly what you wanted to achieve. Not the technical side, but what was your goal in improving things this time? I also like experimenting with SD, and I'm interested in reading about the problems other creators are facing. Thank you!
 
It was mostly technical stuff. I hoped the other AI discussion thread would become a place for AI creators to share their tips and even collaborate on models, but that hasn't happened yet. So I'm glad you're interested in what other members are experimenting with in SD. :)

My main goal in this experiment was to find a way to preserve image consistency in pictures with a complex composition and many different subjects.

As you know, SDXL cannot generate such scenes in a single pass, even if you use ControlNets. Because of that, I have to rely on inpainting to add missing details and fix errors. The problem is that every time I change a region that way, the image loses some consistency.
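To give a rough idea of what I mean by an inpainting pass, here's a minimal sketch using the Hugging Face diffusers library. The checkpoint, filenames, prompt, and strength are only placeholders, not my actual settings:

```python
# Minimal sketch of a targeted SDXL inpainting pass with diffusers.
# Checkpoint, filenames, prompt, and strength are placeholders.
import torch
from diffusers import StableDiffusionXLInpaintPipeline
from diffusers.utils import load_image

pipe = StableDiffusionXLInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

base = load_image("quarry_base.png")   # the single-pass base image
mask = load_image("region_mask.png")   # white = area to regenerate

fixed = pipe(
    prompt="detailed hands, rope around wrists",
    image=base,
    mask_image=mask,
    strength=0.85,              # how strongly the masked area is re-noised
    num_inference_steps=30,
).images[0]
fixed.save("quarry_inpainted.png")
```

Each pass like this only touches the masked area, and that is exactly where the mismatch with the rest of the image creeps in.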

Inpainted regions usually differ slightly from the rest of the image in tint, sharpness, lighting direction, and so on. When such issues accumulate as you repeat the process, the image starts to look obviously fake, not too different from the feeling you get from most photo manip works.

There are two approaches I have found to mitigate the issue: 1) do as much as possible in a single generation pass to minimise the need for inpainting, and 2) run the output through the sampler again (i.e. img2img or an upscaler) to restore consistency.
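As a rough illustration of 2), a low-strength img2img pass over the finished image looks something like this with diffusers (again, the model name, prompt, and strength are only examples, not my exact workflow):

```python
# Sketch of approach 2): re-run the whole image through the sampler at low
# strength so inpainted regions blend back in with the rest of the picture.
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
).to("cuda")

image = load_image("quarry_inpainted.png")   # image after all inpainting steps

result = pipe(
    prompt="stone quarry, overcast daylight, consistent lighting",
    image=image,
    strength=0.25,   # low strength keeps the composition, unifies tint and lighting
).images[0]
result.save("quarry_final.png")
```

The lower the strength, the more of the original is kept; push it too high and the sampler starts redrawing the details you just fixed.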

Last weekend, I tried various ideas for both of those approaches and managed to find a workflow that greatly improves on what I could achieve before in terms of 1).

This was the base image I generated in a single pass using the new method:

quarry - base.jpg

As you can see, the overall composition is already there, so I only needed to inpaint sparingly to preserve the initial image consistency as much as possible.

I'm willing to share what I found with other CF members working with AI. If anyone is interested in technical details, please ask me in the other AI discussion thread.
 
I made this while testing out a variant of the Pony Diffusion model. I didn't fix all the glaring errors, like the deformed figures in the background, but I thought it was good enough to share:

View attachment 1488293
The bondage looks pretty realistic, in contrast to many other renders. Still, no one being crucified would have that expression on her face. Basically, 8 out of 10.
 
You're right about the expression. It was made as a test to see how far I could go with a Pony model and a simplified workflow, so I didn't bother spending hours fixing and enhancing it as I usually do with the other images I post here.

By the way, the test result was pretty positive overall, and I think it will save me a lot of time when I prepare a base image. :)
 
I would love to see results from your future work.
 