
Discussion about A.I.

The long-awaited Stable Diffusion 3 will be released next week:
I switched to Fooocus last month and have been using it ever since. Looking forward to seeing it adapted for SD3.

Thanks to AI, I've been able to realize several fantasies I've had for years at a level of fidelity that no hobby-level 3D engine is remotely capable of rendering. It's now part of my Daz workflow when creating one-offs.
 
Here's one of my recent pieces. The only thing not enhanced is my antagonist. He's all Iray and was rendered separately. Krea kept replacing his visor with a pair of eyes. Rendering him separately was how I solved that problem. I then merged him with the rest of the image in PSP.

My protag, her hair, and outfit were all set up and rendered in Daz, and enhanced with the rest of the image on Krea.AI. Her expression and makeup were then inpainted with Fooocus.

Investment-Banker.jpg


The rest of my AI gallery is at https://fantasydeath.net/ai-art/
 
I haven't tried it yet, but this is quite a significant improvement to the Krita AI plugin:


I've spent a lot of time developing my workflow to solve the problem of creating realistic SD images with non-trivial compositions, and regional prompting to create a base image is one of the major ingredients of the "secret sauce" I've found. (Well, not really a "secret", because it's not like I tried to hide it; it's more that nobody cared when I wanted to share it :p).

Anyway, if you are interested in generating complex scenes that still look consistent, this might be a great tool to use without the complexity of ComfyUI.
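To illustrate the idea behind regional prompting, here's a minimal toy sketch (not any real plugin's code, and the shapes are made up) of what happens at the latent level: each region gets the noise prediction from its own prompt, and the predictions are blended with spatial masks before each denoising step.

```python
import numpy as np

def composite_noise_predictions(preds, masks):
    """Blend per-region noise predictions with spatial masks.

    preds: list of arrays shaped (C, H, W), one per regional prompt
    masks: list of arrays shaped (H, W), weighting each prompt per pixel
    """
    total = np.zeros_like(preds[0], dtype=float)
    weight = np.zeros(masks[0].shape, dtype=float)
    for eps, m in zip(preds, masks):
        total += eps * m[None, :, :]
        weight += m
    # normalize so overlapping or partially covered pixels stay well-scaled
    return total / np.clip(weight[None, :, :], 1e-8, None)

# toy example: left half of the image from prompt A, right half from prompt B
h, w = 4, 4
eps_a = np.ones((1, h, w))        # stands in for prompt A's noise prediction
eps_b = np.full((1, h, w), 2.0)   # stands in for prompt B's noise prediction
mask_a = np.zeros((h, w))
mask_a[:, : w // 2] = 1.0
mask_b = 1.0 - mask_a
blended = composite_noise_predictions([eps_a, eps_b], [mask_a, mask_b])
```

Real implementations (the Krita plugin, ComfyUI regional nodes) add attention coupling and mask feathering on top, but the compositing principle is the same.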
 
Thanks, looks interesting. Maybe I'll play around with this again, if the whole thing still runs on my minimal setup ;)
 
There's disappointing news about SD3. As I mentioned earlier, Stability released SD3 today, which turned out to be quite different from what they promoted and what the community expected.

From the angry posts currently flooding r/StableDiffusion, the model seems incapable of reliably depicting correct human anatomy, even in SFW images. The heavy NSFW filtering on the training set has likely crippled the model, as it did with SD2, or it could be because it is a dumbed-down version (2B parameters), which they claimed would be "all you need".

Whichever is the case, one thing is certain: we won't be seeing better NSFW images made with SD3 soon. Where things will go from here, however, remains to be seen. I believe there are three possibilities:
  • The community (e.g. Civitai) will finetune SD3 to improve it, as they did with SDXL. However, Stability also changed the license of SD3 to be more restrictive than SDXL's, which may dissuade people from doing so.
  • The community will stick with SDXL, which will remain the best open-source model for a long time. SDXL is a good tool for NSFW artworks but requires complex workflows for anything other than simple images.
  • The community will abandon SD altogether in favour of things like Pixart Sigma, which is an alternate base model that is said to be great but has never taken off due to SD's popularity.
It looks like I'll have to use my SDXL workflow for a while longer.
 
From the angry posts currently flooding r/StableDiffusion, the model seems incapable of reliably depicting correct human anatomy, even in SFW images.
As a foot fetishist who's been hoping SD3 would fix those issues instead of making them worse, I'm going to take this news as my cue to remove AI from my workflow until it matures more. I'm already sick to death of not being able to reliably get eyes pointing where I want them. AI is an incredible tool, but a stubborn bitch of one, too.
 
I had to test it out, and as reported, it's struggling with human anatomy and isn't suitable for nudity (yet), but the hardware requirements seem to be the same as SDXL's.
It looks like it was released at an "alpha" level of development: the cars look almost OK, and it can add text that is almost readable.
2024-06-12 22_54_49-Task Manager.jpg ComfyUI_00006_.jpg ComfyUI_00014_.jpg ComfyUI_00032_.jpg ComfyUI_00034_.jpg
 
I didn't mention Luma Dream Machine in my previous post, but I probably should have since it was bigger news than the release of SD3 for most people.

In short, it's another Sora competitor that generates videos from text or images. What's special about it is that it's open to everyone, unlike Sora or Kling, so you can try it yourself if you want. I made a few test videos and was pretty impressed by the quality.

Of course, it won't allow generating anything NSFW, and there's little way to control its output other than prompting. I don't feel like using it much until such limitations are overcome.

Still, I believe it shows that current AI technology is already good enough to create on-demand NSFW videos, if only someone trained their models on an uncensored dataset.
 
I've been looking more into SD3 and I'm confused. Is it the entire code for SD that's borked in v3, or just the official base model that comes with SD3?
There's little code involved in a Stable Diffusion release. When Stability releases a base model, the community starts fine-tuning it to create uncensored and improved checkpoints and to train LoRAs on it. The community is also responsible for ControlNet support, which usually takes several months to arrive. Applications like ComfyUI or Fooocus also need to be updated.

If a base model is bad, the code part isn't affected, but it takes more effort to mitigate the problem by finetuning it. The problem with SD3 is that it will take more powerful hardware to finetune than SDXL did, and the changed license, or the loss of trust in the parent company, may dissuade the community from making such efforts.
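Part of why community finetuning is feasible at all is LoRA: instead of retraining a full weight matrix, you train a small low-rank update and merge it at load time. A toy numpy sketch (made-up shapes, not real SDXL dimensions) of the merge step:

```python
import numpy as np

# LoRA idea: the base weight W stays frozen; only the small matrices
# A ("down") and B ("up") are trained, and the scaled product B @ A
# is added to W when the LoRA is applied.

rng = np.random.default_rng(0)
d_out, d_in, rank, alpha = 64, 64, 4, 8

W = rng.standard_normal((d_out, d_in))   # frozen base-model weight
A = rng.standard_normal((rank, d_in))    # trained LoRA "down" matrix
B = np.zeros((d_out, rank))              # trained LoRA "up" matrix (zero-init)

# merged weight used at inference time
W_merged = W + (alpha / rank) * (B @ A)

# the trainable parameters are far fewer than the full matrix:
trainable = A.size + B.size   # 512 values vs. 4096 in W
```

With B zero-initialized (the usual convention), the merged weight starts out identical to the base model, and training only nudges it from there; that's why LoRAs are cheap to train and share even when full finetunes of a model like SD3 are out of reach for hobbyist hardware.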
 
Here's something a bit more cheerful to share.

I haven't mentioned it until now, but something called "Pony Diffusion" has been all the rage recently. In short, it's a new model trained on images from a huge anime site. What makes it interesting is that each image was tagged/captioned by hand, so the model is noticeably better at understanding prompts than other models.

Of course, the drawback is that it's focused on anime. That's why I hadn't given it much attention so far, but people have lately begun creating new models by merging it with photorealistic models.

I tested one such variant today and was quite impressed by the result. This, for example, came from the simple prompt "a naked Asian girl walking on all fours in public, wearing a slave collar with a leash":

pony.jpg

It's pretty impressive, considering I didn't use any Lora or inpaint the initial result. Maybe I should use it to create base images for my future renders.
 
This, for example, was the result of a simple prompt "a naked Asian girl walking on all fours in public, wearing a slave collar with a leash"
Once upon a time, a friend of mine expressed the desire to just think and have her paintings created automatically. I, with the arrogance of technical skill, believed (and still do) that the nature of art is a mixture of thought and technique, the product of a kind of battle in the studio (just as sex should be the product of a kind of battle in bed :detenido: ). However, it is possible that AI is taking us into unexplored areas of art. It certainly is with visual storytelling.
 
I've been looking more into SD3 and I'm confused. Is it the entire code for SD that's borked in v3, or just the official base model that comes with SD3?
You have to wait for the large models and finetuned checkpoints. There's a Reddit post saying that, once the large models are trained, they'll be released to the public.
As always, it will take some time for them to mature; same with Blender :jump1:
 
I'm already sick to death of not being able to reliably get eyes pointing where I want them.
ComfyUI + OpenPose + detect face.

1718453035144.png

Then you have a face structure along with the body, and you can edit the detected eye points. Use the same seed to recreate your latest image.
ANW-OP-_00675.jpg

At least this worked for me in ComfyUI. You will need to rerun the prompt, of course. To reuse the same prompt without it being skipped, add a {0|1} tag.
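For anyone wondering why the {0|1} tag helps: ComfyUI caches node outputs and skips re-running a sampler whose inputs haven't changed, and a wildcard group is re-expanded to a random option on every queue, so the resolved prompt text differs between runs and forces re-execution. A toy sketch of that expansion (my own simplified parser, not ComfyUI's actual code):

```python
import random
import re

def expand_wildcards(prompt, rng=random):
    """Replace each {a|b|c} group with one randomly chosen option."""
    pattern = re.compile(r"\{([^{}]*)\}")
    while True:
        m = pattern.search(prompt)
        if m is None:
            return prompt  # no wildcard groups left
        choice = rng.choice(m.group(1).split("|"))
        prompt = prompt[: m.start()] + choice + prompt[m.end():]

# each queued run resolves the group independently, so the prompt text
# (and therefore the node's cached-input hash) can change between runs
expanded = expand_wildcards("portrait, looking left {0|1}", random.Random(0))
```

Since "0" and "1" are meaningless tokens to the model, the appended digit changes the cache key while leaving the image essentially untouched.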
 
After some hesitation, I decided to take a stab at Krea with their free option, so I selected a few of my old renders (I'm not rendering new pictures these days), some already published and some unpublished.
Attached are the results. Note that I only marginally tweaked the prompts that the AI suggests automatically.
I have mixed feelings about the results, and I'd like to hear your feedback. Some of the flaws are really annoying, and for now I'm going to leave them there. I'm pretty sure they can all be corrected with some work, anyway.

the_wait.jpg spy_treatment.jpg forced_intimacy.jpg confess.jpg chainfuck.jpg
 