
Remember Visions of Darkness?

Quick update: I'm in a better state of mind than I was yesterday. I just need to remember there are resources available to manage stress, and use them.

In addition to serving the community as a safe place to post original dark art, I plan on becoming a content creator someday, and I want VoD to partly serve as a channel through which people can reach me for product support. That will be rather hard to do if it doesn't exist.

:p
 
The problem is that you haven't used the one component (i.e. ControlNet) that would have let you do what you claimed was impossible (i.e. generating images the way you want, not by chance).

By the way, ControlNet has long been available for SD 1.5. What I said about waiting for it to arrive referred to using it for SDXL in SD.Next (an A1111 fork).
I expressed myself poorly. Of course I use ControlNet, and I also work with Inpaint Sketch, Img2Img, etc. What I actually meant is that I haven't tried SD.Next yet.

When you claim that it's impossible to do what I've been doing for a year, I can confidently say that you are wrong, and I can point you to material from which you can learn how it's done.
...
Your work is without question impressive, and I think that of all of us here on this forum (at least among those who comment) you understand the most about the subject. I myself have benefited greatly from your instructions.
Nevertheless, I think the technique still places very tight limits on free image design at the moment. Two figures standing next to each other and looking at the viewer are easy enough, but it becomes difficult as soon as limbs cross, specific objects have to appear in the picture, several different actors are involved, or a more complicated interaction (like a flogging) has to be depicted. I think the chain in your image was the hardest part to make. There is the "on a leash" LoRA, but I suspect you had to do the rest with inpainting or even Photoshop. If not, I would be grateful if you explained how you managed it. I mean that without irony.

P.S.: I just realised that @Zardoz is the author of this thread, where you have uploaded several AI works yourself. I'm rather surprised you've come this far without using ControlNet. Try things like ControlNet or training custom LoRAs, and they will help you get the results you want.

As I said, I know ControlNet and how to use LoRAs. It would be great if you tried to recreate one of the scenes I've attached here (once as a 3D render, once as a drawing) with your tools. I would be very interested to see how it can be done.
 

Attachments

  • 3whip_01_OZ.jpg (885 KB)
  • 051_OZ.jpg (639.5 KB)
Nevertheless, I think the technique still places very tight limits on free image design at the moment. Two figures standing next to each other and looking at the viewer are easy enough, but it becomes difficult as soon as limbs cross, specific objects have to appear in the picture, several different actors are involved, or a more complicated interaction (like a flogging) has to be depicted.
It's doable, but admittedly challenging. So yeah, I agree with you.

I never insisted that AI tools - as we have them at the moment - are perfect, or even better than traditional alternatives like Daz3D in every respect. In fact, I mentioned myself in the other thread that constructing complex scenes, especially ones with non-trivial interactions between characters, remains the biggest roadblock.

But admitting such a limitation is something quite different from what you argued in your post above, quote:
With AI, it's completely different. I take a theme, formulate the prompt, play around with the settings, and enjoy what the AI comes up with for me. But I don't feel at all like it's something that's legitimately mine. To be honest, it is something. And most of the time, it's better than anything I could make. But it's not mine.
Above all, it has nothing to do with what I had in mind beforehand. On the contrary, I feel how the image I carry in my head is displaced by the results the AI shows me. Since I realized this, my interest in AI has cooled considerably. I don't see why the AI should have more fun than I do.
And you mentioned this while agreeing with NyghtVision3D's claim, which categorically denies that AI can be a legitimate tool for art because it supposedly lacks "control".

Again, I never claimed Stable Diffusion is perfect or better than Daz3D in every conceivable way. But to claim that AI-generated images can't even be considered art in principle because they involve no human creative process, you'll need better grounds than "it's difficult to make certain types of scenes".

I think the chain in your image was the hardest part to make. There is the "on a leash" LoRA, but I suspect you had to do the rest with inpainting or even Photoshop. If not, I would be grateful if you explained how you managed it. I mean that without irony.
Actually, I consider that particular scene one of the easier ones among those involving complex interactions between multiple characters. Depicting the leash wasn't much of a hassle, because I have my own LoRAs for slave collars and chains. In the end, I chose not to use my collar LoRA only because I felt it would suit the overall theme better if the slave were wearing a dog leash instead.

As you mentioned above, the difficulty arises mostly when characters interact with each other in a complex manner or when some of their limbs are hidden. But in this particular case, the only thing that can be challenging is depicting the hand that grabs the slave girl's hair (which is still perfectly doable with inpainting, by the way). The rest is just using ControlNets to fix the respective poses of the individual characters.
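For readers who haven't inpainted before: the idea is to mask just the problem region (here, the hand) and regenerate only that area. A minimal sketch of building such a mask with Pillow - the image size, region coordinates, and the `make_inpaint_mask` helper are all made up for illustration, not part of any SD tool:

```python
from PIL import Image, ImageDraw

def make_inpaint_mask(size, box):
    """Build a binary inpaint mask: white = regenerate, black = keep.
    Most SD inpainting UIs and pipelines accept a mask in this convention."""
    mask = Image.new("L", size, 0)               # keep everything by default
    ImageDraw.Draw(mask).ellipse(box, fill=255)  # repaint just this region
    return mask

# Hypothetical region around the hand grabbing the hair
mask = make_inpaint_mask((768, 512), (300, 120, 420, 240))
```

You'd then feed the original render plus this mask to your inpainting model; keeping the masked region small and the denoising strength moderate helps the regenerated hand stay consistent with the surrounding pixels.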

As an example of a composition I found truly challenging, here's one of my WIP renders that I haven't posted here, since I'm not satisfied with the result:

01446-1672363931.jpg

As you can see, it involves a lot of hidden body parts and uncommon interactions (e.g. inspecting someone's teeth with a finger). I'm still trying to figure out the best approach to overcome this limitation, especially now that I'm using a much more powerful tool than A1111 / SD.Next (i.e. ComfyUI).

It would be great if you tried to recreate one of the scenes I've attached here (once as a 3D render, once as a drawing) with your tools. I would be very interested to see how it can be done.
I'm fairly confident I could pull off the first scene, at least, if I tried. However, I'm hesitant to spend several hours just to prove a point, since I'm already hard-pressed to find enough time to make renders of what I genuinely like. I hope you understand.

But I have done a few AI renders involving a whipping in the past, like this one, which I decided was not good enough to post on CF without a remaster:

01734-460569543-whipping.jpg

I was about to test my new base character asset made with Daz3D+Blender anyway, so I'll try to make it a whipping scene involving multiple characters when I have time to work on it.

Nice images, by the way :)
 
There is the "on a leash" LoRA, but I suspect you had to do the rest with inpainting or even Photoshop. If not, I would be grateful if you explained how you managed it. I mean that without irony.
I noticed that I didn't mention any concrete technique you could use in such a situation in my previous post (which I can no longer edit). So I'll briefly describe one of the tricks I use that might be useful to you:

I believe you already know how to use ControlNet models, but remember that you can also easily edit the preprocessor's output by hand.

It's especially handy with preprocessors like lineart, which produce an easily editable image. You can even set up an iterative workflow around the process: draw a very rough sketch -> use it as the ControlNet input image -> generate a batch of images and pick one you like -> run the preprocessor over the result -> enhance the result by hand if necessary -> repeat until you get what you want.
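Structurally, that loop can be sketched as follows. This is a toy skeleton, not real SD code: `generate`, `preprocess`, `enhance`, and `accept` are stand-ins you'd wire up to your pipeline, your ControlNet preprocessor, your manual touch-ups, and your own judgment.

```python
def iterative_refine(control, generate, preprocess, enhance, accept, max_rounds=5):
    """Rough-sketch -> generate -> re-preprocess loop.

    control    : initial ControlNet input (e.g. your rough sketch)
    generate   : runs SD conditioned on the current control image
    preprocess : ControlNet preprocessor (e.g. lineart) applied to a render
    enhance    : optional manual touch-up of the control image
    accept     : the "do I like this result?" check
    """
    image = None
    for _ in range(max_rounds):
        image = generate(control)             # e.g. pick your favorite from a batch
        if accept(image):
            break
        control = enhance(preprocess(image))  # re-derive and touch up the map
    return image
```

Each round tightens the control image around whatever the last generation got right, which is the whole point of the workflow.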

You can also edit the render itself rather than the ControlNet image, using a similar process: for example, drawing a crude leash over the rendered image and converting it back into a ControlNet image. There are many variants of such an iterative process, and I think developing these workflows may be one of the most important skills an AI artist - if you don't object to the term - should have.
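As a concrete illustration of that "convert the edited render back into a control image" round trip, here's a crude sketch using Pillow's `FIND_EDGES` filter as a lightweight stand-in for a real lineart preprocessor (the leash coordinates are invented for the demo):

```python
from PIL import Image, ImageDraw, ImageFilter, ImageOps

def to_control_map(render):
    """Approximate a lineart-style ControlNet input from a render:
    dark lines on a white background. A real lineart preprocessor
    would do this far better; FIND_EDGES is just a stand-in."""
    gray = render.convert("L")
    edges = gray.filter(ImageFilter.FIND_EDGES)
    return ImageOps.invert(edges)  # lineart inputs are dark-on-white

# Draw a crude "leash" over a blank stand-in render, then re-derive the map.
render = Image.new("RGB", (256, 256), "gray")
draw = ImageDraw.Draw(render)
draw.line([(40, 60), (200, 180)], fill="black", width=6)  # the crude leash
control_map = to_control_map(render)
```

A real lineart or canny preprocessor produces much cleaner lines, but the loop is the same: edit the render, re-derive the control image, generate again.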

But such a discussion is better had in a more appropriate thread, so I suggest continuing there if you have more to say on the subject.
 
I haven't forgotten what happened back then. I lost a lot of my work and pictures, and some of it I haven't restored to this day. After that I stopped trusting you, and I still don't.
Sorry for not addressing this sooner. I haven't forgotten what you wrote, and I'm revisiting the thread to publicly apologize to you and to anyone else who lost work when I closed VoD.com in 2020.

I understand and respect your distrust in me. I royally fucked up three years ago, and nearly screwed the pooch again last month. Consequently, neither you, nor anyone else who shares your opinion of me, has any reason to believe I won't decide to close VoD.net someday in the not-too-distant future. I accept that.

No one knows what the future may hold, but I am learning how to manage stress while helping my wife try to resolve her medical issues. To that end, I am committed to doing my level best to make VoD.net successful in its own right for as long as life and circumstances will allow.
 