
Discussion about A.I.

News item currently causing a stir here in Oz
View attachment 1442149
https://www.theguardian.com/culture...d-symphony-orchestra-ai-facebook-ad-criticism

Looks like a bog-standard dodgy first rendition by an AI, but using it in an ad for an orchestra is, um, brave. He has a skirt, she has a random box on her lap, banana fingers, etc.
And an interesting way to play the violin :D


P.S.: Speaking of AI and music, there's another area of art in which AI has helped me greatly, by the way. I won't delve into the details since this isn't a community for guitarists. To make a long story short, I'd spent a long, long time searching for a way to reproduce at home the great guitar tones I hear on classic rock records, and only a couple of days ago I finally found it, thanks to an AI amp modeller.

The best part is that all the elements involved are completely open-source, and I don't think any non-AI commercial alternative comes close to what I'm now getting out of my current setup. We've surely entered an interesting era.
 
I feel most of it is because AI still involves a lot of unnecessary hassle to make a good image come out exactly as intended, and because many more people use it as an image slot machine than as a serious tool for creative expression. Several factors can certainly lead to the feeling that the results all look the same - for example, the faces generated from a single prompt will usually look very similar, yet few people take the trouble to work on each face and give it individuality. AI models also tend to be biased towards idealised women, because their training sets usually contain more images of good-looking women than otherwise.

Combined with the tendency to introduce anatomical deformities and other similar errors, especially if you don't take the time to upscale and fix them by inpainting, this gives the results an uncanny or fake impression.

But I feel the other parts of what you said could be due to bias. Art is a subjective matter, of course, but if the subject is limited to (photo)realism, we can objectively compare different artistic expression methods. Contrary to what was mentioned, photorealism is one area in which AI is definitely superior to traditional methods like 3D art.

PBR, or Physically Based Rendering, is the current standard way for a 3D renderer to depict materials. As the name implies, it tries to mimic how light interacts with a surface according to the surface's physical traits. When it became popular, it revolutionised the entire 3D field by allowing much more realistic renders that had previously been impossible. That's also why Daz3D renders from its early days look much more cartoonish in comparison: it didn't have a PBR workflow back then.

The problem, however, is that it's impossible to perfectly simulate every little physical interaction between light and a surface like human skin that way. Even if you rendered on a supercomputer, it wouldn't look perfectly realistic, because the few textures - typically three - we use to describe the different aspects of a physical surface are utterly insufficient to capture every little detail of it.
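To make the "typically 3" point concrete: those textures are usually an albedo (base colour) map, a roughness map, and a normal map. Below is a toy Python sketch of how a shader combines those per-pixel samples; it's my own simplification (Lambert diffuse plus an ad-hoc specular term), not any particular renderer's actual BRDF:

```python
import math

def shade_pbr(albedo, roughness, normal, light_dir, view_dir,
              light_color=(1.0, 1.0, 1.0)):
    """Toy per-pixel PBR-style shading: Lambert diffuse plus a
    Blinn-Phong-like specular lobe whose sharpness is driven by the
    roughness sample. Real renderers use a full microfacet BRDF
    (e.g. GGX), but they consume the same kinds of texture inputs."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def normalize(v):
        length = math.sqrt(dot(v, v))
        return tuple(x / length for x in v)

    n = normalize(normal)
    l = normalize(light_dir)
    v = normalize(view_dir)
    h = normalize(tuple(a + b for a, b in zip(l, v)))  # half-vector

    n_dot_l = max(dot(n, l), 0.0)
    # Rough surfaces get a low exponent, i.e. a broad, dim highlight.
    shininess = 2.0 / max(roughness ** 2, 1e-4)
    spec = max(dot(n, h), 0.0) ** shininess

    # Unclamped (HDR) result; a real pipeline would tonemap afterwards.
    return tuple((a * n_dot_l + spec) * c
                 for a, c in zip(albedo, light_color))

# Same skin-toned albedo, light 45 degrees off the viewing axis:
smooth = shade_pbr((0.8, 0.6, 0.5), 0.1, (0, 0, 1), (1, 0, 1), (0, 0, 1))
rough = shade_pbr((0.8, 0.6, 0.5), 0.9, (0, 0, 1), (1, 0, 1), (0, 0, 1))
# Away from the mirror direction, the rough surface's broad highlight
# contributes more than the smooth surface's tight one.
assert rough[0] > smooth[0]
```

The point of the sketch is how little information reaches the shader: three samples per pixel, and everything else (pores, peach fuzz, subsurface scattering) has to be faked or omitted.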

Of course, it is possible to achieve photorealism with 3D art, and that's why we have all those films with perfect CGI effects. But to approach that level of realism with the PBR approach, you need top-level industry talent as well as an enormous amount of time and effort to photo-scan and/or sculpt the required textures for each character you create, which is well beyond the reach of any Daz3D artist.

In comparison, AI doesn't try to mimic physical traits. Instead, it simply predicts the most likely pixel values given a condition. So, if it was trained on photos, it will produce photographic results.

As such, it’s just a fact that AI is far better than 3D—not to mention Daz3D, which doesn't have a top-quality renderer—at producing photorealistic images. One can easily tell a Daz3D render from a photograph, but it's not always easy for an AI render.

Compare these two images: the left is a promotional image from Daz3D's website, which I'm sure was made by someone proficient at using the tool, and the right is an AI render I just made with a single prompt without any editing. Which do you think feels more "fake" with "plasticky" skin?

View attachment 1441829 View attachment 1441932

I think you know the answer.

P.S.: Regarding photorealism in 3D art, it's not only materials and lighting that count. Sometimes, small details - like the slightly depressed skin under the underwear on her back, or the tiny body hairs on her arm that you can see in the right image - give you the impression that it's a real human being.

Details like those are extremely difficult to depict with the traditional 3D modelling method and nearly impossible if you just use ready-made assets and a simplified tool, as is the case with most Daz3D creators.
For me, the one on the right looks more fake - in fact, it absolutely screams AI to me. The highlights are too bright, though the hair looks much more realistic, and the eyes have just enough sparkle to lift the image above the usual 3D renders (IMO, it's the eyes and hair that often look bad in Daz images).

The one on the left could be accused of slightly too-muted highlights, but overall I like it better, as the excessive highlight bloom on the AI one takes away a lot of the subtlety of the skin texture. I also feel that the strong backlighting on the right-hand image somewhat detracts from it overall. I get that it lends a bit of extra Z-axis dimensionality, but I find it a bit distracting.

Overall I love the hair and eyes of the AI image but otherwise I prefer the Daz one

I guess it all comes down to personal taste in the end, and neither of them actually look real... :)
 

It's perfectly fine if you prefer Daz3D renders to AI-generated images. As you justly noted, it's a matter of personal taste, so there's no right or wrong answer to it.

However, if you think the Daz3D version in those images looks more "real"... well, let's just say I can bet some money that it'd be quite a minority opinion. :p

By the way, the problem you found in the lighting of the AI image is a matter of choice and preference - I could have easily made it more subdued by lowering the CFG a bit.
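For anyone unfamiliar, the CFG (classifier-free guidance) scale mentioned above controls how far the sampler extrapolates from the unconditional prediction toward the prompt-conditioned one at every denoising step. A minimal numpy sketch of the core formula (simplified; the example values are made up, and real samplers apply this to predicted noise each step):

```python
import numpy as np

def apply_cfg(uncond_pred, cond_pred, cfg_scale):
    """Classifier-free guidance: extrapolate from the unconditional
    prediction toward the prompt-conditioned one. cfg_scale = 1.0
    reproduces the conditioned prediction; higher values push the
    image harder toward the prompt (often harsher lighting and
    contrast), lower values give softer, less forced results."""
    return uncond_pred + cfg_scale * (cond_pred - uncond_pred)

uncond = np.array([0.2, 0.4])   # made-up noise predictions
cond = np.array([0.5, 0.1])

assert np.allclose(apply_cfg(uncond, cond, 1.0), cond)
assert np.allclose(apply_cfg(uncond, cond, 0.0), uncond)
```

That overshoot at high scales is one reason very bright, blown-out highlights are a common tell of AI images generated with aggressive CFG settings.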

On the other hand, the things that give away the Daz3D render are not of that nature. Look at how it rendered the skin, for example, and notice how little variation in specularity it shows compared to the AI image (observe how the skin is highlighted on each girl's upper arm if you're unsure where to look).

In general, Daz3D sells models with pretty high-quality textures, including a specular map. However, it usually takes textures and a shader setup comparable to those of the Digital Emily project, for example, to match the quality of what the AI-generated image shows.

And, as you can see from that project's published materials, it's not something you can easily achieve with a tool like Daz3D using the typical textures included in the character bundles you'd buy from the Daz Store. (To put it into perspective, the texture pack for the project comes in two archive files of 800 MB and 500 MB. And that's just for the face.)

Again, it's perfectly fine and a respectable opinion if you prefer how Daz3D renders a human character. You can even say that the Daz3D render looks better since what counts as aesthetically pleasing can be subjective.

But if you argue that it's more "realistic" (as in "photorealism") than what AI can generate, I have to suspect some strong bias or lack of technical knowledge might be in play.
 
However, if you think the Daz3D version in those images looks more "real"... well, let's just say I can bet some money that it'd be quite a minority opinion. :p
Indeed so, but that's not what I said - I just said that in this particular example, I prefer the Daz version (adding at the end of my post that none of them look real)
But if you argue that it's more "realistic" (as in "photorealism") than what AI can generate, I have to suspect some strong bias or lack of technical knowledge might be in play.
But I didn't argue that point of view - Like I said, neither of the pictures look real - it's just that I prefer the look of the Daz one, though I prefer the hair and eyes on the AI one.
There's no right or wrong here - there's room for both techniques and combining the two is a viable method
 
You said "more fake", which I had to understand as "less realistic" ;)

But I don't intend to argue on this point since it looks like that wasn't what you actually meant. If we are talking about personal taste, there's absolutely nothing I object to. And I also agree that combining both methods can yield great results.
 
Details like those are extremely difficult to depict with the traditional 3D modelling method and nearly impossible if you just use ready-made assets and a simplified tool, as is the case with most Daz3D creators.
Bone detail in AI is exceptionally good. I mean protruding ribs, contracted muscles on bones, even veins... These days I mostly use AI enhancements to improve hair slightly (on the really bad hairs).
 
AI is getting really good but it's still a long way away from understanding the nuances that make women desirable. Real bodies are imperfect, and that's what makes them unique. AI tends to distill this down to an idealised formula which banishes any hint of physical individuality and results in everything looking the same :(
On the whole, AI generates what users tell it to in a highly generalized fashion. Though I haven't seen much of the work you're alluding to, I'd wager the negative prompts users are entering are telling it not to include such imperfections.

I generated this with the following prompts:

Positive: plus-size 30yr old blonde woman with freckles
Negative: runway model

ai woman.png

On a personal note, my favorite kill in the entire Halloween franchise is Bob's knife impalement in the original flick. I waited 44 years for a sequel to do a female version of it, and Halloween Ends ruined theirs with a shitty plot device. Consequently, the scene was a colossal disappointment for me. I've rendered a few of my own versions with Daz Studio over the years but, since seeing Ends, I keep longing more and more for an AI version that rivals it. As such, I recently decided to swallow my artistic pride and become a hypocrite by re-installing Stable Diffusion and learning how to use ComfyUI to fulfill my desire.

Mind you, I'm not particularly thrilled with myself but, damnit, @fallenmystic is right about Daz and PBR. Iray is wonderful tech but can only take artists looking for hyper-realism so far.

Incidentally, there are a handful of artists on the Daz forums currently experimenting with AI to enhance their work. A couple of days ago, one of them shared a highly useful-looking tip I can't wait to try: they ran their render through a cartoon filter to help SD's ControlNet Canny preprocessor see the lines better. The before and after images are remarkably similar in detail with regard to posing and composition. It's the third post up from the bottom of page 19 of this thread.
 
Ho-ly shit.

I'm learning ComfyUI with a free course I found on YouTube and this has me wide-eyed with possibilities.

7:09 - 8:23 - Create multiple images with different prompts simultaneously.

Yeah, the flexibility and possibilities ComfyUI provides are quite unmatched by any other SD frontend. :)

The only things I don't like are how cumbersome it is to inpaint and the lack of a layered workflow. That's why I found AI Diffusion for Krita a perfect tool for me: it's basically ComfyUI plus a traditional image editor GUI with layers.
 
Typical software video tutorial on YouTube:

Narrator: "Today, I'm going to show you step-by-step how to accomplish x to produce y."

:: ten minutes in, narrator's own experience kicks in and they start glossing over details ::

Narrator: "Now, we just do this, this, and that, and voila!"

Me: "WTF did you just do?!" :: can't figure it out even at .5 speed ::

Also Me: :: quickly develops headache and closes browser for a nap ::

The link starts at 11:22 into the vid. At 11:37, he appears to try connecting the Conditioning output of the Negative Prompt to the latent_image input on the second KSampler, but then the green wire he dragged up to the KSampler disappears and I lose him from there.

 
I found another instructor. This is an img2img conversion of one of my 3D portraits using ComfyUI. I literally let out a long, wide-eyed gasp when I saw the AI version. I used a third-party app called Upscayl to upscale the AI version to match the 3D version's 4K resolution.

"Date Night: Your Place or Mine" - 3D Version
Date Night - Your Place or Mine - 3D.jpg

AI Version
AI Version.jpg
 
Here's the project I'm learning how to enhance. It's my fifth female take on Bob's kill since becoming a 3D artist in 2009.

Unfortunately, Stable Diffusion can't render humans in profile without making their faces look like bastardized versions of Picasso paintings - it needs to see as much of the face as possible in order to generate an accurate reproduction. Therefore, I've set up two camera angles with my character's head facing two different directions. The first is my 3D version. The second is what I'll be enhancing with AI once I get the hang of SD's ComfyUI interface and figure out how to get it to enhance the scene accurately.

Incidentally, SD also has trouble processing high-resolution images, so the version I'll be enhancing starts out at 1280 x 720.

"By the Numbers" 3D Version

By the Numbers.jpg

To be Enhanced

By the Numbers - To be Enhanced.png
 

I suspect the issue might be the window blind obstructing the character. In my case, I haven't had too much trouble with profile shots, although it's true that most SD models have a bias towards characters facing the camera.

On the other hand, I haven't yet managed to depict a character behind bars (e.g. a prison or cage) correctly. If I were to remaster the first image with AI, I'd probably render the blind in a separate pass with a transparent background and paste it over the AI-generated image afterwards.

As for the resolution, I'm not sure 1280x720 is ideal for SDXL. You may already know this, but SDXL has a set of recommended resolutions that produce better images. By the way, ComfyUI also has a custom node that can automatically pick one of those resolutions based on the ratio and size of the input image.
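For reference, SDXL was trained on a fixed set of aspect-ratio "buckets" of roughly one megapixel each; the list below covers the commonly cited ones. Here's a small sketch of what such a resolution-picking node does (the exact bucket list and the nearest-ratio rule are my assumptions about its behaviour):

```python
# Commonly cited SDXL training buckets (width, height), all ~1 megapixel.
# Treat this list as an approximation of what resolution-picker nodes use.
SDXL_BUCKETS = [
    (1024, 1024),
    (1152, 896), (896, 1152),
    (1216, 832), (832, 1216),
    (1344, 768), (768, 1344),
    (1536, 640), (640, 1536),
]

def nearest_sdxl_bucket(width, height):
    """Snap an arbitrary input size to the bucket whose aspect ratio
    is closest to the input's."""
    target = width / height
    return min(SDXL_BUCKETS, key=lambda wh: abs(wh[0] / wh[1] - target))

# A 1280x720 (16:9) render would be generated at 1344x768 instead.
assert nearest_sdxl_bucket(1280, 720) == (1344, 768)
```

So a 1280x720 source isn't far off a supported ratio, but generating at 1344x768 and downscaling back should give the model a size it was actually trained on.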

You can also set up an upscaling stage within ComfyUI instead of using an external program like Upscayl. Upscayl seems to be a very good choice, but it doesn't do anything ComfyUI can't, and ComfyUI offers a bit more flexibility and fine control (e.g. SUPIR, iterative upscale, tiled upscale, etc.).
 
Is ComfyUI supposed to offer greater control over what generated images look like, or just greater control over the processes needed to generate them?

I was under the impression that, with Comfy being node-based, one could overcome SD's weaknesses through direct manipulation of the individual processes, but I'm seeing the same kinds of flaws in generated images whether I use ComfyUI or A1111, and I'm getting tired of the headaches I keep getting trying to sort out Comfy's noodle soup in the tutorials I'm trying to learn from.
 
It's more of the latter, so it wouldn't produce better images than simpler frontends per se. Its main strength is the way it lets you develop your own workflows that may require uncommon configurations or automation.
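To illustrate that point: a ComfyUI workflow is just a JSON graph of nodes, which is what makes scripted or automated pipelines possible - you can build the graph in code and POST it to the running server's /prompt endpoint. A skeletal txt2img graph follows ComfyUI's wiring format below, but treat the exact node names and parameter values as illustrative:

```python
import json

# Skeletal txt2img graph: node_id -> {"class_type", "inputs"}, where an
# input referencing another node is written as [node_id, output_index].
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "portrait photo of a woman"}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "blurry, deformed hands"}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0],
                     "negative": ["3", 0], "latent_image": ["4", 0],
                     "seed": 42, "steps": 25, "cfg": 6.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "out"}},
}

# This is the body you'd POST to the running server's /prompt endpoint.
payload = json.dumps({"prompt": workflow})
```

The noodle soup on screen is exactly this graph; once it's in code you can vary prompts, seeds, or resolutions in a loop, which is the kind of automation simpler frontends can't express.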
 
Its main strength is the way it lets you develop your own workflows that may require uncommon configurations or automation.
This point dawned on me while running a few tests in A1111 yesterday afternoon. I found myself missing the greater degree of control I was learning to have with ComfyUI.

I've been studying Blender off and on for the last few months with the long-term goal of making and selling content for Daz someday. While at work today, it also occurred to me that, if I can't handle ComfyUI, there's no way I'm gonna wrap my head around Blender's shader nodes as I start learning increasingly advanced techniques. So, I'm going to keep at this project with Comfy.
 
In fact, Blender's shader nodes (and its geometry nodes, for that matter) have a lot in common with ComfyUI, aside from the learning curve. I think it's worth getting used to a node-based workflow if you intend to get serious about 3D art, since many tools like Blender or Substance Designer provide such interfaces. Good luck with your project! :)
 
View attachment 1439839 View attachment 1439840

Fine. Let's talk then. Before we start, please spare yourself the trouble of telling me how this or that part of those images is imperfect, because I know. I just didn't want to spend a good part of a Saturday morning proving my point.

I believe it's now your turn to actually prove what you claimed instead of just nitpicking my work. I'm very much interested in seeing what you'll produce as proof that DALL-E can generate NSFW images, because you're pretty much the only person I know who has claimed that. Give me a simple image of a naked girl with her nipples and private parts visible, then we can talk.

I'll be waiting.
In the first image, the fork is on the wrong side and she's holding it upside down - would you really hold it like that?
In the second image, all the fingers are deformed.
 