
Discussion about A.I.

And... I'm... done.

I wound up ditching ComfyUI after all, and have spent the last couple of days learning how to use Krita's Stable Diffusion/ComfyUI plugin. Being able to work in layers is a godsend, but it doesn't negate the feeling that you're playing a slot machine. Every click of the Refine button felt like another coin in the slot while I hoped SD didn't mangle my character's expression too much. ControlNet only goes so far in preserving the results of all the time and effort you spent fine-tuning parameter dials and tweaking Iray shaders to your exact specifications. More often than not, you're still left Inpainting details SD missed so you can refine them further. The results may look impressively seamless, but they are in fact a completely random amalgamation of generated elements.

Comparatively speaking, you can also kit-bash content in Daz and Poser. The difference between them and AI is that, once you create a mixed-and-matched set of elements you like, you can re-use them in any project you desire and they will render out the exact same way every single time.

I installed Krita Friday evening and have been up since 9am yesterday morning figuring out how to bend the plugin to my will. Getting the knife to generate properly was an especially massive pain in the ass until I remembered Daz can render separate objects as canvases. So, I rendered my character without the knife in her chest, and then added the knife in post. The stab wound is an Iray wound decal I had Krita turn into red liquid - I went through more than 50 generations of image samples using about a dozen different prompts until I got something close enough to a bloody stab wound that I was finally able to say, "Fuck it, that's good enough." I had to Google tips on blood prompts and found a Reddit thread that suggested "red liquid". That, of all things, is what worked, but barely.
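
(For anyone who'd rather script that kind of inpainting pass than keep clicking the Refine button, here's a rough sketch of the same idea using the diffusers library. The checkpoint name, mask file, and settings are illustrative placeholders, not what the Krita plugin actually ran under the hood.)

```python
# Rough equivalent of the Krita inpainting pass, using Hugging Face diffusers.
# Model name, file names, and settings are illustrative placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # any inpainting checkpoint would do
    torch_dtype=torch.float16,
).to("cuda")

render = Image.open("render_without_knife.png").convert("RGB")  # the Iray render
mask = Image.open("wound_mask.png").convert("RGB")              # white where the decal sits

result = pipe(
    prompt="red liquid, glossy, dripping, photo",   # "red liquid" worked better than "blood"
    negative_prompt="cartoon, painting, dry",
    image=render,
    mask_image=mask,
    strength=0.75,            # below 1.0 keeps some of the decal's original shape
    num_inference_steps=30,
).images[0]

result.save("render_with_wound.png")
```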

Incidentally, someone on ArtUntamed recently told me they see AI as a kind of render engine. Their work certainly exemplifies what can be done with it. Expanding on their analogy though, I'd call it the world's first unpredictable render engine. :p

I wrote a while ago in this thread that this project was a one-off use of AI for me, to realize a 44-year-long dream of seeing a female version of Bob's death from the original Halloween. I wish I could say I'm overjoyed with the outcome, but I'm too relieved to finally be done with it, so I'll just say I'm happy with what I managed to wrangle out of the AI. Despite the headache and the numerous exasperated sighs I exhaled while working on this, I must admit I learned a lot and can no longer say I won't experiment more, especially with SD3 around the corner. As much as I enjoy Iray's consistency, it simply cannot produce this level of realism in human figures. For all of AI's faults, scenes like this can be achieved, thanks in large part to Daz Studio's Render Canvas function, and it's the closest I'll ever get to working with real models.

With that in mind, here's my final homage to 1978's Halloween, and my first official fetish-themed AI-enhanced 3D render.
By the Numbers - AIE.jpg
 
This AI we're talking about here is not "General AI", which is what some people refer to as an existential threat - stuff like HAL 9000 or SkyNet.
This AI can't think, and will never think. It's simply a badly named plagiarizing app.
It needs an ENORMOUS amount of real human data before it can do its thing, and this data was all scraped from the internet without asking, because if they had to ask they wouldn't have gotten enough to make it work. For images this dataset is called LAION-5B, and it is used in all the AIs, even the ones that claim to be ethical and not to use it (it's still used as a base, and then other smaller datasets are added on top).

AI is making creatives jobless en masse right now; it's been happening in several industries since last year. Writers, photographers, illustrators, game artists, voice actors, translators - I wonder who's next, maybe musicians. It's not just a matter of "hey, it's just like when they invented the camera; keep scrolling and ignore it if you don't like it".
 
Well, it's like another industrial revolution. Whether you like it or not... that's life, nothing is guaranteed. In earlier industrial revolutions a huge number of people lost their jobs. Did the world become a better place because machines took jobs from people? I don't think so. But that's still the way of life.
 
While I may not entirely agree with those who claim we have already reached the level of Artificial General Intelligence - or may simply not be knowledgeable enough about the subject, to be honest - we might be much closer to that goal than you seem to think. Publicly available AIs nowadays already outperform most law school graduates on the bar exam, for instance, and they even show a rudimentary "self-awareness" when you take a screenshot of your browser while chatting with them and ask them to describe the image.

I'm afraid you're mixing up a few different kinds of AI and imagining how they work without much understanding of how they actually do. Training on copyrighted material remains a controversial issue, but generative AIs like Stable Diffusion are not simply "plagiarising apps" as you claim. They don't work like some image search engine that stores the original images and produces "new" images by mixing and matching them.

What they actually do is learn general concepts, like what a "human face" is: how it's oblong in shape, has two eyes, and so on. So unless you're ready to call pretty much every portrait ever made a "plagiarism" of how humans look in general, it doesn't make sense to criticise generative AIs for being simple copycats.

More importantly, what really matters in AI artworks is that they still depict what their human authors intended, not something the AI randomly decided to copy from someone else's original. We have a few crux artists on CF who have consistently published quality artworks with a distinctive style. If you think about how many original images of naked girls on a cross that look like their renders could possibly be in the LAION-5B dataset, I'm sure you'd see the problem with your argument.

As for AIs taking over human jobs, I agree that it may become a serious issue in the short term. But if we set aside the speed and scope of the AI revolution we're witnessing now, it's not fundamentally different from how other disruptive technologies have destroyed some jobs while creating new ones. While it's undoubtedly unfortunate for those who've been affected by the change, it also opens up new possibilities and opportunities for others.

You mentioned game artists as victims of AI technology, but you didn't mention how the same technology has opened up a lot of exciting opportunities for big game studios and indie developers alike. I learned 3D modeling and a few different game engines to pursue my dream of someday making a game that I like. But as I learned more, it only became apparent that it's not something an individual hobbyist can do in their spare time.

Since the recent advancements in AI technology, however, it has suddenly come within the realm of possibility. Thanks to AI, I can now generate fully rigged characters and motion-captured animations for them without hiring anyone or purchasing any gear. I can also create fully voiced lines for such characters with perfect lip-sync animation, none of which was possible before unless you owned a game studio with a team of professionals.

And as a gamer, I have reason to eagerly await the next generation of games made with AI technology. For example, games featuring NPCs that act and speak like actual human beings are just around the corner. And as a fan of creative indie games, I can't wait to see what kind of games people will create when they can make their own assets at a quality that used to be possible only for larger game studios.

Advancements in technology always produce both opportunities and problems. And the problems they create are usually dealt with through new solutions, rather than by going back to a time when the technology never existed. While it's unfortunate that AI may threaten some jobs, like voice actors or animators in the game industry, we can't, and probably shouldn't, stay forever in a time when only large game studios could make quality 3D games, and NPCs repeat the same lines like "I am sworn to carry your burdens" over and over once you exhaust their dialogue trees.
 
(Moved from a different thread.)

I'm not talking about Daz Studio. Do I have the necessary skills? Not 100%, not 90%, not 80%, but I'm working on improving my skills whenever my free time allows.
About the quality and the costs. Have you ever heard of MetaHuman Animator? There was a demo last year. Pictures speak louder than words, so take a look at the two videos.

1) The Hellblade 2 demo - FYI, this is MetaHumans + a proper face rig + mocap data from a real actor. What you see is not a real human being.

2) The conference: this was my "wow" moment last year.
This technique is free for everyone, as stated in the video. You need an Unreal / Epic account to get access. My UE dev account dates from before 2018.

3) I'm mostly a Blender guy, so I tried to find out how to do it there. My focus is on photorealism, so there was no way around KeenTools (https://keentools.io): facetools in combination with GeoTracker, plus sculpting and rigging. AI is nice; I tried to integrate it into my workflow too and have rendered thousands of images over the last few months, but at the moment there are way too many glitches for quality results if we're talking about SD 1.5/SDXL.

I don't want to pollute your thread with tech info, so I will show my results soon at sn, au and here at crux in my thread.
MetaHuman is certainly an impressive technology. But it's mainly for rendering real-time characters specifically targeted for Unreal Engine (the last time I checked, the license even prohibited usage outside the platform).

I'm afraid you're overestimating the technical - not artistic - skills of the average creators who contribute kinky art content to communities like CF. There's a reason why Daz3D/Poser became the de facto king of 3D creation tools.

Most people just purchase ready-made 3D assets and don't care much about the material or renderer settings before they hit the render button. Sure, installing Unreal Engine and rendering a simple MetaHuman character might not be that difficult. However, since MetaHuman was made for a different target audience, replacing Daz3D with Unreal Engine would be quite another matter.

You won't easily find all the necessary assets outside of the Daz3D Marketplace/Renderosity. While importing them to Unreal is possible, they won't be as useful without all the morph options and presets.

More importantly, even if you manage to do all that, MetaHuman won't look as good as what you can easily render with Stable Diffusion, if the criterion is photorealism.

Like you, I'm mainly a Blender person and have strived to find a way to render photorealistic human characters with it. My conclusion was that it's definitely possible, but you need to be highly proficient at things like sculpting, shaders, and so on.

When I said "highly proficient," I meant being so at a professional level. I followed a few photorealistic 3D human projects until a few years back. Even for people who know the ins and outs of Blender like Blitter, it usually takes tremendous time and effort to create a truly photorealistic outcome, which you can achieve with Stable Diffusion without too much trouble nowadays.

By the way, there was a somewhat outdated public project to create a photorealistic human model called "The Wikihuman Project" (a.k.a. Digital Emily). Although the model only depicts a single character's head, it uses quite a complex shader setup and nearly a gigabyte worth of texture maps.

All these things are way out of their league for most kink art creators, given that only a very few of them know how to create even a simple 3D model like a cross, let alone photorealistic human characters.

I know you're very skilled at Blender and probably can create something as good as the Hellblade demo if you use MetaHuman. But if someone can produce a 3D render that matches the quality of an AA game promo video, they must be far better than having "a little skill" as you said.

It's true that using Stable Diffusion XL can be quite frustrating at times. But if one is determined to spend even a fraction of the time on it that learning the skills needed to create a photorealistic 3D character would require, they'd be able to produce better-quality (i.e. closer to photorealism) renders than even the most skilled professional Unreal Engine artists can achieve.

TLDR: You can achieve photorealism in Blender and something very close in Unreal Engine. But it'll require far more time and effort than it would to learn how to use Stable Diffusion properly.
 
(Moved from a different thread.)


I know. It's not the time to show half-baked results. I have a picture at AU, made from a tutorial, from two years ago.
 
(Moved from a different thread.)


If you analyze an SD 1.5 or SDXL image versus a photo in Photoshop or Affinity Photo, what do you notice first?

And what are the differences compared to a rendered image?
 
It's an interesting question, and I think we must clarify what we mean by "SD1.5/XL image" and "(3D) render image" first to avoid confusion.

As for SDXL (let's forget about SD 1.5 for now since they are not much different in this regard), I'll assume we are talking of an unmodified output you can expect from a good photorealism model generated using a single prompt.

As for 3D rendering, I'll assume a good-quality image created with Daz3D because that's what would be most relevant to the conversation in this context.

I didn't choose something like a complex ComfyUI workflow or high-quality Blender render because we'll be talking about their respective flaws compared to real photographs.

As I mentioned above, I know that it is possible to produce a result that's practically impossible to tell from real photos using either of the methods, provided you have infinite time and top-level skills. So, I'll talk of more practical cases first.

First off, both of them suffer from the lack of details but in different ways.

SDXL struggles to depict a subject with sufficient anatomical or surface detail, especially when it's rendered in a small area. But if you tell it to generate an extreme closeup, it will give you a remarkably detailed image, although it'll still lack definition in smaller areas.

In comparison, Daz3D cannot render such minute details as micro hairs or creases on human skin, regardless of the render resolution. However, the biggest issue with Daz3D renders is that they don't realistically represent how light interacts with a subject like the human body.

As I mentioned in an earlier post, PBR is always an approximation of how light interacts with physical surfaces, and what Daz3D provides is a rather simplified implementation of the concept. Even with a far more advanced renderer such as Blender's Cycles engine, human characters often look realistic only in very specific lighting conditions.

The problem with SDXL can be overcome with things like inpainting and/or upscaling, which will produce results that are indistinguishable from real photos.

The problem with 3D rendering, however, cannot be fixed at the Daz3D level. To mitigate it, you'll need a bunch of highly detailed photo-scanned textures, including some unusual ones like scatter maps, high-res meshes with sculpted details, and custom shader setups. And even if you have the money/time/skills to meet all these requirements, things like hair or eyebrows usually give away that it's not a real photo.

That's probably why constructing a scene in a 3D modeller and then using AI to fill out the details is such a popular approach. An AI upscaler, for example, can make both 3D renders and AI-generated images indistinguishable from real photos.
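
A minimal sketch of that render-then-refine idea, assuming a low-denoise SDXL img2img pass over an existing Daz/Blender render. The checkpoint, prompt, and strength value are just examples, not a specific recommended setup:

```python
# Sketch: low-denoise SDXL img2img pass over an existing 3D render,
# so the composition stays put while the AI fills in skin/hair/fabric detail.
import torch
from PIL import Image
from diffusers import StableDiffusionXLImg2ImgPipeline

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # or any photorealism-tuned SDXL checkpoint
    torch_dtype=torch.float16,
).to("cuda")

render = Image.open("daz_render.png").convert("RGB").resize((1024, 1024))

refined = pipe(
    prompt="photo of a woman, natural skin texture, detailed hair, soft studio light",
    negative_prompt="3d render, cgi, plastic skin",
    image=render,
    strength=0.3,          # low strength keeps the render's pose, framing, and lighting
    guidance_scale=6.0,
).images[0]

refined.save("daz_render_refined.png")
```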

Again, all this talk is only relevant when we confine ourselves to the problem of making a fictional image that looks exactly like a photo. In a more practical context, however, Blender or even Daz3D is good enough to produce stunningly "realistic" images, if we don't require such a demanding standard.
 
I am aware of all that. But I should have been more precise. If you compare an AI picture and a photo (let's forget about the limb problem for a moment), you will mostly see degradation in the transition zone from the clothes to the skin. Often it's "shifted"; however, sometimes I got extremely realistic results with img2img, ControlNet and pose references.
So AI obviously does a better job than any render engine. The question is what is missing, what explains the difference, and how you can get as close as possible to real photos... mostly it's a mix of not being too precise, light and shadow, and a few tricks.

Maybe I will post one or two pictures in Q3 this year, as a proof of concept of a new method to gain more realism.
 
I think an example of the "shifting" you mentioned would help me understand what you mean by that.

In my case, I haven't noticed such a problem. I haven't had that much trouble generating photorealistic images when I can use a single sampling pass followed by an upscaling process.

For me, the biggest issue has been the accumulated inconsistency (e.g., the lighting in one area doesn't match that of another) introduced by successive inpainting passes, which are needed to create a more complex scene.

I hope that SD 3.0 will significantly reduce the number of inpainting steps needed to mitigate the problem.
 
On a side note, by the way, the most photorealistic SDXL model I've seen so far is one called "Boring Reality":


(Image linked from an external host. You can see more examples from the link above.)

It's an experimental model, so their examples show the usual AI problems like deformed limbs or garbled text. But if I focus on the lighting and tone alone, I don't think I've seen many models that can produce more natural results.
 
There are a few ways to create alpha-plane people: a dull posed pic rendered at ultra-low quality but in 4K, directly from Daz - that's the best quality I get at the moment.
Generation takes about 100 seconds.
1.jpg
What I meant by "shift": sometimes the neck area, where it meets the clothing, isn't done properly - out of focus, misaligned. Not here, though. The details are getting better.
 
The main problem with render images is still the hair. They are getting better, especially with Medusa and some custom hairnodes. The transition where the hair roots emerge from the skin needs to look right, and this is where geonodes come into play. I'm not fully done yet; I took up Chris Jones' idea.
Chris Jones is one of the guys I learn the most from.

"Ed" by Chris in 2014, far away from any AI, Lightwave :) (I was a modo guy these days haha)
"Colour", Blender
"Plica", Blender
Best thread ever :)

I tried to combine Blender with SDXL; it's too soon, at least for me.
 
The main problem with render images is still the hair. They are getting better, especially with Medusa and some custom hairnodes.
I like Blender's new hair system. It has become much easier to style hair now. I think Daz3D creators should seriously consider a hybrid approach using Blender, if only for the hair and the Cycles engine.

"Ed" by Chris in 2014, far away from any AI, Lightwave :) (I was a modo guy these days haha)
Yeah, it's been possible to create photorealistic 3D renders of humans for quite some time. Otherwise, we wouldn't have so many CGI-enhanced films nowadays.

But such results are quite far removed from what you said could be achieved with "a little skill", which has been my point. ;)
 
You‘re right.

@Wragg wrote: "AI can put more realistic expressions on faces than 3D can! :)"

I had to disagree, because it depends on your skills and on your tools. "Little" was the wrong word on my part.



I have recently tried to find out where we stand with Daz and Blender. It's getting way better.

Concerning the new hair system: Some results of a speed session below.

IMG_6281.pngIMG_6282.pngIMG_6280.png
Blender 4.0.2 and 4.1. She's an old G8.1 character I kitbashed years ago in Daz3D for one of my stories, imported with diffeo a few weeks ago. I haven't started to sculpt her yet. I love the hair system; some settings need to be fine-tuned (hair diameter to approx. 0.6 mm).
Ref: Megan in Daz:
IMG_6283.png
 
But what's up with Blender and AI? Is it even necessary? Below is a scene from a 2021 tutorial, which was also a milestone in gaining knowledge for me (tutorial available at BM and CGCookie.com). Scenes rendered on my laptop. Scene by Kent Trammell. 100% made in Blender.

IMG_6287.pngIMG_6286.jpegIMG_6288.jpeg

And: I haven't shown anything I made from scratch (default cube) in the last two years. Maybe this fall.
 
I think that'd depend on specific requirements - not every image has to look exactly like a photograph.

But it's probably safe to say that there are many ways to incorporate AI into your Blender workflow.

The example you showed is certainly stunningly realistic (by the way, I purchased that particular Blender course years ago and still haven't finished it :oops:), yet it's still obviously a 3D render. But what if you add a simple AI upscaling step at the end of the pipeline?

original.jpeg upscaled.jpg
(The left image is the original render down-scaled to 1K, and the right image is the result of applying an AI upscaler to it.)

While the difference is subtle, the right image looks much closer to a real photograph than the original.
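
If anyone wants to try the same step themselves, here's a rough sketch using the Stable Diffusion x4 upscaler from diffusers. I'm not claiming this is the exact upscaler used for the image above; it's just one readily available option:

```python
# Sketch: AI upscaling pass over a down-scaled 3D render.
# The x4-upscaler checkpoint is one readily available option, not necessarily the one used above.
import torch
from PIL import Image
from diffusers import StableDiffusionUpscalePipeline

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler",
    torch_dtype=torch.float16,
).to("cuda")

low_res = Image.open("render_1k.png").convert("RGB")  # the 1K down-scaled render
# Note: a 1K input produces a 4K output, which needs a lot of VRAM;
# you may have to process the image in tiles on smaller GPUs.

upscaled = pipe(
    prompt="photo of a woman, detailed skin, natural light",  # a short prompt steers the added detail
    image=low_res,
    num_inference_steps=25,
).images[0]

upscaled.save("render_upscaled.png")
```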

More importantly, you'd have to be as skilled as Kent Trammell (who created this commercial tutorial on photorealistic rendering techniques) to create something of that quality. And even if you meet that qualification, it'll still take several weeks or even months to finish a single unique character that way.

Then see what I - who is but an ordinary Blender & SD user - just created based on the same image:

asian.jpg

It took me 5 minutes to finish this render, and I believe you can see the productivity benefits of an AI-based workflow.

As I mentioned above, there are many different ways to incorporate AI into a Blender-based workflow. There's an add-on that automatically generates PBR materials for the current scene. You can also write a shader node tree & Rigify rig that automatically generates control maps for SD, for example.
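
To make the control-map idea a bit more concrete, here's a rough sketch: export a depth pass from Blender's compositor and feed it to a depth ControlNet. The model names and file names are only examples of one possible setup, not a reference to any specific add-on:

```python
# Sketch: use a depth pass exported from Blender as a ControlNet control map,
# so the generated image follows the 3D scene's composition. Model names are examples.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # any SD 1.5 checkpoint works with this ControlNet
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# A normalized depth map saved from Blender (Z pass -> Normalize -> File Output).
# Depth ControlNets generally expect near objects to be brighter, so you may need to invert it.
depth_map = Image.open("blender_depth_pass.png").convert("RGB").resize((512, 512))

image = pipe(
    prompt="photo of a woman standing in a stone courtyard, natural light",
    negative_prompt="3d render, cgi",
    image=depth_map,            # the control map, not an init image
    num_inference_steps=30,
).images[0]

image.save("controlled_generation.png")
```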

There are many other possibilities beyond those, outside of Blender too. For example, nowadays it's possible to create a fully rigged character from a single photograph, create motion-capture animations for that character from an arbitrary video (without any gear), and then make a fully voiced video with perfect lip-sync animation without hiring a voice actor.

Of course, you don't have to use AI if you are happy with existing tools and workflows. But for people like me, it can open up a whole new world of possibilities which had been denied due to lack of skill, time, or money.
 