MetaHuman Creator is a breakthrough in creating realistic human characters.
A MetaHuman created with a face rig, ready to be animated.

MetaHuman Creator was easily one of the biggest things to happen in 2021 so far. It is hard to put into words how incredibly fast it is to create completely believable humans, complete with very advanced facial and body rigs. Calling this a game changer feels like an understatement. This is the result of decades of work by companies like 3Lateral and Cubic Motion, which are now under the wing of Epic Games.

The way the process works as of today is that you sign up for MetaHuman Creator Early Access and are then taken to the online site where the design and processing of the characters happen. This means we don't need a super-spec PC, as most of the magic happens in the cloud.

Creating a MetaHuman is intuitive and easy: it works by blending ready-made templates. It seems limiting at first, but there is real depth and freedom to it, especially when diving into the deeper levels. There are several hairstyles, beards and so on to choose from. And the template animation, which shows the character posing as if in front of a mirror, is so real it is almost creepy.

It is amazing how MetaHumans avoid the pitfall of the uncanny valley. Character Creator 3 is great software, but even it has issues, especially with the out-of-the-box expressions looking cheap. You need deep knowledge of facial animation and rigging to get good results with it. MetaHuman, on the other hand, delivers believable and natural results instantly.

When the character is finished in MetaHuman Creator, it is as easy as launching Bridge (formerly Quixel Bridge) and downloading the MetaHuman from there. All of your MetaHumans appear there as downloadable files.

We need to set up the Bridge plugin from the Export settings. It is also possible to get MetaHumans to work with Maya, and we can export texture maps as high as 8K.

MetaHuman with custom clothes I rigged in Maya.

Now, what goes on under the hood is really remarkable. What we get are believable shaders, absolutely fantastic groom-based hair that reacts to physics, and full facial and body rigs. The facial rig is something you might see in a multi-million-dollar Hollywood production. It is also quite easy to use.

Furthermore, what's amazing about this is that we can do high-quality facial animation of MetaHuman characters with iOS Live Link, which takes advantage of the iPhone's ARKit. We can drive facial animation with the Live Link Face app. It is quite easy to set up, and it is not difficult to imagine the possibilities here.

Rigging custom clothes from Marvelous Designer and ZBrush is relatively straightforward, and one can also use the template body that comes with a MetaHuman to transfer skin weights, making the weighting of the rig faster. The bodies come in several templates and are named with a logical naming convention.

Now, there are limitations, of course, one being the extremely small selection of garments (the existing garments do look good, though). I also feel the shoulder rig could be better; when it comes to body rigs, CC3 has a more stable rig. In its current form, ray tracing doesn't work well with the eyes under a simple HDRI, for example, producing a horrible black-eye effect that looks like something out of the Ring franchise. It is possible to somewhat work around this by lighting the eyes separately or doing a second render pass just for the eyes, but needless to say this is cumbersome. I suspect these issues will be fixed in the near future, so they are not deal breakers.

If you are into making characters, I definitely recommend checking out MetaHuman Creator. It is free and fun. I will be creating an in-depth tutorial on the topic soon.

Here is a scene I created in Unreal Engine 4.26 using a MetaHuman.
https://www.artstation.com/artwork/XnL0Jw

If you are new to the material editor in Unreal, check out this small tutorial on using material functions. It is easy and fun.

I just got an offer from a company offering freelance work on ArtStation. In addition to asking me to complete a free art test, they suggested the following:

1- Per model system you will be paid for the model, validated by the client, depending on the complexity level 
– Low complexity: 15 euro 
– Medium complexity: 19-25 euro 
– High Complexity: 27-30 euro 
– Casuastic: extreme complexity 
If the modeler is doing only the “mesh” part (without textures) , this price would be reduced to 50 % 

2- Monthly system assure you a constant volume of production with fixed remuneration 
– PRO-modelling level (if you have developed the high level of skills + english ) 
– Master-modelling level: our basic, designated for modelers who would be creating Mesh and textures for PBR models 
– Pro modelling : starting from 700 euro for 10 models all complexity 
– Master modeling : 200 euro for 10 models per month all complexity 

And the abuse goes on.

So let’s break this down.

A high-complexity model with textures is, in the best case, 30 euros. Such a model can easily take a week to make. In that case, the daily salary is 6 euros.
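To make the math explicit, here is a quick sketch. The five-day turnaround and eight-hour days are my own assumptions for illustration, not figures from the offer:

```python
# Effective pay for a "high complexity" model under the per-model system.
model_fee_eur = 30   # best-case high-complexity fee from the offer
days_of_work = 5     # assumption: such a model takes a work week
hours_per_day = 8    # assumption: a normal working day

daily_rate = model_fee_eur / days_of_work
hourly_rate = daily_rate / hours_per_day

print(f"daily rate:  {daily_rate:.2f} EUR")   # 6.00 EUR per day
print(f"hourly rate: {hourly_rate:.2f} EUR")  # 0.75 EUR per hour
```

And remember: doing only the mesh halves even that.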

The CG industry is on track to become a 160-billion-dollar industry. How is this not abuse of artists?

On the morning of the 23rd (Labour Thanksgiving Day in Japan, a national holiday), I noticed that my PayPal balance had become zero.

I contacted PayPal, and they said this was a system error, not unauthorized access. This is from their official statement, which came the following day:

This week, you may have noticed that the balance in your PayPal account was temporarily unavailable. Due to a technical issue, a bank withdrawal had been initiated incorrectly, which meant your balance appeared to be transferred out of your PayPal account.

They also said the following:

We deeply apologize for any confusion this may have caused. We acted quickly to cancel this bank withdrawal so the balance was not transferred into your bank account.

NHK reported the news, Twitter responded, and PayPal Japan became a trending topic.

There are a couple of things to take away from their statement. They said the "withdrawal had been initiated incorrectly." This is bad communication.

Do they mean that a bank withdrawal had been made in error? That should be obvious, since all users report that they did not initiate the bank withdrawal themselves. Or do they mean that a bank withdrawal that had been made in error was also somehow incorrectly executed?

I actually asked support if they would just let it go through, but I wasn't given that option. So one could speculate that the withdrawal was not only made in error but also initiated incorrectly.

Furthermore, the statement said the balance "appeared to be transferred out of your PayPal account." Appeared? Transferred to where? If it only appeared to be the case, then it wasn't. And "out of your PayPal account" is different from saying "to the user's bank account," for example.

Not very professional communication, to say the least, and I wonder if whoever wrote this is a native English speaker. Or...

There is a theory floating around that the Japanese government is behind this, force-testing automatic withdrawals to Japanese bank accounts. There is actually a PayPal notification from May 2021 saying that accounts holding large amounts, such as those exceeding 1 million yen, will be under special scrutiny and may be automatically forced to withdraw to a bank account.

There doesn't seem to be evidence, however, that automatic withdrawal would happen for smaller sums. If the Japanese government were to crack down on some kind of tax avoidance occurring in online accounts, it would make sense to target larger sums.

Either way, this isn't a good user experience. If this was a sincere error, it was a bad one. If it was some kind of covert government move, worse. I was able to get my JPY funds returned, but not the USD, which is still floating around somewhere. As of today, I have no way of knowing when I will get my money back.

PayPal says they are doing everything they can to make sure this won't happen in the future. Unfortunately, I don't feel too confident about that.

This is something I have seen many times. Being somewhat experienced at this point, I have learned how to deal with it (see the end of this post). But seriously, if you are a project manager on a CG project, please do not do this.

This is how it usually feels.

The most common mistake in CG team management is having several people give feedback to artists. Sometimes they are supervisors of other projects, other artists, or non-CG people who just want to chime in with how they feel about the work being done. This is extremely annoying for artists, especially when the feedback from several people is conflicting.

Feedback must come from a single source, usually the CG supervisor or art director. If the pipeline is fairly simple, it doesn't actually matter much whether the person giving feedback is hands-on with the project, as long as that person takes responsibility for the feedback. It is no good confirming that work is okay only to come back to the artist with changes a week later. If it is good, it must be considered approved work, and further changes must be acknowledged as extra work.

Also, if you are a CG artist on a project, please do not waste time commenting on others' work in the company Slack unless the lead specifically asks for it. Throwing a thumbs up is okay, but even positive feedback can undercut what the actual supervisor has to say. If you notice an issue with someone's work that has an easy fix, feel free to send them a direct note, but do not pollute the Slack channel.

I have seen this so often that now, when it happens, I usually just stop everything I am doing, raise my hands, and ask for clear direction from the person in charge. I have also raised the issue with at least three companies in the recent past, because it was clearly impacting productivity in a major way.

Be clear about who is in charge of a particular project's art direction, and make sure your team knows.

Revisiting the June prediction by SIGGRAPH.

Source: SIGGRAPH

The CG graphics market is at 150 billion dollars and will grow to 160. Look also at how little impact Covid has had on the industry. It is not negligible, but the predictions are relatively stable.

Testing 3D software on the MacBook Pro M1 Max

I just bought an M1 Max MacBook Pro for production work; here are some tests with 3D software on the M1 Max. I want to see if I can use this as a truly portable 3D creation machine. Truly portable meaning something that can actually do 3D on battery power, without the fans going wild.

My initial 3D software tests on the M1 Max are as follows:

  • Substance 3D Designer runs very well via Rosetta 2; iray renders are also fast, at a speed similar to modern gaming notebooks. A native version is coming, according to Adobe.
  • Substance 3D Painter runs amazingly well. I just tested painting on a 10-UDIM-tile VFX model of around 1 million quads and was able to paint across tiles in 4K smoothly on battery power. Ambient occlusion baking was the only area that fell (significantly, I must say) behind an Nvidia-powered computer. Currently Substance 3D Painter runs through Rosetta 2.
  • Marvelous Designer is defunct on M1: the system hangs on complex garments.
  • ZBrush works very well, on par with a high-end desktop such as a 3950X-based machine. The single-core performance of the machine really is that good.
  • Maya runs fine, similarly to other modern notebooks. If it gets native M1 code, it could be amazing.
  • Redshift under a Maya host was a bit disappointing; I'm unsure why, and unsure whether the plugin runs natively under non-native Maya.
  • Blender runs okay. I tested the M1 version. It is completely usable for high-end 3D work, and high-end Cycles scenes render fine, although not very fast. Maybe the M1 optimization is not in an ideal state yet.
Testing 3D software on the M1 Max. Substance Designer will have a native M1 version coming.
Downloading Substance 3D Designer from Adobe gives this note: an Apple silicon version is coming.

I will keep testing different 3D software in longer tests, but the initial results are mind-blowing. What Substance Painter was able to pull off in particular, 10 4K UDIM texture sets on a multi-million-quad model, is no easy feat for a notebook. While working on this model the fans did not even spin up; I could do this kind of work in a café if I really needed to.
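As a back-of-the-envelope illustration of why this is demanding, here are my own rough numbers, assuming uncompressed 8-bit RGBA maps and a four-map PBR set per tile (real projects use compression and different channel counts, so treat this as a ballpark only):

```python
# Rough uncompressed memory footprint of 10 UDIM tiles at 4K.
width = height = 4096
bytes_per_pixel = 4   # assumption: 8-bit RGBA
maps_per_tile = 4     # assumption: base color, normal, roughness, metallic
tiles = 10

mib_per_map = width * height * bytes_per_pixel / 2**20
total_mib = mib_per_map * maps_per_tile * tiles

print(f"{mib_per_map:.0f} MiB per map")  # 64 MiB per map
print(f"{total_mib:.0f} MiB in total")   # 2560 MiB in total
```

Two and a half gigabytes of raw texture data being painted smoothly on battery power is remarkable.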

Even in its current state, I cannot see why I wouldn't be able to use this in production. And if the apps get optimized for M1, this could change the industry.

See also

Here is a handy website for checking which apps are M1 ready.

Rendering out movies in Unreal Engine is somewhat counterintuitive. We can render movies using Render to Video in the Sequencer, which gives us rudimentary options, but we miss out on the ability to queue longer renders, and we can't use anti-aliasing. Enter the Movie Render Queue plugin.

You need to enable the Movie Render Queue plugin under Plugins in order to access the settings. If you need additional render passes, make sure you also enable the Movie Render Queue Additional Render Passes plugin.

Enable the Movie Render Queue plugin and restart the editor first.

It is also now possible to render out ProRes with the Movie Render Queue by enabling the Apple ProRes Media plugin. If you intend to use ProRes 422 HQ, for example, enable this plugin.

With the plugins installed, you can access the Movie Render Queue from the Window > Cinematics menu.

Opening the window will give you a clean slate that is a bit confusing.

Clicking the green + Render button lets us add level sequences to the queue. Each level sequence then has its own settings we can customize, or we can choose an existing preset.

Clicking the Unsaved Config* preset allows us to change the render settings.

Here I have opened the settings dialog of the default preset and added .EXR output instead of the default 8-bit .jpg. Surprisingly, these .EXR files are 16-bit instead of the 32-bit that the standard Sequencer outputs; I do wonder why that is. We can also change the output to ProRes, or change the render method to render out path-traced animations, for example.

Some of the settings in the Movie Render Queue plugin. Having access to anti-aliasing settings is especially useful here.

What's great about this is that we have access to anti-aliasing options here. What I have found works decently well is a Spatial Sample Count of 8 or 16 combined with some Temporal Sample Count. Spatial Sample Count increases the quality of each frame, and Temporal Sample Count helps motion blur, reducing glittering pixels especially on shiny surfaces. I am still experimenting with what works best for quality while keeping the render time decent.
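A rough sketch of how these settings multiply into render cost. The counts and shot length below are my own example values, not recommendations; the point is simply that each output frame gets rendered spatial times temporal times, so the totals grow quickly:

```python
# How Spatial and Temporal Sample Counts multiply in the Movie Render Queue.
spatial_samples = 8    # example value: quality per frame
temporal_samples = 4   # example value: sub-frames accumulated for motion blur
fps = 24
shot_seconds = 10

samples_per_frame = spatial_samples * temporal_samples
total_frames = fps * shot_seconds
total_renders = samples_per_frame * total_frames

print(samples_per_frame)  # 32 sub-frame renders per output frame
print(total_renders)      # 7680 renders for a 10-second shot
```

This is why doubling either count has such a direct impact on render time.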

Lastly, we can save the presets. If we want each sequence's output to point to a different folder, for example, we can also do that here.

Definitely use the Movie Render Queue if you can; it can save a lot of time and give you higher-quality animations.

See the official documentation of the feature over here.

What Apple just announced is definitely fascinating. There is a ton to unpack here, but let me put my thoughts together from the point of view of an ex-Adobe demo artist who had to run Unreal on a laptop in front of corporate clients.

Running heavy 3D software on a notebook (such as Unreal, or Substance Painter with something actually in the viewport) has so far meant immediately finding an outlet and plugging in kilograms worth of power brick. That, or letting the performance die. I can't stress enough how huge the difference between plugged and unplugged has been. To put it simply: without a plug, no 3D, period.

I am quite happy with my 17-inch Zephyrus with 32 GB of memory and a 2080. It works well as long as it is plugged in, and if I set the performance profile to quiet it is also decently quiet, not bad at all. It can even run games in silent mode well enough. But compiling shaders in Unreal Engine is painfully slow compared to my AMD tower. If I were traveling and needed to respond to client work, it would be nearly undoable, as the average shader compile of a new project can take 3x as long as on my desktop, with the computer unusable during that time.

The same goes for my 2019 MacBook Pro, by the way, or make that 4x the wait. If a Mac could do the same job on battery power, that would be fantastic.

The unified memory is fascinating as well. If Substance Painter were optimized to take advantage of the shared GPU memory, this could potentially challenge an RTX 3090 machine. I don't know how feasible that is, but I am a dreamer.

However. At this point it is hard to say how useful the new Mac would be for someone running Substance or Unreal. Key Unreal features such as ray tracing support and Nanite require an Nvidia graphics card, which is non-existent on Apple at the moment. I doubt there will be a workaround for this anytime soon.

No matter how attractive the Apple system is, this would be the deal breaker for me: so many workflows are tied to CUDA or Nvidia ray tracing. Add to this the software missing on Mac, for example the delighting feature of Substance Sampler.

Cinema 4D and, recently, Redshift are fascinating, though, as both reportedly run well on M1. Having Unreal run natively on M1 with ray tracing and Nanite would be very interesting.

UE5 Early Access 2 is fantastic to have as a testing ground. According to Epic, it is not supposed to be used as a production tool, and it is likely to still change somewhat before the final release.

I wanted to list the most annoying problems I have had so far with UE5 EA2, and I plan to update this list with fixes as they come. This is written as of October 17th, 2021.

  1. The High Resolution Screenshot tool crashes... a lot. At this time it is nearly impossible to capture anything higher than 1x this way.
  2. Subsurface doesn't work. (Subsurface Profile, however, works.)
  3. Lumen reflections are often weird and soft (as this is still being developed).
  4. Lumen interior lighting is weird and kind of broken.
  5. No tessellation. The only way to tessellate terrain is to use Virtual Heightfields. This can be a deal breaker.
  6. No Cascade.
  7. Splotches on meshes when using ray tracing. Use this to fix it: r.raytracing.normalbias 32
  8. Wrong textures on Nanite meshes. Here is the fix: set the Skylight to Movable.
  9. VR Preview crashes the editor when playing.
  10. Some 4.27 assets will not work, and Niagara systems from 4.26 will crash the editor.

All this said, this is still early access, and we will likely see most of these addressed in the coming months. I also wish the versioning of the early access were more obvious in the launcher; it is not obvious at all whether you are using EA1 or EA2 and so on.

A few months back I participated in a project as a freelance CG artist making environments. Some organizational changes happened and I was no longer needed. I was on my way out, saying goodbye to folks, when suddenly I was logged out of Slack. I couldn't even finish my sentence or say thanks. I was just deleted.

About a year ago, the same thing happened. I knew the person I was working with, so I could still reach him by email, but similarly I got no notice, just a taste of the rubber boot.

Then again, yesterday, I suddenly got logged out.

I would understand if I had made some unforgivable mistake and earned a kick in the behind, but I didn't. I would like to think I was helpful to the projects. But getting the good old Nokia boot to the rear does make me doubt how valuable I was after all.

For heaven's sake, the proper way to do this is to write at least a short note. I also think Slack could improve its service by at least offering to send one automatically; a pre-written template by Slack would do just fine.

Here is an example for you managers. Feel free to copy-paste this if it is too difficult to write one yourself:

"I'd like to thank you for the collaboration! It has been a blast, and I hope to get to work with you again. As the work is completed, we have to remove your Slack account, but let's keep in touch! You can contact me at XXXX."

I hope this helps.

One of the less intuitive things in the Unreal World Outliner is the way objects such as lights are hidden. Disabling them by clicking the eye icon in the World Outliner turns them off in the viewport; however, reloading the scene re-enables them, causing confusion. The same goes for everything else.

A better way: for lights, toggle Affects World to disable/enable the light without deleting the actor or component.

For meshes, toggle either Visible (hides it completely) and/or Hidden in Game (hides it in the game view but not in the editor view).

This is definitely a head-scratcher. I think the better way is not to use the eye icon at all.

Having splotches on ray-traced objects in Unreal Engine with an HDRI environment? Try this console command:

r.raytracing.normalbias 32

I am really pleased to announce that a very talented web designer has been working on a redesign of this site for several weeks now. The new design is the result of simplification, making this site more robust and easier to use.

I believe in a site without unnecessary annoyance or clutter. I think what we will reveal tomorrow is going to be really beautiful.

There will be a lot of new content to come in the following weeks, so stay tuned.

As always, thanks for visiting my site.