@Professor Moriarty thank you both for the compliments! The main reason I'm able to do people in my animations is Adobe Fuse. Before Adobe killed it off, I was able to export a decent number of characters ready to animate, complete with facial shape keys. The uniforms were made from the base character mesh, with the details in UV-mapped textures. For facial animation I use Face Cap on the iPhone (I bought my partner's old iPhone 11 Pro Max, or whatever it's called), then use Animation Nodes to transfer the shape key animation from the iPhone ARKit blendshapes to the Mixamo shapes. For the body animation I use mocap from Mixamo (Adobe still supports uploading characters and downloading their mocap), then transfer it onto my rig using Rokoko's Blender add-on. For things not covered by mocap, such as a comm badge tap, I've been using Blender's (I think new) auto-IK feature.
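If anyone is curious what that transfer actually does under the hood, it boils down to copying keyframed values between differently named shape keys. Here's a minimal plain-Python sketch of the idea (not my actual Animation Nodes setup; the object names and the name mapping are just examples, the real names on your meshes will differ):

```python
# Bake shape-key animation from ARKit-named blendshapes on a Face Cap mesh
# onto differently named blendshapes on a Fuse/Mixamo character mesh.
# All object and shape-key names below are placeholders -- check your own scene.
import bpy

# ARKit blendshape name -> target shape key name (illustrative mapping)
SHAPE_MAP = {
    "jawOpen": "MouthOpen",
    "eyeBlinkLeft": "Blink_Left",
    "eyeBlinkRight": "Blink_Right",
}

src = bpy.data.objects["FaceCapMesh"]    # mesh driven by the iPhone capture
dst = bpy.data.objects["FuseCharacter"]  # the Fuse/Mixamo character mesh

scene = bpy.context.scene
for frame in range(scene.frame_start, scene.frame_end + 1):
    scene.frame_set(frame)  # evaluate the source animation at this frame
    for arkit_name, target_name in SHAPE_MAP.items():
        value = src.data.shape_keys.key_blocks[arkit_name].value
        key = dst.data.shape_keys.key_blocks[target_name]
        key.value = value
        key.keyframe_insert(data_path="value", frame=frame)
```

You could also wire the same mapping up with drivers instead of baking keyframes, which keeps it live while you tweak the capture.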
If I were to approach it differently today, as in literally today, I'd likely use MetaHuman Creator. It came out this week, it's free, and it was designed with the iPhone mocap approach in mind. The fine print, though, is that all characters must be rendered in Unreal Engine, so I guess that's something I should learn next. Character Creator also has some really cool features, but it's extremely expensive and billed annually. Other options I considered:
- Human Generator for Blender, released a few weeks back. The first feedback I gave the developers was that not having the iPhone ARKit blendshapes as an out-of-the-box option is a deal breaker for me right now, though I could always use it in conjunction with Faceit.
- Daz Studio, which is free, and I could always reuse my Animation Nodes approach for transferring the shape keys.
- KeenTools FaceBuilder, which I used (in its trial version) to make one of my characters.
Some other really cool things are coming out too, like NVIDIA Audio2Face. NVIDIA had their GTC conference last week and I watched the showcase for this new project. The concept is that you import your character, map certain points on it to line up with the pre-trained model, and it then animates the face based on the audio. It was pretty cool; it could use some improvement, but it's still in development.
Once I have the final project posted, my plan is to do a tutorial series on how I made everything. After that I'll likely go back into a modeling phase (the bridge in these renders was built 2-3 years ago and could be improved).
Back to the topic of depth of field: I 100% agree that my scene would look that much better with it. I sat through an entire lecture on the importance of depth of field in my college film studies class. But I don't think I'm going to implement it this first go-around. I'm tired. I feel like I'm spending so much time just trying to keep my animations from crashing that it's exhausting. This is my pandemic project, and the pandemic is almost over, fingers crossed. (One of my projects at work right now is prepping the office space for the eventual return to office life.) BUT, that being said, hobbies will still be hobbies.
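For whenever I do cave on this, though: turning DoF on in Blender is at least cheap, just a couple of properties on the camera. A minimal sketch (the object names are placeholders for whatever is in your scene):

```python
# Enable depth of field on a Blender (2.8+) camera via Python.
# "Camera" and "CaptainsChair" are placeholder names -- use your own objects.
import bpy

cam = bpy.data.objects["Camera"].data
cam.dof.use_dof = True
cam.dof.focus_object = bpy.data.objects.get("CaptainsChair")  # hypothetical focus target
cam.dof.aperture_fstop = 2.8  # lower f-stop = shallower focus
```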