This guy's Star Trek channel is one of my favorites. He does a good job explaining this....
"A little though" (6:10 appx) definitely describes it, per the narrator, along with anyone who's compared film restoration to artificial enhancement. You'll get far more detail from downscaling a native film scan than from re-downscaling an upscaled "AI" image. It's good for some aspects, but characters still look very much like wax mannequins, complete with un-detailed tunics and other adornments.
Did Babylon 5 get remastered for HBO Max? I guess Warner Bros. have more money than Paramount.
They went back to the original film but upscaled the CGI. The show always looked good, but the screencaps from the remastered film negs show a far greater color gamut than any "AI" can begin to accurately reproduce, ditto for actual detail - which are the key tenets of "HD". All one need do is take an episode of the show and pick one frame:

1. Find the timestamp on the SD master and on the new HD master, and set a copy of the SD frame aside as a reference.
2. Use that AI to upscale the SD frame to HD, and compare that revised SD image to the native HD frame.
3. If people still think there's no appreciable difference*, take both of those images, downscale them to SD, then place both next to the original SD reference you set aside earlier.

Compare the results: the shrunken HD image looks far crisper, more lush and vibrant than either SD, while the "AI-enhanced" SD doesn't look much better or different from the original SD - yet the HD image sitting beside both of them still looks much better.** /itIsThatSimple
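The comparison above can be sketched numerically with a toy model in plain Python (no image libraries): a "scene" is just a list of brightness values, downscaling is a box average, and nearest-neighbour repetition stands in for the upscaler - an assumption, but a fair one, since any upscaler can only work from the SD samples it's given.

```python
# Toy model of the SD/HD comparison described above.
# Assumptions: 1-D "images" (lists of brightness values), box-average
# downscaling, nearest-neighbour repetition standing in for the upscaler.

def downscale(pixels, factor):
    """Box-average: each output pixel is the mean of `factor` inputs."""
    return [sum(pixels[i:i + factor]) / factor
            for i in range(0, len(pixels), factor)]

def upscale(pixels, factor):
    """Nearest-neighbour: repeat each pixel `factor` times."""
    return [p for p in pixels for _ in range(factor)]

# A "native HD" scene with fine detail (alternating texture).
hd = [0, 90, 0, 90, 0, 90, 0, 90, 0, 90, 0, 90]

sd = downscale(hd, 3)          # the old SD master: [30.0, 60.0, 30.0, 60.0]
fake_hd = upscale(sd, 3)       # "AI-enhanced" SD blown up to HD size
re_sd = downscale(fake_hd, 3)  # ...and shrunk back down to SD again

# The round trip hands back exactly the SD master: the upscale added no
# real detail, so re-downscaling recovers nothing the SD didn't already have.
assert re_sd == sd
# The native HD, by contrast, still carries the fine texture the SD lost.
assert hd != fake_hd
```

A real test would use actual frames and a proper resampling filter, but the logic is the same: detail that wasn't captured can't be shrunk back out of a guess.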
*
** The larger the native material, the more you can upscale and tweak it. Especially in print media, where you can stretch a 300ppi image down to an effective 150ppi and it'll still look sharp and good enough despite the enlargement. But considering that HD is 1920x1080P for blu-ray definition (I'm not even comparing 4K right now), you're taking 720x480i and stretching it roughly 2.25x vertically and 2.7x horizontally - about six times the pixels. Never mind that "i" means "interlaced" (two half-height fields are combined to form one frame) and "P" means "progressive" (one full frame is shown at once). Now add in ~30 frames per second of motion video: when deinterlacing - the very first step, before you do anything else - you're merging fields while calculating against adjacent frames so you don't end up with motion artifacting, and sometimes frames are even stripped out to eliminate that fluid-motion video effect. Yes, interpolation can restore frames, but it doesn't always work perfectly. It's far easier to remove detail than to artificially recreate it via what amounts to guesswork from a repeating algorithm...

The TL;DR version is even simpler: AI can't do much for very low resolution images, which is why ENTERPRISE's upscale only looks mildly blurry in some f/x shots (rendered in 720P, upscaled to 1080P isn't that much of a stretch), yet running the same upscaling tools on NTSC video still looks very tacky. /sheldonMode