Professor Moriarty wrote:
The overall strategy here was to recreate in LightWave the Arriflex 35mm camera that cinematographer Jerry Finnerman used back in 1967. If I could duplicate this camera in LightWave, then all I would have to do is position a simple rectangle at the appropriate spot to act as my "viewscreen", project an image of the asteroids whizzing by onto its surface as an animated surface texture, and then "film" the viewscreen in LightWave as the LightWave camera pivoted in perfect synchronization with the 1967 meatspace camera. Then it would just be a simple (but tedious) matter of rotoscoping
Bill Shatner in front of the LightWaved viewscreen, and finally, layering my masked viewscreen (with a transparent, animated, Shatner-shaped "hole" in it) on top of the original live action footage. Piece of cake!
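The layering step described above is the standard "over" composite: wherever the matte is transparent (the Shatner-shaped hole), the original live-action plate shows through; everywhere else the rendered viewscreen covers it. A minimal single-channel sketch in plain Python -- the array names and toy values are purely illustrative, not from any actual pipeline:

```python
def over_composite(fg, bg, alpha):
    """Per-pixel 'over' composite: out = fg*a + bg*(1-a).
    alpha 0.0 = transparent (the plate shows through the hole),
    alpha 1.0 = opaque (the rendered viewscreen covers the plate)."""
    return [[fg[y][x] * alpha[y][x] + bg[y][x] * (1.0 - alpha[y][x])
             for x in range(len(fg[0]))]
            for y in range(len(fg))]

# Toy 2x2 single-channel frame: left column is the masked-out "hole".
viewscreen = [[0.8, 0.8], [0.8, 0.8]]   # rendered CG element
plate      = [[0.2, 0.2], [0.2, 0.2]]   # original live-action footage
matte      = [[0.0, 1.0], [0.0, 1.0]]   # 0 = hole, 1 = covered

frame = over_composite(viewscreen, plate, matte)
```

In the hole (alpha 0) the result equals the plate; under the opaque matte (alpha 1) it equals the rendered viewscreen -- exactly the masked-layer effect described above, per frame.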
Unfortunately, analyzing a scene to find the three-dimensional reference points that recreate the position and movement of a meatspace camera in 3-D cyberspace is tricky stuff. It works pretty well when the real-world camera being reconstructed in cyberspace is stationary XYZ-wise and is only pivoting on one of its axes. Unfortunately, Finnerman apparently didn't quite keep the camera locked down while it was pivoting to track William Shatner, which gave my motion tracking software fits. Fortunately, a friend of mine in the UK has a copy of Andersson Technologies' SynthEyes, which does an excellent job of dealing with imperfect camera setups like this. You should have seen the keyframe instructions that thing spit out--literally EVERY frame of that short sequence had to be keyframed--there was not one pair of adjoining frames where the camera didn't shake, wobble, or otherwise do something other than smoothly transit along one axis of rotation!
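A solve with a keyframe on every frame effectively hands the animator a raw per-frame rotation curve with nothing to interpolate. Conceptually, applying it is just building a rotation from each frame's solved angles. A sketch of that idea in plain Python -- the pan/tilt axis order and the sample angle values are assumptions for illustration, not SynthEyes' actual export format:

```python
import math

def pan_tilt_matrix(pan_deg, tilt_deg):
    """Build a 3x3 rotation matrix for a camera that pans (around Y)
    then tilts (around X). With a keyframe on every frame, the solved
    angles go straight in; there is no smooth curve to interpolate."""
    p, t = math.radians(pan_deg), math.radians(tilt_deg)
    pan = [[math.cos(p), 0.0, math.sin(p)],
           [0.0, 1.0, 0.0],
           [-math.sin(p), 0.0, math.cos(p)]]
    tilt = [[1.0, 0.0, 0.0],
            [0.0, math.cos(t), -math.sin(t)],
            [0.0, math.sin(t), math.cos(t)]]
    # matrix product tilt @ pan: tilt applied after the pan
    return [[sum(tilt[i][k] * pan[k][j] for k in range(3))
             for j in range(3)] for i in range(3)]

# A wobbly per-frame solve: every frame gets its own (pan, tilt) key.
solved = [(0.0, 0.0), (0.4, -0.1), (0.9, 0.05)]   # illustrative values
matrices = [pan_tilt_matrix(p, t) for p, t in solved]
```

The point of the sketch is that the camera's orientation is re-specified from scratch on every frame, which is exactly why a shaky handheld-ish pivot produces a key on literally every frame instead of a handful of keys with smooth in-betweens.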
Oy vey, sounds very tedious....
AND YOU GET TO DO IT AGAIN for later in the same episode.
- W -
* However it's nice to know it can be done *