The Camera Array seems to be designed as one rig housing multiple cameras in a circular arrangement, which could rotate to point a given camera at the target. That matters because real-world sensors can only capture a specific band of the EM spectrum at a time: visible-light sensors are different from IR, which are different from NIR, which are different from X-ray, etc. Hence the rotation of the rig, pointing a different sensor at the target depending on which of the real EM wavelengths constantly radiating through space you want to capture. TL;DR = My theory is that each dome houses a camera with a different sensor, and the rig rotates to point each one at the target in turn, capturing the full set of spectra.
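A minimal sketch of the rig idea above: one dome per EM band, with the rig rotating so the right dome faces the target. Everything here (the `SensorRig` class, band names, angle convention) is a hypothetical illustration of the theory, not any real API.

```python
class SensorRig:
    """Hypothetical circular rig: one sensor dome per EM band."""

    def __init__(self, bands):
        # Evenly space the domes around the circular rig.
        self.bands = list(bands)
        step = 360 / len(self.bands)
        self.dome_angle = {band: i * step for i, band in enumerate(self.bands)}
        self.heading = 0.0  # current rotation of the rig, in degrees

    def point(self, band):
        # Rotate the rig so the dome for `band` faces the target (angle 0).
        self.heading = -self.dome_angle[band] % 360
        return self.heading

    def capture_full_set(self):
        # Rotate through every dome in turn to image the target in each band.
        return [(band, self.point(band)) for band in self.bands]

rig = SensorRig(["visible", "NIR", "IR", "X-ray"])
print(rig.capture_full_set())
```

The point of the sketch is just the mechanics: because each dome can only see its own band, a "full set" capture necessarily means cycling the rig through every dome.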