Build a vision capture system for Apple Vision Pro

Hi, I am a newbie here.

We have been given a task to build a robotic vision system that captures immersive video in a hazy environment, to be played back later on Apple Vision Pro. I am thinking of starting with 2 or 4 basic CMOS camera sensors, such as the IMX378, AR0144, or VD66GY, and designing an FPGA-based circuit to synchronously capture and store the raw data frame by frame. Some initial per-frame processing, such as demosaicing and filtering, could also be done on the FPGA. Then I would use software post-processing to convert the data into a video format compatible with Apple Vision Pro.
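To make the post-processing step concrete, here is a rough sketch of what I have in mind for the first stage on the software side: turning a raw RGGB Bayer frame from a sensor into an RGB image. This is only an illustration (a crude half-resolution 2x2 demosaic in numpy, with `demosaic_rggb` being my own placeholder name); a real pipeline would use something like OpenCV's `cv2.cvtColor` with a `COLOR_Bayer*` code, or a proper ISP:

```python
import numpy as np

def demosaic_rggb(raw):
    """Crude half-resolution demosaic of an RGGB Bayer frame.

    Each 2x2 Bayer cell (R G / G B) becomes one RGB pixel, so a
    (H, W) raw frame yields a (H/2, W/2, 3) image. Illustration
    only -- no interpolation, white balance, or denoising.
    """
    r = raw[0::2, 0::2]                      # red sample of each cell
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0  # average both greens
    b = raw[1::2, 1::2]                      # blue sample of each cell
    return np.stack([r, g, b], axis=-1)

# Example: a synthetic 4x4 Bayer frame with flat R=100, G=50, B=200
raw = np.zeros((4, 4), dtype=np.float32)
raw[0::2, 0::2] = 100
raw[0::2, 1::2] = 50
raw[1::2, 0::2] = 50
raw[1::2, 1::2] = 200
rgb = demosaic_rggb(raw)  # shape (2, 2, 3), each pixel [100, 50, 200]
```

After that, my (possibly naive) assumption is that I would stack the synchronized left/right streams into a stereo layout and encode them, with a final conversion into Apple's MV-HEVC spatial/immersive format as a separate step. Corrections welcome if that framing is wrong.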

Will this idea work? I can handle the raw data capture, but I'm unsure whether the overall approach is feasible and what post-processing software I should use.

Thanks a lot for your suggestions!

Charlie
