Lack of AVCameraCalibrationData parity in RAW Photo workflows

I am requesting technical clarification and a formal feature addition regarding the availability of per-frame calibration data for Standard RAW captures on iPhone 17 and 17 Pro.

The Technical Gap: Currently, AVCaptureVideoDataOutput provides a direct path to the AVCameraCalibrationData object, allowing real-time access to the Intrinsic Matrix, Radial/Tangential Distortion coefficients, and Lens Shading Maps. However, this same level of geometric transparency is missing from the AVCapturePhoto RAW/ProRAW delegate.
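
For reference, a minimal sketch of the per-frame path that exists on the video side today, using the intrinsic-matrix sample-buffer attachment. It assumes an already configured AVCaptureSession; everything other than the AVFoundation/CoreMedia API names is illustrative, and error handling is omitted.

import AVFoundation
import CoreMedia
import simd

// Opt in to per-frame intrinsic-matrix delivery on the video data output's connection.
func enableIntrinsicsDelivery(on output: AVCaptureVideoDataOutput) {
    guard let connection = output.connection(with: .video),
          connection.isCameraIntrinsicMatrixDeliverySupported else { return }
    connection.isCameraIntrinsicMatrixDeliveryEnabled = true
}

// Inside the AVCaptureVideoDataOutputSampleBufferDelegate callback, the 3x3 intrinsic
// matrix for that exact frame arrives as an attachment on the sample buffer.
func intrinsicMatrix(from sampleBuffer: CMSampleBuffer) -> matrix_float3x3? {
    guard let data = CMGetAttachment(sampleBuffer,
                                     key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
                                     attachmentModeOut: nil) as? Data else { return nil }
    return data.withUnsafeBytes { $0.load(as: matrix_float3x3.self) }
}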

While standard RAW files contain some metadata, they lack the physics sidecar required for precise manual alignment. Because the lens assembly in the iPhone 17 Pro is dynamic, shifted constantly by Optical Image Stabilization (OIS) and high-speed Voice Coil Motors (VCMs) for focus, a static factory calibration is mathematically insufficient for high-precision workflows.
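
To make the consequence concrete, here is a small sketch using purely illustrative numbers (not measured device values): under the standard pinhole model, a principal-point shift caused by OIS propagates one-for-one into reprojection error, so even a few pixels of uncompensated lens travel is enough to break sub-pixel alignment.

import simd

// Pinhole projection p = K * X / z, with K the 3x3 intrinsic matrix (column-major).
func project(_ point: SIMD3<Float>, through K: matrix_float3x3) -> SIMD2<Float> {
    let p = K * point
    return SIMD2<Float>(p.x / p.z, p.y / p.z)
}

// Hypothetical factory calibration: principal point at the nominal sensor center.
let factoryK = matrix_float3x3(columns: (
    SIMD3<Float>(2800, 0, 0),      // fx
    SIMD3<Float>(0, 2800, 0),      // fy
    SIMD3<Float>(2016, 1512, 1)    // cx, cy
))

// Hypothetical per-frame state: OIS has shifted the optical center by a few pixels.
var liveK = factoryK
liveK.columns.2.x += 4             // illustrative 4 px shift in cx
liveK.columns.2.y -= 3             // illustrative 3 px shift in cy

let X = SIMD3<Float>(0.1, 0.2, 1.0)   // a point 1 m in front of the camera
let error = project(X, through: liveK) - project(X, through: factoryK)
print("Reprojection error from using the static calibration: \(error) px")   // (4, -3) px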

The Problem: Without the exact hardware state at the millisecond of exposure, we cannot perform accurate geometric reconstruction from RAW stills. We are forced to choose between the high dynamic range of a RAW sensor dump and the geometric precision of the video pipeline.

Final Questions:

Is there a documented, supported method to force the inclusion of the AVCameraCalibrationData object or a raw metadata sidecar in the AVCapturePhoto workflow?

If not, can Apple provide parity between the Video and Photo APIs so that the "rawest" data (RAW) is accompanied by the "rawest" physics (Calibration Data)?
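
To make the first question concrete, here is a minimal sketch of the photo-side path in question. The class and configuration are illustrative; it assumes photoOutput is an AVCapturePhotoOutput attached to a running AVCaptureSession, and it shows where AVCapturePhoto.cameraCalibrationData would need to be populated for a standard RAW request.

import AVFoundation

final class RawCaptureDelegate: NSObject, AVCapturePhotoCaptureDelegate {

    func captureRAW(using photoOutput: AVCapturePhotoOutput) {
        guard let rawFormat = photoOutput.availableRawPhotoPixelFormatTypes.first else { return }
        let settings = AVCapturePhotoSettings(rawPixelFormatType: rawFormat)

        // Calibration-data delivery can only be requested when the output reports support
        // for it; for the standard single-camera RAW case described above, this is where
        // the parity gap appears.
        if photoOutput.isCameraCalibrationDataDeliverySupported {
            settings.isCameraCalibrationDataDeliveryEnabled = true
        }
        photoOutput.capturePhoto(with: settings, delegate: self)
    }

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        // This is the property that would need to carry the per-exposure intrinsics and
        // distortion for the RAW still; per the gap described in this report, it does not
        // come through for standard RAW captures.
        if let calibration = photo.cameraCalibrationData {
            print("Intrinsics at exposure: \(calibration.intrinsicMatrix)")
        } else {
            print("No AVCameraCalibrationData delivered with the RAW photo")
        }
    }
}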

Providing the pixels without the lens geometry limits the utility of the RAW format for any technical workflow requiring sub-pixel geometric integrity.
