I am developing an ARKit-based application that requires detecting the plane of the tabletop at which the user is seated. Early testing was with an iPhone 8 and iPhone 8 Plus; with those devices, ARKit rapidly detected the tabletop plane when the phone was only 8 to 10 inches away. Using an iPhone 15 with the same code, I have to move the phone roughly 15 to 16 inches away before the table's plane is detected, which is an awkward motion for a user seated at a table. To confirm this is not specific to my code, I verified that the same behavior occurs with Apple's sample AR Interaction application. Has anyone else experienced this, and if so, do you have suggestions to improve the situation?
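For reference, the setup involved is standard horizontal plane detection; below is a minimal sketch of the kind of configuration in question and how I compare detection distance across devices. The PlaneDetectionLogger class and the distance logging are illustrative, not code taken from my app or from Apple's sample.

```swift
import ARKit
import simd

// Minimal sketch: standard horizontal plane detection plus logging of how far
// the camera is from each newly detected plane, to compare devices.
final class PlaneDetectionLogger: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]   // the tabletop is a horizontal plane
        session.delegate = self
        session.run(configuration)
    }

    // Called when ARKit adds new anchors, including detected planes.
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        guard let camera = session.currentFrame?.camera else { return }
        for case let plane as ARPlaneAnchor in anchors {
            let planePosition = plane.transform.columns.3
            let cameraPosition = camera.transform.columns.3
            let distance = simd_distance(
                SIMD3(planePosition.x, planePosition.y, planePosition.z),
                SIMD3(cameraPosition.x, cameraPosition.y, cameraPosition.z)
            )
            print(String(format: "Plane detected ~%.2f m (%.0f in) from the camera",
                         distance, distance / 0.0254))
        }
    }
}
```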
Hello,
The difference in cameras between these phones could help explain the behavior, as could the amount of machine learning applied to the camera feed. In addition, if the iPhone 15 is a Pro model with a LiDAR Scanner, that affects plane detection as well.
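If you want to take advantage of a LiDAR-equipped device when one is available, you can opt into scene reconstruction only where it is supported, which may help planes appear sooner and at closer range on those devices. A minimal sketch (the makeConfiguration helper is illustrative):

```swift
import ARKit

// Minimal sketch: enable scene reconstruction alongside plane detection,
// but only on devices with a LiDAR Scanner that support it.
func makeConfiguration() -> ARWorldTrackingConfiguration {
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = [.horizontal]

    // supportsSceneReconstruction(_:) is only true on LiDAR-equipped devices.
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        configuration.sceneReconstruction = .mesh
    }
    return configuration
}
```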
Two things you could try:
- Scan an area further from the player on the iPhone 15, i.e. scan the seat opposite them rather than top-down.
- Move the iPhone side-to-side when scanning for planes. This motion helps ARKit gather feature points to establish tracking and then detect planes (see the sketch below).
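A related option is ARKit's ARCoachingOverlayView, which prompts the user through that motion automatically until a horizontal plane is found, rather than relying on them to back away from the table. A minimal sketch, assuming a UIKit view controller that owns the ARSession:

```swift
import ARKit
import UIKit

// Minimal sketch: add a coaching overlay that guides the user through the
// device motion needed to detect a horizontal plane.
func addCoachingOverlay(to view: UIView, session: ARSession) {
    let coachingOverlay = ARCoachingOverlayView()
    coachingOverlay.session = session
    coachingOverlay.goal = .horizontalPlane      // prompt until a horizontal plane is found
    coachingOverlay.activatesAutomatically = true
    coachingOverlay.translatesAutoresizingMaskIntoConstraints = false

    view.addSubview(coachingOverlay)
    NSLayoutConstraint.activate([
        coachingOverlay.topAnchor.constraint(equalTo: view.topAnchor),
        coachingOverlay.bottomAnchor.constraint(equalTo: view.bottomAnchor),
        coachingOverlay.leadingAnchor.constraint(equalTo: view.leadingAnchor),
        coachingOverlay.trailingAnchor.constraint(equalTo: view.trailingAnchor),
    ])
}
```

The overlay dismisses itself once the goal is met, so a seated user is guided through the side-to-side motion instead of being left to discover it on their own.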