
I am working on a POC using the sample provided by Apple: https://developer.apple.com/sample-code/wwdc/2017/PlacingObjects.zip.

Right now, placing the object works fine after detecting a surface. But when I move the object from the detected surface into some other space, like a wall or an obstacle, the 3D object overlaps with it.

Is it possible to detect obstacles while placing/moving the 3D object through the camera? Is there any API available in ARKit to find obstacles on a surface?

If not, is there any workaround or calculation we can do to find the obstacle/wall and prevent the user from placing/moving the object onto/beyond it?

In terms of walls and such, ARKit is still unable to detect vertical surfaces. So there might be an issue with detecting those, or even non-horizontal surfaces. The only 'obstacles' it might be able to detect are other virtual objects that you've placed, and you can handle those by detecting collisions and giving them physics bodies. Sadly, I don't think you'll be able to do what you're looking to do. – Alan
@AlanS Is there any way we can find out whether the space on which the virtual object is placed is a horizontal surface or not? – yaali
Sorry, I didn't particularly understand. Do you mean the space over the object or the space the object is over? For the space over an object, I'm not too sure how you could check that; for the space under an object, you can essentially use horizontal plane detection (see the sketch after these comments). – Alan
@AlanS In simple words, I want to detect whether my entire object is placed on a proper horizontal surface or not. How do I do that? – yaali
I'm sorry to say it, but I'm not sure how you would do that. – Alan
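
For what it's worth, here is a minimal sketch of the horizontal-plane check Alan mentions above, assuming an ARSCNView called `sceneView` and a drag gesture over the scene (`canPlaceObject` and `virtualObjectNode` are placeholder names, not from the sample project). This only tells you whether the point under the object lies on a plane ARKit has already detected; it does not detect walls or other undetected obstacles:

```swift
import UIKit
import ARKit

// Sketch: returns true only when the given screen point falls on a
// horizontal plane that ARKit has already detected.
// `sceneView` is assumed to be your ARSCNView.
func canPlaceObject(at screenPoint: CGPoint, in sceneView: ARSCNView) -> Bool {
    // .existingPlaneUsingExtent restricts the hit test to the current
    // boundary of detected planes, so points on walls or in mid-air
    // return no results.
    let results = sceneView.hitTest(screenPoint, types: .existingPlaneUsingExtent)
    return !results.isEmpty
}

// Example usage inside a drag gesture handler (names are placeholders):
// let point = gesture.location(in: sceneView)
// if let result = sceneView.hitTest(point, types: .existingPlaneUsingExtent).first {
//     let t = result.worldTransform.columns.3
//     virtualObjectNode.simdPosition = simd_float3(t.x, t.y, t.z)
// }
```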

1 Answer


The short answer at this stage is no, unfortunately.

Detecting vertical planes, or objects in a scene, is quite difficult. My understanding is that Apple is working on vertical plane detection, and that there are a couple of startups doing the object detection stuff.

The best option will be to wait for 6d.ai, as this is what they are working on (although they are in stealth, so it's hard to tell exactly).

If you have any Core ML experience, then you could use an object detection model (find a third-party one) to recognise objects in a scene and use that as a proxy for geometry that is off limits. There's also Matroid, which provides object detection/tracking capabilities.
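
As a rough illustration of that Core ML route (not a definitive implementation), the sketch below uses the Vision framework to run a detection model on each ARFrame's camera image. `ObjectDetector` is a placeholder for whatever third-party model class you compile into the app, the image orientation may need adjusting for your device orientation, and mapping the resulting 2D bounding boxes back into 3D "off-limits" geometry is left to your app:

```swift
import ARKit
import Vision

// Sketch: run a (third-party) Core ML object detection model on the
// current camera frame. "ObjectDetector" is a placeholder model class.
final class ObstacleDetector {
    private lazy var request: VNCoreMLRequest? = {
        guard let model = try? VNCoreMLModel(for: ObjectDetector().model) else { return nil }
        return VNCoreMLRequest(model: model) { request, _ in
            // Detections come back as 2D bounding boxes in normalized image
            // space; treating them as "off-limits" regions for object
            // placement is up to your own app logic.
            let observations = request.results as? [VNRecognizedObjectObservation] ?? []
            for observation in observations {
                let label = observation.labels.first?.identifier ?? "object"
                print("Found \(label) at \(observation.boundingBox)")
            }
        }
    }()

    // Call this from session(_:didUpdate:) with the latest ARFrame.
    func detectObstacles(in frame: ARFrame) {
        guard let request = request else { return }
        // Orientation is assumed here to be portrait (.right); adjust as needed.
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                            orientation: .right)
        try? handler.perform([request])
    }
}
```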

The following are not specific ARKit / iOS examples, but might help you later on.

Vuforia has support for scene understanding: https://library.vuforia.com/articles/Training/Getting-Started-with-Smart-Terrain

HoloLens sort of has support for it as well: https://elbruno.com/2017/04/21/hololens-spatial-understanding-vs-spatial-mapping-and-a-step-by-step-on-how-to-use-it/