I'm trying to understand Google's ARCore API and pushed their sample project (java_arcore_hello_ar) to GitHub.
In this example, when you deploy the app to your Android device, any horizontal surfaces/planes are detected. If you tap on a detected plane, "Andy" the Android robot is rendered at the location you tap. Pretty cool.
I'm trying to find where in the code:
- Where a horizontal surface/plane gets detected; and
- Where the logic lives to resize & re-orient Andy correctly (I assume that if the point you tap is further away from the camera, he will be rendered smaller, etc.; see the sketch just below this list).
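To be concrete about that second bullet, here is roughly the per-frame draw call I'd expect to find somewhere. This is just my own sketch of the idea, not code from the sample; drawAndy, anchorPose, viewMatrix, projectionMatrix and lightIntensity are placeholder names I made up, and I'm assuming the sample's ObjectRenderer has methods along these lines:

private void drawAndy(Pose anchorPose, float[] viewMatrix, float[] projectionMatrix,
        float lightIntensity) {
    // Build Andy's model matrix from the pose of the tapped/anchored point.
    float[] modelMatrix = new float[16];
    anchorPose.toMatrix(modelMatrix, 0);

    // My assumption: Andy is never explicitly "resized"; the perspective
    // projection is what makes him appear smaller the further away he is.
    mVirtualObject.updateModelMatrix(modelMatrix, /*scaleFactor=*/1.0f);
    mVirtualObject.draw(viewMatrix, projectionMatrix, lightIntensity);
}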
I believe that when planes are detected, the Android framework calls the onSurfaceCreated method:
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    GLES20.glClearColor(0.1f, 0.1f, 0.1f, 1.0f);

    // Create the texture and pass it to ARCore session to be filled during update().
    mBackgroundRenderer.createOnGlThread(/*context=*/this);
    mSession.setCameraTextureName(mBackgroundRenderer.getTextureId());

    // Prepare the other rendering objects.
    try {
        mVirtualObject.createOnGlThread(/*context=*/this, "andy.obj", "andy.png");
        mVirtualObject.setMaterialProperties(0.0f, 3.5f, 1.0f, 6.0f);

        mVirtualObjectShadow.createOnGlThread(/*context=*/this,
            "andy_shadow.obj", "andy_shadow.png");
        mVirtualObjectShadow.setBlendMode(BlendMode.Shadow);
        mVirtualObjectShadow.setMaterialProperties(1.0f, 0.0f, 0.0f, 1.0f);
    } catch (IOException e) {
        Log.e(TAG, "Failed to read obj file");
    }
    try {
        mPlaneRenderer.createOnGlThread(/*context=*/this, "trigrid.png");
    } catch (IOException e) {
        Log.e(TAG, "Failed to read plane texture");
    }

    mPointCloud.createOnGlThread(/*context=*/this);
}
However, that code looks like it assumes the user has already tapped on the surface. I'm not seeing an if-conditional that basically says "render Andy if the user has tapped on a detected plane/surface". Can anyone spot where this might be happening?
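For what it's worth, this is the shape of the conditional I was expecting to see, presumably somewhere in the per-frame update path. Again, this is only my own sketch with guessed names (mQueuedTaps, mAnchors, mSession.addAnchor, and even the PlaneHitResult/isHitInPolygon types are my assumptions), not code taken from the sample:

// My guess at the missing "if the user tapped a detected plane, place Andy" logic.
MotionEvent tap = mQueuedTaps.poll();
if (tap != null) {
    // Ask ARCore what the tap ray intersects in the real world.
    for (HitResult hit : frame.hitTest(tap)) {
        // Only place Andy if the tap actually landed inside a detected plane.
        if (hit instanceof PlaneHitResult && ((PlaneHitResult) hit).isHitInPolygon()) {
            // Anchor the hit pose so Andy can be drawn there on every frame.
            mAnchors.add(mSession.addAnchor(hit.getHitPose()));
            break;
        }
    }
}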