
I'm trying to understand Google's ARCore API and pushed their sample project (java_arcore_hello_ar) to GitHub.

In this example, when you deploy the app to your Android device, any horizontal surfaces/planes are detected. If you tap on a detected plane, "Andy" the Android robot will be rendered at the location you tap. Pretty cool.

I'm trying to find where in the code:

  1. Where a horizontal surface/plane gets detected; and
  2. Where the logic lives to resize & re-orient Andy correctly (I assume if the point you tap is further away from the camera, he will be rendered smaller, etc.)

I believe that when planes are detected, the Android framework calls the onSurfaceCreated method:

@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    GLES20.glClearColor(0.1f, 0.1f, 0.1f, 1.0f);

    // Create the texture and pass it to ARCore session to be filled during update().
    mBackgroundRenderer.createOnGlThread(/*context=*/this);
    mSession.setCameraTextureName(mBackgroundRenderer.getTextureId());

    // Prepare the other rendering objects.
    try {
        mVirtualObject.createOnGlThread(/*context=*/this, "andy.obj", "andy.png");
        mVirtualObject.setMaterialProperties(0.0f, 3.5f, 1.0f, 6.0f);

        mVirtualObjectShadow.createOnGlThread(/*context=*/this,
            "andy_shadow.obj", "andy_shadow.png");
        mVirtualObjectShadow.setBlendMode(BlendMode.Shadow);
        mVirtualObjectShadow.setMaterialProperties(1.0f, 0.0f, 0.0f, 1.0f);
    } catch (IOException e) {
        Log.e(TAG, "Failed to read obj file");
    }
    try {
        mPlaneRenderer.createOnGlThread(/*context=*/this, "trigrid.png");
    } catch (IOException e) {
        Log.e(TAG, "Failed to read plane texture");
    }
    mPointCloud.createOnGlThread(/*context=*/this);
}

However, that code looks like it assumes the user has already tapped on a surface. I'm not seeing an if-conditional that basically says "Render Andy if the user has tapped on a detected plane/surface." Can anyone spot where this might be happening?


1 Answer


The tap detection is done in the mGestureDetector:

mGestureDetector = new GestureDetector(this, new GestureDetector.SimpleOnGestureListener() {
    @Override
    public boolean onSingleTapUp(MotionEvent e) {
        onSingleTap(e);
        return true;
    }

    @Override
    public boolean onDown(MotionEvent e) {
        return true;
    }
});

This gesture detector is linked to the SurfaceView:

mSurfaceView.setOnTouchListener(new View.OnTouchListener() {
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        return mGestureDetector.onTouchEvent(event);
    }
});

Both things happen in onCreate(), so now every time you tap the surface view (the "main" view in the activity),

private void onSingleTap(MotionEvent e) {
    // Queue tap if there is space. Tap is lost if queue is full.
    mQueuedSingleTaps.offer(e);
}

is called and the tap is stored. This queue is then processed during every frame draw (which in turn is issued by the system's UI drawing cycle), here:

MotionEvent tap = mQueuedSingleTaps.poll();
if (tap != null && frame.getTrackingState() == TrackingState.TRACKING) {
    for (HitResult hit : frame.hitTest(tap)) {
       ...

This adds a new anchor (i.e. a point "locked" in the physical world) at which an Android object is rendered (cf. this line).
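
For reference, the if-conditional being asked about lives inside that hit-test loop. The following is roughly what the elided body looks like in the developer-preview version of the sample (paraphrased; class and method names such as PlaneHitResult, PlaneAttachment and isHitInPolygon come from that SDK version and may differ in later ARCore releases):

for (HitResult hit : frame.hitTest(tap)) {
    // Only react if a detected plane was hit, and the hit lies inside the
    // plane's polygon (i.e. the part of the plane ARCore has actually mapped).
    if (hit instanceof PlaneHitResult && ((PlaneHitResult) hit).isHitInPolygon()) {
        // Cap the number of objects so the renderer and ARCore aren't overloaded.
        if (mTouches.size() >= 16) {
            mSession.removeAnchors(Arrays.asList(mTouches.get(0).getAnchor()));
            mTouches.remove(0);
        }
        // Adding an Anchor tells ARCore to keep tracking this position in space.
        // The PlaneAttachment pairs the anchor with the plane it belongs to.
        mTouches.add(new PlaneAttachment(
            ((PlaneHitResult) hit).getPlane(),
            mSession.addAnchor(hit.getHitPose())));
        // Hits are sorted by depth, so only the closest hit on a plane is kept.
        break;
    }
}

As for resizing and re-orienting Andy: the sample does not resize him explicitly. Later in onDrawFrame each anchor's pose is turned into a model matrix and the object is drawn with the camera's view and projection matrices, so the perspective projection is what makes him appear smaller when the tapped point is farther away, and the plane/anchor pose is what orients him on the surface. Again as a rough sketch of that part of the sample (mAnchorMatrix, viewmtx, projmtx and lightIntensity are locals/fields of the same method/activity):

// Visualize the anchors created by taps.
float scaleFactor = 1.0f;
for (PlaneAttachment planeAttachment : mTouches) {
    if (!planeAttachment.isTracking()) {
        continue;
    }
    // Combined pose of the Anchor and Plane in world space -> model matrix.
    planeAttachment.getPose().toMatrix(mAnchorMatrix, 0);

    // Draw Andy and his shadow; the view/projection matrices handle apparent size.
    mVirtualObject.updateModelMatrix(mAnchorMatrix, scaleFactor);
    mVirtualObjectShadow.updateModelMatrix(mAnchorMatrix, scaleFactor);
    mVirtualObject.draw(viewmtx, projmtx, lightIntensity);
    mVirtualObjectShadow.draw(viewmtx, projmtx, lightIntensity);
}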