
I'm building an application that will run on both HoloLens and mobile devices (iOS/Android). I'd like to be able to use the same manipulation handlers on all devices, with the following goals:

  1. Use ARFoundation for mobile device tracking and input
  2. Use touch input with MRTK's ManipulationHandler, and otherwise handle touch input as normal (UI); see the sketch after this list.
  3. Simulate touch input in the editor (using a touch screen or mouse) but retain the keyboard/mouse controller for camera positioning.
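For reference, the manipulation targets are set up in what I believe is the standard MRTK v2 way: a collider, a ManipulationHandler for far (pointer ray) interaction, and a NearInteractionGrabbable for near interaction. A minimal sketch (the runtime wiring and class name are just for brevity; normally these components are added in the inspector):

```csharp
using Microsoft.MixedReality.Toolkit.Input;
using Microsoft.MixedReality.Toolkit.UI;
using UnityEngine;

// Minimal sketch of a manipulation target: a collider so pointers can hit it,
// ManipulationHandler for far (pointer ray) manipulation, and
// NearInteractionGrabbable for near (grab/touch) interaction.
public class ManipulationTargetSetup : MonoBehaviour
{
    private void Awake()
    {
        // Pointers only hit objects with a collider.
        if (GetComponent<Collider>() == null)
        {
            gameObject.AddComponent<BoxCollider>();
        }

        // Default settings allow one- and two-handed manipulation.
        gameObject.AddComponent<ManipulationHandler>();

        // Required for near interaction (articulated hands / touch).
        gameObject.AddComponent<NearInteractionGrabbable>();
    }
}
```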

So far I've tried/found:

  1. MixedRealityPlayspace always parents the camera, so I added an ARSessionOrigin to that object, and all the default AR components to the camera (ARCameraManager, TrackedPoseDriver, ARRaycastManager, etc.).
  2. Customizing the MRTK pointer profile to only contain MousePointer and TouchPointer.
  3. Removing superfluous input data providers.
  4. Disabling Hand Simulation in the InputSimulationService.

Generally speaking, the method of adding the ARSessionOrigin to the MixedRealityPlayspace works as expected and ARFoundation is trivial to set up. However, I am struggling to understand how to get the ManipulationHandler to respond to touch input.
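For context, the wiring looks roughly like the sketch below (a simplified, hand-written version of what I have; the class name is mine, and the components can just as well be added in the editor):

```csharp
using Microsoft.MixedReality.Toolkit;
using UnityEngine;
using UnityEngine.SpatialTracking;
using UnityEngine.XR.ARFoundation;

// Rough sketch: put an ARSessionOrigin on the MixedRealityPlayspace and the
// default ARFoundation camera components on the MRTK main camera.
public class ARFoundationPlayspaceSetup : MonoBehaviour
{
    private void Start()
    {
        // An ARSession somewhere in the scene drives tracking.
        if (FindObjectOfType<ARSession>() == null)
        {
            new GameObject("AR Session").AddComponent<ARSession>();
        }

        // MRTK parents the main camera under the MixedRealityPlayspace,
        // so the ARSessionOrigin goes on that object.
        var playspace = MixedRealityPlayspace.Transform.gameObject;
        var origin = playspace.GetComponent<ARSessionOrigin>();
        if (origin == null)
        {
            origin = playspace.AddComponent<ARSessionOrigin>();
        }

        // The camera under the playspace gets the usual AR components.
        Camera cam = Camera.main;
        origin.camera = cam;
        cam.gameObject.AddComponent<ARCameraManager>();
        cam.gameObject.AddComponent<ARCameraBackground>();
        cam.gameObject.AddComponent<ARRaycastManager>();

        // TrackedPoseDriver keeps the camera in sync with the device pose.
        var poseDriver = cam.gameObject.AddComponent<TrackedPoseDriver>();
        poseDriver.SetPoseSource(TrackedPoseDriver.DeviceType.GenericXRDevice,
                                 TrackedPoseDriver.TrackedPose.ColorCamera);
    }
}
```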

I've run into the following issues:

  1. Dragging on a touch screen with a finger moves the camera (in the editor). Disabling the InputSimulationService fixes this, but then I'm unable to move the camera at all.
  2. Even with camera movement disabled, clicking and dragging does not affect the ManipulationHandler.
  3. The debug rays are drawn in the correct direction, but the default TouchPointer rays are drawn in strange positions. I've attached a .gif showing this (touch input in the editor); the same effect is observed running on device (Android).

This also applies to Unity UI (world space canvas): clicking on a UI element does not trigger it, either on device or in the editor, which suggests to me that this is a pointer issue rather than a handler issue.

I would appreciate some advice on how to correctly configure touch and mouse input, both in the editor and on device. The goal is to create each pointer from a raycast through the screen point (using the camera's projection matrix), and to use two-finger touch in the same way that two hand rays are used.
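To be concrete, by a raycast from the screen point I mean behaviour like the plain-Unity sketch below (no MRTK involved; the class name is mine). This is what I'd like the MRTK pointers to reproduce, one pointer per finger:

```csharp
using UnityEngine;

// Sketch of the desired pointer behaviour: for each active touch, build a ray
// from the screen point through the camera projection and raycast it into the scene.
public class ScreenPointRaySketch : MonoBehaviour
{
    private const float MaxDistance = 10f;

    private void Update()
    {
        Camera cam = Camera.main;
        if (cam == null)
        {
            return;
        }

        // One ray per finger; with two fingers this mirrors two hand rays.
        for (int i = 0; i < Input.touchCount; i++)
        {
            Touch touch = Input.GetTouch(i);
            Ray ray = cam.ScreenPointToRay(touch.position);

            if (Physics.Raycast(ray, out RaycastHit hit, MaxDistance))
            {
                // A pointer driven by this ray would manipulate hit.transform here.
                Debug.DrawLine(ray.origin, hit.point, Color.green);
            }
            else
            {
                Debug.DrawRay(ray.origin, ray.direction * MaxDistance, Color.red);
            }
        }
    }
}
```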

This is possible, just saw somebody post a Tweet about it. Asking for more details about how they did it... twitter.com/takabrz1/status/1150550825892114433 – Julia Schwarz
@JuliaSchwarz this is where I got to. It works as expected, with the one exception that graphic raycasters conflict with the world space canvas (CanvasUtility) and I can't seem to get both working at the same time: gist.github.com/camnewnham/d0c4d3a8361ace1d3b9c8cdb16820e6c – newske
Hi @newske, this is great! I looked at this a bit today and came up with similar results. I've created an issue to track this: github.com/microsoft/MixedRealityToolkit-Unity/issues/5390 – Julia Schwarz
Hi @newske, I was able to get Unity UI working on Android today. The gist you posted didn't quite work for me, but this one did (made a few other changes): gist.github.com/julenka/ccb662c2cf2655627c95ffc708cf5a69 – Julia Schwarz

1 Answer


Interacting with Unity UI in world space on a mobile phone is supposed to work in MRTK, but there are a few bugs in the input system preventing it from working. The issue is tracked here: https://github.com/microsoft/MixedRealityToolkit-Unity/issues/5390.

The fix has not been checked in, but you can apply a workaround for now (thanks largely to the work you yourself did, newske!). The workaround is posted in the issue. Please see https://gist.github.com/julenka/ccb662c2cf2655627c95ffc708cf5a69. Just replace each file in MRTK with the version in the gist.