Apple does not offer any kind of API for providing your own pixel buffers for use by ARKit’s world tracking mechanisms. The APIs you see in the header files and documentation are all the APIs there are.
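For concreteness, here's a minimal sketch (Swift, assuming an ordinary `ARSession` setup) of the direction data flows in the public API: ARKit hands frames to you through `ARSessionDelegate`, and `ARFrame.capturedImage` is get-only, with no counterpart for handing ARKit a buffer of your own.

```swift
import ARKit

class SessionObserver: NSObject, ARSessionDelegate {
    // ARKit pushes frames out to you; there is no API going the other way.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // capturedImage is a get-only CVPixelBuffer. You can read or copy it
        // (custom rendering, ML, etc.), but by the time it reaches you ARKit
        // has already consumed it for tracking, and there's no setter or
        // "submit my own buffer" counterpart anywhere in the framework.
        let pixelBuffer: CVPixelBuffer = frame.capturedImage
        _ = pixelBuffer
    }
}
```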
If you think about it, such an API probably isn’t very feasible anyway. Dig even just a little bit into the details of how visual-inertial odometry works, and you’ll notice that world tracking crucially requires two things:
1. detailed knowledge of how optics and imaging sensors interact to determine the spatial relationship between a pixel in the camera feed and a real-world feature some distance from the camera
2. precise time synchronization between the imaging pipeline and physical motion sensing systems (accelerometer, gyroscope, IMU sensor fusion hardware or software); you can see traces of both in the sketch after this list
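Both requirements peek through the public API, but only as read-only values, which hints at how tightly they're managed inside ARKit. A rough sketch (the function name is mine, and it assumes you've already started device-motion updates on the `CMMotionManager` you pass in):

```swift
import ARKit
import CoreMotion

// Hypothetical helper: assumes device-motion updates were already started
// on the CMMotionManager passed in.
func inspectTimingAndOptics(frame: ARFrame, motion: CMMotionManager) {
    // (1) Optics/sensor geometry: a pinhole intrinsics matrix relating pixel
    // coordinates in capturedImage to rays in camera space. Read-only.
    let K = frame.camera.intrinsics                  // simd_float3x3
    let focalLength = SIMD2(K[0][0], K[1][1])        // fx, fy in pixels
    let principalPoint = SIMD2(K[2][0], K[2][1])     // cx, cy in pixels

    // (2) Timing: ARFrame.timestamp uses the same uptime-based clock as
    // CoreMotion timestamps, which is what lets a frame's capture time be
    // lined up against IMU samples. ARKit does that alignment internally;
    // you can observe it, but not adjust it or insert your own delays.
    if let imu = motion.deviceMotion {
        let skew = imu.timestamp - frame.timestamp
        print("fx/fy: \(focalLength), principal point: \(principalPoint), frame-to-IMU skew: \(skew)s")
    }
}
```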
If a user-space app gets to modify pixel buffers in between their capture by the camera and their use by ARKit, (1) no longer holds, so tracking quality would likely suffer — the world tracking algorithms aren’t seeing the world in the way they expect to.
Additionally, any modification to pixel buffer contents takes nonzero time, so (2) no longer holds either — the system has no idea how much delay your image processing has introduced (assuming it’s even predictable), so the image timing doesn’t match up with the motion sensor timing. At best, tracking quality is mostly unhindered but there’s a noticeable lag. (More likely everything falls apart.)
If what you mean by “boost tracking quality” actually has less to do with how tracking works and more to do with how the user experiences your app, there might be other avenues open to you.
For example, if the user is marking points in space to measure a room, but has a hard time precisely tapping on the corner where wall meets floor, you can offer UI to assist with that task. Maybe zoom in on the camera feed while the user drags a marker, or encourage the user to physically move closer to the real-world feature they want to mark.
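If you're using `ARSCNView`, one way to build the "zoom while dragging" assist is a small loupe view fed from a cropped snapshot of the rendered scene, plus a raycast to keep the marker pinned in world space. This is only a sketch under those assumptions (iOS 13+ for the raycast API); `MarkerAssistViewController`, `updateMarker(at:)`, and the loupe sizing are made-up names and values, not anything ARKit provides.

```swift
import ARKit
import UIKit

class MarkerAssistViewController: UIViewController {
    @IBOutlet var sceneView: ARSCNView!       // assumed to be wired up in a storyboard
    private let loupe = UIImageView()         // small zoomed preview shown while dragging

    override func viewDidLoad() {
        super.viewDidLoad()
        loupe.frame = CGRect(x: 16, y: 60, width: 120, height: 120)
        loupe.layer.borderWidth = 1
        loupe.isHidden = true
        view.addSubview(loupe)
        let pan = UIPanGestureRecognizer(target: self, action: #selector(dragMarker(_:)))
        sceneView.addGestureRecognizer(pan)
    }

    @objc func dragMarker(_ gesture: UIPanGestureRecognizer) {
        let point = gesture.location(in: sceneView)

        // Zoomed preview: crop a small square of the rendered view around the
        // finger and blow it up, so the user can see exactly what they're marking.
        // (snapshot() on every pan update is expensive; fine for a sketch, but a
        // real app would want a cheaper way to render the magnified region.)
        if let snapshot = sceneView.snapshot().cgImage {
            let scale = CGFloat(snapshot.width) / sceneView.bounds.width
            let side: CGFloat = 60 * scale
            let cropRect = CGRect(x: point.x * scale - side / 2,
                                  y: point.y * scale - side / 2,
                                  width: side, height: side)
            if let cropped = snapshot.cropping(to: cropRect) {
                loupe.image = UIImage(cgImage: cropped)
                loupe.isHidden = false
            }
        }

        // Keep the marker pinned by raycasting from the current touch point.
        if let query = sceneView.raycastQuery(from: point,
                                              allowing: .estimatedPlane,
                                              alignment: .any),
           let result = sceneView.session.raycast(query).first {
            updateMarker(at: result.worldTransform)   // hypothetical helper
        }

        if gesture.state == .ended || gesture.state == .cancelled {
            loupe.isHidden = true
        }
    }

    func updateMarker(at transform: simd_float4x4) {
        // Move an SCNNode (or similar) to the raycast hit; details omitted.
    }
}
```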