13 votes

I'm currently writing an application that displays a lot, and I mean a lot, of 2D paths (each made of hundreds or thousands of tiny segments) on an HTML5 canvas. Typically, a few million points in total. These points are downloaded from a server into a binary ArrayBuffer.

I probably won't be using that many points in the real world, but I'm kinda interested in how I could improve the performance. You can call it curiosity if you want ;)

Anyway, I've tested the following solutions:

  1. Using gl.LINES or gl.LINE_STRIP with WebGL, and computing everything in shaders on the GPU. Currently the fastest; it can display up to 10M segments without flinching on my MacBook Air. But there are very strict constraints on the binary format if you want to avoid processing things in JavaScript, which is slow (sketched right after this list).

  2. Using Canvas2D, drawing a huge path with all the segments in one stroke() call. Once I get past 100k points, the page freezes for a few seconds before the canvas is updated. So, not working here.

  3. Using Canvas2D, but drawing each path with its own stroke() call. Despite what others have been saying on the internet, this is much faster than drawing everything in one call, but still a lot slower than WebGL. Things start to get bad when I reach about 500k segments (both Canvas2D variants are sketched below, after the WebGL one).
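Solution 1 boils down to something like this stripped-down sketch. There's no error checking, canvas and arrayBuffer are assumed to already exist, and the buffer is assumed to hold interleaved x,y pairs already in clip space, which is exactly the strict binary-format constraint I mentioned:

    // Sketch of solution 1: hand the downloaded ArrayBuffer to the GPU as-is
    // and draw it as a line strip, so JavaScript never loops over the points.
    const gl = canvas.getContext('webgl');

    const vsSource =
      'attribute vec2 a_position;' +
      'void main() { gl_Position = vec4(a_position, 0.0, 1.0); }';
    const fsSource =
      'precision mediump float;' +
      'void main() { gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); }';

    function compile(type, source) {
      const shader = gl.createShader(type);
      gl.shaderSource(shader, source);
      gl.compileShader(shader);
      return shader;
    }

    const program = gl.createProgram();
    gl.attachShader(program, compile(gl.VERTEX_SHADER, vsSource));
    gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fsSource));
    gl.linkProgram(program);
    gl.useProgram(program);

    // Zero-copy view: the binary data goes straight into the vertex buffer.
    const positions = new Float32Array(arrayBuffer); // x0,y0,x1,y1,...
    gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
    gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);

    const loc = gl.getAttribLocation(program, 'a_position');
    gl.enableVertexAttribArray(loc);
    gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);

    gl.drawArrays(gl.LINE_STRIP, 0, positions.length / 2);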
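And the two Canvas2D variants, solutions 2 and 3, look roughly like this. It's a simplified sketch: points is a Float32Array view over the buffer, and pathOffsets is a placeholder for however the path boundaries are encoded (here, the starting coordinate index of each path):

    const ctx = canvas.getContext('2d');

    // Solution 2: one giant path, a single stroke() at the end. This is the
    // version that freezes the page past ~100k points.
    function drawAsOnePath(points) {
      ctx.beginPath();
      ctx.moveTo(points[0], points[1]);
      for (let i = 2; i < points.length; i += 2) {
        ctx.lineTo(points[i], points[i + 1]);
      }
      ctx.stroke();
    }

    // Solution 3: stroke each path separately. More calls, but each path is
    // small, and overall it comes out much faster than the single stroke().
    function drawPathByPath(points, pathOffsets) {
      for (let p = 0; p < pathOffsets.length; p++) {
        const start = pathOffsets[p];
        const end = (p + 1 < pathOffsets.length) ? pathOffsets[p + 1] : points.length;
        ctx.beginPath();
        ctx.moveTo(points[start], points[start + 1]);
        for (let i = start + 2; i < end; i += 2) {
          ctx.lineTo(points[i], points[i + 1]);
        }
        ctx.stroke();
      }
    }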

The two Canvas2D solutions require looping through all the points of all the paths in JavaScript, which is quite slow. Do you know of any method(s) that could improve iteration speed over an ArrayBuffer in JavaScript, or processing speed in general?
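For instance, I'm wondering whether a single typed-array view over the buffer beats reading floats one by one through a DataView; something like this sketch (the variable names are placeholders):

    // Per-element reads through a DataView: one method call per float.
    const view = new DataView(arrayBuffer);
    let slow = 0;
    for (let i = 0; i < arrayBuffer.byteLength; i += 4) {
      slow += view.getFloat32(i, true);
    }

    // One zero-copy Float32Array view, then plain sequential indexing,
    // which JavaScript engines optimize well.
    const floats = new Float32Array(arrayBuffer);
    let fast = 0;
    for (let i = 0; i < floats.length; i++) {
      fast += floats[i];
    }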

What's also strange is that the screen isn't updated immediately after all the canvas draw calls have finished. When I start approaching the performance limit, there is a noticeable delay between the end of the draw calls and the canvas actually updating. Do you have any idea where that comes from, and is there a way to reduce it?
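For what it's worth, here's roughly how I've been observing the gap. requestAnimationFrame fires just before the next paint, so the second measurement should approximate the deferred work, at least the part that happens on the main thread (drawEverything is a placeholder for my draw code):

    const t0 = performance.now();
    drawEverything(ctx); // placeholder: issues all the canvas draw calls
    const t1 = performance.now();
    requestAnimationFrame(() => {
      const t2 = performance.now();
      console.log('draw calls took ' + (t1 - t0).toFixed(1) + ' ms, ' +
                  'next frame came ' + (t2 - t1).toFixed(1) + ' ms later');
    });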

Tell us more about your app's requirements. What's the nature of these paths? Do these paths change? Or just move? – Simon Sarris
Have you considered batching paths in some quantity other than one or all? Say, in groups of 1000? I'm guessing that might give you a substantial speedup. – Robin Caron
@SimonSarris: This is a mapping application with vector paths, so they can pretty much change whenever they want, as they are sent by the server. As I said, I probably won't use that many in the real world, but I'm curious as to how much performance I can squeeze out of the JS engine! – F.X.
@RobinCaron: That's a nice idea, I'll investigate that! – F.X.
What browser version are you testing this on? I'm not sure at all whether this is the cause, but Chrome Canary got "deferred rendering" for canvas recently. Perhaps that makes for the delay between calling and actual rendering. Try disabling it on chrome://flags and see if it changes things. – pimvdb

1 Answer

8 votes

First off, WebGL was a nice and hyped idea, but the amount of processing required to decode and display the binary data simply doesn't work in shaders, so I ruled it out.

Here are the main bottlenecks I've encountered. Some of them are quite common in general programming, but it's a good idea to remember them:

  • It's best to use multiple, small for loops
  • Create variables and closures at the highest level possible; don't create them inside the for loops
  • Render your data in chunks, and use setTimeout to schedule the next chunk after a few milliseconds: that way, the user will still be able to use the UI (sketched right after this list)
  • JavaScript objects and arrays are fast and cheap; use them. It's best to read/write them in sequential order, from beginning to end.
  • If you don't write data sequentially in an array, use objects (because non-sequential reads/writes are cheap for objects) and push the indexes into an index array. I used a SortedList implementation to keep the indexes sorted, which I found here. The overhead was minimal (about 10-20% of the rendering time), and in the end it was well worth it (see the second sketch below).
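To illustrate the chunking point, here's a minimal sketch. The chunk size and points (a Float32Array of x,y pairs) are placeholders, not my exact code:

    const CHUNK_SIZE = 10000; // segments per batch, tune to taste

    function renderChunked(ctx, points) {
      let i = 0; // created once, outside the inner loop

      function drawChunk() {
        // Draw CHUNK_SIZE segments, each from point i to point i+1.
        const end = Math.min(i + CHUNK_SIZE * 2, points.length - 2);
        ctx.beginPath();
        for (; i < end; i += 2) {
          ctx.moveTo(points[i], points[i + 1]);
          ctx.lineTo(points[i + 2], points[i + 3]);
        }
        ctx.stroke();
        if (i < points.length - 2) {
          setTimeout(drawChunk, 10); // yield so the UI stays responsive
        }
      }
      drawChunk();
    }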
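And here's a sketch of the index-array idea from the last bullet. insertSorted is a hypothetical stand-in for the SortedList implementation I actually used:

    const items = {};   // non-sequential writes are cheap on plain objects
    const indexes = []; // kept sorted so reads stay sequential

    // Binary-search insert into a plain array; a real SortedList would
    // do the same job with less splice() overhead.
    function insertSorted(arr, value) {
      let lo = 0, hi = arr.length;
      while (lo < hi) {
        const mid = (lo + hi) >> 1;
        if (arr[mid] < value) lo = mid + 1; else hi = mid;
      }
      arr.splice(lo, 0, value);
    }

    function write(index, value) {
      if (!(index in items)) insertSorted(indexes, index);
      items[index] = value;
    }

    // Reading back through the sorted indexes restores sequential order.
    function readAll(fn) {
      for (let k = 0; k < indexes.length; k++) {
        fn(items[indexes[k]]);
      }
    }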

That's about everything I remember. If I do find something else, I'll update this answer!