Creating interactive elements on an HTML canvas can be conceptually challenging when approaching the problem from a DOM-centric perspective. Where the DOM provides built-in support for layers of interaction, the contents of an HTML canvas are essentially a pool of raw pixel data. Interaction with canvas content requires an abstract representation of the data being visualized. As a result, event handling, bounds detection, and model and view representations must all be managed explicitly by the developer.
Fortunately, the DOM does provide us a starting point to build from. While a canvas’s content may be static, the canvas element itself provides hooks within the DOM to access critical information carried by an event reference. With that data, we can begin to answer perhaps the most pertinent question: where did the interaction occur?
We first require some basic information about where our interaction event occurs on the canvas in relation to the viewport. Ultimately, we want to track the positional coordinates of an interaction any time a registered interaction event is triggered. We can construct such behavior by binding mousemove and touchmove events to the following handler function:
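A sketch of such a handler; the touch normalization and the returned offset object are illustrative assumptions:

```javascript
// A handler consistent with the trackOffset() described below; the
// returned offset object and logging are illustrative assumptions.
function trackOffset(event) {
  try {
    // Touch events carry their points in a touches list; mouse and
    // pointer events expose clientX/clientY directly
    const point = event.touches ? event.touches[0] : event;
    const bounds = event.target.getBoundingClientRect();
    // Subtract the target's viewport position from the pointer's
    // viewport position to get coordinates relative to the target
    const offsetX = point.clientX - bounds.left;
    const offsetY = point.clientY - bounds.top;
    return { x: offsetX, y: offsetY };
  } catch (error) {
    console.error(error);
    return { x: 0, y: 0 };
  }
}

// In a page, the handler would be bound to both event types:
// canvas.addEventListener('mousemove', trackOffset);
// canvas.addEventListener('touchmove', trackOffset);
```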
In this setup,
trackOffset() calculates relative x and y offset positions from the event’s target node. The handler expects an event reference as an argument, provided when the event is dispatched. Regardless of its specific type, the event object contains critically useful data, including target nodes, relational positioning, mouse and keyboard information, touch references, and more.
The node method
getBoundingClientRect() determines bounding box information, reported in object form. The resulting object will have left, top, right, and bottom properties, and most likely width and height properties as well. Each offset property reports a value relative to the element’s position within the browser’s viewport. Absent width and height properties can alternatively be calculated by subtracting each axial position value pairing.
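As a sketch, that subtraction looks like the following (the helper name boundsSize is hypothetical):

```javascript
// Hypothetical helper: derive missing dimensions by subtracting the
// paired edge values reported by getBoundingClientRect()
function boundsSize(bounds) {
  return {
    width: bounds.width !== undefined ? bounds.width : bounds.right - bounds.left,
    height: bounds.height !== undefined ? bounds.height : bounds.bottom - bounds.top
  };
}
```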
Similarly, the event’s clientX and clientY properties report mouse/touch/pointer offset values relative to the viewport. Subtracting the target’s left and top position values from them, respectively, gives us offset coordinates relative to the node itself.
Note: If you intend to cache your target’s bounds, be certain that all of your page elements have fully rendered and all styles have been applied before doing so. Premature calls to
getBoundingClientRect() can report inaccurate values.
In the interest of extending this behavior to multiple targets in the future, let’s wrap it within a constructor.
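A sketch of such a constructor; the name CoordinateTracker matches the instance referenced later, while the instance property names are assumptions:

```javascript
// Wraps offset tracking for a single, well defined target element
function CoordinateTracker(target) {
  this.target = target;
  this.x = 0;
  this.y = 0;
}

// No try/catch here; the target is read from the instance rather
// than from the event
CoordinateTracker.prototype.trackOffset = function (event) {
  const point = event.touches ? event.touches[0] : event;
  const bounds = this.target.getBoundingClientRect();
  // The offset values now live on the instance, elevated in scope
  this.x = point.clientX - bounds.left;
  this.y = point.clientY - bounds.top;
};

// In a page: const tracker = new CoordinateTracker(canvas);
```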
Passing a target element as an argument and using the
new keyword, we can create a tracker instance for our canvas.
Notice that I’ve modified the
trackOffset() function slightly from the original code block. For brevity, I’ve removed the try/catch block. We now have a well-defined target element as a function argument, and the
offset variables have been elevated in scope.
Let’s check out an interactive detection area demo that will simply report back our offset values to test our tracker.
Now that we are able to reliably report our interaction coordinates relative to our target’s origin within the viewport, we can put that data to practical use.
Rendering output to a canvas element requires executing a series of draw operations within a chosen context. A filled rectangle may be represented in a code block similar to the following:
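A block of that form might look like the following sketch, wrapped in a function here for reuse (the name drawFilledRect and the values are illustrative):

```javascript
// A minimal sketch: a series of direct draw operations that fill a
// rectangle on a 2D context
function drawFilledRect(context) {
  context.fillStyle = '#c0392b';  // styling
  context.beginPath();            // begin a fresh path
  context.rect(50, 50, 120, 80);  // x, y, width, height
  context.fill();                 // rasterize the path to pixels
}

// In a page:
// drawFilledRect(document.querySelector('canvas').getContext('2d'));
```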
Once this code executes and scope is lost, all data associated with the rectangle shape we’ve just rendered is irretrievable. Instead, let’s create a shape constructor which holds a generic set of rendering instructions.
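Such a constructor might be sketched as follows; the option names and default values are assumptions consistent with the description below:

```javascript
// Holds a generic set of rendering instructions for a rectangle
function Rectangle(options) {
  options = options || {};
  // Defaults backfill any undefined parameters
  this.x = options.x || 0;
  this.y = options.y || 0;
  this.width = options.width || 100;
  this.height = options.height || 100;
  this.fill = options.fill || '#000';
}

// Maps the shape's geometry onto a context as a live 2D path,
// without rendering any pixel data
Rectangle.prototype.definePath = function (context) {
  context.beginPath();
  context.rect(this.x, this.y, this.width, this.height);
};

// Applies styling and fills the path, actually rendering pixels
Rectangle.prototype.render = function (context) {
  this.definePath(context);
  context.fillStyle = this.fill;
  context.fill();
};
```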
Our constructor has some default values defined to backfill undefined parameters. The constructor then simply assigns instance properties which define its size, position and styling.
Using those property values, the
Rectangle prototype method
definePath() creates a
2D path on a given context. Lastly, the prototype method
render() handles the actual rendering of its path to the provided context by setting its styling options and invoking the final fill command. Keeping these tasks separate will allow us to later test path interaction without actually rendering pixel data to the canvas.
We can define a new
Rectangle instance and render it to a specific canvas now as follows:
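For example (the Rectangle below is a minimal stand-in for the shape constructor described above, restated so the snippet runs on its own; the values are illustrative):

```javascript
// Minimal stand-in for the Rectangle constructor described above
function Rectangle(options) {
  Object.assign(this, { x: 0, y: 0, width: 100, height: 100, fill: '#000' }, options);
}
Rectangle.prototype.definePath = function (context) {
  context.beginPath();
  context.rect(this.x, this.y, this.width, this.height);
};
Rectangle.prototype.render = function (context) {
  this.definePath(context);
  context.fillStyle = this.fill;
  context.fill();
};

// Define a new instance and render it to a specific canvas context
const rectangle = new Rectangle({ x: 40, y: 40, width: 150, height: 90, fill: '#2a9d8f' });
// In a page:
// rectangle.render(document.querySelector('canvas').getContext('2d'));
```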
Applying some rendering loops to a more structured array of shape instances provides the following output:
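A rendering loop of that sort might be sketched as follows (the renderShapes name and shapes array are assumptions):

```javascript
// A sketch of rendering an array of shape instances in a single pass
function renderShapes(shapes, context) {
  // Clear previous pixel data before repainting
  context.clearRect(0, 0, context.canvas.width, context.canvas.height);
  for (let i = 0; i < shapes.length; i++) {
    shapes[i].render(context); // each shape paints in array order
  }
}
```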
Unlike our initial, minimalist rendering code block, we can reference a shape instance (given proper scope) and trigger its
render() method at any time going forward.
Leveraging our previously constructed coordinate tracking function, we can set additional listeners to our canvas to specify how the tracking data should be applied to its rendered content.
Let’s assume we have some variable declarations within scope, including a reference to a 2D context, a
CoordinateTracker() instance, a placeholder reference to a future hovered shape, and a pixel cache of the current canvas contents. A
mousemove event listener is added to a canvas to execute our extended behavior.
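The wiring might be sketched like this; the helper name bindInteraction and the callback signature are assumptions:

```javascript
// Binds a CoordinateTracker-style object to a canvas's mousemove
// events and forwards the tracked coordinates to a handler
function bindInteraction(canvas, tracker, interactionHandler) {
  canvas.addEventListener('mousemove', function (event) {
    tracker.trackOffset(event);                // update coordinates
    interactionHandler(tracker.x, tracker.y);  // apply to content
  });
}
```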
Additionally, we need to define a function to determine whether a coordinate falls within the 2D path of a shape instance. The shape’s
definePath() prototype method is called to map it to our canvas’s context. While this path is still ‘live’, context provides methods
isPointInPath() and isPointInStroke() to determine whether a coordinate’s x and y values fall within its path or along its stroke, respectively (yes, thankfully we don’t have to trace the pixels within its area ourselves!). Either method returns a boolean value, which in the following example conditionally assigns our placeholder reference to the current shape instance:
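A sketch of that test; hoveredShape is the placeholder reference mentioned earlier, and passing the context explicitly is an assumption:

```javascript
let hoveredShape = null; // placeholder for the currently hovered shape

// Maps the shape's live path onto the context, then hit-tests the
// coordinate against that path
function testShapeInteraction(shape, x, y, context) {
  shape.definePath(context);
  if (context.isPointInPath(x, y)) {
    hoveredShape = shape; // conditionally assign the placeholder
    return true;
  }
  return false;
}
```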
With this, we can define a higher-level event handler
interactionHandler() which will loop through each shape instance in the array and test for interaction given the coordinates being reported by the
CoordinateTracker() instance. If an interaction exists, or if one is just ending, we trigger a method to handle how to redraw the canvas content conditionally.
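A sketch of that handler, with its collaborators passed in as arguments for clarity (the names and signature are assumptions):

```javascript
// Loops through the shapes (top-most first) and repaints when an
// interaction exists or one is just ending
function interactionHandler(shapes, tracker, testHit, repaint, wasHovering) {
  // The condition copies the array, reverses the copy, and iterates
  // until a truthy hit result short-circuits the loop
  if (shapes.slice().reverse().some(function (shape) {
        return testHit(shape, tracker.x, tracker.y);
      }) || wasHovering) {
    repaint(); // an interaction exists, or one is just ending
    return true;
  }
  return false;
}
```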
The ‘if’ condition within the handler is doing several things:
- The shapes array is being copied
- The array copy is being reversed
- The array copy is being iterated, invoking testShapeInteraction() for each iteration until it ends or until a truthy result is returned
We create a copy of the
shapes array so we are able to reverse it without affecting the original. The array is reversed so the iteration order favors the highest shapes on the rendering stack. This ensures that, where shapes overlap, those painted on top of others take priority.
During each iteration, a shape instance is being checked for interaction - essentially to test if our coordinate hover data falls within the shape’s path. When an interaction is detected, the canvas is told to repaint its contents accordingly.
Below is a preview of the resulting behavior:
In situations where you are looping through a large number of paths, or paths with particularly complex shapes such as high-fidelity map data, efficiency becomes paramount. Here are a few things to consider if your interaction checking suffers from noticeable lag:
- Make sure your iteration method allows for some form of termination to prevent redundancy
- Use for loops over array iteration methods
- Do not draw to the canvas in loops when no pixel information requires change
- Cache deeply structured objects to reduce the time the interpreter requires to traverse the prototype chain
- Avoid DOM changes that require repainting or reflowing
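As one example of the first two points, a plain for loop that terminates as soon as a hit is found (the names here are illustrative):

```javascript
// Walks the shapes array from the top of the rendering stack down,
// stopping at the first shape that reports a hit
function findTopHit(shapes, x, y, testHit) {
  for (let i = shapes.length - 1; i >= 0; i--) {
    if (testHit(shapes[i], x, y)) {
      return shapes[i]; // early termination prevents redundant checks
    }
  }
  return null;
}
```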
Using methods outlined in a previous article, I’ve written an interactive demo that renders continental US counties to a canvas. Using the coordinate tracking mechanisms outlined in this article, each county can be individually highlighted when the cursor hovers over its position. Its purpose is to test interaction responsiveness when many paths are present.