Interactive HTML Canvas Paths

Creating interactive elements on an HTML canvas can be conceptually challenging when approaching the problem from a DOM-centric perspective. Where the DOM provides built-in support for layers of interaction, the contents of an HTML canvas are essentially a pool of raw pixel data. Interaction with canvas content requires an abstract representation of the data being visualized, which means event handling, bounds detection, and model and view representations must all be managed explicitly by the developer.

Fortunately, the DOM does provide us a starting point to build from. While a canvas’s content may be static, the canvas element itself provides hooks within the DOM to access some critical information supplied by an event reference. With that data, we can begin to answer perhaps the most pertinent question: where on the canvas did the interaction occur?

Tracking Interaction

We first require some basic information about where our interaction event occurs on the canvas in relation to the viewport. Ultimately, we want to track the positional coordinates of an interaction any time a registered interaction event is triggered. We can construct such behavior by binding mousemove and touchmove events to the following handler function:

function trackOffset(evt) {
  // stop default touch behavior
  evt.preventDefault();
  // touch events expose coordinates on the first Touch object
  var point = evt.touches ? evt.touches[0] : evt,
    // reference to target element bounding box
    elBounds,
    offset = {};
  // can't capture with no bounding rect support
  try {
    // grab bounding rect of event target
    elBounds = evt.target.getBoundingClientRect();
    // calculate hover offsets relative to the target element
    offset.x = point.clientX - elBounds.left;
    offset.y = point.clientY - elBounds.top;
  } catch (error) {
    // report error
    console.log(error);
  }
}

In this setup, trackOffset() calculates relative x and y offset positions from the event’s target node. The handler expects an event reference as an argument, provided automatically when the event is dispatched to the listener. Regardless of its specific type, the event object contains critically useful data, including target nodes, relative positioning, mouse and keyboard information, touch references, etc.

The node method getBoundingClientRect() returns bounding box information in object form. The resulting object will have bottom, left, right, and top properties, and most likely width and height as well. Each property reports a value relative to the element’s position within the browser’s viewport. If width and height are absent, they can be calculated by subtracting left from right and top from bottom.

Similarly, the clientX and clientY properties (found on the event itself for mouse events, and on the individual Touch objects in evt.touches for touch events) report pointer offset values relative to the viewport. Subtracting the target’s left and top position values from them respectively gives us offset coordinates relative to the node itself.

Note: If you intend to cache your target’s bounds, be certain that all of your page elements have fully rendered and all styles have been applied before doing so. Premature calls to getBoundingClientRect() can report inaccurate values.
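
For instance, one way to guard against this (a minimal sketch; the cacheBounds() helper name is purely illustrative) is to defer the measurement until the window’s load event has fired:

// hypothetical helper: measure the element only once the page has
// fully loaded, so layout and styles are final
function cacheBounds(element, callback) {
  window.addEventListener("load", function() {
    callback(element.getBoundingClientRect());
  });
}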

In the interest of extending this behavior to multiple targets in the future, let’s wrap this behavior within a constructor CoordinateTracker().

function CoordinateTracker(element) {
  // hold x/y position data
  var offset = {},
    elBounds;
  // register hover listeners
  element.addEventListener("mousemove", trackOffset);
  element.addEventListener("touchmove", trackOffset);
  // register hover clear listeners
  element.addEventListener("mouseout", resetTracking);
  element.addEventListener("touchend", resetTracking);
  // give read-access to hover object
  return {
    get offset() {
      return offset;
    }
  };
  // reset the bounding area
  function getBounds() {
    elBounds = element.getBoundingClientRect();
  }
  // handler to reset hover values
  function resetTracking() {
    offset = {};
  }
  // handler to track hover coordinates
  function trackOffset(evt) {
    // stop default touch behavior
    evt.preventDefault();
    // touch events expose coordinates on the first Touch object
    var point = evt.touches ? evt.touches[0] : evt;
    // ensure the element bounds are up-to-date
    getBounds();
    // calculate hover offsets relative to the target element
    offset.x = point.clientX - elBounds.left;
    offset.y = point.clientY - elBounds.top;
  }
}

Passing a target element as an argument and using the new keyword, we can create a tracker instance for our canvas.

var objTracker = new CoordinateTracker(document.getElementById("myCanvasEl"));

Notice that I’ve modified the trackOffset() function slightly from the original code block. For brevity, I’ve removed the try/catch block. We now have a well defined target element as a function argument, and the elBounds and offset variables have been elevated in scope.

Let’s check out an interactive detection area demo that will simply report back our offset values to test our CoordinateTracker() behavior.
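
As a rough sketch, such a demo boils down to reading the tracker’s offset values and writing them out somewhere visible (the #readout element and the animation-frame loop below are assumptions made for illustration):

// hypothetical element used to display the tracked coordinates
var readout = document.getElementById("readout");
// on each animation frame, report the current offset values
(function report() {
  var offset = objTracker.offset;
  readout.textContent = ("x" in offset) ?
    "x: " + Math.round(offset.x) + ", y: " + Math.round(offset.y) :
    "no interaction";
  window.requestAnimationFrame(report);
}());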

Now that we are able to reliably report our interaction coordinates relative to our target’s origin within the viewport, we can put that data to practical use.

Defining Paths

Rendering output to a canvas element requires executing a series of draw operations within a chosen context. A filled rectangle may be represented in a code block similar to the following:

rendering a 50x100 red rectangle
var canvas = document.getElementById("myCanvasID"),
  context = canvas.getContext("2d");
context.fillStyle = "red";
context.fillRect(0, 0, 50, 100);

Once this code executes and scope is lost, all data associated with the rectangle shape we’ve just rendered is irretrievable. Instead, let’s create a shape constructor which holds a generic set of rendering instructions.

// Rectangle constructor
function Rectangle(position, dimensions, options) {
  // define default fallback argument values
  var defaultPosition = [0, 0],
    defaultDimensions = [50, 100],
    defaultOptions = {
      fillColor: "red"
    };
  // assign instance properties reflecting argument or fallback values
  this.position = position || defaultPosition;
  this.dimensions = dimensions || defaultDimensions;
  this.options = options || defaultOptions;
}
// define a series of instructions to build a 2D path within a given context
Rectangle.prototype.definePath = function(context) {
  var ratio = window.devicePixelRatio || 1;
  context.beginPath();
  context.rect(
    this.position[0] * ratio,
    this.position[1] * ratio,
    this.dimensions[0] * ratio,
    this.dimensions[1] * ratio);
};
// render the shape to a given context
Rectangle.prototype.render = function(context) {
  if (context) {
    context.save();
    context.fillStyle = this.options.fillColor;
    this.definePath(context);
    context.fill();
    context.restore();
  }
};

Our constructor has some default values defined to backfill undefined parameters. The constructor then simply assigns instance properties which define its size, position and styling.

Using those property values, the Rectangle prototype method definePath() creates a 2D path on a given context. Lastly, the prototype method render() handles the actual rendering of the path to the provided context by setting its styling options and invoking the final fill command. Keeping these tasks separate will allow us to later test path interaction without actually rendering pixel data to the canvas.

We can define a new Rectangle instance and render it to a specific canvas now as follows:

var canvas = document.getElementById("myCanvasID"),
  context = canvas.getContext("2d"),
  // define rectangle properties
  oX = 0,
  oY = 0,
  dX = 50,
  dY = 100,
  fillColor = "red",
  // create new Rectangle instance
  redRect = new Rectangle(
    [oX, oY], [dX, dY], {
      fillColor: fillColor
    });
// invoke instance rendering method to canvas context
redRect.render(context);

Applying some rendering loops to a more structured array of shape instances provides the following output:

multiple rectangle instances rendered onto a canvas
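
As a point of reference, a loop along these lines could produce a row of Rectangle instances (a minimal sketch; the shapes array, spacing, and color list are assumptions made for illustration):

// assumes a 2d context like the one above is in scope
var colors = ["red", "orange", "gold", "seagreen", "steelblue"],
  shapes = [],
  i;
// build a row of Rectangle instances, spaced 60px apart
for (i = 0; i < colors.length; i++) {
  shapes.push(new Rectangle(
    [i * 60, 0], [50, 100], {
      fillColor: colors[i]
    }));
}
// render each instance to the canvas context
shapes.forEach(function(shape) {
  shape.render(context);
});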

Determining Path Interaction

Unlike our initial, minimalist rendering code block, we can reference a shape instance (given proper scope) and trigger its definePath() and render() methods at any time going forward.

Leveraging our previously constructed coordinate tracking function, we can set additional listeners to our canvas to specify how the tracking data should be applied to its rendered content.

Let’s assume we have some variable declarations within scope, including a reference to a 2d context, a CoordinateTracker() instance, a placeholder reference to a future hovered shape, and a pixel cache of the current canvas contents. A mousemove event listener is added to the canvas to execute our extended behavior.

var canvas = document.getElementById("myCanvas"),
  context = canvas.getContext("2d"),
  interaction = new CoordinateTracker(canvas),
  hoveredShape,
  cachedCanvas = context.getImageData(
    0,
    0,
    canvas.width,
    canvas.height);
// event listener for interaction on canvas
canvas.addEventListener(
  "mousemove",
  interactionHandler);

Additionally, we need to define a function to determine if a coordinate falls within the 2D path of a shape instance. The shape’s definePath() prototype method is called to map it onto our canvas’s context. While this path is still ‘live’, the context provides the methods isPointInPath() and isPointInStroke() to determine if a coordinate’s x and y values fall within its path or stroke respectively (yes, thankfully we don’t have to trace the pixels within its area ourselves!). Either method returns a boolean result, which in the following example conditionally assigns our placeholder reference to the current shape instance:

function testShapeInteraction(shape) {
  // first define a 2D path on the context
  shape.definePath(context);
  // use method 'isPointInPath' to check if hover coordinates fall within the shape's bounds
  if (context.isPointInPath(
      interaction.offset.x,
      interaction.offset.y)) {
    // point is within path! set reference to hovered shape and return it
    hoveredShape = shape;
    return shape;
  }
}

With this, we can define a higher-level event handler, interactionHandler(), which loops through each shape instance in the array and tests for interaction given the coordinates reported by the CoordinateTracker() instance. If an interaction exists, or if one is just ending, we trigger a method that conditionally redraws the canvas content.

// interaction event handler to determine if a shape is being hovered over
function interactionHandler() {
  // clone, reverse and iterate through our shape instance array
  // conditionally determine if a shape is being hovered
  if (!shapes.slice().reverse().some(testShapeInteraction)) {
    // no hovered shape - set reference to null
    if (hoveredShape) {
      paintCachedState();
      hoveredShape = null;
    }
  } else {
    // hovered shape found - repaint the cache and, on top, paint the hovered shape
    paintCachedState();
    paintHoveredShape();
  }
}
// paints a pre-defined path on the context blue
function paintHoveredShape() {
  hoveredShape.definePath(context);
  context.fillStyle = "cornflowerBlue";
  context.fill();
}
// wipes the canvas clean and repaints the cached state
function paintCachedState() {
  // clear our canvas pixel data
  context.clearRect(
    0,
    0,
    canvas.width,
    canvas.height);
  // draw our cached pixel data
  context.putImageData(
    cachedCanvas,
    0,
    0);
}

The ‘if’ condition within the handler is doing several things:

  • The shapes array is being copied
  • The array copy is being reversed
  • The array copy is being iterated, invoking testShapeInteraction() for each iteration until it ends or until a truthy result is returned

We create a copy of the shapes array so we can reverse it without affecting the original. The array is reversed so the iteration order favors the shapes highest on the rendering stack. This ensures that, where shapes overlap, those painted on top of others take priority.

During each iteration, a shape instance is checked for interaction - essentially testing whether our hover coordinates fall within the shape’s path. When an interaction is detected, the canvas is told to repaint its contents accordingly.

Below is a preview of the resulting behavior:

Notes on Performance

In situations where you are looping through a large number of paths or paths with particularly complex shapes, such as high fidelity map data, efficiency becomes paramount. It is important that your loop structures are concise and efficient. Here are a few things to consider if your interaction checking suffers from noticeable lag:

  • Make sure your iteration method allows for some form of termination to prevent redundancy
  • Use for loops over array iteration methods (see the sketch after this list)
  • Do not draw to the canvas in loops when no pixel information requires change
  • Cache deeply structured objects to reduce the time the interpreter requires to traverse the prototype chain
  • Avoid DOM changes that require repainting or reflowing
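
As an illustration of the first two points, a plain for loop with an early return can replace the some() iteration shown earlier (a sketch assuming the same shapes array and testShapeInteraction() function):

// iterate from the top of the rendering stack downward,
// returning as soon as an interaction is found
function findHoveredShape() {
  for (var i = shapes.length - 1; i >= 0; i--) {
    if (testShapeInteraction(shapes[i])) {
      return shapes[i];
    }
  }
  return null;
}

Iterating backwards over the original array also avoids the intermediate copies created by slice() and reverse().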

Using methods outlined in a previous article, I’ve written an interactive demo that renders continental US counties to a canvas. Using the coordinate tracking mechanisms outlined in this article, each county can be individually highlighted when the cursor hovers over its position. Its purpose is to test interaction responsiveness when many paths are present.