In Pursuit of Pixel Perfection

Designing games for crispy pixel art


OVER THE PAST FEW YEARS, as a solo game developer with a preference for pixel art, I’ve become moderately obsessed with the technique of pixel perfection. The objective is to render 2D sprites such that each real pixel in the texture maps to exactly one scaled pixel on the display. Or, to drop the jargon, pixel perfect means that every pixel shown is an identical same-sized square: no stretches and no rectangles, only nice, crisp pixels.

A little background context. The origins of pixel perfection have their roots in the earliest arcade and home console video games of the 1980s, the most often cited being Nintendo’s NES. A common misconception is that pixel art was always “crisp” and rendered with hard boundaries. In fact, back in the pioneering days of gaming, pixel art was displayed on CRTs (cathode-ray tube televisions) that would blur and soften the harsh lines due to their nature as analogue devices. Nowadays, thanks to the decades-long resurgence of pixel art in indie games designed for digital displays such as LCDs and OLEDs, the artform has retained its bitmap nature but discarded its technical quirks. As a result, games like Shovel Knight, Mina the Hollower, and The Messenger allude to early pixel art’s resolution and palette limitations while rejecting its historically smooth appearance. From the idea of crisp pixel art, a new movement has emerged to embrace crispness in all its glory. This is pixel perfect, where every pixel is as perfect as can be.

So, from a development perspective, how does one go about creating this effect of “pixel perfection”? In my experience, the task of creating a pixel perfect experience can be broken down into three components. The first is pixel resolution—how can we render the game so that each pixel is indeed one pixel and not many? Ultimately, that component is the basis to any truly pixel perfect game. The second idea, more tangentially related, is palette enforcement—how do we limit the colour palette to reflect the technological limitations of early consoles, or even just for stylistic effect? The last idea, which by its very nature is somewhat more difficult to resolve, is the question of alignment—when the camera moves, how do we stop things from jittering?

Native Resolution and Pixel Upscaling

A true pixel, on a display or texture, is by its very nature the smallest possible unit of visual information. For example, on an old monochrome Macintosh, every pixel can either be black or white. It cannot be half black or half white, either in spatial division or (for the 1-bit display) alpha blending. So the pixel is the 1x1 unit, the smallest indivisible element of an image. Ascending from the depths of the philosophical netherworld, we know the pixel to be the fundamental unit of pixel art. After all, that’s what it is! Pixel art is made of pixels.

The most naive approach to displaying pixel perfection is one-to-one correspondence. For every pixel on the screen, the game developer thinks, I will simply associate it with one pixel on the render texture! The flaw of this methodology, as most of us have experienced, is that the resolution of a typical pixel art sprite (32x32 to 128x128) and the usual game camera resolution (512x288 or 570x320) are so radically smaller than the typical monitor (1920x1080 or even 4K) that one cannot even discern the individual pixels on the display! Instead, we must scale up our final image so that the player can appreciate our work.

It should be noted that we should scale up our final rendered output instead of the sprites of individual entities. This is because scaling characters directly would allow them, in movement and rotation, to misalign themselves from the pixel grid. In some cases this effect is acceptable, but not for pixel perfection.

When scaling up to a pixel perfect render, a process known as upscaling, the only acceptable factors are integers. This is because your display is made of an integer number of pixels: at a fractional factor, some source pixels must cover more physical pixels than others, producing the uneven rectangles we are trying to avoid, unless you use filtering or antialiasing, which would violate pixel perfection just the same. So, with pixel perfection, we want to upscale our output (originally rendered at our native resolution, the pixel dimensions where each pixel is 1x1) by the largest integer that produces an upscaled texture fitting inside the screen dimensions.
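As a minimal sketch of that rule, the scale factor is just the larger of 1 and the smallest per-axis integer quotient. The function name and parameters here are illustrative, not from any particular engine:

```python
def best_integer_scale(native_w: int, native_h: int,
                       window_w: int, window_h: int) -> int:
    """Largest integer k such that k * native fits in the window.

    Falls back to 1 when the window is smaller than the native
    resolution (the image will then be cropped rather than shrunk).
    """
    return max(1, min(window_w // native_w, window_h // native_h))
```

For a 512x288 native render on a 1920x1080 display, both axes allow a factor of 3, so the upscaled output is 1536x864.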

In displaying the texture, most game engines offer two filtering methods for filling in missing detail in an upscaled image. The first, linear, interpolates or “mixes” the values of the nearest known pixels, creating a smooth but blurry effect. The second, nearest, simply uses the value of the nearest pixel. For our purposes, we want to use nearest filtering to achieve the pixel effect. In some engines, this is known as point filtering.

With this scaling finished, you essentially have a working pixel perfect system! There remain a few minor issues. For one, the upscaled view will typically be displayed in the top-left corner unless you have applied an engine’s UI alignment feature. Otherwise, some simple algebra can calculate the correct margins to offset the upscaled texture and centre it on the screen.
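The algebra in question is just halving the leftover space on each axis. A small sketch, again with hypothetical names:

```python
def centring_margins(native_w: int, native_h: int,
                     window_w: int, window_h: int, scale: int):
    """Top-left offset that centres the upscaled texture in the window."""
    scaled_w, scaled_h = native_w * scale, native_h * scale
    # Integer division floors the margin when the leftover space is odd,
    # keeping the texture on whole-pixel coordinates.
    return (window_w - scaled_w) // 2, (window_h - scaled_h) // 2
```

For the 512x288-at-3x example on a 1080p display, this yields margins of 192 pixels horizontally and 108 vertically.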

Another consideration is to recalculate the scale when the player resizes the window. Since this is likely to be an infrequent occurrence (unless you are developing a metagame where the player must resize the window to progress), an event handler for window resizing can simply recompute the scale factor.

Lastly, the main issue that occurs with creating what is really a screen within a screen is that any positional inputs, most notably the mouse cursor, will not reflect accurate coordinates in the engine. To solve this, a wrapper layer can be written that converts a screen position to a world position, treating the centre of the screen as the origin of relative world coordinates.
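Such a wrapper just undoes the centring margin and the integer upscale, then re-centres around the camera. A sketch of one possible conversion, assuming the camera position marks the centre of the native view (all names illustrative):

```python
def screen_to_world(mouse_x: float, mouse_y: float,
                    margin_x: int, margin_y: int, scale: int,
                    camera_x: float, camera_y: float,
                    native_w: int, native_h: int):
    """Map a window-space cursor position to world coordinates."""
    # Undo the centring offset and the integer upscale.
    view_x = (mouse_x - margin_x) / scale
    view_y = (mouse_y - margin_y) / scale
    # The camera sits at the centre of the native-resolution view.
    return (camera_x + view_x - native_w / 2,
            camera_y + view_y - native_h / 2)
```

With the camera at the origin, the centre of the window maps back to world position (0, 0), as expected.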

Palette Enforcement

My personal favourite part of the de-facto pixel perfect style, since it is not necessarily required, is the enforcement of a limited colour palette. It has long been known to pixel artists that using a limited palette makes for more pleasing art than being allowed to use the entirety of the RGB spectrum. Of course, while an artist can pick and verify the colour of every pixel as he or she works, the dynamic nature of a game, especially when paired with the presence of alpha blending which can create colours outside the intended palette, makes this kind of consistency more difficult. We must instead develop an algorithm that we can put between the rendered output and the final output capable of determining the closest colours that match our available choices.

In the most abstract way, a list of “available colours” is first hardcoded or generated at runtime. Then, on every frame, we loop through every pixel. For each pixel on the 2D grid we loop through the palette colour list, setting the final rendered colour to the palette colour with the lowest distance from the original texture colour—after all, a colour can be considered as a vector, either in RGB or HSV (hue, saturation, value) format, and thus has “distances” or degrees of similarity to other colours. This overall procedure converts from a broader colour space to a more limited one.
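The procedure above, run on the CPU for clarity, reduces to a nearest-neighbour search in RGB space. A minimal sketch, using squared Euclidean distance (one reasonable choice among several):

```python
def nearest_palette_colour(pixel, palette):
    """Return the palette entry with the smallest squared RGB distance."""
    def dist_sq(a, b):
        return sum((ca - cb) ** 2 for ca, cb in zip(a, b))
    return min(palette, key=lambda c: dist_sq(pixel, c))

def enforce_palette(pixels, palette):
    """Quantise every pixel of a frame (a 2D grid of RGB tuples)."""
    return [[nearest_palette_colour(p, row_pixel_palette)
             for p, row_pixel_palette in ((p, palette) for p in row)]
            for row in pixels]
```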

Since this process is done in the rendering stage and is quite intensive due to the number of pixels that need to be iterated over, the optimal strategy is to implement the palette enforcement in a shader, leveraging the GPU to perform the procedure every frame. With this method, a uniform colour array can be declared in the shader and assigned at runtime by a script that reads over an input palette list or texture. The iteration happens in the fragment shader, returning the closest possible colour after the distance calculation is complete.

This naive approach certainly works. However, one might notice the inefficiencies prevalent in this method. For every pixel on the 2D display we have to re-loop through all colours in the palette. This means that the total number of elementary operations every frame is the width of the display times the height of the display times the palette size! This already creates a noticeable slowdown, growing worse for larger palettes. Branching on distance comparisons in the shader is also suboptimal. Ideally, we would compute the desired colour directly via arithmetic rather than through conditional logic, as arithmetic is what the GPU is more efficient at.

For this, a LUT, or lookup texture, is suitable. By trading space efficiency for time efficiency, we can generate a texture once at startup, or bake it into a file ahead of time. Instead of comparing colour distances in a list, we set up a uniform texture sampler in the shader. By passing in the texture, we can use a formula to convert each red, green, and blue combination into a unique coordinate on the lookup image.

Precision can be balanced against speed and space by converting the coordinate into normalised x and y percentages. Multiplying by the overall texture scale then allows a texture of any size to be used for colour storage, with proportional precision loss for textures smaller than the full 256-level colour space.
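To make the coordinate formula concrete, here is one possible CPU-side sketch of baking such a lookup texture. It assumes 32 quantisation levels per channel, flattening the 32x32x32 colour cube into a 1024x32 grid where blue selects a tile along x and red and green index within it; the level count and layout are illustrative choices, not a standard:

```python
LEVELS = 32          # quantisation levels per channel (assumption)
STEP = 256 // LEVELS # size of one quantisation bucket

def colour_to_lut_xy(r, g, b):
    """Flatten a quantised (r, g, b) into 2D lookup coordinates."""
    ri, gi, bi = r // STEP, g // STEP, b // STEP
    return bi * LEVELS + ri, gi

def bake_lut(palette):
    """Precompute the nearest palette colour for every quantised RGB."""
    def nearest(p):
        return min(palette,
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(p, c)))
    lut = [[None] * (LEVELS * LEVELS) for _ in range(LEVELS)]
    for ri in range(LEVELS):
        for gi in range(LEVELS):
            for bi in range(LEVELS):
                centre = (ri * STEP, gi * STEP, bi * STEP)
                lut[gi][bi * LEVELS + ri] = nearest(centre)
    return lut

def lookup(lut, r, g, b):
    """Constant-time, branch-free replacement for the distance loop."""
    x, y = colour_to_lut_xy(r, g, b)
    return lut[y][x]
```

In a shader, `lookup` becomes a single texture sample at the normalised coordinate, replacing the per-pixel palette loop entirely.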

Given the accessibility of storage memory, the only downside of a lookup texture is the long initial generation time. A script has to run through the traditional distance comparison process outlined earlier, only with colours matching the coordinate function and with outputs written to a new image texture. This can take some time, so it is best either to show a pleasant loading screen or, as mentioned earlier, to bake the image into the game assets directly.

Camera Movement and the Infamous Pixel Jitter

Even with all of these implementations aiming for pixel perfection, there still remain a few minor issues that, depending on your perspective, are either completely negligible or absolutely earth shattering.

The first issue is that sprites in the original render might have shimmering or other “ripple” effects. The solution is to make sure that all visual textures and sprites are separated from the actual collision and game logic so that the sprites can be rounded to the grid. Rounding sprite positions to the grid ensures that movements in the game snap to pixel increments, avoiding any “half-pixel” or fractional-pixel artefacts that appear between frames.
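In practice this separation means the logic position stays fractional while only the draw position is snapped. A tiny sketch of the idea (names illustrative):

```python
import math

def snap_to_pixel(x: float, y: float):
    """Round a logical position to whole-pixel increments for rendering.

    Game logic keeps the fractional position for smooth movement and
    collisions; only the sprite's draw position is snapped. floor(x + 0.5)
    is used instead of round() to avoid Python's banker's rounding.
    """
    return math.floor(x + 0.5), math.floor(y + 0.5)
```

So a player logically at (12.7, 3.2) is drawn at (13, 3), and the render never leaves the pixel grid.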

Another common annoyance that typically afflicts pixel perfect implementations is the sharp pixel movement of the camera. Although this is of course intentional, as the point of pixel perfect is to make sure that everything renders on pure pixel increments, one might still want to have the camera move smoothly while everything else snaps to a grid. In order to achieve this, a sub-pixel offset should be added to the shader which is updated as the camera moves. This sub-pixel offset is then used to move the rendering region of the rendered display texture. If the texture is set to have one-pixel margins in all four directions, the illusion of a smooth camera can be created. An example of this implementation in Godot can be found in the following video, which is where I myself learned about this technique!
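The offset calculation itself is simple: split the camera position into a whole-pixel part (which positions the native render) and a sub-pixel remainder (which shifts the upscaled texture). A minimal sketch, engine details omitted:

```python
import math

def camera_offsets(cam_x: float, cam_y: float):
    """Split a camera position into whole-pixel and sub-pixel parts.

    The whole-pixel part positions the native-resolution render; the
    remainder (each component in [0, 1)) nudges the upscaled display
    texture, which is why it needs a one-pixel margin on each side.
    """
    snap_x, snap_y = math.floor(cam_x), math.floor(cam_y)
    return (snap_x, snap_y), (cam_x - snap_x, cam_y - snap_y)
```

Each frame, the snapped part drives the in-world camera and the fractional part drives the shader offset, producing smooth apparent motion from a grid-locked render.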

The last major annoyance that I have encountered in my explorations of pixel perfection (and for which I have yet to develop a solution) is that of jittering when the camera is set to smoothly interpolate towards a moving target. In the case of a moving player and a smoothing camera, the increments by which the camera moves often do not match those of the player. This means that when the camera is accelerating, the player often jitters back and forth across the screen. Additionally, when the player is at the “terminal” point on the screen, the position at which the velocity of the camera matches the velocity of the player, fluctuations in the actual speed of the camera’s smoothing can cause irritating jitters around the player’s “terminal” position. One could argue that this issue is negligible, but given long exposure over the course of an action or RPG game, the problem can become quite distracting.

One could consider potential solutions such as experimenting with other methods of smoothing or by using some sort of predictive calculation, but the problem, as of yet and as far as I’m aware, has not been solved.


The PICO-8 virtual console, with its 128x128 resolution and artificially limited colour palette, has brought the pixel perfect style to the contemporary indie vogue. Regardless of whether its users choose it for stylistic preference or for other reasons, they are all contributing to the body of work containing pixel perfection. A distinct style offering a reminiscent, retro, yet modern look. A polished look. A testament to the style that is, fundamentally, the modern impressionism. To the artform born and launched by the computer, spread by the internet and the work of many quaint, visionary indie developers.

Striving for pixel perfection, we will always come up short. But maybe we can achieve a little pixel excellence in the process.

