Circuit Biscuits

Technical Deep Dives

Technical Deep Dive 4 - Displays, Pixels, And Graphics

Core Question

What really changes when a system moves from showing text to showing graphics?

It Is All Representation

Text feels different from graphics because people read language and interpret images in different ways. But inside the system, both are representation problems.

A display does not understand:

  • words
  • symbols
  • faces
  • arrows

It only receives state that eventually controls visible elements.

Text Is Already Graphics

One of the most useful ideas for learners here is that text is not an exception. A letter shown on a matrix is already a graphic object. It is a shaped pattern of lit and unlit positions.

That means the jump from "text output" to "bitmap graphics" is not really a jump between two different worlds. It is a shift in how intentionally we think about the visual pattern.

The Byte, The Character, And The Bitmap

This lesson is also a good place to talk about one of computing's most useful historical ironies.

The byte became deeply associated with text because computers needed a practical way to store character codes. In early systems, a byte was a convenient unit for representing a symbol such as:

  • A
  • 7
  • ?
  • space

That is where character encoding enters the story. ASCII, for example, assigns numbers to characters. The capital letter A is decimal 65, which is binary 01000001. The letter itself is not stored as a picture. It is first stored as a code.
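That code-first idea can be sketched in a few lines of Python, using the built-in ord and format functions:

```python
# A character is stored as a numeric code, not as a picture.
code = ord("A")                 # ASCII code for capital A
print(code)                     # 65
print(format(code, "08b"))      # 01000001: the same value written as 8 bits
```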

That is the irony: a byte lets us represent a letter abstractly, but to actually show that letter on a display, we turn back to bit patterns.

From ASCII To Glyph Data

The journey usually looks like this:

  1. a character such as A is stored as a code value
  2. the firmware looks that code up in a font table
  3. the font table provides a glyph bitmap
  4. the glyph bitmap is copied into a display buffer
  5. the display hardware eventually lights the required pixels
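The steps above can be sketched in Python. The glyph below is a hypothetical 8x8 letter A invented for illustration, not taken from any real font, and the "buffer" is just a list of strings standing in for display memory:

```python
# Hypothetical 8x8 font table: character code -> glyph bitmap,
# one byte per row, with bit 7 treated as the leftmost pixel.
FONT = {
    ord("A"): [
        0b00011000,
        0b00100100,
        0b01000010,
        0b01111110,
        0b01000010,
        0b01000010,
        0b01000010,
        0b00000000,
    ],
}

def render(char):
    """Look the character code up in the font table and expand the
    glyph bitmap into rows of '#' / '.' standing in for lit pixels."""
    glyph = FONT[ord(char)]              # steps 1-3: code -> font table -> bitmap
    buffer = []
    for row_byte in glyph:               # step 4: copy rows into a buffer
        row = ""
        for bit in range(7, -1, -1):     # bit 7 first = leftmost pixel
            row += "#" if (row_byte >> bit) & 1 else "."
        buffer.append(row)
    return buffer

for line in render("A"):
    print(line)
```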

So text rendering contains two layers of representation:

  • symbolic representation: the character code
  • spatial representation: the bitmap glyph

This distinction matters because it helps students see why text is both language and graphics at the same time.

Why Bytes Fit Bitmaps So Well

A byte is 8 bits wide, which makes it a very natural storage unit for small bitmap rows.

That matters technically because an 8-pixel row can be stored in one byte:

10111101

If we interpret 1 as LED on and 0 as LED off, then one byte can describe one whole row of an 8-pixel-wide shape (or one column of an 8-pixel-tall one), depending on how the firmware arranges the data.
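A minimal sketch of that interpretation, unpacking the byte into per-LED states by shifting and masking (bit 7 is treated here as the leftmost LED; real hardware may order bits differently):

```python
row_byte = 0b10111101

# Walk the bits from bit 7 down to bit 0, left to right.
leds = [(row_byte >> bit) & 1 for bit in range(7, -1, -1)]
print(leds)   # [1, 0, 1, 1, 1, 1, 0, 1]
```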

For an 8x8 bitmap, a common pattern is:

  • 8 rows
  • 1 byte per row
  • total = 8 bytes

That is compact, readable, and easy to shift, mask, or transmit.

For a wider display such as 8x32, the same idea still scales. A row may need multiple bytes, but the principle is unchanged: the image is stored as grouped bits, and bytes are simply a convenient way to package them.
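One way that scaling works out in practice, sketched under the assumption of row-major packing with bit 7 as the leftmost pixel of each byte:

```python
WIDTH, HEIGHT = 32, 8
BYTES_PER_ROW = WIDTH // 8               # a 32-pixel row needs 4 bytes

framebuffer = bytearray(BYTES_PER_ROW * HEIGHT)

def set_pixel(x, y):
    """Turn on pixel (x, y) in the packed buffer."""
    byte_index = y * BYTES_PER_ROW + x // 8   # which byte holds this pixel
    bit = 7 - (x % 8)                         # bit 7 = leftmost pixel in the byte
    framebuffer[byte_index] |= 1 << bit

set_pixel(0, 0)     # top-left corner -> high bit of the first byte
set_pixel(31, 7)    # bottom-right corner -> low bit of the last byte
print(framebuffer[0], framebuffer[-1])
```

The addressing arithmetic (x // 8 for the byte, x % 8 for the bit) is the whole trick: the image stays one flat run of bytes no matter how wide the display gets.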

This is one reason bitmaps feel so fundamental in embedded graphics. They match the grain of the machine very naturally.

Pixels, Subpixels, And Display Media

The matrix teaches the general idea in a simple form: one light, one visible element.

Modern colour displays usually go further. A pixel is often made from smaller red, green, and blue light sources. By changing their intensities, the screen creates different colours. That is RGB.
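Embedded colour displays often pack those three intensities into one small word. A common sketch is 16-bit RGB565 packing (5 bits red, 6 bits green, 5 bits blue); the exact bit layout depends on the display controller:

```python
def rgb565(r, g, b):
    """Pack 8-bit R, G, B intensities into one 16-bit RGB565 value."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

print(hex(rgb565(255, 255, 255)))   # 0xffff: full white
print(hex(rgb565(255, 0, 0)))       # 0xf800: pure red
```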

Printers solve a different problem. They mix pigments rather than emitted light, so CMYK becomes the more useful model there.

This is a good point to introduce a mature engineering instinct: colour reproduction depends on the medium. Displays mix emitted light additively; prints mix reflected light subtractively. Both are "colour systems," but they are not doing the same physical job.
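The gap between the two models shows up even in the naive textbook conversion from RGB to CMYK, sketched below; real printing pipelines use calibrated colour profiles rather than this arithmetic:

```python
def rgb_to_cmyk(r, g, b):
    """Naive conversion from RGB (0-1 floats) to CMYK (0-1 floats)."""
    c, m, y = 1 - r, 1 - g, 1 - b
    k = min(c, m, y)                 # pull shared darkness into the black channel
    if k == 1:                       # pure black: avoid dividing by zero
        return 0.0, 0.0, 0.0, 1.0
    return ((c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k)

print(rgb_to_cmyk(1.0, 0.0, 0.0))   # red light becomes magenta + yellow ink
```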

Time Matters As Much As Space

Once graphics enter the lesson, timing becomes unavoidable.

  • scrolling text is timed pattern change
  • animation is timed frame change
  • PWM brightness is timed power delivery

These look like different features, but they are closely related. They all rely on the system changing state over time quickly enough that human perception produces a smooth result.
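Scrolling as timed pattern change can be sketched as rotating a stored row pattern one pixel per frame; the pattern value and frame count here are illustrative:

```python
pattern = 0b10111101        # one 8-pixel row of the image

def frames(row, count):
    """Yield the row rotated left one pixel per frame."""
    for _ in range(count):
        yield row
        row = ((row << 1) | (row >> 7)) & 0xFF   # rotate left within 8 bits

for frame in frames(pattern, 3):
    print(format(frame, "08b"))
    # a real firmware loop would pause here between frames
```

The state change is instant; it is the pause between frames that turns a sequence of patterns into perceived motion.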

That is why persistence of vision belongs in the same lesson as:

  • text scrolling
  • animation frames
  • brightness control
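PWM brightness follows the same time-based logic: the LED is only ever fully on or fully off, and perceived brightness tracks the duty cycle. A sketch of the arithmetic, with made-up microsecond values:

```python
def perceived_level(on_us, period_us, full_scale=255):
    """Average drive level for an LED that is on for on_us out of each period_us."""
    duty = on_us / period_us
    return round(duty * full_scale)

print(perceived_level(250, 1000))    # 25% duty cycle on a 0-255 scale
```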

Why This Matters

This merged lesson is stronger because it treats display engineering as one connected idea:

  • character codes and font tables
  • stored visual state
  • transmitted data
  • controlled timing
  • human perception

That is the real display stack, whether the output is a tiny matrix, a phone screen, or a television.