The forgotten basement

Most software developers work in languages whose abstractions are so dense that the underlying hardware becomes completely invisible. Anyone developing in Python, JavaScript, or Java thinks in objects, functions, and data structures — not in voltage levels, gates, or clock edges. Even C programmers, who traditionally stand closer to the hardware, often go years without ever thinking about what physically happens inside the processor when they write a + b.

This abstraction is a strength. It makes software development productive and lets us write business logic without worrying about transistors. But it has a price: most developers have no mental model of what happens beneath their code. When performance issues appear, when cache effects surprise us, or when communication with hardware colleagues stalls, the foundation needed to interpret the symptoms is missing.

This article builds a bridge. It leads from the familiar Boolean — which every developer uses daily — to the flip-flop, the smallest memory element in computer hardware. The goal is not to turn software developers into electrical engineers, but to construct a robust mental model: a picture of how your own if-statements and variables physically exist.

Booleans everyone knows

Every common programming language has a Boolean data type — a value that takes exactly two states: true or false. In most languages, these values are combined with operators that every developer knows in their sleep: && for AND, || for OR, ! for NOT, and in many languages ^ for exclusive OR.

A typical condition in code might look like this:

if (user.isLoggedIn && (user.isAdmin || user.hasPermission)) {
    showAdminPanel();
}

What happens here is, at the conceptual level, pure propositional logic. The expression evaluates to a single truth value, either true or false, following clear precedence rules: the parenthesized OR binds first, then the AND, and the result decides the if. (At runtime most languages additionally short-circuit, skipping the right-hand operand once the left one already determines the result; the final truth value is the same.)

What developers rarely realize: these operators are not an invention of the respective programming language. They are not convenience features the compiler adds. They are the direct representation of a mathematical structure known since the 19th century — Boolean algebra, named after George Boole. And they have a second form of existence beyond the code: as physical circuits made of silicon.

From logical operator to gate

A logic gate is an electronic circuit that combines one or more binary inputs into a single output according to a fixed rule. The AND gate, for instance, outputs a logical 1 if and only if both inputs are 1 — exactly the same truth table the && operator implements. The OR gate does the same for ||, the NOT gate (also called inverter) for !, the XOR gate for ^.

A   B   A AND B   A OR B   A XOR B
0   0      0         0        0
0   1      0         1        1
1   0      0         1        1
1   1      1         1        0

This table is not just an abstract logician's game. It is the exact specification of a physical circuit. When a chip designer integrates an AND gate into a CPU, they guarantee through semiconductor physics that the circuit behaves exactly like these four lines of truth table.
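
To make the correspondence tangible, the same four rows can be reproduced in a few lines of C with the bitwise operators &, |, and ^ on single bits; this is only a simulation sketch, not how the gates themselves are specified:

#include <stdio.h>

int main(void) {
    /* Enumerate all four input combinations of two single bits and
       print AND, OR, and XOR: the same four rows as the table above. */
    printf("A B | AND OR XOR\n");
    for (unsigned a = 0; a <= 1; a++)
        for (unsigned b = 0; b <= 1; b++)
            printf("%u %u |  %u   %u   %u\n", a, b, a & b, a | b, a ^ b);
    return 0;
}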

A particularly elegant property of Boolean algebra is NAND completeness: with only a single gate type — the NAND gate, the negation of AND — every possible logical function can be built. An inverter is a NAND with both inputs tied together. An AND is a NAND followed by a second NAND acting as inverter. An OR is likewise assembled from three NAND gates: invert each input with its own NAND, then NAND the two results. This insight is not just academic: some semiconductor technologies can produce NAND gates particularly efficiently, which has led to widespread standardization on this building block.
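
A quick way to convince yourself of NAND completeness is to rebuild NOT, AND, and OR from a single nand function and check all input combinations; a minimal sketch on single-bit values:

#include <stdio.h>

/* NAND on single bits: the only "primitive" we allow ourselves. */
static unsigned nand(unsigned a, unsigned b) { return !(a & b); }

/* Everything else built from NAND alone. */
static unsigned not_(unsigned a)             { return nand(a, a); }
static unsigned and_(unsigned a, unsigned b) { return nand(nand(a, b), nand(a, b)); }
static unsigned or_(unsigned a, unsigned b)  { return nand(nand(a, a), nand(b, b)); }

int main(void) {
    for (unsigned a = 0; a <= 1; a++)
        for (unsigned b = 0; b <= 1; b++)
            printf("a=%u b=%u  NOT a=%u  a AND b=%u  a OR b=%u\n",
                   a, b, not_(a), and_(a, b), or_(a, b));
    return 0;
}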

Where the bit physically lives

A bit in code is abstract — a 0 or a 1. In hardware, it is a voltage. The convention in use today: a voltage near 0 V represents a logical 0, a voltage near the supply voltage (typically 3.3 V, 1.8 V, 1.2 V, or even less in modern processors) represents a logical 1. A bit is therefore nothing more than "voltage at point X above or below a threshold".

The component that switches these voltages is the transistor. The dominant type in logic circuits today is the MOSFET (Metal-Oxide-Semiconductor Field-Effect Transistor). For the mental model, a simplified view is enough: a transistor is an electronic switch with three terminals. Two of them are the switching path, the third is the control element (the "gate"). When a sufficient voltage is applied to the gate, the switching path becomes conductive; without it, it blocks. The switch is therefore not operated mechanically, but electrically.

From two or four such transistors, a gate can be built. From several thousand gates, an arithmetic logic unit. From several hundred million gates, a modern CPU. A current smartphone processor contains on the order of 15 to 20 billion transistors on an area smaller than a thumbnail. Each one of these transistors is a switch that can be turned on and off several billion times per second.

From gate to useful circuit

A single gate by itself computes nothing useful. Only by interconnecting gates do we get the building blocks that actually make a program run. The simplest meaningful example is adding two bits.

When adding two bits A and B, there are four possible input combinations. Three of them yield a single-digit result (0+0=0, 0+1=1, 1+0=1), the fourth a two-digit sum (1+1=10, that is, sum 0 with carry 1). The sum corresponds exactly to an XOR of the inputs, the carry corresponds exactly to an AND. This circuit — XOR plus AND — is called a half adder.
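
As a software sketch (C used here purely as a simulation language; in hardware these are two gates, not two instructions), the half adder is exactly one XOR and one AND:

#include <stdio.h>

/* Half adder: sum = A XOR B, carry = A AND B. */
static void half_adder(unsigned a, unsigned b, unsigned *sum, unsigned *carry) {
    *sum   = a ^ b;
    *carry = a & b;
}

int main(void) {
    unsigned s, c;
    for (unsigned a = 0; a <= 1; a++)
        for (unsigned b = 0; b <= 1; b++) {
            half_adder(a, b, &s, &c);
            printf("%u + %u = carry %u, sum %u\n", a, b, c, s);
        }
    return 0;
}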

A half adder, however, can only handle the lowest bit of an addition because it does not consider an incoming carry. For the higher digits you need a full adder, which has three inputs: A, B, and a carry-in from the bit below. A full adder is essentially built from two half adders and an OR gate.

Chaining n full adders together, with the carry passed up to the next adder each time, yields an n-bit adder. This is exactly how hardware addition is built in every modern CPU, with various optimizations for speed (such as carry-lookahead adders), but at its core: cascaded full adders made of AND, XOR, and OR gates.
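
Sketched as a simulation in C, assuming 8-bit operands just to keep the check small, a chain of full adders really does reproduce what + computes:

#include <stdio.h>

/* Full adder: two half adders plus an OR gate for the carry. */
static void full_adder(unsigned a, unsigned b, unsigned cin,
                       unsigned *sum, unsigned *cout) {
    unsigned s1 = a ^ b;          /* first half adder  */
    unsigned c1 = a & b;
    *sum        = s1 ^ cin;       /* second half adder */
    unsigned c2 = s1 & cin;
    *cout       = c1 | c2;        /* OR combines the two carries */
}

/* 8-bit ripple-carry adder: chain eight full adders, passing the carry up. */
static unsigned ripple_add8(unsigned a, unsigned b) {
    unsigned result = 0, carry = 0;
    for (int i = 0; i < 8; i++) {
        unsigned sum;
        full_adder((a >> i) & 1u, (b >> i) & 1u, carry, &sum, &carry);
        result |= sum << i;
    }
    return result & 0xFFu;
}

int main(void) {
    /* Compare against the CPU's own + for every pair of 8-bit operands. */
    for (unsigned a = 0; a < 256; a++)
        for (unsigned b = 0; b < 256; b++)
            if (ripple_add8(a, b) != ((a + b) & 0xFFu)) {
                printf("mismatch at %u + %u\n", a, b);
                return 1;
            }
    printf("ripple-carry adder matches + for all 8-bit inputs\n");
    return 0;
}

Real CPUs do not loop, of course: all the full adders exist side by side in silicon, and the loop here only stands in for the physical chaining of the carry wire.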

When a software developer writes a + b in their code, this circuit runs at the end — a series of full adders made of AND, XOR, and OR gates, made of transistors, manufactured from doped silicon.

The same logic builds multiplexers (selection circuits omnipresent in buses and memory addressing), decoders, comparators, and shift registers. The building blocks of a processor are all assembled by this pattern: many small, simple gates, cleverly interconnected into complex functions.
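
As one example from that list, a 2-to-1 multiplexer (choose one of two inputs depending on a select bit) reduces to three gates; a minimal sketch on single bits:

#include <stdio.h>

/* 2-to-1 multiplexer: out = (sel AND b) OR ((NOT sel) AND a),
   i.e. pass b through when sel is 1, otherwise pass a. */
static unsigned mux2(unsigned sel, unsigned a, unsigned b) {
    return (sel & b) | ((sel ^ 1u) & a);
}

int main(void) {
    printf("sel=0 -> %u, sel=1 -> %u\n", mux2(0, 0, 1), mux2(1, 0, 1));
    return 0;
}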

The hardware's memory: flip-flops

So far, all circuits introduced have been combinational: the output depends only on the current state of the inputs, with no memory at all. Such circuits can compute, but cannot store anything. For a processor to hold variables, it needs sequential circuits — circuits with state, whose output depends not only on current inputs but also on their history.

The simplest sequential element is the flip-flop. A flip-flop stores exactly one bit. It has two stable states — set (Q=1) and reset (Q=0) — and can be switched between them via its inputs. The basic variant, the SR flip-flop, can be built from two cross-coupled NAND gates. The variant most commonly used in modern CPUs is the D flip-flop, with a data input D, a clock input, and an output Q. On each rising clock edge, the flip-flop captures the value of D into Q and holds it there until the next edge arrives.
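
The edge-triggered behaviour can be mimicked in a few lines; this is a software model of what a D flip-flop does, not of how the circuit is built:

#include <stdio.h>

/* Software model of a D flip-flop: Q changes only on a rising clock edge. */
typedef struct {
    unsigned q;          /* stored bit, the output                    */
    unsigned prev_clk;   /* previous clock level, to detect the edge  */
} d_flipflop;

static void dff_tick(d_flipflop *ff, unsigned clk, unsigned d) {
    if (clk == 1 && ff->prev_clk == 0)  /* rising edge: capture D */
        ff->q = d & 1u;
    ff->prev_clk = clk;                 /* otherwise Q holds its value */
}

int main(void) {
    d_flipflop ff = { .q = 0, .prev_clk = 0 };
    unsigned clk[] = { 0, 1, 0, 1, 0, 1 };
    unsigned d[]   = { 1, 1, 0, 0, 1, 1 };
    for (int i = 0; i < 6; i++) {
        dff_tick(&ff, clk[i], d[i]);
        printf("clk=%u d=%u -> Q=%u\n", clk[i], d[i], ff.q);
    }
    return 0;
}

In real hardware there is no "previous clock level" variable; the edge sensitivity comes from the internal gate structure, but the externally visible behaviour is the same.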

A single flip-flop stores one bit. 64 flip-flops, clocked together, form a 64-bit register. A modern CPU contains many such registers: the general-purpose computation registers, the program counter register, the status registers with their flags. All are nothing more than bundled flip-flops.

When your code declares a local int variable, it most likely lives physically in exactly 32 or 64 flip-flops of a CPU register, as long as it has not been spilled to main memory.

Larger memories use other, denser cells — SRAM (static RAM, used for caches) employs flip-flop-like structures with six transistors per cell, DRAM (main memory) stores bits as charge in tiny capacitors. But the mental model "one storage element = one flip-flop" carries through every layer of abstraction. A 1 MB cache is conceptually nothing more than eight million individual bit-stores, addressable in parallel.

The link to theory: finite state machines

As soon as hardware has state, its behaviour can be described as a finite state machine (FSM) — a mathematical model with a finite set of states, transitions between them, and an input alphabet that triggers the transitions. This is no coincidence: precisely this theory is the natural tool for describing sequential hardware.

It is remarkable how naturally programmers use finite state machines every day without calling them that: parsers, protocol handlers, UI flows, and regular-expression engines are all state machines in disguise. Hardware engineers just use the model more directly: they build state machines as circuits by storing a state encoding in flip-flops and wiring the transitions through combinational gates. Once you have seen the bridge, you recognize it everywhere, and you notice that finite state machine theory has its most natural home in hardware.
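
To make that hardware view concrete, here is a minimal sketch of a Moore machine that raises its output after seeing two consecutive 1 bits; the enum plays the role of the state encoding held in flip-flops, the switch plays the role of the combinational transition logic (the example itself is made up for brevity):

#include <stdio.h>

/* States of a tiny Moore machine that detects two consecutive 1 bits.
   In hardware, this enum would be a state encoding held in flip-flops. */
typedef enum { SEEN_NONE, SEEN_ONE, SEEN_TWO } state_t;

/* Transition function: pure combinational logic, computing the next
   state from the current state and the input bit. */
static state_t next_state(state_t s, unsigned bit) {
    switch (s) {
        case SEEN_NONE: return bit ? SEEN_ONE : SEEN_NONE;
        case SEEN_ONE:  return bit ? SEEN_TWO : SEEN_NONE;
        case SEEN_TWO:  return bit ? SEEN_TWO : SEEN_NONE;
    }
    return SEEN_NONE;
}

int main(void) {
    unsigned input[] = { 1, 0, 1, 1, 1, 0 };
    state_t s = SEEN_NONE;
    for (int i = 0; i < 6; i++) {
        s = next_state(s, input[i]);   /* the "clock edge": state updates */
        printf("bit=%u detected=%d\n", input[i], s == SEEN_TWO);
    }
    return 0;
}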

Why this is worth knowing

Software developers will be able to do their jobs without this knowledge. But anyone who has once traced this arc benefits on multiple levels:

Better mental models for performance. Cache effects become plausible once you see that a 64-byte cache line is simply 512 flip-flop-like cells addressed as a block. Branch prediction becomes concrete when you know that every conditional jump can hit a pipeline with ten or more stages. SIMD instructions become less mysterious once you understand that the hardware simply contains 4, 8, or 16 parallel adders that a single instruction drives at the same time.

Better communication with hardware colleagues. Anyone who has ever worked in a mixed software/hardware team knows the friction at the interface: the software person doesn't understand why "just flipping a bit" takes half a day; the hardware person doesn't understand why you can't simply add a few lines of code. A common vocabulary that includes register, flip-flop, and clock domain removes a large part of this friction.

Solid foundation for embedded, IoT, and FPGA. Anyone planning to enter one of these fields cannot do without this foundation. Embedded C without an understanding of the underlying hardware remains groping in fog. FPGA programming is at its core direct circuit design at the level of gates and flip-flops.

And finally: it is simply fun to understand at what levels your own code runs. An if-statement is not just a piece of source text — it is an elegant abstraction across several translation layers, ultimately grounded in physical voltages flowing through silicon.

📘 Going deeper with the IT Compendium

If you want to follow the arc from Boolean to flip-flop systematically, you will find the concepts sketched here in the IT Compendium for IT Specialists with complete truth tables, schematics, and worked examples:

  • Chapter 2.1 — Boolean algebra: operators, truth tables, De Morgan's laws, axioms
  • Chapters 3.1–3.2 — Electrical engineering and semiconductors: from Ohm's law to the MOSFET
  • Chapter 3.3 — Digital logic: basic gates, half/full adders, multiplexers, flip-flop variants
  • Chapter 3.4 — Automata theory: finite state machines and the Turing machine

The compendium is primarily aimed at IT apprentices preparing for their certification, but works equally well as a systematic refresher for experienced software developers who want to understand the "basement" of their codebase.

View IT Compendium → 29.00 €

Gerd Schmitt

Computer science graduate, embedded systems engineer since 1990. Diploma thesis in control engineering with assembler hardware drivers, and ever since continuously involved in projects where software, FPGA, and analog/digital electronics meet. Author of the IT Compendium.

Embedded project planned, or software meeting hardware?

Whether embedded development, FPGA design, real-time systems, or bridging the gap between a software team and the hardware world — I support your team on projects where code and circuit meet. First conversation free of charge.