I rebuilt an LLM… with pixels.
on April 26, 2026
Large Language Models (LLMs) are built on abstract concepts: probability distributions, autoregressive generation, large-scale optimization.
These mechanisms are difficult to observe directly.
An alternative approach is to project these concepts into a visual and deterministic system, allowing us to study their dynamics.
This work was inspired by a demonstration shared on Twitter: the Allen Explains thread.
The goal is to build an educational prototype that makes it possible to:
- represent sequential generation
- observe an optimization process
- compare different training regimes
System Architecture
The prototype is implemented in PHP (Symfony 8) and is based on four main components:
- Cellular Automaton (Game of Life)
- Lambda language (AST JSON)
- Genetic Algorithm
- Streaming pipeline (NDJSON + SSE)
Full source code: llm-game-of-life
Representation: from text to grid
An LLM models a distribution:
[ P(x_1, x_2, ..., x_n) ]
broken down into:
[ \prod_{t=1}^{n} P(x_t \mid x_{<t}) ]
In this prototype, this structure is transposed:
| LLM | Prototype |
| -------------- | ---------- |
| Token | Cell |
| Sequence | Grid |
| Generation | Frame |
| Model | Program |
| Inference loop | Simulation |
Each frame corresponds to a generation step.
The SSE flow produces a sequence:
frame₀ → frame₁ → frame₂ → …
equivalent to an autoregressive generation.
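To make the analogy concrete, here is a minimal Python sketch (not the project's PHP implementation): each frame is produced solely from the previous one, mirroring autoregressive decoding.

```python
def step(grid):
    """One Game of Life step on a set of live (x, y) cells."""
    from collections import Counter
    counts = Counter((x + dx, y + dy)
                     for (x, y) in grid
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next frame if it has 3 neighbors, or 2 and was alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in grid)}

def generate(seed_grid, frames):
    """Autoregressive-style loop: frame_t depends only on frame_{t-1}."""
    grid = seed_grid
    for _ in range(frames):
        yield grid
        grid = step(grid)

# A blinker oscillates with period 2.
blinker = {(0, 1), (1, 1), (2, 1)}
seq = list(generate(blinker, 3))
```

As in token-by-token decoding, the whole sequence is determined step by step; nothing is computed "in parallel" across frames.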
Model: program rather than network
Unlike traditional LLMs, this prototype uses no neural network.
The model is defined as a program in a mini lambda language, represented as an AST JSON file:
{
"type": "sequence",
"nodes": [
{ "type": "birth", "x": 1, "y": 1 },
{ "type": "next" }
]
}
This program acts as a transition function on the grid.
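A toy interpreter for such an AST could look like the following Python sketch. The node types `sequence`, `birth`, and `next` come from the JSON above; `life_step` is assumed to be a standard Game of Life transition, which may differ from the project's actual semantics.

```python
def life_step(grid):
    """Standard Game of Life transition on a set of live (x, y) cells."""
    from collections import Counter
    counts = Counter((x + dx, y + dy)
                     for (x, y) in grid
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in grid)}

def run(node, grid):
    """Interpret one AST node as a grid -> grid transition."""
    if node["type"] == "sequence":       # run children in order
        for child in node["nodes"]:
            grid = run(child, grid)
        return grid
    if node["type"] == "birth":          # force one cell alive
        return grid | {(node["x"], node["y"])}
    if node["type"] == "next":           # advance the automaton one step
        return life_step(grid)
    raise ValueError(f"unknown node type: {node['type']}")

program = {"type": "sequence",
           "nodes": [{"type": "birth", "x": 1, "y": 1},
                     {"type": "next"}]}
# The "birth" completes a 2x2 block, a still life that survives "next".
start = {(0, 0), (0, 1), (1, 0)}
result = run(program, start)
```

The program is thus literally a transition function: it consumes a grid and returns the next one.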
This approach replaces:
- the model's weights → with explicit instructions
- the layers → with explicit transformations
Optimization: Genetic Algorithm
The training is based on a population of programs.
Each generation follows:
- fitness evaluation
- selection
- crossover
- mutation
- elitism
This process replaces gradient descent.
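The steps above can be sketched in Python. The `fitness`, `crossover`, and `mutate` operators below are illustrative stand-ins shown on a toy problem (integers instead of lambda programs); they are not the project's actual operators.

```python
import random

def evolve(population, fitness, crossover, mutate, elite=2, mut_rate=0.3):
    """One genetic-algorithm generation: evaluate, select, recombine, mutate."""
    scored = sorted(population, key=fitness, reverse=True)
    next_gen = scored[:elite]                      # elitism: keep best as-is
    while len(next_gen) < len(population):
        # tournament selection of two parents
        p1 = max(random.sample(scored, 3), key=fitness)
        p2 = max(random.sample(scored, 3), key=fitness)
        child = crossover(p1, p2)
        if random.random() < mut_rate:
            child = mutate(child)
        next_gen.append(child)
    return next_gen

# Toy instance: "programs" are integers, fitness favors values near 42.
random.seed(0)
pop = [random.randint(0, 100) for _ in range(20)]
fit = lambda x: -abs(x - 42)
cross = lambda a, b: (a + b) // 2
mut = lambda x: x + random.choice((-3, 3))
for _ in range(30):
    pop = evolve(pop, fit, cross, mut)
best = max(pop, key=fit)
```

Elitism makes the best-so-far monotone: unlike gradient descent, no gradient is needed, only the ability to compare candidates.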
Unsupervised training
Unsupervised mode maximizes a fitness function based on:
- entropy (diversity)
- movement (variation between frames)
- lifetime
Objective:
[ \text{fitness} = f(\text{entropy}, \text{motion}, \text{lifetime}) ]
This regime is analogous to pretraining:
- no target
- exploration of the solution space
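One possible form of such a fitness function is sketched below; the weights, the 16×16 grid size, and the exact definitions of entropy, motion, and lifetime are illustrative assumptions, not the project's formula.

```python
import math

def entropy(grid, width, height):
    """Shannon entropy of the alive/dead cell distribution."""
    p = len(grid) / (width * height)
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def motion(prev, curr):
    """Number of cells that changed between two frames."""
    return len(prev ^ curr)

def unsupervised_fitness(frames, width=16, height=16):
    """fitness = f(entropy, motion, lifetime) -- weights are illustrative."""
    if not frames:
        return 0.0
    e = sum(entropy(f, width, height) for f in frames) / len(frames)
    m = sum(motion(a, b) for a, b in zip(frames, frames[1:]))
    lifetime = next((i for i, f in enumerate(frames) if not f), len(frames))
    return 1.0 * e + 0.1 * m + 0.5 * lifetime

# A dead, static world scores lower than a changing, long-lived one.
dead = [set()] * 8
lively = [{(i, 0), (i, 1)} for i in range(8)]
```

The intent is the same as in pretraining: reward "interesting" behavior without ever specifying what the output should be.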
Supervised Training
The supervised mode introduces a target:
- glider
- blinker
- block
The fitness becomes:
[ \text{fitness} = -d(\text{frame}, \text{target}) + \lambda \cdot \text{penalty} ]
where:
- (d) is a distance between grids
- the penalty limits program size
This mode corresponds to fine-tuning.
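Under the same assumptions (grids as sets of live cells, programs as JSON ASTs), the supervised objective could be sketched as follows; the Hamming-style distance and the size-based penalty are illustrative choices, with the penalty entering as a subtracted term.

```python
def grid_distance(frame, target):
    """Hamming-style distance: number of cells that differ."""
    return len(frame ^ target)

def program_size(node):
    """Count AST nodes, used as a complexity penalty."""
    return 1 + sum(program_size(c) for c in node.get("nodes", []))

def supervised_fitness(frame, target, program, lam=0.1):
    """fitness = -d(frame, target) + lambda * penalty,
    with the penalty implemented here as minus the program size."""
    return -grid_distance(frame, target) - lam * program_size(program)

# Hitting the target exactly still pays a small cost for program size.
target = {(0, 0), (1, 1)}
prog = {"type": "sequence", "nodes": [{"type": "next"}]}
```

As with regularized fine-tuning, the penalty biases the search toward small programs that still reach the target pattern.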
Preferences and selection
A comparison mechanism can be introduced:
- two programs produce two sequences
- a preference is applied
- selection keeps the preferred one
This scheme is a simplified form of RLHF / DPO:
[ \max \log P(\text{preferred}) - \log P(\text{rejected}) ]
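A minimal sketch of this pairwise selection is shown below; `rollout` and `prefer` are hypothetical stand-ins for the actual simulation and for the (human or heuristic) judgment.

```python
def preference_select(prog_a, prog_b, rollout, prefer):
    """Run both programs, apply a preference, keep the winner.

    `rollout` maps a program to its frame sequence; `prefer` encodes the
    pairwise judgment -- both are assumptions of this sketch.
    """
    seq_a, seq_b = rollout(prog_a), rollout(prog_b)
    return prog_a if prefer(seq_a, seq_b) else prog_b

# Toy example: programs are ints, rollout repeats them, prefer larger sums.
rollout = lambda p: [p] * 3
prefer = lambda a, b: sum(a) >= sum(b)
winner = preference_select(2, 5, rollout, prefer)
```

No scalar reward is ever computed: as in DPO, only the comparison between the two trajectories drives selection.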
Generation and streaming
The results are produced as NDJSON and streamed via SSE:
- Each chunk = one frame
- Each stream = one generation
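A minimal NDJSON serializer for frames might look like this Python sketch; the `frame` and `cells` field names are assumptions, not the project's actual schema, and the SSE wrapping is left to the server.

```python
import json

def frames_to_ndjson(frames):
    """Serialize each frame (a set of live (x, y) cells) as one JSON line."""
    for i, frame in enumerate(frames):
        yield json.dumps({"frame": i, "cells": sorted(frame)}) + "\n"

frames = [{(1, 1)}, {(1, 1), (2, 1)}]
stream = "".join(frames_to_ndjson(frames))
```

One line per frame means the client can render each generation step as soon as its chunk arrives, exactly like token streaming in an LLM chat UI.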
Canvas-based visualization interface:
- matrix rendering
- real-time display
- metrics (fitness, generation, seed)
Presentation slides: Slidewire presentation
Benchmark and reproducibility
The system includes a benchmark pipeline:
- deterministic seed
- double execution
- sequence hash
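The reproducibility check can be sketched as hashing the full frame sequence of two runs with the same seed; the toy `run_simulation` below stands in for the real automaton.

```python
import hashlib
import json
import random

def run_simulation(seed, steps=5):
    """Deterministic toy simulation: the seed fully determines the frames."""
    rng = random.Random(seed)
    return [[rng.randint(0, 1) for _ in range(4)] for _ in range(steps)]

def sequence_hash(frames):
    """Stable hash of a frame sequence for reproducibility checks."""
    blob = json.dumps(frames, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

# Double execution with the same seed must yield the same hash.
h1 = sequence_hash(run_simulation(42))
h2 = sequence_hash(run_simulation(42))
```

If the two hashes differ, some hidden source of nondeterminism (unordered iteration, wall-clock time, global RNG state) has leaked into the pipeline.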
Metrics:
- duration (duration_ms)
- memory (peak_memory_mb)
- final fitness
- reproducibility
Limitations
This prototype is not intended to reproduce a real LLM:
- no transformer
- no tokenization
- no probabilistic model
- no gradient
This is a computational analogy, useful for:
- observing an optimization dynamic
- visualizing sequential generation
- comparing different learning regimes
Conclusion
Modern LLMs rely on mechanisms that are difficult to grasp directly.
Transposing them into a visual system makes it possible to:
- make generation observable
- materialize the optimization
- isolate the fundamental concepts
This approach does not replace existing models, but offers a conceptual exploration tool.
- Source of the Twitter post: https://x.com/allen_explains/status/2044757995549319172?s=12
- The project's source code: https://github.com/matyo91/llm-game-of-life
- The presentation slides: https://github.com/matyo91/slidewire
Resources
- What are the principles we can use to build LLM-powered software that is actually good enough to put in the hands of production customers? https://github.com/humanlayer/12-factor-agents
- AIE Miami Keynote & Talks ft. OpenCode, Google Deepmind, OpenAI, and more!: https://www.youtube.com/watch?v=6IxSbMhT7v4
- AIE Miami Day 2 ft. Cerebras, OpenCode, Cursor, Arize AI, and more! : https://www.youtube.com/watch?v=DeM_u2Ik0sk
- How AI is transforming software engineering: a conversation with Gergely Orosz, @pragmaticengineer: https://www.youtube.com/watch?v=CS5Cmz5FssI
- Microsoft at ICLR 2026: Deep Learning, LLM Reasoning, Generative Models: https://www.linkedin.com/pulse/microsoft-iclr-2026-deep-learning-llm-reasoning-generative-h74se/
- ASUS DGX Spark: KI auf dem Schreibtisch – Nie wieder Token-Kosten! | Live Modellvergleich: https://www.youtube.com/watch?v=dP4zE-DTWAg
- [VIRTUAL SUMMIT DAY 1/5] How to outperform 99% of people using AI: https://www.youtube.com/watch?v=yzhg9Ks859I
- Ready to launch your own agent?: https://hermes-agent.org/fr/
- PaperClip + Agent Hermès, it's insane!: https://www.youtube.com/watch?v=PUaZ5o8u0wY
- 30-minute workshop by the creator of Claude Code that will teach you more about vibe-coding: https://x.com/heyamit_/status/2046489651775713498?s=46
- What Young People Expect from HR: Why the Generational Approach is a Misleading Concept: https://www.insign.fr/en/insights/young-workforce-expectations-generational-approach-an-intellectual-scam