Origins
Where it started: old-school Psychodeli.
Psychodeli was on the desktop of millions in the 90s via AfterDark: "a spiral twisted on itself." That phrase is a pretty literal map of the engine — a smooth spiral index, pushed and pulled by a time-driven twist, then folded back into color by periodic wave functions. It’s not “just a preset,” it’s a compact little physics poem.
These are classic old-school presets with the default wave function ($\sin$). Though still enhanced from the original, these views illustrate the core idea before we start swapping distance metrics and wave functions, and layering in the smoothing architecture that keeps it all from tearing itself apart.
In early 2025, the project to modernize the original look began — keeping the spirit, upgrading the machinery. That effort eventually delivered the current engine: WebGL 2.0, an HDR color pipeline, and a steady 60fps with a multitude of user controls and EQs. An early detour mixing CSS transforms and WebGL was eventually abandoned for pure GL.
My quip is that it was "coding the vibe," not vibe coding. Whatever you call it, the throughput over a year has been staggering — and the output is miles from what I could have done solo.
§ 1 — The Force Field
Every pixel feels every node.
In 1997, the original C code calculated the distance from every pixel to every moving node and summed the contributions to determine that pixel's color. It did this on the CPU, one pixel at a time, for ~300,000 pixels. It was a gravitational force field — the same mathematics used to model electrostatic fields and gravitational lensing.
Today, Psychodeli+ runs the exact same formula, but pushes it to the GPU via WebGL, calculating millions of pixels simultaneously at 60fps.
The moving points are Knots — mathematical singularities that reach into every pixel and pull. Influence is infinite (inverse square law) but decays rapidly with distance. The visual pattern emerges from the interference of these competing forces.
On top of this base layer optionally sits the kaleidoscope — a second shader pass that slices the rendered frame into symmetric wedges and mirrors them. We originally added it to hide seam artifacts at the edges of the cylindrical parameterization. It worked so well it became a first-class feature, with modes that respond to audio, breath at phrase boundaries, and shift segment count on the beat.
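For intuition, here is a minimal sketch of that wedge fold in JavaScript (the real pass is a GLSL fragment shader; the function name and parameters here are illustrative):

```js
// Fold a point's angle into one wedge of an n-way kaleidoscope,
// mirroring every other wedge so the seams line up.
function kaleidoscopeFold(x, y, segments) {
  const r = Math.hypot(x, y);
  const wedge = (2 * Math.PI) / segments;
  let a = Math.atan2(y, x);
  a = ((a % wedge) + wedge) % wedge;   // wrap the angle into [0, wedge)
  if (a > wedge / 2) a = wedge - a;    // mirror the second half of the wedge
  return [r * Math.cos(a), r * Math.sin(a)]; // sample the base layer at the folded point
}
```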
Two nodes. The force field flows between an attractor (positive charge) and a repulsor (negative charge).
Inverse Distance Weighting (Shepard's Method)
For every pixel $\vec{P}$, the shader calculates a Force Vector $\vec{F}_{total}$ — the weighted sum of vectors pointing to each node:

$$\vec{F}_{total} = \sum_i \frac{w_i}{d(\vec{P}, \vec{K}_i)^{p}} \cdot \frac{\vec{K}_i - \vec{P}}{\|\vec{K}_i - \vec{P}\|}$$
Where $\vec{K}_i$ is the position of Knot $i$, $w_i$ is its charge (positive = attractor, negative = repulsor), and $p$ is the metric power (usually 2 — the inverse square law). The trick is that forces add as vectors: at the midpoint between two identical attractors, the pulls cancel exactly — creating a saddle point.
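A sketch of that sum in JavaScript, assuming Euclidean distance for now (illustrative names; the production version is a GLSL fragment shader running per pixel, and later sections swap in other metrics):

```js
// Weighted sum of unit vectors pointing from pixel P toward each knot.
// weight = charge / distance^p  (p = 2 gives the inverse-square law).
function forceAt(p, knots, power = 2) {
  let fx = 0, fy = 0;
  for (const k of knots) {
    const dx = k.x - p.x, dy = k.y - p.y;
    const d = Math.hypot(dx, dy) + 1e-6;      // avoid the singularity at d = 0
    const w = k.charge / Math.pow(d, power);  // positive = attractor, negative = repulsor
    fx += w * (dx / d);                        // unit direction toward the knot
    fy += w * (dy / d);
  }
  return { fx, fy };
}
```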
Deep dive: Color mapping & Lorenz weight modulation
The raw force vector is invisible. We convert it to color using polar coordinates. Field Magnitude ($r$) uses a log scale because forces near singularities approach infinity. Field Angle ($\theta$) represents flow direction: $\theta = \operatorname{atan2}(F_y, F_x)$. The final color index blends both. Each node's effective weight is also modulated in real-time by the Lorenz Attractor's $z$-variable, creating a "breathing" effect even when node positions are fixed.
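One plausible shape of that mapping, sketched in JavaScript. The constants, the blend of $r$ and $\theta$, and the normalization of the Lorenz $z$ are illustrative, not the shader's actual formula:

```js
// Map the force vector to polar "color coordinates".
function colorIndex(fx, fy, phase) {
  const r = Math.log(1 + Math.hypot(fx, fy));   // log scale tames the singularities
  const theta = Math.atan2(fy, fx);             // flow direction
  const idx = r + theta / (2 * Math.PI) + phase;
  return ((idx % 1) + 1) % 1;                   // wrap into [0, 1) for the palette lookup
}

// Let the Lorenz z-variable breathe each knot's weight.
function effectiveCharge(baseCharge, lorenzZ, depth = 0.5) {
  // For the classic parameters, z stays roughly within [0, 50];
  // normalize it and use it to expand or contract the knot's influence.
  const breath = 1 + depth * (lorenzZ / 50 - 0.5);
  return baseCharge * breath;
}
```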
§ 2 — The Lorenz Attractor
Chaos is the heartbeat.
The original Psychodeli moved its nodes with simple periodic functions — smooth, predictable, looping. The first thing AI changed was the motion itself. Each node began to ride a Lorenz attractor — the system of differential equations that Edward Lorenz discovered in 1963 while modeling weather convection.
The Lorenz system is deterministic but sensitive to initial conditions — two trajectories starting 0.0001 apart diverge exponentially. Same inputs, same outputs, but you can't predict where it'll be in 30 seconds. That's what makes the patterns breathe.
Three nodes riding Lorenz attractors in slow motion. The "breathing" is the z-variable modulating each node's gravitational weight.
The equations
Three coupled differential equations. Three variables. One butterfly.

$$\frac{dx}{dt} = \sigma\,(y - x), \qquad \frac{dy}{dt} = x\,(\rho - z) - y, \qquad \frac{dz}{dt} = x y - \beta z$$
With the classic parameters $\sigma = 10$, $\rho = 28$, $\beta = 8/3$, the trajectory never repeats. It orbits two lobes — the famous butterfly shape — switching between them unpredictably. The $x$ and $y$ variables drive node position. The $z$ variable does something subtler: it modulates each node's gravitational weight.
So the force field is never still — each node's influence expands and contracts chaotically. The math is perfectly repeatable. The motion never repeats.
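A minimal integration sketch in JavaScript (forward Euler; the actual integrator and time step in the engine are not asserted here):

```js
// One forward-Euler step of the Lorenz system with the classic parameters.
// x and y drive a knot's position; z modulates its weight.
function lorenzStep(s, dt = 0.005, sigma = 10, rho = 28, beta = 8 / 3) {
  const dx = sigma * (s.y - s.x);
  const dy = s.x * (rho - s.z) - s.y;
  const dz = s.x * s.y - beta * s.z;
  return { x: s.x + dx * dt, y: s.y + dy * dt, z: s.z + dz * dt };
}

// Two trajectories starting 0.0001 apart diverge exponentially:
let a = { x: 1, y: 1, z: 1 };
let b = { x: 1.0001, y: 1, z: 1 };
for (let i = 0; i < 5000; i++) { a = lorenzStep(a); b = lorenzStep(b); }
console.log(Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z)); // no longer small
```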
Deep dive: Why chaos looks organic
Random noise looks like static — high-frequency, spatially uncorrelated. Chaotic systems have structure: smooth trajectories, bounded orbits, fractal cross-sections. The Lorenz attractor is bounded to a finite region of phase space but fills it densely.
That's also what makes heartbeats, turbulent flow, and tree branches in wind look the way they do — deterministic but never repeating. Driving nodes with Lorenz gives the motion that same quality. It looks alive because the math actually is the same.
§ 3 — The AI Collaboration
Who taught it to hear?
The original Psychodeli was written by Benjamin McCurtain at CMU in the late 1990s. It shipped on millions of computers as part of Berkeley Systems' AfterDark screensaver suite. Then AfterDark withered, the hardware disappeared, and the code went extinct.
Twenty-five years later, we brought it back as an exploration of what AI could do with the core ideas.
The numbers
| Stat | Value |
|---|---|
| Commits | 1,662 in 11 months |
| Step change | 737 commits since November 2025 with Gemini in AGY & Claude Code |
| JavaScript | 146,000 lines of core application code |
| Shaders | 8,600 lines of GLSL |
| Documentation | 159 technical documents |
| Audio analyzers | 20+ custom — none existed in the original |
| Input | MIDI (learn mode — wiggle a knob, it binds), gyroscope, three-zone touch gestures, tap corners, mouse wheel effects, 40+ keyboard shortcuts |
| Distribution | PWA, macOS desktop app (Electron, beta), deep-linkable — every parameter is URL-encodable |
| Palettes | 151 curated + HSV generation for custom. Display P3 wide gamut on capable hardware. |
Gotchas: Meta Quest's browser can't keep up with the per-pixel shader load. On Windows, check that your browser is actually using the discrete GPU (Settings → Display → Graphics) or you'll get 15fps for no reason.
The experimentation loop
88 commits (~5% of all commits) are pure refactoring — but they touched 142,000 lines (19% of total code churn), deleting twice as much as they added. That ratio held roughly constant across the whole year. Every month, roughly 1 in 6 commits was consolidating what the previous month's experiments had taught us.
Early on, the refactoring was partly driven by switching models — different AI collaborators leave different fingerprints in the code, and reconciling those styles is real work. Later, with Claude as primary collaborator, the refactoring shifted to architectural: breaking 6,000-line monoliths into composable modules, unifying parameter systems, killing dead code paths that accumulated from rapid feature iteration.
The git log tells the real story: 39 reverts, 58 abandoned features, 116 regression fixes. I got excited about adding an understanding layer that could pick up nuance from Sly Stone, and early on I'd abandon maybe a third of what the AI generated. But those abandoned branches weren't wasted — they were how the architecture found itself.
Reverts + abandoned features — % of commits by month
The rhythm was: push hard, let things get messy, then consolidate before pushing again. By September it was time to formalize: the AudioReactivityBus, a pub/sub layer that decoupled analyzers from visuals. Deferring that structure to the last responsible moment also benefitted from better coding models as 2025 progressed. My pattern of asking the model to "write a note to your future self so you don't mess up the next attempt so badly" has vanished with the 11/25 generation of models.
In my way of working, the generation is cognitively fast — you can try an idea without the overhead of researching and implementing it yourself. Knowing what to keep, and having the appetite to throw the rest away, is hard work.
The original code could draw spirals. It couldn't hear.
The entire audio system was built in collaboration with AI. Not generated by AI. Built with AI, the way you build with a colleague who happens to have read every paper on psychoacoustics.
It doesn't just hear volume.
Psychodeli+ runs 20+ custom analyzers including:
- Harmonic tension — chord complexity, key detection via Krumhansl-Schmuckler profiles.
- Melodic contour — pitch velocity, phrase arcs, vibrato.
- Instrument classification — speech, brass, strings, percussion. (Honest caveat: the specific labels are still pretty unreliable. But it doesn't matter much — a false positive on "brass" vs. "strings" still tells the system something changed in the spectral character, which is the signal that actually drives visuals.)
- Beat structure — BPM, syncopation, section boundaries (verse/chorus/drop).
- Phrase momentum — not just where beats are, but where the music is going.
Open-source starting points included meyda.js (audio feature extraction) and BeatDetektor (beat detection).
Perceptual scaling uses Weber-Fechner law so visual changes match how humans actually perceive loudness — logarithmically, not linearly.
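A sketch of that scaling, assuming a simple decibel-style mapping (the floor value and names are illustrative):

```js
// Weber-Fechner: perceived intensity grows with the log of the stimulus.
// Map raw RMS amplitude (0..1) onto a 0..1 "perceived loudness" scale.
function perceivedLoudness(rms, floorDb = -60) {
  const db = 20 * Math.log10(Math.max(rms, 1e-6));    // amplitude in decibels
  return Math.min(1, Math.max(0, 1 - db / floorDb));  // clamp the -60 dB..0 dB range to 0..1
}

perceivedLoudness(0.5);  // ~0.90: half the amplitude reads as only a bit quieter
perceivedLoudness(0.05); // ~0.57: a 10x drop reads as roughly half as loud
```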
What AI actually contributed
The design goal was to build something that hears music, not just audio. I'm a cognitive psychologist by training, so the target was always perceptual — not spectrum analysis but something closer to how a listener actually processes sound. Loudness follows Weber-Fechner. Key detection uses Krumhansl-Schmuckler profiles. Phrase momentum tracks where the music is going, not just where it is.
What AI contributed was the mathematical knowledge to wire that up — modular forms, non-Euclidean metrics, Lorenz attractors, perceptual scaling laws. Was that creative? I don't know. But the capability became a kind of art material — and the result is extinct code that grew leopard spots.
§ 4 — Transcendental Waves
We trawled Wikipedia for fancy math to throw at Claude.
The original Psychodeli used $\sin()$ — the simplest periodic function. At the cusp of '26, the latest generation of coding models had gotten good enough that we could try something reckless: browsing Wikipedia pages for exotic mathematics and asking Claude, what would this look like as a wave function?
The Riemann zeta function produces sharp crystalline edges. Möbius transformations fold space inside out. And modular forms — the mathematics behind string theory and monstrous moonshine — create stained glass motifs.
Combined modular form (mode 11) — Dedekind eta + Jacobi theta + Eisenstein series. Cathedral windows from number theory.
The progression
Every wave mode replaces $\sin(\theta)$ in the core color equation. The force field geometry stays the same — only the "lens" changes.
| Mode | Source | Visual character |
|---|---|---|
| sin | Trigonometry 101 | Smooth, predictable, the original |
| zeta | Riemann zeta function | Sharp edges, crystalline fractures |
| tanh | Hyperbolic tangent | Saturated plateaus, neon bands |
| Möbius | Complex analysis | Folding, twisting, inside-out |
| eta | Dedekind eta · number theory | Rhythmic cascades, partition patterns |
| theta | Jacobi theta · lattice theory | Frosted glass, crystalline grids |
| Eisenstein | Eisenstein series · modular forms | Starburst symmetry, snowflakes |
| modular | All three combined | Stained glass cathedral windows |
sin()
The original. Smooth and predictable.
Riemann zeta
Crystalline edges from the most famous unsolved problem in math.
Möbius
Space folds inside out. Conformal maps from complex analysis.
Modular (combined)
Eta + theta + Eisenstein. Cathedral windows from number theory.
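To make the "lens swap" concrete, here is a sketch in JavaScript. The tanh shaping constant and the truncated Jacobi theta sum are illustrative choices, not the shipped GLSL:

```js
// Each wave mode is a drop-in replacement for sin() in the core color equation.
const waveModes = {
  sin:  t => Math.sin(t),
  tanh: t => Math.tanh(3 * Math.sin(t)),  // saturated plateaus, neon bands
  theta: t => {                           // truncated Jacobi theta_3: 1 + 2*sum q^(n^2) cos(2nt), q = 0.5
    let s = 1;
    for (let n = 1; n <= 4; n++) s += 2 * Math.pow(0.5, n * n) * Math.cos(2 * n * t);
    return s - 1;                         // drop the constant term so it oscillates around 0
  },
};

// Swapping the lens, not the geometry: the force-field value fed in stays the same.
const wave = waveModes.theta;
console.log(wave(Math.PI / 3));
```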
Deep dive: What are modular forms?
Modular forms are functions on the upper half of the complex plane that satisfy specific symmetry properties under the modular group — the set of Möbius transformations with integer coefficients and determinant 1. They appear in:
- Number theory — counting lattice points, proving Fermat's Last Theorem.
- String theory — partition functions of vibrating strings.
- Moonshine — the mysterious connection between the Monster group (the largest sporadic simple group, with ~8×10⁵³ elements) and modular forms.
The Dedekind eta function creates rhythmic cascades from its infinite product. The Jacobi theta function produces crystalline lattice grids from Gaussian-weighted sums. The Eisenstein series creates star-shaped symmetry from summing over all lattice points. Combined, they produce patterns that look like stained glass — because both stained glass and modular forms arise from the mathematics of symmetric tiling.
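For reference, the standard definitions behind those three ingredients (a real-time shader can only evaluate truncated approximations of these infinite sums and products):

$$\eta(\tau) = q^{1/24}\prod_{n=1}^{\infty}\bigl(1 - q^n\bigr), \quad q = e^{2\pi i \tau}; \qquad
\vartheta_3(z, q) = \sum_{n=-\infty}^{\infty} q^{n^2} e^{2 i n z}; \qquad
G_{2k}(\tau) = \sum_{(m,n)\neq(0,0)} \frac{1}{(m + n\tau)^{2k}}$$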
§ 5 — Metric Geometry
Change the ruler, change reality.
Standard physics uses Euclidean distance — $d = \sqrt{x^2 + y^2}$ — which produces perfectly circular field shapes. But distance is a choice, not a law. Swap the metric and the "gravity wells" around every node transform from circles into diamonds, squares, and concave stars.
They look startlingly like biological cell patterns, fish scales, and crystal lattices.
| Mode | Formula | Shape | Visual character |
|---|---|---|---|
| Standard | $\sqrt{x^2+y^2}$ | Circles | Smooth, organic, classical physics |
| Manhattan | $|x|+|y|$ | Diamonds | Crystalline, tiled, Islamic geometry |
| Chebyshev | $\max(|x|,|y|)$ | Squares | Grid-like, architectural, pixel art |
| Astroid | $(|x|^{0.5}+|y|^{0.5})^2$ | Stars | Concave, biological, cell-like |
| Hyperbolic | $|xy|$ | Hyperbolae | Void-like, non-Euclidean, alien |
The astroid metric is particularly striking. Its concave star shapes create interference patterns that look like biological cell boundaries — the same geometry that emerges from surface tension minimization in soap films and cell membranes. The math didn't know it was doing biology.
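The table's rulers, as drop-in distance functions (JavaScript sketch; names are illustrative):

```js
// Swap the ruler and the gravity wells change shape.
const metrics = {
  euclidean:  (dx, dy) => Math.hypot(dx, dy),                        // circles
  manhattan:  (dx, dy) => Math.abs(dx) + Math.abs(dy),               // diamonds
  chebyshev:  (dx, dy) => Math.max(Math.abs(dx), Math.abs(dy)),      // squares
  astroid:    (dx, dy) => (Math.sqrt(Math.abs(dx)) + Math.sqrt(Math.abs(dy))) ** 2, // concave stars
  hyperbolic: (dx, dy) => Math.abs(dx * dy),                         // hyperbolae
};

// Drops straight into the force-field sum: d = metrics[mode](k.x - p.x, k.y - p.y)
```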
§ 6 — Turing Patterns
How math grows leopard spots.
In 1952, Alan Turing published his only paper on biology: The Chemical Basis of Morphogenesis. He proved that two chemicals diffusing at different rates and reacting with each other could spontaneously self-organize into stable patterns — spots, stripes, and spirals.
Psychodeli+ doesn't use Turing's exact reaction-diffusion equations, but it arrives at strikingly similar patterns through a different mechanism: competing frequencies of trigonometric interference.
Turing-like stripes emerging from high-frequency interference in the force field.
The Math of Morphogenesis
In our shader, the final color is determined by a nested cosine function with two competing frequency multipliers.
When $\text{freq}_1$ and $\text{freq}_2$ are low (e.g., 2.0 and 3.0), you get smooth, rolling gradients. But when you crank them high (e.g., 20.0 and 24.0) and zoom in, the functions begin to alias against the underlying force field geometry. The interference between the two high-frequency waves creates macro-scale structures: Moiré patterns.
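For intuition only, one plausible shape of that nested-cosine interference in JavaScript (the nesting and constants are illustrative, not the shader's actual equation):

```js
// Two competing frequencies riding the same underlying field value v (0..1).
// Low frequencies give smooth gradients; high frequencies alias against the
// field geometry and produce Moire-scale spots and stripes.
function turingish(v, freq1, freq2) {
  return Math.cos(freq1 * v * 2 * Math.PI + Math.cos(freq2 * v * 2 * Math.PI));
}

turingish(0.37, 2.0, 3.0);   // smooth, rolling gradient regime
turingish(0.37, 20.0, 24.0); // high-frequency interference regime
```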
Leopard Spots
High freq1, low freq2, Euclidean metric.
Zebra Stripes
Balanced high frequencies, 2 nodes.
Coral Reef
High frequencies with Astroid metric.
What surprised us: it doesn't matter whether you get there via chemical diffusion, cellular automata, or trigonometric interference. When local forces compete on different scales, the same patterns show up.
§ 7 — The Living System
It directs itself. It watches you back.
Everything above describes the materials — force fields, metrics, wave functions, patterns. But Psychodeli+ also has an AI director that composes them in real time, and a camera system that watches the viewer and learns what works.
Algorithmic Exploration — the AI director
No hard-coded mappings. No table that says "kick drum → zoom in." Instead, the system builds audio fingerprints — ~20-feature vectors (spectral centroid, harmonic tension, beat phase, energy slope…) — and tags each one with whatever visual state was showing at the time. When the current moment's fingerprint is close enough to a stored one (cosine similarity above a confidence threshold), it triggers a turbo replay: snap back to that visual state. Recurring textures get recurring treatments. It discovers the associations by running.
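A sketch of the matching step (hypothetical names and threshold; the real feature set lives in the analyzers):

```js
// Audio fingerprints are ~20-feature vectors; each stored fingerprint carries
// the visual state that was showing when it was captured.
function cosineSimilarity(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] ** 2; nb += b[i] ** 2; }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

function maybeTurboReplay(current, memory, threshold = 0.92) {
  let best = null, bestSim = threshold;            // must beat the confidence threshold
  for (const m of memory) {
    const sim = cosineSimilarity(current.features, m.features);
    if (sim > bestSim) { best = m; bestSim = sim; }
  }
  return best ? best.visualState : null;           // snap back to that visual state
}
```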
A four-phase cycle — Discovery, Development, Variation, Return — controls how aggressively it explores vs. consolidates. Early in a session it tries everything; later it develops themes and creates callbacks to earlier moments. User input (touch scrubs, mouse wheel, gyroscope, MIDI) feeds in alongside audio — it's all just more signal. The face tracking loop below is the newest input, and the first attempt at closing the feedback gap.
Face tracking + reinforcement learning
What we’re working on now: a local-only, webcam-based feedback loop. Not surveillance — just a way to test whether the visuals are landing: head-bob beat sync, motion energy, sustained gaze.
Visual processing uses Google MediaPipe — off-the-shelf computer vision, composable building blocks.
Face Mode: head bob intensity, gaze tracking, beat sync, and the SYNTAX-RL panel. The stick figure tracks your actual head or body position (not both, due to performance constraints). Ctrl-shift-d for this debug panel.
Early experiments feed those signals into SYNTAX-RL (Q-learning): a light bias layer over Algorithmic Exploration. Design constraint: no hard mappings — it’s not “button X = effect Y,” it’s “these kinds of states tended to keep attention in this musical context.”
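A sketch of the kind of tabular Q-update such a bias layer might use, with attention-derived reward. Every name and constant here is hypothetical:

```js
// Reward comes from the camera signals: head-bob beat sync, sustained gaze, motion energy.
// The "action" is a visual-state choice; the "state" is a coarse bucket of the audio context.
const qTable = new Map(); // key: `${state}|${action}` -> estimated value

function qUpdate(state, action, reward, nextState, actions, alpha = 0.1, gamma = 0.9) {
  const key = `${state}|${action}`;
  const old = qTable.get(key) ?? 0;
  const bestNext = Math.max(...actions.map(a => qTable.get(`${nextState}|${a}`) ?? 0), 0);
  qTable.set(key, old + alpha * (reward + gamma * bestNext - old));
}

// The learned values then bias (not override) Algorithmic Exploration's choices.
```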
Playground
Explore...
Algo exploration will take you on a journey, but hit 'h' for lots of ways to steer your own destiny.
In memoriam
Benjamin McCurtain
A spiral twisted on itself. Everything else grew from here.
With Eric Pitcher and Jefferson Provost
Web application (PWA)
Install to your home screen for a full-screen, app-like experience.