
Vibe Coding · Creative Tech · Audio Visualization

AudVis

A real-time audio visualizer built on HTML5 Canvas and the Web Audio API — supporting mic input, file upload, and streaming from YouTube, Spotify, and Apple Music. Four visualization modes, volume control, and sensitivity tuning. Vanilla JavaScript, zero dependencies.

Role: Solo Developer
Type: Vibe Coding
Stack: HTML5 Canvas · Web Audio API · JS · CSS

Sound you can see, built entirely in the browser.

AudVis started as a challenge to myself: could I build something visually impressive using only browser-native APIs with zero external libraries? The answer was yes — and the process taught me more about real-time graphics rendering and audio processing than any tutorial could.

The result is an interactive audio visualizer that accepts sound from three different sources, transforms raw frequency data via FFT analysis, and renders it in real time across four distinct visualization modes. It has since grown into a proper multi-file project — separate HTML, CSS, and JS files plus a dedicated streaming service module — running at a smooth 60fps with volume control and adjustable sensitivity.


Real-time graphics and audio processing are unforgiving.

Canvas animations have no safety net. A poorly optimized render loop drops frames immediately and visibly. Combining that with live audio analysis — where the data updates 60 times per second — meant every decision had a direct performance cost.

🎵

Audio pipeline complexity

Each input source — microphone, file, stream — needs its own routing through the Web Audio API graph. Building a unified interface over three different source types without redundant code was the first real design challenge.
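One way to sketch that unified interface: a single factory that turns any of the three input types into a Web Audio source node, so everything downstream is identical. The function and property names here are illustrative, not AudVis's actual API.

```javascript
// Normalize all three input types into one Web Audio graph.
// `input.kind` is an assumed discriminator: 'mic', 'file', or 'stream'.
async function createSource(ctx, input) {
  if (input.kind === 'mic') {
    // Microphone: capture a MediaStream and wrap it as a source node
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
    return ctx.createMediaStreamSource(stream);
  }
  if (input.kind === 'file') {
    // Uploaded file: decode into an AudioBuffer and play it back
    const buffer = await ctx.decodeAudioData(await input.file.arrayBuffer());
    const node = ctx.createBufferSource();
    node.buffer = buffer;
    node.start();
    return node;
  }
  // Streamed audio: an <audio> element pointed at a playable URL
  return ctx.createMediaElementSource(input.element);
}

// Whatever the source, the downstream wiring is the same:
//   source -> analyser -> destination
function connectToAnalyser(ctx, source, analyser) {
  source.connect(analyser);
  analyser.connect(ctx.destination);
  return analyser;
}
```

With this shape, the visualizer never needs to know which kind of source it is listening to.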

🖼️

Canvas performance

Clearing and redrawing the entire canvas every frame is expensive. I had to learn which operations to batch, when to use requestAnimationFrame vs setInterval, and how to avoid unnecessary state changes.
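The core of that pattern can be sketched as a single loop: one full clear, one data refresh, one batched draw per frame, rescheduled with requestAnimationFrame. Unlike setInterval, requestAnimationFrame lets the browser align each frame with the display refresh and pauses when the tab is hidden. The `draw` callback and function names are illustrative.

```javascript
// Minimal render-loop sketch: refresh audio data, clear once, draw once.
function makeRenderLoop(canvas, analyser, draw) {
  const ctx = canvas.getContext('2d');
  const data = new Uint8Array(analyser.frequencyBinCount);

  function frame() {
    analyser.getByteFrequencyData(data);              // fresh frequency snapshot
    ctx.clearRect(0, 0, canvas.width, canvas.height); // one full clear per frame
    draw(ctx, data);                                  // mode-specific, batched drawing
    requestAnimationFrame(frame);                     // schedule the next frame
  }
  requestAnimationFrame(frame);
}
```

Reusing one preallocated `Uint8Array` also avoids per-frame allocations, which keeps garbage collection out of the hot path.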


Web Audio API does the heavy lifting.

The Web Audio API exposes an AnalyserNode that performs Fast Fourier Transform (FFT) analysis on live audio, returning frequency-domain data as a byte array. That array is what drives every visual — each bar, wave, and particle maps directly to a frequency bucket. The analyser runs at an FFT size of 256 with a smoothing time constant of 0.8, striking a balance between frequency resolution and smooth transitions.
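The setup described above is only a few lines. With an FFT size of 256, the analyser exposes 128 frequency bins (fftSize / 2), each covering sampleRate / fftSize Hz — about 187.5 Hz per bin at a 48 kHz sample rate. The `binFrequency` helper is my own addition for mapping bins to visuals.

```javascript
// Analyser configuration matching the values described in the text.
function setupAnalyser(ctx) {
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 256;               // yields 128 frequency bins
  analyser.smoothingTimeConstant = 0.8; // blend each frame with the previous
  return analyser;
}

// Lower-edge frequency of bin i — handy for mapping bins to hues or labels.
function binFrequency(i, sampleRate, fftSize) {
  return i * (sampleRate / fftSize);
}
```

Per frame, `analyser.getByteFrequencyData(array)` fills a `Uint8Array` of length `analyser.frequencyBinCount` with intensities from 0 to 255 — the raw material for every visualization mode.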

🔊
Web Audio API
Core audio graph: source nodes feed into an AnalyserNode extracting frequency and waveform data. FFT size 256, smoothing 0.8 — covers 0–22kHz depending on sample rate.
🎨
HTML5 Canvas 2D
All visuals rendered via the Canvas 2D context. requestAnimationFrame drives the loop, syncing each frame to the display's refresh rate (typically 60fps) without manual timing logic.
🎤
MediaStream API
getUserMedia() captures microphone input as a MediaStream, routed into the Web Audio graph — the same pipeline as file or stream sources, keeping the audio routing unified.
🎵
Streaming Service Module
A dedicated streaming-service.js handles platform detection and URL validation for YouTube, Spotify, and Apple Music — cleanly separated from the core visualizer logic.
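Platform detection of this kind usually reduces to hostname matching. The actual streaming-service.js isn't shown here, so this is an illustrative guess at its shape, with hostnames and return keys as assumptions.

```javascript
// Hypothetical sketch of platform detection by hostname.
// Returns a platform key, or null for invalid/unsupported URLs.
function detectPlatform(url) {
  let host;
  try {
    host = new URL(url).hostname.replace(/^www\./, '');
  } catch {
    return null; // not a parseable URL at all
  }
  if (host === 'youtube.com' || host === 'youtu.be') return 'youtube';
  if (host === 'spotify.com' || host === 'open.spotify.com') return 'spotify';
  if (host === 'music.apple.com') return 'apple-music';
  return null; // valid URL, but not a supported platform
}
```

Keeping this logic in its own module means the visualizer core never touches URL parsing, and new platforms can be added in one place.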

Four ways to see the same sound.

Each mode uses the same underlying frequency data but visualizes it with a completely different algorithm. Building each one from scratch deepened my understanding of the data itself — how frequency buckets cluster, how amplitude maps to perceived loudness, how time-domain vs frequency-domain data behaves.

📊
Frequency Bars
Each bar represents a frequency band. Bar height maps to intensity, colors shift by frequency value, and glow effects enhance the visual impact.
〰️
Waveform
Real-time audio waveform with a mirror effect for symmetry. Smooth anti-aliased line rendering that responds directly to audio amplitude.
Circular
Multiple expanding circles whose size is driven by frequency intensity. Color, opacity, and radius vary to create hypnotic ripple effects that pulse with the audio.
Particle System
100 animated particles that bounce around the canvas and connect to each other based on audio levels. Dynamic movement, color-coded connections — the hardest to tune, the most interesting to watch.
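To make the shared-data idea concrete, here is a sketch of the simplest mode, Frequency Bars: each byte of frequency data (0–255) becomes a bar whose height scales with intensity and whose hue shifts with its frequency band, with a shadow for glow. The exact colors, gaps, and glow values here are illustrative.

```javascript
// Map one frequency-data byte to a bar's geometry and hue.
function barGeometry(value, index, binCount, canvasWidth, canvasHeight) {
  const barWidth = canvasWidth / binCount;
  return {
    x: index * barWidth,
    width: barWidth - 1,                   // 1px gap between bars
    height: (value / 255) * canvasHeight,  // intensity -> height
    hue: (index / binCount) * 360,         // frequency band -> color
  };
}

// Draw all bars for one frame of frequency data.
function drawBars(ctx, data, canvasWidth, canvasHeight) {
  for (let i = 0; i < data.length; i++) {
    const b = barGeometry(data[i], i, data.length, canvasWidth, canvasHeight);
    ctx.fillStyle = `hsl(${b.hue}, 100%, 50%)`;
    ctx.shadowBlur = 10;                   // glow effect
    ctx.shadowColor = ctx.fillStyle;
    ctx.fillRect(b.x, canvasHeight - b.height, b.width, b.height);
  }
}
```

The other three modes consume the same array but replace `barGeometry` with their own mapping — angles and radii for the circular mode, velocities and connection thresholds for the particles.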

Performance is a design decision.

This project changed how I think about front-end work. Every line of code in a render loop has a cost that shows up visibly and immediately. That feedback loop — write something, see it stutter, figure out why — made performance feel less like an afterthought and more like a design constraint I had to work within from the start.

I also developed a much deeper appreciation for what browser APIs can actually do. The Web Audio API's node graph architecture is genuinely elegant — routing audio through composable nodes mirrors how physical signal chains work, which made understanding it feel intuitive once it clicked.

Building with browser-native APIs means understanding exactly what the platform can do — not just what a framework exposes.
Web Audio API · HTML5 Canvas · FFT Analysis · requestAnimationFrame · Vanilla JavaScript · Performance Optimization · Responsive Design

Next Vibe Coding Project

Noter →
