Machine Learning

Spatiotemporal
Acoustic Manifold

A 3D visualisation of birdsong using machine learning

Each bird species produces a unique acoustic fingerprint. This project extracts 57 audio features per frame from birdsong recordings — including MFCCs, chroma, spectral centroid, and onset strength — then uses PCA to reduce that high-dimensional space down to 3 principal components. The result is an interactive 3D trajectory rendered in real time with Three.js, animated and synced to the audio. Colour maps to energy (blue → silence, orange → peak call); size maps to amplitude.

What I Built

Tech Stack

| Technology | Role |
| --- | --- |
| Python | Core language |
| librosa | Audio feature extraction |
| scikit-learn (PCA) | Dimensionality reduction |
| NumPy | Numerical computing |
| Three.js | 3D rendering |
| WebGL | Graphics |
| ES Modules | JavaScript modules |

Skills Demonstrated

Why This Project Matters

The Three.js + PCA combination is a strong differentiator: it demonstrates the ability to bridge the ML pipeline and the frontend output end-to-end.

This project demonstrates the ability to take raw audio data, apply machine learning techniques for dimensionality reduction, and create an engaging visual experience that brings the data to life.
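The pipeline-to-frontend bridge amounts to serialising the PCA trajectory, together with the per-frame energy (colour) and amplitude (size) channels, into a format the Three.js scene can read. The sketch below is a hypothetical export step with illustrative field names, not the project's actual schema; random arrays stand in for the real PCA output and RMS envelope.

```python
import json
import numpy as np

# Stand-ins for the real pipeline outputs.
rng = np.random.default_rng(0)
coords = rng.normal(size=(87, 3))      # PCA trajectory: one 3D point per frame
rms = np.abs(rng.normal(size=87))      # per-frame amplitude envelope

payload = {
    "hopSeconds": 512 / 22050,                          # frame -> playback-time mapping
    "points": np.round(coords, 4).tolist(),             # 3D vertex positions
    "energy": np.round(rms / rms.max(), 4).tolist(),    # 0..1, drives the colour ramp
    "amplitude": np.round(rms, 4).tolist(),             # drives point size
}
with open("trajectory.json", "w") as f:
    json.dump(payload, f)
```

On the frontend, the renderer can then step through `points` in time with the audio, using `hopSeconds` to convert the playback position into a frame index.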