Light of Baldr
Illuminating the machine mind
We build trustworthy AI systems that can be understood, secured, and deployed at the edge. Where ancient wisdom meets modern intelligence.
Three Domains, One Vision
Building trustworthy AI systems that can be understood, secured, and deployed at the edge.
AI Research
Interpretability
Mechanistic interpretability, sparse autoencoders, neural topology. We look inside machines so you can understand what they see.
Security
Operations
Offensive and defensive cybersecurity. Penetration testing, security auditing, infrastructure hardening. Protection by design.
Infrastructure
DevOps
From homelab to production. Kubernetes orchestration, bare-metal GPU clusters, edge AI deployment. Engineering as craft.
Featured Work
Recent research and publications
SAE Interpretability Datasets
Open datasets for training and evaluating sparse autoencoders on neural network activations, enabling mechanistic interpretability research.
Neural Topology Research
Exploring the structure of thought through the lens of neural network topology. Understanding how information flows through artificial minds.
Model Reasoning Inspection
MRI: a framework for inspecting and understanding the reasoning processes of large language models. See what the model sees.
“We look inside machines so you don't have to trust them blindly.”
In an age of black-box AI, we choose transparency. Every neural pathway can be illuminated, every decision traced, every vulnerability found. This is not just engineering — it is craft. The same deliberate, purposeful work that built the great halls of old.