System Online :: v2.2

ADRIAN LAYNEZ ORTIZ

Research & Engineering

Mathematics & Computer Science

Mechanistic Interpretability · High-Performance Engineering

4 Research Sections
50+ Interactive Visualizations
2 Languages
Curiosity

[Abstract Mathematics Visualization]
Currently Building

Deep Learning Engine — CUDA / C++

Custom kernels for matrix operations and backpropagation
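The backward pass those kernels compute reduces to two more matrix multiplies: for C = A·B with upstream gradient dL/dC, the chain rule gives dL/dA = (dL/dC)·Bᵀ and dL/dB = Aᵀ·(dL/dC). A minimal pure-Python reference of that math (illustrative only; the actual engine computes these sums in parallel on the GPU):

```python
# Reference math for a matmul forward/backward pass.
# For C = A @ B with upstream gradient dC = dL/dC:
#   dL/dA = dC @ B^T,   dL/dB = A^T @ dC

def matmul(A, B):
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][p] * B[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul_backward(A, B, dC):
    dA = matmul(dC, transpose(B))   # dL/dA
    dB = matmul(transpose(A), dC)   # dL/dB
    return dA, dB

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
dC = [[1.0, 0.0], [0.0, 1.0]]       # identity upstream gradient
dA, dB = matmul_backward(A, B, dC)
```

With the identity as upstream gradient, dA comes out as Bᵀ and dB as Aᵀ, which makes the formulas easy to sanity-check against a handwritten kernel.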

About

Bridging Abstract Mathematics & Machine Intelligence

I am pursuing a double degree in Mathematics and Computer Science at the Universidad Complutense de Madrid. My research focuses on understanding neural networks at their deepest level — from gradient dynamics to kernel-level optimization.

I specialize in Mechanistic Interpretability — the science of reverse-engineering how neural networks represent and process information internally. Rather than treating models as black boxes, I decompose their circuits to understand why they work.

My mission: make AI systems transparent through rigorous mathematical analysis and low-level engineering.

Technical Proficiencies

Python · TypeScript · React · Next.js · PyTorch · FastAPI · Linear Algebra · LaTeX · Git
Selected Work

Engineering from First Principles

Every project begins with a question. From reimplementing seminal papers to writing bare-metal GPU kernels, each one is an exercise in deep understanding.

Nano-Transformer
Ground-up reproduction of 'Attention Is All You Need' in PyTorch — Multi-Head Attention, Positional Encodings, and LayerNorm implemented without pre-built Transformer modules.
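The heart of that reproduction is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d)·V. A minimal pure-Python sketch of that formula (the project itself works on PyTorch tensors; this is just the math, unbatched and single-headed):

```python
import math

# Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V,
# written over plain lists for readability.

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]   # subtract max for stability
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        weights = softmax(scores)          # one row of the attention matrix
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
Y = attention(Q, K, V)
```

Since the query aligns with the first key, the output mixes the value rows with more weight on the first one; the attention weights always sum to 1.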
CUDA Matrix Kernels
Handwritten CUDA kernels exploring SGEMM optimization — from naive implementations to tiled shared-memory strategies, benchmarked against cuBLAS.
Autograd Engine
Lightweight reverse-mode automatic differentiation library. Dynamically constructs computation graphs and propagates gradients via the chain rule.
The Mathematics of Deep Learning
Interactive articles exploring the rigorous theory behind modern AI — SGD convergence analysis, the linear algebra of LoRA, and differential geometry on neural manifolds.
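The convergence analysis those articles cover is easy to see on a toy problem. Assuming the simple quadratic f(w) = (w − 3)², gradient descent (the noise-free special case of SGD) with step size η contracts the error by a factor |1 − 2η| per step, so the iterates converge linearly to the minimizer:

```python
# Gradient descent on f(w) = (w - 3)^2, gradient f'(w) = 2 (w - 3).
# Update: w <- w - eta * f'(w) = 3 + (1 - 2*eta) * (w - 3),
# so the error |w - 3| shrinks geometrically whenever 0 < eta < 1.

def gd(w0, eta, steps):
    w = w0
    for _ in range(steps):
        grad = 2.0 * (w - 3.0)
        w -= eta * grad
    return w

w = gd(w0=0.0, eta=0.1, steps=100)   # error ~ 3 * 0.8**100, essentially 0
```

With η = 0.1 the contraction factor is 0.8, so 100 steps leave an error of about 3·0.8¹⁰⁰ ≈ 6×10⁻¹⁰; the stochastic analysis adds a variance term on top of this deterministic rate.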
Distributed Inference
Architectural explorations in data-parallel training, model sharding, and optimized inference pipelines for large-scale neural networks.
Open to Opportunities

Let's Build Something Together

Whether it's a research collaboration, an internship opportunity, or just a conversation about the mathematics of intelligence — I'd love to hear from you.