Uzay Macar

📜 = Patent   📝 = Paper   💻 = Code   🔗 = Link

Mechanistic Interpretability

Mechanistic interpretability aims to reverse-engineer how machine learning models work by analyzing their internal computations. My research focuses on developing principled methods for attributing the behavior of thinking models, e.g., understanding how individual reasoning steps influence downstream computations and the final output.

💻 Public repository for principled attribution research
🔗 Interactive interface for causal attribution of multi-step reasoning in thinking models
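
As a concrete illustration of step-level attribution, the sketch below ablates one reasoning step at a time and measures how much the model's log-probability of the final answer shifts. This is a minimal sketch of the general idea, not the repository's actual method; `step_attribution`, `answer_logprob`, and the toy scorer in the usage line are hypothetical names and stand-ins for a real model call.

```python
# Minimal sketch: attribute a model's final answer to individual reasoning
# steps by ablating one step at a time and measuring the shift in the
# answer's log-probability. `answer_logprob` is a hypothetical stand-in
# for a real model call that scores `answer` given a prompt.

from typing import Callable, List

def step_attribution(
    question: str,
    steps: List[str],
    answer: str,
    answer_logprob: Callable[[str, str], float],
) -> List[float]:
    """Score each step by how much removing it changes the log-probability
    the model assigns to the final answer (positive = step supported it)."""
    baseline = answer_logprob(question + "\n" + "\n".join(steps), answer)
    scores = []
    for i in range(len(steps)):
        ablated = question + "\n" + "\n".join(steps[:i] + steps[i + 1:])
        scores.append(baseline - answer_logprob(ablated, answer))
    return scores

# Toy usage with a stand-in scorer that just checks whether answer tokens
# appear in the context; a real scorer would query a language model.
toy_lp = lambda ctx, ans: float(sum(w in ctx for w in ans.split()))
print(step_attribution("Q: 2 + 3 * 4 = ?", ["3 * 4 = 12", "2 + 12 = 14"], "14", toy_lp))
```

Resampling or paraphrasing a step, rather than deleting it outright, is a common variant that keeps the ablated prompt on-distribution.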

Computational Neuroscience and Brain-Machine Interfaces

I develop computational models for understanding the brain and build brain-machine interfaces. For example: autonomous optimization of neuroprosthetic stimulation parameters that drive motor cortex and spinal cord outputs.

📝 Autonomous Optimization of Neuroprosthetic Stimulation Parameters That Drive the Motor Cortex and Spinal Cord Outputs in Rats and Monkeys
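
The paper above concerns closed-loop search over stimulation parameters. As a rough illustration of that idea, the sketch below runs a Gaussian-process upper-confidence-bound (GP-UCB) loop over a grid of stimulation amplitudes and frequencies, maximizing a simulated evoked-response magnitude. All specifics here (grid ranges, kernel length scales, the `simulated_emg` response) are illustrative assumptions, not values or methods taken from the paper.

```python
# Minimal sketch of closed-loop stimulation-parameter search with a
# Gaussian-process upper-confidence-bound (GP-UCB) loop. Everything
# below is an illustrative assumption, not the paper's protocol.

import numpy as np

rng = np.random.default_rng(0)

# Candidate stimulation parameters: (amplitude in uA, frequency in Hz).
amps = np.linspace(10, 100, 10)
freqs = np.linspace(20, 200, 10)
grid = np.array([(a, f) for a in amps for f in freqs])

def simulated_emg(params):
    """Hypothetical noisy evoked-response magnitude; stands in for a
    real measurement from muscle recordings."""
    a, f = params
    peak = np.exp(-((a - 60) ** 2) / 800 - ((f - 80) ** 2) / 5000)
    return peak + 0.05 * rng.standard_normal()

def rbf(X, Y, length=np.array([25.0, 60.0])):
    """RBF kernel with per-dimension length scales (unit prior variance)."""
    d = (X[:, None, :] - Y[None, :, :]) / length
    return np.exp(-0.5 * (d ** 2).sum(-1))

X_obs, y_obs = [], []
for trial in range(25):
    if not X_obs:
        idx = rng.integers(len(grid))  # start with a random parameter set
    else:
        X, y = np.array(X_obs), np.array(y_obs)
        K = rbf(X, X) + 1e-2 * np.eye(len(X))  # add observation noise
        Ks = rbf(grid, X)
        mu = Ks @ np.linalg.solve(K, y)        # GP posterior mean
        var = 1.0 - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
        # Pick the candidate with the highest upper confidence bound.
        idx = int(np.argmax(mu + 2.0 * np.sqrt(np.maximum(var, 0.0))))
    X_obs.append(grid[idx])
    y_obs.append(simulated_emg(grid[idx]))

best = np.array(X_obs)[int(np.argmax(y_obs))]
print(f"best parameters found: amplitude={best[0]:.0f} uA, frequency={best[1]:.0f} Hz")
```

The UCB rule trades off exploring uncertain parameter regions against exploiting parameters already known to evoke strong responses, which is what makes the search sample-efficient enough for animal experiments.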