
🛬 Attending


<aside> 😎 Google Scholar

</aside>

<aside> 🤠 ResearchGate

</aside>

<aside> 😎 ORCID

</aside>

<aside> 🧐 Academic CV

</aside>

Socials


<aside> 😝

LinkedIn

</aside>

<aside> 😁 Bluesky

</aside>

<aside> 😆 X/Twitter

</aside>


I aim to improve AI agents by elevating both their upper bound (Reasoning🧠→Smarter🤔) and their lower bound (Alignment⚖️→Safer🫡). Specifically, my vertical research focuses on improving faithful evaluation to better inform post-training (reasoning-driven RL and alignment); see my slides on *Synergizing Post-Training with The Science of Evaluation*. To that end, I often leverage horizontal methods from actionable interpretability (e.g., model diffing techniques, incl. task vectors) and robustness probes (e.g., longitudinal analysis as a contamination probe).
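For readers unfamiliar with the horizontal tools mentioned above, here is a minimal sketch of model diffing via task vectors: the per-parameter difference between a fine-tuned checkpoint and its base model, which can then be scaled and added back to edit behavior. The `gpt2` checkpoints below are placeholders for illustration, not the models used in my work.

```python
from transformers import AutoModelForCausalLM

# Placeholder checkpoints; in practice the second would be a fine-tuned variant of the first.
base = AutoModelForCausalLM.from_pretrained("gpt2")
tuned = AutoModelForCausalLM.from_pretrained("gpt2")

# Task vector: elementwise difference between fine-tuned and base weights.
task_vector = {
    name: tuned.state_dict()[name] - param
    for name, param in base.state_dict().items()
}

# Adding the (scaled) task vector back to the base model applies the behavioral edit.
alpha = 1.0
edited_weights = {
    name: param + alpha * task_vector[name]
    for name, param in base.state_dict().items()
}
base.load_state_dict(edited_weights)
```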

Education


<aside> 🐔 PhD

Advisor:

</aside>

<aside> 🐥 MSc Interdisciplinary Sciences (CS and Physics), ETH Zurich, Switzerland (2025)

Master's Thesis: Prof. Zhijing Jin, Prof. Bernhard Schölkopf

</aside>

<aside> 🐣 BSc Interdisciplinary Sciences (CS, Physics, and Chemistry), ETH Zurich, Switzerland (2025)

Bachelor's Thesis: Prof. Mrinmaya Sachan

</aside>


Portfolio in Progress


Portfolio Items