About
I'm a graduating senior at Purdue University majoring in Computer Science. My research focuses on developing better training algorithms for large neural networks, and leveraging AI to accelerate scientific research.
Over the past 2+ years, I've been working with nanoHUB to develop agents that perform autonomous research using software tools, public data repositories, and HPC clusters. We expect our Autonomous Universal Research Assistant (AURA) to be publicly available soon.
I've also been exploring meta-learned neural network optimizers that extract richer, more generalizable information from gradients during backpropagation.
Feel free to email me if you're interested in my work or in hiring, or if you'd just like to connect!
Latest News
Publications & Preprints
Projects
Multi-agent research assistant that accesses 1.6M+ data points and 340+ published software tools on nanoHUB, plus HPC clusters, to perform autonomous multi-domain research across biology, physics, and nanotechnology. Produces validated, reproducible results and scales with growing scientific contributions.
View on nanoHUB

Active learning loop paired with public physics simulation data to accelerate materials discovery, achieving a 10x speedup in optimal alloy candidate identification. Published in Computational Materials Science (Elsevier).
GitHub

Cost-efficient LLM pretraining using learned optimizers. Achieved a 30% performance gain over Adam on several zero-shot tasks, improving transfer and robustness across downstream tasks.
Meta-training pipeline integrating Sharpness-Aware Minimization into learned optimizer training, improving stability and zero-shot transfer across architectures, tasks, and initializations.
Intelligent health calendar prototype that generates personalized fitness and medication schedules using Kaiser clinical records and Apple HealthKit data. Implemented a privacy-first architecture using Apple Foundation Models with all health data processed and stored fully on-device.
Zeroth-order gradient estimation to accelerate training and improve generalization. CMA-ES-based warmup optimizer that outperforms SGD in early training for noisy, nonconvex loss landscapes.
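The zeroth-order idea above can be illustrated with a minimal SPSA-style two-point estimator (an illustrative sketch, not the project's actual implementation): perturb the parameters along a random sign vector and approximate the gradient from two loss evaluations, with no backpropagation required.

```python
import numpy as np

def spsa_gradient(loss_fn, theta, eps=1e-3, rng=None):
    """Two-point SPSA estimate of the gradient of loss_fn at theta."""
    if rng is None:
        rng = np.random.default_rng(0)
    # Rademacher (+-1) perturbation direction
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    # Central difference of the loss along delta
    g_hat = (loss_fn(theta + eps * delta) - loss_fn(theta - eps * delta)) / (2 * eps)
    # Entries of delta are +-1, so multiplying by delta equals dividing elementwise
    return g_hat * delta

# For a 1-D quadratic the central difference is exact: d/dt t^2 at t = 3 is 6
g = spsa_gradient(lambda t: t[0] ** 2, np.array([3.0]))
```

A single two-point estimate is unbiased but noisy in higher dimensions; averaging several estimates with fresh perturbations reduces the variance.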
Robust benchmarking suite to compare generalization across optimizers for neural networks. Designed evaluation protocols to measure optimizer stability, consistency, and rollout quality across diverse tasks and architectures.
Data structure combining Splay Trees with Hash Tables to accelerate access to nearby and recently queried information in O(log n). Exploits temporal and spatial locality to reduce amortized lookup cost.
GitHub

In-memory hierarchical pseudo-filesystem supporting efficient retrieval using Heaps. Implements core filesystem operations (insert, search, delete) with optimized access patterns for hierarchical data.
GitHub

Graph algorithms suite implementing SCC detection, maximum spanning trees, and constrained pathfinding for clustering and network optimization problems.
GitHub

Stack-driven DFS and queue-based BFS traversal for generating and solving grid-based mazes. Implements both iterative deepening and breadth-first strategies for pathfinding.
GitHub
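As an illustration of the queue-based BFS strategy mentioned in the maze project (a minimal sketch, not the repository's code), here is a shortest-path search over a grid where 0 marks an open cell and 1 a wall:

```python
from collections import deque

def bfs_shortest_path(grid, start, goal):
    """Return the shortest path from start to goal as a list of (row, col) cells."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    parent = {start: None}  # doubles as the visited set
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk parent links back to the start, then reverse
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal unreachable

maze = [
    [0, 1, 0],
    [0, 1, 0],
    [0, 0, 0],
]
path = bfs_shortest_path(maze, (0, 0), (0, 2))  # 7 cells around the wall
```

Because BFS expands cells in order of distance from the start, the first time the goal is dequeued the reconstructed path is guaranteed to be shortest.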