Engineer · Researcher · Writer
GGSIPU · B.Tech CSE | WorldQuant · Applied ML Scholar
Scroll to walk ↓
Guru Gobind Singh Indraprastha University, New Delhi. Focused on Software Engineering and AI. This is where I started writing code that did more than just print: systems that could learn, parse, and reason.
Worked through their Applied ML program focused on quantitative finance. Built end-to-end pipelines: data ingestion, cleaning, warehousing, feature engineering, and model evaluation. Learned that in production ML, 90% of the work is data plumbing.
Ran a comparative study benchmarking backpropagation against nature-inspired optimizers: (μ, λ) and (μ + λ) evolution strategies, Ant Colony Optimization, and Simulated Annealing. Measured convergence rates, computational cost, and stability across different loss surfaces. Each method had a regime where it outperformed the rest.
Contributed to uGroot, a Linux AI assistant. Shipped 10+ features across NLP modules and CLI tooling and became one of the top contributors to the project. First real experience writing code that other people actually depend on.
Presented the backpropagation-vs-metaheuristics comparative study at the Open Data Science Conference. Walked through the experimental setup, convergence analysis, and practical trade-offs between gradient-based and population-based optimization.
Founding Engineer → Senior Engineer · 2021–24 · US, Remote. AI-powered math tutoring platform. Joined as the first engineer and helped build the core product from scratch: the parser, the ML layer, and the infrastructure under it.
Built a custom parsing pipeline that takes natural-language math problems (quadratics, systems of equations) and compiles them into solvable representations. It handles ambiguity in how students phrase questions and outputs step-by-step solutions. This became the backbone of the tutoring engine.
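The production parser isn't public, but the core idea fits in a few lines. A minimal sketch, assuming single-variable quadratics written in `ax^2 + bx + c = 0` form; `parse_quadratic`, `solve_quadratic`, and the normalization rules here are illustrative, not the real pipeline:

```python
import math
import re

def parse_quadratic(text: str):
    """Toy parser: extract (a, b, c) from an 'ax^2 + bx + c = 0' style string."""
    t = text.lower().replace("x squared", "x^2").replace(" ", "")
    t = t.split("=")[0]  # keep the left-hand side of '... = 0'
    a = re.search(r"([+-]?\d*)x\^2", t)     # coefficient of x^2
    b = re.search(r"([+-]?\d*)x(?!\^)", t)  # coefficient of x
    c = re.search(r"([+-]\d+)$", t)         # trailing constant term

    def coef(m):
        if m is None:
            return 0.0
        s = m.group(1)
        return float(s + "1") if s in ("", "+", "-") else float(s)

    return coef(a), coef(b), coef(c)

def solve_quadratic(a, b, c):
    """Return sorted real roots plus human-readable steps, tutoring-style."""
    disc = b * b - 4 * a * c
    steps = [f"discriminant = ({b})^2 - 4({a})({c}) = {disc}"]
    roots = sorted([(-b - math.sqrt(disc)) / (2 * a),
                    (-b + math.sqrt(disc)) / (2 * a)])
    steps.append(f"roots = {roots}")
    return roots, steps
```

The real engine has to cope with far messier phrasings; the point is the shape: normalize, extract a structured representation, then emit steps rather than just an answer.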
Fine-tuned BERT for multi-class classification to route incoming math questions to the correct solving function. Trained on thousands of question variants, handling everything from basic algebra to multi-step word problems.
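The classifier itself needs a trained model, but the routing logic downstream of it is simple to sketch. Assume the model returns a label→probability dict; the labels, threshold, and `fallback` name here are hypothetical:

```python
def route(probs: dict, threshold: float = 0.6) -> str:
    """Pick the solver whose class probability is highest; defer to a
    general fallback solver when the model isn't confident enough."""
    label, p = max(probs.items(), key=lambda kv: kv[1])
    return label if p >= threshold else "fallback"
```

The confidence threshold matters in practice: misrouting a question to the wrong solver produces a confidently wrong answer, which is worse than admitting uncertainty.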
Scaled the math-solving framework to handle 200 RPS at <300ms p99.99 latency. FastAPI serving layer, Redis for caching frequently-hit problem types, Kubernetes for auto-scaling, CI/CD through Jenkins and AWS CodeDeploy. Also cut compute costs ~20% by optimizing container images and cold-start times.
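The caching piece is ordinary cache-aside over canonicalized problem text. A minimal sketch with a dict standing in for Redis (in production this would be a Redis client with TTLs, not an in-process dict):

```python
import hashlib

class SolutionCache:
    """Cache-aside keyed on canonicalized problem text; dict stands in for Redis."""
    def __init__(self):
        self.store = {}
        self.hits = 0
        self.misses = 0

    @staticmethod
    def key(problem: str) -> str:
        canonical = "".join(problem.lower().split())  # case/whitespace-insensitive
        return hashlib.sha256(canonical.encode()).hexdigest()

    def get_or_solve(self, problem, solve):
        k = self.key(problem)
        if k in self.store:
            self.hits += 1
            return self.store[k]
        self.misses += 1
        result = solve(problem)
        self.store[k] = result
        return result
```

Canonicalizing before hashing is what makes "frequently-hit problem types" actually hit: trivially different spellings of the same problem collapse to one key.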
Founding Engineer → Backend Lead · 2024–25 · New Delhi. Credit intelligence platform for emerging markets. I owned the backend architecture, data pipelines, AI agents, search infrastructure, and the deployment stack.
Designed a multi-agent system that sources financial reports and court filings across 35+ countries. Separate agents handle crawling, translation, and relevance scoring. The pipeline replaced what used to take analysts days of manual searching and reduced turnaround by 30%.
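Stripped of the agents themselves, the pipeline is a chain of stages with a relevance gate at the end. A sketch with stand-in callables (`crawl`, `translate`, `score` and the threshold are placeholders for the real agent layers):

```python
def source_documents(urls, crawl, translate, score, threshold=0.5):
    """Chain the agent stages and keep only relevant documents,
    highest-scoring first."""
    kept = []
    for url in urls:
        doc = translate(crawl(url))       # crawl, then normalize language
        relevance = score(doc)            # relevance agent gates the output
        if relevance >= threshold:
            kept.append((url, doc, relevance))
    return sorted(kept, key=lambda item: item[2], reverse=True)
```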
Architected the data scraping and processing layer on a distributed spot-instance fleet. Built retry logic, dead-letter queues, and automatic failover so jobs migrate when nodes get reclaimed. Processes 1M+ jobs daily, cut infrastructure costs by 90% compared to on-demand.
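The retry/dead-letter pattern that makes spot instances viable can be sketched with a plain in-process queue (the real system uses durable queues across machines; names here are illustrative):

```python
from collections import deque

def drain(jobs, worker, max_attempts=3):
    """Process a queue with bounded retries; jobs that keep failing
    (e.g. because a spot node was reclaimed mid-run) land in the DLQ."""
    queue = deque((job, 1) for job in jobs)
    done, dead_letter = [], []
    while queue:
        job, attempt = queue.popleft()
        try:
            done.append(worker(job))
        except Exception:
            if attempt < max_attempts:
                queue.append((job, attempt + 1))  # re-queue for another node
            else:
                dead_letter.append(job)           # park for manual inspection
    return done, dead_letter
```

Bounding attempts is the key design choice: without it, one poison job can spin forever and starve the fleet; with it, failures become visible in the DLQ instead of silent.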
Built a real-time search layer over 5M+ financial records using HiRedis and RedisSearch. Implemented automatic re-indexing so new records become searchable within seconds of ingestion. Serves queries at 30ms p99, fast enough for analysts to search interactively without waiting.
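"Searchable within seconds of ingestion" comes down to updating an inverted index inline rather than in batch re-index passes. An in-memory sketch of that behavior (RedisSearch does this incrementally on the server; this stand-in only shows the shape):

```python
from collections import defaultdict

class LiveIndex:
    """Inverted index where records are searchable the moment they're
    ingested; in-memory stand-in for an incremental search engine."""
    def __init__(self):
        self.postings = defaultdict(set)

    def ingest(self, record_id, text):
        for token in text.lower().split():
            self.postings[token].add(record_id)  # no separate re-index step

    def search(self, query):
        tokens = query.lower().split()
        if not tokens:
            return set()
        hits = set(self.postings[tokens[0]])
        for t in tokens[1:]:
            hits &= self.postings[t]  # AND semantics across query terms
        return hits
```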
Founding Engineer → Team Lead · 2025–Present · New York. AI infrastructure company. We build the environments and tooling that frontier AI labs use to train agents on real-world software tasks.
Reverse-engineered production applications and rebuilt them as reproducible Docker environments with automated grading pipelines and feedback loops. These environments are used by AI agents from OpenAI, Anthropic, and Google to learn tool-use and computer-use through reinforcement learning.
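A grading pipeline for such an environment reduces to running named checks against the container's end state. A stripped-down sketch; the check names, state shape, and scoring rule are illustrative, not the real harness:

```python
def grade(env_state, checks):
    """Run each named check against the environment's final state and
    return per-check results plus an overall score in [0, 1]."""
    results = {name: bool(check(env_state)) for name, check in checks.items()}
    score = sum(results.values()) / len(results) if results else 0.0
    return results, score
```

Per-check results (not just the score) are what feed the agent's feedback loop: partial credit tells it which sub-task failed.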
Full end-to-end ownership of 10+ production applications, from architecture to realistic high-volume seed data to deployment. Built to be indistinguishable from real apps so the agents training inside them encounter genuine complexity, not toy problems.
Built multi-layer agentic systems that generate, validate, and auto-fix issues within Model Context Protocols. One agent layer produces outputs, another evaluates them against constraints, a third patches failures, creating a closed-loop correction pipeline for production-grade AI apps.
If you know, you know.
GGSIPU · B.Tech CSE, GPA 8.18 | WorldQuant · Applied ML Scholar
GGSIPU, B.Tech Computer Science, GPA 8.18. Major in SE & AI.
WorldQuant, Applied ML Scholar. Production pipelines for hedge fund trading data. Research on backpropagation vs. metaheuristics.
GirlScript SoC, Contributed to Linux AI Assistant uGroot. 10+ features merged, top contributor.
ODSC, Presented comparative study of backpropagation vs. metaheuristics.
Founding Engineer → Senior Engineer, 2021–24. Parser/compiler for math, BERT fine-tuning (ROC-AUC 0.84), in-house framework (20% cost cut), scaled to 200 RPS at <300ms p99.99. Automation pipeline improving QA by 30%.
Founding Engineer → Backend Lead, 2024–25. Multi-agent financial reports across 35+ countries (30% faster). Fault-tolerant scraping at 1M+ jobs/day (90% cost cut). HiRedis search on 5M+ records at 30ms p99. CI/CD with GitHub Actions & AWS CDK.
Founding Engineer → Team Lead, 2025–Present, NYC. RL training environments for AI agents (OpenAI, Anthropic, Google). 10+ production app environments. Multi-agent MCP correction systems.