Building at the intersection of code, care, and data. Incoming MS in Health Informatics at Weill Cornell.
I studied computer science at Cornell and care about building software that makes healthcare safer.
Previously: product design at Bitly and spott, software at EpicHire, product management at Capital Group, and database analysis at the Illinois Department of Public Health. Now working in medical billing before interning at NewYork-Presbyterian Hospital this summer and starting my MS in Health Informatics at Weill Cornell this fall.
Structured clinical data (FHIR, SNOMED, OMOP) that makes the tools clinicians already use smarter, not noisier.
Models physicians will actually act on — calibration, subgroup performance, and the UX that earns trust.
Data quality as a justice issue. Who gets counted, who gets miscoded, who's never in the record at all.
Why I'm pursuing health informatics.
It started with my mom's 18-hour flight to India. She dismissed the pain in her leg as typical travel fatigue, the kind of ache you'd expect after a day cramped in economy class. But when she could barely walk a few days later, a doctor took one look at her swollen legs and sent us straight to the emergency room. A Doppler ultrasound revealed deep vein thrombosis: a clot that could have traveled to her lungs at any moment.
She spent two weeks in the hospital. She survived. And I've spent years thinking about how close we came to a very different outcome.
Here's what stays with me: her risk factors weren't hidden. Long-haul flight. Limited mobility. A profile that, fed into the right model, might have flagged her before she ever boarded the plane. What if an algorithm had suggested compression socks, or prompted her doctor to recommend prophylactic medication? The data to prevent her suffering existed. The connection just wasn't made in time.
Luck shouldn't be a healthcare strategy.
I came to this field through computer science. At Cornell, I fell in love with the precision of systems — the way a well-designed data structure could make something complex feel inevitable. I paired that with UX design, learning to ask not just "does this work?" but "does this make sense to the person using it?" I interned as a product designer at Bitly and spott, wrote software at EpicHire, and managed a product roadmap at Capital Group. Each role gave me a different lens on what it means to build something that actually gets used.
But healthcare kept pulling me back.
The summer I spent as a database analyst at the Illinois Department of Public Health was my first real exposure to health data at scale. I expected to find technical problems. What I found instead were structural ones — inconsistent coding, missing fields, reporting delays that made recent data unreliable. A case that looked like an anomaly was often just an artifact of how the data had been collected. I started to understand that data quality isn't a technical footnote. It's the foundation on which every analysis, every public health decision, every resource allocation sits.
When the foundation is shaky, the people making decisions can't always see it. The communities affected by those decisions always feel it.
Around that time, I read Eric Topol's Deep Medicine. His argument — that AI has the potential to restore the human connection in medicine by taking on the cognitive load that buries clinicians in documentation and second-guessing — articulated something I hadn't been able to fully name. Technology, designed well, doesn't replace the physician. It gives them back the time and attention to actually be present with a patient.
That's the kind of work I want to do.
Not AI for its own sake. Not technology as a solution looking for a problem. But carefully designed, rigorously evaluated, equitably deployed systems that help clinicians catch what they might otherwise miss — and help patients like my mom get flagged before they ever end up in an emergency room.
This fall, I'm beginning my MS in Health Informatics at Weill Cornell Graduate School of Medical Sciences. I'm going in with a clear focus: clinical decision support, predictive analytics, and the question of how we build AI systems in healthcare that are trustworthy enough to actually change behavior at the bedside.
The technical problems are genuinely hard. But the harder questions are human: How do you integrate a model into a physician's workflow without adding cognitive burden? How do you explain an output in a way that earns trust rather than eroding it? How do you audit a system for bias when the training data itself reflects decades of inequity in who received care and how?
Before the program starts, I'll be spending the summer at NewYork-Presbyterian Hospital on the People Strategy & Operations team, working on improving the experience for care team members — doctors, nurses, PAs. That's a different layer of the same problem: when the people delivering care are poorly supported by their systems, everyone downstream feels it.
My mom survived because a doctor recognized her symptoms in time. Thousands of others aren't so lucky: preventable conditions claim lives every day while the data that could have flagged them sits untouched in disconnected systems.
I'm not interested in waiting for the emergency room moment. I want to be part of building the systems that catch it in the data first.
If you're working on these problems — or thinking about them — I'd love to connect. My contact info is below.