Applied AI for healthcare decision systems.

Not another chatbot — systems people can trust.

I design workflow-integrated AI for regulated environments: decision support, data quality gates, and governance-aware automation.

My focus is deployable systems with clear decision boundaries, explainability, and real operational adoption.

Decision Support Systems • Workflow Intelligence • Data Quality & Validation • Governed AI

Background


I come from a multidisciplinary background spanning customer success, clinical support, and operational roles in international tech environments.

Working closely with healthcare professionals exposed me to the real friction points of clinical workflows — delays, ambiguity, manual workarounds, and systems that technically “work” but fail people.

My transition to AI started with hands-on problem solving.

I began prototyping decision support patterns and reliability-first data foundations. What started as curiosity became a mission: build MedTech AI that survives contact with reality.

AI Philosophy in MedTech


AI in MedTech is not about adding a chatbot on top of existing systems. It’s an architectural shift that reshapes how data, decisions, and responsibility flow across the stack.

I focus on data-centric engineering, interpretable decision support, and workflow-integrated AI that clinicians can understand and trust — with governance, safety, and adoption treated as core design constraints.

Technology only matters when it works under real constraints.

Applied Stack & Skills


Languages

Python (applied), SQL, TypeScript

Applied AI & Decision Systems

Rule-based + model-assisted decision support, human-in-the-loop patterns, explainability, failure modes, governance-aware workflows

ML Foundations (working knowledge)

Baselines, model evaluation, gradient boosting concepts, lightweight forecasting patterns

Data Quality & Pipelines

Validation patterns, schema enforcement, trust gating, deterministic preprocessing, edge-case handling

APIs & Deployment (pragmatic)

API prototyping (FastAPI-style), container basics, CI/CD fundamentals, deploy patterns (serverless/VM), monitoring basics

Frontend for Decision UIs

Next.js, React, Tailwind — dashboards and guided UX for non-technical users

Systems & Architecture (conceptual)

System design thinking, observability concepts, policy gates, MLOps basics, reliability-first design

Framework-heavy deep learning is intentionally not the focus of this portfolio. The emphasis is on deployable systems, trust, and decision-making under real-world constraints.

Beyond Models: Product & Delivery Experience


Before focusing full-time on applied AI, I worked in international tech environments where systems are built, shipped, and held accountable.

This experience shapes how I design AI today: not as isolated models, but as products that must survive real users, workflows, and constraints.

I bring a product and delivery mindset to AI, with experience in cross-functional collaboration, workflow-first design, and stakeholder-ready communication — using tools like Jira, Power BI, and Salesforce in real delivery contexts.

This is why I care about governance, failure modes, and human-in-the-loop design as much as models and metrics.

Selected Projects


These projects are designed to demonstrate system thinking and real-world MedTech impact. The emphasis is on architecture, workflow leverage, and deployable design — not “more code for the sake of code.”

Clinical Data Quality Gate

Trust before metrics

Live

In medical and dental clinics, data often arrives incomplete, inconsistent, or simply wrong. This project shows a reliability-first data quality layer that cleans and validates incoming records — and blocks downstream usage when trust cannot be established.

Input

Messy patient records (inconsistent fields and formats) • Missing or conflicting demographic / clinical values • Free-text notes (when present)

Transformation

Normalization (formats, field mapping) • Deterministic validation and plausibility checks • Explicit trust classification (OK / ATTENTION / BLOCKED) • Downstream gating with clear, human-readable reasons

Output

Trusted, structured patient records • Blocked records with clear reason messages • Suggested next-step actions (what to fix)

Technical choice

Hard validation rules and explicit trust gates; AI is presented only as an assistive extraction layer (not a decision-maker). In clinical workflows, bad data is more dangerous than missing automation. Trust must be established before analytics or decisions are allowed downstream.
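The trust-gate pattern described above can be sketched in a few lines. This is a minimal, dependency-free illustration of the gating logic only (the project itself uses Zod schemas); the field names, formats, and plausibility thresholds here are illustrative assumptions, not the project’s actual rules.

```typescript
// Minimal sketch of a trust gate: deterministic checks classify each
// record as OK, ATTENTION, or BLOCKED with human-readable reasons.
// Field names and thresholds are illustrative, not the real schema.

type TrustStatus = "OK" | "ATTENTION" | "BLOCKED";

interface PatientRecord {
  id?: string;
  birthDate?: string; // expected ISO format YYYY-MM-DD
  heightCm?: number;
  notes?: string;
}

interface GateResult {
  status: TrustStatus;
  reasons: string[];
}

function gateRecord(rec: PatientRecord): GateResult {
  const blockers: string[] = [];
  const warnings: string[] = [];

  // Hard validation: missing identity or a malformed date blocks downstream use.
  if (!rec.id) blockers.push("Missing patient ID - record cannot be linked.");
  if (!rec.birthDate || !/^\d{4}-\d{2}-\d{2}$/.test(rec.birthDate)) {
    blockers.push("Birth date missing or not in YYYY-MM-DD format.");
  }

  // Plausibility checks: implausible values demand attention, not silence.
  if (rec.heightCm !== undefined && (rec.heightCm < 30 || rec.heightCm > 250)) {
    warnings.push("Height outside plausible range (30-250 cm) - please verify.");
  }
  if (!rec.notes) warnings.push("No clinical notes attached.");

  if (blockers.length > 0) return { status: "BLOCKED", reasons: blockers };
  if (warnings.length > 0) return { status: "ATTENTION", reasons: warnings };
  return { status: "OK", reasons: [] };
}
```

The key design choice is that BLOCKED is a hard stop with reasons attached, so a human can fix the record instead of guessing why analytics went quiet.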

What I deliberately didn’t do

No KPI dashboard, no black-box predictions, and no chatbot-driven decision-making.

Snapshot

Hook

Stop bad data before it contaminates clinical workflows.

Problem

Analytics and automation break when source data cannot be trusted.

Approach

Normalize and validate incoming data, then gate downstream usage with transparent rules.

Tech Stack

Next.js, TypeScript, Zod

Impact

Production-style data quality gate that protects downstream analytics and automation.

KPI Command Center

Cross-channel KPIs + explainable 7-day forecast

Live

Most teams track many KPIs but still struggle to answer: What matters today? And what might break next? This dashboard turns customer, sales, and marketing signals into a clear health status, plain-English drivers, and concrete actions — using synthetic data only.

Input

Synthetic daily business signals (customer, sales, marketing) • Region + channel filters (EU/NA/APAC, Email/Chat/Phone) • Scenario sliders (ticket volume, marketing push)

Transformation

KPI aggregation per area (Customer / Sales / Marketing) • Overall health score (0–100) + 'Main signal' summary • Baseline vs model forecast (simulated gradient boosting behavior)

Output

Guided dashboard views (Overview → Area → Actions) • Human-readable 'why' explanations (drivers) • Recommended actions with impact level

Technical choice

Explainable forecasting + decision signals (not black-box predictions). Operational decisions need clarity and trust — the model supports judgment, it doesn't replace it.
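The health-score idea above can be sketched as a small aggregation: normalize each KPI against a target, average into a 0–100 score, and surface the weakest KPI as the “main signal”. This is a simplified illustration; the KPI names, targets, and equal weighting are assumptions, not the dashboard’s actual configuration.

```typescript
// Minimal sketch of a KPI health score: each KPI is scored 0-100
// against its target, the overall score is the average, and the
// weakest KPI becomes the plain-English "main signal".
// KPI names, targets, and equal weights are illustrative.

interface Kpi {
  area: "Customer" | "Sales" | "Marketing";
  name: string;
  value: number;
  target: number; // hitting the target scores 100
  higherIsBetter: boolean;
}

function kpiScore(k: Kpi): number {
  // For "lower is better" KPIs (e.g. cost metrics), invert the ratio.
  const ratio = k.higherIsBetter ? k.value / k.target : k.target / k.value;
  return Math.max(0, Math.min(100, Math.round(ratio * 100)));
}

function healthReport(kpis: Kpi[]): { score: number; mainSignal: string } {
  const scored = kpis.map((k) => ({ k, s: kpiScore(k) }));
  const score = Math.round(scored.reduce((sum, x) => sum + x.s, 0) / scored.length);
  const worst = scored.reduce((a, b) => (b.s < a.s ? b : a));
  return {
    score,
    mainSignal: `${worst.k.area}: ${worst.k.name} at ${worst.s}/100 is the main drag on health.`,
  };
}
```

Because every number traces back to a named KPI and a target, the “why” explanation falls out of the scoring itself rather than being bolted on afterwards.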

What I deliberately didn’t do

No opaque end-to-end pipelines, no proprietary data, no 'AI magic' claims.

Snapshot

Hook

A business dashboard that stays readable, even for non-technical people.

Problem

Cross-team metrics are often tracked in silos, so early warning signals are missed.

Approach

Combine a small set of meaningful KPIs into one health view, then explain what's driving changes and what to do next.

Tech Stack

Next.js, TypeScript, Recharts

Impact

A portfolio-ready example of product-thinking + applied analytics with clear, explainable signals.

Multi-Agent Backoffice: Why Humans Still Matter

A comic-style demo of AI failure modes (and accountability)

Live

A simulated MedTech backoffice where each AI agent makes a locally reasonable suggestion — and the system fails when those suggestions collide. The UI makes failure visible (as a comic), then forces a human sign-off step to show accountability.

Input

A situation snapshot (backlog, budget, constraints) • Short free-text context (what the team ‘hears’) • Simple policies (compliance strict, discount cap)

Transformation

Agents propose actions (structured outputs + confidence/evidence) • Conflict detector flags collisions (growth vs capacity, discounts vs cash) • Policy gate blocks unsafe actions (e.g., compliance shortcuts) • CEO sign-off step enforces human accountability

Output

Agent proposals (comic dialogues) • Visible collisions + severity • Rules: allowed / needs approval / blocked • Post-mortem: what failed and why

Technical choice

Deterministic agent outputs + readable governance layers (no LLM required). For a portfolio demo, reproducibility beats randomness. The message is about failure modes and governance, not model magic.
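The policy-gate step above can be sketched as a deterministic function over structured proposals: each agent suggestion is marked allowed, in need of human approval, or blocked, always with a readable reason. The policy names, thresholds, and proposal fields here are illustrative assumptions, not the demo’s exact rules.

```typescript
// Minimal sketch of the governance layer: deterministic policy rules
// classify each agent proposal as allowed, needs_approval, or blocked.
// Policy names, thresholds, and fields are illustrative.

type Verdict = "allowed" | "needs_approval" | "blocked";

interface Proposal {
  agent: string;
  action: string;
  discountPct?: number;
  skipsComplianceReview?: boolean;
  confidence: number; // 0..1, self-reported by the agent
}

interface Policies {
  discountCapPct: number;
  complianceStrict: boolean;
}

function policyGate(p: Proposal, policies: Policies): { verdict: Verdict; reason: string } {
  // Hard block: compliance shortcuts are never negotiable.
  if (policies.complianceStrict && p.skipsComplianceReview) {
    return { verdict: "blocked", reason: `${p.agent}: compliance review cannot be skipped.` };
  }
  // Escalation: over-cap discounts need a human sign-off.
  if (p.discountPct !== undefined && p.discountPct > policies.discountCapPct) {
    return {
      verdict: "needs_approval",
      reason: `${p.agent}: ${p.discountPct}% exceeds the ${policies.discountCapPct}% discount cap.`,
    };
  }
  // Escalation: low-confidence proposals also go to a human.
  if (p.confidence < 0.5) {
    return { verdict: "needs_approval", reason: `${p.agent}: confidence below 0.5.` };
  }
  return { verdict: "allowed", reason: `${p.agent}: within policy.` };
}
```

The point of keeping this layer deterministic is that the same scenario always produces the same verdicts, so the failure modes the demo showcases are reproducible rather than dependent on model randomness.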

What I deliberately didn’t do

I intentionally did NOT build a fully autonomous agentic system — the goal is to show where AI stops and humans must decide.

Snapshot

Hook

A comic-style multi-agent office simulator that makes conflicts, rules, and accountability impossible to ignore.

Problem

Multi-agent systems can sound smart individually and still fail as a system: misalignment, overconfidence, policy blindness, brittleness.

Approach

Structured agent outputs → conflict detection → policy gate → CEO sign-off → post-mortem (all visible in UI).

Tech Stack

Next.js, TypeScript, Tailwind, deterministic simulation

Impact

Shows governance and human accountability patterns recruiters actually care about (beyond demos that ‘just work’).

Contact


If you’re building healthcare products and care about real-world deployment—not demos—I’d love to connect.