Enhancing Genomic Data Workflows for Public Health Labs

Overview

This project focused on enhancing a government-funded internal platform used by research institutes across the country to report disease cases and track transmission. Built for use within secure government environments, the system centralizes genomic data submission and analysis so that epidemiologists can detect emerging patterns and report on public health trends more efficiently.

The Problem Space: Why It Mattered

The legacy system was sample-based: users had to manually select samples and rerun analysis every time a new sample was added. This meant redundant manual steps, poor traceability, and inconsistent analysis inputs and outputs. The new workflow shifts to a study-based data model, allowing users to group samples into a study and trigger automated analysis.

My Role

I led the 0-1 user experience design strategy and interface development in close collaboration with epidemiologists, data scientists, and engineers. The goal was to reimagine how users interact with a complex, evolving dataset. I worked hands-on to align design decisions with both real-world workflows and the logic of a newly implemented data architecture.

Outcome

  • Achieved 100% user satisfaction for the redesigned data model and workflows.

  • Increased task efficiency for core actions like sample download and management.

  • Two rounds of task-based usability testing showed significant improvement in learnability and a noticeable drop in user error after iterative refinements.

A User-Centered Design Approach

Understanding the User Base

We designed for a range of laboratory roles with varied technical expertise. Most users had between 6 months and 1 year of experience on the platform and fell into two key groups:

  • Lab Technicians: Frequently handled sample uploading and needed efficient, error-proof workflows for managing large batches of genomic data.

  • Project Managers: Focused on reviewing trends, exporting reports, and making decisions based on aggregated results.

Core User Needs

Through interviews and workflow observations, we identified several recurring pain points and must-haves:

  • Bulk Sample Handling: Users needed to upload, analyze, and export results for large volumes of samples at once. Manual entry or single uploads slowed down workflows significantly.

  • Naming Confusion: Although the system generated unique sample identifiers, users primarily relied on their internal lab IDs, creating mismatches and confusion during search and validation.

  • Report Dependence: Reports were not just a summary — they were essential deliverables for downstream analysis and decision-making. Users frequently relied on the system to generate clear, customizable reports.

These insights guided design decisions toward simplifying batch actions, improving identifier visibility, and ensuring that exported reports mirrored lab-specific language and formatting needs.

Design Shift: From Sample-Centric to Study-Centric Workflow

These findings directly informed our most strategic change: transitioning from a sample-based model to a study-based workflow.

In the new system:

  • Users can group samples into studies, giving context to their work and allowing flexibility to add or remove samples at any time.

  • Each time a study is updated, the system automatically generates an updated report, reducing manual steps and ensuring alignment with how labs actually work.

  • The shift also allowed us to better support bulk operations, customizable exports, and intuitive naming references — all core user needs identified during research.

This architectural change was not just a technical improvement — it was a direct response to the realities of lab workflows and the mental models users already had.
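To make the shift concrete, here is a minimal sketch of the study-based grouping, assuming hypothetical Sample and Study classes; the names, fields, and analysis trigger are illustrative, not the platform's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class Sample:
    system_id: str       # platform-generated unique identifier
    lab_id: str          # internal lab ID that users actually search by
    uploaded_at: datetime


@dataclass
class Study:
    name: str
    samples: list[Sample] = field(default_factory=list)

    def add_samples(self, new_samples: list[Sample]) -> None:
        # Membership changes drive the workflow: adding or removing samples
        # re-runs analysis for the study, replacing manual re-selection.
        self.samples.extend(new_samples)
        self.run_analysis()

    def run_analysis(self) -> None:
        # Placeholder for the genomic analysis pipeline; the real system
        # would queue analysis over the study's current sample set.
        print(f"Analyzing {len(self.samples)} samples in study '{self.name}'")
```

In this framing, the study, not the individual sample, becomes the unit that users create, monitor, and report on.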

Key Features & Strategic Design Decisions

1. Study-Centric Dashboard

Pain Point: Lab work is often project-based, yet the legacy system treated each sample as a disconnected entity.

What we built: A new dashboard designed around “studies” instead of individual samples. Users can create, manage, and track studies as evolving containers of sample data.

2. Bulk Sample Actions

Pain Point: Nearly every technician we spoke with cited repetitive manual entry as a top frustration.

What we built: Support for bulk upload, bulk transfer of samples between studies, and bulk download of reports and raw data, all within a study context. Users can now manage large datasets in just a few clicks, significantly reducing repetitive tasks and error-prone manual steps.
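As a rough sketch of what one bulk operation might look like behind the scenes, the example below validates a lab-provided manifest before registering an entire batch; the manifest columns and function name are assumptions, not the platform's real API.

```python
import csv
from pathlib import Path


def bulk_upload(manifest_path: Path) -> list[dict]:
    # Read a manifest listing one sample per row
    # (assumed columns: lab_id, file_path).
    with manifest_path.open(newline="") as f:
        rows = list(csv.DictReader(f))

    # Validate the whole batch up front so a single bad row
    # does not fail a large upload halfway through.
    missing = [row["lab_id"] for row in rows if not Path(row["file_path"]).exists()]
    if missing:
        raise FileNotFoundError(f"No data file found for lab IDs: {missing}")

    return rows  # each row would then be registered as a sample in one batch
```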

3. Customizable, Auto-Generated Reports & Notifications

Pain Point: Users previously had to manually run analysis after uploading a new sample, which introduced delays and increased the risk of missing data due to human error.

What we built: With the new study-based model, the system automatically regenerates study reports whenever samples are added or removed. Users can opt into real-time notifications, giving them immediate visibility into updated results.
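A minimal sketch of how this trigger-and-notify behavior could be wired, assuming a simple observer pattern; the function names and in-memory subscription registry are illustrative assumptions rather than the platform's actual implementation.

```python
from typing import Callable

# Opt-in notification callbacks per study (illustrative in-memory registry).
subscriptions: dict[str, list[Callable[[str], None]]] = {}


def generate_report(study_name: str, sample_ids: set[str]) -> str:
    # Placeholder for the real analysis and report pipeline.
    return f"{study_name} report covering {len(sample_ids)} samples"


def on_study_changed(study_name: str, sample_ids: set[str]) -> None:
    # Called whenever samples are added to or removed from a study:
    # the report is regenerated automatically, with no manual re-run.
    report = generate_report(study_name, sample_ids)
    # Only users who opted in receive a real-time notification.
    for notify in subscriptions.get(study_name, []):
        notify(f"Updated report ready: {report}")
```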

4. Sample Home Page

Pain Point: Sample-level information was scattered or missing entirely, forcing users to cross-reference external spreadsheets just to verify upload status or track data.

What we built: A dedicated detail page for each sample that consolidates key information in one place, including real-time upload status, access to genomic data, and controls to manage its association with one or more studies.
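To illustrate the kind of information the page consolidates, here is a small sketch of a per-sample record; the field names and statuses are assumptions for illustration, not the actual data model.

```python
from dataclasses import dataclass
from enum import Enum


class UploadStatus(Enum):
    PENDING = "pending"
    PROCESSING = "processing"
    COMPLETE = "complete"
    FAILED = "failed"


@dataclass
class SampleDetail:
    system_id: str                # platform-generated identifier
    lab_id: str                   # internal lab ID, surfaced prominently for search
    upload_status: UploadStatus   # shown in real time on the page
    data_file_url: str            # direct access to the genomic data
    study_names: list[str]        # a sample can belong to one or more studies
```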

Iteration & Testing

Testing Approach

We conducted two rounds of task-based usability testing with a mix of lab technicians and project managers (n = 6), focusing on core workflows:

  • Uploading sample sets

  • Adding sample(s) to an existing study

  • Downloading processed sample file(s)

Participants completed tasks using high-fidelity prototypes, and we collected both quantitative metrics (task completion, error rate, and time-on-task) and qualitative insights (usability friction, confidence, and expectations).

What We Heard from Users

“When asked to download sample files, I thought I should go to the sample home page, not the study page.”
“I didn’t realize I could download files from the study page after selecting samples.”

  • Users were unclear on where to find file downloads due to the new study-based model.

  • Terminology used in the UI didn’t match the language most lab staff and scientists were familiar with.

  • File download actions were hidden until users made a selection, leading to missed functionality.

What We Changed

  • Download and Add to Study access in both places: Users can now download files and add a sample to a study from both the Study and Sample Detail pages, matching how different users conceptualize data ownership (study-first vs. sample-first).

  • Consistent interaction design: We standardized the download functionality across both views to ensure a familiar, predictable experience no matter where the user starts.

  • Persistent status indicator: The interface now shows "0 Samples Selected" even before a selection is made, improving discoverability.

  • Terminology overhaul: Replaced ambiguous internal terms with more widely understood scientific language.

  • Contextual guidance: Added inline tips and microcopy to explain next steps and reduce friction.

The Result

  • Task success for downloading processed sample files rose from 33% to 100% between rounds

  • 20% reduction in user errors, including fewer misclicks and retries

  • Faster completion across all three workflows with fewer pauses and questions during task walkthroughs

  • Higher confidence and satisfaction reported in feedback, with users describing the experience as “more intuitive” and “easy to follow”

Round 1️⃣ Completion Rate by Task

  • Upload Samples: ✅ 100%

  • Add Sample to a Study: ⚠️ 67%

  • Download Sample Files: ❌ 33%

Round 2️⃣ Completion Rate by Task

  • Upload Samples: ✅ 100%

  • Add Sample to a Study: ✅ 100%

  • Download Sample Files: ✅ 100%

Reflection & Design Learning

Design Trade-Offs

  • I initially hid the status indicator and its "0 Samples Selected" state until users selected samples, assuming that familiar checkbox patterns would guide them, as seen in many modern tools.

  • What I learned: Visual simplicity doesn’t always mean better usability. In complex workflows, clarity must come first — especially when users aren’t confident about what to do next.

What Surprised Me

  • Even “digitally fluent” users didn’t always follow expected interaction patterns.

  • Key insight: Familiarity with tech doesn’t guarantee familiarity with your interface, especially in specialized tools. Assumptions based on user profile aren’t always accurate.

How This Changed My Approach

  • Small UX details can make or break the experience in data-heavy systems.

  • Takeaway: A tooltip, a label, or a visible state change can drastically improve comprehension and task flow. I now prioritize early testing of interface clarity, not just structure or flow, especially for systems where precision and efficiency are critical.