2026

AI Marine Companion

Project
AquaGo
Role
Product Designer (0 → 1) · End-to-end

Turn every dive into a lasting discovery

What is AquaGo?

AquaGo is an AI-powered app that helps divers quickly identify marine species and turn each discovery into a lasting collection.

I focused on making identification fast and trustworthy in a low-quality image environment, prioritizing speed in the first interaction while building trust over time.

This shifts identification from a one-time action into a repeatable behavior loop.

💪 Core challenge

How do you design AI that feels trustworthy when the environment makes perfect accuracy impossible?

🤿 Target users

Recreational divers as the primary audience, with ocean enthusiasts as an extended user group.

⚠️ Problem

After a dive, identifying marine species is fragmented, time-consuming, and often unreliable. Divers lose confidence in results, and excitement fades before knowledge is retained — breaking the loop between experience and learning.

User Research & Insights

Research Approach
Research was conducted in two phases.

First, qualitative interviews with divers across experience levels to understand post-dive behaviors, identification workflows, and motivations around recording and sharing.

Second, field research in the Philippines — observing divers in real diving contexts to uncover how behaviors, needs, and pain points differ across countries and diving cultures.

This cross-market perspective directly shaped decisions around how the app handles uncertainty, language, and the balance between speed and accuracy.

Insights across both phases were synthesized into key patterns, user segments, and opportunity areas that informed product strategy.
Pain points
  • Fragmented workflow – divers rely on multiple sources: instructors, Google, and communities.
  • Low confidence in results – results are inconsistent and hard to trust.
  • Time-consuming process – identification requires multiple manual steps.
  • Loss of post-dive excitement – excitement fades quickly after the dive.
Key Insights

Divers experience a high-intent moment immediately after a dive, but the journey breaks due to fragmented tools and low confidence — turning curiosity into friction instead of continuation.
These insights pointed to a product that needed to solve three things in sequence: reduce friction, build habit, and enable identity.

Product Strategy

An identification app can evolve in three directions — as a utility, a knowledge platform, or a social product — each with different tradeoffs.

I chose to start as a utility and deliberately sequence toward social. This decision prioritizes trust before community — without reliable identification, users won't build identity or share discoveries.

The tradeoff is slower initial growth, but it establishes a strong foundation for long-term retention and expansion.
The strategy maps directly to a growth loop:
  • Activate — Reduce friction between "I just dived" and "I know what I saw."
  • Retain — Turn each identification into a collection users want to build on.
  • Expand — Enable identity and sharing once trust is established.
The MVP deliberately focuses on Activation and early Retention — validating the core loop before investing in social infrastructure.

Measuring Success

I defined success across three stages of the product loop:
  • Activation — % of users completing identification after a dive
  • Trust — % of users accepting vs. correcting AI results
  • Retention — repeat usage across dives and collection growth
These metrics are designed to validate whether the core loop not only functions, but creates sustained trust and repeat behavior over time.
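As a rough sketch (hypothetical event names, not a real analytics schema), the Trust metric could be computed from user verification events:

```python
from dataclasses import dataclass

@dataclass
class VerificationEvent:
    """One user action on an AI identification result (illustrative schema)."""
    user_id: str
    accepted: bool  # True = accepted the primary match, False = corrected it

def trust_rate(events: list[VerificationEvent]) -> float:
    """Share of results accepted as-is; a proxy for trust in the AI."""
    if not events:
        return 0.0
    accepted = sum(1 for e in events if e.accepted)
    return accepted / len(events)

events = [
    VerificationEvent("u1", True),
    VerificationEvent("u1", True),
    VerificationEvent("u2", False),
    VerificationEvent("u3", True),
]
print(trust_rate(events))  # 0.75
```

A rising trust rate over successive dives would indicate the loop is building confidence, not just completing identifications.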

Live Prototype (AI-assisted)

To validate how the core AI identification loop behaves in real scenarios, I built a functional prototype using Google AI Studio (Gemini API).

This allowed me to test key product decisions — including multi-species recognition, confidence-based result handling, and user verification flows — under real-world conditions such as low-light underwater images.

These insights directly informed the UX, shaping the "Identification Ritual" and the confidence-based interaction model.

Solution

I designed a core loop to convert a one-time interaction into repeatable behavior:
📸 Capture
↓
🔎 Identify
↓
✅ Verify & build trust
↓
📖 Learn
↓
🐟 Collect
The flow prioritizes speed in the first interaction, then progressively builds trust and drives repeat usage over time.

MVP Scope

The MVP deliberately limits scope to identification, collection, and learning — focusing on solving the core post-dive workflow without expanding into social or ecosystem features.
1. AI-powered species identification
Supports both single and batch recognition for real post-dive workflows. Divers often return with multiple unknown species — batch identification (up to 10 photos) enables fast, efficient processing.

Each image may contain multiple species, with up to three detected per photo. For each species, multiple candidate matches are provided and ranked by confidence, enabling comparison and human verification.
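The constraints above (batches of up to 10 photos, up to three species per photo, ranked candidate matches) imply a result structure roughly like this; the names are illustrative, not the actual schema:

```python
from dataclasses import dataclass

MAX_SPECIES_PER_PHOTO = 3   # cap from the design decision above
MAX_PHOTOS_PER_BATCH = 10

@dataclass
class Candidate:
    species_name: str
    confidence: float  # 0.0 to 1.0

@dataclass
class Detection:
    """One species found in a photo, with ranked alternative matches."""
    candidates: list[Candidate]  # sorted by confidence, best first

@dataclass
class PhotoResult:
    photo_id: str
    detections: list[Detection]  # at most MAX_SPECIES_PER_PHOTO

def top_match(detection: Detection) -> Candidate:
    """Primary match shown by default; alternatives stay one tap away."""
    return max(detection.candidates, key=lambda c: c.confidence)
```

Keeping alternatives attached to each detection is what makes human verification cheap: the user compares candidates instead of restarting a search.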

Built for real dives — many photos, many species.
From suggestion to discovery.
2. Digital collection system
Turns one-time identification into long-term accumulation, allowing divers to revisit, organize, and build a personal record of discoveries over time.
3. Learning experience
Transforms identified species into meaningful knowledge, helping divers understand, validate, and learn from their discoveries over time.
What we chose not to build in v1 — and why:
Social features were the most requested, but adding them before the identification loop was validated would have meant building community on top of an untested trust model.

Video-based identification was also intentionally excluded. While video is the dominant capture format in diving, it introduces significantly higher noise and complexity — multiple species, motion, and overlapping detections — which would reduce clarity in the verification flow.

The risk wasn't the features themselves — it was introducing them before the core loop was fast, interpretable, and trustworthy.
Future Opportunities
  • Community-driven Q&A for species identification
  • Enhanced aquarium system for personalization and progression
  • Marketplace for diving-related content and tools
These features are intentionally deferred to focus on validating the core identification and collection loop before expanding into social and ecosystem layers.

User Flow

The experience is designed to be fast, intuitive, and centered around a high-intent moment immediately after a dive.

Design Decisions

1. AI-first identification with user control
AI provides a primary match by default to reduce decision time and cognitive load. This introduces a tradeoff — the system may not always be correct in low-visibility underwater conditions.

To address this, alternative candidates and confidence levels are surfaced, allowing users to verify or adjust results. This balances speed with user control, ensuring the system remains efficient without compromising trust.
2. Structured multi-species recognition
We chose to structure multi-species recognition to handle up to three species per image.

This balances real-world complexity with cognitive load — capturing multiple species without overwhelming users during review.
3. Designed for real post-dive workflows
Divers typically review many photos in batches after a dive. Batch processing and streamlined review flows support efficient, real-world usage.
4. Handle uncertainty in AI identification
AI identification is inherently uncertain in underwater conditions. Instead of hiding errors, the system surfaces confidence levels and provides alternative paths:
  • Low (<40%): surfacing a "Slipped away" state to guide users toward community help or a retake.
  • Mid (40–80%): providing up to three candidates for users to manually verify and "help" the AI learn.
  • High (>80%): directly identifying the species while allowing quick manual adjustments for full accuracy.
Optimization: Aggregating features from multiple photos to boost confidence in challenging visibility.
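The three-tier model above can be sketched as a simple confidence-to-state mapping (state names are illustrative assumptions, not the shipped implementation):

```python
from enum import Enum

class ResultState(Enum):
    SLIPPED_AWAY = "slipped_away"   # low confidence: prompt retake or community help
    CANDIDATES = "candidates"       # mid confidence: show up to 3 options to verify
    IDENTIFIED = "identified"       # high confidence: primary match with quick edit

# Thresholds from the three-tier model above.
LOW_THRESHOLD = 0.40
HIGH_THRESHOLD = 0.80

def result_state(confidence: float) -> ResultState:
    """Map a model confidence score (0 to 1) to the UI state shown to the user."""
    if confidence < LOW_THRESHOLD:
        return ResultState.SLIPPED_AWAY
    if confidence <= HIGH_THRESHOLD:
        return ResultState.CANDIDATES
    return ResultState.IDENTIFIED
```

Centralizing the thresholds in one mapping keeps the UI behavior interpretable and makes the cutoffs easy to tune as the model improves.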
5. Designed with engineering feasibility in mind
Key AI decisions — such as capping species detection at three per image and using confidence thresholds to determine UI state — were validated against technical feasibility before being finalized, ensuring design constraints aligned with what the model could realistically deliver.

These constraints weren't just limitations; they shaped how information should be presented to users.

To validate how these decisions would translate into user experience, the three-tier confidence system was prototyped and tested through AI-assisted rapid prototyping — simulating how different uncertainty states would feel to users before implementation.

Key Features Showcase

Identification Experience
AI-powered identification designed for fast and effortless decision-making.
Users can take photos in the moment or upload single and batch dive images, enabling a flexible entry into the flow.

A primary result is provided by default, with alternative candidates available for adjustment — balancing speed with trust and control.
Collection & Spatial Memory
A structured system for organizing and revisiting discoveries over time.
Users can collect identified species, explore them through a visual aquarium, and navigate their discoveries by location.

By connecting species, places, and individual sightings, the experience evolves beyond a static list into a spatial memory of their diving journey.

Visual & Interface Design

Visual design
The visual system is inspired by the underwater environment, using depth, light, and subtle gradients to create a calm and immersive experience.
Low-poly Representation
Marine species are represented using a low-poly visual style to balance realism and clarity.

This approach reduces visual noise from real underwater photography while maintaining recognizability, making species easier to distinguish during identification.
Glass-like Interface
A glass-like UI system is used to maintain clarity while preserving a sense of depth. It allows content to remain readable without losing the atmospheric quality of the ocean.
Light & Dark Modes
Both dark and light modes are designed to support different usage contexts β€” from bright outdoor conditions to post-dive review in low-light environments.
Reflection
From isolated features to a product system
The hardest decision wasn't what to build — it was what not to build. Cutting scope down to a core loop taught me that a product's first job isn't to be complete. It's to prove the loop works.

Designing trust into AI, not around it

The most important design decision in this project wasn't the visual system — it was how to make uncertainty visible without making it feel like failure. Honesty builds more trust than false precision.

What cross-market field research changed

Field research in the Philippines exposed things interviews couldn't — how trust, confidence, and connectivity constraints shape user behavior differently across contexts.