An AI-powered mental health companion that nurtures wellness through journaling, therapy, and personalized insights





The Challenge
Design an AI mental wellness companion that feels emotionally attuned—not robotic or prescriptive.
The Outcome
Shipped 3 cross-platform features in 12 weeks, doubling session time and earning 85% positive emotional resonance feedback.
Team
1 Lead Designer (Me)
1 CTO
1 Founding Designer
2 Supporting Designers
Duration
3.5 months
Platform
Mobile App (iOS, Android)
Tools
Figma
Miro
After Effects
Microsoft 365
This case study reflects my personal perspective as the lead designer. It does not represent the official views of Lepal.ai.
Due to NDA constraints, certain implementation details have been omitted or generalized.
Background: why we built Lepal.ai
Most wellness apps speak. Few listen.
Most wellness apps offer productivity advice, not emotional continuity. From our initial interviews and diary studies, we uncovered systemic failures…
Our target users—Gen Z—needed something deeper.

They wanted…
Empathy, not efficiency.
Continuity, not one-off check-ins.
Emotional safety, not productivity hacks.
core challenge
How might we design an emotionally intelligent AI that doesn’t just respond—but remembers, adapts, and stays?
Solution overview
Introducing Lepal.ai, a wellness app that speaks with you, not at you.

Solution #1
🧠 My Journal
What it is:
A mood-aware journaling flow that adjusts prompts based on your emotional history.
Why it matters:
Because some days, you're ready to reflect. Other days, you just need a soft landing.
Solution #2
🔮 Crystal Ball
What it is:
A once-a-day micro-interaction that poses a reflective question—like a fortune cookie, but smarter.
Why it matters:
Because sometimes, the best nudge is the one you didn't know you needed.


Solution #3
🌍 Therapy Planet
What it is:
An AI conversation space where you can pick a topic—like burnout or love—and just talk.
Why it matters:
Because naming the problem is hard. We let you choose a door and take it from there.
Discover
Designing for emotional wellbeing requires more than usability; it demands emotional clarity.
To uncover the invisible friction points, I led qualitative research with five Gen Z graduate students, combining interviews and diary studies to surface how tone, memory, and perceived care shaped trust over time.
Method #1
5 semi-structured interviews with Gen Z graduate students
Method #2
3-day mood & journaling diary study
Method #3
Emotional drop-off mapping across Days 1–3

Me explaining how to journal emotions
Reasoning
Why This Approach?
We used qualitative methods to uncover how users feel, not just what they do.
By combining live interviews with in-the-moment diaries, we revealed the hidden friction and unmet emotional needs that traditional usability testing often misses.
What Guided Our Design Choices
Our early insights made it clear that users didn’t want to be told what to do. They wanted something that responded to how they felt—on their terms. These principles grounded every design decision that followed.
Design Principle #1
Let users lead — provide structure, not control
Design Principle #2
Speak with care — match tone to user energy
Design Principle #3
Reward consistency — favor small rituals over deep work
From these sessions, I identified five emotional failures that caused disengagement:
Challenge #1
Scripted, generic feedback
AI responses felt templated—like productivity tips, not real empathy.
Challenge #2
No emotional memory
The app didn’t acknowledge past entries or mood history, leading users to feel invisible.
Challenge #3
One-size-fits-all tone
The tone was cheerfully upbeat, even when the user wasn’t.
Challenge #4
Emotional effort > emotional return
Users had to open up a lot without getting meaningful support back.
Challenge #5
Drop-off after Day 3
Without personalization or evolution, users disengaged within 72 hours.

Then I reframed the problem space…
Instead of building a solution-oriented AI coach, I pivoted to a daily emotional companion. A product designed not to fix emotions, but to stay with them.
From
A wellness app that delivers advice
to
A daily companion that offers continuity and gentle presence
This reframing guided every design principle, interaction model, and system decision. It helped us translate abstract user needs into specific, repeatable design behaviors.
Ideation
Before locking in features, I facilitated a series of workshops to explore and visualize over a dozen concept directions.
I sketched storyboards, mocked up tone experiments in Figma, and built lightweight flows to test what felt trustworthy, intuitive, and emotionally sustainable. Rather than chasing novelty, I focused on what users might want to come back to.

Screenshot of the workshop I facilitated in FigJam
How We Chose the MVP Features
To select our MVP features, I established three criteria based on user interviews.
Design criteria #1
Value vs. complexity
Prioritized features with high emotional resonance and manageable development scope
Design criteria #2
Frequency of use
Focused on lightweight interactions users could return to regularly
Design criteria #3
Potential for emotional connection
Selected ideas that could foster emotional presence and trust, not just usability
Based on these criteria, I categorized the features ideated during the workshop.

Note: We prioritized features that could create habitual emotional touchpoints without introducing risk or complexity.

I moved forward with three experience pillars:
pillar #1
Crystal Ball
one reflective question per day
pillar #2
My Journal
mood-adaptive journaling with summary
pillar #3
Therapy Planet
topic-driven AI chat sessions
Emotion-Aware AI Implementation
After deciding on the features, I found that a one-size-fits-all AI tone still felt impersonal or emotionally off, so I designed a tone framework that adapts responses to the user's emotional state.
Here's how I approached it:

To begin, I identified what types of emotional signals would be most effective in helping the AI "read the room." I focused on two key input types that we could later simulate or integrate.
1
Keyword extraction from user-written journal entries
“Lately, I’ve been trying to push forward, but I feel stuck somewhere between motivation and burnout...”
2
Lightweight weekly emotional check-ins
How have you been feeling this week?
😰 Feeling anxious (scale: Not at all → Very much)
These signals fed into an Emotion Classifier, which I designed to group inputs into 4 core emotional states (see the sketch after the list below).
Emotional states #1
Anxious/Stressed
Emotional states #2
Sad/Low Energy
Emotional states #3
Neutral/Curious
Emotional states #4
Positive/Confident
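To illustrate how these two signals could combine, here is a minimal Python sketch of that classification step. The keyword lists, weights, and the classify_emotion function are hypothetical stand-ins for the richer logic in the product; a sketch of the embedding-based variant appears later in this section.

```python
# Illustrative sketch only: combine journal keywords with the weekly check-in
# slider to pick one of the four core emotional states. Keyword lists, weights,
# and thresholds are hypothetical.

ANXIOUS_KEYWORDS = {"stuck", "burnout", "overwhelmed", "anxious", "pressure"}
LOW_KEYWORDS = {"tired", "drained", "sad", "lonely", "numb"}
POSITIVE_KEYWORDS = {"proud", "excited", "grateful", "energized", "hopeful"}

def classify_emotion(journal_entry: str, checkin_anxiety: float) -> str:
    """Return one of the four states from a journal entry plus the weekly
    check-in score (0 = not at all anxious, 1 = very much)."""
    words = set(journal_entry.lower().replace(",", " ").replace(".", " ").split())
    anxious = len(words & ANXIOUS_KEYWORDS) + (2 if checkin_anxiety > 0.6 else 0)
    low = len(words & LOW_KEYWORDS)
    positive = len(words & POSITIVE_KEYWORDS)

    if anxious > max(low, positive):
        return "Anxious/Stressed"
    if low > positive:
        return "Sad/Low Energy"
    if positive > 0:
        return "Positive/Confident"
    return "Neutral/Curious"

# Example: the journal entry shown earlier plus a high anxiety check-in.
print(classify_emotion(
    "Lately, I've been trying to push forward, but I feel stuck "
    "somewhere between motivation and burnout...",
    checkin_anxiety=0.8,
))  # -> Anxious/Stressed
```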
Next, I mapped each emotional state to a corresponding tone strategy; a simplified sketch of this mapping follows the example below.
Each strategy defines:
Tone strategy #1
Tone of voice
Should the assistant be validating? Playful? Calm?
Tone strategy #2
Response structure
Should it open with empathy, ask a reflective question, or give gentle guidance?
Tone strategy #3
Linguistic markers
Sentence length, emoji use, punctuation style, and word choice.
For example: “Lately, I’ve been trying to push forward, but I feel stuck somewhere between motivation and burnout...”
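To make the mapping concrete, here is a simplified Python sketch of the state-to-tone table. The ToneStrategy fields and their wording paraphrase the tone style guide and are illustrative only, not the production schema.

```python
# Illustrative sketch of the tone framework: each emotional state maps to a
# tone of voice, a response structure, and a set of linguistic markers.
from dataclasses import dataclass

@dataclass
class ToneStrategy:
    voice: str      # overall tone of voice
    structure: str  # how a response should open and unfold
    markers: str    # sentence length, emoji use, punctuation style

TONE_MAP = {
    "Anxious/Stressed": ToneStrategy(
        voice="validating and calm",
        structure="open with empathy, then ask one gentle reflective question",
        markers="short sentences, soft punctuation, sparing emoji",
    ),
    "Sad/Low Energy": ToneStrategy(
        voice="warm and steady",
        structure="acknowledge the feeling before offering any guidance",
        markers="slower pacing, no exclamation marks",
    ),
    "Neutral/Curious": ToneStrategy(
        voice="curious and open",
        structure="ask an exploratory question",
        markers="medium-length sentences, occasional emoji",
    ),
    "Positive/Confident": ToneStrategy(
        voice="playful and encouraging",
        structure="celebrate, then invite the user to build on the momentum",
        markers="lighter punctuation, more expressive emoji",
    ),
}

# The entry above reads as Anxious/Stressed, so the assistant leads with
# validation rather than upbeat advice.
strategy = TONE_MAP["Anxious/Stressed"]
print(strategy.voice, "|", strategy.structure)
```

In this scheme, the anxious entry above receives a validating, empathy-first opener instead of the cheerfully upbeat advice users complained about in research.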
As a final step, I collaborated closely with the PM and ML engineers to implement the system, co-developing an AI tone framework aligned with each user emotional state.
With other product designers: I created the tone style guide, ensuring consistency across responses.
ML Engineers: helped map user input to emotional categories using LLM embeddings and classification logic (one possible shape of this logic is sketched below).
PM: prioritized use cases that aligned with user retention goals (e.g., supporting users who frequently report negative moods).
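For context on the classification side, here is one plausible, simplified shape of that embedding logic, written with the open-source sentence-transformers library. The model choice, prototype sentences, and nearest-prototype comparison are my own illustrative assumptions, not the team's actual pipeline.

```python
# Illustrative sketch: classify a journal entry by comparing its embedding
# against short prototype descriptions of each emotional state.
# Assumes the sentence-transformers library; model and prototypes are placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer

STATE_PROTOTYPES = {
    "Anxious/Stressed": "I feel overwhelmed, tense, and worried about everything.",
    "Sad/Low Energy": "I feel down, drained, and unmotivated today.",
    "Neutral/Curious": "I'm doing okay and just thinking things over.",
    "Positive/Confident": "I feel good, energized, and proud of my progress.",
}

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model choice

def classify_entry(entry: str) -> str:
    """Return the emotional state whose prototype is closest to the entry."""
    texts = [entry] + list(STATE_PROTOTYPES.values())
    vectors = model.encode(texts, normalize_embeddings=True)
    entry_vec, proto_vecs = vectors[0], vectors[1:]
    scores = proto_vecs @ entry_vec  # cosine similarity (already normalized)
    return list(STATE_PROTOTYPES.keys())[int(np.argmax(scores))]

print(classify_entry(
    "Lately, I've been trying to push forward, but I feel stuck "
    "somewhere between motivation and burnout..."
))
```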
How did this impact the UX?
By aligning tone with emotional context:
1
The assistant felt more human and trustworthy.
2
It reduced the emotional labor on users by not forcing them to explain or justify their feelings.
3
It also created more varied, natural interactions over time, reducing “AI fatigue.”
Design Execution
Once we agreed on the function, I began designing the form, starting with simple wireframes.
I held frequent critique sessions with the developers and PM to review and iterate the designs until we reached consensus on layout and functionality.

A few screens from the wireframes I created
Final UI
The final UI reflects all three emotional principles—calm presence, flexibility, and trust-building cues—while maintaining clarity and responsiveness across devices.

Target success metrics
Most wellness apps track. We listened.
We wanted success to mean more than just engagement. I worked with the PM and engineers to define what meaningful success could look like. Because we were building something unfamiliar, we grounded our expectations in real behavior change.
3 core features shipped 🚀 + adaptive tone engine
+ __% increased user engagement in closed beta vs. benchmark
__% said it "felt emotionally intelligent"
Reflection
LePal taught me that good product design is about designing relationships. Especially when the product is meant to support emotional well-being.
I learned how to: