An AI-powered mental health companion that nurtures wellness through journaling, therapy, and personalized insights





About lepal.ai
An AI mental wellness companion (iOS, Android) designed to feel emotionally attuned, moving beyond robotic or prescriptive interactions to provide a truly supportive user experience.
Overview
The Challenge: Users needed a mental wellness tool that felt empathetic rather than mechanical, while the business required higher long-term retention in a competitive market.
The Solution: Developed an Emotionally Intelligent AI Tone Framework that adapts responses based on the user's real-time emotional state, ensuring a supportive and human-like presence.
Outcomes
85% Increase
in positive resonance
15% Lift
in Day-7 retention
60% Boost
in user satisfaction & engagement
Team
1 Lead designer (me)
2 Engineers
1 Conversation designer
1 CTO
1 Founding designer
My role
I owned 5+ core features from 0 to 1
Designed AI Tone Style Guide
Led A/B testing & prompt optimization
Platform
Mobile App
(iOS, Android)
This case study reflects my personal perspective as the lead designer. It does not represent the official views of Lepal.ai.
Due to NDA constraints, certain implementation details have been omitted or generalized.
Background: why we built lepal.ai
Most wellness apps speak. Few listen.
Most wellness apps offer productivity advice, not emotional continuity. From our initial interviews and diary studies, we uncovered systemic failures.
Our target users, Gen Z, needed something deeper.

They wanted…
Empathy, not efficiency.
Continuity, not one-off check-ins.
Emotional safety, not productivity hacks.
core challenge
How might we design an emotionally intelligent AI that doesn't just respond, but remembers, adapts, and stays?
Solution overview
Introducing Lepal.ai: a wellness app that speaks with you, not at you.

Solution #1
My Journal
What it is:
A mood-aware journaling flow that adjusts prompts based on your emotional history.
Why it matters:
Because some days, you're ready to reflect. Other days, you just need a soft landing.
Solution #2
Crystal Ball
What it is:
A once-a-day micro-interaction that poses a reflective question, like a fortune cookie, but smarter.
Why it matters:
Because sometimes, the best nudge is the one you didn't know you needed.


Solution #3
Therapy Planet
What it is:
An AI conversation space where you can pick a topic, like burnout or love, and just talk.
Why it matters:
Because naming the problem is hard. We let you choose a door and take it from there.
Discover
Designing for emotional wellbeing requires more than usability; it demands emotional clarity.
To uncover the invisible friction points, I led qualitative research with five Gen Z graduate students, combining interviews and diary studies to surface how tone, memory, and perceived care shaped trust over time.
Method #1
5 semi-structured interviews with Gen Z graduate students
Method #2
3-day mood & journaling diary study
Method #3
Emotional drop-off mapping across Days 1–3

Me explaining how to journal emotions
Reasoning
Why This Approach?
We used qualitative methods to uncover how users feel, not just what they do.
By combining live interviews with in-the-moment diaries, we revealed the hidden friction and unmet emotional needs that traditional usability testing often misses.
What Guided Our Design Choices
Our early insights made it clear that users didn't want to be told what to do. They wanted something that responded to how they felt, on their terms. These principles grounded every design decision that followed.
Design Principle #1
Let users lead: provide structure, not control
Design Principle #2
Speak with care: match tone to user energy
Design Principle #3
Reward consistency: favor small rituals over deep work
From these sessions, I identified five emotional failures that caused disengagement:
Challenge #1
Scripted, generic feedback
AI responses felt templated, like productivity tips, not real empathy.
Challenge #2
No emotional memory
The app didn't acknowledge past entries or mood history, leading users to feel invisible.
Challenge #3
One-size-fits-all tone
The tone was cheerfully upbeat, even when the user wasn't.
Challenge #4
Emotional effort > emotional return
Users had to open up a lot without getting meaningful support back.
Challenge #5
Drop-off after Day 3
Without personalization or evolution, users disengaged within 72 hours.
Then I reframed the problem space…
Instead of building a solution-oriented AI coach, I pivoted to a daily emotional companion. A product designed not to fix emotions, but to stay with them.
From
A wellness app that delivers advice
to
A daily companion that offers continuity and gentle presence
This reframing guided every design principle, interaction model, and system decision. It helped us translate abstract user needs into specific, repeatable design behaviors.
Ideation
Before locking in features, I facilitated a series of workshops to explore and visualize over a dozen concept directions.
I sketched storyboards, mocked up tone experiments in Figma, and built lightweight flows to test what felt trustworthy, intuitive, and emotionally sustainable. Rather than chasing novelty, I focused on what users might want to come back to.

Screenshot of the workshop I facilitated in FigJam
How We Selected MVP Features
To select our MVP features, I established three criteria based on user interviews.
Design criteria #1
Value vs. complexity
Prioritized features with high emotional resonance and manageable development scope
Design criteria #2
Frequency of use
Focused on lightweight interactions users could return to regularly
Design criteria #3
Potential for emotional connection
Selected ideas that could foster emotional presence and trust, not just usability
Based on these criteria, I categorized the features ideated during the workshop.

Note: We prioritized features that could create habitual emotional touchpoints without introducing risk or complexity.
I moved forward with three experience pillars:
pillar #1
Crystal Ball
one reflective question per day
pillar #2
My Journal
mood-adaptive journaling with summary
pillar #3
Therapy Planet
topic-driven AI chat sessions
Emotion-Aware AI Implementation
After deciding on the features, I found that a one-size-fits-all AI tone still felt impersonal or emotionally off, so I designed a tone framework that adapts responses to the user's emotional state.
Here's how I approached it:

To begin, I identified what types of emotional signals would be most effective in helping the AI "read the room." I focused on two key input types that we could later simulate or integrate.
1
Keyword extraction from user-written journal entries
"Lately, I've been trying to push forward, but I feel stuck somewhere between motivation and burnout..."
2
Lightweight weekly emotional check-ins
How have you been feeling this week?
Feeling anxious
Not at all
Very much
These signals fed into an Emotion Classifier, which I designed to group inputs into 4 core emotional states.
Emotional states #1
Anxious/Stressed
Emotional states #2
Sad/Low Energy
Emotional states #3
Neutral/Curious
Emotional states #4
Positive/Confident
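The flow from the two input signals to the four emotional states can be sketched as follows. This is an illustrative approximation only: the production classifier used LLM embeddings rather than keyword matching, and every name here (`classify_emotion`, `CheckIn`, the keyword lists) is hypothetical.

```python
# Illustrative sketch: journal keywords + weekly check-in -> one of
# four emotional states. Keyword heuristics stand in for the real
# embedding-based classifier so the data flow is easy to follow.
from dataclasses import dataclass

EMOTIONAL_STATES = (
    "Anxious/Stressed",
    "Sad/Low Energy",
    "Neutral/Curious",
    "Positive/Confident",
)

# Hypothetical keyword lists per state.
ANXIOUS_WORDS = {"stuck", "burnout", "anxious", "overwhelmed", "stressed"}
SAD_WORDS = {"sad", "tired", "empty", "lonely", "drained"}
POSITIVE_WORDS = {"proud", "excited", "grateful", "confident", "happy"}

@dataclass
class CheckIn:
    """Weekly check-in slider: 0.0 = 'not at all' anxious, 1.0 = 'very much'."""
    anxiety: float

def classify_emotion(journal_entry: str, check_in: CheckIn) -> str:
    """Combine journal keywords with the check-in to pick a state."""
    words = set(journal_entry.lower().replace(",", " ").split())
    if words & ANXIOUS_WORDS or check_in.anxiety >= 0.7:
        return "Anxious/Stressed"
    if words & SAD_WORDS:
        return "Sad/Low Energy"
    if words & POSITIVE_WORDS:
        return "Positive/Confident"
    return "Neutral/Curious"

entry = "Lately I feel stuck somewhere between motivation and burnout"
print(classify_emotion(entry, CheckIn(anxiety=0.4)))  # -> Anxious/Stressed
```

The check-in acts as a fallback signal: even a neutral-sounding entry is routed to the anxious state when the user's self-reported anxiety is high.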
Next, I mapped each emotional state to a corresponding tone strategy.
These include:
Tone strategy #1
Tone of voice
Should the assistant be validating? Playful? Calm?
Tone strategy #2
Response structure
Should it open with empathy, ask a reflective question, or give gentle guidance?
Tone strategy #3
Linguistic markers
Sentence length, emoji use, punctuation style, and word choice.
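The state-to-strategy mapping above can be sketched as a simple lookup table rendered into prompt instructions. The field values paraphrase the tone style guide described in this case study; the function and structure names are hypothetical.

```python
# Illustrative sketch: each emotional state maps to a tone strategy
# (voice, response structure, linguistic markers), which is rendered
# into instructions an LLM prompt could carry.
TONE_STRATEGIES = {
    "Anxious/Stressed": {
        "voice": "calm and validating",
        "structure": "open with empathy, then one gentle grounding question",
        "markers": "short sentences, no exclamation marks, sparing emoji",
    },
    "Sad/Low Energy": {
        "voice": "warm and unhurried",
        "structure": "acknowledge the feeling before offering any prompt",
        "markers": "soft wording, low-pressure phrasing",
    },
    "Neutral/Curious": {
        "voice": "friendly and curious",
        "structure": "ask a reflective question",
        "markers": "medium-length sentences, occasional emoji",
    },
    "Positive/Confident": {
        "voice": "playful and encouraging",
        "structure": "celebrate, then invite the user to build on momentum",
        "markers": "upbeat wording, exclamation marks allowed",
    },
}

def build_tone_instructions(state: str) -> str:
    """Render one tone strategy into a prompt-ready instruction string."""
    s = TONE_STRATEGIES[state]
    return (
        f"Tone of voice: {s['voice']}. "
        f"Response structure: {s['structure']}. "
        f"Linguistic markers: {s['markers']}."
    )

print(build_tone_instructions("Anxious/Stressed"))
```

Keeping the strategies as data rather than branching logic made it easy to iterate on wording during A/B testing without touching the classification code.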
For example, a journal entry like "Lately, I've been trying to push forward, but I feel stuck somewhere between motivation and burnout..." would be classified as Anxious/Stressed and met with a calm, validating response that opens with empathy.
As a final step, I collaborated closely with the PM and ML engineers to implement the system, co-developing an AI tone framework aligned with each user's emotional state.
Product designers (including me): created a tone style guide, ensuring consistency across responses.
ML engineers: mapped user input to emotional categories using LLM embeddings and classification logic.
PM: prioritized use cases that aligned with user retention goals (e.g., supporting users who frequently report negative moods).
How did this impact UX?
By aligning tone with emotional context:
1
The assistant felt more human and trustworthy.
2
It reduced the emotional labor on users by not forcing them to explain or justify their feelings.
3
It also created more varied, natural interactions over time, reducing "AI fatigue."
Design Execution
Once we agreed on the function, I began designing the form, starting with simple wireframes.
I held frequent critique sessions with the developers and PM to review and iterate the designs until we reached consensus on layout and functionality.

A few screens from the wireframes I created
Final UI
The final UI reflects all three emotional principles (calm presence, flexibility, and trust-building cues) while maintaining clarity and responsiveness across devices.

Target success metrics
Most wellness apps track. We listened.
We wanted success to mean more than just engagement. I worked with the PM and engineers to define what meaningful success could look like. Because we were building something unfamiliar, we grounded our expectations in real behavior change.
3 core features shipped
+ adaptive tone engine
% increase in user engagement
in closed beta vs. benchmark
% said it "felt emotionally intelligent"
Reflection
Lepal taught me that good product design is about designing relationships, especially when the product is meant to support emotional well-being.
I learned how to: