HypeRunner

October 2024 - Present
Deployed iOS app providing personalized, responsive, AI-powered audio encouragement while you run. I used Claude to help code the app, and I am currently preparing it for commercial release on the App Store.
Project Outcomes:
iOS App Development + Design
Content Creation + App Marketing
Rapid User Testing
Responsive Encouragement
The sun is shining bright, and the temperature is perfect for our runners to tackle this 10 mile race.
Marty, Audrey, and Ryan are all lined up at the starting line, their eyes fixed on the road ahead.
(Actual commentary generated by the app)
Personalized Goals
The user can edit run details to key the commentator in on their aspirations for the run.

For a quick start, the runner can simply begin running without inputting any details.
HypeRunner - Mobile Audio Motivation
HypeRunner is the first running app to provide live AI audio encouragement that is responsive to the runner's progress (distance, pace, and training goals). While you run, an AI sportscaster commentates on your progress, adding an extra boost toward your training goals or simply making the run more enjoyable!

It calls the Anthropic API to generate the personalized commentary and the ElevenLabs API to generate the audio with a custom voice design. While other apps provide audio motivation for running, none have incorporated generative AI models to achieve this level of personalization.

HypeRunner is a fully functional iOS app, internally deployed on TestFlight, and is planned for public release on the App Store in early 2025.
4 Rounds of Rapid User Testing
Tested the app live on a run with a group of 5 runners, with a new iteration of the app each week, for 4 consecutive weeks during November 2024.
It seems they've hit a bit of a snag ... Alex, Hunter, Eric, Kate, and Alain had targeted an 8.5 minute pace for this 2 mile run, but they're currently clocking in at a modest 10.49 minute per mile!
(Actual commentary generated by the app)
Project Video
A 4-minute YouTube video explaining the app and the process of making it.
Objectives
Due to the project's innovation-first approach, it had 2 objectives: the traditional design goal of meeting an unmet user need, and the secondary goal of pioneering a positive, undiscovered use case for AI.
The first objective was to address a gap in inclusivity in the running community for beginner runners, a problem I had observed firsthand as the captain of RISD's running club.
Problem 1
Runners with less experience find it difficult to get started and intimidating to join more advanced runners.
Goal 1
Help previously excluded runners feel like part of the community through joyful encouragement.
The second objective focused on making new discoveries to advance the possibilities of AI software applications.
Problem 2
The full potential of AI's use cases has not yet been explored.
Goal 2
Discover and develop a new way to use AI that will positively impact people's lives.
Project Structure
Because generative AI is such a novel, unexpected design material, it was important to gauge people's reactions to a real, deployed prototype rather than just a mockup.

I used Claude to help me code the app, which allowed me to learn how to build an iOS app in React Native. I ran rapid cycles of app development followed by testing with real people on live runs, in 1-week cycles for 4 weeks.

I shared my progress online both to generate publicity for the app and to timestamp my work, as this is the first time that generative AI has been integrated into the use case of motivational running audio.
Initial Research
I started the project by approaching generative AI as a material, experimenting with it to discover its unique properties, strengths, and weaknesses.

I tested multi-functional tools such as Claude Computer Use; the sound capabilities of NotebookLM, Boomy, SeamlessExpressive, MusicFX, and ElevenLabs; the video capabilities of Runway; and the image generation capabilities of Vizcom.
Takeaway
Out of this research, I found that the podcast-generating NotebookLM produced a humorous joy through the surprise, personalized attention, and enthusiastic encouragement of its output.

It felt encouraging to feed the AI some content from my life (e.g. my college transcript, a past project, a past essay) and to imagine that a real podcaster was praising my effort. I kept this strength of humorous joy and constant enthusiasm in mind when developing my own app.
Initial Ideation
With the initial experimentation in mind, I created more than 100 sketches of ideas, including a run commentator idea.
Problem Identification
As a captain of RISD's running club, I have directly observed beginner runners who say they are "too intimidated" to run with faster runners because they are "afraid" of being dropped from the pack.
While there are fitness apps that attempt to meet this need for motivation and encouragement, their content is not dynamic enough to meet these users' unique circumstances and preferences in their fitness journeys.
Members of the RISD Running Club
Inspiration
When I was in high school in Boston, some of the major track meets had a YouTube livestream with someone commentating on our races like a sports broadcast seen on TV. Hearing the commentary made me feel like a pro runner on a world stage; it was funny, but also encouraging to hear.

I was inspired to provide this same type of fun encouragement to other runners.
An example of one of the high school track meet livestreams
References
Other apps provide audio motivation for running, including Nike Run Club and Zombies, Run!, but none have tapped into the level of personalization that can be achieved by incorporating generative AI models to create the audio.

Strava’s Athlete Intelligence beta feature provides chat-based motivation but, as of now, does not have a live running audio use case.
There have been online projects using generative AI to create sports commentary for actual sports matches, such as this FIFA commentator project and this tennis match commentator project; however, their data collection methods (computer vision and recorded scores) are different from my location-based implementation.

Additionally, their intended use case is for generating content of actual sports events to be shared with the public, whereas my intended use case is for personal, recreational activity.
Initial Prototype
Focusing on the core functionality of the app, I created a simple web app that tracks location data as you run and generates an AI sports broadcast with encouraging comments, spoken in a stereotypical sports broadcaster voice.
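As a rough illustration of how a web prototype like this can work, the core loop only needs the browser's Geolocation API: watch the runner's position, accumulate distance, and derive pace. The sketch below is a minimal reconstruction under that assumption; the names and structure are illustrative, not the prototype's actual code.

```typescript
// Track the runner with the browser Geolocation API and keep running
// totals of distance and pace. All names here are illustrative.
type Fix = { lat: number; lon: number; time: number };

const fixes: Fix[] = [];

// Great-circle distance between two GPS fixes, in miles (haversine formula).
function haversineMiles(a: Fix, b: Fix): number {
  const R = 3958.8; // Earth's radius in miles
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLon = toRad(b.lon - a.lon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

navigator.geolocation.watchPosition(
  (pos) => {
    fixes.push({ lat: pos.coords.latitude, lon: pos.coords.longitude, time: pos.timestamp });
    if (fixes.length < 2) return;

    // Total distance so far and average pace in minutes per mile.
    const miles = fixes.slice(1).reduce((d, f, i) => d + haversineMiles(fixes[i], f), 0);
    const minutes = (fixes[fixes.length - 1].time - fixes[0].time) / 60_000;
    const pace = miles > 0 ? minutes / miles : 0;

    // These running stats are what get folded into the commentary prompt.
    console.log(`distance: ${miles.toFixed(2)} mi, pace: ${pace.toFixed(2)} min/mi`);
  },
  undefined,
  { enableHighAccuracy: true } // GPS-grade fixes matter for pace accuracy
);
```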
User Testing Round 1
I then tested this app with 5 runners who had no idea what the app would be. The results were encouraging: they found the output humorous and motivating, connecting with the “hyped” headspace I intended the app to provide.
"It was very encouraging"
Claire, Live Tester
"The register of the voice elicits a competitive, energetic response"
Audrey, Live Tester
Implementation
The commentary is generated by feeding the runner's current location into a prompt sent to an Anthropic API endpoint (Anthropic is the company behind Claude), which returns the text of the sports broadcast. The commentary is dynamic and personalized due to the innate generative properties of LLMs and the constantly changing run progress being fed into the prompt.

This text commentary is then passed to an ElevenLabs API endpoint, an AI text-to-speech synthesis model. I created a custom voice design to achieve a stereotypical, enthusiastic male sports broadcaster voice.
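Here is a minimal sketch of that two-step pipeline, assuming fetch-based calls to the public Anthropic Messages and ElevenLabs text-to-speech REST endpoints; the model names, voice ID, and prompt wording below are illustrative placeholders, not the app's actual values.

```typescript
type RunStats = { miles: number; paceMinPerMile: number };

// Step 1: turn the current run progress into broadcast text with Claude.
async function generateCommentary(stats: RunStats): Promise<string> {
  const res = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "x-api-key": process.env.ANTHROPIC_API_KEY!,
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    },
    body: JSON.stringify({
      model: "claude-3-5-sonnet-20241022", // placeholder model name
      max_tokens: 200,
      messages: [{
        role: "user",
        content:
          `You are an enthusiastic sports broadcaster. The runner has covered ` +
          `${stats.miles.toFixed(2)} miles at ${stats.paceMinPerMile.toFixed(2)} min/mile. ` +
          `Give a short burst of live, encouraging race commentary.`,
      }],
    }),
  });
  const data = await res.json();
  return data.content[0].text; // the generated broadcast text
}

// Step 2: voice the text with the custom ElevenLabs broadcaster voice.
async function synthesizeSpeech(text: string): Promise<ArrayBuffer> {
  const voiceId = "CUSTOM_BROADCASTER_VOICE_ID"; // placeholder voice design ID
  const res = await fetch(`https://api.elevenlabs.io/v1/text-to-speech/${voiceId}`, {
    method: "POST",
    headers: {
      "xi-api-key": process.env.ELEVENLABS_API_KEY!,
      "content-type": "application/json",
    },
    body: JSON.stringify({ text, model_id: "eleven_turbo_v2" }),
  });
  return res.arrayBuffer(); // MP3 bytes, ready for playback
}
```

Because the run stats in the prompt change on every call, each generated broadcast comes out different even with a fixed prompt template.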
Second Iteration
After the positive feedback from the first round of user testing, I spent the next week refining the functionality of the commentary. I added a new settings page where the user could input information such as goal pace, goal distance, and their name, which allowed for more personalization in the generated commentary.
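To illustrate how these settings might be folded into the generation, here is a hypothetical prompt builder; the field names and wording are assumptions rather than the app's actual schema.

```typescript
// Hypothetical settings collected on the new settings page.
interface RunSettings {
  name?: string;
  goalPaceMinPerMile?: number;
  goalDistanceMiles?: number;
}

// Assemble a personalized prompt from live run stats plus user settings.
function buildPrompt(miles: number, paceMinPerMile: number, s: RunSettings): string {
  const parts = [
    "You are an enthusiastic sports broadcaster giving live race commentary.",
    `The runner has covered ${miles.toFixed(2)} miles at ${paceMinPerMile.toFixed(2)} min/mile.`,
  ];
  if (s.name) parts.push(`The runner's name is ${s.name}; address them by name.`);
  if (s.goalPaceMinPerMile)
    parts.push(`Their goal pace is ${s.goalPaceMinPerMile} min/mile; compare their current pace to it.`);
  if (s.goalDistanceMiles)
    parts.push(`Their goal distance is ${s.goalDistanceMiles} miles; note how far they have left.`);
  parts.push("Keep it short, energetic, and encouraging.");
  return parts.join(" ");
}
```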
User Testing Round 2
I then tested this iteration with 5 new people who had no idea what the app would be. Users reacted positively to the extra personalization, especially when the commentary mentioned their name and gave specific support to help them meet their goal pace.
Takeaways:
  • Wanted hallucinations/inaccuracies fixed
  • Wanted more variation in the commentary (milestones, endpoints, personalization)
  • Began humanizing the commentator (and said he should have a friend)
Chasing User Engagement
Observing the tests, I noticed that despite high initial interest at the start of the run, the excitement and reaction to the app dwindled as the run went on.

In my next iteration, I focused on making the commentary more unpredictable and varied throughout the run; one possible trigger scheme is sketched below.
Modified Runner's Journey Map
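One plausible way to implement that variation is to trigger commentary on run events rather than a fixed timer, and to rotate the style of each segment; the event names and styles here are illustrative assumptions, not the app's actual logic.

```typescript
// Event-driven commentary triggers: fire at milestones instead of on a
// fixed timer, and rotate each segment's style so no two sound alike.
type RunEvent = "mile_marker" | "halfway" | "final_stretch";

const styles = ["play-by-play", "color commentary", "crowd hype", "coach's tip"];
let lastWholeMile = 0;
let halfwayCalled = false;
let finalCalled = false;

function nextEvent(miles: number, goalMiles: number): RunEvent | null {
  if (Math.floor(miles) > lastWholeMile) {
    lastWholeMile = Math.floor(miles);
    return "mile_marker"; // every new whole mile
  }
  if (!halfwayCalled && miles >= goalMiles / 2) {
    halfwayCalled = true;
    return "halfway";
  }
  if (!finalCalled && goalMiles - miles <= 0.25) {
    finalCalled = true;
    return "final_stretch"; // last quarter mile
  }
  return null; // nothing noteworthy; the commentator stays quiet
}

// Vary the voice of each commentary segment by rotating through styles.
function styleFor(segment: number): string {
  return styles[segment % styles.length];
}
```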
Third Iteration
Following the takeaways from the previous week's test, I targeted the goal of sustaining user interest. I started to develop the visual identity and branding of the app, and based on feedback from the previous iteration, I also incorporated a second, female commentator who responds to the remarks of the first.

Additionally, I migrated the code from a web app to an iOS TestFlight app built with React Native (JavaScript).
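On the mobile side, playing back the synthesized MP3 might look like the sketch below, assuming the Expo toolchain (expo-av, expo-file-system, and base64-js); the source does not specify which audio libraries the app actually uses.

```typescript
import { Audio } from "expo-av";
import * as FileSystem from "expo-file-system";
import { fromByteArray } from "base64-js";

// Write the synthesized MP3 to the cache and play it through expo-av.
async function playCommentary(mp3: ArrayBuffer): Promise<void> {
  // Keep commentating when the screen locks or the ringer is silenced.
  await Audio.setAudioModeAsync({
    playsInSilentModeIOS: true,
    staysActiveInBackground: true,
  });

  const path = `${FileSystem.cacheDirectory}commentary.mp3`;
  await FileSystem.writeAsStringAsync(path, fromByteArray(new Uint8Array(mp3)), {
    encoding: FileSystem.EncodingType.Base64,
  });

  const { sound } = await Audio.Sound.createAsync({ uri: path }, { shouldPlay: true });
  sound.setOnPlaybackStatusUpdate((status) => {
    // Free the player once the commentary segment finishes.
    if (status.isLoaded && status.didJustFinish) sound.unloadAsync();
  });
}
```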
Name & Initial Branding
The initial logo for the app was inspired by a series of long-exposure photographs I had previously made, which recorded the motion of running by tracking lights attached to specific points on my body. The logo connected the cyclical form of running to the soundwaves of my app's audio output.

This idea for the logo was eventually scrapped due to feedback that it wasn't "recognizable" enough in conveying the app's functionality and its connection to exercise and fitness.
The Speed of Light, my art from 2019
Initial Visual Treatments
The initial visual treatments put high emphasis on the loading screen animation graphics, as well as on making a GPS map accessible from the active run screen. These priorities came from my testing experience: I observed that the highest point of visual engagement was the start of the run, when suspense and anticipation peaked and the user had not yet begun moving.
Through subsequent user tests and critiques within my studio, I eventually deprioritized the loading screen animation and elevated the GPS map as a highly important visual element of the app.
User Testing Round 3
I then tested this iteration with 5 new people who, again, had no idea what the app would be.

Note: A lot of the footage from the week 3 video was lost due to technical issues. The week 3 video can be found here:
In this test, it was apparent that runners with more experience reacted less strongly to the app. These runners run long distances every day, so they are likely already motivated and do not need the extra support as much.

But as I had seen in my other tests, those runners who did not have that same intrinsic motivation, who needed something a little extra, had very strong reactions to the product.
This was a turning point: it indicated that the target user for the software is a less experienced runner who has not yet been able to achieve their goals on their own.
"My mom should get on this app!"
Dillon, Live Tester
Fourth Iteration
The next week I continued to iterate on the visual treatment of the app, focusing on smoothing the friction discovered in the previous week and creating a more intuitive experience by incorporating familiar patterns from existing popular running apps into my own interface.

Most notably, I added a map view and run stats monitor on the home page to allow the user to view their run progress and to ground the app in its running use case. I also explored various color ideas and refined the start button.
Color & Start Screen Explorations
At this point I was refining the identity of the app through color and experimental layout choices. Below are some of these explorations:
It was important for me to have a visual signifier of the core audio functionality of the app, which differentiated it from other run tracking apps. My solution was to create a start button that had radial soundwave iconography, visually informing the user that this button not only starts the tracking of their run, but also starts the audio generation.
To create a cohesive branding for the app, I eventually incorporated this start button icon into the app logo, substituting the linear soundwave iconography from the original logo idea with the geometric language of the start button.
User Testing Round 4
For this round of testing I conducted 2 tests: another live run with 5 new people, and a remote beta test in which a tester downloaded the app from TestFlight, used it on their own device without me present, and recorded their reactions.
Brendan, Remote Beta Tester
Live 5-Person Group Run Test
"I usually run pretty hard, so I liked the messaging that it was a race"
Brendan, Remote Beta Tester
"When I'm racing, there are other people cheering - this (app) gives that."
Osmond, Live Tester
Current Design
From the 4 rounds of testing, I arrived at a working proof-of-concept application that accomplishes the core intended functionality of providing fun encouragement to runners of all experience levels.

Currently the app has 3 main screens: the start screen, the settings screen, and the active run screen. As I continue to develop the product from a proof-of-concept TestFlight app to a shippable version, the interface will continue to evolve.
Design Evolution
Landing Page
Takeaways
From this project, I realized the benefits of using a real, deployed app for user testing: I was able to explore interactions that would have been impossible to imagine without genuine reactions and behaviors toward a functional prototype. This project has also shown me how a designer can, and should, contribute to the evolving conversation around AI by bringing a human-centered, experience-focused perspective to the technology.

Finally, deploying a project in real life showed me that many facets of an application influence a user's experience beyond the interface itself, such as server latency, the subscription model (or lack thereof), and how the app connects the user to other people in their community.
Next Steps
I am currently refining the functionality of the app to the point where it can be shipped. I plan to release the app on the App Store in early 2025.