🚗

Using Generative UXR to Redesign MMR

 
This case study is also available as a Slide Deck and PDF Download.
 

 
 
Client: Manheim
Role: Lead UX Researcher
Industry: Automotive
Duration: 6 Weeks
 
Project Overview
Cox Automotive's Manheim is the largest wholesale auto auction in the United States and is where the majority of used car dealers do business. Manheim provides many tools to dealers, but their most used (and most lucrative) tool is the Manheim Market Report, or MMR.
This project was part of Manheim's overall initiative to update its technology and interfaces across the board. My role was leading research for anything that fell under the umbrella of Manheim.com.
 
Outcomes
  • Corrected stakeholder assumptions about what dealers were using the tool for, and secured an increased design budget to create solutions for the actual user need
  • Increased users' trust in the MMR Tool and Manheim itself by 15%
  • Decreased dealers’ time spent on task by 20%
 
 
 
 
Estimated Reading Time: 15-20 Minutes

I. Introduction

 
Context
Manheim Market Report (MMR) is one of Manheim's most well-known product features. The MMR surfaces a vehicle's sales transactions from the past 13 months, giving car dealers, both buyers and sellers, the data needed to make an informed sale or purchase.
 
Previous design of the MMR Tool from 2021.
 
 
The Problem
Past evaluative research found that car dealers tended to use the tool while either A) showing a vehicle to customers or B) making a purchase, both fast-moving situations. Passive user feedback also told us that despite the large user base, a majority of users did not trust the MMR's data. Car dealers need a way to quickly understand the MMR tool in fast-paced situations, a goal blocked by a design that is slow to digest and a general distrust of the data shown.
 
 
 
 
 
 
 
 

II. Scope and Align

 
The Kick-Off Workshop
I started the project off with a week of scoping meetings. As part of that week, I facilitated a kick-off workshop to:
  • Clarify the stakeholders' request so that I could identify what we already knew, stakeholder assumptions, any obstacles, expectations, and more.
  • Collect stakeholder assumptions (used to clarify later findings if needed), then map them out according to risk level and current understanding.
 
 
Assumptions
One thing I always try to collect is stakeholders' assumptions, because I make it a point to check during actual testing whether those assumptions hold up, and if they don't, how reality differs.
For example, one of the assumptions held by the product owner of the tool was that "Dealers want an updated valuation visual and the updates will help them complete their tasks faster than the original."
 
Hypothesis
By the end of the workshop, we used dot voting to agree on this hypothesis: "We believe that:
  • Modernizing the design of the valuation visual
  • Adding in our confidence in the data
  • Adding a feature that allows users to adjust the MMR with different vehicle options (e.g., 4WD)
will achieve higher satisfaction among dealers and speed up task completion. We will know our hypothesis is correct if we see positive user feedback, faster task completion rates, and higher CSAT scores."
 
 
 
 
 

III. Methodology

 
Choosing a Methodology
The Kick-Off workshop, along with individual stakeholder interviews, gave me enough information to select research questions and methods.
Our research objectives, extracted from our hypothesis, were:
  • To understand whether a modernized design would aid dealers, and in what ways.
  • To learn whether adding confidence intervals would decrease users' distrust of Manheim's data.
  • To find out whether a feature that lets users adjust their valuations by adding options would be useful.
As this was a generative research project based on an existing product, it was important that we understood:
 
  • How dealers used the tool at the time
  • What needs they had when they consulted the tool
  • Where the gaps were between the actual product and dealer needs
 
I wanted to first capture behavioral data without researcher intervention, and then attitudinal data, so I decided on remote contextual inquiry with active inquiry. We then called those same dealers back for in-depth user interviews.
 
 
 
 
 

IV. Stakeholder Recap

 
Test Plan Review
Before starting the actual recruitment process, I reviewed the interview guide with stakeholders to make sure that:
1) We were all on the same page and expectations were clear
2) Stakeholders were able to voice any concerns or opinions before research began
3) Anything they wanted to see included made it into the test plan
4) We confirmed how they preferred updates to be relayed (check-ins, weekly emails, DMs)
 
 
 
 
 
 

V. Recruitment

Sampling
Manheim had five clearly defined user segments, so for both the contextual inquiries and interviews I wanted to recruit 5 participants per segment (stratified sampling). Usually around 15 participants is enough to reach saturation, but given how tumultuous the past two years had been because of COVID, I wanted to overshoot our participant needs.
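The quota math behind this recruit is simple enough to sketch. A minimal illustration, assuming five made-up segment names (the actual Manheim segment labels aren't reproduced here):

```python
# Hypothetical segment names for illustration only.
SEGMENTS = ["franchise", "independent", "wholesale", "fleet", "broker"]
PER_SEGMENT = 5  # overshooting the ~15-participant saturation point

def build_quota(segments, per_segment):
    """Return a recruitment quota with an equal count per segment."""
    return {segment: per_segment for segment in segments}

quota = build_quota(SEGMENTS, PER_SEGMENT)
print(sum(quota.values()))  # 25 sessions to schedule per method
```

Equal counts per segment are what makes this stratified rather than a simple convenience sample: each segment is guaranteed representation regardless of how many of its dealers happen to respond first.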
 
Acquiring
My process for acquiring participants was:
  1. Utilize UserIQ (now defunct) to launch incentive pop-ups on the MMR tool itself on the Manheim website. This way, I only attracted dealers who used the actual tool. Dealers weren't shown the incentive more than twice.
  2. The pop-up then led to a Qualtrics screener, and if they passed that, they were directed to my Calendly to sign up for two incentivized research sessions.
 
 
 
 
 
 
 
 

VI. Testing

Contextual Inquiries
Still deep in the midst of COVID, contextual inquiries had to be conducted remotely. The dealer was still working within the true and typical context of their activity. Here's how we set up the project:
  • The dealer would share their screen for an hour of our 2-hour session, with their camera and microphone on, and was advised to think aloud for the sake of the study.
  • For behaviors I found interesting or that didn't fit our assumptions, I would inquire in the moment.
 
After each inquiry, I used grounded theory to surface emerging patterns, interrogating the data to identify:
  • Pain Points
  • Experience
  • Environment
  • Unmet Needs
  • and more
I then turned to LogRocket, where I watched and annotated 25 session recordings from power users, and used this information for our first round of finding confirmation.
We used our findings to inform the following interviews.
 
User Interviews
From those inquiries, we were able to create three prototypes that we hypothesized could better fit user needs according to behavioral and attitudinal data.
Concepts Tested
 
A good number of participants from the contextual inquiries were able to return for interviews.
We interviewed participants over a week and a half, with 5-6 interviews a day. Each interview ran around 45-50 minutes, and at the end I always asked whether the participant wanted to join our UX research panel for future studies.
 
 
 
 
 
 
 

VII. Analysis

Inductive Coding
For the interviews, I analyzed data after each session to make sure I wasn't overwhelmed during report-creation week. I used inductive coding to identify qualitative insights at the most granular level.
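Once excerpts are tagged with open codes, part of the rollup is mechanical: counting how often each code appears and across how many participants. A minimal sketch, with invented codes and participant IDs (the real codes emerged from the sessions themselves):

```python
from collections import Counter

# Hypothetical coded excerpts: (participant, open code).
# Both the codes and the participant IDs are illustrative.
coded_excerpts = [
    ("P1", "scrolls past visual"),
    ("P1", "checks transactions first"),
    ("P2", "checks transactions first"),
    ("P3", "distrusts valuation"),
    ("P3", "checks transactions first"),
]

# How often each code appears across all excerpts...
code_counts = Counter(code for _, code in coded_excerpts)
# ...and how many distinct participants each code came from,
# a rough signal of which patterns are converging into themes.
participants_per_code = {
    code: len({p for p, c in coded_excerpts if c == code})
    for code in code_counts
}
print(code_counts.most_common(1))  # [('checks transactions first', 3)]
```

Counting is no substitute for the interpretive work of grouping codes into themes, but it keeps the granular evidence auditable when the report gets challenged.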
 
 
 
 
 
 

VIII. Deliverables

Research Report
Our research project was massive, and only a full research report could hold it. I created an in-depth report covering findings like:
  • Dealers go to the transactions below the MMR tool to calculate value for themselves. On their own, the confidence intervals have little to no impact on dealers' actions.
  • For data visualizations, customers preferred a speedometer, as it was the industry standard.
  • The 'Similar vehicles' section was dismissed by all users, who wanted that real estate used for transactions instead.
 
Results of 1 of the 4 concepts we tested with users. See full image.
 
So, remember those assumptions from the beginning? The one where stakeholders thought dealers were not only using the speed-dial visual but would also want an updated version? Our most important finding made that assumption obsolete. Instead of using the visual, the first thing dealers did was scroll to the bottom of the page and calculate value from the transactions themselves.
Stakeholders assumed that dealers would use the visualization to make quick decisions, but that was proven false when nearly 100% of all users interviewed said they went straight to the transactions at the bottom to make decisions. See full image.
 
 
 
 
 
 

IX. Socializing

Read Out
I read out the presentation to stakeholders over the course of a week; because some of our stakeholders were time-poor, multiple presentations were needed to accommodate their schedules. At the end of a read-out, I always ask, "Who else do you think would find this information valuable?" and then do my best to relay the report to the identified parties.
 
 
UX Team Presentation
The next step in the process was presenting our findings to the rest of the Manheim UX Team (50 strong), to make sure we informed as many people as possible in case the findings affected their projects.
 
 
 
 
 

X. Decision Workshop

Lightning Jam
All the findings of this research, while very valuable, don't mean much if they can't be acted upon. That's why I facilitated a Lightning Decision Jam, a workshop that brings stakeholders together to prioritize which findings are urgent and what needs to be fixed first. On my end, LDJs are helpful because they make it clear whether or not the research was understood.
 
 
 
 
 
 

XI. Tracking

Research Repository
At the end of it all, I archived our findings in our research repository. The repository was a work in progress, but the intention was that stakeholders from all over Manheim could come to this hub and dig up research insights quickly.
Metric Reminders
In this research repository, I made sure to make notes in the document (as well as set reminders on my calendar) to measure KPIs of any design changes in the upcoming months.
 
 
 
 
 
 

XII. Outcomes

Correcting Understanding
  • Increased users' trust in the MMR Tool and Manheim itself by 15%
  • Decreased dealers’ time spent on task by 20%
 
Perhaps the most impactful insights gained from the research were:
  • Users weren't using the MMR tool itself to make decisions, but the transaction list at the bottom of the page. Changing the layout of the tool increased user satisfaction and resulted in more conversions for Manheim, ultimately bringing in more profit.
 
 
 
 
 
 

XIII. Lessons Learned

Get Research In Early
While all this research was extremely informative, one very large obstacle I encountered was that the business waited until a week before their new design launched to bring in testing. Research should be involved as early as possible, so that any urgent issues found can be solved before development is already underway.
 
Research was brought in entirely too late to change the course of the project. Designs were well underway for nearly 5 months before I caught wind. I managed to salvage what I could, but unfortunately the product that launched A) wasn't what users asked for and B) didn't meet user needs. See full image.
 
Although we couldn't launch a truly user-centered design, we found through user testing that the prototypes that prioritized the list of transactions above the data visualization increased user trust and decreased task completion time.
 
What stakeholders thought users wanted to see: a visualization with Manheim's data interpretation. In reality, users made all their decisions from the transactions below, and had to scroll all the way to the bottom of the page and select a CTA to see them.
 