Pinterest is an image-sharing platform designed for creative inspiration. While using Pinterest, I noticed that its content tends to lack context or credibility, and users often leave comments with questions that go unanswered.
I wondered: Without context, how can users deepen their understanding of what they are seeing? Could they be missing out on related content that they would enjoy? I decided to dig deeper.
Project goals
1) Confirm the problem: Do users experience this as an issue? If so, how does it impact their use of Pinterest?
2) Develop a solution: Based on user research, design a feature that addresses the problem and integrates seamlessly into the existing experience.
Learning from real Pinterest users
I began user research with the goal of learning 1) how people use Pinterest generally, and 2) their experiences looking for context about pins ("posts" on Pinterest).
Each session began with an interview exploring the participant's habits and experiences with the platform. Then I ran a usability test, asking them to show how they seek out more information about content on Pinterest.
Takeaways from user research
Takeaway 1: It really is hard to learn more about content on Pinterest.
In my usability test, 5/5 users clicked the pin's source link to learn more about a pin, but 3/5 expected the experience to be unhelpful or annoying. Overall, users said they search for more information about a pin occasionally, rather than often. How might this change if the information were easy to find?
Takeaway 2: Users will leave Pinterest to find out what they're looking at.
5/5 users would consider looking up a pin using Google Lens or text search. Like the source link, this takes the user to an external site, risking drop-off. Only 1/5 users tried Pinterest’s built-in visual search, suggesting that they either don’t know about the tool or don’t expect it to be helpful.
Are there existing solutions?
I had now confirmed the problem, and I needed to find out if other products were addressing similar issues. I identified two main types of solutions among competitors: crowdsourced feedback and AI image recognition.
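As a concrete illustration of the second category, here is a minimal sketch of what AI image recognition can look like, assuming an off-the-shelf open-source classifier. The library, model, and file name are my own stand-ins for illustration, not anything from Pinterest or its competitors:

```python
# Minimal sketch: identify the subject of a pin's image with an
# off-the-shelf classifier. Model and file path are illustrative.
from transformers import pipeline

classifier = pipeline("image-classification", model="google/vit-base-patch16-224")

# Returns the model's best guesses as labels with confidence scores,
# e.g. [{"label": "studio couch, day bed", "score": 0.87}, ...]
for guess in classifier("pin_image.jpg"):
    print(f'{guess["label"]}: {guess["score"]:.2f}')
```

A production tool would be far more capable, but the shape is the same: an image goes in, and labeled guesses with confidence scores come out.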
Guided by my research up to this point, I started to brainstorm solutions.
Filling in the details
I decided that the feature should include:
Envisioning user interactions
From concept to wireframes
On paper, I explored ways to seamlessly integrate the new feature into Pinterest’s existing interface.
Building a polished prototype
Having defined the layout and interaction patterns for the feature, I refined the details of my wireframes and brought them together in a working prototype.
Key design decisions
I had designed a solution, but I needed to find out whether it would be clear to users, whether they would find it useful, and how they would feel when using it.
Referring back to my initial guiding question, I wanted to know: Does the solution help Pinterest users more easily find information about a pin, without leaving the site?
Designing a study
Using Maze, I set up an unmoderated online study with two parts:
1) Usability test: Participants were shown a pin with a piece of art. They were asked to show how they might find out what style of art it was, and then to give feedback on the answer.
2) Questionnaire: Participants were asked to complete rating scales and open-ended questions to reflect on their experience with the usability test.
How did the new feature perform?
I collected data from 16 Pinterest users. Overall, participants rated the task as fairly easy (4.5/5) and saw potential utility in the feature (also 4.5/5).
An issue of trust
Although testers found the feature easy to use, 3 of 16 expressed concern about the reliability of AI-based tools. This feedback echoed public sentiment about AI, and I would need to address it.
The risk of misinformation with AI is real, so it would be unrealistic to promise accurate responses. Instead, I chose to acknowledge users’ legitimate concerns.
Building trust through transparency
1) Acknowledging AI's limitations
A disclaimer now appears when users first open the AI search tool, explaining that information provided may not be accurate. There is also a link to learn more.
2) Encouraging user feedback
Instead of asking users to rate whether they agree with a response, I added the open-ended prompt, “What do you think of this response?” to encourage feedback even from users who don’t know for sure whether the information is correct.
What's next?
Hypothetically, there are several next steps I might pursue as part of a team working toward the release of this feature. These include:
1) Internal QA to assess the reliability of the AI tool, given AI's known risk of "hallucinations" and misinformation (a rough sketch of such a check follows this list).
2) Additional usability testing to ensure that the design is well validated with users before putting the feature into production.
3) Collection of usage data after rollout (or a beta release) to gain insight into the feature's performance, catch any issues, and suggest areas for improvement.
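For the QA step, a first pass might look something like the sketch below: run the recognition model over a small hand-labeled set of pins and measure how often its top guess matches. As in the earlier sketch, the classifier, file names, and labels are illustrative assumptions.

```python
# Hypothetical QA harness: measure top-1 accuracy of the recognition
# model against a hand-labeled evaluation set. All names are invented.
from transformers import pipeline

classifier = pipeline("image-classification", model="google/vit-base-patch16-224")

# Hand-labeled evaluation set: image path -> expected label keyword.
labeled_pins = {
    "pins/espresso.jpg": "espresso",
    "pins/park_bench.jpg": "park bench",
}

correct = 0
for path, expected in labeled_pins.items():
    top_label = classifier(path)[0]["label"].lower()  # highest-confidence guess
    if expected in top_label:
        correct += 1

print(f"Top-1 accuracy: {correct}/{len(labeled_pins)}")
```

A real pipeline would go further, tracking accuracy per content category and flagging low-confidence responses for review, but even a crude number makes "reliability" measurable.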
Lessons learned
1) Usability is more than just ease of use
I expected that testing might uncover an aspect of the design causing confusion or difficulty. In reality, the biggest issue was trust in AI tools. I could have addressed this more proactively if I had started with a broader understanding of what usability can mean.
2) Designing for a rapidly changing technology
While I was in the middle of this project, ChatGPT gained a new image description feature that hadn’t been available a week before, when I first researched AI solutions. It was a lesson in the importance of staying informed about a changing landscape.
3) Working with a design system
This was my first time working from a design system. It provided helpful structure and reduced the time I spent on UI decisions. I also learned to consult the design system directly rather than make assumptions about how its components work.