
Google Immersive Design Exercise

The prompt: Design an immersive experience for the following scenario (utilizing AR or VR).

Millions of animals are currently in shelters and foster homes awaiting adoption. Design an experience that will help connect people looking for a new pet with the right companion for them. Help an adopter find a pet which matches their lifestyle, considering factors including breed, gender, age, temperament, and health status. Provide a high-level flow and supporting wire frames.

 

 

Challenges

VR/AR tech constraints

Finding the sweet spot between constraints of current-state VR/AR technology and ideal-state UX. Balancing what is immediately implementable with a compelling, stickier user experience.

Minimizing friction

The cost of activating VR/AR is often high for users, especially in comparison to using web/mobile. Designing an experience that minimizes effort while maximizing the benefit VR/AR technology can offer.

Do few things, well

With new technologies that are not fully mainstream, it's even more critical to focus on doing fewer things, well, to set the user up for success.

 

 

Discovery: Unpacking the problem

Competitive Research

To understand the current landscape, I conducted a competitive overview of sites/apps that enable potential pet adopters to connect with their prospective pet, such as Petango, Petfinder, and national/local animal shelter sites.

VR/AR technology's key benefit lies in immersion: the degree to which the user forgets they are experiencing a computer-generated world.

A-ha moment | To best leverage VR/AR capabilities, I decided to focus on its potential for creating an immersive connection between adopters and pets. Thus, in my competitive audit, I examined how current sites/apps create a pet connection through photographs and video.

 

WeRescue — Adopt A Pet is a mobile app that aggregates adoption listings from hundreds of shelters nationwide. Users are shown circular thumbnails of available pets. Tapping one opens the pet's profile page, which features an auto-advancing slideshow of images. Users can share pet profiles as a deep link that doubles as an app-download prompt.

 

Seattle Humane's listings are powered by Petango, an adoption aggregator site. Clicking on a pet thumbnail opens a pop-up window with that pet's profile page, which includes a numbered carousel for pictures; playing a video opens yet another pop-up YouTube window.

 

As another adoption aggregator app, Petfinder emphasizes pet images over topline information like breed, age, and gender. Tapping on a pet tile opens its profile, which includes a fullscreen image carousel and swipe interactions to view other pet results.

 

Meow Cat Rescue is a local shelter serving cats and dogs in the greater Seattle area. Users can browse a high-level results view of available pets. Clicking on a thumbnail opens an external tab for that pet's profile page on Petfinder.

 

From early surveys of adopters, I heard that photos/video were critical in narrowing down prospective pets. The competitive audit showed that image assets vary widely in resolution and quality. Many sites surface this multimedia as a carousel, but browsing pets on a local site often meant going to an external site or navigating pop-up windows to look at photos/video. As I heard from adopters, the resulting experience can feel randomizing and impersonal, making it hard in turn for an adopter to feel a connection to the pet.

 

 

User Interviews

I began with two assumptions:

  1. End users are already bought into adopting a cat/dog from an animal shelter.

  2. Two key audiences are potential cat/dog adopters (the end user) and animal shelter workers (the content creators of pet profiles).

I screened and interviewed recent adopters (adopted within the past 2 years), prospective adopters (looking to adopt within the next 6 months), and former animal shelter employees and volunteers.

For adopters, I asked them to walk me through their end-to-end thinking and process from the moment they began looking to the moment they signed adoption papers. I wanted to understand the role pet photos/video played in their decision process, as well as how they utilized adoption sites/apps.

I looked all over! Animal shelter sites for the county and cities close by, Facebook groups, Petfinder. It got tiring.
— Wendy, adopted a dog a month ago

I probably looked through photos for 80 different cats online, a lot of them with [my partner], before we narrowed down which ones to visit in-person.

It is all about the pictures when you’re online, but when you’re in-person, it’s all about the connection, about looking into the cat’s eyes.
— Christi, adopted three cats a few years ago

For shelter employees/volunteers, I wanted to understand their pain points in creating pet profiles and their interactions with prospective pet adopters.

There’s always too much to do at the shelter and never enough time. Any pictures we posted were because of one volunteer who also happened to be a photographer.
— Maarten, former rescue shelter volunteer

It’s such a shame. Older animals, ones with health issues, black animals—they languish in the shelter for six months, a year plus. There’s a total misunderstanding around how playful and lovable they can be. On the other hand, puppies and kittens get adopted within a week or two at most.
— Anna, former veterinary technician and rescue shelter volunteer
 

Key Takeaways

  • Adopters have to navigate a fragmented digital ecosystem that can also feel redundant, as aggregator sites provide contact info for local shelters, whose sites in turn show listings from aggregator sites.

  • The top three criteria for adopters: appearance (e.g., color, "cuteness", breed); size (as it relates to physical space, children, or other pets at home); and age (younger preferred).

  • Photos/video are critical for narrowing down the online search.

  • Temperament and adopter-pet connection are critical for the final adoption decision, though adopters can only confirm this in-person.

  • One person in a household typically drives the early online search. Other family members are brought in for decision-making as the search progresses.

  • Animal shelters are strapped for time and resources when it comes to processing applications, fielding questions, creating pet profiles, caring for the animals, and more.

  • Shelters see a high turnover of "desirable" pets that are younger and healthier, in contrast to "undesirable" pets whose personalities are misunderstood.

 

 

Definition: Aligning on experience goals and considerations

Experience Goals

Synthesizing my discovery findings, I converged on a set of goals for the experience that could provide mutual benefit for potential adopters and animal shelter pets.

Goal 1  |  Utilize VR/AR to shorten the adoption funnel from interest to action (from looking at a pet online to visiting it live)

I learned from user interviews that a live in-person visit is critical for adopters to feel a connection to and understand the temperament of their prospective pet. There is no replacement for a visit before an adoption decision is made.

Goal 2  |  Enable immersive interactions between an adopter and a prospective pet 

Live visits are so powerful because adopters can feel how a pet actively responds to them, in sharp contrast to passively browsing pets online, where pets’ photos/video can feel static. How can VR/AR technology strengthen the connection between users and potential pets in a way that other technologies cannot? What if an adopter could "feed" a pet and experience it reacting to them?

Goal 3  |  Provide users with an immediate understanding of a pet's appearance and size

Appearance and size are two of the most important criteria for adopters. VR/AR technology could provide users with 1) a more photorealistic representation of pets, 2) rendered to scale in relation to their home.

Goal 4  |  Increase user awareness of and empathetic connection towards misunderstood pets who are older, black, or unhealthy

VR/AR content creation is still costly in terms of time and labor. To laser-focus this technology's effectiveness, VR/AR could be used first to shift adopter perception of overlooked pets, especially since those animals may remain in shelters for months to over a year, while popular pets may be adopted before their online profiles are even completed.

 

 

Choosing Augmented Reality


I decided that AR, rather than VR, was the more suitable medium for this challenge. Potential adopters wanted to understand how an animal could fit into their life and (literally) into their home. Users would benefit more from experiencing animals in their own home than from being transported to a foreign context in VR.

 

Seamless Integration Through WebAR


Given the abundance of pet adoption sites at both the aggregator and local level, I was hesitant to further dilute a fragmented ecosystem by creating yet another site or app. It was also important to minimize friction not just for users, but for under-resourced animal shelter workers as well. I decided to leverage Google's recent WebAR work, which enables users to experience augmented reality in-browser, without requiring a headset or a native AR app.
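As a rough illustration of that entry point, here is a minimal sketch, assuming today's WebXR Device API (the standards-track successor to experimental builds like WebARonARCore); the function name is hypothetical:

```ts
// Hypothetical entry point: only offer the in-browser AR experience when
// the device can run an immersive AR session.
// (WebXR type declarations via the @types/webxr package.)
async function startWebAR(): Promise<XRSession | null> {
  const xr = (navigator as Navigator & { xr?: XRSystem }).xr;
  if (!xr || !(await xr.isSessionSupported("immersive-ar"))) {
    return null; // fall back to the regular photo/video gallery
  }
  // Request "hit-test" up front so the pet can later be anchored
  // to a detected floor plane.
  return xr.requestSession("immersive-ar", { requiredFeatures: ["hit-test"] });
}
```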

 

 

Design: User flow, technical assumptions, and challenges

Defining the user flow

Using two phones, paper cutouts, and a stuffed dog, I created a photographic storyboard to document key steps in the user flow.

Imagine a potential adopter, Jim, browsing Petfinder.com on his smartphone at home. Jim views Jojo's profile and photo/video gallery. The last tile shows a 3D rendering of Jojo.

After Jim taps on this last tile, the camera activates to show a full-screen feed of his living room. An object target (also known as a reticle) renders on the living room floor.

When Jim taps the reticle onscreen, a 3D video rendering (also known as volumetric video) of Jojo cocking his head from side to side appears, at Jojo's actual size (see the sketch following this storyboard).

Jim moves closer to Jojo to immediately get a sense of his size in relation to his home. Jim taps the Treat button and watches Jojo eat a treat that was just thrown to him in the volumetric video.

Jim's family joins in as well to interact with Jojo. His partner taps the Trick button, and they watch Jojo wag his tail.

After they finish playing with Jojo in 3D, Jim returns to Jojo's profile and calls the shelter to make an appointment to visit him.
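Under the hood, the tap-to-place step above maps onto WebXR's hit-test capability. Below is a minimal sketch, assuming an "immersive-ar" session requested with the "hit-test" feature and a three.js scene; `reticle` and `pet` are hypothetical, preloaded objects:

```ts
import * as THREE from "three";

// Sketch of tap-to-place: ray-cast from the viewer each frame, snap a
// reticle to the detected floor, and place the pet where the user taps.
async function enableTapToPlace(
  session: XRSession,
  renderer: THREE.WebGLRenderer,
  scene: THREE.Scene,
  camera: THREE.PerspectiveCamera,
  reticle: THREE.Object3D,
  pet: THREE.Object3D,
): Promise<void> {
  // Hit-test source: rays cast from the center of the viewer's screen.
  const viewerSpace = await session.requestReferenceSpace("viewer");
  const hitTestSource = await session.requestHitTestSource!({ space: viewerSpace });
  const refSpace = renderer.xr.getReferenceSpace()!;

  // Jim taps the screen: drop Jojo at the reticle, at real-world scale.
  session.addEventListener("select", () => {
    if (reticle.visible) {
      pet.position.copy(reticle.position);
      pet.visible = true;
    }
  });

  renderer.setAnimationLoop((_time: number, frame?: XRFrame) => {
    if (frame) {
      const hits = frame.getHitTestResults(hitTestSource);
      if (hits.length > 0) {
        // Snap the reticle to the nearest detected surface (the floor).
        const pose = hits[0].getPose(refSpace)!;
        reticle.position.setFromMatrixPosition(
          new THREE.Matrix4().fromArray(pose.transform.matrix),
        );
        reticle.visible = true;
      } else {
        reticle.visible = false;
      }
    }
    renderer.render(scene, camera);
  });
}
```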

 

I realized the user flow would work differently for desktop, assuming most desktops do not have a back-facing camera.

As the user clicks on the "3D" tile in the image gallery, a modal appears showing a 3D rendering of a generic living room. On the floor appears a volumetric video of Jojo at rest, at his actual size in relation to common living room furnishings. The user clicks the Treat or Trick buttons to interact with Jojo before returning to his profile to make an appointment with the shelter.
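A minimal sketch of how that fallback might be assembled in three.js, assuming hypothetical room.glb and jojo.glb assets authored in real-world meters:

```ts
import * as THREE from "three";
import { GLTFLoader } from "three/examples/jsm/loaders/GLTFLoader.js";

// Desktop fallback: instead of a live camera feed, render the pet at
// real-world scale inside a generic 3D living room.
async function openRoomModal(canvas: HTMLCanvasElement): Promise<void> {
  const renderer = new THREE.WebGLRenderer({ canvas, antialias: true });
  const scene = new THREE.Scene();
  const camera = new THREE.PerspectiveCamera(
    50, canvas.clientWidth / canvas.clientHeight, 0.1, 20);
  camera.position.set(0, 1.6, 2.5); // roughly eye height, a step back

  const loader = new GLTFLoader();
  const [room, pet] = await Promise.all([
    loader.loadAsync("room.glb"), // generic living room for size reference
    loader.loadAsync("jojo.glb"), // volumetric capture of the pet
  ]);
  scene.add(room.scene, pet.scene);
  scene.add(new THREE.HemisphereLight(0xffffff, 0x444444, 1));

  renderer.setAnimationLoop(() => renderer.render(scene, camera));
}
```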

 

Testing out WebAR integration

Before progressing further into prototyping, I needed to validate my technical assumptions and constraints. I downloaded the WebARonARCore app onto a Pixel phone and quickly realized that AR-enabled browsing is still very experimental. A few VR/AR developer friends I consulted confirmed that WebAR is at least a year away from mainstream access, much less volumetric video integrated into WebAR.

 

First challenge, unexpected

Neither WebAR nor volumetric video in AR-enabled browser experiences is production-ready at the moment, meaning my solution would not be implementable in the near future. To address this challenge, I needed to pivot on the AR activation itself.

Though I felt very strongly about the user being able to seamlessly experience AR pets within their browser, I had to shift to a native AR mobile app experience, which could still benefit users by enabling interactions with pets within the actual environment of their home.

 

Probing content creation requirements

In parallel, I needed to understand the content creation of volumetric videos. I assumed I would need a green screen studio (to isolate and superimpose an AR pet) as well as a camera capable of stereoscopic capture (to capture the depth and 3D dimensionality of the pet). I looked into volumetric video platforms like EF EVE and learned that the hardware requirements would likely comprise two Kinect for Windows v2 sensors, a Windows 10 PC, and two Kinect adapters for Windows. All told, this technology would cost around $2,500.

 

Second challenge, expected

Volumetric video creation is expensive—not just the hardware costs, but the time and labor costs of setup, capture, and post-production, especially for animal shelters, which are understaffed and under-resourced. I learned from user interviews that high-quality images of pets often came from a volunteer or staff member who already had the tools and knowledge to photograph independently. Therefore, I decided to address this challenge by assuming the content creation responsibility myself (in the hypothetical world of this design exercise). I could pilot portable videogrammetry for pets in local Seattle shelters, focusing on low-turnover pets: older, unhealthy, or black animals.

Announcing my new side business: a portable videogrammetry studio comprising a green screen and a depth camera rig. I would capture three "interactions" with each pet: at rest, enjoying a treat, and doing some sort of trick.

 

 

Prototyping: User testing and learning

I sketched out what the user might see on screen, which also helped me wrap my mind around how to structure this as a user test.

Third challenge, kind of expected

Now how to create a prototype of sufficient fidelity that would test my design decisions and assumptions? I didn't have the time or resources to actually build this out in Unity, but at minimum, I needed the user to experience a live camera feed and some approximation of the pet as volumetric video.

I got crafty and gathered supplies that included: two stuffed animals (one of which is remote-controllable), Play-Doh, cardstock, scissors, tape, ribbon, and of course, a smartphone.

User Testing

First, I warmed up users to AR by providing them a few minutes to explore apps like Just A Line and Houzz on a test phone. I wanted to subtly familiarize users with moving themselves and AR objects around in space.

My two goals for user testing:

  1. Assess whether the user felt differently about the pet based on pictures or volumetric video

  2. Understand any gaps or difficulties experienced in the AR activation flow

I guided users through first looking at a pet profile, "activating" AR, interacting with the pet via "volumetric video", before finally returning to the pet profile. At each step, I asked them to articulate aloud what they thought, felt, and did. Simultaneously, in response to their actions, I swapped out UI elements on-screen, turned on the camera, and acted out the "AR" dog interactions.

Key Takeaways

  • The "Meet in 3D" CTA did not set clear user expectations for the AR activation. Users interpreted it as a 360 video panning interaction or as a 3D pet model to rotate in 360.

  • There was confusion around the Tap to Place message and reticle interaction.

  • Users were instantly excited as soon as the dog moved onscreen in the live camera.

  • Users wanted more tricks and interactions with the pet.

  • The Undo arrow button (for re-placing the reticle) caused confusion. Everyone interpreted it as a Back function to return to the pet profile.

  • As soon as users returned to the profile, they looked for shelter contact information to visit in-person.

I love this so much! Seeing the dog move and do things in response to me gave me the same rush as seeing a real dog on the street! I understand a little more what it’s like to be with him, better than just pictures. I feel connected to him now.
— User testing participant
 

Iterating in Wireframes

Based on the user testing, I revised the wireframes to set clearer expectations. I explored a new iconographic treatment that persists across the pet gallery. On closer examination of the WebAR reticle interaction, I realized I needed to break that step down further. I also moved the undo/replace button, under the assumption that the user does not need to re-do plane detection. If I were to take this back into user testing, I would ideally validate these decisions with a functional build on a phone as well.

 

 

Conclusions and Looking Ahead

Ultimately, I was pleasantly surprised by how well my prototype delivered on my outlined goals:

Goal 1  |  Utilize VR/AR to shorten the adoption funnel from interest to action (from looking at a pet online to visiting it live)

Users who looked at only photo/video galleries expressed interest in browsing more pet profiles; in contrast, users who interacted with the volumetric video pets were motivated to take the next step of contacting the shelter.

Goal 2  |  Enable immersive interactions between an adopter and a prospective pet 

Every user talked about how much more "connected" and "invested" they felt in the pet. Users intellectually knew that the volumetric video was neither real nor live, but the immersion of AR interactions in their physical environment overrode this.

Goal 3  |  Provide users with an immediate understanding of a pet's appearance and size

This goal was trickier to deliver on, since I used a stuffed dog as a stand-in for an actual adoptable pet. Looking ahead, I would work with my developers to build an app with actual volumetric video to test this goal.

Goal 4  |  Increase user awareness of and empathetic connection towards misunderstood pets who are older, black, or unhealthy

Given the limits of my prototyping materials at this early stage, I would plan to test with volumetric video of an animal from this category in the next phase.

My experience with this design exercise hewed very closely to how I design VR/AR experiences with and for my clients in base reality—speak to the North Star of your vision, but build the foundation of what is actually possible now.