Know Before You Go

By Neeraj Dharmadhikari & Arthur Tham

An Application Design for COVID-19 Information

Project in User Interface Design & Evaluation – Fall 2020 (INF231)


  • Overview of the Problem: According to the Centers for Disease Control and Prevention website, as of December 2020 there were 15 million cases of COVID-19 and around 285 thousand COVID-19-related deaths in the United States. Consequently, COVID-19 restrictions recommend that people stay indoors to help stop the spread of the virus; however, this poses new challenges for everyday tasks such as buying food, groceries, medicine, and other essential items. People may also choose to step outside their homes for other reasons, yet they may not be equipped with proper knowledge of the pandemic restrictions at the locations they want to visit.

  • Our Application: We set out to learn what information people want to know before going outside during the pandemic, and then created an application prototype tailored to those needs, one that helps people find the information they want before they go somewhere. Introducing Know Before You Go...

Human-Centered Design

Our approach to designing the prototype followed Human-Centered Design (HCD), a process centered on understanding people's wants and needs, with constant involvement from testers, survey participants, and our design team.

We started by designing a survey to assess the needs of our target audience: smartphone users aged 18-50 who want or need to go out but are concerned about the pandemic.

After learning about their experiences and perspectives, we created workflow diagrams and scenarios that these people might encounter, focusing on how they could quickly access the information they need. We converted our diagrams into paper mockups and used rapid low-fidelity prototyping to evolve our design.

With our peers’ feedback, we iterated on our design using Marvel and Figma to create interactive prototypes. Finally, we performed usability testing with a few people using the think-aloud protocol and observation via remote calling. We took our observations from the testing sessions to iteratively improve our design.

Formative Stage

In our formative stage, we used a Google Form survey that took around 5 to 10 minutes to complete. It consisted of four main sections:

Demographic Information: This is where we asked the participants for their name, age, and current location.
Experiences During the Pandemic: This is where we asked the participants about their comfort levels when visiting locations during the pandemic and the factors that influence those visits. We asked how important it is to them that certain restrictions exist and whether they would make decisions based on specific factors or information.
Situational Questions: This is where we provided the participants with scenarios and asked what they would do if they experienced a similar scenario. This is also where we asked them directly whether they would benefit from an application that provides information to help them avoid scenarios they do not want to be in.
Raffle Information: This is where the participants opted in to the $10 Starbucks gift card raffle that we ran at the conclusion of the open survey period.

We reached out to our contacts and received 48 responses, with respondents aged 18-29 and mostly living in California. From these responses, we observed the following:


Percentage of people (n=48) claiming that they...

Are uncomfortable hanging out with more than 6 people: 95.8%
Are uncomfortable being around someone not wearing a facemask: 66.7%
Are uncomfortable entering a location not requiring a facemask: 97.9%
Find it important to maintain 6 feet of physical distancing: 93.8%

Quotes From Survey



COVID-19 Related Policies: Based on the qualitative responses to our survey’s scenario questions, we learned how uncomfortable our participants are around people who do not wear facemasks, sometimes leading our participants to react involuntarily (e.g., having a panic attack). We also learned that participants wanted to know “what restaurants are not allowing outdoor seating” and “how often the location [is] sanitized.” We gathered information like this to design a component that shows whether a location enforces these restrictions and has features that help visitors feel safe.

Crowd Density: Many participants claimed to “leave [a location] if there are too many people” or stay away when they see crowded areas during the pandemic. They also want to know about the maximum capacity of various locations. This led us to design a component for crowd density, a measure of how many people are congregating in one location, to show our potential users how crowded a location is.

Hours: Many participants also found themselves unaware that various locations’ operating hours had changed due to the pandemic. One participant “couldn’t tell just from Google or Yelp whether it was open because they each conflicted with each other.” Another participant put it best: it is “more important to prove that [a given time] is for sure the hours of the business.” Thus, we decided to design a component that shows the updated hours of a location along with an indicator of when those hours were last updated, giving the user confidence that the hours are indeed up-to-date.
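To make these three components concrete, here is a minimal sketch of how the location details behind them might be modeled. Our prototype was built in Marvel and Figma rather than in code, so every type and field name below is hypothetical and only illustrates the kind of information each component would surface.

```typescript
// Hypothetical data model for the three location-detail components described
// above: COVID-19 policies, crowd density, and verified operating hours.
// All names are illustrative; the actual prototype was built in Figma, not code.

interface CovidPolicies {
  facemaskRequired: boolean;
  outdoorSeatingAvailable: boolean;
  sanitizedRegularly: boolean;   // e.g., "how often the location [is] sanitized"
  maxCapacity: number;           // posted maximum occupancy
}

type CrowdLevel = "low" | "moderate" | "high"; // how many people are congregating

interface OperatingHours {
  weekdayHours: string;          // e.g., "8:00 AM - 6:00 PM"
  weekendHours: string;
  lastUpdated: Date;             // drives the "last updated" trust indicator
}

interface LocationDetails {
  name: string;
  policies: CovidPolicies;
  crowdLevel: CrowdLevel;
  hours: OperatingHours;
}

// Example object the location-details screen could render from.
const exampleCafe: LocationDetails = {
  name: "Example Cafe",
  policies: {
    facemaskRequired: true,
    outdoorSeatingAvailable: true,
    sanitizedRegularly: true,
    maxCapacity: 25,
  },
  crowdLevel: "moderate",
  hours: {
    weekdayHours: "8:00 AM - 6:00 PM",
    weekendHours: "9:00 AM - 4:00 PM",
    lastUpdated: new Date("2020-12-01"),
  },
};
```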

Design Stage

Based on what we learned above, we designed two tasks that we believe reflect the needs of our participants:


1. You don't know the updated COVID-19 policies of two locations. Look up their COVID-19 policies and decide which location you would go to.
2. Explore nearby cafes in your area that offer outdoor seating. Which locations did you find? Which location would you go to based on their location details?

To map out our application’s workflow, we drew a diagram in which the items are our proposed screens and the arrows are the transitions between them. For each screen, we wrote notes on which features should go in it and prioritized them by importance.


Workflow diagram
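To give a concrete sense of what the diagram captures, here is a minimal sketch of the screen flow as a transition map. The five screen names come from our paper prototype (shown below); the exact set of transitions here is an assumption and may differ from the arrows in our diagram.

```typescript
// A minimal, illustrative encoding of the workflow diagram: screens are nodes,
// and each entry lists the screens reachable from it. The transitions shown
// are assumptions, not a verbatim copy of the diagram.

type Screen = "landing" | "search" | "results" | "explore" | "locationDetails";

const transitions: Record<Screen, Screen[]> = {
  landing: ["search", "explore"],
  search: ["results"],
  results: ["locationDetails"],
  explore: ["locationDetails"],
  locationDetails: ["results", "explore"], // back navigation
};

// Only allow navigation along arrows that exist in the diagram.
function canNavigate(from: Screen, to: Screen): boolean {
  return transitions[from].includes(to);
}

console.log(canNavigate("landing", "search"));         // true
console.log(canNavigate("search", "locationDetails"));  // false (must go via results)
```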

The screen with a location’s details on COVID-19 policies is the most important screen, so we worked on it first. We iterated on the screen’s design through rapid prototyping, first drawing our separate visions of the screens in 5 minutes, then meeting to discuss their important features. Eventually, we merged the concepts we liked into a single sketch per screen, for a total of five main screens:


Five Screens

Paper Prototype: From left to right: Landing screen, search screen, results screen, explore screen, and the location details screen.

Adding Functionality with Marvel
Next, we took photos of our sketched screens and uploaded them to the Marvel prototyping tool. This allowed us to add basic interactivity to our UI elements so that, when clicked, they let users traverse the flow structure we designed. We had two people in our design focus group test this prototype by asking them to complete our two tasks. From this review, we learned that it was difficult for them to tell which elements were buttons and which were labels, which made it hard to discover the functionality of our prototype.

CLICK HERE FOR OUR MARVEL PROTOTYPE

Transition to Figma
After discussing these reviews, we modified our sketches and reconstructed them in Figma as a higher-fidelity prototype. With Figma, we could add phone design elements to better distinguish buttons, labels, scroll bars, and more. We made sure that buttons stood out from other UI elements by enlarging them, adding a gradient effect inside them, and curving their edges. We also organized our content into symmetric, concise subsections, paying attention to colors and fonts. The result was a higher-fidelity prototype that we could give to our testers for evaluation.

CLICK HERE FOR OUR FIGMA PROTOTYPE OR FEEL FREE TO PLAY AROUND BELOW



Five Figma Screens

Evaluation Stage

Methods
We divided our evaluation into two rounds of two participants each, using a mix of Zoom and in-person interviews. We recorded the participants’ screens and observed their behavior by tracking their pointer movements and encouraging them to think aloud as they completed their tasks. After each task, we asked them which location they would go to and why. We ended by asking follow-up questions about any choices that differed from our expectations.

CLICK HERE FOR OUR INTERVIEW TEMPLATE

Population Sample
Our population sample consisted of smartphone users who did not participate in our survey and who have tasks that require them to go outside in some way. Our participants were aged 23-47 and currently use iPhones. We interviewed 4 people in total across two rounds of testing. We also made sure that they had access to a computer with a web browser and were willing to be recorded for our evaluation.

Findings
In our rounds of evaluation, we learned about feature discoverability, button placement, and the importance of making the overall functionality and purpose of our application clear. Some of our labels confused our testers, drawing their attention away from our application’s functionality. We also realized that some of our navigation was inconsistent, with some screens lacking buttons to undo actions. Nevertheless, we were glad that our participants were able to decide which location to go to (or not go to) and finish the tasks we provided.

Quotes From Our Evaluation Interviews