Stay Safe addresses the need for individuals to stay informed and aware of criminal activity in their surroundings, whether they are concerned about the safety of their own neighborhood, considering a new place to live, planning a trip to an unfamiliar area, or evaluating a potential real estate investment. Our focus for this portion of the project has been on building and testing a prototype to see how well our work and research hold up so far. We’ve taken the feedback on our wireframes and applied it to our prototype, and we’ve also given the project a more professional look rather than leaving placeholders everywhere. With this prototype, we conducted tests with potential users and gathered notes to help us understand what we’ve done well and what still needs improvement.
For our research, we created a test protocol intended to explore the ins and outs of the prototype and to gather information and impressions from our testers. We had 6 testers. Before any testing began, each participant completed a consent form that explained what we were researching; written consent was required before continuing to the tests. The form made clear that no personal information would be saved, that the test could be stopped at any point if the participant wished, and that our stored data would be cleared after the semester finished. At the start of each session, we explained what the test was about and its intended goals, and tried to get an idea of how knowledgeable the participant was on the subject. We asked if they’d used any crime apps before, whether they had a preferred one, and what they’d want to see from a crime app. Then we gave the users 5 tasks to complete, each asking them to achieve some goal within our prototype. These tasks were designed to explore different aspects of the prototype, to see how intuitive and clear each part of our system would be.
We had our users think aloud to get an idea of how a user might reason while using our app, since they don’t know the app’s full functionality the way the team members do. We checked whether they could finish each task and asked them to rate its difficulty on a scale from 1 to 5, with 1 being the hardest and 5 being the easiest. We then gathered any extra thoughts or notes about the task once they had completed it, and repeated this process for each task before ending the session with a debrief. The debrief served to gather any remaining feedback: what they liked and disliked about the prototype, anything confusing or unexpected they came across, features they expected but didn’t find, and their thoughts on what could be improved. Everything in this part of the project served to check what we’ve done well with our prototype and what could be improved in the future; tests like these are vital to ensuring you’ve created the best project you can.
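To keep notes comparable across sessions, every task attempt was recorded the same way. The sketch below shows one plausible shape for such a record; the class and field names are illustrative assumptions, not the actual structure of our spreadsheet.

```python
from dataclasses import dataclass, field

@dataclass
class TaskResult:
    """One participant's attempt at one task (illustrative record)."""
    task_id: int        # which of the 5 tasks, 1 through 5
    completed: bool     # did the participant finish the task?
    difficulty: int     # 1-5 rating: 1 = hardest, 5 = easiest
    notes: list[str] = field(default_factory=list)  # think-aloud remarks

# Example: a tester finished task 3 but found it fairly hard.
result = TaskResult(task_id=3, completed=True, difficulty=2,
                    notes=["Got lost in the menus before finding the page."])
```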
From each of our tasks, we collected data on a few different things: whether or not the user completed the task, what difficulty score they gave it, and any extra notes. Four of our 5 tasks had a 100% completion rate. The remaining task had a 66% completion rate, with 4 of our 6 users managing to complete it and the other 2 failing to do so. The users who failed the task expressed confusion with the menus, which led to them getting lost. Once a user finished a task, we collected a difficulty rating on the 1-5 scale. Users found most of the tasks easy to complete, with 4 out of 5 and 5 out of 5 being the most common scores, aside from occasional outliers. The task with the 66% completion rate stood out, however, receiving difficulty scores of 2 and 3, reflecting the challenges users faced.
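As a worked example of how these figures fall out, here is a minimal sketch that computes a task’s completion rate and mean difficulty, reusing the hypothetical TaskResult record from the sketch above. The sample ratings are invented, but the 4-of-6 arithmetic matches the task described here.

```python
def summarize_task(results: list[TaskResult]) -> tuple[float, float]:
    """Return (completion rate, mean difficulty) for one task."""
    completed = sum(1 for r in results if r.completed)
    rate = completed / len(results)
    mean_difficulty = sum(r.difficulty for r in results) / len(results)
    return rate, mean_difficulty

# Hypothetical ratings for the task that 4 of 6 users completed:
sample = [
    TaskResult(5, True, 4), TaskResult(5, True, 5), TaskResult(5, True, 4),
    TaskResult(5, True, 3), TaskResult(5, False, 2), TaskResult(5, False, 3),
]
rate, diff = summarize_task(sample)
print(f"{rate * 100:.1f}% completion, mean difficulty {diff:.1f}")
# -> 66.7% completion, mean difficulty 3.5
```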
Overall, user feedback was positive, with some suggestions for improvement. Whenever there was a dip in scores or completion rates, the issue users reported tended to be the same one, which helped us tell recurring problems apart from one-off flukes. Trends in our quantitative data showed that tasks were generally completed successfully and rated as easy to complete, with a couple of exceptions. When there was confusion about something, it tended to lead to a lower score. As for qualitative trends, we received praise for several features and for the visual clarity of some areas, along with a few key areas to improve: renaming our “starred” reports section to “favorites” or “important”, alerts and icons lacking the interactivity users expected, and some unclear UI elements. Some of these complaints were due to limitations of the prototype itself, so those areas are only a problem because of the project’s temporary constraints. The debriefs conducted at the end of each test gave us a handful of answers and suggestions to draw on for potential improvements.
Our full notes are available in a spreadsheet that we created for this assignment.
Throughout the process, we identified which features users enjoyed and which parts caused confusion or needed further modification. This phase of the project emphasized refining the design and transitions of the prototype so that it worked smoothly as a potential app, and then gathering feedback from users who may or may not have had experience with a crime alert app. Looking back on the ratings we received, most people understood the prototype and were able to navigate their way through it, while a select few got confused by the app’s functionality. After carefully reviewing the notes we took during the research process, we went back to the prototype to understand why users felt confused or lost on certain tasks. Users recommended making the delete option on the starred reports more user friendly, and making it easier to navigate to the starred reports page in the first place. Another critique was to give the user an easier way to alert their friends and family from the map. Finally, users suggested making the notification display page clearer, as navigating from it to certain features was confusing. These were the key areas highlighted in the user feedback; the rest of our app received positive notes or little feedback overall, so those aspects of the design can stay the same. The insights we gained from this phase helped us shape a better experience with our app, and we believe the notes and recommendations from our tests will lead to a much better final product.
Our prototype is also available to view, to show how we put everything together. We have not yet implemented the feedback from our tests, so the available version is the one finished right before the user tests.
Some limitations affected the findings and feedback we could gather. First, the users testing our prototype were students with strong backgrounds in technology. Going forward, getting feedback from users with different backgrounds would be more beneficial for finding out whether StaySafe is user friendly; for example, a wider range of ages and levels of experience with technology would strengthen the research. We also found that almost none of our testers had ever properly used a crime alert app before, and the average user’s familiarity will vary depending on the type of project you’re working on. Additionally, we were able to gather six users to test our prototype, while some groups could only get 5 because not enough people signed up. It’s a little unfortunate if that ends up being the case with your user tests, but it’s out of your hands, so you just have to do your best with what you’ve got.