For our native Hub app, Product and UX conducted multiple rounds of user testing to validate features through monthly research groups with our Cast members. In January of this year, we used a card sorting exercise to understand how participants grouped features, then asked them to prioritize those features through a purchase exercise. In February's feedback sessions, we gathered insights on the app's information architecture and evaluated it against our proposed app navigation. This valuable feedback from the users of our products ensures that we build on data, not assumptions, and that our app is intuitive and enjoyable to use.
I will showcase the tree test study below; all research, including card sorting, was conducted using Optimal Workshop's suite of user testing tools.
We presented four viable versions of our app's structure and gave users "tasks" to identify the location of specific features within each. Using Treejack's click-through tree tests, we were able to see the path a user took to find an item (or where they got lost along the way) as well as how long it took them to find the correct item. Once the click-through exercises were completed, we facilitated a guided conversation in which we asked the group which of the four versions was their favorite. We then showed them wireframes of the designs and asked them to share their thoughts, now having seen low-fidelity versions of the app's structure.
This data helped us determine which IA best allowed users to complete specific tasks successfully before we took the app any further. It also opened up a dialogue in which users shared additional context, giving us key insights and feedback.
Click-through exercises for each IA were presented in a tree format, and users were given 17 "tasks" to find certain items, for example: "Go view your most recent paystub." The user would then click through the levels of that app version to locate the item. We were able to see how they arrived at the correct location, how direct their path was, and how quickly they found it. Four different IAs were presented; this was version 1.
I presented wireframes that gave the participants a more informed view of where the items were placed in each IA. These were shown after the groups had completed all the tree tests, to see whether the options they had found easy (or difficult) to locate made more sense when viewed in a low-fidelity version of the app.
Snapshots of the Report Out from this session, including objectives, methods with takeaways, and next steps.