For our native Hub app, Product and UX conducted multiple rounds of user testing to validate features through monthly research groups with our Cast members. In January of this year we used a card sorting exercise to understand groupings, and then asked participants to prioritize features using a purchase exercise. In February's feedback sessions, we gathered insights on the app's information architecture and evaluated it against our proposed app navigation. This valuable feedback from the users of our products ensures that we build on data, not assumptions, and that our app is intuitive and enjoyable to use.

I will be showcasing the tree test study below, but all research, including card sorting, was conducted using Optimal Workshop's suite of user testing tools. 
We presented 4 viable versions of our app's structure and gave the users "tasks" to identify the location of specific features within each one. Using Treejack's click-through tree tests, we were able to see the path a user took to find the item (or whether they got lost along the way) as well as how long it took them to find the correct item. Once the click-through exercises were completed, we facilitated a guided conversation where we asked the group which of the four was their favorite. We then showed them wireframes of the designs and asked them to share their thoughts now that they had seen low-fidelity versions of the app's structure.

This valuable data helped us determine which IA best allowed users to complete specific tasks successfully before we took the app any further. It also opened up a dialogue in which users shared additional information, giving us key insights and feedback.
Treejack
Click-through exercises for each IA were presented in a tree format, and the users were given 17 "tasks" to find certain items, for example, "Go view your most recent paystub". The user would then click through the levels of that app version to locate the item. We were able to see how they arrived at the correct location, how direct their path was, and how quickly they found it. Four different IAs were presented, and this was version 1.
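As a rough illustration of how these tree-test metrics work, the sketch below tabulates success rate, directness, and time-to-find from recorded click paths. Treejack reports all of this for you; the data structures and names here are hypothetical and are not Optimal Workshop's API.

```python
# Hypothetical sketch: how success, directness, and time-to-find could be
# tallied from tree-test click paths. Illustrative only.
from statistics import median

# Each result: the nodes a participant clicked, whether they chose the
# correct destination, whether they skipped the task, and seconds taken.
results = [
    {"path": ["Home", "Pay", "Paystubs"], "correct": True, "skipped": False, "seconds": 14},
    {"path": ["Home", "Profile", "Pay", "Paystubs"], "correct": True, "skipped": False, "seconds": 31},
    {"path": ["Home", "Schedule"], "correct": False, "skipped": True, "seconds": 48},
]

shortest_path_len = 3  # e.g. Home > Pay > Paystubs for "view your most recent paystub"

completed = [r for r in results if not r["skipped"]]
successes = [r for r in completed if r["correct"]]

success_rate = len(successes) / len(results)
# "Direct" success: found the item without backtracking or detours.
direct = [r for r in successes if len(r["path"]) == shortest_path_len]
directness = len(direct) / len(successes) if successes else 0.0
time_to_find = median(r["seconds"] for r in successes) if successes else None

print(f"Success rate:   {success_rate:.0%}")
print(f"Directness:     {directness:.0%}")
print(f"Median seconds: {time_to_find}")
```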

Data from this particular IA test shows that slightly more than half of the users were able to fully find what they needed in the app. In this version, items were not placed in locations that many users found intuitive, even though those placements had made sense to us.

This was an example of a reasonably well-placed feature item: 88% of users ended up finding it, and no users skipped it out of frustration. This led us to determine that most users would find this a suitable place for the feature to live. We also gathered insights on why certain users had trouble navigating to this location and what their pain points were.

This is what that task's journey looked like in the wireframes we showed participants after the IA tests were completed.

Some views of where users looked while attempting to locate the featured task. This helps to illustrate the directness of certain task locations and also lets us know where else users might look for this info. The bolded sections show where the item actually lived; in some cases the feature can be accessed in two areas of the app.

Wireframes
We presented wireframes that gave the participants a more informed view of where the items were placed for each IA. These were shown after the groups had completed all the tree tests, to see whether the items they had an easy time finding, or a difficult time finding, made more sense once seen in a low-fidelity version of the app.
Report Out
Snapshots of the Report Out from this session, including objectives, methods with takeaways, and next steps.