I felt like my site navigation and categories made sense, but after conducting a few other IA studies, I know that's not always the case. I'd never assessed my site's IA before, and both the quantity and type of content had changed drastically since I built it in 2017. Originally, my site served as a landing page for prospective university employers while I searched for a professorship. Now it serves a variety of researchers, business people, designers, project managers, and writers (hello to all of you, by the way!). I needed to check my assumptions about how designers, academics, and other people categorize specific types of information on a website.
I was constrained by Optimal Workshop's 20-card and 10-participant limits (and no screener or post-study questions!). Ideally, I would triangulate the card sorts with qualitative data, like open-ended questions (asking participants why they assigned certain cards, why they chose specific words for categories, etc.) and interviews (asking participants how they conceptualize information).
My original IA:
- Home
  - Purpose of my website
- Academic Research
  - Conferences
  - Mentorship
  - Research topics
  - Research awards
  - Guest podcast episodes
  - Guest blog posts
- Projects
  - UX & Web Design
  - Community Volunteering
  - Service Learning
- Creative
  - Graphic designs
  - 3D designs
  - Photography
  - Videography
- Teaching
  - Classes I teach
  - Teaching honors and awards
- Blog
  - Birbs
- Contact
  - LinkedIn, Email, etc.
I decided to begin with an open card sort since I was starting from square one.
The Open Card Sort.
Setup.
My items were shortened summaries of key content from my most important pages. I did not test items like "birbs" or "blog," for example, although maybe I should someday. Some examples included:
"Infographics I’ve made for companies’ marketing and outreach projects."
"Link to a guest blog post I wrote on a professional psychology site."
"A list of my psychology conference presentations."
I ended up using all 20 of the cards OW allowed. As you'll see in a moment, it was overkill. Lesson learned.
Data collection.
I posted the link to my study in numerous design Slack channels and Facebook groups.
Results.
After 24 hours, I ended up with 9 participants and (wait for it) a 60% attrition rate! My poor, overwhelmed participants!
Despite the flailing, my faithful participants managed to sort the 20 items into around 5 categories all on their own.
The dendrogram made it clear that three of the five categories corresponded to designs, academics, and about/links. This matched my mental model of the content. Closer inspection confirmed that participants had chosen words that (mostly) matched my navigation menu titles, such as "Academics" and "Volunteering." So that was a good sign.
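Optimal Workshop draws the dendrogram for you, but the underlying idea is simple: cards that participants frequently group together sit close in the tree. Here's a minimal sketch of that computation, assuming each participant's sort is exported as a dict of category labels to card lists (the cards and sorts below are made up for illustration):

```python
from itertools import combinations

import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

# Hypothetical export: one dict per participant, category label -> cards.
cards = ["Conference talks", "Infographics", "Guest blog post"]
sorts = [
    {"Academics": ["Conference talks"],
     "Designs": ["Infographics", "Guest blog post"]},
    {"Research": ["Conference talks", "Guest blog post"],
     "Portfolio": ["Infographics"]},
]

# Count how often each pair of cards landed in the same category.
idx = {card: i for i, card in enumerate(cards)}
co = np.zeros((len(cards), len(cards)))
for sort in sorts:
    for group in sort.values():
        for a, b in combinations(group, 2):
            co[idx[a], idx[b]] += 1
            co[idx[b], idx[a]] += 1

# Cards grouped together often are "close"; convert counts to distances
# and feed the condensed upper triangle to average-linkage clustering.
dist = 1 - co / len(sorts)
condensed = dist[np.triu_indices(len(cards), k=1)]
dendrogram(linkage(condensed, method="average"), labels=cards)
plt.show()
```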
However, everyone used "Designs" or "Portfolio" in place of what I had labeled "Creative." I was also surprised to see some participants group my teaching career under community service, haha. It's fine. *cries in professor*
I merged lexically similar categories ("academics" vs "academia"). After careful consideration and some discussion, I also merged some semantically similar categories ("research" vs "studies"). I tried to avoid this since the lexical choices should represent what users expect to find, but these cases were few. This bolstered the weights in the standardization grid, allowing me to better assess overlap.
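If you're curious what that cleanup looks like in practice, here's a rough sketch. The synonym map is hypothetical and stands in for the semantic merges I decided on by hand; lexical variants collapse automatically through normalization:

```python
# Hypothetical synonym map recording the manual semantic merges.
MERGES = {"academia": "academics", "studies": "research"}

def normalize(label: str) -> str:
    """Collapse case/whitespace variants, then apply manual merges."""
    key = label.strip().lower()
    return MERGES.get(key, key)

raw_labels = ["Academics", "academia ", "Research", "Studies"]
print([normalize(label) for label in raw_labels])
# -> ['academics', 'academics', 'research', 'research']
```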
Takeaways.
I learned in the open card sort that some of the words on my items were "leading." A noticeable portion of participants used a lexical matching strategy (i.e., if the word "psychology" was in the item somewhere, they automatically grouped the item into a "psychology" category even if the content was semantically unrelated). In reality, users visiting my website may already have an idea of what they're looking for, so I'm less concerned about it than I would be for a site for which people are expected to be complete novices.
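One rough way to flag placements that might be lexical rather than semantic is to check whether a word from the category label literally appears in the card text. This heuristic is my own improvisation, not something Optimal Workshop reports:

```python
def looks_lexical(card_text: str, category: str) -> bool:
    """Flag placements where a category word appears verbatim in the card."""
    card_words = set(card_text.lower().split())
    return any(word in card_words for word in category.lower().split())

# One of my actual items, with a category a participant might have used:
print(looks_lexical("A list of my psychology conference presentations",
                    "Psychology"))  # True -> possibly a lexical match
```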
After corrections, three strong categories remained: About/links, designs, and volunteering. Although "Academia" and "Research" were also popular, the items assigned to these categories varied extensively. Since the three main categories had little variation, I set them and their items aside. Next, I wanted to figure out how participants would sort the fuzzier items distributed across the remaining categories with a closed card sort.
The Closed Card Sort.
Setup.
To reduce the massive attrition rate and to simplify the task, I made the following changes to the new set of cards:
I removed semantically duplicated items ("Psychology research topics" vs "Research studies I've presented")
I tried to eliminate leading words without oversimplifying the items to the point of abstraction or ignoring the importance of context. Research is research, for example. The word had to stay, and participants in the open card sort had shown me that they felt it belonged high up in the architecture.
I reduced the item complexity for easier comprehension. As an example, I changed one of my items from "Contributions I've made as a website content writer and designer on a data literacy project" to "Content writing for a data literacy project."
I ended up with 10 cards instead of 20. Each one represented a fuzzy item from the open card sort. I chose the following categories based on the open card sort data:
Teaching
Web design
Honors and awards
Academic research
Service (I chose this word instead of volunteering because some of the remaining items could be considered service, but not volunteering... And in retrospect, maybe I should have left it as volunteering.)
Data collection.
I distributed the study in a different set of Facebook groups and Slack channels to reach fresh participants and add user variety (and so that I wouldn't irritate the same people too much). In a perfect world, I'd invite some participants back for a re-test to see what they think of the new arrangement.
Results.
The attrition rate dropped significantly (yay!).
Immediately, there was high agreement on several items that had been fuzzy in the open card sort; they settled cleanly under academic research, teaching, and honors/awards.
All but three of the 10 cards found reliable homes in one of the three main categories.
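By "reliable home," I mean the card's most popular category cleared a threshold I was comfortable with. Here's a quick sketch of that check, with hypothetical placements and an assumed 70% cutoff:

```python
from collections import Counter

# Hypothetical closed-sort results: card -> chosen category per participant.
placements = {
    "Conference presentations": ["Academic research"] * 8 + ["Teaching"] * 2,
    "Guest blog post": ["Teaching", "Service", "Academic research",
                        "Service", "Teaching", "Academic research",
                        "Service", "Teaching", "Teaching", "Service"],
}

THRESHOLD = 0.7  # assumed cutoff for calling a card "settled"
for card, cats in placements.items():
    category, count = Counter(cats).most_common(1)[0]
    agreement = count / len(cats)
    verdict = "settled" if agreement >= THRESHOLD else "still fuzzy"
    print(f"{card}: {category} ({agreement:.0%}) -> {verdict}")
```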
My decision to use "service" as a category did not serve me well (see what I did there?). Another possibility was that none of the items fit strongly into that category. Perhaps both were true, but I lean toward the latter.
In the open card sort, 50% of participants grouped the Service-Learning item into "Service" and 50% assigned it to other categories. However, when categories were provided in the closed card sort, participants grouped Service-Learning under "Teaching." This shows the potential power of context! It also contradicted where I had originally housed the item, which had always lived under "Service."
So which items were still fuzzy? The same items that had been giving me trouble before starting this whole study! Mentorship and guest blog posts/podcast episodes remained enigmatic. One participant even lamented about how tough it was to categorize them.
"This is hard! Guest blog/podcast content and service-learning projects could be either service or teaching, I think. Data literacy content could also go a few places."
Takeaways.
Web design seems similar enough to Design to be subsumed under it, but I will have to test that assumption. Since mentorship and the guest blog posts/podcasts are still fuzzy, I assigned them to the categories where they ranked highest in the closed card sort.
My next step is to run a tree test to assess the fit of this architecture in context!