
Resilia

 

Opening Academy Doors

Enabling Foundations and Nonprofits to build capacity and amplify their community impact

 
 
 

Our Team

I delivered this hands-on work while onboarding to the company: getting to know the team, their processes, and our users, and completing HR training.

The core team was the Knowledge Pod:

  • I led discovery, design, and user testing in ~5 weeks

  • I partnered with the VP of Product, an Engineering Manager, and 3 Engineers

Other contributors:

  • A Researcher who shared our ‘Voice of the User’ insights

  • A Designer for design and user insights

  • Customer Success team for user and design insights

  • 13 users on usertesting.com

 

Our Why

Aligning business, customer, and user challenges

Business Goals: Strengthen our core offerings by increasing Academy value (engagement) and reducing direct 1:1 coaching sessions (cost) while maintaining our user-centric, nonprofit-first philosophy.

Customer Challenges: Foundations purchased Resilia so their Nonprofits could build capacity and ultimately amplify their impact, and they judged efficacy through Nonprofit usage data.

User Challenges: Nonprofits are often very busy and self-taught, so libraries like ours provide actionable tools and resources that enable them to become experts in grant writing, fundraising, board management, staffing, and more, so they can do more with fewer resources. User feedback suggested they trusted our Academy content but found it wasn’t robust or advanced enough, often because they couldn’t find what they needed.

 

Research and Discovery Overview

To understand, empathize, and align

  • Reviewed the ‘Voice of the User’ insights and watched Gong recordings of calls between users and Customer Success

  • Audited the existing product, identifying opportunities and mapping our user journey

  • Conducted competitor research, creating accounts with other online learning tools

  • Defined success, the problem statement, scope, terminology, technical restrictions, and timeline

Definition of success

  • Increase user engagement by offering easier ways to browse and search so users can find relevant, personalized content

  • Reduce cost by promoting Community features for peer support rather than 1:1 Coaching

Success will be measured by

  • Increase in HVAs from 55% to 75%

  • Increase in new content views by 30/month

 

Discovery Details

User insights, sentiment, and goals

We summarized the ‘Voice of the User’ insights, Gong call recordings, and conversations with internal team members, and identified that the user pain points were that our Academy was:

  • Not robust enough – users couldn’t find what they needed quickly, so they assumed it wasn’t there and went to Coaching or Community, or left to look elsewhere

  • Not advanced enough – clarified that ‘advanced’ applied to the organization, not the individual

  • Not personalized enough – users wanted us to recommend content based on their organization size, roles, priorities, and usage history, and they wanted to know what other people were viewing

Competitor analysis

  • Tabs: All offered tab variations for All, Mine, and Recommended

  • Views: All offered advanced search, filter, sort, and list vs card views, categorization, and similarities in content card information and size

  • Categories: All offered sections for recent activity and general engagement, popularity, newness, roles, topics, type, level, interests, profile, and survey responses

Feature audit and technical review

  • How are our users learning outside of our product – webinars, white papers, checklists, local grantee websites, newsletters, resources from their board or colleagues, LinkedIn Learning, motivational interview trainings, CEUsforless.com, and Google

  • What user challenges exist within our product – gated access to the full library, filters only available after a user searches, and unclear visual hierarchy caused by inconsistent styling and components

  • Who authors content and how frequently – content is uploaded weekly by instructional designers who are nonprofit experts

  • What technical constraints exist – how is content tagged, and what is the level of effort to add more tags

 
 

Before our Changes

My Hypothesis

If we made the following changes, our user engagement and satisfaction would increase because our users would feel our content is more…

  • robust if we “Opened the Doors,” i.e., displayed all resources with visibility options (search, filter, sort, display, featured)

  • personalized if we sectioned resources into 3 Tabs (all, interacted with, suggestions)

  • advanced if we added a filter for Organization Level and tagged resources respectively

Bonus Questions

Would our users benefit from “marketing moments” to encourage content and “Community” engagement rather than 1:1 Coaching?

Would our users benefit from modifications to the content cards; why or why not?

Academy Home: Option to pick up where users left off; but no visibility into all resources

Scrolling Down: Highlights content trends but not directly personalized

 

Design, Test, and Build Overview

Ideating, prototyping, and testing iteratively

  • Low-fi wireframes: initial scoping

  • Design System: Synced 4 design libraries to my Figma file and consulted with engineering often

  • Mid-fi designs and Copy: Iterated based on team feedback

  • 2 rounds of A/B testing: Created prototypes, wrote scripts, synthesized and shared findings with stakeholders, and iterated the designs

  • Engineering Scope: Met with Engineering Manager to align scope using the MoSCoW framework

  • High-fidelity Handoff: Iterated based on scope and delivered, ensuring design system alignment

  • Met weekly for VQA reviews until the work was fully developed

 
 

Starting Low Fidelity

Designing the:

Low-fidelity Wireframes

 

For initial internal concept validation and technical input:

  • Tabs: Primary and/or nested tabbed experience; tab label terminology?

  • Search/Filter: Various Search and Filter functionality and placement?

  • Personalization: Featured content, types, and their level of emphasis?

  • Content Cards: Change content card sizes and content?

  • Design system: What component options are available?

Tabs with nested search bar and filters, emphasize recommendations above all content, reduce content card size to fit more above the fold and offer scrolling

Highlight fewer topics, emphasize content stats, keep existing content cards offering the fewest changes to achieve goals.

Top-level search, nested filters; leverages existing content-type chips to highlight variations in content types and combines recommended and all content using different card patterns.

 

Increase fidelity and A/B Test - Round 1

A/B Testing - Round 1 Results:

Comparing 2 designs to gain team alignment

 

I created the test plan, script, prototype, and leveraged usertesting.com; View Prototype

  • Tabs: Starting with “My Library” was very confusing, but all users used tabs

  • Search/Filter: 66% preferred search and looked to the top of the page; 33% didn’t engage with the filter at all, but the instruction panel was blocking part of the search bar on Option 1

  • Filter Placement: 66% engaged with in-line filters over the top-level filter

  • Personalized: Users “loved” priorities, roles, new; No preference on content order

  • Marketing Moments: 33% found highlights distracting/confusing; 66% didn’t acknowledge them

  • Cards/Chips: Slight preference for unified, enclosed cards without colors; chips weren’t used

A/B Testing - Round 1:

Option 1 Designs

Tabs, in-line filters, display, and sort options; marketing moments, new content cards

Option 1 test showing the All Resources Tab selected

Option 1 test showing the My Library Tab selected

Option 1 test showing the Recommended Tab selected

 

A/B Testing - Round 1:

Option 2 Designs

Single page with top level drawer filters, marketing moments, and chips

Option 2 test showing the All Resources with sectioned content for personalization

Option 2 test showing the All Resources with the filter drawer open

 

A/B Testing - Round 2

A/B Testing - Round 2 Results:

2 designs to validate design decisions

“[The tabbed design] has more opportunities for personalization… Makes it easy to find a variety of resources” – user tester

After collecting feedback from users, the team, customer success, and design, I refined the designs and tested again to ensure we applied feedback correctly. View Prototype

  • Tabs: 100% understood the tabs, and 60% commented on liking the labels

  • Search/Filter: Top search with filters was preferred, but we needed to combine them into an ‘advanced search’ rather than 2 options (search + filter); 70% didn’t engage with the filter

  • Filter Placement: Section filters were no longer necessary

  • Personalized: 80% said “tabs felt more personalized”; given the option to move tabs into the left nav, 40% looked there too; 1 person mentioned liking the “request content” option

  • Copy: Supporting copy was helpful; 60% read it out loud

  • Content Cards: No comments, so we reduced them from scope to circle back later

  • Marketing Moments: Muted highlights were no longer distracting but didn’t bring much value either, so we removed them since they weren’t a priority for our goal

This allowed us to prioritize requirements and engineering effort. We descoped the content card changes, in-line section filters, and content request.

A/B Testing - Round 2:

Option 1 Designs

Tabs, Search + Filter vs in-line

Option 1 screenshot from the ‘All Resources’ page prototype that was part of round 2 user testing

Option 1 screenshot from the ‘My Library’ page prototype that was part of round 2 user testing

Option 1 screenshot from ‘Recommendations’ page prototype that was part of round 2 user testing

 

A/B Testing - Round 2:

Option 2 Designs

Single page, Drawer vs In-Line Filters

Option 2 screenshot from ‘All Resources’ page prototype that was part of round 2 user testing

Option 2 screenshot from ‘All Resources’ page prototype with the open ‘Filter’ that was part of round 2 user testing

 

Additional Search Variations

I created these options to gauge engineering’s level of effort compared to the designs we tested, so we could adjust scope if needed throughout the project.

 

Users often looked to the left nav, so when evaluating scope, putting the tabs in the left nav was a winning solution; however, the experience would have been broken by the Coaching tab, since we couldn’t align the Academy header and advanced search.

Users were already using a drawer like this when searching from the home page, so we debated leveraging this existing pattern; however, we wouldn’t have been able to include any of the nested filters, which were the main advantage of the advanced search.

Originally, the search was sticky; however, this increased engineering effort significantly, so it was descoped.

 

Sharing with Engineering

Key changes after collecting feedback and adjusting for technical limitations:

  • Aligning to user feedback/goals:

    • Displaying All Resources with Advanced Search solved for the “robustness”

    • Separate pages and focused, sectioned content solved for the “personalization”

    • Adding an Organization Level Filter solved for the “advanced” (we also paired this with a benchmarking assessment to gauge our users’ org level)

  • Internal alignment:

    • We aligned to our design system, adding hierarchy to the page

    • The ‘marketing moments’ and ‘content request’ were descoped, as they weren’t tied to our original KPIs/goals

  • Engineering alignment:

    • We reverted to the original cards to accommodate the timeline and effort

    • Tabs were moved into the left nav since we couldn’t move Coaching into the top tab, and engineering effort to add tabs was high

 

Academy Home: All Resources visible

Advanced Search menu active

Recommendations

A personal library

 

Outcomes

User Impact Narrative:

  • Product experience before: Imagine… you go to a library and instead of walking in, there are just a few books on a stand where a librarian is sitting and you have to tell the librarian what you need

  • Real life experience: Imagine… you go to a library, you walk in and have access to everything, you’re walking around, guided by signage, you have the option to ask a librarian for support if needed – endless possibilities for content discovery

  • New product experience: We digitized the real experience (referencing Jakob Nielsen’s usability heuristic ‘match between the system and the real world’)

What we accomplished:

  • Hit our Goals: Offered easier ways to browse and search, increased personalization (validated by testing), and minimized coaching; engagement increased 6% in the first month

  • Reinforced our reputation: Viewed as a user-centered, nonprofit-first platform

  • Established a process: Leveraged the MoSCoW method to prioritize engineering effort based on user impact, improving design handoff and enabling us to build a backlog of complete, ready features for future sprints

  • Increased user research capabilities: Sharing my question-writing framework and prototype-testing process template optimized the quality and speed of testing

  • More user-centric organization: Involving Customer Success/Rev Ops in our review cycles enabled them to ask for and provide useful customer/user insights

  • Bonds were formed: This was my first project with this Pod, as I was new to the org

Reflection

  • Discovery: I could have surveyed users to uncover the specific learning tools and resources they used at the start instead of waiting until the user testing phase

  • Testing: I learned user testers didn’t know they could drag and drop the usertesting.com instruction panel, and users prefer Advanced Search to Search + Filter

  • Collaboration: I could have estimated each enhancement with engineering earlier in the process to prevent descoping during development

  • Metrics: I should have prioritized creating a dashboard of our key metrics in Amplitude to track ongoing usage data to speak to the impact of our changes quantitatively

  • Roadmap: Having an active, fully ready backlog for engineering, plus suggestions for further Academy improvements to discover, empowered our team and improved collaboration

    • Future Discovery Example: We aim to offer a Content Request Form to ensure we’re providing resources that our users want and get value from