UX / UI Design · Woolworths · 2024–2025

Range & Space
— ART

Transforming a critical but unscalable tool into a unified web platform that empowers Woolworths Range Analysts to conduct category reviews faster, more independently and with clearer outcomes for Category Managers.

Role UX / UI Designer
Team Design team of 2
Duration April 2024 — December 2025
Status Designs complete, development initiated
Prototype View in Figma →
~$46bn
Annual sales influenced through range reviews
~$800m
Lost sales addressed through better ranging
1–2%
Typical category sales uplift from data-driven ranging
280
Range reviews per year underpinned by the platform
About this project

A centralised web application replacing Woolworths' fragmented Google Sheets-based Assortment Recommender Tool (ART), designed for Range Analysts to efficiently manage category range reviews across the 900+ store network. Through our UX process, we identified that users' core needs were to reduce manual effort and time spent on data validation, and gain a clearer, more confident way to communicate ranging decisions to Category Managers — all within one unified platform.

My role

UX / UI Designer — end-to-end across the full project.

User Research, Jobs-to-be-Done, Persona Development, Information Architecture, Wireframing, Prototyping, User Testing and Developer Handoff. AI-assisted design using Claude — accelerating research synthesis, conceptual design, wireframing, rapid prototyping, synthetic testing and documentation throughout.

The problem

A critical tool that was no longer fit for purpose

Range Analysts at Woolworths are responsible for making critical ranging decisions that determine product assortment across a 900+ store network — but the tools and processes supporting this work, most critically the Google Sheets-based ART V10, were making it harder, not easier.

V10 was fundamentally unscalable. Reviews took too long, a high barrier to entry limited who could run them, and critical pain points — manual data validation consuming the majority of setup time, poor error troubleshooting, a disjointed workflow and no effective way to communicate ranging rationale to Category Managers — created a significant bottleneck.

How do we transform a tool that only experienced analysts could navigate into one that supports the full team — without compromising the depth that power users rely on?

ART Google Sheet V10 — previous tool with fragmented tabs
ART Web Application — new unified platform
The solution

A unified platform built around analyst needs

The platform is built around a hub-and-spoke navigation model, with Run Summary as the central control point — replacing fragmented Google Sheets and Excel workflows.

Automated data health validation reduced hours of manual setup to minutes. Guided workflows with progressive disclosure served both novice and expert analysts. Integrated run summaries and change tracking gave analysts a clear way to communicate ranging rationale to Category Managers for the first time.

ART web application — Run Summary hub
My approach

How AI shaped my UX process

A comparison of the conventional UX process against the AI-assisted approach applied to this project.

The traditional way
Conventional UX delivery: Research & interviews → Manual synthesis → Wireframing & prototyping → Testing & design iteration → Handover
The new way
AI-accelerated UX delivery: Research & interviews → AI synthesis → AI-assisted wireframing & prototyping → Testing & rapid iteration → Handover
The new way in practice
01 — I used AI tools to accelerate key parts of the process — research synthesis, conceptual design, wireframing, prototyping and iteration — significantly reducing time on production tasks
02 — SMEs provided domain expertise and validated outputs at key stages throughout the project
03 — AI supported the execution while I led the strategy, design decisions and quality of the final output
The process

How I structured and delivered this project

01 — Engagement
Existing tool analysis & scope
The starting point wasn't designing a new tool — it was understanding why V10 had become a bottleneck despite being business-critical.

I mapped V10 and V11 across their capabilities, architecture and limitations — documenting the fragmented workflow spanning Google Sheets, BigQuery and external Excel files before any design work began.

Key outcomes
System architecture documented: Google Sheets with manual BigQuery integration, fragmented across 10+ spreadsheet tabs with external Excel files for comparison
Workflow baseline mapped: Reviews requiring 20–100+ runs across 7 workflow stages
Technical limitations identified: Google Sheets reaching capability limits, no integrated comparison tools, 200+ concurrent reviews requiring a scalable platform
Information architecture defined: Unified web platform with hub-and-spoke navigation replacing fragmented workflows, supporting parallel V11 operation during transition
AI note
AI tooling was introduced progressively — used initially for knowledge organisation and structuring observations. Deeper integration began from Discovery onwards.
ART Google Sheet V10
Range Outcome Analysis Tool (ROA)
02 — Discover
Initial discovery
Discovery revealed the real problem wasn't ranging decisions — it was everything analysts had to do before they could make one.

15 discovery interviews surfaced a consistent pattern: the pain wasn't in the ranging decisions themselves but in everything required to get there — manual validation, fragmented tooling and no clear way to communicate recommendations to Category Managers.

Key outcomes
Critical pain points identified: 50–70% setup time on manual validation, 2–3 hours creating comparisons, 3+ hours troubleshooting, and difficulty communicating the rationale behind range recommendations to Category Managers
Jobs-to-be-done mapped: Configure with data confidence, evaluate scenarios efficiently, implement tactics with impact preview, communicate recommendations compellingly
Two personas defined: Expert Range Analyst (8+ years, 15–20 reviews, 50+ tactics) and Novice Range Analyst (18 months, 8–12 reviews) — requiring progressive disclosure to serve both
Overview of problems and opportunities — affinity map
Range Analyst persona
03 — Define
User journey mapping
A collaborative user journey mapping workshop confirmed that analysts don't work linearly — the navigation needed to reflect that.

Research insights were translated into a concrete navigation structure through a collaborative journey mapping workshop with SMEs and stakeholders, validating workflow progression before any wireframing began.

Key outcomes
Workshop validation: Navigation architecture and workflow progression confirmed with SMEs and stakeholders
Tool structure & site map defined: Run Summary as central control with collapsible navigation to Set up, Category data and Tactics
Core workflow mapped: New Review → Initial setup → Run Summary hub → Category data/Tactics → Execute run → Publish
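To illustrate the structure the workshop converged on, below is a minimal TypeScript sketch of the hub-and-spoke model, with Run Summary as the hub that every spoke returns to. All type and function names here are hypothetical illustrations, not drawn from the actual build:

```typescript
// Hypothetical sketch of the hub-and-spoke navigation model.
// "Run Summary" is the hub; every spoke returns to it rather than
// chaining linearly into the next section.

type Spoke = "setup" | "categoryData" | "tactics";

interface NavigationModel {
  hub: "runSummary";
  spokes: Spoke[];
}

const artNavigation: NavigationModel = {
  hub: "runSummary",
  spokes: ["setup", "categoryData", "tactics"],
};

// Non-linear movement: any spoke is reachable from the hub, and the
// hub is reachable from any spoke — matching how analysts actually work.
function reachableFrom(
  current: Spoke | "runSummary",
  nav: NavigationModel
): string[] {
  return current === nav.hub ? [...nav.spokes] : [nav.hub];
}

console.log(reachableFrom("runSummary", artNavigation)); // ["setup", "categoryData", "tactics"]
console.log(reachableFrom("tactics", artNavigation));    // ["runSummary"]
```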
User journey workshop
Refined site map
03 — Define
Tool architecture & wireframing
Establishing the tool architecture and site map at the lo-fi stage meant avoiding costly structural changes once hi-fi design began.

User journeys were translated into lo-fi wireframes across all five core sections, defining screen layouts, navigation patterns and content hierarchy ready for SME validation.

Key outcomes
Tool architecture defined: Hub-and-spoke navigation model with Run Summary as central hub
Core screens designed: Landing page (review tracking), Set up (staged validation), Run Summary (central hub), Category data (universe management), Tactics (constraints application)
Progressive disclosure implemented: Default simplified views with "show more" revealing advanced options
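As an illustration of the progressive disclosure pattern, a small hypothetical TypeScript sketch: the default view hides advanced options until the user opts in via "show more". The option names are invented for the example, not the real ART data model:

```typescript
// Illustrative sketch of progressive disclosure: novice users see a
// simplified default view, while "show more" reveals the advanced
// options expert analysts rely on. Option names are hypothetical.

interface TacticOption {
  id: string;
  label: string;
  advanced: boolean; // hidden until the user opts in
}

const tacticOptions: TacticOption[] = [
  { id: "rangeWidth", label: "Range width", advanced: false },
  { id: "spaceTarget", label: "Space target", advanced: false },
  { id: "customConstraint", label: "Custom constraint builder", advanced: true },
  { id: "overrideWeights", label: "Override scoring weights", advanced: true },
];

function visibleOptions(options: TacticOption[], showMore: boolean): TacticOption[] {
  return showMore ? options : options.filter((o) => !o.advanced);
}

console.log(visibleOptions(tacticOptions, false).map((o) => o.label));
// ["Range width", "Space target"] — the simplified default
console.log(visibleOptions(tacticOptions, true).map((o) => o.label));
// all four options — the expert view
```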
Analysis page — lo-fi wireframe
Category data — lo-fi wireframe
03 — Define
Lo-fi iteration
SME sessions revealed that how you surface errors matters as much as catching them.

Weekly SME reviews validated business rules and refined the information architecture, while fortnightly check-ins with developers identified technical feasibility issues — all before any hi-fi investment.

Key outcomes
SME validation & design iteration: Business rules and workflow completeness validated, with iterations incorporating feedback on data validation logic and error categorisation
Critical refinement: SMEs confirmed the need for a clear distinction between 'errors' (blocking) and 'alerts' (advisory) — staged validation prevents overwhelming users; see the sketch after this list
Workflow logic validated: Hub-and-spoke navigation confirmed as matching non-linear analyst working patterns
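To make the error/alert distinction concrete, here is a minimal TypeScript sketch of how blocking errors and advisory alerts might gate run execution. The stage names, types and example issues are hypothetical illustrations, not the production validation rules:

```typescript
// Hypothetical sketch of staged validation with two severities:
// "error" blocks the run until fixed; "alert" is advisory and lets
// the analyst proceed. Staging the checks avoids dumping every
// issue on the user at once.

type Severity = "error" | "alert";

interface ValidationIssue {
  stage: "dataHealth" | "configuration" | "preRun";
  severity: Severity;
  message: string;
}

function canExecuteRun(issues: ValidationIssue[]): boolean {
  // Only blocking errors prevent execution; alerts are surfaced but advisory.
  return !issues.some((issue) => issue.severity === "error");
}

const issues: ValidationIssue[] = [
  { stage: "dataHealth", severity: "error", message: "Missing sales data for 12 articles" },
  { stage: "configuration", severity: "alert", message: "Universe unchanged since last review" },
];

console.log(canExecuteRun(issues)); // false — the data-health error must be fixed first
```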
AI approach
Dedicated Claude projects per tool section managed context limits and enabled deep, section-specific knowledge input — significantly improving output quality across complex, information-dense sections.
Category data lo-fi
Runs summary lo-fi
Tactics lo-fi
04 — Design
Hi-fi iteration
Fortnightly dev reviews meant design and engineering stayed aligned throughout — not just at handoff.

Lo-fi wireframes were built into high-fidelity Figma prototypes across all five sections, applying accessibility standards and a consistent visual language throughout.

Key outcomes
Interactive prototypes: High-fidelity Figma prototypes with a consistent visual language, trialling Claude for HTML prototype generation alongside Figma
Accessibility implemented: WCAG 2.1 Level AA compliant design patterns applied throughout all screens
Landing page & Runs summary
Category data — site & group view
Tactics — guided workflow
Tactic selection — experienced user
04 — Design
User testing
User testing surfaced a clear behavioural split — novice analysts needed guidance through complexity, while experts needed it out of their way. Progressive disclosure served both.

25+ sessions across the core screens revealed where analysts succeeded, where they hesitated and where the design needed to change.

Key outcomes
Setup flow (8 sessions): Initial confusion with staged validation — users expected errors to surface all at once, but responded with relief once the staged approach was explained. 70% reduction in troubleshooting time
Universe management (6 sessions): Bulk selection was not immediately obvious — once discovered, users recognised it as a significant time saver
Tactics workflow (6 sessions): Behavioural split confirmed — novice users gravitated to the guided workflow, while experts immediately sought direct access
"Just scanning down, I can see which runs solved high-priority issues. It's going to save me so much time."
Range Analyst, user testing session
"The flow makes sense, it's very intuitive. I feel like somebody newer would understand it… we're trying to cater to different needs."
Range Analyst, user testing session
User testing session — remote, via Figma prototype
04 — Design
Hi-fi iteration & refinement
Every refinement had to balance what users asked for against what the design system could sustain.

User feedback was incorporated across all five sections while maintaining design system consistency throughout.

Validation outcomes
Run Summary confirmed: "Just scanning down, I can see which runs solved high-priority issues. It's going to save me so much time." — range analyst
Tactics workflow confirmed: "The flow makes sense, it's very intuitive… we're trying to cater to different needs." — range analyst
Performance validated: Table performance, bulk operations and responsive behaviour confirmed under realistic load
Power user refinements: Advanced sorting and filtering capabilities added to support complex, multi-criteria analysis
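As a sketch of what that multi-criteria sorting can look like under the hood — with invented field names rather than the production run schema — runs might be ordered first by high-priority issues resolved, then by projected sales impact:

```typescript
// Illustrative multi-criteria sort for power users: runs ordered by
// high-priority issues resolved, with projected sales impact as the
// tie-breaker. Field names are hypothetical.

interface RunRow {
  runId: number;
  highPriorityIssuesResolved: number;
  projectedSalesImpact: number; // dollars; positive = uplift
}

function sortRuns(rows: RunRow[]): RunRow[] {
  return [...rows].sort(
    (a, b) =>
      b.highPriorityIssuesResolved - a.highPriorityIssuesResolved ||
      b.projectedSalesImpact - a.projectedSalesImpact
  );
}

const runs: RunRow[] = [
  { runId: 3, highPriorityIssuesResolved: 2, projectedSalesImpact: 120_000 },
  { runId: 1, highPriorityIssuesResolved: 4, projectedSalesImpact: 80_000 },
  { runId: 2, highPriorityIssuesResolved: 4, projectedSalesImpact: 95_000 },
];

console.log(sortRuns(runs).map((r) => r.runId)); // [2, 1, 3]
```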
Landing page & Runs summary — refined
Category data — article & space allocation
Tactic selection — experienced user
Tactic selection — guided
05 — Deliver
Final deliverable & handoff
Close collaboration with the engineering team — from fortnightly reviews through to UAT — was just as important as the specifications themselves.

Comprehensive specifications were delivered across all five sections — covering layout, interaction behaviours, edge cases and error states.

Key outcomes
Production specifications delivered: Figma prototypes and comprehensive handoff documentation across all five MVP workflows, with a dedicated delivery file for screen specifications supported by Figma Dev Mode
UAT sessions conducted: Build limitations identified and resolved before final handoff — ensuring designs translated accurately into the build without last-minute rework
Screen specifications — Figma Dev Mode
Demo of web app designs

View the end-to-end prototype

Full interactive Figma prototype covering all core analyst workflows

Open in Figma →
Strategic impact

Outcomes & delivery

Immediate deliverables
Research artefacts: 15 discovery interviews, 25+ usability testing sessions, 4 collaborative workshops
Validated personas: Expert and novice Range Analyst profiles grounded in behavioural observations
Hi-fi prototypes: 5 core sections with comprehensive task workflows
Production specifications: Layout, behaviours, edge cases and error states across all workflows
Process innovation value
Evidence-based prioritisation: Research backed confident pushback on partial release, protecting UX quality
Validate before building: 25+ testing sessions catching design issues before development investment
AI-accelerated delivery: Claude transformed research synthesis, wireframing and prototyping — significantly reducing delivery time
Progressive disclosure: Applied throughout to serve both novice and expert analysts
Next steps
Development was well underway when it was paused — back-end infrastructure largely in place, close to a fully functional MVP.
The project stands as a strong foundation, ready to resume with minimal ramp-up.
Reflection

Key learnings

Critical success factors

Validate before building: 25+ testing sessions caught design issues before development. The insight "I'm not going to analyse my run unless I've fixed the problems" validated key prioritisation decisions.
Evidence-based prioritisation: Research-backed pushback ensured the first release included a complete workflow, not just partial capabilities.
Solved V10's biggest pain points: Unified interface replacing 10+ fragmented tabs, automated pre-run validation and integrated run comparison tools.

Primary challenges

Stakeholder alignment and partial release pressure: Competing priorities required continuous evidence-based pushback — balancing business urgency against releasing an incomplete experience.
Rapidly evolving AI landscape: AI capabilities shifted mid-project, requiring continuous recalibration of tools and processes throughout delivery.

What would be done differently

Better cross-team communication: Priority changes by different team members caused misalignment — clearer protocols between SMEs, developers and stakeholders would have helped.
Dedicated product manager: The absence of a PM created gaps in priority-setting and decision ownership.
Earlier AI integration: Earlier adoption of synthetic testing and AI-assisted validation would have accelerated iteration cycles.