Pushpay · Giving Platform · 2023–2024 · NDA

Making data
speak plainly.

End-to-end UX for AI-powered giving analytics — from the data layer up. Teaching an AI what church giving data means, designing how administrators query it, and exploring what a truly conversational giving intelligence could become.

My role

Senior UX Designer
End-to-end UX, data taxonomy, user research, future vision design

Collaborators

Data Engineers · US Stakeholders · UX Research team

What I worked on

AWS QuickSight · Data Taxonomy · User Research · AI UX · Future Vision

Phase 01

AWS QuickSight integration

AI-powered giving analytics embedded in the Giving platform — transaction data, recurring schedules, donor insights. Designed end-to-end and launched.

Shipped · Live

Phase 02

"Ask AI" — conversational giving intelligence

A future-state vision for a Claude-like experience across all Giving data — conversational queries, history, data visualisation, follow-ups. Initial exploration completed before handoff.

Handed off · In progress

AI that understands
church giving data.

Church administrators manage complex giving data — transactions, recurring schedules, lapsed donors, campaign performance, fund allocation. Historically, making sense of it meant manual exports, spreadsheet work, or relying on a data team. The opportunity: AI-powered analytics in the hands of the people who needed it most.

The integration used AWS QuickSight to surface giving insights — but before any interface could be designed, the data itself needed to be structured so AI could reason about it accurately.

"Before you can design how an administrator asks a question, you have to make sure the AI understands what the answer actually means."

The work beneath
the interface.

Most UX work on AI products starts at the interface — how the user asks the question, how the answer is displayed. This project went a layer deeper: working with data engineers to ensure the AI had the context to answer correctly in the first place.

That meant reviewing every field in the giving data model — transactions, recurring schedules, donor records — understanding what each field meant to an administrator, how they'd talk about it, and what kind of queries they'd expect it to answer.

Labels, descriptions, and semantic context gave the AI the right framing — so when an administrator asks "show me lapsed donors from last quarter," the system understands what "lapsed" means in church giving, not just as a generic data concept.
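
As a concrete illustration of that framing work, here is a minimal sketch of what a semantic annotation can look like: each raw field carries a label, a plain-language description, and the synonyms administrators actually use. The schema, column names, and synonyms below are assumptions for illustration, not QuickSight's topic format or the real giving data model.

```python
from dataclasses import dataclass

@dataclass
class FieldAnnotation:
    """Semantic context attached to one field in the giving data model."""
    column: str        # raw column name in the dataset (illustrative)
    label: str         # how an administrator would name it
    description: str   # what the value means in church giving, not just in the schema
    synonyms: list     # phrasings the AI should map to this field

# Illustrative annotations; column names are assumptions, not the real schema.
GIVING_TAXONOMY = [
    FieldAnnotation(
        column="last_txn_date",
        label="Last gift date",
        description="Date of the donor's most recent completed transaction.",
        synonyms=["last gave", "most recent gift"],
    ),
    FieldAnnotation(
        column="is_lapsed",
        label="Lapsed donor",
        description=("True when a previously regular giver has no transactions "
                     "in the defined lapse window: a re-engagement opportunity, "
                     "not just a missing record."),
        synonyms=["lapsed", "stopped giving", "inactive donors"],
    ),
]
```

It is the synonyms and descriptions, not the column names, that let "lapsed" resolve to the church-giving concept rather than the generic one.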

I also worked on calculated fields, experimenting with custom metrics and suggesting them to the data engineering team based on how administrators actually think about giving performance. UX knowledge informed the data model, not just the other way around; a sketch of two such metrics follows the table below.

Data concept · What it means to the system · What it means to an admin

Lapsed donor · No transaction records in a defined period · Someone who used to give regularly but has stopped: a re-engagement opportunity
Recurring schedule · Automated payment plan with frequency and amount · Predictable giving: the backbone of a church's financial planning
Fund allocation · Transaction tagged to a specific designated fund · Where donors are directing their generosity, often tied to specific campaigns or ministries
Net giving · Total transactions minus refunds in a period · The real number: what actually came in after any reversals or failures

What administrators
actually needed to ask.

Before designing the interface, I conducted user testing and interviews with church administrators to understand how they thought about giving data — not how the system stored it. The gap between those two things is where most data tools fail.

The research shaped both the taxonomy and the interface — ensuring the queries the AI could answer matched what administrators actually asked.

01
Administrators think in outcomes, not data fields. They ask "how are we trending compared to last year?" not "show me transactions grouped by date range with year-over-year delta." The taxonomy work bridged that gap; a sketch of that translation follows this list.
02
Trust is the primary UX problem in AI data tools. Administrators needed confidence that what the AI surfaced was accurate before they'd act on it — especially for financial decisions. Validation and transparency became core design requirements, not afterthoughts.
03
Context changes the question. "Top donors" means something different at a 200-person church versus a 5,000-person church. Understanding the administrator's context — church size, giving culture, campaign cycles — shaped how results needed to be framed.
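
To make finding 01 concrete, here is a hedged sketch of the translation the taxonomy enabled: an outcome question resolved into an explicit query specification. The spec shape is an assumption for illustration, not QuickSight's internal representation.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class QuerySpec:
    """The explicit form an outcome question resolves into via the taxonomy."""
    metric: str                                   # a taxonomy concept, not a raw column
    group_by: str
    date_range: Tuple[str, str]
    compare_to: Optional[Tuple[str, str]] = None  # comparison period for the delta

# "How are we trending compared to last year?" might resolve to:
trending = QuerySpec(
    metric="net_giving",
    group_by="month",
    date_range=("2024-01-01", "2024-12-31"),
    compare_to=("2023-01-01", "2023-12-31"),  # same months, prior year
)
```

The administrator never sees this form; the taxonomy exists so they don't have to.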

The trust problem
nobody talks about.

The biggest UX learning wasn't about the interface — it was about confidence. Administrators were willing to use AI for queries, but only if they could verify the output before acting on it.

It's an underaddressed problem in AI product design. When the answer is a number tied to a financial decision, "trust the AI" isn't good enough. The design had to surface enough context that an administrator could sanity-check the result without re-running the query.

Core insight

AI outputs need
built-in verification.

Administrators need to see why the AI answered what it answered — not just what. Showing the data sources, time range, filters applied, and calculation logic isn't a nice-to-have. It's the difference between a tool they trust and one they don't use.
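
One way to make that verification concrete is to treat provenance as part of the answer itself. A minimal sketch, assuming a simple answer envelope; every field name and value here is illustrative, not the shipped schema.

```python
from dataclasses import dataclass

@dataclass
class VerifiableAnswer:
    """An answer plus everything an administrator needs to sanity-check it."""
    value: float        # the number itself
    question: str       # the query as the AI understood it
    data_sources: list  # datasets the answer was computed from
    date_range: tuple   # period the calculation covers
    filters: dict       # filters that were applied
    calculation: str    # human-readable calculation logic

# Illustrative values only:
answer = VerifiableAnswer(
    value=48250.00,
    question="Net giving for Q1, General fund only",
    data_sources=["transactions"],
    date_range=("2024-01-01", "2024-03-31"),
    filters={"fund": "General"},
    calculation="sum of completed amounts minus sum of refunded amounts",
)
```

Rendering those fields next to the number is what lets an administrator sanity-check a result without re-running the query.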

What giving intelligence
could become.

While the QuickSight integration handled transaction data, the "Ask AI" exploration went further — designing what fully conversational giving intelligence could look like with more control over the AI layer.

The vision: a Claude-like experience for giving data — administrators having a real conversation with their numbers, following up, saving queries, building on previous answers, and getting proactive insights instead of reactive reports.

Conversational queries

Natural language questions with follow-up capability — "show me lapsed donors" → "which of those gave more than $500 last year?" (see the sketch after this list).

Query history

Saved and revisitable queries so administrators can track the same metrics over time without rebuilding them from scratch.

Suggested queries

Proactive suggestions based on church size, giving patterns, and seasonal giving cycles — surfacing insights administrators didn't know to ask for.

Data visualisation

Automatic chart and table generation from query results — the right format for the type of data being surfaced.

Project creation

Grouping related queries into a project or campaign view — so giving analysis around a capital campaign lives together.

Validation layer

Transparent sourcing for every answer — showing which data, which date range, which calculations — so administrators can verify before acting.
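
Under the hood, the follow-up behaviour from this exploration could work by refining the previous turn's query rather than starting over. A minimal sketch under that assumption; the spec shape and filter key are illustrative, not a shipped API.

```python
def follow_up(previous, refinement):
    """Layer a follow-up turn onto the prior query: filters accumulate, other keys override."""
    merged = {**previous, **refinement}
    merged["filters"] = {**previous.get("filters", {}), **refinement.get("filters", {})}
    return merged

# "Show me lapsed donors" → "which of those gave more than $500 last year?"
turn_1 = {"metric": "lapsed_donors", "filters": {}}
turn_2 = follow_up(turn_1, {"filters": {"gave_last_year_over": 500}})  # assumed filter key
# turn_2 == {"metric": "lapsed_donors", "filters": {"gave_last_year_over": 500}}
```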

This exploration was handed off to another designer when I moved to lead the Design System team. The taxonomy work, research findings, and initial interaction model formed the foundation for the next phase of this feature.
