When an Account Executive is prepping for a QBR (quarterly business review), or a Customer Success Manager is trying to spot churn, a dashboard is usually too slow. They don't want to filter through charts—they need a clear narrative about product usage: Is this account actually using what they pay for? Who are the internal product champions? Are they abandoning core features?
This isn't about summarizing Zendesk tickets or Salesforce notes. This is about giving your revenue team direct access to your product telemetry.
But if you just point an AI agent at your product data warehouse to answer these questions, it usually fails. It hallucinates because it’s guessing what "Active User" or "Churn Risk" actually means for your specific B2B SaaS.
I previously wrote about using "soft prompts" to guide Claude, but Anthropic recently released Claude Skills, a native way to give Claude specialized workflows.
Here is how I built a zero-hallucination, self-serve product analytics agent in three steps by combining dbt, the Google MCP Toolbox, and Claude Skills.
## The Architecture: "Context as Code"
Most people try to fix AI hallucinations by writing giant system prompts. That’s fragile.
Instead, I push the definitions into the database schema itself. The AI reads your product metadata just like a human analyst would, and then uses a strict Claude Skill to format the output.
### The Stack
- GrowthCues Core: My open-source dbt project that calculates product metrics from Segment/RudderStack data and pushes definitions to BigQuery.
- BigQuery: Stores the product event data AND the metadata (column descriptions).
- Google MCP Toolbox: The secure bridge that allows Claude to query BigQuery directly from your machine.
- Claude Desktop: The conversational interface running the custom Skill.
- Claude Skill: A custom workflow that guides Claude to analyze product usage and generate a structured report.
## Step 1: The Logic (Discover the Signal)
I use GrowthCues Core to build the semantic layer and product usage signal foundation. The secret isn't just the SQL; it's the `schema.yml`. Using dbt's `persist_docs` feature, I embed the exact business logic into the column descriptions.
For example, here is how I define a volumetric product churn signal:
```yaml
models:
  - name: fct_account_metrics_daily
    columns:
      - name: volume_change_ratio_7d
        description: >
          [Definition] Product Usage Contraction Signal.
          [Formula] (Sum of events last 7 days) / (Sum of events prior 7 days).
          [Context] < 1.0 means product usage is declining. < 0.5 is a strong churn risk.
```
When I run `dbt run`, these descriptions are written directly into the BigQuery column metadata. The mathematical definition of churn risk lives inside the warehouse.
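To make the [Formula] and [Context] tags concrete, here is a minimal Python sketch of the same logic. This is illustrative only — the function names and event counts are invented, and the real metric is computed in SQL by the dbt model:

```python
from datetime import date, timedelta

def volume_change_ratio_7d(daily_counts: dict[date, int], as_of: date) -> float:
    """[Formula]: (sum of events, last 7 days) / (sum of events, prior 7 days)."""
    last_7 = sum(daily_counts.get(as_of - timedelta(days=d), 0) for d in range(7))
    prior_7 = sum(daily_counts.get(as_of - timedelta(days=d), 0) for d in range(7, 14))
    return last_7 / prior_7 if prior_7 else float("inf")

def churn_signal(ratio: float) -> str:
    # Thresholds from the [Context] tag: < 1.0 declining, < 0.5 strong risk.
    if ratio < 0.5:
        return "strong churn risk"
    if ratio < 1.0:
        return "usage declining"
    return "healthy"

# Made-up account: 25 events/day two weeks ago, dropping to 10 events/day last week.
today = date(2025, 1, 14)
counts = {today - timedelta(days=d): (10 if d < 7 else 25) for d in range(14)}
ratio = volume_change_ratio_7d(counts, today)
print(round(ratio, 2), churn_signal(ratio))  # 70 / 175 = 0.4 -> strong churn risk
```

Because the thresholds live in the column description, the AI and the humans reading the warehouse docs are always working from the same definition.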
## Step 2: The Architecture (Govern the Signal)
Next, I connect Claude Desktop to BigQuery using the Google MCP Toolbox. This provides Claude with specific tools like `get_table_info` and `execute_sql`.
I install the toolbox binary and update my `claude_desktop_config.json` file. You can find this file here:
- Mac: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
Here is the exact configuration I use:
```json
{
  "mcpServers": {
    "bigquery": {
      "command": "/path/to/toolbox",
      "args": ["--prebuilt", "bigquery", "--stdio"],
      "env": {
        "BIGQUERY_PROJECT": "YOUR_PROJECT_ID"
      }
    }
  }
}
```
To let the MCP Toolbox use my Google user credentials, I authenticate via the Google Cloud CLI on my local machine:
```shell
gcloud auth application-default login
gcloud config set project YOUR_PROJECT_ID
```
Once authenticated, Claude has a secure, read-only bridge to my BigQuery project. It can now inspect schemas and execute product-level SQL locally.
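To get a feel for what flows over that bridge: the queries Claude issues through `execute_sql` are plain SQL against the fact tables. Here is a rough sketch of a health-triage query, run against an in-memory SQLite stand-in for BigQuery — the table shape follows the dbt model above, but the account rows are invented:

```python
import sqlite3

# In-memory SQLite stand-in for BigQuery, with the same table shape as the dbt model.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE fct_account_metrics_daily (
        account_name TEXT, metric_date TEXT, volume_change_ratio_7d REAL
    )
""")
conn.executemany(
    "INSERT INTO fct_account_metrics_daily VALUES (?, ?, ?)",
    [
        ("TechCorp", "2025-01-13", 0.9),
        ("TechCorp", "2025-01-14", 0.4),   # latest reading: strong churn risk
        ("Globex",   "2025-01-14", 1.3),
    ],
)

# Latest ratio per account, flagged against the [Context] thresholds
# (< 1.0 declining, < 0.5 strong churn risk).
rows = conn.execute("""
    SELECT account_name, volume_change_ratio_7d,
           CASE WHEN volume_change_ratio_7d < 0.5 THEN 'strong churn risk'
                WHEN volume_change_ratio_7d < 1.0 THEN 'declining'
                ELSE 'healthy' END AS signal
    FROM fct_account_metrics_daily
    WHERE metric_date = (SELECT MAX(metric_date) FROM fct_account_metrics_daily)
    ORDER BY volume_change_ratio_7d
""").fetchall()
print(rows)  # [('TechCorp', 0.4, 'strong churn risk'), ('Globex', 1.3, 'healthy')]
```

The point of the architecture is that Claude does not need this logic hard-coded anywhere: the thresholds in the CASE expression come straight from the column descriptions it reads via `get_table_info`.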
## Step 3: The Execution (Turn Signal into Action)
Instead of hoping Claude figures out what to do with the product data, I give it a strict operating procedure using a Claude Skill.
A Skill is just a folder containing a `SKILL.md` file with YAML frontmatter. I built an `analyze-account-health` skill that gives Claude a step-by-step playbook specifically for product usage (not customer support or sales data). The Skill instructs Claude to read the metadata, run specific queries, and format the output in a structured way. Here’s a snippet of `SKILL.md`:
```markdown
---
name: analyze-account-health
description: Summarizes B2B account health by analyzing product usage patterns, risk signals, and expansion opportunities.
---

### Step 0: Discover Schema
Use `get_table_info` to read the column descriptions in `fct_account_metrics_daily` and `fct_user_metrics_daily`. Look for the [Definition] and [Context] tags.

### Step 1: Quick Health Triage
Query recent product metrics to identify volume changes, feature stickiness ratios, and dormant risk flags based on the metadata.

### Step 2: User-Level Analysis
Query the data to find Product Champions (top-ranked users by volume) and Admins/Buyers (first users in the account).

### Step 3: Present Report
Format the output strictly as follows:

## Executive Summary
## Key Product Metrics
## Risk Factors
## User Intelligence
## Recommendations
```
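Step 0 works because the description format is machine-readable. Here is a small sketch of how the bracketed tags can be pulled out of a column description — a hypothetical helper, since Claude reads the descriptions in natural language rather than with code, but the rigid `[Tag]` structure is what makes that reading reliable:

```python
import re

def parse_tags(description: str) -> dict[str, str]:
    """Split a '[Tag] text' style column description into a dict."""
    # Each tag's text runs until the next '[' or the end of the string.
    return {
        m.group(1): m.group(2).strip()
        for m in re.finditer(r"\[(\w+)\]\s*([^\[]*)", description)
    }

desc = (
    "[Definition] Product Usage Contraction Signal. "
    "[Formula] (Sum of events last 7 days) / (Sum of events prior 7 days). "
    "[Context] < 1.0 means product usage is declining. < 0.5 is a strong churn risk."
)
tags = parse_tags(desc)
print(tags["Formula"])
```

Anything you can express in this tagged format — activation rules, seat-expansion thresholds, dormancy windows — becomes a definition the agent can discover instead of guess.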
### Deploying the Skill
- Ensure "Code execution and file creation" is enabled in Claude (Settings > Capabilities).
- Download my open-source skill folder and compress it into a ZIP file.
- In Claude, navigate to Customize > Skills.
- Click the "+" button, upload the ZIP file, and toggle the skill on.
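The ZIP step can also be scripted. Here is a small sketch using only the Python standard library — the folder name matches the skill above, but the paths and the throwaway `SKILL.md` content are illustrative:

```python
import pathlib
import shutil
import tempfile
import zipfile

# Build a throwaway skill folder so the example is self-contained; in practice
# you would point make_archive at your real analyze-account-health folder.
workdir = pathlib.Path(tempfile.mkdtemp())
skill_dir = workdir / "analyze-account-health"
skill_dir.mkdir()
(skill_dir / "SKILL.md").write_text(
    "---\nname: analyze-account-health\ndescription: ...\n---\n"
)

# Compress the folder into analyze-account-health.zip; the skill folder itself
# sits at the top level of the archive.
zip_path = shutil.make_archive(
    str(workdir / "analyze-account-health"), "zip",
    root_dir=workdir, base_dir="analyze-account-health",
)
names = zipfile.ZipFile(zip_path).namelist()
print(names)
```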
## The Payoff: A Self-Driving Product-Led Revenue Team
Now, a Customer Success Manager simply opens Claude Desktop and types:
`/analyze-account-health TechCorp`
Claude automatically recognizes the command. It uses MCP to read the BigQuery metadata, applies your exact mathematical definition of product churn, writes the SQL, and outputs a structured, QBR-ready report in seconds.
Your GTM team gets instant, reliable answers about product adoption, and your data engineers are finally free from the ad-hoc ticket queue.
## Ready to build the baseline?
This workflow completely changes how sales and CS operate, but it only works if your underlying data model is clean. If your product telemetry is messy, the AI will just generate messy reports faster.
I help B2B scaleups fix this through The Signal Foundation. It’s a 1-week deployment sprint where I:
- Discover the Logic: Mathematically define exactly what product activation and churn mean for your business, stripping out the vanity metrics.
- Govern the Architecture: Deploy the GrowthCues Core dbt packages to clean up your product telemetry and embed this logic into your warehouse.
- Execute the Action: Set up the exact workflows (like Reverse ETL and Claude Skills) so your team can act on the product data immediately.
I’m a bootstrapped solo founder and revenue architect. That means you get direct, dependable access to me. I deliver working systems, not slide decks.
Get the free Claude Skill on GitHub or book an alignment call to see if your product data is ready for The Signal Foundation.