AI ads infrastructure

Natural Language Google Ads: How It Works

Natural language Google Ads management is real. Here's the prompt-to-API pipeline, what works, what fails, and why ambiguity becomes a clarification question instead of a guess.

NotFair Team

Natural language Google Ads management means typing a sentence — "pause every keyword in the brand campaign with zero conversions in the last 30 days" — and having that sentence translated into the right Google Ads API calls and executed against your live account. It's not a wrapper around the Ads UI. It's an AI model with structured tool access reading your data and writing back through the API.

This works because of a specific architecture: an AI model (Claude, GPT, etc.) plus an MCP server that exposes Google Ads API operations as named tools. NotFair is the production MCP server most people use. The AI maps your sentence to one or more tool calls, the tools hit the API, results come back, and the AI summarizes them in plain English.

The prompt-to-API pipeline, end to end

  • Step 1: parse intent. The model classifies the sentence as a read or write, identifies the entities (campaigns, keywords, search terms), and pulls out the filter expressions ("zero conversions", "last 30 days").
  • Step 2: pick the tool. Reads almost always go through runScript with a GAQL query. Writes go through specific tools like pauseKeyword, bulkPauseKeywords, addNegativeKeyword, or updateCampaignBudget.
  • Step 3: build the structured call. The model fills in the tool arguments: customer ID, resource names, filter clauses. For runScript, it writes a JS sandbox snippet that calls ads.gaql or ads.gaqlParallel.
  • Step 4: server validation. NotFair checks scopes, validates the GAQL syntax, and, for writes, gates the call behind an approval prompt that you confirm.
  • Step 5: API execution. The Google Ads API receives the call under NotFair's developer token, authenticated as your Google identity via OAuth.
  • Step 6: summarize. The model takes the raw API response (JSON, often verbose) and renders it as a readable answer with the numbers that matter.
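Steps 1 through 3 can be condensed into a single dispatch sketch. This is an illustration only: the tool names (runScript, bulkPauseKeywords) come from this article, but the types and the toy read/write classifier are hypothetical, not NotFair's actual implementation.

```typescript
// Hypothetical shapes for illustration; NotFair's real schemas may differ.
type ToolCall = { tool: string; args: Record<string, unknown> };

// Steps 1-3 condensed: classify a sentence as read or write, then build the call.
function buildToolCall(sentence: string, customerId: string): ToolCall {
  const isWrite = /\b(pause|add|update|set|remove)\b/i.test(sentence);
  if (!isWrite) {
    // Reads go through runScript with a GAQL query (step 2).
    return {
      tool: "runScript",
      args: {
        customerId,
        script:
          "return await ads.gaql(`SELECT campaign.name, metrics.cost_micros " +
          "FROM campaign WHERE segments.date DURING LAST_30_DAYS`)",
      },
    };
  }
  // Writes map to a specific named tool and still need explicit resource IDs
  // before the server-side approval gate (step 4) will let them execute.
  return { tool: "bulkPauseKeywords", args: { customerId, keywordIds: [] } };
}

const call = buildToolCall("show me campaign spend for the last 30 days", "123-456-7890");
console.log(call.tool); // "runScript"
```

A real classifier is the language model itself; the point of the sketch is that the output of step 3 is always a named tool plus structured arguments, never free text.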

A real prompt mapped to real tool calls

You: Find search terms that wasted more than $100 with no conversions in the last two weeks and add them as negatives.

Under the hood:

1. runScript({ script: "return await ads.gaql(`SELECT search_term_view.search_term, metrics.cost_micros, metrics.conversions, campaign.id FROM search_term_view WHERE segments.date DURING LAST_14_DAYS AND metrics.cost_micros > 100000000 AND metrics.conversions = 0`)" }) — returns 7 terms
2. (Approval gate) — the model surfaces the 7 terms and asks for confirmation
3. bulkAddNegativeKeywords({ keywords: ["free crm", "crm tutorial", ...], campaignId: 12345, matchType: "EXACT" }) — executes after you approve

Claude: Found 7 search terms over $100 with zero conversions. Total wasted spend: $1,432. Added all 7 as exact-match negatives.
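The GAQL filter compares against 100000000 because the Google Ads API reports money in micros: one million micros per unit of account currency. A quick sanity check of that conversion (the three row values below are made up for illustration; the article's real example had 7 terms totaling $1,432):

```typescript
// Google Ads reports money in micros: 1,000,000 micros = 1 currency unit.
const MICROS_PER_UNIT = 1_000_000;

function microsToDollars(micros: number): number {
  return micros / MICROS_PER_UNIT;
}

function dollarsToMicros(dollars: number): number {
  return dollars * MICROS_PER_UNIT;
}

// The $100 threshold in the GAQL WHERE clause:
console.log(dollarsToMicros(100)); // 100000000

// Summing hypothetical wasted-spend rows works the same way:
const costMicros = [412_000_000, 310_000_000, 710_000_000];
const totalDollars = costMicros.reduce((sum, m) => sum + microsToDollars(m), 0);
console.log(totalDollars); // 1432
```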

Why this is more reliable than "AI clicking buttons"

Browser-automation agents that drive the Google Ads UI exist, but they're slow and brittle. They depend on DOM selectors that Google changes, struggle with auth, and can't handle bulk operations cleanly. The MCP path skips the UI entirely. The AI talks to the same REST API that an enterprise PPC platform would talk to. No clicking, no waiting on page loads, no breakage when Google ships a redesign.

How ambiguity is handled (this is the important part)

Natural language is ambiguous. "Pause my underperforming campaigns" is not a precise instruction. The AI's job, when faced with ambiguity, is to ask a clarification question rather than guess and execute. NotFair's tools are structured so that destructive writes require explicit parameters — there is no API call for "pause underperforming things," so the model has to resolve the ambiguity into specific resource names before any write can happen.

You: Pause my underperforming campaigns.

Claude: "Underperforming" needs a definition. I see three reasonable interpretations:

1. CPA above target ($85 target on this account) — would pause 2 campaigns ($340/day spend)
2. ROAS below 1.0 over the last 30 days — would pause 4 campaigns ($890/day spend)
3. Zero conversions in the last 14 days with $500+ spend — would pause 1 campaign ($120/day spend)

Which rule do you want, or do you want to combine them?
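The gating behavior can be modeled as a precondition on write tools. This is a sketch under stated assumptions — gateWrite and the result shape are mine, not NotFair's API — but it captures the structural point: a write with no concrete resource IDs cannot proceed and must bounce back as a clarification.

```typescript
type WriteResult =
  | { status: "needs_clarification"; question: string }
  | { status: "pending_approval"; tool: string; resourceIds: string[] };

// There is no API call for "pause underperforming things": a write must
// name concrete resources before it can even reach the approval gate.
function gateWrite(tool: string, resourceIds: string[]): WriteResult {
  if (resourceIds.length === 0) {
    return {
      status: "needs_clarification",
      question: "Which campaigns? Define 'underperforming' (CPA, ROAS, or zero-conversion rule).",
    };
  }
  return { status: "pending_approval", tool, resourceIds };
}

console.log(gateWrite("pauseCampaign", []).status); // "needs_clarification"
console.log(gateWrite("pauseCampaign", ["customers/1/campaigns/42"]).status); // "pending_approval"
```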

Failure modes you should know about

  • Hallucinated metrics. Models occasionally invent numbers when context is thin. The fix: every claim should be backed by a tool call. If Claude says "CPA is $40" without a runScript call before it, treat that number as suspect.
  • Wrong scope. "Pause this keyword" can refer to any of several keywords with similar text. Tools require resource IDs; ambiguity should produce a clarification, not a guess.
  • Date interpretation. "Last week" means different things (last calendar week vs. the last 7 days). The AI should state which interpretation it picked.
  • Stale data. If you make a change in the Google Ads UI mid-conversation, the AI is working from cached numbers. Re-running a query is cheap; do it after manual UI edits.
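The date-interpretation failure mode above is easy to make concrete. A sketch (the helper names are mine, not NotFair's) computing both readings of "last week" as the YYYY-MM-DD ranges a GAQL BETWEEN clause would take:

```typescript
function fmt(d: Date): string {
  return d.toISOString().slice(0, 10); // YYYY-MM-DD, the format GAQL date literals use
}

// Reading 1: the trailing 7 days, ending yesterday.
function last7Days(today: Date): [string, string] {
  const end = new Date(today);
  end.setUTCDate(end.getUTCDate() - 1);
  const start = new Date(today);
  start.setUTCDate(start.getUTCDate() - 7);
  return [fmt(start), fmt(end)];
}

// Reading 2: the last full calendar week, Monday through Sunday.
function lastCalendarWeek(today: Date): [string, string] {
  const dow = today.getUTCDay(); // 0 = Sunday
  const daysSinceMonday = (dow + 6) % 7;
  const monday = new Date(today);
  monday.setUTCDate(monday.getUTCDate() - daysSinceMonday - 7);
  const sunday = new Date(monday);
  sunday.setUTCDate(sunday.getUTCDate() + 6);
  return [fmt(monday), fmt(sunday)];
}

const today = new Date(Date.UTC(2024, 5, 12)); // Wednesday, 2024-06-12
console.log(last7Days(today));        // ["2024-06-05", "2024-06-11"]
console.log(lastCalendarWeek(today)); // ["2024-06-03", "2024-06-09"]
```

On a mid-week day the two ranges overlap by only a few days, which is why a good system names the interpretation it picked before running the query.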

What a good natural-language ads system looks like

  • Grounded. Every numerical claim ties to a specific tool call. No vague "your CTR is low."
  • Reversible. Writes flow through an approval gate. Many operations also expose an undoChange tool for emergency rollback.
  • Composable. One sentence can fan out into 20 parallel queries (gaqlParallel) when an audit needs correlation across surfaces.
  • Auditable. The conversation log is the audit log. You can scroll back to see exactly what was asked, which tools fired, and what changed.
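The "Composable" fan-out is just parallel query execution. A minimal sketch with a mocked ads.gaqlParallel — the real sandbox helper presumably returns row sets from the API, while this mock only echoes each query back — to show the shape of a one-round-trip, multi-surface audit:

```typescript
// Mock of the sandbox's parallel query helper, for illustration only.
const ads = {
  gaqlParallel: async (queries: string[]): Promise<string[][]> =>
    Promise.all(queries.map(async (q) => [`rows for: ${q}`])),
};

async function auditAccount(): Promise<number> {
  const surfaces = ["campaign", "ad_group", "keyword_view", "search_term_view"];
  const queries = surfaces.map(
    (s) => `SELECT metrics.cost_micros FROM ${s} WHERE segments.date DURING LAST_30_DAYS`
  );
  const results = await ads.gaqlParallel(queries); // one fan-out, N queries
  return results.length;
}

auditAccount().then((n) => console.log(n)); // 4
```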

Setting up a natural language interface to your account

The shortest path: visit notfair.co/connect, complete OAuth with the Google identity that has Ads access, pick the customer IDs you want exposed, and install the connector in Claude Desktop, Claude Web, or ChatGPT. Cursor and Windsurf both support MCP via stdio config and work the same way. From OAuth to first natural-language audit is about five minutes.
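For Cursor and Windsurf, a stdio MCP connector is a small JSON entry in the editor's MCP config file. The snippet below is a sketch of the general shape only — the server name, command, and args are placeholders, not NotFair's documented values; check the setup page for the real ones:

```json
{
  "mcpServers": {
    "notfair": {
      "command": "npx",
      "args": ["-y", "notfair-mcp"]
    }
  }
}
```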


FAQ

Common questions about Model Context Protocol.

What is natural language Google Ads management?

Typing English sentences that get translated into Google Ads API calls — reads (queries, audits) and writes (pause, bid changes, negatives). The translation happens through an AI model with structured tool access via MCP.

How does the AI know which API call to make?

An MCP server exposes named tools (pauseKeyword, runScript, addNegativeKeyword, etc.) with structured argument schemas. The model picks a tool from that list based on the sentence and fills in the arguments.
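What "structured argument schemas" means in practice: each MCP tool is published with a name, a description, and a JSON-Schema-style input schema, and the model can only emit calls whose arguments fit a schema. A hypothetical declaration — the field names for pauseKeyword here are my guesses, not NotFair's real schema:

```typescript
// Hypothetical MCP tool declaration; NotFair's real schema may differ.
const pauseKeywordTool = {
  name: "pauseKeyword",
  description: "Pause a single keyword by its resource IDs.",
  inputSchema: {
    type: "object",
    properties: {
      customerId: { type: "string" },
      adGroupId: { type: "string" },
      criterionId: { type: "string" },
    },
    required: ["customerId", "adGroupId", "criterionId"],
  },
} as const;

// The model only ever sees name + description + schema; a call is valid
// only when its parsed arguments cover every required field.
function argsSatisfy(
  tool: typeof pauseKeywordTool,
  args: Record<string, unknown>
): boolean {
  return tool.inputSchema.required.every((k) => k in args);
}

console.log(argsSatisfy(pauseKeywordTool, { customerId: "1", adGroupId: "2", criterionId: "3" })); // true
console.log(argsSatisfy(pauseKeywordTool, { customerId: "1" })); // false
```

This is also the mechanism behind the clarification behavior: an ambiguous sentence simply cannot be turned into arguments that satisfy a write tool's required fields.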

What happens when my prompt is ambiguous?

A well-built system asks a clarification question instead of guessing. "Pause underperforming campaigns" should produce "how do you want to define underperforming?" not a silent bulk pause based on the model's guess.

Can I trust the numbers the AI reports?

Trust them only if they tie to a tool call. Every claim should be backed by a query result. If the AI states a metric without first calling a tool, ask it to re-run with the underlying GAQL so you can verify.

Does this work for ChatGPT, Cursor, and Claude?

Yes. ChatGPT uses Codex connectors. Cursor and Windsurf use stdio MCP config. Claude supports both hosted connectors (Desktop, Web) and the AdsAgent plugin in Claude Code.