Google Ads AI Copilot: Suggests, Drafts, You Approve
An AI copilot for Google Ads sits next to you — drafting changes, surfacing waste, executing on approval. Here's how it differs from Smart Bidding.
An AI copilot for Google Ads is exactly what it sounds like — an AI sitting next to you in your account, suggesting changes, drafting bid updates, and executing on your approval. Same idea as GitHub Copilot for code: the human stays in the driver's seat, the AI handles the tedious work, every change is reviewable. It's a different category from Google's Smart Bidding (which is autopilot, not copilot) and from agency software (which is dashboards, not collaboration).
The product most marketers use today is Claude or ChatGPT plus a Google Ads MCP connector. NotFair is the hosted connector — it exposes about 40 tools backed by the Google Ads API and gates every destructive write behind your approval. The result is a copilot you can talk to: ask it questions, get suggestions, approve or reject each change.
Copilot vs autopilot: the important distinction
Smart Bidding is autopilot. You set a target, Google optimizes bids inside its black box, and you don't see the per-decision rationale. It's powerful but opaque. A copilot is the opposite: every action is initiated by a human prompt, the rationale is visible ("I'm pausing these because they had $1,200 in spend and zero conversions"), and you can approve, modify, or reject individually.
- Smart Bidding — Optimization decisions made inside Google's models. You set objectives. You see outcomes. Limited insight into per-bid rationale.
- AI copilot — Diagnostic and editing decisions made through conversation with you. You see every proposed change before it ships. Full audit trail of what was changed and why.
- Agency dashboards — Reporting layers on top of Google Ads. Pretty charts, but no way to act on them from inside the tool.
- Browser-automation agents — AI driving the Ads UI. Slow, brittle, and skips the API safety net. Avoid for production work.
What a Google Ads copilot actually does
- Reads — Audits, search-term reviews, change-history queries, quality-score breakdowns, performance dashboards. Runs autonomously since reads can't break anything.
- Drafts — Writes proposed changes (paused keywords, new negatives, bid updates, RSA copy variants) but does not execute until you approve.
- Suggests — Surfaces "three quick wins" or "the one bid change that would most improve ROAS" based on account state.
- Executes — On approval, calls the Google Ads API directly. No clicking through the UI. No CSV uploads.
- Reverses — Most operations expose an undoChange tool. If a change was a mistake, the rollback is one prompt.
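The "Drafts" step is worth making concrete: the copilot turns read-only metrics into proposed changes that sit in a pending state until a human signs off. Here is a minimal sketch of that idea in Python — the function name, row shape, and spend threshold are illustrative assumptions, not NotFair's actual implementation.

```python
# Sketch of the "Drafts" step: turn read-only metrics into proposed
# (not executed) changes. Names and thresholds are illustrative.

def draft_pause_proposals(rows, min_cost_micros=50_000_000):
    """Propose pausing keywords with meaningful spend and zero conversions.

    rows: iterable of dicts with 'keyword', 'cost_micros', 'conversions'.
    Returns proposals awaiting human approval; nothing is executed here.
    """
    proposals = []
    for row in rows:
        if row["cost_micros"] >= min_cost_micros and row["conversions"] == 0:
            proposals.append({
                "action": "pause_keyword",
                "keyword": row["keyword"],
                "rationale": (
                    f"${row['cost_micros'] / 1_000_000:.2f} spent, "
                    "0 conversions"
                ),
                "status": "awaiting_approval",  # gated behind the human
            })
    return proposals

rows = [
    {"keyword": "crm software", "cost_micros": 1_200_000_000, "conversions": 0},
    {"keyword": "best crm", "cost_micros": 900_000_000, "conversions": 4},
]
print(draft_pause_proposals(rows))
```

Note that every proposal carries its own rationale string — that is what makes the later approve/reject decision a ten-second read instead of a spreadsheet audit.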
What it feels like to work with a copilot
You: Anything I should know about this morning?
Claude: Three things:

1. Yesterday's spend on Search - US was $214 above pacing because impression share spiked overnight. Causes: a competitor paused a Brand campaign and a new search term "crm comparison 2026" started matching broad. CTR is healthy (4.1%), CPA is fine ($38). No action needed unless you want to cap budget.
2. The Black Friday RSA A/B test from last week reached statistical significance — Variant B has 23% better CTR. Want me to pause Variant A and shift impressions?
3. Three keywords in Search - APAC dropped to QS 2 in the last 24 hours. Likely landing-page experience flagging on a recent product page change.
You: Pause Variant A and look at the APAC quality score issue.
Why "still in control" matters here
Many marketers we talk to are nervous about handing AI the keys. The valid concern: an AI that misreads context can torch a budget overnight. The copilot pattern solves this directly. Read access can run autonomously — it's safe. Write access never runs without you. Every paused campaign, every bid update, every negative keyword is a diff you saw and approved. The AI is a force multiplier, not a replacement.
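The "every change is a diff you saw and approved" pattern reduces to a small state machine: a proposed change is pending until a human approves or rejects it, and every transition lands in an audit trail. A minimal sketch — class and field names are illustrative, not NotFair's actual schema:

```python
# Sketch of the approve-or-reject loop: every write is a reviewable
# record, and the audit trail captures what changed and why.
import datetime

class ProposedChange:
    def __init__(self, description, rationale):
        self.description = description
        self.rationale = rationale
        self.status = "pending"   # nothing executes from this state
        self.audit = []

    def _log(self, event):
        self.audit.append(
            (datetime.datetime.now(datetime.timezone.utc), event)
        )

    def approve(self):
        self.status = "approved"
        self._log(f"approved: {self.description} ({self.rationale})")

    def reject(self):
        self.status = "rejected"
        self._log(f"rejected: {self.description}")

change = ProposedChange(
    "Pause 4 keywords in Search - US",
    "$1,200 spend, 0 conversions",
)
change.approve()
print(change.status)      # approved
print(len(change.audit))  # 1
```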
The four tiers of trust to graduate through
- Tier 1 (week 1) — Read-only audits and diagnostics. Get comfortable with the model's reasoning before granting any write.
- Tier 2 (week 2-3) — Approve writes one at a time. Review every diff. Use this period to calibrate trust on small-stakes changes (negatives, RSA edits).
- Tier 3 (month 2) — Batch approvals on low-risk operations (negatives, paused-keyword adds). Still gate budget and bid-strategy changes individually.
- Tier 4 (month 3+) — Recurring autonomous tasks for read-only reports. "Audit my account every Monday morning and email the summary" runs without you.
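The tiers above amount to a simple policy: reads always run, low-risk writes graduate from individual to batch approval, and budget or bid-strategy changes are gated individually at every tier. A sketch of that policy — the tier numbers, operation categories, and return values are illustrative assumptions:

```python
# Graduated-trust policy: reads run freely, low-risk writes can be
# batch-approved at higher tiers, high-risk writes always need an
# individual sign-off. Categories are illustrative.

READ_OPS = {"audit", "search_term_report", "change_history"}
LOW_RISK_WRITES = {"add_negative", "pause_keyword", "edit_rsa"}
HIGH_RISK_WRITES = {"change_budget", "change_bid_strategy"}

def approval_required(op, tier):
    if op in READ_OPS:
        return "auto"          # reads are safe at every tier
    if tier <= 1:
        return "blocked"       # Tier 1 is read-only
    if op in LOW_RISK_WRITES:
        return "batch" if tier >= 3 else "individual"
    if op in HIGH_RISK_WRITES:
        return "individual"    # gated at every tier
    return "individual"        # unknown ops default to human review

print(approval_required("audit", 1))          # auto
print(approval_required("add_negative", 2))   # individual
print(approval_required("add_negative", 3))   # batch
print(approval_required("change_budget", 4))  # individual
```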
Where a copilot beats the alternatives
- vs hiring an agency — A copilot runs the diagnostics an agency would charge $1,500 for in 60 seconds, anytime you want, at comparable quality.
- vs hiring a junior media buyer — A copilot handles the 80% of work that's mechanical (search-term cleanup, negative adds, bid maintenance), freeing humans for strategy.
- vs Google Ads Editor — Editor is a fat client for bulk edits. A copilot is a conversational layer on top of every API operation, including diagnostics Editor doesn't support.
- vs scripts — Google Ads Scripts require code. A copilot writes the GAQL for you and explains the result. Same power, no engineering effort.
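To make the GAQL point concrete: here is the kind of query a copilot might draft for a search-term cleanup — the 25 highest-spend search terms of the last 30 days with no conversions. The query shape follows standard GAQL; the limit and conversion cutoff are illustrative choices.

```python
# Example GAQL a copilot might draft for search-term cleanup.
# Field names follow the Google Ads API; thresholds are illustrative.

SEARCH_TERM_CLEANUP_GAQL = """
SELECT
  search_term_view.search_term,
  campaign.name,
  metrics.cost_micros,
  metrics.conversions
FROM search_term_view
WHERE segments.date DURING LAST_30_DAYS
  AND metrics.conversions < 1
ORDER BY metrics.cost_micros DESC
LIMIT 25
"""

print(SEARCH_TERM_CLEANUP_GAQL.strip())
```

The copilot's value is less the query itself than the loop around it: it writes this, runs it, and comes back with "these 6 terms burned $X with nothing to show — want negatives for them?"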
Setting up your copilot
- Sign in at notfair.co/connect with the Google identity that has Ads access.
- Pick the customer IDs the copilot can see — single account or every account in a manager (MCC).
- Install the connector in your AI client of choice — Claude Desktop, Claude Web, ChatGPT (Codex connectors), Cursor or Windsurf (stdio config).
- First prompt: "audit my account." From there, copilot work is just conversation.
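For the stdio clients (Cursor, Windsurf, Claude Desktop), the connector entry follows the standard MCP `mcpServers` shape. The exact command and arguments come from NotFair's setup page, so the values below are placeholders, not the real invocation:

```json
{
  "mcpServers": {
    "google-ads": {
      "command": "REPLACE_WITH_CONNECTOR_COMMAND",
      "args": ["REPLACE_WITH_CONNECTOR_ARGS"]
    }
  }
}
```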
Try MCP with Google Ads
Connect your Google Ads account to NotFair in 30 seconds and start querying campaigns from Claude.