# AI Models
Sitefire tracks your brand's presence in AI-generated answers across multiple platforms and markets.
## Supported Platforms
- **ChatGPT**
- **Google Gemini**
- **Google AI Mode**
- **Google AI Overviews**
- **Perplexity**
- **DeepSeek**
- **Claude** (soon)
- **Microsoft Copilot** (soon)
If you need support for another model, please let us know at [support@sitefire.ai](mailto:support@sitefire.ai).
## Markets
Sitefire supports visibility tracking across 180+ markets globally. Configure your target markets in the dashboard to see region-specific results.
---
# Talk to Us
Need help with Sitefire? We're here for you.
## Book a Call
The fastest way to get help is to book a short call with our team. We'll walk through your issue together and get it sorted.
[Book a support call](https://pulse.cal.com/jochen/intro) - pick a time that works for you.
## Email Us
Prefer async? Reach out at [support@sitefire.ai](mailto:support@sitefire.ai). We typically respond within a few hours on business days.
## What We Can Help With
- Account setup and configuration
- Connecting your CMS (Webflow, Framer, and others)
- Understanding your visibility reports and GEO scores
- Prompt set customization
- Billing and plan questions
- Bug reports and feature requests
## Community
Follow us for updates and tips:
- [LinkedIn](https://www.linkedin.com/company/106162152)
- [X / Twitter](https://x.com/sitefireai)
---
import { Steps, Callout } from 'nextra/components'
# Remove a category
Categories group prompts so you can slice visibility data by topic. If a category is no longer relevant, you can delete it from any prompt's toolbar.
Removing a category won't rewrite history. It may still appear in older reports so your past data stays comparable.
### Open Prompts
In the Sitefire app sidebar, navigate to **Prompts**.
### Select a prompt
Click any prompt in the list. The contextual toolbar appears at the top of the panel.
### Open the category selector
In the toolbar, click the **category** dropdown to see every category in your workspace.
### Edit the category
Hover the category you want to remove and click **Edit** next to its name.
### Delete
In the edit panel, click **Delete**.
### Confirm
Confirm the deletion and Sitefire removes the category from your workspace.

---
import { Steps } from 'nextra/components'
# Quick Start
Get your first AI visibility report in under 5 minutes.
### Create an account
Sign up at [app.sitefire.ai](https://app.sitefire.ai) with your work email.
### Add your domain
Enter your company domain (e.g., `example.com`) and your most important subpage URLs. Sitefire automatically extracts your core topics and brand information.
### Generate your prompt set
Sitefire analyzes your domain and generates a realistic set of prompts that real users would ask AI models about your topics. You can customize and expand this set.
### Run your first analysis
Click "Analyze" and Sitefire will query AI models with your prompt set. Within minutes, you'll see:
- Your **visibility score** across AI platforms
- Which of your pages get **cited**
- Where **competitors** appear instead
- Specific **content recommendations**
### Optimize
Review the AI-optimized content briefings Sitefire generates. Edit, approve, and publish directly to your CMS.
## Next Steps
- [GEO Overview](/docs/geo-overview) - Learn the fundamentals
- [Visibility Tracking](/docs/visibility-tracking) - Deep dive into metrics
- [Improve Content](/docs/actions/improve-content) - How Sitefire scores and optimizes your pages for AI citation
---
import { Tabs, Callout } from 'nextra/components'
# Agent Plugin
Sitefire's agent plugin lets AI assistants access your action pipeline directly inside the conversation. Browse your tracked topics, create actions, read briefings, and work with generated articles — without switching tabs.
The plugin includes an **MCP server** for data access, **product knowledge** so your agent understands Sitefire's domain model, and **slash commands** for common workflows like reviewing actions and triggering article generation.
**MCP Server URL:** `https://app.sitefire.ai/api/mcp`
---
## Getting Started
Install the full plugin (MCP server + product knowledge + slash commands) with a single command:
```bash copy
npx skills add sitefire-ai/skills
```
This works with **Claude Code**, **Codex**, **Cursor**, **Gemini CLI**, and any agent that supports the [Agent Skills](https://github.com/anthropics/skills) spec.
On first use, a browser window opens where you sign in to your Sitefire account and approve access.
### What's included
| Component | Purpose |
|-----------|---------|
| **MCP Server** | Auto-connects your agent to Sitefire's API (OAuth) |
| **Product Knowledge** | Your agent understands Sitefire's domain model, workflows, and terminology |
| **`/sitefire-actions`** | Check existing actions and execute ready briefings |
| **`/sitefire-discover`** | Analyze visibility data and find new topics to work on |
| **`/sitefire-write-all`** | Trigger article generation for all ready briefings at once |
### Claude Code on the web
If you use [Claude Code on the web](https://claude.ai/code) with **Remote Control**, the plugin installs on your local machine and works automatically through the web interface.
If you use Claude Code on the web **without** Remote Control (cloud sessions), add the MCP server URL manually in your session settings. You can also upload the skills individually — see the **Claude.ai** section below for instructions.
### Claude.ai
Setting up Sitefire on claude.ai has two parts: a **connector** that gives Claude access to your data, and **skills** that teach Claude how Sitefire works and add workflow commands.
### 1. Add the connector
1. Open [claude.ai](https://claude.ai)
2. Click **Customize** in the bottom-left, then click **Connectors +**
3. Click **Add custom connector**
4. Enter a name (e.g. "Sitefire") and the server URL: `https://app.sitefire.ai/api/mcp`
5. Click **Add**
6. Click **Connect** to authorize Claude to access your Sitefire account
{/* TODO: Replace with actual GIF */}
{/*  */}
### 2. Add the skills
The skills give Claude context about Sitefire's domain model — what actions, briefings, and articles are and how they work. They also add commands like `/sitefire-discover` that make common workflows easy to trigger.
1. Download the 4 skill files from the [sitefire-ai/skills](https://github.com/sitefire-ai/skills/tree/main/skills) repository (one `.md` file from each folder: `Sitefire`, `sitefire-actions`, `sitefire-discover`, `sitefire-write-all`)
2. Go to **Customize** > **Skills**
3. Click **+** > **Create skill** > **Upload a skill**
4. Drag and drop each of the 4 files
The connector alone works — Claude can call Sitefire's API without the skills. But the skills make the experience significantly better: Claude understands Sitefire's terminology, knows the right workflow for each situation, and offers the slash commands.
### Claude Desktop
Claude Desktop supports installing the full plugin (MCP server + skills) from a `.zip` file or folder.
### Option A: Install the plugin (recommended)
1. Download the [Sitefire plugin](https://github.com/sitefire-ai/skills/archive/refs/heads/main.zip) (.zip)
2. Open Claude Desktop and go to **Settings** > **Plugins**
3. Drag the `.zip` file (or unzipped folder) into the plugin area
4. Restart Claude Desktop
This installs the MCP server, product knowledge, and all slash commands at once.
### Option B: Add the MCP server manually
If you only want the data connection without the skills:
1. Open Claude Desktop and go to **Settings** (gear icon or `Cmd+,` on Mac)
2. Navigate to **Developer** > **Edit Config**
3. Add the following to your `claude_desktop_config.json`:
```json copy
{
  "mcpServers": {
    "Sitefire": {
      "url": "https://app.sitefire.ai/api/mcp"
    }
  }
}
```
4. Restart Claude Desktop
### Other MCP clients
Any AI client that supports remote MCP servers can connect to Sitefire. Use the following URL:
```
https://app.sitefire.ai/api/mcp
```
Most clients accept a JSON configuration like this:
```json copy
{
  "mcpServers": {
    "Sitefire": {
      "url": "https://app.sitefire.ai/api/mcp"
    }
  }
}
```
Check your client's documentation for where to add MCP server configurations.
**Authentication** uses your existing Sitefire account. When you connect for the first time, a browser window opens where you sign in and approve access. The AI assistant only sees data belonging to your company — the same data you see in the Sitefire dashboard.
{/* TODO: Replace with actual screenshot */}
{/*  */}
---
## How It Works
Sitefire's action pipeline follows a simple flow:
**Topics** are the keywords you track (e.g. "crm software", "email marketing tools"). When you want to improve your visibility for a topic, you create an **action**. Sitefire diagnoses the situation, produces a **briefing** with a concrete plan, and for content-related actions, can generate a full **article**.
```
Topic → Create Action → Diagnosis → Briefing → Article (optional)
```
**Action types:** Create new content, improve an existing page, pursue editorial coverage, or engage in forums (Reddit, Q&A sites). You can pick a type or let Sitefire's AI choose the best one.
## What You Can Do
| Capability | What it does |
|---|---|
| **Browse topics** | Search your tracked topics by name, see search volume and country, check which ones already have actions |
| **Create actions** | Start a new action for any topic. Pick the action type yourself, or let Sitefire's AI diagnose and choose the best approach |
| **Check action status** | See where each action is in the pipeline - diagnosing, briefing ready, article generated |
| **Read briefings** | Pull the full briefing content - competitive analysis, recommended angle, talking points |
| **Read articles** | Get the generated article content, ready to edit and publish |
| **List all actions** | See all your actions at a glance with their topics, visibility scores, and current status |
Every query is scoped to your company. The AI assistant cannot see or modify data from other Sitefire accounts.
---
## Example Workflows
### Find topics and create actions
You want to find topics that don't have actions yet and kick off the diagnosis workflow.
**What to say:**
> Show me my Sitefire topics related to "email marketing". For any that don't have an action yet, create one and let Sitefire decide the best action type.
The assistant searches your topics, checks which ones already have actions, and creates new actions for the rest. Sitefire's AI diagnoses each topic and picks the best approach - create new content, improve an existing page, pursue editorial coverage, or engage in forums.
**Follow up with:**
> Check the progress on those actions. Which ones have briefings ready?
The assistant checks each action's status and tells you which diagnoses are complete and which briefings you can read.
### Turn a briefing into content
Once an action's diagnosis is complete and the briefing is ready, you can use the AI assistant to produce the actual content.
**For new content:**
> Get the briefing for my "best CRM software" action. Use it to write a full article draft following the recommended angle and structure.
The assistant reads the briefing - competitive landscape, recommended content angle, talking points - and writes a complete article draft based on it.
**For improving existing content:**
> Get the briefing for my "sales automation" action. Here's my current page [paste content]. Apply the recommended changes.
The assistant reads the briefing's improvement recommendations and applies them to your existing content, focusing on the changes that matter most for AI visibility.
{/* TODO: Replace with actual GIF */}
{/*  */}
---
## Tips
- **Search topics first.** Ask the assistant to search your topics before creating actions. Topic names need to match what Sitefire tracks.
- **Check back on actions.** Creating an action starts a diagnosis that takes a minute or two. Ask the assistant to check the action status to see when the briefing is ready.
- **Combine with other tools.** The assistant can use Sitefire briefings alongside web search, Google Docs, or other tools. Ask it to draft an article from a briefing, or apply improvement recommendations to content you paste in.
---
import { Callout } from 'nextra/components'
# KPIs
Sitefire tracks your AI visibility across four surfaces: the answers AI models give, the citations they pull from, the bots that crawl your site, and the traffic those answers send back to you. This page is the canonical reference for every metric you'll see across the product.
## At a glance
| Metric | Unit | What it measures |
|--------|------|------------------|
| [Visibility Score](#visibility-score) | % | Share of AI answers mentioning your brand |
| [Share of Voice](#share-of-voice) | % | Your mentions vs. all brand mentions |
| [Average Position](#average-position) | Rank (1 = best) | Average rank when you're mentioned |
| [Citation Rate](#citation-rate) | % | Share of answers citing your pages |
| [Citation Share](#citation-share) | % | Your citations vs. all citations |
| [Bot Requests](#bot-requests) | Count | AI bot visits to your site |
| [Active Bots](#active-bots) | Count | Distinct AI bots crawling you |
| [Top Provider](#top-provider) | Name | Provider sending the most crawl traffic |
| [Top Page](#top-page) | Path | Most-crawled page |
| [AI Sessions](#ai-sessions) | Count | GA4 visits from AI engines |
| [Conversions](#conversions) | Count | GA4 conversions from AI traffic |
| [Conversion Rate](#conversion-rate) | % | AI-referred sessions that convert |
| [Engagement Rate](#engagement-rate) | % | Engaged AI-referred sessions |
| [Bounce Rate](#bounce-rate) | % | Non-engaged AI-referred sessions |
| [Avg. Duration](#avg-duration) | Seconds | Average AI-referred session length |
---
## Visibility metrics
### Visibility Score
*Unit: percentage (0-100%)*
The share of AI answers that mention your brand, weighted by how much each prompt matters to you.
**Why it matters** - Tells you how often your brand shows up in AI answers at all. The headline metric, before you worry about how prominently.
**Formula**
```
Score = Σ (prompt_weight × mention_rate) / Σ prompt_weight
prompt_weight = prompt_volume × model_weight
prompt_volume = topic_volume / active_prompts_in_topic
```
**Inputs**
| Input | Description |
|-------|-------------|
| `topic_volume` | Monthly search volume for the topic. Topics are the broader subjects you track; each topic contains one or more prompts. |
| `active_prompts_in_topic` | Number of active prompts under the topic. The topic's volume is split evenly across them, so adding more prompts to a topic dilutes each prompt's weight. |
| `model_weight` | Your configured model mix. Weights across ChatGPT, Perplexity, Gemini, and other supported models, summing to 100%. |
| `mention_rate` | The share of answers on that prompt/model where your brand is mentioned (0-100%). |
**Example**
You're **Northwind**, a CRM for small businesses. You track **Contoso** and **Fabrikam** as competitors. You've set up two topics, each with a single active prompt:
| Topic | Topic volume | Active prompts | Prompt volume | Prompt text |
|-------|--------------|----------------|---------------|-------------|
| Small-business CRM tools | 1,000 | 1 | 1,000 ÷ 1 = **1,000** | A: "best CRM for small business" |
| Sales pipeline tracking | 500 | 1 | 500 ÷ 1 = **500** | B: "how to track sales pipeline" |
Each topic's volume splits evenly across its active prompts. In this example each topic has only one, so each prompt gets the full topic volume. If a topic had 3 active prompts, each would get a third of the topic volume.
Model weights: ChatGPT 80%, Perplexity 20%. Each prompt is asked on each model, so you get 4 answers.
**Answer 1** — Prompt A on ChatGPT
> For small businesses, these CRMs come up most often:
>
> 1. **Northwind** — strong pipeline management with a generous free tier [1]
> 2. **Contoso** — simple contact management, low learning curve [2]
> 3. **Fabrikam** — broader feature set, geared toward growing teams
>
> Sources: [1] northwind.com/features · [2] contoso.com
**Answer 2** — Prompt A on Perplexity
> The most-recommended CRMs for small businesses:
>
> 1. **Contoso** — affordable, used by thousands of small teams [1]
> 2. **Northwind** — richer feature set for active pipeline tracking [2]
>
> Sources: [1] contoso.com · [2] northwind.com/features · [3] fabrikam.com
**Answer 3** — Prompt B on ChatGPT
> Core practices for tracking a sales pipeline:
>
> - **Contoso**'s guide recommends a weighted-probability approach [1]
> - **Fabrikam**'s playbook emphasizes stage-by-stage forecasting [2]
>
> Sources: [1] contoso.com/guide · [2] fabrikam.com
**Answer 4** — Prompt B on Perplexity
> Three tools come up for pipeline tracking:
>
> 1. **Northwind** — visual kanban with stage probabilities [1]
> 2. **Contoso** — solid CRM with a built-in pipeline view
> 3. **Fabrikam** — enterprise-grade forecasting
>
> Sources: [1] northwind.com/tracking
Extracted, that's:
| # | Prompt | Model | Mentions (in order) | Citations |
|---|--------|-------|---------------------|-----------|
| 1 | A | ChatGPT | Northwind, Contoso, Fabrikam | northwind.com/features, contoso.com |
| 2 | A | Perplexity | Contoso, Northwind | contoso.com, northwind.com/features, fabrikam.com |
| 3 | B | ChatGPT | Contoso, Fabrikam | contoso.com/guide, fabrikam.com |
| 4 | B | Perplexity | Northwind, Contoso, Fabrikam | northwind.com/tracking |
Northwind's mention rate per prompt:
- Prompt A: mentioned on ChatGPT *and* Perplexity → 80% + 20% = **100%**
- Prompt B: mentioned only on Perplexity → 0% + 20% = **20%**
```
Score = (1,000 × 100% + 500 × 20%) / 1,500 = 73.3%
```
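A minimal Python sketch of this calculation (illustrative only; the model weights, variable names, and data layout are assumptions, not Sitefire's implementation):

```python
# Illustrative sketch of the Visibility Score formula, using the example data above.
MODEL_WEIGHTS = {"chatgpt": 0.8, "perplexity": 0.2}  # your configured model mix

def mention_rate(models_mentioning: set) -> float:
    # Share of answers mentioning the brand, weighted by the model mix.
    return sum(w for m, w in MODEL_WEIGHTS.items() if m in models_mentioning)

def visibility_score(prompts: list) -> float:
    # Score = sum(prompt_weight x mention_rate) / sum(prompt_weight)
    weighted = sum(volume * rate for volume, rate in prompts)
    return weighted / sum(volume for volume, _ in prompts)

prompts = [
    (1000 / 1, mention_rate({"chatgpt", "perplexity"})),  # Prompt A: topic volume / active prompts
    (500 / 1, mention_rate({"perplexity"})),              # Prompt B
]
print(f"{visibility_score(prompts):.1%}")  # 73.3%
```

Note how the volume split works in the sketch: a second active prompt under either topic would halve that prompt's volume, which is why adding prompts to a topic dilutes each prompt's weight.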
*Related: [Share of Voice](#share-of-voice) · [Average Position](#average-position)*
---
### Share of Voice
*Unit: percentage (0-100%)*
Your mentions as a percentage of all brand mentions in the same AI answers.
**Why it matters** - Tells you whether you're gaining ground on competitors. Visibility Score can rise across the whole category at once; Share of Voice only rises when you're winning a bigger slice of mentions than competitors are.
**Formula**
```
Share of Voice = your_mentions / total_mentions
```
**Inputs**
| Input | Description |
|-------|-------------|
| `your_mentions` | Count of answers in the selected prompts and date range that mention your brand. |
| `total_mentions` | Count of mentions of any tracked brand in the same answers. |
**Example**
You're **Northwind**, a small-business CRM. You track **Contoso** and **Fabrikam** as competitors. Across two prompts asked on ChatGPT and Perplexity, you collect 4 answers:
| # | Prompt | Model | Mentions (in order) | Citations |
|---|--------|-------|---------------------|-----------|
| 1 | A | ChatGPT | Northwind, Contoso, Fabrikam | northwind.com/features, contoso.com |
| 2 | A | Perplexity | Contoso, Northwind | contoso.com, northwind.com/features, fabrikam.com |
| 3 | B | ChatGPT | Contoso, Fabrikam | contoso.com/guide, fabrikam.com |
| 4 | B | Perplexity | Northwind, Contoso, Fabrikam | northwind.com/tracking |
Count mentions:
- Total mentions across all answers: 3 + 2 + 2 + 3 = **10**
- Northwind mentions: 1 + 1 + 0 + 1 = **3**
```
Share of Voice = 3 / 10 = 30%
```
Northwind's Visibility Score on this data is 73.3% but Share of Voice is 30%. Northwind shows up in most answers, but is only one of several brands named each time.
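The same counting, as a small Python sketch (the data layout is an assumption for illustration):

```python
# Mention lists from the four example answers, in order of appearance.
answers = [
    ["Northwind", "Contoso", "Fabrikam"],  # 1: Prompt A, ChatGPT
    ["Contoso", "Northwind"],              # 2: Prompt A, Perplexity
    ["Contoso", "Fabrikam"],               # 3: Prompt B, ChatGPT
    ["Northwind", "Contoso", "Fabrikam"],  # 4: Prompt B, Perplexity
]

def share_of_voice(answers: list, brand: str) -> float:
    total_mentions = sum(len(a) for a in answers)         # 10
    your_mentions = sum(a.count(brand) for a in answers)  # 3
    return your_mentions / total_mentions

print(f"{share_of_voice(answers, 'Northwind'):.0%}")  # 30%
```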
*Related: [Visibility Score](#visibility-score) · [Average Position](#average-position)*
---
### Average Position
*Unit: rank (1 is best; lower is better)*
When your brand is mentioned, the average rank it appears at within the AI answer.
**Why it matters** - Being mentioned 7th in a list of 10 is very different from being the first recommendation. Shows how prominently you appear, not just whether you appear.
**Formula**
```
Average Position = mean(rank) across all answers where your brand is mentioned
```
**Example**
You're **Northwind**, tracking **Contoso** and **Fabrikam**. Across two prompts asked on ChatGPT and Perplexity, you collect 4 answers:
| # | Prompt | Model | Mentions (in order) |
|---|--------|-------|---------------------|
| 1 | A | ChatGPT | Northwind, Contoso, Fabrikam |
| 2 | A | Perplexity | Contoso, Northwind |
| 3 | B | ChatGPT | Contoso, Fabrikam |
| 4 | B | Perplexity | Northwind, Contoso, Fabrikam |
Northwind's rank in each answer where it's mentioned:
- Answer 1 - rank **1**
- Answer 2 - rank **2**
- Answer 3 - *not mentioned, skipped*
- Answer 4 - rank **1**
```
Average Position = (1 + 2 + 1) / 3 = 1.33
```
Northwind typically leads the list when it's mentioned at all.
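Sketched in Python with 1-based ranks (the data layout is assumed for illustration):

```python
# Mention lists from the four example answers; Answer 3 omits Northwind.
answers = [
    ["Northwind", "Contoso", "Fabrikam"],
    ["Contoso", "Northwind"],
    ["Contoso", "Fabrikam"],
    ["Northwind", "Contoso", "Fabrikam"],
]
# Rank in every answer that mentions the brand; other answers are skipped.
ranks = [a.index("Northwind") + 1 for a in answers if "Northwind" in a]  # [1, 2, 1]
print(round(sum(ranks) / len(ranks), 2))  # 1.33
```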
*Related: [Visibility Score](#visibility-score) · [Share of Voice](#share-of-voice)*
---
## Citation metrics
### Citation Rate
*Unit: percentage (0-100%)*
The percentage of AI answers that cite one of your pages as a source.
**Why it matters** - Mentions are about your brand name showing up. Citations are about your content being the source the AI pulled from. This is what GEO content strategy targets directly.
**Formula**
```
Citation Rate = answers_citing_you / total_answers
```
**Inputs**
| Input | Description |
|-------|-------------|
| `answers_citing_you` | Count of answers that include at least one citation pointing to a domain you own. |
| `total_answers` | Count of all answers in the selected prompts and date range. |
**Example**
You're **Northwind** (northwind.com), tracking **Contoso** and **Fabrikam**. Across two prompts asked on ChatGPT and Perplexity, you collect 4 answers with these citations:
| # | Prompt | Model | Citations |
|---|--------|-------|-----------|
| 1 | A | ChatGPT | **northwind.com**/features, contoso.com |
| 2 | A | Perplexity | contoso.com, **northwind.com**/features, fabrikam.com |
| 3 | B | ChatGPT | contoso.com/guide, fabrikam.com |
| 4 | B | Perplexity | **northwind.com**/tracking |
Northwind-owned pages appear in answers 1, 2, and 4 — 3 of the 4 answers.
```
Citation Rate = 3 / 4 = 75%
```
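In Python (the `OWN_DOMAINS` set and the domain matching are simplified assumptions for illustration; real ownership checks are more involved):

```python
OWN_DOMAINS = {"northwind.com"}  # domains you own (simplified)

# Citation lists from the four example answers.
citations_per_answer = [
    ["northwind.com/features", "contoso.com"],
    ["contoso.com", "northwind.com/features", "fabrikam.com"],
    ["contoso.com/guide", "fabrikam.com"],
    ["northwind.com/tracking"],
]

def cites_you(urls: list) -> bool:
    # An answer counts once, no matter how many of your pages it cites.
    return any(url.split("/")[0] in OWN_DOMAINS for url in urls)

citation_rate = sum(map(cites_you, citations_per_answer)) / len(citations_per_answer)
print(f"{citation_rate:.0%}")  # 75%
```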
*Related: [Citation Share](#citation-share) · [Visibility Score](#visibility-score)*
---
### Citation Share
*Unit: percentage (0-100%)*
Your citations as a percentage of all citations in the same AI answers.
**Why it matters** - If your Citation Rate is flat but Citation Share is dropping, competitors are winning more of the source slots.
**Formula**
```
Citation Share = your_citations / total_citations
```
**Inputs**
| Input | Description |
|-------|-------------|
| `your_citations` | Count of citations to any domain you own. |
| `total_citations` | Count of all citations in the same answer set, across every domain. |
**Example**
You're **Northwind** (northwind.com), tracking **Contoso** and **Fabrikam**. Across two prompts asked on ChatGPT and Perplexity, you collect 4 answers with these citations:
| # | Prompt | Model | Citations |
|---|--------|-------|-----------|
| 1 | A | ChatGPT | **northwind.com**/features, contoso.com |
| 2 | A | Perplexity | contoso.com, **northwind.com**/features, fabrikam.com |
| 3 | B | ChatGPT | contoso.com/guide, fabrikam.com |
| 4 | B | Perplexity | **northwind.com**/tracking |
Count citations:
- Total citations: 2 + 3 + 2 + 1 = **8**
- Northwind citations: 1 + 1 + 0 + 1 = **3**
```
Citation Share = 3 / 8 = 37.5%
```
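Unlike Citation Rate, every individual citation counts here, not just one per answer. A quick sketch (domain matching is simplified for illustration):

```python
OWN_DOMAINS = {"northwind.com"}  # domains you own (simplified)

# All eight citations from the four example answers, flattened.
citations = [
    "northwind.com/features", "contoso.com",                  # Answer 1
    "contoso.com", "northwind.com/features", "fabrikam.com",  # Answer 2
    "contoso.com/guide", "fabrikam.com",                      # Answer 3
    "northwind.com/tracking",                                 # Answer 4
]
yours = sum(1 for url in citations if url.split("/")[0] in OWN_DOMAINS)  # 3
print(f"{yours / len(citations):.1%}")  # 37.5%
```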
*Related: [Citation Rate](#citation-rate)*
---
## AI Crawler metrics
### Bot Requests
*Unit: count*
Total requests from AI bots (GPTBot, ClaudeBot, PerplexityBot, and others) to your site in the selected date range.
**Why it matters** - Crawls feed citations. No crawls means no citations. A drop in crawls is an early warning for citation drops weeks later.
The delta on the card compares the last 7 days against the 7 days before that, using your current bot and path filters.
---
### Active Bots
*Unit: count*
The number of distinct AI bots that accessed your site in the selected range.
**Why it matters** - Shows how broadly AI platforms are picking you up. Heavy crawling from a single bot is a more fragile position than steady crawling from several.
---
### Top Provider
*Unit: provider name*
The AI provider (OpenAI, Anthropic, Google, Perplexity, and others) that sent the most bot requests in range.
**Why it matters** - Shows where your crawl traffic is concentrated, which helps you prioritize content for the platforms that actually read it.
---
### Top Page
*Unit: URL path*
The page path that received the most AI-bot requests.
**Why it matters** - Your highest-AI-attention page. Often the best candidate for structured data or a freshness update.
---
## AI Referral metrics (GA4)
These metrics come from your connected Google Analytics 4 property and cover visits to your site from AI search engines.
### AI Sessions
*Unit: count*
Visits to your site from AI search engines like ChatGPT, Gemini, and Perplexity, measured by GA4.
**Why it matters** - Turns visibility into actual traffic. If Visibility Score is climbing but AI Sessions aren't, the mentions aren't generating clicks.
---
### Conversions
*Unit: count*
Conversion events from AI-referred visitors, using the conversion goals you've set up in GA4.
**Why it matters** - The bottom line. Only as accurate as the conversion goals configured in your GA4 property.
---
### Conversion Rate
*Unit: percentage*
The percentage of AI-referred sessions that convert.
**Formula**
```
Conversion Rate = conversions / sessions
```
The delta shown on the card is in percentage points (pp).
---
### Engagement Rate
*Unit: percentage*
The percentage of AI-referred sessions that are "engaged" by GA4's definition: the visitor viewed 2+ pages, converted, or spent 10+ seconds actively on the site.
**Why it matters** - Filters out low-intent traffic. A high session count with low engagement means AI is sending visitors who aren't a fit.
**Formula**
```
Engagement Rate = engaged_sessions / sessions
```
---
### Bounce Rate
*Unit: percentage*
The percentage of sessions that aren't engaged: no second page view, no conversion, and under 10 seconds of active time (GA4's definition).
**Formula**
```
Bounce Rate = 1 - Engagement Rate
```
High Bounce Rate with long Avg. Duration often means visitors read the page but didn't click further. Not necessarily a failure.
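The three GA4-derived rates fit in a few lines of Python (the session counts below are made up for illustration):

```python
# Hypothetical GA4 numbers for AI-referred traffic.
sessions, engaged_sessions, conversions = 400, 220, 12

conversion_rate = conversions / sessions       # 3.0%
engagement_rate = engaged_sessions / sessions  # 55.0%
bounce_rate = 1 - engagement_rate              # 45.0%
print(f"{conversion_rate:.1%}  {engagement_rate:.1%}  {bounce_rate:.1%}")
```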
---
### Avg. Duration
*Unit: seconds*
The average time between the first and last event in a session.
---
import { Cards } from 'nextra/components'
# Sitefire Documentation
Sitefire tracks how AI models answer questions about your industry and whether they mention or cite your brand. When they don't, it diagnoses why and tells you what to do about it.
## Methodology and actions
Understand how Sitefire analyzes AI visibility and what actions it recommends.
## How to
Step-by-step guides for common tasks.
## Integrations
Connect Sitefire to your analytics, CMS, and AI assistants.
### Tracking
Measure AI-driven visitors in your existing analytics.
### CMS
Connect your CMS so Sitefire can publish content directly.
### Agent Plugin (MCP)
Let AI assistants like Claude and ChatGPT access your Sitefire data directly.
---
# AI Answer
Generative Engine Optimization (GEO) is the practice of optimizing your online presence to appear in AI-generated answers.
### Brand Mentions and Search Queries
AI models don't answer questions in isolation. They break each prompt into multiple background search queries - we call this the **query fanout** - and then synthesize results into a single answer. Sitefire tracks which brands get mentioned and every search query the AI generates behind the scenes.

### Sourced Pages and Citations
For each AI answer, Sitefire identifies the **sourced pages** the model pulled information from and the **citations** it included in the response. A single page can be cited multiple times within one answer - each citation is tracked separately.

---
# Diagnosis
For every topic cluster, Sitefire runs a diagnosis agent that analyzes why certain content gets cited by AI engines and yours doesn't. The output is a structured report that ends with one clear recommended action.
---
## What the Agent Analyzes
### 1. Visibility vs. Citations
The agent starts by mapping who gets mentioned and who gets cited. These are different signals:
- **Visibility** means an AI engine talks about your brand when answering a question
- **Citation** means it links to your content as a source
A brand can have high visibility (AI engines mention it frequently) but zero citations (they never link to its pages). This is common - and it's the gap Sitefire closes.
### 2. Citation Landscape
Who owns the citations for this cluster? The agent classifies every cited URL:
| Classification | Example |
| :--- | :--- |
| **Corporate** | Company blogs, agency sites, SaaS review pages |
| **Editorial** | TechRadar, Forbes, industry publications |
| **UGC** | Reddit threads, Stack Overflow, community forums |
| **Competitor** | Direct competitor pages |
| **Own** | Your own content |
The distribution matters. If editorial sites own 50% of citations, the strategy is different than if corporate blogs dominate.
### 3. Top-Cited Content (3C Classification)
The agent classifies every top-cited page along three axes to identify the winning formula:
#### C1: Content Type
The most important signal. AI engines overwhelmingly cite the same type of page for a given query. If every cited result is a blog post, your product page won't break through - regardless of how good it is.
| Content Type | Description |
| :--- | :--- |
| Blog Post / Article | Editorial content - guides, comparisons, listicles |
| Product / Feature Page | Marketing page for a product, feature, or pricing |
| Category / Listing Page | Collection or directory of items |
| Landing Page | Focused conversion page for a service or tool |
| Video | YouTube or video-dominant result |
| Interactive Tool | Calculator, checker, generator, playground |
| Documentation | Reference docs, API docs, knowledge base |
| Forum / Discussion | Reddit, Stack Overflow, community Q&A |
#### C2: Content Format
How is the content structured? A how-to guide and a comparison article serve fundamentally different intents - even though both are blog posts. Format mismatches are the most common missed opportunity.
| Content Format | Signals |
| :--- | :--- |
| How-to Guide | Step-by-step instructions |
| Listicle | Numbered list - "10 best...", "top N..." |
| Definitive Guide | Long-form, "complete guide", "everything you need" |
| Comparison | X vs Y, head-to-head evaluation |
| Review | Single product or service evaluation |
| Opinion / Thought Piece | Perspective, argument, commentary |
| Roundup | Curated from multiple sources or experts |
| Statistics Post | Aggregated data - "N statistics about..." |
| Checklist | Actionable verification list |
| Case Study | Real-world implementation with results |
#### C3: Content Angle
What is the hook? The winning angle tells you how to position your content against what already gets cited.
| Angle | Signals |
| :--- | :--- |
| Freshness | Current year in title, "2026", "updated" |
| Speed / Ease | "quick", "easy", "in 5 minutes" |
| Cost | "free", "cheap", "budget", "open-source" |
| Audience-specific | "for beginners", "for enterprise", "for developers" |
| Depth | "complete", "ultimate", "everything you need" |
| Niche Specificity | Narrow use case or industry vertical |
| Authority | "expert-tested", "we reviewed N products", data-driven |
### 4. Your Existing Content
Finally, the agent looks at what you already have. Does your site have a page that matches the winning C1 + C2 profile? If so, how does it perform compared to the top-cited content?
---
## The 4 Actions
Based on the diagnosis, Sitefire recommends one of four actions:
| Diagnosis finding | Action | What it means |
| :--- | :--- | :--- |
| C1 or C2 mismatch | [Create content](/docs/actions/create-content) | You don't have the right type or format of page. No amount of optimization will fix a product page competing against listicles. |
| C1 + C2 match, low citations | [Improve content](/docs/actions/improve-content) | You have the right page - it just underperforms. Sitefire scores it with the GEO Score and generates specific improvements. |
| UGC dominates citations | [Engage UGC](/docs/actions/engage-ugc) | AI engines cite Reddit threads and forum answers, not corporate content. The strategy is to participate where citations happen. |
| Editorial dominates citations | [Editorial coverage](/docs/actions/editorial-coverage) | AI engines cite publications like TechRadar or Forbes. The strategy is to earn coverage from the outlets that get cited. |
The key principle: when C1 or C2 doesn't match, you need new content - not optimization. Only when your content type and format already match the cited results does page-level improvement make sense.
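The decision logic above reduces to a small branch. A sketch - the labels, signature, and precedence between the UGC/editorial checks and the C1/C2 check are our illustrative assumptions, not Sitefire's internal API:

```python
def recommend_action(c1_match: bool, c2_match: bool,
                     dominant_source: str, citations_low: bool) -> str:
    """Map diagnosis findings to one of the four actions.

    dominant_source is the category that wins the citation mix,
    e.g. "ugc", "editorial", or "corporate" (illustrative labels).
    """
    if dominant_source == "ugc":
        return "engage-ugc"            # participate where citations happen
    if dominant_source == "editorial":
        return "editorial-coverage"    # earn coverage from cited outlets
    if not (c1_match and c2_match):
        return "create-content"        # wrong type/format: optimization won't help
    if citations_low:
        return "improve-content"       # right page, underperforming
    return "no-action"

print(recommend_action(True, True, "corporate", True))  # → improve-content
```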
---
# Google Analytics (GA4)
import { Steps } from 'nextra/components'
# Google Analytics (GA4)
By default, GA4 lumps visits from ChatGPT, Perplexity, and other AI platforms into the generic "Referral" channel. This guide creates a dedicated "Artificial Intelligence" channel so AI traffic shows up separately in your reports.
> **Time needed:** ~5 minutes. Changes apply retroactively to historical data.
---
## Setup
### Open Channel Groups
Go to **Admin** (gear icon, bottom-left) > under **Property settings**, expand **Data display** > click **Channel groups**.
Click the blue **"Create new channel group"** button.

### Create the Channel Group
Name the group something descriptive (e.g., "With AI Traffic"). This creates a copy of all default channels that you can customize.
Click **"Add new channel"** to define your AI traffic channel.

### Configure the AI Channel
Set the channel name to **Artificial Intelligence** (or whatever label you prefer in reports).
Configure the condition:
- **Dimension:** Source
- **Condition:** matches regex
- **Value:** paste the regex below
```regex copy filename="AI Source Regex"
chatgpt\.com|chat\.openai\.com|perplexity\.ai|claude\.ai|gemini\.google\.com|copilot\.microsoft\.com|deepseek\.com|you\.com|meta\.ai|poe\.com
```
This covers the major AI platforms that drive the vast majority of referral traffic.
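If you want to sanity-check the pattern before saving, you can test it locally. A quick Python sketch - `re.fullmatch` is the stricter reading of GA4's "matches regex" condition, which evaluates against the full source value:

```python
import re

# Same pattern as above, split across lines for readability.
AI_SOURCE_REGEX = (
    r"chatgpt\.com|chat\.openai\.com|perplexity\.ai|claude\.ai|"
    r"gemini\.google\.com|copilot\.microsoft\.com|deepseek\.com|"
    r"you\.com|meta\.ai|poe\.com"
)

for source in ["chatgpt.com", "gemini.google.com", "duckduckgo.com"]:
    matched = bool(re.fullmatch(AI_SOURCE_REGEX, source))
    print(f"{source}: {'AI' if matched else 'not AI'}")
```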
Click **Apply**, then **Save channel**.

### Reorder - Place AI Above Referral
This step is critical. GA4 assigns traffic to the **first matching channel** in the list. If "Referral" sits above your new AI channel, AI traffic will still get caught by Referral and never reach your custom channel.
Use the drag handles to move **Artificial Intelligence** directly above **Referral**, then save.
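First-match semantics are why the ordering matters. A minimal model of the assignment (illustrative only - GA4's real channel rules also consider medium, campaign, and other dimensions):

```python
import re

def assign_channel(source: str, channels: list[tuple[str, str]]) -> str:
    """Return the name of the first channel whose pattern matches the source."""
    for name, pattern in channels:
        if re.search(pattern, source):
            return name
    return "Unassigned"

ai = r"chatgpt\.com|perplexity\.ai"
referral = r"."  # stand-in: Referral catches any referrer here

# Referral listed first: AI traffic never reaches the AI channel.
print(assign_channel("chatgpt.com",
                     [("Referral", referral), ("Artificial Intelligence", ai)]))
# AI listed first: correct attribution.
print(assign_channel("chatgpt.com",
                     [("Artificial Intelligence", ai), ("Referral", referral)]))
```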

### Switch to the Custom Channel Group
Navigate to **Reports** > **Life cycle** > **Acquisition** > **Traffic acquisition**.
Click the dimension dropdown (the blue pill at the top of the table) and select **Session Custom channel group**.

### View Your AI Traffic
You should now see **Artificial Intelligence** as its own row in the acquisition table, with Sessions, Users, Engagement Rate, and all other standard metrics broken out.

---
## Which Metrics to Track
| Metric | Use | Detail |
| :--- | :--- | :--- |
| **Sessions** | Primary metric | Total visits from AI platforms. Most stable number for weekly trends. |
| **Engagement Rate** | Quality check | Sessions with 10+ seconds, 2+ pages, or a conversion. Compare against Organic Search. |
| **New Users** | Quarterly review | Whether AI platforms drive net-new audience. |
> GA4 undercounts AI traffic by an estimated 30-40%. Free-tier ChatGPT users and mobile AI apps often don't send referrer headers, so those visits appear as "Direct". Use [Cloudflare tracking](/docs/tracking/cloudflare) as a complementary signal.
---
# Cloudflare
import { Callout, Steps } from 'nextra/components'
# Cloudflare
If your site is behind Cloudflare, you already have AI bot analytics built in - no log parsing, no code changes. Cloudflare's **AI Crawl Control** identifies crawlers like GPTBot, ClaudeBot, and PerplexityBot and shows you exactly which bots visit which pages.
You can use this data in two ways:
1. **Read it manually** in the Cloudflare dashboard - great for weekly check-ins
2. **Connect API to Sitefire** via API token - so we can pull the data automatically and combine it with your [GEO visibility metrics](/docs/geo-overview)
---
## Read AI Traffic in Cloudflare
Log in to the [Cloudflare dashboard](https://dash.cloudflare.com/), select your domain, and navigate to **AI Crawl Control** in the left sidebar.
### The Two Metrics That Matter
| Metric | What it tells you | Where to find it |
| :--- | :--- | :--- |
| **Allowed Requests by Operator** | How often AI bots crawl your site, grouped by operator (OpenAI, Anthropic, Google, Perplexity, etc.). This is your supply-side leading indicator - bots need to crawl your content before they can cite it. | Overview tab > "Crawlers grouped by operators". For trends: Metrics tab > "Allowed requests", group by **Operator**. |
| **Referrals by Operator** | How often humans click through from AI platforms to your site. The closest conversion metric for AI visibility. Unlike GA4, Cloudflare captures this server-side. | Metrics tab > "Referrals over time", group by **Operator**. |
Referral data requires a paid Cloudflare plan (Pro or above). On the free plan, use [GA4](/docs/tracking/google-analytics) for referral tracking.
**Export:** On the Metrics tab, click **Download CSV** or **Download image** for your own reporting.
---
## Connect API to Sitefire
For automated, continuous tracking, give Sitefire read-only access to your Cloudflare analytics via API token. We query the same data the dashboard shows and surface it alongside your [visibility tracking](/docs/visibility-tracking) and [GEO Score](/docs/actions/improve-content).
> **Time needed:** ~5 minutes. Two steps: find your Zone ID, create an API token.
### Find Your Zone ID
Go to **Cloudflare Dashboard** > select your domain. On the right sidebar of the **Overview** page, copy the **Zone ID** (a 32-character hex string).
### Create an API Token
Go to **My Profile** (top-right avatar) > **API Tokens** > **Create Token** > scroll past templates and click **Create Custom Token**.
| Setting | Value |
| :--- | :--- |
| Token name | `Sitefire - AI Analytics (read-only)` |
| Permissions | **Account** > **Account Analytics** > **Read** |
| Zone Resources | Include > Specific zone > select your domain |
Leave IP filtering and TTL blank unless your security policy requires them.
Click **Continue to summary**, review, then **Create Token**. **Copy the token immediately** - Cloudflare only shows it once.
Do not use your Global API Key. It grants unrestricted access to all zones and resources. API tokens are scoped, revocable, and the only method we support.
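Before sending the token, you can confirm it authenticates. Cloudflare exposes a token verification endpoint (the same one shown in the curl command on the token creation screen); here is a minimal sketch using only the standard library - the helper name is ours:

```python
import urllib.request

CF_VERIFY_URL = "https://api.cloudflare.com/client/v4/user/tokens/verify"

def build_verify_request(token: str) -> urllib.request.Request:
    """Build the GET request for Cloudflare's token verification endpoint."""
    return urllib.request.Request(
        CF_VERIFY_URL,
        headers={"Authorization": f"Bearer {token}"},
    )

# To actually verify (requires network access):
# import json
# with urllib.request.urlopen(build_verify_request("YOUR_TOKEN")) as resp:
#     print(json.load(resp))  # a valid token reports success with status "active"
```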
### Share with Sitefire
Email **support@sitefire.ai** with:
- Your **API token** (from Step 2)
- Your **Zone ID** (from Step 1)
- The **domain name** the zone belongs to
We configure the sync on our end and you'll see AI bot data in your dashboard within a few hours.
---
# AWS CloudFront
import { Tabs, Callout, Steps } from 'nextra/components'
# AWS CloudFront
CloudFront access logs record every request to your site - including the user-agent string that identifies AI crawlers like GPTBot, ClaudeBot, and PerplexityBot. By giving Sitefire read-only access to these logs, we can show you exactly which AI bots visit which pages, how often, and how that changes over time.
> **Time needed:** ~15 minutes. Two steps: enable logging, create an IAM role.
---
## How It Works
CloudFront writes a gzip-compressed log file to S3 for every batch of requests. Each line includes the URL path, timestamp, and user-agent. Sitefire assumes a read-only IAM role in your account, syncs new log files, filters for AI bot user-agents, and surfaces the insights in your dashboard.
This is the same cross-account IAM pattern used by Datadog, New Relic, and other SaaS tools. No credentials are shared. You stay in full control.
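To make the filtering concrete, here is a sketch of what parsing a W3C log for AI bot user-agents looks like. The bot token list is illustrative, and the sketch assumes tab-delimited rows (CloudFront URL-encodes the user-agent field):

```python
import urllib.parse

# Illustrative subset of AI crawler user-agent tokens.
AI_BOT_TOKENS = ("GPTBot", "ClaudeBot", "PerplexityBot", "OAI-SearchBot")

def ai_bot_hits(log_lines):
    """Yield (user_agent, path) for AI bot requests in a W3C access log.

    Reads the '#Fields:' header to locate columns, so the sketch works
    regardless of which fields were selected during setup.
    """
    fields = []
    for line in log_lines:
        if line.startswith("#Fields:"):
            fields = line.split()[1:]               # column names
        elif not line.startswith("#") and line.strip():
            row = dict(zip(fields, line.rstrip("\n").split("\t")))
            ua = urllib.parse.unquote(row.get("cs(User-Agent)", ""))
            if any(token in ua for token in AI_BOT_TOKENS):
                yield ua, row.get("cs-uri-stem", "")
```

Log files arrive gzip-compressed, so in practice you'd feed this from `gzip.open(path, "rt")`.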
---
## Step 1: Enable CloudFront Logging
If your distribution already has logging enabled, skip to [Step 2](#step-2-create-an-iam-role-for-sitefire).
Standard Logging v2 (launched November 2024) is the recommended option for new setups. It delivers logs to S3 without requiring bucket ACLs, and the console handles all permissions automatically.
### Open the Logging tab
Go to **CloudFront** > **Distributions** > select your distribution > **Logging** tab > click **Add**.
### Configure S3 delivery
- Select **Amazon S3** as the destination
- Choose or create an S3 bucket (e.g., `yourcompany-cf-logs`)
- Optionally set a prefix
- Output format: select **W3C** (our parser requires this format)
- Field selection: make sure **cs(User-Agent)** is included (it is by default)
The console automatically creates the required S3 bucket policy. No manual permission setup needed.
### Save
Logs start appearing in your bucket within a few minutes.
### Verify logs are flowing
Wait 5 minutes, then check that files are appearing in your bucket:
```bash
aws s3 ls s3://YOUR-BUCKET/YOUR-PREFIX/ --recursive --summarize | tail -3
```
You should see `.gz` files. If the bucket is empty, double-check that logging is enabled on the correct distribution.
If you already have legacy standard logging enabled (the toggle under **General** > **Settings** > **Edit**), you're all set. No changes needed - Sitefire works with legacy logs.
Just note the **S3 bucket name** and **prefix** where your logs are stored, and continue to Step 2.
---
## Step 2: Create an IAM Role for Sitefire
This role grants Sitefire **read-only** access to your log bucket - nothing else.
Your **Account ID** and **External ID** are shown in the Sitefire app. Go to **Crawler Analytics** → **Connect CDN** → **AWS CloudFront** to find them.
### Create a new IAM role
Go to **IAM** > **Roles** > **Create Role** > select **Custom trust policy**.
Paste the following trust policy:
```json copy filename="Trust Policy"
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::SITEFIRE_ACCOUNT_ID:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "UNIQUE_EXTERNAL_ID"
        }
      }
    }
  ]
}
```
Replace `SITEFIRE_ACCOUNT_ID` and `UNIQUE_EXTERNAL_ID` with the values shown in the Sitefire setup wizard.
Click **Next**.
### Attach a permission policy
Click **Create policy** (opens in a new tab), switch to the **JSON** editor, and paste:
```json copy filename="Permission Policy"
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR-LOG-BUCKET",
        "arn:aws:s3:::YOUR-LOG-BUCKET/*"
      ]
    }
  ]
}
```
Replace `YOUR-LOG-BUCKET` with your bucket name from Step 1.
Name the policy something like `sitefireLogReaderPolicy`, then save it.
Go back to the role creation tab, refresh the policy list, and attach the policy you just created. Click **Next**.
### Name and create the role
Name the role `sitefireLogReader` (or similar) and click **Create role**.
### Enter details in Sitefire
Go back to the Sitefire setup wizard and click **I've created the IAM role**. Enter:
- The **Role ARN** from the role summary page (e.g., `arn:aws:iam::123456789012:role/sitefireLogReader`)
- Your **S3 bucket name** and prefix (if any)
Click **Connect & Import** to validate the connection and start syncing.
That's it. Sitefire imports the last 7 days of AI bot traffic, and you'll see data in your dashboard within minutes.
---
# webflow
import { Steps, Callout } from 'nextra/components'
# Webflow
This guide walks you through creating a Webflow API token for [CMS integration](/docs/cms/setup) so Sitefire can publish content to your site.
### Open Site Settings
Open the Webflow Designer for your site. Click the **Webflow menu** (top left) and select **Site settings**.

### Navigate to Apps & Integrations
In the left sidebar, navigate to **Apps & Integrations**.

### Generate an API Token
Scroll down to the **API access** section and click **Generate API token**.

Give the token a descriptive name (e.g., `Sitefire`) and set the permissions as follows:
| Permission | Level | Purpose |
| --- | --- | --- |
| Assets | Read and write | Upload hero/OG images |
| Authorized User | Read-only | Token verification |
| CMS | Read and write | Push blog posts as drafts, upsert by slug |
| Components | Read-only | Verify layout compatibility |
| Pages | Read and write | Set meta titles, descriptions, OG tags |
| Sites | Read-only | List sites in workspace, read site info |
All other permissions can stay at **No access**.

Copy the token immediately - Webflow only shows it once.
Paste the token into the **API Key** field in Sitefire when [connecting your Webflow site](/docs/cms/setup). Then select your site and collection, map your fields, and you can start pushing articles.
---
# CMS Integrations
# CMS Integrations
Sitefire can push articles directly to your CMS as drafts. Currently supported: **Framer** and **Webflow**.
## Framer
### 1. Get your Project URL
1. Open your project in [Framer](https://framer.com)
2. Copy the URL from your browser's address bar — it looks like:
```
https://framer.com/projects/YourSite--aAbBcCdD1234567890xy
```
3. Paste this URL into the **Project URL** field in Sitefire
### 2. Create an API key
1. In your Framer project, click the **Framer** menu (top-left) → **Settings**
2. Scroll to **API** and click **Generate API Key**
3. Copy the key and paste it into the **API Key** field in Sitefire
For step-by-step instructions with screenshots, see the [Framer API key guide](/docs/cms/framer).
### 3. Select a collection
After connecting, Sitefire will show the CMS collections in your Framer project. Select the collection where you want articles to be published (e.g. "Blog Posts").
### 4. Map your fields
Map each article field (Title, Body, Slug, Excerpt, Date) to the corresponding CMS field. Fields that aren't mapped will be skipped during push.
---
## Webflow
### 1. Create an API token
1. In the [Webflow Designer](https://webflow.com/dashboard), open your site and go to **Site settings** (Webflow menu → Site settings) → **Apps & Integrations** → **API access**
2. Click **Generate API token**
3. Give the token a name (e.g. `Sitefire`) and set these permissions:
| Permission | Level | Purpose |
| --- | --- | --- |
| Assets | Read and write | Upload hero/OG images |
| Authorized User | Read-only | Token verification |
| CMS | Read and write | Push blog posts as drafts, upsert by slug |
| Components | Read-only | Verify layout compatibility |
| Pages | Read and write | Set meta titles, descriptions, OG tags |
| Sites | Read-only | List sites in workspace, read site info |
All other permissions can stay at **No access**.
4. Copy the token and paste it into the **API Key** field in Sitefire
For step-by-step instructions with screenshots, see the [Webflow API token guide](/docs/cms/webflow).
### 2. Select a site
If your workspace has multiple sites, Sitefire will ask you to pick one. If you only have one site, it's selected automatically.
### 3. Select a collection
Choose the CMS collection where articles will be published. Sitefire shows all collections from your selected site.
### 4. Map your fields
Map each article field to the corresponding Webflow CMS field. Sitefire auto-detects common field names (like "Post Body" → Body, "Name" → Title), but you can adjust the mapping.
**Supported field types:**
- **PlainText** — used for Title, Slug, Excerpt
- **RichText** — used for Body (markdown is converted to HTML automatically)
- **Date** — used for publish date
---
## Pushing articles
Once connected, each completed article shows a **Push to [CMS]** button. Articles are always pushed as **drafts** — you can review and publish them in your CMS editor.
If an article with the same slug already exists in your CMS, Sitefire updates it instead of creating a duplicate.
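Conceptually, the push is an upsert keyed on slug. A minimal sketch of that flow - the `cms` client and its method names are hypothetical placeholders, not the Webflow or Framer API:

```python
def push_article(cms, collection_id: str, article: dict) -> str:
    """Create the article as a draft, or update the item with the same slug.

    `cms` is a hypothetical client object; real CMS APIs differ.
    """
    existing = cms.find_item(collection_id, slug=article["slug"])
    if existing:
        cms.update_item(collection_id, existing["id"], article, draft=True)
        return "updated"
    cms.create_item(collection_id, article, draft=True)
    return "created"
```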
### Tables in Webflow
Webflow's rich text fields don't support HTML tables. When your article contains tables, Sitefire converts them to a copyable HTML snippet inside a blockquote. To display the table on your site:
1. Push the article to Webflow
2. Open it in the Webflow Designer
3. Find the blockquote containing the table HTML
4. Replace it with a **Custom Code** embed block
5. Paste the HTML into the embed block
---
# framer
import { Steps, Callout } from 'nextra/components'
# Framer
This guide walks you through creating a Framer API key for [CMS integration](/docs/cms/setup) so Sitefire can publish content to your site.
### Open Site Settings
In the Framer editor, click the **gear icon** (top-right) to open **Site Settings**. Make sure you're on the **General** tab in the left sidebar, then scroll down past Site Title, Site Description, and Preview.

### Create an API Key
Scroll down to the **API Keys** section and click **Add API Key**.

Copy the generated key immediately - it won't be shown again.
Paste the key into the **API Key** field in Sitefire when [connecting your Framer project](/docs/cms/setup). Then choose your collection and map your fields to finish setup.
---
# Improve Content
# Improve Content
The [diagnosis](/docs/3c-content-classification) found that your page matches the dominant content type (C1) and format (C2) for this topic - but it's either not cited by AI engines at all, or it earns far fewer citations than competitors. You have the right content. It needs to be improved.
Sitefire scores the page with the **GEO Score** and generates a full improvement briefing with prioritized fixes.
---
## GEO Score
The GEO Score measures how well a page is optimized to be cited by AI engines like ChatGPT, Perplexity, or Google AI Overviews. It answers one question: when an AI reads this page, can it extract trustworthy, well-structured content worth citing?
A high GEO Score doesn't guarantee citations, but a low one almost guarantees you won't get them.
---
## The 11 Tests
Every page is evaluated against 11 tests, grouped into three categories. Each test scores 0-10 and belongs to an impact tier (T1 = 3x weight, T2 = 2x, T3 = 1x) that determines how much it affects the final score.
### Authority - Can the AI trust what's on the page?
| # | Test | Impact | Before | After |
| :--- | :--- | :--- | :--- | :--- |
| 1 | Source Citations | T1 (3x) | "Studies show most users prefer simple designs." | "78% of users prefer simple designs (Webflow UX Survey, 2024)." |
| 2 | Statistics & Data | T1 (3x) | "Significantly improves build times for many teams." | "Reduces average build time by 43% across 1,200 teams (Source, 2024)." |
| 3 | Freshness Signals | T2 (2x) | No date visible on the page. | "Last reviewed: January 2026" below the title. |
| 4 | Author Attribution | T2 (2x) | No byline. Content appears anonymous. | "By Sarah Chen, Head of Design" with link to bio. |
### Readability - Can the AI extract the answer quickly?
| # | Test | Impact | Before | After |
| :--- | :--- | :--- | :--- | :--- |
| 5 | Answer-First Structure | T1 (3x) | "There are several factors to consider when choosing a website builder..." | "Webflow is the best website builder for design-led teams." |
| 6 | Paragraph Length | T2 (2x) | One paragraph covering pricing, features, and limitations in 8 sentences. | Three paragraphs - one per topic, each 2-3 sentences. |
### Structure - Can the AI parse the page correctly?
| # | Test | Impact | Before | After |
| :--- | :--- | :--- | :--- | :--- |
| 7 | FAQ + Schema | T1 (3x) | Questions on the page but no structured data markup. | Each Q&A pair in FAQPage JSON-LD with matching visible HTML. |
| 8 | Tables & Structured Data | T1 (3x) | "Plan A costs $14/mo, Plan B $16/mo, Plan C $17/mo." | A comparison table with columns for Plan, Price, and Features. |
| 9 | Semantic HTML | T2 (2x) | Content in nested `<div>` elements. | Main content in `<main>`, sections in `<section>`. |
| 10 | Heading Hierarchy | T2 (2x) | Multiple H1 tags, H2 jumping to H4, generic labels. | Single H1, descriptive H2s for each section, H3s for subsections. |
| 11 | Visible vs Hidden Content | T3 (1x) | FAQ answers collapsed inside JavaScript accordions. | All answers visible in the initial HTML. |
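For test 7, the target markup is schema.org's FAQPage type. A sketch that generates it from Q&A pairs - the JSON-LD shape is the standard one, the helper itself is illustrative:

```python
import json

def faq_jsonld(pairs: list[tuple[str, str]]) -> str:
    """Build a FAQPage JSON-LD script block from (question, answer) pairs."""
    data = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }
    return f'<script type="application/ld+json">{json.dumps(data, indent=2)}</script>'

print(faq_jsonld([("What is a GEO Score?",
                   "A 0-100 measure of how citable a page is for AI engines.")]))
```

The visible HTML must contain the same questions and answers - marking up content the user can't see violates Google's structured data guidelines, which is exactly what the "matching visible HTML" half of the test checks.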
---
## Scoring
Each test scores 0 to 10, multiplied by its tier weight (T1 = 3x, T2 = 2x, T3 = 1x).
**GEO Score = (sum of all weighted test scores / 260) x 100**
The maximum of 260 comes from: (5 T1 tests x 10 x 3) + (5 T2 tests x 10 x 2) + (1 T3 test x 10 x 1) = 150 + 100 + 10 = 260.
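The computation as code. The maximum is derived from the tier weights rather than hardcoded, and the per-test scores below are invented for illustration:

```python
TIER_WEIGHT = {"T1": 3, "T2": 2, "T3": 1}

def geo_score(test_results: dict[str, tuple[int, str]]) -> float:
    """test_results maps test name -> (score 0-10, impact tier)."""
    weighted = sum(score * TIER_WEIGHT[tier] for score, tier in test_results.values())
    maximum = sum(10 * TIER_WEIGHT[tier] for _, tier in test_results.values())
    return round(weighted / maximum * 100, 1)

# Invented example scores for the 11 tests (names abbreviated).
results = {
    "Source Citations": (4, "T1"), "Statistics & Data": (6, "T1"),
    "Freshness": (2, "T2"), "Author Attribution": (8, "T2"),
    "Answer-First": (7, "T1"), "Paragraph Length": (9, "T2"),
    "FAQ + Schema": (0, "T1"), "Tables": (5, "T1"),
    "Semantic HTML": (8, "T2"), "Headings": (7, "T2"),
    "Visible Content": (10, "T3"),
}
print(geo_score(results))  # → 55.4
```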
---
## From Score to Action
The raw 0-10 scores get translated into severity levels that determine what to fix first. Severity depends on both the score and the test's impact tier - a low score on a T1 test is more urgent than the same score on a T2 test.
| Impact | Score 0-3 | Score 4-7 | Score 8-10 |
| :--- | :--- | :--- | :--- |
| **T1 (3x)** | Critical | Moderate | No action |
| **T2 (2x)** | Moderate | Minor | No action |
| **T3 (1x)** | Minor | Minor | No action |
A score of 3 on Source Citations (T1) is **Critical**. The same score of 3 on Freshness (T2) is only **Moderate**. The tier reflects how much that dimension actually moves AI citation rates.
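The severity table as a lookup function, mirroring the score bands and tiers above:

```python
def severity(score: int, tier: str) -> str:
    """Translate a 0-10 test score and impact tier into a severity level."""
    if score >= 8:
        return "No action"
    band = "low" if score <= 3 else "mid"     # score 0-3 vs 4-7
    table = {
        ("T1", "low"): "Critical", ("T1", "mid"): "Moderate",
        ("T2", "low"): "Moderate", ("T2", "mid"): "Minor",
        ("T3", "low"): "Minor",    ("T3", "mid"): "Minor",
    }
    return table[(tier, band)]

print(severity(3, "T1"), "|", severity(3, "T2"))  # → Critical | Moderate
```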
---
## How an Improve Content Action Looks
After scoring the page, Sitefire generates a full improvement briefing:
1. **GEO Score audit** - each of the 11 tests scored with a rationale
2. **Quick wins** - the 2-3 highest-impact, lowest-effort fixes
3. **Recommendations** - every improvement with severity, affected section, before/after examples, and estimated effort
4. **Competitor comparison** - your page side-by-side with the top-cited competitors across key dimensions
5. **Implementation checklist** - grouped by effort (quick wins, content enrichment, technical changes)
---
# Engage UGC
# Engage UGC
Coming soon.
---
# Editorial Coverage
# Editorial Coverage
Coming soon.
---
# Create Content
# Create Content
Coming soon.