Table of Contents
- Getting Started
- Adding a New Property
- Setting Up Your Property
- Brands
- Entities
- Categories
- Queries
- Tags
- Locations
- Citation Prompts
- Prompt Sets
- Models
- Objectives
- Dashboard
- Visibility Tracking
- Running Association Probes
- Measuring Relevance
- Citation Mining
- Snippet Optimization
- Holistic Optimization
- Page Grounding
- Treewalker
- Veracity
- Organic Search (GSC)
- Intent Classification
- Benchmark
- Gap Analysis
- Reports
- Exporting Data
- Sharing with Other Users
Getting Started
Logging In
- Open Auxy in your browser at app.auxy.com.
- Click Sign in with Google.
You will be redirected to the Properties list after login.
Navigation
- Properties list — your home screen, showing all properties you have access to.
- Property detail — click any property to open it. The detail view contains these tabs:
| Tab | Purpose |
|---|---|
| Dashboard | Brand visibility overview with mention share, visibility scores, and trend charts |
| Tracking | Grounded visibility tracking with per-entity, per-location snapshots over time |
| Associations | Discover which brands AI models associate with your entities and queries |
| Relevance | Measure how relevant AI models consider a brand to a query or entity |
| Citations | Extract and analyze citations from grounded AI responses |
| Optimizer | Iteratively optimize snippets or full pages for better ranking |
| Treewalker | Probe what a model recalls about a website from parametric memory |
| Organic | Google Search Console data — queries, pages, position, clicks |
| Analysis | AI-powered gap analysis combining probe data, GSC, and optimizer insights |
| Benchmark | Cross-model comparison of brand mentions and rankings |
| Report | Generate a comprehensive print-ready HTML report |
| Settings | Manage entities, brands, queries, categories, tags, locations, models, objectives, and members |
To log out, click Logout in the header.
Adding a New Property
A property represents a domain you want to analyze.
- From the Properties list, enter a Domain (e.g., example.com).
- Optionally enter a Display Name for easier identification.
- Click Add Property.
You are automatically added as the property owner. The property appears in your list with columns for Entities, Runs, Mentions, Citations, and Last Run.
First-time users see an onboarding wizard that walks through the initial setup: adding a domain, generating entities, identifying brands, and generating queries.
To remove a property, click the Archive button next to it. Archived properties can be restored by an admin.
Setting Up Your Property
Before running probes, you need to populate your property with data. Go to the Settings tab.
Brands
Brands are the companies, products, or names you want to track in AI responses.
Add a single brand:
- Enter a Brand name.
- Select an ownership status: Owned, Competitor, or Unclassified.
- Click Add Brand.
Bulk add: Expand the Bulk Add section, paste brand names one per line, and submit.
Auto-identify: Click Identify via AI to have Gemini visit your property URL and automatically identify your primary brand and known variants.
Managing brands:
- Use the Search bar to filter brands by name.
- Filter by Category using the dropdown.
- Sort by clicking column headers (E2B count, Q2B count, Avg Rank, etc.).
- Use checkboxes to select multiple brands, then apply bulk actions: Set Owned, Set Competitor, Set Unclassified, or tag them.
Marking brands as Owned is important — relevance probes only run against owned brands, and results throughout the app highlight owned brands.
Brand hierarchy: Brands are automatically organized into a matryoshka hierarchy based on name prefixes. For example, “BetMGM” is the parent of “BetMGM Casino”. This hierarchy is used for visibility rollups where child brand mentions are included in the parent’s totals.
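The prefix-based rollup can be sketched in a few lines. This is a minimal illustration, not Auxy's actual implementation: it assumes a parent is any brand whose name is a word-boundary prefix of the child's name, and the counts are made up.

```python
def rollup_mentions(mentions):
    """Roll child-brand mention counts up into any parent whose name is a
    word-boundary prefix of the child's name (matryoshka hierarchy)."""
    totals = dict(mentions)  # start from each brand's own (exclusive) count
    for child, count in mentions.items():
        for parent in mentions:
            # "BetMGM" is treated as the parent of "BetMGM Casino"
            if parent != child and child.startswith(parent + " "):
                totals[parent] += count
    return totals

counts = {"BetMGM": 10, "BetMGM Casino": 4, "DraftKings": 7}
print(rollup_mentions(counts))  # BetMGM's inclusive total becomes 14
```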
Entities
Entities are the topics, products, or concepts associated with your property (e.g., “running shoes”, “trail running”).
Add a single entity:
- Enter an Entity name.
- Optionally select a Category.
- Click Add Entity.
Bulk add: Paste entity names one per line.
Generate via AI: Click Generate via AI and Auxy will visit your property’s URL with Gemini to automatically extract relevant entities.
Entity tracking: Entities can be tagged with tracking to include them in nightly visibility probes. Toggle tracking from the entity’s row or from the Tracking Settings sub-tab.
Categories
Categories group entities and queries into hierarchical themes.
- Enter a Category name.
- Click Add Category.
Categories can be renamed inline and show entity and query counts. You can also bulk add categories one per line.
Queries
Queries are the search terms used to test brand visibility in AI responses.
Five ways to add queries:
- Queries List — view and manage existing queries with filters for Entity, Category, Tag, and Source.
- Manual Entry — enter a single query with optional Entity, Category, and Tags.
- Enter a List — paste queries one per line with optional Entity/Category/Tags applied to all.
- Import via CSV — upload a CSV file. Auxy detects headers and lets you map columns to Query text, Entity, Category, or Tag. A preview shows the first 5 rows before import.
- Generate via AI — provide a URL (defaults to your property URL), set a count (1-500), and optionally add special instructions. Auxy uses Gemini to generate relevant queries.
Tags
Tags are free-form labels you can apply to entities, brands, queries, and categories.
- Enter a Tag name.
- Click Add Tag.
Tags can be assigned inline from any item’s row across the Settings sub-tabs. Special tags:
- tracking on entities and locations enables them for nightly visibility probes.
- ignore on brands excludes them from citation brand mention extraction.
Locations
Locations add geographic context to grounded probes (e.g., “Australia”, “United States”).
- Enter a Location name.
- Click Add Location.
Generate via AI: Auxy can infer relevant locations (HQ + operational markets) from your property URL.
When a location is selected during a probe run, grounded search results are geo-targeted to that location. For Google this uses lat/lng geocoding; for OpenAI it uses ISO country codes.
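The per-provider geo-targeting split can be sketched as follows. The field names (lat, lng, country) and the geocode table are illustrative assumptions, not Auxy's real API.

```python
# Hypothetical geocode table: location name -> (lat, lng, ISO country code).
GEOCODES = {
    "Australia": (-25.27, 133.78, "AU"),
    "United States": (37.09, -95.71, "US"),
}

def grounding_params(provider, location):
    """Pick the geo-targeting parameters a grounded probe would send."""
    lat, lng, iso = GEOCODES[location]
    if provider == "google":
        return {"lat": lat, "lng": lng}  # Google: lat/lng geocoding
    if provider == "openai":
        return {"country": iso}          # OpenAI: ISO country code
    return {}                            # location-agnostic probe

print(grounding_params("openai", "Australia"))  # {'country': 'AU'}
```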
Locations can be tagged with tracking to include them in nightly visibility probes.
Citation Prompts
Citation prompts are custom prompts used specifically for citation mining.
- Enter a Prompt text.
- Click Add Citation Prompt.
You can also bulk add prompts one per line or auto-generate them from your entities or categories using AI.
Prompt Sets
Prompt sets let you organize citation prompts into named groups.
- Go to the Citations tab and click Create Set.
- Add prompts to the set from entities, queries, or custom text.
- When running citation mining, select a specific prompt set to scope the run.
Prompt sets can be duplicated and managed independently.
Models
Auxy supports multiple LLM providers and models:
| Provider | Models |
|---|---|
| Google | Gemini 3 Flash |
| OpenAI | GPT-5.4 |
- Use the checkboxes to activate or deactivate models for your property.
- Only active models are used during probe runs.
- Pricing information is shown for each model and feeds into the cost calculator on the Relevance tab.
- Default models for new properties: Gemini 3 Flash Preview + GPT-5.4.
Objectives
Objectives let you describe your brand’s priorities, positioning, competitive landscape, and guardrails. These feed into the Gap Analysis to produce more targeted recommendations.
Sections: Priorities, Positioning, Competitors, Guardrails, Other (audience, market context, voice).
Each section can be written manually or auto-suggested by AI via the Suggest button.
Dashboard
The Dashboard tab provides a brand visibility overview for your property.
Key metrics:
- Mention Share — what percentage of total AI mentions each brand receives.
- Visibility Score — combines mention share with average rank: mention_share * max(0, (11 - avg_rank)) / 10. A brand mentioned frequently at rank 1 scores highest.
- Brand History — trend lines showing how brand visibility changes over time.
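The Visibility Score formula can be checked with a toy calculation (the example numbers are made up, and this is not Auxy's code):

```python
def visibility_score(mention_share, avg_rank):
    """mention_share is a fraction in [0, 1]; avg_rank is the brand's
    average position in model responses. Rank 11 or worse scores zero."""
    return mention_share * max(0, 11 - avg_rank) / 10

# A brand mentioned in 40% of responses at average rank 1:
print(visibility_score(0.40, 1))            # 0.4
# Same mention share, but buried at average rank 8:
print(round(visibility_score(0.40, 8), 2))  # 0.12
```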
Filters let you slice the data by category, model, run, and location.
Visibility Tracking
The Tracking tab runs grounded E2B probes on a nightly schedule against your tracked entities and locations.
Setup
- Tag entities with tracking to include them in visibility probes.
- Tag locations with tracking to geo-target the probes.
- Optionally toggle Include agnostic to also run probes without any location targeting.
How It Works
Each night, Auxy asks each active model (with grounding enabled) to recommend ten brands for each tracked entity, wrapping brand names in [[brand]] markers. The responses are parsed to extract brands with rank and context.
Results are aggregated into visibility snapshots — per-entity, per-location, per-brand scores recorded daily.
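The [[brand]] markers described above make the response machine-parseable. A minimal sketch of such a parser (an illustration, not Auxy's actual extraction code; the response text is invented):

```python
import re

def extract_brands(response_text):
    """Pull [[brand]] markers out of a grounded response, preserving order
    so that a brand's position doubles as its rank (1-based)."""
    brands = re.findall(r"\[\[(.+?)\]\]", response_text)
    return [(rank, name) for rank, name in enumerate(brands, start=1)]

answer = "Top picks: [[Asics]] for stability, then [[Brooks]] and [[Hoka]]."
print(extract_brands(answer))  # [(1, 'Asics'), (2, 'Brooks'), (3, 'Hoka')]
```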
Sub-tabs
- Mentions — brand mention trends over time with share and rank data.
- Citations — citation data from grounded tracking responses.
- Tracking Settings — toggle tracking on/off for individual entities and locations.
Running Manually
Click Run Visibility Probe to trigger an immediate visibility probe outside the nightly schedule.
Running Association Probes
The Associations tab discovers which brands AI models associate with your entities and queries.
Probe Types
| Type | Question Asked | Requires |
|---|---|---|
| E2B (Entity-to-Brand) | “What brands associate with this entity?” | Entities |
| B2E (Brand-to-Entity) | “What entities associate with this brand?” | Brands |
| Q2B (Query-to-Brand) | “What brands are relevant for this query?” | Queries |
| All | Runs E2B + B2E + Q2B sequentially | Entities + Brands + Queries |
| Automagic | Runs E2B + B2E + Q2B in one click | Entities + Brands + Queries |
Running a Probe
- Select a Probe type from the dropdown.
- Optionally select a Location.
- Click Run.
- Monitor progress in real time with per-model progress bars.
Reading Results
QAS (Query Association Score) — shown for Q2B probes:
- Rank, Brand name, Score bar, Mention count, QAS percentage.
- Owned brands are highlighted in bold.
- Per-model breakdown shows how each model ranks brands.
E2B Aggregate — brand mentions across all entities:
- Brand name, Score bar, Mention count, Share percentage.
B2E Aggregate — entity/item mentions per brand:
- Brand name, Unique items count, Total mentions.
Top Queries for Owned Brands — the queries that most frequently surface your brands.
Use filters (Run, Category, Location, Model, Source) to slice results across different dimensions.
Measuring Relevance
The Relevance tab measures how relevant AI models consider a brand-query or brand-entity pair, using repeated independent samples for statistical confidence.
Running a Relevance Probe
- Select a Source type: Queries or Entities.
- Select an Owned brand to test (only owned brands are available).
- Optionally select a Location.
- Set N per pair (1-100) — the number of independent samples per brand-source pair. Higher values give more statistical confidence.
- Review the cost estimate shown in real time (API calls and estimated cost).
- Click Run.
Reading Results
QRS (Query Relevance Score) — percentage of “yes” responses:
- Brand, Relevance bar, Yes count, Total probes, QRS percentage.
- Per-model breakdown.
Items Not Relevant — sources that scored 0%, indicating potential visibility gaps.
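The QRS arithmetic is simple enough to show directly (a toy calculation, not Auxy's code):

```python
def qrs(responses):
    """QRS: percentage of 'yes' answers across N independent samples
    for one brand-source pair."""
    yes = sum(1 for r in responses if r == "yes")
    return 100.0 * yes / len(responses)

# 10 independent samples for one pair, 7 of which answered "yes":
samples = ["yes"] * 7 + ["no"] * 3
print(qrs(samples))  # 70.0
```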
Citation Mining
The Citations tab extracts and analyzes citations from grounded AI responses to see which domains and URLs are being cited.
Setting Up
Before mining, prepare your prompts in one of three ways: add citation prompts via Settings > Citation Prompts, reuse your existing queries as source material, or create a Prompt Set to organize prompts into groups.
Running Citation Mining
- Select a Source: Citation Prompts, Queries, or Both.
- Optionally select a Prompt Set to scope the run to a specific group.
- Optionally filter by Tag (or select “Untagged” for items with no tags).
- Optionally select a Location.
- Click Run Citation Mining.
Citation mining uses grounded API calls (Google Search, OpenAI web search, Anthropic web search) and extracts both the URLs cited in the final answer (“selected”) and URLs the model browsed but didn’t cite (“unselected”).
Reading Results
Summary cards at the top show:
- Responses — total AI responses collected.
- Citations — total citations extracted.
- Unique Domains — distinct domains cited.
- Search Queries — grounding queries used.
Detailed breakdowns:
- Domain Breakdown — domain name, total count, provider-specific counts, owned status.
- Brand Mentions in Responses — brand name, total mentions, owned status. Brands are detected via word-boundary matching against your brand list.
- Top Cited Sources — URLs ranked by citation count with title, domain, and ownership.
- Search Queries (collapsible) — the grounding queries the models actually used, with frequency.
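The word-boundary matching used for brand detection can be sketched like this (an illustration of the technique, not Auxy's implementation; the sample text is invented):

```python
import re

def count_mentions(text, brands):
    """Count brand mentions with word-boundary matching, so a short name
    like 'MGM' does not fire inside a longer one like 'BetMGM'."""
    counts = {}
    for brand in brands:
        pattern = r"\b" + re.escape(brand) + r"\b"
        counts[brand] = len(re.findall(pattern, text, flags=re.IGNORECASE))
    return counts

text = "BetMGM leads the market; MGM Resorts is the parent company."
print(count_mentions(text, ["BetMGM", "MGM"]))  # {'BetMGM': 1, 'MGM': 1}
```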
Reprocessing Mentions
If you update your brand list after citation runs, click Reprocess Mentions to re-extract brand mentions from existing responses without re-running the probes.
Snippet Optimization
The Optimizer tab (Classic mode) iteratively improves snippets to achieve better ranking in AI responses.
Starting a New Run
- Discover — enter a query. Auxy runs a grounded search to find the current ranking of snippets.
- Resolve URLs — map discovered items to their real URLs (resolves redirects).
- Select Target — choose which item is yours (auto-detect available).
- Configure — select a model, set max steps, number of ideators, and N samples for ranking.
- Start — click Run to begin the optimization loop.
How It Works
Auxy runs an iterative cycle for each step:
- Ideate — multiple parallel ideators generate hypotheses and edited snippets.
- Rank — each proposal is ranked against all other snippets using N-sample median ranking for robustness.
- Select — the best-performing edit becomes the new baseline.
Early stopping triggers if rank 1 is achieved. Plateau stopping triggers after N non-improving steps.
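The N-sample median ranking in the Rank step is a standard robustness trick: take the median of several independent rank samples so a single anomalous ranking call cannot swing the result. A minimal sketch with invented sample values:

```python
import statistics

def median_rank(sampled_ranks):
    """Robust rank from N independent ranking samples; the median damps
    one-off outliers from any single model call."""
    return statistics.median(sampled_ranks)

# Five samples where one call ranked the snippet unusually badly:
print(median_rank([2, 3, 2, 9, 3]))  # 3 -- the outlier 9 barely moves it
```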
After completion, Auxy generates:
- A Storyteller narrative summarizing what worked, what didn’t, and key insights.
- A Content Brief with concrete page edit suggestions based on the winning snippet changes.
Viewing Results
The list view shows all previous runs with:
- Query, Target URL, Baseline rank, Best rank, Status.
- Click View Details for per-attempt analysis with a visual rank chart.
You can provide human feedback as constraints to guide the next round of optimization. Feedback overrides all other optimization considerations.
Holistic Optimization
Holistic mode optimizes full page content rather than just snippets.
How It Differs from Classic
- Full page content is fetched for all competing items (via DataForSEO or Gemini URL context).
- Line-based edits — the ideator proposes specific line insertions, replacements, and deletions rather than rewriting a snippet.
- Richer ranking — the ranker sees full page content, not just snippets.
Workflow
- Discover — grounded search with intent to fetch full pages.
- Fetch All — parallel fetch of all items’ page content.
- Review — inspect fetched content, manually edit if needed.
- Baseline Rank — rank all items with full content to establish baseline.
- Optimize — iterative edit loop with line-level precision.
Page Grounding
Page Grounding probes which specific lines of a page are relevant to given queries.
Running a Probe
- Enter a URL to analyze.
- Enter one or more queries.
- Select a model.
- Click Run.
Auxy fetches the page content, numbers every line, and asks the model (with Google Search grounding scoped to the URL’s domain) which lines match each query. Lines are scored by how frequently they match across queries (0-1 scale).
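The scoring step above can be sketched as follows: a line's score is the fraction of queries that matched it. This is an illustration under that assumption, not Auxy's code, and the match data is invented.

```python
def line_scores(matches_by_query):
    """matches_by_query maps each query to the set of line numbers the
    model flagged as relevant. A line's score is the fraction of queries
    that matched it (0-1 scale)."""
    queries = list(matches_by_query)
    all_lines = set().union(*matches_by_query.values())
    return {line: sum(line in matches_by_query[q] for q in queries) / len(queries)
            for line in sorted(all_lines)}

matches = {"trail shoes": {3, 7}, "waterproof": {7}, "sizing": {7, 12}}
print(line_scores(matches))  # line 7 scores 1.0; lines 3 and 12 score ~0.33
```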
Results
- Line-by-line heatmap showing relevance scores.
- Per-query match breakdown.
- Export as CSV, HTML, or PDF.
Treewalker
Treewalker probes a model’s parametric memory — what it recalls about a website without searching the web.
How It Works
- Auxy asks the model to list items associated with your website multiple times (N runs, default 5).
- For each generated token, the model reports its top-5 alternative tokens with probabilities (logprobs).
- If any alternative exceeds the confidence threshold, Auxy “branches” by swapping in that token and completing from there, discovering less-prominent associations.
- Results are aggregated across runs to show appearance rates, confidence levels, and branch alternatives.
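The branching rule can be sketched as a filter over the reported alternatives. The data shape (a list of chosen-token/alternatives pairs), the 10% threshold, and the token probabilities below are all illustrative assumptions, not Auxy's internals.

```python
def branch_points(token_steps, threshold=0.10):
    """token_steps: list of (chosen_token, {alternative: probability}).
    Returns the (step, token, probability) alternatives that clear the
    confidence threshold, i.e. the points where a run would branch."""
    return [(i, tok, p)
            for i, (_chosen, alts) in enumerate(token_steps)
            for tok, p in alts.items()
            if p >= threshold]

steps = [("Nike", {"Asics": 0.21, "Hoka": 0.04}),
         ("running", {"trail": 0.07})]
print(branch_points(steps))  # only Asics at step 0 clears the 10% threshold
```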
Paired Sessions
Each Treewalker run creates two paired sessions:
- Ungrounded — pure parametric memory (logprobs enabled, no web search).
- Grounded — same prompt but with Google Search enabled (no logprobs).
An AI analysis compares both to identify:
- Primary Bias — what the model recalls from training.
- Grounding Bias — what changes when real-time search is available.
- Strongest/Weakest Associations — by confidence scores.
- Gap Analysis — items in grounded results missing from parametric memory.
Promoting Items
Items discovered by Treewalker can be promoted to your Entities list via the Toggle Entity button.
Veracity
Veracity fact-checks grounded AI responses by assessing how faithfully the model used its cited sources.
Running a Check
- Enter a query.
- Select a model (Google Gemini for grounded search).
- Click Run.
How It Works
- Auxy runs a grounded search and collects the response with all cited URLs.
- For each cited URL, the actual page content is fetched.
- An AI assessment compares what the grounded response claims against what the source page actually says.
- Each URL receives a fidelity score based on:
- Survived — claims accurately reflected from the source.
- Lost — information in the source that was dropped or ignored.
- Distortions — claims that misrepresent or distort the source material.
Results
- Per-URL fidelity scores with detailed survived/lost/distortion breakdowns.
- Aggregate score — average fidelity across all cited URLs.
Organic Search (GSC)
The Organic tab shows Google Search Console data for your property, enabling comparison between AI visibility and organic search performance.
Connecting GSC
- Go to Settings and find the GSC Connection section.
- Click Connect GSC to authorize with your Google account (requires the webmasters.readonly scope).
- Select and Link the relevant GSC site to your property.
Importing Data
Two import modes:
- Full Import — daily granular data going back up to 480 days. Fetches per-day, per-query, per-page, per-country, per-device rows.
- Fast Import — aggregated 90-day summary by query and page. Faster and lighter. Automatically triggers query gap computation.
A Snapshot fetch pulls 5-dimension summaries (query, page, country, device, date) for dashboard use.
Viewing Data
The Organic tab shows:
- Top queries by clicks/impressions/position.
- Top pages by performance.
- Country and device breakdowns.
Intent Classification
Intent classification labels your queries with multi-label taxonomies.
Creating a Taxonomy
- Go to the Citations tab (Intent section).
- Create a Label Set or choose a preset:
- Search Intent — Informational, Navigational, Transactional, Commercial Investigation.
- Funnel Stage — TOFU, MOFU, BOFU.
- Content Type — Question, Comparison, How-To, Review, List.
- Or Generate Taxonomy from your query data — Auxy analyzes your GSC/fanout queries and proposes labels.
Running Classification
- Select a Label Set.
- Choose Source: GSC queries, fanout queries (from citation search queries), or both.
- Optionally enable Adult Filter to auto-tag explicit content.
- Click Classify. Queries are classified in batches of 10 with up to 100 parallel workers.
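The batch-and-parallelize pattern described above can be sketched as follows. The `classify_batch` function here is a keyword-based placeholder for the real model call, so the labels it assigns are purely illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

def classify_batch(batch):
    # Placeholder for the real model call; tags by a trivial keyword rule.
    return [{"query": q,
             "labels": ["Transactional"] if "buy" in q else ["Informational"]}
            for q in batch]

def classify_all(queries, batch_size=10, max_workers=100):
    """Split queries into batches of `batch_size` and classify the batches
    in parallel, capping the worker count as described above."""
    batches = [queries[i:i + batch_size]
               for i in range(0, len(queries), batch_size)]
    results = []
    with ThreadPoolExecutor(max_workers=min(max_workers, len(batches) or 1)) as pool:
        for labelled in pool.map(classify_batch, batches):
            results.extend(labelled)
    return results

print(classify_all(["buy trail shoes", "what is gore-tex"]))
```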
Results
- Binary matrix showing which labels apply to each query.
- Per-label distribution stats.
- Export as CSV for external analysis.
Preview lets you test classification on a sample of 10 queries before committing.
Benchmark
The Benchmark tab provides cross-model comparison of how different AI models perceive your brands.
Shows per-model, per-brand:
- Mention counts across probe types.
- Average rank positions.
Useful for identifying which models are most/least favorable to your brand.
Gap Analysis
The Analysis tab generates an AI-powered strategic assessment by combining data from multiple sources.
Data Sources
| Section | What It Includes |
|---|---|
| Objectives | Your brand priorities, positioning, and guardrails |
| Entities & Brands | Your configured entities, brands, and queries |
| Associations | Q2B + E2B association data |
| Relevance | ARC relevance scores per brand |
| Citations | Citation domains and brand mentions |
| Treewalker | Parametric memory probe results |
| GSC Data | Live Google Search Console query and page data |
| Query Gaps | Queries with AI visibility but zero organic traffic |
| Optimizer Insights | Patterns from optimizer runs |
Running an Analysis
- Click Sections to see available data sections with token counts.
- Select which sections to include (or include all).
- Click Run Analysis.
Auxy sends the combined data to Gemini with a strategic analysis prompt. The output covers:
- Strengths — where your brand performs well.
- Weaknesses — areas of low visibility or relevance.
- Gaps — opportunities identified from cross-referencing AI and organic data.
- Optimization Directions — concrete recommendations.
- Query Strategy — query-level recommendations.
Query Gaps
Query gaps are queries that AI models associate with your brand/entities but that drive zero organic search traffic. These represent content opportunities.
Click Compute Gaps to identify them (requires GSC data).
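Conceptually, the gap computation is a set difference between AI-surfaced queries and queries with organic clicks. A minimal sketch with invented data (not Auxy's implementation; queries missing from GSC are treated as zero-click):

```python
def compute_gaps(ai_queries, gsc_clicks):
    """Queries that AI models surface for your brand but that drive zero
    organic clicks in GSC. Absence from GSC counts as zero clicks."""
    return sorted(q for q in ai_queries if gsc_clicks.get(q, 0) == 0)

ai = {"best trail shoes", "waterproof runners", "shoe sizing chart"}
gsc = {"best trail shoes": 120, "shoe sizing chart": 0}
print(compute_gaps(ai, gsc))  # ['shoe sizing chart', 'waterproof runners']
```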
Reports
The Report tab generates a comprehensive, print-ready HTML report covering:
- Property overview (entity, brand, query counts).
- Probe coverage summary.
- Dashboard data sliced by category, model, and location.
- Association details (Q2B, E2B, B2E) with per-model breakdowns.
- Relevance scores by brand and model.
- Citation analysis (domains, brand mentions, top URLs).
- Brand hierarchy with inclusive/exclusive counts.
- Visibility tracking trends.
The report downloads as an HTML file that can be opened in any browser or converted to PDF.
Exporting Data
Multiple export options are available across the app.
CSV Exports
| Export | Contents |
|---|---|
| Dashboard | Brand visibility summary |
| QAS | Query Association Scores |
| E2B | Entity-to-Brand associations |
| Q2B | Query-to-Brand associations |
| B2E | Brand-to-Entity associations |
| Citations | Citation domains |
| Citation URLs | Individual cited URLs |
| Citation Prompts | All citation prompts with source and enabled status |
| Citations Full | Complete citation data with response text |
| Queries | All queries with entity and category |
| Entities | All entities with categories |
| Brands | Brands with counts and ownership |
| Citation Tables | Per-table exports (mentions, search queries) |
| Intent Matrix | Binary label matrix per query (from Intent Classification) |
Page Grounding Exports
Page grounding results can be exported as CSV, HTML, or PDF.
Sharing with Other Users
Auxy uses a membership system to share properties with other users.
Adding Members
- Go to Settings > Members.
- Enter the user’s Email address (must be a dejan.com.au account).
- Select a Role:
| Role | Permissions |
|---|---|
| Owner | Full access. Can add/remove members, archive the property, and manage all data. |
| Editor | Can manage data, run probes, and configure settings. |
| Viewer | Read-only access to results and exports. Cannot trigger any POST actions. |
- Click Add Member.
Managing Members
- The Members sub-tab shows all members with their Name, Email, and Role.
- Owners can Remove members or change roles.
- The property creator is automatically the first owner.
What Members See
All members with access to a property share the same data:
- Entities, brands, queries, categories, tags, and locations.
- All probe runs and their results.
- Citation mining data.
- Optimizer runs and analyses.
- Visibility tracking snapshots.
- GSC data and gap analyses.
- Exported datasets.
Sharing Workflow
- Create a property and set it up with brands, entities, and queries.
- Add team members via Settings > Members with appropriate roles.
- Run probes and analyses — results are immediately visible to all members.
- Generate reports or export CSV files to share with stakeholders who don’t have Auxy access.
Best Practices
- Use descriptive Display Names for properties so members can identify them quickly.
- Mark brand ownership accurately — it determines what appears in relevance probes and result highlighting.
- Coordinate on entity and query naming to avoid duplicates.
- Use tags to organize and filter large datasets across the team.
- Use locations consistently so probe results are comparable across runs.
- Set up Objectives before running Gap Analysis for more targeted recommendations.
- Tag entities and locations with tracking to enable nightly visibility monitoring.
