Auxy User Manual

Table of Contents

  • Getting Started
  • Adding a New Property
  • Setting Up Your Property
    • Brands
    • Entities
    • Categories
    • Queries
    • Tags
    • Locations
    • Citation Prompts
    • Prompt Sets
    • Models
    • Objectives
  • Dashboard
  • Visibility Tracking
  • Running Association Probes
  • Measuring Relevance
  • Citation Mining
  • Snippet Optimization
  • Holistic Optimization
  • Page Grounding
  • Treewalker
  • Veracity
  • Organic Search (GSC)
  • Intent Classification
  • Benchmark
  • Gap Analysis
  • Reports
  • Exporting Data
  • Sharing with Other Users

Getting Started

Logging In

  1. Open Auxy in your browser at app.auxy.com.
  2. Click Sign in with Google.

You will be redirected to the Properties list after login.

Navigation

  • Properties list — your home screen, showing all properties you have access to.
  • Property detail — click any property to open it. The detail view contains these tabs:
    • Dashboard — Brand visibility overview with mention share, visibility scores, and trend charts.
    • Tracking — Grounded visibility tracking with per-entity, per-location snapshots over time.
    • Associations — Discover which brands AI models associate with your entities and queries.
    • Relevance — Measure how relevant AI models consider a brand to a query or entity.
    • Citations — Extract and analyze citations from grounded AI responses.
    • Optimizer — Iteratively optimize snippets or full pages for better ranking.
    • Treewalker — Probe what a model recalls about a website from parametric memory.
    • Organic — Google Search Console data: queries, pages, position, clicks.
    • Analysis — AI-powered gap analysis combining probe data, GSC, and optimizer insights.
    • Benchmark — Cross-model comparison of brand mentions and rankings.
    • Report — Generate a comprehensive print-ready HTML report.
    • Settings — Manage entities, brands, queries, categories, tags, locations, models, objectives, and members.

To log out, click Logout in the header.


Adding a New Property

A property represents a domain you want to analyze.

  1. From the Properties list, enter a Domain (e.g., example.com).
  2. Optionally enter a Display Name for easier identification.
  3. Click Add Property.

You are automatically added as the property owner. The property appears in your list with columns for Entities, Runs, Mentions, Citations, and Last Run.

First-time users see an onboarding wizard that walks through the initial setup: adding a domain, generating entities, identifying brands, and generating queries.

To remove a property, click the Archive button next to it. Archived properties can be restored by an admin.


Setting Up Your Property

Before running probes, you need to populate your property with data. Go to the Settings tab.

Brands

Brands are the companies, products, or names you want to track in AI responses.

Add a single brand:

  1. Enter a Brand name.
  2. Select an ownership status: Owned, Competitor, or Unclassified.
  3. Click Add Brand.

Bulk add: Expand the Bulk Add section, paste brand names one per line, and submit.

Auto-identify: Click Identify via AI to have Gemini visit your property URL and automatically identify your primary brand and known variants.

Managing brands:

  • Use the Search bar to filter brands by name.
  • Filter by Category using the dropdown.
  • Sort by clicking column headers (E2B count, Q2B count, Avg Rank, etc.).
  • Use checkboxes to select multiple brands, then apply bulk actions: Set Owned, Set Competitor, Set Unclassified, or tag them.

Marking brands as Owned is important — relevance probes only run against owned brands, and results throughout the app highlight owned brands.

Brand hierarchy: Brands are automatically organized into a matryoshka hierarchy based on name prefixes. For example, “BetMGM” is the parent of “BetMGM Casino”. This hierarchy is used for visibility rollups where child brand mentions are included in the parent’s totals.
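The prefix rollup can be pictured with a short sketch. This is a minimal illustration of the matryoshka idea under the assumption that a parent is the longest other brand that is a word-prefix of a child; the function names are hypothetical, not Auxy's implementation.

```python
def build_hierarchy(brands):
    """Parent = the longest other brand that is a word-prefix of this one."""
    parents = {}
    for brand in brands:
        candidates = [p for p in brands if p != brand and brand.startswith(p + " ")]
        parents[brand] = max(candidates, key=len) if candidates else None
    return parents

def rollup_mentions(mentions, parents):
    """Inclusive totals: each brand's mentions also count toward every ancestor."""
    totals = dict(mentions)
    for brand in mentions:
        parent = parents.get(brand)
        while parent is not None:
            totals[parent] = totals.get(parent, 0) + mentions[brand]
            parent = parents.get(parent)
    return totals

parents = build_hierarchy(["BetMGM", "BetMGM Casino", "BetMGM Casino NJ"])
# "BetMGM Casino" rolls up to "BetMGM"; "BetMGM Casino NJ" to "BetMGM Casino"
```

With this rollup, a mention of "BetMGM Casino NJ" increments its own total and the totals of both ancestors, which is what "child brand mentions are included in the parent's totals" implies.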

Entities

Entities are the topics, products, or concepts associated with your property (e.g., “running shoes”, “trail running”).

Add a single entity:

  1. Enter an Entity name.
  2. Optionally select a Category.
  3. Click Add Entity.

Bulk add: Paste entity names one per line.

Generate via AI: Click Generate via AI and Auxy will visit your property’s URL with Gemini to automatically extract relevant entities.

Entity tracking: Entities can be tagged with tracking to include them in nightly visibility probes. Toggle tracking from the entity’s row or from the Tracking Settings sub-tab.

Categories

Categories group entities and queries into hierarchical themes.

  1. Enter a Category name.
  2. Click Add Category.

Categories can be renamed inline and show entity and query counts. You can also bulk add categories one per line.

Queries

Queries are the search terms used to test brand visibility in AI responses.

Five ways to add queries:

  1. Queries List — view and manage existing queries with filters for Entity, Category, Tag, and Source.
  2. Manual Entry — enter a single query with optional Entity, Category, and Tags.
  3. Enter a List — paste queries one per line with optional Entity/Category/Tags applied to all.
  4. Import via CSV — upload a CSV file. Auxy detects headers and lets you map columns to Query text, Entity, Category, or Tag. A preview shows the first 5 rows before import.
  5. Generate via AI — provide a URL (defaults to your property URL), set a count (1-500), and optionally add special instructions. Auxy uses Gemini to generate relevant queries.
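The CSV column-mapping step can be pictured with a small sketch. It is illustrative only, assuming a simple field-to-header map; the field names and sample data are hypothetical.

```python
import csv
import io

def preview_import(csv_text, column_map, preview_rows=5):
    """column_map maps Auxy fields to CSV headers, e.g. {"query": "Query text"}.
    Returns the first few mapped rows, mirroring the pre-import preview."""
    reader = csv.DictReader(io.StringIO(csv_text))
    mapped = []
    for row in reader:
        mapped.append({field: row.get(header, "") for field, header in column_map.items()})
        if len(mapped) == preview_rows:
            break
    return mapped

sample = "Query text,Entity\nbest trail shoes,trail running\nshoe sizing guide,running shoes\n"
preview_import(sample, {"query": "Query text", "entity": "Entity"})
# -> [{"query": "best trail shoes", "entity": "trail running"},
#     {"query": "shoe sizing guide", "entity": "running shoes"}]
```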

Tags

Tags are free-form labels you can apply to entities, brands, queries, and categories.

  1. Enter a Tag name.
  2. Click Add Tag.

Tags can be assigned inline from any item’s row across the Settings sub-tabs. Special tags:

  • tracking on entities and locations enables them for nightly visibility probes.
  • ignore on brands excludes them from citation brand mention extraction.

Locations

Locations add geographic context to grounded probes (e.g., “Australia”, “United States”).

  1. Enter a Location name.
  2. Click Add Location.

Generate via AI: Auxy can infer relevant locations (HQ + operational markets) from your property URL.

When a location is selected during a probe run, grounded search results are geo-targeted to that location. For Google this uses lat/lng geocoding; for OpenAI it uses ISO country codes.

Locations can be tagged with tracking to include them in nightly visibility probes.

Citation Prompts

Citation prompts are custom prompts used specifically for citation mining.

  1. Enter a Prompt text.
  2. Click Add Citation Prompt.

You can also bulk add prompts one per line or auto-generate them from your entities or categories using AI.

Prompt Sets

Prompt sets let you organize citation prompts into named groups.

  1. Go to the Citations tab and click Create Set.
  2. Add prompts to the set from entities, queries, or custom text.
  3. When running citation mining, select a specific prompt set to scope the run.

Prompt sets can be duplicated and managed independently.

Models

Auxy supports multiple LLM providers and models:

  • Google — Gemini 3 Flash
  • OpenAI — GPT-5.4

  • Use the checkboxes to activate or deactivate models for your property.
  • Only active models are used during probe runs.
  • Pricing information is shown for each model and feeds into the cost calculator on the Relevance tab.
  • Default models for new properties: Gemini 3 Flash Preview + GPT-5.4.

Objectives

Objectives let you describe your brand’s priorities, positioning, competitive landscape, and guardrails. These feed into the Gap Analysis to produce more targeted recommendations.

Sections: Priorities, Positioning, Competitors, Guardrails, Other (audience, market context, voice).

Each section can be written manually or auto-suggested by AI via the Suggest button.


Dashboard

The Dashboard tab provides a brand visibility overview for your property.

Key metrics:

  • Mention Share — what percentage of total AI mentions each brand receives.
  • Visibility Score — combines mention share with average rank: mention_share * max(0, (11 - avg_rank)) / 10. A brand mentioned frequently at rank 1 scores highest.
  • Brand History — trend lines showing how brand visibility changes over time.

Filters let you slice the data by category, model, run, and location.
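The score formula above can be checked with a couple of worked values, assuming mention share is expressed as a fraction of 1:

```python
def visibility_score(mention_share, avg_rank):
    """mention_share in [0, 1]; avg_rank starts at 1 (best)."""
    return mention_share * max(0, 11 - avg_rank) / 10

# 40% mention share at average rank 1:  0.4 * 10 / 10 = 0.40
# 40% mention share at average rank 6:  0.4 * 5 / 10  = 0.20
# Any average rank of 11 or worse floors the score at 0.
```

A brand therefore needs both frequent mentions and high ranks to score well; the `max(0, ...)` clamp keeps very deep ranks from producing negative scores.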


Visibility Tracking

The Tracking tab runs grounded E2B probes on a nightly schedule against your tracked entities and locations.

Setup

  1. Tag entities with tracking to include them in visibility probes.
  2. Tag locations with tracking to geo-target the probes.
  3. Optionally toggle Include agnostic to also run probes without any location targeting.

How It Works

Each night, Auxy asks each active model (with grounding enabled) to recommend ten brands for each tracked entity, wrapping brand names in [[brand]] markers. The responses are parsed to extract brands with rank and context.

Results are aggregated into visibility snapshots — per-entity, per-location, per-brand scores recorded daily.
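The marker parsing described above can be approximated with a short sketch; the regex and the de-duplication rule are assumptions for illustration, not Auxy's exact parser.

```python
import re

def parse_ranked_brands(response_text):
    """Extract [[brand]] markers; rank is the 1-based order of first appearance."""
    ranked = []
    seen = set()
    for match in re.finditer(r"\[\[(.+?)\]\]", response_text):
        name = match.group(1).strip()
        if name.lower() not in seen:
            seen.add(name.lower())
            ranked.append(name)
    return {name: rank for rank, name in enumerate(ranked, start=1)}

answer = "Top picks: [[Nike]], then [[Hoka]]. For trails, [[Nike]] again or [[Altra]]."
parse_ranked_brands(answer)
# -> {"Nike": 1, "Hoka": 2, "Altra": 3}
```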

Sub-tabs

  • Mentions — brand mention trends over time with share and rank data.
  • Citations — citation data from grounded tracking responses.
  • Tracking Settings — toggle tracking on/off for individual entities and locations.

Running Manually

Click Run Visibility Probe to trigger an immediate visibility probe outside the nightly schedule.


Running Association Probes

The Associations tab discovers which brands AI models associate with your entities and queries.

Probe Types

  • E2B (Entity-to-Brand) — asks "What brands associate with this entity?" Requires entities.
  • B2E (Brand-to-Entity) — asks "What entities associate with this brand?" Requires brands.
  • Q2B (Query-to-Brand) — asks "What brands are relevant for this query?" Requires queries.
  • All — runs E2B + B2E + Q2B sequentially. Requires entities, brands, and queries.
  • Automagic — runs E2B + B2E + Q2B in one click. Requires entities, brands, and queries.

Running a Probe

  1. Select a Probe type from the dropdown.
  2. Optionally select a Location.
  3. Click Run.
  4. Monitor progress in real time with per-model progress bars.

Reading Results

QAS (Query Association Score) — shown for Q2B probes:

  • Rank, Brand name, Score bar, Mention count, QAS percentage.
  • Owned brands are highlighted in bold.
  • Per-model breakdown shows how each model ranks brands.

E2B Aggregate — brand mentions across all entities:

  • Brand name, Score bar, Mention count, Share percentage.

B2E Aggregate — entity/item mentions per brand:

  • Brand name, Unique items count, Total mentions.

Top Queries for Owned Brands — the queries that most frequently surface your brands.

Use filters (Run, Category, Location, Model, Source) to slice results across different dimensions.


Measuring Relevance

The Relevance tab measures how relevant AI models consider a brand-query or brand-entity pair, using repeated independent samples for statistical confidence.

Running a Relevance Probe

  1. Select a Source type: Queries or Entities.
  2. Select an Owned brand to test (only owned brands are available).
  3. Optionally select a Location.
  4. Set N per pair (1-100) — the number of independent samples per brand-source pair. Higher values give more statistical confidence.
  5. Review the cost estimate shown in real time (API calls and estimated cost).
  6. Click Run.

Reading Results

QRS (Query Relevance Score) — percentage of “yes” responses:

  • Brand, Relevance bar, Yes count, Total probes, QRS percentage.
  • Per-model breakdown.

Items Not Relevant — sources that scored 0%, indicating potential visibility gaps.
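As a sketch, QRS is a simple yes-share. The standard-error helper below is not an Auxy metric, only an illustration of why a larger N per pair tightens statistical confidence.

```python
import math

def qrs(yes_count, total):
    """Query Relevance Score: percentage of 'yes' verdicts."""
    return 100.0 * yes_count / total if total else 0.0

def qrs_stderr(yes_count, total):
    """Binomial standard error of the percentage; shrinks as N grows."""
    p = yes_count / total
    return 100.0 * math.sqrt(p * (1 - p) / total)

# 17 "yes" out of N=20 samples -> QRS 85.0%, standard error ~8 points
# The same 85% rate at N=100   -> standard error ~3.6 points
```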


Citation Mining

The Citations tab extracts and analyzes citations from grounded AI responses to see which domains and URLs are being cited.

Setting Up

Before mining, add citation prompts via Settings > Citation Prompts, use your existing queries as source material, or create a Prompt Set to organize prompts into groups.

Running Citation Mining

  1. Select a Source: Citation Prompts, Queries, or Both.
  2. Optionally select a Prompt Set to scope the run to a specific group.
  3. Optionally filter by Tag (or select “Untagged” for items with no tags).
  4. Optionally select a Location.
  5. Click Run Citation Mining.

Citation mining uses grounded API calls (Google Search, OpenAI web search, Anthropic web search) and extracts both the URLs cited in the final answer (“selected”) and URLs the model browsed but didn’t cite (“unselected”).

Reading Results

Summary cards at the top show:

  • Responses — total AI responses collected.
  • Citations — total citations extracted.
  • Unique Domains — distinct domains cited.
  • Search Queries — grounding queries used.

Detailed breakdowns:

  • Domain Breakdown — domain name, total count, provider-specific counts, owned status.
  • Brand Mentions in Responses — brand name, total mentions, owned status. Brands are detected via word-boundary matching against your brand list.
  • Top Cited Sources — URLs ranked by citation count with title, domain, and ownership.
  • Search Queries (collapsible) — the grounding queries the models actually used, with frequency.
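The word-boundary detection can be sketched with a regex. Case handling and the exact pattern are assumptions; the sample text is hypothetical.

```python
import re

def count_brand_mentions(text, brands, ignored=()):
    """Count case-insensitive whole-word mentions; brands tagged 'ignore' are skipped."""
    counts = {}
    for brand in brands:
        if brand in ignored:
            continue
        hits = re.findall(r"\b" + re.escape(brand) + r"\b", text, flags=re.IGNORECASE)
        if hits:
            counts[brand] = len(hits)
    return counts

sample = "Nike leads the list; NikeLab is separate, but nike shoes still count."
count_brand_mentions(sample, ["Nike", "Hoka"])
# -> {"Nike": 2}  ("NikeLab" fails the word boundary; "nike" matches case-insensitively)
```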

Reprocessing Mentions

If you update your brand list after citation runs, click Reprocess Mentions to re-extract brand mentions from existing responses without re-running the probes.


Snippet Optimization

The Optimizer tab (Classic mode) iteratively improves snippets to achieve better ranking in AI responses.

Starting a New Run

  1. Discover — enter a query. Auxy runs a grounded search to find the current ranking of snippets.
  2. Resolve URLs — map discovered items to their real URLs (resolves redirects).
  3. Select Target — choose which item is yours (auto-detect available).
  4. Configure — select a model, set max steps, number of ideators, and N samples for ranking.
  5. Start — click Run to begin the optimization loop.

How It Works

Auxy runs an iterative cycle for each step:

  1. Ideate — multiple parallel ideators generate hypotheses and edited snippets.
  2. Rank — each proposal is ranked against all other snippets using N-sample median ranking for robustness.
  3. Select — the best-performing edit becomes the new baseline.

Early stopping triggers if rank 1 is achieved. Plateau stopping triggers after N non-improving steps.
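The N-sample median ranking can be sketched as below; taking the median instead of the mean keeps a single noisy ranking sample from flipping the winner. Function names and data shapes are illustrative.

```python
import statistics

def median_rank(samples):
    """Collapse N independent rank samples for one proposal into its median rank."""
    return statistics.median(samples)

def pick_winner(proposals):
    """proposals: {proposal_id: [rank observed in each sample]}. Lower is better."""
    return min(proposals, key=lambda pid: median_rank(proposals[pid]))

trials = {
    "edit-a": [2, 3, 9],   # one outlier sample; the median is still 3
    "edit-b": [4, 4, 5],
}
pick_winner(trials)  # -> "edit-a"
```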

After completion, Auxy generates:

  • A Storyteller narrative summarizing what worked, what didn’t, and key insights.
  • A Content Brief with concrete page edit suggestions based on the winning snippet changes.

Viewing Results

The list view shows all previous runs with:

  • Query, Target URL, Baseline rank, Best rank, Status.
  • Click View Details for per-attempt analysis with a visual rank chart.

You can provide human feedback as constraints to guide the next round of optimization. Feedback overrides all other optimization considerations.


Holistic Optimization

Holistic mode optimizes full page content rather than just snippets.

How It Differs from Classic

  • Full page content is fetched for all competing items (via DataForSEO or Gemini URL context).
  • Line-based edits — the ideator proposes specific line insertions, replacements, and deletions rather than rewriting a snippet.
  • Richer ranking — the ranker sees full page content, not just snippets.

Workflow

  1. Discover — grounded search with intent to fetch full pages.
  2. Fetch All — parallel fetch of all items’ page content.
  3. Review — inspect fetched content, manually edit if needed.
  4. Baseline Rank — rank all items with full content to establish baseline.
  5. Optimize — iterative edit loop with line-level precision.

Page Grounding

Page Grounding probes which specific lines of a page are relevant to given queries.

Running a Probe

  1. Enter a URL to analyze.
  2. Enter one or more queries.
  3. Select a model.
  4. Click Run.

Auxy fetches the page content, numbers every line, and asks the model (with Google Search grounding scoped to the URL’s domain) which lines match each query. Lines are scored by how frequently they match across queries (0-1 scale).
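The frequency scoring follows directly from that description and can be sketched as below; the data shapes are assumptions.

```python
def score_lines(matched_lines_per_query, num_lines):
    """matched_lines_per_query: one set of matching line numbers per query.
    A line's score is the fraction of queries that matched it (0-1)."""
    num_queries = len(matched_lines_per_query)
    return {
        line_no: sum(line_no in matched for matched in matched_lines_per_query) / num_queries
        for line_no in range(1, num_lines + 1)
    }

# Three queries against a 5-line page:
matches = [{1, 2}, {2, 5}, {2}]
score_lines(matches, 5)
# line 2 -> 1.0 (all queries), lines 1 and 5 -> ~0.33, lines 3 and 4 -> 0.0
```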

Results

  • Line-by-line heatmap showing relevance scores.
  • Per-query match breakdown.
  • Export as CSV, HTML, or PDF.

Treewalker

Treewalker probes a model’s parametric memory — what it recalls about a website without searching the web.

How It Works

  1. Auxy asks the model to list items associated with your website multiple times (N runs, default 5).
  2. For each generated token, the model reports its top-5 alternative tokens with probabilities (logprobs).
  3. If any alternative exceeds the confidence threshold, Auxy “branches” by swapping in that token and completing from there, discovering less-prominent associations.
  4. Results are aggregated across runs to show appearance rates, confidence levels, and branch alternatives.
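The branching rule can be sketched as follows. The data shape and the 0.15 threshold are assumptions for illustration; the manual does not state the actual threshold value.

```python
def find_branch_points(token_steps, threshold=0.15):
    """token_steps: per position, (chosen_token, [(alt_token, prob), ...]) from logprobs.
    Returns every (position, alternative, prob) confident enough to fork on."""
    branches = []
    for pos, (chosen, alternatives) in enumerate(token_steps):
        for alt, prob in alternatives:
            if alt != chosen and prob >= threshold:
                branches.append((pos, alt, prob))
    return branches

steps = [
    ("running", [("running", 0.62), ("trail", 0.21), ("hiking", 0.05)]),
    ("shoes",   [("shoes", 0.90), ("gear", 0.04)]),
]
find_branch_points(steps)
# -> [(0, "trail", 0.21)]: fork by swapping in "trail" and completing from there
```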

Paired Sessions

Each Treewalker run creates two paired sessions:

  • Ungrounded — pure parametric memory (logprobs enabled, no web search).
  • Grounded — same prompt but with Google Search enabled (no logprobs).

An AI analysis compares both to identify:

  • Primary Bias — what the model recalls from training.
  • Grounding Bias — what changes when real-time search is available.
  • Strongest/Weakest Associations — by confidence scores.
  • Gap Analysis — items in grounded results missing from parametric memory.

Promoting Items

Items discovered by Treewalker can be promoted to your Entities list via the Toggle Entity button.


Veracity

Veracity fact-checks grounded AI responses by assessing how faithfully the model used its cited sources.

Running a Check

  1. Enter a query.
  2. Select a model (Google Gemini for grounded search).
  3. Click Run.

How It Works

  1. Auxy runs a grounded search and collects the response with all cited URLs.
  2. For each cited URL, the actual page content is fetched.
  3. An AI assessment compares what the grounded response claims against what the source page actually says.
  4. Each URL receives a fidelity score based on:
  • Survived — claims accurately reflected from the source.
  • Lost — information in the source that was dropped or ignored.
  • Distortions — claims that misrepresent or distort the source material.
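One plausible way to turn those three buckets into a per-URL score is sketched below. The weighting is entirely an assumption for illustration; the manual does not specify Auxy's actual formula, only that the aggregate is the average across cited URLs.

```python
def fidelity_score(survived, lost, distortions, distortion_weight=2.0):
    """Hypothetical 0-1 score: distortions penalized more heavily than dropped facts.
    Auxy's real weighting may differ."""
    denominator = survived + lost + distortion_weight * distortions
    return survived / denominator if denominator else 0.0

def aggregate_score(per_url_scores):
    """Aggregate score: average fidelity across all cited URLs."""
    return sum(per_url_scores) / len(per_url_scores) if per_url_scores else 0.0

# 8 claims survived, 2 lost, 0 distorted -> 8 / 10 = 0.80
# 8 survived, 0 lost, 2 distorted       -> 8 / 12 ≈ 0.67 (distortion hurts more)
```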

Results

  • Per-URL fidelity scores with detailed survived/lost/distortion breakdowns.
  • Aggregate score — average fidelity across all cited URLs.

Organic Search (GSC)

The Organic tab shows Google Search Console data for your property, enabling comparison between AI visibility and organic search performance.

Connecting GSC

  1. Go to Settings and find the GSC Connection section.
  2. Click Connect GSC to authorize with your Google account (requires webmasters.readonly scope).
  3. Select and Link the relevant GSC site to your property.

Importing Data

Two import modes:

  • Full Import — daily granular data going back up to 480 days. Fetches per-day, per-query, per-page, per-country, per-device rows.
  • Fast Import — aggregated 90-day summary by query and page. Faster and lighter. Automatically triggers query gap computation.

A Snapshot fetch pulls 5-dimension summaries (query, page, country, device, date) for dashboard use.

Viewing Data

The Organic tab shows:

  • Top queries by clicks/impressions/position.
  • Top pages by performance.
  • Country and device breakdowns.

Intent Classification

Intent classification labels your queries with multi-label taxonomies.

Creating a Taxonomy

  1. Go to the Citations tab (Intent section).
  2. Create a Label Set or choose a preset:
  • Search Intent — Informational, Navigational, Transactional, Commercial Investigation.
  • Funnel Stage — TOFU, MOFU, BOFU.
  • Content Type — Question, Comparison, How-To, Review, List.
  3. Or Generate Taxonomy from your query data — Auxy analyzes your GSC/fanout queries and proposes labels.

Running Classification

  1. Select a Label Set.
  2. Choose Source: GSC queries, fanout queries (from citation search queries), or both.
  3. Optionally enable Adult Filter to auto-tag explicit content.
  4. Click Classify. Queries are classified in batches of 10 with up to 100 parallel workers.
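The batching in step 4 can be sketched with a thread pool; `fake_classifier` stands in for the real model call and is purely illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

def batched(items, size=10):
    """Split the query list into batches of 10, as described above."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def classify_all(queries, classify_batch, max_workers=100):
    """Fan batches out to up to 100 parallel workers and merge the label dicts."""
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        for labels in pool.map(classify_batch, batched(queries)):
            results.update(labels)
    return results

def fake_classifier(batch):
    # Stand-in for the model call; labels every query Informational.
    return {q: ["Informational"] for q in batch}

classify_all([f"q{i}" for i in range(25)], fake_classifier)
# 25 queries -> 3 batches (10, 10, 5), all labeled
```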

Results

  • Binary matrix showing which labels apply to each query.
  • Per-label distribution stats.
  • Export as CSV for external analysis.

Preview lets you test classification on a sample of 10 queries before committing.


Benchmark

The Benchmark tab provides cross-model comparison of how different AI models perceive your brands.

Shows per-model, per-brand:

  • Mention counts across probe types.
  • Average rank positions.

Useful for identifying which models are most/least favorable to your brand.


Gap Analysis

The Analysis tab generates an AI-powered strategic assessment by combining data from multiple sources.

Data Sources

  • Objectives — Your brand priorities, positioning, and guardrails.
  • Entities & Brands — Your configured entities, brands, and queries.
  • Associations — Q2B + E2B association data.
  • Relevance — ARC relevance scores per brand.
  • Citations — Citation domains and brand mentions.
  • Treewalker — Parametric memory probe results.
  • GSC Data — Live Google Search Console query and page data.
  • Query Gaps — Queries with AI visibility but zero organic traffic.
  • Optimizer Insights — Patterns from optimizer runs.

Running an Analysis

  1. Click Sections to see available data sections with token counts.
  2. Select which sections to include (or include all).
  3. Click Run Analysis.

Auxy sends the combined data to Gemini with a strategic analysis prompt. The output covers:

  • Strengths — where your brand performs well.
  • Weaknesses — areas of low visibility or relevance.
  • Gaps — opportunities identified from cross-referencing AI and organic data.
  • Optimization Directions — concrete recommendations.
  • Query Strategy — query-level recommendations.

Query Gaps

Query gaps are queries that AI models associate with your brand/entities but that drive zero organic search traffic. These represent content opportunities.

Click Compute Gaps to identify them (requires GSC data).
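The gap computation reduces to a set difference, sketched below under the assumption that queries are compared after simple normalization; the sample data is hypothetical.

```python
def compute_query_gaps(ai_queries, gsc_clicks):
    """ai_queries: queries where AI probes surfaced an owned brand.
    gsc_clicks: {query: organic clicks} from Search Console.
    Returns AI-visible queries with zero organic traffic."""
    def norm(q):
        return " ".join(q.lower().split())
    organic = {norm(q) for q, clicks in gsc_clicks.items() if clicks > 0}
    return sorted(q for q in ai_queries if norm(q) not in organic)

ai = ["best running shoes", "trail shoe brands", "marathon training plan"]
gsc = {"best running shoes": 120, "marathon training plan": 0}
compute_query_gaps(ai, gsc)
# -> ["marathon training plan", "trail shoe brands"]
```

Note that a query present in GSC with zero clicks still counts as a gap, since it drives no organic traffic.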


Reports

The Report tab generates a comprehensive, print-ready HTML report covering:

  • Property overview (entity, brand, query counts).
  • Probe coverage summary.
  • Dashboard data sliced by category, model, and location.
  • Association details (Q2B, E2B, B2E) with per-model breakdowns.
  • Relevance scores by brand and model.
  • Citation analysis (domains, brand mentions, top URLs).
  • Brand hierarchy with inclusive/exclusive counts.
  • Visibility tracking trends.

The report downloads as an HTML file that can be opened in any browser or converted to PDF.


Exporting Data

Multiple export options are available across the app.

CSV Exports

  • Dashboard — Brand visibility summary.
  • QAS — Query Association Scores.
  • E2B — Entity-to-Brand associations.
  • Q2B — Query-to-Brand associations.
  • B2E — Brand-to-Entity associations.
  • Citations — Citation domains.
  • Citation URLs — Individual cited URLs.
  • Citation Prompts — All citation prompts with source and enabled status.
  • Citations Full — Complete citation data with response text.
  • Queries — All queries with entity and category.
  • Entities — All entities with categories.
  • Brands — Brands with counts and ownership.
  • Citation Tables — Per-table exports (mentions, search queries).
  • Intent Matrix — Binary label matrix per query (from Intent Classification).

Page Grounding Exports

Page grounding results can be exported as CSV, HTML, or PDF.


Sharing with Other Users

Auxy uses a membership system to share properties with other users.

Adding Members

  1. Go to Settings > Members.
  2. Enter the user’s Email address (must be a dejan.com.au account).
  3. Select a Role:
  • Owner — Full access. Can add/remove members, archive the property, and manage all data.
  • Editor — Can manage data, run probes, and configure settings.
  • Viewer — Read-only access to results and exports. Cannot trigger any POST actions.
  4. Click Add Member.

Managing Members

  • The Members sub-tab shows all members with their Name, Email, and Role.
  • Owners can Remove members or change roles.
  • The property creator is automatically the first owner.

What Members See

All members with access to a property share the same data:

  • Entities, brands, queries, categories, tags, and locations.
  • All probe runs and their results.
  • Citation mining data.
  • Optimizer runs and analyses.
  • Visibility tracking snapshots.
  • GSC data and gap analyses.
  • Exported datasets.

Sharing Workflow

  1. Create a property and set it up with brands, entities, and queries.
  2. Add team members via Settings > Members with appropriate roles.
  3. Run probes and analyses — results are immediately visible to all members.
  4. Generate reports or export CSV files to share with stakeholders who don’t have Auxy access.

Best Practices

  • Use descriptive Display Names for properties so members can identify them quickly.
  • Mark brand ownership accurately — it determines what appears in relevance probes and result highlighting.
  • Coordinate on entity and query naming to avoid duplicates.
  • Use tags to organize and filter large datasets across the team.
  • Use locations consistently so probe results are comparable across runs.
  • Set up Objectives before running Gap Analysis for more targeted recommendations.
  • Tag entities and locations with tracking to enable nightly visibility monitoring.