
  • How Google Decides When to Use Gemini Grounding for User Queries

    Mar 29, 2025, by Dan Petrovic, in AI, Google, SEO

    Google’s Gemini models are designed to provide users with accurate, timely, and trustworthy responses. A key innovation in this process is grounding, the ability to enhance model responses by anchoring them to up-to-date information from Google Search. However, not every query benefits from grounding, and Google has implemented a smart mechanism to decide when to…

  • Cross-Model Circuit Analysis: Gemini vs. Gemma Comparison Framework

    Mar 29, 2025, by Dan Petrovic, in AI, Google, SEO

    1. Introduction: Understanding the similarities and differences in how different large language models represent and prioritize brand information can provide crucial insights for developing robust, transferable brand positioning strategies. This framework outlines a systematic approach for comparative circuit analysis between Google’s Gemini and Gemma model families, with the goal of identifying universal brand-relevant circuits and…

  • Neural Circuit Analysis Framework for Brand Mention Optimization

    Mar 29, 2025, by Dan Petrovic, in AI, Google, SEO

    Leveraging Open-Weight Models for Mechanistic Brand Positioning. 1. Introduction: While our previous methodology treated language models as black boxes, open-weight models like Gemma 3 Instruct provide unprecedented opportunities for direct observation and manipulation of internal model mechanics. This framework extends our previous methodology by incorporating direct neural circuit analysis, allowing for precise identification and targeting…

  • Strategic Brand Positioning in LLMs: A Methodological Framework for Prompt Engineering and Model Behavior Analysis

    Mar 29, 2025, by Dan Petrovic, in AI, Google, SEO

    Abstract: This paper presents a novel methodological framework for systematically analyzing and optimizing the conditions under which large language models (LLMs) generate favorable brand mentions. By employing a structured probing technique that examines prompt variations, completion thresholds, and linguistic pivot points, this research establishes a replicable process for identifying high-confidence prompting patterns. The methodology enables…

  • AlexNet: The Deep Learning Breakthrough That Reshaped Google’s AI Strategy

    Mar 21, 2025, by Dan Petrovic, in AI, Google, Machine Learning

    When Google, in collaboration with the Computer History Museum, open-sourced the original AlexNet source code, it marked a significant moment in the history of artificial intelligence. AlexNet was more than just an academic breakthrough; it was the tipping point that launched deep learning into mainstream AI research and reshaped the future of companies like Google.…

  • The Next Chapter of Search: Get Ready to Influence the Robots

    Mar 19, 2025, by Dan Petrovic, in Google, SEO

    It’s an exciting time to be in SEO. Honestly, it feels like 2006 all over again – a period of rapid change, innovation, and frankly, a whole lot of fun. For a while there, things had gotten a little… predictable. Technical SEO, keyword research, competitor analysis, link building, schema… it was all necessary, of course,…

  • Revealed: The Exact Search Result Data Sent to Google’s AI

    Mar 14, 2025, by Dan Petrovic, in AI, Google

    UPDATE: Addressing guardrails, hallucinations, and context size. 1. People are reporting difficulties in recreating the output due to guardrails and hallucinations. 2. Snippet context sometimes grows to several chunks. Guardrails: Google attempts (and in many cases succeeds) at blocking these requests, but it does so in such a clumsy way that we actually get…

  • Beyond Rank Tracking: Analyzing Brand Perceptions Through Language Model Association Networks

    Feb 27, 2025, by Dan Petrovic, in AI, Google, Keyword Research, SEO

    This post is based on the codebase and specifications for AI Rank, an AI visibility and rank tracking framework developed by the DEJAN AI team: https://airank.dejan.ai/ Abstract: Traditional SEO has long relied on rank tracking as a primary metric of online visibility. However, modern search engines, increasingly driven by large language models (LLMs), are evolving beyond…

  • Teaching AI Models to Be Better Search Engines: A New Approach to Training Data

    Feb 13, 2025, by Dan Petrovic, in Machine Learning

    A recent patent application* reveals an innovative method for training AI models to become more effective at understanding and answering human queries. The approach tackles a fundamental challenge in modern search technology: how to teach AI systems to truly understand what people are looking for, rather than just matching keywords. The Core Innovation: The traditional…

  • Self-Supervised Quantized Representation for KG-LLM Integration

    Feb 6, 2025, by Dan Petrovic, in Machine Learning

    Paper: https://arxiv.org/pdf/2501.18119 This paper proposes a method called Self-Supervised Quantized Representation (SSQR) for seamlessly integrating Knowledge Graphs (KGs) with Large Language Models (LLMs). The key idea is to compress the structural and semantic information of entities in KGs into discrete codes (like tokens in natural language) that can be directly input into LLMs. Here’s a…


DEJAN

Better SEO through machine learning.
