Author: Dan Petrovic

  • Site Engagement Metrics

    To access the feature in Chrome, visit chrome://site-engagement/. Google’s Site Engagement Metrics framework plays a crucial role in assessing and analyzing user engagement with websites. This framework leverages detailed metrics, such as user interactions and engagement scores, to provide insights into browsing behavior. Here’s a breakdown of how this system works, based on the Site…
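
    A minimal sketch of pulling these scores outside the browser, assuming the profile’s Preferences JSON stores them under profile.content_settings.exceptions.site_engagement (the key layout and file path vary by Chrome version and OS, and are assumptions here, not something documented in the article):

    ```python
    import json
    from pathlib import Path

    # Hypothetical default profile path on Linux; adjust for your OS and profile.
    PREFS_PATH = Path.home() / ".config/google-chrome/Default/Preferences"

    def site_engagement_scores(prefs_path=PREFS_PATH):
        """Return {origin: raw_score} parsed from Chrome's Preferences JSON.

        Assumes scores live under profile.content_settings.exceptions.site_engagement,
        which may differ between Chrome versions.
        """
        prefs = json.loads(prefs_path.read_text(encoding="utf-8"))
        exceptions = (
            prefs.get("profile", {})
                 .get("content_settings", {})
                 .get("exceptions", {})
                 .get("site_engagement", {})
        )
        scores = {}
        for origin, entry in exceptions.items():
            setting = entry.get("setting", {})
            if isinstance(setting, dict) and "rawScore" in setting:
                scores[origin] = setting["rawScore"]
        return scores

    if __name__ == "__main__":
        for origin, score in sorted(site_engagement_scores().items(),
                                    key=lambda kv: kv[1], reverse=True):
            print(f"{score:6.1f}  {origin}")
    ```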

  • Beyond Links: Understanding Page Transitions in Chrome

    When SEOs think about user behavior, the conversation often revolves around clicks, links, and conversions. But in Chrome, there’s an underlying layer of data that tells a much richer story—page transitions. These are the bread and butter of how users navigate, revealing not just where they go, but how they got there. For SEOs, understanding…
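
    As a rough illustration of the kind of data involved, the sketch below decodes the transition column of the visits table in a Chrome profile’s History SQLite database, assuming Chromium’s core page-transition codes (LINK, TYPED, RELOAD, and so on, stored in the low 8 bits); the database path is a placeholder and the article’s own analysis may differ:

    ```python
    import sqlite3
    from collections import Counter
    from pathlib import Path

    # Core transition codes from Chromium's page_transition_types.h (lower 8 bits).
    CORE_TRANSITIONS = {
        0: "LINK", 1: "TYPED", 2: "AUTO_BOOKMARK", 3: "AUTO_SUBFRAME",
        4: "MANUAL_SUBFRAME", 5: "GENERATED", 6: "AUTO_TOPLEVEL",
        7: "FORM_SUBMIT", 8: "RELOAD", 9: "KEYWORD", 10: "KEYWORD_GENERATED",
    }

    # Hypothetical path; work on a copy, Chrome locks the file while running.
    HISTORY_DB = Path.home() / ".config/google-chrome/Default/History"

    def transition_counts(db_path=HISTORY_DB):
        """Tally core page-transition types from the visits table."""
        con = sqlite3.connect(f"file:{db_path}?mode=ro&immutable=1", uri=True)
        rows = con.execute("SELECT transition FROM visits").fetchall()
        con.close()
        return Counter(CORE_TRANSITIONS.get(t & 0xFF, "OTHER") for (t,) in rows)

    if __name__ == "__main__":
        for name, count in transition_counts().most_common():
            print(f"{name:18} {count}")
    ```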

  • Both humans and AI return similar results when asked for a random number

    Veritasium asked 200,000 humans for a random number and we asked AI for 200,000 random numbers, and the overlap is incredible! [Charts: human outliers vs AI outliers] The rest appears to be eerily aligned. We both like 2 and 7. But what I think is the most interesting part is the near-perfect alignment on the least random numbers.…
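
    A minimal sketch of how the AI side of such an experiment could be repeated, assuming an OpenAI-style chat API (the model name, prompt and sample size are placeholders, not the setup used for the article’s 200,000 samples):

    ```python
    from collections import Counter
    from openai import OpenAI  # assumes the openai package; any chat API would do

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def sample_random_numbers(n=1000, model="gpt-4o-mini"):
        """Ask the model for a 'random number between 1 and 100' n times."""
        counts = Counter()
        for _ in range(n):
            reply = client.chat.completions.create(
                model=model,
                messages=[{"role": "user",
                           "content": "Pick a random number between 1 and 100. "
                                      "Reply with the number only."}],
                temperature=1.0,
            )
            text = reply.choices[0].message.content.strip()
            if text.isdigit():
                counts[int(text)] += 1
        return counts

    if __name__ == "__main__":
        for number, freq in sample_random_numbers(200).most_common(10):
            print(number, freq)
    ```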

  • Chrome AI Models

    Chrome’s AI-driven segmentation platform enhances user experiences by predicting behaviours and tailoring features accordingly. Explore the different models that power these optimizations and how they shape web interactions.

  • Attention Is All You Need

    Summary by: https://illuminate.google.com Paper: https://arxiv.org/abs/1706.03762 Host: Welcome to this discussion on the groundbreaking paper, “Attention Is All You Need.” This paper introduces the Transformer, a novel neural network architecture based solely on the attention mechanism, eliminating the need for recurrence and convolutions. Let’s start with the core motivation behind this work. What were the limitations of…
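
    The paper’s core operation is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ / √d_k) V; a minimal NumPy sketch of that formula:

    ```python
    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V (Vaswani et al., 2017)."""
        d_k = Q.shape[-1]
        scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)        # (..., seq_q, seq_k)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)        # row-wise softmax
        return weights @ V                                     # (..., seq_q, d_v)

    # Toy example: 3 query positions attending over 4 key/value positions.
    rng = np.random.default_rng(0)
    Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
    print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 8)
    ```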

  • The State of AI

    Access the report here: stateof.ai Transcript: All right, let’s dive in. We’re tackling the state of AI report 2024 this time around. Seventh year they put this out. Nathan Benaich and Air Street Capital, they really have their fingers on the pulse of AI. Talk about a must-read if you want to understand what’s really happening…

  • ILO

    The ILO App: A Step-by-Step Tool for Managing SEO Data and Improving Link Structures. Managing SEO efficiently can be a complicated process, especially for websites with a large number of pages. The ILO app aims to simplify this by offering a structured, step-by-step approach. It brings together tools for handling key aspects of SEO, like…
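
    The ILO app’s own pipeline isn’t shown here; as a generic illustration of one step such a tool might involve, the sketch below builds an internal-link graph from hypothetical crawl data and scores pages with PageRank via networkx:

    ```python
    import networkx as nx

    # Hypothetical crawl output: (source_url, target_url) internal link pairs.
    edges = [
        ("/", "/blog/"), ("/", "/services/"),
        ("/blog/", "/blog/post-a/"), ("/blog/", "/blog/post-b/"),
        ("/blog/post-a/", "/services/"), ("/blog/post-b/", "/"),
    ]

    graph = nx.DiGraph(edges)
    scores = nx.pagerank(graph, alpha=0.85)  # internal PageRank as a link-equity proxy

    # Pages with few inlinks and low scores are candidates for better placement.
    for url, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{score:.3f}  in={graph.in_degree(url)}  {url}")
    ```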

  • Resource-Efficient Binary Vector Embeddings With Matryoshka Representation Learning

    When conducting an advanced SEO analysis, I frequently utilise vector embeddings for text feature extraction, similarity searches, clustering, retrieval, ranking and so on. One of the main burdens on top of compute is storage space, as these files tend to run into terabytes for very large websites. Today I did a deep analysis and realised I’ve…
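
    A minimal sketch of the storage-saving idea under discussion, assuming Matryoshka-style embeddings whose leading dimensions can be truncated: keep a prefix of dimensions, binarise by sign, pack the bits, and compare with Hamming distance (roughly 32x smaller than float32 at the same dimensionality). The sizes below are illustrative:

    ```python
    import numpy as np

    def binarize(embeddings, dims=256):
        """Truncate to the first `dims` Matryoshka dimensions, threshold at 0,
        and pack each vector into dims/8 bytes (32x smaller than float32)."""
        truncated = embeddings[:, :dims]
        bits = (truncated > 0).astype(np.uint8)
        return np.packbits(bits, axis=1)

    def hamming_similarity(query_bits, corpus_bits):
        """Similarity = 1 - normalised Hamming distance between packed vectors."""
        xor = np.bitwise_xor(query_bits, corpus_bits)
        distances = np.unpackbits(xor, axis=1).sum(axis=1)
        return 1.0 - distances / (corpus_bits.shape[1] * 8)

    # Toy example: 1,000 float32 vectors of 1,024 dims -> 256-bit binary codes.
    rng = np.random.default_rng(0)
    corpus = rng.normal(size=(1000, 1024)).astype(np.float32)
    packed = binarize(corpus, dims=256)          # 32 bytes per vector vs 4,096
    print(hamming_similarity(packed[:1], packed)[:5])
    ```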

  • Query Intent via Retrieval Augmentation and Model Distillation

    The paper, titled “QUILL: Query Intent with Large Language Models using Retrieval Augmentation and Multi-stage Distillation”, focuses on enhancing query understanding tasks, particularly query intent classification, by leveraging Large Language Models (LLMs) with retrieval augmentation and a novel two-stage distillation process. Retrieval Augmentation: The paper proposes the use of retrieval augmentation to provide LLMs with…
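
    A schematic sketch of retrieval augmentation for intent classification, not QUILL’s actual implementation: retrieve snippets for the query, append them to the input, and classify the augmented text. The toy retriever, corpus and zero-shot classifier below are stand-ins:

    ```python
    from transformers import pipeline

    def retrieve(query, corpus, k=2):
        """Naive term-overlap retriever standing in for a real search backend."""
        scored = sorted(corpus,
                        key=lambda doc: len(set(query.lower().split())
                                            & set(doc.lower().split())),
                        reverse=True)
        return scored[:k]

    corpus = [
        "cheap flights to tokyo book airline tickets",
        "tokyo weather forecast ten day outlook",
        "how to apply for a japanese tourist visa",
    ]

    classifier = pipeline("zero-shot-classification",
                          model="facebook/bart-large-mnli")

    query = "tokyo next week"
    # Retrieved snippets give the classifier context the short query lacks.
    augmented = query + " [SEP] " + " [SEP] ".join(retrieve(query, corpus))
    print(classifier(augmented,
                     candidate_labels=["travel booking", "weather", "visa"]))
    ```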

  • Search Query Quality Classifier

    We build on the work by Manaal Faruqui and Dipanjan Das from the Google AI Language team to train a classifier of well-formed search queries. Our model offers a 10% improvement over Google’s classifier by utilising the ALBERT architecture instead of an LSTM. With an accuracy of 80%, the model is production-ready and has already been…
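
    A minimal inference sketch of such a classifier using Hugging Face transformers, assuming an ALBERT checkpoint fine-tuned on the query-wellformedness data; the checkpoint name below is a placeholder (the base model, not our fine-tuned one):

    ```python
    import torch
    from transformers import AlbertTokenizer, AlbertForSequenceClassification

    # Placeholder checkpoint: in practice this would be albert-base-v2 fine-tuned
    # on the Google query-wellformedness dataset (Faruqui & Das, 2018).
    MODEL = "albert-base-v2"

    tokenizer = AlbertTokenizer.from_pretrained(MODEL)
    model = AlbertForSequenceClassification.from_pretrained(MODEL, num_labels=2)
    model.eval()

    def wellformedness(queries):
        """Return P(well-formed) for each query under the (fine-tuned) classifier."""
        batch = tokenizer(queries, padding=True, truncation=True,
                          return_tensors="pt")
        with torch.no_grad():
            logits = model(**batch).logits
        return torch.softmax(logits, dim=-1)[:, 1].tolist()

    print(wellformedness(["population of australia 2024",
                          "what is the population of australia"]))
    ```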