Category: AI
-
People call them AI. That’s it.
Poll Results on Social Media: What Do We Call ChatGPT, Claude, Gemini, Perplexity? Across 864 total votes collected on social media polls, respondents gave a fragmented view on how to label tools like ChatGPT, Claude, Gemini, and Perplexity. Results: Overall, the dominant label is still AI, though notable minorities prefer “Chatbots,” “AI Assistants,” or alternative…
-
GPT-5 Made SEO Irreplaceable
OpenAI’s latest model is trained to be intelligent, not knowledgeable. Wait, what? Yup. You read that right. Here’s an example: Now, you may think this is some pretty esoteric knowledge not broadly relevant to most end users, and you’re right. But here’s a tiny, open source model from Google, Gemma 3 4B, just knowing this…
-
Google’s Query Fan-Out System – A Technical Overview
This article describes Google’s system for automatically generating multiple intelligent variations of search queries using a trained generative neural network model. Unlike traditional systems that rely on pre-defined rules or historical query pairs, this system can actively produce new query variants for any input, even for queries it has never seen before. Primary Inputs List…
-
GPT-5 System Prompt
Here it is: Credit to: https://x.com/elder_plinius/status/1953583554287562823
H/T https://x.com/DarwinSantosNYC for spotting it.
-
Human Friendly Content is AI Friendly Content
What do humans and AI have in common? We don’t read. Instead, we rely on attention mechanisms to process text information. When optimising content for both AI and humans, get to the point early and reduce cognitive load. Striking parallels in attention and information processing Transformers use attention mechanisms mathematically equivalent to…
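The mechanism the teaser refers to can be sketched as scaled dot-product attention, the core Transformer operation. This is a generic illustration in plain Python, not code from the article; the vectors are made up for the demo:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for a single query vector.

    Scores each key against the query, normalises the scores with
    softmax, and returns the values averaged by those weights.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# Tokens whose keys align with the query dominate the output, which is
# the mechanical reason front-loaded, on-point content gets more weight.
out = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[10.0, 0.0], [0.0, 10.0]])
```

Here the first key matches the query, so the first value dominates `out`.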
-
Analysis of Gemini Embed Task-Based Dimensionality Deltas
When generating vector embeddings for your text using Gemini Embed there are several embedding optimisation modes. For each one you get slightly different embeddings, each optimised for the task at hand. The embeddings for semantic similarity are the most distinct from all other types, while retrieval query, retrieval document and fact verification embeddings are most…
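Deltas like these are typically measured with cosine similarity between the task-specific vectors. A minimal sketch follows; the 4-dimensional vectors are hypothetical stand-ins (real Gemini embeddings have hundreds of dimensions), chosen only to mirror the pattern the teaser describes:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical embeddings of the SAME sentence under three task modes.
sem_sim     = [0.9, 0.1, 0.2, 0.1]   # semantic similarity mode
retrieval_q = [0.3, 0.8, 0.4, 0.2]   # retrieval query mode
retrieval_d = [0.35, 0.75, 0.45, 0.25]  # retrieval document mode

# The retrieval modes land close together in the space, while the
# semantic-similarity mode diverges from both.
print(cosine(retrieval_q, retrieval_d))
print(cosine(sem_sim, retrieval_q))
```

Running the same comparison over real task-mode embeddings is how the per-task deltas in the article would be quantified.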
-
Prompt Engineer’s Guide to Gemini Schemas
This guide provides a comprehensive and technical deep dive into the GenerateContentResponse schema, which is the primary output structure for the Gemini API’s GenerateContent method. Understanding this schema is crucial for effectively parsing, interpreting, and utilizing the responses generated by the Gemini model. 1. Overview/Summary The GenerateContentResponse…
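The most common parsing task against this schema is pulling the generated text out of the first candidate. A defensive sketch, assuming the REST JSON shape (`candidates` → `content` → `parts` → `text`); the sample dict below is a trimmed illustration, not a full response:

```python
def extract_text(response: dict) -> str:
    """Concatenate the text parts of the first candidate in a
    GenerateContentResponse-shaped dict. Returns "" when the
    response has no candidates or no text parts."""
    candidates = response.get("candidates", [])
    if not candidates:
        return ""
    parts = candidates[0].get("content", {}).get("parts", [])
    return "".join(p.get("text", "") for p in parts)

# A minimal response in the REST JSON shape (most fields trimmed).
sample = {
    "candidates": [{
        "content": {"role": "model",
                    "parts": [{"text": "Hello, "}, {"text": "world."}]},
        "finishReason": "STOP",
    }],
}
print(extract_text(sample))  # prints "Hello, world."
```

Guarding every level matters because safety-blocked responses can arrive with no candidates or an empty parts list.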
-
Top 10 Most Recent Papers by MUVERA Authors
MUVERA Authors (each profiled with their Top 10 Recent Papers, 2023-2025, and Research Focus Areas): 1. Laxman Dhulipala (Google Research & UMD), 2. Majid Hadian (Google DeepMind), 3. Jason Lee (Google Research & UC Berkeley), 4. Rajesh Jayaram (Google Research)…
-
Training a Query Fan-Out Model
Google discovered how to generate millions of high-quality query reformulations without human input by literally traversing the mathematical space between queries and their target documents. Here’s How it Works This generated 863,307 training examples for a query suggestion model (qsT5) that outperforms all existing baselines. Query Decoder + Latent Space Traversal Step 1: Build a…
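The traversal the teaser describes can be sketched as linear interpolation between a query embedding and its target document embedding; each intermediate point is a candidate latent that a decoder could turn back into a reformulated query. The 2-d vectors and step count here are illustrative assumptions, not values from the paper:

```python
def interpolate(query_vec, doc_vec, steps):
    """Linearly traverse the embedding space from a query vector to
    its target document vector, returning steps+1 points including
    both endpoints. Sketch of the traversal idea only; the real
    system decodes each point back into a query string."""
    path = []
    for i in range(steps + 1):
        t = i / steps
        path.append([(1 - t) * q + t * d
                     for q, d in zip(query_vec, doc_vec)])
    return path

# Toy 2-d "embeddings": walk from the query toward the document.
path = interpolate([0.0, 1.0], [1.0, 0.0], 4)
for point in path:
    print(point)
```

Decoding thousands of such paths is what lets the training set scale to hundreds of thousands of examples without any human-written reformulations.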
-
Dissecting Gemini’s Tokenizer and Token Scores
As a technical SEO, you might be diving into machine learning (ML) to understand how tools like Google’s Gemini process text. One foundational concept is subword tokenization—breaking words into smaller pieces called “tokens.” While tokens themselves are context-agnostic (they don’t consider surrounding words), they do carry an inherent bias: each token’s likelihood reflects how prominent…
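The splitting step can be sketched with a greedy longest-match subword tokenizer. This is a toy illustration of the concept, not Gemini's actual tokenizer (which is SentencePiece-based), and the vocabulary is invented for the demo:

```python
def tokenize(word, vocab):
    """Greedy longest-match subword tokenization: repeatedly take the
    longest vocabulary entry that prefixes the remaining text. Toy
    sketch only; real tokenizers learn their vocab from corpus data."""
    tokens = []
    i = 0
    while i < len(word):
        for j in range(len(word), i, -1):
            piece = word[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            tokens.append(word[i])  # unknown character falls back to itself
            i += 1
    return tokens

# Frequent strings survive as single tokens; rarer words split into pieces,
# which is the "inherent bias" each token's likelihood carries.
vocab = {"token", "ization", "search", "engine", "s"}
print(tokenize("tokenization", vocab))  # prints ['token', 'ization']
```

A word absent from the vocabulary shatters into more, shorter pieces, and that piece count is itself a rough prominence signal.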