Advanced Prompting Techniques for AI SEO

Most marketers treat AI like a magic box: prompt goes in, content comes out. But AI models are more like highly skilled interns—they need clear instructions, context, and examples to do their best work.

The quality of your AI output is directly determined by the quality of your prompts. Master prompt engineering, and you can:

  • Generate SEO content that actually ranks (not generic fluff)
  • Automate repetitive SEO tasks without sacrificing quality
  • Analyze competitors and extract insights at scale
  • Create content briefs, meta descriptions, and schema markup in seconds
  • Conduct keyword research with semantic understanding

This article covers a wide range of prompt engineering techniques, providing concrete examples and practical code implementations to help you create high-quality, optimized content that resonates with both search engines and your target audience.

Let’s break down the techniques by category and show you exactly how to use them for SEO.

Basic Techniques: The Foundation

1. Zero-Shot Prompting

What it is: Giving the AI a task without any examples—just a clear instruction.

SEO Use Case: Quick meta description generation

Example Prompt:

Write a meta description for a blog post titled "How to Reduce Cart Abandonment in E-commerce." 
The meta description should be 150-155 characters, include the keyword "reduce cart abandonment," 
and create urgency.

When to use it: Fast, one-off tasks where you need a straightforward answer.
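Because constraints like the 150-155 character window are easy for a model to miss, it's worth checking the output programmatically before publishing. Here's a minimal sketch (the helper name and the sample description are illustrative, not from any library):

```python
def validate_meta_description(text: str, keyword: str,
                              min_len: int = 150, max_len: int = 155) -> list[str]:
    """Return a list of problems with a generated meta description (empty = OK)."""
    problems = []
    if not (min_len <= len(text) <= max_len):
        problems.append(f"length {len(text)} outside {min_len}-{max_len}")
    if keyword.lower() not in text.lower():
        problems.append(f"missing keyword '{keyword}'")
    return problems

# Check an AI-generated candidate; an empty list means it passed
desc = ("Losing sales at checkout? Discover seven proven, data-backed tactics "
        "to reduce cart abandonment and recover lost revenue in your store.")
print(validate_meta_description(desc, "reduce cart abandonment"))
```

Feed any failures back into the next prompt ("the previous attempt was 170 characters; shorten it") to converge quickly.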


2. Few-Shot Prompting

What it is: Providing 2-5 examples of the desired output format before asking for new content.

SEO Use Case: Creating consistent product descriptions across an e-commerce catalog

Example Prompt:

You are an e-commerce copywriter. Write product descriptions following these examples:

Example 1:
Product: Wireless Bluetooth Headphones
Description: Immerse yourself in crystal-clear sound with our Wireless Bluetooth Headphones. 
Featuring 40-hour battery life, active noise cancellation, and ultra-comfortable ear cushions, 
these headphones are perfect for commuters, travelers, and audiophiles alike.

Example 2:
Product: Stainless Steel Water Bottle
Description: Stay hydrated in style with our Stainless Steel Water Bottle. Double-wall vacuum 
insulation keeps drinks cold for 24 hours or hot for 12 hours. BPA-free, leak-proof, and 
available in 6 vibrant colors.

Now write a product description for: Organic Cotton Yoga Mat

When to use it: When you need consistent formatting, tone, and structure across multiple pieces of content.


3. Role Prompting

What it is: Instructing the AI to adopt a specific persona or expertise level.

SEO Use Case: Creating expert-level content that demonstrates E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness)

Example Prompt:

You are a certified nutritionist with 15 years of experience specializing in plant-based diets. 
Write a 300-word section for a blog post explaining the protein requirements for vegan athletes. 
Use evidence-based information and cite general nutritional guidelines.

When to use it: When you need content that sounds authoritative and matches a specific expertise level.


4. Emotion Prompting

What it is: Adding emotional cues or urgency to prompts to influence the AI’s tone and style.

SEO Use Case: Creating compelling calls-to-action and engaging introductions

Example Prompt:

Write an introduction for a blog post about cybersecurity threats facing small businesses. 
The tone should create a sense of urgency and concern without being alarmist. Make the reader 
feel that this information is critical for protecting their business.

When to use it: Landing pages, email campaigns, and content where emotional resonance matters.


5. Batch Prompting

What it is: Processing multiple inputs in a single prompt to save time.

SEO Use Case: Generating title tag variations for A/B testing

Example Prompt:

Generate 5 different title tag variations for each of these pages. Each title should be 50-60 
characters, include the primary keyword, and have a unique angle:

1. Page about "email marketing automation"
2. Page about "social media scheduling tools"
3. Page about "content calendar templates"

When to use it: When you have multiple similar tasks that can be processed together.
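Batch prompts like the one above are easy to assemble from a spreadsheet or keyword list. A rough sketch (the function name and instruction wording are illustrative):

```python
def build_batch_prompt(topics: list[str], variations: int = 5) -> str:
    """Assemble one batch prompt covering several title-tag tasks."""
    header = (
        f"Generate {variations} different title tag variations for each of "
        "these pages. Each title should be 50-60 characters, include the "
        "primary keyword, and have a unique angle:\n"
    )
    # Number each page task so the model can answer per item
    lines = [f'{i}. Page about "{topic}"' for i, topic in enumerate(topics, 1)]
    return header + "\n".join(lines)

prompt = build_batch_prompt([
    "email marketing automation",
    "social media scheduling tools",
    "content calendar templates",
])
print(prompt)
```

One assembled prompt per batch means one API call instead of one per page.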


Advanced Reasoning Techniques

6. Zero-Shot Chain of Thought (CoT)

What it is: Asking the AI to “think step by step” before providing an answer.

SEO Use Case: Keyword research and search intent analysis

Example Prompt:

I want to rank for "best project management software." Let's think step by step:

1. First, analyze the search intent behind this keyword
2. Then, identify what type of content currently ranks for this term
3. Next, determine what subtopics and related keywords should be covered
4. Finally, suggest a content structure that would be competitive

Provide your analysis.

When to use it: Complex SEO strategy questions that require reasoning.


7. Few-Shot Chain of Thought

What it is: Providing examples of step-by-step reasoning, then asking the AI to apply the same logic.

SEO Use Case: Competitive content analysis

Example Prompt:

I'll show you how to analyze a competitor's blog post, then you'll do the same for a new URL.

Example Analysis:
URL: competitor.com/guide-to-seo
Step 1: Word count is 3,200 words
Step 2: Includes 15 H2 subheadings covering keyword research, on-page SEO, link building
Step 3: Has 8 custom images and 2 embedded videos
Step 4: Internal links to 12 related articles
Step 5: Conclusion: Comprehensive, multimedia-rich, well-structured

Now analyze this URL: competitor.com/content-marketing-strategy

When to use it: Teaching the AI your specific analysis framework.


8. Self-Ask Prompting

What it is: The AI generates its own sub-questions and answers them sequentially.

SEO Use Case: Creating comprehensive FAQ sections

Example Prompt:

I'm writing a guide about "starting a podcast." Generate a list of questions a beginner would 
ask, then answer each one in 2-3 sentences. Format it as an FAQ section suitable for schema markup.

When to use it: FAQ pages, “People Also Ask” optimization, and comprehensive guides.
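Once the model returns question/answer pairs, wrapping them in FAQPage structured data is mechanical. A minimal sketch using schema.org's FAQPage format (the sample Q&A text is a placeholder):

```python
import json

def faq_schema(qa_pairs: list[tuple[str, str]]) -> str:
    """Build FAQPage JSON-LD markup from (question, answer) pairs."""
    data = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in qa_pairs
        ],
    }
    return json.dumps(data, indent=2)

print(faq_schema([
    ("What equipment do I need to start a podcast?",
     "A USB microphone, headphones, and free recording software are enough to begin."),
]))
```

Drop the output into a `<script type="application/ld+json">` tag on the page.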


9. Analogical Prompting

What it is: The AI generates its own relevant examples before solving the problem.

SEO Use Case: Creating relatable content for complex topics

Example Prompt:

Explain how Google's PageRank algorithm works by first thinking of an analogous real-world system, 
then using that analogy to explain the concept in simple terms for a non-technical audience.

When to use it: Making technical SEO concepts accessible to clients or broader audiences.


10. Meta Prompting

What it is: Providing a structured blueprint plus step-by-step reasoning.

SEO Use Case: Creating detailed content briefs

Example Prompt:

Create a content brief for a blog post targeting the keyword "how to improve website speed."

Use this structure:
1. Primary keyword and search intent
2. Target audience and their pain points
3. Competitive analysis (top 3 ranking pages)
4. Recommended word count and content depth
5. Required subtopics and H2/H3 structure
6. Internal linking opportunities
7. Call-to-action recommendation

Think through each section systematically and provide detailed recommendations.

When to use it: Comprehensive SEO planning and content strategy.


Advanced Break Down Techniques

11. Least-to-Most Prompting

What it is: Breaking a complex problem into smaller sub-problems and solving them sequentially.

SEO Use Case: Technical SEO audits

Example Prompt:

I need to audit a website's technical SEO. Let's break this down from simplest to most complex:

1. First, check if the site has a robots.txt and XML sitemap
2. Then, analyze page speed scores
3. Next, review mobile-friendliness
4. After that, check for broken links and redirect chains
5. Then, examine structured data implementation
6. Finally, assess crawl budget and indexation issues

Start with step 1 and work through each systematically.

When to use it: Complex, multi-step SEO processes.


12. Plan and Solve Prompting

What it is: The AI creates a plan first, then executes it step-by-step.

SEO Use Case: Content gap analysis

Example Prompt:

I want to identify content gaps between my site and my competitor's site.

First, create a plan for how to conduct this analysis.
Then, execute the plan using these URLs:
- My site: mysite.com/blog
- Competitor: competitor.com/blog

Provide actionable recommendations.

When to use it: Strategic SEO projects that benefit from upfront planning.


13. Program of Thoughts (PoT)

What it is: Generating code or symbolic steps to solve a problem precisely.

SEO Use Case: Creating regex patterns for htaccess redirects

Example Prompt:

I'm migrating a blog from /blog/post-title/ to /articles/post-title/

Generate the Apache .htaccess redirect rules needed to:
1. Redirect all /blog/ URLs to /articles/
2. Preserve the slug
3. Use 301 redirects
4. Handle both www and non-www versions

Provide the exact code.

When to use it: Technical implementations requiring precise syntax.
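For reference, the rules the prompt asks for would look roughly like this (a sketch using Apache mod_rewrite; adapt the host pattern to your domain and test on staging before deploying):

```apache
RewriteEngine On

# Normalize www to non-www first (301)
RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
RewriteRule ^(.*)$ https://%1/$1 [R=301,L]

# Permanently redirect /blog/<slug> to /articles/<slug>, preserving the slug
RewriteRule ^blog/(.*)$ /articles/$1 [R=301,L]
```

Always verify a sample of old URLs returns a single 301 hop to the new path, not a redirect chain.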


Advanced Ensemble Techniques

14. Self-Consistency Prompting

What it is: Generating multiple reasoning paths and picking the most common answer.

SEO Use Case: Validating keyword difficulty assessments

Example Prompt:

Analyze the keyword "best CRM software" and determine its difficulty level (Easy/Medium/Hard/Very Hard).

Generate 3 different analyses using different reasoning approaches:
1. Based on domain authority of ranking pages
2. Based on content depth and quality of top results
3. Based on backlink profiles of ranking pages

Then, provide a final consensus difficulty rating.

When to use it: Important decisions where you want multiple perspectives.
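If you run the prompt several times (or parse its separate analyses), the consensus step can be automated with a simple majority vote. A sketch, where the ratings would come from parsed model outputs:

```python
from collections import Counter

def consensus(ratings: list[str]) -> str:
    """Return the most common rating across independent reasoning paths."""
    return Counter(ratings).most_common(1)[0][0]

# Ratings as they might be parsed from three independent analyses
print(consensus(["Hard", "Very Hard", "Hard"]))  # -> Hard
```

With ties, `Counter.most_common` returns the first-seen value; run more paths if ties are common.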


15. Multi-Chain Reasoning

What it is: Multiple reasoning paths that are synthesized into a superior final answer.

SEO Use Case: Comprehensive content strategy development

Example Prompt:

I want to create a content strategy for a B2B SaaS company selling project management software.

Approach this from three angles:
1. Keyword research and search demand analysis
2. Competitor content gap analysis
3. Customer journey and intent mapping

Then, synthesize these three analyses into a unified 6-month content roadmap.

When to use it: High-stakes strategy work requiring multiple analytical lenses.


Advanced Critique Techniques

16. Self-Refine Prompting

What it is: The AI generates content, critiques it, then improves it iteratively.

SEO Use Case: Optimizing existing content

Example Prompt:

Here's a blog post introduction:

"SEO is important for businesses. It helps you get more traffic. In this post, we'll talk about SEO."

First, critique this introduction and identify what's wrong with it from an SEO and engagement perspective.
Then, rewrite it to be compelling, keyword-optimized, and hook the reader immediately.

When to use it: Improving underperforming content.


17. Chain of Verification

What it is: Draft an answer, verify it independently, then correct it.

SEO Use Case: Fact-checking AI-generated content

Example Prompt:

Write a paragraph explaining how Google's Core Web Vitals affect rankings.

Then, verify each claim you made:
1. Is this factually accurate based on official Google documentation?
2. Are there any outdated or incorrect statements?
3. Are there important nuances missing?

Finally, provide a corrected version if needed.

When to use it: Ensuring accuracy in informational content.


Advanced Multilingual Techniques

18. Chain of Translation

What it is: Translate first, then perform the task for clearer reasoning.

SEO Use Case: International SEO and multilingual keyword research

Example Prompt:

I want to target Spanish-speaking users searching for "how to lose weight."

First, translate this keyword into Spanish and identify the most natural phrasing.
Then, research related long-tail keywords in Spanish.
Finally, suggest content topics that would resonate with Spanish-speaking audiences.

When to use it: Expanding into international markets.


Advanced Multi-Step Techniques

19. Rephrase and Respond

What it is: Rewrite the question clearly, then answer the clarified version.

SEO Use Case: Understanding ambiguous search queries

Example Prompt:

A user searches for "apple optimization."

First, rephrase this query to clarify what the user likely means (are they asking about Apple device optimization, apple orchard optimization, or something else?).

Then, provide 3 possible interpretations and suggest what type of content would satisfy each intent.

When to use it: Ambiguous keywords or search queries.


20. Step-Back Prompting

What it is: Identify the general principle first, then solve the specific problem.

SEO Use Case: Diagnosing ranking drops

Example Prompt:

My website's rankings dropped 30% last week.

First, step back and explain the general principles of why rankings drop (algorithm updates, technical issues, competitor improvements, etc.).

Then, apply those principles to diagnose my specific situation and recommend a troubleshooting process.

When to use it: Problem-solving that benefits from first-principles thinking.


Practical SEO Workflows Using These Techniques

Workflow 1: Creating a Pillar Page

Combine: Role Prompting + Meta Prompting + Self-Refine

Step 1 (Role + Meta): "You are an SEO content strategist. Create a detailed outline for a pillar 
page about 'email marketing' including primary keyword, search intent, H2/H3 structure, internal 
linking strategy, and word count recommendation."

Step 2 (Self-Refine): "Review this outline. Is it comprehensive enough to compete with the top 3 
ranking pages? What's missing? Improve the outline based on your critique."

Step 3: "Now write the introduction section following the improved outline."

Workflow 2: Competitive Content Analysis

Combine: Few-Shot CoT + Batch Prompting

"I'll show you how to analyze one competitor article, then you'll analyze 5 more.

Example:
URL: competitor.com/seo-guide
- Word count: 2,800
- Headings: 12 H2s covering keyword research, on-page, technical, link building
- Media: 6 screenshots, 1 infographic
- Links: 8 internal, 3 external to authority sites
- Unique angle: Focuses on local SEO

Now analyze these 5 URLs: [list URLs]"

Workflow 3: Keyword Clustering

Combine: Zero-Shot CoT + Self-Consistency

"I have these 20 keywords: [list]. Let's think step by step:

1. First, group them by search intent (informational, commercial, transactional)
2. Then, cluster them by topic similarity
3. Next, identify which clusters should target the same page vs. separate pages
4. Finally, recommend a URL structure for each cluster

Generate 3 different clustering approaches, then provide your recommended final structure."
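Step 1 of this workflow (grouping by intent) can be pre-seeded with a cheap rule-based pass before handing the clusters to the model, so the AI spends its effort on the harder topical clustering. A rough sketch with illustrative trigger words:

```python
# Illustrative intent cues; a real list would be tuned per niche
INTENT_CUES = {
    "transactional": ("buy", "price", "discount", "coupon"),
    "commercial": ("best", "top", "review", "vs", "comparison"),
}

def classify_intent(keyword: str) -> str:
    """Rough rule-based intent label; anything unmatched is informational."""
    words = keyword.lower().split()
    for intent, cues in INTENT_CUES.items():
        if any(cue in words for cue in cues):
            return intent
    return "informational"

def group_by_intent(keywords: list[str]) -> dict[str, list[str]]:
    groups: dict[str, list[str]] = {}
    for kw in keywords:
        groups.setdefault(classify_intent(kw), []).append(kw)
    return groups

print(group_by_intent([
    "best crm software",        # commercial
    "how to clean a crm list",  # informational
    "buy crm license",          # transactional
]))
```

Treat this as a first pass only; the AI prompt above still handles the nuanced cases the cue list misses.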

Implementation Tips

Start Simple: Begin with basic techniques (Zero-Shot, Few-Shot, Role) before moving to advanced methods.

Iterate: Don’t expect perfect output on the first try. Refine your prompts based on results.

Combine Techniques: The real power comes from chaining multiple techniques together.

Save Your Best Prompts: Build a prompt library for recurring SEO tasks.

Test and Measure: Compare AI-generated content performance against human-written content.


Prompt engineering isn’t about replacing human expertise; it’s about amplifying it. These techniques give you a structured framework for getting better results from AI tools, whether you’re doing keyword research, creating content, or conducting technical audits.

The marketers who win in the AI era won’t be the ones who use AI the most. They’ll be the ones who use it the best.

Technical Reference & Deep Dive

Implementation Details (using LangChain)

Here are Python implementations using LangChain (v1.0) to showcase these techniques with the Gemini model. These examples will classify news headlines and extract key phrases.

Prerequisites:

  • Google Gemini API key (obtained via Google AI Studio).
  • Python environment with LangChain, Google Generative AI, and Pydantic installed:
pip install langchain langchain-google-genai pydantic

Shared setup (imports and model initialization used by the examples below):

import os
from google.colab import userdata  # Adjust if not using Colab
from langchain.chat_models import init_chat_model
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field
from typing import List

# 1. Set your API key (REPLACE with your actual key)
os.environ["GOOGLE_API_KEY"] = userdata.get('GOOGLE_API_KEY') # Or os.environ["GOOGLE_API_KEY"] = "YOUR_API_KEY" if not using Colab

# 2. Initialize the chat model (using Gemini - adjust as needed)
model = init_chat_model(
    "gemini-2.5-flash",
    model_provider="google_genai",
    temperature=0  # Set temperature for deterministic output
)

A. Zero-Shot Implementation (News Headline Classification)

# 3. Define the Pydantic schema for structured output
class ZeroShotClassifyResponse(BaseModel):
    predicted_label: str = Field(..., description="Predicted news category")

# 4. Create the parser
parser = PydanticOutputParser(pydantic_object=ZeroShotClassifyResponse)

# 5. Zero-shot prompt template (no examples)
prompt_template = ChatPromptTemplate.from_template(
    """
Classify the news headline into one of the categories:
["Politics", "Sports", "Business", "Technology", "Entertainment", "Health", "World"]

Headline: {headline}

Provide your output in this JSON format:
{format_instructions}
"""
)

# 6. Inject parser instructions
prompt = prompt_template.partial(format_instructions=parser.get_format_instructions())

# 7. Chain: prompt → model → parser
chain = prompt | model | parser

# 8. Example headline
headline = "Government approves new policy to boost semiconductor manufacturing."

# 9. Invoke
result = chain.invoke({"headline": headline})

# 10. Display result
print("\n--- Predicted Label ---\n", result.predicted_label) # Expected output: Politics

B. Few-Shot Implementation (Key Phrase Extraction)

# 3. Define the Pydantic schema for structured output
class KeyPhraseResponse(BaseModel):
    key_phrases: List[str] = Field(..., description="List of extracted key phrases")

# 4. Create parser
parser = PydanticOutputParser(pydantic_object=KeyPhraseResponse)

# 5. Few-shot prompt with examples
prompt_template = ChatPromptTemplate.from_template(
    """
Extract the most important key phrases from the text.
Key phrases should be meaningful, concise, and capture core concepts.

Here are some examples:

Example 1:
Text: "Climate change is accelerating due to rising greenhouse gas emissions."
Key Phrases: ["climate change", "greenhouse gas emissions"]

Example 2:
Text: "Machine learning models require large datasets for effective training."
Key Phrases: ["machine learning models", "large datasets", "effective training"]

Example 3:
Text: "Renewable energy sources like solar and wind are becoming more affordable."
Key Phrases: ["renewable energy sources", "solar", "wind", "affordable energy"]

Now extract key phrases from the following text:

Text:
{input_text}

Provide the output in this JSON format:
{format_instructions}
"""
)

# 6. Inject parser instructions
prompt = prompt_template.partial(format_instructions=parser.get_format_instructions())

# 7. Build LCEL chain
chain = prompt | model | parser

# 8. Example text
input_text = "Artificial intelligence is transforming healthcare by enabling faster diagnosis, personalized treatments, and advanced medical imaging."

# 9. Invoke
result = chain.invoke({"input_text": input_text})

# 10. Display results
print("\n--- Key Phrases ---\n", result.key_phrases)

C. Role Prompting (News Headline Classification)

# 3. Define the Pydantic schema for structured output
class ZeroShotClassifyResponse(BaseModel):
    predicted_label: str = Field(..., description="Predicted news category")

# 4. Create the parser
parser = PydanticOutputParser(pydantic_object=ZeroShotClassifyResponse)

# 5. Role Prompting template (no examples)
prompt_template = ChatPromptTemplate.from_template(
    """
You are a professional news editor with years of experience in global journalism. Your job is to accurately classify news headlines into their correct category.
Classify the news headline into one of the categories:
["Politics", "Sports", "Business", "Technology", "Entertainment", "Health", "World"]

Headline: {headline}

Provide your output in this JSON format:
{format_instructions}
"""
)

# 6. Inject parser instructions
prompt = prompt_template.partial(format_instructions=parser.get_format_instructions())

# 7. Chain: prompt → model → parser
chain = prompt | model | parser

# 8. Example headline
headline = "Government approves new policy to boost semiconductor manufacturing."

# 9. Invoke
result = chain.invoke({"headline": headline})

# 10. Display result
print("\n--- Predicted Label ---\n", result.predicted_label)

Advanced Technique Implementations

These techniques significantly enhance the LLM’s ability to create optimized content.

1. Chain-of-Thought (CoT) Prompting

Concept: Instruct the LLM to “think step-by-step” before providing a final answer. This encourages more logical and accurate reasoning, leading to better content.

AI SEO Angle: Helps the LLM create content that is more insightful, comprehensive, and logically structured, resembling high-quality content that Google favors. Use CoT prompting to create in-depth analyses and tutorials.

# Define the Pydantic schema for structured output
class CoTResponse(BaseModel):
    reasoning_chain: str = Field(..., description="Step-by-step reasoning")
    answer: str = Field(..., description="Final numeric answer only")

# Create the parser from the Pydantic model
parser = PydanticOutputParser(pydantic_object=CoTResponse)

# Prompt template with explicit zero-shot CoT cue ("Let's think step by step.")
prompt_template = ChatPromptTemplate.from_template(
    """
You are a step-by-step reasoning assistant.

Question: {question}

Answer: Let's think step by step.

Provide your solution in the following JSON format:
{format_instructions}

"""
)

# Inject the parser's format instructions into the template
prompt = prompt_template.partial(format_instructions=parser.get_format_instructions())

# Build the LCEL chain (prompt → model → parser)
chain = prompt | model | parser

# Example question and invocation
question = "A baker made 24 cookies. Half are chocolate chip. Half of those have sprinkles. How many chocolate-chip cookies with sprinkles?"

result = chain.invoke({"question": question})

# Display the result
print("\n--- Reasoning Chain ---\n", result.reasoning_chain)
print("\n--- Final Answer ---\n", result.answer)

SEO Example: Ask the LLM to explain the benefits of a specific SEO tool using CoT. The step-by-step breakdown can become a valuable section in your content.

2. Chain-of-Draft (CoD) Prompting

Concept: Similar to CoT, but the LLM uses very short, compact reasoning steps (3-5 words max) to reduce response length, token cost, and response time.

AI SEO Angle: CoD helps generate focused, concise content that gets straight to the point. Ideal for creating summaries, bullet points, or quick-reference guides.

# Define the Pydantic schema for structured output
class CoDResponse(BaseModel):
    reasoning_chain: str = Field(..., description="Compact draft reasoning steps")
    answer: str = Field(..., description="Final numeric answer only")

# Create the parser from the Pydantic model
parser = PydanticOutputParser(pydantic_object=CoDResponse)

# Prompt template with the Chain-of-Draft cue (compact drafts, 5 words max per step)
prompt_template = ChatPromptTemplate.from_template(
    """
You are a step-by-step reasoning assistant.

Question: {question}

Answer: Let's think step by step, but only keep a minimum draft for
each thinking step, with 5 words at most.

Provide your solution in the following JSON format:
{format_instructions}

"""
)

# Inject the parser's format instructions into the template
prompt = prompt_template.partial(format_instructions=parser.get_format_instructions())

# Build the LCEL chain (prompt → model → parser)
chain = prompt | model | parser

# Example question and invocation
question = "A baker made 24 cookies. Half are chocolate chip. Half of those have sprinkles. How many chocolate-chip cookies with sprinkles?"

result = chain.invoke({"question": question})

# Display the result
print("\n--- Reasoning Chain ---\n", result.reasoning_chain)
print("\n--- Final Answer ---\n", result.answer)

SEO Example: Use CoD to create a list of ranking factors for a specific search engine algorithm update. Each short reasoning step becomes a concise, impactful point.

3. Chain-of-Translation Prompting

Concept: Translate a non-English input sentence into English before performing the task (e.g., sentiment analysis).

AI SEO Angle: Improves accuracy when dealing with non-English keywords or content ideas. Ensures the LLM understands the nuances of the topic before generating content.

# Define Final Structured Output Model
class TranslationSentiment(BaseModel):
    telugu_sentence: str = Field(..., description="Original Telugu input")
    english_translation: str = Field(..., description="English translation of the Telugu text")
    sentiment_label: str = Field(..., description="Sentiment: Positive, Negative, or Neutral")

final_parser = PydanticOutputParser(pydantic_object=TranslationSentiment)

# Single Prompt Template (Translation + Classification)
prompt_template = ChatPromptTemplate.from_template(
    """
Consider yourself to be a human annotator who is well versed in English and Telugu language.
Given a Telugu sentence as input, perform the following tasks on the sentence:

1. Translate the given Telugu sentence into English.

2. Identify the sentiment depicted by the sentence.
   If the sentence expresses a positive emotion or opinion, label it as Positive.
   If the sentence expresses a negative emotion or complaint, label it as Negative.
   If the sentence expresses neither positive nor negative sentiment, label it as Neutral.

3. Give the output as:
   - the original Telugu sentence,
   - its English translation,
   - and the sentiment label.

Sentence is as follows:
{sentence}

Provide the final output using this JSON structure:
{format_instructions}
"""
)

single_prompt = prompt_template.partial(
    format_instructions=final_parser.get_format_instructions()
)

# Build LCEL Chain (single LLM call)
chain = single_prompt | model | final_parser

# Run Chain of Translation Prompting on the Example
telugu_sentence = "సినిమా అద్భుతంగా ఉంది! డైరెక్టర్ పనితీరు సూపర్. మళ్ళీ చూడాలని ఉంది."

result = chain.invoke({"sentence": telugu_sentence})

print("\n--- FINAL OUTPUT ---\n")
print("Telugu Sentence       :", result.telugu_sentence)
print("English Translation   :", result.english_translation)
print("Sentiment Label       :", result.sentiment_label)

SEO Example: Research keywords in another language, translate them to English, and then use those translated keywords to generate English-language content.

4. Chain-of-Verification (CoVe) Prompting

Concept: Reduces factual errors (hallucinations) by forcing the LLM to verify its own answers before finalizing them.

AI SEO Angle: Crucial for creating trustworthy and authoritative content. Use CoVe when generating content about sensitive topics (e.g., finance, health, law) or when accuracy is paramount.

Multi-Stage Implementation: CoVe requires four distinct prompts and chains.

from typing import List

# ---------------------------------------------------------
# 2. Define structured outputs for all 4 CoVe stages
# ---------------------------------------------------------

class BaselineResponse(BaseModel):
    draft_answer: str = Field(..., description="Initial unverified answer")

class VerificationPlan(BaseModel):
    questions: List[str] = Field(..., description="List of verification questions generated from the draft")

class VerificationAnswers(BaseModel):
    answers: List[str] = Field(..., description="Answers to the verification questions in the same order")

class FinalVerifiedResponse(BaseModel):
    verified_answer: str = Field(..., description="Final corrected answer using only verified facts")

# ---------------------------------------------------------
# 3. Initialize the Gemini model (gemini-2.5-flash)
# ---------------------------------------------------------
model = init_chat_model(
    "gemini-2.5-flash",
    model_provider="google_genai",
    temperature=0
)

# ---------------------------------------------------------
# 4. PROMPTS
# ---------------------------------------------------------

# 4.1 Baseline Draft Prompt
baseline_prompt_tmpl = ChatPromptTemplate.from_template(
    """
You are a factual answering assistant.

Step 1 of Chain-of-Verification:
Generate a baseline draft answer for the question. Do NOT verify anything yet.

Question:
{question}

Return your response in JSON:
{format_instructions}
"""
)

baseline_parser = PydanticOutputParser(pydantic_object=BaselineResponse)
baseline_prompt = baseline_prompt_tmpl.partial(format_instructions=baseline_parser.get_format_instructions())


# 4.2 Plan Verification Questions Prompt
verify_plan_tmpl = ChatPromptTemplate.from_template(
    """
You are now performing Step 2 of Chain-of-Verification.

Given the baseline draft answer below, generate verification questions to check EACH factual claim.

Draft Answer:
{draft}

Your job:
- Break the draft into factual claims.
- Create one verification question for each claim.
- Each question MUST be independently fact-checkable.

Return JSON:
{format_instructions}
"""
)

verify_plan_parser = PydanticOutputParser(pydantic_object=VerificationPlan)
verify_plan_prompt = verify_plan_tmpl.partial(format_instructions=verify_plan_parser.get_format_instructions())


# 4.3 Verification Answering Prompt
verify_answer_tmpl = ChatPromptTemplate.from_template(
    """
Step 3 of Chain-of-Verification.

Answer the following verification questions INDEPENDENTLY.
Do NOT refer to the draft answer. Use only factual knowledge.

Questions:
{questions}

Return JSON:
{format_instructions}
"""
)

verify_answer_parser = PydanticOutputParser(pydantic_object=VerificationAnswers)
verify_answer_prompt = verify_answer_tmpl.partial(format_instructions=verify_answer_parser.get_format_instructions())


# 4.4 Final Verified Response Prompt
final_answer_tmpl = ChatPromptTemplate.from_template(
    """
Step 4 of Chain-of-Verification.

You are given:
1. The baseline draft answer
2. The list of verification questions
3. The factual answers to those questions

Your task:
- Identify incorrect statements in the draft
- Keep only the claims supported by verification answers
- Remove or correct unsupported claims
- Produce the final VERIFIED answer

Draft:
{draft}

Verification Questions:
{questions}

Verification Answers:
{answers}

Return JSON:
{format_instructions}
"""
)

final_answer_parser = PydanticOutputParser(pydantic_object=FinalVerifiedResponse)
final_answer_prompt = final_answer_tmpl.partial(format_instructions=final_answer_parser.get_format_instructions())

# ---------------------------------------------------------
# 5. Build the LCEL chains
# ---------------------------------------------------------

# Composing LCEL chains makes no API calls, so no delays are needed here;
# rate limiting, if required, belongs between the invoke calls below.
baseline_chain = baseline_prompt | model | baseline_parser
plan_chain = verify_plan_prompt | model | verify_plan_parser
verify_chain = verify_answer_prompt | model | verify_answer_parser
final_chain = final_answer_prompt | model | final_answer_parser

# ---------------------------------------------------------
# 6. Run CoVe on your example
# ---------------------------------------------------------

question = "Which US Presidents were born in the state of Texas?"

# Step 1: Baseline Draft
baseline = baseline_chain.invoke({"question": question})

# Step 2: Plan Verifications
plan = plan_chain.invoke({"draft": baseline.draft_answer})

# Step 3: Execute Verifications
verification = verify_chain.invoke({"questions": plan.questions})

# Step 4: Final Verified Answer
final = final_chain.invoke({
    "draft": baseline.draft_answer,
    "questions": plan.questions,
    "answers": verification.answers
})

# ---------------------------------------------------------
# 7. Print all outputs
# ---------------------------------------------------------

print("\n--- Baseline Draft ---\n", baseline.draft_answer)
print("\n--- Verification Questions ---\n", plan.questions)
print("\n--- Verification Answers ---\n", verification.answers)
print("\n--- Final Verified Answer ---\n", final.verified_answer)

SEO Example: Generate a comprehensive guide about cryptocurrency investing using CoVe to ensure the information is accurate and up-to-date.

5. Least-to-Most Prompting (LtM)

Concept: Break a complex question into simpler sub-problems and solve them sequentially.

AI SEO Angle: Useful for creating long-form content that requires in-depth analysis and progressive explanation. Helps create tutorial-style content that guides users through a process step by step.

# Define structured output for LtM
class LtMResponse(BaseModel):
    decomposition: str = Field(..., description="List of sub-problems in order")
    sequential_solution: str = Field(..., description="Step-by-step solutions for each sub-problem")
    final_answer: str = Field(..., description="Final numeric answer only")

# Create parser
parser = PydanticOutputParser(pydantic_object=LtMResponse)

# Zero-Shot Least-to-Most Prompt Template
prompt_template = ChatPromptTemplate.from_template(
    """
You are an expert reasoning assistant.

You must solve the problem using **Least-to-Most Prompting**, which has TWO required stages:

1. **Decomposition (Least):**
   - Break the main problem into a sequential list of simpler sub-problems.

2. **Sequential Solving (Most):**
   - Solve each sub-problem step-by-step.
   - Use outputs of earlier sub-problems to solve later ones.
   - Continue until the final answer is reached.

Question:
{question}

Provide your solution in the following JSON format:
{format_instructions}

Important:
- decomposition must contain numbered sub-problems.
- sequential_solution must show calculations for each sub-problem.
- final_answer must contain ONLY the final numeric answer.
"""
)

# Insert parser instructions
prompt = prompt_template.partial(format_instructions=parser.get_format_instructions())

# Build LCEL chain
chain = prompt | model | parser

# Invoke the chain on the marathon example problem
question = """
A runner is preparing for a marathon. She runs 10 miles every day.
Last week, she ran 7 days.
This week, she took a 2-day rest and ran 8 miles on the remaining days.
If she wants to run a total of 180 miles across both weeks,
how many more miles must she run in the next 3 days?
"""

result = chain.invoke({"question": question})

# Output
print(result)
print("\n--- Decomposition ---\n", result.decomposition)
print("\n--- Sequential Solution ---\n", result.sequential_solution)
print("\n--- Final Answer ---\n", result.final_answer)

SEO Example: Create a comprehensive guide to link building. Start with basic concepts (what is a link?), then progress to advanced strategies (guest posting, broken link building).

6. Plan-and-Solve Prompting

Concept: Guides the model to create a plan before solving the problem.

AI SEO Angle: Improves the logical flow and structure of content. Excellent for creating “how-to” guides, tutorials, and process documentation.

# Define structured output for Plan-and-Solve
class PlanSolveResponse(BaseModel):
    variables: str = Field(..., description="Extracted relevant variables and their numerals")
    plan: str = Field(..., description="A complete step-by-step plan to solve the problem")
    calculation: str = Field(..., description="Execution of the plan with intermediate calculations")
    final_answer: str = Field(..., description="Final numeric answer only")

# Create parser
parser = PydanticOutputParser(pydantic_object=PlanSolveResponse)

# Zero-Shot Plan-and-Solve Prompt Template
prompt_template = ChatPromptTemplate.from_template(
    """
You are an expert step-by-step reasoning assistant using Plan-and-Solve prompting.
Let’s first understand the problem, extract relevant variables and their corresponding numerals, and make a complete plan.
Then, let’s carry out the plan, calculate intermediate variables (pay attention to correct numerical calculation and commonsense),
solve the problem step by step, and show the answer.

Question:
{question}

Answer:

Provide your solution in the following JSON format exactly:
{format_instructions}

Important:
- variables must list each extracted variable and its numeric value.
- plan must contain a numbered plan of steps to compute the answer.
- calculation must show the step-by-step execution of the plan with arithmetic.
- final_answer must contain ONLY the final numeric answer (no units, no explanation).
"""
)

# Insert parser instructions
prompt = prompt_template.partial(format_instructions=parser.get_format_instructions())

# Build LCEL chain
chain = prompt | model | parser

# Invoke the chain on the train-speed example problem
question = """
A train travels at an average speed of 60 mph for the first 3 hours of a journey and then at an average speed
of 40 mph for the remaining 2 hours. What is the average speed of the train for the entire journey?

Answer Choices: (A) 52 mph (B) 50 mph (C) 48 mph (D) 46 mph (E) 45 mph
"""

result = chain.invoke({"question": question})

# Output
print("\n--- Variables ---\n", result.variables)
print("\n--- Plan ---\n", result.plan)
print("\n--- Calculation ---\n", result.calculation)
print("\n--- Final Answer ---\n", result.final_answer)

SEO Example: Generate a detailed guide on conducting keyword research. The “plan” section outlines the steps (brainstorming, using keyword research tools, analyzing competitor keywords), and the “calculation” section provides specific examples.

7. Program-of-Thoughts (PoT) Prompting

Concept: The LLM generates executable Python code to solve the problem.

AI SEO Angle: Can be used to create interactive content or tools that demonstrate concepts. Useful for generating code snippets that solve specific problems (e.g., calculating ROI, optimizing images).

from langchain_experimental.utilities import PythonREPL

# Define PoT structured output
class PoTResponse(BaseModel):
    program: str = Field(..., description="Python code that computes the answer. Must assign final result to 'ans'.")

# Parser
parser = PydanticOutputParser(pydantic_object=PoTResponse)

# Python Interpreter Tool (LangChain)
python_repl = PythonREPL()

# Zero-Shot PoT Prompt Template
prompt_template = ChatPromptTemplate.from_template(
    """
You are an expert numerical reasoning assistant.

You must solve the problem using **Program-of-Thoughts (PoT)** prompting.

Your output MUST be ONLY Python code:

- Use step-by-step reasoning expressed as variable assignments.
- Do NOT include comments.
- Do NOT include print statements.
- Use clear variable names.
- The last line MUST be: ans = <final value>
- The code MUST run in a Python interpreter.

Do NOT output natural language.
Do NOT add explanations.
ONLY return Python code.

Problem:
{question}

Provide the solution in this JSON format:
{format_instructions}
"""
)

# Insert parser instructions
prompt = prompt_template.partial(format_instructions=parser.get_format_instructions())

# Build chain
chain = prompt | model | parser

# Problem
question = """
Janet's ducks lay 16 eggs per day. She eats three for breakfast every morning and
bakes muffins for her friends every day with four. She sells the remainder at the
farmers' market daily for $2 per fresh duck egg. How much in dollars does she make
every day at the farmers' market?
"""

# Invoke LLM → get Python program
result = chain.invoke({"question": question})

print("\n--- Program Generated by LLM ---\n")
print(result.program)

# Execute using LangChain's Python REPL tool. PythonREPL.run returns the
# program's printed output, so append a print(ans) to capture the result.
execution_output = python_repl.run(result.program + "\nprint(ans)")
final_answer = execution_output.strip()

print("\n--- Final Answer (from Python interpreter) ---\n")
print(final_answer)
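The execution step itself needs no LLM: exec() a generated-style program and read `ans` back from the namespace. A minimal sketch with a hand-written program of the shape the PoT prompt requests:

```python
# A program of the kind the model is asked to emit, hard-coded for illustration
program = (
    "eggs_per_day = 16\n"
    "eggs_eaten = 3\n"
    "eggs_for_muffins = 4\n"
    "eggs_sold = eggs_per_day - eggs_eaten - eggs_for_muffins\n"
    "ans = eggs_sold * 2\n"
)

namespace = {}
exec(program, namespace)   # run the generated code in an isolated namespace
print(namespace["ans"])    # → 18
```

In production, treat model-generated code as untrusted: PythonREPL and a bare exec() provide no sandboxing.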

SEO Example: Generate a Python script that analyzes website traffic data and identifies areas for improvement.

8. Rephrase-and-Respond Prompting

Concept: The LLM first rephrases the user’s question to remove ambiguity and clarify intent before answering it.

AI SEO Angle: Improves the relevance and accuracy of the generated content. Useful when dealing with complex or poorly worded prompts. Can help target specific user intents more effectively.

# Define Structured Output for Rephrase-and-Respond
class RaRResult(BaseModel):
    rephrased_question: str = Field(..., description="The rephrased and expanded question")
    response: str = Field(..., description="Final answer produced after rephrasing")

rar_parser = PydanticOutputParser(pydantic_object=RaRResult)

# Single-Prompt Rephrase-and-Respond Template
rar_prompt_template = ChatPromptTemplate.from_template(
    """
You are an expert reasoning assistant.

For the user question below, perform BOTH steps in a single reasoning flow:

1. Rephrase and expand the question
   - Remove ambiguity
   - State the hidden intention clearly
   - Make the required reasoning explicit

2. Respond to the rephrased question
   - Follow the clarified interpretation
   - Provide a correct and well-reasoned answer

User Question:
{question}

Provide your output in this JSON format:
{format_instructions}
"""
)

rar_prompt = rar_prompt_template.partial(
    format_instructions=rar_parser.get_format_instructions()
)

# Build the LCEL Chain — Only One LLM Call
rar_chain = rar_prompt | model | rar_parser

# Run RaR on the Example Question
question = "Identify the odd one out: Apple, Banana, Car, Orange."

result = rar_chain.invoke({"question": question})

print("\n--- REPHRASED QUESTION ---\n")
print(result.rephrased_question)

print("\n--- FINAL RESPONSE ---\n")
print(result.response)

SEO Example: If you provide a vague keyword like “SEO,” the LLM will rephrase it into something more specific, such as “What are the top 5 strategies for improving organic search rankings in 2024?” This ensures the content targets a particular user intent.

9. Self-Ask Prompting

Concept: The LLM breaks down a complex question into smaller follow-up questions and answers them sequentially.

AI SEO Angle: Improves the comprehensiveness and depth of content. Suitable for creating FAQs, troubleshooting guides, or complex explanations that require addressing multiple sub-questions.

# Define Pydantic schema
class SelfAskResponse(BaseModel):
    reasoning_chain: str = Field(..., description="Complete self-ask transcript (follow-ups + intermediate answers)")
    answer: str = Field(..., description="Final answer only in MM/DD/YYYY format")

# Create parser
parser = PydanticOutputParser(pydantic_object=SelfAskResponse)

# Few-shot Self-Ask example (1-shot)
few_shot_example = """
Q: The historical event was originally planned for 11/05/1852, but due to unexpected weather, it was moved forward by two days to today. What is the date 8 days from today in MM/DD/YYYY?
Are follow up questions needed here: Yes.
Follow up: What is today's date?
Intermediate answer: Moving an event forward by two days from 11/05/1852 means today's date is 11/03/1852.
Follow up: What date is 8 days from today?
Intermediate answer: 8 days from 11/03/1852 is 11/11/1852.
So the final answer is: 11/11/1852.
"""

# Self-Ask prompt template built around the one-shot example
prompt_template = ChatPromptTemplate.from_template(
    """
You are a step-by-step reasoning assistant.

Here is an example problem solved using self-ask prompting:
{few_shot_example}

Now solve the following question using a similar self-ask prompting approach:

Question: {question}

Provide your solution in the following JSON format:
{format_instructions}
"""
)

# Inject reference example + parser formatting into the prompt
prompt = prompt_template.partial(
    few_shot_example=few_shot_example,
    format_instructions=parser.get_format_instructions()
)

# Build the LCEL chain
chain = prompt | model | parser

# Target question
question = (
    "A construction project started on 09/15/2024. The first phase took 12 days. "
    "The second phase was originally scheduled for 20 days, but was shortened by 3 days. "
    "What is the completion date of the second phase in MM/DD/YYYY?"
)

# Run the chain
result = chain.invoke({"question": question})

# Display result
print("\n--- Reasoning Chain (self-ask transcript) ---\n", result.reasoning_chain)
print("\n--- Final Answer ---\n", result.answer)

SEO Example: Use Self-Ask to create a comprehensive guide to a complex topic like “Technical SEO.” The LLM would ask itself sub-questions like: “What is crawling?”, “What is indexing?”, “What are the most common technical SEO errors?”.

10. Self-Consistency Prompting

Concept: Generate multiple reasoning chains and pick the most frequent answer.

AI SEO Angle: Improves the reliability of content. Reduces the likelihood of errors, especially in tasks involving calculations or factual information. Can be applied to various content types, but is particularly useful for generating lists or comparisons.

from collections import Counter

# Define structured output model
class SCResponse(BaseModel):
    reasoning_chain: str = Field(..., description="Full reasoning steps")
    answer: str = Field(..., description="Final numeric answer only")

# Create parser
parser = PydanticOutputParser(pydantic_object=SCResponse)

# Initialize Gemini model with sampling enabled
model = init_chat_model(
    "gemini-2.5-flash",
    model_provider="google_genai",
    temperature=0.8,
    top_k=40,
)

# Zero-shot Self-Consistency Prompt
prompt_template = ChatPromptTemplate.from_template(
    """
You are a step-by-step reasoning assistant.

Use deliberate, step-by-step reasoning.

Question: {question}

Instruction:
- Think through the problem step by step.
- Produce a full chain of thought.
- Then give ONLY the final numeric answer.

Return your output in this JSON format:
{format_instructions}

Important:
- reasoning_chain must contain multiple reasoning steps.
- answer must contain ONLY the final numeric answer.
"""
)

prompt = prompt_template.partial(format_instructions=parser.get_format_instructions())

# Build LCEL chain
chain = prompt | model | parser

# Self-Consistency Sampling (with sleep + 5 samples)
def self_consistency(question: str, samples: int = 5):
    answers = []
    all_outputs = []

    for i in range(samples):
        result = chain.invoke({"question": question})
        answers.append(result.answer)
        all_outputs.append(result)

        time.sleep(1)   # <-- prevents rate-limits

    final_answer = Counter(answers).most_common(1)[0][0]
    return final_answer, all_outputs

# Run on an example question
question = (
    "When I was 6 years old, my sister was half my age. Now I am 70 years old. How old is my sister?"
)

final_answer, outputs = self_consistency(question, samples=5)

# Display results
print("\n===== SELF CONSISTENCY OUTPUT =====")
print("Final Aggregated Answer:", final_answer)

print("\n===== ALL SAMPLED REASONING PATHS =====")
for i, out in enumerate(outputs, 1):
    print(f"\n--- Sample {i} ---")
    print(out.reasoning_chain)
    print("Answer:", out.answer)

SEO Example: Generate multiple product descriptions for an e-commerce site and select the one that best highlights the key selling points based on a consistent theme.

11. Step-Back Prompting

Concept: Guide the LLM to identify the high-level concept or first principle before solving the task.

AI SEO Angle: Helps create content that demonstrates a deep understanding of the topic. Use Step-Back to generate thought leadership pieces or content that explains complex concepts in a simplified way.

# Define Structured Output Models
class Abstraction(BaseModel):
    stepback_question: str = Field(..., description="The abstract step-back question")
    stepback_answer: str = Field(..., description="The high-level principle that answers the step-back question")

class FinalAnswer(BaseModel):
    final_answer: str = Field(..., description="The final solution using the abstract principle")

abstraction_parser = PydanticOutputParser(pydantic_object=Abstraction)
final_answer_parser = PydanticOutputParser(pydantic_object=FinalAnswer)

# Prompt Templates (ONLY TWO CALLS)

# --- Call 1: Step-Back Abstraction ---
abstraction_prompt_template = ChatPromptTemplate.from_template(
    """
You are an expert in abstraction.

Given the original question below:

Original Question:
{question}

Perform TWO tasks:
1. Generate a high-level **step-back question** that captures the general principle needed.
2. Answer that step-back question by giving the **underlying principle or formula**.

Return BOTH in this JSON format:
{format_instructions}
"""
)

abstraction_prompt = abstraction_prompt_template.partial(
    format_instructions=abstraction_parser.get_format_instructions()
)

# --- Call 2: Final Reasoning ---
final_reasoning_prompt_template = ChatPromptTemplate.from_template(
    """
You are an expert problem solver.

Use the abstract principle retrieved earlier to answer the original question.

Original Question:
{question}

Step-Back Principle:
{abstraction}

Now solve the original question step by step.

Return the final answer in this JSON format:
{format_instructions}
"""
)

final_reasoning_prompt = final_reasoning_prompt_template.partial(
    format_instructions=final_answer_parser.get_format_instructions()
)

# Build LCEL Chains (Only Two Calls)
abstraction_chain = abstraction_prompt | model | abstraction_parser
final_answer_chain = final_reasoning_prompt | model | final_answer_parser

# Run Step-Back Prompting on Your Example
question = "A train travels at 60 miles per hour. How far will it travel in 3 hours?"

# Call 1 — Abstraction
abs_result = abstraction_chain.invoke({"question": question})
print("\n--- STEP-BACK ABSTRACTION ---\n")
print("Step-Back Question:", abs_result.stepback_question)
print("Step-Back Answer:", abs_result.stepback_answer)

# Call 2 — Reasoning
final_result = final_answer_chain.invoke({
    "question": question,
    "abstraction": abs_result.stepback_answer
})
print("\n--- FINAL ANSWER ---\n")
print(final_result.final_answer)
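Once the principle (distance = speed × time) is surfaced, applying it to the example is one line — which is exactly why the abstraction call pays off on harder questions:

```python
speed_mph = 60
hours = 3
distance = speed_mph * hours  # distance = speed × time
print(distance)  # → 180 miles
```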

SEO Example: Explain a specific SEO tactic (e.g., “optimizing for featured snippets”). The LLM would first identify the underlying principle (“understanding user intent”) before explaining the tactic itself.

12. Thread-of-Thoughts (ThoT) Prompting

Concept: Breaks long/chaotic contexts into manageable parts, summarizes each part, identifies the relevant pieces, and then synthesizes the final answer.

AI SEO Angle: Particularly useful in retrieval-augmented generation (RAG) where a lot of potentially irrelevant text is mixed with relevant information. Helps the LLM filter through large amounts of data to extract the most relevant information for your content.

# Define a Pydantic schema for structured ThoT output
class ThoTResponse(BaseModel):
    thread_of_thought: str = Field(..., description="Segment-by-segment analysis with summaries")
    answer: str = Field(..., description="Final answer extracted after analysis")

# Create parser for the structured output
parser = PydanticOutputParser(pydantic_object=ThoTResponse)

# Thread-of-Thoughts prompt template (using your example)
prompt_template = ChatPromptTemplate.from_template(
    """
You are an assistant that performs Thread-of-Thoughts reasoning:

Context:
{retrieved_passages}

Question: {question}

Trigger for Thread-of-Thoughts:
Walk me through this context in manageable parts step by step, summarizing and analyzing as we go.

Provide the output using this JSON format:
{format_instructions}
"""
)

# Inject format instructions into prompt
prompt = prompt_template.partial(format_instructions=parser.get_format_instructions())

# LCEL chain: prompt → model → parser
chain = prompt | model | parser

# Example retrieval data
retrieved_passages = """
Passage 1: Talks about book vending machines.
Passage 2: Reclam's founder created the publishing house in Leipzig.
Passage 3: Mentions a random street address.
Passage 4: Reclam's publishing house was located in Leipzig.
Passage 5: Talks about another unrelated company.
"""

question = "Where was Reclam founded?"

# Invoke the chain
result = chain.invoke({
    "retrieved_passages": retrieved_passages,
    "question": question
})

# Display the result
print("\n--- Thread of Thoughts ---\n", result.thread_of_thought)
print("\n--- Final Answer ---\n", result.answer)

SEO Example: Use ThoT in a RAG system to answer a complex question about a product by retrieving information from multiple customer reviews, articles, and product specifications. ThoT helps the LLM filter out irrelevant information and synthesize the key details.

13. Tabular Chain of Thought Prompting (Tab-CoT)

Concept: Guides the LLM to show its reasoning in the form of a table.

AI SEO Angle: Forces the LLM to reason in a highly organized and structured way, leading to more accurate results. Best for generating comparison tables, data-driven content, or content that requires clear organization.

# Define the Pydantic schema for structured output
class TabCoTResponse(BaseModel):
    reasoning_table: str = Field(..., description="Generated Tabular Chain-of-Thought reasoning table")
    answer: str = Field(..., description="Final numeric answer only")

# Create the parser
parser = PydanticOutputParser(pydantic_object=TabCoTResponse)

# Prompt Template for Zero-Shot Tabular CoT
prompt_template = ChatPromptTemplate.from_template(
    """
You are a reasoning assistant that uses **Tabular Chain-of-Thought (Tab-CoT)**.

You must generate your reasoning in a table format using the header:

|step|subquestion|process|result|

For every step:
- Fill each column
- Show clean calculations in the "process" column
- Show only the intermediate numeric answer in "result"

After generating the full reasoning table, provide the final answer.

Question: {question}

Provide the output in the following JSON format:
{format_instructions}
"""
)

# Insert parser format instructions
prompt = prompt_template.partial(format_instructions=parser.get_format_instructions())

# Build the LCEL chain
chain = prompt | model | parser

# Example problem
question = (
    "A librarian is shelving books. A shelf for fiction novels can hold 15 books, "
    "and a shelf for non-fiction can hold 12 books. If the library needs to shelve "
    "90 fiction novels and 72 non-fiction books, how many total shelves will the librarian need?"
)

# Invoke the chain
result = chain.invoke({"question": question})

# Display results
print("\n--- Tabular Reasoning Table ---\n")
print(result.reasoning_table)

print("\n--- Final Answer ---\n")
print(result.answer)
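The table the model produces should reduce to this arithmetic (math.ceil guards the case where the books don't divide evenly across shelves):

```python
import math

fiction_shelves = math.ceil(90 / 15)     # step 1: 6 shelves
nonfiction_shelves = math.ceil(72 / 12)  # step 2: 6 shelves
total_shelves = fiction_shelves + nonfiction_shelves
print(total_shelves)  # → 12
```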

SEO Example: Generate a table comparing the features and pricing of different SEO software.

14. Meta Prompting

Concept: Provides the model with a structured, example-free template that tells it how to solve the given problem.

AI SEO Angle: Ensures consistency and adherence to specific formatting guidelines. Useful for creating content that follows a specific brand style guide or a pre-defined SEO template.

# Define the Pydantic schema for Meta Prompting structured output
class MetaPromptResponse(BaseModel):
    reasoning_chain: str = Field(..., description="Structured reasoning following the meta-prompt steps")
    answer: str = Field(..., description="Final numeric answer only")

# Create the parser
parser = PydanticOutputParser(pydantic_object=MetaPromptResponse)

# Zero-Shot Meta Prompt Template (structure-focused)
prompt_template = ChatPromptTemplate.from_template(
    """
You are a structured reasoning assistant that solves the given problem following the given solution structure.

Problem: {question}

Solution Structure:
  Step 1: Begin the response with: "Let's think step by step."
  Step 2: Identify the important components of the problem.
  Step 3: Break the solution process into clear, logical steps.
  Step 4: Present the final result in a LaTeX formatted box, like: \\boxed{{value}}

Final Answer: Provide only the final numeric answer.

Return your response using this JSON format:
{format_instructions}
"""
)

# Inject parser instructions
prompt = prompt_template.partial(format_instructions=parser.get_format_instructions())

# Build the LCEL chain
chain = prompt | model | parser

# Example mathematical question
question = "Solve for x: 3x + 12 = 39."

# Run the chain
result = chain.invoke({"question": question})

# Display the structured reasoning and final answer
print("\n--- Reasoning Chain (Structured Meta Prompt) ---\n", result.reasoning_chain)
print("\n--- Final Answer ---\n", result.answer)

SEO Example: Use meta-prompting to enforce a consistent keyword density, heading structure, and tone of voice across all content generated for a specific client.

15. Universal Self Consistency Prompting (USC)

Concept: Generates multiple outputs and then prompts the LLM to select the most consistent, reasonable, and logically sound response.

AI SEO Angle: Improves the quality and coherence of content, especially in free-form generation tasks (summarization, open-ended Q&A, code generation). Avoids the limitations of exact-match voting used in standard Self-Consistency.

# Define structured output model for candidate responses
class USCResponse(BaseModel):
    reasoning_chain: str = Field(..., description="Full reasoning steps")
    answer: str = Field(..., description="Final answer only")

parser = PydanticOutputParser(pydantic_object=USCResponse)

# Zero-shot generation prompt (same as SC sampling stage)
generation_prompt_template = ChatPromptTemplate.from_template(
    """
You are a detailed step-by-step reasoning assistant.

Question: {question}

Instruction:
- Think step by step.
- Produce a clear chain of thought.
- Then produce ONLY the final answer.

Return output in this JSON format:
{format_instructions}
"""
)

generation_prompt = generation_prompt_template.partial(
    format_instructions=parser.get_format_instructions()
)

gen_chain = generation_prompt | model | parser

# Universal Self-Consistency Selection Prompt
selection_prompt = ChatPromptTemplate.from_template(
    """
You are an evaluator assistant.

You are given multiple candidate answers to the same question.
Your job is to read ALL responses and select the one that is
the most consistent, reasonable, and logically sound.

Question:
{question}

Candidate Responses:
{all_responses}

Instruction:
- Carefully compare the reasoning steps.
- Select the single best response.
- Provide ONLY the index number of the best response.
- DO NOT explain your choice.

Return output in plain text containing ONLY the index number (1, 2, or 3).
"""
)

selection_chain = selection_prompt | model

# Universal Self-Consistency function
def universal_self_consistency(question: str, n_samples: int = 3):
    candidates = []

    # --- Stage 1: Generate candidate responses ---
    for i in range(n_samples):
        result = gen_chain.invoke({"question": question})
        candidates.append(result)
        time.sleep(1)

    # Prepare text block for evaluation prompt
    formatted_candidates = ""
    for idx, c in enumerate(candidates, 1):
        formatted_candidates += (
            f"\n[{idx}] Reasoning:\n{c.reasoning_chain}\nAnswer: {c.answer}\n"
        )

    # --- Stage 2: Ask LLM to select best candidate ---
    chosen_idx = selection_chain.invoke(
        {
            "question": question,
            "all_responses": formatted_candidates,
        }
    )

    chosen_idx = int(chosen_idx.content.strip())

    return candidates[chosen_idx - 1], candidates

# Run Universal Self-Consistency on the example
question = (
    "What are three advantages of electric vehicles over gasoline vehicles?"
)

best_output, all_candidates = universal_self_consistency(question, n_samples=3)

# Display results
print("\n===== UNIVERSAL SELF CONSISTENCY OUTPUT =====")
print("Chosen Final Answer:", best_output.answer)

print("\n===== ALL GENERATED CANDIDATES =====")
for i, out in enumerate(all_candidates, 1):
    print(f"\n--- Candidate {i} ---")
    print(out.reasoning_chain)
    print("Answer:", out.answer)

SEO Example: Generate multiple blog post introductions and use USC to select the one that best aligns with the overall tone and messaging of your brand.

16. Self Refine Prompting

Concept: An iterative technique where a model improves its own output through a repeated cycle of generation → feedback → refinement.

AI SEO Angle: Creates progressively higher quality content with each iteration. Especially useful for optimizing existing content or generating complex content types like code or technical documentation.

# Define Structured Output Models
class InitialDraft(BaseModel):
    draft: str = Field(..., description="The model's initial attempt at the solution")

class Feedback(BaseModel):
    feedback: str = Field(..., description="Actionable and specific feedback describing issues and improvements. If no issues, must include the phrase 'no issues'.")

class RefinedOutput(BaseModel):
    refined_answer: str = Field(..., description="Improved answer incorporating the feedback")

initial_parser = PydanticOutputParser(pydantic_object=InitialDraft)
feedback_parser = PydanticOutputParser(pydantic_object=Feedback)
refine_parser = PydanticOutputParser(pydantic_object=RefinedOutput)

# Prompt Templates
# 4.1 Initial Draft Prompt
initial_prompt_template = ChatPromptTemplate.from_template(
    """
You are an expert Python developer.

Write the FIRST DRAFT solution to the task below.
Do NOT critique or refine it yet.

Task:
{task}

Output format:
{format_instructions}
"""
)

initial_prompt = initial_prompt_template.partial(
    format_instructions=initial_parser.get_format_instructions()
)

# 4.2 Feedback Prompt
feedback_prompt_template = ChatPromptTemplate.from_template(
    """
You are an expert code reviewer.

Carefully analyze the initial or refined draft.
Provide feedback that is:

- Specific
- Actionable
- Mentioning what to fix and why

If the answer is already correct, complete, and high-quality,
write feedback that **explicitly contains the phrase "no issues"**.

Task:
{task}

Draft Under Review:
{draft}

Output format:
{format_instructions}
"""
)

feedback_prompt = feedback_prompt_template.partial(
    format_instructions=feedback_parser.get_format_instructions()
)

# 4.3 Refinement Prompt
refine_prompt_template = ChatPromptTemplate.from_template(
    """
You are an expert Python developer.

Refine the draft by applying the feedback.
Improve correctness, clarity, robustness, and Python best practices.

Task:
{task}

Draft:
{draft}

Feedback:
{feedback}

Output format:
{format_instructions}
"""
)

refine_prompt = refine_prompt_template.partial(
    format_instructions=refine_parser.get_format_instructions()
)

# Build LCEL Chains
initial_chain = initial_prompt | model | initial_parser
feedback_chain = feedback_prompt | model | feedback_parser
refine_chain = refine_prompt | model | refine_parser

# Multi-Iteration Self-Refine Loop (Stop When “no issues”)
task = "Write a Python function calculate_average that takes a list of numbers and returns the average."

MAX_ITER = 3    # upper limit for safety

# Phase 1 — Generate initial draft
draft_result = initial_chain.invoke({"task": task})
current_draft = draft_result.draft

print("\n=== INITIAL DRAFT ===\n")
print(current_draft)

# Phase 2 — Iterative refine loop
for iteration in range(MAX_ITER):
    print(f"\n=== FEEDBACK ROUND {iteration} ===\n")

    # Generate feedback
    fb_result = feedback_chain.invoke({
        "task": task,
        "draft": current_draft
    })

    feedback = fb_result.feedback
    print(feedback)

    # Stop condition: feedback contains "no issues"
    if "no issues" in feedback.lower():
        print("\nStopping refinement: feedback reports 'no issues'.")
        break

    time.sleep(1)
    # Apply refinement
    refine_result = refine_chain.invoke({
        "task": task,
        "draft": current_draft,
        "feedback": feedback
    })

    current_draft = refine_result.refined_answer

    print(f"\n=== REFINED DRAFT {iteration} ===\n")
    print(current_draft)

print("\n\n=== FINAL OUTPUT AFTER SELF-REFINE ===\n")
print(current_draft)

SEO Example: Optimize an existing blog post for readability and keyword density. The LLM would provide feedback on areas that need improvement, and then refine the content accordingly.
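The same loop can be pointed at content optimization with only prompt changes. A minimal sketch of the control flow, with a stubbed `call_llm` standing in for the LCEL chains above — the target keyword and the stub's canned responses are illustrative, not real model output:

```python
MAX_ITER = 3
TARGET_KEYWORD = "reduce cart abandonment"  # hypothetical target keyword

def call_llm(prompt: str) -> str:
    """Stub standing in for a real model call; replace with your LLM client."""
    if prompt.startswith("REVIEW:"):
        draft = prompt.removeprefix("REVIEW:")
        if TARGET_KEYWORD in draft:
            return "no issues"
        return f"Work the keyword '{TARGET_KEYWORD}' into the opening paragraph."
    # REFINE branch: a real model would rewrite; the stub just appends.
    return prompt.removeprefix("REFINE:") + f" Learn how to {TARGET_KEYWORD} today."

def self_refine_post(draft: str) -> str:
    """Iteratively review and refine a draft until feedback says 'no issues'."""
    for _ in range(MAX_ITER):
        feedback = call_llm("REVIEW:" + draft)
        if "no issues" in feedback.lower():
            break
        draft = call_llm("REFINE:" + draft)
    return draft

final = self_refine_post("Shoppers often leave items in their carts.")
print(final)
```

The structure mirrors the LangChain loop: generate, review, stop on "no issues", otherwise refine and repeat, with `MAX_ITER` as a safety cap.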

17. Analogical Prompting

Concept: Instructing the LLM to recall similar problems (analogies) before solving the main problem.

AI SEO Angle: By recalling similar problems first, the model creates context, activates the right concepts, and then solves the actual problem more accurately.

# Define the structured output schema
class AnalogicalResponse(BaseModel):
    relevant_problems: str = Field(..., description="Self-generated relevant example problems with solutions")
    reasoning_chain: str = Field(..., description="Step-by-step reasoning for the original problem")
    answer: str = Field(..., description="Final numeric answer only")

# Create the parser
parser = PydanticOutputParser(pydantic_object=AnalogicalResponse)

# Analogical prompting template (recall relevant problems, then solve)
prompt_template = ChatPromptTemplate.from_template(
    """
Your task is to tackle mathematical problems. When presented with a math problem, recall relevant problems as examples. Afterward, proceed to solve the initial problem.

# Problem:
{question}

# Instructions:
## Relevant Problems:
Recall three examples of math problems that are relevant to the initial problem. Your problems should be distinct from each other and from the initial problem (e.g., involving different numbers and names). For each problem:
- After "Q: ", describe the problem
- After "A: ", explain the solution and enclose the ultimate answer in \\boxed{{}}.

## Solve the Initial Problem:
Q: Copy and paste the initial problem here.
A: Explain the solution step by step and enclose the final answer in \\boxed{{}}.

Provide the final output in the following JSON format:
{format_instructions}
"""
)

# Inject format instructions
prompt = prompt_template.partial(format_instructions=parser.get_format_instructions())

# Build LCEL chain (prompt → model → parser)
chain = prompt | model | parser

# Example problem (your chosen analogical example)
question = "What is the area of the rectangle with the four vertices at (1, 3), (7, 3), (7, 8), and (1, 8)?"

# Invoke the chain
result = chain.invoke({"question": question})

# Display results
print("\n--- Relevant Problems (Self-Generated) ---\n", result.relevant_problems)
print("\n--- Reasoning Chain ---\n", result.reasoning_chain)
print("\n--- Final Answer ---\n", result.answer)
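The recall-then-solve pattern transfers directly to content planning. A hedged sketch, using a plain format string rather than the LangChain template above, that asks the model to recall comparable article outlines before producing a new one — the topic and keyword are placeholders:

```python
# Analogical prompt for SEO: recall similar outlines, then outline the topic.
ANALOGICAL_SEO_PROMPT = """\
Your task is to outline a blog post. Before outlining, recall relevant examples.

# Topic:
{topic} (target keyword: "{keyword}")

# Instructions:
## Relevant Articles:
Recall three outlines of successful articles on closely related topics.
They should be distinct from each other and from the topic above.

## Outline the Article:
Using patterns from those examples, produce a detailed outline for the topic,
with H2/H3 headings that naturally incorporate the target keyword.
"""

prompt = ANALOGICAL_SEO_PROMPT.format(
    topic="How to Reduce Cart Abandonment in E-commerce",
    keyword="reduce cart abandonment",
)
print(prompt)
```

As in the math example, the self-generated analogies prime the model with relevant structure before it tackles the real task.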

18. Meta-Cognitive Prompting

Concept: Guides a Large Language Model (LLM) through a structured self-reflection process, mirroring how humans think about their own thinking.

AI SEO Angle: Improves the quality of generated content by having the model examine and critique its own reasoning before committing to an answer.

# Define structured output for Meta-Cognitive Prompting (added final_answer field)
class MetaCognitiveResponse(BaseModel):
    understanding: str = Field(..., description="Clarify understanding of the question and the context sentence")
    preliminary_judgment: str = Field(..., description="Initial assessment of whether the statement contains the answer")
    critical_evaluation: str = Field(..., description="Reflection and reassessment of the initial judgment")
    final_answer: str = Field(..., description='Final response in the exact form: "The status is (entailment / not_entailment)"')
    confidence: str = Field(..., description="Confidence score (0-100%) with explanation")

# Create parser
parser = PydanticOutputParser(pydantic_object=MetaCognitiveResponse)

# Zero-shot meta-cognitive prompt template
prompt_template = ChatPromptTemplate.from_template(
    """
For the question: "{question}" and statement: "{sentence}", determine if the statement provides the answer
to the question. If the statement contains the answer to the question, the status is entailment.
If it does not, the status is not_entailment. As you perform this task, follow these steps:
1. Clarify your understanding of the question and the context sentence.
2. Make a preliminary identification of whether the context sentence contains the answer to the question.
3. Critically assess your preliminary analysis. If you feel unsure about your initial entailment classification, try to reassess it.
4. Confirm your final answer and explain the reasoning behind your choice.
5. Evaluate your confidence (0-100%) in your analysis and provide an explanation for this confidence level.
Provide the answer in your final response as "The status is (entailment / not_entailment)"

As you perform the above, produce the following structured output.

Provide your response in JSON format exactly matching the fields:
{format_instructions}
"""
)

# Insert parser instructions
prompt = prompt_template.partial(format_instructions=parser.get_format_instructions())

# Build LCEL chain
chain = prompt | model | parser

# Invoke the chain with your example question + statement
question = "What is the largest planet in our solar system?"
statement = (
    "Jupiter, the fifth planet from the Sun, is so massive that it accounts for more "
    "than twice the mass of all the other planets combined."
)

result = chain.invoke({
    "question": question,
    "sentence": statement
})

# Output (structured)
print("\n--- Understanding ---\n", result.understanding)
print("\n--- Preliminary Judgment ---\n", result.preliminary_judgment)
print("\n--- Critical Evaluation ---\n", result.critical_evaluation)
print("\n--- Final Answer ---\n", result.final_answer)
print("\n--- Confidence ---\n", result.confidence)

III. Real-World Applications and Examples for AI SEO

These advanced prompting techniques aren’t just theoretical; they have real-world applications in AI SEO.

  • Generating Long-Form Content: Combine LtM and CoT to create in-depth guides, tutorials, and comprehensive articles that cover a topic from basic to advanced levels.
  • Optimizing Existing Content: Use Self-Refine to iteratively improve existing blog posts for readability, keyword density, and accuracy.
  • Creating FAQs and Troubleshooting Guides: Employ Self-Ask to generate detailed answers to common user questions.
  • Building Product Descriptions: Use Self-Consistency and meta-prompting to generate multiple product descriptions that consistently highlight key features and benefits in a brand-consistent style.
  • Generating Compelling Meta Descriptions: Use step-back prompting to first identify the underlying goal (a higher click-through rate), then let that goal shape what the description includes.
  • Localized SEO: Use Chain-of-Translation to research and optimize content for different languages and regions.
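As an illustration of the step-back pattern mentioned above, here is a minimal two-pass sketch: the first call abstracts the goal, the second writes the description conditioned on it. `call_llm` is a stub to show the control flow; replace it with your model client:

```python
def call_llm(prompt: str) -> str:
    """Stub standing in for a real model call; replace with your LLM client."""
    return f"[model output for: {prompt[:40]}...]"

def step_back_meta_description(title: str, keyword: str) -> str:
    """Two-pass step-back prompting: abstract the goal first, then write."""
    # Pass 1: step back to the underlying goal before writing anything.
    principles = call_llm(
        f"Before writing a meta description for '{title}', step back: "
        "what makes a searcher click a result? List the key principles."
    )
    # Pass 2: write the description, conditioned on those principles.
    return call_llm(
        f"Using these principles:\n{principles}\n"
        f"Write a 150-155 character meta description for '{title}' "
        f"that includes the keyword '{keyword}'."
    )

desc = step_back_meta_description(
    "How to Reduce Cart Abandonment in E-commerce", "reduce cart abandonment"
)
print(desc)
```

The design choice is the extra round trip: spending one call on the abstract question keeps the second call focused on click-through rather than mere summarization.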

IV. Beyond Prompting: Combining Techniques with Other SEO Strategies

Prompting is a powerful tool, but it’s not a silver bullet. To achieve true AI SEO success, combine these techniques with other essential strategies:

  • Keyword Research: Use traditional keyword research tools to identify relevant keywords and incorporate them naturally into your prompts.
  • Competitive Analysis: Analyze your competitors’ content to identify gaps and opportunities for improvement.
  • Technical SEO: Ensure your website is crawlable, indexable, and mobile-friendly.
  • Link Building: Build high-quality backlinks to increase your website’s authority.
  • Schema Markup: Use schema markup to provide search engines with more information about your content.
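For the schema markup point, a common pattern is to prompt for JSON-LD and validate it before publishing. A hedged sketch with a stubbed `call_llm` — a real call would return model-generated JSON, which is exactly why the `json.loads` validation step matters:

```python
import json

def call_llm(prompt: str) -> str:
    """Stub standing in for a real model call; returns canned JSON-LD here."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [],
    })

def generate_faq_schema(qa_pairs: list) -> dict:
    """Prompt for FAQPage JSON-LD, then validate before publishing."""
    prompt = (
        "Generate valid FAQPage JSON-LD schema markup for these Q&A pairs. "
        "Return ONLY the JSON object:\n"
        + "\n".join(f"Q: {q}\nA: {a}" for q, a in qa_pairs)
    )
    raw = call_llm(prompt)
    schema = json.loads(raw)  # fail fast on malformed model output
    assert schema.get("@type") == "FAQPage", "unexpected schema type"
    return schema

schema = generate_faq_schema(
    [("What is cart abandonment?", "When shoppers leave without buying.")]
)
print(json.dumps(schema, indent=2))
```

Never publish model-generated structured data unvalidated; parsing and type-checking the output catches the most common failure mode (prose wrapped around the JSON).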

V. The Future of AI SEO

As LLMs continue to evolve, advanced prompting will become even more critical for AI SEO. Staying ahead of the curve requires:

  • Continuous Experimentation: Test different prompting techniques and strategies to see what works best for your specific needs.
  • Monitoring Algorithm Updates: Keep up-to-date with the latest search engine algorithm updates and adjust your prompting strategies accordingly.
  • Focus on Quality: Always prioritize creating high-quality, valuable content that meets the needs of your target audience. AI is a tool to scale high-quality content, not replace it.

Embrace the Power of Advanced Prompting

Advanced prompting techniques are essential for leveraging LLMs for AI SEO. By mastering these techniques and combining them with traditional SEO strategies, you can create high-quality, optimized content that ranks well in search engines and drives valuable traffic to your website. The code examples provided in this article offer a starting point for your journey into the world of AI-powered SEO. Remember to experiment, adapt, and always focus on delivering valuable content to your audience.

