Report: How People Use AI at Work

Executive Summary: The 30-Second Takeaway

  • The Workforce View: Professionals do not view AI as a master or an oracle. They treat it like an eager, junior intern. It is used for “grunt work” and first drafts, but never trusted without supervision.
  • The Creative View: Artists are not using AI to make art. They use it as an “Admin Shield” to handle invoices, emails, and code so they have more time for the actual creative act.
  • The Scientific View: Researchers face a “Verification Tax.” While AI speeds up coding and literature reviews, the time required to fact-check the output often negates the efficiency gains.
  • The Universal Truth: Across all sectors, the primary barrier to adoption is hallucination. The future of work is not AI replacing humans. It is humans shifting from generators to editors.

Full Dataset by Anthropic

Anthropic Interviewer

A tool for conducting AI-powered qualitative research interviews at scale. In this study, we used Anthropic Interviewer to explore how 1,250 professionals integrate AI into their work and how they feel about its role in their future.

Associated Research: Introducing Anthropic Interviewer: What 1,250 professionals told us about working with AI

Dataset

This repository contains interview transcripts from 1,250 professionals:

  • General Workforce (N=1,000)
  • Creatives (N=125)
  • Scientists (N=125)

All participants provided informed consent for public release.

License

Data is released under CC-BY; code is released under the MIT License.

Contact

For inquiries, contact kunal@anthropic.com.

@online{handa2025interviewer,
  author = {Kunal Handa and Michael Stern and Saffron Huang and Jerry Hong and Esin Durmus and Miles McCain and Grace Yun and AJ Alt and Thomas Millar and Alex Tamkin and Jane Leibrock and Stuart Ritchie and Deep Ganguli},
  title = {Introducing Anthropic Interviewer: What 1,250 professionals told us about working with AI},
  year = {2025},
  url = {https://anthropic.com/research/anthropic-interviewer},
}


In the tech world, we often talk about Artificial Intelligence in the future tense. We speculate on who it will replace and how it will reshape the economy. The reality is that the future has already arrived. It is quiet, uneven, and happening in offices, classrooms, workshops, and hospitals right now.

At Dejan AI, we wanted to move past the hype cycles. We analyzed a massive dataset of qualitative interviews with 1,250 professionals. This group spanned the entire workforce spectrum. We spoke to software engineers and legal assistants. We interviewed specialty candle makers, snow cone vendors, braille factory technicians, astrophysicists, and marine biologists.

We did not find a story of mass replacement. We found a story of adaptation, skepticism, and a fundamental shift in the definition of “work.”

Here is our analysis of how the modern world is actually collaborating with AI.


Part 1: The General Workforce

The “Overenthusiastic Intern”

The most consistent theme across the general workforce transcripts is how professionals conceptualize the AI. They treat it as an eager, highly capable, but occasionally unreliable junior intern.

The “Junior Intern” Mental Model

Users delegate the “grunt work” to the AI. This includes summarizing long email chains, formatting citations, writing first drafts of difficult emails, or generating boilerplate code. Just as a manager would not send an intern’s work to a client without review, these professionals never trust the output blindly.

A software developer described treating the AI like “an eager but very junior developer… me calling the shots, reviewing and approving each step.” A paralegal noted they delegate smaller tasks but “supervise the work to make sure it’s accurate.”

The immediate value of AI is not in high-level strategy. It acts as a force multiplier for mid-level execution, provided the human operator has the expertise to review the work.

The Death of the “Blank Page”

Writer’s block and analysis paralysis are fading away. Across almost every profession, the single most common use case for AI is not doing the final work. It is starting it.

We saw a recurring pattern we call the “0-to-60” workflow.

  • The Teacher: Asks AI to generate a list of 10 activity ideas for a specific age group, knowing they will likely use only one, heavily adapted.
  • The Small Business Owner: A snow cone vendor uses AI to brainstorm “fun” flavor names and descriptions to overcome creative fatigue.
  • The Marketer: Uses it to structure a pitch deck outline so they can focus on filling in the strategic details.

As one participant noted, they use it to “break through writer’s block… just using it to get fragments I can massage into something really good.”

Authenticity as a Premium Asset

There is a strong cultural resistance to sounding “like a bot.” Across the board, professionals are fiercely protective of their authentic voice. This is especially true in client-facing communications.

Users complained about over-enthusiastic tones and a lack of “grit” or distinct personality. A school secretary noted she can tell instantly when parents use AI to write emails because of the specific syntax. A physical therapist uses AI to draft professional letters but writes personal emails to patients manually to ensure they know “I care.”

As AI-generated text floods our inboxes, the ability to write with distinct human personality and empathy is becoming a differentiator.


Part 2: The Creative Class

Negotiating the “Soul” Boundary

There is a prevailing narrative in the media that AI is coming for creative jobs. The data shows a different reality. Creatives are not handing over the keys to the kingdom. They are building sophisticated boundaries.

The “Admin Shield”

The most consistent trend across the dataset was unexpected. When asked how they use AI, the vast majority of creatives did not talk about generating art. They talked about bureaucracy.

Wedding photographers, grant-writing musicians, and freelance illustrators are using LLMs to handle the “business of being creative.” They are generating invoices, writing difficult emails to clients, analyzing spreadsheet data, and optimizing SEO for Etsy listings.

As one wedding photographer noted, AI tools helped cut their gallery turnaround time from 12 weeks to 3 weeks. By offloading the technical culling and color-correction, they bought back time to focus on the artistic direction. For creatives, AI acts as a shield against the mundane. It protects the time needed for deep work.

The “Soul” Boundary

Roughly 90% of interviewees draw a hard line in the sand. They are happy to use AI for research, outlining, and brainstorming, but they refuse to let AI execute the core creative act.

  • A songwriter might use AI to find a rhyme for “orange” but refuses to let it write the verse.
  • A novelist uses AI to research 1940s Canadian history but writes every sentence of prose personally.
  • A knitting pattern creator uses AI to calculate yarn yardage but designs the sweater themselves.

We are seeing the emergence of “Human-in-the-Loop” as a premium value proposition. Creatives are positioning their personal touch and their unique voice as the luxury product.

The Fear of “Slop”

While the utility of AI is clear, the anxiety in the dataset is palpable. It is not just about job loss. It is about market pollution.

Multiple interviewees expressed deep concern about “slop”: the flood of low-effort, AI-generated books on Amazon, generic images on stock sites, and fake artists on Spotify. One game designer noted that internet searches are becoming useless because so many of the results are spam.

There is a genuine fear that high-quality, human-crafted work will be buried under an avalanche of mediocre, automated content.


Part 3: The Scientific Community

The Trust Gap and the Verification Tax

In the creative world, an AI hallucination is a “happy accident.” In the scientific world, it is a liability. The narrative for scientists is starkly different. They use AI to overcome “data paralysis,” but the scientific method relies on reproducibility and truth, two things generative AI struggles with.

The “Super-Librarian” with a Lying Problem

The most universal use case among scientists is literature review. Almost every interviewee described using LLMs to scan vast repositories of academic papers to find gaps in research or summarize complex topics.

However, this utility comes with a warning label. The “hallucination” of citations is the single biggest frustration reported. One researcher noted that AI is great for finding trends in 1940s data but fails when asked for specific page numbers.

Scientists treat AI like a brilliant but unreliable grad student. They use it to cast a wide net, but they never put the AI’s findings into a paper without finding the primary source manually.

The “Wet Lab” Firewall

If there is one place AI is strictly forbidden, it is the “Wet Lab.” This is the physical bench where experiments happen.

Whether it is culturing bacteria, soldering circuits, or monitoring chemical reactions, scientists overwhelmingly rejected the idea of AI interference in physical experimentation. One microbiologist stated they need to see the color change with their own eyes. A chemist noted the AI doesn’t know that the equipment is 20 years old and has a specific quirk.

In science, tacit knowledge is viewed as irreplaceable. AI is welcome in the digital realm of data analysis, but it is barred from the physical realm of data collection.

The “Verification Tax”

Creatives talked about AI saving time. Scientists were more conflicted. Many reported a phenomenon we call the “Verification Tax.”

AI can write a summary or a code snippet in seconds. However, the time required for a PhD-level expert to verify that output line-by-line often negates the efficiency gains. One researcher studying toxic compounds noted they have to verify every single line because a decimal point error could be dangerous.

For scientists, speed is secondary to accuracy. If an AI tool cannot prove its work, it becomes a burden rather than a boost.

The Desire for a “Scientific Adversary”

When we asked scientists what they wished AI could do, a surprising theme emerged. They did not want an AI that agrees with them. They want an AI that fights them.

Current LLMs are trained to be helpful and polite. Scientists found this annoying. They want an AI that acts as a peer reviewer: one that rips their ideas to shreds and tells them why their hypothesis is wrong. There is a massive market gap for “adversarial AI,” models tuned not for politeness but for rigorous, objective logic-checking.


The Era of Orchestration

Reading through these 1,250 transcripts, it becomes clear that AI is not devaluing expertise. It is reshaping it. The professionals getting the most out of AI are the ones who already know their jobs inside and out.

The future of work is not “AI vs. Human.” It is Human + AI. The human shifts from being the generator of work to the architect, editor, and quality controller of work.

