OpenAI plans to charge $20,000 (USD) a month for an AI agent that can do “PhD level research”.
Maybe all the PhDs and postdocs recently fired by DOGE should band together and sell their services as “AI agents” – apparently some people will pay more for robots than people. At least OpenAI thinks they will: TechCrunch writes that OpenAI plans to charge $20,000 a month for access to an AI agent that can do “PhD level research”.
I guess it makes business sense to subscribe to an AI service instead of hiring 2-3 postdocs – $20,000 a month is $240,000 a year, roughly the cost of two or three postdoc salaries plus overheads – if you prefer compliance to critical thinking.
AI is compliant, not critical. Unlike a human “PhD level researcher”, an AI agent won’t say “Hey, should we really be doing this research? I disagree with the goals of my employer/government. I think we should research this instead.”

Describing an AI agent as capable of “PhD level research” is rhetoric that aligns with the war on science and on (human) researchers that we are seeing from the Trump administration. Part of the problem is the way “research” is reduced to something that can be fully automated.
A human being with a PhD is not just performing tasks. They are also thinking about ethics, methods, their colleagues, emotions, and the possible harms that might arise from their research. Sure, they think about pragmatic/cynical things like “can I publish this?” and “will this help me keep my job?”, but also “would this research have helped my friend who died of cancer, or who was unjustly arrested due to biased facial recognition?” and “is the government/my employer doing the right thing here?”.
I’m not sure exactly what kind of “PhD level research” OpenAI thinks its AI agents can do, but I’m pretty sure it’s not the same research as most of us “PhD level researchers” are actually doing day to day.
A human PhD level researcher – or a human artist – might, as Heather Dewey-Hagborg did, look at forensic DNA analysis and think: hm, so these AI-generated assumptions are based on outdated notions of race and gender. Maybe I should learn more about that and show how problematic it is? The image above shows me, a human PhD level researcher (or should I be billing myself as a professor level researcher?), experiencing and analysing the artwork that resulted from Dewey-Hagborg’s research. Here’s my full blog post about that PhD level research. If a $20,000 AI agent can replicate Dewey-Hagborg’s work or my work, it’s because it was trained on things like, well, that blog post.
Yep, we know that AI-generated “original research” often just repeats existing research. Just a couple of weeks ago, Tarun Gupta and Danish Pruthi published a study, All That Glitters is Not Novel: Plagiarism in AI Generated Research, in which they evaluated research proposals produced by AI agents that claim to generate novel research ideas. They found that 24% of the generated documents were “either paraphrased (with one-to-one methodological mapping), or significantly borrowed from existing work”. Gupta and Pruthi then tracked down the authors of the identified source documents and asked them to look at the AI-generated copycats as well, and the original authors confirmed the overlap.
It’s actually even worse than that 24% figure suggests: 24% were clearly plagiarised, but only 32% were original or had only minor similarities to existing papers, so the rest fell somewhere in between. Here’s a thread explaining the study:
Remember this study about how LLM generated research ideas were rated to be more novel than expert-written ones? We find a large fraction of such LLM generated proposals (~24%) to be skillfully plagiarized, bypassing inbuilt plagiarism checks and unsuspecting experts. A thread: https://t.co/u1C9yN2KvD
— Danish Pruthi (@danish037) February 25, 2025
So is it really the AI agent doing the PhD level research?
We don’t know much about OpenAI’s AI agents that can do “PhD level research”, because at this point it is marketing, hype and rhetoric as much as it is a potential service. I am worried that it will devalue, and potentially exploit, the important research that we need humans to do.