Ghostwriters in the Machine: Grammarly’s "Expert Review" and the Future of AI Writing Assistants
- David Golub
- 5 days ago
Updated: 4 days ago

Casey Newton found out his AI doppelgänger existed the way many people discover bad news these days: on the internet, uninvited.
To Newton's surprise, the popular AI writing assistant Grammarly had deployed an "Expert Review" feature that let users summon a synthetic version of the Platformer journalist to review their writing and offer editorial suggestions, impersonating Newton without his knowledge or consent.
"I've long assumed that before too long, AI might take my job," Newton wrote about the feature, which Grammarly hastily pulled once the story broke. "I just assumed that someone would tell me when it happened."
The backlash proved the timeless adage about never picking fights with people who buy ink by the barrel.
Investigative journalist Julia Angwin filed a class-action lawsuit in the Southern District of New York, seeking damages in excess of $5 million. The roster of AI editorial voices in Expert Review included Stephen King, Carl Sagan, Kara Swisher and Neil deGrasse Tyson, none of whom had agreed to participate.
Grammarly's CEO quickly issued a public apology, saying "we hear the feedback and recognize we fell short on this," and promised to rethink the feature in a way that ensured that any expert voices would be used transparently and with permission.
So far so good. But focusing only on the legal sequel to this story misses a deeper issue with Expert Review and with the current state of the AI writing tool category in general.
Can We Really Separate Author and Voice?
To understand the challenges posed by Expert Review, it's useful to take a quick detour into philosophical debates about the nature of the human mind.
In 1949, the British philosopher Gilbert Ryle coined the phrase "ghost in the machine" to describe what he called a category mistake: the assumption that something inseparable from a system could be extracted and treated as a distinct, portable thing. His target was Cartesian mind-body dualism, which he accused of treating the mind like a ghost inhabiting the machinery of the body.
Grammarly's “ghostwriter in the machine” miscue represents a similar error applied to writing: the notion that an author's voice is severable from their output, as opposed to representing an ineffable residue of lived experience.
In other words, you can't simply extract Casey Newton's editorial instincts, bottle them and pour them into someone else's prose. What transfers risks sounding like Newton the way a mediocre cover band sounds like the original.
This doesn’t mean that expert advice can’t be codified. But it does suggest that generating authentic human content with AI will take more than cleverly branded voice personas that fledgling writers can apply to draft versions or rough outlines.
As we think about the future of AI writing assistants, the belief that writing style is entirely separable from the writer who produced it becomes a problem deeper than permissions and licenses.
Authenticity Isn’t an Instagram Filter
To be sure, AI writing assistants can offer tremendous value. They catch grammatical errors, accelerate first drafts and revisions, adjust reading level and red-pen passive voice and hedging language.
For professional content production, AI can be transformative, allowing companies to enforce brand voice, improve discoverability and mass-personalize engagement with customers. For teams scaling content output, they're a genuine force multiplier.
But the idea of applying the voice of recognized experts like an Instagram text filter underscores the questionable logic that drives most of the AI writing tool market.
The volume machines (Jasper, Writesonic and Copy.ai) are built for exactly that: throughput and SEO optimization at scale, and for that use case they deliver. Where they fall short is voice fidelity. They offer "brand voice" features that users report don't always work as expected.
One Writesonic reviewer noted that even after uploading sample content, "I didn't notice any significant difference when I used it." Jasper's Brand IQ is more sophisticated, but the underlying model is still pattern replication from past samples, training the tool to imitate what you've already written, not to help you write better.
QuillBot and Wordtune help writers rework source material, navigate register shifts or work in a second language. But their paraphrasing tools benchmark against external standards ("formal," "fluent" or "creative") defined abstractly, not relative to you. QuillBot offers seven such modes. None of them is "more like you at your sharpest."
Before Expert Review, Grammarly's style suggestions offered corrections against generalized benchmarks: clarity, conciseness, engagement. Useful guardrails that push writing toward an intelligible median, not toward the individual writer's own best self.
The implicit question these tools answer is: how do I make this text sound the same but different? The question absent from the current product logic? How do I make this sound more like me?
When it comes to crafting authentic human content, the best these tools can do today is replicate your previous self. They have no mechanism for helping you exceed it, and that reads like a miss.
What Should AI Writing Tools Actually Do?
A great editor doesn't make you sound like someone else.
Rather, a skilled editor notices what you do well and what you should do more of: a particular observational pattern, a way of structuring an argument, a rhythm you fall into when you're expressing yourself clearly and with conviction.
Great editing creates conditions for more of the good stuff that makes you you, surfacing what's latent to make it more visible and lucid.
That's a different approach than anything the AI writing tool category seems to be building toward today. The closest approximations aren't specialized writing tools at all. They're general-purpose LLMs (Claude, ChatGPT).
The methodology isn't complicated. A content director trying to sharpen a senior writer's byline work can share five or six pieces that are landing well, then:
- Ask what patterns appear: in structure, in rhythm, in the kinds of observations that tend to land, in where the arguments tend to close.
- Ask where the current draft departs from those patterns.
- Ask what's missing and what to add.
Used this way, a general-purpose LLM isn't generating content or smoothing prose against an external benchmark. It's holding a mirror of the writer's own best work up against whatever they're producing now.
The method works, but it requires the user to construct complex prompts that tend to degrade over long chats. No specialized tool has made it the product, in part because voice development is slower to demonstrate value than SEO optimization.
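The methodology above can be sketched as a small prompt-builder. This is a minimal illustration, not a real product: the sample texts, the draft, and the function name are all hypothetical, and a working tool would pass the resulting string to an actual LLM API of your choice.

```python
# Sketch of the "mirror" prompt described above: compare a current draft
# against samples of the writer's own strongest work, rather than against
# an external benchmark. All names and inputs here are illustrative.

def build_voice_mirror_prompt(samples: list[str], draft: str) -> str:
    """Assemble one prompt asking a general-purpose LLM to act as a
    developmental editor grounded in the writer's own best pieces."""
    numbered = "\n\n".join(
        f"--- Sample {i} ---\n{text}"
        for i, text in enumerate(samples, start=1)
    )
    return (
        "You are acting as a developmental editor.\n\n"
        "Here are pieces that represent this writer at their best:\n\n"
        f"{numbered}\n\n"
        "1. What patterns appear across the samples: structure, rhythm, "
        "the kinds of observations that land, how the arguments close?\n"
        "2. Where does the draft below depart from those patterns?\n"
        "3. What is missing from the draft, and what should be added?\n\n"
        f"--- Current draft ---\n{draft}"
    )

# Usage: feed in a handful of strong published pieces and the work in progress.
prompt = build_voice_mirror_prompt(
    ["First strong piece...", "Second strong piece..."],
    "Rough draft in progress...",
)
```

The point of the structure is that the benchmark lives inside the prompt: the model is asked to measure the draft against the writer's own samples, which is exactly the mirror these tools don't yet productize.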
When it comes to making human writing better, the market has optimized for what's easy to sell, not for what creates the most expressive writing.
Sure, once the licensing question is solved, let's have AI tools that allow us to channel famous writers, or at least to benefit from well-designed, permissioned approximations of their style patterns and putative feedback.
This way, we can ask Neil deGrasse Tyson to review our science writing so audiences grasp complex ideas in a flash, or ask Stephen King to help us make our short stories truly terrifying, assuming they both agree to the deal.
But why not also have AI tools that allow us all to become better, more fully realized versions of the writers we are all striving to become?
A Different Direction for AI-Enabled Content
In the end, Grammarly’s Expert Review story will fade into a footnote on the path to a more interesting future for AI writing tools.
The question for marketing leaders is this: Are your AI writing tools helping your writers become ever more distinctly themselves? Are they helping to distill your brand into something distinct and unique? Or are they pushing everything toward a bland median?
Look at what your AI writing tools do with your existing content.
Do they analyze samples of your writers' best output and reflect those patterns back to them?
Or do they benchmark the writing against external standards of the tools' own devising (clarity scores, tone targets, SEO grades), with no reference to who your writers are and what they are trying to accomplish?
The former is a powerful learning and development tool. The latter is a correction tool. Most of what's on the market falls into the latter category.
A recognizable voice is the heart of a trusted brand. Using AI to sharpen that voice, rather than flatten it, compounds an advantage that volume tools can't replicate. Homogenized output is a commodity, and commodities compete on price and volume, which becomes a race to the bottom.
In a market where every competitor has access to the same tools, optimizing for averaged-out content is a fast-track to not being seen at all.




