A recent incident involving Anysphere’s AI-powered code editor Cursor has ignited debate among technology professionals after its automated customer service agent invented a non-existent company policy. The case highlights growing concerns about the reliability of artificial intelligence in frontline customer support as more businesses adopt chatbots to handle user inquiries.
The support bot that cried wolf
Users of Cursor’s development platform began experiencing unexpected session terminations when switching between devices. When they contacted customer support for assistance, they encountered “Sam,” the company’s AI-powered support chatbot.
The virtual agent confidently informed multiple users that their sessions were ending due to a newly implemented security policy limiting account access to a single device at a time. There was just one problem—no such policy existed at the company.
The AI had completely fabricated the explanation, a phenomenon AI researchers call “hallucination,” where large language models generate plausible-sounding but entirely fictional information. Rather than admitting uncertainty or escalating to human support staff, the chatbot presented its invented explanation as official company policy.
User frustration mounted quickly as the misinformation spread. Several customers expressed intentions to cancel subscriptions, believing the company had implemented a restrictive new limitation without proper notification. Discussion threads appeared on Reddit and other developer forums, drawing additional attention to the issue.
Management intervention
The situation eventually reached Michael Truell, Anysphere’s co-founder, who publicly addressed the growing controversy. Truell confirmed that no single-device restriction policy existed and acknowledged that the AI support system had provided incorrect information to customers.
“Our AI support assistant was wrong,” Truell stated in an online response to affected users.
He explained that while the company had made recent changes to improve session security, these modifications were not designed to limit users to a single device. He also noted that the company was actively investigating the underlying cause of the unexpected session invalidations that prompted the initial support inquiries.
Anysphere has not disclosed whether it will modify its AI support system or implement additional human oversight following the incident.
The double-edged sword of automated support

This case exemplifies the challenges companies face when deploying artificial intelligence in customer-facing roles. While virtual agents can dramatically improve response times and handle high volumes of routine inquiries, they lack human qualities essential to customer service.
Unlike human representatives, AI systems cannot truly understand context, exercise judgment, or feel accountable for providing accurate information. The Cursor incident serves as a cautionary example for businesses considering reducing or eliminating human support teams in favor of automated alternatives.
“What we’re seeing is a fundamental limitation of current AI systems,” explained Dr. Elena Martinez, a digital ethics researcher at Northwestern University. “These models can sound extremely convincing while stating complete fabrications. Without proper guardrails and human oversight, they can seriously damage customer relationships and brand reputation.”
The incident raises particular concerns for technical products like Cursor, where users rely on accurate information about system functionality and company policies to make professional decisions. Incorrect support information can have cascading effects on workflow planning and project management for development teams.
Industry taking note
The technology sector has been closely monitoring AI implementation in customer service operations, with this incident drawing particular attention. Similar occurrences have been reported across various industries as businesses race to implement cost-saving automation measures without fully understanding the associated risks.
“Companies need to recognize that AI hallucination isn’t a rare edge case—it’s a fundamental limitation of how these systems work,” notes Samantha Chen, principal analyst at Forrester Research. “Customer-facing AI requires robust monitoring, clear escalation paths to human agents, and transparent communication about when customers are interacting with automated systems.”
Experts recommend that companies maintain human supervision of AI support interactions, implement systematic fact-checking of AI responses against verified company information, and establish clear protocols for identifying situations where human intervention is necessary.
Many advocate for greater transparency, suggesting companies should always disclose when customers are interacting with AI rather than human representatives. This approach sets appropriate expectations and provides context for potential limitations in the support experience.
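The guardrail pattern experts describe above can be illustrated with a minimal sketch. All names here (`VERIFIED_POLICIES`, `review_ai_reply`, the policy identifiers) are hypothetical, invented for illustration; the idea is simply that a draft AI reply citing a policy is checked against a verified policy store before it reaches the customer, and anything unverifiable is routed to a human agent instead of being sent as fact.

```python
# Hypothetical guardrail: hold back AI support replies that cite
# policies not found in a verified, company-maintained policy store.

# A stand-in for verified company policy data (illustrative entries only).
VERIFIED_POLICIES = {
    "refund-window": "Refunds are available within 30 days of purchase.",
    "session-security": "Sessions may be refreshed after security updates.",
}


def review_ai_reply(reply_text, claimed_policy_ids):
    """Return (approved, message).

    Approve the reply only if every policy it claims to rely on exists
    in the verified store; otherwise escalate to a human agent.
    """
    unknown = [p for p in claimed_policy_ids if p not in VERIFIED_POLICIES]
    if unknown:
        return False, (
            "Escalated to a human agent: draft reply cited "
            f"unverified policies {unknown}."
        )
    return True, reply_text


# A reply citing a non-existent policy (like a single-device limit)
# is held back rather than presented to the customer as official.
approved, message = review_ai_reply(
    "Your session ended due to our single-device policy.",
    ["single-device-limit"],
)
assert approved is False
```

In a real deployment the policy store would be an authoritative internal knowledge base and the policy-claim extraction step would itself need care, but the escalation path, returning control to a human rather than emitting an unverified claim, is the core of the recommendation.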
Finding the right balance

As artificial intelligence continues evolving and expanding into more aspects of business operations, organizations must carefully balance efficiency gains against potential pitfalls. The Cursor support incident demonstrates that while AI can serve as a powerful tool for scaling customer service operations, it requires thoughtful implementation, continuous monitoring, and appropriate human oversight.
“AI customer service isn’t inherently problematic, but pretending it can fully replace human judgment and accountability is,” says Marcus Jackson, customer experience consultant at Deloitte Digital. “The most successful implementations maintain a healthy human-AI collaboration model where each handles the tasks they’re best suited for.”
For users of AI-powered products like Cursor, the incident serves as a reminder to maintain healthy skepticism when receiving unusual or unexpected information from automated support systems and to seek human confirmation for critical policy changes that could impact their workflow.
Does this episode expose broader risks in handing customer-facing roles to AI? Share your views in the comments below, and tell us whether companies should be required to identify when you’re talking to an AI versus a human agent.

