A viral trend transforming ordinary photos into charming Studio Ghibli-inspired artwork has taken social media by storm, but digital privacy experts warn users may be unwittingly giving away much more than their selfies.
The sudden popularity of these AI makeovers through tools like ChatGPT and Grok 3 has triggered growing concerns about data collection, consent issues, and potential misuse of personal images, raising questions about what happens after users click “upload.”
AI meets Miyazaki
Last week, OpenAI released a feature allowing users to convert their photos into artwork resembling the distinctive style of legendary Japanese animation studio Studio Ghibli. The aesthetic, reminiscent of beloved films like “Spirited Away” and “My Neighbor Totoro,” transforms modern portraits into nostalgic, hand-drawn characters.
The feature quickly went viral, with social media feeds filling with cartoon versions of users, celebrities, and even politicians.
Elon Musk’s company xAI promptly followed with a similar offering in its Grok 3 chatbot, available through the X platform, allowing users to generate Ghibli-style portraits, seemingly for free.
The transformations produce charming results, but privacy advocates say the real cost might not be immediately apparent.
The privacy pitfall: Are you training AI for free?
Numerous digital rights activists have raised alarms that both OpenAI and xAI could be using these image uploads to train their artificial intelligence systems.
“What looks like a fun, free feature is potentially a massive data collection operation,” said cybersecurity analyst Thomas Reed. “These companies need enormous amounts of labeled image data to improve their AI models, and this trend provides exactly that.”
Luiza Jarovsky, co-founder of the AI, Tech & Privacy Academy, explained that voluntary uploads create a significant legal advantage for AI companies compared to their usual methods of collecting training data.
“When people voluntarily upload these images, they give consent under Article 6.1.a of the GDPR,” she wrote on X, referring to Europe’s data protection regulations. “That’s a very different legal ground than scraping the internet — and it provides OpenAI with a lot more freedom.”
This distinction matters because consent-based collection bypasses many legal hurdles that companies face when gathering data from public websites, essentially providing them with clean, well-labeled images that users have willingly shared.
Jarovsky highlighted that OpenAI’s privacy policy explicitly states that the company collects user-submitted content for training purposes unless users specifically opt out, a step many casual users may not realize they need to take.
ChatGPT’s official response: Proceed with caution
When asked directly about the safety of uploading personal photos, ChatGPT offered a cautionary response during tests conducted by the Hindustan Times.
“No, it’s not safe to upload personal photos to any AI tool unless you’re certain about its privacy policies,” ChatGPT responded. “OpenAI does not retain or use uploaded images beyond the immediate session, but it’s always best to avoid sharing sensitive or personal images.”
This built-in warning underscores the ambiguity users face. While the tool claims not to store images long-term by default, users must take active steps to prevent their data from being used for AI training purposes.
Many privacy advocates point out that this opt-out approach places the burden on users rather than defaulting to the more privacy-protective option of not using uploaded content unless explicitly authorized.
What about Grok 3? Vague policy, big questions
The situation appears even less transparent with Elon Musk’s Grok 3, which offers similar image transformation capabilities.
According to the chatbot’s own responses when queried, “xAI doesn’t explicitly detail how long it retains uploaded images or if they’re used to train future models… unless you opt out (check X settings), your images might feed into improving the AI.”
This lack of clarity raises significant questions about data retention practices. Unlike OpenAI, which has faced intense regulatory scrutiny and developed more detailed privacy documentation, xAI has not yet published comprehensive explanations about how it handles, stores or secures user-uploaded images.
“The privacy policies should be front and center when asking for photo uploads,” said digital rights attorney Maria Garcia. “Instead, they’re often buried in terms of service that almost nobody reads.”
The hidden risk: Deepfakes, metadata, and misuse
Beyond the immediate concerns about training data, security experts warn of additional risks associated with uploading personal photos to AI systems.
Privacy watchdog group Himachal Cyber Warriors cautioned users with a stark message: “Think before you #Ghibli. That cute Ghibli-style selfie? It might cost more than you think. Your photo could be misused or manipulated. AI may train on it without your consent. Data brokers might sell it for targeted ads. Stay cyber smart. Your privacy matters.”
Technical experts note that digital photos often contain embedded metadata — information about when and where the photo was taken, what device captured it, and sometimes even the identity of the photographer. This metadata could potentially be harvested if not properly removed during processing.
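As an illustration of the point above, the sketch below shows one common way to strip such metadata before uploading a photo. It is a minimal example, not a complete privacy solution, and it assumes the widely used Pillow imaging library is installed; the function name `strip_metadata` is our own.

```python
from io import BytesIO

from PIL import Image  # assumes the Pillow library is installed


def strip_metadata(src_bytes: bytes) -> bytes:
    """Return a JPEG copy of the image with embedded metadata
    (EXIF timestamps, GPS coordinates, device model) removed."""
    img = Image.open(BytesIO(src_bytes))
    # Rebuild the image from raw pixel data only; EXIF and other
    # metadata blocks are not carried over to the new object.
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))
    out = BytesIO()
    clean.save(out, format="JPEG")
    return out.getvalue()
```

Running an image through a step like this before uploading removes the location and device details described above, though it does nothing about the facial data in the pixels themselves.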
Even more concerning, high-quality facial images provide valuable material for creating deepfakes, highly realistic but fabricated videos or images. These can be used for disinformation, harassment, or various forms of identity fraud.
“Once your facial data is out there, it’s extremely difficult to get it back,” explained cybersecurity researcher James Wilson. “Unlike a password, you can’t change your face if that data is compromised.”
Ethical concerns: The artist’s dilemma
The trend also revives ongoing debates about the ethics of AI art generation, particularly when emulating distinctive creative styles.
Hayao Miyazaki, the renowned animator whose work inspired the Studio Ghibli aesthetic, has previously expressed strong opposition to AI-generated artwork, once describing it as “an insult to life itself” during a demonstration of AI animation technology.
Professional artists like Karla Ortiz have criticized AI systems that replicate recognizable artistic styles without compensation or acknowledgment. They argue that these generative models often learn by analyzing vast datasets that include copyrighted creative works without proper authorization.
“These AI systems don’t just appear out of nowhere with the ability to mimic Ghibli’s style,” said digital arts professor Elena Martinez. “They’ve been trained on thousands of images from Miyazaki’s films and related artwork, raising serious questions about creative attribution and compensation.”
The legal landscape: GDPR and consent
The regulatory framework surrounding these practices remains complex, particularly in regions with strong data protection laws like the European Union.
Under the General Data Protection Regulation (GDPR), companies must establish a legal basis for collecting and processing personal data. When scraping public websites for training data, AI companies typically rely on what’s called “legitimate interest” — a justification that requires balancing business needs against potential harm to individual rights.
However, as Jarovsky noted, when users voluntarily upload their images, the companies can instead rely on consent as their legal basis. This significantly reduces regulatory hurdles and potential liability.
“Consent is the strongest legal ground under GDPR,” explained data protection specialist Robert Brown. “When you freely give your data by uploading it, you’re essentially giving the company a green light to use it within the bounds of what you’ve agreed to — which many people don’t actually read.”
This distinction has allowed AI companies to build massive training datasets through user-supplied content while avoiding some of the legal challenges faced when gathering data through other means.
So, should you use it?
For users trying to decide whether to participate in the trend, experts suggest considering several factors:
First, check whether you’ve opted out of AI training in the platform’s settings. Both OpenAI and xAI offer options to prevent your data from being used to improve their models, but these settings are not enabled by default.
Second, be mindful of what images you upload. Privacy specialists recommend avoiding photos that clearly show your face, contain sensitive personal information, or include location data you wouldn’t want shared.
Third, understand that the policies around data retention remain unclear with many platforms. Even if a company claims not to store your images long-term, verification of these practices is difficult.
“If you wouldn’t be comfortable with that photo appearing somewhere unexpected in the future, don’t upload it,” advised digital privacy advocate Sarah Johnson. “The safest approach is to assume anything uploaded could potentially be stored, analyzed or even appear in training datasets.”
When art meets AI, be informed
The Ghibli-style AI filters represent the increasingly blurred boundary between entertainment and data collection in the digital age. While the transformed images offer momentary delight, they potentially create lasting digital footprints.
As AI companies race to build more sophisticated models, user-supplied content serves as valuable training material — making these seemingly innocent features potentially valuable data-gathering tools.
For users enchanted by the prospect of seeing themselves as Ghibli characters, the key recommendation from privacy experts is simple: understand what you’re sharing, check your privacy settings, and recognize that in the world of “free” AI services, personal data often serves as the unspoken currency.
“The magic of these transformations comes with fine print,” said technology ethicist David Chen. “Whether that trade-off is worth it depends entirely on how much you value both your privacy and that cute cartoon version of yourself.”
Have you tried using AI Ghibli filters on your photos? We want to hear about your experience and concerns. Did you check the privacy settings first? Would you reconsider using these tools after learning about the potential privacy implications?
Share your thoughts in the comments below.

