AI-powered art filters have taken the internet by storm, allowing users to transform their images into stunning Ghibli-style artwork. While these tools showcase the magic of artificial intelligence, beneath their charm lie serious privacy risks. Cyble's Senior Director and Head of Solutions Engineering, Ankit Sharma, recently spoke with The Hans India about the growing privacy risks associated with AI-powered image generation.

He highlights that users often upload personal images without considering how these platforms handle, store, or share their data. Without clear policies, images could be retained, repurposed for AI training, or even exposed to security breaches. Beyond privacy, the rise of deepfakes and synthetic media raises concerns about identity theft and biometric fraud.

Cybercriminals could exploit stylized images to create fake profiles, manipulate authentication systems, or spread misinformation. As AI-generated content evolves, so do its risks. Users must stay cautious, and companies must enforce strict security measures, transparent data practices, and automated image deletion to prevent misuse.
1. What are the potential privacy risks associated with uploading personal images to AI tools like ChatGPT's Ghibli filter?

AI-powered image filters may seem harmless, but they come with inherent privacy risks. The biggest concern is data retention: if the platform stores images after processing, it creates an attractive target for cybercriminals.

Even if the company has no malicious intent, weak security controls could lead to leaks or unauthorized access. Another issue is unintended AI training. Some tools refine their models using user-generated images, potentially feeding biometric data into facial recognition systems without explicit consent.

This raises concerns about profiling, surveillance, and data misuse. Users should also be wary of third-party integrations that could expose images to less secure environments, increasing the risk of breaches.

2.
With rising concerns over deepfakes and AI-generated content, could the Ghibli trend contribute to identity theft or unauthorized image use?

Absolutely. While Ghibli-style images may seem innocent, they still contain enough facial data to be misused. Cybercriminals can build deepfake datasets using modified AI images, enabling impersonation scams, synthetic identity fraud, or even AI-generated avatars that mimic real people.

The risk extends beyond just fraud. Manipulated AI-generated images can fuel misinformation campaigns, damage reputations, or be used in extortion attempts. With the rise of AI-enhanced scams, a seemingly playful trend could become an entry point for more sophisticated cyber threats.
3. How can cybercriminals take advantage of AI-generated images for fraudulent activities or identity theft?

Bad actors are always looking for new ways to exploit emerging technology, and AI-generated images provide them with a versatile toolset. Here's how they can weaponize these images:

Social Engineering Attacks – Fraudsters can use AI-generated images to create fake profiles, impersonate executives, or deceive people into sharing sensitive information.

Bypassing Facial Recognition – AI tools can generate modified images that may trick certain facial recognition systems, making biometric authentication less reliable.

Manipulation & Blackmail – Attackers can tweak AI-generated images to fabricate compromising situations, leading to extortion or reputational damage.

Synthetic Identity Fraud – AI-generated images can be combined with fake identity data to create entirely new digital personas for financial fraud.
With AI-generated content becoming more convincing, organizations and individuals must remain vigilant about where their images are uploaded and how they might be repurposed.

4. As a cybersecurity expert, what measures would you suggest to ensure the secure handling of user data while generating Ghibli artistic images?

Security should be built into the AI image-generation process from the ground up.
Here are some critical safeguards:

Real-Time Processing, No Storage – Images should be processed instantly and never stored beyond the active session. This minimizes exposure to data leaks.

End-to-End Encryption – All uploads and downloads should be encrypted to prevent interception by attackers.

Strict Access Controls – Only authorized personnel should have access to backend AI processing, and even that should be heavily monitored.

Clear User Consent Policies – Platforms should provide transparency about data handling, allow users to opt out of AI training, and ensure compliance with privacy laws like GDPR and CCPA.

Routine Security Audits – Regular penetration testing and compliance reviews can ensure that security measures keep up with evolving threats.
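To illustrate the "real-time processing, no storage" safeguard above, here is a minimal sketch of how an image-filter backend might keep uploads entirely in memory. The function names and the stand-in filter are assumptions for illustration, not the API of any real platform:

```python
import io

def stylize_in_memory(upload_bytes: bytes, apply_filter) -> bytes:
    """Apply a style filter without ever writing the upload to disk.

    `apply_filter` is a placeholder for the actual model call; the
    image exists only inside this function's scope, so nothing is
    left behind on the server once it returns.
    """
    buffer = io.BytesIO(upload_bytes)     # in-memory buffer, no temp file
    styled = apply_filter(buffer.read())  # hypothetical model invocation
    buffer.close()                        # release the buffer immediately
    return styled

# Usage with a trivial stand-in "filter" (reverses the bytes):
result = stylize_in_memory(b"fake-image-bytes", lambda b: b[::-1])
```

The key design choice is that the upload is never handed to the filesystem or a database; once the function returns, the only remaining copy is the styled result the user receives.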
By prioritizing privacy-first AI design, companies can give users peace of mind while enjoying creative tools like the Ghibli filter.

5. What steps should organizations take to ensure users' images are deleted after processing?

A secure AI system should follow a zero-retention policy unless users explicitly request storage.
Organizations should:

Automate Image Deletion – The system should delete images immediately after processing, leaving no traces on the server.

Give Users Control – Users should be able to see and delete their images at any time, with transparency on data handling.

Enforce Third-Party Compliance – If an AI tool relies on external cloud services, those providers must also meet stringent deletion and privacy standards.

Conduct Regular Privacy Audits – Independent security assessments should verify that no user images remain stored beyond the intended use.

Ultimately, the goal is to provide a seamless creative experience without turning AI tools into privacy risks. The more proactive companies are about user data protection, the more trust they'll build with their audience.
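The "automate image deletion" and "give users control" steps above can be sketched as a session store that purges uploads after a short time-to-live and also honors user-initiated deletion. The class name and TTL value are illustrative assumptions, not a description of any particular platform:

```python
import time

class EphemeralImageStore:
    """Holds uploads in memory and purges anything older than ttl_seconds."""

    def __init__(self, ttl_seconds: float = 60.0):
        self.ttl = ttl_seconds
        self._items = {}  # session_id -> (upload_time, image_bytes)

    def put(self, session_id: str, image: bytes) -> None:
        self._items[session_id] = (time.monotonic(), image)

    def purge_expired(self) -> int:
        """Automated deletion: remove expired uploads, return count removed."""
        now = time.monotonic()
        expired = [sid for sid, (ts, _) in self._items.items()
                   if now - ts > self.ttl]
        for sid in expired:
            del self._items[sid]
        return len(expired)

    def delete(self, session_id: str) -> None:
        """User-initiated deletion, so users keep control of their data."""
        self._items.pop(session_id, None)

# Usage: an upload expires and is purged automatically.
store = EphemeralImageStore(ttl_seconds=0.01)
store.put("sess-1", b"image-bytes")
time.sleep(0.02)
store.purge_expired()  # the expired upload is now gone
```

In a production system the purge would run on a scheduler (or via storage-level lifecycle rules), but the principle is the same: no image outlives its intended use.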
About Ankit Sharma

Ankit Sharma - Senior Director and Head of Solutions Engineering, Cyble Inc.

Ankit currently heads solutions engineering for Cyble Inc., managing a global team of some of the most brilliant solutions engineers and architects in the cyber realm. He is responsible for driving business growth across the globe and supporting Cyble sales through his expertise in Program Delivery Management, Technical Sales, and Key Account Management. Ankit is also a highly skilled data security and privacy professional, specializing in data privacy (global privacy laws, regulations, standards, and privacy information management systems), data governance, compliance management, and cloud security.