The spread of AI-generated cartoon images, which reflect a colorful snapshot of everything a chatbot knows about a person, could pose serious security risks, cybersecurity experts warn.

Users upload personal photos of themselves, sometimes showing a company logo or details of their job, and ask OpenAI’s ChatGPT to create caricatures of them and their work based on the information it holds about them.

Cybersecurity experts told Euronews Next that social media challenges, such as AI cartoons, could provide scammers with a huge amount of valuable information. A single photo, combined with personal details, may reveal much more information than users expect.

Bob Long, vice president of age verification company Daon, said: “You are doing the scammers’ work for them, as you are providing them with a clear visual representation of your identity.”

He pointed out that the very way this trend is being promoted should raise alarm bells, because “it looks as if it was deliberately launched by a fraudster to facilitate his work.”

What happens to photos after they are uploaded?

Cybersecurity consultant Jake Moore explains that when a user uploads an image to an AI chatbot, the system processes it to extract data such as the person’s emotional state, their surroundings, and details that may reveal their geographic location. That data can then be stored indefinitely.
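To illustrate how much a single file can expose even before its pixels are analysed, the following minimal sketch (an illustration only, not drawn from the article, assuming the Pillow imaging library and a placeholder file name) reads the EXIF metadata that many phone cameras embed in photos, including GPS coordinates:

```python
# Minimal sketch: list the EXIF metadata a phone camera may embed in a photo.
# Assumes the Pillow library (pip install Pillow); "photo.jpg" is a placeholder.
from PIL import Image, ExifTags

img = Image.open("photo.jpg")
exif = img.getexif()

# Basic EXIF tags: camera model, software, timestamp, and so on.
for tag_id, value in exif.items():
    print(f"{ExifTags.TAGS.get(tag_id, tag_id)}: {value}")

# GPS data lives in its own sub-directory (IFD); if present, it can pin
# the photo to precise coordinates.
gps = exif.get_ifd(ExifTags.IFD.GPSInfo)
for tag_id, value in gps.items():
    print(f"GPS {ExifTags.GPSTAGS.get(tag_id, tag_id)}: {value}")
```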

Long believes the images collected from users could be retained and used as part of the datasets that train AI image generators.

A data breach at a company like OpenAI could put sensitive data, such as uploaded photos and the personal information a chatbot has collected, into the hands of malicious actors ready to exploit it.

According to Charlotte Wilson, head of the corporate sector at Israeli cybersecurity company Check Point, a single high-resolution photo in the wrong hands could enable the creation of fake social media accounts, or of hyper-realistic deepfake clips used to commit fraud.

“Selfies help criminals move from general scams to very convincing, personal impersonations,” she said.

OpenAI’s privacy settings indicate that uploaded images may be used to improve the model, including for training it. When ChatGPT was asked about the model’s privacy settings, it explained that this does not mean every image is saved in a publicly accessible database.

The chatbot added that it instead uses patterns in user-generated content to improve the way the system generates images.

How can you take part in AI trends safely?

Anyone who wants to keep up with these trends is advised to limit the data they share as much as possible.

Wilson says users should avoid uploading photos that reveal any identifying information about themselves. “Crop the image tightly, keep the background as detail-free as possible, and don’t show badges, uniforms, business cards, location tags, or anything that links you to your employer or your daily habits,” she says.
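Cropping addresses what is visible in the frame, but photo files can also carry invisible metadata. As a minimal, hedged sketch (again assuming Pillow and placeholder file names, none of which come from the article), re-saving an image from its raw pixels drops the EXIF block, including any embedded GPS coordinates, before the file is shared:

```python
# Minimal sketch: save a copy of a photo with its metadata removed before
# sharing. Assumes Pillow; the file names are placeholders.
from PIL import Image

img = Image.open("original.jpg")

# Rebuilding the image from raw pixel data leaves EXIF (camera details,
# GPS coordinates, timestamps) behind in the original file only.
clean = Image.new(img.mode, img.size)
clean.putdata(list(img.getdata()))
clean.save("clean.jpg")
```

Many platforms strip this metadata on upload, but removing it yourself means you are not relying on them to do so.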

Wilson also cautions against including too much personal information in prompts, such as a job title, city, or employer’s name.

Moore, meanwhile, advises reviewing privacy settings before taking part, including the option to exclude your data from AI model training.

OpenAI’s privacy portal allows users to stop the use of their data for training by choosing the “do not train on my content” option.

Users can also prevent their text conversations with ChatGPT from being used for training by turning off the “Improve the model for everyone” setting.

Under EU data protection law, users have the right to request the deletion of personal data the company has collected. However, OpenAI explains that it may retain some information even after deletion to deal with fraud, abuse, and security concerns. (Euronews)