
Beware: This is how artificial intelligence platforms leak your data

 

A group of researchers at American and Swiss universities, in cooperation with Google and its subsidiary DeepMind, published a research paper explaining how data can leak from image-generation platforms built on generative artificial intelligence models such as DALL-E, Imagen, and Stable Diffusion.

They all work in the same way: the user types a specific text prompt, e.g. “armchair in the shape of an avocado”, and receives a generated image within seconds.

The generative AI models used in these platforms have been trained on a very large number of images, each paired with a predefined description. The idea is that, after processing a huge amount of training data, a neural network can generate new and unique images.

However, the new study shows that these images are not always unique. In some cases, the neural network can reproduce an image that is an exact match to a previous image used in training. This means that neural networks may inadvertently reveal private information.

This study challenges the view that image-generation models do not memorize their training data, and that training data can remain private as long as it is not published.

Provide more data

The results of deep learning systems can seem like magic to non-specialists, but there is no magic involved. All neural networks work on the same principle: training on a large data set with an accurate description of each image, for example a series of pictures labeled as cats and dogs.

After training, the neural network is shown a new image and asked to decide whether it is a cat or a dog. From this humble starting point, developers of these models move on to more complex scenarios, such as creating an image of a non-existent pet using an algorithm trained on many images of cats. These experiments are conducted not only with images, but also with text, video, and even sound.

The starting point for all neural networks is the training data set. Neural networks cannot create new objects out of thin air. For example, to create an image of a cat, the algorithm must study thousands of real photographs or drawings of cats.
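
To make the cat-versus-dog example concrete, here is a minimal training-loop sketch in PyTorch. It is purely illustrative and not taken from the paper; the folder layout, model choice, and hyperparameters are assumptions.

# Minimal PyTorch sketch of the cat-vs-dog training described above.
# Hypothetical paths and hyperparameters; not taken from the paper.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Assumes a directory "pets/train" with "cat/" and "dog/" subfolders.
train_set = datasets.ImageFolder("pets/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(num_classes=2)   # two labels: cat, dog
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

# After training, the network is shown a new image and asked: cat or dog?
# probs = model(new_image.unsqueeze(0)).softmax(dim=1)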

 

Great efforts to keep the datasets confidential

In their paper, the researchers pay particular attention to diffusion models. These work as follows: the training data (images of people, cars, houses, and so on) is distorted by adding noise, and the neural network is then trained to restore these images to their original state.
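
As a rough illustration of that add-noise-then-restore idea (not the paper's actual training procedure), a toy denoising step might look like the sketch below; the tiny convolutional “denoiser” and the single fixed noise level are stand-ins for a real U-Net and a multi-step noise schedule.

# Toy sketch of the idea behind diffusion training: corrupt an image with
# noise, then teach a network to undo the corruption. Illustrative only.
import torch
import torch.nn as nn

denoiser = nn.Sequential(            # stand-in for a real U-Net
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-4)

def training_step(clean_images):
    noise = torch.randn_like(clean_images)
    noisy = clean_images + 0.5 * noise          # "distort the training data"
    restored = denoiser(noisy)                  # "restore to the original state"
    loss = nn.functional.mse_loss(restored, clean_images)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example call with a random batch of 8 small RGB images:
# training_step(torch.rand(8, 3, 64, 64))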

This method makes it possible to generate images of acceptable quality, but a potential drawback, compared with generative adversarial networks, for example, is a greater tendency to leak data. The original training data can be extracted from it in at least three ways:

– Specific queries can force the neural network to output a particular source image rather than something unique generated from thousands of training images.
– The original image can be reconstructed even when only part of it is available.
– It is possible to determine whether a particular image was included in the training data at all.

Neural networks are often lazy: instead of producing a new image, they reproduce something from the training set, especially if it contains multiple duplicates of the same image. If an image is repeated in the training set more than a hundred times, there is a very high chance that it will leak in near-original form.
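
A crude way to picture such a memorization check: compare a generated image against every training image and flag anything suspiciously close. The distance measure and threshold below are illustrative assumptions, not the researchers' exact method.

# Rough sketch of a memorization check in the spirit of the attacks above:
# flag a generated image that is suspiciously close to some training image.
import torch

def looks_memorized(generated, training_images, threshold=0.05):
    """generated: (C,H,W) tensor; training_images: (N,C,H,W) tensor."""
    diffs = training_images - generated.unsqueeze(0)
    # mean squared pixel distance to every training image
    distances = diffs.pow(2).flatten(1).mean(dim=1)
    nearest = distances.min()
    return bool(nearest < threshold), float(nearest)

# Example: memorized, distance = looks_memorized(sample, train_tensor)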

However, the researchers also showed ways to retrieve training images that appeared only once in the original set: of the 500 images they tested, the algorithm recreated three of them.

Who is the victim of theft?

In January 2023, three artists sued AI-based image generation platforms for using their online images to train their models without any respect for copyright.

A neural network can actually copy an artist’s style, thereby depriving them of income. The paper notes that in some cases algorithms can, for various reasons, engage in outright plagiarism, generating drawings, photographs, and other images that are almost identical to the work of real people.

So the researchers made recommendations for strengthening the privacy of the original training set:

1- Eliminate duplicates in the training set.
2- Preprocess the training images, e.g. by adding noise or changing the brightness; this makes data leakage less likely (a sketch of these first two steps follows this list).
3- Test the algorithm with special training images, then verify that it does not unintentionally reproduce them exactly.
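
As a rough sketch of the first two recommendations (exact-duplicate removal and light perturbation of the training images), something like the following could be used; the file layout, hash choice, and augmentation strengths are all assumptions, not settings from the paper.

# Illustrative sketch: drop exact duplicates from the training set and
# lightly perturb each image (brightness jitter plus Gaussian noise).
import hashlib
from pathlib import Path
import torch
from torchvision import transforms

def deduplicate(image_dir):
    """Keep one file per identical byte content (exact duplicates only)."""
    seen, kept = set(), []
    for path in Path(image_dir).glob("*.jpg"):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(path)
    return kept

# Random brightness changes plus a small amount of noise, applied at load time.
augment = transforms.Compose([
    transforms.ColorJitter(brightness=0.3),
    transforms.ToTensor(),
    transforms.Lambda(lambda x: (x + 0.02 * torch.randn_like(x)).clamp(0, 1)),
])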

What is next?

Generative art platforms have certainly sparked an interesting debate lately, one in which a balance must be found between artists and technology developers. On the one hand, copyright must be respected; on the other, is art generated by AI really so different from human art?

But let’s talk about security. The paper presents a specific set of facts about just one machine learning model. Extending the concept to all similar algorithms leads to an interesting situation: it is not hard to imagine a scenario in which a mobile operator’s smart assistant hands over sensitive company information in response to a user’s query, or in which a rogue prompt gets a public neural network to generate a copy of someone’s passport. The researchers stress, however, that such problems remain theoretical for the time being.

But there are other, very real problems already with us: text-generation models such as ChatGPT are now used to write actual malicious code.

And GitHub Copilot helps programmers write code using a huge amount of open-source software as training input, and the tool does not always respect the copyright and privacy of the authors whose code ended up in that very large training set.

As neural networks evolve, so will attacks against them, with consequences that no one yet understands.

