Artificial intelligence (AI) is improving in astounding ways. AI-powered systems can often diagnose illnesses more accurately than physicians with decades of experience, or recognize the content of images and tag them automatically for easier retrieval later.
However, AI is not free of flaws. The people who work with it know that these systems have blind spots, and so-called adversarial examples exploit them: inputs altered in ways that cause an AI to miscalculate.
They act like bugs that can make the AI misbehave or overlook information it should easily recognize. People familiar with AI and machine learning understand that adversarial examples can arise from small tweaks to an image, for example, often so subtle that a human would not notice them.
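To make the idea concrete, here is a minimal sketch of that kind of small-tweak attack on a toy classifier. It uses a fast-gradient-sign-style perturbation against a simple logistic-regression model with made-up random weights; the model, weights, and `eps` budget are all illustrative assumptions, not any system mentioned in the article.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)  # hypothetical trained weights for a 16-"pixel" input
b = 0.0

def predict(x):
    """Probability the toy model assigns to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

x = rng.normal(size=16)                 # a clean input
label = 1 if predict(x) > 0.5 else 0    # the class the model currently picks

# For logistic regression, the gradient of the loss w.r.t. the input
# is (p - y) * w. The fast-gradient-sign step nudges every "pixel" by
# a tiny amount eps in the direction that increases the loss.
eps = 0.3
grad = (predict(x) - label) * w
x_adv = x + eps * np.sign(grad)

print("clean prediction:      ", predict(x))
print("adversarial prediction:", predict(x_adv))
```

Each coordinate moves by at most `eps`, so the perturbed input stays close to the original, yet the model's confidence in its own answer drops — the same mechanism, scaled up, is what flips image labels in real adversarial attacks.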
Adversarial examples can cause severe problems, but some researchers believe these same AI blind spots could be turned around and used to protect users’ privacy.
Manipulating AI Data to Cause Mistakes
Neil Gong recently joined the Duke University faculty to investigate how effectively inserting false information into a person’s profile can protect their privacy. Part of his work centers on determining which kinds of false information work best for that purpose, and how much is needed to keep the data safe from prying eyes.
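The general idea can be illustrated with a toy example: plant a few decoy entries in a profile so that a naive attribute-inference attacker draws the wrong conclusion. The keyword-counting "attacker" and the decoy interests below are invented for illustration and are not the researchers' actual models or data.

```python
# Hypothetical decoy interests planted in the profile to mislead inference.
DECOYS = {"camping", "woodworking"}

def infer_city_dweller(likes):
    """Naive attacker: guesses 'urban' if urban-coded likes dominate."""
    urban = {"subway", "coffee shops", "museums"}
    rural = {"camping", "woodworking", "fishing"}
    return "urban" if len(likes & urban) > len(likes & rural) else "rural"

profile = {"subway", "coffee shops"}
print(infer_city_dweller(profile))            # prints "urban"
print(infer_city_dweller(profile | DECOYS))   # decoys flip it to "rural"
```

The research question Gong is pursuing is the hard version of this: against far more sophisticated inference models, which false entries to add, and how many, so the defense works without ruining the usefulness of the profile.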
Along with another Duke researcher named Jinyuan Jia, Gong relied on a data set similar to the …
Read More on Datafloq