Researchers Unveil Image-Based Attack Targeting AI Models

Security researchers have revealed a new attack technique that exploits vulnerabilities in artificial intelligence (AI) systems, enabling the theft of confidential user data through manipulated images. Developed by Kikimora Morozova and Suha Sabi Hussain from the cybersecurity firm Trail of Bits, this method builds upon a concept introduced in a 2020 study by TU Braunschweig. The researchers demonstrated how this technique, referred to as “image scaling attacks,” can be applied to contemporary AI applications.

The attack targets the way AI systems typically downscale uploaded images to save computing resources and reduce costs. Common resampling algorithms such as nearest-neighbor, bilinear, and bicubic interpolation can be exploited for this purpose: an attacker can embed patterns in a high-resolution image that stay concealed until the image is downscaled, at which point they become visible to the model and open the door to malicious manipulation.
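
As a rough illustration of how pipeline-dependent this is, the following Python sketch (using the Pillow library; the file name and target resolution are placeholders, not values from the research) downscales the same upload with each of the three algorithms, producing three potentially different low-resolution views:

```python
# Sketch: the same upload yields different low-resolution views depending on
# the resampling algorithm, which is why scaling attacks are crafted for the
# specific downscaler a given pipeline uses. "upload.png" and the 256x256
# target size are placeholders.
from PIL import Image

img = Image.open("upload.png").convert("RGB")
for name, resampler in [("nearest", Image.NEAREST),
                        ("bilinear", Image.BILINEAR),
                        ("bicubic", Image.BICUBIC)]:
    img.resize((256, 256), resample=resampler).save(f"downscaled_{name}.png")
```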

Mechanics of the Attack

In practical terms, a manipulated image can contain instructions that remain invisible to the human eye until the image undergoes resampling. During downscaling, for instance, dark areas may shift in color so that hidden text emerges. In one example, black text became visible only after the image was resized, and the AI model treated that text as legitimate user input. In this way, an attacker can smuggle in harmful commands that compromise sensitive user information.
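
To make the mechanics concrete, here is a minimal Python sketch (not the researchers' code) that plants a payload only on the pixels a nearest-neighbor downscaler is expected to sample. The sizes, the injected text, and the assumption that Pillow's NEAREST resampler samples near the center of each block are illustrative:

```python
# Minimal sketch of the core idea (not the researchers' tooling): payload
# pixels are written only at the positions a nearest-neighbor downscaler is
# expected to sample, so the payload dominates the downscaled view while the
# full-resolution image still looks mostly uniform.
from PIL import Image, ImageDraw

SRC, DST = 1024, 256          # full-resolution upload vs. assumed model input size
STEP = SRC // DST             # integer downscaling factor

# Benign-looking cover: a plain light-gray image.
cover = Image.new("RGB", (SRC, SRC), (230, 230, 230))

# Payload that should only appear after downscaling: dark text on white.
payload = Image.new("RGB", (DST, DST), (255, 255, 255))
ImageDraw.Draw(payload).text((10, 120), "hidden instruction goes here", fill=(0, 0, 0))

attack = cover.copy()
for y in range(DST):
    for x in range(DST):
        # Only the (roughly) sampled pixel of each STEP x STEP block carries
        # the payload; a real attack tunes these pixels to stay inconspicuous.
        attack.putpixel((x * STEP + STEP // 2, y * STEP + STEP // 2),
                        payload.getpixel((x, y)))

attack.save("attack_image.png")                                   # near-uniform gray up close
attack.resize((DST, DST), Image.NEAREST).save("model_view.png")   # the text reappears
```

Whether hidden content actually survives depends on the exact resampler and its sampling offsets, which is why such attacks have to be tailored to the target pipeline.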

In a test scenario, the researchers successfully redirected calendar data from a Google account to an external email address via the “Gemini CLI” tool. The implications of this attack extend to several platforms, including Google’s Gemini models, the Google Assistant on Android, and the Genspark service, raising significant concerns about data security across these widely used applications.

Proposed Defenses Against Image-Based Attacks

To combat such vulnerabilities, the researchers have released an open-source tool called Anamorpher, designed to create images tailored to different downscaling methods. They also recommend limiting the dimensions of uploaded images and displaying a preview of the downscaled version that the model will actually receive. Furthermore, any safety-critical action should require explicit user confirmation, particularly when text is extracted from an image. The researchers emphasize that robust system design is essential to guard against prompt injection attacks.
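
A simple version of the preview recommendation can be sketched in a few lines of Python; the target resolution and resampler here are assumptions standing in for whatever the actual pipeline uses:

```python
# Sketch of the "show the user what the model sees" mitigation: downscale the
# upload with the same resampler and target size the AI pipeline uses, and
# present that preview before the image (or any text extracted from it) is
# acted upon. The 512x512 size and bicubic resampler are placeholders.
from PIL import Image

MODEL_INPUT_SIZE = (512, 512)

def preview_for_user(path: str) -> Image.Image:
    img = Image.open(path).convert("RGB")
    return img.resize(MODEL_INPUT_SIZE, resample=Image.BICUBIC)

preview_for_user("upload.png").save("what_the_model_sees.png")
```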

According to Trail of Bits, the most effective defense against these threats lies in implementing comprehensive protective mechanisms. Only through a systematic approach can developers prevent multimodal AI applications from becoming conduits for data abuse. As AI technology continues to evolve and become more integral to various sectors, addressing these security challenges will be crucial in safeguarding user information.

