Researchers Unveil Image-Based Attack Targeting AI Models
Security researchers have revealed a new attack technique that exploits how artificial intelligence (AI) systems preprocess images, enabling the theft of confidential user data through manipulated pictures. Developed by Kikimora Morozova and Suha Sabi Hussain from the cybersecurity firm Trail of Bits, the method builds on a concept introduced in a 2020 study from TU Braunschweig. The researchers demonstrated how this technique, known as an "image scaling attack," can be applied to contemporary AI applications.
The attack targets the way AI systems typically downscale uploaded images to save computing resources and reduce costs. Because common resampling algorithms such as nearest-neighbor, bilinear, and bicubic interpolation combine or discard source pixels in predictable ways, an attacker can craft a high-resolution image whose hidden pattern only emerges in the downscaled copy that the model actually processes, creating an opportunity for malicious manipulation.
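As a rough illustration of why the choice of algorithm matters, the short sketch below uses Pillow to downscale the same upload with the three filters named above. The filename and the 256x256 target size are placeholders; the point is only that each filter produces a different reduced image, which is exactly the behavior the attack exploits.

```python
from PIL import Image

# Illustrative only: "upload.png" and the 256x256 target are placeholders.
img = Image.open("upload.png")

for name, flt in [("nearest", Image.Resampling.NEAREST),
                  ("bilinear", Image.Resampling.BILINEAR),
                  ("bicubic", Image.Resampling.BICUBIC)]:
    # Each resampling filter combines source pixels differently, so the
    # three previews can differ visibly even though the input is identical.
    img.resize((256, 256), resample=flt).save(f"preview_{name}.png")
```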
Mechanics of the Attack
In practical terms, a manipulated image can contain instructions that remain invisible to the human eye until the image undergoes resampling. During downscaling, for instance, dark areas may shift in a way that exposes hidden text. In one example, black text became visible only after the image was resized, and the AI model interpreted that text as legitimate input. Because the model treats the revealed text as part of the user's prompt, an attacker can smuggle in commands that compromise sensitive user information.
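To make the mechanics concrete, here is a minimal toy sketch (not the researchers' tooling) that crafts an image for a simple nearest-neighbor downscaler implemented directly in NumPy, so the sampled pixel grid is known exactly. All sizes and the hidden pattern are invented for illustration; real resamplers sample and blend pixels differently, but the principle is the same.

```python
import numpy as np

def nearest_downscale(img: np.ndarray, factor: int) -> np.ndarray:
    # Toy nearest-neighbor downscaler: keep the top-left pixel of each
    # factor x factor block and discard the rest.
    return img[::factor, ::factor]

factor = 8
small = 32                    # side length of the downscaled image
big = small * factor          # side length of the uploaded image

# Decoy: what a human sees at full resolution (a plain white image).
decoy = np.full((big, big), 255, dtype=np.uint8)

# Payload: the pattern that should only appear after downscaling.
payload = np.zeros((small, small), dtype=np.uint8)
payload[12:20, 4:28] = 1      # a dark bar standing in for hidden text

# Attack image: start from the decoy and overwrite only the pixels the
# downscaler will actually sample, and only where the payload is dark.
attack = decoy.copy()
attack[::factor, ::factor] = np.where(payload == 1, 0, 255)

# At full resolution almost nothing has changed...
print("fraction of modified pixels:", np.mean(attack != decoy))

# ...but the downscaled copy, which is what the model receives, now
# contains the hidden pattern.
revealed = nearest_downscale(attack, factor)
print("hidden pattern recovered:", np.array_equal(revealed == 0, payload == 1))
```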
In a test scenario, the researchers successfully redirected calendar data from a Google account to an external email address via the “Gemini CLI” tool. The implications of this attack extend to several platforms, including Google’s Gemini models, the Google Assistant on Android, and the Genspark service, raising significant concerns about data security across these widely used applications.
Proposed Defenses Against Image-Based Attacks
To combat such vulnerabilities, the researchers have published an open-source tool designed to create images tailored to different downscaling methods. They recommend limiting the size of uploaded images and displaying a preview of the reduced version so users can see what the model actually receives. Furthermore, any safety-critical action should require user confirmation, particularly when text is extracted from images. The researchers emphasize that robust system design is essential to guard against prompt injection attacks.
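A minimal sketch of those recommendations might look like the following. The size cap, target resolution, and function names are assumptions made for illustration, not values or APIs taken from the researchers' guidance.

```python
from PIL import Image

MAX_SIDE = 1024  # illustrative upload limit, not a value from the researchers

def prepare_upload(path: str, model_size=(512, 512)) -> Image.Image:
    """Enforce a size limit and return the exact downscaled image the model
    will see, so it can be shown to the user as a preview before submission."""
    img = Image.open(path)
    if max(img.size) > MAX_SIDE:
        raise ValueError(f"rejected: image exceeds {MAX_SIDE}px on one side")
    return img.resize(model_size, resample=Image.Resampling.BICUBIC)

def confirm_action(description: str) -> bool:
    """Ask the user before executing any safety-critical action the model
    requests, e.g. one triggered by text extracted from an image."""
    answer = input(f"The assistant wants to: {description!r}. Allow? [y/N] ")
    return answer.strip().lower() == "y"
```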
According to Trail of Bits, the most effective defense against these threats lies in implementing comprehensive protective mechanisms. Only through a systematic approach can developers prevent multimodal AI applications from becoming conduits for data abuse. As AI technology continues to evolve and become more integral to various sectors, addressing these security challenges will be crucial in safeguarding user information.