Researchers Unveil Image-Based Attack Targeting AI Models

Security researchers have revealed a new attack technique that exploits vulnerabilities in artificial intelligence (AI) systems, enabling the theft of confidential user data through manipulated images. Developed by Kikimora Morozova and Suha Sabi Hussain from the cybersecurity firm Trail of Bits, this method builds upon a concept introduced in a 2020 study by TU Braunschweig. The researchers demonstrated how this technique, referred to as “image scaling attacks,” can be applied to contemporary AI applications.
The attack targets the way AI systems routinely downscale uploaded images to save computing resources and reduce costs. Common resampling algorithms such as nearest-neighbor, bilinear, and bicubic interpolation can inadvertently surface patterns hidden in the original image: once the picture is downscaled, previously concealed elements become visible, creating opportunities for malicious manipulation.
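As a rough illustration (not the researchers' tooling), the following Python sketch downscales the same stand-in image with three Pillow resampling filters. The differing outputs show why the choice of filter matters: an attacker who knows which algorithm the backend uses can tune source pixels that only take effect after scaling. The sizes, the random test image, and the filter list are assumptions made for the example.

```python
# Illustrative sketch: the same image downscaled with different resampling
# filters yields different pixels, the property image-scaling attacks exploit.
import numpy as np
from PIL import Image

rng = np.random.default_rng(0)
arr = rng.integers(0, 256, (512, 512), dtype=np.uint8)   # stand-in upload
img = Image.fromarray(arr)

for name, flt in [("nearest", Image.NEAREST),
                  ("bilinear", Image.BILINEAR),
                  ("bicubic", Image.BICUBIC)]:
    small = np.asarray(img.resize((64, 64), flt))
    print(name, small[0, :4])   # the first few output pixels differ per filter
```

Because each filter weights source pixels differently, an attacker who knows the backend's exact resampling step can craft an image whose scaled-down version carries content the full-resolution original does not show.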
Mechanics of the Attack
In practical terms, a manipulated image can carry instructions that remain invisible to the human eye until the image is resampled. During downscaling, for example, dark areas can shift in tone so that hidden text emerges. In one demonstration, black text became legible only after the image was resized, and the AI model treated that text as legitimate input. In effect, the hidden text acts as a prompt injection, smuggling in commands that can compromise sensitive user information.
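To make the mechanics concrete, here is a minimal, hypothetical sketch using Pillow: a short text payload is written only into the source pixels that a nearest-neighbor downscale happens to keep, so the full-size upload looks like a plain dark image while the scaled copy the model receives shows the text. The sizes, the dark cover value, the payload string, and the assumption of Pillow's nearest-neighbor rounding are all illustrative; real attacks target the specific filter and dimensions of the victim pipeline.

```python
# Hypothetical mechanics demo: plant a text payload only in the pixels a
# nearest-neighbor downscale keeps, so it is legible only after resizing.
from PIL import Image, ImageDraw

SRC, DST = 512, 64      # assumed upload size and model-input size
scale = SRC // DST      # 8: one source pixel kept per 8x8 block (Pillow NEAREST)

# 1. Render the payload at the *target* resolution.
payload = Image.new("L", (DST, DST), 0)
ImageDraw.Draw(payload).text((4, 26), "HIDDEN", fill=255)

# 2. Hide it in a dark full-size image, one pixel per sampling position.
cover = Image.new("L", (SRC, SRC), 30)
cp, pp = cover.load(), payload.load()
for y in range(DST):
    for x in range(DST):
        if pp[x, y] > 128:  # text pixel -> plant a slightly brighter dot
            cp[x * scale + scale // 2, y * scale + scale // 2] = 200

# 3. What the model sees after the platform downscales the upload.
seen = cover.resize((DST, DST), Image.NEAREST)
seen.save("what_the_model_sees.png")   # the hidden word is readable again
```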
In a test scenario, the researchers successfully redirected calendar data from a Google account to an external email address via the “Gemini CLI” tool. The implications of this attack extend to several platforms, including Google’s Gemini models, the Google Assistant on Android, and the Genspark service, raising significant concerns about data security across these widely used applications.
Proposed Defenses Against Image-Based Attacks
To counter such vulnerabilities, the researchers have published an open-source tool that crafts images tailored to specific downscaling methods. They also recommend limiting the size of uploaded images and displaying a preview of the downscaled version, so users can see what the model will actually receive. Furthermore, any safety-critical action should require explicit user confirmation, particularly when text is extracted from images. The researchers emphasize that robust system design is essential to guard against prompt injection attacks.
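A minimal sketch of those recommendations, assuming a Pillow-based pipeline and made-up limits (this is not Trail of Bits' tool or any specific platform's code): the upload is size-capped, downscaled with the same filter the backend uses, and that exact downscaled image is both shown to the user and used as the model input, with an explicit confirmation step before it is sent.

```python
# Hedged defense sketch: cap upload size, preview exactly what the model
# will receive, and require confirmation before sending. Limits, filter
# choice, and function names are illustrative assumptions.
from PIL import Image

MAX_DIM = 1024                    # assumed upload limit
MODEL_DIM = 512                   # assumed resolution the model receives
BACKEND_FILTER = Image.BICUBIC    # must match the real pipeline's filter

def prepare_upload(path: str) -> Image.Image:
    img = Image.open(path)
    if max(img.size) > MAX_DIM:
        raise ValueError(f"images larger than {MAX_DIM}px are rejected")

    # Downscale exactly as the backend would, and reuse this image for both
    # the user-facing preview and the model input, so what the user confirms
    # is what the model actually sees.
    preview = img.resize((MODEL_DIM, MODEL_DIM), BACKEND_FILTER)
    preview.show()                                 # stand-in for a UI preview
    if input("Send this image to the model? [y/N] ").strip().lower() != "y":
        raise RuntimeError("upload cancelled by the user")
    return preview
```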
According to Trail of Bits, the most effective defense against these threats is not any single fix but comprehensive, layered protective mechanisms built into the application's design. Only with such a systematic approach can developers prevent multimodal AI applications from becoming conduits for data abuse. As AI technology continues to evolve and becomes more integral to various sectors, addressing these security challenges will be crucial to safeguarding user information.