Google Fixes AI Coding Tool Flaw That Let Attackers Execute Malicious Code: Report
Summary
Researchers say a prompt injection bug in Google's Antigravity AI coding tool could have let attackers run commands, despite safeguards.
Original reporting
AFBytes is a read-only aggregator; see the original source for full context and complete reporting.