<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Ai on Qtnes</title><link>http://qtnes.com/tags/ai/</link><description>Recent content in Ai on Qtnes</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Fri, 17 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="http://qtnes.com/tags/ai/index.xml" rel="self" type="application/rss+xml"/><item><title>8 - Pizzeria: Prompt Injection Against an Android LLM Agent</title><link>http://qtnes.com/posts/8---pizzeria---prompt-injection-against-an-android-llm-agent/</link><pubDate>Fri, 17 Apr 2026 00:00:00 +0000</pubDate><guid>http://qtnes.com/posts/8---pizzeria---prompt-injection-against-an-android-llm-agent/</guid><description>&lt;h2 id="overview"&gt;Overview&lt;/h2&gt;
&lt;p&gt;Pizzeria looked like a simple Android food-ordering app, but the backend was doing something much more interesting: it was running an LLM agent with tool access.&lt;/p&gt;
&lt;p&gt;The challenge was to move from a normal order request to a tool call that would reveal the flag. The path there was a good example of how brittle keyword filtering becomes once an LLM is exposed to attacker-controlled input.&lt;/p&gt;
&lt;h2 id="recon"&gt;Recon&lt;/h2&gt;
&lt;p&gt;Decompiling the APK showed three useful pieces immediately.&lt;/p&gt;</description></item></channel></rss>