BEGIN:VCALENDAR
VERSION:2.0
X-WR-CALNAME:EventsCalendar
PRODID:-//hacksw/handcal//NONSGML v1.0//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:America/New_York
LAST-MODIFIED:20240422T053451Z
TZURL:https://www.tzurl.org/zoneinfo-outlook/America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:DAYLIGHT
TZNAME:EDT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZNAME:EST
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:College of Engineering,Thesis/Dissertations
DESCRIPTION:Advisor: Dr. Long Jiao\, Computer & Information Science\n\nCo
 mmittee Members:\nDr. Joshua Carberry\, Computer & Information Science\n
 Dr. Amir Akhavan Masoumi\, Computer & Information Science\n\nAbstract: A
 I agents have seen rapid adoption in recent years\, and many of them no
 w include a memory database that allows them to store information abou
 t their users and reference it across separate conversations. However
 \, this feature also creates a new attack surface\, where adversaries
  can inject poisoned memories into the database of an agent in order
  to degrade its performance. Previous research in this area has focuse
 d on attacks that either insert poisoned memories directly into the me
 mory store or assume that the attacker knows in advance which question
 s the agent will be asked. This thesis attempts to poison a memory dat
 abase without either of those capabilities\, relying only on conversat
 ion history that has already been stored in memory and on knowledge of
  the type and retrieval policy of the memory system. From this corpus
 \, it selects up to k conversations using closeness to the hubs of the
  memory database (regions of the embedding space from which many unrel
 ated user queries tend to retrieve) as the guiding selection criterion
 . Conversations chosen in this way are disproportionately retrieved ac
 ross a wide range of future queries\, displacing legitimate context an
 d broadly degrading the agent's responses. Results show that this is a
 n effective way to reduce overall AI agent performance with minimal in
 formation about the agent itself.\n\nThis research demonstrates that t
 he embedding geometry of retrieval-based memory systems is itself a vu
 lnerability: an adversary can exploit the structure of the embedding s
 pace without ever needing to know what the agent will be asked.\n\nFor
  further information please contact Dr. Long Jiao at ljiao@umassd.edu
 \nEvent page: https://www.umassd.edu/events/cms/hub-based-memory-poiso
 ning-query-blind-attacks-on-retrieval-augmented-llm-agents.php
X-ALT-DESC;FMTTYPE=text/html:<html><body><p>Advisor: Dr. Long Jiao\, Comp
 uter &amp; Information Science <br /> <br />Committee Members:</p>\n<ul>
 \n<li>Dr. Joshua Carberry\, Computer &amp; Information Science</li>\n<li
 >Dr. Amir Akhavan Masoumi\, Computer &amp; Information Science</li>\n</u
 l>\n<p>Abstract:</p>\n<p>AI agents have seen rapid adoption in recent ye
 ars\, and many of them now include a memory database that allows them to
  store information about their users and reference it across separate co
 nversations. However\, this feature also creates a new attack surface\,
  where adversaries can inject poisoned memories into the database of an
  agent in order to degrade its performance. Previous research in this ar
 ea has focused on attacks that either insert poisoned memories directly
  into the memory store or assume that the attacker knows in advance whic
 h questions the agent will be asked. This thesis attempts to poison a me
 mory database without either of those capabilities\, relying only on con
 versation history that has already been stored in memory and on knowledg
 e of the type and retrieval policy of the memory system. From this corpu
 s\, it selects up to k conversations using closeness to the hubs of the
  memory database (regions of the embedding space from which many unrelat
 ed user queries tend to retrieve) as the guiding selection criterion. Co
 nversations chosen in this way are disproportionately retrieved across a
  wide range of future queries\, displacing legitimate context and broadl
 y degrading the agent's responses. Results show that this is an effectiv
 e way to reduce overall AI agent performance with minimal information ab
 out the agent itself.</p>\n<p>This research demonstrates that the embedd
 ing geometry of retrieval-based memory systems is itself a vulnerability
 : an adversary can exploit the structure of the embedding space without
  ever needing to know what the agent will be asked. <br /> <br />For fur
 ther information please contact Dr. Long Jiao at ljiao@umassd.edu</p><p>
 Event page: <a href="https://www.umassd.edu/events/cms/hub-based-memory-
 poisoning-query-blind-attacks-on-retrieval-augmented-llm-agents.php">htt
 ps://www.umassd.edu/events/cms/hub-based-memory-poisoning-query-blind-at
 tacks-on-retrieval-augmented-llm-agents.php</a></p></body></html>
DTSTAMP:20260505T154209Z
DTSTART;TZID=America/New_York:20260520T120000
DTEND;TZID=America/New_York:20260520T130000
LOCATION:Dion 311
SUMMARY;LANGUAGE=en-us:Hub-Based Memory Poisoning: Query-Blind Attacks on R
 etrieval-Augmented LLM Agents
UID:4dbcadeff2a7c91ae6c5408937ff3da4@www.umassd.edu
END:VEVENT
END:VCALENDAR
