BEGIN:VCALENDAR
VERSION:2.0
X-WR-CALNAME:EventsCalendar
PRODID:-//hacksw/handcal//NONSGML v1.0//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:America/New_York
LAST-MODIFIED:20240422T053451Z
TZURL:https://www.tzurl.org/zoneinfo-outlook/America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:DAYLIGHT
TZNAME:EDT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZNAME:EST
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:College of Engineering,Thesis/Dissertations
DESCRIPTION:Faculty Supervisor: Long Jiao\, Computer & Information Scie
 nce\n\nCommittee Members:\nDr. Joshua Carberry\, Computer & Informatio
 n Science\nDr. Lance Fiondella\, Electrical & Computer Engineering\n\n
 Abstract: Large vision-language models rely on pretrained vision encod
 ers to translate images into feature representations used by downstrea
 m language models. This creates a security risk when the encoder is co
 mpromised by a stealthy backdoor attack\, such as BadVision\, where a s
 ubtle trigger causes an image to be mapped toward an attacker-chosen ta
 rget representation while clean inputs remain largely unaffected. Becau
 se the model behaves normally under standard evaluation\, these attack
 s are difficult to detect. This thesis investigates controlled noise in
 jection as a lightweight input-side defense against BadVision-style bac
 kdoors. The proposed approach adds small perturbations to input image
 s before they enter the vision encoder\, with the goal of disrupting th
 e trigger while preserving the semantic content of clean images. Severa
 l perturbation types are evaluated\, including Gaussian noise\, random n
 oise\, salt-and-pepper noise\, low-frequency noise\, geometric transfor
 mations\, occlusion\, scaling\, rotation\, and channel-based distributi
 ons. Experimental results show that geometric and channel-based transfo
 rmations have limited effect on the backdoor\, while pixel-level statis
 tical perturbations significantly reduce target similarity\, increase f
 eature-space distance from the attacker’s target representation\, and l
 ower attack success. These findings suggest that stealthy encoder-leve
 l triggers depend on fragile statistical patterns and can be weakened t
 hrough controlled noise injection without requiring retraining of the f
 ull multimodal model.\n\nFor further information please contact Dr. Lo
 ng Jiao at ljiao@umassd.edu\nEvent page: https://www.umassd.edu/events/
 cms/a-noise-based-defense-for-stealthy-backdoor-attacks-in-large-vision
 -language-models.php
X-ALT-DESC;FMTTYPE=text/html:<html><body><p>Faculty Supervisor: Long Ji
 ao\, Computer &amp; Information Science<br /> <br />Committee Members:
 </p>\n<p>Dr. Joshua Carberry\, Computer &amp; Information Science<br /
 >Dr. Lance Fiondella\, Electrical &amp; Computer Engineering<br /> <br /
 >Abstract: Large vision-language models rely on pretrained vision enco
 ders to translate images into feature representations used by downstre
 am language models. This creates a security risk when the encoder is c
 ompromised by a stealthy backdoor attack\, such as BadVision\, where a s
 ubtle trigger causes an image to be mapped toward an attacker-chosen t
 arget representation while clean inputs remain largely unaffected. Bec
 ause the model behaves normally under standard evaluation\, these atta
 cks are difficult to detect. This thesis investigates controlled noise i
 njection as a lightweight input-side defense against BadVision-style b
 ackdoors. The proposed approach adds small perturbations to input imag
 es before they enter the vision encoder\, with the goal of disrupting t
 he trigger while preserving the semantic content of clean images. Seve
 ral perturbation types are evaluated\, including Gaussian noise\, rand
 om noise\, salt-and-pepper noise\, low-frequency noise\, geometric tra
 nsformations\, occlusion\, scaling\, rotation\, and channel-based dist
 ributions. Experimental results show that geometric and channel-based t
 ransformations have limited effect on the backdoor\, while pixel-level p
 erturbations significantly reduce target similarity\, increase feature
 -space distance from the attacker’s target representation\, and lower a
 ttack success. These findings suggest that stealthy encoder-level trig
 gers depend on fragile statistical patterns and can be weakened throug
 h controlled noise injection without requiring retraining of the full m
 ultimodal model.<br /> <br />For further information please contact Dr
 . Long Jiao at ljiao@umassd.edu</p><p>Event page: <a href="https://www
 .umassd.edu/events/cms/a-noise-based-defense-for-stealthy-backdoor-att
 acks-in-large-vision-language-models.php">https://www.umassd.edu/event
 s/cms/a-noise-based-defense-for-stealthy-backdoor-attacks-in-large-vis
 ion-language-models.php</a></p></body></html>
DTSTAMP:20260505T154344Z
DTSTART;TZID=America/New_York:20260525T120000
DTEND;TZID=America/New_York:20260525T130000
LOCATION:Dion 311
SUMMARY;LANGUAGE=en-us:A Noise-Based Defense for Stealthy Backdoor Attacks 
 in Large Vision-Language Models
UID:c45be63911af2bf51a7dfe92826e274a@www.umassd.edu
END:VEVENT
END:VCALENDAR
