Stable and Compute-Resource Efficient Learning with Spiking and Quantum Neural Networks: Methods and Insights

Committee Members:
Dr. Firas Khatib, Computer and Information Science Department, University of Massachusetts Dartmouth
Dr. Christopher Hixenbaugh, Naval Undersea Warfare Center

Date & Time: Thursday, May 14, 2026, 10:30 AM - 11:30 AM (Eastern Time)
Room: Dion 311

Abstract:

Neural networks have evolved beyond the second generation in recent years, giving rise to more complex architectures such as spiking neural networks and quantum neural networks. However, the limited computational resources of edge devices remain a challenge for deploying these networks. This thesis investigates stable learning and compute-resource efficiency in spiking neural networks and quantum neural networks. Other desirable qualities, such as high performance (e.g., high accuracy or high reward), robustness, convergence, predictability, and fast running times, were also considered in one or more studies. The thesis makes several contributions. The first study used audio data, in part to verify whether a trend called temporal information concentration is present in spiking neural networks; we also gathered other findings related to temporal information dynamics, such as the impact of dataset complexity on Fisher Information. The second study, on multimodal spiking neural networks, explored the effects of audio and image noise. The results show that the multimodal model outperformed its unimodal counterparts, but certain configurations of image noise, audio noise, and noise levels performed better than others. A third study on spiking neural networks revealed that temporal information concentration was not present in quantization-aware-training variants, although an increase in Fisher Information was found in those variants. In one of the quantum neural network studies, on reinforcement learning, we found faster initial convergence, a more prolonged decrease in standard deviation and policy entropy, and several correlations involving average reward and policy entropy. In the second study on quantum neural networks, structured pruning was found to sharpen decisiveness and reveal bad pruning paths, while overparameterization can aid exploration.
Together, these studies address how to maintain or improve stable learning while keeping models compute-resource efficient enough to be practical.

All CIS and Data Science Graduate Students are encouraged to attend.

For further questions, please contact Dr. Yuchou Chang at ychang1@umassd.edu

Contact: Dr. Yuchou Chang, 508-999-8457, ychang1@umassd.edu