About Cybersecurity Center seminar series
UMass Dartmouth Cybersecurity Center Seminars are live academic forums, held online or in person, where invited speakers give technical presentations on cybersecurity-related topics to UMass Dartmouth students and employees.
Please email firstname.lastname@example.org if you are interested in giving a talk, or if you are not a UMass Dartmouth member and would like to attend one of the seminars. Links for online talks are announced to UMass Dartmouth members via email and UMD-Announce.
Date: Thursday, February 8, 2024 @ 10:30 AM
Location: Dion 311
Speaker: Xiao Li
Abstract: Federated learning (FL) is a sought-after distributed machine learning architecture and has been seamlessly integrated into edge computing. However, the limited resources on edge devices introduce security vulnerabilities to FL. While blockchain technology promises to bolster security, edge devices cannot afford the cost of off-the-shelf blockchain systems. Moreover, blockchain-empowered FL with multiple aggregators is still under-explored. In this talk, I introduce a novel Blockchain-empowered Heterogeneous Multi-Aggregator Federated Learning Architecture (BMA-FL). We design a performance-based Byzantine consensus mechanism to enable secure and fast model aggregation. I will also discuss the model damaging problem in BMA-FL due to the heterogeneity of aggregators. We propose a multi-agent deep reinforcement learning algorithm to help aggregators decide the best training strategies to mitigate the damage. Experiments on real-world datasets demonstrate the efficiency of BMA-FL in achieving better model performance faster than baselines.
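The basic aggregation step underlying architectures like the one described is weighted federated averaging. The sketch below is a generic illustration of that step only, with made-up toy weights and dataset sizes; it is not the BMA-FL aggregation protocol itself.

```python
# Weighted federated averaging (FedAvg-style): a generic sketch of the model
# aggregation step in federated learning, not the BMA-FL protocol.
def aggregate(client_weights, client_sizes):
    """Average client model weights, weighted by each client's local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

# Three toy edge clients with 2-parameter models and unequal data volumes.
weights = [[1.0, 0.0], [3.0, 2.0], [2.0, 1.0]]
sizes = [100, 300, 100]
print(aggregate(weights, sizes))  # [2.4, 1.4]
```

The client with the most data (300 samples) pulls the average toward its parameters, which is the intended behavior of size-weighted aggregation.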
Bio: Xiao Li is a Ph.D. Candidate in Computer Science at The University of Texas at Dallas, advised by Prof. Weili Wu. His research focuses on blockchain technology and its applications for secure distributed systems. He is also interested in applied data science for problem solving with applications of machine learning, deep learning, and reinforcement learning. Xiao has published 10 papers in related fields at refereed conferences and journals such as IEEE ICDCS, IEEE TCSS, Theoretical Computer Science, and Journal of Combinatorial Optimization. Xiao was awarded the prestigious Jan Van der Ziel Fellowship at UT Dallas in 2023. Xiao is also a member of IEEE, ACM, and AAAI. Xiao has served as a peer reviewer for various reputable conferences and journals including KDD, ICDCS, IJCAI, and Information Sciences.
Date: December 8, 2023 @ 3:00 PM
Location: Dion 115
Speaker: Dr. Yukui Luo
Abstract of the talk: With the widespread adoption of cloud computing, the demand for programmable hardware acceleration devices, such as field-programmable gate arrays (FPGAs), has increased. These devices benefit the growth of efficient hardware accelerators, making it possible for cloud computing to serve a wide range of research and commercial projects, including genetic engineering, intensive online secure trading, and Artificial Intelligence (AI) interfaces. To further improve the performance of FPGA-enabled cloud computing, one promising technology is to virtualize the hardware resources of an FPGA device, which allows multiple users to share the same FPGA. This solution can provide on-demand FPGA resource instances, significantly improving the utilization and energy efficiency of the cloud FPGA. However, due to the hardware reconfigurability of FPGAs, current virtualization technologies used for multi-tenant CPU and GPU instances are incompatible with multi-tenant FPGAs.
My research aims to enhance the security of multi-tenant FPGAs by defining the threat model and evaluating security concerns from the confidentiality, data integrity, and availability aspects. As part of this goal, I have constructed a multi-tenant FPGA prototype and demonstrated potential attacks. These attacks serve as preliminary steps toward developing a secure multi-tenant FPGA virtualization system. The system involves hardware and software co-design, extending multi-tenant isolation from software to hardware and ultimately resulting in a secure FPGA-shared cloud computing service.
Short Bio: Yukui Luo is an Assistant Professor in the Computer & Information Science Department at the University of Massachusetts Dartmouth (UMassD). His research endeavors include, but are not limited to, hardware security and trust, FPGA virtualization, computer architecture, embedded systems, privacy-preserving machine learning, and hardware-software co-designed computing systems. He has published several papers in top-tier conferences and journals in the system security area, showcasing his expertise and contributions.
Date: Friday, November 10, 2023 @ 2:00 PM
Location: Dion 311
Speaker: Dr. Nathaniel D. Bastian (Academy Professor at USMA)
Abstract: The dynamic and adversarial nature of the cyber domain causes machine learning (ML)-based methods to lack resilience due to the dearth of well-marked and fresh data sets. Enabling ML to evolve ahead of our adversaries’ dynamic behavior is recognized as an “as of yet” unattainable requirement. Within the context of the cyber domain, how do we anticipate newly emerging attack vectors and classes of data? How can we have proactive evolution of generated cyber data targeted towards adversarial behavior? This research talk provides an overview of the investigation and development of a generative machine learning technique for out-of-distribution training data expansion in the network intrusion detection system (NIDS) setting.
Bio: Nathaniel D. Bastian, PhD is a Lieutenant Colonel in the U.S. Army, where he is an Academy Professor and Cyber Warfare Officer at the United States Military Academy (USMA) at West Point. At USMA, LTC Bastian serves as Division Chief, Data & Decision Sciences at the Army Cyber Institute (ACI) within the Department of Electrical Engineering and Computer Science. He also serves as Assistant Professor of Operations Research and Data Science with a dual faculty appointment in the Department of Mathematical Sciences and the Department of Systems Engineering. Overall, his portfolio aims to research, develop, test, evaluate, analyze, and assess innovative, assured, intelligent, human-aware, data-centric, and decision-driven capabilities that enable information advantage and decision dominance in support of multi-domain operations with greater reliability, performance, and scalability in complex, contested, congested, and continually evolving battlefield environments. LTC Bastian has co-authored 90+ refereed journal articles and conference papers, six book chapters, and one textbook. He is the recipient of numerous academic awards and honors, including a Fulbright Scholarship and National Science Foundation Graduate Research Fellowship, and he has received $4M+ in externally funded research monies from multiple Government organizations. LTC Bastian was previously an Operations Research / Systems Analysis (ORSA) Officer with assignments as the Chief Artificial Intelligence Architect at the DoD Joint Artificial Intelligence Center, Operations Research Scientist at the ACI/USMA, and Analytics Officer at the U.S. Army Human Resources Command.
Date: Friday, October 13th, 2023 at 1:00 PM
Location: Dion 311
Speaker: Deepak Kumar, Ph.D. Candidate @ UMass Dartmouth
Abstract: Action recognition, a field within human-centered computing, is vital for identifying and understanding human actions, benefiting applications like surveillance, autonomous vehicles, and human-computer interaction. Artificial intelligence's enduring goal is to develop robust models for perceiving and understanding the visual world around us. Deep neural networks have shown exceptional performance in various tasks, profoundly impacting real-world applications, including action prediction and recognition, which have advanced significantly in recent years. Most work in visual human action recognition focuses on single viewpoints or modalities with complete observations; yet the real significance lies in predicting future actions from incomplete observations to prevent real-world tragedies. With multiple cameras and data from multiple modalities (RGB, depth, and skeleton) available today, it becomes possible to model human action in a multi-view and multi-modality context, minimizing the data loss due to occlusions and signal quality issues and improving recognition accuracy on the strength of state-of-the-art deep learning models. However, deep neural network models are susceptible to adversarial attacks, where imperceptible perturbations can compromise action recognition model performance. This thesis focuses on identifying latent vulnerabilities and proposing a defense mechanism against such threats in a multi-modality and multi-view setting. This work introduces an efficient and effective attack mechanism that perturbs skeleton data by targeting key joints and segments while employing a graph attention mechanism that learns the semantics to perturb other modalities. Additionally, an approach has been developed that not only adds noise but also alters the visual spatial structure of skeleton data through generative modeling.
Furthermore, this dissertation introduces a defense mechanism known as the Collaborative Knowledge Distillation Network, which leverages graph attention and knowledge distillation techniques. This network leverages the knowledge from compromised multi-view data and integrates information from clean data to address incomplete observations and noisy action videos, enhancing the robustness of action recognition models for real-world applications.
Bio: Deepak Kumar is a Ph.D. candidate in the Department of Computer and Information Science at the University of Massachusetts Dartmouth. He received his M.S. in Data Science from the University of Massachusetts Dartmouth in 2018 and a B.S. in Computer Science from Shaheed Zulfikar Ali Bhutto Institute of Science and Technology in 2013. His research interests include computer vision, spatio-temporal modeling, adversarial learning, and generative modeling.
Date: Thursday, Sep. 28th, 2023, at 1:00 PM
Location: Dion 311
Speaker: Dr. Yi Liu, Associate Professor @ UMass Dartmouth
Abstract: A secure design pattern is a well-proven reusable solution to a recurring security problem that arises in specific contexts. Using these patterns properly can help mitigate software vulnerabilities during the software development process. However, the absence of clear guidance on selecting appropriate secure design patterns often complicates their adoption for developers, especially when compared to conventional design patterns. This presentation introduces a methodology for the systematic selection of suitable secure design patterns for mitigating software vulnerabilities, formalized as anti-patterns. The focus of this presentation is on two of the most prevalent web application vulnerabilities: SQL injection (SQLi) and Cross-site scripting (XSS). A case study illustrates the practical implementation of the chosen pattern in the redesign of a vulnerable application, with evaluation results showcasing the proposed pattern's effectiveness in countering SQLi and XSS vulnerabilities. While SQLi and XSS are the targeted vulnerabilities in the approach, given the success of this study, this approach is promising for broader applications.
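For readers unfamiliar with the SQLi anti-pattern the talk formalizes, the canonical example is string concatenation versus parameterized queries. The sketch below is a generic illustration of that contrast (using an in-memory SQLite table with made-up data), not the specific pattern-selection methodology from the talk.

```python
import sqlite3

# The canonical SQLi anti-pattern vs. its standard mitigation.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name):
    # Anti-pattern: string concatenation lets input rewrite the query itself.
    return conn.execute(f"SELECT secret FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name):
    # Secure pattern: parameterized queries keep input as data, never as SQL.
    return conn.execute("SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
print(lookup_unsafe(payload))  # leaks the secret despite the bogus name
print(lookup_safe(payload))    # returns []: the payload is treated as a plain string
```

The injected `OR '1'='1'` makes the unsafe query's predicate always true, which is exactly the class of defect that secure design patterns aim to rule out structurally rather than case by case.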
Bio: Dr. Yi Liu is an associate professor in the Dept. of Computer & Information Science at the University of Massachusetts Dartmouth. Her research primarily focuses on software engineering, with a special emphasis on software reuse, including software frameworks, architectural patterns, design patterns, and microservice-oriented software development. Her research on the software engineering aspect of cybersecurity concentrates on incorporating security into requirements modeling and design by employing secure patterns for web applications.
Date: Friday, November 3, 2023 @ 3:00 PM
Location: Dion 115
Speaker: Dr. Vasiliki Kalavri (Assistant Professor at Boston University)
Abstract: Enabling secure outsourced analytics with practical performance has been a long-standing research challenge in the databases and systems communities. In this talk, I will present our work towards realizing this vision with SECRECY (NSDI’23) and TVA (SECURITY’23), two new frameworks for secure relational and time series analytics in untrusted clouds. SECRECY and TVA target collaborative analytics, where data owners (hospitals, companies, research institutions, or individuals) are willing to allow certain computations on their collective private data, provided that data remain siloed from untrusted entities. To ensure no information leakage and provable security guarantees, our work relies on cryptographically secure Multi-Party Computation (MPC). We will discuss how instead of treating MPC as a black box, SECRECY exposes the costs of oblivious queries to the planner and employs novel logical, physical, and protocol-specific optimizations, all of which are applicable even when data owners do not participate in the computation. Finally, I will also describe various secure time series operators in TVA that enable it to support both snapshot and recurring queries on private inputs with unordered and irregular timestamps.
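A core building block behind many MPC protocols of the kind the talk relies on is additive secret sharing. The following is a minimal generic illustration of that primitive only (with an arbitrarily chosen Mersenne-prime modulus and toy values), not the protocols used in SECRECY or TVA.

```python
import random

# Minimal additive secret sharing over Z_p, a common building block of MPC.
P = 2**61 - 1  # a Mersenne prime modulus (arbitrary choice for the sketch)

def share(secret, n=3):
    """Split a secret into n random shares that sum to it mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recombine shares; any n-1 of them alone reveal nothing about the secret."""
    return sum(shares) % P

# Two data owners share their private values with three servers; each server
# adds its shares locally, and only the aggregate sum is ever reconstructed.
a_shares, b_shares = share(120), share(505)
sum_shares = [(x + y) % P for x, y in zip(a_shares, b_shares)]
print(reconstruct(sum_shares))  # 625, computed without revealing 120 or 505
```

Real systems layer oblivious query operators and the optimizations described in the talk on top of primitives like this; the point here is only that addition commutes with sharing, which is what makes local per-server computation possible.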
Bio: Vasiliki (Vasia) Kalavri is an Assistant Professor of Computer Science at Boston University, where she co-leads the Complex Analytics and Scalable Processing (CASP) Systems lab. Vasia and her team enjoy doing research on multiple aspects of (distributed) data-centric systems. Recently, they have been working on self-managed systems for data stream processing, systems for scalable graph Machine Learning, and systems for secure collaborative analytics. Before joining BU, Vasia was a postdoctoral fellow at ETH Zurich and received a joint PhD from KTH (Sweden) and UCLouvain (Belgium). Vasia has received several awards for her research, including an IBM Innovation Award for her PhD Dissertation in 2017 and the SIGMOD Systems Award in 2023. Vasia’s research lab has received funding from the NSF, as well as industry awards from Google, Samsung, and RedHat.
Date: Friday, February 10, 2023 at 3:00 PM
Location: Claire T Carney Library (LIB), Room 205
Speaker: Ashok Vardhan Raja, Ph.D. Candidate @ UMass Dartmouth
Abstract: Recent years have witnessed a significant growth of UAVs in various military and civilian tasks due to their high mobility and rich sensing capabilities. This trend is further promoted by rapidly evolving AI algorithms and hardware in recent years. Although the integration of AI makes UAVs smarter and more effective, it also raises additional security and safety concerns due to potential vulnerabilities existing in the underlying AI models. The exploitation of such vulnerabilities by malicious adversaries can cause severe security and safety consequences. Therefore, it is critical to secure the integration of AI and UAV and make it robust when operating in an adversarial environment. In this talk, we will first analyze the data sensing and processing pipeline of major sensors involved in AI-enabled UAV operations to explore potential vulnerabilities. Then, we will discuss how to design defense strategies to improve the robustness of AI-enabled UAV operations. Two widely adopted AI-enabled UAV applications, including AI-enabled UAV infrastructure inspection and AI-enabled anomaly detection, will be leveraged as case studies in this talk.
Bio: Ashok Vardhan Raja is a Ph.D. candidate in the Department of Computer and Information Science at the University of Massachusetts Dartmouth. He received his M.S. in Cybersecurity Engineering from Embry-Riddle Aeronautical University in 2019 and a B.S. in Software Engineering from Rose-Hulman Institute of Technology in 2017. His research interests include cybersecurity, UAV security, secure integration of AI and cyber-physical systems, and adversarial learning.
Date: Monday, April 25, 2022 at 3:00 PM
Location: Science & Engineering Building (SEng), Room 222
Speaker: Sam Curry, Chief Security Officer, Cybereason, MA
Abstract: This talk surveys the domain and markets of cybersecurity. It briefly covers advanced analytics (data analytics, machine learning, and artificial intelligence) before diving into the intersection of the two. There is much hype and much promise in both, but the key is a rational and pragmatic approach to the two disciplines. An interactive session will examine this fascinating applied area from both academic and industry perspectives and should lead to engaging discussion. Being an entrepreneur in cybersecurity, Sam has offered to use the Q&A period as an "Ask Me Anything" as well, and welcomes any and all discussion of his career or of cybersecurity as a discipline.
Bio: Sam Curry is the Chief Security Officer (CSO) at Cybereason and the President of Cybereason Government Inc, as well as a Visiting Fellow at GMU's National Security Institute. Curry was also CTO and CISO for Arbor Networks (NetScout) and was CSO and SVP R&D at Microstrategy, in addition to senior security roles at McAfee and CA. He spent seven years at RSA variously as CSO, CTO, and SVP of Product, and as Head of RSA Labs. Curry has over 25 patents in security from his time as a Security Architect, has been a leader in two successful startups, and is a board member of the Cybersecurity Coalition, of SSH Communications, and of Sequitur Labs (in the IoT security space), in addition to a number of advisorships across the security spectrum.
Date: Wednesday, April 6, 2022 at 10:00 AM
Abstract: One class of cyber-attack is the advanced persistent threat (APT), where highly skilled attackers steal user authentication information and then, in a hidden manner, move laterally in the network, from host to host, until they reach a better target. The extensive use of information and communication technologies in power grid systems makes them vulnerable to cyber-attacks.
This talk presents a game theoretic approach to optimize the defense against an invader attempting to use a set of known vulnerabilities to reach critical nodes in the network. A specific game is built based on vulnerabilities. The solution of the game provides the optimal strategy to disconnect vulnerable services.
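As a toy illustration of this kind of model (not the speaker's actual game formulation), a defender can pick which vulnerable service to disconnect by minimizing the worst-case damage an attacker can inflict. All services, vulnerabilities, and payoff values below are invented for the sketch.

```python
# Toy zero-sum attack-defense game. Rows: the defender disconnects one
# vulnerable service. Columns: the attacker exploits one vulnerability.
# Entries: damage to the network (the attacker's payoff). Values are made up.
damage = [
    [0, 5, 3],   # defender disconnects service 0
    [4, 0, 3],   # defender disconnects service 1
    [4, 5, 0],   # defender disconnects service 2
]

def minimax_defense(damage):
    """Pick the pure defense strategy minimizing the worst-case damage."""
    worst = [max(row) for row in damage]  # attacker best-responds to each defense
    best = min(range(len(damage)), key=lambda i: worst[i])
    return best, worst[best]

strategy, loss = minimax_defense(damage)
print(f"disconnect service {strategy}, worst-case damage {loss}")
```

Here disconnecting service 1 caps the damage at 4, whereas either other choice leaves the attacker a payoff of 5; real formulations solve for mixed strategies over the full vulnerability graph.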
Bio: Laurent L. Njilla is a Senior Research Engineer with the Information Assurance Branch, Information Warfare Division at the U.S. Air Force Research Laboratory (AFRL), Rome, New York. He received his B.S. in Computer Science from the University of Yaoundé 1 in Cameroon, his M.S. in Computer Engineering from the University of Central Florida (UCF) in 2005, and his Ph.D. in Electrical Engineering from Florida International University (FIU) in 2015. Prior to joining the AFRL, he was a Senior Systems Analyst in the industry sector for more than 10 years. He is responsible for conducting basic research in the areas of hardware design, game theory applied to cyber security and cyber survivability, hardware security, online social networks, cyber threat information sharing, and blockchain technology. He is the Program Manager for the Cyber Security Center of Excellence (CoE) for the HBCU/MI and the Disruptive Information Technology Program at AFRL/RI. Dr. Njilla's research has resulted in more than 60 peer-reviewed journal and conference papers and multiple awards, including Air Force Notable Achievement Awards and the AFRL Information Directorate Patent Award, among others. He is a reviewer for multiple journals and serves on the technical program committees of several international conferences. He is a member of the National Society of Black Engineers (NSBE).
Date: March 25, 2022 at 3:00 PM
Abstract: In this short talk, Ming will speak about the skills and opportunities that he missed out on during his decade-plus experience in the security field, and why future professionals must heed these opportunities.
Date: February 4, 2022 at 3:00 PM
Speaker: Marilyn George (Ph.D. Candidate @ Brown University)
Abstract: Structured encryption (STE) schemes encrypt data structures in such a way that they can be privately queried. Special cases of STE include searchable symmetric encryption (SSE) and graph encryption. Like all sub-linear encrypted search solutions, STE leaks information about queries against persistent adversaries. To address this, a line of work on leakage suppression was recently initiated that focuses on techniques to mitigate the leakage of STE schemes. A notable example is the query equality suppression framework (Kamara et al. CRYPTO’18) which transforms dynamic STE schemes that leak the query equality into new schemes that do not. Unfortunately, this framework can only produce static schemes and it was left as an open problem to design a solution that could yield dynamic constructions. In this work, we propose a dynamic query equality suppression framework that transforms volume-hiding semi-dynamic or mutable STE schemes that leak the query equality into new fully-dynamic constructions that do not. We then use our framework to design three new fully-dynamic STE schemes that are “almost” and fully zero-leakage which, under natural assumptions on the data and query distributions, are asymptotically more efficient than using black-box ORAM simulation. These are the first constructions of their kind. This is joint work with Seny Kamara (Brown University) and Tarik Moataz (Aroki Systems).
Bio: Marilyn George is a PhD candidate at Brown University, where she is advised by Prof. Seny Kamara. Her primary interests are in applied cryptography, with an emphasis on structured encryption. She is also interested in algorithmic game theory and privacy-conscious system design. Before starting her PhD, she obtained her Masters from the Indian Institute of Science, and was a Research Fellow at Microsoft Research India, working on analytics over encrypted data.
Date: November 19, 2021 at 3:00 PM
Speaker: Dr. Gokhan Kul
Abstract: Database systems have become an essential component of every data-intensive application. Both academia and industry invest in developing tools and methodologies to protect the privacy of the data stored while satisfying the performance requirements of these systems. I model database workloads to achieve both of these goals by exploiting the ever-evolving behavioral characteristics of users. In this talk, I will first present how traditional database workload modeling is ineffective in terms of performance and security when applied to mobile devices and how mobile databases can benefit from accurate workload modeling. Next, I will introduce a novel and lightweight data-leakage detection system on mobile databases. This system models the evolving behavior by comparing probability distributions of the query workload features over time and uses this model to determine if the incoming query activity is anomalous.
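The general idea of comparing probability distributions of workload features over time can be sketched with a KL-divergence check over a trivial feature. This is an illustration of the underlying principle only, with invented queries and an arbitrary threshold; it is not Dr. Kul's detection system.

```python
import math
from collections import Counter

# Sketch: flag a query workload as anomalous when its feature distribution
# drifts too far from a baseline distribution (illustrative feature: the
# query's first keyword; real systems use much richer workload features).
def feature_distribution(queries):
    counts = Counter(q.split()[0].upper() for q in queries)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def kl_divergence(p, q, eps=1e-9):
    """D_KL(p || q), smoothed so unseen features don't divide by zero."""
    keys = set(p) | set(q)
    return sum(p.get(k, eps) * math.log(p.get(k, eps) / q.get(k, eps)) for k in keys)

baseline = feature_distribution(["SELECT a FROM t", "SELECT b FROM t", "UPDATE t SET a=1"])
incoming = feature_distribution(["SELECT a FROM t", "SELECT b FROM u"])
suspicious = feature_distribution(["DELETE FROM t", "DELETE FROM u", "DROP TABLE t"])

THRESHOLD = 1.0  # arbitrary cutoff for the sketch
print(kl_divergence(incoming, baseline) > THRESHOLD)    # False: looks normal
print(kl_divergence(suspicious, baseline) > THRESHOLD)  # True: anomalous drift
```

A workload dominated by statement types the baseline has never seen produces a large divergence, which is the intuition behind distribution-based leakage detection.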
Bio: Gokhan Kul is an assistant professor at the Department of Computer and Information Science, and the associate director of the Cybersecurity Center at UMass Dartmouth. Prior to joining UMass Dartmouth, he was an assistant professor at Delaware State University. His research broadly covers software and database systems security. He has published at reputable journals and conferences such as IEEE TKDE and IEEE TrustCom focusing on data leakage, concept drift, and threat detection. He contributes to research reproducibility efforts at VLDB and SIGMOD reproducibility committees. He received his Ph.D. in August 2018 at the University at Buffalo, where he also served as an adjunct instructor for two semesters. He received his M.S. degree at METU in Turkey while working as a software engineer at METU Computer Center.
Date: September 24, 2021 at 3:00 PM
Speaker: Dr. Amir Akhavan
Abstract: Chaotic maps possess profound characteristics such as determinism, ergodicity, random-like behavior, nonlinearity, aperiodicity, high entropy, balance, nondegeneracy, incredibly low correlation of information, and extreme sensitivity to very small changes of the initial condition and control parameters. These characteristics are very favorable for cryptography and make deterministic chaos an interesting candidate for designing cryptographic primitives. Based on these characteristics, many chaos-based cryptographic primitives have been proposed in recent years; however, a vast portion of these algorithms have encountered major drawbacks such as a shortened keyspace, low encryption/decryption speed, lack of robustness, and low security. The ongoing attempt in this field is to identify weaknesses and drawbacks in the proposed chaos-based algorithms to come up with robust, fast, and secure cryptographic primitives. Some of the issues in chaos-based cryptosystems have roots in the fundamental characteristics of chaotic dynamical systems that surface after their implementation. A new scheme is proposed in this research to address a portion of the issues caused by the implementation of chaotic maps. The effectiveness of the proposed scheme is studied using common statistical random tests on a PRNG implemented using the proposed scheme. The results of the analysis indicate that the implemented PRNG satisfies the requirements for a secure PRNG, confirming the effectiveness of the proposed scheme.
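A textbook illustration of the chaos-based PRNG idea is to iterate the logistic map and threshold its orbit into bits, then apply a crude frequency check. The parameters below are conventional textbook choices, and this sketch is emphatically not the speaker's proposed scheme; in fact, naive floating-point implementations like this one exhibit exactly the implementation-induced issues the talk addresses.

```python
# Minimal chaos-based PRNG sketch using the logistic map x -> r*x*(1-x)
# in its chaotic regime (r close to 4). Illustrative only.
def logistic_prng_bits(seed=0.4, r=3.99, n=10000, burn_in=1000):
    x = seed
    for _ in range(burn_in):           # discard the transient of the orbit
        x = r * x * (1 - x)
    bits = []
    for _ in range(n):
        x = r * x * (1 - x)
        bits.append(1 if x > 0.5 else 0)  # threshold the orbit to extract bits
    return bits

# Crude monobit frequency check: a balanced stream should be near 50% ones.
bits = logistic_prng_bits()
ones = sum(bits) / len(bits)
print(f"fraction of ones: {ones:.3f}")
```

Passing a monobit check is far from sufficient for cryptographic use; serious evaluation uses full statistical test suites such as the ones referenced in the talk.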
Date: April 23, 2021 at 3:00 PM
Speaker: Jack D'Agostino
Abstract: An advanced persistent threat (APT) is an entity that can perform cyber-attacks that employ sophisticated techniques to stealthily gain access to unauthorized private networks. APTs are often carried out by nationally funded espionage groups that steal sensitive data from opposing governments or wealthy companies. APT groups usually start by performing initial reconnaissance on their target so they can find where the sensitive information is hidden, then plant and execute specialized malware via techniques such as spear phishing emails that exploit zero-day vulnerabilities in the target's software. They also specialize in the ability to remain persistent on the network for an extended period of time. Therefore, they can avoid automatic detection and finally exfiltrate information to external command and control (C&C) servers. APT attacks are very complex attacks which, in turn, require complex systems for detection and mitigation. While there is a body of work that focuses on detecting APTs, detecting what part of the sensitive data has been stolen has not been studied well in the literature. Therefore, the goal of this research is to construct and present a system to automatically detect what information has been stolen by a cyber-attack. Our process assumes that an anomaly or APT detection system has already been deployed on the system, and our protocol will be activated by these systems when there is suspicious activity. The cyber-attack detected may be an APT attack or another type of passive attack aiming to steal information. Even though our model can detect leaked information from other attacks, we focus on APT attacks in this research because our model is tailored towards detecting leaked information from the APT exfiltration step.
Moreover, based on how the information was leaked and what part of data storage the attackers have accessed, organizations using our framework will be able to understand what type of attack the system has experienced, which will be helpful in creating efficient incident response policies. Once a cyber-attack that exfiltrates information is detected, normally a forensic analysis of all systems would be performed. However, this process is exhaustive, and while not all of this analysis usually yields useful results, having such information is usually crucial for business continuity. We build a framework to detect what information has been leaked even when the attackers were able to spread the operation of stealing sensitive information across multiple victim computers in the network. To achieve this, we create baseline fingerprints for each workstation and keep a sliding window of network activity summary reports to compare against when an attack is detected. The sum of the differences between the network activity summary and the baseline fingerprint for each workstation yields the activity that has not been accounted for before, which we call the summary report. We compare this data with the sensitive assets to determine what parts of the assets have been stolen. There are two main challenges in this process: (1) how to create a fingerprint for workstations, and (2) how to efficiently compare the summary report with the sensitive assets. Concretely, our contributions are: (1) we propose a framework to detect sensitive information stolen by cyber-attacks, (2) we construct network activity fingerprints tailored for this task, and (3) we analyze the efficiency of such a task.
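The fingerprint-difference idea in the abstract can be reduced to a toy: subtract a baseline activity fingerprint from a sliding-window summary, then match the unexplained outbound volume against sensitive asset sizes. Everything below (the per-destination byte counts as the fingerprint, the hosts, the asset names and sizes, the 10% tolerance) is invented for illustration; the actual framework builds far richer fingerprints.

```python
from collections import Counter

# Toy sketch of detecting leaked assets from unexplained network activity.
def unaccounted_activity(baseline, window):
    """Outbound bytes per destination beyond what the baseline fingerprint explains."""
    return Counter({d: n - baseline.get(d, 0)
                    for d, n in window.items() if n > baseline.get(d, 0)})

def match_assets(excess, asset_sizes, tolerance=0.1):
    """Flag sensitive assets whose size matches the unexplained outbound volume."""
    total = sum(excess.values())
    return [name for name, size in asset_sizes.items()
            if abs(total - size) <= tolerance * size]

baseline = Counter({"10.0.0.5": 1200, "10.0.0.9": 300})      # workstation norm
window   = Counter({"10.0.0.5": 1250, "203.0.113.7": 5100})  # sliding-window summary
assets   = {"customer_db_dump": 5000, "hr_records": 52000}   # sensitive asset sizes

excess = unaccounted_activity(baseline, window)
print(match_assets(excess, assets))  # ['customer_db_dump']
```

The window shows roughly 5KB of traffic the baseline cannot explain, which matches one asset's size within tolerance; the two challenges named in the abstract correspond to building `baseline` well and making the comparison efficient at scale.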
Date: April 21, 2021 at 10:30 AM
Speaker: Dr. Xinwen Fu
Abstract: This talk covers our research in the USENIX Security 2020 and INFOCOM 2020 papers. Our main contribution is the discovery of the insecurity of Bluetooth Low Energy one-way secure connections only (SCO) mode. In the SCO mode, a BLE device accepts only secure pairing such as Passkey Entry and Numeric Comparison from an initiator, e.g., an Android mobile. However, the BLE specification does not require the SCO mode for the initiator, and does not specify how the BLE programming framework should implement this mode. We show that the BLE programming framework of the initiator must properly handle SCO initiation, status management, error handling, and bond management; otherwise, severe flaws can be exploited to perform downgrade attacks, forcing the BLE pairing protocols to run in an insecure mode without user’s awareness. Due to such system flaws from the BLE programming framework, all BLE apps in Android are subject to our downgrade attacks. In addition to Android, we also find all major OSes including iOS, macOS, Windows, and Linux do not support the SCO mode properly. To defend against our attacks, we have built a prototype for the SCO mode on Android 8 atop Android Open Source Project (AOSP). We also present an application level defense measure.
Bio: Dr. Xinwen Fu is a professor in the Department of Computer Science, University of Massachusetts Lowell. His current research interests are in computer and network security and privacy. Dr. Fu has published at the four top computer security conferences including Oakland, CCS, USENIX Security and NDSS, and prestigious journals such as ACM/IEEE Transactions on Networking (ToN) and IEEE Transactions on Dependable and Secure Computing (TDSC). He spoke at various technical security conferences including Black Hat.
Date: March 26, 2021
Speaker: Dr. Deniz Iren
Abstract: Emotions constitute a core evolutionary mechanism that is essential to human survival. Human emotions are a large part of the fundamental structures of our society. They affect every decision, whether small or big, intentionally or unintentionally, in personal lives and in business. Recently, many researchers have directed their attention to understanding emotions, how they manifest, and how they can be automatically detected. The state-of-the-art AI models yield promising results in capturing emotions and transforming them into actionable information. This presentation covers an introduction to emotions, emotion recognition, and how emotion recognition can be used for business decisions. To that end, a research-in-progress project about the use of AI-based emotion recognition in the finance sector will be described. The role of emotions in financial investment is understudied. The executives of large companies give quarterly talks to inform stakeholders and attract investors. Analysts who listen to these talks decide whether or not to invest based on a multitude of factors. Among these factors, a very important one is emotions. However, currently, emotions as an information source are neither measurable and reportable nor defensible and credible. Hence, our research question is the following: how can we measure the emotions of a speaker in voice and content, and how can we use the measured emotions as a signal to improve investment decision making? Specifically, we detect the six basic human emotions as defined by Ekman (i.e., happiness, sadness, surprise, fear, anger, and disgust) expressed by company executives during quarterly earnings conference calls and examine their relationship with the companies' upcoming earnings. To identify emotions, we develop a Long Short-Term Memory (LSTM) artificial neural network trained on speech signals annotated by humans.
Our findings shed light on the underlying value of emotions in financial decision making under uncertainty, thereby contributing to behavioral economic theory. The preliminary findings show that the hidden emotional information in quarterly earnings calls has predictive value for financial performance.
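The pipeline the abstract describes — speech signals framed into short windows, each sequence labeled with one of Ekman's six emotions, then fed to an LSTM — can be sketched at the preprocessing stage. The windowing and per-frame feature below are a hypothetical illustration of that front end, not the speaker's actual feature extraction:

```python
import math

# Ekman's six basic emotions, as used in the talk's classification task
EKMAN_EMOTIONS = ["happiness", "sadness", "surprise", "fear", "anger", "disgust"]

def frame_signal(signal, frame_len, hop):
    """Split a 1-D speech signal into overlapping frames (the sequence
    an LSTM would consume, one time step per frame)."""
    return [signal[i:i + frame_len]
            for i in range(0, len(signal) - frame_len + 1, hop)]

def log_energy(frame):
    """Log-energy of one frame: a simple scalar feature per time step."""
    return math.log(sum(x * x for x in frame) + 1e-10)

# Toy signal: 16 samples, framed into 8-sample windows with a 4-sample hop
signal = [0.1, -0.2, 0.3, 0.0, 0.5, -0.1, 0.2, 0.4] * 2
frames = frame_signal(signal, frame_len=8, hop=4)
features = [log_energy(f) for f in frames]  # one feature per time step
```

In a real system each frame would yield a richer feature vector (e.g., spectral features) and the resulting sequence would be the LSTM's input.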
Bio: Deniz Iren is an assistant professor and one of the founding members of the Center for Actionable Research of Open University (CAROU). In 2014, he completed his PhD in Information Systems with a focus on crowdsourcing and human computation. Subsequently, he moved to the Netherlands, where he held postdoctoral researcher positions at TU Delft, VU Amsterdam, and Maastricht University. Currently, he is engaged in research and education activities at CAROU and at the Science Faculty of the university. His research interests include machine learning and artificial intelligence (AI), specifically the acquisition of high-quality data for the training of AI models. Although he is engaged in projects from a wide spectrum of domains, including computer vision and natural language processing, his favorite topic at the moment is emotion analytics. The idea of empathic machines that are able to perceive human emotions and act upon this perception fascinates him.
Date: February 19, 2021 at 3:00 PM
Speaker: Dr. Kristopher Micinski
Abstract: Binary reverse engineering is a challenging task because it often necessitates reasoning using both domain-specific knowledge (e.g., understanding entrypoint idioms common to an ABI) and logical inference (e.g., reconstructing interprocedural control flow). To help perform these tasks, reverse engineers often use toolkits (such as IDA Pro or Ghidra) that allow them to interactively explicate properties of binaries. We argue that deductive databases serve as a natural abstraction for interfacing between visualization-based binary analysis tools and high-performance logical inference engines that compute facts about binaries. In this talk, we present a vision for the future in which reverse engineers use a visualization-based tool to understand binaries while simultaneously querying a logical-inference engine to perform arbitrarily complex deductive inference tasks. We call our vision declarative demand-driven reverse engineering (D3RE for short) and sketch a formal semantics whose goal is to mediate interaction between a logical-inference engine (such as Souffle) and a reverse engineering tool. We describe a prototype tool, d3re, which we are using to explore the D3RE vision. While still a prototype, we have used d3re to reimplement several common querying tasks on binaries. Our evaluation demonstrates that d3re enables both better performance and more succinct implementation of these common RE tasks.
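The core of the deductive inference the abstract refers to is fixed-point evaluation over fact tables — for example, computing interprocedural reachability as the transitive closure of a call-graph relation. The Python below is a minimal sketch of what a Datalog engine like Souffle computes for such a query; the function names and toy facts are illustrative, not d3re's actual API:

```python
def transitive_closure(edges):
    """Naive fixed-point evaluation of the Datalog rules
       reaches(A, B) :- calls(A, B).
       reaches(A, C) :- reaches(A, B), reaches(B, C).
    Repeatedly join the relation with itself until no new facts appear."""
    closure = set(edges)
    while True:
        derived = {(a, d) for (a, b) in closure
                          for (c, d) in closure if b == c} - closure
        if not derived:
            return closure
        closure |= derived

# Toy call-graph facts a binary analysis might extract
calls = {("main", "parse_input"), ("parse_input", "read_bytes"),
         ("read_bytes", "alloc_buffer")}
reaches = transitive_closure(calls)  # ("main", "alloc_buffer") is derivable
```

A production engine evaluates such rules semi-naively and in parallel, but the declarative rule-plus-fixpoint structure is the same.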
Bio: Dr. Kristopher Micinski is an Assistant Professor in the Electrical Engineering and Computer Science Department at Syracuse University. His research centers on applying insights from programming language theory to build next-generation secure systems using static and dynamic analysis at unprecedented scale. He co-directs the HARP (High-performance Automated Reasoning and Programming) research group with his collaborators Thomas Gilray and Sidharth Kumar at the University of Alabama at Birmingham. He is also involved in a number of other collaborations at Tufts, the University of Washington, Northeastern, and the University of Maryland.
Date: January 29, 2021 at 3:00 PM
Speaker: Dr. Navid Asadi Zanjani
Abstract: Globalization has made the semiconductor industry more susceptible to trust and security issues. Hardware Trojans, i.e., malicious modifications to electronic systems, can violate the root of trust when devices or systems are fabricated or assembled in untrusted facilities. As imaging and failure-analysis tools improve in resolution and capability, physical-inspection-based methods become more attractive for verifying such trust issues. Conversely, the same physical inspection methods open new capabilities for an adversary to extract sensitive information such as secret keys, memory contents, or intellectual property (IP), compromising confidentiality and integrity. Different countermeasures have been proposed; however, there are still many unanswered questions. This talk will focus on state-of-the-art physical inspection/assurance methods, the existing countermeasures, the challenges in developing new countermeasures, and a research roadmap for this emerging field.
Bio: Dr. Navid Asadi is an Assistant Professor in the ECE Department at the University of Florida. He investigates novel techniques for IC counterfeit detection and prevention, system- and chip-level decomposition and security assessment, anti-reverse engineering, 3D imaging, invasive and semi-invasive physical assurance, supply chain security, etc. Dr. Asadi has received several best paper awards from the IEEE International Symposium on Hardware Oriented Security and Trust (HOST) and the ASME International Symposium on Flexible Automation (ISFA). He was also the winner of the D.E. Crow Innovation Award from the University of Connecticut. He is co-founder and the program chair of the IEEE Physical Assurance and Inspection of Electronics (PAINE) Conference.
Date: Friday November 6, 2020 at 3:00 PM
Speaker: Dr. David Mohaisen, University of Central Florida
Abstract: Malicious software (or malware) is a vehicle for adversaries to launch various types of attacks, and there has been a constant stream of malware samples in the wild over the past few years. Per one study, the number of malware samples grew to almost 1.1 billion in 2020, compared to 100 million samples only 8 years earlier, and attacks launched by malware have had significant costs to the world economy, on the order of hundreds of billions of dollars. The rise of this attack vector, coupled with the deployment of new systems of unprecedented scale, e.g., the Internet of Things, calls for techniques to identify malware samples for detection and classification. In this regard, machine learning has shown some promise, including significant accuracy results for filtering unwanted families, as well as operational systems for tracking families of interest over time, or simply making use of threat intelligence to reduce manual analysis efforts. In this talk, we will review some recent results on the applications of machine learning to a broad class of malware analysis, detection, and classification tasks using various program analysis modalities, such as strings, graphs, and functions. We will then explore the robustness of such defenses to a new class of attacks on machine learning and broad directions for defenses.
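One of the simplest forms of the string modality the abstract mentions can be sketched as set similarity over strings extracted from a binary, with family assignment by nearest profile. The family names, string profiles, and classification rule below are invented for illustration, not the speaker's actual system:

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of extracted strings."""
    return len(a & b) / len(a | b) if a | b else 0.0

def classify(sample_strings, family_profiles):
    """Assign a sample to the known family whose string profile it most
    resembles (nearest-profile classification over the string modality)."""
    return max(family_profiles,
               key=lambda fam: jaccard(sample_strings, family_profiles[fam]))

# Hypothetical string profiles for two malware families
profiles = {
    "zeus_like":  {"POST /gate.php", "cmd.exe", "keylog.dat"},
    "mirai_like": {"/bin/busybox", "telnet", "GET /bins/"},
}
sample = {"cmd.exe", "keylog.dat", "ntdll.dll"}
family = classify(sample, profiles)  # -> "zeus_like"
```

Real classifiers replace the raw string sets with learned feature representations (and combine them with graph and function modalities), but the talk's robustness question applies even to this simple baseline: an adversary who can insert or strip strings can shift the similarity scores.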
Bio: David Mohaisen earned his M.Sc. and Ph.D. degrees from the University of Minnesota in 2011 and 2012, respectively. He is currently an Associate Professor at the University of Central Florida, where he directs the Security and Analytics Lab (SEAL). Before joining UCF, he held several posts in academia and industry: as an Assistant Professor at the University at Buffalo, a (Senior) Research Scientist at Verisign Labs, and a Member of the Engineering Staff at the Electronics and Telecommunications Research Institute (ETRI). His research interests fall in the broad areas of networked systems and their security, machine learning systems, online privacy, and measurements. Among other services, he is currently an Associate Editor of IEEE Transactions on Mobile Computing and IEEE Transactions on Parallel and Distributed Systems. He is a senior member of ACM (2018) and IEEE (2015).