An Introduction to Neuromorphic Computing and Spiking Neural Networks (SNNs)

Friday, March 24, 2023 - 12:00 pm
online

Time: Mar 24, 2023 12:00 PM Eastern Time (US and Canada)

Abstract: This short talk on Neuromorphic Computing will be given by Ramashish Gaurav (Ram for short). He is a Ph.D. candidate at Virginia Tech - ECE, working on Spiking Neural Networks (SNNs), a subdomain of Neuromorphic Computing, under the supervision of Prof. Yang (Cindy) Yi at MICS. The talk will open with an introduction to Neuromorphic Computing, followed by spiking networks and how they relate to the current generation of neural networks. It will then turn to recent progress in SNNs and conclude with the opportunities and challenges on the path to energy-efficient AI. Ram's blog can be found at https://r-gaurav.github.io/

Join Zoom Meeting
https://us06web.zoom.us/j/87177520455?pwd=N2dsT052bmNoOXhHcVBvQmZ6M3ljU…

Meeting ID: 871 7752 0455

Facial Expression Recognition Using Edge AI Accelerators

Wednesday, March 15, 2023 - 10:00 am
Room 2267 Innovation building

DISSERTATION DEFENSE 

Author : Heath Smith

Advisor : Dr. Ramtin Zand

Date : March 15, 2023 

Time: 10:00 am  

Place : Room 2267 Innovation building

Abstract 

Facial expression recognition is a popular and challenging area of research in machine learning applications. Facial expressions are critical to human communication and allow us to convey complex thoughts and emotions beyond spoken language. The complexity of facial expressions creates a difficult problem for computer vision systems, especially edge computing systems. Current Deep Learning (DL) methods rely on large-scale Convolutional Neural Networks (CNNs) that require millions of floating-point operations (FLOPs) to accomplish similar image classification tasks. However, on edge and IoT devices, large-scale convolutional models can cause problems due to memory and power limitations. The intent of this work is to propose a neural network architecture, inspired by deep CNNs, that is tuned for deployment on edge devices and small-form-factor edge AI accelerators. This will be carried out by strategically reducing the size of the network while still achieving good discrimination between classes. Additionally, performance metrics such as latency, accuracy, throughput, and power consumption will be captured and compared with several popular deep CNN models. Trade-offs between network size and performance are expected when running model inference on edge AI accelerators such as the Intel Movidius Neural Compute Stick II and the NVIDIA Jetson Nano GPU accelerator. An additional benefit of smaller-scale convolutional models is that they are better suited to being converted into spiking neural networks and deployed on neuromorphic hardware such as the Intel Loihi neuromorphic chip. Furthermore, this work will also examine various image processing techniques across multiple datasets in an effort to increase the performance of the edge-efficient model.
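To make the cost argument concrete, here is a back-of-envelope comparison of a standard convolution against a depthwise-separable one -- the kind of substitution commonly used to shrink CNNs for edge deployment. The layer shapes below are illustrative assumptions, not taken from the thesis.

```python
# Multiply-accumulate (MAC) counts for a standard k x k convolution
# versus a depthwise-separable replacement on the same feature map.

def conv_macs(h, w, k, c_in, c_out):
    """MACs for a standard k x k convolution over an h x w feature map."""
    return h * w * k * k * c_in * c_out

def sep_conv_macs(h, w, k, c_in, c_out):
    """MACs for a depthwise k x k conv followed by a 1 x 1 pointwise conv."""
    depthwise = h * w * k * k * c_in
    pointwise = h * w * c_in * c_out
    return depthwise + pointwise

# Hypothetical layer: 48x48 feature map, 3x3 kernels, 64 -> 128 channels.
std = conv_macs(48, 48, 3, 64, 128)
sep = sep_conv_macs(48, 48, 3, 64, 128)
print(f"standard: {std:,} MACs, separable: {sep:,} MACs, "
      f"reduction: {std / sep:.1f}x")
```

The roughly 8x reduction at identical spatial resolution is why such restructurings matter on memory- and power-limited accelerators.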

Learning Analytics Through Machine Learning and Natural Language Processing 

Wednesday, March 15, 2023 - 08:00 am
online

DISSERTATION DEFENSE 

Author : Bokai Yang

Advisor : Dr. John Rose

Date : March 15, 2023 

Time: 8:00 am  

Place : Virtual

Meeting Link 

Abstract 
The increase in computing power and the ability to log students' data with the help of computer-assisted learning systems have led to growing interest in developing and applying computer science techniques for analyzing learning data. To understand and investigate how learning-generated data can be used to improve student success, data mining techniques have been applied to several educational tasks. This dissertation investigates three important tasks in various domains of educational data mining: learners' behavior analysis, essay structure analysis and feedback provision, and learners' dropout prediction. The first project applied latent semantic analysis and machine learning approaches to investigate how MOOC learners' longitudinal trajectory of meaningful forum participation facilitated learner performance. The findings have implications for refining the courses' facilitation methods and forum design, helping improve learners' performance, and assessing learners' academic performance in MOOCs. The second project aims to analyze the organizational structures used in previous ACT test essays and provide an argumentative-structure feedback tool driven by deep learning language models to better support current automatic essay scoring systems and classroom settings. The third project applied MOOC learners' forum participation states to predict dropout with the help of hidden Markov models and other machine learning techniques. The results of this project show that forum behavior can be used to predict dropout and evaluate learners' status. Overall, the results of this dissertation expand current research and shed light on how computer science techniques could further improve students' learning experience.
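As a flavor of the third project's technique, here is a minimal forward-algorithm sketch for a hidden Markov model over weekly forum-participation observations. The states, observation alphabet, and all probabilities below are hypothetical stand-ins, not values from the dissertation.

```python
# Forward algorithm: infer the posterior over hidden engagement states
# after a sequence of observed weekly forum behaviors.

STATES = ["engaged", "lurking", "disengaged"]

# P(state_t | state_{t-1}) -- toy transition probabilities.
TRANS = {
    "engaged":    {"engaged": 0.7,  "lurking": 0.2,  "disengaged": 0.1},
    "lurking":    {"engaged": 0.2,  "lurking": 0.5,  "disengaged": 0.3},
    "disengaged": {"engaged": 0.05, "lurking": 0.15, "disengaged": 0.8},
}
# P(observation | state) -- toy emission probabilities.
EMIT = {
    "engaged":    {"post": 0.6,  "view": 0.35, "absent": 0.05},
    "lurking":    {"post": 0.1,  "view": 0.6,  "absent": 0.3},
    "disengaged": {"post": 0.02, "view": 0.18, "absent": 0.8},
}
INIT = {"engaged": 0.5, "lurking": 0.4, "disengaged": 0.1}

def forward(observations):
    """Return P(state | observations) after the last observed week."""
    alpha = {s: INIT[s] * EMIT[s][observations[0]] for s in STATES}
    for obs in observations[1:]:
        alpha = {
            s: EMIT[s][obs] * sum(alpha[p] * TRANS[p][s] for p in STATES)
            for s in STATES
        }
    total = sum(alpha.values())
    return {s: a / total for s, a in alpha.items()}

belief = forward(["post", "view", "absent", "absent"])
# A high "disengaged" posterior can be read as elevated dropout risk.
print(max(belief, key=belief.get), round(belief["disengaged"], 2))
```

Two consecutive absent weeks push the posterior strongly toward the disengaged state, which is the intuition behind using participation states for dropout prediction.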

Adversarial Machine Learning and Defense Strategies 

Friday, February 24, 2023 - 01:00 pm
Storey Innovation Center, RM 2277 

Professor Dipankar Dasgupta 

Adversarial attacks can disrupt the functionality of artificial intelligence (AI) and machine learning (ML) based systems, but they also open up significant research opportunities. In this talk, Prof. Dipankar Dasgupta from the University of Memphis will cover emerging adversarial machine learning (AML) attacks on systems and the state-of-the-art defense techniques. Prof. Dasgupta will first discuss how and where adversarial attacks could happen in an AI/ML model and framework. He will then present the classification of adversarial attacks and their severity and applicability in real-world problems, including the steps to mitigate their effects, before illustrating the role of GANs in adversarial attacks and as a defense strategy.

Finally, Prof. Dasgupta will discuss a dual-filtering (DF) strategy that can mitigate adaptive or advanced adversarial manipulations for a wide range of ML attacks with higher accuracy. The developed DF software can be used as a wrapper around any existing ML-based decision support system to prevent a wide variety of adversarial evasion attacks. The DF framework utilizes two sets of filters based on positive (input filters) and negative (output filters) verification strategies that can communicate with each other for higher robustness.
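A schematic sketch of the wrapper idea described above: an input filter vets samples before the model sees them, and an output filter vets the model's decision before it is released. The filters and "model" here are toy stand-ins invented for illustration, not the published DF implementation.

```python
# A decision-support model wrapped by positive (input) and negative
# (output) verification filters, in the spirit of dual filtering.

class DualFilterWrapper:
    def __init__(self, model, input_filter, output_filter):
        self.model = model
        self.input_filter = input_filter    # positive verification
        self.output_filter = output_filter  # negative verification

    def predict(self, x):
        if not self.input_filter(x):
            return "rejected: suspicious input"
        y = self.model(x)
        if not self.output_filter(x, y):
            return "rejected: inconsistent output"
        return y

# Toy components: inputs are feature lists expected in [0, 1]; the
# "model" thresholds their mean; the output filter re-checks plausibility.
in_range = lambda x: all(0.0 <= v <= 1.0 for v in x)
classify = lambda x: "high" if sum(x) / len(x) > 0.5 else "low"
consistent = lambda x, y: (y == "high") == (max(x) > 0.4)

wrapped = DualFilterWrapper(classify, in_range, consistent)
print(wrapped.predict([0.9, 0.8, 0.7]))   # passes both filters
print(wrapped.predict([0.9, 1.7, 0.2]))   # blocked by the input filter
```

Because the wrapper never touches the inner model, the same pattern can be bolted onto an existing decision pipeline without retraining it.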

References:  

  • Dasgupta, D. and Gupta, K. D., "Dual-filtering (DF) schemes for learning systems to prevent adversarial attacks," Complex & Intelligent Systems (2022). https://doi.org/10.1007/s40747-022-00649-1 
  • Gupta, K. D. and Dasgupta, D., "Who is Responsible for Adversarial Defense?" Workshop on Challenges in Deploying and Monitoring Machine Learning Systems, ICML 2021. 
  • Gupta, K. D., Dasgupta, D. and Akhtar, Z., "Applicability Issues of Evasion-Based Adversarial Attacks and Mitigation Techniques," 2020 IEEE Symposium Series on Computational Intelligence (SSCI), Canberra, ACT, Australia, 2020, pp. 1506-1515. doi: 10.1109/SSCI47803.2020.9308589. 

Dr. Dipankar Dasgupta has been a professor of Computer Science at the University of Memphis since 1997; he is an IEEE Fellow, an ACM Distinguished Speaker (2015-2020), and an IEEE Distinguished Lecturer (2022-2024). Dr. Dasgupta is known for his pioneering work on the design and development of intelligent solutions inspired by natural and biological processes. During 1990-2000, he extensively studied different AI/ML techniques, and his research on an efficient search and optimization method (the structured genetic algorithm) has been applied in engineering design, neural networks, and control systems. He is one of the founding fathers of the field of artificial immune systems (a.k.a. Immunological Computation) and is at the forefront of applying bio-inspired approaches to cyber defense. His notable work on digital immunity, negative authentication, cloud insurance modeling, dual filtering, and adaptive multi-factor authentication has demonstrated the effective use of various AI/ML algorithms. His research accomplishments and achievements have appeared in Computer World Magazine, on NASA's website, and in local TV channels and newspapers. 

Dr. Dasgupta has authored four books, holds 5 patents (including 2 under submission), and has more than 300 research publications (20,000 citations per Google Scholar) in book chapters, journals, and international conference proceedings. Among many awards, he was honored with the 2014 ACM-SIGEVO Impact Award for his seminal work on negative authentication, an AI-based approach. He has also received five best paper awards at different international conferences and has been organizing the IEEE Symposium on Computational Intelligence in Cyber Security at SSCI since 2007. He regularly serves as a panelist and keynote speaker, offers tutorials at leading computer science conferences, and has given more than 350 invited talks at universities and in industry. 

Utilizing Deep Learning Methods in the Identification and Synthesis of Gene Regulations 

Monday, February 6, 2023 - 10:30 am

DISSERTATION DEFENSE 

Author : Jiandong Wang

Advisor : Dr. Jijun Tang

Date : Feb 6, 2023 

Time: 10:30 am  

Place : Virtual

Meeting Link

 

Abstract 

Gene expression is fundamental to the differentiation and development of life. Although all cells in an organism have essentially the same DNA, cell types and activities vary due to changes in gene expression. Gene expression can be influenced by many forms of gene regulation. RNA editing contributes to the variety of RNA and proteins by allowing single-nucleotide substitution. Reverse transcription can alter the expression status of genes by inducing genetic diversity and polymorphism via novel insertions, deletions, and recombination events. Gene regulation is critical to normal development because it enables cells to respond rapidly to environmental changes. However, identifying gene regulations from genome data remains challenging due to the repetitive nature of eukaryotic genomes and their high structural diversity.

Deep learning techniques emerged in the 2000s and quickly gained traction in a variety of disciplines due to their unparalleled prediction performance on large datasets. Since then, numerous applications in computational biology have been proposed, including image resolution enhancement and analysis, the detection of DNA function, and protein structure prediction. As a result, deep learning is widely regarded as a promising technique for advancing bioinformatics. In this dissertation, we explore deep learning-based methods to solve the following gene regulation problems: 1) RNA editing identification, 2) novel LINE-1 retrotransposon gene synthesis, and 3) RNA editing identification and classification across tissues. 

First, we took the RNA editing identification task as an example to fully explore deep learning-based methods for solving gene regulation problems. While millions of RNA editing sites have been reported in the human genome, far more sites are believed to be editable and have yet to be identified. We constructed convolutional neural network (CNN) models to predict human RNA editing events in both Alu and non-Alu regions. Experimental results showed that our method achieved outstanding performance on two validation datasets. We ported our CNN models to a web service named EditPredict. In addition to the human genome, EditPredict tackles the genomes of other model organisms, including the bumblebee, fruit fly, mouse, and squid.
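A minimal sketch of how a candidate editing site and its flanking sequence can be turned into CNN input via one-hot encoding. The window size and encoding details below are generic illustrative choices, not the actual EditPredict configuration.

```python
# One-hot encode a nucleotide window centered on a candidate editing site.

NUC = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot(seq):
    """Encode a nucleotide string as a list of 4-dim one-hot vectors."""
    out = []
    for base in seq.upper():
        vec = [0, 0, 0, 0]
        if base in NUC:            # unknown bases ("N") stay all-zero
            vec[NUC[base]] = 1
        out.append(vec)
    return out

# A toy 11-nt window centered on a candidate A-to-I site.
window = "GGCTAACGTCA"
encoded = one_hot(window)
print(len(encoded), encoded[5])  # center position: the candidate "A"
```

A CNN then slides filters over this length-by-4 matrix so that sequence context around the center position drives the editable/non-editable prediction.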

Second, we explored the advantages of deep learning methods in synthesizing novel genes. Long interspersed nuclear element-1 (LINE-1) retrotransposons are the only autonomously active transposable elements in the human genome. While numerous bioinformatics methods have been developed to assist in detecting and classifying LINE-1 retrotransposons, there are still limitations in terms of reliability, precision, and efficiency. We proposed an interpretable generative adversarial network to learn the operating pattern of the LINE-1 retrotransposon and then generate synthetic sequences of up to 201 nucleotides. Experimental results showed that the synthetic sequences generated by our model are highly similar to natural LINE-1 retrotransposon sequences. We also optimized the generated sequences for desired properties, such as sequence structure for a particular biological function and protein secondary structure.

Third, we extended our dissertation by using deep learning methods to identify and classify RNA editing across human tissues. It is known that RNA editing varies across tissues. Our study can be divided into two major parts: RNA editing similarity across human tissues and RNA editing specificity across human tissues. We analyzed the distribution of RNA editing and present an atlas comprising millions of A-to-I events identified in six tissues. Then, we used a transfer learning technique and hybrid models to identify and classify RNA editing across tissues, respectively. Our models achieved relatively good identification and classification performance. Finally, we identified the RNA editing events associated with human disorders and categorized them into different groups. We found that specific RNA editing events are consistently associated with diseases of specific human tissues.

In silico reconstruction of the brain

Friday, February 3, 2023 - 02:20 pm
Swearingen (2A27).

Talk Abstract: In a 2009 TED talk, a well-known neuroscientist announced that within 10 years, we would be able to reconstruct the brain in a computer. Almost 15 years later, this moving target still seems out of reach, but a systematic framework has clearly been established. In this seminar presentation, I will discuss how scientists attempt to tackle this grand challenge. After discussing the importance of modeling in science, engineering, and medicine, I will highlight the impressive complexity of the brain and, for perspective, compare it with the complexity of modern CPUs and deep learning models. I will then summarize the approach exemplified by the Blue Brain Project to reconstruct the brain using biophysically detailed models of neurons. I will conclude with examples of the simplifications used to simulate a whole -- albeit very simplified -- brain in silico, and show how such approaches can help in clinical applications and in fundamental neuroscience.
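To make the "simplified" end of this modeling spectrum concrete, here is a leaky integrate-and-fire (LIF) point neuron, a common textbook reduction of the detailed biophysical models mentioned above. All parameter values are generic illustrative choices, not Blue Brain Project settings.

```python
# Simulate a leaky integrate-and-fire neuron with Euler integration:
# dV/dt = (-(V - v_rest) + I) / tau, spike and reset at threshold.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Return the timesteps at which the neuron spikes."""
    v, spikes = v_rest, []
    for t, i in enumerate(input_current):
        v += dt * (-(v - v_rest) + i) / tau
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
    return spikes

# A constant supra-threshold drive produces regular spiking.
spike_times = simulate_lif([1.5] * 200)
print(len(spike_times), spike_times[:3])
```

A whole-brain simulation at this level of abstraction wires millions of such two-line neurons together, trading biophysical fidelity for tractability.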

Bio: Christian O’Reilly received his B.Ing. (electrical engineering; 2007), M.Sc.A. (biomedical engineering; 2011), and Ph.D. (biomedical engineering; 2012) from the École Polytechnique de Montréal, where he worked under the mentorship of Prof. R. Plamondon applying pattern recognition and machine learning to predict brain stroke risks. Between 2012 and 2018, he pursued postdoctoral studies at various institutions (Université de Montréal, McGill, EPFL), studying sleep and autism using different approaches from neuroimaging and computational neuroscience. In 2020, he accepted a position as a research associate at McGill, where he studied brain connectivity in autism and related neurodevelopmental disorders. In 2021, Christian joined the Department of Computer Science and Engineering, the Artificial Intelligence Institute (AIISC), and the Carolina Autism and Neurodevelopment (CAN) research center at the University of South Carolina as an assistant professor in neuroscience and artificial intelligence.

Neuro-Edge: Neuromorphic-Enhanced Edge Computing

Friday, January 27, 2023 - 02:20 pm
Swearingen (2A27).

Talk Abstract: As the technology industry is moving towards implementing machine learning tasks such as natural language processing and image classification on smaller edge computing devices, the demand for more efficient implementations of algorithms and hardware accelerators has become a significant area of research. In recent years, several edge deep learning hardware accelerators have been released that specifically focus on reducing the power and area consumed by deep neural networks (DNNs). On the other hand, spiking neural networks (SNNs) have been shown to achieve substantial power reductions over even the aforementioned edge DNN accelerators when deployed on specialized neuromorphic event-based/asynchronous hardware. While neuromorphic hardware has demonstrated great potential for accelerating deep learning tasks at the edge, the current space of algorithms and hardware is limited and still in rather early development. Thus, many hybrid approaches have been proposed which aim to convert pre-trained DNNs into SNNs. In this talk, we provide a general guide to converting pre-trained DNNs into SNNs while also presenting techniques to improve the deployment of converted SNNs on neuromorphic hardware in terms of latency, power, and energy.
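A small numerical illustration of the standard rate-coding argument behind DNN-to-SNN conversion: an integrate-and-fire (IF) neuron driven by a constant input fires at a rate proportional to a ReLU of that input. The timestep count and threshold below are arbitrary illustrative values, not settings from the talk.

```python
# Compare a ReLU activation with the firing rate of a non-leaky
# integrate-and-fire neuron under the same constant drive.

def relu(x):
    return max(0.0, x)

def if_rate(drive, steps=1000, v_thresh=1.0):
    """Firing rate of a non-leaky IF neuron under constant drive."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += drive
        if v >= v_thresh:
            spikes += 1
            v -= v_thresh   # "reset by subtraction" reduces conversion error
    return spikes / steps

for drive in [0.0, 0.25, 0.5, -0.3]:
    print(f"drive={drive:+.2f}  ReLU={relu(drive):.2f}  "
          f"SNN rate={if_rate(drive):.2f}")
```

Because the rate tracks the ReLU output for normalized inputs, a pre-trained DNN's weights can be reused directly, with conversion effort going into weight/threshold normalization and latency reduction.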

Bio: Ramtin Zand is the director of the Intelligent Circuits, Architectures, and Systems (iCAS) Lab at the University of South Carolina, which is collaborating with and supported by several multinational companies including Intel, AMD, and Juniper Networks, as well as local companies such as Van Robotics. He has authored 50+ articles and received recognitions from ACM and IEEE including the best paper runner-up of ACM GLSVLSI’18, best paper of IEEE ISVLSI’21, as well as featured paper in IEEE Transactions on Emerging Topics in Computing. His research focus is on neuromorphic computing and real-time and energy-efficient AI.

From Self-Adaptation to Self-Evolution

Wednesday, January 18, 2023 - 02:20 pm
Swearingen (2A27)

Abstract
Over the past two decades, self-adaptation has become an established field of research. A recent survey also showed that self-adaptation is widely used in industry. In essence, self-adaptation equips a computing system with a feedback loop that enables the system to deal with changes and uncertainties autonomously, reducing the burden of complex operator tasks. In this talk, I will briefly explain the basic principles of self-adaptation. Then I will argue why self-adaptation is not enough to tackle the challenges of future computing systems. To conclude, I will provide a vision of how a computing system may become self-evolvable, opening an exciting area for future research. 
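The feedback loop mentioned above is often organized as Monitor-Analyze-Plan-Execute over shared Knowledge (the MAPE-K reference model). Here is a minimal sketch in that style; the managed "system", the latency goal, and the scale-out plan are toy stand-ins, not examples from the talk.

```python
# A toy MAPE-K loop: scale out a service whenever measured latency
# violates the stated goal in the knowledge base.

class SelfAdaptiveSystem:
    def __init__(self, target_latency_ms=100):
        self.knowledge = {"target": target_latency_ms, "replicas": 1}

    def monitor(self, latency_ms):
        self.knowledge["latency"] = latency_ms

    def analyze(self):
        return self.knowledge["latency"] > self.knowledge["target"]

    def plan(self):
        return {"add_replicas": 1}

    def execute(self, adaptation):
        self.knowledge["replicas"] += adaptation["add_replicas"]

    def loop(self, latency_ms):
        self.monitor(latency_ms)
        if self.analyze():          # adapt only when the goal is violated
            self.execute(self.plan())
        return self.knowledge["replicas"]

system = SelfAdaptiveSystem()
print(system.loop(80))    # within target: no adaptation
print(system.loop(250))   # goal violated: scale out
```

Self-evolution, as framed in the talk, would go beyond this: not just executing a fixed plan repertoire, but changing the loop's own goals and mechanisms over time.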

Speaker's Bio:
Danny Weyns is a professor at the Katholieke Universiteit Leuven in Belgium. He is also affiliated with Linnaeus University in Sweden. Danny’s research interests are in the engineering of self-adaptive systems, with a focus on establishing trustworthiness under uncertainty. To that end, he studies approaches that integrate human-driven design-time activities with system-driven runtime activities. 
 

A Semantic Web Approach to Fault Tolerant Autonomous Manufacturing

Friday, December 16, 2022 - 11:00 am
AI Institute

THESIS DEFENSE 

Author : Fadi El Kalach

Advisor : Dr. Amit Sheth

Date : Dec 16, 2022

Time : 11 am

Place : AI Institute

Meeting Link

Abstract

The next phase of manufacturing is centered on making the switch from traditional automated systems to autonomous ones. Future factories are required to be agile, allowing for more customized production, and resilient to disturbances. Such production lines would have the capability to re-allocate resources as needed and eliminate downtime while keeping up with market demands. These systems must be capable of complex decision making based on parameters such as machine status, sensory data, and inspection results. Current manufacturing lines lack this capability and instead focus on low-level decision making at the machine level, without utilizing the generated data to its full extent. This thesis presents progress towards autonomy by developing a data exchange architecture and introducing Semantic Web capabilities for managing the production line. The architecture consists of three layers: the Equipment Layer includes the industrial assets of the factory, the Shop Floor Layer supports edge analytic capabilities that convert raw sensory data into actionable information, and the Enterprise Layer acts as the hub of all information. Finally, a full autonomous manufacturing use case is developed to showcase the value of the Semantic Web in a manufacturing context. This use case utilizes different data sources to complete a manufacturing process despite malfunctioning equipment. It offers an approach to autonomous manufacturing, not yet fully realized, at the intersection of three paradigms: Smart Manufacturing, Autonomous Manufacturing, and the Semantic Web.
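To give a toy flavor of the Semantic Web idea in this context, the sketch below stores machine status and capabilities as subject-predicate-object triples and queries them to route a job away from a faulty machine. The vocabulary and machine names are invented for illustration and are not the thesis ontology.

```python
# A minimal in-memory triple store with wildcard pattern matching.

triples = {
    ("cnc_1", "hasStatus", "fault"),
    ("cnc_2", "hasStatus", "idle"),
    ("cnc_1", "canPerform", "milling"),
    ("cnc_2", "canPerform", "milling"),
}

def query(subject=None, predicate=None, obj=None):
    """Return triples matching the given pattern (None is a wildcard)."""
    return {
        (s, p, o) for (s, p, o) in triples
        if subject in (None, s) and predicate in (None, p) and obj in (None, o)
    }

# Find a machine that can mill and is not currently faulty.
capable = {s for (s, _, _) in query(predicate="canPerform", obj="milling")}
faulty = {s for (s, _, _) in query(predicate="hasStatus", obj="fault")}
print(sorted(capable - faulty))
```

In a real deployment this role is played by an RDF store and SPARQL queries over a shared ontology, which is what lets the Enterprise Layer reason uniformly over heterogeneous shop-floor data.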

A Benchmark for Brain Network Analysis with Graph Neural Networks

Monday, November 21, 2022 - 10:00 am
Online

Deepa Tilwani will deliver the talk.
 
One of the most common paradigms for neuroimaging analysis is the mapping of the human connectome using structural or functional connectivity. Due to their proven ability to represent complicated networked data, Graph Neural Networks (GNNs), motivated by geometric deep learning, have recently gained a lot of attention. Despite their strong performance in many disciplines, however, the best way to design efficient GNNs for brain network research has not yet been thoroughly studied. This work fills that gap by providing a benchmark for brain network analysis with GNNs: it summarizes the pipelines for building brain networks for both structural and functional neuroimaging modalities and modularizes the execution of GNN designs. Overview Paper.
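For intuition, here is a minimal sketch of one graph-convolution step on a brain network: each region of interest (node) averages the features of its neighbors (plus itself) and applies a weight. The adjacency pattern, features, and scalar weight are toy values, not data from the benchmark.

```python
# One mean-aggregation message-passing layer over a small ROI graph.

def gcn_layer(adj, feats, weight):
    """Mean-aggregate neighbor features (with self-loop), then scale."""
    n = len(adj)
    out = []
    for i in range(n):
        neigh = [j for j in range(n) if adj[i][j] or j == i]
        agg = [
            sum(feats[j][k] for j in neigh) / len(neigh)
            for k in range(len(feats[i]))
        ]
        out.append([weight * v for v in agg])
    return out

# 3 ROIs in a chain 0-1-2, each with a 2-dim feature vector.
adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(gcn_layer(adj, feats, weight=2.0))
```

In the benchmarked pipelines, the adjacency matrix comes from structural or functional connectivity estimates, and stacked layers of this kind produce node or graph embeddings for downstream prediction.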
 
Zoom Link

Meeting ID: 860 1921 3021
Passcode: 12345