In silico reconstruction of the brain

Friday, February 3, 2023 - 02:20 pm
Swearingen (2A27).

Talk Abstract: In a 2009 TED talk, a well-known neuroscientist announced that within 10 years, we would be able to reconstruct the brain in a computer. Almost 15 years later, this moving target still seems out of reach, but a systematic framework has been clearly established. In this seminar presentation, I will discuss how scientists attempt to tackle this grand challenge. After discussing the importance of modeling in science, engineering, and medicine, I will highlight the impressive complexity of the brain, and, for perspective, I will compare it with the complexity of modern CPUs and deep learning models. I will then summarize the approach exemplified by the Blue Brain Project to reconstruct the brain using biophysically-detailed models of neurons. I will conclude with examples of simplification used to simulate a whole -- albeit very simplified -- brain in silico, and give some examples of how such approaches can help in clinical applications and in fundamental neuroscience.  

Bio: Christian O’Reilly received his B.Ing. (electrical eng.; 2007), his M.Sc.A. (biomedical eng.; 2011), and his Ph.D. (biomedical eng.; 2012) from the École Polytechnique de Montréal, where he worked under the mentorship of Prof. R. Plamondon applying pattern recognition and machine learning to predict brain stroke risks. Between 2012 and 2018, he pursued postdoctoral studies at various institutions (Université de Montréal, McGill, EPFL), studying sleep and autism using different approaches from neuroimaging and computational neuroscience. In 2020, he accepted a position as a research associate at McGill, where he studied brain connectivity in autism and related neurodevelopmental disorders. Since 2021, Christian has been an assistant professor in neuroscience and artificial intelligence at the University of South Carolina, where he is a member of the Department of Computer Science and Engineering, the Artificial Intelligence Institute (AIISC), and the Carolina Autism and Neurodevelopment (CAN) research center.

Neuro-Edge: Neuromorphic-Enhanced Edge Computing

Friday, January 27, 2023 - 02:20 pm
Swearingen (2A27).

Talk Abstract: As the technology industry is moving towards implementing machine learning tasks such as natural language processing and image classification on smaller edge computing devices, the demand for more efficient implementations of algorithms and hardware accelerators has become a significant area of research. In recent years, several edge deep learning hardware accelerators have been released that specifically focus on reducing the power and area consumed by deep neural networks (DNNs). On the other hand, spiking neural networks (SNNs) have been shown to achieve substantial power reductions over even the aforementioned edge DNN accelerators when deployed on specialized neuromorphic event-based/asynchronous hardware. While neuromorphic hardware has demonstrated great potential for accelerating deep learning tasks at the edge, the current space of algorithms and hardware is limited and still in rather early development. Thus, many hybrid approaches have been proposed which aim to convert pre-trained DNNs into SNNs. In this talk, we provide a general guide to converting pre-trained DNNs into SNNs while also presenting techniques to improve the deployment of converted SNNs on neuromorphic hardware in terms of latency, power, and energy.
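Most DNN-to-SNN conversion schemes rest on rate coding: an integrate-and-fire (IF) neuron driven by a constant input fires at a rate that approximates the ReLU of that input. Below is a minimal sketch of this principle; the specific neuron model, threshold, and reset-by-subtraction scheme are common illustrative choices, not necessarily the techniques presented in the talk.

```python
def if_neuron_rate(input_current, n_steps=1000, threshold=1.0):
    """Simulate an integrate-and-fire neuron and return its firing rate.

    With a constant input current, the long-run spike rate approximates
    max(0, input_current / threshold) -- the rate-coding principle behind
    most DNN-to-SNN conversion schemes."""
    v = 0.0
    spikes = 0
    for _ in range(n_steps):
        v += input_current          # integrate the input
        if v >= threshold:          # fire, then reset by subtraction
            spikes += 1
            v -= threshold
        v = max(v, 0.0)             # membrane potential never goes negative
    return spikes / n_steps

# The spike rate tracks the ReLU of the input (for inputs in [0, 1]):
for x in (-0.5, 0.0, 0.3, 0.8):
    print(f"input {x:+.1f}  rate {if_neuron_rate(x):.3f}  relu {max(0.0, x):.3f}")
```

With reset by subtraction, the long-run rate equals the positive part of the input (for inputs below the threshold), which is why a trained ReLU network's activations can be mapped onto spike rates after suitable weight normalization.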

Bio: Ramtin Zand is the director of the Intelligent Circuits, Architectures, and Systems (iCAS) Lab at the University of South Carolina, which collaborates with and is supported by several multinational companies, including Intel, AMD, and Juniper Networks, as well as local companies such as Van Robotics. He has authored 50+ articles and received recognition from ACM and IEEE, including the best paper runner-up award at ACM GLSVLSI’18, the best paper award at IEEE ISVLSI’21, and a featured paper in IEEE Transactions on Emerging Topics in Computing. His research focuses on neuromorphic computing and real-time, energy-efficient AI.

From Self-Adaptation to Self-Evolution

Wednesday, January 18, 2023 - 02:20 pm
Swearingen (2A27)

Abstract
Over the past two decades, self-adaptation has become an established field of research. A recent survey also showed that self-adaptation is widely used in industry. In essence, self-adaptation equips a computing system with a feedback loop that enables the system to deal with changes and uncertainties autonomously, reducing the burden of complex operator tasks. In this talk, I will briefly explain the basic principles of self-adaptation. Then I will argue why self-adaptation is not enough to tackle the challenges of future computing systems. To conclude, I will provide a vision of how a computing system may become self-evolvable, opening an exciting area for future research.

Speaker's Bio:
Danny Weyns is a professor at the Katholieke Universiteit Leuven in Belgium. He is also affiliated with Linnaeus University in Sweden. Danny’s research interests are in the engineering of self-adaptive systems, with a focus on establishing trustworthiness under uncertainty. To that end, he studies approaches that integrate human-driven design-time activities with system-driven runtime activities.
 

A Semantic Web Approach to Fault Tolerant Autonomous Manufacturing

Friday, December 16, 2022 - 11:00 am
AI Institute

THESIS DEFENSE 

Author : Fadi El Kalach

Advisor : Dr. Amit Sheth

Date : Dec 16, 2022

Time : 11 am

Place : AI Institute

Meeting Link

Abstract

The next phase of manufacturing is centered on making the switch from traditional automated systems to autonomous ones. Future factories are required to be agile, allowing for more customized production, and resilient to disturbances. Such production lines would have the capability to re-allocate resources as needed and eliminate downtime while keeping up with market demands. These systems must be capable of complex decision making based on different parameters such as machine status, sensory data, and inspection results. Current manufacturing lines lack this capability and instead focus on low-level decision making at the machine level, without utilizing the generated data to its full extent. This thesis presents progress towards autonomy by developing a data exchange architecture and introducing Semantic Web capabilities for managing the production line. The architecture consists of three layers: the Equipment Layer includes the industrial assets of the factory, the Shop Floor Layer supports edge analytic capabilities that convert raw sensory data into actionable information, and the Enterprise Layer acts as the hub of all information. Finally, a fully autonomous manufacturing use case is developed to showcase the value of the Semantic Web in a manufacturing context. This use case utilizes different data sources to complete a manufacturing process despite malfunctioning equipment, providing an approach to autonomous manufacturing not yet fully realized at the intersection of three paradigms: Smart Manufacturing, Autonomous Manufacturing, and the Semantic Web.

A Benchmark for Brain Network Analysis with Graph Neural Networks

Monday, November 21, 2022 - 10:00 am
Online

Deepa Tilwani will deliver the talk.
 
One of the most common paradigms for neuroimaging analysis is the mapping of the human connectome using structural or functional connectivity. Owing to their proven ability to represent complex networked data, Graph Neural Networks (GNNs), motivated by geometric deep learning, have recently gained a lot of attention. Despite their strong performance in many disciplines, how best to design efficient GNNs for brain network research has not yet been thoroughly studied. This work provides a benchmark for brain network analysis with GNNs to fill this gap, summarizing the pipelines for building brain networks from both structural and functional neuroimaging modalities and modularizing the execution of GNN designs. Overview Paper.
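As a small illustration of the functional-connectivity pipeline such benchmarks summarize, a brain network can be built by correlating region-of-interest (ROI) time series and thresholding the result. The threshold value and toy data below are illustrative assumptions, not the benchmark's actual pipeline.

```python
import numpy as np

def functional_connectivity(timeseries, threshold=0.5):
    """Build a brain-network adjacency matrix from ROI time series.

    timeseries: array of shape (n_rois, n_timepoints), e.g. fMRI BOLD
    signals per region of interest. Edges connect region pairs whose
    absolute Pearson correlation exceeds the threshold."""
    corr = np.corrcoef(timeseries)             # (n_rois, n_rois)
    adj = (np.abs(corr) > threshold).astype(int)
    np.fill_diagonal(adj, 0)                   # no self-loops
    return corr, adj

# Toy example: three "regions", two of them strongly correlated.
rng = np.random.default_rng(0)
base = rng.standard_normal(200)
ts = np.stack([base,
               base + 0.1 * rng.standard_normal(200),  # tracks region 0
               rng.standard_normal(200)])              # independent
corr, adj = functional_connectivity(ts)
print(adj)
```

The resulting adjacency matrix (plus node features, e.g. the correlation rows themselves) is the typical input a GNN would consume for downstream prediction.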
 
Zoom Link

Meeting ID: 860 1921 3021
Passcode: 12345

Towards Safe and Trustworthy Cyber-Physical Systems

Friday, November 18, 2022 - 02:20 pm
Online

Virtual Meeting Link

Abstract:
Cyber-physical systems (CPS) are smart systems that include co-engineered, interacting networks of physical and computational components. Prominent examples of CPS include autonomous robots, self-driving cars, smart cities, and medical devices. CPS are increasingly everywhere, providing new capabilities that improve quality of life and transform many critical areas. However, significant challenges are posed in assuring the safety and trustworthiness of CPS. In this talk, I will present some of my recent work to tackle these challenges, such as trust in human-CPS and safety of AI-enabled CPS.

Speaker's Bio: 
Lu Feng is an Assistant Professor in the Department of Computer Science and the Department of Engineering Systems and Environment at the University of Virginia. Previously, she was a postdoctoral fellow at the University of Pennsylvania and received her PhD in Computer Science from the University of Oxford. Her research focuses on assuring the safety and trustworthiness of cyber-physical systems across many application domains, from autonomous robots to smart cities to medical systems. She is a recipient of the NSF CAREER Award.
Webpage: http://www.cs.virginia.edu/~lufeng/
 

Regression analysis of arbitrarily censored data subject to potential left truncation

Friday, November 11, 2022 - 02:30 pm
Storey Innovation Center 1400

Online Meeting Link

Abstract: 
Survival analysis is a branch of statistics that studies time-to-event data, or survival data. The main feature of survival data is that the response variable is only partially observed, subject to censoring and/or truncation arising from the nature of the study design. In this talk, I will briefly discuss different types of survival data and popular semiparametric survival models in the literature. Then I will discuss in detail my recent work on regression analysis of arbitrarily censored data and left-truncated data. The proposed estimation approaches are developed based on EM algorithms and enjoy several nice properties: they are easy to implement, robust to initial values, fast to converge, and provide variance estimates in closed form.
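As a concrete illustration of how censoring enters a likelihood, the contribution of each observation depends on how the event time was observed. The sketch below assumes a simple parametric (exponential) model purely for clarity; the talk's methods are semiparametric and EM-based, which this does not reproduce.

```python
import math

def exp_loglik_contribution(rate, obs):
    """Log-likelihood contribution of one observation under an
    exponential survival model with constant hazard `rate`.

    obs is (kind, a, b):
      ("exact",    t, None)  event observed at time t        -> log f(t)
      ("right",    t, None)  still event-free at time t      -> log S(t)
      ("left",     t, None)  event happened before time t    -> log F(t)
      ("interval", a, b)     event happened in (a, b]        -> log(S(a) - S(b))
    where S(t) = exp(-rate*t), f(t) = rate*exp(-rate*t), F = 1 - S."""
    kind, a, b = obs
    S = lambda t: math.exp(-rate * t)
    if kind == "exact":
        return math.log(rate) - rate * a
    if kind == "right":
        return math.log(S(a))
    if kind == "left":
        return math.log(1.0 - S(a))
    if kind == "interval":
        return math.log(S(a) - S(b))
    raise ValueError(kind)

# One observation of each type; the full log-likelihood is the sum.
data = [("exact", 2.0, None), ("right", 5.0, None),
        ("left", 1.0, None), ("interval", 1.0, 3.0)]
loglik = sum(exp_loglik_contribution(0.4, o) for o in data)
```

Maximizing such a sum over the model parameters (here just `rate`) is what an EM algorithm does indirectly, by treating the unobserved exact event times as missing data.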

Speaker's Bio:
Dr. Lianming Wang is an Associate Professor in the Department of Statistics at the University of South Carolina. His research areas include survival analysis, longitudinal data analysis, categorical data analysis, multivariate analysis, statistical computing, nonparametric and semiparametric modeling, and biomedical applications. His research goal is to develop sound statistical approaches for analyzing complex data with various structures in real-life studies across all fields.
 

Learning Object Detection from Repeated Traversals

Friday, November 4, 2022 - 02:20 pm
Online

Virtual Meeting Link


Abstract:
Recent progress in autonomous driving has been fueled by improvements in machine learning. Ironically, most autonomous vehicles do not learn while they are in operation. If a car is used in the same location multiple times, it will act identically every single time. We propose to leverage and learn from repetition by allowing a neural network to save some of its activations in a geo-referenced database that can be retrieved later on. If a vehicle is used in the same location multiple times, it builds up a rich data set of past network activations that aid object detection in the future. This allows it to recognize objects from afar, when they are perceived by only a few pixels or LiDAR points. We further demonstrate that it is in fact possible to completely bootstrap an object detection classifier based on repetition alone. Our approach has the potential to drastically improve the accuracy and safety of self-driving cars, enable them in sparsely populated areas, and allow them to adapt naturally to their local environment over time.
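The geo-referenced store of past activations can be pictured as a grid-indexed cache keyed by location. The sketch below is a toy rendering of that idea; the cell size, 2-D coordinates, and running-mean aggregation are assumptions for illustration, not the authors' design.

```python
import math
from collections import defaultdict

class GeoFeatureCache:
    """Toy geo-referenced store for past network activations.

    Locations are quantized to grid cells; each cell keeps a running
    mean of the feature vectors observed there on past traversals."""

    def __init__(self, cell_size_m=5.0):
        self.cell_size = cell_size_m
        self.cells = defaultdict(lambda: None)  # cell -> (count, sum vector)

    def _cell(self, x, y):
        return (math.floor(x / self.cell_size), math.floor(y / self.cell_size))

    def add(self, x, y, features):
        """Record the activations produced at (x, y) on this traversal."""
        key = self._cell(x, y)
        entry = self.cells[key]
        if entry is None:
            self.cells[key] = (1, list(features))
        else:
            n, s = entry
            self.cells[key] = (n + 1, [a + b for a, b in zip(s, features)])

    def query(self, x, y):
        """Return the mean past activation for this location, or None."""
        entry = self.cells[self._cell(x, y)]
        if entry is None:
            return None
        n, s = entry
        return [v / n for v in s]

cache = GeoFeatureCache()
cache.add(12.0, 40.0, [1.0, 0.0])   # first traversal
cache.add(13.5, 41.0, [0.0, 1.0])   # second traversal, same 5 m cell
print(cache.query(12.5, 40.5))
```

On a revisit, the detector would concatenate or attend over `query(...)` results alongside its current activations, so distant, sparsely sensed objects inherit evidence from earlier passes.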

Speaker's Bio: 
Kilian Weinberger is a Professor in the Department of Computer Science at Cornell University. He received his Ph.D. from the University of Pennsylvania in Machine Learning and his undergraduate degree in Mathematics and Computing from the University of Oxford. During his career he has won several best paper awards at ICML (2004), CVPR (2004, 2017), AISTATS (2005) and KDD (2014, runner-up award). In 2011 he was awarded the Outstanding AAAI Senior Program Chair Award and in 2012 he received an NSF CAREER award. He was elected co-Program Chair for ICML 2016 and for AAAI 2018 and currently serves as a board member and president-elect of the ICML society. In 2016 he was the recipient of the Daniel M Lazar '29 Excellence in Teaching Award. In 2021 he became a finalist for the Blavatnik National Awards for Young Scientists. Kilian Weinberger's research focuses on Machine Learning and its applications, in particular, metric learning, Gaussian Processes, computer vision, perception for autonomous vehicles, and deep learning. Before joining Cornell University, he was an Associate Professor at Washington University in St. Louis and before that he worked as a research scientist at Yahoo! Research in Santa Clara.
 

Applications of Machine Learning for Improved Patient Selection and Therapy Recommendation

Monday, October 31, 2022 - 06:00 pm
Online

DISSERTATION DEFENSE

Author : Brendan Odigwe

Advisor : Dr. Homayoun Valafar

Date : Oct 31, 2022

Time: 6:00 pm

Place : Meeting Link

Abstract

The public health domain continues to battle illness and a growing need for continuous advancement in clinical care. Individuals experiencing certain conditions undergo tried-and-tested therapies and medications, practices that have become the mainstay and standard of care in clinical medicine. As with all therapies and medications, they do not always work the same way and do not work for everyone. Some treatment regimens come with adverse side effects due to the nature of the medication. This is particularly disappointing when patients must be subjected to such medications without improving their health and quality of life. Aside from the physical toll on patients, there is the economic impact of these therapies on patients, their family members, insurance companies, and even the government. Some life-saving therapies are cost-intensive in addition to requiring risky, invasive procedures. It would therefore be valuable to have more ways of identifying the patients most likely to receive significant benefit from recommended therapies before they are subjected to them. The datasets used in our work varied in size, as did the hypotheses guiding our experiments, and as such, our approach to predictive analysis also varied. We have employed a series of machine learning techniques to create models that can indicate a patient's response pattern to a recommended therapy. To ensure that our approaches are widely applicable, we investigated multiple pressing healthcare problems, namely chronic kidney disease, heart failure, sickle cell anemia, and peripheral arterial disease. These approaches and others like them can positively influence medical decision-making and the administration of intervention procedures, and further the practice of precision medicine.
The approaches and the rules generated provide a means of prioritizing patient data parameters and present an opportunity to extend medical practice and ultimately improve patient outcomes.

Human Activity Recognition (HAR) Using Wearable Sensors and Machine Learning 

Monday, October 31, 2022 - 03:00 pm
2265 Innovation

DISSERTATION DEFENSE 

Author : Chrisogonas Odero Odhiambo 

Advisor : Dr. Homayoun Valafar 

Date : Oct 31, 2022 

Time: 3:00 pm  

Place : 2265 Innovation and Teams

Teams Meeting Link

Abstract 

Humans engage in a wide range of simple and complex activities. Human Activity Recognition (HAR) is typically framed as a classification problem in computer vision and pattern recognition: recognizing various human activities. Recent technological advancements, the miniaturization of electronic devices, and the deployment of cheaper and faster data networks have propelled environments augmented with contextual and real-time information, such as smart homes and smart cities. These context-aware environments, alongside smart wearable sensors, have opened the door to numerous opportunities for adding value and personalized services for citizens. Vision-based and sensor-based HAR find diverse applications in healthcare, surveillance, sports, event analysis, Human-Computer Interaction (HCI), rehabilitation engineering, and occupational science, among others, resulting in significantly improved human safety and quality of life.

Despite being an active research area for decades, HAR still faces challenges in terms of gesture complexity, computational cost on small devices, energy consumption, and data annotation limitations. In this research, we investigate methods to sufficiently characterize and recognize complex human activities, with the aim of improving recognition accuracy, reducing computational cost and energy consumption, and creating a research-grade sensor data repository to advance research and collaboration. This research examines the feasibility of detecting natural human gestures in common daily activities. Specifically, we utilize smartwatch accelerometer sensor data and structured local context attributes, and apply AI algorithms to determine the complex activities of medication-taking, smoking, and eating gestures.

A major part of my work centers around modeling human activity and applying machine learning techniques to automatically detect specific activities using accelerometer data from smartwatches. Our work stands out as the first to model human activity from wearable sensors in a linguistic representation with grammar and syntax, deriving clear semantics for complex activities whose alphabet comprises atomic activities. We apply machine learning to learn and predict complex human activities. I demonstrate the use of one of our unified models to recognize two activities using a smartwatch: medication-taking and smoking.
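As an illustration of the linguistic idea, a complex activity can be written as a pattern over an alphabet of atomic gestures and matched against a classifier's output sequence. The alphabet, symbols, and pattern below are hypothetical stand-ins for illustration, not the grammar developed in the dissertation.

```python
import re

# Hypothetical atomic-gesture alphabet (illustrative only):
#   P = pick up object, R = raise hand to mouth, H = hold, L = lower hand
ATOMIC_TO_SYMBOL = {"pickup": "P", "raise": "R", "hold": "H", "lower": "L"}

# A complex activity as a pattern over atomic gestures: here, one
# medication-taking event = pick up, raise, hold (one or more), lower.
MEDICATION_TAKING = re.compile(r"PRH+L")

def count_events(gesture_sequence, pattern=MEDICATION_TAKING):
    """Map a classifier's atomic-gesture sequence to a symbol string and
    count non-overlapping matches of the complex-activity pattern."""
    symbols = "".join(ATOMIC_TO_SYMBOL[g] for g in gesture_sequence)
    return len(pattern.findall(symbols))

seq = ["pickup", "raise", "hold", "hold", "lower",   # complete event
       "raise", "lower",                             # incomplete (no pickup/hold)
       "pickup", "raise", "hold", "lower"]           # complete event
print(count_events(seq))
```

Separating the atomic-gesture classifier from the grammar means the same low-level model can support many complex activities by swapping in different patterns.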

Another major part of my work addresses the problem of HAR activity misalignment through edge-based computing at the data origination point, leading to improved rapid data annotation, albeit under assumptions of subject fidelity in demarcating gesture start and end sections. Lastly, I propose a theoretical framework for the implementation of a library of shareable human activities. The results of this work can be applied in the implementation of a rich portal of usable human activity models, easily installable on handheld mobile devices such as phones or smart wearables to assist human agents in discerning daily living activities. This is akin to a social media platform of human gestures or capability models. The goal of such a framework is to put the power of HAR into the hands of everyday users and to democratize the service by enabling persons with special skills to share their skills or abilities through downloadable, usable trained models.