Enhancing Relational Database Security with Shuffling

Wednesday, October 25, 2023 - 12:00 pm
Innovation Center, Room 2265

DISSERTATION DEFENSE

Department of Computer Science and Engineering
University of South Carolina
Author: Tieming Geng
Advisor: Dr. Chin-Tser Huang
Date: October 25, 2023
Time: 12 pm
Place: Innovation Center, Room 2265 & Virtual
Meeting Link: Teams

  • Meeting ID: 287 744 722 437
  • Passcode: ZegM7A


Real-time Computing for Cyberphysical Systems

Friday, October 13, 2023 - 02:20 pm
Innovation Center Building 1400

Abstract:

Recently, cyberphysical systems (CPS) have gained significant traction in various engineering fields. One of the challenges for CPS is to develop lightweight, real-time computational models that enable in-situ evaluation and decision-making on mobile, decentralized platforms. This seminar presents multiple research efforts being pursued along this frontier at the Integrated Multiphysics & Systems Engineering Laboratory (iMSEL) at the University of South Carolina (USC). It starts with a fundamental introduction to key methodologies for lightweight, real-time computation in engineering, including reduced order modeling (ROM) and data-driven modeling. Then, the extension of the data-driven method by leveraging recent advances in deep learning will be discussed. Strategies to integrate real-time evaluation and decision-making on edge computing devices, enabling field deployment of CPS, will be presented. Several real-world applications of real-time computing demonstrated by iMSEL to federal agencies, such as design automation, massive data analytics, anomaly detection, system autonomy, and others, will also be presented.
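
As a minimal illustration of the reduced order modeling idea mentioned above (a sketch only, with made-up snapshot data and mode count, not material from the talk), the following Python snippet builds a proper orthogonal decomposition (POD) basis from simulation snapshots and projects a full-order state onto a few dominant modes:

    import numpy as np

    # Hypothetical snapshot matrix: each column is the full-order system state
    # (here 1000 degrees of freedom) recorded at one time instant.
    rng = np.random.default_rng(0)
    snapshots = rng.standard_normal((1000, 200))

    # Proper orthogonal decomposition via the thin SVD of the snapshot matrix.
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)

    # Keep the r leading modes as the reduced basis.
    r = 10
    basis = U[:, :r]                                # reduced basis (1000 x r)
    energy = (s[:r]**2).sum() / (s**2).sum()        # fraction of snapshot energy captured

    # Project a new full-order state into the reduced space and reconstruct it.
    x_full = snapshots[:, 0]
    x_reduced = basis.T @ x_full                    # r coefficients instead of 1000 values
    x_approx = basis @ x_reduced                    # lift back to the full space

    print(f"retained energy: {energy:.2f}, reconstruction error: "
          f"{np.linalg.norm(x_full - x_approx) / np.linalg.norm(x_full):.3f}")

With real simulation data, the leading modes typically capture most of the snapshot energy, which is what makes the reduced model cheap enough for in-situ evaluation on embedded hardware.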

Bio:

Yi Wang is an Associate Professor in mechanical engineering at the University of South Carolina (USC). He completed his Ph.D. at Carnegie Mellon University in 2005 and obtained his B.S. and M.S. from Shanghai Jiaotong University in China in 1998 and 2000, respectively. From 2005 to 2017, he held several positions of increasing responsibility at the CFD Research Corporation (CFDRC) in Huntsville, Alabama. In 2017, he joined the University of South Carolina to start his academic career. His research interests focus on computational and data-enabled science and engineering (CDS&E), including reduced order modeling, large-scale and/or real-time data analytics, system-level simulation, computer vision, and cyberphysical systems and autonomy, with applications in aerospace, naval perception, unmanned systems, manufacturing, and biomedical devices. His research has been sponsored by several federal funding agencies, including DoD, NIH, NASA, and DOT, as well as by industry. He has published over 150 papers in refereed journals and conference proceedings. He is also the recipient of USC's 2021 Research Breakthrough Star Award.

Virtual audience

Robust Underwater State Estimation and Mapping

Wednesday, October 11, 2023 - 03:00 pm
Innovation Center, Room 2277 & Virtual

DISSERTATION DEFENSE

Author: Bharat Joshi
Advisor: Dr. Ioannis Rekleitis
Date: October 11, 2023
Time: 3 pm - 5 pm
Place: Innovation Center, Room 2277 & Virtual

Meeting Link: 

Abstract:

The ocean covers two-thirds of the Earth yet remains relatively unexplored compared to the landmass. Mapping underwater structures is essential for both archaeological and conservation purposes. This dissertation focuses on employing a robot team to map underwater structures using vision-based simultaneous localization and mapping (SLAM). The overarching goal of this research is to create a team of autonomous robots that maps large underwater structures in a coordinated fashion. This requires maintaining an accurate, robust pose estimate of oneself and knowing the relative pose of the other robots in the team. However, the GPS-denied and communication-constrained underwater environment, along with low visibility, poses several challenges for state estimation. This dissertation aims to diagnose the challenges of underwater vision-based state estimation algorithms and provide solutions to improve their robustness and accuracy. Moreover, robust state estimation combined with deep learning-based relative localization forms the backbone for cooperative mapping by a team of robots.

The performance of open-source, state-of-the-art visual-inertial SLAM algorithms is compared in multiple underwater environments to understand the challenges of state estimation underwater. Extensive evaluation showed that consumer-level imaging sensors are ill-equipped to handle challenging underwater image formation, low intensity, and artificial lighting fluctuations. Thus, the GoPro action camera, which captures high-definition video along with synchronized IMU measurements embedded within a single mp4 file, is presented as a substitute. Along with enhanced images, fast sparse map deformation is performed for globally consistent mapping after loop closure. However, in some environments such as underwater caves, it is difficult to perform loop closure due to narrow passages and turbulent flows, resulting in yaw drift over long trajectories. Tightly coupled fusion of high-frequency magnetometer measurements in optimization-based visual-inertial odometry using IMU preintegration is performed, producing a significant reduction in yaw drift. Even with good-quality cameras, there are scenarios during underwater deployments where visual SLAM fails. Robust state estimation is proposed by switching between visual-inertial odometry and a model-based estimator to keep track of the Aqua2 Autonomous Underwater Vehicle (AUV) during underwater operations. Mapping large underwater structures requires cooperative mapping by a team of robots equipped with robust state estimation and capable of relative localization with respect to each other. A deep learning framework is designed for real-time 6D pose estimation of an Aqua2 AUV with respect to the observing camera, trained only on synthetic images. This dissertation combines robust state estimation and accurate relative localization, which together contribute to mapping underwater structures using multiple AUVs.
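
As a rough sketch of the estimator-switching idea described above (the thresholds, class names, and failure criteria here are illustrative assumptions, not the dissertation's actual implementation), the health check below selects between a visual-inertial odometry pose and a model-based fallback:

    from dataclasses import dataclass

    @dataclass
    class VioStatus:
        """Summary of one visual-inertial odometry update (illustrative fields)."""
        tracked_features: int      # features successfully tracked in the last frame
        reprojection_error: float  # mean reprojection error in pixels

    def vio_is_healthy(status: VioStatus,
                       min_features: int = 30,
                       max_error_px: float = 2.0) -> bool:
        """Hypothetical health check: enough tracked features and low error."""
        return (status.tracked_features >= min_features
                and status.reprojection_error <= max_error_px)

    def select_pose(status: VioStatus, vio_pose, model_pose):
        """Use the VIO pose when tracking is reliable; otherwise fall back to a
        model-based estimate (e.g., propagated from the vehicle's motion model)."""
        return vio_pose if vio_is_healthy(status) else model_pose

    # Example: murky water with few tracked features triggers the fallback.
    status = VioStatus(tracked_features=8, reprojection_error=4.5)
    pose = select_pose(status, vio_pose="VIO estimate", model_pose="model estimate")
    print(pose)  # -> model estimate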

Codesigning Computing Systems for Artificial Intelligence

Tuesday, October 10, 2023 - 11:40 am
online

Speakers: Amir Yazdanbakhsh (Google DeepMind), Suvinay Subramanian (Google)

Meeting Link: Teams


Abstract:

The rapid advancement of artificial intelligence (AI) has ushered in an era of unprecedented computational demands, necessitating continuous innovation in computing systems. In this talk, we will highlight how codesign has been a key paradigm in enabling innovative solutions and state-of-the-art performance in Google's AI computing systems, namely Tensor Processing Units (TPUs). We present several codesign case studies across different layers of the stack, spanning hardware, systems, software, and algorithms, all the way up to the datacenter. We discuss how we have made judicious yet opinionated bets in the design of TPUs, and how these design choices have not only kept pace with the blistering rate of change but also enabled many of the breakthroughs in AI.

Bio:

Amir Yazdanbakhsh received his Ph.D. degree in computer science from the Georgia Institute of Technology. His Ph.D. work has been recognized by various awards, including the Microsoft PhD Fellowship and the Qualcomm Innovation Fellowship. Amir is currently a Research Scientist at Google DeepMind, where he is the co-founder and co-lead of the Machine Learning for Computer Architecture team. His work focuses on leveraging recent machine learning methods and advancements to innovate and design better hardware accelerators. He is also interested in designing large-scale distributed systems for training machine learning applications, and he led the development of a massive-scale distributed reinforcement learning system that scales to a TPU Pod and efficiently manages thousands of actors to solve complex, real-world tasks. His team's work has been covered by media outlets including WIRED, ZDNet, Analytics Insight, and InfoQ. Amir was inducted into the ISCA Hall of Fame in 2023.

Suvinay Subramanian is a Staff Software Engineer at Google, where he works on the architecture and codesign for Google's ML supercomputers, Tensor Processing Units (TPUs). His work has directly impacted innovative architecture and systems features in multiple generations of TPUs, and empowered performant training and serving of Google's research and production AI workloads. Suvinay received a Ph.D. from MIT, and a B.Tech from the Indian Institute of Technology Madras. He also co-hosts the Computer Architecture Podcast that spotlights cutting-edge developments in computer architecture and systems.

Designing Quantum Programming Languages with Types

Friday, October 6, 2023 - 02:20 pm
Innovation Center Building 1400

Abstract:
Quantum computing presents many challenges for the programming language community. How can we program quantum algorithms in a way that ensures they behave correctly? In this talk, I will discuss how types can be used to enforce various properties of quantum programs. I will first talk about how linear types and dependent types can be useful for programming quantum circuits. I will then discuss my recent work on designing a type system that enables interaction between quantum circuit generation time and quantum circuit execution time. If time permits, I will sketch how to ensure reversibility and controllability of quantum circuits using types.
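
As a toy illustration of why linearity matters for quantum programs (this is not the speaker's type system, which enforces the discipline statically rather than at runtime), the Python sketch below treats each qubit handle as use-once: applying a gate consumes the old handle and returns a new one, and reusing a consumed handle is rejected:

    class LinearityError(Exception):
        pass

    class Qubit:
        """A qubit handle that may be used at most once (a runtime approximation
        of a linear type; a real quantum language would reject reuse statically)."""
        def __init__(self, wire: int):
            self.wire = wire
            self._consumed = False

        def _consume(self) -> int:
            if self._consumed:
                raise LinearityError(f"qubit on wire {self.wire} was already used")
            self._consumed = True
            return self.wire

    def hadamard(q: Qubit) -> Qubit:
        wire = q._consume()              # consume the old handle ...
        print(f"H on wire {wire}")
        return Qubit(wire)               # ... and return a fresh one

    q0 = Qubit(0)
    q1 = hadamard(q0)    # fine: q0 is consumed, q1 is the new handle
    try:
        hadamard(q0)     # reuse of q0 violates linearity (and, physically, no-cloning)
    except LinearityError as err:
        print("rejected:", err)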

Bio:
Frank (Peng) Fu is an assistant professor in the Computer Science and Engineering Department at the University of South Carolina. Previously, he was a postdoctoral researcher at Dalhousie University in Canada. He obtained his Ph.D. degree from the University of Iowa. His research interests are in quantum programming languages, type theory, and their applications.

Location: In-person (Innovation Center Building 1400) & Virtual

Towards Automotive Radar Networks for Enhanced Detection/Cognition

Friday, September 22, 2023 - 02:20 pm
Innovation Center, Room 1400

SUMMARY: This talk will present an overview of recent research at the UW FUNLab on the use of vehicular radar for advanced driver assistance systems (en route to a future vision of autonomous driving). Wideband (typically FMCW, or chirp) radars are increasingly deployed onboard vehicles as key high-resolution sensors for environmental mapping or imaging and various safety features; a brief worked ranging example follows the list below. The talk is divided into two parts, centered on the evolving role of radar ‘cognition’ in complex operating environments, addressing two important future challenges:

  1. Mitigating multi-access interference among radars (e.g., in dense traffic scenarios).
    The talk will first illustrate the impact of mutual interference on detection performance in commercial chirp/FMCW radars and then highlight some multi-access protocol design approaches for effective resource sharing among multiple radars.
  2. Contributions to radar vision via new radar hardware (MIMO radar) combined with advanced signal processing (synthetic aperture) principles, using convolutional neural network (‘Radar Net’) based machine learning approaches for enhanced object detection/classification in challenging circumstances.
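
For readers unfamiliar with chirp radar, the sketch below shows the textbook FMCW ranging relationship assumed here: the beat frequency between the transmitted and received chirps is proportional to the round-trip delay, so range is R = c * f_b * T / (2 * B) and range resolution is c / (2 * B). The parameter values are illustrative, not taken from the talk:

    C = 3.0e8  # speed of light (m/s)

    def fmcw_range(beat_freq_hz: float, chirp_duration_s: float, bandwidth_hz: float) -> float:
        """Range from the beat frequency of an FMCW chirp: R = c * f_b * T / (2 * B)."""
        return C * beat_freq_hz * chirp_duration_s / (2.0 * bandwidth_hz)

    def range_resolution(bandwidth_hz: float) -> float:
        """Classic FMCW range resolution: dR = c / (2 * B)."""
        return C / (2.0 * bandwidth_hz)

    # Illustrative automotive parameters: a 4 GHz sweep over 40 microseconds.
    B, T = 4.0e9, 40e-6
    print(f"range resolution: {range_resolution(B) * 100:.1f} cm")
    print(f"target at beat frequency 2 MHz: {fmcw_range(2.0e6, T, B):.1f} m")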

Trustworthy Artificial Intelligence Using Knowledge-powered CREST Framework

Friday, September 15, 2023 - 02:20 pm
Innovation Center Building 1400

Abstract

Large Language Models (LLMs) have garnered significant attention from researchers, including clinicians, due to their ability to respond to various human queries. Innovations like ChatGPT's groundbreaking reinforcement learning with human feedback and Google's domain-specific fine-tuning in Med-PaLM have introduced two potent information-providing platforms for general health inquiries. The 2023 Gartner Hype Curve places such LLMs at the pinnacle, foreseeing translational impact in the next 2-3 years. This foresight is grounded in comprehensive assessments of recent studies that have illuminated the limitations of these LLMs.

The remarkable potential of these LLMs, when fortified with features like human-level explainability, consistency, reliability, and safety, holds the promise of making deployable systems usable and readily adaptable to various scenarios where human lives may be affected. The talk will introduce a suite of methodologies (methods + metrics) under the Knowledge-powered CREST Framework for LLMs. This practical approach harnesses declarative, procedural, and graph-based knowledge within a neurosymbolic framework to shed light on the challenges associated with LLMs.

Bio

Manas Gaur is an assistant professor in the Department of Computer Science and Electrical Engineering at the University of Maryland, Baltimore County (UMBC). At UMBC, he leads the Knowledge-infused AI and Inference (KAI2) lab. Before entering academia, he was the lead research scientist in Natural Language Processing (NLP) at the AI Center within Samsung Research America. He also held a visiting researcher role at the Alan Turing Institute. Dr. Gaur earned his Ph.D. under the guidance of Prof. Amit P. Sheth at the Artificial Intelligence Institute, University of South Carolina. Together, they played a pivotal role in the development of Knowledge-infused Learning, a paradigm that harmonizes seamlessly with NeuroSymbolic AI. He was recognized as an AAAI New Faculty for 2023 and is currently an advisor to Balm.ai, a mental health startup. More details about him are available at https://manasgaur.github.io/

Location: In-person (Innovation Center Building 1400) & Online

Resource-Aware Approximate Dynamic Programming and Reinforcement Learning for Optimal Control of Dynamic Cyber-Physical Systems

Friday, September 8, 2023 - 02:30 pm
Online

Abstract
The “curse of dimensionality” faced by dynamic programming-based control approaches for dynamic systems or agents with large state and action spaces led to the development of approximate dynamic programming (ADP). Approximate dynamic programming unifies the theory of optimal control, adaptive control, and reinforcement learning (RL) to obtain an approximate solution to the Bellman equation online and forward in time. In general, the value function, which is the solution to the Bellman equation in a discrete-time framework or to the Hamilton-Jacobi-Bellman (HJB) equation in a continuous-time framework, is approximated using a neural network-based approximator. The learning/adaptive nature of the solution often partially or fully relaxes the assumption of complete system information, which leads to optimal decision-making in uncertain or unknown environments. This presentation will traverse the evolution of ADP/RL-based optimal control designs for dynamic cyber-physical systems, moving from traditional iterative solutions to those that emphasize time-based solutions. Specifically, there will be a focus on the computation- and communication-saving aspects of ADP/RL-based designs. The resource-aware ADP scheme, referred to as event-driven ADP, using Q-learning and temporal difference learning approaches will be discussed in detail. The event-driven approaches train the neural network approximators and update the actions only at certain events, thereby considerably reducing the computational and communication requirements for implementing learning-based control schemes over a communication network. Concluding this presentation, we will probe some of the unresolved challenges of ADP/RL schemes, emphasizing their potential vulnerabilities in a cyber-physical framework.
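
As a simplified illustration of the event-driven idea (a toy tabular sketch under assumed parameters, not the speaker's actual neural-network-based scheme), the snippet below performs a Q-learning update and re-selects the action only when the state has drifted past a threshold since the last event:

    import numpy as np

    rng = np.random.default_rng(1)
    n_states, n_actions = 20, 4
    Q = np.zeros((n_states, n_actions))
    alpha, gamma, event_threshold = 0.1, 0.95, 2   # hypothetical parameters

    def step(state, action):
        """Placeholder environment: a random walk with a reward at the last state
        (the chosen action is ignored in this toy model)."""
        next_state = int(np.clip(state + rng.integers(-1, 2), 0, n_states - 1))
        reward = 1.0 if next_state == n_states - 1 else 0.0
        return next_state, reward

    state = 0
    last_event_state = state
    action = int(Q[state].argmax())
    updates = 0

    for _ in range(1000):
        next_state, reward = step(state, action)
        # Event trigger: only learn and re-plan when the state has drifted enough
        # since the last event, instead of at every sampling instant.
        if abs(next_state - last_event_state) >= event_threshold:
            td_target = reward + gamma * Q[next_state].max()
            Q[state, action] += alpha * (td_target - Q[state, action])
            action = int(Q[next_state].argmax())
            last_event_state = next_state
            updates += 1
        state = next_state

    print(f"events (updates) used: {updates} out of 1000 steps")

In a networked control setting, skipping the non-event instants means the controller node neither recomputes nor transmits a new action at those times, which is the source of the computation and communication savings.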


Bio
Avimanyu Sahoo received his Ph.D. in Electrical Engineering from the Missouri University of Science and Technology, Rolla, MO, USA, in 2015 and a Master of Technology (M.Tech.) from the Indian Institute of Technology (BHU), Varanasi, India, in 2011. He is currently an Assistant Professor in the Electrical and Computer Engineering Department at the University of Alabama in Huntsville (UAH), AL. Prior to joining UAH, Dr. Sahoo was an Associate Professor in the Division of Engineering Technology at Oklahoma State University, Stillwater, OK.

Dr. Sahoo’s research interests include learning-based control and its applications in lithium-ion battery pack modeling, diagnostics, prognostics, cyber-physical systems, and electric machinery health monitoring. Currently, his research focuses on developing intelligent battery management systems (BMS) for lithium-ion battery packs used onboard electric vehicles, as well as computation- and communication-efficient distributed intelligent control schemes for cyber-physical systems using approximate dynamic programming, reinforcement learning, and distributed adaptive state estimation.


Link