Multi-Robot Coordination with Environmental Disturbances

Wednesday, April 7, 2021 - 12:00 pm
DISSERTATION DEFENSE

Department of Computer Science and Engineering, University of South Carolina

Author : Adem Coskun
Advisor : Dr. Marco Valtorta
Date : April 7, 2021
Time : 12:00 pm - 02:00 pm
Place : Virtual Defense (link below)
Link : Microsoft Teams: https://teams.microsoft.com/l/meetup-join/19%3ameeting_YTJkMzc0MjEtNDg3…

Abstract

Multi-robot systems are increasingly deployed in environments where they interact with humans. From the perspective of a robot, such interaction can be considered a disturbance that causes a well-planned trajectory to fail. This dissertation addresses the problem of multi-robot coordination in scenarios where the robots may experience unexpected delays in their movements. Prior work by Cap, Gregoire, and Frazzoli introduced a control law, called RMTRACK, which enables robots in such scenarios to execute pre-planned paths in spite of disturbances that affect the execution speed of each robot, while guaranteeing that each robot reaches its goal without collisions and without deadlocks. We extend that approach to handle scenarios in which the disturbance probabilities are unknown when execution starts and are non-uniform across the environment. The key idea is to "repair" a plan on the fly by swapping the order in which a pair of robots passes through a mutual collision region (i.e., a coordination-space obstacle) whenever making such a change is expected to improve the overall performance of the system. We introduce a technique based on Gaussian processes to estimate future disturbances, and propose two algorithms for testing, at appropriate times, whether swapping a given obstacle would be beneficial. Tests in simulation demonstrate that our algorithms achieve significantly smaller average travel time than RMTRACK at only a modest computational expense. However, deadlock may arise when rearranging the order in which robots pass through collision regions and other obstacles.
We provide a precise definition of deadlock using a graphical representation and prove some of its important properties. We show how to exploit the representation to detect the possibility of deadlock and to characterize conditions under which deadlock may not occur. We provide experiments in simulated environments that illustrate the potential usefulness of our theory of deadlock.
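The abstract mentions a Gaussian-process-based estimator of future disturbances but does not give its formulation. As a rough, hypothetical sketch of the general idea (the RBF kernel, length scale, noise level, and the 2-D location inputs are all assumptions, not the dissertation's actual model), a zero-mean GP posterior mean over observed disturbance rates could look like:

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0, variance=1.0):
    """Squared-exponential kernel between two sets of 2-D locations."""
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior_mean(X_obs, y_obs, X_query, noise=1e-2, **kw):
    """Posterior mean of a zero-mean GP given observed disturbance rates."""
    K = rbf_kernel(X_obs, X_obs, **kw) + noise * np.eye(len(X_obs))
    K_s = rbf_kernel(X_query, X_obs, **kw)
    return K_s @ np.linalg.solve(K, y_obs)

# Observed disturbance rates at a few locations along a robot's path
X_obs = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
y_obs = np.array([0.1, 0.5, 0.2])

# Predict the disturbance rate at an unvisited location
pred = gp_posterior_mean(X_obs, y_obs, np.array([[1.5, 0.0]]))
```

Such a predictor would let the planner compare the expected cost of each pass ordering before deciding whether to swap a coordination-space obstacle.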

Computational and Causal Approaches on Social Media and Multimodal Sensing Data: Examining Wellbeing in Situated Contexts

Friday, April 2, 2021 - 11:00 am
Time: Apr 2, 2021 11:00 AM Eastern Time (US and Canada)
https://zoom.us/j/8440139296?pwd=MU9EWkJ5VEMyNzlneHI3Q1NTN0JxZz09

Abstract: A core aspect of our social lives is often embedded in the communities we are situated in, such as our workplaces, neighborhoods, localities, and school/college campuses. The inter-connectedness and inter-dependencies of our interactions, experiences, and concerns intertwine our situated context with our wellbeing. A better understanding of our wellbeing and psychosocial dynamics will help us devise proactive and tailored support strategies. However, existing methodologies for assessing wellbeing suffer from limitations of scale and timeliness. In parallel, given its ubiquity and widespread use, social media can be considered a "passive sensor" that acts as a complementary source of unobtrusive, real-time, and naturalistic data for inferring wellbeing. In this talk, Koustuv Saha, from Georgia Tech, will present computational and causal approaches for leveraging social media in concert with complementary multimodal data to examine wellbeing. He will show how theory-driven computational methods can be applied to unique social media and complementary multimodal data to capture attributes of human behavior and psychosocial dynamics in situated communities, particularly college campuses and workplaces. Further, he will dive deep into drawing meaning from online inferences about offline metrics. Finally, the talk will advance a vision of human-centered technologies tailored to situations, demands, and needs: facilitating technology-supported remote functioning, evaluating the prospective utility of social platforms for wellbeing, and understanding the harms and benefits of computational and data-driven assessments.

Bio: Koustuv Saha is a doctoral candidate in Computer Science in the School of Interactive Computing at Georgia Tech.
His research interests are in Social Computing and Computational Social Science. In his research, he adopts machine learning, natural language analysis, and causal inference to examine human behavior and wellbeing using social media and online data, along with complementary multimodal sensing data. His work has been published at several prestigious venues, including CHI, CSCW, ICWSM, IMWUT, JMIR, TBM, ACII, FAT*, PervasiveHealth, and WebSci. He is a Foley Scholar, a recipient of the Foley Scholarship Award, the GVU Center's highest recognition for student excellence in research contributions to computing. He is also a recipient of the Snap Research Fellowship and a finalist for the Symantec Graduate Fellowship, and his research won the Outstanding Study Design Award at ICWSM 2019. His research has been covered by media outlets including the New York Times, CBC Radio, NBC, 11Alive, the Hill, and the Commonwealth Times. During his Ph.D., he has held research internships at Snap Research, Microsoft Research, the Max Planck Institute, and Fred Hutch Cancer Research. Earlier, he completed his B.Tech (Hons.) in Computer Science and Engineering at the Indian Institute of Technology (IIT) Kharagpur. He was also awarded the NTSE Scholarship by the Govt. of India, and he has five years of overall industry research experience. More about Koustuv can be found at https://koustuv.com.

Regularized Deep Network Learning for Multi-label Visual Recognition

Wednesday, March 31, 2021 - 10:00 am
DISSERTATION DEFENSE

Department of Computer Science and Engineering, University of South Carolina

Author : Hao Guo
Advisor : Dr. Song Wang
Date : March 31, 2021
Time : 10:00 am - 12:00 pm
Place : Virtual Defense (link below)
Link : https://bluejeans.com/140824998

Abstract

This dissertation focuses on multi-label visual recognition, a fundamental task of computer vision. The task is to tell the presence of multiple visual classes in an input image, where the visual classes, such as objects, scenes, and attributes, are usually defined as image labels. Thanks to the success of deep networks, this task has been widely studied and significantly improved in recent years. However, it remains challenging due to the appearance complexity of multiple visual contents co-occurring in one image. This research explores regularizing deep network learning for multi-label visual recognition. First, an attention concentration method is proposed to refine deep network learning for human attribute recognition, a challenging instance of multi-label visual recognition. Here the visual attention of deep networks, in the form of attention maps, imitates human attention in visual recognition. Derived from the deep network with only label-level supervision, attention maps highlight the regions that contribute most to the final network prediction. Based on the observation that human attributes are usually depicted by local image regions, the added attention concentration enhances deep network learning for human attribute recognition by forcing recognition onto compact attribute-relevant regions. Second, inspired by the consistent relevance between a visual class and an image region, an attention consistency strategy is explored and enforced during deep network learning for human attribute recognition.
Specifically, two kinds of attention consistency are studied in this dissertation: equivariance under spatial transforms, such as flipping, scaling, and rotation, and invariance between different networks recognizing the same attribute in the same image. These two kinds of attention consistency are formulated as a unified attention consistency loss and combined with the traditional classification loss for network learning. Experiments on public datasets verify its effectiveness, achieving new state-of-the-art performance for human attribute recognition. Finally, to address the long-tailed category distribution of multi-label visual recognition, collaborative learning between uniform and re-balanced sampling is proposed to regularize network training. While uniform sampling leads to relatively low performance on tail categories, re-balanced sampling can improve performance on tail classes but may also hurt performance on head classes due to label co-occurrence. This research proposes a new approach that trains on both class-biased samplings collaboratively, improving performance for both head and tail classes. Based on a two-branch network taking the uniform sampling and the re-balanced sampling as inputs, respectively, a cross-branch loss enforces consistency when the same input goes through the two branches. The experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art methods on long-tailed multi-label visual recognition.
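The abstract describes a combined classification and attention consistency loss but does not give its exact formulation. As a minimal illustrative sketch (the MSE penalty, the weight `lam`, and the map shapes are assumptions), flip equivariance between an image's attention map and the re-aligned attention map of its horizontal flip could be encoded as:

```python
import numpy as np

def flip_equivariance_loss(attn_orig, attn_flipped):
    """Penalize disagreement between the attention map of the original
    image and the (un-flipped) attention map of its horizontal flip."""
    # Undo the horizontal flip so both maps live in the same coordinates.
    realigned = attn_flipped[:, ::-1]
    return np.mean((attn_orig - realigned) ** 2)

def total_loss(cls_loss, attn_orig, attn_flipped, lam=0.5):
    """Classification loss plus a weighted attention-consistency term."""
    return cls_loss + lam * flip_equivariance_loss(attn_orig, attn_flipped)

# A perfectly equivariant network incurs zero consistency penalty.
a = np.random.rand(7, 7)
zero_penalty = flip_equivariance_loss(a, a[:, ::-1])
```

Scaling and rotation consistency would follow the same pattern, with the inverse transform applied before comparing maps.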

Deep Learning Based Sound Event Detection and Classification

Monday, March 29, 2021 - 01:00 pm
DISSERTATION DEFENSE

Department of Computer Science and Engineering, University of South Carolina

Author : Alireza Nasiri
Advisor : Dr. Jianjun Hu
Date : March 29, 2021
Time : 1:00 - 3:00 pm
Place : Virtual Defense (link below)
Zoom link: https://us02web.zoom.us/j/83125251774?pwd=NDEwK3M4b3NuT0djQ25BMlQ2cGtuZ…

Abstract

The sense of hearing plays an important role in our daily lives. In recent years, there have been many studies on transferring this capability to computers. In this dissertation, we design and implement deep learning based algorithms to improve the ability of computers to recognize different sound events. In the first topic, we investigate sound event detection, which identifies the time boundaries of sound events in addition to the type of each event. For sound event detection, we propose a new method, AudioMask, that benefits from object-detection techniques in computer vision. In this method, we convert the problem of identifying time boundaries of sound events into the problem of identifying objects in images, by treating the spectrograms of the sound as images. AudioMask first applies Mask R-CNN, an algorithm for detecting objects in images, to the log-scaled mel-spectrograms of the sound files. Then we use a frame-based sound event classifier, trained independently from Mask R-CNN, to analyze each individual frame in the candidate segments. Our experiments show that this approach has promising results and can successfully identify the exact time boundaries of sound events. In the second topic, we present SoundCLR, a supervised contrastive learning based method for environmental sound classification with state-of-the-art performance, which works by learning representations that disentangle the samples of each class from those of other classes. We also exploit transfer learning and strong data augmentation to improve the results.
Our extensive benchmark experiments show that our hybrid deep network models, trained with a combined contrastive and cross-entropy loss, achieve state-of-the-art performance on three benchmark datasets, ESC-10, ESC-50, and US8K, with validation accuracies of 99.75%, 93.4%, and 86.49%, respectively. The ensemble version of our models also outperforms other top ensemble methods. Finally, we analyze the acoustic emissions generated during the degradation process of SiC composites. The aim here is to identify the state of degradation in the material by classifying its emitted acoustic signals. As our baseline, we use the random forest method on expert-defined features. We also propose a deep neural network of convolutional layers to identify patterns in the raw sound signals. Our experiments show that both methods reliably identify the degradation state of the composite, and that, on average, the convolutional model significantly outperforms the random forest technique.
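The abstract does not specify SoundCLR's exact hybrid loss. As an illustrative sketch only, a supervised contrastive term in the style of Khosla et al. (which would be added to the usual cross-entropy loss; the temperature `tau` and the toy embeddings are assumptions) can be written as:

```python
import numpy as np

def sup_con_loss(z, labels, tau=0.07):
    """Supervised contrastive loss: pull same-class embeddings together,
    push different-class embeddings apart."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize
    sim = z @ z.T / tau                                # pairwise similarities
    n = len(labels)
    loss = 0.0
    for i in range(n):
        pos = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not pos:
            continue
        denom = np.sum([np.exp(sim[i, j]) for j in range(n) if j != i])
        loss += -np.mean([np.log(np.exp(sim[i, j]) / denom) for j in pos])
    return loss / n

# Tight same-class clusters yield a lower loss than scrambled labels.
z = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])
loss_clustered = sup_con_loss(z, [0, 0, 1, 1])
loss_scrambled = sup_con_loss(z, [0, 1, 0, 1])
```

The "disentangling" the abstract describes corresponds to driving this loss down: same-class samples collapse together on the unit sphere while different classes separate.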

From the Lab to Community: AI for Document Understanding and Public Health

Monday, March 29, 2021 - 11:00 am
Topic: Seminar: Muhammad Rahman
Time: Mar 29, 2021 11:00 AM Eastern Time (US and Canada)
Join Zoom Meeting: https://zoom.us/j/97536411087?pwd=RExTdkVQcEg4OERFMUJhWm5rQThndz09

Abstract: Artificial intelligence (AI) has made incredible scientific and technological contributions in many areas, including business, healthcare, and psychology. Due to its multidisciplinary nature and its ability to revolutionize, almost every field has started welcoming AI. The last decade has witnessed great progress in AI and machine learning and their applications. In this talk, I will present my work using AI and machine learning to solve interesting research challenges. The first part of my talk will describe an AI-powered framework that I developed for large-document understanding. This research contributed methods for modeling and extracting the logical and semantic structure of electronic documents using machine learning techniques. In the second part of my talk, I will present ongoing work that uses computational technology to design a study measuring the effects of COVID-19 on people with substance use disorders. I will conclude the talk by introducing a few other AI-powered initiatives in mental health, substance use, and addiction that I am currently working on.

Bio: Dr. Muhammad Rahman is a Postdoctoral Researcher at the National Institutes of Health (NIH). Before that, he was a Postdoctoral Fellow in the Center for Language and Speech Processing (CLSP) research lab at Johns Hopkins University. He obtained his Ph.D. in computer science from the University of Maryland, Baltimore County. His research is at the intersection of artificial intelligence (AI), machine learning, natural language processing, mental health, addiction, and public health. Dr. Rahman's current research focuses mostly on real-world applications of advanced AI and machine learning techniques in addiction, mental health, and behavioral psychology.
As part of the NIH, he is working on designing and developing real-time digital intervention techniques to support patients with substance use disorders and mental illness. During his Ph.D., Dr. Rahman worked on large-document understanding, which automatically identifies different sections of documents and understands their purpose within the document. He also had research internships at AT&T Labs and eBay Research, where he worked on large-scale industrial research projects. https://irp.drugabuse.gov/staff-members/muhammad-mahbubur-rahman-ph-d/

Sonar sensing algorithm inspired by the auditory system of big brown bats

Friday, March 26, 2021 - 11:00 am
Friday, March 26 at 11 am
Zoom Meeting details: https://zoom.us/j/95651622905?pwd=M3IxbGY0WWpBUEJnRE5XRmhnRW91UT09

Echolocating animals rely on biosonar for navigation and foraging, operating on lower energy input but achieving higher accuracy compared with engineered sonar. My research focuses on understanding the mechanism of bat biosonar by simulating different acoustic scenes involving vegetation and defining the invariants in foliage echoes that provide the tree-type information bats use as landmarks. Additionally, I have developed a Spectrogram Correlation and Transformation (SCAT) model that simulates the bat's auditory system with a gammatone filterbank, a half-wave rectifier, and a low-pass filter. The SCAT model splits a signal into many parallel frequency channels and maps the acoustic "image" of the target by finding the crossings at each channel with the same threshold. It can estimate the range delay between a sound source and targets, as well as the fine delay between reflecting points within one target: signal delays as short as a few microseconds. Currently, I am expanding the SCAT model with a convolutional neural network for binaural localization of small targets.

Bio: Chen Ming, Ph.D., received a BS and an MS in Mechanical Engineering from Hunan University in China. She then moved to the US to study bioacoustics at Virginia Tech with a focus on foliage echoes in natural environments. After graduation, she joined the Neuroscience department at Brown University as a postdoc, where she has been working on modeling the auditory system of big brown bats and acoustic scene reconstruction as part of a Multidisciplinary University Research Initiative (MURI) Program to inspire advanced Navy sonar designs. Her long-term research goal is to design sonar for small autonomous aerial vehicles and to incorporate AI for precise sensing.
Recently, she was selected as a speaker for the Neuroscience Institute's Rising Star Postdoctoral Seminar Series at the University of Chicago for her research on bioacoustics. Link to her webpage: https://cming8.wixsite.com/mysite
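The SCAT pipeline described above (rectify, low-pass, then find threshold crossings per channel) can be illustrated for a single frequency channel. This is only a toy sketch: the sample rate, window length, threshold, and the moving-average filter standing in for a proper low-pass are all assumptions, not the actual model.

```python
import numpy as np

def envelope(channel_signal, win=32):
    """Half-wave rectify a band-limited channel and smooth it with a
    moving-average low-pass filter, as in a SCAT-style front end."""
    rectified = np.maximum(channel_signal, 0.0)   # half-wave rectifier
    kernel = np.ones(win) / win                   # crude low-pass filter
    return np.convolve(rectified, kernel, mode="same")

def threshold_crossing(env, thresh, fs):
    """Return the time (s) of the first sample exceeding the threshold."""
    idx = np.argmax(env > thresh)
    return idx / fs

# Toy example: the same threshold applied to an emitted pulse and its echo
fs = 250_000                                  # assumed sample rate (Hz)
t = np.arange(0, 0.01, 1 / fs)
pulse = np.sin(2 * np.pi * 40_000 * t) * (t < 0.001)
echo = np.roll(pulse, 500) * 0.5              # echo delayed by 500 samples
delay = threshold_crossing(envelope(echo), 0.05, fs) - \
        threshold_crossing(envelope(pulse), 0.05, fs)
```

Applying the same threshold across many gammatone channels, rather than one, is what lets the full SCAT model resolve the microsecond-scale fine delays between reflecting points.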

Dynamic Learning and Control for Complex Population Systems and Networks

Wednesday, March 24, 2021 - 11:00 am
Systems commonly encountered in diverse scientific domains are complex, highly interconnected, and dynamic. These include processes and mechanisms previously confined to biology, quantum science, social science, etc., which are increasingly studied and analyzed from a systems-theoretic viewpoint. The ability to decode the structural and dynamic information of such dynamical systems using observation (or measurement) data, and the capability to precisely manipulate them, are both essential steps toward enabling their safe and efficient deployment in critical applications. In this talk, I will present some of the emerging learning and control problems associated with dynamic population systems and networks, including data-integrated methods for control synthesis, perturbation-based inference of nonlinear dynamic networks, and moment-based methods for ensemble control. In particular, I will present the bottlenecks associated with these challenging yet critical problems, motivating the need for a synergy between systems and control theory and techniques from artificial intelligence to build novel, mathematically grounded tools that enable systematic solutions to these complex problems. In this context, in the first part of my talk, I will present some recent developments in solving inference problems for decoding the dynamics and connectivity structure of nonlinear dynamical networks. Then, I will present model-agnostic, data-integrated methods for solving optimal control problems associated with complex dynamic population systems such as neural networks and robotic systems.

Bio: Vignesh Narayanan (Member, IEEE) received the B.Tech. degree in Electrical and Electronics Engineering from SASTRA University, Thanjavur, India, the M.Tech. degree with specialization in Control Systems from the National Institute of Technology Kurukshetra, Haryana, India, in 2012 and 2014, respectively, and the Ph.D. degree from the Missouri University of Science and Technology, Rolla, MO, USA, in 2017. He joined the Applied Mathematics Lab and the Brain Dynamics and Control Research Group in the Dept. of Electrical and Systems Engineering at Washington University in St. Louis, where he is currently working as a postdoctoral research associate. His current research interests include learning and adaptation in dynamic population systems, complex dynamic networks, reinforcement learning, and computational neuroscience.

Wednesday, March 24 at 11 am
https://zoom.us/j/95594086334?pwd=dlJvdmhhOENOZE9qY1dhM1g4SmVPUT09
Meeting ID: 955 9408 6334
Passcode: 1928

Towards Machine Learning-Driven Precision Mental Health

Monday, March 22, 2021 - 12:00 pm
Topic: Dr. Wei Wu's Seminar
Time: Mar 22, 2021 12:00 PM Eastern Time (US and Canada)

Dr. Wei Wu is a candidate for the AI-Neuroscience Faculty position.

Abstract: Psychiatric disorders are a major cause of the global burden of disease, affecting more than 1 billion people worldwide. Current psychiatric diagnoses are defined based on constellations of symptoms. However, patients with identical diagnoses may in fact fall into biologically heterogeneous subgroups, each of which may require a different therapy. Yet to date, we still lack validated neurobiological biomarkers that can reliably dissect such heterogeneity and allow us to objectively diagnose and treat psychiatric disorders. In this talk, I will present our recent discoveries of EEG biomarkers for dissecting the biological heterogeneity of psychiatric disorders, enabled by tailored machine learning methods for decoding disease-relevant information from EEG. These biomarkers can also be leveraged to drive therapeutic development using brain stimulation tools. Our findings therefore lay a path towards machine-learning-driven personalized treatment of psychiatric disorders and have the potential to be translated to the clinic as point-of-care biological tests.

Short Bio: Wei Wu is the Co-Founder and Chief Technology Officer of Alto Neuroscience, Inc., Los Altos, CA. He is also an Instructor affiliated with the Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA. He received the Ph.D. degree in Biomedical Engineering from Tsinghua University, Beijing, China, in 2012. From 2012 to 2016, he was an Associate Professor with the School of Automation Science and Engineering, South China University of Technology, Guangzhou, China. His research interests include computational psychiatry, brain signal processing, neural engineering, and brain stimulation. Dr. Wu is an IEEE Senior Member, an Associate Editor of Neural Processing Letters and Frontiers in Computational Neuroscience, and served as an Associate Editor of Neurocomputing from 2013 to 2019. He is also a member of the IEEE Biomedical Signal Processing Technical Committee.

Homepage: https://weiwuneuro.wixsite.com/home
Zoom Meeting details: https://zoom.us/j/99931576592?pwd=dGZEQlJ1NzNjeWVXWXd2SDlvQ2ZLQT09

Deep Learning Based Models for Classification: From Natural Language Processing to Computer Vision

Friday, February 12, 2021 - 02:30 pm
Online

DISSERTATION DEFENSE

Department of Computer Science and Engineering, University of South Carolina

Author : Xianshan Qu

Advisor : Dr. John Rose

Date : Feb 12, 2021

Time : 02:30 pm

Place : Virtual Defense (link below)

Please use the following link to participate in my defense (scheduled for Friday, Feb 12th, 2:30 pm - 4:30 pm EST): https://zoom.us/j/98430235673?pwd=SlNLSU9TOVJ4c29nZHU2cytkTEZHQT09

Abstract

With the availability of large-scale data sets, researchers in many different areas, such as natural language processing, computer vision, and recommender systems, have started making use of deep learning models and have achieved great progress in recent years. In this dissertation, we study three important classification problems based on deep learning models. First, with the fast growth of e-commerce, more people choose to purchase products online and browse reviews before making decisions. It is therefore essential to build a model that identifies helpful reviews automatically. Our work is inspired by the observation that a customer's expectation of a review can be greatly affected by review sentiment and the degree to which the customer is aware of pertinent product information. To model such customer expectations and capture important information from a review text, we propose a novel neural network that encodes the sentiment of a review through an attention module and introduces a product attention layer that fuses information from both the target product and related products. The results demonstrate that both attention layers contribute to model performance, and that their combination has a synergistic effect. We also evaluate our model as a recommender system using three commonly used metrics: NDCG@10, Precision@10, and Recall@10. Our model outperforms PRH-Net, a state-of-the-art model, on all three of these metrics. Second, real-time bidding (RTB), which features per-impression-level real-time ad auctions, has become a popular practice in today's digital advertising industry. In RTB, click-through rate (CTR) prediction is a fundamental problem for ensuring the success of an ad campaign and boosting revenue. We present a dynamic CTR prediction model designed for the Samsung demand-side platform (DSP). We identify two key technical challenges that have not been fully addressed by existing solutions: the dynamic nature of RTB and user information scarcity.
To address both challenges, we develop a model that effectively captures the dynamic evolution of both users and ads and integrates auxiliary data sources to better model users' preferences. We evaluate our model using a large amount of data collected from the Samsung advertising platform and compare our method against several state-of-the-art methods suitable for real-world deployment. The evaluation results demonstrate the effectiveness of our method and its potential for production. Third, for Highway Performance Monitoring System (HPMS) purposes, the South Carolina Department of Transportation (SCDOT) must provide the Federal Highway Administration (FHWA) with a classification of vehicles. However, due to limited lighting at nighttime, classifying vehicles at night is quite challenging. To solve this problem, we designed three CNN models with different architectures that operate on thermal images. Of these, model 2 achieves the best performance. Based on model 2, to avoid overfitting and further improve performance, we propose two training-test methods based on data augmentation techniques. The experimental results demonstrate that the second training-test method further improves the performance of model 2 with regard to both accuracy and F1-score.
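The sentiment and product attention layers in the first study are described only at a high level. The generic building block behind such layers, softmax attention pooling of token embeddings against a learned query, can be sketched as follows; the query vector, embedding sizes, and toy values here are purely hypothetical, not the dissertation's architecture:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_pool(token_vecs, query):
    """Softmax-weighted pooling of token embeddings against a query vector:
    tokens similar to the query dominate the pooled representation."""
    scores = token_vecs @ query      # one relevance score per token
    weights = softmax(scores)
    return weights @ token_vecs      # weighted sum: pooled review vector

# Toy review of 4 tokens in a 3-d embedding space
tokens = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.], [1., 1., 0.]])
sentiment_query = np.array([1., 0., 0.])   # hypothetical learned query
review_vec = attention_pool(tokens, sentiment_query)
```

A product attention layer would follow the same pattern, with the query derived from target- and related-product representations rather than sentiment.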

Learning How to Search: Generating Effective Test Cases Through Adaptive Fitness Function Selection

Monday, December 21, 2020 - 10:00 am
DISSERTATION DEFENSE

Department of Computer Science and Engineering, University of South Carolina

Author : Hussien Almulla
Advisor : Dr. Gregory Gay
Date : Dec 21, 2020
Time : 10:00 am
Place : Virtual Defense

Abstract

Search-based test generation is guided by feedback from one or more fitness functions, scoring functions that judge solution optimality. Choosing informative fitness functions is crucial to meeting the goals of a tester. Unfortunately, many goals, such as forcing the class-under-test to throw exceptions, increasing test suite diversity, and attaining Strong Mutation Coverage, do not have effective fitness function formulations. We propose that meeting such goals requires treating fitness function identification as a secondary optimization step. An adaptive algorithm that can vary its selection of fitness functions throughout the generation process could maximize goal attainment based on the current population of test suites. To test this hypothesis, we implemented two reinforcement learning algorithms in the EvoSuite framework and used them to dynamically set the fitness functions used during generation for the three goals identified above. We evaluated our framework, EvoSuiteFIT, on a set of real Java faults. EvoSuiteFIT techniques attain significant improvements for two of the three goals, and show small improvements on the third when the number of generations of evolution is fixed. For all goals, EvoSuiteFIT detects faults missed by the other techniques. The ability to adjust fitness functions allows EvoSuiteFIT to make strategic choices that efficiently produce more effective test suites, and examining its choices offers insight into how to attain our testing goals. We find that adaptive fitness function selection (AFFS) is a powerful technique to apply when no effective fitness function exists for a testing goal.
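The abstract does not name the two reinforcement learning algorithms used in EvoSuiteFIT. As a toy illustration of the adaptive-selection idea only (the candidate fitness names, rewards, and epsilon-greedy strategy below are assumptions, not the actual implementation), treating fitness function choice as a multi-armed bandit might look like:

```python
import random

REWARDS = {"branch": 0.2, "exception": 0.8, "output": 0.4}  # simulated feedback

class AdaptiveFitnessSelector:
    """Epsilon-greedy bandit over candidate fitness functions: keep a
    running estimate of how well each one serves the testing goal."""

    def __init__(self, fitness_names, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {name: 0 for name in fitness_names}
        self.values = {name: 0.0 for name in fitness_names}

    def choose(self):
        if random.random() < self.epsilon:            # explore
            return random.choice(list(self.counts))
        return max(self.values, key=self.values.get)  # exploit

    def update(self, name, reward):
        # Running average of observed goal attainment for this function.
        self.counts[name] += 1
        self.values[name] += (reward - self.values[name]) / self.counts[name]

random.seed(0)
sel = AdaptiveFitnessSelector(list(REWARDS))
for name in REWARDS:                 # try every fitness function once
    sel.update(name, REWARDS[name])
for _ in range(200):                 # then adapt across "generations"
    arm = sel.choose()
    sel.update(arm, REWARDS[arm])
```

In a real search-based generator, the reward would be measured goal attainment (e.g. exceptions triggered by the current population) rather than a fixed table.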