Women in Computing Professional Development Meeting

Wednesday, September 11, 2019 - 06:00 pm
Room 1400, IBM Innovation Center/Horizon 2
Main agenda: We will be sharing tips on finding and applying for internships, interacting with recruiters, keeping resumes and LinkedIn profiles up to date, and strategies for preparing for technical interviews. Bring your resume along if you're interested in getting it reviewed! (The Innovation Center is the building next to the Strom Thurmond Fitness Center with the IBM logo on the side.)

Women in Computing Welcome Meeting

Wednesday, September 4, 2019 - 06:00 pm
Room 2277, IBM Innovation Center/Horizon 2
Women in Computing will hold its welcome meeting in Room 2277 of the IBM Innovation Center/Horizon 2 (the building next to the Strom Thurmond Fitness Center with the IBM logo on the side). Main agenda: administrative business and elections.

Learning Discriminative Features for Facial Expression Recognition

Wednesday, August 28, 2019 - 09:30 am
Seminar Room 2277, Innovation Center
DISSERTATION DEFENSE
Department of Computer Science and Engineering, University of South Carolina
Author: Jie Cai
Advisor: Dr. Yan Tong
Date: August 28, 2019
Time: 9:30 am
Place: Seminar Room 2277, Innovation Center

Abstract
Over the past few years, deep learning methods, e.g., Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs), have shown promise on facial expression recognition. However, performance degrades dramatically, especially in close-to-real-world settings, due to high intra-class variations and high inter-class similarities introduced by subtle facial appearance changes, head pose variations, illumination changes, occlusions, and identity-related attributes, e.g., age, race, and gender. In this work, we developed two novel CNN frameworks and one novel GAN approach to learn discriminative features for facial expression recognition.

First, a novel island loss is proposed to enhance the discriminative power of learned deep features. Specifically, the island loss is designed to reduce the intra-class variations while simultaneously enlarging the inter-class differences. Experimental results on two posed facial expression datasets and, more importantly, two spontaneous facial expression datasets show that the proposed island loss outperforms baseline CNNs with the traditional softmax loss or the center loss, and achieves better or at least comparable performance compared with state-of-the-art methods.

Second, we proposed a novel Probabilistic Attribute Tree-CNN (PAT-CNN) to explicitly deal with the large intra-class variations caused by identity-related attributes. Specifically, a novel PAT module with an associated PAT loss was proposed to learn features in a hierarchical tree structure organized according to identity-related attributes, so that the final features are less affected by those attributes.
We further proposed a semi-supervised strategy to learn the PAT-CNN from limited attribute-annotated samples, making the best use of available data. Experimental results on three posed facial expression datasets as well as three spontaneous facial expression datasets demonstrate that the proposed PAT-CNN achieves the best performance compared with state-of-the-art methods by explicitly modeling attributes. Impressively, the PAT-CNN using a single model achieves the best performance on the SFEW test dataset, compared with state-of-the-art methods using an ensemble of hundreds of CNNs.

Last, we present a novel Identity-Free conditional Generative Adversarial Network (IF-GAN) to explicitly reduce the high inter-subject variations caused by identity-related attributes. Specifically, for any given input facial expression image, a conditional generative model was developed to transform it to an "average" identity expressive face with the same expression as the input. Since the generated images share the same synthetic "average" identity, they differ from each other only in the displayed expressions and thus can be used for identity-free facial expression classification. Experimental results on three well-known facial expression datasets demonstrate that the proposed IF-GAN outperforms the baseline CNN model and achieves the best or at least comparable performance compared with state-of-the-art methods.
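The island-loss idea from the first contribution — pull features toward their class centers while pushing different class centers apart — can be sketched in a few lines. This is a minimal NumPy sketch; the function name, the `lam` weight, and the exact pairwise penalty are illustrative, not the dissertation's formulation.

```python
import numpy as np

def island_loss(features, labels, centers, lam=10.0):
    """Sketch of an island-loss-style objective: a center-loss term pulls each
    feature toward its class center, and a pairwise term pushes class centers
    apart by penalizing their cosine similarity."""
    # Center loss: intra-class compactness.
    center_term = 0.5 * np.sum((features - centers[labels]) ** 2)
    # Pairwise term: penalize similarity between every ordered pair of centers.
    pair_term = 0.0
    k = len(centers)
    for i in range(k):
        for j in range(k):
            if i != j:
                cos = centers[i] @ centers[j] / (
                    np.linalg.norm(centers[i]) * np.linalg.norm(centers[j]))
                pair_term += cos + 1.0  # +1 keeps each pair's penalty >= 0
    return center_term + lam * pair_term
```

Collapsed (parallel) centers are penalized more heavily than well-separated ones, which is exactly the "enlarge inter-class differences" effect described above.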

Degraded Image Segmentation, Global Context Embedding, and Data Balancing in Semantic Segmentation

Friday, August 9, 2019 - 10:30 am
Seminar Room 2277, Innovation Center
DISSERTATION DEFENSE
Department of Computer Science and Engineering, University of South Carolina
Author: Dazhou Guo
Advisor: Dr. Song Wang
Date: August 9, 2019
Time: 10:30 am
Place: Seminar Room 2277, Innovation Center

Abstract
Semantic segmentation -- assigning a categorical label to each pixel in an image -- plays an important role in image understanding applications, e.g., autonomous driving, human-machine interaction, and medical imaging. Semantic segmentation has made rapid progress by using deep convolutional neural networks (CNNs), which surpass traditional methods by a large margin. Despite this success, three major challenges remain.

The first challenge is degraded image semantic segmentation: how to semantically segment degraded images. In general, image degradations increase the difficulty of semantic segmentation, usually leading to decreased accuracy. While supervised deep learning has substantially improved the state of the art of semantic image segmentation, the gap between the feature distribution learned from clean images and that learned from degraded images poses a major obstacle to improving degraded image segmentation performance. We propose a novel Dense-Gram Network that reduces this gap more effectively than conventional strategies and segments degraded images. Extensive experiments demonstrate that the proposed Dense-Gram Network yields state-of-the-art semantic segmentation performance on degraded images synthesized using the PASCAL VOC 2012, SUNRGBD, CamVid, and CityScapes datasets.

The second challenge is how to embed global context into the segmentation network.
Existing semantic segmentation networks usually exploit local context information to infer the label of a single pixel or patch; without global context, CNNs can misclassify objects with similar colors and shapes. In this dissertation, we propose to embed global context into the segmentation network using objects' spatial relationships. In particular, we introduce a boundary-based metric that measures the level of spatial adjacency between each pair of object classes and find that this metric is robust against biases induced by object size. We develop a new method to enforce this metric in the segmentation loss. We propose a network that starts with a segmentation network, followed by a new encoder that computes the proposed boundary-based metric, and train this network in an end-to-end fashion. We evaluate the proposed method on the CamVid and CityScapes datasets and achieve favorable overall performance and a substantial improvement in segmenting small objects.

The third challenge is how to address the performance decrease induced by imbalanced data. Contemporary CNN-based methods typically follow classic strategies such as class re-sampling or cost-sensitive training. However, for a multi-label segmentation problem, this becomes a non-trivial task. At the image level, one semantic class may occur in more images than another; at the pixel level, one semantic class may occupy more pixels than another. Here, we propose a selective-weighting strategy that considers image- and pixel-level data balancing simultaneously when a batch of images is fed into the network. Experimental results on the CityScapes and BRATS2015 benchmark datasets show that the proposed method effectively improves performance.
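The two-level imbalance described above (class frequency across images vs. across pixels) can be made concrete with a small sketch. This is an illustrative NumPy version of the idea, not the dissertation's exact selective-weighting formulation: it combines inverse image-level and inverse pixel-level frequencies into one per-class weight.

```python
import numpy as np

def selective_weights(label_maps, num_classes, eps=1e-6):
    """Sketch of two-level class weighting: rare classes (rare across images
    AND rare across pixels) receive the largest weights."""
    image_freq = np.zeros(num_classes)  # fraction of images containing class c
    pixel_freq = np.zeros(num_classes)  # fraction of all pixels labeled c
    total_pixels = 0
    for lm in label_maps:
        image_freq[np.unique(lm)] += 1          # class present in this image
        for c in range(num_classes):
            pixel_freq[c] += np.sum(lm == c)    # pixel count for class c
        total_pixels += lm.size
    image_freq /= len(label_maps)
    pixel_freq /= total_pixels
    # Inverse-frequency weighting at both levels, then combine and normalize.
    w = (1.0 / (image_freq + eps)) * (1.0 / (pixel_freq + eps))
    return w / w.sum()
```

Such weights could then scale the per-pixel loss of each class in a batch; a class that appears in few images and covers few pixels dominates the weight vector.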

Person Identification with Convolutional Neural Networks

Friday, August 9, 2019 - 09:00 am
Seminar Room 2277, Innovation Center
DISSERTATION DEFENSE
Department of Computer Science and Engineering, University of South Carolina
Author: Kang Zheng
Advisor: Dr. Song Wang
Date: August 9, 2019
Time: 9:00 am
Place: Seminar Room 2277, Innovation Center

Abstract
Person identification aims at matching persons across images or videos captured by different cameras, without requiring the presence of persons' faces. It is an important problem in the computer vision community and has many important real-world applications, such as person search, security surveillance, and no-checkout stores. However, the problem is very challenging due to factors such as illumination variation, view changes, human pose deformation, and occlusion. Traditional approaches generally tackle these challenges by hand-crafting features and/or learning distance metrics for matching. With Convolutional Neural Networks (CNNs), feature extraction and metric learning can be combined in a unified framework.

In this work, we study two important sub-problems of person identification: cross-view person identification and visible-thermal person re-identification. Cross-view person identification aims to match persons in temporally synchronized videos taken by wearable cameras. Visible-thermal person re-identification aims to match persons between images taken by visible cameras under normal illumination conditions and thermal cameras under poor illumination conditions, such as at night. For cross-view person identification, we focus on addressing the challenge of view changes between cameras. Since the videos are taken by wearable cameras, the underlying 3D motion pattern of the same person should be consistent and can thus be used for effective matching. In light of this, we propose to extract view-invariant motion features to match persons.
Specifically, we propose a CNN-based triplet network that learns view-invariant features by establishing correspondences between 3D human MoCap data and the projected 2D optical flow data. After training, the triplet network is used to extract view-invariant features from the 2D optical flow of videos for matching persons. We collected three datasets for evaluation, and the experimental results demonstrate the effectiveness of this method.

For visible-thermal person re-identification, we focus on the challenge of domain discrepancy between visible and thermal images. We propose to address this issue at the class level with a CNN-based two-stream network. Specifically, our idea is to learn a center for the features of each person in each domain (visible and thermal), using a new relaxed center loss. Instead of imposing constraints between pairs of samples, we enforce the centers of the same person in the visible and thermal domains to be close, and the centers of different persons to be distant. We also enforce the vector from the center of one person to another in the visible feature space to be similar to the corresponding vector in the thermal feature space. Using this network, we can learn domain-independent features for visible-thermal person re-identification. Experiments on two public datasets demonstrate the effectiveness of this method.
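The center-level constraints described above — same person's centers close across domains, different persons' centers far apart — can be sketched as a simple loss over precomputed centers. This is an illustrative NumPy sketch under assumed names and margin, not the dissertation's exact relaxed center loss.

```python
import numpy as np

def relaxed_center_loss(vis_centers, th_centers, margin=1.0):
    """Sketch of a relaxed-center-style loss: operate on per-person class
    centers rather than on pairs of samples."""
    n = len(vis_centers)
    pull = 0.0
    push = 0.0
    for i in range(n):
        # Same person, different domains: the two centers should coincide.
        pull += np.sum((vis_centers[i] - th_centers[i]) ** 2)
        for j in range(n):
            if i != j:
                # Different persons: hinge keeps centers at least `margin` apart.
                d = np.linalg.norm(vis_centers[i] - vis_centers[j])
                push += max(0.0, margin - d) ** 2
    return pull + push
```

Because the constraints are on centers, the number of terms grows with the number of identities rather than with the number of sample pairs, which is the "relaxed" aspect relative to pairwise losses.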

Challenges for the Development of Cloud Native Applications

Friday, July 26, 2019 - 02:00 pm
Innovation Center, Room 2277
Speaker: Nabor Mendonça
Affiliation: University of Fortaleza, Brazil
Location: Innovation Center, Room 2277
Time: Friday, 7/26/2019, 2:00–3:00 pm
Host: Pooyan Jamshidi (please contact me if you want to meet with the speaker)

Abstract: In this talk I'll give a brief overview of how the concept of software architecture has evolved over the last 50 years, from centralized monolithic systems to today's highly distributed cloud-based applications. Then I'll discuss some of the key challenges facing the development of cloud native applications, with a focus on the microservice architectural style and its supporting practices and technologies.

Bio: Nabor Mendonça is a full professor in applied informatics at the University of Fortaleza, Brazil. From 2017 to 2018 he was a visiting researcher at the Institute for Software Research, Carnegie Mellon University, working in David Garlan's ABLE group. His main research areas are software engineering, distributed systems, self-adaptive systems, and cloud computing. Prof. Mendonça holds a Ph.D. in computing from Imperial College London.

Challenges in Large-Scale Machine Learning Systems: Security and Correctness

Wednesday, June 12, 2019 - 02:00 pm
Meeting Room 2265, Innovation Center
DISSERTATION DEFENSE
Department of Computer Science and Engineering, University of South Carolina
Author: Emad Alsuwat
Advisors: Dr. Farkas and Dr. Valtorta
Date: June 12, 2019
Time: 2:00 pm
Place: Meeting Room 2265, Innovation Center

Abstract
In this research, we address the impact of data integrity on machine learning algorithms. We study how an adversary could corrupt Bayesian network structure learning algorithms by inserting contaminated data items, and we investigate the resilience of two such algorithms, PC and LCD, against data poisoning attacks that aim to corrupt the learned Bayesian network model. Data poisoning attacks are among the most important emerging security threats against machine learning systems: they corrupt machine learning models by contaminating datasets in the training phase. The lack of resilience of Bayesian network structure learning algorithms against such attacks leads to inaccuracies in the learned network structure.

In this dissertation, we propose two subclasses of data poisoning attacks against Bayesian network structure learning algorithms: (1) model invalidation attacks, in which an adversary poisons the training dataset so that the learned Bayesian model becomes invalid, and (2) targeted change attacks, in which an adversary poisons the training dataset to achieve a specific change in the structure. We also define a novel measure of the strength of links between variables in discrete Bayesian networks and use it to find vulnerable sub-structures of the Bayesian network model: the easiest links to break and the most believable links to add. In addition to one-step attacks, we define long-duration (multi-step) data poisoning attacks, in which a malicious attacker attempts to send contaminated cases over a period of time.
We propose to use the distance measure between Bayesian network models and the value of data conflict to detect data poisoning attacks, and we propose a two-layered framework that detects both traditional one-step and sophisticated long-duration data poisoning attacks. Layer 1 enforces “reject on negative impacts” detection; i.e., input that changes the Bayesian network model is labeled potentially malicious. Layer 2 aims to detect long-duration attacks; i.e., observations in the incoming data that conflict with the original Bayesian model. Our empirical results show that Bayesian networks are not robust against data poisoning attacks; however, our framework can be used to detect and mitigate such threats.
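The Layer-1 “reject on negative impacts” rule can be sketched as a model-comparison check. The interface below is hypothetical (the dissertation's distance measure and learners are not specified here): `learn_structure` stands in for any structure learner (e.g., PC or LCD) that returns a set of edges, and any structural change triggered by the incoming batch flags it as potentially malicious.

```python
def reject_on_negative_impact(learn_structure, clean_data, incoming_batch):
    """Sketch of a Layer-1 check: relearn the Bayesian network structure with
    the incoming batch included, and flag the batch if the edge set changes.

    `learn_structure` is a hypothetical callable mapping a dataset to an
    iterable of directed edges, e.g. [("A", "B"), ("B", "C")].
    """
    baseline_edges = set(learn_structure(clean_data))
    new_edges = set(learn_structure(clean_data + incoming_batch))
    # Any structural change is treated as a potential poisoning attempt.
    return new_edges != baseline_edges
```

A real deployment would replace exact set equality with a graded distance between models (as the abstract describes) so that benign statistical noise does not trigger rejections.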

A Novel and Inexpensive Solution to Build Autonomous Surface Vehicles Capable of Negotiating Highly Disturbed Environments

Friday, May 3, 2019 - 09:00 am
Meeting Room 2267, Innovation Center
DISSERTATION DEFENSE
Department of Computer Science and Engineering, University of South Carolina
Author: Jason Moulton
Advisor: Dr. Ioannis Rekleitis
Date: May 3, 2019
Time: 9:00 am
Place: Meeting Room 2267, Innovation Center

Abstract
This dissertation has four main contributions. The first contribution is the design and build of a fleet of long-range, medium-duration deployable autonomous surface vehicles (ASVs). The second is the development, implementation, and testing of inexpensive sensors to accurately measure wind, current, and depth environmental variables. The third leverages the first two: modeling the effects of environmental variables on an ASV, finally leading to the fourth, the development of a dynamic controller enabling deployment in more uncertain conditions.

The motivation for designing and building a new ASV comes from the lack, in the current state of the art, of a flexible and modular platform capable of long-range deployment. We present a design of an autonomous surface vehicle with the power to cover large areas, the payload capacity to carry sufficient power and sensor equipment, and enough fuel to remain on task for extended periods. An analysis of the design, lessons learned during builds and deployments, and a comprehensive build tutorial are provided in this thesis. The contributions from developing an inexpensive environmental sensor suite are multi-faceted. The ability to monitor, collect, and build models of depth, wind, and current proves to be valuable and challenging in environmental applications, where we illustrate our capability to provide an efficient, accurate, and inexpensive data collection platform for the community's use.
More selfishly, in order to enable our end-state goal of deploying our ASV in adverse environments, we must measure the same environmental characteristics in real time and provide them as inputs to our effects model and dynamic controller. We present our methods for calibrating the sensors and the experimental results of measurement maps and prediction maps from a total of 70 field trials. Finally, we incorporate our measured environmental variables, along with previously available odometry information, to improve the ASV's ability to maneuver in highly dynamic wind and current environments. We present experimental results in differing conditions, augmenting the trajectory-tracking performance of the original way-point navigation controller with our external-forces feed-forward algorithm.
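The feed-forward augmentation described above can be sketched as a one-line correction on top of an existing way-point controller. The gains and vector form below are illustrative assumptions, not the dissertation's calibrated model: the measured wind and current disturbances are cancelled before they accumulate as tracking error.

```python
import numpy as np

def control_with_feedforward(waypoint_cmd, wind, current,
                             k_wind=0.5, k_current=1.0):
    """Sketch of an external-forces feed-forward term: subtract the predicted
    disturbance (a gain-weighted sum of measured wind and current vectors)
    from the way-point controller's command."""
    disturbance = k_wind * np.asarray(wind) + k_current * np.asarray(current)
    return np.asarray(waypoint_cmd) - disturbance
```

In practice the gains would come from the effects model fitted during the field trials, and the corrected command would feed the vehicle's thrust allocation.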

Follow the Information: Illuminating Emerging Security Attacks and Applications

Monday, April 29, 2019 - 10:15 am
Storey Innovation Center (Room 2277)
Dr. Xuetao Wei from the School of Information Technology at the University of Cincinnati will give a talk on Monday, April 29, from 10:15 to 11:15 am in the Storey Innovation Center (Room 2277).

Abstract: Cyberspace is a constantly changing landscape. Not only have security attacks become more and more stealthy, but a myriad of opaque security applications have also emerged. Navigating the increasingly complex cyber threat landscape has become overwhelming. Thus, it is essential to profile and understand emerging security attacks and applications. In this talk, I will first present a novel approach and tool that illuminates in-memory injection attacks via provenance-based whole-system dynamic information flow tracking. Then, I will present a framework that enables behavior-based profiling for smart contracts, which could enhance users' understanding and control of contract behavior and assess performance and security implications. Finally, I will briefly discuss current ongoing research and future directions.

Bio: Xuetao Wei is a tenure-track assistant professor in the School of Information Technology at the University of Cincinnati. He received his Ph.D. in Computer Science from the University of California, Riverside. His research interests span the areas of cybersecurity, blockchain, and measurements. His current research is supported by federal and state agencies, including NSF, DARPA, and the Ohio Cyber Range. He particularly enjoys solving problems and developing innovative solutions based on interdisciplinary perspectives.