Portable Parallel Programming in the Age of Architecture Diversity for High Performance

Friday, October 27, 2017 - 02:20 pm
Swearingen room 2A14
I would like to invite you to attend this week's CSCE 791 seminar. These seminars highlight research being performed in our department and across the world. All CSCE 791 seminars are open to anybody who wishes to attend - not just students registered for the course.

Speaker: Dr. Yonghong Yan, University of South Carolina

Abstract: Today's computer systems are becoming much more heterogeneous and complex in both computer architecture and memory systems. High performance computing systems and large-scale enterprise clusters are often built from a combination of architectures, including multicore CPUs, NVIDIA manycore GPUs, Intel Xeon Phi vector manycores, and domain-specific processing units such as DSPs and deep-learning tensor units. The introduction of non-volatile memory and of 3D-stacked DRAM, known as high-bandwidth memory, further complicates these systems by significantly increasing the complexity of the memory hierarchy. For users, parallel programming for such systems has thus become more challenging than ever. In this talk, the speaker will highlight the latest developments in parallel programming models for existing and emerging high performance computing architectures. He will introduce ongoing work in his research team (http://passlab.github.io) on improving the productivity and portability of parallel programming for heterogeneous systems that combine shared and discrete memory. The speaker will conclude that this is an exciting time for computer systems research, and will also share some unsuccessful experiences from his Ph.D. studies.

Bio: Dr. Yonghong Yan joined the University of South Carolina as an Assistant Professor in Fall 2017, and he is a member of the OpenMP Architecture Review Board and the OpenMP Language Committee. Dr. Yan calls himself a nerd for parallel computing, compiler technology, and high-performance computer architecture and systems. He is an NSF CAREER awardee.
His research team develops intra-/inter-node programming models, compilers, runtime systems, and performance tools based on OpenMP, MPI, and the LLVM compiler; explores conventional and advanced computer architectures including CPU, vector, GPU, MIC, FPGA, and dataflow systems; and supports applications ranging from classical HPC to big data analysis, machine learning, and computer imaging. The ongoing development can be found at https://github.com/passlab. Dr. Yan received his Ph.D. in computer science from the University of Houston and holds a bachelor's degree in mechanical engineering.

Toward a Theory of Automated Design of Minimal Robots

Friday, October 13, 2017 - 02:20 pm
Swearingen room 2A14
Speaker: Dr. Jason O'Kane, University of South Carolina

Abstract: The design of an effective autonomous robot relies on a complex web of interactions and tradeoffs between various hardware and software components. The design problem becomes even more challenging when the objective is to find robot designs that are minimal, in the sense of using only limited sensing, actuation, or computational resources. These tradeoffs are currently navigated mostly through careful analysis and human cleverness. In contrast, this talk will present recent research that seeks to automate parts of this process by representing models of a robot's interaction with the world as formal, algorithmically manipulable objects and posing various kinds of questions on those data structures. The results include both bad news (i.e., hardness results) and good news (practical algorithms).

Bio: Jason O'Kane is Associate Professor in Computer Science and Engineering and Director of the Center for Computational Robotics at the University of South Carolina. He holds the Ph.D. (2007) and M.S. (2005) degrees from the University of Illinois at Urbana-Champaign and the B.S. (2001) degree from Taylor University, all in Computer Science. He has won a CAREER Award from NSF, a Breakthrough Star Award from the University of South Carolina, and the Outstanding Graduate in Computer Science Award from Taylor University. He was a member of the DARPA Computer Science Study Group. His research spans algorithmic robotics, planning under uncertainty, and computational geometry.

Enhancement of Hi-C experimental data using deep convolutional neural network

Friday, October 6, 2017 - 02:20 pm
Swearingen room 2A14
Speaker: Dr. Jijun Tang, University of South Carolina

Abstract: Hi-C technology is one of the most popular tools for measuring the spatial organization of mammalian genomes. Although an increasing number of Hi-C datasets have been generated in a variety of tissue/cell types, the resolution of most Hi-C datasets is coarse due to high sequencing cost, so they cannot be used to infer enhancer-promoter interactions or to link disease-related non-coding variants to their target genes. To address this challenge, we developed HiCPlus, a computational approach based on a deep convolutional neural network, to infer high-resolution Hi-C interaction matrices from low-resolution Hi-C data. Through extensive testing, we demonstrate that HiCPlus can impute interaction matrices highly similar to the originals while using as little as 1/16 of the total sequencing reads. We observe that the Hi-C interaction matrix contains unique local features that are consistent across different cell types, and that such features can be effectively captured by the deep learning framework. We further apply HiCPlus to enhance and expand the usability of Hi-C datasets in a variety of tissue and cell types. In summary, our work not only provides a framework for generating high-resolution Hi-C matrices at a fraction of the sequencing cost, but also reveals features underlying the formation of 3D chromatin interactions.
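To make the setup concrete, here is a minimal sketch in plain Python (not the authors' HiCPlus code, which is a trained deep CNN): it simulates the low-coverage input by scaling interaction counts down, and uses a 3x3 mean filter as a crude stand-in for the kind of local smoothing a convolutional layer applies. All function names are invented for illustration.

```python
def downsample_matrix(matrix, fraction):
    """Simulate low sequencing coverage by scaling interaction counts.

    A real pipeline would subsample individual read pairs; scaling and
    rounding is a crude stand-in used here for illustration only.
    """
    return [[int(round(v * fraction)) for v in row] for row in matrix]


def mean_filter(matrix):
    """Apply a 3x3 mean filter: a very rough stand-in for one CNN layer
    exploiting the local structure of a Hi-C interaction matrix."""
    n = len(matrix)
    out = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            vals = [matrix[x][y]
                    for x in range(max(0, i - 1), min(n, i + 2))
                    for y in range(max(0, j - 1), min(n, j + 2))]
            out[i][j] = sum(vals) / len(vals)
    return out
```

The interesting part of HiCPlus is learning the inverse of `downsample_matrix` from data; the sketch only shows why local, cell-type-consistent features make that learnable.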

Error Correction Mechanisms in Social Networks: Implications for Replicators

Friday, September 29, 2017 - 02:20 pm
Speaker: Dr. Matthew Brashears, University of South Carolina (Department of Sociology)

Abstract: Humans make mistakes, but diffusion through social networks is typically modeled as though they do not. We find in an experiment that efforts to correct mistakes are effective, but generate more mutant forms of the contagion than would result from a lack of correction. This indicates that the ability of messages to cross "small-world" human social networks may be overestimated, and that failed error corrections create new versions of a contagion that diffuse in competition with the original. These results are extended to a nascent general theory of replicators explaining how error correction mechanisms facilitate rapid saturation of a search space. A simulation model and preliminary results are presented that are consistent with this prediction.

Bio: Matthew E. Brashears is an Associate Professor of Sociology at the University of South Carolina. His work crosses levels, integrating ideas from evolutionary theory, social networks, organizational theory, and neuroscience. His current research focuses on linking cognition to social network structure, studying the effects of error and error correction on diffusion dynamics, and using ecological models to connect individual behavior to collective dynamics. He is also engaged in an effort to model values and interactional scripts in an ecological space using cross-national data, with the goal of generating a predictive model of cultural competition and evolution. His work has appeared or is forthcoming in Nature Scientific Reports, the American Sociological Review, the American Journal of Sociology, Social Networks, Social Forces, Advances in Group Processes, and Frontiers in Cognitive Psychology, among others. He has received grants from the National Science Foundation, the Defense Threat Reduction Agency, the Army Research Institute, the Army Research Office, and the Office of Naval Research.
He is one of two new co-editors for the journal Social Psychology Quarterly, and currently serves as an officer in the American Sociological Association’s Social Psychology Section.

Generating Effective Test Suites by Combining Coverage Criteria

Friday, September 22, 2017 - 02:20 pm
Swearingen room 2A14
Speaker: Dr. Gregory Gay, University of South Carolina

Abstract: A number of criteria have been proposed to judge test suite adequacy. While search-based test generation has greatly improved at covering these criteria, the produced suites are still often ineffective at detecting faults. Efficacy may be limited by the single-minded application of one criterion at a time when generating suites - a sharp contrast to human testers, who simultaneously explore multiple testing strategies. We hypothesize that automated generation can be improved by selecting and simultaneously exploring multiple criteria. To test this hypothesis, we generated multi-criteria test suites and measured their efficacy against the Defects4J fault database. We found that multi-criteria suites can be up to 31.15% more effective at detecting complex, real-world faults than suites generated to satisfy a single criterion, and 70.17% more effective than the default combination of all eight criteria. Given a fixed search budget, we recommend pairing a criterion focused on structural exploration - such as Branch Coverage - with targeted supplemental strategies aimed at the types of faults expected in the system under test. Our findings offer lessons to consider when selecting such combinations.

Bio: Gregory Gay is an assistant professor of Computer Science & Engineering at the University of South Carolina. His research interests include automated testing and analysis and search-based software engineering, with a focus on the use of coverage criteria in automated test case generation, as well as the construction of effective test oracles for real-time and safety-critical systems.
He serves on the steering committees for the Symposium on Search-Based Software Engineering and the International Workshop on Search-Based Software Testing, as well as the organizing and program committees of a variety of conferences and workshops. He graduated with a PhD from the University of Minnesota under an NSF Graduate Research Fellowship, working with the Critical Systems research group. He received his BS and MS in Computer Science from West Virginia University. Additionally, he has previously worked at NASA's Ames Research Center and Independent Verification & Validation Center, and served as a visiting academic at the Laboratory for Internet Software Technologies at the Chinese Academy of Sciences in Beijing.
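The multi-criteria idea in the abstract above can be sketched as a greedy selection that scores candidate tests against several coverage criteria at once, rather than one criterion at a time. This is an illustrative toy, not Dr. Gay's actual tooling; the test names and coverage goals are invented.

```python
def combined_coverage(selected, criteria):
    """Fraction of goals covered, averaged over all criteria.

    Each criterion is a pair (goals_by_test, all_goals): which goals each
    test covers, and the full set of goals for that criterion.
    """
    scores = []
    for goals_by_test, all_goals in criteria:
        covered = set()
        for test in selected:
            covered |= goals_by_test.get(test, set())
        scores.append(len(covered & all_goals) / len(all_goals))
    return sum(scores) / len(scores)


def greedy_select(tests, criteria, budget):
    """Greedily pick tests that most improve the combined multi-criteria score."""
    selected = []
    for _ in range(budget):
        base = combined_coverage(selected, criteria)
        best, best_gain = None, 0.0
        for t in tests:
            if t in selected:
                continue
            gain = combined_coverage(selected + [t], criteria) - base
            if gain > best_gain:
                best, best_gain = t, gain
        if best is None:
            break
        selected.append(best)
    return selected
```

With a structural criterion (e.g., branches) paired with a supplemental one (e.g., exceptional behavior), the greedy loop naturally prefers tests that help on both fronts - the intuition behind pairing Branch Coverage with targeted strategies.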

Human Attribute Recognition by Refining Attention Heat Map

Friday, September 15, 2017 - 02:20 pm
Swearingen room 2A14
Speaker: Dr. Song Wang, University of South Carolina

Abstract: Most existing methods for human attribute recognition are part-based, and their performance depends heavily on the accuracy of body-part detection, a well-known challenging problem in computer vision. In this talk, I will introduce a new method to recognize human attributes using a CAM (Class Activation Map) network, as well as an unsupervised algorithm to refine the attention heat map, an intermediate result in CAM that reflects the relevant image regions for each attribute. The proposed method requires neither the detection of body parts nor a prior correspondence between body parts and attributes, and it achieves performance comparable to the current state-of-the-art attribute recognition methods.

Bio: Song Wang received the Ph.D. degree in electrical and computer engineering from the University of Illinois at Urbana-Champaign in 2002. He received his M.E. and B.E. degrees from Tsinghua University in 1998 and 1994, respectively. In 2002, he joined the Department of Computer Science and Engineering at the University of South Carolina, where he is currently a Professor and the director of the Computer Vision Lab. His current research focuses on computer vision, image processing, and machine learning, as well as their applications to materials science, medical imaging, digital humanities, and archaeology. He has published more than 100 research papers in journals and conferences, including top venues such as CVPR, ICCV, NIPS, IJCAI, TPAMI, IJCV, and TIP.
He is currently serving as the Publicity/Web Portal Chair of the Technical Committee of Pattern Analysis and Machine Intelligence of the IEEE Computer Society, and an Associate Editor of Pattern Recognition Letters. He is a senior member of IEEE.
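As a rough illustration of what "refining" a CAM attention heat map can mean, the sketch below normalizes the activations and suppresses weak responses so the map concentrates on the regions most relevant to an attribute. The keep-fraction rule here is a hypothetical simplification, not the unsupervised algorithm presented in the talk.

```python
def refine_heat_map(heat_map, keep_fraction=0.5):
    """Keep only the strongest activations, rescaled to [0, 1].

    heat_map is a 2-D list of activation values (one per image region);
    keep_fraction is the share of regions to retain - an invented knob
    standing in for a more principled unsupervised refinement step.
    """
    flat = sorted((v for row in heat_map for v in row), reverse=True)
    cutoff = flat[max(0, int(len(flat) * keep_fraction) - 1)]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0  # avoid division by zero on a flat map
    return [[(v - lo) / span if v >= cutoff else 0.0 for v in row]
            for row in heat_map]
```

The point of refinement is that the raw CAM output is diffuse; sharpening it lets attribute classification focus on the right regions without ever running a body-part detector.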

Improving Facial Action Unit Recognition Using Convolutional Neural Networks

Thursday, September 14, 2017 - 10:00 am
Swearingen 3A75
DISSERTATION DEFENSE
Department of Computer Science and Engineering, University of South Carolina
Candidate: Shizhong Han
Advisor: Dr. Yan Tong

Abstract: Recognizing facial action units (AUs) from spontaneous facial expressions is a challenging problem because of subtle facial appearance changes, free head movements, occlusions, and limited AU-coded training data. Recently, convolutional neural networks (CNNs) have shown promise for facial AU recognition. However, CNNs often overfit and do not generalize well to unseen subjects due to the limited AU-coded training images. To improve the performance of facial AU recognition, we developed two novel CNN frameworks that recognize AUs from static images, substituting the traditional decision layer and convolutional layer with an incremental boosting layer and an adaptive convolutional layer, respectively.

First, to handle the limited AU-coded training data and reduce overfitting, we proposed a novel Incremental Boosting CNN (IB-CNN) that integrates boosting into the CNN via an incremental boosting layer, which selects discriminative neurons from the lower layer and is incrementally updated on successive mini-batches. In addition, a novel loss function that accounts for errors from both the incrementally boosted classifier and the individual weak classifiers was proposed to fine-tune the IB-CNN. Experimental results on four benchmark AU databases demonstrate that the IB-CNN yields significant improvement over the traditional CNN and over a boosting CNN without incremental learning, and that it outperforms the state-of-the-art CNN-based methods in AU recognition. The improvement is most impressive for the AUs with the lowest frequencies in the databases.

Second, current CNNs use predefined, fixed convolutional filter sizes. However, AUs activated by different facial muscles cause facial appearance changes at different scales and thus favor different filter sizes.
The traditional strategy is to experimentally select the best filter size for each AU in each convolutional layer, but this suffers from expensive training cost, especially as networks become deeper and deeper. We proposed a novel Optimized Filter Size CNN (OFS-CNN), in which the filter sizes and weights of all convolutional layers are learned simultaneously from the training data. Specifically, the filter size is defined as a continuous variable that is optimized by minimizing the training loss. Experimental results on four AU-coded databases and one spontaneous facial expression database show that the OFS-CNN outperforms traditional CNNs with fixed filter sizes and achieves state-of-the-art recognition performance. Furthermore, the OFS-CNN beats traditional CNNs using the best filter size obtained by exhaustive search, and it is capable of estimating the optimal filter size for varying image resolutions.
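The key idea - treating filter size as a continuous variable tuned by minimizing the training loss, instead of retraining once per candidate integer size - can be shown with a deliberately tiny toy (not the dissertation's OFS-CNN): a 1-D Gaussian filter whose continuous width sigma plays the role of the filter size, fit by gradient descent with a finite-difference gradient.

```python
import math


def gaussian_smooth(signal, sigma):
    """Smooth a 1-D signal with a Gaussian of continuous width sigma."""
    n = len(signal)
    out = []
    for i in range(n):
        weights = [math.exp(-((i - j) ** 2) / (2.0 * sigma ** 2)) for j in range(n)]
        total = sum(weights)
        out.append(sum(w * s for w, s in zip(weights, signal)) / total)
    return out


def loss(signal, target, sigma):
    """Squared error of the smoothed signal against the target."""
    return sum((a - b) ** 2 for a, b in zip(gaussian_smooth(signal, sigma), target))


def optimize_sigma(signal, target, sigma=1.0, lr=0.1, steps=50, eps=1e-3):
    """Gradient descent on the continuous filter width sigma.

    The finite-difference gradient stands in for backpropagating through
    the filter-size parameter, as the OFS-CNN does analytically.
    """
    for _ in range(steps):
        grad = (loss(signal, target, sigma + eps)
                - loss(signal, target, sigma - eps)) / (2 * eps)
        sigma = max(0.1, sigma - lr * grad)
    return sigma
```

The contrast with exhaustive search is that one descent run replaces a full retraining per candidate size, which is what makes the approach attractive as networks grow deeper.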