Graph-centric approaches for understanding the mutational landscape of life

Friday, March 16, 2018 - 10:15 am
Innovation Center, Room 2277
Speaker: Heewook Lee

Abstract: Genetic diversity is necessary for the survival and adaptability of all forms of life, and its importance is observed universally, from humans to bacteria. It is therefore a central challenge to improve our ability to identify and characterize genetic variants in order to understand the mutational landscape of life. In this talk, I will focus on two important instances of genetic diversity found in (1) human genomes (particularly the human leukocyte antigens, or HLA) and (2) bacterial genomes (rearrangement of insertion sequence [IS] elements). I will first show that specific graph data structures can naturally encode high levels of genetic variation, and I will describe our novel, efficient graph-based computational approaches to identifying genetic variants for both HLA and bacterial rearrangements. Each of these methods is tailored to its own problem, making it possible to achieve state-of-the-art performance. For example, our method is the first able to reconstruct full-length HLA sequences from short-read sequence data, making it possible to discover novel alleles in individuals. For IS element rearrangement, I used our new approach to provide the first estimate of the genome-wide rate of IS-induced rearrangements, including recombination. I will also show the spatial patterns and biases that we find by analyzing E. coli mutation accumulation data spanning over 2.2 million generations. These graph-centric ideas provide a foundation for analyzing genetically heterogeneous populations of genes and genomes, and suggest directions for investigating other instances of genetic diversity found in life.

Bio: Dr. Heewook Lee is currently a Lane Fellow in the Computational Biology Department at Carnegie Mellon University's School of Computer Science, where he works on developing novel assembly algorithms for reconstructing highly diverse immune-related genes, including the human leukocyte antigens. He received a B.S. in computer science from Columbia University, and M.S. and Ph.D. degrees in computer science from Indiana University. Prior to his graduate studies, he worked as a bioinformatics scientist at a sequencing center/genomics company, where he was in charge of the computational unit responsible for carrying out various microbial genome projects and the Korean Human Genome Project.
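The graph encoding of genetic variation mentioned in the abstract can be illustrated with a toy example (a hypothetical sketch, not Dr. Lee's actual data structure): sequence fragments become nodes, alternative alleles at a variant site become branching paths, and every source-to-sink path spells out one possible haplotype.

```python
# Toy "variation graph": nodes hold sequence fragments; a branch encodes two
# alternative alleles at one variant site. Enumerating source-to-sink paths
# yields every haplotype sequence the graph represents.
nodes = {
    "s":  "ACGT",  # shared prefix
    "a1": "G",     # allele 1 at the variant site
    "a2": "T",     # allele 2 at the same site
    "e":  "CCAT",  # shared suffix
}
edges = {"s": ["a1", "a2"], "a1": ["e"], "a2": ["e"], "e": []}

def haplotypes(node="s", prefix=""):
    """Depth-first enumeration of all sequences encoded by the graph."""
    seq = prefix + nodes[node]
    if not edges[node]:
        return [seq]
    out = []
    for nxt in edges[node]:
        out.extend(haplotypes(nxt, seq))
    return out

print(sorted(haplotypes()))  # ['ACGTGCCAT', 'ACGTTCCAT']
```

Real variant-aware methods operate on far larger graphs and align short reads against them rather than enumerating paths, but the branching structure is the core idea.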

Security and Privacy Challenges in User-Facing, Complex, Interconnected Environments

Wednesday, March 14, 2018 - 10:15 am
Innovation Center, Room 2277
COLLOQUIUM
Speaker: Soteris Demetriou

Abstract: In contrast with traditional ubiquitous computing, mobile devices are now user-facing, more complex, and interconnected. They thus introduce new attack surfaces, which can result in severe private information leakage. Given the rapid adoption of smart devices, there is an urgent need to address emerging security and privacy challenges to help realize the vision of a secure, smarter, and personalized world. In this talk, I will focus on the smartphone and its role in smart environments. First, I will show how the smartphone's complex architecture allows third-party applications and advertising networks to perform inference attacks and compromise user confidentiality. Further, I will demonstrate how combining techniques from systems and data science can help us build tools to detect such leakage. Second, I will show how a weak mobile-application adversary can exploit vulnerabilities hidden in the interplay between smartphones and smart devices. I will then describe how we can leverage both strong mandatory access control and flexible user-driven access control to design practical and robust systems that mitigate such threats. I will conclude by discussing how, in the future, I want to enable a trustworthy Internet of Things, focusing not only on strengthening smartphones but also on emerging intelligent platforms and environments (e.g., automobiles, smart buildings/cities) and new user-interaction modalities in IoT (e.g., acoustic signals).

Bio: Soteris Demetriou is a Ph.D. candidate in Computer Science at the University of Illinois at Urbana-Champaign. His research interests lie at the intersection of mobile systems, security, and privacy, with a current focus on smartphones and IoT environments. He discovered side channels in the virtual process filesystem (procfs) of the Linux kernel that can be exploited by malicious applications running on Android devices; he built Pluto, an open-source tool for detecting sensitive user information collected by mobile apps; and he designed security enhancements for the Android OS that enable mandatory and discretionary access control for external devices. His work prompted security additions to the popular Android operating system, received a distinguished paper award at NDSS, and has been recognized by awards from Samsung Research America and Hewlett Packard Enterprise. Soteris is a recipient of a Fulbright Scholarship, and in 2017 he was selected by the Heidelberg Laureate Forum as one of the 200 most promising young researchers in mathematics and computer science.

The Role of Applications in Wireless Communication System Design and Optimization

Monday, March 12, 2018 - 10:15 am
Innovation Center, Room 2277
COLLOQUIUM
Speaker: Antonios Argyriou

Abstract: The next generation of cellular wireless communication systems (WCS) aspires to be a paradigm shift, not just an incremental version of existing systems. These systems will come with several technical and conceptual advances, resulting in an ecosystem that aims to deliver orders-of-magnitude higher performance (throughput, delay, energy). They will essentially serve as conduits between service/content providers and users, and are expected to support a significantly enlarged and diversified bouquet of applications. In this talk, we will first introduce the audience to the fundamental concepts of WCS that brought us to this day. Subsequently, we will identify the application trends that drive specific design choices for future WCS. Then, we will present a new idea for designing and optimizing future WCS that puts the specific application at the center of our choices. The discussion will be based on two key application categories: wireless monitoring and video delivery. In the last part of the talk, we will discuss how this paradigm, which elevates the role of applications, opens up new directions for understanding, operating, and designing future WCS.

Bio: Dr. Antonios Argyriou received the Diploma in electrical and computer engineering from Democritus University of Thrace, Greece, in 2001, and the M.S. and Ph.D. degrees in electrical and computer engineering, as a Fulbright scholar, from the Georgia Institute of Technology, Atlanta, USA, in 2003 and 2005, respectively. Currently, he is an Assistant Professor in the Department of Electrical and Computer Engineering, University of Thessaly, Greece. From 2007 until 2010 he was a Senior Research Scientist at Philips Research, Eindhoven, The Netherlands, where he led the research efforts on wireless body area networks. From 2004 until 2005, he was a Senior Engineer at Soft.Networks, Atlanta, GA. Dr. Argyriou currently serves on the editorial board of the Journal of Communications. He has served as guest editor for the IEEE Transactions on Multimedia Special Issue on Quality-Driven Cross-Layer Design, and as lead guest editor for the Journal of Communications Special Issue on Network Coding and Applications. He serves on the TPCs of several international conferences and workshops in the areas of wireless communications, networking, and signal processing. His current research interests are in wireless communications, cross-layer wireless system design (with applications in video delivery, sensing, and vehicular systems), statistical signal processing theory and applications, optimization, and machine learning. He is a Senior Member of the IEEE.

Elastic and Adaptive SDN-based Defenses in Cloud with Programmable Measurement

Friday, March 9, 2018 - 10:15 am
Innovation Center, Room 2277
Speaker: An Wang
Affiliation: George Mason University

Abstract: The past decade has witnessed a dramatic change in the way organizations and enterprises manage their cloud and data center systems. The main driver of this transition is network virtualization, which has been taken to a new level by the Software-Defined Networking (SDN) paradigm. Along with the programmability and flexibility offered by SDN, there are fundamental challenges in defending SDN-based cloud systems against prevalent large-scale network attacks, such as DDoS attacks. This talk presents efficient and flexible solutions that address these challenges in both the reactive and proactive modes of SDN. I will first discuss vulnerabilities in the architecture of SDN that create a risk of congestion on the control path under the reactive mode. As a solution, I will show how control-path capacity can be elastically scaled up by taking advantage of software switches' abundant processing power to handle control messages. Then, for the proactive mode, I will discuss why traffic measurement and monitoring mechanisms are necessary yet inadequate in existing SDN solutions. To fix this, I will present the design and implementation of a separate monitoring plane for SDN that enables flexible and fine-grained data collection for security purposes.

Bio: An Wang is a Ph.D. candidate in the Department of Computer Science at George Mason University. She received her B.S. from the Department of Computer Science and Technology at Jilin University in 2012. Her research interests lie in security for networked systems and network virtualization, mainly focusing on Software-Defined Networking (SDN), cloud systems, and large-scale network attacks.
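The fine-grained measurement the abstract calls for can be sketched in miniature (an illustrative toy, not the monitoring plane from the talk): count packets per 5-tuple flow and flag flows whose volume crosses a threshold, a basic building block for detecting attacks such as DDoS.

```python
from collections import Counter

def monitor(packets, threshold=3):
    """packets: iterable of (src, dst, sport, dport, proto) flow tuples.
    Returns per-flow packet counts and the set of flows at/over threshold."""
    counts = Counter(packets)
    flagged = {flow for flow, n in counts.items() if n >= threshold}
    return counts, flagged

# Toy trace: one flow sends far more packets than the other.
trace = [("10.0.0.1", "10.0.0.9", 1234, 80, "tcp")] * 5 \
      + [("10.0.0.2", "10.0.0.9", 5678, 80, "tcp")]
counts, flagged = monitor(trace)
print(flagged)  # {('10.0.0.1', '10.0.0.9', 1234, 80, 'tcp')}
```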

Theory-guided Data Science: A New Paradigm for Scientific Discovery from Data

Wednesday, March 7, 2018 - 10:15 am
Innovation Center, Room 2277
COLLOQUIUM
Speaker: Anuj Karpatne

Abstract: This talk will introduce theory-guided data science, a novel paradigm of scientific discovery that leverages the unique ability of data science methods to automatically extract patterns and models from data without ignoring the treasure of knowledge accumulated in scientific theories. Theory-guided data science aims to fully capitalize on the power of machine learning and data mining in scientific disciplines by deeply coupling them with models based on scientific theories. This talk will describe several ways in which scientific knowledge can be combined with data science methods in disciplines such as hydrology, climate science, aerospace, and chemistry. To demonstrate the value of combining physics with data science, the talk will also introduce a novel framework for combining deep learning methods with physics-based models, termed physics-guided neural networks, and present preliminary results of this framework for an application in lake temperature modeling. The talk will conclude with a discussion of future prospects for exploiting the latest advances in deep learning to build the next generation of scientific models for dynamical systems, where theory-based and data science methods are used on an equal footing.

Bio: Dr. Anuj Karpatne is a Postdoctoral Associate at the University of Minnesota, where he develops data mining methods for solving scientific and socially relevant problems in Prof. Vipin Kumar's research group. He has published more than 25 peer-reviewed articles at top-tier conferences and in journals (e.g., KDD, ICDM, SDM, TKDE, and ACM Computing Surveys), given multiple invited talks, and served on panels at leading venues (e.g., SDM and SSDBM). His research has resulted in a system to monitor the dynamics of surface water bodies on a global scale, which was featured in an NSF news story. He is also a co-author of the second edition of the textbook "Introduction to Data Mining." Anuj received his Ph.D. in September 2017 from the University of Minnesota under the guidance of Prof. Kumar. Before that, he received his bachelor's and master's degrees from the Indian Institute of Technology Delhi.
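The physics-guided idea can be made concrete with a minimal sketch (assumptions: a mean-squared data-fit term, a standard empirical temperature-density relation for water, and a penalty weight `lam`; this is not the speaker's published formulation): the training loss adds a penalty whenever predictions violate a known physical constraint, here that water density should not decrease with depth.

```python
import numpy as np

def density(temp_c):
    # Empirical density of water (kg/m^3) as a function of temperature (deg C).
    return 1000.0 * (1 - (temp_c + 288.9414) * (temp_c - 3.9863) ** 2
                     / (508929.2 * (temp_c + 68.12963)))

def physics_guided_loss(pred_temp, true_temp, lam=1.0):
    """pred_temp, true_temp: temperature profiles ordered shallow to deep."""
    data_loss = np.mean((pred_temp - true_temp) ** 2)
    rho = density(pred_temp)
    # Penalize any depth step where predicted density decreases with depth,
    # which is physically implausible for a lake profile.
    violation = np.maximum(0.0, rho[:-1] - rho[1:])
    return data_loss + lam * np.mean(violation)

ok  = np.array([25.0, 15.0, 5.0])   # warm surface, cold depths: consistent
bad = np.array([5.0, 15.0, 25.0])   # inverted profile: penalized
print(physics_guided_loss(ok, ok), physics_guided_loss(bad, bad) > 0)
```

Because the physics term needs no labels, it can also be evaluated on unlabeled inputs, one of the arguments made for this family of methods.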

Big Data Bridge

Monday, March 5, 2018 - 10:15 am
Innovation Center, Room 2277
COLLOQUIUM
Speaker: Justin Zhan

Abstract: Data has become the central driving force behind new discoveries in science, informed governance, insight into society, and economic growth in the 21st century. Abundant data is a direct result of innovations including the Internet, faster computer processors, cheap storage, and the proliferation of sensors, and it has the potential to increase business productivity and enable scientific discovery. However, while data is abundant and everywhere, people do not have a fundamental understanding of it. Traditional approaches to decision making under uncertainty are not adequate for massive amounts of data, especially when such data is dynamically changing or becomes available over time. These challenges require novel techniques in data analytics, data-driven optimization, systems modeling, and data mining. In this seminar, a number of recently funded data analytics projects will be presented that address various data analytics, mining, modeling, and optimization challenges. In particular, DataBridge, a novel data analytics system, will be illustrated.

Bio: Dr. Justin Zhan is a professor in the Department of Computer Science, College of Engineering, and the Department of Radiology, School of Medicine, as well as the Nevada Institute of Personalized Medicine. His research interests include big data, information assurance, social computing, biomedical computing, and health informatics. He has been a steering chair of the International Conference on Social Computing (SocialCom) and the International Conference on Privacy, Security, Risk and Trust (PASSAT). He has been editor-in-chief of the International Journal of Privacy, Security and Integrity and the International Journal of Social Computing and Cyber-Physical Systems. He has served as conference general chair, program chair, publicity chair, workshop chair, or program committee member for over one hundred fifty international conferences, and as editor-in-chief, editor, associate editor, guest editor, editorial advisory board member, or editorial board member for about thirty journals. He has published more than two hundred articles in peer-reviewed journals and conferences and delivered thirty keynote speeches and invited talks. His research has been extensively funded by the National Science Foundation, the Department of Defense, and the National Institutes of Health.

Bringing Millimeter-Wave Wireless to the Masses

Friday, March 2, 2018 - 10:15 am
Innovation Center, Room 2277
COLLOQUIUM
Speaker: Sanjib Sur, University of Wisconsin-Madison

Abstract: Many emerging IoT applications, such as wireless virtual and augmented reality, autonomous vehicles, and the tactile internet, demand multiple gigabits per second of wireless throughput with sub-millisecond latency guarantees. Today's wireless infrastructure, such as LTE or Wi-Fi, is unlikely to handle such demand. Abundant opportunity, however, exists at millimeter-wave frequencies, but two key barriers, directional link alignment and link blockage, prevent the mass deployment of millimeter-wave in today's networks. In the first part of the talk, I will present my approach to addressing these two challenges by designing solutions that span the wireless link, protocol, and system stack. Mass deployment of millimeter-wave devices also brings opportunities to enable new IoT applications, including new user-device interactions and ad-hoc imaging of objects hidden from the line of sight. In the second part of the talk, I will briefly go through my designs that address the challenges of such ad-hoc applications. Finally, I will conclude with a glimpse of my future work, shaped by the emerging mass proliferation of cheap and ubiquitous wireless systems at millimeter-wave, sub-terahertz, and terahertz frequencies.

Bio: Sanjib Sur is a Ph.D. candidate in the Electrical and Computer Engineering department at the University of Wisconsin-Madison. His research interests are in millimeter-wave networks, wireless and mobile systems, and IoT connectivity and sensing systems. His research has appeared at multiple flagship conferences on wireless and mobile systems. Sanjib was recently nominated for the Wisconsin Distinguished Graduate Fellowship for outstanding graduate research. He received a Bachelor's degree with the highest distinction in Computer Science and Engineering from the Indian Institute of Engineering Science and Technology, where he was awarded the President of India Gold Medal for outstanding academic achievement.

Towards Continual and Fine-Grained Learning for Robot Perception

Wednesday, February 28, 2018 - 10:15 am
Innovation Center, Room 2277
COLLOQUIUM
Speaker: Zsolt Kira

Abstract: A large number of robot perception tasks have been revolutionized by machine learning, and by deep neural networks in particular. However, current learning methods are limited in several ways that hinder their large-scale use in critical robotics applications: they are often focused on individual sensor modalities, do not attempt to understand semantic information in a fine-grained temporal manner, and are beholden to strong assumptions about the data (e.g., that the data distribution is the same when deployed in the real world as when trained). In this talk, I will describe work on novel deep learning architectures that move beyond current methods to develop richer multi-modal and fine-grained scene understanding from raw sensor data. I will also discuss methods we have developed that use transfer learning to deal with changes in the environment or with entirely new, unknown categories in the data (e.g., unknown object types). I will focus especially on this latter work, where we use neural networks to learn how to compare objects and transfer that learning to new domains using one of the first deep-learning-based clustering algorithms, which we developed. I will show examples of real-world robotic systems using these methods, and conclude by discussing future directions toward making robots able to continually learn and adapt to new situations as they arise.

Bio: Dr. Zsolt Kira received his B.S. in ECE from the University of Miami in 2002 and his M.S. and Ph.D. in Computer Science from the Georgia Institute of Technology in 2010. He is currently a Senior Research Scientist and Branch Chief of the Machine Learning and Analytics group at the Georgia Tech Research Institute (GTRI). He also holds an adjunct appointment at the School of Interactive Computing and is Associate Director of Georgia Tech's Machine Learning Center (ML@GT). He conducts research in machine learning for sensor processing and robot perception, with emphasis on feature learning for multi-modal object detection, video analysis, scene characterization, and transfer learning. He has over 25 publications in these areas and several best paper/best student paper and other awards, and he has been invited to speak at related workshops in both academic and government venues.
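The similarity-driven clustering described above can be caricatured in a few lines (a hypothetical sketch: the learned comparison network is replaced by a plain similarity function, and grouping is done with union-find): predict "same cluster?" for each pair, then merge items whose predicted similarity clears a threshold.

```python
def cluster_by_similarity(items, similar, threshold=0.5):
    """Group items using only pairwise similarity scores (union-find)."""
    parent = list(range(len(items)))

    def find(i):  # find the root of i's group, with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if similar(items[i], items[j]) > threshold:
                parent[find(i)] = find(j)  # merge the two groups

    groups = {}
    for i in range(len(items)):
        groups.setdefault(find(i), []).append(items[i])
    return list(groups.values())

# Toy stand-in for a learned comparator: numbers are "similar" when close.
points = [0.1, 0.15, 5.0, 5.2, 9.9]
groups = cluster_by_similarity(points, lambda a, b: float(abs(a - b) < 1.0))
print(groups)  # three groups: [0.1, 0.15], [5.0, 5.2], [9.9]
```

The appeal of this formulation is that a pairwise comparator, unlike a fixed set of class labels, can transfer to domains containing previously unseen categories.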

Improving Speech-related Facial Action Unit Recognition by Audiovisual Information Fusion

Tuesday, February 27, 2018 - 08:00 am
Meeting room 2267, Innovation Center
DISSERTATION DEFENSE
Speaker: Zibo Meng
Advisor: Dr. Yan Tong

Abstract: Despite great progress on posed facial displays under controlled image acquisition, the performance of facial action unit (AU) recognition degrades significantly for spontaneous facial displays. Recognizing AUs accompanied by speech is even more challenging, since such AUs are generally activated at low intensity, with subtle facial appearance and geometric changes during speech, and, more importantly, they often introduce ambiguity in detecting other co-occurring AUs, e.g., by producing non-additive appearance changes. Current AU recognition systems use information extracted only from the visual channel. However, sound is highly correlated with the visual channel in human communication. We therefore propose to exploit both audio and visual information for AU recognition. First, a feature-level fusion method combining audio and visual features is introduced: features are extracted independently from the visual and audio channels, aligned to handle the difference in time scales and the time shift between the two signals, and then integrated via feature-level fusion for AU recognition. Second, a novel approach is developed that recognizes speech-related AUs exclusively from audio signals, based on the fact that facial activities are highly correlated with voice during speech: dynamic and physiological relationships between AUs and phonemes are modeled through a continuous-time Bayesian network (CTBN), and AU recognition is performed by probabilistic inference over the CTBN model. Third, a novel audiovisual fusion framework is developed that aims to make the best use of visual and acoustic cues in recognizing speech-related facial AUs. In particular, a dynamic Bayesian network (DBN) is employed to explicitly model the semantic and dynamic physiological relationships between AUs and phonemes, as well as measurement uncertainty; AU recognition is then conducted by probabilistic inference over the DBN model. To evaluate the proposed approaches, a pilot AU-coded audiovisual database was collected. Experiments on this dataset demonstrate that the proposed frameworks yield significant improvements in recognizing speech-related AUs compared to state-of-the-art visual-based methods. Furthermore, even greater improvement is achieved for those AUs whose visual observations are impaired during speech.
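The temporal-alignment step in the feature-level fusion described above can be sketched as follows (feature dimensions, frame rates, and the nearest-frame alignment are illustrative assumptions, not the dissertation's actual settings): audio features arrive at a higher rate than video frames, so each video frame is paired with the nearest, optionally time-shifted, audio frame before the two feature vectors are concatenated.

```python
import numpy as np

def align_and_fuse(visual, audio, visual_fps=30.0, audio_fps=100.0, shift_s=0.0):
    """visual: (Tv, Dv) per-frame features; audio: (Ta, Da) per-frame features.
    shift_s: time shift of audio relative to video, in seconds.
    Returns fused features of shape (Tv, Dv + Da)."""
    t_video = np.arange(visual.shape[0]) / visual_fps
    # Map each video timestamp to the index of the nearest audio frame.
    idx = np.clip(np.round((t_video + shift_s) * audio_fps).astype(int),
                  0, audio.shape[0] - 1)
    return np.concatenate([visual, audio[idx]], axis=1)

# One second of toy data: 30 video frames x 136 dims, 100 audio frames x 13 dims.
fused = align_and_fuse(np.zeros((30, 136)), np.zeros((100, 13)))
print(fused.shape)  # (30, 149)
```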