My Journey in Artificial Intelligence

Thursday, November 7, 2019 - 07:00 pm
Gressette Room, Harper College, 3rd floor
Dr. Marco Valtorta is a professor of Computer Science and Engineering in the College of Engineering and Computing at the University of South Carolina. He received a laurea degree with highest honors in electrical engineering from the Politecnico di Milano, Milan, Italy, in 1980. After his graduate work in Computer Science at Duke University, he joined the Commission of the European Communities in Brussels, Belgium, where he worked as a project officer for the European Strategic Programme in Information Technologies from 1985 to 1988. In August 1988, he joined the faculty at UofSC in the Department of Computer Science, where he primarily does research in Artificial Intelligence.

His first research result, known as "Valtorta's theorem" and obtained in 1980, was described in 2011 as "seminal" and "an important theoretical limit of usefulness" for heuristics computed by search in an abstracted problem space. Most of his more recent research has been in the area of uncertainty in artificial intelligence. His 2006 proof, with graduate student Yimin Huang, of the completeness of Pearl's do-calculus of intervention settled a 13-year-old conjecture. His students have won best paper awards at the Conference on Uncertainty in Artificial Intelligence (1993, 2006) and the International Conference on Information Quality (2006).

He was undergraduate director for the Department of Computer Science from 1993 to 1999 and was awarded the College of Science and Mathematics Outstanding Advisor Award in 1997. In addition to his teaching and research activity, he has served in numerous service capacities at the departmental level (e.g., chair of the tenure and promotion committee and of the colloquium committee), the college level (e.g., College of Engineering and Computing scholarship committee), and the university level (e.g., faculty senator, committee on curricula and courses, committee on instructional development, university committee on tenure and promotion). Elected in April 2016, he began a two-year term as chair of the university faculty senate in August 2017.

Development of a national-scale Big data analytics pipeline to study the potential impacts of flooding on critical infrastructure and communities

Thursday, November 7, 2019 - 02:00 pm
Meeting Room 2267, Innovation Center
DISSERTATION DEFENSE
Department of Computer Science and Engineering, University of South Carolina
Author: Nattapon Donratanapat
Advisors: Dr. Jose Vidal and Dr. Vidya Samadi
Date: Nov 7th, 2019
Time: 2:00 pm
Place: Meeting Room 2267, Innovation Center

Abstract
With the rapid development of the Internet and mobile devices, crowdsourcing techniques have emerged to facilitate data processing and problem solving, particularly for flood emergencies. We developed the Flood Analytics Information System (FAIS) application as a Python interface that gathers Big Data from multiple servers and analyzes flood hazards in real time. The interface uses crowd intelligence and machine learning to provide flood warnings, river-level information, and natural language processing of tweets during flooding events, with the aim of improving situational awareness for flood risk managers and other stakeholders. We demonstrated and tested FAIS across the Lower Pee Dee Basin in the Carolinas, where Hurricane Florence caused extensive damage and disruption. Our research aim was to develop and test an integrated solution, based on real-time Big Data, for stakeholder map-based dashboard visualizations that can be applied to other countries and to a range of weather-driven emergency situations. The application allows the user to submit search requests to USGS and Twitter through criteria that modify the request URLs sent to the data sources. The prototype successfully identifies a dynamic set of at-risk areas using web-based river-level and flood-warning API sources. The list of prioritized areas can be updated every 15 minutes as environmental information and conditions change.
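
For readers who want a concrete starting point, below is a minimal sketch of the kind of real-time river-level request the abstract describes, using only the public USGS Instantaneous Values web service; the site number is illustrative and none of FAIS's actual internals are reproduced here.

```python
import requests

# Public USGS Instantaneous Values service (the kind of web API FAIS polls).
USGS_IV_URL = "https://waterservices.usgs.gov/nwis/iv/"

def latest_gage_height(site: str) -> float:
    """Return the most recent gage-height reading (feet) for a USGS site."""
    params = {
        "format": "json",
        "sites": site,
        "parameterCd": "00065",  # 00065 = gage height
    }
    resp = requests.get(USGS_IV_URL, params=params, timeout=30)
    resp.raise_for_status()
    series = resp.json()["value"]["timeSeries"][0]
    return float(series["values"][0]["value"][-1]["value"])

if __name__ == "__main__":
    # Hypothetical site number; FAIS would poll many sites every 15 minutes.
    print(latest_gage_height("02110704"))
```

Such a poller, combined with Twitter search requests, is the kind of ingredient that would feed the map-based dashboard the abstract describes.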

Creating an AI Platform to Automatically Audit Financial Spend

Thursday, November 7, 2019 - 01:00 pm
Room 2277 in Innovation Building
The Artificial Intelligence Institute, in collaboration with the Department of Computer Science and Engineering, presents Kunal Verma.

AppZen uses cutting-edge AI technologies to automatically audit financial data such as expense reports, invoices, and contracts. Our platform is currently used by over 1,500 customers, including Amazon and JP Morgan Chase. In this talk, we will discuss some of the use cases that we solve and some key challenges that we face. A key ingredient in our approach is a semantic layer that understands unstructured data; we use this understanding to create features for machine/deep learning models. We will provide an overview of this unique approach and also discuss how we are creating a general AI platform that will help us solve current use cases and beyond.

Bio: Kunal co-founded AppZen and developed its core artificial intelligence technology. He is passionate about developing AI-based solutions to solve real-world business problems. He is responsible for AppZen's product vision and oversees the company's R&D and data science teams. Previously, he led research teams at Accenture Technology Labs that were responsible for developing AI-based tools for Fortune 500 companies. He earned his Ph.D. in Computer Science from the University of Georgia with a focus on semantic technologies. He is a published author with over 50 refereed papers and holds several granted patents. Kunal is a keen golfer and an avid follower of the Georgia Bulldogs and the Golden State Warriors.
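
The talk describes the approach only at a high level; as a hedged illustration of the general pattern of deriving features from unstructured expense text and feeding them to a conventional classifier, a toy sketch (invented data and labels, not AppZen's semantic layer) might look like:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented expense descriptions labeled compliant (0) / suspicious (1).
texts = [
    "team lunch with client, four attendees listed",
    "hotel, two nights, approved conference travel",
    "gift cards purchased, no business purpose given",
    "alcohol only, no attendees listed",
]
labels = [0, 0, 1, 1]

# Stand-in for a semantic layer: plain TF-IDF n-grams. A real system would
# add entity- and policy-level features extracted from the text.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["dinner, no attendees listed"]))
```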

Machine Learning Based Ultra High Carbon Steel Micro-Structure Image Segmentation

Wednesday, November 6, 2019 - 09:00 am
Meeting Room 2267, Innovation Center
DISSERTATION DEFENSE
Department of Computer Science and Engineering, University of South Carolina
Author: Sumith Kuttiyil Suresh
Advisor: Dr. Jianjun Hu
Date: Nov 6th, 2019
Time: 9:00 am
Place: Meeting Room 2267, Innovation Center

Abstract
Ultra-high carbon steels (UHCS) are carbon steels that contain 1-2.1% carbon. These steels show remarkable structural properties when processed under different external conditions: they can be made superplastic at intermediate temperatures, strong with good tensile ductility at ambient temperatures, and hard with good compression toughness. Contrary to conventional wisdom, UHCS are ideal replacements for currently used high-carbon (0.5-1% carbon) steels because they have comparable ductility but higher strength and hardness. UHCS can be laminated with other metal-based materials to achieve superplasticity, high impact resistance, exceptionally high tensile ductility, and improved fatigue behavior, which makes them widely used for building various kinds of industrial tools. These qualities of UHCS are attributable to the variety of microstructures formed under different processing conditions. Hence the study of the micro-constituents that contribute to the overall properties of the material is a core focus of materials science. In this research, I study the usefulness of machine learning and image processing techniques for UHCS image classification and segmentation, focusing primarily on image segmentation methods that segment UHCS microstructure images by micro-constituent location.
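
As a hedged illustration of the pixel-level segmentation the abstract describes, the sketch below clusters a grayscale micrograph into a few assumed micro-constituent classes with k-means; the file name is a placeholder and the dissertation's actual models are not reproduced.

```python
import numpy as np
from skimage import color, io
from sklearn.cluster import KMeans

# Placeholder path: any grayscale UHCS micrograph would do.
image = io.imread("uhcs_micrograph.png", as_gray=True)

# Cluster pixels by intensity into k assumed micro-constituent classes
# (e.g., matrix vs. carbide network); real work would add texture features.
k = 3
labels = KMeans(n_clusters=k, n_init=10).fit_predict(image.reshape(-1, 1))
segmented = labels.reshape(image.shape)

# Overlay the segment labels on the original image and save the result.
overlay = color.label2rgb(segmented, image=image)
io.imsave("uhcs_segmented.png", (overlay * 255).astype(np.uint8))
```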

Neurons, Perceptron and Deep Learning: Milestones toward Artificial General Intelligence

Friday, November 1, 2019 - 02:20 pm
Innovation Center, Room 1400
Speaker: Venkat Kaushik
Affiliation: OpenText Corporation
Location: Innovation Center, Room 1400
Time: Friday 11/1/2019 (2:20 - 3:10 pm)

Abstract: Deep Learning is a culmination of advances in cognitive neuroscience, neurobiology, clinical psychology, mathematics, statistics, and logic. The explosion in data, coupled with recent advancements in custom compute and cloud-scale storage, has brought us super-human narrow AI systems such as Watson, AlphaZero, and DeepFace. A portion of this talk explores key ideas and their significance in the context of neural networks, which form the basis of most deep learning systems; here, I will highlight several milestones that led us to our current vantage point. The remainder of the talk focuses on the prerequisites that may help pave the way for a safe, human-centered Artificial General Intelligence (AGI).

Bio: I am a solutions architect and specialize in Enterprise Information Management (EIM) for large enterprises. I received a PhD in physics from the University of Texas at Arlington in 2007. My doctoral dissertation and post-doctoral work centered, respectively, on a search for the Higgs boson at the DZero experiment at Fermilab and on precision top quark measurements for the ATLAS experiment at CERN. I have witnessed the collection and use of petabyte-scale datasets and of grid computing spanning multiple continents. I used artificial neural networks in the search for exotic particles and contributed to building and refining software for advanced multivariate statistics, hypothesis testing, and particle/detector Monte Carlo simulations. Since transitioning to the IT industry in 2013, I have assumed several roles in platform and data engineering, with technical leadership in big data technologies, specializing in distributed, relational, and in-memory databases and message streaming. My current focus areas are leveraging machine learning algorithms to improve business outcomes in EIM and the practice of cloud architecture. I have enjoyed being an adjunct faculty member in the physics department of the University of South Carolina. I am an avid technology enthusiast and a voracious consumer of knowledge.
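
Since the perceptron is the earliest milestone the title names, here is a minimal sketch of Rosenblatt's perceptron learning rule on a toy linearly separable problem (the AND function); it is illustrative only and not part of the talk materials.

```python
import numpy as np

# Toy problem: learn the AND function, which is linearly separable.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(20):                        # a few epochs suffice here
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0  # threshold unit
        w += lr * (target - pred) * xi     # classic perceptron update
        b += lr * (target - pred)

print([1 if xi @ w + b > 0 else 0 for xi in X])  # -> [0, 0, 0, 1]
```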

Phylogenetic Reconstruction Analysis on Gene Order and Copy Number Variation

Friday, November 1, 2019 - 11:00 am
Meeting Room 2265, Innovation Center
DISSERTATION DEFENSE
Department of Computer Science and Engineering, University of South Carolina
Author: Ruofan Xia
Advisor: Dr. Jijun Tang
Date: Nov 1st, 2019
Time: 11:00 am
Place: Meeting Room 2265, Innovation Center

Abstract
Genome rearrangement is known as one of the main evolutionary mechanisms at the genomic level. Phylogenetic analysis based on rearrangement has played a crucial role in biological research in the past decades, especially with the increasing availability of fully sequenced genomes. In general, phylogenetic analysis aims to solve two problems: the Small Parsimony Problem (SPP) and the Big Parsimony Problem (BPP). Maximum parsimony is a popular approach to SPP and BPP that relies on iteratively solving an NP-hard problem, the median problem. As a result, current median solvers and phylogenetic inference methods based on the median problem all face serious scalability problems and cannot be applied to datasets with large and distant genomes. In this thesis, we propose a new median solver for gene order data that combines double-cut-and-join (DCJ) sorting with simulated annealing (SAMedian). Based on this median solver, we built a new phylogenetic inference method to solve both the SPP and BPP problems. Our experimental results show that the new median solver achieves excellent performance on simulated datasets, and that the phylogenetic inference tool built on it outperforms other existing methods.

Cancer is known for its heterogeneity and is regarded as an evolutionary process driven by somatic mutations and clonal expansions. This evolutionary process can be modeled by a phylogenetic tree, and phylogenetic analysis of multiple subclones of cancer cells can facilitate the study of the progression of tumor variants. Copy-number aberrations, in the form of segmental amplifications and deletions, occur frequently in many types of tumors. In this thesis, we developed a distance-based method for reconstructing phylogenies from copy-number profiles of cancer cells. We demonstrate the importance of correcting the edit (minimum) distance to the estimated actual number of events. Experimental results show that our approaches provide accurate and scalable results in estimating the actual number of evolutionary events between copy-number profiles and in reconstructing phylogenies.

High-throughput sequencing of tumor samples has revealed various degrees of genetic heterogeneity between primary tumors and their distant subpopulations. The clonal theory of cancer evolution holds that tumor cells descend from a common origin cell; this origin cell carries an advantageous mutation that triggers a clonal expansion, producing a large population of descendant cells. To further investigate cancer progression, phylogenetic analysis of the tumor cells is imperative. In this thesis, we developed a novel approach to infer the phylogeny from both Next-Generation Sequencing and Long-Read Sequencing data. Experimental results show that our proposed method infers the entire phylogenetic progression very accurately on both data types.

In summary, this thesis focuses on phylogenetic analysis of both gene order data and copy-number variations. The work can be categorized into three parts. First, we developed a new median solver for the median problem and phylogeny inference under the DCJ model, and applied it to both simulated data and real yeast data. Second, we explored a new approach to infer the phylogeny of copy-number profiles over a wide range of parameters (e.g., different numbers of leaf genomes, different numbers of positions in the genome, and different tree diameters). Third, we concentrated on phylogeny inference from high-throughput sequencing data and proposed a novel approach to analyze the entire expansion process of cancer cells on both Next-Generation Sequencing and Long-Read Sequencing data.
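
To make the simulated-annealing median search concrete, here is a minimal generic sketch; `median_score` and `random_neighbor` are placeholders for the sum of DCJ distances to the three input genomes and for a random DCJ operation, neither of which is reproduced from the dissertation.

```python
import math
import random

def sa_median(initial, median_score, random_neighbor,
              t0=1.0, cooling=0.995, steps=10_000):
    """Generic simulated-annealing loop: minimize median_score."""
    current, current_cost = initial, median_score(initial)
    best, best_cost = current, current_cost
    t = t0
    for _ in range(steps):
        candidate = random_neighbor(current)   # e.g., apply one DCJ operation
        cost = median_score(candidate)         # e.g., sum of DCJ distances
        # Always accept improvements; accept worse moves with prob. e^(-delta/t).
        if cost < current_cost or random.random() < math.exp((current_cost - cost) / t):
            current, current_cost = candidate, cost
            if cost < best_cost:
                best, best_cost = candidate, cost
        t *= cooling                           # geometric cooling schedule
    return best, best_cost
```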

Properties, Learning Algorithms and applications of Chain Graphs and Bayesian Hypergraphs

Wednesday, October 23, 2019 - 11:00 am
Meeting Room 2277, Innovation Center
DISSERTATION DEFENSE
Department of Computer Science and Engineering, University of South Carolina
Author: Mohammad Ali Javidian
Advisor: Dr. Marco Valtorta
Date: Oct 23, 2019
Time: 11:00 am
Place: Meeting Room 2277, Innovation Center

Abstract
Probabilistic graphical models (PGMs) use graphs, whether undirected, directed, or mixed, to represent possible dependencies among the variables of a multivariate probability distribution. PGMs such as Bayesian networks and Markov networks are now widely accepted as a powerful and mature framework for reasoning and decision making under uncertainty in knowledge-based systems. With the increase in their popularity, the range of graphical models being investigated and used has also expanded. Several types of graphs with different conditional independence interpretations, also known as Markov properties, have been proposed and used in graphical models. The graphical structure of a Bayesian network is a directed acyclic graph (DAG), which has the advantage of supporting an interpretation of the graph in terms of cause-effect relationships. A limitation, however, is that a DAG can model only asymmetric relationships, such as cause and effect, between variables. Chain graphs, which admit both directed and undirected edges, can be used to overcome this limitation. Today there exist three main interpretations of chain graphs in the literature: the Lauritzen-Wermuth-Frydenberg, the Andersson-Madigan-Perlman, and the multivariate regression interpretations. In this thesis, we study these interpretations based on their separation criteria and the intuition behind their edges. Since structure learning is a critical component in constructing an intelligent system based on a chain graph model, we propose new feasible and efficient structure learning algorithms to learn chain graphs from data under the faithfulness assumption.

The proliferation of different PGMs that allow factorizations of different kinds leads us to consider a more general graphical structure in this thesis, namely directed acyclic hypergraphs. Directed acyclic hypergraphs are the graphical structure of a new probabilistic graphical model that we call Bayesian hypergraphs. Since there are many more hypergraphs than DAGs, undirected graphs, chain graphs, and, indeed, other graph-based networks, Bayesian hypergraphs can model much finer factorizations and thus are more computationally efficient. Bayesian hypergraphs also allow a modeler to represent causal patterns of interaction, such as Noisy-OR, graphically and without additional annotations. We introduce global, local, and pairwise Markov properties of Bayesian hypergraphs and prove under which conditions they are equivalent. We define a projection operator, called shadow, that maps Bayesian hypergraphs to chain graphs, and show that the Markov properties of a Bayesian hypergraph are equivalent to those of its corresponding chain graph. We extend the causal interpretation of LWF chain graphs to Bayesian hypergraphs and provide corresponding formulas and a graphical criterion for intervention.

The framework of graphical models provides algorithms for discovering and analyzing structure in complex distributions, describing them succinctly, and extracting unstructured information, which allows such models to be constructed and utilized effectively. Two of the most important applications of graphical models are causal inference and information extraction. To demonstrate these abilities, we conduct a causal analysis, comparing the performance behavior of highly configurable systems across environmental conditions (changing workload, hardware, and software versions), to explore when and how causal knowledge can be exploited for performance analysis.
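
The separation criteria the abstract mentions can be made concrete in their simplest special case: d-separation in a DAG (a chain graph with no undirected edges), tested by the textbook moralization criterion. The sketch below is written from that definition, not from the dissertation's code, and assumes X, Y, and Z are disjoint node sets.

```python
import networkx as nx

def d_separated(dag, xs, ys, zs):
    """X ⊥ Y | Z in a DAG iff Z separates X and Y in the moral graph
    of the ancestral subgraph of X ∪ Y ∪ Z."""
    relevant = set().union(xs, ys, zs)
    for v in list(relevant):
        relevant |= nx.ancestors(dag, v)
    sub = dag.subgraph(relevant)
    moral = nx.Graph(sub.to_undirected())
    for v in sub:                          # "marry" the parents of each node
        parents = list(sub.predecessors(v))
        for i in range(len(parents)):
            for j in range(i + 1, len(parents)):
                moral.add_edge(parents[i], parents[j])
    moral.remove_nodes_from(zs)            # conditioning removes these nodes
    return all(not nx.has_path(moral, x, y) for x in xs for y in ys)

# Collider example A -> C <- B: conditioning on C opens the path.
g = nx.DiGraph([("A", "C"), ("B", "C")])
print(d_separated(g, {"A"}, {"B"}, set()))   # True
print(d_separated(g, {"A"}, {"B"}, {"C"}))   # False
```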

Stacked Modelling Framework

Monday, October 21, 2019 - 03:30 pm
Meeting Room 2265, Innovation Center
DISSERTATION DEFENSE
Department of Computer Science and Engineering, University of South Carolina
Author: Kareem Abdelfatah
Advisor: Dr. Gabriel Terejanu
Date: Oct 21, 2019
Time: 3:30 pm
Place: Meeting Room 2265, Innovation Center

Abstract
This thesis develops a predictive modeling framework based on stacked Gaussian processes and applies it to two main applications in environmental and chemical engineering.

First, a network of independently trained Gaussian processes (StackedGP) is introduced to obtain analytical predictions of quantities of interest (model outputs) with quantified uncertainties. The StackedGP framework supports component-based modeling in fields such as environmental and chemical science, enhances predictions of quantities of interest through a cascade of intermediate predictions usually addressed by cokriging, and propagates uncertainties through emulated dynamical systems driven by uncertain forcing variables. Using analytical first- and second-order moments of a Gaussian process with uncertain inputs under squared exponential and polynomial kernels, we obtain approximate expectations of model outputs that require an arbitrary composition of functions. The performance of the proposed nonparametric stacked model in model composition and cascading predictions is measured on different applications and datasets. The framework has been evaluated on a wildfire problem and a mineral resource problem using real data, and its application to time-series prediction is demonstrated on a 2D puff advection problem.

In addition, the StackedGP is applied to a challenging environmental problem: the prediction of mycotoxins. In this part of the work, we develop a stacked Gaussian process using both field and wet-lab measurements to predict fungal toxin (aflatoxin) concentrations in corn in South Carolina. While most of the aflatoxin contamination issues associated with the post-harvest period in the U.S. can be controlled with expensive testing, a systematic and economical approach is lacking to determine how pre-harvest aflatoxin risk adversely affects crop producers, as aflatoxin is virtually unobservable on a geographical and temporal scale. This information gap carries significant cost burdens for grain producers, and it is filled by the proposed stacked Gaussian process. The novelty of this part is twofold. First, probabilistic aflatoxin maps are obtained using an analytical scheme to propagate uncertainty through the stacked Gaussian process; the model predictions are validated both at the Gaussian process component level and at the system level for the entire stacked Gaussian process using historical field data. Second, a novel derivation is introduced to calculate the analytical covariance of aflatoxin production at two geographical locations. As with kriging/Gaussian processes, this is used to predict aflatoxin at unobserved locations from measurements at nearby locations, but with the prior mean and covariance provided by the stacked Gaussian process. As field measurements arrive, this measurement-update scheme may be used in targeted field inspections and in warning farmers of emerging aflatoxin contamination.

Lastly, we apply the StackedGP framework to a chemical engineering application. Computational catalyst discovery involves identifying a meaningful model and suitable descriptors that determine catalyst properties. First, we study the impact of combining various descriptors (e.g., reaction energies, metal descriptors, and bond counts) for modeling transition state (TS) energies, based on a database of adsorption and TS energies across the transition metal surfaces Pd(111), Pt(111), Ni(111), Ru(0001), and Rh(111) for the decarboxylation and decarbonylation of propionic acid, a chemistry characteristic of biomass conversion. Results of different machine learning models for more than 1,330 of these descriptor combinations suggest that there is no statistically significant difference between linear and nonlinear models when using the right combination of reactant energies, metal descriptors, and bond counts; however, linear models are inferior when bond counts and metal descriptors are not included. Furthermore, when data are missing for reaction steps on all metals, conventional linear scaling is inferior to linear and nonlinear models with a proper, and surprisingly robust, choice of descriptors. Finally, the StackedGP framework is evaluated in modeling the adsorption and transition state energies as functions of metal descriptors with data from all metal surfaces. Given these energies, the turnover frequency (TOF) can be estimated using microkinetic models.
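
As a hedged sketch of the stacking idea (using Monte Carlo sampling in place of the dissertation's analytical moment propagation), two scikit-learn Gaussian processes are chained below on synthetic data.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Synthetic two-stage system x -> h(x) -> y(h); each stage gets its own GP.
X = np.linspace(0, 5, 40).reshape(-1, 1)
h = np.sin(X).ravel() + 0.05 * rng.standard_normal(40)
y = h**2 + 0.05 * rng.standard_normal(40)

gp1 = GaussianProcessRegressor(kernel=RBF(), alpha=1e-3).fit(X, h)
gp2 = GaussianProcessRegressor(kernel=RBF(), alpha=1e-3).fit(h.reshape(-1, 1), y)

# Propagate uncertainty through the stack by Monte Carlo: sample the
# intermediate GP, then summarize the second GP's predictions over samples.
x_star = np.array([[2.5]])
h_samples = gp1.sample_y(x_star, n_samples=200, random_state=0).ravel()
y_means = gp2.predict(h_samples.reshape(-1, 1))
print(y_means.mean(), y_means.std())  # stacked prediction and spread at x_star
```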

ACM Codeathon

Friday, October 18, 2019 - 07:00 pm
Swearingen 2A11
We're excited to invite you to the Fall 2019 Code-a-Thon! The competition will begin this Friday, October 18th, at 7 pm. You may attend in person in the Swearingen 2A11 computer lab or participate online at the links provided at the bottom of this email.

Every semester, our chapter of the Association for Computing Machinery hosts a 24-hour Code-a-Thon (coding competition) open to all University of South Carolina students. Come solve problems, eat (FREE) pizza, find internships, and battle for prizes at ACM's most anticipated event of the semester. Do not be intimidated by the 24-hour run time! While the Code-a-Thon does indeed run 24 hours (7 PM - 7 PM), you do not have to be present for all of it. Submitting problems from home is allowed and welcomed, especially for alumni not in Columbia. There will be pizza, snacks, and drinks at the event.

We have 4 different divisions so that you do not have to compete with people of wildly different skill levels. Please note that the divisions are split by CURRENT course enrollment: if you have completed CSCE 146 and are currently enrolled in CSCE 240, you would compete in the 240 division; however, if you have completed CSCE 146 but are not currently enrolled in CSCE 240, you would be eligible to compete in the 146 division. Competitors not in a computing degree program at USC should self-evaluate as (1) absolute beginner, (2) beginner with some coding experience, (3) intermediate, or (4) advanced, corresponding to the 145, 146, 240, and 350 divisions, respectively.

Prizes are awarded for 1st, 2nd, and 3rd place in each division. Prizes have not yet been determined, but in the past have included Arduinos, gift cards, flash drives, and USB keyboards.

Check out the news post on our ACM chapter's website for more information and links to the competition: https://acm.cse.sc.edu/events/2019-10-18-code-a-thon.html

Once the competition starts, you can click on the links to join your division:
145 - introductory programming questions
146 - data structures questions
240 - algorithms questions
350 - advanced algorithms questions

Look forward to seeing you there! Please reach out to me at hdamron@email.sc.edu with any questions.

Hunter Damron
ACM Student Chapter President