Mind the Gap: What lies between the end of CMOS scaling and future technologies?
Meeting Location:
Storey Innovation Center 1400
Live Meeting Link for the virtual audience:
Talk Abstract: David's talk will cover some of the exciting technologies the Devices, Circuits & Systems group at Arm is researching, as well as his view of general trends in, and the future of, process technology. And since it's not possible to discuss CMOS scaling without commenting on Moore's law, he will do that, too. 🙂
Speaker's Bio: David Pietromonaco has been in the semiconductor industry for almost 30 years, at Hewlett-Packard, Sony, and most recently Artisan/Arm (where he has spent 20 of those years). He works in Arm Research, in the Devices, Circuits & Systems group, specifically on the Technology Optimized Design team. That team looks 5-10 years ahead to understand future computing technologies and how to utilize them.
DISSERTATION DEFENSE
Department of Computer Science and Engineering
University of South Carolina
Author : John Ravan
Advisor : Dr. Csilla Farkas
Date : November 1, 2021
Time : 1:30pm
Place : Virtual Defense
Join Zoom Meeting
https://citadelonline.zoom.us/j/5257755660?pwd=dzBwNW85RUdSRjVWdGp4RzRxbzE2UT09
Abstract
Concurrent database transactions within a web service environment can cause a variety of problems without proper concurrency control mechanisms in place, including data integrity violations, deadlock, and inefficiency. Even today's industry-standard solutions take a reactive approach rather than proactively preventing these problems. We deliver a twofold solution: a proactive, prediction-based approach that ensures consistency while keeping execution time the same as or faster than current industry solutions. The first part of this solution involves prototyping and formally proving a prediction-based scheduler.
The prediction-based scheduler leverages a metric that promotes transactions with reliable reputations, based on each transaction's performance: its likelihood to commit and its efficiency within the system. We can then predict the outcome of a transaction from this metric and apply customized lock behaviors to address consistency issues in current web service environments. We have formally proven that the solution increases consistency among web service transactions without performance degradation worse than industry-standard two-phase locking (2PL). A simulation was developed using a multi-threaded approach to model concurrent transactions. Experimental results show that the solution performs comparably to industry solutions, with the added benefit of ensured consistency in some cases and deadlock avoidance in others. This work has been published in IEEE Transactions on Services Computing.
The second part of the solution involves building the prediction-based metric mentioned above. In the initial solution, we assumed the prediction-based categorization was given as input, in order to prove the feasibility and correctness of a prediction-based scheduler.
Once that was established, we extended the four-category solution to a dynamic reputation score built upon transactional attributes: system abort ranking, user abort ranking, efficiency ranking, and commit ranking. With these four attributes we established a dynamic dominance structure that allows a transaction to be promoted or demoted based on its performance within the system. This work has been submitted to ACM Transactions on Information Systems and is awaiting review.
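To make the reputation mechanism concrete, here is a minimal sketch of how such a score and its categorization might be computed. The weights, thresholds, and category names are illustrative assumptions, not values from the dissertation.

```python
# Hypothetical sketch of a dynamic reputation score built from the four
# attributes named above. Weights, thresholds, and category names are
# illustrative assumptions, not values from the dissertation.
from dataclasses import dataclass

@dataclass
class TransactionStats:
    system_abort_rank: float   # each ranking normalized to 0.0 (worst) .. 1.0 (best)
    user_abort_rank: float
    efficiency_rank: float
    commit_rank: float

def reputation(stats, weights=(0.25, 0.25, 0.25, 0.25)):
    """Weighted combination of the four rankings; higher is better."""
    ranks = (stats.system_abort_rank, stats.user_abort_rank,
             stats.efficiency_rank, stats.commit_rank)
    return sum(w * r for w, r in zip(weights, ranks))

def category(score):
    """Map a score onto one of four categories; a transaction is
    promoted or demoted as its score changes over time."""
    if score >= 0.75:
        return "trusted"      # most permissive lock behavior
    if score >= 0.50:
        return "reliable"
    if score >= 0.25:
        return "suspect"
    return "unreliable"       # strictest lock behavior

t = TransactionStats(0.9, 0.8, 0.7, 0.95)
print(category(reputation(t)))  # -> trusted
```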
Together, the two phases provide a complete prediction-based transaction scheduling solution with dynamic categorization, regardless of the transactional environment.
Future work would extend the prediction-based solution to a multi-level secure database by adding a further dimension: a security classification alongside the attributes used for dynamic reputation, allowing transactions to establish dominance. The goal is to prevent the covert timing channels that arise in multi-level secure database systems from differing classifications. Our reputation score would provide a cover story for the timing differences between transactions at different security levels, enabling a more robust scheduling algorithm in which high-security transactions gain priority over low-security transactions without exposing a covert timing channel.
Live Meeting Link for the virtual audience:
Talk Abstract: In the past 10 years, we have seen the rise of SaaS (Software as a Service) and watched it take over many existing businesses. Amazon, Netflix, and Expedia are household-name SaaS-operated businesses that replaced traditional ones. As Machine Learning (ML) and Artificial Intelligence (AI) rise, we are also seeing many jobs replaced and automated by machines. This is happening at a much faster pace than anticipated. AI is also replacing jobs once thought to be securely dominated by humans: jobs that require some form of human intellect. We will discuss what AI really is, beyond what the media defines it to be, as well as the implications of this automation for society and the labor market. We will try to show that AI, contrary to the latest media scare, is going to bring an era of unprecedented productivity gains and prosperity, comparable to what the industrial revolution brought a few centuries ago. But that requires us to be prepared as a society.
Speaker's Bio:
Ahmad Abdulkader is a renowned industry expert in Machine Learning and Artificial Intelligence, with over 50 publications and patents.
Ahmad is currently a Distinguished Scientist at Facebook AI Applied Research. He invented DeepText, a deep-learning text understanding platform that is widely used throughout Facebook and the open-source community.
Prior to Facebook, Abdulkader was the co-founder & CTO of Voicea.ai, which was acquired by Cisco in 2019. Voicea built a widely used meetings platform that became part of Cisco's WebEx.
https://www.amazon.com/Attracting-technical-co-founders-corporate-fundraising/dp/B08KVDHRN8
Ahmad also worked for Google, where he built the optical character recognition and verification systems for Google's BookSearch. He is one of the main contributors to Tesseract, the most widely used open-source OCR engine.
https://github.com/tesseract-ocr/tesseract/blob/master/AUTHORS
In addition, Ahmad was one of the pioneers of StreetView at Google and one of the main creators of StreetSmart, a computer vision platform for privacy protection and scene understanding in StreetView.
https://research.google/pubs/pub35481/
At Microsoft Corporation, Ahmad was one of the pioneers of the handwriting recognition technology that powers Microsoft Surface devices.
Ahmad is also one of the earliest contributors to Arabic OCR and handwriting recognition, and is the co-inventor of the first Arabic OCR engine (ICRA) in 1994.
https://org.uib.no/smi/ksv/ArabOCR.html
Ahmad studied at Cairo University, where he received his B.Sc. and M.Sc. in Electrical Engineering, and at McMaster University and the University of Washington, where he received his M.Sc. and Ph.D. in Computer Science.
Meeting Location:
Storey Innovation Center 1400
Live Meeting Link for the virtual audience:
Speaker's Bio: Dr. Qiang Zeng is an Assistant Professor in the CSE department at the University of South Carolina. He received his Ph.D. in Computer Science and Engineering from Penn State University. His main interest is Computer Systems Security, with a focus on the Internet of Things and Mobile Computing. He is also interested in Adversarial Machine Learning. He publishes his work in CCS, USENIX Security, NDSS, MobiCom, MobiSys, PLDI, etc.
Talk Abstract: As IoT devices are integrated via automation and coupled with the physical environment, anomalies in an appified smart home, whether due to attacks or device malfunctions, may lead to severe consequences. Prior works that utilize data mining techniques to detect anomalies suffer from high false alarm rates and miss many real anomalies. Our observation is that data mining-based approaches overlook a large chunk of information about automation programs (also called smart apps) and device relations. We propose Home Automation Watcher (HAWatcher), a semantics-aware anomaly detection system for appified smart homes. HAWatcher models a smart home's normal behaviors based on both event logs and semantics. Given a home, HAWatcher generates hypothetical correlations according to semantic information, such as apps, device types, relations, and installation locations, and verifies them with event logs. The mined correlations are refined using correlations extracted from the installed smart apps. The refined correlations are used by a shadow execution engine to simulate the smart home's normal behaviors. During runtime, inconsistencies between devices' real-world states and simulated states are reported as anomalies. We evaluate our prototype on the SmartThings platform in four real-world testbeds and test it against a total of 62 different anomaly cases. The results show that HAWatcher achieves high accuracy, significantly outperforming prior approaches.
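The shadow-execution step can be illustrated with a toy sketch: verified correlations drive a simulated copy of the device states, and any divergence between real and simulated state is flagged. The correlation format, device names, and states below are invented for illustration and are not HAWatcher's actual data model.

```python
# Toy shadow execution: each correlation says that when `trigger` occurs,
# `device` should enter `state`. Events advance the shadow model; any
# mismatch between shadow and real device states is reported as an anomaly.
correlations = [
    {"trigger": ("motion_sensor", "active"), "device": "hall_light", "state": "on"},
    {"trigger": ("door", "open"), "device": "alarm", "state": "armed"},
]

shadow = {"hall_light": "off", "alarm": "disarmed"}  # simulated normal behavior
real = {"hall_light": "off", "alarm": "disarmed"}    # states reported by devices

def on_event(device, value):
    """Advance the shadow model on an event, then compare with reality."""
    for c in correlations:
        if c["trigger"] == (device, value):
            shadow[c["device"]] = c["state"]
    return [f"anomaly: {d} is {real[d]}, expected {s}"
            for d, s in shadow.items() if real[d] != s]

print(on_event("motion_sensor", "active"))
# -> ['anomaly: hall_light is off, expected on'] (e.g. a failed bulb or an attack)
```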
Two Lectures on AI Explainability
As part of the Trusted AI course in Fall 2021 by Prof. Biplav Srivastava https://sites.google.com/site/biplavsrivastava/research-1/trustedai
Oct 19, Tuesday, 10:00-11:15 am - Talk
Blackboard link: https://us.bbcollab.com/guest/f567247c101145cebc6eaa937af2cecd
Oct 21, Thursday, 10:00-11:15 am - Talk and Working Session
Blackboard link: https://us.bbcollab.com/guest/f567247c101145cebc6eaa937af2cecd
On campus class at Seminar Room, AI Institute, 1112 Greene St, Columbia (5th Floor; Science & Technology Building)
Speakers: Dr. Diptikalyan Saha (Dipti) and Dr. Vijay Arya
Speakers' Bios:
Dr. Diptikalyan Saha (Dipti) is a Senior Technical Staff Member and manager of the Reliable AI team in the Data & AI department of IBM Research, Bangalore. His research interests include Artificial Intelligence, Natural Language Processing, Knowledge Representation, Program Analysis, Security, Software Debugging, Testing, Verification, and Programming Languages. He received his Ph.D. in Computer Science from the State University of New York at Stony Brook and his B.E. in Computer Science and Engineering from Jadavpur University. His group's work on bias in AI systems is available through AI OpenScale in IBM Cloud as well as through the open-source AI Fairness 360 toolkit.
Vijay Arya is a senior researcher in IBM Research AI at the IBM India Research Lab, where he works on problems related to Trusted AI. Vijay has 15 years of combined experience in research and software development. His research spans machine learning, energy & smart grids, network measurement & modeling, wireless networks, algorithms, and optimization. His work has received outstanding technical achievement awards at IBM and has been deployed by power utilities in the USA. Before joining IBM, Vijay worked as a researcher at National ICT Australia (NICTA). He received his PhD in Computer Science from INRIA, France, and a Masters from the Indian Institute of Technology (IIT) Delhi. He has served on the program committees of IEEE, ACM, and IFIP conferences, is a senior member of IEEE & ACM, and has more than 60 conference & journal publications and patents.
DISSERTATION DEFENSE
Department of Computer Science and Engineering
University of South Carolina
Author : Trevor Olsen
Advisor : Dr. Jason O'Kane
Date : Oct 19, 2021
Time : 9:00am
Place : Meeting Room 2265, Innovation Center
Abstract
Given a two-dimensional polygonal space, the multi-robot visibility-based pursuit-evasion problem tasks several pursuer robots with the goal of establishing visibility with an arbitrarily fast evader. The best-known complete algorithm for this problem takes time doubly exponential in the number of robots. However, sampling-based techniques have shown promise in generating feasible solutions in these scenarios.
Existing sampling-based algorithms have long execution times and high failure rates for complex environments. We first address that limitation by proposing a new algorithm that takes an environment as its input and returns a joint motion strategy which ensures that the evader is captured by one of the pursuers. Starting with a single pursuer, we sequentially construct data structures called Sample-Generated Pursuit-Evasion Graphs to create such a joint motion strategy. This sequential graph structure ensures that our algorithm will always terminate with a solution, regardless of the complexity of the environment.
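As a schematic illustration of this sequential structure, the sketch below grows the pursuer team until a winning joint strategy is found. `sample_graph` and `extract_strategy` are hypothetical stand-ins for the Sample-Generated Pursuit-Evasion Graph construction and its solvability check, not the paper's actual API.

```python
def sample_graph(environment, num_pursuers, num_samples):
    """Placeholder: sample pursuer paths and build a pursuit-evasion graph."""
    return {"pursuers": num_pursuers, "samples": num_samples}

def extract_strategy(graph):
    """Placeholder: return a joint strategy if one guarantees capture."""
    return "joint-strategy" if graph["pursuers"] >= 2 else None

def plan(environment, max_pursuers=10, samples_per_round=1000):
    """Grow the pursuer team until a winning joint strategy is found."""
    for k in range(1, max_pursuers + 1):
        graph = sample_graph(environment, num_pursuers=k,
                             num_samples=samples_per_round)
        strategy = extract_strategy(graph)
        if strategy is not None:
            return k, strategy  # smallest team found, plus its strategy
    raise RuntimeError("no strategy found within the pursuer budget")

print(plan(environment=None))  # -> (2, 'joint-strategy') with these stubs
```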
Another aspect of this problem that has yet to be explored concerns how to ensure that the robots can recover from catastrophic failures which leave one or more robots unexpectedly incapable of continuing to contribute to the pursuit of the evader. To address this issue, we propose an algorithm that can rapidly recover from such failures. When a failure occurs, the algorithm replans, leveraging both the information retained from the previous iteration and the partial progress of the search completed before the failure to generate a new motion strategy for the reduced team of pursuers.
The final contribution is a novel formulation of the pursuit-evasion problem that modifies the pursuers' objective by requiring that the evader still be detected even in the event of a malfunction of any single pursuer robot. This constraint, whereby two pursuers are required to detect the evader, has the benefit of providing redundancy to the search should any member of the team become unresponsive, suffer temporary sensor disruption or failure, or otherwise become incapacitated. The proposed formulation produces plans that are inherently tolerant of some level of disturbance.
For each contribution discussed above, we describe an implementation of the algorithm and provide quantitative results that show substantial improvement over existing results.
Meeting Location:
Storey Innovation Center 1400
Speaker's Bio: Catherine (Katie) Schuman is a research scientist at Oak Ridge National Laboratory (ORNL). She received her Ph.D. in Computer Science from the University of Tennessee (UT) in 2015, where she completed her dissertation on the use of evolutionary algorithms to train spiking neural networks for neuromorphic systems. She is continuing her study of algorithms for neuromorphic computing at ORNL. Katie has an adjunct faculty appointment with the Department of Electrical Engineering and Computer Science at UT, where she co-leads the TENNLab neuromorphic computing research group. Katie received the U.S. Department of Energy Early Career Award in 2019.
Talk Abstract: Neuromorphic computing is a promising technology for the future of computing. Much of the research and development in neuromorphic computing has focused on new architectures, devices, and materials, rather than on the software, algorithms, and applications of these systems. In this talk, I will overview the field of neuromorphic computing from the computer science perspective. I will give an introduction to spiking neural networks, as well as some of the most common algorithms used in the field. Finally, I will discuss the potential for using neuromorphic systems in real-world applications, from scientific data analysis to autonomous vehicles.
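For readers new to the topic, the sketch below simulates a single leaky integrate-and-fire (LIF) neuron, the basic unit that spiking neural networks are built from. The parameter values are illustrative and not tied to any particular neuromorphic platform.

```python
def lif_run(input_current, tau=20.0, v_rest=0.0, v_thresh=1.0,
            v_reset=0.0, dt=1.0):
    """Simulate one leaky integrate-and-fire neuron; return spike times."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Leaky integration: decay toward rest while accumulating input.
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_thresh:      # threshold crossing -> emit a spike
            spikes.append(t)
            v = v_reset        # reset the membrane potential
    return spikes

print(lif_run([0.15] * 50))   # constant drive produces regular spiking
```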
CASY + {Hack@Home} will take place on October 15, 2021 and
CASY 2.0 will take place on February 11, 2022
Both events are free to attend once registered and are intended to promote the ethical use of digital assistants in society for daily life activities.
See casy.aiisc.ai or the event page for more information.
DISSERTATION DEFENSE
Department of Computer Science and Engineering
University of South Carolina
Author : Nare Karapetyan
Advisor : Dr. Ioannis Rekleitis
Date : Oct 11, 2021
Time : 12:30pm
Place : Meeting Room 2267, Innovation Center
Abstract
This thesis is motivated by real-world problems faced in aquatic environments. It addresses the problem of area coverage path planning with robots: moving a robot's end-effector over all available space while avoiding obstacles. The problem is considered first in a 2D environment with a single robot for specific environmental monitoring operations, and then with multi-robot systems, a setting known to be NP-complete. Next we tackle the coverage problem in 3D space, a step towards underwater mapping of shipwrecks and underwater structures and the monitoring of coral reefs.
The first part of this thesis leverages human expertise in river exploration and data collection strategies to automate and optimize environmental monitoring and surveying operations using autonomous surface vehicles (ASVs). In particular, three deterministic algorithms for both partial and complete coverage of a river segment are proposed, providing varying path length, coverage density, and turning patterns. These strategies resulted in increases in accuracy and efficiency compared to human performance. The proposed methods were extensively tested in simulation using maps of real rivers of different shapes and sizes. In addition, to verify their performance in real-world operations, the algorithms were deployed successfully on several parts of the Congaree River in South Carolina, USA, resulting in a total of more than 35 km of coverage trajectories in the field.
In large-scale coverage operations, such as marine exploration or aerial monitoring, single-robot approaches are not ideal. Not only might coverage take too long, but the robot might run out of battery charge before completing it. In such scenarios, multi-robot approaches are preferable. Furthermore, several real-world vehicles are non-holonomic but can be modeled using Dubins vehicle kinematics. The second part of this thesis focuses on environmental monitoring of aquatic domains using a team of ASVs subject to Dubins vehicle constraints. It is worth noting that both multi-robot coverage and Dubins vehicle coverage are NP-complete problems. As such, we present two heuristic methods, based on a variant of the traveling salesman problem (k-TSP) formulation and on clustering algorithms, that efficiently solve the problem. The proposed methods are tested both in simulation and with a team of ASVs operating on a 200 m x 200 m lake area to assess their scalability and applicability in the real world.
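As a toy illustration of the cluster-then-route idea, the sketch below partitions coverage points among a few robots with a naive k-means and orders each robot's points with a greedy nearest-neighbor tour. The thesis's actual k-TSP formulation and the Dubins turning-radius constraints are omitted.

```python
import math
import random

def kmeans(points, k, iters=20):
    """Naive k-means partition of coverage points among k robots."""
    centers = random.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centers[i]))
            clusters[nearest].append(p)
        centers = [(sum(p[0] for p in cl) / len(cl),
                    sum(p[1] for p in cl) / len(cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return clusters

def nn_tour(points, start=(0.0, 0.0)):
    """Greedy nearest-neighbor ordering of one robot's coverage points."""
    tour, rest, cur = [], list(points), start
    while rest:
        nxt = min(rest, key=lambda p: math.dist(cur, p))
        rest.remove(nxt)
        tour.append(nxt)
        cur = nxt
    return tour

random.seed(0)
pts = [(random.uniform(0, 200), random.uniform(0, 200)) for _ in range(40)]
routes = [nn_tour(cluster) for cluster in kmeans(pts, k=3)]
print([len(r) for r in routes])  # points assigned to each of the 3 robots
```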
Finally, in the third part, a step towards solving the coverage path planning problem in 3D environments for surveying underwater structures, employing vision-only navigation strategies, is presented. Given the challenging conditions of the underwater domain, it is very difficult to obtain accurate state estimates reliably, so it is a great challenge to extend known path planning or coverage techniques developed for aerial or ground robots. In this work we investigate a navigation strategy that uses only vision to assist in covering a complex underwater structure, akin to what a human diver would execute when circumnavigating a region of interest, in particular when collecting data from a shipwreck. This work is a step towards enabling the autonomous operation of lightweight robots near underwater wrecks in order to collect data for creating photo-realistic maps and volumetric 3D models while avoiding collisions. The proposed method uses convolutional neural networks to learn control commands from visual input. We have demonstrated the feasibility of a vision-only system that learns specific navigation strategies, reaching 80% accuracy in predicting control command changes. Experimental results and a detailed overview of the proposed method are discussed.
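As a minimal sketch of vision-only command prediction, the small convolutional network below maps a camera frame to one of a few discrete control commands. The architecture, image size, and command set (left / straight / right) are illustrative assumptions, not the network used in the thesis.

```python
import torch
import torch.nn as nn

class CommandNet(nn.Module):
    """Tiny CNN mapping camera frames to discrete steering commands."""
    def __init__(self, num_commands=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global average pooling
        )
        self.head = nn.Linear(32, num_commands)

    def forward(self, x):              # x: (batch, 3, H, W) camera frames
        return self.head(self.features(x).flatten(1))  # command logits

net = CommandNet()
frame = torch.randn(1, 3, 120, 160)             # one dummy camera frame
command = net(frame).argmax(dim=1).item()       # 0=left, 1=straight, 2=right
print(command)
```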