Bringing Scalable Millimeter-Wave Networks and Applications to the Masses

Friday, February 18, 2022 - 02:20 pm
Swearingen room 2A31

Abstract

Internet-of-Things (IoT) systems play an integral role in our daily lives, and we are currently witnessing an explosion of the IoT ecosystem, which includes not only smartphones but also smart, ubiquitous objects embedded with communication, computation, and sensing capabilities. Emerging IoT systems, such as autonomous vehicles, immersive virtual and augmented reality, tactile internet, holoportation, and smart, connected buildings, promise to automate human lives at unprecedented levels this decade. However, such systems rely on two critical foundations: (1) Next-generation wireless network architectures that can serve billions of devices; and (2) Ubiquitous sensing techniques that enable the objects to be "truly smart" by understanding and interpreting the ambient conditions and micro-activities with high precision.

 

My research team has been building these two foundations. We design, develop, and deploy experimental, data-driven computational and deep learning models to extract intelligence from wireless signals, which, in turn, enable ubiquitous sensing modalities and high-resilience, high-performance networks. In this talk, I will walk through the design and prototyping of some of our current work that uses extremely high-frequency millimeter-wave wireless to enable wire-like connectivity and reliability, along with applications in healthcare and beyond.

 

Bio

Sanjib Sur is an Assistant Professor in the Department of Computer Science and Engineering at the University of South Carolina, Columbia. He received his Ph.D. from the University of Wisconsin-Madison in 2018. His research interests lie in wireless systems and ubiquitous computing, and his work has been regularly published in top conferences in these areas, including ACM MobiCom, MobiSys, SIGMETRICS, USENIX NSDI, and IEEE INFOCOM. He is the recipient of the ACM HotMobile Best Poster and Best Poster Runner-Up Awards in 2021 and the President of India Gold Medal in 2012. Sanjib holds 8 US patents, with 8 more pending. He served as TPC co-chair for IEEE STEERS 2020-2021 and ACM mmNets 2020.

 

Location:

In person

Swearingen Engineering Center in Room 2A31

Virtual MS Teams

Time

2:20-3:10pm

On Providing Efficient Real-Time Solutions to Motion Planning Problems of High Complexity

Wednesday, February 16, 2022 - 03:00 pm
Seminar room 2277, Storey Innovation Building

DISSERTATION DEFENSE

       Department of Computer Science and Engineering

University of South Carolina 

Author: Marios Xanthidis

Advisors: Dr. Ioannis Rekleitis and Dr. O'Kane

Date: Feb 16, 2022

Time: 3:00 pm

Place: Seminar Room 2277, Storey Innovation Building

 

Abstract

The holy grail of robotics is producing robotic systems capable of efficiently executing all the tasks that are hard, or even impossible, for humans. Humans, from both a hardware and a software perspective, are undoubtedly extremely complex systems capable of executing many complicated tasks. Thus, the complexity of state-of-the-art robotic systems is also expected to increase progressively, with the goal of matching or even surpassing human abilities. Recent developments have mostly emphasized hardware, providing highly complex robots with exceptional capabilities. At the same time, they have illustrated that one important bottleneck to making such systems a common reality is real-time motion planning.

 

This thesis aims to assist the development of complex robotic systems from a computational perspective. The primary focus is developing novel methodologies for real-time motion planning that enable robots to accomplish their goals safely and that provide the building blocks for robust, advanced robot behavior in the future. The proposed methods utilize and enhance state-of-the-art approaches to overcome three different types of complexity:

  1. Motion planning for high-dimensional systems. RRT+, a new family of general sampling-based planners, was introduced to accelerate motion planning for robotic systems with many degrees of freedom by iteratively searching in lower-dimensional subspaces of increasing dimension (see the sketch after this list). RRT+ variants computed solutions in real time, orders of magnitude faster than the state of the art. Experiments in simulation with kinematic chains of up to 50 degrees of freedom and with the Baxter humanoid robot validate the effectiveness of the proposed technique.
  2. Underwater navigation for robots in cluttered environments. AquaNav, a real-time navigation pipeline for robots moving efficiently in challenging, unknown, and unstructured environments, was developed for Aqua2, a hexapod swimming robot with complex, not yet fully understood dynamics. AquaNav was tested offline in known maps and online in unknown maps using vision-based SLAM. Rigorous testing in simulation, in-pool trials, and open-water trials shows the robustness of the method in providing efficient and safe performance, enabling the robot to navigate around static and dynamic obstacles in open-water settings with turbidity and surge.
  3. Active perception of areas of interest during underwater operation. AquaVis, an extension of AquaNav, is a real-time navigation technique that enables robots with arbitrary multi-sensor configurations to safely reach their target while observing multiple areas of interest from a desired proximity. Extensive simulations show safe behavior and strong potential for improving underwater state estimation, monitoring, tracking, inspection, and mapping of objects of interest such as coral reefs, shipwrecks, and marine life.
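To make the subspace-expansion idea in item 1 concrete, here is a minimal, hypothetical Python sketch of an RRT-style planner that samples only the first k joints of the configuration space and grows k when the search in the current subspace fails. The joint limits, collision checker, and demo values are placeholders; this illustrates the general idea rather than the RRT+ implementation from the thesis.

```python
import math
import random

# Minimal illustration of planning in subspaces of increasing dimension:
# sample only the first k joints (the rest stay at their start values) and
# grow k when the search in the current subspace stalls. Joint limits, the
# collision checker, and the demo below are placeholders, not the thesis code.

N_DOF = 10                     # total degrees of freedom (placeholder)
JOINT_LIMIT = math.pi          # symmetric joint limits (placeholder)

def collision_free(q):
    """Placeholder collision checker; a real planner would query a world model."""
    return True

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def steer(q_near, q_rand, step=0.2):
    d = distance(q_near, q_rand)
    if d < step:
        return q_rand
    return tuple(x + step * (y - x) / d for x, y in zip(q_near, q_rand))

def plan(q_start, q_goal, iters_per_subspace=500, goal_tol=0.3, goal_bias=0.1):
    tree = {q_start: None}                          # maps each node to its parent
    for k in range(1, N_DOF + 1):                   # grow the sampled subspace
        for _ in range(iters_per_subspace):
            if random.random() < goal_bias:
                q_rand = q_goal                     # standard goal-biasing trick
            else:
                q_rand = tuple(
                    random.uniform(-JOINT_LIMIT, JOINT_LIMIT) if i < k else q_start[i]
                    for i in range(N_DOF)
                )
            q_near = min(tree, key=lambda q: distance(q, q_rand))
            q_new = steer(q_near, q_rand)
            if collision_free(q_new):
                tree[q_new] = q_near
                if distance(q_new, q_goal) < goal_tol:
                    path, q = [], q_new             # walk back to the root
                    while q is not None:
                        path.append(q)
                        q = tree[q]
                    return list(reversed(path))
    return None                                     # no solution within the budget

if __name__ == "__main__":
    start = tuple(0.0 for _ in range(N_DOF))
    goal = tuple(0.5 for _ in range(N_DOF))
    path = plan(start, goal)
    print("solution length:", None if path is None else len(path))
```

Goal biasing and the straight-line steer are standard RRT ingredients; the only change relative to a plain RRT here is the outer loop that enlarges the sampled subspace dimension k.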

Human Problem Solving

Friday, February 11, 2022 - 02:20 pm
Swearingen Engineering Center in Room 2A31

Abstract

The talk focuses on insightful problem solving and on scientific discovery as the most sophisticated form of insight. I will review several classical insight problems and will offer a conjecture that symmetry of the representation is what is common to all of them. Next, I will describe two phenomena from visual perception, 3D shape reconstruction and figure-ground organization, which also critically depend on symmetry of the visual representation. In this way, 3D vision can be considered the most elementary, but at the same time ubiquitous, form of insightful problem solving. In the third part of the talk, I will discuss the fundamental role symmetry plays in mathematics and physics, including the formulation of natural laws. If time allows, I will conclude by describing how humans solve combinatorial optimization problems.

 

Bio

Professor Pizlo received his PhD in electrical and computer engineering from the Institute of Electron Technology in Poland in 1982, and another PhD in psychology from the University of Maryland, College Park, in 1991. He was a Professor of Psychology at Purdue University from 1991 to 2017 and is now a Professor of Cognitive Sciences at UC Irvine. He has published three books on visual perception of shape and space, and his new book on problem solving will come out this summer. His research on vision combines projective geometry and symmetry with inverse problems and regularization methods to solve them. His work on problem solving focuses on combinatorial optimization problems.

 

Location:

In person

Swearingen Engineering Center in Room 2A31

 

Virtual MS Teams

https://teams.microsoft.com/l/meetup-join/19%3ameeting_MTY5ODJjOTgtOTZhYi00OTJmLTljYTgtNjlkYzMxZjI5NjVk%40thread.v2/0?context=%7b%22Tid%22%3a%224b2a4b19-d135-420e-8bb2-b1cd238998cc%22%2c%22Oid%22%3a%22c678cf91-85c0-4c2d-82a0-cce6903f3963%22%7d

 

Time

2:20-3:10pm

 

Artificial Intelligence – Fact and Fiction

Friday, February 4, 2022 - 02:20 pm
Swearingen Engineering Center in Room 2A31

This Friday (2/04), from 2:20-3:10pm, at the Seminar in Advances in Computing, Professor Michael Wooldridge from the University of Oxford will give a talk entitled "Artificial Intelligence – Fact and Fiction". The talk is related to his recent book "A Brief History of Artificial Intelligence: What It Is, Where We Are, and Where We Are Going."

Abstract

Long regarded as an impossible dream, Artificial Intelligence (AI) is now an everyday reality. Advances in AI regularly make headline news, and the steady advance of AI looks set to change our world dramatically. In this lecture Professor Michael Wooldridge explores the reality of AI today: what makes AI work after half a century of effort, what is possible, and what the implications are for all of us. 

Bio

Michael Wooldridge is a Professor of Computer Science and Head of Department of Computer Science at the University of Oxford. He has been an AI researcher for more than 25 years, and has published more than 350 scientific articles on the subject. He is a Fellow of the Association for Computing Machinery (ACM), the Association for the Advancement of AI (AAAI), and the European Association for AI (EurAI). From 2014-16, he was President of the European Association for AI, and from 2015-17 he was President of the International Joint Conference on AI (IJCAI).

Location:

In person

Swearingen Engineering Center in Room 2A31

Virtual MS Teams link

Time

2:20-3:10pm

Women in Computing Meeting

Monday, January 31, 2022 - 06:00 pm
Storey Innovation Center

Model-Based Reinforcement Learning and Search with Discrete World Models

Friday, January 28, 2022 - 02:20 pm
Swearingen Engineering Center in Room 2A31

Abstract

World models capture the dynamics of an environment and can be used to produce new, “imagined,” experiences. This can significantly reduce the number of real-world experiences required for an agent to learn how to make decisions, and it can be combined with search to allow an agent to plan before acting. However, in environments with a sub-symbolic representation, generating new experiences with a learned model over multiple timesteps can be difficult because small errors can accumulate over time. Furthermore, identifying previously encountered states during search can be difficult because the same state reached by traversing different paths can result in slightly different representations. In this talk, I will discuss preliminary research on using discrete world models to address both issues. With discrete world models, small errors can be corrected by simply rounding, and identifying previously seen states is as simple as checking for equality between two arrays. Preliminary experiments with raw pixel representations of the Rubik’s cube and Sokoban show that a discrete world model can be learned from an offline dataset and can be unrolled over multiple timesteps without accumulating errors. Furthermore, after using the world model to learn a value function, combining the world model and value function with A* search solves 100% of test cases.
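As a toy illustration of these two properties (error correction by rounding, and duplicate detection by exact equality), the following hypothetical NumPy sketch uses a random stand-in encoder and transition model rather than the learned models from the talk.

```python
import numpy as np

# Toy sketch: with a binary latent state, (1) a noisy prediction can be
# snapped back onto the lattice by rounding, so small errors do not compound
# across timesteps, and (2) revisited states can be detected with exact array
# equality during search. The "encoder" and "transition model" are stand-ins.

rng = np.random.default_rng(0)
OBS_DIM, LATENT_DIM = 64, 16
W_ENC = rng.normal(size=(OBS_DIM, LATENT_DIM))      # stand-in encoder weights

def encode(observation):
    """Map a raw observation to a binary latent code."""
    return (observation @ W_ENC > 0).astype(np.int8)

def predict_next(latent, action):
    """Stand-in learned transition model that returns a *noisy* real-valued latent."""
    return latent + 0.0 * action + rng.normal(scale=0.1, size=latent.shape)

def step(latent, action):
    # Rounding projects the noisy prediction back onto {0, 1}.
    return np.rint(np.clip(predict_next(latent, action), 0, 1)).astype(np.int8)

def already_seen(latent, visited):
    # Exact equality is well defined because the state space is discrete.
    return latent.tobytes() in visited

obs = rng.normal(size=(OBS_DIM,))                   # stand-in for a raw-pixel frame
state = encode(obs)
visited = {state.tobytes()}
for t in range(10):                                  # unroll without drift
    state = step(state, action=0)
    if not already_seen(state, visited):
        visited.add(state.tobytes())
print("distinct discrete states visited:", len(visited))
```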

 

Bio

Forest Agostinelli is an assistant professor at the University of South Carolina. He received his B.S. from the Ohio State University, his M.S. from the University of Michigan, and his Ph.D. from the University of California, Irvine. His research group investigates how deep learning and reinforcement learning can be used to create agents that can solve complex problems and explain their solutions in a manner that humans can understand. His homepage is located at https://cse.sc.edu/~foresta/.

 

Location:

In person

Swearingen Engineering Center in Room 2A31

Virtual MS Teams

 

 

Deep Learning Applications in the Sciences

Friday, January 21, 2022 - 02:20 pm
Swearingen Engineering Center in Room 2A31

Friday, at the Seminar in Advances in Computing, Professor Peter Sadowski from the University of Hawai’i at Manoa will be giving a talk entitled “Deep Learning Applications in the Sciences”.

Abstract

Deep learning with artificial neural networks has enabled remarkable progress in traditional artificial intelligence applications, including vision, natural language processing, and voice recognition. It also has myriad applications in science. I will review paradigms for applying deep learning to scientific problems, including inverse problems, surrogate models, and physics-informed machine learning.
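As one hypothetical illustration of the physics-informed paradigm mentioned above, the sketch below (PyTorch) fits a small network to a few data points while also penalizing the residual of a toy ODE; the architecture, equation, and loss weighting are illustrative assumptions, not examples from the talk.

```python
import torch

# Physics-informed sketch: fit u(x) to a few "measurements" of u(x) = exp(-x)
# while penalizing the residual of the toy ODE u'(x) + u(x) = 0 at random
# collocation points. All choices here are illustrative placeholders.

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x_data = torch.tensor([[0.0], [0.5], [1.0]])
u_data = torch.exp(-x_data)

for _ in range(2000):
    opt.zero_grad()

    # Data term: match the observations.
    data_loss = torch.mean((net(x_data) - u_data) ** 2)

    # Physics term: enforce the ODE residual at random collocation points.
    x_col = torch.rand(64, 1, requires_grad=True)
    u = net(x_col)
    du_dx = torch.autograd.grad(u, x_col, grad_outputs=torch.ones_like(u),
                                create_graph=True)[0]
    physics_loss = torch.mean((du_dx + u) ** 2)

    loss = data_loss + physics_loss
    loss.backward()
    opt.step()

print(float(net(torch.tensor([[0.25]]))))   # should land near exp(-0.25) ≈ 0.78
```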

Bio

Peter Sadowski is an assistant professor of Information and Computer Sciences at UH Manoa, where his lab works on a range of deep learning applications to astronomy, oceanography, and microbiome science. 


 

Location:

In person: Swearingen Engineering Center in Room 2A31

Virtual MS Teams

Collaborative Assistant Building Contest

Monday, November 15, 2021 - 12:00 pm
Online

Develop collaborative assistants (chatbots) that offer innovative and ethical solutions to real-world problems!

Prizes

First Prize - $250
Second Prize - $150
Third Prize - $100

Problem ideas:

  • Health: Which one or more medical specialties can treat my abdominal pain?

  • Public safety: Which community is most unsafe for children?

  • Water: Will there be a water problem if everyone starts washing their cars at home?

  • Gardening: What happens to my water if we plant eucalyptus or cactus in SC?

  • Information gathering: How is UofSC better than Clemson? Columbia vs. Charleston? Based on crime statistics, hospitals, etc.

These are just suggestions to inspire you. You are free to choose any problem whose solution would help your community and, preferably, South Carolina as well.

See the event page for more information.

Working with neuroscience data in the Python ecosystem

Friday, November 12, 2021 - 02:20 pm
Storey Innovation Center 1400

Meeting Location:

Storey Innovation Center 1400

Live Meeting Link for the virtual audience

Talk Abstract: About 15 years ago, as I was working on a graphical interface for scientific software in Matlab, I got frustrated by the clumsy code structure that Matlab required for GUI coding. Although I thought that the C++ Qt library would be a great alternative, I did not want my fast-prototyping process slowed down by low-level coding. Since Python had bindings for Qt, I decided to translate all my code into Python. To my surprise, I was able to complete this process swiftly over a weekend. Since then, I have been working almost exclusively in Python, and I have never regretted it for a single day. In this talk, I will present the main components of the Python stack for scientific programming, focusing on neuroscience and illustrating them with a brief analysis of EEG recordings (MNE-Python). I will discuss why Python has become a major player in this field and how limitations typical of interpreted languages (e.g., slow runtime) have been tackled with libraries such as NumPy. I will also explain why Python is a strong environment for data wrangling by introducing libraries like Pandas – which offers data frame functionality similar to R's – and XArray. Finally, I will touch upon how libraries like Seaborn provide a high-level interface for quickly producing publication-quality figures with only a few lines of code (if not a single one).
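To give a flavor of the workflow described above, here is a small hypothetical example that wrangles simulated EEG-like signals with NumPy and Pandas and plots them with Seaborn; the MNE-Python loading step is only sketched in a comment with a placeholder file name, and none of this reproduces the analyses from the talk.

```python
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Hypothetical workflow sketch: simulate a few EEG-like channels with NumPy,
# wrangle them into a tidy Pandas DataFrame, and plot with Seaborn.
# With real recordings you would load them via MNE-Python instead, e.g.
#   import mne
#   raw = mne.io.read_raw_fif("recording_raw.fif", preload=True)  # placeholder path
#   data, times = raw.get_data(return_times=True)

sfreq = 250.0                                   # sampling rate in Hz
times = np.arange(0, 10, 1 / sfreq)             # 10 seconds of data
rng = np.random.default_rng(0)
channels = {
    f"EEG {i:03d}": np.sin(2 * np.pi * 10 * times)
    + rng.normal(scale=0.5, size=times.size)
    for i in range(1, 4)
}

# Tidy ("long") format: one row per sample per channel, which Seaborn expects.
df = (
    pd.DataFrame(channels, index=pd.Index(times, name="time"))
    .reset_index()
    .melt(id_vars="time", var_name="channel", value_name="amplitude")
)

sns.lineplot(data=df, x="time", y="amplitude", hue="channel")
plt.xlabel("Time (s)")
plt.ylabel("Amplitude (a.u.)")
plt.tight_layout()
plt.show()
```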

 

Speaker's Bio: Christian O'Reilly received his B.Ing. (electrical engineering; 2007), his M.Sc.A. (biomedical engineering; 2011), and his Ph.D. (biomedical engineering; 2012) from Polytechnique Montreal. He was a postdoctoral fellow at the CARSM (2012-2014) and then an NSERC postdoctoral fellow at McGill's Brain Imaging Center (2014-2015), where he worked on EEG sleep transients. He also worked at the EPFL (2015-2018) on modeling of the thalamocortical loop and at McGill on brain connectivity (2020-2021). Since 2021, he has been an Assistant Professor at UofSC.

Mind the Gap: What lies between the end of CMOS scaling and future technologies?

Friday, November 5, 2021 - 02:20 pm
Storey Innovation Center 1400

Meeting Location:

Storey Innovation Center 1400

Live Meeting Link for the virtual audience:

https://teams.microsoft.com/l/meetup-join/19%3ameeting_OWZkMTYwZDQtMDVmMy00NjA1LTgxNmEtZDExMDdiZTM2ZjYz%40thread.v2/0?context=%7b%22Tid%22%3a%224b2a4b19-d135-420e-8bb2-b1cd238998cc%22%2c%22Oid%22%3a%225fc2170a-7068-4a33-9021-df11b94ba696%22%7d

Talk Abstract: David's talk will cover some of the exciting technologies that the Devices, Circuits & Systems group at Arm is researching, as well as what he sees as the general trend and future of process technology. And since it is not possible to discuss CMOS scaling without commenting on Moore's law, he will do that, too. 🙂

 

Speaker's Bio: David Pietromonaco has been in the semiconductor industry for almost 30 years, at Hewlett-Packard, Sony, and most recently Artisan/Arm (for 20 of those years). He works in Arm Research, in the Devices, Circuits & Systems group, specifically on the Technology Optimized Design team, which looks 5-10 years ahead to understand future computing technologies and how to utilize them.