Lost in the Middle Kingdom: Teaching New Languages Using Serious Games and Language Learning Methodologies

Wednesday, July 9, 2014 - 11:30 am
Swearingen (Dean’s Conference Room)
MASTER’S DEFENSE Renaldo J. Doe

Abstract: Lost in the Middle Kingdom is a serious video game for language learning. Our game draws on several language-learning methodologies, including second language acquisition theory, content-based instruction, and task-based language teaching. We analyze previous language-learning games and their drawbacks in order to create a more effective experience. Lost in the Middle Kingdom seeks to balance language learning with fun, intuitive gameplay in order to deliver a form of interactive media that is accepted by both the gaming and research communities. Our test data illustrate the strengths and weaknesses of our game and show how future improvements can bolster its effectiveness.

Using Genetic Algorithm to solve Median Problem and Phylogenetic Inference

Tuesday, July 8, 2014 - 10:00 am
Swearingen (3A75)
DISSERTATION DEFENSE Nan Gao

Abstract: Genome rearrangement analysis has attracted a great deal of attention in phylogenetic computation and comparative genomics. Solving median problems under various distance definitions has been a focus, as it provides the building blocks for maximum parsimony analysis of phylogenies and ancestral genomes. The Median Problem (MP) has been proven NP-hard, and although several exact and heuristic algorithms exist, they all have difficulty computing medians of three distant genomes containing many evolutionary events. Current approaches such as MGR and GRAPPA are restricted to small collections of genomes and to low-resolution gene-order data with a few hundred rearrangement events. In this work, we focus on heuristic algorithms that combine genomic sorting algorithms with a genetic algorithm (GA) to produce new methods and directions for whole-genome median solving, ancestor inference, and phylogeny reconstruction.

For the equal-content median problem, we propose a genetic algorithm based on DCJ sorting operations, called GA-DCJ. Following the classic genetic algorithm framework, we develop a replacement for each traditional GA procedure. The final results of our GA-based algorithm are the optimal median genome(s) and the corresponding median score. Within limited time and space, especially on large-scale and distant datasets, our algorithm obtains better results than GRAPPA.

Extending the ideas of the equal-content median solver, we develop another genetic-algorithm-based solver, GaDCJ-Indel, which solves the median problem for unequal genomes (without duplication). In the DCJ-Indel model, one of the key steps is still the sorting operation. The difference from the equal-genome case is that there are two sorting directions: a minimal DCJ operation path or a minimal indel operation path. By following different sorting paths at each step, we obtain various genome structures with which to fill our population pool. In addition, we adopt an adaptive surcharge-triangle inequality in place of the classic triangle inequality in our fitness function, in order to fit the constraints of unequal genomes and obtain more efficient results. Our experimental results show that GaDCJ-Indel not only converges to an accurate median score but also infers ancestors that are very close to the true ancestors.

An important application of genome rearrangement analysis is inferring ancestral genomes, which is valuable for identifying patterns of evolution and for modeling evolutionary processes. However, computing ancestral genomes is very difficult, and we must rely on heuristic methods that have various limitations. We propose a GA-Tree algorithm that adapts meta-population, co-evolution, and repopulation-pool methods. We describe, step by step, the first genetic algorithm for ancestor inference, which uses fitness scores designed to account for coevolution and uses sorting-based methods to initialize and evolve populations. Our extensive experiments show that, compared with existing tools, our method is accurate and infers ancestors that are much closer to the true ancestors.
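The genetic-algorithm framework the abstract describes (a population of candidate genomes, a fitness function scoring total distance to the three input genomes, selection, and mutation) can be sketched generically. The toy breakpoint-style distance, the swap mutation, and all parameters below are illustrative placeholders, not the actual GA-DCJ or GaDCJ-Indel implementation:

```python
import random

def breakpoint_distance(a, b):
    # Count adjacencies of a that are not adjacencies of b -- a toy
    # stand-in for the DCJ distance used in the dissertation.
    adj = {frozenset(p) for p in zip(b, b[1:])}
    return sum(1 for p in zip(a, a[1:]) if frozenset(p) not in adj)

def fitness(candidate, genomes):
    # Median score: total distance from the candidate to all input genomes.
    return sum(breakpoint_distance(candidate, g) for g in genomes)

def mutate(candidate):
    # Swap two genes -- a placeholder for sorting-operation-based mutation.
    c = list(candidate)
    i, j = random.sample(range(len(c)), 2)
    c[i], c[j] = c[j], c[i]
    return c

def ga_median(genomes, pop_size=40, generations=200, seed=0):
    random.seed(seed)
    # Initialize the population pool with mutated copies of the inputs.
    pop = [mutate(random.choice(genomes)) for _ in range(pop_size)]
    best = min(pop, key=lambda c: fitness(c, genomes))
    for _ in range(generations):
        # Selection: keep the fitter half, refill by mutating survivors.
        pop.sort(key=lambda c: fitness(c, genomes))
        survivors = pop[:pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
        if fitness(pop[0], genomes) < fitness(best, genomes):
            best = pop[0]
    return best, fitness(best, genomes)

# Three small "genomes" (signed-gene details omitted for brevity).
genomes = [[1, 2, 3, 4, 5, 6], [1, 3, 2, 4, 5, 6], [1, 2, 3, 5, 4, 6]]
median, score = ga_median(genomes)
```

The elitist selection step guarantees the best score never worsens between generations, which is the property that lets the real solver trade running time for solution quality on large, distant datasets.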

Document Analysis Techniques for Handwritten Text Segmentation, Document Image Rectification and Digital Collation

Thursday, July 3, 2014 - 11:00 am
Swearingen 3A75
DISSERTATION DEFENSE Department of Computer Science and Engineering, University of South Carolina Dhaval Salvi

Abstract: Document image analysis comprises all the algorithms and techniques used to convert an image of a document into a computer-readable description. In this work we focus on three such techniques: (1) handwritten text segmentation, (2) document image rectification, and (3) digital collation.

Offline handwritten text recognition is a very challenging problem. Aside from the large variation among handwriting styles, neighboring characters within a word are usually connected, and we may need to segment a word into individual characters for accurate character recognition. Many existing methods achieve text segmentation by evaluating the local stroke geometry and imposing constraints on the size of each resulting character, such as its width, height, and aspect ratio. These constraints are well suited to printed text but may not hold for handwritten text. Other methods take a holistic approach, using a set of lexicons to guide and correct the segmentation and recognition; this approach may fail when the lexicon's domain coverage is insufficient. In the first part of this work, we present a new global, non-holistic method for handwritten text segmentation that makes no limiting assumptions about character size or the number of characters in a word.

Digitization of document images using OCR-based systems is adversely affected when the document image contains distortion (warping). Often, costly and precisely calibrated special hardware such as stereo cameras or laser scanners is used to infer a 3D model of the distorted page, which is then used to remove the distortion. Recent methods instead build a 3D shape model from 2D distortion information obtained from the document image itself, and their performance depends heavily on estimating an accurate 2D distortion grid. These methods often affix the 2D distortion grid lines to the text lines, and so may suffer when textual cues are made unreliable by preprocessing steps such as binarization. In printed document images, the white space between the text lines carries as much information about the 2D distortion as the text lines themselves. Based on this intuition, in the second part of our work we build a 2D distortion grid from white-space lines, which a dewarping algorithm can then use to rectify a printed document image.

Collation of texts and images is an indispensable but labor-intensive step in the study of print materials, and a methodology often used by textual scholars when the underlying manuscript of a text is nonexistent. Various methods and machines have been designed to assist in this labor, but it remains both expensive and time-consuming, requiring travel to distant repositories for the painstaking visual examination of multiple original copies. Efforts to digitize collation have so far depended on first transcribing the texts to be compared, introducing a layer not only of labor and expense but also of potential error. Digital collation instead automates the first stages of collation directly from the document images of the original texts, dramatically speeding the process of comparison. We describe such a novel framework for digital collation in the third part of this work.
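The character-size constraints the abstract criticizes can be made concrete. The thresholds below are invented for illustration and are not taken from the dissertation; the point is that a rule tuned for print rejects legitimate handwritten shapes:

```python
def passes_size_constraints(box, mean_width, min_ratio=0.5, max_ratio=2.0,
                            max_aspect=2.5):
    """Accept a candidate character segment only if its size looks typical.

    `box` is (width, height) in pixels; `mean_width` is the average
    character width estimated for the document. Thresholds of this kind
    work well for printed text but reject many valid handwritten characters.
    """
    width, height = box
    if height == 0:
        return False
    good_width = min_ratio <= width / mean_width <= max_ratio
    good_aspect = width / height <= max_aspect
    return good_width and good_aspect

# A narrow handwritten "l" fails the width check even though it is a
# valid character, while a typical print-sized glyph passes.
print(passes_size_constraints((4, 30), mean_width=20))   # False
print(passes_size_constraints((18, 25), mean_width=20))  # True
```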

Ghosts of the Horseshoe: A Mobilization of a Critical Interactive

Wednesday, July 2, 2014 - 10:30 am
Swearingen (Dean's Conference Room)
DISSERTATION DEFENSE Richard Walker

Abstract: Critical Interactives (CIs) are designed to harness the voluntary, reality-bending excitement of discovery afforded by play, but to do so in the context of rules that mobilize procedural rhetoric to instantiate critical awareness. Critical interactives are not just about improving lives through code or education; rather, they establish a methodology for generating more aesthetic and reflective interactive experiences. To grasp more fully the logic underpinning CIs, we need to understand the powerful nature of interactivity and outline how such interactivity involves a notion of ethics, i.e., a way of living.

Ghosts of the Horseshoe is a critical interactive, in this case a mobile interactive application for iPad, that presents the largely unknown role of South Carolina College, the predecessor of the University of South Carolina, in slavery during the years before the Civil War. The USC Horseshoe was built by enslaved persons, and the bricks of its Wall and buildings were made by enslaved persons, yet this history is for the most part not known by the USC community and not acknowledged by the institution. We discuss the role of critical interactives as instruments of procedural rhetoric: software artifacts that interact with their participants to carry a message, in this case a message about a sensitive topic in the history of the institution. Ghosts, as a CI, uses ludic methods as a rhetorical technique. We place CIs, and Ghosts in particular, in the general context of games, computer video games, and serious games, commenting on the use of ludic methods in presenting topics like slavery, on which one cannot legitimately produce a "game". We discuss further the iterative development and testing process that converged on the final version available today.

An Application for Keeping Track of Food Item Expiration

Friday, May 2, 2014 - 02:00 pm
Discovery I, Room 331
A seminar about mobile technologies in health, presented by Rejin James, graduate student, College of Engineering and Computing, and Danielle Schoffman, graduate student, Health Promotion, Education and Behavior. (Please note the room change.)

An Application for Keeping Track of Food Item Expiration (Rejin James): Food is too precious to waste, yet food wastage is a serious issue worldwide; American households alone throw out an estimated $165 billion worth of food each year. People often forget to consume food before its expiration date, or over-purchase more food than they can eat and then throw the excess away. This thesis aims to reduce food wastage with a smartphone application that keeps track of food item expiration dates and sends notification alerts when items are about to expire. It implements a barcode scanner for automatic product-name discovery, as well as optical character recognition (OCR) for automatic discovery of expiration dates.

Apps for Family Obesity Treatment and Prevention Interventions (Danielle Schoffman): Mobile smartphone applications (apps) offer a scalable way to deliver family obesity treatment and prevention interventions, yet little is known about their efficacy or about families' preferences among them. The aim of the present study is to test the efficacy, usability, and acceptability of commercial apps and mobile monitoring devices for physical activity (PA) and healthy eating (HE) with parent-child dyads. Using a two-phase design, parent-child dyads are enrolled in a 4-week mobile intervention to test a set of apps and monitoring devices, and then share their experiences and preferences in a post-program structured interview. Elements of the study design, including participant recruitment, measurement of outcomes, and preliminary results, will be discussed.
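The core of the expiration-tracking idea reduces to a simple date comparison once the barcode and OCR steps have produced a product name and an expiration date. A minimal sketch, with a hypothetical item structure and alert window that are not taken from the actual app:

```python
from datetime import date, timedelta

def expiring_soon(items, today, days_ahead=3):
    """Return product names whose expiration date falls in the alert window.

    `items` maps a product name (e.g. decoded from its barcode) to the
    expiration date read via OCR; a real app would push a notification
    for each returned item instead of returning a list.
    """
    cutoff = today + timedelta(days=days_ahead)
    return sorted(name for name, expires in items.items()
                  if today <= expires <= cutoff)

pantry = {
    "milk": date(2014, 5, 4),
    "eggs": date(2014, 5, 20),
    "yogurt": date(2014, 5, 2),
}
alerts = expiring_soon(pantry, today=date(2014, 5, 2))  # ["milk", "yogurt"]
```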
This is a free seminar and all faculty, staff, students, and guests are welcome to attend. Contact Susan Klie at sklie@mailbox.sc.edu or 803-777-6363 for more information http://nutritioncenter.sph.sc.edu/

Capstone Project Demos

Tuesday, April 29, 2014 - 09:00 am
300 Main St. Room B201
Students from our Senior Capstone Project class will demo the apps they have built for this class. This year we have 14 groups doing demos:
  • 6 are web applications built with Rails, Google App Engine, or PHP
  • 4 are Android applications for phones or tablets
  • 3 are iOS or MacOS apps built for iPhone and iPad
  • 1 is a hardware project.
You can watch their demo videos before attending. Open to the Public.

Making Sense of Sensing

Friday, April 25, 2014 - 04:00 pm
Swearingen 1A03 (Faculty Lounge)
COLLOQUIUM Jiangying Zhou, Information Sciences Division, Teledyne Scientific

Abstract: How do you recognize a friend walking at a distance when you cannot see his or her face? Driving through a busy intersection, why do you not try to identify each and every vehicle or person? The receptor cells in your eyes and the light-sensitive elements of a digital camera record nothing but a mottled pattern of colors flickering as a function of space and time. Making sense of the world from these sensory inputs involves solving extraordinarily difficult recognition problems in real time. Humans and computers alike apply sophisticated computation to make sense of the senses. At Teledyne, we conduct cutting-edge research to understand how higher-level concepts such as visual shapes emerge in the brain from the senses, and to develop advanced algorithms that can extract useful information from what the real world presents to us via sensors. In this talk, I will highlight some of our work on this fascinating topic, the challenges we face, and the exciting opportunities awaiting future researchers.

Dr. Jiangying Zhou is a Senior Technical Manager in the Information Sciences Division at Teledyne Scientific, where she manages and leads a group of scientists pursuing contract R&D for government agencies as well as commercial customers. Dr. Zhou was the PM for Teledyne's FITT Program (DARPA, 2011-2013) and PM/PI of the seismic data analysis program (2009-2013). Prior to joining TS&I, Dr. Zhou was the director of R&D at Summus Inc., a small start-up company specializing in contract engineering projects for the U.S. Department of Defense and commercial markets in the areas of video and image compression, pattern recognition, and computer vision. While at Summus, Dr. Zhou was the lead investigator of several research projects funded by the Office of Naval Research on side-scan-sonar image analysis. From 1993 to 1998, Dr. Zhou was a scientist at Panasonic Technologies, Inc., Princeton, NJ, where she conducted research in document analysis, hand-drawn gesture recognition, image analysis, and information retrieval. Dr. Zhou obtained her Ph.D. in Electrical Engineering from the State University of New York, Stony Brook, in 1993. She is the author of more than 30 technical papers in refereed journals and conference proceedings and the co-inventor of twelve U.S. patents. She was an Associate Editor of SPIE Optical Engineering from 2001 to 2005 and Chair of the SPIE/IS&T Electronic Imaging Document Analysis Conference from 1998 to 1999. Dr. Zhou is a member of the IEEE.

What Every Programmer Should Know about Web Development

Friday, April 25, 2014 - 02:30 pm
Swearingen 2A15
COLLOQUIUM Richard Baldwin, Director of Web Development, www.cyberwoven.com

Abstract: Programming is a vast field that covers everything from mainframe banking software to cutting-edge web technologies, and the title Programmer is every bit as vague as the title Doctor. However, unlike doctors, programmers graduating from college often have not chosen a specialty. This can lead graduates to choose a specialty based solely on post-graduation job offers. Come hear the story of a web developer who found his specialty purely by chance but is glad he did, and learn from him why web development may be the specialty for you.

Richard Baldwin is the Director of Web Development at Cyberwoven. He heads a team of developers who work hard to ensure that even complicated websites function flawlessly. With over a decade of experience in website development, Richard is adept at pinpointing problems and implementing custom back-end solutions. Prior to joining Cyberwoven in 2010, Richard was a senior developer and project manager with Verizon, where he developed intranet applications that streamlined and digitized manual processes. He holds an MBA and a Bachelor of Science in Computer Engineering from the University of South Carolina.

Introduction to Linked Open Data and the Semantic Web

Thursday, April 24, 2014 - 11:30 am
Thomas Cooper Library 304 (MM3)
a talk by Srikar Nadipally (soon to be a Master of Science). Topics include:
  • semantic web
  • data formats
  • ontologies and vocabularies
  • links (URIs) and triples
  • creating LOD from relational data
  • a guide to useful software
  • the server side
Organized by the Center for Digital Humanities at the University of South Carolina. For more information, email Colin Wilder at wildercf@mailbox.sc.edu or visit http://cdh.sc.edu/
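Two of the listed topics, triples built from URIs and creating LOD from relational data, can be illustrated without any RDF library. The base namespace and the toy table below are made-up examples (only the FOAF vocabulary URI is real), not material from the talk:

```python
# Each RDF statement is a (subject, predicate, object) triple; subjects
# and predicates are URIs, objects are URIs or literals.
BASE = "http://example.org/people/"          # illustrative namespace
FOAF = "http://xmlns.com/foaf/0.1/"          # real FOAF vocabulary prefix

def rows_to_triples(rows):
    # Map each relational row to triples: the primary key becomes the
    # subject URI, and each non-key column becomes a predicate.
    triples = []
    for row in rows:
        subject = BASE + row["id"]
        triples.append((subject, FOAF + "name", row["name"]))
        if row.get("knows"):
            # Foreign keys become links between URIs -- the "L" in LOD.
            triples.append((subject, FOAF + "knows", BASE + row["knows"]))
    return triples

table = [
    {"id": "alice", "name": "Alice", "knows": "bob"},
    {"id": "bob", "name": "Bob", "knows": None},
]
triples = rows_to_triples(table)
```

Tools discussed in the talk's "useful software" segment typically automate exactly this row-to-triple mapping and then serialize the result in a standard format such as Turtle.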