Steering Model-Based Test Oracles to Admit Real Program Behaviors
Friday, February 13, 2015, 9:30-11:00 am
Swearingen 1A03 (Faculty Lounge)
COLLOQUIUM
Gregory Gay
Department of Computer Science and Engineering
University of Minnesota
Abstract
Two key artifacts are necessary to test software: the test data (the inputs given to the system under test, or SUT) and the test oracle (the judge of the correctness of the resulting execution). Substantial research effort has been devoted to the creation of effective test inputs, but relatively little attention has been paid to the creation of oracles. Specifying test oracles remains challenging in many domains, such as real-time embedded systems, where small changes in timing or sensory input may cause large behavioral differences. Models of such systems, often built for analysis and simulation before the final system is developed, are appealing for reuse as oracles. These models, however, typically represent an idealized system, abstracting away considerations such as non-deterministic timing behavior and sensor noise. Thus, even on the same test data, the model's behavior may fail to match an acceptable behavior of the SUT, leading to many false positives reported by the oracle.
This talk will present an automated framework that adjusts, or steers, the behavior of the model to better match the behavior of the SUT, reducing the rate of false positives. Steering is limited by a set of constraints defining acceptable differences in behavior and is driven by a search process that attempts to minimize a numeric dissimilarity metric. The framework tolerates non-deterministic but bounded behavioral differences, and it prevents future mismatches by guiding the oracle, within those limits, to track the execution of the SUT. Results show that steering significantly increases SUT-oracle conformance with minimal masking of real faults and thus has significant potential to reduce false positives and, consequently, development costs.
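As a rough illustration of the steering idea only (a toy sketch, not the speaker's actual framework), the Python snippet below nudges an oracle model's state toward the SUT's observed state, clamping each adjustment to a per-variable tolerance that stands in for the behavioral constraints; all names and values here are hypothetical.

# Illustrative sketch only: "steer" a model state toward the SUT state,
# but never beyond the allowed tolerance for each variable.
from typing import Dict

def dissimilarity(model: Dict[str, float], sut: Dict[str, float]) -> float:
    """Numeric dissimilarity metric: sum of absolute differences over shared variables."""
    return sum(abs(model[v] - sut[v]) for v in model if v in sut)

def steer_step(model: Dict[str, float],
               sut: Dict[str, float],
               tolerances: Dict[str, float]) -> Dict[str, float]:
    """Move each model variable toward the SUT value, clamped to its tolerance bound."""
    steered = dict(model)
    for var, allowed in tolerances.items():
        if var in sut:
            delta = sut[var] - model[var]
            # Clamp the adjustment so steering never exceeds the constraint bound.
            delta = max(-allowed, min(allowed, delta))
            steered[var] = model[var] + delta
    return steered

if __name__ == "__main__":
    model_state = {"altitude": 100.0, "heading": 90.0}
    sut_state = {"altitude": 100.4, "heading": 97.0}
    bounds = {"altitude": 0.5, "heading": 2.0}  # acceptable behavioral differences

    before = dissimilarity(model_state, sut_state)
    after = dissimilarity(steer_step(model_state, sut_state, bounds), sut_state)
    print(f"dissimilarity before steering: {before:.2f}, after: {after:.2f}")

In this toy run the altitude difference falls within its bound and is steered away entirely, while the heading remains 5.0 degrees off after steering; a residual difference like that is exactly what the oracle would still report as a genuine mismatch rather than a false positive.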
Gregory Gay is a Ph.D. candidate and NSF graduate fellow in the Department of Computer Science and Engineering at the University of Minnesota, where he works with the Critical Systems research group. His research interests include automated testing and analysis, with an emphasis on test oracle construction, and search-based software engineering. Greg previously received his BS and MS from West Virginia University and has held short-term research positions at NASA's Ames Research Center and the Chinese Academy of Sciences.