The formal methods seminar in the Computer Science department at the University of Illinois focuses on various aspects of formal methods for both software and hardware, including specification, modeling, verification, and development of safe and secure systems.
For announcements, please subscribe to the Formal Methods mailing list.
Fall 2013
Venue  Time  Date  Title and Abstract 
Speaker 

3401 SC  4:00pm  11/29  Stochastic Superoptimization Once data parallelism has been exploited, one ends up with compute-bound kernels. Superoptimizations are low-level optimizations that exploit the features of the instruction set. We formulate the binary superoptimization task as a two-phase stochastic search. In the synthesis phase, we search for different low-level algorithms for the functionality to be achieved. In the optimization phase, we search for faster implementations of the synthesized algorithms. The scope of programs we are able to consider, and the resulting quality of the programs that we produce, far exceed those of existing superoptimizers. Beginning from binaries compiled by LLVM -O0 for 64-bit x86, our prototype implementation, STOKE, is able to produce programs which either match or outperform the code produced by gcc -O3, icc -O3, and in some cases, expert handwritten assembly. Moreover, unlike gcc and icc, STOKE produces a proof of equivalence between the unoptimized input binary and the optimized output binary, thus ensuring the soundness of the optimizations. We use the verification algorithm in STOKE to prove the equivalence of binaries generated by a certified compiler (CompCert) and an optimizing compiler (gcc), with promising results.

Rahul Sharma  Stanford University
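STOKE's two-phase search is, at heart, a Markov-chain Monte Carlo walk over candidate programs. The toy sketch below illustrates only that idea on a made-up two-register machine; the opcodes, cost function, and parameters are all invented for illustration and are not STOKE's actual IR or cost model.

```python
import random, math

# Toy stochastic search in the spirit of superoptimization: find a short
# sequence of register operations computing f(x) = 3*x.
OPS = {
    "dbl": lambda a, b: (a * 2, b),    # a := a * 2
    "inc": lambda a, b: (a + 1, b),    # a := a + 1
    "add_b": lambda a, b: (a + b, b),  # a := a + b
    "save": lambda a, b: (a, a),       # b := a
    "nop": lambda a, b: (a, b),
}
TESTS = list(range(-4, 5))
TARGET = lambda x: 3 * x

def run(prog, x):
    a, b = x, x  # both registers start holding the input
    for op in prog:
        a, b = OPS[op](a, b)
    return a

def cost(prog):
    # Correctness dominates; among correct programs, prefer fewer real ops.
    wrong = sum(run(prog, x) != TARGET(x) for x in TESTS)
    return wrong * 10 + sum(op != "nop" for op in prog)

def stoke_search(length=4, iters=20000, beta=1.0, seed=0):
    rng = random.Random(seed)
    prog = ["nop"] * length
    best, best_cost = prog[:], cost(prog)
    for _ in range(iters):
        cand = prog[:]
        cand[rng.randrange(length)] = rng.choice(list(OPS))  # mutate one slot
        dc = cost(cand) - cost(prog)
        if dc <= 0 or rng.random() < math.exp(-beta * dc):   # Metropolis accept
            prog = cand
            if cost(prog) < best_cost:
                best, best_cost = prog[:], cost(prog)
    return best, best_cost

best, c = stoke_search()
print(best, c, all(run(best, x) == TARGET(x) for x in TESTS))
```

Accepting occasionally worse candidates (the Metropolis rule) is what lets the search escape local optima, which is the point of phrasing superoptimization as stochastic search rather than greedy rewriting.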
Spring 2013
Venue  Time  Date  Title and Abstract 
Speaker 

3401 SC  3:00pm  04/18  Algorithmic Verification of Stability of Hybrid Systems Hybrid systems refer to systems exhibiting mixed discrete-continuous behaviors and arise as a natural byproduct of the interaction of embedded processors with physical systems. In this talk, we focus on the verification of a class of properties of hybrid systems called stability. Stability captures the notion that small perturbations to the initial state or input to the system result in only small changes in the eventual behavior of the system. It is a fundamental requirement of any control system design. In contrast to the well-known methods for automated verification of stability, which are deductive, we present algorithmic techniques for stability analysis.

Pavithra Prabhakar  IMDEA Software Institute
3401 SC  3:00pm  04/02  Model Checking Database Applications We describe the design of DPF, an explicit-state model checker for database-backed web applications. DPF interposes between the program and the database layer, and precisely tracks the effects of queries made to the database. We experimentally explore several implementation choices for the model checker: stateful vs. stateless search, state storage and backtracking strategies, and dynamic partial-order reduction. In particular, we define independence relations at different granularity levels of the database (at the database, relation, record, attribute, or cell level), and show the effectiveness of dynamic partial-order reduction based on these relations. We apply DPF to look for atomicity violations in web applications. Web applications maintain shared state in databases, and typically there are relatively few database accesses for each request. This implies concurrent interactions are limited to relatively few and well-defined points, enabling our model checker to scale. We explore the performance implications of various design choices and demonstrate the effectiveness of DPF on a set of Java benchmarks. Our model checker was able to find new concurrency bugs in two open-source web applications, including in a standard example distributed with the Spring framework.

Milos Gligoric  UIUC
3401 SC  3:00pm  03/12  Automatic Verification Techniques for Heap-Manipulating Programs Reliability is critical for system software, such as OS kernels and mobile browsers, as well as applications such as office software and mail clients. The correctness of these programs, especially for security, is highly desirable, as they should provide a trustworthy platform for higher-level applications and the end users. Unfortunately, due to its inherent complexity, the verification process of these programs is typically manual/semi-automatic, tedious, and painful. Automating the reasoning behind these verification tasks and decreasing the dependence on manual help is one of the greatest challenges in software verification.
In this talk, I will present two logic-based automatic software verification systems, namely Strand and Dryad, that help in the task of verification of heap manipulation, which is one of the most complex aspects of modern software that eludes automatic verification. Strand is a logic that combines an expressive heap logic with an arbitrary data logic and admits several powerful decidable fragments. The decision procedures can be used not only in proving programs correct but also in software analysis and testing. Dryad is a dialect of separation logic that is amenable to automated reasoning using the natural proof strategy. Dryad is so far the only logic that can verify the full correctness of a wide variety of challenging programs, including a large number of C programs from various open-source libraries. I will also explain how to encode the natural proof strategy in ghost code and alleviate the programmer's burden. The natural proof technique and Dryad logic we build hold promise of being the next-generation automatic verification logics and reasoning techniques for software verification.
Xiaokang Qiu  UIUC
3401 SC  3:00pm  03/05  PALS-Based Analysis of an Airplane Multirate Control System Distributed cyber-physical systems (DCPS) are pervasive in areas such as aeronautics and ground transportation systems, including the case of distributed hybrid systems. DCPS design and verification is quite challenging because of asynchronous communication, network delays, and clock skews. Furthermore, their model checking verification typically becomes infeasible due to the huge state space explosion caused by the system's concurrency. The PALS ("physically asynchronous, logically synchronous") methodology has been proposed to reduce the design and verification of a DCPS to the much simpler task of designing and verifying its underlying synchronous version. The original PALS methodology assumes a single logical period, but Multirate PALS extends it to deal with multirate DCPS in which components may operate with different logical periods. This paper shows how Multirate PALS can be applied to formally verify a nontrivial multirate DCPS. We use Real-Time Maude to formally specify a multirate distributed hybrid system consisting of an airplane maneuvered by a pilot who turns the airplane according to a specified angle through a distributed control system. Our formal analysis revealed that the original design was ineffective in achieving a smooth turning maneuver, and led to a redesign of the system that satisfies the desired correctness properties. This shows that the Multirate PALS methodology is not only effective for formal DCPS verification, but can also be used effectively in the DCPS design process, even before properties are verified.

Kyungmin Bae  UIUC
3401 SC  3:00pm  02/19  Towards Reliable Distributed Cyber-Physical Systems Software defects are rampant in today's embedded systems, even safety-critical ones. While we have been lucky to have experienced relatively few software-induced catastrophes, recall notices for software defects are commonplace in industries from automotive to telecommunications to SCADA systems. Such defects may be simple software bugs, or they may result from the tight interaction of cyber and physical states and evolution thereof in such cyber-physical systems (CPS). I will describe our efforts developing formal methods for proving the absence of the latter class of cyber-physical defects, focusing on CPS that are distributed computationally and/or geographically, such as air traffic control protocols, real-time threads, network control systems, and swarm robotics. We model such distributed CPS as parameterized systems, where N agents execute some protocol at the same time. For example, a desired safety property in any air traffic control system is that no aircraft collide, regardless of the number of aircraft, N, in the system. One way to verify systems regardless of the number of participants is through the use of small model theorems, which allow for reasoning about the parameterized system from small instantiations, and I will describe one such theorem we developed. We have implemented this technique in a software tool called Passel, which allows us to prove safety properties automatically for parameterized CPS using Microsoft Research's Z3 as a backend satisfiability modulo theories (SMT) solver. I will conclude with a proposed research agenda that aims to transition from verification of embedded software and models of embedded systems to verification for embedded systems.
Taylor Johnson  UIUC
3401 SC  3:00pm  02/12  Verifying Security Invariants in OS Code This talk is based on joint work with Haohui Mai, Hui Xue, Samuel T. King, and P. Madhusudan. I will present the use of formal methods for proving key security invariants about ExpressOS, a new OS for enabling high-assurance applications to run securely on commodity mobile devices. In this work we focus on proving that the OS correctly implements security invariants rather than showing its full functional correctness. Our goal is to prove relevant security aspects of the system with reasonable verification effort. Evaluation shows that ExpressOS is effective in preventing existing vulnerabilities and offers performance comparable to native Android.

Edgar Pek  UIUC
3401 SC  3:00pm  02/05  Object-Oriented Formal Modeling and Analysis of Interacting Hybrid Systems in HI-Maude In complex hybrid systems, different components may influence each others' continuous behaviors. In this talk I introduce the rewriting-logic-based HI-Maude tool that supports an object-oriented modeling methodology in which it is sufficient to specify the continuous dynamics of single (physical component and physical interaction) objects in such interacting hybrid systems. HI-Maude supports simulation and model checking for a number of numerical approximations of the continuous behaviors, based on adaptations of the Euler and Runge-Kutta methods. The ability to decompose the definition of the complex continuous behaviors of interacting hybrid systems enables object-oriented modeling of such systems, and has made it possible to define a realistic HI-Maude model of the human thermoregulatory system. We have used this model and HI-Maude analysis to investigate possible explanations for the accident at the 2010 Sauna World Championships, where the finalists collapsed in a surprisingly short time.
Peter Ölveczky  University of Oslo
Fall 2012
Venue  Time  Date  Title and Abstract 
Speaker 

3401 SC  4:00pm  11/29  An Inferable Dependently Typed Pi Calculus The Pi Calculus is an approach to modeling distributed systems that relies on a notion of processes communicating over shared channels. Session Types extend the Pi Calculus by giving statically inferable types to its programs to exclude certain dangerous interactions and give a more precise characterization of their behaviors.
Liquid types are an approach to extending a simple type system with a limited form of dependency. They utilize refinement types, i.e., logical predicates attached to basic types. As an example, we might use {x:int | x<=5} instead of just "int". Liquid types add these refinements in a manner that preserves inferability of types (including their refinements). LiquidPi is a fusion of Session Types and Liquid Types to achieve a system that is able to infer dependent session types. This talk will contain an overview of the preliminary work that has been done for LiquidPi and its associated background material.
Dennis Griffith  UIUC
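The refinement-type idea in this abstract can be made concrete with a small sketch. A real liquid type checker discharges subtyping between refinements as a logical implication handed to an SMT solver; the stand-in below (purely illustrative, not any actual checker's API) checks the same implication by brute force over a bounded integer domain.

```python
# A refinement type pairs a base type with a predicate, e.g. {x:int | x <= 5}.
# Subtyping {x:int | p(x)} <: {x:int | q(x)} holds iff p implies q.
DOMAIN = range(-1000, 1001)  # bounded domain standing in for an SMT query

def is_subtype(p, q):
    """True iff every x satisfying p also satisfies q (over DOMAIN)."""
    return all(q(x) for x in DOMAIN if p(x))

le5 = lambda x: x <= 5
le10 = lambda x: x <= 10

print(is_subtype(le5, le10))  # every x <= 5 is also <= 10 -> True
print(is_subtype(le10, le5))  # x = 7 is a counterexample  -> False
```

The inference challenge the talk addresses is discovering predicates like `le5` automatically while keeping such implication checks decidable.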
3401 SC  4:00pm  11/08  Static and Dynamic Analysis of Timed Distributed Traces This work presents an algorithm for checking global predicates from distributed traces of cyber-physical systems. For an individual agent, such as a mobile phone or a robot, a trace is a finite sequence of state observations and message histories. Each observation has a possibly inaccurate timestamp from the agent's local clock. The challenge is to symbolically over-approximate the reachable states of the entire system from the unsynchronized traces of the individual agents. The presented algorithm first approximates the time of occurrence of each event, based on the synchronization errors of the local clocks, and then over-approximates the reach sets of the continuous variables between consecutive observations. The algorithm is shown to be sound; it is also complete for a class of agents with restricted continuous dynamics and when the traces have precise information about timing synchronization inaccuracies. The algorithm is implemented in an SMT-solver-based tool for analyzing distributed Android apps. Experimental results illustrate that interesting properties like safe separation, correct geocast delivery, and distributed deadlocks can be checked for up to twenty agents in minutes.

Sridhar Duggirala  UIUC
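The first step the abstract describes, approximating event times from skewed local clocks, amounts to interval arithmetic. The sketch below is a minimal illustration (the function names and the simultaneity check are invented, not the tool's actual API): a timestamp t with synchronization error eps means the true time lies in [t - eps, t + eps].

```python
# Over-approximate an event's true time from its local timestamp and the
# known clock synchronization error of its agent.
def true_time_interval(local_ts, eps):
    return (local_ts - eps, local_ts + eps)

def may_coincide(ts_a, eps_a, ts_b, eps_b):
    """Soundly answer: could these two events have happened at the same time?"""
    lo_a, hi_a = true_time_interval(ts_a, eps_a)
    lo_b, hi_b = true_time_interval(ts_b, eps_b)
    return lo_a <= hi_b and lo_b <= hi_a  # intervals intersect

# Two agents report entering a shared zone; could both have been inside at once?
print(may_coincide(10.0, 0.5, 10.8, 0.5))  # [9.5,10.5] meets [10.3,11.3] -> True
print(may_coincide(10.0, 0.1, 10.8, 0.1))  # [9.9,10.1] misses [10.7,10.9] -> False
```

Because the intervals over-approximate the true times, a "no" answer is definitive while a "yes" only means a violation cannot be ruled out, which is exactly the soundness property the abstract claims.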
3401 SC  4:00pm  11/01  Natural Proofs for Structure, Data, and Separation We propose natural proofs for reasoning with complex properties of the dynamically manipulated heap of a program that combine constraints on the structure of the heap, the data stored in the heap, and separation of regions within the heap. Natural proofs are a subclass of proofs that are amenable to completely automated reasoning, provide sound but incomplete procedures, and capture common reasoning tactics in program verification. We develop a dialect of separation logic over heaps, called Dryad, with recursive definitions that avoid explicit quantification, develop ways to reason with heaplets using logical set theory, and develop natural proofs for reasoning with Hoare triples written in Dryad using unfoldings and formula abstractions. We also implement the technique and show that a large class of correct programs over lists, cyclic lists, doubly-linked lists, and trees are amenable to automated proofs using the proposed natural proof method.

Xiaokang Qiu  UIUC
3401 SC  4:00pm  10/25  Automated Verification of Equivalence Properties of Cryptographic Protocols Indistinguishability properties are essential in formal verification of cryptographic protocols. They are needed to model anonymity properties, strong versions of confidentiality, and resistance to offline guessing attacks, and can be conveniently modeled using process equivalences. We present a novel procedure to verify equivalence properties for a bounded number of sessions. Our procedure is able to verify trace equivalence for determinate cryptographic protocols. On determinate protocols, trace equivalence coincides with observational equivalence, which can therefore be automatically verified for such processes. When protocols are not determinate, our procedure can be used for both under- and over-approximations of trace equivalence, which has proved successful on examples. The procedure can handle a large set of cryptographic primitives, namely those which can be modeled by an optimally reducing convergent rewrite system. Although we were unable to prove its termination, it has been implemented in a prototype tool and has been effectively tested on examples, some of which were outside the scope of existing tools. Joint work with Stefan Ciobaca and Steve Kremer.

Rohit Chadha  University of Missouri
3401 SC  4:00pm  10/18  Checking Reachability using Matching Logic This paper presents a verification framework that is parametric in a (trusted) operational semantics of some programming language. The underlying proof system is language-independent and consists of eight proof rules. The proof system is proved partially correct and relatively complete (with respect to the programming language configuration model). To show its practicality, the generic framework is instantiated with a fragment of C and evaluated with encouraging results.
Paper webpage: http://fsl.cs.uiuc.edu/index.php/Checking_Reachability_using_Matching_Logic Online interface: http://fsl.cs.uiuc.edu/index.php/Special:MatchCOnline
Andrei Stefanescu  UIUC
3401 SC  4:00pm  10/11  Timed Event-based Message Passing Models: Schedulability, Deadlock Freedom, and Performance Analysis Asynchronous event-based systems can be modeled using Timed Rebeca with least effort. Network and computational delays, periodic events, and required deadlines can be expressed in the model. The language is actor-based and the syntax is Java-like. Model checking and simulation tools are built based on the formal semantics of the language, given as SOS rules. For deadlock-freedom and schedulability analysis, a so-called floating-time transition system is introduced, which reduces the state space by exploiting the isolation of method execution in the model. For performance analysis, a simulation tool is developed which uses McErlang as the backend. Simulation traces are saved in an SQL database for further analysis. Case studies are used to show the applicability of the language and the tools.

Marjan Sirjani  Reykjavik University
3401 SC  10:00pm  10/05  Engage: A Deployment System Many modern applications are built by combining independently developed packages and services that are distributed over many machines with complex interdependencies. The assembly, installation, and management of such applications is hard, and usually performed either manually or by writing customized scripts. We present Engage, a system for configuring, installing, and managing complex application stacks. Engage consists of three components: a domain-specific model to describe component metadata and inter-component dependencies; a constraint-based algorithm that takes a partial installation specification and computes a full installation plan; and a runtime system that coordinates the deployment of the application across multiple machines and manages the deployed system. By explicitly modeling configuration metadata and inter-component dependencies, Engage enables static checking of application configurations and automated, constraint-driven generation of installation plans across multiple machines. This reduces the tedious manual process of application configuration, installation, and management. We have implemented Engage and we have used it to successfully host a number of applications. We describe our experiences in using Engage to manage a generic platform that hosts Django applications in the cloud or on premises.

Rupak Majumdar  MPI-SWS, UCLA
3401 SC  3:00pm  09/27  Safety Verification for Distributed Cyber-Physical Systems Distributed cyber-physical systems (CPS) incorporate communicating agents with their own cyber and physical states and transitions. Such systems are typically designed to accomplish tasks in the physical world, such as autonomous aircraft coordinating to ensure collisions do not occur while the aircraft attempt to land at a runway. The combination of physical dynamics, software dynamics, and communications leads these systems naturally to be modeled as networks of hybrid automata. Two techniques for verification of safety properties are presented and evaluated. The first method computes the set of reachable states for such systems, and checks that the intersection of the reach set and a bad set of states is empty. The second method is a first step in automating the traditionally deductive verification approach of proving inductive invariants, and applies this with the use of a small model theorem.

Taylor Johnson  UIUC
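The first method in this abstract, computing reach sets and checking emptiness of their intersection with a bad set, can be sketched in a few lines for a one-dimensional system tracked with intervals. The dynamics and all numbers below are invented purely for illustration.

```python
# Toy reach-set safety check: x' = x + u with bounded input u in [-1, 1],
# states over-approximated as intervals [lo, hi].
def step(interval, u_lo=-1.0, u_hi=1.0):
    lo, hi = interval
    return (lo + u_lo, hi + u_hi)

def safe(init, bad, steps):
    """True iff the reach set stays disjoint from the bad interval."""
    reach = init
    for _ in range(steps):
        reach = step(reach)
        if reach[0] <= bad[1] and bad[0] <= reach[1]:  # intersects bad set
            return False
    return True

print(safe((0.0, 0.0), (5.0, 6.0), 3))  # reach grows to [-3,3]: disjoint -> True
print(safe((0.0, 0.0), (5.0, 6.0), 6))  # reach hits [5,6] en route -> False
```

Since the interval over-approximates the true reach set, "safe" answers are sound; a reported intersection may still be a spurious artifact of the over-approximation.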
Spring 2012
Venue  Time  Date  Title and Abstract 
Speaker 

3401 SC  3:00pm  04/13  Using Non-Convex Abstractions for Efficient Analysis of Timed Automata We consider the classic reachability problem for timed automata: given an automaton, decide if there exists a path from its initial state to a given target state. The standard solution to this problem involves computing the zone graph of the automaton, which in principle could be infinite. In order to make the graph finite, zones are approximated using an extrapolation operator. For reasons of efficiency, it is required that an extrapolation of a zone is always a zone, and in particular that it is convex. We propose a new solution to the reachability problem that uses no such extrapolation operators. To ensure termination, we provide an efficient algorithm to check if a zone is included in the region closure of another. Although theoretically better, closure cannot be used in the standard algorithm since the closure of a zone may not be convex. The structure of this new algorithm permits calculating approximating parameters on-the-fly during exploration of the zone graph, as opposed to the current methods, which do it by a static analysis of the automaton prior to the exploration. This allows for further improvements in the algorithm.
[Slides]
Dileep Kini  UIUC
3405 SC  2:00pm  04/10  Foundations of Verification with Proof Scores in CafeOBJ The verification method with proof scores is an interactive theorem proving method based on algebraic specifications. We have been developing verifications with proof scores in the CafeOBJ algebraic specification language for more than ten years. This talk explains the theoretical foundations of verification with proof scores using simple and insightful examples. The focuses of the talk are (1) model-based verification, (2) quasi-complete proof rules, and (3) analysis of relations between specifications and proofs.
[Slides]
Kokichi Futatsugi  JAIST, Japan
3401 SC  3:00pm  03/30  Rely/Guarantee Temporal Logic We will discuss recent research on a temporal logic to support rely-guarantee reasoning. The logic we propose may be seen as a generalization of Alternating-time Temporal Logic (ATL).
[Slides]
William Mansky  UIUC
3401 SC  3:00pm  03/02  Prediction of Null-Pointer Dereferences and Other Violations in Concurrent Programs Among the techniques for testing concurrent programs, one in particular is prediction-based testing. The prediction-based testing methodology involves taking one arbitrary concurrent execution of the program under test, and from that predicting alternate interleavings that fall into a class of interleavings likely to contain bugs. In this talk we focus on null-pointer dereferences as a new target for finding bugs in concurrent programs. We propose an accurate prediction algorithm based on a combination of static analysis on the run followed by logical constraint solving. The latter predicts executions using global stitching of local computations, without modeling the local computations themselves, in order to yield simple, accurate, and scalably solvable logical constraints that can be tackled using SMT solvers. For the cases in which the accurate prediction algorithm fails to predict an absolutely guaranteed feasible interleaving, a relaxed prediction algorithm is proposed. Our value-driven prediction framework also applies to predicting other violations, such as data races. We have implemented our techniques in a tool, Penelope, and we show some preliminary results of the prediction over 13 benchmark programs for null-pointer dereferences and data races.
[Slides]
Francesco Sorrentino  UIUC
3401 SC  3:00pm  02/17  Sliced Causality and Prediction for Safety Properties in Concurrent Programs Monitoring safety properties is useful at all stages of code life, from testing and debugging during the production stage to improving reliability and security in distributed final products. Monitoring safety properties, however, fails to achieve much coverage; a monitor can only see the code that is executed in a given pass. A possible solution is model checking, which can check all paths through a program, but this has scalability issues. Our solution is a middle ground between model checking and monitoring: infer possible executions close to an observed execution via a causal model known as sliced causality. Sliced causality is able to infer more possible executions than the popular "happens-before" causal model.
This work, RVPredict, is based on the previous work of the jPredictor system by Chen and Rosu. It expands on that work by adapting it to general-purpose safety properties using a new linearization algorithm; additionally, the engineering advancements in RVPredict have resulted in a tool usable on real-world programs. [Slides]
Patrick Meredith  UIUC
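The "happens-before" model that sliced causality refines is commonly implemented with vector clocks: an event e1 happens before e2 iff e1's clock is componentwise at most e2's and strictly smaller somewhere; incomparable events are concurrent, and reorderings of concurrent events are the candidate alternate executions. A minimal sketch (not RVPredict's actual code):

```python
# Vector-clock comparison for a system of n threads; event clocks are
# n-tuples counting how many events of each thread the event "knows about".
def happens_before(vc1, vc2):
    return all(a <= b for a, b in zip(vc1, vc2)) and vc1 != vc2

def concurrent(vc1, vc2):
    return not happens_before(vc1, vc2) and not happens_before(vc2, vc1)

e1 = (2, 0)  # thread 0's 2nd event, unaware of thread 1
e2 = (1, 3)  # thread 1 event that has seen only thread 0's 1st event
e3 = (2, 4)  # thread 1 event after receiving from thread 0

print(happens_before(e1, e3))  # True: e3 causally depends on e1
print(concurrent(e1, e2))      # True: reordering them is a valid prediction
```

Sliced causality's advantage, per the abstract, is that by slicing away irrelevant control and data dependences it classifies more event pairs as concurrent than this plain comparison would, and thus predicts more feasible executions.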
3401 SC  3:00pm  02/03  An Executable Formal Semantics of C with Applications This paper describes an executable formal semantics of C. Being executable, the semantics has been thoroughly tested against the GCC torture test suite and successfully passes 99.2% of 776 test programs.
It is the most complete and thoroughly tested formal definition of C to date. The semantics yields an interpreter, debugger, state-space search tool, and model checker "for free". The semantics is shown capable of automatically finding program errors, both statically and at runtime. It is also used to enumerate nondeterministic behavior. [Slides]
Chucky Ellison  UIUC
3401 SC  4:00pm  01/20  Recursive Proofs for Inductive Tree Data Structures We develop logical mechanisms and procedures to facilitate the verification of full functional properties of inductive tree data structures using recursion that are sound, incomplete, but terminating. Our contribution rests in a new extension of first-order logic with recursive definitions called DRYAD, a syntactical restriction on pre- and postconditions of recursive imperative programs using DRYAD, and a systematic methodology for accurately unfolding the footprint on the heap uncovered by the program that leads to finding simple recursive proofs using formula abstraction and calls to SMT solvers. We evaluate our methodology empirically and show that several complex tree data structure algorithms can be checked against full functional specifications automatically, given pre- and postconditions. This results in the first automatic terminating methodology for proving a wide variety of annotated algorithms on tree data structures correct, including max-heaps, treaps, red-black trees, AVL trees, binomial heaps, and B-trees. [Slides]
Xiaokang Qiu  UIUC
Fall 2011
Venue  Time  Date  Title and Abstract 
Speaker 

3403 SC  4:00pm  11/11  Reduction Techniques in Verifying Rebeca Models In this talk I will provide an overview of the different analysis techniques that are provided for the modeling language Rebeca. Rebeca is designed as an imperative actor-based language with the goal of providing an easy-to-use language for modeling concurrent and distributed systems, with formal verification support. I will explain the language Rebeca and the supporting model checking tools. Abstraction and compositional verification, and state-based reduction techniques including symmetry, partial order reduction, and slicing of Rebeca will be discussed. As a domain-specific example, I will show how we used Rebeca for model checking SystemC code. [Slides]
Marjan Sirjani  Reykjavik University, Iceland
3403 SC  4:00pm  10/28  Proving Safety Properties of Rewrite Theories Rewrite theories are a general and expressive formalism for specifying concurrent systems in which states are axiomatized by equations and transitions among states are axiomatized by rewrite rules. We present a deductive approach for verifying safety properties of rewrite theories in which all formal temporal reasoning about concurrent transitions is ultimately reduced to purely equational inductive reasoning. Narrowing modulo axioms is extensively used in our inference system to further simplify the equational proof obligations to which all proofs of safety formulas are ultimately reduced. In this way, existing equational reasoning techniques and tools can be directly applied to verify safety properties of concurrent systems. We report on the implementation of this deductive system in the Maude Invariant Analyzer tool, which provides a substantial degree of automation and can automatically discharge many proof obligations without user intervention. [Slides] 
Camilo Rocha  UIUC
3403 SC  4:00pm  10/21 10/14  K and Matching Logic (multiple seminars) Would you like to wake up some morning, have a coffee, then design a nontrivial programming language, say with multidimensional arrays and array references, with functions and references to them, with blocks and local declarations, with input/output, with exceptions, with nondeterminism and concurrency with synchronization, and so on, and by evening have a reasonably efficient interpreter, a state-space search tool and a model checker, as well as a deductive program verifier for your language, all correct-by-construction? Then you may be interested in K and Matching Logic, because that is what they are aiming for. K is a rewrite-based framework for defining formal operational semantics of programming languages. A K semantics can be executed, and thus tested, as if it were an interpreter for the defined language. This way, we can gain confidence in the correctness of our semantics. Then we can use precisely that semantics, unchanged, for program analysis and verification, without a need to give the language another, e.g., axiomatic or denotational or dynamic, semantics. Matching Logic consists of a language-independent proof system that allows us to reason about programs in any language that is given a rewrite-based operational semantics. [Slides]
Grigore Rosu  UIUC
2405 SC  4:00pm  09/22  The Role Mining Problem: A Formal Perspective Role-based access control is well accepted as the standard best practice for access control within applications and organizations. Role engineering, the task of defining roles and associating permissions to them, is essential to realize the full benefits of the role-based access control paradigm. The essential question is how to devise a complete and correct set of roles; this depends on how you define goodness/interestingness. (When is a role good/interesting?) We formulate the role mining problem (RMP) as a Boolean matrix decomposition problem and introduce different variants of the RMP that have pragmatic implications. Formulating the problem as a Boolean matrix decomposition makes our results applicable to several other domains, including text mining, recommender systems, and knowledge discovery from databases. <<< INFORMATION TRUST INSTITUTE >>> <<< Trust & Security Seminar >>>
Vijay Atluri  Rutgers University
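The Boolean matrix decomposition view can be stated in a few lines: find a user-role matrix UA and a role-permission matrix PA whose Boolean product reconstructs the observed user-permission matrix UPA. The sketch below only verifies a hand-built candidate decomposition (all matrices are invented examples); actually mining a good minimal decomposition is the computationally hard part the talk formalizes.

```python
# Boolean matrix product: user u holds permission p iff some role r with
# UA[u][r] = 1 also has PA[r][p] = 1.
def bool_product(UA, PA):
    n_roles = len(PA)
    return [[int(any(UA[u][r] and PA[r][p] for r in range(n_roles)))
             for p in range(len(PA[0]))]
            for u in range(len(UA))]

UPA = [[1, 1, 0],   # user 0: permissions p0, p1
       [0, 1, 1],   # user 1: permissions p1, p2
       [1, 1, 1]]   # user 2: all three
UA  = [[1, 0],      # user 0 has role r0
       [0, 1],      # user 1 has role r1
       [1, 1]]      # user 2 has both roles
PA  = [[1, 1, 0],   # role r0 grants p0, p1
       [0, 1, 1]]   # role r1 grants p1, p2

print(bool_product(UA, PA) == UPA)  # True: 2 roles cover all 3 users' permissions
```

Note that two roles suffice here even though the three users have three distinct permission sets, which is exactly the compression that makes role mining worthwhile.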
3403 SC  4:00pm  09/09  State/Event-based LTL Model Checking under Parametric Generalized Fairness In modeling a concurrent system, fairness constraints are usually considered at a specific granularity level of the system, leading to many different variants of fairness: transition fairness, object/process fairness, actor fairness, etc. These different notions of fairness can be unified by making explicit their parametrization over the relevant entities in the system as universal quantification. We propose a state/event-based framework as well as an on-the-fly model checking algorithm to verify LTL properties under universally quantified parametric fairness assumptions, specified by generalized strong/weak fairness formulas. It enables verification of temporal properties under fairness conditions associated to dynamic entities such as new process creations. We have implemented our algorithm within the Maude system. [Slides]
Kyungmin Bae  UIUC
3403 SC  4:00pm  09/02  Efficient Decision Procedures for Heaps Using STRAND The STRAND logic allows expressing structural properties of heaps combined with the data stored in the nodes of the heap. A semantic fragment of STRAND as well as a syntactically defined subfragment of it are known to be decidable. The known decision procedure works by combining a decision procedure for MSO on trees (implemented by the tool MONA) and a decision procedure for the quantifierfree fragment of the datatheory (say, integers, and implemented using a solver like Z3). The known algorithm for deciding the syntactically defined decidable fragment (which is the same as the one for the semantically defined decidable fragment) involves solving large MSO formulas over trees, whose solution is the main bottleneck in obtaining efficient algorithms. In this paper, we focus on the syntactically defined decidable fragment of STRAND, and obtain a new and more efficient algorithm. Using a set of experiments obtained from verification conditions of heapmanipulating programs, we show the practical benefits of the new algorithm. [Slides (draft)^{}] (the final version will appear after SAS 2011) 
Xiaokang Qiu, UIUC 
Spring 2011
Venue  Time  Date  Title and Abstract 
Speaker 

3405 SC  11:00am  07/08  Verified Software Toolchain The software toolchain includes static analyzers to check assertions about programs; optimizing compilers to translate programs to machine language; operating systems and libraries to supply context for programs. Our Verified Software Toolchain verifies with machine-checked proofs that the assertions claimed at the top of the toolchain really hold in the machine-language program, running in the operating-system context, on a weakly-consistent shared-memory machine. Our verification approach is modular, in that proofs about operating systems or concurrency libraries are oblivious of the programming language or machine language, proofs about compilers are oblivious of the program logic used to verify static analyzers, and so on. The approach is scalable, in that each component is verified in the semantic idiom most natural for that component. Finally, the verification is foundational: the trusted base for proofs of observable properties of the machine-language program includes only the operational semantics of the machine language, not the source language, the compiler, the program logic, or any other part of the toolchain, even when these proofs are carried out by source-level static analyzers. <<< CS Seminar >>> 
Andrew W. Appel, Princeton University 
3403 SC  10:00am  04/28  SAT/SMT Solvers for Software Reliability and Security Constraint solvers such as Boolean SAT or modular arithmetic solvers are increasingly playing a key role in the construction of reliable and secure software. In this talk, I will present two solvers, STP and HAMPI. STP is a solver for the quantifier-free theory of bitvectors and arrays, a theory whose satisfiability problem is NP-complete. STP has been used in developing a variety of new software engineering techniques from areas such as formal methods, program analysis and testing. STP enabled a variant of a new and exciting testing technique known as concolic testing. STP-enabled concolic testers have been used to find hundreds of previously unknown errors in Unix utilities, C/C++ libraries, media players, and commercial software, some with approximately a million lines of code. HAMPI is a solver for a rich theory of equality over bounded string variables, bounded regular expressions and context-free grammars, another theory whose satisfiability problem is NP-complete. HAMPI is primarily aimed at analysis of string-manipulating programs such as web applications and scripts. HAMPI has been used to find many unknown SQL injection vulnerabilities in applications with more than 100,000 lines of PHP code using static and dynamic analysis. I will discuss the techniques that make these solvers scale to formulas with millions of variables and constraints derived from real-world software engineering applications. I will also discuss some related theoretical results. Finally, I will talk about what the future for solvers might look like with a focus on multicores and programming language design. [Slides] 
Vijay Ganesh, MIT 
2405 SC  10:00am  04/22  How's the Parallel Computing Revolution Going? Two trends changed the computing landscape over the past decade: (1) hardware vendors started delivering chip multiprocessors (CMPs) instead of uniprocessors, and (2) software developers increasingly chose managed languages instead of native languages. Unfortunately, the change to CMPs has disrupted the virtuous cycle between portable performance improvements and software innovation. This talk first examines current progress towards power-efficient scalable systems using performance and on-chip power measurements on diverse benchmarks executed on 130nm to 32nm processor technologies. We are not there yet. Establishing a virtuous cycle for the 21st century will require a combination of new programming models and algorithms with data and computational efficiencies that have not been required in decades. Since the Virtual Machine (VM) executes, schedules, monitors, compiles, optimizes, and garbage collects together with managed applications, the VM must be central to providing such efficiencies. This talk describes our mark-region Immix garbage collector that achieves space- and time-efficient garbage collection. The parallel computing revolution urgently needs many such innovations to create a new ecosystem. <<< The Special Distinguished CS Seminar >>> 
Kathryn McKinley, The University of Texas at Austin 
3405 SC  2:00pm  04/20  Local Reasoning about Programs that Alter Data Structures By Peter O'Hearn, John Reynolds, and Hongseok Yang, CSL 2001. Available at http://www.dcs.qmul.ac.uk/~ohearn/papers/localreasoning.pdf We describe an extension of Hoare's logic for reasoning about programs that alter data structures. We consider a low-level storage model based on a heap with associated lookup, update, allocation and deallocation operations, and unrestricted address arithmetic. The assertion language is based on a possible worlds model of the logic of bunched implications, and includes spatial conjunction and implication connectives alongside those of classical logic. Heap operations are axiomatized using what we call the "small axioms", each of which mentions only those cells accessed by a particular command. Through these and a number of examples we show that the formalism supports local reasoning: A specification and proof can concentrate on only those cells in memory that a program accesses. This paper builds on earlier work by Burstall, Reynolds, Ishtiaq and O'Hearn on reasoning about data structures. <<< Reading Group on Heap Analysis >>> 
Pranav Garg, UIUC 
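As a concrete illustration of the small axioms and of local reasoning (a standard textbook instance, not taken from the reading group discussion): the heap-update axiom mentions only the single cell it touches, and the frame rule extends any such local specification with an untouched remainder R:

```latex
% Small axiom for heap update: mentions only the cell at x.
\{\, x \mapsto - \,\}\ [x] := v\ \{\, x \mapsto v \,\}

% Frame rule: a local specification extends to any disjoint heap R,
% provided C does not modify variables free in R.
\frac{\{P\}\ C\ \{Q\}}{\{P * R\}\ C\ \{Q * R\}}
```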
3405 SC  2:00pm  04/11  Finding and Understanding Bugs in C Compilers Compiler bugs slow down development and undermine source-level verification and validation. C compiler bugs are pernicious because C code is part of the trusted computing base for so many computer systems. This talk is about Csmith, our randomized test-case generation tool, and the 340 previously unknown bugs in C compilers that we found using it, including crash bugs and wrong-code bugs in every compiler we have tested. [Slides] <<< Joint FM/Compiler Seminar >>> 
John Regehr, University of Utah 
3405 SC  2:00pm  04/06  Abstraction Refinement for Stability Counterexample-Guided Abstraction Refinement (CEGAR) is a popular technique that has been successfully applied in software verification and in the verification of safety properties of hybrid systems. In this talk, we introduce a CEGAR framework to prove the stability of hybrid systems. Hybrid systems are a special class of systems with continuous and discrete dynamics and are widely used to model automotive controllers, cyber-physical systems, etc. A system is stable if it will eventually reach a set of distinguished stable states and stay in these states forever. The key insight is to relate the stability property to a *blocking* property of hybrid systems, and to characterize this blocking property using a notion called *Hybrid Step Relations*. Using Hybrid Step Relations and the CEGAR framework, we devise a technique to prove that a hybrid system is blocking and hence stable. 
Parasara Sridhar Duggirala, UIUC 
3401 SC  11:00am  03/10  Approximations for Verification of Cyber Physical Systems Cyber Physical Systems (CPS) is a term broadly used for systems in which the cyber world of computation and communication is tightly connected to the physical world. These systems have applications in automotive systems, medical devices, aeronautics, and robotics, among several other areas. They are often deployed in safety-critical settings, and hence their reliable performance is of utmost importance. Verifying CPS is challenging due to the presence of continuous components modeling the physical world. The complexity of verification increases with the complexity of the continuous components, and the problem is undecidable for a relatively simple class of systems. In fact, the problem of computing one-step successors, a basic step in various state-space-exploration-based verification techniques, is undecidable for complex dynamical systems. Hence, the verification of these systems relies on developing techniques that can compute good approximations of these systems efficiently. I will describe some of my work on approximation techniques for verification of CPS. In particular, I will talk about a technique for approximating general CPS by polynomial systems, and some results on how this process of approximation can be efficiently automated. I will also describe a counterexample-guided abstraction refinement technique for hybrid systems which automatically refines an abstraction when it is not sufficient to prove the correctness of the system. I will discuss various cyber-physical systems which have been analyzed using these techniques. 
Pavithra Prabhakar, UIUC 
3405 SC  2:00pm  03/08  A Mathematical Framework for the Verification of Air Traffic Separation Assurance Systems In order to accommodate the predicted increase in air traffic operations over the next decades, the next generation of air transportation systems relies on new operational concepts where the responsibility for traffic separation is distributed between air and ground. These new concepts will require an unprecedented level of safety-critical systems. State-based distributed Separation Assurance (SA) is one such concept; it refers to conflict prevention, detection, and resolution systems that rely exclusively on state information, i.e., the 3D position and velocity vectors broadcast by each aircraft. This talk presents the Airborne Coordinated Conflict Resolution and Detection (ACCoRD) framework for the design and verification of state-based separation assurance systems. The framework, which has been specified and mechanically verified in PVS, consists of a set of definitions and formally proven criteria that guarantee the correctness of a family of SA algorithms. For instance, in the case of pairwise conflict resolution, ACCoRD provides criteria that guarantee that two distributed algorithms, which satisfy the same coordination criteria, independently compute resolution maneuvers that yield conflict-free trajectories when simultaneously implemented by the aircraft. This talk gives more examples of such criteria and of algorithms that satisfy them. It also argues that a criteria-based verification framework, such as ACCoRD, will enable the natural evolution of the worldwide Air Transportation System. 
Cesar Munoz, NASA Langley Research Center 
3405 SC  2:00pm  02/09  Selected papers from POPL: a) Calling Context Abstraction with Shapes, X. Rival and B. Chang; b) Streaming Transducers for Algorithmic Verification of Single-Pass List-Processing Programs, R. Alur and P. Cerny; c) Laws of Order: Expensive Synchronization in Concurrent Algorithms Cannot Be Eliminated, H. Attiya, R. Guerraoui, D. Hendler, P. Kuznetsov, M. Michael, and M. Vechev; d) Precise Reasoning for Programs Using Containers, I. Dillig, T. Dillig, and A. Aiken 
Pranav Garg and Xiaokang Qiu, UIUC 
3403 SC  11:00am  01/21  Verifying Start-Up and Mode Transitions for Analog, Mixed-Signal Circuits Analog, mixed-signal (AMS) designs have become ubiquitous. Obvious examples are chips for cell phones and other mobile devices that combine RF, audio, video, and computing capabilities. Even traditional CPUs include analog circuits for clock generation, high bit-rate I/O, power and temperature regulation, and other functions. While digital design flows are now very systematic and benefit from formal verification techniques, analog design remains very much an ad hoc art, based on simulation and designer intuition. In this talk, I will present results and opportunities for formal verification of analog designs. For digital circuits, automata theory provides the underlying mathematical framework for verification. For analog circuits, linear systems theory and dynamical systems theory take this role. I will illustrate these methods with two examples: the verification of an oscillator circuit put forward as a challenge problem by researchers from Rambus, and the analysis of synchronizer failure probabilities. 
Mark Greenstreet, Univ. of British Columbia 
Fall 2010
Venue  Time  Date  Title and Abstract  Speaker 

3405 SC  11:00am  12/03  Formalizing Operator Task Analysis Aberrant behavior of human operators in safety-critical systems can lead to severe or even fatal consequences. Human operators are unique in their decision-making capability, judgment and nondeterminism. There is a need for a generalized framework for capturing, modeling and analyzing the interactions between computer systems and human operators where the operators are allowed to deviate from their prescribed behaviors. This will provide a formal understanding of the robustness of a computer system against possible aberrant behaviors by its human operators. We provide a framework for (i) modeling the human operators and the computer systems; (ii) formulating tolerable human operator action variations (protection envelopes); (iii) determining whether the computer system can maintain its guarantees if the human operators operate within their protection envelopes; and finally, (iv) determining the robustness of the computer system under weakening of the protection envelopes. We present Tutela, a tool that assists in accomplishing the first and second steps, automates the third step, and modestly assists in accomplishing the fourth step. 
Ayesha Yasmeen, UIUC 
B02 CSL  4:00pm  11/15  Automated Correct Construction of Distributed Systems from Component-Based Models Design and implementation of distributed algorithms often involve many subtleties due to their complex structure, nondeterminism, and low atomicity, as well as the occurrence of unanticipated physical events such as faults. Thus, constructing correct distributed systems has always been a challenge and is often subject to serious errors. We propose a method for generating distributed implementations from high-level component-based models that only employ simple synchronization primitives. The method is a sequence of three transformations preserving observational equivalence: (1) a transformation from a global-state to a partial-state model, (2) a transformation which replaces multi-party strong synchronization primitives in atomic components by point-to-point send/receive primitives based on asynchronous message passing, and (3) a final transformation to a concrete distributed implementation based on platform and architecture. These transformations may use additional mechanisms ensuring mutual exclusion between conflicting interactions. We propose different distributed implementations, from fully centralized to fully decentralized ones. We study the properties of the different transformations, in particular performance criteria such as degree of parallelism and overhead for coordination. In our case studies, we observe that different types of distributed and parallel algorithms may behave significantly differently in terms of performance under different types of transformations. 
Borzoo Bonakdarpour, University of Waterloo 
3405 SC  11:00am  10/22  PENELOPE: Weaving Threads to Expose Atomicity Violations Testing concurrent programs is challenged by the interleaving explosion problem: the problem of exploring the large number of interleavings a program exhibits, even under a single test input. Rather than try all interleavings, we propose to test wisely: to exercise only those schedules that lead to interleavings that are typical error patterns. In particular, in this paper we select schedules that exercise patterns of interaction that correspond to atomicity violations. Given an execution of a program under a test harness, our technique is to algorithmically mine from the execution a small set of alternate schedules that cause atomicity violations. The program is then re-executed under these predicted atomicity-violating schedules and verified by the test harness. The salient feature of our tool is the efficient algorithmic prediction and synthesis of alternate schedules that cover all possible atomicity violations at program locations. We implement the tool PENELOPE that realizes this testing framework and show that the monitoring, prediction, and rescheduling (with precise repro) are efficient and effective in finding bugs related to atomicity violations. 
Francesco Sorrentino, UIUC 
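To illustrate the kind of interleaving pattern such atomicity-violation testing targets (a toy scanner for one classic non-serializable shape, not PENELOPE's actual prediction algorithm; the trace encoding is invented for illustration):

```python
# Toy scanner for one classic non-serializable shape: thread t reads x,
# another thread writes x, then t writes x (a lost-update interleaving).
def find_violations(trace):
    """trace: list of (thread, op, var) events, op in {'R', 'W'}."""
    hits = []
    for i, (t, op1, x) in enumerate(trace):
        if op1 != 'R':
            continue
        for j in range(i + 1, len(trace)):
            u, op2, y = trace[j]
            if u != t and y == x and op2 == 'W':
                # Found a conflicting remote write; check t's next
                # access to x after it.
                for k in range(j + 1, len(trace)):
                    v, op3, z = trace[k]
                    if v == t and z == x:
                        if op3 == 'W':
                            hits.append((i, j, k))
                        break
                break
    return hits

trace = [("T1", "R", "bal"), ("T2", "W", "bal"), ("T1", "W", "bal")]
hits = find_violations(trace)
```

On the example trace, thread T1's read and write of bal are split by T2's write, so the scanner reports the triple of event indices (0, 1, 2).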
2405 SC  2:00pm  10/28  Practical Parallel and Concurrent Programming: Performance and Correctness on the Multicore Computer On the multicore computer, many applications will make use of parallelism to achieve speedup and concurrency to achieve responsiveness. The .NET 4 Framework introduced new abstractions for parallelism and concurrency, but achieving good performance while maintaining correctness is still difficult. In this talk, I'll describe courseware we have developed in conjunction with the University of Utah for introducing key concepts in parallelism and concurrency, with supporting C#/F# code examples, as well as tests and new tools that support the twin goals of achieving high-performing and correct code. For more info, visit http://ppcp.codeplex.com/. (Joint work with Sebastian Burckhardt, Ganesh Gopalakrishnan, Joseph Mayo, Madan Musuvathi, Shaz Qadeer, and Caitlin Sadowski.) <<< UPCRC Seminar (Slides) (Video) >>> 
Tom Ball, Microsoft Research 
3405 SC  11:00am  10/15  Interpolants paper presentation: Interpolation and SAT-based Model Checking by McMillan (CAV'03) We consider a fully SAT-based method of unbounded symbolic model checking based on computing Craig interpolants. In benchmark studies using a set of large industrial circuit verification instances, this method is far more efficient than BDD-based symbolic model checking, and compares favorably to some recent SAT-based model checking methods on positive instances. 
Edgar Pek, UIUC 
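To make the interpolation property itself concrete (a toy propositional check by truth-table enumeration over an illustrative textbook pair of formulas, not McMillan's SAT-solver-based interpolant extraction):

```python
from itertools import product

# A Craig interpolant I for a jointly unsatisfiable pair (A, B):
# A implies I, I and B is unsatisfiable, and I uses only variables
# shared by A and B. Illustrative instance:
#   A = p and q,  B = (not p) and r,  shared variables {p},  I = p.
A = lambda v: v["p"] and v["q"]
B = lambda v: (not v["p"]) and v["r"]
I = lambda v: v["p"]          # mentions only the shared variable p

NAMES = ["p", "q", "r"]

def assignments():
    # Enumerate all truth assignments over the variables.
    for bits in product([False, True], repeat=len(NAMES)):
        yield dict(zip(NAMES, bits))

a_implies_i = all(I(v) for v in assignments() if A(v))       # A -> I valid
i_and_b_unsat = not any(I(v) and B(v) for v in assignments())  # I ∧ B unsat
```

Both checks succeed for this instance, which is exactly what makes I an overapproximation of A's models that is still inconsistent with B.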
3405 SC  11:00am  10/01  Statistical Model Checking Given a stochastic system (e.g., a Markov chain), the probabilistic model checking problem consists of deciding whether the system satisfies some property with probability greater than or equal to a certain threshold. Several numerical algorithms (implemented in tools such as PRISM or LIQUOR) exist for solving such problems. Unfortunately, those algorithms do not scale up to realistic systems. In this talk, we will show that techniques from statistics can be used to solve the probabilistic model checking problem. We will also consider concrete applications of the approach. 
Axel Legay, INRIA, France 
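The core idea can be sketched in a few lines: estimate the probability that a random execution satisfies the property by sampling, then compare the estimate against the threshold (a minimal Monte Carlo sketch with an invented two-state chain; real statistical model checkers use sequential hypothesis tests with explicit error bounds):

```python
import random

# Invented two-state model: from "up" the system stays up with
# probability 0.9 per step, otherwise it fails; "down" is absorbing.
# Property: still "up" after 10 steps (true probability 0.9**10 ~ 0.349).
def run(steps, rng):
    state = "up"
    for _ in range(steps):
        if state == "up" and rng.random() >= 0.9:
            state = "down"
    return state == "up"

def estimate(samples, steps, seed=0):
    # Monte Carlo estimate: fraction of sampled executions that
    # satisfy the property.
    rng = random.Random(seed)
    return sum(run(steps, rng) for _ in range(samples)) / samples

p = estimate(20000, 10)
satisfies_threshold = p >= 0.3   # decide against a threshold of 0.3
```

The key point of the statistical approach is that each sample only requires simulating the system, never enumerating its state space.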
3405 SC  11:00am  09/24  Compositionality Entails Sequentializability We show that any concurrent program that has a compositional proof can be effectively translated to a sequential program. More precisely, we show that concurrent programs with a rely-guarantee proof using a set of auxiliary variables (a la Jones) can be effectively translated to a sequential program whose variables are a bounded number of copies of the variables in the concurrent program. The proposed sequential program precisely captures the semantics of the concurrent program and has no under-approximation restriction (like context-bounding). More strongly, there are back-and-forth parsimonious translations of rely-guarantee proofs of the concurrent program to a Hoare-style proof of the sequential program. We use our result to automatically prove concurrent programs like Bakery and Peterson correct by proving their sequentializations using the automatic predicate-abstraction-based tool SLAM. Further, using the parsimonious proof reduction, we implement rely-guarantee verifiers using the sequential program verifier Boogie. 
Pranav Garg, UIUC 
3405 SC  11:00am  09/10  Reasoning About MDPs as Transformers of Probability Distributions We consider Markov Decision Processes (MDPs) as transformers on probability distributions, where, with respect to a scheduler that resolves nondeterminism, the MDP can be seen as exhibiting a behavior that is a sequence of probability distributions. Defining propositions using linear inequalities, one can reason about executions over the space of probability distributions. In this framework, one can analyze properties that cannot be expressed in logics like PCTL*, such as expressing bounds on transient rewards and expected values of random variables, and comparing the probability of being in one set of states at a given time with that of being in another set of states. We show that model checking MDPs with this semantics against ω-regular properties is in general undecidable. We then identify special classes of propositions and schedulers with respect to which the model checking problem becomes decidable. We demonstrate the potential usefulness of our results through an example. Joint work with Mahesh Viswanathan, YoungMin Kwon and Gul Agha. 
Vijay Korthikanti, UIUC 
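A minimal sketch of the distribution-transformer view, using an invented two-state Markov chain (no nondeterminism, so no scheduler is needed): each step maps the current distribution through the transition matrix, and a proposition is a linear inequality over that distribution:

```python
# Invented two-state Markov chain, viewed as a transformer of
# probability distributions: one step maps distribution d to d * P.
P = [[0.5, 0.5],
     [0.2, 0.8]]

def step(d, P):
    # d * P as an explicit vector-matrix product.
    n = len(P)
    return [sum(d[i] * P[i][j] for i in range(n)) for j in range(n)]

def holds(d):
    # A proposition as a linear inequality over distributions:
    # "at least a quarter of the probability mass is in state 0".
    return d[0] >= 0.25

d = [1.0, 0.0]                 # start surely in state 0
trace = [d]                    # behavior = sequence of distributions
for _ in range(5):
    d = step(d, P)
    trace.append(d)

always_holds = all(holds(x) for x in trace)
```

Note that the property checked here ranges over distributions at each step, not over individual paths, which is exactly what sets this semantics apart from path-based logics like PCTL*.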
3405 SC  11:00am  09/03  VEX: Vetting Browser Extensions For Security Vulnerabilities The browser has become the de facto platform for everyday computation. Among the many potential attacks that target or exploit browsers, vulnerabilities in browser extensions have received relatively little attention. Currently, extensions are vetted by manual inspection, which does not scale well and is subject to human error. In this paper, we present VEX, a framework for highlighting potential security vulnerabilities in browser extensions by applying static information-flow analysis to the JavaScript code used to implement extensions. We describe several patterns of flows as well as unsafe programming practices that may lead to privilege escalations in Firefox extensions. VEX analyzes Firefox extensions for such flow patterns using high-precision, context-sensitive, flow-sensitive static analysis. We analyze thousands of browser extensions, and VEX finds six exploitable vulnerabilities, three of which were previously unknown. VEX also finds hundreds of examples of bad programming practices that may lead to security vulnerabilities. We show that compared to current Mozilla extension review tools, VEX greatly reduces the human burden for manually vetting extensions when looking for key types of dangerous flows. Joint work with Sam King, P. Madhusudan, and Marianne Winslett. 
Sruthi Bandhakavi, UIUC 
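A toy illustration of the flavor of analysis involved (the source and sink names are hypothetical, and this simple straight-line taint propagation is far from VEX's context- and flow-sensitive JavaScript analysis):

```python
# Hypothetical source/sink names; real extension analyses track flows
# from attacker-influenced content into dangerous APIs such as eval.
SOURCES = {"content_doc"}
SINKS = {"eval"}

def analyze(stmts):
    """Flag source-to-sink flows in a straight-line program given as
    (lhs, rhs) assignment pairs, by propagating a taint set."""
    tainted, flows = set(SOURCES), []
    for lhs, rhs in stmts:
        if rhs in tainted:
            if lhs in SINKS:
                flows.append((rhs, lhs))   # tainted value reaches a sink
            else:
                tainted.add(lhs)           # taint propagates by assignment
    return flows

# s = content_doc; t = s; eval = t   -> a source-to-sink flow via t.
prog = [("s", "content_doc"), ("t", "s"), ("eval", "t")]
found = analyze(prog)
```

Running the scanner on the three-statement program reports the single flow from the tainted variable into the sink.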
Page maintained by Edgar Pek and Madhusudan Parthasarathy