Rajeev Alur is Zisman Family Professor of Computer and Information Science at the University of Pennsylvania. He obtained his bachelor's degree in computer science from IIT Kanpur in 1987 and his PhD in computer science from Stanford University in 1991. Before joining Penn in 1997, he was with the Computing Science Research Center at Bell Labs. His research focuses on formal methods for system design, and spans theoretical computer science, software verification and synthesis, and cyber-physical systems.

The classical synthesis problem is to find a program or a system that meets a correctness specification given as a logical formula. Recent work on synthesis and optimization illustrates many potential benefits of allowing the user to supplement the logical specification with a syntactic template that constrains the space of allowed implementations. The formulation of the syntax-guided synthesis problem (SyGuS) is aimed at standardizing the core computational problem common to these proposals in a logical framework. The input to the SyGuS problem consists of a background theory, a semantic correctness specification for the desired program given by a logical formula, and a syntactic set of candidate implementations given by a grammar. The computational problem then is to find an implementation from the set of candidate expressions so that it satisfies the specification in the given theory.

In these lectures, we first describe how a wide range of problems, such as automatic synthesis of loop invariants, program optimization, program repair to defend against timing-based attacks, and learning programs from examples, can be formalized as SyGuS instances. We then describe the counterexample-guided inductive synthesis (CEGIS) strategy for solving the SyGuS problem. Finally, we discuss our efforts over the past three years on defining a standardized interchange format built on top of SMT-LIB, assembling a repository of benchmarks from diverse applications, organizing the annual solver competition SyGuS-COMP, and experimentally evaluating solution strategies.
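As a concrete (and deliberately toy) illustration of the CEGIS strategy, the following self-contained Python sketch synthesizes a max-like function from a small candidate grammar, using a finite input domain in place of an SMT-based verifier. The candidate list, domain, and function names are illustrative assumptions, not part of any real SyGuS solver.

```python
# CEGIS, minimal sketch: pick a candidate consistent with the examples
# collected so far, verify it, and turn any failure into a new example.
from itertools import product

# Syntactic template: the grammar of candidate implementations (toy).
CANDIDATES = [
    ("x",               lambda x, y: x),
    ("y",               lambda x, y: y),
    ("x if x>y else y", lambda x, y: x if x > y else y),
    ("x + y",           lambda x, y: x + y),
]

DOMAIN = range(-3, 4)  # finite domain standing in for an SMT verifier

def spec(f, x, y):
    # Semantic specification: f(x,y) >= x, f(x,y) >= y, and f returns one of them.
    v = f(x, y)
    return v >= x and v >= y and (v == x or v == y)

def verify(f):
    # Return a counterexample input violating the spec, or None.
    for x, y in product(DOMAIN, DOMAIN):
        if not spec(f, x, y):
            return (x, y)
    return None

def cegis():
    examples = []  # inputs accumulated from failed verification attempts
    for name, f in CANDIDATES:
        # Inductive step: candidate must satisfy the spec on all examples so far.
        if all(spec(f, x, y) for x, y in examples):
            cex = verify(f)  # verification step
            if cex is None:
                return name  # verified implementation found
            examples.append(cex)
    return None

print(cegis())
```

The counterexamples prune the candidate space cheaply: each failed verification rules out future candidates before the (expensive) verifier is invoked on them.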

Tom Bienmüller graduated in Computer Science with a diploma in Hardware Design from the University of Oldenburg in 1997. He received his doctoral degree in 2003 with a dissertation on the optimization of symbolic model checking in reactive systems testing. After joining BTC Embedded Systems in 2003, he led Product Development and grew the newly founded services department from 2009 onward. Since 2013, he has been head of Product & Innovation Development and Quality Assurance.

This teaching session focuses on the state-of-the-art industrial application of formal methods from a tool vendor's perspective, and it gives an overview of foreseeable future challenges. With the increasing amount of software in current and future automotive systems and the extended use of formal methods in production projects, the demands on supporting tools rise in terms of reliability and applicability. Safety standards such as IEC 61508 or ISO 26262 require additional evidence of confidence for each relevant tool used in a development or test process; failing to provide such evidence may rule out a tool's use. This imposes further demands on tool confidence, in particular for formal verification, since this method claims to give "the ultimate answer" regarding the fulfillment of functional requirements. Future trends such as Advanced Driver Assistance Systems will require new approaches to development and quality assurance for OEMs and suppliers. In particular, the complexity of testing and verification will increase tremendously compared to traditional component-based development; new solutions will therefore be needed, which in turn poses academic challenges.

Jasmin Blanchette obtained his Ph.D. from the Technische Universität München in 2012. His research focuses on improving the automation, expressiveness, and trustworthiness of proof assistants. He is a researcher at Inria Nancy and the MPII and has now been appointed assistant professor at the Vrije Universiteit Amsterdam. He has recently been awarded an ERC starting grant. Before pursuing an academic career, he worked as a software engineer and documentation manager in industry.

Proof assistants (also called interactive theorem provers) are interactive tools with a graphical user interface that make it possible to develop computer-checked, formal proofs of theorems, usually expressed in some variant of higher-order logic. The primary advantage of formal proofs over their pen-and-paper counterparts is the high trustworthiness of the result; but increasingly, proof assistants are also used for their convenience, especially for program verification, where the proof obligations can be very large and unwieldy. In this lecture, I will briefly review the main proof assistants in use and their underlying formalisms (simple type theory, dependent type theory, and set theory). Then I will focus on one specific system, Isabelle/HOL, and demonstrate interactive proving on a few examples. At the end of the lecture, you should have some basic understanding of what modern interactive proving looks like and where to go next if you want to replace pen-and-paper arguments by a more reliable, and often more convenient, approach.
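The demonstration in this lecture uses Isabelle/HOL; purely to give a flavor of what a machine-checked proof looks like, here is an analogous proof by structural induction written in Lean 4, a different proof assistant with a broadly similar interactive style (the lemma and its name are illustrative).

```lean
-- A machine-checked proof by structural induction on lists.
-- Isabelle/HOL's Isar proofs have a comparable case-by-case shape.
theorem my_append_nil (xs : List α) : xs ++ [] = xs := by
  induction xs with
  | nil => rfl
  | cons x xs ih => rw [List.cons_append, ih]
```

The proof assistant checks every step: `rfl` closes the base case by computation, and the inductive case rewrites with a library lemma and the induction hypothesis `ih`.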

Chris Brown received his Ph.D. in Computer Science from the University of Delaware in 1999, after which he started at the United States Naval Academy in Annapolis, Maryland, where he is currently a Professor in the Department of Computer Science. His primary research interest has been in computing symbolically with real polynomial constraints, and at present he is working on a National Science Foundation sponsored project on the model-based approach and a new kind of Cylindrical Algebraic Decomposition.

Computing with real polynomial constraints naturally requires us to consider algebra, logic and geometry all together. The constraints themselves are formulas in a first-order logic, and the questions asked are often the questions one would expect from logic: e.g. determine satisfiability, or eliminate quantified variables. On the other hand, it's hard to go very far algorithmically without bringing in some understanding of the geometric objects defined by such constraints. Cylindrical Algebraic Decomposition (CAD) is a data structure that connects formulas and the geometric objects they define, and can be used to perform many computations involving real polynomial constraints. This session will introduce CAD, survey the current state of the art in CAD, and review some software tools that use CAD.
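As a minimal concrete instance of the idea, here is the cylindrical algebraic decomposition of $\mathbb{R}^1$ induced by the single polynomial $p(x) = x^2 - 2$:

```latex
% The real line splits into five cells on which p(x) = x^2 - 2
% has constant sign:
\begin{align*}
(-\infty,-\sqrt{2}) &: p > 0, &
\{-\sqrt{2}\} &: p = 0, &
(-\sqrt{2},\sqrt{2}) &: p < 0, \\
\{\sqrt{2}\} &: p = 0, &
(\sqrt{2},\infty) &: p > 0.
\end{align*}
```

Each cell is a connected region on which $p$ has constant sign, so any formula built from sign conditions on $p$ can be decided cell by cell; the full algorithm extends this picture to $\mathbb{R}^n$ by projection and lifting.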

James Davenport studied at Cambridge, receiving his PhD there in 1979 with the thesis "On the Integration of Algebraic Functions"; the implementation was in the Reduce computer algebra system. He then went to IBM Yorktown Heights, where he worked on what became the Axiom strongly typed computer algebra system, returned to Cambridge, and then moved to Grenoble, where he taught using the Macsyma computer algebra system and started his book "Calcul Formel". He moved to the University of Bath in 1983, and has been at that university for two thirds of its life. He has also visited many French universities, KTH in Stockholm, Waterloo and Western Ontario in Canada, and New York University. As well as integration and algebra system design, he has worked on polynomial system solving and real algebraic geometry, the latter mostly in the Maple computer algebra system. He is the coordinator of the SC-square network organising this summer school.

"Symbolic Computation" is also called "Computer Algebra", and that is exactly what it is: getting computers to do algebra, including operations such as symbolic integration and limits. But getting a computer to do anything means that you have to make explicit decisions that you normally don't make, even about how to store a polynomial. We will talk about Maple and Reduce, two systems whose designers have made rather different decisions. We will also talk about various solving methods, and how these relate to different strategies for thinking of, and storing, polynomials.
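To make the storage decision concrete, here is a toy Python sketch of the two classic choices, dense and sparse representation. This is purely illustrative and is not how Maple or Reduce actually store polynomials.

```python
# Two classic ways to store the polynomial 3x^5 + 2x + 1.

# Dense: a coefficient list indexed by exponent -- compact for low
# degree, wasteful for sparse high-degree polynomials.
dense = [1, 2, 0, 0, 0, 3]          # coefficient of x^i at index i

# Sparse: exponent -> coefficient pairs -- only nonzero terms stored.
sparse = {0: 1, 1: 2, 5: 3}

def eval_dense(p, x):
    # Horner's rule: fold from the highest coefficient down.
    acc = 0
    for c in reversed(p):
        acc = acc * x + c
    return acc

def eval_sparse(p, x):
    return sum(c * x**e for e, c in p.items())

# Both representations denote the same polynomial:
assert eval_dense(dense, 2) == eval_sparse(sparse, 2) == 3*32 + 2*2 + 1
```

The representation choice then ripples through every algorithm: addition, multiplication and GCD computations all look quite different on the two layouts.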

David Deharbe holds a PhD in Informatics on the formal verification of VHDL designs. Since January 2016, he has been a software engineer and formal methods expert at ClearSy, an SME specialized in safety systems. He is on leave from the Federal University of Rio Grande do Norte, where he has worked since 1997. He is one of the main developers of the SMT solver veriT. His main interests are SMT solving, safety assurance, and the B method.

slides (pdf) ⋅ talk (mp4)

Formal verification is a key aspect of the practical application of formal methods in industrial developments. In particular, B is a formal notation used as a basis for the sound design of software components, for system-level modelling and analysis, and for data validation. B is grounded in first-order logic, set theory, integer arithmetic and the substitution calculus, and its practitioners need robust and efficient formal verification tools to carry out validity checking and constraint solving tasks. For some industrial projects, safety certification comes into play and has an impact on how such tools may be used. In this talk we present these challenges and how they are being addressed. In particular, we explore the connection with satisfiability solving and symbolic computation.

**Keijo Heljanko** is an Associate Professor at Aalto University. He received his doctoral degree from Helsinki University of Technology in 2002. He has worked as a postdoc at the University of Stuttgart and as an Academy Research Fellow at Helsinki University of Technology. His research interests include design and development methods for distributed systems, including automated algorithms for testing and verification using SAT/SMT technologies.

**Tomi Janhunen** is a senior university lecturer (docent) at Aalto University. He received his doctoral degree from Helsinki University of Technology in 1998. His primary research interests are in knowledge representation and automated reasoning, especially in answer set programming, extensions of Boolean satisfiability, and translations between logical formalisms.

**Tommi Junttila** is a university lecturer (docent) at Aalto University. He received his doctoral degree in theoretical computer science from Helsinki University of Technology in 2003. He worked as a postdoc at ITC-irst in 2004. His primary research interests are in propositional satisfiability, satisfiability modulo theories, practical approaches to graph isomorphism, and verification.

The goal of this practical session is to give the participants hands-on experience in logical modeling and problem solving with SAT/SMT solvers. As the solver we will use the Z3 SMT solver, developed at Microsoft Research, through its Python interface. The problems to be considered cover topics such as combinatorial search and model checking. As a starting point, the participants will be provided with template code for each problem, and the goal is to complete the code with the logical constraints required to model and solve the problems. More detailed instructions for getting prepared will be sent before the school.
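The session itself uses Z3's Python API; as a dependency-free sketch of the same declare-constraints-then-solve style, here is a hypothetical brute-force version for a toy graph-coloring problem. (With Z3, the variables would be `z3.Int`s and the search would be delegated to `z3.Solver` instead of enumeration.)

```python
# Toy "logical modeling" example without the Z3 dependency:
# color the vertices of a triangle with 3 colors so that the
# endpoints of every edge get different colors.
from itertools import product

VARS = ["a", "b", "c"]
EDGES = [("a", "b"), ("b", "c"), ("a", "c")]

def solve():
    # Brute force over all assignments, standing in for the solver's search.
    for values in product(range(3), repeat=len(VARS)):
        model = dict(zip(VARS, values))
        # Constraints: endpoints of every edge get distinct colors.
        if all(model[u] != model[v] for u, v in EDGES):
            return model
    return None  # unsatisfiable

m = solve()
```

The session's template code follows the same shape: declare variables, state constraints, ask for a model, and inspect it.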

Marijn Heule received his PhD from Delft University of Technology in the Netherlands in 2008. He is an Associate Editor of the Journal on Satisfiability, Boolean Modelling and Computation and one of the editors of the Handbook of Satisfiability. His research focuses on solving hard combinatorial problems in areas such as formal verification, number theory, and combinatorics. Most of his contributions are related to theory and practice of satisfiability (SAT) solving. He has developed award-winning SAT solvers, and his preprocessing techniques are used in most state-of-the-art SAT solvers.

slides (solving) ⋅ slides (processing) ⋅ code

Satisfiability (SAT) solvers have become powerful search engines to solve a wide range of applications in fields such as formal verification, planning and bio-informatics. Due to the elementary representation of SAT problems, many low-level optimizations can be implemented. At the same time, there exist clause-based techniques that can simulate several high-level reasoning methods. The teaching session focuses on the search procedures in successful conflict-driven clause learning SAT solvers. It shows how to learn from conflicts and provides an overview of effective heuristics for variable and value selection. Additionally, the teaching session covers recent developments, in particular a technique used in today's strongest solvers: the alternation between "classic" depth-first search with learning, and breadth-first search for simplification.
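As a baseline for the session, the following minimal Python sketch shows the classic DPLL skeleton (unit propagation plus depth-first branching) that conflict-driven clause learning solvers extend. It deliberately omits everything the session covers in depth: clause learning, activity-based heuristics, restarts, and simplification.

```python
# Minimal DPLL sketch. Literals are nonzero ints: v is the positive
# literal for variable v, -v its negation; an assignment is a set of
# literals assumed true.

def unit_propagate(clauses, assignment):
    # Repeatedly assign literals forced by unit clauses.
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(l in assignment for l in clause):
                continue  # clause already satisfied
            unassigned = [l for l in clause
                          if l not in assignment and -l not in assignment]
            if not unassigned:
                return None  # conflict: clause falsified
            if len(unassigned) == 1:
                assignment = assignment | {unassigned[0]}
                changed = True
    return assignment

def dpll(clauses, assignment=frozenset()):
    assignment = unit_propagate(clauses, assignment)
    if assignment is None:
        return None  # conflict under this branch
    free = {abs(l) for c in clauses for l in c} - {abs(l) for l in assignment}
    if not free:
        return assignment  # every variable assigned: satisfying model
    v = min(free)  # naive branching; CDCL would use activity scores
    return dpll(clauses, assignment | {v}) or dpll(clauses, assignment | {-v})

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
model = dpll([[1, 2], [-1, 2], [-2, 3]])
```

On this input, branching on `x1` makes unit propagation force `x2` and then `x3`, which is exactly the propagation machinery that CDCL solvers analyze when a conflict occurs.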

Hoon Hong is a professor of mathematics at North Carolina State University in the US. His main research area is symbolic computation, in particular computational real algebraic geometry. He currently serves as the editor-in-chief of the Journal of Symbolic Computation. He has organized numerous international conferences and workshops, and has given numerous invited talks at international meetings and institutions. He served as an elected member-at-large of the ISSAC steering committee, and as an elected vice-president of ACM SIGSAM. He has received the outstanding teacher award from North Carolina State University.

*Symbolic computation* is a research area where one develops mathematical theories, algorithms and software systems for solving problems symbolically, which means (1) *exactly*, in contrast to approximately, and (2) *parametrically*, in contrast to numerically.

*Quantifier elimination* is one of the core problems in symbolic computation. It takes a quantified formula and produces an equivalent quantifier-*free* formula. Satisfiability checking and theorem proving/verification can be viewed as special cases where there are no free variables. The original motivation for quantifier elimination research came from laying a firm logical foundation for mathematics. Recently, however, it has been recognized that numerous important problems in science and engineering can be reduced to quantifier elimination.
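The standard textbook example over the reals shows what elimination produces:

```latex
% Eliminating the quantified variable x from a real quadratic:
\exists x.\; x^2 + b\,x + c = 0
\;\;\Longleftrightarrow\;\;
b^2 - 4c \ge 0 .
```

The quantified variable $x$ is gone; what remains is an equivalent polynomial constraint in the free (parameter) variables $b$ and $c$, which is exactly the "parametric" answer described above.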

The lecture will consist of two parts: (1) a quick *bird's-eye* view of the whole symbolic computation area, and (2) a *tutorial/survey* of various fundamental mathematical theories and algorithms for quantifier elimination over complex algebra and real algebra and, if time allows, differential algebra. The lecture will be down-to-earth, hands-on and self-contained.

This lecture will be followed by two closely related in-depth lectures: (1) Davenport's lecture on crucial issues in the design and use of symbolic computation algorithms and systems, and (2) Brown's lecture on cylindrical algebraic decomposition, one of the most important quantifier elimination algorithms over real algebra.

Grant Passmore is Co-Founder and Co-CEO of Aesthetic Integration, a startup pioneering the application of formal verification to the design and regulation of financial algorithms. He has over a decade of industrial formal verification experience and has been a key contributor to safety verification of algorithms at Cambridge, Carnegie Mellon, Edinburgh, Microsoft Research and SRI. His work has been featured in the Financial Times, Bloomberg News and Reuters. He is regularly an invited speaker at international conferences, including Seven on Seven 2016, ITP 2016 and CADE 2017. He earned his PhD from the University of Edinburgh and is a Life Member of Clare Hall, University of Cambridge.

Many deep issues plaguing today’s financial markets are symptoms of a fundamental problem: The complexity of algorithms underlying modern finance has significantly outpaced the power of traditional tools used to design and regulate them. At Aesthetic Integration, we've pioneered the use of formal verification for analysing the safety and fairness of financial algorithms. With a focus on financial infrastructure (e.g., the matching logics of exchanges and dark pools), we'll describe the landscape, and illustrate our Imandra formal verification system on a number of real-world examples. We'll sketch many open problems and future directions along the way.

Cesare Tinelli received a Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in 1999 and is currently a professor at the University of Iowa. Much of his work has focused on the development of results and techniques for Satisfiability Modulo Theories (SMT) and their application to formal methods. He has led or co-led the development of award-winning theorem provers and SMT solvers, including the widely used CVC4 solver. He is an associate editor of the Journal of Automated Reasoning and a founder and coordinator of the SMT-LIB initiative, an international effort aimed at standardizing benchmarks and I/O formats for SMT solvers.

Satisfiability Modulo Theories (SMT) refers to the problem of determining the satisfiability of a first-order formula in a logical theory of interest. Typical theories include theories of various forms of arithmetic and of various data types such as arrays, bit vectors, finite sets, strings, lists, and so on. In contrast to general purpose theorem provers, SMT solvers use specialized, and more efficient, inference methods for each theory. These methods are generally limited to conjunctions of simple constraints. Reasoning about formulas with arbitrary Boolean structures and quantifiers is then achieved by combining them in a principled way with powerful propositional satisfiability techniques and advanced quantifier instantiation heuristics. This approach has led to the creation of powerful and versatile solvers that are now used as back-end engines in a large and growing number of application areas, including hardware verification, equivalence checking, bounded and unbounded model checking, predicate abstraction, static analysis, symbolic simulation, program verification, security, test generation, type checking, planning, scheduling, and optimization. This talk will give an overview of the formal foundations of SMT by presenting and discussing a few abstract architectures for SMT solving together with a few methods for checking satisfiability in specific theories, with a focus on arithmetic ones.
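The lazy, DPLL(T)-style architecture described above can be illustrated with a deliberately simplified Python toy: Boolean search by enumeration instead of a SAT solver, and a theory solver for strict order constraints u < v that rejects assignments whose true atoms form a cycle. (Negated atoms are simply ignored by this theory check, which a real solver would not do; every name here is illustrative.)

```python
# Lazy SMT sketch: enumerate propositional models of a CNF over
# theory atoms, then ask a theory solver whether the atoms assigned
# True are jointly consistent.
from itertools import product

def has_cycle(edges):
    # DFS cycle detection in the directed graph of u < v constraints:
    # a cycle of strict inequalities is theory-inconsistent.
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
    visited, on_stack = set(), set()
    def dfs(node):
        visited.add(node); on_stack.add(node)
        for nxt in graph.get(node, []):
            if nxt in on_stack or (nxt not in visited and dfs(nxt)):
                return True
        on_stack.discard(node)
        return False
    return any(dfs(n) for n in graph if n not in visited)

def lazy_smt(atoms, cnf):
    # cnf: list of clauses; a clause is a list of (atom_index, polarity).
    for bits in product([False, True], repeat=len(atoms)):
        if not all(any(bits[i] == pol for i, pol in clause) for clause in cnf):
            continue  # not a propositional model
        # Theory check on the atoms assigned True.
        if not has_cycle([atoms[i] for i in range(len(atoms)) if bits[i]]):
            return bits  # theory-consistent model found
        # A real solver would learn a blocking clause here and re-run SAT.
    return None

atoms = [("x", "y"), ("y", "x")]                      # atom 0: x<y, atom 1: y<x
unsat = lazy_smt(atoms, [[(0, True)], [(1, True)]])   # x<y AND y<x
sat = lazy_smt(atoms, [[(0, True), (1, True)]])       # x<y OR  y<x
```

The conjunction x<y ∧ y<x is propositionally satisfiable but theory-inconsistent, which is precisely the gap that the theory solver and learned lemmas close in a real DPLL(T) loop.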

Christoph Weidenbach got his PhD in 1996 and Habilitation in 2000 from Saarland University. He is an author of a variety of automated reasoning procedures, including SPASS. From 1999 to 2005 he worked as a manager for GM Powertrain Europe and returned in 2005 to academia taking an independent research group leader position at the Max Planck Institute for Informatics. His interests are in the automation of logics in a broad sense.

Starting from Marijn's automated reasoning techniques for SAT, I explore corresponding developments in first-order logic (FOL) solving. It turns out that, due to the far more expressive logic, the representation of models becomes a research area in its own right, and syntactic techniques for deriving learned clauses become as important as model-based conflict-driven techniques. In particular, I will present recent progress on decision procedures for FOL fragments.