
\[\newcommand{\R}{\mathbb{R}} \newcommand{\N}{\mathbb{N}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\inferrule}[3][]{\frac{#2}{#3}\,{#1}}\]

Beyond $\ast$: Visualizing Quantifier Elimination for Real Arithmetic

Abstract: The existence of a quantifier elimination algorithm for real arithmetic is one of the foundational results that enables formal reasoning and verification of CPS. Most of the well-known algorithms for quantifier elimination are extremely complicated, too inefficient to be used on even the simplest of formulae, or both. This makes studying quantifier elimination algorithms difficult. This project aims to rectify this problem by providing a writeup and implementation of the Cohen-Hörmander algorithm along with visualizations to aid understanding.

Introduction

The modeling of cyber-physical systems (CPS), and the subsequent formal verification of the model, are made possible by a multitude of results. Foremost among these is the development of a logic (such as differential dynamic logic) that can express desirable properties of a CPS as logical formulae, along with a set of inference rules that can be used to construct proofs of these formulae. Manually constructing these computationally verifiable proofs from the axioms of all the formal logics involved would be far too painful even for simple CPS. It thus becomes important to seek methods of automating proof construction.

While it is impossible for many useful logics to fully automate the proof construction process [3], it is certainly possible to automate portions of it. Rather surprisingly, a classical result of Tarski [7], obtained around 1930, along with related more recent developments [5], shows that the entire proof construction process can be automated once the desired goal has been reduced to proving a formula of real arithmetic. This result is absolutely foundational for CPS verification, both from a theoretical and a practical perspective. The existence of an automatic verification procedure for formulae of real arithmetic allows one to abstract away the formal reasoning process behind the real arithmetic proof goals, and simply give inference rules such as the following, which holds whenever $\bigwedge \Gamma \implies \bigvee \Delta$ is a valid formula of real arithmetic [6].

\[\inferrule{\ast}{\,\,\Gamma \vdash \Delta\,\,}{\color{blue}{\R}}\]

Practically speaking, the real arithmetic proof goals that result from attempts to prove properties of CPS are often prohibitively complex for manual methods.

Given the significance of this result, and the rather mysterious nature of the real arithmetic proof rule, the question “what is really going on here?” likely crosses many students’ minds. A little research would reveal a number of algorithms for automatically deciding the truth of a sentence of real arithmetic (called quantifier elimination (QE) algorithms), but many of the choices have significant disadvantages for someone studying real QE for the first time:

- Tarski’s original procedure [7] is conceptually involved, and its complexity is non-elementary, making it unusable on all but the most trivial inputs.
- Cylindrical algebraic decomposition [2] is efficient enough for practical use, but the algorithm and its correctness arguments are quite complicated.
- Virtual substitution [9] is comparatively simple and efficient, but only applies to formulae in which the quantified variables occur with small degree.

However, there is a (not too well-known) alternative that offers a reasonable balance: the Cohen-Hörmander algorithm [1]. It is simple enough to be described in this paper, complete in the sense that it can (in principle) decide the truth of any sentence of real arithmetic, and efficient enough to admit implementations that one can actually interact with. This work thus aims to introduce an audience of students taking introductory logic courses to real quantifier elimination by providing a writeup, a number of visuals, and an implementation of the Cohen-Hörmander algorithm.

[10] develops a tool that can be used to visualize the data structures computed by the cylindrical algebraic decomposition algorithm [2]. The present work is analogous in that we provide an implementation that allows users to view and understand the main data structure (the sign matrix) computed by the Cohen-Hörmander algorithm.

[4] presents the Cohen-Hörmander algorithm and provides an implementation in OCaml. Our presentation of the algorithm is based on that one. While the implementation provided by the book is likely more readable than ours, we improve on its accessibility and usability by embedding ours in a website and producing verbose output that allows the reader to trace the operation of the algorithm.

Our presentation of the algorithm also differs from both of the above in that it includes animated visualizations intended to complement text-based explanations.

Background

In this section, we very briefly review some of the background material that is necessary to properly define the problem solved by the Cohen-Hörmander algorithm.

Real Arithmetic

The first-order theory of real closed fields is a formal language for stating properties of the real numbers. The language is built up recursively as follows:

Terms: Terms are the construct that the language uses to refer to real numbers, or to combine existing numbers into new ones. They are built up via the following inference rules:

\[\inferrule{c \in \Q}{c\,\mathsf{term}}{} \qquad \inferrule{x\,\mathsf{var}}{x\,\mathsf{term}}{} \qquad \inferrule{e_1\,\mathsf{term}\quad e_2\,\mathsf{term}}{e_1 + e_2\,\mathsf{term}}{} \qquad \inferrule{e_1\,\mathsf{term}\quad e_2\,\mathsf{term}}{e_1 \cdot e_2\,\mathsf{term}}{}\]

Formulae: Formulae are the construct that the language uses to express assertions about real numbers. The basic, or atomic, formulae are constructed as follows:

\[\inferrule{e_1\,\mathsf{term}\quad e_2\,\mathsf{term}}{e_1 = e_2\,\mathsf{form}}{} \qquad \inferrule{e_1\,\mathsf{term}\quad e_2\,\mathsf{term}}{e_1 < e_2\,\mathsf{form}}{} \qquad \inferrule{e_1\,\mathsf{term}\quad e_2\,\mathsf{term}}{e_1 > e_2\,\mathsf{form}}{} \qquad \inferrule{e_1\,\mathsf{term}\quad e_2\,\mathsf{term}}{e_1 \leq e_2\,\mathsf{form}}{} \qquad \inferrule{e_1\,\mathsf{term}\quad e_2\,\mathsf{term}}{e_1 \geq e_2\,\mathsf{form}}{} \qquad\]

Formulae can be joined together using boolean connectives:

\[\inferrule{\varphi\,\mathsf{form}}{\neg\varphi\,\mathsf{form}}{} \qquad \inferrule{\varphi\,\mathsf{form}\quad \psi\,\mathsf{form}}{\varphi \wedge \psi\, \mathsf{form}}{} \qquad \inferrule{\varphi\,\mathsf{form}\quad \psi\,\mathsf{form}}{\varphi \vee \psi\, \mathsf{form}}{} \qquad \inferrule{\varphi\,\mathsf{form}\quad \psi\,\mathsf{form}}{\varphi \implies \psi\, \mathsf{form}}{} \qquad \inferrule{\varphi\,\mathsf{form}\quad \psi\,\mathsf{form}}{\varphi \iff \psi\, \mathsf{form}}{}\]

Finally, variables occurring in terms can be given meaning by way of quantifiers.

\[\inferrule{\varphi\,\mathsf{form}\quad x\,\mathsf{var}}{\forall x\, \varphi\,\mathsf{form}}{} \qquad \inferrule{\varphi\,\mathsf{form}\quad x\,\mathsf{var}}{\exists x\, \varphi\,\mathsf{form}}{} \qquad\]

Real arithmetic is what we get when we give these syntactic constructs their natural meaning over the real numbers. That is, the symbols $+, \cdot, =, <$, etc. denote addition, multiplication, equality, and comparison of real numbers respectively. Quantifiers are interpreted to range over the real numbers.

With these constructs, we can formally express many properties of the real numbers. Some examples include:

- Squares are nonnegative: $\forall x\, (x \cdot x \geq 0)$.
- Every nonnegative number has a square root: $\forall x\, (x \geq 0 \implies \exists y\, (y \cdot y = x))$.
- Between any two distinct numbers lies a third: $\forall x\, \forall y\, (x < y \implies \exists z\, (x < z \wedge z < y))$.

However, not everything that we intuitively think of as a property of the real numbers can actually be accurately expressed in this language. A typical example is the supremum property: the assertion that every nonempty set of real numbers which is bounded above has a least upper bound has no equivalent in this language [8]. As we shall shortly see, the expressiveness (or lack thereof) of this language is key to the operation of the Cohen-Hörmander algorithm.

Now we can properly define what the Cohen-Hörmander algorithm actually does. It is a quantifier elimination algorithm: it takes as input a formula $\varphi$ in this language, and produces a formula $\psi$ which contains no quantifiers and whose free variables are a subset of the free variables of $\varphi$. Moreover, $\psi$ and $\varphi$ have the same truth value under every assignment of real numbers to their free variables. In particular, if $\varphi$ is a sentence (i.e. has no free variables), then $\psi$ contains no variables at all, so its truth value can be computed directly; this is how quantifier elimination yields a decision procedure.
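As a concrete example of what quantifier elimination produces (a standard example, independent of the particular algorithm used), eliminating the quantifier from the solvability condition of a monic quadratic yields the familiar discriminant test:

\[\exists x\, (x \cdot x + b \cdot x + c = 0) \quad\iff\quad b \cdot b - 4 \cdot c \geq 0\]

Both sides have the same truth value for every choice of $b$ and $c$, and the right-hand side is quantifier-free.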

Real Analysis/Algebra

In this section, we list a few definitions and theorems of basic analysis/algebra that are useful in understanding the Cohen-Hörmander algorithm.

Definition (Sign): The sign of a real number $x$ is

\[\operatorname{sign}(x) = \begin{cases} + & \text{if } x > 0 \\ 0 & \text{if } x = 0 \\ - & \text{if } x < 0 \end{cases}\]

Theorem (Intermediate value): If $f$ is a continuous function of $x$ (in particular, if $f$ is a polynomial in $x$), $a < b$, and the signs of $f(a)$ and $f(b)$ do not match, then $f$ has a root in $[a, b]$.

Theorem (Mean value): If $f$ is a differentiable function of $x$ (in particular, if $f$ is a polynomial in $x$), and $a < b$, then there exists $c \in (a, b)$ such that \[ f'(c) = \frac{f(b) - f(a)}{b - a} \]

Definition (Polynomials with rational coefficients): $\Q[x]$ denotes the set of all polynomials in $x$ with coefficients in $\Q$.

Theorem (Polynomial division): If $a, b \in \Q[x]$ and $b \neq 0$, there exist unique $q, r \in \Q[x]$ satisfying

\[a = qb + r, \qquad r = 0 \,\text{ or }\, \deg(r) < \deg(b)\]
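Since polynomial division is the main computational workhorse of the algorithm described below, it is worth seeing in code. The following is a minimal TypeScript sketch, representing a polynomial by its coefficient array; it uses floating-point numbers for brevity, whereas the provided implementation uses exact rational arithmetic (via Fraction.js) instead:

```typescript
// Degree of a polynomial represented as a coefficient array [a0, a1, ..., an],
// where a_i is the coefficient of x^i; -1 encodes the zero polynomial.
function degree(p: number[]): number {
  let d = -1;
  p.forEach((c, i) => { if (c !== 0) d = i; });
  return d;
}

// Divide a by b, returning q and r with a = q*b + r and r = 0 or deg(r) < deg(b).
function polyDivide(a: number[], b: number[]): { q: number[]; r: number[] } {
  const db = degree(b);
  if (db < 0) throw new Error("division by the zero polynomial");
  const r = [...a];
  const q: number[] = new Array(Math.max(degree(a) - db + 1, 0)).fill(0);
  for (let d = degree(r); d >= db; d = degree(r)) {
    const coef = r[d] / b[db];      // leading coefficient of the next quotient term
    q[d - db] = coef;
    for (let i = 0; i <= db; i++) { // subtract coef * x^(d - db) * b from r
      r[d - db + i] -= coef * b[i];
    }
    r[d] = 0;                       // force exact cancellation of the leading term
  }
  return { q, r };
}

// Example: x^2 - 1 divided by x^2 + 2x leaves remainder -2x - 1
// (this pair of polynomials reappears in the worked example later).
console.log(polyDivide([-1, 0, 1], [0, 2, 1]).r); // [-1, -2, 0]
```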

The Algorithm

Our approach will be to first treat the simpler univariate case: formulae of the form $\forall x\, \varphi$ or $\exists x\, \varphi$ such that the only variable in $\varphi$ is $x$. Unfortunately, due to time constraints and the absence of the required libraries for JavaScript, we do not describe or implement the multivariate generalization here.

Univariate Case

A priori, deciding the truth of a formula of the form $\forall x\, \varphi$ or $\exists x\, \varphi$ requires looping through every single $x \in \R$, substituting that value into $\varphi$, checking the result, and then combining the results for all $x \in \R$ in the manner that suits the quantifier. This approach works when the model we’re concerned with is finite, but since $\R$ is very much not finite, this cannot possibly yield a useful algorithm.

Understanding the Sign Matrix

The first step in unlocking a decision procedure for real arithmetic is to look closely at what formulae in this language can express. All formulae are ultimately built up from atomic formulae, and atomic formulae are built out of terms, so we’ll start there.

Recall the inductive construction of terms: we started with rational constants and variables (in our present case, only $x$), and we were allowed to combine terms into larger terms by adding and multiplying. Using multiplication only, starting with rational constants and $x$, we’ll get terms of the form $qx^n$, where $q \in \Q$ and $n \in \N$. These are monomials (with rational coefficients) in $x$. Add addition into the mix, and since multiplication distributes over addition, we arrive at our first important observation: a term is a polynomial in $x$ with rational coefficients.
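This observation is easy to make computational. Here is a TypeScript sketch (with a hypothetical term AST; the types in the provided implementation differ) that normalizes any term into the coefficient array of the polynomial it denotes:

```typescript
// A hypothetical AST for univariate terms.
type Term =
  | { kind: "const"; value: number }
  | { kind: "var" } // the variable x
  | { kind: "add"; left: Term; right: Term }
  | { kind: "mul"; left: Term; right: Term };

// Normalize a term to a coefficient array [a0, a1, ..., an].
function toPoly(t: Term): number[] {
  switch (t.kind) {
    case "const": return [t.value];
    case "var":   return [0, 1];
    case "add": {
      const a = toPoly(t.left), b = toPoly(t.right);
      const out: number[] = new Array(Math.max(a.length, b.length)).fill(0);
      a.forEach((c, i) => { out[i] += c; });
      b.forEach((c, i) => { out[i] += c; });
      return out;
    }
    case "mul": { // convolution of coefficients, by distributivity
      const a = toPoly(t.left), b = toPoly(t.right);
      const out: number[] = new Array(a.length + b.length - 1).fill(0);
      a.forEach((c, i) => b.forEach((d, j) => { out[i + j] += c * d; }));
      return out;
    }
  }
}
// Example: 3 * x * x + 2 normalizes to [2, 0, 3], i.e. 3x^2 + 2.
```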

Atomic formulae were constructed as $e_1 \textsf{ CMP } e_2$, where $e_1, e_2$ are terms and $\mathsf{CMP}$ is one of $=$, $<$, $>$, $\leq$, or $\geq$. Since terms are polynomials in $x$, atomic formulae are comparisons between polynomials. But $e_1 \textsf{ CMP } e_2$ is equivalent to $e_1 + (-1) \cdot e_2 \textsf{ CMP } 0$ for any choice of $\mathsf{CMP}$, and $e_1 + (-1) \cdot e_2$ is also a term, and therefore a polynomial. Thus, every atomic formula asserts something about the sign of a polynomial. For example, the atomic formula $3x^2 + 2 \geq 2x + 1$ equivalently asserts that the polynomial $3x^2 - 2x + 1$ is positive or zero.

This realization is key to transforming the infinite loop over $\R$ in our initially proposed algorithm into a finite loop. Polynomials (with the zero polynomial being an easy special case) have only finitely many roots. Between two consecutive roots (also before the first root, and after the last root), a polynomial must maintain the same sign, since if it changed sign, by the intermediate value theorem there would have to be another root in the middle. The upshot is that if $x_1, \dots, x_n$ are the roots of a polynomial $p$ in increasing order, then by knowing the sign of $p$ at one point in each of the intervals $(-\infty, x_1), (x_1, x_2), \dots, (x_{n - 1}, x_n), (x_n, \infty)$, we know the sign of $p$ for every $x \in \R$. Since the truth of an atomic formula $p \textsf{ CMP } 0$ at a point $x$ is a function of the sign of $p$ at $x$, given a finite data structure which specifies the signs of $p$ over the intervals $(-\infty, x_1), (x_1, x_2), \dots, (x_{n - 1}, x_n), (x_n, \infty)$, we can evaluate an atomic formula at any $x \in \R$. Note that the signs at the points $x_1, \dots, x_n$ are all $0$, since $x_1, \dots, x_n$ are the roots of $p$, so in the case where we’re dealing with just a single polynomial, we don’t need to include the sign information at the roots in the data structure.
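Since only the sign of $p$ matters, evaluating an atomic formula $p \textsf{ CMP } 0$ reduces to a lookup table. A sketch:

```typescript
type Sign = "+" | "0" | "-";
type Cmp = "=" | "<" | ">" | "<=" | ">=";

// Truth of the atomic formula "p CMP 0" as a function of the sign of p alone.
function atomHolds(sign: Sign, cmp: Cmp): boolean {
  switch (cmp) {
    case "=":  return sign === "0";
    case "<":  return sign === "-";
    case ">":  return sign === "+";
    case "<=": return sign !== "+";
    case ">=": return sign !== "-";
  }
}
```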

A quantifier-free formula $\varphi$ is just a propositional combination of a bunch of atomic formulae, and as such, knowing the truth value of each of the constituent atomic formulae is sufficient to determine the truth value of $\varphi$. Above we discussed how to obtain a finite data structure that gives us the truth value of an atomic formula at any $x \in \R$ - so all we need to do is combine these data structures for all the (finitely many) atomic formulae that are in $\varphi$, and we obtain a finite data structure which can be used to evaluate $\varphi$ at any point $x$. This is exactly the sign matrix data structure which is the core of the Cohen-Hörmander algorithm.

More formally, let $S_\varphi$ be the set of all polynomials that occur in atomic formulae within $\varphi$. Then the rows of the sign matrix are indexed by the polynomials in $S_\varphi$. If $x_1, \dots, x_n$ are an exhaustive list of all the roots of the polynomials in $S_\varphi$ with $x_1 < x_2 < \cdots < x_n$, then the columns of the sign matrix are indexed by the list

\[(-\infty, x_1), x_1, (x_1, x_2), x_2, \dots, (x_{n - 1}, x_n), x_n, (x_n, \infty)\]

i.e., the singleton sets at the roots and the intervals between them. The entry of the sign matrix at row $p \in S_\varphi$ and column $I$ is just the sign of $p$ on $I$. Just as before, note that the signs of all the polynomials are invariant on each interval, because if a polynomial (or any continuous function, for that matter) changes sign on an interval, it must have a root in that interval. But since $x_1, \dots, x_n$ is a list of ALL of the roots of the polynomials in $S_\varphi$, and no interval listed above contains any of these points, this is not possible. Note also that in this case, where we potentially have multiple polynomials, we do need to specify the sign of each polynomial at each root: the presence of $x_i$ as a column only means that $x_i$ is a root of one of the polynomials involved - the other polynomials may have nonzero sign at $x_i$.

Here’s an example of a sign matrix for the set of polynomials $\{p_1, p_2, p_3\}$, where $p_1(x) = 4x^2 - 4$, $p_2(x) = (x + 1)^3$, and $p_3(x) = -5x + 5$.

\[\begin{array}{cccccc} & (-\infty, x_1) & x_1 & (x_1, x_2) & x_2 & (x_2, \infty) \\ p_1 & + & 0 & - & 0 & + \\ p_2 & - & 0 & + & + & + \\ p_3 & + & + & + & 0 & - \end{array}\]
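For concreteness, here is one way this sign matrix might be represented in code (the names are illustrative, not taken from the provided implementation; Sign is as in the sketch above):

```typescript
// Columns alternate interval, root, interval, ..., so column k is a root
// column exactly when k is odd. No numerical root values are stored.
interface SignMatrix {
  rows: string[];   // row labels, e.g. ["p1", "p2", "p3"]
  signs: Sign[][];  // signs[i][k] = sign of rows[i] on column k
}

// The example above, with columns (-inf, x1), x1, (x1, x2), x2, (x2, inf):
const example: SignMatrix = {
  rows: ["p1", "p2", "p3"],
  signs: [
    ["+", "0", "-", "0", "+"], // p1 = 4x^2 - 4
    ["-", "0", "+", "+", "+"], // p2 = (x + 1)^3
    ["+", "+", "+", "0", "-"], // p3 = -5x + 5
  ],
};
```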

And here’s an animation that illustrates the meaning of the sign matrix.

Finally, here’s an animation that shows how this sign matrix can be used to compute the truth value of the formula $\forall x\, [(p_1(x) \geq 0 \wedge p_2(x) \geq 0) \vee (p_3(x) > 0)]$. Note how the fact that the signs of the polynomials (and thus the truth values of the atomic formulae) are invariant over each column of the sign matrix allows us to effectively iterate over all of $\R$ by only checking each column of the sign matrix.
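The decision step itself is then a finite loop over the columns of the matrix. A sketch, reusing the SignMatrix encoding and example from above:

```typescript
// A universally quantified sentence holds iff its quantifier-free body holds
// on every column (an existential one would use .some instead of .every).
function decideForall(m: SignMatrix, body: (col: Sign[]) => boolean): boolean {
  const ncols = m.signs[0].length; // assumes at least one row
  const columns = Array.from({ length: ncols }, (_, k) => m.signs.map(row => row[k]));
  return columns.every(body);
}

// forall x. (p1(x) >= 0 && p2(x) >= 0) || p3(x) > 0, from the animation:
const holds = decideForall(example, ([s1, s2, s3]) =>
  (s1 !== "-" && s2 !== "-") || s3 === "+");
console.log(holds); // true
```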

One important thing to note is that while the sign matrix relies crucially on the ordering of the roots of the polynomials involved, it doesn’t actually contain any numerical information about the roots themselves. In our toy example, it’s easy to see that $x_1 = -1$ and $x_2 = 1$, but this isn’t recorded in the sign matrix, nor is it necessary for the final decision procedure.

Computing the Sign Matrix

Unfortunately, building the sign matrix for an arbitrary set of polynomials isn’t as simple as telling the computer to “draw a graph,” as we did in the animation above. However, to compute the sign matrix for the set of polynomials $\{p_1, \dots, p_n\}$, we first remove the $0$ polynomial if it’s in the set - it can be added back at the end of the algorithm by simply setting its sign to $0$ everywhere. Then we construct a set containing the following polynomials:

- the derivative $p_1'$;
- the remaining polynomials $p_2, \dots, p_n$;
- $r_1$, the remainder upon dividing $p_1$ by $p_1'$;
- $r_i$ for $2 \leq i \leq n$, the remainder upon dividing $p_1$ by $p_i$.
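In code, this construction might look like the following sketch (reusing degree and polyDivide from the division sketch in the Background section):

```typescript
// Differentiate a coefficient array: d/dx (a0 + a1 x + ... + an x^n).
function derivative(p: number[]): number[] {
  return p.slice(1).map((c, i) => c * (i + 1));
}

// Build the input set for the recursive call: p1', p2, ..., pn, plus the
// remainders of p1 upon division by each of them. We put a polynomial of
// maximal degree in the role of p1 (the termination argument below needs
// this), and we assume p1 is nonconstant (constants are the base case).
function recursiveInput(ps: number[][]): number[][] {
  const sorted = [...ps].sort((a, b) => degree(b) - degree(a));
  const [p1, ...rest] = sorted;
  const divisors = [derivative(p1), ...rest];
  const remainders = divisors.map(b => polyDivide(p1, b).r);
  // drop zero remainders, mirroring the removal of the 0 polynomial above
  return [...divisors, ...remainders.filter(r => degree(r) >= 0)];
}
```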

We recursively compute the sign matrix for this set, and use it to construct the sign matrix for $\{p_1, \dots, p_n\}$. It’s not at all clear how the seemingly arbitrarily constructed polynomials above should help us build a sign matrix for $\{p_1, \dots, p_n\}$, so we first discuss that.

Including $p_2, \dots, p_n$ makes sense: these polynomials are in the “target set” $\{p_1, \dots, p_n\}$ as well. Having information on how their signs change at their roots and the intervals between them certainly helps us build a sign matrix for $\{p_1, \dots, p_n\}$ - it reduces our worries to figuring out the behavior of $p_1$.

The reason for including $p_1'$ is revealed by the following property: if $p_1'$ has no roots on an interval $(a, b)$, then $p_1$ can have at most one root in $(a, b)$. The reason for this is that in between any two distinct roots of $p_1$, there must exist a turning point of $p_1$, which corresponds to a root of $p_1'$ (this is Rolle’s theorem, a consequence of the mean value theorem stated above).

So, if $p_1$ has two distinct roots $x_1, x_2$ in $(a, b)$, $p_1'$ must also have a root in $(a, b)$; the contrapositive of this is the desired statement. The same reasoning holds for intervals that are infinite in either or both directions. Given a sign matrix for a set of polynomials including $p_1'$, we know that $p_1'$ has no roots in any of the interval columns (all of its roots appear as root columns). This is the reason why we include $p_1'$ in the input for the recursive call - it limits the number of “extra roots” that $p_1$ can have to at most one per interval in the recursively computed sign matrix.

The remainders are included to help us deduce the sign of $p_1$ at the roots of the recursively computed sign matrix. Since the $r_i$s are defined as remainders when doing polynomial division, there exist polynomials $q_i$, $1 \leq i \leq n$, such that

\[\begin{align*} p_1(x) &= q_1(x)p_1'(x) + r_1(x) \\ p_1(x) &= q_i(x)p_i(x) + r_i(x) \qquad (2 \leq i \leq n) \end{align*}\]

for every $x \in \R$. In particular, we can substitute in any $x_k$ in the recursively computed sign matrix that is the root of $p_i$, and we get $p_1(x_k) = q_i(x_k)p_i(x_k) + r_i(x_k)$, but since $p_i(x_k) = 0$, this reduces to $p_1(x_k) = r_i(x_k)$. This means that the sign of $p_1$ at any point $x_k$ in the recursively computed sign matrix that is the root of some $p_i$, $2 \leq i \leq n$ (resp. $p_1'$) is the same as the sign of $r_i$ (resp. $r_1$). Since $r_i$ is part of the input to the recursive call, we can read the sign of $r_i$ at $x_k$ from the sign matrix to obtain the sign of $p_1$.

But why bother with division for this? Indeed, if we added the polynomials $p_1 + p_1', p_1 + p_2, \dots, p_1 + p_n$ to the input list instead of these remainders, we could obtain the signs of $p_1$ at the roots of the polynomials $p_1', p_2, \dots, p_n$ just as easily (the sign of $p_1$ at a root $x_k$ of $p_i$ is the sign of $p_1 + p_i$ at $x_k$). The reason for preferring remainders is that we want our recursion to terminate. Given an initial input set of polynomials $\{p_1, \dots, p_n\}$, we construct the set $\{p_1', p_2, \dots, p_n, r_1, \dots, r_n\}$ as the input to the recursive call. This set could potentially have more than twice the size of the initial set, so a termination argument for the recursion can’t possibly be based on decreasing size of the input set. However, the following observations will help us:

- $\deg(p_1') < \deg(p_1)$;
- $r_1 = 0$ or $\deg(r_1) < \deg(p_1')$, by the polynomial division theorem;
- $r_i = 0$ or $\deg(r_i) < \deg(p_i)$ for $2 \leq i \leq n$, again by polynomial division.

Now, if we choose $p_1$ such that it has maximal degree in the input set (so that every $p_i$ has degree at most the degree of $p_1$), the last observation can be replaced by:

- $r_i = 0$ or $\deg(r_i) < \deg(p_i) \leq \deg(p_1)$ for $2 \leq i \leq n$; that is, every remainder has degree strictly less than that of $p_1$.

Then, when we combine all the observations, we see that the input to the recursive call is constructed by removing a polynomial $p_1$ of maximal degree, and replacing it with a bunch of polynomials of strictly smaller degree. Thus, every time we recurse, either the maximal degree of the input set decreases, or it stays the same while the number of polynomials attaining it decreases.

Any recursion having this property will terminate; indeed, the property implies that the pairs (max degree, # of polys with max degree) form a strictly decreasing sequence in $\omega^2$ under the lexicographic order, and a strictly decreasing sequence in a well-ordered set must be finite. Note that the alternative method that we proposed (replacing the remainders with $p_1 + p_1', p_1 + p_2, \dots, p_1 + p_n$) does not have the above properties, and there is no reason why the alternative method should produce a terminating recursion.
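In code, the termination measure is simple to state (a sketch reusing degree from above):

```typescript
// Lexicographic termination measure: (max degree, # of polynomials of max
// degree). Each recursive call strictly decreases this pair.
function measure(ps: number[][]): [number, number] {
  const d = Math.max(...ps.map(degree));
  return [d, ps.filter(p => degree(p) === d).length];
}
```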

The base case of the recursion can be taken to be any set of constant polynomials. Constant polynomials never change their sign, so the sign matrix in this case has just one column: $(-\infty, \infty)$. The sign of a constant polynomial $p(x) = c$ is simply the sign of $c$.

Almost all the pieces are in place. We’ve described how to construct the input to the recursive call, we’ve provided a base case, and we’ve argued that the recursion terminates. We now explain how to construct the solution given the output of the recursive call. Most of the work was already done in explaining the intuition behind choosing the input to the recursive call. The signs of $p_2, \dots, p_n$ at all of their roots and on the intervals between them are already part of the output of the recursive call, so we worry only about $p_1$.

We start by going through the roots $x_k$ in the recursively obtained sign matrix. Any time we see a $0$ in the column $x_k$ in any of the rows $p_1', p_2, \dots, p_n$ (i.e. $x_k$ is a root of at least one of $p_1', p_2, \dots, p_n$), we assign $p_1$ the sign of the corresponding remainder (i.e. $r_1$ for roots of $p_1'$, and $r_i$ for roots of $p_i$, $2 \leq i \leq n$). For now, we don’t do anything for any of the other columns. Here’s an example where $p_1(x) = x^2 - 1, p_2(x) = x^2 + 2x$, which gives us $p_1'(x) = 2x, p_2(x) = x^2 + 2x, r_1(x) = -1, r_2(x) = -2x - 1$ as the input for the recursive call, and the following recursively computed sign matrix:

\[\begin{array}{cccccccc} & (-\infty, x_1) & x_1 & (x_1, x_2) & x_2 & (x_2, x_3) & x_3 & (x_3, \infty) \\ p_1' & - & - & - & - & - & 0 & + \\ p_2 & + & 0 & - & - & - & 0 & + \\ r_1 & - & - & - & - & - & - & - \\ r_2 & + & + & + & 0 & - & - & - \end{array}\]

The column $x_1$ is a root of $p_2$, so the sign of the corresponding remainder $r_2$ is lifted to $p_1$. The column $x_2$ is not a root of either $p_1'$ or $p_2$, so we don’t assign a sign to $p_1$ for that column - this is indicated by the $\varnothing$ sign. The column $x_3$ is a root of both $p_1'$ and $p_2$: in the animation we choose to lift the sign from $r_1$ to $p_1$, but lifting from $r_2$ would have yielded the same result. This step is called DETERMINING SIGN of p_1 AT ROOTS of p_1', p_2, ..., p_n in the implementation.
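In code, this step might look like the following sketch (row arrays as in the SignMatrix encoding above; the bookkeeping in the actual implementation differs):

```typescript
// Determine the sign of p1 at the root column `col` of the recursively
// computed matrix. divisors[i] is the row of the i-th divisor (p1' for i = 0,
// p_{i+1} otherwise) and remainders[i] is the row of its remainder.
function signOfP1AtRoot(
  divisors: Sign[][],
  remainders: Sign[][],
  col: number,
): Sign | null {
  for (let i = 0; i < divisors.length; i++) {
    // if x_k is a root of the divisor, then p1(x_k) = r_i(x_k)
    if (divisors[i][col] === "0") return remainders[i][col];
  }
  // x_k is a root of a remainder only: no sign is assigned (the ∅ above),
  // and the column is merged away in the next step
  return null;
}
```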

From this point onwards, we don’t need any of the information from the remainder polynomials, so we implement a step which removes them from the sign matrix. In doing so, we may end up with “root” columns which are no longer the root of any polynomial in the matrix (e.g. $x_2$ in the example above). We merge these columns with their adjacent intervals to maintain the invariant that the columns of the sign matrix alternate between roots and intervals. This step is called REMOVING REMAINDER INFORMATION AND FIXING MATRIX in the provided implementation.
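A sketch of this merging step, reusing the SignMatrix encoding from above (and assuming the remainder rows have already been deleted):

```typescript
// Drop every root column at which no remaining polynomial vanishes, merging
// it with the intervals on both sides. A polynomial with no root at x_k or
// on the two adjacent intervals has the same sign on all three columns, so
// keeping the signs of the left interval suffices.
function fixColumns(m: SignMatrix): SignMatrix {
  const ncols = m.signs[0].length;
  const keep: number[] = [];
  for (let k = 0; k < ncols; k++) {
    const deadRoot = k % 2 === 1 && m.signs.every(row => row[k] !== "0");
    if (deadRoot) {
      k += 1; // skip this root column and the interval following it
    } else {
      keep.push(k);
    }
  }
  return { rows: m.rows, signs: m.signs.map(row => keep.map(c => row[c])) };
}
```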

Since all roots where we didn’t deduce a sign for $p_1$ were removed in this last step, we now have a sign matrix with signs for $p_1$ at all roots. We just have to treat the intervals. There are two questions we have to answer for each interval:

- Does $p_1$ have a root inside the interval? If so, a new root column (together with the two sub-intervals it creates) must be injected into the matrix.
- What sign does $p_1$ take on the interval (or on each sub-interval, if a root was injected)?

There are four different types of intervals: the bi-infinite interval $(-\infty, \infty)$ that is typically only produced in the base case, the half-infinite intervals $(-\infty, x_1)$ and $(x_n, \infty)$ that usually occur as the first and last columns of the sign matrix, and the typical bounded interval $(x_i, x_{i + 1})$ between two consecutive roots. Fortunately, they can all be treated using the same logic, but there is an important pre-processing step for the unbounded intervals. We give them pseudo-endpoints $-\infty$ and $+\infty$, at which the “sign of $p_1$” is taken to be the sign that $p_1(x)$ assumes as $x \to -\infty$ and $x \to \infty$ respectively.

These signs can be computed from the sign of the derivative $p_1'$ in the unbounded intervals: if $p_1'$ is positive on $(x_n, \infty)$, then $p_1$ is increasing there, so $p_1(x) \to \infty$ and the sign of $p_1$ at the pseudo-endpoint $+\infty$ is $+$; if $p_1'$ is negative there, the sign is $-$. At $-\infty$ the relationship flips: the sign of $p_1$ at the pseudo-endpoint is the opposite of the sign of $p_1'$ on the leftmost interval (e.g. if $p_1$ is increasing on $(-\infty, x_1)$, then $p_1(x) \to -\infty$ as $x \to -\infty$). Note that the sign matrix is already fully populated in the row $p_1'$, so we can simply read off the sign of $p_1'$ in the necessary intervals.

Now that we have assigned pseudo-endpoints to the unbounded intervals, along with the sign of $p_1$ at these pseudo-endpoints, we proceed with the assumption that every interval has two endpoints, and the sign of $p_1$ is known at both endpoints. This is true for the pseudo-endpoints as we just stated, and it is true for the “real” endpoints (the roots) because the previous step already determined the signs of $p_1$ at all roots in the sign matrix.

Given an interval with endpoints $a, b$ (both are either the pseudo-endpoints $-\infty, \infty$ or roots $x_i$), we split into cases based on the signs of $p_1$ at $a$ and $b$. There are $3 \times 3 = 9$ possibilities, but fortunately many of them are similar.
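The case analysis is compact enough to sketch in full. Note that the two endpoint signs can never both be $0$: two consecutive columns cannot both be roots of $p_1$, since $p_1'$ would then have a root strictly between them. A sketch:

```typescript
// Signs of p1 over one interval, given its signs at the two (possibly
// pseudo-) endpoints. Returns either [s] (constant sign s on the interval)
// or [l, "0", r] (exactly one root strictly inside, splitting the column).
function intervalSigns(left: Sign, right: Sign): Sign[] {
  if (left !== "0" && right !== "0" && left !== right) {
    // opposite nonzero signs: a root inside by the IVT, unique by the
    // "at most one root per interval" property coming from p1'
    return [left, "0", right];
  }
  // otherwise p1 cannot vanish inside: an interior root would force a root
  // of p1' inside the interval (a turning point / Rolle), which is impossible
  const s = left !== "0" ? left : right;
  if (s === "0") throw new Error("consecutive columns cannot both be roots of p1");
  return [s];
}
```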

Combining the above reasoning with our pre-processing, we have now described how to compute the signs of $p_1$ on each interval, as well as inject roots where necessary. That is, we can deduce signs for $p_1$ on every interval in the sign matrix, as well as add new columns as needed when there are roots of $p_1$ that weren’t already in the matrix. With this, we’ve completed adding in all the sign information for $p_1$, and the sign information for $p_2, \dots, p_n$ was already in the recursively computed sign matrix. Thus, we run one final filtration/merging pass to remove rows corresponding to polynomials not in the original input list $p_1, p_2, \dots, p_n$ (this last step is called FILTERING AND MERGING RESULT, and is very similar to the previous REMOVING REMAINDER INFORMATION AND FIXING MATRIX step), and we output the result.

Sign Matrix Calculation Implementation

We provide an implementation of sign matrix computation via this algorithm here. The implementation is also embedded below. Enter a comma-separated list of polynomials with rational coefficients. Enough output will be produced to trace the steps described above. For example, an input such as x^2 - 1, x^2 + 2x reproduces the worked example from the previous section.

Some notes on how to read the output:

- The capitalized step names in the output (e.g. DETERMINING SIGN of p_1 AT ROOTS of p_1', p_2, ..., p_n; REMOVING REMAINDER INFORMATION AND FIXING MATRIX; FILTERING AND MERGING RESULT) correspond to the steps described in the previous section.
- A $\varnothing$ entry in a printed sign matrix means that no sign has been assigned to that cell yet.
- As noted earlier, the roots $x_1, x_2, \dots$ are kept in order, but their numerical values are never computed or displayed.

Univariate Decision Procedure Implementation

This script combines the sign matrix computation with the decision procedure we described earlier to evaluate the truth of univariate formulae. The input format is not very polished; examples and notes on the input format are provided alongside the embedded script.



Deliverables

This webpage embeds all the deliverables, and serves as both the final project and the term paper. We summarize all the deliverables briefly here:

- this writeup, which presents the Cohen-Hörmander algorithm and doubles as the term paper;
- the animated visualizations of the sign matrix and of the decision procedure;
- the sign matrix calculation implementation, with verbose output for tracing the steps of the algorithm;
- the univariate decision procedure implementation.

Acknowledgements

Thanks to Prof. Platzer for making me aware of the Cohen-Hörmander algorithm and providing resources to learn more about it.

Thanks to Grant Sanderson (3blue1brown) and the other manim developers for the framework, without which creating the animations seen above would have been impossible.

Thanks to Robert Eisele (GitHub user infusion) for the Fraction.js rational number library and the Polynomial.js univariate polynomial library.

References

[1] Cohen, Paul J. 1969. “Decision Procedures for Real and p-Adic Fields.” Communications on Pure and Applied Mathematics 22 (2): 131–51. https://doi.org/10.1002/cpa.3160220202.
[2] Collins, George E. 1975. “Quantifier Elimination for Real Closed Fields by Cylindrical Algebraic Decomposition.” In Automata Theory and Formal Languages, edited by H. Brakhage, 134–83. Berlin, Heidelberg: Springer Berlin Heidelberg.
[3] Gödel, Kurt. 1931. “Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I.” Monatshefte für Mathematik und Physik 38 (1): 173–98. https://doi.org/10.1007/BF01700692.
[4] Harrison, John. 2009. Handbook of Practical Logic and Automated Reasoning. 1st ed. USA: Cambridge University Press.
[5] McLaughlin, Sean, and John Harrison. 2005. “A Proof-Producing Decision Procedure for Real Arithmetic.” In Automated Deduction – CADE-20, edited by Robert Nieuwenhuis, 295–314. Berlin, Heidelberg: Springer Berlin Heidelberg.
[6] Platzer, André. 2018. Logical Foundations of Cyber-Physical Systems. 1st ed. Springer Publishing Company, Incorporated.
[7] Tarski, Alfred. 1998. “A Decision Method for Elementary Algebra and Geometry.” In Quantifier Elimination and Cylindrical Algebraic Decomposition, edited by Bob F. Caviness and Jeremy R. Johnson, 24–84. Vienna: Springer Vienna.
[8] Väänänen, Jouko. 2020. “Second-order and Higher-order Logic.” In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta, Fall 2020. https://plato.stanford.edu/archives/fall2020/entries/logic-higher-order/; Metaphysics Research Lab, Stanford University.
[9] Weispfenning, V. 1997. “Quantifier Elimination for Real Algebra — the Quadratic Case and Beyond.” Applicable Algebra in Engineering, Communication and Computing 8 (2): 85–101. https://doi.org/10.1007/s002000050055.
[10] Wilson, David John. 2014. “Advances in Cylindrical Algebraic Decomposition.” Department of Computer Science, University of Bath.