module Data.Nat.Solver where
The Nat solver
This module defines a solver for equations in the commutative semiring of natural numbers. Our approach splits cleanly into 3 distinct parts:
- Horner normal forms for polynomials
- Evaluation of reflected terms into polynomials
- The reflection interface
Horner normal forms
If we ignore the suc and zero constructors, and their respective equations, the core problem at hand is trying to compute normal forms for polynomials. Luckily, like most problems involving polynomials, this has been thoroughly studied! There are many possible normal forms to choose from, but the most useful for our task is Horner normal form, as it admits a particularly nice inductive characterization.
The core idea is that we can rewrite a (univariate) polynomial of the form $a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n$ into the following form: $a_0 + x(a_1 + x(a_2 + \cdots + x(a_n)))$.
However, we need multivariate polynomials, not just univariate! To do this, we exploit the isomorphism between $A[X_0, \ldots, X_n]$ and $A[X_0, \ldots, X_{n-1}][X_n]$. This allows us to define the normal form of a multivariate polynomial as (essentially) a product of univariate ones.
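To see concretely why this nested form is convenient, here is a small Python sketch (the function name is ours, for illustration only; it is not part of the Agda development). Horner evaluation walks the coefficient list once, doing one multiplication and one addition per coefficient, with no exponentiation:

```python
# Evaluate a0 + a1*x + ... + an*x^n using the Horner rewriting
# a0 + x*(a1 + x*(a2 + ... + x*an)), folding from the
# highest-degree coefficient down.
def horner(coeffs, x):
    acc = 0
    for a in reversed(coeffs):
        acc = acc * x + a
    return acc

# x^4 + 1 at x = 2 is 17
assert horner([1, 0, 0, 0, 1], 2) == 17
```

The accumulator pattern here is exactly the shape the inductive characterization below captures: a polynomial is either a constant, or "multiply by x and add a lower-degree part".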
We start by defining the type of n-ary multivariate polynomials.
data Poly {a} (A : Type a) : Nat → Type a where
The first polynomial we define is the constant polynomial $c$.
const : (c : A) → Poly A 0
Next, we define the zero polynomial. Note that we do _not_ include the case $n = 0$ here! This is important for ensuring that our data type defines a (somewhat) unique normal form. If we had instead chosen Poly A n, we could represent the zero polynomial using either const 0 or zerop, which complicates matters somewhat.
zerop : ∀ {n} → Poly A (suc n)
Finally, we define both addition and multiplication by $X$ in one fell swoop. This constructor represents the polynomial $p X + q$, where $p$ is a polynomial in $n + 1$ variables, and $q$ is a polynomial in $n$ variables.
This flipping of the index may seem confusing at first, but it serves an important purpose: it makes evaluation really easy! When evaluating, we will need some environment Vec A n which assigns values to the various indeterminates $X_i$. In the variable case, the indexing will ensure that our environment is non-empty, whence we can use the tail of the vector to evaluate q! In this sense, not only does this constructor handle addition and multiplication by $X$, it also handles weakening.
_*X+_ : ∀ {n} → (p : Poly A (suc n)) → (q : Poly A n) → Poly A (suc n)
Note that this representation, while convenient, does pose some problems. We pay for the (relative) uniqueness by having larger terms: for instance, the polynomial $x^4 + 1$ is represented as
private
  x⁴+1 : Poly Nat 1
  x⁴+1 = zerop *X+ const 1 *X+ const 0 *X+ const 0 *X+ const 0 *X+ const 1
This could be alleviated by using a sparse representation, but this is decidedly more difficult. As the solver is not expected to deal with polynomials with large degrees, this term blow-up will not be a problem in practice.
Operations on Horner normal forms
Now, let’s define a handful of functions for constructing and combining polynomials. The naming here can get a bit confusing, so let’s stick with the convention of adding a subscript p to denote an operation on polynomials. As a further note, the entire following section could be generalized to work over an arbitrary semiring, but this would complicate the dependency graph somewhat, so we stick to natural numbers.
As previously mentioned, we have different representations of the zero polynomial depending on whether we are working with Poly A zero or Poly A (suc n). This is somewhat annoying, so we define a small helper for picking the correct representation.
0ₚ : Poly Nat n
0ₚ {n = zero} = const zero
0ₚ {n = suc n} = zerop
While we are at it, we also define the function that, given a constant $c$, represents it as the constant polynomial $c$.
constₚ : Nat → Poly Nat n
constₚ {n = zero} c = const c
constₚ {n = suc n} c = 0ₚ *X+ constₚ c
The constant 1 is important enough that it deserves its own syntax.
1ₚ : Poly Nat n
1ₚ = constₚ 1
We also define the identity monomials $X_i$.
X[_] : Fin n → Poly Nat n
X[ fzero ] = 1ₚ *X+ 0ₚ
X[ fsuc i ] = 0ₚ *X+ X[ i ]
Now, onto addition of polynomials. This is more or less what one would expect if they wrote out the polynomials and did the addition by hand.
infixl 6 _+ₚ_
infixl 7 _*ₚ_
_+ₚ_ : Poly Nat n → Poly Nat n → Poly Nat n
const c₁ +ₚ const c₂ = const (c₁ + c₂)
zerop +ₚ q = q
(p *X+ q) +ₚ zerop = p *X+ q
(p *X+ r) +ₚ (q *X+ s) = (p +ₚ q) *X+ (r +ₚ s)
Multiplication, however, is somewhat trickier. The problem is that
during the course of recursion, we will need to multiply a
Poly A n
by a Poly A (suc n)
— for which we
will need mutual recursion, since that multiplication will fall back to
the “homogeneous” case eventually. We predeclare their types to make
Agda happy:
_*ₚ_ : Poly Nat n → Poly Nat n → Poly Nat n
_*ₚ'_ : Poly Nat n → Poly Nat (suc n) → Poly Nat (suc n)
First, the homogeneous multiplication. The first two cases are pretty straightforward, but the final one is decidedly less so. To start, we can distribute the multiplication of r over the addition, invoking _+ₚ_ to add the results together. When multiplying q and r, we need to use the aforementioned heterogeneous multiplication, as the index is off by one (n vs suc n). When multiplying p and r, we need to add on a 0ₚ under the _*X+_, as this is the only way of multiplying by $X$.
const c₁ *ₚ const c₂ = const (c₁ * c₂)
zerop *ₚ q = zerop
(p *X+ q) *ₚ r = ((p *ₚ r) *X+ 0ₚ) +ₚ (q *ₚ' r)
For the heterogeneous case, the call graph is simpler, as we can fall back to the homogeneous operator.
r *ₚ' zerop = zerop
r *ₚ' (p *X+ q) = (r *ₚ' p) *X+ (r *ₚ q)
Evaluation of Horner normal forms
Multivariate polynomials represent functions $\mathbb{N}^n \to \mathbb{N}$, so we should be able to interpret them as such. Luckily, Horner normal forms are extremely easy to evaluate. As a historical note, this is why this representation was created in the first place! In this light, they should probably be called “Sharaf al-Din al-Tusi normal forms”.
block : Nat → Nat
block x = x

⟦_⟧ₚ : Poly Nat n → Vec Nat n → Nat
⟦ const c ⟧ₚ env = c
⟦ zerop ⟧ₚ env = 0
⟦ p *X+ q ⟧ₚ (x₀ ∷ env) = ⟦ p ⟧ₚ (x₀ ∷ env) * x₀ + ⟦ q ⟧ₚ env
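Before diving into the soundness proofs, it can help to see the whole normal-form kit in one place. The following Python sketch (all names are ours; it mirrors, but is no substitute for, the Agda definitions) transcribes const/zerop/_*X+_ together with ⟦_⟧ₚ, _+ₚ_, _*ₚ_ and _*ₚ'_, so the equations we are about to prove can be spot-checked numerically:

```python
from dataclasses import dataclass

class Poly: pass

@dataclass
class Const(Poly):    # const: a constant, in zero variables
    c: int

@dataclass
class Zero(Poly):     # zerop: the zero polynomial, in suc n variables
    pass

@dataclass
class MulXAdd(Poly):  # p *X+ q: p in suc n variables, q in n
    p: Poly
    q: Poly

def zero_p(n):
    # 0ₚ: pick the right zero for the number of variables
    return Const(0) if n == 0 else Zero()

def const_p(c, n):
    # constₚ
    return Const(c) if n == 0 else MulXAdd(Zero(), const_p(c, n - 1))

def var_p(i, n):
    # X[ i ] in n variables
    if i == 0:
        return MulXAdd(const_p(1, n), zero_p(n - 1))
    return MulXAdd(Zero(), var_p(i - 1, n - 1))

def ev(p, env):
    # ⟦_⟧ₚ: Horner evaluation against an environment [x0, x1, ...]
    if isinstance(p, Const): return p.c
    if isinstance(p, Zero): return 0
    return ev(p.p, env) * env[0] + ev(p.q, env[1:])

def add(p, q):
    # _+ₚ_: the four cases of the Agda definition
    if isinstance(p, Const) and isinstance(q, Const): return Const(p.c + q.c)
    if isinstance(p, Zero): return q
    if isinstance(q, Zero): return p
    return MulXAdd(add(p.p, q.p), add(p.q, q.q))

def mul(p, q, n):
    # _*ₚ_: homogeneous multiplication, n = number of variables
    if isinstance(p, Const) and isinstance(q, Const): return Const(p.c * q.c)
    if isinstance(p, Zero): return Zero()
    # (p *X+ r) *ₚ q = ((p *ₚ q) *X+ 0ₚ) +ₚ (r *ₚ' q)
    return add(MulXAdd(mul(p.p, q, n), zero_p(n - 1)), mul_het(p.q, q, n - 1))

def mul_het(r, q, n):
    # _*ₚ'_: r in n variables, q in n + 1
    if isinstance(q, Zero): return Zero()
    return MulXAdd(mul_het(r, q.p, n), mul(r, q.q, n))

# Spot-check on (x + 3) * y in two variables, at x = 2, y = 3:
x, y = var_p(0, 2), var_p(1, 2)
p = add(x, const_p(3, 2))
assert ev(p, [2, 3]) == 2 + 3
assert ev(mul(p, y, 2), [2, 3]) == (2 + 3) * 3
```

The explicit variable count threaded through mul/mul_het stands in for the Nat index that Agda tracks in the types; it is only needed to choose the right 0ₚ.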
Soundness of the operations
Now, it’s important that the operations we defined actually denote the correct operations on natural numbers. As a warm up, let’s show that the zero polynomial really represents the constant function $0$.
sound-0ₚ : ∀ (env : Vec Nat n) → ⟦ 0ₚ ⟧ₚ env ≡ 0
sound-0ₚ [] = refl
sound-0ₚ (x ∷ env) = refl
We do the same for the constant polynomials:
sound-constₚ : ∀ c → (env : Vec Nat n) → ⟦ constₚ c ⟧ₚ env ≡ c
sound-constₚ c [] = refl
sound-constₚ c (x ∷ env) = sound-constₚ c env
At the risk of repeating ourselves, we also show the same for the monomials $X_i$.
sound-X[_] : ∀ i → (env : Vec Nat n) → ⟦ X[ i ] ⟧ₚ env ≡ lookup env i
sound-X[ fzero ] (x₀ ∷ env) =
  ⟦ constₚ 1 ⟧ₚ env * x₀ + ⟦ 0ₚ ⟧ₚ env ≡⟨ ap₂ (λ ϕ ψ → ϕ * x₀ + ψ) (sound-constₚ 1 env) (sound-0ₚ env) ⟩
  1 * x₀ + 0                           ≡⟨ +-zeror (1 * x₀) ⟩
  1 * x₀                               ≡⟨ *-onel x₀ ⟩
  x₀                                   ∎
sound-X[ fsuc i ] (_ ∷ env) = sound-X[ i ] env
Now, for something more involved: let’s show that addition of
polynomials really deserves the name “addition” — under the semantics
mapping ⟦_⟧ₚ
, adding two
polynomials then evaluating is the same thing as evaluating each then
adding the results.
sound-+ₚ : ∀ p q → (env : Vec Nat n)
         → ⟦ p +ₚ q ⟧ₚ env ≡ ⟦ p ⟧ₚ env + ⟦ q ⟧ₚ env
sound-+ₚ (const c1) (const c2) env = refl
sound-+ₚ zerop q env = refl
sound-+ₚ (p *X+ r) zerop env =
  sym (+-zeror (⟦ (p *X+ r) +ₚ zerop ⟧ₚ env))
sound-+ₚ (p *X+ r) (q *X+ s) (x₀ ∷ env) =
  ⟦p+q⟧ * x₀ + ⟦r+s⟧                ≡⟨ ap₂ (λ ϕ ψ → ϕ * x₀ + ψ) (sound-+ₚ p q (x₀ ∷ env)) (sound-+ₚ r s env) ⟩
  (⟦p⟧ + ⟦q⟧) * x₀ + (⟦r⟧ + ⟦s⟧)    ≡⟨ ap (λ ϕ → ϕ + (⟦r⟧ + ⟦s⟧)) (*-distrib-+r ⟦p⟧ ⟦q⟧ x₀) ⟩
  ⟦p⟧ * x₀ + ⟦q⟧ * x₀ + (⟦r⟧ + ⟦s⟧) ≡⟨ commute-inner (⟦p⟧ * x₀) (⟦q⟧ * x₀) ⟦r⟧ ⟦s⟧ ⟩
  ⟦p⟧ * x₀ + ⟦r⟧ + (⟦q⟧ * x₀ + ⟦s⟧) ∎
  where
    ⟦p+q⟧ = ⟦ p +ₚ q ⟧ₚ (x₀ ∷ env)
    ⟦r+s⟧ = ⟦ r +ₚ s ⟧ₚ env
    ⟦p⟧ = ⟦ p ⟧ₚ (x₀ ∷ env)
    ⟦r⟧ = ⟦ r ⟧ₚ env
    ⟦q⟧ = ⟦ q ⟧ₚ (x₀ ∷ env)
    ⟦s⟧ = ⟦ s ⟧ₚ env
Wow, that was a bit painful! This is a common theme when writing proof automation: it distills down a lot of the annoying proofs we need to write across the entire code-base into one extremely painful proof. Thus, conservation of frustration is preserved.
Philosophical reflections aside, let’s move onto multiplication of polynomials. As the homogeneous and heterogeneous multiplication were defined in a mutually recursive manner, we must do so for their proofs of soundness as well.
sound-*ₚ : ∀ p q → (env : Vec Nat n)
         → ⟦ p *ₚ q ⟧ₚ env ≡ ⟦ p ⟧ₚ env * ⟦ q ⟧ₚ env
sound-*ₚ' : ∀ p q → (x₀ : Nat) → (env : Vec Nat n)
          → ⟦ p *ₚ' q ⟧ₚ (x₀ ∷ env) ≡ ⟦ p ⟧ₚ env * ⟦ q ⟧ₚ (x₀ ∷ env)
The first couple of cases of homogeneous multiplication don’t look so bad…
sound-*ₚ (const c1) (const c2) env = refl
sound-*ₚ zerop q env = refl
sound-*ₚ (p *X+ r) zerop (x₀ ∷ env) =
  ⟦ p *ₚ zerop ⟧ₚ (x₀ ∷ env) * x₀ + ⟦ 0ₚ ⟧ₚ env ≡⟨ ap₂ (λ ϕ ψ → ϕ * x₀ + ψ) (sound-*ₚ p zerop (x₀ ∷ env)) (sound-0ₚ env) ⟩
  ⟦p⟧ * 0 * x₀ + 0                              ≡⟨ ap (λ ϕ → (ϕ * x₀) + 0) (*-zeror ⟦p⟧) ⟩
  0                                             ≡˘⟨ *-zeror (⟦p⟧ * x₀ + ⟦r⟧) ⟩
  (⟦p⟧ * x₀ + ⟦r⟧) * 0                          ∎
  where
    ⟦p⟧ = ⟦ p ⟧ₚ (x₀ ∷ env)
    ⟦r⟧ = ⟦ r ⟧ₚ env
However, the case where we need to distribute is not so easy. The proof consists of repeatedly expanding out the polynomial operations into those on natural numbers, then doing a brutal bit of symbol shuffling. There’s not too much to be gained from dwelling on this, so let’s move on.
sound-*ₚ (p *X+ r) (q *X+ s) (x₀ ∷ env) =
  ⟦p*⟨qx+s⟩+r*q⟧ * x₀ + ⟦ 0ₚ +ₚ (r *ₚ s) ⟧ₚ env                ≡⟨ ap (λ ϕ → ⟦p*⟨qx+s⟩+r*q⟧ * x₀ + ϕ) (sound-+ₚ 0ₚ (r *ₚ s) env) ⟩
  ⟦p*⟨qx+s⟩+r*q⟧ * x₀ + (⟦ 0ₚ ⟧ₚ env + ⟦ r *ₚ s ⟧ₚ env)        ≡⟨ ap₂ (λ ϕ ψ → ⟦p*⟨qx+s⟩+r*q⟧ * x₀ + (ϕ + ψ)) (sound-0ₚ env) (sound-*ₚ r s env) ⟩
  ⟦p*⟨qx+s⟩+r*q⟧ * x₀ + ⟦r⟧ * ⟦s⟧                              ≡⟨ ap (λ ϕ → ϕ * x₀ + ⟦r⟧ * ⟦s⟧) (sound-+ₚ (p *ₚ (q *X+ s)) (r *ₚ' q) (x₀ ∷ env)) ⟩
  (⟦p*⟨qx+s⟩⟧ + ⟦r*q⟧) * x₀ + ⟦r⟧ * ⟦s⟧                        ≡⟨ ap₂ (λ ϕ ψ → (ϕ + ψ) * x₀ + ⟦r⟧ * ⟦s⟧) (sound-*ₚ p (q *X+ s) (x₀ ∷ env)) (sound-*ₚ' r q x₀ env) ⟩
  (⟦p⟧ * (⟦q⟧ * x₀ + ⟦s⟧) + ⟦r⟧ * ⟦q⟧) * x₀ + ⟦r⟧ * ⟦s⟧        ≡⟨ ap (λ ϕ → ϕ + ⟦r⟧ * ⟦s⟧) (*-distrib-+r (⟦p⟧ * (⟦q⟧ * x₀ + ⟦s⟧)) (⟦r⟧ * ⟦q⟧) x₀) ⟩
  ⟦p⟧ * (⟦q⟧ * x₀ + ⟦s⟧) * x₀ + ⟦r⟧ * ⟦q⟧ * x₀ + ⟦r⟧ * ⟦s⟧     ≡˘⟨ +-associative (⟦p⟧ * (⟦q⟧ * x₀ + ⟦s⟧) * x₀) (⟦r⟧ * ⟦q⟧ * x₀) (⟦r⟧ * ⟦s⟧) ⟩
  ⟦p⟧ * (⟦q⟧ * x₀ + ⟦s⟧) * x₀ + (⟦r⟧ * ⟦q⟧ * x₀ + ⟦r⟧ * ⟦s⟧)   ≡⟨ ap (λ ϕ → ⟦p⟧ * (⟦q⟧ * x₀ + ⟦s⟧) * x₀ + (ϕ + ⟦r⟧ * ⟦s⟧)) (*-associative ⟦r⟧ ⟦q⟧ x₀) ⟩
  ⟦p⟧ * (⟦q⟧ * x₀ + ⟦s⟧) * x₀ + (⟦r⟧ * (⟦q⟧ * x₀) + ⟦r⟧ * ⟦s⟧) ≡˘⟨ ap (λ ϕ → ⟦p⟧ * (⟦q⟧ * x₀ + ⟦s⟧) * x₀ + ϕ) (*-distrib-+l (⟦q⟧ * x₀) ⟦s⟧ ⟦r⟧) ⟩
  ⟦p⟧ * (⟦q⟧ * x₀ + ⟦s⟧) * x₀ + ⟦r⟧ * (⟦q⟧ * x₀ + ⟦s⟧)         ≡⟨ ap (λ ϕ → ϕ + ⟦r⟧ * (⟦q⟧ * x₀ + ⟦s⟧)) (commute-last ⟦p⟧ (⟦q⟧ * x₀ + ⟦s⟧) x₀) ⟩
  ⟦p⟧ * x₀ * (⟦q⟧ * x₀ + ⟦s⟧) + ⟦r⟧ * (⟦q⟧ * x₀ + ⟦s⟧)         ≡˘⟨ *-distrib-+r (⟦p⟧ * x₀) ⟦r⟧ (⟦q⟧ * x₀ + ⟦s⟧) ⟩
  (⟦p⟧ * x₀ + ⟦r⟧) * (⟦q⟧ * x₀ + ⟦s⟧)                          ∎
  where
    ⟦p*⟨qx+s⟩+r*q⟧ = ⟦ (p *ₚ (q *X+ s)) +ₚ (r *ₚ' q) ⟧ₚ (x₀ ∷ env)
    ⟦p*⟨qx+s⟩⟧ = ⟦ p *ₚ (q *X+ s) ⟧ₚ (x₀ ∷ env)
    ⟦r*q⟧ = ⟦ r *ₚ' q ⟧ₚ (x₀ ∷ env)
    ⟦p⟧ = ⟦ p ⟧ₚ (x₀ ∷ env)
    ⟦r⟧ = ⟦ r ⟧ₚ env
    ⟦q⟧ = ⟦ q ⟧ₚ (x₀ ∷ env)
    ⟦s⟧ = ⟦ s ⟧ₚ env
As a nice palate cleanser, the proofs for heterogeneous multiplication are nowhere near as bad.
sound-*ₚ' p zerop x₀ env = sym (*-zeror (⟦ p ⟧ₚ env))
sound-*ₚ' r (p *X+ q) x₀ env =
  ⟦ r *ₚ' p ⟧ₚ (x₀ ∷ env) * x₀ + ⟦ r *ₚ q ⟧ₚ env ≡⟨ ap₂ (λ ϕ ψ → ϕ * x₀ + ψ) (sound-*ₚ' r p x₀ env) (sound-*ₚ r q env) ⟩
  ⟦r⟧ * ⟦p⟧ * x₀ + ⟦r⟧ * ⟦q⟧                      ≡⟨ ap (λ ϕ → ϕ + ⟦r⟧ * ⟦q⟧) (*-associative ⟦r⟧ ⟦p⟧ x₀) ⟩
  ⟦r⟧ * (⟦p⟧ * x₀) + ⟦r⟧ * ⟦q⟧                    ≡˘⟨ *-distrib-+l (⟦p⟧ * x₀) ⟦q⟧ ⟦r⟧ ⟩
  ⟦r⟧ * (⟦p⟧ * x₀ + ⟦q⟧)                          ∎
  where
    ⟦r⟧ = ⟦ r ⟧ₚ env
    ⟦p⟧ = ⟦ p ⟧ₚ (x₀ ∷ env)
    ⟦q⟧ = ⟦ q ⟧ₚ env
This concludes phase one of the solver.
Evaluation into polynomials
Now that we’ve gotten the first phase of the solver done, let’s move on to expressions in the language of natural numbers. Our first move shall be defining expressions in the equational theory of natural numbers.
However, there is an efficiency problem we need to take care of here.
If we naively expand out _+_
and _*_
during reflection, we could end up in a situation where we need to
contend with a huge amount of suc
constructors when large literals get
involved. Therefore, we prevent reduction of such functions, but this
means this phase of the solver needs to be aware of nat literals.
With that in mind, let’s define our expressions. Note that things
that the solver is unable to deal with (for instance, functions that
aren’t _+_
or _*_
)
are represented as variables, and will be replaced during
evaluation.
data Expr : Nat → Type where
  ‵0   : ∀ {n} → Expr n
  ‵lit : ∀ {n} → Nat → Expr n
  ‵_   : ∀ {n} → Fin n → Expr n
  ‵1+_ : ∀ {n} → Expr n → Expr n
  _‵+_ : ∀ {n} → Expr n → Expr n → Expr n
  _‵*_ : ∀ {n} → Expr n → Expr n → Expr n
We also define an interpretation of expressions into functions $\mathbb{N}^n \to \mathbb{N}$.
⟦_⟧ₑ : Expr n → Vec Nat n → Nat
⟦ ‵ i ⟧ₑ env = lookup env i
⟦ ‵0 ⟧ₑ env = 0
⟦ ‵1+ e ⟧ₑ env = suc (⟦ e ⟧ₑ env)
⟦ ‵lit k ⟧ₑ env = k
⟦ e1 ‵+ e2 ⟧ₑ env = ⟦ e1 ⟧ₑ env + ⟦ e2 ⟧ₑ env
⟦ e1 ‵* e2 ⟧ₑ env = ⟦ e1 ⟧ₑ env * ⟦ e2 ⟧ₑ env
We also define an evaluation of expressions into polynomials. The only thing to note here is the evaluation of quoted suc constructors as polynomial addition. This is somewhat inefficient, but in practice we rarely have too many suc constructors to deal with, as we handle literals separately.
↓_ : Expr n → Poly Nat n
↓ (‵ i) = X[ i ]
↓ ‵0 = 0ₚ
↓ (‵1+ e) = constₚ 1 +ₚ ↓ e
↓ (‵lit k) = constₚ k
↓ (e₁ ‵+ e₂) = (↓ e₁) +ₚ (↓ e₂)
↓ (e₁ ‵* e₂) = (↓ e₁) *ₚ (↓ e₂)
Soundness of evaluation
With all of that machinery in place, our final proof shall be to show that evaluating an expression into a polynomial has the same semantics as the original expression. Luckily, most of the legwork is already done, so we can sit back and enjoy the fruits of our labour.
sound : ∀ e → (env : Vec Nat n) → ⟦ ↓ e ⟧ₚ env ≡ ⟦ e ⟧ₑ env
sound (‵ i) env = sound-X[ i ] env
sound ‵0 env = sound-0ₚ env
sound (‵1+ e) env =
  ⟦ constₚ 1 +ₚ (↓ e) ⟧ₚ env       ≡⟨ sound-+ₚ (constₚ 1) (↓ e) env ⟩
  ⟦ constₚ 1 ⟧ₚ env + ⟦ ↓ e ⟧ₚ env ≡⟨ ap (λ ϕ → ϕ + ⟦ ↓ e ⟧ₚ env) (sound-constₚ 1 env) ⟩
  suc (⟦ ↓ e ⟧ₚ env)               ≡⟨ ap suc (sound e env) ⟩
  suc (⟦ e ⟧ₑ env)                 ∎
sound (‵lit k) env = sound-constₚ k env
sound (e₁ ‵+ e₂) env =
  ⟦ (↓ e₁) +ₚ (↓ e₂) ⟧ₚ env     ≡⟨ sound-+ₚ (↓ e₁) (↓ e₂) env ⟩
  ⟦ ↓ e₁ ⟧ₚ env + ⟦ ↓ e₂ ⟧ₚ env ≡⟨ ap₂ _+_ (sound e₁ env) (sound e₂ env) ⟩
  ⟦ e₁ ⟧ₑ env + ⟦ e₂ ⟧ₑ env     ∎
sound (e₁ ‵* e₂) env =
  ⟦ (↓ e₁) *ₚ (↓ e₂) ⟧ₚ env     ≡⟨ sound-*ₚ (↓ e₁) (↓ e₂) env ⟩
  ⟦ ↓ e₁ ⟧ₚ env * ⟦ ↓ e₂ ⟧ₚ env ≡⟨ ap₂ _*_ (sound e₁ env) (sound e₂ env) ⟩
  ⟦ e₁ ⟧ₑ env * ⟦ e₂ ⟧ₑ env     ∎
Now, all we need to do is expose an interface for the reflection
portion of the solver. The abstract
here is VERY IMPORTANT,
as it prevents the proof from unfolding into an enormous term that kills
our compile times.
abstract
  solve : ∀ e₁ e₂ → (env : Vec Nat n)
        → (⟦ ↓ e₁ ⟧ₚ env ≡ ⟦ ↓ e₂ ⟧ₚ env)
        → ⟦ e₁ ⟧ₑ env ≡ ⟦ e₂ ⟧ₑ env
  solve e₁ e₂ env p = sym (sound e₁ env) ·· p ·· sound e₂ env
We also expose a function for “simplifying” expressions. In reality this will almost always make the term more complicated, but it’s useful for debugging purposes.
expand : (e : Expr n) → (env : Vec Nat n) → Nat
expand e env = ⟦ ↓ e ⟧ₚ env
Reflection
Now, for the truly difficult part: the reflection interface.
We begin by defining some pattern synonyms for expressions we want to
reflect into our Expr
type.
private
  pattern nat-lit n =
    def (quote Number.fromNat) (_ ∷ _ ∷ _ ∷ (lit (nat n)) v∷ _)
  pattern “zero” =
    con (quote zero) []
  pattern “suc” x =
    con (quote suc) (x v∷ [])
  pattern _“+”_ x y =
    def (quote _+_) (x v∷ y v∷ _)
  pattern _“*”_ x y =
    def (quote _*_) (x v∷ y v∷ _)
Next, we construct a quoted Expr from a Term, replacing any unknown Term nodes with variables. This uses the Variable interface for managing the Fin variables and the environment. A discussion of the internals of this is out of scope of this solver; we have already looked into the abyss too deeply.
private
  build-expr : Variables Nat → Term → TC (Term × Variables Nat)
  build-expr vs (nat-lit n) = do
    ‵n ← quoteTC n
    pure $ con (quote ‵lit) (‵n v∷ []) , vs
  build-expr vs “zero” = do
    pure $ con (quote ‵0) (unknown h∷ []) , vs
  build-expr vs (“suc” t) = do
    e , vs ← build-expr vs t
    pure $ con (quote ‵1+_) (unknown h∷ e v∷ []) , vs
  build-expr vs (t₁ “+” t₂) = do
    e₁ , vs ← build-expr vs t₁
    e₂ , vs ← build-expr vs t₂
    pure $ con (quote _‵+_) (unknown h∷ e₁ v∷ e₂ v∷ []) , vs
  build-expr vs (t₁ “*” t₂) = do
    e₁ , vs ← build-expr vs t₁
    e₂ , vs ← build-expr vs t₂
    pure $ con (quote _‵*_) (unknown h∷ e₁ v∷ e₂ v∷ []) , vs
  build-expr vs tm = do
    (v , vs') ← bind-var vs tm
    pure $ con (quote ‵_) (v v∷ []) , vs'
Next, let’s define the quoted forms of some terms that we will use to interface with the solver.
private
  “expand” : Term → Term → Term
  “expand” e env = def (quote expand) (unknown h∷ e v∷ env v∷ [])

  “solve” : Term → Term → Term → Term
  “solve” lhs rhs env =
    def (quote solve)
      (unknown h∷ lhs v∷ rhs v∷ env v∷ def (quote refl) [] v∷ [])
The actual macros
Now, the actual reflection API calls. In order to keep drawing this
file out, we start by defining some useful debugging macros. As we noted
a looong time ago, we don’t want to unfold the _+_
or _*_
functions, so let’s make a list of those names so that we can call withReduceDefs
more easily.
private
  don't-reduce : List Name
  don't-reduce = quote _+_ ∷ quote _*_ ∷ quote Number.fromNat ∷ []
The repr!
macro prints out a bunch of information about
a given expression of type Nat
. This is very
useful when we are debugging.
repr-macro : Nat → Term → TC ⊤
repr-macro n hole =
  withNormalisation false $
  withReduceDefs (false , don't-reduce) $ do
    tm ← quoteTC n
    e , vs ← build-expr empty-vars tm
    size , env ← environment vs
    repr ← normalise $ def (quote ↓_) (size h∷ e v∷ [])
    typeError $ strErr "The expression\n  " ∷
                termErr tm ∷
                strErr "\nIs represented by the expression\n  " ∷
                termErr e ∷
                strErr "\nAnd the polynomial\n  " ∷
                termErr repr ∷
                strErr "\nThe environment is\n  " ∷
                termErr env ∷ []

macro
  repr! : Nat → Term → TC ⊤
  repr! n = repr-macro n
Slightly more useful is the expand!
macro. This takes in
a natural number, and will fill in the hole with its expanded form. This
is intended to be used with the agda2-elaborate-give
command in Emacs, which is bound to C-c RET
by default.
expand-macro : Nat → Term → TC ⊤
expand-macro n hole =
  withNormalisation false $
  withReduceDefs (false , don't-reduce) $ do
    tm ← quoteTC n
    e , vs ← build-expr empty-vars tm
    size , env ← environment vs
    expanded ← reduce (“expand” e env)
    unify hole expanded

macro
  expand! : Nat → Term → TC ⊤
  expand! n = expand-macro n
Now, finally, we have reached the summit. The nat!
macro
allows us to automatically solve equations involving natural
numbers.
solve-macro : Term → TC ⊤
solve-macro hole =
  withNormalisation false $
  withReduceDefs (false , don't-reduce) $ do
    goal ← infer-type hole >>= reduce
    just (lhs , rhs) ← get-boundary goal
      where nothing → typeError $ strErr "Can't determine boundary: " ∷
                                  termErr goal ∷ []
    elhs , vs ← build-expr empty-vars lhs
    erhs , vs ← build-expr vs rhs
    size , env ← environment vs
    (noConstraints $ unify hole (“solve” elhs erhs env)) <|> do
      nf-lhs ← normalise (“expand” elhs env)
      nf-rhs ← normalise (“expand” erhs env)
      typeError (strErr "Could not solve the following goal:\n  " ∷
                 termErr lhs ∷ strErr " ≡ " ∷ termErr rhs ∷
                 strErr "\nComputed normal forms:\n  LHS: " ∷
                 termErr nf-lhs ∷
                 strErr "\n  RHS: " ∷
                 termErr nf-rhs ∷ [])

macro
  nat! : Term → TC ⊤
  nat! = solve-macro
Examples
Congratulations! We now have a solver. Let’s marvel at all of our hard work for a moment.
private
  wow-good-job : ∀ x y z
               → (x + 5 + suc y) * z ≡ z * 5 + x * z + z + z * y
  wow-good-job x y z = nat!
Thus concludes our journey. There is still room for improvement, however. A sparse representation would be much more efficient, but these proofs are already quite difficult to begin with. For the brave, here is what a sparse representation might look like.
private
data SparsePoly {a} (A : Type a) : Nat → Type a where
    const-sparse : ∀ {n} → A → SparsePoly A n
    shift : ∀ {n} → (j : Nat) → SparsePoly A n
          → SparsePoly A (j + n)
    _*X^_+_ : ∀ {n} → SparsePoly A (suc n) → Nat → SparsePoly A n
            → SparsePoly A (suc n)
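For intuition only, here is the sparse idea rendered in Python (the names and dictionary encoding are ours, not part of the module): a polynomial is a map from exponents to coefficients, so only the nonzero terms are stored, and x⁴ + 1 takes two entries rather than a five-deep chain of *X+.

```python
# A sparse univariate polynomial as a dict {exponent: coefficient};
# zero coefficients are simply absent.
def eval_sparse(poly, x):
    return sum(c * x ** e for e, c in poly.items())

x4_plus_1 = {4: 1, 0: 1}   # x^4 + 1: two entries, degree 4
assert eval_sparse(x4_plus_1, 2) == 17
```

The price, as noted above, is that the operations (and especially their soundness proofs) must now reason about exponent arithmetic, which is where the added difficulty lives.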