Scientific Computing in Lean


Symbolic Differentiation

Differentiation is at the heart of many problems in scientific computing, from optimizing functions to solving differential equations. Computing derivatives is a mechanical process but very error prone when done manually. In this chapter, we will explore how we can compute derivatives of expressions and programs automatically.

Let's recall what a derivative is. For a function f : ℝ β†’ ℝ we define its derivative as

$$f'(x) = \lim_{h β†’ 0} \frac{f(x+h) - f(x)}{h} $$

The mathematical library Mathlib defines the derivative fderiv for a general function f : X β†’ Y between vector spaces X and Y. The derivative of f at a point x : X in direction dx : X is

$$\texttt{fderiv ℝ f x dx} = \lim_{h β†’ 0} \frac{f(x+h\cdot dx) - f(x)}{h} $$

SciLean provides the tactic fun_trans which can compute derivatives. The name fun_trans stands for 'function transformation' and differentiation is just one example of a function transformation. We will talk about general function transformations later in the book.

One of the simplest examples of a derivative is the derivative of \(x^n\), which is equal to \(n x^{n-1}\). Note that for f : ℝ β†’ ℝ the ordinary derivative is recovered as fderiv ℝ f x 1. Let's compute it using the fun_trans tactic

variable (n : Nat) (x : ℝ)

#check (fderiv ℝ (fun (x : ℝ) => x^n) x 1) rewrite_by fun_trans
-- ↑n * x ^ (n - 1) : ℝ

Add a link to an interactive code editor and encourage reader to differentiate more complicated expressions involving sin, cos, exp, ... but warn about log or division.
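
Here is a small sketch of the kind of expressions worth trying; sin and exp come from SciLean's Scalar namespace, and the outputs in the comments are the expected answers, possibly up to the exact form in which fun_trans prints them:

open SciLean Scalar

variable (x : ℝ)

#check (fderiv ℝ (fun x : ℝ => sin x) x 1) rewrite_by fun_trans
-- expected: cos x : ℝ

#check (fderiv ℝ (fun x : ℝ => exp (sin x)) x 1) rewrite_by fun_trans
-- chain rule; expected: cos x * exp (sin x) : ℝ

Expressions involving log x or 1/x get stuck here because those functions are differentiable only away from zero; the section on differentiating division, log and sqrt below shows how to handle them with a discharger.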

Notation

Writing fderiv ℝ (fun x => f x) is somewhat tedious so SciLean makes our life easier by introducing a nice notation βˆ‚ x, f x for differentiating (f x) w.r.t. x.

Before we explore this notation further we have to mention that fderiv can also compute complex derivatives with @fderiv β„‚ instead of @fderiv ℝ. However, most of the time we work exclusively with the real derivative, so we can inform Lean that the default choice of scalar field is the real numbers using the following command

set_default_scalar ℝ

Now Lean knows that we want the real derivative when we write βˆ‚ x, f x.

Using this notation we can again compute the derivative from above

variable (n : Nat)

#check (βˆ‚ (x : ℝ), x^n) rewrite_by fun_trans
-- fun x => ↑n * x ^ (n - 1) : ℝ β†’ ℝ

Because we did not specify the point where we want to compute the derivative we obtained a function in x. We can specify the point where we want to compute the derivative with βˆ‚ (x:=xβ‚€), ...

variable (n : Nat) (xβ‚€ : ℝ)

#check (βˆ‚ (x:=xβ‚€), x^n) rewrite_by fun_trans
-- ↑n * xβ‚€ ^ (n - 1) : ℝ

Writing rewrite_by fun_trans every time we want to differentiate an expression gets a bit tedious. We can add an exclamation mark after βˆ‚ to indicate that we want to run the fun_trans tactic to compute the derivative.

variable (n : Nat)

#check (βˆ‚! (x : ℝ), x^n)
-- fun x => ↑n * x ^ (n - 1) : ℝ β†’ ℝ

We can also differentiate w.r.t. a vector-valued variable (x : ℝ×ℝ)

#check βˆ‚! (x : ℝ×ℝ), β€–xβ€–β‚‚Β²
-- fun x => fun dx =>L[ℝ] 2 * βŸͺdx, x⟫_ℝ : ℝ Γ— ℝ β†’ ℝ Γ— ℝ β†’L[ℝ] ℝ

For derivatives w.r.t. a vector-valued variable we also have to specify the direction in which we differentiate. Therefore in the above we obtained the derivative as a function of the position x and the direction dx. Furthermore, the notation fun dx =>L[ℝ] ... indicates that the function is linear in dx, and similarly X β†’L[ℝ] Y stands for the space of continuous ℝ-linear functions from X to Y.

If we want to specify the position and the direction in which we want to compute the derivatives we use the notation βˆ‚ (x:=xβ‚€;dxβ‚€), f x

variable (xβ‚€ dxβ‚€ : ℝ×ℝ)

#check βˆ‚! (x:=xβ‚€;dxβ‚€), β€–xβ€–β‚‚Β²
-- 2 * βŸͺdxβ‚€, xβ‚€βŸ«_ℝ : ℝ

To summarize all the different variants: for a function of a scalar-valued argument

variable (f : ℝ β†’ ℝ) (xβ‚€ : ℝ)

#check βˆ‚ f             -- βˆ‚ f : ℝ β†’ ℝ
#check βˆ‚ x, f x        -- βˆ‚ x, f x : ℝ β†’ ℝ
#check βˆ‚ (x:=xβ‚€), f x  -- βˆ‚ (x:=xβ‚€), f x : ℝ

For a function of a vector-valued argument

variable (f : ℝ×ℝ β†’ ℝ) (xβ‚€ dxβ‚€ : ℝ×ℝ)

#check βˆ‚ f                 -- βˆ‚ f : ℝ Γ— ℝ β†’ ℝ Γ— ℝ β†’L[ℝ] ℝ
#check βˆ‚ x, f x            -- βˆ‚ x, f x : ℝ Γ— ℝ β†’ ℝ Γ— ℝ β†’L[ℝ] ℝ
#check βˆ‚ (x:=xβ‚€), f x      -- βˆ‚ (x:=xβ‚€), f x : ℝ Γ— ℝ β†’L[ℝ] ℝ
#check βˆ‚ (x:=xβ‚€;dxβ‚€), f x  -- (βˆ‚ (x:=xβ‚€), f x) dxβ‚€ : ℝ

There is nothing stopping us from applying the derivative multiple times to compute higher order derivatives

#check (βˆ‚! (βˆ‚! (x:ℝ), x^2))
-- fun x => 2 : ℝ β†’ ℝ

Exercises

  1. Express the first derivative of f : ℝ β†’ ℝ β†’ ℝ in the first and in the second argument. Also express the derivative in both arguments at the same time.

Solution
variable (f : ℝ β†’ ℝ β†’ ℝ) (xβ‚€ yβ‚€ : ℝ)

-- first argument
#check βˆ‚ (x:=xβ‚€), (f x yβ‚€)  -- βˆ‚ (x:=xβ‚€), f x yβ‚€ : ℝ
#check βˆ‚ (f Β· yβ‚€) xβ‚€        -- βˆ‚ (x:=xβ‚€), f x yβ‚€ : ℝ

-- second argument
#check βˆ‚ (y:=yβ‚€), (f xβ‚€ y)  -- βˆ‚ (y:=yβ‚€), f xβ‚€ y : ℝ
#check βˆ‚ (f xβ‚€ Β·) yβ‚€        -- βˆ‚ (x:=yβ‚€), f xβ‚€ x : ℝ

-- both arguments
#check βˆ‚ ((x,y):=(xβ‚€,yβ‚€)), f x y
-- βˆ‚ (x:=(xβ‚€, yβ‚€)), match x with | (x, y) => f x y : ℝ Γ— ℝ β†’L[ℝ] ℝ
  2. For (g : ℝ×ℝ β†’ ℝ), express the derivative of g (x,y) in x.

Solution
variable (g : ℝ×ℝ β†’ ℝ) (xβ‚€ yβ‚€ : ℝ)

#check βˆ‚ (xy:=(xβ‚€,yβ‚€);(1,0)), g xy  -- (βˆ‚ (xy:=(xβ‚€, yβ‚€)), g xy) (1, 0) : ℝ
#check βˆ‚ g (xβ‚€,yβ‚€) (1,0)            -- (βˆ‚ g (xβ‚€, yβ‚€)) (1, 0) : ℝ
#check βˆ‚ (x:=xβ‚€), g (x,yβ‚€)          -- βˆ‚ (x:=xβ‚€), g (x, yβ‚€) : ℝ
  3. Express the second derivatives of f : ℝ β†’ ℝ β†’ ℝ in the first and in the second argument.

Solution
variable (f : ℝ β†’ ℝ β†’ ℝ) (xβ‚€ yβ‚€ : ℝ)

#check βˆ‚ (x':=xβ‚€), βˆ‚ (x'':=x'), (f x'' yβ‚€)  -- βˆ‚ (x':=xβ‚€), βˆ‚ (x'':=x'), f x'' yβ‚€ : ℝ
#check βˆ‚ (βˆ‚ (f Β· yβ‚€)) xβ‚€                    -- βˆ‚ (βˆ‚ x, f x yβ‚€) xβ‚€ : ℝ
#check βˆ‚ (y':=yβ‚€), βˆ‚ (y'':=y'), (f xβ‚€ y'')  -- βˆ‚ (y':=yβ‚€), βˆ‚ (y'':=y'), f xβ‚€ y'' : ℝ
#check βˆ‚ (βˆ‚ (f xβ‚€ Β·)) yβ‚€                    -- βˆ‚ (βˆ‚ x, f xβ‚€ x) yβ‚€ : ℝ
  4. Let \(L(t,x)\) be a function of time and space and \(y(t)\) a function of time. Express \( \frac{d}{dt} L(t, y(t)) \) and \( \frac{\partial}{\partial t} L(t, y(t)) \) in Lean. What is the difference between these two expressions?

Solution
variable (L : ℝ β†’ ℝ β†’ ℝ) (y : ℝ β†’ ℝ) (t : ℝ)

-- d/dt L(t, y(t))
#check βˆ‚ (t':=t), L t' (y t')  -- βˆ‚ (t':=t), L t' (y t') : ℝ

-- βˆ‚/βˆ‚t L(t, y(t))
#check βˆ‚ (t':=t), L t' (y t)   -- βˆ‚ (t':=t), L t' (y t) : ℝ

Because SciLean's notation forces you to be a bit more explicit, there is no need to distinguish between \( \frac{d}{dt} \) and \( \frac{\partial}{\partial t} \). Lots of pain and suffering has been inflicted on generations of physics students because of the ambiguity of partial derivative notation.

  5. Express the one-dimensional Euler–Lagrange equation in Lean

$$\frac{d}{dt} \frac{\partial L}{\partial \dot x}(x(t),\dot x(t)) - \frac{\partial L}{\partial x}(x(t), \dot x(t)) $$

Solution
variable (L : ℝ β†’ ℝ β†’ ℝ) (x : ℝ β†’ ℝ) (t : ℝ)

#check
  let v := βˆ‚ x
  βˆ‚ (t':=t), (βˆ‚ (v':=v t'), L (x t') v') - βˆ‚ (x':=x t), L x' (v t)
-- let v := βˆ‚ x; βˆ‚ (t':=t), βˆ‚ (v':=v t'), L (x t') v' - βˆ‚ (x':=x t), L x' (v t) : ℝ

Examples

Let's put computing derivatives to some practical use. We will demonstrate how to use SciLean's symbolic differentiation to solve a few common tasks in scientific computing and physics.

Newton's Solver

Solving a non-linear equation \( f(x) = 0 \) is a very common problem in scientific computing. Often this can be done only approximately, and a popular tool for doing so is Newton's method. It is an iterative process that starts with an initial guess \(xβ‚€\) which is incrementally improved by the following rule

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} $$

We can use Newton's method to compute the square root of \(y\) by choosing \( f(x) = x^2 - y\).
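
For this choice of \(f\) the update rule specializes to

$$x_{n+1} = x_n - \frac{x_n^2 - y}{2 x_n} = \frac{1}{2}\left(x_n + \frac{y}{x_n}\right), $$

which is the classical Babylonian method; for \(y = 2\) and \(x_0 = 1\) it produces \(1, 1.5, 1.41\overline{6}, 1.4142156\ldots\), converging quickly to \(\sqrt{2}\).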

def mySqrt (steps : Nat) (y : Float) : Float := Id.run do
  let f := fun x => x^2 - y
  let mut x := 1.0
  for _ in [0:steps] do
    x := x - f x / ((deriv f x) rewrite_by unfold deriv; fun_trans)
  return x

#eval mySqrt 10 2.0
-- 1.414214

In mySqrt we should use the (βˆ‚! f x) notation but unfortunately it is currently broken for some reason.

You might feel a bit uncomfortable here as we are differentiating a function defined on floating point numbers. If you think that this can't be formally correct then you are right. We will discuss this issue in a later chapter "Working with Real Numbers".

Exercises

  1. Try to solve different equations, for example exp x = y to obtain log, x*exp x = y to obtain the Lambert W function, or some polynomial.

  2. Measure the relative error, \(\left| \frac{f(x_n)}{x_n} \right| \), and the absolute error, \( \left| f(x_n) \right| \), and use them as stopping criteria (a sketch follows after the solution below).

  3. A difficult exercise is to define a general newtonSolve function that takes an arbitrary function f : Float β†’ Float and during elaboration synthesizes its derivative. Hints: 1. use the infer_var trick, 2. state explicitly how the arguments should look.

Solution
set_default_scalar Float

def newtonSolve (steps : Nat) (xβ‚€ : Float) (f : Float β†’ Float)
    {f' : Float β†’ Float}
    (hf : f' = (βˆ‚ f) := by unfold deriv; fun_trans; infer_var) : Float := Id.run do
  let mut x := xβ‚€
  for _ in [0:steps] do
    x := x - f x / f' x
  return x

#eval newtonSolve 10 1.0 (fun x => x^2 - 2.0)
-- 1.414214
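
A minimal sketch for exercise 2; the tolerances, the iteration cap, and the hand-computed derivative are our assumptions for illustration, not from the source:

def mySqrtTol (y : Float) (absTol relTol : Float) : Float := Id.run do
  let f := fun x => x^2 - y
  let f' := fun x => 2*x  -- derivative of x^2 - y, computed by hand
  let mut x := 1.0
  for _ in [0:100] do     -- assumed iteration cap
    -- stop once the absolute or the relative error is small enough
    if Float.abs (f x) < absTol || Float.abs (f x / x) < relTol then
      return x
    x := x - f x / f' x
  return x

#eval mySqrtTol 2.0 1e-10 1e-10  -- should print a value close to 1.414214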

Kinematics

We can also use SciLean's symbolic differentiation to prove some basic theorems from physics. For example we can state Newton's second law

def NewtonSecondLaw (m : ℝ) (x : ℝ β†’ ℝ) (F : ℝ β†’ ℝ) : Prop :=
  βˆ€ t, m * deriv (βˆ‚ x) t = F t

saying that a particle with mass m under the influence of force F has trajectory x if the mass times the acceleration deriv (βˆ‚ x) t, i.e. the second derivative of the trajectory, is equal to the force F t.

We can show that under a constant force f a particle with mass m has the trajectory (fun t => 1/2 * f/m * t^2)

example (m f : ℝ) (hm : m β‰  0) :
    NewtonSecondLaw m (fun t => 1/2 * f/m * t^2) (fun _ => f) := by
  unfold NewtonSecondLaw deriv
  -- compute derivatives
  fun_trans
  -- finish algebraic simplifications
  field_simp; ring

Warning: currently the tactic fun_trans uses theorems that are not fully proven, therefore the above proof is not a completely formal proof. If you do not like this you are welcome to improve SciLean by helping out with proving its theorems. Lots of them should be just a matter of finding the right mathlib theorem.

Exercises

  1. Show that the trajectory x := fun t => (cos t, sin t) satisfies the differential equation βˆ‚ x t = (- (x t).2, (x t).1)

Solution
open SciLean Scalar

def ode (x : ℝ β†’ ℝ×ℝ) := βˆ€ t, deriv x t = (- (x t).2, (x t).1)

example : ode (fun t => (cos t, sin t)) := by
  unfold ode deriv
  fun_trans
  2. Show that the trajectory \(x(t) = \sin(\omega t) \) corresponds to the force \(f(x) = - k x \) with \(\omega = \sqrt{k/m} \)

Hint

After differentiation you will have to show that \(m \sqrt{\frac{k}{m}}^2 = k\). Unfortunately Lean is not yet a very powerful computer algebra system, so you can finish the proof with

  ring_nf -- m * (sqrt (k / m) * (sqrt (k / m) * ...)) ==> m * sqrt (k * m⁻¹) ^ 2 * ...
  have h : m * sqrt (k * m⁻¹) ^ 2 = k := sorry_proof
  simp[h]

where we call ring_nf to clean up the expression, then we just assume that m * sqrt (k * m⁻¹) ^ 2 is equal to k, and finally we finish the proof by running simp.

Solution
open SciLean Scalar

example (m k : ℝ) :
    let Ο‰ := sqrt (k/m)
    let x := fun t => sin (Ο‰*t)
    NewtonSecondLaw m x (fun t => - k*x t) := by
  unfold NewtonSecondLaw deriv
  fun_trans
  ring_nf
  have h : m * sqrt (k * m⁻¹) ^ 2 = k := sorry_proof
  simp[h]
  3. Show that \(u(t,x) = \sin(x-t)\) is a solution to the wave equation

$$\frac{\partial^2 u}{\partial t^2} = \frac{\partial^2 u}{\partial x^2} $$

Solution
open SciLean Scalar

def WaveEquation (u : ℝ β†’ ℝ β†’ ℝ) :=
  βˆ€ x t, (βˆ‚ (βˆ‚ (u Β· x)) t) = (βˆ‚ (βˆ‚ (u t Β·)) x)

example : WaveEquation (fun t x => sin (x - t)) := by
  unfold WaveEquation deriv
  fun_trans
  4. Find a solution to the heat equation (a sketch of one possible answer follows below).
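
A sketch of one possible answer, following the WaveEquation pattern above; the definition HeatEquation and the candidate solution are ours, not from the source, and the proof is left as the exercise:

open SciLean Scalar

-- the heat equation: βˆ‚u/βˆ‚t = βˆ‚Β²u/βˆ‚xΒ²
def HeatEquation (u : ℝ β†’ ℝ β†’ ℝ) :=
  βˆ€ x t, (βˆ‚ (u Β· x) t) = (βˆ‚ (βˆ‚ (u t Β·)) x)

-- u(t,x) = exp (-t) * sin x is a candidate: both sides equal -exp (-t) * sin x
example : HeatEquation (fun t x => exp (-t) * sin x) := sorry_proof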

Gradient

In many practical applications we need to compute the gradient instead of the directional derivative. For a function \(f : \mathbb{R}^n \rightarrow \mathbb{R} \) the gradient of \(f\) is the vector of all its partial derivatives

$$\nabla f = \left(\frac{\partial f}{\partial x_1}, \dots, \frac{\partial f}{\partial x_n} \right) $$

A more general way of defining the gradient is through linear map transposition/adjoint. The derivative of a function (f : X β†’ ℝ) at a point x is a linear map from X to ℝ

open SciLean
variable {X} [NormedAddCommGroup X] [AdjointSpace ℝ X] [CompleteSpace X]
variable (f : X β†’ ℝ) (x : X)

#check (βˆ‚ f x)
-- βˆ‚ f x : X β†’L[ℝ] ℝ

To obtain the gradient we take the adjoint and evaluate it at one. This is exactly how the gradient is defined.

variable (f : X β†’ ℝ) (x : X)

example : (βˆ‡ f x) = adjoint ℝ (βˆ‚ f x) 1 := by rfl

This coincides with the standard notion of the gradient as the vector of all partial derivatives. For example, for n = 2 we have

variable {n : Nat} (f : ℝ×ℝ β†’ ℝ) (hf : Differentiable ℝ f) (x y : ℝ)

example : (βˆ‡ f (x,y)) = (βˆ‚ (x':=x), f (x',y), βˆ‚ (y':=y), f (x,y')) := sorry_proof

Warning for mathlib users: SciLean defines its own version of adjoint and gradient. The reason is that the product type ℝ×ℝ and the function type Fin n β†’ ℝ are not InnerProductSpace and therefore it is impossible to use mathlib's gradient on functions of type ℝ×ℝ β†’ ℝ or (Fin n β†’ ℝ) β†’ ℝ. Mathlib's advice is to use WithLp 2 (ℝ×ℝ) or EuclideanSpace n, however this is seriously inconvenient for people that just want to write some code.

SciLean's solution to this is to introduce a new typeclass AdjointSpace ℝ X that is almost the same as InnerProductSpace ℝ X but only requires that the norm induced by the inner product, β€–xβ€–β‚‚ = sqrt βŸͺx,x⟫, is topologically equivalent to the norm β€–xβ€–. This way we can provide instances of AdjointSpace ℝ (XΓ—Y) and AdjointSpace ℝ (ΞΉ β†’ X) without causing issues.

A few examples of computing gradients:

variable (xβ‚€ y : ℝ×ℝ)

#check βˆ‡! (x:=xβ‚€), x.1
-- (1, 0) : ℝ Γ— ℝ

#check βˆ‡! (x:=xβ‚€), β€–xβ€–β‚‚Β²
-- 2 β€’ xβ‚€ : ℝ Γ— ℝ

#check βˆ‡! (x:=xβ‚€), βŸͺx,y⟫
-- y : ℝ Γ— ℝ

Exercises

  1. Compute the gradient of x[0], β€–xβ€–β‚‚Β², βŸͺx,y⟫ for x y : Float^[3] and the gradient of A[0,1], β€–Aβ€–β‚‚Β², βŸͺA,B⟫ for A B : Float^[2,2]. Also evaluate those results for some concrete values.

Solution
set_default_scalar Float

#eval βˆ‡! (x:=⊞[1.0,2.0,3.0]), x[0]
-- ⊞[1.000000, 0.000000, 0.000000]

#eval βˆ‡! (x:=⊞[1.0,2.0,3.0]), β€–xβ€–β‚‚Β²
-- ⊞[2.000000, 4.000000, 6.000000]

#eval βˆ‡! (x:=⊞[1.0,2.0,3.0]), βŸͺx, ⊞[0.0,1.0,0.0]⟫
-- ⊞[0.000000, 1.000000, 0.000000]

def matrix1 := ⊞[1.0,2.0;3.0,4.0]

#eval βˆ‡! (A:=matrix1), A[0,1]
-- ⊞[0.000000, 0.000000, 1.000000, 0.000000]

#eval βˆ‡! (A:=matrix1), β€–Aβ€–β‚‚Β²
-- ⊞[2.000000, 6.000000, 4.000000, 8.000000]

#eval βˆ‡! (A:=matrix1), βŸͺA, ⊞[0.0,1.0;0.0,0.0]⟫
-- ⊞[0.000000, 0.000000, 1.000000, 0.000000]
  2. Previously we computed \(\sqrt{y}\) using Newton's method. Similarly we can compute sqrt y using gradient descent by minimizing the objective function f := fun x => (x^2 - y)^2 (a sketch follows below).

Add solution to gradient descent
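
In the meantime, a minimal sketch; the step size, the iteration count, and the hand-computed gradient of the objective are our assumptions for illustration:

def mySqrtGD (steps : Nat) (rate y : Float) : Float := Id.run do
  let mut x := 1.0
  for _ in [0:steps] do
    -- gradient of the objective (x^2 - y)^2 in x, computed by hand: 4*x*(x^2 - y)
    x := x - rate * (4 * x * (x^2 - y))
  return x

#eval mySqrtGD 100 0.05 2.0  -- should print a value close to 1.414214

Unlike Newton's method, gradient descent needs a step size; too large a rate makes the iteration diverge and too small a rate makes convergence slow.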

  3. Linear regression via gradient descent. Find the matrix \( A \in \mathbb{R}^{2\times 2} \) that for given data \( x_i, y_i \in \mathbb{R}^2 \) minimizes

$$A = \text{argmin}_B \sum_i \| B x_i - y_i \|^2 $$

Solution
set_default_scalar Float

def linreg {n : β„•} (x y : Float^[2]^[n]) : Float^[2,2] :=
  let loss := fun (A : Float^[2,2]) =>
    βˆ‘ i, β€–(⊞ i' => βˆ‘ j, A[i',j] * x[i][j]) - y[i]β€–β‚‚Β²
  sorry
  4. Write down the Euler–Lagrange equation over an abstract vector space X and show that for the Lagrangian L x v := 1/2 * m * β€–vβ€–β‚‚Β² - Ο† x the Euler–Lagrange equation is m * βˆ‚ (βˆ‚ x) t = - βˆ‡ Ο† x

Either define the Lagrangian over ℝ×ℝ, L : ℝ×ℝ β†’ ℝ×ℝ β†’ ℝ, or introduce an abstract vector space X using this variable command

variable {X} [NormedAddCommGroup X] [AdjointSpace ℝ X] [CompleteSpace X]

These typeclasses will be explained in the last section "Abstract Vector Spaces".

Solution
set_default_scalar ℝ

noncomputable
def EulerLagrange (L : X β†’ X β†’ ℝ) (x : ℝ β†’ X) (t : ℝ) :=
  let v := βˆ‚ x
  βˆ‚ (t':=t), (βˆ‡ (v':=v t'), L (x t') v') - βˆ‡ (x':=x t), L x' (v t)

noncomputable
def NewtonsLaw (m : ℝ) (Ο† : X β†’ ℝ) (x : ℝ β†’ X) (t : ℝ) :=
  m β€’ (βˆ‚ (βˆ‚ x) t) + (βˆ‡ Ο† (x t))

-- example
--     (x : ℝ β†’ X) (hx : ContDiff ℝ ⊀ x)
--     (Ο† : X β†’ ℝ) (hΟ† : Differentiable ℝ Ο†) :
--     EulerLagrange (fun x v => m/2 * β€–vβ€–β‚‚Β² - Ο† x) x t
--       =
--     NewtonsLaw m Ο† x t := by
--   unfold EulerLagrange NewtonsLaw deriv fgradient
--   fun_trans [smul_smul]
--   sorry

Derivative Rules

A common issue when fun_trans is not doing what we expect is that there is a missing differentiation theorem.

For example, if we define a function

def foo (x : ℝ) := x^2 + x

then nothing happens when we try to differentiate it

#check βˆ‚! x, foo x
-- fun x => (βˆ‚ (x:=x), foo x) 1 : ℝ β†’ ℝ

Turning on the fun_trans trace reveals useful information

set_option trace.Meta.Tactic.fun_trans true in
#check βˆ‚! x, foo x
[Meta.Tactic.fun_trans] [❌] βˆ‚ (x:=x), foo x
  [Meta.Tactic.fun_trans] candidate theorems for foo: []
  [Meta.Tactic.fun_trans] candidate local theorems for foo: []
  [Meta.Tactic.fun_trans] candidate fvar theorems: [isContinuousLinearMap_fderiv]
  [Meta.Tactic.fun_trans] [❌] applying: isContinuousLinearMap_fderiv
    [Meta.Tactic.fun_trans] isContinuousLinearMap_fderiv, failed to discharge hypotheses
          SciLean.IsContinuousLinearMap ℝ fun x => foo x

The ❌ on the first line signifies that fun_trans failed to make progress on βˆ‚ (x:=x), foo x. The next two lines

  [Meta.Tactic.fun_trans] candidate theorems for foo: []
  [Meta.Tactic.fun_trans] candidate local theorems for foo: []

state that there are no derivative theorems for foo. The next line

  [Meta.Tactic.fun_trans] candidate fvar theorems: [isContinuousLinearMap_fderiv]

states that there is a potentially applicable theorem isContinuousLinearMap_fderiv which can differentiate linear functions. However, the next few lines report that applying this theorem failed as fun_trans can't prove that foo is a (continuous) linear map.

To remedy this problem we can define the derivative of foo

def foo_deriv (x : ℝ) := 2*x + 1

and add a theorem that the derivative of foo is equal to foo_deriv

open SciLean

@[fun_trans]
theorem foo_deriv_rule :
    fderiv ℝ foo = fun x => fun dx =>L[ℝ] dx β€’ foo_deriv x := by
  unfold foo foo_deriv
  ext x
  fun_trans

Because foo_deriv_rule is marked with the fun_trans attribute it will be used when we now try to differentiate foo

#check βˆ‚! x, foo x
-- fun x => foo_deriv x : ℝ β†’ ℝ

Unfortunately there is one more issue: fun_trans will do nothing when we try to compose foo with itself

#check βˆ‚! x, foo (foo x)
-- fun x => (βˆ‚ (x:=x), foo (foo x)) 1 : ℝ β†’ ℝ

set_option trace.Meta.Tactic.fun_trans true in
#check βˆ‚! x, foo (foo x)
...
  [Meta.Tactic.fun_trans] trying comp theorem SciLean.fderiv.comp_rule
  [Meta.Tactic.fun_trans] [❌] applying: SciLean.fderiv.comp_rule
    [Meta.Tactic.fun_trans] SciLean.fderiv.comp_rule, failed to discharge hypotheses
          Differentiable ℝ fun x0 => foo x0
...

The trace reveals that fun_trans tries to apply the composition (chain) rule SciLean.fderiv.comp_rule but it fails as it can't prove Differentiable ℝ fun x0 => foo x0. We need another theorem stating that foo is a differentiable function. Mathlib has a tactic fun_prop that can prove differentiability and many other function properties like linearity, continuity, measurability etc., and fun_trans uses this tactic to ensure it can apply the chain rule.

We need to add a fun_prop theorem for foo

@[fun_prop]
theorem foo_differentiable : Differentiable ℝ foo := by
  unfold foo
  fun_prop

Now fun_trans knows that foo is a differentiable function and can safely apply the chain rule

#check βˆ‚! x, foo (foo x)
-- fun x => foo_deriv x * foo_deriv (foo x) : ℝ β†’ ℝ

Writing these theorems by hand is quite tedious so there are macros def_fun_prop and def_fun_trans to help you with writing them

def_fun_prop : Differentiable ℝ foo by
  unfold foo; fun_prop

def_fun_trans : fderiv ℝ foo by
  unfold foo; fun_trans

It generates these theorems and this definition

#check foo.arg_x.Differentiable_rule
-- foo.arg_x.Differentiable_rule : Differentiable ℝ foo

#print foo.arg_x.fderiv
-- def foo.arg_x.fderiv : ℝ β†’ ℝ β†’L[ℝ] ℝ := fun x => fun x_1 =>L[ℝ] 2 * x_1 * x + x_1

#check foo.arg_x.fderiv_rule
-- foo.arg_x.fderiv_rule : βˆ‚ foo = foo.arg_x.fderiv

The problem of writing appropriate theorems for fun_trans and fun_prop is quite involved and will be discussed in a future chapter.

Differentiating Division, Log, Sqrt, ...

So far we have worked with functions that are smooth. However, functions like division, log, sqrt, β€–Β·β€–β‚‚ are not differentiable everywhere. We have to be a bit careful with those functions because SciLean tries to perform computations that are, at least in principle, fully formalizable. Let's try to differentiate division

#check βˆ‚! (x:ℝ), 1/x
-- fun x => (βˆ‚ (x:=x), x⁻¹) 1 : ℝ β†’ ℝ

We did not get the expected -x⁻². When differentiation, or any tactic, is not doing what we expect we can turn on tracing. Let's try again with tracing on

set_option trace.Meta.Tactic.fun_trans true in
#check βˆ‚! (x:ℝ), 1/x

and the beginning of the trace says that fun_trans tried to apply the theorem HDiv.hDiv.arg_a0a1.fderiv_rule_at however it failed to discharge x β‰  0

[Meta.Tactic.fun_trans] [❌] βˆ‚ (x:=x), 1 / x
  [Meta.Tactic.fun_trans] candidate theorems for HDiv.hDiv: [HDiv.hDiv.arg_a0a1.fderiv_rule_at]
  [Meta.Tactic.fun_trans] [❌] applying: HDiv.hDiv.arg_a0a1.fderiv_rule_at
    [Meta.Tactic.fun_trans] [❌] discharging: x β‰  0
    [Meta.Tactic.fun_trans] HDiv.hDiv.arg_a0a1.fderiv_rule_at, failed to discharge hypotheses
          x β‰  0

This makes sense as the division 1/x is well defined and differentiable only away from zero. Therefore we have to differentiate it at a concrete point that is not equal to zero.

variable (xβ‚€ : ℝ) (hxβ‚€ : xβ‚€ β‰  0)

#check (βˆ‚ (x:=xβ‚€), 1/x) rewrite_by fun_trans (disch:=assumption)
-- -(xβ‚€ ^ 2)⁻¹ : ℝ

We introduced a point xβ‚€ and an assumption hxβ‚€ that it is not equal to zero. By default fun_trans does not see this assumption and we have to provide a discharger. A discharger is any tactic that tries to satisfy (discharge) any assumption of the theorems fun_trans is using. In this simple case the assumption tactic is enough as it just looks through the local context and tries to directly apply any existing assumptions.

Using assumption is not enough for a more complicated expression

variable (xβ‚€ : ℝ) (hxβ‚€ : xβ‚€ β‰  0)

#check (βˆ‚ (x:=xβ‚€), 1/x^2) rewrite_by fun_trans (disch:=assumption)
-- βˆ‚ (x:=xβ‚€), (x ^ 2)⁻¹ : ℝ

tracing shows

[Meta.Tactic.fun_trans] HDiv.hDiv.arg_a0a1.fderiv_rule_at,
  failed to discharge hypotheses xβ‚€ ^ 2 β‰  0

We need a tactic that is capable of inferring (xβ‚€^2 β‰  0) from (xβ‚€ β‰  0). A very general and powerful tactic is aesop

variable (xβ‚€ : ℝ) (hxβ‚€ : xβ‚€ β‰  0)

#check (βˆ‚ (x:=xβ‚€), 1/x^2) rewrite_by fun_trans (disch:=aesop)
-- -(2 * xβ‚€) / (xβ‚€ ^ 2) ^ 2 : ℝ

In multiple dimensions we often want to differentiate the norm

open SciLean Scalar
variable (xβ‚€ : ℝ×ℝ) (hxβ‚€ : xβ‚€ β‰  0)

#check (βˆ‡ (x:=xβ‚€), β€–xβ€–β‚‚) rewrite_by fun_trans (disch:=assumption)
-- β€–xβ‚€β€–β‚‚[ℝ]⁻¹ β€’ xβ‚€ : ℝ Γ— ℝ

The fact that the norm is not differentiable at the origin can cause issues. The common practice when writing numerical algorithms is to regularize the norm using a small positive Ξ΅.
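
Differentiating the regularized norm by hand with the chain rule gives

$$\frac{\partial}{\partial x}\Big|_{x=w} \sqrt{\varepsilon + \|x\|_2^2}\, dx = \frac{2 \langle dx, w\rangle}{2\sqrt{\varepsilon + \|w\|_2^2}} = \frac{\langle dx, w\rangle}{\sqrt{\varepsilon + \|w\|_2^2}}, $$

which is defined for every \(w\) whenever \(\varepsilon > 0\); the first, uncancelled form is exactly what fun_trans produces below.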

open SciLean Scalar
variable (Ξ΅ : ℝ) (hΞ΅ : 0 < Ξ΅)

#check (βˆ‚ (x:ℝ×ℝ), sqrt (Ξ΅ + β€–xβ€–β‚‚Β²)) rewrite_by
  fun_trans (disch:=intro x; nlinarith [norm2_nonneg ℝ x])
-- fun w => fun x =>L[ℝ] 2 * βŸͺx, w⟫_ℝ / (2 * sqrt (Ξ΅ + β€–wβ€–β‚‚Β²)) : ℝ Γ— ℝ β†’ ℝ Γ— ℝ β†’L[ℝ] ℝ

Figuring out the right tactic like intro x; nlinarith [norm2_nonneg ℝ x] can be difficult. Therefore introduce a tactic/discharger unsafe_ad_disch that assumes common AD assumptions and reports them back to the user.

Create an unsafe mode of differentiation which assumes that everything works out. Effectively this requires a discharger that recognizes common goals that should be sorries or postponed.

Exercises

  1. gradient of the energy for an n-body system, Newton's potential, Lennard-Jones potential

    • do it for two particles

    • do it for n-particles

  2. signed distance function

    • compute the gradient of the sphere SDF (see the sketch after this list)

    • compute the mean and Gaussian curvature of a sphere

    • pick SDF from https://iquilezles.org/articles/distfunctions/

      • most of them involve functions that are not differentiable everywhere

      • compute derivative in unsafe mode

      • figure out the minimal condition under which it is differentiable
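
For the sphere SDF item, a hedged sketch; the definition sphereSDF, the hypothesis h, and the discharger are our assumptions for illustration, and the expected gradient is the unit vector pointing away from the center:

open SciLean Scalar
set_default_scalar ℝ

-- signed distance from x to the sphere with center c and radius r
noncomputable
def sphereSDF (c : ℝ×ℝ) (r : ℝ) (x : ℝ×ℝ) : ℝ := β€–x - cβ€–β‚‚ - r

variable (c xβ‚€ : ℝ×ℝ) (r : ℝ) (h : xβ‚€ - c β‰  0)

-- away from the center the gradient should come out as β€–xβ‚€ - cβ€–β‚‚β»ΒΉ β€’ (xβ‚€ - c)
#check (βˆ‡ (x:=xβ‚€), sphereSDF c r x) rewrite_by
  unfold sphereSDF
  fun_trans (disch:=assumption)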

Abstract Vector Spaces

In calculus we usually consider only functions \((f : \mathbb{R}^n \rightarrow \mathbb{R}^m) \). The issue is that on computers the type \( \mathbb{R}^n \) can have multiple different realizations. For example \( \mathbb{R}^3 \) can be modeled by FloatΓ—FloatΓ—Float, Float^[3] or FloatΓ—Float^[2]. They are all equivalent but in code we have to explicitly convert between these types. For this reason it is better to work with abstract vector spaces instead of with \( \mathbb{R}^n \).

Fortunately mathlib's derivative fderiv is already defined for a function (f : X β†’ Y) between two abstract vector spaces X and Y over a field π•œ. Mathlib's way of introducing an abstract vector space is rather involved and we need to spend some time talking about it. This presentation will be rather simplified. For the interested reader we provide references at the end of this section that go over mathlib's algebraic structures in more detail.

A vector space X over a field π•œ is a set with operations +, -, β€’, 0 such that

  βˆ€ (x y z : X), x + (y + z) = (x + y) + z
  βˆ€ (x y : X), x + y = y + x
  βˆ€ (x : X), x + 0 = x
  βˆ€ (x : X), x + (-x) = 0

  βˆ€ (r s : π•œ) (x : X), r β€’ (s β€’ x) = (r * s) β€’ x
  βˆ€ (x : X), 1 β€’ x = x
  βˆ€ (r : π•œ) (x y : X), r β€’ (x + y) = r β€’ x + r β€’ y
  βˆ€ (r s : π•œ) (x : X), (r + s) β€’ x = r β€’ x + s β€’ x

In mathlib the axioms talking about addition and negation are captured by the type class AddCommGroup and the axioms talking about scalar multiplication are captured by the type class Module. Therefore if we want to introduce a new abstract vector space over a field π•œ we have to introduce these variables

variable {π•œ : Type} [Field π•œ] {X : Type} [AddCommGroup X] [Module π•œ X] example (s r : π•œ) (x y : X) : (s + r) β€’ (x + y) = s β€’ x + r β€’ x + s β€’ y + r β€’ y :=
n✝³:β„•
x✝⁴:ℝ
n✝²:β„•
n✝¹:β„•
xβ‚€βœΒΉΒ²:ℝ
n✝:β„•
xβ‚€βœΒΉΒΉ:ℝ Γ— ℝ
dxβ‚€βœ:ℝ Γ— ℝ
f✝⁡:ℝ β†’ ℝ
xβ‚€βœΒΉβ°:ℝ
f✝⁴:ℝ Γ— ℝ β†’ ℝ
xβ‚€βœβΉ:ℝ Γ— ℝ
dxβ‚€:ℝ Γ— ℝ
f✝³:ℝ β†’ ℝ β†’ ℝ
xβ‚€βœβΈ:ℝ
yβ‚€βœΒΉ:ℝ
g:ℝ Γ— ℝ β†’ ℝ
xβ‚€βœβ·:ℝ
yβ‚€βœ:ℝ
f✝²:ℝ β†’ ℝ β†’ ℝ
xβ‚€βœβΆ:ℝ
yβ‚€:ℝ
L✝:ℝ β†’ ℝ β†’ ℝ
y✝²:ℝ β†’ ℝ
t✝:ℝ
L:ℝ β†’ ℝ β†’ ℝ
x✝³:ℝ β†’ ℝ
t:ℝ
X✝:Type ?u.2030
inst✝⁡:NormedAddCommGroup X✝
inst✝⁴:AdjointSpace ℝ X✝
inst✝³:CompleteSpace X✝
f✝¹:X✝ β†’ ℝ
x✝²:X✝
f✝:X✝ β†’ ℝ
x✝¹:X✝
n:β„•
f:ℝ Γ— ℝ β†’ ℝ
hf:Differentiable ℝ f
x✝:ℝ
y✝¹:ℝ
xβ‚€βœβ΅:ℝ Γ— ℝ
xβ‚€βœβ΄:ℝ Γ— ℝ
xβ‚€βœΒ³:ℝ Γ— ℝ
y✝:ℝ Γ— ℝ
xβ‚€βœΒ²:ℝ
hxβ‚€βœΒ²:xβ‚€βœΒ² β‰  0
xβ‚€βœΒΉ:ℝ
hxβ‚€βœΒΉ:xβ‚€βœΒΉ β‰  0
xβ‚€βœ:ℝ
hxβ‚€βœ:xβ‚€βœ β‰  0
xβ‚€:ℝ Γ— ℝ
hxβ‚€:xβ‚€ β‰  0
Ξ΅:ℝ
hΞ΅:0 < Ξ΅
π•œ:Type
inst✝²:Field π•œ
X:Type
inst✝¹:AddCommGroup X
inst✝:Module π•œ X
s:π•œ
r:π•œ
x:X
y:X
⊒ (s + r) β€’ (x + y) = s β€’ x + r β€’ x + s β€’ y + r β€’ y
All goals completed! πŸ™

When we want to differentiate a function (f : X β†’ Y) between two vector spaces we also need X and Y to be equipped with a norm. For this purpose there is NormedAddCommGroup which equips AddCommGroup with a norm and guarantees that it is compatible with addition and negation, and NormedSpace which equips Module with a norm and guarantees that it is compatible with scalar multiplication. Furthermore, we have to restrict to a field π•œ that is either the real numbers ℝ or the complex numbers β„‚. The type class RCLike states exactly that. Therefore when we work with the derivative in a general setting the code usually looks like this

variable {π•œ : Type} [RCLike π•œ] {X : Type} [NormedAddCommGroup X] [NormedSpace π•œ X] {Y : Type} [NormedAddCommGroup Y] [NormedSpace π•œ Y] set_default_scalar π•œ example (f g : X β†’ Y) (hf : Differentiable π•œ f) (hg : Differentiable π•œ g) : (βˆ‚ x, (f x + g x)) = (βˆ‚ f) + (βˆ‚ g) :=
n✝³:β„•
x✝³:ℝ
n✝²:β„•
n✝¹:β„•
xβ‚€βœΒΉΒ²:ℝ
n✝:β„•
xβ‚€βœΒΉΒΉ:ℝ Γ— ℝ
dxβ‚€βœ:ℝ Γ— ℝ
f✝⁢:ℝ β†’ ℝ
xβ‚€βœΒΉβ°:ℝ
f✝⁡:ℝ Γ— ℝ β†’ ℝ
xβ‚€βœβΉ:ℝ Γ— ℝ
dxβ‚€:ℝ Γ— ℝ
f✝⁴:ℝ β†’ ℝ β†’ ℝ
xβ‚€βœβΈ:ℝ
yβ‚€βœΒΉ:ℝ
g✝:ℝ Γ— ℝ β†’ ℝ
xβ‚€βœβ·:ℝ
yβ‚€βœ:ℝ
f✝³:ℝ β†’ ℝ β†’ ℝ
xβ‚€βœβΆ:ℝ
yβ‚€:ℝ
L✝:ℝ β†’ ℝ β†’ ℝ
y✝¹:ℝ β†’ ℝ
t✝:ℝ
L:ℝ β†’ ℝ β†’ ℝ
x✝²:ℝ β†’ ℝ
t:ℝ
X✝:Type ?u.3556
inst✝⁷:NormedAddCommGroup X✝
inst✝⁢:AdjointSpace ℝ X✝
inst✝⁡:CompleteSpace X✝
f✝²:X✝ β†’ ℝ
x✝¹:X✝
f✝¹:X✝ β†’ ℝ
x✝:X✝
n:β„•
f✝:ℝ Γ— ℝ β†’ ℝ
hf✝:Differentiable ℝ f✝
x:ℝ
y✝:ℝ
xβ‚€βœβ΅:ℝ Γ— ℝ
xβ‚€βœβ΄:ℝ Γ— ℝ
xβ‚€βœΒ³:ℝ Γ— ℝ
y:ℝ Γ— ℝ
xβ‚€βœΒ²:ℝ
hxβ‚€βœΒ²:xβ‚€βœΒ² β‰  0
xβ‚€βœΒΉ:ℝ
hxβ‚€βœΒΉ:xβ‚€βœΒΉ β‰  0
xβ‚€βœ:ℝ
hxβ‚€βœ:xβ‚€βœ β‰  0
xβ‚€:ℝ Γ— ℝ
hxβ‚€:xβ‚€ β‰  0
Ξ΅:ℝ
hΞ΅:0 < Ξ΅
π•œ:Type
inst✝⁴:RCLike π•œ
X:Type
inst✝³:NormedAddCommGroup X
inst✝²:NormedSpace π•œ X
Y:Type
inst✝¹:NormedAddCommGroup Y
inst✝:NormedSpace π•œ Y
f:X β†’ Y
g:X β†’ Y
hf:Differentiable π•œ f
hg:Differentiable π•œ g
⊒ βˆ‚ x, (f x + g x) = βˆ‚ f + βˆ‚ g
h.h
n✝³:β„•
x✝⁴:ℝ
n✝²:β„•
n✝¹:β„•
xβ‚€βœΒΉΒ²:ℝ
n✝:β„•
xβ‚€βœΒΉΒΉ:ℝ Γ— ℝ
dxβ‚€βœ:ℝ Γ— ℝ
f✝⁢:ℝ β†’ ℝ
xβ‚€βœΒΉβ°:ℝ
f✝⁡:ℝ Γ— ℝ β†’ ℝ
xβ‚€βœβΉ:ℝ Γ— ℝ
dxβ‚€:ℝ Γ— ℝ
f✝⁴:ℝ β†’ ℝ β†’ ℝ
xβ‚€βœβΈ:ℝ
yβ‚€βœΒΉ:ℝ
g✝:ℝ Γ— ℝ β†’ ℝ
xβ‚€βœβ·:ℝ
yβ‚€βœ:ℝ
f✝³:ℝ β†’ ℝ β†’ ℝ
xβ‚€βœβΆ:ℝ
yβ‚€:ℝ
L✝:ℝ β†’ ℝ β†’ ℝ
y✝¹:ℝ β†’ ℝ
t✝:ℝ
L:ℝ β†’ ℝ β†’ ℝ
x✝³:ℝ β†’ ℝ
t:ℝ
X✝:Type ?u.3556
inst✝⁷:NormedAddCommGroup X✝
inst✝⁢:AdjointSpace ℝ X✝
inst✝⁡:CompleteSpace X✝
f✝²:X✝ β†’ ℝ
x✝²:X✝
f✝¹:X✝ β†’ ℝ
x✝¹:X✝
n:β„•
f✝:ℝ Γ— ℝ β†’ ℝ
hf✝:Differentiable ℝ f✝
x✝:ℝ
y✝:ℝ
xβ‚€βœβ΅:ℝ Γ— ℝ
xβ‚€βœβ΄:ℝ Γ— ℝ
xβ‚€βœΒ³:ℝ Γ— ℝ
y:ℝ Γ— ℝ
xβ‚€βœΒ²:ℝ
hxβ‚€βœΒ²:xβ‚€βœΒ² β‰  0
xβ‚€βœΒΉ:ℝ
hxβ‚€βœΒΉ:xβ‚€βœΒΉ β‰  0
xβ‚€βœ:ℝ
hxβ‚€βœ:xβ‚€βœ β‰  0
xβ‚€:ℝ Γ— ℝ
hxβ‚€:xβ‚€ β‰  0
Ξ΅:ℝ
hΞ΅:0 < Ξ΅
π•œ:Type
inst✝⁴:RCLike π•œ
X:Type
inst✝³:NormedAddCommGroup X
inst✝²:NormedSpace π•œ X
Y:Type
inst✝¹:NormedAddCommGroup Y
inst✝:NormedSpace π•œ Y
f:X β†’ Y
g:X β†’ Y
hf:Differentiable π•œ f
hg:Differentiable π•œ g
x:X
dx:X
⊒ (βˆ‚ (x:=x), (f x + g x)) dx = ((βˆ‚ f + βˆ‚ g) x) dx
;
All goals completed! πŸ™

When working with gradients we also need an inner product, as the adjoint is defined through the inner product. Unfortunately, here we diverge from mathlib a little bit. Mathlib defines InnerProductSpace which equips NormedSpace with an inner product. Understandably InnerProductSpace requires that βŸͺx,x⟫ = β€–xβ€–Β², however mathlib made the unfortunate decision of defining the norm on product spaces as β€–(x,y)β€– = max β€–xβ€– β€–yβ€–, which is incompatible with the inner product structure. Therefore a type like ℝ×ℝ can't be equipped with InnerProductSpace. Because of these issues, SciLean introduces AdjointSpace which is almost identical to InnerProductSpace but only requires that the norm induced by the inner product is equivalent to the existing norm, i.e. βˆƒ (c d : ℝ⁺), βˆ€ x, c * βŸͺx,x⟫ ≀ β€–xβ€–^2 ≀ d * βŸͺx,x⟫. On AdjointSpace SciLean introduces the Lβ‚‚-norm β€–xβ€–β‚‚ := sqrt βŸͺx,x⟫, which you have already seen and which is the norm you most likely want to use instead of the default norm β€–xβ€–. Therefore when we work with gradients in a general setting the code usually looks like this

open SciLean variable {π•œ : Type} [RCLike π•œ] {X : Type} [NormedAddCommGroup X] [AdjointSpace π•œ X] [CompleteSpace X] {Y : Type} [NormedAddCommGroup Y] [AdjointSpace π•œ Y] [CompleteSpace Y] set_default_scalar π•œ example (f g : X β†’ Y) (hf : Differentiable π•œ f) (hg : Differentiable π•œ g) : (βˆ‡ x, (f x + g x)) = (βˆ‡ f) + (βˆ‡ g) :=
n✝³:β„•
x✝³:ℝ
n✝²:β„•
n✝¹:β„•
xβ‚€βœΒΉΒ²:ℝ
n✝:β„•
xβ‚€βœΒΉΒΉ:ℝ Γ— ℝ
dxβ‚€βœ:ℝ Γ— ℝ
f✝⁢:ℝ β†’ ℝ
xβ‚€βœΒΉβ°:ℝ
f✝⁡:ℝ Γ— ℝ β†’ ℝ
xβ‚€βœβΉ:ℝ Γ— ℝ
dxβ‚€:ℝ Γ— ℝ
f✝⁴:ℝ β†’ ℝ β†’ ℝ
xβ‚€βœβΈ:ℝ
yβ‚€βœΒΉ:ℝ
g✝:ℝ Γ— ℝ β†’ ℝ
xβ‚€βœβ·:ℝ
yβ‚€βœ:ℝ
f✝³:ℝ β†’ ℝ β†’ ℝ
xβ‚€βœβΆ:ℝ
yβ‚€:ℝ
L✝:ℝ β†’ ℝ β†’ ℝ
y✝¹:ℝ β†’ ℝ
t✝:ℝ
L:ℝ β†’ ℝ β†’ ℝ
x✝²:ℝ β†’ ℝ
t:ℝ
X✝:Type ?u.5960
inst✝⁹:NormedAddCommGroup X✝
inst✝⁸:AdjointSpace ℝ X✝
inst✝⁷:CompleteSpace X✝
f✝²:X✝ β†’ ℝ
x✝¹:X✝
f✝¹:X✝ β†’ ℝ
x✝:X✝
n:β„•
f✝:ℝ Γ— ℝ β†’ ℝ
hf✝:Differentiable ℝ f✝
x:ℝ
y✝:ℝ
xβ‚€βœβ΅:ℝ Γ— ℝ
xβ‚€βœβ΄:ℝ Γ— ℝ
xβ‚€βœΒ³:ℝ Γ— ℝ
y:ℝ Γ— ℝ
xβ‚€βœΒ²:ℝ
hxβ‚€βœΒ²:xβ‚€βœΒ² β‰  0
xβ‚€βœΒΉ:ℝ
hxβ‚€βœΒΉ:xβ‚€βœΒΉ β‰  0
xβ‚€βœ:ℝ
hxβ‚€βœ:xβ‚€βœ β‰  0
xβ‚€:ℝ Γ— ℝ
hxβ‚€:xβ‚€ β‰  0
Ξ΅:ℝ
hΞ΅:0 < Ξ΅
π•œ:Type
inst✝⁢:RCLike π•œ
X:Type
inst✝⁡:NormedAddCommGroup X
inst✝⁴:AdjointSpace π•œ X
inst✝³:CompleteSpace X
Y:Type
inst✝²:NormedAddCommGroup Y
inst✝¹:AdjointSpace π•œ Y
inst✝:CompleteSpace Y
f:X β†’ Y
g:X β†’ Y
hf:Differentiable π•œ f
hg:Differentiable π•œ g
⊒ βˆ‡ x, (f x + g x) = βˆ‡ f + βˆ‡ g
h.h
n✝³:β„•
x✝⁡:ℝ
n✝²:β„•
n✝¹:β„•
xβ‚€βœΒΉΒ²:ℝ
n✝:β„•
xβ‚€βœΒΉΒΉ:ℝ Γ— ℝ
dxβ‚€βœ:ℝ Γ— ℝ
f✝⁢:ℝ β†’ ℝ
xβ‚€βœΒΉβ°:ℝ
f✝⁡:ℝ Γ— ℝ β†’ ℝ
xβ‚€βœβΉ:ℝ Γ— ℝ
dxβ‚€:ℝ Γ— ℝ
f✝⁴:ℝ β†’ ℝ β†’ ℝ
xβ‚€βœβΈ:ℝ
yβ‚€βœΒΉ:ℝ
g✝:ℝ Γ— ℝ β†’ ℝ
xβ‚€βœβ·:ℝ
yβ‚€βœ:ℝ
f✝³:ℝ β†’ ℝ β†’ ℝ
xβ‚€βœβΆ:ℝ
yβ‚€:ℝ
L✝:ℝ β†’ ℝ β†’ ℝ
y✝¹:ℝ β†’ ℝ
t✝:ℝ
L:ℝ β†’ ℝ β†’ ℝ
x✝⁴:ℝ β†’ ℝ
t:ℝ
X✝:Type ?u.5960
inst✝⁹:NormedAddCommGroup X✝
inst✝⁸:AdjointSpace ℝ X✝
inst✝⁷:CompleteSpace X✝
f✝²:X✝ β†’ ℝ
x✝³:X✝
f✝¹:X✝ β†’ ℝ
x✝²:X✝
n:β„•
f✝:ℝ Γ— ℝ β†’ ℝ
hf✝:Differentiable ℝ f✝
x✝¹:ℝ
y✝:ℝ
xβ‚€βœβ΅:ℝ Γ— ℝ
xβ‚€βœβ΄:ℝ Γ— ℝ
xβ‚€βœΒ³:ℝ Γ— ℝ
y:ℝ Γ— ℝ
xβ‚€βœΒ²:ℝ
hxβ‚€βœΒ²:xβ‚€βœΒ² β‰  0
xβ‚€βœΒΉ:ℝ
hxβ‚€βœΒΉ:xβ‚€βœΒΉ β‰  0
xβ‚€βœ:ℝ
hxβ‚€βœ:xβ‚€βœ β‰  0
xβ‚€:ℝ Γ— ℝ
hxβ‚€:xβ‚€ β‰  0
Ξ΅:ℝ
hΞ΅:0 < Ξ΅
π•œ:Type
inst✝⁢:RCLike π•œ
X:Type
inst✝⁡:NormedAddCommGroup X
inst✝⁴:AdjointSpace π•œ X
inst✝³:CompleteSpace X
Y:Type
inst✝²:NormedAddCommGroup Y
inst✝¹:AdjointSpace π•œ Y
inst✝:CompleteSpace Y
f:X β†’ Y
g:X β†’ Y
hf:Differentiable π•œ f
hg:Differentiable π•œ g
x:X
x✝:Y
⊒ βˆ‡ (x:=x;x✝), (f x + g x) = (βˆ‡ f + βˆ‡ g) x x✝
;
h.h
n✝³:β„•
x✝⁡:ℝ
n✝²:β„•
n✝¹:β„•
xβ‚€βœΒΉΒ²:ℝ
n✝:β„•
xβ‚€βœΒΉΒΉ:ℝ Γ— ℝ
dxβ‚€βœ:ℝ Γ— ℝ
f✝⁢:ℝ β†’ ℝ
xβ‚€βœΒΉβ°:ℝ
f✝⁡:ℝ Γ— ℝ β†’ ℝ
xβ‚€βœβΉ:ℝ Γ— ℝ
dxβ‚€:ℝ Γ— ℝ
f✝⁴:ℝ β†’ ℝ β†’ ℝ
xβ‚€βœβΈ:ℝ
yβ‚€βœΒΉ:ℝ
g✝:ℝ Γ— ℝ β†’ ℝ
xβ‚€βœβ·:ℝ
yβ‚€βœ:ℝ
f✝³:ℝ β†’ ℝ β†’ ℝ
xβ‚€βœβΆ:ℝ
yβ‚€:ℝ
L✝:ℝ β†’ ℝ β†’ ℝ
y✝¹:ℝ β†’ ℝ
t✝:ℝ
L:ℝ β†’ ℝ β†’ ℝ
x✝⁴:ℝ β†’ ℝ
t:ℝ
X✝:Type ?u.5960
inst✝⁹:NormedAddCommGroup X✝
inst✝⁸:AdjointSpace ℝ X✝
inst✝⁷:CompleteSpace X✝
f✝²:X✝ β†’ ℝ
x✝³:X✝
f✝¹:X✝ β†’ ℝ
x✝²:X✝
n:β„•
f✝:ℝ Γ— ℝ β†’ ℝ
hf✝:Differentiable ℝ f✝
x✝¹:ℝ
y✝:ℝ
xβ‚€βœβ΅:ℝ Γ— ℝ
xβ‚€βœβ΄:ℝ Γ— ℝ
xβ‚€βœΒ³:ℝ Γ— ℝ
y:ℝ Γ— ℝ
xβ‚€βœΒ²:ℝ
hxβ‚€βœΒ²:xβ‚€βœΒ² β‰  0
xβ‚€βœΒΉ:ℝ
hxβ‚€βœΒΉ:xβ‚€βœΒΉ β‰  0
xβ‚€βœ:ℝ
hxβ‚€βœ:xβ‚€βœ β‰  0
xβ‚€:ℝ Γ— ℝ
hxβ‚€:xβ‚€ β‰  0
Ξ΅:ℝ
hΞ΅:0 < Ξ΅
π•œ:Type
inst✝⁢:RCLike π•œ
X:Type
inst✝⁡:NormedAddCommGroup X
inst✝⁴:AdjointSpace π•œ X
inst✝³:CompleteSpace X
Y:Type
inst✝²:NormedAddCommGroup Y
inst✝¹:AdjointSpace π•œ Y
inst✝:CompleteSpace Y
f:X β†’ Y
g:X β†’ Y
hf:Differentiable π•œ f
hg:Differentiable π•œ g
x:X
x✝:Y
⊒ (<βˆ‚ (x:=x), (f x + g x)).2 x✝ = ((fun x dy => (<βˆ‚ f x).2 dy) + fun x dy => (<βˆ‚ g x).2 dy) x x✝
;
All goals completed! πŸ™

For the interested reader we recommend the chapter Hierarchies from Mathematics in Lean, which explains how mathlib approaches algebraic hierarchies like monoids, groups or modules. After reading that we recommend Differential Calculus in Normed Spaces, which explains how NormedSpace is structured.