2004-10-23 Books and Software

Looking at lecture 1b of the SICP Videos... (saw 1a a few days ago: 2004-10-16 Books and Software).

Sussman introduces a model for thinking about procedures – the substitution model – and goes on to say that imagining the outcome of things is a big part of the art. Like photographers who try to imagine how the thing will look on paper, we have to imagine how the program will look when it runs. And an important part of *that* is knowing when to stop looking deeper. Understanding complexity depends on your ability to ignore unnecessary detail.

Sussman then uses two different procedures to add numbers to make a point. He wants us to *analyze* these programs (using the substitution model he gave us).

Here’s one:

(define (+ x y)
  (if (= x 0)
      y
      (+ (1- x) (1+ y))))

And here’s the other:

(define (+ x y)
  (if (= x 0)
      y
      (1+ (+ (1- x) y))))

These two both do “Peano Arithmetic” – adding by just using a conditional (if), increment by one (1+) and decrement by one (1-).

The first one, for example, we can imagine as two piles of things. If there’s stuff on the first pile, we take one away, and add one to the second pile. If the first pile is empty, the sum of the two is the second pile.

True SICP genius, the kinds of programs they use!

What he does, basically, is apply the substitution model, writing line after line, and call this the *evaluation process*. Looking at what he wrote, he says that this is the *shape of the process*, and that it shows the *behaviour of the process*:

The *shape* of the first process:

(+ 3 4)
(+ 2 5)
(+ 1 6)
(+ 0 7)
7

Notice the straight right margin.

The second process:

(+ 3 4)
(1+ (+ 2 4))
(1+ (1+ (+ 1 4)))
(1+ (1+ (1+ (+ 0 4))))
(1+ (1+ (1+ 4)))
(1+ (1+ 5))
(1+ 6)
7

Check how the margin grows and shrinks.

Then he says to imagine a machine that actually implements this substitution model. If we do that, we can assume that the number of steps we used is proportional to the *time* it takes, and the width of the shape is proportional to the *space* it requires.

Thus, for the first algorithm, time is *proportional* to x (each step adds one line), and space is constant.

time = O(x)
space = O(1)

We call such a process an *iteration*. And what Sussman believes to be the most important part about this result is that it tells us that *any* machine should be able to run this process in constant space!
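
Just to see it (this sketch is mine, not from the lecture; I call it add so it doesn't shadow the + above, and I use plain - and + instead of 1- and 1+): the same iterative process written with an explicit do loop only ever keeps the two counters around, which is why the space stays constant.

(define (add x y)
  (do ((a x (- a 1))     ; count the first pile down
       (b y (+ b 1)))    ; count the second pile up
      ((= a 0) b)))      ; first pile empty: the sum is the second pile

(add 3 4)  ; => 7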

For the second algorithm, time is also *proportional* to x (each step adds two lines), but space is proportional to x as well: the growing chain of deferred (1+ ...) operations has to be remembered.

time = O(x)
space = O(x)

The name for this kind of process is *linear recursion*.

I could cry if I compare this to the crappy lectures we were given at university!! 😢

He then goes on to talk about *Fibonacci numbers*. Here are the Fibonacci numbers:

||n||0||1||2||3||4||5||6||7||8||9||10||
||:--:||:--:||:--:||:--:||:--:||:--:||:--:||:--:||:--:||:--:||:--:||:--:||
||fib(n)||0||1||1||2||3||5||8||13||21||34||55||

Here’s how to calculate them:

(define (fib n)
  (if (< n 2)
      n
      (+ (fib (- n 1))
         (fib (- n 2)))))

Here’s how he draws the process:

  fib(4)
  is  fib(3) + fib(2)
  is  fib(2) + fib(1) + fib(1) + fib(0)
  is  fib(1) + fib(0) + 1 + 1 + 0
  is  1 + 0 + 1 + 1 + 0
  =   3

Why is this bad? We calculated fib(2) twice! If we were to calculate fib(5), we’d have to add fib(4) and fib(3) – and fib(3) we already used to calculate fib(4)!

This process is *exponential* in time: to get fib(n) I have to redo all the work for fib(n-1) and all the work for fib(n-2), so the time itself grows like the Fibonacci numbers.

time = O(fib(n))
space = O(n)
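
To see the explosion in numbers, here is a little counter of my own (not from the lecture): it counts how many times fib gets called.

(define (fib-calls n)
  (if (< n 2)
      1                          ; a leaf call
      (+ 1                       ; this call ...
         (fib-calls (- n 1))     ; ... plus all the calls under fib(n-1)
         (fib-calls (- n 2)))))  ; ... plus all the calls under fib(n-2)

(fib-calls 10)  ; => 177, while fib(10) is only 55

The number of calls grows like fib(n) itself, which is exactly what the O(fib(n)) above says.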

Why is this happening? He shows us how the two parts of the program – the two branches of the conditional – reappear in our picture of the process.

Interesting: How to find “instances” of your general instructions (your program) in the image of the process we drew.

Next he moves on to the game Tower of Hanoi. Amazing: before he starts programming, he explains why there must be a solution: if we can move a tower of n-1 disks, then we can move a tower of n disks (move the n-1 tower onto the spare peg, move the biggest disk to its destination, then move the n-1 tower back on top of it), and we can certainly move a tower of one disk.

Thus, the solution exists. This only takes a second or two in the movie, but I love his way of approaching the problem!

The result, assuming the three locations are called from, to, and spare:

(define (move n from to spare)
  (cond ((= n 0) "done")
        (else
         (move (1- n) from spare to)    ; move the n-1 smaller disks out of the way, onto spare
         (print-move from to)           ; move the biggest disk to its destination
         (move (1- n) spare to from)))) ; put the n-1 smaller disks back on top
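
The code calls print-move, which isn't shown above; here is a minimal sketch (the output wording is my own guess), plus a tiny test:

(define (print-move from to)
  (display "move top disk from ")
  (display from)
  (display " to ")
  (display to)
  (newline))

(move 2 1 2 3)
; move top disk from 1 to 3
; move top disk from 1 to 2
; move top disk from 3 to 2
; => "done"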

He says this is very similar to the Fibonacci thing: It will produce a tree...

 (move 4 1 2 3)
   (move 3 1 3 2) (...)
     (move 2 1 2 3) (...)
       (move 1 1 3 2)

That’s why we start by moving the one-disk tower from location 1 to 3. 😄

Now, looking at the problem, Sussman says:

An iterative algorithm would not split the problem into two subproblems. It would just apply some “local rule”...
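
Here is my own reconstruction of such a local rule (the classic iterative algorithm; the names iterative-move and build-tower are made up, not code from the lecture): alternate between moving the smallest disk one peg along a fixed cycle and making the only other legal move, 2^n - 1 times in total.

(define (build-tower n)                  ; disks (1 2 ... n), smallest on top
  (let loop ((k n) (acc '()))
    (if (= k 0) acc (loop (- k 1) (cons k acc)))))

(define (iterative-move n)               ; pegs numbered 0 (from), 1 (to), 2 (spare)
  (let ((pegs (vector (build-tower n) '() '()))
        ;; cyclic direction of the smallest disk:
        ;; even n: 0 -> 2 -> 1 -> 0, odd n: 0 -> 1 -> 2 -> 0
        (next (if (even? n) (vector 2 0 1) (vector 1 2 0))))
    (define (top p)
      (if (null? (vector-ref pegs p)) #f (car (vector-ref pegs p))))
    (define (shift! a b)                 ; move one disk from peg a to peg b
      (vector-set! pegs b (cons (top a) (vector-ref pegs b)))
      (vector-set! pegs a (cdr (vector-ref pegs a)))
      (print-move a b))                  ; reuses the print-move sketch above
    (let loop ((i 1) (small 0))          ; small = peg currently holding disk 1
      (cond ((> i (- (expt 2 n) 1)) "done")
            ((odd? i)                    ; rule 1: advance the smallest disk
             (let ((dest (vector-ref next small)))
               (shift! small dest)
               (loop (+ i 1) dest)))
            (else                        ; rule 2: the only other legal move
             (let* ((a (vector-ref next small))
                    (b (vector-ref next a)))
               (if (or (not (top b))
                       (and (top a) (< (top a) (top b))))
                   (shift! a b)
                   (shift! b a))
               (loop (+ i 1) small)))))))

(iterative-move 3) prints the same seven moves as (move 3 1 2 3) would, just with the pegs numbered 0, 1, 2; and the only state it keeps is the pegs themselves and the move counter.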

Same goes for the Fibonacci algorithm. There’s an iterative way of computing them, in constant space. We should try to do it.
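
A first attempt at that (my own sketch, not the lecture's code; fib-iter is a made-up name): carry the last two numbers along and count down.

(define (fib-iter n)
  (let loop ((a 0) (b 1) (count n))  ; a = fib(k), b = fib(k+1)
    (if (= count 0)
        a
        (loop b (+ a b) (- count 1)))))

(fib-iter 10)  ; => 55

This is an iterative process in the sense above: linear time, constant space.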

Hehe, this reminds me of the passage in the Emacs Lisp manual that says that Emacs is not efficient at recursive algorithms; it would be better to rewrite them as iterative algorithms for speed. 😄

​#Books ​#Software