What else can you do with typing systems [1]? And what about that relentless drive towards multiple processor machines [2]?
One thing: better parallelization of code. One such method is the map function [3], where a function (or bit of code) is applied to every element of a list, and each application is independent of the others. In the current crop of languages, there's an actual function or keyword that has to be used:
```
(DEFUN INCR (X) (+ X 1))      ; a one-argument increment function
(MAPCAR #'INCR '(1 2 3 4 5))  ; => (2 3 4 5 6)
```
This bit of Lisp code increments each element of a list by 1. Other languages have a similar function or library variation on map; C doesn't, but one is easy enough to write by hand, as in the sketch below. But why? Couldn't we have the compiler figure it out, at least in a few instances?
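A minimal hand-rolled map over arrays of doubles might look like this (map_double is my own name for it, nothing standard):

```
#include <stddef.h>

/* apply f to each element of in, storing the results in out */
static void map_double(double (*f)(double), const double *in,
                       double *out, size_t n)
{
	size_t i;

	for (i = 0; i < n; i++)
		out[i] = f(in[i]);
}
```

With that and <math.h> in scope, y = sin(x) becomes map_double(sin, x, y, 100). Still explicit, though, which is exactly the point.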
For instance, in C, the following will give a compiler error:
```
{
	double x[100];
	double y[100];

	y = sin(x); /* error: sin() expects a double, not an array */
}
```
since sin() takes a single double parameter, not an array. But since I'm talking about hypothetical compilers here, couldn't the compiler recognize that x is an array of doubles, that sin() takes a single double, and that the result is being assigned to another array of doubles, and simply map sin() over the array x? On a uniprocessor machine the generated code could be something like:
```
{
	double x[100];
	double y[100];
	size_t i;

	/* the implicit map, expanded to an element-wise loop */
	for (i = 0; i < 100; i++)
		y[i] = sin(x[i]);
}
```
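And this is where those multiple processor machines come back in: every iteration of that loop is independent, so on a multicore machine the compiler would be free to fan the mapping out across cores. A sketch of what the generated code might amount to, using OpenMP purely as a stand-in for whatever threading the compiler would actually emit:

```
#include <math.h>
#include <stddef.h>

/* hypothetical compiler output for y = sin(x) on a multicore target */
static void map_sin(const double *x, double *y, size_t n)
{
	long i;

	/* each iteration is independent, so they can run in any
	   order, on any core */
	#pragma omp parallel for
	for (i = 0; i < (long)n; i++)
		y[i] = sin(x[i]);
}
```

Compiled with -fopenmp (under GCC) the loop spreads across however many cores are available; without it, the pragma is ignored and you get the sequential loop above.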
Given a routine that expects a parameter of type X, and an array of elements of type X, passing that array into the routine should not produce an error, but a mapping instead. No fence post errors [4]. No having to type out for loops, or while loops, or loops of any kind. Not even having to type out “map.”
Or is this too much DWIM (Do What I Mean)?
[3] http://en.wikipedia.org/wiki/Map_(higher-order_function)