________________________________________________________________________________
I wrote a Lisp in F# last year, and I started with Fexprs and used them as the basis to write quasiquote and defmacro, which are then used to write things like defun and let:
    ;; backtick char ` is expanded to quasiquote by the reader

    (defmacro! quasiquote
      (flambda (form) (_qq-expand form)))

    (defmacro! define-macro
      (flambda (sig :rest body)
        `(defmacro! ~(car sig) (flambda ~(cdr sig) ~@body))))

    (define-macro (defun name vars :rest forms)
      `(def! ~name (lambda ~vars ~@forms)))

    (define-macro (lisp/let bindings :rest forms)
      `((lambda ~(map first bindings) ~@forms)
        ~@(map second bindings)))

    (define-macro (let :rest forms)
      (if (contains? forms :in)
          `(~'clj/let ~@forms)
          `(~'lisp/let ~(car forms) ~@(cdr forms))))
There is a certain amount of brain-warping required to understand it all, and getting the nested quasi- and regular quotes right takes some getting used to, even for me going back and trying to grok it -- but it's bootstrapping code, so once your primitives are defined, you can move on.
The post describes how Fexprs were designed in a way that was tightly coupled to implementation choices around eval, in ways which were later limiting. Did you experience this? Were there things you might have wanted to change about your lisp which were inhibited by the use of Fexprs?
> FEXPRs and FSUBRs were ... meta-implementation operators that relied on the meta-circular definition of EVAL
> The problem with FEXPRs is that they are far too powerful and place too many constraints on the implementation.
> FEXPRs were _designed_ around the fact that the language had a meta-circular definition that was implemented via recursive descent of list structure.
> A FEXPR, on the other hand, is a program that is deliberately _not_ abstracted away from the concrete representation.
> A FEXPR manipulates objects on both sides of the isomorphism (yuck), so Big-P and Big-P^-1 cannot be factored out of the semantics.
I think I got lost on how exactly SUBRs and FSUBRs are different. Is it that SUBRs still get evaluated forms of the arguments in an assembly-accessible form, while FSUBRs get reader output as linked lists?
Yes, I believe that's correct.
A SUBR's arguments are evaluated before they are passed to the SUBR - this is effectively just a normal function, familiar from other languages.
I think an FSUBR (or FEXPR) essentially gets the entire argument list unevaluated and is then responsible for evaluating it according to its own desires.
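To make that concrete, here's a toy illustration (in Python, with nested lists standing in for Lisp list structure; `lookup` and the environment are hypothetical stand-ins for EVAL on a symbol) of the same call site handled SUBR-style versus FSUBR-style:

```python
# Hypothetical environment mapping symbols to values.
env = {"a": 1, "b": 2}

def lookup(sym):
    """Stand-in for EVAL applied to a symbol."""
    return env[sym]

form = ["f", "a", "b"]  # reader output for (f a b), as a linked-list analogue

# SUBR-style: the evaluator evaluates the operands first;
# the SUBR body only ever sees the resulting values.
subr_args = [lookup(s) for s in form[1:]]
assert subr_args == [1, 2]

# FSUBR-style: the FSUBR body receives the raw operand list,
# and decides for itself what (if anything) to evaluate.
fsubr_args = form[1:]
assert fsubr_args == ["a", "b"]
```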
I am not too proud a man to say that I read that all the way through and, by the end, still didn't understand one bit: and I'm a lisp person who at least knows a little bit about the history of the language.
Can ANYONE here actually translate that into English?
I've been on a bit of a lisp kick recently.
Coming across the description of fexprs in earlier (pre Common Lisp and Scheme) Lisp dialects was interesting to me since I'm familiar with uplevel and upvar in Tcl, and it appears that Tcl's upvar/uplevel is closely related to Lisp's fexprs. It was interesting to see that fexprs basically give all of the power of macros, although possibly (probably?) at a performance cost due to compilation difficulties.
The obvious advantage of fexprs is that they are functions and therefore can be applied or passed along as functional arguments in ways that macros cannot.
I've been trying to understand why they were dropped from Common Lisp and the mainline Lisp descendants. From the comments I've seen in other locations (and in the sibling linked paper), it seems that the performance issues of fexprs were a strong reason for the advocacy of dropping them (from Common Lisp especially). Some of the other online references that I've seen mention that perhaps fexprs would not have the same degree of compilation problems in the modern lexically scoped Lisps that they had in the dynamically scoped earlier dialects.
One of the things that was interesting about this fragment of document was that it seems to imply that fexprs are explicitly tied to the evaluation strategy, whereas it seems like maybe macros are less intimately tied to the direct structure of the evaluator.
As an aside, there are a couple of contemporary Lisp variants (PicoLisp and newLISP) that still maintain implementations of fexprs.
Lisp compilers assume that certain things are available at compile time, but not at run-time. These assumptions are supported by the particular set of special forms that is handled, and that set is not extensible.
Fexprs make the set of special forms open-ended, and allow _anything_ to be available at run-time.
If a new special form is introduced which allows something to be dynamic which was previously static, it can cause major work in the compiler.
It is not usually reasonable to do major work in the compiler for the sake of some special form that an application needs. It is not even possible unless the users have buildable sources for the compiler.
One example of an important facility about which static assumptions are made is lexical scope (a.k.a. static scope). Lisp compilers use a completely different representation for the static scope from interpreters. In a lexically scoped Lisp dialect, fexprs receive the lexical environment as an argument (necessarily so). This is a run-time object, which a fexpr can easily leak to the application directly, or else indirectly (by providing semantics like looking for a variable using a run-time symbol as a key). That is like the upvar business in tcl. It relies on the existence of a searchable structure at run time, which is the antithesis of lexical scope.
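A tiny sketch may help here (Python, with environments as hypothetical parent-linked dicts; `get_var` is an invented upvar-style API, not anything from a real Lisp): once a fexpr can search the environment by a run-time symbol, the compiler can no longer assume any variable is private, renameable, or safe to keep in a register.

```python
def get_var(env, name):
    """Upvar/uplevel-style lookup: walk enclosing frames at run time."""
    while env is not None:
        if name in env["bindings"]:
            return env["bindings"][name]
        env = env["parent"]
    raise NameError(name)

# Two nested scopes, reified as searchable run-time structures --
# exactly what compiled lexical scope tries NOT to keep around.
outer = {"bindings": {"x": 10}, "parent": None}
inner = {"bindings": {"y": 20}, "parent": outer}

# A fexpr handed `inner` can reach `x` by name, by chasing parent frames:
print(get_var(inner, "x"))  # finds x in the outer frame
```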
Still, there can be fexprs that don't require the application writer to hack the compiler. Why don't we just have those? Fexpr users who want their code to compile will just have to write a macro. If they are not able to, then they don't get to compile the code.
But those who want to compile the code don't want to do the work twice: why write a fexpr _and_ a macro, if you can just write a macro? The interpreter can work with the macro expansion, just like the compiler. Application writers want to implement something once, in one way. If you implement something twice, you have to maintain both.
Basically, Lisp users historically expressed the desire to compile the code without hacking the compiler, and that steered things away from fexprs toward macros.
Here is a big one: macros can be functionally written _even if the construct they implement is imperative_. Expanding a macro can be a side-effect-free, pure operation. If it is so, you can trust it to produce the same results on the same inputs.
It's often possible to check that a macro is doing the right thing just from example expansions. The expansion works right there at the call site.
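As a sketch of that point (Python lists standing in for Lisp forms; `expand_when` and the `when` shape are illustrative, not any particular Lisp's definition): the expander is a pure function from form to form, so you can test it purely by checking example expansions.

```python
def expand_when(form):
    """Pure expander: (when TEST BODY...) -> (if TEST (progn BODY...) nil).
    No environment, no side effects -- just form in, form out."""
    _, test, *body = form
    return ["if", test, ["progn", *body], "nil"]

# Checking the macro is just checking an expansion, right at the call site:
expansion = expand_when(["when", "ready", ["launch"]])
assert expansion == ["if", "ready", ["progn", ["launch"]], "nil"]
```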
A fexpr carries out operations "remotely", so to speak, because it's a function. It's a function which has to receive contextual arguments and then uses an API to access the scope. The resulting code is harder to read. The (non-hygienic) macro user just performs an expansion and sees an expression like A or #:G0025, denoting the access to a variable. He or she just has to imagine that code in place of the macro call. To debug a fexpr, the user has to read code that uses some API like _(get-var env third-var)_ to do the same thing.
Common Lisp's mixture of lexical and dynamic scope could cause challenges for fexprs also.
Thanks for that response.
Your explanation makes a lot of things clear that I hadn't fully parsed out from what I've been reading recently.
I'm not sure if it'll make things clearer for you, but there's some background of fexprs in Kent Pitman's "Special Forms in Lisp" paper here:
http://www.nhplace.com/kent/Papers/Special-Forms.html
An FEXPR is similar to a function (EXPR), except that its operands are not implicitly evaluated before being passed to the body of the FEXPR. Consider the following:
(if (and a b) x y)
Were it the case that `if`, `and` were regular functions, then `b` would get evaluated even if `a` was #f (but we expect short-circuiting behavior). Also, both `x` and `y` would be evaluated, regardless of the condition, but we clearly only want to evaluate one or the other. Functions implicitly evaluate their operands _before_ passing the resulting arguments to the body of the function.
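A quick demonstration of that eager-evaluation problem (Python, with hypothetical names; `side_effect` just records which branches actually ran):

```python
calls = []

def side_effect(label, value):
    """Record that this branch was evaluated, then yield its value."""
    calls.append(label)
    return value

def if_as_function(cond, then_val, else_val):
    # By the time we get here, BOTH branch expressions were already
    # evaluated -- an ordinary function cannot prevent that.
    return then_val if cond else else_val

result = if_as_function(True,
                        side_effect("then", 1),
                        side_effect("else", 2))
# result is 1, but calls shows the else branch ran anyway.
```

This is exactly why `if` must be a special form, an FEXPR, or a macro: something has to receive the branch expressions unevaluated.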
In most lisps, `if`, `and`, `or` are not functions, but special forms which are second-class citizens. They must be referred to by their own names when used, and you can't assign them to another name.
FEXPRs are an abstraction which allow you to write your own "special forms", and decide how and whether or not the operands should be evaluated _within the body_ of the FEXPR. They are unlike macros or quotation however, in that they don't merely suppress the evaluation of a syntactic list at some stage before regular evaluation, but they pass the parsed operand list to the body of the FEXPR during evaluation. FEXPRs are also first-class items, so you can assign them to a variable, and pass them around like first-class lambdas.
Consider the following (simplified) description of an evaluator:
EVAL takes a term, call it T. If T is a symbol, look it up in the current dynamic environment and return the value to which it is mapped. If not found, raise an error. If T is a cons cell, take the CAR and call it C and take the CDR and call it L. If C is an FEXPR, call C, passing the value L. If C is an EXPR, evaluate each value in the list L, constructing a new list of results, call it A. Next, call C, passing the value A. If C is neither an FEXPR nor EXPR, raise an error. Otherwise, T must be a self-evaluating term, so return T.
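That description can be sketched almost line for line (Python, nested lists standing in for cons cells; the `Fexpr` marker class and explicit `env` parameter are simplifications I'm assuming, since the Lisps in question threaded a dynamic environment implicitly):

```python
class Fexpr:
    """Marks a callable as an FEXPR: it receives its operands unevaluated."""
    def __init__(self, fn):
        self.fn = fn

def evaluate(term, env):
    if isinstance(term, str):            # T is a symbol: look it up
        if term in env:
            return env[term]
        raise NameError(term)
    if isinstance(term, list):           # T is a cons cell: C = car, L = cdr
        c = evaluate(term[0], env)
        l = term[1:]
        if isinstance(c, Fexpr):
            return c.fn(l, env)          # FEXPR: gets L raw, plus the env
        if callable(c):                  # EXPR: evaluate L into A, then call
            a = [evaluate(x, env) for x in l]
            return c(*a)
        raise TypeError("not applicable")
    return term                          # otherwise T is self-evaluating

# `if` as an FEXPR rather than a hard-wired case in the evaluator:
def fexpr_if(operands, env):
    test, then_form, else_form = operands
    return evaluate(then_form if evaluate(test, env) else else_form, env)

env = {"if": Fexpr(fexpr_if), "t": True, "add": lambda a, b: a + b}
# The untaken branch is never evaluated, so the unbound symbol is harmless:
print(evaluate(["if", "t", ["add", 1, 2], ["unbound-symbol"]], env))
```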
Note that the core circular evaluator does not need to contain cases for special forms like `if`, `and`, `or`, `quote`, etc, as these can be implemented as FEXPRs. In this regard, the evaluator for a language supporting FEXPRs may actually be simpler than one without.
The main area where FEXPRs cause issues is the dynamic environment which was prevalent in the Lisps of their day, but features similar to FEXPRs have since been used in languages where lexical scoping is the default behavior, and the results are a bit more promising. Kernel in particular is worth checking out if you wish to explore further. Also picolisp, which I'm less familiar with.
As some others have pointed out, FEXPRs do present a real problem for producing optimized code, because they are a runtime phenomenon and not much can be reasoned about their behavior during compilation. A program using FEXPRs is likely to be slower than one using macros, which is likely the main reason they were sidelined. FEXPRs are more powerful than macros, but as the OP points out, so powerful that it becomes difficult or impossible to reason about programs.
The more constrained version used in Kernel is much easier to reason about, almost as powerful, but it does not resolve the performance issues, and attempts to try usually end up reinventing something like macros, which are less powerful.
While the OP argues that "`Safe' FEXPRs turn out to be the subset of FEXPRs that can be understood as source->source translations," I would debate that in the context of Kernel, where its version of FEXPRs (called Operatives) are more tightly constrained due to a clever implementation of environments. I have come across many safe use-cases for Operatives which would be difficult or impossible to express as macros. In general, I find them much more intuitive than macros, and quotation is completely unnecessary in Kernel (although possible, for better or worse).