> humans can’t immediately parse structure with a glance, and sometimes we have to count parentheses by hand.
This is why automatic indentation is important when working with Lisp. It is mentioned later in the article, but I consider it mandatory because of the multitude of problems this approach resolves.
For example, upon pasting the below snippet into emacs:
(let* ((foo 10)
       (bar (+ foo 32))
  (* foo bar))
The editor 1) notifies me that there is a missing paren somewhere, 2) automatically* reindents this code into:

(let* ((foo 10)
       (bar (+ foo 32))
       (* foo bar))
At this moment, it's obvious that we have three variable bindings, one of them malformed, and the programmer can correct this by planting the missing paren in the proper place. I wrote about this topic at length in https://mov.im/blog/phoe%40movim.eu/cd3577f6-fb1d-45f5-b881-... before.

* with aggressive-indent and electric-indent enabled in emacs, as soon as the missing paren is placed at the end
And for that added flair, don't forget: https://www.emacswiki.org/emacs/RainbowDelimiters.
Automatic indentation helps, but the main problem is that LISP uses a single kind of parenthesis in all places where some kind of bracket is required.
If different kinds of brackets had been used for every distinct purpose, then there would have been no need to count parentheses or be dependent on enforcing invariable indentation rules.
ALGOL 68 is another example of a programming language with fully parenthesized non-ambiguous syntax, but it uses many kinds of different bracket pairs, which makes it much easier to read. While some of its bracket pairs, like if/fi, do/od, case/esac, have been rightfully mocked, they serve their purpose well.
In a programming language there are many different purposes for which bracket pairs are needed, e.g. for delimiting sub-expressions in order to change the default order of evaluation, for delimiting function argument lists, for delimiting array index lists, for delimiting blocks, for delimiting function bodies, for delimiting record/structure component definitions, for delimiting the components of various program structures, like conditional expressions and loops, and so on.
For an automatic parser, the use of different kinds of brackets can only make its task easier, never harder, because in the worst case its lexical analysis could replace all kinds of brackets with a single kind, while for a human reader different kinds of brackets are always helpful.
Using many kinds of bracket pairs is not necessarily as verbose as in Algol 68, because Unicode offers enough bracket pairs to choose from for all purposes.
In LISP all parentheses look the same, but a LISP programmer who does not recognize that the things delimited in those parentheses are of different natures, e.g. "(CAR ..." vs. "(COND ..." vs. "(QUOTE ..." and so on, cannot write high quality LISP programs.
Many years ago, I wrote many rather big Scheme and LISP programs, because those languages have many advantages beyond the uniform parentheses. If I were to use a language of the LISP family again, I would use a text preprocessor to pass the source to the compiler, which would allow me to use various kinds of bracket pairs: for instance, Unicode mathematical angle brackets for COND, Unicode bag delimiters for loops, curly brackets for blocks (curly bracket is the Unicode term for braces), and so on.
> In LISP all parentheses look the same,
(car (first a))
(cond ((> a 10) 1)
      ((> b 20) 2)
      ((> c 30) 3))
The above looks different to me. As a Lisp programmer I don't look for parentheses, but for patterns with a starting symbol. Remember: list forms typically begin with a symbol.

> If different kinds of brackets had been used for every distinct purpose, then there would have been no need to count parentheses or be dependent on enforcing invariable indentation rules.
No Lisp developer counts parentheses. Indentation rules are used in many programming languages. Lisp applies these rules using tools, like many programming environments do.
I've often wondered if it would be worth a language having interchangeable brackets that you could use, similar to the four-colour problem, e.g. ([1 2] 3) is the same as {[1 2] 3} or ((1 2) 3) semantically, just visually easier to read.
Any takers?
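This is doable today in Common Lisp via its programmable reader; a minimal sketch, assuming the standard readtable (the snippet is illustrative, not from the thread):

;; Make [ ... ] read as an ordinary list, so ([1 2] 3) and ((1 2) 3)
;; produce the same data. ] gets the standard ) behavior so a stray
;; ] errors out like a stray ).
(set-macro-character #\] (get-macro-character #\)))
(set-macro-character #\[ (lambda (stream char)
                           (declare (ignore char))
                           (read-delimited-list #\] stream t)))

One caveat: this makes [ and ] a matching pair rather than fully interchangeable with ( and ), which is arguably what you want for the "4-colour" readability effect anyway.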
You could have three slightly different brackets, one for each level of nesting. And when you get to four that's it. More than three is illegal.
Edit: Or you could use markup to make the text size smaller and smaller.
This is more or less like the traditional use of parentheses, square brackets and curly brackets in English texts, where they had the same meaning but were used for different levels of nesting.
IIRC, I once saw an experimental language which worked exactly like your proposal, but it was long ago and I do not remember which it was.
One LISP system that I have used long ago had a different approach. In it, one closing square bracket could be used to close all non-closed parentheses preceding it.
> One LISP system that I have used long ago had a different approach. In it, one closing square bracket could be used to close all non-closed parentheses preceding it.
That was Interlisp, originally to make handling of punch cards easier. Nowadays it's not necessary, since text editors on terminals have supported handling nested expressions since the 70s/80s.
Rust macros can be used like that:
let hello = dbg!{ dbg![ dbg!("world") ] };
We have IDEs with different colours for different layers of brackets, and we have them rendering different characters (>= etc).
So it feels like this could be a reasonable plugin?
Racket works this way.
Yes, this is my argument against significant-whitespace languages like python, or even worse for simple nested data like yaml.
Actual delimiters explicitly structure your code, and then automatic formatting confirms it, kind of like double-entry book-keeping.
For me, just a couple of days with Lisp and parentheses made me wish every language had such simple explicit syntax. I really recoil at some languages' special syntax, e.g. Python list comprehensions. Maybe I just hate Python, lol.
Aren't there 11 parentheses in both of your code snippets? I must be missing something, sorry could you possibly explain where the missing parenthesis should be?
They're not saying that the second snippet is "fixed", they're saying that the auto-indented second snippet highlights the mistake to the reader. Yes there are 11 parentheses in both the code snippets because both are "incorrect" in that they're both missing a parenthesis. The GP is trying to point out that since Emacs automatically re-indents the first snippet so that it looks like the second snippet, the reader knows immediately that something "isn't quite right".
Ahhh, understood. Thank you!
It helps to basically vocalize what the code is doing. It is a "let" form, which has a list of variables, and then a body section using them. (It is a "let*", which also means that the variables can refer to variables that come before them, but not too relevant here.)
To that end, each entry in the first () of a "let" should be of the form (name value), where value can be a longer form, if needed.
To that end, the last (name value) in the list should almost always look like (name value)), as it is closing the first in the list, that is ((name value).
(let* ((foo 10)
       (bar (+ foo 32)))
  (* foo bar))
Or, to demonstrate the ugly "nailclip" style:

(let* ((foo 10)
       (bar (+ foo 32)))
  (* foo bar)
)
Note that I moved a closing paren off the third line and onto the second one, to close the variable bindings in the LET* form.

That explains it, thank you!
~ let*
~ ~ foo 10
~ ~ bar (+ foo 32)
~ * foo bar
> And this is, in fact, a made-up example but the thing is, I was a C programmer for the past 5 years, and I was working with low-level code (bare metal actually). And I have never seen the use of explicit scoping of variables. [...] I find this a major problem when dealing with C because I always see about 10 uninitialized variables at the start of each function, where half of those are used only once. Somewhere. And thank god, if there are no global variables.
Sure, if you write C like it's 1995, it's not going to be very readable. You don't need to define variables at the beginning of a scope, and you can use {} to scope variables to tighter blocks. And functions can be small.
(Sometimes you need to keep variables around for longer than you would like for cleanup reasons, that's true.)
The drawback of the microscopic function craze is you lose the ability to read a whole procedure from top to bottom. If some code is truly reused, break it out into a function with a good name, sure.
But in my domain I have seen countless one-line functions used in one place each, and it’s frustrating having to jump around the code (and add to my mental context stack*) rather than just putting some curly braces and a comment. Functions aren’t primarily meant as a hygiene mechanism. I really like the curly brace scoping in Rust.
I found my style partly by reading John Carmack and Casey Muratori’s coding philosophies and putting them to the test for myself.
* For a natural language analogy, consider the difference between parenthetical remarks and asterisks. You have to remember your place in the text.
Mostly a problem with people who borrow their concepts of cohesion from MVC, but yes people tend to fundamentally misunderstand SIP and it results in over modularity.
IMHO that is a side effect of how we teach people to program, mixed with orgs cargo-culting acronyms and not the core concepts behind the ideas.
For real though. This was my biggest frustration when I was an SE.
Want people in a large organization to ignore your ideas? Speak them out loud.
Want them to follow them as dogma? Write a blog post under a pseudonym.
You should be able to read a function from top to bottom and it should show you the whole procedure at a given level of abstraction. Hiding the details is the heart of what programming is.
I always try to write code like
fn foo() {
    let foo = do_first_thing();
    let bar = do_second_thing();
    do_third_thing(foo, bar);
}

whenever possible. Then you can see the abstract overview of what the code does at a glance, but you can drill down into the details if you need to.
It's a personal preference based on how we mentally model problems. I cannot stand long procedural code. The mental gymnastics required to know what it does is too much. More than 5-6 variables in scope and I have no idea if it does what you say it does. It would take me some time to figure that out. If it is under test we can at least say it's not wrong some of the time.
I much prefer breaking problems down to the level where I can reason in terms of sets, algebras, and that sort of thing. It's much easier for my brain to rely on properties and the ability to substitute terms in equations than it is to reconstruct a giant state machine and execute it in my head. Much more so when we add concurrency to the mix.
At the end of the day though I don't think there's, "one true way."
When I was a kid I would tell my parents I cleaned my room, then they’d look in the closet and see everything in a pile.
My kid does this. We wanted to find a book. It required us to search through the mountain hidden in the closet and was very annoying. If it was organized we would have known where to find the book.
Don't let perfect be the enemy of good
I still don’t fold clothes. I just buy stuff that doesn’t wrinkle. Work smarter not harder.
I dislike microscopic functions a lot too and yet I've never needed 10 uninitialized variables at the beginning of a function. I've written readable 100 lines long functions. I just assumed that the OP was dealing with like 5000 lines long files with statics all over the place.
If your functions are 100 lines long, readability may be in the eyes of the beholder. That's how big I'd expect most modules to be.
> you lose the ability to read a whole procedure from top to bottom
An ability that is only valuable if it really is a procedure, some sequence of operations, rather than a function, some forming of a value. I'd agree that procedures should generally not be split up into subprocedures; a procedure has to be read from top to bottom, so factoring out subprocedures only makes it harder to understand. But a function can be understood without understanding the smaller functions that it calls, so breaking functions up into smaller functions - and breaking small functions out of procedures, where possible - makes it easier to understand.
Having a language that makes an explicit distinction between procedures and functions - in particular, where calling a procedure looks different from calling a function - helps a lot (e.g. Haskell).
I define functions in let/letfn blocks as often as I do as separate functions, so they are local and in context. Makes it easy to read.
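The Common Lisp analogue, as a minimal sketch (LABELS plays roughly the role of Clojure's letfn; the function names here are made up for illustration):

(defun mean-of-squares (xs)
  ;; SQUARE is defined locally, right where it is used,
  ;; and is invisible outside this function.
  (labels ((square (x) (* x x)))
    (/ (reduce #'+ (mapcar #'square xs))
       (length xs))))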
This sounds like another option while maintaining agreement with the gist of what I meant. Those are good, though that’s where I start preferring smallness
So your solution to having too many () is to have just as many {}. I'm not complaining but...
Well, no, the above solution is to have a {} if and only if explicit scoping is desired instead of having just as many {} as there would be ().
Sometimes you need explicit scope for a bunch of things and the code would benefit from it. That doesn't justify having explicit scopes for everything all the time.
Isn't it always safer to have variables scoped only to the relevant part of the code that is supposed to make use of them? So basically it would result in almost the same number of parens.
This isn’t always possible, even in lisp. Compare to Rust’s non-lexical lifetimes
No it isn't. In C you often have to keep pointers around for longer than you need to so that you can free() them at the end of a block.
But still, the original post said that they have to deal with this:
> because I always see about 10 uninitialized variables at the start of each function, where half of those are used only once
That's bad!
I don't see how it could ever be not possible. Maybe not always convenient, but impossible? How?
a = getInt();
b = getInt() + a;
c = getInt();
d = b / c;
puts(d);
puts(c);
puts(b);
puts(a);
Nothing in the middle of this code prevents using ‘a’ even though it isn’t relevant to ‘c’. And there’s no way to arrange it to change that.
That's only a problem if your language doesn't allow separating computation from executing effects. With monads it's pretty straightforward:
def doD(b: Int, c: Int) = puts(b / c)
def doCD(b: Int) = getInt >>= {c => doD(b, c) >> puts(c)}
def doBCD(a: Int) = getInt >>= {tb => val b = tb+a; doCD(b) >> puts(b)}
getInt >>= {a => doBCD(a) >> puts(a)}
If I wanted to be deeply pedantic, I think the blocks associated with those monady bits keep the parameters in scope longer than needed.
But I never got into Haskell so I don’t really know.
> If I wanted to deeply pedantic, I think the blocks associated with those monady bits keep the parameters in scope longer than needed.
I don't think so? If you write it as a "flat" block using do notation then anything from earlier in the block is accessible later in the block. But local functions are real functions with lexical scope, they don't get hoisted up to the containing scope or anything. (And FWIW that's Scala-style syntax, although the ideas are very Haskell-like)
Dope. Thanks.
Have you written much Prolog?
I was thinking about in Prolog terms and I think it’s the hardest paradigm to avoid the scoping problem in
I've never done any Prolog.
If `c` does not depend on `a`, then the re-arrangement seems simple: Define `c` later.
> If `c` does not depend on `a`, then the re-arrangement seems simple: Define `c` later.
How does that prevent 'a' from being in the scope where 'c' is defined? It is not clear to me what you mean by defining 'c' later. Later where? And how do we ensure that 'a' is not accessible there?
I think lex-lightning's comment has a point. We need to define 'a' and we need to define 'c' and then we need to print 'c' before printing 'a', so although 'c' does not depend on 'a', we still need to keep 'a' in scope because we cannot print it before we print 'c'.
Perhaps a convoluted solution to the problem posed in lex-lightning's comment would be something like this:
#include <stdio.h>

int getint() {
    return 42;
}

int main()
{
    int a = getint();
    int b = getint() + a;
    int c;
    {
        // Shadow 'a' here to make the outer 'a' inaccessible in this scope.
        int *a = NULL;
        (void) a;
        c = getint();
    }
    int d = b / c;
    printf("%d\n", d);
    printf("%d\n", c);
    printf("%d\n", b);
    printf("%d\n", a);
    return 0;
}
Output:

$ cc -std=c99 -Wall -Wextra -pedantic foo.c && ./a.out
2
42
84
42
I see now what the problem is. Thanks. I have a suspicion that this depends on side effects, and that without assignments and without side effects things might look different. Somehow I don't seem to come across these scenarios when I write FP programs.
Hey I’m down to party with some monads, but I leave that as an exercise to the reader.
One note: each variable only gets assigned to once, so FP won’t magic that away.
The side effects might have a different way of management in FP. But the IO will happen in the same order in reality anyway.
So I don’t think there’s a way out, but I honestly would be interested to see it.
I will definitely keep an eye open now, to see whether I hit such a situation in the future.
I appreciate your interest.
I’d argue that even declaring ‘c’ is against the spirit though. To say nothing of the eldritch shadow cast there (no disrespect, I think we’re both enjoying playing around here)
> I’d argue that even declaring ‘c’ is against the spirit though.
Yes, I agree. My code example is meant to show the best we can possibly do to solve the problem in your comment and how contrived that solution is. The fact that this solution is contrived and diverges from the intended spirit of the problem only emphasizes the point of your comment.
That's why I am eager to understand what zelphirkalt really meant when they wrote, "the re-arrangement seems simple: define `c` later". It seems far from simple and, in fact, impossible if we want to avoid contrived solutions.
I’m a contrived idea man, not a contrived solution man, damnit.
This comes off more as a series of rationalizations than a compelling argument. Lisp is easy for a computer to parse, not great for humans. There other languages with comparably rich metaprogramming that syntactically have a better tradeoff between computer and human legibility.
Yeah, it reads like "parentheses are good, here is a 50-page essay on why" - sounds like the author is grasping at straws.
Speak for yourself, I find the lisp syntax to be among the easiest to read.
I am not a lisp hater. There are advantages to its syntax relative to other languages. I am just saying that in the space of programming languages, there exist other languages capable of solving the same problems as lisp in essentially the same way using fewer characters, simply by removing some of the excessive parens and adding syntactic support for the most common operations (assignment, scoping, calling functions, data structure instantiation). Even if the total character count is a wash, not having to chase down dangling missed closed brackets or rely on an external tool to highlight 15 levels of parens and/or auto insert 10 ugly parens at the end of a nested code block is a nice QOL improvement in my opinion.
Again, none of this is saying that lisp is bad. I just think we can have nicer things and preserve the important semantic insights that lisp gives us.
> not having to chase down dangling missed closed brackets or rely on an external tool to highlight 15 levels of parens and/or auto insert 10 ugly parens at the end of a nested code block is a nice QOL improvement in my opinion.
Development environments with syntax support are common. These are not "external", they are nowadays "integrated". See the features of typical Java IDEs. Java uses tools to edit code, so does Lisp.
> Lisp is easy for a computer to parse, not great for humans.
Which is why "best of both worlds" approaches work best in my opinion, no matter if it's "just" automatic indentation and paren-counting that leaves actual editing to humans, or more intrusive approaches like Parinfer (mentioned in the article) that automatically infer parenthesisation from code indentation and edit the code as appropriate.
Properly indented Lisp is easy to read (for me), and badly indented code can be misread even in languages like C++.
If I wanted to I could get used to reading Lisp. I could also get used to reading Klingon. But I never had to get used to reading Python.
Nah, parentheses are easy to parse with the eye if you spend even a small amount of time working with them. You still have to indent reasonably, like you would with every other syntax.
We actually need to talk about lists. The single reason why Lisp uses parentheses is the list notation -> programs are actually lists, not just text.
The same code which runs
CL-USER 10 > (let* ((foo 10)
                    (bar (+ foo 32)))
               (* foo bar))
420
becomes data, when quoted:

CL-USER 11 > (quote
              (let* ((foo 10)
                     (bar (+ foo 32)))
                (* foo bar)))
(LET* ((FOO 10) (BAR (+ FOO 32))) (* FOO BAR))
A typical Lisp function can then substitute all * symbols with + symbols:

CL-USER 12 > (subst '+ '* '(LET* ((FOO 10) (BAR (+ FOO 32))) (* FOO BAR)))
(LET* ((FOO 10) (BAR (+ FOO 32))) (+ FOO BAR))
EVAL can execute lists:

CL-USER 13 > (eval (subst '+ '* '(LET* ((FOO 10) (BAR (+ FOO 32))) (* FOO BAR))))
52
Thus the single main feature is that Lisp itself is an application of the List Processor. This feature is then user-visible in read-eval-print loops, debuggers, inspectors, code formatters, source interpreters, code generators (-> macros), embedded languages, ... So Lisp developers loved it, because one does not need translators between user-facing (infix) syntax and internal data structures (ASTs, ...). Thus early attempts at other syntax variants failed, because this duality between data and code has some kind of elegance. Something which for example impressed Alan Kay (of OOP / Smalltalk / Dynabook / ... fame), when he understood that something like a Lisp evaluator can be written in a few lines of Lisp code, processing lists, which is code. -> https://www.quora.com/What-did-Alan-Kay-mean-by-Lisp-is-the-...

I had fun trying to explain this a few years back. https://taeric.github.io/CodeAsData.html
I do have to confess I have rarely taken huge advantage of this fact. But the idea is pretty easy to help look at code.
It’s a shallow understanding of the code if you don’t know the variable definitions and types. Good enough for some things, though.
Correct, that's where for example an interpreter walks the code and builds up the understanding. Similarly a compiler, a code walker, or other tools.
The shallow view is what allows Lisp to provide new/different views of code. That's good and bad. Many transformations of code don't need a deep understanding; many do. In Lisp I can write (infix 3 + 1 * sin (x)), because an infix macro can provide its own interpretation by transforming the code. But then I need to parse the code and find out what the things are: variables, literals, operators, calls, ...
For example when I want to substitute FOO with a call:
CL-USER 34 > (LET* ((FOO 10) (BAR (+ FOO 32)))
               (symbol-macrolet ((foo (print 2)))
                 (* foo bar foo bar foo              ; 1-3
                    (flet ((foo (a) (* foo a)))      ; 5
                      (foo (* foo bar))))))          ; 4
2 ; 1
2 ; 2
2 ; 3
2 ; 4
2 ; 5
2370816
Here the symbol-macrolet knows which FOO to expand, and which not. There are five expansions in the code, which makes 2 printed five times. You can see that it works in nested code, too: it knows that some FOO are function names and those will not be expanded.

Whereas a MACROLET knows where an operator needs to be expanded:
CL-USER 38 > (LET* ((FOO 10) (BAR (+ FOO 32)))
               (macrolet ((foo (a) `(print ,a)))
                 (* foo bar foo bar (foo bar)      ; 1
                    (flet ((bar (a) (foo a)))      ; 2
                      (bar (* foo bar))))))
42 ; 1
420 ; 2
3111696000
List is confusing terminology. It's a tree. Sure, any node of a Lisp tree is implemented as a list of branches, but fundamentally we are talking about trees, still.
Foremost, it's just nested lists and not a tree. We are talking about lists and atoms. We are then talking about list operations, like getting list elements, appending lists, reversing lists, listing things, nesting lists and atoms, mapping lists, matching lists, destructuring of lists, ...
This is a list:
(munich frankfurt dresden)
This is a list of lists:

((münchen :state Bavaria)
 (frankfurt :state hessen)
 (dresden :state sachsen))
If we look at the level of conses and atoms, then we see a binary tree and operations: car, cdr, cons, ... Lisp is there a low-level language: lists are made of simpler building blocks, two-item nodes and atoms.

This is a tree:
((a . (10 . nil))
 .
 ((b . (20 . nil))
  .
  nil))
The dot is an infix syntax element for a cons cell.

Which as a nested list is more concisely written as
((a 10) (b 20))
Would you like to call them "TREEPs"?
Programming languages are user interfaces. The syntax of a programming language is part of that user interface. Some choices are poor by virtually any measure (stropped keywords come to mind; as visually hideous as they are unpleasant to type), but most choices represent tradeoffs that favor certain usage patterns over others.
Lisp de-prioritizes legibility at a glance and concision in order to give maximum priority to simplicity in metaprogramming. I'm not convinced this is a good tradeoff; I would no sooner choose to drive to work in an amphibious tank that got 0.2 miles per gallon to be able to cut across a single river along the way.
Lispers say that I could use specialized tooling to compensate for poor ergonomics, but in many contexts- someone else's computer, a text box or static text on a website, a general-purpose textual diff or search tool, print in a book, a whiteboard- these affordances are not available, and I would still have to contend with the inconvenient reality of Lisp's very real design choices.
> Lisp de-prioritizes legibility at a glance and concision in order to give maximum priority to simplicity in metaprogramming.
I'd need to see a source for this claim to believe that S-exprs are primarily for the benefit of metaprogramming. They have a lot of ancillary benefits in terms of uniformity, simplicity, and scope. Plus, learning paredit gives a lot of power at the keyboard, and you can't build an editing setup without something like S-exprs (or angle bracket equivalents like XML/HTML).
> Lispers say that I could use specialized tooling to compensate for poor ergonomics
Well, in input- and tooling constrained environments I have preferred Lisp for its syntactic simplicity. I wouldn't call it poor ergonomics.
I find coding in Lisp to be very comfortable. If I ever decided to pull the trigger and swap [] for (), it would remove a large portion of the "chording" that happens with coding. And I by "chording" I refer to the shift key.
I love kebab-case-its-so-easy-to-type. May as well be spaces. You can pretty much do everything with 26 lower case characters, the -, space bar, and (). What's that, 30 characters? I have 46 unshifted keys I can use, plus the spacebar. So that's 16 more keys for the more common symbols. Just gotta remap the keyboard a bit.
Recall early Lisp actually used things like (plus ...) and (minus ...).
Not have to chord is a real benefit, to me, when it comes to typing. Lisp just flows that way. For me my-class I find a lot easier than either MyClass, or my_class.
> Programming languages are user interfaces.
Lisp is optimized for interactive programming user interfaces. There is a lot of tooling around that.
> stropped keywords come to mind; as visually hideous as they are unpleasant to type
It's pretty common today for people to voluntarily put SQL keywords in all caps.
> Lisp de-prioritizes [...] concision
Really?
By the standards of a K programmer, Lisp is quite verbose and excessively nested.
> Really?
Yeah, Lisp can be verbose due to the lack of built-in syntax for things other than lists.
Take the Common Lisp syntax/library functions (there's not really built-in syntax) for hash tables:
https://cl-cookbook.sourceforge.net/hashes.html
Compare it with Python dictionary syntax:
https://developers.google.com/edu/python/dict-files
Almost everything that's built-in to other languages has to be given a name that makes sense in Lisp. It makes it more readable in some ways, but also more verbose.
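For concreteness, a small sketch of the standard route, using only standard Common Lisp operators:

;; Everything is spelled out with named operators:
(let ((counts (make-hash-table :test #'equal)))
  (setf (gethash "the" counts) 50
        (gethash "be" counts) 25)
  (gethash "the" counts))   ; => 50, T

The Python equivalent is a one-line literal: {"the": 50, "be": 25}.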
Please use the newer Cookbook! https://lispcookbook.github.io/cl-cookbook/data-structures.h...
For a short hash-table notation, I now use Serapeum's dict:
(dict :a 1 :b 2)
and that's it, you now have a hash-table (with a representation that can also be read back in by the lisp reader).

Thank you, that's good to know. Apologies for linking an old version of the page; I would edit my comment, but it's too late for me to do so now.
alists and plists are a much closer lisp analog to python's dicts though
That's fair. Dictionary literals aren't really any more concise than a-lists.
A-list:

(("the" . 50) ("be" . 25))

Dictionary literal:

{"the": 50, "be": 25}
Using both a-lists and p-lists as mere structural conventions - and thus needing to convert back and forth or write multiple versions of routines to handle each - is a good example of how sexprs provide a superficial veneer of simplicity while inviting cancerous complexity in practice.
The Clojure approach of recognizing associative structures as a fundamental building-block and giving them specialized syntax:
{:the 50 :be 25}

is a clear-cut improvement.

At the risk of waking Naggum's ghost, plists are performant enough to use everywhere, and an alist can be converted to a plist merely by flattening it.
I do covet the way clojure's associations are callable as a lookup method, though.
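That flattening really is a one-liner; a minimal sketch (the name ALIST->PLIST is made up for illustration):

(defun alist->plist (alist)
  ;; Each (key . value) cons becomes two consecutive list elements.
  (loop for (key . value) in alist
        collect key
        collect value))

;; (alist->plist '(("the" . 50) ("be" . 25))) => ("the" 50 "be" 25)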
The weird thing about parentheses is that lispers only really get how powerful and how simple it makes programming after prolonged exposure, and thats why every now and then we have articles that try to explain this magical moment to folks.
The strengths of Lisp's syntax basically boil down to a few things:
1. Scope is visually clearer than in non-Lisps; you don't need to hold the parser in your head while reading code.
2. Code as lists allows consistent and simple macros (see the sketch after this list).
3. No need to reinvent editor tooling.
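For point 2, a minimal sketch of what "code as lists" buys you; MY-UNLESS is a made-up example macro, not anything from the thread:

(defmacro my-unless (test &body body)
  ;; TEST and BODY arrive as plain lists, and backquote builds
  ;; the new list that replaces the original form.
  `(if (not ,test)
       (progn ,@body)))

;; (my-unless (> x 10) (print x))
;; expands to (IF (NOT (> X 10)) (PROGN (PRINT X)))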
This is an interesting idea!
Maybe all programming languages have their sweet spots that learners find "magic" and which then generate blog articles.
> This code does exactly the same thing as Rust code
Except it doesn't - because after the final parenthesis of the Lisp snippet, the let-binding is finished, i.e., the introduced variables are no longer in scope. That is not the case after the end of the last line in the Rust example.
The whole point of parentheses is to mark where something starts and where it ends. Here, it is where the scope of the let-defined variables starts and ends. Without some sort of "parenthesizing", there might be a start, but it's much harder to mark where something ends.
I think it's more subtle than that. Rust has a notion of non-lexical lifetimes (https://rust-lang.github.io/rfcs/2094-nll.html) and the compiler often completely avoids the use of the stack for small, trivially droppable values, regardless of their lexical scope. In some ways, `let` in Rust is a more C-like variant of the `let ... in ... end` construct from OCaml.
Not to mention that you can always introduce a new lexical scope with `{ ... }` in Rust code.
I believe there are Common Lisp implementations that will happily stack allocate dynamic extent values.
I loved this article, this is exactly what programmers in my community complain most about with regard to lisp.
The greatest strengths for me are the scoping and how lisp enables interactive development. For one, I don't even see parens anymore when writing code. They've become automatic where I'm able to parse structure at least better than I have in the past. With a REPL, (I specifically use Clojure), development has become fun again. A typical debugging process is attaching to my actual program, and calling functions seeing what happens.
I love the flow of defining functions in my code, `def`-ing some test data, and calling those functions. I'll use `cider-inspect` to see if the data structure looks correct and iterate.
Everyone should at least try a lisp. Iterative, interactive development is so fun and powerful.
Pssst- hey kid! I heard you came to this side of town looking for a lisp with no parentheses.
If you're not a narc,
I've got the goods.
Try a little bit of this julia!
end
Ok, ok, I know your parents warned you about indexing from 1, but they also thought that global variables should be replaced with singleton factories, so what do they know.

Monster! You'll have to pry my 0-indexing from my cold dead hands.
Maybe the outcome of this syntax experiment is that less is not always more. Maybe in general people prefer more syntactical notations than what Lisps offer out of the box.
Also, many people who have actually used Lisps prefer the notation for its ease of editing code.
> Here’s the same code, but with all parentheses in place:
(define (fac x)
  (if (<= x 0) 1
      (* x
         fac (- x 1))))
Still one pair missing.
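For reference, the fully parenthesized version presumably intended is:

(define (fac x)
  (if (<= x 0) 1
      (* x
         (fac (- x 1)))))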
Is it true that in normally formatted LISP code, every line should start with an opening parenthesis?

Btw, I prefer Haskell's syntax of
fac x = if x <= 0 then 1 else x * fac (x-1)
Nope, lisp doesn't care about whitespace at all, including line endings. The way lisp is read in is as if there was no whitespace, just objects in lists.
(foo bar baz)

(foo
 bar baz)

(foo
 bar
 baz)

All the same thing to Lisp.

That's not quite right. Lisps care a lot about the presence of whitespace, just not about its type or number. (foo bar baz) is very very different from (foobarbaz). They are identical in this respect to the C family of languages, in fact (the only difference is that they allow newlines in quoted strings, which C languages normally do not).
The term of art is whitespace insensitive, meaning that whitespace is only necessary when needed to separate tokens, and any amount will do when it's allowed. While it may be possible to write a language in which whitespace may be inserted or left out literally anywhere, I'm unaware of any actual languages where this is true.
In Fortran the following two lines are the same:
DO 10 I=1.10
DO10I = 1.10
Fortran comes closer than any other language, true.
But there's a mandatory six columns of whitespace at the start of every line, so not quite! But the fact that PRO GRAM and such constructs are legal is... special.
You are 34 years out of date.
I feel like the name "Fortran 90" was a missed opportunity, since it got rid of all the old punchcard stuff that people associate with "Fortran".
I've got this theory that what people actually don't like about Lisp parentheses is not the amount of nesting, nor the opening paren's prefix location, but rather that parentheses always include a hard nesting apply (or really, the syntax-layer equivalent of a hard nesting list). In other words - most programmers are used to having an assertion that ((foo)), (foo), and foo are all equivalent, and also assume the basic top-level production is already some kind of list (not a single expression). Lisp breaks these long-loved assumptions.
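A tiny Common Lisp illustration of that difference (FOO is just a placeholder):

(defun foo () 42)

(foo)    ; => 42: one pair of parens means exactly one call
;; ((foo)) is not "the same with extra parens": in Common Lisp it is
;; not even a valid form, since the operator position must hold a
;; symbol or a lambda expression. Parentheses are never inert.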
Writing Haskell feels quite Lispy in that the basic syntax is quite generic, but you can nest parentheses all day long without changing the meaning, or even use some generic operators ($, .) to write out a structure in a clearer way. I'd love to see an attempt at a Lisp syntax that felt similarly. (And yes, I know there have been 2^37 attempts at different Lisp syntaxes, but I still haven't found one in line with this idea)
> It depends… on operator precedence
I think implementing operator precedence in ANY programming language is a mistake. `1 + 1 * 2` should be a syntax error. In C, in Python, in Java.
For basically forever compilers and languages have been designed with a total ordering on operator precedences. The Yacc compiler generator tool maps operator precedence to integers, as do most handwritten compiler parsers.
A total ordering has a definite answer to the question "is x greater than, equal to, or less than y?" for all x and y. A partial ordering will answer "I don't know" for some x and y.
It's quite possible to implement operator precedence using a partial ordering. When the parser has to resolve precedence between two operators and their relationship is not defined, throw an "ambiguous precedence" error.
You can implement the partial ordering by putting all the operators into a DAG of sets of operators. If two operators are in the same set, they have equal precedence. If there is a path through the DAG from one operator to the other, that defines the precedence relation. Else there is no relation.
Say "*" is defined to have a higher precedence than "+", and they both have an undefined precedence relation with "&". Then "1 + 2 * 3" should compile into "1 + (2*3)", but "1 + 2 & 3" should throw an "ambiguous precedence" syntax error.
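A minimal sketch of that scheme in Common Lisp (the names and the edge list are made up for illustration):

;; An edge (A B) means: A binds tighter than B. Note that & is
;; deliberately unrelated to + and *.
(defparameter *precedence-edges* '((* +) (/ +)))

(defun tighter-p (a b)
  "True if a path A -> B exists in the precedence DAG."
  (some (lambda (edge)
          (and (eql (first edge) a)
               (or (eql (second edge) b)
                   (tighter-p (second edge) b))))
        *precedence-edges*))

(defun resolve-precedence (a b)
  (cond ((tighter-p a b) :first-binds-tighter)
        ((tighter-p b a) :second-binds-tighter)
        (t (error "Ambiguous precedence between ~S and ~S; parenthesize." a b))))

;; (resolve-precedence '* '+) => :FIRST-BINDS-TIGHTER
;; (resolve-precedence '& '+) => error: ambiguous precedence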
Some operator precedences are way more reasonable than others, and those between arithmetic operators are good examples. What we actually need is a partial order of operator precedences: you can have `1 * 2 + 3` or `full_name ?? first_name ++ " " ++ last_name` but not necessarily `1 * 2 & 3` because that combination of operators is uncommon and a parenthesis should be preferred instead of an arbitrary total order.
There is value in notational simplicity.
For the same reason that math has defined precedence (and even omits multiplication signs entirely in many places), programming languages in some domains will choose to do the same.
It’s not a problem if you want to create a language that does not do this, but I think it is a problem if you want to ban others from doing it.
Algebra only really has defined precedence between multiplication and addition/subtraction. In all other cases, even division, there is either a non-linear textual representation, or parentheses are required. You have fractions, exponents, radical signs, various signs which appear only at the beginning of an expression (sigma, integral, lim), etc.
Even multiplication is often distinguished more by the shape of the text than by actual precedence rules, once you leave basic arithmetic - expressions like "1 - 3xy + 2y²" make the order of operations obvious from text layout, which is why they are preferred over "1 - 3 × x × y + 2 × y ^ 2", which no mathematician ever writes.
Exponentiation also has defined precedence, does it not? (Everyone who has passed algebra agrees that y is squared before being multiplied by 2 in your example.)
You can view that as precedence, but I think it's more correct to say that exponentiation binds to the single symbol it is attached to. When you write x² + 2, I don't think it's operator precedence that makes it clear this is not equal to x ^ 4.
Similarly, in the following expression:
x
_ + 2
2
I don't think it's precedence that makes it clear you first divide by 2 and then add 2 to the result.

Alternatively, perhaps what you're observing is simply the graphical representation of the underlying mathematical precedence rules. (Obviously, over hundreds of years of symbolic math, the notation and precedence have co-evolved, with precedence probably generally, well, preceding notation.)
y = x / 3 + 2
y = 2 + x / 3
y = x/3 + 2
We all agree those are the same, despite a lack of any cues in the first two. That suggests to me that there is a defined precedence for division vs addition in math. (So much so that it doesn't even seem like a controversial statement to make.)

Precedence rules always apply to a specific textual notation. There is no precedence between division and addition; there is a precedence between the / and the + operators.
People are generally bad at remembering arbitrary operator precedence rules. That is why most math notation doesn't rely on precedence rules to be unambiguous; it uses other kinds of textual clues. For example, the / operator is virtually never used in algebra. Instead, division is indicated by fractions, which explicitly group their operands and separate them from other adjacent operations. Exponentiation relies on superscripts to separate its arguments, making it extremely unambiguous at least which operations are part of the exponent and which are not. Roots use the radical symbol, which extends over the entire expression you're taking the root of, and again relies on a kind of superscript for indicating which root you are taking. Limits and sums use subscripts to separate the iteration direction from the expression. Integrals use both sub- and superscripts.
> Exponentiation relies on super scripts to separate its arguments, making it extremely unambiguous at least which operations are part of the exponent and which the total.
The textual representation even for exponents is not enough if you don't already know that exponentiation has higher precedence than multiplication.
"1 - 3xy + 2xy²"
Both of us know that only the y term is squared in that expression. How do we know that?
I claim that we know that because we know the precedence rules and that someone never exposed to our system of algebra would not inherently and unambiguously know that it was 2·x·(y²) rather than (2·x·y)² or even 2·(x·y)² merely by visual inspection of the expression.
This (the meaning of the above expression, including the precedence/grouping) is something that has to be taught in Algebra/Prealgebra courses, and is something that some students struggle with.
For exponentiation I was talking about something else:

x^(1+3) + 4 (with the 1+3 set as a superscript)

In this case, the exponent notation makes it unambiguous that this is (x^(1+3))+4, and not x^(1+3+4).

I agree that xy² could mean both (xy)² and x(y²), and that perhaps you need to understand precedence to disambiguate. I would still contend that the notation is defined as "each term has its own exponent, possibly 1 which need not be written" to disambiguate, where terms are separated by parentheses or by other operators.
That is, in my mind, the grammar for usual algebraic notation is something like this:
expression = term (op term)*
term = (num | var | fraction | \( expression \) | √expression) (^ expression)?
op = + | - | / | × | ° etc

And the rules are that you first evaluate each term, and then the operators between the terms. You only need to follow precedence rules for those operators between terms; the evaluation of a term is unambiguous.

You appear to be inventing elaborate ways to talk about precedence without saying "precedence".
Precedence, under the term "order of operations", predates computers by several centuries. Which is not to imply that it was fixed in stone from the beginning, or perfectly consistent, but mathematics agreed on the modern rules universally in the first couple decades of the 20th century.
> Alternatively, perhaps what you're observing is simply the graphical representation of the underlying mathematical precedence rules.
This is nonsense. There are no underlying mathematical precedence rules. You're seeing precedence being represented visually, but there is no mathematical difference between 5 3 ADD 2 MUL and 5 3 2 MUL ADD.
MUL and ADD are just binary operators, or if you prefer, functions from U × U to U for some set U.
Your statement is particularly nonsensical as applied to the example you're supposedly responding to. Given some expressions:
5 + 3        5          5
-----  ;     -  + 3 ;  -----
  2          2         2 + 3

How do you even form the argument that the difference in interpretation is "simply the graphical representation of the underlying precedence rules"? The precedence rules here are obvious, because they don't exist:

- Division occurs between the upper operand and the lower operand
- Addition occurs between the left operand and the right operand
It's impossible for these to conflict, so there is no such concept as precedence.
(Precedence also doesn't exist between addition and addition because addition is associative. It is necessary between division and division, and the system is straightforward: lower precedence is indicated by longer lines. Tell me about the underlying mathematical precedence rule that determines that.)
> There are no underlying mathematical precedence rules.
You might want to correct the information here then:
https://en.wikipedia.org/wiki/Order_of_operations#Convention...
What do you think I'd want to correct? That page says the same thing you've already been told twice: there is no underlying mathematics, and the meaning of the notation is defined by the notation. Here's a quote:
> These rules are meaningful only when the usual notation (called infix notation) is used. When functional or Polish notation are used for all operations, the order of operations results from the notation itself.
You could "correct" that to observe that "the order of operations results from the notation itself" is true of all notation, including infix notation, but why? It's not wrong now.
Can you really not tell the difference between "notation" and "math"?
"Order of operations" is the order in which operations proceed. Hence precedence.
They aren't a fact about mathematics. They're a fact about notation. Operator precedence and order of operations are synonymous.
Yes, but programming is more akin to copywriting than mathematical notation most of the time.
Math has very well-defined, well-trod conventions that are densely packed into the notation. When people try to write code like this though, it’s hard to read.
All else equal, I’ll take a snippet of code with more characters in it but less work to unpack those characters. Reading linearly is cheap; I get tired reading formulas faster than reading technical prose. Precedence makes the mental parsing nonlinear.
The REAL problem comes from math and physics teachers (even at the university level) writing things like:
(a + b) / 2π
instead of
(a + b) / (2π)
It seems that math teachers put division above multiplication in order of operations but programming languages almost universally treat division and multiplication as equal and read them left-to-right. I've fixed multiple bugs before that were due to this.
The lack of operator precedence has no relationship with the mandatory use of parentheses.
For example the APL syntax for expressions, which I prefer, is that there is no operator precedence.
The operands are associated with operators strictly from right to left, regardless of which operators or functions are used, which is a much wiser choice than the reverse traditional order of evaluation.
No parentheses are used, except when they are needed to modify the default evaluation order.
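A worked example under standard APL rules: 2×3+4 evaluates to 14, i.e. 2×(3+4), not 10, because operands bind to operators strictly right to left; to get (2×3)+4 you have to write the parentheses.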
I vigorously disagree.
For that example, I agree, but how about
a | b || c >> d & e * f
? I think it's reasonable to say that a language should not allow that.

How about *paste Linux source code here*? Should the language C allow it? It's long and complicated; you don't even have enough time to read it, let alone comprehend it.
That expression would not pass code review around me, and a linter should flag it for sure.
But all the precedence rules you're invoking here are useful in isolation, and I see no reason why they shouldn't parse in combination. A linter can use heuristics to flag cases which should be parenthesized, a parser would need a defensible rule for failing a parse. What would that rule be?
You can take any reasonable grammar and come up with unreasonably ugly/illegible code.
Yeah, of course, but grammar can be designed as to not allow as much implicit magic.
If you can do it, someone will do it (in prod, in your product).
The reason why this expression shouldn't compile has nothing to do with precedences (C has some odd choices here to be sure), but because of mixing incompatible types. Arithmetic integers, bitvector integers, Booleans, and possibly even floats. Conversions to appropriate types should have to be explicit. But C's type system is weak as all hell and this shit actually compiles there. (I'm only mentioning C because your example looks like it.)
There's no implicit type conversion here. As long as all the vars are the same type it works (albeit unpredictably if you don't know operator precedence for the lang).
That would get really annoying.
Not in expressions like 1 + 3 * 6. Personally, I include the parentheses anyway, as a matter of style, and would write 1 + (3 * 6).
But in expressions like i + 3 > 4 / n? No thank you, (i + 3) > (4 / n) would drive me up a wall. Ymmv.
Should `x == 1 or y == 3` be a syntax error? Because that’s operator precedence too.
that's certainly a plausible position; in lisp (or = x 1 = y 3) is generally an error; even if it's not an error it doesn't mean what you intend it to. having to write (x == 1) || (y == 3), analogous to the code you'd write in lisp, would be only a minor burden
in c, x == 1 || y == 3 works fine, but unfortunately the bitwise operators have the wrong precedence; x & 2 == 2 is parsed as x & (2 == 2), which is never what you meant. so even the best software designers have made terrible blunders in precedence design
math formulas are a bigger issue; 3*x**2 + x/2 + 1 is a lot better than the fully parenthesized version
There are of course operations with unclear precedence relationships and it’s completely reasonable to want to force parentheses when composing those operations.
However, there are also operations with crystal clear precedence relationships, such as logical conjunction and comparison, and in that case it has to be asked what exactly would the parentheses be there for?
In lisp the answer is: to enable syntactic homogeneity and to enable macros.
But in other languages like Python that don’t have sexpr based syntax, parenthesizing everything would just be pointlessly unreadable.
it isn't crystal clear to everyone, smart guy, it's only clear if you're familiar with it
yea it should
That is type inference actually.
preach king
https://hyperscript.org doesn’t allow mixing different infix operators w/o parentheses
I like Lisp syntax because the following works:
(if (<= lower-bound position upper-bound)
    ...)

And the following doesn't:

if (lower_bound <= position <= upper_bound)
    ...
Also
(+)
and

(*)
are well defined.
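Both behaviors are standard Common Lisp: with zero arguments, + and * return the identity elements of their operations, and the comparisons accept any number of arguments:

(+)          ; => 0
(*)          ; => 1
(<= 1 2 3)   ; => T, the chained bounds check from above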
"Unifying Textual and Visual: A Theoretical Account of the Visual Perception of Programming Languages" https://dl.acm.org/doi/abs/10.1145/2661136.2661138 direct pdf link: https://dl.acm.org/doi/pdf/10.1145/2661136.2661138
The paper offers a theoretical account based on the semiology of graphics from Bertin, using Lisp as an example (with very similar examples to Andrey's), but also other languages e.g. Befunge ;-)
Mostly this article is about people being exposed to M-expressions in elementary math.
As someone whose first programming post-BASIC was on a Symbolics machine, coding _conventions_ worked just fine with s-expressions.
The Polish notation is probably what most people are reacting to unless you used the reverse version with an HP calculator.
Whitespace isn't significant in C like the author suggests, it is coding convention.
COBOL is a counterexample for whitespace significance not helping with readability.
But for lisp the parentheses were a non-issue with just a few emacs macros.
But that is the same with most modern languages outside of novelties like bf
Coming from Java, Scala, and Javascript, one nice benefit of Go was no parentheses around if conditions. At first I thought it an irrelevant change, but now I do think it makes code more readable.
Probably one of the best things about the new "pythonized" Scala 3 syntax is that it doesn't require parentheses around if conditions.
This is why every sane person uses an RPN calculator. :-0
Lisp code is == to its AST; the programmer is literally writing the AST that runs, hence metaprogramming is really easy.
The trade-off ends up being that metaprogramming becomes harder while syntax gets changed
shrug, do it long enough and the parens disappear:
Q: How can you tell when you've reached Lisp Enlightenment?
A: The parentheses disappear.
~ Anonymous
In a simple DSL I am working on, based on s-expressions, I enable things like removing parentheses in single-line expressions. For example, (define a "hello world!") is just define a "hello world!" <EOL>
Another example, which works more like RPN/FORTH (or close to the ; operator in Smalltalk), is:
(* 4 (+ 1 2 3))
equals:
+ 1 2 3 . <EOL>
* 4 <EOL>
The DSL is not LISP-based but uses s-exprs to define a DAG, like in a spreadsheet.
Do we though? Do we really need to talk about parentheses?
I've learned a lot from LISP and its variants over the years, and I understand the value of LISP syntax for meta-programming, but if any of this was truly compelling we'd have all been doing it for years.
Every language has its own form of Stockholm Syndrome, I guess.
One can think of s-expr parentheses as structural formatting commands - sort of like markdown. Then you can render an s-expr in a different way so long as the structure is maintained, for example choosing a different background color at each level or for different types of s-expr.
Except for the structural editing enabled by the use of parentheses, all the other features can be achieved without them in e.g. Haskell:
a =
  let foo = 10
      bar = foo + 32
  in foo * bar

b =
  1
    & (+ 2)
    & (* 9)
    & (/ 5)
Very long article to find a justification for a Stockholm syndrome.
And if they had just used an RPN CAS like Erable... ;]
10 [ENTER] foo [STO]
« foo 32 + » bar [STO]
foo bar *
Parens are powerful in lisp(s) because it means the syntax is its own AST and can be manipulated by itself (macros). It's one of those exploding head moments that you (including me) can get totally ensorceled by. But--as with all language features--I think pragmatism demands that we ask "what incredible software has this produced", and I can't think of any standouts.
There are various problems with 'what incredible software has this produced?'.
Various pieces of incredible software date back to before the Lisp machine went the way of the dodo. An example of this would be Genera.
Other incredible software has a very specific type of draw which doesn't work for everyone. Examples would be Guix and Emacs.
And yet other software is very niche, leading to it being barely discoverable. As Lisp isn't a hot programming language (like Rust is), these projects don't usually get 'this thing was programmed in Lisp' coverage. Things like NASA's SPIKE, for example (cfr. <https://allegrograph.com/press_room/franzs-allegro-cl-used-f...>).
Sure, mainly I mean like, "Lisp's feature X enabled us to do thing Y that we couldn't do in other languages which let us make baller software Z". I think other languages arguably have this stuff: C/C++/Rust have direct hardware access and speed, JavaScript has the browser, etc.
It's a little hard for me to pick out exactly what makes Allegro CL [0] good at the AI thing. They do say you can also write rules in Prolog, which makes me think the advantage isn't down to Lisp but rather the library code they wrote on top of it.
> Other incredible software has a very specific type of draw which doesn't work for everyone. Examples would be Guix and Emacs.
Yeah, maybe this is one. I almost listed Emacs, but I'm a little torn (also a Vim user so don't trust me here haha). Lisp is inextricable from Emacs, but I'm not familiar enough to say whether or not you would have an appreciably harder time writing an equivalent app in, say, Lua. Maybe! Maybe live reload is core to the experience, etc. Like I said I'm not super familiar.
So yeah, I should clarify that I don't think the software needs to take over the world to meet my criteria here. I think niche software can totally qualify; in fact I think it's more likely to qualify. Basically it's "we wrote a unique, useful program in Lisp that we really couldn't have written in a different language." Does Emacs fit the bill here? Maybe, but I kind of doubt it.
Also the heavyweights of CAD, which have definitely affected your life even if you’re not consciously aware, are Lisp based.
Examples (for Common Lisp, so not citing Emacs): reddit v1, Google's ITA Software that powers airfare search engines (Kayak, Orbitz…), Postgres' pgloader (http://pgloader.io/), which was rewritten from Python to Common Lisp, Opus Modus for music composition, the Maxima CAS, PTC 3D designer CAD software (used by big brands worldwide), Grammarly, Mirai, the 3D editor that designed Gollum's face, the ScoreCloud app that lets you whistle or play an instrument and get the music score,
but also the ACL2 theorem prover, used in industry since the 90s, NASA's PVS provers and the SPIKE scheduler used for Hubble and JWST, many companies in quantum computing, companies like SISCOG, which has been planning the underground transportation systems of European metropolises since the 80s, Ravenpack, which does big-data analysis for financial services (they might be hiring), Keepit (https://www.keepit.com/), Pocket Change (Japan, https://www.pocket-change.jp/en/), the new Feetr in trading (https://feetr.io/, you can search HN), Airbus, Alstom, Planisware (https://planisware.com),
or also the open-source screenshotbot (https://screenshotbot.io), the Kandria game (https://kandria.com/),
and the companies in https://github.com/azzamsa/awesome-lisp-companies and on LispWorks and Allegro's Success Stories.
https://github.com/tamurashingo/reddit1.0/
https://www.ptc.com/en/products/cad/3d-design
https://apps.apple.com/us/app/scorecloud-express/id566535238
This site is written in Lisp, I believe (a dialect called Arc).
Not only that, but the site and Y Combinator as a whole likely wouldn't exist without Lisp.
https://paulgraham.com/avg.html
(Wow! Over 20 years ago!)
Yeah but I'm skeptical that there's anything about HN's software that's uniquely enabled by being written in Arc. I want someone to point to a feature in a Lisp and say, "you can write X software only in Z because there's Y feature". I'll even accept "significantly more easily" instead of "only" here. For example: "you can write secure kernels only in Rust because it has direct hardware access and memory safety" (let's stipulate this is all true).
Has anything else notable been done with Arc?
Rather than "Lisp gives you superpowers", maybe the story is actually "there were some very effective developers who happened to use Lisp to build their stuff".
> "what incredible software has this produced", and I can't think of any standouts.
See for example https://interlisp.org , an early Lisp system with IDE developed at BBN and later at Xerox PARC. They got the 1992 ACM Software System Award for pioneering work in programming environments.
Grady Booch once said at an Eclipse conference: 'For those of you looking at the future of development environments, I encourage you to go back and review some of the Xerox documentation for InterLisp-D.'
I think the historic Interlisp-D system counts for "incredible software".
There is a bunch of incredible software written in Lisp: language-oriented workstations, parametric CAD systems, databases, etc. Many languages have "incredible" software written in them, and you'll find such software for Lisp, too, over the decades of its existence. But you might need to look at very specialized domains, like computer algebra systems, autonomous robots for inspecting pipelines, chip design verification, or quantum computers.
I might have a big galaxy brain thought here that I'm just scratching the surface of in this thread. I think languages/platforms are less about the technology or features and more about shared values and community. Languages/platforms each have an ethos. They prove it with features (Rust's borrow checker, Perl's regular expressions, PHP's templating) and build a community around that ethos and the shared values.
Are there, from time to time, language features that really set a language apart? Yeah I think so, Ada has a bunch of baller features, Erlang's OTP is pretty unique, etc. I think these are the kinds of things you could point to and say, "I wrote a network server that processes random user-generated text as fast as possible and has zero memory bugs because I used Ada's safety and performance features" or "I wrote a sprawling telephony system with insanely high uptime because of OTP". They're like materials-science advances, right? We found a new way to make a metal that will never bend and is virtually weightless, enabling us to build a space elevator.
Is Lisp like this? Probably! Its ideas of dynamic typing, garbage collection, no direct memory access, interpretation, etc. have been nearly entirely co-opted by static languages like Java and dynamic languages like Lua. I guess my argument is that now, I don't really see a differentiator outside of macros. Maybe it's unfair, but I think my threshold here is "we used macros to build AI/something nuts". I don't think that's happened.
Then again I'm not embedded in the Lisp community -- I definitely got everyone to list their favorite Lisp projects here haha. Maybe some of these examples qualify? I'm not gonna run them all down. I just think there's something rich here about community, ethos, and technologies that find their ways through different languages for technical and cultural reasons.
> in the Lisp community
There is no such thing as "a" Lisp community, since Lisp is a term for a wide variety of different languages, each having their own community. Common Lisp is a group of implementations (which are sub-communities), Scheme is a group of implementations / sub-standards (which are sub-communities), there are derived languages (-> Clojure)...
> I think languages/platforms are less about the technology or features and more about shared values and community.
Yes.
> I guess my argument is that now, I don't really see a differentiator outside of macros.
The larger topic is "symbolic computation", and Lisp itself is one application of it. Lisp can be used to implement not only itself but a whole range of languages. "Interpreter" in Lisp actually means "an interpreter for Lisp source", whereas "interpreter" for Java means "an interpreter for JVM byte code", which is something very different. An interpreter written in Lisp for Lisp interprets source code, which is actually data.
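A two-line Common Lisp illustration of that point, with code as ordinary list data:

    ;; Source code is a list you can build and inspect like any other data...
    (defvar *form* (list '+ 1 2))  ; the list (+ 1 2)
    (first *form*)                 ; => +
    ;; ...and hand directly to the evaluator:
    (eval *form*)                  ; => 3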
Scheme, Clojure, ML and a bunch of more esoteric ones were first implemented in Lisp. Often languages and language extensions are integrated into Lisp. Racket (a variant of Scheme) makes language implementation one of its core features.
> Maybe it's unfair, but I think my threshold here is "we used macros to build AI/something nuts". I don't think that's happened.
There are many, many uses of language extension via macros that are "nuts". For example, possibly the first parametric CAD system, called iCAD, was based on Lisp plus a dynamic object system whose user-facing forms are macros. The language Lisp was thus extended to describe complex parametric CAD models. I once heard that Boeing had whole models of complex jets (think 747) described and loaded into Lisp memory. Turbines were described in Lisp, and design changes could be generated via code-and-data changes. The cabin configurations of jets were described and computed with design rules in Lisp.
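For flavor, a hypothetical Common Lisp sketch (made-up names, not iCAD's actual syntax) of that style of macro-based parametric modeling:

    ;; DEFINE-PART is hypothetical: it turns a declarative description
    ;; into a class plus methods that compute the derived dimensions.
    (defmacro define-part (name params &body derived)
      `(progn
         (defclass ,name ()
           ,(loop for p in params
                  collect `(,p :initarg ,(intern (symbol-name p) :keyword)
                               :accessor ,p)))
         ,@(loop for (dname form) in derived
                 collect `(defmethod ,dname ((part ,name))
                            (with-accessors ,(loop for p in params
                                                   collect (list p p))
                                part
                              ,form)))))

    ;; A wing whose area is derived from its span and chord:
    (define-part wing (span chord)
      (area (* span chord)))

    (area (make-instance 'wing :span 30 :chord 4))  ; => 120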
YALPAAG (Yet Another "LISP's Parentheses Are Actually Great") article.
One would think that if they really were that great, there wouldn't have been any need to keep writing such articles after several years, and yet here we are again.
> humans can’t immediately parse structure with a glance
No, humans absolutely can, as long as those structures are: a) delimited; and b) the delimiters used aren't nauseatingly repeating. LISP's parentheses start failing at the b) condition much sooner than semantic whitespace/indentation starts to.
YAEAR (Yet Another "Earth's Actually Round") article.
One would think that if the earth was really round, there wouldn't have been any need to keep writing such articles after several years, and yet here we are again.
> humans can’t immediately see curvature with a glance
No, humans absolutely can, as long as those curvatures are: a) intuitive; and b) the curvatures used aren't nauseatingly large. Round Earth starts failing at the b) condition much sooner than a flat-earth model does.
In short, generations are forced to effectively re-learn basic facts about the world they inhabit because new members are in fact ignorant and must be disabused of their intuitive yet incorrect assumptions.
What on Earth did you intend to convey with this? I'm mystified.
Are you, in fact, comparing a preference for Algolic syntax over s-exprs, to... flat Earth theory?
Surely not.
The comparison between preferences for Algolic syntax over s-expressions and flat Earth theory is not about equating the legitimacy or scientific validity of the concepts but rather highlights the persistence of debate and the necessity of re-education. It illustrates that regardless of the objective correctness or efficiency of a concept (like the roundness of Earth or the utility of LISP's parentheses), there will always be a need to revisit and reiterate these concepts for new audiences who may not yet understand or accept them. The analogy aims to show how education and clarification are ongoing processes, necessitated by the arrival of individuals unfamiliar with established knowledge or preferences.
There is no serious debate. Earth is generally believed to be round (and there is no swarm of articles floating around that explain that the Earth is, indeed, round/flat, except in some weird corners of the Internet). And LISP syntax is generally not believed to be objectively superior (again, except in some weird corners of the Internet); even TFA shies away from saying that, and most of its arguments are inherently subjective. I know why the parentheses are there and yet, despite the article's claim that "once you understand their value, you’ll likely never be able to program without them", I am perfectly fine programming without them. Well, maybe I am just not a True Programmer then. Oh, and "the solution is to learn to stop worrying and love the parentheses" is not even an argument, just as the immediate knee-jerk response "well, fuck you too I guess" to this ludicrous proposal is not an argument either.
The criterion of truth is practice. In practice (over the last sixty years), human programmers have generally fared better with non-LISP syntax, like it or not. Sure, if you love the parentheses, go and use them, but don't try to pass your personal preference off as objective truth. It's a similar thing with literate programming: yes, it works great for Knuth, but it doesn't work well at all for most other people, who are not Knuth.
And by the way, if you drop the parentheses and reverse the order of the words, you basically get a FORTH program, which is self-delimited and requires no parentheses. So parentheses hit that middle point between the two extremes of "many special pieces of syntax for structuring the program" and "no special syntax for structuring at all" and take the worst from both of them: they are redundant for the machine, and they don't help the human programmer ("humans can’t immediately parse structure with a glance"), so all they do is increase visual clutter. Thanks, but no thanks.
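As a concrete illustration, take the foo/bar computation from the Haskell example above (assuming foo is a word that pushes its value):

    ;; Lisp: prefix, fully parenthesized.
    (* foo (+ foo 32))
    ;; Drop the parentheses and reverse the words, and you get valid
    ;; postfix code for a Forth-style stack machine:
    ;;   32 foo + foo *
    ;; push 32, push foo, add, push foo, multiply; no delimiters needed.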
As I use vim/slimv, I don't care, paredit fixes that.
It's weird to have a Lisp fanboy gushing about parentheses seemingly without knowing about M-exprs, Dylan, and how the paren-heavy syntax wasn't meant for humans.
> the paren-heavy syntax wasn't meant for humans
In Lisp, the "paren-heavy syntax" was for humans. It was even part of M-exprs: it was the syntax for data within the M-expression notation.
But M-exprs were never implemented; the code was translated by humans into full s-expressions and entered that way.
Users then found out that it was easier to stay with s-expressions.
Yet another chance to regret that REBOL never caught on.
programmers choosing a language because of syntax are like house buyers choosing a house because of the color it's painted
The wrong paint won’t stop you from adding a garage, but choosing a complex and non-uniform syntax makes it so hard for end users to extend the language that we almost never try. Imagine how much more readable Java would be if Lombok didn’t need to smuggle each option in a prefix @Annotation.
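As a point of comparison, Common Lisp's standard defstruct is this kind of extension done as an ordinary macro: one declarative line expands into the boilerplate that Lombok has to inject through annotation processing:

    ;; DEFSTRUCT is a macro, not compiler magic: this single line expands
    ;; into MAKE-POINT, COPY-POINT, POINT-P, POINT-X and POINT-Y.
    (defstruct point x y)

    (point-x (make-point :x 1 :y 2))  ; => 1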
if running your java through m4 before compiling it is a major advantage, doing it is trivial. compiling from a completely different surface syntax is a little more work but still not life-changing
The hard part is parsing Java’s grammar with your extensions, and then validly altering a complex AST with many different types of statements and declarations. That’s the only way to make your work read like Java, and why Lombok doesn’t.
But what if the two houses are otherwise identical? Even a small convenience will tip the decision.
in cases similar to that, it would be a reasonable thing to do
I'd like to think I'd have a far easier time painting a mauve house ochre than I'd have trying to change Rust to have an S-expression syntax (not in the least because I have the programming chops of a slightly dimwitted housefly).
it's an excellent point: non-programmers choosing a programming language would do well to pick a language whose syntax they like, since they can't change it
I seem to recall some fellow or other describing notation as a tool of thought.
That doesn't sound superficial to me.
that was iverson, explaining the motivation for apl. you may have noticed that apl-like notations have not supplanted algol-like notations like c, python, golang, lua, and scheme for programming in the way that arabic numerals have supplanted roman numerals and algebraic formulas have supplanted geometric diagrams and natural-language arguments. the apl data model is very popular in some application areas (numpy, r, and tensorflow are based on it) but the notation is not. today apl, j, and k are less popular than haskell
to me, this indicates that iverson was mistaken: apl users did not gain major cognitive advantages over users of programming languages with more conventional syntax, so language choice has been driven by other factors. the same cannot be said about programming language choice in general; writing things in assembly language clearly takes much longer than writing them in c, python, golang, lua, scheme, or basic. but that turns out to be a result of their different data models, not different syntax
> ... "curly braces" ...

Ugh. Braces are "curly"; there are no "square braces" or "round braces." Needlessly redundant and confusion-inducing. Can we please eradicate this circumlocution from programming-language discussions?
This is a regional thing. In the UK they're all brackets.
Brackets, square brackets, curly brackets.
Even if that's true (and it wasn't in my experience of the UK), no-one calls any character other than {} braces (even with extra qualifiers), right?
Considering the worldwide population of coders don't all share a common native language, no - we need to embrace fluidity in our vocabulary.
They are also called curly brackets, so I imagine the confusion comes from there, and from the fact that the words "braces" and "brackets" are relatively similar, making them easy to confuse, especially for non-natives.
Parentheses (()), braces ({}), brackets ([]), and chevrons (<>). I concur.
Or, in British English, brackets (()), curly brackets ({}), square brackets ([]).
And I’ve heard both “angle brackets” and “pointy brackets” (<>)
Smooth, squiggly, hard, and pointy are my gotos
That would be fine if we could somehow magically change the English language so that this would universally match what people call these symbols around the world - but that's not likely to happen.
> While I do not find jokes in this comic funny (except maybe C one), the Lisp part is pretty representative of how non-Lispers see Lisps in general, which is unfortunate.
Lisp programmers see the exact same thing as non-Lisp programmers. This image is funny because Lisp is known for its use of parentheses, and no amount of coping will change this.
I don't think we need to talk about parentheses?
I doubt there is anything nontrivial but useful to say about them?
Anyone found anything in the article?
I think I'll take these votes as a no.