# hckrnws

Once managed tech for an insurance actuarial department. We ran IBM DB2 for underwriting and claims apps. One day I had lunch with the actuaries to make friends and make sure we were supporting them well. At one point in the conversation I foolishly asked whether they would also like to access DB2 to minimize data transfers. They laughed and said: "SQL is like drinking data through a straw. We use APL so we can drink it all at once." I felt like a rookie at spring training.

I intend to, at some point, learn a readable array oriented programming language. I heard that J fits that bill. Any other suggestions?

What I mean by "readable" is, that it uses ASCII strings and actual names, not merely 1 character (except for very few cases where 1 character might be appropriate, like "Y" or stuff like that) as function names, instead of symbols, of which I do not know their names. I should be able to not only write once, but read again easily. I should not need a special keyboard or special keyboard layout to write programs in the language.

> What I mean by "readable" is, that it uses ASCII strings and actual names, not merely 1 character

I urge you to reconsider: That they are 1-character is an extremely important feature of Iverson's work.

> instead of symbols, of which I do not know their names

Why do we write 5+5 and not five plus five? The symbols are important! Gosh, can you even *remember* not knowing how to do addition? Don't you remember how much easier adding things up became once you learned "+"?

Most of Iverson's ideas for symbols are really good -- so good that if you *do* learn their names and everything about them, you will have the most amazing ideas. It will be as significant a change in you as learning "+" was.

> I should not need a special keyboard or special keyboard layout to write programs in the language.

You don't need anything of the sort, but if you can't touch-type in APL you might want keycaps, because I think connecting an idea with a single symbol is harder if you keep having to translate to English all the time.

>Why do we write 5+5 and not five plus five?

Because tradition is a *very* hard thing to get rid of. Just like clocks with hands, with 12 sections.

If you assume addition as an implicit juxtaposition syntactic operator, ⁙⁙ or ⬠⬠ is also damn efficient to represent five plus five (/faɪv.plʌs.faɪv/ in IPA).

In Ruby:

```
class Integer # Fixnum in Rubies before 2.4
  alias_method :plus, :+
end
five = 5
five.plus five # => 10
```

The multiple-letter "five" is a single symbol, just as the multiple-pixel 5 glyph is.

Now, there *is* some convenience in having scripturally short symbols when you do your math with paper and pencil, for example posing some additions in column. But that’s all there is to it, pencil convenience.

A concept is a single semantic idea whatever the length of the symbol used to refer to that concept.

> Now, there is some convenience in having scripturally short symbols when you do your math with paper and pencil, for example posing some additions in column. But that’s all there is to it, pencil convenience.

You say that like you can just *say it* and it's so, but you've got no evidence that all symbols are equal, even if they are equal in some ways. Iverson delivered an utterly convincing argument to the contrary by showing example after example of the exact opposite. You should read it a few times:

https://dl.acm.org/doi/pdf/10.1145/1283920.1283935

> A concept is a single semantic idea whatever the length of the symbol used to refer to that concept.

No: symbols take up real space on the page, on the screen, and in our minds. There are only so many symbols you're going to be able to put in your mind in your career, or in your life, so if there's a way to say some symbols are better than others, then by knowing the better symbols you will, among other things, be able to solve bigger problems faster and with fewer bugs. That may not be important to you, but it's important to me!

"+" is a *really* good symbol. That's why the tradition has been so hard to shake, and why that "pencil convenience" has been with us for thousands of years. But "+" isn't that old! For a long time addition *was* performed with juxtaposition: just a series of stroke marks like ||||. But seriously, 5+5 is better than ||||||||||, and nobody can convince me otherwise! If you aren't convinced by now, though, I'm not sure what else I can do.

"+" was invented by Nicole Oresme in 1360. He was tired of writing "et" (Latin for "and") over and over.

While we have some standardized operators that made it into programming languages, we are doing a very poor job of using symbols even for very useful functions. Iverson was, and still is, right about terse notation. However, APL/J/BQN are not flexible enough in this regard: you can't introduce new symbols. For example, you CAN do “et = +”, but you CAN’T do “• = *”. User-level definition of custom operators would enable the APL family to break out of the diamond world in which they live.
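The same limitation exists outside the APL family; a minimal Python sketch of the distinction between the “et = +” case and the “• = *” case (the name `et` comes from the comment above):

```python
import operator

# The "et = +" case: an existing operation can be bound to a new name...
et = operator.add
print(et(5, 5))  # prints 10

# ...but the "• = *" case is impossible: `5 • 5` is a SyntaxError,
# because the grammar (like APL's) fixes the set of operator tokens
# once and for all; there is no user-level way to mint a new infix symbol.
```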

I don't think user-level definitions of custom symbols are that useful, but for what it's worth, you *can* do it in ngn/k...

```
(Π):!/: / make table
Π[`a `b
( 1 2
3 4)]
```

TIL that ngn/apl and ngn/k support user-defined symbols. Thank you!

Creating custom notation (aka a DSL) can be very liberating. As Iverson noted, having a good notation enables you to think previously unthinkable thoughts. Einstein’s theory of relativity was enabled by use of his summation notation. [0]

Of course, it should be paired with the right user interface. Inputting non-ASCII chars on keyboards is still pretty cumbersome. But APL with its weird symbols is perfect for paper and pencil, where it has its origins. [1]

Some might see a pencil-and-paper interface for an array-oriented REPL as going backwards, but I consider the pencil mightier than the keyboard. Finally you could sketch your algorithm visually beside the runnable code.

[0]: https://en.m.wikipedia.org/wiki/Einstein_notation

[1]: https://mlajtos.mu/posts/new-kind-of-paper

Much of the progress in mathematics over the past 500 years can be attributed to the development of a good, concise notation.

Yet there is at least one famous person in CS and EE (and physics?) who says that mathematical notation is not all that great, compared to how precise computer languages are or can be. And I remember often struggling to decide the order of operations in expressions written in mathematical notation, when a lecturer or teacher used only the minimal number of parentheses.

Mathematical notation is very concise, but I would not call it a great notation in terms of being intuitively clear about which operations to apply in which order, or which symbol groups its argument(s) in what way. I found myself tending to write an extra pair of parentheses to make things clearer for myself.

There is also the mixing of operators and functions. Some computer languages do not mix the two, but employ only functions, which makes things clearer as well.

IDK, for instance, I'm familiar with *Structure and Interpretation of Classical Mechanics*, and even though I have had my share of fun from it, the (functional) presentation there still looks terrible, to the point of becoming completely opaque in places, compared to the traditional mathematical notation of the Lagrangian etc. I have also seen books on calculus that focus on function composition (resembling the style of category theory) rather than on sets, and I felt that this was completely unnecessary as it only served to complicate both the exposition and the understanding of the subject that is already difficult enough.

I'm not the person you replied to, but symbols are easier to distinguish visually, though they need careful balance. Too many are worse than too few.

> ⁙⁙ or ⬠⬠ is also damn efficient to represent five plus five

Ok, now do a million plus a million.

MM? Using Roman numerals... Seriously, I do value good notation, but this particular case, juxtaposition, is what we use for multiplication in algebra, where it's easy to find the boundary of a number and variables are always 1 symbol long, so 'xyz' is a product of three values, not the name of one.

Coming back: notation is important, but sometimes it's hard to find "the best" or even "good enough". Lagrange's primes are used as symbols of differentiation alongside Leibniz's dy/dx, and there is Newton's shortcut for "differentiated by time", a dot on top of the function symbol. So for various cases, different notation is more convenient.

MM in Roman numerals is 2,000. There are some Unicode symbols representing one million, but I dunno if they're in the range HN accepts.

oo

Now do ten million plus ten million

I guess that the "Combining Cyrillic Ten Millions Sign" could do the job: ꙰ ꙰

Though your point, it seems, is more about using only a small set of symbols. Obviously, to render large numbers you need to introduce new syntactic apparatus if keeping terseness is a desired constraint.

Just like you would write 1×10⁷ or 1e7, you can have, say, a single point on one line and a heptagon or seven points on another, still juxtaposing these two columned digits. In Unicode, it seems, these last symbols are not present, but you can instead use cards from different games.

```
🀟 🀟
· ·
```

How fun! :D

APL Mnemonics: https://dyalog.tv/APLSeeds23/?v=qZtb4XOLdkI

If the symbols are just functions, then they can have longer-than-one-character names. A(x,y) is no different from add(x,y), no?

The advantage of single-character concatenated symbols is that it allows your brain to group them rather than forcing you to read them separately. This means you almost unconsciously build up a set of instantly recognizable "chunks" of symbols that represent concepts you're familiar with. Each of these chunks can represent the equivalent of a huge amount of code in another language.

> set of instantly recognizable "chunks"

Indeed! Much of APL programming is using idioms (≠⊆⊢, for example, as split on delimiters)
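For readers without an APL handy, here is a rough Python sketch of what that ≠⊆⊢ idiom computes (the name `split_on` is mine, purely for illustration):

```python
def split_on(delim, xs):
    """Rough sketch of the APL idiom ≠⊆⊢ (split on a delimiter).

    ≠ builds a boolean mask marking non-delimiter positions, and
    ⊆ (partitioned enclose) groups the runs where that mask is 1,
    dropping the delimiters themselves.
    """
    groups, current = [], []
    for x in xs:
        if x != delim:          # the ≠ mask says "keep this element"
            current.append(x)
        elif current:           # mask is 0: close the current run, if any
            groups.append(current)
            current = []
    if current:
        groups.append(current)
    return groups

print(split_on(' ', 'drink it all'))
# [['d', 'r', 'i', 'n', 'k'], ['i', 't'], ['a', 'l', 'l']]
```

In APL the whole loop collapses into those three glyphs, which is exactly the "chunk" effect described above.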

https://aplcart.info has a good selection too

As the other poster implied, it's the difference between a library of functions and an alphabet for composing functions. Concise symbols allow composition of "words" in a way that longer names really don't. There's a practical language design side too - I don't think the right-to-left evaluation and accompanying concision and logical reading order would work otherwise. Maps in other languages are *so* clumsy and clunky and backwards compared to the APL equivalent.

Germans beg to differ

> I should be able to not only write once, but read again easily.

Once you overcome the admittedly very steep learning curve of APL, it tends to actually be more readable than Algol-likes in many respects.

In fact, counter-intuitively, the single-character symbols actually contribute a lot to its readability. This is, in part, similar to the reason that "a^2 + b^2 = c^2" is usually preferred over "The sum of the squares of the orthogonal sides of a right triangle is commensurate with the square of the hypotenuse," assuming you've done your homework and know basic algebra, equational reasoning, and all that.

In my experience, I see something like a 100X reduction in code size. This means that on my editor screen, right now, just by moving my eyeballs, I can read code that would be scattered over 10s of files in Python. This is the stronger sense in which APL is more readable, and by a large margin, to boot.

Seriously, APL is tons of fun. I feel like it would be tragic to let a few small preconceptions cut off the potential for you to give it an honest go. Also, the J and APL communities are each quite distinctive. FWIW, I've had a particularly good experience with the people using Dyalog APL.

>"a^2 + b^2 = c^2" is usually preferred over "The sum of the squares of the orthogonal sides of a right triangle is commensurate with the square of the hypotenuse,"

These two expressions don’t carry the same level of information. At best the first, without additional contextual information, can only be interpreted as "the surface of two squares equals that of a third", or more directly "a squared plus b squared gives c squared".

Hehe. You are absolutely right. I was hoping someone would point that out because it highlights another strong point of APL, in my opinion.

APL expressions, just like mathematical formulas, don't float alone in a void in practice. They exist within a context: some problem domain, surrounding code and/or equations, etc. Just as the equation "a^2 + b^2 = c^2" can encode equivalence relations between different structures depending on context, APL expressions take on interesting, semantically useful meanings when applied to a particular problem domain.

The reason I call this a strong point of APL is that, in "normal" languages we tend to re-encode that extra domain-specific information all over the place at the micro level—within function names, type signatures, and more pertinently in the abstractions we make. APL enables (but doesn't demand) us to let that information manifest directly at the macro level, without erasing the unused potential semantics of the raw algebraic expression.

This is something that practicing mathematicians get intuitively, I believe. Take category theory, for instance, where commutative diagrams in their most abstract form really are Abstract Nonsense™, but their power comes from instantiating them into specific categories, without having to erase the information about the fact that the same diagram also applies within other categories. That kind of thing really tickles the natural abilities of human pattern matching in a productive way.

I am not a practicing mathematician but I have my degree in math, and from that point of view I do not agree at all with you. Symbols are mostly useful for symbolic manipulation, but any good mathematical book will try to keep the number of new symbols introduced at a minimum precisely because they add a lot of cognitive overhead. In fact, you'll hardly find novel symbols, mostly there are shortened function names (e.g., Hom(G) for the set of homomorphisms over a group). Considering how code is read more than written, adding context-dependent symbols that add cognitive overhead isn't really an advantage.

> in "normal" languages we tend to re-encode that extra domain-specific information all over the place at the micro level—within function names, type signatures, and more pertinently in the abstractions we make.

But that's good! It means there are fewer possible sources of truth, fewer things to update when something changes, and less context switching.

> Symbols are mostly useful for symbolic manipulation

This doesn't match with my experience at all. If you open up a math book and see integral signs everywhere, that immediately gives you hints for the kind of domain you're dealing with. If, on the other hand, you see mostly polynomial equations, then you're probably not worried about limiting processes. Mathematical symbols, at the graduate level and beyond, take on a communicative role in addition to the pure algebraic one you internalize during undergrad.

> In fact, you'll hardly find novel symbols [in any good mathematical book]

The average working mathematician has a working knowledge of at least a hundred squiggles. APL has far fewer. I don't quite see your point. It's not like APL programs are willy-nilly introducing new symbols.

> But that's good! It means there are fewer possible sources of truth, fewer things to update when something changes, and less context switching.

What would you rather read? 30-ish equations or PyTorch library code? Which do you think would be easier to grok? I am, of course, referring to the 30-line self-contained APL implementation of a neural net with performance on par with PyTorch:

Hsu and Serrão, U-net CNN in APL: https://www.dyalog.com/uploads/conference/dyalog22/presentat...

To be clear, I am not making an argument for APL; I am sharing my direct experience. Despite the completely natural intuition to the contrary, APL expressions turn out to be experienced as readable. More than that, though, APL *programs* are able to make overall architecture and design readable in a way that's unseen in supposedly "readable" languages.

While you come up with arguments for why APL is unreadable, I'll continue to write readable APL :P Why not join me instead!

> If you open up a math book and see integral signs everywhere, that immediately gives you hints for the kind of domain you're dealing with. If, on the other hand, you see mostly polynomial equations, then you're probably not worried about limiting processes. Mathematical symbols, at the graduate level and beyond, take on a communicative role in addition to the pure algebraic one you internalize during undergrad.

The title of the book also does that. Few books, papers, articles or exercises consist exclusively of symbols. I just took some random math papers from arXiv, and most of them are extremely wordy; most of the symbols used are letters serving as names for values and functions defined right there, not operators whose definitions you need to look up elsewhere.

> What would you rather read? 30-ish equations or PyTorch library code? Which do you think would be easier to grok? I am, of course, referring to the 30-line self-contained APL implementation of a neural net with performance on par with PyTorch:

I mean, obviously the PyTorch library code. Not even talking about symbols and syntax, at least the PyTorch code has readable names. We could also mention the lack of spacing to set apart different concepts or groups of operations.

I find it really funny that the claim is "APL is like math" when precisely APL does things that are very much frowned upon in math: excessive use of symbols, long symbolic equations, lack of literal explanations...

> I mean, obviously the PyTorch library code. Not even talking about symbols and syntax, at least the PyTorch code has readable names.

Do you realize that there are vastly more "readable names" in the PyTorch library than there are *bytes* in the APL implementation of that neural net? I am fairly confident you could learn APL and grok the code in that paper faster than you could grok the PyTorch library code.

I feel like you're getting hung up on a few unfamiliar squiggles. They're really not that big of a deal, and the reality is that APL code in the wild has comments, READMEs, and all that jazz to provide helpful context. Then the APL code gets out of your way so you can focus on the big picture.

It might be hard to believe, but again, no matter how many reasonable arguments you may have against APL's readability, I'm simply sharing that real experience with the language shows otherwise. This is coming from someone who's only been futzing around with APL for about a year or so, to boot.

> I find it really funny that...

I like your strong skepticism. You'd probably be pretty good at APL if you ever developed the interest.

APL designers also work to keep symbols to a minimum, and any given APL program introduces *zero* new symbols. They're not context-dependent (why did you think this?), and very few are domain-specific. Various languages differ a bit on this: J has more primitives, and several for number theory or other specific branches of math, while K has very few, and all are general.

From Alan Perlis in APL's early days: "The large number of primitive functions, at first mind-numbing in their capabilities, quickly turn out to be easily mastered, soon almost all are used naturally in every program — the primitive functions form a harmonious and useful set."

> They're not context-dependent (why did you think this?),

They are: ρ5 is not the same as 2ρ5. A lot of symbols mean different things depending on whether they're applied as monadic or dyadic operators. The circle operator is especially fun: A○B will apply a certain trigonometric operation to B depending on the value of A. That, to me, is very context-dependent.
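A few of ○'s dyadic cases can be sketched in Python as a dispatch table. This follows the documented Dyalog meanings for left arguments 0 to 3; the name `circle` is purely illustrative:

```python
import math

# Sketch of APL's dyadic circle function (A○B): the left argument
# selects which function is applied to the right argument.
CIRCLE = {
    0: lambda x: math.sqrt(1 - x * x),  # 0○B → (1 - B²)^½
    1: math.sin,                        # 1○B → sin B
    2: math.cos,                        # 2○B → cos B
    3: math.tan,                        # 3○B → tan B
}

def circle(a, b):
    return CIRCLE[a](b)

print(circle(2, 0.0))  # 2○0 is cos 0, prints 1.0
```

Whether this counts as "context dependence" or just value-based dispatch is exactly the disagreement in the surrounding comments.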

> and any given APL program introduces zero new symbols.

Yes, but APL itself introduces a ton of new symbols. In fact, far more symbols than I've ever seen in my math degree.

> From Alan Perlis in APL's early days: "The large number of primitive functions, at first mind-numbing in their capabilities, quickly turn out to be easily mastered, soon almost all are used naturally in every program — the primitive functions form a harmonious and useful set."

This is a tautology to me, "once you master things you use them naturally". Of course. But the fact that APL and similar languages are very obscure would point to "once you master things" not being that easy and natural.

Is that so? Compare all Dyalog primitives to some math ones:

- https://aplwiki.com/wiki/Dyalog_APL#Primitives

- https://en.wikipedia.org/wiki/List_of_mathematical_symbols_b...

APL shows 74 symbols. I count at least 50 math symbols that seem very likely to show up in an undergraduate degree; notably, 26 symbols +-×÷|⌊⌈!~∧∨⍲⍱<≤=≥>≠∊∩∪≡≢∘ in APL are taken from math with similar meanings.

The arguments to a primitive are not context! You may as well call addition context-dependent because 1+n increments n while 2+n increments it twice. You're describing overloading, and I do really dislike the way ○ handles things (in BQN trig just goes in the •math namespace). However, I don't think it's right to put the blame on symbols, as a trig(code, argument) function could be defined to do the same thing. That is, maybe symbols encourage this design choice, but in themselves they aren't the problem.

Note "quickly" and "easily" in Perlis's quote. A language with 500 symbols would seem to me to match your earlier description, and I would find it hard to believe it could be quickly or easily mastered. A language with 74 symbols, many of which are related, is what makes it possible to teach APL quickly: Perlis's month works out to a few symbols a day. APL was near-mainstream in the 1980s, and I think there are better explanations for its decline since.

> APL shows 74 symbols. I count at least 50 math symbols that seem very likely to show up in an undergraduate degree; notably, 26 symbols +-×÷|⌊⌈!~∧∨⍲⍱<≤=≥>≠∊∩∪≡≢∘ in APL are taken from math with similar meanings.

I have counted at least 30 symbols that I haven’t seen in an undergraduate degree plus a master’s degree, not counting symbols that I know but haven’t seen used directly as symbols (e.g., "," or "?") rather than as punctuation. Of course I might have forgotten some of them, but I think it’s undeniable that APL introduces a lot of new symbols.

> The arguments to a primitive are not context! You may as well call addition context-dependent because 1+n increments n while 2+n increments it twice. You're describing overloading

Not the arguments themselves but the number of arguments. Operator overloading is also context dependency. Maybe we’re getting lost in the words: what I mean is that you can’t say for sure what a certain symbol means without the symbols around it. For example, in C, / always means “divide what’s on the left by what’s on the right”, but * is context dependent because it can be either multiplication or pointer dereference.

> That is, maybe symbols encourage this design choice, but in themselves they aren't the problem.

If some part of a language encourages things that we consider bad decisions, then that part is a problem. Again returning to C, one could just as well say that the memory allocation system isn’t a problem despite the fact that it makes it really easy to cause leaks and bad memory accesses.

> A language with 74 symbols, many of which are related, is what makes it possible to teach APL quickly

Quickly… compared to what? Because I honestly don’t think an average person is going to learn APL faster than any mainstream language.

> APL was near-mainstream in the 1980s, and I think there are better explanations for its decline since.

It’s not just that APL has declined but in general the array programming paradigm that it pioneered hasn’t really taken off. For example, LISP declined quite a lot but new similar, functional languages have appeared and attracted interest.

J uses ASCII. Built-in symbols use dot (.) and colon (:) as part of a symbol. In the J vocabulary, https://www.jsoftware.com/help/dictionary/vocabul.htm , I counted fewer than 150 built-in symbols. Even if you double that for monadic/dyadic variations, it's fewer than 300. I definitely don't know many of them and am still able to write some programs, like a parser generator. I think those symbols are closer to a standard library; i.e., as with C we need to get familiar with printf, getc or strcpy. As with mathematics in elementary school, you start with a few simple ones, then gradually add some more useful ones, and the rest are optional for cases when you want or need them.

In practice a lot of J symbols are either already known or rather obvious.

It's like a standard library, except it's harder to search for the symbol you want via autocomplete, harder to infer the meaning of symbols you don't know (even if the C standard library has some terrible names), and easier to confuse symbols (from the page: ,. and .. or ,: and .:).

Of course, I assume that people end up knowing most of them and that not all of them are necessary. I'm not arguing that; I'm saying that symbols are an extra cognitive load, and I find it funny that the argument for them is "well, this is how it's done in math" when precisely there's a push in math to avoid using excessive symbols.

> Quickly… compared to what? Because I honestly don’t think an average person is going to learn APL faster than any mainstream language.

It's well known that non-professional programmers often preferred APL to other languages; it was easier to express their problems in APL than in something else.

A possible point of confusion: "I count at least 50 math symbols" wasn't in reference to APL, just my count of how many symbols are widely used in mathematics. Contrasting your claim that APL has "far more symbols than I've ever seen in my math degree". Agreed, most of APL's symbols are not found in mathematics.

I refer you again to Perlis, same paragraph: "It is true that BASIC and FORTRAN are easier to learn than APL, for example, a week versus a month. However, once mastered, APL fits the above requirements much better than either BASIC or FORTRAN or their successors ALGOL 60, PL/I and Pascal."

> For example, in C, / always means “divide what’s on the left by what’s on the right”, but * is context dependent because it can be either multiplication or pointer dereference.

No, it could mean the start of a comment, multi-line or single-line, depending on context. Similarly, there are multi-character operators, both in C and in J. And while some idioms in APL could be seen as "atomic" (like (+/ % #) in J), they are actually function compositions, so they'd better be seen and understood in context, which is rather small.
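For what it's worth, (+/ % #) is J's "fork" for the arithmetic mean: sum (+/), divided by (%), count (#). The fork combinator itself is easy to sketch in Python (the names `fork` and `mean` are mine):

```python
from operator import truediv

def fork(f, g, h):
    """The J/APL fork (f g h): applied to y, it computes g(f(y), h(y))."""
    return lambda y: g(f(y), h(y))

# (+/ % #) reads as: sum, divided by, count.
mean = fork(sum, truediv, len)
print(mean([1, 2, 3, 4]))  # prints 2.5
```

Seen this way, the idiom really is a composition of three ordinary functions, which supports the point that it should be read in (small) context rather than memorized as an opaque atom.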

>They are, ρ5 is not the same as 2ρ5.

And +5 is not the same as 2+5. One is a sign and the other addition...

Yes, so therefore + is context dependent. Some context dependency is unavoidable, but having *everything* be context dependent is bad design IMHO.

My point was that mathematical symbols are also context dependent and yet they work and are useful, so I don't get your argument of context dependency against APL's symbols.

You're really making up and bending rules here. Did you learn an array language to see things both ways?

What rules? I am only saying that symbols add cognitive load, even more if they're context dependent. I don't think that's controversial at all, and APL and other array languages do have context-dependent symbols.

And yes, I tried learning an array language but I didn't get far, for two reasons: one, I didn't really see where I could apply it; two, it's really hard to discover things and learn incrementally when you can't really use autocomplete or even Google (punctuation and symbols are not well supported in search engines) to help with the large set of symbols/operations with multiple meanings each.

As said by others: you should reconsider and learn the symbols of k or J or Klong, which use your normal keyboard. In k there are very few commands and it won’t take you long to learn them; in your head, at first, you say things like "5 cut range 9" etc. After a while, like a natural language sinking in, you notice that you don’t do this anymore and the symbols just ‘arrive’ instead; you will find patterns and type idioms faster than it would take you to even look up (let alone learn how to use) the npm package in JS, let’s say. At that point you would not really enjoy having mastered words instead of symbols.

Also, nothing keeps you from implementing named functions in k or J out of the 1-letter ones:

```
3#!9 / original
0 1 2
take:{x#y} / or take:#
take[3;!9] / 1 readable function
0 1 2
range:{!x} / or range:!:
take[3;range[9]] / all readable functions
0 1 2
take[3] range[9]
0 1 2
```

In k. You can make a readable lib, but I wouldn’t.

https://github.com/robpike/ivy

Documentation: https://pkg.go.dev/robpike.io/ivy#pkg-overview

Also see these Advent of Code 2021 solutions with Ivy: https://www.youtube.com/playlist?list=PLrwpzH1_9ufMLOB6BAdzO...

APL entered my consciousness when I wandered into a talk at a conference, where those cryptic symbols first confounded, then intrigued me. Now that I've learned a bit, here's how I explain it. In elementary school we learned that the symbol "-" is a very compact way to convey the powerful ideas of subtraction (when placed between operands: "3-2") and negation ("-5").

The insight underlying APL is that this idea can be extended further. When working with matrices (2d arrays), the operations of (a) checking the shape of an array and (b) reshaping arrays are so frequent that APL designates a single symbol for them: "⍴" (which to me looks like an iron bar whose end was "reshaped" into a loop).

In the same way as "-", compact symbols aid communication and also comprehension. Since the symbols aren't alphanumeric, whitespace separators aren't necessary to delimit tokens. Just as "10-7" means the same thing as "10 - 7", "3⍴4" and "3 ⍴ 4" are equivalent (each means a 1d array comprising 3 fours: 4 4 4).
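For readers more at home in Python, NumPy offers a rough analogue of both roles of ⍴ described above (this is a comparison sketch, not a claim about APL internals):

```python
import numpy as np

# Dyadic ⍴ reshapes: 3 ⍴ 4 gives the 1-d array 4 4 4
print(np.full(3, 4))             # [4 4 4]

# 3 4 ⍴ ⍳12 fills a 3-by-4 matrix with 0..11
a = np.arange(12).reshape(3, 4)

# Monadic ⍴ asks for the shape
print(a.shape)                   # (3, 4)
```

In APL both operations are the same glyph, distinguished only by whether a left argument is present, which is the compactness the comment is describing.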

What has helped me has been learning from the inventor himself: Iverson wrote books that introduced the symbols in a very natural way. The one I'm working through now (and which I highly recommend) is "Elementary Algebra", available for free download here:

https://www.softwarepreservation.org/projects/apl/Papers/Ele...

To try out your own expressions, you can use https://tryapl.org/

If that’s what you really want, you end up with numpy, which — whilst efficient in its niche — ends up as a pale imitation of the real thing. I’d urge you to try a modern APL to really experience the power of the Iverson vision.

Somebody always mentions BQN, so: https://mlochbaum.github.io/BQN/

I keep eyeballing that thing. But I am afraid that only, like, 7 people use it and it isn't a good long-term investment. No idea.

If you're looking at learning an array language in order to learn an array language, because of the oft-discussed change in perspective and technique, then I think BQN meets that very handily.

BQN is, in my opinion, the most slick of the array languages; it consistently takes the best of the approaches to high-rank arrays and combinator (including tacit) programming from the others. Some things are unique, and amazing, like structural under.

It is small, it is new, so I would agree with your hesitation to start a business based on it, but to understand these languages or use it for the typical small calculations, it is more than capable.

Well, I read:

http://nsl.com/papers/origins.htm

https://github.com/zserge/odetoj

And watched a few neat videos on the concepts of APL. There's a black and white one on YouTube of a British fellow introducing it using a typewriter. :)

I vaguely get it, my beef with BQN besides being slow is that it is fledgling. And so I have no real excuse to play with it. Otherwise it seems like a holy grail, like, woah finally an APL/J/K to rule them all.

I've seen enough wide eyed salty lispers telling tales of white whales to fear for my sanity going on this quest. Need some sort of lie to buy into like "BQN beyond a superior array language is so fast you could bang out half a page of code in a few minutes and get insane performance and functionality that no mere mortals with their aliensquiggles-less languages can hardly even conceive of".

What kind of half assed cult are you running, haha. No, I refuse to get sucked in. You will not nerd snipe me. Must. Resist.

Do you have any examples that come to mind of it being slow? My experience has been that I've been the slowest part of my code, by a long way.

Some optimisations aren't quite there (like under on higher rank arrays not being as fast as computing the indices on the flattened array), but everything else has been good.

Notes from the creator on this subject: https://mlochbaum.github.io/BQN/implementation/perf.html

Abandoned half-finished rewrite of the BQN VM in Rust as is hipster tradition: https://github.com/cannadayr/rsbqn/

To build on this:

k is a bit like APL, but more concerned with lists rather than multi-dimensional arrays. It uses normal symbols that are easily accessible on your keyboard like + and !, but they are *seriously* overloaded.

q is built on top of k; it takes all the monadic overloads of the operators and gives them names.

So instead of writing

```
!5
```

to get a list of the first 5 integers, you write

```
til 5
```
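The arity-based overloading can be mimicked in Python, if it helps to see it outside k/q. A hedged sketch (my own analogy; going from memory, in k4/q the monadic `!n` is `til` and the dyadic `keys!values` builds a dictionary):

```python
def bang(x, y=None):
    """Toy model of k's overloaded `!`: behavior depends on arity."""
    if y is None:
        return list(range(x))   # monadic !n -> 0 1 ... n-1
    return dict(zip(x, y))      # dyadic keys!values -> dictionary

til = bang  # q-style: give the monadic overload a readable name

print(bang(5))                    # [0, 1, 2, 3, 4]
print(til(5))                     # same thing, spelled out
print(bang(["a", "b"], [1, 2]))   # {'a': 1, 'b': 2}
```

The point is only that one symbol carries several meanings selected by how it is applied, and q trades that terseness for names.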

This is sort of what the original comment is getting at (I think).

J has a library for what you describe, but I doubt you'll like it given your description. What you described is another language called Nial.

I guess you're referring to this: https://github.com/danlm/QNial7

> Any other suggestions?

Futhark: https://futhark-lang.org/

> "*a readable array oriented programming language. I heard that J fits that bill. What I mean by "readable" is, that it uses ASCII strings and actual names, not merely 1 character*"

It definitely doesn't; J is ASCII symbols. See the vocabulary at https://code.jsoftware.com/wiki/NuVoc - those things in the coloured boxes like ": and ;. and {: are the J building blocks and the quotes and brackets don't pair up. If you want words and "actual names", you need something else.

someone already mentioned q, so i'll add nial

APL was the first language I learned after IBM 360 Assembly in the 1970s while I was in college. I really liked it because I was a math major and it was extremely well-suited to math operations on arrays. I used it for finding objects on the 2D complex number plane for my Senior Thesis.

I really enjoyed creating clever combinations of the APL operators to get a ton of computation done in one line of code. However, this was the epitome of "read-only code". It was very difficult to read that kind of code after writing it.

After college I never used APL again, although I worked as a software developer (and still do). I miss the fun I had with it, but I wouldn't want to use it for code that I or anyone else would have to maintain, and I doubt that it would be a very expressive language for most commercial applications. For mathematical, array-oriented tasks, Python has appropriate libraries that would probably lead to more maintainable code. And of course there's Julia.

But I can imagine some people still having the expertise to use APL for exploratory calculations, enjoyably and more productively than any other language.

In a roundabout fashion, I got interested in array languages because of the relational model.

I thought an array is "just a column", and somewhere along the way I got to K, which led me to my current attempt to build a language that makes both paradigms work: https://tablam.org

I think each paradigm complements and "fills in" the missing pieces of the other. For example, you can name columns, and that alone is very useful!

BTW kdb+ is also going in this direction, but its combination is `array then SQL/Relational`, while mine is `Relational then Array`.

As someone who has longed to learn APL from afar and not yet had the time, I think it is very beautiful. I would be very curious to get more insight into how it changes the way you approach problems, in both APL and programming in general, and how it gets you to think differently.

If I recall correctly, the Dyalog branch of APL allows for variable names? I kind of like the idea of zero variable names and all strange symbols. The reasoning that after you learn those, you can read any program and understand what's going on at every step, without needing to check what each function actually does -- that sounded very intriguing to me.

And the reason that "checks out" (or may) I guess is that the language's primitives are very carefully chosen such that you have a kind of extreme economy of "words" - so your LOCs will be so low such that you may not need to abstract lines into mysterious function names to reduce the number of lines a person has to read to understand what a program does. Aka the economy of words more than compensates for the lack of abstraction.

As far as I know, every major dialect of APL has had support for variable names. Symbols are used to name primitive functions, but not user-defined terms; if you want to use any intermediate term more than once, you likely want to assign it a name. And even leaving this aside, you probably would not very much like to write an entire program as a single very long expression; intermediate variables are necessary to break lines up.

A few decades ago, j (an apl dialect) innovated a form of *tacit* programming—programming functions which do not refer to their arguments by name, and which do not need to name intermediate terms to share them—but you still must name your *functions* if you would like to compose them or if you would like to avoid the extremely-long-line problem.

Generally, I write J programs using a mix of small, tacit functions, and larger explicit functions. The latter primarily comprise a sequence of assignments, and generally use unnamed tacit functions heavily (alongside explicit applications and references to other defined functions) when constructing intermediate terms. Explicit control flow is rare, but usually accomplished using builtin combinators and recursion when necessary.

Thanks for the correction! I haven't looked as deeply into J but now also curious to learn more. So many new and interesting terms to look up, thank you!

I agree maybe a little bit of variable and function naming is fine in some cases.

Note that that seems to describe APL-style tacit programming, which is a lot less expressive than J's.

All APL versions have had variable assignments, denoted by ←, e.g. `var ← 1 2 3`.

There is a concept of tacit, or point-free, programming, which avoids the variable names as parameters, for example the calculation of the mean `avg ← +⌿÷≢`. However this does become unwieldy pretty quickly, and it's easy to juggle too much. It is very useful for short snippets, where you avoid all of the ceremony and can just express the core thing you want to talk about (for the avg example, it's not improved by including a parameter: `{(+⌿ ⍵)÷≢⍵}` - the ⍵ parameter isn't informative at all, and nor would anything else, like `samples`). I think it is best to keep the tacit snippets short, and to assign good names to them.
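The `+⌿÷≢` example is a three-element train, or "fork": `(f g h) x` means `(f x) g (h x)`. A minimal Python sketch of that combinator (my own illustration, not from the comment above):

```python
from operator import truediv

def fork(f, g, h):
    """APL-style fork: (f g h) applied to x is g(f(x), h(x))."""
    return lambda x: g(f(x), h(x))

# Tacit mean, mirroring +⌿÷≢ : sum, divided by, tally.
# Note there is no mention of the argument anywhere in the definition.
avg = fork(sum, truediv, len)

print(avg([1, 2, 3, 4]))  # 2.5
```

The Python version needs the `fork` helper spelled out explicitly; in APL the juxtaposition of three functions *is* the fork, which is where the economy comes from.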

Oh! Thanks so much for the correction. I was sure my memory would fail me here. That's a great point too with naming. In cases like this it makes a lot of sense. There are many cases I'm thinking of where people sometimes have long function names that do lots of stuff so you're not quite sure what the implementation is like.

There are many styles of APL, not just due to its long history, but also because APL is somewhat agnostic to architecture paradigms. You can see heavily imperative code with explicit branching all over the place, strongly functional-style with lots of small functions, even object-oriented style.

However, given the aesthetic that you express, I think you might like https://github.com/Co-dfns/Co-dfns/. This is hands-down my favorite kind of APL, in which the data flow literally follows the linear code flow.

ngn/apl is mostly a subset of Dyalog, but allows assigning to glyphs: https://abrudz.github.io/ngn-apl

Is there a good book or whatever a resource to learn and train youself to efficiently think in APL?

APL seems beautiful to me and would probably help me with a number of tasks I do but it doesn't feel easy to actually practice.

https://xpqz.github.io/learnapl/intro.html is a good place to start. After that, consider “Mastering Dyalog APL”, a freely available book from Dyalog.

And Dyalog also has https://course.dyalog.com/

I love APL, and I use J too, as desktop calculators up to programming some quick mathy things for work and fun. My latest love is APRIL (Array Programming Re-Imagined in Lisp) [1]. It allows me to use the libraries and legacy of Lisp for a lot of the drudgery in all programming, and APL to sling numbers.

I dabble with BQN and Klong.

My first and only experience with APL was on February 5, 2010, at the ACM ICPC world finals. I spent probably an hour and a half trying to figure out how to write a right-associative parser, and never ended up submitting a working solution.

Anybody here write a working solution to that problem? Or remember it, even?

Is there a comprehensive comparison between the strengths/weaknesses of APL and numpy?

I have been interested in APL and friends for some time, but have not yet seen an example which made me commit to taking the time to learn it. numpy (and JAX) have been doing everything I wanted, so far.

I am not aware of such a comparison, but I hope this quote will encourage you to have a serious look at APL:

> i don't know one week of studying some APL i could just visualize how I would do that and I would just run and no error every time. Like one week of study APL did more for my expertise, you know, by then more than one year coding in JAX and NumPy.

João Araújo in The Array Cast: https://www.arraycast.com/episodes/episode33-joao-araujo

Thanks for the suggestion! The quote indeed sounds promising. I was a bit disappointed by the episode though, because they don't really go into what exactly the learnings are. I guess that is to be expected from a podcast in which _each_ episode is about APL :D


I always wondered how one can start learning APL or similar array oriented language. Any suggestions? I'd like to give it a try!

I think BQN is a lot of fun; it is an array language like APL, but it does some things differently: https://mlochbaum.github.io/BQN/ The documentation is great.

Dyalog APL is more of an actual APL, https://www.dyalog.com/

There is also J https://www.jsoftware.com/#/

The array language community on discord "The Apl Farm" (https://discord.gg/SDTW36EhWF) is pretty active.

I also love the Arraycast podcast https://www.arraycast.com/ many amazing interviews and interesting discussions.

Start here (disclaimer: author) https://xpqz.github.io/learnapl/intro.html

I think the j learning materials are best, particularly 'learning j' (https://www.jsoftware.com/help/learning/contents.htm), though others, like 'j for c programmers' (https://www.jsoftware.com/help/jforc/contents.htm), are also worth a look.

There is a book on Klong https://t3x.org/klong/book.html

I've been using Klong as it is small, quick to build, and has a good intro, reference, and quick reference.

One downside is that it is somewhat memory-limited.

To add to the list, this one is pretty good;

Head on over to apl.chat! People are really friendly over there.

Do the people here have an opinion on cool projects made with or for APL?

The obvious: Co-dfns. It's an APL compiler that generates GPU code. But the interesting thing is that it's also _hosted_ on the GPU!

Has this guy not heard of Matlab? He's acting like the idea of everything as an array is some niche forgotten thing in a long-dead language, meanwhile practically every Engineering department is chock-a-block with people who take the "everything is an array" idea way beyond its logical extent.

Array programming languages are about more than just broadcasting. What you said is kind of like saying C is a functional language because you can pass function pointers around.

Also, Matlab's treatment of arrays is derived directly from APL.

Came here to say the same thing, and add that GNU Octave is pretty good too. Its libraries didn't seem as diverse as MATLAB's when I tried it, but since it's open source, maybe that's changed.

Also, when I tried SSE and AltiVec about 20 years ago for SIMD, they were really fast, but I was flabbergasted that the instructions were fixed-length. I wanted something more like the x86 string instructions, so that I could do fused multiply-adds on arrays of floats without having to manually unroll loops to process 4 elements at a time:

https://docs.oracle.com/cd/E19120-01/open.solaris/817-5477/e...

Looks like Arm is trying to do variable-length vectors with Scalable Vector Extension (SVE) but it's limited to 2048 bits, which is unfortunate IMHO:

https://developer.arm.com/documentation/101726/0400/Learn-ab...

https://alastairreid.github.io/papers/sve-ieee-micro-2017.pd...

Matlab is pretty awesome for doing scientific work and building plots/graphs. It's array/matrix based, but very different from APL in how one goes about programming.

Fortran on training wheels is not an array language. Put this in your pipe https://www.eecg.utoronto.ca/~jzhu/csc326/readings/iverson.p....

Matlab was created for people too dumb to use Fortran. APL was invented for people too smart to use Fortran.

This seems oversimplistic, to say the least. Matlab is used over Fortran because a lot of scientific and numeric work doesn't need to be as fast as Fortran, so having something much easier and faster to develop in (with a REPL, a data inspector, top-tier graphics, GUI building, etc.) is worth the performance cost, since the overall time (including developer time) is much lower in a lot of cases. I've worked with a lot of brilliant academics and national-labs folks, and they mostly use Python, Matlab, or Julia for their work. I doubt a single one of them can't learn Fortran. Indeed, some use it when needed on supercomputers after they've built a prototype in Matlab. Remember, there was a time when scientists thought anyone who needed Fortran over Assembly, or Assembly over manually configuring the computer, was just too dumb/spoiled.

Tldr: Fortran, C/C++, Matlab, Python, Julia are all great and serve different roles with pros and cons.

