In my opinion it's a tragedy that there are so few resources that use "Propositions as Types"/the Curry–Howard correspondence[0] didactically, in tandem with teaching functional programming, to teach structured proof-writing.
Many students do not feel comfortable with proof-writing and cannot correctly discharge implications or quantifiers when writing proofs, and I believe a structured approach via the Curry–Howard correspondence could help.
[0]: https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspon...
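For readers who haven't seen it: under Curry–Howard, a proposition is a type and a proof is a program of that type. A minimal sketch in Haskell (my own example, not from any particular course): proving "A and B implies B and A" is exactly writing a total function of the corresponding type.

    -- Pairs encode conjunction, Either encodes disjunction,
    -- and the function arrow encodes implication.
    andComm :: (a, b) -> (b, a)
    andComm (x, y) = (y, x)

    orComm :: Either a b -> Either b a
    orComm (Left x)  = Right x
    orComm (Right y) = Left y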
While I am first in line to say that programming is math whether you like it or not, it is certainly the case that few programmers are using that sort of math in their heads when they are programming. I suspect that if, oh, about 60-70 years of trying to change that has not made any headway, there is little reason to suspect that's going to change in the next 30.
But you can sort of backdoor the idea with something like this: https://jerf.org/iri/post/2025/fp_lessons_types_as_assertion... I won't claim it is exactly the same, but it is certainly similar. Tell a programmer, "Hey, you can build a type that carries certain guarantees with it, so that the rest of the program doesn't need to constantly re-validate it", a problem any 5+ year dev ought to be very familiar with, and I think you make more progress in the direction of this notion than with a paper full of logic diagrams in a notation even most computer science undergrads will never encounter.
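A minimal sketch of that pattern (the names here are hypothetical, not from the linked post): validate once at the module boundary, and let the type itself carry the guarantee so downstream code can't receive an unvalidated value.

    module Email (Email, mkEmail, emailText) where

    -- The constructor is not exported; the only way to obtain an
    -- Email is mkEmail, which performs the validation.
    newtype Email = Email String

    mkEmail :: String -> Maybe Email
    mkEmail s
      | '@' `elem` s = Just (Email s)  -- deliberately naive check
      | otherwise    = Nothing

    emailText :: Email -> String
    emailText (Email s) = s

Anything downstream that takes an Email can assume the check already happened, instead of defensively re-validating a raw String.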
(I'm not even certain if I directly encountered that in my Master's degree. It's been a while. I think I did, but I think it was only one class, and not a huge part of it. I'm not sure because I know I did more with it on my own, so I can't remember if I was formally taught it or if I picked it up entirely myself.)
While I generally agree with defining new types to assert that validation has been done, I think your blog post could have explained more about what kinds of validation are practical to do. For example:
> Address that represents a “street address” that has been validated by your street address [service] to exist
What does it even mean to verify that a street address exists? Verifying real-world relationships is complex and error-prone. I’m reminded of the genre of articles starting with “Falsehoods programmers believe about names” [1].
In practice, the rules that can be enforced are often imposed by the system itself: it simply doesn't accept data that isn't in the correct format or isn't consistent with other data it already has. And we need to be cautious about what harm these rejections might cause.
[1] https://www.kalzumeus.com/2010/06/17/falsehoods-programmers-...
I've been thinking the same thing and mentioned it the other day[0]. When I was learning proofs, I saw people struggle with the idea of needing to choose a "generic" element to prove a forall statement, for example. I suspect it might make more sense if you taught things in terms of needing to write a function x => P(x).

I think in some cases, thinking in terms of programming might also change how we structure things. E.g. define a data structure for a "point with error tolerance" (x, epsilon); then continuity says: given a toleranced point y* at f(x), I can find a toleranced point x* at x such that f(x*) is "compatible" (equal at the base point and within tolerance) with y*. This factoring lets you avoid shocking new students with quantifier soup. Likewise, the chain rule is just straightforward composition once you define a "derivative at a point" data structure Df((x, m)) = (f(x), m*f'(x)).
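A rough Haskell sketch of that last idea (my own encoding of the (point, slope) pairing described above, which is essentially forward-mode dual numbers): the chain rule becomes ordinary function composition on the lifted functions.

    -- A "derivative at a point": a value x paired with a slope m.
    type Dual = (Double, Double)

    -- Lift a function together with its known derivative,
    -- implementing Df((x, m)) = (f x, m * f' x).
    lift :: (Double -> Double) -> (Double -> Double) -> Dual -> Dual
    lift f f' (x, m) = (f x, m * f' x)

    dSin, dExp :: Dual -> Dual
    dSin = lift sin cos
    dExp = lift exp exp

    -- Chain rule as composition:
    -- (dExp . dSin) (1, 1) = (exp (sin 1), cos 1 * exp (sin 1))
    main :: IO ()
    main = print ((dExp . dSin) (1.0, 1.0))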
This is not at all new, but I suppose it's currently a lot to ask students to learn proof mechanics while also unwinding multiple layers of definitions for abstractions. Computers can help do the unwinding (or lifting) automatically, which makes it easier to introduce small "quality of life" definitions that otherwise wouldn't be worth much, since pen-and-paper proofs would always be unwinding them anyway.
Basically, math education could look a lot like software engineering education. The concerns and solution patterns are largely the same. E.g. typeclasses are pretty much how mathematics does polymorphism, and they are probably usually the right way to do it in programming too.
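For instance (a standard example, my own rather than the parent's): "a monoid is a set with an associative operation and an identity" becomes a typeclass, and anything written against the class holds for every instance, much like a theorem proved once for all models.

    class MyMonoid a where
      unit :: a
      op   :: a -> a -> a

    -- Lists form a monoid under concatenation.
    instance MyMonoid [b] where
      unit = []
      op   = (++)

    -- Written once against the abstraction, works for every instance:
    mconcat' :: MyMonoid a => [a] -> a
    mconcat' = foldr op unit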
One of the best modern resources is "Programming Language Foundations in Agda", co-authored by Wen Kokke and the same Philip Wadler, and used for teaching at multiple universities.
The connection to intuitionism (see https://en.wikipedia.org/wiki/Intuitionism) gives this kind of thinking much broader appeal and application. It seems to me we live in a time so dominated by analytical thinking that we completely ignore other useful and effective modes.
Intuitionism in this context just means the proofs have to be constructive. (no proof-by-contradiction, or other half-assed academic hacks)
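Concretely, in Curry–Howard terms (a standard observation, sketched here in Haskell): some classically valid tautologies have no total program inhabiting their type, and that gap is exactly what "constructive" rules out.

    import Data.Void (Void)

    -- Negation of a is "a implies falsehood".
    type Not a = a -> Void

    -- Constructively fine: a implies not-not-a.
    doubleNegIntro :: a -> Not (Not a)
    doubleNegIntro x k = k x

    -- Classically valid, but no total program has these types:
    -- excludedMiddle :: Either a (Not a)
    -- doubleNegElim  :: Not (Not a) -> a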
This may not go over as well as you'd think.
I took Haskell in the same uni course as FSAs, Turing machines, lambda calculus, natural deduction, Hoare logic and structural induction.
So I was exposed to the "base case & step case" of structural induction at the same time as learning about it in Haskell. And for whatever reason it didn't leave a good impression. Implementing it formally was harder and more error-prone than shooting from the hip in an IDE. What was the point!?
Now I smash out Haskell code daily as the quickest way to end up with little binaries that just-work-if-they-compile. It took me a while to realise that the upside of all this formality/mathiness is that other people did the proofs so that you don't need to. I get to take for granted that a boolean is true or false (and not some brilliant third value!) and that (map f . map g) is (map (f . g)).
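A quick demonstration of that second law (my own example), which library authors justify with exactly the kind of proofs the comment above is happy to skip:

    unfused, fused :: [Int] -> [Int]
    unfused = map (+1) . map (*2)
    fused   = map ((+1) . (*2))

    main :: IO ()
    main = print (unfused [1, 2, 3] == fused [1, 2, 3])  -- True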
“A man is rich in proportion to the number of things which he can afford to let alone.” ― Henry David Thoreau, Walden or, Life in the Woods
If I have to do the proof, then as you say, it's probably harder and (in many cases) more error prone than if I didn't use a proof. If the language can do it for me, and I get the lack of errors without having to do the work (and without the chance of me making the mistakes)? Yeah, now we're talking. (Yes, I am aware that I still have to design the types to fit what the program is doing.)
If a tool introduces complexity, it has to make up for it by eliminating at least that much complexity somewhere else. If it doesn't, the tool isn't worth using.
As much as the concept blew me away when I first heard of it, I can't shake the feeling that the Curry-Howard correspondence is somehow mis-marketed as something that would immediately cater to programmers. The idea to encode propositions into types sounds enticing for programmers, because there are indeed a lot of cases where something needs to be conveyed that can't be conveyed using type systems (or other features) of common programming languages.
Examples include the famous "the caller of this function guarantees that the argument is a non-empty list" but also "the caller guarantees that the argument object has been properly locked against concurrent access before using the function".
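The first of those is already expressible in ordinary Haskell via Data.List.NonEmpty (the locking example genuinely needs heavier machinery, which rather supports the point):

    import Data.List.NonEmpty (NonEmpty (..))
    import qualified Data.List.NonEmpty as NE

    -- "The caller guarantees the list is non-empty" is now a
    -- compile-time fact rather than a comment.
    safeHead :: NonEmpty a -> a
    safeHead = NE.head

    main :: IO ()
    main = print (safeHead (1 :| [2, 3]))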
However, in my experience, the community is more interested in mathematics than in programming, and I don't know whether anybody is really working to provide propositions-as-types to mainstream programmers. This is surely because it is hard enough to prove soundness for a strict functional programming language such as Agda or Rocq, let alone for anything imperative, stateful, "real-world", non-strict, non-pure, concurrent, ill-defined, you name it.
So, for me the promise of "showing programmers that what they do is actually mathematics" is not really kept as long as the definition of "programmer" is so narrow that it excludes the vast majority of people who define themselves as programmers.
What I'm trying to say: I wish there were more efforts to bring more powerful formal methods, especially ones as powerful as dependent types, to existing mainstream programming languages, where they could be useful in an industrial context. Or at least attempts to come up with new programming languages that are more pragmatic and do not force the programmer into some narrow paradigm.
Am I wrong? I hope so :)
Programmers regularly do this stuff under various names/phrases. "Make illegal states unrepresentable", "parse, don't validate", "resource acquisition is initialization", etc. It's all about encoding the fact that your structure isn't just data, but also represents some proof of provenance that guarantees some property.
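A standard illustration of the first slogan (my own example, in Haskell): replace a record whose fields can disagree with a sum type where each state carries exactly the data that is valid in that state.

    newtype Socket = Socket Int  -- stand-in for a real handle

    -- Illegal-state-prone: nothing stops connected = False alongside
    -- Just a socket, or connected = True with Nothing.
    -- data Conn = Conn { connected :: Bool, socket :: Maybe Socket }

    -- Illegal states unrepresentable:
    data Conn
      = Disconnected
      | Connecting { retries :: Int }
      | Connected  { sock :: Socket }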
The author has given a few talks built around the same concept: https://www.youtube.com/watch?v=IOiZatlZtGU