How funny if we eventually come full circle and such knowledge bases end up like low-background steel in the economy of AI training. Either way, Eurisko remains one of the most interesting pieces of software ever created. I'd be fascinated if someone managed to create some sort of chain of thought system based on its heuristics but driven by modern LLMs. I don't think this is the direction in which AGI lies, but in the right domains it could be a powerful exploratory or debugging system.
The ultimate irony of all these ontological portraits is that they are each so isolated that no matter how much I actively read about them, I keep finding completely new rabbit holes.
If it weren't for your comment, how could I have heard or read about Eurisko?
Funny thing: after early deployments of a few Garbage Producing Transformers, the knowledge base will be drowned out by the mess they create, and even humans will have trouble telling what is right, much less a machine. It will take an ever-increasing amount of effort to work out what is sense and what is a convincing lie.
Dead Internet Theory is already in motion.
This was already true of the internet. IMO staying within relatively curated spaces keeps the occurrence of this kind of noise low.
The issue of the public not being able to tell truth from fiction has existed since the beginning of society. Every improvement in communications technology has led to a massive increase in propaganda and confusion that spikes and then eventually levels out as people learn to tune it out.
Increasing the rate at which people can deploy lies has happened many times in history. This is just one more instance.
He was lost just over one year ago:
https://writings.stephenwolfram.com/2023/09/remembering-doug...
The fundamental flaw with Lenat's approach is that humans do not think with first order predicate calculus. We do not juggle a bunch of statements/rules the way Cyc does, but have a very fast, efficient and powerful conceptual approach to the world with a massive amount of recursive conceptual inference.
Effectively, you need to understand (compute with) what concepts are before using them in statements. And the most common mistake is to confuse the definition of a concept with the concept itself, as happens typically with knowledge graphs.
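To make the contrast concrete, here's a minimal sketch of the kind of statement/rule juggling being described: forward chaining over first-order-style facts until a fixpoint. All predicate and constant names here are hypothetical illustrations, not actual CycL.

```python
# Minimal forward-chaining sketch over (predicate, subject, object) facts.
# All predicate and constant names are hypothetical, not real CycL.

facts = {("isa", "Fido", "Dog")}

# Each rule: if (body_pred, X, body_obj) holds, assert (head_pred, X, head_obj).
rules = [
    (("isa", "Dog"), ("isa", "Animal")),              # every Dog is an Animal
    (("isa", "Animal"), ("capableOf", "Breathing")),  # every Animal can breathe
]

changed = True
while changed:  # naive fixpoint: keep firing rules until nothing new appears
    changed = False
    for (body_pred, body_obj), (head_pred, head_obj) in rules:
        for (pred, subj, obj) in list(facts):
            if pred == body_pred and obj == body_obj:
                derived = (head_pred, subj, head_obj)
                if derived not in facts:
                    facts.add(derived)
                    changed = True

print(sorted(facts))
```

Even this toy version shows the flavor of the objection: everything the system "knows" is another tuple in the pile, with no conceptual structure beneath the symbols.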
That argument sounds awfully like Drew McDermott's A Critique of Pure Reason from 1987 which was one of the things that made me get out of the "good old fashioned" AI world in the mid 1990s.
this was my main takeaway from 20th century philosophy lol
> The fundamental flaw with Lenat's approach is that humans do not think with first order predicate calculus. We do not juggle a bunch of statements/rules the way Cyc does, but have a very fast, efficient and powerful conceptual approach to the world with a massive amount of recursive conceptual inference.
Source?
This isn’t Reddit, evaluate the statement without needing to link to some mainstream media outlet.
The OP is making a statement about how humans think and why this is at odds with Cyc. I don't think it is wrong to ask about a source on this. Otherwise I can claim anything. This has nothing to do with Reddit.
> Humans do think with first order predicate calculus. We juggle a bunch of statements/rules the way Cyc does, and don't have a very fast, efficient and powerful conceptual approach to the world with a massive amount of recursive conceptual inference.
Great discussion, thanks.
I am the source (working on conceptual computing since the late 90s), and have built a conceptual computing platform based on it (In common lisp of course): https://graphmetrix.com/trinpod-server
“Lenat argues that we just don't have the data needed to reach common sense through these newer methods. Common sense isn't written down. It's not on the Internet. It's in our heads. And that’s why he continues to work on Cyc.”
“At one point, Lenat remembers, it suggested he could win the game by changing the rules. But each morning, Lenat would rejigger the system, pushing Eurisko away from the ridiculous and toward the practical. In other words, he would lend the machine a little common sense.”
Really enjoyed this 2016 article.
For safety-critical applications that use black-box ML models, I feel like a rules engine paired with a knowledge graph needs to kinda sandwich the ML black box: to "prompt engineer" the input and ensure the output aligns with common sense and safety standards.
Edit: clarity
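A minimal sketch of that sandwich pattern, where deterministic rule checks bracket an opaque model (the thresholds, rule checks, and stand-in model are all hypothetical):

```python
# Sketch of the "sandwich" idea: rule checks before and after an opaque
# ML model. The model and the safety rules here are hypothetical stand-ins.

def input_rules_ok(reading: float) -> bool:
    # Pre-check: reject physically impossible sensor readings.
    return -50.0 <= reading <= 150.0

def black_box_model(reading: float) -> float:
    # Stand-in for an opaque ML model; here, a trivial heuristic.
    return reading * 1.1

def output_rules_ok(command: float) -> bool:
    # Post-check: only accept commands inside a safe actuation envelope.
    return 0.0 <= command <= 100.0

def safe_pipeline(reading: float, fallback: float = 0.0) -> float:
    if not input_rules_ok(reading):
        return fallback              # refuse garbage input
    command = black_box_model(reading)
    if not output_rules_ok(command):
        return fallback              # veto unsafe output
    return command

print(safe_pipeline(40.0))   # passes both checks
print(safe_pipeline(200.0))  # rejected by the input rules
```

The point of the design is that the model never gets the last word: both rule layers are auditable, which the black box in the middle is not.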
When Margaret Hamilton developed the Apollo guidance systems, she ran three teams which all wrote completely independent guidance systems. She then wrote herself a program to arbitrate between them when they disagreed.
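A two-out-of-three voter along those lines might look like this. This is a toy sketch of the general redundancy idea, not the actual Apollo or Shuttle arbitration logic (and the Apollo detail is disputed in the replies):

```python
# Minimal 2-out-of-3 majority voter over redundant computations.
# A hypothetical sketch of triple modular redundancy, not flight code.

def vote(a: float, b: float, c: float, tol: float = 1e-6) -> float:
    """Return a value that at least two of the three channels agree on."""
    if abs(a - b) <= tol:
        return (a + b) / 2
    if abs(a - c) <= tol:
        return (a + c) / 2
    if abs(b - c) <= tol:
        return (b + c) / 2
    raise RuntimeError("no two channels agree")

print(vote(1.0, 1.0, 5.0))  # third channel out-voted
```

The independence of the three implementations is what makes the vote meaningful; a shared bug defeats the scheme.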
I thought that the Apollo system was barely able to execute on the available hardware, so there was no redundancy.
I think that the space shuttle did feature this kind of system though.
I've got a nasty feeling you're right. I can't find any references at all really. My understanding was the shuttle was using very similar tech to Apollo but I can verify almost nothing.
Really they should've had three programs deciding.
It appears that a 1981 version of the source code is both available and runnable: https://github.com/seveno4/EURISKO
(The source for both AM and Eurisko was discovered at the same time.)
I was reading through the source code for both (and the Interlisp manual, but mostly Eurisko) earlier this year and found it quite inspiring. Some of the most meta(-meta-heuristic search) code ever written! Now I'm working on a similar project and am looking for parts of AM's and Eurisko's algorithms which I can incorporate! I'll write something up if I have any success.
Eurisko is far more abstract than AM, but the data structures it uses (structures with named 'slots' (members)) make the code vastly more readable (AM is pretty damn unreadable despite plenty of comments). Lenat himself said the better representations are what made Eurisko possible, unsurprisingly. If you do want to understand the implementation of AM in incredible detail you can read Lenat's PhD thesis; no detailed documentation of Eurisko exists. Also, both programs contain lots of code to print out what the system is currently doing, which is incredibly helpful for deciphering the code.
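Those slot-based structures are easy to mimic. Here is a toy unit with named slots, loosely in the spirit of Eurisko's representation; the slot names and the worth score are hypothetical approximations, not the real slot vocabulary:

```python
# Toy "unit with named slots" representation, loosely in the spirit of
# Eurisko's data structures. Slot names here are hypothetical.

units = {
    "Prime-Numbers": {
        "isa": ["Math-Concept"],
        "worth": 800,   # Eurisko-style interestingness score (made up)
        "definition": lambda n: n > 1 and all(n % d for d in range(2, n)),
        "examples": [],
    }
}

def get_slot(unit: str, slot: str):
    return units[unit].get(slot)

# Fill the examples slot by running the definition slot.
defn = get_slot("Prime-Numbers", "definition")
units["Prime-Numbers"]["examples"] = [n for n in range(2, 20) if defn(n)]

print(units["Prime-Numbers"]["examples"])
```

Because slots are just named fields, heuristics can inspect and rewrite other units (including heuristics themselves) uniformly, which is what made Eurisko's self-modification tractable.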
The inline comments noting when a function was last edited, as a very rough form of version control, are quite something.
At that time rules systems (expert systems) were very popular
I wrote one as part of my first paid job (holiday job between school and university) in the late 80s. My prototype was eventually turned into a production system that was used for many years.
Related:
[1] OpenCyc CVS repository on sourceforge: https://sourceforge.net/p/opencyc/code/
[2] Somebody made the effort to download/upload everything together on GitHub (with state of OpenCyc of 03/2018): https://github.com/asanchez75/opencyc
OpenCyc is mostly just the core ontology (knowledge graph). I'm told it is nothing in comparison to Cyc. None of the interesting algorithms (just some basic inference). It was so terribly named that they had to discontinue it.
But I imagine it's still a great resource. Haven't played with it.
Related thread from when Doug Lenat passed away in 2023
This does not detract from the potential usefulness of the effort, since the work does not depend on any particular word's definition, so this is merely a side note, "FYI": it appears that the "common" in common sense does not exist. It could only be found in "plainly worded, factual claims about physical reality"; "We also find limited presence of collective common sense, undermining universalist claims and supporting skeptics" (from the study abstract).
https://www.theguardian.com/commentisfree/2024/sep/30/i-took...
The article links to the study.
> Common sense is not that common: a recent study from the University of Pennsylvania concludes the concept is “somewhat illusory”. Researchers collected statements from various sources that had been described as “common sense” and put them to test subjects.
> The mixed bag of results suggested there was “little evidence that more than a small fraction of beliefs is common to more than a small fraction of people”.
What this study does show, and what is relevant for the submitted article, is that whatever he trains the AI for may not be something "common".
I think nitpicking the commonness of common sense is taking the term a bit too literally. I see it as another way to talk about intuition or gut feelings; knowledge that has some uncertain or wishy-washy origin, probably an amalgamation of experience (which naturally differs for everybody) and informal automatic reasoning (which may often err, but is nonetheless important for the way people navigate life because it can be done more efficiently than more rigorous reasoning.)
This isn't nitpicking, what are you talking about!?
When someone tries to give an AI "common sense" talking about exactly what I said is important! Because that person is NOT going to do anything "common", (s)he is implementing their own version and biases!
That distinction is at the very heart of the matter attempting to be achieved, not some "nitpicking" about terms!
I am not really on board with this study's interpretation of common sense - it seems to measure how close the things that I think are to the average. Which is a very literal take on 'common' sense. Considering the tests have a ton of questions about spirituality, and advanced technologies, for me the exercise devolved into trying to guess what others would think of it (which turned out to be the point after all).
And the author herself seems really concerning. She is someone who, by her own admission, struggles with grade-school math such as fractions, yet proclaims her intellectual superiority over people who think 'obviously' silly things, like Ivermectin curing covid.
The idea of horse medicine curing Covid makes about as much sense as heart medicine helping with erections.
Although, somewhat amusingly, she seems to score below average on the common sense test.
The very cynical take that journalists are both substandard critical thinkers and unwilling to consider alternative viewpoints seems to be true in this case.
"heart medicine helping with erections."
Why should it be nonsense? Erections are a function of the circulatory system. One could expect them to improve if the circulatory system is improved as well.
I'm wondering if Chris McKinstry used this as an inspiration for his MindPixel project, which was a curated collection of short, validated true/false statements. Cyc uses a structured grammar while Mindpixels were in English.
His intent (as he stated online, and not mentioned in the Wikipedia page) was to have a collection of sentences which could be used to give an AI some idea of what it was like to be human.
What exactly is it? I get the sense it's basically a knowledge graph and an inference engine. But what was it actually doing in terms of submitting queries and spitting out takeaways in human readable form?
I'm extremely skeptical about the anecdotes about the game as an indicator of this thing's competence. It sounds unlikely that this thing actually encoded any sort of game state or nuanced simulations and was really just spitballing on vague strategies that just happened to find some cheese (twice?). I'm guessing they had to play the strats until one of them proved valuable, and it's kind of weird and surprising that they thought this was a good use of their time and model.
This should perhaps have [2016] in the title.
RIP, Doug.
He died August 31, 2023, age 72.
2016.
Interesting read considering it was written right before the advent of LLMs.
Are there any good sources online for how this is used concretely/queried? And what CYCL looks like? I have tried finding it online out of interest, but all sources only describe the system generally and don't show any examples of what you could derive.
An oracle filtering a random generator will always produce oracle-grade results. Is the computer actually getting better, or is Lenat just acting as the oracle and the writer running with it?
Alright, talk is cheap, show me the code.
Is Cyc essentially a 15 million line long Prolog program? Based on the article’s hand wavy description that’s the best I can put together.
It's a discovery system: https://en.wikipedia.org/wiki/Discovery_system_(AI_research)
And from that wikipedia article: "The dream of building systems that discover scientific hypotheses was pushed to the background with the second AI winter and the subsequent resurgence of subsymbolic methods such as neural networks. Subsymbolic methods emphasize prediction over explanation, and yield models which works well but are difficult or impossible to explain which has earned them the name black box AI. A black-box model cannot be considered a scientific hypothesis, and this development has even led some researchers to suggest that the traditional aim of science - to uncover hypotheses and theories about the structure of reality - is obsolete.[7][8] Other researchers disagree and argue that subsymbolic methods are useful in many cases, just not for generating scientific theories."
which sounds very intriguing to me :-)
The inference engine is implemented in SubL (see https://cyc.com/archives/glossary/subl/ and https://web.archive.org/web/20120513111513/http://www.cyc.co...) which is a variant of Common Lisp, and the knowledge base is implemented in CycL which is divided into the Epistemological Level (EL) and the Heuristic Level (HL).
There was a source-available version called OpenCyc with a small subset of the knowledge base that is now retired and no longer officially available, but it's still easy to find on the net.
That’s about right, but written in Lisp.
Perhaps the structured dataset he built can now be used to train language models on a small corpus of text to get reasonable common sense?
2016
Around that time, the early-to-mid 1980s, I did some work in a programming language called Savvy. It was some kind of tool for building expert systems based on data. Google doesn't seem to know about it. It was very obscure, but my father and I built a tool for the Institute of Metals and Material Australia for assisting engineers/metallurgists in picking the right kind of metal alloy for a specific job. It's the first useful software I ever wrote, but I can't find a record of it anywhere. Time has erased it.
We had used a primitive rolling hand scanner to scan in huge tables of metal alloy data along with recommended usages. My father wrote the query engine using Savvy and I wrote the text based windowing UI.
Would be neat if someone could turn up some record of it somewhere. My father has been retired since ever and has no records of the system either.
I remember those scanners, too.
You can approach the group who are apparently now known as Materials Australia https://www.materialsaustralia.com.au/ and they should have an archivist or historian with access to period records and publications.
Failing that, visit the state library in your state and request the assistance of a research librarian. They should be able to help you locate potential records or journals if it was ever discussed.
Found it. https://www.ascilite.org/conferences/aset-archives/confs/edt... IMMAMAT - https://files.eric.ed.gov/fulltext/ED323968.pdf ( page 34 )