- The Vision
- Collaborative Reasoning begets Collaborative Solving
- Intents are the New Prompts & Search Queries
- The Problems
- The Group-Work Problem
The Vision
We are leaving the Information Age behind. Data is no longer “the new gold”; knowledge is. Recent developments in the artificial intelligence industry have given us a teaser of what humans can do when our machines behave as if they really understand us. Using natural language to communicate with machines is only one part of the intelligence puzzle, though. Sometimes, we need machines to do more than merely “sound correct”; we need them to be correct.
The other side is reasoning. The statistical paradigm isn’t well-suited for reasoning about anything critically important because, at its core, statistical AI is the machine equivalent of making “educated guesses.” Its ability to guess more accurately can be improved, but the possibility of being wrong will always exist. Bigger models, longer chains of thought, and more fine-tuning can go a long way towards improving AI performance, but they stop short of “correctness.”
While hosted LLMs make low-stakes tasks easy, it’s far less palatable to entrust them with decisions when the stakes are high. LLMs are good at outputting answers that appear correct at face value. But in a situation that requires the machine to output something that not only appears correct but is correct, the burden of correctness falls back on the human.
We need correctness as a guarantee if we want to depend on AI technology for high-stakes tasks. We need machines that reason rather than guess; whose thinking is driven by formal logic & proof rather than probabilistic weights.
This benefits LLMs as well: LLMs approximate the more basic, primitive form of associative thinking that comes so naturally to human beings. The entry point into thinking doesn’t require formal, mathematical rigor. As any mathematician will admit: first comes creative ideation; second comes formalization and justification. We have already seen prototypical examples of generative AI playing reasonably well with symbolic computation & semi-automated reasoning.
Networks like Ocean Protocol brought us an open, global market for data. Recently, many networks have emerged with the goal of delivering open access to machine learning capabilities. Reasoning is the next frontier.
Collaborative Reasoning begets Collaborative Solving
For years, the Khalani team has been working in private on developing the necessary infrastructure for global, open access to mechanized reasoning (this won’t come as a surprise to those who are familiar with the origins of the name “Khalani”). The developments in AI over the last two years have only strengthened our belief in Khalani’s mission. Following the surge of new capabilities in AI technology, two things are now more obvious than ever:
- Artificial intelligence will only become more fundamental to all other technology sectors
- AI technology today can’t reliably justify or explain itself. In short, it can’t reason.
Search engines gave us curated lists of likely-relevant sources of information. GPTs gave us access to personalized retellings of such information, and even the ability to generate new information assets out of pre-existing data. These are the pinnacles of the Information Age.
Khalani aims to take a step into what is called the “Reasoning Age” by making mechanized reasoning globally scalable, networked, indexable, and searchable. This will not only enable collaborative reasoning at global scale; it also forms the bedrock of something even more powerful: collaborative solving. For machines to autonomously coordinate around a goal and then perform actions to achieve it, they must first be able to coordinate around a problem and generate ideas that bring them closer to a solution.
Intents are the New Prompts & Search Queries
If you’re familiar with Khalani, you’re likely familiar with the term “collaborative solving”, since we’ve been talking about it for a little while now! Khalani also operates within the “intents” ecosystem, so one may wonder: where do intents come into the picture? From Khalani’s perspective, intents and solving are two sides of the same system. At the highest level of generality, intents replace API requests: they are the medium through which users & applications plug into the system, as well as the medium through which solvers themselves coordinate their actions. In this sense, intents are to the Reasoning Age what search engine queries (and more recently, LLM prompts) are to the Information Age.
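To make the contrast with API requests concrete, here is a minimal sketch of what an intent might look like as a data structure. Every name and field below is a hypothetical choice of ours for illustration, not Khalani’s actual schema; the point is only that the requester declares a desired end state and its constraints, rather than a concrete endpoint and call sequence.

```rust
// A toy intent (all names hypothetical): the user states a goal and
// constraints, not the sequence of calls that achieves it.
#[derive(Debug)]
struct Intent {
    goal: String,             // what must be true once the intent is fulfilled
    constraints: Vec<String>, // conditions any acceptable solution must satisfy
    deadline_secs: u64,       // how long solvers may work on it
}

fn main() {
    // Contrast with an API request, which names a concrete endpoint and
    // method: here the requester only describes the desired end state.
    let intent = Intent {
        goal: "swap 100 USDC for at least 0.03 ETH".into(),
        constraints: vec!["settle within one block".into()],
        deadline_secs: 60,
    };
    println!("broadcasting to solvers: {intent:?}");
}
```

A solver, or a group of collaborating solvers, is then free to choose any plan that satisfies the goal and constraints.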
In the future, we will be able to open our laptops, fire up our browser, and type in a problem we want solved. The problem will be solved, and the next thing we see on our screens will be a proof that our problem was, indeed, taken care of.
The Problems
Building mechanized reasoning infrastructure comes with unique challenges. First, automated reasoning typically requires significant formalization of a problem in a precise, mathematical language. Formalization is unpleasant and inconvenient for many software developers. Second, automated reasoning doesn’t consistently work for every problem, nor does it work equally well for each kind of problem it can handle (neither does verifying the outputs of reasoning). Third, modern proof systems are not optimized for machine-to-machine reasoning; the standard assumption is that all proof certificates will be re-generated (or, at the very least, re-checked) whenever they are used as lemmas in subsequent proofs.
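To make that third problem concrete, here is a minimal Lean 4 sketch (the theorem names are ours, chosen for illustration). Inside a single trusted kernel, a lemma is checked once and then cited by name; between machines that don’t share a trusted kernel, the certificate behind the lemma must be shipped along and re-checked before any proof citing it is accepted.

```lean
-- The kernel checks this lemma once…
theorem add_comm_nat (a b : Nat) : a + b = b + a := Nat.add_comm a b

-- …and later proofs cite it by name rather than re-deriving it. Across
-- machines without a shared trusted kernel, the certificate behind
-- `add_comm_nat` would have to be re-checked before this proof is accepted.
theorem shuffle (a b c : Nat) : a + (b + c) = a + (c + b) := by
  rw [add_comm_nat b c]
```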
I call that last problem the “group-work” problem.
The Group-Work Problem
To illustrate the significance of the group-work problem, let’s talk about people instead of machines.
Imagine you are at a conference and you run into a couple of your acquaintances. You know them well enough: you all follow the same annual circuit of events in your industry. You begin to make small talk with them. Being the exciting person that you are, you find yourself telling them about a recent visit to the gas station, during which you were surprised at what seemed to you to be a sudden and unexpected drop in fuel prices. After you finish your story, the first friend may respond with “oh yeah, I noticed that as well!” before speculating as to the cause of the sudden price drop in the area. Your second friend may follow up with a similar response. In any case, notice that each friend’s response took for granted that everyone present already knew the story to which they were responding.
A conversation is an iterative process of building off of what others have already said, owing to the fact that the prior contents of the conversation are common knowledge amongst the participants. Imagine the inconvenience if, instead, the first friend’s response were prefaced by their own verbatim re-telling of the story you just told. Further, imagine if the second friend’s response were prefaced not only by a re-telling of your story, but by a re-telling of the first friend’s entire response, word-for-word.
This is roughly analogous to what it would look like if a bunch of theorem provers were made to talk to one another. Now, if they all trust each other, certain optimizations become possible: established results can be cited rather than re-proved or re-checked. But this kind of trust is a hard property to scale to many participants, especially when the set of participants is not fixed from the start. The situation becomes more complicated still when we consider concurrent conversations and partially-overlapping conversations.
Machines communicate all the time over various protocols. So why is machine-to-machine reasoning so challenging? The fruits of reasoning are justifications and explanations. Just as with humans, it is easier (more efficient) to collaborate when a group has high levels of mutual trust, an upgradeable shared body of knowledge, an agreed-upon understanding of what constitutes “valid reasoning,” and the ability to continuously abstract multi-step reasoning into single steps at a higher level; otherwise, each participant would have to constantly “spell it out” for every other member of the group.
The elusive properties that make teamwork effective for humans can be realized for machines using, you guessed it, a blockchain. From a reasoning perspective, it provides a trusted source of sound explanations and justifications. As the blockchain grows, then, so too does the ability of reasoning machines to take larger and more sophisticated steps in reasoning.
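As a sketch of the optimization this enables (illustrative only: this is not Khalani’s design, and the toy hash below stands in for a real content hash and a real proof checker), a certificate is checked once, the proved statement is recorded under a content hash in an append-only registry, and later reasoning cites the hash instead of re-checking the certificate.

```rust
use std::collections::HashMap;

// Toy registry of verified lemmas (hypothetical, not Khalani's design).
struct LemmaLedger {
    verified: HashMap<u64, String>, // content hash -> proved statement
}

impl LemmaLedger {
    fn new() -> Self {
        Self { verified: HashMap::new() }
    }

    // Stand-in for full proof checking: expensive, but done once per lemma.
    fn check_and_record(&mut self, statement: &str, _certificate: &str) -> u64 {
        let hash = statement
            .bytes()
            .fold(0u64, |h, b| h.wrapping_mul(31).wrapping_add(b as u64));
        self.verified.insert(hash, statement.to_string());
        hash
    }

    // A later proof cites the lemma by hash: a constant-time lookup
    // replaces a full re-check of the original certificate.
    fn cite(&self, hash: u64) -> Option<&String> {
        self.verified.get(&hash)
    }
}

fn main() {
    let mut ledger = LemmaLedger::new();
    let h = ledger.check_and_record("a + b = b + a", "<certificate bytes>");
    assert!(ledger.cite(h).is_some()); // reuse without re-verification
    println!("lemma available to every participant: {:?}", ledger.cite(h));
}
```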
As a side note, the growth of reasoning power depends on efficient indexing of the knowledge base at the semantic level, so as to amortize the cost of “memory retrieval” as memory grows. The language chosen to represent knowledge has a huge impact on the feasibility of such indexing.
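As a toy illustration of our own (not Khalani’s indexing scheme), one way to index at the semantic level is to key every statement by a canonical form, so that semantically identical facts written with different variable names land on the same entry and retrieval does not degrade as surface variants accumulate:

```rust
use std::collections::HashMap;

// Rename each distinct single-letter variable to v0, v1, ... in order of
// appearance, so alphabetic variants of the same statement share one key.
fn canonicalize(statement: &str) -> String {
    let mut names: HashMap<String, String> = HashMap::new();
    statement
        .split_whitespace()
        .map(|tok| {
            if tok.len() == 1 && tok.chars().all(|c| c.is_ascii_alphabetic()) {
                let next = format!("v{}", names.len());
                names.entry(tok.to_string()).or_insert(next).clone()
            } else {
                tok.to_string()
            }
        })
        .collect::<Vec<_>>()
        .join(" ")
}

fn main() {
    let mut index: HashMap<String, &str> = HashMap::new();
    index.insert(canonicalize("x + 0 = x"), "lemma-42");
    // A differently-named but identical statement retrieves the same entry.
    assert_eq!(index.get(&canonicalize("y + 0 = y")), Some(&"lemma-42"));
    println!("retrieved: {:?}", index.get(&canonicalize("y + 0 = y")));
}
```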
For this reason, as well as a few others, we can’t solve the “reasoning problem” without solving the “language problem,” which will be the subject of part 2 in this series.