Collaborative AI
Introducing the Granular Network
The Granular Network (GN), which is based on our state-of-the-art Imaginary Maths technology, is the superior alternative to the Neural Network (NN), currently the dominant form of Artificial Intelligence (AI). The GN can accomplish everything that the NN can do, but better. We know this for certain because the GN can literally emulate an NN without absorbing many of the biggest problems that come with NNs.
In fact, while the GN is still a research project in its early stages of development, it looks very likely that the GN will soon be able to reverse-engineer an NN: to understand how a particular model works, uncover its limitations, and potentially even help security experts identify weaknesses that might otherwise be exploited by malicious actors in order to build a super-virus that can turn ChatGPT into Skynet.
But the main advantage of the GN is that it does not discard all of the brilliant work that human-driven software engineering makes possible. The GN is all about collaborating with human beings. It is definitely not a system to replace human beings, and it never will be.
Background of the Invention: Imaginary Maths
What follows is a relatively condensed overview of the history of mathematics and computing, from my perspective. Its purpose is to help the reader understand why we believe that our technological progress has been headed in the wrong direction for quite some time, and, hopefully, to show why it is so important for us to leave the Neural Network behind and embrace the Granular Network.
Our modern society has recognised the great potential that computers possess to solve problems in virtually every aspect of our lives, including but not limited to health care, education, productivity, commerce, transport, manufacturing, entertainment, and other forms of creative expression. But the problem is that as we delve deeper into the weeds of those various domains, the requirements, and consequently the obstacles to success, become exponentially more complicated and unmanageable. That is, our solutions lack attention to detail because the problem to be addressed seems to lure the inventor/entrepreneur down a rabbit hole of infinite depth. Our hand-coded software applications, even when created by a trillion-dollar company, have only a rudimentary level of sophistication, leaving the end user unsatisfied.
There are several ways that scientists and the technology sector collectively have tried to address these problems. These include:
- The top solution for thousands of years has been mathematics, and developments within the field of maths have continued, such as computerised models. But the problem is that attempts to make these models hyper-realistic are thwarted by the enormous computing resources required to simulate highly complex phenomena such as physics or chemistry, while simplified models often lack accuracy and reliability, which leads to mediocre outcomes.
- Hire massive teams of software developers. Companies with substantial revenues have no problem hiring thousands of people, and startups are occasionally able to raise tens or hundreds of millions in funding, which allows them to hire hundreds of people. But the problem is that hiring more developers only exacerbates the issue, because now there is the additional problem of managing the complex relationships between the different people, their differing contributions to the overall project, and the contradictory expectations of the customer/client.
- Manually build artificial intelligence by writing hand-crafted code. For the most part, this has involved trying to incorporate the mathematical field of deductive logic into the design of algorithmic logic. But the problem is that there is an enormous number of “unwritten rules” governing everyday common-sense human logic. The project was too big, and there were so many exceptions to every rule that the systems created in this way were not robust enough to handle even situations that are simple for a human child.
- Automatically train artificial intelligence by using multiple layers of perceptrons combined with the back-propagation method: an apparatus and method which, when described as a unit, is nominally referred to as a “Neural Network”.
- Combine the Neural Network with a transformer to create Large Language Models (LLMs), which has led to a recent, rapid explosion of advances in generative AI. However, it is also notable that there have been essentially no significant scientific advances in AI since the invention of the transformer. That is, virtually all recent progress has been the result of monopolistic companies pouring up to tens of billions of dollars per company into infrastructure, talent, and energy costs.
This has already begun to result in serious problems. In particular, red flags have appeared that suggest a “race to the bottom”, as a small handful of companies attempts to replace humans with a mediocre technology that is incapable of adequately meeting the subjective needs of humans, let alone objective needs such as privacy, security, and safety.
There are some very powerful tools and methods in the list above, but the problem is not necessarily in the tools themselves. The problem is in the culture surrounding the development and usage of the tools, and in why those tools are being implemented and utilised in that particular way. Our founder, the inventor of Imaginary Maths, personally asserts, as conjecture, that the true root of the problem begins in the early 20th century, when several seminal scientific papers were published concerning what was believed to be a hard limit on the potential capabilities of computer science, and even on what humanity might ever be able to discover through any mathematical system or method. This discussion is generally referred to as the question of Decidability, and it is therefore often associated with the mathematical concept of Decision Problems.
There are some valid logical arguments raised throughout this discourse, but the problem is that the proponents of such notions treat reality itself as if it were bounded by the contrived rules of Decision Problems. And therefore, when those mathematicians recognised a hard limit to the potential of their own work, they assumed that those hard limits applied to all of the human race for the entire duration (past, present and future) of human civilisation. And since those thinkers put such ideas out into the world, it has been impossible for a person to delve deeply into the field of computer science, or any adjacent field that borrows elements from it, without finding themselves swimming in an ocean of pessimism that causes them to say, “I’ve been working on this problem for a while now and am stuck. I guess this must be one of those problems that are literally impossible to solve. I guess I ought to give up on finding an optimal solution.”
Imaginary Maths is based on the idea that we already possess many tools that are quite underrated and underappreciated. In the past few decades, we have witnessed many tools and methods (of other industries) move from the analog world into the digital world, and this inevitably leads to innovation as people experiment with the novel forms that the old tools and methods might be able to take on. But no equivalent phenomenon seems to be happening in fields like mathematics. Maths has certainly become digital, but rather than taking on novel forms, its methods remain fixed and stale. The culture of maths is seemingly no longer one of innovation or disruption. It has fallen victim to severe stagnation.
To anybody who does not have a deep interest in studying classical maths, any book or document that features a lot of mathematical jargon and symbols is going to look more alien and incomprehensible than a foreign language. But for those who do understand such concepts and representations, they serve as powerful tools that empower the human mind to understand, process, and communicate ideas that are deep and complex.
Some relevant and useful characteristics of classical maths include:
- Using written numbers, or physical objects that represent quantities (e.g. coins or an abacus), means that we do not have to memorise all of the values that need to be incorporated into a calculation. And in turn, when we no longer have to rely completely on our own memory, we are able to deal in concepts and calculations of enormous complexity and sophistication. In other words, classical maths already represents the gateway to super-intelligence. It already is the tool that we use to increase human intelligence. The problem is that classical maths has not adapted to suit the open and democratic times we live in. It remains obfuscated and incredibly difficult for even top-tier experts to use effectively.
- Instead of writing hundreds of lines of software code, or dozens of lines of ambiguous pseudocode, mathematicians are able to describe an entire classical maths algorithm with an equation that contains a substantially smaller number of symbols. This allows the reader to understand what is going on in a fraction of the time it would take to learn the same algorithm written in pseudocode, without the unwanted ambiguity that pseudocode often has. In other words, classical maths algorithms are much more “high-level” than programming languages and much more concise and precise than even pseudocode. (A small illustration of this appears immediately after this list.)
- The act of publishing a classical maths formula or equation is the original version of “open source” collaboration, and it is significantly more effective. That is, today many people use platforms to share/publish their software code with others under flexible licensing terms. However, this often leads to significant problems (“dependency hell”) where one person’s code relies on the code of another person. For example, you depend on Python code written by person X, but X realises there are security flaws in their code and decides to make some bug fixes. These changes cause your code to be incompatible with the latest version of X’s code. Now you need to make changes that will affect the people who rely on your code, and so on. But in classical maths, nobody will ever find a security bug in the arithmetic operations (addition, subtraction, etc.), and you will never find yourself in “dependency hell” with these formulas. As such, you can safely collaborate with mathematicians who have been dead for centuries.
- Maths operations are generally hierarchical. That is, taken to extremes, virtually everything we can express as a classical maths equation can be rewritten using a combination of only addition, subtraction, and an operation that simply shifts the position of a number’s decimal point either left or right (for example, 104.0 becomes 1.04 if the point is shifted two places to the left, or 10400.0 if the point is shifted two places to the right). Multiplication is just repeated addition of the same number. Dividing “A” by “B” to get a result “C” is the same as shifting the decimal point of “A” some number “X” of places to the left, producing a fraction “D”, and then adding “D” to itself some number “Y” of times to arrive at “C”. Calculating the power of a number is just a lot of multiplication, which in turn is a lot of adding. (A code sketch of this decomposition appears after this list.) The reason this aspect of classical maths is important is that this strict consistency across mathematical concepts means that maths can form an alignment with the phenomena of the natural universe. That is, any abstraction that analogously represents the aspects and/or dynamics of the universe can be used as a tool for understanding and describing the universe. The hierarchical nature of the universe (e.g. atoms, molecules, cells, etc.) means that the reality we observe is a sum of the interactions happening at lower levels of that hierarchy. It is therefore important that complex classical maths operations are, likewise, a summation of lower-level operations, rather than of a nature that leads to results that are contrived, arbitrary or random. This allows us to confidently say that “A” is to “B” within classical maths as “X” is to “Y” within the physics of the universe.
- Another advantage of the hierarchical nature of classical maths is that it allows for a type of polymorphism that is useful in many contexts, but is on especially overt display within algebra. This kind of polymorphism is usually associated with Object Oriented Programming (OOP), an innovation of the 20th century, but it is actually an ancient concept that predates computer science by several millennia.
- Establishing standardised definitions of entities, operations and formulas so that anybody operating within the field can understand them. For example, the mathematical field of set theory attempts to give a clear definition of what a set is, unlike computer programming, where the programmer can make up all sorts of abstract concepts at any moment without giving it a second thought.
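To illustrate the conciseness point above, consider the familiar identity 1 + 2 + ⋯ + n = n(n+1)/2. The example is our own choice, picked purely for illustration: the formula expresses in a handful of unambiguous symbols what a procedural description needs several lines to say.

```python
# Illustrative only: the loop below says procedurally what the identity
# 1 + 2 + ... + n = n(n+1)/2 says in a handful of symbols.
def sum_first_n(n: int) -> int:
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

assert sum_first_n(100) == 100 * 101 // 2  # both routes agree: 5050
```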
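And as a minimal sketch of the hierarchical decomposition described above (the function names and the choice of Python are our own, the sketch covers only non-negative integers, and it is meant as an illustration rather than a complete treatment):

```python
# Illustrative sketch: higher-level operations rebuilt from repeated
# lower-level ones (non-negative integers only).

def shift_point(x: float, places: int) -> float:
    """Shift the decimal point of x: positive moves it right, negative left."""
    return x * (10 ** places)  # e.g. shift_point(104.0, -2) -> 1.04

def multiply(a: int, b: int) -> int:
    """Multiplication as repeated addition."""
    total = 0
    for _ in range(b):
        total += a
    return total

def power(a: int, n: int) -> int:
    """Exponentiation as repeated multiplication, which is repeated addition."""
    result = 1
    for _ in range(n):
        result = multiply(result, a)
    return result

assert multiply(6, 7) == 42
assert power(2, 10) == 1024
assert abs(shift_point(104.0, -2) - 1.04) < 1e-9
```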
And more recently, the field of computer programming has introduced some relatively new concepts which are highly relevant to this invention, due both to their usefulness and to their exclusion from the (intrinsic and internal) functionality of neural networks:
- The concept of a variable does exist in classical maths, but it is implemented somewhat differently in most programming languages. The key difference is that a variable in algebra, such as “A”, is considered interchangeable with any value. That is, in the equation “A + B = C”, the algebraic variable “A” could be replaced with any number, whether real, complex, finite or infinite, and the equation would still be valid. But in programming languages we are dealing with variables that are significantly less abstract, and a variable such as “Age” is not so interchangeable. Its numerical value is instead determined by a complex sequence of preceding operations that transform the state of inter-related variables on a transactional basis. The reason this distinction between the variables of classical maths and computer programming is important is that there are times when polymorphism is not ideal. Sometimes we already know what our specific requirements are, and in such cases it is simply more efficient to define our needs explicitly rather than have the system waste time searching for redundant answers in an open-minded way. (A short sketch of this contrast appears after this list.)
- In classical maths, a function is a very narrowly defined concept where a particular output value is uniquely determined by the inputs to that function. In the world of programming, however, this rule is highly relaxed, and a code function has very few rules aside from those implied by the syntactical limitations of an arbitrary programming language. This flexibility allows software code to be more granular, as it can potentially address a far wider array of problems. (The second sketch after this list contrasts these two notions of a function.)
- Another powerful modern feature of programming is the ability to separate a particular subset of software code from the broader software application that it comes from, and to put that subset into its own file and/or folder as a “module”. This allows each module to be edited, re-used, or replaced without necessarily having negative implications for the other parts of the broader software application. This is of particular benefit when working as a team and wanting to avoid erasing your colleagues’ recent changes in the event that you need to undo your own prior changes. That is, breaking a larger mathematical or technical idea into many smaller parts means that the team need not be stepping on each other’s toes, metaphorically speaking. (A small module sketch appears after this list.)
- In OOP, entities like values, functions, arrays, and the dynamic containers that package all three of these entities, known as “classes”, are all valid types of data to be stored in a variable. In fact, even the memory address of a variable (a pointer or reference) can be stored in a variable. This aspect of programming further increases the flexibility of programming by allowing the application to be more adaptive to nuanced circumstances that cannot be perfectly anticipated during the development of the application. (See the “everything in a variable” sketch after this list.)
- Another powerful feature of OOP is the ability to create nuanced policies that govern the usage of particular units and/or collections of data. For example, the programmer might want to protect some data values that are internal to a class so that functionality external to that class is not able to directly and freely make changes to that data. Instead, the functions of that class provide the only means of modifying the variable from an external origin. The purpose of this is that the practitioner gets to determine what kinds of policies are most appropriate for the usage of the data, in pursuit of the broader objectives of the module or the broader software application. (A short access-policy sketch appears after this list.)
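The sketches below are purely illustrative: every name in them (age_in_years, roll_and_log, apply_discount, Greeter, BankAccount, and so on) is a hypothetical example of our own, and Python is used simply as a convenient notation. First, the contrast between an algebraic placeholder and an everyday program variable:

```python
# Hypothetical sketch: algebraic placeholder vs. ordinary program variable.
from datetime import date

# In algebra, "A" in "A + B = C" is a pure placeholder: the relationship holds
# for whatever value we substitute. A function parameter is the closest analogue.
def c(a: float, b: float) -> float:
    return a + b  # valid for any numbers we choose to substitute

# In program code, a variable like "age" is far less interchangeable: its value
# is determined by a specific chain of prior facts and state.
def age_in_years(birth_year: int, today: date) -> int:
    return today.year - birth_year

print(c(2, 3))                               # 5, but any substitution would do
print(age_in_years(1990, date(2024, 6, 1)))  # 34, fixed by the preceding facts
```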
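Next, a minimal contrast between the mathematical notion of a function and the far looser programming notion (again, hypothetical names and an illustrative sketch only):

```python
# Hypothetical sketch: a "function" in the mathematical sense vs. the code sense.
import random

def square(x: float) -> float:
    return x * x  # mathematical sense: the output is uniquely determined by the input

call_count = 0  # hidden state that the next function depends on and mutates

def roll_and_log(sides: int) -> int:
    global call_count
    call_count += 1                        # side effect: mutates external state
    value = random.randint(1, sides)       # result is not determined by the input alone
    print(f"roll #{call_count}: {value}")  # side effect: output to the console
    return value

square(3.0)      # always 9.0
roll_and_log(6)  # a different result (and a different log line) on each call
```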
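A sketch of modularity, with the two hypothetical “files” shown side by side in one listing for brevity:

```python
# Hypothetical sketch: a small module that the rest of an application reuses.

# --- pricing.py (the module) ------------------------------------------------
def apply_discount(price: float, percent: float) -> float:
    """A helper that other parts of the application can reuse or replace."""
    return price * (1 - percent / 100)

# --- main.py (the broader application) --------------------------------------
# from pricing import apply_discount   # how main.py would import the module
print(apply_discount(200.0, 15))       # 170.0; pricing.py can be edited or
                                       # swapped out without touching main.py
```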
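A sketch of “everything can live in a variable” (values, functions, arrays, classes), again with hypothetical names:

```python
# Hypothetical sketch: values, functions, arrays and classes stored in variables.

def greet(name: str) -> str:
    return f"Hello, {name}"

class Greeter:
    def __init__(self, name: str):
        self.name = name

a_value = 42                  # a plain value
a_function = greet            # the function itself, not its result
an_array = [1, 2, 3]          # an array/list
a_class = Greeter             # the class object itself
an_instance = a_class("Ada")  # ...which can then be used to build instances

print(a_function("Ada"))  # Hello, Ada
print(id(an_array))       # in CPython this happens to be the object's memory
                          # address; C-family languages expose addresses
                          # directly through pointers
```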
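And finally, a sketch of an access policy enforced by a class, where external code can only reach the internal balance through the functions the class chooses to expose (BankAccount is a hypothetical example, not an API from any real library):

```python
# Hypothetical sketch: a class that guards its internal data behind a policy.

class BankAccount:
    def __init__(self, opening_balance: float):
        self._balance = opening_balance  # internal; external code should not
                                         # modify this attribute directly

    def deposit(self, amount: float) -> None:
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def withdraw(self, amount: float) -> None:
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount

    @property
    def balance(self) -> float:
        return self._balance  # read-only view; there is deliberately no setter

account = BankAccount(100.0)
account.deposit(50.0)
print(account.balance)  # 150.0
# account.balance = 0   # would raise AttributeError: the policy forbids it
```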
While artificial intelligence is accomplishing great things for us, it fails to adequately incorporate and harness these powerful capabilities (of mathematics and software engineering) in various ways:
- Since LLMs are obsessed with the notion of a human writing a prompt, they encourage a style of working where the practitioner needs to be able to imagine the ideal outcome in their mind, so that the practitioner can phrase their prompt in a way that is clear, precise, and relevant. That is, the LLM destroys the notion of sketching a draft. It creates a final product that the practitioner can revise. This might seem like a pedantic point, but it is not. When the practitioner immediately sees the conclusion, there is likely to be a bias towards accepting one of the first X results. This is a quantitative approach, meaning that it is based on the best Y outcomes out of X total candidates. At no point in the process is the user confronted with a nuanced overview of the creative potential that has not yet been explored. Instead, they are shown a quantity of superficially different alternative iterations. A qualitative approach, however, would allow the user to potentially compete with the likes of Mozart and Shakespeare by traversing, with a non-linear trajectory, a series of combinations of X:Y artistic decisions chosen out of an infinite variety of Z options. In this approach, the AI would act as a kind of “tour guide” through uncharted territories, taking care of the technical and aesthetic complications that are unavoidable when trying to combine ideas that aren’t usually presented together.
- Humans as individuals have not necessarily been getting significantly smarter over the centuries, although the general belief is that we are superior to our ancestors. In truth, however, in almost every field (music, literature, painting, science, engineering) there exists one or more intellectual giants from a previous century who have not been surpassed by any single individual within the last 50 years. We have stagnated because we have forgotten how to stand on the shoulders of giants to make ourselves taller. We have dismissed our ancestors as brutish barbarians and instead created neural networks that accept jokes from Reddit and 4chan as knowledge, instead of learning the underlying wisdom from the likes of Immanuel Kant and Einstein.
- Code-generating apps powered by LLMs are generally used by programmers to build one function at a time, and this has its advantages. But the LLM is essentially copying and pasting what it has been trained upon. That is, it is not applying standardised conventions that would allow it to overcome a range of problems, such as avoiding security vulnerabilities. In other words, the people who train LLMs to solve code problems are attracted to the low-mental-effort kind of workflow that neural networks can bring, and this results in the trainer not having to think about things like web security. It allows them to be more efficient, but it also means that nobody (neither the trainer nor the software developer who will be the end-user of the LLM) is checking for security vulnerabilities. And soon, this bad code will be published to the web, where it will be used to train code-generating apps that write even worse code. This problem is not even adequately captured by the phrase “garbage in, garbage out”.