Divide By None™

Collaborative AI

Introducing the Granular Network

The Granular Network (GN), which is based on our state-of-the-art Imaginary Maths technology, is the superior alternative to the Neural Network (NN), currently the dominant form of Artificial Intelligence (AI). The GN can accomplish everything that the NN can do, but better. We know this because the GN can literally emulate an NN without absorbing many of the biggest problems that come with NNs.

In fact, while the GN is still a research project in its early stages of development, it is looking very likely that the GN will soon be able to reverse-engineer an NN: to understand how a particular model works, to uncover its limitations, and potentially even to help security experts identify weaknesses that might otherwise be exploited by malicious actors to build a super-virus that can turn ChatGPT into Skynet.

But the main advantage of the GN is that it doesn't discard all the brilliant stuff that is possible via human-driven software engineering. The GN is all about collaborating with human beings. It is definitely not a system to replace human beings and it never will be.

Background of the invention: Imaginary Maths

What follows is a condensed overview of the history of mathematics and computing, from my perspective. Its purpose is to help the reader understand why we believe that our technological progress has been headed in the wrong direction for quite some time, and, hopefully, why it is so important for us to leave the Neural Network behind and embrace the Granular Network.

Our modern society has recognised the great potential that computers possess to solve problems in virtually every aspect of our lives, including but not limited to health care, education, productivity, commerce, transport, manufacturing, entertainment, and other forms of creative expression. But the problem is that as we delve deeper into the weeds of those various domains, the requirements, and subsequently the obstacles to success, become exponentially more complicated and unmanageable. That is, our solutions lack attention to detail because the problem to be addressed seems to lure the inventor/entrepreneur down a rabbit hole of infinite depth. Our hand-coded software applications, even when created by a trillion-dollar company, have only a rudimentary level of sophistication, leaving the end user unsatisfied.

There are several ways that scientists and the technology sector collectively have tried to address these problems. These include:

  1. The top solution for thousands of years has been mathematics, and developments within the field of maths have continued, such as computerised models. But the problem is that attempts to make these models hyper-realistic are thwarted by the absurd amounts of computing resources required to simulate highly complex phenomena in fields such as physics or chemistry. And simplified models often lack accuracy and reliability, which leads to mediocre outcomes.
  2. Hire massive teams of software developers. Companies with substantial revenues have no problem hiring thousands of people, and startups are occasionally able to raise tens or hundreds of millions in funding, which allows them to hire hundreds. But the problem is that hiring more developers only exacerbates the situation, because it adds the further problem of managing the complex relationships between the different people, their differing contributions to the overall project, and the contradictory expectations of the customer/client.
  3. Manually build artificial intelligence by writing hand-crafted code. For the most part, this has involved trying to incorporate the mathematical field of deductive logic into the design of algorithms. But the problem is that there is an absurd number of “unwritten rules” in everyday common-sense human logic. The project was too big, and there were so many exceptions to every rule that the systems created in this way were not robust enough to handle even situations that are simple for a human child.
  4. Automatically train artificial intelligence by using multiple layers of perceptrons combined with the back-propagation method: an apparatus and method which, taken as a unit, are nominally referred to as a “Neural Network” (see the sketch after this list).
  5. Combine the Neural Network with a transformer to create Large Language Models (LLMs), which has led to a recent, rapid explosion of advancements in generative AI. However, it is also notable that there have been essentially zero significant scientific advancements in AI since the invention of the transformer. That is, virtually all recent progress has been the result of monopolistic companies pouring up to tens of billions of dollars per company into infrastructure, talent, and energy costs.
    This has already begun to cause serious problems. In particular, red flags have appeared suggesting a “race to the bottom”, as a small handful of companies attempts to replace humans with a mediocre technology that is incapable of adequately meeting the subjective needs of humans, let alone objective needs such as privacy, security, and safety.
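
For reference, item 4 above can be made concrete with a minimal sketch of “multiple layers of perceptrons combined with the back-propagation method”, written in plain Python with NumPy. Every specific choice in it (the layer sizes, the learning rate, the toy XOR data set) is an illustrative assumption of ours, not a description of any particular product:

    # A two-layer perceptron network trained by back-propagation on XOR.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)  # hidden layer of perceptrons
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)  # output layer

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(5000):
        # Forward pass through both layers.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # Backward pass: propagate the output error back through each layer.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        # Gradient-descent updates of the weights and biases.
        W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

    print(out.round(2))  # approaches [[0], [1], [1], [0]] as training converges

Note that nothing in this loop is hand-written logic in the sense of item 3; the behaviour emerges entirely from the automated weight updates, which is exactly what item 4 means by “automatically train”.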

There are some very powerful tools and methods in the list above, but the problem is not necessarily in the tools themselves. The problem is in the culture surrounding the development and usage of the tools, and in why those tools are being implemented and utilised in that particular way. Our founder, the inventor of Imaginary Maths, will personally assert, as conjecture, that the true root of the problem lies in the early 20th century, when several seminal scientific papers were published concerning what was believed to be a hard limit on the potential capabilities of computer science, and even on what humanity might ever be able to discover through any mathematical system or method. This discussion is generally referred to as the question of Decidability, and it is therefore often associated with the mathematical concept of Decision Problems.
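
For readers unfamiliar with the term, the most famous Decision Problem is the Halting Problem, and the hard limit in question can be sketched in a few lines of code. The function names below are purely illustrative inventions of ours; this is only a compressed restatement of the classic diagonalisation argument:

    # Sketch of why the Halting Problem is undecidable.
    # Suppose a perfect oracle halts(program, data) could always decide
    # whether program(data) eventually stops.

    def halts(program, data):
        # Hypothetical oracle; the argument shows no real one can exist.
        raise NotImplementedError

    def paradox(program):
        # Halts exactly when the oracle claims it will not.
        if halts(program, program):
            while True:
                pass  # loop forever

    # Asking halts(paradox, paradox) now forces a contradiction: whatever
    # answer the oracle gives about paradox(paradox) is wrong. Hence no
    # total, always-correct halts() can ever be written.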

There are some valid logical arguments raised throughout this discourse, but the problem is that the proponents of such notions treat reality itself as if it were bounded by the contrived rules of Decision Problems. When those mathematicians recognised a hard limit to the potential of their own work, they assumed that those hard limits applied to the whole human race for the entire duration (past, present and future) of human civilisation. And since those thinkers put such ideas out into the world, it has been impossible for a person to delve deeply into the field of computer science (or any adjacent field that borrows elements from it) without finding themselves swimming in an ocean of pessimism that causes them to say, “I’ve been working on this problem for a while now and I am stuck. I guess this must be one of those problems that are literally impossible to solve, so I ought to give up on finding an optimal solution.”

Imaginary Maths is based on the idea that we already possess many tools that are quite underrated and underappreciated. In the past few decades, we have witnessed many tools and methods of other industries move from the analogue world into the digital world, and this inevitably leads to innovation as people experiment with the novel forms that the old tools and methods might take on. But no equivalent phenomenon seems to be happening in fields like mathematics. Maths has certainly become digital, but rather than taking on novel forms, its methods remain fixed and stale. The culture of maths is seemingly no longer one of innovation or disruption. It has fallen victim to severe stagnation.

To anybody who does not have a deep interest in studying classical maths, any book or document that features a lot of mathematical jargon and symbols is going to look more alien and incomprehensible than a foreign language. But for those who do understand such concepts and representations, they serve as powerful tools that empower the human mind to understand, process, and communicate ideas that are deep and complex.

Some relevant and useful characteristics of classical maths include:

And more recently, the field of computer programming has introduced some relatively new concepts which are highly relevant to this invention, due both to their usefulness and to their exclusion from the (intrinsic and internal) functionality of neural networks:

While artificial intelligence is accomplishing great things for us, it fails to adequately incorporate and harness the powerful capabilities of mathematics and software engineering, in various ways:
