The Reality Quotient
I began this essay thinking I would be writing about artificial intelligence, brain-machine interaction, and virtual reality and their application to people with disabilities. It didn’t take long to figure out that an underlying issue is reality itself. As we connect ourselves to technology or have it implanted inside us in order to acquire functionality we don’t currently possess, we may be pushed farther and farther away from reality.
The “Reality Quotient” (RQ) is the ratio between that which we think is real and that which we know is real. The closer a technology that displays, simulates, or connects us to reality brings that ratio to 1.0, the better it will be judged. RQ has been the province of sci-fi writers who have created worlds where people can move from one reality quotient to another. At each level, they can affect the world around them to a lesser or greater extent.
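As a rough illustration (my own sketch, not a formal definition from the essay), one could imagine scoring a technology’s RQ as the fraction of the known-real features of a scene that the technology actually conveys to its user. The function and the feature lists below are hypothetical.

```python
def reality_quotient(perceived_features, real_features):
    """Toy RQ: the fraction of known-real features that the technology conveys."""
    real = set(real_features)
    if not real:
        return 1.0  # nothing real to miss, so the ratio is trivially perfect
    conveyed = set(perceived_features) & real  # features both real and perceived
    return len(conveyed) / len(real)

# Hypothetical example: a screen reader conveys 4 of the 5 elements
# a sighted user sees on the same screen.
screen = ["menu", "button", "image", "caption", "layout"]
rendered = ["menu", "button", "image", "caption"]
print(reality_quotient(rendered, screen))  # 0.8
```

This is deliberately simplistic: it treats all features as equally important and ignores distortion (features conveyed inaccurately), but it captures the idea of a ratio that approaches 1.0 as the rendered world approaches the consensus one.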
In my own way, I am one of these writers. My book Vision Dreams: A Parable (see below) is all about creating visual reality for blind people by way of artificial technology (nanobots) and brain programming. Mine is a cautionary tale, exploring what might happen if one seeks refuge or escape from the vicissitudes of life, even if just for 15 minutes per day (the theoretical limit of my technology), and what might happen if the technology goes wrong.
Science, on the other hand, approaches reality via theories of the space-time continuum and multi-dimensional space where parallel versions of the universe might all co-exist. Quantum mechanics has thrown a monkey-wrench into common-sense views of reality. While notions that sub-atomic particles like electrons can be in more than one place at a time are mystifying, these phenomena have powerful implications for everything from information-processing to a deeper understanding of the universe.
Then there is “metaphysics”, the philosophy of reality, so large and diverse a field of study that I’ll cite here only one of the most iconic examples of a reality thought-experiment, Plato’s allegory of the cave.
Plato’s philosophy evolved from thousands of years of musings by earlier Homo sapiens and even those who preceded them, reflected in spirits, nature gods, and a wide variety of mythologies, both East and West. Suffice it to say, humans have long pondered not only reality, but whether there is more than one reality, not to mention the difference between mind and body. Debates aside, let’s focus on that which is available to us via our senses and conscious awareness. For now, this is the realm where technology can help us.
Plato drew the distinction between what we think we perceive, what is really there, and the ideal form of those things. Let’s say prisoners in a cave are chained to seats so they can look only toward the back wall. Behind them and unknown to them is a fire, shining light on the wall. There are puppets behind the prisoners and in front of the fire, casting shadows on the wall. The prisoners can see only the puppet-shadows. After a discussion, they come to consensus. They will use the name “puppets” for what they see. When the prisoners are released and can turn around, they see the ‘real’ puppets. They realize that the word they had agreed upon represents something much more real than they had originally thought. Furthermore, seeing several examples of puppets enables them to abstract “puppet-hood”, the ideal form of puppets. Thus educated, the prisoners’ grasp of reality is much greater than prior to their education. The key to a proper understanding of reality, according to Plato, is education.
But what if technology were to keep people from experiencing “direct reality”? Would they forever think that the world the technology gives them is all there is? Take, for example, the way screen reading software renders a computer screen for the blind. Those who have compared their experience of the screen with a sighted person’s experience of the same screen learn, much to their consternation, that they are living in an altered reality. We who have been around since the beginning of screen reading software remember when decisions were made to move away from the direct visual analog approach, which essentially imitates what sighted people see, toward rendering the screen in ways sensible to auditory users. However, these depictions make our perceived reality different from that of sighted people. I, for one, am grateful for this powerful tool, but the example illustrates how technology can separate us from consensus reality. Instead of an RQ of 1.0, we end up with one of, say, 0.8.
In my book (Vision Dreams), four blind people are so frustrated with the difficulties of life in a society that has gone dystopian and has dedicated most of its resources to the military, they agree to wear a visual input device and have microscopic robots surgically placed in their brains. They also undergo brain chemistry augmentation to give them the cognitive substrate they will need to comprehend the vision the technology will provide.
When I drafted Vision Dreams, I had not yet subscribed to the notion that what we think we see, feel, and hear is only that which the brain translates for us. Had I done so, I would have focused more of my creative energy on the input device, the idea being that the better the input provided by the technology, the better the chance the brain will have to render something that makes sense.
Lately, research into Virtual Reality (VR) and Brain-Machine Interface (BMI) technology has been bringing us closer to neuro-prosthetics that will enable paralyzed individuals to regain some real-world abilities. By inserting implants into the brain and spinal cord, some technology has enabled paralyzed individuals to activate robotic arms. Newer brain implant technology has allowed some to stimulate their own arms directly, to the point of taking a drink of water on their own. I hypothesize that the amount of training required for these feats is directly related to the RQ: the more closely the implants imitate actual brain operations, the easier it should be to activate an arm or leg.
Recently, we’ve seen lots of news about artificial intelligence (AI), for everything from programs that can write term papers to devices that provide environmental information to the blind, such as reading text and recognizing faces. Researchers at the University of Colorado, Boulder have devised a “walking stick” that identifies features of a room, such as an empty table in a restaurant or groceries on supermarket shelves. In these cases, computer vision, not artificial vision, does the “seeing,” and auditory signals convey the information to the end-user. If we compare this with how people receive such information naturally or from another person, we get a sense of the difference between an RQ of, say, 0.6 and one of 0.9 or 1.0.
Devices like the Argus II artificial vision implant include a camera-like device and electrodes implanted in the retina. Approaches like these “hope” that the earlier in the visual process signals reach the brain, the better. The contention is that the deeper into the brain one begins the signaling process, the greater the distortion of reality may be. Better to let the brain do what it has evolved to do by sending signals to it from as close to the natural input source as possible, in this case, the retina. Long before micro-chip technology got as good as it is and complex computer processing could be done in tiny objects, artificial vision technology attempted to bypass the retina by way of a camera linked directly to chips implanted in the brain. If the retina, a complex array of specialized cells, isn’t working, an artificial retina may yield a higher RQ than a direct brain implant.
This may be the best approach until something new comes along that changes the picture, something like newly grown retinas spawned from stem cells.
In the end, the best technology will be that with the highest RQ, the technology that most closely renders the world in the way most humans comprehend it. Even with this power, some will decide not to experience everything it offers; reality is often too complicated and cluttered. Much as we do with selective attention, screening out extraneous input or setting our screen reading software to pronounce only some punctuation marks, or none at all, we need not experience everything to be part of the consensus.
Again, we are stipulating that what humans apprehend is truly objective reality. There is a plethora of philosophical debate about that out there, but for now it is the only reality we have. Thus, technology that provides us maximum information and physical and sensory ability without removing us too far from consensus reality is best. With such technology, we will be in better shape to take part in defining that reality. Technologies that render high reality quotients will help us contribute to making the world a better place.
Anthony R. Candela, Author
Saying aloud what should not remain silent.
February 7, 2023