Technology is Humanity

The Cost, Beauty and Inevitability of the Human-Technology Identity

Sean McClure
22 min read · Jul 2, 2022

Original photo by Rachel McDermott

For the podcast version of this article see the NonTrivial episode here.

The usual way of thinking about technology is as an extension of life. Technology extends our eyes, our ears, our muscles. If we pick up a smartphone we can see and hear family and friends thousands of miles away. Technology likewise extends our muscles: since we began using tools we have wielded objects and crafted machines that enabled us to build bridges, lift heavy objects and extend cables.

But it’s not just our physical traits that get supercharged by technology. We regularly offload our memories to machines. We organize our thoughts, documents and photos on hard drives. If we seek knowledge we don’t gather around the campfire and listen to folktales, we enter keywords in Google and find the appropriate Wikipedia article.

Offloading mental tasks to the machine hardly stops at memory. Our thinking is being transferred to machines via technologies like machine learning. Machine learning uses large amounts of data to discover statistical correlations, which fuel either implicit or explicit decision making. Much of this decision making is behind the products and services that drive our economy.
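As a rough sketch of that idea, consider how little machinery is involved: a model only has to surface a correlation in data, and that correlation is then acted on. The data, threshold and decision rule below are all invented for illustration, using nothing beyond numpy.

import numpy as np

# Hypothetical usage data: hours a customer spends in a product per week,
# and whether that customer later upgraded to a paid plan (1) or not (0).
hours = np.array([1.0, 2.5, 3.0, 4.5, 6.0, 7.5, 9.0, 10.0])
upgraded = np.array([0, 0, 0, 1, 0, 1, 1, 1])

# The "learning" is nothing more than discovering a statistical correlation.
correlation = np.corrcoef(hours, upgraded)[0, 1]

# That correlation then fuels an (equally hypothetical) business decision:
# offer an upgrade to customers whose usage is above average.
if correlation > 0.5:
    targets = np.where(hours > hours.mean())[0]
    print(f"correlation = {correlation:.2f}; target customers: {targets}")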

This offloading of information to machines is of course a byproduct of heading into modernity. Our brains haven’t changed in 50,000 years; modern life demands we offload much of our minds to machines if we are to continue our relentless march towards progress.

And so technology as an extension of life makes sense in both the physical and intellectual sense. But there is something problematic with the term “extension.” To “extend” something is to supercharge an innate ability. But technology has never not been a part of humanity. In fact, technology precedes language itself.

From the very beginning humans have picked up rocks to smash and open objects to access food. Tooling is how we survive, since our naked bodies are ill-equipped to handle earth’s considerable stressors. Our bodies are arguably the least furnished in the animal kingdom relative to animals of similar size. We survive because of our minds, and it’s our minds that enable us to create what other animals cannot.

We use our minds to gather resources, to mix-and-match the substances of this planet in ever-changing ways, to fashion implements, which in turn are used to change our surroundings.

So technology has always been with us. Technology as an extension of life is a flawed concept. For humans, technology and life are one and the same. More to the point, there is no human life without technology.

Technology is better viewed as an inherent part of human evolution and adaptation. Humans adapt via the tools they create more than by changes to their physical makeup. When we think about adaptation in biology we envision species going through physical revisions in order to adapt to environmental stressors. We note that certain varieties of species become more adapted because their physical makeup happens to be coherent with their surroundings. It is these successful species who pass on their genes and over time evolve.

But modern humans adapt to their environment through a much different mechanism. We obviously still go through physical changes, but the adjustments we make occur almost entirely by virtue of our tools. One has to look no further than the rate of human evolution to see this as self-evident.

Human evolution is far more rapid than that of other species, and that’s because we don’t just adapt to external stressors. We actually create our own external stressors every time we gather resources and fashion new tools. Humans regularly change the landscapes they inhabit.

There are both positive and negative aspects to the changes humans create in the world. On the negative side we have environmental impact, with increased pollution and our stripping of earth’s surface. On the positive side these changes give us access to more resources and innovative ways to give back to the planet. Whether good or bad, humans create their surroundings. These new surroundings represent ever-changing external stressors that we must adapt to, and we cannot accomplish this via our physical biology alone, because that mechanism cannot keep pace with the rate of change we bring about.

We can think of the above process as representing a feedback loop, where we create technology, which changes the environment, which changes our external stressors, which we must adapt to, by yet again creating new technologies. Figure 1 displays this process.

Figure 1 Human evolution as a feedback loop with technology playing a fundamental role.
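To make the circularity of Figure 1 concrete, here is a toy simulation of the loop. Every quantity and update rule is invented purely for illustration; the only point is that each round of adaptation produces the very stressors that demand the next round.

# Toy model of the loop in Figure 1: technology reshapes the environment,
# the reshaped environment creates new stressors, and we respond to those
# stressors by creating more technology. All numbers are hypothetical.
technology = 1.0
stress = 1.0

for generation in range(10):
    environment_change = 0.3 * technology   # tools reshape the landscape
    stress += environment_change             # reshaped landscape = new stressors
    adaptation = 0.5 * stress                # we adapt by inventing more tools
    technology += adaptation
    print(f"gen {generation}: technology = {technology:.1f}, stress = {stress:.1f}")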

When we realize the direct evolutionary role that technology plays in human life we can see that technology is not really an extension at all; it’s baked into the definition of humanity itself.

It’s Not Anthropomorphizing

Assigning lifelike characteristics to technology is warranted when considering the properties systems exhibit once they pass a given complexity threshold. When enough components are interacting we witness the onset of properties such as nonlinearity, causal opacity, and emergence.

This is why assigning lifelike properties to sufficiently complex technology is not anthropomorphizing. Many of today’s technologies are indeed composed of countless interacting components. New technologies are reaching levels of complexity that bestow upon them behaviors we normally assign to life.
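A standard way to see a complexity threshold being crossed, independent of any particular product, is an elementary cellular automaton such as Rule 110: each cell interacts only with itself and its two neighbours, yet the global pattern that emerges is complex enough to be Turing complete. The sketch below is generic textbook material, not a model of any specific technology.

# Rule 110: each cell updates from just itself and its two neighbours,
# yet the global pattern that emerges is famously complex.
RULE = 110
width, steps = 64, 32
cells = [0] * width
cells[width // 2] = 1  # start with a single "on" cell

for _ in range(steps):
    print("".join("#" if c else "." for c in cells))
    cells = [
        (RULE >> (4 * cells[(i - 1) % width] + 2 * cells[i] + cells[(i + 1) % width])) & 1
        for i in range(width)
    ]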

We should no longer be thinking of technology in terms of cogs and pistons, with deterministic inputs and outputs. We are leaving the world of rules-based engineering and entering one where the outputs are not explicitly programmed into the machine. Today’s “AI”, even in its narrow embodiment, uses rules-based code only as scaffolding to support the meat of the software, which consists of nondeterministic statistical models that converge on solutions. In other words, the behavior of these systems is not “programmed” into the machine.

This is why reductionist approaches can no longer be used to make sense of our technology (despite the constant naive attempt to do so). It is becoming less feasible to “debug” the code so to speak. In non-complex things the intricate components do not talk to each other anywhere near enough to bring about complex properties. The complexity threshold is not breached. You can peel back the casings of a rocket and see how complicated its inner workings are, but that is not complex. You can always debug any physical problem with a rocket.

Any truly complex technology should be expected to take on lifelike properties. This is why the idea of the “Technium,” as Kevin Kelly calls it in his book What Technology Wants, is correct. The Technium is a way of thinking about the whole collection of today’s technology as a kind of “living” blob that strongly resembles life.

Of course we can debate how close our technology is, or ever will be, to genuine life. But such a debate forces us to separate the notion of life from technology, and therein lies the issue I raised at the outset. What I’m arguing in this article is that we shouldn’t have that distinction. Technology versus humanity is a distinction without a difference.

We already see lifelike behavior with complex things like electrical grids, where surprising patterns emerge from an otherwise deterministic, engineered construct. The internet is of course the best example. If you want a way to visualize the Technium, imagine the internet with its large number of machines in communication with each other. That’s what gives rise to the emergent properties that cannot be predicted. You’re not going to trace back patterns that emerge from the internet to their so-called “root causes” because real-world phenomena don’t have single points of causality.

The internet is a blob of nondeterminism. The electrical grid will have patterns that emerge that nobody engineered into it. These patterns emerge because when you reach enough complexity, with enough components that are talking to each other, the output is expected to be (much) more than the sum of its parts.

So when technology is referred to as something that has its own “push” and “pull” it’s not anthropomorphizing, it’s simply respecting known properties of complex systems.

Feeding Technology

Our actions feed technology every time we post photos, write text messages, send emails, execute bank transactions, etc. Our activity across the software products that drive today’s economy adds information to the “Technium.” Information is what our technology “wants.” Information is our technology’s currency. But it’s not a neutral resource that gets added. Those data are turned into new value by mixing and matching information into new services and products. Algorithms make decisions inside products used by businesses, and produce new outputs that people act on and depend on.

That increased value in turn draws us towards using more technology, making us adopt new products and services. It’s yet another version of the feedback loop discussed above. We exist in a constant iteration of value and usage, all of which satisfies the ultimate goal of our technology.

Figure 2 The iteration of value and usage, providing the impulse for humans to “feed” technology.

In one sense humanity is being subsumed into technology, but again, humanity and technology have always been the same thing. Today’s technology just makes the lack of distinction more apparent. It is correct to think of today’s collection of technology as “pulling” us. It’s drawing us into itself, to use it more and more, and we’re giving it what it wants. We can see from this mechanism that technology does indeed “want.”

There is a scary aspect to the notion of being caught in a feedback loop seemingly dictated by today’s technology. Most people can’t help but view this cycle as a kind of entrapment. But this cycle has always been, making the fear of it ultimately misplaced.

What makes the cycle of value and usage particularly difficult to reconcile is the fact that today’s information economy supercharges this fear. The apparent runaway pace of technological innovation brings us face to face with the feedback loop we are all caught in, and that loop is demonstrably inescapable.

But this doesn’t mean there can’t be any controls. Surely there are still some forms of regulation we can bring to complex systems? Yes, there are, and they tend to look like the so-called precautionary principle, where we pause and review our actions prior to leaping into new innovations that may prove disastrous (have a potentially large downside).

But we also have to be realistic. There’s this ultimate inescapable reality to technology, not because it is in some sense “right” but because it’s always been here, and because it’s an inherent part of the definition of “human.”

Technological innovation is going to happen regardless, and there is a mechanism behind that worth highlighting. Recall my NonTrivial episode There Were no Giants, Only Shoulders. Even though we like to assign individual owners or creators (“geniuses”) to discoveries, whether it’s a scientific theory or an invention, no single person creates anything.

This is because humans are so interconnected, constantly sharing information and ideas, which build upon more and more ideas. The same kind of mixing and matching that we see in science and technology is fully realized throughout the history of human civilization. This means that eventually, statistically, someone will (definitely) discover what is possible. This doesn’t stop history from assigning the credit to an individual or specific group of people. Despite the mountain of contribution that precedes all discoveries and invention, humans need to point their fingers at a “cause.” It’s how we make sense of the world.

The name that history attaches to human successes is just a label to anchor the narrative. From Thomas Edison’s lightbulb to Einstein’s Relativity, these individuals were not needed for these discoveries. Within +/-10 years any invention is guaranteed to occur. It would be statistically impossible for this not to be the case. That’s because an invention or theory does not precipitate out of an individual “genius” mind. Rather it emerges from countless contributions and the culture of the time. There will always be someone at the right place and time to have their name attached to the insights that precipitate out of the swirling mess of human contribution.

Of course the look and feel of the invention or theory will be based on who “discovered” it. The light bulb would look a little different if someone other than Edison had invented it, but this is just the accidents of history locking in the look and feel of something. To suggest Einstein was needed for Relativity is ridiculous on its face, and shows a lack of understanding of how complex problems are solved. Too many components in the system are interacting, too many cultural underpinnings exist, for the discovery not to precipitate out. It definitely will.

This is the same underlying mechanism that makes banning a technology ultimately futile. You cannot fully ban a technology because you can’t isolate the discovery. There is no nexus of innovation where a discovery must come from.

Can we stop stem cell research? Of course not. A temporary ban is possible, and perhaps even beneficial, but the suggestion that we can fully ban such technologies from ever happening is nonsensical (although nature may still prove such innovations themselves futile). We would need every political system, religion and economy to be aligned on the same values, and to ensure those never change over time. It’s such an impossible notion it’s rarely worth mentioning, and yet people talk about banning things all the time. These kinds of non-solutions do nothing more than fill the debate with wasteful rhetoric and grandstanding. The invention or theory, if possible, is guaranteed to precipitate out somewhere. Kevin Kelly calls this “simultaneous invention.”

So why am I talking about the futility of banning? Because there’s an inevitability to technology, meaning solutions to our challenges cannot be met by talking about technology being “good” or “bad.” Technology just is, and will be, and we need to start our conversations from there. We cannot separate technology from humanity, because technology is humanity.

There will always be someone at the right place and time to have their name attached to the insights that precipitate out of the swirling mess of human contribution.

As I regularly argue on NonTrivial, understanding these core properties of how things work gives us the correct starting points for fruitful conversations.

We have to be honest about the fact that there is something fundamentally wrong with the technophobic notion that technology takes us away from being human. That being said, it is fully understandable why many people would come to this conclusion. There are undoubtedly real costs to technological progress, and those costs can seem too high for many. Whether it’s smartphones preventing authentic human interaction, or nuclear accidents calling into question the safety of new energy solutions, the costs are all too real. But we have to remember that it’s never just a cost; there is always something gained.

In the case of smartphones we are connecting family and friends across vast distances that would otherwise be impossible. Nuclear power produces lower-cost energy, can be more reliable, and emits essentially no carbon during operation.

Kevin Kelly talks about the Amish community as the quintessential example of people who shy away from technology. The Amish generally resist the temptation to use the most recent gadgets. But upon visiting a number of Amish communities Kevin realized just how much technology these communities embraced. Contrary to the notion that they hate technology, the Amish are simply more judicious about which technologies they decide to use.

Kevin also looked at the well-known Luddite Wendell Berry. Wendell is a farmer who has written a number of books, in which he regularly encourages embracing a more traditional form of living. But just as with the Amish communities, Wendell still embraces a massive amount of technology when viewed through the lens of all human history. As argued by Kevin, Wendell is merely drawing the line at a certain generation, embracing technologies that were present during his childhood. These technologies are still far more advanced than what appeared in almost all of human history.

When we take an honest look at Luddites we realize their freedom to hold anti-technology opinions is afforded them thanks to technology. The ability to separate yourself from large portions of technology depends on that technology. It’s similar to somebody arguing against war, despite living in a country whose freedoms were obtained largely due to war. Of course this doesn’t mean one must love war. It doesn’t mean we can’t change things for the better. But it’s still the reality. One’s hatred of something is often afforded to them by the thing they hate.

This is the true interconnectivity of all the things in our lives. To understand how things work we must step back and see the bigger picture (contrary to the Enlightenment narrative that knowledge is gained by pulling things apart and inspecting their components). Only the holistic, emergent view of something can reveal the properties that matter.

“Beating” the 2nd Law of Thermodynamics

A concept brought up in Kevin Kelly’s book is that of negentropy. Negentropy is the opposite of entropy. So whereas entropy can be viewed as the constant push towards disorder, negentropy is the impulse to bring things together; to lower entropy.

Entropy is of course a fundamental aspect of reality, and serves as the foundation of many other scientific laws. Colloquially, entropy tells us that things tend towards decay. If I leave something out for too long it decomposes, rusts, breaks down, flies apart. Unless acted on by some outward force there is no reason for things in our universe to stay together. This of course is captured in the 2nd law of thermodynamics.

But now think about life. Life is a fight against the cosmic law that says entropy must increase. We know this because life is much more ordered and structured than anything random. With life, things don’t just fly apart, at least not initially. Life generally creates structure and order compared to non-living things (though many inorganic things, such as crystals, create order as well).

Negentropy happens at all scales. Over the course of a human life, and of humanity in general, people aggregate together, creating societies. So in a sense we have an apparent paradox where entropy is supposed to always be increasing, but in life it appears to do the opposite. This is called the Schrödinger paradox, named after the physicist Erwin Schrödinger, who was looking at the role entropy played in life and noticed this paradoxical truth about how life aggregates and creates order; the opposite of the usual tendency towards randomness and chaos.

We know that the evolution of biological systems occurs in the direction of increased complexity. We can think of increased complexity as increased order, since complexity sees more components interacting to produce specific behavior, relative to a random collection of objects. Complexity thus exhibits the sharing of information and matter to produce outputs that survive its environment (i.e. adaptation).

This means we can think of survival as the adaptation of species to their environment in order to minimize entropy production. This information-theoretic expression of natural selection shows us that life moves in the opposite direction to the one stated by the 2nd law of thermodynamics.

To understand entropy correctly we must keep in mind that it only increases (or remains constant) in closed systems, something we more formally call an adiabatically isolated system. If something is adiabatically isolated it’s not in communication with its surroundings, which means no energy, matter or information enters or leaves.

But life does not occur in adiabatic isolation since living systems are open systems. If you have an open system you can resolve the apparent entropy paradox, by realizing that one can create structure and order within the system, at the cost of increased entropy outside the system. The 2nd law of thermodynamics is thus preserved despite life’s march forward, given that overall entropy is still increasing.
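In symbols, the bookkeeping of the second law for an open system together with its surroundings is simply

\Delta S_{\text{total}} \;=\; \Delta S_{\text{system}} + \Delta S_{\text{surroundings}} \;\geq\; 0,

so a local decrease \Delta S_{\text{system}} < 0 is permitted whenever the surroundings take on at least as much entropy as the system sheds, that is, whenever \Delta S_{\text{surroundings}} \geq \lvert \Delta S_{\text{system}} \rvert.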

We know that life depends on incoming sunshine, winds, rains, particles from outer space, etc. and thus exists as an open system. Life is not adiabatically isolated. Life is very much in contact with its surroundings in terms of both matter and information. This means that life reduces the disorder/chaos inside its systems at the expense of everything else around it.

Cost and Inevitability

From the above discussion we can see that the order produced by life is compensated for by the disorder that’s created in its surroundings. Now let’s go back to the topic of technology. Technology is able to increase its internal structure (complexity) by adding more components and connecting those components in useful ways. But it does this at the expense of its surroundings, meaning there must be a real cost to what humans call “technological progress.”

Much of the cost associated with technological progress is paid by planet earth. We know there are ecological consequences to continued progress. We pollute rivers, we strip landscapes, we wipe out entire species. We have undoubtedly done a lot of damage to our environment in the pursuit of bigger and better technology.

This change in our surroundings has forced us to adapt to these new environmental stressors, and we do this by creating new technologies. We must create water filtration systems to clean our polluted waters, build drones to replant trees, use genetics to reestablish near-extinct species. All this gets us into the feedback loop discussed previously. We build technology, increase the entropy of our surroundings, and adapt to those new surroundings by creating yet more technologies.

Regardless of whether or not we believe the cost of technological growth is worth it, this cost is what provides part of the impetus for technology’s inevitability. We cannot escape the new surroundings we create, and the only way to adapt to those new surroundings is to create new technologies.

A critical point in all this is the realization that nobody is at the helm when it comes to technological growth because technology gets what it wants from the feedback loop. Regulations and manifestos notwithstanding, there is no stopping technology. The human dependency on technology is absolute. Technology is an inherent part of human evolution.

Evolution Drives Us Towards Higher Information Content

When viewed in terms of information, rather than any underlying physical construct, we see quite clearly that humanity and technology are two parts of the same thing, both evolving together. Human evolution leads to higher information content in the organisms that survive and evolve.

As depicted casually in Figure 3, there is an explosive growth of information and complexity in any evolving system. Information on the y axis can be replaced with the number of people, or the number of technology components, comprising the system.

Figure 3 The complexity threshold. As more components (humans and/or machines) come together, and interact, the total information content increases, as does complexity.
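A back-of-the-envelope calculation shows why the curve in Figure 3 bends upward so sharply. Treating each component as a simple on/off unit (an assumption made purely for illustration), pairwise connections grow quadratically with the number of components while possible joint states grow exponentially:

# Why information and complexity explode with component count (Figure 3):
# pairwise links grow quadratically, possible joint states exponentially.
for n in [2, 4, 8, 16, 32, 64]:
    links = n * (n - 1) // 2
    states = 2 ** n  # assuming each component is a simple on/off unit
    print(f"{n:>3} components: {links:>5} pairwise links, {states} possible joint states")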

This inflection point is why, among other ways of framing the problem, technology should be expected to take on lifelike properties, and be an undeniable part of our evolutionary process.

Our technology presents an all too natural push towards higher information content, which brings about higher levels of complexity, which means emergent patterns. It is emergent patterns that engender a system with its ability to adapt and survive.

The complexity theorist Gregory Chaitin talks about life having properties of high mutual information, where different components within a system share essentially the same information. This relates to the ability of components to self-organize, reducing their entropy. This is a more precise way of thinking about what happens as components come together to form aggregate technology. Each individual component (i.e. machine) can look to share information with its partners, which in aggregate maximizes mutual information in the system, at a given level of environmental constraints.
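Mutual information has a precise definition, and a small sketch shows what “sharing essentially the same information” means quantitatively. The joint distribution below describes two hypothetical binary components that agree most of the time; the numbers are made up for illustration.

import numpy as np

# Mutual information: I(X;Y) = sum over x,y of p(x,y) * log2( p(x,y) / (p(x) * p(y)) ).
# X and Y are two components; this hypothetical joint distribution has them
# agreeing 80% of the time, i.e. they "share" information.
joint = np.array([[0.40, 0.10],
                  [0.10, 0.40]])  # rows: X = 0/1, columns: Y = 0/1

px = joint.sum(axis=1)  # marginal distribution of X
py = joint.sum(axis=0)  # marginal distribution of Y

mi = sum(
    joint[x, y] * np.log2(joint[x, y] / (px[x] * py[y]))
    for x in range(2) for y in range(2)
    if joint[x, y] > 0
)
print(f"I(X;Y) = {mi:.3f} bits")  # about 0.278 bits; 0 would mean nothing shared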

Information is the currency of everything. Ultimately, it doesn’t matter if we are talking about people or microchips, everything can be viewed in terms of information and processing. When one takes this perspective the supposed distinction between humanity and technology disappears. The only real distinction worth mentioning is that humans appear as slower intermediaries to the evolution of information and processing. Humans cannot communicate as rapidly (although still more effectively) as information technology. Biological humans are the organic, slow version of components that talk to each other.

So rather than the evolutionary line moving from humans to technology what we really have is humanity and technology constantly intertwined. In other words, it’s always been about “components” communicating, with the only true change being the amount of information present at a given point in time. That’s what the universe wants. The fight against entropy is as inevitable as the increase in entropy itself, and that’s thanks to information.

It is not anthropomorphizing to look at today’s technology as lifelike; it is the most reasonable take when life is viewed through the lens of information and processing. Humanity and technology evolve the same way.

Technology as Beauty

The idea that humans are a kind of informational intermediary to what the universe wants is an off-putting notion. It sounds similar to transhumanism, where you’re meshing organic humanity with inorganic technology to achieve something “better” than either alone. And perhaps at the end of that line is the disappearance of humanity itself.

But as I have argued, the supposed humanity-technology dichotomy is a distinction without a difference. This is even “more true” the more we move into the future. Technology is becoming more natural and organic, not more robotic. During the industrial revolution humanity created and used a robotic form of technology, with cogs, pistons, steam and grease. Technology, until recently, didn’t look anything like life.

Technology has rarely been called beautiful, let alone messy or organic. But technology is becoming more natural, not more robotic. In Kevin Kelly’s book he discusses how people are drawn to technology because it is beautiful. I would argue it is becoming beautiful, and will envelop humanity as it reaches levels of undeniable beauty. As we pass the complexity threshold humans will be drawn into technology by more than just its utility. We will be drawn to technology because it will finally show us its true face, which will turn out to be our very own.

Think about how technology looked in the movies up until the 80s. These movies depicted technology with wires and buttons, looking quite complicated and technical. Technology in these movies looked like “technology.”

But today, so much of that complexity is subsumed inward. Think of the iPad or the iPhone. There are almost no buttons. Today’s technology doesn’t look like “technology.” The devices that drive the information economy are sleek, minimal, and do not come with instructions. They are slabs of smooth glass with thin metal backings, the knobs and dials of yesterday now exposed via a touchable screen.

Technology today is far more natural than decades ago. That’s why one does not need to be a computer programmer or tech nerd to use today’s sophisticated software products. Today’s technology is aesthetically pleasing. This isn’t just a change in popular aesthetics. Our computers are able to approximate reality to a high degree, as demonstrated by computer simulations. So much so that we can barely tell the difference. Go on YouTube and look at the various CGI animations of water flowing or blooming flowers. The uncanny valley notwithstanding, computers are getting very close to reproducing what we experience in real life.

Today’s technology is fluid, not mechanical. I argue that technology is coming full circle. The way we interact with technology isn’t rigid or deterministic, it’s messy and organic. We are beginning to operate our technologies at extremely high levels of abstraction. To create software is becoming more like sculpting than computer programming. Many of the “best practices” touted by programmers (modularity, maintainability, etc.) are being (or will be) taken over by managed services and codeless frameworks. To create great software products has much more to do with messy iteration and creativity than some alignment to age-old design patterns and unit testing.

The AI technologies we create are considered “soft” computing because they don’t arrive at their outputs using deterministic, hardcoded rules. They are produced by iteration and convergence, whose innermost workings are unknown even to those who engineer them. AI appears “unscientific” to those stuck in the older statistical approaches of analysis and research. Much to their consternation today’s AI seems more alchemy than science; rest assured, this is a good thing.
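The phrase “iteration and convergence” can be made concrete with the smallest possible learning loop: instead of writing the answer into the program as a rule, the program repeatedly nudges a parameter until the error shrinks. The data and learning rate below are invented for illustration.

# Fit y ≈ w * x by repeatedly nudging w in the direction that reduces the
# squared error, rather than hardcoding the answer as an explicit rule.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]  # roughly y = 2x (made-up data)

w, learning_rate = 0.0, 0.01
for step in range(200):
    gradient = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= learning_rate * gradient

print(f"converged weight: {w:.2f}")  # close to 2.0, never written in explicitly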

Our most natural human characteristics are becoming the skills to have in the new information economy. Technology is becoming more complex, more emergent, more causally opaque, more lifelike. Today, technology is best used as our ancestors used their very first tools, via intuition, emotion and muscle memory.

A common conversation point is the role automation plays in taking over jobs. Not only is such automation inevitable, there is also something right about it. In the spirit of coming full circle, why should anyone be working on an assembly line? And this isn’t just for factory work. Computer programming sees humans lowering themselves to the level of the machine in order to write specific instructions. Any activity that sees the human coming down to the level of the machine, rather than the other way around, is demeaning to the human spirit, and as such, cannot last.

It’s not a particular job that is being “targeted” by automation; all jobs that lack creativity can, and arguably should, be automated away. Technologies should be meeting humans where they are, not the other way around. The future of technology will be genuinely complex, and as such, the industrial revolution mindset of cogs and pistons, of deterministic machinery, no longer applies. The future is fluid, organic.

Technology is becoming beautiful because we are recognizing ourselves in what we create. That deep familiarity draws us in and will eventually coalesce the false demarcation of “human” and “technology” into a single identity.

Science Fiction Gets it Wrong

Science fiction gets AI wrong. It likes to portray intelligent machines as emotionless. But this cannot be how intelligence evolves. Evolution endowed humanity with emotions in order to navigate complex environments. There is no such thing as an emotionless intelligence. If artificial general intelligence (AGI) is ever possible it will exhibit all the “flaws” and emotions that we humans do. We cannot achieve what millions of years of evolution has produced by bypassing what has emerged along the way.

For a being to be truly intelligent, it must have emotions. Emotions are the anchor to the heuristics we use to solve complex problems. More precisely, emotions act as high-level guides to the sampling of the possibility space (gathering disparate pieces of information in order to come to an intelligent conclusion). This is how we make decisions to navigate our reality. This is a non-deterministic process.
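One can caricature the difference between exhaustive logic and heuristic-guided sampling with a toy search: a possibility space far too large to enumerate, a noisy “gut feeling” score that biases which options get examined, and a decision made from the tiny sample. Everything here, the space, the scoring, the bias, is invented purely for illustration.

import random

random.seed(0)
options = range(1_000_000)  # a possibility space too large to search exhaustively

def true_quality(option):
    # The real value of a choice, unknown to the decision maker.
    return -abs(option - 724_000)

def gut_feeling(option):
    # A cheap, fallible heuristic: roughly right, but noisy.
    return -abs(option - 700_000) + random.gauss(0, 50_000)

# Sample a tiny fraction of the space and let the heuristic pick where to look.
candidates = sorted(random.sample(options, 5_000), key=gut_feeling, reverse=True)[:50]
best = max(candidates, key=true_quality)
print(f"examined 50 of 1,000,000 options; best found: {best}")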

We cannot achieve what millions of years of evolution has produced by bypassing what has emerged along the way.

Gut feelings are not wishy-washy things. Such thinking smacks of old school reductionism, that stuck-in-the-industrial-revolution mindset. Genuine intelligence does not look fully logical. It looks messy and unreasonable. Emotions are anchors to high dimensional problem solving. Period.

Technology as “God”?

Ultimately, where does this all lead? I think the ultimate question that must be confronted is this: is technology the same thing as “God”? I use the term “God” in the sense of some ultimate end game, regardless of what you might believe personally. In other words, what will the universe ultimately become?

Of course such questions can offend easily, but only if we think of technology in an inhuman way. But as I’ve argued throughout this article, technology should no longer be thought of in strictly lifeless terms. Technology is not some robotic version of what humans do more naturally; rather, it is humanity itself. If technology does indeed come full circle, where its role is at its most fluid, organic, and natural, then my “God” question makes more sense.

Could technology, as this blob of interconnected machines with its own drive to maximize information content, be a/the “God” … our ultimate end state?

Regardless of how one should best think about technology, we should replace our outdated notions of dirty, mechanical technology with a vision that is as beautiful as it is inevitable. As we learned from Jurassic Park, “life will find a way.” Technology will find a way. Technology will find a way because life finds a way, and because technology and life are one and the same.

Perhaps tomorrow’s technology isn’t some scary future that strips away humanity. Perhaps tomorrow’s technology actually brings us home.

Something to think about.


Sean McClure

Founder Kedion, Ph.D. Computational Chem, builds AI software, studies complexity, host of NonTrivial podcast.