Technology is a Test of Our Wisdom

When we advance technology, we put our Will and Intentions to the test.

We struggle to 'frame' technology, forming labels of 'good' and 'bad tech' in our attempt to assess risks and potential rewards. I offer another framing that may help: each technology grants us greater powers, and those powers confront and tempt our wisdom. When we advance technology, we put our Will and Intentions to the test.

Once a new technology threatens us, we are torn between our hopes and fears: do we try to control people, so that they cannot abuse technology, or do we elevate people, so that technology is a valued servant of our higher purposes? Technology sharpens this dilemma because the risks it creates tend to be longer-term, and they impact wider groups, sometimes the whole globe. If we lack foresight, we miss those long-term risks. If we lack compassion, we miss the global impact. In that chaotic and desperate state, we turn to dominance to protect ourselves: control the sources of risk, and tamp them down.

When we try to control others to protect ourselves, we create structures of oppression that must be thrown off in conflict by later generations. When we can find an opening, a release valve, then that conflict can ooze its way out of us. These release valves are found in the incentives and institutional protocols that surround technology; these are our methods of grasping and operating tech, as a civilization. I'll look at one famous example of such a protocol in particular: Adam Smith's 'Invisible Hand'.

Imagine you are walking down a country road among the trees, on your way to Grandma's House. You stop to look at a field of blooms, and that's when you notice: FOUR Gummy-Bears are laid neatly in a pile, a few yards from the side of the path. Four Gummies are 'better' than NONE, so you scoot over to grab them, and while you are chewing, you look around a bit more – Ah! Over there, further into the woods, are THREE more gummies! You keep following further, to the next pile of two gummies, munching, and next is that Last gummy bear... and only then do you look around: "How did I end up in this pit?" It was a trap.

When we follow that same rule above, simply "grab whatever looks best to you right now" - that 'method' has a name in Mathematicians' circles: "The Greedy Algorithm," and it does what it says on the tin! Now, when you have a simple game to play, like Tic Tac Toe, then that Greedy Algorithm will usually serve you well. In simple games, at least, that simple rule is plenty!
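The gummy-bear trap can be sketched in a few lines of code. This is a toy illustration of my own (not from the story itself): a greedy walker on a one-dimensional trail of candy piles, who always steps toward the biggest nearby pile. The piles are arranged so that each step looks locally best, yet the walk ends in the pit while a jackpot in the other direction goes untaken.

```python
def greedy_walk(rewards, start):
    """Repeatedly step to whichever unvisited neighbor has the most candy."""
    pos = start
    collected = rewards[start]
    visited = {start}
    while True:
        neighbors = [p for p in (pos - 1, pos + 1)
                     if 0 <= p < len(rewards) and p not in visited]
        if not neighbors:
            break
        nxt = max(neighbors, key=lambda p: rewards[p])
        if rewards[nxt] == 0:
            break  # nothing tempting nearby, so the greedy walker stops
        pos = nxt
        visited.add(pos)
        collected += rewards[pos]
    return pos, collected

# Piles of 4, 3, 2, 1 lure the walker rightward, step by step, into the
# pit at the end, while a jackpot of 100 sits just to the left, unexplored.
trail = [100, 0, 4, 3, 2, 1, 0]
end, total = greedy_walk(trail, start=2)
print(end, total)  # ends at index 5 with only 10 candies; the 100 is missed
```

Each individual step was the locally best choice; the overall path was a trap. That gap between "locally best" and "globally best" is exactly where the Greedy Algorithm fails.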

That's what Adam Smith found, a quarter millennium ago. He looked at simple cases of economics, with "two businesses competing over one product" or "one business choosing how much to make of either of two products." He was working by candlelight before we'd developed the modern mathematical tool-belt of algorithm design, so I don't expect him to have handled complex simulations!

Yet - because Adam only looked at simple cases, and he only considered "The Greedy Algorithm," he erroneously concluded that The Greedy Algorithm would work fine in ALL cases, of ANY complexity. He over-generalized a little... ahem. He called it 'The Invisible Hand' and claimed that it would bring us abundance without us needing to pay any attention at all! That was his fatal conceit: "You don't need to pay any attention; no steering is required, because this boat will naturally, automatically go to the very BEST future possible, without any guidance."

We now know that real-world problems demand more sophisticated algorithms; "The Greedy Algorithm" is not enough. And we can't take our hands off the steering, because that Invisible Hand is as dangerous and misaligned as any Terminator Robot Apocalypse. In fact, that Invisible Hand is what currently pushes tech companies toward our Robot Risks: they rush to beat each other, for profit-motive, without any steering or guidance.

Wisdom is needed to survive the dangers we reveal with our great power. What does our Wisdom currently lack, that allows technology to harm us? Tech has no will of its own, yet, so it is our own deficiency that makes technology 'bad' or 'dangerous'. The cure is in us, too. We need to get back to steering this thing, or Nature will take the reins in frustration. And, Darwin reminds me that She is not so forgiving.

What does that steering look like, though?

There are branches of study devoted to parts of this steering, and we can draw upon those domains of expertise, as well as listening to the people who live within those systems - their feedback is the most acute lens for finding our way again. As to those experts, researchers in 'Mechanism Design' and algorithm-tinkerers show that we can simulate the impact of each set of rules, to get a first sense of what might happen.

Already, A.I. 'agents' have been simulated under various Tax Codes in a virtual world, to see what patterns of behavior are rewarded by each set of rules, and which rules lead to the most equitable, the most profitable, or the most dynamic and responsive results. (Hint: 'Tax Cuts for the Rich' doesn't do so well, by any measure.)
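The core idea of scoring rule-sets can be sketched simply. The code below is a minimal illustration of my own, NOT the actual reinforcement-learning study: it takes hypothetical agent earnings, applies two made-up tax rules (a flat rate and a progressive one), redistributes the revenue evenly, and compares the resulting inequality using the standard Gini coefficient.

```python
def gini(incomes):
    """Gini coefficient: 0 means perfect equality, 1 means one agent has it all."""
    s = sorted(incomes)
    n = len(s)
    weighted_sum = sum((i + 1) * x for i, x in enumerate(s))
    return (2 * weighted_sum) / (n * sum(s)) - (n + 1) / n

def apply_tax(incomes, rate_for):
    """Tax each agent per the rule, then rebate the revenue evenly to all."""
    taxes = [inc * rate_for(inc) for inc in incomes]
    rebate = sum(taxes) / len(incomes)
    return [inc - t + rebate for inc, t in zip(incomes, taxes)]

pre_tax = [10, 20, 30, 60, 180]  # hypothetical earnings, skewed to the top

flat = apply_tax(pre_tax, lambda inc: 0.20)
progressive = apply_tax(pre_tax, lambda inc: 0.10 if inc < 50 else 0.40)

# Lower Gini = more equitable; total income is unchanged by redistribution.
print(round(gini(pre_tax), 3), round(gini(flat), 3), round(gini(progressive), 3))
```

Even this toy version makes the pattern visible: the progressive rule yields a lower Gini than the flat rule, at identical total output. The real simulations add learning agents who adapt their behavior to the rules, which is where the interesting (and sometimes surprising) dynamics appear.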

We can use mechanism design and simulation, followed by local trial runs (such as those testing Universal Basic Income and other community support services) to test like Scientists; and we can listen to the affected groups, to hear what goals those simulations and tests need to achieve.

So long as we throw up our hands and say "We don't need to steer, forethought is unnecessary. The Invisible Hand will save us all, so I don't need to feel compassion for others' struggle..." then we drift closer to that steep waterfall. Grab a paddle!
