Triggering the AI singularity

Lorand R. Minyo
4 min read · Aug 11, 2021


The colorful Singularity

Unless you’ve been living under a rock (lucky you!), you’re quite familiar with Artificial Intelligence (AI), what it is, what it does, and why it’s something that we, as a species, can no longer function without.

If you’re really new to AI, or you just need a refresher, I recommend checking out BuiltIn’s Introduction to AI.

Since AI is now used in pretty much all the digital products that you use every day (including, yes, the platform you’re reading this on), it’s only natural that fearmongers started portraying AI as the ultimate demise of humanity.

And to some extent, that might be true, if not properly handled. Much like a child, it can grow up to be one thing or another, depending on education and environment.

And the process of properly handling AI is, I believe, going to be the future of work. I’m sure we’ll fix AI bias, the issues of distrust, and the underlying ethics of AI soon enough; what we need to focus on more than ever is making sure that those handling the education (training) and advancement of AI have good (or at least positive) intent.

It’s been said until recently (and in some developing countries this is still regarded as the rule) that the safest career choice for the future is to become a software developer; that building digital products will become so common that most, if not all, people will need coding skills. I agree with having the skills; I don’t agree with focusing on them as a career choice.

We already have AI pair programmers and AIs that write code on their own; it’s just a matter of time until self-improving AIs are mainstream.

OpenAI does a wonderful job of making sure progress happens in a responsible, positive way; just take a look at the latest GPT-3 implementations.

Still, many of these apps are far from being “dangerous” or reaching levels of consciousness that would be concerning, and yet so many people use them as examples to instill fear that one day AIs will be better than us and take over.

Newsflash: in many ways, AIs are already better than us. They are better at winning any game with a predictable outcome, better at remembering, counting, scanning, and scaling, sometimes even at composing songs.

And sometimes they surprise us with their articles and stories.

And here’s where things start getting really interesting.

If you’ve ever read an article written by an AI and you have a little copywriting experience, you’ll notice that something’s off, even if you can’t quite put your finger on it.

I’m here to tell you what that something is.

It’s something no machine has been (or is, for now) capable of doing, let alone mastering, yet we humans have perfected it and use it to manipulate thoughts, intentions, decisions, and, of course, elections.

It’s the quintessence of storytelling and what turns good copy into great copy.

It’s the process of evoking a specific emotion.

There are four basic emotions: happiness, sadness, fear, and anger, which are differentially associated with three core affects: reward (happiness), punishment (sadness), and stress (fear and anger).
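This four-emotions taxonomy is essentially a many-to-one mapping, which can be sketched in a few lines of Python (my own illustration, not from the article; the names are assumptions chosen for clarity):

```python
# Mapping of the four basic emotions onto their three core affects,
# as described above: two emotions (fear and anger) share one affect.
CORE_AFFECT = {
    "happiness": "reward",
    "sadness": "punishment",
    "fear": "stress",
    "anger": "stress",
}

def core_affect(emotion: str) -> str:
    """Return the core affect associated with a basic emotion."""
    return CORE_AFFECT[emotion.lower()]
```

Note that the mapping is not invertible: given only “stress”, you cannot tell whether the underlying emotion was fear or anger.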

You might argue that thoughts and emotions are all chemical reactions and electrical impulses, and that is entirely true. Thus, we could very well be brains in a jar without knowing it (but that is an entirely different rabbit hole).

Great storytellers are able to instill certain emotions only because they themselves feel those emotions. What’s more, great storytellers are able to manipulate the intensity of an emotion.

And that is something I cannot see an AI doing, at least not in the near future.

I’d argue that in order to create emotion, you first need to understand emotion.

Not even humans know precisely how emotions work; hence, we cannot teach them to our future digital selves.

But by the time we’re able to do so, we won’t even notice the transition.

We’ll already be past the Singularity.

Thank you for taking the time to read this article. If you’ve learned something from it, I’d appreciate you sharing it with your peers.

I’m a technologist and storyteller, with a strong focus on the future. My writings include bits and pieces about #DeFi, #eCommerce, #iGaming, #Energy and #Robotics. If you want to keep in touch, feel free to add me on LinkedIn.
