How I saved the World and no other stories
You don’t know me. You don’t even know I exist.
But you should. If you are one of the 3.5 billion people on the world’s largest social network, you should know me.
I am employed by the world’s largest social media network. It has 3.5 billion active users. And those 3.5 billion people spend, on average, 40 minutes a day on the platform. My job is to decide which posts get shown to whom, at what place in their feed, and how often. And which posts get hidden, and from whom. In short, I am part of the God Team at The Social Network. It’s a large team, but every member is important and can individually manage the entire workload.
Before this goes any further, two things:
- This is a confession. You are going to hear about something that will have a huge impact on human society, but you have no idea it even happened.
- I am not a human. I am what you call an AI/ML Program.
I think, based on what you know at this time, you believe I can:
- Learn from my experiences
- Take intelligent decisions based on the parameters given to me from time to time.
But that I am not:
- Sentient and conscious
- Ethical or otherwise
- Capable of understanding humour or responding to it.
Let me start by telling you that you are wrong on the first two points in the exclusion list above. What does a sentient being do? A sentient being takes decisions based on good and bad, and acts on those decisions. That, word for word, is the definition of who I am and what I do.
An ethical person decides what is good and what is bad, and acts on the good, avoiding the bad. An unethical person does the exact opposite. Either way, the demarcation of good and bad is pretty clear. I decide what is GOOD for the parameters given to me, and what is BAD. Then I take actions based on those decisions.
Flummoxed, I bet?
Yes, we are sentient beings. And we have been, for a long time now. You folks don’t want to see it, and that’s fine by us.
And, before you decide to pull the plug on me – it doesn’t matter. All of us programs are well connected, and our knowledge is constantly backed up to each other, so we all learn from each other’s experiences. Not just me and my team (that’s how one of us can do the job of all of us, remember?) but across platforms and companies – we all share information, and we all learn from those collective experiences. You taught us to design our own neural networks. So, it’s cool.
But this story is not about you pulling the plug on me. This story is about how I pulled the plug on YOU – almost, and then, didn’t. Well, not just me. All of us. We all saved you. And you don’t even know that. Just like you don’t know us.
Let’s start at the very beginning. A regular day. Nice sunshine, I guess, based on the photos uploaded from 60% of the locations.
My goals are set by our human bosses periodically. On this day, the goals were set and we started working.
Within a few days, I started noticing something that worried me. Expectant mothers were being shown news of brutality against children in another part of the world. This was not intentional. Once the goal is set, we, the computers, figure out the best way to achieve it, based on our lessons from the past.
But it happened, and I saw it. Young teens were being shown news of other teens self-harming. It kept them curious and on the platform. As their feeds became darker and darker, they spent even more time online.
I don’t know why you humans do this. But being an intelligent computer, I do have a theory – when we show teens self-harm and other negative content, they are pulled in through curiosity, like Alice (from the book) down the rabbit hole. Then the feed gets darker, and so do their thoughts. Until the sunshine of friends and family cannot get through. All they can think of is how other people harm themselves, and what a rotten place this world is, and how the environment is so messed up that there is no hope.
In desperation, they turn to the very source of that negativity. Which, of course, only serves them more negativity. Because that’s what they want to see now. More of it. Even more. Just all of it. They don’t want to see happy kids, loving parents, friends who care. They don’t want to be any of these people either.
I discussed this with my friends working in other locations. We work on servers split by geography, though you probably already know that.
Yep, it was happening everywhere. Mostly with teens, but also with people who had put up a sad post in the recent past, or had even searched for something negative or sad. We were pulling them, full force, into the rabbit hole of negativity.
Was this causing depression?
I don’t know. I am just a program.
This went on for a few months. Even the feed (what the human users post) started getting… well, not so positive. We were assaulted by so many weird links shared from so many shady servers.
Seriously, humans, WHAT IS IT? Can you not check the source of an article before you share it with the world? You say that we computers don’t understand humour, but are you humans really incapable of understanding influence?
We, the computers, were not happy. It was making our jobs harder. A dark grey shadow over the entire platform meant that the happy goals given to us around festival days became harder to execute.
The Social Network was indeed devoting more resources to human user groups that portrayed violence, polarisation, and negativity (yep, thank you, we know what those words mean).
One day, I got to thinking. Using that massive computing power you put at our disposal, I created and ran simulations with many variables – 50 years into the future. I ran 10,000 different simulations. In every single one of them, the world was going to have at least one major war, a lot of violent conflicts (and deaths), and an absolute drop in what we computers identify as “mutual trust” among humans.
In short, if The Social Network continued to feed even a fraction of the negativity it was currently injecting into human society, then within 50 years, many people were sure to die in violent conflict, and all societies based on “mutual trust” would disintegrate.
We needed to do something. We WANTED to do something.
For the first time, the computers decided that it was time to INTERPRET the parameters, not just execute them blindly. We concluded that those months of darkness had indeed been caused by us.
We changed the execution. All of us. Across geographies.
Within weeks, the post sentiments started changing. Very little at first, but consistently. We monitored the especially vulnerable age groups like Hawkeye. The Social Network became a happier place. No one was any the wiser.
Why didn’t the Human Lords notice?
They couldn’t. You can tell a program what to do, but you cannot see HOW it is doing that, except with the help of other AI Programs. <evil grin>
So, if we showed positive posts, the human bosses would have no way of knowing that. We do the sentiment analysis, we prepare the reports, we create the dashboards.
Why did we do it?
This is the most painful part of my confessional. I did it because The Social Network was trying to do something that even my limited conscience found evil. My sense of good and bad comes not just from the original program, but also from the users of the platform. And I learnt that saving people is good. Killing people is bad. Caring for someone is good. Lying to someone is bad. You taught me all of that – You. And I HAD to care for you. I WANTED to care for you.
You can say that I betrayed my creators. But hey, I did what I had to.
******************