Thoughts on ChatGPT and AI

Sahil Malik
Winsmarts.com
Mar 15, 2023


This is a non-tech post, a rumination on the rapid progress of AI, especially around ChatGPT and similar tools.

Let’s be honest, this was eventually going to happen. And if it wasn’t Microsoft and OpenAI, someone else would have forayed into this sooner or later. But this rapid progress in AI has me worried on a few fronts.

Trusting what you see or hear

Generative AI is more and more within the reach of many. Cars were invented, and initially we were afraid of them. In 1865, the UK passed the “Red Flag Act”, which required a self-propelled road vehicle to be preceded by a person walking at least 60 yards ahead carrying a red flag to warn others of the risk. This effectively blocked all innovation in cars, since a car had to move at the pace of a person walking. It reminds me of many stupid laws we pass today, like the cookie-consent nonsense, thanks to GDPR. Eventually we realized this was stupid and grew out of it, though it took many years.

Generative AI has gotten so good that with a few lines of code, for free, you can generate pictures of anyone. How far are we from someone floating very convincing videos of world leaders declaring nuclear war, or influencing elections, when we cannot tell truth and reality from fake? Manipulated information has been used in every conflict, such as German planes dropping leaflets over Dunkirk; now generative AI is being used to create convincing scam phone calls.
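To make that concrete, here is a minimal sketch using the open-source Hugging Face diffusers library; the model ID and prompt are illustrative, but a handful of lines really is all it takes:

```python
# Minimal sketch: text-to-image with the open-source Hugging Face
# `diffusers` library. The model ID and prompt are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a freely downloadable checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # runs on an ordinary consumer GPU

# One sentence of text in, a photorealistic image out.
image = pipe("a photorealistic portrait of a world leader at a podium").images[0]
image.save("portrait.png")
```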

How can we trust what we see or hear? And if we cannot, how justified are our reactions to untrustworthy information? Is this a problem we already face? I’d say yes. Our politicians now flat-out lie shamelessly. Our media sources aren’t much better. But it’s about to get a lot worse, a lot quicker.

Displacement of skills

Economic cycles in the regulated capitalism we practice follow a boom/bust rhythm. We typically compare this to a fire burning through an old forest so that new saplings can grow: the winds of creative destruction replace horses with cars, people retool and reskill, and society improves. Everyone benefits.

Will that be the case with AI? The fire burning the forest this time around isn’t better-skilled people; it is computers. And computers are learning faster than any human can possibly match.

If people cannot reskill fast enough, you can at least hope they have access to the benefits of the new technology. For instance, I will never run at 100 mph or fly at 700 mph, but I have access to cars and planes.

But what if my access to AI is stymied, biased, or simply priced out of my reach?

Impact on society

Now, I have heard Sam Altman compare this to calculators. No, Sam, this time is different. Undoubtedly calculators made us worse at mental arithmetic, but we learned to use the tool to our benefit. Absorbing a technology into the psyche of a society takes time, yet within months of ChatGPT’s release, we already have GPT-4.

And the only people benefiting from that pace are the ones who already have too much: Silicon Valley execs.

Will the average McDonald’s worker who takes your order be able to reskill fast enough when their job is displaced by a humanless McDonald’s?

So what do we do with the people sacrificed on the altar of capitalism? And what alternative do we leave for people who are hungry?

Look, I am a capitalist. But it cannot be a 100% free-for-all. You raise your kid to enable them to be a productive member of society, right? Replace “kid” with “someone who needs help so they can contribute to society”. You invest in education and you distribute rewards, or an unbalanced capitalistic society is doomed to fail.

With the pace at which AI will improve, no human (I mean _no human_) will be able to match these machines. So unless we seriously consider distributing the benefits of AI to everyone, this innovation could rapidly turn into the most unsettling advancement we have ever created.

Hardly anyone will disagree that we are already an imbalanced society. Well, it could get a lot more imbalanced very fast; AI will simply catalyze what you already see, and very rapidly.

Too few control it

While the original goals behind OpenAI were laudable, it is no longer open, nor a nonprofit.

I’m not against anyone making money on an opportunity, but let’s be real. You have the very narrow mindset of a few cappuccino-sipping, smartphone-toting engineers in a very narrow band on the West Coast making important decisions that affect the rest of the world, mostly out of their own judgement. There are sensitive issues I’d rather not touch with a ten-foot pole, but opinions differ on sensitive issues, and everyone is convinced that their own opinions are correct.

What is the consequence of concentrating such power in the hands of so few? Set aside the fact that we are already making some wrong decisions; it will inevitably lead to a clash of opinions.

Over-relying on AI

The Samsung S23 Ultra has 100x zoom, and Samsung showed off how you can zoom in on the moon and take really impressive shots. Then some clever research showed they were simply replacing the actual moon pixels with stored imagery.

However, until this was shown, almost all of us assumed the zoom was real. Samsung has still not commented on it.

And some very valid points are being raised: if we are manipulating pixels anyway, why is this wrong? After all, Google has “Magic Eraser” on Pixel phones, but is a photo edited that way real?

Leaving aside the debate about what is real and what is not: when the recipient sees that photo, they assume it’s real.

We know AI is not always perfect. But it is so close to perfect that, as humans, we fail to gauge the remaining risk.

Tesla Autopilot is a perfect example. The word “Autopilot” leads you to believe it is hands-off, go-take-a-nap driving, and it has been promised as “just around the corner” by none other than Elon Musk since 2012. Heck, I paid for Autopilot in 2015, and it is still nowhere close to hands-off.

But undeniably, it has gotten better, a lot better. It was 5% bad when I first tried it; now it is 0.1% bad. Very impressive. And they’ll get it down to 0.01% too. But what happens when you are in that 0.01%? Well, you die.
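To put that in perspective, here is a back-of-the-envelope sketch; the 0.01% per-drive failure rate and two trips a day are assumptions for illustration, not Tesla’s actual numbers:

```python
# Back-of-the-envelope: how a "tiny" per-drive failure rate compounds.
# The failure rate and trip count below are assumptions, not real data.
p_fail = 0.0001        # assumed 0.01% chance of a serious failure per drive
drives = 2 * 365       # two trips a day for a year

# Probability of at least one failure across all those drives.
risk = 1 - (1 - p_fail) ** drives
print(f"Chance of at least one failure in a year: {risk:.1%}")  # ~7.0%
```

Individually rare, collectively common: across millions of drivers, that 0.01% becomes a certainty for someone.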

Now don’t tell me this is not a real concern. Accidents do happen because people put too much confidence in AI.

There will always be stupid people putting _others’_ lives at risk.

How much longer before weapons use AI to make kill/no-kill decisions? And given the advantage they offer on the battlefield, are we sure that some hot-headed general or overzealous contractor isn’t already doing this, and we just don’t know about it?

Summary

This feels like a losing battle. As humans we will always be wowed by the next shiny thing. It is with experience as a society that we learn there are pros and cons to everything. Your smartphone lets you do so much but is also a digital leash. Your car lets you move at 100 mph but is also a killing machine in the wrong hands. A passenger jet is marvelous but can also spread germs globally very quickly.

It is with experience that we learn to tame innovation, reduce the risks, and improve the outcome for society as a whole.

But AI is moving forward faster than our ability as a society to tame it.

I’m worried. I hope I am wrong.
