3 Reasons AI Won’t Save You Yet
AI has been here longer than we give it credit for, but AI won’t save you…yet. Like any new evolution of tech, it’s easy to get caught up in the excitement and possibilities. That’s the greatest appeal of software development for me. There is always something new around the corner, but not all that glitters is gold.
Let me explain:
I’ve always had a fascination with the nearly prophetic writings of Isaac Asimov. As a concept, artificial intelligence started to appear as early as the 1940s with his story “Robbie,” and by 1956 the term had officially entered the lexicon.
History of Artificial Intelligence:
“The term ‘AI’ could be attributed to John McCarthy of MIT (Massachusetts Institute of Technology), which Marvin Minsky (Carnegie-Mellon University) defines as ‘the construction of computer programs that engage in tasks that are currently more satisfactorily performed by human beings because they require high-level mental processes such as: perceptual learning, memory organization and critical reasoning.’”
By the ’80s you could find AI throughout pop culture. HAL 9000, Ultron, and Star Trek’s Data have always shaped my perception of what AI would one day be: truly intelligent mimicry of human thought. So when I see the term popping up all over the place these days…

AI isn’t really intelligent yet.
Intelligence is subjective. Even dictionaries don’t agree on a definition, but I tend to define it as some form of applied knowledge, deduction, and reasoning. I don’t really require the idea of self-awareness that movies like I, Robot and games like Detroit: Become Human have made me contemplate.
That said, we are really still only talking about Type 2 (Limited Memory) AI, and those have been around for a while. Chatbots, smart cars, and virtual assistants are all types of limited memory machines that we use every day. As a member of the Y2K generation, my first interaction with the smarter versions of tech was in the form of the chatbot SmarterChild on AOL in 2001.
Newer versions like ChatGPT don’t have the best memory and draw on some questionable data sets, but they are still learning. One user wrote that ChatGPT dropped code from its previous answers when asked for refinements. Many Medium writers have said it’s all about crafting better prompts to get good responses, much like knowing which search terms will get you the best results in Google.
The best way to help AI get better? Using it (and flagging errors).
In an interview with the Intelligencer on March 23, OpenAI co-founder Sam Altman said that the thing ChatGPT needs most to get better is “human feedback — people flagging the errors for us, developing new techniques so the model can tell when it’s about to go off the rails.”
Don’t be afraid to see what AI can kick out for you. It might get the gears in your head spinning, especially for well-known problems where it could save you time.
Stopping the spread of misinformation is hard enough.
Let’s face it. Misinformation is everywhere. And if large data sets are pulling information from biased sources, extremist organizations, and frankly — out of thin air — then how can we be sure that responses are correct and usable?
When I started getting sucked into the rabbit hole of information that is AI right now, I found references to something called hallucinations: a terrifying, highly anthropomorphic term for when an AI makes up something that doesn’t exist. Greg Kostello, CTO and Co-Founder of AI-based healthcare company Huma.AI, told Cybernews, “It can manifest as a picture of a cat with multiple heads, code that doesn’t work, or a document with made-up references.”
I definitely have trust issues here. Whether you were among those who caught Bard’s error about exoplanets or you fell for the Balenciaga Pope Francis images this week, it’s getting hard to discern what’s AI-generated and what isn’t.

It’s not uncommon for technology to outpace both our understanding of its actual impact and what we write into law. If you ever had any doubt about that struggle, look at the TikTok hearings happening in the US right now.
Deepfakes are scamming grandparents for money. Cybercriminals are perfecting their code with ChatGPT. Fake influencers are a thing? Sifting through plausible BS is becoming harder for experts too, even when they know the answer.
Ethical use and development are challenges that Timnit Gebru and fellow AI researchers have been trying to tackle, but disinformation researchers worry too.
Be aware of losing yourself to the algorithm.
It’s not just high school and college students turning in AI-written essays that we have to worry about; there’s also the deeper problem of not thinking for yourself.
I use Grammarly a lot. But I don’t always accept the recommendations, because I want my voice to come through and not just perfect grammar. They say our writing is like a fingerprint, unique to each person. As a QA who has done a lot of code reviews, I can say the same goes for code. When I work closely with software engineers, I can often recognize who wrote a chunk of code by little idiosyncrasies in its structure and the way they tackled the problem.
Is a fancy chatbot that searches the internet and gives you code any different than searching Google and copying something from StackOverflow? It’s hard to say for sure, but we all know that the ease of copying some code might be swapped for the agony of trying to make it work for you.

I’ve already heard stories of people causing trouble by implementing faulty code or breaking NDAs by uploading code for a quick review. Privacy and security are already growing concerns, and the risk of opening ourselves up to these breaches should be at the forefront of our minds as we develop, just as we would vet any other software solution.
I’m just not sold yet.
It’s definitely fun to play with. I’ve asked for movie quotes, help naming things like blogs, and even letter templates. I am, however, hesitant to truly call this tech AI. At best, it’s another tool in a developer’s toolbox. At worst, we could be making our planet hotter, misinformation more prolific, and mundane tasks even harder.
Will this be added to the list of inventions their creators came to regret? Call me jaded, but I can’t believe all of the hype. AI has been here much longer than we have known and is still maturing. I do think we can have a successful relationship with AI if we remember to give lots of feedback to improve it, leverage it with caution, and mind the security gaps to keep our jobs and app users happy.
Let me know if you think AI is the next best thing or more harmful than we think in the comments or find me on Twitter and LinkedIn!