Howdy! It’s your local software engineer checking in! It’s too cold to go to the pool this weekend, and I am very bitter about it. I’ll have to read on my back deck like a simpleton instead. My squash are multiplying and I’m now a proud vegetable parent. I also finished a massive finance project at work that was probably the most difficult coding task I’ve done so far. Three months’ worth of troubleshooting and almost crying at my desk. All updates aside, let’s get into today’s topic: Artificial Intelligence.

People frequently ask me, as someone who works in IT, for my opinion on AI. I take a more nuanced approach to it. First of all, let’s talk about how ChatGPT, Copilot, Gemini, and all their other bot buddies operate. At a very high level, engineers trained these bots on huge datasets so they could “learn” how humans typically think, what questions we typically have, and how we operate sociologically and psychologically. Think social science married to computer programming. These AI models sort through big data to learn how to respond to typical human questions, and they keep learning from us the more we use them. They take the kinds of questions we ask and fine-tune their answers as they get more examples from us. More knowledge = “Better Bot.” In human terms, I guess it’s like growing up: learning more speech patterns, when to say what, how to problem solve, and more. It’s actually kind of a neat concept if you ask me. Here’s the important part, though. I cannot stress this enough: AI is a resource, but it has nowhere near the ability to become self-aware and try to destroy us, or whatever your worst robot nightmares are. For my Marvel fans out there, it’s not the next Ultron. I can’t even get it to understand possessives in an email.
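To make the “learning patterns from data” idea concrete, here’s a toy sketch in Python. It only counts which word tends to follow which in a tiny text, then predicts the most common follower. Real models are vastly more sophisticated (and the training text here is just made up for illustration), but the core idea of predicting the next token from patterns in data is the same.

```python
from collections import Counter, defaultdict

# Tiny, made-up "training data" -- purely for illustration.
training_text = (
    "the cat sat on the mat the cat ate the food "
    "the dog sat on the rug"
)

# Count which word follows which.
follower_counts = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follower_counts[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training."""
    if word not in follower_counts:
        return None
    return follower_counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- it follows "the" most often
print(predict_next("sat"))  # "on" -- "sat" is always followed by "on"
```

The more example text you feed in, the better the counts reflect real usage, which is the toy version of “more knowledge = better bot.”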

AI operates with a series of predictive algorithms, AKA a set of instructions for the computer to follow. If someone types this in, spit that out, etc. That’s the simple version of it. It can take in text and work at a much deeper level than that, but that’s the basis of how it works. I use it at work sometimes as a sounding board. I code in eight different languages, and it’s hard to keep the syntax straight sometimes. If I’m really struggling, I’ll ask it to double-check what’s wrong with a piece of code. ChatGPT was a huge help to me with the finance report. The report pulled from millions of data points and was taking entirely too long to compile. My bot, which I totally named Chat Bot Claire, suggested a materialized view, which precomputes and stores the query results so the report loads almost instantly instead of being a performance drain on the database. It can be a great resource if you’re spinning your wheels and can’t figure out the next steps for a problem. There are a ton of use cases for it in your personal and professional life.
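For the curious, here’s a minimal sketch of the idea behind a materialized view, using Python and SQLite. SQLite doesn’t actually have materialized views (in PostgreSQL or Oracle you’d use `CREATE MATERIALIZED VIEW` and refresh it on a schedule), so a plain table stands in as the cache. The table and column names are made up for illustration, not the schema from my finance report.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE transactions (dept TEXT, amount REAL);
    INSERT INTO transactions VALUES
        ('finance', 100.0), ('finance', 250.0), ('it', 75.0);

    -- Precompute the expensive aggregate once and store the result.
    CREATE TABLE dept_totals AS
        SELECT dept, SUM(amount) AS total
        FROM transactions
        GROUP BY dept;
""")

# Reports now read the small precomputed table instead of
# re-scanning millions of rows on every request.
for dept, total in conn.execute(
        "SELECT dept, total FROM dept_totals ORDER BY dept"):
    print(dept, total)  # finance 350.0, then it 75.0
```

When the underlying data changes, you rebuild the cached table, which is the manual equivalent of `REFRESH MATERIALIZED VIEW`. The trade-off is staleness for speed: you read slightly old numbers instantly instead of fresh numbers slowly.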

Let’s talk about the drawbacks. Number one, it’s a huge drain on resources. I never use AI in my personal life, and I only use it at work if I absolutely have to. It consumes enormous amounts of energy globally and is putting a strain on national and international energy supplies. As its infrastructure expands, governments around the world are implementing regulatory standards to slow the growing demand for energy, water, and other environmental resources to power the datacenters. AI datacenters also need huge volumes of purified water for cooling, which in turn strains local water supplies for communities. In areas with frequent droughts, that’s a major problem. There are also quite a few naturally occurring metals involved that can only be gathered through disruptive mining operations. Overall, the more AI expands, the more resources it consumes. Some of those resources aren’t renewable, and what happens if we run out? I’ve seen people use AI as a variation of Google for simple questions instead of just googling the answer. Food for thought next time you feel the need to use AI to write a simple email.

Next up is a really scary implication, in my opinion, so I’m going to get on my soapbox. People are using AI as a replacement for their ability to think critically and solve problems. If they can’t immediately figure something out, they go straight to AI. If they need to write a simple email, they go straight to AI. It’s meant to be a resource, not something that does all the work for you. Students are now cheating their way through high school and college, and they’re finding workarounds so they don’t get caught. What happens when they can’t actually perform the job they got because they don’t have the necessary skills? They’re paying thousands for a degree they didn’t actually earn. Teachers everywhere are complaining that their students can’t write a simple essay, don’t have the attention span to read a book, and don’t have the critical thinking skills to tell what’s propaganda and what’s not. That makes them super easy to control. People have no patience anymore, and everything is so instant and on demand. Couple that with the constant stimulation, and it feels like people don’t know how to do things for themselves now. There’s an app or an instant solution for almost everything. It’s great to see technology progress so much, but it’s starting to feel like WALL-E, where everyone is unable to walk because they have floating chairs to take them wherever they want to go. Don’t let AI be your go-to for problem solving. The old-fashioned way worked for centuries, I promise.

I’m going to link this article from New York Magazine. It’s behind a paywall, so I’ll try to sum it up for those of you who don’t subscribe. It starts with a student who transferred into Columbia with an AI-generated admissions essay. He cheated on almost all of his assignments with ChatGPT. He was later kicked out of Columbia for extensive honor code violations, including developing and promoting a tool to help people cheat their way through technical interviews for IT positions. While professors have tried to make AI harder to use by requiring written exams, blue books for essays, and so on, it’s an academic nightmare. Professors say students are coming out of their degree programs essentially illiterate, both figuratively and literally. No one is sure what to do about this, and we don’t know the long-term effects of using AI for just about everything. There is speculation that problem-solving abilities and creativity will suffer as a result. As for that student at Columbia, he was suspended again and left. He started his own company built on AI helping you get answers to anything, and he plans to use it to let people cheat on whatever they want, including LSATs, GREs, interviews, and more. It’s truly startling how entitled some of these kids are. They just don’t care, and I’m nervous about how the upcoming generation will operate and about the implications of growing up with AI at their fingertips.

Students are Cheating Their Way Through College

Overall, I know we live in a technology-centered world. It’s my literal job, after all. I just don’t think it needs to be everything. I use AI as a resource when necessary, but it’s a last resort for me. Being a creative problem solver is the very foundation of what I do every day. It’s a passion of mine. I don’t want AI to take over that portion of the job for me. I don’t want it to take over making artwork when we have actual artists with real skill and love for their creative projects. It doesn’t need to think for us. It doesn’t need to be such a drain on resources. It doesn’t need to generate things that make people question what’s real or fake anymore. The ethical implications, or lack thereof, are startling. I’d love to hear your thoughts in the comments below! Do you use AI for work? What’s your relationship with it? Any concerns? Thanks for reading as always.

Sincerely,

Mayor of Yap City

**Image credits to University of Central Florida. Check out their link on AI here.


2 responses to “My Thoughts on the Uses and Implications of AI”

  1. Cheryl shigo

    Really great article Carly!! I now have a much better understanding of AI. Okay so when I google a question why does it give me an answer automatically derived from AI, which is a drain on resources from what you say? I never use AI but I get its usefulness. But like everything else we later learn (social media) I’m sure it will be a problem down the road.


    1. cnw5172

      Great question! Truthfully, I can only speculate. Google claims it’s to help answer your question immediately, so the end user doesn’t have to click on an article to find a result. I get it, but it goes right back to the whole instant-gratification thing I get so tired of. It feels like people won’t know how to find actual information anymore and the tool will start thinking for them. The implications are terrifying. Moreover, Google is a tech giant in Silicon Valley, one of the FAANG companies: Facebook, Amazon, Apple, Netflix, and Google. They’re not super concerned with being a drain on resources. They just want to remain competitive and make more money than everyone else by any means necessary. Major eyeroll. Thanks for reading!

