Everyone is talking about AI these days, it seems. Maybe it’s because I work in technology, but I can’t go a day without hearing of or reading about it. Today I will look at AI, tell you how I use it and what I don’t use it for…as well as two cautionary ideas that you need to be aware of.
The Age Of Data
One of the problems of looking at AI is the amount of misinformation out there – on both sides, pro and con – and separating that from the facts. The other issue, as it is turning out, is the impact AI is having on people’s abilities and brains. Today we will take a brief look at AI – what it is, what it isn’t – and how I, as an IT professional, feel safe using it.
I’ve worked in IT my entire career, and I specialize in data management. We are on the edge of a great age of using data to drive decisions in business and life. After all, isn’t it better to make decisions based on facts rather than gut instincts or feelings? We are now just at the point where models can look at the data and spit out useful results – gone are the days of simple min/max/average. So it’s a perfect time to have a monster computer help with those things.
What is AI?
AI is currently a very large data model that has consumed massive amounts of language and processed it so that when you ask a question, it draws on the patterns in everything it has consumed and guesses, word by word, what the answer should be. Cal Newport (a computer science professor) did a brilliant job of explaining how AI works in his episode “Defusing AI Panic.”
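If you want the flavor of that in code, here is a toy sketch in Python. It is nothing like a real model – real ones operate at a vastly larger scale with far more sophisticated math – but it shows the core idea of guessing the next word based on what was consumed:

    import random
    from collections import defaultdict

    # "Train": record which word follows which in the text we consumed.
    training_text = "the cat sat on the mat and the cat ate the food".split()
    next_words = defaultdict(list)
    for current, following in zip(training_text, training_text[1:]):
        next_words[current].append(following)

    # "Answer": keep guessing a plausible next word from those observations.
    word = "the"
    answer = [word]
    for _ in range(6):
        if not next_words[word]:  # nothing ever followed this word in training
            break
        word = random.choice(next_words[word])
        answer.append(word)
    print(" ".join(answer))  # e.g. "the cat sat on the mat"

Everything the toy can say comes from the one sentence it was trained on – which leads directly to the takeaway below.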
Here is the takeaway: AI is only as good as what it is trained on.
What Isn’t AI?
AI currently can’t think for itself. It can only assimilate what it knows and work from that. It can’t take context into account unless that context is given to it directly. So it can’t generate plans, it can’t do true innovation, and it definitely isn’t conscious.
Again, I urge you to listen to Cal Newport’s episode “Defusing AI Panic.”
How I Use AI
Understanding the technology behind it, I don’t see any reason to panic about AI. So I have started to use it for a few tasks that allow me to keep doing the creative work while it handles the things that would be a low return on my time investment.
I use AI to write the 100-character summaries of my blog articles that get put into the SEO engines. Do I do this blindly? No. It does condense the article into 100 characters, but it misses the SEO keywords about 75% of the time. Even when I give it the SEO phrase, it still doesn’t always produce ready-to-use text. But it saves me the time of collapsing the article myself.
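For the curious, that workflow is just a few lines against an LLM API. Here is a rough sketch using the OpenAI Python library – the model name and prompt wording are placeholders, not a recommendation:

    from openai import OpenAI  # the official OpenAI Python library

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def seo_summary(article_text: str, keyword: str) -> str:
        # Ask for a ~100-character summary that works the SEO phrase in.
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; use whatever model you prefer
            messages=[{
                "role": "user",
                "content": f"Summarize the following article in 100 characters "
                           f"or less, and include the phrase '{keyword}':\n\n"
                           f"{article_text}",
            }],
        )
        return response.choices[0].message.content  # still needs a human read

And that last comment is the point: the output gets a human review pass before it ever touches the site.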
I also use AI to soften the tone of some of my emails. It never fails to tick me off that a pointed email sent by any one of my male colleagues would be accepted as “to the point.” If I were to send the same email, with nothing changed, I would be perceived as a raging beyotch. (We’ve experimented with this. It’s really disheartening.) So there are times when I drop an email I have written into ChatGPT and tell it to soften it.
That’s it.
What I Don’t Use AI For
Many people have asked me if I use AI for writing. Nope. Not outlining either. An engine trained exclusively on my content might be able to pick up my tone and word choice, but it would never be able to replicate the way my brain moves from one idea to another. All the content on this blog comes directly from my brain via my fingers on the keyboard.
I also do not use AI for programming in my job. The things I would ask it are syntax-based and easily found through a Google search. But writing code? The models aren’t trained on the complexity and subtlety of what I do as part of my job. AI could produce a blunt-force data loading program, but it can’t optimize it or take into account things like server load, timing, and concurrency – all of which can make the difference between a program running well or not at all.
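To illustrate what I mean (a made-up example, nothing from my actual job – it assumes a simple three-column sales table): the first function below is the blunt-force version a model tends to hand you, one commit per row; the second batches its commits, which is exactly the kind of server-load judgment the models don’t make on their own.

    import sqlite3

    conn = sqlite3.connect("warehouse.db")  # stand-in for a real database

    def blunt_force_load(rows):
        # One insert, one commit, per row - hammers the server with
        # tiny transactions and holds locks far longer than needed.
        for row in rows:
            conn.execute("INSERT INTO sales VALUES (?, ?, ?)", row)
            conn.commit()

    def batched_load(rows, batch_size=1000):
        # Same result, but committed in batches - less load, shorter locks.
        batch = []
        for row in rows:
            batch.append(row)
            if len(batch) >= batch_size:
                conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", batch)
                conn.commit()
                batch.clear()
        if batch:  # whatever is left over
            conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", batch)
            conn.commit()

And even the batched version is just a start – scheduling it around peak load and coordinating with concurrent jobs is where the real judgment comes in.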
The Danger of AI Training
One of the things that make me wary of AI is the way it was trained. Let’s take a programming slant to this: even if an AI is trained on all of the code out there on the internet, that is only a small proportion of all the code actually running computer systems and applications – and the code from historical systems isn’t present at all. That information isn’t available to feed into the model; people don’t post the source code for their production systems.
Most of what’s on the internet is “I broke my code, can someone help?” – a lot of broken code, answered by people who may or may not know what they are doing, and the model has no way to determine whether any of it truly works.
Poor inputs all around. The answer may “work” but it may not be optimal.
Then, if you feed those poor-quality answers back into the model, it keeps getting worse and worse.
What I see happening with poor results being fed back in is that eventually everything converges to a single point – and that point will be built on really poor input and will lack quality itself.
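You can cartoon that feedback loop in a few lines of Python, treating “quality” as a single number. It is deliberately simplistic, but the shape of the problem is right – each generation trains on the previous one’s slightly worse output:

    import random

    quality = 1.0  # pretend generation zero trained on 100% human writing
    for generation in range(1, 6):
        # Each generation trains on the previous one's output, which is
        # a little worse than what went in, plus some random error.
        quality *= random.uniform(0.80, 0.95)
        print(f"generation {generation}: quality {quality:.2f}")
    # The number only ever goes down - garbage in, more garbage out.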
The Danger of Unknown Bias
One of my colleagues challenged me on not using ChatGPT to relieve me of some of the burden of researching topics for articles. But to me, it’s obvious: I don’t know what ChatGPT was trained on, so I can’t detect bias in the sources.
Let’s say I asked AI for an article about kangaroos, with sources. It could summarize ten articles on “Why Kangaroos Are Not Dangerous.” That seems cool, but I wouldn’t be able to tell whether it looked equally at both sides of the issue – kangaroos have drowned people, for example. When AI comes up with a thesis, we have no way to know what it is basing that thesis on, and we can’t detect bias in the results.
That is one of the big dangers of AI being on social media – it can’t differentiate truth from fiction, and it is as likely to spread lies as truth.
The Dangers of AI in Schools
Some MIT researchers did a study about the use of AI in schools, and the results were scary enough that they published the article while it was still undergoing peer review. The Time magazine article on it is more digestible than the paper, and it points to things that don’t surprise me as an educator.
People are not going to assimilate and remember the concepts in an essay if something else does the writing. The hint of a similar study, showing worse results with programmers, has me anxious to read it. I doubt the results will surprise me.
In Conclusion
So, AI. It’s a new thing. I think it can help with certain tasks, but as the receivers of the information, we have to be aware of its pitfalls. AI is only as good as the data it is trained on: if it’s fed garbage, the results will be garbage. We also need to remember that it can’t detect bias and it can’t tell truth from fiction. And that casts into question every result where you can’t see how it got the answer.


