Hey Arclayn... You studied computer science, so talk to us about artificial intelligence.
🤨😒
Uhh... Please?
Oh fine... What about it?
Is GLaDOS, HK-47, The Geth, or maybe a depressed Delamain Taxi coming to kill us all?
No. 🙄
I was kidding. But someone is feeling touchy.
Yup. 😓
A serious question, then...
We gamers have referred to AI as "artificial idiocy" because our computer-controlled opponents are not particularly clever. They get fast and accurate, but never smart. Heck, our strategy game opponents need a handicap just to stand a chance of winning. So how do actual artificial intelligence systems seem so... intelligent?
If you don't mind, I'm going to skip over the deep technical talk. Computer science tech talk often triggers a deer-in-headlights response followed by everyone leaving the room. Are you still there? 😏
Haha, yes. Is there going to be cake? 😥
Nope. Dark chocolate. 🍫 Or chai. 🍵
Anyways, most of our video game opponents are not really artificial intelligence in a scientific sense. Our video game antagonists' minds are pretty much just playbooks. The complexity varies from game to game, but they are still playbooks: sets of rules that dictate our antagonists' attack patterns and reactions to the player. And there are limits to what can be accomplished this way, which is why even when the highest difficulty levels grant our antagonists substantial bonuses, they can still be defeated.
For example...
I created an Uno-like card game as a university student. The game pitted one human player against three AI players. All three AI players would always play their most powerful cards first, except Wilds. Wild cards would only be played when an AI player ran out of any other kind of playable card, with Wild Draw Four cards being played before regular Wild cards. This created a very aggressive, albeit strategically simple, playing style with no consideration of what neighboring players had played or could be expected to play.
If an AI player played a Wild card, it was guaranteed that they had no other valid cards to play. Further, if an AI player drew a card, they had no valid cards to play at all, not even Wilds. Both of these events handed advantageous information to the human player. And because the AI players quickly played through and depleted their attack cards, they likely had no attack cards left in hand to stop another player from later going out and winning the hand, unless the luck of the draw gave an AI player an attack card at a clutch moment.
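If you are curious, the entire "mind" boiled down to a single card-picking rule. Here is a from-memory sketch in Python (the original wasn't written in Python, and the card names and "power" ranking here are illustrative, not the actual code):

```python
# A from-memory sketch of my Uno AI's card-selection rule. Illustrative
# only: the "power" ranking below is an assumption, not the original code.
POWER_ORDER = ["draw_two", "skip", "reverse", "number",
               "wild_draw_four", "wild"]  # Wilds dead last

def choose_card(hand, is_playable):
    """Return the card the AI plays, or None if it must draw.

    hand        -- list of card dicts, e.g. {"kind": "skip", "color": "red"}
    is_playable -- predicate testing a card against the discard pile
    """
    playable = [card for card in hand if is_playable(card)]
    if not playable:
        return None  # nothing valid at all, not even a Wild: must draw
    # Play the most aggressive playable card; Wilds are held until they
    # are the only option, with Wild Draw Four ahead of the plain Wild.
    return min(playable, key=lambda card: POWER_ORDER.index(card["kind"]))

# Tiny demo: the red Skip is chosen over the Wild, as the rule demands.
hand = [{"kind": "wild", "color": None}, {"kind": "skip", "color": "red"}]
print(choose_card(hand, lambda card: True))
```

Notice there is no memory and no model of the other players in there. One sorting rule, applied every turn. That is a playbook, not a mind.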
As for actual AI...
The new AI systems we are seeing are very different. They require training data (and lots of it!) in order to "learn" patterns. This "learning" is purely mathematical: the system computes probabilities that dictate how it responds to human interaction. That is why I say actual AI systems are best described as pattern analyzers. And they are very, very good at that, even to a fault.
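To make that concrete, here is a deliberately tiny toy in Python. It is nowhere near a real language model, which piles enormous amounts of math and data on top of this idea, but the core principle is the same: count patterns in the training data, then respond with whatever the counted probabilities favor. All the names here are mine, purely for illustration:

```python
from collections import Counter, defaultdict

# A toy "pattern analyzer": it learns which word tends to follow which
# by counting word pairs in the training text. Nothing but counting.

def train(text):
    follows = defaultdict(Counter)
    words = text.lower().split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1  # record the observed pattern
    return follows

def predict(follows, word):
    """Return the most probable next word, or None if never seen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

model = train("the cat sat on the mat and the cat slept")
print(predict(model, "the"))  # -> 'cat' (seen twice after 'the'; 'mat' once)
```

There is no understanding anywhere in there. Just counting. Scale that idea up enormously and you have the flavor of what "learning" means here.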
That sure sounds like learning.
On the surface, perhaps. When I reflect on my time in university, there was a lot of reading textbooks and restating that information on exams and in reports. But my A-graded exams and reports went beyond mere restatement: I provided reasoning as to why the information was in the book in the first place, or sometimes critique and disagreement with the book, again justified by further reasoning. This ability to reason and critique allows humans to justify, or reject, established patterns. Meanwhile, an AI in school (in training) will just slurp up any and all patterns without reason or critique, just like a machine. So I conclude that AI systems are very good at gathering and restating information, but they are slaves to the quality of that information.
Have you ever heard of "garbage in, garbage out"? That's an axiom taught to computer science students: when a program receives unexpected input, its output will be equally unexpected and seldom reliable. I say the same goes for AI. Bad or incomplete training data will handicap an AI in remarkable ways.
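You can watch the axiom bite using the same toy model from above. Feed it flawed training data and it repeats the flaw with total confidence (again, a contrived illustration, not a real AI system):

```python
from collections import Counter, defaultdict

# Garbage in, garbage out: the model cannot reason about whether its
# training data is true. It can only repeat the patterns it was fed.

def train(text):
    follows = defaultdict(Counter)
    words = text.lower().split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

model = train("the sky is green the sky is green the sky is green")
print(model["is"].most_common(1)[0][0])  # -> 'green', stated confidently
```

The model is not lying and it is not broken. It is faithfully reproducing the only pattern it was ever given. That is the machine slurping up garbage and serving it right back.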
Unlike machines, human intelligence is capable of reasoning and of recognizing when information just might be faulty. I say might, because humans are imperfect at this. But I argue that even imperfect human reasoning is more reasoning than an AI is capable of. Humans, at least some of us, some of the time, are able to spot "garbage input" and are less likely to respond with "garbage output."
Garbage output... like hallucinations?
Quite right. Large language models sound remarkably intelligent because they are trained on the rules and patterns of human language. They are very good at expressing themselves.
A person's command of language is often used as a benchmark to rate their intellectual capacity. But this benchmark does not work with AI. An AI will always sound intelligent, confident, and correct, even when it is hallucinating.
So, AI is slop?
I am not quite saying that. I am saying that we need to understand the limits of AI so we can use it responsibly.
A lot of people have grown accustomed to treating their computer tools as magic boxes. They embrace ignorance to the point that computers might as well be boxes full of fairies flitting about. They just need to meet their deadlines, and those fairies had better do their part and do it, like, yesterday. And yes, I jest, but only a bit, because I have personally witnessed a similarly aloof "faith in the black-box-of-mysteries" attitude in corporate settings, and it is dishearteningly common.
"The computer said so, therefore it is the law." That was the status quo I observed, even before the ascendancy of AI systems.
Seriously? What the heck?
Consider this...
I have been reading about a trend where trial lawyers rely on AI to write their legal briefs. Judges are getting quite angry about this trend, holding lawyers in contempt of court. Many lawyers are being fined, and some have even been threatened with jail time.
Maybe you think that is excessive, but I need to remind people that courts of law must operate on facts and reason. Justice cannot be served if a court accepts the word of an AI just because the black-box-of-mysteries said so. Just as a trial court requires lawyers and witnesses to be truthful, the same must apply to an AI's contribution.
When trial lawyers file information that is expected to be true, as is a lawyer's duty, but turns out to be AI hallucination, there are serious ramifications for the pursuit of justice. For one, it bogs down the court system with silliness while others wait their turn to pursue legal remedy. Worse, genuine cases can become unnecessarily tainted, or even thrown out, because the judge can no longer trust the information presented to them.
Hold up. I'm not a lawyer. 😛
Neither am I. 😁
But the story is important. It shows that AI needs to be approached with skepticism. Not necessarily rejection, but certainly skepticism. A healthy amount is all.
Absolute trust in and adherence to AI will invite disaster. Maybe not immediately. But AI is not a deity. It is not all-knowing and all-powerful. It will not be some magic fairy that does all your work for you. Those who lean on the black-box-of-mysteries like a crutch are inviting very rude and painful consequences.
Those of us with skepticism can benefit. Just remember that AI is also imperfect and needs to be managed. Those using it as an aid need to audit its work. Don't be like one of those lawyers hauled before an angry judge.
So... AI isn't reliable? Or is it?
Ask yourself: is there an efficiency gain from AI automation, even after managing the AI's work? Or does the time sunk into managing the AI erase that gain? You need to decide that one. But if you don't approach AI with healthy skepticism and manage its work, then you are being irresponsible, and you will get burned. 🔥