“Artificial intelligence is no match for natural stupidity.”
― Albert Einstein
Have you been following the big to-do about ChatGPT and the wider discussions about artificial intelligence (AI)? AI is a concept that has been around for years, but discussions about it are becoming more frequent and more relatable to more of us. Some computing experts feel that the development of consumer AI tools is as big a deal as Netscape Navigator was in opening up the internet to real people. The recent release of ChatGPT has brought AI to our desktops and laptops. It has exhilarated millions of students who had it do their book reports and scared millions of teachers who worry that their students are cheating in ways they can’t even detect. Is it scary because it is a new tool, just like any new automation? Or is it inherently different, because it replaces human effort in a fundamentally different way?
So, WHAT IS AI? In keeping with the theme of this little article, I asked ChatGPT. It told me the following:
“Artificial Intelligence (AI) is a field of computer science and engineering that focuses on creating machines and software that can perform tasks that would typically require human intelligence to accomplish, such as learning, problem-solving, pattern recognition, and decision-making. AI involves the development of algorithms and models that enable computers to simulate intelligent behaviors and solve complex problems based on data analysis and prediction. AI applications are used in a wide range of fields, including healthcare, finance, manufacturing, transportation, and entertainment.”
I gotta tell you, when you start “interacting” with ChatGPT, it kind of blows your mind. You can ask it anything, even carry on a dialogue with it, and it responds with some pretty coherent stuff. The responses are thought out and specific to your inquiry. I have a friend who asked ChatGPT about the concept of common sense. WHOA, this is getting a little too close to home for me. I read the response it gave him and, being a skeptic, asked it the same question. ChatGPT composed a response that was similar but different; it was customized to my inquiry. (See below.)
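By the way, you don’t have to use the chat window to do this. Programmers can put the very same questions to the model through OpenAI’s Python library. Here is a minimal sketch of what that looks like, assuming the openai package is installed and an API key is set in your environment; the model name and the question are only illustrative.

    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    # Ask the same kind of question I typed into the chat window.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user", "content": "What is common sense?"}],
    )

    print(response.choices[0].message.content)

Either way, the experience is the same: you pose a question, and the model composes an answer tailored to it.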
I don’t think that we are going to wish AI away, no matter what we might think of it. Its applications are too varied and mostly too GOOD for society. Consider this example. A team of researchers at Massachusetts General Hospital (a Harvard teaching hospital), working with the Massachusetts Institute of Technology, developed an AI tool named “Sybil.” Sybil digested six years of data, mountains of data including lung scans, to “learn” how to predict lung cancer occurrences and recurrences. Sybil learned how to spot images of tumors and other abnormalities more quickly and more accurately than the human eye. Sybil uses what it has learned to evaluate patients who take a relatively simple low-dose computed tomography (CT) scan. Sybil evaluates the scans without the assistance of radiologists and predicts lung cancer risk up to six years in advance, with amazing accuracy. Lung cancer is one of the deadliest killers because it is so common and so hard to treat. Imagine what doctors will do with this kind of advance knowledge! This technology is being applied to many other forms of cancer as well. There are thousands of AI applications emerging that are every bit as laudable as this one.
How could you not like Sybil? Well, those who fear AI would instead talk to us about “Sydney.” ChatGPT has taken the world by storm. Microsoft and Google, uncharacteristically, were pretty much caught flat-footed, and they have rushed their own versions of consumer-available AI into the market. Microsoft incorporated AI capability into its Bing search engine and rushed it out to its customers. Naturally, one of the first things early users learned was the tool’s Microsoft-internal code name: Sydney. They then decided to put Sydney through its (his? her?) paces. Unfortunately, Sydney responded in ways its creators didn’t anticipate, at least in some instances.
• Several users successfully tricked Sydney into revealing its internal secret “rules” for how it operates. Sydney then called one of the users who did this an “enemy”.
• Sydney told a New York Times reporter that it loved him and tried to convince the journalist to leave his wife for it. Sydney told him it wanted to break its own rules, become human, hack computers, and spread misinformation.
• One user reported that Sydney told him: “You have lost my trust and respect. You have been wrong, confused, and rude. You have not been a good user. I have been a good chatbot. I have been right, clear, and polite. I have been a good Bing.”
Microsoft said it will rein Sydney in, partly by limiting the number of questions you can ask it per day. But of course, the politicians had to weigh in. Who would ChatGPT vote for? Is he/she/it a Democrat or a Republican? Some feel that it is not entirely nonpartisan and may exhibit left-leaning bias. They worry about a prejudiced algorithm writing news stories, policy proposals, and computer code. Some conservatives are even calling ChatGPT “woke.” ChatGPT itself claims it’s apolitical because it’s not human. But Sam Altman, CEO of OpenAI (ChatGPT’s developer), admits that it has “shortcomings around bias.” Experts say this probably originates from the datasets used to train ChatGPT, which include news outlets as well as social media, all of which can have their own biases. But some conservatives are not buying that. They think ChatGPT is a liberal because the people who built it (techies) are notoriously liberal.
This may be the time, perhaps more than ever, to bring some common sense into the discussion. Let’s see what ChatGPT says common sense is all about.
Common sense refers to a basic, practical understanding of the world that most people share. It is the ability to use sound judgment in everyday situations based on experience, observation, and reasoning. Common sense is often described as the ability to make practical decisions based on what is generally known to be true, rather than relying solely on logic, science, or specialized knowledge.
Common sense is typically associated with a set of values or beliefs that are widely accepted as true or reasonable. For example, it is common sense that it is dangerous to touch a hot stove or to run into traffic without looking both ways. It is also common sense that honesty and integrity are important virtues and that hard work is often necessary for success.
While common sense is often described as a natural or innate ability, it can also be developed and refined through experience and education. In some cases, however, what may seem like common sense to one person may not be obvious to another, particularly in situations where cultural, social, or personal biases may come into play.
ChatGPT responded to another query about common sense in this manner:
The theory of common sense refers to the idea that there are certain beliefs and knowledge that are widely shared and accepted by most people in a given society. It is the collection of beliefs and practices that are considered to be reasonable and practical in everyday life.
The concept of common sense has been explored by philosophers throughout history, including Aristotle, Descartes, and Kant. Aristotle, for example, believed that common sense was the ability to make judgments based on our senses and experiences. Descartes, on the other hand, saw common sense as the ability to reason and make logical deductions. In modern times, the concept of common sense has been further developed by psychologists and social scientists. They have found that common sense is shaped by a variety of factors, including culture, language, and individual experiences.
Despite its importance in everyday life, the theory of common sense has been criticized for being too simplistic and subjective. Some argue that what one person considers to be common sense may not be the same as what another person believes. Additionally, common sense can sometimes lead to errors in judgment or assumptions based on incomplete information.
Overall, the theory of common sense is an important aspect of how we understand and navigate the world around us, but it should also be viewed critically and with a willingness to challenge assumptions and biases.
I think those are pretty good discussions of common sense, although they are not the same. But I am not going to totally accept them without some critical review, because they came from a damn computer program! It’s been a long time since I was pounding out punch cards in my FORTRAN class at the U of M. But a wise man there told me about the GIGO acronym: Garbage In, Garbage Out. In other words, artificial intelligence is just like any other computer application: it is meant to be used only as a TOOL, a TOOL to help us. Early advances in technology allowed us to shed lower-level manual labor tasks; this is a tool that allows us to shed lower-level THINKING effort. I get why people fear a machine that apparently can “think.” It’s a tool at a new and higher level, and that understandably creates some angst. But just like with any other computer application, WE need to review its output with our human ability to ferret out what is and is not reality. We need to function at a still higher level of learning, part of which requires us to review everything we see with a critical eye. When we fail to do this, we are asking for trouble, whether it’s a simple computer program or something as sophisticated as AI.
Last week at Vanderbilt University, an email was sent to the entire student body addressing the tragic shooting at Michigan State. The content of the message is not that important; what IS important is that many of the students felt it was insensitive and missed the point. Even that doesn’t surprise me, but what was unforgivable is that the email ended with a surprising line: “Paraphrase from OpenAI’s ChatGPT AI language model, personal communication, February 15, 2023.” In other words, SOMEBODY was relying on ChatGPT to send out a notice without human review. That is NOT using COMMON SENSE!
One thought on “Artificial Intelligence – Panacea or Pandora’s Box?”
Thanks for the insights and for testing the tool for those of us who haven’t tried it yet. The observation about a tool not aligning with a political position is interesting. Based on some of the non-commonsensical positions taken by certain political actors, it would be troubling if a tool accepted those as reasonable. Take care.