
AI is Here to Stay

Despite concerns, educators say it can help make us smarter

Many of us have seen the movies and read the science fiction books about robots becoming increasingly human, surpassing humans in intelligence and then taking over the world and bringing about the end of civilization as we know it.

Scary? Sure. News stories about artificial intelligence, or AI, can trigger all those uncomfortable thoughts, but AI has been steadily changing civilization for a long time and will be more helpful than harmful, some local experts contend, especially when it comes to education.

“I began researching AI in the early ’90s,” says Dr. Kuanchin Chen, professor of computer information systems and director of the Center for Business Analytics at Western Michigan University. “AI was traditionally understood as a part of computer science then. Today there are many forms of AI: Google Assistant, Siri, Alexa, your smartphone. It’s everywhere. Chatbots are just one of the newer forms.”

Chatbots are one application of a type of artificial intelligence called generative AI, which can create text, images, video and other types of media and which is sparking wide concern, controversy and a steady stream of news articles and commentaries.

“Generative AI is a tool trained on millions of scraps of information that it gathers from the internet,” explains Dr. Gwen Tarbox, English professor and director of the Office of Faculty Development at WMUx, an innovation hub at the university. “It is a tool that locates patterns and responds to prompts based on those patterns.”

While sparking some concerns, it also has benefits that should be embraced, argue Tarbox and some other local experts. “Generative AI can help students learn better and help faculty achieve their goals,” Tarbox says.


After ChatGPT was released in 2022, WMUx began efforts to determine how the new technology could be used at the university. Instructors, staff and administrators collaborate to address the challenges of using AI while encouraging academic innovation, not only on the university campus but also in the community beyond it, working with pre-college students, adult learners and organizations. WMUx has so far held 30 workshops on the use of AI, often open to the public, and many more are planned.

“When ChatGPT — a language-processing tool — was released, we had faculty who were eager to embrace it and bring it into their classrooms, while others wanted to ban it,” Tarbox says. “That’s their right, but we wanted to be able to offer training in how to use it, how it could be beneficial to students and faculty both. There were a lot of misconceptions we wanted to address. New things often make us uncomfortable, but being informed about AI can help us see this as a useful tool in education.”

Using chatbots effectively

WMU English professor Dr. Brian Gogan is offering a new class on how to get the most out of using chatbots and how to prompt them effectively. The class, titled “AI Writing: Prompt and Response,” was developed in collaboration with WMUx, and in it students use AI tools such as GPT-4, Bard and Copilot.

“Specificity and detail, rhetorical elements, impact the effectiveness of your prompt,” Gogan says. “We talk about using persona, role, audience and task when developing your prompt. The temperature of the prompt refers to the style and tone you want it to generate.

“AI is redefining what we mean by writing activity. We can use it to review, fact-check, revise, but we will still need human writers. I have two students in my class studying to be journalists, and they use it as a research tool and for fact-checking.”
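The prompt framework Gogan describes, in which persona, audience, task and tone are made explicit, can be sketched in a few lines of Python. This is a hypothetical illustration; the function and parameter names are not from the course.

```python
# Hypothetical sketch of a structured prompt builder: each rhetorical
# element Gogan names becomes an explicit, reusable parameter.
def build_prompt(persona, audience, task, tone="neutral"):
    """Assemble a chatbot prompt from its rhetorical elements."""
    return (
        f"You are {persona}. "
        f"Write for {audience}. "
        f"Task: {task} "
        f"Use a {tone} tone."
    )

prompt = build_prompt(
    persona="an experienced science journalist",
    audience="readers new to artificial intelligence",
    task="Summarize how generative AI tools are trained.",
    tone="plain, conversational",
)
print(prompt)
```

The point of structuring a prompt this way is that each rhetorical element is stated deliberately rather than buried in an off-the-cuff sentence, which makes the results easier to review and refine.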

While the chatbot mostly pulls its content from what’s available on the internet, the human eye is still needed to interpret the content and evaluate if it is what the user needs, says Gogan.

“I had a wide range of students in a recent class. Some wanted to learn more while being skeptical that it could be useful. Others wanted to level up and get that competitive edge as they entered the job market,” he says. “AI competency is beginning to show up in job sites as a requirement. Any fear students had about AI coming into the class dissipated as they began to work the platform.”

Students first learn to visualize and map the process, craft an effective prompt and then review and assess the GPT’s response.

Teaching AI competency, says Gogan, may very well change with each course offering, since AI technology is evolving fast. “It’s a tool that requires a human user, much like a typewriter or a computer. Humans retain agency. It is evolving, however, faster than policy about how to use it.”

Concerns about AI

Tarbox acknowledges that there are valid concerns about the accuracy of the content a chatbot might create, and she offers a warning: Chatbots have been known to create “facts” where none exist, or they can incorporate a bias in response to a prompt.

“Given that the current data sets used to train chatbots are taken from the Internet, the data are heavily North American and European in origin,” she says. “We need to take that into consideration when using these tools. It’s a part of the bias that comes into play.”

Gogan agrees. “Some of my students work internationally, and they have come across cultural differences,” he says. “Some countries have policies that restrict AI access. They get entirely different results for their prompts than we do.”

Author rights have also emerged as an AI issue, with well-known authors taking their cases to court to challenge the use of their written works to train chatbots.

“It will be interesting to see how author rights play out in the courts,” Tarbox says. “Copyright is less clear when it comes to chatbots, but it is important to understand that chatbots learn patterns rather than take up exact words. If authors are concerned that chunks of their work will be used, that’s not likely.”

“For the chatbot to learn a pattern, the text must be downloaded thousands of times,” adds Alyssa Moon, associate director of instructional design and development at WMUx.

“When exploring chatbots as tools,” she says, “you must be well versed in their use, in the ethics and the bias of their use, and understand the why in using them. We encourage people to play with them, learn about them, develop an AI competency. The bias part of it — that’s an inherent part of everyone. It will always exist. AI will reflect bias too, and that’s why we teach critical thinking.”

What the chatbot generates for the user is only as good as the prompt it is given, Tarbox points out. It is not as simple as typing in a topic and requesting an essay, so cheating is not that easy, she says.

“You need a clear understanding on how such a tool works, how to prompt well,” she says. “Cheating has been around and will be around forever. We need to ask ourselves, ‘Why would a student cheat? What do we need to do to prevent that?’ AI is a nonjudgmental tool that can be used as a tutor. We first give faculty support in how to use it and how to prevent cheating.”

Detecting AI use in student papers can be challenging, acknowledges Josh Moon, educational technology specialist at Kalamazoo College, but he says educators can usually tell when a paper is not in a student’s voice.

“AI is a powerful tool, and we need to recognize that students will use it. We need to help students understand when it is not useful to use it,” Moon adds.

He says Kalamazoo College has thus far opted not to develop a campus-wide policy on using AI. “It would be impractical,” he says. “How AI might be used is so different across departments, and different professors have varied views on its use.”

Along with the risk of plagiarism, he says, common concerns about AI include fake images and texts created to fool the viewer. Political and social media ads are examples of areas where fake posts or images might appear, especially in this election year.

“In general, I think fears are overblown,” he says. “Yes, fake AI and the misinformation it can create can be challenging. People can be apt to believe what they see online if it fits their preconceived notions.” That, he says, is why it can be so crucial for today’s educators to help students develop critical-thinking skills: to check sources, ask questions and maintain a healthy degree of skepticism.

AI in the world of business

Nowhere might critical-thinking skills be more needed, Chen says, than in the world of business. He is currently teaching a course at WMU called Artificial Intelligence in Business. The course explores natural language processing, machine learning, generative AI, fraud detection, human-AI symbiosis, user-generated content and more — branches of AI used within the context of real business challenges.

“Garbage in, garbage out,” Chen says, repeating an adage that has never been more relevant than it is to the use of AI. He reminds his students to always check for bias in the data AI pulls up, keeping an eye out for cultural differences.

“We worked with scholars from France, Poland and Turkey,” he says. “We expected we would all come up with the same answers to our prompts in generative AI, but we did not. Different countries brought up their own cultural bias. When using generative AI, one should keep in mind that certain data may not be available in a certain country for the algorithm to process. The outcome could be made-up findings (termed “AI hallucination”) or something drawn from the data collected from a different country. This latter one is what we saw in these experiments.” It is recommended, he says, that people “use several search engines, generative AI, and other tools to triangulate the results. Keep in mind, computer algorithms can be biased too.”

Chen and a group of WMU engineering faculty were recently awarded a $500,000 National Science Foundation grant to study how AI can empower tomorrow’s workforce, and students are already giving positive feedback.

“A misperception can be that using an AI tool gives one an advantage,” Chen says, “but anyone else can pick it up too, get up to speed in a short time, so there goes your competitive edge. There’s a symbiosis between human and AI, beginning with a collaboration, but collaboration can’t be the goal. We must create new capabilities and benefits for both parties. This is why systematically learning how AI and its branches are used to solve the problems that were difficult or even impossible before is of educational importance.”

A common fear of AI is that its use will displace humans in the job market.

“That won’t happen overnight for most people,” Chen says. “It will be piecemeal. Even if AI eventually takes over part of your old job, training and preparation through human-AI symbiosis techniques will shift your skills upward.”

Chen encourages students to learn new capabilities to offer in the workplace. An example would be to learn the techniques behind “keyword and concept stuffing,” which involves manipulating wording in resumes, advertising messages, social media posts or websites to gain an advantage.

“Keyword and concept stuffing is a means of inserting a certain word or phrase or concept in a piece of writing so that it advances to the top in search engines, resume screening tools, etc.,” he explains.

“When it is used in disguise to sway resume screening programs — say, printing the font in white color on a white background — it requires new capabilities in humans to work with AI in order to detect that.

However, when it involves variations of the same concept, not just repeated words, it requires even a deeper collaboration with AI to detect it. These activities require humans to play an important role before, during and after the collaboration with AI to jointly accomplish the work — a nice form of human-AI symbiosis.”
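A crude version of the detection Chen describes can be sketched in Python. This is an illustrative assumption, not his method: it simply flags text in which a single word accounts for an implausibly large share of the total.

```python
from collections import Counter
import re

# Naive keyword-stuffing check (illustrative only): flag a document when
# one word makes up more than a threshold share of all its words.
def looks_stuffed(text, threshold=0.3):
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return False
    _, top_count = Counter(words).most_common(1)[0]
    return top_count / len(words) > threshold

normal = "The candidate has experience in data analysis and reporting."
stuffed = "python python python python expert python python developer python"
print(looks_stuffed(normal), looks_stuffed(stuffed))  # False True
```

As Chen notes, stuffing done in disguise, such as white text on a white background or varied phrasings of the same concept, would not surface in a word count like this; catching it requires inspecting the document's formatting and a deeper collaboration between humans and AI.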

Chen says he takes a holistic approach in his classes, teaching students how to work with many branches of AI to solve problems. He trains students on three pillars of AI competence in problem-solving: AI tools, techniques behind the tools, and processes to implement human competence.

Another use for AI is what Chen calls “sentiment analysis.” An example would be a business analyzing customer reviews on a site such as Amazon, X or Hotels.com: Do people like the product? Why or why not? When a customer complains, what are they complaining about?

“AI in that kind of analysis can help a business watch for trends, capture consumer preferences, identify areas of recommendations, and detect fake reviews,” he says.
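A minimal, lexicon-based sketch gives a feel for how review analysis of this kind works. It is illustrative only; production systems use trained language models, and the word lists here are invented.

```python
# Toy sentiment scorer: count positive vs. negative words in a review.
POSITIVE = {"great", "love", "excellent", "clean", "friendly", "recommend"}
NEGATIVE = {"bad", "dirty", "rude", "broken", "slow", "disappointing"}

def sentiment(review):
    """Label a review positive, negative, or neutral by word counts."""
    words = review.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("Great location and friendly staff, would recommend"))
print(sentiment("The room was dirty and the service was slow"))
```

Aggregated over thousands of reviews, even simple scores like this reveal trends, though detecting fake reviews or the reason behind a complaint takes far more sophisticated models.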

Downplaying public fears

Chen waves off most fears of artificial intelligence. “If you think artificial intelligence is on the brink of becoming the Terminator, you may not see it anytime soon,” he says, referring to the title character in the 1984 movie starring Arnold Schwarzenegger as a robotic man-like machine capable of destruction and impossible to kill. “One thing to keep in mind is that AI is not just one tool, but many things with many branches. It takes multiple branches of AI together to approach human intelligence. Examples of AI branches include natural language processing, machine learning, voice AI, image processing, email filtering, etc. You don’t have to be a computer scientist to experience it.”

People encounter AI frequently in daily life, he says, as, for example, in the use of Siri, tags for people in photos, automated financial investing, chatbots for customer service, autocorrect in word processors, patterns or facial recognition to unlock your phone, travel recommendations, and more. “They are all AI,” he says, “and they represent some of the key AI branches.”

He advises people to make AI part of their skill set, “playing” with an application such as ChatGPT to start and experience how it works, because knowledge and familiarity can eliminate fear.

“And understand that AI is changing fast,” he adds. “That’s why we should not just formulate our education vision based solely on the kind of AI available today, even as we want to have an AI-competent campus. If anyone is hesitant about AI, enroll in a class and approach it with guidance from people with expertise. Look to the future of how this experience translates to a healthy human-AI symbiosis that creates a lasting competitive competence — that’s our educational goal.”

Zinta Aistars

Zinta is the creative director of Z Word, LLC, a writing and editing service. She is the host of the weekly radio show, Art Beat, on WMUK, and the author of three published books in Latvian — a poetry collection, a story collection and a children’s book. Zinta lives on a small farm in Hopkins, where she raises chickens and organic vegetables, and wanders the woods between writing assignments.


The opinions, beliefs and viewpoints expressed by those interviewed and featured in our articles do not reflect the opinions, beliefs and viewpoints of Encore Magazine or the official policies, owners or employees of Encore Publications.

Encore Magazine is published 12 times a year. © 2024 Encore Publications. All Rights Reserved.
117 W. Cedar St., Suite A, Kalamazoo, MI 49007 (269) 383-4433