AI, Ethics and a New Division of Labor

Unlimited Data | By Jim Kulich | 5 min read


One of three blog articles in this series

It’s not just what you do, but how you do it.

This was a theme in comments offered by faculty colleagues from various academic disciplines at a recent internal panel discussion on the impacts of artificial intelligence (AI). This perspective is especially useful as we wrestle with questions of ethics regarding AI and seek to use these tools in ways that make a positive difference.

These past few months have seen rapid growth in the exploration of generative AI. Last summer, it was not unusual for 75% of the people in a room to say they had little or no experience with ChatGPT. Today, that number is consistently below 25% in the groups I normally interact with.

The long-forecasted democratization of AI is well underway.

Consistent Stream of Thought

Let’s start by talking about what’s changing in the AI landscape. One use that comes up regularly is generative AI as a way to spark ideas.

ChatGPT is a wonderful resource for planning topics for class sessions, identifying good sources of data and framing student exercise questions. As someone at a recent presentation said, “it’s like having a friend who does nothing but watch movies all day to whom you can go with a question about your favorite plot line.” Given the extensive reach of ChatGPT’s training, it’s more like having 1,000 friends of this sort wrapped into one.

A problem, though, is that this friend likes to talk too much. Sometimes they are well informed, and sometimes they are not. That is why it is important to remember that the only thing ChatGPT does natively is predict words on the screen, one by one.
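To make that idea concrete, here is a toy sketch of what predicting words one at a time looks like. The probability table is entirely made up for illustration; it is not how ChatGPT is actually built, which relies on a large neural network rather than a lookup table.

# A toy illustration of word-by-word generation: at each step, a plausible
# next word is chosen from a probability table and appended to the output.
# The table below is invented purely for illustration.
import random

next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"barked": 0.7, "slept": 0.3},
    "sat": {"down": 1.0},
}

def generate(start: str, steps: int = 3) -> str:
    words = [start]
    for _ in range(steps):
        options = next_word_probs.get(words[-1])
        if not options:
            break  # no known continuation in this toy table
        choices, weights = zip(*options.items())
        # Sample the next word in proportion to its probability.
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g., "the cat sat down"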

The difference between us as humans and an AI tool is that we have built-in governors that limit what we express. We may be dreaming about a great dinner during a meeting, but we usually don’t share those thoughts.

Strides are being made to address this issue, along with ChatGPT’s inability to handle current events. Retrieval-Augmented Generation (RAG) techniques insert a middle step into the use of a generative AI tool, forcing it to work from a curated base of knowledge before it attempts to craft a response. The results are promising.
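To show the shape of that middle step, here is a minimal RAG-style sketch in Python. It assumes the openai package (version 1 or later) with an API key set in your environment; the tiny document list and the model name are placeholders standing in for a real curated knowledge base and whatever model you have access to.

# Minimal retrieval-augmented generation sketch: retrieve a few relevant
# documents first, then ask the model to answer using only that context.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A tiny "curated base of knowledge" standing in for a real document store.
documents = [
    "Placeholder note: evening classes meet on Tuesdays and Thursdays.",
    "Placeholder note: capstone projects pair students with local organizations.",
    "Placeholder note: applications are reviewed on a rolling basis.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by crude keyword overlap and keep the top k.
    # A production system would use embeddings and a vector index instead.
    q_words = set(question.lower().split())
    ranked = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return ranked[:k]

def answer(question: str) -> str:
    context = "\n".join(retrieve(question, documents))
    # The middle step: the model is told to work only from the retrieved context.
    prompt = (
        "Answer the question using only the context below. "
        "If the context is not sufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute your own
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(answer("When are applications reviewed?"))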

The Ability to Summarize

Another capability of ChatGPT that changes the division of labor is its ability to summarize. With proper prompt engineering, generative AI can take large amounts of information and produce clear, concise summaries at whatever length you specify. Use cases abound.
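As a rough illustration of the prompt engineering involved, the sketch below breaks a long text into chunks, summarizes each chunk and then asks for a combined summary at a requested length. It makes the same assumptions as the sketch above: the openai package, an API key in the environment and a placeholder model name; the input file name is also just a stand-in.

# Rough map-reduce summarization sketch: summarize chunks of a long text,
# then combine the partial summaries into one summary of a requested length.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # assumed model name; substitute your own

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def summarize(text: str, chunk_chars: int = 4000, sentences: int = 3) -> str:
    # Split the text into manageable chunks so nothing is silently truncated.
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    partials = [ask(f"Summarize the following in 2-3 sentences:\n\n{c}") for c in chunks]
    # The length and audience constraints live in the prompt itself.
    return ask(
        f"Combine these notes into a single summary of about {sentences} sentences, "
        "written for a general audience:\n\n" + "\n".join(partials)
    )

long_text = open("long_report.txt", encoding="utf-8").read()  # placeholder input
print(summarize(long_text))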

One group of our students is working on a natural language interface that will tap a highly decentralized body of information and make recommendations from it available through a few simple prompts. The bridge between a rich but complex pool of information and a readily usable answer to a question is a good, automated summary. This is what ChatGPT, with some help, is made to do.

Back to Basics

The capabilities of generative AI are changing the division of labor in ways that, to some extent, bring us back to basics. As a recent Harvard Business Review article by Oguz A. Acar predicts, problem formulation will become an increasingly vital skill. Acar makes the point that even something as fundamental as prompt engineering will lose importance as AI tools become increasingly able to generate their own effective prompts from basic initial attempts by humans.

It’s not just what you do, it’s how you do it. Our machines have wonderful capabilities but sometimes we want the human touch. Generative AI can create beautiful art, but is a work of art compelling if you know it was generated by an algorithm? More attention to this division of labor is needed.

AI and Ethics

What about ethics? Lawrence Brown, director of Elmhurst’s MBA program, and his colleague Ingrid Wallace, principal of Ingrid Wallace Presents, have been traveling similar circuits. They offer wonderful presentations on ethical leadership, drawing on Ms. Wallace’s Ethical Leadership Framework. The core of the framework is the connection between intentions and behaviors: good outcomes are achieved only when high intentions are matched by behaviors that live up to them.

This is an excellent starting point for discussions of AI ethics, because the fundamental question is not one of technical capabilities. As experts in this area, we must focus on the intentions behind the use of these tools, on how the tools behave and how we behave as their consumers and guides, and we must systematically observe the impacts our projects have. As long as we remain open to making changes, I am confident that these powerful new technologies, while not perfect, can produce compelling results with widespread benefits. This is work that involves us all.

Take Your Next Step

Interested in AI and other data-related topics? Fill out the form below to learn more about our Master of Science in Data Analytics and other Graduate Studies Programs.


About the Author


Jim Kulich is a professor in the Department of Computer Science and Information Systems at Elmhurst University. Jim directs Elmhurst’s master’s program in data science and analytics and teaches courses to graduate students who come to the program from a wide range of professional backgrounds.

 

Posted April 9, 2024
