Advancement of AI Can Complement, Compete with Human Creativity

ChatGPT by OpenAI, a chatbot that replicates human conversation in response to a given prompt, is a game changer for creators everywhere. The currently free-to-use program uses artificial intelligence to synthesize relevant information into written responses to nearly any prompt users can imagine. While ChatGPT was trained on a wide variety of internet-sourced material, it does not have live access to the internet and cannot retrieve external information. Even so, if you ask ChatGPT to edit your cover letter, research a topic for class, or compose a sonnet in the style of Wordsworth on the most outlandish subject, it will deliver.

The technology’s seemingly limitless capabilities have sparked intellectual excitement alongside a healthy dose of apprehension. The world is enraptured by its potential practical applications, and access to this kind of free tool is of particular interest to notoriously broke college students. We can use it in classrooms, in future work environments, or for creative hobbies. With this in mind, how will we choose to use ChatGPT? There is, of course, the adage that “just because we can doesn’t mean we should,” but that has never halted technological advancement. Technology has always been developed to make tasks more efficient and streamlined, and it has always been met with fear and anxiety. A takeover by robots and artificial intelligence, the obsolescence of humanity, and the death of originality are all fears commonly expressed in our media and society. Throughout history, technologies now accepted as harmless have drawn widespread criticism: the transition from manuscripts to bound, printed books in the late Middle Ages was a source of anxiety for people who feared a loss of human touch. ChatGPT is a tool like any other, and it may come with a similar cost.

From a human perspective, one must consider the potential AI has to exert control over our lives. Many AI systems have documented biases; the U.S. Department of Commerce, for instance, found that facial recognition AI can misidentify people of color. Human beings choose the data that AIs are trained on, which means human bias is built into the foundation of these programs. If a developer who places less emphasis on diversity were to create a popular AI system, the software could perpetuate our society’s unconscious biases. A computer algorithm used in Broward County identified African-American defendants as “high risk” twice as often as white defendants. If AI is supposed to reshape how we conduct our lives, we have to ask whether it can do so in a completely impartial way.

For academic institutions, the existence of ChatGPT presents a different set of problems. Higher education institutions, Oberlin included, often mandate student adherence to an honor code, which stipulates that all submitted content must be the student’s original work. Using an AI for class submissions would constitute a serious breach of academic policy, yet it can be harder to catch than other methods of cheating. However, the uses of the application are more varied than simple plagiarism. Would it be inherently problematic to use ChatGPT to create a crash course ahead of a chemistry exam or to conduct a broad survey of a specific historical topic? To reject this tool outright in an academic setting, without exploring its potential positive applications, would be a dangerously reactionary rejection of the new and different.

With opportunity comes cost, and ChatGPT is no different. The software can research and write content far faster than any human could. Entry-level roles such as paralegal, copywriter, and social media associate may be rendered obsolete by ChatGPT’s efficiency. Despite this possibility, we must recognize that even as occupations disappear in the wake of new technological advancements, more will always emerge in their place.

In its current form, ChatGPT cannot generate new information or ideas, and it largely misses the implicit context of a prompt, so humans are still needed to supply the right information for it to complete the task at hand. Human use also risks human misuse, but thankfully ChatGPT has certain embedded safety features: it refuses to answer questions that would help you do something illegal, produce hateful or otherwise offensive speech, or generate content it deems intentionally misleading.

As with any new technology, the question is not only what ChatGPT is now but what it may become. The evolution of AI is inevitable, and we cannot and should not try to stop it. Instead, let’s talk about how to use it, consider how to complicate its applications, and understand how advancing artificial intelligence can compete with and complement human creativity.