Contemporary Concepts in Publishing

Artificial Intelligence: Useful Tool or Bringer of the End Times?

 

Nathan Boutin, Associate Editor

July 2023


Artificial Intelligence (AI) has made a big splash across the globe. It has the potential to change the way we learn, conduct research, and publish our work. For academics, the positives are appreciable, as using AI to reduce human labor during the publishing process can free up time to plan and conduct research. It has further applications for accessibility through automated speech recognition and translation.

Despite these apparent benefits, AI has created a panic both within and outside of the academic space. For example, screenwriters have been demanding guardrails on AI technology since the start of the year, and news outlets fear the end times unless the reins are pulled in on language models. Writers and editors feel pressure to justify their jobs when AI-generated content is presented as a viable alternative. The trepidation in academia regarding AI is palpable as well. People fear the unknown and the potential implications AI has for academic integrity. Discussions surrounding AI sometimes sound as if the sky is falling and human efforts are becoming obsolete.

The interesting thing is that we already use AI every day, and nobody blinks an eye.

AI is a ubiquitous part of our daily lives, even if its uses seem rudimentary. Popular media sites such as YouTube use AI-powered algorithms to suggest videos. Electronic maps and navigation systems combine GPS with AI to plan the best route, and text-to-speech reads out the directions. Financial systems constantly verify the authenticity of our transactions using AI to detect fraud, and similar systems facilitate Wall Street trading tools. Even something as simple as the spell checker in Microsoft Word is a form of AI, just a very basic one.

And yet, many of us would never have considered any of these things to be AI. Why? We have a good understanding of these systems and they are ingrained in our culture. We conceive of them as tools to make our lives easier. Now, though, as “smart” technology is becoming more powerful, the average person no longer sees AI as merely a tool. ChatGPT in particular has engendered fear and panic. How can we trust the technology? What if it is used maliciously? There are many issues with AI as it becomes more sophisticated.

The problems with current AI use in publishing

Accuracy and references

The biggest problem with AI is the accuracy of the content it generates. Proponents of current AI tools often suggest that with an appropriate and specific prompt, any kind of content can be created. However, ChatGPT has been known to create fake references. All over the Internet, people are complaining that ChatGPT is making things up. This is impressive, for sure; creating an AI that can confidently fabricate references is one large step toward becoming more human-like. Fake references are easy enough to spot, and a little research can combat them, but not everyone is going to check. Imagine unscrupulous journals taking AI-generated content at face value and disseminating it to the masses. That sounds like a recipe for disaster in the publishing industry.
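For those who do want to check, part of that research can be automated. Below is a minimal sketch in Python, assuming the publicly documented Crossref REST API (the standard works endpoint) and a hypothetical DOI standing in for one pulled from an AI-drafted reference list; it simply asks Crossref whether each DOI is registered and prints the title on record so it can be compared against the citation by hand.

    import json
    import urllib.error
    import urllib.request

    def check_doi(doi: str) -> None:
        # Look up the DOI in the public Crossref registry
        # (assumption: https://api.crossref.org/works/<DOI> is reachable).
        url = f"https://api.crossref.org/works/{doi}"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                record = json.load(resp)["message"]
                titles = record.get("title") or ["<no title on record>"]
                print(f"{doi}: registered -> {titles[0]}")
        except urllib.error.HTTPError as err:
            if err.code == 404:
                # Crossref returns 404 for DOIs it has never seen.
                print(f"{doi}: not registered -- possible fabrication")
            else:
                print(f"{doi}: lookup failed (HTTP {err.code})")

    # Hypothetical DOI used purely for illustration.
    for doi in ["10.1000/example.0001"]:
        check_doi(doi)

Of course, a match here only confirms that the DOI exists; a human reader still has to judge whether the cited work actually supports the claim, which is the part no script can do.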

Transparency

AI draws upon a vast online corpus to generate its content. It collects data from previously published material, surveys, observations, questionnaires, interviews, and more. The problem is that the output is only as good as the input. What happens when the input is compromised by poor or incorrect data? Humans are often wrong, but we take responsibility for being wrong, whereas the machine assumes that whatever information it collects and outputs is true. We need to understand where the data comes from and exactly what the model is trained on. As the technology evolves, these are questions that only the developers can answer, because there is little transparency about how AI learns and what it is consuming.

Pace of development

While AI has developed rapidly in recent years, countermeasures to deal with it have not kept pace. Technology to detect AI-manipulated images and text is not nearly as prevalent, and such technology will be necessary in the future to help determine the authenticity of content. To beat big tech, we need bigger tech: tools to evaluate AI (counter-AI) require more research to balance out the breakneck speed of development of current AI models.

Attribution

Attribution is a particularly difficult problem with AI. For example, if the researchers did not draft some (or all) of a manuscript, how can they claim to be the authors? The mechanical act of writing is an important part of research, yet other group members who do not contribute to the writing, such as those who helped with sampling, data collection, and other valuable tasks, are regularly listed on the byline. Attribution, then, is not merely saying that the academics wrote the paper, but that they take responsibility for the results and data; an AI model cannot be responsible for its own creations.

So, then, what are the guidelines for using AI in publishing?

Well, there aren’t any. At least not officially. ChatGPT has been on a hot streak, and while some publishers have issued statements on the issue, we are still waiting for regulatory agencies to make the first move. At a minimum, it is important to disclose any use of AI in published material and to be mindful of the target journal’s tentative policy on the subject.

Using AI responsibly

Like any tool, AI must be used responsibly. The Society for Scholarly Publishing recently held a panel that discussed the use of AI in the academic environment. Tom Beyer, Platform Services Director at KnowledgeWorks Global, made a salient point about this:

To use AI responsibly, you must be smarter than the machine.

If we understand the proper uses of the technology, can correct the errors it makes, and can take responsibility for the output, then it is merely a tool that can be used to improve and streamline the writing process. This is perfectly acceptable, and the progress of these tools should not be impeded. If we are not smarter than the machine, though, and cannot adequately check its accuracy and assume responsibility for the material it generates, then it is truly engineered intelligence on another level. Thus, it is paramount that we can understand the output and make corrections as necessary. Human brains need to be the final benchmark, and we need to take responsibility for the content that AI generates.

We are smarter than the machine

As mentioned earlier, we use basic forms of AI every day. The important part is that we are much smarter than these forms of AI. If Google Maps tells us to take some roundabout route to our destination, we recognize the fault and ignore the directions. If fraud is detected in our bank account, we manually verify the authenticity of that transaction. If a word processing program suggests a phrasing that does not fit our meaning, we ignore the red underlined text.

Language models have a place in our repertoire for writing, editing, and publishing, but it is incredibly important to continue to be smarter than AI. The moment we stop employing ourselves as the gatekeepers is when the sky might actually begin to fall.



