Jeunese Payne

"But, we can train it": Is ChatGPT a threat to your job as a technical writer?



The first time someone suggested ChatGPT could do my job for me, I rolled my eyes. I rolled my eyes the second time, too. My internal thought was: Okay, how about you replace me with ChatGPT for a week and see how well that goes?

It didn't take long for me to start getting irritated at the insinuations and so I investigated this "ChatGPT" for myself. I fed it a few questions related to my job as a technical writer, such as "What is <product>?" and "What does <feature X> do?" I even asked if it could replace me as a technical writer; after a description of what it does and what a technical writer does, it assured me it couldn't replace me:


"I can assist with providing information and answering questions but I am not able to replace the expertise and experience of a technical writer."

As flattered as I was by the answer, I'm going to continue with my own arguments in this article anyway.


What is ChatGPT?


ChatGPT is perhaps the most famous example of generative AI at the moment. Generative AI is a type of machine learning that's trained on data to create new content.


Specifically, ChatGPT is a large language model (LLM) – a learning algorithm that can recognise, summarise, and predict text.


Here's what ChatGPT can do:

  • It can answer questions based on training data.

  • It can form polished sentences, better than you might see in some high-school essays.

That's it. That's what it can do.


You give ChatGPT a prompt in the form of a question, and ChatGPT formulates a response by predicting subsequent words based on the training data. This is how other forms of generative AI work as well. Even the training data for generative AI image creation is based on the captions found on existing images, and is thus language-based.
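
If you're curious what "predicting subsequent words" looks like in practice, here's a minimal sketch in Python. It uses the openly available GPT-2 model from the Hugging Face transformers library as a stand-in (the model behind ChatGPT isn't public), and the prompt is just an illustration:

```python
# A toy illustration of next-token prediction, using GPT-2 as a stand-in for the
# (closed) model behind ChatGPT. Requires: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "A technical writer is responsible for"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every token in the vocabulary

# Take the scores for the *next* token only and show the five most likely continuations.
next_token_logits = logits[0, -1]
top = torch.topk(next_token_logits, k=5)
for score, token_id in zip(top.values, top.indices):
    print(repr(tokenizer.decode(int(token_id))), float(score))
```

The model isn't looking anything up or checking anything; it's ranking which word is statistically most likely to come next, and a chatbot like ChatGPT repeats that step over and over to build a response.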


That's not to say that ChatGPT isn't impressive. Given the difficulty I experience trying to get a chatbot to understand my request to speak to a real person, I expected a similar level of disappointment in ChatGPT. Instead, ChatGPT gave me logical (albeit often incorrect) answers in a mostly grammatically correct way. That's all it takes to impress me, since many humans can't even do this.


But what about the source and the substance of these logical, grammatically correct sentences?


The source


I asked ChatGPT how generative AI works, and this was its response:


"Generative AI works by analyzing large amounts of data and using this data to generate new content that is similar in style or content to the original data."

ChatGPT's output is based on training data. Training data is scraped directly from existing content on the Internet.


In my role as a technical writer, this means that ChatGPT's responses to my questions were likely pulled from my own product documentation on the topic. The output wouldn't exist if my input didn't exist, which suggests that, rather than ChatGPT helping me do my job, I'm helping ChatGPT do its job.


In creative industries, however, the way that generative AI works raises ethical and legal concerns. In 2018, Getty Images, which licenses the use of its images, filed a lawsuit against Artificial Intelligence Art (AIA) based on the claim that AIA used copyrighted images as input and so the results constituted copyright infringement. Similarly, in early 2023, a class action was filed against Stability AI, Midjourney, and DeviantART seeking compensation for damages caused by violations of the Digital Millennium Copyright Act (DMCA), right of publicity violations, unlawful competition, and breach of Terms of Service (ToS).


Similar lawsuits could arise in the music industry. In a 2021 statement made to the United States Trade Representative (USTR), the Recording Industry Association of America (RIAA) claimed that music produced by generative AI constitutes derivative work created without the necessary permissions, licensing, or attribution.


"To the extent these services, or their partners, are training their AI models using our members’ music, that use is unauthorized and infringes our members’ rights"

Questions also emerge around who owns the output. Music typically belongs to the person who created it. If the generative AI tool that creates a piece of music is owned by a third-party developer or company, who owns the rights to the music it produces?


The substance


A popular response to the claim that generative AI can't replace human knowledge, insight, and creativity is "we can train it". Well, precisely. As just discussed, we train it on human-created content, and this affects what we get back from generative AI tools.


Generative AI, ChatGPT included, doesn't distinguish real news from misinformation posted by bots. ChatGPT doesn't fact-check its input or output. It doesn't understand nuance and can't use "common sense". It autocompletes sentences based on vast amounts of training data.


Quantity of data doesn't guarantee quality of output. In fact, quantity might be part of the problem. Generative AI produces content based on patterns found across the entire Internet, consisting of fake news, emotionally charged language, unwritten subtext, non-literal descriptions, and so on. Thus, the output reflects the prevalence of incorrect information in online communities, as well as human biases, including racism and sexism.


Nevertheless, let's assume that the information ChatGPT gives me is accurate – that the description of a product is broadly correct. (Sometimes it is; sometimes it isn't.) When I take a closer look, I notice that the answer, albeit three paragraphs, is actually rather vague.


However well written it may be, ChatGPT's output can say a lot without saying much at all. The sentence structure is correct, but you can get to the end of a response and still be asking, "What did I learn from that?" That doesn't make a good case for using ChatGPT for product documentation, which should be concise and purposeful.


On top of this, there was inconsistency in terminology and capitalisation. Not a problem when you can "just train it", right? But the issue isn't that there are minor errors in the output that can be easily corrected and trained out of the system.


These seemingly surface-level issues carry a deeper implication when AI makes them than when humans do. Humans make typographical errors, which doesn't necessarily indicate a lack of understanding of the topic. Computers, however, once given the rules of a language, shouldn't make these errors. The presence of these errors indicates that the output isn't created based on ChatGPT's understanding of what it just generated.


For example, ChatGPT didn't understand that a particular word should always be capitalised because it's a product name, which highlights that it simply strung existing information together. Even if you train it to recognise and always capitalise the product name, it still won't understand what it's writing; it's just imitating and piecing together existing written content.


This is why the output can be superficial and often fails to truly answer the questions it's given. From a technical writing perspective, at best, the information ChatGPT provides isn't particularly useful. At worst, it's incorrect.


To give another example, if we trained ChatGPT to say "I'm a conscious being", that wouldn't make it true. ChatGPT wouldn't be expressing a subjective experience of its own existence, even though it said so. Rather, its answer would reflect the data it was trained on, as with everything it produces.


The real problem here is that, after training it enough to remove all those "typos", we wouldn't be able to tell that ChatGPT hadn't understood or wasn't accurate; the shininess of the resulting output would encourage us to trust it more than we perhaps should. This reminds me a bit of the movie Ex Machina, but (hopefully) with fewer dire consequences.



What a technical writer does that ChatGPT can't


A technical writer has a broader set of skills and a deeper level of knowledge than ChatGPT, both of which are essential to helping the target audience understand and use products. In fact, there are several things a technical writer can do that ChatGPT can't:


Ensure accuracy.


ChatGPT provides answers just as Google provides answers, except it removes the need to open each website to search for the specific information you're looking for. As a writer, I still have to judge whether that information is correct, and rewrite the content to suit the target audience.


When it comes to technical writing, rather than relying on Google or generative AI, I typically get the information I need directly from the designers, engineers, and product managers involved in a product or feature's development – before ChatGPT even has a chance to know about it. There's simply no way that ChatGPT can give me a more accurate understanding of the product than my colleagues do, and definitely not in time for a release.


Take a user-centered approach.


We want to empower users to find value in the product with what we write and how we present it. We can't just write stuff down and hope for the best. We work to present the right information for the user group as precisely and clearly as possible. This involves understanding the target audience, the user workflow from beginning to end, the features we write about, and how these features benefit different users.


ChatGPT can write coherent sentences describing a product or feature (however accurately or inaccurately), but it can't design content for the user in the same way. It can't write concise, impactful instructions for the benefit of the target audience based on their goals and their level of understanding.


Review and edit.


ChatGPT doesn't have the contextual understanding of a technical writer required to effectively review and edit content, such as the intended audience and tone. It could, in theory, make suggestions, as tools like Grammarly or Acrolinx already do, but it can't (or shouldn't) make those decisions for me. Despite the need to follow style guidelines and industry standards, there is a deeply human element to reviewing and editing content, which involves:

  • Identifying and correcting awkward phrasing and poor organisation.

  • Understanding when to make an exception to a rule.

  • Judging the clarity of content at a human rather than rule-based level.

  • Judging the appropriate level of detail to provide (less is typically more).

  • Recognising areas that might benefit from more research or explanation.

Collaborate.


Product documentation isn't a one-person (or one-AI) job. It's an integrated part of the product, and its development involves collaboration and logistical planning with designers, product teams, and developers. Ideally, documentation is published at the same time that the product or feature is launched. This typically involves back-and-forth with stakeholders to gather information and feedback before the final product (which includes the software and the documentation) is ready for release. These are human activities around which the writing occurs.


Decide where content goes.


Technical writers don't just describe features and write instructions. We make decisions about how to organise and present that information.


The easy thing to do would be to organise content around the product's UI, but that's not how people work. Users have goals that don't involve systematically clicking on every element in your product. Rather, the human brain relies on subjective, semantic groupings to understand and interact with the environment, including the technology and software we use. Technical writers need to create content that fits these mental models as part of a scalable information architecture.


"Own" a doc set.


Documentation is a living resource that we iterate on and improve, nurturing it as a product in and of itself. In aid of this, each major product or feature typically has a single technical writer responsible for its documentation. Technical writers are also often involved in UX writing and review during design phases of the product, further deepening their understanding of how the product works in context. This puts them in a unique position to produce effective documentation.


The technical writer becomes a dedicated resource with intimate knowledge of the product and its documentation, and they become power users of the product over time. As a result, they know how and where to make documentation changes as the product evolves in a way that ChatGPT can't.


What technical writers can use ChatGPT for


ChatGPT can write coherent sentences, the accuracy of which depends on the training data.


We should reserve generative AI, like ChatGPT, for the things humans can't do, or can't do well, like sifting through terabytes of data in extremely short periods of time.


In the context of technical writing, I can see a place for ChatGPT as a prompt or as a tool for finding different words to explain a concept. More generally, I found myself using ChatGPT as a precursor to Googling something because it gave me direct, complete, and relevant answers.
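
If you want to try that kind of use yourself, the sketch below shows roughly what asking a model for alternative phrasings might look like. It assumes the OpenAI Python client and an API key; the model name and prompt are placeholders for illustration, not a recommended workflow:

```python
# A minimal sketch, assuming the OpenAI Python client (pip install openai) and an
# API key in the OPENAI_API_KEY environment variable. The model name and prompt
# are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "user",
            "content": (
                "Suggest three alternative ways to phrase this sentence for a "
                "non-technical audience: 'The service ingests telemetry data.'"
            ),
        }
    ],
)

# The suggestions are a starting point only; it's still on the writer to check
# them for accuracy, tone, and fit with the product's terminology.
print(response.choices[0].message.content)
```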


As it stands, though, it can't be used for writing user-centered instructions, or even for creating efficiencies in my job as a content designer, since copywriting isn't what takes my time. What takes my time are the uniquely human activities that are central to my job, such as collaborating with stakeholders, designing the information architecture, and learning the products and features I write about.



