It feels like the only thing the tech world has been talking about recently is Large Language Models. They've sparked many conversations, both positive and negative. In this blog post, I'd like to take a deeper look at how LLMs work, what they can do (and the effort it takes to get there), and what they can't.
What are Large Language Models and where did they come from?
A large language model (LLM) is a type of machine learning model that can perform a variety of natural language processing (NLP) tasks, including generating and classifying text, answering questions in a conversational manner, and translating text from one language to another.
One prominent example is GPT-4, a large language model built by OpenAI. OpenAI is backed by several high-profile investors with ties to LinkedIn, PayPal, Microsoft, and Infosys.
The hysteria and excitement this specific model has caused are due to its ability to create meaningful, audience-specific, and grammatically correct text from a defined data set (in the free version: the Web) from just a few keywords. This makes it interesting for all kinds of use cases related to user assistance and user enablement in general, and questions like “Will we still need technical writers?” come up frequently.
The British science fiction writer and futurist Arthur C. Clarke formulated three laws. The third law states: “Any sufficiently advanced technology is indistinguishable from magic.”
Are LLMs magical? Let’s have a closer look.
Large language models are based on generative AI: algorithms that can generate new content such as audio, code, images, text, and videos in response to user prompts. This opens up possibilities like writing and debugging code, drafting texts, composing songs and music, answering questions, and so on.
The OpenAI model was pretrained on billions of words, using complex mathematical techniques to learn the patterns and relationships in language data. This pretraining on a generic dataset produced the GPT base language model, which was then fine-tuned on a smaller, task-specific dataset for text-based conversation.
The first fine-tuning phase is supervised: humans provide the model with example conversations. The second phase is reinforcement learning from human feedback, where humans rank the model's responses. The model is then trained over multiple iterations.
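To make the idea of “learning the patterns of language data” concrete, here is a toy sketch: a bigram model that simply counts which word follows which in a tiny corpus and predicts the most frequent follower. This is a drastic simplification for illustration only; real LLMs use transformer networks with billions of parameters, not word counts.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, which words follow it in the corpus."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            counts[current][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower of `word` seen in training."""
    followers = counts.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Tiny illustrative "training data"
corpus = [
    "large language models generate text",
    "language models learn patterns from data",
    "models learn patterns of language",
]
model = train_bigram(corpus)
print(predict_next(model, "language"))  # prints "models"
```

Even this toy model captures a statistical regularity of its corpus; the real training process does the same at a vastly larger scale, which is why the quality and size of the training data matter so much.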
To sum it up, the capabilities we see from LLMs don't come for free. The models need curated input and fine-tuning. But when this is done properly, LLMs can support both users working on software tasks and the authors who create the enablement content in several ways.
Possible scenarios for using LLMs in Technical Communication
Let’s take an abstract view of possible use cases for LLMs in technical communication. LLMs can be…
used to automate the process of generating reports, summaries, and other technical documents.
used to provide real-time feedback on writing.
trained to understand the context of written text, improving the accuracy and efficiency of technical communication.
used to provide insights and analytics on technical communication, helping to identify trends and patterns and to improve its quality.
used to analyze customer feedback and support tickets, providing insights into customers’ needs and helping to improve product documentation and support materials.
Will LLMs take over from authors and create the content themselves?
In short, no. LLMs will be able to provide valuable assistance to authors, especially for tedious tasks and quality assurance, but they won't be able to fully replace authors. To produce meaningful and reliable results, the corpus on which an LLM is trained needs to be of high quality. Generating text based on generated text will eventually yield results similar to the game of "telephone", where the message degrades with every retelling.
But will LLMs change the way authors work?
Yes, absolutely. Innovations and new technologies have always influenced the way we work. This should be nothing new to anyone working in the software industry.
So, what does this mean for authors?
It’s important that we shape the way this new technology affects our work. Let’s be open and look for the opportunities. Take the example of short descriptions in DITA topics. The short description is important, but who really likes writing them? They’re meaningful but very short summaries of a topic’s content. This is exactly what LLMs excel at. Let them do it, and save your time for more valuable tasks.
There are many more examples where LLMs and their capabilities can make the authors’ lives easier and support the users relying on our help to accomplish their work.
As we continue to explore the opportunities for LLMs in the technical communication world, stay tuned for more blog posts on this topic.
Have you got ideas about how we can take advantage of LLMs?
SAP notes that posts about potential uses of generative AI and large language models are merely the individual poster's ideas and opinions, and do not represent SAP's official position or future development roadmap. SAP has no legal obligation or other commitment to pursue any course of business, or develop or release any functionality, mentioned in any post or related content on this website.