Artificial Intelligence in Technical Documentation, Part 2

Published: 2022-05-11 Updated: 2022-09-07

Artificial intelligence is already widespread in technical documentation these days. In this interview with Fabienne Lange and Eva-Maria Meier, project managers at plusmeta GmbH, we hear about the types of work for which artificial intelligence is already used in technical documentation, the advantages that this new technology brings, and where its boundaries lie. Part one of the interview on artificial intelligence methods is available on the plusmeta blog in German. plusmeta GmbH has its head office in Karlsruhe and is a pioneer of artificial intelligence in technical documentation, as well as a Quanos partner.

Ms. Lange, Ms. Meier, in part one of our interview you explained how artificial intelligence works and what processes are already being used in technical documentation, for example, machine learning and rule-based processes. We are now interested in hearing about specific application scenarios.


Could you describe some of these for us?

Fabienne Lange: For us, the key application scenario is metadata identification. Current applications, such as content delivery portals, rely on appropriate metadata to enable users to make targeted searches and find the answers they are looking for in the hit lists. Standards in the field of technical communication, such as VDI 2770 or iiRDS, also require metadata. AI assistance is a real game changer here. Technical documentation usually contains large volumes of legacy data, and tagging it manually would be neither feasible in terms of time nor economically viable.

Rule-based identification, for example, detects metadata from the occurrence of words or their synonyms in the text. The advantage here is that no machine learning model has to be trained in advance. However, for rule-based identification to work, the words or synonyms need to occur explicitly in the text, as is usually the case for a “product type”, for example.
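
As a minimal illustration, a rule-based check of this kind might look like the following Python sketch; the term lists and metadata values are invented for the example and are not plusmeta's actual rules:

```python
# Minimal sketch of rule-based metadata identification: a metadata value is
# assigned when one of its configured words or synonyms occurs in the text.
# The vocabulary below is invented for illustration.
RULES = {
    "product type": {
        "pump": {"pump", "centrifugal pump", "pumping unit"},
        "valve": {"valve", "ball valve", "shut-off valve"},
    },
}

def identify_metadata(text: str, rules=RULES) -> dict:
    """Return every metadata value whose words/synonyms occur in the text."""
    found = {}
    lowered = text.lower()
    for metadata_key, values in rules.items():
        hits = [value for value, synonyms in values.items()
                if any(term in lowered for term in synonyms)]
        if hits:
            found[metadata_key] = hits
    return found

print(identify_metadata("Close the shut-off valve before servicing."))
# -> {'product type': ['valve']}
```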


Do knowledge graphs also play a part here?

Yes, definitely. Knowledge graphs and extractors can also be used as subcategories of rule-based identification to identify metadata. Using product knowledge, a knowledge graph can deduce metadata that does not even occur in the text. One example of this is the manufacturer of a product, which is linked to the product via the knowledge graph.
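
To sketch the idea, a knowledge graph can be thought of as a set of subject-relation-object facts: once the product has been identified in the text, facts linked to it, such as the manufacturer, can be attached as metadata even though they never appear in the document. The entities below are invented for illustration:

```python
# Sketch of deducing metadata from a knowledge graph. The graph here is a
# trivially simple fact store; real knowledge graphs use richer models
# (e.g. RDF triples). All entities are invented for illustration.
GRAPH = {
    ("PX-200", "is_a"): "centrifugal pump",
    ("PX-200", "manufactured_by"): "ACME Pumps GmbH",
}

def deduce(entity: str, relation: str, graph=GRAPH):
    """Look up a fact linked to an entity in the knowledge graph."""
    return graph.get((entity, relation))

product = "PX-200"  # e.g. previously found by rule-based identification
print(deduce(product, "manufactured_by"))  # -> ACME Pumps GmbH
```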

Extractors, on the other hand, are ideal for identifying metadata that follow particular patterns, such as serial numbers, order numbers, or dates. This approach also works when there is no selection list to compare against.
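
In practice, such extractors are often regular expressions. A minimal sketch, with invented patterns that would be adapted per project:

```python
import re

# Sketch of pattern-based extractors: serial numbers, order numbers and
# dates follow recognisable patterns, so no selection list is needed.
# The patterns below are invented for illustration.
EXTRACTORS = {
    "serial number": re.compile(r"\bSN-\d{6}\b"),
    "order number":  re.compile(r"\bORD/\d{4}/\d{3}\b"),
    "date":          re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),
}

def extract(text: str) -> dict:
    """Return every pattern-based metadata value found in the text."""
    return {name: pattern.findall(text)
            for name, pattern in EXTRACTORS.items()
            if pattern.search(text)}

print(extract("Device SN-123456 was shipped on 2022-05-11 (ORD/2022/017)."))
# -> {'serial number': ['SN-123456'], 'order number': ['ORD/2022/017'],
#     'date': ['2022-05-11']}
```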


And what about machine learning when it comes to identifying metadata?

Naturally, machine learning also has its place in metadata identification. It does require prior training with sample data, but in return it can predict metadata that would often be difficult to detect using other methods, such as the target group, information topic, or topic type. For example, a text for an expert is written differently from a text for a layperson, and a “Task” topic includes more instructive text than a “Concept”. A machine learning model can learn to recognize these features, which also enables the prediction of metadata that cannot be determined from the occurrence of individual words. In general, all procedures have their strengths and weaknesses; used in combination, they achieve very good results in metadata identification.
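
A minimal sketch of such a topic-type classifier, using scikit-learn as one possible toolkit (not necessarily what plusmeta uses); the training samples are invented, and a real model would be trained on a project's existing, already-classified topics:

```python
# Sketch of a machine learning classifier for the "topic type" metadata:
# a TF-IDF representation plus logistic regression, trained on labelled
# sample topics. The samples below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Open the housing. Remove the filter. Insert the new filter.",
    "Switch off the device. Disconnect the power cable.",
    "The pump consists of a motor, an impeller and a housing.",
    "A safety valve limits the maximum pressure in the system.",
]
train_labels = ["Task", "Task", "Concept", "Concept"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# Imperative, instructive wording resembles the "Task" samples.
print(model.predict(["Remove the cover and insert the new seal."]))
# expected: ['Task']
```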

Eva-Maria Meier: Another exciting application is document segmentation, i.e., identifying sections or chapters in long documents. This also builds on metadata identification. The product lifecycle phase, for example, is often used as a segmentation criterion: the document is divided into small snippets, each of which is classified. The same is then repeated with a slight offset, and all the results are stacked. Any point where the AI becomes uncertain about which product lifecycle phase is being discussed is probably where one section or chapter ends and the next begins.
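
A minimal Python sketch of this overlapping-window idea; classify_phase() merely stands in for a trained model, and the document and labels are invented:

```python
# Sketch of segmentation via overlapping classification windows: the
# document is classified in overlapping blocks, the results are stacked,
# and positions where the stacked predictions disagree mark likely
# section boundaries.
def classify_phase(paragraphs):
    # Placeholder for a real trained classifier (invented keyword rule).
    text = " ".join(paragraphs).lower()
    return "maintenance" if "maintain" in text or "service" in text else "operation"

def find_boundaries(paragraphs, window=2, step=1):
    votes = [[] for _ in paragraphs]  # stacked predictions per paragraph
    for start in range(0, len(paragraphs) - window + 1, step):
        label = classify_phase(paragraphs[start:start + window])
        for i in range(start, start + window):
            votes[i].append(label)
    boundaries = []
    for i in range(1, len(votes)):
        if set(votes[i]) != set(votes[i - 1]):  # predictions become unstable
            boundaries.append(i)
    return boundaries

doc = [
    "Switch on the pump and check the pressure.",
    "Adjust the flow rate as required.",
    "Service the filter every six months.",
    "Maintain the seals according to the schedule.",
]
print(find_boundaries(doc))  # -> [1, 2]: the phase changes around paragraph 2
```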


We have also heard about similarity analysis. Can you tell us a bit more about that?

Similarity analysis can help to find duplicates and variants, e.g., when cleaning up the authoring system or when trawling through mountains of supplier documentation for a migration. Metadata identification can be used here too, by comparing the identified metadata of two modules. Comparing groups of words is also helpful for finding identical information modules. With deep learning, we can even go a step further and compare meanings.
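
As an illustration of comparing groups of words, here is a minimal Python sketch using Jaccard similarity on token sets, one simple choice among many; the example modules are invented. A deep learning variant would compare sentence embeddings instead, so that paraphrases also score as similar:

```python
# Sketch of a simple similarity analysis on groups of words: Jaccard
# similarity of token sets. Near-identical information modules score
# close to 1.0 and become duplicate/variant candidates.
def jaccard(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

m1 = "Check the oil level before starting the pump."
m2 = "Check the oil level before you start the pump."
m3 = "Dispose of the packaging in an environmentally friendly way."

print(round(jaccard(m1, m2), 2))  # 0.67: high -> duplicate/variant candidate
print(round(jaccard(m1, m3), 2))  # 0.07: low  -> unrelated modules
```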

AI is also being used these days in language management. Examples of this include term extraction and controlled language checkers, which predominantly use rule-based processes. And then of course there is the rapidly growing field of machine translation, which can translate texts automatically using deep learning procedures.

When we think about the benefits of artificial intelligence, saving time is probably the first thing that springs to mind. What other benefits does this new technology offer?

Eva-Maria Meier: AI can take on monotonous activities. When there is a mountain of legacy data to process, AI makes the job easier: as a technical writer, you then only have to check the predictions. AI never gets tired, delivers consistent quality, and judges objectively according to learned criteria. But the biggest benefit has to be the ‘game-changer’ effect we mentioned earlier: many applications rely on metadata, and adding it manually to large volumes of legacy data may simply be impossible.

What are the weaknesses of artificial intelligence? What are its limits? And what does that mean for the future?

Fabienne Lange: To begin with, people often perceive it as a weakness that, in some cases, AI models first have to be trained and systems have to be set up. That means a certain amount of effort at the start, but considering the time that AI can save, this initial effort usually pays off. Another disadvantage can be the complexity and technological requirements of AI applications, which can seem particularly daunting and hard to grasp in machine learning and deep learning. Deep learning in particular requires substantial computing capacity, although this can often be obtained these days through cloud services. Deep learning models are also usually a black box, so it is not always possible to understand how a result came about.

Another weakness is particularly relevant in technical communication: compliance with legal requirements. Technical documentation contains safety information and is therefore important for legal protection. If this information is missed due to incorrect AI predictions, risks may be overlooked.

These legal requirements make fully automated AI processes in technical communication difficult. However, there is a solution here too: with the ‘human-in-the-loop’ principle, technical writers remain involved in the AI process, allowing them to review predicted metadata and reject incorrect suggestions, for example. This cooperation between human and machine is both legally sound and still saves time.
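
A minimal sketch of how such a human-in-the-loop step might be organized; the confidence threshold, fields, and file names are invented for illustration:

```python
# Sketch of the human-in-the-loop principle: the AI suggests metadata with
# a confidence score; confident suggestions go to a quick review queue,
# the rest are flagged for full manual handling. Nothing is published
# without a technical writer's approval.
def triage(suggestions, threshold=0.8):
    to_review, to_edit = [], []
    for s in suggestions:
        (to_review if s["confidence"] >= threshold else to_edit).append(s)
    return to_review, to_edit

suggestions = [
    {"document": "manual_a.pdf",
     "metadata": {"product type": "pump"}, "confidence": 0.93},
    {"document": "manual_b.pdf",
     "metadata": {"product type": "valve"}, "confidence": 0.41},
]
to_review, to_edit = triage(suggestions)
print(len(to_review), "for quick review,", len(to_edit), "for manual editing")
```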

plusmeta has launched a research project called DEEEP, which focuses on the development of a deep learning component for technical documentation. Please tell us more about this project.

Eva-Maria Meier: As part of the project, which is sponsored by the Baden-Württemberg Ministry of Economic Affairs, Labour and Tourism, we are developing a deep learning component for plusmeta. This AI method is not yet used in practice for data classification in technical documentation, but it has a lot of potential: deep learning processes can classify images as well as texts. We also want to reduce the initial effort in new projects by clustering training data and providing a metadata suggestion system. And we want to set new standards in prediction accuracy, courtesy of deep learning. This is possible because deep learning takes not just words but also their context into account, bringing a deeper understanding to the text.

To fine-tune the methods to the specific needs of technical communication, we need huge quantities of data, and data from one organization isn’t enough. We are therefore relying on data providers from the business world. Anyone interested can contact us at deeep@plusmeta.de; in return, participants receive research findings hot off the press and the opportunity to test new features.


Thank you, Ms. Lange and Ms. Meier, for this conversation.
