AI literacy for staff members
What is the impact of GenAI?

Disinformation

Chapter
by: Steven Trooster
2 min.
desinformation.png

Mistakes and disinformation

GenAI systems work with predictions: they generate new, grammatically correct text by repeatedly choosing the most probable next word in a sentence. However, a GenAI system has no notion of whether its answer is factually correct; producing confident but false output is known as hallucinating. The generated output can therefore be utter nonsense, for example because individual words have been taken out of context or placed in a slightly different order, changing the meaning or even producing the opposite claim. What is more, many GenAI systems are incapable of citing their sources, or they invent sources that look credible but do not exist.
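The next-word prediction idea can be sketched with a toy example. The word probabilities below are invented purely for illustration; real GenAI models learn billions of such statistics from training text:

```python
# Toy "language model": for each context word, invented probabilities
# for what the next word might be (illustrative values, not real data).
next_word_probs = {
    "the": {"cat": 0.4, "capital": 0.3, "answer": 0.3},
    "capital": {"of": 1.0},
    "of": {"france": 0.6, "australia": 0.4},
}

def most_probable_next(word):
    """Pick the most probable continuation, with no check on whether it is true."""
    options = next_word_probs[word]
    return max(options, key=options.get)

# The model always picks the statistically likely word; nothing verifies facts.
print(most_probable_next("the"))  # -> "cat"
print(most_probable_next("of"))   # -> "france"
```

The point of the sketch: the selection step only asks "which word is most likely here?", never "is the resulting claim correct?", which is exactly why fluent-sounding hallucinations occur.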

Automation bias

One of the dangers is that we are too quick to assume that a generated text is true. This is known as automation bias: once a computer-generated solution is accepted as correct, people tend to ignore contradictory information, or fail to actively seek it out, especially under time pressure.

Deliberate disinformation

GenAI's ability to create realistic and plausible text, video, audio and code makes it faster and easier to produce false, biased, or politically motivated media. A well-known example is the so-called ‘deepfake’: an image or audio/video recording that has been manipulated to depict someone doing or saying something they never actually did or said. The two AI-generated photos below, for example, are two **realistic yet fake depictions** of Pope Francis wearing a white puffer jacket.

ai_pope_drip_god.jpg.jpeg

Source: https://www.theverge.com/2023/3/27/23657927/ai-pope-image-fake-midjourney-computer-generated-aesthetic

Deepfakes can be used to spread disinformation and to promote hate speech or politically biased content, like the fake image of the arrest of Frans Timmermans generated by PVV members of parliament. Always try to verify the authenticity of images or recordings by tracing them back to their original source. One way of doing this is a so-called ‘reverse image search’; services such as Google Images or TinEye offer this. Also, be aware that any images you share on the internet may be incorporated into GenAI training data and may be manipulated and used in unethical ways.

tips-small.png

What does this mean for you?

  • GenAI tools are not reliable for factual knowledge. For facts, you are better off using a search engine and critically reflecting on the sources it returns.
  • If GenAI presents information to you, cross-check it against other sources to confirm it is correct.