
People can safely drive to their desired destination by following traffic rules and agreed codes of conduct without necessarily knowing how the car's engine works. Now let's apply the same analogy to how people interact with AI systems, such as digital assistants using voice recognition, or the algorithms that recommend the next movie we watch.

The new DigComp 2.2 update aims to give users basic knowledge of what AI systems do and what they don’t do, and introduces some basic principles to keep in mind when interacting with AI systems. This can help citizens become more confident, critical, and yet open-minded users of today’s technologies, while helping mitigate risks related to safety, personal data and privacy.
 

AI is everywhere, but are we aware of it? 

Many everyday technologies use and integrate some type of artificial intelligence. For example, AI can be used to translate voice commands into a concrete action – like making a call or turning the lights on. Often, people are not aware that such systems collect personal data about the user and their actions, and they do not realise how this data can be used for a multitude of purposes (e.g. training new AI algorithms or sharing data with third parties). This naturally raises a range of privacy and safety concerns.
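
To see both halves of that point in miniature, consider the hypothetical toy sketch in Python below. It is not taken from the DigComp report, and real assistants use machine-learned speech and language models rather than keyword matching; the names here are purely illustrative. A command is mapped to an action, and the very same command is quietly logged as personal data:

    # Toy "assistant": keyword matching stands in for real speech recognition.
    interaction_log = []  # the data the system keeps about the user's actions

    def handle_command(utterance: str) -> str:
        # The raw command is recorded: personal data that could later be reused,
        # e.g. to train new AI algorithms or be shared with third parties.
        interaction_log.append(utterance)
        if "call" in utterance:
            return "placing a call"
        if "lights" in utterance:
            return "turning the lights on"
        return "sorry, I did not understand that"

    print(handle_command("turn the lights on"))  # -> turning the lights on
    print(interaction_log)  # the log the user rarely sees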

The DigComp 2.2 update includes an appendix with more than 70 examples that can help people better understand where, and in which everyday situations, they can expect to encounter AI systems. It also gives practical examples of how emerging technologies have permeated our everyday lives.
 

How can DigComp 2.2 help us understand AI systems? 

Three types of examples are given in this newest update of the framework to help users understand what AI systems do – and what they do not:

  • Knowledge examples focus on facts, principles, and practices. Engaging with AI systems in a confident and safe way means being aware of how search engines, social media and content platforms use AI algorithms to generate responses adapted to the preferences of the individual user (this reference number helps you find it in the publication: AI 03, page 78).
  • Skills examples focus on the ability to apply knowledge when interacting with AI systems. This means knowing how to modify user configurations (e.g. in apps, software, digital platforms) to enable, limit or prevent the AI system's tracking, collection or analysis of data (for example, switching off location tracking on our phones; AI 35, page 80).
  • Attitude examples are linked to human agency and control and indicate a disposition or mindset to act. This means being open to AI systems supporting humans in making informed decisions in accordance with their goals (e.g. a user actively deciding whether or not to act upon a recommendation).

All examples are related to the existing DigComp competences, with the idea that they will help educational curriculum designers and training providers refresh their training content and better illustrate the application and integration of emerging technologies in everyday life.
 

Gaining confidence to tackle fake news and disinformation

The new themes in the update touch upon current phenomena that often amplify disinformation on social media platforms, such as filter bubbles (a bias caused by an algorithm that limits the information a user sees based on their previous activities) and echo chambers (a situation where users receive online information that reinforces their existing views without encountering opposing perspectives). Examples also illustrate deepfakes and other automated forms of AI-generated content.
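
To make the filter-bubble mechanism concrete, here is a minimal, hypothetical sketch in Python. It is not taken from the DigComp report, names such as CATALOGUE and recommend are purely illustrative, and real platforms use far more sophisticated ranking models. The sketch weights each new recommendation towards topics the user has clicked before, so the range of topics in the feed tends to narrow with every interaction:

    # Toy recommender illustrating how preference-based filtering narrows a feed.
    from collections import Counter
    import random

    CATALOGUE = {
        "politics": ["p1", "p2", "p3"],
        "sports": ["s1", "s2", "s3"],
        "science": ["c1", "c2", "c3"],
    }

    def recommend(click_history, k=3):
        """Pick k items, weighted towards the topics clicked most often."""
        if not click_history:
            # Cold start: sample topics uniformly.
            topics = random.choices(list(CATALOGUE), k=k)
        else:
            counts = Counter(click_history)
            topics = random.choices(list(counts), weights=list(counts.values()), k=k)
        return [(topic, random.choice(CATALOGUE[topic])) for topic in topics]

    history = []
    for step in range(5):
        feed = recommend(history)
        history.append(feed[0][0])  # assume the user clicks the first item shown
        print(step, sorted({topic for topic, _ in feed}))

Run the loop a few times and the set of topics printed each round typically collapses to a single topic: the same self-reinforcing dynamic that, at scale, limits the information a user sees.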

The examples also focus on privacy concerns when engaging with AI systems that might share personal data with third parties, and lay bare the basic questions we need to ask before activating face recognition software or the digital assistant on our phones.

The DigComp 2.2 examples on people interacting with AI systems aim to paint a picture of today's world and help people engage confidently, critically, and safely with everyday technologies, especially those driven by AI. Another goal is to empower citizens to take more control of their own lifelong learning so that they stay informed about AI systems and what we call "the datafication" of every aspect of our lives. Last but not least, the examples aim to help people navigate ethical questions related to digital practices, such as human autonomy, which underpins many of the EU's values. These goals also underpin the European Commission's Digital Education Action Plan, which aims to enhance citizens' digital skills and competences for the digital transformation.

Read the full report here. 


About the author 

Dr. Vuorikari is the lead investigator for DigComp 2.2. Her work focuses on developing a better understanding of the knowledge, skills and attitudes that help citizens engage confidently, critically and safely with digital technologies, including AI systems. She worked at the European Commission's Joint Research Centre (JRC) from July 2013 until August 2022. She holds degrees in education (M.Ed, Finland, 1998) and hypermedia (DEA, France, 1999), and gained her PhD in 2009.
 

About the JRC

The Joint Research Centre is the Commission's science and knowledge service. The JRC employs scientists to carry out research and provide independent scientific advice and support to the EU. The EU Science Hub is the main website of the JRC.

 
