Digital literacy in the AI era: Experts say it’s time to rethink how we use technology – 13newsnow.com

Nov 9, 2025 - 16:30


Report on the Intersection of Artificial Intelligence, Digital Literacy, and Sustainable Development Goals

Executive Summary

An analysis based on expert commentary from Dr. Scott Debb, a cyberpsychology professor at Norfolk State University, reveals a critical disconnect between the rapid advancement of Artificial Intelligence (AI) and public digital literacy. This gap poses significant risks to the achievement of several United Nations Sustainable Development Goals (SDGs), particularly those concerning education, health, innovation, and stable institutions. The report outlines how uncritical acceptance of AI-generated information, a phenomenon termed “cognitive offloading,” undermines the principles of sustainable development and recommends strategic interventions focused on education and regulation.

Digital Literacy Deficit: A Barrier to SDG 4 (Quality Education)

The increasing integration of AI into daily life has exposed a profound deficit in digital literacy, directly challenging the objectives of SDG 4, which aims to ensure inclusive and equitable quality education and promote lifelong learning opportunities. Dr. Debb warns that as individuals grow more trusting of AI, they become passive consumers of information rather than critical thinkers.

  • Erosion of Critical Evaluation: Users often accept AI-generated content at face value, bypassing the essential learning process of evaluating and questioning information.
  • Hindrance to Lifelong Learning: Over-reliance on AI for research and problem-solving can stifle intellectual curiosity and the development of analytical skills necessary for continuous personal and professional growth.
  • Threat to Equitable Education: Without foundational digital literacy skills, the educational benefits of AI will be unequally distributed, potentially widening existing social and economic disparities.

Implications for SDG 16 (Peace, Justice, and Strong Institutions)

The uncritical consumption of AI-driven content has severe implications for SDG 16, which focuses on promoting peaceful and inclusive societies, providing access to justice, and building effective, accountable institutions. The tendency toward “cognitive offloading”—letting technology perform cognitive tasks—creates vulnerabilities that can be exploited to weaken societal foundations.

  • Misinformation and Institutional Trust: A populace that does not critically assess information is more susceptible to misinformation, which can erode trust in democratic processes, justice systems, and public institutions.
  • Lack of Informed Citizenry: Strong institutions rely on an engaged and informed public. Passive information consumption undermines the active citizenship required for accountability and justice.
  • Algorithmic Bias: As AI models mirror the data they are trained on, they can perpetuate and amplify existing biases, threatening the goal of building inclusive societies.

Risks to SDG 3 (Good Health and Well-being) and SDG 9 (Industry, Innovation, and Infrastructure)

The report identifies direct threats to public well-being and the principles of sustainable innovation. Without proper ethical guardrails and regulatory oversight, AI’s role in sensitive areas could be detrimental.

  • Threat to Mental Health (SDG 3): Dr. Debb explicitly warns against generative AI assuming roles it is not designed for, such as “playing therapist.” Placing faith in an unconscious program instead of a trained human professional poses a direct risk to individual mental health and well-being.
  • Unsustainable Innovation (SDG 9): The rapid advancement of AI without corresponding policies for responsible use runs counter to the goal of fostering resilient and sustainable innovation. The expert comparison to the early, unregulated days of social media highlights the potential for negative societal consequences if development outpaces governance.

Recommendations for Aligning AI with Sustainable Development

To mitigate the identified risks and harness AI as a tool for positive transformation, a multi-faceted approach aligned with the SDGs is necessary. The following actions are recommended:

  1. Integrate Comprehensive Digital Literacy into Education Curricula: To advance SDG 4, educational systems must prioritize teaching students how algorithms work, how to verify sources, and how to critically engage with AI-generated content. This should be treated as a fundamental skill alongside traditional literacy.
  2. Establish Robust Ethical and Regulatory Frameworks: In support of SDG 9 and SDG 16, governments and international bodies must collaborate to create oversight policies that ensure AI development is transparent, accountable, and aligned with human rights and public good.
  3. Promote Public Awareness Campaigns: A global effort is needed to educate the public on the capabilities and limitations of AI, encouraging a culture of critical consumption and responsible use to safeguard individual well-being (SDG 3) and societal stability (SDG 16).

Which SDGs are addressed or connected to the issues highlighted in the article?

  • SDG 4: Quality Education

    The article directly addresses SDG 4 by emphasizing the urgent need for education that keeps pace with technological advancements. The expert, Dr. Scott Debb, argues that “digital literacy education… should become as essential as traditional reading and writing skills.” This connects to the goal of ensuring inclusive and equitable quality education and promoting lifelong learning, as the skills required to navigate the modern world now include the ability to critically evaluate information from sources like AI.

  • SDG 16: Peace, Justice and Strong Institutions

    SDG 16 is relevant due to the article’s focus on the societal impact of misinformation and the need for governance. The warning that uncritically accepting AI-generated content can “shape how we understand truth” and lead to a “false sense of trust” touches upon the stability of societies and trust in information. The call for “policies meant to regulate” AI and the need for “oversight” directly relates to building effective and accountable institutions capable of managing new technologies to prevent negative societal consequences.

  • SDG 9: Industry, Innovation and Infrastructure

    This goal, which focuses on fostering innovation, is connected through the article’s discussion of AI as a powerful new technology. While AI is an innovation, the article highlights the risks associated with its rapid, unregulated advancement. The statement, “The technology is advancing faster than the policies meant to regulate it,” points to the need for sustainable and responsible innovation. The goal is not to stop AI but to ensure that this innovation is managed with “guardrails” and “safeguards” to make it a “tool for transformation, not dependency.”

What specific targets under those SDGs can be identified based on the article’s content?

  • Target 4.4: By 2030, substantially increase the number of youth and adults who have relevant skills, including technical and vocational skills, for employment, decent jobs and entrepreneurship.

    The article’s central argument for “digital literacy education” aligns perfectly with this target. The ability to understand “how algorithms work, where information comes from, and when to question it” is a crucial relevant skill in the 21st century. The expert’s call for people to be “critical consumers, not just consumers” of information is a call to develop the skills necessary to function effectively and responsibly in a digital-first world.

  • Target 4.7: By 2030, ensure that all learners acquire the knowledge and skills needed to promote sustainable development… and global citizenship.

    The article implies that responsible use of technology like AI is a component of modern global citizenship. Understanding the societal effects of “cognitive offloading” and misinformation is essential for promoting a sustainable and well-informed society. The skills to use AI “wisely” contribute to a culture of responsibility and critical thinking, which are foundational to sustainable development.

  • Target 16.10: Ensure public access to information and protect fundamental freedoms, in accordance with national legislation and international agreements.

While AI provides access to vast amounts of information, the article warns that it can undermine the quality and reliability of that information. The spread of misinformation threatens meaningful public access to truthful information. Promoting digital literacy, as advocated in the article, is a method of protecting this freedom by empowering individuals to discern fact from AI-generated falsehoods, thereby ensuring access to reliable information.

Are there any indicators mentioned or implied in the article that can be used to measure progress towards the identified targets?

  • Implied Indicator: Proportion of the population with functional digital literacy skills.

    The article highlights a “gap between what A.I. can do and what people understand about it.” An indicator to measure progress would be the percentage of the population that can demonstrate critical evaluation of AI-generated content. This could be assessed through national surveys or educational assessments that test skills like identifying biased information, understanding basic algorithmic principles, and verifying sources, reflecting the call for digital literacy to be as common as traditional literacy.

  • Implied Indicator: Existence of national policies and regulatory frameworks for Artificial Intelligence.

    The article explicitly states, “The technology is advancing faster than the policies meant to regulate it,” and calls for “oversight” and “safeguards.” A direct indicator of progress towards responsible innovation (SDG 9) and strong institutions (SDG 16) would be the development, adoption, and implementation of national strategies and laws governing the ethical development and deployment of AI technologies.

  • Implied Indicator: Public trust in digital information sources.

    The expert warns of a “false sense of trust” in AI. A relevant indicator would be measuring public trust levels in various information sources, including AI platforms, and their ability to differentiate between human and AI-generated content. A well-informed populace would exhibit healthy skepticism and rely on verified sources, indicating successful digital literacy education and a more resilient information ecosystem.

Summary of SDGs, Targets, and Indicators

  • SDG 4: Quality Education
    Targets: Target 4.4 – Increase the number of youth and adults with relevant skills; Target 4.7 – Ensure all learners acquire knowledge and skills for sustainable development and global citizenship.
    Indicator: Proportion of the population with functional digital literacy skills, measured through assessments of critical evaluation of AI-generated content.

  • SDG 16: Peace, Justice and Strong Institutions
    Target: Target 16.10 – Ensure public access to reliable information.
    Indicator: Public trust levels in digital information sources and the ability to distinguish reliable from unreliable information.

  • SDG 9: Industry, Innovation and Infrastructure
    Target: Cross-cutting with SDG 16 (Target 16.6: Develop effective, accountable institutions).
    Indicator: Existence and implementation of national policies and regulatory frameworks for the ethical oversight of Artificial Intelligence.

Source: 13newsnow.com

