The Ongoing Battle in Higher Education Over Using ChatGPT for Academic Work – Freedom For All Americans

Nov 12, 2025 - 11:00

Report on the Integration of Generative AI in Higher Education and Its Implications for the Sustainable Development Goals

Executive Summary

The rapid integration of generative artificial intelligence (AI) tools such as ChatGPT into higher education since late 2022 presents significant challenges and opportunities for achieving the United Nations Sustainable Development Goals (SDGs). This report analyzes the impact of AI on academic practices, institutional policies, and student learning, with a specific focus on SDG 4 (Quality Education), SDG 10 (Reduced Inequalities), SDG 9 (Industry, Innovation, and Infrastructure), and SDG 16 (Peace, Justice, and Strong Institutions). The findings indicate a critical need for a strategic realignment of educational frameworks to ensure that technological adoption promotes equitable, inclusive, and high-quality learning outcomes in line with the 2030 Agenda.

SDG 4: Ensuring Inclusive and Equitable Quality Education

The proliferation of AI directly impacts the core tenets of SDG 4 by challenging traditional models of teaching, learning, and assessment. Ensuring quality education now requires institutions to address AI’s role in both enhancing and potentially undermining academic integrity and critical thinking skills.

Challenges to Academic Integrity and Learning Quality

The widespread use of AI for academic tasks threatens the authenticity of student work and the development of essential cognitive skills. This trend necessitates a re-evaluation of what constitutes meaningful learning.

  • Surveys indicate pervasive AI adoption, with 88% of UK students and 86% of students globally using AI for their studies by 2025.
  • While many students use AI for legitimate learning aids like summarizing concepts, a significant minority uses it for academic misconduct, leading to a reported fifteenfold increase in such cases at some UK universities.
  • Overreliance on AI risks shallow learning and intellectual stagnation, with a 2025 MDPI study highlighting a trade-off between accelerated comprehension and diminished critical reasoning.

Redesigning Assessments for Authentic Learning Outcomes

In alignment with SDG 4’s emphasis on relevant learning outcomes, institutions are moving away from AI detection tools, which have proven unreliable, toward designing “authentic assessments” that are resilient to AI misuse.

  1. Australia’s higher education regulator, TEQSA, warned in 2025 that AI-assisted cheating is “all but impossible” to detect and urged a systemic redesign of assessments.
  2. New assessment strategies focus on verifying human reasoning and process, including:
  • Contextual essay prompts linked to in-class discussions or local data.
  • Graded outlines and drafts to reward the intellectual process.
  • Oral defenses where students must explain their work.
  • In-class presentations and collaborative projects.

Promoting AI Literacy for Lifelong Learning

A key component of achieving SDG 4 is equipping learners with skills for the future. Institutions are increasingly recognizing AI literacy as a fundamental competency.

  • OpenAI’s educational initiatives, including ChatGPT Edu, encourage teaching responsible AI use.
  • Universities are shifting from outright bans to embedding AI literacy into curricula, treating AI as a tool to be used critically rather than a threat to be eliminated.

SDG 10: Reducing Inequalities

The integration of AI in education has exposed and exacerbated existing inequalities, posing a direct challenge to SDG 10. Issues of access, bias, and fairness must be addressed to prevent the creation of a new digital divide.

Equity Gaps in Access and Application

Disparities in access to advanced AI tools can create an unfair advantage, deepening educational inequalities.

  • Students with access to premium AI versions gain significant advantages in speed and output quality over those using free versions.
  • This digital divide risks penalizing students from lower socioeconomic backgrounds, undermining the goal of equitable educational opportunities for all.

Bias in AI Detection Systems

The unreliability of AI detection tools disproportionately affects marginalized student populations, leading to false accusations and reinforcing systemic biases.

  • Reports have documented that automated detection tools are more likely to flag writing from non-native English speakers and neurodivergent students as “AI-like.”
  • This bias erodes trust and creates significant stress for vulnerable students, directly contradicting the principles of fairness and inclusion central to SDG 10.

SDG 9 (Innovation) and SDG 16 (Strong Institutions)

The AI challenge requires a dual focus on embracing technological innovation (SDG 9) while building robust, transparent, and accountable institutional frameworks (SDG 16) to govern its use.

Institutional Governance and Policy Adaptation

Higher education institutions are rewriting policies to create fair and effective governance structures for AI, moving from reactive bans to principle-based frameworks.

  1. Initial prohibitions have been replaced by nuanced, course-level policies that define acceptable use.
  2. Universities like Birkbeck, University of London, have formally added unapproved AI use to academic misconduct policies.
  3. Institutions now provide syllabus templates that allow instructors to specify rules, ranging from total prohibition to full integration with citation requirements.
  4. However, an Inside Higher Ed survey found that 30% of students remain unclear on AI use policies, highlighting the need for stronger and more consistent institutional communication.

Balancing Innovation with Data Privacy and Ethical Oversight

Leveraging AI as an innovative educational tool must be balanced with strong ethical oversight, particularly concerning data protection.

  • A 2025 study warned that integrating commercial AI tools could expose sensitive student data, creating compliance risks under regulations like GDPR.
  • Strong institutions (SDG 16) must ensure AI tools meet data protection standards, demand transparency from vendors, and maintain human oversight in processes like grading to prevent unapproved or unethical use by faculty.

Conclusion: Future Directions for AI in Education Aligned with the 2030 Agenda

The integration of generative AI in higher education is a defining challenge of our time. A reactive or punitive approach is unsustainable and misaligned with the Sustainable Development Goals. The path forward requires a proactive strategy focused on redefining learning in an AI-enabled world. Key trends suggest a future where:

  • AI literacy is a universal graduate attribute, central to achieving SDG 4.
  • Assessment models are diversified to measure authentic human skills, ensuring educational quality.
  • Institutional policies are clear, equitable, and consistently applied, strengthening institutional integrity (SDG 16) and reducing inequality (SDG 10).
  • Ethical frameworks guide the adoption of new technologies, balancing innovation (SDG 9) with responsibility.

Ultimately, ChatGPT has forced higher education to confront its core purpose. By aligning its response with the principles of the SDGs, the sector can navigate this disruption to foster a more equitable, innovative, and effective educational future for all.

Analysis of SDGs, Targets, and Indicators

1. Which SDGs are addressed or connected to the issues highlighted in the article?

The article discusses the integration of Generative AI like ChatGPT into higher education, touching upon issues of learning quality, academic integrity, equity, and institutional policy. These themes connect to several Sustainable Development Goals (SDGs):

  • SDG 4: Quality Education: The entire article is centered on higher education. It explores how AI impacts the quality of learning, assessment methods, and the skills students need. It questions “what it means to learn” and focuses on redesigning education to ensure it remains effective and relevant.
  • SDG 9: Industry, Innovation, and Infrastructure: The article details how a major technological innovation (Generative AI) is disrupting an entire sector (higher education). It discusses the need for institutions to adapt their “infrastructure”—in this case, their policies, assessment systems, and pedagogical approaches—to integrate this new technology responsibly.
  • SDG 10: Reduced Inequalities: The article explicitly raises concerns about equity. It points out that students with access to premium AI tools may have an advantage, and that flawed AI detection systems can “disproportionately affect” non-native English speakers and neurodivergent students, thus creating or exacerbating inequalities within the educational system.
  • SDG 16: Peace, Justice, and Strong Institutions: The core of the article deals with how educational institutions are responding to this disruption. It details the rewriting of “academic integrity policies,” the development of “principle-based frameworks,” and the challenge of maintaining fairness and transparency. This relates directly to the goal of building effective, accountable, and transparent institutions.

2. What specific targets under those SDGs can be identified based on the article’s content?

Based on the issues discussed, the following specific SDG targets are relevant:

  1. Under SDG 4 (Quality Education):
    • Target 4.4: “By 2030, substantially increase the number of youth and adults who have relevant skills, including technical and vocational skills, for employment, decent jobs and entrepreneurship.” The article supports this by highlighting the need to prepare students for an “AI-driven workplace” and suggesting that “AI literacy will become a standard graduate skill.”
    • Target 4.7: “By 2030, ensure that all learners acquire the knowledge and skills needed to promote sustainable development…” The focus on redefining “academic integrity,” promoting “responsible use” of technology, and encouraging critical thinking over “intellectual stagnation” aligns with this target’s goal of fostering responsible and ethical citizens.
  2. Under SDG 9 (Industry, Innovation, and Infrastructure):
    • Target 9.5: “Enhance scientific research, upgrade the technological capabilities of industrial sectors in all countries…and encourage innovation…” The article is a case study of the higher education sector grappling with upgrading its capabilities and innovating its teaching and assessment methods in response to a transformative technology.
  3. Under SDG 10 (Reduced Inequalities):
    • Target 10.3: “Ensure equal opportunity and reduce inequalities of outcome…” The article directly addresses this by discussing how “students with access to premium AI versions gain speed and fluency advantages” and how biased detection systems risk “penalizing those with non-standard writing styles,” leading to unequal outcomes.
  4. Under SDG 16 (Peace, Justice, and Strong Institutions):
    • Target 16.6: “Develop effective, accountable and transparent institutions at all levels.” The article describes how universities are attempting to do this by moving from “outright bans toward principle-based, course-level policies emphasizing transparency,” rewriting misconduct policies, and ensuring faculty are transparent about their own AI use.

3. Are there any indicators mentioned or implied in the article that can be used to measure progress towards the identified targets?

Yes, the article mentions several explicit and implied indicators that can measure progress:

  • Indicators for SDG 4 (Targets 4.4 & 4.7):
    • Percentage of students using AI tools for academic work: The article provides concrete data, such as “88 percent of students in the UK had used generative AI tools” and “86 percent of students globally now use AI.” This can be used as an indicator of the adoption of new technological skills.
    • Development of AI literacy programs: The article suggests that “AI literacy will become a standard graduate skill.” The number of institutions offering workshops or embedding AI literacy into curricula would be a key indicator of progress.
  • Indicators for SDG 10 (Target 10.3):
    • Disproportionate impact of AI detection tools: The article notes that “Non-native English speakers and neurodivergent students were disproportionately affected” by false accusations. An indicator could be the rate of false positives from AI detectors among different student demographic groups (a minimal calculation sketch follows this list).
  • Indicators for SDG 16 (Target 16.6):
    • Rates of academic misconduct cases involving AI: The article states that major UK universities reported “up to fifteenfold increases in academic misconduct cases” and that at one university, “nearly one-third of confirmed cases involved AI misuse.” Tracking these numbers indicates the scale of the challenge to institutional integrity.
    • Adoption of formal AI policies: The number of universities that have rewritten their academic integrity policies to explicitly address AI, as exemplified by “Birkbeck, University of London,” serves as an indicator of institutional responsiveness and transparency.
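
To make that measurement concrete, the short Python sketch below shows one way an institution might tabulate detector false-positive rates by student group. It is a minimal illustration only: the groups, records, and field names are hypothetical placeholders, not data drawn from the article.

```python
# Minimal sketch (hypothetical data): false-positive rate of an AI detector
# by student group, as one possible equity indicator for Target 10.3.
from collections import defaultdict

# Each record: (student_group, flagged_by_detector, actually_used_ai).
# All values below are illustrative placeholders.
records = [
    ("native_speaker", False, False),
    ("native_speaker", True, True),
    ("non_native_speaker", True, False),   # flagged despite no AI use
    ("non_native_speaker", False, False),
    ("neurodivergent", True, False),       # flagged despite no AI use
    ("neurodivergent", True, True),
]

flagged_clean = defaultdict(int)  # submissions flagged despite no AI use
total_clean = defaultdict(int)    # all submissions with no AI use

for group, flagged, used_ai in records:
    if not used_ai:
        total_clean[group] += 1
        if flagged:
            flagged_clean[group] += 1

for group, total in total_clean.items():
    rate = flagged_clean[group] / total
    print(f"{group}: false-positive rate {rate:.0%} ({flagged_clean[group]}/{total})")
```

Comparing these per-group rates over time would show whether detection practices are narrowing or widening the fairness gap the article describes.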

4. Summary Table of SDGs, Targets, and Indicators

SDG 4: Quality Education
  Targets:
  • 4.4: Increase the number of youth and adults with relevant skills for employment.
  • 4.7: Ensure all learners acquire knowledge and skills for sustainable development, including responsible use of technology.
  Indicators:
  • Percentage of students using AI tools for academic purposes (e.g., “88% of students in the UK”).
  • Number of institutions implementing “AI literacy” programs as a standard skill.

SDG 9: Industry, Innovation, and Infrastructure
  Targets:
  • 9.5: Upgrade the technological capabilities of sectors and encourage innovation.
  Indicators:
  • Number of universities redesigning assessment methods away from traditional essays to “authentic tasks.”
  • Adoption of new pedagogical models that integrate AI into the curriculum (e.g., “AI-integrated pedagogy”).

SDG 10: Reduced Inequalities
  Targets:
  • 10.3: Ensure equal opportunity and reduce inequalities of outcome.
  Indicators:
  • Rate of false accusations from AI detection tools, particularly among “non-native English speakers and neurodivergent students.”
  • Data on performance gaps between students with access to premium versus free AI tools.

SDG 16: Peace, Justice, and Strong Institutions
  Targets:
  • 16.6: Develop effective, accountable, and transparent institutions.
  Indicators:
  • Number of academic misconduct cases related to AI misuse (e.g., “up to fifteenfold increases”).
  • Percentage of universities with updated, transparent academic integrity policies that explicitly address AI.

Source: freedomforallamericans.org

 
