ChatGPT energy consumption depends in part on whether or not you’re logged in – Daily Kos

Nov 13, 2025 - 05:30

The Integration of Artificial Intelligence in Academic Research: An Analysis of Opportunities and Risks in the Context of Sustainable Development Goals

Introduction: AI Adoption and Sustainable Innovation (SDG 9)

Pressure to integrate Artificial Intelligence (AI) is mounting across all sectors, a trend that aligns with the objectives of Sustainable Development Goal 9 (Industry, Innovation, and Infrastructure). This report examines the implementation of AI in academic research platforms, focusing on the opportunities for innovation and the associated risks to information integrity and sustainability. The analysis contrasts the AI tools provided by ScienceDirect (Elsevier B.V.) and ChatGPT (OpenAI) to highlight differing approaches to responsible AI deployment.

Case Study 1: AI in Peer-Reviewed Platforms and Quality Education (SDG 4)

Academic platforms like ScienceDirect are incorporating AI tools to enhance research efficiency, directly impacting SDG 4 (Quality Education) by aiming to improve access to and comprehension of scholarly information. The ScienceDirect “AI Reading Assistant” is presented as a tool to help researchers by generating real-time responses based exclusively on the content of a single peer-reviewed article.

The platform includes explicit disclaimers regarding the technology’s limitations, which is a crucial step toward responsible implementation. Key points from its user advisory include:

  • Content Source: Information is sourced exclusively from the viewed article, with no external data integration.
  • Disclaimer on Accuracy: The platform warns that AI-generated content may contain discrepancies and should not be used for medical advice or as a calculator.
  • User Judgment: Users are advised not to rely solely on AI outputs without exercising their own critical judgment.

This cautious approach demonstrates an awareness of the need to maintain the integrity of academic resources, which is fundamental to achieving quality education and fostering strong institutions.

Case Study 2: Generative AI and Information Integrity Challenges

In contrast, general-purpose generative AI models like ChatGPT present significant challenges to information accuracy. An empirical test involving complex mathematical queries in algebraic number theory revealed critical failures.

  1. Generation of False Information: When presented with a query about factorizations in the ring of algebraic integers of Q(∛10), the AI produced responses that were mathematically incorrect and contained logical fallacies. For example, it incorrectly asserted that 10 = (∛10)(∛10 − ∛100).
  2. Unverifiable Sophistication: In subsequent tests, the AI generated a more sophisticated and seemingly correct answer but cited its methodology as “standard” in classical algebraic number theory texts. While it provided a legitimate citation (Neukirch, Algebraic Number Theory), the reliance on advanced concepts without a transparent, verifiable process makes it an unreliable tool for genuine learning and understanding, potentially undermining the objectives of SDG 4.
  3. Risk to Institutional Trust (SDG 16): The potential for AI to generate convincing but false information, including fabricated legal precedents as seen in a widely reported court case, poses a direct threat to SDG 16 (Peace, Justice, and Strong Institutions) by eroding trust in information systems.
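The false identity in point 1 is easy to debunk numerically: since (∛10)² = ∛100, the product (∛10)(∛10 − ∛100) collapses to ∛100 − 10 ≈ −5.36, nowhere near 10. A minimal check (illustrative, not from the article):

```python
# Numerical sanity check of the claimed identity 10 = (∛10)(∛10 − ∛100).
# Since (∛10)^2 = ∛100, the product reduces to ∛100 − 10, not 10.
cbrt10 = 10 ** (1 / 3)    # ∛10  ≈ 2.154
cbrt100 = 100 ** (1 / 3)  # ∛100 ≈ 4.642
product = cbrt10 * (cbrt10 - cbrt100)
print(product)            # ≈ -5.358, nowhere near 10
```

A three-line check like this is exactly the kind of independent verification the article argues AI outputs still require.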

Environmental and Ethical Implications: A Link to Climate Action (SDG 13)

The widespread deployment of large AI models raises significant environmental and ethical concerns. The high energy consumption required for training and running these systems is in direct conflict with SDG 13 (Climate Action). The pursuit of AI capabilities must be balanced with its environmental footprint to ensure sustainable technological advancement.

Ethically, over-reliance on AI short-circuits the critical thinking and deep learning that are the ultimate goals of education. If AI is used as a substitute for, rather than a supplement to, intellectual effort, it could impede the development of the skilled and knowledgeable individuals essential for building sustainable societies.

Conclusion: Towards Responsible AI for Sustainable Development

The integration of AI into academic and informational ecosystems offers potential benefits for innovation but carries substantial risks. The comparison between ScienceDirect’s contained, article-specific AI assistant and ChatGPT’s broader, less reliable model highlights the importance of corporate responsibility in technological deployment. To align with the Sustainable Development Goals, the development and use of AI must prioritize:

  • Accuracy and Accountability: Ensuring that AI tools support, rather than undermine, the integrity of information, thereby strengthening institutions (SDG 16) and education (SDG 4).
  • Sustainable Infrastructure: Addressing the significant energy consumption of AI to mitigate its impact on climate change (SDG 13).
  • Purposeful Innovation: Focusing AI development on applications that genuinely enhance human understanding and contribute to sustainable progress (SDG 9), rather than creating tools that can easily be misused to generate misinformation.

Analysis of the Article in Relation to Sustainable Development Goals

1. SDGs Addressed or Connected to the Issues Highlighted

  • SDG 4: Quality Education

    The article revolves around the use of AI tools in academic and scientific research, a core component of higher education. It questions the reliability and accuracy of AI-generated information (from ChatGPT and ScienceDirect’s AI assistant) for learning and understanding complex subjects like mathematics, directly impacting the quality of educational resources.

  • SDG 9: Industry, Innovation, and Infrastructure

    The article discusses the widespread pressure for businesses to adopt AI, a key technological innovation. It examines the application of this innovation in the scientific publishing industry (Elsevier’s ScienceDirect) and evaluates the responsibility of tech companies (OpenAI vs. Elsevier) in deploying their technologies. The mention of AI’s energy consumption also relates to the sustainability of the infrastructure supporting this innovation.

  • SDG 12: Responsible Consumption and Production

    The author expresses concern over “A.I.’s monstrous hunger for energy.” This highlights the issue of sustainable consumption of resources (in this case, energy) in the production and operation of digital services and technologies, which is a central theme of SDG 12.

  • SDG 13: Climate Action

    A direct connection is made when the author worries that AI’s energy consumption is “hastening a major climate change catastrophe.” This links the technological trend of AI adoption directly to its potential negative impact on the climate, which is the focus of SDG 13.

  • SDG 16: Peace, Justice, and Strong Institutions

    The article touches upon the accountability of institutions (corporations like OpenAI and Elsevier) in developing and deploying AI responsibly. Furthermore, the reference to a lawyer using “fictional case law he got from ChatGPT” illustrates how unreliable AI can undermine the integrity and justice of legal institutions.

2. Specific Targets Under Those SDGs

  • SDG 4: Quality Education

    • Target 4.4: “By 2030, substantially increase the number of youth and adults who have relevant skills, including technical and vocational skills, for employment, decent jobs and entrepreneurship.” The article explores the use of AI as a tool for acquiring highly technical knowledge in mathematics. However, it demonstrates that the unreliability of these tools (“laughably false” answers) can hinder rather than help in developing accurate and relevant skills.
    • Target 4.7: “By 2030, ensure that all learners acquire the knowledge and skills needed to promote sustainable development…” The author’s reflection on AI’s energy use and climate impact while conducting research is an example of integrating sustainability awareness into the learning process.
  • SDG 9: Industry, Innovation, and Infrastructure

    • Target 9.4: “By 2030, upgrade infrastructure and retrofit industries to make them sustainable, with increased resource-use efficiency…” The concern about AI’s “monstrous hunger for energy” directly points to the need for more energy-efficient (and thus sustainable) technological infrastructure to support innovation.
    • Target 9.5: “Enhance scientific research, upgrade the technological capabilities of industrial sectors…and encourage innovation.” The article is a case study of using a new technology (AI) in scientific research. It critically evaluates the quality of this innovation, showing that simply adopting AI is not enough; its effectiveness and reliability are crucial for genuinely enhancing research.
  • SDG 12: Responsible Consumption and Production

    • Target 12.6: “Encourage companies, especially large and transnational companies, to adopt sustainable practices…” The article implicitly calls for this by contrasting Elsevier’s “far more responsible” approach to AI with OpenAI’s, suggesting that corporate responsibility is a key factor in the sustainable deployment of new technologies.
  • SDG 13: Climate Action

    • Target 13.3: “Improve education, awareness-raising and human and institutional capacity on climate change mitigation…” The author’s personal concern about the climate impact of their AI queries is a direct example of awareness-raising on the environmental costs of digital technologies.
  • SDG 16: Peace, Justice, and Strong Institutions

    • Target 16.6: “Develop effective, accountable and transparent institutions at all levels.” The article questions the accountability of OpenAI when its tool produces incorrect or fabricated information. The example of the lawyer using fake citations from ChatGPT highlights a failure of accountability that weakens legal institutions.

3. Indicators Mentioned or Implied

  • For SDG 4 (Quality Education)

    • Implied Indicator: The accuracy rate of AI-powered research and educational tools. The author’s entire process of fact-checking ChatGPT’s mathematical claims—noting that some answers are “laughably false” while others are “not quite right”—implies that a key metric for quality is the frequency of incorrect or misleading information generated by these systems.
  • For SDG 9, 12, and 13 (Innovation and Sustainability)

    • Implied Indicator: The energy consumption per query or task for AI models. The phrase “A.I.’s monstrous hunger for energy” points directly to energy use as a critical performance indicator for sustainable technology. Measuring this would be essential to track progress towards sustainable innovation (SDG 9), responsible resource consumption (SDG 12), and climate action (SDG 13).
  • For SDG 16 (Strong Institutions)

    • Implied Indicator: The incidence of AI-generated misinformation being used in official or institutional contexts. The article cites the real-world example of a lawyer filing a brief with “fictional case law” from ChatGPT. This serves as a powerful, albeit anecdotal, indicator of how AI can undermine institutional integrity and accountability.
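The energy-per-query indicator suggested above could be operationalized with simple arithmetic: divide a system's sustained power draw by its query throughput. The sketch below uses entirely made-up placeholder figures to show the calculation, not to estimate any real model's footprint.

```python
# Illustrative only: how "energy consumption per query" could be computed.
# Both figures below are placeholder assumptions, not measurements.
SERVER_POWER_W = 10_000    # assumed power draw of an inference cluster (watts)
QUERIES_PER_SECOND = 25    # assumed sustained throughput (queries per second)

joules_per_query = SERVER_POWER_W / QUERIES_PER_SECOND  # J = W / (queries/s)
wh_per_query = joules_per_query / 3600                  # convert joules to Wh
print(f"{wh_per_query:.3f} Wh per query")
```

Tracking a metric like this over time, with real measurements substituted for the placeholders, would make progress toward SDGs 9, 12, and 13 quantifiable.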

4. Summary Table of Findings

| SDG | Targets | Indicators (Implied from Article) |
| --- | --- | --- |
| SDG 4: Quality Education | 4.4: Increase the number of adults with relevant technical skills. 4.7: Ensure learners acquire knowledge for sustainable development. | Accuracy rate of AI-powered educational and research tools, measured by the frequency of incorrect or misleading information. |
| SDG 9: Industry, Innovation, and Infrastructure | 9.4: Upgrade infrastructure to be sustainable and resource-efficient. 9.5: Enhance scientific research and encourage quality innovation. | Energy consumption per AI query or task to measure the resource efficiency of the technology’s infrastructure. |
| SDG 12: Responsible Consumption and Production | 12.6: Encourage companies to adopt sustainable practices. | Adoption of responsible and transparent AI development practices by corporations (e.g., clear disclaimers, content sourcing). |
| SDG 13: Climate Action | 13.3: Improve education and awareness-raising on climate change mitigation. | Public and user awareness of the carbon footprint associated with using AI and other digital technologies. |
| SDG 16: Peace, Justice, and Strong Institutions | 16.6: Develop effective, accountable, and transparent institutions. | Incidence of AI-generated misinformation (e.g., “fictional case law”) being used in official institutional proceedings. |

Source: dailykos.com

 
