Tackling disinformation "par la formation" (through training)

What you will learn
At the end of this training, you will be able to identify and counter disinformation in order to refocus attention on public debate. Drawing on scientific articles and evidence, you will be able to explain what invisible pollution is, the links between AI, energy and climate, and the links between social networks and mental health.
Program overview
This course reviews ten years of research and development (R&D) in artificial intelligence, from 2014 to today, for teachers, researchers and students in higher education and research or in national education.
Duration: 6 hours.
Themes: Attention, Society and the Environment.
Audience (recommended): Bachelor's students (L2/L3); language learning and development practitioners (speech therapists, psychologists, teachers, researchers, data scientists); non-profits fighting disinformation and working for social and climate justice.
Chapters overview
From 2014 to 2018, France defined a roadmap for the coming years in artificial intelligence (AI). From 2018 to 2022, one alert after another warned politicians and scientists about the severe ecological impact of deep learning.
After record heat in France in 2022, and against a backdrop of energy crisis and war in Europe, the winter of 2023 was marked by a record drought (32 days without rain) and record energy consumption with ChatGPT (GPT-3.5), LLaMA from Facebook AI Research (FAIR Paris), GPT-4… for the benefit of a few French engineers and investors. The winter ended with the bankruptcy of Silicon Valley Bank, the acquisition of Credit Suisse by UBS and a social crisis in France.
In this course, you will learn how to spot greenwashing, conflicts of interest and fake news, and how to fight disinformation by reinvesting your time, energy and attention in sustainable development.
What to remember from the Villani report
Spring 2018. AI must not become a new machine for exclusion.
Energy and Policy Considerations
Spring 2019. Emma Strubell's alert, relayed by the MIT Technology Review.
Identify greenwashing and conflicts of interest
What is the goal of climate skeptics? What are they defending? Why is ChatGPT a source of misinformation? A case study of LLaMA and ChatGPT.
Tackling disinformation "par la formation", in practice
A French spring and summer without social networks.
FAQs
Are there prerequisites?
There are no prerequisites for the first course.
How often do the courses run?
Continuously, at your own pace. Some events will be scheduled to foster a supportive learning environment and a sense of community.
I have worked in linguistics or as a data scientist. Could I share what I have learned with you?
Yes, of course! Feel free to reach out to chat and co-create the course. Thanks!
Can I get a certificate of completion?
Yes! To validate the course, you must complete a final multiple-choice quiz (50% of the points) and carry out a disinformation-awareness project (50% of the points), for example two or three interviews/testimonials about a summer without social networks, in the form of an audiovisual or written production. If you pass the module, you will obtain a certificate of completion.
References
2023
Screens: health threats. CNRS.
Screens and health: it is urgent to act! Mediapart.
Servane Mouton. Screens: our health in danger? STEEP research team at INRIA Grenoble.
David Chavalarias, Paul Bouchaud, Victor Chomel and Maziyar Panahi. Climate skeptics on Twitter: an investigation into the mercenaries of disinformation. CNRS.
Synthesis report of the 6th IPCC assessment (AR6). ecologie.gouv.fr.
Climate change: perception, information and disinformation. Descartes Foundation.
2022
Rae CL, Farley M, Jeffery KJ, Urai AE. Climate crisis and ecological emergency: Why they concern (neuro)scientists, and what we can do. Brain and Neuroscience Advances.
Philippe Testard-Vaillant and Charline Zeitoun. Internet, the disinformation highway? CNRS.
David Chavalarias. Toxic Data. How networks manipulate our opinions. Editions Flammarion.
Synthesis of the Bronner Commission report: Enlightenment in the digital age. Descartes Foundation.
2021
The Facebook files. A Wall Street Journal investigation.
Citizen’s Climate Convention and the Case of the Century.
2019
Kate Jeffery. The Psychology of Climate Inaction. UCL.
Karen Hao. Training a single AI model can emit as much carbon as five cars in their lifetimes. MIT Technology Review.
Emma Strubell, Ananya Ganesh and Andrew McCallum. Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL). Florence, Italy.
An AI that writes convincing prose risks mass-producing fake news. MIT Technology Review.
2018
Laure Delisle et al. A large-scale crowdsourced analysis of abuse against women journalists and politicians on Twitter. NeurIPS.
Nikhil Garg, Londa Schiebinger, Dan Jurafsky and James Zou. Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of Sciences.
Cédric Villani. For a meaningful artificial intelligence (Donner un sens à l'intelligence artificielle). gouv.fr.
2017
France I.A. Strategy Report, for the development of artificial intelligence technologies. gouv.fr.
Anti-greenwashing guide. For an Ecological Awakening (Pour un réveil écologique).
Legislation
Law No. 2021-1104 on tackling climate change and strengthening resilience to its effects. legifrance.gouv.fr. 2021.
Law No. 2018-1202 on the fight against the manipulation of information. legifrance.gouv.fr. 2018.
General Data Protection Regulation. CNIL. 2018.