Identify greenwashing and conflicts of interest

What is the goal of climate skeptics? What are they defending? Why is ChatGPT a source of misinformation? A case study of LLaMA and ChatGPT.

Disinformation is used to divert attention from the ecological emergency

"Who could have predicted the climate crisis?" asked Emmanuel Macron at the end of 2022. Six weeks later, during a record drought in France, the CNRS, the ISCPIF, Citizen4Science, Le Monde and the journalist Audrey Garric warned of a rise in climate scepticism on Twitter (bought by Elon Musk in 2022). A large community has been structured since the summer of 2022: more than 10,000 active accounts relay false information and attack the IPCC, with thousands of daily tweets.

February 2019 (ChatGPT2): "An AI that writes compelling prose risks mass-producing fake news." MIT Technology Review (this has not prevented investments in disinformation from multiplying).
February 2023 (ChatGPT3): screenshot from a MIASHS undergraduate student.
March 2023 (ChatGPT4): explosion of misinformation. BFM Business.

The purpose of disinformation as a business is to divert attention and influence decisions: where to invest, what to debate, etc. When content is toxic and viral, it generates more clicks (a higher click-through rate) and calls for more deep tech…

Thus, AI developments during fall/winter 2022-2023 (Galactica 120B, ChatGPT3, LLaMA 65B, ChatGPT4) and disinformation made it possible to hijack the attention of CEOs at the World Economic Forum, steering investment toward military applications rather than resilience and climate change.

In March 2023, TF1 talked about AI and war, the threats of AI to jobs, autonomous cars… ecology and youth were never mentioned. TF1's documentary and article cite one of the largest US banks, Goldman Sachs, and Bellingcat, whose business model is built on disinformation.

In reality, by projecting models onto a single axis (a metric) without accounting for resources, energy, or real costs, AI research has become a race to consume and pollute more. From HighRes-net (2019) to TR-MISR (2022), the metric improved by less than 2%, but complexity went from logarithmic to quadratic, and the time to train a model once rose from 9 h to 60 h on an NVIDIA Tesla V100 card. TR-MISR uses the Transformer, the same architecture found in BERT, ChatGPT and LLaMA. It is therefore not more AI, at the service of AI, that is needed (that is the goal of disinformation!). If tomorrow Twitter were made up only of bots, fake profiles and climate sceptics, what would it be worth?
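The cost-vs-gain tradeoff above can be sketched as a back-of-the-envelope calculation. Only the 9 h / 60 h training times and the "less than 2%" metric improvement come from the text; the normalized metric values and the "gain per extra GPU-hour" framing are our own illustration:

```python
# Illustrative comparison; numbers from the text, framing is ours.
models = {
    "HighRes-net (2019)": {"train_hours_v100": 9,  "metric": 1.00},
    "TR-MISR (2022)":     {"train_hours_v100": 60, "metric": 1.02},  # <2% better
}

base = models["HighRes-net (2019)"]
new = models["TR-MISR (2022)"]

metric_gain = (new["metric"] - base["metric"]) / base["metric"]    # relative gain
compute_cost = new["train_hours_v100"] / base["train_hours_v100"]  # cost multiplier

print(f"metric gain: {metric_gain:.1%}, compute cost: {compute_cost:.1f}x")
```

A roughly 2% gain for roughly 6.7x the compute: the metric rises slowly while the bill (and the footprint) explodes.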

Resources to help fight greenwashing

Use cases


Feb 2023. The Meta AI / FAIR Paris blog post is a gem of greenwashing.

Training smaller foundation models like LLaMA is desirable in the large language model space because it requires far less computing power and resources to test new approaches, validate others’ work, and explore new use cases.

We are making LLaMA available at several sizes (7B, 13B, 33B, and 65B parameters) and also sharing a LLaMA model card that details how we built the model in keeping with our approach to Responsible AI practices.

LLaMA was trained between December 2022 and February 2023, but no communication was made on the Meta AI research blog, on social networks, or on GitHub concerning the record carbon footprint, the environmental impact, or the financing of version 1.

Smaller models trained on more [words] are easier to retrain and fine-tune for specific potential product use cases. We trained LLaMA 65B and LLaMA 33B on 1.4 trillion tokens. Our smallest model, LLaMA 7B, is trained on one trillion tokens….

In reality, we are changing scale. Millions, billions, trillions… Models grew in complexity by a factor of 1,000 in four years.
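To put that factor in perspective, a quick sketch (assuming the text's 1,000x-in-four-years figure and smooth exponential growth) gives the implied doubling time:

```python
import math

# Implied doubling time if model complexity grew 1000x over 4 years
# (growth figure from the text; exponential growth is assumed).
growth_factor, years = 1000, 4
doubling_months = years * 12 * math.log(2) / math.log(growth_factor)
print(f"doubling time ~ {doubling_months:.1f} months")
```

Roughly one doubling every five months, an unsustainable pace for energy and hardware alike.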

To maintain integrity and prevent misuse, we are releasing our model under a noncommercial license focused on research use cases. Access to the model will be granted on a case-by-case basis.

March 8, 2023. The model leaked. "Risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content." (GitHub). What is Facebook's economic model?

B. Tackling Climate Change with Machine Learning.

The original article was published on Arxiv in June 2019, four days after the alert from Emma Strubell and MIT. Written by 22 authors, including Demis Hassabis, CEO of DeepMind, and Yoshua Bengio, a deep learning godfather, it was republished in February 2022 by the Association for Computing Machinery (ACM), the American lobby which awarded the 2019 Turing Award to Yoshua Bengio, Geoffrey Hinton (Google) and Yann LeCun (Facebook).

In October 2019, Laure Delisle and Michel Deudon were laid off from Yoshua Bengio's startup Element AI, "AI for Good", following a series B round with McKinsey. Three years later, Macron's McKinseyGate was disclosed and Manon Gruaz gave a talk in Nantes, CTRL+ALT+Depression, pointing out issues at Element AI.

In December 2022, while Yann LeCun was working on models with billions of parameters, a workshop organized at NeurIPS 2022 included 19 authors from NVIDIA, the world leader in artificial intelligence computing, which benefits from the explosion of model complexity and disinformation.

C. Research and gifts in AI.

The spotlight on deep learning godfathers, but also gifts and corruption, influenced AI research to keep training ever more complex models, to the benefit of GAFAM and NVIDIA. Submitted to Arxiv on May 25, 2019 and accepted at NeurIPS 2019, the article "Are Sixteen Heads Really Better than One?", by an alumnus of École Polytechnique (X13), states in its acknowledgments: "We are also particularly grateful to Thomas Wolf from Hugging Face, whose independent reproduction efforts allowed us to find and correct a bug in our speed comparison experiments. This research was supported in part by a gift from Facebook."

D. ENGIE-Google.

DeepMind applied ML algorithms to predict wind energy. Google announced that this algorithm could predict wind power production thirty-six hours in advance. In June [2022], ENGIE (a French company) was announced as the project's first client on Towards Data Science and Bloomberg.

In June 2022, Total, EDF and ENGIE warned of the threat energy prices pose to cohesion, calling on the French for emergency sobriety in the face of soaring energy prices and to reduce energy consumption "immediately".

In September 2022, Macron called for individual sobriety: "Everyone has a role to play (…) the best energy is the one we do not consume." The objective is "to save 10% of what we currently consume". Press conference at the Élysée. Sobriety plan.

In November 2022, while Patrick Pouyanné, CEO of TotalEnergies, was heard in the National Assembly by the commission investigating the reasons for France's loss of sovereignty and energy independence, ENGIE and Google concluded a 100 MW contract over 12 years to supply Google with more than 5 TWh of "green energy" from the Moray West project, an offshore wind farm of nearly 900 MW scheduled to be commissioned in 2025. Europe accused the US of profiting from the war.
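As an order-of-magnitude check, the contract figures quoted above are mutually consistent. This is a sketch: the capacity factor is our assumption, not a figure from the deal:

```python
# 100 MW contracted over 12 years: how much energy is that?
mw, years = 100, 12
hours = years * 365.25 * 24
max_twh = mw * hours / 1e6          # MWh -> TWh, at 100% utilization
capacity_factor = 0.48              # assumed, plausible for modern offshore wind
expected_twh = max_twh * capacity_factor
print(f"theoretical max: {max_twh:.1f} TWh, at {capacity_factor:.0%}: {expected_twh:.1f} TWh")
```

At a realistic offshore-wind capacity factor, 100 MW over 12 years indeed yields a bit more than 5 TWh, matching the announced figure.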