Identify greenwashing and conflicts of interest
What is the goal of climate skeptics? What are they defending? Why is ChatGPT a source of misinformation? A case study of LLaMA and ChatGPT.
Disinformation is used to divert attention from the ecological emergency
“Who could have predicted the climate crisis?” asked Emmanuel Macron at the end of 2022. Six weeks later, during a record drought in France, the CNRS, the ISCPIF, Citizen4science, Le Monde and the journalist Audrey Garric warned of a rise of climate skepticism on Twitter (bought by Elon Musk in 2022). A large community had been structuring itself since the summer of 2022: more than 10,000 active accounts relay false information and attack the IPCC, with thousands of tweets per day.
The purpose of disinformation as a business is to divert attention and influence decisions: where to invest, what to debate, and so on. Toxic, viral content generates more clicks (a higher click-through rate) and calls for more deeptech…
Thus, the AI releases of fall and winter 2022/2023 (Galactica 120B, ChatGPT, LLaMA 65B, GPT-4), amplified by disinformation, helped divert the attention of CEOs at the World Economic Forum toward investing in military applications rather than in resilience and climate adaptation.
In March 2023, TF1 talked about AI and war, the threat AI poses to jobs, autonomous cars… ecology and youth were never mentioned. TF1’s documentary and article cite Goldman Sachs, one of the largest US banks, and Bellingcat, whose business model is built on disinformation.
In reality, by optimizing models along a single axis (a benchmark metric) without accounting for resources, energy and real costs, AI research has become a race to consume and pollute more. From HighRes-net (2019) to TR-MISR (2022), the metric improved by less than 2%, but complexity went from logarithmic to quadratic, and the time to train a model once grew from 9 h to 60 h on an NVIDIA Tesla V100 card. TR-MISR uses the Transformer, the same architecture found in BERT, ChatGPT and LLaMA. What is needed, therefore, is not more AI in the service of AI (that is the goal of the disinformation!). If tomorrow Twitter were made up only of bots, fake profiles and climate skeptics, what would it be worth?
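The complexity point above can be made concrete with a minimal sketch (my own illustration, using the standard cost model for Transformer self-attention, not code from the cited papers): attention compares every token with every other token, so its cost grows quadratically with sequence length.

```python
def attention_ops(n_tokens, head_dim):
    """Approximate multiply-adds in one self-attention layer:
    n*n*d for the QK^T score matrix, plus n*n*d to apply the
    attention weights to the values."""
    return 2 * n_tokens * n_tokens * head_dim

# Doubling the sequence length quadruples the cost:
base = attention_ops(512, 64)
doubled = attention_ops(1024, 64)
print(doubled // base)  # -> 4
```

This is why scaling Transformer-based models to longer inputs (or stacking more of them) drives compute, and therefore energy use, far faster than the metric improves.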
Resources to help fight greenwashing
- The anti-greenwashing guide from Pour un Réveil Ecologique
- The online anti-greenwashing tool of ADEME
Feb 2023. The Meta AI / FAIR Paris blog post is a gem of greenwashing:
> Training smaller foundation models like LLaMA is desirable in the large language model space because it requires far less computing power and resources to test new approaches, validate others’ work, and explore new use cases.
>
> We are making LLaMA available at several sizes (7B, 13B, 33B, and 65B parameters) and also sharing a LLaMA model card that details how we built the model in keeping with our approach to Responsible AI practices.
>
> Smaller models trained on more [words] are easier to retrain and fine-tune for specific potential product use cases. We trained LLaMA 65B and LLaMA 33B on 1.4 trillion tokens. Our smallest model, LLaMA 7B, is trained on one trillion tokens….
>
> To maintain integrity and prevent misuse, we are releasing our model under a noncommercial license focused on research use cases. Access to the model will be granted on a case-by-case basis.
The original article was published on arXiv in June 2019, four days after the alert by Emma Strubell and MIT. Written by 22 authors, including Demis Hassabis, CEO of DeepMind, and Yoshua Bengio, a godfather of deep learning, it was republished in February 2022 by the Association for Computing Machinery (ACM), the American lobby that awarded the 2018 Turing Award to Yoshua Bengio, Geoffrey Hinton (Google) and Yann LeCun (Facebook).
In December 2022, while Yann LeCun was working on models with billions of parameters, climatechange.ai organized a workshop at NeurIPS 2022 that included 19 authors from NVIDIA, the world leader in AI computing, which benefits from the explosion of model complexity and from disinformation.
C. Research and gifts in AI
The spotlight on the godfathers of deep learning, but also gifts and corruption, influenced AI research toward training ever more complex models, to the benefit of GAFAM and NVIDIA. Submitted to arXiv on May 25, 2019 and accepted at NeurIPS 2019, the article Are Sixteen Heads Really Better than One?, by an alumnus of École Polytechnique (X13), states in its acknowledgments: “We are also particularly grateful to Thomas Wolf from Hugging Face, whose independent reproduction efforts allowed us to find and correct a bug in our speed comparison experiments. This research was supported in part by a gift from Facebook.”
DeepMind applied machine learning algorithms to predict wind energy: Google announced that this algorithm could predict wind power production thirty-six hours in advance. In June, ENGIE (a French company) was announced as the project’s first client on Towards Data Science and Bloomberg.
In November 2022, while Patrick Pouyanné, CEO of TotalEnergies, was being heard in the National Assembly by the commission investigating the reasons for France’s loss of sovereignty and energy independence, ENGIE and Google signed a 100 MW contract over 12 years to supply Google with more than 5 TWh of “green energy” from Moray West, an offshore wind farm of nearly 900 MW scheduled to be commissioned in 2025. Meanwhile, Europe accused the US of profiting from the war.
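As a rough sanity check on the figures above (a 100 MW contract over 12 years delivering more than 5 TWh), the implied capacity factor can be computed directly. The resulting ~48% is my own back-of-the-envelope estimate, plausible for offshore wind, and not a number from the article.

```python
HOURS_PER_YEAR = 365.25 * 24   # ~8766 h
contract_mw = 100
years = 12
delivered_twh = 5.0            # "more than 5 TWh" over the contract

# Energy ceiling at 100% uptime, in TWh (1 TWh = 1e6 MWh)
max_twh = contract_mw * HOURS_PER_YEAR * years / 1e6
capacity_factor = delivered_twh / max_twh
print(f"ceiling: {max_twh:.1f} TWh, implied capacity factor: {capacity_factor:.0%}")
# -> ceiling: 10.5 TWh, implied capacity factor: 48%
```

In other words, 5 TWh over 12 years corresponds to the 100 MW contract running at roughly half of full-time output, consistent with typical offshore wind capacity factors.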