More About Artificial Ignorance
Guest Post by Willis Eschenbach
My previous post, Artificial Alarmism, has drawn some comments from folks who think I'm wrong and that Large Language Models can indeed automate the fact-checking of scientific claims. This was an interesting comment, and my thanks to the author:
The kind of AI applications I am talking about do not reject things. They summarize them, including the debates. For that matter, I recently got ChatGPT to correctly explain how Happer disagrees with alarmism. Nothing was rejected.
The math I refer to is that used to do science. Almost all published science uses math, so it is universal. In the article I reference I use Monte Carlo as an example. There is an advance in the Monte Carlo method, published in a forest management journal, that needs to get to all the other fields that use that method, which are legion.
For that matter, you know how Google now suggests related and refined searches. That is AI, and it works well.
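(As an aside, for readers who haven't run across the Monte Carlo method the commenter cites: it simply replaces an exact calculation with the average of many random samples. The sketch below is a toy example in Python, not anything from the forestry paper the commenter mentions; it estimates pi by counting how many random points in the unit square land inside a quarter circle.)

```python
import random

def estimate_pi(n_samples: int = 100_000, seed: int = 42) -> float:
    """Estimate pi by uniform random sampling of the unit square."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        # Count points falling inside the quarter circle of radius 1.
        if x * x + y * y <= 1.0:
            inside += 1
    # The fraction inside approximates pi/4, so scale by 4.
    return 4.0 * inside / n_samples

if __name__ == "__main__":
    print(estimate_pi())  # roughly 3.14; error shrinks as about 1/sqrt(n_samples)
```

The estimate converges slowly, with the error shrinking roughly as one over the square root of the number of samples, which is why improvements to the method typically focus on getting more accuracy out of fewer samples.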
In response, being a scientist and all, I decided that experiment is much better than theory. So I went to ChatGPT, and it turned out to be quite funny, for a reason I'll explain at the end. First, the Q&A, emphasis mine: