Tackling AI’s honesty problem
While there has been much excitement about the potential of large language models (LLMs) to automate tasks that previously required human intelligence or creativity, many early projects have failed because of LLMs’ innate tendency to fabricate plausible but false output. The presentation explores the nature, causes and consequences of this “hallucination” problem and proposes a solution.
Combining generative AI with more traditional symbolic AI makes it possible to maintain reliability, improve explainability and inject private knowledge and data. The talk shows simple examples of combining language-based thinking with computational thinking to generate solutions that neither could achieve on its own.
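A minimal sketch of this hybrid pattern, under the assumption that the generative side proposes an answer while a deterministic symbolic layer re-derives and checks it (the function names and the toy arithmetic domain are illustrative, not from the talk):

```python
import ast
import operator

# Supported binary operators for the symbolic (rule-based) evaluator.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def symbolic_eval(expr: str) -> float:
    """Deterministically evaluate an arithmetic expression by walking its AST."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported construct")
    return walk(ast.parse(expr, mode="eval"))

def check_claim(expression: str, claimed_value: float) -> bool:
    """Accept a generative model's claimed value only if the symbolic
    computation agrees; otherwise the claim is flagged as a hallucination."""
    return abs(symbolic_eval(expression) - claimed_value) < 1e-9

# A correct claim passes; a plausible-but-wrong one is caught.
print(check_claim("17 * 24", 408))
print(check_claim("17 * 24", 418))
```

The division of labour is the point: the language side is free to propose, but nothing reaches the user until the computational side has verified it.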
The talk concludes with an example application, an AI scientific research assistant, that brings the ideas presented together in a demanding real-world task where false information is unacceptable.