About the Seminar
Large language models (LLMs) have demonstrated strong performance across a range of reasoning tasks, yet they continue to struggle to reliably distinguish causation from correlation, particularly in mathematically grounded settings. In this talk we will explore how different strategies (including decoding strategies) can be refined to improve causal reasoning, moving beyond the limitations of standard next-token prediction.
We will also discuss how humans reason, as a lens for assessing the capabilities of models. Specifically, we will investigate a framework that enhances causal inference by introducing structured signals during decoding, enabling models to better isolate relevant factors in complex reasoning scenarios. While structured representations (such as knowledge graphs) can be coupled with LLMs to mitigate some issues, e.g. hallucinations, they are not a complete solution; instead, we can take inspiration from how humans reason (e.g. by studying solutions to Olympiad mathematics problems). We will also discuss recent results suggesting that simply eliciting longer reasoning traces does not guarantee improved causal understanding, and that more advanced strategies are required.
About the Speaker
Liubov works in the area of complex networks and complex systems. Her research spans the intersection of mathematics, physics, and computer science, often in collaboration with researchers across these fields. She currently holds a position at the Artificial Intelligence Lab in Saclay, Paris, where she works on topics ranging from reasoning to networks and network representations of knowledge.
Practical
The seminar will take place online via Zoom.