Abstract
Large language models (LLMs) have caused a veritable revolution in the field of AI. However, LLMs come with considerable caveats, including a lack of logical reasoning ability, which can make it challenging to use them in environments where they must give reliably correct answers. Recently, attempts have been made to alleviate this concern by using an LLM to generate a more transparent formal representation of the problem, rather than solving the problem directly with the LLM (so-called autoformalisation). Among other formalisms, answer set programs (ASP) have been tried as a problem-solving intermediary in this context. However, current attempts at autoformalisation of answer set programs have been limited to toy examples or single, simple rules. In this work, we investigate the capabilities of LLMs in generating ASP programs that solve real-world scheduling problems, and identify techniques such as few-shot learning and chain-of-thought prompting as particularly successful.
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 23rd International Workshop on Non-Monotonic Reasoning (NMR 2025) co-located with the 22nd International Conference on Principles of Knowledge Representation and Reasoning (KR 2025), Melbourne, Australia, November 11-13, 2025 |
| Editors | Anna Rapberger, Sebastian Rudolph |
| Publisher | CEUR-WS.org |
| Pages | 157-168 |
| Number of pages | 12 |
| Volume | 4071 |
| Publication status | Published - 2025 |
| Event | 23rd International Workshop on Non-Monotonic Reasoning - Melbourne, Australia. Duration: 11 Nov 2025 → 13 Nov 2025. https://nmr.krportal.org/2025/ |
Publication series
| Series | CEUR Workshop Proceedings |
|---|---|
| ISSN | 1613-0073 |
Conference
| Conference | 23rd International Workshop on Non-Monotonic Reasoning |
|---|---|
| Abbreviated title | NMR 2025 |
| Country/Territory | Australia |
| City | Melbourne |
| Period | 11/11/25 → 13/11/25 |
| Internet address | https://nmr.krportal.org/2025/ |
Fingerprint
Dive into the research topics of 'Autoformalisation Answer Set Programs for Scheduling Problems using Few-Shot Learning and Chain-of-Thought'.