Symbol grounding for generative AI: lessons learned from interpretive ABM

Martin Neumann, Vanessa Dirksen

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

This perspective article argues that not only do the humanities benefit from and become transformed by recent AI developments, but AI may also benefit from the humanities. We demonstrate this with regard to the symbol grounding problem in AI by considering that meaning is not the outcome of a two-way relation between an object and a brain (or an AI), but of the negotiation of meaning in the triadic relation between objects, symbols, and human practices. This view is common in the interpretive social research tradition of the humanities. We argue that AI benefits from embedding generative methods in interpretive social research methodologies, and we illustrate this with the recently developed methodology of interpretive agent-based simulation (iABM). This methodology enables the generation of counterfactual narratives that are anchored in ethnographic evidence and hermeneutically interpreted, producing symbolically grounded and plausible futures. The criteria for plausibility correspond to contemporary guidelines for assessing trustworthy AI, namely human agency and oversight, transparency, and auditability.
Original language: English
Article number: 1508004
Number of pages: 6
Journal: Frontiers in Computer Science
Volume: 7
Publication status: Published - 21 Mar 2025

Keywords

  • Generative methods
  • Interpretive agent-based modelling (ABM)
  • Interpretive social research
  • Symbol grounding
  • Transparent AI
