Abstract
This perspective article argues that not only do the humanities benefit from and become transformed by recent AI developments, but AI might also benefit from the humanities. We demonstrate this with regard to the symbol grounding problem in AI: meaning is not the outcome of a two-way relation between an object and a brain (or an AI), but of a negotiation in the triadic relation between objects, symbols, and human practices, as is common in the interpretive social research tradition of the humanities. We argue that AI benefits from embedding generative methods in interpretive social research methodologies. This is illustrated by the recently developed methodology of interpretive agent-based simulation (iABM), which enables the generation of counterfactual narratives that are anchored in ethnographic evidence and hermeneutically interpreted, producing symbolically grounded and plausible futures. The criteria for plausibility correspond to contemporary guidelines for assessing trustworthy AI, namely human agency and oversight, transparency, and auditability.
| Original language | English |
| --- | --- |
| Article number | 1508004 |
| Number of pages | 6 |
| Journal | Frontiers in Computer Science |
| Volume | 7 |
| DOIs | |
| Publication status | Published - 21 Mar 2025 |
Keywords
- Generative methods
- Interpretive agent-based modelling (ABM)
- Interpretive social research
- Symbol grounding
- Transparent AI