This page is for a course I taught at the Barcelona International Summer School (July 2024).
Course description
ChatGPT frequently produces false information: output that appears plausible but is not factual. This is known as 'hallucination'. The reason is that large language models (LLMs) are trained to predict strings of words, rather than to serve as repositories of 'facts'. Crucially, an AI does not "know" whether its output is truthful. Nevertheless, AI tools are increasingly used to provide "information" in professional and private settings. Why are we inclined to rely on this unreliable source? In this course we explore this question from a linguistic angle. We compare the logic and architecture behind LLMs (which underlie AI tools) with the logic and architecture behind human cognition (including the capacity for language). At the root of our "trust" in AI tools is their apparently flawless language output, which invites anthropomorphization, which in turn leads users to expect that these tools follow the same conversational principles as humans do. In this course, we explore several aspects of human language that contribute to our inclination to take AI-generated output at face value: i) the fact that meaning in language is based on truth-conditions; ii) the fact that humans mark uncertainty with linguistic means that are conspicuously absent in AI-generated text; iii) the fact that human communication is based on the cooperative principle (according to which we assume that our interlocutors are reliable). As limiting cases, we will compare the virtual hallucinations of AI tools with pathological hallucinations in schizophrenia, as well as with language that does not rely on truth (poetry).
Course content
Full course materials will be published here soon.
Day 1: Introduction and exemplification of the problem: hallucinations in AI. An overview of the linguistic capacities of humans vs. AI: human cognition vs. large language models.
Reading: Zanotti et al. (2023)
Day 2: An introduction to meaning in human language (part 1): truth-conditions and compositionality. Testing the limits: poetry, lies, and pathological hallucinations.
Readings: Munn et al. (2023); Parnas et al. (2023)
Day 3: An introduction to meaning in human language (part 2): the expressive power of human language. How do we talk about (un)certainty?
Reading: Wiltschko (2022)
Day 4: An introduction to the use of language: cooperation in linguistic interaction: humans vs. machines.
Reading: Dombi et al. (2022)
Day 5: Conclusions. Presentations of project results. Lessons in critical thinking and information consumption.
Required Readings:
Dombi, J., Sydorenko, T. & Timpe-Laughlin, V. (2022). Common ground, cooperation, and recipient design in human–computer interactions. Journal of Pragmatics 193: 4–20. https://doi.org/10.1016/j.pragma.2022.03.001

Munn, L., Magee, L. & Arora, V. (2023). Truth machines: synthesizing veracity in AI language models. AI & Society. https://doi.org/10.1007/s00146-023-01756-4

Parnas, J., Yttri, J.-E. & Urfer-Parnas, A. (2023). Phenomenology of auditory verbal hallucination in schizophrenia: An erroneous perception or something else? Schizophrenia Research (online first). https://doi.org/10.1016/j.schres.2023.03.045

Wiltschko, M. (2022). "Language is for thought and communication". Glossa 7(1). https://doi.org/10.16995/glossa.5786

Zanotti, G., Petrolo, M., Chiffi, D. et al. (2023). Keep trusting! A plea for the notion of Trustworthy AI. AI & Society. https://doi.org/10.1007/s00146-023-01789-9