This page is for a course I taught at the Barcelona International Summer School (July 2024).

Course description

ChatGPT frequently produces false information: output that appears plausible but is not factual. Such output is known as a ‘hallucination’. The reason is that large language models (LLMs) are trained to predict strings of words, rather than to serve as repositories of ‘facts’. Crucially, an AI does not “know” whether its output is truthful. Nevertheless, AI tools are increasingly used to provide “information” in professional and private settings. Why are we inclined to rely on this unreliable source?

In this course we explore this question from a linguistic angle. We compare the logic and architecture behind LLMs (which underlie AI tools) with the logic and architecture behind human cognition (including the capacity for language). At the root of our “trust” in AI tools is their apparently flawless language output, which can lead to anthropomorphization, which in turn leads users to expect that an AI follows the same conversational principles as humans do. We explore several aspects of human language that contribute to our inclination to take AI-generated output at face value: i) the fact that meaning in language is based on truth-conditions; ii) the fact that humans mark uncertainty with linguistic means that are conspicuously absent in AI-generated text; iii) the fact that human communication is based on the cooperative principle (according to which we assume that our interlocutors are reliable). As limiting cases, we will compare the virtual hallucinations of AI tools with pathological hallucinations in schizophrenia, as well as with language that does not rely on truth (poetry).

Course content

Full course materials will be published here soon.

Day 1
Introduction and exemplification of the problem: hallucinations in AI
An overview of the linguistic capacities of humans vs. AI: human cognition vs. large language models
Reading: Zanotti et al. (2023)

Day 2
An introduction to meaning in human language (part 1): truth-conditions and compositionality
Testing the limits: poetry, lies, and pathological hallucinations
Readings: Munn et al. (2023); Parnas et al. (2023)

Day 3
An introduction to meaning in human language (part 2): the expressive power of human language, or how do we talk about (un)certainty?
Reading: Wiltschko (2022)

Day 4
An introduction to the use of language: cooperation in linguistic interaction, humans vs. machines
Reading: Dombi et al. (2022)

Day 5
Conclusions
Presentations of project results
Lessons in critical thinking and information consumption
Required readings:

Dombi, J., Sydorenko, T. & Timpe-Laughlin, V. (2022). Common ground, cooperation, and recipient design in human-computer interactions. Journal of Pragmatics 193: 4–20. https://doi.org/10.1016/j.pragma.2022.03.001

Munn, L., Magee, L. & Arora, V. (2023). Truth machines: synthesizing veracity in AI language models. AI & Society. https://doi.org/10.1007/s00146-023-01756-4

Parnas, J., Yttri, J.-E. & Urfer-Parnas, A. (2023). Phenomenology of auditory verbal hallucination in schizophrenia: An erroneous perception or something else? Schizophrenia Research (online first). https://doi.org/10.1016/j.schres.2023.03.045

Wiltschko, M. (2022). “Language is for thought and communication”. Glossa 7(1). https://doi.org/10.16995/glossa.5786

Zanotti, G., Petrolo, M., Chiffi, D. et al. (2023). Keep trusting! A plea for the notion of Trustworthy AI. AI & Society. https://doi.org/10.1007/s00146-023-01789-9