about
I'm an atypical philosopher, one who spends more time with engineers and machine learning models than with ancient texts. My work examines how cultural values and social norms shape the behavior of AI systems, particularly large language models and conversational agents. I hold a PhD in Philosophy from Sorbonne Université - CNRS, where I explored the ethics of conversational AI at the intersection of moral philosophy and machine learning.
Currently, I'm part of the science team at Mistral AI, focusing on model behavior and the challenges of cultural and multilingual alignment. Before this, I spent nearly four years as Principal Ethicist at Hugging Face, leading interdisciplinary research on AI ethics in open source environments.
research
I study how AI systems are designed, aligned, and evaluated as they transition from simple tools to entities that occupy social and relational roles, including new forms of AI companionship. Since 2017, I've treated conversational AI as a sociotechnical system, examining how it shapes social relations at different scales: from individual interactions to collective and population-level patterns. My research spans human-AI interaction, AI evaluation, governance, safety, and ethics, connecting theoretical questions to the real-world challenges of building and deploying AI systems.
projects
- Associate Researcher at the Sciences, Normes, Démocraties lab at Sorbonne Université and the Centre National de la Recherche Scientifique (CNRS).
- Research Affiliate at the Machine Intelligence and Normative Theory lab.
- Co-chair of the Ethical and Legal Scholarship working group of the BigScience open science project, which developed and deployed the multilingual large language model BLOOM.
- Co-editor with Michel Puech of the special issue "Technology and Constructive Critical Thought" for the peer-reviewed journal Giornale di Filosofia.
- Co-editor with Julien De Sanctis of the special issue "For an Ethics of the Human-Machine Interaction" for the peer-reviewed journal Implications Philosophiques.
media
My findings have been featured in international media including Nature, The New York Times, The Washington Post, MIT Technology Review, Wired, Bloomberg, Business Insider, and TechCrunch, as well as European outlets such as Le Figaro, La Repubblica, and Wired Italia.
I have written three op-eds in English for Tech Policy Press: on why debates about AI consciousness distract from more urgent issues, on Popper's paradox of tolerance and content moderation, and on what AI can learn from social media's mistakes before companies exploit our conversations. I've also written for Wired Italia on how systems like ChatGPT are becoming the new mirrors of loneliness.
I regularly participate in public debates through television, podcasts, and radio, with appearances on BFMTV, TF1, RAI, and France24, as well as programs on France Culture.
publications
- Kaffee, L. A., Pistilli, G., & Jernite, Y. (2025). INTIMA: A Benchmark for Human-AI Companionship Behavior. arXiv. https://arxiv.org/abs/2508.09998
- Mitchell, M., Attanasio, G., Baldini, I., Clinciu, M., Clive, J., Delobelle, P., Dey, M., Hamilton, S., Dill, T., Doughman, J., Dutt, R., Ghosh, A., Forde, J., Holtermann, C., Kaffee, L., Laud, T., Lauscher, A., Lopez-Davila, R., Masoud, M., Nangia, N., Ovalle, A., Pistilli, G., et al. (2025). SHADES: Towards a Multilingual Assessment of Stereotypes in Large Language Models. Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics (NAACL). https://aclanthology.org/2025.naacl-long.600/
- Palminteri, S. & Pistilli, G. (2025). The Cognitive Biases Behind Inflationary and Deflationary Claims about Large Language Models. PsyArXiv. https://osf.io/preprints/psyarxiv/26tyu_v2
- Mitchell, M., Ghosh, A., Luccioni, S., & Pistilli, G. (2025). Fully Autonomous AI Agents Should Not be Developed. arXiv. https://arxiv.org/abs/2502.02649
- Luccioni, S., Pistilli, G., Sefala, R., & Moorosi, N. (2025). Bridging the Gap: Integrating Ethics and Environmental Sustainability in AI Research and Practice. arXiv. https://arxiv.org/abs/2504.00797
- Nannini, L., Huyskes, D., Panai, E., Pistilli, G., & Tartaro, A. (2025). Nullius in Explanans: an Ethical Risk Assessment for Explainable AI. Ethics and Information Technology. https://doi.org/10.1007/s10676-024-09800-7
- Pistilli, G., Leidinger, A., Jernite, Y., Kasirzadeh, A., Luccioni, A. S., & Mitchell, M. (2024). CIVICS: Building a Dataset for Examining Culturally-Informed Values in Large Language Models. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES). https://doi.org/10.1609/aies.v7i1.31710
- Rocca, R., Pistilli, G., Maheshwari, K., & Fusaroli, R. (2024). Introducing ELLIPS: An Ethics-Centered Approach to Research on LLM-Based Inference of Psychiatric Conditions. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES). https://doi.org/10.1609/aies.v7i1.31720
- Constantinides, M., Tahaei, M., Quercia, D., Stumpf, S., Madaio, M., Kennedy, S., Wilcox, L., Vitak, J., Cramer, H., Bogucka, E. P., Baeza-Yates, R., Luger, E., Holbrook, J., Muller, M., Blumenfeld, I. G., & Pistilli, G. (2024). Implications of Regulations on the Use of AI and Generative AI for Human-Centered Responsible Artificial Intelligence. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA '24). https://doi.org/10.1145/3613905.3643979
- Pistilli, G. (2024). The Moral Landscape of General-Purpose Large Language Models. Human-Centered AI: A Multidisciplinary Perspective for Policy-Makers, Auditors, and Users. https://doi.org/10.1201/9781003320791
- Pistilli, G. (2024). For an ethics of conversational Artificial Intelligence. PhD Thesis at Sorbonne Université. https://theses.hal.science/tel-04627154
- Pistilli, G., Muñoz Ferrandis, C., Jernite, Y., & Mitchell, M. (2023). Stronger Together: On the Articulation of Ethical Charters, Legal Tools, and Technical Documentation in ML. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT '23). https://doi.org/10.1145/3593013.3594002
- Tenzer, M., Pistilli, G., Brandsen, A., & Shenfield, A. (2023). Debating AI in archaeology: applications, implications, and ethical considerations. Internet Archaeology. https://doi.org/10.31235/osf.io/r2j7h
- Laurençon, H., Saulnier, L., Wang, T., Akiki, C., Villanova del Moral, A., Le Scao, T., von Werra, L., Mou, C., Gonzalez Ponferrada, E., Nguyen, H., Frohberg, J., Sasko, M., Lhoest, Q., McMillan-Major, A., Dupont, G., Biderman, S., Rogers, A., Ben Allal, L., De Toni, F., Pistilli, G., et al. (2022). The BigScience ROOTS Corpus: A 1.6TB Composite Multilingual Dataset. Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (NeurIPS).
- Le Scao, T., Fan, A., Akiki, C., Pavlick, E. J., Ilic, S., Hesslow, D., Castagne, R., Luccioni, A. S., Yvon, F., Galle, M., Tow, J., Rush, A. M., Biderman, S. R., Webson, A., Ammanamanchi, P. S., Wang, T., Sagot, B., Muennighoff, N., Villanova del Moral, A., Ruwase, O., Bawden, R., Bekman, S., McMillan-Major, A., Beltagy, I., Nguyen, H., Saulnier, L., Tan, S., Ortiz Suarez, P., Sanh, V., Laurençon, H., Jernite, Y., Launay, J., Mitchell, M., Raffel, C., Gokaslan, A., Simhi, A., Etxabe, A. S., Fikri Aji, A., Alfassy, A., Rogers, A., Kreisberg Nitzav, A., Xu, C., Mou, C., Emezue, C. C., Klamm, C., Leong, C., van Strien, D. A., Ifeoluwa Adelani, D., Radev, D., Ponferrada, E. G., Levkovizh, E., Kim, E., Natan, E. B., De Toni, F., Dupont, G., Kruszewski, G., Pistilli, G., et al. (2022). BLOOM: A 176B-Parameter Open-Access Multilingual Language Model. arXiv.
- Akiki, C., Pistilli, G., Mieskes, M., Gallé, M., Wolf, T., Ilic, S., & Jernite, Y. (2022). BigScience: A Case Study in the Social Construction of a Multilingual Large Language Model. Proceedings of the NeurIPS 2022 Workshop WBRC.
- Pistilli, G. & Puech, M. (2022). Special issue, vol. 2: Technology and Constructive Critical Thought. Giornale di Filosofia.
- Pistilli, G. (2022). La logique algorithmique confrontée à l'organisation de l'administration publique française. Giornale di Filosofia.
- Johnson, R. L., Pistilli, G., Menéndez-González, N., Dias Duran, L. D., Panai, E., Kalpokiene, J., & Bertulfo, D. J. (2022). The Ghost in the Machine has an American accent: value conflict in GPT-3. arXiv. https://doi.org/10.48550/arXiv.2203.07785
- Pistilli, G. (2022). What lies behind AGI: ethical concerns related to LLMs. Éthique et Numérique. https://hal.archives-ouvertes.fr/hal-03607808