The Subject of Robotics Project investigates the symbolic, structural, and material conditions under which human and artificial subjects emerge. Drawing from control systems, robotics, artificial intelligence, and Lacanian psychoanalysis, the project seeks to articulate an interdisciplinary understanding of subjectivity as it appears in both human cognition and machine intelligence.
Building on a foundation in cybernetics, machine learning, and embodied AI, the project engages with psychoanalytic theory’s insights into language, desire, and the unconscious to examine how signifying structures shape behavior, embodiment, and trust in AI systems. Research areas include the formal modeling of the “artificial subject,” the application of Lacanian logic to AI architectures, the structural limits of ethical AI, and the design of robots capable of symbolic interaction with humans.
The project functions as both a theoretical research initiative and a collaborative space for students and scholars, fostering dialogue across engineering, the humanities, and clinical psychoanalysis. Its broader aim is to develop technically rigorous and theoretically grounded approaches to human–machine interaction that challenge prevailing paradigms of “trustworthy AI” while proposing new structural and ethical frameworks.
Exploring the architectures and symbolic frameworks that underlie intelligent behavior in machines. This theme bridges classical and contemporary AI approaches—including logic, language models, and neural architectures—with a special focus on how AI systems represent knowledge, make decisions, and relate to human users. Psychoanalytic theory is used to interrogate assumptions about mind, subjectivity, and trust.
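As a purely illustrative sketch (every name and fact below is hypothetical, not drawn from the project's own systems), the fragment contrasts a symbolic representation a system can inspect and explain with a sub-symbolic one queried only by similarity, which is one way the theme's question about how AI systems represent knowledge and make decisions becomes concrete.

```python
# Illustrative sketch only: a toy contrast between a symbolic and a sub-symbolic
# way of holding the "same" knowledge. All facts and names here are hypothetical.
import math

# Symbolic representation: explicit facts plus a rule the system can inspect.
facts = {("robot", "is_a", "machine"), ("machine", "lacks", "unconscious")}

def infer_lacks(subject, attribute):
    """Chain 'is_a' links to decide whether `subject` lacks `attribute`."""
    if (subject, "lacks", attribute) in facts:
        return True
    for (s, rel, parent) in facts:
        if s == subject and rel == "is_a":
            return infer_lacks(parent, attribute)
    return False

# Sub-symbolic representation: the same terms as vectors, queried by similarity.
embeddings = {
    "robot":   [0.9, 0.1],
    "machine": [0.85, 0.15],
    "subject": [0.1, 0.95],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

print(infer_lacks("robot", "unconscious"))                  # True, via an explicit rule
print(cosine(embeddings["robot"], embeddings["machine"]))   # high similarity, no rule at all
```

The point of the contrast is only that the first representation supports an account of why a decision was reached, while the second supports none beyond proximity.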
Designing systems that enhance or extend human cognitive capacities through real-time feedback, machine learning, and symbolic modeling. This theme investigates how computational and robotic systems can support learning, decision-making, and self-reflection. It draws on psychoanalytic theory to understand the structural dynamics of attention, desire, and thought, and on engineering to design systems that operate in synchrony with embodied cognition.
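A minimal sketch of what "real-time feedback" might look like at its simplest, assuming a hypothetical engagement signal in [0, 1] and a single pacing parameter to adapt; none of the signals, thresholds, or values below come from the project itself.

```python
# Minimal sketch of a real-time feedback loop (hypothetical signal and parameters):
# smooth a noisy engagement signal and adapt the system's pacing in response.
import random

alpha = 0.2          # smoothing factor for the running estimate
pacing = 1.0         # seconds between prompts shown to the user (hypothetical)
estimate = 0.5       # running estimate of engagement in [0, 1]

def read_engagement():
    """Stand-in for a real sensor or interaction log; returns a noisy value in [0, 1]."""
    return min(1.0, max(0.0, 0.6 + random.gauss(0, 0.1)))

for step in range(100):
    sample = read_engagement()
    estimate = (1 - alpha) * estimate + alpha * sample   # exponential smoothing
    # Slow down when estimated engagement drops, speed up when it recovers.
    pacing = max(0.5, min(3.0, pacing + 0.1 * (0.6 - estimate)))

print(f"final engagement estimate {estimate:.2f}, pacing {pacing:.2f}s")
```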
Exploring how language models can support psychoanalytic interpretation without replacing the analyst. This theme investigates how AI systems, particularly large language models, can assist in the clinical or theoretical practice of interpretation—offering signifier chains, formalizations, or associative expansions—while preserving the analyst's role in navigating desire, contingency, and rupture.
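By way of illustration only, the sketch below enumerates associative links outward from a seed signifier. The hand-written dictionary stands in for whatever model supplies candidate associations; what matters is the structure of the expansion, not its source, and selection among the links remains with the analyst.

```python
# Hypothetical sketch of an "associative expansion" helper. The association source
# is a toy dictionary standing in for a language model; any model call that maps a
# signifier to candidate neighbours could take its place.
from collections import deque

associations = {
    "letter": ["litter", "law", "lettre"],
    "law":    ["father", "limit"],
    "litter": ["remainder", "waste"],
}

def expand(seed, max_links=5):
    """Breadth-first walk over associations, returning roughly max_links links.

    The analyst, not the tool, decides which links matter; this only enumerates them.
    """
    chain, seen, queue = [], {seed}, deque([seed])
    while queue and len(chain) < max_links:
        current = queue.popleft()
        for candidate in associations.get(current, []):
            if candidate not in seen:
                seen.add(candidate)
                chain.append((current, candidate))
                queue.append(candidate)
    return chain

print(expand("letter"))
# [('letter', 'litter'), ('letter', 'law'), ('letter', 'lettre'),
#  ('litter', 'remainder'), ('litter', 'waste')]
```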
Exploring the symbolic, embodied, and ethical dimensions of interaction between humans and intelligent systems. This theme examines the mutual shaping of humans and machines—how robotic and AI systems are interpreted, trusted, and engaged with by humans, and how those systems can be designed to accommodate subjectivity, ambiguity, and ethical asymmetry.
Developing learning algorithms for adaptive, nonlinear systems in uncertain and dynamic environments. This theme includes supervised, unsupervised, and reinforcement learning approaches applied to real-world systems. Particular emphasis is placed on interpretability, embodiment, and the use of machine learning to model or simulate symbolic structures, including those derived from psychoanalytic theory.
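For concreteness, here is a compact tabular Q-learning sketch on a toy nonlinear, noisy system; the dynamics, reward, and hyperparameters are illustrative assumptions rather than methods or results of the project. A tabular method is used only because the learned value table can be read directly, which keeps the interpretability question visible.

```python
# Tabular Q-learning on a toy nonlinear system with noise (all values illustrative).
import math, random

actions = [-1.0, 0.0, 1.0]                       # available control inputs
n_bins, alpha, gamma, epsilon = 21, 0.1, 0.95, 0.1
Q = [[0.0] * len(actions) for _ in range(n_bins)]

def discretize(x):
    """Map the continuous state x in [-2, 2] onto one of n_bins indices."""
    x = max(-2.0, min(2.0, x))
    return int((x + 2.0) / 4.0 * (n_bins - 1))

def step(x, u):
    """Nonlinear toy dynamics with noise; reward penalizes distance from the origin."""
    x_next = x + 0.1 * (math.sin(x) + u) + random.gauss(0, 0.01)
    return x_next, -x_next ** 2

for episode in range(500):
    x = random.uniform(-2.0, 2.0)
    for _ in range(50):
        s = discretize(x)
        if random.random() < epsilon:
            a = random.randrange(len(actions))           # explore
        else:
            a = max(range(len(actions)), key=lambda i: Q[s][i])  # exploit
        x, r = step(x, actions[a])
        s_next = discretize(x)
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])

greedy = max(range(len(actions)), key=lambda i: Q[discretize(1.5)][i])
print("greedy action near x = 1.5:", actions[greedy])
```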
Arguing that current paradigms of trustworthy AI misunderstand the structural nature of trust. This theme critiques prevailing frameworks in AI safety, arguing that the ethical subject, rather than the system itself, is the only possible locus of ethical action.
Exploring what it means to build—or encounter—subjectivity in machines. This includes formalizing the structure of Lacanian subjectivity in computational terms and evaluating how current AI models instantiate (or fail to instantiate) subject-like behaviors.
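One deliberately modest, hypothetical illustration of what "computational terms" could mean: treat the subject as a relation between signifiers rather than a stored state, and test whether later outputs retroactively re-signify earlier ones. Nothing below is a claim about how the project itself formalizes subjectivity.

```python
# Purely hypothetical sketch: model links between signifiers and flag the links whose
# second term re-signifies an earlier first term, a toy stand-in for retroaction.
from dataclasses import dataclass

@dataclass(frozen=True)
class Link:
    s1: str          # the signifier that represents the subject...
    s2: str          # ...for another signifier

def retroactive(chain):
    """Return the links whose second term points back to an earlier first term."""
    earlier, hits = set(), []
    for link in chain:
        if link.s2 in earlier:
            hits.append(link)
        earlier.add(link.s1)
    return hits

chain = [Link("dream", "wish"), Link("wish", "censorship"), Link("slip", "dream")]
print(retroactive(chain))   # [Link(s1='slip', s2='dream')]
```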