Mutual Minds

Adaptive Cognition in Intelligent Systems

How do human minds and intelligent systems learn to adapt to each other? From millisecond sensorimotor adjustments to the long-term design of environments that respect human cognitive limits, Mutual Minds investigates the dynamics of adaptive cognition across timescales, domains, and modalities.

Mutual Minds is a research initiative at the Human-IST Institute, University of Fribourg, investigating how human operators and intelligent systems develop shared adaptive behaviour. As operating environments become increasingly autonomous and predictive, the programme addresses a fundamental question: how do we ensure that the cognitive coupling between operator and system remains stable, productive, and aligned with human capabilities?

The initiative is organised around an operator–environment interaction loop mediated by adaptive interfaces and studied across multiple timescales. Two complementary research platforms — one focused on embodied sensorimotor behaviour, the other on communicative and social regulation — operationalise this framework, supported by a specialised master’s programme that trains the next generation of human-centred system designers.

Core Research Questions

Five interconnected questions structure the Mutual Minds research agenda, spanning from real-time regulation to long-term system design:

  1. Perception–Action Regulation. How do operators regulate perception and action in closed-loop interaction with intelligent systems?
  2. Signal-Level Indicators. How do measurable signals reveal adaptation, instability, or loss of agency during operator–environment interaction?
  3. Evolving Internal Models. How do internal models evolve through interaction with increasingly predictive and autonomous intelligent systems?
  4. Cross-Timescale Dynamics. How do breakdowns and learning processes unfold across timescales in operator–environment interaction?
  5. Adaptive Environment Design. How can adaptive operating environments be designed to remain aligned with human cognitive constraints?
Figure 1. The Mutual Minds operator–environment interaction framework and five core research questions.

Two Platforms, Same Questions

Figure 2. MindCraft and Harbour: two complementary platforms addressing the same research questions through different modalities.

MINDCRAFT

Embodied Interaction & Sensorimotor Regulation

High-fidelity simulation environments (driving, flight) instrumented with multimodal physiological recording — eye-tracking, EEG, electrodermal activity, cardiac signals, inertial data, and motor telemetry — synchronised at high temporal resolution. Investigates how operators control and adapt their actions in closed-loop interaction with complex systems. Quantifies cognitive load, uncertainty, and skill acquisition under dynamic conditions.

Domains: Mobility, aviation, manufacturing, extended reality.

HARBOUR

Communication, Coordination & Social Regulation

Conversational training platform where users engage in structured dialogue with adaptive AI agents. Forensic-grade stereo recording with native speaker separation captures interaction dynamics for fine-grained analysis. Investigates dialogue as a regulatory process: turn-taking, prosody, and coordination across speakers and agents. Tracks how communicative repertoire evolves over time and affects autonomy and learning.

Domains: Education, teamwork, accessibility, neurodivergent support.

Active Projects

  • MindCraft Platform

U+ Behavioural Tokenisation Framework

    The overarching methodological framework of MindCraft. U+ converts continuous multimodal signals (gaze, motor, neural, physiological) into discrete behavioural tokens, extracts Perception-Action Cycles (PACs), and builds toward Active Inference world models. U+ provides the shared computational language across all MindCraft domain applications.
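The core idea of signal-to-token conversion can be illustrated with a minimal sketch. The function name and quantile-binning scheme below are illustrative assumptions, not the actual U+ implementation:

```python
# Minimal sketch of behavioural tokenisation: mapping a continuous
# signal to discrete symbolic tokens. Quantile binning is one simple
# choice; the real U+ pipeline is not specified here.
import numpy as np

def tokenise(signal, n_tokens=4):
    """Map a continuous 1-D signal to integer tokens via quantile bins."""
    edges = np.quantile(signal, np.linspace(0, 1, n_tokens + 1)[1:-1])
    return np.digitize(signal, edges)  # one token index per sample

rng = np.random.default_rng(0)
gaze_speed = rng.gamma(shape=2.0, scale=1.0, size=1000)  # toy gaze-speed trace
tokens = tokenise(gaze_speed)  # integer sequence in {0, 1, 2, 3}
```

Once each modality is tokenised this way, sequences of tokens can be segmented into candidate Perception-Action Cycles and compared across domains.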

MAP-DRIVE
Application of U+ to Driving

    Investigates predictive control during route familiarisation in simulated urban driving. 66 usable subjects, 6 modalities (Gaze, Motor, EEG, EDA, ECG, IMU) at 500 Hz, 415 aligned recordings across a 14-segment urban circuit. 13-paper publication programme across 4 waves. Detection pipeline (YOLO mAP50 = 95.3%), segment decoding at 71.7% balanced accuracy (4-class, chance = 25%).

HABIT
Application of U+ to Industry 5.0

    Human-Centered AI for Behavior Learning in Industrial Technologies. PhD project with the University of Strasbourg, ICAM, and the iCube laboratory, funded through the ENACT AI Cluster. Multimodal data fusion (eye-tracking, IMU, video, neuroscience annotations) on a real Industry 5.0 production line at ICAM Strasbourg-Europe. Co-supervised by Dr. Samy Rima and Asst. Prof. Rabih Amhaz. First publication accepted at FAIM 2026, to appear in Lecture Notes in Mechanical Engineering (Springer).

SkyMind
Application of U+ to Aviation

    Applies the U+ behavioural tokenisation framework to the aviation domain using a high-fidelity flight simulator environment. Investigates pilot sensorimotor regulation, cognitive workload, and adaptive decision-making during simulated flight operations. Leverages the same multimodal recording infrastructure (EEG, eye-tracking, physiological signals) to study how operators build and maintain situational awareness in complex airspace environments.

  • Harbour Platform

PRISM
A New Lens for Conversation Analysis

    Open-source analysis pipeline that maps every utterance along three dimensions: its grammatical shape (question, statement, exclamation), its communicative purpose (informing, requesting, agreeing, hedging), and its acoustic signature (pitch, rhythm, voice quality, speaking rate). By aligning what someone says with how they say it at the level of individual phrases, PRISM makes the invisible mechanics of conversation visible and measurable.
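A per-utterance record combining these three dimensions might look like the sketch below. The field names and values are assumptions for illustration, not the actual PRISM schema:

```python
# Illustrative per-utterance record: grammatical shape, communicative
# purpose, and a couple of acoustic-signature features. Field names
# are hypothetical, not the real PRISM output format.
from dataclasses import dataclass

@dataclass
class UtteranceAnalysis:
    text: str
    grammatical_shape: str      # e.g. "question", "statement", "exclamation"
    communicative_purpose: str  # e.g. "informing", "requesting", "hedging"
    pitch_mean_hz: float        # acoustic signature: mean fundamental frequency
    speaking_rate_sps: float    # acoustic signature: syllables per second

u = UtteranceAnalysis("Could we try that again?",
                      grammatical_shape="question",
                      communicative_purpose="requesting",
                      pitch_mean_hz=210.0,
                      speaking_rate_sps=4.8)
```

Aligning the first two (what is said) with the third (how it is said) at the utterance level is what makes mismatches, e.g. a statement delivered with rising, uncertain pitch, measurable.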

Project 1: Reading the Signals
Prosodic Indicators of Communicative Regulation

    Investigates the prosodic signals that accompany different communicative strategies in conversation with AI. Tracks how a speaker’s voice changes moment to moment — when they hedge a difficult opinion, warm up to a new topic, or lose interest — to identify reliable, real-time indicators of communicative regulation: the ongoing process by which speakers adjust their behaviour in response to their conversational environment.

Project 2: Mapping Conversational Growth
Longitudinal Development of Communicative Repertoire

    Tracks how a speaker’s conversational repertoire evolves over repeated practice sessions on the Harbour platform. Measures not just what people say but the range and flexibility of how they say it: do they develop new communicative strategies over time? Do they learn to match their approach to different social situations? By following the same speakers across weeks and scenarios, this project observes conversational competence as it develops.

Project 3: Conversations That Adapt to You
Real-Time Adaptive Dialogue Systems

    Uses the signals from Project 1 and the competency map from Project 2 to build conversations that respond to the speaker in real time. If the system detects withdrawal — quieter speech, increased hedging, narrowing communicative range — it shifts to a more supportive mode before gradually reintroducing challenge. Studies whether this adaptive approach accelerates conversational development, and what it reveals about human–AI interaction dynamics.
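The adaptation rule described above can be sketched as a simple threshold policy. The feature names and threshold values below are illustrative assumptions, not the system's actual decision logic:

```python
# Toy sketch of the adaptive rule: if withdrawal indicators (quieter
# speech, more hedging, narrower communicative range) cross a
# threshold, switch to a supportive mode; otherwise keep challenging.
# All features and thresholds are hypothetical.
def select_mode(loudness_db, hedge_rate, range_score,
                quiet_thresh=-30.0, hedge_thresh=0.3, range_thresh=0.4):
    withdrawal = (loudness_db < quiet_thresh      # quieter speech
                  or hedge_rate > hedge_thresh    # increased hedging
                  or range_score < range_thresh)  # narrowing range
    return "supportive" if withdrawal else "challenging"

mode = select_mode(loudness_db=-35.0, hedge_rate=0.1, range_score=0.8)
```

A deployed version would smooth these indicators over time and reintroduce challenge gradually rather than flipping between two discrete modes.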

Digital Neuroscience Specialised Master’s Programme

The Mutual Minds research agenda is tightly integrated with the Digital Neuroscience sp-MSc, a specialised master’s programme coordinated within Human-IST. Students learn to think in terms of systems, adaptation, and regulation across intelligence scales, gaining hands-on experience on both MindCraft and Harbour research platforms through direct engagement with experimental work, data analysis, and system design.

Graduates are prepared for research, industry, and public-sector roles where human-centred intelligent systems matter — bridging the gap between computational capability and human adaptability.

Shared Methodological Infrastructure

Mutual Minds operates two converging instrumentation pipelines — one capturing embodied sensorimotor behaviour, the other capturing communicative and acoustic behaviour — unified by a common commitment to multimodal, time-aligned, high-resolution recording.

MindCraft records six concurrent modalities — eye-tracking, EEG, electrodermal activity, cardiac signals, head-movement inertial data, and motor telemetry — synchronised at high temporal resolution in ecologically valid simulation environments (driving, flight). All signals are aligned to a common reference clock, enabling cross-modal analysis at the level of individual perception-action events.
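The common-clock alignment step can be illustrated by resampling a slower modality onto the reference timeline. The sample rates and variable names below are illustrative, not the MindCraft acquisition specification:

```python
# Sketch of aligning a lower-rate modality onto a common reference
# clock via linear interpolation. Rates and names are assumptions
# for illustration only.
import numpy as np

def align_to_clock(t_signal, signal, t_ref):
    """Resample `signal` (sampled at times t_signal) onto times t_ref."""
    return np.interp(t_ref, t_signal, signal)

t_ref = np.arange(0, 1.0, 1 / 500)   # hypothetical 500 Hz reference clock
t_eda = np.arange(0, 1.0, 1 / 32)    # hypothetical 32 Hz EDA stream
eda = np.sin(2 * np.pi * t_eda)      # toy EDA trace
eda_aligned = align_to_clock(t_eda, eda, t_ref)  # now on the common clock
```

With every modality expressed on the same timeline, cross-modal events (e.g. a gaze shift followed by a motor correction) can be compared sample-for-sample.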

Harbour captures conversational dynamics through forensic-grade stereo recording with native speaker separation, feeding into a multi-stage analysis pipeline (PRISM) that extracts voice quality, prosodic features, communicative structure, and semantic coherence at the utterance level. This enables fine-grained tracking of how speakers regulate and adapt their communicative behaviour across sessions.

Across both platforms, the U+ framework provides a shared computational approach: converting continuous multimodal signals into discrete behavioural tokens and Perception-Action Cycles that can be compared across domains — whether the operator is driving, flying, assembling on a production line, or navigating a conversation. Data processing leverages Python, MATLAB, and deep learning frameworks for time-series modelling, representation learning, and network analysis.

Selected Outputs

Selected publications, datasets, and tools from the Mutual Minds initiative:

Publications

Bussard, A., Roosch, L., Schmid, M. C., & Rima, S. (2025). MAP-DRIVE: Optimizing Driver Monitoring Systems Through Perception-Action Sequences and Neurophysiological Insights. In NAT’25 Proceedings, pp. 50–53. BTU Cottbus-Senftenberg. doi:10.26127/BTUOpen-7215


Gurbanov, K., Almakdessi, N., Kacimi, F., Bobenrieth, C., Chabrol, G., Rima, S., & Amhaz, R. (2026). HABIT: An AI-Driven Multimodal Framework for Human-Centric Productivity in Industry 5.0. FAIM 2026, LNME, Springer (forthcoming).


Rima, S. et al. (in preparation). MAP-Drive: A Multimodal Dataset of Urban Driving Behavior. Target: Scientific Data.


Rima, S. et al. (in preparation). Scanpath Reorganisation During Route Familiarisation in Urban Driving. Target: Human Factors.


Rima, S. et al. (in preparation). Pressure-State Tokenisation of Motor Signatures in Simulated Driving. Target: Transportation Research Part F.

Platforms

MindCraft: multimodal simulation laboratory with U+ behavioural tokenisation framework. Harbour: conversational AI training platform with PRISM analysis pipeline (multilingual, forensic-grade recording, adaptive dialogue).

Datasets

MAP-DRIVE: large-scale multimodal driving dataset (EEG, eye-tracking, EDA, cardiac, IMU, motor telemetry). Harbour: longitudinal conversational interaction corpus (in collection).

Theses

Completed and ongoing MSc theses on deep learning for eye-tracking, neurobehavioural modelling of driving, circadian EEG patterns, meta-learning algorithms, and network theory applied to genomic data.

Talks

ITU AI for Good Global Summit 2026 (submitted): “The Dialogic Gap: AI and Humanity’s Oldest Unsolved Problem.”

Team

Dr. Samy Rima

Principal Investigator, Maître-Assistant, Initiative Lead

Human-IST Institute, University of Fribourg

Multimodal research pipelines, AI/ML, neuroscience, XR, cybernetics. Coordinator of the Digital Neuroscience sp-MSc. Leads the Mutual Minds initiative and both MindCraft and Harbour research programmes.

Prof. Denis Lalanne

Co-PI, Full Professor, Institute Director

Human-IST Institute, University of Fribourg

Human-computer interaction, multimodal interfaces, visual analytics, Human-AI teaming, Human-Building Interaction.

Prof. Michael C. Schmid

Co-PI, Full Professor

University of Fribourg

Visual neuroscience, neural dynamics, sensory processing, attention and perception.

Asst. Prof. Rabih Amhaz

Co-PI, Assistant Professor

University of Strasbourg & ICAM

Signal processing, computational modelling, interdisciplinary engineering, applied mathematics.

Sofia Panagiotakou

Graduate Researcher, MSc, Digital Neuroscience

University of Fribourg

Vithusan Ramalingam

Graduate Researcher, MSc, Digital Neuroscience

University of Fribourg

Lukas Maurer

Graduate Researcher, MSc, Medicine

University of Fribourg

Alumni — MindCraft Contributors

The following students contributed to MindCraft projects during their master’s studies:

Alessia Bussard  •  Lorenzo Roosch  •  Zeynep Aydin  •  Celine Rohrer  •  Emile Alyev

Institutional Partners

  • University of Fribourg
  • Human-IST Institute
  • University of Strasbourg
  • ICAM
  • iCube Laboratory
  • ENACT AI Cluster