A Multimodal Computational Framework for Behavioral Recognition and Interaction in Mixed Reality Workspaces
Keywords:
Human-Computer Interaction
Mixed Reality
Multimodal Behavioral Recognition
Affective Computing
Workspace
User Experience
Interaction part: Ruyi Yang (Team Leader), Menghang Liu, Luqi Feng
Construction part: Jieshi Chen, Jingtong Li, Tianyu Zhang
Instructor: Professor Chao YAN, Tongji University
Source code for interaction part: https://github.com/Emmaruyi/Humanizing-MR
SenseBox (2024, Participated as Team Leader)
Utilized mixed reality technology to assist in bending steel pipes, constructing a 3×3×3 m installation-space prototype.
Deployed a ZED camera and a VR headset to acquire human data, performing body landmark detection and emotion recognition (see the first sketch after this list).
Analyzed the human data with qualitative and quantitative methods and built a spatial-perception rule library over distances, orientations, postures, head states, and expressions, from which interaction, task, attention, and emotional patterns are derived (second sketch below).
Developed a multimodal spatial-response system combining Python data processing, Grasshopper parametric modeling, and Unity real-time rendering: the human data drives real-time changes in spatial enclosure, height, openness, regularity, and color, integrated with the physical space to create dynamic virtual environments that adapt to human work states (third sketch below).
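A minimal sketch of the sensing step, assuming the ZED SDK 4.x Python bindings (pyzed) and its body-tracking module; enum and parameter names follow that SDK version and may differ elsewhere, and the headset-side emotion recognition is omitted here.

```python
# Body-tracking capture loop, assuming ZED SDK 4.x Python bindings (pyzed).
import pyzed.sl as sl

def capture_bodies():
    zed = sl.Camera()
    init_params = sl.InitParameters()
    init_params.camera_resolution = sl.RESOLUTION.HD720
    init_params.coordinate_units = sl.UNIT.METER
    if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
        raise RuntimeError("Could not open the ZED camera")

    # Enable skeleton tracking with the 34-keypoint body format.
    body_params = sl.BodyTrackingParameters()
    body_params.enable_tracking = True
    body_params.body_format = sl.BODY_FORMAT.BODY_34
    zed.enable_body_tracking(body_params)

    bodies = sl.Bodies()
    runtime = sl.BodyTrackingRuntimeParameters()
    try:
        while True:
            if zed.grab() == sl.ERROR_CODE.SUCCESS:
                zed.retrieve_bodies(bodies, runtime)
                for body in bodies.body_list:
                    # body.keypoint is an (N, 3) array of 3D joint positions.
                    print(body.id, body.keypoint[0])
    finally:
        zed.disable_body_tracking()
        zed.close()

if __name__ == "__main__":
    capture_bodies()
```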
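A minimal sketch of the rule-library idea; the thresholds, feature names, and pattern labels are illustrative placeholders, not the project's actual rules (those live in the linked repository).

```python
# Rule-based mapping from multimodal features to behavioral patterns.
# All thresholds and labels below are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class HumanState:
    distance_m: float       # distance to the nearest collaborator, metres
    head_pitch_deg: float   # head pitch reported by the headset, degrees
    expression: str         # e.g. "neutral", "happy", "surprised"

def derive_patterns(state: HumanState) -> dict:
    """Map raw multimodal features to interaction/attention/emotion patterns."""
    interaction = "collaborative" if state.distance_m < 1.2 else "independent"
    attention = "focused" if abs(state.head_pitch_deg) < 15 else "distracted"
    emotion = "positive" if state.expression in {"happy", "surprised"} else "neutral"
    return {"interaction": interaction, "attention": attention, "emotion": emotion}

if __name__ == "__main__":
    print(derive_patterns(HumanState(0.9, 8.0, "happy")))
    # {'interaction': 'collaborative', 'attention': 'focused', 'emotion': 'positive'}
```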
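A minimal sketch of one common way to bridge the Python side to the Unity/Grasshopper rendering side, streaming JSON over UDP; the port, message schema, and parameter mapping are assumptions for illustration, not the project's actual transport.

```python
# Push derived patterns to a Unity/Grasshopper listener as JSON over UDP.
# Port, schema, and the pattern-to-parameter mapping are assumptions.
import json
import socket

UNITY_ADDR = ("127.0.0.1", 5065)  # assumed local port for the Unity listener

def send_spatial_response(patterns: dict) -> None:
    """Translate behavioral patterns into spatial parameters and send them."""
    payload = {
        # Spatial parameters the rules drive: enclosure, height, openness,
        # regularity, and color (values here are placeholders).
        "enclosure": 0.8 if patterns["interaction"] == "collaborative" else 0.3,
        "height": 2.4,
        "openness": 0.6 if patterns["attention"] == "focused" else 0.9,
        "regularity": 0.5,
        "color": "#FFB703" if patterns["emotion"] == "positive" else "#8ECAE6",
    }
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(json.dumps(payload).encode("utf-8"), UNITY_ADDR)

if __name__ == "__main__":
    send_spatial_response({"interaction": "collaborative",
                           "attention": "focused",
                           "emotion": "positive"})
```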
SenseBox (2024, Participated as Team Leader)
Continued Work: EEG Integration
Nebula (2025, Participated as Teaching Assistant)