|M.Sc. Student||Rozenfeld Katerina|
|Subject||Monitoring in the Operating Room; Task Analysis and Design|
Concept of Multimodal Data Displays
|Department||Department of Architecture and Town Planning||Supervisors||Professor Noemi Bitterman|
|Clinical Professor Reuven Pizov|
|Full Thesis text - in Hebrew|
In addition to the traditional monitors that continuously record a patient’s hemodynamic, respiratory and electrophysiological signals, recent technological advances have introduced a variety of information sources into the operating room: signals from various devices, data about the environment and work rhythm, visual information from the picture archiving and communication system, online laboratory tests, and more. Each source has its own display and mode of presentation, leading to congested data screens and information overload, dividing the surgical team’s attention between observation and task performance, and increasing the workload of operating room personnel. As a result, the cognitive attentional resources of surgical personnel, especially the visual channel, are overloaded. In other extreme and stressful environments, by contrast, multimodal and multisensory displays have been developed: interfaces that combine two or more input/output options according to the task performed, the environmental conditions and personal preferences.
The basic hypothesis of the current work is that implementing multimodal displays will improve the efficiency of the operating room and reduce the workload of surgical team members. The objectives of the study were to:
1. Characterize the requirements of information exchange in the operating room in terms of monitoring, documentation and data use according to the task performed, surgeons’ preferences and needs, and environmental conditions.
2. Characterize multimodal behavioral patterns of surgeons according to operating stages and surgeon types.
3. Develop a conceptual multimodal interface for operating rooms, tailored to the team, the task and the environment.
Data were collected through a review of the scientific literature, unstructured observations at Carmel Hospital, Haifa (n=3), and task analysis based on structured observations of video recordings of open-heart surgery (n=6). The task analysis included activity sampling and timeline analysis.
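The activity-sampling side of such a task analysis can be illustrated with a minimal sketch. The event codes and timings below are hypothetical and purely illustrative, not the study’s actual data: given a timeline of coded observation events, the sketch estimates the share of total observed time occupied by each communication modality.

```python
from collections import defaultdict

# Hypothetical coded observation events: (start_s, end_s, modality).
# These values are illustrative only, not data from the study.
events = [
    (0, 40, "visual"),
    (35, 50, "verbal"),
    (48, 60, "gesture"),
    (60, 90, "visual"),
    (85, 90, "gesture"),
]

def modality_shares(events):
    """Total coded duration per modality, as a fraction of summed event time."""
    totals = defaultdict(float)
    for start, end, modality in events:
        totals[modality] += end - start
    grand = sum(totals.values())
    return {m: t / grand for m, t in totals.items()}

shares = modality_shares(events)
```

A coding scheme like this is the usual first step before timeline analysis, since the per-modality totals indicate which channels dominate and where overload is likely.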
The results of the task analysis show that information exchange in the operating room comprises several modalities (visual, verbal and hand gestures), even though most surgeons have reservations about using options other than the traditional visual representation, owing to conservatism, habit, training routines and still-immature technology. Despite the restrictions and difficulties of the surgical workflow, surgeons communicate multimodally, i.e. they regularly combine two or more information exchange modes simultaneously. The analysis identified when, how and according to which behavioral patterns this occurs.
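Detecting when two or more modes are used simultaneously reduces to finding overlaps among coded time intervals. A minimal sketch, again using hypothetical event codes rather than the study’s data, and assuming events of the same modality do not overlap each other:

```python
def multimodal_episodes(events):
    """Return time spans during which two or more coded events overlap.

    events: list of (start_s, end_s, modality) tuples. Assumes events
    of the same modality are non-overlapping, so an overlap implies
    two different modalities in simultaneous use.
    """
    # Sweep over interval boundaries, tracking the concurrent event count.
    boundaries = []
    for start, end, _ in events:
        boundaries.append((start, 1))
        boundaries.append((end, -1))
    boundaries.sort()  # at equal times, ends (-1) sort before starts (+1)
    episodes, active, span_start = [], 0, None
    for t, delta in boundaries:
        active += delta
        if active >= 2 and span_start is None:
            span_start = t
        elif active < 2 and span_start is not None:
            episodes.append((span_start, t))
            span_start = None
    return episodes

# Illustrative timeline: verbal overlaps visual, then gesture overlaps verbal.
events = [(0, 40, "visual"), (35, 50, "verbal"), (48, 60, "gesture")]
episodes = multimodal_episodes(events)  # → [(35, 40), (48, 50)]
```

The resulting episodes can then be cross-tabulated against operation stages or surgeon roles, which is the kind of comparison the behavioral-pattern analysis relies on.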
The analysis identified differences in multimodal behavioral patterns between chief surgeons and assistants. In addition, surgeons’ multimodal behavior varies between operation stages.
Hand gestures were found to be among the modalities surgeons use most. During the observation of the open-heart surgery recordings, all hand movements not related to direct manipulation within the operative field were considered. Surgeons were observed to use hand gestures as a distinct interaction language, conveying specific messages to other team members.
This work presents an approach to building the theoretical grounds for further development of multimodal interfaces for operating rooms. Knowledge about the users, the environment and the display system was collected from several fields, including design, human factors engineering and cognitive processes.