Database Open Access

Integration of Electroencephalogram and Eye-Gaze Datasets for Performance Evaluation in Fundamentals of Laparoscopic Surgery (FLS) Tasks

Somayeh B. Shafiei, Saeed Shadpour

Published: Aug. 23, 2023. Version: 1.0.0

When using this resource, please cite:
Shafiei, S. B., & Shadpour, S. (2023). Integration of Electroencephalogram and Eye-Gaze Datasets for Performance Evaluation in Fundamentals of Laparoscopic Surgery (FLS) Tasks (version 1.0.0). PhysioNet.

Additionally, please cite the original publication:

Somayeh B. Shafiei*, Saeed Shadpour, Xavier Intes, Rahul Rahul, Mehdi Seilanian Toussi, Ambreen Shafqat, Performance and Learning Rate Prediction Models Development in FLS and RAS Surgical Tasks Using Electroencephalogram and Eye Gaze Data and Machine Learning, Surgical Endoscopy, 2023

Please include the standard citation for PhysioNet:
Goldberger, A., Amaral, L., Glass, L., Hausdorff, J., Ivanov, P. C., Mark, R., ... & Stanley, H. E. (2000). PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation [Online]. 101 (23), pp. e215–e220.


Laparoscopic surgery represents a significant advancement in healthcare, aiming to enhance patient outcomes. However, evaluating performance in laparoscopic surgery tasks can be challenging due to subjective assessments. To address this, this study provides a comprehensive description and public access to electroencephalogram (EEG) and eye-gaze data collected from 25 participants with varying levels of robotic surgery experience. The participants performed Fundamentals of Laparoscopic Surgery (FLS) tasks using a trainer box (Pyxus®), focusing on three standard FLS elements. The dataset, named the NIBIB-RPCCC-FLS dataset, includes performance scores assessed by a rater after each task attempt. Most participants completed each task five times, although some completed only two attempts. The data collection received support from the National Institute of Biomedical Imaging and Bioengineering (NIBIB) and Roswell Park Comprehensive Cancer Center (RPCCC), with the goal of facilitating objective performance evaluation in FLS tasks. The dataset consists of 315 EEG recordings in the ".edf" format, 315 eye-gaze recordings in the ".csv" format, performance scores, and participant demographic data, such as age, gender, dominant hand, and dominant eye. Notably, this dataset is the first shared collection to include EEG and eye-gaze data from participants with different levels of robotic surgery experience performing FLS tasks.


In recent years, laparoscopic surgery has become a prominent advancement in healthcare, offering several benefits such as minimal invasiveness, reduced postoperative complications, and faster recovery for patients [1, 2]. The acquisition of laparoscopic surgical skills is critical for surgeons, and the Fundamentals of Laparoscopic Surgery (FLS) tasks serve as a fundamental training platform to develop these essential skills [3]. The program evaluates key factors, including skill level, ability to make decisions, and proficiency in manual techniques, in order to ascertain competence in laparoscopic surgery [4]. Furthermore, obtaining certification from the FLS program is a requirement for becoming board-certified in robotic surgical fields [5]. Therefore, tasks within the FLS program play a crucial role in the training of laparoscopic surgical procedures.

However, evaluating performance in FLS tasks poses challenges due to the subjective nature of assessments. To address this limitation and promote standardized evaluations, researchers have sought to incorporate objective measures into the assessment process. One such approach is the utilization of neurophysiological data, including electroencephalogram (EEG) and eye-gaze recordings, which can provide valuable insights into cognitive processes and attentional mechanisms during surgical performance.

However, research in this area is often constrained due to limited access to the required equipment and resources for data recording. This limitation is a significant factor hindering progress in the field. To advance robotic surgery training and promote research in laparoscopic skills assessment, it is crucial to have publicly available datasets that encompass a sufficient number of participants with varying levels of robotic surgery expertise, all performing FLS tasks. The availability of such datasets would provide researchers with the opportunity to analyze and explore the relationships between neurophysiological measures, such as EEG and eye-gaze data, and surgical performance. By ensuring widespread access to standardized data, more comprehensive studies can be conducted, leading to a better understanding of the cognitive processes and attentional mechanisms involved in laparoscopic surgery. Ultimately, the accessibility of publicly available datasets has the potential to drive innovation and improvements in robotic surgery training. Researchers can leverage these datasets to develop and validate new techniques for performance assessment, create predictive models for surgical outcomes, and identify factors that contribute to proficiency and skill acquisition in laparoscopic procedures.

By addressing the lack of available data and promoting collaborative research efforts, the field of robotic surgery can benefit from a broader knowledge base, enabling advancements that improve training methodologies, enhance patient care, and ultimately optimize surgical outcomes [6]. This dataset provides a comprehensive collection of data specifically recorded for performance evaluation in FLS tasks, utilizing EEG and eye gaze recordings. To emphasize the significance of its origin, the dataset has been named "NIBIB_RPCCC_FLS," acknowledging the support provided by the National Institute of Biomedical Imaging and Bioengineering (NIBIB) and Roswell Park Comprehensive Cancer Center (RPCCC) during data collection. The primary objective behind releasing this dataset is to promote advancements in performance evaluation research within the field of robotic surgery. By sharing this dataset, researchers can leverage the data to drive innovation, improve training methodologies, and ultimately enhance the assessment of surgical performance in robotic surgery.


This study was approved by the Institutional Review Board (IRB: I-241913) of Roswell Park Comprehensive Cancer Center. The IRB granted permission to waive the need for written consent. Participants were given written information about the study and provided verbal consent.

Participants: A group of 25 participants, ranging in age from 20 to 67 years old and with varying levels of surgical robot expertise, participated in the study.

Data Recording: Each participant performed three FLS tasks using an FLS laparoscopic training box (Pyxus®) while wearing a 128-channel EEG headset (AntNeuro®) and Tobii eye-tracking glasses (Tobii®).

Using the eye-tracking glasses, 20 visual metrics were captured at a constant rate of 50 Hz. Using the 128-channel EEG headset, brain activity was captured from 119 scalp locations at a constant rate of 500 Hz. The remaining nine leads of the cap (those designated for electrooculogram recording, plus the ground leads) were not used in this study.
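Because the eye-gaze stream (50 Hz) and the EEG stream (500 Hz) are sampled at different rates, joint analyses typically require a common timeline. A minimal sketch of one approach, using linear interpolation and assuming both recordings start at t = 0 and span the same task interval (this alignment strategy is illustrative, not part of the released dataset):

```python
import numpy as np

def upsample_gaze_to_eeg(gaze, gaze_fs=50.0, eeg_fs=500.0):
    """Linearly interpolate a 1-D gaze signal onto the EEG time grid.

    Assumes both streams start at t = 0 and cover the same duration.
    Samples past the last gaze timestamp are held at the final value
    (np.interp clamps at the boundaries).
    """
    gaze = np.asarray(gaze, dtype=float)
    t_gaze = np.arange(len(gaze)) / gaze_fs
    n_eeg = int(round(len(gaze) * eeg_fs / gaze_fs))
    t_eeg = np.arange(n_eeg) / eeg_fs
    return np.interp(t_eeg, t_gaze, gaze)

# 2 seconds of gaze data (100 samples at 50 Hz) -> 1000 samples at 500 Hz
aligned = upsample_gaze_to_eeg(np.linspace(0.0, 1.0, 100))
print(len(aligned))  # 1000
```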

Tasks: Three FLS program tasks were performed: peg transfer, pattern cut, and intracorporeal suturing.

In the peg transfer task of FLS, participants are required to transfer six objects through the air, moving them from their non-dominant hand to their dominant hand, and subsequently placing them onto a peg located on the opposite side of the pegboard. Afterwards, the process is reversed as they transfer the objects back to their initial side, with a penalty incurred for any instances of dropping objects outside the field of view. This task is designed to evaluate a surgeon's fine motor skills, hand-eye coordination, and depth perception [7].

During the FLS pattern cut task, participants are required to hold a Maryland dissector in one hand and apply traction to a gauze piece while using endoscopic scissors held in the other hand to cut along a pre-marked circle. The objective is to completely excise the marked circle from the 4x4 gauze piece. Any cutting that deviates from the marked circle incurs penalties. This task evaluates essential skills for laparoscopic surgery, such as hand-eye coordination, dexterity, and depth perception [7].

In FLS intracorporeal suturing, participants are asked to pass a short suture through two designated marks in a Penrose drain, then tie a knot with two throws to close a slit. Penalties are incurred for any deviations from the marks, incorrect closure of the slit, or a knot that slips or unravels under tension. This particular task serves as an assessment of surgical aptitude, encompassing crucial skills such as hand-eye coordination, dexterity, knot-tying in confined spaces, handling of tissue, tension control, and suturing techniques [8].

Scoring criteria: The performance assessment of FLS peg transfer involved measuring completion time and counting tool collisions and drops from video analysis. Additionally, the Global Operative Assessment of Laparoscopic Skills (GOALS) tool was employed to evaluate five domains (depth perception, bimanual dexterity, efficiency, tissue handling, and autonomy) on a Likert scale ranging from 1 to 5, for a total score ranging from 5 to 25 [3].

In the FLS pattern cut task, completion time and tool collisions were recorded through video analysis. The error area, determined by analyzing the final product using Fiji software [9], was utilized to evaluate the overall technical proficiency using the GOALS tool.

In the FLS suturing task, video recordings were used to measure task completion time and to count the number of drops and collisions. The performance evaluation was conducted using the Objective Structured Assessment of Technical Skills (OSATS) tool, which assesses eight domains (respect for tissue, time and motion, instrument handling, suture handling, flow of suturing, knowledge of the steps, overall appearance, and overall performance) on a Likert scale ranging from 1 to 5. The total score range for the OSATS tool was from 8 to 40 [10].

Data Description

The Eye folder includes 315 files in "*.csv" format, and the EEG folder includes 315 files in "*.edf" format. File names are deidentified as participantID_taskID_try.edf (EEG) and participantID_taskID_try.csv (eye gaze), where participantID is a number from 1 to 25. Task IDs: FLS Peg Transfer (Task 1), FLS Pattern Cut (Task 2), FLS Suturing (Task 3). The date in the "patientID" and "recordID" variables of the "hdr" structure indicates when the EEG file was converted into EDF format, not the recording date.
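The participantID_taskID_try convention can be decoded programmatically when iterating over the files. A minimal sketch (the filename "7_2_4.edf" below is an illustrative example, not a claim about any particular recording):

```python
def parse_recording_name(filename):
    """Split a deidentified file name like '12_2_3.edf' into its parts.

    Returns (participant_id, task_id, try_number) as integers, following
    the participantID_taskID_try convention described above.
    """
    stem = filename.rsplit(".", 1)[0]          # drop the .edf / .csv extension
    participant, task, attempt = stem.split("_")
    return int(participant), int(task), int(attempt)

TASK_NAMES = {1: "FLS Peg Transfer", 2: "FLS Pattern Cut", 3: "FLS Suturing"}

pid, tid, attempt = parse_recording_name("7_2_4.edf")
print(pid, TASK_NAMES[tid], attempt)  # 7 FLS Pattern Cut 4
```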

Visual metrics: Each file consists of 20 columns, corresponding to 20 eyeglass measurements. These measurements are denoted from column 1 to column 20, and include: 1) Gaze point X; 2) Gaze point Y; 3) Gaze point 3D X; 4) Gaze point 3D Y; 5) Gaze point 3D Z; 6) Gaze direction left X; 7) Gaze direction left Y; 8) Gaze direction left Z; 9) Gaze direction right X; 10) Gaze direction right Y; 11) Gaze direction right Z; 12) Pupil position left X; 13) Pupil position left Y; 14) Pupil position left Z; 15) Pupil position right X; 16) Pupil position right Y; 17) Pupil position right Z; 18) Pupil diameter left; 19) Pupil diameter right; 20) Eye movement type index (1: Fixation; 2: Saccade; 0: Unknown).
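The eye movement type index in column 20 supports simple summary metrics such as the fraction of task time spent in fixations versus saccades. A minimal sketch over a synthetic index vector (the eight-sample input is illustrative only):

```python
import numpy as np

# Eye movement type codes from column 20 of each gaze file
FIXATION, SACCADE, UNKNOWN = 1, 2, 0

def gaze_event_fractions(event_index):
    """Fraction of samples classified as fixation, saccade, and unknown."""
    ev = np.asarray(event_index)
    n = len(ev)
    return {
        "fixation": np.count_nonzero(ev == FIXATION) / n,
        "saccade": np.count_nonzero(ev == SACCADE) / n,
        "unknown": np.count_nonzero(ev == UNKNOWN) / n,
    }

# Synthetic 8-sample index for illustration
fractions = gaze_event_fractions([1, 1, 1, 2, 2, 1, 0, 1])
print(fractions["fixation"])  # 0.625
```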

EEG files: Each EEG file contains recorded EEG data, with the possibility of excluding certain channels due to poor quality. The 'hdr' structure within each EEG file provides information such as the labels assigned to each EEG channel, the total number of channels, and the duration of the recording in seconds.

Performance scores: The file named "PerformanceScores" contains a comprehensive summary of all EEG and Eye files. It includes information such as the subject's age, gender, dominant hand, dominant eye, and their corresponding performance score.
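The summary file can be joined against the per-file recordings by participant, task, and attempt. A minimal pandas sketch over a synthetic stand-in; the column names below (participantID, taskID, try, score, etc.) are assumptions for illustration, so inspect the actual header of PerformanceScores.csv before reusing this:

```python
import io
import pandas as pd

# Synthetic stand-in for PerformanceScores.csv; real column names may differ.
csv_text = """participantID,taskID,try,age,gender,dominantHand,dominantEye,score
1,1,1,34,F,Right,Right,18
1,1,2,34,F,Right,Right,21
2,1,1,45,M,Right,Left,15
"""

scores = pd.read_csv(io.StringIO(csv_text))
# Mean score per participant for task 1 (peg transfer)
mean_by_participant = (scores[scores["taskID"] == 1]
                       .groupby("participantID")["score"].mean())
print(mean_by_participant.loc[1])  # 19.5
```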

This dataset was used to develop performance and learning rate prediction models for FLS tasks using electroencephalogram and eye gaze data and machine learning [11].

Usage Notes

Note: The signals labeled 'EEGHEOGRCPz', 'EEGHEOGLCPz', 'EEGVEOGUCPz', and 'EEGVEOGLCPz' in the EEG data are associated with electrooculogram (EOG) recordings and should be excluded from further analysis.

Limitations: The quality of the signals recorded at the F8, POz, AF4, AF8, F6, and FC3 channels was poor in some recordings.

Data processing: The edfRead function [12] can be used to load EEG data into MATLAB. The EEGLAB toolbox [13] can be used to pre-process the EEG data. The eye-gaze data were pre-processed with Tobii Pro Lab® and are ready for analysis.

Reuse potential of the dataset: The EEG and eye-gaze data are suitable for use as input to deep neural network algorithms. The Signal Processing Toolbox in MATLAB can facilitate time-frequency analysis of the EEG data and extraction of power spectral density (PSD) and coherence features. These features can then be used as inputs for machine learning algorithms. The dataset holds potential for various applications, including:

  • Establishing the relationship between visual metrics, EEG features, and surgical performance.
  • Developing models that leverage visual and EEG features to predict surgical performance.
  • Creating models capable of predicting the learning rate.
  • Identifying significant changes in brain function during performance improvement.
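The PSD feature extraction mentioned above can equally be done outside MATLAB. A minimal Python sketch using Welch's method on a synthetic signal; the band definitions and window length are conventional choices, not taken from the original study:

```python
import numpy as np
from scipy.signal import welch

FS = 500.0  # EEG sampling rate reported for this dataset (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal, fs=FS):
    """Average Welch PSD within standard EEG frequency bands."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))  # 2 s windows
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

# Synthetic 10 s signal dominated by a 10 Hz (alpha-band) component
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / FS)
x = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(len(t))
powers = band_powers(x)
print(powers["alpha"] > powers["beta"])  # True
```

The resulting band powers (one value per channel per band) can be concatenated into a feature vector for the machine learning applications listed above.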

By leveraging these capabilities, the dataset offers opportunities to enhance understanding and advance research in the field of surgical performance assessment.

This dataset was released for academic research, not for commercial usage.

Release Notes

1.0.0: Initial release of the dataset.


This study was conducted in accordance with relevant guidelines and regulations and was approved by the Roswell Park Comprehensive Cancer Center’s Institutional Review Board (IRB: I-241913). The IRB issued a waiver of documentation of consent. Participants were given a Research Study Information Sheet and provided verbal consent. 


Research reported in this dataset was supported by the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health under award number R01EB029398. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. This work was supported by National Cancer Institute (NCI) grant P30CA016056 involving the use of Roswell Park Comprehensive Cancer Center’s Comparative Oncology Shared Resource and the Applied Technology Laboratory for Advanced Surgery (ATLAS). The authors thank all participants in the study. Additionally, the authors appreciate Philippa Doherty’s assistance in recording the data.

Conflicts of Interest

The authors declare that they have no conflicts of interest.


  1. Kim, H.-H., et al., Morbidity and mortality of laparoscopic gastrectomy versus open gastrectomy for gastric cancer: an interim report—a phase III multicenter, prospective, randomized trial (KLASS Trial). 2010, LWW.
  2. Hwang, S.-H., et al., Actual 3-year survival after laparoscopy-assisted gastrectomy for gastric cancer. Archives of Surgery, 2009. 144(6): p. 559-564.
  3. Vassiliou, M.C., et al., A global assessment tool for evaluation of intraoperative laparoscopic skills. The American journal of surgery, 2005. 190(1): p. 107-113.
  4. Peters, J.H., et al., Development and validation of a comprehensive program of education and assessment of the basic fundamentals of laparoscopic surgery. Surgery, 2004. 135(1): p. 21-27.
  5. Sroka, G., et al., Fundamentals of laparoscopic surgery simulator training to proficiency improves laparoscopic performance in the operating room—a randomized controlled trial. The American journal of surgery, 2010. 199(1): p. 115-120.
  6. Alberto IR, Alberto NR, Ghosh AK, Jain B, Jayakumar S, Martinez-Martin N, McCague N, Moukheiber D, Moukheiber L, Moukheiber M, Moukheiber S. The impact of commercial health datasets on medical research and health-care algorithms. The Lancet Digital Health. 2023 May 1;5(5):e288-94.
  7. Emken, J.L., E.M. McDougall, and R.V. Clayman, Training and assessment of laparoscopic skills. JSLS: Journal of the Society of Laparoendoscopic Surgeons, 2004. 8(2): p. 195.
  8. Soper, N.J. and G.M. Fried, The fundamentals of laparoscopic surgery: its time has come. Bull Am Coll Surg, 2008. 93(9): p. 30-32.
  9. Schindelin, J., et al., Fiji: an open-source platform for biological-image analysis. Nature methods, 2012. 9(7): p. 676-682.
  10. Alam, M., et al., Objective structured assessment of technical skills in elliptical excision repair of senior dermatology residents: a multirater, blinded study of operating room video recordings. JAMA dermatology, 2014. 150(6): p. 608-612.
  11. Somayeh B. Shafiei*, Saeed Shadpour, Xavier Intes, Rahul Rahul, Mehdi Seilanian Toussi, Ambreen Shafqat, Performance and Learning Rate Prediction Models Development in FLS and RAS Surgical Tasks Using Electroencephalogram and Eye Gaze Data and Machine Learning, Surgical Endoscopy, 2023
  12. Brett Shoelson (2023). edfRead, MATLAB Central File Exchange. Retrieved June 19, 2023.
  13. Delorme, Arnaud, and Scott Makeig. "EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis." Journal of neuroscience methods 134, no. 1 (2004): 9-21.


Access Policy:
Anyone can access the files, as long as they conform to the terms of the specified license.

License (for files):
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License


DOI (version 1.0.0):

DOI (latest version):



Total uncompressed size: 7.8 GB.


Folder Navigation: <base>
Name / Size / Modified
LICENSE.txt / 0 B / 2023-08-22
PerformanceScores.csv / 18.1 KB / 2023-06-08
RECORDS / 13.2 KB / 2023-07-21
SHA256SUMS.txt / 51.7 KB / 2023-08-23