Official repository of the GraSP dataset and implementation of TAPIS

BCV-Uniandes/GraSP

Pixel-wise Recognition for Holistic Surgical Scene Understanding

Nicolás Ayobi1,2, Santiago Rodríguez1,2*, Alejandra Pérez1,2*, Isabela Hernández1,2*, Nicolás Aparicio1,2, Eugénie Dessevres1,2, Sebastián Peña3, Jessica Santander3, Juan Ignacio Caicedo3, Nicolás Fernández4,5, Pablo Arbeláez1,2

*Equal contribution.
1 Center for Research and Formation in Artificial Intelligence (CinfonIA), Bogotá, Colombia.
2 Universidad de los Andes, Bogotá, Colombia.
3 Fundación Santafé de Bogotá, Bogotá, Colombia.
4 Seattle Children’s Hospital, Seattle, USA.
5 University of Washington, Seattle, USA.

  • Preprint available at arXiv
  • Visit the project on our website.

Abstract


We present the Holistic and Multi-Granular Surgical Scene Understanding of Prostatectomies (GraSP) dataset, a curated benchmark that models surgical scene understanding as a hierarchy of complementary tasks with varying levels of granularity. Our approach enables a multi-level comprehension of surgical activities, encompassing long-term tasks such as surgical phase and step recognition and short-term tasks including surgical instrument segmentation and atomic visual action detection. To exploit our proposed benchmark, we introduce the Transformers for Actions, Phases, Steps, and Instrument Segmentation (TAPIS) model, a general architecture that combines a global video feature extractor with localized region proposals from an instrument segmentation model to tackle the multi-granularity of our benchmark. Through extensive experimentation, we demonstrate the impact of including segmentation annotations in short-term recognition tasks, highlight the varying granularity requirements of each task, and establish TAPIS's superiority over previously proposed baselines and conventional CNN-based models. Additionally, we validate the robustness of our method across multiple public benchmarks, confirming the reliability and applicability of our dataset. This work represents a significant step forward in Endoscopic Vision, offering a novel and comprehensive framework for future research towards a holistic understanding of surgical procedures.

This repository provides instructions for downloading the GraSP dataset and running the PyTorch implementation of TAPIS, both presented in the paper Pixel-Wise Recognition for Holistic Surgical Scene Understanding.

Previous works

This work is an extended and consolidated version of three previous works: PSI-AVA, TAPIR, and MATIS.

Please check out these works!

GraSP

In this Google Drive link, you will find all the files that compose the entire Holistic and Multi-Granular Surgical Scene Understanding of Prostatectomies (GraSP) dataset. These files include the original Radical Prostatectomy videos, our sampled preprocessed and raw frames, and the gathered annotations for all four semantic tasks. The data in the link has the following organization:

GraSP:
|
|__GraSP_30fps
|__GraSP_1fps
|__raw_frames_1fps.tar.gz
|__videos.tar.gz
|__1fps_to_30fps_association.json
|__README.txt

These files contain the following aspects and versions of our dataset:

  1. GraSP_30fps is a directory with compressed archives containing all the preprocessed frames sampled at 30fps and the annotations for all tasks in our benchmark. This is the complete dataset used for model training and evaluation.
  2. GraSP_1fps is a directory with compressed archives containing a lighter version of the dataset, with preprocessed frames sampled at 1fps and the annotations for these frames.
  3. raw_frames_1fps.tar.gz is a compressed archive with the original frames sampled at 1fps, before frame preprocessing.
  4. videos.tar.gz is a compressed archive containing our dataset's original raw Radical Prostatectomy videos.
  5. 1fps_to_30fps_association.json contains the frame-name association between frames sampled at 1fps and frames sampled at 30fps.
  6. README.txt is an information file summarizing the contents of the other files.
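As a quick illustration of item 5, the snippet below loads an association file and looks up the 30fps frame name for a 1fps frame. This is a hypothetical sketch: it assumes a flat {case: {1fps_frame: 30fps_frame}} JSON layout, while the actual schema is documented in the dataset's bundled README and may differ.

```python
import json
import os
import tempfile

# Hypothetical sample mimicking 1fps_to_30fps_association.json; the real
# schema is described in the dataset's own README and may differ.
sample = {
    "CASE001": {
        "000000000.jpg": "000000000.jpg",
        "000000001.jpg": "000000030.jpg",
    }
}
path = os.path.join(tempfile.mkdtemp(), "1fps_to_30fps_association.json")
with open(path, "w") as f:
    json.dump(sample, f)

# Load the association and look up the 30fps frame name for a 1fps frame.
with open(path) as f:
    association = json.load(f)

print(association["CASE001"]["000000001.jpg"])  # -> 000000030.jpg
```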

We recommend downloading the directory of the dataset files you need and uncompressing all internal archives. For instance, the frames of the GraSP_30fps dataset are stored in multiple compressed archives per surgery to ease storage limits during download. We will soon provide a single script to download the entire dataset programmatically. In the meantime, you can download the dataset and uncompress all internal archives with the following command:

$ find /path/to/directory -type f -name "*.tar.gz" -execdir sh -c '
    for file; do
        echo "Uncompressing $file..."
        tar -xzf "$file" && rm -f "$file"
    done
  ' sh {} +

Note: Most directories and compressed archives contain a README file with further details and instructions on the data's structure and format.

Main Dataset to Run our Models

The GraSP_30fps directory is the only one necessary to run our code.

After downloading and uncompressing all files, the GraSP_30fps directory must have the following structure:

GraSP_30fps
|
|___frames
|    |
|    |___CASE001
|    |    |__000000000.jpg
|    |    |__000000001.jpg
|    |    |__000000002.jpg
|    |    ...
|    |___CASE002
|    |    ...
|    ...
|    |
|    |___README.txt
|
|___annotations
     |__segmentations
     |       |
     |       |__CASE001
     |       |      |__000000068.png
     |       |      |__000001642.png
     |       |      |__000003218.png
     |       |      ...
     |       ...
     |       |__CASE053
     |              |__000000015.png
     |              |__000001065.png
     |              |__000002115.png
     |              ...
     |
     |__grasp_long-term_fold1.json
     |__grasp_long-term_fold2.json
     |__grasp_long-term_train.json
     |__grasp_long-term_test.json
     |__grasp_short-term_fold1.json
     |__grasp_short-term_fold2.json
     |__grasp_short-term_train.json
     |__grasp_short-term_test.json
     |__README.txt
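After uncompressing, a quick sanity check of the layout can save debugging time later. The function below is an illustrative sketch (not part of the official tooling) that verifies the frames and annotations directories exist and that the eight benchmark annotation JSON files listed above are present:

```python
import os

def check_grasp_30fps(root):
    """Illustrative sanity check (not an official tool): verify that the
    GraSP_30fps directory contains a frames/ folder, an annotations/
    folder, and the eight benchmark annotation JSON files."""
    problems = []
    if not os.path.isdir(os.path.join(root, "frames")):
        problems.append("missing frames/")
    ann_dir = os.path.join(root, "annotations")
    if not os.path.isdir(ann_dir):
        problems.append("missing annotations/")
    for task in ("long-term", "short-term"):
        for split in ("fold1", "fold2", "train", "test"):
            name = f"grasp_{task}_{split}.json"
            if not os.path.isfile(os.path.join(ann_dir, name)):
                problems.append(f"missing annotations/{name}")
    return problems
```

check_grasp_30fps returns an empty list when the layout matches the tree above, and a list of missing paths otherwise.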

Dataset updates and versions

We updated the dataset annotations for the surgical phase and surgical step recognition tasks (long-term tasks) in December 2024 to correct minor errors and ambiguities. This final release modifies only some long-term annotations and provides a better-curated benchmark. However, if older versions of the surgical phase and step annotations are needed, they remain available for reference in this Google Drive Link.

Go to the TAPIS directory to find our source code and instructions for running our TAPIS model.

Contact

If you have any doubts, questions, issues or comments, please email [email protected].

Citing GraSP

If you find GraSP or TAPIS useful for your research (or its previous versions, PSI-AVA, TAPIR, and MATIS), please include the following BibTeX citations in your papers.

@article{ayobi2024pixelwise,
      title={Pixel-Wise Recognition for Holistic Surgical Scene Understanding}, 
      author={Nicol{\'a}s Ayobi and Santiago Rodr{\'i}guez and Alejandra P{\'e}rez and Isabela Hern{\'a}ndez and Nicol{\'a}s Aparicio and Eug{\'e}nie Dessevres and Sebasti{\'a}n Peña and Jessica Santander and Juan Ignacio Caicedo and Nicol{\'a}s Fernández and Pablo Arbel{\'a}ez},
      year={2024},
      url={https://arxiv.org/abs/2401.11174},
      eprint={2401.11174},
      journal={arXiv},
      primaryClass={cs.CV}
}

@InProceedings{ayobi2023matis,
      author={Nicol{\'a}s Ayobi and Alejandra P{\'e}rez-Rond{\'o}n and Santiago Rodr{\'i}guez and Pablo Arbel{\'a}ez},
      booktitle={2023 IEEE 20th International Symposium on Biomedical Imaging (ISBI)}, 
      title={MATIS: Masked-Attention Transformers for Surgical Instrument Segmentation}, 
      year={2023},
      pages={1-5},
      doi={10.1109/ISBI53787.2023.10230819}
}

@InProceedings{valderrama2020tapir,
      author={Natalia Valderrama and Paola Ruiz and Isabela Hern{\'a}ndez and Nicol{\'a}s Ayobi and Mathilde Verlyck and Jessica Santander and Juan Caicedo and Nicol{\'a}s Fern{\'a}ndez and Pablo Arbel{\'a}ez},
      title={Towards Holistic Surgical Scene Understanding},
      booktitle={Medical Image Computing and Computer Assisted Intervention -- MICCAI 2022},
      year={2022},
      publisher={Springer Nature Switzerland},
      address={Cham},
      pages={442--452},
      isbn={978-3-031-16449-1}
}
