Advancements in endovascular treatment have replaced complex open surgical procedures with minimally invasive options for procedures such as valve replacement and aneurysm repair. This paper proposes using three-dimensional (3D) modeling and virtual reality to aid in C-arm positioning, angle measurement, and roadmap generation during neuro-interventional catheterization lab procedural planning, thereby minimizing procedure time.
Endovascular treatment of complex vascular anomalies trades the risk of open surgical procedures for the benefits of minimally invasive endovascular solutions. Complex open surgical procedures were once the only option for treating a myriad of conditions, such as pulmonary and aortic valve replacement as well as cerebral aneurysm repair. However, owing to advancements in catheter-delivered devices and operator expertise, these procedures (along with many others) can now be performed minimally invasively via a central or peripheral vein or artery. The decision to shift from an open procedure to an endovascular approach is based on multi-modal imaging, often including 3D Digital Imaging and Communications in Medicine (DICOM) imaging datasets. Utilizing these 3D images, our lab generates 3D models of the pathologic anatomy, enabling the pre-procedural analysis necessary to pre-plan critical components of the catheterization lab procedure, namely C-arm positioning, 3D measurement, and idealized roadmap generation. This article describes how to take segmented 3D models of patient-specific pathology and predict generalized C-arm positions, how to take critical two-dimensional (2D) measurements of 3D structures relevant to the 2D fluoroscopy projections, and how to generate 2D fluoroscopy roadmap analogs that can assist in proper C-arm positioning during catheterization lab procedures.
The treatment of intracranial aneurysms is a challenging aspect of neuro-interventional surgery, necessitating precise surgical planning to ensure optimal patient outcomes. In recent years, virtual reality (VR) technology has become a promising tool for enhancing surgical planning by providing surgeons access to immersive, patient-specific anatomical models in a virtual 3D environment1,2,3,4,5,6,7,8. This article presents a comprehensive protocol for the use of medical imaging and segmentation, 3D modeling, VR surgical planning, and idealized virtual roadmap generation to aid in surgical planning for the treatment of aneurysms.
The combination of these steps culminates in a virtual surgical planning approach, allowing physicians to immerse themselves in a virtual environment and gain a comprehensive understanding of a patient's unique anatomy prior to a surgical procedure. This immersive approach empowers surgeons to explore optimal positioning and simulate various procedural scenarios. Recording these scenarios can provide insight into the placement of real-world surgical equipment, such as C-arm positioning.
In addition to positioning angles, it is also possible to measure anatomy in a virtual environment using measurement tools designed for 3D space. These measurements can provide insight into the correct sizing and shape of the device to be used in an intracranial aneurysm case9.
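The core operation behind such 3D measurement tools is a Euclidean distance between two points picked on the model. The following is a minimal sketch of that calculation; the point coordinates and the `distance_mm` helper are hypothetical illustrations, not part of any VR software named in this protocol, and assume the model is exported in a millimeter coordinate frame.

```python
import numpy as np

def distance_mm(p1, p2):
    """Euclidean distance between two 3D points picked on the model (mm)."""
    return float(np.linalg.norm(np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)))

# Hypothetical points placed on opposite sides of an aneurysm neck,
# expressed in the model's millimeter coordinate frame.
neck_a = (12.0, -4.5, 30.0)
neck_b = (15.0, -4.5, 34.0)
print(f"Neck width: {distance_mm(neck_a, neck_b):.1f} mm")  # Neck width: 5.0 mm
```

Measurements like this inform device sizing decisions (e.g., coil or stent selection) before the patient is on the table.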
This protocol presents a comprehensive process that seamlessly combines medical imaging, image segmentation, VR model preparation, and virtual surgical roadmap generation to enhance the surgical planning process. Using a combination of leading-edge technologies, this protocol provides opportunities to save valuable time in the operating room10, as well as boosting surgeon confidence and understanding of complex surgical cases11,12,13.
De-identified human DICOMs or DICOMs for patient care are used in accordance with institutional guidelines for patient care, the Health Insurance Portability and Accountability Act of 1996 (HIPAA), and collaboration with the Institutional Review Board (IRB) when appropriate.
1. Segment patient-specific anatomy
2. Prepare the model for virtual reality
3. Train medical professionals in virtual reality
NOTE: The following instructions are written to be used with the Enduvo digital classroom software. While it may be possible to use other 3D viewing software, the ability to move models, place cameras, and record physician positioning are some features that make this software ideal for this procedure. Different VR headsets, controllers, and software combinations may have different controls.
4. Generate the fluoroscopy roadmap in VR
Following the presented protocol, virtual surgical roadmaps can be generated for both the AP and lateral fluoroscopy views. These roadmaps are created by placing a camera at the viewpoint of the surgeon in VR to capture their ideal AP and lateral views while also placing a colored background behind the target anatomy to better replicate a fluoroscopy image. The VR protractor is used at this point to record the angle from which the surgeon is viewing the target anatomy, recorded as right or left anterior oblique (RAO/LAO – camera offset toward the patient's right or left, respectively) and cranial or caudal (CRA/CAU – camera offset toward the patient's head or feet, respectively)15. When developing this process, retrospective cases were used to provide the ability to compare angles measured in VR with the actual angles used on the C-arm machines in surgery. Three different retrospective cases were selected for this process, each case having been treated with a different surgical device. The diversity of these three cases shows the versatility of the presented protocol. The surgeon was asked to find preferred AP and lateral angles without referencing the C-arm angles used during the procedure, and the VR measurements were then compared to these pre-existing C-arm positions.
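The RAO/LAO and CRA/CAU conventions can be expressed numerically from a camera direction. The sketch below is a hypothetical illustration, not part of the VR software used in this protocol; it assumes a patient frame where +x points to the patient's left, +y posterior, and +z cranial, and takes a vector from the isocenter toward the camera.

```python
import math

def c_arm_angles(cam):
    """Convert a vector from the isocenter toward the camera into C-arm angles.

    Assumed (hypothetical) patient frame: +x = patient left, +y = posterior,
    +z = cranial. Returns (rao_lao, cra_cau) in degrees:
    positive first value = LAO, negative = RAO;
    positive second value = CRA, negative = CAU.
    """
    x, y, z = cam
    n = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / n, y / n, z / n
    # A pure AP view places the camera anterior to the patient (y = -1),
    # so rotation toward the patient's left (+x) reads as positive LAO.
    rao_lao = math.degrees(math.atan2(x, -y))
    # Elevation of the camera above the axial plane reads as CRA.
    cra_cau = math.degrees(math.asin(z))
    return rao_lao, cra_cau

# A camera slightly toward the patient's right and head, loosely
# mimicking the 12 degrees RAO / 16 degrees CRA view of case 1:
print(c_arm_angles((-0.2, -0.93, 0.27)))
```

In practice, the protocol reads these angles off the VR protractor rather than computing them, but the same geometry underlies both.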
In case 1, the declared preferred AP viewing angle was measured in VR as 16° CRA, 12° RAO. The actual measurements used in surgery for this case were 11° CRA and 13° RAO. The maximum error among these measurements is 5°, on the cranial/caudal axis. Figure 2A shows the surgeon's declared AP view in virtual reality, followed by Figure 2B, which shows the actual angle used in surgery as seen in VR, and Figure 2C, which shows the surgical fluoroscopy image. Comparing the three images shows the VR images to be remarkably similar to the actual fluoroscopy image at the same angle.
The lateral view of the same case illustrated one of the challenges of this process: the 3D model had been inadequately reviewed. Because of this faulty review, the segmentation included extraneous vessels that are not connected to the target anatomy and that, according to the surgeon, obstructed the view of the aneurysm in VR. These discrepancies resulted from miscommunication about the required target anatomy during the quality control session with the physician. They can be seen in Figure 2D-F, which shows, from left to right, the surgeon's declared lateral view, the VR representation based on the surgical fluoroscopy angles, and the actual fluoroscopy image. Aside from the extraneous vessels, the surgeon's declared lateral view closely resembles the actual fluoroscopy image, despite the measurements differing by 6° and 26° in the coronal and axial planes, respectively. The replication of the actual surgical angles in VR (Figure 2E) likewise yields a view similar to the real fluoroscopy image (Figure 2F), with the main discrepancy again being the anomalous extra vessels. This case employed a less reliable manual placement of the protractor tool, which may account for part of the difference in measurements. Future cases employ a protractor that is bound to the anatomy in order to ensure maximum accuracy of angle measurements taken in VR.
In cases 2 and 3, the views selected as optimal in VR were not representative of the views used in the actual procedure. This was a consequence of the initial placement of models in VR being performed blinded to the angles used in surgery. It is important to note that the surgeon expressed that fluoroscopy procedures can have multiple acceptable treatment angles, and there is not necessarily a single correct angle. For the purpose of comparison, images were taken in VR from the reported surgical angles. For case 2, Figure 3 shows the VR AP view (Figure 3A) and the surgical AP view (Figure 3B), and a similar comparison can be made between the lateral views (Figure 3C,D). For case 3, Figure 4 shows the AP comparison (Figure 4A,B) as well as the lateral comparison (Figure 4C,D). The similarities between the VR and fluoroscopy images of these cases further demonstrate the ability of VR to be used in surgical planning.
An important benefit of this protocol is the improvement of surgical planning by leveraging 3D models in a VR environment. A previous study on the effectiveness of VR in surgical planning for complex oncological cases showed that roughly 50% of cases that employed the use of VR altered the surgical approach from the plan made using only 2D datasets9. VR has also been proven useful in the surgical planning process for liver tumor resection16,17, as well as procedures involving head and neck pathology18. The surgeon participating in the creation of this protocol stated that, "in VR, I can see [the anatomy] so much better," demonstrating the benefit of VR for endovascular neurosurgical applications.
Figure 1: Screenshot from within segmentation software. The screenshot shows the highlighted anatomy based on the selected masks.
Figure 2: Case 1. (A) Anteroposterior view of Case 1 as placed by the operating surgeon in VR. (B) Anteroposterior view of Case 1 in VR based on angle measurements taken during surgery. (C) Anteroposterior fluoroscopy view captured during surgery. (D) Lateral view of Case 1 as placed by the operating surgeon in VR. (E) Lateral view of Case 1 in VR based on angle measurements taken during surgery. (F) Lateral fluoroscopy view captured during surgery.
Figure 3: Case 2. (A) Anteroposterior view of Case 2 in VR based on angle measurements taken during surgery. (B) Anteroposterior fluoroscopy view of Case 2 captured during surgery. (C) Lateral view of Case 2 in VR based on angle measurements taken during surgery. (D) Lateral fluoroscopy view of Case 2 captured during surgery.
Figure 4: Case 3. (A) Anteroposterior view of Case 3 in VR based on angle measurements taken during surgery. (B) Anteroposterior fluoroscopy view of Case 3 captured during surgery. (C) Lateral view of Case 3 in VR based on angle measurements taken during surgery. (D) Lateral fluoroscopy view of Case 3 captured during surgery.
Supplementary File 1: The 3D protractor model developed and used for this protocol, in STL file format.
3D modeling was introduced to medical workflows with the advent of 3D printing technologies2,3,4,6,7,9,11, but VR affords novel applications of 3D technology beyond a physical 3D object. Efforts to replicate anatomy and scenarios in a virtual world allow for personalized medical practice on individual patients1,2,3,4,9,11,13,16. This work demonstrates the expansive capability of creating new pre-surgical simulations in a digital world with minimal effort.
Throughout the presented protocol, there are several steps that are critical to the success of a case. The most important factor in producing adequate results with proper resolution is acquiring the correct medical imaging. The presented process does not require additional scans of the patient, using the standard CTA scan that is scheduled for every intracranial aneurysm case. Most scanners store scans for only a short time, depending on the scanner model and health system protocol; the imaging technician must therefore upload the acquired thin slices (typically less than 1 mm thick) promptly, as these are often not retained longer than a few days because of their storage size. These thin slices allow for greater detail and the inclusion of smaller anatomy, such as blood vessels. After segmentation has taken place, physician quality control must be completed to ensure the 3D models represent the patient anatomy as accurately as possible for all subsequent steps. Quality control of all models should be a part of the segmentation process, minimizing the potential for propagation of error throughout the remainder of the protocol. Quality control includes verifying blood vessel borders and segmenting the aneurysm separately from the surrounding vessels, similar to how it would present with contrast. Quality control with a physician is of utmost importance, as the physician holds the entirety of the responsibility for the accuracy of the models, especially if the models are to be used in further decision-making about the patient's treatment. In some circumstances, it may be feasible or practical for the physician to complete the segmentation step themselves.
The next important step in the protocol is maintaining spatial model alignment while integrating the protractor measurement tool. Blender has proved to be an extremely helpful tool for this step, as it allows multiple STL files to be combined into one file with multiple layers, each of which is spatially aligned and can be colored or textured for added clarity. During this step, the protractor STL is added so that angle data can be gathered in VR. The protractor model was developed in SolidWorks, a computer-aided design (CAD) tool. Taking advantage of the software's high-precision dimensioning tools, an arc with tick marks denoting every 5° in all three axes was created. The protractor also has crosshairs denoting the true center of the model, allowing alignment to the center of the patient's anatomy, and a large bar marking (0°, 0°) that is aligned with the patient's nose. Because this alignment was performed manually, it could have increased the measurement error. Alignment is of utmost importance to ensure the accuracy of all subsequent angle measurements. Once properly aligned, the model is ready for VR, where recording the physician's placement of the model allows later determination of the angles at which the model was placed. During the recording, everything within the virtual space is recorded relative to everything else, most importantly the physician's point of view (POV) and the models' movements and rotations. Taking full advantage of this recording and its pause feature, a straight edge is placed from the physician's POV through the protractor's crosshair, and measurements can be read in a manner remarkably similar to the use of a physical protractor.
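The alignment step above can be sketched numerically. The snippet below is a hypothetical illustration, not the Blender workflow itself; it assumes both meshes already share one coordinate frame, that the protractor's crosshair sits at its own bounding-box center, and it uses toy vertex arrays in place of real STL data.

```python
import numpy as np

def center_of_bounds(vertices):
    """Center of the axis-aligned bounding box of an N x 3 vertex array."""
    v = np.asarray(vertices, dtype=float)
    return (v.min(axis=0) + v.max(axis=0)) / 2.0

def align_protractor(protractor_verts, anatomy_verts):
    """Translate the protractor so its crosshair (assumed at its bounding-box
    center) coincides with the center of the anatomy model, preserving the
    shared coordinate frame of both meshes."""
    offset = center_of_bounds(anatomy_verts) - center_of_bounds(protractor_verts)
    return np.asarray(protractor_verts, dtype=float) + offset

# Toy vertex clouds standing in for STL meshes:
anatomy = np.array([[10, 10, 10], [20, 30, 14]], dtype=float)
protractor = np.array([[-5, -5, 0], [5, 5, 0]], dtype=float)
aligned = align_protractor(protractor, anatomy)
print(center_of_bounds(aligned))  # matches the anatomy center: [15. 20. 12.]
```

Automating the translation in this way, rather than dragging the protractor by hand, would remove the manual alignment error noted above.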
This methodology does have some limitations. One is that there is not necessarily a single correct orientation for viewing an aneurysm in fluoroscopy, which led to multiple validation attempts simply because of the different acceptable viewing angles. This limitation can also be viewed as a possible benefit: with the additional familiarity that comes from manipulating the 3D model, the physician may find a more optimal view than the current method of determining angles within the operating suite allows. Another potential limitation is that it is possible to select a viewing angle in VR that the C-arm cannot physically reach; the physician would need to be aware of this constraint in VR so that adjustments could be made if this became part of surgical planning. A further limitation, underscoring the importance of the quality control step, is that vessels distal to the aneurysm are sometimes less prominent in actual fluoroscopy than they appear when included in the VR model. This can force the physician in VR to work around a vessel that would not actually obstruct the view during the procedure, leading to a suboptimal viewing angle. In segmentation, it is possible to segment the majority of the blood vessels and the area of interest separately; the interventionalist could then toggle between vessel models to confirm that no additional vessels obscure the chosen viewing angle. The use of contrast minimizes this risk as well.
The development of a 3D model protractor and a protocol that can provide angle measurements in multiple axes within VR holds immense importance and promises a wide array of potential applications. The benefits could prove to be multifaceted, potentially enhancing various industries from architecture and engineering to manufacturing and military applications. However, as shown in this protocol, its true potential shines in the realm of healthcare, directly within the surgical planning portions of patient care. Surgeons can utilize this tool to meticulously assess and plan all types of procedures by being able to visualize and measure angles directly in VR. This technique is similar to work done for cardiac catheterization19. One direct benefit of knowing particular angles pre-procedure is the significant reduction in the need for a full 360-degree spin during fluoroscopy, a commonly employed imaging technique during aneurysm repair. By determining the angles required to mimic the virtual surgical roadmap, the surgeons can position the equipment more accurately, thus minimizing the radiation exposure to the patient. This not only contributes to patient safety by minimizing risks associated with radiation exposure but also streamlines the surgical procedure. With reduced time spent on fluoroscopy adjustments, surgical teams can operate more efficiently, ultimately leading to shorter procedure times.
Recent advancements in 3D modeling and virtual reality technology allow medical staff to avoid improvisational thinking during surgeries by obtaining a deep understanding of a patient's internal anatomy prior to operation in all but the most urgent cases1,2,3,4,6,9,11,13,16. If time allows, medical staff should leverage the use of medical image segmentation and VR diagnostics to further their understanding of the case prior to placing the patient on the operating table. This will ultimately lead to a better understanding of each unique patient, as well as reduced surgery time and time under anesthesia.
The authors have nothing to disclose.
We extend a special thanks to the review committee for their insightful feedback, and to the editorial team for their invaluable comments, expertise, guidance, and support throughout the writing process of this article. We greatly appreciate the collaborative environment fostered by the Mission Partners at OSF HealthCare System, which enhanced the quality of this work. Thanks to OSF HealthCare System for providing resources and support and to the Advanced Imaging and Modeling Lab at Jump Simulation and Education Center for their assistance.
Name | Company | Catalog Number | Comments
3D Slicer | N/A | N/A | Open-source segmentation software
Blender | N/A | N/A | Open-source CAD software that can import and edit organic models created through segmentation
Enduvo | Enduvo | N/A | A proprietary VR viewer built for education, and our VR viewer of choice
McKesson PACS Change Healthcare Radiology Solution | McKesson | N/A | Any Picture Archiving and Communication System should be sufficient; McKesson is simply our PACS software solution of choice
Mimics | Materialise | N/A | Segmentation software
Quest | Oculus | N/A | Virtual reality headset
Steam VR | Steam | N/A | Computer-to-headset connection software
VR-capable computer | N/A | N/A | See Steam VR for minimum requirements
VR-STL-Viewer | GitHub | N/A | An open-source VR viewer capable of importing and viewing .stl files and can be used; however, we cannot guarantee all functionalities mentioned in this paper will be available