A Dual Mode Photoacoustic Visual Servoing System to Track Needle and Catheter Tips during Surgical Procedures Using Beamformed Images or Raw Channel Data

Case ID:
C17437
Disclosure Date:
5/24/2022

Unmet Need:

Needle-guided procedures such as catheterizations and biopsies are essential tools for diagnosing and treating disease. In the US alone, over 2.7 million catheterization procedures and over 3 million breast cancer biopsies are performed every year (see iData and CST). Real-time imaging of biopsy needles helps physicians guide the needle precisely to the target area. Ultrasound has become the preferred imaging modality for needle tracking because of its portability, low cost, and absence of ionizing radiation (see GS). However, ultrasound guidance is often not viable in acoustically challenging environments, such as transcranial, abdominal, and spinal surgeries, because of acoustic scattering and signal attenuation. In these environments, acoustic artifacts are difficult for image segmentation software to interpret, resulting in unreliable images. Therefore, there is a strong need for improved image resolution and segmentation models that enable needle tracking with ultrasound in acoustically challenging environments.

 

Technology Overview

Johns Hopkins researchers have developed a real-time autonomous photoacoustic tracking system that precisely tracks needle and catheter tips during surgical procedures. The system offers two image segmentation techniques to identify the target in the imaging data. In the first, a human-interpretable image is reconstructed using traditional beamforming algorithms and the target is segmented from the beamformed image. This technique provides target position estimates to human operators within the context of the surrounding tissue visible in the beamformed images. In the second, deep learning segmentation software interprets and segments the raw imaging data to obtain the target position, with a focus on target tracking via robotic visual servoing. Specifically, a convolutional neural network identifies and localizes photoacoustic targets formed by needle and catheter tips in tissue while filtering out artifacts. This dual-mode approach can be extended to a variety of applications involving photoacoustic-guided surgical procedures.
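The visual-servoing concept described above can be illustrated with a minimal sketch: segment the photoacoustic target from an image, then apply a proportional control update that steers the probe toward it. This is an illustrative stand-in only, not the disclosed system: the `segment_target` centroid thresholding below is a hypothetical placeholder for the beamformed-image or CNN segmentation step, and the pixel scale, gain, and function names are assumptions.

```python
import numpy as np

def segment_target(image, threshold=0.5):
    """Locate the photoacoustic target as the centroid of pixels above a
    relative intensity threshold (placeholder for the beamformed-image or
    CNN segmentation described above)."""
    mask = image >= threshold * image.max()
    ys, xs = np.nonzero(mask)
    return np.array([ys.mean(), xs.mean()])  # (row, col) in pixels

def servo_step(probe_pos, target_px, image_center_px, px_to_mm=0.1, gain=0.5):
    """Proportional visual-servoing update: move the probe so the segmented
    tip drifts toward the image center. Scale and gain are illustrative."""
    error_mm = (target_px - image_center_px) * px_to_mm
    return probe_pos + gain * error_mm

# Toy frame: a single bright "tip" at pixel (40, 70) in an 80x100 image.
img = np.zeros((80, 100))
img[40, 70] = 1.0
tip = segment_target(img)                                   # -> [40., 70.]
new_pos = servo_step(np.zeros(2), tip, np.array([40.0, 50.0]))
```

In practice the segmentation step would be replaced by the trained network (or beamformed-image segmentation), and the update would be sent to the robot holding the ultrasound probe each frame.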

 

Stage of Development

Conceptual

 

Publication

N/A

Patent Information:
Title: METHODS AND SYSTEMS FOR PHOTOACOUSTIC VISUAL SERVOING
App Type: PCT: Patent Cooperation Treaty
Country: PCT
Serial No.: PCT/US2023/023695
File Date: 5/26/2023
Patent Status: Pending
For Information, Contact:
Lisa Schwier
lschwie2@jhu.edu
410-614-0300