A surgeon is guided by computer and warned of potential problems. Photo courtesy of Digital Surgery

The world is short of surgeons — 143m people are denied operations each year and 5bn lack access to safe, affordable surgical services

Training surgeons is expensive and time-consuming. But autonomous robots could eventually perform many medical operations, as well as help standardise procedures and ensure best practice.

Totally robotic surgeons are still far off, but researchers are using artificial intelligence, machine learning and augmented reality to help train human surgeons and give them more assistance in theatre.

One approach focuses on the cognitive — rather than mechanical — process of operations, using data and algorithms to define them. 

This is the route being taken by Digital Surgery, a technology company with operations in London and North America. Jean Nehme, its co-founder and chief executive, says: “It is about understanding the decision-making that goes on in the brain of expert surgeons based on their training and experience.” 

In 2012, Digital Surgery began analysing operations as linear sequences of events, combining the data with simulated graphics to create an app, which surgeons can use to rehearse and train for real-life operations. 
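Digital Surgery has not published how it encodes procedures, but the underlying idea of a "linear sequence of events" is straightforward to sketch. The Python below is purely illustrative: the step, tool and risk names are hypothetical placeholders, not the company's data.

```python
# Illustrative sketch only, not Digital Surgery's data model. An operation
# is treated as an ordered list of named steps, each carrying the tools
# and risks relevant at that point. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Step:
    name: str
    tools: list[str] = field(default_factory=list)
    risks: list[str] = field(default_factory=list)

# A heavily simplified keyhole appendix removal as a step sequence.
procedure = [
    Step("port placement", tools=["trocar"], risks=["vessel injury"]),
    Step("identify appendix", tools=["grasper"]),
    Step("divide mesoappendix", tools=["diathermy"], risks=["bleeding"]),
    Step("staple appendix base", tools=["stapler"], risks=["stump leak"]),
    Step("retrieve specimen", tools=["retrieval bag"]),
]

for number, step in enumerate(procedure, start=1):
    print(f"{number}. {step.name} | tools: {', '.join(step.tools)}")
```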

Some 3.5m people have downloaded the app, including more than 500,000 surgeons. Studies have shown that it improves performance in the operating theatre, Mr Nehme says. 

Last year, Digital Surgery began using cameras to observe surgeons’ actions, understand what is happening and feed the information into an AI algorithm, enabling it to predict the next step.

Two screens are used to display potential risks and necessary surgical tools as the procedure progresses. The data are used to help improve subsequent operations. Trials will continue in 2019. 
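Neither the company nor the article spells out the algorithm, so the sketch below shows only the simplest conceivable version of "watch the steps so far and predict the next one": a first-order transition-count model built from hypothetical logs of past operations.

```python
# Hedged sketch: the actual model is not described in the article. This is
# the simplest form of next-step prediction, counting which step has most
# often followed the current one in past operation logs.
from collections import Counter, defaultdict

def train(sequences: list[list[str]]) -> dict[str, Counter]:
    """Count how often each step follows each other step."""
    transitions: dict[str, Counter] = defaultdict(Counter)
    for seq in sequences:
        for current, nxt in zip(seq, seq[1:]):
            transitions[current][nxt] += 1
    return transitions

def predict_next(transitions: dict[str, Counter], current: str) -> str | None:
    """Return the most frequently observed next step, or None if unseen."""
    counts = transitions.get(current)
    return counts.most_common(1)[0][0] if counts else None

# Hypothetical logs of three past operations.
logs = [
    ["port placement", "identify appendix", "divide mesoappendix"],
    ["port placement", "identify appendix", "divide mesoappendix"],
    ["port placement", "identify appendix", "staple appendix base"],
]
model = train(logs)
print(predict_next(model, "identify appendix"))  # -> divide mesoappendix
```

A predicted step can then be joined to the tools and risks recorded against it, which is the kind of prompt the two screens would display as the procedure progresses.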

The team is also exploring the use of a Microsoft HoloLens headset to display information to surgeons and the projection of images directly on to the patient’s body.

Sanjay Purkayastha, senior lecturer and consultant in general, laparoscopic and bariatric surgery at Imperial College London, is another pioneer.

A camera records everything the surgeon does during an operation, he says. “However brilliant surgeons are, if they do a lot of operations they’ll have things that go wrong. We want to learn from these mistakes and do things the same way every time, which produces better outcomes.”

Mr Purkayastha is piloting the use of digitised information to give trainees a better understanding of his procedures and what will be expected of them when they join his team. 

But while cameras can record images of surgeons’ hand movements, they cannot measure how hard they are pressing, how tightly they are grasping, and how quickly they are changing direction. 

This is the focus of research for Professor Carla Pugh, director of the technology-enabled clinical improvement centre at Stanford University. Prof Pugh is using sensors to track the force and velocity of surgeons’ hand movements in simulated procedures. 

The initial application is in training, she says. “The goal is to be able to know that someone has met the minimum competency in how they use their hands for procedures such as repairing a hernia.”

It is easy to see with the naked eye that one person is performing a procedure much better than another, simply by how smoothly or swiftly they move. The problem is that those visual observations cannot be translated directly into learning objectives.

“I can tell my trainees to move more smoothly. But that doesn’t give them the body mechanics and the detailed metrics to understand how to do that.”

Prof Pugh is using motion-tracking sensors to test how trainee surgeons use the instruments, for example in a simulated hernia repair. Their performance is measured, videoed and compared with best practice at each stage, so they can understand where they need to improve. 

“Like Olympic athletes, they can practise repeatedly until they understand the routine and where they need to improve. That is the goal in training surgeons.” The next step is to use sensors in real operations. 
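Prof Pugh's instrumentation is not detailed in the article, but the quantities she describes, force aside, can be derived from timestamped sensor positions by finite differences. The NumPy sketch below is a minimal illustration on entirely synthetic data, not her actual pipeline.

```python
# Illustrative sketch, not Prof Pugh's pipeline: derive speed and a simple
# smoothness measure (mean jerk magnitude) from timestamped 3D positions.
import numpy as np

def motion_metrics(t: np.ndarray, pos: np.ndarray) -> tuple[float, float]:
    """t: (n,) timestamps in seconds; pos: (n, 3) positions in metres.
    Returns (mean speed in m/s, mean jerk magnitude in m/s^3)."""
    vel = np.gradient(pos, t, axis=0)    # first derivative: velocity
    acc = np.gradient(vel, t, axis=0)    # second derivative: acceleration
    jerk = np.gradient(acc, t, axis=0)   # third derivative: jerk
    speed = np.linalg.norm(vel, axis=1)
    return float(speed.mean()), float(np.linalg.norm(jerk, axis=1).mean())

# Synthetic sample: a wobbly two-second reach. In training, a trainee's
# numbers would be compared with an expert benchmark at each stage; lower
# jerk at a similar speed indicates a smoother movement.
t = np.linspace(0.0, 2.0, 200)
pos = np.column_stack([0.05 * np.sin(4 * t), 0.1 * t, np.zeros_like(t)])
mean_speed, mean_jerk = motion_metrics(t, pos)
print(f"mean speed {mean_speed:.3f} m/s, mean jerk {mean_jerk:.2f} m/s^3")
```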

Being able to measure pressure will help create better surgical robots, says Richard Trimlett, a cardiothoracic surgeon and head of mechanical support at the Royal Brompton and Harefield NHS Foundation Trust in London.

The current generation of robots lacks tactile sensation, which is why it has failed to meet expectations, he says. Robotics is commonplace in operating theatres in the form of consoles and hand controls that enable surgeons to perform keyhole operations using robot arms through small incisions. 

“The robot gives high-definition vision and fine control, but I can’t feel if I’m damaging anything by pulling on the tissues or snapping strings,” Mr Trimlett says. “As a result it’s a severe limitation and makes most operations too difficult. It’s like trying to write without being able to feel the pen or the paper.” 

Robot surgeons would also need better tools, such as a suturing device being developed by UK-based Sutrue. Using a curved needle that rotates, it allows a stitch to be performed in one movement and can work around corners and tie a knot. 

The device, which has yet to gain regulatory approval, reduces the average time to make a stitch from 25 seconds to one-third of a second, a roughly 75-fold speed-up. It can be used one-handed by human surgeons, freeing the other hand for other tasks, and could extend the capabilities of robotic surgery.
