
Another neural network taught to diagnose disease from X-rays



Google, IBM, and other corporations have long been working to create an AI (in its weak form) capable of analyzing X-ray images. Why? Radiology specialists, and not only they, have to spend a great deal of time analyzing medical images. There are many such images, and each one must be reviewed and reported on within a limited time.

A specialist has very little time to analyze each X-ray image. It is fine if the doctor is fresh and alert when viewing the image. But what if it is already the end of the working day, after a couple of hundred similar images have been reviewed? The human factor is very strong here, and the probability of error rises many times over. To make the specialist's task easier, scientists are trying to use the capabilities of artificial intelligence.

Another problem for physicians who regularly review medical images (not necessarily radiographs) is the "satisfaction of search" error: having found one problem in an image, the doctor may stop looking for others, decide that the initial assumption is correct, and make a diagnosis immediately. The consequences for the patient can be quite serious, since the problem that was found is not always a manifestation of the underlying disease.

Now a team of scientists led by Andrew Ng has taken up the development of a neural network that searches medical images for manifestations of various diseases. The experts created a neural network trained on a database of several tens of thousands of images (almost 50 thousand) obtained from more than 14 thousand studies. Each image had previously been analyzed by doctors, who made a diagnosis and marked the radiograph as normal or pathological.
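The article does not describe the model's architecture, so here is only a minimal sketch of the general approach it describes: supervised learning on examples labeled normal (0) or pathological (1). A single logistic unit trained by gradient descent stands in for the real network, and the "images" below are invented two-number feature vectors, purely for illustration:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, epochs=500, lr=0.5):
    """Fit a single logistic unit: the simplest stand-in for
    'learn from doctor-labeled images' as described in the article."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """1 = pathological, 0 = normal."""
    return 1 if sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) >= 0.5 else 0

# Toy "images": two features standing in for pixel statistics (invented data).
xs = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
ys = [0, 0, 1, 1]
w, b = train(xs, ys)
print([predict(w, b, x) for x in xs])  # separable toy data -> [0, 0, 1, 1]
```

A real system would replace the logistic unit with a deep convolutional network and the toy vectors with the actual radiographs, but the training loop follows the same labeled-example principle.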


The results of the neural network and of three radiologists

The effectiveness of the trained neural network was compared with the work of three radiologists. As it turned out, in two cases the neural network barely lagged behind the human, and in one case it surpassed him. Overall, the computer correctly detected abnormalities in 74.9% of cases. Notably, the scientists opened the results and materials of their research to the world: the database on which the neural network was trained was made publicly available on the Stanford website, ready to be used to train other neural networks.
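The 74.9% figure is a simple per-image detection accuracy: the share of images where the model's reading matches the doctors' gold label. Computing it is straightforward (the labels below are invented for illustration, not taken from the study):

```python
def accuracy(predictions, ground_truth):
    """Fraction of images where the reading matches the gold label
    (1 = pathological, 0 = normal)."""
    assert len(predictions) == len(ground_truth)
    correct = sum(p == g for p, g in zip(predictions, ground_truth))
    return correct / len(ground_truth)

# Invented example: gold labels vs. a model's reads on eight images.
gold  = [1, 0, 1, 1, 0, 0, 1, 0]
model = [1, 0, 0, 1, 0, 1, 1, 0]
print(f"{accuracy(model, gold):.1%}")  # 6 of 8 correct -> 75.0%
```

The same function applied to the radiologists' reads allows a like-for-like comparison of human and machine on the same test set.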

Neural networks also work with other types of medical images. For example, a deep neural network is learning to recognize traces of disease in positron emission tomography (PET) scans of the brain. This concerns Alzheimer's disease, which is characterized by amyloid plaques and slowed brain metabolism.

Scientists had previously found that some types of PET scans can detect signs of these negative conditions. Consequently, the technology could work to detect mild cognitive impairment in people, disorders that later lead to the onset of Alzheimer's disease.

True, interpreting the resulting images is quite difficult for human scientists, but a neural network can cope with this using just one or two markers. To train the computer system, the specialists used brain images of 182 healthy people around 70 years of age and 139 brain images of people of about the same age diagnosed with Alzheimer's disease. As a result, the AI was able to distinguish a healthy brain from a diseased one with a high degree of accuracy: above 90%.
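A single accuracy number can hide the balance between the two kinds of error a diagnostic classifier makes, which is why such studies usually also report sensitivity (diseased brains correctly flagged) and specificity (healthy brains correctly cleared). A minimal sketch; the confusion counts below are invented for the 139-diseased / 182-healthy cohort sizes mentioned above, not results from the study:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """tp/fn: diseased scans flagged / missed;
    tn/fp: healthy scans cleared / falsely flagged."""
    sensitivity = tp / (tp + fn)          # share of diseased detected
    specificity = tn / (tn + fp)          # share of healthy cleared
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Invented counts: 139 diseased scans (128 caught), 182 healthy (168 cleared).
sens, spec, acc = diagnostic_metrics(tp=128, fn=11, tn=168, fp=14)
print(f"sensitivity={sens:.1%} specificity={spec:.1%} accuracy={acc:.1%}")
```

With counts like these, all three metrics land above the 90% mark the article cites, while still showing that misses and false alarms are tracked separately.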



As for Andrew Ng and his team, they are also trying to apply the neural network's capabilities to another project, this one concerning patients with very serious illnesses and palliative care. The neural network tries to predict how grave a patient's condition is (mostly this concerns very old people). If the disease is progressive and leaves the patient no more than a year to live, a team of palliative care specialists steps in and tries to mitigate, to some extent, the negative manifestations of the disease (pain, psychological distress, and so on). The problem is that the team must begin its work at the right time for the effect to be maximal, and here too the neural network shows significant success.

In general, scientists now regard AI (in its weak form) as an assistant to the doctor rather than an alternative, so to speak. Neural networks help a specialist spot all sorts of problems, and the human doctor then makes the precise diagnosis with the help of these digital assistants. As a result, time is saved and diagnostic accuracy improves. In time, neural networks may become reliable assistants to doctors; today this practice is still experimental, but the results obtained so far inspire healthy optimism about the possibilities of computer technology in a field like healthcare.

Source: https://habr.com/ru/post/409795/