A Comparative Knowledge Base Development for Cancerous Cell Detection by Deep Learning and a Fuzzy Computer Vision Toolbox

Global Summit on Radiology and Oncology
June 15-16, 2022 | Webinar

Subhasish Mohapatra

Adamas University, India

ScientificTracks Abstracts: J Clin Radiol Case Rep

Abstract

As cancer cells spread in a culture dish, Guillaume Jacquemet is watching. The cell movements hold clues to how drugs or gene variants might affect the spread of tumours in the body, and he is tracking the nucleus of each cell in frame after frame of time-lapse microscopy films. But because he has generated about 500 films, each with 120 frames and 200–300 cells per frame, that analysis is challenging, to say the least. "If I had to do the tracking manually, it would be impossible," says Jacquemet, a cell biologist at Åbo Akademi University in Turku, Finland. So he has trained a machine to spot the nuclei instead. Jacquemet uses methods available on a platform called ZeroCostDL4Mic, part of a growing collection of resources aimed at making artificial intelligence (AI) technology accessible to bench scientists who have minimal coding experience [1].

AI technologies encompass several methods. One, called machine learning, uses data that have been manually preprocessed and makes predictions according to what the AI learns. Deep learning, by contrast, can identify complex patterns in raw data. It is used in self-driving cars, speech-recognition software and game-playing computers, and to spot cell nuclei in massive microscopy data sets.

Deep learning has its origins in the 1940s, when scientists built a computer model organized in interconnected layers, like neurons in the human brain. Decades later, researchers taught these 'neural networks' to recognize shapes, words and numbers. But it wasn't until about five years ago that deep learning began to gain traction in biology and medicine.

A major driving force has been the explosive growth of life-sciences data. With modern gene-sequencing technologies, a single experiment can produce gigabytes of information. The Cancer Genome Atlas, launched in 2006, has collected information on tens of thousands of samples spanning 33 cancer types; the data exceed 2.5 petabytes (1 petabyte is 1 million gigabytes).
And advances in tissue labelling and automated microscopy are generating complex imaging data faster than researchers can possibly mine them. “There’s definitely a revolution going on,” says Emma Lundberg, a bioengineer at the KTH Royal Institute of Technology in Stockholm.
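The scale of the tracking task described above can be made concrete with a back-of-envelope calculation using the figures from the abstract (500 films, 120 frames each, 200–300 cells per frame; the 250 cells-per-frame midpoint below is an assumption for illustration):

```python
# Rough estimate of the manual annotation workload implied by the
# figures quoted above. The cells-per-frame midpoint is assumed.
films = 500
frames_per_film = 120
cells_per_frame = 250  # midpoint of the quoted 200-300 range

annotations = films * frames_per_film * cells_per_frame
print(f"Approximate nucleus annotations required: {annotations:,}")
# Approximate nucleus annotations required: 15,000,000
```

At roughly 15 million nucleus positions to mark, manual tracking is clearly infeasible, which is what motivates the automated deep-learning approach.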

Biography

Cancer biologist Neil Carragher caught his first glimpse of this revolution in 2004. He was leading a team at AstraZeneca in Loughborough, UK, that explored new technologies for the life sciences, when he came across a study that made the company rethink its drug-screening efforts. He and his team had been using cell-based screens to look for promising drug candidates, but hits were hard to come by. The study suggested that AI and analytics could help them to improve their screening processes [2]. "We thought this could be a solution to the productivity crisis," Carragher says.