“It’s going to be a big issue,” Geoffrey Hinton said at a Reuters Newsmaker event in Toronto on Monday (Dec. 4).
Hinton should know: He led a group of scientists at the University of Toronto who developed some of the key algorithms that neural networks (programs modeled loosely on the human brain) use to crunch massive quantities of data and train themselves to recognize patterns, enabling them to perform complex tasks much as a person would, such as driving a car, analyzing potential financial trades or diagnosing diseases from medical images.
Those advances enabled Google to add voice recognition to Android mobile devices, and researchers have also used them to cut error rates in optical recognition compared with earlier technology, said Hinton, who is also an executive at Google.
And since neural networks teach themselves to perform complex operations, it is impossible for their developers to tell government regulators exactly how those systems work.
“All you need is lots and lots of data and lots of information about what the right answer is, and you’ll be able to train a big neural net to do what you want,” he said.
Hinton believes that deep learning is close to revolutionizing the way certain diseases are treated, in part by enabling more accurate diagnoses. He expects, for example, mobile apps that use neural networks to examine images of skin lesions and advise users when to see a doctor for a possible biopsy.
“We’d like to make medicine better,” Hinton said.
Source: (Pymnts, 2017)