Can we unlock the Deep Learning black box?

Troy Sadkowsky - Wednesday, June 28, 2017

Artificial intelligence has experienced a boom in recent years, driven by increasing automation and the generation of big data. Thanks to deep learning and artificial neural networks, machines are acquiring new perceptual abilities, such as recognizing images and speech or reading handwriting, paving the way for countless new applications in human lives. However, this leap forward in AI comes at a cost – the reasoning behind the choices that deep learning networks make has become inscrutable, even to the engineers who built them.

Progress in automated vision greatly benefits from deep learning because general visual processing is far too complex to code by hand. A program that can learn from human example or from complicated training sets, and can generate its own algorithms through an interconnected network of a dozen to several hundred layers of simulated neurons, brings us that much closer to a self-driving car, or to other AI programs capable of sophisticated automated decision-making (a bare-bones version of such a layered network is sketched below). But if we cannot understand a program’s individual decisions – why it chose to drive into a tree, why a patient’s medications should be changed, why one individual was hired for a job and another flagged as a terrorist – what ethical and moral boundaries are we crossing? And if an artificial intelligence program is not 100% accurate, as such programs rarely are, what are the dangers of putting ourselves at the mercy of a machine’s mistakes?
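
As a rough illustration of what a "network of layers of simulated neurons" looks like in code, here is a minimal sketch (not from the article) of a deep feedforward classifier written with PyTorch; the layer count, layer width and input size are arbitrary placeholders.

```python
# Minimal sketch of a deep network: a stack of fully connected layers of
# "simulated neurons". Depth is just a parameter; real vision systems use
# far more elaborate architectures. All sizes below are illustrative.
import torch
import torch.nn as nn

def make_deep_net(n_layers: int = 12, width: int = 256,
                  n_inputs: int = 784, n_classes: int = 10) -> nn.Sequential:
    layers = [nn.Flatten(), nn.Linear(n_inputs, width), nn.ReLU()]
    for _ in range(n_layers - 2):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, n_classes))
    return nn.Sequential(*layers)

net = make_deep_net(n_layers=12)             # "a dozen" layers of neurons
logits = net(torch.randn(1, 1, 28, 28))      # one fake 28x28 grayscale image
print(logits.argmax(dim=1))                  # the decision, with no explanation attached
```

Once trained, the millions of weights inside such a stack encode the "algorithm" the network has generated for itself, and that is precisely the part that resists human inspection.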

According to an article in MIT Technology Review (https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/), trust will be a key factor in future applications of deep learning. Once machines can explain their reasoning to humans, we will be better able to learn from and act on their insights. Research is underway to develop tools that let machine learning programs explain themselves. Regina Barzilay, at MIT, is developing a system that can collaborate with doctors by extracting snippets of text that represent the patterns it has discovered, and Carlos Guestrin’s system, from the University of Washington, highlights significant keywords or parts of an image to support a particular decision.
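
The basic idea behind keyword-highlighting explanations of this kind fits in a few lines of code. The sketch below is a toy illustration under stated assumptions, not Barzilay’s or Guestrin’s actual system: it removes one word at a time from the input and ranks words by how much the model’s confidence drops, with `predict_proba` standing in for any trained text classifier.

```python
# Toy perturbation-based explanation: hide one word at a time and measure
# how much the classifier's confidence drops. Big drops mark the words the
# decision most depends on.
from typing import Callable, List, Tuple

def keyword_importance(text: str,
                       predict_proba: Callable[[str], float]) -> List[Tuple[str, float]]:
    words = text.split()
    base = predict_proba(text)
    scores = []
    for i, word in enumerate(words):
        perturbed = " ".join(words[:i] + words[i + 1:])
        scores.append((word, base - predict_proba(perturbed)))
    return sorted(scores, key=lambda s: s[1], reverse=True)

# Stand-in "model": confidence is just the fraction of flagged words in the text.
FLAGGED = {"malignant", "tumor"}
toy_model = lambda t: sum(w in FLAGGED for w in t.split()) / max(len(t.split()), 1)

print(keyword_importance("biopsy shows malignant tumor cells", toy_model))
```

Real systems replace single-word deletion with more careful perturbations and a properly trained model, but the output is similar in spirit: a short list of the words (or image regions) that most support the decision.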

However, interpretable deep learning is still far off, and even then, explanations offered by machines will necessarily be simplified to some degree, which will keep trust in machine learning programs controversial. As a consequence, the European Union may soon pass legislation making explanations for automated decisions a fundamental legal right, required of the companies that deploy AI systems.

Jeff Clune, an investigator at the University of Wyoming who tests deep neural networks, suggests that, just as with human intelligence and decision-making, it is perhaps “the nature of intelligence that only part of it is exposed to rational explanation. Some of it is just instinctual, or subconscious, or inscrutable”. As deep learning offers ever more appealing possibilities in medicine, technology and other industries, when, if ever, should society take the deep learning leap of faith?
