Google’s image recognition “deep learning” systems have evolved to the point where the engineers working on them no longer understand how they are “thinking”.

At the Machine Learning Conference in San Francisco last week, Google engineer Quoc V. Le revealed that the company's image recognition cluster has learned how to identify items which the programmers don't even know how to describe algorithmically.

The Register reports:

Many of Quoc's pals had trouble identifying paper shredders when he showed them pictures of the machines, he said. The computer system has a greater success rate, and he isn't quite sure how he could write a program to do this.

At this point in the presentation another Googler who was sitting next to our humble El Reg hack burst out laughing, gasping: “Wow.”

“We had to rely on data to engineer the features for us, rather than engineer the features ourselves,” Quoc explained.

This means that for some tasks, Google's researchers can no longer explain exactly how the system has learned to spot certain objects: the program appears to "think" independently of its creators, and its complex internal processes are inscrutable. This "thinking" operates within an extremely narrow remit, but it is demonstrably effective and independently verifiable.
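To make "relying on data to engineer the features" a little more concrete, here is a toy sketch of our own (it does not reflect Google's actual system, and every name and number in it is an illustrative assumption). A hand-engineered feature is a filter a human writes down in advance; a data-engineered one starts as noise and is shaped purely by how well it separates the examples:

```python
import numpy as np

# Hypothetical illustration only: hand-engineered vs data-engineered features.
rng = np.random.default_rng(0)

def make_image(has_edge):
    """An 8x8 grey image, optionally with a vertical edge down the middle."""
    img = np.full((8, 8), 0.5)
    if has_edge:
        img[:, 4:] = 1.0
    return img + rng.normal(0.0, 0.02, (8, 8))

images = [make_image(i % 2 == 0) for i in range(40)]
labels = np.array([1.0 if i % 2 == 0 else 0.0 for i in range(40)])

def response(img, kernel):
    """Mean absolute 3x3 filter response over the image."""
    total = 0.0
    for r in range(6):
        for c in range(6):
            total += abs(np.sum(img[r:r+3, c:c+3] * kernel))
    return total / 36.0

def separation(kernel):
    """How strongly the filter separates edge images from flat ones."""
    resp = np.array([response(img, kernel) for img in images])
    return resp[labels == 1.0].mean() - resp[labels == 0.0].mean()

# Hand-engineered feature: a fixed Sobel-style vertical-edge filter,
# written by a human who already knows what an edge looks like.
sobel = np.array([[-1.0, 0.0, 1.0],
                  [-2.0, 0.0, 2.0],
                  [-1.0, 0.0, 1.0]])
hand_sep = separation(sobel)

# "Data-engineered" feature: start from noise and keep any random tweak
# that separates the classes better (crude hill climbing, standing in
# for the gradient descent a real deep network would use).
kernel = rng.normal(0.0, 0.1, (3, 3))
start_sep = separation(kernel)
best_sep = start_sep
for _ in range(120):
    candidate = kernel + rng.normal(0.0, 0.05, (3, 3))
    cand_sep = separation(candidate)
    if cand_sep > best_sep:
        kernel, best_sep = candidate, cand_sep
learned_sep = best_sep

print(f"hand-engineered separation: {hand_sep:.3f}")
print(f"learned separation: {start_sep:.3f} -> {learned_sep:.3f}")
```

The point of the sketch is the inspectability gap the article describes: the Sobel filter's weights mean something to the engineer who wrote them, while the learned kernel is just whatever numbers happened to work, even though it may discriminate as well or better.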

 


About the Author: Conn Ó Muíneacháin
Conn Ó Muíneacháin works at Blacknight, Ireland's largest provider of domains and hosting. He's an award-winning radio producer, podcaster and blogger. He's an engineer as well (not the award-winning kind). Conn produces video for Blacknight and edits Technology.ie. Labhair Gaeilge leis! (Speak Irish with him!)

Categories: General, Software | Last Updated: November 19, 2013
