Google’s image recognition “deep learning” systems have evolved to the point where the engineers working on them no longer understand how they are “thinking”.
At the Machine Learning Conference in San Francisco last week, Google engineer Quoc V. Le revealed that the company's image recognition cluster has learned how to identify items which the programmers don't even know how to describe algorithmically.
Many of Quoc's pals had trouble identifying paper shredders when he showed them pictures of the machines, he said. The computer system has a higher success rate, and he isn't quite sure how he could write a program to do this.
At this point in the presentation another Googler who was sitting next to our humble El Reg hack burst out laughing, gasping: “Wow.”
“We had to rely on data to engineer the features for us, rather than engineer the features ourselves,” Quoc explained.
This means that for some things, Google researchers can no longer explain exactly how the system has learned to spot certain objects: the software appears to "think" independently of its creators, and its complex internal processes are inscrutable. This "thinking" operates within an extremely narrow remit, but it is demonstrably effective and independently verifiable.
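Quoc's point is easiest to see in miniature. The following is a minimal sketch, not Google's code, showing the difference between a feature a programmer engineers by hand and one the data engineers for you; the use of PyTorch and the toy edge-detection task are our own assumptions for illustration:

```python
# A minimal sketch (not Google's system) contrasting a hand-engineered
# feature with one learned from data. PyTorch is an assumed framework;
# any autodiff library would do.
import torch
import torch.nn as nn

# Hand-engineered feature: the programmer fixes the filter weights
# (here, a classic vertical-edge detector) from domain knowledge.
sobel = torch.tensor([[-1., 0., 1.],
                      [-2., 0., 2.],
                      [-1., 0., 1.]]).view(1, 1, 3, 3)

# Learned feature: the filter starts random and is adjusted to fit the
# data. Whatever detector emerges is whatever the data implies.
conv = nn.Conv2d(1, 1, kernel_size=3, bias=False)
optimizer = torch.optim.SGD(conv.parameters(), lr=0.1)

# Toy data: random images, with targets produced by the hand-built
# filter standing in for ground truth.
images = torch.randn(64, 1, 8, 8)
targets = nn.functional.conv2d(images, sobel)

for step in range(200):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(conv(images), targets)
    loss.backward()
    optimizer.step()

# The learned kernel ends up approximating the hand-built one, yet
# nothing in the code says "edge": the data engineered the feature.
print(conv.weight.detach().view(3, 3))
```

Scale that idea up to millions of filters stacked in layers and you get Quoc's problem: each individual weight is just a number fitted to data, and no engineer can point to the line of code that defines "paper shredder".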