Deep learning networks may prefer the human voice – as we do

Researchers at Columbia University have found that artificial intelligence systems can reach higher levels of performance when they are trained with sound files of human language. The researchers speculated that neural networks might learn faster and better if they were “trained” to recognize objects and animals using one of the world’s most highly evolved sounds: the human voice uttering specific words. Columbia University is a NYSERNet member. Read more here:

Image credit & caption

A deep neural network that is taught to speak shows improved learning.

Credit: Creative Machines Lab/Columbia Engineering