A very good essay on AI in The Paris Review by Zachary Nason, a novelist and computer scientist who specialises in artificial intelligence.
There are threads in AI unrelated to deep learning, but none of them have ever really worked. Consider machine translation, as implemented in Google Translate. It's good enough for translating simple things and can convey the general sense of a text, but with anything nuanced or complicated it immediately falls apart: translated e-commerce websites are more or less usable, translated literature fails, and translated poetry is unintentionally funny.
The state of the art in machine translation is to use statistical techniques to find roughly equivalent chunks of text in the source and target languages and, lately, to blend in deep learning to find higher-order equivalences. There's no real understanding or representation of the text's meaning.
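To make the point concrete, here is a deliberately minimal sketch of the chunk-matching idea behind phrase-based statistical translation. The phrase table, its probabilities, and the language pair are all invented for illustration; real systems learn tables like this from millions of sentence pairs and add reordering and language models on top. What the sketch does show is the essay's claim: the system only swaps statistically associated chunks, with no representation of meaning anywhere.

```python
# Hypothetical phrase table learned from parallel text:
# source chunk -> list of (target chunk, probability) candidates.
# The entries and probabilities here are made up for illustration.
PHRASE_TABLE = {
    "the red car": [("la voiture rouge", 0.92), ("le wagon rouge", 0.03)],
    "is fast": [("est rapide", 0.88), ("va vite", 0.07)],
}

def translate(chunks):
    """Replace each source chunk with its most probable target chunk.

    Note there is no model of what the sentence means: unknown chunks
    simply pass through untranslated, and nuance is lost entirely.
    """
    out = []
    for chunk in chunks:
        candidates = PHRASE_TABLE.get(chunk)
        if candidates is None:
            out.append(chunk)  # no statistics for this chunk
        else:
            best, _prob = max(candidates, key=lambda c: c[1])
            out.append(best)
    return " ".join(out)

print(translate(["the red car", "is fast"]))
# -> la voiture rouge est rapide
```

Simple declarative sentences survive this process; anything whose meaning depends on context larger than a chunk does not, which is why e-commerce pages fare better than poetry.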
These limitations are inarguable and seemingly obvious, but many techies seem to be in a haze of futurist denial. I've spoken with Googlers who have the glazed intensity of the newly converted. They say things like, "Google has solved machine translation." Such statements convey no useful information about the technology but do speak to how, especially for the younger employees, their affiliation with their company is a primary engine of meaning in their lives. "Working at Google is like making science fiction!" I've heard many Googlers enthuse.