“Looks like a face. Appears to be male. 26-44 years old. Appears to be sad.”
These are the findings of an artificial intelligence application called Rekognition developed by Amazon, which is studying the image of a man attending the recent Canadian Cloud and DevSecOps event in Toronto. When quizzed, the attendee admits that he is, in fact, 42 years old. He is not particularly sad, but that’s okay; the AI application said it was only 62 per cent confident in its prediction about his emotional state.
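Findings like these come back from the service as structured data rather than a sentence. The sketch below shows how a Rekognition-style DetectFaces result might be turned into the summary quoted above; the `response` dict is hypothetical sample data shaped like the API's FaceDetails output (AgeRange, Gender, Emotions), not real output from the service.

```python
# Hypothetical sample data in the shape of a Rekognition DetectFaces response.
response = {
    "FaceDetails": [
        {
            "AgeRange": {"Low": 26, "High": 44},
            "Gender": {"Value": "Male", "Confidence": 99.1},
            "Emotions": [
                {"Type": "SAD", "Confidence": 62.0},
                {"Type": "CALM", "Confidence": 30.5},
                {"Type": "HAPPY", "Confidence": 7.5},
            ],
        }
    ]
}

def summarize_face(face):
    """Produce a one-line summary like the one quoted in the article."""
    # Pick the emotion the model is most confident about.
    top_emotion = max(face["Emotions"], key=lambda e: e["Confidence"])
    return (
        f"Appears to be {face['Gender']['Value'].lower()}. "
        f"{face['AgeRange']['Low']}-{face['AgeRange']['High']} years old. "
        f"Appears to be {top_emotion['Type'].lower()} "
        f"({top_emotion['Confidence']:.0f}% confidence)."
    )

for face in response["FaceDetails"]:
    print(summarize_face(face))
```

Note that the service reports an age range and a per-emotion confidence score rather than a single definitive answer, which is why the application hedged at 62 per cent.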
According to Darin Briskman, chief technical evangelist at Amazon Web Services (AWS), the race to improve the way technology can not only recognize humans but sense their inner feelings will be transformative across a range of activities, from customer service to marketing and even some elements of the selling process.
“We are entering a world where everybody expects analytics all the time,” he said. “I want basic audit analytics every time I pull up a document. In sales, I want to have ongoing analytics of what my successes are, alerts.”
When AI becomes more emotionally attuned, Briskman suggested, it can remedy the areas where technology has so far only created frustration. He pointed to interactive voice response (IVR), where customers or employees calling into an organization are walked through a series of automated menu options, as a prime example.
“Everybody knows and hates IVRs. We all hate listening through those trees,” he said. “How can we turn those human interactions into something that’s comparable to a real conversation?”
AWS has tested such applications internally, and Briskman said HR ratings went up by an order of magnitude once the company ditched IVR for its own team members.
While most people might see the immediate applications of AI for Amazon’s consumer e-commerce portal, Briskman said the opportunities for AWS customers in the enterprise were just as viable. He pointed to Lex, the chatbot service Amazon offers, which can be used to check things like sales numbers, marketing performance data and inventory status.
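A Lex bot of that kind typically hands the recognized intent to a fulfillment function that looks up the answer. The sketch below is a minimal Lambda-style handler for such a bot; the intent names, the stand-in metrics, and the response shape (modeled loosely on a Lex V2 Close dialog action) are illustrative assumptions, not code published by AWS.

```python
# Stand-in for a real sales/inventory data source the bot would query.
FAKE_METRICS = {
    "CheckSales": "Sales this quarter are up 12 per cent over target.",
    "CheckInventory": "Warehouse stock is at 94 per cent of planned levels.",
}

def lex_handler(event, context=None):
    """Fulfillment handler: map the recognized intent to an answer and
    close the conversation with a plain-text message."""
    intent = event["sessionState"]["intent"]["name"]
    answer = FAKE_METRICS.get(intent, "Sorry, I don't have that metric.")
    return {
        "sessionState": {
            "dialogAction": {"type": "Close"},
            "intent": {"name": intent, "state": "Fulfilled"},
        },
        "messages": [{"contentType": "PlainText", "content": answer}],
    }
```

The point of the design is that the bot handles language understanding, so the enterprise only has to supply the lookup logic behind each intent.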
Similarly, there are multiple use cases for converting text into speech, which is why Amazon is developing a service dubbed Polly that aims to make the result sound “less computery,” according to Briskman.
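One lever developers already have for making synthesized speech sound less robotic is SSML markup, which Polly accepts as input (via `TextType="ssml"` on the `synthesize_speech` call). The helper below is a sketch of that idea; the prosody rate and pause length are illustrative choices, not Amazon's recommendations.

```python
def to_conversational_ssml(text: str) -> str:
    """Wrap plain text in SSML that slows the speaking rate slightly and
    inserts a short pause between sentences, to soften the delivery."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    body = '<break time="300ms"/>'.join(f"{s}." for s in sentences)
    return f'<speak><prosody rate="95%">{body}</prosody></speak>'

ssml = to_conversational_ssml("Your order has shipped. It arrives Tuesday.")
print(ssml)
# The resulting string would then be handed to Polly, e.g.:
# polly.synthesize_speech(Text=ssml, TextType="ssml",
#                         OutputFormat="mp3", VoiceId="Joanna")
```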
All this has become more feasible for organizations, Briskman added, thanks to the compute capacity offered by cloud services such as AWS. He compared AI to the kind of eye test we get when we visit an optometrist. The technology is, in effect, a vast series of A/B tests, which would have been cost-prohibitive when more on-premises IT infrastructure was needed to support it.
“When I look back at those NASA supercomputers I used to work on 20 years ago — now that’s work I could do in the cloud for about three dollars,” he said. The challenge is figuring out how to strike the right balance between freeing up humans for creative tasks and robots and AI for other tasks. Sometimes it’s a fine line.
“Robots aren’t good at putting things in boxes. People aren’t good at putting things on shelves,” he said, noting that Amazon has been able to reduce injury rates by more than 80 per cent by shifting more of the shelf-stocking workload to machines.
“(Humans) are flexible and they’re smart and they’re unreliable — they don’t need to be right every time. We should use machines for when the reverse is true.”