Adobe is working on enhancements to its Sensei artificial intelligence tool that will make it easier for marketers to search for images through its stock service without having to use words to describe what they need.
In a presentation to designers at the Flash In The Can Toronto event on Monday, Adobe Stock group product manager Shambhavi Kadam showed a feature dubbed selective similarity, still in the lab stage, she said, that makes it easy to “mask” areas of an image and have the system find related photos or illustrations.
Bringing up an image of a woman leaping in the air with a long cloth ribbon flowing behind her, for instance, Kadam showed how the basic version of Sensei would immediately suggest other images where a person was also jumping and holding a long cloth ribbon. By placing a mask over the woman alone, however, the system was able to find images of people that focused on the act of jumping by itself.
“You don’t need to describe what she’s doing — you can show Sensei what you want,” Kadam explained. When one of the next batch of recommended images showed a family jumping on a beach, she put a mask over the sand and water to bring up a set of images of other people jumping on a beach. “I haven’t had to type anything in here.”
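The underlying idea of selective similarity — comparing images on a user-masked region rather than the whole frame — can be illustrated with a minimal sketch. All names here are hypothetical and the feature representation is a toy; this is not Adobe's actual API, only one plausible way such a search could work.

```python
# Illustrative sketch of masked similarity search: average the feature
# vectors inside a user-drawn mask, then rank a library of stock images
# by cosine similarity against that region-only descriptor.
import math
from typing import Dict, List

Vector = List[float]


def masked_features(pixels: List[List[Vector]],
                    mask: List[List[bool]]) -> Vector:
    """Average the per-pixel feature vectors that fall inside the mask."""
    total: Vector = []
    count = 0
    for pixel_row, mask_row in zip(pixels, mask):
        for feat, keep in zip(pixel_row, mask_row):
            if keep:
                total = feat[:] if not total else [a + b for a, b in zip(total, feat)]
                count += 1
    return [v / count for v in total]


def cosine(a: Vector, b: Vector) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


def rank_by_similarity(query: Vector,
                       library: Dict[str, Vector]) -> List[str]:
    """Return stock-image ids ordered most-similar first."""
    return sorted(library, key=lambda name: cosine(query, library[name]),
                  reverse=True)
```

Masking the subject alone, as in the demo, would mean the descriptor captures the jumping figure while ignoring the ribbon, so ribbon-free jumping photos rank higher.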
Concept canvas, meanwhile, lets users start by making a query for images with a human face. By dragging that query to one side of the canvas, Sensei will narrow the search to images on Adobe Stock with a face on the right. By adding a query for a flower next to the face, the system will retrieve images of people with flowers next to their faces.
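A concept-canvas style query can likewise be sketched as matching labelled objects against horizontal regions of the frame. The object-detection representation below is an assumption for illustration only, not a description of Adobe's implementation.

```python
# Illustrative sketch: each stock image is summarized as detected object
# labels with normalized x positions (0.0 = left edge, 1.0 = right edge).
# A canvas query asks that each label appear within a horizontal range.
from typing import Dict, List, Tuple

Range = Tuple[float, float]


def matches(image_objects: Dict[str, float],
            query: Dict[str, Range]) -> bool:
    """True if every queried label appears within its (lo, hi) x-range."""
    for label, (lo, hi) in query.items():
        x = image_objects.get(label)
        if x is None or not (lo <= x <= hi):
            return False
    return True


def search(library: List[Tuple[str, Dict[str, float]]],
           query: Dict[str, Range]) -> List[str]:
    """Return the ids of library images satisfying the spatial query."""
    return [name for name, objects in library if matches(objects, query)]
```

Dragging the face query to the right side of the canvas corresponds to tightening its x-range toward 1.0; adding the flower query adds a second constraint alongside it.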
These kinds of tweaks will bring increased value to creative professionals, who often spend long stretches searching fruitlessly for a particular kind of stock image to go with a campaign, according to Kadam. “Who knows if the artist tagged it with the same elements you’re looking for?”
Sensei already makes this easier with aesthetic filters that allow users to look for specific qualities in a stock image, such as one where the subject is in focus but the background is blurry, or images with more muted tones. Creativity and AI work well together, she said, since it can be difficult to articulate certain concepts in simple text.
Like many organizations, Adobe is using AI to differentiate itself from competitors and bring increased automation to mundane tasks. Last week, for example, Adobe added Sensei features to video tools in Creative Cloud, and in March it partnered with Nvidia to have its technology tuned for the chip firm’s graphics processing units (GPUs).
Kadam said using AI for Adobe Stock is one way of solving the “content velocity” problem in marketing departments, where teams are tasked with generating a higher volume of creative assets that are more personalized to specific customers, who are competing for attention like never before.
“It’s not like they’re giving you more time to do it,” she said, adding that many organizations are also working with smaller budgets.
AI can also look for trends, anomalies and spikes in customer behavior to synthesize information and make recommendations that help prioritize decision-making, she added.
After her talk, an audience member asked Kadam if Sensei would eventually be able to offer a description of an image, and she said it was likely in the pipeline. “To have it say, ‘Man sitting in a conference room asking a question’ — it will be able to spit back a sentence like that,” she said.
FITC Toronto wraps up Tuesday.