Datametrex: Big data meets conversational intelligence


We get to do some pretty cool stuff at B2B News Network, like having a real-life conversation with an astrophysicist about conversational intelligence.

Claude Théoret from Nexalogy joined Jen Evans from B2B News Network for a Twitter chat about their product, client work, and being acquired by Datametrex (DM.V).


You and I met in 2008 when social media listening was barely a blip on the horizon. Has conversational intelligence changed in the post-election era? I would imagine in the wake of all the fake accounts, Devumi and propaganda attacks that your work would be taking on an increasingly strategic focus.

Indeed, 2008 was in the nascent years of listening. We had built a tool that was more focused on weak-signal detection than on measurement. We were giving talks on the rise of fake news and bots as early as 2012, so by the time Trump arrived, our message was intelligence versus monitoring.

The methods of spreading fake news have been around for a while: content farms, bots, and astroturfing were all developed as greyhat/blackhat social media marketing strategies. Today those strategies have been weaponised.

In terms of fake accounts, MIT identified a network of roughly 350,000 Twitter accounts that can be used to make issues trend. There is also the good work of the Hamilton 68 project, which tracks roughly 650 suspected fake Russian accounts.


They certainly have, and it doesn’t seem to be getting better. For example: we talked recently about a list for a project we were working on, and you said to me, “we have to rerun this list; it’s 90% bots.” How do you identify that so quickly?

Nexalogy’s software is great at automatically identifying conversations, especially those a human would miss. In the case of the work we were doing, there were so many bots that the humans were the weak signals!

Some subjects have more bots spreading information than people. It usually gravitates around subjects where there is quick money to be made, and where “marketing” firms have already deployed a bot network for other purposes.


Wow. So the bots/fake accounts were so active you identified humans by looking for accounts that *underindexed* on engagement. That is fascinating.

Essentially, yes. Humans tend to share and create content organically; they don’t all share the same content or point to the same accounts. Bot networks are easily identified by a few criteria: less sophisticated bots only retweet content, bots tend to tweet in massive groups, and they will either tweet a single link or tweet at a single account. Our software clusters this behaviour quite easily.
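The criteria Théoret lists could be sketched roughly as follows. This is a minimal illustration, not Nexalogy’s actual software; the field names (`handle`, `tweets`, `is_retweet`, `link`, `mentions`) and thresholds are all assumptions for the sake of the example.

```python
from collections import Counter

def flag_likely_bots(accounts, retweet_ratio_cutoff=0.95, min_tweets=50):
    """Flag accounts matching the simple criteria described above:
    near-100% retweets, or hammering a single link / single target
    account. Illustrative heuristics only; thresholds are arbitrary."""
    flagged = []
    for acct in accounts:
        tweets = acct["tweets"]
        if len(tweets) < min_tweets:
            continue  # too little activity to judge reliably
        retweets = sum(1 for t in tweets if t["is_retweet"])
        links = Counter(t["link"] for t in tweets if t.get("link"))
        mentions = Counter(m for t in tweets for m in t.get("mentions", []))
        retweet_only = retweets / len(tweets) >= retweet_ratio_cutoff
        single_link = bool(links) and links.most_common(1)[0][1] / len(tweets) > 0.8
        single_target = bool(mentions) and mentions.most_common(1)[0][1] / len(tweets) > 0.8
        if retweet_only or single_link or single_target:
            flagged.append(acct["handle"])
    return flagged
```

A real system would cluster behaviour across accounts (e.g. identical posting times across thousands of handles) rather than score each account in isolation, which is what makes the “massive groups” signal so strong.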


What do marketers need to be concerned about in this kind of fluid, grey zone environment? How do people distill real patterns in an environment full of questionable actors?

That is a great question that sadly doesn’t have a clear answer. We have a great case study, done with PwC-Tech Forecast, where over 95% of the data came from content farms. Marketers need tools that can identify these bots and help filter them out.


With so much expertise in the area and so much happening on this front, are the kinds of clients you work with and the work you do expanding? Changing?

Yes, we used to work much more with agencies and brands. These organisations are much more concerned with pushing their message out and measuring that effect. We tend to focus on organisations that are more concerned with stakeholder analysis, risk management, and most importantly, leveraging insights.

Our biggest change was to work only with organisations that had the capacity and the will to act on the insights surfaced by our software. Most companies don’t have that capacity.


Do you have any recommendations for marketers when it comes to bot identification? That PwC stat is scary.

I would say look for behaviour that is almost too good to be true: a tweet that is only retweeted, with no replies, or an account that tweets hundreds or thousands of times on a single subject or hashtag.

Bots also tend to post highly repetitive content that is sometimes grammatically incorrect. Bots are getting smarter with #AI and #NLP, but they still make mistakes that really don’t sound human.
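The “highly repeated content” signal is simple enough for a marketer to compute themselves. A crude sketch, assuming you have an account’s tweet texts as a list of strings (the normalisation here is deliberately simplistic and only for illustration):

```python
from collections import Counter

def repeated_content_ratio(tweet_texts):
    """Fraction of an account's tweets that are exact duplicates after
    trivial normalisation (lowercasing, collapsing whitespace). A high
    ratio is a rough proxy for the repeated-content bot signal above."""
    normalised = [" ".join(t.lower().split()) for t in tweet_texts]
    counts = Counter(normalised)
    duplicates = sum(c for c in counts.values() if c > 1)
    return duplicates / len(normalised) if normalised else 0.0
```

An account where 80–100% of tweets are duplicates is worth a closer look; organic human accounts rarely score high on this, since people paraphrase even when sharing the same link.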


We’ve run into more than a few Trump-supporting accounts like that. Weaponized is right!

It has gone way beyond Trump directly. There is some great research coming out. The decision from Facebook to close down 500 pages was a very important milestone in this regard.


You recently sold to Datametrex, a Canadian company that trades on the venture exchange, and raised for your expansion as well. Why did you decide to go that route?

It was a good fit because they understood the value of data in #AI. It was also a way for Nexalogy to continue its growth and raise money from public markets. Datametrex has a strong growth strategy based on acquisition that I also liked.


Can you share a bit about the vision Datametrex has for the business?

For sure. The goal is to double down on our current success, raise money on the public markets to fuel the growth, grow the company through strategic acquisitions, and seek out new markets where our software is a fit, such as our recent IR division.


And congratulations on that launch. This was a fascinating conversation about how the social landscape is dramatically shifting and how Nexalogy is staying on top of it.



B2BNN Newsdesk


We marry disciplined research methodology and extensive field experience with a publishing network that spans globally in order to create a totally new type of publishing environment designed specifically for B2B sales people, marketers, technologists and entrepreneurs.