Came across this NY Times article on FB ---- specifically how they moderate dangerous content. https://www.nytimes.com/2018/12/27/world/facebook-moderators…
Here are some key nuggets from the article:
The company, which makes about $5 billion in profit per quarter, has to show that it is serious about removing dangerous content. It must also continue to attract more users from more countries and try to keep them on the site longer.
How can Facebook monitor billions of posts per day in over 100 languages, all without disturbing the endless expansion that is core to its business? The company’s solution: a network of workers using a maze of PowerPoint slides spelling out what’s forbidden.
Every other Tuesday morning, several dozen Facebook employees gather over breakfast to come up with the rules, hashing out what the site’s two billion users should be allowed to say. The guidelines that emerge from these meetings are sent out to 7,500-plus moderators around the world. (After publication of this article, Facebook said it had increased that number to around 15,000.)
Moderators were once told, for example, to remove fund-raising appeals for volcano victims in Indonesia because a co-sponsor of the drive was on Facebook’s internal list of banned groups. In Myanmar, a paperwork error allowed a prominent extremist group, accused of fomenting genocide, to stay on the platform for months. In India, moderators were mistakenly told to flag for possible removal comments critical of religion.
“We have billions of posts every day, we’re identifying more and more potential violations using our technical systems,” Ms. Bickert (Facebook’s head of global policy management) said. “At that scale, even if you’re 99 percent accurate, you’re going to have a lot of mistakes.”
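The scale math in that quote is worth making explicit. Here's a quick back-of-envelope sketch (the article only says "billions of posts every day," so the 3-billion figure below is my own illustrative assumption, not a number from the piece):

```python
# Back-of-envelope: how many mistaken moderation calls does a 1% error
# rate produce at FB's scale? Daily volume is an assumption; the
# "99 percent accurate" figure is from Bickert's quote.
posts_per_day = 3_000_000_000  # assumed, for illustration
accuracy = 0.99                # per the quote

mistakes_per_day = round(posts_per_day * (1 - accuracy))
print(f"{mistakes_per_day:,} mistaken calls per day")  # 30,000,000 per day
```

Tens of millions of wrong calls a day, even at 99 percent accuracy. That's the point she's making.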
Think about that, folks. This is yet another problem for FB's walled-garden approach. Longer term, will an advertiser really prefer to buy inventory and park its precious brand inside a semi-tarnished walled garden? Or will that same advertiser move to a demand-side network that aggregates supply (like TTD) and carries little to no brand-tarnishing risk? If the latter generates ROI without the FB-style tarnishing I'm describing…well, I think we know the answer to that question!
A few years ago, advertisers didn’t have the same digital options they do now ---- and FB capitalized on that. TTD’s TTM revenue is $419M. FB’s is $51.9Bn. TTD doesn’t need to get a whole ton of share from FB/other walled gardens or from the addressable market itself to move its market cap.
Anyway, I thought the article was interesting. Moderators having to make judgment calls on potentially dangerous content within 8 to 10 seconds…using PowerPoint (the article also mentions Excel). Ouch. Yes, Zuck wants to automate all this with AI. I get that. But can an algorithm quickly account for the latest domestic issue in any given country before a human tells the bot what the rule is? Not easily, and not quickly. In related fashion, as the article points out, this model puts social networks in the position of making judgment calls that are traditionally the job of the courts.
All this general air of negativity around FB is probably yet another reason why TTD keeps growing its share.