Social media companies have explained their reaction to the Dublin riots that resulted in millions of euro worth of damage to public infrastructure.
Tanaiste Micheal Martin said in the aftermath of the riots that he was concerned at the “rapid mobilisation of so many people via social media platforms”.
Dublin-headquartered companies Meta, TikTok and Google appeared before the Oireachtas media committee to discuss disinformation, media literacy and the response to the disorder in Dublin city that made headlines internationally.
Several members of the committee criticised social media site X, formerly known as Twitter, for not appearing before the committee.
Sinn Fein TD Imelda Munster said during the meeting that X’s non-attendance had been attributed in part to “ongoing legal proceedings”, and asked for clarity on what that referred to.
“I don’t think that’s good enough,” she said in relation to the suggestion of answering queries in writing or in a private session.
Fine Gael TD Ciaran Cannon encouraged X employees to “drop an email to the owner of Twitter”, tech billionaire Elon Musk, suggesting that he “desist from commenting on affairs within Ireland, which he patently knows nothing about”.
He added that Mr Musk had “personally served to stoke up hatred and conflict in recent times here in Ireland” and that “he should be deeply ashamed of those actions”.
Dualta O Broin, Meta’s head of public policy in Ireland, told the committee the company had taken “significant” steps to tackle false information, and had removed 1,000 pieces of misinformation from Facebook in the first half of this year.
He said the company had partnered with 26 factcheckers across the EU, and that “debunking” labels had been applied to 1.1 million posts on Facebook originating in Ireland in the first half of this year.
Fianna Fail TD Christopher O’Sullivan raised the issue of a voicenote “calling for immigrants to be killed” being shared on the Meta-owned messaging app WhatsApp, and asked if there had been a failure to curb it from spreading “like wildfire”.
Mr O Broin said virality had been identified as a “significant problem” for WhatsApp, where messages could be shared with “thousands upon thousands upon thousands of groups”, and that measures had been taken to limit this.
But because WhatsApp is an encrypted messaging service, he said, the company could not act on a message unless it was reported.
“On WhatsApp, because it is a fundamentally different tech, it is end-to-end encrypted, in that case we cannot scan the content of the messages, so we cannot go in and scan what is actually happening in the messages the same way we can with posts on Facebook on public profiles. We rely on users reporting the content from WhatsApp in app… and that’s not just because of end-to-end encryption, that’s because of EU law as well.”
Mr O Broin said that Facebook made contact with the gardai as the Dublin riots unfolded, the purpose of which was “to establish what is happening”.
“Law enforcement can then go through our portal to request certain actions in relation to accounts, in relation to particular types of content… but the discussion at that stage wouldn’t have been about removing content,” he said.
He said that, separate to that, a large team across the country ensured that “decisions were being made quickly and accurately” on posts on its platforms.
Ryan Meade, Google’s government affairs and public policy manager in Ireland, said the company was “committed to fighting the spread of misinformation online”.
He said that “helping people sort facts from fiction has never been more important, something we saw most recently during the disturbing events in Dublin”.
He said Google monitored both the knife attack on Parnell Square East and the Dublin riots to see whether they would trigger incitement to hatred or dangerous disinformation on its platforms.
“I should say on the first day we didn’t see our platforms being used in respect of those events, although our teams were monitoring it, and other proactive steps would have been, for example, locking down reviews and edits on Google Maps around the site of the incident,” he said.
He said Google subsequently had discussions with Ireland’s new media regulator Coimisiun na Mean about the steps it was taking and continued to monitor events through the weekend.
Susan Moss, head of public policy for TikTok Ireland, said it activated its crisis management protocols to remove violating content and to prevent the spread of misinformation.
“We activated our emergency factchecking procedure in collaboration with those factcheckers, we have a factchecking organisation here in Ireland and what they did is they were flagging content not just on TikTok, but horizon scanning across the internet and flagging to us these individual claims that they were seeing.
“That then helped us to stop that content spreading on TikTok.”
Ms Moss said there had been 25 individual claims or types of “stories”, giving as an example one claiming that the military was moving into O’Connell Street.
She said that while a post is being checked for accuracy, it will not be recommended to other users.
She said that TikTok’s response was “very fast moving” and “really confident”.
She said that “disinformation is not a new problem, but the internet provides a new avenue to an old challenge”.
“We treat disinformation with the utmost seriousness and we are committed to preventing its spread, while elevating authoritative information and investing in media literacy to help build resilience in our community.”
On the elections next year, Mr O Broin said Meta “recognise how significant the period ahead is… particularly here in Ireland in the next two years”.
In response to questioning from Sinn Fein Senator Fintan Warfield on whether grassroots posts about Gaza were being “censored”, Mr O Broin said the objective was to “apply the policies fairly”, but added that it was “a tricky balance to achieve”.
“Either the content or the behaviour can be violating, and the intention is to apply that fairly and equitably. It wouldn’t make any sense if we were to take any other approach because then we would be accused of being partisan.”
Asked about the use of ‘@’ symbols, misspelled words or hashtags to “bypass” moderation systems, all of the representatives said their content moderators were well aware of the approach, with Google’s Ollie Irwin describing it as an “old technique that goes back to spam days”.