Select Committee on Social Media and Online Safety
22/12/2021
Online harms that may be faced by Australians on social media and other online platforms

JABRI-MARKWELL, Ms Rita, Adviser, Australian Muslim Advocacy Network [by video link]

CHAIR: I welcome representatives of the Australian Muslim Advocacy Network to give evidence today. There are some formalities before we commence. The committee does not require you to give evidence under oath; however, I should advise you that this hearing is a legal proceeding of the parliament and therefore has the same standing as proceedings of the respective houses. The giving of false or misleading evidence is a serious matter and may be regarded as contempt of parliament. The evidence given today will be recorded by Hansard and attracts parliamentary privilege. Ms Jabri-Markwell, I invite you to make a brief opening statement or to provide any observations or contextual comments in relation to your own experiences or those of your members, and then we will proceed to discussion. Thank you so much for appearing today.

Ms Jabri-Markwell : Thank you so much for inviting AMAN, the Australian Muslim Advocacy Network. We're a national law advocacy organisation. I'm appearing today on behalf of AMAN. AMAN's goal is to prevent another Christchurch massacre. We're a civil society organisation that researches and monitors online discourse and dialogues on policy. We test existing laws that are there to protect the community and safeguard human dignity. Our directors include Australian Muslims who are recognised for their contributions to the Australian community, their chosen fields and their community advocacy. AMAN has engaged directly and extensively with platforms such as Facebook and Twitter. AMAN is a Christchurch Call Advisory Network member.

I am personally a member of the Global Internet Forum to Counter Terrorism's legal frameworks working group. This year I published research with two directors from AMAN into the dehumanisation of minorities on Facebook and Twitter. I also co-published a proposal for the benefit of the Global Internet Forum to Counter Terrorism on their taxonomy for terrorist and violent extremist content with a team from the University of Queensland. AMAN has made comprehensive submissions to various inquiries of this parliament. I co-authored a submission with Professor Katharine Gelber from the University of Queensland on the need for a statutory duty of care in Australia that applies to platforms.

Coming up to three years ago, I was on maternity leave from being a schoolteacher and heavily pregnant when my family and friends began to share the news that someone was killing men, women and children as they prayed in two mosques in New Zealand. Then I learned that that someone was an Australian boy. I was terrified for my family that day, and that sinking feeling has never gone away. Tarrant treated those men, women and children as characters in a video game, as if they were not real or natural or human. Australia has never fully reckoned with the reality that this terrorist was an Australian. We don't count his actions in our terrorism tallies. We don't grapple, as our Kiwi neighbours have, with the attacker's journey—what the life cycle of that journey looked like.

The Australian Human Rights Commission released a report this year which showed 79 per cent of Australian Muslims live in fear of another Christchurch. If it's not Christchurch, it's the fear of our children just one day having to face the discrimination and prejudice that we have experienced growing up: of putting on the veil as a young teenager and being abused in the street; of a hate crime like the 38-week-pregnant sister who was assaulted by a complete stranger in Parramatta; or of having to face hostile comments and questions in the classroom, staff lunch room and, yes, online.

For many Australian Muslims to stay safe online, we must avoid shared spaces, whether it is general buy and sell groups, the comment threads of major news sites or the comment threads of members of parliament, for example. Sometimes, if your picture shows that you wear the veil or because of your name, you will become a lightning rod for abuse. If your name or photo isn't obvious, you can still be harmed by just watching the hatred be unleashed. If we don't feel safe on Facebook or other social media, we've been told that we should just take ourselves off it. But getting off social media won't keep our community safe either. Just look at Christchurch. Just look at a viral video posted by Fraser Anning outside a mosque during the last federal election, which led to violent threats being graffitied across the mosque's front wall in my home town, minutes from where I live.

AMAN's concern is that the measures from government and industry have focused far downstream, after individuals are abused or when terrorist violence has been live streamed. There is some investment right upstream in the education space. But midstream, where online audiences are manipulated by bad actors who are spreading disinformation, often in hateful echo chambers or in hateful newsfeeds, there has been no action. AMAN was part of DIGI's development of an Australian code of practice on misinformation and disinformation. We have concerns, though, about the threshold they've set for disinformation and that they are essentially self-regulating. AMAN and Birchgrove Legal have tested existing incitement to violence laws. Not only had those laws never been used; they also do not target the bulk of the harm.

The most prevalent and pernicious threats to community safety are not organisations or websites openly inciting, threatening or glorifying violence, but those inducing it indirectly through dehumanising material about out-groups to in-group audiences. The Australian Federal Police, in some responses to our referrals, actually pointed to their case prioritisation model and said that online incitement to violence by individuals was not a priority. We can empathise to a degree with their position because there are many individuals inciting violence online. Our laws need to focus on the most potent vectors of harm, and we think that is the bad actors who manipulate and mislead whole communities and audiences to feel extreme disgust and fury towards fellow members of our community.

We tested another law. We successfully brought a vilification action against Senator Fraser Anning under the Queensland Anti-Discrimination Act. He had spread the demographic invasion conspiracy theory, also spouted by terrorists Brenton Tarrant and Anders Breivik. The tribunal ordered the removal of 141 hate artefacts. While this seemed like a great win, getting Facebook, which is now known as Meta, to cooperate by removing the offending accounts was unsuccessful. Despite the 80 breaches of Queensland law, Facebook did not see these as breaching their policies to the extent that removal of those accounts was required. It was also an exhausting process that took 18 months. Today Anning's pages continue on Facebook. He divides his time between spreading hatred about Muslims and asylum seekers and spreading COVID misinformation.

Concurrently, we have also engaged directly with Facebook—for over 18 months now—in good faith to improve their moderation of hate speech and bad actors. We've submitted a lot of proposals. Facebook continually sought to have us do the heavy lifting in providing the evidence, which they would then delete from their platform while not progressing any of the recommendations about targeting the bad actors or accounts. It was like trying to make the world a better place while standing in quicksand.

I'm now personally engaged in a complaint about Facebook under the Racial Discrimination Act, which I expect will be highly costly. I don't want to bring this action, but it feels like our only real option right now. God forbid, if an atrocity were to happen in Australia or anywhere else in the world at the hands of an Australian, AMAN would want to know—I would want to know—that we had tried everything. That was the promise I made to myself on the day of the Christchurch massacre, and this is where I'm standing almost three years later, because the dehumanisation of people like me, of Muslims, did not begin or end on 15 March 2019.

Dehumanisation is at the core of the 'demographic invasion', or 'counter-jihad', extreme-right movements. These movements argue that Muslims are depraved and subhuman on account of their religion. Therefore, the more devout and religious we are, the more subhuman we must be. They also suggest that we are spreading like a virus or like vermin, poised to flick a switch and take over. They completely disregard any scope for human warmth, compassion, need or independent faculty for thought or reasoning. We have none of those things because, to those groups, we are not part of the human family. These narratives enabled Tarrant to see the worshippers as an existential threat or, as he called them, a nest of vipers. And those narratives were not new. It wasn't even the first time that those narratives were used to justify mass violence. Look at the Oslo massacre that killed 77 people, primarily Norwegian teenagers. A 2018 study by Victoria University found those anti-Islam and anti-Muslim narratives to be prevalent a year before Tarrant acted. Bad actors that perpetuate those narratives through purposeful information operations are still allowed on Facebook and Twitter.

Social media plays a significant role—we all know that—in priming and socialising people towards violence. Despite the challenges in confronting disinformation and governing the internet, there is a worse cost in accepting the idea that it is too complex or too much of a slippery slope to act. Our communities cannot face this battle alone. We cannot just be told to leave Facebook or any other social media company or to stay in our shell and avoid shared spaces. Civil society must be at the table. The human rights at stake are too great for government officials to make these decisions alone. You will have seen the wealth of civil society expertise in this inquiry.

A member of parliament yesterday at this inquiry was asking whether reforms feel fragmented or cohesive. I can say that counterterrorism and countering violent extremism, as a space, has in the past exclusively involved law enforcement, government and industry, with researchers partly involved. Civil society has had no place at that table, and I think our strategies as a country have suffered as a result.

Today, ASIO recognises that the online fomentation of hatred is a real cause of violence. But the capacity is not there to problem-solve cohesively in this space. It requires multiple government departments, ACMA, eSafety, the Human Rights Commission, industry, researchers and civil society. We want to help the Australian government to contend with hate speech, disinformation and dehumanisation of minorities in the Online Safety Act. As you know, the eSafety Commissioner is currently empowered to act on abuse targeting an individual, but the act doesn't contend with hateful echo chambers that endanger segments of the community and Australia as a whole.

What more can government and industry do? We have put forward to the committee five different proposals, but, if it pleases the committee, I'll just briefly refer to the top three. First, we submit that it is in the public interest for social media companies to apply Australian legal standards and for Australian authorities to regulate their performance under a duty-of-care model, rather than leaving it to the community to challenge every piece of hate speech and every bad actor. We propose a statutory duty of care on platforms with respect to Australia's antidiscrimination and vilification laws as well as disinformation that leads to serious harm. We use the definition of disinformation as used by the Global Disinformation Index. Funding public interest litigation may also be necessary.

But, as bad actors move from platform to platform, this proposal is not enough to shift the burden experienced by minorities in combating this public harm. This is where I differ from the remarks that you heard earlier from the Online Hate Prevention Institute. I don't think connecting our antidiscrimination laws is going to be enough. That framework does not provide the significant financial penalties required to prompt systemic change by a digital platform. Companies are not incentivised to dedicate resources to monitor their platforms for hate speech and remove actors that are serial offenders. While a statutory duty of care might improve things by connecting our antidiscrimination framework to the online sphere, it won't be enough. We also need civil penalties that apply to serial bad actors and to the platforms that enable them, applied by a regulator like eSafety.

Through our engagements, one of our primary concerns has been about protecting freedom of expression. AMAN believes that, if we limit civil penalties to the most severe end of the spectrum, which includes serial and readily apparent examples, and invoke the act's basic online safety expectations in an industry standard as levers to engender platform accountability on a broader range of dehumanising speech and discourse, this will go a long way to satisfying Australia's obligations under international human rights law on freedom of expression.

As mentioned before, incitement to violence won't be an effective threshold for civil penalties, but the eSafety Commissioner may also struggle with applying judicial hate speech standards, given that their decision-making process is not a courtroom. On regulation, one of the first challenges is how to define what is unsafe or harmful in a way that does not give rise to substantial ambiguity. There are concerns that our current hate speech laws, as written for a judicial context, aren't really suited for the online sphere in that way. Defining extremist material is also fraught. It creates a lot of legitimate anxiety about state or tech intrusions on freedom of speech. Therefore, we have suggested that, instead of defining extremist material or speech, the Australian government should consider targeting a technique that many violent extremist movements rely on, and that is the dehumanisation of out-groups.

Secondly, we ask the Australian government to define dehumanising language and discourse. For the benefit of the committee, we have provided a definition for dehumanising speech and discourse based on a study that we did in 2020 of Facebook and Twitter. That definition says:

An actor that serially or systematically produces or publishes material, which an ordinary person would conclude,

(a) presents the class of persons identified based on a protected characteristic … to have the appearance, qualities, or behaviour of an animal, insect, filth, form of disease or bacteria, inanimate or mechanical objects, or a supernatural threat. This material would include words, images, and/or insignia.

That is the definition for dehumanising language. We've also included a definition for dehumanising discourse:

An actor that serially or systematically produces or publishes material, which an ordinary person would conclude,

   …   …   …

   (b) curates information to a specific audience to cumulatively portray that the class of persons identified on the basis of a protected characteristic (e.g., race or religious belief)

   (i) are polluting, despoiling, or debilitating society;

   (ii) have a diminished capacity for human warmth and feeling or independent thought;

   (iii) act in concert to cause mortal harm; or

   (iv) are to be held responsible for and deserving of collective punishment for the specific crimes, or alleged crimes of some of their "members"

We have drawn this definition from atrocity prevention and genocide prevention research, as well as our studies of the function of dehumanisation online and how it tends to work through discourse and through language.

As you know, dehumanisation in a lot of historical contexts has been a precursor to mass violence, whether it's the Holocaust, Rwanda, Bosnia—I could name any number. Even ISIL extremist movements have used dehumanisation to justify their violence against Yazidis. Dehumanisation is one of the most harmful forms of hate speech because it removes any moral barrier one might have to enacting violence against innocent people, because you no longer see them as part of the human family. We propose that that definition be used in education and across regulators, government policy and the public, and our submission has more details on that.

Lastly, we ask the Australian government to introduce civil penalties that reflect those definitions. Regulation is required to counter bad actors online who nurture hateful echo chambers against minority communities, and we think this strikes at the most potent vectors of harm. This type of material is readily apparent and assessable by an administrative body like the eSafety Commissioner through a notice-and-action model, with the protection of judicial review. This proposal recognises that platforms, governments and civil society mutually benefit from precise and well-defined public laws to reduce the risk of those laws being overused or weaponised. Enforceable standards also assure Australians that their place in this community matters. The Australian government must do this to protect freedom of expression for all, including the freedom of expression of communities, like mine, that feel they must cut off their voices in order to stay safe.

A failure to regulate emboldens perpetrators to carry out hateful abuse, harassment, threats, assault and vandalism in public places. A failure to regulate these hateful echo chambers emboldens perpetrators to target online users from those communities in public threats, threads and private messages. Any member of our community that develops a high profile becomes a lightning rod for that hatred. It encourages far-Right networks and it mainstreams and legitimises their standing to broader audiences—not only far-Right networks but any extremist network that relies on dehumanisation. This poses a risk to all of Australia. Finally, a failure to regulate places a discriminatory burden on our communities, who must defend that we are human and who must litigate and battle this public harm alone.

I haven't had time to touch on the remaining proposals in our submission—proposals C, E and F—but I'll just name them. They are recommendations concerning an assessment framework for bad actors, antitrust legislation, and transparency reporting by platforms. I've spoken for quite a while. Thank you for giving me a chance to outline our position.

CHAIR: I appreciate you taking the time to outline very thoughtfully a number of your proposed ways forward. One of the things I was thinking, when you used the word 'dehumanise', is how often that's associated with abuse—online abuse in particular—of individuals. Personal abuse is often described to me as feeling dehumanising, degrading and demeaning. This committee is looking into the range of online harms that might be faced by Australians on social media and other online platforms, but it's also looking into the extent to which algorithms used by social media platforms might actually permit, increase or reduce those online harms to Australians.

I note you talked about the importance of regulation in this space. I wonder if I can go back a step and just ask you for your observations around what duty of care—particularly statutory duties of care, if you look at regulation—should be required of platforms. Going back to one of the conversations that we had in the public hearing yesterday, one of the witnesses was raising the concept of safety by design. In the context of that, what is your view about the responsibility of platforms? As you were speaking, I was thinking about some observations made by an earlier witness, particularly in relation to the role of artificial intelligence. I was thinking about algorithms and how they identify different posts on Facebook, for example, and what role they play when it comes to online harm. What role do you believe platforms themselves should play in ensuring safety by design?

Ms Jabri-Markwell : Are you asking whether a statutory duty of care that requires platforms to carry out safety by design would be beneficial?

CHAIR: Yes—just some observations about what onus should lie on platforms in terms of online safety. To simplify it, take a canoe business by a lake, with kids and holiday-makers wanting to canoe, kayak or paddle on the lake. Important safety provisions need to be built into that business platform, including for instance the mandatory use of life jackets. Even if it's a very still day, there's still a basic obligation there—an important obligation. What requirements do you believe should be adopted or implemented when it comes to platforms and their curation—for want of a better word—of online safety by design?

Ms Jabri-Markwell : One of the things we don't know is how much they spend on human moderation when they're doing content regulation, and I don't know whether we'll ever be able to find that out. We know that there's only so much that AI can do with hate speech and that human moderation is a really important thing. The big players have the resources to significantly increase their budget for human moderation, but they choose not to. There's no incentive for them to do that. I suppose it would require us to have a look at the Santa Clara Principles, which address the really important things that we need transparency on. Those principles are a voluntary set of principles that a lot of companies have already signed up to. They're not really meant as a guide for regulation or for enforcing transparency, but they do give you a good insight into what is needed to be able to assess whether platforms are taking enough reasonable steps to keep their platform users safe.

I think Andre Oboler, your previous witness, raised the point about having data broken down according to what group is experiencing hate speech and that sort of thing. That would be useful, but, again, a lot of data that is provided by Facebook—just to give you one example—is not that useful. Their data is what their automated systems capture versus what their users report. Their data doesn't include hate speech that their automated systems don't pick up or that no user ever reports. There's a sea of stuff which they don't include in their data, and it's the same for other social media companies. We are concerned about a lot of the transparency requirements. It's just very hard, and obviously it all comes down to how well you can verify transparency reports and how well you can audit them. That's where the international community is now moving in asking the question about the verifiability and auditability of transparency reports.

CHAIR: In relation to that, the basic online safety expectations are going to be a transparency mechanism in our jurisdiction to check how companies and platforms deal with online harms. My understanding of them is that the eSafety Commissioner can request and publish reports.

Ms Jabri-Markwell : That's right.

CHAIR: My question is: what sorts of reports would you like to see given and looked into?

Ms Jabri-Markwell : The issue that we have with the current framework is that it doesn't include vilification—so you're presuming that the framework will be changed to include hate speech and vilification?

CHAIR: The Online Safety Act does deal with harms against an individual, but my understanding is that, in terms of the Online Safety Act, the eSafety Commissioner can take into account broader considerations, like racial vilification, in their determination. My question is probably a little bit more to the operations of some of the powers of the eSafety Commissioner to request reports from various providers, such as online platforms. What sorts of reports would you like to see provided, and how would the eSafety Commissioner be able to use them to help prevent some of the harms that you've described?

Ms Jabri-Markwell : I think it would be really helpful if platforms gave clarity about how they define terms such as 'demote' and 'amplify', because we don't have real information on that. We would like to know the types of content that the newsfeeds prioritise. We saw, from whistleblower Frances Haugen, that Facebook had changed the configuration of its newsfeed algorithms to prioritise engaging content, which tends to prioritise a lot of the most harmful content. We would like to know more about their ranking and recommendation algorithms.

I think it would also be so good to have transparency about the impact of content demotion efforts. For example, we're often told, 'We are just reducing people's access to that harmful content,' but we don't know what that means. Whether we're talking about COVID misinformation or the disinformation targeting minorities, which is our issue, it's the same thing: we don't know what this means—the 'reduce' approach. What does it mean when they demote something?

Also, the recent leaks of Facebook internal documents confirmed that Facebook has a list of actors that it exempts from its policies due to their profile, newsworthiness or engagement ratings. We suspect that's why a number of anti-Muslim hate actors are supported by the platform—because of their ability to drive engagement. Platforms should be required to be transparent about the names of actors or organisations that they exempt from their community standards policies and the reasons for that exception.

We also support the views of other NGOs that any use of automated tools has to be based on clear and transparent policies, including the option for independent assessment of those automated tools to evaluate how they were created and how they function, and even how the platform itself evaluates those automated tools. Platforms shouldn't be allowed to nudge or influence or manipulate users without their knowledge or consent. We consider that to be a breach of international human rights in regard to freedom of opinion. They are actually, in a way, shaping people's opinions without them even knowing it. The use of content curation technology such as news feed hierarchisation or recommendation algorithms—I've mentioned that before—needs to be as transparent as possible. We need that available for independent auditing. We'd like to know how much each platform allows human choice and control over recommendation and ranking algorithms. When I say 'human choice', I mean people working within their company who can change those algorithms. We'd like to know how much they allow user choice and control over recommendation and ranking algorithms.

But we also understand that there is a very strong view, put forward by the Global Disinformation Index, that a lot of the transparency measures are actually limited in how much use they're going to be to government and to the community—to the public interest—because we already know that platforms are running a business model based on engagement and growth through amplifying harmful negative material. Knowing more of that is not necessarily going to help us unless we have information that shows us when they're improving, and that is so hard because they will always change the information, so it's hard to compare. Again, we come back to the auditability of their data and who has access. Monetisation of their tools is a really big thing about which we'd like some transparency. For example, do they let advertisers know where they can place the ads? Do advertisers have the right to choose where their content goes? That would also help market forces to operate, whereas, at the moment, those market forces are really compromised by the monopoly that Google, Facebook and others have on advertising space.

I don't want to take up too much more of your time, but we've actually suggested even more things in our submission. One I'd like to mention is that, as an NGO, it'd be really helpful for us to know what proposals they've received from other NGOs to fix policies and to fix procedures to address some of this imbalance that we currently have. There might be better ways to fix this imbalance. I think there probably are better ways to fix it. There's an imbalance of power. We go to a platform, if we have the resources, and in this case AMAN has had the resources to do it over the past couple of years. You've seen how much the Online Hate Prevention Institute have struggled. Many communities don't have these vehicles to do this work, so they just don't do it. We've been doing it, but we're kept in the dark about who else is saying what, and it's very hard for us to organise and work together. I'll leave it there.

CHAIR: Thank you. I note that, with the Online Safety Act that's coming into operation next month, some of the penalties are around failure to meet take-down times, sharing intimate images without consent, some trolling and posting of seriously harmful online content, which can attract significant penalties of up to $111,000 for individuals and $555,000 for companies. That does come into effect next month. Ms Jabri-Markwell, thank you for your forbearance in relation to the timing due to some technical issues earlier. We appreciate that. We will go to questions from the deputy chair.

Mr WATTS: I thought I might start by acknowledging that we are coming up to the third anniversary of the Christchurch terrorist atrocity, committed by an Australian, and acknowledging your comments about the failures to grapple with that as a society. This committee has heard extensive evidence about reform processes dealing with cyberabuse, bullying and trolling targeting individuals that has occurred over the last couple of years. Other than the abhorrent-content scheme, targeting live streaming of terrorist atrocities, in the three years since that attack have there been, in Australia, any reforms to the information ecosystem that produced the Christchurch terrorist?

Ms Jabri-Markwell : No. That's why I find—

Mr WATTS: How does it make you feel seeing all of the activity that has targeted individual psychological harms in this space and the absence of any activity targeting dehumanisation of group identities, particularly in the wake of the Christchurch attack? How does that make you feel?

Ms Jabri-Markwell : It's made us feel really lonely. I don't know how else to describe it. It's kind of like you don't matter. But we just have to keep going. If another scenario like that happens, I wouldn't be able to live with myself if I knew that I hadn't tried everything. At the moment, we haven't acted. It's just as you described. In that particular space, of information, we haven't acted.

Mr WATTS: Are you aware of any kind of proactive approach to dealing with group harms in the online space—not just radicalisation towards terrorism? We've also heard powerful evidence about people being targeted for online abuse because of their group identity—television appearances by women, people of colour or people from other diverse backgrounds resulting in them being chased out of the Australian public sphere and our democratic process. Are you aware of any coordinated approach to dealing with this problem from either government or regulators or anyone in this space?

Ms Jabri-Markwell : The main thing that has happened, which I'm very happy about, is the Online Safety Act actions for taking down content where there is cyberabuse of individuals. I feel like, if one of those cases happens now, there might be more support through the eSafety Commissioner, but the problem is that it's acting too late. It's harmful. When it happens to a person, they have to do all the things you've learned. We've had examples in our community of people who have actually left Australia because it's too much. We need action further upstream to stop the manipulation of online audiences, to stop them being manipulated to feel that disgust, rage and fury, which they then go and take out on people who dare raise their voice and become high profile.

Mr WATTS: AMAN has really worked at the pointy end of this kind of content. You regularly deal with what could be described as the worst of the worst content, dehumanising entire groups of Australians. We've seen the consequences of that kind of rhetoric. Given the extremity of the content that you're dealing with, can you describe to the committee the nature of your engagement and relationship with the social media platforms? You're dealing not with line-ball issues but with the issues that are the most acute. What should we take away from the nature of your engagement and your relationship with the social media platforms on these acute issues?

Ms Jabri-Markwell : First of all, doing the work that we do is traumatising. We monitor hate pages and groups. We collect evidence. It's exhausting and traumatising because you're basically reading posts from people who want to see you dead, who want to see your people dead, because you're Muslim. I wish that I didn't have to do this work, and I don't want to do it forever; I don't want to do it for another five minutes. I have to rely on volunteers to help me with this work. They're also young Muslims, and I don't want to have to subject them to this. But we all do it because we don't have a choice. We just have to do it. We collect the evidence, but the problem is that when you take it to a social media company, like Facebook, they will delete the evidence. They take it and they say: 'Yes, thanks for letting us know about these obvious violations of our policies. We'll delete it now.'

As I've said to Facebook over the months and months that I've engaged with them: 'I do not want this to be a long-term relationship. I won't survive. None of us will survive. This is your platform. It's your product. You need to keep it safe. If you can't change your systems in the way that we're recommending, and you just keep deleting the evidence that we bring to you so we can't even use that evidence in legal proceedings, you're basically putting this unbearable burden on us.' As I said, we can't fight this battle alone. We are just waiting and hoping that there will be a moment where the Australian government acknowledges that this particular problem needs a regulatory approach and takes this heavy burden off our shoulders.

Mr WATTS: For the record: you're not just any group here; you're a recognised civil society participant in the Christchurch Call civil society consultation process—that's right?

Ms Jabri-Markwell : Yes. We are part of the Christchurch Call Advisory Network and we participate in a lot of Global Internet Forum to Counter Terrorism work.

Mr WATTS: So, of anyone that should be listened to by the platforms in this space, your organisation really is one of them. You have the expertise and the status and standing in those civil society consultative processes, but you still can't get the outcomes that you're looking for.

Ms Jabri-Markwell : We can't get the outcomes. One of the things I've only recently learnt is that the people we've been engaging with, for example, in the case of Facebook are a separate company, supposedly, to the company that operates and controls the platform based in the US. That left me reeling and feeling like 'what did I do with the last two years of my life' in talking to these people, negotiating with them and pleading with them: 'Please acknowledge that demographic invasion conspiracy theories are harmful. Please do something about these narratives. Change your policies to recognise that they're harmful. Escalate these pages and groups that are clearly dehumanising out-groups to in-group audiences. Here are the assessment frameworks that you can use.' Every problem they put to us in terms of the difficulties that they have as a company we came back with, 'Well, here's another solution.' It wasn't enough. In the end, we could see that it wasn't a matter of capability; it was a matter of will.

Mr WATTS: And you've been doing this for years now.

Ms Jabri-Markwell : Yes. And that company is operating out of the US and thinks it sits in an ivory tower and has no duty of care to the Australian people. That's wrong. This is Australia. We have standards here. We have decency. Sorry—it has taken a very big toll, doing this work.

Mr WATTS: I can't imagine the toll it would take, particularly continuing to do this for many years. I thank you for your persistence in this very important cause. I just want to be really clear about the kinds of regulatory tools that do and don't help you in that cause. So the Online Safety Act provides a range of protections for abuse targeting individuals, but it provides no protection for abuse targeting groups and dehumanising group identities—is that correct?

Ms Jabri-Markwell : Yes, that's correct.

Mr WATTS: And proposals to expand the use of defamation proceedings to respond to online trolling, providing restitution for reputational damage, won't provide any assistance to you for abuse targeting group identity and dehumanising in this way?

Ms Jabri-Markwell : No, unfortunately not.

Mr WATTS: Finally, before I hand over, I appreciate your engagement with the idea proposed by the Executive Council of Australian Jewry about connecting the eSafety Commissioner's powers to other legal frameworks—for example, state-based racial and religious discrimination regimes and the federal Racial Discrimination Act. I appreciate your comment that you don't view that as a solution that will solve all the issues, as it won't deal with the midstream issues that you identified. Just to be clear: is your view that it wouldn't be counterproductive—that it would assist but it wouldn't solve the entire problem?

Ms Jabri-Markwell : Yes. I think that you would need to have a statutory duty of care to apply Australian laws when it comes to vilification and hate speech under section 18C of the Racial Discrimination Act, including disinformation as well. As I said, we've proposed the definition of 'disinformation' used by the Global Disinformation Index. If you had that, I think it would be useful in shifting responsibility to the platforms to acknowledge that they are operating in Australia and they do need to follow Australian standards. I think the connecting idea has benefit and is common sense. However, as I've said, that still relies on communities to bring forward complaints against every piece of hate speech and every bad actor. That is useful, but it's not going to lift from our shoulders the burden of dealing with what is very much a public harm.

Mr WATTS: It's not going to put you out of a job.

Ms Jabri-Markwell : No, and I want to be put out of a job!

Mr WATTS: Yes, I appreciate that. Thank you for your evidence. I really appreciate it. I know it's not easy—what you do, or speaking about it in this forum. Thank you.

CHAIR: Mr Simmonds has a few questions. I am mindful of time and cognisant of the next witnesses on the line.

Mr SIMMONDS: I'll be very quick. Thank you for all your work and for appearing here today. I've been engaged with Facebook whistleblowers, and often when we talk to companies like Facebook they talk about their AI and their algorithms. In looking through this material and trying to categorise it and find it, have you seen any evidence that they're being successful in proactively taking it down through AI?

Ms Jabri-Markwell : Do you mean hate speech?

Mr SIMMONDS: Yes, the dehumanising posts that are against their own terms of service.

Ms Jabri-Markwell : I do think that, overall, the AI is always improving in general, not necessarily in relation to our community's issues. I do think it is improving, but it's not enough. Actually, the problem is that you really do need some human moderation, and you also need very specific things in place to escalate and identify those accounts that are really there to dehumanise an out-group to an in-group audience, and that particular policy or procedure, that standard, is missing. Just to be clear, some of the proposals we've put forward involve using AI with natural language processing. We actually think that it is possible to use data over time to predict the actors that are there to purposefully dehumanise an out-group to an in-group audience in a way that doesn't capture false positives. It doesn't capture mainstream news or people who are simply having discussions about religion or things like that. AI is actually part of some of our recommendations as well.

Mr SIMMONDS: Thank you for that. It's my experience as well that their claims about the success of their AI tend to be overblown. Have you had examples where you've identified hate speech or vilification and you've given them examples and they haven't taken it down—they've ignored it or declared that they don't think it's as bad as you do?

Ms Jabri-Markwell : Yes. That's the vast majority. I feel like I now really understand your question. In relation to the violations that we've identified and then reported through their AI tools, the vast majority are found to be consistent with their policies. That's what I mean. I sense it's specifically an issue for the content that we're looking at because it's anti-Muslim. I might not be right about that, but there is a very, very low success rate. We actually have some percentages on each lot of data that we've put through. Sometimes as few as a few per cent of replies found a violation—maybe one, two or three per cent—yet when the same material went to Facebook at the human level, they found all of it to be in violation and took it down.

Mr SIMMONDS: Thank you very much. That was the crux of my question. Would you take on notice some good examples of things that you pick up that Facebook haven't been able to? If you could send those through, that would be very helpful, because we can put that to them. Often they say, 'Oh, well, the AI is on the job.' That's a bit of an excuse not to put their own human resources towards it or to take responsibility for it as a platform, so those examples would be great. Thank you, Chair. I'll hand back in the interest of time.

CHAIR: Thank you, Mr Simmonds. Ms Jabri-Markwell, I thank you again for your understanding with some of the technical difficulties we had earlier. I thank you also for your attendance today and for the evidence you have provided. I indicate that, if you've been asked to provide any additional information, including questions on notice, could you please forward your responses through to the secretariat by 14 January 2022. You will be sent a transcript of your evidence and will have an opportunity to request corrections to transcription errors. Thank you again for your time.

Ms Jabri-Markwell : Thank you to the committee.