Parliamentary Joint Committee on Law Enforcement
09/12/2021
Law enforcement capabilities in relation to child exploitation

ASH, Mr Alex, Manager, Online Content, Office of the eSafety Commissioner [by video link]

DAGG, Mr Toby, Executive Manager, Investigations Branch, Office of the eSafety Commissioner [by video link]

[15:55]

CHAIR: Welcome. A few formalities first: I remind committee members and officers that the Senate has resolved that an officer of a department of the Commonwealth or of a state should not be asked to give opinions on matters of policy and shall be given reasonable opportunity to refer questions asked of the officer to superior officers or a minister. This resolution prohibits only questions asking for opinions on matters of policy and does not preclude questions asking for explanations of policies or factual questions about when and how policies were adopted.

We've got your submission, which we appreciate, and we hear from you regularly, which we appreciate even more, on various inquiries. However, does anybody want to make an opening statement or draw anything to the committee's attention? Toby, you look like you're itching to go.

Mr Dagg : Chair, I was actually going to say that, in the interest of the committee's time, we'll forgo the opening statement. Everything we want to express and explain is largely contained in our submission.

CHAIR: That's very kind of you. I might kick off the questioning. Can I take you to the statistic from your own information that 30 per cent of all image based abuse reports are from Australians under the age of 18? Can we get a feel for this increase in volume? As a percentage, are you seeing an increase in under-18s versus over-18s in the production of IBA?

Mr Dagg : Interestingly, that statistic as an expression of all reports that we receive has remained fairly static in the fairly short time that the image based abuse scheme has been in operation. We've only had a civil penalty scheme in place since October 2018, so we're still working with only a scant few years of data. Generally speaking, 25 to 30 per cent of our reports are made by under-18s. Overall, the number of reports made to us by Australians has increased, and that was particularly acute through the COVID period last year where 2020, when compared with 2019, saw a 114 per cent increase in image based abuse reports made to the eSafety Commissioner.

CHAIR: So you're seeing an overall increase, which we knew, but not necessarily a larger increase in CAM as opposed to over-18 material as a percentage.

Mr Dagg : That's correct.

CHAIR: So you request removal of IBA. What percentage and how successful are you in taking down—I think I saw a figure of about 80 per cent, is that fair?

Mr Dagg : That's right. About 80 to 85 per cent of material is successfully removed based on our informal requests. As we may have explained in our submission, our preference is to work informally where we can. We find that is a rapid and efficient way of removing material. Perhaps somewhat counterintuitively, even the sites that create their business model around distributing image based abuse material, when they're approached by the eSafety Commissioner, often take quite rapid removal action in the absence of a formal removal notice. However, we have used our formal tools in the past.

CHAIR: So those 15 to 20 per cent go to a formal notice, do they?

Mr Dagg : We'll give consideration to issuing a removal notice against the service or against the perpetrator. If a young person has been involved in distribution of material without consent, we'll generally try to work through an escalation process, so we'll issue a warning to that young person. We've had great success working with schools as well. They are very keen to become involved where there have been image based abuse matters that concern their students. For the most part, they seem to be fairly effective tools. When it comes to adults involved in image based abuse, between adults, we have had recourse to more extensive remedial notices requiring people to do things to prevent a continuation of the violation of the prohibition against sharing image based abuse.

CHAIR: Can I ask you about your Safety by Design approach? I know a significant amount of your effort goes into that, and it's incredibly admirable, but we are seeing this rapid explosion of CAM, so I'm wondering how successful we're really being in putting it back on industry to do Safety by Design. I'm wondering if, in your discussions with other jurisdictions and the like, there are any examples or experiences that are leading you to form the view that Safety by Design needs a bit of legislative or regulatory backing behind it if companies aren't going to take it up voluntarily.

Mr Dagg : I think we've seen much greater engagement from industry with the principles behind Safety by Design. As you may know, we recently launched some extensive tools that allow industry to assess their safety posture according to best practice, pitched at both startups and more mature companies, and we could not have produced those tools without the input of industry. Those tools were the product of consultation with hundreds of industry players. They were very much tools made for industry with the assistance of industry. We have seen some real innovations in these areas. Some of the platforms, for example, are making it impossible for users to search out young people and children. The platforms are being hardened against the possibility of adults who aren't connected to the child or young person through existing networks discovering them on the platforms. It's very welcome, but we acknowledge that there's a long way to go overall, because when we talk about industry we're really talking about a staggering variety of industry sectors—everything from ISPs through to hosting services and social media services. Some of those sectors are more advanced than others, and some within those sectors are better than others at adopting Safety by Design as an approach.

In terms of the legislative connection, you'd be aware that consultation recently finished on a set of basic online safety expectations, which are a really important part of the Online Safety Act. That will require companies to provide reporting to the eSafety Commissioner in relation to a range of safety issues. They have been designed very much with Safety by Design in mind. You can see the sorts of principles that are central to Safety by Design expressed through the basic online safety expectations. So we already have a legislative hook to require the giving of insights from industry about how they're approaching safety as a component of their design process once that instrument becomes operative under the Online Safety Act.

CHAIR: Are those struggling—or not struggling but less likely to throw their arms around Safety by Design—the smaller platforms who don't have the resources, or, counterintuitively, is it the larger ones who are trying to work back to incorporate those principles long after the horse has bolted?

Mr Dagg : It's definitely both. The reason why we created two separate assessment approaches was to accommodate startups, who, naturally with a small staffing complement and limited resources, are trying to get a product to market as quickly as possible. If experience watching that process unfold has taught us anything, it is that safety sometimes gets considered last, if at all. So that's why we're very much of the view that embedding a safety culture within companies [inaudible] is essential, and the Safety by Design framework accommodates that. Similarly, where very advanced, well-resourced, sophisticated platforms have brought products to market, the product often lands in the marketplace without any thought necessarily having been given to the safety of children using the product, and that requires a retrofit. That's something that we encourage companies to invest in as well.

CHAIR: How much emphasis do you think we should give to encouraging companies to enforce their own standards more heavily—that under-13-year-olds shouldn't be on their platforms, for example? Some of our witnesses have suggested that platforms be required to get parental ID or something like that to demonstrate parental consent for an under-13-year-old, versus pushing them to incorporate parental controls, which would allow ongoing monitoring by parents beyond whether or not they get on in the first place.

Mr Dagg : We would certainly like to see much more extensive parental controls rolled into services and platforms, particularly those that are designed with children in mind. We've seen some real movement in some of the larger platforms over the course of the last 12 months or so to provide better tools for controlling what kind of content children are exposed to and actually rolling out specific products for children which have more limited functionality and much greater ability for parents to observe the interactions that are occurring online. So the requirement to ensure that there is an age limit is something that we often are in conversation with platforms about. Where we learn that a child or young person has been involved in any of the harms that we deal with and they do present as under that age, we will often bring it to the attention of the platform, as well, because we see that as absolutely a factor in contributing to child safety on the platform.

CHAIR: Sure, but there are hundreds of thousands of kids under 13 on these platforms. To date, the platforms haven't been that enthusiastic about enforcing their age limits. You can't speak for them—I get that—but what's been your experience? Is it that, up until now, this has been a drive to get numbers on the platforms and this has been an easy one to ignore? Is that changing? Or do you think this is something that is going to require a stick approach rather than a carrot?

Mr Dagg : Whether it requires a stick or a carrot is a question we'll leave to government, but I think ensuring that users have the capacity to use the platform in a safe way is much more front of mind for industry than it has been in the past. That's true not just for children but for other vulnerable groups using those technologies too. We've seen a much greater openness within our conversations with industry to discuss those factors as contributing to the overall net effect of user safety online.

CHAIR: Can I take you to a part in your submission where you talk about encryption, which we all wring our hands about a little bit. There's a sentence that says, 'There are a number of solutions that would ensure illegal activity online can be addressed.' That's in relation to encryption. Can you expand on that? As I said, we've heard a lot from other witnesses and submissions about the dangers of encryption but very little about how anyone can impact what seems to be an inevitable march of all of them encrypting their communications.

Mr Dagg : I will preface this with a disclaimer that I'm not a technical expert.

CHAIR: Neither are we. It's quite all right.

Mr Dagg : It seems to be the case, from some of the discussions that we've had with people who are technical experts, that there are two possible solutions in particular that seem to be key to understanding some of the options that might be available to platforms that are moving towards an end-to-end encrypted model. Whether they're commercially viable or feasible is another matter, of course. One of them is on-device hashing of media. We saw Apple introduce that feature earlier in the year, and there was some discussion about whether it was the most effective or most feasible way of going about ensuring that some of its services weren't being misused for the purpose of sharing child sexual exploitation material. There seems to be a view amongst some encryption experts that that initial hashing of the media could happen on the device and then the hash signature of that item—an image or a video—could travel along with the encrypted communication so that it could at least be shared with the likes of the National Center for Missing and Exploited Children to identify accounts that are engaged in the distribution of child sexual exploitation material.
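As a rough, minimal sketch of that on-device hashing model (not eSafety's or Apple's actual implementation), the Python below hashes the media on the sender's device and sends only the hash alongside the ciphertext, so known material can be flagged without decrypting the message. The names Envelope, package_message and KNOWN_HASHES are hypothetical, SHA-256 stands in for a perceptual hash such as PhotoDNA, and the encrypt callable stands in for a real end-to-end encryption layer:

import hashlib
from dataclasses import dataclass
from typing import Callable

# Hypothetical stand-in for a list of known-material hashes (e.g. NCMEC's database).
KNOWN_HASHES: set[str] = set()

@dataclass
class Envelope:
    ciphertext: bytes  # end-to-end encrypted payload; the service cannot read it
    media_hash: str    # hash computed on the sender's device before encryption

def package_message(media: bytes, encrypt: Callable[[bytes], bytes]) -> Envelope:
    # The on-device hashing step: only the hash, never the plaintext, leaves the device
    # in readable form.
    media_hash = hashlib.sha256(media).hexdigest()
    return Envelope(ciphertext=encrypt(media), media_hash=media_hash)

def flag_if_known(envelope: Envelope) -> bool:
    # The platform compares the travelling hash against known material and can
    # escalate a match (for example to NCMEC) without decrypting the message.
    return envelope.media_hash in KNOWN_HASHES

Even under this model the platform never reads the message body; it only learns whether the attached hash matches previously identified material, which is broadly the trade-off being described above.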

There is another option, referred to as homomorphic encryption, which is a different form of encryption that, as I understand it, performs computation on already encrypted content to produce some insight into the kind of media and content being shared within that encrypted communication. Beyond that, I'm not a cryptographer, obviously; my technical knowledge of those areas is very limited. As I understand it, they are two of the more promising areas of mitigation being offered as possible alternatives to the current situation if platforms move to an end-to-end encrypted model.
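For the second option, a toy illustration of the homomorphic property (not a scheme any platform is known to use for this purpose) is the Paillier cryptosystem, where multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so limited computation can happen on data that stays encrypted. The sketch below uses deliberately tiny primes and is illustrative only; real deployments would rely on vetted libraries, large keys and far more sophisticated analysis than adding two numbers:

import math
import random

def lcm(a, b):
    return a * b // math.gcd(a, b)

def keygen(p=293, q=433):
    # Deliberately tiny demo primes; a real key would be thousands of bits.
    n = p * q
    n2 = n * n
    lam = lcm(p - 1, q - 1)
    g = n + 1  # standard simplified choice of generator
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)
    return (n, g), (lam, mu, n)

def encrypt(pub, m):
    n, g = pub
    n2 = n * n
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(priv, c):
    lam, mu, n = priv
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 20), encrypt(pub, 22)
# Multiplying ciphertexts adds the plaintexts: computation on encrypted content.
assert decrypt(priv, (c1 * c2) % (pub[0] ** 2)) == 42

Whether anything like this could operate at the scale and latency of a messaging platform is, as Mr Dagg notes, an open question about commercial viability and feasibility.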

CHAIR: If there is further information within your organisation from those technical people on those kinds of solutions, we'd be very grateful to get it on notice if there is the opportunity.

Mr Dagg : We're very happy to help where we can.

CHAIR: That would be great. We've spoken earlier in the day about the hashing technology. It still wouldn't allow you to read those communications, and probably not even identify the person, but you would pick up if they shared a particular image that was already in the database, essentially?

Mr Dagg : That's what we understand. If, for example, we take the first example of on-device hashing, where the hash travels with the cipher text or encrypted communication, if it were a violative hash that related to known material that was, for example, in the National Center for Missing and Exploited Children database, at least that account could be flagged with NCMEC. NCMEC would perform the initial triage of the matter and then pass it on to law enforcement for further consideration, and law enforcement might use its existing processes to obtain more information from the company about the account user.

CHAIR: Can I take you to a few trends that the committee is interested in? If you've not seen them, that's as useful information to us as if you have. It will help us focus our time. Are you seeing many instances of IBA produced by deepfake technology—people manufacturing CAM material that isn't real?

Mr Dagg : Not to my knowledge. It's something that we are preparing to see more of. We're fortunate in that the Online Safety Act, ready for when it commences on 23 January, makes it particularly clear that image based abuse material is material that depicts, or appears to depict, a particular person. It accommodates the potential growth in deepfake based reports, which we think is probably on the horizon.

CHAIR: Are they seeing it in other jurisdictions overseas? I think particularly of revenge porn type instances, where a broken-down relationship among minors might result in that kind of stuff being produced.

Mr Dagg : We haven't heard many reports of that from colleagues at the Revenge Porn Helpline in the UK or in the INHOPE hotline network, including the IWF. What they're mostly concerned about is the rise of self-produced material rather than deepfake material.

CHAIR: Okay. In terms of self-produced material—we were talking about this with the representatives of VicPol—how are you seeing that material being procured? What are the practicalities of it? Are they friending somebody on Facebook and then direct messaging them? Are they doing it via gaming chat portals? Where is the prevalence?

Mr Dagg : I will hand over to Mr Ash for more information on that question from the perspective of the online content team. Certainly when it comes to image based abuse material that relates to under-18s, the sexual exploitation of children features in about 25 per cent of our reports. Very often we see that the services most heavily implicated in some of those reports are Snapchat and Instagram. We've also seen instances of offenders frequenting Omegle, an online service, which allows them to connect randomly with young people online and move them to other services where they can further explore the conversation. Alex, maybe you have some insights from the online content side of things?

Mr Ash : I think all of the examples you provided are examples that we see. We see examples where people start conversations on a platform and then move off platform to procure child sexual abuse material. We see examples of peer-to-peer sharing that then ends up in the hands of people it wasn't intended for originally, but also instances where children are coerced and blackmailed into producing content that they didn't want to produce, often in their own home. We've seen videos where you can hear parents in the background, having a conversation, while the child's in the bathroom. I think it's all of those examples.

CHAIR: Are you seeing any examples of remote-access trojans being used—spyware, essentially, where people are trawling other people's photo collections and things like that remotely?

Mr Ash : I don't think so in the online content side of things. Potentially in IBA—Toby?

Mr Dagg : Not to any great extent, as far as I understand it. Certainly, the potential risk of a device being compromised by those means is something that we talk about to frontline staff working with the domestic and family violence sector through our eSafety Women program. But, in terms of content production, it's not a methodology that we see represented in our data to any great extent.

CHAIR: Okay. We heard as well from an NGO. I'm not trying to be specific to that NGO, so let's just say NGOs in general. NGOs might want to take on a more investigative role than the traditional NGO role of education and awareness—and I guess NGOs playing some kind of investigative role is more of a US model that they see over there. How do you engage with NGOs who might want to add capacity from that point of view?

Mr Dagg : We work very closely with the Canadian Centre for Child Protection in Winnipeg. They don't perform an investigative function per se, but they certainly take reports from the public about child sexual exploitation material, and pass information to law enforcement. We've worked with them very productively on sharing information about some of the trends that they're seeing and some of the issues that they are concerned about that are manifesting on social media, and we've used those insights as a basis for our own investigative activities and have collaborated really productively. The Canadian centre is no longer a member of INHOPE, but we work very closely with our colleagues within INHOPE as well, along similar lines. Whether or not an NGO can or should be performing an investigative function is probably not something I'm best placed to comment on.

CHAIR: Okay. I might hand over to you, Deputy Chair. I can cover some more things at the end. Go ahead.

Dr ALY: I've got a few questions. In your submission you talk about the clear web and, basically, that the assumption that something is solely on the dark web is a false assumption and that, predominantly, the material is on the clear web. The Uniting Church, in their submission, referred to a very interesting phenomenon. They referred to the sharing and the uploading of handbooks and manuals on how to entrap and groom children. They refer to a couple of arrests, one in the UK and one in the US, of perpetrators who were found in possession of these books and were found to be distributing these books or these manuals. Is that something that the eSafety Commissioner has come across? Do the tools and the imprimatur of the eSafety Commissioner extend to that so that you have capacity to deal with that kind of material?

Mr Dagg : I think the short answer to your last question is yes. A handbook that would be providing instruction in grooming techniques, for example, would be almost certainly considered class 1 content under the Online Safety Act, and we'd be able to issue a removal notice against that. We're just very careful about the dark web services like Tor, and not doing anything that could potentially compromise or prejudice a law enforcement operation there. That goes to things like creating accounts, for example, and leaving a trail of our own traffic. When a service is rolled up and investigated by law enforcement, we certainly don't want a situation where we're going to have to deconflict after the fact. So we're very respectful of the fact that the dark web is largely where our law enforcement colleagues do some of their excellent work, particularly when it comes to identifying victims. In terms of the existence of handbooks and the like, it's probably a question, if I could respectfully suggest, best asked of the AFP. As we said in our submission, we are concentrating on the distribution of material at scale on the clear web, and we're really focusing on images and videos.

Dr ALY: But you're not precluded from also removing textual information, such as is included in a manual or a handbook?

Mr Dagg : Right. That would be material that would likely be captured under the provisions relating to material inciting, instructing or promoting crime. If there is a handbook that provides practical guidance, it would fall within the 'instructing' subcategory of some of those material types.

Dr ALY: Thank you. Is much of the CAM captured through the AVM act? We had the review of that act just a couple of weeks ago, and I believe you both appeared in that. Is much of the material captured through that act?

Mr Dagg : Typically, the penetrative sexual abuse of a child could be considered under the rape definition set out under the AVM amendment. It's perpetrator-produced material that shows abhorrent violent conduct in the form of rape. So, yes, it could. But we've found that a far more effective, efficient and rapid way to deal with that content at scale—noting that we concluded about 14,500 regulatory investigations into child sexual exploitation material last financial year—is to leverage our relationship with the INHOPE network. INHOPE was notified of about 13,700 matters last financial year. About 74 per cent of all of the material that INHOPE handles is removed in about three days or less. In the last calendar year, 2020, the INHOPE network handled more than a million individual media posts related to child sexual exploitation material.

Dr ALY: I want to go to the problem of material self-generated by children who are either coerced or, as I think you also mention in here, in a romantic relationship. The scheme that you have is the image based abuse scheme that deals with that—is that correct?

Mr Dagg : It can be dealt with, arguably, under both schemes. It depends on how the matter is brought to our attention. Through the Online Content Scheme, we aren't working directly with victims or survivors; we're dealing with members of the public who, very often anonymously, report to us where child sexual exploitation material is provided on the clear web, on a website. Through the image based abuse scheme, of course, we're dealing with those who are directly affected by the non-consensual sharing of images.

Dr ALY: Does the non-consensual sharing of images cover materials self-generated by children who have been groomed?

Mr Dagg : If the material that has been produced is intimate material and has been shared without consent then, yes, it will be captured under the legislation.

Dr ALY: Is there confusion there, or does there need to be a bit more clarity in differentiating between, on the one hand, material that is self-generated by children who have been groomed or are being blackmailed or otherwise forced to produce the material and, on the other, material that is intimate images that are being shared without consent?

Mr Dagg : I think there may be an argument to potentially make that differentiation clearer in criminal law, but I will leave that to our colleagues in the AFP and AGD. For us, the test is quite simple: has the material been shared without the consent of the person depicted, and is it intimate material? If the answers are in the affirmative, we can proceed with removal. If we have a perpetrator in the frame, we can potentially issue a remedial notice as well. We have a very well-established ladder of regulatory interventions that we can apply regardless of how the material was produced, and that goes up to seeking injunctions in the Federal Court. So for us it's really about being able to remove the material as quickly as we can and act as quickly as we can. Whether or not it's been produced consensually in the first place, without any trick or manipulation or coercion, isn't necessarily a factor that we take into consideration, except where our relationship with law enforcement is concerned. I think that's a really crucial point to make. Where we see evidence of grooming, we work very closely with the ACCCE to bring the facts to their attention so that we can determine whether or not it's the kind of issue that we should pass over to our colleagues in the AFP for further investigation.

Dr ALY: What I'm hearing is that there's potential to capture a broad range of perpetrators—young people who are sharing images without consent, what's called revenge porn, right through to self-generated CAM by children who are being groomed—and then you make a determination around the origins of it, of how to deal with it; is that correct?

Mr Dagg : That's right. If a person were to bring us information that showed material depicting them had been shared on a website without their consent, we are able to establish that lack of consent, which is really apparent if the person is under the age of 18. We will work directly with the platform to have that material removed. If we're unable to establish a perpetrator, if the person's not able to nominate someone who is in possession of material shared without their consent, then we will focus on that first avenue. If we're able to establish a perpetrator with image based abuse, we will consider what options are available to us to deal with their contravention of the fundamental prohibition against sharing image based abuse material, which is expressed in the act. There may also be instances where we're not able to establish either a victim-survivor, because it's material that's been posted on a website that's been publicly reported to us, or a perpetrator. In those cases, we will deal with that material as class 1 material under the Online Content Scheme and work to issue a written notice if it's not the kind of site we can action through the INHOPE relationship.

Dr ALY: Does it matter if the victim is based in Australia or is an international victim?

Mr Dagg : In the image based abuse scheme, as long as there's an anchor point to Australia, that enlivens our powers. The perpetrator may be based here and the victim-survivor may be based overseas, or it might be reversed and we might have a victim-survivor here and a perpetrator overseas. As long as we can establish that link with Australia, we're able to enliven our powers.

Dr ALY: So it would capture people live-streaming offences that are occurring in the Philippines, for example?

Mr Dagg : The legislation is not well suited to live streaming. It doesn't give the eSafety Commissioner the power to intervene to deal with that live stream. Trying to understand the two ends of that conversation is often very challenging. If that material, however, were then shared as a video file and shared on websites, if that video was brought to our attention through the Online Content Scheme, it is absolutely the kind of material we can take action against with the class 1 removal powers.

Dr ALY: I want to ask about two different kinds of platforms, the first being online gaming chat rooms like Discord that are being utilised for targeting, identifying and grooming. What role does the eSafety Commissioner play there? The second one is around the sharing of images on closed social media platforms—things like Snapchat, where you've got maybe a group of 10 or 12 people sharing images. According to Snapchat—I don't mean to verbalise what Snapchat said in their evidence at the AVM inquiry—if somebody reports it from within that group then they become aware of it, but apart from that they're not aware of it.

Mr Dagg : In relation to the first example, we don't see a huge amount of material that's relevant to either the image based abuse scheme or the Online Content Scheme being shared through Discord. That's not discounting the avenue that might be created through Discord's service to establish grooming relationships with children, but that preparation phase is more the domain of law enforcement; our schemes are concerned with the actual production of material and the sharing of material without consent.

In the second case, the two platforms that are most heavily represented in our image based abuse data are Snapchat and Instagram. We have a productive relationship with both of those platforms. We find that they action our reports with haste and provide us with the information we're seeking to support our investigative activities. To the extent that Snapchat are responding to the exploitation of minors on the platform, our experience has been that, once we bring that to their attention, they work swiftly to notify the network, to take action against accounts and to take other appropriate action according to their own terms of service.

Dr ALY: I can understand how that's done on Snapchat, because you have a closed group and you share images in that closed group. But, on Instagram, is it through that subplatform of 'Instagram fan' or whatever it's called, where you do only have a closed group?

Mr Dagg : It's largely through Instagram messaging too. Messaging is a major component of the traffic that is exchanged on Instagram [inaudible] that messaging function.

Dr ALY: Okay. I would like your input on a thread that's been coming out through the inquiry today, about having a standalone suite of legislation that applies specifically to CAM and to online offending, in the same way that we have a fairly broad suite of specific standalone legislation to deal with all the different aspects of terrorism, from recruitment to fundraising. Would a standalone suite of legislation that deals with all aspects of offending here—from grooming to the sharing of images to the acquisition of an act to viewing an act to sharing of manuals to forming a group in order to share material—assist the eSafety Commissioner in carrying out its functions with regard to this kind of material?

Mr Dagg : Fortunately for us, we have the Online Safety Act commencing early next year, and that is a major step forward in creating a much better platform for us to utilise our powers across a whole range of services. We will have power, when it comes to hosting services that are providing access to child sexual exploitation material, to focus our attentions not just on Australia but on services that might have their origin overseas or are being operated overseas. Our class 1 removal powers extend beyond the borders of Australia, and that's the first time, in the regulatory history of this particular harm, that that's been provided to the eSafety Commissioner and, prior to that, the ACMA.

So we are really looking forward to testing those powers as early as we can and exploring them through the course of the next year or two, and we expect that will be quite a powerful augmentation of our role, because it really doesn't make any distinction between the different forms of harms, particularly sexually based harms, that are directed towards children. The Online Content Scheme actually doesn't define child sexual abuse or exploitation; it relies on quite a broad definition within the Classification Code, which refers to an offensive depiction of a child, whether or not engaged in sexual activity. So it provides us with the power to deal with material that depicts physical abuse of children. As I've said, guidance that might be provided to offenders about methodologies and ways of inveigling children and gaining their trust is the sort of material that is brought within the scope as well. Of course, our image based abuse scheme has proven to be particularly resilient, effective and robust. As to that, we don't see a huge change between the current situation and the new situation under the Online Safety Act, except for a shortening of removal times for platforms where we issue a removal notice.

To round off the answer to your question, we think that the full scope of powers provided to the eSafety Commissioner under the Online Safety Act is an appropriate response to the breadth of issues that we've observed over the course of the last 5½ years or so.

Dr ALY: And you envisage that it will take 12 months to implement the act and see some outcomes from it?

Mr Dagg : We are ready to implement now. We've spent the last 12 months doing just about nothing other than getting our staff in place and getting our regulatory guidance in place. We'll be looking to test those powers as early as we can. In the next several weeks, we'll have published all of our regulatory guidance to show how we're administering the schemes. In particular, the committee may be interested in referring to our compliance and enforcement regulatory guidance, which sets out our approach to enforcing matters and the kinds of factors that we take into consideration when thinking about progressing matters for enforcement. We'll also have a very comprehensive explanation of the Online Content Scheme which will set out some additional factors. So I think that, if we were to have this conversation in 12 months, we would be able to have a much richer exchange about how we have seen those tools working to protect Australians, particularly younger children.

Dr ALY: Chair, do I have time for one more question?

CHAIR: Yes.

Dr ALY: I wanted to ask you about the awareness and education piece because a lot of the research and the studies that we've seen today really make that differentiation between online offending and contact offending, to the point where we've had some witnesses today submitting recommendations or evidence around early online interventions for people who are perpetrators. Another witness submitted a kind of process of how someone goes about offending online, from accessing an identity to accessing material and how they do that. I wanted to ask you about an education and awareness piece that would go directly to online offending, really tackling this defence that it's victimless, and whether there's a need for some awareness piece. An example would be somebody typing something and a pop-up coming out saying: 'You're about to commit an offence. These are the jail terms. There is a victim,' and those kinds of things. Have you seen anything like that? Is that something that you think would be valuable?

Mr Dagg : It's a really interesting question. In 2019, we partnered with the University of Tasmania on this exact question, about whether or not pop-up interventions were effective in discouraging people from accessing child sexual exploitation material, and the results of that research—and, overall, this area of research is in its early days—tended to suggest that deterrence messaging worked effectively to discourage people from taking that next step of clicking through to what they thought might be child sexual exploitation material or at least some form of exploitation material. Obviously, the ethical concerns within that research project were quite sensitive. But the design demonstrated that that was, in fact, a viable approach—particularly when it is backed by a message from a perceived authority like the eSafety Commissioner that there is harm associated with the consumption of child exploitation material and that there were penalties that applied in relation to accessing it or sharing it. So that seems to have been demonstrated by this study and a couple of other studies.

More broadly, in relation to the comment that you made about online and offline offending, the view we came to, through being exposed to such a huge flood of content over many years, is that, overwhelmingly, this occurs in private homes and is perpetrated by those who are close to the child, who are trusted by the child. We often think of child sexual exploitation in terms of remote exploitation via webcam and self-produced content and live streaming, but, overwhelmingly, the abuse of children happens within bedrooms, within bathrooms and with those who are part of their family. So all of that offending starts with a physical, real-world offence against a child, and that's important to remember and think on. It's one of the really important things that are picked up in the recent Australian Centre to Counter Child Exploitation series Stop the Stigma, which has real victims and real parents talking about their experience and working to start a conversation about the reality of sexual abuse in our community against children, and I think that's a very important innovation in our conversation and the state of maturity in relation to this issue.

Dr ALY: Is that research that you did with the University of Tasmania available publicly or are the findings available publicly?

Mr Dagg : It is, yes. There's a version that's published online. So we are very happy to take that on notice and share it with you.

Dr ALY: That would be wonderful. Thank you so much. Were there any outcomes for that? Did you adopt any of the experimental pop-ups that you used in the research?

Mr Dagg : Not in any direct way yet. Certainly there were some avenues for further research to really understand why it was that deterrence messaging might be more effective than harm based messaging in discouraging people from accessing content. It's a promising area of technical innovation that is worth exploring right across the board.

Dr ALY: I would be really keen to hear more. That's all from me. Thank you very much, gentlemen, for appearing today and for your wonderful contributions.

CHAIR: Thank you, Deputy Chair. You've covered the additional topics that I wanted to as well. We are pretty much in sync after a long day of this. The other members of the committee have indicated that they don't have any further questions. Thank you very much, gentlemen, and thank you for all the work that the Office of the eSafety Commissioner does under your guidance. It is much appreciated.

Mr Dagg : Thank you, Chair.

CHAIR: We will conclude today's proceedings. The committee has agreed that answers to questions taken on notice at today's hearing should be returned by Thursday 13 January 2022.

Committee adjourned at 16:41