Citations
Australian Communications and Media Authority, Developments in Internet Filtering Technologies and Other Measures for Promoting Online Safety (First annual report to the Minister for Broadband, Communications and the Digital Economy) (Feb. 2008) (full-text).
Australian Communications and Media Authority, Developments in Internet Filtering Technologies and Other Measures for Promoting Online Safety (Second annual report to the Minister for Broadband, Communications and the Digital Economy) (Apr. 2009) (full-text).
First annual report
This report was prepared in response to a ministerial direction received in June 2007 to investigate developments in internet filtering technologies and other safety initiatives to protect consumers, including minors, who access content on the internet.
The report draws together key trends and makes a series of observations about online risks and methods for mitigating them. In particular, it highlights that as users increasingly engage with interactive internet technologies, online risks have expanded beyond the content risks associated with static material to include communication risks arising from interaction with other users.
The report discusses how different online safety measures can each play a part in mitigating one or more online risks. At present, filtering technologies are regarded as suited to addressing particular static content risks. The report also discusses how content rating and labelling can minimise risks associated with inappropriate static content, and how internet hotlines provide a mechanism for users to report potentially illegal content to appropriate organisations for investigation. Legal frameworks also make the production and online publication of such content unlawful in some jurisdictions.
The report identifies how users can be empowered to manage the online risks they encounter. Parental monitoring of online activity can be effective in minimising both content and communication risks, and education initiatives can raise awareness of issues and provide information and support for developing protective skills and behaviours. Together, these initiatives empower users to engage in online activities in a way that minimises their exposure to risk.
When considering content and communication risks, the report highlights that clusters of measures can be more effective in minimising risks than single initiatives. As a general rule, filtering and content rating and labelling schemes are aimed at addressing content risks, while monitoring of users' online activity and educational programs are mostly aimed at equipping users to identify and deal with communication risks.
The report observes that, given the complexity of online services, technologies and experiences, and in particular the newer risks associated with Web 2.0 applications, a constantly evolving combination of measures that responds to changing risks is most likely to meet the challenges posed by new technologies and platforms.
Second annual report
The report draws together key trends and makes a number of observations about initiatives that have been deployed internationally to mitigate online safety and security risks. It observes that implementing measures at multiple points along the supply chain increases the probability of success compared with efforts focused on a single point of intervention.
With the growth of the digital economy, social as well as business users are relying on internet services and communities to a greater extent. Static content remains a significant area of use and risk, but the report notes the growing popularity of social media among children, young people and adults alike. The major online risks arising from increased use of these services show little sign of discriminating on the basis of age, a development that alters the safety and security risks facing all groups in relation to the security of personal information, cyberbullying and online fraud.
The report identifies the particular risk that each measure aims to mitigate; a small number of measures have potential application across several different risk categories.
The report considers measures that can be used to address:
- content risks such as access to illegal material or content that may be considered inappropriate for some users
- e-security risks such as spam, malware, online fraud and the misuse of personal information
- behavioural risks such as cyberbullying and grooming.
Risk mitigation can be employed to:
- reduce the availability of illegal content
- restrict access to illegal activity and content that may be considered inappropriate
- build resilience towards illegal activity and content that may be considered inappropriate.