The Australian government has escalated its fight against big tech, announcing it will legislate a “digital duty of care” that would require online companies to reduce the harms their services cause.
This comes in addition to the government’s separate plan to restrict access to social media platforms – including Google, Facebook, Instagram, X and TikTok – for users under 16 years old.
In a landmark address last night, Australian Minister for Communications Michelle Rowland explained the reasons behind the federal government’s decision to introduce a digital duty of care.
To effectively mitigate online harms, we must transition from relying solely on content moderation strategies and instead adopt a systemic approach that prioritizes proactive prevention measures. This shift necessitates a more comprehensive understanding of the diverse range of online harms, encompassing not only obvious threats but also subtle, insidious forms that can have profound impacts.
It also marks a significant step towards aligning Australia with other jurisdictions around the world.
A digital duty of care refers to the obligation of tech companies and online platforms to ensure their services are safe and responsible, ultimately promoting a healthier internet culture. The concept is closely tied to the idea of “harm minimization” in digital media, where companies work to stop harmful content – such as hate speech, disinformation or cyberbullying – from spreading online.
A duty of care is a legally recognized responsibility to ensure the wellbeing and safety of others. It means not only avoiding harm, but also taking reasonable steps to prevent or mitigate it.
The proposed digital duty of care aims to hold tech giants such as Meta and Google accountable for keeping users safe on their platforms. Social media companies would be held to the same standard as manufacturers of physical products, which are already obliged to ensure their goods do not harm customers.
Tech companies would need to regularly conduct risk assessments to anticipate and mitigate harmful content on their platforms.
The legislation would also factor in what Rowland called “enduring categories of harm”. She suggested these categories may include:
- harms to young people
- harms to psychological wellbeing
- the promotion of harmful practices
- unlawful content, conduct and activity
This approach has already proven valuable elsewhere. It is in place overseas, including in the United Kingdom and in the European Union.
These laws also empower users to push back against harmful content on tech platforms, while placing the primary responsibility for safeguarding people’s online experience on the companies rather than on individual consumers.
In the European Union, for example, users can report concerns about harmful content to tech companies, which are legally obliged to respond to those complaints. If a company fails to remove the content, users can escalate the matter to a Digital Services Coordinator for investigation and resolution. If no satisfactory outcome is reached, they can ultimately seek a court judgment.
Under the EU’s rules, tech companies that breach their duty of care to users can face substantial penalties.
The Australian Human Rights Commission has also envisaged a digital duty of care, which would help individuals navigate online risks by legally binding digital platforms to a duty of care towards all of their users.
A digital duty of care would also make the proposed social media ban largely redundant.
Experts have raised concerns about the federal government’s proposal to restrict social media access for individuals under the age of 16.
An ostensibly straightforward “one size fits all” age limit overlooks the widely varying levels of emotional and cognitive maturity among young people. And banning young people from social media does not stop them from encountering harmful content online – it simply delays the problem.
A ban would also prevent parents and educators from engaging with children on these platforms, hindering their ability to teach young people about safe online interactions and to help them manage potential risks.
The proposed digital duty of care would address these concerns.
It would place the onus on tech companies to remove harmful online content – including images and videos that encourage self-harm or other risky behaviours – creating a safer internet for everyone, while still allowing young people unrestricted access to valuable resources and online social networks.
A digital duty of care also holds the potential to tackle the problem of misinformation and disinformation.
In taking this path, Australia would be following global best practice.
There’s a growing global effort to tackle harmful content on platforms, shifting the responsibility from consumers to companies themselves.
As more countries adopt similar regulations and standards, tech companies become more likely to build compliance with these requirements into their global operations.

The federal government is focusing its efforts on platforms such as Meta’s Facebook, enforcing accountability through legislation – specifically, duty of care laws.
Will the enforcement of this policy be effective?
The Australian government says it will strictly enforce the digital duty of care. As Minister Rowland stated last night:
Where platforms seriously breach their duty of care – where there are systemic failures – we will ensure the regulator has strong and effective penalty arrangements at its disposal.
Precise details of these penalty arrangements are yet to be announced. Individuals will also be able to complain to the regulator about harmful online content they encounter and seek its removal.
Several factors will shape how well this works in practice. For the approach to succeed in Australia and elsewhere, the devil will be in the legislative detail. Defining the scope of harm is likely to remain contested, and will need to be reviewed regularly as complaints and court cases emerge.
Because the EU and UK only implemented their laws within the past year, the overall impact of these regulations – including how well tech companies comply with them – remains unclear.
Still, the federal government’s shift towards holding tech companies accountable for removing harmful content is a welcome, long-overdue change. It should help create a safer online environment for users of all ages, from the youngest to the oldest.