Balancing Innovation and Risk

Australia’s prolonged productivity stagnation has coincided with rapid advances in artificial intelligence (‘AI’). In August 2025, the Productivity Commission (‘PC’) released its interim report, Harnessing Data and Digital Technologies, which addresses three domains — AI, data sharing, and financial reporting infrastructure.¹

This article focuses on the Commission’s treatment of AI, where it estimates that machine intelligence could contribute more than $116 billion to the national economy within a decade and increase labour productivity by 4.3 percent.² The report situates Australia between two emerging regulatory models: the European Union’s risk-based legislative framework and the United Kingdom’s pro-innovation, principles-based approach.

For a country struggling to break the one percent productivity barrier, this is an enticing promise. But embedded in the Commission’s optimism lies a calculated policy trade-off: accelerating innovation risks eroding rights, widening inequality, and deepening digital divides.

The Commission’s guiding philosophy is unmistakably pro-growth. It calls on state and federal governments to remove regulatory friction, invest in digital infrastructure, and enable data sharing between firms, researchers, and public agencies. Yet, it also warns that excessive or premature regulation could “stifle opportunity.” The tension between risk and reward captures the report’s central message. Regulation should intervene only when necessary, and only once innovation has room to develop.³

I. A Productivity-First Philosophy

The PC’s logic begins with an economic imperative. Australia’s productivity growth has stagnated for more than a decade, and the Report casts AI as a potential breakthrough comparable to the steam engine or electrification.⁴ The Commission situates AI not as a niche technology but as a general-purpose enabler capable of transforming healthcare, manufacturing, logistics, and public administration.

From this perspective, the question is not whether AI should be adopted, but how fast and under what conditions it should be. The Report’s preferred answer is an outcome-based regulatory philosophy. Existing frameworks — including privacy, competition, consumer protection, and discrimination law — should be adapted to address AI-related risks where necessary, while new AI-specific legislation should be treated as a “last resort.”⁵ By advocating this position, the Commission aligns itself with a growing international bloc, led by the United States and the United Kingdom, that prioritises innovation leadership over precautionary control — a model most clearly articulated in the United Kingdom under a previous Conservative government.⁶

It is a pragmatic argument, and one that resonates with industry. After all, Australia risks falling behind more agile economies if compliance costs outweigh investment incentives. However, the PC’s optimism reveals an underlying tension: it assumes that productivity and public trust can be pursued sequentially rather than simultaneously. In doing so, it leaves open the question of who bears the cost when innovation runs ahead of governance. By contrast, the European Union’s Regulation (EU) 2024/1689 of the European Parliament and of the Council (‘AI Act’) answers this directly: it allocates responsibility through ex ante risk tiers, prescriptive compliance duties and explicit accountability structures for high-risk systems.⁷ Rather than assuming that harms will be absorbed by markets or recovered later, the EU embeds cost-bearing obligations upfront — effectively socialising responsibility across developers, deployers and regulators. This stands in marked contrast to the PC’s sequencing logic, which implies that the distribution of risk can be worked out downstream.⁸

II. The Resistance to AI-Specific Laws

The Commission’s resistance to bespoke AI legislation is rooted in economics. Regulation, it argues, imposes fixed costs that deter entry and experimentation, particularly for small firms.⁹ Moreover, because AI systems evolve rapidly, rigid rules could become obsolete before they are enacted. The PC instead promotes a principles-based model focused on outcomes (safety, transparency, accountability) rather than prescriptive inputs.¹⁰

This reasoning is sound in theory but fragile in practice. “Outcome-based” regulation often relies on self-assessment and voluntary standards, mechanisms that function most effectively where regulators possess strong oversight capacity. Comparative research by the Organisation for Economic Co-operation and Development (‘OECD’) indicates that without adequate oversight and procedural legitimacy, such models may reduce compliance rather than enhance it.¹¹ Australia’s fragmented digital governance landscape, split between privacy regulators, consumer agencies, and competition authorities, makes such coherence difficult.¹² Without stronger coordination, the promise of flexible governance can easily collapse into regulatory ambiguity. A coordinated model could involve a central lead agency or cross-regulatory taskforce empowered to set standards, allocate jurisdiction and coordinate enforcement. The Australian Competition and Consumer Commission’s (‘ACCC’) successful action under s 18 against Microsoft for misleading claims around Copilot adoption demonstrates both the effectiveness and the limitations of Australia’s current regulatory approach.¹³ The proceedings were resolved swiftly through settlement, signalling strong consumer protection enforcement. However, the conduct engaged competition law, consumer protection, corporate governance and intellectual property (‘IP’) concerns, yet was pursued through a single statutory pathway. In a coordinated environment, regulators would share data, assign responsibility and develop consistent compliance expectations, rather than responding in siloed bursts.

The Commission’s light-touch stance also reveals its economic priorities. By placing innovation ahead of precaution, it mirrors the logic of what might be called the AI productivity gamble: betting that the social dividends of acceleration will outweigh the risks of disruption. This approach is not reckless; it reflects the Commission’s mandate to improve efficiency and growth. But it sidelines the broader human consequences that lie beyond the balance sheet.

III. The Human Gap

While the PC acknowledges that AI adoption will cause “painful transitions” in the labour market, its policy response remains underdeveloped.¹⁴ The report gestures toward reskilling programs and workforce mobility but stops short of addressing systemic inequality or the psychosocial cost of technological displacement. In its calculus, productivity gains appear as national aggregates, not as lived experiences.

This gap matters because the benefits of AI will not be evenly distributed. The Commission assumes that displaced workers will find new roles in higher-value sectors, but empirical evidence suggests otherwise. Research from the Centre for Future Work at The Australia Institute cautions that structural barriers, insecure employment and unequal wage pass-through mean lower-paid and precarious workers often experience technological change as exclusion rather than mobility.¹⁵ Without targeted safety nets, the promised “transition” may prove no different.

A similar gap appears in the creative industries. One of the most contentious elements of the PC’s report is its proposal to explore a text and data mining (‘TDM’) exception to Australian copyright law, allowing AI systems to scrape and train on creative works without prior permission. The Commission frames this as essential for innovation and competitive parity with jurisdictions such as the United States and Japan.¹⁶ Yet for authors, musicians, and visual artists, it signals the erosion of both economic and moral rights.

Creative Australia’s submission to the Commission raised this concern directly, warning that prioritising broad data access rights for AI developers over creator consent could undermine the cultural economy and weaken Australia’s creative industries.¹⁷ In other words, the PC’s productivity lens may measure what AI contributes to GDP, but not what it subtracts from creative livelihoods.

IV. Balancing Innovation and Rights

None of this denies the PC’s core insight that overregulation can suffocate technological dynamism. Yet the tension between innovation and regulation is often overstated. Governance does not inevitably slow growth; it can build trust, stability and diffusion — conditions that make productivity gains possible.

A more balanced model would treat AI governance as a dual mandate: promoting innovation and protecting human dignity. That means embedding rights-based principles — transparency, fairness, explainability — within outcome-based frameworks instead of treating them as afterthoughts. It also requires clearer lines of institutional responsibility. Who ensures that algorithmic decisions are auditable? Who verifies that public sector AI complies with administrative law values such as procedural fairness? These questions engage core principles of Australian administrative law: accountability, reviewability, and fairness in decision-making.

Other jurisdictions offer instructive contrasts. The EU’s AI Act, for all its bureaucratic sprawl, at least defines risk categories and allocates compliance duties.¹⁸ The United Kingdom’s pro-innovation strategy, by contrast, relies on cross-regulator coordination and voluntary standards, an approach the PC appears to emulate. Australia could synthesise both models, combining the EU’s clarity with the UK’s agility, to craft a governance architecture that supports innovation without abdicating responsibility.¹⁹

V. Toward Smarter Governance

The PC’s interim report is an ambitious blueprint for economic renewal, but its success will depend on whether governments translate aspiration into credible safeguards. Productivity and protection are not mutually exclusive; they are mutually reinforcing. A workforce that trusts technology is more likely to adopt it. Consumers who understand algorithmic decision-making are more likely to engage with it.

A truly “outcome-based” regulatory framework should therefore define whose outcomes matter. Economic efficiency is one outcome; social inclusion, creative integrity, and equitable opportunity are others. A narrow focus on GDP gains risks transforming AI into a winner-takes-most game, one that boosts output but erodes cohesion.

To avoid that fate, policymakers must move beyond the productivity lens. They should view AI not merely as a set of tools but as a sociotechnical system that reorganises work, culture, and governance itself. That requires investing in digital rights literacy, worker transition programs, and ethical design capacity across the public sector. These are not constraints on innovation; they are the infrastructure of sustainable growth.

VI. Conclusion

The Productivity Commission’s AI strategy is a calculated bet on the future: that by holding back from heavy handed regulation, Australia can unleash a new wave of productivity. The wager may pay off, but only if the country also invests in the human and ethical foundations that make innovation worth pursuing.

AI will reshape the economy whether governments are ready or not. The question is whether Australia’s policy institutions can shape that transformation with foresight rather than hindsight. If the Commission’s optimism is to become more than rhetoric, its productivity blueprint must evolve into a governance model that measures success not just by how efficiently machines learn, but by how wisely humans lead.

Written by: Eiman Yambio (MDN Law and Ethics Committee Member)

Read the full article here! https://monashdeepneuron.medium.com/balancing-innovation-and-risk-fba2b67d3e6b
