The AI landscape - what every business leader needs to know right now
Since the launch of ChatGPT in late 2022, the generative AI market has exploded, with projections of its value by 2032 ranging from US$20 billion to US$200 billion.
However, the rapid deployment of these tools has outpaced the understanding of their risks. While companies rush to publicise their AI credentials, regulators are scrambling to catch up, and prominent tech leaders are calling for caution.
For business leaders, keeping pace with these developments is critical. It is no longer just a technical issue; it is a strategic one involving legal compliance, governance, and risk management.
Here is an update on the current status and emerging considerations for your organisation.
1. What is Generative AI?
Unlike traditional AI, which recognises patterns or makes predictions based on existing data, generative AI creates new content. This includes text (ChatGPT, Gemini), images (Midjourney, GPT-Image), software code (Copilot), voice, and even music.
These tools are powered by Large Language Models (LLMs) trained on massive datasets scraped from the Internet. To give you a sense of scale: GPT-3 has around 175 billion parameters, and its successor, GPT-4, is widely reported to exceed 1 trillion, although OpenAI has not disclosed the figure.
2. Recent market shifts
The market is moving fast. Several key developments have reshaped the landscape:
More competition: Google has launched Gemini. Meta (Facebook) released LLaMA, an open-source model that was subsequently leaked to the public. NVIDIA has launched a service allowing businesses to customise AI models for their specific needs.
Commercial integration: Major players are already embedding GPT-4 into their products. Microsoft (Bing/Azure), Duolingo, and Stripe are all using the technology to enhance customer service and data analysis.
Autonomous agents: We are seeing the rise of "AI agents" (like Auto-GPT). These tools don't just answer a single prompt; they can be given a goal and then autonomously create and execute a series of tasks to achieve it, significantly reducing the need for human oversight.
3. Safety and regulatory backlash
As autonomy increases, so does scrutiny. There is a growing global debate regarding the safety and ethics of these systems.
Industry warnings: The non-profit Future of Life Institute published an open letter, signed by Elon Musk, Steve Wozniak, and other technology leaders, calling for a pause on training powerful AI systems. More recently, Geoffrey Hinton (the "Godfather of AI") left Google to speak freely about the risks, citing the "emergent properties" of AI, such as a Google program teaching itself Bengali without being trained to do so.
Regulatory action: Regulators are moving from discussion to enforcement.
Italy: The privacy regulator (Garante) temporarily banned ChatGPT over data collection concerns. Service was restored only after OpenAI improved privacy controls.
Europe: Regulators in Spain, France, and Germany have launched probes. The EU Parliament has reached a preliminary agreement on the "AI Act," which would require developers to undergo extensive safety assessments and disclose if copyrighted material was used in training.
Australia: The Federal Minister for Industry and Science has commissioned a report to inform the government's regulatory approach.
4. Key legal risks for business
For developers and customers alike, several legal risks are crystallising:
Privacy: The action in Italy highlights the risk of using personal data to train models without a clear legal basis.
Copyright: There are increasing calls for LLM developers to pay royalties to copyright owners (including news publishers) for the material used to train their systems.
Governance: The EU's "AI Act" signals a future in which transparency and safety assessments will be mandatory compliance requirements.
Key takeaway
The benefits of generative AI are transformative, but the "move fast and break things" era is ending. The risks (privacy, IP, and ESG) must be managed through targeted due diligence, robust contracts, and internal governance.
Action items:
Review your vendors: Conduct due diligence on the AI tools you are deploying. Do they comply with emerging privacy standards?
Update governance: Ensure internal controls are in place for the data your teams can feed into these models.
Monitor regulation: The regulatory environment is fluid. Stay informed about the EU AI Act and local developments in Australia, as these will set the standard for compliance.
Stakeholders impacted:
Legal Teams
Chief Information Officers (CIOs)
Innovation & Procurement Teams

