“AI governance” has become one of the most overused phrases in tech and legal circles. It excites some people, alarms others, and lends anyone who says it an air of expertise. Yet the term has grown so broad that, without context, it means very little.

AI governance is not a single policy or a universal standard. It changes entirely depending on how you use AI, why you use it, and who is using it. A startup experimenting with ChatGPT for internal workflows has different obligations from a company selling AI-enabled products. A law firm using AI for research must weigh confidentiality and professional responsibility rules. A business with hundreds of employees using AI daily must think about internal controls, approved tools, data handling, and review processes. Even geography matters, because laws differ from state to state.

Despite these differences, we continue to hear vague statements like “You need AI governance.” That does not help anyone understand what they actually need to do.

AI governance should be practical. It should be a structure of policies, workflows, training, and oversight that keeps your use of AI lawful, ethical, and aligned with your goals. But it must be tailored. If your employees are using AI, you need clear rules about what information they can enter, what tools are allowed, and when human review is required. If you are selling an AI-driven product, you need documentation, testing, disclosures, and risk assessments. Even if AI is used only for productivity, you still need safeguards around confidentiality and regulatory compliance.

The consequences of skipping governance are real. Deloitte recently issued a partial refund to the Australian government after a $440,000 report was found to contain multiple errors, including citations to sources that do not exist. Deloitte later acknowledged that part of the report had been generated using a large language model and that, in an attempted correction, incorrect references were simply swapped for new incorrect ones. The department ultimately confirmed that several footnotes and citations were inaccurate and had to be amended.¹ Failures like these are preventable when clear internal rules and review processes are in place.

Good governance is not about fear or buzzwords. It is about clarity, accountability, and real safeguards that match how your organization actually uses AI.


¹ The Guardian, “Deloitte to pay money back to Albanese government after using AI in $440,000 report” (6 Oct. 2025), https://www.theguardian.com/australia-news/2025/oct/06/deloitte-to-pay-money-back-to-albanese-government-after-using-ai-in-440000-report