We connected ChatGPT to our CRM last month. The first thing it did was ask what it was allowed to touch.

I'll be honest, I'd been building this up in my head. I cleared twenty minutes in my calendar. I told the team I was going to "connect the AI to our client data" like I was about to perform surgery. One colleague asked if I wanted her to stand by in case something went wrong. I wasn't sure if she was joking.

The reality? A permission screen. A list of checkboxes. Contacts, deals, companies, tickets, tasks. Each one with a toggle for "view" or "view and edit." It took about thirty seconds to configure. I'd prepared for a heist movie. I got a terms and conditions page.

And here's the bit that genuinely surprised me: that boring, anticlimactic permission screen might be the most important thing happening in AI security right now.

The Permission Screen You've Been Ignoring

You know that screen you click "Allow" on without reading? The one that pops up when you connect any app to anything? "This app would like to access your contacts, calendar, and first-born child." Click. Allow. Move on with your day.

This is that screen. Except this time, maybe read it.

When you connect ChatGPT to HubSpot (through OpenAI's connectors feature), you get an OAuth consent screen that lists every HubSpot data type the integration wants to access. Contacts. Deals. Companies. Tickets. Tasks. Notes. Calls. Meetings. Products. Quotes. Each one has separate permissions for "view properties and other details" versus "create, delete, or make changes to."

[Image: The HubSpot ChatGPT connector permission screen, showing granular data access controls with Required and Optional permission categories]

It's granular. It's specific. And it's the same OAuth consent pattern we've been using for regular app integrations for years. The only difference is that now the app on the other end is an AI.

That's it. That's the whole thing.
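For the curious, the machinery behind that screen is plain OAuth 2.0. Here's a minimal sketch of building the consent URL that produces it, using scope names from HubSpot's public OAuth documentation (the client ID and callback URL are placeholders, and you should verify scope names against HubSpot's current list):

```python
from urllib.parse import urlencode

# Read-only scopes: "view properties and other details" without
# "create, delete, or make changes to". Names per HubSpot's OAuth docs;
# verify against the current scope list before relying on them.
READ_ONLY_SCOPES = [
    "crm.objects.contacts.read",
    "crm.objects.companies.read",
    "crm.objects.deals.read",
]

def authorize_url(client_id: str, redirect_uri: str, scopes: list[str]) -> str:
    """Build the OAuth consent URL that renders the permission screen."""
    params = urlencode({
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),  # scopes are space-separated
    })
    return f"https://app.hubspot.com/oauth/authorize?{params}"

url = authorize_url("your-client-id", "https://example.com/callback", READ_ONLY_SCOPES)
print(url)
```

The point of the sketch: the checkboxes you tick are literally the `scope` parameter. Leave the write scopes out of the list and the AI physically cannot edit anything, no matter what it's asked to do.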

I don't understand why this isn't getting more attention. The entire conversation around AI and data security has been about fear, about companies locking everything down, about horror stories of someone pasting their company's source code into ChatGPT (Samsung, 2023, we all remember). And meanwhile, the actual safety mechanism has been sitting there the whole time. It's a checkbox. It's boring. And boring, it turns out, is exactly what good security looks like.

Only about 34% of enterprise organisations have implemented AI-specific security controls (USCSI/WWT, February 2026). Which means roughly two-thirds of companies are running AI integrations with whatever default access they clicked "Allow" on.

I've been doing this for twenty years and I definitely just clicked "Allow" on something last week without reading it. I'm part of the problem. At least I can admit it.

This Isn't Just HubSpot

Here's the thing. Every major SaaS vendor is doing this now. The permission model isn't a HubSpot quirk. It's an industry pattern.

Microsoft 365 Copilot is probably the most documented example. Copilot can only access content that you, the user, already have permission to view. If you can't see a SharePoint site, Copilot can't see it either. Sensitivity labels? Enforced. Data classification? Respected. Microsoft even published an entire "Oversharing Blueprint" that acknowledges, in writing, that Copilot amplifies existing permission problems.

I genuinely love that Microsoft called it an "Oversharing Blueprint." It sounds like a self-help pamphlet for people who post too much on LinkedIn. (Not that I'd know anything about that.)

Their documentation is blunt about it too: "Oversharing is one of the most common risks organizations encounter when deploying Microsoft 365 Copilot. Because Copilot surfaces information that users already have permission to access, overly broad sharing can expose content to a wider audience than intended" (Microsoft Learn, March 2026).

In plain English: Copilot doesn't hack anything. It just finds stuff you technically had access to but never knew existed. Like discovering your company's salary spreadsheet was on a SharePoint site that everyone in the organisation had read access to. The permission was always there. Copilot just made it discoverable.

Salesforce Einstein applies field-level security to its AI features. Einstein Copilot Builder lets admins manage exactly which actions AI agents can perform and set permissions per field and object. If a sales rep can't see a field in their Salesforce profile, Einstein can't see it either.

Google Workspace Gemini gives workspace admins controls to enable or disable AI features per user group and organisational unit. Want Gemini in Gmail for the marketing team but not for the legal department? You can do that.

Slack AI respects channel-level permissions. It can only summarise conversations in channels you're a member of. Can't access private channels you're not in. Won't touch DMs. The AI has the same hallway pass as you.

Notion AI has a workspace-level toggle and respects page-level permissions. If someone shared a page with you, AI can see it. If they didn't, it can't.

The pattern is the same everywhere: every vendor is building AI permissions into their existing security model. They're not bolting on a separate "AI security layer." They're extending the permission system that was already there. Your existing security setup carries forward.
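If you want the pattern in one line, it's set intersection. This is my toy model, not any vendor's actual implementation: the AI's effective access is whatever sits in both the user's existing permissions and the scopes granted on the consent screen.

```python
# Toy model of the shared pattern, not any vendor's real implementation:
# effective AI access = (what the user can already see) AND (what was granted).
user_can_see = {"contacts", "deals", "shared-notes"}
integration_granted = {"contacts", "deals", "tickets"}  # ticked on consent

effective_ai_access = user_can_see & integration_granted

print(sorted(effective_ai_access))
```

The AI never exceeds either boundary: it can't see tickets the user was never allowed into, and it can't see shared notes the integration was never granted.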

Boring? Absolutely. Effective? Also yes.

Why "Just Block AI" Was Never Going to Work

I love the LinkedIn posts where someone connects ChatGPT to their email and acts like they've handed the nuclear codes to a toddler. Very dramatic. Great engagement. Completely misses the point.

Look, I get the impulse. The early days of ChatGPT were wild. Samsung employees leaked source code. Companies issued blanket bans. IT departments blocked ChatGPT URLs at the firewall level like it was a dodgy gambling site. And for a while, "block everything" felt like the responsible thing to do.

But here's what actually happened when companies blocked AI: employees used it on their phones. Personal accounts. Personal devices. Shadow AI. The exact same data exposure risk, except now IT had zero visibility into it.

Blocking AI was like locking the front door while the back windows were all open. You felt safer. You weren't.

The robots.txt approach (which we've written about before) is about external AI crawlers visiting your website. That's a different problem. The bigger risk for most businesses is internal AI tools accessing data they shouldn't. And that's a permission problem, not a firewall problem.

Gartner predicted that 40% of agentic AI projects will be cancelled by the end of 2027, specifically because of governance and risk issues (Gartner, June 2025). Forty percent. Not because the technology doesn't work. Because the permissions aren't sorted.

That stat genuinely surprised me. It's not a technology failure. It's a checkbox failure. Companies are building sophisticated AI agents and then realising nobody figured out what those agents should actually be allowed to do. That's like building a Formula One car and then asking "wait, does anyone know the route?"

But I'm getting sidetracked. The point is: the permission model is the middle ground between "block everything" and "allow everything." AI gets access to what it needs. Nothing more. It's the principle of least privilege, and it's been a security best practice since before AI was involved. We just need to apply it to the new stuff.

What This Actually Means for Australian Businesses

Right, here's where I bring it back to something practical. (I know, I've been talking about permission checkboxes for 1,500 words. Bear with me. This part matters.)

If you're running a business in Australia, you've got obligations under the Privacy Act. The Australian Privacy Principles require you to know where personal information flows. APP 6 restricts how you use and disclose personal information. APP 11 says you need to take reasonable steps to protect it.

When you connect an AI tool to your CRM and it accesses your contact database, that's a "use" of personal information under the Act. Not theoretical. Actual.

Here's the good news: the granular permission screens we've been talking about map directly to these requirements. The OAIC's purpose limitation principle? That's what the checkboxes are for. You're explicitly choosing which data the AI can access and for what purpose. It's auditable. It's documented. It's exactly the kind of evidence you'd want if someone ever asked "how did you ensure AI wasn't accessing personal information it shouldn't have?"

Practical steps (and I'm keeping this short because I'd rather you actually do these than read another paragraph about them):

Screenshot the permission screen when you enable an AI integration. Save it somewhere. Date it. If you ever need to demonstrate what access you granted, you've got the receipt.

Document what you've allowed. A simple spreadsheet works. Tool name, what data it can access, read or write, date enabled. It doesn't need to be fancy. It needs to exist.

Review quarterly. Permissions drift. Someone enables something "temporarily" and it stays enabled for three years. (We've done this. Multiple times. I am not judging.)

If a tool doesn't show you a permission screen, that's not because it's smarter. It's because nobody built one. And that tells you something about their security maturity. I'd be asking questions.
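That register really can be a plain CSV. A minimal sketch, assuming the file name and column headings I've made up here (use whatever your team will actually maintain):

```python
import csv
from datetime import date
from pathlib import Path

# Minimal AI-integration access register: one row per enabled integration.
# File name and columns are my own convention, not any standard.
REGISTER = Path("ai_access_register.csv")
FIELDS = ["tool", "data_accessed", "access_level", "date_enabled"]

def record_integration(tool: str, data_accessed: str, access_level: str) -> None:
    """Append one integration to the register, writing the header on first use."""
    new_file = not REGISTER.exists()
    with REGISTER.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "tool": tool,
            "data_accessed": data_accessed,
            "access_level": access_level,
            "date_enabled": date.today().isoformat(),
        })

record_integration("ChatGPT + HubSpot", "contacts, deals, companies", "read-only")
```

Ten lines of record-keeping, and suddenly your quarterly review is "open the file" instead of "try to remember what we turned on".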

Deloitte's State of AI 2026 report, which surveyed 3,235 business leaders, found that only about 20% of companies have what they'd call "mature" governance models for autonomous AI agents, even though usage of those agents is projected to jump from 23% to 74% within two years (Deloitte, March 2026). Twenty percent. That means four out of five companies are just... winging it.

The same report found that organisations with mature AI governance achieve 52% faster time-to-value from their AI investments. So the companies that actually configured the checkboxes? They're getting results faster than the ones who didn't.

Pretty insane, when you think about it. The boring admin work is literally the competitive advantage.

The Permission Screen as Strategy

Here's something I don't think most business owners have clocked yet.

When you audit your SaaS stack and start turning on AI features (which you should, because you're already paying for them), the permission screen is the first thing you'll see. It's the gateway between "we have AI" and "we have AI that's properly controlled."

My kids have a better permission model than most enterprise AI deployments. My six-year-old knows he can watch certain things on the iPad and not others. He doesn't get full admin access to the streaming accounts. He gets age-appropriate content with parental controls. He occasionally tries to escalate his privileges (he's six, not stupid), but the controls are there.

The average enterprise AI deployment? "Yeah, give it access to everything. It's fine. What could go wrong?"

The tools have the controls. You just haven't looked at them. Every major SaaS vendor, Microsoft, Salesforce, Google, HubSpot, Slack, Notion, has built AI-specific permission models into their products. The infrastructure is there. The safety net exists. You've been walking over it every day.

The question isn't whether AI is safe. That's the wrong question. It's always been the wrong question.

The question is: did you configure it?

The Honest Assessment

I should probably admit that I haven't tested every permission control in every tool on that list. I've gone deep on HubSpot (because we use it every day) and Microsoft 365 (because we deploy it for clients). The Salesforce and Notion details are based on their published documentation, not firsthand experience. I'm being upfront about that because I think it matters.

What I can tell you from twenty years of building websites and deploying enterprise software is this: the pattern is real. The controls exist. And the gap between "having the controls" and "using the controls" is where most businesses are stuck right now.

We're all figuring this out. Every agency, every IT team, every business owner who's trying to use AI without accidentally exposing their entire client database. The good news is that the safety net was built before most of us got around to looking for it.

What To Do Next

If you're not sure what your AI integrations can access right now, that's a problem. Not a huge, dramatic, sky-is-falling problem. But a real one. And it's fixable in about an hour.

Go through every AI integration you've enabled. Check the permission screens. Screenshot them. Write down what access each tool has. Then ask yourself: does it need all of that?

If you want help doing this properly (auditing your AI integrations, configuring permissions for HubSpot or M365 Copilot, making sure your setup is defensible under the Privacy Act), that's something we do at Webcoda. Get in touch and we'll walk you through it.

The future of AI security is a checkbox. I know. Try not to get too excited.

Job done.

---

Sources
  1. Microsoft. "Data, Privacy, and Security for Microsoft 365 Copilot." Updated 9 March 2026. https://learn.microsoft.com/en-us/copilot/micro...
  2. Microsoft. "Microsoft 365 Copilot Blueprint for Oversharing." Updated 6 March 2026. https://learn.microsoft.com/en-us/copilot/micro...
  3. Microsoft. "Get Ready for Microsoft 365 Copilot with SharePoint Advanced Management." Updated 29 January 2026. https://learn.microsoft.com/en-us/sharepoint/ge...
  4. Microsoft. "Microsoft Purview Data Security and Compliance Protections for AI Apps." Updated 27 February 2026. https://learn.microsoft.com/en-us/purview/ai-mi...
  5. Microsoft. "Copilot Control System Security and Governance." Updated 25 February 2026. https://learn.microsoft.com/en-us/copilot/micro...
  6. USCSI/WWT. "When AI Stops Asking Permission: The New Security Imperative." February 2026. https://www.wwt.com/article/when-ai-stops-askin...
  7. Gartner. "Gartner Predicts Over 40% of Agentic AI Projects Will Be Canceled by End of 2027." June 2025. https://www.gartner.com/en/newsroom/press-relea...
  8. Deloitte. "State of AI in the Enterprise, 2026." March 2026. https://www.deloitte.com/us/en/what-we-do/capab...
  9. HubSpot. "OAuth Scopes Documentation." 2026. https://developers.hubspot.com/docs/guides/apps...
  10. Office of the Australian Information Commissioner. "Australian Privacy Principles." https://www.oaic.gov.au/privacy/australian-priv...

---