
AI at Work in 2026: Super Assistant or Security Snafu?


AI Is Officially in the Office

If 2023 was the age of “AI hype,” then 2024–2025 was the era when AI became your new coworker.

  • Microsoft’s New Future of Work research found that about 29% of information workers report using generative AI several times a week at work and saving roughly 30 minutes a day on average (Microsoft New Future of Work Report, 2024).
  • A Federal Reserve Bank of St. Louis analysis of U.S. survey data found that around 28% of all workers were using generative AI at work by late 2024, with ongoing research tying that usage to meaningful productivity gains.
  • Macroeconomically, U.S. productivity in Q3 2025 grew at its fastest pace in two years, and economists explicitly pointed to AI adoption as one of the forces behind that jump.

In other words: AI in the workplace isn’t “coming” — it’s here, and it’s already showing up in how we write, research, plan, and support customers.

At Fisher’s Technology, we’re seeing this shift first-hand with customers across Idaho, Montana, Washington, and Utah. Teams are excited about AI’s speed and creativity – and rightfully wary of security, compliance, and the risk of team members or customers feeding sensitive data into the machine.

You can expect future deep dives, but today let’s cover the essentials: benefits, risks, big-name tools vs. smaller ones, and practical best practices so your organization gets the upside without becoming a headline.

Why Businesses Are Leaning Into AI

First things first – let’s give AI its due. When used well, it’s a serious productivity boost!

1. Faster content & communication

Tools like ChatGPT, Microsoft Copilot, Google Gemini, and Claude can:

  • Draft emails, proposals, and knowledge base articles
  • Summarize long PDFs, meeting transcripts, and email threads
  • Rewrite content for different audiences (executive summary vs. technical detail)

Early Microsoft research shows that workers using Copilot-style tools report saving 30+ minutes per day and feeling less mentally drained by repetitive writing tasks.
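
To make the summarization use case concrete, here’s a minimal sketch using OpenAI’s Python SDK. The model name and file path are illustrative assumptions, and in practice you’d run this through your organization’s sanctioned account, not a personal one:

```python
# A minimal sketch using the official OpenAI Python SDK ("pip install openai").
# Assumes OPENAI_API_KEY is set in the environment; the model name and file
# path are illustrative, not recommendations.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("meeting_transcript.txt", encoding="utf-8") as f:
    transcript = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; use whatever your plan includes
    messages=[
        {"role": "system",
         "content": "Summarize this meeting transcript in five bullet points, then list action items with owners."},
        {"role": "user", "content": transcript},
    ],
)

print(response.choices[0].message.content)
```

The same pattern works for rewriting a technical document as an executive summary – only the system prompt changes.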

2. Better research & decision support

Generative AI is increasingly used for:

  • Quick background research on a topic
  • Comparing pros and cons of tools or strategies
  • Drafting outlines for policies, project plans, or training

The Federal Reserve Bank of St. Louis’s 2025 work on generative AI adoption suggests that even modest time savings per worker can add up to aggregate productivity gains at the firm and national level – especially for work that leans on theoretical or analytical knowledge.

3. Lower barrier to “coherent” drafts

Not every organization can hire a full-time technical writer, analyst, or designer.

AI assistants can help small teams:

  • Produce more polished customer communication
  • Standardize tone and structure
  • Get from “blank page” to “reasonable draft” quickly

For SMBs – the core of who we serve here at Fisher’s – this is a big deal: you can get large-enterprise-level polish without large-enterprise headcount.

The Flip Side: Real Risks and Vulnerabilities

Now the not-so-fun part. As AI tools get more powerful, the risks grow too.

1. Shadow AI & data leakage

One of the biggest 2025 red flags is shadow AI: employees using personal AI accounts or unsanctioned tools to do work tasks.

Security researchers are seeing:

  • Sensitive content pasted into free ChatGPT or Gemini accounts
  • Staff installing unvetted AI browser extensions that read everything on the page
  • No logging, no data loss prevention (DLP), no central policy

This can expose:

  • Customer data (emails, tickets, health or financial info)
  • Confidential business documents (contracts, pricing, roadmaps)
  • Credentials or secrets accidentally pasted into prompts

Even if the AI provider itself is reputable, using personal accounts instead of enterprise versions makes it harder—or impossible—for your IT team to monitor and control.

2. AI misuse by attackers

AI isn’t just a tool for your staff. Attackers are using it too.

In 2025, Anthropic published reports about detecting and blocking attempts to misuse its Claude AI system for:

  • Crafting highly targeted phishing emails
  • Generating malicious code and ransomware
  • Automating reconnaissance and “vibe hacking” (social engineering at scale)

Other major providers have acknowledged similar pressures, and regulators are moving toward tighter safety and governance expectations globally.

3. Over-trusting the “smart intern”

Generative AI is still prone to:

  • Hallucinations – confidently wrong answers
  • Outdated information
  • Bias, especially on sensitive topics

If your team assumes “the AI must be right,” you risk shipping wrong numbers to clients, misinterpreting laws, or making bad strategic calls.

The healthy mindset is: AI = fast intern, not final authority.

Big Players vs. Smaller AI Platforms: How to Think About It

Rather than talk about vague “AI platforms,” let’s get specific about the big names and how they’re positioning themselves for enterprise use.

The “Big Four” in Workplace AI

  1. ChatGPT Enterprise / ChatGPT Team (OpenAI)
    • Offers admin controls, SSO, audit logs, and a “no training on your business data” promise for enterprise tiers.
    • OpenAI highlights security certifications like ISO 27001/27017/27018/27701 and provides a Trust Center and SOC 2 report under NDA.
  2. Microsoft 365 Copilot
    • Runs inside your existing Microsoft 365 tenant, respecting existing permissions and sensitivity labels. That means Copilot can only see what the user can see.
    • Security guidance emphasizes Purview DLP, sensitivity labels, and access controls as key to keeping Copilot from surfacing or processing sensitive content inappropriately.
  3. Google Workspace with Gemini
    • Built on Google Cloud’s existing infrastructure, with long-standing compliance coverage including HIPAA support, FedRAMP, the ISO 27000 series, SOC reports, and PCI DSS across relevant services.
    • Google explicitly markets Gemini Enterprise as “enterprise-ready,” with configurable data governance controls and support for regulated workloads.
  4. Claude Enterprise (Anthropic)
    • Anthropic positions Claude as a “safety-first” model with a strong focus on aligned behavior.
    • Enterprise offerings (Claude Enterprise / Claude for Enterprise on AWS) emphasize that enterprise conversations aren’t used to train models, and support private deployment and enterprise controls.

These platforms all invest heavily in:

  • Compliance certifications
  • Admin controls
  • Clear(er) data handling policies
  • Security engineering and monitoring

It doesn’t mean they’re perfect—but you at least have documentation, contracts, and audit trails to work with.

Smaller vendors and niche tools

Smaller AI tools, extensions, and startups can be:

Pros:

  • Cheaper, more flexible pricing
  • Niche features (industry-specific templates, specialized integrations)
  • Faster to innovate in certain workflows

Cons:

  • Limited published information on security & compliance
  • Fewer (or no) third-party audits and certifications
  • Vague EULAs about how they use your data
  • Higher risk of going under or being acquired, changing behavior overnight

The key question isn’t “big = good, small = bad.” It’s:

“Can we get the level of transparency, control, and contractual protection we need from this vendor?”

For most SMBs, it’s usually safer and simpler to anchor on one or two large platforms (e.g., Microsoft 365 Copilot + a sanctioned ChatGPT/Claude deployment) and strictly limit everything else.

How to Make AI Safer to Use with Company Data

Here’s a practical checklist you can use internally at Fisher’s or share with customers.

1. Pick your “official” tools and document them

Choose 1–3 platforms you’ll support, for example:

  • Microsoft 365 Copilot
  • ChatGPT Enterprise or Team
  • Google Gemini or Claude Enterprise for specific use cases

Document:

  • Which plan/tiers are approved
  • Who can use what
  • Where data is stored and how it’s governed
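
One lightweight way to capture that documentation is a small machine-readable registry that scripts and onboarding docs can both reference. This is a sketch under assumed tool names and policy fields, not a standard format:

```python
# A minimal sketch of a machine-readable "approved AI tools" registry.
# Tool names, tiers, and policy fields are illustrative assumptions.
APPROVED_AI_TOOLS = {
    "microsoft-365-copilot": {"tier": "enterprise", "data": "internal OK"},
    "chatgpt-enterprise":    {"tier": "enterprise", "data": "internal OK"},
    "claude-enterprise":     {"tier": "enterprise", "data": "public only"},
}

def check_tool(name: str) -> str:
    """Tell an employee whether a tool is sanctioned and under what terms."""
    policy = APPROVED_AI_TOOLS.get(name)
    if policy is None:
        return f"'{name}' is NOT approved - ask IT before using it."
    return f"'{name}' is approved ({policy['tier']} tier, {policy['data']})."

print(check_tool("chatgpt-enterprise"))
print(check_tool("random-browser-extension"))
```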

2. Configure security and governance before a full rollout

For tools like Copilot, Gemini, and ChatGPT Enterprise:

  • Integrate with SSO and MFA so accounts are tied to corporate identity.
  • Set up DLP and sensitivity labels so AI tools can’t casually surface restricted content.
  • Confirm data retention and training settings (e.g., “your prompts aren’t used to train our base models” on enterprise tiers).
  • Turn on audit logs where available so you can see which teams are using what.
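
On the audit-log point, here’s a minimal sketch of pulling recent sign-in events from Microsoft Graph so you can see which apps people are authenticating to. It assumes an Entra ID app registration with the AuditLog.Read.All application permission; the IDs are placeholders:

```python
# A minimal sketch, not production code: pull recent sign-in events from
# Microsoft Graph to see which apps (AI tools included) people authenticate
# to. Assumes an Entra ID app registration with the AuditLog.Read.All
# application permission; install dependencies with "pip install msal requests".
import msal
import requests

TENANT_ID = "your-tenant-id"          # placeholder
CLIENT_ID = "your-app-client-id"      # placeholder
CLIENT_SECRET = "your-client-secret"  # placeholder

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
if "access_token" not in token:
    raise SystemExit(f"Auth failed: {token.get('error_description')}")

resp = requests.get(
    "https://graph.microsoft.com/v1.0/auditLogs/signIns",
    headers={"Authorization": f"Bearer {token['access_token']}"},
    params={"$top": "50"},
    timeout=30,
)
resp.raise_for_status()

# Scan the app names for anything outside your approved list.
for event in resp.json().get("value", []):
    print(event.get("userPrincipalName"), "->", event.get("appDisplayName"))
```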

3. Draw a bold line around “never paste” content

For both internal and external audiences, make this crystal clear:

Do NOT paste these into AI tools (even approved ones) without explicit policy approval:

  • Full payment card numbers
  • Social Security numbers and government IDs
  • Protected health information (PHI)
  • Unreleased financials, M&A plans, or legal strategy
  • Credentials, API keys, VPN configs

You can refine the list by industry. Healthcare, finance, and legal will have stricter guardrails.
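
To turn that “never paste” list into something enforceable, even a simple pattern screen run before text leaves your hands can catch the obvious cases. This is a minimal sketch with illustrative patterns, not a replacement for a real DLP product:

```python
# A minimal "pre-paste" screen - a sketch, not a real DLP product.
# The patterns are illustrative and will produce false positives;
# tune them to your industry before relying on them.
import re

NEVER_PASTE_PATTERNS = {
    "possible payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "possible SSN":                 re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "AWS access key":               re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private key block":            re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def screen_prompt(text: str) -> list:
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, rx in NEVER_PASTE_PATTERNS.items() if rx.search(text)]

hits = screen_prompt("Card: 4111 1111 1111 1111, SSN: 123-45-6789")
print("Blocked:", ", ".join(hits) if hits else "nothing found")
```

A real deployment would wire a check like this into a proxy or browser policy; a commercial DLP tool does the same job with far fewer false positives.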

4. Train people like AI is here to stay

This has to be a living conversation, not a one-time email:

  • Run short quarterly trainings on:
    • What tools are approved
    • How to handle company and client data
    • How to spot AI-written phishing and invoice scams
  • Encourage a culture of “ask before you paste”, not fear and shame.

F-Secure and others have shown that email remains the top channel for scams, and AI is making those messages more convincing, not less. Training is a must, not a “nice to have.”

5. Monitor and adjust (AI governance is not “set and forget”)

  • Track which AI tools are actually being used (sanctioned and “shadow”) – a starter sketch follows this list.
  • Review logs and adjust permissions and policies as new features roll out.
  • Stay plugged into vendor trust and security updates – major AI providers are frequently updating safety features and disclosure.
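
For the usage-tracking piece, here’s a minimal sketch that tallies visits to well-known AI domains from an exported proxy or DNS log. The CSV column name and domain list are assumptions; match them to whatever your firewall or proxy actually exports:

```python
# A minimal sketch that tallies hits to well-known AI domains from an
# exported proxy/DNS log. The "domain" column name and the domain list
# are assumptions; adjust them to your environment.
import csv
from collections import Counter

AI_DOMAINS = {
    "chatgpt.com": "ChatGPT",
    "chat.openai.com": "ChatGPT",
    "gemini.google.com": "Gemini",
    "claude.ai": "Claude",
    "copilot.microsoft.com": "Copilot",
}

counts = Counter()
with open("proxy_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        tool = AI_DOMAINS.get(row["domain"].strip().lower())
        if tool:
            counts[tool] += 1

for tool, hits in counts.most_common():
    print(f"{tool}: {hits} requests")
```

Anything showing heavy traffic that isn’t on your approved list is a conversation to have, not a reason to shame anyone.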

Where Fisher’s Fits In

For our internal Fisher’s team and for customers reading this: you don’t have to untangle all of this alone.

Our Managed IT & Security practice is already helping SMBs:

  • Evaluate whether Copilot, ChatGPT Enterprise, Gemini, or Claude fit their environment
  • Configure identity, permissions, and DLP so AI tools respect existing security boundaries
  • Build simple, human-readable AI policies for staff
  • Monitor usage and adjust as AI becomes part of everyday workflows

If you’re in Idaho, Montana, Washington, or Utah and want a sanity check on your AI plans – or want help rolling out AI without rolling out red carpets for attackers – you can reach us at fisherstech.com.

Final Thoughts: Aim for “AI ASSISTANT,” Not “AI Free-for-All”

AI in the workplace isn’t a fad. The productivity and creativity gains are real – and so are the risks.

With AI becoming even more ingrained in the workplace, organizations need to ensure they:

  • Choose one or two trusted AI platforms
  • Configure them with security and compliance in mind
  • Coach their employees on how to use AI wisely
  • Review and refine as the tools evolve

If you remember nothing else, make it this:

AI is like a power tool: awesome in skilled hands, dangerous in a free-for-all.

Use it. Just use it with purpose.