Let’s get something out of the way right up front: we use AI at AK Operations. Extensively. And I’m not going to pretend otherwise or wrap it in some carefully worded disclaimer about “responsible AI usage.” What I am going to do is tell you exactly how we use it.
My perspective here is a little different from what you’ll typically read. I’m not a marketer. My lane is internal. My job is to find the tools, stress-test them, figure out what actually works, and bring those findings to the team so they can apply them to our clients’ specific situations. Nothing in our business is ever plug-and-play. Everything gets filtered through the lens of what a particular client actually needs.
So this isn’t a marketing team’s take on AI content strategy. This is what it looks like from the inside of a company that’s genuinely trying to figure this out in real time.
What We’ve Gotten Wrong
We’ve wasted time. A lot of it.
We didn’t take the time to properly prompt. I had access to our team’s chat history for a while. Not to spy, but to understand how people were actually using these tools. What I found was painful. The same prompt being sent over and over. People going back and forth, arguing with the chat, getting increasingly frustrated because they weren’t getting what they wanted. And the reason? They were throwing half-baked prompts at it and expecting it to read their minds.
Here’s the irony: they were spending an hour and a half trying to save twenty minutes.
We also leaned too hard on ChatGPT. It was first to market, so it was everybody’s default, including ours. And over time, the outputs just became boring. Generic. Repetitive. Everything it produced started sounding like everything else it had produced. Nothing unique. Clearly AI and obviously ChatGPT. We got comfortable with one tool when we should have been expanding and comparing.
We trusted the output without verifying it. AI will take liberties. It hallucinates less than it used to, but it will still steer things in a direction that sounds right but isn’t. Our standard now is simple: trust, but verify. Read what it writes. Click the sources it cites. Make sure the direction it took is actually the direction you wanted.
The Tools We Actually Use (And What Each One Is For)
We’re using a lot of tools. And the key word is tools, plural. Each one has a job.
Gemini has become huge for us, primarily because of how deeply it integrates with Google Workspace. We’re a Google shop, so having Gemini baked directly into all of those tools means we’re not constantly exporting CSVs, switching tabs, or copy-pasting between platforms. The friction is gone. That matters operationally.
For real-time research and current event information, Grok is a go-to. It pulls current information in a way that other tools don’t always do as cleanly. For anything technical or creative, things that need actual reasoning and depth, Claude is where I spend most of my time personally. The quality of thinking it produces is different.
Other tools we use: Canva AI for visual content, Gamma for presentations, Grammarly for refinement and editing, and tools like Granola and Flow that are more specialized. ChatGPT is still in the mix, though it’s no longer our default or first call.
One thing I do regularly, and I’d recommend this to anyone, is take the same prompt and run it through two or three tools and compare the outputs. They’re pulling from different sources and the differences in what comes back are often revealing.
One tool I’ll admit we underestimated: Claude. We moved away from it earlier than we should have. Their focus on enterprise and the development of Claude Code has been significant. The ability to build out functional apps and tools without being a developer has been a genuine game changer.
The Rip-and-Replace Problem
This is the thing that bothers me most about how AI is being used right now, and I’m just going to say it directly: people are lazy with it.
They send one half-thought-out prompt into ChatGPT, get a 400-word block of text back about a topic they don’t fully understand, and then copy and paste it directly. No verification. No editing. No additional context. No personality. Just rip whatever came out and replace whatever was supposed to be there with it. That’s what I call the rip-and-replace problem.
And it is so obvious. Here’s how you spot it: emojis used as bullet points. Phrases structured as “that’s not X, that’s Y.” Seven paragraphs where, if you actually read them, only three say anything distinct and the other four just restate the same point in slightly different words. Em dashes used as punctuation everywhere. Content that uses a lot of words without actually saying anything.
People can see through it immediately. And the cost isn’t just that the content looks lazy. It signals something about the brand behind it. It tells people you don’t have enough respect for your audience to actually think. AI is supposed to make you more effective, not replace the thinking you should be doing in the first place.
How We Actually Stay Current
Staying on top of AI right now is genuinely a full-time effort, and I treat it like one.
Ways we stay relevant:
- Read newsletters daily that cover new tools, use cases, and techniques
- Join communities on X that are specifically focused on AI tools and trends
- Test constantly, whether on my own or with other teammates
- Share relevant updates and tools with the team via a dedicated Slack channel
One thing I am watching closely right now: AI voice cloning and social clones. It’s already happening, and it’s getting frighteningly good. The ability to provide enough context and information to create an autonomous voice agent, one that can answer your phone, book appointments, field questions, is closer than most people think. That’s going to reshape a lot of B2C operations.
The One Thing We Can’t Afford to Lose
I’ve been thinking about the dead internet theory lately. The idea that we’re approaching a point where most of the content online is AI-generated, being consumed by AI, which then generates more AI content based on what it reads, which gets consumed again, and around it goes. A vicious loop where nobody’s actually saying anything original anymore.
We are getting close to that.
Here’s the thing: people are starving for individuality. They want to hear from a real person with a real take on something. When everything gets flattened by AI into the same generic tone, the people who still sound like themselves are going to stand out in a way they never have before.
At AK Operations, our standard is simple: if it doesn’t sound like us, it doesn’t go out. AI helps us refine, reframe, and get more efficient. It does not replace our voice. That’s non-negotiable.
Where We Go From Here
I’ll end where I started: we’re still figuring this out. I think any business that tells you they have AI fully dialed in is either lying or not paying close enough attention. The tools are changing fast. The right use cases shift. The mistakes are still happening.
What I can tell you is the principle that holds regardless of which tool is hot this month: garbage in, garbage out. If you throw a weak prompt at AI with no context and no direction, you’re going to get something weak back. The quality of what you get is a direct reflection of the quality of thinking you put in.
Use it to think better. Use it to move faster. Use it to challenge your ideas and pressure-test your assumptions. But don’t let it think for you. And whatever you do, don’t let it speak for you.
How is your team handling AI right now? I’d genuinely like to know.
