Old School CTO Solving New School Problems

AI, Hallucinations, and Work-Related Privacy

According to a Facebook ad I saw yesterday, I'm officially a "senior" now. Fantastic. Nothing says "cutting-edge technology leader" quite like being targeted for retirement community brochures and mobility scooters.

But here's the thing about learning your IT skills in the latter part of the 1900s (yes, that's the nineteen hundreds for those keeping track): you actually had to understand how things worked. We didn't have Stack Overflow, ChatGPT, or even Google. When something broke, you fixed it yourself. When you needed to optimize code, you counted CPU cycles. When you ran out of memory, you got creative.

Fast forward to today, and I'm the CTO at 4Square Capital, where we've built askdiana.ai on what I can only describe as a "remarkably light budget." And somehow, using these ancient skills from the last millennium, we've solved three problems that teams of people have been working on for over a year without success.

Let me tell you about them.

Problem 1: Teaching an LLM to Count

Large Language Models are essentially very sophisticated pattern matchers. They're brilliant at understanding context, generating human-like text, and even reasoning through complex problems. But ask them to multiply 847 by 923, and they'll give you an answer that's... well, let's call it "creatively interpreted."

Why? Because LLMs don't actually do mathematics. They predict what number would most likely appear in that position based on patterns they've seen in their training data. It's like asking a literature professor to design a bridge – they might have read about bridges, but you probably don't want to drive over their creation.

"LLMs are like that friend who's great at storytelling but terrible at splitting the restaurant bill."

The problem gets worse with larger numbers, more complex operations, and anything involving precision. For a financial AI system, this is... suboptimal.

Our solution? We don't let the LLM do the math. When askdiana.ai needs to perform calculations, it recognizes the requirement, extracts the actual numbers, passes them to proper computational engines, and integrates the results back into its response. The LLM handles what it's good at (understanding intent and context), and actual mathematics is handled by, you know, actual mathematics.
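This isn't askdiana.ai's actual routing code, but the pattern is simple enough to sketch. In this hypothetical example, a tiny AST-walking `evaluate` helper stands in for the computational engine: the LLM's only job is to recognize intent and extract the expression, and deterministic code produces the number that gets spliced back into the response.

```python
import ast
import operator

# The "computational engine": exact arithmetic, no LLM guessing.
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def evaluate(expr: str) -> float:
    """Evaluate a basic arithmetic expression exactly and safely."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError(f"Unsupported expression: {expr!r}")
    return walk(ast.parse(expr, mode="eval"))

# The LLM extracts "847 * 923" from the user's question;
# the engine, not the model, computes the answer.
print(evaluate("847 * 923"))  # 781781
```

The design choice is the point: the model never sees the arithmetic as a text-prediction problem, so there's nothing for it to "creatively interpret."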

Revolutionary? No. Effective? Absolutely. Sometimes the old-school approach of "use the right tool for the job" still works.

Problem 2: Eradicating Hallucinations

LLMs hallucinate. It's not a bug; it's a feature of how they work. They're confidence-based pattern generators, and sometimes they generate patterns with high confidence that are complete fiction. When you're dealing with financial information, this is a career-limiting feature.

Enter our Genius² engine. Without going into proprietary details, Genius² works on a principle that anyone who debugged code in the 1990s will appreciate: verification.

Back in the day, you couldn't just trust your code would work. You had to test it, verify it, check it against known good outputs, and build in validation at every step. Genius² applies this same principle to AI outputs.

The Genius² Difference

Rather than accepting LLM outputs at face value, Genius² implements multi-layered verification, cross-referencing, and confidence scoring. If the system isn't certain about something, it says so. Novel concept, I know.
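Genius² itself stays proprietary, but the verification discipline can be illustrated with a deliberately simplified sketch. The names here (`VerifiedAnswer`, `verify_claim`) are hypothetical: the idea is that a figure only goes out after it's been cross-checked against a trusted source, and a failed check produces an honest "can't verify" instead of a confident guess.

```python
from dataclasses import dataclass

@dataclass
class VerifiedAnswer:
    text: str
    confidence: float  # 0.0 to 1.0
    verified: bool

def verify_claim(llm_value: float, source_value: float, tolerance: float = 1e-9) -> bool:
    """Cross-reference a number the LLM produced against a trusted record."""
    return abs(llm_value - source_value) <= tolerance

def answer_with_verification(llm_value: float, source_value: float) -> VerifiedAnswer:
    """Only emit figures that survive verification; otherwise, say so."""
    if verify_claim(llm_value, source_value):
        return VerifiedAnswer(
            text=f"The figure is {source_value}.",
            confidence=0.99,
            verified=True,
        )
    # Verification failed: refuse to state the number rather than hallucinate.
    return VerifiedAnswer(
        text="I can't verify that figure against the source data, so I won't state it.",
        confidence=0.0,
        verified=False,
    )
```

The 1990s-debugging parallel holds: never trust an output you haven't checked against a known-good source.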

The result? We've effectively eliminated hallucinations from our financial AI responses. Not reduced them. Eliminated them. Because when you're dealing with people's money, "mostly accurate" doesn't cut it.

Problem 3: Keeping Your Data Actually Private

Here's a fun fact: when you use most AI services, your data goes to their servers. OpenAI, Anthropic, Google – they all process your data on their infrastructure. They promise it's secure, promise they won't train on it (with asterisks), and promise your secrets are safe.

But you know what's more secure than a promise? Physics.

askdiana.ai runs on private infrastructure. Your data never leaves your environment. Not because we're extra good at security (though we are), but because it literally doesn't go anywhere. Can't hack what isn't there.

This required some creative architecture – the kind of creative architecture you learn when you had to fit entire applications into 64KB of RAM and couldn't just throw more cloud resources at the problem. We've optimized, streamlined, and built an AI system that delivers enterprise-grade capabilities while running on infrastructure you control.
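As a small illustration of the "can't hack what isn't there" posture (not our actual stack), here's a hypothetical guardrail that refuses to send data to any model endpoint outside the private network:

```python
import ipaddress
from urllib.parse import urlparse

# Hypothetical self-hosted model endpoint inside your own network.
PRIVATE_ENDPOINT = "http://10.0.0.12:8000/v1"

def is_private_host(url: str) -> bool:
    """Return True only if the endpoint resolves to a private address.

    A client wrapped in this check physically cannot ship your data
    to a third-party cloud API.
    """
    host = urlparse(url).hostname
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        # Not a literal IP; allow only localhost by name.
        return host == "localhost"
    return addr.is_private

# Refuse the request before any data leaves the building.
assert is_private_host(PRIVATE_ENDPOINT)
```

The same inference client then works unchanged; the guardrail just makes "your data never leaves your environment" a property enforced in code, not a promise in a privacy policy.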

The Budget Plot Twist

Here's the part that makes me simultaneously proud and slightly bemused: we've spoken with companies that have had teams of people working on these same problems for over a year. Full teams. Full budgets. Full access to the latest everything.

And they're still stuck.

Meanwhile, we built askdiana.ai on what venture capitalists would call "concerning capital efficiency" (which is code for "not much money"). How?

  • First-principles thinking – Understanding the actual problem, not just the trendy solution
  • Efficient architecture – Building what's needed, not what's impressive
  • Old-school optimization – When you learned to code on limited resources, you never unlearn efficiency
  • Clear requirements – Financial AI needs to be accurate, private, and reliable. Everything else is optional.

Turns out, decades of experience solving problems with limited resources is exactly the right background for building efficient AI systems.

The Senior Advantage

So yes, I'm officially "senior" according to Facebook's ad algorithms. And yes, I learned my trade in the 1900s. But here's what that actually means:

I've seen technology hype cycles come and go. I've watched "revolutionary" approaches fail because they ignored fundamentals. I've debugged systems at 3 AM with nothing but hex dumps and determination. I've optimized code that had to fit in physical memory because virtual memory wasn't a thing yet.

These aren't obsolete skills. They're foundational skills. And in an age where everyone's throwing AI at problems without understanding what's actually happening under the hood, those foundational skills are more valuable than ever.

At 4Square Capital, we're not trying to build the flashiest AI. We're building the most reliable AI. The kind that actually solves real problems for real businesses with real money.

And apparently, being a "senior" with "outdated" skills from the last millennium is exactly what that takes.

"Sometimes the best way to solve tomorrow's problems is with yesterday's wisdom."

If you're interested in AI that actually works – the kind built by people who understand both the technology and its limitations – check out askdiana.ai and 4Square Capital.

We may be "seniors" with skills from the 1900s, but we're solving 2025's problems better than most.

Now if you'll excuse me, I need to yell at some clouds about how nobody understands proper memory management anymore.