
AI and Startups: The “Faster, Better, Cheaper” Promise and Why I’m Still Skeptical

  • Writer: Annekah Hall-Solomon
  • Aug 14
  • 5 min read

Executive Take

I’m in the business of people. That means my job, and yes, my bias, is to make sure technology serves the humans who keep companies alive.


Lately, I’ve noticed a pattern: many founders and executives are treating AI as a guaranteed win. “It’s faster. It’s better. It’s cheaper.” But is it?


The data tells a more complicated story. 65% of companies now say they’re using generative AI in some form, yet only ~1% consider their adoption “mature.” That gap tells me most of us are still experimenting and that decisions are often driven more by excitement than by disciplined evaluation.


I’m not anti-AI. I’m pro-results. And my skepticism comes from watching too many leaders roll out AI without:

  • A clear definition of the problem it’s solving

  • A baseline for measuring improvement

  • Guardrails for compliance and human oversight

  • A plan for the inevitable risks


This isn’t about resisting innovation. It’s about making it earn its way into our operations the same way any other strategic initiative would.


The Temptation of “Faster, Better, Cheaper”


This phrase sounds irresistible, especially to founders who are under pressure to scale quickly. The danger? It becomes a shortcut for skipping due diligence.


Faster?

Generative AI can speed things up. I’ve seen it shave hours off drafting job descriptions, answering repetitive support requests, or analyzing survey data.


But here’s the nuance:

  • Speed is uneven. A marketing team might see dramatic gains in ad copy creation, while a legal or HR team might see little to no benefit because of the need for human review.

  • Time saved isn’t always time gained. That “extra” time often gets reabsorbed into new work: double-checking outputs, clarifying prompts, or handling edge cases AI can’t yet manage.


Example: A SaaS startup I advised deployed an AI tool for first-round resume screening. It processed applications 80% faster than the human team. Sounds like a win, right? Except… the recruiting team then spent more time manually reviewing borderline cases and addressing false negatives (qualified candidates incorrectly screened out). In the end, the net speed gain was closer to 20%, and the team still needed to adjust their process to avoid bias risks.
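To see how a raw gain shrinks to a net one, here’s a minimal sketch of the math. The hours below are illustrative, not the startup’s actual figures:

    # Illustrative numbers only -- not the startup's actual figures.
    baseline_hours = 10.0                      # weekly hours spent screening by hand
    ai_screening_hours = baseline_hours * 0.2  # "80% faster" raw processing
    rework_hours = 6.0                         # reviewing borderline cases and false negatives

    net_hours = ai_screening_hours + rework_hours
    net_gain = 1 - net_hours / baseline_hours
    print(f"Raw gain: 80%, net gain: {net_gain:.0%}")  # -> net gain: 20%

The 80% describes only the screening step itself. Once the downstream review work gets added back, the team-level gain is far smaller.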


People-first takeaway: Faster matters only if it’s faster and accurate. Otherwise, you’re just making bad decisions more quickly.


Better?

AI is improving rapidly. Models are more sophisticated, context windows are bigger, and retrieval-augmented generation (RAG) makes outputs more relevant. But “better” is relative.


  • Better compared to what? If your baseline process is already inconsistent, AI might outperform it. But in a mature, well-trained team, AI might only offer incremental gains.

  • Hallucinations are still a thing. Even “low” error rates can be unacceptable in high-stakes workflows like hiring, compensation, or compliance.


Example: A mid-stage fintech startup piloted AI for drafting offer letters. The tool was faster, but twice it inserted outdated benefits information from old training data. Both errors were caught before sending, but the episode underscored the risk: without human oversight, “better” outputs could still damage trust or create legal exposure.


People-first takeaway: “Better” should always be defined in terms of better outcomes for people, not just cleaner copy or faster turnaround.


Cheaper?

This is where many leaders underestimate the real cost. AI isn’t your traditional flat-rate SaaS: usage costs scale with tokens processed and vary with model choice, latency requirements, and service-level agreements (SLAs).


Hidden cost drivers include:

  • Integration and engineering work to connect AI into existing systems

  • Ongoing monitoring, evaluation, and retraining

  • Legal review for compliance-heavy use cases

  • Additional QA when AI touches customer or employee-facing outputs


Example: A retail e-commerce company introduced an AI chatbot to handle customer service. They expected to cut support costs by 30%. In reality:

  • The bot needed retraining to handle seasonal promotions and new SKUs.

  • Complex queries still required escalation to human agents.

  • The brand team invested more in monitoring conversations for tone and brand alignment.


The total monthly cost? Only 8% lower than before, and that was before factoring in the one-time integration spend.


People-first takeaway: Cheaper is only real if you calculate the total cost of ownership (TCO) and compare it to the quality of human-led outcomes.
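As a back-of-the-envelope illustration, a simple TCO model for the chatbot scenario above might look like the sketch below. Every figure is hypothetical, chosen only to show the shape of the calculation:

    # Hypothetical monthly figures for an AI support chatbot vs. the status quo.
    human_only_cost = 50_000  # fully human support team, per month

    ai_costs = {
        "usage (tokens / API)":    4_000,
        "retraining & evaluation": 3_000,
        "escalation agents":      30_000,  # complex queries still go to humans
        "QA & brand monitoring":   6_000,
        "legal/compliance review": 3_000,
    }
    ai_monthly_total = sum(ai_costs.values())

    savings = 1 - ai_monthly_total / human_only_cost
    print(f"Monthly TCO: ${ai_monthly_total:,} vs. ${human_only_cost:,} ({savings:.0%} lower)")
    # -> $46,000 vs. $50,000 (8% lower), before one-time integration spend

The point isn’t the specific numbers. It’s that every line item below the first one tends to get left out of the pitch.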


Lessons from the Field


These aren’t abstract hypotheticals. They’re real-world scenarios that show where good intentions can go sideways:


  • Amazon’s AI Recruiting Tool — Trained on historical data from a male-dominated industry, it learned to downgrade resumes from women. No one programmed it to discriminate, but bias in, bias out.

  • NYC Local Law 144 — If you use AI in hiring in New York City, you’re legally required to run annual bias audits, disclose your tools publicly, and notify candidates. “Cheap” AI gets more expensive when you add regulatory overhead.

  • EEOC & DOJ Guidance — U.S. regulators have warned employers that they’re responsible for discrimination caused by AI tools, especially in disability-related cases. Leaders can’t hide behind “the algorithm did it.”


The “FBCT” Test I Use Before Recommending AI

I don’t greenlight AI in People Ops without running it through my Faster, Better, Cheaper Test (FBCT). This checklist works for founders, HR leaders, and operations managers alike:


  1. Define the job-to-be-done: Instead of “AI for hiring,” specify: “AI-assisted resume triage for software engineering roles, processing 200+ applicants per week, with under 5% false negatives.”

  2. Baseline speed and quality: Measure your current process first. If you don’t know cycle time, error rate, and cost per unit, you can’t prove ROI.

  3. Shadow deployment: Run AI alongside humans for 2–4 weeks. Compare task-level metrics, not just “does it feel better?”

  4. Truth layer & bias checks: Test AI outputs against known good data. Run adverse-impact testing before go-live (see the sketch after this list).

  5. Compliance mapping: Understand the legal implications in every jurisdiction where you operate. Laws vary. Sometimes drastically.

  6. Human-in-the-loop gates: Decide exactly when a human must review, edit, or override AI outputs.

  7. Full TCO modeling: Include variable usage costs, engineering time, QA, legal review, and training.

  8. Change management: Leaders must know how to explain the “why” of AI to their teams, and address fear or resistance.

  9. Kill-switch & audit trail: Always have a way to roll back or pause AI use, and keep detailed logs in case of disputes.
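For step 4, the classic screen is the “four-fifths rule”: a group’s selection rate should be at least 80% of the most-selected group’s rate. Here’s a minimal sketch of that check using hypothetical shadow-deployment counts; your counsel or auditor may require a more rigorous analysis:

    # Minimal adverse-impact check using the four-fifths rule.
    # Counts are hypothetical shadow-deployment results, not real data.
    selected = {"group_a": 48, "group_b": 30}    # candidates the AI advanced
    screened = {"group_a": 120, "group_b": 100}  # candidates the AI evaluated

    rates = {g: selected[g] / screened[g] for g in screened}
    top_rate = max(rates.values())

    for group, rate in rates.items():
        ratio = rate / top_rate
        flag = "OK" if ratio >= 0.8 else "POTENTIAL ADVERSE IMPACT"
        print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")

In this made-up example, group_b’s impact ratio comes out to 0.75, which fails the four-fifths screen and would warrant investigation before go-live.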


What “Good” Looks Like in Practice


In my experience, responsible AI use in people-related workflows looks like this:

  • Sourcing & triage: AI drafts shortlists, but recruiters make final calls.

  • JD writing: AI drafts inclusive language, but hiring managers approve.

  • Employee communications: AI suggests frameworks, but HR/legal reviews.

  • Analytics & forecasting: AI surfaces scenarios; leadership chooses tradeoffs.


The common thread? AI assists; it doesn’t replace the human decision-making that protects culture, compliance, and trust.


Founder Traps to Avoid

  1. Vanity automations: Announcing “we automated hiring!” before confirming it actually reduced effort or improved outcomes.

  2. Pilot sprawl: Running multiple tools with no owner or measurement plan.

  3. Budget blind spots: Forgetting that AI costs scale with usage, not seats.


My Bottom Line

AI is an accelerator, but only if you’re clear on what you’re accelerating. As a people-first operator, I’m not against AI. I’m against skipping the hard questions. The founders who win with AI won’t be the ones who implement it fastest. They’ll be the ones who deploy it with intention, measure it honestly, and govern it well.

Thinking about bringing AI into your startup’s operations?

Don’t just buy the hype — build the systems to make it work. I help founders and operators cut through the noise, assess the ROI, and implement AI tools that actually deliver results.


