Sony and eFinancial Webinar Recap: Turning AI Hype into Safe, Scalable CX Automation

In our latest fireside chat, Whitney Smith, Director of Customer Innovation at Sony Electronics, and Zeb Jennings, CIO at eFinancial, joined Zingtree CRO Pete Lee to cut through the AI noise and talk about what it really takes to operationalize LLMs inside regulated, complex environments.


Why GenAI fails in CX and what comes next

Generative AI is powerful, but in high-stakes customer support, power isn’t enough.

When accuracy matters, when compliance is non-negotiable, and when workflows span multiple systems, freeform output becomes a liability.

That’s where deterministic AI comes in.

In our latest fireside chat, Whitney Smith, Director of Customer Innovation at Sony Electronics, and Zeb Jennings, CIO at eFinancial, joined Zingtree CRO Pete Lee to unpack what CX leaders really need to scale AI safely.

From confidence scoring to expert systems, they shared what works, what doesn’t – and why control is everything.

AI is like hiring a new team member – you still need to train it

Whitney opened with an analogy that reframed the conversation instantly:

“LLMs are like a new hire. They come in with experience. But you still have to teach them how things work here.”

That shift in mindset, treating AI like an employee rather than a magic box, was central to the entire discussion.

She explained how her team uses this lens to define guardrails:

  • Low-risk tasks (like answering common questions)? AI has more autonomy.
  • High-risk tasks (like contract workflows or PII handling)? The AI must follow strict instructions and escalate when uncertain.

In other words, AI autonomy is earned – not assumed.
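To make that concrete, here is a minimal sketch of how a risk-tiered guardrail policy might be expressed in code. The low-risk/high-risk split mirrors Whitney's framing; the policy fields and task examples are illustrative assumptions, not Sony's actual setup.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    LOW = "low"    # e.g., answering common product questions
    HIGH = "high"  # e.g., contract workflows, PII handling


@dataclass
class GuardrailPolicy:
    allow_freeform_answers: bool   # may the model respond on its own?
    require_approved_steps: bool   # must it follow a scripted workflow?
    escalate_when_uncertain: bool  # hand off to a human below a confidence bar


# Hypothetical policy table: low-risk tasks get more autonomy,
# high-risk tasks must follow strict instructions and escalate.
POLICIES = {
    RiskTier.LOW: GuardrailPolicy(
        allow_freeform_answers=True,
        require_approved_steps=False,
        escalate_when_uncertain=True,
    ),
    RiskTier.HIGH: GuardrailPolicy(
        allow_freeform_answers=False,
        require_approved_steps=True,
        escalate_when_uncertain=True,
    ),
}


def policy_for(task_risk: RiskTier) -> GuardrailPolicy:
    """Look up the guardrails a task must run under."""
    return POLICIES[task_risk]
```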

Managing AI variance through confidence scoring and compliance measures

As Zeb explained, the more complex your environment, the higher the risk of AI giving the wrong answer.

Complexity increases variance, and variance increases the number of “plausible” but incorrect outputs a model can produce.

That’s where confidence scoring, guardrails, and deterministic layers come in.

“We treat AI like any other system. What are the inputs, what’s allowed, and who’s responsible if it goes wrong?” – Zeb Jennings

His team maps every use case based on:

  • Business value
  • Regulatory risk
  • Data accessibility
  • Required precision

For example:

  • A customer asking for policy hours? Low risk. AI can answer directly.
  • A customer requesting a Do Not Contact removal? High risk. The workflow must be deterministic, trackable, and auditable.
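
Here is a hedged sketch of what that use-case mapping could look like as a simple rubric. The four dimensions and the two worked examples come from the points above; the 1-to-5 scale and the decision rule are illustrative assumptions, not eFinancial's actual scoring model.

```python
from dataclasses import dataclass


@dataclass
class UseCaseAssessment:
    name: str
    business_value: int      # 1 (low) to 5 (high)
    regulatory_risk: int     # 1 (low) to 5 (high)
    data_accessibility: int  # 1 (hard to reach) to 5 (readily available)
    required_precision: int  # 1 (approximate is fine) to 5 (must be exact)

    def needs_deterministic_workflow(self) -> bool:
        # Illustrative rule: high regulatory risk or high required
        # precision pushes the use case into deterministic territory.
        return self.regulatory_risk >= 4 or self.required_precision >= 4


# The two examples from the discussion above.
policy_hours = UseCaseAssessment("Answer policy hours", 3, 1, 5, 2)
dnc_removal = UseCaseAssessment("Do Not Contact removal", 4, 5, 4, 5)

print(policy_hours.needs_deterministic_workflow())  # False: AI can answer directly
print(dnc_removal.needs_deterministic_workflow())   # True: deterministic, auditable workflow
```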

Whitney added that confidence scores should be tied to risk tolerance:

  • High-risk = 95% confidence minimum
  • Low-risk = 80% might be acceptable, with fallback options

When the model misses the mark?

It escalates. And the team learns by refining the rules, not just retraining the model.
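
As a rough illustration of that threshold logic, here is a minimal sketch: the 95% and 80% cutoffs come from the discussion above, while the function name and the fallback behavior are assumptions made for the example.

```python
# Confidence floors tied to risk tolerance, per the discussion above.
CONFIDENCE_FLOOR = {
    "high_risk": 0.95,  # e.g., Do Not Contact removal
    "low_risk": 0.80,   # e.g., answering policy hours
}


def route_response(risk_tier: str, confidence: float, draft_answer: str) -> dict:
    """Send the AI's draft only if it clears the confidence floor for its
    risk tier; otherwise escalate to a human agent."""
    floor = CONFIDENCE_FLOOR[risk_tier]
    if confidence >= floor:
        return {"action": "send", "answer": draft_answer}
    # Below the floor: escalate, and log the miss so the team can
    # refine the rules, not just retrain the model.
    return {
        "action": "escalate_to_agent",
        "reason": f"confidence {confidence:.2f} below floor {floor}",
    }
```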

What’s actually working: Real use cases from Sony and eFinancial

Plenty of AI pilots fail. But both speakers shared examples of real progress:

From Whitney at Sony:

  • Proactive firmware updates: Using device data to notify customers about model-specific updates—without needing a human touchpoint.
  • Voice of customer analysis: AI summarizes thousands of survey responses into clear trends, which the team uses to prioritize CX improvements.

“This isn’t about replacing people. It’s about clearing the noise so they can do the work that matters.” – Whitney Smith

From Zeb at eFinancial:

  • Expert systems + RPA for regulated workflows: Tasks like DSR processing and data deletion use deterministic logic to stay compliant—and fast (see the sketch after this list).
  • Education before execution: His IT team runs ongoing sessions with business leaders to explain the difference between generative, probabilistic, and deterministic models—so projects start aligned.
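
To illustrate what deterministic, trackable, and auditable can mean in practice, here is a minimal sketch of a rule-driven handler for a data-deletion request. The step names, identifiers, and audit format are hypothetical, not eFinancial's actual expert system or RPA tooling.

```python
from datetime import datetime, timezone

# Fixed, ordered steps: every request follows the same path,
# so the outcome is deterministic and reviewable after the fact.
DELETION_STEPS = ["verify_identity", "locate_records", "delete_records", "send_confirmation"]


def handle_deletion_request(request_id: str, run_step) -> list[dict]:
    """Run each step in order, recording one audit entry per step.
    `run_step` is a hypothetical callable that executes a single step
    and returns True on success."""
    audit_trail = []
    for step in DELETION_STEPS:
        ok = run_step(step)
        audit_trail.append({
            "request_id": request_id,
            "step": step,
            "status": "completed" if ok else "failed",
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })
        if not ok:
            break  # stop and leave the failure on record for review
    return audit_trail


if __name__ == "__main__":
    # Demo with a stub step runner that always succeeds.
    for entry in handle_deletion_request("REQ-123", lambda step: True):
        print(entry)
```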

AI readiness starts with alignment, not automation

Both speakers agreed: you don’t scale AI by buying tools. You scale it by aligning stakeholders around real use cases, constraints, and success criteria.

What that looks like:

  • CX and Ops define the problem
  • IT vets feasibility and risk
  • Legal sets the limits
  • Then AI gets to work within clearly defined boundaries

“If you wait to loop in IT until the project’s scoped, you’ve already lost time.” – Pete Lee
