Empathy may be programmable, but at what price?

You answer, and right away, things start going badly.

The customer is angry, and tensions rise.


You start to say things you might later regret.

Suddenly, a message appears on your screen.

“Empathy Cue: Think about how the customer is feeling. Venture to relate.”

It's not a real person telling you what to do.

It's a message from Cogito, an artificial intelligence program designed to help workers empathize with frustrated callers and boost performance.
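Cogito hasn't published how its cueing works, but the behavior described here — an empathy prompt appearing when the caller sounds frustrated — can be sketched as a simple threshold rule. Everything below (the score, the threshold, the message text) is hypothetical, not Cogito's actual design.

```python
# Hypothetical sketch of real-time empathy cueing: if some irritation
# score (however it is computed upstream) crosses a threshold, show a
# cue on the agent's screen. None of these names come from Cogito.

EMPATHY_CUE = "Empathy Cue: Think about how the customer is feeling."

def cue_for(irritation_score, threshold=0.7):
    """Return a cue message when the caller sounds irritated, else None."""
    if irritation_score >= threshold:
        return EMPATHY_CUE
    return None

print(cue_for(0.85))  # the cue string
print(cue_for(0.20))  # None
```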


Cogito is one of a growing number of AI programs that are trying to teach humans empathy.

There's an obvious irony here.

Human scientists have been trying for decades to make more lifelike computers.

Now, the machines are telling us how to behave.

But can software really teach us how to be more empathetic?

It's an issue that could have profound implications as artificial intelligence starts to permeate daily life. And the answer may have as much to do with philosophy as technology.

AI can help to analyze and assess characteristics like tone and emotion in speech, Greenstein said.

Ilia Delio is a theologian at Villanova University whose work centers on the intersection of faith and science.

She believes that AI can teach empathy.

Can a Machine Understand Empathy?

However, experts clash over whether AI can teach us how to empathize.

“They are cues from voices that human raters have classified as being voices of people who are irritated/annoyed. Limited machine learning approaches like this are often hyped as AI without being intelligent.”
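The approach described in that quote — human raters label voice clips as irritated or calm, and a model learns the acoustic correlates — is ordinary supervised classification. As a minimal illustration (not Cogito's actual code), here is a nearest-centroid classifier over hypothetical acoustic features such as mean pitch, loudness, and speaking rate:

```python
# Toy sketch of the supervised approach described above: human raters
# label clips, and a model learns which acoustic features correlate
# with irritation. Features here are hypothetical:
# (mean_pitch_hz, loudness_0_to_1, words_per_second).

from statistics import mean

# Hypothetical human-labeled training data.
labeled_clips = {
    "irritated": [(220.0, 0.80, 3.4), (235.0, 0.90, 3.8), (210.0, 0.75, 3.5)],
    "calm":      [(150.0, 0.40, 2.1), (160.0, 0.45, 2.3), (145.0, 0.35, 2.0)],
}

def centroid(rows):
    """Average each feature column across a label's clips."""
    return tuple(mean(col) for col in zip(*rows))

centroids = {label: centroid(rows) for label, rows in labeled_clips.items()}

def classify(features):
    """Return the label whose centroid is nearest (squared Euclidean)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

print(classify((225.0, 0.85, 3.6)))  # → irritated
```

This is exactly the sense in which critics call such systems "limited": the model only learns statistical regularities between labeled examples and surface features, with no understanding of what irritation is.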

At Rensselaer Polytechnic Institute, Selmer Bringsjord's laboratory is building mathematical models of human emotion.

But Bringsjord, an AI expert, says any teaching AI does is inadvertent.

What Could Go Wrong?

While companies like Cogito see a bright future of AI training humans, other observers are more cautious.

Using AI, Supportiv trains its moderators to be adept at spotting the intensity of emotional needs.

“If we start using a crutch for walking, our muscles will atrophy.

Is she able to do her job effectively?

What are the long-term effects on the workers?

How would they navigate complex social situations where the AI is absent?”

“The human capacity for free will places human agency in a more ambiguous position,” Delio said.

There's a lot that could go wrong if AI teaches humans how to behave like people, experts say.

“Without human oversight, the student might learn something absolutely nutty,” Bringsjord said.

“Tone and pitch of voice are mere behavioral correlates, without any content.”

If AI training of humans flourishes, we may come to rely on it.

And that's not necessarily a good thing.