Category:
Technology Classifications > Medical Devices > Equipment
Inventors:
Device that Analyzes Speaker’s Emotional State
Case ID:
MP12864_1
Web Published:
5/14/2019
Analyzes Acoustic Signals to Effectively Determine Speaker’s Emotional State
This device analyzes a speaker’s emotional state throughout a conversation, including conversations held over the phone. The technology has widespread application, from the gaming industry and virtual reality to health testing and monitoring. One application is in call centers, an $18 billion industry in the United States alone. Many companies that operate call centers find it difficult to track employee performance and therefore cannot identify which employees are most efficient and successful at keeping customers happy. Researchers at the University of Florida have created tools that use the acoustic patterns of speech to track and categorize a speaker’s emotions into a small number of dimensions or categories, such as happy, content, sad, angry, anxious, and bored. Call center companies can use this device to determine which of their employees are most efficient and successful at maintaining positive interactions with customers throughout their calls.
Application
Software tools that extract acoustic patterns from voice recordings to determine the emotion of a speaker or conversation
Advantages
Provides greater accuracy in classifying emotion in speech using unique listener-based models
Allows measurement of changes in the magnitude or intensity of an emotion (e.g., sad vs. very sad)
Allows measurement of changes in emotion over time, enhancing existing voice recognition software
Technology
These tools work by measuring one or more characteristics of a speech utterance against a sophisticated model of those characteristics. By comparing the utterance to the model, the tools determine the emotional state of the speaker. Models can be tuned to mimic how listeners would perform in a specific application or can be predetermined from an existing database of recorded speech. The tools can categorize the speaker’s emotional state into up to six emotion categories and can monitor how that state changes throughout a conversation. Beyond identifying which emotions are present, the tools also measure the level of emotion, so they can differentiate between a sad and a very sad speaker. The tools can be hosted locally or in the cloud, allowing rapid and scalable deployment for specific applications.
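The description above (comparing utterance features to emotion models, classifying into up to six categories, and tracking change over time) can be sketched as a toy nearest-profile classifier. This is a minimal illustration only, not the patented method: the feature set (pitch and energy), the reference profile values, and the distance-based matching are all assumptions introduced for the example.

```python
import math

# Hypothetical per-emotion reference profiles as (pitch_hz, energy) means.
# The numbers are illustrative placeholders, not values from the technology.
EMOTION_PROFILES = {
    "happy":   (220.0, 0.80),
    "content": (180.0, 0.55),
    "sad":     (140.0, 0.30),
    "angry":   (240.0, 0.95),
    "anxious": (210.0, 0.65),
    "bored":   (150.0, 0.40),
}

def classify_utterance(pitch_hz, energy):
    """Return (emotion, distance) for the nearest emotion profile.

    A smaller distance means a closer match to the profile; the distance
    itself can serve as a crude intensity proxy (sad vs. very sad).
    """
    best_emotion, best_dist = None, float("inf")
    for emotion, (ref_pitch, ref_energy) in EMOTION_PROFILES.items():
        # Scale pitch down so both features contribute comparably.
        d = math.hypot((pitch_hz - ref_pitch) / 100.0, energy - ref_energy)
        if d < best_dist:
            best_emotion, best_dist = emotion, d
    return best_emotion, best_dist

def track_conversation(utterances):
    """Classify each (pitch_hz, energy) utterance in order,
    following how the speaker's emotion changes over time."""
    return [classify_utterance(p, e)[0] for p, e in utterances]
```

For example, `track_conversation([(215, 0.78), (145, 0.32)])` labels the first utterance "happy" and the second "sad". A production system would of course extract many more acoustic features and use statistically trained models rather than fixed profiles.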
Patent Information:
App Type:
Patent No.:
Patent Status:
Direct Link:
https://ufinnovate.technologypublisher.com/tech?title=Device_that_Analyzes_Speaker%e2%80%99s_Emotional_State