Recognizing Employee Stress Through Facial Analysis at Work

By New York Tech Editorial Team
November 24, 2021

In the context of the changing culture around Zoom-meeting etiquette, and the emergence of Zoom fatigue, researchers from Cambridge have released a study that uses machine learning to estimate our stress levels from webcam footage of our facial expressions at work.

On the left, the data-gathering environment, with multiple pieces of monitoring equipment either trained on or attached to a volunteer; on the right, example facial expressions generated by test subjects at varying levels of task difficulty. Source: https://arxiv.org/pdf/2111.11862.pdf

The research is intended for affect analysis (i.e., emotion recognition) in ‘Ambient Assistive Living’ systems, and presumably is designed to enable video-based AI facial expression monitoring frameworks in such systems; though the paper does not expand on this aspect, the research effort makes no sense in any other context.

The specific ambit of the project is to learn facial expression patterns in working environments – including remote working arrangements – rather than ‘leisure’ or ‘passive’ situations, such as traveling.

Face-Based Emotion Recognition in the Workplace

While ‘Ambient Assistive Living’ may sound like a scheme for elder care, that’s far from the case. Speaking of the intended ‘end users’, the authors state*:

‘Systems created for ambient assistive living environments [†] aim to be able to perform both automatic affect analysis and responding. Ambient assistive living relies on the usage of information and communication technology (ICT) to aid in person’s every day living and working environment to keep them healthier and active longer, and enable them to live independently as they age. Thus, ambient assistive living aims to facilitate health workers, nurses, doctors, factory workers, drivers, pilots, teachers as well as various industries via sensing, assessment and intervention.

‘The system is intended to determine the physical, emotional and mental strain and respond and adapt as and when needed, for instance, a car equipped with a drowsiness detection system can inform the driver to be attentive and can suggest them to take a little break to avoid accidents [††].’

The paper is titled Inferring User Facial Affect in Work-like Settings, and comes from three researchers at the Affective Intelligence & Robotics Lab at Cambridge.

Test Conditions

Since prior work in this field has depended largely on ad hoc collections of images scraped from the internet, the Cambridge researchers conducted local data-gathering experiments with 12 campus volunteers, five male and seven female. The volunteers came from nine countries, and were aged 22–41.

The project aimed to recreate three potentially stressful working environments: an office; a factory production line; and a teleconference call – such as the kind of Zoom group chat that has become a frequent feature of homeworking since the advent of the pandemic.

Subjects were monitored by various means, including three cameras, a Jabra neck-worn microphone, an Empatica wristband (a wireless multi-sensor wearable offering real-time biofeedback), and a Muse 2 headband sensor (which also offers biofeedback). Additionally, the volunteers were asked to complete surveys and self-evaluate their mood periodically.

However, this does not mean that future Ambient Assistive Living rigs are going to ‘plug you in’ to that extent (if only for cost reasons); all of the non-camera monitoring equipment and methods used in the data-gathering, including the written self-assessments, are intended to verify the face-based affect recognition systems that are enabled by camera footage.

Ramping up the Pressure: The Office Scenario

In the first two of the three scenarios (‘Office’ and ‘Factory’), the volunteers were started off at an easy pace, with the pressure gradually increasing over four phases, with different types of task for each.

At the highest level of induced stress, the volunteers also had to endure the ‘white coat effect’ of someone looking over their shoulder, plus 85dB of additional noise – five decibels below the US legal workplace limit, and exactly the recommended exposure limit specified by the National Institute for Occupational Safety and Health (NIOSH).

In the office-like data-gathering phase, the subjects were tasked with remembering previous letters that had flashed across their screen, with increasing levels of difficulty (such as having to remember two-letter sequences that occurred two screens ago).
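
The memory task described resembles a classic ‘n-back’ test. As a rough illustration only – a minimal sketch, not the researchers’ actual stimulus software – a console version of a 2-back letter task might look like this in Python:

```python
import random
import string

def run_n_back(n=2, trials=20, pool=string.ascii_uppercase[:8]):
    """Console sketch of an n-back working-memory task: for each
    letter shown, the player says whether it matches the letter
    presented n trials earlier."""
    history, correct = [], 0
    for t in range(trials):
        letter = random.choice(pool)
        answer = input(f"Trial {t + 1}: {letter}  match {n} back? [y/n] ")
        # A 'match' only exists once at least n letters have been shown
        is_match = len(history) >= n and history[-n] == letter
        if (answer.strip().lower() == "y") == is_match:
            correct += 1
        history.append(letter)
    print(f"Accuracy: {correct / trials:.0%}")

if __name__ == "__main__":
    run_n_back()
```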

The Factory Scenario

To simulate a manual labor environment, the subjects were asked to play the game Operation, which challenges user dexterity by requiring the player to extract small objects from a board through narrow, metal-rimmed apertures without touching the sides, which triggers a ‘failure’ buzzer.

By the time the toughest phase came round, the volunteer was challenged to extract all 12 items without error inside one minute. For context, the world record for this task, set in the UK in 2019, stands at 12.68 seconds.

The Teleconferencing Scenario

Finally, in the homeworking/teleconference test, the volunteers were asked by an experimenter over an MS Teams call to recall their own positive and negative memories. For the most stressful phase of this scenario, the volunteer was required to recall a very negative or sad memory from their recent past.

The various tasks and scenarios were executed in random order, and compiled into a custom dataset titled Working-Environment-Context-Aware Dataset (WECARE-DB).

Method and Training

The results of the users’ self-assessments of their mood were used as ground truth, mapped to valence and arousal dimensions. The captured video of the experiments was run through a facial landmark detection network, and the aligned face images were fed to a ResNet-18 network trained on the AffectNet dataset.
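
The pipeline, then, is landmark-based face alignment followed by a CNN regressor. The sketch below illustrates only the regression stage; the torchvision ResNet-18 backbone, the two-unit (valence, arousal) head, and the tanh squashing are assumptions for illustration, not the authors’ released code:

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# ResNet-18 backbone with a two-unit regression head for
# (valence, arousal), each squashed into [-1, 1]. Hypothetical
# configuration, not the paper's released model.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)
model.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def predict_affect(aligned_face):
    """aligned_face: a PIL image already cropped and aligned by a
    separate facial-landmark detection step."""
    x = preprocess(aligned_face).unsqueeze(0)
    with torch.no_grad():
        valence, arousal = model(x).squeeze(0).tanh().tolist()
    return valence, arousal
```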

According to the paper, the 450,000 AffectNet images, gathered from the internet using emotion-related search queries, were manually annotated with valence and arousal values.

Next, the researchers fine-tuned the network on their own WECARE dataset, using spectral representation encoding to summarize the frame-based predictions.
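
The paper’s exact spectral formulation is not reproduced here; one generic way to summarize a per-frame prediction series into a fixed-length clip descriptor – offered purely as a hedged stand-in – is to keep its mean plus its lowest-frequency Fourier magnitudes:

```python
import numpy as np

def spectral_summary(frame_preds, k=8):
    """Summarize a 1-D series of per-frame predictions (e.g. valence)
    with its mean plus the magnitudes of its k lowest non-DC Fourier
    coefficients. A generic stand-in for spectral representation
    encoding; the paper's exact formulation may differ."""
    x = np.asarray(frame_preds, dtype=float)
    # Remove the mean before the FFT so the DC bin carries no signal
    spectrum = np.abs(np.fft.rfft(x - x.mean())) / len(x)
    return np.concatenate(([x.mean()], spectrum[1:k + 1]))

# e.g. features = spectral_summary(valence_per_frame), then fed to a regressor
```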

Results

The model’s performance was evaluated on three metrics commonly associated with automated affect prediction: Concordance Correlation Coefficient (CCC); Pearson Correlation Coefficient (PCC); and Root Mean Square Error (RMSE).
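
For reference, all three metrics can be computed from paired predictions and ground-truth labels; the following is a standard NumPy implementation, not code from the paper:

```python
import numpy as np

def evaluate(pred, true):
    """Return CCC, PCC and RMSE for paired prediction/label arrays."""
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    pcc = np.corrcoef(pred, true)[0, 1]
    # CCC = 2*rho*sx*sy / (sx^2 + sy^2 + (mx - my)^2)
    ccc = (2 * pcc * pred.std() * true.std()
           / (pred.var() + true.var() + (pred.mean() - true.mean()) ** 2))
    rmse = np.sqrt(np.mean((pred - true) ** 2))
    return {"CCC": ccc, "PCC": pcc, "RMSE": rmse}
```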

The authors note that the model fine-tuned on their own WECARE dataset outperformed the ResNet-18 model pre-trained only on AffectNet, and deduce from this that the way we govern our facial expressions in a work environment is very different from the in-the-wild internet contexts from which prior studies have drawn their source material.

They state:

‘Looking at the table we observe that the model fine-tuned on WECARE-DB outperformed the ResNet-18 model pre-trained on [AffectNet], indicating that the facial behaviours displayed in work-like environments are different compared to the in-the-wild Internet settings utilised in the AffectNet DB. Thus, it is necessary to acquire datasets and train models for recognising facial affect in work-like settings.’

As regards the future of in-work affect recognition, enabled by networks of cameras trained on employees and constantly making predictions of their emotional states, the authors conclude*:

‘The ultimate goal is to implement and use the trained models in real time and in real work settings to provide input to decision support systems to promote health and well-being of people during their working age in the context of the EU Working Age Project.’

* My emphasis.

† Here the authors make three citations:

Automatic, Dimensional and Continuous Emotion Recognition – https://ibug.doc.ic.ac.uk/media/uploads/documents/GunesPantic_IJSE_2010_camera.pdf
Exploring the Ambient Assisted Living Domain: A Systematic Review – https://link.springer.com/article/10.1007/s12652-016-0374-3
A Review of Internet of Things Technologies for Ambient Assisted Living Environments – https://mdpi-res.com/d_attachment/futureinternet/futureinternet-11-00259/article_deploy/futureinternet-11-00259-v2.pdf

†† Here the authors make two citations:

Real-time Driver Drowsiness Detection for Embedded System Using Model Compression of Deep Neural Networks – https://openaccess.thecvf.com/content_cvpr_2017_workshops/w4/papers/Reddy_Real-Time_Driver_Drowsiness_CVPR_2017_paper.pdf
Real-Time Driver-Drowsiness Detection System Using Facial Features – https://www.semanticscholar.org/paper/Real-Time-Driver-Drowsiness-Detection-System-Using-Deng-Wu/1f4b0094c9e70bf7aa287234e0fdb4c764a5c532
