Ethics & Society
In Home Robots, Machine-like Designs Are Trusted the Most – Women Avoid Human-like Figures
The appearance and perceived gender of robots that assist at home significantly influence how much they are trusted. A new Spanish study shows that machine-like home robots in particular inspire more trust than models resembling humans.
The study examined trust in various robots performing household and caregiving tasks, against the backdrop of home robots becoming increasingly common, for example in supporting the daily lives of the elderly. The question was not only about the functionality of the technology but also about the emotional response: how human-likeness, gendered perceptions, and the feeling of safety intertwine.
More than two thousand Spanish respondents were presented with fictional, text-based scenarios in which robots placed at home assisted in daily life. The robots were depicted in three different ways: as fully human-like humanoids, only partially human-like, and clearly machine or device-like. Respondents were asked to evaluate how much they would trust each type of robot.
The results were consistent across the different scenarios: machine-like robots inspired the most trust. Models resembling humans were approached with more caution, and women in particular often reported finding human-like robots uncomfortable, even frightening.
The findings challenge design ideals that idolize human likeness and weaken the predictive value of the so-called uncanny valley hypothesis in domestic settings. According to that hypothesis, a figure that is almost, but not quite, human evokes particular repulsion; here, however, a broader suspicion toward human-like home robots appears to be the more significant factor.
Additionally, respondents, women especially, clearly favored gender-neutral robots. This suggests a desire to detach the roles of home robots from traditional gender roles, particularly in caregiving work.
Source: Trust and emotions in domestic robots: do gender preferences favor machine-like designs?, AI & SOCIETY.
This text was generated with AI assistance and may contain errors. Please verify details from the original source.
Original research: Trust and emotions in domestic robots: do gender preferences favor machine-like designs?
Publisher: AI & SOCIETY
Authors: Eva Sotomayor, Fernando Aguiar, Francisco Herreros
December 24, 2025