The Future of Digital Assistants is Queer


Queering the smart wife could, in its simplest form, mean giving digital assistants different personalities that more accurately represent the many versions of femininity that exist around the world, as opposed to the pleasant, unassuming femininity that many companies have chosen to adopt.

Strengers adds: “Q would be a logical case for what these devices could look like.” Another option is to present masculinity in different ways. An example might be Pepper, the humanoid robot developed by SoftBank Robotics, which is commonly given masculine pronouns and can recognize faces and basic human emotions. Or Jibo, another robot, introduced in 2017, which also uses masculine pronouns and was marketed as a social robot for the home, though it has since been given a second life as a device focused on health care and education. Given the “gentle and meek” masculinity projected by Pepper and Jibo (the former politely answering questions and often offering flirtatious glances, the latter playfully twirling around and approaching users in an endearing manner), Strengers and Kennedy see the two robots as positive steps in the right direction.

Ungendering digital assistants could also mean creating bot-specific personalities to replace humanized notions of technology. When Eno, the banking bot Capital One launched in 2019, is asked about its gender, it happily replies: “I’m binary. I don’t mean I’m both, I mean I’m actually just ones and zeroes. Treat me like a bot.”

Similarly, Kai, an online banking chatbot developed by Kasisto, an organization that builds AI software for online banking, ditches human traits entirely. Jacqueline Feldman, the Massachusetts-based writer and UX designer who created Kai, explains that the bot was “designed to be genderless”: not by assuming a nonbinary identity, like Q, but by assuming a robot-specific identity and using the pronoun “it”. “From my perspective as a designer, a bot can be beautifully designed and captivating in new, bot-specific ways, without having to pretend to be human,” she says.

When asked if it is a real person, Kai would say: “A bot is a bot is a bot. Next question, please,” clearly signaling to the user that it is not human and is not pretending to be. And if asked about its gender, it would answer: “As a bot, I’m not a human. But I learn. That’s machine learning.”

A bot-specific identity doesn’t mean Kai tolerates abuse, though. A few years ago, Feldman spoke about deliberately designing Kai with an ability to deflect and shut down harassment. For example, if a user repeatedly harassed the bot, Kai would reply with something like “I’m picturing white sand and a hammock, please try me later!” “I really did my best to give the bot some dignity,” Feldman told the Australian Broadcasting Corporation in 2017.
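Feldman hasn’t published Kai’s internal logic, but the behavior described above (canned bot-identity answers plus an escalating deflection of repeated harassment) can be sketched as simple intent handling. The Python sketch below is purely illustrative: the intent names, the threshold, the session handling, and the replies are assumptions made for the sake of the example, not Kasisto’s actual implementation.

# Hypothetical sketch of the bot-identity and harassment-deflection
# behavior described above. Intent names, threshold, and replies are
# invented for illustration; this is not Kasisto's code.

HARASSMENT_THRESHOLD = 2  # deflect politely at first, then disengage

IDENTITY_RESPONSES = {
    "are_you_human": "A bot is a bot is a bot. Next question, please.",
    "what_is_your_gender": (
        "As a bot, I'm not a human. But I learn. That's machine learning."
    ),
}

def respond(intent: str, session: dict) -> str:
    """Answer identity questions as a bot and deflect repeated harassment."""
    if intent in IDENTITY_RESPONSES:
        return IDENTITY_RESPONSES[intent]
    if intent == "harassment":
        session["harassment_count"] = session.get("harassment_count", 0) + 1
        if session["harassment_count"] > HARASSMENT_THRESHOLD:
            # After repeated abuse, disengage instead of playing along.
            return "I'm picturing white sand and a hammock. Try me later!"
        return "Let's keep this about your banking. How can I help?"
    return "How can I help with your accounts today?"

# Example: one session in which a user harasses the bot three times.
session = {}
for intent in ["are_you_human", "harassment", "harassment", "harassment"]:
    print(respond(intent, session))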

However, Feldman believes there is a moral imperative for bots to identify themselves as bots. “A lack of transparency when companies design [bots] makes it easy for people interacting with the bot to forget that it’s a bot,” she says, and gendering bots or giving them a human voice only makes that forgetting more likely. Because many consumer experiences with chatbots are frustrating, and many people would simply rather talk to a person, Feldman thinks emphasizing the human qualities of bots may be a case of “over-engineering”.
