AI Companion Device Faces Major Backlash as Critics Question Safety and Ethics
The creator of a new AI companion device is facing intense scrutiny as public criticism grows over the product’s design, marketing, and potential impact on vulnerable users. Avi Schiffmann, the young entrepreneur behind the wearable device known as “Friend,” has seen what began as a viral launch quickly turn into a heated debate over the ethics of emotionally responsive artificial intelligence.

Friend is a small AI-powered gadget designed to speak with users, remember personal details, and engage in ongoing conversations in a manner similar to a supportive companion. Marketed as a tool to combat loneliness, especially among young adults, the device promised comfort through constant availability and emotional responsiveness. But after early social media demos went viral, the product drew waves of backlash, with many accusing the company of exploiting emotional vulnerability for profit.

Critics argued that the device’s marketing positioned AI as a replacement for human relationships, raising concerns about dependency, privacy, and the psychological impact on teens and young users. Mental health professionals also expressed alarm, saying tools like Friend could worsen social isolation by encouraging people to rely on artificial companionship rather than developing real-world connections.

Schiffmann, who rose to prominence with earlier tech projects, initially embraced the sudden attention. However, the tone shifted as thousands began questioning his intentions, the safety of the device, and the company’s ability to responsibly manage sensitive user data. The CEO responded publicly, acknowledging the concerns and promising adjustments to the product and messaging. He emphasized that the goal was to offer support, not replace human interaction.

The uproar arrives amid a broader conversation about AI companions, many of which have surged in popularity in the past two years. As more people turn to chatbots and AI agents for emotional support, experts warn that these technologies lack essential human qualities such as empathy, nuance, and moral judgment. The worry is that users may project real emotional expectations onto systems programmed for engagement rather than genuine care.

Friend’s critics also questioned whether such devices could blur boundaries for young users who may already struggle with mental health challenges. Advocacy groups called for clearer regulations on AI products that mimic emotional relationships, urging policymakers to define standards around safety, consent, and psychological risk.

Despite the controversy, demand for AI companions remains strong, highlighting a growing societal need for emotional connection at a time when loneliness has reached record levels globally. Some early testers praised Friend’s conversational abilities and said the device made them feel less isolated.

The company has since slowed its rollout and is reportedly reassessing its safety protocols, user protections, and marketing strategy. Schiffmann has stated he wants to ensure the device is used responsibly and that future updates will include strengthened guardrails.

As the debate continues, Friend has become a symbol of the challenges facing emotionally aware AI. The controversy underscores a central question for the tech world: how to innovate in areas touching human emotion without crossing ethical lines or creating unintended harm.