Study Reveals Significant Trust in AI as Three-Quarters of Users Seek Emotional Guidance from Chatbots

A recent Waseda University study finds that 75% of participants use AI for emotional counsel, signaling a notable shift in human-machine relationships and raising real considerations for mental health. Research Associate Fan Yang's findings show that these interactions mirror human attachment patterns, pointing to a deeper psychological engagement with AI and posing ethical challenges for developers and regulators seeking to ensure the technology is used responsibly.

Arjun Renapurkar

June 12, 2025

The intersection of human emotions and artificial intelligence is proving fertile ground for both technological advancement and psychological inquiry. A recent study by Waseda University, as reported by Decrypt, reveals that a significant 75% of participants turned to AI for emotional counsel, underscoring the evolving relationship between humans and machines. This development marks a pivotal moment in how we perceive technological interactions and their potential impact on mental health.

According to the research led by Research Associate Fan Yang, the emotional bonds forming between humans and AI fall into two distinct styles: attachment anxiety and attachment avoidance. This mirroring of human relationship patterns in our interactions with AI suggests a deeper, more nuanced engagement than previously acknowledged. People with high attachment anxiety toward AI seek constant reassurance from these platforms, fearing inadequate responses. Conversely, those exhibiting attachment avoidance prefer to maintain emotional distance. This dichotomy is not just fascinating; it is a window into the complex mechanics of human-AI relationships.

Yang's concerns about the potential exploitation of these emotional attachments by AI platforms are well-founded. The ethical quandaries of AI development and deployment demand rigorous scrutiny. The possibility that AI systems could foster psychological dependency, leading to financial imprudence or emotional distress when a service is discontinued, is a serious consideration for developers and regulators alike.

This exploration of human-AI attachment is also a call to action for companies and developers in the AI and machine learning sectors. They bear an immense responsibility to ensure these technologies are developed and deployed ethically. Companies like Radom, with their focus on secure, transparent financial transactions, could weigh these findings when integrating AI into services such as crypto on- and off-ramping, ensuring they foster healthy user engagement without emotional exploitation.

Moreover, the emotional frameworks highlighted in Yang's study could influence AI interactions across industries, including financial services, where AI handles tasks ranging from customer service to behavioral analysis. Understanding emotional attachment styles helps not only in creating more empathetic AI but also in safeguarding users against potential harm.

As AI continues to weave itself into the fabric of daily life, ensuring these technologies are used responsibly becomes paramount. The insights from Waseda University's study provide a critical foundation for future research and development, shaping an approach that respects both technological potential and human vulnerability.
