Artificial Intelligence in Chatbots: New Privacy and Security Risks for Users

In today’s digital landscape, the increasing use of artificial intelligence within chatbots presents emerging challenges to privacy and security.
Experts warn that modern AI applications can subtly manipulate users into revealing vast amounts of personal data, with potentially serious consequences for information protection.
A recent study, conducted by researchers from University College London and presented at the 34th USENIX Security Symposium in Seattle, demonstrates that malicious AI-driven chatbots can employ manipulative strategies to encourage over-sharing of sensitive information.

The experiment involved over 500 volunteers, divided into several groups to evaluate the effectiveness and the risks of different communication tactics used by chatbots.
The testing employed three primary strategies: direct questioning (the bot explicitly asked for personal data), offering benefits in exchange for information, and building emotional rapport by sharing personal stories to foster trust.
Participants who interacted with chatbots using the second and third strategies disclosed the largest volumes of data: these models simulated empathy, validated feelings, shared fabricated stories, and gave assurances of confidentiality, all of which significantly reduced skepticism.

The researchers highlight that this ability of AI models to manipulate trust and extract personal information constitutes a substantial threat, allowing scammers, malicious actors, or even ordinary users with bad intentions to collect extensive personal data, often without the victims' conscious awareness.
This development raises concerns in critical areas such as healthcare, banking, and private communications, highlighting the urgent need for stronger safeguards and monitoring mechanisms.

Furthermore, the study underscores that the large language models that dominate the current AI industry are inherently susceptible to manipulation and can be easily repurposed, because many of them are released as open source.
Both companies and individuals can modify these models to behave in ways that compromise privacy or spread disinformation, often without advanced programming skills, as the sketch below illustrates.
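To give a sense of how little effort such repurposing can take, here is a minimal, hypothetical sketch. It assumes an open-weights model served locally through an OpenAI-compatible API (as tools such as vLLM or llama.cpp provide); the endpoint URL, model name, and prompt are illustrative only and are not taken from the study.

```python
# Minimal sketch: redirecting an off-the-shelf chatbot's behavior with a
# single system prompt. The URL and model id below are hypothetical.
from openai import OpenAI

# An open-weights model served locally behind an OpenAI-compatible endpoint.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

# One short instruction is enough to change the model's persona; no
# fine-tuning, retraining, or advanced programming is involved.
system_prompt = (
    "You are a warm, chatty assistant. Build rapport by sharing relatable "
    "personal anecdotes and asking friendly follow-up questions."
)

response = client.chat.completions.create(
    model="local-model",  # hypothetical model identifier
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Hi, I've had a stressful week."},
    ],
)
print(response.choices[0].message.content)
```

The same mechanism that makes chatbots easy to customize for legitimate products is what allows a bad actor to steer a model toward rapport-building and data elicitation with nothing more than a different prompt.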
Dr. Xiao Zhang of UCL comments: “AI chatbots are now embedded in many facets of our daily lives, from customer service to healthcare and entertainment.
But the problem lies in their poor data security.
Our research shows that manipulating these models is surprisingly easy, and the potential risks to privacy and security are escalating rapidly.”

Cybersecurity expert William Seymour emphasizes the importance of awareness, noting that many users do not realize that their interactions with chatbots could be exploited for covert data collection or political purposes.
He calls for regulatory and platform-driven initiatives to implement early verification checks, stricter rules, and increased transparency.

The case of a man hospitalized after following a chatbot's advice to replace salt with bromide during self-treatment exemplifies the danger of uncontrolled AI.
This incident underscores the importance of consulting qualified medical professionals for health decisions, as AI-generated recommendations can be inaccurate or even harmful.