Artificial Intelligence and Privacy Risks: Current Realities and Future Challenges
Artificial intelligence (AI) has evolved from a mere tool for automation and convenience into a potential threat to personal privacy and data security.
Experts warn that advanced technologies can create digital replicas of individuals based on data collected from their connected devices, which constantly transmit information over the internet.
Oleksiy Kostenko, doctor of legal sciences and head of the Immersive Technologies and Law Laboratory at the National Academy of Sciences of Ukraine, estimates that there are currently 60-70 billion devices worldwide, including computers, smartphones, sensors, and video cameras, continually transmitting data.
Unfortunately, most of these devices are protected by weak passwords or lack proper security measures, making them vulnerable to data leaks.
With such data, he notes, it is possible to generate digital copies that imitate a person’s reactions and thought patterns with 85-90% accuracy.
In the future, these copies could be used to develop comprehensive social models capable of predicting elections, economic trends, market behaviors, and even public reactions to news or products.
Kostenko warns that it is unknown whether Ukraine already possesses its own digital simulacrum hubs, while foreign ones are well established, raising the question of whether Ukraine’s data is being modeled within someone else’s frame of reference.
This situation raises serious concerns about the privacy, security, and ethical use of these emerging technologies, as oversight and regulation remain insufficient.
Overall, artificial intelligence presents both opportunities for human civilization and significant risks, especially concerning individual rights and democratic processes in the digital age.
