Meta tests new AI-powered photo editing feature directly on users’ smartphones

Chas Pravdy - 29 June 2025 11:28

Meta, one of the world's largest tech companies, has launched an experimental feature on its Facebook platform that lets users edit photos with artificial intelligence directly on their mobile devices. In a pilot currently limited to Canada and the USA, the company invites users to activate a function that automatically reviews uploaded photos and videos and offers to store them in cloud storage for further editing and for creating collages or themed holiday albums. The process begins with a pop-up window titled “cloud processing,” which appears when users attempt to upload media. By accepting the terms of use, users consent to the processing of personal data, including facial recognition, identification of people and objects, and the creation of new images based on the analyzed data.

This use of AI worries experts and human rights advocates: even when content is never published publicly, the AI gains access to large volumes of personal information, potentially infringing on users' privacy and endangering their security. Meta maintains that the feature is being tested solely to gauge its usefulness and to demonstrate automated content generation; in this mode the uploaded media are not used to train AI models, and they may be used to improve services only with user consent. Meta spokesperson Maria Cubeta explained: “We aim to make content sharing on Facebook more convenient and efficient. To do this, we are testing suggestions that help automatically generate content, which are only visible to you unless you choose to share them. You can disable this feature at any time.”

Many analysts nevertheless warn that widespread use of AI in such contexts could have serious implications for user privacy as well as mental and physical wellbeing, potentially leading to data breaches and misuse. They call for stronger government regulation and oversight to mitigate these risks and to control the development of technologies that may threaten human rights and security. More details on this topic and other issues related to AI development can be found in Sergey Kozyakov's article “What should be the role of state policy in supporting AI development and mitigating its dangers.”