Zoom’s recent terms-of-service update establishes its right to use certain aspects of customer data for training and tuning its AI and machine-learning models.

The terms state, “You consent to Zoom’s access, use, collection, creation, modification, distribution, processing, sharing, maintenance, and storage of Service Generated Data for any purpose, to the extent and in the manner permitted under applicable Law, including for the purpose of … machine learning or artificial intelligence (including for the purposes of training and tuning of algorithms and models).”

The change comes amid growing public debate over the ethical boundaries of AI model training.

Commenting on this, Iterate.ai head of machine learning, Shomron Jacob, said: “In an era where data privacy is paramount, Zoom’s decision to train its AI models using certain customer data, as per their updated terms, underscores the delicate balance tech companies must strike between innovation and user trust. While the move aligns with Zoom’s AI ambitions, it also amplifies the broader debate on the ethical boundaries of AI training. As consumers, it’s crucial we stay informed and exercise our rights, especially when our data becomes a pivotal asset in shaping the future of technology.”

Zoom responded to the development in a statement to Adweek, saying it “will not use customer content, including education records or protected health information, to train our artificial intelligence models without [user] consent.”

The episode underscores that consumers must stay informed about how their data is used, especially as AI develops and is integrated into more products and services.