Google joins Microsoft to protect users from AI copyright lawsuits
Google has announced that it will offer legal protection to its cloud customers who use its artificial intelligence (AI) services, following a similar move by Microsoft earlier this year. The company said it will indemnify Google Cloud Platform (GCP) users against claims of intellectual property rights (IPR) infringement arising from their use of Google's AI technologies.
This means that if a third party sues a GCP customer for using Google's AI services, such as its natural language processing, computer vision, or speech recognition tools, Google will cover legal costs and any damages awarded, up to a certain limit. The limit varies depending on the type and volume of AI services the customer uses, but can range from $500,000 to $1.5 million per customer per year.
Google said this initiative is part of its commitment to "responsible AI" and that it aims to provide "peace of mind" to its customers who want to take advantage of the benefits of AI without worrying about potential legal risks. The company also said it will continue to invest in research and development to ensure its AI services are trustworthy, fair and ethical.
Google's announcement comes after Microsoft introduced a similar indemnification policy for its Azure customers earlier this year. Microsoft said it will defend and indemnify Azure customers facing lawsuits over the use of its AI services, such as Azure Cognitive Services, Azure Machine Learning, or Azure Bot Service. Microsoft's policy also covers legal costs and damages up to a certain limit, which depends on the customer's subscription level and the type of AI service used.
Both Google and Microsoft are among the leading providers of cloud-based AI services, allowing businesses and organizations to access advanced AI capabilities without having to build their own infrastructure or expertise. However, the use of AI also poses some challenges and uncertainties, especially regarding the ownership and protection of intellectual property rights related to content or products generated or assisted by AI.
According to a report by the World Intellectual Property Organization (WIPO), there is no clear consensus or international framework on how to address intellectual property rights issues arising from AI. For example, it is not clear who owns the rights to an AI-generated work, such as a text, image, or song, or whether such a work may be protected by copyright. Similarly, it is not clear who is responsible for any harm or damage caused by an AI system or application, such as a faulty diagnosis, a biased recommendation, or a misleading translation.
By offering indemnification policies, Google and Microsoft are trying to address some of these uncertainties and provide more confidence and security to their cloud customers using their AI services. However, these policies also have some limitations and exclusions, such as not covering claims related to patent infringement, trade secret misappropriation, or privacy violations. Furthermore, these policies do not resolve the underlying legal and ethical issues surrounding the use of AI in various settings and contexts.
Therefore, while Google and Microsoft's initiatives are welcome steps toward fostering greater trust and responsibility in the AI ecosystem, they are not enough to ensure that AI is used legally and ethically. Greater dialogue and collaboration are still needed among stakeholders, including policymakers, regulators, researchers, developers, users, and civil society, to establish clear and consistent rules and standards for AI governance.
Google Cloud and Workspace customers who use the company's generative AI tools can rest assured that Google will protect them from any potential legal issues that arise from the use of these tools.
That's what Neal Suggs, VP Legal at Google Cloud, and Phil Venables, VP IT Security and CISO at Google Cloud, announced in a blog post this week.
They wrote: "We want to make it clear to our customers: if they face any legal challenge based on copyright infringement, we will take responsibility for the potential legal risks involved."
This is similar to what Microsoft promised for its Copilot AI tool last month, and to commitments Adobe and Shutterstock have made to their enterprise customers.
These protections are a response to concerns that AI could unintentionally copy or reuse copyrighted works and expose the user or company to legal action. For example, Google was sued in a class-action lawsuit earlier this year for allegedly using public data to train its Bard chatbot.
Google's protections cover several products, including Google Workspace, Google Cloud, and the Vertex AI platform. The indemnity applies to customers on two fronts: training data and generated output.
On the training data side, Google and other chatbot developers have faced criticism from authors, artists, publications, and others for using their online content to train AI chatbots. And while Google already offers a third-party intellectual property indemnity, the company says customers have asked for "explicit clarifications" regarding its AI tools.