As generative AI becomes more common across industries, product teams increasingly use it to write copy, brainstorm interface designs, summarize research and speed up software development. Chief experience officers (CXOs) driving digital transformation favor this integration because it reduces iteration time, letting teams move from concept to prototype faster.
However, the efficiency that generative AI brings also carries copyright risk. For CXOs, the question becomes how to enable that efficiency without exposing the business to avoidable intellectual property risk.
Why Copyright Concerns Are Hard to Ignore
Generative AI is now a visible contributor to the software supply chain, and there is growing awareness that it needs governance. A data-driven review of over 30 million GitHub commits found that AI is responsible for 29% of Python functions written in the U.S., and that since the adoption of generative AI, online code contributions (commits per quarter) have increased by 3.6%.
Many generative AI models are trained on large datasets of text, images or computer code, and much of that material may be copyrighted; the legal status of training on it remains unsettled. The problem is that if a model trained on copyrighted content generates a work that reads too closely to previously published material, the business might face legal exposure or reputational damage. Not all generative AI outputs infringe, but companies need a higher standard than “the tool produced it, so it must be fine.”
Copyright fights rarely stay confined to the legal department. If a product team relies on AI-generated interface copy, design assets, code or branded content, outputs that closely resemble existing copyrighted works may make their way into product releases, creating legal exposure and reputational risk.
For CXOs, generative AI is as much a governance challenge as an innovation opportunity: the tool can no longer be treated as an innocuous brainstorming assistant once its outputs reach public-facing products.
Why Companies Still Keep AI in the Workflow
Intellectual property concerns have not dissuaded companies from pursuing AI. In 2025, small business AI adoption reached 55%, and that number only continues to grow. The productivity argument is strong, especially in businesses where speed and scale translate into better customer service.
AI also reduces redundant work: phones and virtual meeting platforms now offer transcription, summaries, voice-to-text and note capture as workflow support inside daily operations. These features do not replace human judgment; instead, they help product and customer-facing teams overcome workflow friction. The real question is whether the organization has enough structure around their use to stop convenience from outpacing oversight.
How CXOs Can Reduce IP Risk Without Freezing Innovation
Companies don’t have to abandon generative AI, but they must take the time to define when it’s appropriate, what needs double-checking, and which tools can be trusted.
1. Setting Policies
Organizations should set documented policies around responsible AI: which tools are permitted, which types of content need human review, and whether proprietary or sensitive information may be entered into publicly available tools. Without that documentation, every team ends up setting its own risk tolerance by default.
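One way to make such a policy enforceable rather than aspirational is to capture it in machine-readable form so tooling can check proposed uses automatically. The Python sketch below is a minimal illustration; the tool names, content categories and rules are hypothetical placeholders, not a recommended standard.

```python
# Minimal sketch of an AI-use policy captured as data rather than prose.
# All tool names, categories and rules are hypothetical examples.

APPROVED_TOOLS = {
    # tool name -> may it receive proprietary or sensitive input?
    "internal-llm": True,
    "public-chatbot": False,
}

REVIEW_REQUIRED = {
    # content type -> does it need human review before release?
    "ui_copy": True,
    "marketing_text": True,
    "production_code": True,
    "internal_brainstorm": False,
}

def policy_flags(tool: str, content_type: str, proprietary_input: bool) -> list[str]:
    """Return the policy issues a proposed AI use would raise."""
    flags = []
    if tool not in APPROVED_TOOLS:
        flags.append(f"'{tool}' is not an approved tool")
    elif proprietary_input and not APPROVED_TOOLS[tool]:
        flags.append(f"'{tool}' may not receive proprietary data")
    if REVIEW_REQUIRED.get(content_type, True):  # unknown types default to review
        flags.append(f"'{content_type}' requires human review before release")
    return flags

print(policy_flags("public-chatbot", "ui_copy", proprietary_input=True))
# ["'public-chatbot' may not receive proprietary data",
#  "'ui_copy' requires human review before release"]
```

Expressing the policy as data rather than a slide deck also makes it auditable: when the rules change, the change is visible in version control.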
2. Reviewing Output
A human should review every element created by AI before it goes into production. An appointed reviewer should examine code, marketing text, visuals and user interface copy, checking for plagiarism, factual inaccuracy and unintended resemblance to copyrighted works. AI can help generate early drafts, but a human expert must decide what ships.
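As one concrete illustration of what a resemblance check for code might look like, the sketch below uses Python's standard-library difflib to score generated text against known reference snippets and flag near-verbatim matches. The reference corpus and the 0.8 threshold are assumptions for illustration; this is a minimal sketch, not a substitute for dedicated plagiarism or license-scanning tools.

```python
import difflib

# Hypothetical corpus of previously published snippets the team must not
# reproduce; in practice this might come from a license-scanning service.
REFERENCE_SNIPPETS = {
    "example-gpl-project/sort.py": (
        "def quicksort(arr):\n"
        "    if len(arr) <= 1:\n"
        "        return arr\n"
        "    pivot = arr[len(arr) // 2]\n"
        "    return (quicksort([x for x in arr if x < pivot])\n"
        "            + [x for x in arr if x == pivot]\n"
        "            + quicksort([x for x in arr if x > pivot]))\n"
    ),
}

SIMILARITY_THRESHOLD = 0.8  # assumed cutoff; tune to your own review process

def flag_resemblances(generated: str) -> list[tuple[str, float]]:
    """Return (source, similarity) pairs where the output is suspiciously close."""
    hits = []
    for source, reference in REFERENCE_SNIPPETS.items():
        # autojunk=False keeps common characters from being ignored on long strings
        ratio = difflib.SequenceMatcher(None, generated, reference,
                                        autojunk=False).ratio()
        if ratio >= SIMILARITY_THRESHOLD:
            hits.append((source, round(ratio, 2)))
    return hits

# An AI-generated draft that merely renames a variable still scores high
# and gets flagged for the human reviewer rather than silently merged.
draft = REFERENCE_SNIPPETS["example-gpl-project/sort.py"].replace("arr", "items")
print(flag_resemblances(draft))
```

A check like this only catches near-verbatim overlap; semantic similarity, images and factual accuracy still require human judgment and, where the stakes justify it, specialized tooling.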
3. Monitoring Vendors
Companies should be discerning about the vendors they choose, because training data is often opaque and some information simply won't be available. Much of the training data behind generative AI models comes from large, publicly available datasets such as Wikipedia or Reddit, though the exact composition is often not disclosed.
Knowing how an AI model was trained won't settle copyright litigation, but it does explain why leaders must ask harder questions about data sourcing, licensing, indemnification and retention before deploying any AI platform broadly.
4. Sharing Responsibility
Leaders in legal, design, engineering and customer experience must treat AI governance as a shared responsibility. Copyright risk is cross-departmental, and siloing it almost always creates blind spots.
A Wiser Path Forward
Generative AI can help product teams move faster, but speed without review holds up about as well as a paper boat in water. For CXOs, the safest path is neither total resistance nor blind adoption; it is disciplined adoption.
With vendor diligence, human oversight and established internal policies, companies can use AI far more freely without overexposing their intellectual property. The art is finding the balance that works for a business in its unique environment, turning generative AI from a legal gamble into a business asset.
