China’s Plan to Make AI Watermarks Happen


Chinese regulators likely learned from the EU AI Act, says Jeffrey Ding, an assistant professor of political science at George Washington University. “Chinese policymakers and scholars have said that they’ve drawn on the EU’s Acts as inspiration for things in the past.”

At the same time, some of the measures taken by Chinese regulators aren’t really replicable in other countries. For example, the Chinese government is asking social platforms to screen user-uploaded content for AI-generated material. “That seems something that is very new and might be unique to the China context,” Ding says. “This would never exist in the US context, because the US is famous for saying that the platform is not responsible for content.”

But What About Freedom of Expression Online?

The draft regulation on AI content labeling is seeking public feedback until October 14, and it may take another several months for it to be modified and passed. But there’s little reason for Chinese companies to delay preparing for when it goes into effect.

Sima Huapeng, founder and CEO of the Chinese AIGC company Silicon Intelligence, which uses deepfake technologies to generate AI agents and influencers and to replicate both living and deceased people, says his product currently lets users choose whether to mark the generated content as AI. But if the law passes, he may have to make that labeling mandatory.

“If a feature is optional, then most likely companies won’t add it to their products. But if it becomes compulsory by law, then everyone has to implement it,” Sima says. It’s not technically difficult to add watermarks or metadata labels, but it will increase the operating costs for compliant companies.

Policies like this can steer AI away from being used for scamming or privacy invasion, he says, but they could also fuel a black market of AI services from companies trying to dodge compliance and save on costs.

There’s also a fine line between holding AI content producers accountable and policing individual speech through more sophisticated tracing.

“The big underlying human rights challenge is to be sure that these approaches don’t further compromise privacy or free expression,” says Gregory. While implicit labels and watermarks can be used to identify sources of misinformation and inappropriate content, the same tools can give platforms and the government stronger control over what users post on the internet. In fact, concerns about how AI tools can go rogue have been one of the main drivers of China’s proactive AI legislation efforts.

At the same time, the Chinese AI industry is pushing back against the government for more room to experiment and grow, since Chinese companies already lag behind their Western peers. An earlier Chinese generative-AI law was watered down considerably between the first public draft and the final bill, removing identity-verification requirements and reducing the penalties imposed on companies.

“What we’ve seen is the Chinese government really trying to walk this fine tightrope between ‘making sure we maintain content control’ but also ‘letting these AI labs in a strategic space have the freedom to innovate,’” says Ding. “This is another attempt to do that.”
