As AI-generated content continues to encroach on everything from advertising to the voice acting profession, YouTube is adding a requirement for users to flag their videos when they include anything made by an AI program. However, looking at the guidelines, it doesn’t seem like the video hosting site has any way to actually enforce or detect this.

YouTube’s Vice Presidents of Product Management, Jennifer Flannery O’Connor and Emily Moxley, broke down the new policy in a blog post on November 14. First and foremost, any video that contains AI-generated content will require disclosure and content labels in the video description that make it clear that aspects of the video were created by AI. The examples given include a video that “realistically depicts an event that never happened” as well as deepfakes showing an individual “saying or doing something they didn’t actually do.”

The blog post says the new policy is meant to help combat misinformation, especially regarding real-world issues like elections and ongoing health and world crises. It also states that some AI-generated content, labeled or not, may be removed from YouTube if a disclaimer “may not be enough to mitigate the risk of harm.” One example YouTube gives is a realistic portrayal of violence that exists solely to shock viewers, as opposed to a historical video of an educational or informative sort that also includes violence.

Alongside the disclaimer, YouTube is rolling out community guidelines that will allow those affected by AI-generated content to request that videos be removed on those grounds. So if someone is using AI to simulate you doing something you didn’t do, you can request to have those videos removed, with YouTube offering the specific example of musicians whose voices are being mimicked by AI software.

One distinction made is that if AI-generated voices are part of an analysis, such as a creator discussing the trend of AI covers and including audio that sounds like a singer performing someone else’s song, the video may not be taken down. But it sounds like videos that are just songs performed by an AI imitating someone’s voice can be taken down at an artist’s request. Parody or satire is also, apparently, fair game.

The big question here is whether or not YouTube actually has any means of enforcing this beyond the threat of consequences, including “content removal, suspension from the YouTube Partner Program, or other penalties” for those who consistently fail to disclose. Presumably the “other penalties” could mean an eventual ban from the platform, but even so, it sounds as if the entire thing is currently self-imposed and working on an honor system.

While there might be some kinks to work out here, it is a relief to see large platforms doing some work to combat the misinformation brought on by AI tools. I spend a lot of time on TikTok, and while AI covers and other synthetic audio have become prominent there, I’ve anecdotally seen many users build entire accounts that do nothing but churn out AI content without disclosing it at all. I’m a chronic scroller, so I’ve learned the signs to look and listen for, but as AI tools grow more widespread, it becomes increasingly likely that people who don’t know better will take these videos at face value.



By asm3a