Meta has reportedly broken up its Responsible AI (RAI) team as it puts more of its resources into generative artificial intelligence. The Information broke the news today, citing an internal post it had seen.

According to the report, most RAI members will move to the company’s generative AI product team, while others will work on Meta’s AI infrastructure. The company regularly says it wants to develop AI responsibly and even has a page devoted to the promise, where the company lists its “pillars of responsible AI,” including accountability, transparency, safety, privacy, and more.

The Information’s report quotes Jon Carvill, who represents Meta, as saying that the company will “continue to prioritize and invest in safe and responsible AI development.” He added that although the company is splitting the team up, those members will “continue to support relevant cross-Meta efforts on responsible AI development and use.”

Meta did not respond to a request for comment by press time.

The team already saw a restructuring earlier this year, which Business Insider wrote included layoffs that left RAI “a shell of a team.” That report went on to say the RAI team, which had existed since 2019, had little autonomy and that its initiatives had to go through lengthy stakeholder negotiations before they could be implemented.

RAI was created to identify problems with Meta’s AI training approaches, including whether the company’s models are trained with adequately diverse information, with an eye toward preventing things like moderation issues on its platforms. Automated systems on Meta’s social platforms have led to problems like a Facebook translation issue that caused a false arrest, WhatsApp AI sticker generation that produced biased images when given certain prompts, and Instagram’s algorithms helping people find child sexual abuse materials.
