Caste bias is rampant in OpenAI’s products, including ChatGPT, according to an MIT Technology Review investigation. Though CEO Sam Altman boasted about India being its second-largest market during the launch of GPT-5 in August, we found that both this new model, which now powers ChatGPT, and Sora, OpenAI’s text-to-video generator, exhibit caste bias. This risks entrenching discriminatory views in ways that are currently going unaddressed.
Mitigating caste bias in AI models is more pressing than ever. In contemporary India, many caste-oppressed Dalit people have escaped poverty and become doctors, civil service officers, and scholars; some have even risen to become the president of India. But AI models continue to reproduce socioeconomic and occupational stereotypes that render Dalits as dirty, poor, and fit only for menial jobs. Read the full story.
—Nilesh Christopher
MIT Technology Review Narrated: how do AI models generate videos?
It’s been a huge year for video generation. The downside is that creators are competing with AI slop, and social media feeds are filling up with faked news footage. Video generation also uses an enormous amount of energy, many times more than text or image generation.
With AI-generated videos everywhere, let’s take a moment to talk about the tech that makes them work.
This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we’re publishing weekly on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.
The must-reads