In the future, an AI agent could not only suggest things to do and places to stay on my honeymoon; it could also go a step further than ChatGPT and book flights for me. It could remember my preferences and budget for hotels and recommend only accommodations that matched my criteria. It might also remember what I liked to do on past trips and suggest very specific activities tailored to those tastes. It could even request restaurant bookings on my behalf.
Unfortunately for my honeymoon, today's AI systems lack the kind of reasoning, planning, and memory needed. It's still early days for these systems, and there are plenty of unsolved research questions. But who knows? Maybe by our tenth anniversary trip.
Deeper Learning
A way to let robots learn by listening will make them more useful
Most AI-powered robots today use cameras to understand their surroundings and learn new tasks, but it's becoming easier to train robots with sound too, helping them adapt to tasks and environments where visibility is limited.
Sound on: Researchers at Stanford University tested how much more successful a robot can be if it's capable of "listening." They chose four tasks: flipping a bagel in a pan, erasing a whiteboard, putting two Velcro strips together, and pouring dice out of a cup. In each task, sounds provided clues that cameras or tactile sensors struggle with, like knowing whether the eraser is properly contacting the whiteboard or whether the cup contains dice. When using vision alone in the last test, the robot could tell only 27% of the time whether there were dice in the cup, but that rose to 94% when sound was included. Read more from James O'Donnell.
Bits and Bytes
AI lie detectors are better than humans at spotting lies
Researchers at the University of Würzburg in Germany found that an AI system was significantly better at spotting fabricated statements than humans. People usually only get it right around half the time, but the AI could tell whether a statement was true or false in 67% of cases. However, lie detection is a controversial and unreliable technology, and it's debatable whether we should even be using it in the first place. (MIT Technology Review)
A hacker stole secrets from OpenAI
A hacker managed to access OpenAI's internal messaging systems and steal details about its AI technology. The company believes the hacker was a private individual, but the incident raised fears among OpenAI employees that China could steal the company's technology too. (The New York Times)
AI has vastly increased Google's emissions over the past five years
Google said its greenhouse-gas emissions totaled 14.3 million metric tons of carbon dioxide equivalent throughout 2023. That is 48% higher than in 2019, the company said. The increase is mostly due to Google's massive push toward AI, which will likely make it harder to hit its goal of eliminating carbon emissions by 2030. It's an utterly depressing example of how our societies prioritize profit over the climate emergency we're in. (Bloomberg)
Why a $14 billion startup is hiring PhDs to train AI systems from their living rooms
An interesting read about the shift happening in AI and data work. Scale AI has previously employed low-paid data workers in countries such as India and the Philippines to annotate the data used to train AI. But the massive boom in language models has prompted Scale to hire highly skilled contractors in the US with the expertise needed to help train those models. It highlights just how important data work really is to AI. (The Information)
A new "ethical" AI music generator can't write a halfway decent song
Copyright is one of the thorniest problems facing AI today. Just last week I wrote about how AI companies are being forced to cough up for high-quality training data to build powerful AI. This story illustrates why that matters: it's about an "ethical" AI music generator that used only a limited data set of licensed music. Without high-quality data, it isn't able to generate anything even close to decent. (Wired)