In 2021, the Maryland Department of Health and the state police were confronting a crisis: Fatal drug overdoses in the state were at an all-time high, and authorities didn't know why.
Seeking answers, Maryland officials turned to scientists at the National Institute of Standards and Technology, the national metrology institute for the United States, which defines and maintains standards of measurement essential to a wide range of industrial sectors as well as health and security applications.
There, a research chemist named Ed Sisco and his team had developed methods for detecting trace amounts of drugs, explosives, and other dangerous materials. Those methods could protect law enforcement officers and others who had to collect these samples. And a pilot uncovered new, critical information almost immediately. Read the full story.
—Adam Bluestein
This story is from the next issue of our print magazine. Subscribe now to read it and get a copy of the magazine when it lands!
Phase two of military AI has arrived
—James O’Donnell
Last week, I spoke with two US Marines who spent much of last year deployed in the Pacific, conducting training exercises from South Korea to the Philippines. Both were responsible for analyzing surveillance to warn their superiors about possible threats to the unit. But this deployment was unique: For the first time, they were using generative AI to scour intelligence, through a chatbot interface similar to ChatGPT.
As I wrote in my new story, this experiment is the latest evidence of the Pentagon's push to use generative AI, tools that can engage in humanlike conversation, throughout its ranks, including for surveillance tasks. That push has raised alarms among some AI safety experts about whether large language models are fit to analyze sensitive pieces of intelligence in situations with high geopolitical stakes.