Fascinating article in the New Yorker about fMRI scans that trace the iron atoms in hemoglobin as blood oxygenation shifts with neural activity, mapping this activity onto the stimuli being experienced by the patient, like watching a Hitchcock movie or hearing a series of words. A means of programmatically mapping out thought space. Not much to do with AI, although of course machine learning was involved in analyzing these vast data sets (cubic millimeters, or "voxels," of brain matter, presumably grey matter, but the article didn't specify). Naturally, iARPA was involved:
The work at Princeton was funded by iARPA, an R. & D. organization that’s run by the Office of the Director of National Intelligence. Brandon Minnery, the iARPA project manager for the Knowledge Representation in Neural Systems program at the time, told me that he had some applications in mind. If you knew how knowledge was represented in the brain, you might be able to distinguish between novice and expert intelligence agents. You might learn how to teach languages more effectively by seeing how closely a student’s mental representation of a word matches that of a native speaker. Minnery’s most fanciful idea—“Never an official focus of the program,” he said—was to change how databases are indexed. Instead of labelling items by hand, you could show an item to someone sitting in an fMRI scanner—the person’s brain state could be the label. Later, to query the database, someone else could sit in the scanner and simply think of whatever she wanted. The software could compare the searcher’s brain state with the indexer’s. It would be the ultimate solution to the vocabulary problem.
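Minnery's database idea is, mechanically, nearest-neighbor search with brain states as keys: label each item with the voxel pattern recorded while an indexer views it, then match a searcher's pattern against those stored labels. Here's a minimal sketch of that matching step, with every name, dimension, and data point invented for illustration (real scans involve tens of thousands of voxels and far messier signals):

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 500  # invented; real whole-brain scans have vastly more

# "Indexing": record a brain state while an indexer views each item.
items = ["sunset photo", "cat video", "tax form"]
index = {item: rng.normal(size=n_voxels) for item in items}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def query(brain_state: np.ndarray) -> str:
    """Return the item whose stored brain state best matches the query state."""
    return max(index, key=lambda item: cosine(index[item], brain_state))

# "Querying": a searcher just thinks of the thing; their brain state
# (modeled here as the stored pattern plus noise) is matched to the index.
searcher_state = index["cat video"] + rng.normal(scale=0.5, size=n_voxels)
print(query(searcher_state))  # -> cat video
```

Cosine similarity is just one plausible matching rule; the hard part, as the article makes clear, is everything upstream of it, starting with whether two people's brain states for the same concept are comparable at all.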
Jack Gallant, a professor at Berkeley who has used thought decoding to reconstruct video montages from brain scans—as you watch a video in the scanner, the system pulls up frames from similar YouTube clips, based only on your voxel patterns—suggested that one group of people interested in decoding were Silicon Valley investors. “A future technology would be a portable hat—like a thinking hat,” he said. He imagined a company paying people thirty thousand dollars a year to wear the thinking hat, along with video-recording eyeglasses and other sensors, allowing the system to record everything they see, hear, and think, ultimately creating an exhaustive inventory of the mind. Wearing the thinking hat, you could ask your computer a question just by imagining the words. Instantaneous translation might be possible. In theory, a pair of wearers could skip language altogether, conversing directly, mind to mind. Perhaps we could even communicate across species. Among the challenges the designers of such a system would face, of course, is the fact that today’s fMRI machines can weigh more than twenty thousand pounds. There are efforts under way to make powerful miniature imaging devices, using lasers, ultrasound, or even microwaves. “It’s going to require some sort of punctuated-equilibrium technology revolution,” Gallant said. Still, the conceptual foundation, which goes back to the nineteen-fifties, has been laid.
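Underneath both excerpts is the same decoding step: a model trained to predict the stimulus from voxel activity alone. As a purely illustrative sketch on synthetic data, using a ridge-regularized logistic-regression classifier (a common choice in the decoding literature, not necessarily what these labs used):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels, n_words = 200, 500, 4  # invented sizes

# Pretend each word evokes a characteristic pattern across voxels,
# observed through heavy measurement noise.
word_patterns = rng.normal(size=(n_words, n_voxels))
heard = rng.integers(0, n_words, size=n_trials)  # word shown on each trial
voxels = word_patterns[heard] + rng.normal(scale=2.0, size=(n_trials, n_voxels))

# Decode: predict the heard word from the voxel pattern alone.
decoder = LogisticRegression(C=0.1, max_iter=1000)
accuracy = cross_val_score(decoder, voxels, heard, cv=5).mean()
print(f"decoding accuracy: {accuracy:.2f} (chance: {1 / n_words:.2f})")
```

On real data the interesting part is generalization, whether the decoder handles words or scenes it was never trained on, which is what makes reconstruction demos like Gallant's video montages so striking.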
edit: typo fixes in the first sentence, from "experience" to "experienced" and from "word" to "words"