
Advances in Artificial Intelligence top seven technologies to watch in 2024

By Guardian Nigeria
15 February 2024   |   3:55 am

Artificial Intelligence (AI). PHOTO: FORBES

From protein engineering and 3D printing to detection of deepfake media, here are seven areas of technology that the journal Nature will be watching in the year ahead.

Deep learning for protein design
Two decades ago, David Baker at the University of Washington in Seattle, United States, and his colleagues achieved a landmark feat: they used computational tools to design an entirely new protein from scratch. ‘Top7’ folded as predicted, but it was inert: it performed no meaningful biological functions. Today, de novo protein design has matured into a practical tool for generating made-to-order enzymes and other proteins. “It’s hugely empowering,” says Neil King, a biochemist at the University of Washington who collaborates with Baker’s team to design protein-based vaccines and vehicles for drug delivery. “Things that were impossible a year and a half ago — now you just do it.”

Much of that progress comes down to increasingly massive data sets that link protein sequence to structure. But sophisticated methods of deep learning, a form of artificial intelligence (AI), have also been essential.

‘Sequence-based’ strategies use the large language models (LLMs) that power tools such as the chatbot ChatGPT. By treating protein sequences like documents comprising polypeptide ‘words’, these algorithms can discern the patterns that underlie the architectural playbook of real-world proteins. “They really learn the hidden grammar,” says Noelia Ferruz, a protein biochemist at the Molecular Biology Institute of Barcelona, Spain. In 2022, her team developed an algorithm called ProtGPT2 that consistently comes up with synthetic proteins that fold stably when produced in the laboratory1. Another tool co-developed by Ferruz, called ZymCTRL, draws on sequence and functional data to design new members of naturally occurring enzyme families.
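The sequence-as-language idea can be sketched in a few lines of code. Real tools such as ProtGPT2 are transformer LLMs trained on millions of sequences; the bigram model below, with an invented three-sequence training set, is only a minimal stand-in that illustrates the same principle of learning which residues tend to follow which.

```python
# Toy sketch of a 'protein language model': learn residue-to-residue
# transition statistics from example sequences, then sample new ones.
import random
from collections import defaultdict

def train_bigram(sequences):
    """Count which amino-acid letter follows which across the training set."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def generate(counts, start, length, rng):
    """Sample a new sequence by walking the learned transition table."""
    seq = [start]
    for _ in range(length - 1):
        nxt = counts[seq[-1]]
        if not nxt:
            break
        residues, weights = zip(*nxt.items())
        seq.append(rng.choices(residues, weights=weights)[0])
    return "".join(seq)

training = ["MKVLAA", "MKVLGA", "MKALAA"]  # invented toy 'proteins'
model = train_bigram(training)
print(generate(model, "M", 6, random.Random(0)))
```

A real model learns vastly richer, longer-range patterns, which is what lets it capture the "hidden grammar" Ferruz describes rather than just local letter statistics.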

ChatGPT? Maybe next year
Readers might detect a theme in this year’s technologies to watch: the outsized impact of deep-learning methods. But one such tool did not make the final cut: the much-hyped AI-powered chatbots. ChatGPT and its ilk seem poised to become part of many researchers’ daily routines and were feted as part of the 2023 Nature’s 10 round-up (see go.nature.com/3trp7rg). Respondents to a Nature survey in September (see go.nature.com/45232vd) cited ChatGPT as the most useful AI-based tool and were enthusiastic about its potential for coding, literature reviews and administrative tasks.

Such tools are also proving valuable from an equity perspective, helping those for whom English isn’t their first language to refine their prose and thereby ease their paths to publication and career growth.

However, many of these applications represent labour-saving gains rather than transformations of the research process. Furthermore, ChatGPT’s tendency to issue misleading or fabricated responses was the leading concern of more than two-thirds of survey respondents. Although worth monitoring, these tools need time to mature and to establish their broader role in the scientific world.

Sequence-based approaches can build on and adapt existing protein features to form new frameworks, but they’re less effective for the bespoke design of structural elements or features, such as the ability to bind specific targets in a predictable fashion. ‘Structure-based’ approaches are better for this, and 2023 saw notable progress in this type of protein-design algorithm, too. Some of the most sophisticated of these use ‘diffusion’ models, which also underlie image-generating tools such as DALL-E. These algorithms are initially trained to remove computer-generated noise from large numbers of real structures; by learning to discriminate realistic structural elements from noise, they gain the ability to form biologically plausible, user-defined structures.
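The reverse-diffusion loop described above can be illustrated with a toy sketch: start from pure noise and repeatedly apply a denoiser until a plausible structure emerges. In tools such as RFdiffusion the denoiser is a trained neural network; here it is a hand-coded stand-in that pulls points toward an invented helix-like backbone, purely to show the iterative refinement, not how any real network works.

```python
# Minimal sketch of iterative denoising: random coordinates are nudged,
# step by step, toward a plausible target structure.
import math
import random

def target_backbone(n):
    """A stand-in 'plausible structure': points on a helix-like curve."""
    return [(math.cos(0.5 * i), math.sin(0.5 * i), 0.3 * i) for i in range(n)]

def denoise_step(coords, target, step=0.2):
    """Stand-in for the learned denoiser: move each point toward the target."""
    return [tuple(c + step * (t - c) for c, t in zip(p, q))
            for p, q in zip(coords, target)]

rng = random.Random(42)
n = 8
coords = [(rng.gauss(0, 3), rng.gauss(0, 3), rng.gauss(0, 3))
          for _ in range(n)]               # start from pure noise
target = target_backbone(n)

for _ in range(30):                        # iterative refinement loop
    coords = denoise_step(coords, target)

# Root-mean-square deviation of the denoised points from the target
rmsd = math.sqrt(sum((c - t) ** 2 for p, q in zip(coords, target)
                     for c, t in zip(p, q)) / n)
print(f"RMSD to target after denoising: {rmsd:.4f}")
```

The key difference in the real tools is that the network has learned what "plausible" means from many thousands of experimentally solved structures, so the designer specifies constraints rather than a full target.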

The RFdiffusion software developed by Baker’s lab and the Chroma tool from Generate Biomedicines in Somerville, Massachusetts4, exploit this strategy to remarkable effect. For example, Baker’s team is using RFdiffusion to engineer novel proteins that can form snug interfaces with targets of interest, yielding designs that “just conform perfectly to the surface,” Baker says. A newer ‘all atom’ iteration of RFdiffusion5 allows designers to computationally shape proteins around non-protein targets such as DNA, small molecules and even metal ions. The resulting versatility opens new horizons for engineered enzymes, transcriptional regulators, functional biomaterials and more.

Deepfake detection
The explosion of publicly available generative-AI algorithms has made it simple to synthesize convincing but entirely artificial images, audio and video. The results can offer amusing distractions, but with multiple ongoing geopolitical conflicts and a US presidential election on the horizon, opportunities for weaponized media manipulation are rife.

Siwei Lyu, a computer scientist at the University at Buffalo in New York, says he’s seen numerous AI-generated ‘deepfake’ images and audio related to the Israel–Hamas conflict, for instance. This is just the latest round in a high-stakes game of cat-and-mouse in which AI users produce deceptive content and Lyu and other media-forensics specialists work to detect and intercept it.

One solution is for generative-AI developers to embed hidden signals in the models’ output, producing watermarks of AI-generated content. Other strategies focus on the content itself. Some manipulated videos, for instance, replace the facial features of one public figure with those of another, and new algorithms can recognize artefacts at the boundaries of the substituted features, says Lyu. The distinctive folds of a person’s outer ear can also reveal mismatches between a face and a head, whereas irregularities in the teeth can reveal edited lip-sync videos in which a person’s mouth was digitally manipulated to say something that the subject didn’t say. AI-generated photos also present a thorny challenge — and a moving target. In 2019, Luisa Verdoliva, a media-forensics specialist at University Federico II of Naples, Italy, helped to develop FaceForensics++, a tool for spotting faces manipulated by several widely used software packages6. But image-forensic methods are subject- and software-specific, and generalization is a challenge. “You cannot have one single universal detector — it’s very difficult,” she says.
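The watermarking idea can be sketched in a few lines: the generator biases its word choices toward a secret ‘green list’, and a detector flags text containing an improbably high fraction of green words. The vocabulary and the parity rule below are invented stand-ins for the keyed hash a real scheme would use; this mirrors published LLM-watermarking proposals in spirit only.

```python
# Toy watermark: generator favours 'green' words; detector measures the
# green fraction of a text. All rules and words here are illustrative.
VOCAB = ["the", "cat", "dog", "sat", "ran", "on", "mat", "log"]

def is_green(word):
    """Stand-in for a secret hash rule: even character-code sum = 'green'."""
    return sum(map(ord, word)) % 2 == 0

def green_fraction(text):
    """Detector: what fraction of the words are on the green list?"""
    words = text.split()
    return sum(is_green(w) for w in words) / len(words)

def watermark_generate(n_words):
    """Toy 'generator' that only emits green-listed words."""
    greens = [w for w in VOCAB if is_green(w)]
    return " ".join(greens[i % len(greens)] for i in range(n_words))

print(green_fraction(watermark_generate(20)))    # 1.0: heavily watermarked
print(green_fraction("the cat ran on the mat"))  # ~0.33: ordinary text
```

Real schemes bias rather than restrict word choice, so the text stays fluent while the statistical signature remains detectable to anyone holding the key.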

And then there’s the challenge of implementation. The US Defense Advanced Research Projects Agency’s Semantic Forensics (SemaFor) programme has developed a useful toolbox for deepfake analysis but, as reported in Nature (see Nature 621, 676–679; 2023), major social-media sites are not routinely employing it. Broadening access to such tools could help to fuel uptake, and to this end Lyu’s team has developed the DeepFake-O-Meter, a centralized public repository of algorithms that can analyse video content from different angles to sniff out deepfake content. Such resources will be helpful, but the battle against AI-generated misinformation is likely to persist for years to come.

Large-fragment DNA insertion

In late 2023, US and UK regulators approved the first-ever CRISPR-based gene-editing therapy for sickle-cell disease and transfusion-dependent β-thalassaemia — a major win for genome editing as a clinical tool.

CRISPR and its derivatives use a short programmable RNA to direct a DNA-cutting enzyme such as Cas9 to a specific genomic site. They are routinely used in the lab to disable defective genes and introduce small sequence changes. The precise and programmable insertion of larger DNA sequences spanning thousands of nucleotides is difficult, but emerging solutions could allow scientists to replace crucial segments of defective genes or insert fully functional gene sequences. Le Cong, a molecular geneticist at Stanford University in California, and his colleagues are exploring single-stranded annealing proteins (SSAPs) — virus-derived molecules that mediate DNA recombination. When combined with a CRISPR–Cas system in which the DNA-slicing function of Cas9 has been disabled, these SSAPs allow precisely targeted insertion of up to 2 kilobases of DNA into the human genome.

Other methods exploit a CRISPR-based method called prime editing to introduce short ‘landing pad’ sequences that selectively recruit enzymes that in turn can precisely splice large DNA fragments into the genome. In 2022, for instance, genome engineers Omar Abudayyeh and Jonathan Gootenberg at the Massachusetts Institute of Technology in Cambridge, and their colleagues first described programmable addition through site-specific targeting elements (PASTE), a method that can precisely insert up to 36 kilobases of DNA8. PASTE is especially promising for ex vivo modification of cultured, patient-derived cells, says Cong, and the underlying prime-editing technology is already on track for clinical studies. But for in vivo modification of human cells, SSAP might offer a more compact solution: the bulkier PASTE machinery requires three separate viral vectors for delivery, which could undermine editing efficiency relative to the two-component SSAP system. That said, even relatively inefficient gene-replacement strategies could be sufficient to mitigate the effects of many genetic diseases.
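The two-step landing-pad logic behind PASTE-style insertion can be captured with simple string operations: prime editing first writes a short recognition sequence into the genome, and an integrase then splices a large cargo at exactly that site. All sequences below are invented placeholders, not real genomes, attachment sites or therapeutic genes.

```python
# Toy sketch of landing-pad insertion: write a short pad, then splice a
# large cargo at the pad site. Sequences are illustrative only.
GENOME = "ATGCCGTTAGCAAAGTTTCCGGA"
PAD    = "ATTCGC"          # hypothetical landing-pad sequence
CARGO  = "GGGTTTAAACCC"    # stands in for a multi-kilobase gene

def write_landing_pad(genome, position, pad):
    """Step 1 (prime editing): insert the short pad at a chosen position."""
    return genome[:position] + pad + genome[position:]

def integrate_cargo(genome, pad, cargo):
    """Step 2 (integrase): splice the cargo immediately after the pad."""
    site = genome.find(pad)
    if site == -1:
        raise ValueError("no landing pad found")
    cut = site + len(pad)
    return genome[:cut] + cargo + genome[cut:]

edited = integrate_cargo(write_landing_pad(GENOME, 9, PAD), PAD, CARGO)
print(edited)
```

The appeal of the real system is the same as in the sketch: the hard, error-prone step (precise targeting) only has to handle a short pad, while the bulky cargo rides in through a separate, highly specific enzymatic reaction.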

And such methods are not just relevant to human health. Researchers led by Caixia Gao at the Chinese Academy of Sciences in Beijing developed PrimeRoot, a method that uses prime editing to introduce specific target sites that enzymes can use to insert up to 20 kilobases of DNA in both rice and maize9. Gao thinks that the technique could be broadly useful for endowing crops with disease and pathogen resistance, continuing a wave of innovation in CRISPR-based plant genome engineering. “I believe that this technology can be applied in any plant species,” she says.

Brain–computer interfaces
Pat Bennett speaks more slowly than average and sometimes uses the wrong word. But given that motor neuron disease, also known as amyotrophic lateral sclerosis, had previously left her unable to express herself verbally, that is a remarkable achievement.

Bennett’s recovery comes courtesy of a sophisticated brain–computer interface (BCI) device developed by Stanford University neuroscientist Francis Willett and his colleagues at the US-based BrainGate consortium10. Willett and his colleagues implanted electrodes in Bennett’s brain to track neuronal activity and then trained deep-learning algorithms to translate those signals into speech. After a few weeks of training, Bennett was able to say as many as 62 words per minute from a vocabulary of 125,000 words — more than twice the vocabulary of the average English speaker. “It’s really truly impressive, the rates at which they’re communicating,” says bioengineer Jennifer Collinger, who develops BCI technologies at the University of Pittsburgh in Pennsylvania.
Researchers help Pat Bennett translate attempts at speech into words on a screen using a brain–computer interface.

BrainGate’s trial is just one of several studies from the past few years demonstrating how BCI technology can help people with severe neurological damage to regain lost skills and achieve greater independence. Some of that progress stems from the steady accumulation of knowledge about functional neuroanatomy in the brains of individuals with various neurological conditions, says Leigh Hochberg, a neurologist at Brown University in Providence, Rhode Island, and director of the BrainGate consortium. But that knowledge has been greatly amplified, he adds, by machine-learning-driven analytical methods that are revealing how to better place electrodes and decrypt the signals that they pick up.

Researchers are also applying AI-based language models to speed up the interpretation of what patients are trying to communicate — essentially, ‘autocomplete’ for the brain. This was a core component of Willett’s study, as well as another from a team led by neurosurgeon Edward Chang at the University of California, San Francisco. In that work, a BCI neuroprosthesis allowed a woman who was unable to speak as a result of a stroke to communicate at 78 words per minute — roughly half the average speed of English, but more than five times faster than the woman’s previous speech-assistance device. The field is seeing progress in other areas as well. In 2021, Collinger and biomedical engineer Robert Gaunt at the University of Pittsburgh implanted electrodes into the motor and somatosensory cortex of an individual who was paralysed in all four limbs to provide rapid and precise control over a robotic arm along with tactile sensory feedback. Also under way are independent clinical studies from BrainGate and researchers at UMC Utrecht in the Netherlands, as well as a trial from BCI firm Synchron in Brooklyn, New York, to test a system that allows people who are paralysed to control a computer — the first industry-sponsored trial of a BCI apparatus.
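The ‘autocomplete’ idea can be sketched as follows: the neural decoder emits noisy per-letter probabilities, and a language prior over a known vocabulary picks the most plausible word. The vocabulary, priors and probabilities below are invented toy values, not data from the BrainGate or UCSF studies, and real systems use far richer language models than a word-prior lookup.

```python
# Toy word decoder: combine noisy letter evidence with a vocabulary prior,
# mimicking the 'autocomplete for the brain' idea in miniature.
import math

VOCAB = {"hello": 0.4, "help": 0.3, "held": 0.2, "felt": 0.1}  # toy priors

def word_score(word, letter_probs, prior):
    """Log-probability of a word under noisy letter estimates plus its prior."""
    if len(word) != len(letter_probs):
        return float("-inf")
    score = math.log(prior)
    for letter, probs in zip(word, letter_probs):
        score += math.log(probs.get(letter, 1e-6))
    return score

def decode(letter_probs):
    """Pick the vocabulary word that best explains the neural evidence."""
    return max(VOCAB, key=lambda w: word_score(w, letter_probs, VOCAB[w]))

# The decoder is unsure about the last letter: 'p' vs 'd' almost tied.
noisy = [{"h": 0.9, "f": 0.1},
         {"e": 1.0},
         {"l": 1.0},
         {"p": 0.51, "d": 0.49}]
print(decode(noisy))  # 'help': letter evidence plus prior beats 'held'
```

This is why language priors raise communication rates so sharply: even when the raw neural signal barely distinguishes two letters, the vocabulary constraint usually resolves the ambiguity.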

As an intensive-care specialist, Hochberg is eager to deliver these technologies to his patients with the most severe disabilities. But as BCI capabilities evolve, he sees potential to treat more-moderate cognitive impairments as well as mental-health conditions, such as mood disorders. “Closed-loop neuromodulation systems informed by brain–computer interfaces could be of tremendous help to a lot of people,” he says.

Super-duper resolution
Stefan Hell, Eric Betzig and William Moerner were awarded the 2014 Nobel Prize in Chemistry for shattering the ‘diffraction limit’ that constrained the spatial resolution of light microscopy. The resulting level of detail — of the order of tens of nanometres — opened up a wide range of molecular-scale imaging experiments. Still, some researchers yearn for better — and they are making swift progress. “We’re really trying to close the gap from super-resolution microscopy to structural-biology techniques like cryo-electron microscopy,” says Ralf Jungmann, a nanotechnology researcher at the Max Planck Institute of Biochemistry in Planegg, Germany, referring to a method that can reconstruct protein structures with atomic-scale resolution.

Researchers led by Hell at the Max Planck Institute for Multidisciplinary Sciences in Göttingen made an initial foray into this realm in late 2022 with a method called MINSTED, which can resolve individual fluorescent labels with 2.3-ångström precision — roughly one-quarter of a nanometre — using a specialized optical microscope.

Newer methods provide comparable resolution using conventional microscopes. Jungmann and his team, for instance, described a strategy in 2023 in which individual molecules are labelled with distinct DNA strands. These molecules are then detected with dye-tagged complementary DNA strands that bind to their corresponding targets transiently but repeatedly, making it possible to discriminate individual fluorescent ‘blinking’ points that would blur into a single blob if imaged simultaneously. This resolution enhancement by sequential imaging (RESI) approach could resolve individual base pairs on a DNA strand, demonstrating ångström-scale resolution with a standard fluorescence microscope.
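A toy simulation makes the RESI principle concrete: two emitters far closer together than the per-detection localization noise would blur into one spot if imaged simultaneously, but because their DNA labels blink at different times, each can be localized separately and averaged over many detections, shrinking the uncertainty by roughly the square root of the number of detections. The positions and noise values below are illustrative, not measured RESI parameters.

```python
# Toy RESI simulation: sequential blinking lets two emitters, 2 nm apart,
# be localized separately despite 5 nm single-detection noise.
import random
import statistics

rng = random.Random(7)
TRUE_POSITIONS = [0.0, 2.0]   # two emitters, 2 nm apart (illustrative)
LOC_NOISE = 5.0               # per-detection precision (nm), >> the spacing

def localize(emitter_x, n_detections):
    """Average many noisy single-blink localizations of one emitter."""
    hits = [rng.gauss(emitter_x, LOC_NOISE) for _ in range(n_detections)]
    return statistics.mean(hits)

# Because the emitters blink at different times, each gets its own average;
# the effective precision is ~ LOC_NOISE / sqrt(n_detections).
estimates = [localize(x, 5000) for x in TRUE_POSITIONS]
print(estimates)
```

Imaged simultaneously, the two emitters would produce a single distribution of detections centred between them; the sequential labelling is what turns a sub-noise separation into a resolvable one.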

The one-step nanoscale expansion (ONE) microscopy method, developed by a team led by neuroscientists Ali Shaib and Silvio Rizzoli at University Medical Center Göttingen, Germany, doesn’t quite achieve this level of resolution. However, ONE microscopy offers an unprecedented opportunity to directly image fine structural details of individual proteins and multiprotein complexes, both in isolation and in cells.

ONE is an expansion-microscopy-based approach that involves chemically coupling proteins in the sample to a hydrogel matrix, breaking the proteins apart, and then allowing the hydrogel to expand 1,000-fold in volume. The fragments expand evenly in all directions, preserving the protein structure and enabling users to resolve features separated by a few nanometres with a standard confocal microscope. “We took antibodies, put them in the gel, labelled them after expansion, and were like, ‘Oh — we see Y shapes!’” says Rizzoli, referring to the characteristic shape of the proteins.

ONE microscopy could provide insights into conformationally dynamic biomolecules or enable visual diagnosis of protein-misfolding disorders such as Parkinson’s disease from blood samples, says Rizzoli. Jungmann is similarly enthusiastic about the potential for RESI to document reorganization of individual proteins in disease or in response to drug treatments. It might even be possible to zoom in more tightly. “Maybe it’s not the end for the spatial resolution limits,” Jungmann says. “It might get better.”
