NASA’s Psyche Mission to a Metal Asteroid May Unlock the Mysteries of Earth’s Core
https://singularityhub.com/2023/08/28/nasas-psyche-mission-to-a-metal-asteroid-may-unlock-the-mysteries-of-earths-core/
Mon, 28 Aug 2023

French novelist Jules Verne delighted 19th-century readers with the tantalizing notion that a journey to the center of the Earth was actually plausible.

Since then, scientists have long acknowledged that Verne’s literary journey was only science fiction. The extreme temperatures of the Earth’s interior—around 10,000 degrees Fahrenheit (5,537 Celsius) at the core—and the accompanying crushing pressure, which is millions of times more than at the surface, prevent people from venturing down very far.

Still, there are a few things known about the Earth’s interior. For example, geophysicists discovered that the core consists of a solid sphere of iron and nickel that comprises 20 percent of the Earth’s radius, surrounded by a shell of molten iron and nickel that spans an additional 15 percent of Earth’s radius.

That, and the rest of our knowledge about our world’s interior, was learned indirectly—either by studying Earth’s magnetic field or the way earthquake waves bounce off different layers below the Earth’s surface.

But indirect discovery has its limitations. How can scientists find out more about our planet’s deep interior?

Planetary scientists like me think the best way to learn about inner Earth is in outer space. NASA’s robotic mission to a metal world is scheduled for liftoff on Oct. 5, 2023. That mission, the spacecraft traveling there, and the world it will explore all have the same name—Psyche. And for six years now, I’ve been part of NASA’s Psyche team.

About the Asteroid Psyche

Asteroids are small worlds, with some the size of small cities and others as large as small countries. They are the leftover building blocks from our solar system’s early and violent period, a time of planetary formation.

Although most are rocky, icy, or a combination of both, perhaps 20 percent of asteroids are worlds made of metal similar in composition to the Earth’s core. So it’s tempting to imagine that these metallic asteroids are pieces of the cores of once-existing planets, ripped apart by ancient cosmic collisions with each other. Maybe, by studying these pieces, scientists could find out directly what a planetary core is like.

Psyche is the largest known of the metallic asteroids. Discovered in 1852, Psyche has the width of Massachusetts, a squashed spherical shape reminiscent of a pincushion, and an orbit between Mars and Jupiter in the main asteroid belt. An amateur astronomer can see Psyche with a backyard telescope, but it appears only as a pinpoint of light.

About the Psyche Mission

In early 2017, NASA approved the $1 billion mission to Psyche. To do its work, there’s no need for the uncrewed spacecraft to land—instead, it will orbit the asteroid repeatedly and methodically, starting from 435 miles (700 kilometers) out and then going down to 46 miles (75 km) from the surface, and perhaps even lower.

Once it arrives in August 2029, the probe will spend 26 months mapping the asteroid’s geology, topography, and gravity; it will search for evidence of a magnetic field; and it will compare the asteroid’s composition with what scientists know, or think we know, about Earth’s core.

The central questions are these: Is Psyche really an exposed planetary core? Is the asteroid one big bedrock boulder, a rubble pile of smaller boulders, or something else entirely? Are there clues that the previous outer layers of this small world—the crust and mantle—were violently stripped away long ago? And maybe the most critical question: Can what we learn about Psyche be extrapolated to solve some of the mysteries about the Earth’s core?

NASA’s Psyche spacecraft, undergoing final tests in a clean room at a facility near Florida’s Kennedy Space Center. Image Credit: NASA/Frank Michaux

About the Spacecraft Psyche

The probe’s body is about the same size and mass as a large SUV. Solar panels, stretching a bit wider than a tennis court, power the cameras, spectrometers, and other systems.

A SpaceX Falcon Heavy rocket will launch Psyche off Earth. The rest of the way, Psyche will rely on ion propulsion—the gentle pressure of ionized xenon gas jetting out of a nozzle provides a continuous, reliable, and low-cost way to propel spacecraft into the solar system.
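To get a feel for why that gentle pressure wins out over time, consider the Tsiolkovsky rocket equation. The quick sketch below uses illustrative numbers, not actual Psyche mission figures: a higher exhaust velocity dramatically shrinks the share of a spacecraft’s mass that must be propellant.

```python
# Back-of-the-envelope sketch (illustrative numbers, not Psyche's specs):
# the Tsiolkovsky rocket equation shows why ion engines, with exhaust
# velocities roughly ten times those of chemical rockets, need far less
# propellant for the same delta-v.
import math

def propellant_fraction(delta_v_m_s: float, exhaust_velocity_m_s: float) -> float:
    """Fraction of initial mass that must be propellant for a given delta-v."""
    # Rearranged Tsiolkovsky equation: m_prop / m_0 = 1 - exp(-dv / v_e).
    return 1 - math.exp(-delta_v_m_s / exhaust_velocity_m_s)

delta_v = 5_000  # m/s — an illustrative deep-space maneuver budget

chemical = propellant_fraction(delta_v, 4_500)   # typical chemical exhaust velocity
ion = propellant_fraction(delta_v, 30_000)       # typical xenon ion/Hall thruster

print(f"Chemical rocket: {chemical:.0%} of launch mass is propellant")
print(f"Ion engine:      {ion:.0%} of launch mass is propellant")
# ~67% vs. ~15% — the catch is thrust: ion engines push gently for years,
# not minutes, which suits a long cruise like Psyche's.
```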

The journey, a slow spiral of 2.5 billion miles (4 billion kilometers) that includes a gravity-assist flyby past Mars, will take nearly six years. Throughout the cruise, the Psyche team at NASA’s Jet Propulsion Laboratory in Pasadena, California, and here at Arizona State University in Tempe, will stay in regular contact with the spacecraft. Our team will send and receive data using NASA’s Deep Space Network of giant radio antennas.

Even if we learn that Psyche is not an ancient planetary core, we’re bound to significantly add to our body of knowledge about the solar system and the way planets form. After all, Psyche is still unlike any world humans have ever visited. Maybe we can’t yet journey to the center of the Earth, but robotic avatars to places like Psyche can help unlock the mysteries hidden deep inside the planets—including our own.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image credit: NASA/JPL-Caltech/ASU

New Codes Could Accelerate the Advent of Practical Quantum Computing
https://singularityhub.com/2023/08/27/ibms-new-codes-could-accelerate-the-advent-of-practical-quantum-computing/
Sun, 27 Aug 2023

One of the biggest stumbling blocks for quantum computers is their tendency to be error-prone and the massive computational overhead required to clean up their mistakes. IBM has now made a breakthrough by dramatically reducing the number of qubits required to do so.

All computers are prone to errors, and even the computer chip in your laptop runs code designed to fix things when a bit flips accidentally. But fragile quantum states are far more vulnerable to things like environmental noise, which means correcting errors in quantum processors will require considerable resources.

Most estimates predict that creating just a single fault-tolerant qubit, or logical qubit, that can carry out useful operations will require thousands of physical qubits dedicated to error correction. Given that today’s biggest processors have just hundreds of qubits, this suggests we’re still a long way from building practical quantum computers that can solve real problems.

But now researchers at IBM say they’ve discovered a new approach that slashes the number of qubits required for error correction by a factor of 10. While the approach currently only works on quantum memory rather than computation, the technique could open the door to efficient new approaches to creating fault-tolerant devices.

“Practical error correction is far from a solved problem,” the researchers write in a blog post. “However, these new codes and other advances across the field are increasing our confidence that fault tolerant quantum computing isn’t just possible, but is possible without having to build an unreasonably large quantum computer.”

The leading approach to error correction today is known as the surface code, which involves arranging qubits in a specially configured 2D lattice and using some to encode data and others to make measurements to see if an error has occurred. The approach is effective, but it requires a large number of physical qubits to pull off—as many as 20 million for some key problems of interest, according to IBM.

The new technique, outlined in a preprint on arXiv, comes from the same family of error-correction approaches as the surface code. But while each qubit in the surface code is connected to four others, the new technique connects them to six others, which makes it possible to encode more information into the same number of physical qubits.

As a result, the researchers say they can reduce the number of qubits required by an order of magnitude. Creating 12 logical qubits using their approach would require only 288 physical qubits, compared to more than 4,000 when using the surface code.
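To put those figures in perspective, here’s a minimal sketch using only the numbers quoted above (the real codes are defined by parity-check matrices, not these ratios):

```python
# Overhead comparison built from the figures quoted in the article.
logical_qubits = 12

surface_code_physical = 4_000   # "more than 4,000" physical qubits
new_code_physical = 288         # IBM's reported requirement

for name, physical in [("surface code", surface_code_physical),
                       ("new code", new_code_physical)]:
    print(f"{name}: {physical} physical qubits, "
          f"{physical / logical_qubits:.0f} physical per logical qubit")

# Surface code: ~333 physical qubits per logical qubit; new code: 24 —
# roughly the order-of-magnitude saving the researchers describe.
```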

There are some significant caveats, though. For a start, it’s currently impossible to achieve the kind of six-way connectivity the team envisages. While the surface code operates on a single plane and can therefore be easily implemented on the kind of flat chip already found in quantum processors, the new approach requires connections to distant qubits that aren’t located on the same surface.

The researchers say this isn’t an insurmountable barrier, and IBM is already developing the kind of long-range couplers required to make these kinds of corrections. The technologies needed are certainly plausible, Jérémie Guillaud at French quantum computing startup Alice & Bob told New Scientist, and could be here in just a matter of years.

A bigger open question, though, is the fact that so far the approach only works with a small number of logical operations. This means that while it works for reading and writing to a quantum memory in a fault-tolerant way, it wouldn’t support most quantum computations.

But the IBM researchers say the techniques they’ve unveiled are just a stepping stone that points toward a rich new vein of even better error-correction approaches. If they’re right and scientists are able to find more efficient alternatives to the surface code, it could significantly accelerate the advent of practical quantum computing.

Image Credit: IBM

This Week’s Awesome Tech Stories From Around the Web (Through August 26)
https://singularityhub.com/2023/08/26/this-weeks-awesome-tech-stories-from-around-the-web-through-august-26/
Sat, 26 Aug 2023

ARTIFICIAL INTELLIGENCE

Meta Is Building a Space-Age ‘Universal Language Translator’
Alex Blake | Digital Trends
“When you think of tools infused with artificial intelligence (AI) these days, it’s natural for ChatGPT and Bing Chat to spring to mind. But Facebook owner Meta wants to change that with SeamlessM4T, an AI-powered ‘universal language translator’ that could instantly convert any language in the world into whatever output you want.”

COMPUTING

Brain Implants That Help Paralyzed People Speak Just Broke New Records
Emily Mullin | Wired
“Two new studies show that AI-powered devices can help paralyzed people communicate faster and more accurately. …’It is now possible to imagine a future where we can restore fluid conversation to someone with paralysis, enabling them to freely say whatever they want to say with an accuracy high enough to be understood reliably,’ said Frank Willett, a research scientist at Stanford University’s Neural Prosthetics Translational Laboratory, during a media briefing on Tuesday.”

SPACE

The Re-Flight of a Rutherford Engine Demonstrates Rocket Reuse Is Here to Stay
Eric Berger | Ars Technica
“Whereas SpaceX was the anomaly in 2015 when it first landed an orbital booster and then flew a first stage for the second time in 2017, the company is now not alone. Nearly every commercial development program for medium- and heavy-lift rockets in the world today has a component of reusability… With Rocket Lab, this is no longer theoretical. It is happening. And this trend, which seemed so improbable as recently as five to seven years ago, now seems irreversible.”

AUTOMATION

New Robot Searches for Solar Cell Materials 14 Times Faster
Dina Genkina | Ars Technica
“To cut down on [the manual task of making new materials], Amassian’s team built a robot, lovingly named RoboMapper. …The ability to position hundreds of tiny samples on a single chip, a task impossible with human-level dexterity, enables researchers to test all these samples simultaneously using various diagnostic tools. The researchers say this speeds up the synthesis and characterization of materials by a factor of 14 compared to manual exploration and by a factor of nine compared to other automated methods.”

ROBOTICS

This Is Apptronik’s Humanoid Robot, Apollo
Brian Heater | TechCrunch
“The ultimate efficacy of a humanoid robot is still very much an open question—but it’s one a lot of founders and backers believe in. To date there’s 1X, Figure, Sanctuary AI and—arguably—Agility. There’s also Apptronik—though the Austin-based firm is hardly a newcomer to the scene. …[This week], the company has released a series of videos featuring the robot performing a variety of different tasks, including walking, unloading trailers, palletizing, and case picking.”

SPACE

India Becomes the Fourth Country Ever to Land on the Moon
Passant Rabie | Gizmodo
“Chandrayaan-3 proved India has what it takes to land on the Moon, and ISRO now has big plans moving forward. Following the successful touchdown of the mission, space agency officials stated that they are now aiming to launch the first astronaut from India to space, as well as send a mission to Mars and Venus. Things are looking good for India’s space program as a new space race to the moon starts to take shape.”

AUTOMATION

Alphabet’s Wing Partners With Walmart for Drone Deliveries in Dallas
Emma Roth | The Verge
“In an announcement on Thursday, Walmart says the partnership will allow the retailer to deliver to an additional 60,000 homes. In the coming weeks, Wing will start delivering out of a Walmart Supercenter in Frisco, Texas, before expanding to a second nearby store by the end of this year. The company will make deliveries to homes within six miles of the stores, with deliveries arriving ‘in under 30 minutes.’”

ARTIFICIAL INTELLIGENCE

My Books Were Used to Train AI
Stephen King | The Atlantic
“I have said in one of my few forays into nonfiction (On Writing) that you can’t learn to write unless you’re a reader, and unless you read a lot. AI programmers have apparently taken this advice to heart. Because the capacity of computer memory is so large—everything I ever wrote could fit on one thumb drive, a fact that never ceases to blow my mind—these programmers can dump thousands of books into state-of-the-art digital blenders. Including, it seems, mine. The real question is whether you get a sum that’s greater than the parts, when you pour back out.”

LAW AND ETHICS

Some of the Thorniest Questions About AI Will Be Answered in Court
Ryan Tracy | The Wall Street Journal
“Congress and the White House are talking about regulating artificial intelligence, but courts might well decide some of the most economically significant questions about the booming technology. Since the late 2022 launch of ChatGPT, the viral AI-powered chatbot, a flurry of suits has targeted AI purveyors including OpenAI, Microsoft, Google, and Meta Platforms.”

Image Credit: Steven Wei / Unsplash

IBM’s Brain-Inspired Analog Chip Aims to Make AI More Sustainable
https://singularityhub.com/2023/08/25/ibms-brain-inspired-analog-chip-aims-to-make-ai-more-sustainable/
Fri, 25 Aug 2023

ChatGPT, DALL-E, Stable Diffusion, and other generative AIs have taken the world by storm. They create fabulous poetry and images. They’re seeping into every nook of our world, from marketing to writing legal briefs and drug discovery. They seem like the poster child for a man-machine mind meld success story.

But under the hood, things are looking less peachy. These systems are massive energy hogs, requiring data centers that spit out thousands of tons of carbon emissions—further stressing an already volatile climate—and suck up billions of dollars. As the neural networks become more sophisticated and more widely used, energy consumption is likely to skyrocket even more.

Plenty of ink has been spilled on generative AI’s carbon footprint. Its energy demand could be its downfall, hindering development as it further grows. Using current hardware, generative AI is “expected to stall soon if it continues to rely on standard computing hardware,” said Dr. Hechen Wang at Intel Labs.

It’s high time we build sustainable AI.

This week, a study from IBM took a practical step in that direction. They created a 14-nanometer analog chip packed with 35 million memory units. Unlike current chips, computation happens directly within those units, nixing the need to shuttle data back and forth—in turn saving energy.

Data shuttling can increase energy consumption anywhere from 3 to 10,000 times above what’s required for the actual computation, said Wang.

The chip was highly efficient when challenged with two speech recognition tasks. One, Google Speech Commands, is small but practical. Here, speed is key. The other, Librispeech, is a mammoth corpus used to train systems that transcribe speech to text, taxing the chip’s ability to process massive amounts of data.

When pitted against conventional computers, the chip performed equally as accurately but finished the job faster and with far less energy, using less than a tenth of what’s normally required for some tasks.

“These are, to our knowledge, the first demonstrations of commercially relevant accuracy levels on a commercially relevant model…with efficiency and massive parallelism” for an analog chip, the team said.

Brainy Bytes

This is hardly the first analog chip. However, it pushes the idea of neuromorphic computing into the realm of practicality—a chip that could one day power your phone, smart home, and other devices with an efficiency near that of the brain.

Um, what? Let’s back up.

Current computers are built on the Von Neumann architecture. Think of it as a house with multiple rooms. One, the central processing unit (CPU), analyzes data. Another stores memory.

For each calculation, the computer needs to shuttle data back and forth between those two rooms, which takes time and energy and decreases efficiency.

The brain, in contrast, combines both computation and memory into a studio apartment. Its mushroom-like junctions, called synapses, both form neural networks and store memories at the same location. Synapses are highly flexible, adjusting how strongly they connect with other neurons based on stored memory and new learnings—a property called “weights.” Our brains quickly adapt to an ever-changing environment by adjusting these synaptic weights.

IBM has been at the forefront of designing analog chips that mimic brain computation. A breakthrough came in 2016, when they introduced a chip based on a fascinating material usually found in rewritable CDs. The material changes its physical state and shape-shifts from a goopy soup to crystal-like structures when zapped with electricity—akin to a digital 0 and 1.

Here’s the key: the material can also exist in a hybrid state. In other words, similar to a biological synapse, the artificial one can encode a myriad of different weights—not just binary values—allowing it to accumulate multiple calculations without having to move a single bit of data.

Jekyll and Hyde

The new study built on previous work by also using phase-change materials. The basic components are “memory tiles.” Each is jam-packed with thousands of phase-change materials in a grid structure. The tiles readily communicate with each other.

Each tile is controlled by a programmable local controller, allowing the team to tweak the component—akin to a neuron—with precision. The chip further stores hundreds of commands in sequence, creating a black box of sorts that allows them to dig back in and analyze its performance.

Overall, the chip contained 35 million phase-change memory structures. The connections amounted to 45 million synapses—a far cry from the human brain, but very impressive on a 14-nanometer chip.

A 14nm analog AI chip resting in a researcher’s hand. Image Credit: Ryan Lavine for IBM

These mind-numbing numbers present a problem for initializing the AI chip: there are simply too many parameters to search through. The team tackled the problem with what amounts to an AI kindergarten, pre-programming synaptic weights before computations begin. (It’s a bit like seasoning a new cast-iron pan before cooking with it.)

They “tailored their network-training techniques with the benefits and limitations of the hardware in mind,” and then set the weights for the most optimal results, explained Wang, who was not involved in the study.

It worked out. In one initial test, the chip readily churned through 12.4 trillion operations per second for each watt of power. That energy efficiency is “tens or even hundreds of times higher than for the most powerful CPUs and GPUs,” said Wang.

The chip nailed a core computational process underlying deep neural networks with just a few classical hardware components in the memory tiles. In contrast, traditional computers need hundreds or thousands of transistors (a basic unit that performs calculations).
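For the curious, that core process is matrix-vector multiplication, and here’s a conceptual sketch of how a memory array performs it “in place.” It’s highly simplified: real phase-change devices contend with noise, drift, and limited precision.

```python
# Conceptual sketch of analog in-memory matrix-vector multiplication.
# Weights are stored as device conductances in a crossbar array; applying
# input voltages yields output currents via Ohm's and Kirchhoff's laws,
# computing the whole product in one step — no shuttling data to a CPU.
import numpy as np

rng = np.random.default_rng(0)

# A layer's weights, stored as conductances (idealized: one weight per
# device; real chips pair devices to represent signed values).
conductances = rng.normal(size=(4, 3))

# Input activations applied as voltages on the array's input lines.
voltages = rng.normal(size=3)

# Each device contributes current = conductance * voltage; currents sum
# along each output line — the matrix-vector product, read out in analog.
currents = conductances @ voltages

# The same result a digital chip computes with many transistor operations:
expected = [sum(g * v for g, v in zip(row, voltages)) for row in conductances]
assert np.allclose(currents, expected)
print(currents)
```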

Talk of the Town

The team next challenged the chip to two speech recognition tasks. Each one stressed a different facet of the chip.

The first test was speed when challenged with a relatively small database. Using the Google Speech Commands database, the task required the AI chip to spot 12 keywords in a set of roughly 65,000 clips of thousands of people speaking 30 short words (“small” is relative in the deep learning universe). When using an accepted benchmark—MLPerf—the chip performed seven times faster than in previous work.

The chip also shone when challenged with a large database, Librispeech. The corpus contains over 1,000 hours of read English speech commonly used to train AI for parsing speech and automatic speech-to-text transcription.

Overall, the team used five chips to eventually encode more than 45 million weights using data from 140 million phase-change devices. When pitted against conventional hardware, the chip was roughly 14 times more energy-efficient—processing nearly 550 samples every second per watt of energy consumption—with an error rate a bit over 9 percent.

Although impressive, analog chips are still in their infancy. They show “enormous promise for combating the sustainability problems associated with AI,” said Wang, but the path forward requires clearing a few more hurdles.

One factor is finessing the design of the memory technology itself and its surrounding components—that is, how the chip is laid out. IBM’s new chip does not yet contain all the elements needed. A next critical step is integrating everything onto a single chip while maintaining its efficacy.

On the software side, we’ll also need algorithms specifically tailored to analog chips and software that readily translates code into a language machines can understand. As these chips become increasingly commercially viable, developing dedicated applications will keep the dream of an analog chip future alive.

“It took decades to shape the computational ecosystems in which CPUs and GPUs operate so successfully,” said Wang. “And it will probably take years to establish the same sort of environment for analog AI.”

Image Credit: Ryan Lavine for IBM

This 3D-Printed House Goes Up in 2 Days and Costs the Same as a Car
https://singularityhub.com/2023/08/24/this-3d-printed-house-goes-up-in-2-days-and-costs-the-same-as-a-car/
Thu, 24 Aug 2023

3D printing is becoming more popular as a construction method, with multiple companies building entire 3D-printed neighborhoods in various parts of the world. But the technique has come under scrutiny, with critics saying it’s not nearly as cost-effective or environmentally friendly as advocates claim. A Japanese company called Serendix is hoping to be a case to the contrary; the company is 3D printing tiny homes that cost just $37,600.

Admittedly, the homes are quite small at 538 square feet; that’s about the size of a large studio apartment. But their design, called Fujitsubo (“barnacle” in Japanese) includes a bedroom, a bathroom, and an open-concept living/kitchen space.

Likely owing to the island nation’s compact geography, the Japanese tend to live in smaller spaces than Americans or Europeans; the average home size in Japan is 93 square meters (just over 1,000 square feet). In the US, meanwhile, we take up a lot more space, with our average single-family house occupying 2,273 square feet. The company says the design was created partly to cater to demand from older married couples wanting to downsize during their retirement.

The first home Serendix completed in Japan was called the Sphere, though at 107 square feet it was more a proof of concept than an actual house. Printing was completed in less than 24 hours, and the structure was up to code for both Japanese earthquake and European insulation standards. The company said they envision the Sphere having multiple purposes, including providing emergency housing or serving as a stand-alone cabin or hotel room for vacationers. Its cost to build was $25,500.

Fujitsubo is a bit different in that its walls are printed in separate sections that are then attached to its foundation with steel columns. The roof is made of panels that are cut by a computer numerical control (CNC) machine, in which pre-programmed software controls the movement of factory tools and machinery. Serendix said it took 44.5 hours to print and assemble the home.

One of the issues cited by detractors of 3D-printed construction is that the method isn’t feasible in dense urban areas, which tend to be where there’s the most need for low-cost housing; there’s not a lot of extra space or empty land available in big cities, and even if there is, it’s not efficient or cost-effective to plunk down a 3D-printed home.

Serendix gets this, and they’re aiming to stay away from building in big cities, focusing instead on small towns where there’s more land available. Given the exodus from city centers that happened during the pandemic and the increased number of people who are now working remotely, the company believes there could be a strong market for its homes in non-urban locations.

Once they receive safety approvals, Serendix plans to sell its first six Fujitsubo homes for the equivalent of $37,600—well below the average price of a home in Japan (and below the price of many cars). The company currently has five 3D printers, and it says each one can build up to 50 homes in a year. It’s aiming to acquire 12 more printers, giving it the capacity to build as many as 850 houses in a year.

“In the automotive industry 40 years ago, the price reduction of products began due to innovation of the manufacturing process using robots,” the company said in a statement. “We believe that the 3D-printed house is the beginning of complete robotization of the housing industry.”

Image Credit: Serendix

Consciousness May Rely on Brain Cells Acting Collectively, New Psychedelics Research Finds
https://singularityhub.com/2023/08/23/consciousness-may-rely-on-brain-cells-acting-collectively-new-psychedelics-research-finds/
Wed, 23 Aug 2023

Psychedelics are known for inducing altered states of consciousness in humans by fundamentally changing our normal patterns of sensory perception, thought, and emotion. Research into the therapeutic potential of psychedelics has increased significantly in the last decade.

While this research is important, I have always been more intrigued by the idea that psychedelics can be used as a tool to study the neural basis of human consciousness in laboratory animals. We ultimately share the same basic neural hardware with other mammals, and possibly some basic aspects of consciousness too. So by examining what happens in the brain when there’s a psychedelically-induced change in conscious experience, we can perhaps glean insights into what consciousness is in the first place.

We still don’t know a lot about how the networks of cells in the brain enable conscious experience. The dominating view is that consciousness somehow emerges as a collective phenomenon when the dispersed information processing of individual neurons (brain cells) is integrated as the cells interact.

But the mechanism by which this is supposed to happen remains unclear. Now our study on rats, published in Communications Biology, suggests that psychedelics radically change the way that neurons interact and behave collectively.

Our study compared two different classes of psychedelics in rats: the classic LSD type and the less-typical ketamine type (ketamine is an anesthetic in larger doses). Both classes are known to induce psychedelic experiences in humans, despite acting on different receptors in the brain.

Exploring Brain Waves

We used electrodes to simultaneously measure electrical activity from 128 separate areas of the brain of nine awake rats while they were given psychedelics. The electrodes could pick up two kinds of signals: electrical brain waves caused by the cumulative activity in thousands of neurons, and smaller transient electrical pulses, called action potentials, from individual neurons.

The classic psychedelics, such as LSD and psilocybin (the active ingredient in magic mushrooms), activate a receptor in the brain (5-HT2A) which normally binds to serotonin, a neurotransmitter that regulates mood and many other things. Ketamine, on the other hand, works by inhibiting another receptor (NMDA), which normally is activated by glutamate, the primary neurotransmitter in the brain for making neurons fire.

We speculated that, despite these differences, the two classes of psychedelics might have similar effects on the activity of brain cells. Indeed, it turned out that both drug classes induced a very similar and distinctive pattern of brain waves in multiple brain regions.

The brain waves were unusually fast, oscillating about 150 times per second. They were also surprisingly synchronized between different brain regions. Short bursts of oscillations at a similar frequency are known to occur occasionally under normal conditions in some brain regions. But in this case, they occurred for prolonged durations.

First, we assumed that a single brain structure was generating the wave and that it then spread to other locations. But the data was not consistent with that scenario. Instead, we saw that the waves went up and down almost simultaneously in all parts of the brain where we could detect them, a phenomenon called phase synchronization. Such tight phase synchronization over such long distances has, to our knowledge, never been observed before.
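For readers wondering how such synchronization is measured, a common recipe extracts each signal’s instantaneous phase with a Hilbert transform and averages the phase differences into a “phase-locking value.” The sketch below runs on synthetic data and is not the study’s actual analysis pipeline.

```python
# Sketch: phase-locking value (PLV) between two signals, on synthetic data.
# PLV = |mean(exp(i * (phase1 - phase2)))|: 1 = perfectly locked, 0 = random.
import numpy as np
from scipy.signal import hilbert

fs = 1000                      # sampling rate, Hz
t = np.arange(0, 2, 1 / fs)    # two seconds of data
rng = np.random.default_rng(1)

# Two ~150 Hz oscillations with a fixed phase offset, plus noise.
x = np.sin(2 * np.pi * 150 * t) + 0.5 * rng.normal(size=t.size)
y = np.sin(2 * np.pi * 150 * t + 0.3) + 0.5 * rng.normal(size=t.size)

# Instantaneous phase via the analytic signal (real data would first be
# band-pass filtered around the frequency of interest).
phase_x = np.angle(hilbert(x))
phase_y = np.angle(hilbert(y))

plv = np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))
print(f"PLV: {plv:.2f}")  # close to 1 for these phase-locked signals
```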

We were also able to measure action potentials from individual neurons during the psychedelic state. Action potentials are electrical pulses, no longer than a thousandth of a second, that are generated by the opening and closing of ion channels in the cell membrane. The action potentials are the primary way that neurons influence each other. Consequently, they are considered to be the main carrier of information in the brain.

However, the action potential activity caused by LSD and ketamine differed significantly. As such, they could not be directly linked to the general psychedelic state. For LSD, neurons were inhibited—meaning they fired fewer action potentials—in all parts of the brain. For ketamine, the effect depended on cell type—certain large neurons were inhibited, while a type of smaller, locally connecting neurons fired more.

Therefore, it is probably the synchronized wave phenomenon—how the neurons behave collectively—that is most strongly linked to the psychedelic state. Mechanistically, this makes some sense. It is likely that this type of increased synchrony has large effects on the integration of information across neural systems that normal perception and cognition rely on.

I think that this possible link between neuron-level system dynamics and consciousness is fascinating. It suggests that consciousness relies on a coupled collective state rather than the activity of individual neurons—it is greater than the sum of its parts.

That said, this link is still highly speculative at this point. That’s because the phenomenon has not yet been observed in human brains. Also, one should be cautious when extrapolating human experiences to other animals—it is of course impossible to know exactly what aspects of a trip we share with our rodent relatives.

But when it comes to cracking the deep mystery of consciousness, every bit of information is valuable.

Pär Halje, Associate Research Fellow of Neurophysiology, Lund University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Gerd Altmann from Pixabay

AI Can Now Design Proteins That Behave Like Biological ‘Transistors’
https://singularityhub.com/2023/08/22/ai-can-now-design-proteins-that-behave-like-biological-transistors/
Tue, 22 Aug 2023

We often think of proteins as immutable 3D sculptures.

That’s not quite right. Many proteins are transformers that twist and change their shapes depending on biological needs. One configuration may propagate damaging signals from a stroke or heart attack. Another may block the resulting molecular cascade and limit harm.

In a way, proteins act like biological transistors—on-off switches at the root of the body’s molecular “computer” determining how it reacts to external and internal forces and feedback. Scientists have long studied these shape-shifting proteins to decipher how our bodies function.

But why rely on nature alone? Can we create biological “transistors,” unknown to the biological universe, from scratch?

Enter AI. Multiple deep learning methods can already accurately predict protein structures—a breakthrough half a century in the making. Subsequent studies using increasingly powerful algorithms have hallucinated protein structures untethered by the forces of evolution.

Yet these AI-generated structures have a shortcoming: although highly intricate, most are completely static—essentially, digital protein sculptures frozen in time.

A new study in Science this month broke the mold by adding flexibility to designer proteins. The new structures aren’t contortionists without limits. However, the designer proteins can stabilize into two different forms—think a hinge in either an open or closed configuration—depending on an external biological “lock.” Each state is analogous to a computer’s “0” or “1,” which subsequently controls the cell’s output.

“Before, we could only create proteins that had one stable configuration,” said study author Dr. Florian Praetorius at the University of Washington. “Now, we can finally create proteins that move, which should open up an extraordinary range of applications.”

Lead author Dr. David Baker has ideas: “From forming nanostructures that respond to chemicals in the environment to applications in drug delivery, we’re just starting to tap into their potential.”

A Protein Marriage Made in AI

A quick bit of biology 101.

Proteins build and run our bodies. These macromolecules begin their journey from DNA. Genetic information is translated into amino acids, the building blocks of a protein—picture beads on a string. Each string is then folded into intricate 3D shapes, with some parts sticking to others. Called secondary structures, some configurations look like Twizzlers. Others weave into carpet-like sheets. These shapes further build on each other, forming highly sophisticated protein architectures.

By understanding how proteins gain their shapes, we can potentially engineer new ones from scratch, expanding the biological universe and creating new weapons against viral infections and other diseases.

Back in 2020, DeepMind’s AlphaFold and David Baker lab’s RoseTTAFold broke the structural biology internet by accurately predicting protein structures based on their amino acid sequences alone.

Since then, the AI models have predicted the shape of almost every protein known—and unknown—to science. These powerful tools are already reshaping biological research, helping scientists quickly nail down potential targets to combat antibiotic resistance, study the “housing” of our DNA, develop new vaccines or even shed light on diseases that ravage the brain, like Parkinson’s disease.

Then came a bombshell: generative AI models, such as DALL-E and ChatGPT, offered a tantalizing prospect. Rather than simply predicting protein structures, why not have AI dream up completely novel protein structures instead? From a protein that binds hormones to regulate calcium levels to artificial enzymes that catalyze bioluminescence, initial results sparked enthusiasm and the potential for AI-designed proteins seemed endless.

At the helm of these discoveries is Baker’s lab. Shortly after releasing RoseTTAFold, they further developed the algorithm to nail down functional sites on a protein—where it interacts with other proteins, drugs, or antibodies—paving the way for scientists to dream up new medications they haven’t yet imagined.

Yet one thing was missing: flexibility. A large number of proteins “code shift” in shape to change their biological message. The result could literally be life or death: a protein called Bax, for example, alters its shape into a conformation that triggers cell death. Amyloid beta, a protein involved in Alzheimer’s disease, notoriously takes a different shape as it harms brain cells.

An AI that hallucinates similar flip-flop proteins could edge us closer to understanding and recapitulating these biological conundrums—leading to new medical solutions.

Hinge, Line, and Sinker

Designing one protein at the atomic level—and hoping it works in a living cell—is hard. Designing one with two configurations is a nightmare.

As a loose analogy, think of ice crystals in a cloud that eventually form into snowflakes, each one different in structure. The AI’s job is to make proteins that can shift between two different “snowflakes” using the same amino acid “ice crystals,” with each state corresponding to an “on” or “off” switch. Additionally, the protein has to play nice inside living cells.

The team began with several rules. First, each structure should look vastly different between the two states—like a human profile standing or sitting. They could check this by measuring distances between atoms, explained the team. Second, the change needs to happen fast. This means the protein can’t completely unfurl before piecing itself back together into another shape, which takes time.
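One standard way to quantify “vastly different” is the root-mean-square deviation (RMSD) between corresponding atoms in the two states. The toy sketch below assumes the structures are already superimposed; real pipelines first align them, for example with the Kabsch algorithm.

```python
# Toy sketch: RMSD between corresponding atoms of two protein conformations.
# Assumes the structures are already superimposed; production code would
# first align them (e.g., Kabsch algorithm) before measuring.
import numpy as np

def rmsd(coords_a: np.ndarray, coords_b: np.ndarray) -> float:
    """Root-mean-square deviation between two (n_atoms, 3) coordinate sets."""
    diff = coords_a - coords_b
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

rng = np.random.default_rng(2)
state_open = rng.normal(size=(100, 3))                          # placeholder "open" hinge
state_closed = state_open + rng.normal(scale=2.0, size=(100, 3))  # perturbed "closed" state

print(f"RMSD between states: {rmsd(state_open, state_closed):.1f} Å")
# Designs whose two target states differ by a large RMSD are easier to
# tell apart — and to verify experimentally.
```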

Then there are some housekeeping rules for a functional protein: it needs to play nice with bodily fluids in both states. Finally, it has to act as a switch, changing its shape depending on inputs and outputs.

“Meeting all these properties in one protein system is challenging,” said the team.

Using a mix of AlphaFold, Rosetta, and proteinMPNN, the final design looks like a hinge. It has two rigid parts that can move relative to each other, while another piece remains folded. Normally the protein is closed. The trigger is a small peptide—a short chain of amino acids—that binds to the hinges and triggers its shape change. These so-called “effector peptides” were carefully designed for specificity, lowering their chances of grabbing onto off-target parts.

The team first added glow-in-the-dark trigger peptides to multiple hinge designs. Subsequent analysis found that the trigger easily grabbed onto the hinge. The protein’s configuration changed. As a sanity check, the shape was one previously predicted using AI analysis.

Additional studies using crystallized structures of the protein designs, either with or without the effector, further validated the results. These tests also hunted down design principles that made the hinge work, and parameters that tip one state to the other.

The takeaway? AI can now design proteins with two different states—essentially building biological transistors for synthetic biology. For now, the system only uses custom-designed effector peptides, which may limit research and clinical potential. But according to the team, the strategy can also extend to natural peptides, such as those that bind proteins involved in regulating blood sugar, regulating water in tissues, or influencing brain activity.

“Like transistors in electronic circuits, we can couple the switches to external outputs and inputs to create sensing devices and incorporate them into larger protein systems,” the team said.

Study author Dr. Philip Leung adds: “This could revolutionize biotechnology in the same way transistors transformed electronics.”

Image Credit: Ian C Haydon/ UW Institute for Protein Design

A ‘Memory Wipe’ for Stem Cells May Be the Key to Better Regenerative Therapies
https://singularityhub.com/2023/08/21/a-memory-wipe-for-stem-cells-may-be-the-key-to-better-regenerative-therapies/
Mon, 21 Aug 2023

Stem cells are special kinds of cells in our bodies that can become any other type of cell. They have huge potential for medicine, and trials are currently under way using stem cells to replace damaged cells in diseases like Parkinson’s.

One way to get stem cells is from human embryos, but this has ethical concerns and practical limitations. Another way is to turn adult cells from the skin or elsewhere into what are called “induced pluripotent stem cells” (iPS cells).

However, these cells sometimes carry a “memory” of the kind of cell they used to be, which can make them less predictable or efficient when we try to turn them into other types of cells.

In a study published in Nature, my colleagues and I have found a way to erase this memory, to make iPS cells function more like embryonic stem cells.

Great Promise for Regenerative Medicine

Mature, specialized cells like skin cells can be reprogrammed into iPS cells in the lab. These “blank slate” cells show great promise in regenerative medicine, a field focused on regrowing, repairing, or replacing damaged or diseased cells, organs, or tissues.

Scientists can make iPS cells from a patient’s own tissue, so there’s less risk the new cells will be rejected by the patient’s immune system.

To take one example, iPS cells are being tested for making insulin-producing pancreas cells to help people with diabetes. We’re not there yet, but it’s an example of what might be possible.

Research using iPS cells is a rapidly advancing field, yet many technical challenges remain. Scientists are still figuring out how to better control what cell types iPS cells become and ensure the process is safe.

One of these technical challenges is overcoming “epigenetic memory,” where the iPS cells retain traces of the cell type they once were.

Epigenetic Memory and How It Can Impair the Use of iPS Cells

To understand “epigenetic memory,” let’s first talk about epigenetics. Our DNA carries sequences of instructions known as genes. When various factors influence gene activity (turning them on or off) without changing the DNA sequence itself, this is known as epigenetics—literally meaning “above genetics.”

A cell’s epigenome is a collective term to describe all the epigenetic modifications in a cell. Each of our cells contains the same DNA, but the epigenome controls which genes are turned on or off, and this determines whether it becomes a heart cell, a kidney cell, a liver cell, or any other cell type.

You can think of DNA as a cookbook and the epigenome as a set of bookmarks. The bookmarks don’t alter the recipes, but they direct which ones are used.

Similarly, epigenetic marks guide cells to interpret the genetic code without changing it.

When we reprogram a mature cell into an iPS cell, we want to erase all its “bookmarks.” However, this doesn’t always work completely. When some bookmarks remain, this “epigenetic memory” can influence the behavior of the iPS cells.

An iPS cell made from a skin cell can retain a partial “memory” of being a skin cell, which makes it more likely to turn back into a skin-like cell and less likely to turn into other cell types. This is because some of the DNA’s epigenetic marks can tell the cell to behave like a skin cell.

This can be a hurdle for using iPS cells because it can impact the process of turning iPS cells into the types of cells you want. It might also affect the function of the cells once they’re created. If you want to use iPS cells to help repair a pancreas, but the cells have a “memory” of being skin cells, they might not function as well as true pancreatic cells.

How to Clear iPS Cell Epigenetic Memory and Improve Function

Overcoming the issue of epigenetic memory in iPS cells is a widely recognized challenge for regenerative medicine.

By studying how the epigenome transforms when we reprogram adult skin cells into iPS cells, we discovered a new way to reprogram cells that more completely erases epigenetic memory. We made this discovery by reprogramming cells using a method that imitates how the epigenome of embryo cells is naturally reset.

During the early development of an embryo, before it is implanted into the uterus, the epigenetic marks inherited from the sperm and egg cells are essentially erased. This reset allows the early embryo cells to start fresh and become any cell type as the embryo grows and develops.

By introducing a step during the reprogramming process that briefly mimics this reset process, we made iPS cells that are more like embryonic stem cells than conventional iPS cells.

More effective epigenetic memory erasure in iPS cells will enhance their medical potential. It will allow the iPS cells to behave as “blank slates” like embryonic stem cells, making them more likely to transform into any desired cell type.

If iPS cells can forget their past identities, they can more reliably become any type of cell and help create specific cells needed for therapies, like new insulin-producing cells for someone with diabetes, or neuronal cells for someone with Parkinson’s. It could also reduce the risk of unexpected behaviors or complications when iPS cells are used in medical treatments.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: NIH

US Invests $1.2 Billion in Carbon Capture Plants to Suck Tons of CO2 From the Air
https://singularityhub.com/2023/08/20/us-invests-1-2-billion-in-carbon-capture-plants-to-suck-tons-of-co2-from-the-air/
Sun, 20 Aug 2023

Pulling large amounts of carbon dioxide (CO2) out of the atmosphere is likely to be a crucial part of efforts to tackle climate change. A new $1.2 billion investment by the US government in two large-scale facilities could help jumpstart the technology.

While there is strong consensus that rapidly reducing carbon emissions will be essential if we want to avoid the worst impacts of climate change, there’s growing recognition that this isn’t happening fast enough to hit present targets. As a result, it seems increasingly likely that we’ll have to find ways to remove CO2 from the atmosphere later this century.

While various nature-based solutions exist, including reforestation and locking up carbon in soil, direct air capture (DAC) technology that pulls CO2 out of the air could be a crucial tool. The technology is in its infancy though and currently costs a huge amount of money to remove very little carbon from the atmosphere.

The US government hopes to change that with the announcement of $1.2 billion in funding to build two plants capable of removing up to a million tons of CO2 a year in Texas and Louisiana. The hope is that building facilities at a much larger scale than shown in previous demonstrations will help prove the feasibility of the technology and cut costs.

“Cutting back on our carbon emissions alone won’t reverse the growing impacts of climate change; we also need to remove the CO2 that we’ve already put in the atmosphere,” US Secretary of Energy Jennifer Granholm said in a statement announcing the investment.

The plants will be the first of four direct air capture (DAC) demonstrators due to be built over the next decade using money from last year’s bipartisan infrastructure law. The agency says each facility will eventually remove more than 250 times more CO2 than the largest existing DAC plant, which is based in Iceland.

Both will rely on massive arrays of fans to suck air over special materials that selectively remove CO2. The materials are then heated to liberate the captured CO2 in preparation for further processing and storage deep underground (though in the future it may be possible to repurpose the gas into things like cement or sustainable aviation fuels).

The Louisiana project is a collaboration between nonprofit technology company Battelle and DAC technology providers Climeworks Corporation and Heirloom Carbon Technologies, while the Texas plant will be built by Occidental Petroleum using technology from Carbon Engineering.

The announcement has drawn mixed reactions. Some experts have praised the investment as crucial for kick-starting commercialization of an important climate technology, but others have suggested the money could be better spent on other carbon reduction efforts.

It can cost more than $1,000 to remove each ton of CO2 using current DAC technology. It also requires large amounts of electricity to run fans and heat the CO2-absorbing materials, which diverts renewable power that could otherwise be displacing energy produced using fossil fuels.
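Part of that energy appetite is fundamental physics: even the thermodynamic minimum work needed to separate CO2 at its tiny atmospheric concentration is substantial, and real plants use several times the minimum. A back-of-the-envelope sketch, assuming an idealized dilute-gas approximation:

```python
# Back-of-the-envelope: thermodynamic minimum work to extract CO2 from air.
# For a dilute gas, w_min ≈ R * T * ln(1/x) per mole captured (ideal case);
# real DAC plants use several times this once fans and sorbent heating count.
import math

R = 8.314          # J/(mol*K), gas constant
T = 298.0          # K, ambient temperature
x_co2 = 420e-6     # ~420 ppm CO2 in today's atmosphere
M_co2 = 0.044      # kg/mol, molar mass of CO2

w_min_per_mol = R * T * math.log(1 / x_co2)            # J per mole captured
w_min_per_ton = w_min_per_mol / M_co2 * 1000 / 1e9     # GJ per metric ton

print(f"Minimum work: {w_min_per_mol / 1000:.1f} kJ/mol "
      f"≈ {w_min_per_ton:.2f} GJ per ton of CO2")
# ≈ 19 kJ/mol, or ~0.44 GJ/ton — a hard physical floor that no amount of
# engineering improvement can beat.
```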

Proponents have made rosy predictions about how quickly these costs and energy requirements could come down. But Robert Howarth, a biogeochemist at Cornell University, told Science that the low concentration of CO2 in the air means the physics of removing it is fundamentally challenging and doubts it will see the same rapid improvements as other climate technologies like solar panels.

Another concern is that the promise of the technology could act as an excuse for fossil fuel companies to continue extraction for decades to come, Jonathan Foley, executive director of climate group Project Drawdown, told the Associated Press. “What worries me and a lot of other climate scientists is that it potentially creates a fig leaf for the fossil fuel industry,” he said.

Occidental, which will operate the Texas plant, has been quite explicit on this front. Occidental CEO Vicki Hollub told the Wall Street Journal earlier this year that it plans to build 135 DAC plants to help it reach net-zero emissions by 2050 while still investing heavily in oil extraction.

Nonetheless, others say that the scale of the climate challenge means that DAC is going to be a crucial tool and work needs to start now if it is to be ready by the time we need it. “In order to have direct air capture ready at the scale we need it by 2050, we need to invest in it today,” climate researcher Claire Nelson, from Columbia University, told the Associated Press.

The US is also not the only government focusing on this area. The United Kingdom recently announced £20 billion in funding over the next two decades for carbon capture and storage, which focuses on removing CO2 from industrial emissions, though the funding could also go toward DAC. The European Union has also recently announced plans to produce a carbon capture strategy with the hope of storing 50 million tons of CO2 by 2030.

While it’s still too early to say how much of an impact the technology could have on the climate challenge, it seems likely we will find out soon.

Image Credit: Climeworks

This Week’s Awesome Tech Stories From Around the Web (Through August 19)
https://singularityhub.com/2023/08/19/this-weeks-awesome-tech-stories-from-around-the-web-through-august-19/
Sat, 19 Aug 2023

COMPUTING

Scientists Recreate Pink Floyd Song by Reading Brain Signals of Listeners
Hana Kiros | The New York Times
“Scientists have trained a computer to analyze the brain activity of someone listening to music and, based only on those neuronal patterns, recreate the song. The research, published on Tuesday, produced a recognizable, if muffled version of Pink Floyd’s 1979 song, ‘Another Brick in the Wall (Part 1).’ …Now, ‘you can actually listen to the brain and restore the music that person heard,’ said Gerwin Schalk, a neuroscientist who directs a research lab in Shanghai and collected data for this study.”

ARTIFICIAL INTELLIGENCE

Meta’s AI Agents Learn to Move by Copying Toddlers
Eliza Strickland | IEEE Spectrum
“In a simulated environment, a disembodied skeletal arm powered by artificial intelligence lifted a small toy elephant and rotated it in its hand. It used a combination of 39 muscles acting through 29 joints to experiment with the object, exploring its properties as a toddler might. Then it tried its luck with a tube of toothpaste, a stapler, and an alarm clock. …The project applies machine learning to biomechanical control problems, with the aim of demonstrating human-level dexterity and agility.”

BIOTECH

Scientists Bioengineer Plants to Have an Animal-Like Immune System
Peter Rogers | Big Think
“Plants lack an adaptive immune system—a powerful system capable of detecting practically any foreign molecule—and instead rely on a more general immune system. Unfortunately, pathogens can rapidly evolve new ways to avoid detection, resulting in colossal crop loss. Using a rice plant as a model, scientists have bioengineered a hybrid molecule—by fusing components from an animal’s adaptive immune system with those of a plant’s innate immune system—that protects it from a pathogen.”

ENERGY

The Clean Energy Future Is Arriving Faster Than You Think
David Gelles, Brad Plumer, Jim Tankersly, and Jack Ewing | The New York Times
“The cost of generating electricity from the sun and wind is falling fast and in many areas is now cheaper than gas, oil or coal. Private investment is flooding into companies that are jockeying for advantage in emerging green industries. ‘We look at energy data on a daily basis, and it’s astonishing what’s happening,’ said Fatih Birol, the executive director of the International Energy Agency. ‘Clean energy is moving faster than many people think, and it’s become turbocharged lately.’”

INNOVATION

AI Mania Triggers Dot-Com Bubble Flashbacks
Eric Wallerstein | The Wall Street Journal
“The nascency of AI programs such as ChatGPT means it is likely too early to determine whether Nvidia can raise revenue in line with the eye-watering valuation investors have slapped on its shares. If the company’s growth isn’t enough to reflect its price, the stock could crater. A basket of 43 high-multiple internet stocks—those worth at least $5 billion that traded at 25 times their revenue at the turn of the century—crashed 80% over the next two years, according to Sparkline. The companies weren’t duds, either.”

FUTURE

Gartner Hype Cycle Places Generative AI on the ‘Peak of Inflated Expectations’
Sharon Goldman | VentureBeat
“‘Generative AI is almost positioned as human-level intelligence, with a lot of people equating it to AGI,’ [Gartner analyst Arun Chandrasekaran] said, adding that others confuse generative AI with other AI techniques when they should be using predictive AI or causal AI instead, for example. But the main reason AI has hit the peak of the hype cycle, said Chandrasekaran, is the sheer number of products claiming to have generative AI baked into them. ‘It’s just enormous,’ he said.”

AUTOMATION

A Race for Autopilot Dominance Is Giving China the Edge in Autonomous Driving
Zeyi Yang | MIT Technology Review
“In just the past six months, nearly a dozen Chinese car companies have announced ambitious plans to roll out their NOA [Navigation on Autopilot] products to multiple cities across the country. While some of the services remain inaccessible to the public now, Sundin tells MIT Technology Review ‘the watershed could be next year.’”

FUTURE

A Letter Prompted Talk of AI Doomsday. Many Who Signed Weren’t Actually AI Doomers
Will Knight | Wired
“Two enterprising students at MIT, Isabella Struckman and Sofie Kupiec, reached out to the first hundred signatories of the [Future of Life Institute] letter calling for a pause on AI development to learn more about their motivations and concerns. The duo’s write-up of their findings reveals a broad array of perspectives among those who put their name to the document. Despite the letter’s public reception, relatively few were actually worried about AI posing a looming threat to humanity itself.”

COMPUTING

Is Quantum Computing Hype or Almost Here?
Adam Frank | Big Think
“Last spring, I attended a conference where a leading expert in quantum computing gave an overview talk about the state of her field. Afterward, over coffee, I asked her how long before we would have working, practical quantum computers. She looked at me gravely and said, ‘Not for a very long time.’ Her quick response was remarkable given what we are told about progress in the field. From breathless media accounts, many people assume that quantum computing machines are just around the corner. It turns out that is not the case at all.”

SPACE

JWST Spots Giant Black Holes All Over the Early Universe
Charlie Wood | Quanta
“The most straightforward explanation for the tornado-hearted galaxies [discovered by JWST] is that large black holes weighing millions of suns are whipping the gas clouds into a frenzy. That finding is both expected and perplexing. It is expected because JWST was built, in part, to find the ancient objects. …Yet the observations are also perplexing because few astronomers expected JWST to find so many young, hungry black holes—and surveys are turning them up by the dozen.”

Image Credit: Vimal S / Unsplash
