NERSC Simulations Shed Light on Origins of Human Brain Recordings

July 12, 2022 — Where does electrical activity recorded on the surface of the brain come from, and what exactly determines which electrical signals get recorded? Using simulations run at the National Energy Research Scientific Computing Center (NERSC), a team of researchers at Lawrence Berkeley National Laboratory has zeroed in on answers to these questions, pinpointing the origin of cortical surface electrical signals and explaining why they originate where they do. Their results were published in The Journal of Neuroscience in May.

Neurosurgeons have long used electrocorticography (ECoG) – electrode sensors placed directly on the surface of the brain during neurosurgery, which record a particular type of electrical signal – to localize where seizures originate and to study how the human brain works. ECoG has yielded insights into how brain dynamics produce perception, behavior, and cognition in humans and shows great promise for brain-machine interfaces. However, the sources of the recorded signals, and how those sources are distributed, have remained a mystery.

Going Deep on Surface-Recorded Signals

In a seven-year process – including developing new ECoG devices, performing brain surgery on rats to record their neurological signals, developing new machine-learning algorithms to process the data, and running a full-scale, biophysically accurate simulation at NERSC – Berkeley Lab Computational Biosciences Group Lead Kris Bouchard and his team found answers to many of these questions, establishing a new understanding of precisely which neurons generate the recorded signals and paving the way for further research and clinical work. Vyassa L. Baratham, Maximilian E. Dougherty, and John Hermiz of Berkeley Lab, Michel M. Maharbiz of UC Berkeley, and Peter Ledochowitsch of Canary, LLC are co-authors of the paper.

Using ECoG during neurosurgery, “[researchers] record signals, but they don’t know what neurons in the brain are generating those signals,” said Bouchard. “ECoG is an aggregate, mesoscale signal, and the lack of a deeper, more precise understanding of what is generating that signal hinders the ability to use it both for clinical applications and as a basic neuroscience tool.”

To isolate the source of the signals, the team used a model to simulate a single cortical column of the brain – a recurring organizational module of the cortex composed of tens of thousands of densely interconnected neurons, arranged in five to six layers of different neuron types. According to the new findings, neurons in cortical layers V and VI – those located deepest in the brain – produce about 85% of the electrical signals picked up by ECoG. This is counterintuitive: the largest contributions might be expected to come from the layers closest to the ECoG sensors on the brain's surface. However, further investigation showed that distance is only a minor factor in signal strength; rather, the deepest layers contain the most neurons, and those neurons are the most likely to fire simultaneously, which is what produces the strong signals coming from deep in the brain.
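
To see why neuron count and synchrony can outweigh proximity, consider a back-of-the-envelope sketch (this is an illustration of the scaling argument, not the paper's model): N sources firing together sum coherently, so their combined amplitude grows roughly linearly with N, while uncorrelated sources grow only as the square root of N, which easily overcomes 1/r distance attenuation. The layer depths, neuron counts, and synchrony values below are made-up placeholders.

```python
# Toy scaling sketch (not the paper's model): why deep layers can dominate
# a surface recording. Synchronous sources sum coherently (~N), while
# asynchronous sources add like ~sqrt(N); 1/r attenuation is a small effect.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 0.5, 1 / 1000)               # 0.5 s at 1 kHz

# layer: (depth from surface electrode in mm, neuron count, synchrony 0..1)
# all values are illustrative placeholders, not measurements
layers = {
    "II/III": (0.4, 1_000, 0.1),
    "V":      (1.2, 3_000, 0.5),
    "VI":     (1.6, 4_000, 0.5),
}

common = rng.standard_normal(t.size)           # shared (synchronous) drive
for name, (depth, n, sync) in layers.items():
    private = rng.standard_normal((n, t.size))           # per-neuron noise
    sources = np.sqrt(sync) * common + np.sqrt(1 - sync) * private
    surface = sources.sum(axis=0) / depth                # ~1/r attenuation
    print(f"layer {name:6s}: surface-signal RMS ~ {surface.std():7.0f}")
```

Running this, the deeper "layers" produce a surface signal several times stronger than the superficial one, even though they sit three to four times farther from the electrode.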

Bouchard and his team used the Cori supercomputer at NERSC and a software package known as NEURON to simulate the differential equations that describe individual neurons, connecting the neurons using cortical column details derived from the Human Brain Project and the Blue Brain Project. Ultimately, the project simulated 30 million interconnected neuronal segments of a single cortical column and showed how signals flowed through the tissue over the course of 60 seconds at high resolution. These simulations used up to 2,646 nodes (180,000 compute cores) concurrently on Cori – the first time this type of simulation has been used to understand ECoG signals.
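
For readers unfamiliar with NEURON, it numerically integrates the Hodgkin-Huxley-style cable equations for each neuronal segment and exposes a Python interface. The stand-alone toy below (a single compartment, nothing like the team's 30-million-segment column) sketches only the basic workflow:

```python
# Minimal NEURON sketch: one Hodgkin-Huxley compartment driven by a
# current pulse. Illustrative only; the paper's simulations coupled
# ~30 million such segments into a full cortical column.
from neuron import h
h.load_file("stdrun.hoc")            # load the standard run system

soma = h.Section(name="soma")
soma.L = soma.diam = 20              # geometry in microns
soma.insert("hh")                    # Hodgkin-Huxley Na/K/leak channels

stim = h.IClamp(soma(0.5))           # current clamp at mid-section
stim.delay, stim.dur, stim.amp = 5, 50, 0.2   # ms, ms, nA

v = h.Vector().record(soma(0.5)._ref_v)       # membrane potential trace
t = h.Vector().record(h._ref_t)               # time stamps

h.finitialize(-65)                   # initialize Vm to -65 mV
h.continuerun(100)                   # integrate the ODEs for 100 ms

print(f"simulated {len(t)} steps; peak Vm = {v.max():.1f} mV")
```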

This improved knowledge of the fundamental processes that generate recorded brain signals offers researchers new tools for understanding brain processing, connecting animal studies with human physiology, and developing more effective therapeutics.

“This paper advances fundamental knowledge of the biophysical origins of a recorded signal used to understand how the human brain works and to treat human illness,” said Bouchard. “With this knowledge, we can get more precise information from the human brain as well as from animal models, and more directly link studies in animal models with studies in the human brain. Because we now know which neurons are being recorded by ECoG, we can study those very specific neurons in animal models and make predictions about what we would expect to see in the human case with this less resolved recording technology.”

And it’s just the beginning. Upcoming work will connect cortical columns with their neighboring columns, offering a more holistic and accurate view of how larger volumes of brain tissue generate the recorded signals; a separate project is training deep neural networks to infer the parameters of single neurons from indirect measurements.

“One thing we’re particularly interested in doing is to take the simulation that we currently have and replicate it – replicate that column, and connect it to columns around it, to get a broader perspective on how more brain tissue is generating signals that are recorded at the surface. We hypothesize that different frequency ranges in the recorded ECoG signal are biomarkers of activity in different layers,” said Bouchard. “Because neurons in different layers perform different computations, having access to signals that reflect those computations from the human brain would be a game-changer.”
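
Testing that hypothesis amounts to decomposing a recorded trace into canonical frequency bands and tracking each band's power. Below is a minimal sketch of such a decomposition using SciPy; the input signal is a synthetic stand-in, and the band edges are conventional choices rather than values from the paper.

```python
# Sketch of a band-power decomposition of an ECoG trace, the kind of
# analysis a layer-specific frequency biomarker would rely on.
# Synthetic signal and conventional band edges; not the paper's values.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 1000                                     # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
ecog = rng.standard_normal(t.size)            # stand-in for a recorded trace

bands = {"theta": (4, 8), "beta": (13, 30),
         "gamma": (30, 70), "high gamma": (70, 150)}

for name, (lo, hi) in bands.items():
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, ecog)         # zero-phase band-pass filter
    envelope = np.abs(hilbert(filtered))      # instantaneous amplitude
    print(f"{name:10s}: mean band power = {np.mean(envelope**2):.4f}")
```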

The paper is also the fruition, seven years in the making, of a series of partnerships investing in brain science at Berkeley Lab. Motivated by the US BRAIN Initiative, the project began in 2015 as a cross-institutional collaboration between Berkeley Lab, the University of California San Francisco, and UC Berkeley, born out of the Laboratory Directed Research and Development program; it represents a collaboration between the Computing Sciences and Biosciences Areas and the Computational Biosciences Group within Berkeley Lab’s Scientific Data division. The computations were supported by an allocation of computer time from the NERSC Director’s Discretionary Reserve.

Bouchard says this type of collaboration – cutting-edge experimental science backed up by high-powered computing and a broad range of other resources and disciplines – is the beauty of working in the national lab system.

“Things take time to come to fruition when they involve brain surgeries in live animals, and this is an example of the fruits that have been born of those early investments by the Lab,” said Bouchard. “This is a shining example of the kind of real cross-disciplinary science that can be done at a national lab because it combines new experimentation coupled with the resources that are unique to a national lab, like high-powered computing. We’re using DOE high-powered computing resources to address problems in the health space that are simply intractable without these unique facilities.”

About NERSC and Berkeley Lab

The National Energy Research Scientific Computing Center (NERSC) is a U.S. Department of Energy Office of Science User Facility that serves as the primary high-performance computing center for scientific research sponsored by the Office of Science. Located at Lawrence Berkeley National Laboratory, the NERSC Center serves more than 7,000 scientists at national laboratories and universities researching a wide range of problems in combustion, climate modeling, fusion energy, materials science, physics, chemistry, computational biology, and other disciplines. Berkeley Lab is a DOE national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California for the U.S. Department of Energy. Learn more about computing sciences at Berkeley Lab.
