U Missouri Researchers Use Computational Models To Study Fear

Researchers at the University of Missouri in Columbia are using computational models of the brain to study fear. The research is taking place in the Computational Neurobiology Center, which is part of the university's College of Engineering.

"Computational models make it much easier to study the brain because they can effectively integrate different types of information related to a problem into a computational framework and analyze possible neural mechanisms from a systems perspective," said Guoshi Li, an electrical and computer engineering doctoral student involved in the research. "We simulate activity and test a variety of 'what if' scenarios in a rapid and inexpensive way without having to use human subjects."

From previous research, scientists have found that fear can subside, though not disappear completely, when it is countered by a fear extinction memory. Fear extinction is a process in which a conditioned response to a fear-producing stimulus gradually diminishes over time as subjects learn to dissociate the response from the stimulus. One theory holds that fear extinction memory deletes the fear memory; another suggests that the fear memory isn't lost, but simply inhibited.
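To make the idea concrete, here is a minimal sketch of conditioning and extinction using the classic Rescorla-Wagner learning rule, in which a prediction error drives changes in associative strength. This is a standard textbook toy, not the Missouri team's biophysical model, and the learning rate and trial counts are arbitrary illustrative values.

```python
# Minimal Rescorla-Wagner sketch of fear conditioning and extinction.
# Illustrative toy only; not the Missouri team's biophysical model.

def rescorla_wagner(trials, v0=0.0, alpha=0.3):
    """Track associative strength V across trials.

    trials: outcomes per trial (1.0 = tone paired with shock, 0.0 = tone alone)
    alpha:  learning rate (arbitrary illustrative value)
    """
    v = v0
    history = []
    for outcome in trials:
        v += alpha * (outcome - v)  # prediction error drives learning
        history.append(v)
    return history

# Conditioning: 10 tone+shock pairings, then extinction: 20 tone-alone trials.
trace = rescorla_wagner([1.0] * 10 + [0.0] * 20)
print(f"after conditioning: V = {trace[9]:.2f}")   # close to 1.0 (strong fear)
print(f"after extinction:   V = {trace[-1]:.2f}")  # close to 0.0 (response diminished)
```

Running the sketch shows associative strength rising toward 1.0 during conditioning and decaying toward 0.0 during extinction. Under the erasure theory that decay reflects unlearning; under the inhibition theory it would instead reflect a competing memory masking an intact one.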

"Fear extinction memory is not well understood, and our computational model can capture the neuron response well in rats during auditory fear conditioning with a mixture of mathematics and biophysical data," said Li. "Our main contribution is that our model predicts that fear memory is only partially erased by extinction, and inhibition is necessary for a complete extinction, which is a reconciliation of the erasure and inhibition theories."

The findings may help patients with post-traumatic stress disorder (PTSD), in whom the "fear circuit" is disrupted, preventing them from recalling the fear extinction memory. The research team is targeting the inhibitory connection in the brain that makes it possible to retrieve the extinction memory. Li said he hopes the research will contribute to new drugs that help PTSD patients.

"Treatment for PTSD patients depends on which connection stores the fear extinction memory and which circuit misfires," Li said. "With our model, we can figure out what specific connections store fear/extinction memory and how such connections are disrupted in the pathology of PTSD, which may lead to the suggestions of new drugs to treat the disease."

The team has received a three-year National Institutes of Health grant for further research in fear modeling.

About the Author

Dian Schaffhauser is a former senior contributing editor for 1105 Media's education publications THE Journal, Campus Technology and Spaces4Learning.
