Neuroscience – Artifex.News
https://artifex.news

After 180 years, clues reveal how general anaesthesia works in the brain
https://artifex.news/article68206901-ece/ | Thu, 23 May 2024


How anaesthetic drugs work in the brain has largely remained a mystery since general anaesthesia was introduced into medicine over 180 years ago.
| Photo Credit: The Hindu

Over 350 million surgeries are performed globally each year. For most of us, it’s likely at some point in our lives we’ll have to undergo a procedure that needs general anaesthesia.

Even though it is one of the safest medical practices, we still don’t have a complete, thorough understanding of precisely how anaesthetic drugs work in the brain.

In fact, it has largely remained a mystery since general anaesthesia was introduced into medicine over 180 years ago.

Our study published in The Journal of Neuroscience today provides new clues on the intricacies of the process. General anaesthetic drugs seem to only affect specific parts of the brain responsible for keeping us alert and awake.

Brain cells striking a balance

In a study using fruit flies, we found a potential way that allows anaesthetic drugs to interact with specific types of neurons (brain cells), and it’s all to do with proteins. Your brain has around 86 billion neurons and not all of them are the same – it’s these differences that allow general anaesthesia to be effective.

To be clear, we’re not completely in the dark on how anaesthetic drugs affect us. We know why general anaesthetics are able to make us lose consciousness so quickly, thanks to a landmark discovery made in 1994.

But to better understand the fine details, we first have to look to the minute differences between the cells in our brains.

Broadly speaking, there are two main categories of neurons in the brain.

The first are what we call “excitatory” neurons, generally responsible for keeping us alert and awake. The second are “inhibitory” neurons – their job is to regulate and control the excitatory ones.


In our day-to-day lives, excitatory and inhibitory neurons are constantly working and balancing one another.

When we fall asleep, there are inhibitory neurons in the brain that “silence” the excitatory ones keeping us awake. This happens gradually over time, which is why you may feel progressively more tired through the day.

General anaesthetics speed up this process by directly silencing these excitatory neurons without any action from the inhibitory ones. This is why your anaesthetist will tell you that they’ll “put you to sleep” for the procedure: it’s essentially the same process.
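The push and pull between the two populations, and the drug's selective effect on the excitatory side, can be caricatured in a toy rate model. This is purely an illustration of the silencing idea (the equations and parameters are invented, not the study's model):

```python
# Toy excitatory-inhibitory rate model (invented parameters, illustration only).
# E = excitatory activity, I = inhibitory activity. The "anaesthetic" factor
# scales down the excitatory population's drive, mimicking selective silencing.

def simulate(anaesthetic=0.0, steps=400, dt=0.1):
    E, I = 0.5, 0.5
    for _ in range(steps):
        drive = (1.0 - anaesthetic) * (1.2 * E - 1.0 * I + 0.5)
        dE = (-E + max(0.0, drive)) * dt   # excitation, damped by the drug
        dI = (-I + max(0.0, E)) * dt       # inhibition simply tracks excitation
        E, I = E + dE, I + dI
    return E

awake = simulate(0.0)          # settles near 0.625 in this toy model
anaesthetised = simulate(0.8)  # excitatory activity collapses towards ~0.1
```

Even this cartoon reproduces the qualitative claim: dialling up the drug term suppresses excitatory activity without any extra work from the inhibitory population.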

A special kind of sleep

While we know why anaesthetics put us to sleep, the question then becomes: why do we stay asleep during surgery? If you went to bed tonight, fell asleep and somebody tried to do surgery on you, you'd wake up with quite a shock.

To date, there is no strong consensus in the field as to why general anaesthesia causes people to remain unconscious during surgery.

Over the last couple of decades, researchers have proposed several potential explanations, but they all seem to point to one root cause. Neurons stop talking to each other when exposed to general anaesthetics.

While the idea of “cells talking to each other” may sound a little strange, it’s a fundamental concept in neuroscience. Without this communication, our brains wouldn’t be able to function at all. And it allows the brain to know what’s happening throughout the body.

What did we discover?

Our new study shows that general anaesthetics appear to stop excitatory neurons from communicating, but not inhibitory ones. This concept isn’t new, but we found some compelling evidence as to why only excitatory neurons are affected.

For neurons to communicate, proteins have to get involved. One of the jobs these proteins have is to get neurons to release molecules called neurotransmitters. These chemical messengers are what gets signals across from one neuron to another: dopamine, adrenaline and serotonin are all neurotransmitters, for example.

We found that general anaesthetics impair the ability of these proteins to release neurotransmitters, but only in excitatory neurons. To test this, we used Drosophila melanogaster fruit flies and super resolution microscopy to directly see what effects a general anaesthetic was having on these proteins at a molecular scale.

Part of what makes excitatory and inhibitory neurons different from each other is that they express different types of the same protein. This is kind of like having two cars of the same make and model, but one is green and has a sports package, while the other is just standard and red. They both do the same thing, but one’s just a little bit different.

Neurotransmitter release is a complex process involving lots of different proteins. If one piece of the puzzle isn’t exactly right, then general anaesthetics won’t be able to do their job.

As a next research step, we will need to figure out which piece of the puzzle is different, to understand why general anaesthetics only stop excitatory communication.

Ultimately, our results hint that the drugs used in general anaesthetics cause massive global inhibition in the brain. By silencing excitability in two ways, these drugs put us to sleep and keep us that way.

This article is republished from The Conversation under a Creative Commons license. Read the original article.



How neuroscience reshapes marketing strategies in India
https://artifex.news/article68012885-ece/ | Mon, 01 Apr 2024


Elon Musk’s N1 implant, introduced to let users operate devices simply by intending the action in their brain, has jolted many into realising how far seemingly exotic neuroscience has been put to practical, commercial use. While the implant may be the outlier in neuroscience, what’s common and par for the course today is mapping the brain to understand and predict human responses with data and real insight. This is being used in India to solve business problems, from why life insurance buyers typically stop paying premiums after the first two years to whether an online ad can be made to ensure the consumer hits the “buy” button.

Neuroscientific techniques provide a scientific or objective understanding of the brain-behaviour relationship, says Tanusree Dutta, faculty at IIM Ranchi. “Advertisements, product design, aesthetics, store layout, use of music, colour to attract attention, nudges and so on can all be tested with the use of neuroscientific tools to ensure their effectiveness before being launched,” she adds.

Anil Pillai, CEO of Tarragni Consulting that specialises in neuroscience, says that questionnaire-based surveys have limitations since the responses are filtered and affected by cognitive biases. Neuromarketing says impressions and therefore decisions are made at the emotional, instinctive and unconscious levels of the human mind.

The Implicit Association Test would be a simple demonstration of plumbing the unconscious mind for deeply held beliefs and biases that may be filtered out by participants in a questionnaire-based survey. The rapid-fire type tests give little time for considered responses that can otherwise filter out biases.

Neuroscience-based market research can give reliable hard data, says Mr. Pillai. Instead of questionnaires, neuroscience employs a range of instruments to directly get information on how the brain is being impacted and what decisions it will take.

Neuroscience had a breakthrough more than 15 years ago in the U.S., when functional magnetic resonance imaging (fMRI) showed that ads evoking the 9/11 attacks triggered fear among voters, but that the brain activity differed between Republican and Democratic voters. Neuromarketing experts say that opinion polls in India can be more accurate and probe voter minds better in today’s highly polarised, ideological politics by using the Facial Action Coding System (FACS). fMRI would be prohibitively expensive in India, says Mr. Pillai.

Enabling devices

An enabler of neuroscience in India and across the world is the rapid strides made in bio-instruments, making some of them cheaper and easier to use. Today, wearable watches can deliver a wealth of health information. The eyeball tracker, the classic neuroscience tool, is available on Amazon today, says Puneet Garg, co-founder, Story Prediction.

Typical neuromarketing tools are adaptations of instruments originally intended for medical diagnostics. They can be broadly divided into those that measure the electrical impulses of the brain and those that generate heat maps through other means. The former set includes the electroencephalogram (EEG), quantitative electroencephalography (qEEG) and so on.

An eye-tracking device helps measure attention, attention span, and shifts in attention. What catches the eye gets processed further in the mind. Eye trackers generate heat maps based on where the eyeballs are focusing. Heat maps for webpages, for instance, are otherwise generated from mouse movements. But mouse movements are also used for scrolling, and people don't always click where their attention lies, so such heat maps can be inaccurate. With a timestamped jewellery video ad, eye trackers can tell precisely where the interest is going: the product, the model, the discount, or the purchase button. With this feedback, the vendor can tweak the ad to draw more consumers towards hitting the purchase button.

A thermal imaging camera helps capture temperature changes as a person interacts with a situation or stimulus. EEGs were originally intended to measure health parameters, such as detecting brain tumours and checking whether the medicines treating them are working. A wearable EEG senses 21 points on the head associated with responses such as pleasure, fear and pain. It measures brain waves, typically beta waves, while filtering out alpha, gamma and others.

The reptilian brain is the seat of pleasure, fear and other emotions, and arousal here can be tracked by the EEG. If the EEG detects that the ad a person is watching has touched their pleasure point, neuromarketers conclude that the ad has impacted the subconscious mind favourably towards the product. Neuroscience tells us that such impacts influence the decision to buy a product.

Skin conductance measurement devices originally used in myography applications in physical therapy and sports training are applied in marketing to detect emotional arousal by gauging skin secretions.

Skin conductance devices are probably the least expensive but also the least efficient. Eyeball trackers are more efficient whereas EEGs can have efficiencies of up to 75%. The more sophisticated an instrument is, the more expensive it is. Experts can come up with optimum choices and sample sizes so that the confidence level of the results is above 95%. Sometimes a combination of devices is used.
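The "95% confidence" target translates into concrete sample sizes through standard survey statistics. A rough illustration using the usual normal-approximation formula for a proportion (this is textbook sampling maths, not any consultancy's proprietary method):

```python
import math

def sample_size(margin=0.05, p=0.5, z=1.96):
    """Participants needed to estimate a proportion p to within +/- margin
    at 95% confidence (z = 1.96); p = 0.5 is the worst case."""
    return math.ceil(z * z * p * (1 - p) / (margin * margin))

n_tight = sample_size(margin=0.05)   # 385 participants for +/- 5 points
n_loose = sample_size(margin=0.10)   # 97 participants for +/- 10 points
```

In practice the instrument's own error rate (the "efficiency" figures quoted above) would push the required sample higher still, which is where expert judgment on device combinations comes in.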

The neuroscience scene in India features progressive digital companies including multinationals that use these tools for their business decisions, market research consultants who specialise in the subject, and institutions such as the IITs and IIMs that provide research support. It’s still a “rarefied” world featuring forward thinking businesses but with a bright future, says Mr. Pillai.

While neuromarketing may push the boundaries, cost is an issue. Devi Prasanna, AVP of digital marketing at Loan Tap, says big companies that are large consumers of TV spots use neuromarketing in advertising. For others, there is a range of tools that offer similar or higher returns and are cheaper too. In the digital space, for instance, insights on ad effectiveness can be tracked by tools such as YouTube's brand lift surveys. And while neuromarketing is a predictive model, there are now ads on connected TVs that place QR codes with UTM parameters to track who took an action, he adds.

The immediate application of neuroscience in India was in advertising and marketing, although the problem there was that the application came after the fact and provided feedback only for the future, says Mr. Garg. His company is developing an AI-based product that uses a large language model (LLM) to predict whether an ad, or even a film, can be a hit by assessing the script for its power and potential to sustain emotional engagement with the viewer.

Mr. Pillai does acknowledge the cost factor. But he adds that the application of neuroscience is far wider than just advertising and marketing. It can help to solve tough business problems that require hard, highly reliable data and where the returns are substantial.

Indian consumer behaviour

While neuromarketing is several decades old in the West, in India the activity has picked up over the last ten years, says Ms. Dutta. And in this time, neuroscience has revealed many facets of Indian consumer behaviour at a visceral level.

A study by the consultancy Final Mile, which specialises in behavioural science, showed that most fatalities among trespassers crossing railway tracks in Mumbai were of young men, not older people or women. Further, the fatalities were concentrated between stations rather than at them, and happened mostly during the day. The study concluded that this was a case of male bravado, and that honking by train drivers didn't help. Moreover, the human mind typically underestimates the speed of large approaching objects by about 40%, so the trespassers underestimated the danger. The solution that Final Mile implemented with success included posting photographs of the bodies of actual men who had died trespassing, to push trespassers' fear buttons. The second part of the solution changed the honking by train drivers from one long blast to two staccato blasts, since the brain's awareness is known to be heightened during the silence between two notes. The third part was to paint the sleepers of the tracks yellow, so that they would disappear quickly under an incoming train and the brain would rapidly correct its error in gauging the train's speed.
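The 40% speed underestimate translates directly into a dangerous time-to-arrival error, as a quick back-of-the-envelope calculation shows (the speed and distance below are invented for illustration):

```python
# Illustrative arithmetic for the ~40% underestimate of an approaching train's
# speed. All numbers are hypothetical.
actual_speed = 60.0                           # km/h
perceived_speed = actual_speed * (1 - 0.40)   # brain perceives only 36 km/h

distance_km = 0.2                             # gap between trespasser and train
actual_time = distance_km / actual_speed * 3600        # true seconds to arrival
perceived_time = distance_km / perceived_speed * 3600  # seconds the brain expects
illusory_margin = perceived_time - actual_time         # extra seconds that don't exist
```

Under these assumed numbers the trespasser believes they have 20 seconds when they really have 12, an illusory 8-second margin that can be fatal.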

Ms. Dutta talks about how Indian consumers respond more to typically Indian themes in ads. Neuroscience has shown that an ad that shows the protagonist achieving something through jugaad resonates in India, for instance, she adds.

Mr. Pillai cites a business problem that his firm helped solve for a life insurance provider. It is received wisdom that the Indian market is price sensitive, so the cheapest product will succeed if it's good enough; the average Indian consumer should then be a cold computer driven by money alone. But Mr. Pillai says neuroscience surveys have shown that "friction" is often the driving factor in India.

Living in India is marked by procedures and systems that take much effort to understand and act upon. And at the end of it all, the intended outcome is not guaranteed.

Anyone who has attempted to navigate through the government provident fund system would testify to it.

Mr. Pillai talks about functional friction that matters more to the semi-urban and rural population due to higher ego depletion. Functional friction is the frictional barrier that prevents one from achieving the base objective they had embarked upon. In this particular case, the base objective is choosing an optimal insurance product, paying for it and acquiring it.

Customers looking for insurance with no external pressure to buy require higher sensitivity and empathy from insurance providers, due to the heightened physical, cognitive and time friction they face. “There is an emerging, young and aspirational segment in Tier 2/3 that has Tier 1 as their benchmark. These customers seek similar levels of service and sophistication from insurance providers, necessitating tailored solutions to meet their expectations. What’s often the case in India is that family members, co-beneficiaries, and particularly women of the house play a significant role in decision-making within Tier 2/3,” Mr. Pillai says, adding that all these insights come largely from neuroscience-based study of the non-conscious, validated by other methods such as in-depth conversations and data.

Ethical concerns

Meanwhile, Mr. Musk’s Neuralink has indeed conjured scary scenarios around neuroscience applications. Mr. Garg raises concerns about the possible misuse of Neuralink data to manipulate consumer responses; some wonder whether the implants would make the implanted susceptible to suggestions from outside. Less exotic, more mundane applications of neuroscience have raised concerns too, and neuromarketing surveys themselves are under the scanner. The Neuromarketing Science and Business Association (NMSBA) has introduced the first neuromarketing code of ethics, covering areas such as privacy, consent and transparency. The Advertising Standards Council of India, replying to an email, said it has issued no guidelines on neuromarketing. The key issue is the informed consent of survey participants: whether they are aware of all the implications of their participation and whether they are being exploited. Using people below 18 years of age as survey participants adds another layer of concern; the informed consent of a parent or guardian would be needed, notes the NMSBA.



What makes the brain the most complicated object in the universe
https://artifex.news/article67852849-ece/ | Fri, 16 Feb 2024


In the middle of 2023, a study conducted by the HuthLab at the University of Texas sent shockwaves through the realms of neuroscience and technology. For the first time, the thoughts and impressions of people unable to communicate with the outside world were translated into continuous natural language, using a combination of artificial intelligence (AI) and brain imaging technology.

This is the closest science has yet come to reading someone’s mind. While advances in neuroimaging over the past two decades have enabled non-responsive and minimally conscious patients to control a computer cursor with their brain, HuthLab’s research is a significant step closer towards accessing people’s actual thoughts. As Alexander Huth, the neuroscientist who co-led the research, told the New York Times, “This isn’t just a language stimulus. We’re getting at meaning – something about the idea of what’s happening. And the fact that’s possible is very exciting.”

Combining AI and brain-scanning technology, the team created a non-invasive brain decoder capable of reconstructing continuous natural language among people otherwise unable to communicate with the outside world. The development of such technology – and the parallel development of brain-controlled motor prosthetics that enable paralysed patients to achieve some renewed mobility – holds tremendous prospects for people suffering from neurological diseases including locked-in syndrome and quadriplegia.

In the longer term, this could lead to wider public applications such as Fitbit-style health monitors for the brain and brain-controlled smartphones. On January 29, Elon Musk announced that his Neuralink tech startup had implanted a chip in a human brain for the first time. He had previously told followers that Neuralink’s first product, Telepathy, would one day allow people to control their phones or computers “just by thinking”.

But alongside such technological developments come major ethical and legal concerns. It’s not only privacy but the very identity of people that may be at risk. As we enter this new era of so-called mind-reading technology, we will also need to consider how to prevent its potential to help people being outweighed by its potential to do harm.

Humanity’s greatest mapping challenge

The brain is the most complicated object in the universe. It contains around 86 billion neurons, each connected to around 7,000 others, sending between ten and 100 signals every second. The development of AI was based on the brain and the concept of neurons working together. Now, the way AI works with deep learning is helping us understand much more clearly how the brain works.

By fully mapping the structure and function of a healthy human brain, we can determine with great precision what goes awry in diseases of the brain and mind. In 2009, the Human Connectome Project was launched by the US National Institutes of Health with the goal of building a map of the structure and function of a healthy human brain. Similar initiatives were launched in Europe in 2013 (the Human Brain Project) and China in 2016 (the China Brain Project).

This daunting endeavour may still take generations to complete – but the scientific ambition of mapping and reading people’s brains dates back more than two centuries. With the world having been circumnavigated many times over, Antarctica discovered and much of the planet charted, humanity was ready for a new (and even more complicated) mapping challenge – the human brain.

These efforts began in earnest in the late 18th century with the development of a systematic framework for scientists to ask how the brain and its regions produce psychological experiences – our thoughts, feelings and behaviour. One of the earliest attempts was phrenology, pioneered by the Austrian physician and anatomist Franz Joseph Gall.

While this long-discredited science may now be best known for the decorative busts sold in flea markets, it was all the rage by the early 19th century. Gall and his assistant Johann Spurzheim suggested that the brain was organised along 35 psychological functions, each linked to a different underlying region.

Just as you might start lifting dumbbells if you want larger biceps, phrenology argued that the more you use a particular psychological function, the more the brain region underlying it should grow – leading to a corresponding lump in your skull. According to Gall and Spurzheim, some of these functions (including memory, love of offspring and the instinct to kill) were shared with animals, whereas others (such as wit, poetic ability and morality) were uniquely human.

Throughout the British empire and later in the US, phrenology was used to justify classism, colonialism, slavery and white supremacy. Queen Victoria had readings done on her children, but Napoleon Bonaparte was not a fan. When Gall moved to Paris in 1807 to perform much of his phrenological theorising, France’s emperor pronounced: “It is an ingenious fable which might seduce the gens du monde, but could not stand the scrutiny of the anatomist.”

In the 1860s, “locationist” views of how the brain worked made a comeback – though the scientists leading this research were keen to distinguish their theories from phrenology. French anatomist Paul Broca discovered a region of the left hemisphere responsible for producing speech – thanks in part to his patient, Louis Victor Leborgne, who at age 30 lost the ability to say anything other than the syllable “tan”. Today, Patient Tan remains one of psychology’s most famous case studies, and Broca’s area, in the frontal cortex, is one of the most important language regions of the brain, playing a critical part in putting our thoughts into words.

Similarly, German neuroanatomist Korbinian Brodmann’s map of 52 distinct regions of the cerebral cortex, first published in 1909, is still an important tool of contemporary neuroscience – and today’s neuroscientists continue to ask some of the same questions as these pioneers: are our thoughts, feelings and behaviour produced by the collective action of the brain, or specific brain regions?

In modern neuroscience studies, hi-tech scanning tools such as positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) allow researchers to map the brain by measuring changes in local blood flow that are linked to changes in local neural activity. This approach depends on the findings of American physiologist John Fulton almost a century ago. Fulton was treating Walter K, a 26-year-old sailor suffering from headaches and vision failure. When using his eyes after leaving a dark room, the patient sensed a noise in the back of his head, located over the visual cortex. This stronger pulse of activity was not replicated by other sensory inputs, for example when smelling tobacco or vanilla.

Over the remainder of the 20th century, this first observation of the link between local cerebral blood flow and brain function was built on by neuroscientists including the American Seymour Kety and Swedish collaborators David Ingvar and Niels Lassen. Their pioneering work paved the way for modern brain mapping, led by the ground-breaking work of BrainGate – a multidisciplinary research unit originating in the neuroscience department at Brown University in the US state of Rhode Island.

The first clinical trial

Prototype brain-computer interfaces (BCIs) record and decode a patient’s brain activity, translating it into actions that can be carried out by a neural cursor, prosthetic limb or powered exoskeleton. The ultimate goal is wireless, non-invasive devices that help patients communicate and move with precision in the real world. AI is critical to this goal, and is already being used to help BCI systems produce finely controlled, rapid motor movements.
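The decoding step in such systems is, at its simplest, a learned mapping from neural firing rates to movement commands. A bare-bones linear-decoder sketch (the weights and channel counts are invented; real systems fit these from calibration sessions and typically use Kalman filters or neural networks):

```python
# Linear decoder: each cursor-velocity dimension is a weighted sum of the
# firing rates on the recording channels. All weights here are hypothetical.

def decode_velocity(rates, weights):
    return tuple(sum(w * r for w, r in zip(row, rates)) for row in weights)

# Hypothetical setup: 4 recording channels, 2 outputs (vx, vy).
W = [[0.5, -0.2, 0.0, 0.1],   # weights for vx
     [0.0, 0.3, -0.4, 0.2]]   # weights for vy
vx, vy = decode_velocity([10.0, 5.0, 2.0, 8.0], W)  # rates in spikes/second
```

Calibration amounts to solving for the weights from recorded pairs of firing rates and intended movements, which is why BCI users train the system before free use.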

In 2004, BrainGate began the first clinical trial using BCIs to enable patients with impaired motor systems (including spinal cord injuries, brainstem infarctions, locked-in syndrome and muscular dystrophy) to control a computer cursor with their thoughts.

Patient MN, a quadriplegic since being stabbed in the neck in 2001, was the trial’s first patient. After neuroscientist Leigh Hochberg’s team implanted electrodes over the hand-arm region of the patient’s primary motor cortex, they reported that Patient MN was able to open emails, draw figures using a paint program, and operate a television using a cursor. In addition, brain activity was linked to the patient’s prosthetic hand and robotic arm, enabling rudimentary actions including grasping and transporting an object. What’s more, these tasks could be done while the patient was having a conversation, suggesting they did not even demand the full concentration of the patient.

Other quadriplegic patients subsequently used BCI devices connected to multi-joint robotic arms to pick up and drink from a cup – and in 2015, a patient with locked-in syndrome was shown operating a point-and-click keyboard five years after the device’s implantation. Advanced decoding algorithms saw their cursor control improve such that patients went from typing 24 characters per minute in 2015 to 39 characters per minute two years later.

Also in 2017, BrainGate clinical trials reported the first evidence that BCIs could be used to help patients regain movement of their own limbs by bypassing the damaged portion of the spinal cord. One patient with a high-cervical spinal cord injury was able to reach and grasp a cup eight years after sustaining his injury.

Then in 2021, the BrainGate team reported that quadriplegic patients were now using a wireless system in their own homes to control a tablet computer – an important first step toward a future where BCI devices can help people move and communicate outside the confines of the hospital or laboratory. Furthermore, the researchers said they anticipate “significant advances and paradigm shifts in neural signal processing, decoding algorithms and control frameworks” in the quest to make such devices available to the wider public.

Beyond BrainGate’s successes, another team led by American neurosurgeon Edward Chang recently reported using surgically implanted electrocorticogram electrodes to create a “digital avatar” that could convey what a paralysed patient wants to say. With the help of AI, the BCI decoded the muscle movements related to the speech the patient was forming in their mind (as opposed to decoding the actual semantic content).

Activity patterns emerging from the specific brain region that is critical for speech are the key focus for this type of BCI. One expert not involved in the research told the Guardian: “This is quite a jump from previous results. We’re at a tipping point.”

A new era of ‘mind reading’ technology

Brain activity has long been recorded by non-invasive imaging methods such as fMRI and electroencephalography (EEG). But having been primarily envisaged as a tool for diagnostics and monitoring, it is now also a core element of the latest neural communication and prosthetic devices.

A landmark moment came in 2012, when a team led by Canada-based neuroscientist Adrian Owen used neuroimaging to establish a line of communication with people suffering from disorders of consciousness. Despite being behaviourally non-responsive and minimally conscious, these patients were able to answer yes-or-no questions just by using their minds. For patients unable to communicate via facial or eye movements (methods that had been available to locked-in patients for many years), this was a very promising evolution.

Now, a decade on, the HuthLab research at the University of Texas constitutes a paradigmatic shift in the evolution of communication-enabling neuroimaging systems.

In the study’s first stage, participants were placed in an fMRI scanner and their brain activity was recorded while they listened to 16 hours of podcasts (the model training dataset consisted of 82 five- to 15-minute stories taken from the Moth Radio Hour and Modern Love). This brain activity data was then linked to the audio fragments the participants were listening to, in order to map what their brain activity patterns looked like when they had specific semantic content in their minds.

Next, the same participants were exposed to new audio fragments they had never heard before, or alternatively were asked to imagine a story. The decoder was then applied to this new set of brain activity data, to “reconstruct” the stories the participants had been listening to or imagining – with some striking results. For instance, when a patient was played this audio, “I don’t have my driver’s licence yet and I just jumped out right when I needed to, and she says: ‘Well, why don’t you come back to my house and I’ll give you a ride?’ I say OK.”

… the decoder reconstructed it as follows, “She is not ready – she has not even started to learn to drive, yet I had to push her out of the car. I said: ‘We will take her home now’ and she agreed.”

While there were also a considerable number of mistakes over the entirety of the trial, the reconstruction of continuous language solely on the base of brain activity patterns, including some exact word matches, is arguably the closest we have yet come to truly reading someone’s thoughts.

Whereas the brain’s capacity to produce motor intentions is shared across species, the ability to produce and perceive language is uniquely human. Thus, decoding actual semantic content from brain activity in regions used in language perception (primarily the association and prefrontal regions of the brain’s cortex) seems more fundamental to what makes us human.

Also, the HuthLab study used non-invasive fMRI technology – a form of neuroimaging that measures oxygen levels of blood in the brain in order to make inferences on brain activity. The disadvantage of fMRI is that it can only take slow measurements of brain signals (typically, one brain volume every two or three seconds). The study overcame this by using generative AI language models (akin to ChatGPT) that predict the probability of word sequences, and thus what words are most likely to come next in someone’s thoughts.
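The role of the language model can be sketched with a toy scoring rule: when the sluggish fMRI evidence cannot separate similar candidates, the model's prior over likely word sequences breaks the tie. Every number below is invented for illustration; the actual study used a GPT-style model and far richer encoding scores:

```python
# Toy word-level decoder: combine a language-model prior with a (weak) score
# of how well each candidate word fits the measured brain activity.

lm_prior = {"drive": 0.6, "dive": 0.1, "drove": 0.3}         # P(word | context)
brain_evidence = {"drive": 0.5, "dive": 0.45, "drove": 0.4}  # fit to activity

def best_word(candidates):
    # The evidence alone barely separates the candidates; the prior decides.
    return max(candidates, key=lambda w: lm_prior[w] * brain_evidence[w])

word = best_word(["drive", "dive", "drove"])
```

Here the brain evidence is nearly a tie, but the prior makes "drive" the clear winner, which is essentially how the decoder compensates for fMRI's slow sampling.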

The researchers also worked with participants watching silent short film clips, demonstrating that the system could decode semantic content not only from auditory perception, but also from visual perception.

Importantly, they also explicitly addressed the potential threat to a person’s mental privacy posed by this kind of technology. Jerry Tang, one of the study’s lead researchers, stated, “We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that. We want to make sure people only use these types of technologies when they want to and that it helps them.”

The very fact this semantic decoder has to be trained on each person separately, with their cooperation over a long period of time, constitutes a robust safeguard. In other words, one of the major hurdles in the development of language decoders – the fact they are not universally applicable – constitutes one of the strongest safeguards against privacy violations.

However, while there is no risk of a malevolent company being able to read the thoughts of a random person in the street any time soon, there are nonetheless important ethical, legal and data protection concerns that must be considered as this technology develops.

We have already seen the consequences of unfettered corporate access to personal data and online behaviour. Although we are a long way off from neural data being collected and processed at such scale, it is important to consider burgeoning ethical questions in the early stages of technological progress.

The ethical implications are immense

Losing the ability to communicate is a deep cut to one’s sense of self. Restoring this ability gives the patient greater control over their lives and their ability to navigate the world – but it could also give other entities, such as corporations, researchers and other third parties, an uncomfortable degree of insight into, or even control over, the lives of patients.

Even other types of intimate biological data, such as that about our genomes or our biometrics, do not come as close to approximating our private inner lives as neural data. The ethical implications of providing access to such data to scientific and corporate entities are potentially immense.

This is reflected in Resolution 51/3 of the UN Human Rights Council, which commissioned a study on “the impact, opportunities and challenges of neurotechnology with regard to the promotion and protection of all human rights” in time for the council’s 57th session in September 2024. However, whether the introduction of novel human rights is warranted to address the challenges posed by neurotechnology remains a hotly debated issue among human rights experts and advocacy groups.

The NeuroRights Foundation, based at Columbia University in New York, argues that novel rights surrounding neurotechnologies will be needed for all humans to preserve their privacy, identity, and free will. The potential vulnerability of disabled patients makes this a particularly important problem. For example, Parkinson’s disease, a neurodegenerative disease that affects movement, is co-morbid with dementia, which affects the ability to reason and think clearly.

In line with this approach, Chile was the first country to adopt legislation addressing the risks inherent to neurotechnology. It not only introduced a new constitutional right to mental integrity, but is also in the process of adopting a bill that bans the sale of neurodata and would regulate all neurotech devices as medical devices, even those intended for general consumers. The proposed legislation recognises the intensely personal nature of neural data and treats it as akin to organ tissue, which cannot be bought or sold, only donated. But this legislation has also faced criticism, with legal scholars questioning the need for new rights and pointing out that this regime could stifle beneficial BCI research for disabled patients.

While the legal action taken by Chile is the most impactful and far-reaching to date, other countries are considering following suit by updating existing laws to address the developments in neurotechnologies.

One of the cornerstones of ethical research is the principle of informed consent. Particular attention must be paid to the capacity of paralysed patients and their family members to understand and consent to novel experimental therapies. Patients with a very limited ability to communicate may not be able to answer the more extensive questions associated with obtaining informed consent, which is often more complex than a simple opt-in procedure. Also, not all potential risks and side-effects (both physical and mental) can be foreseen, making it difficult for physicians to adequately inform their patients.

At the same time, it is important to keep in mind that denying treatment to a patient whose only hope may be communicating through a BCI carries a significant opportunity cost – such as a lifetime without communication – that may well be greater than the costs of participating in experimental treatments. The appropriate balance will be challenging for clinicians and researchers to determine.

In a burgeoning new era of big (brain) data, longstanding ethical concerns about the hacking, leaking, unauthorised use or commercial exploitation of personal data will be amplified in the case of sensitive data on a person’s thoughts or movements (as controlled through neuroprosthetics). Paralysed patients may be particularly vulnerable to neurodata theft given their reliance on caregivers, and increasingly, the BCI technologies themselves, to communicate and move around the world. Care must be taken to ensure that information disclosed by a BCI represents a patient’s true and consensual thoughts.

And while it is likely that the first advances in neurotech will be therapeutic in nature, such as for disabled and neurodivergent patients, future advances are likely to involve consumer applications such as entertainment, as well as for military and security purposes. The growing availability of neurotechnology in a commercial context that is generally subject to far less regulation only amplifies these ethical and legal concerns.

Data protection laws should be assessed on their ability to account for the new risks arising from increasing access to and collection of neurodata by organisations and entities of different types. Take the example – for the time being completely hypothetical – of using BCI to infer the thoughts of suspects in police interrogations.

One might say that BCI cannot be used in police interrogations as the error rate of misinterpreting a person’s neural data is currently unacceptably high, although accuracy could improve in the future. Or, one might say that BCI should never be used to “read” a person’s brain without their consent, regardless of the technology’s accuracy. Or, one might say that using BCI for interrogations is justified under certain extreme circumstances, such as when crucial information is needed to save someone’s life, and the suspect is refusing to cooperate.

Different people, societies, and cultures will disagree on where to draw the line. We are at an early stage of technological development and as we begin to uncover the great potential of BCI, both for therapeutic applications and beyond, the need to consider these ethical questions and their implications for legal action becomes more pressing.

Decoding our neuro future

This is a groundbreaking moment in our quest to understand the inner workings of our brains and minds. In the past year alone, neuroscientists have reversed spinal disabilities, translated fMRI data into text to understand what someone is thinking, and begun clinical trials to help people interact with objects using thoughts alone, something already seen in trials with monkeys two years ago. Such developments could all lead to transformative impacts on people's lives.

At the same time, it’s important to note that research such as the HuthLab study uses a very small sample, and that the training process for its semantic decoder is complex, time-consuming and expensive. Add to this the fact that fMRI, although non-invasive, is a non-wearable neuro-imaging technique, and it is clear these methods are not set to leave a strictly organised laboratory setting any time soon.

However, the HuthLab researchers suggest that in time, fMRI could be replaced by functional near-infrared spectroscopy (fNIRS) which, by "measuring where there's more or less blood flow in the brain at different points in time", could give similar results to fMRI using a wearable device.

Certainly, the exponential global investment in the development of neurotechnologies such as this, by governments and private actors alike, shows that the world is eager to create accessible BCIs that are suited to function as medical devices, but also as commercial consumer goods. By the middle of 2021, the total investment in neurotechnology companies amounted to just over US$33 billion (around £26 billion).

One of the most high-profile companies is Musk's Neuralink. "Initial results show promising neuron spike detection," Musk tweeted on January 29, of his neurotech startup's first implanted chip in a human brain. The implant is said to include 1,024 electrodes on flexible threads, each only slightly larger in diameter than a red blood cell. According to Neuralink: "Its small size allows threads to be inserted with minimal damage to the [brain] cortex."

While this wireless implant is currently being developed as a medical device aimed at enhancing the quality of life of patients with various neurological diseases (Neuralink's clinical trial has enlisted people aged 22 and above living with quadriplegia), Musk stated on X (formerly Twitter) that the ultimate aim is to create a device that "enables control of your phone or computer, and through them almost any device, just by thinking".

Indeed, commercial neuro-imaging devices are already on the market. The Kernel Flow, for example, is a commercially available, wearable headset that uses fNIRS technology to monitor brain activity. Another prominent player in commercial neuro-imaging, Emotiv, has developed earbuds incorporating EEG technology that can monitor brain activity for signs of focus, attention and stress – with the stated ambition of boosting the wearer's productivity at work.

While the era of big data has enabled increasingly personalised and complex approximations of people's inner lives through our biometrics, genetics and online presence, nothing has come close to capturing the inner workings of our minds – yet.

But as HuthLab’s research suggests, and Musk’s pronouncements claim, this may now not be so very far away. The dawn of a new era of brain-computer interfaces should be treated with great care and great respect – in acknowledgement of its immense potential to both help, and harm, our future generations.

Nicholas J. Kelley, Assistant Professor in Social Psychology, University of Southampton; Stephanie Sheir, Research Associate, Trustworthy Autonomous Systems Hub, University of Bristol, and Timo Istace, PhD Researcher in Neurotechnology and the Law, University of Antwerp

This article is republished from The Conversation under a Creative Commons license. Read the original article.





If anxiety is in my brain, why is my heart pounding? A psychiatrist explains the neuroscience and physiology of fear (26 September 2023)

Heart in your throat. Butterflies in your stomach. Bad gut feeling. These are all phrases many people use to describe fear and anxiety. You have likely felt anxiety inside your chest or stomach, and your brain usually doesn’t hurt when you’re scared. Many cultures tie cowardice and bravery more to the heart or the guts than to the brain.

But science has traditionally seen the brain as the birthplace and processing site of fear and anxiety. Then why and how do you feel these emotions in other parts of your body?

I am a psychiatrist and neuroscientist who researches and treats fear and anxiety. In my book Afraid, I explain how fear works in the brain and the body and what too much anxiety does to the body. Research confirms that while emotions do originate in your brain, it’s your body that carries out the orders.

Also Read | Where the mind is without fear: What is anxiety and how can we beat it? 

Fear and the brain

While your brain evolved to save you from a falling rock or speeding predator, the anxieties of modern life are often a lot more abstract. Fifty thousand years ago, being rejected by your tribe could mean death, but not doing a great job on a public speech at school or at work doesn't have the same consequences. Your brain, however, might not know the difference.

There are a few key areas of the brain that are heavily involved in processing fear.

When you perceive something as dangerous, whether it’s a gun pointed at you or a group of people looking unhappily at you, these sensory inputs are first relayed to the amygdala. This small, almond-shaped area of the brain located near your ears detects salience, or the emotional relevance of a situation and how to react to it. When you see something, it determines whether you should eat it, attack it, run away from it or have sex with it.

Threat detection is a vital part of this process, and it has to be fast. Early humans did not have much time to think when a lion was lunging toward them. They had to act quickly. For this reason, the amygdala evolved to bypass brain areas involved in logical thinking and can directly engage physical responses. For example, seeing an angry face on a computer screen can immediately trigger a detectable response from the amygdala without the viewer even being aware of this reaction.

Also Read | Sadness, sleeplessness, stress, and anxiety top mental health concerns shared on Tele MANAS

The hippocampus is near and tightly connected to the amygdala. It’s involved in memorizing what is safe and what is dangerous, especially in relation to the environment – it puts fear in context. For example, seeing an angry lion in the zoo and in the Sahara both trigger a fear response in the amygdala. But the hippocampus steps in and blocks this response when you’re at the zoo because you aren’t in danger.

The prefrontal cortex, located above your eyes, is mostly involved in the cognitive and social aspects of fear processing. For example, you might be scared of a snake until you read a sign that the snake is nonvenomous or the owner tells you it's their friendly pet.

Although the prefrontal cortex is usually seen as the part of the brain that regulates emotions, it can also teach you fear based on your social environment. For example, you might feel neutral about a meeting with your boss but immediately feel nervous when a colleague tells you about rumors of layoffs. Many prejudices like racism are rooted in learning fear through tribalism.

Also Read | Mental health awareness month: how to cope in the age of anxiety  

Fear and the rest of the body

If your brain decides that a fear response is justified in a particular situation, it activates a cascade of neuronal and hormonal pathways to prepare you for immediate action. Some of the fight-or-flight response – like heightened attention and threat detection – takes place in the brain. But the body is where most of the action happens.

Several pathways prepare different body systems for intense physical action. The motor cortex of the brain sends rapid signals to your muscles to prepare them for quick and forceful movements. These include muscles in the chest and stomach that help protect vital organs in those areas. That might contribute to a feeling of tightness in your chest and stomach in stressful conditions.

The sympathetic nervous system is the gas pedal that speeds up the systems involved in fight or flight. Sympathetic neurons are spread throughout the body and are especially dense in places like the heart, lungs and intestines. These neurons trigger the adrenal glands to release hormones like adrenaline, which travel through the blood to those organs and intensify the fear response there.

Also Read | How anxiety can look different in children

To ensure sufficient blood supply to your muscles when they're in high demand, signals from the sympathetic nervous system increase the rate at which your heart beats and the force with which it contracts. You feel both the increased heart rate and the stronger contractions in your chest, which is why you may connect the feeling of intense emotions to your heart.

In your lungs, signals from the sympathetic nervous system dilate airways and often increase your breathing rate and depth. Sometimes this results in a feeling of shortness of breath.

As digestion is the last priority during a fight-or-flight situation, sympathetic activation slows down your gut and reduces blood flow to your stomach to save oxygen and nutrients for more vital organs like the heart and the brain. These changes to your gastrointestinal system can be perceived as the discomfort linked to fear and anxiety.

It all goes back to the brain

All bodily sensations, including those visceral feelings from your chest and stomach, are relayed back to the brain through pathways in the spinal cord. Your already anxious and highly alert brain then processes these signals at both conscious and unconscious levels.

The insula is a part of the brain specifically involved in conscious awareness of your emotions, pain and bodily sensations. The prefrontal cortex also engages in self-awareness, especially by labeling and naming these physical sensations, like feeling tightness or pain in your stomach, and attributing cognitive value to them, like “this is fine and will go away” or “this is terrible and I am dying.” These physical sensations can sometimes create a loop of increasing anxiety as they make the brain feel more scared of the situation because of the turmoil it senses in the body.

Although the feelings of fear and anxiety start in your brain, you also feel them in your body because your brain alters your bodily functions. Emotions take place in both your body and your brain, but you become aware of their existence with your brain. As the rapper Eminem recounted in his song “Lose Yourself,” the reason his palms were sweaty, his knees weak and his arms heavy was because his brain was nervous.

Arash Javanbakht, Associate Professor of Psychiatry, Wayne State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


