When a neurotransmitter binds to a postsynaptic neuron, which channels does it trigger to open?

Foundation of neurophysiology

Zhongzhi Shi, in Intelligence Science, 2021

2.3.1.2 Postsynaptic element

The postsynaptic element is usually the membrane of the soma or a dendrite of the postsynaptic neuron. The portion opposite the presynaptic membrane thickens to form the postsynaptic membrane, which is thicker than the presynaptic membrane, at about 20–50 nm. The postsynaptic membrane contains receptors and chemically gated ion channels. Based on the thickness of the dense material on the cytoplasmic surfaces of the presynaptic and postsynaptic membranes, synapses can be divided into type I and type II: (1) In a type I synapse, the dense material on the cytoplasmic surface of the postsynaptic membrane is thicker than that of the presynaptic membrane. Because of this asymmetry in membrane thickness it is called an asymmetrical synapse; it has round synaptic vesicles and a 20–50 nm wide synaptic cleft. Type I synapses are usually considered excitatory synapses; they are mainly axodendritic synapses distributed on dendritic trunks. (2) A type II synapse has little dense material on the cytoplasmic surfaces of the presynaptic and postsynaptic membranes, and the thickness is similar on both sides, so it is called a symmetrical synapse. The symmetrical synapse has flattened synaptic vesicles and a narrow, 10–20 nm synaptic cleft. Type II synapses are usually considered inhibitory synapses; they are mainly axosomatic synapses distributed on the soma.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780323853804000026

Neurotransmitters and Their Life Cycle☆

Javier Cuevas, in Reference Module in Biomedical Sciences, 2019

Abstract

Neurotransmitters are the chemical messengers that allow electrical signals from neurons to be transmitted to the postsynaptic neuron or effector target. A substance is generally considered a neurotransmitter if it is synthesized in the neuron, is found in the presynaptic terminal and released to have an effect on the postsynaptic cell, is mimicked by exogenous application to the postsynaptic cell, and has a specific mechanism for termination of its action. Various types of molecules, ranging from simple gases, such as nitric oxide (NO), to complex peptides, such as pituitary adenylate cyclase-activating peptide, satisfy these criteria. Most small-molecule neurotransmitters, such as acetylcholine and dopamine, are synthesized in the cytoplasm of the nerve terminal and transported into vesicles; a variety of substrates and biosynthetic enzymes are involved in their synthesis. Only 12 small-molecule neurotransmitters have been identified, whereas over 100 neuroactive peptides are known. Unlike small-molecule neurotransmitters, neuropeptides are encoded by specific genes and are synthesized from protein precursors formed in the cell body. The emerging understanding of atypical neurotransmitters, such as the gases NO and CO and lipid mediators, and of the phenomena of gliotransmitter action and exosomal transmission is constantly revising what constitutes a “neurotransmitter.”

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780128012383113182

Nature's Learning Rule

Bernard Widrow, ... Jose Krause Perin, in Artificial Intelligence in the Age of Neural Networks and Brain Computing, 2019

9 The Synapse

The connection linking neuron to neuron is the synapse. Signal flows in one direction, from the presynaptic neuron to the postsynaptic neuron, via the synapse, which acts as a variable attenuator. A simplified diagram of a synapse is shown in Fig. 1.16A [20]. As an element of neural circuits, it is a “two-terminal device.”


Figure 1.16. A synapse corresponding to a variable weight. (A) Synapse. (B) A variable weight.

There is a 0.02 μm (20 nm) gap between the presynaptic side and the postsynaptic side of the synapse, called the synaptic cleft. When the presynaptic neuron fires, a chemical messenger called a neurotransmitter is released into the cleft. Each activation pulse generated by the presynaptic neuron causes a finite amount of neurotransmitter to be released into the cleft. The neurotransmitter lasts only for a very short time, some being reabsorbed and some diffusing away. The average concentration of neurotransmitter in the cleft is proportional to the presynaptic neuron's firing rate.

Some of the neurotransmitter molecules attach to receptors located on the postsynaptic side of the cleft. The effect of this on the postsynaptic neuron is either excitatory or inhibitory, depending on the nature of the synapse and its neurotransmitter chemistry [20–24]. A synaptic effect results when neurotransmitter molecules attach to the receptors. The effect is proportional to the average amount of neurotransmitter present and the number of receptors. Thus, the effect of the presynaptic neuron on the postsynaptic neuron is proportional to the product of the presynaptic firing rate and the number of receptors present. The input signal to the synapse is the presynaptic firing rate, and the synaptic weight is proportional to the number of receptors. The weight or the synaptic “efficiency” described by Hebb is increased or decreased by increasing or decreasing the number of receptors. This can only occur when neurotransmitter is present [20]. Neurotransmitter is essential both as a signal carrier and as a facilitator for weight changing. A symbolic representation of the synapse is shown in Fig. 1.16B.

The effect of the action of a single synapse upon the postsynaptic neuron is actually quite small. Signals from thousands of synapses, some excitatory, some inhibitory, add in the postsynaptic neuron to create the (SUM) [20,25]. If the (SUM) of both the positive and negative inputs is below a threshold, the postsynaptic neuron will not fire and its output will be zero. If the (SUM) is greater than the threshold, the postsynaptic neuron will fire at a rate that increases with the magnitude of the (SUM) above the threshold. The threshold within the postsynaptic neuron is referenced to its resting potential, which is close to −70 mV. Summing in the postsynaptic neuron is accomplished by Kirchhoff addition.
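As a concrete illustration of this summation-and-threshold behavior, the following sketch (with assumed, purely illustrative names and values, not taken from the chapter) treats the inputs as presynaptic firing rates, the signed weights as stand-ins for receptor counts, and the output firing rate as growing with the amount by which the (SUM) exceeds the threshold.

```python
# Sketch of the sum-and-threshold behavior described above (illustrative only).
# Inputs are presynaptic firing rates; weights stand in for receptor counts,
# signed positive for excitatory and negative for inhibitory synapses.

def postsynaptic_rate(firing_rates, weights, threshold=0.0, gain=1.0):
    """Return the postsynaptic firing rate for the given inputs.

    The weighted inputs are summed ("Kirchhoff addition"); the neuron stays
    silent unless the sum exceeds the threshold, and above threshold the
    output rate grows with the excess.
    """
    total = sum(r * w for r, w in zip(firing_rates, weights))  # the (SUM)
    return gain * max(0.0, total - threshold)

# Example: two excitatory inputs and one inhibitory input.
print(postsynaptic_rate([40.0, 10.0, 30.0], [0.6, 0.9, -0.5], threshold=5.0))
```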

Learning and weight changing can only occur in the presence of neurotransmitter in the synaptic cleft. Thus, there will be no weight change if the presynaptic neuron is not firing, that is, if the input signal to the synapse is zero. If the presynaptic neuron is firing, there will be weight change. The number of receptors will gradually increase (up to a limit) if the postsynaptic neuron is firing, that is, when the (SUM) of the postsynaptic neuron has a voltage above threshold. Then the synaptic membrane to which the receptors are attached will have a voltage above threshold, since this membrane is part of the postsynaptic neuron. See Fig. 1.17. All this corresponds to Hebbian learning: neurons that fire together wire together. Extending Hebb's rule, if the presynaptic neuron is firing and the postsynaptic neuron is not, the postsynaptic (SUM) will be negative and below the threshold, the membrane voltage will likewise be below the threshold, and the number of receptors will gradually decrease.
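The receptor-count dynamics just described can be sketched as a simple update rule. The function below is an illustration consistent with the text (weight change only when the presynaptic input is nonzero; growth toward a ceiling when the postsynaptic (SUM) is above threshold; decay otherwise), not the authors' Hebbian-LMS algorithm; all names and step sizes are assumptions.

```python
# Illustrative Hebbian-style receptor-count update consistent with the text:
# receptors change only when neurotransmitter is present (presynaptic firing),
# increase (up to a limit) when the postsynaptic (SUM) is above threshold,
# and decrease when it is below.

def update_receptor_count(n_receptors, pre_rate, post_sum, threshold=0.0,
                          step=1.0, max_receptors=1000.0):
    if pre_rate <= 0.0:
        return n_receptors  # no neurotransmitter in the cleft: no weight change
    if post_sum > threshold:
        # "fire together, wire together": grow toward the ceiling
        return min(max_receptors, n_receptors + step)
    # presynaptic firing but postsynaptic silent: weaken
    return max(0.0, n_receptors - step)
```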


Figure 1.17. A neuron, dendrites, and a synapse.

There is another mechanism with further control over the synaptic weight values, called synaptic scaling [26–30]. This natural mechanism is implemented chemically for stability, to maintain the voltage of the (SUM) within an approximate range about two set points. This is done by scaling up or down all of the synapses supplying signal to a given neuron. There is a positive set point and a negative one, and they turn out to be analogous to the equilibrium points shown in Fig. 1.8. This kind of stabilization is called homeostasis and is a regulatory phenomenon found throughout living systems. The Hebbian-LMS algorithm exhibits homeostasis about the two equilibrium points, caused by reversal of the error signal at these equilibrium points. See Fig. 1.8. Slow adaptation over thousands of adaptation cycles, over hours of real time, results in homeostasis of the (SUM).
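Synaptic scaling can be pictured as a slow multiplicative adjustment of all the weights feeding one neuron so that the long-run (SUM) drifts toward a set point. The sketch below illustrates that idea only; the names, the single set point, and the rate are assumptions, not the chemical mechanism or the Hebbian-LMS equilibria described in the chapter.

```python
# Illustrative synaptic scaling: every weight feeding one neuron is nudged by
# the same small factor so that the long-run (SUM) drifts toward a set point.

def scale_synapses(weights, average_sum, set_point, rate=0.01):
    """Scale all weights up or down slightly toward the set point (homeostasis)."""
    factor = (1.0 + rate) if average_sum < set_point else (1.0 - rate)
    return [w * factor for w in weights]

# Example: the neuron has been running "hot", so all weights shrink slightly.
print(scale_synapses([0.6, -0.2, 0.9], average_sum=12.0, set_point=10.0))
```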

Fig. 1.17 shows an exaggerated diagram of a neuron, dendrites, and a synapse. This diagram suggests how the voltage of the (SUM) in the soma of the postsynaptic neuron can, by ohmic conduction, determine the voltage of the membrane.

Activation pulses are generated by a pulse generator in the soma of the postsynaptic neuron. The pulse generator is energized when the (SUM) exceeds the threshold. The pulse generator triggers the axon to generate electrochemical waves that carry the neuron's output signal. The firing rate of the pulse generator is controlled by the (SUM). The output signal of the neuron is its firing rate.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780128154809000013

A Dynamic Net for Robot Control

Bridget Hallam, ... Gillian Hayes, in Neural Systems for Robotics, 1997

8.6.3 Varying Neuronal Gains

The gains in the input register affect the times at which the neuron turns on and off, and therefore the time relationship between the pre- and postsynaptic neurons. They do not affect the weight change that occurs for any given time relationship.

The value of the activity register up gains, and even the correlation in these gains between pre- and postsynaptic neuron, affect the final weight only if the burst length is short. If the burst length is sufficiently long, then the initial co-activation values are not represented in the command register value achieved.

The neuronal time constants most influential in affecting the weight change are those governing the decay of the activity registers. With command register gains set at 1, the activity register down gain had to be over 1.5 for any strengthening to occur and under 4 if there was to be weakening when R was on “too long.” The greatest “R on too long” weakening occurred with an activity register down gain of 2.5. Since strengthening was also strong with this gain, this was the value chosen for subsequent experiments. Results for some of the activity register down gains tried are given in Figure 8.11. In each case, t_exp was 2 time units, the synapse weight started at 0.5, S was on for 10 time units from t(0), and R was on for a variable time from t(0), causing the difference in firing between S and R to be as indicated.


FIGURE 8.11. Effect of varying activity register down gains at various time relationships.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780080925097500129

Neuronal excitation

Andrej Kral, ... Hannes Maier, in Prostheses for the Brain, 2021

Thresholds in excitation

Individual neurons function as integrators of inputs; the activation of presynaptic neurons ultimately causes postsynaptic potentials that sum at the passive membrane of the postsynaptic neuron. However, as these potentials propagate along the passive membrane, their amplitudes decrease because of its leaky nature (Fig. 4.10). At the point where the axon originates (called the "action potential initiation zone" or "trigger zone"), voltage-gated channels are found in large numbers. If the depolarization at this point is sufficient to generate an action potential, it will be propagated along the axonal membrane to the presynaptic elements. Thus, the neuron is a leaky integrator of inputs, whose temporal and spatial properties are defined by the time and length constants of the neuronal membrane. A significant portion of the charge injected into a neuron may exit the stimulated cell and may therefore not be effective. This also has implications for neuroprosthetic stimulation.
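As a rough illustration of this leaky-integrator picture, the attenuation of a postsynaptic potential can be sketched with the usual exponential factors governed by the membrane length and time constants. The constants and function name below are assumptions chosen for illustration, not physiological values from the chapter.

```python
import math

# Illustrative passive-membrane attenuation (assumed constants): a postsynaptic
# potential decays exponentially with distance (length constant) and with time
# (time constant) before it reaches the action-potential initiation zone.

def attenuated_psp(v0_mv, distance_um, time_ms,
                   length_const_um=300.0, time_const_ms=10.0):
    return v0_mv * math.exp(-distance_um / length_const_um) \
                 * math.exp(-time_ms / time_const_ms)

# Example: a 5 mV PSP observed 150 um away, 5 ms later.
print(attenuated_psp(5.0, 150.0, 5.0))
```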

The excitation threshold itself as well as the synaptic efficacy can change over time in response to a repeated stimulus. It is assumed that these processes underlie the ability for learning (see Chapter 9). However, not all synapses learn similarly fast. Synapses located in the peripheral nervous system or connecting sensory organs to the brain as a rule are highly active, must be very reliable, and therefore do not undergo plastic changes. Synapses in the central nervous system, particularly the cerebral cortex, are more plastic and change with learning.

For the present context of brain prostheses, the temporal and spatial constraints on excitation are of essential importance. These properties make clear that a constant electrical field that does not change in time or space does not induce neuronal excitation. In such a steady state, the membrane keeps the constant transmembrane potential given by the concentrations of ions on both sides of the membrane. Excitation results only if the electrical field around the neuron induces a change in transmembrane potential sufficient to generate action potentials. For this, gradients in time and space are required that are steeper than the membrane constants. Chapter 6 examines these in detail.

The neurons as described above form macroscopic structures: nerves, neuronal ganglia, the spinal cord, and the brain. They are embedded in structures that provide protection, nutrition, and oxygen. These macroscopic structures are the eventual targets for neuroprosthetic intervention.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780128188927000031

Neural cell pinning on surfaces by semiconducting silicon nanowire arrays

C. Villard, in Semiconducting Silicon Nanowires for Biomedical Applications, 2014

10.2.1 Building in vitro controlled neuron networks: an overview

Developing neurons extend neurites that subsequently compete with each other, and one of them becomes the axon, the extension that conveys the action potential towards post-synaptic neurons. This process, named axonal differentiation and characterized by the rapid elongation of one single neurite among all the others, occurs about 36 hours after plating for hippocampal neurons extracted from mouse embryos (Dotti et al., 1998) or after 2–4 DIV (days in vitro) for cortical neurons (Barnes and Polleux, 2009). The other neurites further differentiate into dendrites and form the highly ramified structure that collects, and partly computes, the electrical signals coming from pre-synaptic neurons.

Micropatterns are common tools to guide neurite outgrowth, independently of their axonal or dendritic nature, and to locate neuron cell bodies (or somata) at specific positions, thus allowing a global design of neuronal architectures (Fig. 10.1) (Wyart et al., 2002). However, failures in long-term confinement can occur as a result of the mechanical pulling forces exerted by neurites, which can be strong enough to displace cell bodies over micrometric distances (Fromherz, 2003). This is why strategies based on mechanical confinement have been developed, leading to long-term soma positioning and therefore to a stable neuron–sensor coupling. Examples include parylene neurocages on top of microelectrode arrays (Erickson et al., 2008) and polyimide picket fences aligned above silicon transistors (Zeck and Fromherz, 2001) (Fig. 10.2). However, we will see later in this chapter that the mechanical properties of neurites can be exploited to impose the axo-dendritic polarity of the cell and, in doing so, to control the topography and direct the flow of information within neuronal networks.


10.1. Image of neural networks of controlled architecture. Cell bodies of neurons are restricted to squares of 80 μm and neurites to lines (80 μm length, 2–4 μm wide). Scale bar is 50 μm.

Reprinted with permission from Journal of Neuroscience Methods 117, 123 (2002). Copyright 2013 Elsevier Limited.


10.2. Mechanical confinement of the cell bodies of neurons. (a) Parylene neurocages. Left: Electron micrograph of the final neurocage design, with the main parts of the neurocage labeled. The cage is made of 4 μm thick parylene, a biocompatible polymer. Low-stress silicon nitride insulates the gold electrode and leads. Scale bar: 10 μm. Right: Neurochip culture at DIV10. Somata are trapped in the central chimney region of the cages, spaced 110 μm apart. Process outgrowth through the tunnels is evident, and a rich network has formed; the networking is even richer than shown in the photo, as only the thickest processes are visible. (b) Left: Electron micrograph of a snail neuron immobilized by picket fences after 3 days in culture. Scale bar: 20 μm. Right: Micrograph of a neuronal net with cell bodies (dark blobs) trapped within a double circle of circular fences and neurites grown in the central area (bright threads) after two days in culture. Scale bar: 100 μm. Pairs of pickets in the inner circle are fused into bar-like structures.

Reprinted with permission from Journal of Neuroscience Methods 175, 1 (2008). Copyright 2012 Elsevier Limited. Reprinted with permission from Proc. Natl. Acad. Sci. USA 98, 10457 (2001). Copyright 2013 National Academy of Sciences of the United States of America.

Various experimental strategies have been developed in order to gain control over polarity at the single neuron level or, in other terms, to get an a priori specification of axons and dendrites. However, some of these methods have resulted in a global growth of all the axons in one direction using electric fields (Britland and McCaig, 1996), microfluidic channels (Peyrin et al., 2011) or chemical gradients (Dertinger et al., 2002), ruling out the fabrication of looped networks. A second set of experimental methods has paved the way to an axon-by-axon positioning, but at the price of a complex surface topography (Shinoe et al., 2010) or chemistry (Oliva et al., 2003) that may be hardly compatible with the top surface of an electronic chip.

An alternative strategy would consist of exploitation of axonal characteristics at the microscopic rather than at the molecular level. At least three properties specific to axons have been reported in the literature: resistance to bending associated with the naturally observed straightness of axons (Katz, 1985), the positioning of the centrosome at the base of the nascent axon (de Anda et al., 2005) and the axonal preference for low adhesive conditions in contrast to the tendency of dendrites and soma to spread on surfaces (Prochiantz et al., 1990). This is the strategy that we and others have followed, and each of these axonal properties has inspired various adhesive macro-or micropatterns, as detailed below.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780857097668500107

Noise Exploitation and Adaptation in Neuromorphic Sensors

Thamira Hindo, Shantanu Chakrabartty, in Engineered Biomimicry, 2013

2.2 Organization of neurobiological sensory systems

The typical structure of a neurobiological sensory system is shown in Figure 2.2. The system consists of an array of sensors (mechanical, optical, or auditory receptors) that are directly coupled to a group of sensory neurons, also referred to as afferent neurons. Depending on the type of sensory system, the sensors (skin, hair, retina, cochlea) convert input stimuli such as sound, mechanical force, temperature, or pressure into electrical stimuli. Each of the afferent neurons could potentially receive electrical stimuli from multiple sensors (as shown in Figure 2.2), an organization that is commonly referred to as the sensory receptive field. For example, in the electric fish, the electro-sense receptors distributed on the skin detect a disruption in the electric field (generated by the fish itself) that corresponds to the movement and identification of prey. The receptive field in this case corresponds to electrical intensity spots that are then encoded by the afferent neurons using spike trains [10]. The neurons are connected with each other through specialized junctions known as synapses. While the neurons (afferent or non-afferent) form the core signal-processing unit of the sensory system, the synapses are responsible for adaptation by modulating the strength of the connection between two neurons. The dendrites of the neurons transmit and receive electrical signals to and from other neurons, and the soma receives and integrates the electrical stimuli. The axon, which is an extension of the soma, transmits the generated signals or spikes to other neurons and higher layers.


Figure 2.2. Organization of a generic neurobiological sensory system. Images adapted from Wikipedia and Ref. 11.

The underlying mechanism of spike (action-potential) generation is the unbalanced movement of ions across the membrane, as shown in Figure 2.3, which alters the potential difference between the inside and the outside of the neuron. In the absence of any stimuli to the neuron, the potential inside the membrane with respect to the potential outside is about −65 mV, also referred to as the resting potential. This potential is increased by the influx of sodium ions (Na+) into the cell, causing depolarization, whereas it is decreased by the efflux of potassium ions (K+) out of the cell, causing hyperpolarization. Once the action potential is generated, the Na+ ion channels are unable to reopen immediately until a built-up potential is formed across the membrane. The delay in reopening the sodium channels results in a time period called the refractory period, as shown in Figure 2.3, during which the neuron cannot spike.


Figure 2.3. Mechanism of spike generation and signal propagation through synapses and neurons. (Images from Wikipedia.)

The network of afferent spiking neurons can be viewed as an analog-to-digital converter, where the network faithfully encodes different features of the input analog sensory stimuli using a train of spikes (that can be viewed as a binary sequence). Note that the organization of the receptive field introduces significant redundancy in the firing patterns produced by the afferent neurons. At the lower level of processing, this redundancy makes the encoding robust to noise, but as the spike trains are propagated to higher processing layers this redundancy leads to degradation in energy efficiency. Therefore, the network of afferent neurons self-optimizes and adapts to the statistics of the input stimuli using inhibitory synaptic connections.

The process of inhibition among neurons of the same layer is referred to as lateral inhibition, whereby the objective is to optimize (reduce) the spiking rate of the network while faithfully capturing the information embedded in the receptive field. This idea is illustrated in Figure 2.2, where the afferent neural network emphasizes the discriminatory information present in the input spike trains while inhibiting the rest. This not only reduces the rate of spike generation at the higher layer of the receptive field (leading to improved energy efficiency), but it also optimizes the information transfer that facilitates real-time recognition and motor operation. Indeed, lateral inhibition and synaptic adaptation are related to the concept of noise shaping. Before we discuss the role of noise in neurobiological sensory systems, let us introduce some mathematical models that are commonly used to capture the dynamics of spike generation and spike-based information encoding.

2.2.1 Spiking models of neuron and neural coding

As a convention, the neuron transmitting or generating a spike incident onto a synapse is referred to as the presynaptic neuron, whereas the neuron receiving the spike from the synapse is referred to as the postsynaptic neuron (see Figure 2.3). Also, there are two types of synapses typically encountered in neurobiology: excitatory synapses and inhibitory synapses. For excitatory synapses, the membrane potential of the postsynaptic neuron (referred to as the excitatory postsynaptic potential, or EPSP) increases, whereas for inhibitory synapses, the membrane potential of the postsynaptic neuron (referred to as the inhibitory postsynaptic potential, or IPSP) decreases. It is important to note that the underlying dynamics of EPSPs, IPSPs, and the action potential are complex, and several texts have been dedicated to the underlying mathematics [12]. Therefore, for the sake of brevity, we only describe a simple integrate-and-fire neuron model that has been used extensively for the design of neuromorphic sensors [9] and is sufficient to explain the noise exploitation techniques described in this chapter.

We first define a spike train ρ(t) using a sequence of time-shifted Dirac delta functions as

(2.1)\quad \rho(t) = \sum_{m=1}^{\infty} \delta(t - t_m),

where δ(t) = 0 for t ≠ 0 and ∫_{-∞}^{+∞} δ(τ) dτ = 1. In Eq. (2.1), a spike occurs whenever t equals one of the firing times t_m of the neuron. If the somatic (or membrane) potential of the neuron is denoted by v(t), then the dynamics of the integrate-and-fire model can be summarized using the following first-order differential equation:

(2.2)\quad \frac{d}{dt} v(t) = -\frac{v(t)}{\tau_m} - \sum_{j=1}^{N} W_j \left[ h(t) \ast \rho_j(t) \right] + x(t),

where N denotes the number of presynaptic neurons, W_j is a scalar transconductance representing the strength of the synaptic connection between the jth presynaptic neuron and the postsynaptic neuron, τ_m is the time constant that determines the maximum firing rate, h(t) is a presynaptic filtering function that filters the spike train ρ_j(t) before it is integrated at the soma, and ∗ denotes a convolution operator. The variable x(t) in Eq. (2.2) denotes an extrinsic contribution to the membrane current, which could be an external stimulation current. When the membrane potential v(t) reaches a certain threshold, the neuron generates a spike or a train of spikes. Again, different chaotic models have been proposed that can capture different types of spike dynamics. For the sake of brevity, specific details of the dynamical models can be found in Ref. 13. We next briefly describe different methods by which neuronal spikes encode information.
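Before turning to coding, a discrete-time sketch of Eqs. (2.1) and (2.2) may help make the model concrete. The parameter values below are illustrative rather than taken from the chapter, h(t) is chosen as a simple exponential kernel, and excitatory weights are taken as positive so that the synaptic drive depolarizes the membrane; a threshold-and-reset rule stands in for the spike-generation mechanism.

```python
import math

# Discrete-time sketch of a leaky integrate-and-fire neuron in the spirit of
# Eqs. (2.1)-(2.2). Parameter values are illustrative, not taken from the
# chapter; excitatory weights are positive, so the synaptic drive is added.

def simulate_lif(spike_trains, weights, x=0.0, tau_m=20.0, tau_s=5.0,
                 threshold=1.0, dt=0.1, t_end=200.0):
    """spike_trains: one list of presynaptic spike times (ms) per synapse.
    Returns the list of postsynaptic firing times (ms)."""
    v = 0.0
    syn = [0.0] * len(spike_trains)   # filtered trains h(t) * rho_j(t), exponential h
    out_spikes = []
    for step in range(int(t_end / dt)):
        t = step * dt
        for j, train in enumerate(spike_trains):
            syn[j] *= math.exp(-dt / tau_s)                              # kernel decay
            syn[j] += sum(1.0 for tm in train if abs(tm - t) < dt / 2)   # arriving spikes
        drive = sum(w * s for w, s in zip(weights, syn)) + x             # synaptic + external
        v += dt * (-v / tau_m + drive)                                   # leaky integration
        if v >= threshold:                                               # fire and reset
            out_spikes.append(t)
            v = 0.0
    return out_spikes

# One presynaptic input firing a short burst plus a late spike.
print(simulate_lif([[10.0, 12.0, 15.0, 40.0]], [0.4], x=0.02))
```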

The simplest form of neural coding is the rate-based encoding [13] that computes the instantaneous spiking rate of the ith neuron Ri(t) according to

(2.3)\quad R_i(t) = \frac{1}{T} \int_{t}^{t+T} \rho_i(\tau)\, d\tau,

where ρ_i(t) denotes the spike train generated by the ith neuron and is given by Eq. (2.1), and T is the observation interval over which the integral or spike count is computed. Note that the instantaneous spiking rate R_i(t) does not capture any information related to the relative phase of the individual spikes, and hence it embeds significant redundancy in encoding. However, at the sensory layer this redundancy plays a critical role, because the stimuli need to be precisely encoded and the encoding has to be robust to the loss or temporal variability of individual spikes.

Another mechanism by which neurons improve reliability and transmission of spikes is through the use of bursting, which refers to trains of repetitive spikes followed by periods of silence. This method of encoding has been shown to improve the reliability of information transmission across unreliable synapses [14] and, in some cases, to enhance the SNR of the encoded signal. Modulating the bursting pattern also provides the neuron with more ways to encode different properties of the stimulus. For instance, in the case of the electric fish, a change in bursting signifies a change in the states (or modes) of the input stimuli, which could distinguish different types of prey in the fish’s environment [14].

Whether bursting is used or not, the main disadvantage of rate-based encoding is that it is intrinsically slow. The averaging operation in Eq. (2.3) requires that a sufficient number of spikes be generated within T to reliably compute Ri(t). One possible approach to improve the reliability of rate-based encoding is to compute the rate across a population of neurons where each neuron is encoding the same stimuli. The corresponding rate metric, also known as the population rate R(t), is computed as

(2.4)\quad R(t) = \frac{1}{N} \sum_{i=1}^{N} R_i(t),

where N denotes the number of neurons in the population. By using the population rate, the stimuli can now be effectively encoded at a signal-to-noise ratio that is √N (N^{1/2}) times higher than that of a single neuron [15]. Unfortunately, even an improvement by a factor of √N is not enough to encode fast-varying sensory stimuli in real time. Later, in Section 2.4, we show that lateral inhibition between the neurons would potentially be beneficial, enhancing the SNR of a population code by a factor of N² [16] through the use of noise shaping.
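A minimal sketch of the rate and population-rate readouts of Eqs. (2.3) and (2.4), with spike trains represented simply as lists of spike times; the window T, the example trains, and the function names are assumptions for illustration.

```python
# Sketch of rate-based and population-rate encoding, Eqs. (2.3)-(2.4).
# Spike trains are represented as lists of spike times (ms).

def instantaneous_rate(spike_times, t, T):
    """R_i(t): spike count in the window [t, t + T], divided by T."""
    return sum(1 for tm in spike_times if t <= tm < t + T) / T

def population_rate(all_spike_times, t, T):
    """R(t): average of the single-neuron rates over the population."""
    rates = [instantaneous_rate(s, t, T) for s in all_spike_times]
    return sum(rates) / len(rates)

trains = [[5, 18, 33, 47], [9, 21, 36], [2, 14, 29, 44, 58]]
print(population_rate(trains, t=0, T=60))   # spikes per ms, averaged over 3 neurons
```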

We complete the discussion of neural encoding by describing other forms of codes: time-to-first-spike, phase encoding, and neural correlations and synchrony. We do not describe the mathematical models for these codes but illustrate them using Figure 2.4d–f.


Figure 2.4. Different types of neural coding: (a) rate, (b) population rate, (c) burst coding, (d) time-to-spike pulse code, (e) phase pulse code, and (f) correlation and synchrony-based code. Adapted from Ref. 13.

The time-to-first-spike is defined as the time difference between the onset of the stimulus and the time when a neuron produces its first spike. The time difference is inversely proportional to the strength of the stimulus and can efficiently encode stimuli in real time compared to the rate-based code. The time-to-first-spike code is efficient since most of the information is conveyed during the first 20–50 ms [17, 18]. However, time-to-first-spike encoding is susceptible to channel noise and spike loss; therefore, this type of encoding is typically observed in the cortex, where the spiking rate can be as low as one spike per second.
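A time-to-first-spike readout can be sketched as follows; the function name and example numbers are illustrative assumptions.

```python
# Illustrative time-to-first-spike readout: latency from stimulus onset to the
# first spike, which the text notes varies inversely with stimulus strength.

def time_to_first_spike(spike_times, stimulus_onset):
    later = [t for t in spike_times if t >= stimulus_onset]
    return min(later) - stimulus_onset if later else None

print(time_to_first_spike([12.5, 30.0, 55.0], stimulus_onset=10.0))  # -> 2.5
```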

An extension of the time-to-first-spike code is the phase code, which applies to periodic stimuli. An example of phase encoding is shown in Figure 2.4e, where the spiking rate varies with the phase of the input stimulus. Yet another kind of neural code that has attracted significant interest from the neuroscience community uses the information encoded by correlated and synchronous firing between groups of neurons [13]. This response is referred to as synchrony and is illustrated in Figure 2.4f, where a sequence of spikes generated by neuron 1, followed by neuron 2 and neuron 3, encodes a specific feature of the input stimulus. Information is thus encoded in the trajectory of the spike pattern, which provides a more elaborate mechanism for encoding different stimuli and their properties [19].

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780124159952000027

Implementation of biomimetic central pattern generators on field-programmable gate array

M. Ambroise, ... S. Saïghi, in Biomimetic Technologies, 2015

12.3.3.2 System blocks

Presenting the architecture in this way provides an overview of the way the entire system operates, on the basis of three blocks: the computation core dedicated to neurons, the one dedicated to synapses, and the RAM. The way each of these three blocks is connected is presented in Figure 12.3.

To summarize, the computation core dedicated to neurons updates the state variables for all the neurons (u and v) and applies the exponential decrease to each synaptic current. In our neural network, the role of the computation core dedicated to synapses is to update all the currents and connection weights. This computation core, therefore, behaves in two different ways, depending on whether it receives a spike or not.

Furthermore, the Izhikevich (IZH) neuron model has a time-step resolution of 1 ms, which must be respected. Consequently, all the “u” and “v” value updates, the exponential decrease in synaptic currents, and the synaptic value updates (currents and depression factor) must be completed within the same millisecond.

Our implementation consisted of a network of Nn neurons and Ns synapses. Each synapse was described by three parameters: a weight, Wsyn (whose sign indicates whether the synapse is inhibitory or excitatory), a scaling factor, xsyn, and a percentage, P. Consequently, three matrices of identical shape were stored in the RAM for these three parameters, in addition to a connectivity matrix (indicating the postsynaptic neurons connected to each presynaptic neuron).

To minimize the size of the RAM, the matrices were created with Nn lines. The ith line in the connectivity matrix thus corresponded to the connectivity of presynaptic neuron Ni with the other neurons. In this way, synapses were identified by the addresses of their postsynaptic neurons. So that each neuron could have a different-sized connectivity list, each line in the connectivity matrix ended with the address of a virtual “end of list” neuron, Nn + 1. This address, therefore, delimited the connectivity list of each neuron and saved memory space.

This is not an optimal solution in cases where a neuron is connected to itself and all the others (the worst case) but saves memory in other cases. Furthermore, this worst case does not correspond to a biologically plausible network and this remains the best solution for a CPG implementation with few synapses. Indeed, according to Marom and Shahaf (2002) and Garofalo et al. (2009), each neuron in a network is connected to 10–30% of the other neurons in the same network.

Occurrences of the address Nn + 1 in the connectivity matrix correspond to the element 0 in the three other matrices. For example, if synapse 10 is the synapse connecting presynaptic neuron 2 to postsynaptic neuron 15 with a synaptic weight of 5, a scaling factor of 1, and a percentage of 0.1%, then address 10 holds the following values (a software sketch of this scheme follows the list):

in the connectivity matrix is address 15 (address of the postsynaptic neuron)

in the synaptic weight matrix is 5 (synaptic weight value)

in the percentage matrix is 0.1

in the scaling factor matrix is 1
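A software sketch of this sentinel-terminated storage scheme is given below. Python lists stand in for the block-RAM matrices, and all names are illustrative; the actual design stores fixed-point values and addresses in RAM.

```python
# Software sketch of the sentinel-terminated synapse storage described above.
# Each presynaptic neuron owns one row; the row lists the addresses of its
# postsynaptic targets and ends with the virtual "end of list" address Nn + 1.
# The weight, percentage, and scaling-factor rows mirror the connectivity row,
# with 0 stored in the sentinel position. Names are illustrative, not the design.

NN = 100                      # number of neurons (Brainbow-sized example)
END_OF_LIST = NN + 1          # virtual neuron address terminating each row

connectivity = [[END_OF_LIST] for _ in range(NN)]
weights      = [[0.0] for _ in range(NN)]
percentages  = [[0.0] for _ in range(NN)]
scalings     = [[0.0] for _ in range(NN)]

def add_synapse(pre, post, w, p, x):
    """Insert a synapse just before the sentinel of presynaptic neuron `pre`."""
    connectivity[pre].insert(-1, post)
    weights[pre].insert(-1, w)
    percentages[pre].insert(-1, p)
    scalings[pre].insert(-1, x)

# The example from the text: presynaptic neuron 2 -> postsynaptic neuron 15,
# weight 5, percentage 0.1, scaling factor 1.
add_synapse(2, 15, w=5.0, p=0.1, x=1.0)
print(connectivity[2], weights[2], percentages[2], scalings[2])
```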

As RAM is also a precious resource, it was not used to store Nn × Nn matrices. Indeed, CPGs are only small neural networks (8 neurons and 12 synapses).

In the European Brainbow project, our platform hosted 100 neurons and 1200 (external and internal) synapses. Furthermore, a synaptic delay was added to our network for this project. A synaptic delay consists of delaying the arrival of an action potential for a time ranging from 1 to 51 ms. To achieve this, each synapse had a 6-bit delay value Tdelay and a 50-bit vector capable of storing an action potential in the Tdelay position. Every millisecond, the 50-bit vectors are shifted to the right and the current action potential is indicated by the least-significant bit.
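The delay mechanism can be sketched as a simple shift register, as below. A Python integer stands in for the 50-bit vector, and the names and exact bit mapping are illustrative assumptions rather than the FPGA implementation.

```python
# Illustration of the synaptic delay line described above: an incoming action
# potential is written Tdelay bit positions away, the vector is shifted right
# by one bit every millisecond, and the least-significant bit marks the action
# potential that is due now.

class DelayLine:
    def __init__(self, t_delay_ms):
        assert 1 <= t_delay_ms <= 51      # 6-bit Tdelay, 1-51 ms range in the text
        self.t_delay = t_delay_ms
        self.bits = 0                     # stands in for the 50-bit vector

    def insert_spike(self):
        self.bits |= 1 << (self.t_delay - 1)   # store the spike Tdelay positions away

    def tick_1ms(self):
        """Shift right by one bit; return True if a delayed spike arrives now."""
        due = bool(self.bits & 1)
        self.bits >>= 1
        return due

line = DelayLine(t_delay_ms=3)
line.insert_spike()
print([line.tick_1ms() for _ in range(5)])   # the spike emerges on the third tick
```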

Table 12.1 shows the resources required for the implementation, according to the number of options (delay and short-term plasticity) required.

Table 12.1. Implementation on a Spartan 6 LX 150 for 100 neurons and 1200 synapses (fixed-point architecture on 31 bits). Columns give the resources used with both options (delay and short-term plasticity), with delay only, with short-term plasticity only, and with neither option.

Resource | Available total | Delay + short-term plasticity | Delay only | Short-term plasticity only | Neither
Slice FFs | 184,304 | 2398 | 1981 | 2075 | 1720
Slice LUTs | 92,152 | 4219 | 3558 | 4034 | 3409
DSP48A1 | 180 | 36 | 28 | 36 | 28
RAMB16BWER | 268 | 40 | 22 | 20 | 8
RAMB8BWER | 536 | 17 | 11 | 15 | 11
Total RAM available: 4824 kb

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780081002490000124

Alkaloids as Potential Multi-Target Drugs to Treat Alzheimer's Disease

Josélia A. Lima, Lidilhone Hamerski, in Studies in Natural Products Chemistry, 2019

Cholinergic Neurotransmission in the Central Nervous System

The transmission of information between cholinergic neurons (Fig. 8.3) occurs through the release of ACh by presynaptic neurons into the synaptic cleft. ACh diffuses across the cleft to bind to nicotinic acetylcholine receptors (nAChR) and muscarinic acetylcholine receptors (mAChR) on the postsynaptic neurons, as illustrated in Fig. 8.3. Much of the released ACh (about 90%) is rapidly hydrolyzed into choline and acetate by AChE, found in soluble form in the synaptic cleft or bound to the basement membrane. The remainder (about 10%) of the ACh diffuses through the synaptic cleft and reaches the postsynaptic neuron, where it interacts with cholinergic receptors, activating them. After dissociating from the receptors, ACh is rapidly hydrolyzed by AChE.


Fig. 8.3. Representation of cholinergic neurotransmission. ACh is synthesized in the presynaptic neuron, is released into the synaptic cleft, and moves to the postsynaptic neuron, where it binds to and activates cholinergic receptors. ACh is hydrolyzed by AChE in the synaptic cleft.

ACh is synthesized in the neuronal cytoplasm from choline and acetyl coenzyme A (AcCoA) by the catalytic action of choline acetyltransferase (ChAT), a cytosolic protein found only in cholinergic neurons. ACh is then stored in synaptic vesicles (on average 8000 molecules per vesicle), which mature in the axon and are transported, by axonal transport along microtubules, to the axon terminal, where they anchor in regions called active zones (AZ), as shown in Fig. 8.3. The vesicles anchored in the AZ are preferentially released when cytoplasmic calcium (Ca2+) levels increase, owing to the depolarization induced by the generation of an action potential [45].

Any interference in the steps of synthesis, storage, or release of ACh leads to a reduction in the release of this neurotransmitter and, consequently, to a failure in the transmission of information.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780444641830000087

PET radiopharmaceuticals for imaging inflammatory diseases

Xiang-Guo Li, ... Anne Roivainen, in Reference Module in Biomedical Sciences, 2021

Cannabinoid receptor 2

In the context of PET imaging of neuroinflammation, CB2R is a relevant subtype among the cannabinoid receptors. Under normal physiological conditions, CB2R is not detectable in the brain. However, upon neuroinflammation, CB2R is upregulated in microglia, macrophages, and post-synaptic neurons. Thus, it is feasible to image neuroinflammation by targeting CB2R. Dozens of CB2R-targeting PET tracers have so far been developed, and most of them are in the preclinical study phases. The tracers are radiolabeled oxoquinolines, indoles, oxadiazoles, thiophenes, or thiazoles (Ni et al., 2019). For example, [11C]NE40 is an oxoquinoline compound with high affinity (KD = 9.6 nM) to CB2R and 100-fold higher selectivity to CB2R over cannabinoid receptor 1. The clinical safety and dosimetry of [11C]NE40 were studied in healthy volunteers, and the results showed that [11C]NE40 is well tolerated in humans, with a reasonable radiation burden. However, the imaging performance of [11C]NE40 in patients with Alzheimer's disease was not as good as that observed in the preclinical trials, presumably because of variations in CB2R expression among experimental disease models and patients (Ahmed et al., 2016).

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780128229606000752

What happens when a neurotransmitter binds to a postsynaptic neuron?

The binding of neurotransmitters, either directly or indirectly, causes ion channels in the postsynaptic membrane to open or close (Figure 7.1). Typically, the resulting ion fluxes change the membrane potential of the postsynaptic cell, thus mediating the transfer of information across the synapse.

What type of channels in the postsynaptic membrane open when neurotransmitters bind to them?

The binding of a specific neurotransmitter causes particular ion channels, in this case ligand-gated channels, on the postsynaptic membrane to open.

What channels do neurotransmitters open?

Ionotropic receptors, also called ligand-gated channels, are ion channels that are opened by the binding of neurotransmitters. Voltage-gated channels are opened when the membrane potential of the cell reaches threshold. Both types of channels allow ions to diffuse down their electrochemical gradients.

What type of channel do neurotransmitters bind to and open?

Ligand-gated ion channels (LGICs) are one type of ionotropic receptor or channel-linked receptor. They are a group of transmembrane ion channels that are opened or closed in response to the binding of a chemical messenger (i.e., a ligand), such as a neurotransmitter.