This is the first section of Chapter 7, which is about signal transmission and neuronal coding, i.e., the reaction of a population of neurons to different inputs.
Linearized Population Equation
In this section, we linearize the population equation. More specifically, we develop a perturbation theory of the population equations.
- Input: $I(t)=I_0+\Delta I(t)$. The input is restricted to a small perturbation around a constant input.
- Population activity: $A(t) = A_0 + \Delta A(t)$. We expect the output also not to change dramatically. This might not hold if the equation is nonlinear, especially for integro-differential equations. In the examples given in Sec. 6.3, this perturbative output assumption always holds, but we should keep this caveat firmly in mind.
From Eq. 6.75, we can derive an equation that shows how this method works.
We would also expect the interval distribution to be perturbative under small perturbations of the input current, i.e.,
In a perturbation expansion, we drop all higher-order terms. Thus Eq. ($\ref{eqnperturbation1}$) becomes
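A sketch of the expansion (writing $P_0(t-\hat t)$ for the unperturbed interval distribution and $\Delta P(t\vert\hat t)$ for its first-order change; the steps here are my own reconstruction): substituting the perturbed quantities into the population equation $A(t)=\int_{-\infty}^{t} P_I(t\vert\hat t)\,A(\hat t)\,d\hat t$ gives

$$A_0 + \Delta A(t) = \int_{-\infty}^{t} \big[P_0(t-\hat t) + \Delta P(t\vert\hat t)\big]\,\big[A_0 + \Delta A(\hat t)\big]\, d\hat t.$$

The zeroth-order terms cancel, since $A_0 = A_0\int_{-\infty}^{t} P_0(t-\hat t)\, d\hat t$ by normalization of the interval distribution. Dropping the second-order term $\Delta P\,\Delta A$ leaves

$$\Delta A(t) = \int_{-\infty}^{t} P_0(t-\hat t)\,\Delta A(\hat t)\, d\hat t + A_0 \int_{-\infty}^{t} \Delta P(t\vert\hat t)\, d\hat t.$$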
Some Simplifications
How do we calculate the integral $\int_{-\infty}^t P_I(t\vert\hat t)\, d\hat t$? For models without noise, we have
which means the spike time depends only on the interval since the last spike. The integral $\int_{-\infty}^t P_I(t\vert\hat t)\, d\hat t$ becomes
Eq. ($\ref{eqnperturbation2}$) reads
More Simplifications
The problem reduces to the calculation of $P_I(t\vert\hat t)$. Recall that the interval distribution $P_I$ is (minus) the time derivative of the survivor function (Sec. 5.2.3). The subscript ${}_I$ indicates the dependence on the input current.
A quick review of the definitions:

The interval distribution $P_I(t\vert\hat t)$ is defined as the probability density that a neuron whose last spike was at $\hat t$ fires its next spike at $t$.

The survivor function $S_I(t\vert\hat t)$ is the probability that a neuron which fired at $\hat t$ stays quiescent up to time $t$.
We also derived

$$P_I(t\vert\hat t) = -\frac{d}{dt} S_I(t\vert\hat t).$$

The hazard function is the instantaneous firing rate given survival,

$$\rho_I(t\vert\hat t) = \frac{P_I(t\vert\hat t)}{S_I(t\vert\hat t)}.$$

Meanwhile,

$$S_I(t\vert\hat t) = \exp\left(-\int_{\hat t}^{t} \rho_I(t'\vert\hat t)\, dt'\right).$$
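These relations are easy to check numerically. A minimal sketch, assuming a toy hazard that is zero during an absolute refractory period and constant afterwards (my own choice of hazard, not the book's):

```python
# Numerical sanity check of the survivor/interval/hazard relations,
# for a hypothetical hazard with absolute refractoriness:
# rho(s) = 0 for s < d_abs, rho(s) = rho0 afterwards.
import numpy as np

rho0, d_abs = 20.0, 0.004      # hazard rate (1/s), refractory period (s)
dt = 1e-5
s = np.arange(0.0, 0.5, dt)    # time since the last spike
rho = np.where(s < d_abs, 0.0, rho0)

# Survivor function S(s) = exp(-int_0^s rho(s') ds')
S = np.exp(-np.cumsum(rho) * dt)
# Interval distribution P(s) = rho(s) * S(s)
P = rho * S

# Check P = -dS/ds (away from the jump of rho at s = d_abs)
dS = -np.gradient(S, dt)
mask = np.abs(s - d_abs) > 10 * dt
assert np.allclose(P[mask], dS[mask], atol=1e-2)
# Check normalization: int_0^T P(s) ds = 1 - S(T)
assert abs(np.sum(P) * dt - (1.0 - S[-1])) < 1e-3
```

The same scheme works for any hazard $\rho_I(t\vert\hat t)$ obtained from an escape-noise model.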
I am not sure how to proceed from Eq. ($\ref{eqnperturbation3}$).
Anyway, the book postulates a solution with multiple integrals,
with
Interpretations
- The first term in Eq. ($\ref{eqnperturbationfinal}$) describes the dependence on the past perturbations of the population activity.
- The second term describes the dependence on the input variations.

- Low-noise limit: the kernel $\mathcal L(x) \to \delta(x)$ (Eq. 6.75 in the book).
This remarkable result tells us that any fast change in the input current leads to a sharp response in the population activity, even when the amplitude of the change is small, because the response is proportional to the time derivative of the input potential.
- Large-noise limit: the result depends critically on the type of noise.
- Slow noise (adiabatic limit): the system responds fast enough to track the noise at any time. This is similar to the noise-free limit, with the noise acting as part of the effective input; a rapidly changing input again produces a large change in population activity.
- Fast noise: the system is too slow to track the noise. The noise effectively averages over the recent past, so the kernel $\mathcal{L}(x)$ becomes broad. In the limit of a flat kernel, the population activity follows the amplitude of the potential change $\Delta h$.
- Sec. 7.1.1 shows that the noise-free system indeed responds to the time derivative of the input.
- Sec. 7.1.2 shows that escape noise, i.e., fast noise, indeed yields a broad kernel $\mathcal{L}(x)$.
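The two limits can be illustrated numerically. A sketch assuming the input-driven part of the linear response has the form $\Delta A(t) = A_0\,\frac{d}{dt}\int_0^\infty \mathcal L(x)\,\Delta h(t-x)\,dx$, and using Gaussian kernels of different widths as stand-ins for $\mathcal L$ (both the Gaussian shape and the numbers are my own assumptions):

```python
# Narrow kernel (low noise) vs broad kernel (fast noise) response
# to the same step in the input potential Delta h.
import numpy as np

dt = 1e-4
t = np.arange(-0.1, 0.3, dt)
A0, dh_amp = 10.0, 1.0
dh = np.where(t >= 0.0, dh_amp, 0.0)    # step in the input potential

def delta_A(width):
    """Linear response A0 * d/dt (L * dh) for a normalized Gaussian L."""
    x = np.arange(0.0, 0.1, dt)
    L = np.exp(-0.5 * ((x - 0.02) / width) ** 2)
    L /= L.sum() * dt                    # normalize: int L(x) dx = 1
    conv = np.convolve(dh, L)[: len(t)] * dt
    return A0 * np.gradient(conv, dt)

sharp = delta_A(width=0.0005)  # low noise: narrow kernel -> large, fast peak
broad = delta_A(width=0.02)    # fast noise: broad kernel -> small, slow rise

print(sharp.max(), broad.max())  # the sharp transient is much larger
```

Both responses integrate to the same total $A_0\,\Delta h$; only their time course differs, which is exactly the distinction between the two limits.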
A rapid change in input might signal danger for the host of the neural system; sensitivity to such transients might therefore be useful for an animal's survival.
Transients
The problem to solve in this section is the time course of population activity under a rapid change of input.
Consider a step-like input,

$$I(t) = I_0 + \Delta I\,\Theta(t-t_0),$$

which generates a step-like change of the input potential,

$$\Delta h(t) = \Delta I \int_0^{t-t_0} \kappa(s)\, ds \quad \text{for } t > t_0.$$

As long as the kernel $\kappa$ is given, we can obtain the time course of every quantity.
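As a concrete sketch, assuming the exponential kernel of the leaky integrate-and-fire model, $\kappa(s)=(R/\tau)\,e^{-s/\tau}$ (my choice of kernel for illustration), the step current produces an exponential rise of the potential; the numerical convolution can be checked against the closed form:

```python
# Pass a step current through kappa(s) = (R/tau) * exp(-s/tau) and
# compare with Delta h(t) = R*dI*(1 - exp(-(t - t0)/tau)) for t > t0.
import numpy as np

R, tau, dI, t0 = 1.0, 0.01, 0.5, 0.05
dt = 1e-5
t = np.arange(0.0, 0.2, dt)

dI_t = np.where(t >= t0, dI, 0.0)        # step current Delta I(t)
s = np.arange(0.0, 0.1, dt)
kappa = (R / tau) * np.exp(-s / tau)     # exponential input kernel

dh_num = np.convolve(dI_t, kappa)[: len(t)] * dt
dh_exact = np.where(t >= t0, R * dI * (1.0 - np.exp(-(t - t0) / tau)), 0.0)

assert np.allclose(dh_num, dh_exact, atol=1e-3)
```

The closed form follows directly from the integral $\Delta h(t) = \Delta I \int_0^{t-t_0}\kappa(s)\,ds$ above.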
We assume the population activity has reached equilibrium (asynchronous firing) before $t_0$. The population activity then also undergoes a sudden change.
This change is called a transient. During a short time interval $[t_0,t_0+\Delta t]$, we expect the first term in Eq. ($\ref{eqnperturbationfinal}$) not to contribute, because the network has not yet reacted to the change at time $t_0$, i.e., $\Delta A(\hat t)=0$. We thus have to solve

$$\Delta A(t) = A_0 \frac{d}{dt}\int_0^\infty \mathcal L(x)\, \Delta h(t-x)\, dx$$

for $t-t_0\ll T$, where $T=1/A_0$ is the mean interspike interval.
As expected, the short time interval $[t_0,t_0+\Delta t]$ is short enough if $\Delta t\ll 1/A_0$.
Noise-free Network
For a noise-free network, averaging over time is equivalent to averaging over the population (ensemble).
We can therefore inspect a single neuron. A neuron that fired at $t_0$ will fire again at $t_0+T$, where $T$ is determined by $u(t_0+T)=\theta$. Meanwhile, the population theory gives us the change in population activity
for $t_0< t<t_0+T$.
The solution is written as
where $a$ can be derived for different models.
Does it come from differentiating the threshold condition with respect to time? I am not sure how it is derived.
- SRM0: $a=R\Delta I/\eta'$, where $\eta$ is the intrinsic response kernel;
- Integrate-and-fire: $a=R\Delta I/u'$.
Kernel $\kappa$: for the leaky integrate-and-fire model, $\kappa(s) = (R/\tau_m)\, e^{-s/\tau_m}\,\Theta(s)$, with membrane time constant $\tau_m$.

Integrate-and-fire model:
For the integrate-and-fire model, we can show that the input potential is

$$h(t) = h_0 + R\,\Delta I\left[1-e^{-(t-t_0)/\tau_m}\right]$$

after the input current changes, i.e., for $t>t_0$. Within a short period, $h(t)\approx h_0$. However, as derived above, the population activity changes drastically,
for $t_0<t<t_0+T$.
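A quick numerical check of this contrast, with an assumed membrane time constant (toy numbers, not from the book):

```python
# With h(t) - h0 = R*dI*(1 - exp(-(t - t0)/tau)), the potential has
# moved only a few percent of the way to its new value shortly after
# the step, even though the noise-free population activity responds
# immediately at t0.
import math

tau = 0.010    # membrane time constant (s), assumed
eps = 0.0005   # elapsed time since the step (s)
frac = 1.0 - math.exp(-eps / tau)   # fraction of the final change in h
print(f"fraction of Delta h reached after {eps * 1e3:.1f} ms: {frac:.3f}")
```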
Refer to Fig. 7.5.
Transients with Noise
For slow noise, refer to Fig. 7.6A.
Some random questions:
- How to explain the abrupt transient? Before the input changes, there are always some neurons just below threshold in the population; increasing the input makes these neurons fire immediately.
- Why does the population activity approach another asynchronous state after a long time? I do not know yet.
For fast noise (the standard rate model, or Wilson-Cowan-type models), refer to Fig. 7.6B.
For SRM0 with escape noise, refer to Fig. 7.7.
For diffusive noise, refer to Fig. 7.8.