A recent study has shown that a particular class of primary motor cortex neurons in monkeys encodes the speed of movement. These cells exhibit the typical bell-shaped profile of velocity as a function of distance, indicating that they carry this information.
However, the behavior of these neurons cannot be accounted for by assuming that their responses code only a particular aspect of the observed movement. Rather, they may also predict the beginning and outcome of an action (for example, by exploiting contextual information), similar to prefrontal neurons in humans.
Several studies have investigated mirror neuron-like responses in monkeys. They have shown that some neurons in the lateral intraparietal area respond both when a monkey orients its attention toward their receptive fields and when it sees other monkeys orienting in the same direction. These results suggest that mirror neuron-like responses are common in the primate brain and have been preserved over the course of evolution.
In this study, we tested the hypothesis that the amplitude of stimulus pulses detected by monkeys is reflected in the activity of neurons in the ventral posterolateral (VPL) nucleus of the somatosensory thalamus. We used a tactile detection task in which a probe tip gently indented the skin of the monkey’s hand and a light illuminating a button was activated when the corresponding stimulus was present.
We recorded the activity of quickly adapting (QA) and slowly adapting (SA) neurons during each trial; stimulation began approximately 200 ms after the probe tip touched the skin (probe down [PD]). QA neurons exhibited an abrupt increase in firing rate when the probe tip indented the skin but returned to their spontaneous rates within a few hundred milliseconds. In contrast, SA neurons maintained an elevated firing rate from PD until the probe tip was lifted off the skin. Regardless of stimulus amplitude, modulations in firing rate within a 50-ms window essentially accounted for overall psychophysical performance.
These findings are consistent with a neural code for stimulus amplitude that is faithfully carried by spike train periodicity, but not by the duration of single stimulation pulses. Under this interpretation, we calculated neurometric curves based on the spike train periodicity evoked during the stimulation period and compared them with the corresponding psychometric curves (Fig. 3C).
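The general logic of a neurometric curve can be sketched with a toy ROC computation. The simulation below uses made-up Poisson spike counts in a 50-ms window, not our recorded data, and the rates and amplitudes are illustrative choices only; it shows how detection probability is read out from the overlap between stimulus-present and stimulus-absent count distributions.

```python
import numpy as np

rng = np.random.default_rng(0)

def detect_probability(stim_counts, noise_counts):
    """ROC-based detection probability: the probability that a spike
    count from a stimulus trial exceeds one from a no-stimulus trial
    (ties split evenly). This equals the area under the ROC curve."""
    s = stim_counts[:, None]
    n = noise_counts[None, :]
    return np.mean((s > n) + 0.5 * (s == n))

# Hypothetical spike counts in a 50-ms window: rate grows with amplitude.
amplitudes = np.array([0.0, 2.0, 4.0, 8.0, 16.0])   # arbitrary units
noise = rng.poisson(2.0, size=1000)                  # no-stimulus trials
neurometric = [
    detect_probability(rng.poisson(2.0 + 0.5 * a, size=1000), noise)
    for a in amplitudes
]
# The curve rises from near chance (0.5) toward 1.0 with amplitude.
```

A psychometric curve built from the animal's hit rates at the same amplitudes can then be compared point by point with this neurometric curve.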
The neurometric curves showed that sensitivity to a change in stimulus amplitude was much stronger for the first stimulation pulses than for later ones, although individual neurons were generally still slightly worse detectors than the monkeys themselves. This difference was most apparent when the monkeys were required to report detection of the stimulus on each trial, but it persisted when they were not required to perform the task. Thus, the VPL seems to be the most likely source of stimulus-related information for the cerebral cortex, where it is transformed into other cognitive components related to behavioral performance.
Many experiments have demonstrated that attention enhances visuospatial sensitivity (Eriksen and Rohrbaugh, 1970; Nakayama and Mackeben, 1989; Posner, 1980; Treisman, 1980). In our case, the effect is probably due to modulation of the firing rate of VPL neurons within a short time window, which can account for the monkeys’ increased sensitivity to vibrotactile stimuli. The ability to modulate firing rate over a short period may also help explain why the sensitivity of these neurons is similar to the monkeys’ perceptual and motor sensitivities, both during task performance and during passive stimulation.
To investigate the nature of the activation process in the ventrolateral prefrontal cortex (VLPF), we trained monkeys to observe videos showing biological movements and object motion. We recorded the discharge of 102 VLPF neurons during presentation of these videos. The majority of the neurons responded to at least one video, and some were highly selective, responding only during a certain epoch of one or more videos.
The neurons discharging during observation of goal-directed actions were the most selective. Among these, the highest percentage preferred monkey or human grasping and mimicking actions. A lower proportion of the neurons responded during observation of non-goal-directed movements.
These results are in line with previous studies on mirror neurons for hand-action observation in primates. During observation of a monkey or human grasping action, some mirror neurons responded to the hand movement, whereas others responded to the act of placing the object into a cup but not to the act of eating it (Kinoshita et al., 2019).
Among the neurons responding to more than one video, a higher percentage discharged during observation of goal-directed actions. These neurons began discharging at the start of the first epoch of the video, their activity peaked just before the hand-object interaction, which is the main part of the observed movement, and they continued to discharge until the end of the video.
We performed behavioral tests in both monkeys twenty-two months after the optogenetic manipulations, revealing subtle but lasting deficits in visual sensitivity in regions of the visual field corresponding to the inactivated regions of V1. The deficits were reflected in the probability, latency, and accuracy of visually guided saccades, although they were not as dramatic as those we observed on the contrast detection task.
The receptive field is the region of space in which a sensory stimulus will elicit an electrical response from a neuron. It is a key concept in sensory neurobiology, and it has been used to map many sensory systems from the photoreceptors to the lateral geniculate nucleus, to the visual cortex, and beyond.
Receptive fields of neurons in different brain areas are organized differently. For example, in the retina and the lateral geniculate nucleus, receptive fields have a center-surround structure; in primary visual cortex, they are oriented and bandpass (selective for structure at particular spatial scales).
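The center-surround structure is classically modeled as a difference of Gaussians: a narrow excitatory center minus a broad inhibitory surround. The sketch below is a minimal illustration with arbitrary scale parameters (`sigma_center`, `sigma_surround` are illustrative choices, not fitted values); it shows why such a filter ignores uniform illumination but responds to a small central spot.

```python
import numpy as np

def difference_of_gaussians(size=21, sigma_center=1.0, sigma_surround=3.0):
    """Center-surround receptive field as a difference of two Gaussians.
    Both Gaussians are normalized to unit volume, so the filter gives
    (near-)zero response to spatially uniform input."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    center = np.exp(-r2 / (2 * sigma_center**2))
    surround = np.exp(-r2 / (2 * sigma_surround**2))
    return center / center.sum() - surround / surround.sum()

rf = difference_of_gaussians()

uniform = np.ones((21, 21))                    # flat illumination
spot = np.zeros((21, 21)); spot[10, 10] = 1.0  # small spot on the center

resp_uniform = np.sum(rf * uniform)  # center and surround cancel
resp_spot = np.sum(rf * spot)        # excitatory center dominates
```

Here the linear response is just the pointwise product of filter and image, summed; an on-center cell's preference for local contrast over mean luminance falls out of the normalization of the two Gaussians.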
In higher brain areas, receptive fields are more complex and selective, for example for the orientation or direction of motion of a stimulus. This greater selectivity is likely a result of an increase in the complexity of the underlying neural circuitry.
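An orientation-selective receptive field is commonly modeled as a Gabor filter: a sinusoidal grating windowed by a Gaussian envelope. The following minimal sketch, with illustrative parameter values rather than measured ones, shows why such a filter responds strongly to a grating at its preferred orientation and only weakly to the orthogonal one.

```python
import numpy as np

def gabor(size=21, sigma=3.0, wavelength=6.0, theta=0.0):
    """Oriented receptive field: cosine grating along direction theta
    (radians), windowed by an isotropic Gaussian envelope."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    envelope = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def grating(size=21, wavelength=6.0, theta=0.0):
    """Full-field cosine grating oriented along theta."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    return np.cos(2 * np.pi * xr / wavelength)

rf = gabor(theta=0.0)
preferred = abs(np.sum(rf * grating(theta=0.0)))       # matched orientation
orthogonal = abs(np.sum(rf * grating(theta=np.pi / 2)))  # 90 deg away
# preferred >> orthogonal: the linear response is orientation tuned
```

Sweeping `theta` of the test grating traces out the familiar bell-shaped orientation tuning curve of such a linear filter.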
For this reason, it is often difficult to give a description of a specific neuron’s receptive field that is robust to changes in the surrounding stimulus context. It has therefore been important to study receptive fields on a global basis and across multiple brain regions.
These studies have revealed that receptive fields are correlated with one another and evolve over time. They have also shown that receptive fields are dynamic and that their contributions to output computations are nonlinear.
This means that the effective receptive field, defined as the effective contribution of each pixel to an output, is not a uniform distribution; rather, it is approximately Gaussian. This leads to a greater gradient magnitude for pixels near the center of the receptive field and a smaller one for pixels near its edges.
This effect has been incorporated into a variety of techniques for training artificial neural networks on tasks such as image segmentation, object detection and recognition, and facial expression analysis. The effective receptive field itself can be measured with backpropagation, by computing the gradient of an output unit with respect to the input pixels; this reflects the idea that receptive fields are dynamic and that not all pixels contribute equally to an output computation.
Effective receptive fields have also been characterized in deep neural networks, and these studies have shown that the effective receptive field is a dynamic function that can be influenced by many factors, such as network topology, training method, and input data. It is not surprising, then, that the effective receptive field of a deep convolutional neural network is Gaussian rather than uniform.
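The Gaussian shape can be reproduced in a few lines: backpropagating a unit gradient through a stack of convolutional layers repeatedly convolves it with the kernel, so by the central limit theorem the input-gradient profile tends toward a Gaussian. The 1-D numpy sketch below uses uniform kernels as an idealization (real trained kernels are not uniform, but the concentration effect is the same).

```python
import numpy as np

def effective_receptive_field_1d(num_layers, kernel_size=3):
    """Gradient of one output unit w.r.t. the input of a stack of 1-D
    convolutions with uniform kernels. Each layer of backpropagation
    convolves the gradient with the kernel, so the result is the kernel
    convolved with itself num_layers times -- approximately Gaussian."""
    kernel = np.ones(kernel_size) / kernel_size  # uniform weights
    grad = np.array([1.0])                       # delta at the output unit
    for _ in range(num_layers):
        grad = np.convolve(grad, kernel)         # one layer of backprop
    return grad

erf = effective_receptive_field_1d(num_layers=10)
# The theoretical receptive field spans all 21 input positions, but the
# effective contribution is heavily concentrated near the center: the
# central gradient is orders of magnitude larger than the edge gradient.
```

This is why the theoretical receptive field size of a deep network overstates how much of the input actually drives an output unit.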
Detecting a sensory stimulus is no walk in the park. It involves complex circuitry that spans several subcortical relay stations at multiple levels of the brain, including thalamic nuclei such as the lateral geniculate nucleus and their cortical targets.
The most obvious part of this process is the ascending sensory input that activates neurons in the VPL, which then send axons toward subcortical motor centers and help trigger responses. This loop helps explain why the somatosensory thalamus is one of the most heavily influenced areas of the brain and is involved in many cognitive processes, such as learning, memory, attention, and emotion.
It is also interesting that some of the most sophisticated sensory thalamic circuitry appears to be responsible for the most basic sensory task, that is, the detection of tactile stimuli. In this regard, the VPL is well suited to the job, as it has been shown to encode touch and is a key relay in the somatosensory pathway.
In short, the most impressive thing about the VPL is that it does this job in concert with other sensory and motor centers. This translates into more efficient control of motor outputs, such as finger or hand movements. As a result, downstream areas such as the ventral premotor cortex may play a key role in the acquisition of complex motor skills, such as reaching for and grasping an object placed in a cup.