Foraging as a natural visual search for multiple targets has increasingly been studied in humans in recent years. Here, we aimed to model the differences in foraging strategies between feature and conjunction foraging tasks found by Kristjánsson et al. (2014). Bundesen (1990) proposed the Theory of Visual Attention (TVA) as a computational model of attentional function that divides the selection process into filtering and pigeonholing. The theory describes a mechanism by which the strength of sensory evidence serves to categorize elements. We combined these ideas to train augmented Naïve Bayesian classifiers using data from Kristjánsson et al. (2014) as input. Specifically, we attempted to answer whether it is possible to predict how frequently observers switch between different target types during consecutive selections (switches) during feature and conjunction foraging using Bayesian classifiers. We formulated eleven new parameters that represent key sensory and bias information that could be used for each selection during the foraging task and tested them with multiple Bayesian models. Separate Bayesian networks were trained on feature and conjunction foraging data, and parameters that had no impact on the models' predictive accuracy were pruned away. We report high accuracy for switch prediction in both tasks from the classifiers, although the model for conjunction foraging was more accurate. We also report our Bayesian parameters in terms of their theoretical associations to TVA parameters, π_j (denoting the pertinence value) and β_i (denoting the decision-making bias).
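The classification approach described above can be illustrated with a minimal, plain (non-augmented) Gaussian Naïve Bayes sketch in which each selection is labelled "stay" or "switch" from a handful of per-selection features. The two feature names and the toy data below are hypothetical stand-ins, not the eleven parameters or the actual foraging data of the study:

```python
import math

# Minimal Gaussian Naive Bayes sketch for switch prediction.
# Feature names ("distance_to_same", "distance_to_other") and the toy
# data below are hypothetical, not from Kristjansson et al. (2014).

def fit(samples, labels):
    """Estimate per-class feature means/variances and class priors."""
    stats, priors = {}, {}
    for c in set(labels):
        rows = [s for s, l in zip(samples, labels) if l == c]
        priors[c] = len(rows) / len(samples)
        stats[c] = []
        for col in zip(*rows):
            mu = sum(col) / len(col)
            var = sum((x - mu) ** 2 for x in col) / len(col) + 1e-6
            stats[c].append((mu, var))
    return stats, priors

def predict(x, stats, priors):
    """Pick the class maximising the log posterior under the NB assumption."""
    def log_post(c):
        lp = math.log(priors[c])
        for xi, (mu, var) in zip(x, stats[c]):
            lp += -0.5 * math.log(2 * math.pi * var) - (xi - mu) ** 2 / (2 * var)
        return lp
    return max(priors, key=log_post)

# Toy training data: [distance_to_same_type, distance_to_other_type]
X = [[1.0, 3.0], [1.2, 2.8], [3.1, 1.0], [2.9, 1.1]]
y = ["stay", "stay", "switch", "switch"]
stats, priors = fit(X, y)
print(predict([3.0, 1.0], stats, priors))  # expected: switch
```

An augmented classifier of the kind used in the study would additionally model dependencies between features rather than assuming conditional independence.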
Predictive remapping may be the principal mechanism for maintaining visual stability, and attention is crucial for this process. We aimed to investigate the role of attention in predictive remapping in a dual-task paradigm with two conditions, with and without saccadic remapping. The first task was to remember the clock-hand position either after a saccade to the clock face (saccade condition, requiring remapping) or after the clock was displaced to the fixation point (fixation condition, with no saccade). The second task was to report the remembered location of a dot shown peripherally in the upper screen for 1 s. We predicted that performance on the two tasks would interfere in the saccade condition, but not in the fixation condition, because of the attentional demands of remapping with the saccade. In the clock estimation task, responses on saccadic trials underestimated the actual position by approximately 37 ms, whereas responses on fixation trials were closer to veridical. As predicted, the findings also revealed a significant interaction between the two tasks: accuracy in the clock task decreased as error in the localization task increased, but only in the saccadic condition. Taken together, these results point to the key role of attention in predictive remapping.
Attention is proposed to be a system of multiple functional networks, including alertness, orienting, and executive control. A popular experimental paradigm for testing these networks and their interactions within a single design is the Attentional Networks Test (ANT; Fan et al., 2002). The role of the oculomotor system in these networks, however, has not been tested, despite the strong link between attention and eye movements. We modified the executive control component of the manual-response ANT version (ANTm) to allow testing of the networks' involvement with oculomotor responses. Specifically, we used a central target to signal pro- or anti-saccades, which allowed us to match the saccadic response compatibility of the original ANTm. We conducted three experiments to compare interactions of the networks across the traditional ANTm that used a flanker-task response, our new ANTs with saccadic responses signalled by a fixation arrow, and a manual-response version with the response arrow at fixation (ANTf). All three experiments showed the typical main effects of all three attention networks, but we observed differences in their interactions: the ANTm showed only an interaction in which alerting enhanced orienting; the ANTs showed a congruency-by-orienting interaction, with the orienting effect observed only for pro-saccades; and the ANTf showed both alerting-by-orienting and orienting-by-congruency interactions. Although the saccadic response did differ from the original ANTm, key differences were also highlighted by the switch from a peripheral to a central target. Overall, the proposed ANTf is a valid tool for testing the main effects of the attentional networks. Further investigation of the interaction differences between the manual and oculomotor systems is required.
Online learning is now common in many universities, yet few tools support it effectively. The capabilities of public online services do not always satisfy the narrowly focused requirements of university teaching. The goal of this project is to create a service that makes testing easier for teachers and provides quick, detailed feedback to students. The program is built on Google technologies, which meet the necessary requirements. We offer a program that can be easily integrated into the teaching process and is universal across different courses.
Inhibition of return (IOR) is an inhibitory aftereffect of visuospatial orienting, typically resulting in slower responses to targets presented in an area that has been recently attended. Since its discovery, myriad research has sought to explain the causes and effects underlying this phenomenon. Here, we briefly summarize the history of the phenomenon, and describe the early work supporting the functional significance of IOR as a foraging facilitator. We then shine a light on the discordance in the literature with respect to mechanism — in particular the lack of theoretical constructs that can consistently explain innumerable dissociations. We then describe three diagnostics (central arrow targets, locus of slack logic and the psychological refractory period, and performance in speed-accuracy space) used to support our theory that there are two forms of inhibition of return — the form which is manifest being contingent upon the activation state of the reflexive oculomotor system. The input form, which operates to decrease the salience of inputs, is generated when the reflexive oculomotor system is suppressed; the output form, which operates to bias responding, is generated when the reflexive oculomotor system is not suppressed. Then, we subject a published data set, where inhibitory effects had been generated while the reflexive oculomotor system was either active or suppressed, to diffusion modeling. As we hypothesized, based on the aforementioned theory, the effects of the two forms of IOR were best accounted for by different drift diffusion parameters.
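The diffusion-modeling step described above can be illustrated with a minimal random-walk simulation: changing only the drift rate slows responses and lowers accuracy, which is the kind of parameter-specific dissociation the fitted models are meant to capture. All values below are illustrative, not the parameters fitted to the published data set:

```python
import random

# Toy drift-diffusion simulation: a single-trial random walk to one of
# two symmetric bounds. Parameter values are illustrative only.

def ddm_trial(drift, bound=1.0, noise=1.0, dt=0.001, t0=0.3, rng=random):
    """Return (response_time, choice) for one simulated trial."""
    x, t = 0.0, 0.0
    while abs(x) < bound:
        # Euler step: deterministic drift plus Gaussian diffusion noise
        x += drift * dt + noise * rng.gauss(0.0, dt ** 0.5)
        t += dt
    return t0 + t, 1 if x > 0 else 0

rng = random.Random(1)
fast = [ddm_trial(2.0, rng=rng) for _ in range(200)]   # high drift rate
slow = [ddm_trial(0.5, rng=rng) for _ in range(200)]   # low drift rate

mean_rt = lambda trials: sum(t for t, _ in trials) / len(trials)
print(mean_rt(fast) < mean_rt(slow))  # lower drift -> slower responses
```

In a full fit, parameters such as drift rate, bound separation, and non-decision time t0 would be estimated from the observed response-time distributions rather than fixed by hand.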
We move our eyes roughly three times every second while searching complex scenes, but covert attention helps guide where we allocate those overt fixations. Covert attention may be allocated reflexively or voluntarily, and it speeds the rate of information processing at the attended location. Reducing access to covert attention hinders performance, but it is not known to what degree the locus of covert attention is tied to the current gaze position. We compared visual search performance in a traditional gaze-contingent display with a second task in which a similarly sized contingent window was controlled with a mouse, allowing the covert aperture to be controlled independently of overt gaze. Larger apertures improved performance in both the mouse- and gaze-contingent trials, suggesting that covert attention was beneficial regardless of control type. We also found evidence that participants used the mouse-controlled aperture somewhat independently of gaze position, suggesting that they attempted to untether their covert and overt attention when possible. This untethering manipulation, however, resulted in an overall cost to search performance, a result at odds with previous findings from a change blindness paradigm. Untethering covert and overt attention may therefore carry costs or benefits depending on the task demands in each case.
Inhibition of return (IOR) represents a delay in responding to a previously inspected location and is viewed as a crucial mechanism that sways attention toward novelty in visual search. Although most visual processing occurs in retinotopic (eye-centered) coordinates, IOR must be coded in spatiotopic (environmental) coordinates to successfully serve its role as a foraging facilitator. Early studies supported this suggestion, but recent results have shown that spatiotopic and retinotopic reference frames of IOR may co-exist. The present study tested possible sources of IOR at the retinotopic location: being part of the spatiotopic IOR gradient, being part of hemifield inhibition, and being an independent source of IOR. We conducted four experiments that alternated the cue-target spatial distance (discrete and contiguous) and the response modality (manual and saccadic). In all experiments, we tested spatiotopic, retinotopic, and neutral (neither spatiotopic nor retinotopic) locations. We found IOR at both the retinotopic and spatiotopic locations but no evidence for an independent source of retinotopic IOR for either response modality. In fact, we observed a spread of IOR across the entire validly cued hemifield, including at neutral locations. We conclude that these results indicate either a strategy of inhibiting the whole cued hemifield or a large horizontal gradient around the spatiotopically cued location.
Chromatic stimuli across a boundary of basic color categories (BCCs; e.g., blue and green) are discriminated faster than colorimetrically equidistant colors within a given category. Russian has two BCCs for blue, sinij 'dark blue' and goluboj 'light blue'. These language-specific BCCs were reported to enable native Russian speakers to discriminate cross-boundary dark and light blues faster than English speakers (Winawer et al., 2007, PNAS, 104, 7780-7785). We re-evaluated this finding in two experiments that employed the same tasks as the cited study. In Experiment 1, Russian and English speakers categorized colors as sinij / goluboj or dark blue / light blue, respectively; this was followed by a color discrimination task. In Experiment 2, Russian speakers first performed the discrimination task on sinij / goluboj and goluboj / zelënyj 'green' sets. They then categorized these colors in three frequency contexts, with each stimulus presented: (i) an equal number of times (unbiased); (ii) with either sinij or goluboj more frequent; (iii) with either goluboj or zelënyj more frequent. We observed a boundary response speed advantage for goluboj / zelënyj but not for sinij / goluboj. The frequency bias affected only the sinij / goluboj boundary, such that in a lighter context the boundary shifted towards lighter shades, and vice versa. Contrary to previous research, our results show that in Russian, stimulus discrimination at the lightness-defined blue BCC boundary is not reflected in processing speed. The sinij / goluboj boundary did, however, have a sharper categorical transition than the dark blue / light blue boundary.
Itti and Koch’s Saliency Model has been used extensively to simulate fixation selection in a variety of tasks, from visual search to simple reaction times. Although the Saliency Model has been tested for its spatial prediction of fixations, it has not been well tested for its temporal accuracy. Visual tasks, like search, invariably produce a positively skewed distribution of saccadic reaction times over large numbers of samples, yet we show that the leaky integrate-and-fire (LIF) neuronal layer included in the classic implementation of the model tends to produce a distribution shifted to shorter fixations (in comparison with human data). Further, while parameter optimization using a genetic algorithm and the Nelder–Mead method does improve the fit of the resulting distribution, it is still unable to match the temporal distributions of human responses in a simple visual search task. Analysis of times for individual images reveals that the LIF algorithm produces initial fixation durations that are fixed rather than sampled from a distribution (as in the human case). Only by aggregating responses over many input images does a distribution emerge, and the form of this distribution still depends on the input images used to create it rather than on internal model variability.
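The fixed-versus-sampled contrast can be reproduced with a toy LIF race over a salience vector: without membrane noise the winning latency is identical on every run, whereas adding noise yields a spread of latencies. All constants below are illustrative, not the published model's values:

```python
import random

# Toy leaky integrate-and-fire (LIF) race over a salience vector.
# All constants are illustrative, not the published model's values.

def lif_first_spike(salience, leak=10.0, threshold=1.0, dt=0.001,
                    noise=0.0, rng=random):
    """Return the time at which any unit first crosses threshold."""
    v = [0.0] * len(salience)
    t = 0.0
    while True:
        t += dt
        for i, s in enumerate(salience):
            # leaky integration of the (constant) salience input
            v[i] += dt * (s - leak * v[i]) + noise * rng.gauss(0.0, dt ** 0.5)
            if v[i] >= threshold:
                return t

# Deterministic run: the latency is identical on every repetition.
det = [lif_first_spike([15.0, 8.0]) for _ in range(5)]
# Noisy runs: latencies are sampled from a distribution instead.
noisy = [lif_first_spike([15.0, 8.0], noise=0.5, rng=random.Random(i))
         for i in range(50)]
print(len(set(det)), len(set(noisy)) > 1)  # 1 True
```

For a fixed input, the noiseless race always terminates at the same step, which is why aggregation over many images is needed before the classic model produces any latency distribution at all.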
Since the 1990s, there has been an ongoing discussion in religious studies about the uses of the terms “secular” and “religious.” This article applies the methodology of the critical study of religion within the psychology of religion. There are two main strategies to construct a research program in this field: (1) studying how religious senses occur (neurotheology, transpersonal psychology) and (2) studying how religious representations emerge (cognitive religious studies). This paper provides an overview of these two paradigms through the lens of the religious/secular dichotomy. Scholars who are trying to understand the nature of religious phenomena ignore a significant amount of data labeled as “secular.” The author then suggests studying such representations or senses beyond the religious/secular dichotomy.
The medial frontal cortex is currently viewed as the main hub of the performance monitoring system; upon detection of a committed error, it establishes functional connections with brain regions involved in task performance, thus leading to neural adjustments in them. Previous research has identified targets of such adjustments in the dorsolateral prefrontal cortex, posterior cortical regions, motor cortical areas, and the subthalamic nucleus. Yet most such studies involved visual tasks with relatively moderate cognitive load and strong dependence on motor inhibition, thus highlighting sensory, executive, and motor effects while underestimating sensorimotor transformation and related aspects of decision making. There is currently ample evidence that posterior parietal cortical areas are involved in task-specific neural processes of decision making (including evidence accumulation, sensorimotor transformation, attention, etc.); yet, to our knowledge, no EEG studies have demonstrated a post-error increase in theta-band functional connectivity between midfrontal and posterior parietal areas during performance of non-visual tasks. In the present study, we recorded EEG while subjects performed an auditory version of the cognitively demanding attentional condensation task; this task involves rather non-straightforward stimulus-to-response mapping rules, thus creating an increased load on sensorimotor transformation. We observed strong pre-response alpha-band suppression in the left parietal area, which presumably reflected the involvement of the posterior parietal cortex in task-specific decision-making processes. Negative feedback was followed by increased midfrontal theta-band power and increased theta-band functional coupling between midfrontal and left parietal regions. This could be interpreted as activation of the performance monitoring system and as a top-down influence of this system on the posterior parietal regions involved in decision making, respectively.
This inter-site coupling related to negative feedback was stronger for subjects who tended to commit errors with slower response times. Generally, current findings support the idea that slower errors are related to the state of outcome uncertainty caused by failures of task-specific processes, associated with posterior parietal regions.
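Band-limited power and inter-site coupling of the kind reported here are standard spectral quantities. As one illustration, the sketch below computes a single-frequency Fourier coefficient and a phase-locking value (PLV) between two synthetic channels; the signals, the 6 Hz analysis frequency, and the choice of PLV as the coupling measure are assumptions for illustration, not the study's actual pipeline:

```python
import cmath
import math

# Illustrative sketch: single-frequency (theta-band) Fourier coefficient
# and inter-site phase-locking value (PLV) between two channels.
# Signals and parameters are synthetic, not the study's EEG data.

FS = 250   # sampling rate (Hz), assumed for illustration
F = 6.0    # analysis frequency within the theta band
N = 250    # one-second epoch

def coeff(signal, f=F, fs=FS):
    """Complex Fourier coefficient of `signal` at frequency f."""
    return sum(x * cmath.exp(-2j * math.pi * f * n / fs)
               for n, x in enumerate(signal)) / len(signal)

def plv(trials_a, trials_b):
    """Phase-locking value across trials between two channels."""
    # unit phasors of the per-trial phase difference, averaged
    diffs = [coeff(a) * coeff(b).conjugate() for a, b in zip(trials_a, trials_b)]
    return abs(sum(d / abs(d) for d in diffs)) / len(diffs)

# Two channels sharing a constant 6 Hz phase lag -> PLV near 1
t = [n / FS for n in range(N)]
a_trials = [[math.sin(2 * math.pi * F * x) for x in t] for _ in range(20)]
b_trials = [[math.sin(2 * math.pi * F * x - 0.5) for x in t] for _ in range(20)]
print(round(plv(a_trials, b_trials), 2))  # constant lag -> 1.0
```

A consistent phase lag across trials yields a PLV near 1, whereas random lags would drive it toward 0; this is the sense in which feedback-locked midfrontal-parietal coupling can be quantified per subject.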
Microsaccadic eye movements belong, along with tremor and drift, to the category of fixational micromovements, though their functional purpose is still debated. Spatial cueing paradigms typically require fixational control, but this does not eliminate all oculomotor activity associated with the preparation of saccades in the cued direction. In the antisaccade task, planning and execution are separate processes, and we therefore hypothesise that microsaccade behaviour may be reduced during antisaccade trials as compared to saccade trials. The study is based on an eye-tracking experiment in which 22 participants were asked to perform saccades and antisaccades in blocked or mixed sets of trials. Each participant completed three main blocks: 50 trials in the fixed saccade block, 50 trials in the fixed antisaccade block, and 200 trials in the mixed saccade-antisaccade condition. In the saccade trials, a green fixation cross is displayed at the centre of the screen, whereas in the antisaccade trials the fixation cross is red, allowing participants to prepare the appropriate response (but not its direction) before the target. The results imply a strong latency cost of antisaccades as compared to prosaccades and an additional cost of mixed blocks, though these two effects did not interact. Crucially, in the blocked antisaccade trials, we predict that a suppressed oculomotor system will lead to a lower occurrence of microsaccades, in particular on trials where observers did not make erroneous prosaccades. We believe this may be because participants have enough time to prepare top-down control of the oculomotor system, leading to a predictable pattern for each participant, who either suppresses microsaccadic movements completely throughout the entire block or does not suppress them at all.
We also predict that in the mixed block participants have less time to prepare top-down microsaccade suppression; we will test this by comparing data across the saccade, antisaccade, and mixed blocks.
The seminal model by Laurent Itti and Christof Koch demonstrated that the entire flow of visual processing, from input image to resulting fixations, can be computed. Despite many replications and follow-ups, few models have matched the impact of the original, so what made it so groundbreaking? We have selected five key contributions that distinguish the original salience model by Itti and Koch, namely its contributions to our theoretical, neural, and computational understanding of visual processing. Further, the model showed how salience could be used to make predictions for both the spatial and temporal distributions of fixations. During the last 20 years, advances in the field have brought various techniques and approaches to salience modeling, many of which sought to augment the initial Itti and Koch model. One of the most recent trends has been to adopt the computational power of deep learning neural networks; however, this has also shifted the primary focus to spatial classification. We present a review of recent approaches to modeling salience and discuss the models from the point of view of their contribution to computational cognitive neuroscience.
The allocation of attention can occur not only in space but also in time. Applying Rescorla's "truly random control" procedure, which concerns the independence of cues and targets, allowed us to differentiate the impact of endogenous (voluntary) and exogenous (automatic) components of temporal attention on performance, both separately and in their interaction. In a random dot motion task, variation in the luminance and motion of the dots that represent the cue affects the engagement of the exogenous mode, while the presence or absence of temporal contingency between cues and targets affects the impact of the endogenous mode. Combining these conditions yielded the following results. For endogenous cues, we see improvement in both speed and accuracy at early cue-target onset asynchronies. For exogenous cues, we see improvement in response times, but not accuracy. When both are involved, we observe a speed-accuracy trade-off. These results parallel findings on alertness cueing in the auditory modality, but with purely visual stimuli.
The classic Saliency Model by Itti and Koch launched many studies that contributed to the modelling of layers for vision and visual attention. The aim of this study is to improve the existing saliency model by using a neural network to generate salience maps for modelling human saccade generation. The proposed model uses a leaky integrate-and-fire layer for temporal predictions and replaces spatial salience with a deep learning neural network, creating a generative model that combines spatial and temporal predictions. The resulting deep neural network is able to predict eye movements based on unsupervised learning from raw image input, as well as on supervised learning from fixation maps recorded in an eye-tracking experiment with 35 participants, which were used at later stages to train a 2D softmax layer. The results imply that the model can match human fixation locations, but its temporal distributions are still limited by the accuracy of the leaky algorithm.
As a foraging facilitator, inhibition of return (IOR) must be coded in spatiotopic coordinates. Early reports confirmed this suggestion, but these results have recently been challenged. The present study was designed to examine the reference frame of IOR and to test whether retinotopic IOR might be part of the spatiotopic IOR gradient. We conducted four experiments with spatiotopically and retinotopically cued coordinates and an intervening saccade between the cue and target presentations. We alternated the response modality (manual and saccadic) and the cue-target spatial distance (discrete and contiguous). Our data showed no evidence for an independent source of retinotopic IOR, either at discrete locations or as a gradient; moreover, we observed a spread of IOR across the whole validly cued hemifield. We propose that these results indicate a strategy of attending to and then inhibiting the entire cued hemifield.
Transcranial alternating current stimulation (tACS) can be used to modulate brain activity. tACS has been shown to induce frequency-, state-, and phase-dependent effects, which makes it a neurostimulation technique with comparatively predictable outcomes. However, the impact of different tACS intensities has not yet been systematically investigated. Here, we investigated the effects of tACS of the primary motor cortex (M1) delivered at different intensities.
There is a common assumption that applying stimulation for a longer duration or at a higher intensity leads to more reliable physiological and behavioral effects. However, previous studies using different transcranial electrical stimulation methods, such as transcranial direct current stimulation (tDCS) and/or high-frequency tACS in the ripple range, showed non-monotonic effects of stimulation intensity. Nevertheless, tDCS and high-frequency tACS potentially rely on different mechanisms of neuromodulation than conventional tACS delivered in the EEG range (1–70 Hz).
In this study, we applied 20 Hz tACS to the primary motor cortex (M1) to investigate a potential non-monotonic effect of tACS intensity (ranging from 0.25 mA to 2 mA in 0.25 mA steps) on M1 excitability, measured as the peak-to-peak amplitude of TMS-induced motor evoked potentials (MEPs). As control conditions, we used 1 mA 10 Hz (alpha) tACS and a no-stimulation condition.
Preliminary results (N = 9) showed an increase in MEP amplitude for the higher stimulation intensities (1.5 mA, 2 mA). In addition, an interesting effect emerged for subjects with a lower motor threshold, who showed a stronger MEP modulation effect of beta-tACS.
Transcranial direct current stimulation (tDCS) is a promising tool for the modulation of learning and memory, allowing transient changes in the cortical excitability of specific brain regions with physiological and behavioral outcomes. A detailed exploration of the factors that can moderate tDCS effects on episodic long-term memory (LTM) is of high interest due to its clinical potential for patients with traumatic or pathological memory deficits and cognitive impairments. This commentary discusses findings by Marián et al. (2018), recently published in Cortex, within the broad context of brain stimulation in memory research.