Senior Lecturer, School of Computer Science,
   University of Lincoln, Lincoln, U.K.

Faculty, Athens International Masters Program in Neuroscience,
   Dept of Biology, University of Athens, Greece

Faculty, Budapest Semester in Cognitive Science,
   Eotvos Lorand University, Budapest, Hungary

Guest Lecturer, Institute of Automation,
   Chinese Academy of Sciences, Beijing, China

Associate Editor, Cognitive Computation

Guest Associate Editor, Frontiers in Systems Neuroscience

Associate Editor, Scholarpedia

Review Editor, Frontiers in Cognitive Science
email: vcutsuridis@gmail.com
Books

   • Hippocampal Microcircuits: A Computational Modeler's Resource Book, 1st ed., Springer, 2010
   • Perception-Action Cycle: Models, Architectures and Hardware, Springer, 2011
   • Hippocampal Microcircuits: A Computational Modeler's Resource Book, 2nd ed., Springer, 2017

Edited Proceedings

   • Brain Inspired Cognitive Systems (BICS) 2008, Springer, 2008

Book Series

   • Springer Series in Cognitive & Neural Systems, Springer
   • Trends in Augmentation of Human Performance, Springer


Research
   • Computational neuroscience
   • Cognitive systems
      • Cognitive models of visual saliency, attention, active visual search and scene understanding
      • Cognitive models of the perception-action cycle
   • Machine learning applications
   • Brain-machine interfaces


Hierarchical cognitive models of visual saliency, attention, active visual search and scene understanding  
To view and understand the visual world, we shift our gaze from one location to another about three times per second. These rapid changes in gaze direction are produced by very fast eye movements called saccades. Visual information is acquired only during fixations, the stationary periods between saccades. Active visual search is the process of actively scanning the visual environment for a particular target among distractors, or of extracting the scene's meaning.

I have recently proposed a bio-inspired cognitive architecture of active visual search and picture scanning. The model is multi-modular, consisting of spatial and object visual processing, attention, reinforcement learning, motor planning and motor execution modules. Its novelty lies in its decision-making mechanisms. In contrast to previous models, decisions are made through a winner-take-all competition in the spatial, object and motor saliency maps, in which bottom-up visual salience resonates with top-down attention and is selectively tuned by a reinforcement signal to the spatial, object and motor representations. A reset mechanism, driven by feedback inhibitory signals from the motor execution module to all other modules, suppresses the last attended location in the saliency map and allows the next gaze shift to be executed.
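
A minimal sketch of these two mechanisms, assuming a simple 2-D saliency map; the array sizes, the fixed reinforcement gain and the function name are illustrative assumptions, not the model's actual parameters:

    import numpy as np

    def scan_scene(bottom_up, top_down, gain=1.0, n_fixations=4):
        """Pick successive gaze targets from a saliency map by
        winner-take-all, suppressing each attended location afterwards
        (the 'reset' mechanism described above)."""
        # Resonance of bottom-up salience with top-down attention,
        # tuned here by a single fixed reinforcement gain.
        saliency = gain * bottom_up * top_down
        fixations = []
        for _ in range(n_fixations):
            # Winner-take-all: the most salient location wins the competition.
            winner = np.unravel_index(np.argmax(saliency), saliency.shape)
            fixations.append(winner)
            # Reset: feedback inhibition suppresses the attended location,
            # so the next competition selects a new gaze target.
            saliency[winner] = 0.0
        return fixations

    rng = np.random.default_rng(0)
    bottom_up = rng.random((32, 32))   # stimulus-driven salience
    top_down  = rng.random((32, 32))   # task-driven attention weights
    print(scan_scene(bottom_up, top_down))

Suppressing the winner after each selection is the same operation that implements inhibition of return, one of the questions addressed below.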

The model offers a plausible hypothesis of how the participating brain areas work together to accomplish a scan of a scene within the allotted time (3–4 fixations per second). In addition, it unravels the neurocomputational mechanisms involved in this process and discusses their physiological implications by answering the following questions (a toy sketch of the fixation-timing decision follows the list):
  • How is a complex visual scene processed?

  • How is the selection of one particular location in a visual scene accomplished?

  • Does it involve bottom-up, sensory-driven cues or top-down world knowledge expectations? Or both?

  • How is the decision when to terminate a fixation and move the gaze made?

  • How is the decision where to direct the gaze in order to take the next sample made?

  • What are the neural mechanisms of inhibition of return?
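
As a toy illustration of the fixation-timing question above, one standard formalization is a rise-to-threshold accumulator; the drift, noise and threshold values below are arbitrary assumptions chosen only to reproduce the 3–4 fixations per second mentioned earlier:

    import numpy as np

    def fixation_duration(drift=0.004, noise=0.02, threshold=1.0, rng=None):
        """Steps (~ms) until a noisy gaze-shift signal rises to threshold.
        Evidence favouring a saccade accumulates during a fixation; the
        eyes move when the accumulated signal crosses threshold."""
        rng = rng or np.random.default_rng()
        evidence, t = 0.0, 0
        while evidence < threshold:
            evidence += drift + noise * rng.normal()   # noisy accumulation
            t += 1
        return t

    rng = np.random.default_rng(1)
    print([fixation_duration(rng=rng) for _ in range(5)])
    # mean ~ threshold/drift = 250 steps, i.e. roughly 4 fixations per second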

Hierarchical cognitive models of vision-guided bimanual reaching and grasping of objects
I have also proposed a cognitive control architecture of the perception–action cycle for visually guided reaching and grasping of objects. The objects themselves are not known a priori to the system; knowledge of them is assumed to be built by the system through interaction and experimentation with them.

The architecture is multi-modular, consisting of object recognition, object localization, attention, cognitive control, affordance extraction, value attribution, decision-making, motor planning and motor execution modules.
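
A skeletal sketch of how such a modular pipeline could be wired together; only the module names are taken from the architecture, while the interfaces and the single-pass control flow below are my simplification:

    from dataclasses import dataclass

    @dataclass
    class Module:
        """Placeholder stage; the real modules exchange rich neural
        representations, not plain dictionaries."""
        name: str

        def process(self, state: dict) -> dict:
            state.setdefault("trace", []).append(self.name)  # record the stage
            return state

    PIPELINE = [Module(n) for n in (
        "object recognition", "object localization", "attention",
        "cognitive control", "affordance extraction", "value attribution",
        "decision-making", "motor planning", "motor execution",
    )]

    def perception_action_cycle(stimulus: dict) -> dict:
        state = dict(stimulus)
        for module in PIPELINE:   # one pass of the perception-action cycle
            state = module.process(state)
        return state

    print(perception_action_cycle({"image": "a mug on a table"})["trace"])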

The work is based on the notion that separate visuomotor channels are activated in parallel by specific visual inputs and are continuously modulated by attention and reward, which control the robot's/agent's action repertoire. The suggested visual apparatus allows the robot/agent to recognize both the object's shape and its location, extract affordances and formulate motor plans for reaching and grasping. A focus-of-attention signal plays an instrumental role in selecting the correct object at its corresponding location, as well as in selecting the most appropriate arm-reaching and hand-grasping configuration from a repertoire of candidate configurations, based on the success of previous experiences.
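
One hedged way to make "selection based on the success of previous experiences" concrete is a value-weighted choice over candidate grasp configurations, sketched below; the repertoire, the learning rate and the softmax choice rule are illustrative assumptions rather than the architecture's actual mechanism:

    import numpy as np

    rng = np.random.default_rng(0)
    configs = ["power grasp", "precision grasp", "lateral grasp"]  # hypothetical repertoire
    values = np.zeros(len(configs))   # learned success estimate per configuration
    alpha = 0.1                       # learning rate

    def choose(values, temperature=0.5):
        """Softmax choice: configurations that succeeded before are favoured."""
        p = np.exp(values / temperature)
        p /= p.sum()
        return rng.choice(len(values), p=p)

    for trial in range(200):
        a = choose(values)
        # Simulated outcome: pretend the precision grasp succeeds most often.
        reward = float(rng.random() < (0.8 if a == 1 else 0.3))
        values[a] += alpha * (reward - values[a])   # update running estimate

    print({c: round(v, 2) for c, v in zip(configs, values)})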

The cognitive control architecture comprises a number of neurocomputational mechanisms, each strongly supported by experimental evidence from the brain: spatial saliency, object selectivity, invariance to object transformations, focus of attention, resonance, motor priming, spatial-to-joint direction transformation and volitional scaling of movement.
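
Two of these mechanisms, spatial-to-joint direction transformation and volitional scaling of movement, can be illustrated for a planar two-joint arm; the Jacobian-transpose rule and the link lengths below are a textbook stand-in assumed for illustration, not the architecture's own transformation:

    import numpy as np

    L1, L2 = 0.3, 0.25   # link lengths (m) of a planar two-joint arm

    def jacobian(q):
        """Maps joint velocities to hand velocities for the planar arm."""
        s1, c1 = np.sin(q[0]), np.cos(q[0])
        s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
        return np.array([[-L1*s1 - L2*s12, -L2*s12],
                         [ L1*c1 + L2*c12,  L2*c12]])

    def joint_command(q, spatial_direction, volitional_gain=1.0):
        """Spatial-to-joint direction transformation with volitional scaling:
        a desired hand-space direction is mapped into joint space, and the
        overall movement speed is scaled by a volitional gain signal."""
        d = spatial_direction / np.linalg.norm(spatial_direction)
        return volitional_gain * (jacobian(q).T @ d)  # Jacobian-transpose rule

    q = np.array([0.4, 0.9])   # current joint angles (rad)
    print(joint_command(q, np.array([1.0, 0.0]), volitional_gain=0.5))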
