This PhD project uses brain-inspired AI to decode vision from neural data. Drawing on human fMRI (24 hours of recordings acquired during Doctor Who viewing) and monkey electrophysiology, the approach transforms neural signals into 2D cortical maps to improve visual reconstruction. The model learns receptive-field structure, compares the contributions of visual areas V1, V4, and IT, and aims for efficient, interpretable decoding with applications in neuroscience and brain-computer interfaces (BCIs).

This research investigates how motion perception changes with age and how those changes are reflected in brain function. Using behavioural tasks and fMRI, it aims to develop simple visual tests suitable for routine eye-care settings, helping to identify early signs of cognitive decline and support healthy ageing.