Understanding attention starts with understanding the brain.
Dragonfly AI’s algorithm isn’t just software - it’s the result of over a decade of neuroscience research into how humans instinctively respond to visual information.
Biologically Inspired, Not Data-Trained 🔬
Dragonfly AI's attention algorithm is not trained on human behavioural data, as this introduces bias. Instead, it's designed to simulate how the brain itself processes visual input. That means no bias toward any particular demographic, category, or channel.
This makes Dragonfly AI's insights applicable across audiences, industries, and geographies - at scale. It captures what the brain is most likely to notice in the first critical seconds, regardless of viewer intent or past behaviour.
Pre-Cognitive Attention Simulation ⚡
Our brains react to visuals long before we “decide” to look at something.
Dragonfly’s algorithm models this pre-cognitive phase: the 0–3 seconds where attention is reflexive, not reflective.
So even if a shopper is hunting for a specific brand on a shelf, Dragonfly AI tells you whether your packaging or product is likely to get noticed at all.
The Algorithm 🤖
To simulate the attention process, Dragonfly AI breaks down each input image into five biologically inspired channels:
🌓 Luminance – How bright/dark contrasts draw the eye
🎨 Colour – Red-green and blue-yellow opponency patterns
✂️ Edge – Structural features like shapes and boundaries
🌀 Motion – Sensitivity to directional changes between frames
🧭 Orientation – Alignment of elements in space
Each channel is processed using proprietary mathematical models that simulate how neurons in the visual cortex respond to stimuli. Then the algorithm dynamically combines them into a final saliency map.
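To make the pipeline shape concrete, here is a deliberately simplified sketch of a channel-based saliency model. Dragonfly AI's actual mathematical models are proprietary; the luma coefficients, opponency formulas, and channel weights below are illustrative placeholder assumptions, not the product's real parameters.

```python
# Toy channel-based saliency sketch (illustration only, not Dragonfly AI's models).

def channel_maps(pixels):
    """Compute toy luminance and colour-opponency maps for an RGB image.

    `pixels` is a list of rows, each row a list of (r, g, b) tuples in 0..255.
    """
    luminance, red_green, blue_yellow = [], [], []
    for row in pixels:
        lum_row, rg_row, by_row = [], [], []
        for r, g, b in row:
            lum_row.append(0.299 * r + 0.587 * g + 0.114 * b)  # standard luma weights
            rg_row.append(r - g)                # red-green opponency
            by_row.append(b - (r + g) / 2)      # blue-yellow opponency
        luminance.append(lum_row)
        red_green.append(rg_row)
        blue_yellow.append(by_row)
    return {"luminance": luminance, "red_green": red_green, "blue_yellow": blue_yellow}

def saliency(channels, weights):
    """Normalise each channel map to 0..1, then take a weighted sum."""
    def normalise(m):
        peak = max(abs(v) for row in m for v in row) or 1.0
        return [[abs(v) / peak for v in row] for row in m]
    maps = {name: normalise(m) for name, m in channels.items()}
    first = next(iter(maps.values()))
    h, w = len(first), len(first[0])
    out = [[0.0] * w for _ in range(h)]
    for name, m in maps.items():
        for y in range(h):
            for x in range(w):
                out[y][x] += weights[name] * m[y][x]
    return out

# Usage: a 2x2 "image" with one bright red and one blue pixel on a dark ground.
img = [[(255, 0, 0), (10, 10, 10)],
       [(10, 10, 10), (0, 0, 255)]]
sal = saliency(channel_maps(img),
               {"luminance": 0.4, "red_green": 0.3, "blue_yellow": 0.3})
```

In this toy version the bright red pixel scores highest because it stands out on both the luminance and red-green channels at once; a real model would also apply spatial filtering (centre-surround contrast, edge and orientation detectors) within each channel before combining them.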
Independently Validated Accuracy ✅
Dragonfly AI isn’t just based on theory—it’s tested and proven.
We benchmark against the MIT300 dataset, a gold standard in visual attention modelling.
Prediction accuracy: 89.2%, matching professional-grade eye-tracking results without requiring human testing.
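For readers curious how a saliency map is scored against real fixation data at all, one standard metric on MIT-style benchmarks is similarity (SIM): both maps are normalised into probability distributions and the elementwise minima are summed. This sketch illustrates that general scoring idea only; it is not Dragonfly AI's evaluation code, and the source does not state which metric produces the 89.2% figure.

```python
# Toy SIM (similarity) metric between a predicted saliency map and a
# ground-truth fixation map, both given as 2-D lists of non-negative values.

def similarity(pred, truth):
    """Return SIM: 1.0 for identical distributions, 0.0 for disjoint ones."""
    def as_distribution(m):
        total = sum(sum(row) for row in m) or 1.0
        return [[v / total for v in row] for row in m]
    p, t = as_distribution(pred), as_distribution(truth)
    return sum(min(pv, tv)
               for prow, trow in zip(p, t)
               for pv, tv in zip(prow, trow))

# Usage: identical maps score 1.0; non-overlapping maps score 0.0.
identical = [[1.0, 0.0], [0.0, 1.0]]
left = [[1.0, 0.0], [0.0, 0.0]]
right = [[0.0, 0.0], [0.0, 1.0]]
```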

