
UK funding (£490,506): Insect-inspired depth perception. UKRI, 01.02.2023. UK Research and Innovation, United Kingdom

At a glance


Insect-inspired depth perception

Summary

Any animal, or robot, that wants to interact with objects needs information about their 3D shape. Humans use stereo vision (two views from two eyes) to gain information about depth, but require large brains to process this information. Robots have also been built that use stereo vision, or other kinds of depth sensors based on projected or reflected light, but these have a number of limitations, such as energy consumption, sensitivity to lighting conditions, and the amount of computational processing needed. We are interested in how insects solve the problem of 3D sensing with small compound eyes and a tiny brain (altogether ~100,000 neurons), and whether this provides an alternative solution for robotics.

Insects such as fruit flies (Drosophila) can be studied with high-speed, high-resolution recordings of neural activity and behaviour. This has revealed that they use a special mechanism to obtain depth information, which involves motion of the individual light receptors in the eye. Eyes (unlike conventional cameras) register relative light change. In Drosophila, individual light-sensitive cells, each corresponding to an individual "pixel" of the scene, react to these light changes by generating a fast counter-motion, which we call a photoreceptor microsaccade. Each photoreceptor moves in a specific direction at its particular location inside the compound eye, transiently readjusting its own light input. The photoreceptor microsaccades are mirror-symmetric in the left and right eyes, meaning that the same light change makes them move simultaneously in opposite directions. Therefore, during binocular viewing, the pixels in one eye move transiently with the world and in the other eye against it. Ultimately, these opposing microsaccades should cause small timing differences in the electrical signals of the eyes and brain networks, rapidly and accurately informing the fly of the 3D structure of the world (an illustrative sketch of this timing idea is given after the project details below).

We now want to determine exactly how the Drosophila brain networks utilise this mirror-symmetric left- and right-eye information to produce super-resolution stereo vision. We will build realistic models of binocular stereo information processing in the fly and use these to reproduce and predict responses to 3D objects. We will test the efficiency of this encoding in Artificial Neural Network (ANN) simulations driven by microsaccadic sampling. This approach will be combined with experiments on Drosophila that monitor neural activity during 3D object stimulation, and with behavioural tests that reveal the animal's 3D perception capabilities. Our hypotheses about function will then be realised and tested in hardware, to determine whether the same depth-sensing capabilities can be obtained using either conventional camera input processed in a novel way, or a novel light-sensing array that incorporates individual movement of its elements. The outcome will be a new method to efficiently detect 3D shape, with multiple potential applications, e.g. for robot grasping tasks.
Category Research Grant
Reference EP/X019632/1
Status Active
Start date 01.02.2023
End date 31.01.2027
Funding amount £490,506.00
Source https://gtr.ukri.org/projects?ref=EP%2FX019632%2F1
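
Illustrative aside (not part of the grant text and not the project's actual model): the summary's core idea, that mirror-symmetric receptive-field sweeps in the two eyes can turn binocular disparity, and hence depth, into a timing signal, can be sketched in a toy 1-D geometry. All parameter values (eye separation, sweep speed and amplitude) and all function names below are assumptions chosen only for this example.

```python
"""Toy 1-D sketch: mirror-symmetric receptive-field sweeps convert
binocular disparity into photoreceptor response timing. Purely
illustrative; geometry and parameters are assumptions."""
import math

EYE_SEP = 0.4e-3                      # m, assumed inter-ocular separation
SWEEP_SPEED = math.radians(300.0)     # rad/s, assumed sweep speed
SWEEP_AMP = math.radians(3.0)         # rad, sweeps start at +/- SWEEP_AMP

def edge_angles(obj_x, obj_z):
    """Horizontal angle of an edge at (obj_x, obj_z) as seen from each eye."""
    theta_l = math.atan2(obj_x + EYE_SEP / 2, obj_z)   # left eye at -EYE_SEP/2
    theta_r = math.atan2(obj_x - EYE_SEP / 2, obj_z)   # right eye at +EYE_SEP/2
    return theta_l, theta_r

def crossing_times(theta_l, theta_r):
    """Times after microsaccade onset at which each eye's sweeping receptive
    field crosses the edge. Left eye sweeps rightward from -SWEEP_AMP, right
    eye sweeps leftward from +SWEEP_AMP (mirror-symmetric)."""
    t_l = (theta_l + SWEEP_AMP) / SWEEP_SPEED
    t_r = (SWEEP_AMP - theta_r) / SWEEP_SPEED
    return t_l, t_r

def depth_from_timing(t_l, t_r):
    """In this toy geometry the combined crossing timing encodes disparity
    (theta_l - theta_r), which scales as EYE_SEP / depth."""
    disparity = SWEEP_SPEED * (t_l + t_r) - 2 * SWEEP_AMP
    return EYE_SEP / disparity

if __name__ == "__main__":
    for depth in (0.005, 0.01, 0.02, 0.05):            # metres
        t_l, t_r = crossing_times(*edge_angles(0.0, depth))
        print(f"true depth {depth*1000:5.1f} mm -> "
              f"t_l={t_l*1e3:.3f} ms, t_r={t_r*1e3:.3f} ms, "
              f"recovered {depth_from_timing(t_l, t_r)*1000:5.1f} mm")
```

In this simplified geometry the two onset-referenced crossing times together recover the disparity, and depth follows as EYE_SEP divided by that disparity; the real photoreceptor dynamics and the neural readout studied in the project are of course far richer than this sketch.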

Participating organisations

University of Edinburgh
FESTO
University of Sheffield
Festo SE & Co.KG
Opteran Technologies Ltd

The announcement refers to a point in the past and does not necessarily reflect the current status. The current status is shown on the following page: University OF Edinburgh CHARITY, Edinburgh, United Kingdom.