A new mathematical model helps accurately predict the next gaze fixation point in foveated rendering systems and reduces the inaccuracy caused by blinking.
A new mathematical model has been developed that accurately predicts the next gaze fixation point and reduces the inaccuracy caused by blinking, a new study reveals. The study, published in the SID Symposium Digest of Technical Papers, indicates that the model could make VR/AR systems more realistic and responsive to user actions.
"We have effectively solved the issue with the foveated rendering technology that existed in the mass production of VR systems," researcher Viktor Belyaev, Professor at the RUDN University in Russia. Foveated rendering is a basic technology of VR systems. When a person looks at something, their gaze is focused on the so-called foveated region, and everything else is covered by peripheral vision. Therefore, a computer has to render the images in the foveated region with the highest degree of detail, while other parts require less computational powers.
This approach improves computational performance and eliminates issues caused by the gap between the limited capabilities of graphics processors and ever-increasing display resolution. However, foveated rendering is limited by the speed and accuracy with which the next gaze fixation point can be predicted, because the movement of the human eye is a complex and largely random process.
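To make the split between foveal and peripheral rendering concrete, the sketch below shows one common way such a scheme can be expressed in code: the shading effort spent on a pixel is chosen from its angular distance to the current gaze point. The radii, shading rates, and function names are illustrative assumptions, not details taken from the study.

```python
import math

def shading_rate(pixel_angle_deg: float,
                 foveal_radius_deg: float = 5.0,
                 mid_radius_deg: float = 15.0) -> float:
    # Illustrative: fraction of full shading work given to a pixel,
    # based on how far it sits from the gaze point (assumed radii).
    if pixel_angle_deg <= foveal_radius_deg:
        return 1.0    # foveal region: render at full detail
    if pixel_angle_deg <= mid_radius_deg:
        return 0.5    # near periphery: half the shading work
    return 0.25       # far periphery: a quarter of the shading work

def angle_from_gaze_deg(gaze_px, pixel_px, pixels_per_degree: float) -> float:
    # Small-angle approximation of the angular offset between a pixel
    # and the current gaze point, given the display's pixel density.
    dx = pixel_px[0] - gaze_px[0]
    dy = pixel_px[1] - gaze_px[1]
    return math.hypot(dx, dy) / pixels_per_degree

# Example: a pixel 600 px from the gaze point on a 20 px/deg display lies
# about 30 degrees into the periphery and gets a quarter of the shading work.
print(shading_rate(angle_from_gaze_deg((960, 540), (1560, 540), 20.0)))
```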
To solve this issue, the researchers developed a mathematical modelling method that calculates the next gaze fixation point in advance.
The model's predictions are based on the study of so-called saccadic movements: fast, jerky movements of the eye that accompany the shifts of our gaze from one object to another and can suggest the next fixation point. However, existing models of these movements have not been accurate enough for eye trackers to use them to predict eye movements, the team said.
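The paper's own prediction equations are not reproduced in this report, but the general idea of saccade-based gaze prediction can be sketched as follows: detect saccade onset when eye velocity crosses a threshold, then extrapolate a likely landing point from the motion observed since onset. The threshold, the extrapolation rule, and the function name below are assumptions made for illustration, not the RUDN model.

```python
import numpy as np

def predict_next_fixation(gaze_deg: np.ndarray, t_s: np.ndarray,
                          onset_thresh_deg_s: float = 100.0,
                          gain: float = 2.0):
    """Toy saccade-based predictor of the next gaze fixation point.

    gaze_deg : (N, 2) recent gaze samples in degrees of visual angle
    t_s      : (N,) sample timestamps in seconds
    Returns a predicted landing point in degrees, or None if no saccade
    is in progress. All parameters here are illustrative assumptions."""
    vel = np.diff(gaze_deg, axis=0) / np.diff(t_s)[:, None]  # deg/s per axis
    speed = np.linalg.norm(vel, axis=1)
    fast = speed > onset_thresh_deg_s
    if not fast.any():
        return None                      # still fixating or in smooth pursuit
    onset = int(np.argmax(fast))         # first sample crossing the threshold
    observed = gaze_deg[-1] - gaze_deg[onset]
    if not np.linalg.norm(observed):
        return None
    # Crude extrapolation: assume the saccade travels `gain` times the
    # displacement seen since onset, in the same direction.
    return gaze_deg[onset] + gain * observed

# Usage: 1 kHz samples of a gaze sweeping rightwards during a saccade.
t = np.arange(6) / 1000.0
g = np.array([[0, 0], [0, 0], [0.5, 0], [1.5, 0], [3.0, 0], [5.0, 0]], float)
print(predict_next_fixation(g, t))       # prints [10.  0.] for these samples
```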
The new method was tested experimentally using a VR headset and AR glasses. An eye tracker based on the mathematical model was able to detect eye movements as small as 3.4 arcminutes (roughly 0.06 degrees), and the inaccuracy amounted to 6.7 arcminutes (roughly 0.11 degrees).
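As a quick check of the reported figures, one degree contains 60 arcminutes, so the two angular values convert as follows (a trivial sanity check, not code from the study):

```python
# Convert the reported angular figures from arcminutes to degrees.
for arcmin in (3.4, 6.7):
    print(f"{arcmin} arcmin = {arcmin / 60:.3f} deg")
# 3.4 arcmin = 0.057 deg  (~0.06 deg, the smallest detected movement)
# 6.7 arcmin = 0.112 deg  (~0.11 deg, the reported inaccuracy)
```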
Source: IANS