Researchers at the University of Washington have created a system that can tailor a user interface to a user's motor and visual abilities. After a short assessment, the system configures the interface with presets based on the results. I remember reading about adaptive interfaces quite a long time ago. Could something similar be built to accommodate age-related cognitive differences? Perhaps a spatial-abilities assessment could drive changes to the interface's structure, making it easier to use?
[from Slashdot via The Future of Things]
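To make the idea concrete, here is a minimal sketch of how assessment scores might map to interface presets. This is purely illustrative: the function name, score ranges, and thresholds are my own assumptions, not details of the UW system.

```python
# Hypothetical mapping from assessment scores to UI presets.
# Scores are assumed to be in 0.0-1.0, higher meaning better ability;
# the thresholds and preset values are invented for illustration.

def choose_presets(motor_precision: float, visual_acuity: float) -> dict:
    """Return UI preset values for a user's assessed abilities."""
    return {
        # Lower motor precision -> larger click targets, wider spacing.
        "button_size_px": 24 if motor_precision > 0.7 else 48,
        "spacing_px": 4 if motor_precision > 0.7 else 12,
        # Lower visual acuity -> larger text and a high-contrast theme.
        "font_size_pt": 11 if visual_acuity > 0.7 else 16,
        "high_contrast": visual_acuity <= 0.5,
    }

# A user with precise motor control but reduced vision:
print(choose_presets(0.9, 0.4))
```

A cognitive version might work the same way, swapping in a spatial-abilities score that selects between, say, a deep menu hierarchy and a flatter layout.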