The emergence and eventual mass adoption of AR technologies will create an explosion in data, as our daily lives are recorded and our attention is tracked via eye-gaze from on-headset sensors. This data can be used to understand how we think, what we care about, what we desire, and what could improve our lives. This project aims to leverage the massive amounts of data provided by daily-wear AR glasses to create a "superintelligent" AR assistant by combining cognitive architectures and large language models.

Project Goals

<aside> 💡 J[AR]VIS is a novel educational system that provides information, insights, and visualizations to users both automatically and upon explicit query. The system leverages powerful AI and AR technology to semantically understand the user's environment, using computer vision algorithms and microphone data, and uses a language model to answer virtually any question the user asks. As an AR experience, 3D visualizations of objects and data can be generated automatically by the AI. One can simply look at anything in their world, ask a question about it, and receive an intelligent answer. Because the system is conversational, knowledge can be built up from past queries, making it a highly personal, high-bandwidth educational tool.

</aside>
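The query flow described above (scene context from on-headset sensors, fused with conversational history into a language-model prompt) could be sketched roughly as follows. This is a minimal illustrative sketch, not the project's implementation: all class names, the `query_llm` stub, and the prompt format are assumptions for demonstration.

```python
from dataclasses import dataclass, field

@dataclass
class SceneContext:
    """Hypothetical container for on-headset sensor output."""
    objects: list   # object labels from a computer-vision model
    transcript: str # speech-to-text from the microphone

@dataclass
class Assistant:
    history: list = field(default_factory=list)  # conversational memory

    def build_prompt(self, ctx: SceneContext, question: str) -> str:
        # Fuse visual context, prior turns, and the new question.
        past = " | ".join(self.history) or "none"
        return (f"Visible objects: {', '.join(ctx.objects)}. "
                f"Prior conversation: {past}. "
                f"Question: {question}")

    def query_llm(self, prompt: str) -> str:
        # Placeholder: a real system would call a hosted language model here.
        return f"[LLM response to: {prompt}]"

    def ask(self, ctx: SceneContext, question: str) -> str:
        prompt = self.build_prompt(ctx, question)
        answer = self.query_llm(prompt)
        # Store the turn so later questions can build on it.
        self.history.append(f"Q: {question} A: {answer}")
        return answer

if __name__ == "__main__":
    jarvis = Assistant()
    ctx = SceneContext(objects=["coffee mug", "laptop"], transcript="")
    print(jarvis.ask(ctx, "What material is the mug made of?"))
```

The key design point this sketch illustrates is that every answer is conditioned on both the live scene and the accumulated conversation, which is what makes follow-up questions ("and how is that made?") resolvable without restating context.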


Working on timeline and system architecture.

Meeting Notes

Team Roadmap

Overall Tasks

Literature Review