I am an Assistant Professor at the Human-Computer Interaction Institute at Carnegie Mellon University, leading the Augmented Perception Lab. I am looking for students to join my new lab at CMU. We will work at the intersection of perception, interaction, computation, and Mixed Reality.

My research focuses on understanding how humans perceive and interact with digital information, and on building technology that goes beyond the flat displays of PCs and smartphones to advance our capabilities when interacting with the digital world. To achieve this, I create and study enabling technologies and computational approaches that control when, where, and how virtual content is displayed to increase the usability of AR and VR interfaces.

Before CMU, I was a postdoc at ETH Zurich in the AIT Lab of Otmar Hilliges. I completed my PhD at TU Berlin in the Computer Graphics group, advised by Marc Alexa. I have worked with Jörg Müller at TU Berlin, the Media Interaction Lab in Hagenberg, Austria, Stacey Scott and Mark Hancock at the University of Waterloo, and interned at Microsoft Research (Redmond) in the Perception & Interaction Group.

Download my CV here: cv_davidlindlbauer.pdf. You can also find me on Twitter, LinkedIn, and Google Scholar, or contact me via email.

Check out the Augmented Perception Lab at CMU HCII.

Context-Aware Online Adaptation of Mixed Reality Interfaces
D. Lindlbauer, A. M. Feit, O. Hilliges, 2019.
Project page / Full video (5 min) / talk recording from UIST '19

We present an optimization-based approach for Mixed Reality (MR) systems to automatically control when and where applications are shown, and how much information they display. Currently, content creators design applications, and users then manually adjust which applications are visible and how much information they show. This choice has to be adjusted every time users switch context, i.e., whenever they switch their task or environment. Since context switches happen many times a day, we believe that MR interfaces require automation to alleviate this problem. We propose a real-time approach to automate this process based on users' current cognitive load and knowledge about their task and environment. Our system adapts which applications are displayed, how much information they show, and where they are placed. We formulate this problem as a mix of rule-based decision making and combinatorial optimization which can be solved efficiently in real time. We present a set of proof-of-concept applications showing that our approach is applicable in a wide range of scenarios. Finally, we show in a dual-task evaluation that our approach decreased secondary-task interactions by 36%.
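To make the flavor of that formulation concrete, below is a toy sketch of the selection step only: each application offers a few levels of detail (LOD 0 meaning hidden), a simple rule turns the user's cognitive load into a display budget, and a brute-force combinatorial search picks the highest-utility combination that fits. The app names, utility and cost numbers, and the budget rule are illustrative assumptions, not the paper's actual model, and the paper's placement step is omitted entirely.

```python
# Toy sketch of context-aware MR adaptation: choose a level of detail (LOD)
# per application, including "hidden" (LOD 0), maximizing total utility under
# a cognitive budget that shrinks as the user's load grows. This is a small
# multiple-choice knapsack solved by brute force; all numbers are invented.
from itertools import product

# Per-app (utility, cognitive_cost) for LODs 0..2; LOD 0 = hidden.
APPS = {
    "messages":   [(0.0, 0.0), (0.6, 0.2), (0.9, 0.5)],
    "navigation": [(0.0, 0.0), (0.8, 0.3), (1.0, 0.6)],
    "music":      [(0.0, 0.0), (0.3, 0.1), (0.4, 0.3)],
}

def adapt(cognitive_load: float) -> dict:
    """Return the best LOD per app for a load in [0, 1]."""
    budget = 1.0 - cognitive_load          # rule: high load => show less
    names = list(APPS)
    best, best_utility = {n: 0 for n in names}, 0.0   # default: hide all
    for lods in product(range(3), repeat=len(names)):  # every LOD combination
        utility = sum(APPS[n][l][0] for n, l in zip(names, lods))
        cost = sum(APPS[n][l][1] for n, l in zip(names, lods))
        if cost <= budget and utility > best_utility:
            best, best_utility = dict(zip(names, lods)), utility
    return best

print(adapt(cognitive_load=0.2))  # relaxed user: richer interface
print(adapt(cognitive_load=0.8))  # busy user: most apps hidden or minimized
```

Brute force is fine at this scale; a real-time system with many apps and placement variables would use the kind of efficient combinatorial solver the abstract refers to.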
Remixed Reality: Manipulating Space and Time in Augmented Reality

We present Remixed Reality, a novel form of mixed reality. In contrast to classical mixed reality approaches where users see a direct view or video feed of their environment, with Remixed Reality they see a live 3D reconstruction, gathered from multiple external depth cameras. This approach enables changing the environment as easily as geometry can be changed in virtual reality, while allowing users to view and interact with the actual physical world as they would in augmented reality. We contribute a method that uses an underlying voxel grid holding information such as visibility and transformations, which is applied to live geometry in real time. We characterize a taxonomy of manipulations that are possible with Remixed Reality: spatial changes such as erasing objects; appearance changes such as changing textures; temporal changes such as pausing time; and viewpoint changes that allow users to see the world from different points without changing their physical location.
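As a rough illustration of the voxel-grid idea (not the paper's actual data structures or pipeline), the sketch below stores a visibility flag and a translation per voxel and applies them to each frame of live point geometry: points falling in hidden voxels are erased, and the rest are shifted. Grid resolution, voxel size, and the point-cloud input are assumptions made for the example.

```python
# Minimal sketch of per-voxel edits applied to live geometry: each voxel of a
# regular grid stores an edit (here a visibility flag and a rigid translation)
# that is applied to whatever reconstructed points currently fall inside it.
# Sizes and inputs are illustrative assumptions, not the paper's system.
import numpy as np

VOXEL_SIZE = 0.05                      # 5 cm voxels (assumed)
GRID_DIM = 64                          # 64^3 grid covering the capture volume

visible = np.ones((GRID_DIM,) * 3, dtype=bool)                # per-voxel flag
offset = np.zeros((GRID_DIM,) * 3 + (3,), dtype=np.float32)   # per-voxel shift

def voxel_index(points: np.ndarray) -> np.ndarray:
    """Map world-space points (N, 3) to integer voxel coordinates."""
    return np.clip((points / VOXEL_SIZE).astype(int), 0, GRID_DIM - 1)

def remix(points: np.ndarray) -> np.ndarray:
    """Apply the stored per-voxel edits to one frame of live geometry."""
    idx = voxel_index(points)
    keep = visible[idx[:, 0], idx[:, 1], idx[:, 2]]   # erase hidden voxels
    kept, kidx = points[keep], idx[keep]
    return kept + offset[kidx[:, 0], kidx[:, 1], kidx[:, 2]]  # move the rest

# Example edits: "erase" one region, shift another region 10 cm upward.
visible[10:20, 10:20, 10:20] = False
offset[30:40, :, :] = (0.0, 0.1, 0.0)

frame = np.random.rand(1000, 3).astype(np.float32) * GRID_DIM * VOXEL_SIZE
print(remix(frame).shape)              # fewer points: erased region removed
```

Appearance, temporal, and viewpoint manipulations would extend the same pattern with additional per-voxel state (textures, buffered frames, virtual camera poses) rather than changing the lookup itself.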