Research

Federated Evaluation and Tuning for On-Device Personalization: System Design & Applications

M. Paulik, M. Seigel, H. Mason, D. Telaar, J. Kluivers, R. van Dalen, C.W. Lau, L. Carlson, F. Granqvist, C. Vandevelde, S. Agarwal, J. Freudiger, A. Byde, A. Bhowmick, G. Kapoor, S. Beaumont, A. Cahill, D. Hughes, O. Javidbakht, F. Dong, R. Rishi, S. Hung

We describe the design of our federated task processing system. Originally, the system was created to support two specific federated tasks: evaluation and tuning of on-device ML systems, primarily for the purpose of personalizing these systems. In recent years, support for an additional federated task has been added: federated learning (FL) of deep neural networks. To our knowledge, only one other system that supports FL at scale has been described in the literature. We include comparisons to that system to help discuss design decisions and their associated trade-offs. Finally, we describe two large-scale personalization use cases in detail to showcase the applicability of federated tuning to on-device personalization and to highlight application-specific solutions.
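The core idea of federated evaluation and tuning can be illustrated with a minimal sketch. This is a hypothetical toy example, not the paper's actual protocol or API: a candidate model (here, a simple threshold classifier) is distributed to devices, each device scores it on its private local data, and only aggregate metrics leave the device; the server then selects the best-scoring candidate.

```python
def evaluate_on_device(threshold, local_data):
    """Runs entirely on-device; returns only counts, never raw examples."""
    correct = sum(1 for x, label in local_data if (x > threshold) == label)
    return {"correct": correct, "total": len(local_data)}

def federated_evaluate(threshold, devices):
    """Server-side aggregation of per-device metric reports."""
    reports = [evaluate_on_device(threshold, d) for d in devices]
    total = sum(r["total"] for r in reports)
    correct = sum(r["correct"] for r in reports)
    return correct / total if total else 0.0

def federated_tune(candidates, devices):
    """Federated tuning: pick the candidate with the best aggregated accuracy."""
    return max(candidates, key=lambda t: federated_evaluate(t, devices))
```

In a real deployment, the per-device reports would additionally be protected (e.g., via secure aggregation or differential privacy) before the server sees them; this sketch only captures the task-distribution and metric-aggregation structure.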

Mixed Reality for Learning Programming

J. Kim, S. Agarwal, K. Marotta, S. Li, J. Leo, D. H. Chau

We present our ongoing investigation into leveraging mixed reality (MR) to help students learn coding more easily and enjoyably. We have developed an MR coding learning platform using Apple’s ARKit 2 on iOS, with a physical, user-configurable coding game board. Our approach could provide major benefits over conventional augmented reality (AR) approaches for learning coding and debugging: (1) allowing teachers to tailor the platform to their instructional needs, and to spark creativity and engagement among students in designing programming problems that interest them; (2) enabling students to physically interact with a program, concretizing coding errors and providing real-time visual feedback to aid students’ program understanding and reduce cognitive load. We present our preliminary results, which use ARKit’s image tracking and object detection to enable core mixed-reality interaction capabilities on our platform.

Augmenting Coding: Augmented Reality for Learning Programming

N. Dass, J. Kim, S. Ford, S. Agarwal, D. H. Chau

Augmented reality (AR) is breaking into every industry and is finding a home in many unique and novel applications, due in part to its ability to engage users and their physical surroundings in potentially immersive ways. We present our early investigation into whether these qualities of AR may be leveraged to help people learn coding more easily and enjoyably. Using a within-subjects design with 12 participants, our pilot study evaluated two interactive AR coding environments: (1) head-mounted AR with Microsoft HoloLens, and (2) mobile AR with ARKit on an iPhone; together with a conventional 2D touch interface, using Swift Playground on an iPad, as a baseline. Participants enjoyed using mobile AR the most, and they also completed programming tasks the fastest when using it. Our current results suggest that AR has potential to enhance beginners’ learning experience for coding, especially for tasks that are more interactive and benefit from visual feedback.