Robotics and Visual Computing Lab tours at Brown University

When: 
Thursday, November 1, 2018 - 6:30pm
Room: 
Brown University, Watson Center for Information Technology, corner of Waterman St. and Brook St., Providence, RI
Lecturer(s): 
Stefanie Tellex, Daniel Ritchie, James Tompkin, Andy van Dam, David Laidlaw + colleagues & students

Robotics and Visual Computing Lab tours at Brown University
by Boston Chapters of IEEE Computer and Robotics Societies and GBC/ACM

Please sign up at the Eventbrite page
https://www.eventbrite.com/e/robotics-and-visual-computing-lab-tours-at-brown-university-tickets-51638169154?utm_source=eb_email&utm_medium=email&utm_campaign=new_event_email&utm_term=viewmyevent_button
to get information about carpooling and updates on logistical details.

DESCRIPTION
Title: Robotics and Visual Computing Lab tours at Brown University’s CS department, Thursday, Nov 1, 2018, 6:30-9:00pm. Visitors will have their choice of a variety of demos in the labs on the ground floor of the CS building on the Brown campus, the Watson Center for Information Technology, at the corner of Waterman St. and Brook St., Providence. Parking is available in a lot across the street from the CIT.

Abstract:

1) Robotics: Prof. Stefanie Tellex, colleagues and students

a. Demo 1: Virtual reality teleoperation. Visitors put on a VR headset, see a visualization of the robot's sensor stream, and teleoperate the robot in VR to pick up and manipulate objects.

b. Demo 2: Drones. Students from Brown and the PCTA will do a live demo of our low-cost autonomous drone. This is part of our goal to empower every person with a collaborative robot.

c. Demo 3: Social feedback. Visitors use language and gesture to point out an object to the Baxter robot. The robot responds by delivering the object.

2) Visual Computing: Prof. Daniel Ritchie and students

a. Indoor scene synthesis: learning how to choose and lay out furniture & other objects in indoor spaces using deep neural networks. We have a short narrated video that gives an overview of how the method works, and PhD student Kai can also present a poster on the same project.

b. Learning visual style compatibility of 3D objects: Given two 3D objects, e.g., assets that might be used for a game or VR application, can a computer quantify how well they would "go together" if used in the same scene? We're training neural networks to do this. One source of data is human judgments about style similarity.

c. Building a large dataset of articulated 3D models: Existing large 3D model datasets consist only of static geometry. We're building a large dataset of objects annotated with part mobilities (e.g., door handles can turn) for use in VR, robotics, and other applications.

d. Visual program induction: Given an image or a 3D model, can a computer infer a high-level program that, when executed, reproduces the input image/model? This ability facilitates interesting 'semantic' edits to the image/shape. We can show results from recent systems that do this in several domains, including converting hand-drawn graph sketches into LaTeX-like programs.

3) Visual Computing: Prof. James Tompkin and students

a. Unsupervised Machine Learning-based Image Translation: See yourself transformed into a cat or an anime character using machine learning techniques that automatically learn how to 'translate' between classes of objects in images.

b. Organizing Databases of Imagery with Interactive Labeling: It is easy to scrape databases of imagery online, but how can we organize them easily when they are unlabeled? We show an interactive labeling system that quickly organizes databases of human body geometry or artistic paintings according to user-defined criteria.

c. Light Field Segmentation and Rendering for Image Editing: As smartphones become multi-camera systems, how can we consistently edit images and video captured with these camera arrays? (images/video)

d. Machine Perception of Data Visualizations: Can current machine-learning perception systems reason about data visualizations like graphs, and what does this tell us about the pros and cons of machine vision systems and human vision? (images/video)

4) Visual Computing: Prof. Andy van Dam and students

a. Vizdom: interactive analytics through pen and touch. Vizdom’s frontend allows users to visually compose complex workflows of machine learning and statistics operators on an interactive whiteboard, and the backend leverages recent advances in workflow compilation techniques to run these computations at interactive speeds (joint work with Prof. Tim Kraska of MIT and van Dam’s Ph.D. student Emanuel Zgraggen, now a postdoc with Kraska at MIT).

b. Dash: a pen- and touch-enabled 2D information management system for desktops, slates, and large interactive whiteboards. Using unbounded 2D workspaces, users can gather documents and fragments from a variety of sources, organize them spatially and hierarchically, annotate them, and hyperlink related content to discover and encode relationships. New insights can be presented via customizable dashboards and slide-sequence-style presentations.

5) Visual Computing: Prof. David Laidlaw and associates

a. The YURT, a high-resolution VR facility: it displays over 100 million stereo pixels using 69 full-HD projectors driven by 20 nodes of an HPC cluster. The projectors display onto 145 mirrors covering a 360-degree surface, including overhead and underfoot. At normal viewing distances, the pixels are smaller than the human retina can resolve. Visitors will walk two blocks to 180 George St. and put on 3D stereo glasses for an immersive virtual reality demonstration of science and education projects, as well as some applications that are a bit more frivolous.