8:30 AM-3:00 PM
NVIDIA DLI offers hands-on training for developers, data scientists, and researchers looking to solve challenging problems with deep learning and accelerated computing.
In this workshop you will learn how to train convolutional neural networks (CNNs) and recurrent neural networks (RNNs) to generate captions from images and video using TensorFlow and the Microsoft Common Objects in Context (COCO) dataset.
PREREQUISITES:
- Familiarity with basic Python (functions and variables)
- Understanding of the fundamentals of convolutional neural networks for computer vision
- Prior experience training neural networks in any framework (e.g., a simple example in Keras)
TOOLS AND FRAMEWORKS: TensorFlow
8:30 - 8:45 Breakfast
8:45 - 10:15 Part 1: Segmentation
10:15 - 10:30 Coffee break
10:30 - 12:00 Part 2: Word Generation
12:00 - 1:00 Lunch
1:00 - 3:00 Part 3: Image & Video Captioning
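The captioning pipeline covered in Parts 2 and 3 pairs a CNN image encoder with an RNN word decoder that emits a caption one word at a time. As a rough illustration of that encoder-decoder loop (a toy sketch with stand-in functions, not the workshop's actual TensorFlow code; `encode_image`, `decode_step`, and the tiny vocabulary here are invented for the example):

```python
# Toy sketch of greedy caption generation: an "encoder" turns an image
# into a feature, and a "decoder" emits one word per step until <end>.
# Both functions are hypothetical stand-ins for a trained CNN and RNN.

VOCAB = ["<start>", "a", "cat", "on", "mat", "<end>"]

def encode_image(pixels):
    # Stand-in for a CNN encoder: collapse the image to one feature value.
    return sum(pixels) / len(pixels)

def decode_step(feature, prev_word):
    # Stand-in for one RNN decoder step: a fixed transition table keyed on
    # the previous word (a real decoder would score the whole vocabulary
    # conditioned on the image feature and the hidden state).
    transitions = {
        "<start>": "a", "a": "cat", "cat": "on",
        "on": "mat", "mat": "<end>",
    }
    return transitions[prev_word]

def generate_caption(pixels, max_len=10):
    feature = encode_image(pixels)
    words, prev = [], "<start>"
    for _ in range(max_len):
        nxt = decode_step(feature, prev)
        if nxt == "<end>":
            break
        words.append(nxt)
        prev = nxt
    return " ".join(words)

print(generate_caption([0.1, 0.5, 0.9]))  # -> a cat on mat
```

In the workshop, the transition table is replaced by a trained recurrent network, and decoding operates over the full COCO vocabulary rather than a handful of words.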
Dima Lituiev, PhD,
Bakar Computational Health Sciences Institute, UCSF
Certified NVIDIA Deep Learning Institute Ambassador
Subscribe to our calendar and stay updated on the latest events.
Also, if you want your calendar to be updated automatically with QBI events, please download our .ics file listed below and double-click it. It is compatible with Apple, Google, Facebook, and Microsoft calendar platforms.