Caffe2 Quick Start Guide - ebook
Build and train scalable neural network models on various platforms by leveraging the power of Caffe2
Key Features:
Migrate models trained with other deep learning frameworks to Caffe2
Integrate Caffe2 with Android or iOS and implement deep learning models for mobile devices
Leverage the distributed capabilities of Caffe2 to build models that scale easily
Book Description:
Caffe2 is a popular deep learning library used for fast and scalable training and inference of deep learning models on various platforms. This book introduces you to the Caffe2 framework and shows how you can leverage its power to build, train, and deploy efficient neural network models at scale.
It covers installing Caffe2, composing networks using its operators, training models, and deploying models to different architectures. It also shows how to import models from Caffe and from other frameworks using the ONNX interchange format, and how to deploy Caffe2 models for inference on accelerators such as the CPU and GPU using inference engines. Finally, it demonstrates how Caffe2 can be deployed to a diverse set of hardware, from containers in the cloud to resource-constrained hardware such as the Raspberry Pi.
By the end of this book, you will be able to not only compose and train popular neural network models with Caffe2, but also deploy them on accelerators, in the cloud, and on resource-constrained platforms such as mobile and embedded hardware.
What you will learn:
Build and install Caffe2
Compose neural networks
Train neural networks on CPU or GPU
Import a neural network from Caffe
Import deep learning models from other frameworks
Deploy models on CPU or GPU accelerators using inference engines
Deploy models at the edge and in the cloud
Who this book is for:
Data scientists and machine learning engineers who wish to create fast and scalable deep learning models in Caffe2 will find this book very useful. Some understanding of basic machine learning concepts and prior exposure to programming languages such as C++ and Python will be helpful.
Ashwin Nanjappa is a senior architect at NVIDIA, working in the TensorRT team on improving deep learning inference on GPU accelerators. He holds a PhD from the National University of Singapore, where he developed GPU algorithms for 3D Delaunay triangulation, a fundamental problem in computational geometry. As a post-doctoral research fellow at the BioInformatics Institute (Singapore), he developed GPU-accelerated machine learning algorithms for pose estimation using depth cameras. As an algorithms research engineer at Visenze (Singapore), he implemented computer vision algorithm pipelines in C++, developed a training framework built on Caffe in Python, and trained deep learning models for some of the world's most popular online shopping portals.
Category: Computer Technology
Language: English
Protection: Watermark
ISBN: 978-1-78913-826-9
File size: 5.3 MB