This is the companion code for the book "Deep Learning Framework PyTorch: Getting Started and Practice (2nd Edition)", but it can also be used as a standalone PyTorch tutorial for beginners.
What's new in the second edition
The second edition is a thorough upgrade of the first: the code has been rewritten for PyTorch 1.8, and the content has been comprehensively updated based on feedback from first-edition readers. The book consists of three modules: basic usage, advanced extensions, and practical applications.
Contents
The structure of this book (tutorial/repository) is shown in the figure below:
As the figure shows, this tutorial is divided into three parts:
Basic usage (Chapters 2 to 5) introduces the main modules of PyTorch and some commonly used deep learning tools. This part uses Jupyter Notebook as the teaching medium, so readers can modify and run the notebooks and experiment repeatedly.
- Chapter 2 covers installing PyTorch and configuring the learning environment. It also outlines the main features of PyTorch to give readers a preliminary overview.
- Chapter 3 introduces Tensor, PyTorch's multi-dimensional array, and autograd, its automatic differentiation system, and shows how to implement linear regression with each of them, comparing the two approaches. The chapter also analyzes the basic structure of Tensor and the principles behind autograd to help readers understand PyTorch's underlying modules more thoroughly.
- Chapter 4 introduces the basic usage of PyTorch's neural network module nn, explains layers, activation functions, loss functions, and optimizers, and guides readers to implement the classic network architecture ResNet in fewer than 50 lines of code.
- Chapter 5 introduces PyTorch tools such as data loading, pre-trained models, visualization, and GPU acceleration. Using these tools well can greatly improve programming efficiency.
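The Tensor-plus-autograd linear regression described for Chapter 3 can be sketched roughly as follows. This is a minimal illustration of the idea, not the book's code; the data and learning rate are my own choices, fitting y = 2x + 3 by gradient descent on parameters tracked by autograd.

```python
import torch

torch.manual_seed(0)
x = torch.rand(100, 1)                       # inputs in [0, 1)
y = 2 * x + 3 + 0.01 * torch.randn(100, 1)   # noisy targets for y = 2x + 3

# Parameters tracked by autograd
w = torch.zeros(1, 1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

lr = 0.5
for _ in range(500):
    y_pred = x @ w + b
    loss = ((y_pred - y) ** 2).mean()
    loss.backward()                  # autograd fills w.grad and b.grad
    with torch.no_grad():            # plain gradient-descent update
        w -= lr * w.grad
        b -= lr * b.grad
        w.grad.zero_()               # grads accumulate, so reset each step
        b.grad.zero_()

print(w.item(), b.item())            # close to 2 and 3
```

The same loop can be rewritten with nn and an optimizer, which is exactly the comparison Chapter 3 draws.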
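The pieces Chapter 4 covers — a layer, an activation, a loss function, and an optimizer — fit together as in this minimal sketch (the architecture and hyperparameters here are illustrative, not from the book):

```python
import torch
from torch import nn

torch.manual_seed(0)
model = nn.Sequential(
    nn.Linear(4, 8),   # fully connected layer
    nn.ReLU(),         # activation function
    nn.Linear(8, 1),
)
criterion = nn.MSELoss()                               # loss function
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # optimizer

x = torch.randn(16, 4)
target = torch.randn(16, 1)

loss_before = criterion(model(x), target).item()
for _ in range(20):                  # a few optimization steps
    optimizer.zero_grad()
    loss = criterion(model(x), target)
    loss.backward()
    optimizer.step()
loss_after = criterion(model(x), target).item()
print(loss_before, "->", loss_after)
```

The loss after 20 steps is lower than before, showing the layer/loss/optimizer loop working end to end.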
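Two of the Chapter 5 tools, Dataset/DataLoader batching and GPU acceleration, can be combined as in this small sketch (the toy data is mine; the `.to(device)` pattern is standard PyTorch):

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# Use the GPU when one is available, otherwise fall back to CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

data = torch.arange(10, dtype=torch.float32).unsqueeze(1)
labels = (data > 4).float()
loader = DataLoader(TensorDataset(data, labels), batch_size=4, shuffle=False)

n_batches = 0
for batch, target in loader:
    # Move each batch to the chosen device before computing on it
    batch, target = batch.to(device), target.to(device)
    n_batches += 1
print(n_batches)  # 10 samples with batch size 4 -> 3 batches
```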
Advanced extensions (Chapters 6 to 8) covers more advanced PyTorch topics. This part helps readers write more efficient programs.
- Chapter 6 introduces vectorization in PyTorch, covering broadcasting rules, basic indexing, advanced indexing, and Einstein summation. At the end of the chapter, readers apply vectorization to implement convolution, interleaving, RoI Align, and reverse Unique operations used in deep learning.
- Chapter 7 introduces distributed operations in PyTorch. Distributed and parallel computing can accelerate network training. This chapter explains the basic principles of parallel and distributed computing in detail and shows how to use torch.distributed and Horovod for distributed training in PyTorch.
- Chapter 8 introduces CUDA extensions in PyTorch, guiding readers to implement the Sigmoid function in CUDA. It also summarizes the relationships among CUDA, the NVIDIA driver, cuDNN, and Python.
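The broadcasting rules from Chapter 6 can be seen in a tiny example: a (3, 1) column and a (1, 4) row expand to a common (3, 4) shape with no explicit loop. The einsum line shows the same result via Einstein notation (the example itself is mine, not the book's):

```python
import torch

col = torch.arange(3.0).unsqueeze(1)   # shape (3, 1)
row = torch.arange(4.0).unsqueeze(0)   # shape (1, 4)
table = col * row                      # broadcasts to shape (3, 4)
print(table.shape)                     # torch.Size([3, 4])
print(table)

# The same result with Einstein notation: an outer product of two vectors
outer = torch.einsum("i,j->ij", torch.arange(3.0), torch.arange(4.0))
assert torch.equal(table, outer)
```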
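The torch.distributed workflow mentioned for Chapter 7 — initialize a process group, then run collective operations — can be sketched in a single process. This is a deliberately degenerate sketch (world size 1, gloo backend, a temp-file rendezvous of my own choosing), just to show the API shape, not a real multi-machine setup:

```python
import os
import tempfile
import torch
import torch.distributed as dist

# Rendezvous via a shared file; with one process this initializes immediately
init_file = os.path.join(tempfile.mkdtemp(), "shared_init")
dist.init_process_group(
    backend="gloo",                     # CPU-friendly backend
    init_method=f"file://{init_file}",
    rank=0,
    world_size=1,
)

t = torch.tensor([1.0, 2.0, 3.0])
dist.all_reduce(t, op=dist.ReduceOp.SUM)  # with one rank, the sum is t itself
print(t)
dist.destroy_process_group()
```

In a real job, each process would get a distinct rank, and all_reduce would sum the tensor across all of them.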
Practical applications (Chapters 9 to 13) uses PyTorch to build several cool and interesting applications. For this part, the repository provides complete implementations along with pre-trained models as demos for readers to try.
- Chapter 9 is a transitional chapter. Its goal is not to teach new features, but to apply the knowledge of the first five chapters to a classic Kaggle competition: the simplest image binary-classification problem in deep learning. Along the way it shows readers how to organize programs and code so they are easier to read and maintain, and introduces debugging in PyTorch.
- Chapter 10 introduces the basic principles of generative adversarial networks (GANs) and guides readers to implement an anime-avatar generator from scratch, capable of producing anime avatars in varied styles.
- Chapter 11 introduces the basics of natural language processing and explains the principles of CharRNN and the Transformer in detail. Readers then use a Transformer to write poems automatically; the program can continue classical poems in the style of the ancients and generate acrostic poems.
- Chapter 12 introduces the basic principles of style transfer and guides readers to implement a neural network that supports arbitrary style transfer. With this network, readers can render any picture in the style of a famous painting.
- Chapter 13 introduces the basic principles of object detection and guides readers to implement CenterNet, a single-stage, anchor-free object detection algorithm that needs no non-maximum suppression. CenterNet's design ideas carry over to classic computer vision problems such as 3D object detection, human pose estimation, and object tracking.
The text and some of the Markdown content in the notebooks come from the first draft of this book, so some passages may read awkwardly; the author will correct them over time. This content is about 80% consistent with the book but may contain grammatical issues. Due to time constraints, updates will come gradually.
Do you need to buy the book?
The book is not required. This repository contains more than 60% of the book's text and more than 90% of its code; in particular, the introductory chapters retain the book's explanations almost completely. Readers can use this tutorial without buying the book.
If you prefer the reading experience of the printed edition and would like a beautifully printed copy for easy reading, consider spending a little money to support the author's year of work~
Code description
The code has mainly been tested under Python 3 with PyTorch 1.6 to 1.8. Python 2 has not been tested, nor have newer PyTorch versions.
If anything is incorrect or could be improved, please open an issue for discussion or submit a pull request.
Environment configuration
To install PyTorch, select the appropriate version on the official website and run the generated install command. For more installation methods, refer to the instructions in the book.
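After installing, a quick sanity check (not from the book, just standard torch attributes) confirms the version, whether CUDA is usable, and that a basic tensor operation works:

```python
import torch

print(torch.__version__)           # installed PyTorch version string
print(torch.cuda.is_available())   # True only with a GPU and matching driver
x = torch.rand(2, 3)
print(x @ x.t())                   # a small matmul confirms the install works
```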
Cloning the repository
```bash
git clone https://github.com/chenyuntc/PyTorch-book.git
```
^_^
If you find any bugs, unclear explanations, or confusing passages, please open an issue.
Pull requests are welcome.
Happy Coding!

- JD.com purchase link
- Dangdang purchase link