

### A GPU-Ready Tensor Library

If you use NumPy, then you have used Tensors (a.k.a. ndarray).

PyTorch provides Tensors that can live either on the CPU or the GPU and accelerates the computation by a huge amount.

We provide a wide variety of tensor routines to accelerate and fit your scientific computation needs such as slicing, indexing, mathematical operations, linear algebra, and reductions. And they are fast!
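As a minimal sketch of that NumPy-like workflow (assuming a recent PyTorch install; the `cuda` device is used only when a GPU is actually present):

```python
import torch

# Tensors behave much like NumPy ndarrays: slicing, reductions, linear algebra.
x = torch.randn(3, 4)          # 3x4 tensor of standard-normal values
row = x[0]                     # slicing, exactly as in NumPy
total = x.sum()                # reduction
gram = x @ x.T                 # matrix product (linear algebra)

# The same code runs on the GPU; here we fall back to the CPU if none is present.
device = "cuda" if torch.cuda.is_available() else "cpu"
x = x.to(device)
print((x * 2).mean().item())   # computed on `device`
```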

### Dynamic Neural Networks: Tape-Based Autograd

PyTorch has a unique way of building neural networks: using and replaying a tape recorder.

Most frameworks such as TensorFlow, Theano, Caffe, and CNTK have a static view of the world. One has to build a neural network and reuse the same structure again and again. Changing the way the network behaves means that one has to start from scratch.

With PyTorch, we use a technique called reverse-mode auto-differentiation, which allows you to change the way your network behaves arbitrarily with zero lag or overhead. Our inspiration comes from several research papers on this topic, as well as current and past work such as torch-autograd, autograd, and Chainer. While this technique is not unique to PyTorch, it's one of the fastest implementations of it to date. You get the best of speed and flexibility for your crazy research.
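A small illustration of the tape in action: each forward pass records a fresh graph, so ordinary Python control flow can change the network's structure per input (the threshold `100` below is arbitrary):

```python
import torch

x = torch.randn(3, requires_grad=True)

# Each forward pass records a fresh "tape" of operations, so plain Python
# control flow can reshape the computation from one input to the next.
y = x * 2
while y.norm() < 100:   # data-dependent loop: the graph's depth varies per input
    y = y * 2

loss = y.sum()
loss.backward()         # replay the tape in reverse to get d(loss)/dx
print(x.grad)           # gradient reflects however many doublings actually ran
```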

### Python First

PyTorch is not a Python binding into a monolithic C++ framework. It is built to be deeply integrated into Python. You can use it naturally like you would use NumPy / SciPy / scikit-learn, etc. You can write your new neural network layers in Python itself, using your favorite libraries, and use packages such as Cython and Numba. Our goal is to not reinvent the wheel where appropriate.
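For instance, a new layer can be a plain Python class; `ScaledReLU` below is a hypothetical example written from scratch, not a built-in module:

```python
import torch
import torch.nn as nn

# `ScaledReLU` is a hypothetical layer, written entirely in Python as an nn.Module.
class ScaledReLU(nn.Module):
    def __init__(self, scale=1.0):
        super().__init__()
        self.scale = nn.Parameter(torch.tensor(scale))  # learnable scalar

    def forward(self, x):
        return self.scale * torch.relu(x)

layer = ScaledReLU(0.5)
out = layer(torch.randn(2, 3))
out.sum().backward()       # autograd flows through the pure-Python forward()
print(layer.scale.grad)    # gradient w.r.t. the learnable scale
```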
### Imperative Experiences

PyTorch is designed to be intuitive, linear in thought, and easy to use. When you execute a line of code, it gets executed. There isn't an asynchronous view of the world. When you drop into a debugger or receive error messages and stack traces, understanding them is straightforward. The stack trace points to exactly where your code was defined. We hope you never spend hours debugging your code because of bad stack traces or asynchronous and opaque execution engines.
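A short sketch of what that feels like in practice; the mismatched shapes below are chosen only to trigger an ordinary error at a known line:

```python
import torch

a = torch.ones(2, 3)
b = torch.ones(4, 5)
print(a.sum())       # runs immediately; there is no deferred graph to compile

try:
    c = a @ b        # shape mismatch: the error is raised on this exact line
except RuntimeError as err:
    print(err)       # the stack trace points here, not into an opaque engine
```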