torch.nn: these are the basic building blocks for graphs.

- Parameter: a kind of Tensor that is to be considered a module parameter.
- Module: base class for all neural network modules.
- Non-linear Activations (weighted sum, nonlinearity)
- DataParallel Layers (multi-GPU, distributed)
- register_module_forward_pre_hook: registers a forward pre-hook common to all modules.
- register_module_forward_hook: registers a global forward hook for all modules.

Related notes: Extending torch.func with autograd.Function; CPU threading and TorchScript inference; CUDA Automatic Mixed Precision examples.
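A minimal sketch of how these pieces fit together, assuming PyTorch is installed: a subclass of nn.Module that holds an nn.Parameter, plus a forward hook registered on an instance (the per-module counterpart of the global hooks listed above). The Scale module and hook function are illustrative names, not part of the library.

```python
import torch
import torch.nn as nn

class Scale(nn.Module):
    """Illustrative module: multiplies its input by a learnable scalar."""
    def __init__(self):
        super().__init__()
        # nn.Parameter is a Tensor automatically registered as a
        # module parameter (it shows up in m.parameters()).
        self.weight = nn.Parameter(torch.ones(1))

    def forward(self, x):
        return self.weight * x

m = Scale()

captured = []
def hook(module, inputs, output):
    # Forward hooks run after forward() and receive the module,
    # its positional inputs, and its output.
    captured.append(output.detach().clone())

handle = m.register_forward_hook(hook)
y = m(torch.tensor([2.0, 3.0]))
handle.remove()  # hooks can be removed via the returned handle
```

The global variants (register_module_forward_hook and register_module_forward_pre_hook, in torch.nn.modules.module) take the same hook signature but apply to every module, which is useful for profiling or debugging an entire model at once.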