Revert "Revert D21337640: [pytorch][PR] Split up documentation into s…
Browse files Browse the repository at this point in the history
…ubpages and clean up some warnings" (pytorch#37778)

Summary:
Original PR: pytorch#37419

cc mattip suo
Pull Request resolved: pytorch#37778

Differential Revision: D21385774

Pulled By: ezyang

fbshipit-source-id: 5de532faab8bae132736b6b5189e0ee2ac9935be
ezyang authored and facebook-github-bot committed May 4, 2020
1 parent 4025d87 commit 4fef376
Showing 51 changed files with 1,699 additions and 1,983 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -30,6 +30,7 @@ dist/
docs/src/**/*
docs/cpp/build
docs/cpp/source/api
docs/source/generated/
log
test/.coverage
test/.hypothesis/
12 changes: 12 additions & 0 deletions docs/source/_templates/autosummary/class.rst
@@ -0,0 +1,12 @@
.. role:: hidden
:class: hidden-section
.. currentmodule:: {{ module }}


{{ name | underline}}

.. autoclass:: {{ name }}
:inherited-members:
:members:

.. autogenerated from source/_templates/autosummary/class.rst
14 changes: 14 additions & 0 deletions docs/source/_templates/classtemplate.rst
@@ -0,0 +1,14 @@
.. role:: hidden
:class: hidden-section
.. currentmodule:: {{ module }}


{{ name | underline}}

.. autoclass:: {{ name }}
:members:


..
autogenerated from source/_templates/classtemplate.rst
note it does not have :inherited-members:
14 changes: 14 additions & 0 deletions docs/source/_templates/sobolengine.rst
@@ -0,0 +1,14 @@
.. currentmodule:: {{ module }}


{{ name | underline}}

.. autoclass:: {{ name }}
:members:
:exclude-members: MAXBIT, MAXDIM
:undoc-members:


..
autogenerated from source/_templates/sobolengine.rst
note it has specific options
2 changes: 1 addition & 1 deletion docs/source/community/persons_of_interest.rst
@@ -145,7 +145,7 @@ PowerPC
- Alfredo Mendoza (`avmgithub <https://github.com/avmgithub>`__)

Library-level maintainers
------------------------
-------------------------

XLA
~~~
11 changes: 8 additions & 3 deletions docs/source/conf.py
@@ -23,6 +23,7 @@
# sys.path.insert(0, os.path.abspath('../..'))

import torch

try:
import torchvision # noqa: F401
except ImportError:
@@ -56,6 +57,10 @@
'javasphinx',
]

# build the templated autosummary files
autosummary_generate = True
numpydoc_show_class_members = False

# autosectionlabel throws warnings if section names are duplicated.
# The following tells autosectionlabel to not throw a warning for
# duplicated section names that are in different documents.
@@ -72,7 +77,7 @@
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
if RELEASE:
templates_path = ['_templates-stable']
templates_path = ['_templates-stable'] + templates_path

# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
@@ -240,8 +245,8 @@ def setup(app):

# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {
'python': ('https://docs.python.org/', None),
'numpy': ('https://docs.scipy.org/doc/numpy/', None),
'python': ('https://docs.python.org/3', None),
'numpy': ('https://numpy.org/doc/stable', None),
}

# -- A patch that prevents Sphinx from cross-referencing ivar tags -------
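
For context, the ``conf.py`` changes above work together; here is a consolidated sketch of the affected settings as they read after this commit (``RELEASE`` is assumed to be defined earlier in the file, as in the existing ``conf.py``)::

    # Build templated autosummary stubs into docs/source/generated/
    # (the directory the .gitignore change above excludes from version control).
    autosummary_generate = True
    numpydoc_show_class_members = False

    # Stable builds now prepend their templates instead of replacing the
    # defaults, so the new autosummary templates stay reachable.
    templates_path = ['_templates']
    if RELEASE:
        templates_path = ['_templates-stable'] + templates_path

    # Intersphinx now targets the live Python 3 and NumPy inventories.
    intersphinx_mapping = {
        'python': ('https://docs.python.org/3', None),
        'numpy': ('https://numpy.org/doc/stable', None),
    }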
5 changes: 0 additions & 5 deletions docs/source/cuda_deterministic.rst

This file was deleted.

5 changes: 0 additions & 5 deletions docs/source/cuda_deterministic_backward.rst

This file was deleted.

8 changes: 0 additions & 8 deletions docs/source/cudnn_deterministic.rst

This file was deleted.

3 changes: 3 additions & 0 deletions docs/source/cudnn_persistent_rnn.rst
@@ -1,3 +1,6 @@
:orphan:


.. note::

If the following conditions are satisfied:
2 changes: 1 addition & 1 deletion docs/source/distributions.rst
@@ -78,7 +78,7 @@ Probability distributions - torch.distributions
:show-inheritance:

:hidden:`ContinuousBernoulli`
~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. currentmodule:: torch.distributions.continuous_bernoulli
.. autoclass:: ContinuousBernoulli
2 changes: 2 additions & 0 deletions docs/source/docutils.conf
@@ -0,0 +1,2 @@
[html writers]
table_style: colwidths-auto
2 changes: 1 addition & 1 deletion docs/source/index.rst
@@ -45,7 +45,7 @@ PyTorch is an optimized tensor library for deep learning using GPUs and CPUs.
onnx
optim
quantization
rpc/index.rst
rpc
torch.random <random>
sparse
storage
131 changes: 62 additions & 69 deletions docs/source/jit.rst
@@ -10,11 +10,11 @@ TorchScript


.. toctree::
:maxdepth: 1
:caption: Language Reference
:hidden:
:maxdepth: 1
:caption: Language Reference
:hidden:

language_reference <jit_language_reference>
jit_language_reference

.. contents:: :local:
:depth: 2
@@ -40,25 +40,18 @@ For an end-to-end example of converting a PyTorch model to TorchScript and runni
Creating TorchScript Code
--------------------------

.. autofunction:: script(obj)

.. autofunction:: trace(func, example_inputs, optimize=None, check_trace=True, check_inputs=None, check_tolerance=1e-5)

.. autofunction:: trace_module(mod, inputs, optimize=None, check_trace=True, check_inputs=None, check_tolerance=1e-5)

.. autoclass:: ScriptModule()
:members:

.. autoclass:: ScriptFunction()

.. autofunction:: save

.. autofunction:: load

.. autofunction:: ignore

.. autofunction:: unused
.. autosummary::
:toctree: generated

script
trace
trace_module
ScriptModule
ScriptFunction
save
load
ignore
unused
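
For orientation, the entries in the new autosummary table are the same public ``torch.jit`` functions and classes that the removed ``autofunction``/``autoclass`` directives documented. A minimal sketch of the two creation paths, using only APIs named in this diff::

    import torch

    def double(x):
        return x * 2

    # trace: record the operations run for a concrete example input
    traced = torch.jit.trace(double, (torch.rand(3),))

    # script: compile the function's Python source directly
    scripted = torch.jit.script(double)

    assert torch.equal(traced(torch.ones(3)), scripted(torch.ones(3)))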

Mixing Tracing and Scripting
----------------------------
@@ -167,7 +160,7 @@ TorchScript is a statically typed subset of Python, so many Python features appl
directly to TorchScript. See the full :ref:`language-reference` for details.


.. _Builtin functions:
.. _builtin functions:

Built-in Functions and Modules
------------------------------
@@ -207,39 +200,38 @@ Disable JIT for Debugging
~~~~~~~~~~~~~~~~~~~~~~~~~
.. envvar:: PYTORCH_JIT

Setting the environment variable ``PYTORCH_JIT=0`` will disable all script
and tracing annotations. If there is hard-to-debug error in one of your
TorchScript model, you can use this flag to force everything to run using native
Python. Since TorchScript (scripting and tracing) are disabled with this flag,
you can use tools like ``pdb`` to debug the model code.

Given an example

@torch.jit.script
def scripted_fn(x : torch.Tensor):
for i in range(12):
x = x + x
return x
Setting the environment variable ``PYTORCH_JIT=0`` will disable all script
and tracing annotations. If there is hard-to-debug error in one of your
TorchScript model, you can use this flag to force everything to run using native
Python. Since TorchScript (scripting and tracing) are disabled with this flag,
you can use tools like ``pdb`` to debug the model code. For example::

@torch.jit.script
def scripted_fn(x : torch.Tensor):
for i in range(12):
x = x + x
return x

def fn(x):
x = torch.neg(x)
import pdb; pdb.set_trace()
return scripted_fn(x)
def fn(x):
x = torch.neg(x)
import pdb; pdb.set_trace()
return scripted_fn(x)

traced_fn = torch.jit.trace(fn, (torch.rand(4, 5),))
traced_fn(torch.rand(3, 4))
traced_fn = torch.jit.trace(fn, (torch.rand(4, 5),))
traced_fn(torch.rand(3, 4))

Debugging this script with ``pdb`` works except for when we invoke the :func:`@torch.jit.script <torch.jit.script>`
function. We can globally disable JIT, so that we can call the :func:`@torch.jit.script <torch.jit.script>`
function as a normal Python function and not compile it. If the above script
is called ``disable_jit_example.py``, we can invoke it like so::
Debugging this script with ``pdb`` works except for when we invoke the
:func:`@torch.jit.script <torch.jit.script>` function. We can globally disable
JIT, so that we can call the :func:`@torch.jit.script <torch.jit.script>`
function as a normal Python function and not compile it. If the above script
is called ``disable_jit_example.py``, we can invoke it like so::

$ PYTORCH_JIT=0 python disable_jit_example.py
$ PYTORCH_JIT=0 python disable_jit_example.py

and we will be able to step into the :func:`@torch.jit.script <torch.jit.script>` function as a normal Python
function. To disable the TorchScript compiler for a specific function, see
:func:`@torch.jit.ignore <torch.jit.ignore>`.
and we will be able to step into the :func:`@torch.jit.script
<torch.jit.script>` function as a normal Python function. To disable the
TorchScript compiler for a specific function, see
:func:`@torch.jit.ignore <torch.jit.ignore>`.


Inspecting Code
@@ -537,14 +529,6 @@ rather build up the result tensor out-of-place with ``torch.cat``:

...

.. _Builtin functions:

Built-in Functions and Modules
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

See :ref:`builtin-functions` for a full reference of supported functions.


Frequently Asked Questions
--------------------------

@@ -582,7 +566,9 @@ Q: How do I store attributes on a :class:`ScriptModule`?

.. testcode::

class Model(nn.Module):
import torch

class Model(torch.nn.Module):
def __init__(self):
super(Model, self).__init__()
self.x = 2
@@ -608,13 +594,11 @@ Q: How do I store attributes on a :class:`ScriptModule`?
3. Constants - Annotating a class member as ``Final`` (or adding it to a list called
``__constants__`` at the class definition level) will mark the contained names
as constants. Constants are saved directly in the code of the model. See
`Python-defined Constants`_ for details.
`builtin-constants` for details.

4. Attributes - Values that are a `supported type`_ can be added as mutable
4. Attributes - Values that are a `supported type` can be added as mutable
attributes. Most types can be inferred but some may need to be specified, see
`Module Attributes`_ for details.


`module attributes` for details.
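
A minimal sketch combining a constant and a mutable attribute with an ordinary parameter and buffer (the class name and values are illustrative, not from the diff; assumes ``typing.Final``, which needs Python 3.8, with ``torch.jit.Final`` as the fallback spelling)::

    import torch
    from typing import Final, List

    class Pad(torch.nn.Module):
        amount: Final[int]   # constant: baked into the compiled code
        history: List[int]   # attribute: an empty list needs an annotation

        def __init__(self):
            super().__init__()
            self.weight = torch.nn.Parameter(torch.rand(4))  # parameter
            self.register_buffer('mean', torch.zeros(4))     # buffer
            self.amount = 2
            self.history = []

        def forward(self, x):
            self.history.append(x.numel())
            return x * self.weight + self.mean + self.amount

    m = torch.jit.script(Pad())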

Q: I would like to trace module's method but I keep getting this error:

@@ -741,12 +725,13 @@ TorchScript Classes
for simple record-like types (think a ``NamedTuple`` with methods
attached).

Everything in a user defined `TorchScript Class`_ is exported by default, functions
can be decorated with :func:`@torch.jit.ignore <torch.jit.ignore>` if needed.
Everything in a user defined `TorchScript Class <torchscript-class>`_ is
exported by default, functions can be decorated with :func:`@torch.jit.ignore
<torch.jit.ignore>` if needed.

Attributes
^^^^^^^^^^
The TorchScript compiler needs to know the types of `module attributes`_. Most types
The TorchScript compiler needs to know the types of `module attributes`. Most types
can be inferred from the value of the member. Empty lists and dicts cannot have their
types inferred and must have their types annotated with `PEP 526-style <https://www.python.org/dev/peps/pep-0526/#class-and-instance-variable-annotations>`_ class annotations.
If a type cannot be inferred and is not explicitly annotated, it will not be added as an attribute
@@ -793,7 +778,7 @@ New API:

Constants
^^^^^^^^^
The ``Final`` type constructor can be used to mark members as `constant`_. If members are not marked constant, they will be copied to the resulting :class:`ScriptModule` as an attribute. Using ``Final`` opens opportunities for optimization if the value is known to be fixed and gives additional type safety.
The ``Final`` type constructor can be used to mark members as `constant`. If members are not marked constant, they will be copied to the resulting :class:`ScriptModule` as an attribute. Using ``Final`` opens opportunities for optimization if the value is known to be fixed and gives additional type safety.

Old API:

@@ -839,7 +824,7 @@ New API:
Variables
^^^^^^^^^
Containers are assumed to have type ``Tensor`` and be non-optional (see
`Default Types`_ for more information). Previously, ``torch.jit.annotate`` was used to
`Default Types` for more information). Previously, ``torch.jit.annotate`` was used to
tell the TorchScript compiler what the type should be. Python 3 style type hints are
now supported.

@@ -856,3 +841,11 @@ now supported.
if flag:
b = 2
return x, b
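
The fragment above is truncated by the diff view; a complete version of the pattern it illustrates, assuming an ``Optional`` local variable (the function name is illustrative, not from the diff)::

    import torch
    from typing import Optional

    @torch.jit.script
    def select(x: torch.Tensor, flag: bool):
        b: Optional[int] = None  # Python 3 style hint instead of torch.jit.annotate
        if flag:
            b = 2
        return x, b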

References
~~~~~~~~~~
.. toctree::
:maxdepth: 1

jit_python_reference
jit_unsupported