Replace master with main in links and docs/conf.py (pytorch#100176)
Fixes #ISSUE_NUMBER

Pull Request resolved: pytorch#100176
Approved by: https://github.com/albanD, https://github.com/malfet
Svetlana Karslioglu authored and pytorchmergebot committed May 2, 2023
1 parent 0aac244 commit d425da8
Showing 27 changed files with 46 additions and 46 deletions.
6 changes: 3 additions & 3 deletions docs/cpp/source/conf.py
@@ -42,7 +42,7 @@
] if run_doxygen else [])

intersphinx_mapping = {
-'pytorch': ('https://pytorch.org/docs/master', None)
+'pytorch': ('https://pytorch.org/docs/main', None)
}

# Setup absolute paths for communicating with breathe / exhale where
@@ -133,10 +133,10 @@
#
# The short X.Y version.
# TODO: change to [:2] at v1.0
-version = 'master'
+version = 'main'
# The full version, including alpha/beta/rc tags.
# TODO: verify this works as expected
-release = 'master'
+release = 'main'

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
4 changes: 2 additions & 2 deletions docs/cpp/source/notes/maybe_owned.rst
@@ -55,5 +55,5 @@ So, general rules of thumb:
reference count, but never in misbehavior - so it's always the safer bet, unless
the lifetime of the Tensor you're looking to wrap is crystal clear.

-More details and implementation code can be found at <https://github.com/pytorch/pytorch/blob/master/c10/util/MaybeOwned.h> and
-<https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/templates/TensorBody.h>.
+More details and implementation code can be found at <https://github.com/pytorch/pytorch/blob/main/c10/util/MaybeOwned.h> and
+<https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/templates/TensorBody.h>.
2 changes: 1 addition & 1 deletion docs/cpp/source/notes/tensor_creation.rst
@@ -147,7 +147,7 @@ allowed values for these axes at the moment are:

There exist "Rust-style" shorthands for dtypes, like ``kF32`` instead of
``kFloat32``. See `here
-<https://github.com/pytorch/pytorch/blob/master/torch/csrc/api/include/torch/types.h>`_
+<https://github.com/pytorch/pytorch/blob/main/torch/csrc/api/include/torch/types.h>`_
for the full list.


4 changes: 2 additions & 2 deletions docs/libtorch.rst
@@ -38,9 +38,9 @@ Note that we are working on eliminating tools/build_pytorch_libs.sh in favor of
Building libtorch using CMake
--------------------------------------

-You can build C++ libtorch.so directly with cmake. For example, to build a Release version from the master branch and install it in the directory specified by CMAKE_INSTALL_PREFIX below, you can use
+You can build C++ libtorch.so directly with cmake. For example, to build a Release version from the main branch and install it in the directory specified by CMAKE_INSTALL_PREFIX below, you can use
::
-git clone -b master --recurse-submodule https://github.com/pytorch/pytorch.git
+git clone -b main --recurse-submodule https://github.com/pytorch/pytorch.git
mkdir pytorch-build
cd pytorch-build
cmake -DBUILD_SHARED_LIBS:BOOL=ON -DCMAKE_BUILD_TYPE:STRING=Release -DPYTHON_EXECUTABLE:PATH=`which python3` -DCMAKE_INSTALL_PREFIX:PATH=../pytorch-install ../pytorch
8 changes: 4 additions & 4 deletions docs/source/_templates/layout.html
@@ -2,18 +2,18 @@
<link rel="canonical" href="{{ theme_canonical_url }}{{ pagename }}.html" />

{% block extrahead %}
-{% if release == "master" %}
+{% if release == "main" %}
<!--
-Search engines should not index the master version of documentation.
-Stable documentation are built without release == 'master'.
+Search engines should not index the main version of documentation.
+Stable documentation are built without release == 'main'.
-->
<meta name="robots" content="noindex">
{% endif %}
{{ super() }}
{% endblock %}

{% block menu %}
-{% if release == "master" %}
+{% if release == "main" %}
<div>
<a style="color:#F05732" href="{{ theme_canonical_url }}{{ pagename }}.html">
You are viewing unstable developer preview docs.
4 changes: 2 additions & 2 deletions docs/source/community/build_ci_governance.rst
@@ -10,8 +10,8 @@ For the person to be a maintainer, a person needs to:
* At least one of these commits must be submitted in the last six months

To add a qualified person to the maintainers' list, please create
-a PR that adds a person to the `persons of interests <https://pytorch.org/docs/master/community/persons_of_interest.html>`__ page and
-`merge_rules <https://github.com/pytorch/pytorch/blob/master/.github/merge_rules.yaml>`__ files. Current maintainers will cast their votes of
+a PR that adds a person to the `persons of interests <https://pytorch.org/docs/main/community/persons_of_interest.html>`__ page and
+`merge_rules <https://github.com/pytorch/pytorch/blob/main/.github/merge_rules.yaml>`__ files. Current maintainers will cast their votes of
support. Decision criteria for approving the PR:
* Not earlier than two business days passed before merging (ensure the majority of the contributors have seen it)
* PR has the correct label (`module: ci`)
6 changes: 3 additions & 3 deletions docs/source/community/contribution_guide.rst
@@ -61,7 +61,7 @@ here is the basic process.
open an issue first before implementing a PR.

- Core changes and refactors can be quite difficult to coordinate
-since the pace of development on PyTorch master is quite fast.
+since the pace of development on the PyTorch main branch is quite fast.
Definitely reach out about fundamental or cross-cutting changes;
we can often give guidance about how to stage such changes into
more easily reviewable pieces.
@@ -85,7 +85,7 @@ here is the basic process.
everything, but if you happen to know who the maintainer for a
given subsystem affected by your patch is, feel free to include
them directly on the pull request. You can learn more about
-`Persons of Interest <https://pytorch.org/docs/master/community/persons_of_interest.html>`_
+`Persons of Interest <https://pytorch.org/docs/main/community/persons_of_interest.html>`_
that could review your code.

- **Iterate on the pull request until it's accepted!**
@@ -315,7 +315,7 @@ Python Docs

PyTorch documentation is generated from python source using
`Sphinx <https://www.sphinx-doc.org/en/master/>`__. Generated HTML is
-copied to the docs folder in the master branch of
+copied to the docs folder in the main branch of
`pytorch.github.io <https://github.com/pytorch/pytorch.github.io/tree/master/docs>`__,
and is served via GitHub pages.

6 changes: 3 additions & 3 deletions docs/source/community/design.rst
@@ -8,7 +8,7 @@ serve as a guide to help trade off different concerns and to resolve
disagreements that may come up while developing PyTorch. For more
information on contributing, module maintainership, and how to escalate a
disagreement to the Core Maintainers, please see `PyTorch
-Governance <https://pytorch.org/docs/master/community/governance.html>`__.
+Governance <https://pytorch.org/docs/main/community/governance.html>`__.

Design Principles
-----------------
@@ -64,7 +64,7 @@ Python <https://peps.python.org/pep-0020/>`__:
A more concise way of describing these two goals is `Simple Over
Easy <https://www.infoq.com/presentations/Simple-Made-Easy/>`_. Let’s start with an example because *simple* and *easy* are
often used interchangeably in everyday English. Consider how one may
-model `devices <https://pytorch.org/docs/master/tensor_attributes.html#torch.device>`__
+model `devices <https://pytorch.org/docs/main/tensor_attributes.html#torch.device>`__
in PyTorch:

- **Simple / Explicit (to understand, debug):** every tensor is associated
@@ -141,7 +141,7 @@ Python usability end of the curve:
- `TorchDynamo <https://dev-discuss.pytorch.org/t/torchdynamo-an-experiment-in-dynamic-python-bytecode-transformation/361>`__,
a Python frame evaluation tool capable of speeding up existing
eager-mode PyTorch programs with minimal user intervention.
-- `torch_function <https://pytorch.org/docs/master/notes/extending.html#extending-torch>`__
+- `torch_function <https://pytorch.org/docs/main/notes/extending.html#extending-torch>`__
and `torch_dispatch <https://dev-discuss.pytorch.org/t/what-and-why-is-torch-dispatch/557>`__
extension points, which have enabled Python-first functionality to be
built on-top of C++ internals, such as the `torch.fx
6 changes: 3 additions & 3 deletions docs/source/compile/custom-backends.rst
@@ -84,7 +84,7 @@ Registration serves two purposes:

* You can pass a string containing your backend function's name to ``torch.compile`` instead of the function itself,
for example, ``torch.compile(model, backend="my_compiler")``.
-* It is required for use with the `minifier <https://pytorch.org/docs/master/compile/troubleshooting.html>`__. Any generated
+* It is required for use with the `minifier <https://pytorch.org/docs/main/compile/troubleshooting.html>`__. Any generated
code from the minifier must call your code that registers your backend function, typically through an ``import`` statement.
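For illustration, a minimal backend that just prints the captured FX graph and runs it unmodified might look like this (a sketch; ``my_compiler`` and ``fn`` are illustrative names, not part of this diff's docs)::

    import torch

    def my_compiler(gm: torch.fx.GraphModule, example_inputs):
        # Inspect or transform the captured FX graph here; returning
        # gm.forward runs the graph unmodified.
        gm.graph.print_tabular()
        return gm.forward

    @torch.compile(backend=my_compiler)
    def fn(x):
        return torch.relu(x) + 1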

Custom Backends after AOTAutograd
@@ -94,7 +94,7 @@ It is possible to define custom backends that are called by AOTAutograd rather t
This is useful for 2 main reasons:

* Users can define backends that support model training, as AOTAutograd can generate the backward graph for compilation.
-* AOTAutograd produces FX graphs consisting of `canonical Aten ops <https://pytorch.org/docs/master/ir.html#canonical-aten-ir>`__. As a result,
+* AOTAutograd produces FX graphs consisting of `canonical Aten ops <https://pytorch.org/docs/main/ir.html#canonical-aten-ir>`__. As a result,
custom backends only need to support the canonical Aten opset, which is a significantly smaller opset than the entire torch/Aten opset.

Wrap your backend with
@@ -265,7 +265,7 @@ Composable Backends
^^^^^^^^^^^^^^^^^^^

TorchDynamo includes many backends, which can be found in
-`backends.py <https://github.com/pytorch/pytorch/blob/master/torch/_dynamo/optimizations/backends.py>`__
+`backends.py <https://github.com/pytorch/pytorch/blob/main/torch/_dynamo/optimizations/backends.py>`__
or ``torch._dynamo.list_backends()``. You can combine these backends
together with the following code:

2 changes: 1 addition & 1 deletion docs/source/compile/faq.rst
@@ -35,7 +35,7 @@ backwards ops, due to how AOTAutograd compiled functions interact with
dispatcher hooks.

The basic strategy for optimizing DDP with Dynamo is outlined in
-`distributed.py <https://github.com/pytorch/pytorch/blob/master/torch/_dynamo/optimizations/distributed.py>`__
+`distributed.py <https://github.com/pytorch/pytorch/blob/main/torch/_dynamo/optimizations/distributed.py>`__
where the main idea will be to graph break on `DDP bucket
boundaries <https://pytorch.org/docs/stable/notes/ddp.html#internal-design>`__.

2 changes: 1 addition & 1 deletion docs/source/compile/get-started.rst
@@ -125,7 +125,7 @@ Existing Backends
~~~~~~~~~~~~~~~~~

TorchDynamo has a growing list of backends, which can be found in the
-`backends <https://github.com/pytorch/pytorch/blob/master/torch/_dynamo/backends/>`__ folder
+`backends <https://github.com/pytorch/pytorch/blob/main/torch/_dynamo/backends/>`__ folder
or ``torch._dynamo.list_backends()`` each of which with its optional dependencies.
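To enumerate what is registered in your build, a quick sketch (example output names are assumptions)::

    import torch._dynamo as dynamo

    # Prints the names of the registered TorchDynamo backends,
    # e.g. ['aot_eager', 'cudagraphs', 'inductor', ...]
    print(dynamo.list_backends())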

Some of the most commonly used backends include:
2 changes: 1 addition & 1 deletion docs/source/compile/index.rst
@@ -11,7 +11,7 @@ on real world models for both training and inference with a single line of code.
.. note::
The :func:`~torch.compile` API is experimental and subject to change.

-The simplest possible interesting program is the below which we go over in a lot more detail in `getting started <https://pytorch.org/docs/master/compile/get-started.html>`__
+The simplest possible interesting program is the below which we go over in a lot more detail in `getting started <https://pytorch.org/docs/main/compile/get-started.html>`__
showing how to use :func:`~torch.compile` to speed up inference on a variety of real world models from both TIMM and HuggingFace which we
co-announced `here <https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/>`__
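In the spirit of that minimal program, a sketch (the function and tensor are illustrative, not the exact example from the page)::

    import torch

    def fn(x):
        return torch.sin(x) + torch.cos(x)

    # A single call opts in; the default backend is used here.
    compiled_fn = torch.compile(fn)

    x = torch.randn(1000)
    torch.testing.assert_close(fn(x), compiled_fn(x))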

2 changes: 1 addition & 1 deletion docs/source/compile/troubleshooting.rst
@@ -659,7 +659,7 @@ recompile that function (or part) up to
hitting the cache limit, you will first need to determine which guard is
failing and what part of your program is triggering it.

-The `compile profiler <https://github.com/pytorch/pytorch/blob/master/torch/_dynamo/utils.py>`__ automates the
+The `compile profiler <https://github.com/pytorch/pytorch/blob/main/torch/_dynamo/utils.py>`__ automates the
process of setting TorchDynamo’s cache limit to 1 and running your
program under an observation-only 'compiler' that records the causes of
any guard failures. You should be sure to run your program for at least
4 changes: 2 additions & 2 deletions docs/source/conf.py
@@ -361,10 +361,10 @@
#
# The short X.Y version.
# TODO: change to [:2] at v1.0
-version = 'master (' + torch_version + ' )'
+version = 'main (' + torch_version + ' )'
# The full version, including alpha/beta/rc tags.
# TODO: verify this works as expected
-release = 'master'
+release = 'main'

# Customized html_title here.
# Default is " ".join(project, release, "documentation") if not set
2 changes: 1 addition & 1 deletion docs/source/distributed.checkpoint.rst
@@ -23,7 +23,7 @@ The entrypoints to load and save a checkpoint are the following:
.. autofunction:: load_state_dict
.. autofunction:: save_state_dict

-This `example <https://github.com/pytorch/pytorch/blob/master/torch/distributed/checkpoint/examples/fsdp_checkpoint_example.py>`_ shows how to use Pytorch Distributed Checkpoint to save a FSDP model.
+This `example <https://github.com/pytorch/pytorch/blob/main/torch/distributed/checkpoint/examples/fsdp_checkpoint_example.py>`_ shows how to use Pytorch Distributed Checkpoint to save a FSDP model.
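A minimal save sketch, assuming an initialized process group and an FSDP-wrapped ``model`` (the checkpoint path is illustrative; this is not a verbatim excerpt of that example)::

    import torch.distributed.checkpoint as dcp

    state_dict = {"model": model.state_dict()}
    dcp.save_state_dict(
        state_dict=state_dict,
        storage_writer=dcp.FileSystemWriter("/tmp/checkpoint"),
    )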


The following types define the IO interface used during checkpoint:
2 changes: 1 addition & 1 deletion docs/source/distributed.rst
@@ -476,7 +476,7 @@ Note that you can use ``torch.profiler`` (recommended, only available after 1.8.
tensor = torch.randn(20, 10)
dist.all_reduce(tensor)

-Please refer to the `profiler documentation <https://pytorch.org/docs/master/profiler.html>`__ for a full overview of profiler features.
+Please refer to the `profiler documentation <https://pytorch.org/docs/main/profiler.html>`__ for a full overview of profiler features.
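For example, wrapping the collective above in ``torch.profiler`` (a sketch, assuming the default process group is already initialized)::

    import torch
    import torch.distributed as dist
    from torch.profiler import profile, ProfilerActivity

    with profile(activities=[ProfilerActivity.CPU]) as prof:
        tensor = torch.randn(20, 10)
        dist.all_reduce(tensor)

    # Summarize where time went, sorted by total CPU time.
    print(prof.key_averages().table(sort_by="cpu_time_total"))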


Multi-GPU collective functions
2 changes: 1 addition & 1 deletion docs/source/distributed.tensor.parallel.rst
@@ -5,7 +5,7 @@ Tensor Parallelism - torch.distributed.tensor.parallel
======================================================

Tensor Parallelism(TP) is built on top of the PyTorch DistributedTensor
-(`DTensor <https://github.com/pytorch/pytorch/blob/master/torch/distributed/_tensor/README.md>`__)
+(`DTensor <https://github.com/pytorch/pytorch/blob/main/torch/distributed/_tensor/README.md>`__)
and provides several parallelism styles: Rowwise, Colwise and Pairwise Parallelism.

.. warning ::
4 changes: 2 additions & 2 deletions docs/source/fx.rst
@@ -209,7 +209,7 @@ can be found below.
node.replace_all_uses_with(new_node)

For simple transformations that only consist of substitutions, you can also
-make use of the `subgraph rewriter. <https://github.com/pytorch/pytorch/blob/master/torch/fx/subgraph_rewriter.py>`__
+make use of the `subgraph rewriter. <https://github.com/pytorch/pytorch/blob/main/torch/fx/subgraph_rewriter.py>`__
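A minimal use of the rewriter (the module, pattern, and replacement are illustrative)::

    import torch
    from torch.fx import symbolic_trace, subgraph_rewriter

    class M(torch.nn.Module):
        def forward(self, x, y):
            return torch.add(x, y)

    def pattern(x, y):
        return torch.add(x, y)

    def replacement(x, y):
        return torch.mul(x, y)

    traced = symbolic_trace(M())
    # Every match of ``pattern`` inside ``traced`` is swapped for ``replacement``.
    subgraph_rewriter.replace_pattern(traced, pattern, replacement)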

Subgraph Rewriting With replace_pattern()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -230,7 +230,7 @@ Graph Manipulation Examples
- `Conv/Batch Norm
fusion <https://github.com/pytorch/pytorch/blob/40cbf342d3c000712da92cfafeaca651b3e0bd3e/torch/fx/experimental/optimization.py#L50>`__
- `replace_pattern: Basic usage <https://github.com/pytorch/examples/blob/master/fx/subgraph_rewriter_basic_use.py>`__
-- `Quantization <https://pytorch.org/docs/master/quantization.html#prototype-fx-graph-mode-quantization>`__
+- `Quantization <https://pytorch.org/docs/main/quantization.html#prototype-fx-graph-mode-quantization>`__
- `Invert Transformation <https://github.com/pytorch/examples/blob/master/fx/invert.py>`__

Proxy/Retracing
2 changes: 1 addition & 1 deletion docs/source/jit.rst
@@ -871,7 +871,7 @@ now supported.

Fusion Backends
~~~~~~~~~~~~~~~
-There are a couple of fusion backends available to optimize TorchScript execution. The default fuser on CPUs is NNC, which can perform fusions for both CPUs and GPUs. The default fuser on GPUs is NVFuser, which supports a wider range of operators and has demonstrated generated kernels with improved throughput. See the `NVFuser documentation <https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/codegen/cuda/README.md>`_ for more details on usage and debugging.
+There are a couple of fusion backends available to optimize TorchScript execution. The default fuser on CPUs is NNC, which can perform fusions for both CPUs and GPUs. The default fuser on GPUs is NVFuser, which supports a wider range of operators and has demonstrated generated kernels with improved throughput. See the `NVFuser documentation <https://github.com/pytorch/pytorch/blob/main/torch/csrc/jit/codegen/cuda/README.md>`_ for more details on usage and debugging.


References
2 changes: 1 addition & 1 deletion docs/source/nested.rst
@@ -24,7 +24,7 @@ with the main difference being :ref:`construction of the inputs <construction>`.

As this is a prototype feature, the :ref:`operations supported <supported operations>` are still
limited. However, we welcome issues, feature requests and contributions. More information on contributing can be found
-`in this Readme <https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/nested/README.md>`_.
+`in this Readme <https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/nested/README.md>`_.
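Construction is the main novelty; a minimal sketch::

    import torch

    # A nested tensor packs tensors of differing lengths without padding.
    nt = torch.nested.nested_tensor([torch.randn(2, 5), torch.randn(3, 5)])
    print(nt.is_nested)  # True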

.. _construction:

2 changes: 1 addition & 1 deletion docs/source/notes/modules.rst
@@ -534,7 +534,7 @@ The following class demonstrates the various ways of registering parameters and
For more information, check out:

* Saving and loading: https://pytorch.org/tutorials/beginner/saving_loading_models.html
-* Serialization semantics: https://pytorch.org/docs/master/notes/serialization.html
+* Serialization semantics: https://pytorch.org/docs/main/notes/serialization.html
* What is a state dict? https://pytorch.org/tutorials/recipes/recipes/what_is_state_dict.html
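As a minimal round trip through a state dict (the module shape and filename are illustrative)::

    import torch
    import torch.nn as nn

    model = nn.Linear(4, 2)
    torch.save(model.state_dict(), "linear.pt")

    # A fresh module with the same architecture can restore the parameters.
    fresh = nn.Linear(4, 2)
    fresh.load_state_dict(torch.load("linear.pt"))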

Module Initialization
2 changes: 1 addition & 1 deletion docs/source/notes/serialization.rst
@@ -55,7 +55,7 @@ Saving tensors preserves their view relationships:
tensor([ 1, 4, 3, 8, 5, 12, 7, 16, 9])

Behind the scenes, these tensors share the same "storage." See
-`Tensor Views <https://pytorch.org/docs/master/tensor_view.html>`_ for more
+`Tensor Views <https://pytorch.org/docs/main/tensor_view.html>`_ for more
on views and storage.
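A sketch consistent with the output shown above (the elided setup presumably resembles this)::

    import torch

    numbers = torch.arange(1, 10)
    evens = numbers[1::2]             # a view into numbers' storage
    torch.save([numbers, evens], "tensors.pt")
    loaded_numbers, loaded_evens = torch.load("tensors.pt")
    loaded_evens *= 2                 # the view relationship survives the round trip
    print(loaded_numbers)             # tensor([ 1,  4,  3,  8,  5, 12,  7, 16,  9])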

When PyTorch saves tensors it saves their storage objects and tensor
6 changes: 3 additions & 3 deletions docs/source/onnx.rst
@@ -334,7 +334,7 @@ an op named ``foo`` would look something like::
...

The ``torch._C`` types are Python wrappers around the types defined in C++ in
-`ir.h <https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/ir/ir.h>`_.
+`ir.h <https://github.com/pytorch/pytorch/blob/main/torch/csrc/jit/ir/ir.h>`_.
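A hedged sketch of such a symbolic function (the op and argument names are hypothetical, not the exporter's exact signature)::

    import torch._C

    def foo(g, input: torch._C.Value, other: torch._C.Value) -> torch._C.Value:
        # Emit the equivalent ONNX op through the graph context ``g``.
        return g.op("Add", input, other)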

The process for adding a symbolic function depends on the type of operator.

@@ -359,7 +359,7 @@ Adding support for an aten or quantized operator
If the operator is not in the list above:

* Define the symbolic function in ``torch/onnx/symbolic_opset<version>.py``, for example
-`torch/onnx/symbolic_opset9.py <https://github.com/pytorch/pytorch/blob/master/torch/onnx/symbolic_opset9.py>`_.
+`torch/onnx/symbolic_opset9.py <https://github.com/pytorch/pytorch/blob/main/torch/onnx/symbolic_opset9.py>`_.
Make sure the function has the same name as the ATen function, which may be declared in
``torch/_C/_VariableFunctions.pyi`` or ``torch/nn/functional.pyi`` (these files are generated at
build time, so will not appear in your checkout until you build PyTorch).
@@ -456,7 +456,7 @@ ONNX operators that represent the function's behavior in ONNX. For example::
.. . ``torch::jit::Value::setType``). This is not required, but it can help the exporter's
.. shape and type inference for down-stream nodes. For a non-trivial example of ``setType``, see
.. ``test_aten_embedding_2`` in
-.. `test_operators.py <https://github.com/pytorch/pytorch/blob/master/test/onnx/test_operators.py>`_.
+.. `test_operators.py <https://github.com/pytorch/pytorch/blob/main/test/onnx/test_operators.py>`_.
.. The example below shows how you can access ``requires_grad`` via the ``Node`` object:
2 changes: 1 addition & 1 deletion docs/source/quantization.rst
@@ -753,7 +753,7 @@ Backend/Hardware Support
Today, PyTorch supports the following backends for running quantized operators efficiently:

* x86 CPUs with AVX2 support or higher (without AVX2 some operations have inefficient implementations), via `x86` optimized by `fbgemm <https://github.com/pytorch/FBGEMM>`_ and `onednn <https://github.com/oneapi-src/oneDNN>`_ (see the details at `RFC <https://github.com/pytorch/pytorch/issues/83888>`_)
-* ARM CPUs (typically found in mobile/embedded devices), via `qnnpack <https://github.com/pytorch/pytorch/tree/master/aten/src/ATen/native/quantized/cpu/qnnpack>`_
+* ARM CPUs (typically found in mobile/embedded devices), via `qnnpack <https://github.com/pytorch/pytorch/tree/main/aten/src/ATen/native/quantized/cpu/qnnpack>`_
* (early prototype) support for NVidia GPU via `TensorRT <https://developer.nvidia.com/tensorrt>`_ through `fx2trt` (to be open sourced)
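Selecting a backend and quantizing a model dynamically, as a minimal sketch (the engine choice depends on your CPU; the model is illustrative)::

    import torch

    torch.backends.quantized.engine = "qnnpack"  # or "fbgemm" / "x86"

    float_model = torch.nn.Sequential(torch.nn.Linear(8, 8))
    int8_model = torch.ao.quantization.quantize_dynamic(
        float_model, {torch.nn.Linear}, dtype=torch.qint8
    )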

