diff --git a/tutorial/source/intro_long.ipynb b/tutorial/source/intro_long.ipynb
index 361ad745fb..0d2cbd5aaa 100644
--- a/tutorial/source/intro_long.ipynb
+++ b/tutorial/source/intro_long.ipynb
@@ -656,16 +656,16 @@
 "```python\n",
 "def param(\n",
 "    name: str,\n",
-"    init: Optional[Union[torch.Tensor, Callable[..., torch.Tensor]]] = None,\n",
+"    init_tensor: Optional[Union[torch.Tensor, Callable[..., torch.Tensor]]] = None,\n",
 "    *,\n",
 "    constraint: torch.distributions.constraints.Constraint = constraints.real\n",
 ") -> torch.Tensor:\n",
 "    ...\n",
 "```\n",
 "\n",
-"Like `pyro.sample`, `pyro.param` is always called with a name as its first argument. The first time `pyro.param` is called with a particular name, it stores the initial value specified by the second argument `init` in the parameter store and then returns that value. After that, when it is called with that name, it returns the value from the parameter store regardless of any other arguments. After a parameter has been initialized, it is no longer necessary to specify `init` to retrieve its value (e.g. `pyro.param(\"a\")`).\n",
+"Like `pyro.sample`, `pyro.param` is always called with a name as its first argument. The first time `pyro.param` is called with a particular name, it stores the initial value specified by the second argument `init_tensor` in the parameter store and then returns that value. After that, when it is called with that name, it returns the value from the parameter store regardless of any other arguments. After a parameter has been initialized, it is no longer necessary to specify `init_tensor` to retrieve its value (e.g. `pyro.param(\"a\")`).\n",
 "\n",
-"The second argument, `init`, can be either a `torch.Tensor` or a function that takes no arguments and returns a tensor. The second form is useful because it avoids repeatedly constructing initial values that are only used the first time a model is run.\n",
+"The second argument, `init_tensor`, can be either a `torch.Tensor` or a function that takes no arguments and returns a tensor. The second form is useful because it avoids repeatedly constructing initial values that are only used the first time a model is run.\n",
 "\n",
 "Unlike PyTorch's `torch.nn.Parameter`s, parameters in Pyro can be explicitly constrained to various subsets of $\mathbb{R}^n$, an important feature because many elementary probability distributions have parameters with restricted domains. For example, the `scale` parameter of a `Normal` distribution must be positive. The optional third argument to `pyro.param`, `constraint`, is a [torch.distributions.constraints.Constraint](https://docs.pyro.ai/en/stable/distributions.html#module-pyro.distributions.constraints) object stored when a parameter is initialized; constraints are reapplied after every update. Pyro ships with a large number of predefined constraints. \n",
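The store-on-first-call, retrieve-thereafter semantics described in the changed prose can be sketched with a plain Python dict standing in for Pyro's global parameter store. This `param` is a toy stand-in, not Pyro's implementation: it omits the `constraint` argument and returns plain floats rather than tensors.

```python
# Hypothetical stand-in for Pyro's parameter store: a module-level dict.
_PARAM_STORE = {}

def param(name, init_tensor=None):
    """Toy version of pyro.param's store semantics.

    The first call with a given name stores init_tensor (or, if it is
    callable, the value it returns) and returns it; every later call with
    that name returns the stored value and ignores init_tensor entirely.
    """
    if name not in _PARAM_STORE:
        # Accepting a zero-argument callable means expensive initial values
        # are only constructed on the very first call.
        value = init_tensor() if callable(init_tensor) else init_tensor
        _PARAM_STORE[name] = value
    return _PARAM_STORE[name]

a = param("a", 1.0)    # first call: stores and returns 1.0
b = param("a", 99.0)   # later call: init_tensor is ignored
c = param("a")         # init_tensor no longer needed at all
assert a == b == c == 1.0
```

This mirrors why the rename to `init_tensor` matters only at the call site of the *first* invocation; retrieval calls like `pyro.param("a")` are unaffected.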