Removed words that minimize involved difficulty, closes #1089
simonw committed Nov 12, 2020
1 parent 253f2d9 commit 5eb8e9b
Showing 10 changed files with 19 additions and 19 deletions.
8 changes: 4 additions & 4 deletions docs/changelog.rst
@@ -239,7 +239,7 @@ Better plugin documentation
The plugin documentation has been re-arranged into four sections, including a brand new section on testing plugins. (`#687 <https://github.com/simonw/datasette/issues/687>`__)

- :ref:`plugins` introduces Datasette's plugin system and describes how to install and configure plugins.
- :ref:`writing_plugins` describes how to author plugins, from simple one-off plugins to packaged plugins that can be published to PyPI. It also describes how to start a plugin using the new `datasette-plugin <https://github.com/simonw/datasette-plugin>`__ cookiecutter template.
- :ref:`writing_plugins` describes how to author plugins, from one-off single file plugins to packaged plugins that can be published to PyPI. It also describes how to start a plugin using the new `datasette-plugin <https://github.com/simonw/datasette-plugin>`__ cookiecutter template.
- :ref:`plugin_hooks` is a full list of detailed documentation for every Datasette plugin hook.
- :ref:`testing_plugins` describes how to write tests for Datasette plugins, using `pytest <https://docs.pytest.org/>`__ and `HTTPX <https://www.python-httpx.org/>`__.

@@ -277,7 +277,7 @@ Authentication

Prior to this release the Datasette ecosystem has treated authentication as exclusively the realm of plugins, most notably through `datasette-auth-github <https://github.com/simonw/datasette-auth-github>`__.

0.44 introduces :ref:`authentication` as core Datasette concepts (`#699 <https://github.com/simonw/datasette/issues/699>`__). This makes it easier for different plugins can share responsibility for authenticating requests - you might have one plugin that handles user accounts and another one that allows automated access via API keys, for example.
0.44 introduces :ref:`authentication` as core Datasette concepts (`#699 <https://github.com/simonw/datasette/issues/699>`__). This enables different plugins to share responsibility for authenticating requests - you might have one plugin that handles user accounts and another one that allows automated access via API keys, for example.
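
For illustration, a minimal sketch of one plugin taking on part of that responsibility through the ``actor_from_request`` plugin hook (the header name and token value here are hypothetical):

.. code-block:: python

    from datasette import hookimpl


    @hookimpl
    def actor_from_request(datasette, request):
        # Hypothetical API-key check - a separate plugin could handle
        # interactive user accounts independently of this one.
        if request.headers.get("x-api-key") == "expected-secret":
            return {"id": "api-client"}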

You'll need to install plugins if you want full user accounts, but default Datasette can now authenticate a single root user with the new ``--root`` command-line option, which outputs a one-time use URL to :ref:`authenticate as a root actor <authentication_root>` (`#784 <https://github.com/simonw/datasette/issues/784>`__)::

@@ -572,7 +572,7 @@ Also in this release:
0.32 (2019-11-14)
-----------------

Datasette now renders templates using `Jinja async mode <https://jinja.palletsprojects.com/en/2.10.x/api/#async-support>`__. This makes it easy for plugins to provide custom template functions that perform asynchronous actions, for example the new `datasette-template-sql <https://github.com/simonw/datasette-template-sql>`__ plugin which allows custom templates to directly execute SQL queries and render their results. (`#628 <https://github.com/simonw/datasette/issues/628>`__)
Datasette now renders templates using `Jinja async mode <https://jinja.palletsprojects.com/en/2.10.x/api/#async-support>`__. This means plugins can provide custom template functions that perform asynchronous actions, for example the new `datasette-template-sql <https://github.com/simonw/datasette-template-sql>`__ plugin which allows custom templates to directly execute SQL queries and render their results. (`#628 <https://github.com/simonw/datasette/issues/628>`__)
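
As an illustration of what async mode enables, here is a minimal sketch of a plugin exposing an asynchronous template function through the ``extra_template_vars`` hook (the function name and query helper shown here are hypothetical):

.. code-block:: python

    from datasette import hookimpl


    @hookimpl
    def extra_template_vars(datasette):
        # In Jinja async mode a template can call this coroutine
        # directly and the result is awaited for it.
        async def first_row(sql):
            db = datasette.get_database()
            result = await db.execute(sql)
            return result.first()

        return {"first_row": first_row}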

.. _v0_31_2:

@@ -1881,7 +1881,7 @@ as a more powerful alternative to SQL views.
This will write those values into the metadata.json that is packaged with the
app. If you also pass ``--metadata=metadata.json`` that file will be updated with the extra
values before being written into the Docker image.
- Added simple production-ready Dockerfile (`#94`_) [Andrew
- Added production-ready Dockerfile (`#94`_) [Andrew
Cutler]
- New ``?_sql_time_limit_ms=10`` argument to database and table page (`#95`_)
- SQL syntax highlighting with Codemirror (`#89`_) [Tom Dyson]
8 changes: 4 additions & 4 deletions docs/contributing.rst
@@ -19,15 +19,15 @@ General guidelines
Setting up a development environment
------------------------------------

If you have Python 3.6 or higher installed on your computer (on OS X the easiest way to do this `is using homebrew <https://docs.python-guide.org/starting/install3/osx/>`__) you can install an editable copy of Datasette using the following steps.
If you have Python 3.6 or higher installed on your computer (on OS X the quickest way to do this `is using homebrew <https://docs.python-guide.org/starting/install3/osx/>`__) you can install an editable copy of Datasette using the following steps.

If you want to use GitHub to publish your changes, first `create a fork of datasette <https://github.com/simonw/datasette/fork>`__ under your own GitHub account.

Now clone that repository somewhere on your computer::

git clone [email protected]:YOURNAME/datasette

If you just want to get started without creating your own fork, you can do this instead::
If you want to get started without creating your own fork, you can do this instead::

git clone [email protected]:simonw/datasette

@@ -47,9 +47,9 @@ Once you have done this, you can run the Datasette unit tests from inside your `

pytest

To run Datasette itself, just type ``datasette``.
To run Datasette itself, type ``datasette``.

You're going to need at least one SQLite database. An easy way to get started is to use the fixtures database that Datasette uses for its own tests.
You're going to need at least one SQLite database. A quick way to get started is to use the fixtures database that Datasette uses for its own tests.

You can create a copy of that database by running this command::

4 changes: 2 additions & 2 deletions docs/deploying.rst
@@ -4,7 +4,7 @@
Deploying Datasette
=====================

The easiest way to deploy a Datasette instance on the internet is to use the ``datasette publish`` command, described in :ref:`publishing`. This can be used to quickly deploy Datasette to a number of hosting providers including Heroku, Google Cloud Run and Vercel.
The quickest way to deploy a Datasette instance on the internet is to use the ``datasette publish`` command, described in :ref:`publishing`. This can be used to quickly deploy Datasette to a number of hosting providers including Heroku, Google Cloud Run and Vercel.

You can deploy Datasette to other hosting providers using the instructions on this page.

@@ -109,7 +109,7 @@ If you want to build SQLite files or download them as part of the deployment pro

wget https://fivethirtyeight.datasettes.com/fivethirtyeight.db

`simonw/buildpack-datasette-demo <https://github.com/simonw/buildpack-datasette-demo>`__ is an example GitHub repository showing a simple Datasette configuration that can be deployed to a buildpack-supporting host.
`simonw/buildpack-datasette-demo <https://github.com/simonw/buildpack-datasette-demo>`__ is an example GitHub repository showing a Datasette configuration that can be deployed to a buildpack-supporting host.
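
A repository like that typically needs little more than a ``requirements.txt`` and a ``Procfile``; a hedged sketch, with flags that are illustrative rather than required::

    # requirements.txt
    datasette

    # Procfile
    web: datasette . -h 0.0.0.0 -p $PORT --cors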

.. _deploying_proxy:

2 changes: 1 addition & 1 deletion docs/ecosystem.rst
@@ -68,7 +68,7 @@ For example, to create a SQLite database of the `City of Dallas Payment Register
Datasette Plugins
=================

Datasette's :ref:`plugin system <plugins>` makes it easy to enhance Datasette with additional functionality.
Datasette's :ref:`plugin system <plugins>` allows developers to enhance Datasette with additional functionality.

datasette-graphql
-----------------
2 changes: 1 addition & 1 deletion docs/internals.rst
@@ -119,7 +119,7 @@ For example:
content_type="application/xml; charset=utf-8"
)
The easiest way to create responses is using the ``Response.text(...)``, ``Response.html(...)``, ``Response.json(...)`` or ``Response.redirect(...)`` helper methods:
The quickest way to create responses is using the ``Response.text(...)``, ``Response.html(...)``, ``Response.json(...)`` or ``Response.redirect(...)`` helper methods:

.. code-block:: python
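
    # A hedged sketch of the helper methods listed above; the import
    # path matches the Response class used elsewhere in these docs.
    from datasette.utils.asgi import Response

    plain = Response.text("Hello, world!")
    html = Response.html("<h1>Hello, world!</h1>")
    data = Response.json({"hello": "world"})
    redirect = Response.redirect("/other-page")
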
2 changes: 1 addition & 1 deletion docs/metadata.rst
@@ -310,7 +310,7 @@ Here's an example of a ``metadata.yml`` file, re-using an example from :ref:`can
where neighborhood like '%' || :text || '%' order by neighborhood;
title: Search neighborhoods
description_html: |-
<p>This demonstrates <em>simple</em> LIKE search
<p>This demonstrates <em>basic</em> LIKE search
The ``metadata.yml`` file is passed to Datasette using the same ``--metadata`` option::
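
    # Illustrative invocation - substitute your own database file
    datasette mydatabase.db --metadata metadata.yml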

2 changes: 1 addition & 1 deletion docs/plugin_hooks.rst
@@ -471,7 +471,7 @@ It can also return a dictionary with the following keys. This format is **deprec
``headers`` - dictionary, optional
Extra HTTP headers to be returned in the response.

A simple example of an output renderer callback function:
An example of an output renderer callback function:

.. code-block:: python
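
    # A hedged sketch of a renderer callback returning the deprecated
    # dictionary format described above; the argument names are
    # assumptions rather than the exact documented signature.
    def render_demo(args, data, view_name):
        return {
            "body": "Hello World",
            "content_type": "text/plain; charset=utf-8",
            "status_code": 200,
        }
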
4 changes: 2 additions & 2 deletions docs/publish.rst
@@ -71,7 +71,7 @@ You can specify a custom app name by passing ``-n my-app-name`` to the publish c
Publishing to Vercel
--------------------

`Vercel <https://vercel.com/>`__ - previously known as Zeit Now - provides a layer over AWS Lambda to allow for easy, scale-to-zero deployment. You can deploy Datasette instances to Vercel using the `datasette-publish-vercel <https://github.com/simonw/datasette-publish-vercel>`__ plugin.
`Vercel <https://vercel.com/>`__ - previously known as Zeit Now - provides a layer over AWS Lambda to allow for quick, scale-to-zero deployment. You can deploy Datasette instances to Vercel using the `datasette-publish-vercel <https://github.com/simonw/datasette-publish-vercel>`__ plugin.

::

@@ -85,7 +85,7 @@ Not every feature is supported: consult the `datasette-publish-vercel README <ht
Publishing to Fly
-----------------

`Fly <https://fly.io/>`__ is a `competitively priced <https://fly.io/docs/pricing/>`__ Docker-compatible hosting platform that makes it easy to run applications in globally distributed data centers close to your end users. You can deploy Datasette instances to Fly using the `datasette-publish-fly <https://github.com/simonw/datasette-publish-fly>`__ plugin.
`Fly <https://fly.io/>`__ is a `competitively priced <https://fly.io/docs/pricing/>`__ Docker-compatible hosting platform that supports running applications in globally distributed data centers close to your end users. You can deploy Datasette instances to Fly using the `datasette-publish-fly <https://github.com/simonw/datasette-publish-fly>`__ plugin.

::

2 changes: 1 addition & 1 deletion docs/sql_queries.rst
@@ -64,7 +64,7 @@ If you want to bundle some pre-written SQL queries with your Datasette-hosted
database you can do so in two ways. The first is to include SQL views in your
database - Datasette will then list those views on your database index page.

The easiest way to create views is with the SQLite command-line interface::
The quickest way to create views is with the SQLite command-line interface::

$ sqlite3 sf-trees.db
SQLite version 3.19.3 2017-06-27 16:48:08
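sqlite> -- Illustrative only; use a table and column from your own database
sqlite> CREATE VIEW demo_view AS select name from my_table;
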
4 changes: 2 additions & 2 deletions docs/writing_plugins.rst
@@ -10,7 +10,7 @@ You can write one-off plugins that apply to just one Datasette instance, or you
Writing one-off plugins
-----------------------

The easiest way to write a plugin is to create a ``my_plugin.py`` file and drop it into your ``plugins/`` directory. Here is an example plugin, which adds a new custom SQL function called ``hello_world()`` which takes no arguments and returns the string ``Hello world!``.
The quickest way to start writing a plugin is to create a ``my_plugin.py`` file and drop it into your ``plugins/`` directory. Here is an example plugin, which adds a new custom SQL function called ``hello_world()`` which takes no arguments and returns the string ``Hello world!``.

.. code-block:: python
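
    # A sketch of the plugin described above: the prepare_connection
    # hook registers a custom hello_world() SQL function that takes
    # no arguments and returns the string "Hello world!".
    from datasette import hookimpl


    @hookimpl
    def prepare_connection(conn):
        conn.create_function("hello_world", 0, lambda: "Hello world!")
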
@@ -37,7 +37,7 @@ Starting an installable plugin using cookiecutter

Plugins that can be installed should be written as Python packages using a ``setup.py`` file.

The easiest way to start writing an installable plugin is to use the `datasette-plugin <https://github.com/simonw/datasette-plugin>`__ cookiecutter template. This creates a new plugin structure for you complete with an example test and GitHub Actions workflows for testing and publishing your plugin.
The quickest way to start writing an installable plugin is to use the `datasette-plugin <https://github.com/simonw/datasette-plugin>`__ cookiecutter template. This creates a new plugin structure for you complete with an example test and GitHub Actions workflows for testing and publishing your plugin.

`Install cookiecutter <https://cookiecutter.readthedocs.io/en/1.7.2/installation.html>`__ and then run this command to start building a plugin using the template::

