From 4e2602bd53d246087b19b6bd958976da541002af Mon Sep 17 00:00:00 2001
From: Rui Vasconcelos
Date: Tue, 20 Apr 2021 23:00:41 +0200
Subject: [PATCH] Apply Docs Restructure to `v1.2-branch` = update `v1.2-branch`
 to current `master` v2 (#2612)

* Create "Distributions" with kfctl + Kubeflow Operator (#2492)
* Create methods folder == section
* Move /operator under /methods
* Update links on Operator
* Add 'kfctl' folder == section
* mv kfctl specific minikube docs under /kfctl
* Update links on minikube
* mv kustomize from other-guides to /kfctl
* fix links for kustomize change
* delete outdated redirect
* move istio-dex-auth to /kfctl + rename to multi-user
* fix links after name change
* move kfctl install under /kfctl + rename to deployment
* fix links after move
* Add OWNERS for accountability
  Update kfctl description
  Update content/en/docs/methods/_index.md
* Add redirects for Operator
* Add redirects for kfctl
* Rename "methods" to "distributions"
* update redirects to distributions as folder name
* doc: Add instructions to access cluster with IBM Cloud vpc-gen2. (#2530)
* doc: Add instructions to access cluster with IBM Cloud vpc-gen2.
* added extra steps.
* Improved formatting
* Added details for creating cluster against existing VPC
* Apply suggestions from code review
  Co-authored-by: 8bitmp3 <19637339+8bitmp3@users.noreply.github.com>
* Apply suggestions from code review
  Co-authored-by: 8bitmp3 <19637339+8bitmp3@users.noreply.github.com>
* Formatting fixes, as per the review.
* Added a note about security.
* Choose between a classic or vpc-gen2 provider.
* added a note
* formatting fixes
* Apply suggestions from code review
  Co-authored-by: 8bitmp3 <19637339+8bitmp3@users.noreply.github.com>
* Document split up.
* Cleanup.
* Apply suggestions from code review
  Co-authored-by: 8bitmp3 <19637339+8bitmp3@users.noreply.github.com>
* Apply suggestions from code review
  Co-authored-by: Tommy Li
* Formatting improvements and cleanup.
* format fixes
* Apply suggestions from code review
  Co-authored-by: 8bitmp3 <19637339+8bitmp3@users.noreply.github.com>
* Apply suggestions from code review
  Co-authored-by: 8bitmp3 <19637339+8bitmp3@users.noreply.github.com>

  Co-authored-by: 8bitmp3 <19637339+8bitmp3@users.noreply.github.com>
  Co-authored-by: Tommy Li
* Add RFMVasconcelos to OWNERS/approvers (#2539)
* Deletes old redirects - pages do not exist anymore (#2552)
* Move `AWS` platform under /distributions (#2551)
* move /aws under /distributions
* fix AWS redirects + add catch-all
* update broken link (#2557)
* Update: fix broken links to TensorFlow Serving (#2558)
* Move `Google` platform under /distributions (#2547)
* move /gke folder to under /distributions
* update redirects
* Move `Azure` platform under /distributions (#2548)
* mv /azure to /distributions
* add catch-all azure to redirects
* KFP - Update Python function-based component doc with param naming rules (#2544)
* Describe pipeline param naming
  Adds notes on how the KFP SDK updates param names to describe the data
  instead of the implementation. Updates passing data by value to indicate
  that users can pass lists and dictionaries.
* Update auto-gen Markdown
  Updates python-function-components.md with changes to
  python-function-components.ipynb.
* Move `Openshift` platform under /distributions (#2550)
* move /openshift to under /distributions
* add openshift catch-all to redirects
* Move `IBM` platform under /distributions (#2549)
* move /ibm to under /distributions
* Add IBM catch-all to redirects
* [IBM] Update OpenShift Kubeflow installation (#2560)
* Make kfctl first distribution (#2562)
* Move getting started on K8s page to under kfctl distribution (#2569)
* mv overview to under kfctl
* delete empty getting started with k8s section
* Add redirect to catch traffic
* Update GCP distribution OWNERS (#2574)
* Update KFP shortcodes OWNERS (#2575)
* Move MicroK8s to distributions (#2577)
* create microk8s folder in distributions
* move microk8s docs to distributions
* update title
* Add redirect for MicroK8s move - missed on #2577 (#2579)
* Add Charmed Kubeflow Operators to list of available Kubeflow distributions (#2578)
* Uplevel clouds for a level playing field
* Add Owners + Index of Charmed Kubeflow
* Add install page to Charmed Kubeflow distribution
* Link to Charmed Kubeflow docs
* Naming corrections
* Update content/en/docs/distributions/charmed/install-kubeflow.md
  Co-authored-by: 8bitmp3 <19637339+8bitmp3@users.noreply.github.com>
* Update content/en/docs/distributions/charmed/install-kubeflow.md
  Co-authored-by: 8bitmp3 <19637339+8bitmp3@users.noreply.github.com>
* Update content/en/docs/distributions/charmed/install-kubeflow.md
  Co-authored-by: 8bitmp3 <19637339+8bitmp3@users.noreply.github.com>
* Update content/en/docs/distributions/charmed/install-kubeflow.md
  Co-authored-by: 8bitmp3 <19637339+8bitmp3@users.noreply.github.com>
* Update content/en/docs/distributions/charmed/install-kubeflow.md
  Co-authored-by: 8bitmp3 <19637339+8bitmp3@users.noreply.github.com>
* Update content/en/docs/distributions/charmed/install-kubeflow.md
  Co-authored-by: 8bitmp3 <19637339+8bitmp3@users.noreply.github.com>
* Update content/en/docs/distributions/charmed/install-kubeflow.md
  Co-authored-by: 8bitmp3 <19637339+8bitmp3@users.noreply.github.com>
* Update content/en/docs/distributions/charmed/install-kubeflow.md
  Co-authored-by: 8bitmp3 <19637339+8bitmp3@users.noreply.github.com>
* Update content/en/docs/distributions/charmed/install-kubeflow.md
  Co-authored-by: 8bitmp3 <19637339+8bitmp3@users.noreply.github.com>
* Update content/en/docs/distributions/charmed/install-kubeflow.md
  Co-authored-by: 8bitmp3 <19637339+8bitmp3@users.noreply.github.com>
* Update content/en/docs/distributions/charmed/install-kubeflow.md
  Co-authored-by: 8bitmp3 <19637339+8bitmp3@users.noreply.github.com>
* Update content/en/docs/distributions/charmed/install-kubeflow.md
  Co-authored-by: 8bitmp3 <19637339+8bitmp3@users.noreply.github.com>
* Update content/en/docs/distributions/charmed/install-kubeflow.md
  Co-authored-by: 8bitmp3 <19637339+8bitmp3@users.noreply.github.com>
* Update content/en/docs/distributions/charmed/install-kubeflow.md
  Co-authored-by: 8bitmp3 <19637339+8bitmp3@users.noreply.github.com>
* Update content/en/docs/distributions/charmed/install-kubeflow.md
  Co-authored-by: 8bitmp3 <19637339+8bitmp3@users.noreply.github.com>
* final fixes
  Co-authored-by: 8bitmp3 <19637339+8bitmp3@users.noreply.github.com>
* Fix broken link (#2581)
* IBM Cloud docs: update pipelines SDK setup for single-user (#2571)
  Made the following changes to the instructions for setting up the
  pipelines SDK for single-user.
* append '/pipeline' to the host string
* add client.list_experiments to make sure the setup is working, consistent
  with the multi-user example in section 2
* add a note about KUBEFLOW_PUBLIC_ENDPOINT_URL since the user may or may not
  have exposed the endpoint as a LoadBalancer
  Signed-off-by: Chin Huang
* update broken links / tweak names (#2583)
* Move MiniKF to distributions (#2576)
* create minikf folder + index
* move minikf docs to minikf folder
* Add redirects for external links
* Change naming according to request
* update description minikf
* Clean up "Frameworks for training" + rename to "Training Operators" (#2584)
* Remove outdated banners from PyTorch and TF
* delete chainer
* order TF and PyTorch up
* rename "Frameworks for training" to "Training operators"
* Fix broken link (#2580)
* Remove "outdated" banners from MPI + MXNet operators (#2585)
* docs: Update MPI and MXNet operator pages (#2586)
  Signed-off-by: terrytangyuan
* Pin the version of kustomize, v4 is not supported. (#2572)
* Pin the version of kustomize, v4 is not supported.
  There are issues installing Kubeflow with version v4.
  Note:
  https://github.com/kubeflow/website/issues/2570
  https://github.com/kubeflow/kubeflow/issues/5755
* Add reference to manifest repo version.
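The kustomize pin above exists because the Kubeflow manifests fail with kustomize v4 (kubeflow/website#2570, kubeflow/kubeflow#5755). A minimal sketch of guarding against an unsupported major version before deploying (the helper names and version strings are illustrative assumptions, not part of this docs change):

```python
import re

def kustomize_major(version_output: str) -> int:
    """Extract the major version from `kustomize version` output.

    Accepts strings such as 'Version: {kustomize/v3.2.0 2019-09-18T16:26:36Z}'
    or a bare 'v4.1.2'.
    """
    match = re.search(r"v(\d+)\.(\d+)\.(\d+)", version_output)
    if not match:
        raise ValueError(f"could not parse kustomize version: {version_output!r}")
    return int(match.group(1))

def check_supported(version_output: str) -> None:
    # Kubeflow 1.2-era manifests predate kustomize v4, hence the pin to 3.2.0.
    if kustomize_major(version_output) >= 4:
        raise RuntimeError("kustomize v4 is not supported; install v3.2.0 instead")

# Passes silently for a supported v3 build:
check_supported("Version: {kustomize/v3.2.0 2019-09-18T16:26:36Z}")
```

In practice the string would come from running `kustomize version` and the guard would run before `kfctl apply`.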
* Default to 3.2.0 * Update gke/anthos.md (#2591) * fix broken link (#2603) Co-authored-by: Prashant Sharma Co-authored-by: 8bitmp3 <19637339+8bitmp3@users.noreply.github.com> Co-authored-by: Tommy Li Co-authored-by: Mathew Wicks Co-authored-by: JohanWork <39947546+JohanWork@users.noreply.github.com> Co-authored-by: Joe Liedtke Co-authored-by: Mofizur Rahman Co-authored-by: Yuan (Bob) Gong <4957653+Bobgy@users.noreply.github.com> Co-authored-by: Chin Huang Co-authored-by: brett koonce Co-authored-by: Yuan Tang Co-authored-by: drPytho Co-authored-by: Ihor Sychevskyi --- OWNERS | 7 +- content/en/_redirects | 55 +++- .../docs/components/central-dash/overview.md | 2 +- .../en/docs/components/katib/experiment.md | 2 +- .../docs/components/katib/hyperparameter.md | 6 +- content/en/docs/components/katib/overview.md | 2 +- .../docs/components/multi-tenancy/design.md | 2 +- .../pipelines/overview/pipelines-overview.md | 2 +- .../pipelines/sdk/pipelines-with-tekton.md | 2 +- .../sdk/python-function-components.ipynb | 49 ++- .../sdk/python-function-components.md | 57 +++- content/en/docs/components/training/_index.md | 4 +- .../en/docs/components/training/chainer.md | 19 -- content/en/docs/components/training/mpi.md | 46 ++- content/en/docs/components/training/mxnet.md | 194 +++++++++--- .../en/docs/components/training/pytorch.md | 6 +- .../en/docs/components/training/tftraining.md | 6 +- content/en/docs/distributions/OWNERS | 6 + content/en/docs/distributions/_index.md | 5 + .../en/docs/{ => distributions}/aws/OWNERS | 0 .../en/docs/{ => distributions}/aws/_index.md | 2 +- .../aws/authentication-oidc.md | 0 .../{ => distributions}/aws/authentication.md | 0 .../docs/{ => distributions}/aws/aws-e2e.md | 0 .../{ => distributions}/aws/custom-domain.md | 0 .../aws/customizing-aws.md | 0 .../{ => distributions}/aws/deploy/_index.md | 0 .../aws/deploy/install-kubeflow.md | 0 .../aws/deploy/uninstall-kubeflow.md | 0 .../docs/{ => distributions}/aws/features.md | 0 .../{ => 
distributions}/aws/files/rds.yaml | 0 .../{ => distributions}/aws/iam-for-sa.md | 0 .../docs/{ => distributions}/aws/logging.md | 0 .../aws/notebook-server.md | 0 .../docs/{ => distributions}/aws/pipeline.md | 0 .../{ => distributions}/aws/private-access.md | 0 .../en/docs/{ => distributions}/aws/rds.md | 0 .../docs/{ => distributions}/aws/storage.md | 0 .../aws/troubleshooting-aws.md | 0 .../en/docs/{ => distributions}/azure/OWNERS | 0 .../docs/{ => distributions}/azure/_index.md | 2 +- .../azure/authentication-oidc.md | 0 .../azure/authentication.md | 0 .../azure/azureEndtoEnd.md | 0 .../{ => distributions}/azure/azureMySQL.md | 0 .../azure/deploy/_index.md | 0 .../azure/deploy/existing-cluster.md | 0 .../azure/deploy/install-kubeflow.md | 2 +- .../azure/deploy/uninstall-kubeflow.md | 0 .../azure/images/appReg.PNG | Bin .../azure/images/clientID2.PNG | Bin .../azure/images/createContainerReg.PNG | Bin .../azure/images/creatingWS.PNG | Bin .../azure/images/finalOutput.PNG | Bin .../azure/images/finishedRunning.PNG | Bin .../azure/images/password.PNG | Bin .../azure/images/pipelinedash.PNG | Bin .../azure/images/pipelinesInput.png | Bin .../azure/images/pipelinesUpload.PNG | Bin .../azure/images/roleAssign.PNG | Bin .../azure/machinelearningcomponent.md | 0 .../azure/troubleshooting-azure.md | 0 content/en/docs/distributions/charmed/OWNERS | 5 + .../en/docs/distributions/charmed/_index.md | 5 + .../distributions/charmed/install-kubeflow.md | 110 +++++++ .../en/docs/{ => distributions}/gke/OWNERS | 2 +- .../en/docs/{ => distributions}/gke/_index.md | 2 +- .../en/docs/{ => distributions}/gke/anthos.md | 6 +- .../{ => distributions}/gke/authentication.md | 0 .../gke/cloud-filestore.md | 0 .../{ => distributions}/gke/custom-domain.md | 0 .../gke/customizing-gke.md | 2 +- .../{ => distributions}/gke/deploy/_index.md | 0 .../gke/deploy/delete-cli.md | 0 .../gke/deploy/deploy-cli.md | 6 +- .../gke/deploy/deploy-ui.md | 0 .../gke/deploy/management-setup.md | 0 
.../gke/deploy/monitor-iap-setup.md | 0 .../gke/deploy/oauth-setup.md | 0 .../gke/deploy/project-setup.md | 0 .../{ => distributions}/gke/deploy/reasons.md | 0 .../{ => distributions}/gke/deploy/upgrade.md | 0 .../docs/{ => distributions}/gke/gcp-e2e.md | 2 +- .../{ => distributions}/gke/monitoring.md | 0 .../gke/pipelines/_index.md | 0 .../gke/pipelines/authentication-pipelines.md | 0 .../gke/pipelines/authentication-sdk.md | 0 .../gke/pipelines/enable-gpu-and-tpu.md | 0 .../gke/pipelines/preemptible.md | 0 .../gke/pipelines/upgrade.md | 0 .../gke/private-clusters.md | 0 .../gke/troubleshooting-gke.md | 0 .../en/docs/{ => distributions}/ibm/OWNERS | 0 .../en/docs/{ => distributions}/ibm/_index.md | 2 +- .../distributions/ibm/create-cluster-vpc.md | 289 ++++++++++++++++++ .../{ => distributions}/ibm/create-cluster.md | 78 ++++- .../{ => distributions}/ibm/deploy/OWNERS | 0 .../{ => distributions}/ibm/deploy/_index.md | 0 .../ibm/deploy/authentication.md | 0 .../ibm/deploy/deployment-process.md | 0 .../install-kubeflow-on-IBM-openshift.md | 13 +- .../ibm/deploy/install-kubeflow-on-iks.md | 67 +++- .../ibm/deploy/uninstall-kubeflow.md | 0 .../docs/{ => distributions}/ibm/iks-e2e.md | 2 +- .../{ => distributions}/ibm/kfp-tekton.png | Bin .../docs/{ => distributions}/ibm/pipelines.md | 9 +- .../docs/{ => distributions}/ibm/using-icr.md | 0 .../k8s => distributions/kfctl}/OWNERS | 2 +- content/en/docs/distributions/kfctl/_index.md | 5 + .../kfctl/deployment.md} | 4 +- .../kfctl}/kustomize.md | 2 +- .../kfctl/minikube.md} | 0 .../kfctl/multi-user.md} | 4 +- .../k8s => distributions/kfctl}/overview.md | 6 +- content/en/docs/distributions/microk8s/OWNERS | 7 + .../en/docs/distributions/microk8s/_index.md | 5 + .../microk8s}/kubeflow-on-microk8s.md | 4 +- .../en/docs/distributions/minikf/_index.md | 5 + .../minikf}/getting-started-minikf.md | 2 +- .../minikf}/minikf-aws.md | 4 +- .../minikf}/minikf-gcp.md | 4 +- .../docs/{ => distributions}/openshift/OWNERS | 0 .../{ => 
distributions}/openshift/_index.md | 0 .../openshift/install-kubeflow.md | 0 .../openshift/uninstall-kubeflow.md | 0 .../docs/{ => distributions}/operator/OWNERS | 0 .../{ => distributions}/operator/_index.md | 0 .../operator/install-kubeflow.md | 4 +- .../operator/install-operator.md | 0 .../operator/introduction.md | 2 +- .../operator/troubleshooting.md | 0 .../operator/uninstall-kubeflow.md | 2 +- .../operator/uninstall-operator.md | 0 .../en/docs/other-guides/usage-reporting.md | 4 +- content/en/docs/reference/images.md | 4 +- content/en/docs/reference/version-policy.md | 2 +- content/en/docs/started/getting-started.md | 4 +- content/en/docs/started/k8s/_index.md | 5 - .../started/k8s/kfctl-existing-arrikto.md | 13 - .../workstation/getting-started-linux.md | 2 +- .../workstation/getting-started-macos.md | 2 +- .../workstation/getting-started-windows.md | 2 +- content/en/serving.svg | 2 +- layouts/shortcodes/pipelines/OWNERS | 6 +- 144 files changed, 947 insertions(+), 239 deletions(-) delete mode 100644 content/en/docs/components/training/chainer.md create mode 100644 content/en/docs/distributions/OWNERS create mode 100644 content/en/docs/distributions/_index.md rename content/en/docs/{ => distributions}/aws/OWNERS (100%) rename content/en/docs/{ => distributions}/aws/_index.md (90%) rename content/en/docs/{ => distributions}/aws/authentication-oidc.md (100%) rename content/en/docs/{ => distributions}/aws/authentication.md (100%) rename content/en/docs/{ => distributions}/aws/aws-e2e.md (100%) rename content/en/docs/{ => distributions}/aws/custom-domain.md (100%) rename content/en/docs/{ => distributions}/aws/customizing-aws.md (100%) rename content/en/docs/{ => distributions}/aws/deploy/_index.md (100%) rename content/en/docs/{ => distributions}/aws/deploy/install-kubeflow.md (100%) rename content/en/docs/{ => distributions}/aws/deploy/uninstall-kubeflow.md (100%) rename content/en/docs/{ => distributions}/aws/features.md (100%) rename content/en/docs/{ => 
distributions}/aws/files/rds.yaml (100%) rename content/en/docs/{ => distributions}/aws/iam-for-sa.md (100%) rename content/en/docs/{ => distributions}/aws/logging.md (100%) rename content/en/docs/{ => distributions}/aws/notebook-server.md (100%) rename content/en/docs/{ => distributions}/aws/pipeline.md (100%) rename content/en/docs/{ => distributions}/aws/private-access.md (100%) rename content/en/docs/{ => distributions}/aws/rds.md (100%) rename content/en/docs/{ => distributions}/aws/storage.md (100%) rename content/en/docs/{ => distributions}/aws/troubleshooting-aws.md (100%) rename content/en/docs/{ => distributions}/azure/OWNERS (100%) rename content/en/docs/{ => distributions}/azure/_index.md (90%) rename content/en/docs/{ => distributions}/azure/authentication-oidc.md (100%) rename content/en/docs/{ => distributions}/azure/authentication.md (100%) rename content/en/docs/{ => distributions}/azure/azureEndtoEnd.md (100%) rename content/en/docs/{ => distributions}/azure/azureMySQL.md (100%) rename content/en/docs/{ => distributions}/azure/deploy/_index.md (100%) rename content/en/docs/{ => distributions}/azure/deploy/existing-cluster.md (100%) rename content/en/docs/{ => distributions}/azure/deploy/install-kubeflow.md (99%) rename content/en/docs/{ => distributions}/azure/deploy/uninstall-kubeflow.md (100%) rename content/en/docs/{ => distributions}/azure/images/appReg.PNG (100%) rename content/en/docs/{ => distributions}/azure/images/clientID2.PNG (100%) rename content/en/docs/{ => distributions}/azure/images/createContainerReg.PNG (100%) rename content/en/docs/{ => distributions}/azure/images/creatingWS.PNG (100%) rename content/en/docs/{ => distributions}/azure/images/finalOutput.PNG (100%) rename content/en/docs/{ => distributions}/azure/images/finishedRunning.PNG (100%) rename content/en/docs/{ => distributions}/azure/images/password.PNG (100%) rename content/en/docs/{ => distributions}/azure/images/pipelinedash.PNG (100%) rename content/en/docs/{ => 
distributions}/azure/images/pipelinesInput.png (100%) rename content/en/docs/{ => distributions}/azure/images/pipelinesUpload.PNG (100%) rename content/en/docs/{ => distributions}/azure/images/roleAssign.PNG (100%) rename content/en/docs/{ => distributions}/azure/machinelearningcomponent.md (100%) rename content/en/docs/{ => distributions}/azure/troubleshooting-azure.md (100%) create mode 100644 content/en/docs/distributions/charmed/OWNERS create mode 100644 content/en/docs/distributions/charmed/_index.md create mode 100644 content/en/docs/distributions/charmed/install-kubeflow.md rename content/en/docs/{ => distributions}/gke/OWNERS (84%) rename content/en/docs/{ => distributions}/gke/_index.md (90%) rename content/en/docs/{ => distributions}/gke/anthos.md (74%) rename content/en/docs/{ => distributions}/gke/authentication.md (100%) rename content/en/docs/{ => distributions}/gke/cloud-filestore.md (100%) rename content/en/docs/{ => distributions}/gke/custom-domain.md (100%) rename content/en/docs/{ => distributions}/gke/customizing-gke.md (99%) rename content/en/docs/{ => distributions}/gke/deploy/_index.md (100%) rename content/en/docs/{ => distributions}/gke/deploy/delete-cli.md (100%) rename content/en/docs/{ => distributions}/gke/deploy/deploy-cli.md (98%) rename content/en/docs/{ => distributions}/gke/deploy/deploy-ui.md (100%) rename content/en/docs/{ => distributions}/gke/deploy/management-setup.md (100%) rename content/en/docs/{ => distributions}/gke/deploy/monitor-iap-setup.md (100%) rename content/en/docs/{ => distributions}/gke/deploy/oauth-setup.md (100%) rename content/en/docs/{ => distributions}/gke/deploy/project-setup.md (100%) rename content/en/docs/{ => distributions}/gke/deploy/reasons.md (100%) rename content/en/docs/{ => distributions}/gke/deploy/upgrade.md (100%) rename content/en/docs/{ => distributions}/gke/gcp-e2e.md (98%) rename content/en/docs/{ => distributions}/gke/monitoring.md (100%) rename content/en/docs/{ => 
distributions}/gke/pipelines/_index.md (100%) rename content/en/docs/{ => distributions}/gke/pipelines/authentication-pipelines.md (100%) rename content/en/docs/{ => distributions}/gke/pipelines/authentication-sdk.md (100%) rename content/en/docs/{ => distributions}/gke/pipelines/enable-gpu-and-tpu.md (100%) rename content/en/docs/{ => distributions}/gke/pipelines/preemptible.md (100%) rename content/en/docs/{ => distributions}/gke/pipelines/upgrade.md (100%) rename content/en/docs/{ => distributions}/gke/private-clusters.md (100%) rename content/en/docs/{ => distributions}/gke/troubleshooting-gke.md (100%) rename content/en/docs/{ => distributions}/ibm/OWNERS (100%) rename content/en/docs/{ => distributions}/ibm/_index.md (90%) create mode 100644 content/en/docs/distributions/ibm/create-cluster-vpc.md rename content/en/docs/{ => distributions}/ibm/create-cluster.md (56%) rename content/en/docs/{ => distributions}/ibm/deploy/OWNERS (100%) rename content/en/docs/{ => distributions}/ibm/deploy/_index.md (100%) rename content/en/docs/{ => distributions}/ibm/deploy/authentication.md (100%) rename content/en/docs/{ => distributions}/ibm/deploy/deployment-process.md (100%) rename content/en/docs/{ => distributions}/ibm/deploy/install-kubeflow-on-IBM-openshift.md (86%) rename content/en/docs/{ => distributions}/ibm/deploy/install-kubeflow-on-iks.md (76%) rename content/en/docs/{ => distributions}/ibm/deploy/uninstall-kubeflow.md (100%) rename content/en/docs/{ => distributions}/ibm/iks-e2e.md (98%) rename content/en/docs/{ => distributions}/ibm/kfp-tekton.png (100%) rename content/en/docs/{ => distributions}/ibm/pipelines.md (91%) rename content/en/docs/{ => distributions}/ibm/using-icr.md (100%) rename content/en/docs/{started/k8s => distributions/kfctl}/OWNERS (82%) create mode 100644 content/en/docs/distributions/kfctl/_index.md rename content/en/docs/{started/k8s/kfctl-k8s-istio.md => distributions/kfctl/deployment.md} (98%) rename content/en/docs/{other-guides => 
distributions/kfctl}/kustomize.md (99%) rename content/en/docs/{started/workstation/minikube-linux.md => distributions/kfctl/minikube.md} (100%) rename content/en/docs/{started/k8s/kfctl-istio-dex.md => distributions/kfctl/multi-user.md} (99%) rename content/en/docs/{started/k8s => distributions/kfctl}/overview.md (94%) create mode 100644 content/en/docs/distributions/microk8s/OWNERS create mode 100644 content/en/docs/distributions/microk8s/_index.md rename content/en/docs/{started/workstation => distributions/microk8s}/kubeflow-on-microk8s.md (97%) create mode 100644 content/en/docs/distributions/minikf/_index.md rename content/en/docs/{started/workstation => distributions/minikf}/getting-started-minikf.md (98%) rename content/en/docs/{started/workstation => distributions/minikf}/minikf-aws.md (98%) rename content/en/docs/{started/workstation => distributions/minikf}/minikf-gcp.md (97%) rename content/en/docs/{ => distributions}/openshift/OWNERS (100%) rename content/en/docs/{ => distributions}/openshift/_index.md (100%) rename content/en/docs/{ => distributions}/openshift/install-kubeflow.md (100%) rename content/en/docs/{ => distributions}/openshift/uninstall-kubeflow.md (100%) rename content/en/docs/{ => distributions}/operator/OWNERS (100%) rename content/en/docs/{ => distributions}/operator/_index.md (100%) rename content/en/docs/{ => distributions}/operator/install-kubeflow.md (93%) rename content/en/docs/{ => distributions}/operator/install-operator.md (100%) rename content/en/docs/{ => distributions}/operator/introduction.md (98%) rename content/en/docs/{ => distributions}/operator/troubleshooting.md (100%) rename content/en/docs/{ => distributions}/operator/uninstall-kubeflow.md (85%) rename content/en/docs/{ => distributions}/operator/uninstall-operator.md (100%) delete mode 100644 content/en/docs/started/k8s/_index.md delete mode 100644 content/en/docs/started/k8s/kfctl-existing-arrikto.md diff --git a/OWNERS b/OWNERS index dd941fe37d..fb5c351c42 100644 
--- a/OWNERS +++ b/OWNERS @@ -2,6 +2,7 @@ approvers: - animeshsingh - Bobgy - joeliedtke + - RFMVasconcelos reviewers: - 8bitmp3 - aronchick @@ -9,9 +10,7 @@ reviewers: - dansanche - dsdinter - Jeffwan - - jinchihe + - jinchihe - nickchase - pdmack - - RFMVasconcelos - - terrytangyuan - + - terrytangyuan diff --git a/content/en/_redirects b/content/en/_redirects index a14aa61a48..9207fc5202 100644 --- a/content/en/_redirects +++ b/content/en/_redirects @@ -34,22 +34,22 @@ /docs/pipelines/tutorials/pipelines-tutorial/ /docs/components/pipelines/tutorials/cloud-tutorials/ /docs/gke/pipelines-tutorial/ /docs/components/pipelines/tutorials/cloud-tutorials/ /docs/gke/pipelines/pipelines-tutorial/ /docs/components/pipelines/tutorials/cloud-tutorials/ -/docs/gke/authentication-pipelines/ /docs/gke/pipelines/authentication-pipelines/ +/docs/gke/authentication-pipelines/ /docs/distributions/gke/pipelines/authentication-pipelines/ /docs/pipelines/metrics/ /docs/components/pipelines/sdk/pipelines-metrics/ /docs/pipelines/metrics/pipelines-metrics/ /docs/components/pipelines/sdk/pipelines-metrics/ /docs/pipelines/metrics/output-viewer/ /docs/components/pipelines/sdk/output-viewer/ /docs/pipelines/pipelines-overview/ /docs/components/pipelines/overview/pipelines-overview/ -/docs/pipelines/enable-gpu-and-tpu/ /docs/gke/pipelines/enable-gpu-and-tpu/ -/docs/pipelines/sdk/enable-gpu-and-tpu/ /docs/gke/pipelines/enable-gpu-and-tpu/ -/docs/pipelines/sdk/gcp/enable-gpu-and-tpu/ /docs/gke/pipelines/enable-gpu-and-tpu/ -/docs/pipelines/preemptible/ /docs/gke/pipelines/preemptible/ -/docs/pipelines/sdk/gcp/preemptible/ /docs/gke/pipelines/preemptible/ +/docs/pipelines/enable-gpu-and-tpu/ /docs/distributions/gke/pipelines/enable-gpu-and-tpu/ +/docs/pipelines/sdk/enable-gpu-and-tpu/ /docs/distributions/gke/pipelines/enable-gpu-and-tpu/ +/docs/pipelines/sdk/gcp/enable-gpu-and-tpu/ /docs/distributions/gke/pipelines/enable-gpu-and-tpu/ +/docs/pipelines/preemptible/ 
/docs/distributions/gke/pipelines/preemptible/ +/docs/pipelines/sdk/gcp/preemptible/ /docs/distributions/gke/pipelines/preemptible/ /docs/pipelines/reusable-components/ /docs/examples/shared-resources/ /docs/pipelines/sdk/reusable-components/ /docs/examples/shared-resources/ # Moved the guide to monitoring GKE deployments. -/docs/other-guides/monitoring/ /docs/gke/monitoring/ +/docs/other-guides/monitoring/ /docs/distributions/gke/monitoring/ # Created a new section for pipeline concepts. /docs/pipelines/pipelines-concepts/ /docs/components/pipelines/concepts/ @@ -88,24 +88,20 @@ docs/started/requirements/ /docs/started/getting-started/ # Restructured the getting-started and other-guides sections. /docs/started/getting-started-k8s/ /docs/started/k8s/ /docs/started/getting-started-minikf/ /docs/started/workstation/getting-started-minikf/ -/docs/started/getting-started-minikube/ /docs/started/workstation/minikube-linux/ +/docs/started/getting-started-minikube/ /docs/distributions/kfctl/minikube/ /docs/other-guides/virtual-dev/getting-started-minikf/ /docs/started/workstation/getting-started-minikf/ /docs/started/getting-started-multipass/ /docs/started/workstation/getting-started-multipass/ /docs/other-guides/virtual-dev/getting-started-multipass/ /docs/started/workstation/getting-started-multipass/ /docs/other-guides/virtual-dev/ /docs/started/workstation/ -/docs/started/getting-started-aws/ /docs/started/cloud/getting-started-aws/ -/docs/started/getting-started-azure/ /docs/started/cloud/getting-started-azure/ -/docs/started/getting-started-gke/ /docs/started/cloud/getting-started-gke/ -/docs/started/getting-started-iks/ /docs/started/cloud/getting-started-iks/ /docs/use-cases/kubeflow-on-multinode-cluster/ /docs/other-guides/kubeflow-on-multinode-cluster/ /docs/use-cases/job-scheduling/ /docs/other-guides/job-scheduling/ # Remove Kubeflow installation on existing EKS cluster -/docs/aws/deploy/existing-cluster/ /docs/aws/deploy/install-kubeflow/ 
+/docs/aws/deploy/existing-cluster/ /docs/distributions/aws/deploy/install-kubeflow/ # Move the kustomize guide to the config section -/docs/components/misc/kustomize/ /docs/other-guides/kustomize/ +/docs/components/misc/kustomize/ /docs/distributions/kfctl/kustomize/ # Merged the UIs page with the new central dashboard page /docs/other-guides/accessing-uis/ /docs/components/central-dash/overview/ @@ -116,12 +112,38 @@ docs/started/requirements/ /docs/started/getting-started/ # Rename TensorRT Inference Server to Triton Inference Server /docs/components/serving/trtinferenceserver /docs/components/serving/tritoninferenceserver +# Kubeflow Operator move to under distributions +/docs/operator /docs/distributions/operator +/docs/operator/introduction /docs/distributions/operator/introduction +/docs/operator/install-operator /docs/distributions/operator/install-operator +/docs/operator/install-kubeflow /docs/distributions/operator/install-kubeflow +/docs/operator/uninstall-kubeflow /docs/distributions/operator/uninstall-kubeflow +/docs/operator/uninstall-operator /docs/distributions/operator/uninstall-operator +/docs/operator/troubleshooting /docs/distributions/operator/troubleshooting + +# kfctl move to under distributions +/docs/started/workstation/minikube-linux /docs/distributions/kfctl/minikube +/docs/other-guides/kustomize /docs/distributions/kfctl/kustomize +/docs/started/k8s/kfctl-istio-dex /docs/distributions/kfctl/multi-user +/docs/started/k8s/kfctl-k8s-istio /docs/distributions/kfctl/deployment + # Moved Job scheduling under Training /docs/other-guides/job-scheduling/ /docs/components/training/job-scheduling/ # Moved KFServing /docs/components/serving/kfserving/ /docs/components/kfserving +# Moved MicroK8s to distributions +/docs/started/workstation/kubeflow-on-microk8s /docs/distributions/microk8s/kubeflow-on-microk8s + +# Moved K8s deployment overview to under kfctl +/docs/started/k8s/overview /docs/distributions/kfctl/overview + +# Moved MiniKF to 
distributions +/docs/started/workstation/getting-started-minikf /docs/distributions/minikf/getting-started-minikf +/docs/started/workstation/minikf-aws /docs/distributions/minikf/minikf-aws +/docs/started/workstation/minikf-gcp /docs/distributions/minikf/minikf-gcp + # =============== # IMPORTANT NOTE: # Catch-all redirects should be added at the end of this file as redirects happen from top to bottom @@ -129,3 +151,8 @@ docs/started/requirements/ /docs/started/getting-started/ /docs/guides/* /docs/:splat /docs/pipelines/concepts/* /docs/components/pipelines/overview/concepts/:splat /docs/pipelines/* /docs/components/pipelines/:splat +/docs/aws/* /docs/distributions/aws/:splat +/docs/azure/* /docs/distributions/azure/:splat +/docs/gke/* /docs/distributions/gke/:splat +/docs/ibm/* /docs/distributions/ibm/:splat +/docs/openshift/* /docs/distributions/openshift/:splat diff --git a/content/en/docs/components/central-dash/overview.md b/content/en/docs/components/central-dash/overview.md index bb2189c30e..c2813eb630 100644 --- a/content/en/docs/components/central-dash/overview.md +++ b/content/en/docs/components/central-dash/overview.md @@ -74,7 +74,7 @@ Port-forwarding typically does not work if any of the following are true: with the [CLI deployment](/docs/gke/deploy/deploy-cli/). (If you want to use port forwarding, you must deploy Kubeflow on an existing Kubernetes cluster using the [`kfctl_k8s_istio` - configuration](/docs/started/k8s/kfctl-k8s-istio/).) + configuration](/docs/distributions/kfctl/deployment/).) * You've configured the Istio ingress to only accept HTTPS traffic on a specific domain or IP address. 
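The `_redirects` hunks above note that catch-all rules must sit at the end of the file because rules are evaluated top to bottom, with `:splat` standing in for whatever a trailing `*` matched. A minimal sketch of that matching logic (a simplification for illustration, not Netlify's actual implementation):

```python
def resolve(path: str, rules: list[tuple[str, str]]) -> str:
    """Apply redirect rules top to bottom; the first matching rule wins.

    A trailing '*' in the source captures the remainder of the path, which
    is substituted for ':splat' in the destination.
    """
    for src, dst in rules:
        if src.endswith("*"):
            prefix = src[:-1]
            if path.startswith(prefix):
                return dst.replace(":splat", path[len(prefix):])
        elif path == src:
            return dst
    return path  # no rule matched; serve the path as-is

rules = [
    # Specific rules first...
    ("/docs/other-guides/monitoring/", "/docs/distributions/gke/monitoring/"),
    # ...catch-alls last, because evaluation is top to bottom:
    ("/docs/aws/*", "/docs/distributions/aws/:splat"),
    ("/docs/gke/*", "/docs/distributions/gke/:splat"),
]
```

With the catch-alls last, `/docs/aws/deploy/install-kubeflow/` resolves to `/docs/distributions/aws/deploy/install-kubeflow/` without shadowing any of the more specific rules above it.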
diff --git a/content/en/docs/components/katib/experiment.md b/content/en/docs/components/katib/experiment.md index 93572d5048..58eae8bbcd 100644 --- a/content/en/docs/components/katib/experiment.md +++ b/content/en/docs/components/katib/experiment.md @@ -755,7 +755,7 @@ kubectl apply -f - (Optional) Katib's experiments don't work with [Istio sidecar injection](https://istio.io/latest/docs/setup/additional-setup/sidecar-injection/#automatic-sidecar-injection). If you install Kubeflow using - [Istio config](https://www.kubeflow.org/docs/started/k8s/kfctl-k8s-istio/), + [Istio config](https://www.kubeflow.org/docs/distributions/kfctl/deployment/), you have to disable sidecar injection. To do that, specify this annotation: `sidecar.istio.io/inject: "false"` in your experiment's trial template. For examples on how to do it for `Job`, `TFJob` (TensorFlow) or diff --git a/content/en/docs/components/katib/hyperparameter.md b/content/en/docs/components/katib/hyperparameter.md index 85c167b531..f43934761a 100644 --- a/content/en/docs/components/katib/hyperparameter.md +++ b/content/en/docs/components/katib/hyperparameter.md @@ -141,7 +141,7 @@ an experiment using the random algorithm example: 1. (Optional) **Note:** Katib's experiments don't work with [Istio sidecar injection](https://istio.io/latest/docs/setup/additional-setup/sidecar-injection/#automatic-sidecar-injection). If you installed Kubeflow using - [Istio config](/docs/started/k8s/kfctl-k8s-istio/), + [Istio config](/docs/distributions/kfctl/deployment/), you have to disable sidecar injection. To do that, specify this annotation: `sidecar.istio.io/inject: "false"` in your experiment's trial template. @@ -394,7 +394,7 @@ the Kubeflow's TensorFlow training job operator, TFJob: 1. (Optional) **Note:** Katib's experiments don't work with [Istio sidecar injection](https://istio.io/latest/docs/setup/additional-setup/sidecar-injection/#automatic-sidecar-injection). 
If you installed Kubeflow using - [Istio config](/docs/started/k8s/kfctl-k8s-istio/), + [Istio config](/docs/methods/kfctl/deployment), you have to disable sidecar injection. To do that, specify this annotation: `sidecar.istio.io/inject: "false"` in your experiment's trial template. For the provided `TFJob` example check @@ -438,7 +438,7 @@ using Kubeflow's PyTorch training job operator, PyTorchJob: 1. (Optional) **Note:** Katib's experiments don't work with [Istio sidecar injection](https://istio.io/latest/docs/setup/additional-setup/sidecar-injection/#automatic-sidecar-injection). If you installed Kubeflow using - [Istio config](/docs/started/k8s/kfctl-k8s-istio/), + [Istio config](/docs/methods/kfctl/deployment), you have to disable sidecar injection. To do that, specify this annotation: `sidecar.istio.io/inject: "false"` in your experiment's trial template. For the provided `PyTorchJob` example setting the annotation should be similar to diff --git a/content/en/docs/components/katib/overview.md b/content/en/docs/components/katib/overview.md index 8a4f7e1fa3..6d135a2901 100644 --- a/content/en/docs/components/katib/overview.md +++ b/content/en/docs/components/katib/overview.md @@ -133,7 +133,7 @@ You can use the following interfaces to interact with Katib: - **kfctl** is the Kubeflow CLI that you can use to install and configure Kubeflow. Learn about kfctl in the guide to - [configuring Kubeflow](/docs/other-guides/kustomize/). + [configuring Kubeflow](/docs/methods/kfctl/kustomize/). - The Kubernetes CLI, **kubectl**, is useful for running commands against your Kubeflow cluster. Learn about kubectl in the [Kubernetes diff --git a/content/en/docs/components/multi-tenancy/design.md b/content/en/docs/components/multi-tenancy/design.md index 509fb7ec36..baed058c20 100644 --- a/content/en/docs/components/multi-tenancy/design.md +++ b/content/en/docs/components/multi-tenancy/design.md @@ -53,7 +53,7 @@ master should share the same identity management. 
## Supported platforms * Kubeflow multi-tenancy is enabled by default if you deploy Kubeflow on GCP with [IAP](/docs/gke/deploy). -* If you are not on GCP, you can deploy multi-tenancy to [your existing cluster](/docs/started/k8s/kfctl-istio-dex/). +* If you are not on GCP, you can deploy multi-tenancy to [your existing cluster](/docs/methods/kfctl/multi-user). ## Next steps diff --git a/content/en/docs/components/pipelines/overview/pipelines-overview.md b/content/en/docs/components/pipelines/overview/pipelines-overview.md index 196ef292cd..419d04f44e 100644 --- a/content/en/docs/components/pipelines/overview/pipelines-overview.md +++ b/content/en/docs/components/pipelines/overview/pipelines-overview.md @@ -56,7 +56,7 @@ A _pipeline component_ is a self-contained set of user code, packaged as a performs one step in the pipeline. For example, a component can be responsible for data preprocessing, data transformation, model training, and so on. -See the conceptual guides to [pipelines](/docs/components/pipelines/concepts/pipeline/) +See the conceptual guides to [pipelines](/docs/components/pipelines/overview/concepts/pipeline/) and [components](/docs/components/pipelines/concepts/component/). ## Example of a pipeline diff --git a/content/en/docs/components/pipelines/sdk/pipelines-with-tekton.md b/content/en/docs/components/pipelines/sdk/pipelines-with-tekton.md index 6f5f31c22f..bf7bd90ce5 100644 --- a/content/en/docs/components/pipelines/sdk/pipelines-with-tekton.md +++ b/content/en/docs/components/pipelines/sdk/pipelines-with-tekton.md @@ -5,7 +5,7 @@ weight = 140 +++ -You can use the [KFP-Tekton SDK](https://github.com/kubeflow/kfp-tekton/sdk) +You can use the [KFP-Tekton SDK](https://github.com/kubeflow/kfp-tekton/tree/master/sdk) to compile, upload and run your Kubeflow Pipeline DSL Python scripts on a [Kubeflow Pipelines with Tekton backend](https://github.com/kubeflow/kfp-tekton/tree/master/tekton_kfp_guide.md). 
diff --git a/content/en/docs/components/pipelines/sdk/python-function-components.ipynb b/content/en/docs/components/pipelines/sdk/python-function-components.ipynb index 4026b4958d..1df914dea2 100644 --- a/content/en/docs/components/pipelines/sdk/python-function-components.ipynb +++ b/content/en/docs/components/pipelines/sdk/python-function-components.ipynb @@ -287,14 +287,55 @@ " storage service. Kubeflow Pipelines passes parameters to your component by\n", " file, by passing their paths as a command-line argument.\n", "\n", + "\n", + "#### Input and output parameter names\n", + "\n", + "When you use the Kubeflow Pipelines SDK to convert your Python function to a\n", + "pipeline component, the Kubeflow Pipelines SDK uses the function's interface\n", + "to define the interface of your component in the following ways.\n", + "\n", + "* Some arguments define input parameters.\n", + "* Some arguments define output parameters.\n", + "* The function's return value is used as an output parameter. If the return\n", + " value is a [`collections.namedtuple`][named-tuple], the named tuple is used\n", + " to return several small values. \n", + "\n", + "Since you can pass parameters between components as a value or as a path, the\n", + "Kubeflow Pipelines SDK removes common parameter suffixes that leak the\n", + "component's expected implementation. For example, a Python function-based\n", + "component that ingests data and outputs CSV data may have an output argument\n", + "that is defined as `csv_path: comp.OutputPath(str)`. In this case, the output\n", + "is the CSV data, not the path. 
So, the Kubeflow Pipelines SDK simplifies the\n", + "output name to `csv`.\n", + "\n", + "The Kubeflow Pipelines SDK uses the following rules to define the input and\n", + "output parameter names in your component's interface:\n", + "\n", + "* If the argument name ends with `_path` and the argument is annotated as an\n", + " [`kfp.components.InputPath`][input-path] or\n", + " [`kfp.components.OutputPath`][output-path], the parameter name is the\n", + " argument name with the trailing `_path` removed.\n", + "* If the argument name ends with `_file`, the parameter name is the argument\n", + " name with the trailing `_file` removed.\n", + "* If you return a single small value from your component using the `return`\n", + " statement, the output parameter is named `output`.\n", + "* If you return several small values from your component by returning a \n", + " [`collections.namedtuple`][named-tuple], the Kubeflow Pipelines SDK uses\n", + " the tuple's field names as the output parameter names. \n", + "\n", + "Otherwise, the Kubeflow Pipelines SDK uses the argument name as the parameter\n", + "name.\n", + "\n", "\n", "#### Passing parameters by value\n", "\n", "Python function-based components make it easier to pass parameters between\n", "components by value (such as numbers, booleans, and short strings), by letting\n", "you define your component’s interface by annotating your Python function. The\n", - "supported types are `int`, `float`, `bool`, and `string`. If you do not\n", - "annotate your function, these input parameters are passed as strings.\n", + "supported types are `int`, `float`, `bool`, and `str`. You can also pass \n", + "`list` or `dict` instances by value, if they contain small values, such as\n", + "`int`, `float`, `bool`, or `str` values. 
If you do not annotate your function,\n", + "these input parameters are passed as strings.\n", "\n", "If your component returns multiple outputs by value, annotate your function\n", "with the [`typing.NamedTuple`][named-tuple-hint] type hint and use the\n", @@ -320,7 +361,9 @@ "[named-tuple-hint]: https://docs.python.org/3/library/typing.html#typing.NamedTuple\n", "[named-tuple]: https://docs.python.org/3/library/collections.html#collections.namedtuple\n", "[kfp-visualize]: https://www.kubeflow.org/docs/components/pipelines/sdk/output-viewer/\n", - "[kfp-metrics]: https://www.kubeflow.org/docs/components/pipelines/sdk/pipelines-metrics/" + "[kfp-metrics]: https://www.kubeflow.org/docs/components/pipelines/sdk/pipelines-metrics/\n", + "[input-path]: https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.components.html#kfp.components.InputPath\n", + "[output-path]: https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.components.html#kfp.components.OutputPath" ] }, { diff --git a/content/en/docs/components/pipelines/sdk/python-function-components.md b/content/en/docs/components/pipelines/sdk/python-function-components.md index e80b86bf7c..b31e9b3520 100644 --- a/content/en/docs/components/pipelines/sdk/python-function-components.md +++ b/content/en/docs/components/pipelines/sdk/python-function-components.md @@ -5,7 +5,7 @@ weight = 50 +++ @@ -257,14 +257,55 @@ The following sections describe how to pass parameters by value and by file. storage service. Kubeflow Pipelines passes parameters to your component by file, by passing their paths as a command-line argument. + +#### Input and output parameter names + +When you use the Kubeflow Pipelines SDK to convert your Python function to a +pipeline component, the Kubeflow Pipelines SDK uses the function's interface +to define the interface of your component in the following ways. + +* Some arguments define input parameters. +* Some arguments define output parameters. 
+* The function's return value is used as an output parameter. If the return + value is a [`collections.namedtuple`][named-tuple], the named tuple is used + to return several small values. + +Since you can pass parameters between components as a value or as a path, the +Kubeflow Pipelines SDK removes common parameter suffixes that leak the +component's expected implementation. For example, a Python function-based +component that ingests data and outputs CSV data may have an output argument +that is defined as `csv_path: comp.OutputPath(str)`. In this case, the output +is the CSV data, not the path. So, the Kubeflow Pipelines SDK simplifies the +output name to `csv`. + +The Kubeflow Pipelines SDK uses the following rules to define the input and +output parameter names in your component's interface: + +* If the argument name ends with `_path` and the argument is annotated as an + [`kfp.components.InputPath`][input-path] or + [`kfp.components.OutputPath`][output-path], the parameter name is the + argument name with the trailing `_path` removed. +* If the argument name ends with `_file`, the parameter name is the argument + name with the trailing `_file` removed. +* If you return a single small value from your component using the `return` + statement, the output parameter is named `output`. +* If you return several small values from your component by returning a + [`collections.namedtuple`][named-tuple], the Kubeflow Pipelines SDK uses + the tuple's field names as the output parameter names. + +Otherwise, the Kubeflow Pipelines SDK uses the argument name as the parameter +name. + #### Passing parameters by value Python function-based components make it easier to pass parameters between components by value (such as numbers, booleans, and short strings), by letting you define your component’s interface by annotating your Python function. The -supported types are `int`, `float`, `bool`, and `string`. 
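The suffix-stripping rules added above can be sketched as a small helper (a hypothetical illustration of the naming convention only, not the Kubeflow Pipelines SDK's actual code; `path_annotated` stands in for the `kfp.components.InputPath`/`OutputPath` annotation check):

```python
def parameter_name(arg_name, path_annotated=False):
    """Derive a component parameter name from a Python function argument name,
    following the suffix-stripping rules described above."""
    if path_annotated and arg_name.endswith('_path'):
        return arg_name[:-len('_path')]   # 'csv_path' -> 'csv'
    if arg_name.endswith('_file'):
        return arg_name[:-len('_file')]   # 'config_file' -> 'config'
    return arg_name                       # otherwise, keep the argument name

parameter_name('csv_path', path_annotated=True)  # -> 'csv'
parameter_name('config_file')                    # -> 'config'
parameter_name('learning_rate')                  # -> 'learning_rate'
```

Note that `_path` is only stripped when the argument carries a path annotation, so an unannotated `csv_path` argument keeps its name.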
If you do not -annotate your function, these input parameters are passed as strings. +supported types are `int`, `float`, `bool`, and `str`. You can also pass +`list` or `dict` instances by value, if they contain small values, such as +`int`, `float`, `bool`, or `str` values. If you do not annotate your function, +these input parameters are passed as strings. If your component returns multiple outputs by value, annotate your function with the [`typing.NamedTuple`][named-tuple-hint] type hint and use the @@ -291,6 +332,8 @@ including component metadata and metrics. [named-tuple]: https://docs.python.org/3/library/collections.html#collections.namedtuple [kfp-visualize]: https://www.kubeflow.org/docs/components/pipelines/sdk/output-viewer/ [kfp-metrics]: https://www.kubeflow.org/docs/components/pipelines/sdk/pipelines-metrics/ +[input-path]: https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.components.html#kfp.components.InputPath +[output-path]: https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.components.html#kfp.components.OutputPath ```python @@ -540,6 +583,6 @@ client.create_run_from_pipeline_func(calc_pipeline, arguments=arguments) diff --git a/content/en/docs/components/training/_index.md b/content/en/docs/components/training/_index.md index 7c294b6591..09cbaebf85 100644 --- a/content/en/docs/components/training/_index.md +++ b/content/en/docs/components/training/_index.md @@ -1,5 +1,5 @@ +++ -title = "Frameworks for Training" -description = "Training of ML models in Kubeflow" +title = "Training Operators" +description = "Training of ML models in Kubeflow through operators" weight = 70 +++ diff --git a/content/en/docs/components/training/chainer.md b/content/en/docs/components/training/chainer.md deleted file mode 100644 index 845d567949..0000000000 --- a/content/en/docs/components/training/chainer.md +++ /dev/null @@ -1,19 +0,0 @@ -+++ -title = "Chainer Training" -description = "See Kubeflow [v0.6 
docs](https://v0-6.kubeflow.org/docs/components/training/chainer/) for instructions on using Chainer for training" -weight = 4 -toc = true - -+++ -{{% alert title="Out of date" color="warning" %}} -This guide contains outdated information pertaining to Kubeflow 1.0. This guide -needs to be updated for Kubeflow 1.1. -{{% /alert %}} - -{{% alpha-status - feedbacklink="https://github.com/kubeflow/chainer-operator/issues" %}} - -[Chainer](https://github.com/kubeflow/chainer-operator) is not supported in -Kubeflow versions greater than v0.6. See the [Kubeflow v0.6 -documentation](https://v0-6.kubeflow.org/docs/components/training/chainer/) -for earlier support for Chainer training. diff --git a/content/en/docs/components/training/mpi.md b/content/en/docs/components/training/mpi.md index 02faa47321..3a98d94b54 100644 --- a/content/en/docs/components/training/mpi.md +++ b/content/en/docs/components/training/mpi.md @@ -4,10 +4,6 @@ description = "Instructions for using MPI for training" weight = 25 +++ -{{% alert title="Out of date" color="warning" %}} -This guide contains outdated information pertaining to Kubeflow 1.0. This guide -needs to be updated for Kubeflow 1.1. -{{% /alert %}} {{% alpha-status feedbacklink="https://github.com/kubeflow/mpi-operator/issues" %}} @@ -26,7 +22,7 @@ cd mpi-operator kubectl create -f deploy/v1alpha2/mpi-operator.yaml ``` -Alternatively, follow the [getting started guide](/docs/started/getting-started/) to deploy Kubeflow. +Alternatively, follow the [getting started guide](https://www.kubeflow.org/docs/started/getting-started/) to deploy Kubeflow. An alpha version of MPI support was introduced with Kubeflow 0.2.0. You must be using a version of Kubeflow newer than 0.2.0. 
@@ -48,9 +44,9 @@ mpijobs.kubeflow.org 4d If it is not included you can add it as follows using [kustomize](https://github.com/kubernetes-sigs/kustomize): ```bash -git clone https://github.com/kubeflow/manifests -cd manifests/mpi-job/mpi-operator -kustomize build base | kubectl apply -f - +git clone https://github.com/kubeflow/mpi-operator +cd mpi-operator/manifests +kustomize build overlays/kubeflow | kubectl apply -f - ``` Note that since Kubernetes v1.14, `kustomize` became a subcommand in `kubectl` so you can also run the following command instead: @@ -66,6 +62,7 @@ You can create an MPI job by defining an `MPIJob` config file. See [TensorFlow b ``` cat examples/v1alpha2/tensorflow-benchmarks.yaml ``` + Deploy the `MPIJob` resource to start training: ``` @@ -166,7 +163,6 @@ status: startTime: "2019-07-09T22:15:51Z" ``` - Training should run for 100 steps and takes a few minutes on a GPU cluster. You can inspect the logs to see the training progress. When the job starts, access the logs from the `launcher` pod: ``` @@ -192,20 +188,20 @@ Variables: horovod ... 
-40 images/sec: 154.4 +/- 0.7 (jitter = 4.0) 8.280 -40 images/sec: 154.4 +/- 0.7 (jitter = 4.1) 8.482 -50 images/sec: 154.8 +/- 0.6 (jitter = 4.0) 8.397 -50 images/sec: 154.8 +/- 0.6 (jitter = 4.2) 8.450 -60 images/sec: 154.5 +/- 0.5 (jitter = 4.1) 8.321 -60 images/sec: 154.5 +/- 0.5 (jitter = 4.4) 8.349 -70 images/sec: 154.5 +/- 0.5 (jitter = 4.0) 8.433 -70 images/sec: 154.5 +/- 0.5 (jitter = 4.4) 8.430 -80 images/sec: 154.8 +/- 0.4 (jitter = 3.6) 8.199 -80 images/sec: 154.8 +/- 0.4 (jitter = 3.8) 8.404 -90 images/sec: 154.6 +/- 0.4 (jitter = 3.7) 8.418 -90 images/sec: 154.6 +/- 0.4 (jitter = 3.6) 8.459 -100 images/sec: 154.2 +/- 0.4 (jitter = 4.0) 8.372 -100 images/sec: 154.2 +/- 0.4 (jitter = 4.0) 8.542 +40 images/sec: 154.4 +/- 0.7 (jitter = 4.0) 8.280 +40 images/sec: 154.4 +/- 0.7 (jitter = 4.1) 8.482 +50 images/sec: 154.8 +/- 0.6 (jitter = 4.0) 8.397 +50 images/sec: 154.8 +/- 0.6 (jitter = 4.2) 8.450 +60 images/sec: 154.5 +/- 0.5 (jitter = 4.1) 8.321 +60 images/sec: 154.5 +/- 0.5 (jitter = 4.4) 8.349 +70 images/sec: 154.5 +/- 0.5 (jitter = 4.0) 8.433 +70 images/sec: 154.5 +/- 0.5 (jitter = 4.4) 8.430 +80 images/sec: 154.8 +/- 0.4 (jitter = 3.6) 8.199 +80 images/sec: 154.8 +/- 0.4 (jitter = 3.8) 8.404 +90 images/sec: 154.6 +/- 0.4 (jitter = 3.7) 8.418 +90 images/sec: 154.6 +/- 0.4 (jitter = 3.6) 8.459 +100 images/sec: 154.2 +/- 0.4 (jitter = 4.0) 8.372 +100 images/sec: 154.2 +/- 0.4 (jitter = 4.0) 8.542 ---------------------------------------------------------------- total images/sec: 308.27 ``` @@ -214,5 +210,5 @@ total images/sec: 308.27 Docker images are built and pushed automatically to [mpioperator on Dockerhub](https://hub.docker.com/u/mpioperator). 
You can use the following Dockerfiles to build the images yourself: -* [mpi-operator](https://github.com/kubeflow/mpi-operator/blob/master/Dockerfile) -* [kubectl-delivery](https://github.com/kubeflow/mpi-operator/blob/master/cmd/kubectl-delivery/Dockerfile) +- [mpi-operator](https://github.com/kubeflow/mpi-operator/blob/master/Dockerfile) +- [kubectl-delivery](https://github.com/kubeflow/mpi-operator/blob/master/cmd/kubectl-delivery/Dockerfile) diff --git a/content/en/docs/components/training/mxnet.md b/content/en/docs/components/training/mxnet.md index ada7e4eb78..732b23f257 100644 --- a/content/en/docs/components/training/mxnet.md +++ b/content/en/docs/components/training/mxnet.md @@ -4,31 +4,34 @@ description = "Instructions for using MXNet" weight = 25 +++ -{{% alert title="Out of date" color="warning" %}} -This guide contains outdated information pertaining to Kubeflow 1.0. This guide -needs to be updated for Kubeflow 1.1. -{{% /alert %}} {{% alpha-status feedbacklink="https://github.com/kubeflow/mxnet-operator/issues" %}} -This guide walks you through using MXNet with Kubeflow. +This guide walks you through using [Apache MXNet (incubating)](https://github.com/apache/incubator-mxnet) with Kubeflow. -## Installing MXNet Operator +MXNet Operator provides a Kubernetes custom resource `MXJob` that makes it easy to run distributed or non-distributed +Apache MXNet jobs (training and tuning) and other extended framework like [BytePS](https://github.com/bytedance/byteps) +jobs on Kubernetes. Using a Custom Resource Definition (CRD) gives users the ability to create +and manage Apache MXNet jobs just like built-in K8S resources. -If you haven't already done so please follow the [Getting Started Guide](https://www.kubeflow.org/docs/started/getting-started/) to deploy Kubeflow. +## Installing the MXJob CRD and operator on your k8s cluster -A version of MXNet support was introduced with Kubeflow 0.2.0. You must be using a version of Kubeflow newer than 0.2.0. 
+### Deploy MXJob CRD and Apache MXNet Operator -## Verify that MXNet support is included in your Kubeflow deployment +``` +kustomize build manifests/overlays/v1 | kubectl apply -f - +``` -Check that the MXNet custom resource is installed +### Verify that MXJob CRD and Apache MXNet Operator are installed + +Check that the Apache MXNet custom resource is installed via: ``` kubectl get crd ``` -The output should include `mxjobs.kubeflow.org` +The output should include `mxjobs.kubeflow.org` like the following: ``` NAME AGE @@ -37,72 +40,119 @@ mxjobs.kubeflow.org 4d ... ``` -If it is not included you can add it as follows +Check that the Apache MXNet operator is running via: ``` -git clone https://github.com/kubeflow/manifests -cd manifests/mxnet-job/mxnet-operator -kubectl kustomize base | kubectl apply -f - +kubectl get pods ``` -Alternatively, you can deploy the operator with default settings without using kustomize by running the following from the repo: +The output should include `mxnet-operator-xxx` like the following: ``` -git clone https://github.com/kubeflow/mxnet-operator.git -cd mxnet-operator -kubectl create -f manifests/crd-v1beta1.yaml -kubectl create -f manifests/rbac.yaml -kubectl create -f manifests/deployment.yaml +NAME READY STATUS RESTARTS AGE +mxnet-operator-d466b46bc-xbqvs 1/1 Running 0 4m37s ``` -## Creating a MXNet training job +### Creating an Apache MXNet training job +You create a training job by defining an `MXJob` with `MXTrain` mode and then creating it with: -You create a training job by defining a MXJob with MXTrain mode and then creating it with + +``` +kubectl create -f examples/train/mx_job_dist_gpu_v1.yaml +``` +Each `replicaSpec` defines a set of Apache MXNet processes. +The `mxReplicaType` defines the semantics for the set of processes.
+The semantics are as follows: + +**scheduler** + * A job must have 1 and only 1 scheduler + * The pod must contain a container named mxnet + * The overall status of the `MXJob` is determined by the exit code of the + mxnet container + * 0 = success + * 1 || 2 || 126 || 127 || 128 || 139 = permanent errors: + * 1: general errors + * 2: misuse of shell builtins + * 126: command invoked cannot execute + * 127: command not found + * 128: invalid argument to exit + * 139: container terminated by SIGSEGV (invalid memory reference) + * 130 || 137 || 143 = retryable errors for unexpected system signals: + * 130: container terminated by Control-C + * 137: container received a SIGKILL + * 143: container received a SIGTERM + * 138 = reserved in tf-operator for user-specified retryable errors + * others = undefined and no guarantee + +**worker** + * A job can have 0 to N workers + * The pod must contain a container named mxnet + * Workers are automatically restarted if they exit + +**server** + * A job can have 0 to N servers + * Parameter servers are automatically restarted if they exit + + +For each replica you define a **template**, which is a K8s +[PodTemplateSpec](https://kubernetes.io/docs/api-reference/v1.8/#podtemplatespec-v1-core). +The template allows you to specify the containers, volumes, etc., that +should be created for each replica. + +### Creating a TVM tuning job (AutoTVM) + +[TVM](https://docs.tvm.ai/tutorials/) is an end-to-end deep learning compiler stack; you can easily run AutoTVM with mxnet-operator. +You can create an auto-tuning job by defining an `MXTune` job and then creating it with: ``` -kubectl create -f examples/v1beta1/train/mx_job_dist_gpu.yaml +kubectl create -f examples/tune/mx_job_tune_gpu_v1.yaml ``` +Before you use the auto-tuning example, there is some preparatory work that needs to be finished in advance. +To let TVM tune your network, you should create a Docker image that has the TVM module.
+Then, you need an auto-tuning script to specify which network will be tuned and set the auto-tuning parameters. +For more details, please see the [tutorials](https://docs.tvm.ai/tutorials/autotvm/tune_relay_mobile_gpu.html#sphx-glr-tutorials-autotvm-tune-relay-mobile-gpu-py). +Finally, you need a startup script to start the auto-tuning program. In fact, mxnet-operator will set all the parameters as environment variables, and the startup script needs to read these variables and then pass them to the auto-tuning script. +We provide an example under `examples/tune/`; the tuning result is saved in a log file, such as resnet-18.log in the example we gave. You can refer to it for details. -## Creating a TVM tuning job (AutoTVM) +### Using GPUs +MXNet Operator supports training with GPUs. -[TVM](https://docs.tvm.ai/tutorials/) is a end to end deep learning compiler stack, you can easily run AutoTVM with mxnet-operator. -You can create a auto tuning job by define a type of MXTune job and then creating it with +Please verify that your image is available for distributed training with GPUs. +For example, if you have the following, MXNet Operator will schedule the pods to nodes that satisfy the GPU limit. ``` -kubectl create -f examples/v1beta1/tune/mx_job_tune_gpu.yaml +command: ["python"] +args: ["/incubator-mxnet/example/image-classification/train_mnist.py","--num-epochs","1","--num-layers","2","--kv-store","dist_device_sync","--gpus","0"] +resources: + limits: + nvidia.com/gpu: 1 ``` - -Before you use the auto-tuning example, there is some preparatory work need to be finished in advance. To let TVM tune your network, you should create a docker image which has TVM module. Then, you need a auto-tuning script to specify which network will be tuned and set the auto-tuning parameters, For more details, please see https://docs.tvm.ai/tutorials/autotvm/tune_relay_mobile_gpu.html#sphx-glr-tutorials-autotvm-tune-relay-mobile-gpu-py. Finally, you need a startup script to start the auto-tuning program.
In fact, mxnet-operator will set all the parameters as environment variables and the startup script need to reed these variable and then transmit them to auto-tuning script. We provide an example under examples/v1beta1/tune/, tuning result will be saved in a log file like resnet-18.log in the example we gave. You can refer it for details. - - -## Monitoring a MXNet Job - +### Monitoring your Apache MXNet job To get the status of your job ```bash -kubectl get -o yaml mxjobs ${JOB} -``` +kubectl get -o yaml mxjobs $JOB +``` Here is sample output for an example job ```yaml -apiVersion: kubeflow.org/v1beta1 +apiVersion: kubeflow.org/v1 kind: MXJob metadata: - creationTimestamp: 2019-03-19T09:24:27Z + creationTimestamp: 2021-03-24T15:37:27Z generation: 1 name: mxnet-job namespace: default - resourceVersion: "3681685" - selfLink: /apis/kubeflow.org/v1beta1/namespaces/default/mxjobs/mxnet-job - uid: cb11013b-4a28-11e9-b7f4-704d7bb59f71 + resourceVersion: "5123435" + selfLink: /apis/kubeflow.org/v1/namespaces/default/mxjobs/mxnet-job + uid: xx11013b-4a28-11e9-s5a1-704d7bb912f91 spec: cleanPodPolicy: All jobMode: MXTrain @@ -164,22 +214,22 @@ spec: limits: nvidia.com/gpu: "1" status: - completionTime: 2019-03-19T09:25:11Z + completionTime: 2021-03-24T09:25:11Z conditions: - - lastTransitionTime: 2019-03-19T09:24:27Z - lastUpdateTime: 2019-03-19T09:24:27Z + - lastTransitionTime: 2021-03-24T15:37:27Z + lastUpdateTime: 2021-03-24T15:37:27Z message: MXJob mxnet-job is created. reason: MXJobCreated status: "True" type: Created - - lastTransitionTime: 2019-03-19T09:24:27Z - lastUpdateTime: 2019-03-19T09:24:29Z + - lastTransitionTime: 2021-03-24T15:37:27Z + lastUpdateTime: 2021-03-24T15:37:29Z message: MXJob mxnet-job is running. 
reason: MXJobRunning status: "False" type: Running - - lastTransitionTime: 2019-03-19T09:24:27Z - lastUpdateTime: 2019-03-19T09:25:11Z + - lastTransitionTime: 2021-03-24T15:37:27Z + lastUpdateTime: 2021-03-24T09:25:11Z message: MXJob mxnet-job is successfully completed. reason: MXJobSucceeded status: "True" @@ -188,5 +238,51 @@ status: Scheduler: {} Server: {} Worker: {} - startTime: 2019-03-19T09:24:29Z + startTime: 2021-03-24T15:37:29Z ``` + +The first thing to note is the **RuntimeId**. This is a random unique +string which is used to give names to all the K8s resources +(e.g., Job controllers and services) that are created by the `MXJob`. + +As with other K8s resources, the status provides information about the state +of the resource. + +**phase** - Indicates the phase of a job and will be one of + - Creating + - Running + - CleanUp + - Failed + - Done + +**state** - Provides the overall status of the job and will be one of + - Running + - Succeeded + - Failed + +For each replica type in the job, there will be a `ReplicaStatus` that +provides the number of replicas of that type in each state. + +For each replica type, the job creates a set of K8s +[Job Controllers](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/) +named + +``` +${REPLICA-TYPE}-${RUNTIME_ID}-${INDEX} +``` + +For example, if you have 2 servers and the runtime id is "76n0", then `MXJob` +will create the following two jobs: + +``` +server-76n0-0 +server-76n0-1 ``` + +## Contributing + +Please refer to [this document](./CONTRIBUTING.md) for contributing guidelines. + +## Community + +Please check out the [Kubeflow community page](https://www.kubeflow.org/docs/about/community/) for more information on how to get involved in our community.
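The `${REPLICA-TYPE}-${RUNTIME_ID}-${INDEX}` naming scheme described above can be sketched as follows (a hypothetical illustration of the convention, not mxnet-operator source code):

```python
def controller_names(replica_type, runtime_id, replicas):
    """Build the K8s Job controller names an MXJob would create for
    `replicas` instances of `replica_type` under the given runtime id."""
    return [f"{replica_type}-{runtime_id}-{index}" for index in range(replicas)]

# Two servers with runtime id "76n0", matching the example above:
controller_names("server", "76n0", 2)  # -> ['server-76n0-0', 'server-76n0-1']
```

The index simply counts replicas from zero, so the name alone identifies which replica of which type a Job controller belongs to.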
diff --git a/content/en/docs/components/training/pytorch.md b/content/en/docs/components/training/pytorch.md index dab359b836..c68393226f 100644 --- a/content/en/docs/components/training/pytorch.md +++ b/content/en/docs/components/training/pytorch.md @@ -1,13 +1,9 @@ +++ title = "PyTorch Training" description = "Instructions for using PyTorch" -weight = 35 +weight = 15 +++ -{{% alert title="Out of date" color="warning" %}} -This guide contains outdated information pertaining to Kubeflow 1.0. This guide -needs to be updated for Kubeflow 1.1. -{{% /alert %}} {{% stable-status %}} diff --git a/content/en/docs/components/training/tftraining.md b/content/en/docs/components/training/tftraining.md index 1a99fb8f6c..ffad4a4657 100644 --- a/content/en/docs/components/training/tftraining.md +++ b/content/en/docs/components/training/tftraining.md @@ -2,13 +2,9 @@ title = "TensorFlow Training (TFJob)" linkTitle = "TensorFlow Training (TFJob)" description = "Using TFJob to train a model with TensorFlow" -weight = 60 +weight = 10 +++ -{{% alert title="Out of date" color="warning" %}} -This guide contains outdated information pertaining to Kubeflow 1.0. This guide -needs to be updated for Kubeflow 1.1. 
-{{% /alert %}} {{% stable-status %}} diff --git a/content/en/docs/distributions/OWNERS b/content/en/docs/distributions/OWNERS new file mode 100644 index 0000000000..9e5e36c27d --- /dev/null +++ b/content/en/docs/distributions/OWNERS @@ -0,0 +1,6 @@ +approvers: + - Bobgy + - RFMVasconcelos + +reviewers: + - 8bitmp3 \ No newline at end of file diff --git a/content/en/docs/distributions/_index.md b/content/en/docs/distributions/_index.md new file mode 100644 index 0000000000..8677eeb59e --- /dev/null +++ b/content/en/docs/distributions/_index.md @@ -0,0 +1,5 @@ ++++ +title = "Distributions" +description = "A list of available Kubeflow distributions" +weight = 40 ++++ diff --git a/content/en/docs/aws/OWNERS b/content/en/docs/distributions/aws/OWNERS similarity index 100% rename from content/en/docs/aws/OWNERS rename to content/en/docs/distributions/aws/OWNERS diff --git a/content/en/docs/aws/_index.md b/content/en/docs/distributions/aws/_index.md similarity index 90% rename from content/en/docs/aws/_index.md rename to content/en/docs/distributions/aws/_index.md index e04f332c2d..93032789a4 100644 --- a/content/en/docs/aws/_index.md +++ b/content/en/docs/distributions/aws/_index.md @@ -1,5 +1,5 @@ +++ title = "Kubeflow on AWS" description = "Running Kubeflow on Kubernetes Engine and Amazon Web Services" -weight = 50 +weight = 20 +++ diff --git a/content/en/docs/aws/authentication-oidc.md b/content/en/docs/distributions/aws/authentication-oidc.md similarity index 100% rename from content/en/docs/aws/authentication-oidc.md rename to content/en/docs/distributions/aws/authentication-oidc.md diff --git a/content/en/docs/aws/authentication.md b/content/en/docs/distributions/aws/authentication.md similarity index 100% rename from content/en/docs/aws/authentication.md rename to content/en/docs/distributions/aws/authentication.md diff --git a/content/en/docs/aws/aws-e2e.md b/content/en/docs/distributions/aws/aws-e2e.md similarity index 100% rename from 
content/en/docs/aws/aws-e2e.md rename to content/en/docs/distributions/aws/aws-e2e.md diff --git a/content/en/docs/aws/custom-domain.md b/content/en/docs/distributions/aws/custom-domain.md similarity index 100% rename from content/en/docs/aws/custom-domain.md rename to content/en/docs/distributions/aws/custom-domain.md diff --git a/content/en/docs/aws/customizing-aws.md b/content/en/docs/distributions/aws/customizing-aws.md similarity index 100% rename from content/en/docs/aws/customizing-aws.md rename to content/en/docs/distributions/aws/customizing-aws.md diff --git a/content/en/docs/aws/deploy/_index.md b/content/en/docs/distributions/aws/deploy/_index.md similarity index 100% rename from content/en/docs/aws/deploy/_index.md rename to content/en/docs/distributions/aws/deploy/_index.md diff --git a/content/en/docs/aws/deploy/install-kubeflow.md b/content/en/docs/distributions/aws/deploy/install-kubeflow.md similarity index 100% rename from content/en/docs/aws/deploy/install-kubeflow.md rename to content/en/docs/distributions/aws/deploy/install-kubeflow.md diff --git a/content/en/docs/aws/deploy/uninstall-kubeflow.md b/content/en/docs/distributions/aws/deploy/uninstall-kubeflow.md similarity index 100% rename from content/en/docs/aws/deploy/uninstall-kubeflow.md rename to content/en/docs/distributions/aws/deploy/uninstall-kubeflow.md diff --git a/content/en/docs/aws/features.md b/content/en/docs/distributions/aws/features.md similarity index 100% rename from content/en/docs/aws/features.md rename to content/en/docs/distributions/aws/features.md diff --git a/content/en/docs/aws/files/rds.yaml b/content/en/docs/distributions/aws/files/rds.yaml similarity index 100% rename from content/en/docs/aws/files/rds.yaml rename to content/en/docs/distributions/aws/files/rds.yaml diff --git a/content/en/docs/aws/iam-for-sa.md b/content/en/docs/distributions/aws/iam-for-sa.md similarity index 100% rename from content/en/docs/aws/iam-for-sa.md rename to 
content/en/docs/distributions/aws/iam-for-sa.md diff --git a/content/en/docs/aws/logging.md b/content/en/docs/distributions/aws/logging.md similarity index 100% rename from content/en/docs/aws/logging.md rename to content/en/docs/distributions/aws/logging.md diff --git a/content/en/docs/aws/notebook-server.md b/content/en/docs/distributions/aws/notebook-server.md similarity index 100% rename from content/en/docs/aws/notebook-server.md rename to content/en/docs/distributions/aws/notebook-server.md diff --git a/content/en/docs/aws/pipeline.md b/content/en/docs/distributions/aws/pipeline.md similarity index 100% rename from content/en/docs/aws/pipeline.md rename to content/en/docs/distributions/aws/pipeline.md diff --git a/content/en/docs/aws/private-access.md b/content/en/docs/distributions/aws/private-access.md similarity index 100% rename from content/en/docs/aws/private-access.md rename to content/en/docs/distributions/aws/private-access.md diff --git a/content/en/docs/aws/rds.md b/content/en/docs/distributions/aws/rds.md similarity index 100% rename from content/en/docs/aws/rds.md rename to content/en/docs/distributions/aws/rds.md diff --git a/content/en/docs/aws/storage.md b/content/en/docs/distributions/aws/storage.md similarity index 100% rename from content/en/docs/aws/storage.md rename to content/en/docs/distributions/aws/storage.md diff --git a/content/en/docs/aws/troubleshooting-aws.md b/content/en/docs/distributions/aws/troubleshooting-aws.md similarity index 100% rename from content/en/docs/aws/troubleshooting-aws.md rename to content/en/docs/distributions/aws/troubleshooting-aws.md diff --git a/content/en/docs/azure/OWNERS b/content/en/docs/distributions/azure/OWNERS similarity index 100% rename from content/en/docs/azure/OWNERS rename to content/en/docs/distributions/azure/OWNERS diff --git a/content/en/docs/azure/_index.md b/content/en/docs/distributions/azure/_index.md similarity index 90% rename from content/en/docs/azure/_index.md rename to 
content/en/docs/distributions/azure/_index.md index 8241b9a2a9..af7485a6cb 100644 --- a/content/en/docs/azure/_index.md +++ b/content/en/docs/distributions/azure/_index.md @@ -1,5 +1,5 @@ +++ title = "Kubeflow on Azure" description = "Running Kubeflow on Kubernetes Engine and Microsoft Azure" -weight = 50 +weight = 20 +++ diff --git a/content/en/docs/azure/authentication-oidc.md b/content/en/docs/distributions/azure/authentication-oidc.md similarity index 100% rename from content/en/docs/azure/authentication-oidc.md rename to content/en/docs/distributions/azure/authentication-oidc.md diff --git a/content/en/docs/azure/authentication.md b/content/en/docs/distributions/azure/authentication.md similarity index 100% rename from content/en/docs/azure/authentication.md rename to content/en/docs/distributions/azure/authentication.md diff --git a/content/en/docs/azure/azureEndtoEnd.md b/content/en/docs/distributions/azure/azureEndtoEnd.md similarity index 100% rename from content/en/docs/azure/azureEndtoEnd.md rename to content/en/docs/distributions/azure/azureEndtoEnd.md diff --git a/content/en/docs/azure/azureMySQL.md b/content/en/docs/distributions/azure/azureMySQL.md similarity index 100% rename from content/en/docs/azure/azureMySQL.md rename to content/en/docs/distributions/azure/azureMySQL.md diff --git a/content/en/docs/azure/deploy/_index.md b/content/en/docs/distributions/azure/deploy/_index.md similarity index 100% rename from content/en/docs/azure/deploy/_index.md rename to content/en/docs/distributions/azure/deploy/_index.md diff --git a/content/en/docs/azure/deploy/existing-cluster.md b/content/en/docs/distributions/azure/deploy/existing-cluster.md similarity index 100% rename from content/en/docs/azure/deploy/existing-cluster.md rename to content/en/docs/distributions/azure/deploy/existing-cluster.md diff --git a/content/en/docs/azure/deploy/install-kubeflow.md b/content/en/docs/distributions/azure/deploy/install-kubeflow.md similarity index 99% rename from 
content/en/docs/azure/deploy/install-kubeflow.md rename to content/en/docs/distributions/azure/deploy/install-kubeflow.md index d0b286151e..61e8b8819f 100644 --- a/content/en/docs/azure/deploy/install-kubeflow.md +++ b/content/en/docs/distributions/azure/deploy/install-kubeflow.md @@ -184,4 +184,4 @@ Run the following commands to set up and deploy Kubeflow. ## Additional information - You can find general information about Kubeflow configuration in the guide to [configuring Kubeflow with kfctl and kustomize](/docs/other-guides/kustomize/). + You can find general information about Kubeflow configuration in the guide to [configuring Kubeflow with kfctl and kustomize](/docs/methods/kfctl/kustomize/). diff --git a/content/en/docs/azure/deploy/uninstall-kubeflow.md b/content/en/docs/distributions/azure/deploy/uninstall-kubeflow.md similarity index 100% rename from content/en/docs/azure/deploy/uninstall-kubeflow.md rename to content/en/docs/distributions/azure/deploy/uninstall-kubeflow.md diff --git a/content/en/docs/azure/images/appReg.PNG b/content/en/docs/distributions/azure/images/appReg.PNG similarity index 100% rename from content/en/docs/azure/images/appReg.PNG rename to content/en/docs/distributions/azure/images/appReg.PNG diff --git a/content/en/docs/azure/images/clientID2.PNG b/content/en/docs/distributions/azure/images/clientID2.PNG similarity index 100% rename from content/en/docs/azure/images/clientID2.PNG rename to content/en/docs/distributions/azure/images/clientID2.PNG diff --git a/content/en/docs/azure/images/createContainerReg.PNG b/content/en/docs/distributions/azure/images/createContainerReg.PNG similarity index 100% rename from content/en/docs/azure/images/createContainerReg.PNG rename to content/en/docs/distributions/azure/images/createContainerReg.PNG diff --git a/content/en/docs/azure/images/creatingWS.PNG b/content/en/docs/distributions/azure/images/creatingWS.PNG similarity index 100% rename from content/en/docs/azure/images/creatingWS.PNG 
rename to content/en/docs/distributions/azure/images/creatingWS.PNG diff --git a/content/en/docs/azure/images/finalOutput.PNG b/content/en/docs/distributions/azure/images/finalOutput.PNG similarity index 100% rename from content/en/docs/azure/images/finalOutput.PNG rename to content/en/docs/distributions/azure/images/finalOutput.PNG diff --git a/content/en/docs/azure/images/finishedRunning.PNG b/content/en/docs/distributions/azure/images/finishedRunning.PNG similarity index 100% rename from content/en/docs/azure/images/finishedRunning.PNG rename to content/en/docs/distributions/azure/images/finishedRunning.PNG diff --git a/content/en/docs/azure/images/password.PNG b/content/en/docs/distributions/azure/images/password.PNG similarity index 100% rename from content/en/docs/azure/images/password.PNG rename to content/en/docs/distributions/azure/images/password.PNG diff --git a/content/en/docs/azure/images/pipelinedash.PNG b/content/en/docs/distributions/azure/images/pipelinedash.PNG similarity index 100% rename from content/en/docs/azure/images/pipelinedash.PNG rename to content/en/docs/distributions/azure/images/pipelinedash.PNG diff --git a/content/en/docs/azure/images/pipelinesInput.png b/content/en/docs/distributions/azure/images/pipelinesInput.png similarity index 100% rename from content/en/docs/azure/images/pipelinesInput.png rename to content/en/docs/distributions/azure/images/pipelinesInput.png diff --git a/content/en/docs/azure/images/pipelinesUpload.PNG b/content/en/docs/distributions/azure/images/pipelinesUpload.PNG similarity index 100% rename from content/en/docs/azure/images/pipelinesUpload.PNG rename to content/en/docs/distributions/azure/images/pipelinesUpload.PNG diff --git a/content/en/docs/azure/images/roleAssign.PNG b/content/en/docs/distributions/azure/images/roleAssign.PNG similarity index 100% rename from content/en/docs/azure/images/roleAssign.PNG rename to content/en/docs/distributions/azure/images/roleAssign.PNG diff --git 
a/content/en/docs/azure/machinelearningcomponent.md b/content/en/docs/distributions/azure/machinelearningcomponent.md similarity index 100% rename from content/en/docs/azure/machinelearningcomponent.md rename to content/en/docs/distributions/azure/machinelearningcomponent.md diff --git a/content/en/docs/azure/troubleshooting-azure.md b/content/en/docs/distributions/azure/troubleshooting-azure.md similarity index 100% rename from content/en/docs/azure/troubleshooting-azure.md rename to content/en/docs/distributions/azure/troubleshooting-azure.md diff --git a/content/en/docs/distributions/charmed/OWNERS b/content/en/docs/distributions/charmed/OWNERS new file mode 100644 index 0000000000..4ed9ad6751 --- /dev/null +++ b/content/en/docs/distributions/charmed/OWNERS @@ -0,0 +1,5 @@ +approvers: + - RFMVasconcelos + - knkski +reviewers: + - DomFleischmann diff --git a/content/en/docs/distributions/charmed/_index.md b/content/en/docs/distributions/charmed/_index.md new file mode 100644 index 0000000000..3317c3c7ca --- /dev/null +++ b/content/en/docs/distributions/charmed/_index.md @@ -0,0 +1,5 @@ ++++ +title = "Kubeflow Charmed Operators" +description = "Charmed Operators for Kubeflow deployment and day-2 operations" +weight = 50 ++++ diff --git a/content/en/docs/distributions/charmed/install-kubeflow.md b/content/en/docs/distributions/charmed/install-kubeflow.md new file mode 100644 index 0000000000..f84de9266e --- /dev/null +++ b/content/en/docs/distributions/charmed/install-kubeflow.md @@ -0,0 +1,110 @@ ++++ +title = "Installing Kubeflow with Charmed Operators" +description = "Instructions for Kubeflow deployment with Kubeflow Charmed Operators" +weight = 10 ++++ + +This guide outlines the steps you need to install and deploy Kubeflow with [Charmed Operators](https://charmed-kubeflow.io/docs) and [Juju](https://juju.is/docs/kubernetes) on any conformant Kubernetes, including [Azure Kubernetes Service (AKS)](https://docs.microsoft.com/en-us/azure/aks/), [Amazon Elastic 
Kubernetes Service (EKS)](https://docs.aws.amazon.com/eks/index.html), [Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine/docs/), [OpenShift](https://docs.openshift.com), and any [kubeadm](https://kubernetes.io/docs/reference/setup-tools/kubeadm/)-deployed cluster (provided that you have access to it via `kubectl`). + +#### 1. Install the Juju client + +On Linux, install `juju` via [snap](https://snapcraft.io/docs/installing-snapd) with the following command: + +```bash +snap install juju --classic +``` + +If you use macOS, you can use [Homebrew](https://brew.sh) and type `brew install juju` in the command line. For Windows, download the Windows [installer for Juju](https://launchpad.net/juju/2.8/2.8.5/+download/juju-setup-2.8.5-signed.exe). + +#### 2. Connect Juju to your Kubernetes cluster + +To operate workloads in your Kubernetes cluster with Juju, you have to add the cluster to the list of *clouds* in Juju via the `add-k8s` command. + +If your Kubernetes config file is in the default location (such as `~/.kube/config` on Linux) and you only have one cluster, you can simply run: + +```bash +juju add-k8s myk8s +``` +If your kubectl config file contains multiple clusters, you can specify the appropriate one by name: + +```bash +juju add-k8s myk8s --cluster-name=foo +``` +Finally, to use a different config file, you can set the `KUBECONFIG` environment variable to point to the relevant file. For example: + +```bash +KUBECONFIG=path/to/file juju add-k8s myk8s +``` + +For more details, go to the [official Juju documentation](https://juju.is/docs/clouds). + +#### 3. Create a controller + +To operate workloads on your Kubernetes cluster, Juju uses controllers. You can create a controller with the `bootstrap` command: + +```bash +juju bootstrap myk8s my-controller +``` + +This command will create a couple of pods under the `my-controller` namespace. You can see your controllers with the `juju controllers` command. 
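As a side note on step 2 above: the `KUBECONFIG=path/to/file juju add-k8s myk8s` form scopes the variable to that single invocation, so your shell environment is unchanged afterwards. A minimal sketch of that shell behavior, using `echo` as a stand-in for `juju` (the path is a placeholder, not a real config file):

```shell
# The VAR=value prefix applies only to the one command that follows it.
# Stand-in for: KUBECONFIG=path/to/file juju add-k8s myk8s
unset KUBECONFIG
KUBECONFIG=path/to/file sh -c 'echo "command saw KUBECONFIG=$KUBECONFIG"'
echo "after: KUBECONFIG=${KUBECONFIG:-unset}"
```

This is why you can point one `add-k8s` call at an alternate cluster config without disturbing later `kubectl` or `juju` commands in the same shell.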
+ +You can read more about controllers in the [Juju documentation](https://juju.is/docs/creating-a-controller). + +#### 4. Create a model + +A model in Juju is a blank canvas where your operators will be deployed, and it holds a 1:1 relationship with a Kubernetes namespace. + +You can create a model and give it a name, e.g. `kubeflow`, with the `add-model` command, and you will also be creating a Kubernetes namespace of the same name: + +```bash +juju add-model kubeflow +``` +You can list your models with the `juju models` command. + +#### 5. Deploy Kubeflow + +{{% alert title="Minimum resources" color="warning" %}} +To deploy `kubeflow`, you'll need at least 50GB of disk, 14GB of RAM, and 2 CPUs available on your machine/VM. +If you have fewer resources, deploy `kubeflow-lite` or `kubeflow-edge`. +{{% /alert %}} + +Once you have a model, you can simply `juju deploy` any of the provided [Kubeflow bundles](https://charmed-kubeflow.io/docs/operators-and-bundles) into your cluster. For the _Kubeflow lite_ bundle, run: + +```bash +juju deploy kubeflow-lite +``` + +and your Kubeflow installation should begin! + +You can observe your Kubeflow deployment spinning up with the command: + +```bash +watch -c juju status --color +``` + +#### 6. Add an RBAC role for Istio + +At the time of writing this guide, to set up Kubeflow with [Istio](https://istio.io) correctly, you need to provide the `istio-ingressgateway` operator access to Kubernetes resources. Use the following command to create the appropriate role: + +```bash +kubectl patch role -n kubeflow istio-ingressgateway-operator -p '{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role","metadata":{"name":"istio-ingressgateway-operator"},"rules":[{"apiGroups":["*"],"resources":["*"],"verbs":["*"]}]}' +``` + +#### 7. Set URL in authentication methods + +Finally, you need to enable your Kubeflow dashboard access.
Provide the dashboard's public URL to dex-auth and oidc-gatekeeper as follows: + +```bash +juju config dex-auth public-url=http://<URL> +juju config oidc-gatekeeper public-url=http://<URL> +``` + +where in place of `<URL>` you should use the hostname that the Kubeflow dashboard responds to. + +#### More documentation + +For more documentation, visit the [Charmed Kubeflow website](https://charmed-kubeflow.io/docs). + +#### Having issues? + +If you have any issues or questions, feel free to create a GitHub issue [here](https://github.com/canonical/bundle-kubeflow/issues). diff --git a/content/en/docs/gke/OWNERS b/content/en/docs/distributions/gke/OWNERS similarity index 84% rename from content/en/docs/gke/OWNERS rename to content/en/docs/distributions/gke/OWNERS index 3924159385..8c366b5500 100644 --- a/content/en/docs/gke/OWNERS +++ b/content/en/docs/distributions/gke/OWNERS @@ -1,7 +1,7 @@ approvers: - Bobgy - joeliedtke - - rmgogogo + - zijianjoy reviewers: - 8bitmp3 - joeliedtke diff --git a/content/en/docs/gke/_index.md b/content/en/docs/distributions/gke/_index.md similarity index 90% rename from content/en/docs/gke/_index.md rename to content/en/docs/distributions/gke/_index.md index aee6944c9f..bb64609cca 100644 --- a/content/en/docs/gke/_index.md +++ b/content/en/docs/distributions/gke/_index.md @@ -1,5 +1,5 @@ +++ title = "Kubeflow on GCP" description = "Running Kubeflow on Kubernetes Engine and Google Cloud Platform" -weight = 50 +weight = 20 +++ diff --git a/content/en/docs/gke/anthos.md b/content/en/docs/distributions/gke/anthos.md similarity index 74% rename from content/en/docs/gke/anthos.md rename to content/en/docs/distributions/gke/anthos.md index 05a70faf44..c6654281fe 100644 --- a/content/en/docs/gke/anthos.md +++ b/content/en/docs/distributions/gke/anthos.md @@ -4,10 +4,6 @@ description = "Running Kubeflow across on-premises and cloud environments with A weight = 12 +++ -{{% alert title="Out of date" color="warning" %}} -This guide contains outdated information
pertaining to Kubeflow 1.0. This guide -needs to be updated for Kubeflow 1.1. -{{% /alert %}} [Anthos](https://cloud.google.com/anthos) is a hybrid and multi-cloud application platform developed and supported by Google. Anthos is built on @@ -16,7 +12,7 @@ open source technologies, including Kubernetes, Istio, and Knative. Using Anthos, you can create a consistent setup across your on-premises and cloud environments, helping you to automate policy and security at scale. -Kubeflow on GKE On Prem is a work in progress. To track progress you can subscribe +We are collecting interest for Kubeflow on GKE On Prem. You can subscribe to the GitHub issue [kubeflow/gcp-blueprints#138](https://github.com/kubeflow/gcp-blueprints/issues/138). ## Next steps diff --git a/content/en/docs/gke/authentication.md b/content/en/docs/distributions/gke/authentication.md similarity index 100% rename from content/en/docs/gke/authentication.md rename to content/en/docs/distributions/gke/authentication.md diff --git a/content/en/docs/gke/cloud-filestore.md b/content/en/docs/distributions/gke/cloud-filestore.md similarity index 100% rename from content/en/docs/gke/cloud-filestore.md rename to content/en/docs/distributions/gke/cloud-filestore.md diff --git a/content/en/docs/gke/custom-domain.md b/content/en/docs/distributions/gke/custom-domain.md similarity index 100% rename from content/en/docs/gke/custom-domain.md rename to content/en/docs/distributions/gke/custom-domain.md diff --git a/content/en/docs/gke/customizing-gke.md b/content/en/docs/distributions/gke/customizing-gke.md similarity index 99% rename from content/en/docs/gke/customizing-gke.md rename to content/en/docs/distributions/gke/customizing-gke.md index 3bb8937a30..238f387a48 100644 --- a/content/en/docs/gke/customizing-gke.md +++ b/content/en/docs/distributions/gke/customizing-gke.md @@ -105,7 +105,7 @@ You can use [kustomize](https://kustomize.io/) to customize Kubeflow. 
Make sure that you have the minimum required version of kustomize: {{% kustomize-min-version %}} or later. For more information about kustomize in Kubeflow, see -[how Kubeflow uses kustomize](/docs/other-guides/kustomize/). +[how Kubeflow uses kustomize](/docs/methods/kfctl/kustomize/). To customize the Kubernetes resources running within the cluster, you can modify the kustomize manifests in `${KF_DIR}/kustomize`. diff --git a/content/en/docs/gke/deploy/_index.md b/content/en/docs/distributions/gke/deploy/_index.md similarity index 100% rename from content/en/docs/gke/deploy/_index.md rename to content/en/docs/distributions/gke/deploy/_index.md diff --git a/content/en/docs/gke/deploy/delete-cli.md b/content/en/docs/distributions/gke/deploy/delete-cli.md similarity index 100% rename from content/en/docs/gke/deploy/delete-cli.md rename to content/en/docs/distributions/gke/deploy/delete-cli.md diff --git a/content/en/docs/gke/deploy/deploy-cli.md b/content/en/docs/distributions/gke/deploy/deploy-cli.md similarity index 98% rename from content/en/docs/gke/deploy/deploy-cli.md rename to content/en/docs/distributions/gke/deploy/deploy-cli.md index a2023cc02e..8e0c7a6295 100644 --- a/content/en/docs/gke/deploy/deploy-cli.md +++ b/content/en/docs/distributions/gke/deploy/deploy-cli.md @@ -78,14 +78,14 @@ purpose. No tools will assume they actually exists in your terminal environment. 1. Install [Kustomize](https://kubectl.docs.kubernetes.io/installation/kustomize/). - **Note:** Prior to Kubeflow v1.2, Kubeflow was compatible only with Kustomize `v3.2.1`. Starting from Kubeflow v1.2, you can now use the latest Kustomize versions to install Kubeflow. + **Note:** Prior to Kubeflow v1.2, Kubeflow was compatible only with Kustomize `v3.2.1`. Starting from Kubeflow v1.2, you can now use any `v3` Kustomize version to install Kubeflow. Kustomize `v4` is not supported out of the box yet. 
[Official Version](https://github.com/kubeflow/manifests/tree/master#prerequisites) To deploy the latest version of Kustomize on a Linux or Mac machine, run the following commands: ```bash # Detect your OS and download the corresponding latest Kustomize binary - curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash - + curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" > install_kustomize.sh + bash ./install_kustomize.sh 3.2.0 # Add the kustomize package to your $PATH env variable sudo mv ./kustomize /usr/local/bin/kustomize ``` diff --git a/content/en/docs/gke/deploy/deploy-ui.md b/content/en/docs/distributions/gke/deploy/deploy-ui.md similarity index 100% rename from content/en/docs/gke/deploy/deploy-ui.md rename to content/en/docs/distributions/gke/deploy/deploy-ui.md diff --git a/content/en/docs/gke/deploy/management-setup.md b/content/en/docs/distributions/gke/deploy/management-setup.md similarity index 100% rename from content/en/docs/gke/deploy/management-setup.md rename to content/en/docs/distributions/gke/deploy/management-setup.md diff --git a/content/en/docs/gke/deploy/monitor-iap-setup.md b/content/en/docs/distributions/gke/deploy/monitor-iap-setup.md similarity index 100% rename from content/en/docs/gke/deploy/monitor-iap-setup.md rename to content/en/docs/distributions/gke/deploy/monitor-iap-setup.md diff --git a/content/en/docs/gke/deploy/oauth-setup.md b/content/en/docs/distributions/gke/deploy/oauth-setup.md similarity index 100% rename from content/en/docs/gke/deploy/oauth-setup.md rename to content/en/docs/distributions/gke/deploy/oauth-setup.md diff --git a/content/en/docs/gke/deploy/project-setup.md b/content/en/docs/distributions/gke/deploy/project-setup.md similarity index 100% rename from content/en/docs/gke/deploy/project-setup.md rename to content/en/docs/distributions/gke/deploy/project-setup.md diff --git 
a/content/en/docs/gke/deploy/reasons.md b/content/en/docs/distributions/gke/deploy/reasons.md similarity index 100% rename from content/en/docs/gke/deploy/reasons.md rename to content/en/docs/distributions/gke/deploy/reasons.md diff --git a/content/en/docs/gke/deploy/upgrade.md b/content/en/docs/distributions/gke/deploy/upgrade.md similarity index 100% rename from content/en/docs/gke/deploy/upgrade.md rename to content/en/docs/distributions/gke/deploy/upgrade.md diff --git a/content/en/docs/gke/gcp-e2e.md b/content/en/docs/distributions/gke/gcp-e2e.md similarity index 98% rename from content/en/docs/gke/gcp-e2e.md rename to content/en/docs/distributions/gke/gcp-e2e.md index 8d09af3aaa..7c061094db 100644 --- a/content/en/docs/gke/gcp-e2e.md +++ b/content/en/docs/distributions/gke/gcp-e2e.md @@ -124,7 +124,7 @@ It's time to get started! [tensorflow]: https://www.tensorflow.org/ [tf-train]: https://www.tensorflow.org/api_guides/python/train -[tf-serving]: https://www.tensorflow.org/serving/ +[tf-serving]: https://www.tensorflow.org/tfx/guide/serving [kubernetes]: https://kubernetes.io/ [kubernetes-engine]: https://cloud.google.com/kubernetes-engine/ diff --git a/content/en/docs/gke/monitoring.md b/content/en/docs/distributions/gke/monitoring.md similarity index 100% rename from content/en/docs/gke/monitoring.md rename to content/en/docs/distributions/gke/monitoring.md diff --git a/content/en/docs/gke/pipelines/_index.md b/content/en/docs/distributions/gke/pipelines/_index.md similarity index 100% rename from content/en/docs/gke/pipelines/_index.md rename to content/en/docs/distributions/gke/pipelines/_index.md diff --git a/content/en/docs/gke/pipelines/authentication-pipelines.md b/content/en/docs/distributions/gke/pipelines/authentication-pipelines.md similarity index 100% rename from content/en/docs/gke/pipelines/authentication-pipelines.md rename to content/en/docs/distributions/gke/pipelines/authentication-pipelines.md diff --git 
a/content/en/docs/gke/pipelines/authentication-sdk.md b/content/en/docs/distributions/gke/pipelines/authentication-sdk.md similarity index 100% rename from content/en/docs/gke/pipelines/authentication-sdk.md rename to content/en/docs/distributions/gke/pipelines/authentication-sdk.md diff --git a/content/en/docs/gke/pipelines/enable-gpu-and-tpu.md b/content/en/docs/distributions/gke/pipelines/enable-gpu-and-tpu.md similarity index 100% rename from content/en/docs/gke/pipelines/enable-gpu-and-tpu.md rename to content/en/docs/distributions/gke/pipelines/enable-gpu-and-tpu.md diff --git a/content/en/docs/gke/pipelines/preemptible.md b/content/en/docs/distributions/gke/pipelines/preemptible.md similarity index 100% rename from content/en/docs/gke/pipelines/preemptible.md rename to content/en/docs/distributions/gke/pipelines/preemptible.md diff --git a/content/en/docs/gke/pipelines/upgrade.md b/content/en/docs/distributions/gke/pipelines/upgrade.md similarity index 100% rename from content/en/docs/gke/pipelines/upgrade.md rename to content/en/docs/distributions/gke/pipelines/upgrade.md diff --git a/content/en/docs/gke/private-clusters.md b/content/en/docs/distributions/gke/private-clusters.md similarity index 100% rename from content/en/docs/gke/private-clusters.md rename to content/en/docs/distributions/gke/private-clusters.md diff --git a/content/en/docs/gke/troubleshooting-gke.md b/content/en/docs/distributions/gke/troubleshooting-gke.md similarity index 100% rename from content/en/docs/gke/troubleshooting-gke.md rename to content/en/docs/distributions/gke/troubleshooting-gke.md diff --git a/content/en/docs/ibm/OWNERS b/content/en/docs/distributions/ibm/OWNERS similarity index 100% rename from content/en/docs/ibm/OWNERS rename to content/en/docs/distributions/ibm/OWNERS diff --git a/content/en/docs/ibm/_index.md b/content/en/docs/distributions/ibm/_index.md similarity index 90% rename from content/en/docs/ibm/_index.md rename to 
content/en/docs/distributions/ibm/_index.md index 36b6fd9303..1e4a0ae2d2 100644 --- a/content/en/docs/ibm/_index.md +++ b/content/en/docs/distributions/ibm/_index.md @@ -1,5 +1,5 @@ +++ title = "Kubeflow on IBM Cloud" description = "Running Kubeflow on IBM Cloud Kubernetes Service (IKS)" -weight = 50 +weight = 20 +++ diff --git a/content/en/docs/distributions/ibm/create-cluster-vpc.md b/content/en/docs/distributions/ibm/create-cluster-vpc.md new file mode 100644 index 0000000000..c79b8dbe9e --- /dev/null +++ b/content/en/docs/distributions/ibm/create-cluster-vpc.md @@ -0,0 +1,289 @@ ++++ +title = "Create or access an IBM Cloud Kubernetes cluster on a VPC" +description = "Instructions for creating or connecting to a Kubernetes cluster on IBM Cloud vpc-gen2" +weight = 4 ++++ + +## Create and set up a new cluster + +Follow these steps to create and set up a new IBM Cloud Kubernetes Service (IKS) cluster on the `vpc-gen2` provider. + +A `vpc-gen2` cluster does not expose each node to the public internet directly and thus has a more secure +and more complex network setup. It is the recommended setup for secure production use cases of Kubeflow. + +### Setting environment variables + +Choose the region and the worker node provider for your cluster, and set the environment variables. + +```shell +export KUBERNETES_VERSION=1.18 +export CLUSTER_ZONE=us-south-3 +export CLUSTER_NAME=kubeflow-vpc +``` + +where: + +- `KUBERNETES_VERSION`: Run `ibmcloud ks versions` to see the supported Kubernetes versions. Refer to + [Supported version matrix](https://www.kubeflow.org/docs/started/k8s/overview/#minimum-system-requirements). +- `CLUSTER_ZONE`: Run `ibmcloud ks locations` to list supported zones. For example, choose `us-south-3` to create your + cluster in the Dallas (US) data center. +- `CLUSTER_NAME` must be lowercase and unique among any other Kubernetes + clusters in the specified `${CLUSTER_ZONE}`.
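Since a typo in these variables only surfaces later at cluster-creation time, a quick mechanical check can help. The sketch below reuses the example values from above and enforces only the rule stated for `CLUSTER_NAME` (lowercase); the version and zone should still be validated against `ibmcloud ks versions` and `ibmcloud ks locations`:

```shell
# Example values from the guide; adjust for your account and region.
export KUBERNETES_VERSION=1.18
export CLUSTER_ZONE=us-south-3
export CLUSTER_NAME=kubeflow-vpc

# IKS cluster names must be lowercase; fail fast if this one is not.
case "$CLUSTER_NAME" in
  *[A-Z]*) echo "CLUSTER_NAME must be lowercase: $CLUSTER_NAME" >&2; exit 1 ;;
  *)       echo "CLUSTER_NAME ok: $CLUSTER_NAME" ;;
esac
```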
+ +**Notice**: Refer to [Creating clusters](https://cloud.ibm.com/docs/containers?topic=containers-clusters) in the IBM +Cloud documentation for additional information on how to set up other providers and zones in your cluster. + +### Choosing a worker node flavor + +Worker node flavor names vary across zones and providers. Run +`ibmcloud ks flavors --zone ${CLUSTER_ZONE} --provider vpc-gen2` to list available flavors. + +Below are some examples of flavors supported in the `us-south-3` zone with the `vpc-gen2` node provider: + +```shell +ibmcloud ks flavors --zone us-south-3 --provider vpc-gen2 +``` + +Example output: + +``` +For more information about these flavors, see 'https://ibm.biz/flavors' +Name Cores Memory Network Speed OS Server Type Storage Secondary Storage Provider +bx2.16x64 16 64GB 16Gbps UBUNTU_18_64 virtual 100GB 0B vpc-gen2 +bx2.2x8† 2 8GB 4Gbps UBUNTU_18_64 virtual 100GB 0B vpc-gen2 +bx2.32x128 32 128GB 16Gbps UBUNTU_18_64 virtual 100GB 0B vpc-gen2 +bx2.48x192 48 192GB 16Gbps UBUNTU_18_64 virtual 100GB 0B vpc-gen2 +bx2.4x16 4 16GB 8Gbps UBUNTU_18_64 virtual 100GB 0B vpc-gen2 +... +``` + +The recommended configuration for a cluster is at least 8 vCPU cores with 16GB memory. Hence, we recommend the +`bx2.4x16` flavor to create a two-worker-node cluster. Keep in mind that you can always scale the cluster +by adding more worker nodes should your application scale up. + +Now set the environment variable with the flavor you chose. + +```shell +export WORKER_NODE_FLAVOR=bx2.4x16 +``` + +## Create an IBM Cloud Kubernetes cluster for `vpc-gen2` infrastructure + +Creating a `vpc-gen2` based cluster requires a VPC, a subnet, and a public gateway attached to it. Fortunately, this is a one-time setup. Future `vpc-gen2` clusters can reuse the same VPC/subnet (with attached public gateway). + +1.
Begin by installing the `vpc-infrastructure` plugin: + + ```shell + ibmcloud plugin install vpc-infrastructure + ``` + + Refer to this [link](https://cloud.ibm.com/docs/containers?topic=containers-vpc_ks_tutorial) for more information. + +2. Target `vpc-gen2` to access gen 2 VPC resources: + + ```shell + ibmcloud is target --gen 2 + ``` + + Verify that the target is correctly set up: + + ```shell + ibmcloud is target + ``` + + Example output: + + ``` + Target Generation: 2 + ``` + +3. Create or use an existing VPC: + + a) Use an existing VPC: + + ```shell + ibmcloud is vpcs + ``` + + + Example output: + ``` + Listing vpcs for generation 2 compute in all resource groups and region ... + ID Name Status Classic access Default network ACL Default security group Resource group + r006-hidden-68cc-4d40-xxxx-4319fa3gxxxx my-vpc1 available false husker-sloping-bee-resize blimp-hasty-unaware-overflow kubeflow + ``` + + If the above list contains the VPC that can be used to deploy your cluster, make a note of its ID. + + b) To create a new VPC, proceed as follows: + + ```shell + ibmcloud is vpc-create my-vpc + ``` + + Example output: + + ``` + Creating vpc my-vpc in resource group kubeflow under account IBM as ... + + ID r006-hidden-68cc-4d40-xxxx-4319fa3fxxxx + Name my-vpc + ... + ``` + + **Save the ID in a variable `VPC_ID` as follows, so that we can use it later.** + + ```shell + export VPC_ID=r006-hidden-68cc-4d40-xxxx-4319fa3fxxxx + ``` + +4. Create or use an existing subnet: + + a) To use an existing subnet: + + ```shell + ibmcloud is subnets + ``` + + Example output: + + ``` + Listing subnets for generation 2 compute in all resource groups and region ...
+   ID                                          Name        Status      Subnet CIDR       Addresses     ACL                         Public Gateway   VPC      Zone         Resource group
+   0737-27299d09-1d95-4a9d-a491-a6949axxxxxx   my-subnet   available   10.240.128.0/18   16373/16384   husker-sloping-bee-resize   my-gateway       my-vpc   us-south-3   kubeflow
+   ```
+
+   If the above list contains a subnet in your VPC that can be used to deploy your cluster, make sure
+   you note its ID.
+
+   b) To create a new subnet:
+   - List the address prefixes and note the CIDR block corresponding to your zone;
+     in the example below, the CIDR block for zone `us-south-3` is `10.240.128.0/18`.
+
+     ```shell
+     ibmcloud is vpc-address-prefixes $VPC_ID
+     ```
+
+     Example output:
+
+     ```
+     Listing address prefixes of vpc r006-hidden-68cc-4d40-xxxx-4319fa3fxxxx under account IBM as user new@user-email.com...
+     ID                                          Name                                CIDR block        Zone         Has subnets   Is default   Created
+     r006-xxxxxxxx-4002-46d2-8a4f-f69e7ba3xxxx   rising-rectified-much-brew          10.240.0.0/18     us-south-1   false         true         2021-03-05T14:58:39+05:30
+     r006-xxxxxxxx-dca9-4321-bb6c-960c4424xxxx   retrial-reversal-pelican-cavalier   10.240.64.0/18    us-south-2   false         true         2021-03-05T14:58:39+05:30
+     r006-xxxxxxxx-7352-4a46-bfb1-fcbac6cbxxxx   subfloor-certainly-herbal-ajar      10.240.128.0/18   us-south-3   false         true         2021-03-05T14:58:39+05:30
+     ```
+
+   - Now create a subnet as follows:
+
+     ```shell
+     ibmcloud is subnet-create my-subnet $VPC_ID $CLUSTER_ZONE --ipv4-cidr-block "10.240.128.0/18"
+     ```
+
+     Example output:
+
+     ```
+     Creating subnet my-subnet in resource group kubeflow under account IBM as user new@user-email.com...
+
+     ID               0737-27299d09-1d95-4a9d-a491-a6949axxxxxx
+     Name             my-subnet
+     ```
+
+   - Make sure you export the subnet ID as follows:
+
+     ```shell
+     export SUBNET_ID=0737-27299d09-1d95-4a9d-a491-a6949axxxxxx
+     ```
+
+5.
Create a `vpc-gen2` based Kubernetes cluster:
+
+   ```shell
+   ibmcloud ks cluster create vpc-gen2 \
+     --name $CLUSTER_NAME \
+     --zone $CLUSTER_ZONE \
+     --version ${KUBERNETES_VERSION} \
+     --flavor ${WORKER_NODE_FLAVOR} \
+     --vpc-id ${VPC_ID} \
+     --subnet-id ${SUBNET_ID} \
+     --workers 2
+   ```
+
+6. Attach a public gateway
+
+   This step is mandatory for the Kubeflow deployment to succeed, because pods need public internet access to download images.
+
+   - First, check if your cluster is already assigned a public gateway:
+
+     ```shell
+     ibmcloud is pubgws
+     ```
+
+     Example output:
+
+     ```
+     Listing public gateways for generation 2 compute in all resource groups and region ...
+     ID                                          Name         Status      Floating IP       VPC      Zone         Resource group
+     r006-xxxxxxxx-5731-4ffe-bc51-1d9e5fxxxxxx   my-gateway   available   xxx.xxx.xxx.xxx   my-vpc   us-south-3   default
+
+     ```
+
+     In the run above, a gateway is already attached to the VPC `my-vpc`. If no gateway is attached, proceed with
+     the rest of the setup.
+
+   - Next, attach a public gateway by running the following command:
+
+     ```shell
+     ibmcloud is public-gateway-create my-gateway $VPC_ID $CLUSTER_ZONE
+     ```
+
+     Example output:
+     ```
+     ID: r006-xxxxxxxx-5731-4ffe-bc51-1d9e5fxxxxxx
+     ```
+
+     Save the generated gateway ID as follows:
+
+     ```shell
+     export GATEWAY_ID="r006-xxxxxxxx-5731-4ffe-bc51-1d9e5fxxxxxx"
+     ```
+
+   - Finally, attach the public gateway to the subnet:
+
+     ```shell
+     ibmcloud is subnet-update $SUBNET_ID --public-gateway-id $GATEWAY_ID
+     ```
+
+     Example output:
+
+     ```
+     Updating subnet 0737-27299d09-1d95-4a9d-a491-a6949axxxxxx under account IBM as user new@user-email.com...
+
+     ID               0737-27299d09-1d95-4a9d-a491-a6949axxxxxx
+     Name             my-subnet
+     ...
+
+     ```
+
+### Verifying the cluster
+
+To use the created cluster, switch the Kubernetes context to point to the cluster:
+
+```shell
+ibmcloud ks cluster config --cluster ${CLUSTER_NAME}
+```
+
+Make sure all worker nodes are up with the command below:
+
+```shell
+kubectl get nodes
+```
+
+and verify that all the nodes are in `Ready` state.
+
+### Delete the cluster
+
+Delete the cluster including its storage:
+
+```shell
+ibmcloud ks cluster rm --force-delete-storage -c ${CLUSTER_NAME}
+```
diff --git a/content/en/docs/ibm/create-cluster.md b/content/en/docs/distributions/ibm/create-cluster.md
similarity index 56%
rename from content/en/docs/ibm/create-cluster.md
rename to content/en/docs/distributions/ibm/create-cluster.md
index e2808201c6..39256fceb2 100644
--- a/content/en/docs/ibm/create-cluster.md
+++ b/content/en/docs/distributions/ibm/create-cluster.md
@@ -44,12 +44,23 @@ Get the Kubeconfig file: ibmcloud ks cluster config --cluster $CLUSTER_NAME ```
-From here on, please see [Install Kubeflow](/docs/ibm/deploy/install-kubeflow).
+From here on, go to [Install Kubeflow on IKS](/docs/ibm/deploy/install-kubeflow-on-iks) for more information.
 ## Create and setup a new cluster
-Follow these steps to create and setup a new [IBM Cloud Kubernetes Service(IKS) cluster:
+* Use a `classic` provider if you want to try out Kubeflow.
+* Use a `vpc-gen2` provider if you are familiar with cloud networking and want to deploy Kubeflow in a secure environment.
+
+A `classic` provider exposes each cluster node to the public internet and therefore has
+a relatively simple networking setup. Services exposed using Kubernetes `NodePort` need to be secured using an
+authentication mechanism.
+
+To create a cluster with the `vpc-gen2` provider, follow the
+[Create a cluster on IKS with a `vpc-gen2` provider](/docs/ibm/create-cluster-vpc)
+guide.
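The VPC and subnet steps above each end with copying an ID out of a CLI table by hand. The lookup can also be scripted; the sketch below assumes the tabular `ibmcloud is` output format shown earlier (column 1 is the ID, column 2 the name), and the sample rows, names, and IDs embedded here are made up for illustration:

```shell
# Extract resource IDs by name from `ibmcloud is`-style tables.
# Sample rows stand in for live `ibmcloud is vpcs` / `ibmcloud is subnets`
# output; the names and IDs below are hypothetical.
vpcs='r006-aaaa-1111 my-vpc available false acl-1 sg-1 kubeflow'
subnets='0737-bbbb-2222 my-subnet available 10.240.128.0/18 16373/16384 acl-1 my-gateway my-vpc us-south-3 kubeflow'

id_by_name() {
  # $1: resource name; stdin: table rows without the header line
  awk -v n="$1" '$2 == n {print $1}'
}

VPC_ID=$(echo "$vpcs" | id_by_name my-vpc)
SUBNET_ID=$(echo "$subnets" | id_by_name my-subnet)
echo "$VPC_ID $SUBNET_ID"
```

In a real run, the header line would need to be skipped (for example with `tail -n +2`) before piping into `id_by_name`.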
+
+The next section will explain how to create and set up a new IBM Cloud Kubernetes Service (IKS) cluster.
 ### Setting environment variables
@@ -62,20 +73,41 @@ export WORKER_NODE_PROVIDER=classic export CLUSTER_NAME=kubeflow ```
-- `KUBERNETES_VERSION` specifies the Kubernetes version for the cluster. Run `ibmcloud ks versions` to see the supported Kubernetes versions. If this environment variable is not set, the cluster will be created with the default version set by IBM Cloud Kubernetes Service. Refer to the [Minimum system requirements](https://www.kubeflow.org/docs/started/k8s/overview/#minimum-system-requirements) and choose a Kubernetes version compatible with the Kubeflow release to be deployed.
-- `CLUSTER_ZONE` identifies the regions or location where CLUSTER_NAME will be created. Run `ibmcloud ks locations` to list supported IBM Cloud Kubernetes Service locations. For example, choose `dal13` to create CLUSTER_NAME in the Dallas (US) data center.
-- `WORKER_NODE_PROVIDER` specifies the kind of IBM Cloud infrastructure on which the Kubernetes worker nodes will be created. The `classic` type supports worker nodes with GPUs. There are other worker nodes providers including `vpc-classic` and `vpc-gen2` where zone names and worker flavors will be different. Please use `ibmcloud ks zones --provider ${WORKER_NODE_PROVIDER}` to list zone names if using other providers and set the `CLUSTER_ZONE` accordingly.
+where:
+
+- `KUBERNETES_VERSION` specifies the Kubernetes version for the cluster. Run `ibmcloud ks versions` to see the supported
+  Kubernetes versions. If this environment variable is not set, the cluster will be created with the default version set
+  by IBM Cloud Kubernetes Service. Refer to
+  [Minimum system requirements](https://www.kubeflow.org/docs/started/k8s/overview/#minimum-system-requirements)
+  and choose a Kubernetes version compatible with the Kubeflow release to be deployed.
+- `CLUSTER_ZONE` identifies the region or location where the cluster will be created. Run `ibmcloud ks locations` to
+  list supported IBM Cloud Kubernetes Service locations. For example, choose `dal13` to create your cluster in the
+  Dallas (US) data center.
+- `WORKER_NODE_PROVIDER` specifies the kind of IBM Cloud infrastructure on which the Kubernetes worker nodes will be
+  created. The `classic` type supports worker nodes with GPUs. There are other worker node providers, including
+  `vpc-classic` and `vpc-gen2`, where zone names and worker flavors will be different. Run
+  `ibmcloud ks zones --provider classic` to list zone names for the `classic` provider and set the `CLUSTER_ZONE`
+  accordingly.
 - `CLUSTER_NAME` must be lowercase and unique among any other Kubernetes clusters in the specified `${CLUSTER_ZONE}`.
-**Notice**: If choosing other Kubernetes worker nodes providers than `classic`, refer to the IBM Cloud official document [Creating clusters](https://cloud.ibm.com/docs/containers?topic=containers-clusters) for detailed steps.
+**Notice**: Refer to [Creating clusters](https://cloud.ibm.com/docs/containers?topic=containers-clusters) in the IBM
+Cloud documentation for additional information on how to set up other providers and zones in your cluster.
 ### Choosing a worker node flavor
-The worker nodes flavor name varies from zones and providers. Run `ibmcloud ks flavors --zone ${CLUSTER_ZONE} --provider ${WORKER_NODE_PROVIDER}` to list available flavors. For example, following are some flavors supported in the `dal13` zone with `classic` worker node provider.
+The worker node flavor names vary by zone and provider. Run
+`ibmcloud ks flavors --zone ${CLUSTER_ZONE} --provider ${WORKER_NODE_PROVIDER}` to list available flavors.
+
+For example, the following are some worker node flavors supported in the `dal13` zone with a `classic` node provider.
+
+```shell
+ibmcloud ks flavors --zone dal13 --provider classic
+```
+
+Example output:
-```text
-$ ibmcloud ks flavors --zone dal13 --provider classic
+```
 OK For more information about these flavors, see 'https://ibm.biz/flavors' Name Cores Memory Network Speed OS Server Type Storage Secondary Storage Provider
@@ -92,15 +124,18 @@ b3c.8x32 8 32GB 1000Mbps UBUNTU_18_64 virtua ... ```
-Choose a flavor that will work for your applications. For the purpose of the Kubeflow deployment, the recommended configuration for a cluster is at least 8 vCPU cores with 16GB memory. Hence you can either choose the `b3c.8x32` flavor to create a one-worker-node cluster or choose the `b3c.4x16` flavor to create a two-worker-node cluster. Keep in mind that you can always scale the cluster by adding more worker nodes should your application scales up.
+Choose a flavor that will work for your applications. For the purpose of the Kubeflow deployment, the recommended
+configuration for a cluster is at least 8 vCPU cores with 16GB memory. Hence, you can either choose the `b3c.8x32` flavor
+to create a one-worker-node cluster or choose the `b3c.4x16` flavor to create a two-worker-node cluster. Keep in mind
+that you can always scale the cluster by adding more worker nodes should your application scale up.
-Now set the environment variable with the flavor you choose.
+Now, set the environment variable with the worker node flavor of your choice:
 ```shell export WORKER_NODE_FLAVOR=b3c.4x16 ```
-### Creating a IBM Cloud Kubernetes cluster
+### Creating an IBM Cloud Kubernetes cluster
 Run with the following command to create a cluster:
@@ -115,7 +150,11 @@ ibmcloud ks cluster create ${WORKER_NODE_PROVIDER} \ Replace the `workers` parameter above with the desired number of worker nodes.
-Note: If you're starting in a fresh account with no public and private VLANs, they are created automatically for you when creating a Kubernetes cluster with worker nodes provider `classic` for the first time.
If you already have VLANs configured in your account, retrieve them via `ibmcloud ks vlans --zone ${CLUSTER_ZONE}` and include the public and private VLAN ids (set in the `PUBLIC_VLAN_ID` and `PRIVATE_VLAN_ID` environment variables) in the command, for example:
+
+**Note**: If you're starting in a fresh account with no public and private VLANs, they are created automatically for you
+when creating a Kubernetes cluster with the worker node provider `classic` for the first time. If you already have VLANs
+configured in your account, retrieve them via `ibmcloud ks vlans --zone ${CLUSTER_ZONE}` and include the public and
+private VLAN IDs (set in the `PUBLIC_VLAN_ID` and `PRIVATE_VLAN_ID` environment variables) in the command, for example:
 ```shell ibmcloud ks cluster create ${WORKER_NODE_PROVIDER} \
@@ -128,10 +167,11 @@ ibmcloud ks cluster create ${WORKER_NODE_PROVIDER} \ --public-vlan ${PUBLIC_VLAN_ID} ```
-Wait until the cluster is deployed and configured. It can take a while for the cluster to be ready. Run with following command to periodically check the state of your cluster. Your cluster is ready when the state is `normal`.
+Wait until the cluster is deployed and configured. It can take a while for the cluster to be ready. Run the following
+command to periodically check the state of your cluster. Your cluster is ready when the state is `normal`.
 ```shell
-ibmcloud ks clusters --provider ${WORKER_NODE_PROVIDER} |grep ${CLUSTER_NAME}|awk '{print "Name:"$1"\tState:"$3}'
+ibmcloud ks clusters --provider ${WORKER_NODE_PROVIDER} |grep ${CLUSTER_NAME} |awk '{print "Name:"$1"\tState:"$3}'
 ```
 ### Verifying the cluster
@@ -149,3 +189,11 @@ kubectl get nodes ```
 and make sure all the nodes are in `Ready` state.
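The two readiness checks above (cluster state `normal`, all nodes `Ready`) parse fixed columns of CLI output, and can be sketched as small helpers. The command output below is embedded sample text rather than a live cluster, and the cluster ID and node IPs are invented; column positions follow the `ibmcloud ks clusters` and `kubectl get nodes` formats used above:

```shell
# Parse the cluster state (column 3 of `ibmcloud ks clusters`) and count
# Ready nodes (column 2 of `kubectl get nodes`). A real check would pipe
# the live commands instead of these sample strings.
clusters='kubeflow   bs1234567890   normal   25m   1.18.10   2   dal13'
nodes='NAME           STATUS   ROLES    AGE   VERSION
10.176.48.67   Ready    <none>   10m   v1.18.10
10.176.48.79   Ready    <none>   10m   v1.18.10'

state=$(echo "$clusters" | grep 'kubeflow' | awk '{print $3}')
ready_count=$(echo "$nodes" | awk 'NR > 1 && $2 == "Ready"' | wc -l | tr -d ' ')

echo "state=$state ready=$ready_count"
```

Wrapping the state check in a loop (`until [ "$state" = normal ]; do sleep 60; ...; done`) gives the periodic polling the text describes.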
+
+### Delete the cluster
+
+Delete the cluster including its storage:
+
+```shell
+ibmcloud ks cluster rm --force-delete-storage -c ${CLUSTER_NAME}
+```
diff --git a/content/en/docs/ibm/deploy/OWNERS b/content/en/docs/distributions/ibm/deploy/OWNERS
similarity index 100%
rename from content/en/docs/ibm/deploy/OWNERS
rename to content/en/docs/distributions/ibm/deploy/OWNERS
diff --git a/content/en/docs/ibm/deploy/_index.md b/content/en/docs/distributions/ibm/deploy/_index.md
similarity index 100%
rename from content/en/docs/ibm/deploy/_index.md
rename to content/en/docs/distributions/ibm/deploy/_index.md
diff --git a/content/en/docs/ibm/deploy/authentication.md b/content/en/docs/distributions/ibm/deploy/authentication.md
similarity index 100%
rename from content/en/docs/ibm/deploy/authentication.md
rename to content/en/docs/distributions/ibm/deploy/authentication.md
diff --git a/content/en/docs/ibm/deploy/deployment-process.md b/content/en/docs/distributions/ibm/deploy/deployment-process.md
similarity index 100%
rename from content/en/docs/ibm/deploy/deployment-process.md
rename to content/en/docs/distributions/ibm/deploy/deployment-process.md
diff --git a/content/en/docs/ibm/deploy/install-kubeflow-on-IBM-openshift.md b/content/en/docs/distributions/ibm/deploy/install-kubeflow-on-IBM-openshift.md
similarity index 86%
rename from content/en/docs/ibm/deploy/install-kubeflow-on-IBM-openshift.md
rename to content/en/docs/distributions/ibm/deploy/install-kubeflow-on-IBM-openshift.md
index eda79a2d9d..ace3fee8f4 100644
--- a/content/en/docs/ibm/deploy/install-kubeflow-on-IBM-openshift.md
+++ b/content/en/docs/distributions/ibm/deploy/install-kubeflow-on-IBM-openshift.md
@@ -35,7 +35,7 @@ Run the following commands to set up and deploy Kubeflow for a single user witho If you want to use the Kubeflow pipeline with the Argo backend, you can change `CONFIG_URI` to this kfdef instead ```
-https://raw.githubusercontent.com/kubeflow/manifests/master/kfdef/kfctl_openshift.v1.2.0.yaml +https://raw.githubusercontent.com/kubeflow/manifests/v1.2-branch/kfdef/kfctl_openshift.v1.2.0.yaml ``` ```shell @@ -52,12 +52,19 @@ export KF_DIR=${BASE_DIR}/${KF_NAME} # Set the configuration file to use, such as: export CONFIG_FILE=kfctl_ibm.yaml -export CONFIG_URI="https://raw.githubusercontent.com/kubeflow/manifests/master/kfdef/kfctl_openshift.master.kfptekton.yaml" +export CONFIG_URI="https://raw.githubusercontent.com/kubeflow/manifests/master/distributions/kfdef/kfctl_openshift.master.kfptekton.yaml" # Generate Kubeflow: mkdir -p ${KF_DIR} cd ${KF_DIR} -curl -L ${CONFIG_URI} > ${CONFIG_FILE} + +wget ${CONFIG_URI} -O ${CONFIG_FILE} + +# On MacOS +sed -i '' -e 's#https://github.com/kubeflow/manifests/archive/master.tar.gz#https://github.com/kubeflow/manifests/archive/552a4ba84567ed8c0f9abca12f15b8eed000426c.tar.gz#g' ${CONFIG_FILE} + +# On Linux +sed -i -e 's#https://github.com/kubeflow/manifests/archive/master.tar.gz#https://github.com/kubeflow/manifests/archive/552a4ba84567ed8c0f9abca12f15b8eed000426c.tar.gz#g' ${CONFIG_FILE} # Deploy Kubeflow. You can customize the CONFIG_FILE if needed. 
kfctl apply -V -f ${CONFIG_FILE}
diff --git a/content/en/docs/ibm/deploy/install-kubeflow-on-iks.md b/content/en/docs/distributions/ibm/deploy/install-kubeflow-on-iks.md
similarity index 76%
rename from content/en/docs/ibm/deploy/install-kubeflow-on-iks.md
rename to content/en/docs/distributions/ibm/deploy/install-kubeflow-on-iks.md
index 1abd7fdd2f..703316b3af 100644
--- a/content/en/docs/ibm/deploy/install-kubeflow-on-iks.md
+++ b/content/en/docs/distributions/ibm/deploy/install-kubeflow-on-iks.md
@@ -16,6 +16,12 @@ This guide describes how to use the kfctl binary to deploy Kubeflow on IBM Cloud ibmcloud login ```
+  Or, if you have federated credentials, run the following command:
+
+  ```shell
+  ibmcloud login --sso
+  ```
+
 * Create and access a Kubernetes cluster on IKS To deploy Kubeflow on IBM Cloud, you need a cluster running on IKS. If you don't have a cluster running, follow the [Create an IBM Cloud cluster](/docs/ibm/create-cluster) guide.
@@ -28,7 +34,7 @@ This guide describes how to use the kfctl binary to deploy Kubeflow on IBM Cloud Replace `` with your cluster name.
-## IBM Cloud Group ID Storage Setup
+### Storage setup for a `classic` IBM Cloud Kubernetes cluster
 **Note**: This section is only required when the worker nodes provider `WORKER_NODE_PROVIDER` is set to `classic`. For other infrastructures, IBM Cloud Storage with Group ID support is already set up as the cluster's default storage class.
@@ -57,6 +63,22 @@ Therefore, you're recommended to set up the default storage class with Group ID kubectl patch storageclass ${OLD_STORAGE_CLASS} -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}' ```
+### Storage setup for a `vpc-gen2` IBM Cloud Kubernetes cluster
+
+**Note**: To deploy Kubeflow, you don't need to change the storage setup for a `vpc-gen2` Kubernetes cluster.
+
+Currently, there is no option available for setting up RWX (read-write across multiple nodes) storage.
+RWX is not a mandatory requirement to run Kubeflow and most pipelines.
+It is required by certain sample jobs/pipelines where multiple pods write results to a common storage.
+A job or a pipeline can also write to a common object storage like `minio`, so the absence of this feature is
+not a blocker for working with Kubeflow.
+Examples of jobs/pipelines that will not work include
+[Distributed training with tf-operator](https://github.com/kubeflow/tf-operator/tree/master/examples/v1/mnist_with_summaries).
+
+If you are on `vpc-gen2` and still need RWX, you may try the [Portworx enterprise product](https://portworx.com/products/features/).
+To set it up on IBM Cloud, use the [Portworx install with IBM Cloud](https://docs.portworx.com/portworx-install-with-kubernetes/cloud/ibm/) guide.
+
+
 ## Installation Choose either **single user** or **multi-tenant** section based on your usage.
@@ -202,7 +224,7 @@ step 2 accordingly: * `` - fill in the value of secret * `` - fill in the FQDN of Kubeflow, if you don't know yet, just give a dummy one like `localhost`. Then change it after you got one.
-    **Note**: If any of the parameters changed after the initial Kubeflow deployment, you
+    **Note**: If any of the parameters are changed after the initial Kubeflow deployment, you
     will need to manually update these parameters in the secret `appid-application-configuration`. Then, restart authservice by running the command `kubectl rollout restart sts authservice -n istio-system`.
@@ -214,9 +236,40 @@ Check the pod `authservice-0` is in running state in namespace `istio-system`: kubectl get pod authservice-0 -n istio-system ```
-## Next steps
+### Extra network setup requirement for `vpc-gen2` clusters only
+
+**Note**: These steps are not required for `classic` clusters, i.e. where `WORKER_NODE_PROVIDER` is set to `classic`.
+
+A `vpc-gen2` cluster does not assign a public IP address to the Kubernetes master node by default.
+It provides access via a Load Balancer, which is configured to allow only a set of ports over the public internet.
+Access the cluster's resources in a `vpc-gen2` cluster using one of the following options:
+
+* Load Balancer method: To configure via a Load Balancer, go to [Expose the Kubeflow endpoint as a LoadBalancer](#expose-the-kubeflow-endpoint-as-loadbalancer).
+  This method is recommended when you have Kubeflow deployed with [Multi-user, auth-enabled](#multi-user-auth-enabled) support — otherwise it will expose
+  cluster resources to the public.
-To secure the Kubeflow dashboard with HTTPS, follow the steps in [Exposing the Kubeflow dashboard with DNS and TLS termination](../authentication/#exposing-the-kubeflow-dashboard-with-dns-and-tls-termination).
+* Socks proxy method: If you need access to nodes or a NodePort in the `vpc-gen2` cluster, you can start another instance in the
+same `vpc-gen2` cluster and assign it a public IP (that is, a floating IP). Next, use SSH to log into the instance or create an SSH socks proxy,
+  such as `ssh -D9999 root@new-instance-public-ip`.
+
+Then, configure the socks proxy at `localhost:9999` and access cluster services.
+
+* `kubectl port-forward` method: To access the Kubeflow dashboard, run `kubectl -n istio-system port-forward service/istio-ingressgateway 7080:http2`.
+  Then in a browser, go to [http://127.0.0.1:7080/](http://127.0.0.1:7080/)
+
+_**Important notice**: Exposing cluster/compute resources publicly without setting up a proper user authentication mechanism
+is very insecure and can have very serious consequences (even legal ones). If there is no need to expose cluster services publicly,
+the Socks proxy method or the `kubectl port-forward` method is recommended._
+
+## Next steps: secure the Kubeflow dashboard with HTTPS
+
+### Prerequisites
+
+For both `classic` and `vpc-gen2` cluster providers, make sure you have [Multi-user, auth-enabled](#multi-user-auth-enabled) Kubeflow set up.
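The decision between the three access methods above can be captured in a small helper. This is an illustrative sketch only: the method names and the auth flag are invented for the example and are not part of any IBM Cloud or Kubeflow tooling, though the two printed commands come from the text above.

```shell
# Pick an access command for a vpc-gen2 cluster based on the chosen method.
# The LoadBalancer method is refused when multi-user auth is not enabled,
# mirroring the security notice above. Method names and the flag are
# hypothetical; the socks/port-forward commands are quoted from the guide.
access_command() {
  method=$1
  auth_enabled=$2
  case "$method" in
    loadbalancer)
      [ "$auth_enabled" = true ] || { echo "refused: enable multi-user auth first"; return 1; }
      echo "expose istio-ingressgateway as a LoadBalancer" ;;
    socks)
      echo "ssh -D9999 root@new-instance-public-ip" ;;
    port-forward)
      echo "kubectl -n istio-system port-forward service/istio-ingressgateway 7080:http2" ;;
  esac
}

cmd=$(access_command port-forward false)
echo "$cmd"
```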
+
+### Setup
+
+Follow the steps in [Exposing the Kubeflow dashboard with DNS and TLS termination](../authentication/#exposing-the-kubeflow-dashboard-with-dns-and-tls-termination).
 Then, you will have the required DNS name as Kubeflow FQDN to enable the OIDC flow for AppID:
@@ -230,9 +283,10 @@ redirect_url=$(printf https:///login/oidc | base64 -w0) \ kubectl patch secret appid-application-configuration -n istio-system \ -p $(printf '{"data":{"oidcRedirectUrl": "%s"}}' $redirect_url) ```
+
 3. Restart the pod `authservice-0`:
-```SHELL
+```shell
 kubectl rollout restart statefulset authservice -n istio-system ```
@@ -240,7 +294,7 @@ Then, visit `https:///`. The page should redirect you to AppID fo ## Additional information
-You can find general information about Kubeflow configuration in the guide to [configuring Kubeflow with kfctl and kustomize](/docs/other-guides/kustomize/).
+You can find general information about Kubeflow configuration in the guide to [configuring Kubeflow with kfctl and kustomize](/docs/methods/kfctl/kustomize/).
 ## Troubleshooting
@@ -258,3 +312,4 @@ Then, you can locate the LoadBalancer in the **EXTERNAL_IP** column when you run kubectl get svc istio-ingressgateway -n istio-system ```
+There is a small delay, usually ~5 minutes, for the above commands to take effect.
diff --git a/content/en/docs/ibm/deploy/uninstall-kubeflow.md b/content/en/docs/distributions/ibm/deploy/uninstall-kubeflow.md
similarity index 100%
rename from content/en/docs/ibm/deploy/uninstall-kubeflow.md
rename to content/en/docs/distributions/ibm/deploy/uninstall-kubeflow.md
diff --git a/content/en/docs/ibm/iks-e2e.md b/content/en/docs/distributions/ibm/iks-e2e.md
similarity index 98%
rename from content/en/docs/ibm/iks-e2e.md
rename to content/en/docs/distributions/ibm/iks-e2e.md
index 543fd44381..620bf913a4 100644
--- a/content/en/docs/ibm/iks-e2e.md
+++ b/content/en/docs/distributions/ibm/iks-e2e.md
@@ -83,4 +83,4 @@ It's time to get started!
[mnist-data]: http://yann.lecun.com/exdb/mnist/index.html [tensorflow]: https://www.tensorflow.org/ [tf-train]: https://www.tensorflow.org/api_guides/python/train -[tf-serving]: https://www.tensorflow.org/serving/ +[tf-serving]: https://www.tensorflow.org/tfx/guide/serving diff --git a/content/en/docs/ibm/kfp-tekton.png b/content/en/docs/distributions/ibm/kfp-tekton.png similarity index 100% rename from content/en/docs/ibm/kfp-tekton.png rename to content/en/docs/distributions/ibm/kfp-tekton.png diff --git a/content/en/docs/ibm/pipelines.md b/content/en/docs/distributions/ibm/pipelines.md similarity index 91% rename from content/en/docs/ibm/pipelines.md rename to content/en/docs/distributions/ibm/pipelines.md index b977935f6b..2c3935b4bf 100644 --- a/content/en/docs/ibm/pipelines.md +++ b/content/en/docs/distributions/ibm/pipelines.md @@ -38,15 +38,18 @@ def echo_pipeline( * You will be using the Kubeflow Pipelines with Tekton SDK ([`kfp-tekton`](https://pypi.org/project/kfp-tekton/)) v0.4.0 or above. 
* If you have deployed Kubeflow on IBM Cloud using the [`kfctl_ibm.v1.2.0.yaml`](https://raw.githubusercontent.com/kubeflow/manifests/v1.2-branch/kfdef/kfctl_ibm.v1.2.0.yaml) -manifest you can configure ([`kfp-tekton`](https://pypi.org/project/kfp-tekton/)) SDK for a single user as follows: +manifest you can configure ([`kfp-tekton`](https://pypi.org/project/kfp-tekton/)) SDK to list all your Kubeflow Pipelines experiments as follows: ```python from kfp_tekton import TektonClient -KUBEFLOW_PUBLIC_ENDPOINT_URL = 'http://' +KUBEFLOW_PUBLIC_ENDPOINT_URL = 'http://' KUBEFLOW_PROFILE_NAME = None -client = TektonClient(host=KUBEFLOW_PUBLIC_ENDPOINT_URL) +client = TektonClient(host=f'{KUBEFLOW_PUBLIC_ENDPOINT_URL}/pipeline') + +experiments = client.list_experiments(namespace=KUBEFLOW_PROFILE_NAME) ``` +**Note**: `` is the EXTERNAL_IP you exposed as a LoadBalancer following [`this instruction`](https://www.kubeflow.org/docs/ibm/deploy/install-kubeflow-on-iks/#expose-the-kubeflow-endpoint-as-a-loadbalancer). If you have not done that step during Kubeflow setup, please include port 31380 because the Kubeflow endpoint is exposed with NodePort 31380. ## 2. 
Authenticating multi-user Kubeflow Pipelines with the SDK diff --git a/content/en/docs/ibm/using-icr.md b/content/en/docs/distributions/ibm/using-icr.md similarity index 100% rename from content/en/docs/ibm/using-icr.md rename to content/en/docs/distributions/ibm/using-icr.md diff --git a/content/en/docs/started/k8s/OWNERS b/content/en/docs/distributions/kfctl/OWNERS similarity index 82% rename from content/en/docs/started/k8s/OWNERS rename to content/en/docs/distributions/kfctl/OWNERS index 50ea12679e..a960159db9 100644 --- a/content/en/docs/started/k8s/OWNERS +++ b/content/en/docs/distributions/kfctl/OWNERS @@ -3,4 +3,4 @@ approvers: - yanniszark reviewers: - - 8bitmp3 + - 8bitmp3 \ No newline at end of file diff --git a/content/en/docs/distributions/kfctl/_index.md b/content/en/docs/distributions/kfctl/_index.md new file mode 100644 index 0000000000..dd2d8009f0 --- /dev/null +++ b/content/en/docs/distributions/kfctl/_index.md @@ -0,0 +1,5 @@ ++++ +title = "kfctl" +description = "How to use the kfctl method for Kubeflow" +weight = 10 ++++ diff --git a/content/en/docs/started/k8s/kfctl-k8s-istio.md b/content/en/docs/distributions/kfctl/deployment.md similarity index 98% rename from content/en/docs/started/k8s/kfctl-k8s-istio.md rename to content/en/docs/distributions/kfctl/deployment.md index f48c217a1a..b7349e649d 100644 --- a/content/en/docs/started/k8s/kfctl-k8s-istio.md +++ b/content/en/docs/distributions/kfctl/deployment.md @@ -116,7 +116,7 @@ deploy Kubeflow: ``` 1. Edit the configuration files, as described in the guide to - [customizing your Kubeflow deployment](/docs/other-guides/kustomize/). + [customizing your Kubeflow deployment](/docs/methods/kfctl/kustomize/). 1. Set an environment variable pointing to your local configuration file: @@ -169,7 +169,7 @@ and directories: which you can further customize if necessary. * **kustomize** is a directory that contains the kustomize packages for Kubeflow applications. 
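The `kubeflow-apps` requirement for the stack feature described above can be sanity-checked with plain text tools before running `kfctl`. The KfDef fragment below is a hypothetical minimal example written for this check, not one of the real manifests from the kubeflow/manifests repository:

```shell
# Check that a KfDef manifest declares an application named kubeflow-apps,
# which the stack feature requires. The manifest text is a made-up minimal
# fragment for illustration.
kfdef='apiVersion: kfdef.apps.kubeflow.org/v1
kind: KfDef
spec:
  applications:
    - name: kubeflow-apps'

if echo "$kfdef" | grep -q 'name: kubeflow-apps'; then
  echo "stack application present"
else
  echo "missing kubeflow-apps application"
fi
```

The same `grep` can be pointed at a downloaded `${CONFIG_FILE}` instead of the embedded string.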
See - [how Kubeflow uses kustomize](/docs/other-guides/kustomize/). + [how Kubeflow uses kustomize](/docs/methods/kfctl/kustomize/). * The directory is created when you run `kfctl build` or `kfctl apply`. * You can customize the Kubernetes resources by modifying the manifests and diff --git a/content/en/docs/other-guides/kustomize.md b/content/en/docs/distributions/kfctl/kustomize.md similarity index 99% rename from content/en/docs/other-guides/kustomize.md rename to content/en/docs/distributions/kfctl/kustomize.md index 8518020c8e..3fa9a53e20 100644 --- a/content/en/docs/other-guides/kustomize.md +++ b/content/en/docs/distributions/kfctl/kustomize.md @@ -43,7 +43,7 @@ The kfctl deployment process includes the following commands: When you install Kubeflow, the deployment process uses one of a few possible YAML configuration files to bootstrap the configuration. You can see all the [configuration files on -GitHub](https://github.com/kubeflow/manifests/tree/master/kfdef). +GitHub](https://github.com/kubeflow/manifests/tree/master/distributions/kfdef). You can also compose your own configuration file with components and applications of your choice. Starting with Kubeflow 1.1 release, the _KfDef_ manifest also supports using _stack_ to declare a specific stack of applications. To use this new feature, the manifest should contain one application with `kubeflow-apps` name. 
diff --git a/content/en/docs/started/workstation/minikube-linux.md b/content/en/docs/distributions/kfctl/minikube.md similarity index 100% rename from content/en/docs/started/workstation/minikube-linux.md rename to content/en/docs/distributions/kfctl/minikube.md diff --git a/content/en/docs/started/k8s/kfctl-istio-dex.md b/content/en/docs/distributions/kfctl/multi-user.md similarity index 99% rename from content/en/docs/started/k8s/kfctl-istio-dex.md rename to content/en/docs/distributions/kfctl/multi-user.md index 8b6634b99a..9fbd59154d 100644 --- a/content/en/docs/started/k8s/kfctl-istio-dex.md +++ b/content/en/docs/distributions/kfctl/multi-user.md @@ -156,7 +156,7 @@ deploy Kubeflow: ``` 1. Edit the configuration files, as described in the guide to - [customizing your Kubeflow deployment](/docs/other-guides/kustomize/). + [customizing your Kubeflow deployment](/docs/methods/kfctl/kustomize/). 1. Set an environment variable pointing to your local configuration file: @@ -791,7 +791,7 @@ directories: * **kustomize** is a directory that contains the kustomize packages for Kubeflow applications. See - [how Kubeflow uses kustomize](/docs/other-guides/kustomize/). + [how Kubeflow uses kustomize](/docs/methods/kfctl/kustomize/). * The directory is created when you run `kfctl build` or `kfctl apply`. * You can customize the Kubernetes resources by modifying the manifests and diff --git a/content/en/docs/started/k8s/overview.md b/content/en/docs/distributions/kfctl/overview.md similarity index 94% rename from content/en/docs/started/k8s/overview.md rename to content/en/docs/distributions/kfctl/overview.md index ff421fc9ce..342e1fdfde 100644 --- a/content/en/docs/started/k8s/overview.md +++ b/content/en/docs/distributions/kfctl/overview.md @@ -32,7 +32,7 @@ Your Kubernetes cluster must meet the following minimum requirements: with a [dynamic volume provisioner](https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/). 
For more information, refer to [this - guide](https://www.kubeflow.org/docs/started/k8s/kfctl-k8s-istio/#before-you-start). + guide](https://www.kubeflow.org/docs/methods/kfctl/deployment/#before-you-start).
@@ -176,13 +176,13 @@ governed by consensus within the Kubeflow community. - + - +
kfctl_k8s_istio.yaml This config creates a vanilla deployment of Kubeflow with all its core components without any external dependencies. The deployment can be customized based on your environment needs.
Follow instructions: Kubeflow Deployment with kfctl_k8s_istio
This config creates a vanilla deployment of Kubeflow with all its core components without any external dependencies. The deployment can be customized based on your environment needs.
Follow instructions: Kubeflow Deployment with kfctl_k8s_istio
kfctl_istio_dex.yaml This config creates a Kubeflow deployment with all its core components, and uses Dex and Istio for vendor-neutral authentication.
Follow instructions: Multi-user, auth-enabled Kubeflow with kfctl_istio_dex
This config creates a Kubeflow deployment with all its core components, and uses Dex and Istio for vendor-neutral authentication.
Follow instructions: Multi-user, auth-enabled Kubeflow with kfctl_istio_dex
diff --git a/content/en/docs/distributions/microk8s/OWNERS b/content/en/docs/distributions/microk8s/OWNERS
new file mode 100644
index 0000000000..6dfd91c715
--- /dev/null
+++ b/content/en/docs/distributions/microk8s/OWNERS
@@ -0,0 +1,7 @@
+approvers:
+  - RFMVasconcelos
+  - knkski
+
+reviewers:
+  - DomFleischmann
+  - 8bitmp3
diff --git a/content/en/docs/distributions/microk8s/_index.md b/content/en/docs/distributions/microk8s/_index.md
new file mode 100644
index 0000000000..ba8aaf2c5c
--- /dev/null
+++ b/content/en/docs/distributions/microk8s/_index.md
@@ -0,0 +1,5 @@
++++
+title = "MicroK8s Kubeflow add-on"
+description = "Running Kubeflow on MicroK8s"
+weight = 50
++++
diff --git a/content/en/docs/started/workstation/kubeflow-on-microk8s.md b/content/en/docs/distributions/microk8s/kubeflow-on-microk8s.md
similarity index 97%
rename from content/en/docs/started/workstation/kubeflow-on-microk8s.md
rename to content/en/docs/distributions/microk8s/kubeflow-on-microk8s.md
index 9ef3a8c6f4..6f32da5764 100644
--- a/content/en/docs/started/workstation/kubeflow-on-microk8s.md
+++ b/content/en/docs/distributions/microk8s/kubeflow-on-microk8s.md
@@ -1,7 +1,7 @@
 +++
 title = "Kubeflow on MicroK8s"
-description = "Run Kubeflow locally on built-in hypervisors with MicroK8s"
-weight = 60
+description = "Run Kubeflow on MicroK8s with built-in Kubeflow add-on"
+weight = 10
 +++
diff --git a/content/en/docs/distributions/minikf/_index.md b/content/en/docs/distributions/minikf/_index.md
new file mode 100644
index 0000000000..8072460e9c
--- /dev/null
+++ b/content/en/docs/distributions/minikf/_index.md
@@ -0,0 +1,5 @@
++++
+title = "MiniKF"
+description = "Running Kubeflow with MiniKF"
+weight = 50
++++
diff --git a/content/en/docs/started/workstation/getting-started-minikf.md b/content/en/docs/distributions/minikf/getting-started-minikf.md
similarity index 98%
rename from content/en/docs/started/workstation/getting-started-minikf.md
rename to content/en/docs/distributions/minikf/getting-started-minikf.md
index 124c7d464f..78b0598538 100644
--- a/content/en/docs/started/workstation/getting-started-minikf.md
+++ b/content/en/docs/distributions/minikf/getting-started-minikf.md
@@ -1,5 +1,5 @@
 +++
-title = "MiniKF"
+title = "MiniKF on laptop/desktop"
 description = "A fast and easy way to deploy Kubeflow on your laptop"
 weight = 35
diff --git a/content/en/docs/started/workstation/minikf-aws.md b/content/en/docs/distributions/minikf/minikf-aws.md
similarity index 98%
rename from content/en/docs/started/workstation/minikf-aws.md
rename to content/en/docs/distributions/minikf/minikf-aws.md
index 237e067804..6dec027555 100644
--- a/content/en/docs/started/workstation/minikf-aws.md
+++ b/content/en/docs/distributions/minikf/minikf-aws.md
@@ -1,6 +1,6 @@
 +++
-title = "Deploy Kubeflow using MiniKF on AWS"
-description = "Deploy Kubeflow with MiniKF (mini Kubeflow) via AWS"
+title = "MiniKF on AWS Marketplace"
+description = "Deploy Kubeflow with MiniKF via AWS Marketplace"
 weight = 40
 +++
diff --git a/content/en/docs/started/workstation/minikf-gcp.md b/content/en/docs/distributions/minikf/minikf-gcp.md
similarity index 97%
rename from content/en/docs/started/workstation/minikf-gcp.md
rename to content/en/docs/distributions/minikf/minikf-gcp.md
index d14ba13b4c..68798295b6 100644
--- a/content/en/docs/started/workstation/minikf-gcp.md
+++ b/content/en/docs/distributions/minikf/minikf-gcp.md
@@ -1,6 +1,6 @@
 +++
-title = "Deploy Kubeflow using MiniKF on Google Cloud"
-description = "Deploy Kubeflow with MiniKF (mini Kubeflow) via Google Cloud Marketplace"
+title = "MiniKF on GCP Marketplace"
+description = "Deploy Kubeflow with MiniKF via GCP Marketplace"
 weight = 40
 +++
diff --git a/content/en/docs/openshift/OWNERS b/content/en/docs/distributions/openshift/OWNERS
similarity index 100%
rename from content/en/docs/openshift/OWNERS
rename to content/en/docs/distributions/openshift/OWNERS
diff --git a/content/en/docs/openshift/_index.md b/content/en/docs/distributions/openshift/_index.md
similarity index 100%
rename from content/en/docs/openshift/_index.md
rename to content/en/docs/distributions/openshift/_index.md
diff --git a/content/en/docs/openshift/install-kubeflow.md b/content/en/docs/distributions/openshift/install-kubeflow.md
similarity index 100%
rename from content/en/docs/openshift/install-kubeflow.md
rename to content/en/docs/distributions/openshift/install-kubeflow.md
diff --git a/content/en/docs/openshift/uninstall-kubeflow.md b/content/en/docs/distributions/openshift/uninstall-kubeflow.md
similarity index 100%
rename from content/en/docs/openshift/uninstall-kubeflow.md
rename to content/en/docs/distributions/openshift/uninstall-kubeflow.md
diff --git a/content/en/docs/operator/OWNERS b/content/en/docs/distributions/operator/OWNERS
similarity index 100%
rename from content/en/docs/operator/OWNERS
rename to content/en/docs/distributions/operator/OWNERS
diff --git a/content/en/docs/operator/_index.md b/content/en/docs/distributions/operator/_index.md
similarity index 100%
rename from content/en/docs/operator/_index.md
rename to content/en/docs/distributions/operator/_index.md
diff --git a/content/en/docs/operator/install-kubeflow.md b/content/en/docs/distributions/operator/install-kubeflow.md
similarity index 93%
rename from content/en/docs/operator/install-kubeflow.md
rename to content/en/docs/distributions/operator/install-kubeflow.md
index d89bcfd05d..62901ce48e 100644
--- a/content/en/docs/operator/install-kubeflow.md
+++ b/content/en/docs/distributions/operator/install-kubeflow.md
@@ -4,11 +4,11 @@ description = "Instructions for Kubeflow deployment with Kubeflow Operator"
 weight = 10
 +++
 
-This guide describes how to use the Kubeflow Operator to deploy Kubeflow. As mentioned in the Operator [introduction](/docs/operator/introduction.md), the Operator also allows you to monitor and manage the Kubeflow installation beyond the initial installation.
+This guide describes how to use the Kubeflow Operator to deploy Kubeflow. As mentioned in the Operator [introduction](/docs/methods/operator/introduction.md), the Operator also allows you to monitor and manage the Kubeflow installation beyond the initial installation.
 
 ## Prerequisites
 
-* Kubeflow Operator needs to be deployed on your cluster for rest of steps to work. Please follow the [`Install the Kubeflow Operator`](/docs/operator/install-operator) guide to install the Kubeflow Operator
+* Kubeflow Operator needs to be deployed on your cluster for rest of steps to work. Please follow the [`Install the Kubeflow Operator`](/docs/methods/operator/install-operator) guide to install the Kubeflow Operator
 
 ## Deployment Instructions
diff --git a/content/en/docs/operator/install-operator.md b/content/en/docs/distributions/operator/install-operator.md
similarity index 100%
rename from content/en/docs/operator/install-operator.md
rename to content/en/docs/distributions/operator/install-operator.md
diff --git a/content/en/docs/operator/introduction.md b/content/en/docs/distributions/operator/introduction.md
similarity index 98%
rename from content/en/docs/operator/introduction.md
rename to content/en/docs/distributions/operator/introduction.md
index d1da770507..3c85b1c4ff 100644
--- a/content/en/docs/operator/introduction.md
+++ b/content/en/docs/distributions/operator/introduction.md
@@ -76,7 +76,7 @@ The operator responds to following events:
 * When any resource deployed as part of a _KfDef_ instance is deleted, the operator's _reconciler_ will be notified of the event and invoke the `Apply` functions provided by the [`kfctl` package](https://github.com/kubeflow/kfctl/tree/master/pkg) to re-deploy the Kubeflow. The deleted resource will be recreated with the same manifest as specified when the _KfDef_ instance is created.
 
-Deploying Kubeflow with the Kubeflow Operator includes two steps: [installing the Kubeflow Operator](/docs/operator/install-operator) followed by [deploying](/docs/operator/deploy/operator) the KfDef custom resource.
+Deploying Kubeflow with the Kubeflow Operator includes two steps: [installing the Kubeflow Operator](/docs/methods/operator/install-operator) followed by [deploying](/docs/methods/operator/deploy/operator) the KfDef custom resource.
 
 ## Current Tested Operators and Pre-built Images
diff --git a/content/en/docs/operator/troubleshooting.md b/content/en/docs/distributions/operator/troubleshooting.md
similarity index 100%
rename from content/en/docs/operator/troubleshooting.md
rename to content/en/docs/distributions/operator/troubleshooting.md
diff --git a/content/en/docs/operator/uninstall-kubeflow.md b/content/en/docs/distributions/operator/uninstall-kubeflow.md
similarity index 85%
rename from content/en/docs/operator/uninstall-kubeflow.md
rename to content/en/docs/distributions/operator/uninstall-kubeflow.md
index 7ac9bca793..5200fbf60b 100644
--- a/content/en/docs/operator/uninstall-kubeflow.md
+++ b/content/en/docs/distributions/operator/uninstall-kubeflow.md
@@ -12,4 +12,4 @@ To delete the Kubeflow deployment, simply delete the KfDef custom resource from
 kubectl delete kfdef ${KUBEFLOW_DEPLOYMENT_NAME} -n ${KUBEFLOW_NAMESPACE}
 ```
-Note: ${KUBEFLOW_DEPLOYMENT_NAME} and ${KUBEFLOW_NAMESPACE} are defined in the [Installing Kubeflow](/docs/operator/install-kubeflow) guide.
+Note: ${KUBEFLOW_DEPLOYMENT_NAME} and ${KUBEFLOW_NAMESPACE} are defined in the [Installing Kubeflow](/docs/methods/operator/install-kubeflow) guide.
diff --git a/content/en/docs/operator/uninstall-operator.md b/content/en/docs/distributions/operator/uninstall-operator.md
similarity index 100%
rename from content/en/docs/operator/uninstall-operator.md
rename to content/en/docs/distributions/operator/uninstall-operator.md
diff --git a/content/en/docs/other-guides/usage-reporting.md b/content/en/docs/other-guides/usage-reporting.md
index 55452227ef..8d38d9e92b 100644
--- a/content/en/docs/other-guides/usage-reporting.md
+++ b/content/en/docs/other-guides/usage-reporting.md
@@ -54,9 +54,9 @@ To prevent Spartakus from being deployed:
 1. Follow your chosen guide to deploying Kubeflow, but stop before you deploy
    Kubeflow. For example, see the guide to
-   [deploying Kubeflow with kfctl_k8s_istio](/docs/started/k8s/kfctl-k8s-istio/).
+   [deploying Kubeflow with kfctl_k8s_istio](/docs/methods/kfctl/deployment).
 1. When you reach the
-   [setup and deploy step](/docs/started/k8s/kfctl-k8s-istio/#alt-set-up-and-deploy),
+   [setup and deploy step](/docs/methods/kfctl/deployment#alt-set-up-and-deploy),
    **skip the `kfctl apply` command** and run the **`kfctl build`** command
    instead, as described in the above guide. Now you can edit the configuration
    files before deploying Kubeflow.
diff --git a/content/en/docs/reference/images.md b/content/en/docs/reference/images.md
index f17eab8f13..b02e4e0c71 100644
--- a/content/en/docs/reference/images.md
+++ b/content/en/docs/reference/images.md
@@ -32,7 +32,7 @@ weight = 10
 | kubeflowkatib/darts-cnn-cifar10 | |
 | kubeflow/mpi-horovod-mnist | |
 | datawire/ambassador | |
-| tensorflow-1.13.1-notebook-cpu | |
-| jupyter-web-app | |
+| jupyter-tensorflow-full | |
+| jupyter | |
 | profile-controller | |
 | notebook-controller | |
diff --git a/content/en/docs/reference/version-policy.md b/content/en/docs/reference/version-policy.md
index c2fddc88a2..a15a748881 100644
--- a/content/en/docs/reference/version-policy.md
+++ b/content/en/docs/reference/version-policy.md
@@ -239,7 +239,7 @@ one of the following Kubeflow SDKs and command-line interfaces
 0.7.1
-kfctl
+kfctl (GitHub )
 Stable
diff --git a/content/en/docs/started/getting-started.md b/content/en/docs/started/getting-started.md
index 6f60bea661..b8fd8a50eb 100644
--- a/content/en/docs/started/getting-started.md
+++ b/content/en/docs/started/getting-started.md
@@ -106,11 +106,11 @@ The matrix is therefore an alternative way of accessing the information
 in the
 Existing Kubernetes cluster using a standard Kubeflow installation
- Docs
+ Docs
 Existing Kubernetes cluster using Dex for authentication
- Docs
+ Docs
 Amazon Web Services (AWS) using the standard setup
diff --git a/content/en/docs/started/k8s/_index.md b/content/en/docs/started/k8s/_index.md
deleted file mode 100644
index 891d9536f7..0000000000
--- a/content/en/docs/started/k8s/_index.md
+++ /dev/null
@@ -1,5 +0,0 @@
-+++
-title = "Kubernetes Installation"
-description = "Instructions for installing Kubeflow on an existing Kubernetes cluster"
-weight = 40
-+++
diff --git a/content/en/docs/started/k8s/kfctl-existing-arrikto.md b/content/en/docs/started/k8s/kfctl-existing-arrikto.md
deleted file mode 100644
index cae990ed1b..0000000000
--- a/content/en/docs/started/k8s/kfctl-existing-arrikto.md
+++ /dev/null
@@ -1,13 +0,0 @@
-+++
-title = "Multi-user, auth-enabled Kubeflow with kfctl_existing_arrikto"
-description = "Migration from kfctl_existing_arrikto.yaml config"
-weight = 4
-page_hide = true
-
-+++
-{{% alert title="Out of date" color="warning" %}}
-This guide contains outdated information pertaining to Kubeflow 1.0. This guide
-needs to be updated for Kubeflow 1.1.
-{{% /alert %}}
-
-If you were using the `kfctl_existing_arrikto` configuration in Kubeflow v0.7 or earlier, you should use {{% config-file-istio-dex %}} in Kubeflow {{% kf-latest-version %}}. Follow the instructions in the [guide to `kfctl_istio_dex`](/docs/started/k8s/kfctl-istio-dex/).
diff --git a/content/en/docs/started/workstation/getting-started-linux.md b/content/en/docs/started/workstation/getting-started-linux.md
index 0521c1eccd..e626aae6e8 100644
--- a/content/en/docs/started/workstation/getting-started-linux.md
+++ b/content/en/docs/started/workstation/getting-started-linux.md
@@ -38,7 +38,7 @@ to install Kubeflow.
 - Install [Minikube](https://kubernetes.io/docs/setup/learning-environment/minikube/)
 
-Follow the instructions on [Kubeflow on MiniKube](/docs/started/workstation/minikube-linux/) to complete this path.
+Follow the instructions on [Kubeflow on MiniKube](/docs/started/methods/kfctl/minikube/) to complete this path.
 
 ### Multipass
diff --git a/content/en/docs/started/workstation/getting-started-macos.md b/content/en/docs/started/workstation/getting-started-macos.md
index 3740ede72f..54b40dee56 100644
--- a/content/en/docs/started/workstation/getting-started-macos.md
+++ b/content/en/docs/started/workstation/getting-started-macos.md
@@ -38,7 +38,7 @@ to install Kubeflow.
 - Install [Minikube](https://kubernetes.io/docs/setup/learning-environment/minikube/)
 
-Follow the instructions on [Kubeflow on MiniKube](/docs/started/workstation/minikube-linux/) to complete this path.
+Follow the instructions on [Kubeflow on MiniKube](/docs/started/methods/kfctl/minikube) to complete this path.
 
 ### Multipass
diff --git a/content/en/docs/started/workstation/getting-started-windows.md b/content/en/docs/started/workstation/getting-started-windows.md
index fc16b024fd..b2811fba22 100644
--- a/content/en/docs/started/workstation/getting-started-windows.md
+++ b/content/en/docs/started/workstation/getting-started-windows.md
@@ -38,7 +38,7 @@ to install Kubeflow.
 - Install [Minikube](https://kubernetes.io/docs/setup/learning-environment/minikube/)
 
-Follow the instructions on [Kubeflow on MiniKube](/docs/started/workstation/minikube-linux/) to complete this path.
+Follow the instructions on [Kubeflow on MiniKube](/docs/started/methods/kfctl/minikube) to complete this path.
 
 ### Multipass
diff --git a/content/en/serving.svg b/content/en/serving.svg
index 86af3a4ec6..995b6fa428 100644
--- a/content/en/serving.svg
+++ b/content/en/serving.svg
@@ -62,7 +62,7 @@ transform="translate(194.55005,-307.89663)">
diff --git a/layouts/shortcodes/pipelines/OWNERS b/layouts/shortcodes/pipelines/OWNERS
index a3b176e22f..72637b84e9 100644
--- a/layouts/shortcodes/pipelines/OWNERS
+++ b/layouts/shortcodes/pipelines/OWNERS
@@ -1,13 +1,11 @@
 approvers:
   - Ark-kun
   - Bobgy
-  - gaoning777
+  - capri-xiyue
   - hongye-sun
   - IronPan
-  - jingzhang36
   - neuromage
-  - numerology
   - paveldournov
-  - rmgogogo
+  - zijianjoy
 reviewers:
   - Bobgy