Simplifying README, adding library mode example #393
base: main
Conversation
@@ -37,322 +37,137 @@ A service that:
- Runs a static pipeline or fixed set of operations on every submitted document.
- Acts as a wrapper for any specific document parsing library.

For production level performance and scalability, we recommend deploying the pipeline and supporting NIMs via docker-compose or kubernetes (via the provided helm charts).
Suggested change:
- For production level performance and scalability, we recommend deploying the pipeline and supporting NIMs via docker-compose or kubernetes (via the provided helm charts).
+ For production-level performance and scalability, we recommend that you deploy the pipeline and supporting NIMs by using Docker Compose or Kubernetes. You can use the helm charts provided.
Also, can you link from here to the helm charts?
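For reference, the Docker Compose path that this passage recommends typically looks like the following. This is a minimal sketch only: the repository URL and the compose file's service layout are assumptions, not verified against the current nv-ingest repository, so check the project's own README and compose file before relying on it.

```shell
# Sketch only: repository URL and compose behavior are assumptions,
# not verified against the current nv-ingest repository.
git clone https://github.com/NVIDIA/nv-ingest.git
cd nv-ingest

# Start the pipeline and supporting NIM containers in the background
docker compose up -d

# Verify that the services are running
docker compose ps
```

For Kubernetes deployments, the equivalent step would be installing the provided helm charts with `helm install`, with values appropriate to your cluster.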
## Prerequisites

For hardware and software pre-requisites for container and kubernetes (helm) based deployments, please find [our comprehensive doc site](https://docs.nvidia.com/nv-ingest/user-guide/getting-started/prerequisites/).
Suggested change:
- For hardware and software pre-requisites for container and kubernetes (helm) based deployments, please find [our comprehensive doc site](https://docs.nvidia.com/nv-ingest/user-guide/getting-started/prerequisites/).
+ For hardware and software requirements for container- and Kubernetes-based deployments, refer to [Prerequisites](https://docs.nvidia.com/nv-ingest/user-guide/getting-started/prerequisites/).
### Software

To facilitate an easier evaluation experience, and for small scale (<100 PDFs) workloads, you can use our "library mode" setup, which depends on NIMs either already self hosted, or, by default, NIMs hosted on build.nvidia.com.
Suggested change:
- To facilitate an easier evaluation experience, and for small scale (<100 PDFs) workloads, you can use our "library mode" setup, which depends on NIMs either already self hosted, or, by default, NIMs hosted on build.nvidia.com.
+ For easier evaluation and for small-scale workloads, such as fewer than 100 PDFs, you can use the library mode setup. Library mode depends on NIMs that are already self-hosted or, by default, NIMs that are hosted on build.nvidia.com.
Co-authored-by: nkmcalli <[email protected]>