From 19a43322a9b1c4e8edfbca6824ce5f2084c75fc3 Mon Sep 17 00:00:00 2001
From: anakinxc <103552181+anakinxc@users.noreply.github.com>
Date: Tue, 19 Dec 2023 18:07:52 +0800
Subject: [PATCH] Update README.md

---
 examples/python/ml/flax_llama7b/README.md | 22 ----------------------
 1 file changed, 22 deletions(-)

diff --git a/examples/python/ml/flax_llama7b/README.md b/examples/python/ml/flax_llama7b/README.md
index 949947dc..ab1f5a90 100644
--- a/examples/python/ml/flax_llama7b/README.md
+++ b/examples/python/ml/flax_llama7b/README.md
@@ -33,28 +33,6 @@ This example demonstrates how to use SPU to run secure inference on a pre-traine
    parser.add_argument("--streaming", action="store_true", default=False, help="whether is model weight saved stream format",)
    ```

-   Since EasyLM have an issue,so we have to make a samll change to support the option "streaming=false".
-   Open and edit "convert_hf_to_easylm.py", chang this:
-
-   ```python
-   parser.add_argument("--streaming", action="store_true", default=True, help="whether is model weight saved stream format",)
-   ```
-
-   to:
-
-   ```python
-   parser.add_argument("--streaming", action="store_true", default=False, help="whether is model weight saved stream format",)
-   ```
-
-   Since EasyLM have an issue,so we have to make a samll change to support the option "streaming=false".
-   Open and edit "convert_hf_to_easylm.py", chang this:
-   ```python
-   parser.add_argument("--streaming", action="store_true", default=True, help="whether is model weight saved stream format",)
-   ```
-   to:
-   ```python
-   parser.add_argument("--streaming", action="store_true", default=False, help="whether is model weight saved stream format",)
-   ```
 Download trained LLaMA-B[PyTroch-Version] from [Hugging Face](https://huggingface.co/openlm-research/open_llama_7b) , and convert it to Flax.msgpack as:

 ```sh