ktou committed · verified
Commit 15a17ae · 1 Parent(s): f88c905

Transfer owner

Files changed (1): README.md (+5 −5)
README.md CHANGED
@@ -5952,7 +5952,7 @@ model-index:
   value: 78.62997658079624
 ---
 
-# ktou/multilingual-e5-large-Q4_K_M-GGUF
+# groonga/multilingual-e5-large-Q4_K_M-GGUF
 This model was converted to GGUF format from [`intfloat/multilingual-e5-large`](https://huggingface.co/intfloat/multilingual-e5-large) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
 Refer to the [original model card](https://huggingface.co/intfloat/multilingual-e5-large) for more details on the model.
 
@@ -5967,12 +5967,12 @@ Invoke the llama.cpp server or the CLI.
 
 ### CLI:
 ```bash
-llama-cli --hf-repo ktou/multilingual-e5-large-Q4_K_M-GGUF --hf-file multilingual-e5-large-q4_k_m.gguf -p "The meaning to life and the universe is"
+llama-cli --hf-repo groonga/multilingual-e5-large-Q4_K_M-GGUF --hf-file multilingual-e5-large-q4_k_m.gguf -p "The meaning to life and the universe is"
 ```
 
 ### Server:
 ```bash
-llama-server --hf-repo ktou/multilingual-e5-large-Q4_K_M-GGUF --hf-file multilingual-e5-large-q4_k_m.gguf -c 2048
+llama-server --hf-repo groonga/multilingual-e5-large-Q4_K_M-GGUF --hf-file multilingual-e5-large-q4_k_m.gguf -c 2048
 ```
 
 Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
@@ -5989,9 +5989,9 @@ cd llama.cpp && LLAMA_CURL=1 make
 
 Step 3: Run inference through the main binary.
 ```
-./llama-cli --hf-repo ktou/multilingual-e5-large-Q4_K_M-GGUF --hf-file multilingual-e5-large-q4_k_m.gguf -p "The meaning to life and the universe is"
+./llama-cli --hf-repo groonga/multilingual-e5-large-Q4_K_M-GGUF --hf-file multilingual-e5-large-q4_k_m.gguf -p "The meaning to life and the universe is"
 ```
 or
 ```
-./llama-server --hf-repo ktou/multilingual-e5-large-Q4_K_M-GGUF --hf-file multilingual-e5-large-q4_k_m.gguf -c 2048
+./llama-server --hf-repo groonga/multilingual-e5-large-Q4_K_M-GGUF --hf-file multilingual-e5-large-q4_k_m.gguf -c 2048
 ```
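
A note on the commands in this diff: `intfloat/multilingual-e5-large` is an embedding model, not a text-generation model, so the `-p "The meaning to life..."` invocations are really just smoke tests from the GGUF-my-repo template. Below is a minimal sketch of an embedding workflow instead, assuming the repo/file names from the diff, the server's default host/port (`127.0.0.1:8080`), and the `query: `/`passage: ` input prefixes that the original E5 model card calls for:

```bash
# Start llama-server in embedding mode (--embedding disables generation
# and enables the embeddings endpoints; context size kept small since
# E5 inputs are short).
llama-server --hf-repo groonga/multilingual-e5-large-Q4_K_M-GGUF \
  --hf-file multilingual-e5-large-q4_k_m.gguf \
  --embedding -c 512

# Query the OpenAI-compatible embeddings endpoint. Default host/port
# assumed; the "query: " prefix follows the E5 usage convention.
curl http://127.0.0.1:8080/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"input": "query: how much protein should a female eat"}'
```

llama.cpp also ships a `llama-embedding` example binary that should accept the same `--hf-repo`/`--hf-file` flags for one-off embedding runs without a server.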