llama-3-chinese-8b-instruct-v3-gguf

Author: hfl
Downloads: 235,780
Likes: 73
License: Apache 2.0
Created: May 28, 2024
Last Modified: Jun 6, 2024

Llama-3-Chinese-8B-Instruct-v3-GGUF

[👉👉👉 Chat with Llama-3-Chinese-8B-Instruct-v3 @ HF Space]

This repository contains Llama-3-Chinese-8B-Instruct-v3-GGUF, the quantized GGUF version of Llama-3-Chinese-8B-Instruct-v3, compatible with llama.cpp, Ollama, text-generation-webui (tgw), and similar tools.

Note: this is an instruction-tuned (chat) model, intended for conversation, QA, and similar tasks.
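
As a quick illustration, below is a minimal sketch of loading one of the GGUF files with the llama-cpp-python bindings and running a single chat turn. The model file name, `chat_format` value, and generation parameters are assumptions for the example; substitute the actual quant file you download from this repository.

```python
# Minimal sketch, assuming llama-cpp-python is installed (pip install llama-cpp-python)
# and a quant file from this repo has been downloaded locally.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3-chinese-8b-instruct-v3-q4_k.gguf",  # hypothetical local file name
    n_ctx=4096,             # context window size
    n_gpu_layers=-1,        # offload all layers to GPU if available; set 0 for CPU-only
    chat_format="llama-3",  # assumes the Llama-3 chat template
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "介绍一下自己。"},  # "Introduce yourself."
    ],
    max_tokens=256,
    temperature=0.7,
)

print(response["choices"][0]["message"]["content"])
```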

For further details (performance, usage, etc.), please refer to the GitHub project page: https://github.com/ymcui/Chinese-LLaMA-Alpaca-3

Performance

Metric: PPL, lower is better
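
For context, PPL here is presumably the standard token-level perplexity: the exponential of the average negative log-likelihood per token over the evaluation text,

$$
\mathrm{PPL} = \exp\!\left(-\frac{1}{N}\sum_{i=1}^{N}\log p\!\left(x_i \mid x_{<i}\right)\right)
$$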

Note: Unless constrained by memory, we suggest using Q8_0 or Q6_K for better performance.

| Quant | Size | PPL |
| :---: | :---: | :---: |
| Q2_K | 2.96 GB | 10.0534 +/- 0.13135 |
| Q3_K | 3.74 GB | 6.3295 +/- 0.07816 |
| Q4_0 | 4.34 GB | 6.3200 +/- 0.07893 |
| Q4_K | 4.58 GB | 6.0042 +/- 0.07431 |
| Q5_0 | 5.21 GB | 6.0437 +/- 0.07526 |
| Q5_K | 5.34 GB | 5.9484 +/- 0.07399 |
| Q6_K | 6.14 GB | 5.9469 +/- 0.07404 |
| Q8_0 | 7.95 GB | 5.8933 +/- 0.07305 |
| F16 | 14.97 GB | 5.8902 +/- 0.07303 |

Others