Name and Version
llama.cpp 6e67254
Operating systems
Linux
GGML backends
CUDA, CPU
Hardware
3× RTX 3090 + Ryzen 9 5950X + 128 GB DDR4
Models
- LFM2-350M: inconsistent size for blk.0.shortconv.in_proj.weight (4096 vs 3072)
- LFM2-700M: inconsistent size for blk.0.shortconv.in_proj.weight (6144 vs 4608)
- LFM2-1.2B: inconsistent size for blk.0.shortconv.in_proj.weight (8192 vs 6144)
Problem description & steps to reproduce
python convert_hf_to_gguf.py LFM2-350M/ --outfile LFM2-350M-F16.gguf
./llama-imatrix -m LFM2-350M-F16.gguf -f calibration_datav3.txt -o imatrix.gguf
./llama-imatrix -m LFM2-350M-F16.gguf -f calibration_datav3.txt -o imatrix.gguf -ngl 99
./llama-imatrix -m LFM2-350M-F16.gguf -f calibration_datav3.txt -o imatrix.dat
All commands produce the same inconsistent size error.
Note that all size errors have a ratio of 4/3 (4096 / 3072 = 4/3).
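That ratio matches 4*n_embd vs 3*n_embd for every model. A quick sanity check (n_embd = 1024 for the 350M comes from the log below; 1536 and 2048 for the 700M and 1.2B are assumptions based on their scale):

# Check that each reported pair is exactly 4*n_embd vs 3*n_embd
# (1024 is from the log below; 1536/2048 for 700M/1.2B are assumed)
for spec in "LFM2-350M 1024 4096 3072" "LFM2-700M 1536 6144 4608" "LFM2-1.2B 2048 8192 6144"; do
  set -- $spec
  echo "$1: 4*n_embd = $((4 * $2)) (reported $3), 3*n_embd = $((3 * $2)) (reported $4)"
done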
First Bad Commit
9008328 is the problematic commit
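For anyone who wants to re-confirm this, a minimal bisect sketch (the revision range, build flags, target name, and error grep are assumptions, adjust to your setup):

# Marks a commit bad when llama-imatrix prints "inconsistent size";
# assumes 9008328^ is good and HEAD reproduces the error
git bisect start HEAD 9008328^
git bisect run sh -c '
  cmake -B build -DGGML_CUDA=ON > /dev/null &&
  cmake --build build -j --target llama-imatrix > /dev/null || exit 125  # skip unbuildable commits
  ./build/bin/llama-imatrix -m LFM2-350M-F16.gguf -f calibration_datav3.txt -o /tmp/imatrix.gguf 2>&1 \
    | grep -q "inconsistent size" && exit 1 || exit 0  # error found => bad commit
'
git bisect reset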
Relevant log output
./llama-imatrix -m LFM2-350M-F16.gguf -f calibration_datav3.txt -o imatrix.gguf
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
build: 4 (6e67254) with cc (GCC) 15.1.1 20250425 for x86_64-pc-linux-gnu
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23062 MiB free
llama_model_load_from_file_impl: using device CUDA1 (NVIDIA GeForce RTX 3090) - 22766 MiB free
llama_model_load_from_file_impl: using device CUDA2 (NVIDIA GeForce RTX 3090) - 23461 MiB free
llama_model_loader: loaded meta data with 34 key-value pairs and 148 tensors from LFM2-350M-F16.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = lfm2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = LFM2 350M
llama_model_loader: - kv 3: general.basename str = LFM2
llama_model_loader: - kv 4: general.size_label str = 350M
llama_model_loader: - kv 5: general.license str = other
llama_model_loader: - kv 6: general.license.name str = lfm1.0
llama_model_loader: - kv 7: general.license.link str = LICENSE
llama_model_loader: - kv 8: general.tags arr[str,4] = ["liquid", "lfm2", "edge", "text-gene...
llama_model_loader: - kv 9: general.languages arr[str,8] = ["en", "ar", "zh", "fr", "de", "ja", ...
llama_model_loader: - kv 10: lfm2.block_count u32 = 16
llama_model_loader: - kv 11: lfm2.context_length u32 = 128000
llama_model_loader: - kv 12: lfm2.embedding_length u32 = 1024
llama_model_loader: - kv 13: lfm2.feed_forward_length u32 = 4608
llama_model_loader: - kv 14: lfm2.attention.head_count u32 = 16
llama_model_loader: - kv 15: lfm2.attention.head_count_kv arr[i32,16] = [0, 0, 8, 0, 0, 8, 0, 0, 8, 0, 8, 0, ...
llama_model_loader: - kv 16: lfm2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 17: general.file_type u32 = 1
llama_model_loader: - kv 18: lfm2.vocab_size u32 = 65536
llama_model_loader: - kv 19: lfm2.shortconv.l_cache u32 = 3
llama_model_loader: - kv 20: lfm2.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 21: general.quantization_version u32 = 2
llama_model_loader: - kv 22: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 23: tokenizer.ggml.pre str = lfm2
llama_model_loader: - kv 24: tokenizer.ggml.tokens arr[str,65536] = ["<|pad|>", "<|startoftext|>", "<|end...
llama_model_loader: - kv 25: tokenizer.ggml.token_type arr[i32,65536] = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv 26: tokenizer.ggml.merges arr[str,63683] = ["Ċ Ċ", "Ċ ĊĊ", "ĊĊ Ċ", "Ċ �...
llama_model_loader: - kv 27: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 28: tokenizer.ggml.eos_token_id u32 = 7
llama_model_loader: - kv 29: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv 30: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 31: tokenizer.ggml.add_sep_token bool = false
llama_model_loader: - kv 32: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 33: tokenizer.chat_template str = {{- bos_token -}}\n{%- set system_prom...
llama_model_loader: - type f32: 55 tensors
llama_model_loader: - type f16: 93 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = F16
print_info: file size = 676.25 MiB (16.00 BPW)
load: special tokens cache size = 507
load: token to piece cache size = 0.3756 MB
print_info: arch = lfm2
print_info: vocab_only = 0
print_info: n_ctx_train = 128000
print_info: n_embd = 1024
print_info: n_layer = 16
print_info: n_head = 16
print_info: n_head_kv = [0, 0, 8, 0, 0, 8, 0, 0, 8, 0, 8, 0, 8, 0, 8, 0]
print_info: n_rot = 64
print_info: n_swa = 0
print_info: is_swa_any = 0
print_info: n_embd_head_k = 64
print_info: n_embd_head_v = 64
print_info: n_gqa = [0, 0, 2, 0, 0, 2, 0, 0, 2, 0, 2, 0, 2, 0, 2, 0]
print_info: n_embd_k_gqa = [0, 0, 512, 0, 0, 512, 0, 0, 512, 0, 512, 0, 512, 0, 512, 0]
print_info: n_embd_v_gqa = [0, 0, 512, 0, 0, 512, 0, 0, 512, 0, 512, 0, 512, 0, 512, 0]
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-05
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: f_attn_scale = 0.0e+00
print_info: n_ff = 4608
print_info: n_expert = 0
print_info: n_expert_used = 0
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 2
print_info: rope scaling = linear
print_info: freq_base_train = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 128000
print_info: rope_finetuned = unknown
print_info: model type = 350M
print_info: model params = 354.48 M
print_info: general.name = LFM2 350M
print_info: vocab type = BPE
print_info: n_vocab = 65536
print_info: n_merges = 63683
print_info: BOS token = 1 '<|startoftext|>'
print_info: EOS token = 7 '<|im_end|>'
print_info: EOT token = 2 '<|endoftext|>'
print_info: PAD token = 0 '<|pad|>'
print_info: LF token = 708 'Ċ'
print_info: EOG token = 2 '<|endoftext|>'
print_info: EOG token = 7 '<|im_end|>'
print_info: max token length = 30
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: offloading 0 repeating layers to GPU
load_tensors: offloaded 0/17 layers to GPU
load_tensors: CPU_Mapped model buffer size = 676.25 MiB
...................................................................
llama_context: constructing llama_context
llama_context: non-unified KV cache requires ggml_set_rows() - forcing unified KV cache
llama_context: n_seq_max = 4
llama_context: n_ctx = 2048
llama_context: n_ctx_per_seq = 512
llama_context: n_batch = 2048
llama_context: n_ubatch = 512
llama_context: causal_attn = 1
llama_context: flash_attn = 0
llama_context: kv_unified = true
llama_context: freq_base = 1000000.0
llama_context: freq_scale = 1
llama_context: n_ctx_per_seq (512) < n_ctx_train (128000) -- the full capacity of the model will not be utilized
llama_context: CPU output buffer size = 1.00 MiB
llama_kv_cache_unified: the V embeddings have different sizes across layers and FA is not enabled - padding V cache to 512
llama_kv_cache_unified: CPU KV buffer size = 24.00 MiB
llama_kv_cache_unified: size = 24.00 MiB ( 2048 cells, 6 layers, 4/ 1 seqs), K (f16): 12.00 MiB, V (f16): 12.00 MiB
llama_kv_cache_unified: LLAMA_SET_ROWS=0, using old ggml_cpy() method for backwards compatibility
llama_memory_recurrent: CPU RS buffer size = 0.31 MiB
llama_memory_recurrent: size = 0.31 MiB ( 4 cells, 16 layers, 4 seqs), R (f32): 0.31 MiB, S (f32): 0.00 MiB
llama_context: CUDA0 compute buffer size = 264.01 MiB
llama_context: CUDA_Host compute buffer size = 72.01 MiB
llama_context: graph nodes = 602
llama_context: graph splits = 167 (with bs=512), 1 (with bs=1)
common_init_from_params: added <|endoftext|> logit bias = -inf
common_init_from_params: added <|im_end|> logit bias = -inf
common_init_from_params: setting dry_penalty_last_n to ctx_size = 2048
system_info: n_threads = 16 (n_threads_batch = 16) / 32 | CUDA : ARCHS = 860 | F16 = 1 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | FA_ALL_QUANTS = 1 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | BMI2 = 1 | LLAMAFILE = 1 | OPENMP = 1 | REPACK = 1 |
compute_imatrix: tokenizing the input ..
compute_imatrix: tokenization took 36.134 ms
compute_imatrix: computing over 131 chunks, n_ctx=512, batch_size=2048, n_seq=4
compute_imatrix: 1.59 seconds per pass - ETA 0.85 minutes
[1]16.1218,[2]16.0527,[3]15.8228,[4]19.0799,[5]19.5578,[6]16.6691,[7]19.1424,[8]19.6710,[9]19.9923,[10]18.0396,[11]15.8644,[12]16.6023,[13]17.7008,[14]17.5375,[15]18.4722,[16]18.7421,[17]19.4153,[18]19.5910,[19]18.3784,[20]18.1191,[21]18.6345,[22]18.7850,[23]19.1435,[24]19.7705,[25]20.1693,[26]20.5836,[27]20.7830,[28]20.9911,[29]22.1619,[30]22.2467,[31]22.8473,[32]21.9694,[33]21.2215,[34]20.8443,[35]20.4220,[36]19.9727,[37]19.6946,[38]19.7684,[39]19.6883,[40]19.6866,[41]20.0332,[42]20.0252,[43]20.7494,[44]21.2795,[45]21.9291,[46]22.4473,[47]23.0100,[48]22.5381,[49]22.6655,[50]22.7210,[51]22.8786,[52]22.3267,[53]22.3220,[54]22.5152,[55]22.6539,[56]22.9201,[57]23.1667,[58]23.2689,[59]23.2460,[60]23.2207,[61]23.0716,[62]22.8744,[63]22.4267,[64]22.1785,[65]22.5743,[66]22.6038,[67]22.2798,[68]22.1210,[69]21.9884,[70]21.8635,[71]21.5494,[72]21.4812,[73]21.5279,[74]21.2028,[75]21.1883,[76]21.1878, [77]21.0229,[78]21.0508,[79]20.8843,[80]20.8705,[81]20.7459,[82]20.8424,[83]20.7547,[84]20.6542,[85]20.6533,[86]20.6420,[87]20.7667,[88]20.5743,[89]20.5772,[90]20.6513,[91]20.6953,[92]20.6352,[93]20.4360,[94]20.1697,[95]19.9172,[96]19.6822,[97]19.4291,[98]19.1929,[99]18.9387,[100]18.7208,[101]18.6064,[102]18.5537,[103]18.6731,[104]18.8646,[105]19.0555,[106]19.1200,[107]19.4036,[108]19.4173,[109]19.4649,[110]19.3261,[111]19.3172,[112]19.2799,[113]19.2930,[114]19.1867,[115]19.1901,[116]19.2213,[117]19.1991,[118]19.2698,[119]19.2935,[120]19.3463,[121]19.3871,[122]19.3852,[123]19.4242,[124]19.4143,[125]19.2542,[126]19.4468,[127]19.6391,[128]19.7795,
collect_imatrix: inconsistent size for blk.0.shortconv.in_proj.weight (4096 vs 3072)