Jesus@lemmy.world to Political Memes@lemmy.world · 1 month ago
What could possibly go wrong
Even_Adder@lemmy.dbzer0.com · 1 month ago
The answer I got out of DeepSeek-R1-Distill-Llama-8B-abliterate.i1-Q4_K_S
taiyang@lemmy.world · 1 month ago
So a real answer, basically. Too bad your average person isn’t going to bother with that. Still nice it’s open source.
felixwhynot@lemmy.world · 1 month ago
Seems like the model you mentioned is more like a fine-tuned Llama? “Specifically, these are fine-tuned versions of Qwen and Llama, on a dataset of 800k samples generated by DeepSeek R1.” https://github.com/Emericen/deepseek-r1-distilled
Even_Adder@lemmy.dbzer0.com · edited · 1 month ago
Yeah, it’s distilled from DeepSeek and abliterated. The non-abliterated ones give you the same responses as DeepSeek R1.
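For context on what running a quant like that involves: a minimal sketch of loading a local GGUF file with llama-cpp-python. The file path, prompt, and use of llama-cpp-python are all assumptions for illustration; the thread doesn’t say how the commenter actually ran the model.

```python
# Minimal sketch: running a local GGUF quant with llama-cpp-python.
# Model path and prompt are illustrative assumptions, not from the thread.
from llama_cpp import Llama

llm = Llama(
    model_path="DeepSeek-R1-Distill-Llama-8B-abliterate.i1-Q4_K_S.gguf",  # assumed local file
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Your question here"}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```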
deleted by creator