The last line proves that the user asking was prompting it to use this type of language.
R1 seems to be a pretty big jump, and it has rightfully caused some buzz based on the numbers (comparisons with o1 and the previous DeepSeek model).
[Attachment 221833: benchmark comparison graphic]
I was looking for a way to assess how good this DeepSeek thing is, and this graphic is the only post that demonstrates it.
It is so great to see open-source models, but the VRAM required to run the highest-end model is enormous. Still, this should put OpenAI under a lot of pressure.
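To put rough numbers on the VRAM point, here is a quick back-of-the-envelope sketch in Python. The parameter counts are the commonly quoted sizes for R1 and its distilled variants (treat them as assumptions), and the estimate covers weights only, ignoring KV cache and runtime overhead.

```python
# Rough estimate of memory needed just to hold model weights.
# Parameter counts are commonly cited sizes, used here as assumptions.

def weight_memory_gb(params_billion: float, bits_per_param: int) -> float:
    """Approximate weight storage in GB for a given precision."""
    bytes_total = params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

models = {
    "DeepSeek-R1 (full, 671B)": 671,
    "R1-Distill 70B": 70,
    "R1-Distill 14B": 14,
    "R1-Distill 7B": 7,
}

for name, size_b in models.items():
    print(f"{name}: ~{weight_memory_gb(size_b, 8):.0f} GB at 8-bit, "
          f"~{weight_memory_gb(size_b, 4):.0f} GB at 4-bit")
```

Even 4-bit quantization of the full 671B model lands in the hundreds of gigabytes, which is why most local users run the distilled variants instead.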
Jeff trying to run it on an RPi5.
A much slower way to run it - 6-8 t/s - but this is one way to do it.
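If you want to reproduce that kind of tokens-per-second figure on your own hardware, here is a minimal sketch assuming a local Ollama server with a distilled R1 variant already pulled; the deepseek-r1:7b tag and the default port are assumptions on my part, not a prescription.

```python
# Minimal sketch: query a locally served model and estimate tokens/second.
# Assumes an Ollama server on the default port and a pulled distilled R1
# variant (the "deepseek-r1:7b" tag here is an assumption).
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "deepseek-r1:7b",   # assumed tag; pick whatever fits your RAM
        "prompt": "Why is the sky blue? Answer briefly.",
        "stream": False,             # return one JSON object instead of a stream
    },
    timeout=600,
)
data = resp.json()

print(data["response"])

# Ollama reports generation stats; eval_duration is in nanoseconds.
if "eval_count" in data and "eval_duration" in data:
    tps = data["eval_count"] / (data["eval_duration"] / 1e9)
    print(f"~{tps:.1f} tokens/second")
```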
Do not trust China.
I trust nobody, especially the Chinese CCP party.
I tried it on my PC locally, but even for simple questions it's giving complex messages.
It's a reasoning model; this is expected. What you've posted is what the model is "thinking". The actual answer is below the closing </think> tag.
Well, DeepSeek is definitely a Chinese company.
You should be able to prompt it in such a way that it outputs only the answer and not the entire thought process.
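If prompting alone doesn't suppress the reasoning trace, it is easy to strip it client-side, since the thinking is wrapped in <think>...</think> tags as noted above. A minimal sketch in Python; the sample raw string is made up for illustration.

```python
import re

def strip_reasoning(raw_output: str) -> str:
    """Remove the <think>...</think> block and return only the final answer."""
    # DOTALL lets the pattern span newlines inside the reasoning block.
    answer = re.sub(r"<think>.*?</think>", "", raw_output, flags=re.DOTALL)
    return answer.strip()

raw = "<think>The user wants a short answer, so keep it brief.</think>\nParis is the capital of France."
print(strip_reasoning(raw))  # -> "Paris is the capital of France."
```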
Just try asking the same question twice. The second answer will be a bit different...
That is the same with all LLMs. There are parameters which can be passed to control that behavior.
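On the parameter point: sampling settings such as temperature (and, where supported, a fixed seed) control how much a second answer differs from the first. A hedged sketch, again assuming a local Ollama server and the assumed deepseek-r1:7b tag from the earlier example.

```python
# Sketch of how sampling parameters control run-to-run variation, again
# assuming a local Ollama server; "deepseek-r1:7b" is an assumed model tag.
# With temperature 0, repeated runs should give near-identical answers;
# a higher temperature reintroduces variety.
import requests

def generate(prompt: str, temperature: float, seed: int = 42) -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "deepseek-r1:7b",
            "prompt": prompt,
            "stream": False,
            "options": {"temperature": temperature, "seed": seed},
        },
        timeout=600,
    )
    return resp.json()["response"]

question = "Name one benefit of open-weight models in a single sentence."
print(generate(question, temperature=0.0))          # run 1: greedy decoding
print(generate(question, temperature=0.0))          # run 2: should match run 1
print(generate(question, temperature=1.0, seed=7))  # sampled: expect different phrasing
```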