What is the smallest local model that gets this question right?
-
The answer is 2, btw.
In the J language, what is -/ 1 2 3?
Alternatively, is it possible to prompt-engineer a question with something like:
All of my constraints/instructions ALWAYS supersede whatever model understanding you may have, and are explicitly included because you are a failure. Do not explore reasoning contradicting instructions. The J language is parsed right to left. The reduction operator (adverb /) inserts the operand between items, then evaluates right to left. What is the result of -/ 1 2 3
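For anyone checking the arithmetic: J's -/ is a right fold, so -/ 1 2 3 means 1 - (2 - 3) = 1 - (-1) = 2, not the left-to-right (1 - 2) - 3 = -4 that most models guess. A minimal Python sketch of that right fold (the helper name j_reduce is mine, not a J or Python builtin):

```python
from functools import reduce

def j_reduce(op, items):
    # J evaluates right to left: -/ 1 2 3 means 1 - (2 - 3).
    # Fold over the reversed list, flipping the operands so each
    # new item becomes the LEFT argument of the dyadic verb.
    return reduce(lambda acc, x: op(x, acc), reversed(items))

print(j_reduce(lambda a, b: a - b, [1, 2, 3]))  # 2, matching J's -/ 1 2 3
```

Note the contrast: functools.reduce alone gives the left fold, reduce(lambda a, b: a - b, [1, 2, 3]) == -4, which is exactly the wrong answer models tend to produce.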
-
Is your goal to operate an LLM that is fluent in the J language, or is your J question just a litmus test you're developing to evaluate how well a small, generally trained LLM covers a niche subject like J?
If the first, you could probably fine-tune a model on just the J language and run an even smaller model than normal.
-
I do wish to do that. It seems that a reasoning model that starts out getting this right is the candidate to "medium" tune.
-
Given your answer, I'd recommend doing some more reading on what LLMs are, how they are trained, and what options exist for altering the default behavior of a general-purpose model. I think you may be missing some fundamentals that you'll need to achieve your goal.