Explore using an LLM at temperature=0 as a deterministic interpreter for code/pseudocode across languages.
At temperature=0 with structured I/O, an LLM can be treated as a deterministic function: eval : Code × Context → Value
Unlike traditional interpreters, which parse syntax, the LLM performs semantic inference: it understands what code *means* and produces what it *should* produce.
1. Deterministic: temperature=0 → same input, same output
2. Language-agnostic: Python, Haskell, Swift, pseudocode → same semantic space
3. Specification = Implementation: a sufficiently precise description is executable
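A minimal sketch of the eval interface, assuming a hypothetical `complete` transport that sends a prompt to an LLM at temperature=0 and returns the raw completion (stubbed below with `fake_llm` for illustration):

```python
from typing import Callable

# Hypothetical transport: any function that sends a prompt to an LLM at
# temperature=0 and returns the raw completion string.
Transport = Callable[[str], str]

def semantic_eval(code: str, context: str, complete: Transport) -> str:
    # Structured I/O: fixed prompt template, answer-only output contract.
    prompt = (
        "You are a code interpreter. Evaluate the expression and reply "
        "with ONLY the resulting value.\n"
        f"Context:\n{context}\n"
        f"Expression:\n{code}\n"
    )
    return complete(prompt).strip()

# Stub standing in for a real temperature=0 API call.
def fake_llm(prompt: str) -> str:
    return "[2, 3, 4]"

print(semantic_eval("[1,2,3].map(x => x + 1)", "", fake_llm))  # [2, 3, 4]
```

The answer-only contract is what makes the output machine-checkable downstream.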
> [1,2,3].map(x => x + 1) // JavaScript
[2, 3, 4]
> map (+1) [1,2,3] -- Haskell
[2, 3, 4]
> [x+1 for x in [1,2,3]] # Python
[2, 3, 4]
Same semantic operation, any syntax.
> sort [3,1,4,1,5] using quicksort
[1, 1, 3, 4, 5]
> find shortest path from A to B in graph G
[A, C, B]
Natural language algorithms, executable.
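To verify transcripts like the two above, conventional reference implementations can serve as ground truth. A sketch: quicksort for the sorting prompt, and BFS for the path prompt, over a hypothetical graph `G` consistent with the answer [A, C, B] (the actual graph is not given in the notes):

```python
from collections import deque

def quicksort(xs):
    # Reference implementation for "sort [3,1,4,1,5] using quicksort".
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    return (quicksort([x for x in rest if x < pivot])
            + [pivot]
            + quicksort([x for x in rest if x >= pivot]))

def shortest_path(graph, start, goal):
    # BFS over an unweighted graph; returns one shortest path as a list.
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Hypothetical graph consistent with the transcript's answer [A, C, B].
G = {"A": ["C", "D"], "C": ["B"], "D": ["E"], "E": ["B"]}

print(quicksort([3, 1, 4, 1, 5]))      # [1, 1, 3, 4, 5]
print(shortest_path(G, "A", "B"))      # ['A', 'C', 'B']
```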
> parse '{"a": 1}' as JSON, extract 'a', format as XML
<a>1</a>
Chain operations across language semantics.
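The JSON-to-XML chain has an exact conventional pipeline to check against; a sketch (`json_to_xml_field` is an illustrative name):

```python
import json

def json_to_xml_field(doc: str, key: str) -> str:
    # Ground truth for: parse as JSON, extract key, format as XML.
    value = json.loads(doc)[key]
    return f"<{key}>{value}</{key}>"

print(json_to_xml_field('{"a": 1}', "a"))  # <a>1</a>
```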
> equivalent? 'fold (+) 0 xs' and 'sum(xs)'
true : both compute the sum of xs
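The LLM asserts equivalence semantically; a harness can corroborate the claim empirically by property-testing both sides on random inputs. A sketch, with the Haskell fold transliterated to Python:

```python
import random
from functools import reduce

def fold_add(xs):
    # Transliteration of Haskell's 'fold (+) 0 xs'.
    return reduce(lambda acc, x: acc + x, xs, 0)

# Empirical check of 'fold (+) 0 xs' ≡ 'sum(xs)' on 100 random lists.
for _ in range(100):
    xs = [random.randint(-50, 50) for _ in range(random.randint(0, 10))]
    assert fold_add(xs) == sum(xs)
print("empirically equivalent on 100 random lists")
```

Random testing cannot prove equivalence, but a counterexample would refute the model's `true` verdict immediately.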
1. Define a simple prompt structure.
2. Build a test harness.
3. Compare outputs to ground truth.
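The three steps above can be sketched as one loop. `llm_eval` is a hypothetical stand-in for the real temperature=0 call; here Python's own evaluator plays the oracle so the harness structure can be exercised end to end:

```python
import ast

# (code to evaluate, expected ground-truth value)
CASES = [
    ("[x+1 for x in [1,2,3]]", [2, 3, 4]),
    ("sorted([3,1,4,1,5])", [1, 1, 3, 4, 5]),
]

def llm_eval(code: str) -> str:
    # Stub: replace with a real temperature=0 API call returning the
    # answer-only completion. Trusted test inputs only.
    return repr(eval(code))

def run_harness():
    failures = []
    for code, expected in CASES:
        got = ast.literal_eval(llm_eval(code))  # parse structured output
        if got != expected:
            failures.append((code, expected, got))
    return failures

print(run_harness())  # [] when the interpreter matches ground truth
```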
1. Determinism: Is temperature=0 actually deterministic across calls? Across sessions?
2. Complexity: What can it 'compute'?
3. State: Can it maintain a REPL environment?
   x = 5 then x + 1 → 6?
4. Limits: Where does it break?
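One plausible answer to the state question: replay the full transcript as context on every turn, so bindings from earlier turns remain visible. A sketch simulating this with Python's own exec/eval (`Session` is an illustrative name; a real version would prepend the transcript to the LLM prompt):

```python
class Session:
    def __init__(self):
        self.transcript = []  # prior lines, replayed as context each turn

    def eval(self, line: str):
        env = {}
        for stmt in self.transcript:   # rebuild state from the transcript
            exec(stmt, env)
        self.transcript.append(line)
        try:
            return eval(line, env)     # expression → value
        except SyntaxError:
            exec(line, env)            # statement (e.g. assignment) → no value
            return None

s = Session()
s.eval("x = 5")
print(s.eval("x + 1"))  # 6
```

Replaying the transcript keeps each call stateless, which is exactly how a context-window-based REPL would have to work.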
This reframes the LLM from 'text generator' to 'semantic compute engine'. If it works reliably, it is a new kind of interpreter: one that operates on meaning rather than syntax.