A programming language where code thinks
NSL combines Python-like simplicity with cognitive primitives, GPU-native tensor ops, a bytecode VM with JIT, and 40 stdlib modules. Beliefs, observations, causal reasoning, and knowledge graphs -- as first-class language features. 1,983 tests passing.
Why NSL?
Performance
GPU-native tensor operations hitting 8,797 GFLOPS on TF32 and 14,015 GFLOPS on FP16. Hand-tuned PTX kernels for NVIDIA GPUs. AVX-512 CPU SGEMM at 1,648 GFLOPS. Register-based bytecode VM with threaded dispatch, NaN boxing, and x64 JIT compilation.
Intelligence
Beliefs, observations, and causal reasoning as native language constructs. A built-in prediction engine with Bayesian confidence updates. Knowledge graphs with 15 node types and 17 edge types, monitors, goals, hypotheticals, and a 74-function memory module -- all language features, not libraries.
Simplicity
Python-like syntax that reads like pseudocode, with 177 keywords. No semicolons, no type annotations required, indent-based blocks. Pattern matching, entities, contracts, concurrency, and a CPython bridge. If you can read Python, you can read NSL.
What you get
tensor, grad, optim, tokenizer, train, memory, py, http, crypto, json, nsd, ncf, and 28 more
CUDA TF32/FP16 PTX kernels, AVX-512 CPU SGEMM, batched matmul, game compute kernels
Tape-based reverse-mode AD, Adam/SGD optimizers, BPE tokenizer, data loaders, LR schedulers
belief, observe, revise, crystallize, capability, monitor -- with confidence tracking and knowledge graphs
158-opcode register VM with threaded dispatch, NaN boxing, and x64 JIT compilation
Human-readable NSD (3-8x fewer tokens than JSON) and binary NCF with LZ77+Huffman compression
Call any Python library from NSL via py.eval, py.import, py.call -- types auto-convert
Run NSL in your browser with no install, or chat with the Qel'zyn VI AI assistant
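The CPython bridge above might be exercised like this. This is a hypothetical sketch: `py.import`, `py.call`, and `py.eval` are the bridge functions named on this page, but the surrounding NSL syntax and the auto-conversion behavior shown in the comments are assumptions, not verified grammar.

```
# Hypothetical NSL sketch of the CPython bridge -- syntax assumed
np = py.import("numpy")

# Arguments auto-convert from NSL to Python; results convert back
arr = py.call(np.array, [1, 2, 3])
total = py.call(np.sum, arr)

# One-off expressions via py.eval
pi = py.eval("__import__('math').pi")
```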
Cognitive reasoning in action
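A minimal sketch of what belief revision might look like, built only from the `belief`, `observe`, and `revise` keywords listed above. The exact syntax, the `confidence` clause, and the update semantics are assumptions for illustration, not NSL's actual grammar.

```
# Hypothetical NSL sketch -- syntax and semantics assumed
belief rain_likely: "It will rain today" confidence 0.6

# New evidence arrives as an observation tied to the belief
observe "dark clouds forming" supports rain_likely

# The prediction engine applies a Bayesian-style confidence update
revise rain_likely

# Inspect the revised confidence
print(rain_likely.confidence)
```

The idea the page is selling: because beliefs and observations are language constructs rather than library objects, the runtime can track confidence and provenance for you.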
Start thinking in code
NSL is free and runs in your browser. No account needed, no install required. Write your first belief in 30 seconds.