DeepSeek V3.2 Usage#
DeepSeek-V3.2-Exp extends DeepSeek-V3.1-Terminus with DeepSeek Sparse Attention (DSA) through continued training. DSA is a fine-grained sparse attention mechanism powered by a lightning indexer, and it gives DeepSeek-V3.2 efficiency improvements in long-context scenarios.
For reporting issues or tracking upcoming features, please refer to this Roadmap.
Installation#
Docker#
# H200/B200
docker pull lmsysorg/sglang:latest
# MI350/MI355
docker pull lmsysorg/sglang:dsv32-rocm
# NPUs
docker pull lmsysorg/sglang:dsv32-a2
docker pull lmsysorg/sglang:dsv32-a3
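After pulling an image, the server can be launched from inside a container. Below is a minimal sketch for the H200/B200 image, assuming the default port 30000 and a host Hugging Face cache mounted for model weights; the mounts, shared-memory size, and port mapping are illustrative and may need adjustment for your environment.
# Sketch: run the H200/B200 image and serve the model on port 30000
docker run --gpus all --shm-size 32g -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    lmsysorg/sglang:latest \
    python -m sglang.launch_server --model deepseek-ai/DeepSeek-V3.2-Exp --tp 8 --dp 8 --enable-dp-attention --host 0.0.0.0 --port 30000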
Build From Source#
# Install SGLang
git clone https://github.com/sgl-project/sglang
cd sglang
pip3 install pip --upgrade
pip3 install -e "python"
Launch DeepSeek V3.2 with SGLang#
To serve DeepSeek-V3.2-Exp on 8xH200/B200 GPUs:
# Launch with TP + DP
python -m sglang.launch_server --model deepseek-ai/DeepSeek-V3.2-Exp --tp 8 --dp 8 --enable-dp-attention
# Launch with EP + DP
python -m sglang.launch_server --model deepseek-ai/DeepSeek-V3.2-Exp --tp 8 --ep 8 --dp 8 --enable-dp-attention
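Once the server is up, it exposes an OpenAI-compatible API on the default port 30000. A quick smoke test with curl (the request body is illustrative):
curl http://localhost:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "deepseek-ai/DeepSeek-V3.2-Exp", "messages": [{"role": "user", "content": "Hello"}], "max_tokens": 32}'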
Configuration Tips#
- DP Attention: For the DeepSeek V3.2 model, the kernels are customized for the `dp_size=8` use case, so DP attention is enabled by default for better stability and performance. Launching with pure TP is still under development.
- Short-sequence MHA prefill (adaptive): For short prefill sequences (default threshold: 2048 tokens), the NSA backend automatically uses standard MHA (no extra flags). On H200 (SM90) this path uses the FlashAttention variable-length kernel; on B200 (SM100) it uses TRT-LLM ragged MHA. MHA runs in `MHA_ONE_SHOT` mode for best performance: `MHA_ONE_SHOT` computes multi-head attention over all tokens (both the cached prefix and the newly extended tokens) in a single kernel invocation, avoiding the overhead of chunked KV cache processing. This achieves optimal throughput for short sequences whose total length fits within the chunk capacity limit.
- Choices of Attention Kernels: The attention backend is automatically set to the `nsa` backend for the DeepSeek V3.2 model. This backend implements different kernels for sparse prefilling/decoding, which can be selected with the `--nsa-prefill-backend` and `--nsa-decode-backend` server arguments (see the example after this list). The available prefill/decode kernels are:
  - `flashmla_sparse`: the `flash_mla_sparse_fwd` kernel from the `flash_mla` library. Runs on both Hopper and Blackwell GPUs. Requires bf16 q and kv inputs.
  - `flashmla_kv`: the `flash_mla_with_kvcache` kernel from the `flash_mla` library. Runs on both Hopper and Blackwell GPUs. Requires bf16 q and fp8 k_cache inputs.
  - `fa3`: the `flash_attn_with_kvcache` kernel from the `flash_attn` library. Runs only on Hopper GPUs. Requires bf16 q and kv inputs.
  - `tilelang`: a `tilelang` implementation that can run on GPU, HPU and NPU.
  - `alter`: Alter kernel on AMD HPUs. Can only be used as a decode kernel.
- Based on performance benchmarks, the default configurations on H200 and B200 are set as follows:
  - H200: `flashmla_sparse` prefill attention (short-sequence prefill uses MHA via FlashAttention varlen), `fa3` decode attention, `bf16` kv cache dtype.
  - B200: `flashmla_auto` prefill attention (short-sequence prefill uses MHA via TRT-LLM ragged), `flashmla_kv` decode attention, `fp8_e4m3` kv cache dtype.
- `flashmla_auto` automatically selects either the `flashmla_sparse` or the `flashmla_kv` kernel for prefill based on the KV cache dtype, hardware, and heuristics. When the FP8 KV cache is enabled and `total_kv_tokens < total_q_tokens * 512`, it uses the `flashmla_sparse` kernel; otherwise, it falls back to the `flashmla_kv` kernel. The heuristics may need to be tuned if the performance of either kernel changes significantly.
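For example, to pin the NSA prefill/decode kernels explicitly instead of relying on the defaults, pass the server arguments above at launch time (the values shown simply reproduce the H200 defaults listed in this section):
# Override the NSA kernel selection (values mirror the H200 defaults)
python -m sglang.launch_server --model deepseek-ai/DeepSeek-V3.2-Exp --tp 8 --dp 8 --enable-dp-attention --nsa-prefill-backend flashmla_sparse --nsa-decode-backend fa3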
Multi-token Prediction#
SGLang implements Multi-Token Prediction (MTP) for DeepSeek V3.2 based on EAGLE speculative decoding. With this optimization, the decoding speed can be improved significantly on small batch sizes. Please look at this PR for more information.
Example usage:
python -m sglang.launch_server --model deepseek-ai/DeepSeek-V3.2-Exp --tp 8 --dp 8 --enable-dp-attention --speculative-algorithm EAGLE --speculative-num-steps 3 --speculative-eagle-topk 1 --speculative-num-draft-tokens 4
- The best configuration for `--speculative-num-steps`, `--speculative-eagle-topk`, and `--speculative-num-draft-tokens` can be searched with the `bench_speculative.py` script for a given batch size. The minimum configuration is `--speculative-num-steps 1 --speculative-eagle-topk 1 --speculative-num-draft-tokens 2`, which can still achieve a speedup at larger batch sizes.
- The default value of `--max-running-requests` is `48` for MTP. For larger batch sizes, this value should be increased beyond the default (see the example below).
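For example, a launch that combines the minimum speculative configuration with a larger request cap might look like the following; the value `128` is illustrative, not a tuned recommendation:
# MTP with the minimum speculative configuration and a raised request cap
python -m sglang.launch_server --model deepseek-ai/DeepSeek-V3.2-Exp --tp 8 --dp 8 --enable-dp-attention --speculative-algorithm EAGLE --speculative-num-steps 1 --speculative-eagle-topk 1 --speculative-num-draft-tokens 2 --max-running-requests 128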
Function Calling and Reasoning Parser#
Function calling and the reasoning parser are used the same way as with DeepSeek V3.1. Please refer to the Reasoning Parser and Tool Parser documents.
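A minimal launch sketch with both parsers enabled is shown below; the parser names (`deepseekv3` for tool calls, `deepseek-v3` for reasoning) are assumptions carried over from the DeepSeek V3.1 setup, so confirm them against the linked documents:
# Sketch: parser names are assumptions, check the Tool Parser and Reasoning Parser docs
python -m sglang.launch_server --model deepseek-ai/DeepSeek-V3.2-Exp --tp 8 --dp 8 --enable-dp-attention --tool-call-parser deepseekv3 --reasoning-parser deepseek-v3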
PD Disaggregation#
Prefill Command:
python -m sglang.launch_server \
--model-path deepseek-ai/DeepSeek-V3.2-Exp \
--disaggregation-mode prefill \
--host $LOCAL_IP \
--port $PORT \
--tp 8 \
--dp 8 \
--enable-dp-attention \
--dist-init-addr ${HOST}:${DIST_PORT} \
--trust-remote-code \
--disaggregation-bootstrap-port 8998 \
--mem-fraction-static 0.9
Decode command:
python -m sglang.launch_server \
--model-path deepseek-ai/DeepSeek-V3.2-Exp \
--disaggregation-mode decode \
--host $LOCAL_IP \
--port $PORT \
--tp 8 \
--dp 8 \
--enable-dp-attention \
--dist-init-addr ${HOST}:${DIST_PORT} \
--trust-remote-code \
--mem-fraction-static 0.9
Router command:
python -m sglang_router.launch_router --pd-disaggregation \
--prefill $PREFILL_ADDR 8998 \
--decode $DECODE_ADDR \
--host 127.0.0.1 \
--port 8000
If you need more advanced or production-ready deployment methods, such as RBG- or LWS-based deployment, please refer to references/multi_node_deployment/rbg_pd/deepseekv32_pd.md. You can also find startup commands for DeepEP-based EP parallelism in that document.
Benchmarking Results#
Accuracy Test with gsm8k#
A simple accuracy benchmark can be run with the gsm8k dataset:
python3 benchmark/gsm8k/bench_sglang.py --num-shots 8 --num-questions 1319 --parallel 1319
The result is 0.956, which matches our expectation:
Accuracy: 0.956
Invalid: 0.000
Latency: 25.109 s
Output throughput: 5226.235 token/s
Accuracy Test with gpqa-diamond#
A long-context accuracy benchmark can be run on the GPQA-Diamond dataset with long output tokens and thinking enabled:
python3 -m sglang.test.run_eval --port 30000 --eval-name gpqa --num-examples 198 --max-tokens 120000 --repeat 8 --thinking-mode deepseek-v3
The mean accuracy over 8 runs is 0.797, which matches the 79.9 reported in the official tech report.
Repeat: 8, mean: 0.797
Scores: ['0.808', '0.798', '0.808', '0.798', '0.783', '0.788', '0.803', '0.793']