尹志 (2026-01-31 23:53):
#paper https://arxiv.org/abs/2601.21571. arXiv 2026. Shaping capabilities with token-level data filtering. Moving from document-level filtering to token-level filtering is a very natural idea, but extracting insight through a solid engineering implementation is very much Alec's style.
arXiv, 2026-01-29T11:34:01Z. DOI: 10.48550/arXiv.2601.21571
Shaping capabilities with token-level data filtering
Abstract:
Current approaches to reducing undesired capabilities in language models are largely post hoc, and can thus be easily bypassed by adversaries. A natural alternative is to shape capabilities during pretraining itself. On the proxy task of removing medical capabilities, we show that the simple intervention of filtering pretraining data is highly effective, robust, and inexpensive at scale. Inspired by work on data attribution, we show that filtering tokens is more effective than filtering documents, achieving the same hit to undesired capabilities at a lower cost to benign ones. Training models spanning two orders of magnitude, we then demonstrate that filtering gets more effective with scale: for our largest models, token filtering leads to a 7000x compute slowdown on the forget domain. We also show that models trained with token filtering can still be aligned on the forget domain. Along the way, we introduce a methodology for labeling tokens with sparse autoencoders and distilling cheap, high-quality classifiers. We also demonstrate that filtering can be robust to noisy labels with sufficient pretraining compute.
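The abstract does not spell out how filtered tokens are excluded from training, so the following is only a minimal sketch of one plausible instantiation: mask the loss on tokens flagged by a per-token forget-domain classifier (e.g., one distilled from SAE-based labels, as the abstract mentions), so the model is never optimized to predict them. The function and tensor names here are illustrative, not from the paper.

```python
# Minimal sketch (assumption, not the paper's exact method): token-level
# filtering implemented as loss masking on tokens flagged by a classifier.
import torch
import torch.nn.functional as F

def masked_lm_loss(logits, targets, forget_mask):
    """Next-token cross-entropy that skips flagged tokens.

    logits:      (batch, seq_len, vocab_size) model outputs
    targets:     (batch, seq_len) target token ids
    forget_mask: (batch, seq_len) bool, True where the token was flagged
                 as forget-domain by the (hypothetical) token classifier
    """
    per_token = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
        reduction="none",
    ).reshape(targets.shape)
    keep = (~forget_mask).float()
    # Average only over kept tokens; guard against all tokens being flagged.
    return (per_token * keep).sum() / keep.sum().clamp(min=1.0)

# Toy usage with random tensors standing in for a real model and classifier.
batch, seq_len, vocab = 2, 8, 100
logits = torch.randn(batch, seq_len, vocab)
targets = torch.randint(0, vocab, (batch, seq_len))
forget_mask = torch.rand(batch, seq_len) < 0.2  # stand-in classifier output
print(masked_lm_loss(logits, targets, forget_mask))
```

Document-level filtering would instead drop the whole sequence whenever any part of it is flagged; the token-level variant keeps the benign remainder of the document, which is the cost advantage the abstract points to.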