AsianFin -- ByteDance’s Seed team has released an experimental diffusion-based language model named Seed Diffusion Preview, using structured code generation as a testbed to validate discrete diffusion as a next-generation architecture for language models.
According to the team, the model achieves an inference speed of 2,146 tokens per second — 5.4 times faster than autoregressive models of comparable size. The announcement marks ByteDance’s latest foray into alternative AI model architectures.