The San Francisco-based ChatGPT maker told the Financial Times it had seen some evidence of “distillation”, a technique it suspects DeepSeek used. Developers use distillation to get better performance from smaller models by training them on the outputs of larger, more capable ones, allowing the smaller models to achieve similar results on specific tasks at a much lower cost.
“And there’s substantial evidence that what DeepSeek did here is they distilled the knowledge out of OpenAI models, and I don’t think OpenAI is very happy about this.”
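The distillation technique described above can be sketched in miniature. This is a hypothetical illustration, not anything DeepSeek or OpenAI has published: the core idea is that a “student” model is trained to match the full output distribution of a “teacher” model, typically softened with a temperature parameter, rather than learning only from hard labels.

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by temperature; a higher temperature produces a
    # softer (more uniform) probability distribution.
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # Cross-entropy between the teacher's softened distribution and the
    # student's: the student is rewarded for matching the teacher's whole
    # output distribution, not just its top prediction.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

# A student whose outputs resemble the teacher's incurs a lower loss
# than one whose outputs diverge from them.
teacher = [4.0, 1.0, 0.2]
close_student = [3.5, 1.2, 0.3]
far_student = [0.2, 1.0, 4.0]

assert distillation_loss(teacher, close_student) < distillation_loss(teacher, far_student)
```

Minimizing this loss over many teacher outputs is what transfers the larger model's behavior into the smaller one; the temperature value and loss form here are common conventions, not details from the article.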
https://www.ft.com/content/a0dfedd1-5255-4fa9-8ccc-1fe01de87ea6