It is not hard to understand. I understand why China is censoring DeepSeek. I am saying that censorship makes DeepSeek a lot less useful.
I think that is pretty obvious.
Sure, other AI search engines have limitations. But surely you must recognize that some restrictions are going to have far greater impact on the accuracy of the final product than others.
ChatGPT users will be limited by the amount of profanity in their answers. DeepSeek users will be limited in the amount of information about China. Which do you think is going to have a more significant impact on the accuracy of the answers?
No, China is not censoring DeepSeek. The DeepSeek model is built to conform to Chinese laws. There is a huge difference. And since DeepSeek is open source, when you host it in the US there is no Chinese censorship of that model.
If you are going to view an AI model through the prism of a search engine and Western, American sensitivities… I am sorry, you don't understand what an AI model is.
This clearly demonstrates your understanding, or lack thereof, of an AI model. Just to be clear, DeepSeek users will actually be getting far better, more reliable information about China than ChatGPT users. It may be difficult for you to understand on a first read. Let it sink in and think about it. That's what the standard tests are showing. BTW, avoiding certain subjects doesn't make a model more or less useful.
For example: none of ChatGPT's models are trained on genome sequences. Does that make them a lot less useful? Or, supposing DeepSeek were trained on them, would it be more useful, or less, because it will not answer about Tiananmen Square?
I actually wanted to write about how to evaluate an AI model and how DeepSeek scores compared to other models. Then I realized we are not discussing that; we are discussing what you think is most important. Your flawed assumption that China is censoring, and that DeepSeek is therefore a lot less useful… shows a misunderstanding of what an AI model is.
Perhaps it would be better and easier if you just said, "I trust only an American model and would prefer one over anything else any day." No one can argue with that.
Instead, the points you are making are just irrelevant noise.
As long as you are using DeepSeek, there is, because it is a distill of the original model.
As long as it is what the party wants you to know. Otherwise it won't. Without understanding China, you won't know what is true and what isn't, so you will think everything it puts out is correct.
I have a problem with this. DeepSeek is an LLM, which should be factual; the reason it is not is human intervention, when factuality should be the end goal. You keep saying "AI model" as if that sets DeepSeek apart from an LLM.
But DeepSeek's censorship is baked in, according to a Wired investigation, which found that the model is censored at both the application and training levels.
For example, a locally run version of DeepSeek revealed to Wired, via its reasoning feature, that it should "avoid mentioning" events like the Cultural Revolution and focus only on the "positive" aspects of the Chinese Communist Party.
While DeepSeek's code is open, its training data is not known. When an AI is trained on biased data, that bias becomes embedded in the model. It may be possible to remove the pro-China censorship from DeepSeek, but it apparently isn't easy.
WIRED found that while the most straightforward censorship can be easily avoided by not using DeepSeek's app, there are other types of bias baked into the model during the training process. Those biases can be removed too, but the procedure is much more complicated. Here's How DeepSeek Censorship Actually Works - and How to Get Around It | WIRED
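The distinction Wired draws can be shown with a toy sketch: application-level censorship is just a filter wrapped around whatever the model generates, and it disappears the moment you run the open weights yourself, while training-level bias lives inside the weights and survives self-hosting. All the function names, the blocklist, and the stand-in "model" below are hypothetical, purely for illustration:

```python
# Toy illustration: application-level censorship is a post-hoc wrapper.
# Self-hosting the open weights skips the wrapper; training-level bias,
# by contrast, is baked into generation itself and is not skipped.

REFUSAL = "Sorry, that's beyond my current scope."

# Hypothetical blocklist, for illustration only.
BLOCKED_TOPICS = ["tiananmen", "cultural revolution"]

def app_level_filter(model_output: str) -> str:
    """Replace any output touching a blocked topic with a refusal."""
    lowered = model_output.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return REFUSAL
    return model_output

def hosted_app(generate, prompt: str) -> str:
    """The official app: model output passes through the filter."""
    return app_level_filter(generate(prompt))

def self_hosted(generate, prompt: str) -> str:
    """Self-hosting: no wrapper, but any bias trained into
    `generate` itself still shapes the answer."""
    return generate(prompt)

# Stand-in "model" so the sketch runs without real weights.
def fake_generate(prompt: str) -> str:
    return f"Here is some text about {prompt}."

print(hosted_app(fake_generate, "Tiananmen Square"))   # filtered: refusal
print(self_hosted(fake_generate, "Tiananmen Square"))  # raw model output
```

This is why the same open weights can behave very differently in the official app versus on your own machine, and why the training-level bias Wired describes is the harder problem: there is no wrapper to strip off.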
There are economic consequences according to Wired.
These findings have major implications for DeepSeek and Chinese AI companies generally. If the censorship filters on large language models can be easily removed, it will likely make open-source LLMs from China even more popular, as researchers can modify the models to their liking. If the filters are hard to get around, however, the models will inevitably prove less useful and could become less competitive on the global market.
DeepSeek is like using an encyclopedia with many deliberate omissions AND false statements of fact. Who is responsible for false claims that result from those omissions and false statements being presented as factually true?