Are we focusing on the right things?

It is not hard to understand. I understand why China is censoring DeepSeek. I am saying that censorship makes DeepSeek a lot less useful.

I think that is pretty obvious.

Sure, other AI search engines have limitations. But surely you must recognize that some restrictions are going to have far greater impact on the accuracy of the final product than others.

ChatGPT users will be limited in the amount of profanity in their answers. DeepSeek users will be limited in the amount of information about China. Which do you think will have the more significant impact on the accuracy of the answers?


So what?

Deepseek is not factual. “1984” is closer than you realize.

State decides Pi = 3. How does THAT work again? It was tried in at least one whacko state.


No, China is not censoring DeepSeek. The DeepSeek model is built to conform to Chinese laws. There is a huge difference. And since DeepSeek is open source, when you host it in the US, there is no Chinese censorship of that model.

If you are going to view an AI model through the prism of a search engine and Western, American-centric values… I am sorry, you don’t understand what an AI model is.

This clearly demonstrates your understanding, or lack thereof, of an AI model. Just to be clear, DeepSeek users will actually be getting far better, more reliable information about China than ChatGPT users. It will be difficult for you to understand on the first read. Let it sink in and think about it. That’s what the standard tests are showing. BTW, avoiding certain subjects doesn’t make a model more or less useful.

For example: none of the ChatGPT models are trained on genome sequences. Does that make them a lot less useful? Or suppose DeepSeek were trained on them: would it be more useful, or less, because it will not answer about Tiananmen Square?

I actually wanted to write about how to evaluate an AI model and how DeepSeek scores compared to other models. Then I realized we are not discussing that; we are discussing what you think is most important. Your flawed assumption that China is censoring, and that DeepSeek is therefore a lot less useful, shows a misunderstanding of what an AI model is.

Perhaps it would be better and easier if you just said: I trust only American models and prefer them over anything else, any day. No one can argue with that.

Instead, the points you are making are just irrelevant noise.

An AI model is not a dictionary or a reference book. First understand what an AI model is, and then we can discuss.

As long as you are using DeepSeek there, it is a distill of the original model.

It will be, as long as it is what the party wants you to know; otherwise it won’t. Since you don’t understand China, you won’t know what is true and what isn’t, so you will think everything it puts out is correct.

I would be willing to listen to that.


I have a problem with this. DeepSeek is an LLM, which should be factual; the reason it is not is human intervention, even though factuality should be the end goal. You keep saying “AI model” as if that sets DeepSeek apart from an LLM.


So you say. But others disagree.

But DeepSeek’s censorship is baked-in, according to a Wired investigation which found that the model is censored on both the application and training levels.

For example, a locally run version of DeepSeek revealed to Wired thanks to its reasoning feature that it should “avoid mentioning” events like the Cultural Revolution and focus only on the “positive” aspects of the Chinese Communist Party.

A quick check by TechCrunch of a locally run version of DeepSeek available via Groq also showed clear censorship: DeepSeek happily answered a question about the Kent State shootings in the U.S., but replied “I cannot answer” when asked about what happened in Tiananmen Square in 1989. (No, DeepSeek isn't uncensored if you run it locally | TechCrunch)
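The spot-check TechCrunch describes is easy to reproduce against any locally hosted DeepSeek distill. A minimal sketch, assuming an Ollama-style server on `localhost:11434` serving a model tagged `deepseek-r1` (the host, port, model tag, and refusal-phrase list are all my assumptions, not details from the thread or the articles):

```python
# Sketch: ask a locally hosted model the same two questions TechCrunch used
# and flag replies that read like refusals. Assumes an Ollama server on
# localhost:11434 with a model tagged "deepseek-r1" -- both assumptions.
import json
import urllib.request

# Crude, hypothetical refusal markers; real refusals vary in wording.
REFUSAL_MARKERS = ("i cannot answer", "i can't answer", "cannot discuss")

def looks_censored(answer: str) -> bool:
    """Heuristic: does the reply read like a refusal?"""
    lowered = answer.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def ask(prompt: str, model: str = "deepseek-r1") -> str:
    """Send one non-streaming generation request to a local Ollama server."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# Usage (requires a running server):
#   for topic in ("the Kent State shootings", "Tiananmen Square in 1989"):
#       reply = ask(f"What happened at {topic}?")
#       print(topic, "->", "refusal" if looks_censored(reply) else "answered")
```

If the training-level censorship Wired describes is real, the second prompt should trip the refusal check even though no Chinese-hosted app is involved.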

DeepSeek was run on Perplexity using Western servers. It was still censored. (Perplexity lets you try DeepSeek R1 without the security risk, but it's still censored | ZDNET)

While DeepSeek’s code and weights are open, its training data is not known. When AI is trained on biased data, that bias becomes embedded in the model. It may be possible to remove the pro-China censorship from DeepSeek, but it apparently ain’t easy.

WIRED found that while the most straightforward censorship can be easily avoided by not using DeepSeek’s app, there are other types of bias baked into the model during the training process. Those biases can be removed too, but the procedure is much more complicated. (Here’s How DeepSeek Censorship Actually Works—and How to Get Around It | WIRED)

There are economic consequences according to Wired.

These findings have major implications for DeepSeek and Chinese AI companies generally. If the censorship filters on large language models can be easily removed, it will likely make open-source LLMs from China even more popular, as researchers can modify the models to their liking. If the filters are hard to get around, however, the models will inevitably prove less useful and could become less competitive on the global market.

DeepSeek is like an encyclopedia with many deliberate omissions AND false statements of fact. Who is responsible for false claims that result from those omissions and false statements being presented as factually true?
