- DeepSeek is an open-source AI model offering cost savings of up to 60 per cent compared with established LLMs.
- Major Singapore firms, including banks and consultancies, restrict employee use of generative AI tools like DeepSeek to avoid data security pitfalls.
- Early tests flag bias and potential data retention issues, including concerns that DeepSeek might store user prompts for further training.
- Some governments (South Korea, Italy, Australia) have blocked DeepSeek on official devices, reminiscent of ChatGPT's early bans.
- Enterprise indemnities (available from providers like Microsoft, IBM, and OpenAI) aren't yet offered by DeepSeek, adding a legal wrinkle for corporate users.
- A handful of businesses in Singapore do use DeepSeek, citing lower costs and strong performance for tasks like coding and customer support.
DeepSeek in Singapore—A Fresh AI Challenger Emerges
DeepSeek shot to fame when it launched its R1 model in January, confidently declaring it could match the performance of OpenAI’s tech at a fraction of the cost. According to the Chinese AI start-up behind it, R1 cost about S$7.6 million (RM24.8 million) to train—significantly less than the hundreds of millions typically spent by US tech giants on large language models (LLMs).
The initial response? Absolutely electric. R1 downloads soared, US tech stocks took a dip, and industry gurus started whispering that DeepSeek could disrupt the cosy world of established AI players like OpenAI, Google, and Amazon Web Services.
Why Singapore Is Taking a Careful Stance
Despite DeepSeek’s potential to slash costs (some say 40 to 60 per cent on infrastructure), many Singaporean firms are treading carefully. Big players, including banks and consulting agencies, have laid down strict rules to stop employees from diving into generative AI tools—DeepSeek included—without proper due diligence.
Why the reluctance? In a word: security. Concerns range from data privacy and AI bias to whether employees might (even inadvertently) feed confidential information into an external system, a risk flagged by Hanno Stegmann, Managing Director and Partner at Boston Consulting Group's (BCG) AI team.
Open-Source but Far From Problem-Free
DeepSeek’s open-source nature might be appealing to tech enthusiasts and smaller businesses—particularly those on a tight budget. The model’s cost-saving potential is real, and local AI consumer insights platform Ai Palette estimates substantial reductions in expensive computing resources.
But open-source doesn’t automatically mean everything’s rosy. Early tests suggest DeepSeek might not meet every responsible AI standard. Some critics say the model offers selective answers, especially around topics that might be censored by the Chinese government, raising questions about transparency and bias. For more on AI ethics, consider reading about India's AI Future: New Ethics Boards.
Then there’s the matter of data retention. Some experts worry that prompts and results typed into DeepSeek might be stored and used to further train the model. No one’s entirely sure how much data is kept or for how long. In a nutshell, yes, DeepSeek is cheaper. But it could also open a giant can of legal and privacy worms.
Governments and Legal Eagles Weigh In
A few countries—South Korea, Italy, and Australia—have outright blocked DeepSeek on government devices, citing security concerns. This echoes the early days of ChatGPT, when it, too, faced temporary restrictions in several jurisdictions. Huang's dire warning on the US-China tech war highlights the geopolitical tensions influencing such decisions.
Law firms in Singapore are equally cautious. RPC tech lawyer Nicholas Lauw notes that generative AI is off-limits for client data until safety is thoroughly established.
Firms like RPC and others are testing LLMs in carefully controlled environments, checking legal risks and data security measures before giving any green light. Taiwan, for example, is also working on defining responsible innovation with its AI law.
Indemnity and Enterprise Editions
Many big AI developers—think Microsoft, IBM, Adobe, Google, and OpenAI—offer enterprise products with indemnity clauses, effectively shielding corporate clients from certain legal risks. DeepSeek, however, currently doesn’t have such an enterprise version on the market.
“DeepSeek doesn’t have an enterprise product yet. It might be open-source, but this alone doesn’t protect corporate users from potential legal risks.”
In the meantime, banks like OCBC and UOB rely on internal AI chatbots for coding tasks or archiving. OCBC has put in place system restrictions to block external AI chatbots—DeepSeek included—unless they meet the bank’s stringent security checks. For broader insights, a report by the World Economic Forum on AI Governance details challenges in AI adoption and governance.
The Early Adopters
Not everyone is standing on the sidelines. Babbobox chief executive Alex Chan allows employees to experiment with multiple AI models, including DeepSeek, for inspiration and coding help. Wiz.AI has already integrated R1 for text-based customer support. And smaller businesses see DeepSeek as a fantastic cost-cutter to help them innovate without requiring monstrous computing setups.
Then there’s the potential bigger-picture impact on the local AI scene. According to Kenddrick Chan from LSE Ideas, DeepSeek’s lower-cost approach might encourage more Singapore-based firms to jump on the AI bandwagon and spur further experimentation in generative AI. Singapore itself is actively working to make its workforce AI bilingual.
So, What’s Next?
At present, Singapore’s Ministry of Digital Development and Information has taken the neutral route: it doesn’t typically comment on commercial products but advises companies to do their own thorough evaluations.
For many businesses, DeepSeek remains both exciting and nerve-racking. Stegmann from BCG sums it up nicely:
“It is fair to say that first releases of many LLMs had some issues at the beginning that had to be ironed out based on user feedback and changes made to the model.”
If DeepSeek can address nagging worries about data privacy, censorship bias, and enterprise-grade support, it may well carve a place for itself in Singapore’s AI market. For now, though, the jury’s still out—and corporate Singapore isn’t rushing to deliver its verdict.
And that’s the low-down on DeepSeek in Singapore!
Will it become a shining example of cost-effective AI innovation, or will data privacy worries hold it back? Only time—and thorough due diligence—will tell. In the meantime, keep those eyes peeled, dear readers. The AI space in Asia just got even more interesting. Don't forget to subscribe to our newsletter for the latest updates on DeepSeek in Singapore, as well as other news, tips and tricks here at AIinASIA! Or feel free to leave a comment below.
Latest Comments (3)
The comparison to government bans in South Korea and Italy is relevant. Singapore’s approach to DeepSeek, prioritising data security over immediate cost savings for major firms, aligns with a cautious national digital strategy. It reflects the ongoing challenge of balancing innovation with regulatory oversight, especially with open-source models lacking enterprise indemnities.
the comparison to ChatGPT's early bans for DeepSeek's government restrictions is interesting, but I think the cultural contexts are quite different. are we really seeing the same kind of public privacy concerns and data sovereignty pushes in south korea, italy, and australia that fueled the initial chatGPT backlash? or is deepseek's chinese origin playing a much bigger, perhaps unspoken, role in this hesitancy, especially given geopolitical sensitivities around data and technology? it makes me wonder how much of this caution is about technical vulnerabilities versus perceptions of national security.
oh wow, I just saw this. the part about DeepSeek costing only S$7.6 million to train the R1 model is wild. makes me wonder about some of the costs floating around for Japanese LLMs. been playing with some smaller Japanese models myself and the performance is honestly getting so good for much less investment. definitely something to keep an eye on.