Adrian’s Arena: AI and the Global Shift – What Trump’s 2024 Victory Means for AI in Asia

With Trump’s 2024 re-election, Asian nations might push for self-reliant AI ecosystems, regional partnerships, and stronger privacy standards.

TL;DR

  • Donald Trump’s 2024 presidential win could reshape AI development in Asia by prompting self-reliant AI ecosystems, more regional partnerships, and increased privacy standards.
  • Asian nations may accelerate AI innovation and talent development to reduce reliance on U.S. tech, particularly as they anticipate policy shifts under the new administration.
  • Asian companies are positioned to thrive, offering privacy-compliant, localised AI insights that align with Asia’s unique market dynamics during this new Trump era.

What now for AI?

The re-election of Donald Trump to the U.S. presidency is sure to have profound global impacts, particularly in areas like artificial intelligence (AI). In Asia, where AI adoption is already soaring, Trump’s approach to foreign policy, technology, and economic partnerships may drive significant shifts in both public and private AI ventures.

This article explores how the changing political landscape could reshape AI in Asia and how businesses are poised to navigate and leverage these shifts.

AI Regulation and Innovation: A Push for Autonomy

Trump’s leadership may spur a greater focus on AI autonomy in Asia, encouraging countries to develop homegrown AI solutions across various industries. For example, healthcare data analytics in Singapore, fintech solutions in India, and consumer insights platforms in Japan could see accelerated development as these nations prioritise self-reliance.

Several companies in Asia are well-positioned to contribute, offering privacy-compliant AI insights that help brands tailor messaging without relying on U.S.-based tech giants.

Trade Policies and Tech Partnerships: Redrawing Lines

With Trump’s trade policies likely to maintain a “protectionist” edge, tech partnerships across the Pacific may become more complex, leading Asia’s leading economies to bolster regional AI collaborations. This may foster tighter partnerships within Asia, where companies can provide high-impact AI solutions tailored to local consumer behaviours and trends.

Research Funding and Education: A New Wave of Asian Talent

The expected restrictions on U.S. visas for Asian students and researchers could spark a wave of investment in AI education and talent retention across Asia. AI companies can support this talent surge by offering real-world, Asia-specific AI applications, from data analytics to customer insights and digital advertising.

Practical programmes in Asia, especially in Singapore, offer hands-on AI training that equips professionals with critical skills for driving regional innovation, positioning Asia as a powerhouse for AI expertise.

AI-Powered Defense and Cybersecurity: Strengthening Regional Security

As Asian nations fortify their defences in response to Trump’s renewed focus on military alliances, AI-driven cybersecurity solutions are expected to see considerable growth. AI companies in Asia are poised to address emerging threats with precision and speed.

For instance, Asian technology could support national cybersecurity initiatives by identifying threat patterns in real-time across public data sources, providing governments and enterprises with actionable insights for safeguarding critical infrastructure.

Privacy and Data Ownership: Asia’s Standards vs. the U.S. Approach

Asia’s data governance standards are set to diverge further from those in the U.S., especially with Trump’s preference for lighter tech regulation. This shift aligns with ad tech’s approach to delivering privacy-compliant audience insights, offering Asia-based companies a way to engage their customers effectively without compromising data security.

Impact on the AI Talent Pipeline: Challenges and Opportunities

Trump’s immigration policies could impact the AI talent pipeline to the U.S., pushing many skilled AI professionals to remain in Asia. Companies can leverage this shift by tapping into local AI talent for projects that require regional expertise.

By prioritising local talent, companies can ensure that solutions align with Asia’s unique market demands, from local consumer insights to culturally resonant AI-driven advertising.

As a result, Asian companies and their partners can benefit from deeper market understanding, making their campaigns more impactful across Asia.

A Shift Towards Pan-Asian AI Standards

With Trump’s policies creating a potential divide in AI development approaches, Asian countries may push for unified AI standards within the region. By aligning AI governance across economies, Asia could build a formidable framework that encourages innovation while ensuring ethical usage and robust privacy protections.

Countries like Japan, South Korea, and Singapore are already leaders in setting high AI standards, and an Asia-wide approach could help establish a distinctive identity in the global AI community.

This alignment would also reduce friction for companies operating across multiple Asian markets, fostering an interconnected ecosystem that accelerates growth and adaptability.

The Rise of Localised AI Applications

As trade and regulatory landscapes shift, there’s an increased incentive for Asian companies to design AI solutions that cater to local languages, cultural nuances, and consumer behaviours. Localisation has always been a critical factor for success in Asia, and AI is no exception.

From natural language processing that understands regional dialects to AI-driven marketing insights that resonate with unique consumer mindsets, tailored AI applications could see a significant boost.

This emphasis on localisation not only enhances user experience but also ensures that AI remains relevant and effective in each unique market across the continent.

Conclusion: A New Era for AI in Asia

The Trump presidency may catalyse a new chapter for AI in Asia. As Asian nations brace for potential shifts in trade and technology policies, they are well-positioned to accelerate regional AI innovation, self-sufficiency, and collaboration.

By investing in local talent, fostering privacy-compliant solutions, and collaborating across the region, companies like SQREEM are driving Asia’s transformation into a global AI powerhouse.

While the future may be uncertain under a second Trump era, we at least know it won’t be boring for the AI industry!

Join the Conversation

As AI in Asia surges towards autonomy and privacy-first innovation, will Trump’s policies drive the region to outperform the U.S. in tech advancements? Or are we on the cusp of a global AI divide? Please share your thoughts and don’t forget to subscribe for updates on AI and AGI developments.

Author

  • Adrian Watkins (Guest Contributor)

    Adrian is an AI, marketing, and technology strategist based in Asia, with over 25 years of experience in the region. Originally from the UK, he has worked with some of the world’s largest tech companies and successfully built and sold several tech businesses. Currently, Adrian leads commercial strategy and negotiations at one of ASEAN’s largest AI companies. Driven by a passion to empower startups and small businesses, he dedicates his spare time to helping them boost performance and efficiency by embracing AI tools. His expertise spans growth and strategy, sales and marketing, go-to-market strategy, AI integration, startup mentoring, and investments.

The Dirty Secret Behind Your Favourite AI Tools

This piece explores the hidden environmental costs of AI, focusing on electricity and water consumption by popular models like ChatGPT. It unpacks why companies don’t disclose energy usage, shares sobering statistics, and spotlights efforts pushing for transparency and sustainability in AI development.

The environmental cost of artificial intelligence is rising fast — yet the industry remains largely silent. Here’s why that needs to change.

TL;DR — What You Need To Know

  • AI systems like ChatGPT and Google Gemini require immense electricity and water for training and daily use
  • There’s no universal standard or regulation requiring AI companies to report their energy use
  • Estimates suggest AI-related electricity use could exceed 326 terawatt-hours per year by 2028
  • Lack of transparency hides the true cost of AI and hinders efforts to build sustainable infrastructure
  • Organisations like the Green Software Foundation are working to make AI’s carbon footprint more measurable

AI Is Booming — So Is Its Environmental Impact

AI might be the hottest acronym of the decade, but one of its most inconvenient truths remains largely hidden from view: the vast, unspoken energy toll of its everyday use.

With more than 400 million weekly users, OpenAI’s ChatGPT ranks among the five most visited websites globally. And it’s just the tip of the digital iceberg. Generative AI is now baked into apps, search engines, work tools, and even dating platforms. It’s ubiquitous — and ravenous.

Yet for all the attention lavished on deepfakes, hallucinations and the jobs AI might replace, its environmental footprint receives barely a whisper.

Why AI’s Energy Use is Such a Mystery

Training a large language model is a famously resource-intensive endeavour. But what’s less known is that every single prompt you feed into a chatbot also eats up energy — often equivalent to seconds or minutes of household appliance use.

The problem is we still don’t really know how much energy AI systems consume. There are no legal requirements for companies to disclose model-specific carbon emissions and no global framework for doing so. It’s the wild west, digitally speaking.

Why? Three reasons:

  • Commercial secrecy: Disclosing energy metrics could expose architectural efficiencies and other competitive insights
  • Technical complexity: Models operate across dispersed infrastructure, making attribution a challenge
  • Narrative management: Big Tech prefers to market AI as a net-positive force, not a planetary liability

The result is a conspicuous silence — one that researchers, journalists and environmentalists are now struggling to fill.

The stats we do have are eye-watering

MIT Technology Review recently offered a sobering benchmark: a 5-second AI-generated video might burn the same energy as an hour-long microwave session.

Even a text-based chatbot query could cost up to 6,700 joules. Scale that by billions of queries per day and you’re looking at a formidable energy footprint. Add visuals or interactivity and the costs balloon.
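
To put those numbers in context, here is a rough back-of-the-envelope calculation in Python. It takes the 6,700-joule upper estimate quoted above and assumes, purely for illustration, one billion text queries a day; that volume is an assumption for the sake of the sketch, not a reported figure.

```python
# Back-of-the-envelope scaling of per-query energy to a daily and yearly total.
# The 6,700 J per text query figure is the upper estimate cited above; the
# query volume is an illustrative assumption, not a measured statistic.
JOULES_PER_QUERY = 6_700            # upper-bound estimate for one chatbot query
QUERIES_PER_DAY = 1_000_000_000     # assumed: one billion text queries per day

joules_per_day = JOULES_PER_QUERY * QUERIES_PER_DAY
kwh_per_day = joules_per_day / 3_600_000      # 1 kWh = 3.6 million joules
gwh_per_year = kwh_per_day * 365 / 1_000_000  # 1 GWh = 1 million kWh

print(f"Daily energy: {kwh_per_day:,.0f} kWh")    # roughly 1.9 million kWh per day
print(f"Yearly energy: {gwh_per_year:,.0f} GWh")  # roughly 680 GWh per year
```

Even on those assumptions, text chat alone works out to roughly 1.9 GWh a day, before images, video or model training enter the picture.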

The broader data centre landscape is equally stark. In 2024, U.S. data centres were estimated to use around 200 terawatt-hours of electricity — roughly the same as Thailand’s annual consumption. By 2028, AI alone could push this to 326 terawatt-hours.

That’s equivalent to:

  • Powering 22% of American homes
  • Driving over 300 billion miles
  • Completing 1,600 round trips to the sun (in carbon terms)

Water usage, often overlooked, is another major concern. AI infrastructure guzzles water for cooling, posing risks during heatwaves and water shortages. As AI adoption grows, so too does this hidden drain on natural resources.

What’s being done — and who’s trying to fix it

A handful of organisations are beginning to push for accountability.

The Green Software Foundation — backed by Microsoft, Google, Siemens, and others — is creating sustainability standards tailored for AI. Through its Green AI Committee, it champions:

  • Lifecycle carbon accounting
  • Open-source tools for energy tracking
  • Real-time carbon intensity metrics

Meanwhile, governments are cautiously stepping in. The EU AI Act encourages sustainability via risk assessments. In the UK, the AI Opportunities Action Plan and British Standards Institution are working on guidance for measuring AI’s carbon toll.

Still, these are fledgling efforts in an industry sprinting ahead. Without enforceable mandates, they risk becoming toothless.

Why transparency matters more than ever for AI’s carbon emissions

We can’t manage what we don’t measure. And in AI, the stakes are immense.

Without accurate data, regulators can’t design smart policies. Infrastructure planners can’t future-proof grids. Consumers and businesses can’t make ethical choices.

Most of all, AI firms can’t credibly claim to build a better world while masking the true environmental cost of their platforms. Sustainability isn’t a PR sidecar — it must be built into the business model.

So yes, generative AI may be dazzling. But if it’s to earn its place in a sustainable digital future, the first step is brutally simple: tell us how much it costs to run.


How To Teach ChatGPT Your Writing Style

This warm, practical guide explores how professionals can shape ChatGPT’s tone to match their own writing style. From defining your voice to smart prompting and memory settings, it offers a step-by-step approach to turning ChatGPT into a savvy writing partner.

TL;DR — What You Need To Know

  • ChatGPT can mimic your writing tone with the right examples and prompts
  • Start by defining your personal style, then share it clearly with the AI
  • Use smart prompting, not vague requests, to shape tone and rhythm
  • Custom instructions and memory settings help ChatGPT “remember” you
  • It won’t be perfect — but it can become a valuable creative sidekick.

Start by defining your voice

Before ChatGPT can write like you, you need to know how you write. This may sound obvious, but most professionals haven’t clearly articulated their voice. They just write.

Think about your usual tone. Are you friendly, brisk, poetic, slightly sarcastic? Do you use short, direct sentences or long ones filled with metaphors? Swear words? Emojis? Do you write like you talk?

Collect a few of your own writing samples: a newsletter intro, a social media post, even a Slack message. Read them aloud. What patterns emerge? Look at rhythm, vocabulary and mood. That’s your signature.

Show ChatGPT your writing

Now you’ve defined your style, show ChatGPT what it looks like. You don’t need to upload a manifesto. Just say something like:

“Here are three examples of my writing. Please analyse my tone, sentence structure and word choice. I’d like you to write like this moving forward.”

Then paste your samples. Follow up with:

“Can you describe my writing style in a few bullet points?”

You’re not just being polite. This step ensures you’re aligned. It also helps ChatGPT to frame your voice accurately before trying to imitate it.

Be sure to offer varied, representative examples. The more you reflect your daily writing habits across different formats (emails, captions, articles), the sharper the mimicry.
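
If you prefer working through the API rather than the app, this step can be scripted. The sketch below is illustrative only: it assumes the official openai Python package and an API key in your environment, and the model name and the placeholder samples are stand-ins to replace with your own.

```python
# Illustrative sketch: asking the model to analyse your writing samples via the API.
# Assumes the official `openai` package and an OPENAI_API_KEY environment variable;
# the model name and the sample texts below are placeholders.
from openai import OpenAI

client = OpenAI()

samples = [
    "Paste a newsletter intro here...",
    "Paste a social media post here...",
    "Paste a Slack message here...",
]

prompt = (
    "Here are three examples of my writing. Please analyse my tone, "
    "sentence structure and word choice, then describe my style in a few "
    "bullet points.\n\n" + "\n\n---\n\n".join(samples)
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)  # the style summary you will reuse
```

Keep the bullet-point summary it returns; that becomes the style brief you paste into later prompts.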

Prompt with purpose

Once ChatGPT knows how you write, the next step is prompting. And this is where most people stumble. Saying, “Make it sound like me” isn’t quite enough.

Instead, try:

“Rewrite this in my tone — warm, conversational, and a little cheeky.”

“Avoid sounding corporate. Use contractions, variety in sentence length and clear rhythm.”

Yes, you may need a few back-and-forths. But treat it like any editorial collaboration — the more you guide it, the better the results.

And once a prompt nails your style? Save it. That one sentence could be reused dozens of times across projects.

Use memory and custom instructions

ChatGPT now lets you store tone and preferences in memory. It’s like briefing a new hire once, rather than every single time.

Start with Custom Instructions (in Settings > Personalisation). Here, you can write:

“I use conversational English with dry humour and avoid corporate jargon. Short, varied sentences. Occasionally cheeky.”

Once saved, these tone preferences apply by default.
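
For API users, a reusable system message plays much the same role as Custom Instructions. The sketch below is again only a minimal illustration built on the openai package; the model name and the style brief are placeholders to swap for your own.

```python
# Rough API analogue of Custom Instructions: a reusable system message that
# carries your tone brief into every request. Illustrative only; the model
# name is a placeholder and the brief is an example.
from openai import OpenAI

client = OpenAI()

STYLE_BRIEF = (
    "I use conversational English with dry humour and avoid corporate jargon. "
    "Short, varied sentences. Occasionally cheeky."
)

def rewrite_in_my_voice(text: str) -> str:
    """Rewrite `text` with the stored style brief applied as a system message."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": STYLE_BRIEF},
            {"role": "user", "content": f"Rewrite this in my tone:\n\n{text}"},
        ],
    )
    return response.choices[0].message.content

print(rewrite_in_my_voice("We are pleased to announce our Q3 results."))
```

Because the brief travels with every request, you only have to write it once.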

There’s also memory, where ChatGPT remembers facts and stylistic traits across chats. Paid users have access to broader, more persistent memory. Free users get a lighter version but still benefit.

Just say:

“Please remember that I like a formal tone with occasional wit.”

ChatGPT will confirm and update accordingly. You can always check what it remembers under Settings > Personalisation > Memory.

Test, tweak and give feedback

Don’t be shy. If something sounds off, say so.

“This is too wordy. Try a punchier version.”

“Tone down the enthusiasm — make it sound more reflective.”

Ask ChatGPT why it wrote something a certain way. Often, the explanation will give you insight into how it interpreted your tone, and let you correct misunderstandings.

As you iterate, this feedback loop will sharpen your AI writing partner’s instincts.

Use ChatGPT as a creative partner, not a clone

This isn’t about outsourcing your entire writing voice. AI is a tool — not a ghostwriter. It can help organise your thoughts, start a draft or nudge you past a creative block. But your personality still counts.

Some people want their AI to mimic them exactly. Others just want help brainstorming or structure. Both are fine.

The key? Don’t expect perfection. Think of ChatGPT as a very keen intern with potential. With the right brief and enough examples, it can be brilliant.


Adrian’s Arena: Will AI Get You Fired? 9 Mistakes That Could Cost You Everything

Will AI get you fired? Discover 9 career-killing AI mistakes professionals make—and how to avoid them.

TL;DR — What You Need to Know:

  • Common AI mistakes can cost you your job, and they can happen fast.
  • Most are fixable if you know what to watch for.
  • Avoid these pitfalls and make AI your career superpower.

Don’t blame the robot.

If you’re careless with AI, it’s not just your project that tanks — your career could be next.

Across Asia and beyond, professionals are rushing to implement artificial intelligence into workflows — automating reports, streamlining support, crunching data. And yes, done right, it’s powerful. But here’s what no one wants to admit: most people are doing it wrong.

I’m not talking about missing a few prompts or failing to generate that killer deck in time. I’m talking about the career-limiting, confidence-killing, team-splintering mistakes that quietly build up and explode just when it matters most. If you’re not paying attention, AI won’t just replace your role — it’ll ruin your reputation on the way out.

Here are 9 of the most common, most damaging AI blunders happening in businesses today — and how you can avoid making them.

1. You can’t fix bad data with good algorithms.

Let’s start with the basics. If your AI tool is churning out junk insights, odds are your data was junk to begin with. Dirty data isn’t just inefficient — it’s dangerous. It leads to flawed decisions, mis-targeted customers, and misinformed strategies. And when the campaign tanks or the budget overshoots, guess who gets blamed?

The solution? Treat your data with the same respect you’d give your P&L. Clean it, vet it, monitor it like a hawk. AI isn’t magic. It’s maths — and maths hates mess.
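
What does “clean it, vet it, monitor it” look like in practice? Here is a minimal sketch in Python using pandas; the file name and column names are hypothetical, and a real pipeline would add schema checks and ongoing monitoring on top.

```python
# A minimal data-vetting pass before any model sees the data.
# Sketch only: the file name and column names are hypothetical.
import pandas as pd

df = pd.read_csv("campaign_data.csv")  # hypothetical input file

# 1. How much is missing, and where?
print(df.isna().mean().sort_values(ascending=False).head(10))

# 2. Exact duplicates that would double-count customers
print(f"Duplicate rows: {df.duplicated().sum()}")

# 3. Obviously impossible values (example check: negative spend)
if "spend" in df.columns:
    print(f"Negative spend rows: {(df['spend'] < 0).sum()}")

# Drop exact duplicates and rows that are mostly empty, rather than silently imputing
clean = df.drop_duplicates()
clean = clean[clean.isna().mean(axis=1) < 0.5]
```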

2. Don’t just plug in AI and hope for the best.

Too many teams dive into AI without asking a simple question: what problem are we trying to solve? Without clear goals, AI becomes a time-sink — a parade of dashboards and models that look clever but achieve nothing.

Worse, when senior stakeholders ask for results and all you have is a pretty interface with no impact, that’s when credibility takes a hit.

AI should never be a side project. Define its purpose. Anchor it to business outcomes. Or don’t bother.

3. Ethics aren’t optional — they’re existential.

You don’t need to be a philosopher to understand this one. If your AI causes harm — whether that’s through bias, privacy breaches, or tone-deaf outputs — the consequences won’t just be technical. They’ll be personal.

Companies can weather a glitch. What they can’t recover from is public outrage, legal fines, or internal backlash. And you, as the person who “owned” the AI, might be the one left holding the bag.

Bake in ethical reviews. Vet your training data. Put in safeguards. It’s not overkill — it’s job insurance.

4. Implementation without commitment is just theatre.

I’ve seen it more than once: companies announce a bold AI strategy, roll out a tool, and then… nothing. No training. No process change. No follow-through. That’s not innovation. That’s box-ticking.

If you half-arse AI, it won’t just fail — it’ll visibly fail. Your colleagues will notice. Your boss will ask questions. And next time, they might not trust your judgement.

AI needs resourcing, support, and leadership. Otherwise, skip it.

5. You can’t manage what you can’t explain.

Ever been in a meeting where someone says, “Well, that’s just what the model told us”? That’s a red flag — and a fast track to blame when things go wrong.

So-called “black box” models are risky, especially in regulated industries or customer-facing roles. If you can’t explain how your AI reached a decision, don’t expect others to trust it — or you.

Use interpretable models where possible. And if you must go complex, document it like your job depends on it (because it might).
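
If you want a concrete starting point, the sketch below uses scikit-learn’s permutation importance on synthetic data to show which features actually drive a model’s predictions. It is an illustration, not a prescription; in a real project you would run it on your own data and file the output alongside your model documentation.

```python
# Illustrative only: quantifying which features drive a model's predictions,
# using scikit-learn's permutation importance on synthetic data.
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2_000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# How much does shuffling each feature hurt accuracy on held-out data?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```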

6. Face the bias before it becomes your headline.

Facial recognition failing on darker skin tones. Recruitment tools favouring men. Chatbots going rogue with offensive content. These aren’t just anecdotes — they’re avoidable, career-ending screw-ups rooted in biased data.

It’s not enough to build something clever. You have to build it responsibly. Test for bias.

Diversify your datasets. Monitor performance. Don’t let your project become the next PR disaster.
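
A basic bias test can be as simple as comparing error rates across groups. The sketch below assumes a hypothetical predictions file with group, actual and predicted columns; a real audit goes much further, but this is the minimum worth automating.

```python
# Simplest possible bias check: compare error rates across a sensitive group.
# Sketch only; `predictions.csv` and its column names are hypothetical.
import pandas as pd

df = pd.read_csv("predictions.csv")  # expected columns: group, actual, predicted

errors = (df["actual"] != df["predicted"]).astype(int)
by_group = errors.groupby(df["group"]).mean()  # one error rate per group
overall = errors.mean()

print(by_group)

# Flag any group whose error rate is well above the overall average
for group, rate in by_group.items():
    if rate > overall * 1.25:
        print(f"Warning: error rate for '{group}' is {rate:.1%} vs {overall:.1%} overall")
```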

7. Training isn’t optional — it’s survival.

If your team doesn’t understand the tool you’ve introduced, you’re not innovating — you’re endangering operations. AI can amplify productivity or chaos, depending entirely on who’s driving.

Upskilling is non-negotiable. Whether it’s hiring external expertise or running internal workshops, make sure your people know how to work with the machine — not around it.

8. Long-term vision beats short-term wow.

Sure, the first week of AI adoption might look good. Automate a few slides, speed up a report — you’re a hero.

But what happens three months down the line, when the tool breaks, the data shifts, or the model needs recalibration?

AI isn’t set-and-forget. Plan for evolution. Plan for maintenance. Otherwise, short-term wins can turn into long-term liabilities.

9. When everything’s urgent, documentation feels optional.

Until someone asks, “Who changed the model?” or “Why did this customer get flagged?” and you have no answers.

In AI, documentation isn’t admin — it’s accountability.

Keep logs, version notes, data flow charts. Because sooner or later, someone will ask, and “I’m not sure” won’t cut it.
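
Even a few lines of structured logging go a long way here. The sketch below uses only the Python standard library; the field names and the model version string are examples, not a prescribed schema.

```python
# Minimal structured audit log for model decisions, standard library only.
# Sketch only: the field names and model version string are examples.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="model_audit.log", level=logging.INFO)

def log_decision(model_version: str, input_id: str, output, reason: str) -> None:
    """Append one JSON line per decision, so questions like 'why was this
    customer flagged, and by which model version?' have an answer later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_id": input_id,
        "output": output,
        "reason": reason,
    }
    logging.info(json.dumps(record))

log_decision("credit-risk-v1.3", "customer-0042", "flagged", "score above threshold")
```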

Final Thoughts: AI doesn’t cost jobs. People misusing AI do.

Most AI mistakes aren’t made by the machines — they’re made by humans cutting corners, skipping checks, and hoping for the best. And the consequences? Lost credibility. Lost budgets. Lost roles.

But it doesn’t have to be that way.

Used wisely, AI becomes your competitive edge. A signal to leadership that you’re forward-thinking, capable, and ready for the future. Just don’t stumble on the same mistakes that are currently tripping up everyone else.

So the real question is: are you using AI… or is it quietly using you?

Author

  • Adrian Watkins (Guest Contributor)


