Industry Insider: How Asia is shaping AI investment

The debate over the capacity for artificial intelligence (AI) to reshape the economy and society is a defining issue of our time. This conversation is shaped by anxieties over job security, a shifting macroeconomic environment and geopolitical tensions, particularly between the US and China.
Central questions -- whether we are witnessing genuine growth or an unsustainable bubble, and whether AI will displace jobs or enhance productivity -- are strongly contested. The wide range of opinions, and the uncertainty about future technological progress, underscore the need to re-evaluate the frameworks used to analyse these dynamics.
This article provides a framework for analysing Asia’s regulatory environment and the implications for businesses. It then shifts to the investment landscape, assessing the familiar theme of data centres through the lens of Malaysia, a country rapidly expanding its capacity. It concludes by exploring ways to navigate Responsible AI (RAI) from a governance and risk management perspective.
HARD VS SOFT LAW
Asia's varied regulatory approaches create a challenging compliance landscape. Hard law frameworks, like those in China, impose strict legal requirements and penalties, as seen in the 2023 Provisions on the Administration of Generative AI Services. This provides regulatory certainty but can limit flexibility as technology evolves.
Conversely, Japan has favoured a soft law approach, relying on non-binding ethical guidelines. While adaptable, this can hinder consistent implementation. The 2025 AI Promotion Act marks an evolution, establishing cooperative governance to promote an AI ecosystem based on principles rather than explicit penalties. Hong Kong similarly relies on existing laws rather than a unified AI framework.
Beyond hard and soft law, other nations are developing distinct models.
South Korea’s upcoming Basic Act on the Development of AI will be one of Asia’s most comprehensive, mandating human oversight and rigorous risk assessments. Taiwan has introduced a principles-based draft law to balance innovation with responsible AI development.
Australia is working toward “AI guardrails” for high-risk applications, which may become law, while India relies on existing privacy and cybersecurity laws, like the Digital Personal Data Protection Act (DPDPA) of 2023.
Aligning business strategies with these evolving regulations is critical, as they often struggle to keep pace with technological innovation.
AI VALUE CHAIN
The global semiconductor value chain has long been defined by geographic specialisation, with Asia leading in midstream (fabrication) and downstream (assembly, testing, and packaging, or ATP) segments. Southeast Asian economies (especially Vietnam, Malaysia and the Philippines) are gaining share in ATP.

DATA CENTRES
As AI models grow, the infrastructure required to power them -- data centres -- has moved to the forefront of the investment universe. An Infrastructure Investor 2025 analysis shows data centres are now one of the most popular categories for infrastructure funds, driven by the substantial compute power and energy required for AI workloads.

INFRASTRUCTURE STRATEGIES
Strategies are categorised by their risk and return profiles, ranging from conservative to aggressive approaches.
At the most conservative end, core strategies focus on mature, stable assets with predictable cash flows like regulated utilities. Moving up the risk spectrum, core-plus strategies offer a blend of stability and growth by targeting high-quality assets that can be strategically improved.
For investors willing to accept higher risk, value-add strategies involve taking on more uncertainty to create value through operational improvements or upgrades to existing assets. At the highest end of the risk-return spectrum, opportunistic strategies often involve developing new assets from the ground up or investing in projects with significant transformation potential.
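For readers who prefer a structured view of this spectrum, the short sketch below (a minimal, purely illustrative Python snippet; the class and field names are this column's own shorthand, not an industry standard) orders the four strategies from most conservative to most aggressive, alongside the example assets discussed above.

```python
from dataclasses import dataclass
from enum import IntEnum


class RiskLevel(IntEnum):
    """Relative position on the risk-return spectrum (ordinal only)."""
    CORE = 1           # mature, stable assets with predictable cash flows
    CORE_PLUS = 2      # high-quality assets with room for strategic improvement
    VALUE_ADD = 3      # value created via operational improvements or upgrades
    OPPORTUNISTIC = 4  # greenfield development or transformational projects


@dataclass
class InfrastructureStrategy:
    name: str
    risk_level: RiskLevel
    typical_assets: str


STRATEGIES = [
    InfrastructureStrategy("Core", RiskLevel.CORE, "regulated utilities"),
    InfrastructureStrategy("Core-plus", RiskLevel.CORE_PLUS,
                           "high-quality assets that can be strategically improved"),
    InfrastructureStrategy("Value-add", RiskLevel.VALUE_ADD,
                           "existing assets upgraded or operationally improved"),
    InfrastructureStrategy("Opportunistic", RiskLevel.OPPORTUNISTIC,
                           "new-build or transformational projects"),
]

# List the strategies from most conservative to most aggressive.
for s in sorted(STRATEGIES, key=lambda s: s.risk_level):
    print(f"{s.name}: {s.typical_assets}")
```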

The interplay of these strategies with an economy’s data centre readiness and data localisation laws, which often require data for regulated sectors like finance and healthcare to be stored locally, shapes where investment may be directed.
CASE STUDY: MALAYSIA
Malaysia’s data centre market is classified as “Emerging,” with a current capacity of 507 MW and another 314 MW under construction, targeting a total of 821 MW -- a 25% growth rate, among Asia’s highest.
Its data centre expansion was spurred by Singapore's 2019–2022 moratorium, which was enacted due to concerns over the high environmental costs and intensive resource use of data centres. This three-year restriction redirected hyperscale demand to nearby markets, allowing Malaysia to gain a competitive edge with its affordable land and strong government support.
After lifting the moratorium, Singapore's government adopted a more selective approach, requiring new data centres to meet high standards for resource efficiency and to contribute to the nation's economic and strategic objectives.

Data centres in Malaysia are concentrated in Johor, home to the new Johor-Singapore Special Economic Zone (JS-SEZ). This joint initiative aims to enhance cross-border integration and attract global investment in priority sectors like the digital economy through special incentives and streamlined processes.
A key challenge for Malaysia's digital journey is its limited supply of low-carbon power, which accounts for only 20% of its power mix. This poses a significant risk given the high energy needs of AI and hyperscalers' net-zero goals.
Overcoming these hurdles requires Malaysia to align investment, policy and infrastructure.
Given the high growth potential of the market, investment opportunities will likely align with value-add and opportunistic strategies, as the market is not yet mature enough for core investments.

RESPONSIBLE AI FRAMEWORK
The integration of AI into critical sectors like finance, healthcare and education has brought Responsible AI (RAI) to the forefront. Businesses must proactively develop governance and risk assessments to ensure systems are safe and trustworthy. This is especially vital given emerging risks from technologies like Generative AI, which can hallucinate and produce inaccuracies.
Consider a health insurer deploying a large language model (LLM) for underwriting in an urban area. With a regulatory framework that is often a mix of existing laws and evolving guidelines, the insurer will need to develop its own governance and risk assessments.
This involves addressing key RAI principles, including protecting sensitive patient data through robust privacy measures. The insurer must also ensure sound data governance by maintaining the quality and integrity of the data used by the LLM.
Fairness and bias considerations are critical to prevent discriminatory underwriting decisions that could harm certain populations. Security and safety protocols must be established to guard against data breaches and system failures that could compromise patient information or business operations.
Additionally, transparency and explainability requirements mean the insurer must be able to explain to regulators and patients how the AI reached specific decisions, providing clear reasoning for underwriting outcomes.
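One way a risk team might operationalise these principles is a simple pre-deployment checklist. The sketch below is a minimal, hypothetical illustration in Python: the principle headings follow this article, but the example questions, the RAI_CHECKLIST name and the open_items helper are assumptions made for illustration, not a prescribed regulatory framework.

```python
# Illustrative sketch: translating the RAI principles above into a simple
# pre-deployment checklist an insurer's risk team might maintain.
# The principle names follow the article; the example questions are
# hypothetical and would need tailoring to the insurer's own regulators.

RAI_CHECKLIST = {
    "Privacy": [
        "Is sensitive patient data minimised, encrypted and access-controlled?",
        "Does processing comply with applicable data protection and localisation rules?",
    ],
    "Data governance": [
        "Is the quality and integrity of the data used by the LLM documented and monitored?",
    ],
    "Fairness and bias": [
        "Are underwriting outcomes tested for discriminatory impact across populations?",
    ],
    "Security and safety": [
        "Are there controls against data breaches and defined fallbacks for system failures?",
    ],
    "Transparency and explainability": [
        "Can the insurer explain to regulators and patients how a specific decision was reached?",
    ],
}


def open_items(answers: dict[str, dict[str, bool]]) -> list[str]:
    """Return checklist questions not yet evidenced as satisfied."""
    gaps = []
    for principle, questions in RAI_CHECKLIST.items():
        for q in questions:
            if not answers.get(principle, {}).get(q, False):
                gaps.append(f"{principle}: {q}")
    return gaps


# Example: before any assessment, every item is an open gap.
print(len(open_items({})), "open items before assessment")
```

In practice, a checklist like this would sit within the insurer's broader model-risk and data-protection governance rather than stand alone.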
Navigating these challenges is essential to unlocking the technology's transformative capacity.

Rodney Gollo, Founder of Rhodes Point Advisors, draws on global experience—including his tenure as Head of Risk for Bupa in Hong Kong—to analyse the investment landscape. He translates complex global risks into clear, actionable, and commercial insights.
He welcomes your feedback and ideas on LinkedIn.