Malaysia’s own large language model — trained on local language and data to understand our culture, context, and daily realities. Multimodal by design. Built and operated entirely in Malaysia, keeping data local and giving the nation strategic control over its AI future.

A dedicated API for OpenClaw and agent frameworks, powered by Nemo-Super and ILMU-Nemo-Nano, developed in collaboration with NVIDIA.
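The page does not document the API itself, so purely as an illustration, here is a minimal sketch of how an agent framework might talk to a chat-completions-style endpoint. The base URL, model identifier (`ilmu-nemo-nano`), and auth scheme are all hypothetical placeholders, not the real ILMU API.

```python
import json
import urllib.request

# Hypothetical values: the actual endpoint, model name, and auth scheme
# are not specified on this page.
BASE_URL = "https://api.example.com/v1/chat/completions"  # placeholder endpoint
MODEL = "ilmu-nemo-nano"  # hypothetical model identifier

def build_chat_request(user_message: str, api_key: str) -> urllib.request.Request:
    """Build a chat-completions-style HTTP request (constructed, not sent)."""
    payload = {
        "model": MODEL,
        "messages": [
            {"role": "system", "content": "Jawab dalam Bahasa Melayu."},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.7,
    }
    return urllib.request.Request(
        BASE_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_chat_request("Apakah ibu negara Malaysia?", api_key="YOUR_API_KEY")
```

Because the payload follows the widely used chat-completions shape, most agent frameworks that accept an OpenAI-compatible backend could be pointed at such an endpoint by swapping the base URL and model name.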
#AILOKAL: an assistant that understands how we talk, what we care about, and how things run in Malaysia.
ILMU is built to be intelligent: capable of learning and generating insightful, context-aware responses.
At its core, ILMU is guided by strong principles of ethics, integrity, and safety, reflecting Malaysia’s commitment to responsible innovation.
Developed in Malaysia, ILMU understands our languages, cultures, and everyday realities - built to support Malaysians and serve the nation.
ILMU is made for all Malaysians – whether you’re a student, a teacher, a civil servant, a business owner, or a curious individual. It’s designed to be accessible, useful, and relevant to your everyday needs.
Top Global LLM in Malay: Our flagship model, ILMU, outperforms other frontier models on the Malay MMLU benchmark and performs on par with, or better than, leading models across global benchmarks.
| Model | Language | STEM | Humanities | Social Sciences | Others | Overall |
|---|---|---|---|---|---|---|
| ILMU | 89.36 | 88.05 | 88.40 | 85.44 | 85.08 | 87.20 |
| GPT-4o | 87.64 | 83.54 | 87.78 | 82.84 | 82.34 | 84.97 |
| Deepseek-V3 | 83.13 | 83.91 | 78.84 | 78.25 | 78.00 | 80.56 |
| GPT-5 | 83.59 | 78.10 | 76.50 | 80.73 | 75.44 | 79.53 |
| SahabatAI | 80.60 | 78.31 | 81.25 | 76.39 | 74.93 | 78.31 |
| Llama 3.1 | 79.44 | 78.76 | 81.00 | 76.50 | 75.10 | 78.07 |
| SEALION | 79.44 | 78.76 | 81.11 | 76.45 | 75.22 | 78.03 |
| Mallam 2.5 Small (Mesolitica) | 73.00 | 70.00 | 71.00 | 72.00 | 70.00 | 71.53 |
| Merdeka-LLM (Agmo) | 56.92 | 57.63 | 60.36 | 56.82 | 55.10 | 57.28 |
| Falcon3-10B | 54.77 | 58.20 | 60.17 | 56.76 | 54.04 | 56.38 |
ILMU performs on par with leading frontier models like GPT-4o and Llama 3.1 across key benchmarks.
In Bahasa Melayu language understanding, ILMU outperforms all frontier models.
ILMU handles real-world prompts far better than Llama 3.1.
On complex instruction-following tasks, ILMU is neck and neck with GPT-4o's state-of-the-art performance.