
Backup plans and recovery readiness of Malaysian businesses

By Guan Tian Lai, COO of Exabytes

Imagine it’s a normal workday. A key system goes down, staff can’t log in, customer orders stall, and the phones start ringing. Someone says, “It’s fine, we have backups.” That sentence feels reassuring, but it often hides a harsh reality: backups are not the same as recovery. Backups tell you a copy of data exists. Disaster recovery determines whether you can restore access, services, and operations within an acceptable time and with acceptable data loss. The gap between those two is where Malaysian businesses lose hours, money, and customer trust, even when they believe they did the right thing.

That’s why World Backup Day is a useful reminder, but it can also reinforce the wrong kind of confidence if it stops at “we back up.” The more important question is: can you actually recover?

Most organisations don’t discover they have a disaster recovery problem until the day something breaks. It could be a configuration mistake. A human error that escalates faster than expected. A credential compromise. A provider outage. Or a chain reaction where one dependency quietly fails and the service never comes back the way you assumed it would. In the Malaysian environment, the most common downtime triggers we see are human error and misconfiguration, credential compromise, and provider-side outages. Each of these exposes the same weakness: recovery is rarely designed, tested, and owned as a discipline.

This is also why Malaysia’s national incident response centre continues to warn about ransomware trends and repeatedly emphasises backup management and security hygiene as essential countermeasures. In early 2026, MyCERT noted an increase in ransomware-related incidents targeting organisations across Malaysia. At an industry level, discussions of Malaysia’s broader cybersecurity landscape continue to highlight that threats, including ransomware, are growing in sophistication as cloud and AI adoption deepens.

Backup vs disaster recovery: the difference leaders must understand

Let’s define it simply. Backup is a copy of data. Disaster recovery (DR) is the ability to restore operations: systems, applications, access, dependencies, and workflows, all within a target window.

Two decisions separate organisations that feel prepared from organisations that actually are:

RTO (Recovery Time Objective): How long can we be down before the business impact becomes unacceptable?

RPO (Recovery Point Objective): How much data can we afford to lose and still operate responsibly?

These are not IT terms. They are business decisions. If your billing system can be down for eight hours, that’s an RTO decision. If your orders can only lose five minutes of data, that’s an RPO decision. And if you’ve never defined those targets or never tested whether you can meet them, then “having backups” may not protect you from disruption.
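To see how these targets translate into something checkable, here is a minimal Python sketch; the system names, targets, and backup intervals are illustrative assumptions, not recommendations. It compares each system’s agreed RPO against the backup schedule actually in place, since worst-case data loss is roughly the backup interval:

```python
from dataclasses import dataclass

@dataclass
class System:
    name: str
    rto_minutes: int               # agreed maximum downtime
    rpo_minutes: int               # agreed maximum data loss
    backup_interval_minutes: int   # how often backups actually run

def rpo_gap(system: System) -> int:
    """Worst-case data loss equals the backup interval.
    Returns the shortfall in minutes (0 if the target is met)."""
    return max(0, system.backup_interval_minutes - system.rpo_minutes)

# Illustrative targets only: a billing system that may be down 8 hours,
# and an order system that can lose at most 5 minutes of data.
systems = [
    System("billing", rto_minutes=480, rpo_minutes=60, backup_interval_minutes=60),
    System("orders", rto_minutes=30, rpo_minutes=5, backup_interval_minutes=60),
]

for s in systems:
    gap = rpo_gap(s)
    status = "OK" if gap == 0 else f"RPO missed by {gap} min"
    print(f"{s.name}: {status}")
```

Under these assumed numbers, hourly backups meet the billing system’s 60-minute RPO but miss the order system’s 5-minute RPO by 55 minutes: exactly the kind of gap that stays invisible until an incident.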

Why backups still fail when reality hits

When organisations struggle during incidents, the problem is rarely “the backup file.” It’s everything around it.

First, restore is untested. Many teams check backup completion status but never validate restoration. The result is a false sense of security: the system says “backup successful,” but recovery time and recovery steps remain unknown until the worst moment.

Second, dependencies are invisible until the outage. You may restore a database but forget the identity system, DNS, keys, network configuration, or application dependencies that make the service usable. In Malaysia, SMEs especially underestimate dependency mapping and recovery documentation — so when they attempt a restore, they don’t know what must come back first or what the service relies on.

Third, nobody owns the order of recovery. When everything is “urgent,” teams lose time debating what should be restored first instead of executing. This is where a clear Tier 1 restoration order prevents chaos.

Fourth, access becomes the bottleneck. Many organisations underestimate recovery access control. During an incident, they discover too late that the right people cannot access the right accounts, systems, or recovery tools quickly enough to restore.

Finally, the backup strategy does not match the risk. If credential compromise is a threat, backups must be immutable, ensuring they cannot be altered or deleted even with privileged access. If misconfiguration is a threat, recovery must account for operational errors, not just data loss.

Offsite backups

Across environments we’ve supported at Exabytes, a common scenario illustrates this clearly. Many organisations believe they are safe because backups are stored “outside” the main system. What often gets missed is that the same login access controls both production systems and the backups. When credentials are compromised, attackers don’t need to “destroy backups” in dramatic ways; they can block recovery access or delete backup sets using the same privileged accounts. In those situations, the weakness isn’t where the backup is stored; it’s that recovery was never fully separated from day-to-day operations. Put simply: offsite backups don’t help if attackers control the same credentials used to restore.

A quick clarity box: BaaS vs DRaaS

Leaders often confuse the two. Backup-as-a-Service (BaaS) focuses on copying and retaining data. Disaster recovery (often delivered as DRaaS in modern environments) focuses on restoring the business, including applications, infrastructure, access, and dependencies, so operations resume within target RTO/RPO. Both matter. Confusing one for the other is how incidents become prolonged outages.

Malaysia’s 2026 resilience checklist (practical, not theoretical)

If you want a recovery plan that survives real pressure, start with what matters most: priority, targets, access, and practice.

Begin by setting your Tier 1 restore list and restore order. Tier 1 is not your entire stack. It’s what must come back first for the business to function. In practice, a sensible Tier 1 sequence for many organisations is: identity and access first, then network and DNS, then email and core business applications. Without identity and DNS, everything else becomes slower, riskier, or impossible.
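The restore-order idea above can be sketched as a small dependency map; the service names and dependencies here are illustrative assumptions for one hypothetical environment. Python’s standard library can then derive an order in which every service’s prerequisites come back first:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Illustrative dependency map: each service lists what must be
# restored before it can come back. Names are assumptions for the sketch.
depends_on = {
    "identity":  [],                    # identity and access first
    "dns":       ["identity"],
    "network":   ["identity"],
    "email":     ["identity", "dns"],
    "core_apps": ["identity", "dns", "network"],
}

restore_order = list(TopologicalSorter(depends_on).static_order())
print(restore_order)  # identity comes first; email and core_apps last
```

The useful part is not the code but the exercise: writing the dependency map down is what reveals that identity and DNS gate everything else.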

Next, define RTO and RPO per system. Different systems have different tolerances. Define realistic targets, even roughly at first: “This must recover within X hours,” “We can only tolerate Y minutes of data loss.” You don’t need perfect numbers on day one. You need agreement and clarity because recovery is a business decision as much as a technical one.

Then make sure backups are recoverable, not just available. Ask better questions than “Do we have backups?” Ask: when was the last successful restore? Are backups protected from deletion or tampering? Do we have multiple recovery points? Can we recover not just data, but the service?
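Those questions can be turned into a simple recurring check. The sketch below is a hypothetical Python example; the field names and the 90-day threshold are assumptions, not a vendor API:

```python
from datetime import datetime, timedelta, timezone

# Illustrative record of one backup set's recovery posture.
backup_status = {
    "last_successful_restore": datetime(2026, 1, 10, tzinfo=timezone.utc),
    "immutable": True,                      # protected from deletion/tampering
    "recovery_points": 14,                  # distinct restore points retained
    "service_level_restore_tested": False,  # data-only vs full service
}

def recovery_findings(status, now, max_restore_age_days=90):
    """Return a list of gaps between 'available' and 'recoverable'."""
    findings = []
    if now - status["last_successful_restore"] > timedelta(days=max_restore_age_days):
        findings.append("restore not validated recently")
    if not status["immutable"]:
        findings.append("backups can be altered or deleted")
    if status["recovery_points"] < 2:
        findings.append("single recovery point only")
    if not status["service_level_restore_tested"]:
        findings.append("service-level restore never tested")
    return findings

print(recovery_findings(backup_status, datetime(2026, 3, 31, tzinfo=timezone.utc)))
```

Even a toy check like this reframes the conversation from “backup successful” to “what would stop us restoring the service today?”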

At the same time, create a runbook that works under stress. The most damaging incident moments are organisational, not technical. Your runbook should clarify who declares an incident, who leads recovery, what gets restored first, who communicates, and what “done” looks like. Include vendor contacts and escalation paths.

No universal template

One critical element many teams skip is defining DR declaration criteria before an incident. There is no universal template because every environment is different, but every organisation should agree on triggers based on business impact, time thresholds, and security/access conditions. Indecision is one of the most underestimated failure points: teams keep troubleshooting long after the business should have shifted into recovery mode.
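As one illustration of what agreed triggers might look like once written down (the thresholds and conditions here are placeholders an organisation would set for itself, not a template):

```python
def should_declare_dr(outage_minutes: int,
                      tier1_impacted: bool,
                      admin_access_lost: bool,
                      rto_minutes: int = 240) -> bool:
    """Hypothetical declaration criteria combining a security/access
    condition, business impact, and a time threshold."""
    if admin_access_lost:
        # Security/access condition: if recovery access is compromised,
        # troubleshooting in place is no longer viable.
        return True
    if tier1_impacted and outage_minutes >= rto_minutes // 2:
        # Time threshold: shift to recovery mode well before the
        # RTO is already blown, not after.
        return True
    return False
```

Writing the rule down, however rough, is what lets a team stop debating and start executing when the clock is running.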

Finally, drill the plan. A practical baseline is two DR drills per year, with at least one restoration test annually. Tabletop drills are fast and revealing: who decides the restore order? What if admin access is unavailable? What do we tell customers? Restoration testing turns “confidence” into proof.

World Backup Day takeaway: the most important test is the one you haven’t run

World Backup Day is a good reminder to back up. But the bigger question is whether you can recover. A backup is necessary, but it is not sufficient.

In 2026, Malaysian organisations should move beyond comfort statements and build measurable recovery capability: define what matters most, set recovery targets, create a runbook, and drill until recovery becomes predictable. Because the real risk isn’t that something breaks; it’s that when it breaks, the organisation has no practised way to restore operations quickly, calmly, and with minimal disruption.

If you can only do one thing this quarter, run a DR drill. It will reveal more about your resilience than any dashboard ever will.
