Social Engineering
By Eric Roshaan, Enterprise Analyst, ManageEngine
Social engineering: Avoiding the instinct to trust in the face of deception
Last July, a bank based in Malaysia received an email from someone claiming to be a representative of its liquefied petroleum gas supplier.
The email instructed the company to send payments to a new overseas bank account. After transferring RM28 million (more than USD 5.9 million) to the new account, the bank was informed by its supplier that it had yet to receive payment.
Realising that it had been duped, the bank immediately lodged a police report, which led to the transfer being stopped.
Although law enforcement managed to foil the scammers this time, not all transactions can be reversed once they have been approved. Moreover, 3,705 fraud incidents were reported in 2023, making up the majority of cyber incidents reported in Malaysia that year, according to the Malaysian Computer Emergency Response Team (MyCERT). To counter these threats, the National Scam Response Centre (NSRC) was allocated RM10 million in 2023 and RM20 million for 2024 so that it could respond quickly to scam complaints.
However, countering social engineering fraud requires the involvement of all members of the public, not just government agencies. Organisations can follow these four tips to stay ahead of scammers and maintain trust with their customers.
A single lapse in judgement or moment of misplaced trust is all a threat actor needs to steal data from right under an employee's nose. To prevent such incidents from compromising organisations, leaders should run mandatory, organisation-wide training courses that help employees develop the skills and knowledge to recognise the common techniques threat actors use to trick their victims.
These programs enable organisations to establish a culture of threat awareness, which is crucial to reducing vulnerabilities and lowering the risk of security breaches brought on by human error.
Increasingly, technologies like voice cloning and generative AI make it easier for threat actors to pose as business managers and convince employees to authorise large financial transfers or reveal credentials. For this reason, employees are advised to check with the sender or caller through their official email addresses or phone numbers to ensure that the message is legitimate. If something about the message does not feel right, then employees should immediately ignore the message and report it to the cybersecurity team.
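The verification step above can be partly automated. As a minimal sketch (the domain list, function names, and addresses here are all hypothetical, not a real policy), a mail-handling workflow might check whether a payment-change request even originates from a pre-approved supplier domain before a human reviews it:

```python
# Minimal sketch: check that a sender's address belongs to a known,
# pre-approved supplier domain before acting on a payment-change request.
# The domain set and helper names are illustrative assumptions.

APPROVED_SUPPLIER_DOMAINS = {"gas-supplier.example.com"}

def sender_domain(address: str) -> str:
    """Return the domain part of an email address, lower-cased."""
    return address.rsplit("@", 1)[-1].strip().lower()

def is_approved_sender(address: str) -> bool:
    """True only for an exact match against an approved domain.
    Look-alike domains (extra hyphens, swapped letters) fail."""
    return sender_domain(address) in APPROVED_SUPPLIER_DOMAINS

print(is_approved_sender("billing@gas-supplier.example.com"))   # True
print(is_approved_sender("billing@gas-suppIier.example.com"))   # look-alike: False
```

A check like this is only a first filter: it catches look-alike domains, but a spoofed header or compromised supplier mailbox would still pass, which is why the out-of-band confirmation via an official phone number remains essential.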
Personal information can be weaponised by threat actors to gain more information about the user, such as their contacts, their location, and their interests. From there, threat actors can pose as the victim’s friends to gain personal information about them, or they may pose as an employee at a retail company, offering deep discounts on certain items to the victim and their contacts to reel in new victims. To avoid these incidents, employees are advised to think twice about what they disclose online before sharing personal information.
Simultaneously, organisations should give their customers the ability to opt out of data collection. This not only reduces the risk of privacy compromise but also gives customers more control over their personal information and how it is handled. Customers typically appreciate having more say over their PII, and that appreciation often translates into additional goodwill towards the organisation.
AI is increasingly prevalent in the business landscape. In the Asia-Pacific region, spending on AI-centric software, components, and services is expected to reach USD 78.4 billion by 2027, according to an IDC report.
While AI can be used by attackers to trick customers into helping them meet their objectives, the technology can also give cybersecurity teams the means to identify and respond to social engineering tactics. For example, machine learning (ML) algorithms trained to analyse suspicious messages can empower security teams with the ability to detect signs of fraud attempts. Furthermore, AI can empower users with real-time solutions so they can act swiftly against fraud incidents and minimise financial loss.
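To make the detection idea above concrete: a production system would use a trained ML classifier, but a toy keyword scorer shows the basic shape of flagging suspicious messages for review. Every phrase and threshold below is invented for illustration, not drawn from any real detection product:

```python
# Toy illustration of message screening. A real deployment would use a
# trained ML model; this hand-picked phrase list only sketches the idea.

FRAUD_SIGNALS = (
    "new bank account",
    "urgent payment",
    "confidential",
    "do not contact",
    "wire transfer",
)

def fraud_score(message: str) -> int:
    """Count how many known fraud phrases appear in the message."""
    text = message.lower()
    return sum(phrase in text for phrase in FRAUD_SIGNALS)

def flag_for_review(message: str, threshold: int = 2) -> bool:
    """Flag messages whose score meets the (illustrative) threshold."""
    return fraud_score(message) >= threshold

msg = "URGENT PAYMENT required: please wire transfer to our new bank account."
print(fraud_score(msg))      # 3
print(flag_for_review(msg))  # True
```

The design point is the workflow, not the scoring method: flagged messages go to the security team for human review rather than being silently blocked, which is how such tooling gives employees the real-time prompt to pause before acting.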
Scamming in the digital age has become easier and more convincing thanks to the increasing accessibility of tools like AI. This makes maintaining trust even more crucial for organisations to retain their customers and lower the risk of regulatory sanctions by governing bodies. By equipping their employees with the right skills and technologies, organisations can stay one step ahead of scammers’ tricks.