Hackers Use AI to Launch Smarter Cyberattacks
Cybercriminals are turning AI into a powerful weapon in the escalating digital arms race, boosting attack speed and realism while challenging already strained cybersecurity teams. While generative AI was meant to protect networks, it is also handing hackers tools to create convincing phishing messages, deepfakes and automated reconnaissance bots.
AI’s dual nature is pushing organisations to reassess their security strategies. On one hand, businesses are pouring billions into AI. Gartner forecasts global spending on generative AI to surge 76% this year to $US644 billion. On the other hand, early deployments haven’t lived up to expectations, especially in cybersecurity, where companies are starting to second-guess whether AI’s risks now outweigh its benefits.
Recent trends show that cyberattacks are becoming faster and more personalised. AI agents can now comb public data, including employee bios, investor reports and social media accounts. This intelligence is then used in social engineering campaigns or to craft dangerously realistic phishing emails. Companies have also reported scams using AI-generated voices and video deepfakes to impersonate executives in fraudulent calls requesting financial transfers.
Some Australian organisations are seeing firsthand how convincing these AI-powered deception campaigns can be. Sophisticated scams involving fake CEO videos, circulated via WhatsApp and video platforms, have pushed targeted executives to the brink of costly financial decisions before scrutiny revealed the fraud. Hackers are probing human vulnerabilities just as much as system ones, and are sometimes winning.
While AI also powers cybersecurity defences, from spotting phishing emails to identifying unusual network patterns, the tools only go so far. Experts say cyber teams need significant resources and high-quality data to make AI tools effective. In most cases, AI can flag a threat, but human experts still need to assess and act on it. Without sizeable investment, many companies are left with incomplete protection.
Still, AI is helping security professionals better understand their data. Organisations are using AI to map out what data they have, where it's stored and how sensitive it is. This enables smarter decisions about what to keep, what to secure, how long to retain it and how to mitigate the impact if a breach occurs. It's a clearer lens into risk, even if AI is no magic bullet.
Sources: Australian Financial Review, Microsoft, InfoSecurity, BDO, CrowdStrike.