As we live through the current AI revolution, the technology is opening doors to new possibilities, greater accessibility, and unprecedented innovation. Yet, alongside these opportunities come heightened risks and stronger threats. Recent studies show that global cyberattacks surged by 44% year-over-year¹, pushing organizations to respond by increasing their cybersecurity spending by 12.2%² and rapidly integrating artificial intelligence into their defenses – with nearly 67% of security operations centers now leveraging AI and automation³.

The rise of cloud-based technologies has made storing and managing sensitive information more efficient, but it has also expanded the attack surface for cybercriminals.

This raises a pressing question: Is AI making our digital environment safer, or is it equipping attackers with more dangerous tools than ever before?

A Unique Lens

Jonas brings a permanent acquirer lens to this topic. Our focus is pragmatic risk reduction for vertical market software businesses, not hype. Owners and operators need to understand how AI is affecting cybersecurity and what practical steps they can take to protect their specific industries, customers, and workflows.

What is a Cyberattack and What Has AI Changed?

A cyberattack is a deliberate attempt to disrupt or compromise systems or data. Classic methods still dominate, but AI reduces attacker effort and increases scale and speed.


Common Attack Types

  • DoS and DDoS – Overwhelm services with traffic and excessive requests, so legitimate users cannot access them.
  • Phishing – Social engineering to trick users into sharing credentials or installing malware.
  • SQL injection – Malicious input inserted into database queries, allowing attackers to read or manipulate a website’s backend data.
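To make the SQL injection risk concrete, here is a minimal sketch using Python’s built-in `sqlite3` module. The table, payload, and query are illustrative; the point is the contrast between concatenating user input into SQL and passing it as a bound parameter.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Vulnerable: attacker-controlled input is concatenated into the query.
# An input like "' OR '1'='1" turns the WHERE clause into a tautology.
payload = "' OR '1'='1"
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + payload + "'"
).fetchall()

# Safe: a parameterized query treats the input as data, never as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (payload,)
).fetchall()

print(vulnerable)  # the injected predicate matches every row
print(safe)        # no user is literally named "' OR '1'='1"
```

Parameterized queries (or an ORM that uses them) are the standard defense; AI-assisted attacks change the speed of discovery, not the fix.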

What AI Adds For Attackers

  • Hyper-personalized phishing at scale – Language models generate convincing emails, texts, and chat messages targeted to roles, industries, and even current projects.
  • Synthetic media – Voice cloning and deepfake video for business email compromise or fraudulent payment requests.
  • Automated reconnaissance – ML models scan public code, documents, and configuration files to prioritize exploitable weaknesses.
  • Evolving payloads – Automated mutation helps malware avoid signature-based detection.
  • Model-aware exploits – Prompt injection and data poisoning attacks that target AI assistants, chatbots, and recommendation engines embedded in products.

Where AI Strengthens Defense

AI is most valuable when it helps small teams focus on the highest impact work and shortens the time between detection and response.

  • 24×7 monitoring that learns – Anomaly detection across identity, network, and application logs to flag unusual behavior in near real time.
  • Faster triage and response – AI copilots summarize alerts, identify likely root cause, and suggest playbook actions so analysts can fix issues quickly.
  • Continuous testing – Simulated adversary behavior and automated red teaming to surface blind spots before attackers do.
  • Zero trust alignment – Risk-based access controls, strong authentication, and least-privilege policies that adapt to context.
  • Software supply chain integrity – AI-assisted code review, dependency mapping, and software bill of materials (SBOM) policy checks inside the continuous integration pipeline.
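The anomaly detection idea above can be sketched in a few lines. This toy example flags values that sit far from the mean of a series of failed-login counts; the data, the two-standard-deviation threshold, and the function name are illustrative, and real systems use far richer models across identity, network, and application signals.

```python
from statistics import mean, stdev

def flag_anomalies(counts, threshold=2.0):
    """Return indices of values more than `threshold` standard
    deviations from the mean of the series."""
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts) if abs(c - mu) > threshold * sigma]

# Hourly failed-login counts; the spike at index 7 suggests a brute-force attempt.
failed_logins = [3, 2, 4, 3, 2, 3, 4, 250, 3, 2]
print(flag_anomalies(failed_logins))
```

Even this naive statistical baseline surfaces the spike; the value AI adds in production is learning what “normal” looks like per user, per system, and per time of day.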

Practical Steps for Vertical Market Software Businesses

These recommendations emphasize durable controls that compound, not one-off tools.

1) Protect the domain data that makes your product valuable

Inventory the sensitive data you hold and who can access it, and keep customer data out of any AI training unless your contracts clearly allow it. If you add AI search or chat, keep each customer’s data fully separated and encrypted.
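One way to think about that separation: the tenant filter should be enforced inside the data layer, not left to each caller. The class and data below are a toy illustration of that design choice, not a real multi-tenant store.

```python
class TenantScopedStore:
    """Toy store where every read is forced through a tenant_id filter,
    so one customer's records cannot leak into another's AI search results."""

    def __init__(self):
        self._rows = []  # list of (tenant_id, document) pairs

    def add(self, tenant_id, document):
        self._rows.append((tenant_id, document))

    def search(self, tenant_id, keyword):
        # The tenant filter is applied unconditionally, not trusted to callers.
        return [doc for tid, doc in self._rows
                if tid == tenant_id and keyword in doc]

store = TenantScopedStore()
store.add("acme", "acme renewal contract")
store.add("globex", "globex renewal contract")
print(store.search("acme", "renewal"))  # only acme's documents come back
```

In production the same principle shows up as row-level security in the database, per-tenant encryption keys, and per-tenant vector indexes for AI search.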

2) Put guardrails around AI use cases

Label every AI feature by the risk of the actions it can take and keep a human in the loop for anything that moves money, changes access, or edits records. Add simple content filters, defend against prompt manipulation, and restrict which tools an AI can use.
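The human-in-the-loop rule can be as simple as a gate in front of high-risk actions. The action names and tiers below are hypothetical; the pattern is that anything which moves money, changes access, or edits records never executes on an AI’s say-so alone.

```python
# Hypothetical risk tiers for AI-initiated actions; names are illustrative.
HIGH_RISK_ACTIONS = {"transfer_funds", "change_access", "edit_records"}

def execute_action(action, approved_by_human=False):
    """Route high-risk AI-proposed actions through mandatory human approval;
    low-risk actions run automatically."""
    if action in HIGH_RISK_ACTIONS and not approved_by_human:
        return "pending_human_review"
    return "executed"

print(execute_action("summarize_ticket"))                        # low risk: runs
print(execute_action("transfer_funds"))                          # held for review
print(execute_action("transfer_funds", approved_by_human=True))  # runs once approved
```

The gate is deliberately dumb: the risk label lives in configuration, so adding a new AI feature means classifying it, not rewriting the control.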

3) Harden the basics first

Turn on multi-factor authentication for everyone. Use endpoint protection that can isolate risky devices, default to least privilege, and review access on a regular schedule.

4) Build an AI-aware incident response plan

Write playbooks ahead of time for deepfake fraud, misuse of AI features, and accidental data leaks, and practice them with executives, finance, legal, and support. Decide when and how you’ll notify customers and keep clear templates ready.

5) Secure the software supply chain

Ask vendors for a software bill of materials for any code or models you use, and scan AI-generated code like any other. Ship only when security tests pass, not because a date is on the calendar.
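A software bill of materials is machine-readable, so policy checks can run automatically in CI. This sketch parses a minimal CycloneDX-style SBOM fragment and flags components on a deny list; the field names follow the real CycloneDX JSON format, but the components and deny list are invented for illustration.

```python
import json

# Minimal CycloneDX-style SBOM fragment (top-level fields follow the spec;
# the components listed here are made up for this example).
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "components": [
    {"name": "left-pad-ish", "version": "1.0.0"},
    {"name": "good-lib", "version": "2.3.1"}
  ]
}
"""

DENY_LIST = {"left-pad-ish"}  # packages your policy disallows

def flagged_components(sbom_text):
    """Return the names of SBOM components that appear on the deny list."""
    sbom = json.loads(sbom_text)
    return [c["name"] for c in sbom.get("components", [])
            if c["name"] in DENY_LIST]

print(flagged_components(sbom_json))
```

A CI job that fails the build when this list is non-empty is exactly the kind of gate that lets you “ship only when security tests pass.”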

6) Measure what matters

Track time to detect and time to respond instead of ticket counts, and tame false positives so the team isn’t overwhelmed. Watch for early warning signs like unusual data movement and sudden privilege increases.
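Time to detect and time to respond fall straight out of incident timestamps you likely already record. The incident data below is illustrative; the calculation simply averages the detection and resolution gaps in minutes.

```python
from datetime import datetime

# Illustrative incidents: (occurred, detected, resolved) timestamps.
incidents = [
    ("2025-01-03 02:00", "2025-01-03 02:30", "2025-01-03 05:00"),
    ("2025-01-10 14:00", "2025-01-10 16:00", "2025-01-10 18:00"),
]

def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%d %H:%M")

def mean_minutes(pairs):
    """Average gap in minutes across a list of (start, end) timestamp pairs."""
    deltas = [(parse(end) - parse(start)).total_seconds() / 60
              for start, end in pairs]
    return sum(deltas) / len(deltas)

mttd = mean_minutes([(o, d) for o, d, _ in incidents])  # mean time to detect
mttr = mean_minutes([(d, r) for _, d, r in incidents])  # mean time to respond
print(f"MTTD: {mttd:.0f} min, MTTR: {mttr:.0f} min")
```

Trending these two numbers month over month tells you more about security posture than any count of closed tickets.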

Conclusion

As artificial intelligence continues to evolve, its role in cybersecurity grows more complex. On one hand, AI serves as a powerful guardian, capable of monitoring systems 24/7, detecting threats in real time, and learning from past attacks to build stronger defenses. On the other hand, the same technology can be weaponized by malicious actors to launch more sophisticated, faster, and harder-to-detect cyberattacks.

In the end, AI is neither inherently good nor bad; it is a tool. What matters most is how we choose to use it, and how prepared we are for the challenges it brings.

At Jonas, we emphasize durable security practices, data stewardship, and rightsized AI that compounds value for customers over time.

Sources:

  1. Check Point Software’s 2025 Security Report Finds Alarming 44% Increase in Cyber-Attacks Amid Maturing Cyber Threat Ecosystem – Check Point Software
  2. Worldwide Security Spending to Increase by 12.2% in 2025 as Global Cyberthreats Rise, Says IDC
  3. Pulse of the AI SOC Report 2025 – Cybersecurity Insiders

Contact Us to Learn More
