Artificial intelligence (AI) has dominated headlines and conversations in the last year, with various industries exploring what this technology means for automation, collaboration, communication and more. For community banks, AI presents opportunities and challenges, especially in the world of cybersecurity.
As banks consider integrating AI with traditional cybersecurity measures, what are the risks? This blog explores the perils and pitfalls of AI in cybersecurity and how community banks can incorporate this technology into their strategies.
What’s the Difference between Artificial Intelligence and Machine Learning?
While AI and machine learning are related, differences exist between these technologies. AI describes the use of a computer to perform tasks that would normally require human intelligence. AI—such as ChatGPT or virtual assistant programs—mimics problem solving, learning, reasoning and perception.
Machine learning (ML) is a subset of AI that allows computers to learn from and make predictions using data. As an example, spam email filters use ML to operate. Both AI and ML are set to positively impact the cybersecurity landscape now and in the future. In fact, the revenue forecast for global AI in the cybersecurity market is expected to reach $60.6 billion by 2028.
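To make the spam-filter example concrete, here is a minimal sketch of how an ML-style filter can learn from labeled data. It is a toy Naive Bayes-flavored classifier over hypothetical training messages, not a description of any real product, which would train on millions of messages and far richer features:

```python
from collections import Counter

# Hypothetical toy training data; real filters learn from millions of labeled messages.
SPAM = ["win free prize now", "claim your free money"]
HAM = ["meeting agenda attached", "quarterly report draft"]

def word_counts(messages):
    counts = Counter(w for m in messages for w in m.split())
    return counts, sum(counts.values())

spam_counts, spam_total = word_counts(SPAM)
ham_counts, ham_total = word_counts(HAM)

def spam_score(message):
    # Naive Bayes-style score: product of per-word likelihood ratios,
    # with add-one smoothing so unseen words don't zero out the score.
    score = 1.0
    for w in message.lower().split():
        p_spam = (spam_counts[w] + 1) / (spam_total + 2)
        p_ham = (ham_counts[w] + 1) / (ham_total + 2)
        score *= p_spam / p_ham
    return score

def is_spam(message, threshold=1.0):
    return spam_score(message) > threshold
```

The key idea is that the filter is never given explicit rules; it derives the probability that a word signals spam from the data itself, which is what "learning from data" means in practice.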
When ChatGPT emerged in late 2022, it soon led to game-changing effects on countless industries. As a chatbot and virtual assistant that uses AI to respond to prompts, ChatGPT and other generative AI tools have become the norm for employees seeking quick answers or automation.
Following ChatGPT’s boom, organizations were quick to adopt AI for various business functions. A 2023 Gartner survey reported that more than half of organizations (55%) that have deployed AI consider using it for every new use case they evaluate. And a 2024 McKinsey report found that 65% of respondents say their organizations regularly use generative AI, nearly double from 2023. Respondents also indicated their belief that AI will lead to significant change in their industries in the years to come.
Want a refresher on generative AI and its uses in financial services? Check out our overview blog.
What Cybersecurity Risks Does AI Pose?
While AI is transforming business functions, it’s not without risk. And one of AI’s top pitfalls is how it can streamline cyberattacks. According to Darktrace, there was a 135% spike in novel social engineering attacks from January to February 2023, coinciding with the widespread adoption of ChatGPT. Other reports reveal that 75% of cybersecurity professionals have seen an increase in AI-powered cyberattacks since 2023.
Here are some ways cybercriminals are using AI, all representing cybersecurity risk to your institution.
- Increased Speed and Scalability of Attacks: Cybercriminals use AI to speed up their malicious efforts and increase the effectiveness of attacks, including social engineering. Instead of writing their own phishing emails or scripts, criminals prompt AI to create them in seconds. Automation also makes it easier for cybercriminals to launch personalized phishing attacks at speed and scale. With targeted ransomware, they can use AI and ML to create accurate profiles of their targets.
- Deepfake Attacks: Cybercriminals also use deepfakes—which are AI-generated photos, videos or audio—to carry out identity theft and other social engineering schemes. This includes everything from using fake audio on phone calls to execute account takeovers to requesting fraudulent wire transfers. Since deepfakes come in varying levels of sophistication, vigilance is key.
- Circumventing Security Protections: Fraudsters leverage AI to navigate around institutions’ security protections. Using AI, malware can adapt and evade detection by identifying patterns in detection systems and bypassing them. AI can also rewrite malicious code to make it more difficult to detect. If a vulnerability scanner knows how to look for a certain signature, cybercriminals can have AI quickly rewrite the code to change that signature, making some viruses and malware more difficult to stop.
- Talent Shortage and Skills Gap: AI is also revealing the stark cybersecurity skills gap and talent shortage. With offensive cybersecurity, institutions are playing catch-up and don’t always hire or have access to forward-thinking employees. Since cybercriminals are often on the cutting edge of technology and tactics, having employees who share that mindset helps institutions elevate defenses. Further, talent is expensive, and the market for highly skilled cybersecurity professionals is highly competitive. Many institutions turn to trusted managed cybersecurity providers to help bridge this gap.
6 Benefits of AI in Cybersecurity
Despite the accompanying risks, AI’s positive effects on cybersecurity cannot be denied. AI is accelerating the advancement of current tools, including automated security operations, malware protection and authentication. In fact, many organizations are automating security operations to increase efficiencies, with 51% expanding the use of automation or AI into their cybersecurity strategy over the last two years. Below are several advantages AI brings to security.
1. Enhance Vulnerability Management
Institutions can use AI to determine which vulnerabilities within their systems are most likely to be exploited, allowing them to prioritize remediation and reduce expenses for incident response.
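A simple sketch of what risk-based prioritization looks like: weight each flaw's severity by its predicted likelihood of exploitation, then remediate in descending order. The CVE identifiers and likelihood values below are hypothetical; in practice the likelihood would come from a trained model or an exploit-prediction feed such as EPSS:

```python
# Hypothetical vulnerability records; exploit_likelihood would come from
# an ML model or a public exploit-prediction feed, not be hand-entered.
vulns = [
    {"id": "CVE-A", "cvss": 9.8, "exploit_likelihood": 0.02},
    {"id": "CVE-B", "cvss": 7.5, "exploit_likelihood": 0.90},
    {"id": "CVE-C", "cvss": 5.3, "exploit_likelihood": 0.40},
]

def risk_score(v):
    # Weight raw severity by the probability the flaw is actually exploited.
    return v["cvss"] * v["exploit_likelihood"]

remediation_order = sorted(vulns, key=risk_score, reverse=True)
```

Note that the highest-severity flaw (CVSS 9.8) is not first in line here: the lower-severity flaw that attackers are actually exploiting gets priority, which is the efficiency gain this approach offers over patching strictly by severity.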
2. Analyze Data Efficiently
AI/ML can strengthen threat detection, as it can analyze large amounts of data such as emails, website links and more to identify patterns and trends. For example, AI can evaluate email content to see if common phishing indicators are present, such as a sense of urgency.
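As a simplified illustration of indicator-based email analysis, the sketch below flags a few classic phishing signals, including the sense of urgency mentioned above. The keyword lists are hypothetical stand-ins; production systems use trained models over far richer features than substring matches:

```python
import re

# Hypothetical indicator lists for illustration only; real filters
# learn these signals from data rather than fixed keyword lists.
URGENCY = ["act now", "immediately", "within 24 hours", "urgent"]
CREDENTIAL_BAIT = ["verify your account", "confirm your password"]

def phishing_indicators(email_text):
    text = email_text.lower()
    found = []
    if any(phrase in text for phrase in URGENCY):
        found.append("urgency")
    if any(phrase in text for phrase in CREDENTIAL_BAIT):
        found.append("credential_bait")
    # Links pointing at a bare IP address instead of a domain are a red flag.
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", text):
        found.append("raw_ip_link")
    return found
```

The value of AI here is scale: the same evaluation can run across millions of messages, links and attachments far faster than a human reviewer could.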
3. Automate Manual Tasks
Some routine tasks—such as security log monitoring—can be tedious or time-consuming for employees. Automating these tasks with AI frees employees to work on more complex, more rewarding activities. AI also helps close out investigations and support tickets faster, as in the case of spam or phishing emails. Automating certain tasks and cybersecurity responses leads to reduced errors and greater protection.
4. Increase Accuracy in Detecting Unusual Activity
AI can more accurately and quickly detect anomalous behavior in certain cases compared to humans, especially with high volumes of cases to evaluate. AI can transform an institution’s approach to log review, spam email or anything that requires analyzing a large amount of data quickly.
5. Improve Security against Evolving Threats
Traditional cybersecurity methods may be slow to adapt to new or evolving threats, but AI quickly adapts to identify new threats and elevates threat intelligence. AI also automates incident response by containing a breach within seconds and improves network security by locating weak spots and securing them. Additionally, AI is strengthening attack prediction with behavioral and predictive analytics, which will likely only grow more accurate.
6. Alert on Anomalous Behavior
Since AI can analyze enormous amounts of data, it effectively identifies patterns and detects anomalies by distinguishing between normal and unusual behavior without requiring a human to investigate every case. If unusual activity occurs on a network, AI sends alerts in real time. Threat detection and response is already highly accurate, and AI further drives investigations and forensics of attacks. With AI, institutions have a co-pilot for cybersecurity that can take the lead in starting investigations into suspicious activity.
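The distinction between normal and unusual behavior can be sketched with a basic statistical baseline. The example below uses a hypothetical two-week history of daily login counts and flags any value more than three standard deviations from the mean; real behavioral-analytics systems model many signals at once, but the principle is the same:

```python
import statistics

# Hypothetical baseline: daily login counts for one account over two weeks.
baseline = [42, 39, 45, 41, 38, 44, 40, 43, 39, 41, 42, 40, 44, 38]

def is_anomalous(value, history, z_threshold=3.0):
    # Flag values more than z_threshold standard deviations from the
    # historical mean — a simple stand-in for learned "normal" behavior.
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    return abs(value - mean) > z_threshold * stdev

is_anomalous(41, baseline)   # a typical day: no alert
is_anomalous(400, baseline)  # a burst of logins worth alerting on
```

In a deployed system, an alert would fire in real time on the anomalous value instead of waiting for a human to review the logs.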
Evaluating Cybersecurity and AI
Before implementing AI in your cybersecurity strategy, there are various considerations to keep in mind. One of the top considerations for your institution should be network security. Review your network configurations to ensure no exploitable vulnerabilities exist.
Securing your data is critical to ensure that your AI engine is not training on any confidential information. Are your files and confidential information protected? If you have a strategic plan or other proprietary documents saved in an area used to train your AI model, that confidential data could surface in generated results. It’s critical that you understand your controls around data security before moving forward with an AI engine.
If your institution is using publicly available tools like ChatGPT, ensure that your employees understand the risks of uploading sensitive documents or information. Refer to available AI guidance from regulators before encouraging or allowing employees to leverage these tools.
How Should Community Banks Approach AI and Cybersecurity?
Given AI’s prevalence in the industry, it seems this technology is here for the long haul. Community banks should seize the opportunity to educate and train their employees, customers and communities about AI and related cybersecurity best practices.
By providing education on AI’s perils and pitfalls, community banks can directly contribute to a stronger, safer community and customer base. Training topics should include educating customers on identifying AI-generated schemes, how to report them and best practices for navigating this new AI landscape.
As social engineering schemes evolve, banks should also ensure that customers know what questions their institution will and won’t ask them via phone, text or email. For example, if someone claiming to be from your bank calls you to confirm your account username and password, this is likely a scam.
Since community banks are already trusted partners, they are in a unique position to deepen relationships and their reputation by becoming a resource on cybersecurity best practices and online safety in today’s digital-first world.
Moving Forward with Your AI Strategy
Especially as these tools mature, AI stands to revolutionize cyber defense for organizations in various industries, including community banks. And as cybercriminals continue leveraging AI to improve their attacks, institutions must figure out how to effectively use it to beat them at their own game. This means adapting and implementing new tools—all while ensuring proper controls remain in place to mitigate risk.
As institutions develop their strategy to supplement existing cybersecurity measures with AI, it’s important to consider the risks and rewards. But one thing is clear: This technology is set to continue shaping the cybersecurity landscape.
Want more insight into technology trends in the financial services industry? Download our Banking Priorities Executive Report.
Read the report
Steve Sanders, Chief Risk Officer and Chief Information Security Officer
In his role, Steve leads enterprise risk management and other key components of CSI’s corporate compliance program, including privacy and business continuity. He also oversees threat and vulnerability management as well as information security strategy and awareness programs. With more than 15 years of experience focused on cybersecurity, information security and privacy, he employs his strong background in audit, information security and IT security to help board members and senior management gain a command of cyber risk oversight.