The Promise of AI to Tackle Security Debt
Managing financial debt is a delicate balancing act, where even the smallest oversight can snowball into significant problems. The same principle applies to business IT in the form of security debt: accumulated software vulnerabilities that remain unresolved for extended periods. These flaws, particularly those left unaddressed for more than a year, create fertile ground for attackers and expose organisations to escalating risks.
The challenge is widespread: research has shown that 70% of organisations carry some level of security debt, with almost half grappling with critical vulnerabilities. The good news? There are ways to significantly reduce security debt and prevent the issue from spiralling out of control.
Before we delve into the ways to reduce security debt, it is important to reflect on how we got here. The main reason behind the mounting security debt is that organisations are not prioritising the flaws that pose the greatest risk: the critical ones.
Application age and size play a significant role in the accumulation of security debt. We have repeatedly observed a recency bias in the way developers fix security flaws: the more time that passes after a flaw first appears, the lower the chance it will ever be fixed. Recent research found that 42% of all flaws roll over to become security debt, so the older the application, the more debt it tends to accumulate.
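To make that roll-over concrete, here is a minimal, purely illustrative sketch of how findings from a code scan could be bucketed into security debt using the one-year definition above; the Finding structure and field names are assumptions made for the example, not any scanner's real schema.

```python
"""Illustrative sketch only: classify open scan findings as security debt
using the working definition above (flaws left unresolved for more than a
year). The Finding structure and field names are assumptions for the
example, not any particular scanner's schema."""
from dataclasses import dataclass
from datetime import date, timedelta

DEBT_THRESHOLD = timedelta(days=365)  # "more than a year" per the definition above


@dataclass
class Finding:
    flaw_id: str
    severity: str      # e.g. "critical", "high", "medium", "low"
    first_seen: date
    resolved: bool


def is_security_debt(finding: Finding, today: date) -> bool:
    # A flaw rolls over into security debt once it has stayed open
    # longer than the one-year threshold.
    return not finding.resolved and (today - finding.first_seen) > DEBT_THRESHOLD


def debt_summary(findings: list[Finding], today: date) -> dict[str, int]:
    # Summarise how much of the open backlog has aged into debt,
    # and how much of that debt is critical.
    debt = [f for f in findings if is_security_debt(f, today)]
    return {
        "open_flaws": sum(not f.resolved for f in findings),
        "security_debt": len(debt),
        "critical_debt": sum(f.severity == "critical" for f in debt),
    }
```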
Application size is also key. Because most applications' codebases grow over time, size correlates with age, and with it the accumulation of older, unremediated flaws. Large applications therefore have the highest proportion of security debt, with 40% of them carrying security debt and 47% carrying critical debt. And while the youngest and smallest apps do not always have the least debt, older monolithic applications present a greater challenge.
Flaws in open-source third-party code tend to become security debt slightly faster than flaws in first-party code. What’s more, third-party flaws emerge continuously as new vulnerabilities are discovered by security researchers. This means that unless organisations keep their libraries up to date, applications will accumulate more and more risk as time passes, even if nothing has been added to the codebase.
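A hedged sketch of what “keeping libraries up to date” can look like in practice: a small CI gate that fails the build when a dependency audit reports known vulnerabilities. It assumes a Python project with a requirements.txt file and uses pip-audit only as an example; any audit tool that exits with a nonzero status on findings would slot in the same way.

```python
"""Sketch of a CI gate that fails the build when known-vulnerable
dependencies are found. Assumes a Python project with a requirements.txt
file and pip-audit installed; pip-audit is used purely as an example, and
any audit tool that exits nonzero on findings would work the same way."""
import subprocess
import sys


def audit_dependencies(requirements: str = "requirements.txt") -> int:
    # Run the audit; the tool's exit code is nonzero when it finds
    # dependencies with known vulnerabilities.
    result = subprocess.run(["pip-audit", "-r", requirements])
    return result.returncode


if __name__ == "__main__":
    # Failing the pipeline here keeps third-party flaws visible before
    # they quietly age into security debt.
    sys.exit(audit_dependencies())
```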
Another major factor contributing to an organisation’s compounding debt is the increased use of generative AI to write code, a practice that will only grow, with Gartner predicting that 75% of enterprise software engineers will use AI code assistants by 2028. Using AI is not a problem in itself. AI-generated code is not inherently less secure than human-generated code, but it is not more secure either. The problem is an over-reliance on AI and the erroneous assumption that it will automatically produce properly functioning, flaw-free code. The large language models used to generate code are often trained on insecure open-source projects and other publicly available code, meaning AI-generated code can be insecure as well. Failure to vet this code properly adds to an organisation’s security debt over time, and may even accelerate its accumulation as AI helps developers write code faster than ever.
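The point about vetting is easiest to see with a contrived example. The snippet below shows a plausible piece of generated code that builds SQL by string formatting, alongside the parameterised version a human review or a security scan should insist on; the table and column names are invented purely for illustration.

```python
"""Contrived example of the kind of flaw that slips through when generated
code is merged without review: SQL built by string formatting is open to
injection, while the vetted version binds the value as a parameter. The
table and column names are made up for illustration."""
import sqlite3


def find_user_unvetted(conn: sqlite3.Connection, username: str):
    # Plausible-looking generated code: interpolating user input into the
    # query string lets crafted input change the query itself.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchone()


def find_user_vetted(conn: sqlite3.Connection, username: str):
    # The reviewed fix: the value is passed as a bound parameter, so the
    # input can never alter the query structure.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchone()
```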
It is also important to note that security debt is not solely the result of mismanagement, poor decisions, or failure to execute. Time and resource pressures mean developers and product managers must decide which flaws to fix and which to let lie.
Thankfully, innovation is slowly lifting the pressure on development teams. New technologies like AI, when implemented with appropriate safeguards, allow developers to address more flaws without spreading their time and resources too thin. AI is already fundamentally reshaping how business gets done. Although it may seem counterintuitive given the risks outlined above, we are in an age where we need to consider fighting AI with AI.
Let’s consider the role that AI should play in both creating and safeguarding our software. AI can make the dream of accelerating code fixes a reality; however, it’s up to us to harness its power responsibly.
AI-driven tools, particularly those based on GPT models with supervised training on curated security-specific datasets, can excel at cybersecurity tasks. These models can provide reliable flaw remediation suggestions, helping to ensure that vulnerabilities are addressed promptly and effectively. However, it is crucial that any tool handling source code, especially for security purposes, maintains the highest standards of data integrity and security.
Incorporating AI into the software development lifecycle not only enhances efficiency but also has the potential to fortify the security posture of applications. By identifying and addressing vulnerabilities early, development teams can deliver robust, secure software that meets the ever-evolving demands of the digital landscape.
Being aware of a flaw is not the same as fixing it. That is why frequent code scans do not always correlate with less debt. Knowing is only half the battle; the other half is doing something about it.
Continuous scanning must be paired with continuous fixing, but even the biggest teams with ample resources typically do not fix all of their flaws. The problem has grown beyond the ability of humans alone to manage, so AI-powered tools are becoming necessary. Despite widespread fears that it could be a threat to security, AI is increasingly part of the solution, helping developers fix flaws more efficiently.
Leveraging AI, developers can shift security left in the development cycle, identifying and fixing vulnerabilities as they write code. This proactive approach allows organisations to detect and address potential security risks at an earlier stage, reducing the likelihood of costly and time-consuming issues further down the line.
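As a sketch of what that shift-left step can look like in a developer’s day-to-day workflow, the snippet below wires a static analysis scan into a pre-commit or CI check. Bandit is used only as an example Python scanner, and the hook wiring and project layout are assumptions about a typical setup.

```python
"""Sketch of one way to shift security left: run a static analysis pass
over the project before code is committed or merged. Bandit is used here
only as an example Python scanner, and the wiring into a pre-commit hook
or CI job is an assumption about a typical setup."""
import subprocess
import sys


def scan_tree(root: str = ".") -> int:
    # bandit -r walks the tree and flags insecure Python patterns; a
    # nonzero return code lets the hook or pipeline block the change
    # until the flaw is fixed rather than letting it age into debt.
    result = subprocess.run(["bandit", "-r", root])
    return result.returncode


if __name__ == "__main__":
    sys.exit(scan_tree(sys.argv[1] if len(sys.argv) > 1 else "."))
```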
As AI reshapes every facet of technology, its role in addressing security debt stands out as both an urgent need and an unparalleled opportunity to reverse a currently spiralling trend. By shifting security left and enabling developers to detect and remediate flaws earlier, AI has the potential to turn the tide on security debt accumulation.
In the coming years, software security must evolve from simply identifying vulnerabilities to preventing them at the source. Empowering development teams with AI-driven tools trained on robust security datasets could redefine software security, enabling scalable, secure coding practices. In this new paradigm, AI becomes not just a tool for remediation but a cornerstone for building a safer, more resilient digital future.