The traditional approach to securing software doesn’t focus on rooting out vulnerabilities during the design phase. Instead, vendors must play a cat-and-mouse patching game with hackers who discover and exploit those vulnerabilities while organizations and people are using the software.
This creates problems on two fronts. First, the average unknown vulnerability isn’t discovered for seven years, according to research by the Rand Corp. Second, people, and even large organizations, are notoriously bad at installing patches once they’re available.
But there’s a growing push by academics, practitioners and organizations like the Defense Advanced Research Projects Agency to shift from a culture in which software is highly vulnerable by default to one in which software code is highly secure from the beginning.
The result, they say, would be a world where the barrier to entry for hackers is significantly higher, where companies spend less money on constant cyber monitoring and defense, and where companies and consumers can count on their information being much more secure.