Why Your Cybersecurity Defenses are Weak – Humans vs Robots

Human Versus Robot

People often ask why this distinction matters, so I will clarify it in the next few lines. We are not delving into the strengths and weaknesses of AI models here, as that is beyond the scope of this discussion. Also, I use the term ‘robot’ to describe a software-based AI engine, because this software, applied to hacking (for offense and defense), may as well be a machine at the keyboard: a robot in the form of the PC itself.

Experience

In general, an experience-learning AI robot with 7 years of experience will have accumulated over 61,000 hours of hacking experience. A human with 7 years of experience, under the stretched assumption that he or she worked 12 hours a day, every day, will have only about 30,000 hours. When faced with these numbers, some people still insist that humans learn better. It is not clear what that means, but no, they do not. The human mind is awesome, but it is also consumed with emotion, bodily function management, side-thoughts, and anything else that adds bias and noise to the decision-making process. Also, any seasoned hacker or penetration tester will admit that the work can be very repetitive.
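The arithmetic behind these figures is simple enough to show. These are assumed figures, not measurements: the robot is assumed to run around the clock, and the human estimate already stretches to 12 hours a day with no days off.

```python
# Rough arithmetic behind the experience comparison (assumed figures):
# a continuously running system accrues hours 24/7, while the human
# estimate already assumes 12 hours a day, every day, with no breaks.
YEARS = 7

robot_hours = YEARS * 365 * 24   # runs around the clock
human_hours = YEARS * 365 * 12   # generous 12 h/day assumption

print(robot_hours)               # 61320 -> "over 61,000 hours"
print(human_hours)               # 30660 -> "about 30,000 hours"
print(robot_hours / human_hours) # 2.0, double the experience
```

Even under the most generous human assumption, the machine accrues exactly twice the hours over the same seven years.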

Speed

We get challenged on this point all the time, but anyone who has ever automated a process with scripts understands. The computer itself issues commands in microseconds or milliseconds, versus a human typing at a maximum of about 160 words per minute. Most professionals’ actual typing rates are a fraction of that.
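A back-of-envelope comparison makes the gap concrete. The figures below are assumptions for illustration: the conventional WPM definition of five characters per word, a 60-character shell command, and a microsecond-scale buffer write for the machine.

```python
# Back-of-envelope speed comparison (assumed figures): time to issue
# one 60-character command.
CMD_LEN = 60                  # characters in one shell command (assumed)
human_cps = 160 * 5 / 60      # chars/sec at 160 wpm, 5 chars/word
human_seconds = CMD_LEN / human_cps
machine_seconds = 1e-6        # assumed: one microsecond buffer write

print(round(human_seconds, 2))          # 4.5 seconds for the human
print(human_seconds / machine_seconds)  # millions of times slower
```

At the 160 wpm ceiling, a single command costs the human 4.5 seconds; at a typical professional's rate the gap is wider still.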

Completeness

A human has to consistently confirm whether their analysis is complete. Without getting into the psychology involved, humans are generally passive and opt for the path of least resistance. As cybersecurity professionals, we are also affected by bias in our work. A human with a background in database management and programming will focus their tests on web applications, desktop applications, and business logic. A human with a background in network architecture, on the other hand, will focus on bypassing network controls.

AI machines may have bias depending on how they are initially trained, but an experience-learning model overcomes this with time and exposure. The completeness of a robot’s work is limited only by the humans who build it or the humans who operate it. Those assumptions aside, a robot will provide a complete test of an environment without being affected by an empty stomach, a girlfriend, the next cigarette, or the plethora of other distractions that affect humans.

Overall Cost Benefit

Given the comparisons above on experience, speed, and completeness, efficiency is a given, so we jump straight to cost comparisons with a human penetration tester or security auditor. A human in these roles will take from two weeks to three months, depending on the scope of testing and the size of the organization. The human will most likely take weekends off, go home at six PM, and take a lunch break every day. The average cost of a penetration test varies by country, but in Japan it runs anywhere from 35,000 USD to 50,000 USD[^2]. An AI (or robotic) service that performs the same task will cost less than half, take no time off, and deliver results in minutes or hours.

Misconceptions: Robot Versus Scanner

When presented with an AI solution such as Ezotech’s Tanuki, many audience members respond with, “we have a scanning solution”, or try to compare a software robot typing out commands on a screen to a scanner scrolling through a battery of signature tests on the command line. These are totally different solutions. An autonomous robot does not simulate an attack from a script; it identifies the attack vector, executes the attack, evaluates responses and results, modifies the attack, validates it, or does whatever else it takes to perform the penetration test. Tanuki does what a penetration tester does. Nothing less, a whole lot more, and faster and more completely.

Real Hackers Rarely Use Scanners

While scanners have their place for finding vulnerabilities, they are noisy, and their traffic is readily detected by security solutions. When I see unauthorized-scan warnings on a monitoring system, the first thing I think is: diversion. That is what hackers use scanners for, and most of the time it will be a directed scan at an email server or website to make the security operations team look away from the real target.

Scanners Require Technical Expertise

A scanner implemented out of the box will produce so many false positives that usually fewer than half of the reported vulnerabilities are actual issues. Scanners require technical ability to set up each scan, assign the proper scope and scan features, and decipher the results. Despite this, a majority of enterprises implement scanners thinking they have somehow automated their vulnerability management processes. Likewise, most organizations think that if they scan all systems in their environment once per year, they have somehow implemented vulnerability management. We will leave that for another article, but for now: a scanner is not a vulnerability management solution. It is a tool that supports a complex cyber risk management process.

Robots Execute Hacking Commands – Scanners Test Responses

An autonomous penetration testing robot performs enumeration, attack vector analysis, exploit selection and review, and malware creation; it types and executes commands, gains a foothold, attains persistence, and finds ways to spread through a network to obtain root or domain administrator access. This is exactly what a penetration tester does, and exactly as with a penetration tester, each step may be pre-defined or not, depending on information gathered along the way. A scanner, on the other hand, executes a predefined signature or script and then evaluates the target system’s response to determine whether a vulnerability exists. This leads to many false positives and quite often must be followed up by a human penetration tester to confirm the vulnerability.
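The structural difference between the two approaches can be sketched in a few lines of code. This is a deliberately toy illustration with hypothetical names throughout (it does not reflect Tanuki's actual design): the "target" is just a function mapping an action to a response, the scanner fires fixed probes and reports matches, and the agent runs an observe–act–adapt loop until it reaches its goal or runs out of moves.

```python
# Toy contrast between a signature scanner and an autonomous agent.
# All names and the playbook are hypothetical, for illustration only.

def scanner(target, signatures):
    """Scanner: fire each predefined probe, flag matches, stop there."""
    findings = []
    for probe, vulnerable_response in signatures:
        if target(probe) == vulnerable_response:
            findings.append(probe)   # reported, but not confirmed
    return findings

def autonomous_agent(target, playbook, goal="root"):
    """Agent: choose an attack, evaluate the response, adapt, repeat."""
    state, trail = "enumerate", []
    while state != goal and state in playbook:
        action = playbook[state]     # select next attack vector
        response = target(action)    # execute and observe the result
        trail.append((action, response))
        state = response             # adapt based on what happened
    return state == goal, trail

# Toy target: each action's response drives the next decision.
toy = {"scan_ports": "foothold", "exploit_service": "escalate",
       "dump_creds": "root"}.get
playbook = {"enumerate": "scan_ports", "foothold": "exploit_service",
            "escalate": "dump_creds"}

print(scanner(toy, [("scan_ports", "foothold")]))  # ['scan_ports']
print(autonomous_agent(toy, playbook))             # reaches the goal
```

The scanner's output is a list of unconfirmed matches; the agent's output is a verified outcome plus the trail of actions that produced it, which is the "direct evidentiary knowledge" discussed below.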

These are two completely different levels of testing. A scanner offers quick, dirty, low-quality intelligence about the state of the target system; each piece must be confirmed and synthesized into a usable report. A robotic penetration tester presents detail about what was reviewed, what was determined, the exploit used, and how the system was breached. That is direct evidentiary knowledge of the target system’s vulnerability.

Conclusion

We covered a lot in a very short space here, so back to the main point. State-sponsored hackers and organized hacker groups that present an advanced persistent threat (APT) are orchestrated, automated where possible, and implement AI where possible to bypass your detection and alerting systems. Without offensive testing of systems, such as penetration testing and red team exercises, you cannot fully grasp the vulnerability risk of your perimeter and internal systems, the effectiveness of your hardening, or the capabilities of your detection systems. Autonomous penetration testing is now a mature technology that should be implemented to reduce cost and speed up the delivery of test results, thereby drastically reducing the time to risk handling and remediation.

Many are tired of hearing AI jargon applied to every kind of solution, but offensive testing is where the benefits of AI implementation far outweigh the costs. A win-win for cybersecurity going forward.
