Cyberdeterrence and Cyberwar by Martin C. Libicki of the RAND Corporation [1] was an interesting read, but I noticed that it contains many short-sighted descriptions and theories that could lower the defensive and offensive posture of any government, military, or private-sector organization that treats the work as a de facto source to be relied upon. My perspective comes from some of the statements and assumptions the author makes.
For example, Libicki states: "Cyberattacks Are Possible Only Because Systems Have Flaws" [1, p. 14]. I am unsure whether this statement reflects the knowledge available to the author at the time, or whether it is what the author truly believed and still believes.
A computing system will always have some form of flaw; however, to think that flaws in the system are the only reason an attack can succeed is itself flawed. Rather than get into a long discussion of the successes and failures of computing systems, I would like to point out the obvious: humans are often more flawed than the computers they operate and the programs and code running on them. An establishment can create complex mechanisms and rules to secure systems to the Nth degree, but human error can cause the security of those systems to fail repeatedly.
Nowhere was this more evident than in the cases of Bradley Manning [2] and Robert Hanssen [3, pp. 28–33]. Both had access to classified, "air-gapped" [4] systems. Policies, controls, and deterrents were in place, yet all of them failed, allowing these men to cause real damage. One can question whether Manning and Hanssen are relevant to a discussion of cyberwarfare, but their relevance here stems from the erroneous claim that "Cyberattacks Are Possible Only Because Systems Have Flaws." Admittedly, even my counterargument is imperfect, since it rests on a broad interpretation of the word "systems" in that statement; my interpretation is "information or computing systems."
In the book, the author continues: "Operational Cyberwar Has an Important Niche Role, but Only That. For operational cyberwar—acting against military targets during a war—to work, its targets have to be accessible and have vulnerabilities. These vulnerabilities have to be exploited in ways the attacker finds useful. It also helps if effects can be monitored." [1, p. 15]
The above statement is a very narrow assumption and a short-sighted theory. Accessibility need not mean accessibility from the Internet or accessibility by an enemy; access by any individual introduces a threat to the system as a whole, whether that individual is an adversary or a trusted insider, as the Manning and Hanssen examples already show. While those two may be extreme cases, and the likelihood of a repeat low, humans often cause more damage than buggy code, bad applications, and exploits. An attacker can therefore use the human as the vulnerability itself, as is often the case in targeted "client-side" [5] attacks.
...