33.5 Policy-based fortifications


These fortifications focus on human behavior rather than system design or component selection. In some ways these are the simplest to implement, as they generally require little in the way of technical expertise. This is not to suggest, however, that policy-based fortifications are therefore the easiest to implement. On the contrary, changing human behavior is usually a very difficult feat. Policy-based fortifications are not necessarily cheap, either: although little capital is generally required, operational costs will likely rise as a result of these policies. This may take the form of monetary costs, additional staffing costs, and/or simply costs associated with impeding normal work flow (e.g. pulling personnel away from their routine tasks to do training, requiring personnel to spend more time doing things like inventing and tracking new passwords, slowing the pace of work by limiting authorization).

33.5.1 Foster awareness

Ensure all personnel tasked with using and maintaining the system are fully aware of security threats, and of best practices to mitigate those threats. Given the ever-evolving nature of cyber-attacks, this process of educating personnel must be continuous.

A prime mechanism of cyber-vulnerability is the casual sharing of information between employees, and with people outside the organization. Information such as passwords and network design should be considered “privileged” and should only be shared on a need-to-know basis. Critical security information such as passwords should never be communicated to others or stored electronically in plain (“cleartext”) format. When necessary to communicate or store such information electronically, it should be encrypted so that only authorized personnel may access it.
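The rule that credentials must never be stored in cleartext can be illustrated with a salted key-derivation hash. This is a minimal sketch using Python's standard `hashlib`; the function names and iteration count are illustrative, not a prescribed implementation:

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None):
    """Derive a salted hash suitable for storage (never store the cleartext)."""
    if salt is None:
        salt = os.urandom(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Re-derive the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("guess", salt, stored))                         # False
```

Even if an attacker obtains the stored salt and digest, recovering the original password requires guessing, which the slow key-derivation function deliberately makes expensive.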

In addition to the ongoing education of technical personnel, it is important to keep management personnel aware of cyber threats and their potential consequences, so that the necessary resources will be granted to cyber-security efforts.

33.5.2 Employ security personnel

For any organization managing important processes and services (“important” here meaning capable of causing harm if compromised by the right type of cyber-attack), it is imperative to employ qualified and diligent personnel tasked with the ongoing maintenance of digital security. These personnel must be capable of securing the control systems themselves and not just general data systems.

One of the routine tasks for these personnel should be evaluations of risks and vulnerabilities. This may take the form of security audits or even simulated attacks whereby the security of the system is tested with available tools.

33.5.3 Utilize effective authentication

Simply put, it is imperative to correctly identify all users accessing a system. This is what “authentication” means: correctly identifying the person (or device) attempting to use the digital system. Passwords are perhaps the most common authentication technique.

The first and foremost precaution to take with regard to authentication is to never use default (manufacturer) passwords, since these are public information. This precautionary measure may seem so obvious as to not require any elaboration, but sadly it remains a fact that too many password-protected devices and systems are found operating in industry with default passwords.

Another important precaution to take with passwords is to not use the same password for all systems. The reasoning behind this precaution is rather obvious: once a malicious party gains knowledge of that one password, they have access to all systems protected by it. The scenario is analogous to using the exact same key to unlock every door in the facility: all it takes now is one copied key and suddenly intruders have access to every room.

Passwords must also be changed on a regular basis. This provides some measure of protection even after a password becomes compromised, because the old password(s) no longer function.

Passwords chosen by system users should be “strong,” meaning difficult for anyone else to guess. When attackers attempt to guess passwords, they do so in two different ways:

  • Try using common words or phrases that are easy to memorize
  • Try every possible combination of characters until one is found that works

The first style of password attack is called a dictionary attack, because it relies on a database of common words and phrases. The second style of password attack is called a brute force attack because it relies on a simple and tireless (“brute”) algorithm, practical only if executed by a computer.

A password resistant to dictionary-style attacks is one not based on a common word or phrase. Ideally, that password will appear to be nonsense, not resembling any discernible word or simple pattern. The only way to “crack” such a password, since a database of common words will be useless against it, will be to attempt every possible character combination (i.e. a brute-force attack).
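The two attack styles can be contrasted in a short sketch: a dictionary attack tries a word list, while a brute-force attack exhaustively tries every combination. The word list and target passwords here are hypothetical examples:

```python
import hashlib
import itertools
import string

def dictionary_attack(target_hash: str, wordlist):
    """Try common words first -- fast, but only finds weak passwords."""
    for word in wordlist:
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word
    return None

def brute_force_attack(target_hash: str, alphabet: str, max_len: int):
    """Try every combination -- always succeeds eventually, but the search
    space grows exponentially with password length."""
    for n in range(1, max_len + 1):
        for combo in itertools.product(alphabet, repeat=n):
            candidate = "".join(combo)
            if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
                return candidate
    return None

target = hashlib.sha256(b"zx9").hexdigest()
print(dictionary_attack(target, ["password", "letmein", "admin"]))            # None
print(brute_force_attack(target, string.ascii_lowercase + string.digits, 3))  # zx9
```

Note that the nonsense password “zx9” defeats the dictionary but falls quickly to brute force because it is so short; resisting both attacks requires a password that is both uncommon and long.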

A password resistant to brute-force-style attacks is one belonging to a huge set of possible passwords. In other words, the number of possible passwords sharing the same alphabet and length must be very large. Calculating the brute-force strength of a password is a matter of applying a simple exponential function:

S = C^N

Where,

S = Password strength (i.e. the number of unique password combinations possible)

C = Number of available characters (i.e. the size of the alphabet)

N = Number of characters in the password

For example, a password consisting of four characters, each character being a letter of the English alphabet where lower- and upper-case characters are treated identically, would give the following strength:

S = 26^4 = 456976 possible password combinations

If we allowed case-sensitivity (i.e. lower- and upper-case letters treated differently), this would double the value of C and yield sixteen times as many possible passwords:

S = 52^4 = 7311616 possible password combinations

Obviously, then, passwords using larger alphabets are stronger than passwords with smaller alphabets.
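The exponential growth of S with alphabet size and password length can be tabulated in a few lines of Python. This is a sketch; the character counts assume the standard ASCII sets named in the comments:

```python
# Password strength S = C**N for several alphabets and lengths
alphabets = {
    "lowercase letters (C = 26)": 26,
    "mixed-case letters (C = 52)": 52,
    "mixed case + digits (C = 62)": 62,
    "all printable ASCII (C = 95)": 95,
}

for name, c in alphabets.items():
    for n in (4, 8, 12):
        print(f"{name}, N = {n:2d}: S = {c**n:,}")
```

Running this shows that length matters even more than alphabet size: adding four characters to a password multiplies S by C^4, dwarfing the gain from any practical enlargement of the alphabet.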

33.5.4 Cautiously grant authorization

While authentication is the process of correctly identifying the user, authorization is the process of assigning rights to each user. The two concepts are obviously related, but not identical. Under any robust security policy, users are given only as much access as they need to perform their jobs efficiently. Too much access not only increases the probability of an attacker being able to cause maximum harm, but also increases the probability that benevolent users may accidentally cause harm.

Perhaps the most basic implementation of this policy is for users to log in to their respective computers using the lowest-privilege account needed for the known task(s), rather than logging in at the highest level of privilege they might ever need. This is a good policy for anyone to adopt when using a personal computer for any sort of task, be it work- or leisure-related. Logging in with full (“administrator”) privileges is certainly convenient because it allows you to do anything on the system (e.g. install new software, reconfigure any service, etc.), but it also means any malware accidentally executed under that account has the same unrestricted level of access to the system. Habitually logging in to a computer system with a low-privilege account helps mitigate this risk, for any accidental execution of malware will be similarly limited in its power to do harm.

Another implementation of this policy is called application whitelisting, where only trusted software applications are allowed to execute on any computer system. This stands in contrast to “blacklisting,” which is the philosophy behind anti-virus software: maintaining a list of software applications known to be harmful (malware) and prohibiting the execution of those pre-identified applications. Blacklisting (anti-virus) only protects against malware that has already been identified and reported to that computer; it cannot protect against “zero-day” malware known to no one except the attacker. In a whitelisting system, each computer is pre-loaded with a list of acceptable applications, and no other application – benign or malicious – will be able to run on that machine.
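One common way to implement whitelisting is by cryptographic hash of the executable file, so that even a renamed or modified binary fails the check. This is a minimal sketch under that assumption; in practice the allowlist itself would be loaded from protected, signed configuration rather than hard-coded:

```python
import hashlib
from pathlib import Path

# SHA-256 digests of approved executables (populated from a protected,
# administrator-controlled configuration in a real deployment).
APPROVED_HASHES = set()

def sha256_of(path: Path) -> str:
    """Hash the file contents, reading in chunks to handle large binaries."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def may_execute(path: Path) -> bool:
    """Whitelisting: permit only pre-approved binaries; deny everything else."""
    return sha256_of(path) in APPROVED_HASHES
```

Because the default answer is “deny,” a zero-day binary never seen before is blocked automatically, which is precisely the property blacklisting lacks.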

33.5.5 Maintain good documentation

While this is important for effective maintenance in general, thorough and accurate documentation is especially important for digital security because it helps identify vulnerabilities. Details to document include:

  • Network diagrams
  • Software version numbers
  • Device addresses

33.5.6 Close unnecessary access pathways

All access points to the critical system must be limited to those necessary for system function. This means all other potential access points in the critical system must be closed so as to minimize the total number of access points available to attackers. Examples of access points which should be inventoried and minimized:

  • Hardware communication ports (e.g. USB serial ports, Ethernet ports, wireless radio cards)
  • Software TCP ports
  • Shared network file storage (“network drives”)
  • “Back-door” accounts used for system development

That last category deserves further explanation. When engineers are developing a new system, otherwise ordinary and sensible authentication/authorization measures become a major nuisance. Software development requires repeated logins, shutdowns, and tests, each forcing the user to re-authenticate and negotiate security controls. It is therefore understandable when engineers create simpler, easier access routes to the system under development, to expedite their work and minimize frustration.

Such “back-door” access points become a problem when those same engineers forget (or simply neglect) to remove them after the developed system is released for others to use. An interesting example of this very point was the so-called basisk vulnerability discovered in some Siemens S7 PLC products. A security researcher named Dillon Beresford, working for NSS Labs, discovered a telnet service (a legacy, unencrypted remote-login protocol) running on certain models of Siemens S7 PLCs with a user account named “basisk” (the password for this account being the same as the user name). All one needed to do in order to gain privileged access to the PLC’s operating system was connect to the PLC using a telnet client and enter “basisk” for the user name and “basisk” for the password! Clearly, this was a back-door account used by Siemens engineers during development of that PLC product line, but it was not closed prior to releasing the PLC for general use.
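An audit for leftover services of this kind can start with a simple reachability check on well-known ports such as telnet's TCP port 23. This sketch only tests whether a port accepts connections; it attempts no login, and the commented inventory loop is hypothetical:

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if the host accepts a TCP connection on the given port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example audit: flag any device still exposing a telnet service (port 23).
# for addr in plc_addresses:          # hypothetical device inventory
#     if tcp_port_open(addr, 23):
#         print(f"WARNING: telnet open on {addr}")
```

Any port found open that is not on the documented list of necessary access points should be investigated and, if unneeded, closed.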

33.5.7 Maintain operating system software

All operating system software manufacturers periodically release “patches” designed to improve the performance of their products. This includes patches for discovered security flaws. Therefore, it is essential for all computers belonging to a critical system to be regularly “patched” to ensure maximum resistance to attack.

This is a significant problem within industry because so much industrial control system software is built to run on consumer-grade operating systems such as Microsoft Windows. Popular operating systems are built with maximum convenience in mind, not maximum security or even maximum reliability. New features added to an operating system for the purpose of convenient access and/or new functionality often present new vulnerabilities.

Another facet to the consumer-grade operating system problem is that these operating systems have relatively short lifespans. Driven by consumer demand for more features, software manufacturers develop new operating systems and abandon older products at a much faster rate than industrial users upgrade their control systems. Upgrading the operating systems on computers used for an industrial control system is no small feat, because it usually means disruption of that system’s function, not only in terms of the time required to install the new software but also (potentially) re-training required for employees. Upgrading may even be impossible in cases where the new operating system no longer supports features necessary for that control system. This would not be a problem if operating system manufacturers provided the same long-term (multi-decade) support for their products as industrial hardware manufacturers typically do, but this is not the case for consumer-grade products such as Microsoft Windows.

33.5.8 Routinely archive critical data

The data input into and generated by digital control systems is a valuable commodity, and must be treated as such. Unlike material commodities, data is easily replicated, and this fact provides some measure of protection against loss from a cyber-attack. Routine “back-ups” of critical data, therefore, are an essential part of any cyber-security program. It should be noted that this includes not just operational data collected by the control system during operation, but also data such as:

  • PID tuning parameters
  • Control algorithms (e.g. function block programs, configuration data, etc.)
  • Network configuration parameters
  • Software installation files
  • Software license (authorization) files
  • Software drivers
  • Firmware files
  • User authentication files
  • All system documentation (e.g. network cable diagrams, loop diagrams)

This archived data should be stored in a medium immune to cyber-attacks, such as read-only optical disks. It would be foolish, for example, to store this sort of critical data only as files on the operating drives of computers susceptible to attack along with the rest of the control system.
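An archive is only useful if it can be verified intact at restore time; a checksum manifest written alongside the backup makes corruption or tampering detectable. A minimal sketch, assuming the backup is an ordinary directory tree:

```python
import hashlib
from pathlib import Path

def build_manifest(backup_dir: Path) -> dict:
    """Map each archived file's relative path to its SHA-256 digest."""
    manifest = {}
    for f in sorted(backup_dir.rglob("*")):
        if f.is_file():
            rel = str(f.relative_to(backup_dir))
            manifest[rel] = hashlib.sha256(f.read_bytes()).hexdigest()
    return manifest

def verify_manifest(backup_dir: Path, manifest: dict) -> list:
    """Return the list of files that are missing or no longer match."""
    bad = []
    for rel, digest in manifest.items():
        f = backup_dir / rel
        if not f.is_file() or hashlib.sha256(f.read_bytes()).hexdigest() != digest:
            bad.append(rel)
    return bad
```

The manifest itself should be written to the same read-only medium as the archive, so that an attacker who corrupts the backup cannot also silently rewrite the checksums.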

33.5.9 Create response plans

Just as no industrial facility would be safe without incident response plans to mitigate physical crises, no industrial facility using digital control systems is secure without response plans for cyber-attacks. This includes such details as:

  • A chain of command for leading the response
  • Instructions on how to restore critical data and system functions
  • Work-arounds for minimal operation while critical systems are still unavailable

33.5.10 Limit mobile device access

Mobile digital devices such as cell phones and even portable storage media (e.g. USB “flash” drives) pose digital security risks because they may be exploited as an attack vector bypassing air gaps and firewalls. It should be noted that version 0.5 of Stuxnet was likely inserted into the Iranian control system in this manner, through an infected USB flash drive.

A robust digital security policy will limit or entirely prohibit bringing personal electronic devices into areas where they might connect to the facility’s networks or equipment. Where mobile devices are essential for job functions, those devices should be owned by the organization and registered in such a way as to authenticate their use. Computers should be configured to automatically reject non-registered devices such as removable flash-memory storage drives. Portable computers not owned and controlled by the organization should be completely off-limits to the process control system.

Above all, one should never underestimate the potential harm of allowing uncontrolled devices to connect to critical, trusted portions of an industrial control system. The degree to which any portion of a digital system may be considered “trusted” is a function of every component of that system; allowing connections from untrusted devices compromises that trust.

33.5.11 Secure all toolkits

A special security consideration for industrial control systems is the existence of software designed to create and edit controller algorithms and configurations. The type of software used to write and edit Ladder Diagram (LD) code inside of programmable logic controllers (PLCs) is a good example of this, such as the Step7 software used to program Siemens PLCs in Iran’s Natanz uranium enrichment facility. Instrumentation professionals use such software on a regular basis to do their work, and as such it is an essential tool of the trade. However, this very same software is a weapon in the hands of an attacker, or when hijacked by malicious code.

A common practice in industry is to leave computers equipped with such “toolkit” software connected to the control network for convenience. This is a poor policy, and one that is easily remedied by simply disconnecting the programming computer from the control network immediately after downloading the edited control code. An even more secure policy is to never connect such “toolkit” computers to a network at all, but only to controllers directly, so that the toolkit software cannot be hijacked.

Another layer of defense is to utilize robust password protection on the programmable control devices when available, rather than leaving password fields blank which then permits any user of the toolkit software full access to the controller’s programming.

33.5.12 Close abandoned accounts

Given the fact that disgruntled technical employees constitute a significant security threat to organizations, it stands to reason that the user accounts of terminated employees should be closed as quickly as possible. Not only do terminated employees possess authentication knowledge in the form of user names and passwords, but they may also possess extensive knowledge of system design and vulnerabilities.

33.6 Review of fundamental principles

Shown here is a partial listing of principles applied in the subject matter of this chapter, given for the purpose of expanding the reader’s view of this chapter’s concepts and of their general inter-relationships with concepts elsewhere in the book. Your abilities as a problem-solver and as a life-long learner will be greatly enhanced by mastering the applications of these principles to a wide variety of topics, the more varied the better.

  • Blacklisting: the concept of flagging certain users, software applications, etc. as “forbidden” from accessing a system.
  • Chemical isotopes: variants of chemical elements differing in atomic mass. Relevant to the subject of uranium enrichment for nuclear reactors and nuclear weapons, where one particular isotope must be concentrated (“enriched”) relative to another in order to be useful.
  • Defense-in-Depth: a design philosophy relying on multiple layers of protection, the goal being to maintain some degree of protection in the event of one or more other layers failing.
  • Reliability: a statistical measure of the probability that a system will perform its design function. Relevant here with regard to control systems, in that proper control system design can significantly enhance the reliability of a large system if the controls are able to isolate faulted redundant elements within that system. This is the strategy used by designers of the Iranian uranium enrichment facility, using PLC controls to monitor the health of many gas centrifuges used to enrich uranium, and taking failed centrifuges off-line while maintaining continuous production.
  • Whitelisting: the concept of only permitting certain users, software applications, etc. to access a system.


“21 Steps to Improve Cyber Security of SCADA Networks”, Department of Energy, USA, May 2011.

Bartman, Tom and Carson, Kevin, “Securing Communications for SCADA and Critical Industrial Systems”, Technical Paper 6678-01, Schweitzer Engineering Laboratories, Inc., Pullman, WA, January 22, 2015.

Beresford, Dillon, “Siemens Simatic S7 PLC Exploitation”, technical presentation at Black Hat USA conference, 2011.

Byres, Eric, “Building Intrinsically Secure Control and Safety Systems – Using ANSI/ISA-99 Security Standards for Improved Security and Reliability”, Byres Security Inc., May 2009.

Byres, Eric, “Understanding Deep Packet Inspection (DPI) for SCADA Security”, document WP_INDS_TOF_514_A_AG, Belden, Inc., 2014.

Ciampa, Mark, Security+ Guide to Network Security Fundamentals, Course Technology (a division of Thompson Learning), Boston, MA, 2005.

“Common Cybersecurity Vulnerabilities in Industrial Control Systems”, Department of Homeland Security, Control Systems Security Program, National Cyber Security Division, USA, May 2011.

Falliere, Nicolas; Murchu, Liam O.; Chien, Eric; “W32.Stuxnet Dossier”, version 1.4, Symantec Corporation, Mountain View, CA, February 11, 2011.

Fischer, Ted, “Private and Public Key Cryptography and Ransomware”, Center for Internet Security, Inc., Pullman, WA, December 2014.

Grennan, Mark, “Firewall and Proxy Server HOWTO”, version 0.8, February 26, 2000.

Horak, Ray, Webster’s New World Telecom Dictionary, Wiley Publishing, Inc., Indianapolis, IN, 2008.

Kemp, R. Scott, “Gas Centrifuge Theory and Development: A Review of US Programs”, Program on Science and Global Security, Princeton University, Princeton, NJ, Taylor & Francis Group, LLC, 2009.

Langner, Ralph, “To Kill A Centrifuge – A Technical Analysis of What Stuxnet’s Creators Tried to Achieve”, The Langner Group, Arlington, MA, November 2013.

Lee, Jin-Shyan; Su, Yu-Wei; Shen, Chung-Chou, “A Comparative Study of Wireless Protocols: Bluetooth, UWB, ZigBee, and Wi-Fi”, Industrial Technology Research Institute, Hsinchu, Taiwan, November 2007.

Leidigh, Christopher, “Fundamental Principles of Network Security”, White Paper #101, American Power Conversion (APC), 2005.

Leischner, Garrett and Whitehead, David, “A View Through the Hacker’s Looking Glass”, Technical Paper 6237-01, Schweitzer Engineering Laboratories, Inc., Pullman, WA, April 2006.

Makhijani, Arjun Ph.D.; Chalmers, Lois; Smith, Brice Ph.D.; “Uranium Enrichment – Just Plain Facts to Fuel an Informed Debate on Nuclear Proliferation and Nuclear Power”, Institute for Energy and Environmental Research, October 15, 2004.

McDonald, Geoff; Murchu, Liam O.; Doherty, Stephen; Chien, Eric; “Stuxnet 0.5: The Missing Link”, version 1.0, Symantec Corporation, Mountain View, CA, February 26, 2013.

Oman, Paul W.; Risley, Allen D.; Roberts, Jeff; Schweitzer, Edmund O. III, “Attack and Defend Tools for Remotely Accessible Control and Protection Equipment in Electric Power Systems”, Schweitzer Engineering Laboratories, Inc., Pullman, WA, March 12, 2002.

Postel, John, Internet Protocol – DARPA Internet Program Protocol Specification, RFC 791, Information Sciences Institute, University of Southern California, Marina Del Ray, CA, September 1981.

Rescorla, E. and Korver, B.; “Guidelines for Writing RFC Text on Security Considerations” (RFC 3552), The Internet Society, July 2003.

Risley, Allen; Marlow, Chad; Oman, Paul; Dolezilek, Dave, “Securing SEL Ethernet Products With VPN Technology”, Application Guide 2002-05, Schweitzer Engineering Laboratories, Inc., Pullman, WA, July 11, 2002.

“Seven Strategies to Effectively Defend Industrial Control Systems”, National Cybersecurity and Communications Integration Center (NCCIC), Department of Homeland Security (DHS), USA.

“Tofino Xenon Security Appliance” data sheet, document DS-TSA-XENON version 6.0, Tofino Security, 2014.

“W32.DuQu – The Precursor to the next Stuxnet”, version 1.4, Symantec Corporation, Mountain View, CA, November 23, 2011.

Whitehead, David and Smith, Rhett, “Cryptography: A Tutorial for Power Engineers”, Technical Paper 6345-01, Schweitzer Engineering Laboratories, Inc., Pullman, WA, October 20, 2008.

Zippe, Gernot, “A Progress Report: Development of Short Bowl Centrifuges”, Department of Physics, University of Virginia, July 1, 1959.
