By Peter Guffin, Visiting Professor of Practice
The recently published Internal Report of the National Institute of Standards and Technology (NIST) – NISTIR 8062, entitled “An Introduction to Privacy Engineering and Risk Management in Federal Systems” – introduces a new specialty discipline of systems engineering focused on managing privacy risks at both the individual and societal levels.
Geared toward information systems engineers, the report aims to bring the high-level, amorphous Fair Information Practice Principles (FIPPs) into focus and to provide implementation guidance for those in federal agencies tasked with operationalizing privacy. Although intended for federal government agencies, the report is equally applicable to private businesses handling personally identifiable information (PII), including those in privacy-regulated industries such as banking, insurance, and healthcare.
The report, which provides a frame of reference for identifying privacy risks and a model for privacy risk analysis that has been lacking in the privacy field, is an important first step toward bringing more rigor and discipline (dare one say science) to privacy risk management and closing the communication gap between privacy and security professionals. It also underscores the mutual responsibility of privacy and security professionals to work together in designing, altering, or integrating systems containing PII.
Most importantly, by introducing a common vocabulary, the report may also serve as a catalyst for increased integration of privacy and security curricula in higher education and professional development.
The report states: “Extensive guidance already exists for information security. In developing an engineering approach to privacy, it is important to understand the relationship – and particularly the distinctions – between information security and privacy. Doing so will improve understanding of how to apply established systems engineering and risk management processes to addressing privacy concerns.”
With respect to the boundaries and overlap between privacy and security, the report recognizes that, while problems that can result from unauthorized access to PII are generally well-recognized, problems from authorized processing of PII may be “less visible or not as well understood, but they also result in real consequences.” It goes on to explain that the potential problems that can arise from processing PII include loss of trust, discrimination (stigmatization and power imbalance), loss of self-determination (loss of autonomy, loss of liberty, exclusion, and physical harm), and economic loss. As examples, the report notes that “the scope of information collection related to providing public benefits may have a discriminatory and stigmatizing effect on recipients. Inaccurate information or the inability to correct it can lead to frustrations in ordinary activities such as boarding airplanes.”
Turning the FIPPs into actionable metrics for privacy risk management, the report introduces a set of privacy engineering objectives – predictability, manageability, and disassociability – which are intended to represent core characteristics of systems, similar to the security objectives of the CIA triad – confidentiality, integrity, and availability – which have been used as a means of categorizing capabilities and controls to achieve security outcomes. As explained in the report, “[a] system should exhibit each objective in some degree to be considered a system that can support an agency’s privacy policies. The privacy engineering objectives are intended to provide a degree of precision to encourage the implementation of measurable controls for managing privacy risk.”
Like the security objectives of the CIA triad, “[the] privacy engineering objectives could enable system designers or engineers to focus on the types of capabilities the system needs in order to demonstrate implementation of an agency’s privacy policies and system privacy requirements.”
In addition, the report introduces a privacy risk model to enable more consistent privacy risk assessments based on the vulnerability to, likelihood of, and impact of problematic data actions, defined as data actions that cause an adverse effect, or problem, for individuals. The report states: “Just as agencies conduct risk assessments to determine the information security risk of their systems and processes, they will need to conduct risk assessments to determine the privacy risk of their systems and processes by assessing the data actions of their systems, how they may become problematic, as well as what processes or controls they already have in place to manage these concerns.”
While recognizing that assessing the cost or harm from a problematic data action is an area that needs further research, the report suggests that agencies use other costs as proxies to help account for individual impact, including “legal compliance costs arising from the problems created for individuals, mission failure costs such as reluctance to use the system or service, reputational costs leading to loss of trust, and internal culture costs which impact morale or mission productivity as employees assess their general mission to serve the public good against the problems individuals may experience.”
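The risk model described above lends itself to a simple quantitative sketch. The following Python snippet is purely illustrative and is not drawn from NISTIR 8062 itself: the sample data actions, the likelihood and impact values, and the multiplicative scoring are all assumptions made here to show how an agency might rank data actions once it has estimated likelihood and used proxy costs to estimate impact.

```python
from dataclasses import dataclass


@dataclass
class DataAction:
    """A system operation on PII, e.g. collection, retention, or disclosure."""
    name: str
    likelihood: float  # estimated probability (0-1) the action becomes problematic
    impact: float      # estimated cost to individuals via proxies (0-10 scale)


def privacy_risk(action: DataAction) -> float:
    # The report frames privacy risk in terms of the likelihood that a data
    # action is problematic and its impact if it is; multiplying the two is
    # one common (assumed, not prescribed) way to combine them into a score.
    return action.likelihood * action.impact


# Hypothetical data actions with made-up estimates, for illustration only.
actions = [
    DataAction("collect benefit-eligibility data", likelihood=0.3, impact=7.0),
    DataAction("retain traveler records", likelihood=0.1, impact=4.0),
]

# Rank data actions so mitigation effort goes to the highest-risk ones first.
for a in sorted(actions, key=privacy_risk, reverse=True):
    print(f"{a.name}: risk score {privacy_risk(a):.1f}")
```

The point of even a rough scoring like this is prioritization: it gives privacy and security professionals a shared, comparable number to discuss, which is exactly the kind of precision the report's engineering objectives are meant to encourage.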
Going forward, NIST plans to develop guidance for privacy engineering and risk management that complements its security risk management-related special publications, as well as a “complete set of tools to enable privacy to achieve parity with other considerations in agencies’ enterprise risk management processes.” Stay tuned.