In today's computing world, security plays an increasingly prominent role. The industry faces challenges to public confidence with each discovery of vulnerabilities, and customers expect security to be delivered out of the box, even in programs that were not designed with security in mind. Software maintainers are challenged to improve the security of their programs and are often under-equipped to do so. Some turn to open source software (OSS), as the availability of source code facilitates validation and answers their need for trustworthy programs. OSS is often implemented in the C programming language (26% of projects, according to SourceForge.net), which makes it necessary to investigate the security issues specific to C.
This paper summarizes key concepts related to security hardening and demonstrates their applicability to the C language. We also propose a progressive approach to integrate security services and protection measures into existing software to ultimately make it more resistant to cyber-attacks. Given our ever increasing dependence on information technologies, it becomes critically important to provide maintainers with tools that facilitate and accelerate the security hardening process, increasing the effectiveness of the effort and lowering the resources required to do so.
Software Security Hardening
Security hardening of software is an informal term, but the technical community considers it to be an iterative process to progressively implement security services and protection measures. The process starts with the basic software that has been designed and implemented to offer some functionality, as typically defined by use cases. As a first step toward better protection of data, security services are introduced to implement features associated with authentication, access control, confidentiality, and integrity. These services are typically described via security use cases. However, this is not sufficient. It is often necessary to define misuse cases to protect the software against users' mistakes and other errors that could happen in any system operated by humans in a complex execution environment. Moreover, it is often required to test software against abuse cases that model deliberate attacks that could be encountered in a hostile environment. Depending on the criticality of the system being designed, it may be necessary to harden the key components to the highest level, including security services and protection measures against misuses or deliberate attacks. Other, less critical components may be hardened only to a lower level.
In practice, the risk analysis may lead to changes in the source code, the development process, the overall design, or even the operating environment itself as described in the following classification of security hardening methods:
Code-level hardening implies changes in the source code in a way that prevents vulnerabilities without altering the design. Some vulnerabilities are a direct result of programming activities, and code-level hardening removes these vulnerabilities in a systematic way.
Software process hardening is the replacement of the development tools and compilers, the use of stronger implementations of libraries, and the execution of complementary test suites which implement security scenarios.
Design-level hardening consists of re-engineering the application in order to integrate security features that were absent or insufficient. Some security vulnerabilities cannot be resolved by a simple change in the code or by a better environment, but are due to a fundamentally flawed design. This category of hardening practices targets higher-level security concerns such as access control, authentication, and secure communication. In this context, best practices and security design patterns can be adapted from their original intent and used to guide the redesign effort.
Operating environment hardening stands for improvements to the security of the execution context (network, operating systems, libraries, etc.) that the software relies upon. Those changes typically make exploitation of vulnerabilities harder, although they do not remedy them.
The spectrum of changes that may be required is very broad and security analysts usually take two complementary perspectives to address key security issues. Typically, analysts prefer to start with the high-level perspective before engaging into code changes or other low-level security issues.
From the high-level perspective, the attention will be put on design-level hardening and on the relationship that the system has with its operating environment. The goal is to identify more precisely the threats, to evaluate the real risks, and to propose countermeasures.
Identifying threats is an important task in security hardening since we need to determine which threats require mitigation and how to mitigate them, preferably by applying a structured and formal mechanism or process. As such, the following is a brief description of the three main steps needed to identify and evaluate the risk of a threat:
- Application decomposition divides the application into its key components in order to identify their trust boundaries. This decomposition helps to minimize the number of threats that need mitigation by excluding those that are outside the scope and beyond the control of the application.
- Threat identification categorizes threats according to the six categories presented by Howard and LeBlanc (the STRIDE model): spoofing identity, tampering with data, repudiation, information disclosure, denial of service, and elevation of privilege.
- Risk evaluation is needed to determine the priority of threats to be mitigated.
Once the previous steps are completed and the threat is well identified and categorized, the appropriate mitigation technique(s) can be determined. Mappings exist between the categories of threats and known countermeasures; Howard and LeBlanc provide a list of mitigation techniques for each category of threats within their classification.
For example, against the threat of spoofing identity, they recommend using appropriate authentication and to protect secret data; against information disclosure, they recommend using authorization and encryption. Regarding the deployment of these techniques into applications and systems, security patterns are useful to choose the best techniques available, and guide their implementation.
Low-Level Perspective
From the low-level perspective, the attention will be on the source code itself and on the methodologies (tools and techniques) used to build software systems. Software analysts often use automated tools to find the software constructs that are problematic or exploitable in an attack scenario. Some tools use static code analysis to find potential implementation flaws. It may be necessary to complement the security analysis of the code with a run-time tester.
When code review is performed, it is important to evaluate the impact of a software defect because it may result in a real vulnerability that will represent an exploitable weakness. Even though many tools exist to help identify vulnerabilities, no tools are perfect. Some tools are good at certain types of defects while others may simply miss them. False positive diagnostics are often the most difficult problem software analysts encounter. [Editor's Note: readers interested in the findings of research into this subject will find details in the article Language Insecurity.]
Notorious Vulnerabilities in the C Language
In this section, some major security vulnerabilities of C programs are presented along with the hardening techniques used to remedy them at different levels. They are recognized as being among the most notorious sources of problems in software security and reliability, and they illustrate the multi-layer approach that is needed to cope with them in a rigorous manner.
Buffer overflows exploit common programming errors that arise mostly from weak or non-existent bounds checking of input being stored in memory buffers. Buffers on both the stack and the heap can be corrupted. Many APIs (application programming interfaces) and tools have been deployed to solve the problem of buffer overflow or to make its exploitation harder. Table 1 summarizes the security hardening solutions for buffer overflows.
|Table 1: Hardening for Buffer Overflows|
|Code:||Bound-checking, memory manipulation functions with length parameter, ensuring proper loop bounds, format string specification, user's input validation|
|Software Process:||Compile with canary words, inject bound-checking aspects|
|Design:||Input validation, input sanitization|
|Operating Environment:||Disable stack execution, use libsafe, enable stack randomization|
Integer security issues are caused by conversions between signed and unsigned types, sign errors, truncation errors, and overflow and underflow. These vulnerabilities can be addressed using sound coding practices and special features in some compilers, such as replacing integer operations with safer calls. The security hardening solutions for such problems are summarized in Table 2.
|Table 2: Hardening for Integer Vulnerabilities|
|Code:||Use of functions detecting integer overflow/underflow, migration to unsigned integers, ensuring integer data size in assignments/casts|
|Software Process:||Compiler option to convert arithmetic operation to error condition-detecting|
Memory management in C is entirely the programmer's responsibility: pointer management, buffer dimensions, and the allocation and de-allocation of dynamic memory can all cause memory corruption, unauthorized access to memory space, and buffer overflows. Security hardening solutions against such problems are summarized in Table 3.
|Table 3: Hardening for Memory Management Vulnerabilities|
|Code:||NULL assignment on freeing and initialization, error handling on allocation, pointer initialization, avoid null dereferencing|
|Software Process:||Using aspects to inject error handling and assignments, compiler option to force detection of multiple-free errors|
|Operating Environment:||Use a hardened memory manager (e.g. dmalloc, phkmalloc)|
File management errors can lead to many security vulnerabilities such as data disclosure, data corruption, code injection and denial of service. Unsafe temporary files and improper file creation access control flags are two major sources of vulnerabilities in file management. In some cases, we can redesign the application to use inter-process communication instead of temporary files. The security hardening solutions for such problems are summarized in Table 4.
|Table 4: Hardening for File Management Vulnerabilities|
|Code:||Use proper temporary file functions, default use of restrictive file permissions, setting a restrictive file creation mask, use of ISO/IEC TR 24731 functions|
|Software Process:||Set a wrapper program changing file creation mask|
|Design:||Redesign to avoid temporary files|
|Operating Environment:||Restricting access rights to relevant directories|
We introduced the concept of software security hardening and a classification for hardening methods. It is hoped that this will guide developers and maintainers in deploying security features and remedying vulnerabilities present in existing OSS. A wealth of high-quality information is available on security vulnerabilities and on the techniques used to mitigate them. We recommend some key resources for addressing security concerns in existing software, including the US Department of Homeland Security portal, which is the most comprehensive reference for software security issues.
As general advice, the scientific community recommends looking for OSS implemented in modern languages such as Java, C# .NET, Ada, SPARK, and CAML. These offer much better security than older programming languages like C and C++, which are deficient in terms of type safety and rigorous memory management. In all cases, well-recognized and well-supported implementations provide better building blocks, since they are constantly improved to match the ever increasing risk encountered in the modern cyber environment.
This research is the result of a fruitful collaboration between the Computer Security Laboratory of Concordia University, Defence Research and Development Canada at Valcartier, and Bell Canada, thanks to a grant under the NSERC/DND Research Partnership Program.
This article is based on work originally presented at the 2006 International Conference on Privacy, Security and Trust which was hosted by the University of Ontario Institute of Technology.