Part 1 can be found here: http://justinparrtech.com/JustinParr-Tech/top-developer-mistakes/
Security issues and design flaws can be costly to fix once an application has already been written.
In spite of everyone’s best efforts and intentions, here are some additional, common mistakes that can creep in during the application design / development process.
1. Mistake: Using Plaintext Parameters Such as “ClientID” or “Password”
Always obfuscate parameter names and values.
When designing a web-based application, it’s natural to design a login screen with a field called “Password”, which then appears as a parameter in the query string.
Unfortunately, this is hacker-bait. Hackers can use automated test tools to run dictionary attacks, often undetected.
Sample HTML code:
<FORM METHOD="GET" ACTION="Login.ASP">
User: <INPUT TYPE="TEXT" NAME="UserID">
Password: <INPUT TYPE="PASSWORD" NAME="Password">
<BR><INPUT TYPE="SUBMIT" VALUE="Login">
</FORM>
The resulting HTML form presents the two fields as labeled. The resulting HTTP query string (with example values) would look like:
Login.ASP?UserID=bob&Password=secret123
As you can see, this would be very easy to attack using automated tools.
A simple login form like this can be obfuscated with random or seemingly random field names, extra “hidden” fields, and even a honeypot: a bogus password field:
<FORM METHOD="GET" ACTION="Login.ASP">
<INPUT TYPE="HIDDEN" NAME="KCMSJERG" VALUE="55544">
Us er: <INPUT TYPE="TEXT" NAME="QCIEJCGH">
Pa ssword: <INPUT TYPE="PASSWORD" NAME="PAVSEHVE">
<INPUT TYPE="HIDDEN" NAME="EQWVDKDO" VALUE="KJEFG">
<INPUT TYPE="HIDDEN" NAME="Password" VALUE="HotDog44">
<BR><INPUT TYPE="SUBMIT" VALUE="Login">
</FORM>
A URL like this one (again with example values) is much harder to attack:
Login.ASP?KCMSJERG=55544&QCIEJCGH=bob&PAVSEHVE=secret123&EQWVDKDO=KJEFG&Password=HotDog44
Notice that the field labels can be obscured with an HTML character entity that renders as a barely-noticeable space. The real user ID and password are safely hidden behind random (or random-looking) field names. The application can either safely ignore the extra hidden values, or use them as a trap to detect URL tampering. The field named “Password” is not really the password – it’s a honeypot. A hacker might spend significant resources attacking this field, which the application simply ignores.
Another example: having a parameter called “ClientID” allows a hacker or malicious user to tamper with the URL by changing the ClientID to some other value.
Once logged in, a shopping cart application might present a “Recent Orders” link, with HTML that looks like this:
<A HREF="./ShoppingCart?ClientID=12345&Action=Orders">Recent Orders</A>
The problem occurs when a hacker or malicious user tampers with the URL by changing the client ID:
./ShoppingCart?ClientID=12346&Action=Orders
If your application shows Fred’s orders instead of Bob’s, there is a serious problem! Further, Bob could possibly find Fred’s address under “Change Preferences” (using the same tampering attack), and now knows exactly what Fred ordered, when it’s expected to arrive, and where Fred lives.
A better approach, if you must use parameters like this, is to use random or random-looking parameter names, and never to use sequential database values as parameter values. A simple hash routine can be used to obfuscate cleartext parameters, and bogus parameters can help detect tampering.
A URL that looks like this is much harder to tamper with:
./ShoppingCart?BLSKCKDA=0022314415881&PCKECLER=2&SIEFJEVL=ABC
BLSKCKDA contains a hash value, 0022314415881, which is presumably a hash of the “real” client ID. Adding or subtracting one from 0022314415881 should produce an invalid value, which the application could itself use to detect URL tampering.
Instead of something obvious like “Action=Orders”, “PCKECLER=2” has no obvious meaning.
SIEFJEVL=ABC is simply a red herring that can be used by the application to detect URL tampering.
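As a sketch of this idea, the server can hand out random, per-session tokens in place of real database IDs, and treat any unknown token as evidence of tampering (the class name and values here are hypothetical, not from a specific framework):

```python
import secrets

class ParamMap:
    """Per-session map from opaque URL tokens to real values (sketch)."""

    def __init__(self):
        self._tokens = {}

    def wrap(self, value):
        # Random, non-sequential token -- reveals nothing about the real ID.
        token = secrets.token_hex(8)
        self._tokens[token] = value
        return token

    def unwrap(self, token):
        # An unknown token means the URL was guessed or tampered with.
        if token not in self._tokens:
            raise ValueError("possible URL tampering")
        return self._tokens[token]

session = ParamMap()
param = session.wrap(12345)   # goes into the URL instead of ClientID=12345
```

Because each token is random and scoped to one session, changing a digit in the URL yields an unknown token rather than a neighboring customer’s record.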
Form and query parameters should always be generic or random (session-specific).
2. Mistake: No True Test Environment
Always have an environment where you can test code updates and configuration changes.
Often, it’s tempting to push simple changes straight into production, or other factors may prevent a “clean” test. Plan ahead, and make sure there are adequate testing resources.
- Scaling. Often, the test environment is not scaled properly. Ideally, the refresh process for TEST should involve copying PROD, sanitizing the data, transferring it to TEST, and restoring it. I’ve run into many situations where TEST has insufficient storage compared to PROD, meaning you can’t do a full test because you can’t do a full restore of a “sanitized” PROD.
For example, if you’re planning to implement a new report or query, you can’t determine how long it will run if you only have a subset of the data.
This situation can happen either because the developers think of TEST as a unit-testing-only environment (and thus it was never scaled properly), or because the admins fail to scale TEST alongside PROD as it grows.
From a storage and capacity standpoint, the TEST environment should be scaled to the same size as PROD.
- Scale Out. There may be no need to duplicate all instances / app servers: if, for example, there are 20 app servers in PROD, perhaps only 2 are required in TEST. Ensure that there are adequate resources to test features such as failover and load balancing.
- Versions out of Sync. Often, during staging for a code base (version) upgrade, it’s still necessary to maintain PROD, and to test configuration changes for PROD running the previous code base. This can be problematic if the feature being tested is significantly different in the new version, leaving you without an adequate test resource. A good solution is to bring up a smaller, parallel TEST environment prior to testing a new code base. When the new code base is promoted to PROD, either leapfrog the TEST environments for the next new code base, or simply release the resources for another project.
- Performance. Some changes require performance validation. Using older hardware in a test environment is usually acceptable for feature / configuration changes, but performance changes may require performance similar to PROD in order to test accurately. Examples include reporting and database index changes.
- Data set size and quality. A common problem in test environments is not having sufficient data, or not having good-quality data. Performance issues with new features or configuration changes may not be visible in TEST if the TEST environment has minimal data, but may be very visible in PROD where the data set is larger. Similarly, PROD is likely to encounter a wider variety of values than an isolated TEST environment. Ideally, there should be a process in place to periodically refresh TEST from a copy of PROD. Some sectors, such as healthcare and financial services, require that TEST data be “sanitized” to prevent real data from leaking out of a test environment.
Masking and hashing are valid approaches, allowing TEST data to be realistic, internally-consistent, and still contain both the variety and quantity of data required for accurate pre-production testing.
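A minimal sketch of masking and hashing during a TEST refresh (the field names and salt are hypothetical). Hashing the key deterministically keeps rows in different tables consistent with each other:

```python
import hashlib

def mask_name(name: str) -> str:
    """Keep the first letter, mask the rest -- realistic shape, no real data."""
    return name[0] + "x" * (len(name) - 1) if name else name

def hash_id(value: str, salt: str = "test-refresh") -> str:
    """Deterministic: the same PROD key always maps to the same TEST key,
    preserving referential integrity across tables."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

row = {"client_id": "12345", "name": "Fred", "order_total": "99.95"}
sanitized = {
    "client_id": hash_id(row["client_id"]),  # consistent wherever 12345 appears
    "name": mask_name(row["name"]),
    "order_total": row["order_total"],       # non-sensitive; kept for realism
}
```

The same `hash_id` applied to the orders table and the clients table keeps the two joined correctly, while the real client ID never reaches TEST.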
Make sure test environments are properly scaled, and have the right quantity and type of data. Implement a process for periodically refreshing TEST from PROD, using data masking or hashing.
3. Mistake: Using a wide range of network ports between components
Let me tell you from experience: the look you get is priceless when you talk to the network folks about implementing in production, they ask, “What network ports do you need opened?”, and you respond, “All of them.” I’ve been on both sides of that conversation.
Always plan ahead. Talk to the Network Architecture folks while the application is being developed, in order to avoid any nasty surprises during implementation.
In general, keep communications between application tiers as closed as possible.
Components such as FTP (File Transfer Protocol) or RPC (Remote Procedure Call) use dynamic ports, and can be difficult to map through a firewall. Most dynamically-mapped services can be configured to use a static port range, or ideally, a single TCP/IP port.
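As an illustration, a service that binds one fixed, documented port is trivial for the network team to allow through a firewall; the port number below is an assumption, not a standard:

```python
import socket

LISTEN_PORT = 18443   # single, agreed-upon port -- one firewall rule, no ranges

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", LISTEN_PORT))   # never bind port 0 (a random dynamic port)
srv.listen()
```

Contrast this with dynamically-assigned ports, where the firewall team must either open a wide range or deploy protocol-aware inspection.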
Use few (or ideally one) well-defined network ports when communicating through a firewall.
4. Mistake: No True 3-Tier Separation
For compliance purposes, many production environments require a segregated web tier (DMZ), app tier, and data tier, known as 3-tier separation.
Separating the app server tier from the database / file server tier is fairly simple. Paths can be mapped to a remote server, and connection strings can point to remote databases.
The challenge comes into play for applications developed to run as a web service. Typically, these applications provide a web (HTML / XML over HTTP) interface, but make their own database / file server connections directly.
The ideal architecture exposes HTML (B2C) or XML (B2B, B2C, Device Access), but passes XML to the core application tier. The Web (DMZ) tier is used for input validation and connection isolation, to prevent an attacker from having direct access to the core app tier, for example, to prevent SQL injection attacks.
- Some managed code platforms provide a web plugin architecture. Platforms such as Tomcat, WebSphere, WebLogic, and iPlanet provide a web server plugin that can be installed in the DMZ, allowing a remote web server to accept inbound connections while forwarding application calls to the app tier. This is known as a plugin or proxy architecture.
- Develop a lightweight proxy. With very few lines of code, you can write a simple app that receives user requests, re-formats them, and transmits them to the app tier, returning the result to the caller.
- Use an App Delivery Controller (ADC), such as F5, to provide a web-facing “Virtual Server” that is isolated from the app tier.
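As a sketch of the lightweight-proxy idea, the DMZ tier’s whole job can be a validation gate plus a forward to the app tier (the path rules and internal address are hypothetical):

```python
from http.server import BaseHTTPRequestHandler
from urllib.request import Request, urlopen

APP_TIER = "http://app-tier.internal:8080"   # hypothetical internal address

def is_valid_path(path: str) -> bool:
    """Input validation at the DMZ: forward only well-formed API calls."""
    return path.startswith("/api/") and ".." not in path and len(path) < 256

class DmzProxy(BaseHTTPRequestHandler):
    """Minimal DMZ proxy: validate the request, then relay it inward."""

    def do_GET(self):
        if not is_valid_path(self.path):
            self.send_error(400, "rejected at DMZ")   # never reaches the app tier
            return
        upstream = urlopen(Request(APP_TIER + self.path))
        self.send_response(upstream.status)
        self.end_headers()
        self.wfile.write(upstream.read())
```

An attacker who compromises the DMZ host gains only this relay, not a process holding database credentials.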
Plan ahead for your application to be deployed in a 3-tier environment.
5. Mistake: Requiring SuperUser Credentials
Most production environments require restricted permissions.
Development environments can often have relaxed permissions, where the developers and other users have Administrative permissions. In Windows, this is known as “Administrator” rights, and in Unix / Linux, this is known as “root” access (and/or “wheel” and/or “sudoers”). This type of access enables and simplifies the development process, but can be fundamentally restricted in Production.
These “SuperUsers” (Administrator / root) have access to every component, service, and file on the machine, as well as the ability to do “privileged-only” operations, such as change permissions and create / modify users.
During development, it’s easy to assume that the same permissions exist in the production environment, but in reality, most production environments are “locked down”. Applications and services typically run as a non-administrative “service account” or “service user”, and specific, limited permissions are assigned to the service user.
In Production, Administrative access is often restricted due to the risk of inadvertently creating a security hole, or due to audit requirements. For example, running your application with Administrator access means that someone who compromises it could perform privileged operations without your knowledge, and use that access to compromise the server or database! Both network worms and SQL injection attacks work exactly this way.
The following types of operations require administrative access:
- Create a shared file system
- Change permissions
- Administer user accounts
- Start / Stop or create services
- Mount a network share as a local device
- Modify system files
- Install / Remove a system component
If your application performs this type of operation, plan to use “sudo” (Linux) or “RunAs” (Windows), which allow your application’s service account to temporarily gain root or Administrator access by spawning a new process under another account. RunAs or sudo must be properly configured in advance by the System Administrator.
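A sketch of the sudo approach from a service written in Python. The helper path is hypothetical, and this assumes the System Administrator has authorized exactly this command (and nothing else) for the service account in /etc/sudoers:

```python
import subprocess

def privileged_command(helper_path, *args):
    """Build a sudo invocation for one specific, pre-authorized helper.

    -n makes sudo fail immediately instead of prompting for a password,
    which a background service can never supply.
    """
    return ["sudo", "-n", helper_path, *args]

def rotate_logs():
    # Hypothetical helper: root access for this one task only.
    return subprocess.run(privileged_command("/usr/local/bin/rotate-logs"),
                          capture_output=True, text=True)
```

Scoping sudo to a single helper binary means a compromise of the service account yields that one operation, not full root.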
For normal operations, keep track of, and document, permissions that you need for various tasks:
- Read or write files to a specific folder
- Delete or modify files
- Access another service, process, or component (such as ODBC / JDBC for database access)
- Create connections to remote servers or services
If your application uses non-local resources, such as a remote file share or a database connection, be sure to document the required access for these resources as well, and what type of access is required.
- Read only: Can read files, but can’t erase or modify.
- Read-write: Can read, write, erase, or modify files
- SELECT: Similar to read-only access
- INSERT: Creates new records
- UPDATE: Modify existing records
- DELETE: Erase records
Note that privileged database operations should be handled differently – if your application needs to create new tables, the service user (or database credentials) will need “Modify Schema” access.
In Windows, your application may require special registry permissions.
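Documented permissions can double as a startup check: the service verifies it has exactly the access it needs, and fails fast with a clear message rather than dying mid-run (the paths below are hypothetical):

```python
import os

# Documented access requirements for this service's account (hypothetical paths).
REQUIRED_ACCESS = {
    "/var/app/incoming": os.R_OK | os.W_OK,   # read-write: pick up and delete files
    "/etc/app/app.conf": os.R_OK,             # read-only configuration
}

def missing_permissions(requirements):
    """Return every path the current account cannot access as documented."""
    return [path for path, mode in requirements.items()
            if not os.access(path, mode)]
```

At startup, the service calls `missing_permissions(REQUIRED_ACCESS)` and logs any shortfall, which makes permission gaps in a locked-down Production environment immediately obvious.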
Always test your application in a QA environment. Unlike DEV, QA should be configured as close to production as possible, to help flush out problems such as permissions issues. It’s easy to say “I need access to everything!”, but in reality, you may be creating a security hole. Spend the extra time to test and document the proper permissions.
Identifying and documenting required permissions ensures a smooth transition to production.
6. Mistake: Runs in the console session / requires automatic logon
Most Production environments restrict access to the console session.
When developing a traditional application, the app runs as an interactive process in user space, and a user interacts with it via the application’s Graphical User Interface (GUI). The GUI may be used to start / stop a process, or even display diagnostic / log information.
Unfortunately, this type of application requires that a user log on interactively to the console, and launch the application!
Although this can be automated, it’s bad practice, and most Production environments preclude this type of access. It’s too easy for a hacker to sneak something into that console session, which then has privileged access to the rest of the system. In addition, interactive, single-session applications are difficult to support in a multi-user (multi-operator) environment – all of your operators / administrators have to share the same session, meaning they must all share the same credentials (another audit no-no).
When writing a “server” application, plan ahead for a separate administrative interface. Your application should run as a background process, and should not require any direct interaction to start or stop cleanly. Any conventional “GUI” application creates GUI objects and consumes GUI resources – your server application should not have a GUI for the main server process. Administration should be performed via an administrative interface, which could be a web GUI, applet, or small application that connects to the server process asynchronously.
The simplest administrative interface is a config file! Store all configuration in a config file that is read automatically when the service starts, and have your application write to a log file. Unfortunately, this typically requires a full service restart if you wish to modify and re-read the config file, but it’s possible to use flag files or OS signals / events to trigger a “soft restart”, where your application listens for a specific event and responds by re-reading its config file.
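A sketch of a signal-driven soft restart in Python. The config format and path are whatever your service uses; SIGHUP is the conventional Unix choice for “reload”:

```python
import json
import signal

def load_config(path):
    """Read the service's configuration from a JSON file."""
    with open(path) as f:
        return json.load(f)

class Service:
    def __init__(self, config_path):
        self.config_path = config_path
        self.config = load_config(config_path)
        # SIGHUP triggers a soft restart: re-read config, keep running.
        signal.signal(signal.SIGHUP, self._reload)

    def _reload(self, signum, frame):
        self.config = load_config(self.config_path)
```

An operator can then change the config file and run `kill -HUP <pid>`, with no downtime and no console session.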
For services that require user / administrator interaction, a web GUI is another very simple and relatively easy approach. Most development platforms have a “web app” library or prototype, allowing rapid web UI development of forms, status pages, and the like.
Server applications should be designed to run “lights out”, without an integrated user interface, and without requiring administrator interaction.
Always make sure your applications are secure:
- Obfuscate parameters and values
- Plan ahead for a proper test environment
- Use a few (or ideally one) well-defined network ports between components
- Plan your design around 3-tier separation
- Identify and document required permissions for all files and components
- Design server applications to run “lights out”, without a user interface