These are standards and technologies for providing a central repository of authentication information.
Many computer users are confused and frustrated by the number of times they're asked for a password. Logging onto a computer, accessing shared files, browsing the web, reading email... for the average user almost every action requires a separate authentication. In an ideal world, authentication should only happen once: when first sitting down at the machine. This concept is called "Single Sign On", and is used throughout many large networks.
If there are many disparate systems in your network, Single Sign On can be a major time saver for users and administrators alike. Users only have to enter their login information once. Enabling all network applications to be managed from a single system minimizes network administration time. Security is enhanced because users only have one password to remember.
Password management does not work this way within all computer systems. The reason for this can be traced back to origins of the Personal Computer. Until recently, many low-price operating systems didn't require authentication. They were designed to be single-user systems. Anybody with physical access to the machine could access the system without a username or password.
Eventually systems such as Windows 95/98/Me had a "login" screen, but it was not there for the purpose of security. Instead it was designed to allow multiple users to customize their own settings on the same computer. Windows NT was the first Microsoft operating system that did not allow full anonymous access. With this network-based operating system it became necessary to log in, either to the network or to the computer itself.
Meanwhile high-end Unix workstations and servers had leveraged the concept of Single Sign On for many years. This is partially because these operating systems were designed as multi-user environments. In order to protect users from one another, the system was designed to restrict access based on the authentication information provided at login. Applications could ask the operating system for the username and associated permissions. Since all applications shared the same authentication model they rarely needed any type of additional authentication. In the case of networked applications, there were provisions for exchanging the authentication information remotely.
When these worlds merged things got ugly. Unix servers couldn't trust PCs, as the platforms did not share a common security model. As hacking became more prevalent, even trust between Unix systems became unreliable. New systems for authentication were needed that could be trusted across multiple platforms. Applications needed to be re-developed to recognize these systems. While computer scientists pushed for standards, major vendors had their own ideas as to how this would be accomplished. The result was a mess of central authentication technologies.
How It Works
Most Central Authentication systems use one or more of the following core technologies:
RADIUS: For many years, the Remote Access Dial-In User Service (RADIUS) has been the leading standard authentication protocol used in large networks. It is commonly used to provide authentication for embedded network devices such as routers, modem servers and switches. In practical terms, RADIUS lets a network authenticate its users, control what each user is authorized to do, and account for their activity. Users who have authority to use certain parts of a system may not be authorized to see everything, and businesses often want to track how much time users spend on a system. RADIUS is capable of providing solutions to both needs.
RADIUS solves these authorization and tracking problems through its design as a protocol. In essence it carries authentication, authorization and configuration information between a network access server and a shared authentication server. That may read like a mouthful, but the idea is simple and can be easily illustrated through an example.
In our example the network access server (a server that allows others to access the network where it lives) is going to be a web server that is serving up web pages to the public. This web server wants to authenticate certain links when they are activated. To perform this authentication it verifies each link and the user who activated it.
When a user requests access to a specific link the web server requests an authentication. That request goes to a shared authentication server, which also lives on the network. RADIUS provides the common ground for the web server to request the authentication, and the shared authentication server to receive it, turn it around, and allow or deny the request. In the process, the RADIUS protocol counts all the authentication requests, and monitors which users made them and how long they were on the system.
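The exchange described above can be sketched as a toy model: a network access server forwards credentials to a shared authentication server, which answers with an accept or reject and records the attempt for accounting. All of the names below are illustrative, and this is a conceptual sketch only, not the actual RADIUS wire protocol.

```python
# Toy model of the AAA flow: the NAS delegates the authentication
# decision to a shared server, which also keeps accounting records.

USERS = {"alice": "s3cret"}     # hypothetical central user store
ACCOUNTING = []                 # who asked, from where, and the outcome


def radius_authenticate(username, password, nas_id):
    """Shared server: authorize the request and record it for accounting."""
    accepted = USERS.get(username) == password
    ACCOUNTING.append({"user": username, "nas": nas_id, "accepted": accepted})
    return "Access-Accept" if accepted else "Access-Reject"


def nas_handle_login(username, password):
    """NAS (the web server in the example): delegate the decision."""
    return radius_authenticate(username, password, nas_id="web-server-1")
```

Calling `nas_handle_login("alice", "s3cret")` returns "Access-Accept", and the accounting log now shows which user made the request and through which access server, which is the monitoring role the text describes.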
There are other systems and protocols besides RADIUS that can perform this type of functionality. However, none of them are capable of working with other authentication protocols in a shared environment. In other words, RADIUS is a governing protocol that exists around other Single Sign On and authentication technologies. It can be used in combination with or in addition to LDAP, Kerberos, X.500 and many more. The following are some of its additional advantages:
- Embedded systems cannot inherently handle a large number of users with distinct authentication information; storing that data requires more raw storage space than many embedded devices possess. RADIUS lets such devices defer authentication to a central server instead.
- RADIUS facilitates centralized user administration. Many ISPs have tens of thousands or even millions of users. Users are added and deleted continuously throughout the day, and user authentication information changes constantly. This is the typical environment where centralized administration of users is mandatory.
- RADIUS consistently provides some level of protection against a sniffing, active attacker. Passwords are never sent across the network in cleartext; they are obscured using a shared secret, whereas many other authentication protocols offer no such protection.
- RADIUS support is nearly omnipresent. Because the platforms on which it is implemented are usually embedded systems, there is little room to support additional protocols.
Though RADIUS was originally created as a dialup protocol, timing has been on its side. In the current age of broadband it has become the de facto standard in authentication and is built into most embedded network devices. The result is great worldwide support and acceptance of RADIUS.
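The password protection mentioned in the bullet list above works by hiding the password attribute before it crosses the network. A sketch of that hiding scheme (the User-Password method defined in RFC 2865) is shown below: the password is padded to a multiple of 16 bytes and XORed, block by block, with an MD5 keystream derived from the shared secret and a per-request value called the authenticator. Note that this is obfuscation with a shared secret, not strong encryption.

```python
# Sketch of RFC 2865 User-Password hiding: each 16-byte block of the
# padded password is XORed with MD5(secret + previous ciphertext block),
# seeded with the per-request authenticator.
import hashlib


def hide_password(password: bytes, secret: bytes, authenticator: bytes) -> bytes:
    padded = password + b"\x00" * (-len(password) % 16)
    out, prev = b"", authenticator
    for i in range(0, len(padded), 16):
        keystream = hashlib.md5(secret + prev).digest()
        prev = bytes(p ^ k for p, k in zip(padded[i:i + 16], keystream))
        out += prev
    return out


def recover_password(hidden: bytes, secret: bytes, authenticator: bytes) -> bytes:
    out, prev = b"", authenticator
    for i in range(0, len(hidden), 16):
        keystream = hashlib.md5(secret + prev).digest()
        out += bytes(c ^ k for c, k in zip(hidden[i:i + 16], keystream))
        prev = hidden[i:i + 16]          # chain on the ciphertext block
    return out.rstrip(b"\x00")           # strip the padding
```

Only a server that knows the shared secret can reverse the operation, which is why each access server is configured with a secret it shares with the RADIUS server.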
NIS/YP: Many years ago, Sun Microsystems created the Network Information System (NIS, also called “Yellow Pages”, or YP) to centrally manage user and network settings across multiple Unix machines. NIS-enabled workstations dynamically obtain configuration data based on information available from the NIS server.
NIS has numerous security flaws and functional limitations. A replacement system, called NIS+, attempted to address these shortcomings but never gained popularity. Most importantly, NIS is not very practical for a mixed-platform environment (getting it to work with most Windows systems involves a complex process that can yield mixed results). LDAP, which is discussed later in this chapter, has begun to replace NIS as a cross-platform centralized management tool.
Kerberos Then and Now
Microsoft has created a variation on Kerberos that is supported by their own systems, but is incompatible with the Kerberos standard. As a result, non-Microsoft systems cannot securely access services on a Microsoft server unless both systems have access to a Microsoft Kerberos server.
This has created quite a bit of frustration throughout the security community. On the one hand, there’s the potential for the widespread adoption of a solid security infrastructure. On the other hand, many in the security community feel that Microsoft is actually trying to create confusion and doubt, with the goal of discouraging companies from using Kerberos as an authentication system.
Kerberos: According to its creators at MIT, “The Kerberos protocol uses strong cryptography so that a client can prove its identity to a server (and vice versa) across an insecure network connection. After a client and server has used Kerberos to prove their identity, they can also encrypt all of their communications to assure privacy and data integrity as they go about their business.”
The problem with Kerberos is that it was introduced way ahead of its time. Most networks were not sophisticated enough to take advantage of the security that Kerberos offered. Instead, it was seen as a complicated, overkill solution to security. Setting up a Kerberos environment was incredibly difficult. Furthermore, only a handful of applications supported Kerberos authentication. This may change, now that Microsoft has adopted a variation of Kerberos into its Windows 2000 operating system.
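The core Kerberos idea can be illustrated with a deliberately simplified sketch: a trusted key server (the KDC) shares a long-term key with every principal, and hands the client a fresh per-session key plus a "ticket" that only the target service can read. The XOR-keystream "cipher" below is a toy stand-in for the real cryptography, and all names are illustrative, but it shows how the service obtains the session key without ever learning the client's long-term key.

```python
# Toy sketch of Kerberos ticket issuance. Not real Kerberos: the cipher
# is a stand-in and the message formats are invented for illustration.
import hashlib
import os


def toy_encrypt(key: bytes, data: bytes) -> bytes:
    """XOR the data with a hash-derived keystream (symmetric toy cipher)."""
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(d ^ s for d, s in zip(data, stream))


toy_decrypt = toy_encrypt  # XOR with the same keystream reverses it

# The KDC shares a long-term key with each principal.
KDC_KEYS = {"alice": b"alice-longterm-key", "fileserver": b"svc-longterm-key"}


def kdc_issue_ticket(client: str, service: str):
    """KDC: mint a session key; wrap one copy for each party."""
    session_key = os.urandom(16)
    for_client = toy_encrypt(KDC_KEYS[client], session_key)
    ticket = toy_encrypt(KDC_KEYS[service], session_key + client.encode())
    return for_client, ticket


def service_accept(service: str, ticket: bytes):
    """Service: recover the session key and client name from the ticket."""
    plain = toy_decrypt(KDC_KEYS[service], ticket)
    return plain[:16], plain[16:].decode()
```

After the exchange, both sides hold the same session key and can encrypt their further communications, which is the privacy and integrity guarantee the MIT description refers to.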
Directory Services: RADIUS, Kerberos and NIS systems can all be used as the basis for a centralized authentication system. Past that, their functionality is limited. They don’t provide any framework or support for exchanging more detailed user information between applications. For that, a directory service is needed.
Back in the early days of email (late 1970’s - mid 80’s), people wanted a convenient way to find other email addresses. The idea was to create a generic directory that would hold personal details for all the members of an organization. This directory would be searchable from within the organization, with the option to allow global access across a Wide Area Network such as the Internet. X.500 was the first attempt at creating a standardized Directory Access Protocol (DAP).
Unfortunately, X.500 was designed to operate over a networking technology (X.25) that was primarily suited to high-powered server and workstation systems, not personal computers. The Internet used a networking technology (TCP/IP) that was better suited to PCs, and became the chosen standard during the '90s.
The Lightweight Directory Access Protocol (LDAP) was created to give PCs basic access to X.500 databases over a TCP/IP network. It initially provided basic, but minimal compatibility with the X.500 protocol – thus the term “lightweight”. As LDAP evolved, it ignored some aspects of X.500 while implementing other features not found in X.500. As of today, LDAP is no longer very “lightweight”; in fact it’s become nearly as complex as the X.500 protocol!
Both LDAP and X.500 can be used to hold authentication and access control information for network users. Applications access the directory to obtain the user’s authentication credentials. Single Sign On systems control access to the directory, only allowing applications to access the user’s information during the user’s session.
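Conceptually, an application authenticating against a directory does two things: it searches for the user's entry by an attribute such as the login name, then "binds" to that entry with the supplied password. The sketch below uses an in-memory stand-in for the directory; the distinguished names (DNs) and attribute names follow common LDAP conventions, but the specific entries are hypothetical.

```python
# In-memory stand-in for a directory: entries keyed by distinguished
# name, each holding the attributes applications might look up.
DIRECTORY = {
    "uid=jdoe,ou=people,dc=example,dc=com": {
        "cn": "Jane Doe",
        "uid": "jdoe",
        "mail": "jdoe@example.com",
        "userPassword": "secret",    # a real server stores a hash instead
    },
}


def search_by_uid(uid):
    """Conceptual equivalent of an LDAP search with filter (uid=...)."""
    for dn, attrs in DIRECTORY.items():
        if attrs.get("uid") == uid:
            return dn, attrs
    return None, None


def simple_bind(dn, password):
    """Conceptual equivalent of an LDAP simple bind: True if it succeeds."""
    entry = DIRECTORY.get(dn)
    return entry is not None and entry["userPassword"] == password
```

Because every application performs the same search-then-bind sequence against the same directory, the user's credentials live in exactly one place, which is the property the surrounding text describes.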
Thanks to heavy efforts by Microsoft, IBM, Novell, Sun and other major vendors, many different types of network applications can obtain authentication via an LDAP server. The ability to maintain a single, central database of users and application information is of significant value to system administrators.
Getting a true Single Sign On solution deployed is a very complicated task for all but the simplest of networks. Few applications are developed with Single Sign On in mind. You’re lucky if the application can be forced into submission. In a number of cases you’ll need to either switch to a different application or get the authentication portion of the application rewritten. Get past that, and there are still a few issues you’ll need to consider.
Bottleneck: The problem with centralizing anything is that a single point of failure is immediately created. What happens if your authentication database goes down? Suddenly, the entire network is completely unusable until the central server is reactivated. For this reason, high availability and rapid disaster recovery is critical for any centralized authentication server.
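The usual answer to this single point of failure is replication: the application tries each authentication server in priority order and only fails outright when none respond. A minimal sketch of that client-side failover logic, with hypothetical server callables standing in for real network calls, might look like this:

```python
# Client-side failover across replicated authentication servers.
class ServerDown(Exception):
    """Raised (hypothetically) when an authentication server is unreachable."""


def authenticate_with_failover(servers, username, password):
    """Try each replica in priority order; fail only if all are down."""
    last_error = None
    for server in servers:
        try:
            return server(username, password)
        except ServerDown as err:
            last_error = err          # remember why, then try the next one
    raise RuntimeError("all authentication servers unreachable") from last_error
```

Replication does not remove the need for rapid disaster recovery, but it keeps the network usable while a failed replica is being restored.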
Corruption: Even if the central server is infallible, the data is not. Problems in one application can corrupt the information that is used by other applications. Care must be taken when determining the extent to which each application can access and modify information in the central database.
Privacy: Privacy is a major concern with centralized authentication systems. LDAP-based systems have the capacity to collect and centralize a vast amount of personal information. How will this information be protected from internal users? What about other applications? Users might not want their word processor automatically associating complete contact information with every document that they create.
Making The Connection
Availability: Central Authentication can be an integral part of a high availability server farm.
Expediting recovery: Simpler management makes other servers easy to recover - you don't have to worry about individual user configuration on a crashed mail server if it's all stored on the LDAP server.
Synchronizing info: Eliminates the need to manage separate synchronization tasks for user ID information across systems.
Accessing Information: How will users connect to the LDAP server when they're remote?
Transferring Information: There are many security risks associated with the transmission of centralized authorization data.
Cryptography: Single Sign On systems need to encrypt their traffic when communicating with the nodes on the network. Otherwise the security of the whole system can be compromised.
Single Sign On systems are component solutions. They are not "black box" technology products that offer complete solutions. Many different technologies have to work in harmony for a Single Sign On solution to be effective. Implementing an SSO system can be a financial and technological disaster if the appropriate care isn't taken.
The first step is to audit the network applications that are currently in use. Some networks have very basic application needs: email, intranet, and file sharing. Others have more complex needs, such as remote access, encrypted communications, and custom designed application systems. A complete catalogue of every application used and their authentication capabilities must be developed.
Based on the audit, the next step is to select the most compatible Single Sign On authentication framework – this might be based on NIS, RADIUS, LDAP, Kerberos, or some other proprietary technology.
Every network application needs to be integrated with the selected framework. This will probably be the most time consuming part of the Single Sign On implementation. Some applications will quickly and easily plug into the framework. Additional “wrapper” software may be necessary to connect with custom applications. Some older applications might be incompatible with any and every available SSO solution. If your company’s network requirements are significant, it’s very likely that you’ll need to develop some custom software and/or migrate to new systems in order to completely implement a Single Sign On solution.
Once the applications are integrated, user information and configuration data needs to be migrated into the framework. The trick here is that some of this information may change frequently. Therefore, the migration of the user data should happen right when the user is switched over to the Single Sign On framework. Practically, this means that automatic migration software will need to be developed and tested.
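The "migrate at switchover" idea above can be sketched as follows: a user's records are copied out of each legacy per-application store and into the central store only at the moment that user is cut over, so the data cannot go stale in between. The store layouts here are hypothetical placeholders for whatever the real applications use.

```python
# Per-user cutover migration: merge the user's settings from every
# legacy application store into the central store, then retire the
# legacy copies so they cannot drift out of date.
def migrate_user(username, legacy_stores, central_store):
    """Move one user's data from all legacy apps into the central store."""
    merged = {}
    for app_name, store in legacy_stores.items():
        if username in store:
            merged[app_name] = store[username]   # keep a per-app namespace
            del store[username]                  # legacy copy retired at cutover
    central_store[username] = merged
    return merged
```

Running a function like this per user, rather than one bulk export, is what lets the migration track data that changes throughout the day.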
Rolling out a Single Sign On solution is something that should be done in stages. The best approach is to start with a small department or some very technically savvy employees. Problems will happen; this will give you a heads-up and a chance to fix them. The remainder of the rollout should be on a per-department basis. Some departments might have more trouble with the Single Sign On solution than others, due to differing network needs.
Robust networks can truly benefit from properly implemented centralized authentication. Single Sign On is not without its caveats, though. By tightly integrating the many components of a network system, it can degrade security even as it improves simplicity.
A pre-existing large network that does not use any form of centralized authentication will not find implementation to be easy. The multi-stage rollout process may take many years and surface many network gremlins. Despite this, the end result can be well worth the effort and expense.
The above information is the start of a chapter in "Network Security Illustrated," published by McGraw-Hill. The book goes into much greater depth on this topic.