For more information, see the OpenDJ directory server documentation. Consider implementing separate file systems for both OpenAM and OpenDJ, so that you can keep log files on a different disk, separate from data and operational files, to prevent device contention should the log files fill up the file system. Automation and Continuous Integration. The Automation and Continuous Integration phase involves using tools for testing. Set up a continuous integration server, such as Jenkins, to ensure that builds are consistent by running unit tests and publishing Maven artifacts.
Perform continuous integration unless your deployment includes no customization at all. Write unit tests for your custom code so that changes do not break existing behavior. Functional Testing. The Functional Testing phase should test all functionality to deliver the solution without any failures. Make sure that your customizations and configurations are covered in the test plan.
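As a concrete illustration, a unit test for a customization might look like the following Python sketch. The helper function, its behavior, and the test names are invented for the example; your continuous integration server would run such tests on every build.

```python
# Sketch: a minimal unit test for a hypothetical customization (here, a
# helper that normalizes a username before it reaches authentication).
import unittest

def normalize_username(raw):
    """Hypothetical custom helper: trim whitespace and lowercase."""
    return raw.strip().lower()

class NormalizeUsernameTest(unittest.TestCase):
    def test_trims_and_lowercases(self):
        self.assertEqual(normalize_username("  Demo "), "demo")

    def test_leaves_clean_input_unchanged(self):
        self.assertEqual(normalize_username("demo"), "demo")

# Run the suite with: python -m unittest <module-name>
```

The point is less the code than the habit: every customization, however small, carries its own tests that the build server runs automatically.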
Non-Functional Testing. The Non-Functional Testing phase tests failover and disaster recovery procedures. Run load tests to determine the demands on the system and measure its responses, so that you can anticipate peak load conditions. Supportability. The Supportability phase involves creating the runbook for system administrators, including procedures for backup and restore, debugging, change control, and other processes.
If you have a ForgeRock Support contract, the Support team can help you confirm that everything is in place prior to your deployment. A good, concrete deployment plan ensures that a change request process is in place and used, which is essential for a successful deployment. This section looks at planning the full deployment process. When you have addressed everything in this section, then you should have a concrete plan for deployment. Training provides common understanding, vocabulary, and basic skills for those working together on the project.
Depending on previous experience with access management and with OpenAM, both internal teams and project partners might need training. All team members should take at least some training that provides an overview of OpenAM. This helps to ensure a common understanding and vocabulary for those working on the project. Team members planning the deployment should take OpenAM deployment training before finalizing their plans, and ideally before starting to plan the deployment. OpenAM not only offers a broad set of features with many choices, but the access management it provides tends to be business critical.
OpenAM deployment training pays for itself, as it helps you to make the right initial choices to deploy more quickly and successfully. Team members involved in designing and developing OpenAM client applications or custom extensions should take training in OpenAM development in order to help them make the right choices. Team members who have been trained in the past might need to refresh their knowledge if your project deploys newer or significantly changed features, or if they have not worked with OpenAM for some time.
When you have determined who needs training and the timing of the training during the project, prepare a training schedule based on team member and course availability. Include the scheduled training plans in your deployment project plan. ForgeRock also offers an accreditation program for partners, offering an in-depth assessment of business and technical skills for each ForgeRock product. This program is open to the partner community and ensures that best practices are followed during the design and deployment phases. When you customize OpenAM, you can improve how the software fits your organization.
OpenAM customizations can also add complexity to your system as you increase your test load and potentially change components that could affect future upgrades. Therefore, a best practice is to deploy OpenAM with a minimum of customizations. Most deployments require at least some customization, like skinning end user interfaces for your organization, rather than using the OpenAM defaults.
If your deployment is expected to include additional client applications, or custom extensions (authentication modules, policy conditions, and so forth), then have a team member involved in the development help you plan the work. The Developer's Guide can be useful when scoping a development project. Although some customizations involve little development work, they can require additional scheduling and coordination with others in your organization.
An example is adding support for profile attributes in the identity repository. The more you customize, the more important it is to test your deployment thoroughly before going into production. Consider each customization as a sub-project with its own acceptance criteria, and consider plans for unit testing, automation, and continuous integration. When you have prepared plans for each customization sub-project, you must account for those plans in your overall deployment project plan.
Functional customizations, such as custom authentication modules or policy conditions, might need to reach the pilot stage before you can finish an overall pilot implementation. Unless you are planning a maintenance upgrade, consider starting with a pilot implementation. A pilot shows that you can achieve your goals with OpenAM plus whatever customizations and companion software you expect to use.
The idea is to demonstrate feasibility by focusing on solving key use cases with minimal expense, but without ignoring real-world constraints. The aim is to fail fast before you have too much invested so that you can resolve any issues that threaten the deployment. Do not expect the pilot to become the first version of your deployment. Instead, build the pilot as something you can afford to change easily, and to throw away and start over if necessary.
The cost of a pilot should remain low compared to overall project cost. Unless your concern is primarily the scalability of your deployment, run the pilot on a much smaller scale than the full deployment. Scale back on anything not necessary for validating a key use case. Smaller scale does not necessarily mean a single-server deployment, though. If you expect your deployment to be highly available, for example, one of your key use cases should be continued smooth operation when part of your deployment becomes unavailable.
The pilot is a chance to try and test features and services before finalizing your plans for deployment. The pilot should come early in your deployment plan, leaving appropriate time to adapt your plans based on the pilot results. Before you can schedule the pilot, team members might need training and you might require prototype versions of functional customizations.
Plan the pilot around the key use cases that you must validate. Make sure to plan the pilot review with stakeholders. You might need to iteratively review pilot results as some stakeholders refine their key use cases based on observations. When you first configure OpenAM, there are many options to evaluate, plus a number of ways to further increase levels of security. You can change the following default configuration properties: The main OpenAM administrative account has a default user name, amadmin.
The primary session cookie has a default name, iPlanetDirectoryPro. Initially, only the top-level realm exists. The top-level realm includes a demo user, demo, with the default password changeit. To prevent cross-site scripting attacks, you can configure session cookies as HTTP only by setting the corresponding OpenAM server property.
This property prevents third-party scripts from accessing the session cookie. You can deploy a reverse proxy within demilitarized zone (DMZ) firewalls to limit exposure of service URLs to the end user, and to block unauthorized access to back end configuration and user data stores. Secure processes and files, for example with SELinux, by using a dedicated non-privileged user and port forwarding, and so forth.
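As one example of verifying such hardening, the following Python sketch checks that a Set-Cookie header sets the session cookie with both the HttpOnly and Secure flags. The header value shown is illustrative, not a real OpenAM token.

```python
# Sketch: verify that a Set-Cookie header sets the session cookie with the
# HttpOnly and Secure flags. The cookie value below is a made-up example.
from http.cookies import SimpleCookie

def session_cookie_is_hardened(set_cookie_header, cookie_name="iPlanetDirectoryPro"):
    """Return True if the named cookie carries both HttpOnly and Secure."""
    cookie = SimpleCookie()
    cookie.load(set_cookie_header)
    morsel = cookie.get(cookie_name)
    if morsel is None:
        return False
    # Boolean attributes parse to True when present, "" when absent.
    return bool(morsel["httponly"]) and bool(morsel["secure"])

# Example header as a deployment check might see it:
header = "iPlanetDirectoryPro=AQIC5wM2; Path=/; Secure; HttpOnly"
print(session_cookie_is_hardened(header))  # True
```

A check like this fits naturally into the automated functional tests discussed later in this guide.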
OpenAM delegates authentication and profile storage to other services. OpenAM can store configuration, policies, session, and other tokens in an external directory service. In each of these cases, a successful deployment depends on coordination with service providers, potentially outside of your organization.
The infrastructure you need to run OpenAM services might be managed outside your own organization. Hardware, operating systems, network, and software installation might be the responsibility of providers with which you must coordinate. Shared authentication and profile services might have been sized prior to or independently from your access management deployment. An overall outcome of your access management deployment might be to decrease the load on shared authentication services (replacing some authentication load with single sign-on managed by OpenAM), or it might be to increase the load if, for example, your deployment enables many new applications or devices, or enables controlled access to resources that were previously unavailable.
Identity repositories are typically backed by shared directory services. Directory services might need to provision additional attributes for OpenAM. This could affect not only directory schema and access for OpenAM, but also sizing for the directory services that your deployment uses. If your deployment uses an external directory service for OpenAM configuration data and OpenAM policies, then the directory administrator must include attributes in the schema and provide access rights to OpenAM.
The number of policies depends on the deployment. For deployments with thousands or millions of policies to store, OpenAM's use of the directory could affect sizing. If your deployment uses an external directory service as a backing store for the OpenAM Core Token Service (CTS), then the directory administrator must include attributes in the schema and provide access rights to OpenAM. CTS load tends to involve more write operations than configuration and policy load, as CTS data tend to be more volatile, especially if most tokens concern short-lived sessions.
This can affect directory service sizing.
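To make such sizing concrete, the following Python sketch estimates CTS write load from expected session activity. All figures are illustrative assumptions for the example, not measurements from any deployment.

```python
# Rough sizing sketch: estimate CTS write operations per second from expected
# session activity. The figures used below are illustrative assumptions.

def cts_writes_per_second(logins_per_hour, updates_per_session):
    """Each login creates one session token (1 write); each session is
    updated several times and finally deleted (updates + 1 more write)."""
    writes_per_session = 1 + updates_per_session + 1  # create, updates, delete
    return logins_per_hour * writes_per_session / 3600

# 90,000 logins per hour with 4 token updates per session:
print(round(cts_writes_per_second(90_000, 4), 1))  # 150.0
```

Even a back-of-the-envelope figure like this gives the directory administrator a starting point for capacity discussions before load testing refines it.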
Open Source Identity Management Patterns and Practices Using Openam X by Waylon Kenning
Session blacklisting is an optional OpenAM feature that provides logout integrity. For this feature to work quickly in the event of a failure or network partition, CTS data must be replicated rapidly, including across WAN links. This can affect network sizing for the directory service. SAML federation circles of trust require organizational and legal coordination before you can determine what the configuration looks like. Organizations must agree on which security data they share and how, and you must be involved to ensure that their expectations map to the security data that is actually available.
There also needs to be coordination between all SAML parties, that is, agreed-upon SLAs, patch windows, points of contact and escalation paths. Often, the technical implementation is considered, but not the business requirements. For example, a common scenario occurs when a service provider takes down their service for patching without informing the identity provider or vice-versa. When working with infrastructure providers, realize that you are likely to have better sizing estimates after you have tried a test deployment under load.
Even though you can expect to revise your estimates, take into account the lead time necessary to provide infrastructure services. Estimate your infrastructure needs not only for the final deployment, but also for the development, pilot, and testing stages. For each provider you work with, add the necessary coordinated activities to your overall plan, as well as periodic checks to make sure that parallel work is proceeding according to plan.
When planning integration with OpenAM client applications, the applications that are most relevant are those that register with OpenAM. Policy agents are one such type of client: they register their profiles with OpenAM, and OpenAM then sends policy agents notifications when their configurations change. To delegate administration of multiple policy agents, OpenAM lets you create a group profile for each realm to register the policy agent profiles. While the OpenAM administrator manages policy agent configuration, application administrators are often the ones who install policy agents.
You must coordinate installation and upgrades with them. OAuth 2.0 clients also register with OpenAM. OpenAM optionally allows registration of such applications without prior authentication. By default, however, registration requires an access token granted to an OAuth 2.0 client authorized to register other clients. If you expect to allow dynamic registration, or if you have many clients registering with your deployment, then consider clearly documenting how to register the clients, and building a client to register clients. If your deployment functions as a SAML 2.0 identity provider or service provider, then other parties in your circles of trust must be able to register with your deployment. Consider at least clearly documenting how to do so, and if necessary, build installation and upgrade capabilities. If you have custom client applications, consider how they are configured and how they must register with OpenAM. REST client applications can authenticate using whatever authentication mechanisms you configure in OpenAM, and therefore do not require additional registration.
For each client application whose integration with OpenAM requires coordination, add the relevant tasks to your overall plan. OpenAM and policy agents can log audit information to flat files or, alternatively, to a relational database. Log volumes depend on usage and on logging levels. By default, OpenAM generates both access and error messages for each service, providing the raw material for auditing the deployment. The Reference covers what you can expect to find in the log messages. In order to analyze the raw material, however, you must use other software, such as Splunk, which indexes machine-generated data for analysis.
If you require integration with an audit tool, plan the tasks of setting up logging to work with the tool, and analyzing and monitoring the data once it has been indexed. Consider how you must retain and rotate log data once it has been consumed, as a high volume service can produce large volumes of log data.
In addition to planning tests for each customized component, test the functionality of each service you deploy, such as authentication, policy decisions, and federation. You should also perform non-functional testing to validate that the services hold up under load in realistic conditions. Perform penetration testing to check for security issues.
Include acceptance tests for the actual deployment. The data from the acceptance tests help you to make an informed decision about whether to go ahead with the deployment or to roll back. Functional testing validates that specified test cases work with the software considered as a black box.
As ForgeRock already tests OpenAM and policy agents functionally, focus your functional testing on customizations and service-level functions. For each key service, devise automated functional tests. Automated tests make it easier to integrate new deliveries to take advantage of recent bug fixes and to check that fixes and new features do not cause regressions. Tools for running functional testing include Apache JMeter and Selenium. Apache JMeter is a load testing tool for Web applications. Selenium is a test framework for Web applications, particularly for UIs.
As part of the overall plan, include not only tasks to develop and maintain your functional tests, but also to provision and to maintain a test environment in which you run the functional tests before you significantly change anything in your deployment. For example, run functional tests whenever you upgrade OpenAM, OpenAM policy agents, or any custom components, and analyze the output to understand the effect on your deployment.
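As a sketch of what an automated functional test helper might look like, the following Python code prepares an OpenAM REST authentication request. The /json/authenticate endpoint and X-OpenAM-* headers follow OpenAM's documented REST API, but verify them against your deployed version; the base URL and credentials are placeholders.

```python
# Sketch: build an OpenAM REST authentication request for a functional test.
# The base URL and credentials are placeholders; check the endpoint path and
# header names against your OpenAM version's REST API documentation.
from urllib.request import Request

def build_authenticate_request(base_url, username, password, realm="/"):
    url = "{}/json/authenticate?realm={}".format(base_url.rstrip("/"), realm)
    return Request(
        url,
        data=b"{}",  # empty JSON body; credentials travel in the headers
        headers={
            "Content-Type": "application/json",
            "X-OpenAM-Username": username,
            "X-OpenAM-Password": password,
        },
        method="POST",
    )

req = build_authenticate_request("https://openam.example.com/openam",
                                 "demo", "changeit")
print(req.get_method(), req.full_url)
```

In a real test suite, the request would be sent with urllib.request.urlopen (or a library such as requests), and the test would assert on the tokenId in the JSON response.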
By writing down service-level agreements and objectives, even if your first version consists of guesses, you turn performance planning from an open-ended project into a clear set of measurable goals for a manageable project with a definite outcome. Therefore, start your testing with clear definitions of success. Also, start your testing with a system for load generation that can reproduce the traffic you expect in production, and with provider services that behave as you expect in production.
To run your tests, you must therefore generate representative load data and test clients based on what you expect in production. You can then use the load generation system to perform iterative performance testing. Iterative performance testing consists of identifying underperformance and the bottlenecks that cause it, and discovering ways to eliminate or work around those bottlenecks.
Underperformance means that the system under load does not meet service level objectives. Based on service level objectives and availability requirements, define acceptance criteria for performance testing, and iterate until you have eliminated underperformance. Tools for running performance testing include Apache JMeter, for which your loads should mimic what you expect in production, and Gatling, which records load using a domain-specific language for load testing.
To mimic the production load, examine both the access patterns and also the data that OpenAM stores. The representative load should reflect the expected random distribution of client access, so that sessions are affected as in production. Consider authentication, authorization, logout, and session timeout events, and the lifecycle you expect to see in production.
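The following Python sketch generates such a randomly distributed event mix. The event types and weights are illustrative assumptions; replace them with the proportions you expect from production traffic.

```python
# Sketch: generate a representative, randomly distributed event mix for load
# testing. The weights below are illustrative assumptions, not measurements.
import random

EVENT_WEIGHTS = {
    "authenticate": 40,
    "authorize": 50,       # policy decision requests tend to dominate
    "logout": 7,
    "session_timeout": 3,  # sessions abandoned until they time out
}

def generate_load(num_events, seed=42):
    rng = random.Random(seed)  # seeded so test runs are reproducible
    events = list(EVENT_WEIGHTS)
    weights = list(EVENT_WEIGHTS.values())
    return rng.choices(events, weights=weights, k=num_events)

load = generate_load(10_000)
print(len(load), load.count("authorize") > load.count("logout"))  # 10000 True
```

Feeding a mix like this to a load driver (JMeter, Gatling, or a custom client) exercises the session lifecycle the way production traffic would, rather than hammering a single endpoint.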
Although you cannot use actual production data for testing, you can generate similar test data using tools, such as the OpenDJ makeldif command, which generates user profile data for directory services. As part of the overall plan, include not only tasks to develop and maintain performance tests, but also to provision and to maintain a pre-production test environment that mimics your production environment.
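In the same spirit as makeldif, the following Python sketch generates synthetic user entries in LDIF. The base DN and attribute set are assumptions for the example; match them to your identity repository's actual schema.

```python
# Sketch: generate synthetic user profile entries in LDIF, similar in spirit
# to the OpenDJ makeldif command. Base DN and attributes are assumptions.

def make_ldif(num_users, base_dn="ou=people,dc=example,dc=com"):
    entries = []
    for i in range(num_users):
        uid = "user.{}".format(i)
        entries.append(
            "dn: uid={uid},{base}\n"
            "objectClass: inetOrgPerson\n"
            "uid: {uid}\n"
            "cn: Test User {i}\n"
            "sn: User{i}\n"
            "mail: {uid}@example.com\n".format(uid=uid, base=base_dn, i=i)
        )
    return "\n".join(entries)

ldif = make_ldif(1000)
print(ldif.count("dn: uid="))  # 1000
```

Data generated this way can be imported into the test directory service so that load tests run against a realistic number of profiles without exposing production data.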
Once you are satisfied that the baseline performance is acceptable, run performance tests again when something in your deployment changes significantly with respect to performance. For example, if the load or number of clients changes significantly, it could cause the system to underperform. Also, consider the thresholds that you can monitor in the production system to estimate when your system might start to underperform. Penetration testing involves attacking a system to expose security issues before they show up in production.
When planning penetration testing, consider both white box and black box scenarios. Attackers can know something about how OpenAM works internally, and not only how it works from the outside. Also, consider both internal attacks from within your organization, and external attacks from outside your organization. As for other testing, take time to define acceptance criteria. Know that ForgeRock has performed penetration testing on the software for each enterprise release. Any customization, however, could be the source of security weaknesses, as could configuration to secure OpenAM.
You can also plan to perform penetration tests against the same hardened, pre-production test environment used for performance testing. In the context of this guide, deployment testing is used as a description rather than as a formal term. It refers to the testing implemented within the deployment window after the system is deployed to the production environment, but before client applications and users access the system.
Plan for minimal changes between the pre-production test environment and the actual production environment. Then test that those changes have not caused any issues, and that the system generally behaves as expected. Take the time to agree upfront with stakeholders regarding the acceptance criteria for deployment tests. When the production deployment window is small, and you have only a short time to deploy and test the deployment, you must trade off thorough testing for adequate testing.
Make sure to plan enough time in the deployment window for performing the necessary tests and checks. Include preparation for this exercise in your overall plan, as well as time to check the plans close to the deployment date. The OpenAM product documentation is written for readers like you, who are architects and solution developers, as well as for OpenAM developers and for administrators who have had OpenAM training.
The people operating your production environment need concrete documentation specific to your deployed solution, with an emphasis on operational policies and procedures. Procedural documentation can take the form of a runbook with procedures that emphasize maintenance operations, such as backup, restore, monitoring and log maintenance, collecting data pertaining to an issue in production, replacing a broken server or policy agent, responding to a monitoring alert, and so forth.
Make sure in particular that you document procedures for taking remedial action in the event of a production issue. Furthermore, to ensure that everyone understands your deployment and to speed problem resolution in the event of an issue, changes in production must be documented and tracked as a matter of course.
When you make changes, always prepare to roll back to the previous state if the change does not perform as expected. Include documentation tasks in your overall plan. Also, include the tasks necessary to put in place and to maintain change control for updates to the configuration. If you own the architecture and planning, but others own the service in production, or even in the labs, then you must plan coordination with those who own the service.
Start by considering the service owners' acceptance criteria. If they have defined support readiness acceptance criteria, you can start with their acceptance criteria. You can also ask yourself the following questions:.
Also, plan backline support with ForgeRock or a qualified partner. The aim is to define clearly who handles production issues, and how production issues are escalated to a product specialist if necessary. Include a task in the overall plan to define the handoff to production, making sure there is clarity on who handles monitoring and issues. In addition to planning for the handoff of the production system, also prepare plans to roll out the system into production. Rollout into production calls for a well-choreographed operation, so these are likely the most detailed plans.
In your overall plan, leave time and resources to finalize rollout plans toward the end of the project. Before rolling out into production, plan how to monitor the system to know when you must grow, and plan the actions to take when you must add capacity. Unless your system is embedded or otherwise very constrained, after your successful rollout of access management services, you can expect to add capacity at some point in the future.
Therefore, you should plan to monitor system growth. You can grow many parts of the system by adding servers or adding clients.
The parts of the system that you cannot expand so simply are those parts that depend on writing to the directory service, and those that can result in crosstalk between OpenAM servers. The directory service eventually replicates each write to all other servers. Adding directory servers therefore does not increase write capacity; it only adds to the total number of writes to perform.
One simple way of working around this limitation is to use the hierarchical nature of directory data to split a monolithic directory service into several directory services. That said, directory services are often not a bottleneck for growth. Crosstalk between OpenAM servers can result when one OpenAM server authenticates a user, and a subsequent request regarding that user is sent to a second OpenAM server.
In that case, the second server can communicate with the first server to handle the request, resulting in crosstalk from one server to another. A load balancing solution that offers server affinity or stickiness reduces crosstalk and contributes to a system that grows more smoothly. When should you expand the deployed system? Expand it when growth in usage causes the system to approach the performance threshold levels at which the service starts to underperform.
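Such a capacity check might be sketched as follows in Python. The metric names and limits are illustrative assumptions, not recommended values; derive real thresholds from your service level objectives and performance test results.

```python
# Sketch: compare monitored metrics against capacity thresholds to signal
# when to expand the system. Metric names and limits are illustrative.

THRESHOLDS = {
    "cpu_percent": 70,           # sustained CPU above this suggests growth
    "auth_latency_ms": 250,      # example service level objective
    "sessions_per_server": 100_000,
}

def breached(metrics):
    """Return the sorted names of metrics at or above their thresholds."""
    return sorted(name for name, limit in THRESHOLDS.items()
                  if metrics.get(name, 0) >= limit)

print(breached({"cpu_percent": 85, "auth_latency_ms": 120}))  # ['cpu_percent']
```

In practice, a monitoring system evaluates checks like this continuously and raises alerts, prompting the capacity-expansion actions planned before rollout.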
For that reason, devise thresholds that can be monitored in production, and plan to monitor the deployment with respect to the thresholds. In addition to programming appropriate alerts to react to thresholds, also plan periodic reviews of system performance to uncover anything missing from regular monitoring results. In this section, "upgrade" means moving to a more recent release, whether it is a patch, maintenance release, minor release, or major release. Upgrades generally bring fixes, or new features, or both. For each upgrade, you must build a new plan.
Depending on the scope of the upgrade, that plan might include almost all of the original overall plan, or it might be abbreviated, for example, for a patch that fixes a single issue. In any case, adapt deployment plans, as each upgrade is a new deployment. When planning an upgrade, pay particular attention to testing and to any changes necessary in your customizations.
For testing, consider compatibility issues when not all agents and services are upgraded simultaneously. Choreography is particularly important, as upgrades are likely to happen in constrained low usage windows, and as users already have expectations about how the service should behave. When preparing your overall plan, include a regular review task to determine whether to upgrade, not only for patches or regular maintenance releases, but also to consider whether to upgrade to new minor and major releases.
Disaster recovery planning and a robust backup strategy are essential when server hardware fails, network connections go down, a site fails, and so on. Your team must determine the disaster recovery procedures to recover from such events. You can configure OpenAM in a wide variety of deployments depending on your security requirements and network infrastructure. This chapter presents an example enterprise deployment, featuring a highly available and scalable architecture across multiple data centers.
The example deployment is partitioned into a two-tier architecture. The top tier is a DMZ with the initial firewall securing public traffic into the network. The second firewall limits traffic from the DMZ into the application tier where the protected resources are housed. The example components in this chapter are presented for illustrative purposes.
ForgeRock does not recommend specific products, such as reverse proxies, load balancers, switches, firewalls, and so forth, as OpenAM can be deployed within your existing networking infrastructure. The public tier provides an extra layer of security with a DMZ consisting of load balancers and reverse proxies.
This section presents the DMZ elements. The global load balancer (GLB) reduces application latency by spreading the traffic workload among data centers, and maintains high availability during planned or unplanned down time, during which it quickly re-routes requests to another data center to ensure that online business activity continues successfully.
You can install a cloud-based or a hardware-based version of the GLB. The leading GLB vendors offer solutions with extensive health-checking, site affinity capabilities, and other features for most systems. Detailed deployment discussions about global load balancers are beyond the scope of this guide. Each data center has local front end load balancers to route incoming traffic to multiple reverse proxy servers, thereby distributing the load based on a scheduling algorithm.
Many load balancer solutions provide server affinity or stickiness to efficiently route a client's inbound requests to the same server. Other features include health checking to determine the state of its connected servers, and SSL offloading to secure communication with the client. You can cluster the load balancers themselves or configure load balancing in a clustered server environment, which provides data and session failover and high availability across multiple nodes.
Clustering also allows horizontal scaling for future growth. Many vendors offer hardware and software solutions for this requirement. In most cases, you must determine how you want to configure your load balancers, for example, in an active-passive configuration that supports high availability, or in an active-active configuration that supports session failover and redundancy.
There are many load balancer solutions available in the market. The reverse proxies work in concert with the load balancers to route the client requests to the back end Web or application servers, providing an extra level of security for your network. The reverse proxies also provide additional features, like caching to reduce the load on the Web servers, HTTP compression for faster transmission, URL filtering to deny access to certain sites, SSL acceleration to offload public key encryption in SSL handshakes to a hardware accelerator, or SSL termination to reduce the SSL encryption overhead on the load-balanced servers.
The use of reverse proxies has several key advantages. First, the reverse proxies serve as a highly scalable SSL layer that can be deployed inexpensively using freely available products, like Apache HTTP Server or nginx. This layer provides SSL termination and offloads SSL processing to the reverse proxies instead of the load balancer, which could otherwise become a bottleneck if the load balancer is required to handle increasing SSL traffic.
The HAProxy load balancers forward the requests to the Apache HTTP Server reverse proxies. For this example, we assume SSL is configured everywhere within the network. Another advantage of reverse proxies is that they allow access to only those endpoints required for your applications. A good rule of thumb is to check which functionality is required for your public interface, and then use the reverse proxy to expose only those endpoints. A third advantage of reverse proxies applies when you have applications that sit on non-standard containers for which ForgeRock does not provide a native agent.
In this case, you can implement a reverse proxy in your Web tier, and deploy a policy agent on the reverse proxy to filter any requests. The dotted policy agents indicate that they can be optionally deployed in your network depending on your configuration, container type, and application.
OpenIG provides a set of servlet filters that you can use as-is or chained together with other filters to provide complex operations processing on HTTP requests and responses.
You can also write your own custom filters for legacy or custom applications. For more information, see the OpenIG documentation. You can deploy OpenIG on Tomcat or Jetty servers, allowing it to intercept the HTTP requests, carry out filtering operations on each request, and then log the user directly into the application. In such cases, you can also deploy a policy agent to authorize each request. However, in the example deployment, you may not need to deploy a policy agent, as OpenIG functions strictly as a reverse proxy in the DMZ.
The inclusion of the policy agent in the illustration only indicates that you can deploy a policy agent with OpenIG when deployed on a Web container or app server. Some OpenAM authentication modules may require additional user information to authenticate, such as the IP address where the request originated. When OpenAM is accessed through a load balancer or proxy layer, you can configure OpenAM to consume and forward this information with the request headers.
Another option is to run SSL pass-through where the load balancer does not decrypt the traffic but passes it on to the reverse proxy servers, which are responsible for the decryption. The other option is to deploy a more secure environment using SSL everywhere within your deployment. The application tier is where the protected resources reside on Web containers, application servers, or legacy servers. The policy agents intercept all access requests to the protected resources on the Web or app server and grants access to the user based on policy decisions made on the OpenAM servers.
OpenAM provides a cookie (default: amlbcookie) for sticky load balancing to ensure that the load balancer properly routes requests to the OpenAM servers. When the client sends an access request to a resource, the policy agent redirects the client to an authentication login page. Upon successful authentication, the policy agent forwards the request via the load balancer to one of the OpenAM servers. The OpenAM server that authenticated the user becomes the authoritative server during that user's session with OpenAM. Each authentication and authorization request related to the user's session is then evaluated by the authoritative server as long as that server is available.
It is therefore important, when load balancing, to send requests concerning a user's session directly to the authoritative server, reducing crosstalk from other servers trying to contact the authoritative server. Directing OpenAM requests to the authoritative server is necessary only for OpenAM deployments that use stateful sessions. Because stateless sessions reside in the session token cookie (default: iPlanetDirectoryPro) rather than on the OpenAM server, any OpenAM server in a cluster can handle a request with a stateless session without crosstalk. To direct requests to the authoritative OpenAM server, the load balancer should use the value specified in the OpenAM load balancer cookie, amlbcookie, which you can configure to uniquely identify a server within a site.
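The routing decision the load balancer makes on the amlbcookie value can be sketched as follows. The server IDs and URLs here are hypothetical placeholders; in a real site you configure each OpenAM server's unique identifier yourself:

```python
# Sketch of sticky routing on the OpenAM load balancer cookie.
# Server IDs ("01", "02") and URLs are illustrative placeholders.

SERVERS = {
    "01": "https://openam1.example.com:8443/openam",
    "02": "https://openam2.example.com:8443/openam",
}

def route(cookies, fallback="01"):
    """Pick the authoritative server from the amlbcookie value.

    Requests without the cookie (or with an unknown value) fall
    back to a default server, which then becomes authoritative.
    """
    server_id = cookies.get("amlbcookie", fallback)
    return SERVERS.get(server_id, SERVERS[fallback])

print(route({"amlbcookie": "02"}))  # routed to the authoritative server
print(route({}))                    # no cookie yet: fall back
```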
The load balancer inspects the sticky cookie to determine which OpenAM server should receive the request. This ensures that all subsequent requests involving the session are routed to the correct server. Policy agents are OpenAM components that are installed on Web containers or application servers to protect the resources deployed there. Policy agents function as a type of gatekeeper to ensure clients are authenticated and authorized to access the resource as well as enforce SSO with registered devices.
The Web Policy Agent is a native plugin to a Web server and is distributed as a zip file. Web policy agents filter requests for Web server resources without any changes to the resources. Cookie Reset. Policy agents can be configured to reset any number of cookies in the session before the client is redirected for authentication. This feature is typically used when the policy agent is deployed with a parallel authentication mechanism and cookies need to be reset.
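Mechanically, resetting a cookie means sending back a Set-Cookie header with an empty value and an expiry in the past. This sketch is not the agent's actual implementation, and the cookie name, domain, and path shown are placeholder values; it illustrates why all three properties must match the original cookie:

```python
def reset_cookie_header(name, domain, path):
    """Build a Set-Cookie header that clears a cookie.

    A cookie is 'reset' by sending it back empty with an expiry in
    the past. The name, domain, and path must all match the original
    cookie for the browser to discard it, which is why the agent
    configuration requires all three to be defined.
    """
    return (f"Set-Cookie: {name}=; Domain={domain}; Path={path}; "
            "Expires=Thu, 01 Jan 1970 00:00:00 GMT")

# Placeholder values for illustration:
print(reset_cookie_header("LtpaToken", ".example.com", "/"))
```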
Make sure that the name, domain, and path properties are defined. Disable Policy Evaluation. Policy agents act as a policy enforcement point (PEP) during the authorization phase for a client application. This feature is typically used when the policy agent is only used for SSO and does not require a policy evaluation request to OpenAM.
A policy agent protects all resources on the Web server or in a Web application that it serves, and grants access only if the client has been authenticated and authorized to access the resources. However, there may be some resources, such as public HTML pages, graphics, or stylesheet files, that do not require policy evaluation. URL Correction. OpenAM is aware of the access management network and its registered clients, implementing a fully qualified domain name (FQDN) mapper that can be configured to correct invalid URLs.
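Conceptually, the FQDN mapper rewrites requests that use partial or otherwise invalid host names to the canonical fully qualified name. A minimal sketch, with hypothetical host mappings:

```python
from urllib.parse import urlsplit, urlunsplit

# Hypothetical FQDN map: invalid or partial host names on the left,
# the canonical fully qualified name on the right.
FQDN_MAP = {
    "openam": "openam.example.com",
    "openam.internal": "openam.example.com",
}

def correct_url(url):
    """Rewrite the host portion of a URL if it is a known alias."""
    parts = urlsplit(url)
    host = FQDN_MAP.get(parts.hostname, parts.hostname)
    netloc = host if parts.port is None else f"{host}:{parts.port}"
    return urlunsplit((parts.scheme, netloc, parts.path,
                       parts.query, parts.fragment))

print(correct_url("https://openam:8443/openam/login"))
# -> https://openam.example.com:8443/openam/login
```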
Attribute Injection Into Requests. Policy agents can be configured to inject user profile attributes into cookies, requests, and HTTP headers. Both agents have the ability to receive configuration notifications from OpenAM. In deployments with stateful sessions, both agents can receive session notifications from OpenAM.
Cross-Domain Single Sign-On. In deployments with stateful sessions, both agents can be configured for cross-domain single sign-on (CDSSO). Also, note that OpenAM's signing keys ship with a test certificate. If you upgrade the keystore, you need to redistribute the certificates to all nodes so that they can continue to communicate with each other. CTS Data Stores.
If configured, CTS supports session token persistence for stateful session failover. CTS traffic is volatile compared to configuration data, so deploying CTS as a dedicated external data store is advantageous for systems with many users and many sessions. This setup eliminates the possibility of directory read-write errors if replication is not quick enough. For example, if an attribute is updated on OpenDJ-1 but read from OpenDJ-2, and if replication is not quick enough and the attribute is not written or updated in OpenDJ-2, an error could result.
You can use load balancers to spread the load or throttle performance for the external data stores. Although not shown in the diagram, you can also set up a directory tier, separating the application tier from the repositories with another firewall.
This tier provides added security for your identity repository or policy data. ForgeRock recommends that you use OpenAM's embedded OpenDJ directory server as the configuration data store, and only set up an external configuration store if necessary. The example deploys the various servers on Linux hosts. The firewalls can be a hardware or software solution, or a combined firewall-router can be implemented in the deployment.
The local load balancers are implemented using HAProxy servers in an active-passive configuration. You can also use Linux Keepalived for software load balancing or one of the many other solutions available. The Web and application servers have the Web policy agent and Java EE policy agent installed on each server respectively. OpenAM is deployed on Tomcat hosted on a Linux server.
Within each datacenter, the OpenAM servers are configured as sites for failover and stateful session failover capabilities. For presentation purposes only, the configuration data is assumed to be stored within the embedded directory store on each OpenAM server. Also for presentation purposes, the OpenIG example does not show redundancy for high availability. The previous sections in this chapter present the logical and physical topologies of an example highly available OpenAM deployment, including the clustering of servers using sites. One important configuration feature of OpenAM is its ability to run multiple client entities to secure and manage applications through a single OpenAM instance.
OpenAM supports its multiple clients through its use of realms. You configure realms within OpenAM to handle different sets of users to whom you can set up different configuration options, storage requirements, delegated administrators, and customization options per realm. Typically, you can configure realms for customers, partners, or employees within your OpenAM instance, for different departments, or for subsidiaries. In such cases, you create a global administrator who can delegate privileges to realm administrators, each specifically responsible for managing their respective realms.
This chapter covers sizing servers, network, storage, and service levels required by your OpenAM deployment. Any part of a system that can fail eventually will fail. Keeping your service available means tolerating failure in any part of the system without interrupting the service. You make OpenAM services highly available through good maintenance practices and by removing single points of failure from your architectures. Removing single points of failure involves replicating each system component, so that when one component fails, another can take its place. Replicating components comes with costs not only for the deployment and maintenance of more individual components, but also for the synchronization of anything those components share.
Due to the necessary synchronization between components, what you spend on availability is never fully recovered as gains in capacity. Two servers cannot do quite twice the work of one server. Instead, you must determine the right trade-offs for the deployment. In an online system, an interruption of the service could be a severe problem, cutting off all access to protected resources.
Most deployments fall into this category. In an embedded system protecting local resources, it might be acceptable to restart the service. Deployments that require always-on service availability require some sort of load balancing solution at minimum between OpenAM and OpenAM client applications. The load balancer itself must be redundant, too, so that it does not become a single point of failure. OpenAM allows you to deploy replicated configurations in different physical locations, so that if the service experiences complete failure at one site, you can redirect client traffic to another site and continue operation.
The question is whether the benefit in reducing the likelihood of failure outweighs the costs of maintaining multiple sites. When you need failover across sites, one of the costs is redundant WAN links scaled for inter-site traffic. OpenAM synchronizes configuration and policy data across sites, and by default also synchronizes session data. OpenAM also expects profiles in identity data stores to remain in sync. In OpenAM, session failover is different from service failover. Session failover consists of maintaining redundant information for stateful sessions, so that if a server fails, another server recovers the session information, preventing the user from having to authenticate again.
Service failover alone consists of maintaining redundant servers, so that if one server fails, another server can take the load. With service failover alone, users who authenticated with a failed server must authenticate again to start a new session. In deployments where an interruption in access to a protected resource could cause users to lose valuable information, session failover can prevent the loss.
To provide for session failover, OpenAM replicates the session information held by the CTS, relying on the underlying directory service to perform the replication. Session information can be quite volatile, certainly more volatile than configuration and policy data. Session failover across sites can therefore call for more WAN bandwidth, as more information is shared across sites. Once you have the answers to these questions for the deployment, you can draw a diagram of the deployment, checking for single points of failure to avoid.
In the end, you should have a count of the number of load balancers, network links, and servers that you need, with the types of clients and an estimated number of clients that access the OpenAM service. While you might be able to perform functional testing by using a single OpenAM server with the embedded OpenDJ server as the user data store, other tests require a more complete environment with multiple servers, secure connections, and so forth.
Performance testing should reveal any scalability issues. Performance testing should also run through scenarios where components fail, and check that critical functionality remains available and continues to provide acceptable levels of service. Beyond service availability, your aim is to provide some level of service. You can express service levels in terms of throughput and response times.
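A simple way to check such targets is to compute throughput and a high-percentile response time from the samples a load-test run collects. This sketch assumes per-request latencies have already been gathered; the nearest-rank percentile method used here is one common choice, not something OpenAM prescribes:

```python
def service_levels(latencies_ms, duration_s):
    """Summarize a load test: throughput and 95th percentile latency.

    latencies_ms: per-request response times collected during the run.
    duration_s:   wall-clock length of the run in seconds.
    """
    ordered = sorted(latencies_ms)
    # Nearest-rank p95: the sample below which 95% of requests fall.
    idx = max(0, int(round(0.95 * len(ordered))) - 1)
    return {"throughput_rps": len(ordered) / duration_s,
            "p95_ms": ordered[idx]}

# 1000 simulated samples (10..109 ms) over a 10-second run:
samples = [10 + (i % 100) for i in range(1000)]
print(service_levels(samples, 10.0))
```

Comparing these summaries against the agreed service levels, run by run, tells you whether sizing assumptions hold before failures do.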
Another service level goal could be to handle an average of policy requests per minute per policy agent with an average response time of 0. Use the load tests to check your sizing assumptions. To estimate sizing based on service levels, take some initial measurements and extrapolate from those measurements. For a service that handles policy decision authorization requests, what is the average policy size? What is the total size of all expected policies? To answer these questions, you can measure the current disk space and memory occupied by the configuration directory data.
Next, create a representative sample of the policies that you expect to see in the deployment, and measure the difference. Then, derive the average policy size, and use it to estimate total size. To measure rates of policy evaluations, you can monitor policy evaluation counts over SNMP. What is the average total size of CTS data?
The average total size depends on the number of live CTS entries, which in turn depends on session and token lifetimes. The lifetimes are configurable and depend also on user actions like logout that are specific to the deployment. For example, suppose that the deployment only handles stateful SSO sessions, that session entries occupy on average 20 KB of memory, and that you anticipate on average 100,000 active sessions. In that case, you would estimate the need for 2 GB (100,000 x 20 KB) of RAM on average to hold the session data in memory. If you expect that the number of active sessions could sometimes rise to 200,000, then you would plan for 4 GB of RAM for session data.
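This sizing arithmetic can be sketched as a small helper. The 20 KB per-entry figure and the session counts are assumptions for illustration, not fixed characteristics of CTS; measure your own averages as described above:

```python
def session_memory_gb(active_sessions, avg_entry_kb):
    """Estimate RAM needed to hold stateful session entries.

    Uses decimal units (1 GB = 1,000,000 KB), matching rough
    capacity-planning arithmetic rather than binary GiB.
    """
    return active_sessions * avg_entry_kb / 1_000_000

# Assumed figures for illustration: 20 KB per session entry.
print(session_memory_gb(100_000, 20))  # -> 2.0 (GB)
print(session_memory_gb(200_000, 20))  # -> 4.0 (GB)
```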
Keep in mind that this is the extra memory needed in addition to memory needed for everything else that the system does including running OpenAM server. Session data is relatively volatile, as the CTS creates sessions entries when sessions are created. CTS deletes session entries when sessions are destroyed due to logout or timeout.
Sessions are also modified regularly to update the idle timeout. Suppose the rate of session creation is about 5 per second, and the rate of session destruction is also about 5 per second. Then you know that the underlying directory service must handle on average 5 adds and 5 deletes per second. The deleted entries generate less replication traffic, as the directory service only needs to know the distinguished name DN of the entry to delete, not its content. When sizing the network, you must account both for inter-site replication traffic and also for notifications and crosstalk in high-throughput deployments.
In OpenAM deployments using stateful sessions, much of the network traffic between OpenAM servers consists of notifications and crosstalk. When the session state changes on session creation and destruction, the OpenAM server performing the operations can notify other servers. Crosstalk between OpenAM servers arises when a request concerning a session is routed to a server other than the authoritative one. In an OpenAM site, the server that originally authenticates a client deals with the session, unless that server becomes unavailable. If the client is routed to another server, then the other server communicates with the first, resulting in local crosstalk network traffic.
Sticky load balancing can limit crosstalk by routing clients to the same server with which they started their session. When the OpenAM servers are all on the same LAN, and even on the same rack, notifications and crosstalk are less likely to adversely affect performance. If the servers are on different networks or communicating across the WAN, the network could become a bottleneck. OpenAM stores data in user profile attributes. When you know which attributes are used, you can estimate the average increase in size by measuring the identity data store as you did for configuration and CTS-related data.
If you do not manage the identity data store as part of the deployment, you can communicate this information to the maintainers. For a large deployment, the increase in profile size can affect sizing for the underlying directory service. In a centrally managed deployment with only a few realms, the size of realm configuration data might not be consequential.
Also, you might have already estimated the size of policy data. For example, each new realm might add about 1 MB of configuration data to the configuration directory, not counting the policies added to the realm. In a multi-tenant deployment or any deployment where you expect to set up many new realms, the realm configuration data and the additional policies for the realm can add significantly to the size of the configuration data overall. You can measure the configuration directory data as you did previously, but specifically for realm creation and policy configuration, so that you can estimate an average for a new realm with policies and the overall size of realm configuration data for the deployment.
Given availability requirements and estimates on sizing for services, estimate the required capacity for individual systems, networks, and storage. This section considers the OpenAM server systems, not the load balancers, firewalls, independent directory services, and client applications. Although you can start with a rule of thumb, you see from the previous sections that the memory and storage footprints for the deployment depend in large part on the services you plan to provide.
This rule of thumb assumes the identity data stores are sized separately, and that the service is housed on a single local site. Notice that this rule of thumb does not take into account anything particular to the service levels you expect to provide. Consider it a starting point when you lack more specific information. OpenAM services use CPU resources to process requests and responses, and essentially to make policy decisions. Encryption, decryption, signing, and checking signatures can absorb CPU resources when processing requests and responses. Policy decision evaluation depends both on the number of policies configured and on their complexity.