Wednesday 25 November 2015

Networks that span multiple Engineered Systems/Exalogic Accounts

This blog post introduces functionality that became available fairly recently (~Oct 2015) and allows additional InfiniBand shared networks to be defined.  This enables internal networks to span accounts or to be extended to other Engineered Systems.

Historically an Exalogic rack is set up with two internal (IPoIB) networks whose IP addresses can be handed out to vServers in all accounts: the vServer Shared Storage network and the IPoIB Default network.  vServers on the storage network are limited members of that partition and full members of the InfiniBand default partition.  It is possible to override the membership of a virtual machine so that vServers can communicate with each other internally over the InfiniBand storage network.

Security concerns about using the IPoIB default network for inter-vServer communication alongside access to the database tier have meant that this network tends not to be used for cross-account conversations.  The only other mechanism for network traffic between accounts was a public EoIB network, which has the downside of preventing the use of the InfiniBand high-performance protocols and mandating smaller MTU sizes, and is therefore sub-optimal for performance-sensitive applications.

Recent changes in Exadata have introduced support for non-default partitions.  Indeed, when Exadata is set up to run the database in a virtual machine, the normal configuration makes no use of the IPoIB default partition (0x7fff).  This was a problem for Exalogic, which historically only had access to Exadata over the IPoIB default network.

The standard configuration of a virtualised Exadata has two IB partitions: one that allows the database servers to talk to the storage servers, and another that connects the Exadata virtual machines to each other so that a distributed RAC cluster can be set up and use IB for inter-cluster communication.  Clearly, if Exalogic is to communicate with Exadata using the InfiniBand optimised protocols, the Exalogic must be able to link in with the Exadata over a non-default InfiniBand partition.  This is depicted in figure 1 below.


Figure 1 - Connecting EL and ED using a non-default InfiniBand network

This example shows a two-tier application deployed to Exalogic.  The web tier, which has access to the EoIB client network, potentially hosts an application like Oracle Traffic Director and forwards requests to the application tier over an internal private network.  The application tier is in turn linked to another IPoIB internal network, but this one is what might be considered a "public private network", meaning that it can be handed out to vServers and provides linkage to the Exadata virtual machines that have had this specific network (partition) allocated to them.  The Exadata also has two other internal IB networks: one to allow the RAC cluster to communicate between the DB servers and another to allow access to the storage cells.

There are two potential approaches to creating this non-default network spanning both Exalogic and Exadata: first, extending a private network from an Exalogic account into the Exadata rack; second, creating a new Exalogic custom shared IPoIB network that can span multiple Exalogic accounts.

Extending a Private Network

In this scenario we create a private network within an Exalogic account and then expand the InfiniBand partition into the Exadata.  This means that access to the Exadata is kept purely within one Exalogic account.  The steps to go through are (a sketch of step 4 follows the list):
  1. Create a private network in an Exalogic account.
  2. Edit the network to reserve the IP addresses in the subnet that the Exadata will use.
  3. Identify the pkey value that this new network has been assigned.
  4. Using the IB command line/Subnet Manager, extend the new partition to the Exadata switches and database servers.
  5. Recreate the Exadata virtual machines, adding the new partition key to the virtual machine configuration file used.
  6. Configure the Exadata VM to use one of the IP addresses reserved in step 2, which are unavailable to the Exalogic.
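
By way of illustration, step 4 is typically performed with the smpartition commands on the switch running the master InfiniBand Subnet Manager.  The sketch below uses a placeholder pkey and port GUIDs and is no substitute for the procedure in the Oracle support note mentioned at the end of this post:

    # Run on the master IB switch.  The pkey (0xa001) and port GUIDs are
    # placeholders - substitute the pkey identified in step 3 and the
    # GUIDs of the Exadata HCA ports.
    smpartition start
    smpartition add -pkey 0xa001 -port 0x0021280001a0b0c1 -m full
    smpartition add -pkey 0xa001 -port 0x0021280001a0b0c2 -m full
    smpartition commit
    smpartition list active      # verify the new membership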

Creating a new "Custom Shared IPoIB Network"

This is a slightly more flexible approach than the first scenario: we create a new "public private" network and then allocate IP addresses on this network to each account that needs access to it.  It is also useful in use cases where Exadata is not involved, because it allows certain virtual machines to be set up as a service provider and others as service consumers, the provider being an IB full member of the partition and the consumers limited members.  Thus all consumers can access and use the service provider functions but the consumers cannot "see" each other.
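
To make the provider/consumer distinction concrete, the sketch below (again with a placeholder pkey and GUIDs) shows one port joined as a full member and two as limited members; limited members can talk to full members but not to each other.  On Exalogic itself the vServer memberships are managed by Exalogic Control, so these commands simply illustrate the underlying IB semantics:

    smpartition start
    smpartition add -pkey 0xa002 -port 0x0021280001aaaaaa -m full      # service provider
    smpartition add -pkey 0xa002 -port 0x0021280001bbbbbb -m limited   # consumer
    smpartition add -pkey 0xa002 -port 0x0021280001cccccc -m limited   # consumer
    smpartition commit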

This example is for the connected Exadata that we discussed earlier.  In this case the process to follow is:

  1. Run the process to create the new IPoIB network.  This can be set up so that vServers are limited or full members by default; it defines the IB partition and specifies the subnet used, as well as which IP addresses the Exalogic rack will use.
  2. Allocate a number of IP addresses from this new network to each account that will use it.  This is the same process used today for EoIB networks, the storage network or the IPoIB Default network.
  3. Create vServers in the accounts with an IP address on the custom shared network.
  4. Identify the pkey for the custom network and extend the partition to the Exadata switches and DB server nodes.  The primary difference here is that if the Exadata was set up first, then the first step in this process would have been to specify the pkey originally used by the Exadata.  (i.e. either the Exadata or the Exalogic can be the first to specify the pkey.)
    1. Warning - the pkey being used is defined manually.  Make sure it will not overlap with any pkeys that Exalogic Control will assign.
  5. Recreate the database virtual machines, assigning the pkey to their configuration (see the sketch after this list) and specifying within each VM the IP address you want it to use.
  6. Test.
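
As an illustration of step 5, on a virtualised Exadata the partition keys for a guest are carried in the guest's vm.cfg file in the management domain.  The excerpt below is a sketch only - the path, parameter layout and pkey value are all illustrative, and the authoritative procedure is in the Oracle support note referenced below:

    # Excerpt from /EXAVMIMAGES/GuestImages/<vm name>/vm.cfg on the
    # Exadata management domain (illustrative only):
    ib_pkeys = [{'pf': '40:00.0', 'port': '1', 'pkey': ['0xa001']},
                {'pf': '40:00.0', 'port': '2', 'pkey': ['0xa001']}]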
Note - The technical details on how to achieve this are fully documented in an Oracle support note.  Get in touch with your local Oracle representative to find out more.

Tuesday 7 July 2015

Oracle Traffic Director - Deployment options, Virtual Servers vs Configurations

Summary

Oracle Traffic Director (OTD) is a powerful software load balancing solution.  As with most good products there is a degree of flexibility in how it can be deployed, with different approaches to forming the solution.  This article discusses two options for implementing different routing possibilities.

The scenario being considered is a need to perform two separate load balancing activities in the same OTD environment.  For example, load balancing to an older SOA 11g deployment and to SOA 12c for recent integration deployments.  Another possible example would be two routes to the same back-end service, where one is designed for high-priority traffic while the other throttles the service at a preset load.  The two options discussed are:
  1. Using two separate configurations, one for SOA 11g and one for SOA 12c.
  2. Using one configuration that has two virtual servers, with each virtual server handling the routing for one environment.
Needless to say, either option can be appropriate; the details of the overall solution and, to some extent, personal preference will determine the right answer for a particular customer environment.  Of course other approaches, such as more complex routing rules within a single configuration or multiple OTD domains, are also worth considering.

OTD Configuration Overview

Simple configuration

An OTD deployment, in its simplest form, consists of an administrative instance which manages the configuration and a deployed instance.  The deployed configuration specifies the HTTP(S)/TCP listening port, routing rules to one or more origin servers, logging setup and so on.  In many situations there is a business need to use OTD to manage requests to different business applications, or even just to different environments/versions of an application.  It is obviously possible to split these out using independent deployments of OTD; however, to minimise the resources required and keep the number of deployed components down, there are options that use a single administration server.
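
For a flavour of what this looks like with the OTD 11g tadm command line, the sketch below creates a configuration and instantiates it on two administration nodes.  The hostnames, ports and option names are illustrative; check the tadm reference for your release:

    # Create a configuration via the administration server
    tadm create-config --user=admin --port=8989 \
        --listener-port=8080 --server-name=soa.example.com \
        --origin-server=soa1.example.com:8001,soa2.example.com:8001 soa11g

    # Instantiate and start it on two administration nodes
    tadm create-instance --config=soa11g node1.example.com node2.example.com
    tadm start-instance --config=soa11g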

The base configuration options


The minimum configuration defines things like the listening ports, SSL certificates and logging setup and, critically, at least one origin server pool and one virtual server.  The origin server pool is a simple enough concept: it defines the back-end services that actually fulfil the client requests.
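
A sketch of adding a second origin-server pool with tadm (names and flags are illustrative, and this assumes a release that supports origin-server pools):

    # Define a pool of back-end servers and add members to it
    tadm create-origin-server-pool --config=soa11g --type=http soa12c-pool
    tadm create-origin-server --config=soa11g \
        --origin-server-pool=soa12c-pool soa12c-1.example.com:8001
    tadm create-origin-server --config=soa11g \
        --origin-server-pool=soa12c-pool soa12c-2.example.com:8001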

Using Virtual Servers

The virtual servers provide a mechanism to isolate traffic sent to the software load balancer.  Each virtual server contains its own set of routing rules, which determine the origin servers requests are sent to, along with caching rules, traffic shaping and overrides for logging and the layer 7 firewall rules.  The virtual server used for subsequent processing is identified by either the listening port or the hostname used to send the request.

Virtual Server example - Routing based on otrade-host
Virtual Server example - Routing based on websession-host
So in the above example both hostnames, otrade-host and websession-host, resolve to the same IP address in DNS (or in the client's local /etc/hosts file), and the two virtual servers use the same listener.  If the client makes a request to otrade-host then the first virtual server is used, and if they request websession-host then the second's rules are used.

There is always at least one virtual server.  By default one is created with the hosts field left blank, so that it is used for any traffic that hits the listening port.
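
Something like the following tadm sketch would set up the otrade-host/websession-host example above - two virtual servers sharing one listener and selected by the Host header (the names and flags are illustrative):

    # Two virtual servers on the same HTTP listener, distinguished by hostname
    tadm create-virtual-server --config=trade \
        --http-listener-name=http-listener-1 --host-pattern=otrade-host \
        --origin-server-pool=otrade-pool otrade-vs
    tadm create-virtual-server --config=trade \
        --http-listener-name=http-listener-1 --host-pattern=websession-host \
        --origin-server-pool=websession-pool websession-vs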

Solution Variations

Multiple Configurations

Overview

In this setup two configurations are defined and deployed.  It is quite possible to have both configurations deployed to the same OS instances (an "administration node" in OTD terminology).  The result of deploying a configuration to the admin nodes is the creation of another running instance of OTD.
Running multiple configurations
Thus in the example shown above we have three OS instances: one hosting the admin server (which could be co-located with the actual instances) and two hosting the OTD servers, each of these two nodes running one OTD instance per configuration.  I have shown two OS instances running the configurations to indicate that they can be set up in a failover group to provide HA, with each configuration able to utilise a different VIP.
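
A hedged sketch of putting the two instances of one configuration into an active-passive failover group with its own VIP (addresses and flags illustrative):

    # One VIP per configuration; the primary node answers for the VIP and
    # the backup takes over on failure
    tadm create-failover-group --config=soa11g \
        --virtual-ip=192.0.2.10 --netmask=255.255.255.0 \
        --primary-node=node1.example.com --backup-node=node2.example.com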

Advantages

  • Each configuration is managed independently of the other (within the one administration server).
    • The settings are independent of each other.
    • The running instances for each configuration are independent of each other, i.e. they can be stopped and started without impacting the instances of the other configuration.
  • Simple to understand.

Disadvantages

  • Care must be taken to ensure that the configurations do not clash with each other (e.g. by using the same listening ports).
  • Results in more processes running on each OS instance.

Multiple Virtual Servers

Overview

In this situation there is one configuration with multiple virtual servers, which results in different routing rules being applied to forward requests.  In the diagram below a single configuration containing two virtual servers is deployed to two OS instances.  As with the multi-configuration option, I have shown two OS instances to indicate that a failover group can be used for HA.

OTD Using Two Virtual Servers

Advantages

  • One configuration provides visibility of everything configured in the environment.
  • Minimal running processes
    • Simplifying the monitoring
    • Reducing resources required to run the system

Disadvantages

  • Introduces dependencies between the environments
    • e.g. they can share listeners, origin server pools, logging config etc., so one change can impact all instances.
    • e.g. some changes mandate a restart of an instance, so a change for one environment may impact load balancing for the other.
  • Complexity of a single configuration
  • Dependencies on external factors.  (DNS resolution of hostnames/firewalls for port access.)

Conclusions

There are no hard and fast rules to determine which approach is best for you; it will ultimately depend on the requirements for the load balancing.  If a configuration changes frequently and is functionally independent then I would tend to go for the multiple-configuration route.  If, on the other hand, the situation calls for simplicity of monitoring and a minimal resource footprint alongside a fairly static configuration, I would tend to use the multiple virtual server approach.

Essentially the classic IT answer of "it depends" applies, and only a good understanding of the requirements will clarify which way to go.  (Although if you are using OTD 11.1.1.6 you might be better off with the virtual server approach, as there are a few limitations to the VIPs using keepalive for the failover groups.)