Friday, 5 April 2013

Access LDAP from internal Exalogic vServers

Introduction

As discussed in my earlier postings about setting up LDAP to meet the NFSv4 requirements for access to shared storage, the same considerations apply to a virtualised Exalogic.  In many data centres a directory of some sort is already set up external to the Exalogic that holds the user accounts for access to unix environments - an Exalogic should be able to use this authentication source.

An LDAP service may be available on one or more networks but will most likely appear on just one.  An Exalogic has at least two networks connected to the datacentre: firstly a 1GbE management network that links the physical components together and secondly a 10GbE network that provides access to the deployed applications.  Often these two networks are kept separate for security reasons.  For Exalogic this poses an issue for shared authentication between the storage and the running vServers, as the storage has no direct access to the 10GbE network and the vServers have no direct access to the 1GbE management network, yet both need to be able to access a shared LDAP resource.

The issue is compounded further if we are building a secure vServer topology in which only the web tier has access to the client network, as shown in the deployment topology discussed when considering the Infiniband network.


Figure 1 : vServer deployment with a web and application tier

In this situation the application tier vServers have no access to either the 10GbE network or the 1GbE management network.  How, then, can we use an external directory to provide a shared authentication source?

This posting considers a few possible solutions to the scenario.

Problem Statement

The problem is that both the vServers and the shared storage require access to the same directory for authentication purposes so that shares can be mounted using NFSv4.  Network visibility is limited and differs between components, as shown in Table 1.

Component                   | Management Network (1GbE) | External Network (10GbE) | Internal/Private Network (IB)
ZFS Storage Appliance       | Yes                       | No                       | Yes (vServer-shared-storage)
vServer (web tier)          | No                        | Yes                      | Yes (vServer-shared-storage)
vServer (internal/app tier) | No                        | No                       | Yes (vServer-shared-storage)
Table 1 - Component network access

So how do we set up an environment in which all the vServers and the shared storage can access the same directory service?

Potential Solutions

  1. Ensure the directory service is available/routable on both the management and 10GbE networks.  Give all vServers an interface on the 10GbE network.  (The 10GbE network can be VLAN tagged to a management-only network.)
  2. Ensure the directory service is available/routable on both the management and 10GbE networks.  Create a new vServer that has interfaces on both the 10GbE network and the internal private network (IPoIB-vserver-shared-storage is a good internal candidate for this), then set up this vServer as a gateway/router.  All internal vServers must have a static route for the IP addresses of the directory servers that goes via this gateway - see the sketch after this list.
  3. Make the directory available on the 10GbE network and then create replicas of the directory that run in vServers on the Exalogic rack.  These replicas can make their services available to the internal components.
  4. Make the directory available on the 10GbE network and then include a vServer that runs an LDAP proxy so that internal components can access the external directory through the proxy service.
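
As a rough sketch of what option 2 might look like at the OS level: enable IP forwarding on the gateway vServer and add a static route on each internal vServer for the directory server's address.  The hostnames and addresses used here (ldap-gateway, app-vserver, 10.50.1.25 for the directory, 192.168.0.20 for the gateway's IPoIB address) are purely illustrative.

# On the gateway vServer (interfaces on both the 10GbE and internal IPoIB networks)
[root@ldap-gateway ~]# sysctl -w net.ipv4.ip_forward=1     # make permanent in /etc/sysctl.conf

# On each internal vServer, route the directory server via the gateway's IPoIB address
[root@app-vserver ~]# ip route add 10.50.1.25/32 via 192.168.0.20
# Persist the route, e.g. in /etc/sysconfig/network-scripts/route-<interface>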
This blog posting is going to consider the fourth option in more detail, partly in the light of the recent 11.1.1.7 release of Oracle Traffic Director, which now supports load balancing LDAP requests and is hence a good candidate for use as an LDAP proxy.

OTD can be downloaded from the public Oracle website here.  The primary new functionality in this release is:
  1. TCP load-balancing support. This allows OTD to be an entry point to load balance HTTP and non-HTTP traffic including connect-time LDAP, T3/RMI etc.
  2. HTML5 WebSockets reverse proxy support
  3. Graphical expression builder for reverse proxy routing rules
  4. Additional WLS load-balancing/keepalive synchronization optimizations
  5. Web Application Firewall support (a ModSecurity-based firewall to inspect and reject requests), supporting well-recognised rulesets such as the OWASP Core Rule Set
  6. OAM 11g WebGate support
  7. Interoperability certification with FMW 11.1.1.7 and with Classic Portal / Forms.
  8. Exalogic Solaris support
Here we are interested in the load balancing of LDAP.  In a recent blog posting Paul Done wrote about using OTD for T3/RMI load balancing.

LDAP Proxy Solution

For this solution we will set up a vServer that hosts OTD; it will listen on the internal networks and forward LDAP requests to the external LDAP server.  The architecture of such a design is shown in Figure 2.

Figure 2: High Level Architecture of using LDAP Proxy

In this case the ZFS Storage Appliance and the internal vServers both point to the LDAP proxy, which is set up using OTD as a TCP/LDAP load balancer.  It listens on the IPoIB-vserver-shared-storage network for incoming LDAP requests and forwards them on to the external LDAP service.  Thus any vServer with access to the IPoIB-vserver-shared-storage network is able to mount shares from the internal ZFS appliance using NFSv4.
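
Once the proxy is configured, a quick way to check the end-to-end path from any internal vServer is to query the directory through the proxy and then attempt the NFSv4 mount.  The listener port (3389, matching the tcp-listener shown in the appendix), base DN, hostnames and share path below are illustrative only and will differ in your environment.

# Query the external directory via the OTD TCP listener on the proxy
[root@app-vserver ~]# ldapsearch -x -H ldap://ldap-proxy:3389 -b "dc=example,dc=com" "(uid=oracle)" uid uidNumber

# If the expected account is returned, the NFSv4 mount from the internal ZFS appliance should succeed
[root@app-vserver ~]# mount -t nfs4 el01sn-priv:/export/common/general /u01/common/general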

Considerations

 

High Availability

In the architecture shown in Figure 2 the external LDAP is a highly available service running on two physically separate OS instances, so that should one fail the other is able to service the requests.  The diagram shows a single LDAP proxy vServer, so should that fail then the NFSv4 mounts would also fail because the ZFS appliance would no longer have a route to the external directory.  There are two solutions to this issue: either use the HA features of vServers running on Exalogic, or use the HA features of OTD to create a VIP and run two vServers as part of a failover group.

The former case of using Exalogic vServer HA is by far the simplest solution.  If Exalogic senses that the LDAP proxy vServer has failed it will automatically restart it.  Thus, provided the OTD instances are configured to start on boot (see the appendix), the LDAP service should only be down for a short period while the vServer restarts - probably acceptable in non-production environments.  However, it is possible for the OTD service within the vServer to fail while the vServer itself stays up, in which case the LDAP service would become unavailable without Exalogic vServer HA being triggered.

To cater for this situation, two vServers in a distribution group should be configured as LDAP proxies with OTD running as an HA failover group.  This solution identifies a vServer failure very quickly and migrates the VIP over to the remaining vServer immediately.  It is a slightly more complex environment to configure, but for a production environment where any downtime is critical this is the solution that should be used.

vServer access on the vserver-shared-storage network

When a vServer is given access to the vserver-shared-storage network it is automatically set up as a limited member of the Infiniband partition.  This makes perfect sense from a security point of view because it means that any vServer on this network can only access the shared storage appliance and no other vServer on the network.  However, in the case of an LDAP proxy server we want the vServer to be a full member of the partition so that the other vServers can access it.  Only a full system administrator of the Exalogic rack will be able to do this.  The process to follow is:

1.  Shut down the vServer you want to promote.  (This example assumes the vServer has access to the IPoIB-vserver-shared-storage network and that it is its membership of this network that is being promoted to full member.)

2.  Locate the vm.cfg of the vServer by ssh-ing into any of the underlying OVS physical compute nodes and changing directory to /OVS/Repositories/nnnnnn/VirtualMachines.  The repository number in the example path shown below is unique to each Exalogic Control implementation.  In the example below we are going to make the LDAP proxy server visible on this network.

[root@el01cn01 ~]# cd /OVS/Repositories/0004fb00000300000ca29f8ce7f571fa/VirtualMachines
[root@el01cn01 VirtualMachines]# grep -r ldap .
./0004fb0000060000d4f615c6df13c8f1/vm.cfg:OVM_simple_name = 'ldap-proxy'

This identifies the vm.cfg file we need to edit.

3.  Identify the partition number of the network you want the vServer to become a full member of.  Generally this will be IPoIB-vserver-shared-storage, for which the default partition is 0005, as shown in the Exalogic Control screenshot below.

Figure 3 : Network summary details showing the Partition (P-Key)
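
If you prefer the command line to the Exalogic Control console, the active partition configuration can also be listed from one of the rack's InfiniBand gateway switches (the switch hostname below is illustrative); look for the partition key that corresponds to the vserver-shared-storage network.

[root@el01gw01 ~]# smpartition list active
# The output lists each partition key (e.g. 0x0005) together with its member port GUIDs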

4.  Edit the vm.cfg file, locate the line identified by exalogic_ipoib and change the partition from 0005 to 8005.  (The most significant bit of an IB partition key indicates the membership type, so 0005 and 8005 refer to the same partition; setting the top bit makes the vServer a full member.)


exalogic_ipoib = [{'pkey': ['0x0005', '0x0003'], 'port': '1'}, {'pkey': ['0x0005', '0x0003'], 'port': '2'}]

To

exalogic_ipoib = [{'pkey': ['0x8005', '0x0003'], 'port': '1'}, {'pkey': ['0x8005', '0x0003'], 'port': '2'}]


Remember to change the partition key for BOTH ports.
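
If you want to make the edit with a one-liner while keeping a backup of the original file, something along these lines should work (the path is the one found by the grep above); re-check the exalogic_ipoib line afterwards to confirm that only the intended keys were changed.

[root@el01cn01 VirtualMachines]# sed -i.bak "s/'0x0005'/'0x8005'/g" ./0004fb0000060000d4f615c6df13c8f1/vm.cfg
[root@el01cn01 VirtualMachines]# grep exalogic_ipoib ./0004fb0000060000d4f615c6df13c8f1/vm.cfg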

5.  Restart the vServer and check that its visibility is as expected and that it can be accessed from the other vServers.
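
One way to confirm the change from inside the restarted vServer is to look at the partition keys the IB interface exposes in sysfs; the HCA device name (mlx4_0) is typical but may differ, and the path assumes the HCA is visible to the guest.

# List the partition keys on port 1 and look for the full-membership key 0x8005
[root@ldap-proxy ~]# cat /sys/class/infiniband/mlx4_0/ports/1/pkeys/* | grep -i 8005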


Appendix

Auto-start of OTD instance

Below is a very simple example script that can be used to automatically start up the OTD instances (the admin server and the LDAP proxy instance).


[root@ldap-proxy ~]# cat /etc/init.d/otd
#!/bin/sh
# chkconfig init header
#
# otd: Oracle Traffic Director
#
# chkconfig: 345 92 8
# description: Oracle Traffic Director Server \
# Start/Stop the OTD installation automatically
#
#
# Script to start and stop the OTD admin server and LDAP proxy instance during shutdown and restart of the machine
PATH=/usr/bin:/bin:/usr/local/bin:$PATH
export PATH
OTD_HOME=/u01/instances/otd/admin
export OTD_HOME
installUser=oracle

case "$1" in
start)
COMMAND="$OTD_HOME/admin-server/bin/startserv"
su - $installUser -c "$COMMAND"
COMMAND="$OTD_HOME/net-ldap-proxy/bin/startserv"
su - $installUser -c "$COMMAND"
;;
stop)
COMMAND="$OTD_HOME/admin-server/bin/stopserv"
su - $installUser -c "$COMMAND"
COMMAND="$OTD_HOME/net-ldap-proxy/bin/stopserv"
su - $installUser -c "$COMMAND"
;;
status)
ps -ef | grep net-ldap-proxy
;;
*)
echo $"Usage: $0 {start|stop|status}"
exit 1
esac

Simply create the otd file in /etc/init.d, make it executable, and then use the chkconfig --add otd command to add it to the list of managed services.  The service should then start automatically on boot - see the commands below.
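
For example, assuming the script has been saved as /etc/init.d/otd:

[root@ldap-proxy ~]# chmod 755 /etc/init.d/otd
[root@ldap-proxy ~]# chkconfig --add otd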

[root@ldap-proxy ~]# chkconfig --list otd
otd             0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@ldap-proxy ~]# service otd stop
server has been shutdown
server has been shutdown
[root@ldap-proxy ~]# service otd start
Oracle Traffic Director 11.1.1.7.0 B01/14/2013 04:13
[NOTIFICATION:1] [OTD-80118] Using [Java HotSpot(TM) 64-Bit Server VM, Version 1.6.0_35] from [Sun Microsystems Inc.]
[NOTIFICATION:1] [OTD-80000] Loading web module in virtual server [admin-server] at [/admin]
[NOTIFICATION:1] [OTD-80000] Loading web module in virtual server [admin-server] at [/jmxconnector]
[NOTIFICATION:1] [OTD-10358] admin-ssl-port: https://ldap-proxy:1895 ready to accept requests
[NOTIFICATION:1] [OTD-10487] successful server startup
Oracle Traffic Director 11.1.1.7.0 B01/14/2013 04:13
[NOTIFICATION:1] [OTD-10358] tcp-listener-1: tcp://tcpserver:3389 ready to accept requests
[NOTIFICATION:1] [OTD-10487] successful server startup
[root@ldap-proxy ~]# service otd status
oracle   28131     1  0 06:07 ?        00:00:00 trafficd-wdog -d /u01/instances/otd/admin/net-ldap-proxy/config -r /u01/products/otd -t /tmp/net-ldap-proxy-7dd0931e -u oracle
oracle   28132 28131  1 06:07 ?        00:00:00 trafficd -d /u01/instances/otd/admin/net-ldap-proxy/config -r /u01/products/otd -t /tmp/net-ldap-proxy-7dd0931e -u oracle
oracle   28133 28132  0 06:07 ?        00:00:00 trafficd -d /u01/instances/otd/admin/net-ldap-proxy/config -r /u01/products/otd -t /tmp/net-ldap-proxy-7dd0931e -u oracle
root     28162 28160  0 06:07 pts/0    00:00:00 grep net-ldap-proxy