
Friday, 31 January 2014

Running DNS (bind) for a private DNS domain in Exalogic

In an earlier post I described a process to set up bind to provide a relay DNS service that can be accessed from internal vServers and the shared storage.  This provides an HA DNS service, for the shared storage in particular, which would otherwise rely on the non-HA 1GbE network for access to DNS.

The next obvious step in the process is to extend your bind configuration so that a local DNS service can be used for the vServers you create.  This would give name resolution for guests that you do not want included in the external DNS service.

The first step is to set up bind, or the named daemon, as described in my earlier blog entry.  Ensure that the vServer you are using for the DNS service is connected to an EoIB network and the shared storage; it will then be attached to three networks in total:
  1. the EoIB network, which gives it access to the main DNS service in the datacenter, 
  2. the IPoIB-vserver-shared-storage network, which allows the ZFS appliance to use this vServer as a DNS server, and
  3. the IPoIB-virt-admin network.  This network is connected to all vServers, so if we make the DNS vServer a full member of it, as described in an earlier post about setting up LDAP on the rack, then every vServer created can use the DNS service.  All we need to do is configure the network to use the domain service.

Once bind is operational we can extend the named configuration to include details for a domain internal to the Exalogic rack.  In this example our datacenter DNS runs on the domain mycompany.com; for lookups internal to the Exalogic we want to use the domain el01.mycompany.com, where el01 represents the Exalogic rack name.   The first step is to edit the main configuration file and add another section specifying that this bind service will be the master for the el01.mycompany.com domain.



# cat /etc/named.conf
options {
    directory "/var/named";

    # Hide version string for security
    version "not currently available";

    # Listen to the loopback device and internal networks only
    listen-on { 127.0.0.1; 172.16.0.14; 172.17.0.41; };
    listen-on-v6 { ::1; };

    # Do not use the specified source port ranges for queries
    # (adjust depending on your firewall configuration)
    avoid-v4-udp-ports { range 1 32767; };
    avoid-v6-udp-ports { range 1 32767; };

    # Forward all DNS queries to your DNS Servers
    forwarders { 10.5.5.4; 10.5.5.5; };
    forward only;

    # Expire negative answer ASAP.
    # i.e. Do not cache DNS query failure.
    max-ncache-ttl 3; # 3 seconds

    # Disable non-relevant operations
    allow-transfer { none; };
    allow-update-forwarding { none; };
    allow-notify { none; };
};

zone "el01.mycompany.com" IN {
        type master;
        file "el01";
        allow-update{none;};
};
 

The extra section specifies that we will have a zone, or DNS domain, el01.mycompany.com.  Within this zone this DNS server will be the master, or authoritative source, for all name resolution.  The file called el01 will be the source of all the IP addresses that are served by this server.  Earlier in the configuration is the line

    directory "/var/named";

This specifies the directory that the named daemon will search in for the file called el01. The content of the file is as shown below.


# cat el01
; zone file for el01.mycompany.com
$TTL 2d    ; 172800 secs default TTL for zone
$ORIGIN el01.mycompany.com.
@             IN      SOA   proxy.el01.mycompany.com. hostmaster.el01.mycompany.com. (
                        2003080800 ; se = serial number
                        12h        ; ref = refresh
                        15m        ; ret = update retry
                        3w         ; ex = expiry
                        3h         ; min = minimum
                        )
              IN      NS      proxy.el01.mycompany.com.
              MX      10      proxy.el01.mycompany.com.

; Server names for resolution in the el01.mycompany.com domain
el01sn-priv   IN      A         172.17.0.9
proxy         IN      A         172.16.0.12
ldap-proxy    IN      CNAME     proxy
 

The properties or directives in the zone file are:-

  1. TTL - Time To Live.  If there are downstream name servers then this directive lets them know how long their cached copy remains valid.
  2. ORIGIN - defines the domain name that will be appended to any unqualified lookups.
  3. SOA - Start Of Authority details.
    1. The @ symbol places the domain name specified in the ORIGIN as the namespace being defined by this SOA record.
    2. The SOA directive is followed by the primary DNS server for the namespace and the e-mail address for the domain.  (Not used in our case but it needs to be present.)
    3. The serial number is incremented each time the zone file is updated.  This allows the named daemon to recognise that it needs to reload the content.
    4. The other values indicate the time periods slave servers wait between refreshes, retries and expiry.
  4. NS - Name Server - defines the fully qualified domain name of the servers that are authoritative for this domain.
  5. MX - Mail eXchange - defines the mail server to which mail sent to this domain is delivered.
  6. A - Address record - specifies the IP address for a particular name.
  7. CNAME - Canonical Name - can be used to create aliases for a particular server.
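Point 3.3 matters in practice: if the serial is not increased, named will carry on serving the stale data after a reload.  A minimal sketch of a helper that bumps the serial, relying on the "; se = serial" comment convention used in the zone file above (the helper itself and the scratch file are illustrative, not part of bind):

```shell
#!/bin/sh
# bump_serial: increment the SOA serial in a zone file so that named
# (and any slave servers) notice the edit on the next reload.
bump_serial() {
    zone_file=$1
    # pick the first all-digit token on the "; se = serial" line
    serial=$(awk '/; se = serial/ { print $1; exit }' "$zone_file")
    new_serial=$((serial + 1))
    sed -i "s/$serial/$new_serial/" "$zone_file"
    echo "serial: $serial -> $new_serial"
}

# Demo against a scratch copy of the SOA serial line:
printf '2003080800 ; se = serial number\n' > /tmp/el01.demo
bump_serial /tmp/el01.demo   # prints: serial: 2003080800 -> 2003080801
```

On the real DNS vServer this would be followed by a reload of the named service.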
In the example above we have added a few addresses into the DNS domain:
  1. The storage head, under the name el01sn-priv.  This means that all vServers will automatically be able to resolve the storage by name for use with NFS mounts.
  2. proxy (or its alias ldap-proxy), the name we are using for a server where OTD is installed and configured to be a proxy for an external directory, enabling all vServers to access LDAP for authentication.  (Useful for NFSv4 mounts from the shared storage.)
So once this is all up and running, restart the named service and ensure that the DNS settings of the virt-admin network (in our case) include the search domain el01.mycompany.com and the IP address of the DNS vServer.  This way every vServer created will be able to use the DNS service.
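For reference, once those network settings are applied a newly created vServer should end up with an /etc/resolv.conf along these lines (the nameserver address is illustrative; use whichever address from the listen-on directive sits on the IPoIB-virt-admin network):

```
# /etc/resolv.conf on an internal vServer (addresses are illustrative)
search el01.mycompany.com mycompany.com
nameserver 172.16.0.14
```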



Friday, 5 April 2013

Access LDAP from internal Exalogic vServers

Introduction

As discussed in my earlier postings about setting up LDAP to enable access to shared storage for NFSv4, the same requirement applies to a virtualised Exalogic.   In many data centres a directory of some sort, holding the user accounts for access to Unix environments, is already set up external to the Exalogic - an Exalogic should be able to use this authentication source.

An LDAP service may be available on one or more networks but is most likely to appear on one only.  An Exalogic has at least two networks connected to the datacentre, firstly a management network that links the physical components together and secondly a 10GbE network that can provide access to a deployed application.  Often these two networks are kept separate for security reasons.  For Exalogic this poses an issue for shared authentication between the storage and the running vServers as the storage has no direct access to the 10GbE network and the vServers have no direct access to the 1GbE management network but both need to be able to access a shared LDAP resource.

The issue is compounded further if we are building a secure vServer topology in which only the web tier has access to the client network, as shown in the deployment topology discussed when considering the Infiniband network.


Figure 1 : vServer deployment with a web and application tier

In this situation the application tier vServers have no access to either the 10GbE network or the 1GbE management network.  So how can we use an external directory to provide a shared authentication source?

This posting considers a few possible solutions to the scenario.

Problem Statement

The problem is that both vServers and the shared storage require access to the same directory for authentication purposes so that shares can be mounted using NFSv4.  The visibility of networks is limited and different for different components as shown in Table 1.

Component                   | Management Network (1GbE) | External Network (10GbE) | Internal/Private Network (IB)
ZFS Storage Appliance       | Yes                       | No                       | Yes (vServer-shared-storage)
vServer (web tier)          | No                        | Yes                      | Yes (vServer-shared-storage)
vServer (internal/app tier) | No                        | No                       | Yes (vServer-shared-storage)
Table 1 - Component network access

So how do we set up an environment in which all vServers and the shared storage can access the same directory service?

Potential Solutions

  1. Ensure the directory service is available/routable on both the management and 10GbE networks.  Give all vServers an interface to the 10GbE network.  (The 10GbE network can be VLAN tagged to a management-only network.)
  2. Ensure the directory service is available/routable on both the management and 10GbE networks.  Create a new vServer that has interfaces on both the 10GbE network and the internal private network (IPoIB-vserver-shared-storage is a good internal candidate for this), then set this vServer up as a gateway/router.  All internal vServers then need a static route to the directory servers' IP addresses via this gateway.
  3. Make the directory available on the 10GbE network and then create replicas of the directory that run in vServers on the Exalogic rack.  These replicas can make their services available to the internal components.
  4. Make the directory available on the 10GbE network and then include a vServer that runs an LDAP proxy so that internal components can access the external directory through the proxy service.
This blog posting is going to consider the fourth option in more detail, partly in light of the recent 11.1.1.7 release of Oracle Traffic Director, which now supports load balancing LDAP requests and is hence a good candidate for use as an LDAP proxy.

OTD can be downloaded from the public Oracle website.  The primary new functionality in this release is:
  1. TCP load-balancing support.  This allows OTD to be an entry point to load balance HTTP and non-HTTP traffic, including connect-time LDAP, T3/RMI etc.
  2. HTML5 WebSockets reverse proxy support
  3. Graphical expression builder for reverse proxy routing rules
  4. Additional WLS load-balancing/keepalive synchronization optimizations
  5. Web Application Firewall Support - (ModSecurity based Firewall to inspect and reject requests). Supports well recognized rulesets from OWASP Core Ruleset
  6. OAM 11g WebGate support
  7. Inter-operability certification with FMW 11.1.1.7 and with Classic Portal / Forms.
  8. Exalogic Solaris support
We are interested in the load balancing of LDAP.  In a recent blog posting Paul Done wrote about using OTD with T3/RMI load balancing.

LDAP Proxy Solution

For this solution we will set up a vServer that hosts OTD; it will listen on the internal networks and forward LDAP requests to the external LDAP server.  The architecture of such a design is shown in Figure 2.

Figure 2: High Level Architecture of using LDAP Proxy

Thus in this case the ZFS Storage Appliance and the internal vServers all point to the LDAP proxy, which is set up using OTD as a TCP/LDAP load balancer listening on the IPoIB-vserver-shared-storage network for incoming LDAP requests.  These requests are forwarded on to the external LDAP service.  Thus any vServer with access to the IPoIB-vserver-shared-storage network is able to mount shares from the internal ZFS appliance using NFSv4.
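For illustration, an internal vServer using nss_ldap/pam_ldap would then simply point its LDAP client configuration at the proxy rather than at the external directory.  The host name, port and base DN below are assumptions for this example (3389 matches the OTD TCP listener shown in the appendix logs):

```
# /etc/ldap.conf fragment on an internal vServer (illustrative values)
host ldap-proxy.el01.mycompany.com
port 3389
base dc=mycompany,dc=com
```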

Considerations

 

High Availability

In the architecture shown in Figure 1 the external LDAP is a highly available service running on two physically separate OS instances, so that should one fail the other is able to service requests.  The diagram shows a single LDAP proxy vServer, so should that fail then the NFSv4 mounts would also fail because the ZFS Appliance would not have a route to the external directory.  There are two solutions to this issue: either use the HA features of vServers running on Exalogic, or use the HA features of OTD to create a VIP and run two vServers as part of a failover group.

The former case of using Exalogic vServer HA is by far the simplest solution.  If Exalogic senses that the LDAP proxy vServer has failed it will automatically restart it.  Thus, provided the OTD instances are configured to start on boot, the LDAP service should only be down for the short period while the vServer restarts, which is probably acceptable in non-production environments.  However, it is possible for the service within the vServer to fail while the vServer itself remains up; in this scenario the LDAP service would become unavailable because Exalogic vServer HA would not be activated.

To cater for this situation, two vServers in a distribution group should be configured as LDAP proxies with OTD running as an HA failover group.  This solution identifies a vServer failure very quickly and migrates the VIP to the remaining vServer immediately.  It is a slightly more complex environment to configure, but for a production environment where any downtime is critical this is the solution to use.

vServer access on the vserver-shared-storage network

When a vServer is given access to the vserver-shared-storage network it will automatically be set up as a limited member of the Infiniband partition.   This makes perfect sense as a security measure because it means that any vServer on this network is only able to access the shared storage appliance and no other vServer on the network.  However, in the case of setting up an LDAP proxy server we want the vServer to be a full member of the partition so that any of the other vServers can access it.   Only a full system administrator of the Exalogic rack will be able to do this.  The process to follow is:-

1.  Shut down the vServer you want to promote.  (This example assumes a vServer has access to the IPoIB-vserver-shared-storage network and it is this network that is being promoted to full membership.)

2.  Locate the vm.cfg of the vServer by ssh'ing into any of the underlying OVS physical compute nodes and changing directory to /OVS/Repositories/nnnnnn/VirtualMachines.  The number in the example path shown below is unique to each Exalogic Control implementation.  In the example below we are going to make the LDAP proxy server visible on this network.

[root@el01cn01 ~]# cd /OVS/Repositories/0004fb00000300000ca29f8ce7f571fa/VirtualMachines
[root@el01cn01 VirtualMachines]# grep -r ldap .
./0004fb0000060000d4f615c6df13c8f1/vm.cfg:OVM_simple_name = 'ldap-proxy'

This identifies the vm.cfg file we need to edit.

3.  Identify the partition number of the network you want the vServer to become a full member of.  Generally this is likely to be the IPoIB-vserver-shared-storage, in which case the default partition is 0005, as shown in the Exalogic Control screenshot below.

Figure 3 : Network summary details showing the Partition (P-Key)

4.  Edit the vm.cfg file and change the partition in the line identified by exalogic_ipoib from 0005 to 8005.  (The most significant bit of an IB partition key indicates the membership type, hence 0005 and 8005 refer to the same partition, but with an 8 at the start the vServer becomes a full member.)


exalogic_ipoib = [{'pkey': ['0x0005', '0x0003'], 'port': '1'}, {'pkey': ['0x0005', '0x0003'], 'port': '2'}]

To

exalogic_ipoib = [{'pkey': ['0x8005', '0x0003'], 'port': '1'}, {'pkey': ['0x8005', '0x0003'], 'port': '2'}]


Remember to change the partition key for BOTH ports.
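The step 4 edit can equally be scripted.  A cautious sketch that keeps a backup and flips the membership bit on both ports in one pass (shown here against a scratch copy of the exalogic_ipoib line rather than a real vm.cfg):

```shell
#!/bin/sh
# Flip partition 0x0005 to full membership (0x8005) on BOTH ports.
# 0x8005 is 0x0005 with the top (membership) bit set: the same
# partition, but as a full member. A scratch copy stands in for the
# real vm.cfg located with grep in step 2.
CFG=/tmp/vm.cfg.demo
echo "exalogic_ipoib = [{'pkey': ['0x0005', '0x0003'], 'port': '1'}, {'pkey': ['0x0005', '0x0003'], 'port': '2'}]" > "$CFG"
cp "$CFG" "$CFG.bak"                   # keep a backup before editing
sed -i "s/'0x0005'/'0x8005'/g" "$CFG"  # the /g flag changes both ports
grep exalogic_ipoib "$CFG"
```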

5.  Restart the vServer to ensure that the visibility is as expected and it can be accessed from other vServers.


Appendix

Auto-start of OTD instance

Below is a very simple example script that can be used to automatically start the OTD instances.


[root@ldap-proxy ~]# cat /etc/init.d/otd
#!/bin/sh
# chkconfig init header
#
# otd: Oracle Traffic Director
#
# chkconfig: 345 92 8
# description: Oracle Traffic Director Server \
# Start/Stop the OTD installation automatically
#
#
# Script to start and stop the OTD instances during shutdown and restart of the machine
PATH=/usr/bin:/bin:/usr/local/bin:$PATH
export PATH
OTD_HOME=/u01/instances/otd/admin
export OTD_HOME
installUser=oracle

case "$1" in
    start)
        COMMAND="$OTD_HOME/admin-server/bin/startserv"
        su - $installUser -c "$COMMAND"
        COMMAND="$OTD_HOME/net-ldap-proxy/bin/startserv"
        su - $installUser -c "$COMMAND"
        ;;
    stop)
        COMMAND="$OTD_HOME/admin-server/bin/stopserv"
        su - $installUser -c "$COMMAND"
        COMMAND="$OTD_HOME/net-ldap-proxy/bin/stopserv"
        su - $installUser -c "$COMMAND"
        ;;
    status)
        ps -ef | grep net-ldap-proxy
        ;;
    *)
        echo $"Usage: $0 {start|stop|status}"
        exit 1
        ;;
esac

Simply create the otd file in /etc/init.d, then use the # chkconfig --add otd command to add it to the list of managed services.  The service should then automatically start on boot.

[root@ldap-proxy ~]# chkconfig --list otd
otd             0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@ldap-proxy ~]# service otd stop
server has been shutdown
server has been shutdown
[root@ldap-proxy ~]# service otd start
Oracle Traffic Director 11.1.1.7.0 B01/14/2013 04:13
[NOTIFICATION:1] [OTD-80118] Using [Java HotSpot(TM) 64-Bit Server VM, Version 1.6.0_35] from [Sun Microsystems Inc.]
[NOTIFICATION:1] [OTD-80000] Loading web module in virtual server [admin-server] at [/admin]
[NOTIFICATION:1] [OTD-80000] Loading web module in virtual server [admin-server] at [/jmxconnector]
[NOTIFICATION:1] [OTD-10358] admin-ssl-port: https://ldap-proxy:1895 ready to accept requests
[NOTIFICATION:1] [OTD-10487] successful server startup
Oracle Traffic Director 11.1.1.7.0 B01/14/2013 04:13
[NOTIFICATION:1] [OTD-10358] tcp-listener-1: tcp://tcpserver:3389 ready to accept requests
[NOTIFICATION:1] [OTD-10487] successful server startup
[root@ldap-proxy ~]# service otd status
oracle   28131     1  0 06:07 ?        00:00:00 trafficd-wdog -d /u01/instances/otd/admin/net-ldap-proxy/config -r /u01/products/otd -t /tmp/net-ldap-proxy-7dd0931e -u oracle
oracle   28132 28131  1 06:07 ?        00:00:00 trafficd -d /u01/instances/otd/admin/net-ldap-proxy/config -r /u01/products/otd -t /tmp/net-ldap-proxy-7dd0931e -u oracle
oracle   28133 28132  0 06:07 ?        00:00:00 trafficd -d /u01/instances/otd/admin/net-ldap-proxy/config -r /u01/products/otd -t /tmp/net-ldap-proxy-7dd0931e -u oracle
root     28162 28160  0 06:07 pts/0    00:00:00 grep net-ldap-proxy