Cluster authentication service

Prerequisites

PhenixID Server version 4.0 installed on two machines.

Overview

When setting up a PhenixID cluster, two services are involved:
PhenixID Service (clustering of sessions and configuration)
PhenixID Database service (clustering of the database)

NOTE:
This document will cover configuration of the PhenixID authentication service cluster.
More information about the database cluster can be found in this document:
Database
If upgrading from an earlier version, please use this document:
Upgrade

Configuration of PhenixID authentication service cluster

The PhenixID service should now be installed on both nodes as standalone installations.

To configure the PAS cluster, edit the file /classes/cluster.xml.
Enable clustering by changing or adding the following:

  • Set enabled="true" on both the <tcp-ip enabled="false"> and the <interfaces enabled="false"> elements.
  • Add a "member-list" section inside the "tcp-ip" section, containing the IP address of the second node.
  • Add the local IP address to "public-address", to "interface" inside "tcp-ip", and to "interfaces".

See example below.
When done, save the file.
NOTE:
Do not start the service until all configuration has been done.

    <network>
        <public-address>192.168.0.11</public-address>
        <port auto-increment="false" port-count="1">5701</port>
        <outbound-ports>
            <!--
            Allowed port range when connecting to other nodes.
            0 or * means use system provided port.
            -->
            <ports>0</ports>
        </outbound-ports>
        <join>
            <tcp-ip enabled="true">
                <interface>192.168.0.11</interface>
                <member-list>
                    <member>192.168.0.12</member>
                </member-list>
            </tcp-ip>
            <multicast enabled="false"/>
            <aws enabled="false"/>
        </join>
        <interfaces enabled="true">
            <interface>192.168.0.11</interface>
        </interfaces>
    </network>
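
On the second node the same settings are mirrored: its own IP address goes into "public-address", "interface", and "interfaces", and the first node's IP address goes into "member-list". A sketch of the second node's cluster.xml, assuming the example addresses above:

```xml
<network>
    <public-address>192.168.0.12</public-address>
    <port auto-increment="false" port-count="1">5701</port>
    <outbound-ports>
        <ports>0</ports>
    </outbound-ports>
    <join>
        <tcp-ip enabled="true">
            <!-- This node's own IP address -->
            <interface>192.168.0.12</interface>
            <member-list>
                <!-- IP address of the other node -->
                <member>192.168.0.11</member>
            </member-list>
        </tcp-ip>
        <multicast enabled="false"/>
        <aws enabled="false"/>
    </join>
    <interfaces enabled="true">
        <interface>192.168.0.12</interface>
    </interfaces>
</network>
```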

When done on both nodes, sessions and the PAS configuration will be clustered between the nodes.
Verify the log at first startup to make sure that the nodes are communicating correctly.
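
If the nodes do not find each other, a common cause is that the cluster port (5701 in the example above) is blocked between the machines. A minimal sketch for checking TCP reachability of the peer's cluster port, using hypothetical helper and host names (adjust the address and port to your environment, and check firewall rules if it fails):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: run on the first node (192.168.0.11) to check
# that the second node's cluster port is reachable.
if __name__ == "__main__":
    print(port_open("192.168.0.12", 5701))
```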