CentOS cluster computing

Abbreviations and systems

Redhat Clustering daemons and systems

 System   Meaning                                                                Runs on
 ------   -------                                                                -------
 CCS      Cluster Configuration System                                           Each node
 CLVM     Cluster Logical Volume Manager. Provides volume management to nodes.   Each node
 CMAN     Cluster Manager                                                        Each node
 DLM      Distributed Lock Manager                                               Each node
 fenced   Fence daemon - isolates (fences) failed nodes from shared storage      Each node
 GFS      Global File System. Shared storage among nodes.                        Each node
 GNBD     Global Network Block Device. Low-level storage access over Ethernet    GFS server
 LVS      Linux Virtual Server, routing software to provide IP load balancing    On two or more Linux gateways
 RHCS     RedHat Cluster Suite - software components to build various types of clusters

Red Hat Cluster Suite Introduction

Red Hat Cluster Suite (RHCS) is an integrated set of software components that can be deployed in a variety of configurations to suit your needs for performance, high-availability, load balancing, scalability, file sharing, and economy.

RHCS consists of the following major components:

  • Cluster infrastructure — Provides fundamental functions for nodes to work together as a cluster: configuration-file management, membership management, lock management, and fencing.
  • High-availability Service Management — Provides failover of services from one cluster node to another in case a node becomes inoperative (see the clusvcadm sketch after this list).
  • Cluster administration tools — Configuration and management tools for setting up, configuring, and managing a Red Hat cluster. The tools are for use with the cluster infrastructure components, the high-availability service management components, and storage.
  • Linux Virtual Server (LVS) — Routing software that provides IP load balancing. LVS runs on a pair of redundant servers that distribute client requests evenly to real servers behind them.
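
A hedged sketch of managing such a failover service by hand with clusvcadm, using the apache_service configured later on this page (assumes rgmanager is running):

clusvcadm -r apache_service -m node3.tekkom.dk   # relocate the service to node3
clusvcadm -d apache_service                      # disable (stop) the service
clusvcadm -e apache_service                      # enable (start) the service again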

Additional Cluster Components

You can supplement Red Hat Cluster Suite with the following components, which are part of an optional package (and not part of Red Hat Cluster Suite):

  • Red Hat GFS (Global File System) — Provides a cluster file system for use with Red Hat Cluster Suite. GFS allows multiple nodes to share storage at a block level as if the storage were connected locally to each cluster node.
  • Cluster Logical Volume Manager (CLVM) — Provides volume management of cluster storage.
  • Global Network Block Device (GNBD) — An ancillary component of GFS that exports block-level storage to Ethernet. This is an economical way to make block-level storage available to Red Hat GFS.
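
As a rough sketch of how GNBD is typically used (the device, export name, and server hostname below are examples, not taken from this cluster):

gnbd_serv                              # on the GNBD server: start the server daemon
gnbd_export -d /dev/sdb1 -e webdata    # export a local block device under the name "webdata"
modprobe gnbd                          # on each client node: load the gnbd kernel module
gnbd_import -i gnbdserver.tekkom.dk    # import the exports; devices appear under /dev/gnbd/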

Cluster management with Conga

Starting luci and ricci

Follow the instructions in chapter 3 - Configuring Red Hat Cluster With Conga - in the Red Hat 5.2 Cluster Administration manual.
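
A minimal sketch of the commands behind that chapter (assuming the luci and ricci packages are already installed):

service ricci start     # on every cluster node: start the ricci agent
chkconfig ricci on      # start ricci at boot
luci_admin init         # on the management host: set the luci admin password
service luci start      # luci serves the web interface on https port 8084
chkconfig luci on       # start luci at boot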

notes

Services

  • ricci
  • luci
  • cman
  • rgmanager
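
On every node the core cluster services must also run at boot; a short sketch:

chkconfig cman on        # cluster manager: membership and quorum
chkconfig rgmanager on   # resource group manager: runs the cluster services
service cman start
service rgmanager start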

utilities

  • clustat - Cluster Status

Files

  • /etc/cluster/cluster.conf

hostnames

Give all the nodes hostnames in /etc/hosts or use DNS:

  • node1.tekkom.dk
  • node2.tekkom.dk
  • ..
hostname node1.tekkom.dk     # set the hostname for the running system
vi /etc/sysconfig/network    # set HOSTNAME=node1.tekkom.dk so it persists across reboots
service network restart     # apply the change
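
A sketch of the matching /etc/hosts entries (node3's address appears in the cman_tool output below; the other addresses are placeholders):

192.168.138.146   node1.tekkom.dk   node1    # placeholder address
192.168.138.147   node2.tekkom.dk   node2    # placeholder address
192.168.138.157   node3.tekkom.dk   node3    # from the cman_tool output below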

example

[root@node1 ~]# clustat
Cluster Status for webcluster2 @ Sat Apr  4 07:33:26 2009
Member Status: Quorate

 Member Name                                           ID   Status
 ------ ----                                           ---- ------
 node1.tekkom.dk                                           1 Online
 node2.tekkom.dk                                           2 Online, rgmanager
 node3.tekkom.dk                                           3 Online, Local, rgmanager

 Service Name                                 Owner (Last)                                 State
 ------- ----                                 ----- ------                                 -----
 service:apache_service                       node2.tekkom.dk                              started

[root@node1 cluster]# cman_tool status
Version: 6.1.0
Config Version: 16
Cluster Name: webcluster2
Cluster Id: 28826
Cluster Member: Yes
Cluster Generation: 252
Membership state: Cluster-Member
Nodes: 3
Expected votes: 3
Total votes: 3
Quorum: 2
Active subsystems: 9
Flags: Dirty
Ports Bound: 0 11 177
Node name: node3.tekkom.dk
Node ID: 3
Multicast addresses: 239.192.112.11
Node addresses: 192.168.138.157

[root@node1 ~]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster alias="webcluster2" config_version="16" name="webcluster2">
        <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="node1.tekkom.dk" nodeid="1" votes="1">
                        <fence/>
                </clusternode>
                <clusternode name="node2.tekkom.dk" nodeid="2" votes="1">
                        <fence/>
                </clusternode>
                <clusternode name="node3.tekkom.dk" nodeid="3" votes="1">
                        <fence/>
                </clusternode>
        </clusternodes>
        <cman/>
        <fencedevices/>
        <rm>
                <failoverdomains>
                        <failoverdomain name="webcluster-failover" nofailback="0" ordered="0" restricted="1">
                                <failoverdomainnode name="node2.tekkom.dk" priority="1"/>
                        </failoverdomain>
                </failoverdomains>
                <resources>
                        <apache config_file="conf/httpd.conf" name="Appache2" server_root="/mnt/iscsi" shutdown_wait="0"/>
                        <script file="/etc/rc.d/init.d/httpd" name="apachescript"/>
                </resources>
                <service autostart="1" domain="webcluster-failover" exclusive="0" name="apache_service"/>
        </rm>
</cluster>
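
Note the config_version attribute (16 here): it must be incremented every time cluster.conf is edited. A sketch of propagating an updated file to the other nodes with the standard ccs tools:

vi /etc/cluster/cluster.conf                 # edit and increment config_version
ccs_tool update /etc/cluster/cluster.conf    # push the new version to all cluster nodes
cman_tool version                            # verify the running config version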

Manuals

Clustering

  • FreeNAS Network Attached Storage solution

Backup technologies