What is High Availability Cluster: A Basic Introduction

A high-availability cluster is a type of computing system that is designed to ensure that critical services and applications remain available to users with minimal downtime. It consists of multiple servers, or nodes, that are configured to work together to provide a single, unified service or application. If one node fails, the other nodes take over to ensure that the service or application remains available to users.

There are several different types of high-availability clusters, including active-passive, active-active, and hybrid clusters.

  • An active-passive cluster consists of one active node that handles all requests and one or more passive nodes that are in standby mode. If the active node fails, the passive node(s) take over and become the active node(s). This type of cluster is simple and easy to set up, but it can lead to downtime if the failover process takes too long.
  • An active-active cluster consists of multiple active nodes that handle requests simultaneously. This type of cluster offers improved performance and scalability, but it can be more complex to set up and manage.
  • A hybrid cluster combines elements of both active-passive and active-active clusters. It typically includes one or more active nodes that handle requests and one or more passive nodes that are in standby mode. If an active node fails, the passive node(s) take over and become active, providing failover protection.

High-availability clusters are used in a variety of environments, including mission-critical applications, web servers, and databases. They are an important tool for ensuring the continuous operation of services and applications, as well as protecting against data loss and downtime.

To achieve high availability, clusters often use specialized software and hardware components, such as load balancers, storage area networks (SANs), and redundant power supplies. They may also use failover mechanisms, such as heartbeat monitoring, to track the health of the nodes and initiate a failover if necessary.
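
As a rough illustration of the heartbeat idea (a minimal sketch only; production clusters use dedicated tools such as Keepalived or Pacemaker, and the peer address and takeover script below are hypothetical placeholders), a monitoring loop on a standby node might look like this:

#!/usr/bin/env bash
# Illustrative heartbeat loop: ping the active node and trigger a
# failover action if it stops responding. All values are placeholders.
PEER="192.0.2.10"                                  # address of the active node
TAKEOVER_CMD="/usr/local/bin/promote-to-active"    # hypothetical failover script

while true; do
    if ! ping -c 1 -W 2 "$PEER" > /dev/null 2>&1; then
        echo "$(date): peer $PEER unreachable, initiating failover"
        "$TAKEOVER_CMD"
        break
    fi
    sleep 5
done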

In summary, a high-availability cluster is a system that is designed to ensure that critical services and applications remain available to users with minimal downtime. It consists of multiple servers, or nodes, that are configured to work together and provide failover protection in the event of a node failure. High-availability clusters are used in a variety of environments and can be an important tool for ensuring the continuous operation of services and applications.

How To Install Elasticsearch on CentOS/RHEL 8

Elasticsearch is a flexible and powerful open-source, distributed, real-time search and analytics engine. Through a simple set of APIs, it provides full-text search capabilities. Elasticsearch is freely available under the Apache 2 license, which provides the most flexibility.

Elasticsearch is used to store and search all kinds of documents. Its full-text search works on documents rather than tables and schemas.

This tutorial will help you to set up a single-node Elasticsearch cluster on CentOS 8 and RedHat 8 systems.

Prerequisites

Java is the primary requirement for installing Elasticsearch on any system. If a suitable Java version is not already installed, install OpenJDK 11 using the following command:

sudo dnf install java-11-openjdk

After installation, check the Java version:

java -version 

openjdk version "11.0.8" 2020-07-14 LTS
OpenJDK Runtime Environment 18.9 (build 11.0.8+10-LTS)
OpenJDK 64-Bit Server VM 18.9 (build 11.0.8+10-LTS, mixed mode, sharing)

Step 1 – Configure Yum Repository

The first step is to configure the Elasticsearch package repository on your system. Run the following command to import the GPG key for the Elasticsearch rpm packages.

sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Next, create a yum repository configuration file for Elasticsearch. Open /etc/yum.repos.d/elasticsearch.repo in your favorite text editor:

sudo vi /etc/yum.repos.d/elasticsearch.repo

Add below content:

[Elasticsearch-7]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
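
Optionally, you can confirm that the repository has been registered before installing (a quick sanity check, not strictly required):

dnf repolist | grep -i elasticsearch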

Step 2 – Installing Elasticsearch

Your system is prepared for the Elasticsearch installation. Run the following commands to update the DNF cache and install the Elasticsearch rpm packages on your system.

sudo dnf update -y
sudo dnf install elasticsearch -y

Step 3 – Configure Elasticsearch

After successful installation, edit the Elasticsearch configuration file “/etc/elasticsearch/elasticsearch.yml” and set network.host to localhost. You can also change it to the system's LAN IP address to make Elasticsearch accessible over the network.

vim /etc/elasticsearch/elasticsearch.yml

Set the following values to customize your Elasticsearch environment.

  cluster.name: TecAdmin-ES-Cluster
  node.name: node-1
  path.data: /var/lib/elasticsearch
  network.host: 127.0.0.1

Save file and close.
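
If you bind network.host to a LAN IP instead of localhost, port 9200 must also be reachable from the network. On a CentOS/RHEL 8 system running firewalld, opening the port could look like the commands below (adjust to your own firewall setup):

sudo firewall-cmd --permanent --add-port=9200/tcp
sudo firewall-cmd --reload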

After making configuration changes, let’s enable the Elasticsearch service and start it.

sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch

Your Elasticsearch server is up and running now. To view the status of the service, run the command below:

sudo systemctl status elasticsearch

Output:

● elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2020-10-21 05:28:25 UTC; 12min ago
     Docs: https://www.elastic.co
 Main PID: 99609 (java)
    Tasks: 61 (limit: 75413)
   Memory: 1.2G
   CGroup: /system.slice/elasticsearch.service
           ├─99609 /usr/share/elasticsearch/jdk/bin/java -Xshare:auto -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -X>
           └─99818 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller

Oct 21 05:28:09 centos8 systemd[1]: Starting Elasticsearch...
Oct 21 05:28:25 centos8 systemd[1]: Started Elasticsearch.

Step 4 – Test Elasticsearch

Elasticsearch has been successfully installed and is running on your CentOS 8 or RHEL 8 system. Now, you can use it for storing and searching content.

Run the following command to view the Elasticsearch server configuration and version details:

curl -X GET "localhost:9200/?pretty"

You will see the results like below:

{
  "name" : "centos8",
  "cluster_name" : "TecAdmin-ES-Cluster",
  "cluster_uuid" : "a0OZk1c1TEmPTlA24uT4zQ",
  "version" : {
    "number" : "7.9.2",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "d34da0ea4a966c4e49417f2da2f244e3e97b4e6e",
    "build_date" : "2020-09-23T00:45:33.626720Z",
    "build_snapshot" : false,
    "lucene_version" : "8.6.2",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
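
If you also want a quick end-to-end check of indexing and searching, the commands below index a sample document and then search for it (the index name “demo” and the document body are arbitrary examples; allow a second for the index to refresh before searching):

curl -X POST "localhost:9200/demo/_doc?pretty" -H 'Content-Type: application/json' -d '{"title": "Hello Elasticsearch"}'
curl "localhost:9200/demo/_search?q=title:hello&pretty"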

Conclusion

In this tutorial, you have learned to install and configure Elasticsearch on a CentOS 8 / RHEL 8 Linux system.

How to Setup Load Balancing with Nginx in Linux

Prerequisites

You must have root or sudo access to your server. Log in to the server console with privileged access, and make sure your site is already configured on the backend servers.

Step 1 – Install Nginx Server

First of all, log in to your server over SSH. Windows users can use PuTTY or an alternative SSH client. Then install Nginx using your Linux package manager; the Nginx package is available in the default yum and apt repositories.

Using Apt-get:

$ sudo apt-get install nginx

Using Yum:

$ sudo yum install nginx

Using DNF:

$ sudo dnf install nginx

Step 2 – Setup VirtualHost with Upstream

Let’s create an Nginx virtual host configuration file for your domain. Below is a minimal configuration file.

/etc/nginx/conf.d/www.example.com.conf

upstream remote_servers  {
   server remote1.example.com;
   server remote2.example.com;
   server remote3.example.com;
}

server {
   listen   80;
   server_name  example.com www.example.com;
   location / {
     proxy_pass  http://remote_servers;
   }
}

Step 3 – Other Useful Directives

You can also use additional settings to further customize and optimize your Nginx load balancer. For example, set server weights or enable IP hash as shown in the configurations below.

Weight

upstream remote_servers  {
   server remote1.example.com weight=1;
   server remote2.example.com weight=2;
   server remote3.example.com weight=4;
}

IP Hash

upstream remote_servers {
   ip_hash;
   server   remote1.example.com;
   server   remote2.example.com;
   server   remote3.example.com  down;
 }

Step 4 – Restart Nginx Service

After making all the changes, restart Nginx service with the following command.

$ sudo systemctl restart nginx.service
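
Optionally, validate the configuration before restarting, and then confirm that requests are rotated across the backends. The loop below is only a rough check and assumes each backend serves a page that identifies it:

$ sudo nginx -t
$ for i in $(seq 1 6); do curl -s http://example.com/ | head -n 1; done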

How to Setup IP Failover with KeepAlived on Ubuntu & Debian

Keepalived is used for IP failover between two servers. It provides load balancing and high-availability facilities to Linux-based infrastructures, and it works on the VRRP (Virtual Router Redundancy Protocol) protocol. In this tutorial, we configure IP failover between two Linux systems running as load balancers for a load-balanced, highly available infrastructure.

You may also be interested in our tutorial How to Setup HAProxy on Ubuntu & Linuxmint.

Network Scenario:
  1. LB1 Server: 192.168.10.111 (eth0)
  2. LB2 Server: 192.168.10.112 (eth0)
  3. Virtual IP: 192.168.10.121

[Diagram: Keepalived VRRP failover network layout]

I hope the above structure gives you a better understanding of the setup. Let’s move on to configuring IP failover between the LB1 and LB2 servers.

Step 1 – Install Required Packages

First of all, use the following commands to install the packages required to configure Keepalived on the server.

sudo apt-get update
sudo apt-get install linux-headers-$(uname -r)

Step 2 – Install Keepalived

Keepalived packages are available in the default apt repositories, so just use the following command to install it on both servers.

sudo apt-get install keepalived

Step 3 – Setup Keepalived on LB1.

Now create or edit the Keepalived configuration file /etc/keepalived/keepalived.conf on LB1 and add the following settings. Update the email addresses, interface, and IP values to match your network and system configuration.

vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     sysadmin@mydomain.com
     support@mydomain.com
   }
   notification_email_from lb1@mydomain.com
   smtp_server localhost
   smtp_connect_timeout 30
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 101
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.10.121
    }
}

Step 4 – Setup KeepAlived on LB2.

Also, create or edit the Keepalived configuration file /etc/keepalived/keepalived.conf on LB2 and add the following configuration. While making changes in the LB2 configuration file, make sure to set the priority value lower than on LB1. For example, the configuration below uses a priority of 100, whereas LB1 has 101.

vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     sysadmin@mydomain.com
     support@mydomain.com
   }
   notification_email_from lb2@mydomain.com
   smtp_server localhost
   smtp_connect_timeout 30
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 101
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.10.121
    }
}
1. The priority value should be higher on the master server; the configured state does not matter. If your state is MASTER but your priority is lower than that of the router with BACKUP, you will lose the MASTER state.
2. virtual_router_id should be the same on both the LB1 and LB2 servers.
3. By default a single vrrp_instance supports up to 20 virtual_ipaddress entries. To add more addresses you need to add more vrrp_instance blocks.

Step 5 – Start KeepAlived Service

Start the keepalived service using the following command, and also configure it to start automatically on system boot.

sudo service keepalived start
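
To have Keepalived start automatically on boot (as mentioned above), also enable the service. On systemd-based Ubuntu and Debian releases this would be:

sudo systemctl enable keepalived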

Step 6 – Check Virtual IPs

By default the virtual IP will be assigned to the master server. If the master goes down, it is automatically assigned to the slave server. Use the following command to show the virtual IP assigned to the interface.

ip addr show eth0

Sample output

2: eth0:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:b9:b0:de brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.111/24 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 192.168.10.121/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::11ab:eb3b:dbce:a119/64 scope link
       valid_lft forever preferred_lft forever

Step 7 – Verify IP Failover Setup

  1. Shut down the master server (LB1) and check whether the IPs are automatically assigned to the slave server.
ip addr show eth0
  2. Now start LB1 and stop the slave server (LB2). The IPs will automatically be assigned back to the master server.
ip addr show eth0
  3. Watch the log files to ensure it is working.
tail -f /var/log/syslog

Sample Output

Feb  7 17:24:51 tecadmin Keepalived_healthcheckers[23177]: Registering Kernel netlink reflector
Feb  7 17:24:51 tecadmin Keepalived_healthcheckers[23177]: Registering Kernel netlink command channel
Feb  7 17:24:51 tecadmin Keepalived_healthcheckers[23177]: Opening file '/etc/keepalived/keepalived.conf'.
Feb  7 17:24:51 tecadmin Keepalived_healthcheckers[23177]: Configuration is using : 11104 Bytes
Feb  7 17:24:51 tecadmin Keepalived_healthcheckers[23177]: Using LinkWatch kernel netlink reflector...
Feb  7 17:24:52 tecadmin Keepalived_vrrp[23178]: VRRP_Instance(VI_1) Transition to MASTER STATE
Feb  7 17:24:53 tecadmin Keepalived_vrrp[23178]: VRRP_Instance(VI_1) Entering MASTER STATE
Feb  7 17:24:53 tecadmin avahi-daemon[562]: Registering new address record for 192.168.10.121 on eth0.IPv4.

How to Setup HAProxy Load Balancer on Ubuntu 18.04 & 16.04

HAProxy is a very fast and reliable solution for high availability and load balancing, and it supports TCP and HTTP-based applications. Nowadays, maximizing website uptime is crucial for heavy-traffic websites, and this is not possible with a single-server setup. We therefore need a high-availability environment that can easily cope with the failure of a single server.

[Diagram: HAProxy load balancing setup]

This article will help you to set up an HAProxy load balancing environment on Ubuntu, Debian and LinuxMint. It configures Layer 4 load balancing (transport layer), which balances the load and forwards requests to different servers based on IP address and port number.

Network Details –

Below is our network setup. There are three web servers running Apache2 and listening on port 80, and one HAProxy server.

Web Server Details:

Server 1:    web1.example.com     192.168.1.101
Server 2:    web2.example.com     192.168.1.102
Server 3:    web3.example.com     192.168.1.103

HAProxy Server: 

HAProxy:     haproxy              192.168.1.12

Step 1 – Install HAProxy

Now start the setup. SSH to your HAProxy server as a privileged user and install HAProxy using the following commands.

sudo add-apt-repository ppa:vbernat/haproxy-1.8
sudo apt-get update
sudo apt-get install haproxy

Step 2 – Configure HAProxy Load Balancing

Now edit the default HAProxy configuration file /etc/haproxy/haproxy.cfg and start the configuration.

sudo vi /etc/haproxy/haproxy.cfg

Default Settings:

You will find some default configuration like below. If you are not familiar with these settings, you can keep them as they are.

global
	log /dev/log	local0
	log /dev/log	local1 notice
	chroot /var/lib/haproxy
	stats socket /run/haproxy/admin.sock mode 660 level admin
	stats timeout 30s
	user haproxy
	group haproxy
	daemon

	# Default SSL material locations
	ca-base /etc/ssl/certs
	crt-base /etc/ssl/private

	# Default ciphers to use on SSL-enabled listening sockets.
	# For more information, see ciphers(1SSL). This list is from:
	#  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
	ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256::RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
	ssl-default-bind-options no-sslv3

defaults
	log	global
	mode	http
	option	httplog
	option	dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000
	errorfile 400 /etc/haproxy/errors/400.http
	errorfile 403 /etc/haproxy/errors/403.http
	errorfile 408 /etc/haproxy/errors/408.http
	errorfile 500 /etc/haproxy/errors/500.http
	errorfile 502 /etc/haproxy/errors/502.http
	errorfile 503 /etc/haproxy/errors/503.http
	errorfile 504 /etc/haproxy/errors/504.http

Adding HAProxy Listener:

Now tell HAProxy where to listen for new connections. As per the configuration below, HAProxy will listen on port 80 of the 192.168.1.12 IP address.

frontend Local_Server
    bind 192.168.1.12:80
    mode http
    default_backend My_Web_Servers

Add Backend Web Servers:

As per the above configuration, HAProxy is now listening on port 80. Next, define the backend web servers to which HAProxy sends the requests.

backend My_Web_Servers
    mode http
    balance roundrobin
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    option httpchk HEAD / HTTP/1.1\r\nHost:localhost
    server web1.example.com  192.168.1.101:80 check
    server web2.example.com  192.168.1.102:80 check
    server web3.example.com  192.168.1.103:80 check

Enable Stats (Optional)

If you want, you can also enable HAProxy statistics by adding the following configuration to the HAProxy configuration file.

listen stats *:1936
    stats enable
    stats hide-version
    stats refresh 30s
    stats show-node
    stats auth username:password
    stats uri  /stats
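
Once HAProxy is restarted with this block in place, the statistics page should be reachable on port 1936. For example, replacing the credentials with whatever you configured above:

curl -u username:password http://192.168.1.12:1936/stats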

Step 3 – Final HAProxy Configuration File

The final configuration file may look like below:

global
	log /dev/log	local0
	log /dev/log	local1 notice
	chroot /var/lib/haproxy
	stats socket /run/haproxy/admin.sock mode 660 level admin
	stats timeout 30s
	user haproxy
	group haproxy
	daemon

	# Default SSL material locations
	ca-base /etc/ssl/certs
	crt-base /etc/ssl/private

	# Default ciphers to use on SSL-enabled listening sockets.
	# For more information, see ciphers(1SSL). This list is from:
	#  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
	ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256::RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
	ssl-default-bind-options no-sslv3

defaults
	log	global
	mode	http
	option	httplog
	option	dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000
	errorfile 400 /etc/haproxy/errors/400.http
	errorfile 403 /etc/haproxy/errors/403.http
	errorfile 408 /etc/haproxy/errors/408.http
	errorfile 500 /etc/haproxy/errors/500.http
	errorfile 502 /etc/haproxy/errors/502.http
	errorfile 503 /etc/haproxy/errors/503.http
	errorfile 504 /etc/haproxy/errors/504.http

frontend Local_Server
    bind 192.168.1.12:80
    mode http
    default_backend My_Web_Servers

backend My_Web_Servers
    mode http
    balance roundrobin
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    option httpchk HEAD / HTTP/1.1\r\nHost:localhost
    server web1.example.com  192.168.1.101:80 check
    server web2.example.com  192.168.1.102:80 check
    server web3.example.com  192.168.1.103:80 check

listen stats *:1936
    stats enable
    stats hide-version
    stats refresh 30s
    stats show-node
    stats auth username:password
    stats uri  /stats

Step 4 – Restart HAProxy

You have now made all the necessary changes on your HAProxy server. Verify the configuration file before restarting the service using the following command.

haproxy -c -f /etc/haproxy/haproxy.cfg

If the above command reports that the configuration file is valid, restart the HAProxy service.

sudo service haproxy restart

Step 5 – Verify HAProxy Setting

At this stage, we have a fully functional HAProxy setup. On each web server node, I have a demo index.html page showing the server's hostname, so we can easily differentiate between the servers' web pages.

Now access port 80 on IP 192.168.1.12 (as configured above) in a web browser and hit refresh. You will see that HAProxy sends requests to the backend servers one by one (as per the round-robin algorithm).

[Screenshots: responses served by web1, web2 and web3 on successive refreshes]

With each refresh you can see that HAProxy sends requests to the backend servers one by one.

Reference: http://www.haproxy.org/download/1.5/doc/configuration.txt

How to Install ElasticSearch (Multi Node) Cluster on CentOS/RHEL, Ubuntu & Debian

ElasticSearch is a flexible and powerful open-source, distributed, real-time search and analytics engine. Through a simple set of APIs, it provides full-text search capabilities. ElasticSearch is freely available under the Apache 2 license, which provides the most flexibility.

This article will help you configure an ElasticSearch multi-node cluster on CentOS, RHEL, Ubuntu and Debian systems. A multi-node ElasticSearch cluster is simply several single-node setups configured with the same cluster name on the same network.

Network Scenario

We have three servers with the following IPs and hostnames. All servers are running in the same LAN and have full access to each other by both IP and hostname.

  192.168.10.101  NODE_1
  192.168.10.102  NODE_2
  192.168.10.103  NODE_3

Verify Java (All Nodes)

Java is the primary requirement for installing ElasticSearch. So make sure you have Java installed on all nodes.

# java -version 

java version "1.8.0_31"
Java(TM) SE Runtime Environment (build 1.8.0_31-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.31-b07, mixed mode)

If you don’t have Java installed on any node, use one of the following links to install it first.

Install Java 8 on CentOS/RHEL 7/6/5
Install Java 8 on Ubuntu

Download ElasticSearch (All Nodes)

Now download the latest ElasticSearch archive on all node systems from its official download page. At the time of the last update of this article, ElasticSearch 1.4.2 was the latest version available. Use the following command to download ElasticSearch 1.4.2.

$ wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.4.2.tar.gz

Now extract ElasticSearch on all node systems.

$ tar xzf elasticsearch-1.4.2.tar.gz

Configure ElasticSearch

Now we need to set up ElasticSearch on all node systems. ElasticSearch uses “elasticsearch” as the default cluster name. We recommend changing it as per your naming convention.

$ mv elasticsearch-1.4.2 /usr/share/elasticsearch
$ cd /usr/share/elasticsearch

To change the cluster name, edit the config/elasticsearch.yml file on each node and update the following values. Node names are generated dynamically, but to keep a fixed, user-friendly name, change node.name as well.

On NODE_1

Edit elasticsearch cluster configuration on NODE_1 (192.168.10.101) system.

$ vim config/elasticsearch.yml
  cluster.name: TecAdminCluster
  node.name: "NODE_1"

On NODE_2

Edit elasticsearch cluster configuration on NODE_2 (192.168.10.102) system.

$ vim config/elasticsearch.yml
  cluster.name: TecAdminCluster
  node.name: "NODE_2"

On NODE_3

Edit elasticsearch cluster configuration on NODE_3 (192.168.10.103) system.

$ vim config/elasticsearch.yml
  cluster.name: TecAdminCluster
  node.name: "NODE_3"
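
ElasticSearch 1.x discovers other nodes via multicast by default. If multicast is not available on your network (an assumption about your environment), you may also need to list the other nodes explicitly on every node; adjust the IPs to your setup:

$ echo 'discovery.zen.ping.multicast.enabled: false' >> config/elasticsearch.yml
$ echo 'discovery.zen.ping.unicast.hosts: ["192.168.10.101", "192.168.10.102", "192.168.10.103"]' >> config/elasticsearch.yml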

Install ElasticSearch-Head Plugin (All Nodes)

elasticsearch-head is a web front end for browsing and interacting with an ElasticSearch cluster. Use the following command to install this plugin on all node systems.

$ bin/plugin --install mobz/elasticsearch-head

Starting ElasticSearch Cluster (All Nodes)

The ElasticSearch cluster setup is now complete. Let’s start the ElasticSearch cluster using the following command on all nodes.

$ ./bin/elasticsearch &

By default ElasticSearch listens on ports 9200 and 9300. Connect to NODE_1 on port 9200 at the following URL, and you will see all three nodes in your cluster.

http://NODE_1:9200/_plugin/head/

[Screenshot: elasticsearch-head showing all three nodes of the cluster]
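
You can also confirm from the command line that all three nodes have joined the cluster; the number_of_nodes field in the output should report 3:

$ curl 'http://NODE_1:9200/_cluster/health?pretty=true'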

Verify Multi Node Cluster

To verify that the cluster is working properly, insert some data on one node; if the same data is available on the other nodes, the cluster is working properly.

Insert Data on NODE_1

To verify the cluster, create an index (called mybucket here) on NODE_1 and add some data.

$ curl -XPUT http://NODE_1:9200/mybucket
$ curl -XPUT 'http://NODE_1:9200/mybucket/user/rahul' -d '{ "name" : "Rahul Kumar" }'
$ curl -XPUT 'http://NODE_1:9200/mybucket/post/1' -d '
{
    "user": "rahul",
    "postDate": "01-16-2015",
    "body": "Adding Data in ElasticSearch Cluster" ,
    "title": "ElasticSearch Cluster Test"
}'

Search Data on All Nodes

Now search for the same data from NODE_2 and NODE_3 and check whether it has been replicated to the other nodes of the cluster. The commands above created a user named rahul and added some data, so use the following commands to search for the data associated with the user rahul.

$ curl 'http://NODE_1:9200/mybucket/post/_search?q=user:rahul&pretty=true'
$ curl 'http://NODE_2:9200/mybucket/post/_search?q=user:rahul&pretty=true'
$ curl 'http://NODE_3:9200/mybucket/post/_search?q=user:rahul&pretty=true'

You will get results similar to the following for all of the above commands.

{
  "took" : 69,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 1,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "mybucket",
      "_type" : "post",
      "_id" : "1",
      "_score" : 1.0,
      "_source":
{
    "user": "rahul",
    "postDate": "01-16-2015",
    "body": "Adding Data in ElasticSearch Cluster" ,
    "title": "ElasticSearch Cluster Test"
}
    } ]
  }
}

View Cluster Data on Web Browser

To view the data in the ElasticSearch cluster, open the elasticsearch-head plugin using any of the cluster IPs at the URL below, then click on the Browser tab.

http://NODE_1:9200/_plugin/head/

[Screenshot: data shown in the elasticsearch-head Browser tab]

How to Setup Hadoop 2.6.5 (Single Node Cluster) on Ubuntu, CentOS And Fedora

Apache Hadoop 2.6.5 brings noticeable improvements over the previous stable 2.X.Y releases, with many improvements in HDFS and MapReduce. This how-to guide will help you to install Hadoop 2.6.5 on CentOS/RHEL 7/6/5, Ubuntu and other Debian-based operating systems. This article doesn’t cover the overall configuration of Hadoop; it covers only the basic configuration required to start working with it.


Step 1: Installing Java

Java is the primary requirement to set up Hadoop on any system, so make sure you have Java installed by running the following command.

# java -version 

java version "1.8.0_101"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)

If you don’t have Java installed on your system, use one of the following links to install it first.

Install Java 8 on CentOS/RHEL 7/6/5
Install Java 8 on Ubuntu

Step 2: Creating Hadoop User

We recommend creating a normal (not root) account for working with Hadoop. Create a system account using the following commands.

# adduser hadoop
# passwd hadoop

After creating the account, you also need to set up key-based ssh to the account itself. To do this, execute the following commands.

# su - hadoop
$ ssh-keygen -t rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys

Let’s verify key-based login. The command below should not ask for a password, but the first time it will prompt to add the RSA key to the list of known hosts.

$ ssh localhost
$ exit

Step 3. Downloading Hadoop 2.6.5

Now download the Hadoop 2.6.5 source archive file using the command below. You can also select an alternate download mirror to increase download speed.

$ cd ~
$ wget http://www-eu.apache.org/dist/hadoop/common/hadoop-2.6.5/hadoop-2.6.5.tar.gz 
$ tar xzf hadoop-2.6.5.tar.gz 
$ mv hadoop-2.6.5 hadoop

Step 4. Configure Hadoop Pseudo-Distributed Mode

4.1. Setup Hadoop Environment Variables

First, we need to set the environment variables used by Hadoop. Edit the ~/.bashrc file and append the following values at the end of the file.

export HADOOP_HOME=/home/hadoop/hadoop
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin

Now apply the changes to the currently running environment.

$ source ~/.bashrc

Now edit the $HADOOP_HOME/etc/hadoop/hadoop-env.sh file and set the JAVA_HOME environment variable. Change the Java path as per the installation on your system.

export JAVA_HOME=/opt/jdk1.8.0_131/

4.2. Edit Configuration Files

Hadoop has many configuration files, which need to be configured as per the requirements of your Hadoop infrastructure. Let’s start with the configuration for a basic Hadoop single-node cluster setup. First, navigate to the location below.

$ cd $HADOOP_HOME/etc/hadoop

Edit core-site.xml

<configuration>
<property>
  <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
</property>
</configuration>

Edit hdfs-site.xml

<configuration>
<property>
 <name>dfs.replication</name>
 <value>1</value>
</property>

<property>
  <name>dfs.name.dir</name>
    <value>file:///home/hadoop/hadoopdata/hdfs/namenode</value>
</property>

<property>
  <name>dfs.data.dir</name>
    <value>file:///home/hadoop/hadoopdata/hdfs/datanode</value>
</property>
</configuration>

Edit mapred-site.xml

<configuration>
 <property>
  <name>mapreduce.framework.name</name>
   <value>yarn</value>
 </property>
</configuration>

Edit yarn-site.xml

<configuration>
 <property>
  <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
 </property>
</configuration>

4.3. Format Namenode

Now format the namenode using the following command, and make sure that the storage directory is reported as successfully formatted (see the sample output below).

$ hdfs namenode -format

Sample output:

15/02/04 09:58:43 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = svr1.tecadmin.net/192.168.1.133
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.6.5
...
...
15/02/04 09:58:57 INFO common.Storage: Storage directory /home/hadoop/hadoopdata/hdfs/namenode has been successfully formatted.
15/02/04 09:58:57 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
15/02/04 09:58:57 INFO util.ExitUtil: Exiting with status 0
15/02/04 09:58:57 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at svr1.tecadmin.net/192.168.1.133
************************************************************/

Step 5. Start Hadoop Cluster

Now start your Hadoop cluster using the scripts provided by Hadoop. Just navigate to your Hadoop sbin directory and execute the scripts one by one.

$ cd $HADOOP_HOME/sbin/

Now run start-dfs.sh script.

$ start-dfs.sh

Sample output:

15/02/04 10:00:34 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-namenode-svr1.tecadmin.net.out
localhost: starting datanode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-datanode-svr1.tecadmin.net.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
RSA key fingerprint is 3c:c4:f6:f1:72:d9:84:f9:71:73:4a:0d:55:2c:f9:43.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (RSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-secondarynamenode-svr1.tecadmin.net.out
15/02/04 10:01:15 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Now run start-yarn.sh script.

$ start-yarn.sh

Sample output:

starting yarn daemons
starting resourcemanager, logging to /home/hadoop/hadoop/logs/yarn-hadoop-resourcemanager-svr1.tecadmin.net.out
localhost: starting nodemanager, logging to /home/hadoop/hadoop/logs/yarn-hadoop-nodemanager-svr1.tecadmin.net.out

Step 6. Access Hadoop Services in Browser

The Hadoop NameNode web UI runs on port 50070 by default. Access your server on port 50070 in your favorite web browser.

http://svr1.tecadmin.net:50070/

[Screenshot: NameNode web UI]

Now access port 8088 to get information about the cluster and all applications.

http://svr1.tecadmin.net:8088/

[Screenshot: cluster applications (ResourceManager) web UI]

Access port 50090 for getting details about secondary namenode.

http://svr1.tecadmin.net:50090/

[Screenshot: Secondary NameNode web UI]

Access port 50075 to get details about DataNode

http://svr1.tecadmin.net:50075/

[Screenshot: DataNode web UI]

Step 7. Test Hadoop Single Node Setup

7.1 – Create the required HDFS directories using the following commands.

$ bin/hdfs dfs -mkdir /user
$ bin/hdfs dfs -mkdir /user/hadoop

7.2 – Now copy all files from the local file system directory /var/log/httpd to the Hadoop distributed file system using the command below.

$ bin/hdfs dfs -put /var/log/httpd logs

7.3 – Now browse the Hadoop distributed file system by opening the URL below in a browser.

 http://svr1.tecadmin.net:50070/explorer.html#/user/hadoop/logs

[Screenshot: uploaded files in the HDFS file browser]

7.4 – Now copy the logs directory from the Hadoop distributed file system back to the local file system.

$ bin/hdfs dfs -get logs /tmp/logs
$ ls -l /tmp/logs/

You can also run the wordcount MapReduce job example from the command line, as sketched below.
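
As a rough sketch (assuming the examples jar shipped with the Hadoop 2.6.5 distribution and the logs directory uploaded above), the wordcount job could be run and its output inspected like this:

$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar wordcount logs logs-output
$ bin/hdfs dfs -cat logs-output/part-r-00000 | head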

How to Install Elasticsearch on CentOS 7/6

Elasticsearch is a flexible and powerful open-source, distributed, real-time search and analytics engine. Through a simple set of APIs, it provides full-text search capabilities. Elasticsearch is freely available under the Apache 2 license, which provides the most flexibility.

This tutorial will help you to set up a single-node Elasticsearch cluster on CentOS, Red Hat, and Fedora systems.

Step 1 – Prerequisites

Java is the primary requirement for installing Elasticsearch on any system. You can check the installed version of Java by executing the following command. If it returns an error, install Java on your system using this tutorial.

java -version

Step 2 – Setup Yum Repository

First of all, install the GPG key for the Elasticsearch rpm packages.

sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Then create a yum repository file for Elasticsearch. Edit the /etc/yum.repos.d/elasticsearch.repo file:

sudo vi /etc/yum.repos.d/elasticsearch.repo

Add below content:

[Elasticsearch-7]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Step 3 – Install Elasticsearch

After adding the yum repository, install Elasticsearch on your CentOS or RHEL system using the following command:

sudo yum install elasticsearch

After successful installation, edit the Elasticsearch configuration file “/etc/elasticsearch/elasticsearch.yml” and set network.host to localhost. You can also change it to the system's LAN IP address to make Elasticsearch accessible over the network.

vim /etc/elasticsearch/elasticsearch.yml
  network.host: localhost

Then enable the elasticsearch service and start it.

sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch

ElasticSearch has been successfully installed and is running on your CentOS or RHEL system.

Run the following command to verify service:

curl -X GET "localhost:9200/?pretty"

You will see the results like below:

{
  "name" : "tecadmin",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "HY8HoLHnRCeb3QzXnTcmrQ",
  "version" : {
    "number" : "7.4.0",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "22e1767283e61a198cb4db791ea66e3f11ab9910",
    "build_date" : "2019-09-27T08:36:48.569419Z",
    "build_snapshot" : false,
    "lucene_version" : "8.2.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

Step 4 – Elasticsearch Examples (Optional)

The following examples will help you to add, fetch and search data in the Elasticsearch cluster.
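
Note that the example requests below were written for older Elasticsearch releases. If you installed Elasticsearch 6.x/7.x from the repository above, the document APIs expect a Content-Type header and the single _doc type, so an equivalent request would look roughly like this (using the same sample data as Command 1 below):

curl -X PUT 'http://localhost:9200/mybucket/_doc/johny' -H 'Content-Type: application/json' -d '{ "name" : "Rahul Kumar" }'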

Create New Bucket

curl -XPUT http://localhost:9200/mybucket

Output:

{"acknowledged":true}

Adding Data to Elasticsearch

Use the following commands to add some data to Elasticsearch.
Command 1:

curl -XPUT 'http://localhost:9200/mybucket/user/johny' -d '{ "name" : "Rahul Kumar" }'

Output:

{"_index":"mybucket","_type":"user","_id":"johny","_version":1,"created":true}

Command 2:

curl -XPUT 'http://localhost:9200/mybucket/post/1' -d '
{
    "user": "Rahul",
    "postDate": "01-15-2015",
    "body": "This is Demo Post 1 in Elasticsearch" ,
    "title": "Demo Post 1"
}'

Output:

{"_index":"mybucket","_type":"post","_id":"1","_version":1,"created":true}

Command 3:

curl -XPUT 'http://localhost:9200/mybucket/post/2' -d '
{
    "user": "TecAdmin",
    "postDate": "01-15-2015",
    "body": "This is Demo Post 2 in Elasticsearch" ,
    "title": "Demo Post 2"
}'

Output:

{"_index":"mybucket","_type":"post","_id":"2","_version":1,"created":true}

Fetching Data from Elasticsearch

Use the following command to GET data from ElasticSearch and read the output.

curl -XGET 'http://localhost:9200/mybucket/user/johny?pretty=true'
curl -XGET 'http://localhost:9200/mybucket/post/1?pretty=true'
curl -XGET 'http://localhost:9200/mybucket/post/2?pretty=true'

Searching in Elasticsearch

Use the following command to search data in Elasticsearch. The command below searches for all data associated with the user TecAdmin.

curl 'http://localhost:9200/mybucket/post/_search?q=user:TecAdmin&pretty=true'

Output:

{
  "took" : 145,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 1,
    "max_score" : 0.30685282,
    "hits" : [ {
      "_index" : "mybucket",
      "_type" : "post",
      "_id" : "2",
      "_score" : 0.30685282,
      "_source":
{
    "user": "TecAdmin",
    "postDate": "01-15-2015",
    "body": "This is Demo Post 2 in Elasticsearch" ,
    "title": "Demo Post 2"
}
    } ]
  }
}

Congratulations! You have successfully configured an Elasticsearch single-node cluster on your Linux system.

How to Setup MariaDB Galera Cluster 10.0 on CentOS/RedHat & Fedora

MariaDB Galera Cluster 10.0.12 Stable has been released and is available for production use. MariaDB is a relational database management system (RDBMS). Generally we use a single database server node for a small application, but think about an application that keeps thousands of users online at a time; in that situation we need a structure capable of handling this load while providing high availability. So we add multiple database servers, interconnected and kept synchronized, so that if any server goes down the other servers can take its place and continue serving users.


This article will help you to set up a MariaDB Galera Cluster 10.0.12 with 3 nodes running CentOS 6.5. The cluster server details are as follows.

    • Cluster DB1: 192.168.1.10 ( HostName: db1.tecadmin.net )
    • Cluster DB2: 192.168.1.20 ( HostName: db2.tecadmin.net )
    • Cluster DB3: 192.168.1.30 ( HostName: db3.tecadmin.net )

Note: Steps 1/2/3 have to be done on all cluster nodes; the remaining steps are node specific.

Step 1: Add MariaDB Repositories

Create a MariaDB repository file /etc/yum.repos.d/mariadb.repo with the following content on your system. The repository below works on CentOS 6.x systems; for other systems, use the repository generation tool and add the result to your system.

For CentOS 6 – 64bit

[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.0/centos6-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1

For CentOS 6 – 32bit

[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.0/centos6-x86
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1

Step 2: Install MariaDB and Galera

Before installing the MariaDB Galera cluster packages, remove any existing MySQL or MariaDB packages installed on the system. After that, use the following command to install the packages on all nodes.

# yum install MariaDB-Galera-server MariaDB-client galera

Step 3: Initial MariaDB Configuration

After successfully installing the packages in the above steps, do some initial MariaDB configuration. Use the following commands and follow the instructions on all nodes of the cluster. It will also prompt you to set the root account password.

# service mysql start
# mysql_secure_installation

After that, create a user in MariaDB on all nodes which can access the database from your network in the cluster.

# mysql -u root -p

MariaDB [(none)]> GRANT ALL PRIVILEGES ON *.* TO 'cluster'@'%' IDENTIFIED BY 'password' WITH GRANT OPTION;
MariaDB [(none)]> FLUSH PRIVILEGES;
MariaDB [(none)]> exit

and stop MariaDB service before starting cluster configuration

# service mysql stop

Step 4: Setup MariaDB Galera Cluster on DB1

Let's start setting up the MariaDB Galera cluster on the DB1 server. Edit the MariaDB server configuration file and add the following values under the [mariadb] section.

[root@db1 ~]# vim /etc/my.cnf.d/server.cnf
query_cache_size=0
binlog_format=ROW
default_storage_engine=innodb
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://192.168.1.10,192.168.1.20,192.168.1.30"
wsrep_cluster_name='cluster1'
wsrep_node_address='192.168.1.10'
wsrep_node_name='db1'
wsrep_sst_method=rsync
wsrep_sst_auth=cluster:password

Start cluster using following command.

[root@db1 ~]# /etc/init.d/mysql bootstrap
Bootstrapping the clusterStarting MySQL.... SUCCESS!

If you get any problem during startup, check the MariaDB error log file /var/lib/mysql/<hostname>.err

Step 5: Add DB2 in MariaDB Cluster

After successfully starting the cluster on DB1, start the configuration on DB2. Edit the MariaDB server configuration file and add the following values under the [mariadb] section. All the settings are the same as on DB1 except wsrep_node_address and wsrep_node_name.

[root@db2 ~]# vim /etc/my.cnf.d/server.cnf

query_cache_size=0
binlog_format=ROW
default_storage_engine=innodb
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://192.168.1.10,192.168.1.20,192.168.1.30"
wsrep_cluster_name='cluster1'
wsrep_node_address='192.168.1.20'
wsrep_node_name='db2'
wsrep_sst_method=rsync
wsrep_sst_auth=cluster:password

Start cluster using following command.

[root@db2 ~]# /etc/init.d/mysql start
Starting MySQL..... SUCCESS!

Step 6: Add DB3 in MariaDB Cluster

This server is optional. If you want only two servers in the cluster, you can skip this step, but then you need to remove the third server's IP from the DB1/DB2 configuration files. To add this server, make the same kind of changes as for DB2.

[root@db3 ~]# vim /etc/my.cnf.d/server.cnf
query_cache_size=0
binlog_format=ROW
default_storage_engine=innodb
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://192.168.1.10,192.168.1.20,192.168.1.30"
wsrep_cluster_name='cluster1'
wsrep_node_address='192.168.1.30'
wsrep_node_name='db3'
wsrep_sst_method=rsync
wsrep_sst_auth=cluster:password

Start cluster using following command.

[root@db3 ~]# /etc/init.d/mysql start
Starting MySQL..... SUCCESS!

Step 7: Test MariaDB Galera Cluster Setup

At this stage your cluster setup is complete and running properly. You can now test the cluster by creating a database and tables on any server in the cluster; they will be replicated immediately to all other servers in the cluster.
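
A quick way to confirm the cluster from the command line (a minimal check that can be run on any node) is the wsrep_cluster_size status variable, which should report 3 when all three nodes have joined. Then create a test database on DB1 and verify that it appears on DB2 and DB3 (the database name cluster_test is just an example):

# mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
# mysql -u root -p -e "CREATE DATABASE cluster_test;"
# mysql -u root -p -e "SHOW DATABASES LIKE 'cluster_test';"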

How to Setup MariaDB Galera Cluster 5.5 in CentOS, RHEL & Fedora

MariaDB is a relational database management system (RDBMS). Generally we use a single database server node for a small application, but think about an application that keeps thousands of users online at a time; in that situation we need a structure capable of handling this load while providing high availability. So we add multiple database servers, interconnected and kept synchronized, so that if any server goes down the other servers can take its place and continue serving users.

MariaDB Galera Cluster is a synchronous active-active multi-master cluster of MariaDB databases which keeps all nodes synchronized. MariaDB Galera Cluster provides synchronous replication that is always highly available (there is no data loss when one of the nodes crashes, and data replicas are always consistent). Currently it only supports the XtraDB/InnoDB storage engines and is available for the Linux platform only.

This article will help you to set up a MariaDB Galera Cluster with 3 servers running CentOS. The cluster server details are as follows.

    • Cluster DB1: 192.168.1.10 ( HostName: db1 )
    • Cluster DB2: 192.168.1.20 ( HostName: db2 )
    • Cluster DB3: 192.168.1.30 ( HostName: db3 )

Note: Steps 1/2/3 have to be done on all cluster nodes; the remaining steps are node specific.

Step 1: Add MariaDB Repositories

Create a MariaDB repository file /etc/yum.repos.d/mariadb.repo with the following content on your system. The repository below works on CentOS 6.x systems; for other systems, use the repository generation tool and add the result to your system.

For CentOS 6 – 64bit

[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/5.5/centos6-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1

For CentOS 6 – 32bit

[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/5.5/centos6-x86
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1

Step 2: Install MariaDB and Galera

Before installing the MariaDB Galera cluster packages, remove any existing MySQL or MariaDB packages installed on the system. After that, use the following command to install the packages on all nodes.

# yum install MariaDB-Galera-server MariaDB-client galera

Step 3: Initial MariaDB Configuration

After successfully installing the packages in the above steps, do some initial MariaDB configuration. Start the service, then use the following commands and follow the instructions on all nodes of the cluster. It will also prompt you to set the root account password.

# service mysql start
# mysql_secure_installation

After that, create a user in MariaDB on all nodes which can access the database from your network in the cluster.

# mysql -u root -p

MariaDB [(none)]> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'password' WITH GRANT OPTION;
MariaDB [(none)]> FLUSH PRIVILEGES;
MariaDB [(none)]> exit

and stop MariaDB service before starting cluster configuration

# service mysql stop

Step 4: Setup Cluster Configuration on DB1

Let's start setting up the MariaDB Galera cluster on the DB1 server. Edit the MariaDB server configuration file and add the following values under the [mariadb] section.

[root@db1 ~]# vim /etc/my.cnf.d/server.cnf
query_cache_size=0
binlog_format=ROW
default_storage_engine=innodb
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_address=gcomm://192.168.1.20,192.168.1.30
wsrep_cluster_name='cluster1'
wsrep_node_address='192.168.1.10'
wsrep_node_name='db1'
wsrep_sst_method=rsync
wsrep_sst_auth=root:password

Start cluster using following command.

[root@db1 ~]# /etc/init.d/mysql bootstrap
Bootstrapping the clusterStarting MySQL.... SUCCESS!

If you get any problem during startup, check the MariaDB error log file /var/lib/mysql/<hostname>.err

Step 5: Add DB2 in MariaDB Cluster

After successfully starting the cluster on DB1, start the configuration on DB2. Edit the MariaDB server configuration file and add the following values under the [mariadb] section. All the settings are similar to DB1 except wsrep_node_address, wsrep_cluster_address and wsrep_node_name.

[root@db2 ~]# vim /etc/my.cnf.d/server.cnf

query_cache_size=0
binlog_format=ROW
default_storage_engine=innodb
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_address=gcomm://192.168.1.10,192.168.1.30
wsrep_cluster_name='cluster1'
wsrep_node_address='192.168.1.20'
wsrep_node_name='db2'
wsrep_sst_method=rsync
wsrep_sst_auth=root:password

Start cluster using following command.

[root@db2 ~]# /etc/init.d/mysql start
Starting MySQL..... SUCCESS!

Step 6: Add DB3 in MariaDB Cluster

This server is optional. If you want only two servers in the cluster, you can skip this step, but then you need to remove the third server's IP from the DB1/DB2 configuration files. To add this server, make the same kind of changes as for DB2.

[root@db3 ~]# vim /etc/my.cnf.d/server.cnf
query_cache_size=0
binlog_format=ROW
default_storage_engine=innodb
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_address=gcomm://192.168.1.10,192.168.1.20
wsrep_cluster_name='cluster1'
wsrep_node_address='192.168.1.30'
wsrep_node_name='db3'
wsrep_sst_method=rsync
wsrep_sst_auth=root:password

Start cluster using following command.

[root@db3 ~]# /etc/init.d/mysql start
Starting MySQL..... SUCCESS!

Step 7: Test MariaDB Galera Cluster Setup

At this stage your cluster setup is complete and running properly. You can now test the cluster by creating a database and tables on any server in the cluster; they will be replicated immediately to all other servers in the cluster.

[Animated GIF: databases replicating across all nodes of the cluster]

The above GIF shows that databases are replicating properly to all nodes of the cluster.
