Admittedly the topic does not make much sense: installing multiple replicating instances on the same system is not something you would do in production. My own agenda behind the test was to try out Galera Cluster, see how the configuration goes, and find out whether I could actually get it up. In the end, practice is what makes you skilled, however talented you are.
A few downloads need to be done before starting. The first few steps are a stroll in the park, so I will not document them in detail; each tool comes with its own set of instructions, and for troubleshooting a web search will give more than enough answers.
- Download the latest version of MySQL Sandbox from https://launchpad.net/mysql-sandbox/+download or install it with CPAN. Installation instructions: http://search.cpan.org/~gmax/MySQL-Sandbox-3.0.44/lib/MySQL/Sandbox/Recipes.pm#Installing_MySQL::Sandbox
- Download Percona XtraBackup from http://www.percona.com/software/percona-xtrabackup (download links on the right) and install it. Installation instructions: http://www.percona.com/doc/percona-xtrabackup/2.1/#installation
- Download the MariaDB with Galera Cluster tarball from https://downloads.mariadb.org/mariadb-galera/10.0.11/.
Extracting Tarball and Preparing for Install
With the prerequisites done, the fun parts can commence.
Create the required directories for MySQL Sandbox under your own user (not root):
mkdir -p ~/opt/mysql/10.0.11
Extract the tarball; this creates the directory mariadb-galera-10.0.11-linux-x86_64 in the same folder as the tarball.
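For reference, the extraction is a single command (the tarball name below assumes the 10.0.11 Linux x86_64 build; adjust it to the file you actually downloaded):

```shell
# Unpack the MariaDB Galera tarball in the current directory;
# this creates mariadb-galera-10.0.11-linux-x86_64/ next to the archive.
tar xzf mariadb-galera-10.0.11-linux-x86_64.tar.gz
```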
Copy all files from mariadb-galera-10.0.11-linux-x86_64 to the newly created ~/opt/mysql/10.0.11 directory:
cp -r mariadb-galera-10.0.11-linux-x86_64/* ~/opt/mysql/10.0.11
MySQL Sandbox uses ~/opt/mysql as its install base directory when only a version number is given as the source name for the installation, which is what we are going to do.
MySQL Sandbox comes with several sandbox generators, from single-node to replication sandboxes. Galera Cluster requires a minimum of three nodes; to generate them we use make_multiple_sandbox like this:
make_multiple_sandbox --how_many_nodes=3 10.0.11
By default MSB (MySQL Sandbox) creates the sandbox directories under the home directory of the user. You can change this path with --upper_directory=[path], giving a new path where the sandboxes should be created. If the command goes through without errors, MSB shows output like this:
installing node 1
installing node 2
installing node 3
group directory installed in /home/user/sandboxes/multi_msb_10_0_11
You should now have three nodeN directories under multi_msb_10_0_11, each with its node-specific data directory, configuration file, and a bunch of shell scripts that act as shortcuts for different MySQL commands. We need to stop all nodes for the upcoming configuration changes. This is easily done by changing to multi_msb_10_0_11 and running the script ./stop_all, which reports when each node has been stopped.
Configuring the Cluster
MSB creates a my.sandbox.cnf configuration file under each node directory. This is the same configuration you would normally have in /etc/my.cnf.d/ named server.cnf (the MariaDB configuration directory and file). Open the file under node1 with your favourite text editor and start modifying it as follows. If you are using a tabbed editor like gedit, I recommend opening the node2 and node3 configuration files as well, since the configurations will be almost identical.
Default MySQL configuration options under [mysqld]
Comment out the bind-address line with #.
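In my sandbox the line to neutralize looked like this (the exact address MSB writes may differ in your install):

```ini
#bind-address = 127.0.0.1
```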
Add the mandatory configuration options for Galera:
query_cache_size=0
binlog_format=ROW
default_storage_engine=innodb
innodb_autoinc_lock_mode=2
You can add more specific InnoDB configuration options, like innodb-buffer-pool-size, which you will obviously need to change at some point. For an easy start with InnoDB configuration options you can use Percona's great configuration tool for MySQL, Percona Tools for MySQL. In any case, do not change the server-id variable unless you want to play around with the value; MSB creates unique identifiers for each node automatically, and the same goes for the port.
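For illustration, a couple of optional InnoDB settings could look like this; the values are arbitrary examples for a small local sandbox, not recommendations:

```ini
innodb_buffer_pool_size=128M  # example value for a small local sandbox
innodb_log_file_size=48M      # example value, tune for your workload
```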
Galera Cluster configuration options
This was something I had to bang my head against the wall over for a good while, but eventually I found proper values for all the settings. The values with comments need the most attention; the comments give a bit of help.
# Galera Provider Configuration
wsrep_provider=/home/user/opt/mysql/10.0.11/lib/libgalera_smm.so # Reflect path to your environment
wsrep_provider_options="gcache.size=1G" # Small, just for sanity reasons

# Galera Cluster Configuration
wsrep_cluster_name="test_cluster" # Cluster name
wsrep_cluster_address="gcomm://127.0.0.1:18115,127.0.0.1:18116,127.0.0.1:18117"
# Cluster addresses, include all nodes here. Note the changed ports.
# If your install went like mine, ports 18112, 18113 and 18114 are reserved for the nodes.
# If the ports in this list were the same as the node ports there would be a port collision
# and the nodes would not start. Simply adding 3 to each gives an easy rule of thumb to avoid it.
# If you have 4 nodes, add 4 to each.
# Galera Synchronization Configuration (state snapshot transfer options)
wsrep_sst_method=xtrabackup-v2 # How the databases are synchronized when a node joins the
# cluster, using the installed XtraBackup v2
wsrep_sst_auth=root:msandbox # User for authenticating to the donor.
# MSB has the rsandbox user for replication, but for local sandboxing I would say root is safe to use.
# DO NOT USE ROOT IN PRODUCTION

# Galera Node Configuration
wsrep_node_address="127.0.0.1:18115" # This node's address; note the port is not the same as in
# the [mysqld] section but matches the one in the gcomm list
wsrep_node_name="node1" # This node's name
Copy the changes to all three configuration files, making sure that wsrep_node_address and wsrep_node_name are unique on each node but consistent with the gcomm list. There is no point naming the nodes anything other than nodeN in this case, since the whole directory structure follows that naming.
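For example, node2's Galera node section should end up differing from node1's only in these lines (port following the +3 pattern described above):

```ini
# node2 (my.sandbox.cnf under node2/)
wsrep_node_address="127.0.0.1:18116"
wsrep_node_name="node2"
```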
Starting and Testing Cluster
Starting Galera Cluster
The order of firing up the nodes is important: first you need to start the primary node with the additional --wsrep-new-cluster option. With MSB there is no plain mysqld command, so you can use the following command in the node1/ directory:
./my sqld --wsrep-new-cluster &
Notice the "send to background" & character. The ./my script never exits, so sending it to the background gets the shell back under your command. To know whether everything really went well, follow the error log with tail -f data/msandbox.err (run in the node1/ directory; change the path accordingly) and check that mysqld reports being ready for connections.
Starting the other nodes is easier: just enter the node's directory and run the shell script ./start. Following the error log is recommended on the first go.
Testing is the easy part. Open two shells, one for node1 and one for node2, and log in to both. On node1 run show status like 'wsrep_cluster_size'; the outcome should show 3. If you get this result, your cluster is running smoothly. For an even simpler test, create a new database on node1 and run show databases; on node2 to see it appear, which should happen instantly.
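The test sequence, as statements to paste into the two client sessions (galera_test is just an example database name):

```sql
-- On node1:
SHOW STATUS LIKE 'wsrep_cluster_size';  -- should report 3 with all nodes joined
CREATE DATABASE galera_test;

-- On node2:
SHOW DATABASES;                         -- galera_test should be listed immediately
```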