On every node except the manager server, add the following to /etc/mysql/my.cnf:
[mysqld]
user = mysql
bind-address = 0.0.0.0
mysqlx-bind-address = 127.0.0.1
myisam-recover-options = BACKUP
# Set a different server-id on each node: 1 on node1, 2 on node2, ...
server-id=1
log_error=/var/log/mysql/error.log
# Replication runs through the binlog, so keep this directory location in mind
log-bin=/var/log/mysql/binlog
sync_binlog=1
binlog_cache_size=2M
max_binlog_size=512M
# deprecated in MySQL 8.0; binlog_expire_logs_seconds below covers the same 7-day window
expire_logs_days=7
log-bin-trust-function-creators=1
# This node's hostname; shows up later in SHOW SLAVE HOSTS once replication is running
report-host=db-node-01
relay-log=/var/log/mysql/relay_log
relay-log-index=/var/log/mysql/relay_log.index
relay_log_purge=off
binlog_expire_logs_seconds=604800
log_replica_updates=on
$ systemctl restart mysql.service
If the configuration is valid, the restart completes without any output.
Occasionally the following error appears (in /var/log/mysql/error.log); it occurs when the permissions on the binlog.index file are set incorrectly.
# excerpt from /var/log/mysql/error.log
2023-11-28T01:43:40.127485Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.35-0ubuntu0.22.04.1) starting as process 3401
mysqld: File '/var/log/mysql/binlog.index' not found (OS errno 13 - Permission denied)
Resolution
# If the file does not exist, create it first, then fix the ownership
$ touch /var/log/mysql/binlog.index
$ chown mysql:mysql /var/log/mysql/binlog.index
$ systemctl restart mysql.service
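Once mysqld is back up, it is worth confirming that the settings above actually took effect; a quick check looks like this (given the config above, log_bin should report ON and log_bin_basename should point at /var/log/mysql/binlog):

```sql
mysql> show variables like 'log_bin%';
mysql> show variables like 'server_id';
```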
Master & Slave configuration
Create a replication account (on the master and slave nodes)
mysql> create user 'repl_user'@'%' identified by 'qwe1212';
mysql> grant replication slave on *.* to 'repl_user'@'%';
mysql> flush privileges;
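The step that actually points each slave at the master is not shown above; a sketch using the account just created follows — the binlog file name and position here are placeholders that you would read from SHOW MASTER STATUS on the master:

```sql
mysql> change master to
    ->   master_host='db-node-01',
    ->   master_user='repl_user',
    ->   master_password='qwe1212',
    ->   master_log_file='binlog.000001',  -- placeholder: from SHOW MASTER STATUS
    ->   master_log_pos=157;               -- placeholder: from SHOW MASTER STATUS
mysql> start slave;
mysql> show slave status\G
```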
If Master_Log_File and Read_Master_Log_Pos in SHOW SLAVE STATUS are set to the file you specified, replication is set up correctly.
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
If both of these fields show Yes, replication is healthy. If either shows No, troubleshoot using the "Last_IO_Error:" field and error.log.
In my case, the following error occurred:
Last_IO_Error: Fatal error: The replica I/O thread stops because source and replica have equal MySQL server ids; these ids must be different for replication to work (or the --replicate-same-server-id option must be used on replica but this does not always make sense; please check the manual before using it).
It turned out the same server-id was set on both nodes; after changing one of them, replication worked correctly.
mysql> show variables like 'server_id';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 1     |
+---------------+-------+
1 row in set (0.00 sec)
mysql> SET GLOBAL server_id = 2;
mysql> flush privileges;
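Note that SET GLOBAL does not survive a restart, so the server-id in my.cnf should be corrected as well; on MySQL 8.0 the change can also be persisted directly (strictly speaking, FLUSH PRIVILEGES is not required for system variable changes):

```sql
mysql> SET PERSIST server_id = 2;
```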
MHA is distributed via git and can be fetched with git clone:
manager : https://github.com/yoshinorim/mha4mysql-manager
node : https://github.com/yoshinorim/mha4mysql-node
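Both packages are standard Perl modules, so installation follows the usual Makefile.PL flow; a sketch, assuming the Perl build prerequisites (e.g. DBD::mysql for the node package) are already in place:

```shell
$ git clone https://github.com/yoshinorim/mha4mysql-node
$ cd mha4mysql-node
$ perl Makefile.PL
$ make && make install
```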
$ vi master_ip_online_change
### ori 149 line
## Drop application user so that nobody can connect. Disabling per-session binlog beforehand
$orig_master_handler->disable_log_bin_local();
print current_time_us() . " Drpping app user on the orig master..\n";
FIXME_xxx_drop_app_user($orig_master_handler);
### modi
## Drop application user so that nobody can connect. Disabling per-session binlog beforehand
## $orig_master_handler->disable_log_bin_local();
## print current_time_us() . " Drpping app user on the orig master..\n";
## FIXME_xxx_drop_app_user($orig_master_handler);
### ori 244 line
## Creating an app user on the new master
print current_time_us() . " Creating app user on the new master..\n";
FIXME_xxx_create_app_user($new_master_handler);
$new_master_handler->enable_log_bin_local();
$new_master_handler->disconnect();
### modi: add the mha_change_vip.sh call below this block
## Creating an app user on the new master
## print current_time_us() . " Creating app user on the new master..\n";
## FIXME_xxx_create_app_user($new_master_handler);
## $new_master_handler->enable_log_bin_local();
## $new_master_handler->disconnect();
## Update master ip on the catalog database, etc
system("/bin/bash /masterha/scripts/mha_change_vip.sh $new_master_ip");
$exit_code = 0;
};
(Source: https://hoing.io/archives/92)
MHA source changes
- /usr/local/share/perl5/MHA/Server.pm
# ori
339  rules, temporarily executing CHANGE MASTER to dummy host, and
345  "%s: SHOW SLAVE STATUS returned empty result. To check replication filtering rules, temporarily executing CHANGE MASTER to a dummy host.",
348  $dbhelper->execute("CHANGE MASTER TO MASTER_HOST='dummy_host'");
# modi
339  # rules, temporarily executing CHANGE REPLICATION SOURCE to dummy host, and
345  "%s: SHOW SLAVE STATUS returned empty result. To check replication filtering rules, temporarily executing CHANGE REPLICATION SOURCE to a dummy host.",
348  $dbhelper->execute("CHANGE REPLICATION SOURCE TO SOURCE_HOST='dummy_host'");
- /usr/local/share/perl5/MHA/ServerManager.pm
# ori
1294  " All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='%s', MASTER_PORT=%d, MASTER_AUTO_POSITION=1, MASTER_USER='%s', MASTER_PASSWORD='xxx';",
1307  " All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='%s', MASTER_PORT=%d, MASTER_LOG_FILE='%s', MASTER_LOG_POS=%d, MASTER_USER='%s', MASTER_PASSWORD='xxx';",
1354  $log->info(" Executed CHANGE MASTER.");
1356  # After executing CHANGE MASTER, relay_log_purge is automatically disabled.
# modi
1294  " All other slaves should start replication from here. Statement should be: CHANGE REPLICATION SOURCE TO SOURCE_HOST='%s', SOURCE_PORT=%d, SOURCE_AUTO_POSITION=1, SOURCE_USER='%s', SOURCE_PASSWORD='xxx';",
1307  " All other slaves should start replication from here. Statement should be: CHANGE REPLICATION SOURCE TO SOURCE_HOST='%s', SOURCE_PORT=%d, SOURCE_LOG_FILE='%s', SOURCE_LOG_POS=%d, SOURCE_USER='%s', SOURCE_PASSWORD='xxx';",
1354  $log->info(" CHANGE REPLICATION SOURCE.");
1356  # After executing CHANGE REPLICATION SOURCE, relay_log_purge is automatically disabled.
- /usr/local/share/perl5/MHA/DBHelper.pm
# ori
71  "CHANGE MASTER TO MASTER_HOST='%s', MASTER_PORT=%d, MASTER_USER='%s', MASTER_PASSWORD='%s', MASTER_LOG_FILE='%s', MASTER_LOG_POS=%d";
73  "CHANGE MASTER TO MASTER_HOST='%s', MASTER_PORT=%d, MASTER_USER='%s', MASTER_LOG_FILE='%s', MASTER_LOG_POS=%d";
75  "CHANGE MASTER TO MASTER_HOST='%s', MASTER_PORT=%d, MASTER_USER='%s', MASTER_PASSWORD='%s', MASTER_AUTO_POSITION=1";
77  "CHANGE MASTER TO MASTER_HOST='%s', MASTER_PORT=%d, MASTER_USER='%s', MASTER_AUTO_POSITION=1";
87  use constant Stop_IO_Thread_SQL => "STOP REPLICA IO_THREAD";
88  use constant Start_IO_Thread_SQL => "START REPLICA IO_THREAD";
89  use constant Start_Slave_SQL => "START REPLICA";
90  use constant Stop_Slave_SQL => "STOP REPLICA";
91  use constant Start_SQL_Thread_SQL => "START REPLICA SQL_THREAD";
92  use constant Stop_SQL_Thread_SQL => "STOP REPLICA SQL_THREAD";
# modi
71  "CHANGE REPLICATION SOURCE TO SOURCE_HOST='%s', SOURCE_PORT=%d, SOURCE_USER='%s', SOURCE_PASSWORD='%s', SOURCE_LOG_FILE='%s', SOURCE_LOG_POS=%d";
73  "CHANGE REPLICATION SOURCE TO SOURCE_HOST='%s', SOURCE_PORT=%d, SOURCE_USER='%s', SOURCE_LOG_FILE='%s', SOURCE_LOG_POS=%d";
75  "CHANGE REPLICATION SOURCE TO SOURCE_HOST='%s', SOURCE_PORT=%d, SOURCE_USER='%s', SOURCE_PASSWORD='%s', SOURCE_AUTO_POSITION=1";
77  "CHANGE REPLICATION SOURCE TO SOURCE_HOST='%s', SOURCE_PORT=%d, SOURCE_USER='%s', SOURCE_AUTO_POSITION=1";
87  use constant Stop_IO_Thread_SQL => "STOP REPLICA IO_THREAD";
88  use constant Start_IO_Thread_SQL => "START REPLICA IO_THREAD";
89  use constant Start_Slave_SQL => "START REPLICA";
90  use constant Stop_Slave_SQL => "STOP REPLICA";
91  use constant Start_SQL_Thread_SQL => "START REPLICA SQL_THREAD";
92  use constant Stop_SQL_Thread_SQL => "STOP REPLICA SQL_THREAD";
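After patching the three files, a quick way to confirm nothing was missed is to search for remaining old-style statements (a sketch; paths as used above):

```shell
$ grep -n "CHANGE MASTER" /usr/local/share/perl5/MHA/Server.pm /usr/local/share/perl5/MHA/ServerManager.pm /usr/local/share/perl5/MHA/DBHelper.pm
```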
$ masterha_check_ssh --conf=/root/mha/conf/app1.cnf
Fri Dec 1 16:36:12 2023 - [info] Reading default configuration from /etc/masterha_default.cnf..
Fri Dec 1 16:36:12 2023 - [info] Reading application default configuration from /etc/app1.cnf..
Fri Dec 1 16:36:12 2023 - [info] Reading server configuration from /etc/app1.cnf..
Fri Dec 1 16:36:12 2023 - [info] Starting SSH connection tests..
Fri Dec 1 16:36:14 2023 - [debug]
Fri Dec 1 16:36:12 2023 - [debug] Connecting via SSH from root@db-node-01(192.168.1.38:22) to root@db-node-02(192.168.1.18:22)..
Fri Dec 1 16:36:13 2023 - [debug] ok.
Fri Dec 1 16:36:13 2023 - [debug] Connecting via SSH from root@db-node-01(192.168.1.38:22) to root@db-node-03(192.168.1.254:22)..
Fri Dec 1 16:36:14 2023 - [debug] ok.
Fri Dec 1 16:36:14 2023 - [debug]
Fri Dec 1 16:36:12 2023 - [debug] Connecting via SSH from root@db-node-02(192.168.1.18:22) to root@db-node-01(192.168.1.38:22)..
Fri Dec 1 16:36:14 2023 - [debug] ok.
Fri Dec 1 16:36:14 2023 - [debug] Connecting via SSH from root@db-node-02(192.168.1.18:22) to root@db-node-03(192.168.1.254:22)..
Fri Dec 1 16:36:14 2023 - [debug] ok.
Fri Dec 1 16:36:15 2023 - [debug]
Fri Dec 1 16:36:13 2023 - [debug] Connecting via SSH from root@db-node-03(192.168.1.254:22) to root@db-node-01(192.168.1.38:22)..
Fri Dec 1 16:36:14 2023 - [debug] ok.
Fri Dec 1 16:36:14 2023 - [debug] Connecting via SSH from root@db-node-03(192.168.1.254:22) to root@db-node-02(192.168.1.18:22)..
Fri Dec 1 16:36:15 2023 - [debug] ok.
Fri Dec 1 16:36:15 2023 - [info] All SSH connection tests passed successfully.
Use of uninitialized value in exit at /usr/local/bin/masterha_check_ssh line 44.
$ masterha_check_repl --conf=/root/mha/conf/app1.cnf
Fri Dec 1 16:37:18 2023 - [info] Checking replication health on db-node-02..
Fri Dec 1 16:37:18 2023 - [info] ok.
Fri Dec 1 16:37:18 2023 - [info] Checking replication health on db-node-03..
Fri Dec 1 16:37:18 2023 - [info] ok.
Fri Dec 1 16:37:18 2023 - [info] Checking master_ip_failover_script status:
Fri Dec 1 16:37:18 2023 - [info] /root/mha/scripts/master_ip_failover --command=status --ssh_user=root --orig_master_host=db-node-01 --orig_master_ip=192.168.1.38 --orig_master_port=3306
Fri Dec 1 16:37:19 2023 - [info] OK.
Fri Dec 1 16:37:19 2023 - [warning] shutdown_script is not defined.
Fri Dec 1 16:37:19 2023 - [debug] Disconnected from db-node-01(192.168.1.38:3306)
Fri Dec 1 16:37:19 2023 - [debug] Disconnected from db-node-02(192.168.1.18:3306)
Fri Dec 1 16:37:19 2023 - [debug] Disconnected from db-node-03(192.168.1.254:3306)
Fri Dec 1 16:37:19 2023 - [info] Got exit code 0 (Not master dead).
Thu Nov 30 14:02:21 2023 - [info] Reading default configuration from /etc/masterha_default.cnf..
Thu Nov 30 14:02:21 2023 - [info] Reading application default configuration from /etc/app1.cnf..
Thu Nov 30 14:02:21 2023 - [info] Reading server configuration from /etc/app1.cnf..
Thu Nov 30 14:02:21 2023 - [info] MHA::MasterMonitor version 0.58.
Thu Nov 30 14:02:21 2023 - [debug] Connecting to servers..
Thu Nov 30 14:02:21 2023 - [debug] Got MySQL error when connecting db-node-01(192.168.1.38:3306) :2061:Authentication plugin 'caching_sha2_password' reported error: Authentication requires secure connection.
Thu Nov 30 14:02:21 2023 - [debug] Got MySQL error when connecting db-node-02(192.168.1.18:3306) :2061:Authentication plugin 'caching_sha2_password' reported error: Authentication requires secure connection.
Thu Nov 30 14:02:21 2023 - [debug] Got MySQL error when connecting db-node-03(192.168.1.254:3306) :2061:Authentication plugin 'caching_sha2_password' reported error: Authentication requires secure connection.
Thu Nov 30 14:02:22 2023 - [error][/usr/local/share/perl/5.34.0/MHA/ServerManager.pm, ln188] There is no alive server. We can't do failover
Thu Nov 30 14:02:22 2023 - [error][/usr/local/share/perl/5.34.0/MHA/MasterMonitor.pm, ln427] Error happened on checking configurations. at /usr/local/share/perl/5.34.0/MHA/MasterMonitor.pm line 329.
Thu Nov 30 14:02:22 2023 - [error][/usr/local/share/perl/5.34.0/MHA/MasterMonitor.pm, ln525] Error happened on monitoring servers.
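The caching_sha2_password error above means the client is trying to authenticate with MySQL 8's default plugin over an insecure connection. One common workaround is to switch the monitoring account to mysql_native_password — a sketch, where the 'mha' account name is taken from the logs and the password is a placeholder:

```sql
mysql> alter user 'mha'@'%' identified with mysql_native_password by '<password>';
mysql> flush privileges;
```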
Thu Nov 30 14:03:31 2023 - [info] Reading default configuration from /etc/masterha_default.cnf..
Thu Nov 30 14:03:31 2023 - [info] Reading application default configuration from /etc/app1.cnf..
Thu Nov 30 14:03:31 2023 - [info] Reading server configuration from /etc/app1.cnf..
Thu Nov 30 14:03:31 2023 - [info] MHA::MasterMonitor version 0.58.
Thu Nov 30 14:03:31 2023 - [debug] Connecting to servers..
Thu Nov 30 14:03:32 2023 - [debug] Connected to: db-node-01(192.168.1.38:3306), user=mha
Thu Nov 30 14:03:32 2023 - [debug] Number of slave worker threads on host db-node-01(192.168.1.38:3306): 4
Thu Nov 30 14:03:32 2023 - [debug] Connected to: db-node-02(192.168.1.18:3306), user=mha
Thu Nov 30 14:03:32 2023 - [debug] Number of slave worker threads on host db-node-02(192.168.1.18:3306): 4
Thu Nov 30 14:03:32 2023 - [error][/usr/local/share/perl/5.34.0/MHA/MasterMonitor.pm, ln427] Error happened on checking configurations. Redundant argument in sprintf at /usr/local/share/perl/5.34.0/MHA/NodeUtil.pm line 195.
Thu Nov 30 14:03:32 2023 - [error][/usr/local/share/perl/5.34.0/MHA/MasterMonitor.pm, ln525] Error happened on monitoring servers.
Thu Nov 30 14:10:10 2023 - [debug] ok.
Thu Nov 30 14:10:11 2023 - [info] All SSH connection tests passed successfully.
Thu Nov 30 14:10:11 2023 - [info] Checking MHA Node version..
Thu Nov 30 14:10:12 2023 - [info] Version check ok.
Thu Nov 30 14:10:12 2023 - [info] Checking SSH publickey authentication settings on the current master..
Thu Nov 30 14:10:12 2023 - [debug] SSH connection test to db-node-01, option -o StrictHostKeyChecking=no -o PasswordAuthentication=no -o BatchMode=yes -o ConnectTimeout=5, timeout 5
Thu Nov 30 14:10:12 2023 - [info] HealthCheck: SSH to db-node-01 is reachable.
Thu Nov 30 14:10:13 2023 - [info] Master MHA Node version is 0.58.
Thu Nov 30 14:10:13 2023 - [info] Checking recovery script configurations on db-node-01(192.168.1.38:3306)..
Thu Nov 30 14:10:13 2023 - [info] Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/usr/local/mysql
/logs --output_file=/root/mha/app1/save_binary_logs_test --manager_version=0.58 --start_file=binlog.000006 --debug
Thu Nov 30 14:10:13 2023 - [info] Connecting to root@192.168.1.38(db-node-01:22)..
Failed to save binary log: Binlog not found from /usr/local/mysql/logs! If you got this error at MHA Manager, please set "master_binlog_dir=/path/to/binlog_directory_of_the_master" correctly in the MHA Manager's configuration file and try again.
at /usr/local/bin/save_binary_logs line 123.
eval {...} called at /usr/local/bin/save_binary_logs line 70
main::main() called at /usr/local/bin/save_binary_logs line 66
Thu Nov 30 14:10:13 2023 - [error][/usr/local/share/perl/5.34.0/MHA/MasterMonitor.pm, ln161] Binlog setting check failed!
Thu Nov 30 14:10:13 2023 - [error][/usr/local/share/perl/5.34.0/MHA/MasterMonitor.pm, ln408] Master configuration failed.
Thu Nov 30 14:10:13 2023 - [error][/usr/local/share/perl/5.34.0/MHA/MasterMonitor.pm, ln427] Error happened on checking configurations. at /usr/local/bin/masterha_check_repl line 48.
Thu Nov 30 14:10:13 2023 - [error][/usr/local/share/perl/5.34.0/MHA/MasterMonitor.pm, ln525] Error happened on monitoring servers.
Thu Nov 30 14:10:13 2023 - [info] Got exit code 1 (Not master dead).
Edit app1.cnf so that master_binlog_dir matches the location of the binlogs on the master node:
master_binlog_dir=/var/log/mysql
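For context, this line lives in the [server default] section of app1.cnf. The sketch below follows the standard MHA configuration layout; every value other than master_binlog_dir is an assumption based on the paths used in this article:

```ini
[server default]
manager_workdir=/root/mha/app1
manager_log=/root/mha/app1/app1.log
# Must match the log-bin directory set in the master's my.cnf
master_binlog_dir=/var/log/mysql

[server1]
hostname=db-node-01
```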
Running masterha_manager
To manage the other nodes, run the manager program in the background on the manager server.
The last_failover_minute option controls the re-failover guard: by default, after a node has been newly promoted to master, MHA will not perform another failover until 8 hours have passed since the previous one. This option lets you specify that interval in minutes.
To keep the commands short, I registered the following aliases in .bashrc:
alias sshcheck='masterha_check_ssh --conf=/etc/app1.cnf'
alias replcheck='masterha_check_repl --conf=/etc/app1.cnf'
alias start='nohup masterha_manager --conf=/etc/app1.cnf &'
alias status='masterha_check_status --conf=/etc/app1.cnf'
alias stop='masterha_stop --conf=/etc/app1.cnf'
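The start alias can also be expanded into the full command. This is a sketch, assuming the conf path used above; --last_failover_minute=60 is an illustrative value, and the redirection target is arbitrary:

```shell
# '&' detaches the process; nohup alone does not put it in the background.
# --last_failover_minute overrides the default 480-minute (8-hour) re-failover guard.
nohup masterha_manager --conf=/etc/app1.cnf --last_failover_minute=60 \
  < /dev/null > /root/mha/app1/manager_start.log 2>&1 &

# Confirm the manager is monitoring
masterha_check_status --conf=/etc/app1.cnf
```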
Failover test
To verify that failover works correctly, trigger a failure on the master node:
# db-node-01
$ systemctl stop mysql.service
# mha-manager
$ tail -f /root/mha/app1/app1.log
Fri Dec 1 17:11:06 2023 - [info] All new slave servers recovered successfully.
Fri Dec 1 17:11:06 2023 - [info]
Fri Dec 1 17:11:06 2023 - [info] * Phase 5: New master cleanup phase..
Fri Dec 1 17:11:06 2023 - [info]
Fri Dec 1 17:11:06 2023 - [info] Resetting slave info on the new master..
Fri Dec 1 17:11:06 2023 - [debug] Clearing slave info..
Fri Dec 1 17:11:06 2023 - [debug] Stopping slave IO/SQL thread on db-node-02(192.168.1.18:3306)..
Fri Dec 1 17:11:06 2023 - [debug] done.
Fri Dec 1 17:11:06 2023 - [debug] SHOW SLAVE STATUS shows new master does not replicate from anywhere. OK.
Fri Dec 1 17:11:06 2023 - [info] db-node-02: Resetting slave info succeeded.
Fri Dec 1 17:11:06 2023 - [info] Master failover to db-node-02(192.168.1.18:3306) completed successfully.
Fri Dec 1 17:11:06 2023 - [debug] Disconnected from db-node-02(192.168.1.18:3306)
Fri Dec 1 17:11:06 2023 - [debug] Disconnected from db-node-03(192.168.1.254:3306)
Fri Dec 1 17:11:06 2023 - [info]
----- Failover Report -----
app1: MySQL Master failover db-node-01(192.168.1.38:3306) to db-node-02(192.168.1.18:3306) succeeded
Master db-node-01(192.168.1.38:3306) is down!
Check MHA Manager logs at mha-test:/root/mha/app1/app1.log for details.
Started automated(non-interactive) failover.
Invalidated master IP address on db-node-01(192.168.1.38:3306)
The latest slave db-node-02(192.168.1.18:3306) has all relay logs for recovery.
Selected db-node-02(192.168.1.18:3306) as a new master.
db-node-02(192.168.1.18:3306): OK: Applying all logs succeeded.
db-node-02(192.168.1.18:3306): OK: Activated master IP address.
db-node-03(192.168.1.254:3306): This host has the latest relay log events.
Generating relay diff files from the latest slave succeeded.
db-node-03(192.168.1.254:3306): OK: Applying all logs succeeded. Slave started, replicating from db-node-02(192.168.1.18:3306)
db-node-02(192.168.1.18:3306): Resetting slave info succeeded.
Master failover to db-node-02(192.168.1.18:3306) completed successfully.
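To double-check the result from the database side, you can query a surviving replica directly; a quick sketch, with credentials assumed:

```shell
# On db-node-03: Master_Host should now show db-node-02 (192.168.1.18)
mysql -u root -p -e "SHOW SLAVE STATUS\G" | grep -E 'Master_Host|Slave_IO_Running|Slave_SQL_Running'
```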
Checking the log, you can see that within a few seconds of the stop, MHA checked which DBs were available and promoted db-node-02 to master.
If it behaves as above, failover is working correctly.
Failover recovery
To recover db-node-01 from the failure, start MySQL again and configure the node to replicate from the new master, db-node-02.
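A sketch of those recovery steps, run on db-node-01. The replication user, password, and binlog coordinates below are placeholders; read the real coordinates from SHOW MASTER STATUS on db-node-02 before running CHANGE MASTER TO:

```shell
# db-node-01: bring MySQL back up
systemctl start mysql.service

# Reattach db-node-01 as a replica of the new master, db-node-02.
# 'repl', the password, and the log file/position are illustrative placeholders.
mysql -u root -p <<'SQL'
CHANGE MASTER TO
  MASTER_HOST='192.168.1.18',
  MASTER_USER='repl',
  MASTER_PASSWORD='<password>',
  MASTER_LOG_FILE='binlog.000001',
  MASTER_LOG_POS=4;
START SLAVE;
SQL
```

Note that masterha_manager exits after completing a failover, so once the recovered node is replicating again, the manager needs to be started again on the manager server.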