mita2 database life

Notes, mostly about MySQL.

Creating a test environment with MySQL Shell

Day after day, I build DB servers for testing and throw them away, build them and throw them away again... Naturally, I have started wanting to cut down the effort of setting up these test environments.

I have tried various tools such as dbdeployer, docker-compose, and chef, but in the end I usually spin up a blank virtual machine (VM) and build everything from scratch. That is simply what I am most used to...

This time, I tried the sandbox feature of MySQL Shell.

Installing MySQL Shell

$ sudo yum install -y https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
$ sudo yum install -y mysql-shell
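
After installing, a quick sanity check is to print the version and launch the shell; mysqlsh starts in JS mode by default (a minimal sketch):

$ mysqlsh --version
$ mysqlsh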

Sandbox-related functions in MySQL Shell

Functions for creating test environments (sandboxes) are provided. You can pretty much guess how to use them from their names. The descriptions all say "... on localhost", so you can tell this is a mechanism that spins up multiple mysqld processes locally (like dbdeployer does).

 MySQL  127.0.0.1:3307 ssl  JS > dba.help()

NAME
      dba - InnoDB cluster and replicaset management functions.
<snip>

FUNCTIONS
<snip>

      deleteSandboxInstance(port[, options])
            Deletes an existing MySQL Server instance on localhost.

      deploySandboxInstance(port[, options])
            Creates a new MySQL Server instance on localhost.

<snip>

      killSandboxInstance(port[, options])
            Kills a running MySQL Server instance on localhost.

<snip>


      startSandboxInstance(port[, options])
            Starts an existing MySQL Server instance on localhost.

      stopSandboxInstance(port[, options])
            Stops a running MySQL Server instance on localhost.


<snip>
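
Putting these functions together, the typical sandbox lifecycle looks roughly like the following sketch (JS mode; port 3310 is just an example, and the exact options each function accepts are documented in dba.help()):

// create and start a sandbox on port 3310
dba.deploySandboxInstance(3310)

// stop it gracefully (the root password is required, via prompt or options)
dba.stopSandboxInstance(3310)

// start it again later
dba.startSandboxInstance(3310)

// or kill the process if it is unresponsive
dba.killSandboxInstance(3310)

// finally, remove its files from disk
dba.deleteSandboxInstance(3310)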

Creating a sandbox instance

deploySandboxInstance creates and starts a sandbox instance; it does not just create it, it also starts it for you. Pass the port number as the argument.

 MySQL  localhost:33060+ ssl  JS > dba.deploySandboxInstance(3307)
A new MySQL sandbox instance will be created on this host in
/home/vagrant/mysql-sandboxes/3307

Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.

Please enter a MySQL root password for the new instance: ********

Deploying new MySQL instance...

Instance localhost:3307 successfully deployed and started.
Use shell.connect('root@localhost:3307') to connect to the instance.

Under the mysql-sandboxes directory, a subdirectory named after the port number is created, and the configuration and data files are placed inside it.

$ ps au | grep mysqld
vagrant   2739  5.1 52.8 1252440 263732 pts/0  Sl   06:07   0:03 /home/vagrant/mysql-sandboxes/3307/bin/mysqld --defaults-file=/home/vagrant/mysql-sandboxes/3307/my.cnf
vagrant   2799  0.0  0.1  12500   688 pts/0    S+   06:08   0:00 grep --color=auto mysqld

The generated my.cnf already contains the settings needed for replication.

$ cat /home/vagrant/mysql-sandboxes/3307/my.cnf
[mysqld]
port = 3307
loose_mysqlx_port = 33070
server_id = 3057363809
socket = mysqld.sock
loose_mysqlx_socket = mysqlx.sock
basedir = /usr
datadir = /home/vagrant/mysql-sandboxes/3307/sandboxdata
loose_log_syslog = OFF
report_port = 3307
report_host = 127.0.0.1
log_error = /home/vagrant/mysql-sandboxes/3307/sandboxdata/error.log
relay_log_info_repository = TABLE
master_info_repository = TABLE
gtid_mode = ON
log_slave_updates = ON
transaction_write_set_extraction = XXHASH64
binlog_format = ROW
log_bin
enforce_gtid_consistency = ON
pid_file = /home/vagrant/mysql-sandboxes/3307/3307.pid
secure_file_priv = /home/vagrant/mysql-sandboxes/3307/mysql-files
[client]
port = 3307
user = root
protocol = TCP
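
For reference, deploySandboxInstance also accepts an options dictionary as its second argument; according to the help text you can pass the root password and extra mysqld settings, among others. A rough sketch (option names as I understand them, so double-check with dba.help('deploySandboxInstance')):

// non-interactive deployment; port 3309 and the values below are just placeholders
dba.deploySandboxInstance(3309, {
    password: "root_password_here",
    mysqldOptions: ["innodb_buffer_pool_size=256M"]
})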

Setting up replication

Add another instance to use as the replica.

 MySQL  localhost:33060+ ssl  JS > dba.deploySandboxInstance(3308)
A new MySQL sandbox instance will be created on this host in
/home/vagrant/mysql-sandboxes/3308

createReplicaSet creates a ReplicaSet. Its argument is a name; here I somewhat arbitrarily called it myCluster.

 MySQL  localhost:33060+ ssl  JS > \connect root@localhost:3307
Creating a session to 'root@localhost:3307'
Please provide the password for 'root@localhost:3307': ********
Save password for 'root@localhost:3307'? [Y]es/[N]o/Ne[v]er (default No): Y
Fetching schema names for autocompletion... Press ^C to stop.
Closing old connection...
Your MySQL connection id is 14
Server version: 8.0.21 MySQL Community Server - GPL
No default schema selected; type \use <schema> to set one.

 MySQL  localhost:3307 ssl  JS > dba.createReplicaSet('myCluster')
A new replicaset with instance '127.0.0.1:3307' will be created.

* Checking MySQL instance at 127.0.0.1:3307

This instance reports its own address as 127.0.0.1:3307
127.0.0.1:3307: Instance configuration is suitable.

* Updating metadata...

ReplicaSet object successfully created for 127.0.0.1:3307.
Use rs.addInstance() to add more asynchronously replicated instances to this replicaset and rs.status() to check its status.

Looking at the topology, you can see it is still a one-instance setup with only 127.0.0.1:3307.

 MySQL  localhost:3307 ssl  JS > rs = dba.getReplicaSet()
You are connected to a member of replicaset 'myCluster'.
<ReplicaSet:myCluster>

 MySQL  localhost:3307 ssl  JS > rs.status()
{
    "replicaSet": {
        "name": "myCluster",
        "primary": "127.0.0.1:3307",
        "status": "AVAILABLE",
        "statusText": "All instances available.",
        "topology": {
            "127.0.0.1:3307": {
                "address": "127.0.0.1:3307",
                "instanceRole": "PRIMARY",
                "mode": "R/W",
                "status": "ONLINE"
            }
        },
        "type": "ASYNC"
    }
}

addInstance adds a replica to the ReplicaSet.

 MySQL  127.0.0.1:3307 ssl  JS > rs.addInstance('root@127.0.0.1:3308');
Adding instance to the replicaset...

* Performing validation checks

This instance reports its own address as 127.0.0.1:3308
127.0.0.1:3308: Instance configuration is suitable.

* Checking async replication topology...

* Checking transaction state of the instance...

NOTE: The target instance '127.0.0.1:3308' has not been pre-provisioned (GTID set is empty). The Shell is unable to decide whether replication can completely recover its state.
The safest and most convenient way to provision a new instance is through automatic clone provisioning, which will completely overwrite the state of '127.0.0.1:3308' with a physical snapshot from an existing replicaset member. To use this method by default, set the 'recoveryMethod' option to 'clone'.

At this point you have to choose how to provision the replica: Clone or Incremental recovery. Here I chose Clone, which copies the entire source (master). (A non-interactive way to make this choice is sketched after the log.)

WARNING: It should be safe to rely on replication to incrementally recover the state of the new instance if you are sure all updates ever executed in the replicaset were done with GTIDs enabled, there are no purged transactions and the new instance contains the same GTID set as the replicaset or a subset of it. To use this method by default, set the 'recoveryMethod' option to 'incremental'.


Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone): C
* Updating topology
Waiting for clone process of the new member to complete. Press ^C to abort the operation.
* Waiting for clone to finish...
NOTE: 127.0.0.1:3308 is being cloned from 127.0.0.1:3307
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed

NOTE: 127.0.0.1:3308 is shutting down...

* Waiting for server restart... ready
* 127.0.0.1:3308 has restarted, waiting for clone to finish...
* Clone process has finished: 59.62 MB transferred in about 1 second (~59.62 MB/s)

** Configuring 127.0.0.1:3308 to replicate from 127.0.0.1:3307
** Waiting for new instance to synchronize with PRIMARY...

The instance '127.0.0.1:3308' was added to the replicaset and is replicating from 127.0.0.1:3307.
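
As the NOTE in the log suggests, the interactive prompt can be skipped by passing the recoveryMethod option to addInstance up front (a sketch):

// choose clone provisioning without being prompted
rs.addInstance('root@127.0.0.1:3308', {recoveryMethod: 'clone'})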

You can confirm that 127.0.0.1:3308 has been added as a replica (SECONDARY).

 MySQL  127.0.0.1:3307 ssl  JS >  rs.status()
{
    "replicaSet": {
        "name": "myCluster",
        "primary": "127.0.0.1:3307",
        "status": "AVAILABLE",
        "statusText": "All instances available.",
        "topology": {
            "127.0.0.1:3307": {
                "address": "127.0.0.1:3307",
                "instanceRole": "PRIMARY",
                "mode": "R/W",
                "status": "ONLINE"
            },
            "127.0.0.1:3308": {
                "address": "127.0.0.1:3308",
                "instanceRole": "SECONDARY",
                "mode": "R/O",
                "replication": {
                    "applierStatus": "APPLIED_ALL",
                    "applierThreadState": "Slave has read all relay log; waiting for more updates",
                    "receiverStatus": "ON",
                    "receiverThreadState": "Waiting for master to send event",
                    "replicationLag": null
                },
                "status": "ONLINE"
            }
        },
        "type": "ASYNC"
    }
}
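
As a quick smoke test (a sketch; smoke_test is just a placeholder schema name, and the root password set earlier is assumed), you can write on the primary and confirm the change shows up on the replica:

// write on the primary (3307)...
shell.connect('root@127.0.0.1:3307')
session.runSql("CREATE DATABASE IF NOT EXISTS smoke_test")

// ...and read on the replica (3308); the schema should appear almost immediately
shell.connect('root@127.0.0.1:3308')
session.runSql("SHOW DATABASES LIKE 'smoke_test'")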

Just to be sure, let's look at SHOW SLAVE STATUS directly. It confirms that the MySQL instance on port 3307 is being used as the source (master).

$ mysql -uroot -p -S mysql-sandboxes/3308/sandboxdata/mysqld.sock
mysql> SHOW SLAVE STATUS \G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 127.0.0.1
                  Master_User: mysql_innodb_rs_637839038
                  Master_Port: 3307
                Connect_Retry: 60
              Master_Log_File: localhost-bin.000002
          Read_Master_Log_Pos: 24939
               Relay_Log_File: localhost-relay-bin.000002
                Relay_Log_Pos: 2093
        Relay_Master_Log_File: localhost-bin.000002
             Slave_IO_Running: Yes
<snip>
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 3057363809
                  Master_UUID: d89032f1-e9bd-11ea-81a5-5254004d77d3
             Master_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set: d89032f1-e9bd-11ea-81a5-5254004d77d3:37-38
            Executed_Gtid_Set: d89032f1-e9bd-11ea-81a5-5254004d77d3:1-38
                Auto_Position: 1
         Replicate_Rewrite_DB:
                 Channel_Name:
           Master_TLS_Version:
       Master_public_key_path:
        Get_master_public_key: 1
            Network_Namespace:
1 row in set (0.01 sec)

Thoughts

It's nice that this comes bundled with MySQL Shell. If I start using MySQL Shell day to day, this feature may well get some use.