Read-write splitting, simply put, separates database reads from writes: when the application issues a read, it is routed to the read database, and when it issues a write, it goes to the write database, which relieves the load on a single database server.
MyCat is one tool that implements read-write splitting (there are others, of course); it supports not only MySQL but also Oracle and SQL Server.
Read-write splitting logic diagram
Read-write splitting is built on top of MySQL master-slave replication, so before implementing it, let's briefly review how replication works. MySQL replication is based on the binary log (binlog), which records DDL and DML statements. The mechanism is as follows: when the master executes DDL or DML, the data changes are written to its binlog files; an IO thread on the slave reads the master's binlog and writes it into the slave's own relay log; the slave's SQL thread then reads the relay log and replays the statements on the slave.
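To watch these two threads, run show slave status on the slave; the two fields below are the ones that matter (an annotated excerpt, not specific to this setup):

mysql> show slave status\G
    ...
    Slave_IO_Running: Yes    -- the IO thread that pulls the master's binlog into the relay log
    Slave_SQL_Running: Yes   -- the SQL thread that replays the relay log on the slave
    ...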
Environment for one master and one slave, as shown in the diagram:
Read-write splitting and load balancing of the backend databases are controlled by MyCat through the balance attribute of the dataHost tag in schema.xml.
Meaning of the balance values:
0: read-write splitting is disabled; all reads are sent to the currently available writeHost.
1: all readHosts and the standby writeHost take part in load balancing of select statements (for a dual-master setup).
2: all reads are distributed randomly across the writeHosts and readHosts.
3: all read requests are distributed randomly to the readHosts attached to the writeHost; the writeHost itself takes no read load.
Configuration: edit schema.xml to add the logical schema and a dataHost definition for read-write splitting.
schema.xml
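A minimal sketch of what schema.xml might contain here; the dataNode/dataHost names dn6/dbhost6, the balance="3" choice, and the slave address 192.168.128.101:3306 are assumptions, while the master address, the backend database db_tbshare, and the credentials match the session output below.

<schema name="DB_TBSHARE_RW" checkSQLschema="true" sqlMaxLimit="100" dataNode="dn6">
<!-- no logical tables needed; the whole schema maps to one dataNode -->
</schema>
<dataNode name="dn6" dataHost="dbhost6" database="db_tbshare" />
<!-- balance="3" assumed here: all reads go to the readHost, the writeHost only takes writes -->
<dataHost name="dbhost6" maxCon="1000" minCon="10" balance="3"
writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
<heartbeat>select user()</heartbeat>
<writeHost host="master" url="192.168.128.100:3306" user="root" password="123456">
<readHost host="slave" url="192.168.128.101:3306" user="root" password="123456" />
</writeHost>
</dataHost>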
Then edit server.xml to allow the user to access the read-write-splitting logical schema DB_TBSHARE_RW.
server.xml
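A sketch of the corresponding server.xml entry; the essential part is adding DB_TBSHARE_RW to the schemas the MyCat user may access (user name, password and schema list match the login and the show databases output below).

<user name="root" defaultAccount="true">
<property name="password">123456</property>
<property name="schemas">DB_TBSHARE,DB_TBSHARE_RW,SHOPING</property>
</user>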
OK, with the configuration in place, restart MyCat and test.
[root@db-master /usr/local/mycat/conf]# ../bin/mycat start
[root@db-master /usr/local/mycat/conf]# cat ../logs/wrapper.log
STATUS | wrapper | 2022/04/24 20:57:01 | --> Wrapper Started as Daemon
STATUS | wrapper | 2022/04/24 20:57:02 | Launching a JVM...
INFO | jvm 1 | 2022/04/24 20:57:03 | Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
INFO | jvm 1 | 2022/04/24 20:57:03 | Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
INFO | jvm 1 | 2022/04/24 20:57:03 |
INFO | jvm 1 | 2022/04/24 20:57:10 | MyCAT Server startup successfully. see logs in logs/mycat.log
[root@db-master /usr/local/mycat/conf]# mysql -uroot -h192.168.128.100 -p123456 -P8066
...
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show databases;
+---------------+
| DATABASE |
+---------------+
| DB_TBSHARE |
| DB_TBSHARE_RW |
| SHOPING |
+---------------+
3 rows in set (0.02 sec)
mysql> use DB_TBSHARE_RW;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> show tables;
+----------------------+
| Tables_in_db_tbshare |
+----------------------+
| tb_user |
+----------------------+
1 row in set (0.02 sec)
mysql>
Insert 4 rows and then select them. At this point there is no way to tell whether the query hit the master or the slave:
mysql> insert into tb_user values(1,'凤雏','男');
Query OK, 1 row affected (0.02 sec)
mysql> insert into tb_user values(2,'落凤坡','男');
Query OK, 1 row affected (0.02 sec)
mysql> insert into tb_user values(3,'小乔','女');
Query OK, 1 row affected (0.02 sec)
mysql> insert into tb_user values(4,'风二娘','女');
Query OK, 1 row affected (0.10 sec)
mysql> select * from tb_user;
+----+-----------+------+
| id | username | sex |
+----+-----------+------+
| 1 | 凤雏 | 男 |
| 2 | 落凤坡 | 男 |
| 3 | 小乔 | 女 |
| 4 | 风二娘 | 女 |
+----+-----------+------+
4 rows in set (0.01 sec)
Here is a trick: quietly change 小乔 to 大桥 directly on the slave. An update made on the slave can never replicate back to the master, so run a select through MyCat afterwards to verify. As shown below, read-write splitting works correctly.
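A sketch of this check (connect to the slave directly, not through MyCat's port 8066; the column name username and the values come from the rows inserted above):

-- On the slave, connected directly (not via MyCat):
-- a change made here is never replicated back to the master.
update tb_user set username='大桥' where id=3;

-- Back on the MyCat connection (port 8066):
-- if this returns 大桥, the select was routed to the slave; the master still holds 小乔.
select * from tb_user where id=3;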
This one-master, one-slave setup has a single point of failure: if the master goes down, queries through MyCat still work, but DML and DDL statements fail with a broken-connection error, and everyone ends up staying late to fix it.
In the dual-master, dual-slave topology, master node master1 handles all write requests; its slave slave1, the second master master2 and master2's slave slave2 handle all read requests. If master1 goes down, master2 takes over the writes; the two masters act as standbys for each other, as shown in the diagram:
Implementation
1. Set up the dual masters
Edit M1's my.cnf and add the key parameters:
server_id=33061
binlog-do-db=db01
binlog-do-db=db02
binlog-do-db=db03
log-slave-updates=1 # write changes received via replication into this server's own binlog so they can be passed on
Add to M2's my.cnf:
server_id=33062
binlog-do-db=db01
binlog-do-db=db02
binlog-do-db=db03
log-slave-updates=1
Then create the replication account on both masters:
grant replication slave on *.* to repl@'192.168.128.%' identified by '123456';
flush privileges;
show master status;
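show master status reports the current binlog file and position on each master; these are the values that go into master_log_file / master_log_pos in the change master to statements below. A trimmed, illustrative example of the output, matching the values used later in this article:

+------------------+----------+----------------+
| File             | Position | Binlog_Do_DB   |
+------------------+----------+----------------+
| mysql-bin.000001 |      154 | db01,db02,db03 |
+------------------+----------+----------------+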
For the slaves, only server_id needs to change:
server_id=33071 # S1
server_id=33072 # S2
Restart the services.
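After restarting, a quick sanity check on each node (illustrative):

-- each instance should report the server_id configured above
select @@server_id;
-- on M1 and M2 the binary log must be enabled, otherwise replication cannot work
show variables like 'log_bin';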
Next, point the two slaves at their masters. S1 replicates from M1 (master_log_file and master_log_pos come from show master status on M1):
change master to master_host='192.168.128.100',master_port=3306,master_user='repl',master_password='123456',master_log_file='mysql-bin.000001',master_log_pos=154;
start slave; show slave status\G
S2 replicates from M2:
change master to master_host='192.168.128.101',master_port=3306,master_user='repl',master_password='123456',master_log_file='mysql-bin.000001',master_log_pos=154;
start slave; show slave status\G  # if replication is broken: stop slave; reset slave all; then reconfigure
Now configure M1 and M2 to replicate from each other:
M1: change master to master_host='192.168.128.101',master_port=3306,master_user='repl',master_password='123456',master_log_file='mysql-bin.000001',master_log_pos=154;
start slave; show slave status\G
M2: change master to master_host='192.168.128.100',master_port=3306,master_user='repl',master_password='123456',master_log_file='mysql-bin.000001',master_log_pos=154;
start slave; show slave status\G
Test: run DDL and DML statements on M1 and on M2 respectively, and check that the data is synchronized across all of the database servers involved.
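The test might look like this (db01 is one of the databases in the binlog-do-db filter above; the table definition is illustrative and matches the id/name/sex columns used later):

-- On M1: the new database should appear on M2, S1 and S2
create database db01;

-- On M2: create a table and insert a row, then verify it shows up on M1, S1 and S2
use db01;
create table tb_user (id int primary key, name varchar(50), sex char(1));
insert into tb_user values (1, 'Jack', '1');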
Create db01 on M1; as shown, db01 is replicated to all nodes.
Create a table on M2 and insert some data, then watch the synchronization.
Everything syncs without problems.
Master-slave and master-master replication are now in place; next, implement dual-master, dual-slave read-write splitting.
MyCat again controls read-write splitting and load balancing through the balance attribute of the dataHost tag in schema.xml, while automatic failover is handled by the writeType and switchType attributes.
schema.xml configuration:
<schema name="DB_TBSHARE_RW2" checkSQLschema="true" sqlMaxLimit="100" dataNode="dn7">
<!-- no logical tables need to be configured here -->
</schema>
<dataNode name="dn7" dataHost="dbhost7" database="db01" />
<dataHost name="dbhost7" maxCon="1000" minCon="10" balance="1"
writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
<heartbeat>select user()</heartbeat>
<!--M1 S1-->
<writeHost host="master1" url="192.168.128.100:3306" user="root" password="123456">
<readHost host="slave1" url="192.168.128.101:3307" user="root" password="123456" />
</writeHost>
<!--M2 S2-->
<writeHost host="master2" url="192.168.128.101:3306" user="root" password="123456">
<readHost host="slave2" url="192.168.128.101:3307" user="root" password="123456" />
</writeHost>
</dataHost>
balance="1", 表示全部的readHost与备用的writeHost参与select语句的负载均衡,换句话说,就是当双主双从模式M1->S1,M2->S2互为主备,正常情况下,M2、S1、S2都参与select语句的负载均衡。
writeType: 0 写操作都转发到第一台writehost、writehost1宕机会切换到writehost2上
1 所有的写操作都随机发送到配置的writehost上。
switchType: -1 不自动切换 ; 1 自动切换
配置好后,重启mycat。 ./bin/mycat restart
[root@db-master ~]# mysql -uroot -h192.168.128.100 -p123456 -P8066
.......
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show databases;
+----------------+
| DATABASE |
+----------------+
| DB_TBSHARE |
| DB_TBSHARE_RW2 |
| SHOPING |
+----------------+
3 rows in set (0.02 sec)
mysql> use DB_TBSHARE_RW2;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> show tables;
+----------------+
| Tables_in_db01 |
+----------------+
| tb_user |
+----------------+
1 row in set (0.02 sec)
mysql>
mysql> select * from tb_user;
+----+------+------+
| id | name | sex |
+----+------+------+
| 1 | Jack | 1 |
| 2 | Tony | 1 |
| 3 | mack | 2 |
| 4 | Lucy | 2 |
| 5 | Mely | 2 |
+----+------+------+
5 rows in set (1.86 sec)
To tell which node a query result actually comes from, first tag the data on S1 and S2 by connecting to each slave directly:
mysql> update tb_user set name='Jack-S1' where id=1;
Query OK, 1 row affected (0.01 sec)
Rows matched: 1 Changed: 1 Warnings: 0
mysql> select * from tb_user;
+----+---------+------+
| id | name | sex |
+----+---------+------+
| 1 | Jack-S1 | 1 |   # the result is Jack-S1, so this row was read from S1
...
mysql> update tb_user set name='Jack-S2' where id=1;
Query OK, 1 row affected (0.02 sec)
Rows matched: 1 Changed: 1 Warnings: 0
mysql> select * from tb_user;
+----+---------+------+
| id | name | sex |
+----+---------+------+
| 1 | Jack-S2 | 1 |   # the result is Jack-S2, so this row was read from S2
...
Now switch back to the MyCat connection and run the query repeatedly:
mysql> select * from tb_user;
+----+---------+------+
| id | name | sex |
+----+---------+------+
| 1 | Jack-S1 | 1 |   # served by slave S1
| 2 | Tony | 1 |
| 3 | Mack | 2 |
| 4 | Lucy | 2 |
| 5 | Mely | 2 |
+----+---------+------+
5 rows in set (0.00 sec)
mysql> select * from tb_user;
+----+---------+------+
| id | name | sex |
+----+---------+------+
| 1 | Jack-S1 | 1 |   # served by slave S1
| 2 | Tony | 1 |
| 3 | Mack | 2 |
| 4 | Lucy | 2 |
| 5 | Mely | 2 |
+----+---------+------+
5 rows in set (0.01 sec)
mysql> select * from tb_user;
+----+------+------+
| id | name | sex |
+----+------+------+
| 1 | Jack | 1 |   # plain 'Jack' means this read came from master M2: with balance="1", M1 handles writes only
| 2 | Tony | 1 |
| 3 | Mack | 2 |
| 4 | Lucy | 2 |
| 5 | Mely | 2 |
+----+------+------+
5 rows in set (0.03 sec)
mysql> select * from tb_user;
+----+------+------+
| id | name | sex |
+----+------+------+
| 1 | Jack | 1 |
| 2 | Tony | 1 |
| 3 | Mack | 2 |
| 4 | Lucy | 2 |
| 5 | Mely | 2 |
+----+------+------+
5 rows in set (0.00 sec)
Insert one more row and see how it synchronizes across the four nodes:
mysql> insert into tb_user values(6,'Baky','2');
Query OK, 1 row affected (0.33 sec)
mysql> update tb_user set name='SZ-马' where id=2;
Query OK, 1 row affected (0.01 sec)
Rows matched: 1 Changed: 1 Warnings: 0
The update also goes through without any problem.
mysql> select * from tb_user;
+----+--------+------+
| id | name | sex |
+----+--------+------+
| 1 | Jack | 1 |
| 2 | SZ-马 | 1 |
| 3 | Mack | 2 |
| 4 | Lucy | 2 |
| 5 | Mely | 2 |
| 6 | Baky | 2 |
+----+--------+------+
6 rows in set (0.01 sec)
mysql> select * from tb_user;
+----+---------+------+
| id | name | sex |
+----+---------+------+
| 1 | Jack-S1 | 1 |
| 2 | SZ-马 | 1 |
| 3 | Mack | 2 |
| 4 | Lucy | 2 |
| 5 | Mely | 2 |
| 6 | Baky | 2 |
+----+---------+------+
6 rows in set (0.02 sec)
mysql> select * from tb_user;
+----+---------+------+
| id | name | sex |
+----+---------+------+
| 1 | Jack-S1 | 1 |
| 2 | SZ-马 | 1 |
| 3 | Mack | 2 |
| 4 | Lucy | 2 |
| 5 | Mely | 2 |
| 6 | Baky | 2 |
+----+---------+------+
6 rows in set (0.25 sec)
Simulate a crash of the M1 node and see whether writes switch over to M2 and still succeed:
systemctl stop mysqld
mysql> select * from tb_user;
+----+---------+------+
| id | name | sex |
+----+---------+------+
| 1 | Jack-S1 | 1 |
| 2 | SZ-马 | 1 |
| 3 | Mack | 2 |
| 4 | Lucy | 2 |
| 5 | Mely | 2 |
| 6 | Baky | 2 |
+----+---------+------+
6 rows in set (0.01 sec)
mysql> select * from tb_user;
+----+---------+------+
| id | name | sex |
+----+---------+------+
| 1 | Jack-S2 | 1 |
| 2 | SZ-马 | 1 |
| 3 | Mack | 2 |
| 4 | Lucy | 2 |
| 5 | Mely | 2 |
| 6 | Baky | 2 |
mysql> insert into tb_user values(7,'www','1');
Query OK, 1 row affected (0.01 sec)
mysql> select * from tb_user;
+----+--------+------+
| id | name | sex |
+----+--------+------+
| 1 | Jack | 1 |   # plain 'Jack' (no -S1/-S2 tag) means this read was served by master M2
| 2 | SZ-马 | 1 |
| 3 | Mack | 2 |
| 4 | Lucy | 2 |
| 5 | Mely | 2 |
| 6 | Baky | 2 |
| 7 | www | 1 |
+----+--------+------+
7 rows in set (0.00 sec)
mysql> select * from tb_user;
+----+---------+------+
| id | name | sex |
+----+---------+------+
| 1 | Jack-S2 | 1 |
| 2 | SZ-马 | 1 |
| 3 | Mack | 2 |
| 4 | Lucy | 2 |
| 5 | Mely | 2 |
| 6 | Baky | 2 |
| 7 | www | 1 |
+----+---------+------+
7 rows in set (0.00 sec)
mysql> select * from tb_user;
+----+---------+------+
| id | name | sex |
+----+---------+------+
| 1 | Jack-S1 | 1 |
| 2 | Tony | 1 |
| 3 | Mack | 2 |
| 4 | Lucy | 2 |
| 5 | Mely | 2 |
| 6 | Baky | 2 |
+----+---------+------+
6 rows in set (0.00 sec)
Note that this last result still lacks the row with id 7: it comes from S1, whose master M1 is down, so S1 has stopped receiving new changes. OK, that wraps up read-write splitting. Did you get all of it?