
Docker Redis cluster: 3 masters, 3 replicas, and hash slot allocation

Published: 2022-06-17 15:26:41

Part 1: Installing the Redis cluster with Docker

1. On the host, create a data directory under /docker/redis for each node.
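The six directories can be created in one loop. A small sketch (demonstrated against a throwaway base directory so it can run anywhere; on the real host you would pass /docker/redis):

```python
import os
import tempfile

def make_node_dirs(base, ports):
    """Create <base>/node-<port>/data for every cluster node."""
    for port in ports:
        os.makedirs(os.path.join(base, f"node-{port}", "data"), exist_ok=True)

# Demo against a throwaway base directory; on the real host
# you would pass "/docker/redis" instead.
base = tempfile.mkdtemp()
make_node_dirs(base, range(6381, 6387))
print(sorted(os.listdir(base)))  # node-6381 through node-6386
```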

2. Run the container start commands:

docker run -d --name redis-node81 --net host --privileged=true -v /docker/redis/node-6381/data:/data redis --cluster-enabled yes --appendonly yes --port 6381

docker run -d --name redis-node82 --net host --privileged=true -v /docker/redis/node-6382/data:/data redis --cluster-enabled yes --appendonly yes --port 6382

docker run -d --name redis-node83 --net host --privileged=true -v /docker/redis/node-6383/data:/data redis --cluster-enabled yes --appendonly yes --port 6383 

docker run -d --name redis-node84 --net host --privileged=true -v /docker/redis/node-6384/data:/data redis --cluster-enabled yes --appendonly yes --port 6384 

Note: --cluster-enabled yes must come after the image name. Arguments placed after the image are passed to the container's entrypoint (here, redis-server), not to docker run.

 

3. Explanation of the command parameters:

docker run -d  run the container detached (in the background)

--name redis-node81  container name

--net host  use the host's network stack

--privileged=true  run the container with extended privileges

-v /docker/redis/node-6381/data:/data redis  map the host directory into the container; redis is the image name

--cluster-enabled yes  enable cluster mode

--appendonly yes  enable AOF persistence

--port 6381  port number
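The six startup commands differ only in the container name, volume path, and port. As a sketch, a small hypothetical helper can generate all of them from the port number; the redis-node8x naming follows this article's pattern:

```python
def redis_node_cmd(port, base="/docker/redis", image="redis"):
    """Build the docker run command for one cluster node.

    Container names follow the article's pattern: port 6381 -> redis-node81.
    """
    name = f"redis-node{port - 6300}"
    return (
        f"docker run -d --name {name} --net host --privileged=true "
        f"-v {base}/node-{port}/data:/data {image} "
        f"--cluster-enabled yes --appendonly yes --port {port}"
    )

for port in range(6381, 6387):
    print(redis_node_cmd(port))
```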

 

[root@localhost redis]# docker run -d --name redis-node81 --net host --privileged=true -v /docker/redis/node-6381/data:/data redis --cluster-enabled yes --appendonly yes --port 6381
b5ee28798e83842f54dc1fc35f1a10113f257d31209b5719629659137145bc8d
[root@localhost redis]# docker run -d --name redis-node82 --net host --privileged=true -v /docker/redis/node-6382/data:/data redis --cluster-enabled yes --appendonly yes --port 6382
50bab8d93f0a2f3a464fd625bf23ecdd8a2b323d0843ba5fc24b857b659bec1f
[root@localhost redis]# docker ps
CONTAINER ID   IMAGE       COMMAND                  CREATED          STATUS          PORTS                                                  NAMES
50bab8d93f0a   redis       "docker-entrypoint.s…"   3 seconds ago    Up 2 seconds                                                           redis-node82
b5ee28798e83   redis       "docker-entrypoint.s…"   10 minutes ago   Up 10 minutes                                                          redis-node81
c68e431e7a2d   mysql:5.7   "docker-entrypoint.s…"   8 days ago       Up 8 days       33060/tcp, 0.0.0.0:3307->3306/tcp, :::3307->3306/tcp   formysql
d536dd728243   redis       "docker-entrypoint.s…"   8 days ago       Up 8 days       6379/tcp, 0.0.0.0:6380->6380/tcp, :::6380->6380/tcp    forredis2
[root@localhost redis]# docker run -d --name redis-node83 --net host --privileged=true -v /docker/redis/node-6383/data:/data redis --cluster-enabled yes --appendonly yes --port 6383
bf3f7574eeab9830a1fd5d29b76c81164ac3d87acd1d7b6803baeafb9779f1f3
[root@localhost redis]# docker ps
CONTAINER ID   IMAGE       COMMAND                  CREATED          STATUS          PORTS                                                  NAMES
bf3f7574eeab   redis       "docker-entrypoint.s…"   2 seconds ago    Up 2 seconds                                                           redis-node83
50bab8d93f0a   redis       "docker-entrypoint.s…"   21 seconds ago   Up 20 seconds                                                          redis-node82
b5ee28798e83   redis       "docker-entrypoint.s…"   10 minutes ago   Up 10 minutes                                                          redis-node81
c68e431e7a2d   mysql:5.7   "docker-entrypoint.s…"   8 days ago       Up 8 days       33060/tcp, 0.0.0.0:3307->3306/tcp, :::3307->3306/tcp   formysql
d536dd728243   redis       "docker-entrypoint.s…"   8 days ago       Up 8 days       6379/tcp, 0.0.0.0:6380->6380/tcp, :::6380->6380/tcp    forredis2
[root@localhost redis]# docker run -d --name redis-node84 --net host --privileged=true -v /docker/redis/node-6384/data:/data redis --cluster-enabled yes --appendonly yes --port 6384
b4f4ed8c22caac25cfeae91dcdc657149b820e10003981a78a676652b6d209dd
[root@localhost redis]# docker ps
CONTAINER ID   IMAGE       COMMAND                  CREATED          STATUS          PORTS                                                  NAMES
b4f4ed8c22ca   redis       "docker-entrypoint.s…"   1 second ago     Up 1 second                                                            redis-node84
bf3f7574eeab   redis       "docker-entrypoint.s…"   10 seconds ago   Up 10 seconds                                                          redis-node83
50bab8d93f0a   redis       "docker-entrypoint.s…"   29 seconds ago   Up 28 seconds                                                          redis-node82
b5ee28798e83   redis       "docker-entrypoint.s…"   10 minutes ago   Up 10 minutes                                                          redis-node81
c68e431e7a2d   mysql:5.7   "docker-entrypoint.s…"   8 days ago       Up 8 days       33060/tcp, 0.0.0.0:3307->3306/tcp, :::3307->3306/tcp   formysql
d536dd728243   redis       "docker-entrypoint.s…"   8 days ago       Up 8 days       6379/tcp, 0.0.0.0:6380->6380/tcp, :::6380->6380/tcp    forredis2

Next, run the command that builds the master/replica relationships.

4. Error when the cluster has too few nodes: at least 3 masters are required

root@localhost:/data# redis-cli --cluster create 192.168.2.252:6381 192.168.2.252:6382 192.168.2.252:6383 192.168.2.252:6384 --cluster-replicas 1
*** ERROR: Invalid configuration for cluster creation.
*** Redis Cluster requires at least 3 master nodes.
*** This is not possible with 4 nodes and 1 replicas per node.
*** At least 6 nodes are required.
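redis-cli derives the master count as nodes / (replicas + 1) and refuses to continue with fewer than three masters. The arithmetic behind the error can be sketched as follows (my own helper, not redis-cli internals):

```python
def masters_for(nodes, replicas_per_master):
    """How many masters `redis-cli --cluster create` can carve out of
    `nodes` total nodes: each master consumes 1 + replicas nodes."""
    return nodes // (1 + replicas_per_master)

# 4 nodes with 1 replica each -> only 2 masters: rejected (minimum is 3).
print(masters_for(4, 1))  # 2
# 6 nodes with 1 replica each -> 3 masters: accepted.
print(masters_for(6, 1))  # 3
```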

 

Start two more nodes to bring the total to six:

docker run -d --name redis-node85 --net host --privileged=true -v /docker/redis/node-6385/data:/data redis --cluster-enabled yes --appendonly yes --port 6385

docker run -d --name redis-node86 --net host --privileged=true -v /docker/redis/node-6386/data:/data redis --cluster-enabled yes --appendonly yes --port 6386 

5. Run the command to build the master/replica relationships

root@localhost:/data# redis-cli --cluster create 192.168.2.252:6381 192.168.2.252:6382 192.168.2.252:6383 192.168.2.252:6384 192.168.2.252:6385 192.168.2.252:6386 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.2.252:6385 to 192.168.2.252:6381
Adding replica 192.168.2.252:6386 to 192.168.2.252:6382
Adding replica 192.168.2.252:6384 to 192.168.2.252:6383
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: c4e09a1bf36b4aa79b744b39aabe809df1bd29c8 192.168.2.252:6381
   slots:[0-5460] (5461 slots) master
M: 2de30f0b227b2d30602f14ec69b84d54e3b22af9 192.168.2.252:6382
   slots:[5461-10922] (5462 slots) master
M: 8996d0476951fba537d26b513721c27981f0ddd6 192.168.2.252:6383
   slots:[10923-16383] (5461 slots) master
S: 12a22d510b2b590b7407877bd310263409b88893 192.168.2.252:6384
   replicates 8996d0476951fba537d26b513721c27981f0ddd6
S: 23a94ab62fcefb1112ba88ebe65a198bc540b721 192.168.2.252:6385
   replicates c4e09a1bf36b4aa79b744b39aabe809df1bd29c8
S: 8e7560aa024a53ed0ed93ab7b6b43189cb0c0ef7 192.168.2.252:6386
   replicates 2de30f0b227b2d30602f14ec69b84d54e3b22af9
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 192.168.2.252:6381)
M: c4e09a1bf36b4aa79b744b39aabe809df1bd29c8 192.168.2.252:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 2de30f0b227b2d30602f14ec69b84d54e3b22af9 192.168.2.252:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 12a22d510b2b590b7407877bd310263409b88893 192.168.2.252:6384
   slots: (0 slots) slave
   replicates 8996d0476951fba537d26b513721c27981f0ddd6
M: 8996d0476951fba537d26b513721c27981f0ddd6 192.168.2.252:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 8e7560aa024a53ed0ed93ab7b6b43189cb0c0ef7 192.168.2.252:6386
   slots: (0 slots) slave
   replicates 2de30f0b227b2d30602f14ec69b84d54e3b22af9
S: 23a94ab62fcefb1112ba88ebe65a198bc540b721 192.168.2.252:6385
   slots: (0 slots) slave
   replicates c4e09a1bf36b4aa79b744b39aabe809df1bd29c8
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@localhost:/data#

 

 

Note the hash-slot value ranges assigned to the three master nodes:

Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383

When a key is stored, its computed slot value determines which of these ranges it falls into, and therefore which node it is stored on.
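For this cluster, the slot-to-master mapping can be written down directly. A minimal sketch using the ranges printed above (node addresses are the ones from this setup):

```python
# Slot ranges exactly as printed by redis-cli for this cluster.
SLOT_RANGES = [
    (0, 5460, "192.168.2.252:6381"),
    (5461, 10922, "192.168.2.252:6382"),
    (10923, 16383, "192.168.2.252:6383"),
]

def owner_of(slot):
    """Return the master that owns a given hash slot (0-16383)."""
    for low, high, node in SLOT_RANGES:
        if low <= slot <= high:
            return node
    raise ValueError(f"slot out of range: {slot}")

print(owner_of(15495))  # 192.168.2.252:6383
```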

 

6. Check the cluster status

root@localhost:/data# redis-cli -p 6381
127.0.0.1:6381> CLUSTER info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:470
cluster_stats_messages_pong_sent:484
cluster_stats_messages_sent:954
cluster_stats_messages_ping_received:479
cluster_stats_messages_pong_received:470
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:954
127.0.0.1:6381> CLUSTER nodes
2de30f0b227b2d30602f14ec69b84d54e3b22af9 192.168.2.252:6382@16382 master - 0 1654852981329 2 connected 5461-10922
12a22d510b2b590b7407877bd310263409b88893 192.168.2.252:6384@16384 slave 8996d0476951fba537d26b513721c27981f0ddd6 0 1654852980326 3 connected
8996d0476951fba537d26b513721c27981f0ddd6 192.168.2.252:6383@16383 master - 0 1654852979324 3 connected 10923-16383
8e7560aa024a53ed0ed93ab7b6b43189cb0c0ef7 192.168.2.252:6386@16386 slave 2de30f0b227b2d30602f14ec69b84d54e3b22af9 0 1654852979000 2 connected
23a94ab62fcefb1112ba88ebe65a198bc540b721 192.168.2.252:6385@16385 slave c4e09a1bf36b4aa79b744b39aabe809df1bd29c8 0 1654852980000 1 connected
c4e09a1bf36b4aa79b744b39aabe809df1bd29c8 192.168.2.252:6381@16381 myself,master - 0 1654852979000 1 connected 0-5460

 

Part 2: Verifying automatic key placement via cluster hash slots

1. Wrong approach: connecting in plain standalone mode

Connected the traditional standalone way, the client cannot store keys whose slots do not belong to the current node; those writes fail with an error:

root@localhost:/data# redis-cli -p 6381
127.0.0.1:6381> keys *
(empty array)
127.0.0.1:6381> set a 1
(error) MOVED 15495 192.168.2.252:6383
127.0.0.1:6381> get a
(error) MOVED 15495 192.168.2.252:6383
127.0.0.1:6381> set b 2
OK
127.0.0.1:6381> get b
"2"
127.0.0.1:6381>

 

Note the error "(error) MOVED 15495 192.168.2.252:6383".

The cluster assigns slots automatically: key a maps to slot 15495, which is owned by the node on port 6383.

 

2. Correct way to store keys

The correct way to connect to a cluster is to append the -c flag to the connection command.

When a key is stored, Redis runs a CRC16 checksum over the key and takes the result modulo 16384; the resulting value is the key's slot, and the client is automatically redirected to the node that owns it.
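The formula is HASH_SLOT = CRC16(key) mod 16384, where Redis uses the CRC16/XMODEM variant (polynomial 0x1021, initial value 0). A minimal reimplementation, ignoring {hash tag} handling, reproduces the slots that appear in the session below:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16/XMODEM: poly 0x1021, init 0 -- the variant Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021 if crc & 0x8000 else crc << 1) & 0xFFFF
    return crc

def keyslot(key: str) -> int:
    """HASH_SLOT = CRC16(key) mod 16384 (no {hash tag} handling here)."""
    return crc16_xmodem(key.encode()) % 16384

for k in ("a", "b", "c"):
    print(k, keyslot(k))  # a -> 15495, b -> 3300, c -> 7365
```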

root@localhost:/data# redis-cli -p 6381 -c
127.0.0.1:6381> keys *
1) "b"
127.0.0.1:6381> FLUSHALL
OK
127.0.0.1:6381> keys *
(empty array)
127.0.0.1:6381> set a 1
-> Redirected to slot [15495] located at 192.168.2.252:6383
OK
192.168.2.252:6383> keys *
1) "a"
192.168.2.252:6383> set b 2
-> Redirected to slot [3300] located at 192.168.2.252:6381
OK
192.168.2.252:6381> set c 3
-> Redirected to slot [7365] located at 192.168.2.252:6382
OK
192.168.2.252:6382> keys *
1) "c"


3. Cluster check

root@localhost:/data# redis-cli --cluster check 192.168.2.252:6381
192.168.2.252:6381 (c4e09a1b...) -> 1 keys | 5461 slots | 1 slaves.
192.168.2.252:6382 (2de30f0b...) -> 1 keys | 5462 slots | 1 slaves.
192.168.2.252:6383 (8996d047...) -> 1 keys | 5461 slots | 1 slaves.
[OK] 3 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.2.252:6381)
M: c4e09a1bf36b4aa79b744b39aabe809df1bd29c8 192.168.2.252:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 2de30f0b227b2d30602f14ec69b84d54e3b22af9 192.168.2.252:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 12a22d510b2b590b7407877bd310263409b88893 192.168.2.252:6384
   slots: (0 slots) slave
   replicates 8996d0476951fba537d26b513721c27981f0ddd6
M: 8996d0476951fba537d26b513721c27981f0ddd6 192.168.2.252:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 8e7560aa024a53ed0ed93ab7b6b43189cb0c0ef7 192.168.2.252:6386
   slots: (0 slots) slave
   replicates 2de30f0b227b2d30602f14ec69b84d54e3b22af9
S: 23a94ab62fcefb1112ba88ebe65a198bc540b721 192.168.2.252:6385
   slots: (0 slots) slave
   replicates c4e09a1bf36b4aa79b744b39aabe809df1bd29c8
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@localhost:/data#

 
