Ever since I first came across Docker I have been keenly interested in it, and I now try to build my everyday development environments with Docker as well; compared with virtual machines it really is a lot more convenient. These notes record the development environments I have set up with Docker. All of the Docker-based applications described here live under a dockerapps directory.
1 ZooKeeper
Version: 3.5
1.1 Single-node deployment
Create a zookeeper directory under dockerapps. It contains a data directory used to persist ZooKeeper's application data, a zoo.cfg file used to configure ZooKeeper, and a start.sh script used to start the single-node ZooKeeper instance. The directory layout is as follows:
zookeeper/
├── data
├── start.sh
└── zoo.cfg
1.1.1 Create the data directory for persistent data
Create the ZooKeeper data persistence directory under the zookeeper directory:
mkdir data
1.1.2 Create the zoo.cfg configuration file
Create the ZooKeeper configuration file zoo.cfg under the zookeeper directory with the following content:
clientPort=2181
dataDir=/data
dataLogDir=/data/log
1.1.3 Create the startup script start.sh
Create the ZooKeeper startup script start.sh under the zookeeper directory with the following content:
# Remove any previously created container with the same name
docker stop zookeeper
docker rm zookeeper
# Start a single-node ZooKeeper, mounting the data directory and the config file
docker run -itd -p 2181:2181 -v `pwd`/data:/data -v `pwd`/zoo.cfg:/conf/zoo.cfg --name zookeeper zookeeper:3.5
1.1.4 Start the service
With the steps above done, the single-node ZooKeeper can be started. Run start.sh:
sh start.sh
Check the Docker processes:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
39a6cf356e1a zookeeper:3.5 "/docker-entrypoint.…" 11 seconds ago Up 10 seconds 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp, 8080/tcp zookeeper
Open the ZooKeeper command-line client, zkCli:
docker exec -it 39a6cf356e1a bin/zkCli.sh
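Inside zkCli, a few basic commands are enough to verify that the node is serving requests; the /demo znode below is just an arbitrary example path, not anything required by ZooKeeper:
ls /
create /demo "hello"
get /demo
delete /demo
quit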
1.2 Cluster deployment
For a ZooKeeper cluster, docker-compose can be used for orchestration. Create a zk-cluster directory under the zookeeper directory to hold the cluster-related files. A three-node cluster is used as the example here.
mkdir zk-cluster
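Once the following steps are done, the layout of zk-cluster should look roughly like this (a sketch of the intended structure):
zk-cluster/
├── data1
│   └── myid
├── data2
│   └── myid
├── data3
│   └── myid
├── docker-compose.yml
├── start.sh
└── zoo.cfg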
1.2.1 Create the persistent data directories
Create data1, data2, and data3 to persist the data of the three nodes:
mkdir data1 data2 data3
Then create a myid file in each data directory, containing 1, 2, and 3 respectively, as shown below.
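A minimal way to create the myid files from inside zk-cluster (each file contains nothing but the node's id):
echo 1 > data1/myid
echo 2 > data2/myid
echo 3 > data3/myid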
1.2.2 Create the ZooKeeper configuration file zoo.cfg
The three nodes can share a single configuration file, or each node can have its own (as with the data directories). For simplicity, a single shared file is used here, with the following content:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/data
dataLogDir=/data/log
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=zookeeper1:2888:3888
server.2=zookeeper2:2888:3888
server.3=zookeeper3:2888:3888
4lw.commands.whitelist=*
1.2.3 Create the orchestration file docker-compose.yml
Create the orchestration file under the zk-cluster directory with the following content:
version: '2'
services:
  zoo1:
    image: zookeeper:3.5
    restart: always
    container_name: zookeeper1
    volumes:
      - ./data1:/data
      - ./zoo.cfg:/conf/zoo.cfg
    ports:
      - "2181:2181"
  zoo2:
    image: zookeeper:3.5
    restart: always
    container_name: zookeeper2
    volumes:
      - ./data2:/data
      - ./zoo.cfg:/conf/zoo.cfg
    ports:
      - "2182:2181"
  zoo3:
    image: zookeeper:3.5
    restart: always
    container_name: zookeeper3
    volumes:
      - ./data3:/data
      - ./zoo.cfg:/conf/zoo.cfg
    ports:
      - "2183:2181"
1.2.4 Create the cluster startup script start.sh
With the following content:
# Stop any containers left over from a previous run of this project, then (re)create the cluster in detached mode
docker-compose -p zk-cluster stop
docker-compose -p zk-cluster up -d
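A matching teardown script could simply tear the whole project down (a sketch; unlike stop, down also removes the containers and the compose network, while the mounted data directories on the host are kept):
docker-compose -p zk-cluster down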
1.2.5 Start the cluster
sh start.sh
Starting zookeeper1 ... done
Starting zookeeper3 ... done
Starting zookeeper2 ... done
Use the four-letter commands to check the cluster status:
echo stat |nc localhost 2181
Zookeeper version: 3.5.5-390fe37ea45dee01bf87dc1c042b5e3dcce88653, built on 05/03/2019 12:07 GMT
Clients:
/172.18.0.1:41462[0](queued=0,recved=1,sent=0)
Latency min/avg/max: 0/0/0
Received: 1
Sent: 0
Connections: 1
Outstanding: 0
Zxid: 0x0
Mode: standalone
Node count: 5
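Because each node maps a different host port (2181 through 2183), the role of every member can be checked in one pass; a small loop like the one below (assuming nc is available on the host) should report one leader and two followers once the quorum has formed:
for port in 2181 2182 2183; do
  echo "--- port $port ---"
  echo srvr | nc localhost $port | grep Mode
done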
2 RabbitMQ