| IP | Hostname | Notes |
|---|---|---|
| 11.0.1.3 | master1 | |
| 11.0.1.4 | master2 | |
| 11.0.1.5 | master3 | |
| 11.0.1.6 | node1 | |
| 11.0.1.7 | node2 | |
| 11.0.1.8 | nfs | |
The Kubernetes cluster itself can be built either with sealos (one-click) or from binaries.
Create the NFS shared-storage service

Install nfs-utils and rpcbind

Install the nfs-utils package on both the NFS clients and the NFS server:

```bash
yum install nfs-utils rpcbind
```

Create the shared directory:

```bash
mkdir -p /nfsdata
chmod 777 /nfsdata
```

Edit /etc/exports (`vi /etc/exports`) and add the following line:

```
/nfsdata *(rw,sync,no_root_squash)
```
NFS export option notes: `rw` grants read-write access, `sync` flushes writes to disk before replying, and `no_root_squash` keeps root on the client as root on the share (convenient here, though loose from a security standpoint).
Start the services:

```bash
# systemctl start rpcbind.service
# systemctl enable rpcbind.service
# systemctl start nfs.service
# systemctl enable nfs.service
```
The start order must be rpcbind first, then nfs; otherwise errors can occur.
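Before moving on, confirm the export is visible from the clients (a quick check; showmount ships with nfs-utils, and 11.0.1.8 is the NFS server from the table above):

```bash
# run from any client node; should list /nfsdata
showmount -e 11.0.1.8
```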
Create the StorageClass

Because a StorageClass enables dynamic provisioning, we first need to install the automatic provisioner for the storage driver, and that provisioner must have sufficient permissions to access the Kubernetes cluster (much like the dashboard, which needs access to the various APIs to do its management work).

Create the RBAC (Role-Based Access Control) objects:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: elk
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  # Note: if your namespace is default, the following endpoints rule can be omitted
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: elk
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: elk
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: elk
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: elk
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
```
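Assuming the manifest above is saved as rbac.yaml (the filename is illustrative), apply it with:

```bash
kubectl apply -f rbac.yaml
```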
Create the StorageClass:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: master-nfs-storage
provisioner: master-nfs-storage # must match the PROVISIONER_NAME environment variable in the provisioner Deployment below
parameters:
  archiveOnDelete: "false"
```
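Apply and verify it (a sketch; the filename storageclass.yaml is illustrative):

```bash
kubectl apply -f storageclass.yaml
kubectl get storageclass master-nfs-storage
```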
Create the automatic provisioner (the NFS client):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: elk
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: master-nfs-storage
            - name: NFS_SERVER
              value: 11.0.1.8
            - name: NFS_PATH
              value: /nfsdata
      volumes:
        - name: nfs-client-root
          nfs:
            server: 11.0.1.8
            path: /nfsdata
```
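After applying the Deployment, make sure the provisioner pod is Running before creating any PVCs (a sketch using the labels defined above):

```bash
kubectl -n elk get pods -l app=nfs-client-provisioner
```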
Create a test pod to check that the deployment works.

Create the PVC:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-service-pvc
  annotations:
    volume.beta.kubernetes.io/storage-provisioner: master-nfs-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: master-nfs-storage
```
accessModes explained:

- ReadWriteOnce -- the volume can be mounted read-write by a single node
- ReadOnlyMany -- the volume can be mounted read-only by many nodes
- ReadWriteMany -- the volume can be mounted read-write by many nodes
Check that the PVC status is Bound:

```bash
kubectl get pvc --all-namespaces
NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         AGE
test-service-pvc   Bound    pvc-aae2b7fa-377b-11ea-87ad-525400512eca   1Gi        RWX            master-nfs-storage   2m48s
```
Create a test pod and check that the volume mounts correctly:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
  namespace: elk
spec:
  containers:
    - name: test-pod
      image: busybox:1.24
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "touch /mnt/SUCCESS && exit 0 || exit 1" # create a SUCCESS file, then exit
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc # must match the volumeMounts name above
      persistentVolumeClaim:
        claimName: test-service-pvc
```
At this point a directory named default-test-service-pvc-pvc-aae2b7fa-377b-11ea-87ad-525400512eca should have appeared under /nfsdata on the NFS server.
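This can be verified directly on the NFS host; the directory name follows the provisioner's ${namespace}-${pvcName}-${pvName} pattern (a quick check, run on the NFS server):

```bash
# on 11.0.1.8
ls /nfsdata
```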
How the StorageClass reclaim settings affect data

1. First configuration

```
archiveOnDelete: "false"
reclaimPolicy: Delete    # not configured explicitly; Delete is the default
```

Test results:

1. After a pod is deleted and recreated, the data is still there; the old pod's name and data remain available to the new pod.
2. After the SC is deleted and recreated, the data is still there; the old pod's name and data remain available to the new pod.
3. After the PVC is deleted, the PV is deleted and the corresponding data on the NFS server is deleted too.

2. Second configuration

```
archiveOnDelete: "false"
reclaimPolicy: Retain
```

Test results:

1. After a pod is deleted and recreated, the data is still there; the old pod's name and data remain available to the new pod.
2. After the SC is deleted and recreated, the data is still there; the old pod's name and data remain available to the new pod.
3. After the PVC is deleted, the PV is not deleted; its status changes from Bound to Released, and the corresponding data on the NFS server is retained.
4. After the SC is recreated, a new PVC binds to a new PV, and the old data can be copied into the new PV.

3. Third configuration

```
archiveOnDelete: "true"
reclaimPolicy: Retain
```

Results:

1. After a pod is deleted and recreated, the data is still there; the old pod's name and data remain available to the new pod.
2. After the SC is deleted and recreated, the data is still there; the old pod's name and data remain available to the new pod.
3. After the PVC is deleted, the PV is not deleted; its status changes from Bound to Released, and the corresponding data on the NFS server is retained.
4. After the SC is recreated, a new PVC binds to a new PV, and the old data can be copied into the new PV.

4. Fourth configuration

```
archiveOnDelete: "true"
reclaimPolicy: Delete
```

Results:

1. After a pod is deleted and recreated, the data is still there; the old pod's name and data remain available to the new pod.
2. After the SC is deleted and recreated, the data is still there; the old pod's name and data remain available to the new pod.
3. After the PVC is deleted, the PV is not deleted; its status changes from Bound to Released, and the corresponding data on the NFS server is retained.
4. After the SC is recreated, a new PVC binds to a new PV, and the old data can be copied into the new PV.

A StorageClass sketch of the third configuration is shown below.
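Note that reclaimPolicy is a field of the StorageClass itself. A sketch of what the third configuration above would look like, applied to the same StorageClass as before:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: master-nfs-storage
provisioner: master-nfs-storage
reclaimPolicy: Retain        # Delete is the default when this field is omitted
parameters:
  archiveOnDelete: "true"    # provisioner-side flag: archive the data directory rather than removing it
```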
Create the yaml file for the master nodes:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: elk
  name: elasticsearch-master
  labels:
    app: elasticsearch
    role: master
spec:
  serviceName: elasticsearch-master
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
      role: master
  template:
    metadata:
      labels:
        app: elasticsearch
        role: master
    spec:
      containers:
        - name: elasticsearch
          image: elasticsearch:7.16.2
          command: ["bash", "-c", "ulimit -l unlimited && sysctl -w vm.max_map_count=262144 && chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/data && exec su elasticsearch docker-entrypoint.sh"]
          ports:
            - containerPort: 9200
              name: http
            - containerPort: 9300
              name: transport
          env:
            - name: discovery.seed_hosts
              value: "elasticsearch-master-0.elasticsearch-master,elasticsearch-master-1.elasticsearch-master,elasticsearch-master-2.elasticsearch-master,elasticsearch-data-0.elasticsearch-data,elasticsearch-data-1.elasticsearch-data,elasticsearch-data-2.elasticsearch-data,elasticsearch-data-3.elasticsearch-data,elasticsearch-data-4.elasticsearch-data,elasticsearch-client-0.elasticsearch-client,elasticsearch-client-1.elasticsearch-client,elasticsearch-client-2.elasticsearch-client"
            - name: cluster.initial_master_nodes
              value: "elasticsearch-master-0,elasticsearch-master-1,elasticsearch-master-2"
            - name: ES_JAVA_OPTS
              value: -Xms512m -Xmx512m
            - name: node.master
              value: "true"
            - name: node.ingest
              value: "false"
            - name: node.data
              value: "false"
            - name: cluster.name
              value: "elasticsearch"
            - name: node.name
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: xpack.security.enabled
              value: "true"
            - name: xpack.security.transport.ssl.enabled
              value: "true"
            - name: xpack.monitoring.collection.enabled
              value: "true"
            - name: xpack.security.transport.ssl.verification_mode
              value: "certificate"
            - name: xpack.security.transport.ssl.keystore.path
              value: "/usr/share/elasticsearch/config/elastic-certificates.p12"
            - name: xpack.security.transport.ssl.truststore.path
              value: "/usr/share/elasticsearch/config/elastic-certificates.p12"
          volumeMounts:
            - mountPath: /usr/share/elasticsearch/data
              name: pv-storage-elastic-master
            - name: elastic-certificates
              readOnly: true
              mountPath: "/usr/share/elasticsearch/config/elastic-certificates.p12"
              subPath: elastic-certificates.p12
            - mountPath: /etc/localtime
              name: localtime
          securityContext:
            privileged: true
      volumes:
        - name: elastic-certificates
          secret:
            secretName: elastic-certificates
        - hostPath:
            path: /etc/localtime
          name: localtime
  volumeClaimTemplates:
    - metadata:
        name: pv-storage-elastic-master
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "master-nfs-storage"
        resources:
          requests:
            storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  namespace: elk
  name: elasticsearch-master
  labels:
    app: elasticsearch
    role: master
spec:
  selector:
    app: elasticsearch
    role: master
  type: NodePort
  ports:
    - port: 9200
      nodePort: 30001
      targetPort: 9200
```
Create the yaml file for the data nodes:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: elk
  name: elasticsearch-data
  labels:
    app: elasticsearch
    role: data
spec:
  serviceName: elasticsearch-data
  replicas: 5
  selector:
    matchLabels:
      app: elasticsearch
      role: data
  template:
    metadata:
      labels:
        app: elasticsearch
        role: data
    spec:
      containers:
        - name: elasticsearch
          image: elasticsearch:7.16.2
          command: ["bash", "-c", "ulimit -l unlimited && sysctl -w vm.max_map_count=262144 && chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/data && exec su elasticsearch docker-entrypoint.sh"]
          ports:
            - containerPort: 9200
              name: http
            - containerPort: 9300
              name: transport
          env:
            - name: discovery.seed_hosts
              value: "elasticsearch-master-0.elasticsearch-master,elasticsearch-master-1.elasticsearch-master,elasticsearch-master-2.elasticsearch-master,elasticsearch-data-0.elasticsearch-data,elasticsearch-data-1.elasticsearch-data,elasticsearch-data-2.elasticsearch-data,elasticsearch-data-3.elasticsearch-data,elasticsearch-data-4.elasticsearch-data,elasticsearch-client-0.elasticsearch-client,elasticsearch-client-1.elasticsearch-client,elasticsearch-client-2.elasticsearch-client"
            - name: cluster.initial_master_nodes
              value: "elasticsearch-master-0,elasticsearch-master-1,elasticsearch-master-2"
            - name: ES_JAVA_OPTS
              value: -Xms512m -Xmx512m
            - name: node.master
              value: "false"
            - name: node.ingest
              value: "false"
            - name: node.data
              value: "true"
            - name: cluster.name
              value: "elasticsearch"
            - name: node.name
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: xpack.security.enabled
              value: "true"
            - name: xpack.security.transport.ssl.enabled
              value: "true"
            - name: xpack.monitoring.collection.enabled
              value: "true"
            - name: xpack.security.transport.ssl.verification_mode
              value: "certificate"
            - name: xpack.security.transport.ssl.keystore.path
              value: "/usr/share/elasticsearch/config/elastic-certificates.p12"
            - name: xpack.security.transport.ssl.truststore.path
              value: "/usr/share/elasticsearch/config/elastic-certificates.p12"
          volumeMounts:
            - mountPath: /usr/share/elasticsearch/data
              name: pv-storage-elastic-data
            - name: elastic-certificates
              readOnly: true
              mountPath: "/usr/share/elasticsearch/config/elastic-certificates.p12"
              subPath: elastic-certificates.p12
            - mountPath: /etc/localtime
              name: localtime
          securityContext:
            privileged: true
      volumes:
        - name: elastic-certificates
          secret:
            secretName: elastic-certificates
        - hostPath:
            path: /etc/localtime
          name: localtime
  volumeClaimTemplates:
    - metadata:
        name: pv-storage-elastic-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "master-nfs-storage"
        resources:
          requests:
            storage: 2Gi
---
apiVersion: v1
kind: Service
metadata:
  namespace: elk
  name: elasticsearch-data
  labels:
    app: elasticsearch
    role: data
spec:
  selector:
    app: elasticsearch
    role: data
  type: NodePort
  ports:
    - port: 9200
      nodePort: 30002
      targetPort: 9200
```
Create the yaml file for the client nodes:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: elk
  name: elasticsearch-client
  labels:
    app: elasticsearch
    role: client
spec:
  serviceName: elasticsearch-client
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
      role: client
  template:
    metadata:
      labels:
        app: elasticsearch
        role: client
    spec:
      containers:
        - name: elasticsearch
          image: elasticsearch:7.16.2
          command: ["bash", "-c", "ulimit -l unlimited && sysctl -w vm.max_map_count=262144 && chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/data && exec su elasticsearch docker-entrypoint.sh"]
          ports:
            - containerPort: 9200
              name: http
            - containerPort: 9300
              name: transport
          env:
            - name: discovery.seed_hosts
              value: "elasticsearch-master-0.elasticsearch-master,elasticsearch-master-1.elasticsearch-master,elasticsearch-master-2.elasticsearch-master,elasticsearch-data-0.elasticsearch-data,elasticsearch-data-1.elasticsearch-data,elasticsearch-data-2.elasticsearch-data,elasticsearch-data-3.elasticsearch-data,elasticsearch-data-4.elasticsearch-data,elasticsearch-client-0.elasticsearch-client,elasticsearch-client-1.elasticsearch-client,elasticsearch-client-2.elasticsearch-client"
            - name: cluster.initial_master_nodes
              value: "elasticsearch-master-0,elasticsearch-master-1,elasticsearch-master-2"
            - name: ES_JAVA_OPTS
              value: -Xms512m -Xmx512m
            - name: node.master
              value: "false"
            - name: node.ingest
              value: "true"
            - name: node.data
              value: "false"
            - name: cluster.name
              value: "elasticsearch"
            - name: node.name
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: xpack.security.enabled
              value: "true"
            - name: xpack.security.transport.ssl.enabled
              value: "true"
            - name: xpack.monitoring.collection.enabled
              value: "true"
            - name: xpack.security.transport.ssl.verification_mode
              value: "certificate"
            - name: xpack.security.transport.ssl.keystore.path
              value: "/usr/share/elasticsearch/config/elastic-certificates.p12"
            - name: xpack.security.transport.ssl.truststore.path
              value: "/usr/share/elasticsearch/config/elastic-certificates.p12"
          volumeMounts:
            - mountPath: /usr/share/elasticsearch/data
              name: pv-storage-elastic-client
            - name: elastic-certificates
              readOnly: true
              mountPath: "/usr/share/elasticsearch/config/elastic-certificates.p12"
              subPath: elastic-certificates.p12
            - mountPath: /etc/localtime
              name: localtime
          securityContext:
            privileged: true
      volumes:
        - name: elastic-certificates
          secret:
            secretName: elastic-certificates
        - hostPath:
            path: /etc/localtime
          name: localtime
  volumeClaimTemplates:
    - metadata:
        name: pv-storage-elastic-client
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "master-nfs-storage"
        resources:
          requests:
            storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  namespace: elk
  name: elasticsearch-client
  labels:
    app: elasticsearch
    role: client
spec:
  selector:
    app: elasticsearch
    role: client
  type: NodePort
  ports:
    - port: 9200
      nodePort: 30003
      targetPort: 9200
```
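With all three StatefulSets applied, wait for the full cluster (3 master, 5 data, and 3 client pods) to reach Running before setting passwords (a sketch using the labels defined above):

```bash
kubectl -n elk get pods -l app=elasticsearch
```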
Set the passwords for the ES cluster, and make sure to remember them!!!

```bash
kubectl -n elk exec -it $(kubectl -n elk get pods | grep elasticsearch-master | sed -n 1p | awk '{print $1}') -- bin/elasticsearch-setup-passwords auto -b
```

Create a secret with the password generated above for the elastic user, 03sWFWzGOjNOCioqcbV3:

```bash
kubectl -n elk create secret generic elasticsearch-password --from-literal password=03sWFWzGOjNOCioqcbV3
```
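A quick check that security is wired up end to end (a sketch, assuming the node IP 11.0.1.3 and the master Service NodePort 30001 defined above):

```bash
curl -u elastic:03sWFWzGOjNOCioqcbV3 "http://11.0.1.3:30001/_cluster/health?pretty"
```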
Create the yaml file for Kibana:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: elk
  name: kibana-config
  labels:
    app: kibana
data:
  kibana.yml: |-
    server.host: 0.0.0.0
    elasticsearch:
      hosts: ${ELASTICSEARCH_HOSTS}
      username: ${ELASTICSEARCH_USER}
      password: ${ELASTICSEARCH_PASSWORD}
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: kibana
  name: kibana
  namespace: elk
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      nodeSelector:
        node: node2
      containers:
        - name: kibana
          image: kibana:7.16.2
          ports:
            - containerPort: 5601
              protocol: TCP
          env:
            - name: SERVER_PUBLICBASEURL
              value: "http://0.0.0.0:5601"
            - name: I18N.LOCALE
              value: zh-CN
            - name: ELASTICSEARCH_HOSTS
              value: "http://elasticsearch-client:9200"
            - name: ELASTICSEARCH_USER
              value: "elastic"
            - name: ELASTICSEARCH_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: elasticsearch-password
                  key: password
            - name: xpack.encryptedSavedObjects.encryptionKey
              value: "min-32-byte-long-strong-encryption-key"
          volumeMounts:
            - name: kibana-config
              mountPath: /usr/share/kibana/config/kibana.yml
              readOnly: true
              subPath: kibana.yml
            - mountPath: /etc/localtime
              name: localtime
      volumes:
        - name: kibana-config
          configMap:
            name: kibana-config
        - hostPath:
            path: /etc/localtime
          name: localtime
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kibana
  name: kibana-service
  namespace: elk
spec:
  ports:
    - port: 5601
      targetPort: 5601
      nodePort: 30004
  type: NodePort
  selector:
    app: kibana
```
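Once the Kibana pod is Running, the UI should be reachable on any node at port 30004; log in as elastic with the password stored in the secret. A quick probe (a sketch; /api/status is Kibana's status endpoint and may itself demand credentials with security enabled):

```bash
curl -s -u elastic:03sWFWzGOjNOCioqcbV3 http://11.0.1.3:30004/api/status | head -c 300
```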
Deploy the ZooKeeper cluster:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  namespace: elk
  labels:
    app: zookeeper
spec:
  type: NodePort
  ports:
    - port: 2181
      nodePort: 30005
      targetPort: 2181
  selector:
    app: zookeeper
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zookeeper-pdb
  namespace: elk
spec:
  selector:
    matchLabels:
      app: zookeeper
  minAvailable: 2
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zookeeper
  namespace: elk
spec:
  selector:
    matchLabels:
      app: zookeeper
  serviceName: zookeeper
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
        - name: kubernetes-zookeeper
          imagePullPolicy: IfNotPresent
          image: "mirrorgooglecontainers/kubernetes-zookeeper:1.0-3.4.10"
          ports:
            - containerPort: 2181
              name: client
            - containerPort: 2888
              name: server
            - containerPort: 3888
              name: leader-election
          command:
            - sh
            - -c
            - "start-zookeeper \
              --servers=3 \
              --data_dir=/var/lib/zookeeper/data \
              --data_log_dir=/var/lib/zookeeper/data/log \
              --conf_dir=/opt/zookeeper/conf \
              --client_port=2181 \
              --election_port=3888 \
              --server_port=2888 \
              --tick_time=2000 \
              --init_limit=10 \
              --sync_limit=5 \
              --heap=512M \
              --max_client_cnxns=60 \
              --snap_retain_count=3 \
              --purge_interval=12 \
              --max_session_timeout=40000 \
              --min_session_timeout=4000 \
              --log_level=INFO"
          readinessProbe:
            exec:
              command:
                - sh
                - -c
                - "zookeeper-ready 2181"
            initialDelaySeconds: 10
            timeoutSeconds: 5
          livenessProbe:
            exec:
              command:
                - sh
                - -c
                - "zookeeper-ready 2181"
            initialDelaySeconds: 10
            timeoutSeconds: 5
          volumeMounts:
            - name: zookeeper
              mountPath: /var/lib/zookeeper
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
    - metadata:
        name: zookeeper
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "master-nfs-storage"
        resources:
          requests:
            storage: 1Gi
```
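Before moving on to Kafka, confirm that all three ZooKeeper pods are up and serving. A minimal check (a sketch; `zookeeper-ready` is the same helper script the manifest's probes call, so it is known to exist in this image):

```bash
kubectl -n elk get pods -l app=zookeeper
# exits 0 once the instance is serving on 2181
kubectl -n elk exec zookeeper-0 -- zookeeper-ready 2181
```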
Create the yaml file for Kafka (07-kafka.yaml):

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: kafka
  namespace: elk
  labels:
    app: kafka
spec:
  type: NodePort
  ports:
    - port: 9092
      nodePort: 30006
      targetPort: 9092
  selector:
    app: kafka
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: kafka-pdb
  namespace: elk
spec:
  selector:
    matchLabels:
      app: kafka
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
  namespace: elk
spec:
  selector:
    matchLabels:
      app: kafka
  serviceName: kafka
  replicas: 3
  template:
    metadata:
      labels:
        app: kafka
    spec:
      terminationGracePeriodSeconds: 300
      containers:
        - name: k8s-kafka
          imagePullPolicy: IfNotPresent
          image: fastop/kafka:2.2.0
          resources:
            requests:
              memory: "600Mi"
              cpu: 500m
          ports:
            - containerPort: 9092
              name: server
          command:
            - sh
            - -c
            - "exec kafka-server-start.sh /opt/kafka/config/server.properties --override broker.id=${HOSTNAME##*-} \
              --override listeners=PLAINTEXT://:9092 \
              --override zookeeper.connect=zookeeper.elk.svc.cluster.local:2181 \
              --override log.dir=/var/lib/kafka \
              --override auto.create.topics.enable=true \
              --override auto.leader.rebalance.enable=true \
              --override background.threads=10 \
              --override compression.type=producer \
              --override delete.topic.enable=false \
              --override leader.imbalance.check.interval.seconds=300 \
              --override leader.imbalance.per.broker.percentage=10 \
              --override log.flush.interval.messages=9223372036854775807 \
              --override log.flush.offset.checkpoint.interval.ms=60000 \
              --override log.flush.scheduler.interval.ms=9223372036854775807 \
              --override log.retention.bytes=-1 \
              --override log.retention.hours=168 \
              --override log.roll.hours=168 \
              --override log.roll.jitter.hours=0 \
              --override log.segment.bytes=1073741824 \
              --override log.segment.delete.delay.ms=60000 \
              --override message.max.bytes=1000012 \
              --override min.insync.replicas=1 \
              --override num.io.threads=8 \
              --override num.network.threads=3 \
              --override num.recovery.threads.per.data.dir=1 \
              --override num.replica.fetchers=1 \
              --override offset.metadata.max.bytes=4096 \
              --override offsets.commit.required.acks=-1 \
              --override offsets.commit.timeout.ms=5000 \
              --override offsets.load.buffer.size=5242880 \
              --override offsets.retention.check.interval.ms=600000 \
              --override offsets.retention.minutes=1440 \
              --override offsets.topic.compression.codec=0 \
              --override offsets.topic.num.partitions=50 \
              --override offsets.topic.replication.factor=3 \
              --override offsets.topic.segment.bytes=104857600 \
              --override queued.max.requests=500 \
              --override quota.consumer.default=9223372036854775807 \
              --override quota.producer.default=9223372036854775807 \
              --override replica.fetch.min.bytes=1 \
              --override replica.fetch.wait.max.ms=500 \
              --override replica.high.watermark.checkpoint.interval.ms=5000 \
              --override replica.lag.time.max.ms=10000 \
              --override replica.socket.receive.buffer.bytes=65536 \
              --override replica.socket.timeout.ms=30000 \
              --override request.timeout.ms=30000 \
              --override socket.receive.buffer.bytes=102400 \
              --override socket.request.max.bytes=104857600 \
              --override socket.send.buffer.bytes=102400 \
              --override unclean.leader.election.enable=true \
              --override zookeeper.session.timeout.ms=6000 \
              --override zookeeper.set.acl=false \
              --override broker.id.generation.enable=true \
              --override connections.max.idle.ms=600000 \
              --override controlled.shutdown.enable=true \
              --override controlled.shutdown.max.retries=3 \
              --override controlled.shutdown.retry.backoff.ms=5000 \
              --override controller.socket.timeout.ms=30000 \
              --override default.replication.factor=1 \
              --override fetch.purgatory.purge.interval.requests=1000 \
              --override group.max.session.timeout.ms=300000 \
              --override group.min.session.timeout.ms=6000 \
              --override inter.broker.protocol.version=2.2.0 \
              --override log.cleaner.backoff.ms=15000 \
              --override log.cleaner.dedupe.buffer.size=134217728 \
              --override log.cleaner.delete.retention.ms=86400000 \
              --override log.cleaner.enable=true \
              --override log.cleaner.io.buffer.load.factor=0.9 \
              --override log.cleaner.io.buffer.size=524288 \
              --override log.cleaner.io.max.bytes.per.second=1.7976931348623157E308 \
              --override log.cleaner.min.cleanable.ratio=0.5 \
              --override log.cleaner.min.compaction.lag.ms=0 \
              --override log.cleaner.threads=1 \
              --override log.cleanup.policy=delete \
              --override log.index.interval.bytes=4096 \
              --override log.index.size.max.bytes=10485760 \
              --override log.message.timestamp.difference.max.ms=9223372036854775807 \
              --override log.message.timestamp.type=CreateTime \
              --override log.preallocate=false \
              --override log.retention.check.interval.ms=300000 \
              --override max.connections.per.ip=2147483647 \
              --override num.partitions=4 \
              --override producer.purgatory.purge.interval.requests=1000 \
              --override replica.fetch.backoff.ms=1000 \
              --override replica.fetch.max.bytes=1048576 \
              --override replica.fetch.response.max.bytes=10485760 \
              --override reserved.broker.max.id=1000 "
          env:
            - name: KAFKA_HEAP_OPTS
              value: "-Xmx512M -Xms512M"
            - name: KAFKA_OPTS
              value: "-Dlogging.level=INFO"
          volumeMounts:
            - name: kafka
              mountPath: /var/lib/kafka
          readinessProbe:
            tcpSocket:
              port: 9092
            timeoutSeconds: 1
            initialDelaySeconds: 5
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
    - metadata:
        name: kafka
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "master-nfs-storage"
        resources:
          requests:
            storage: 1Gi
```
Check the brokers through ZooKeeper:

```bash
[root@master1 elk]# kubectl exec -it zookeeper-1 -n elk bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
zookeeper@zookeeper-1:/$ zkCli.sh
Connecting to localhost:2181
[zk: localhost:2181(CONNECTED) 0] get /brokers/ids/0
```
{"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://kafka-0.kafka.elk.svc.cluster.local:9093"],"jmx_port":-1,"host":"kafka-0.kafka.elk.svc.cluster.local","timestamp":"1641887271398","port":9093,"version":4}
cZxid = 0x200000024
ctime = Tue Jan 11 07:47:51 UTC 2022
mZxid = 0x200000024
mtime = Tue Jan 11 07:47:51 UTC 2022
pZxid = 0x200000024
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x27e480b276e0001
dataLength = 246
numChildren = 0
[zk: localhost:2181(CONNECTED) 1] get /brokers/ids/1
{"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://kafka-1.kafka.elk.svc.cluster.local:9093"],"jmx_port":-1,"host":"kafka-1.kafka.elk.svc.cluster.local","timestamp":"1641887242316","port":9093,"version":4}
cZxid = 0x20000001e
ctime = Tue Jan 11 07:47:22 UTC 2022
mZxid = 0x20000001e
mtime = Tue Jan 11 07:47:22 UTC 2022
pZxid = 0x20000001e
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x27e480b276e0000
dataLength = 246
numChildren = 0
[zk: localhost:2181(CONNECTED) 2] get /brokers/ids/2
{"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://kafka-2.kafka.elk.svc.cluster.local:9093"],"jmx_port":-1,"host":"kafka-2.kafka..svc.cluster.local","timestamp":"1641888604437","port":9093,"version":4}
cZxid = 0x20000002d
ctime = Tue Jan 11 08:10:04 UTC 2022
mZxid = 0x20000002d
mtime = Tue Jan 11 08:10:04 UTC 2022
pZxid = 0x20000002d
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x27e480b276e0002
dataLength = 246
numChildren = 0
```
(2) Kafka produce/consume test

Create a topic:

```bash
[root@master1 elk]# kubectl exec -it kafka-0 -n elk sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
$ pwd
/
$ cd /opt/kafka/bin
$ ./kafka-topics.sh --create --topic test --zookeeper zookeeper.elk.svc.cluster.local:2181 --partitions 3 --replication-factor 3
Created topic "test".
$ ./kafka-topics.sh --list --zookeeper zookeeper.elk.svc.cluster.local:2181
test
```
Produce messages:

```bash
$ ./kafka-console-producer.sh --topic test --broker-list kafka-0.kafka.elk.svc.cluster.local:9093
111
```
Consume the messages from a second terminal:

```bash
$ ./kafka-console-consumer.sh --topic test --zookeeper zookeeper.elk.svc.cluster.local:2181 --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
111
```
Consumption works as expected!
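As the deprecation warning suggests, the same test also works with the new consumer by pointing at a broker instead of ZooKeeper (a sketch; note that the brokers registered themselves on port 9093, as the /brokers/ids output above shows):

```bash
$ ./kafka-console-consumer.sh --topic test --bootstrap-server kafka-0.kafka.elk.svc.cluster.local:9093 --from-beginning
```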
Create the yaml file for Logstash (09-logstash.yaml):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-configmap
  namespace: elk
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
  logstash.conf: |
    input {
      kafka {
        bootstrap_servers => "kafka-0.kafka.elk.svc.cluster.local:9092,kafka-1.kafka.elk.svc.cluster.local:9092,kafka-2.kafka.elk.svc.cluster.local:9092"
        topics => ["filebeat"]
        codec => "json"
      }
    }
    filter {
      date {
        match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
      }
    }
    output {
      elasticsearch {
        hosts => ["elasticsearch-client:9200"]
        user => "elastic"
        password => "lGTiRY1ZcChmlNpr5AFX"
        index => "kubernetes-%{+YYYY.MM.dd}"
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash-deployment
  namespace: elk
spec:
  selector:
    matchLabels:
      app: logstash
  replicas: 1
  template:
    metadata:
      labels:
        app: logstash
    spec:
      nodeSelector:
        node: node2
      containers:
        - name: logstash
          image: docker.elastic.co/logstash/logstash:7.16.2
          ports:
            - containerPort: 5044
          volumeMounts:
            - name: config-volume
              mountPath: /usr/share/logstash/config
            - name: logstash-pipeline-volume
              mountPath: /usr/share/logstash/pipeline
            - mountPath: /etc/localtime
              name: localtime
      volumes:
        - name: config-volume
          configMap:
            name: logstash-configmap
            items:
              - key: logstash.yml
                path: logstash.yml
        - name: logstash-pipeline-volume
          configMap:
            name: logstash-configmap
            items:
              - key: logstash.conf
                path: logstash.conf
        - hostPath:
            path: /etc/localtime
          name: localtime
---
kind: Service
apiVersion: v1
metadata:
  name: logstash-service
  namespace: elk
spec:
  selector:
    app: logstash
  type: NodePort
  ports:
    - protocol: TCP
      port: 5044
      targetPort: 5044
      nodePort: 30007
```
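After applying the manifests, confirm the pipeline started and subscribed to the Kafka topic (a sketch using standard kubectl commands):

```bash
kubectl -n elk get pods -l app=logstash
kubectl -n elk logs deploy/logstash-deployment | tail -n 20
```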
Create the yaml file for Filebeat:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: elk
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |
    filebeat.inputs:
    - type: container
      paths:
        - '/var/lib/docker/containers/*/*.log'
      processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/lib/docker/containers/"
    processors:
      - add_cloud_metadata:
      - add_host_metadata:
    output:
      kafka:
        enabled: true
        hosts: ["kafka-0.kafka.elk.svc.cluster.local:9092","kafka-1.kafka.elk.svc.cluster.local:9092","kafka-2.kafka.elk.svc.cluster.local:9092"]
        topic: "filebeat"
        max_message_bytes: 5242880
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: elk
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:7.16.2
          args: [
            "-c", "/etc/filebeat.yml",
            "-e",
          ]
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          securityContext:
            runAsUser: 0
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: config
              mountPath: /etc/filebeat.yml
              readOnly: true
              subPath: filebeat.yml
            - name: data
              mountPath: /usr/share/filebeat/data
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
      volumes:
        - name: config
          configMap:
            defaultMode: 0640
            name: filebeat-config
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: data
          hostPath:
            path: /var/lib/filebeat-data
            type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: elk
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: filebeat
  namespace: elk
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: elk
roleRef:
  kind: Role
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: filebeat-kubeadm-config
  namespace: elk
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: elk
roleRef:
  kind: Role
  name: filebeat-kubeadm-config
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources:
      - namespaces
      - pods
      - nodes
    verbs:
      - get
      - watch
      - list
  - apiGroups: ["apps"]
    resources:
      - replicasets
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: filebeat
  namespace: elk
  labels:
    k8s-app: filebeat
rules:
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs: ["get", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: filebeat-kubeadm-config
  namespace: elk
  labels:
    k8s-app: filebeat
rules:
  - apiGroups: [""]
    resources:
      - configmaps
    resourceNames:
      - kubeadm-config
    verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: elk
  labels:
    k8s-app: filebeat
```
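Once the DaemonSet is running on every node, log events should flow Filebeat → Kafka → Logstash → Elasticsearch. A sketch of an end-to-end check, assuming the node IP 11.0.1.3 and the master NodePort 30001 from earlier, plus the elastic password generated above:

```bash
kubectl -n elk get pods -l k8s-app=filebeat -o wide
# daily indices named kubernetes-YYYY.MM.dd should start to appear
curl -s -u elastic:03sWFWzGOjNOCioqcbV3 "http://11.0.1.3:30001/_cat/indices?v" | grep kubernetes
```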
Compared with our previous deployment (kubernetes上部署ELK集群_你说咋整就咋整的博客-CSDN博客_k8s部署elk), this one simplifies things by removing the manual PV step: with dynamic provisioning in place, none of the later stages needs a hand-created PV.