$ kubectl get pod -n fluid-system
goosefsruntime-controller-5b64fdbbb-84pc6   1/1   Running   0   8h
csi-nodeplugin-fluid-fwgjh                  2/2   Running   0   8h
csi-nodeplugin-fluid-ll8bq                  2/2   Running   0   8h
dataset-controller-5b7848dbbb-n44dj         1/1   Running   0   8h
The output should include a pod named dataset-controller, a pod named goosefsruntime-controller, and multiple pods named csi-nodeplugin. The number of csi-nodeplugin pods depends on the number of nodes in your Kubernetes cluster.

Create a working directory for this example:

$ mkdir <any-path>/co-locality
$ cd <any-path>/co-locality
$ kubectl get nodes
NAME            STATUS   ROLES    AGE     VERSION
192.168.1.146   Ready    <none>   7d14h   v1.18.4-tke.13
192.168.1.147   Ready    <none>   7d14h   v1.18.4-tke.13
$ kubectl label nodes 192.168.1.146 hbase-cache=true
In the following steps, Fluid uses a NodeSelector to manage where data is cached, which is why we added the label to the desired node.

$ kubectl get node -L hbase-cache
NAME            STATUS   ROLES    AGE     VERSION          HBASE-CACHE
192.168.1.146   Ready    <none>   7d14h   v1.18.4-tke.13   true
192.168.1.147   Ready    <none>   7d14h   v1.18.4-tke.13
Currently, only node 192.168.1.146 has the hbase-cache=true label, indicating that data will only be cached on this node.

Dataset resource object to be created (dataset.yaml):

apiVersion: data.fluid.io/v1alpha1
kind: Dataset
metadata:
  name: hbase
spec:
  mounts:
    - mountPoint: https://mirrors.tuna.tsinghua.edu.cn/apache/hbase/stable/
      name: hbase
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: hbase-cache
              operator: In
              values:
                - "true"
The mountPoint is set to a WebUFS address in this example. If you want to mount COS, see Mounting COS (COSN) to GooseFS. In the spec attribute of the Dataset resource object, we define the nodeSelectorTerms sub-attribute to specify that data caches must be placed on a node with the hbase-cache=true label.

Create the Dataset resource object:

$ kubectl create -f dataset.yaml
dataset.data.fluid.io/hbase created
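For reference, a Dataset that mounts a COS bucket through the COSN scheme would look roughly like the sketch below. The bucket name, region value, Secret name, and credential option keys are illustrative assumptions; follow Mounting COS (COSN) to GooseFS for the exact fields supported by your GooseFS version.

apiVersion: data.fluid.io/v1alpha1
kind: Dataset
metadata:
  name: hbase
spec:
  mounts:
    # hypothetical bucket and region, replace with your own
    - mountPoint: cosn://<your-bucket-appid>/
      name: hbase
      options:
        fs.cosn.bucket.region: ap-guangzhou
      encryptOptions:
        # credentials are assumed to live in a Kubernetes Secret named "cos-secret"
        - name: fs.cosn.userinfo.secretId
          valueFrom:
            secretKeyRef:
              name: cos-secret
              key: fs.cosn.userinfo.secretId
        - name: fs.cosn.userinfo.secretKey
          valueFrom:
            secretKeyRef:
              name: cos-secret
              key: fs.cosn.userinfo.secretKey
  # the nodeAffinity section from the example above would be added here unchanged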
GooseFSRuntime resource object to be created (runtime.yaml):

apiVersion: data.fluid.io/v1alpha1
kind: GooseFSRuntime
metadata:
  name: hbase
spec:
  replicas: 2
  tieredstore:
    levels:
      - mediumtype: SSD
        path: /mnt/disk1
        quota: 2G
        high: "0.8"
        low: "0.7"
The spec.replicas attribute is set to 2, indicating that Fluid will launch a GooseFS instance consisting of one GooseFS master and two GooseFS workers.

$ kubectl create -f runtime.yaml
goosefsruntime.data.fluid.io/hbase created

$ kubectl get pod -o wide
NAME                 READY   STATUS    RESTARTS   AGE    IP              NODE            NOMINATED NODE   READINESS GATES
hbase-fuse-42csf     1/1     Running   0          104s   192.168.1.146   192.168.1.146   <none>           <none>
hbase-master-0       2/2     Running   0          3m3s   192.168.1.147   192.168.1.147   <none>           <none>
hbase-worker-l62m4   2/2     Running   0          104s   192.168.1.146   192.168.1.146   <none>           <none>
Both the GooseFS worker pod and the fuse pod are scheduled to node 192.168.1.146, which is the only node with the hbase-cache=true label.

Check the status of the GooseFSRuntime object:

$ kubectl get goosefsruntime hbase -o wide
NAME    READY MASTERS   DESIRED MASTERS   MASTER PHASE   READY WORKERS   DESIRED WORKERS   WORKER PHASE   READY FUSES   DESIRED FUSES   FUSE PHASE     AGE
hbase   1               1                 Ready          1               2                 PartialReady   1             2               PartialReady   4m3s
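If you want to double-check which node a worker landed on, kubectl's jsonpath output works well. The pod name below is taken from the output above, so substitute your own; the result simply mirrors the NODE column:

$ kubectl get pod hbase-worker-l62m4 -o jsonpath='{.spec.nodeName}'
192.168.1.146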
The Worker Phase is PartialReady, and Ready Workers is 1, which is less than the Desired Workers value of 2. This is because only one node currently carries the hbase-cache=true label.

Next, deploy an application that consumes the dataset. StatefulSet to be created (app.yaml):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 2
  serviceName: "nginx"
  podManagementPolicy: "Parallel"
  selector: # define how the controller finds the pods it manages
    matchLabels:
      app: nginx
  template: # define the pod specifications
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        # prevent two Nginx pods from being scheduled onto the same node,
        # just for demonstrating co-locality in this demo
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - nginx
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - mountPath: /data
              name: hbase-vol
      volumes:
        - name: hbase-vol
          persistentVolumeClaim:
            claimName: hbase
The podAntiAffinity attribute is configured purely for demonstration purposes: it forces the pods of this application onto different nodes so that the co-locality behavior is easier to observe.

Create the application:

$ kubectl create -f app.yaml
statefulset.apps/nginx created
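For a real workload you would usually not force pods apart this strictly. A softer variant (not part of this example) is preferredDuringSchedulingIgnoredDuringExecution, which lets scheduling still succeed when only one suitable node exists. A minimal sketch of the affinity section:

# inside spec.template.spec.affinity of the StatefulSet above (illustrative only)
podAntiAffinity:
  preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - nginx
        topologyKey: "kubernetes.io/hostname"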
$ kubectl get pod -o wide -l app=nginx
NAME      READY   STATUS    RESTARTS   AGE    IP              NODE            NOMINATED NODE   READINESS GATES
nginx-0   1/1     Running   0          2m5s   192.168.1.146   192.168.1.146   <none>           <none>
nginx-1   0/1     Pending   0          2m5s   <none>          <none>          <none>           <none>
Only one Nginx pod starts successfully; the other stays in the Pending state because of the Dataset resource object's nodeSelectorTerms.

$ kubectl describe pod nginx-1
...
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't satisfy existing pods anti-affinity rules, 1 node(s) had volume node affinity conflict.
  Warning  FailedScheduling  <unknown>  default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't satisfy existing pods anti-affinity rules, 1 node(s) had volume node affinity conflict.
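The "volume node affinity conflict" part of the message comes from the node affinity that Fluid attaches to the PersistentVolume backing the hbase dataset. To confirm this yourself, look up the PV first, since its exact name can differ between Fluid versions, then describe it:

$ kubectl get pv
$ kubectl describe pv <pv-name-from-the-output-above>   # check the Node Affinity section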
Because of the podAntiAffinity attribute, the two Nginx pods cannot be scheduled to the same node. On the other hand, only one node currently meets the node affinity requirements of the Dataset resource object, so only one Nginx pod can be scheduled successfully.

To resolve this, add the hbase-cache=true label to the other node as well:

$ kubectl label node 192.168.1.147 hbase-cache=true
$ kubectl get pod -o wide
NAME                 READY   STATUS    RESTARTS   AGE   IP              NODE            NOMINATED NODE   READINESS GATES
hbase-fuse-42csf     1/1     Running   0          44m   192.168.1.146   192.168.1.146   <none>           <none>
hbase-fuse-kth4g     1/1     Running   0          10m   192.168.1.147   192.168.1.147   <none>           <none>
hbase-master-0       2/2     Running   0          46m   192.168.1.147   192.168.1.147   <none>           <none>
hbase-worker-l62m4   2/2     Running   0          44m   192.168.1.146   192.168.1.146   <none>           <none>
hbase-worker-rvncl   2/2     Running   0          10m   192.168.1.147   192.168.1.147   <none>           <none>
$ kubectl get goosefsruntime hbase -o wide
NAME    READY MASTERS   DESIRED MASTERS   MASTER PHASE   READY WORKERS   DESIRED WORKERS   WORKER PHASE   READY FUSES   DESIRED FUSES   FUSE PHASE   AGE
hbase   1               1                 Ready          2               2                 Ready          2             2               Ready        46m43s
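You can also query the Dataset itself to see the cache capacity and cache phase that Fluid reports; the exact columns shown depend on your Fluid version:

$ kubectl get dataset hbase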
$ kubectl get pod -l app=nginx -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP              NODE            NOMINATED NODE   READINESS GATES
nginx-0   1/1     Running   0          21m   192.168.1.146   192.168.1.146   <none>           <none>
nginx-1   1/1     Running   0          21m   192.168.1.147   192.168.1.147   <none>           <none>
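To verify that the application can actually read the dataset, list the mounted data from inside one of the pods. This assumes the mount named hbase is exposed under /data/hbase beneath the /data mount path used in app.yaml; adjust the path if your layout differs:

$ kubectl exec -it nginx-0 -- ls -lh /data/hbase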
The Nginx pod that was previously stuck in the Pending state has now started successfully and is running on the other node.

To clean up the environment, delete the resources created in this example and remove the node labels:

$ kubectl delete -f .
$ kubectl label node 192.168.1.146 hbase-cache-
$ kubectl label node 192.168.1.147 hbase-cache-