Tencent Container Security Service
Self-Built Cluster
Last updated: 2024-08-13 17:22:21
This document describes how to access an external cluster for unified management and risk checks.
Note:
Kubernetes (K8s) cluster versions 1.13 and later are supported.
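To confirm the version of an existing cluster, you can run the command: kubectl version. The server version in the output is the cluster version.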

Limits

You can access an external cluster with up to 500 nodes.

Directions

1. Log in to the TCSS console and click Cluster Security > Security Check on the left sidebar.
2. On the Security Check page, click Access cluster.

3. On the Cluster Access page, select Tencent Cloud or Non-Tencent Cloud as the cloud the cluster belongs to.
Tencent Cloud: The CVM resources of the self-built cluster come from Tencent Cloud. Follow the on-page prompts to select the recommended installation method and set the cluster name.

Non-Tencent Cloud: Follow the on-page prompts to configure the recommended scheme, cluster name, and command validity period.
Note:
The server resources of the connected cluster come from other clouds. This includes self-built clusters, standalone clusters, and managed clusters hosted by other cloud vendors.

4. Click Generate Command, then copy and execute the generated command. Alternatively, you can download or copy the YAML content below and install it using either of the following two methods.
Note:
It is recommended that you generate a separate connection command for each cluster to avoid duplicate cluster names.
Method 1: Click Copy Command Link, then paste and execute the command on a machine that can run kubectl commands against the cluster. Alternatively, first download the YAML file below, copy it to that machine, and run kubectl apply -f tcss.yaml (a minimal example follows this list).
Method 2: Go to the cluster details page in the TKE console and use the Create Resources with YAML File option to apply the YAML content.
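For example, with Method 1, assuming the YAML below has been saved on the target machine as tcss.yaml (the file name comes from the step above), the installation and a basic check look like this:

kubectl apply -f tcss.yaml
kubectl get ns tcss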
---
apiVersion: v1
kind: Namespace
metadata:
  name: tcss

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: tcss
  name: tcss-admin
rules:
- apiGroups: ["extensions", "apps", ""]
  resources: ["*"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tcss-admin-rb
  namespace: tcss
subjects:
- kind: ServiceAccount
  name: tcss-agent
  namespace: tcss
  apiGroup: ""
roleRef:
  kind: Role
  name: tcss-admin
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tcss-agent
  namespace: tcss

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: security-clusterrole
rules:
- apiGroups: ["", "v1"]
  resources: ["namespaces", "pods", "nodes", "services", "serviceaccounts", "configmaps", "componentstatuses"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps", "batch", "extensions", "rbac.authorization.k8s.io", "networking.k8s.io", "cilium.io"]
  resources: ["*"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["networking.k8s.io"]
  resources: ["networkpolicies"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  verbs: ["list", "get", "create"]
- apiGroups: ["apiextensions.k8s.io"]
  resourceNames: ["tracingpolicies.cilium.io", "tracingpoliciesnamespaced.cilium.io"]
  resources: ["customresourcedefinitions"]
  verbs: ["list", "get", "update"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: security-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: security-clusterrole
subjects:
- kind: ServiceAccount
  name: tcss-agent
  namespace: tcss
- kind: User
  name: tcss
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: v1
kind: Secret
metadata:
  name: tcss-agent-secret
  namespace: tcss
  annotations:
    kubernetes.io/service-account.name: tcss-agent
type: kubernetes.io/service-account-token

---
apiVersion: batch/v1
kind: Job
metadata:
  name: init-tcss-agent
  namespace: tcss
spec:
  template:
    spec:
      serviceAccountName: tcss-agent
      containers:
      - image: ccr.ccs.tencentyun.com/yunjing_agent/agent:latest
        imagePullPolicy: Always
        name: init-tcss-agent
        command: ["/home/work/yunjing-agent"]
        args: ["-token",'',"-vip",'','-cc']
        resources:
          limits:
            cpu: 100m
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 128Mi
        env:
        - name: user_tags
          value: "default"
        - name: k8s_name
          value: "11"
        - name: appid
          value: "1256299843"
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /run/secrets/kubernetes.io/tcss-agent
          name: token-projection
      securityContext: {}
      hostPID: true
      restartPolicy: Never
      volumes:
      - name: token-projection
        secret:
          secretName: tcss-agent-secret
  backoffLimit: 5

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    k8s-app: yunjing-agent
  name: yunjing-agent
  namespace: kube-system
  annotations:
    config.kubernetes.io/depends-on: batch/v1/namespaces/tcss/jobs/init-tcss-secrets
spec:
  selector:
    matchLabels:
      k8s-app: yunjing-agent
  template:
    metadata:
      annotations:
        eks.tke.cloud.tencent.com/ds-injection: "true"
      labels:
        k8s-app: yunjing-agent
    spec:
      tolerations:
      - operator: Exists
      containers:
      - image: ccr.ccs.tencentyun.com/yunjing_agent/agent:latest
        imagePullPolicy: Always
        name: yunjing-agent
        command: ["/home/work/yunjing-agent"]
        args: ["-d","-token",'',"-vip",'']
        resources:
          limits:
            cpu: 250m
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 128Mi
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      hostPID: true

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: tcss-asset
  name: tcss-asset
  namespace: tcss
spec:
  selector:
    matchLabels:
      k8s-app: tcss-asset
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: tcss-asset
      annotations:
        eks.tke.cloud.tencent.com/ds-injection: "true"
    spec:
      serviceAccountName: tcss-agent
      tolerations:
      - operator: Exists
      containers:
      - image: ccr.ccs.tencentyun.com/yunjing_agent/agent:latest
        imagePullPolicy: Always
        name: tcss-asset
        command: ["/home/work/yunjing-agent"]
        args: ["-asset"]
        resources:
          limits:
            cpu: 100m
            memory: 256Mi
          requests:
            cpu: 50m
            memory: 64Mi
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      hostPID: true
5. After installation, check whether the connection is successful. When the cluster is connected, a namespace named tcss is created in the cluster along with the following workload resources. Ensure that all three workloads are running properly (a quick overall check is shown after this list):
A Job-type workload named init-tcss-agent under the tcss namespace.
A Deployment-type workload named tcss-asset under the tcss namespace.
A DaemonSet-type workload named yunjing-agent under the kube-system namespace.
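As a quick overall check, assuming kubectl is pointed at the connected cluster, you can list the resources directly:

kubectl get all -n tcss
kubectl get daemonset yunjing-agent -n kube-system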
5.1 Check if the Job workload is deployed successfully.
To check if the Job is created successfully, run the command: kubectl get jobs -n tcss.
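If the Job was created, the output is similar to the following (the duration and age values are illustrative):

NAME              COMPLETIONS   DURATION   AGE
init-tcss-agent   1/1           25s        2m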

To check if the Job is deployed successfully, run the command: kubectl get pods -n tcss | grep init-tcss-agent.
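If the Job ran successfully, its Pod shows the Completed status (the Pod name suffix and age are illustrative):

init-tcss-agent-xxxxx   0/1   Completed   0   3m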

5.2 Check if the DaemonSet is deployed successfully.
To check if the DaemonSet is created successfully, run the command: kubectl get daemonset -A -l k8s-app=yunjing-agent.
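If the DaemonSet was created, the output is similar to the following (the node counts depend on your cluster size):

NAMESPACE     NAME            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-system   yunjing-agent   3         3         3       3            3           <none>          5m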

To check if the DaemonSet is deployed successfully, run the command: kubectl get pods -A -l k8s-app=yunjing-agent.
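If the DaemonSet is deployed successfully, one yunjing-agent Pod is Running on each node (the Pod name suffixes are illustrative):

NAMESPACE     NAME                  READY   STATUS    RESTARTS   AGE
kube-system   yunjing-agent-xxxxx   1/1     Running   0          5m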

5.3 Check if the Deployment workload is deployed successfully.
To check if the Deployment is created successfully, run the command: kubectl get deployment -n tcss.
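If the Deployment was created, the output is similar to the following (the age value is illustrative):

NAME         READY   UP-TO-DATE   AVAILABLE   AGE
tcss-asset   1/1     1            1           5m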

To check if the Deployment is deployed successfully, run the command: kubectl get pods -n tcss | grep tcss-asset.
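If the Deployment is deployed successfully, the tcss-asset Pod shows the Running status (the Pod name suffix and age are illustrative):

tcss-asset-xxxxxxxxxx-xxxxx   1/1   Running   0   5m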


