On Kubernetes, plenty can go wrong between applying a Deployment and having it actually serve traffic; if you are interested, see the visual guide to troubleshooting Kubernetes Deployments (2021 Chinese edition)[1]. As that guide also shows, these problems leave traces, and the error messages usually point you straight at a fix. With the rise of ChatGPT, LLM-based text-generation projects keep appearing, and k8sgpt[2] is one of them.
k8sgpt is a tool that scans a Kubernetes cluster and diagnoses and triages its problems. It encodes SRE experience into its analyzers and uses AI to help extract and enrich the relevant information.
It ships with a large set of built-in analyzers.
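To see which analyzers (filters) are available in your installation, the CLI can list them. The command below is a sketch; the exact filter set and output depend on the k8sgpt version you install.
k8sgpt filters list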
k8sgpt's capabilities are exposed through its CLI, which lets you quickly diagnose errors in the cluster.
k8sgpt analyze --explain --filter=Pod --namespace=default --output=json
{
  "status": "ProblemDetected",
  "problems": 1,
  "results": [
    {
      "kind": "Pod",
      "name": "default/test",
      "error": [
        {
          "Text": "Back-off pulling image \"flomesh/pipy2\"",
          "Sensitive": []
        }
      ],
      "details": "The Kubernetes system is experiencing difficulty pulling the requested image named \"flomesh/pipy2\". \n\nThe solution may be to check that the image is correctly spelled or to verify that it exists in the specified container registry. Additionally, ensure that the networking infrastructure that connects the container registry and Kubernetes system is working properly. Finally, check if there are any access restrictions or credentials required to pull the image and ensure they are provided correctly.",
      "parentObject": "test"
    }
  ]
}
However, having to run a command every time you want a diagnosis is tedious and limiting. What most of us really want is for problems to be detected and diagnosed automatically. That is exactly what today's topic, k8sgpt-operator[3], provides.
In short, k8sgpt-operator runs k8sgpt in the cluster in an automated fashion. It provides two CRDs: K8sGPT and Result. The former configures k8sgpt and its behavior; the latter presents the diagnosis for a problematic resource.
apiVersion: core.k8sgpt.ai/v1alpha1
kind: K8sGPT
metadata:
  name: k8sgpt-sample
  namespace: kube-system
spec:
  model: gpt-3.5-turbo
  backend: OpenAI
  noCache: false
  version: v0.2.7
  enableAI: true
  secret:
    name: k8sgpt-sample-secret
    key: openai-api-key
The lab environment uses a k3s cluster.
export INSTALL_K3S_VERSION=v1.23.8+k3s2
curl -sfL https://get.k3s.io | sh -s - --disable traefik --disable local-storage --disable servicelb --write-kubeconfig-mode 644 --write-kubeconfig ~/.kube/config
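Once the k3s install completes, a quick sanity check with standard kubectl (purely an optional verification step) confirms the node is Ready:
kubectl get nodes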
Install k8sgpt-operator:
helm repo add k8sgpt https://charts.k8sgpt.ai/
helm repo update
helm install release k8sgpt/k8sgpt-operator -n openai --create-namespace
Once installed, you can see the two CRDs that ship with the operator: k8sgpts and results.
kubectl api-resources | grep -i gpt
k8sgpts core.k8sgpt.ai/v1alpha1 true K8sGPT
results core.k8sgpt.ai/v1alpha1 true Result
Before getting started, generate an OpenAI API key[4] and store it in a Secret.
OPENAI_TOKEN=xxxx
kubectl create secret generic k8sgpt-sample-secret --from-literal=openai-api-key=$OPENAI_TOKEN -n openai
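If you want to double-check the Secret before moving on, standard kubectl will do; this step is optional:
kubectl get secret k8sgpt-sample-secret -n openai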
Next, create the K8sGPT resource.
kubectl apply -n openai -f - << EOF
apiVersion: core.k8sgpt.ai/v1alpha1
kind: K8sGPT
metadata:
  name: k8sgpt-sample
spec:
  model: gpt-3.5-turbo
  backend: openai
  noCache: false
  version: v0.2.7
  enableAI: true
  secret:
    name: k8sgpt-sample-secret
    key: openai-api-key
EOF
After running the command above, a Deployment named k8sgpt-deployment is created automatically in the openai namespace.
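You can confirm it with standard kubectl; depending on the operator version, additional resources may also be created:
kubectl get deployment k8sgpt-deployment -n openai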
Create a pod that uses an image that does not exist.
kubectl run test --image flomesh/pipy2 -n default
Shortly afterwards, a Result named defaulttest appears in the openai namespace.
kubectl get result -n openai
NAME AGE
defaulttest 5m7s
Its details contain the diagnosis and identify the resource that has the problem.
kubectl get result -n openai defaulttest -o yaml
apiVersion: core.k8sgpt.ai/v1alpha1
kind: Result
metadata:
  creationTimestamp: "2023-05-02T09:00:32Z"
  generation: 1
  name: defaulttest
  namespace: openai
  resourceVersion: "1466"
  uid: 2ee27c26-61c1-4ef5-ae27-e1301a40cd56
spec:
  details: "The error message is indicating that Kubernetes is having trouble pulling
    the image \"flomesh/pipy2\" and is therefore backing off from trying to do so.
    \n\nThe solution to this issue would be to check that the image exists and that
    the spelling and syntax of the image name is correct. Additionally, check that
    the image is accessible from the Kubernetes cluster and that any required authentication
    or authorization is in place. If the issue persists, it may be necessary to troubleshoot
    the network connectivity between the Kubernetes cluster and the image repository."
  error:
  - text: Back-off pulling image "flomesh/pipy2"
  kind: Pod
  name: default/test
  parentObject: test
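To clean up the experiment, delete the test pod (or, assuming the intended image was flomesh/pipy, recreate it with the corrected image name); this is standard kubectl:
kubectl delete pod test -n default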
[1] A visual guide to troubleshooting Kubernetes Deployments (2021 Chinese edition): https://atbug.com/troubleshooting-kubernetes-deployment-zh-v2/
[2] k8sgpt: https://github.com/k8sgpt-ai/k8sgpt
[3] k8sgpt-operator: https://github.com/k8sgpt-ai/k8sgpt-operator
[4] OpenAI API key: https://platform.openai.com/account/api-keys