
Lightweight Kubernetes Multi-Cluster Management System (Part 1)

问道 · 2023-01-28

The project connects to Kubernetes clusters through client-go and performs create, read, update and delete (CRUD) operations on them, exposing those operations as a RESTful API.

It also adds a cluster module that stores the access information for each cluster; the cluster-management operations are combined with the CRUD operations on cluster resource objects.

Introduction to client-go

The client-go clients

client-go is a client library for calling the Kubernetes cluster resource-object APIs: through it you can create, read, update and delete resource objects in a cluster (Deployments, Services, Ingresses, ReplicaSets, Pods, Namespaces, Nodes, CRDs, and so on). Most secondary development that wraps the Kubernetes APIs is built on this package.

client-go is used not only by Kubernetes itself but also heavily across the surrounding ecosystem, for example by kubectl and etcd-operator. It provides four client objects for talking to the Kubernetes API server: RESTClient, DiscoveryClient, ClientSet and DynamicClient.

RESTClient is the most basic client. It wraps the HTTP request machinery and implements a RESTful-style API; ClientSet, DynamicClient and DiscoveryClient are all built on top of RESTClient.
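As a minimal sketch of using RESTClient directly (not from the original article; it assumes a kubeconfig at the default path, clientcmd.RecommendedHomeFile), note how the API path, group/version and serializer must be set by hand before the request is built:

package main

import (
  "context"
  "fmt"

  corev1 "k8s.io/api/core/v1"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  "k8s.io/client-go/kubernetes/scheme"
  "k8s.io/client-go/rest"
  "k8s.io/client-go/tools/clientcmd"
)

func main() {
  config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
  if err != nil {
    panic(err)
  }
  // RESTClient is low-level: fill in the API path, group/version
  // and serializer manually before building the client.
  config.APIPath = "api"
  config.GroupVersion = &corev1.SchemeGroupVersion
  config.NegotiatedSerializer = scheme.Codecs

  restClient, err := rest.RESTClientFor(config)
  if err != nil {
    panic(err)
  }

  // GET /api/v1/namespaces/default/pods?limit=10
  result := &corev1.PodList{}
  err = restClient.Get().
    Namespace("default").
    Resource("pods").
    VersionedParams(&metav1.ListOptions{Limit: 10}, scheme.ParameterCodec).
    Do(context.TODO()).
    Into(result)
  if err != nil {
    panic(err)
  }
  for _, pod := range result.Items {
    fmt.Println(pod.Name)
  }
}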

ClientSet builds on RESTClient and adds management of Resources and Versions. Each Resource can be thought of as a client of its own, and a ClientSet is a collection of such clients, with every Resource and Version exposed to the developer as a function. ClientSet can only handle built-in Kubernetes resources; it is generated automatically by the client-gen code generator.

The biggest difference between DynamicClient and ClientSet is that ClientSet can only access the resources built into Kubernetes (i.e. the resources inside the client collection) and cannot access CRD custom resources directly, whereas DynamicClient can handle every resource object in Kubernetes, built-in resources and CRD custom resources alike.
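A minimal DynamicClient sketch (again an added example assuming the default kubeconfig): resources are addressed by GroupVersionResource and come back as unstructured objects, which is exactly what lets it handle CRDs that ClientSet knows nothing about:

package main

import (
  "context"
  "fmt"

  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  "k8s.io/apimachinery/pkg/runtime/schema"
  "k8s.io/client-go/dynamic"
  "k8s.io/client-go/tools/clientcmd"
)

func main() {
  config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
  if err != nil {
    panic(err)
  }
  dynClient, err := dynamic.NewForConfig(config)
  if err != nil {
    panic(err)
  }

  // Any GVR works here, including a CRD such as
  // {Group: "example.com", Version: "v1", Resource: "crontabs"}.
  gvr := schema.GroupVersionResource{Version: "v1", Resource: "pods"}
  list, err := dynClient.Resource(gvr).Namespace("default").
    List(context.TODO(), metav1.ListOptions{Limit: 10})
  if err != nil {
    panic(err)
  }
  for _, item := range list.Items {
    fmt.Println(item.GetName()) // generic accessor on unstructured.Unstructured
  }
}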

DiscoveryClient is the discovery client: it discovers the resource groups, resource versions and resource information (i.e. Groups, Versions, Resources) that kube-apiserver supports.
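A minimal DiscoveryClient sketch (added here under the same default-kubeconfig assumption), listing every Group/Version/Resource the API server serves, essentially what kubectl api-resources shows:

package main

import (
  "fmt"

  "k8s.io/client-go/discovery"
  "k8s.io/client-go/tools/clientcmd"
)

func main() {
  config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
  if err != nil {
    panic(err)
  }
  discoveryClient, err := discovery.NewDiscoveryClientForConfig(config)
  if err != nil {
    panic(err)
  }
  // Ask kube-apiserver for all supported groups, versions and resources.
  _, apiResourceLists, err := discoveryClient.ServerGroupsAndResources()
  if err != nil {
    panic(err)
  }
  for _, list := range apiResourceLists {
    for _, r := range list.APIResources {
      fmt.Printf("%s/%s\n", list.GroupVersion, r.Name)
    }
  }
}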

Before client-go can operate on Kubernetes resource objects, it must first obtain the Kubernetes configuration, i.e. $HOME/.kube/config.

The whole call chain is:

kubeconfig → rest.Config → clientset → a specific client (e.g. CoreV1Client) → a specific resource object (e.g. Pod) → RESTClient → http.Client → HTTP request sent and response received

CRUD and other operations on Kubernetes resource objects are performed through the different clients inside the clientset and the resource-object methods each client exposes; commonly used clients include CoreV1Client, AppsV1beta1Client and ExtensionsV1beta1Client. The sketch below maps each step of the chain onto code.
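A sketch, assuming the default kubeconfig location; the pod name my-pod is a placeholder:

package main

import (
  "context"
  "fmt"

  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  "k8s.io/client-go/kubernetes"
  "k8s.io/client-go/tools/clientcmd"
)

func main() {
  // kubeconfig → rest.Config
  config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
  if err != nil {
    panic(err)
  }
  // rest.Config → clientset
  clientset, err := kubernetes.NewForConfig(config)
  if err != nil {
    panic(err)
  }
  // clientset → CoreV1Client → pod resource object
  podClient := clientset.CoreV1().Pods("default")
  // → RESTClient → http.Client → HTTP GET sent, response decoded into a Pod
  pod, err := podClient.Get(context.TODO(), "my-pod", metav1.GetOptions{})
  if err != nil {
    panic(err)
  }
  fmt.Println(pod.Name, pod.Status.Phase)
}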

kubeconfig

kubeconfig manages the configuration for reaching a cluster's kube-apiserver. The clients introduced above all go through kubeconfig to access the cluster, in two steps:

  • Step 1: load the kubeconfig configuration;
  • Step 2: merge multiple kubeconfig configurations.

Source location: staging/src/k8s.io/client-go/tools/clientcmd/loader.go

func (rules *ClientConfigLoadingRules) Load() (*clientcmdapi.Config, error) {
  if err := rules.Migrate(); err != nil {
    return nil, err
  }

  errlist := []error{}
  missingList := []string{}

  kubeConfigFiles := []string{}

  // Make sure a file we were explicitly told to use exists
  if len(rules.ExplicitPath) > 0 {
    if _, err := os.Stat(rules.ExplicitPath); os.IsNotExist(err) {
      return nil, err
    }
    kubeConfigFiles = append(kubeConfigFiles, rules.ExplicitPath)

  } else {
    kubeConfigFiles = append(kubeConfigFiles, rules.Precedence...)
  }

  kubeconfigs := []*clientcmdapi.Config{}
  // read and cache the config files so that we only look at them once
  for _, filename := range kubeConfigFiles {
    if len(filename) == 0 {
      // no work to do
      continue
    }

    config, err := LoadFromFile(filename)

    if os.IsNotExist(err) {
      // skip missing files
      // Add to the missing list to produce a warning
      missingList = append(missingList, filename)
      continue
    }

    if err != nil {
      errlist = append(errlist, fmt.Errorf("error loading config file \"%s\": %v", filename, err))
      continue
    }

    kubeconfigs = append(kubeconfigs, config)
  }

  if rules.WarnIfAllMissing && len(missingList) > 0 && len(kubeconfigs) == 0 {
    klog.Warningf("Config not found: %s", strings.Join(missingList, ", "))
  }

  // first merge all of our maps
  mapConfig := clientcmdapi.NewConfig()

  for _, kubeconfig := range kubeconfigs {
    mergo.MergeWithOverwrite(mapConfig, kubeconfig)
  }

  // merge all of the struct values in the reverse order so that priority is given correctly
  // errors are not added to the list the second time
  nonMapConfig := clientcmdapi.NewConfig()
  for i := len(kubeconfigs) - 1; i >= 0; i-- {
    kubeconfig := kubeconfigs[i]
    mergo.MergeWithOverwrite(nonMapConfig, kubeconfig)
  }

  // since values are overwritten, but maps values are not, we can merge the non-map config on top of the map config and
  // get the values we expect.
  config := clientcmdapi.NewConfig()
  mergo.MergeWithOverwrite(config, mapConfig)
  mergo.MergeWithOverwrite(config, nonMapConfig)

  if rules.ResolvePaths() {
    if err := ResolveLocalPaths(config); err != nil {
      errlist = append(errlist, err)
    }
  }
  return config, utilerrors.NewAggregate(errlist)
}

Loading and merging the config files both happen inside this single Load method: kubeConfigFiles records the config file paths to read, the loop loads each file into a config object via LoadFromFile, the resulting objects are collected in kubeconfigs, and finally mergo.MergeWithOverwrite merges them all into one config.
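A small sketch of driving the same loading rules directly (an added example; the two file paths are placeholders). Precedence behaves like the KUBECONFIG environment variable: earlier files win for scalar values, while the clusters/contexts/users maps from all files are merged together:

package main

import (
  "fmt"

  "k8s.io/client-go/tools/clientcmd"
)

func main() {
  rules := &clientcmd.ClientConfigLoadingRules{
    // Earlier files take precedence for single values;
    // map entries (clusters, contexts, users) are merged across files.
    Precedence: []string{"/tmp/cluster-a.kubeconfig", "/tmp/cluster-b.kubeconfig"},
  }
  merged, err := rules.Load() // the Load method shown above
  if err != nil {
    panic(err)
  }
  for name := range merged.Contexts {
    fmt.Println("context:", name)
  }
  fmt.Println("current-context:", merged.CurrentContext)
}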

ClientSet

ClientSet builds on RESTClient and adds management of Resources and Versions. Each Resource can be thought of as a client of its own, and a ClientSet is a collection of such clients, with every Resource and Version exposed to the developer as a function.

Note: ClientSet can only access Kubernetes' own built-in resources (i.e. the resources inside the client collection); it cannot access CRD custom resources directly.

package main

import (
  "context"
  "fmt"

  apiv1 "k8s.io/api/core/v1"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  "k8s.io/client-go/kubernetes"
  "k8s.io/client-go/tools/clientcmd"
)

func main() {
  config, err := clientcmd.BuildConfigFromFlags("", "./config")
  if err != nil {
    panic(err)
  }
  clientset, err := kubernetes.NewForConfig(config)
  if err != nil {
    panic(err)
  }
  podClient := clientset.CoreV1().Pods(apiv1.NamespaceDefault)
  // List requires a context in client-go ≥ v0.18
  podList, err := podClient.List(context.TODO(), metav1.ListOptions{Limit: 10})
  if err != nil {
    panic(err)
  }
  for _, pod := range podList.Items {
    fmt.Printf("NAMESPACE: %v \nNAME: %v \nSTATUS: %v \n", pod.Namespace, pod.Name, pod.Status)
  }
}

clientcmd.BuildConfigFromFlags loads the config file and kubernetes.NewForConfig creates the clientset object; the clientset is then used to operate on the cluster, for example listing the pods in the default namespace as above.

A quick and easy Kubernetes lab environment

Experiments need a Kubernetes cluster, but a traditional cluster is cumbersome to set up, typically runs one master and two workers, and consumes a lot of resources. Development work doesn't need that much ceremony, so the lab cluster here uses k3s, which only needs a minimal 1 vCPU / 2 GB host.

K3s is a lightweight Kubernetes distribution that is very simple to use: a single install script puts K3s onto the host.

Installing K3s from mirrors in China

To avoid slow and unstable downloads from overseas servers, the K3s community has synchronized all the resources K3s needs to servers in China, so K3s can be installed from these domestic mirrors, improving both install speed and reliability.

The mirror site is maintained by the K3s community; syncing resources to servers in China may occasionally lag or miss items. If you notice that, please report it on the Chinese forum (https://forums.rancher.cn).

Which resources has the K3s community synchronized to China?

  • The K3s install script
  • The channel resolution files
  • The K3s binaries
  • The system images K3s depends on

Installing with the script

[root@VM-4-9-centos ~]# curl -sfL \
>      https://rancher-mirror.oss-cn-beijing.aliyuncs.com/k3s/k3s-install.sh | \
>      INSTALL_K3S_MIRROR=cn sh -s - \
>      --system-default-registry "registry.cn-hangzhou.aliyuncs.com"
[INFO]  Finding release for channel stable
[INFO]  Using v1.25.4+k3s1 as release
[INFO]  Downloading hash rancher-mirror.rancher.cn/k3s/v1.25.4-k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary rancher-mirror.rancher.cn/k3s/v1.25.4-k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
docker                                                                                                                                                                   | 3.5 kB  00:00:00     
docker-ce-stable                                                                                                                                                         | 3.5 kB  00:00:00     
epel                                                                                                                                                                     | 4.7 kB  00:00:00     
extras                                                                                                                                                                   | 2.9 kB  00:00:00     
os                                                                                                                                                                       | 3.6 kB  00:00:00     
updates                                                                                                                                                                  | 2.9 kB  00:00:00     
(1/6): docker/primary_db                                                                                                                                                 |  91 kB  00:00:00     
(2/6): epel/7/x86_64/group_gz                                                                                                                                            |  98 kB  00:00:00     
(3/6): epel/7/x86_64/updateinfo                                                                                                                                          | 1.0 MB  00:00:00     
(4/6): epel/7/x86_64/primary_db                                                                                                                                          | 7.0 MB  00:00:00     
(5/6): updates/7/x86_64/primary_db                                                                                                                                       |  19 MB  00:00:01     
(6/6): docker-ce-stable/7/x86_64/primary_db                                                                                                                              |  91 kB  00:00:05     
Package yum-utils-1.1.31-54.el7_8.noarch already installed and latest version
Nothing to do
Loaded plugins: fastestmirror, langpacks
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
rancher-k3s-common-stable                                                                                                                                                | 2.9 kB  00:00:00     
rancher-k3s-common-stable/primary_db                                                                                                                                     | 3.8 kB  00:00:00     
Resolving Dependencies
There are unfinished transactions remaining. You might consider running yum-complete-transaction, or "yum-complete-transaction --cleanup-only" and "yum history redo last", first to finish them. If those don't work you'll have to try removing/installing packages by hand (maybe package-cleanup can help).
--> Running transaction check
---> Package k3s-selinux.noarch 0:1.2-2.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================================================================================================================================
 Package                                     Arch                                   Version                                     Repository                                                 Size
================================================================================================================================================================================================
Installing:
 k3s-selinux                                 noarch                                 1.2-2.el7                                   rancher-k3s-common-stable                                  16 k

Transaction Summary
================================================================================================================================================================================================
Install  1 Package

Total download size: 16 k
Installed size: 94 k
Downloading packages:
warning: /var/cache/yum/x86_64/7/rancher-k3s-common-stable/packages/k3s-selinux-1.2-2.el7.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID e257814a: NOKEY
Public key for k3s-selinux-1.2-2.el7.noarch.rpm is not installed
k3s-selinux-1.2-2.el7.noarch.rpm                                                                                                                                         |  16 kB  00:00:00     
Retrieving key from https://rpm.rancher.io/public.key
Importing GPG key 0xE257814A:
 Userid     : "Rancher (CI) <ci@rancher.com>"
 Fingerprint: c8cf f216 4551 26e9 b9c9 18be 925e a29a e257 814a
 From       : https://rpm.rancher.io/public.key
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : k3s-selinux-1.2-2.el7.noarch                                                                                                                                                1/1 
  Verifying  : k3s-selinux-1.2-2.el7.noarch                                                                                                                                                1/1 

Installed:
  k3s-selinux.noarch 0:1.2-2.el7                                                                                                                                                                

Complete!
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink from /etc/systemd/system/multi-user.target.wants/k3s.service to /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s
[root@VM-4-9-centos ~]# systemctl status k3s
● k3s.service - Lightweight Kubernetes
   Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2023-01-13 12:46:12 CST; 24s ago
     Docs: https://k3s.io
  Process: 6473 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
  Process: 6454 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
  Process: 6451 ExecStartPre=/bin/sh -xc ! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service (code=exited, status=0/SUCCESS)
 Main PID: 6476 (k3s-server)
    Tasks: 105
   Memory: 884.4M
   CGroup: /system.slice/k3s.service
           ├─6476 /usr/local/bin/k3s server
           ├─6587 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/co...
           ├─7601 /var/lib/rancher/k3s/data/7c994f47fd344e1637da337b92c51433c255b387d207b30b3e0262779457afe4/bin/containerd-shim-runc-v2 -namespace k8s.io -id e80172081d95e02dcc5e5b9ad14403...
           ├─7705 /var/lib/rancher/k3s/data/7c994f47fd344e1637da337b92c51433c255b387d207b30b3e0262779457afe4/bin/containerd-shim-runc-v2 -namespace k8s.io -id 313b2efd8f921f314c1719ce2496d8...
           ├─7773 /var/lib/rancher/k3s/data/7c994f47fd344e1637da337b92c51433c255b387d207b30b3e0262779457afe4/bin/containerd-shim-runc-v2 -namespace k8s.io -id 8eadb368dded3bc0184d7d25e251fb...
           ├─7799 /var/lib/rancher/k3s/data/7c994f47fd344e1637da337b92c51433c255b387d207b30b3e0262779457afe4/bin/containerd-shim-runc-v2 -namespace k8s.io -id 12d9f5d7f6a162045a645e3825d47e...
           ├─7800 /var/lib/rancher/k3s/data/7c994f47fd344e1637da337b92c51433c255b387d207b30b3e0262779457afe4/bin/containerd-shim-runc-v2 -namespace k8s.io -id 737ead2c0fbb4a8627968814437a49...
           └─8429 /var/lib/rancher/k3s/data/7c994f47fd344e1637da337b92c51433c255b387d207b30b3e0262779457afe4/bin/unpigz -d -c

Jan 13 12:46:27 VM-4-9-centos k3s[6476]: E0113 12:46:27.794864    6476 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get li...r APIService
Jan 13 12:46:27 VM-4-9-centos k3s[6476]: I0113 12:46:27.794875    6476 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Jan 13 12:46:27 VM-4-9-centos k3s[6476]: W0113 12:46:27.795895    6476 handler_proxy.go:105] no RequestInfo found in the context
Jan 13 12:46:27 VM-4-9-centos k3s[6476]: E0113 12:46:27.795984    6476 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve open... unavailable
Jan 13 12:46:27 VM-4-9-centos k3s[6476]: , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
Jan 13 12:46:27 VM-4-9-centos k3s[6476]: I0113 12:46:27.795999    6476 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Jan 13 12:46:39 VM-4-9-centos k3s[6476]: I0113 12:46:39.280428    6476 job_controller.go:510] enqueueing job kube-system/helm-install-traefik-crd
Jan 13 12:46:39 VM-4-9-centos k3s[6476]: I0113 12:46:39.357202    6476 job_controller.go:510] enqueueing job kube-system/helm-install-traefik
Jan 13 12:46:40 VM-4-9-centos k3s[6476]: I0113 12:46:40.484801    6476 job_controller.go:510] enqueueing job kube-system/helm-install-traefik
Jan 13 12:46:40 VM-4-9-centos k3s[6476]: I0113 12:46:40.547706    6476 job_controller.go:510] enqueueing job kube-system/helm-install-traefik-crd
Jan 13 12:46:42 VM-4-9-centos k3s[6476]: I0113 12:46:42.091825    6476 scope.go:115] "RemoveContainer" containerID="9b5c8ddbad63bd2a65749bde4aa6c07edc80466276f9487a421c973ad34a4998"
Jan 13 12:46:42 VM-4-9-centos k3s[6476]: I0113 12:46:42.248162    6476 job_controller.go:510] enqueueing job kube-system/helm-install-traefik
Jan 13 12:46:43 VM-4-9-centos k3s[6476]: I0113 12:46:43.385377    6476 trace.go:205] Trace[1639472897]: "Create" url:/api/v1/namespaces/kube-system/secrets,user-agent:Go-http-client/2.0,au...
Jan 13 12:46:43 VM-4-9-centos k3s[6476]: Trace[1639472897]: ---"Write to database call finished" len:78320,err:<nil> 687ms (12:46:43.384)
Jan 13 12:46:43 VM-4-9-centos k3s[6476]: Trace[1639472897]: [689.967749ms] [689.967749ms] END
Jan 13 12:46:43 VM-4-9-centos k3s[6476]: I0113 12:46:43.668908    6476 trace.go:205] Trace[459355219]: "GuaranteedUpdate etcd3" audit-id:83646a7c-8cae-4cbd-9e97-4c2c83b375e1,ke...ime: 596ms):
Jan 13 12:46:43 VM-4-9-centos k3s[6476]: Trace[459355219]: ---"Txn call finished" err:<nil> 593ms (12:46:43.668)
Jan 13 12:46:43 VM-4-9-centos k3s[6476]: Trace[459355219]: [596.747443ms] [596.747443ms] END
Jan 13 12:46:43 VM-4-9-centos k3s[6476]: I0113 12:46:43.670046    6476 trace.go:205] Trace[1616843808]: "Patch" url:/api/v1/namespaces/kube-system/pods/helm-install-traefik-kxcp9/status,us...
Jan 13 12:46:43 VM-4-9-centos k3s[6476]: Trace[1616843808]: ---"Object stored in database" 594ms (12:46:43.668)
Jan 13 12:46:43 VM-4-9-centos k3s[6476]: Trace[1616843808]: [598.010992ms] [598.010992ms] END
Jan 13 12:46:43 VM-4-9-centos k3s[6476]: I0113 12:46:43.672647    6476 job_controller.go:510] enqueueing job kube-system/helm-install-traefik
Jan 13 12:46:43 VM-4-9-centos k3s[6476]: I0113 12:46:43.757924    6476 trace.go:205] Trace[727563776]: "GuaranteedUpdate etcd3" audit-id:e0d61d36-e7c7-4118-8747-cd1ec82e6835,ke...ime: 504ms):
Jan 13 12:46:43 VM-4-9-centos k3s[6476]: Trace[727563776]: ---"Txn call finished" err:<nil> 501ms (12:46:43.757)
Jan 13 12:46:43 VM-4-9-centos k3s[6476]: Trace[727563776]: [504.20138ms] [504.20138ms] END
Jan 13 12:46:43 VM-4-9-centos k3s[6476]: I0113 12:46:43.759312    6476 trace.go:205] Trace[1270597021]: "GuaranteedUpdate etcd3" audit-id:04ff8cec-5cab-464f-a380-3f41ea7cc532,k...ime: 509ms):
Jan 13 12:46:43 VM-4-9-centos k3s[6476]: Trace[1270597021]: ---"Txn call finished" err:<nil> 506ms (12:46:43.759)
Jan 13 12:46:43 VM-4-9-centos k3s[6476]: Trace[1270597021]: [509.049327ms] [509.049327ms] END
Jan 13 12:46:43 VM-4-9-centos k3s[6476]: I0113 12:46:43.759637    6476 trace.go:205] Trace[453658084]: "Patch" url:/api/v1/nodes/vm-4-9-centos/status,user-agent:k3s/v1.25.4+k3s1 (linux/amd...
Jan 13 12:46:43 VM-4-9-centos k3s[6476]: Trace[453658084]: ---"Object stored in database" 502ms (12:46:43.757)
Jan 13 12:46:43 VM-4-9-centos k3s[6476]: Trace[453658084]: [547.022256ms] [547.022256ms] END
Jan 13 12:46:43 VM-4-9-centos k3s[6476]: I0113 12:46:43.759861    6476 trace.go:205] Trace[1652397100]: "Update" url:/apis/batch/v1/namespaces/kube-system/jobs/helm-install-traefik/status,...
Jan 13 12:46:43 VM-4-9-centos k3s[6476]: Trace[1652397100]: ---"Write to database call finished" len:5074,err:<nil> 509ms (12:46:43.759)
Jan 13 12:46:43 VM-4-9-centos k3s[6476]: Trace[1652397100]: [510.076026ms] [510.076026ms] END
Jan 13 12:46:43 VM-4-9-centos k3s[6476]: I0113 12:46:43.812298    6476 job_controller.go:510] enqueueing job kube-system/helm-install-traefik
Jan 13 12:46:44 VM-4-9-centos k3s[6476]: I0113 12:46:44.032551    6476 job_controller.go:510] enqueueing job kube-system/helm-install-traefik
Hint: Some lines were ellipsized, use -l to show in full.

Verifying the installation

The node is Ready:

[root@VM-4-9-centos ~]# kubectl get nodes -o wide
NAME            STATUS   ROLES                  AGE   VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
vm-4-9-centos   Ready    control-plane,master   14d   v1.25.4+k3s1   10.0.4.9      <none>        CentOS Linux 7 (Core)   3.10.0-1160.71.1.el7.x86_64   containerd://1.6.8-k3s1

The namespaces are Active:

[root@VM-4-9-centos ~]# kubectl get ns -o wide -A
NAME              STATUS   AGE
kube-system       Active   14d
default           Active   14d
kube-public       Active   14d
kube-node-lease   Active   14d

The initial pods are healthy:

[root@VM-4-9-centos ~]# kubectl get pods -o wide -A
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE   IP          NODE            NOMINATED NODE   READINESS GATES
kube-system   coredns-fd7f5dc55-74d5d                   1/1     Running     0          14d   10.42.0.4   vm-4-9-centos   <none>           <none>
kube-system   local-path-provisioner-66b5f84849-7lgxx   1/1     Running     0          14d   10.42.0.6   vm-4-9-centos   <none>           <none>
kube-system   helm-install-traefik-crd-cb859            0/1     Completed   0          14d   10.42.0.3   vm-4-9-centos   <none>           <none>
kube-system   helm-install-traefik-kxcp9                0/1     Completed   1          14d   10.42.0.2   vm-4-9-centos   <none>           <none>
kube-system   svclb-traefik-0c07dca9-f7w8j              2/2     Running     0          14d   10.42.0.7   vm-4-9-centos   <none>           <none>
kube-system   metrics-server-b4bb65577-jwklq            1/1     Running     0          14d   10.42.0.5   vm-4-9-centos   <none>           <none>
kube-system   traefik-7db84f77c9-vgbzk                  1/1     Running     0          14d   10.42.0.8   vm-4-9-centos   <none>           <none>

Testing remote access to the cluster

Configure the kubeconfig file locally

The k3s cluster's config file lives at:

[root@VM-4-9-centos ~]# ll /etc/rancher/k3s/k3s.yaml
-rw------- 1 root root 2969 Jan 13 12:46 /etc/rancher/k3s/k3s.yaml

Download k3s.yaml and change the address in server: https://127.0.0.1:6443 to one that is reachable from your machine; make sure the network path is actually open.

Place it in your user's home directory and rename it to config. This exact location isn't mandatory; anything works as long as the file can be read later. A quick verification sketch follows.
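As a sanity check (an added sketch, assuming the file was saved to the default ~/.kube/config location), load the file and print the server address to confirm the edit took effect:

package main

import (
  "fmt"
  "path/filepath"

  "k8s.io/client-go/tools/clientcmd"
  "k8s.io/client-go/util/homedir"
)

func main() {
  path := filepath.Join(homedir.HomeDir(), ".kube", "config")
  cfg, err := clientcmd.LoadFromFile(path)
  if err != nil {
    panic(err)
  }
  for name, cluster := range cfg.Clusters {
    // Should print the reachable address substituted for https://127.0.0.1:6443
    fmt.Printf("cluster %q -> %s\n", name, cluster.Server)
  }
}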

Test the connection to the k3s cluster

provider/k8s/client.go

Build a cluster client struct and implement some basic client-go-backed methods.

var (
  DEFAULT_NAMESPACE = "default"
)

func NewClient(kubeConfigYaml string) (*Client, error) {
  // load the kubeconfig configuration
  kubeConf, err := clientcmd.Load([]byte(kubeConfigYaml))
  if err != nil {
    return nil, err
  }

  // build the rest.Config
  restConf, err := clientcmd.BuildConfigFromKubeconfigGetter("",
    func() (*clientcmdapi.Config, error) {
      return kubeConf, nil
    },
  )
  if err != nil {
    return nil, err
  }

  // initialize the clientset
  client, err := kubernetes.NewForConfig(restConf)
  if err != nil {
    return nil, err
  }

  // Typed client built on the Interface, e.g.:
  // client.AppsV1().Deployments("default").Create(nil, nil, metav1.CreateOptions{})
  // RESTClient: https://github.com/jindezgm/k8s-src-analysis/blob/master/client-go/rest/Client.md
  // client.RESTClient().Post().Namespace("ns").Body("body").Do(nil).Into(resp).Error()

  return &Client{
    kubeconf: kubeConf,
    restconf: restConf,
    client:   client,
    log:      zap.L().Named("provider.k8s"),
  }, nil
}

type Client struct {
  kubeconf *clientcmdapi.Config
  restconf *rest.Config
  client   *kubernetes.Clientset
  log      logger.Logger
}

func (c *Client) ServerVersion() (string, error) {
  si, err := c.client.ServerVersion()
  if err != nil {
    return "", err
  }

  return si.String(), nil
}

provider/k8s/client_test.go

Test the connection to the cluster and print the cluster version.

// set up dependencies
func init() {
  zap.DevelopmentSetup()

  // find the directory the current file lives in
  //wd, err := os.Getwd()
  //fmt.Println(wd)
  //if err != nil {
  //  panic(err)
  //}

  kc, err := os.ReadFile(filepath.Join("C:\\Users\\zengz\\.kube", "config"))
  if err != nil {
    panic(err)
  }

  client, err = k8s.NewClient(string(kc))
  if err != nil {
    panic(err)
  }

}

var (
  client *k8s.Client
  ctx    = context.Background()
)

func TestServerVersion(t *testing.T) {
  v, err := client.ServerVersion()
  if err != nil {
    t.Fatal(err)
  }
  t.Log(v)
}

The test passes; the connection to the cluster succeeds:

=== RUN   TestServerVersion
    client_test.go:30: v1.25.4+k3s1
--- PASS: TestServerVersion (0.03s)
PASS
