A local k8s cluster can have a LoadBalancer too
When testing against a local cluster, we are often told that the Service type LoadBalancer is not supported and that we have to fall back to NodePort instead. Annoying, isn't it?
MetalLB to the rescue!
MetalLB is a load-balancer implementation for bare-metal Kubernetes clusters, using standard routing protocols. In layer 2 mode, one node takes ownership of each service IP and answers ARP requests for it on the local network, so the address becomes reachable from other machines on the same LAN.
Prerequisites: a running cluster that does not already ship another load-balancer implementation, plus a range of spare IPv4 addresses on the node network for MetalLB to hand out (see the official requirements for details).
Installation
Following the official installation guide, install via manifests:
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
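One extra preparation step from the official install guide: if kube-proxy runs in IPVS mode, strict ARP must be enabled before installing MetalLB. A sketch of the relevant kube-proxy ConfigMap fields (check your cluster's actual kube-proxy configuration before editing; clusters using the default iptables mode can skip this):

```yaml
# kube-proxy ConfigMap (namespace kube-system), relevant fields only
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true   # required by MetalLB when kube-proxy uses IPVS
```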
Configure layer 2 mode
Following the Layer 2 Configuration docs, create a ConfigMap with the following content:
# config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.32.64/26
kubectl apply -f config.yaml
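The addresses pool is where MetalLB allocates EXTERNAL-IPs from; the /26 above spans 64 addresses, 192.168.32.64 through 192.168.32.127. A quick sanity check with Python's standard ipaddress module:

```python
import ipaddress

# the pool handed to MetalLB in config.yaml
pool = ipaddress.ip_network("192.168.32.64/26")

print(pool.num_addresses)      # 64
print(pool[0], "-", pool[-1])  # 192.168.32.64 - 192.168.32.127
```

Pick a range that is routable on your LAN but outside your DHCP server's lease range, so the two never hand out the same address.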
Wait for the MetalLB pods to become ready:
kubectl wait --for=condition=Ready pods --all -n metallb-system
# pod/controller-57fd9c5bb-d5z9j condition met
# pod/speaker-6hz2h condition met
# pod/speaker-7pzb4 condition met
# pod/speaker-trr9v condition met
Test
Create metallb/whoami.yaml with a two-replica whoami Deployment and a Service of type LoadBalancer:
# metallb/whoami.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
  labels:
    app: containous
    name: whoami
spec:
  replicas: 2
  selector:
    matchLabels:
      app: containous
      task: whoami
  template:
    metadata:
      labels:
        app: containous
        task: whoami
    spec:
      containers:
      - name: containouswhoami
        image: containous/whoami
        resources: {}
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  ports:
  - name: http
    port: 80
  selector:
    app: containous
    task: whoami
  type: LoadBalancer
Then apply it:
kubectl apply -f whoami.yaml
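By default MetalLB assigns the next free address from the pool. If you would rather pin the service to a particular address, MetalLB (as of this v0.12.x series) honors the spec.loadBalancerIP field; a sketch, where 192.168.32.100 is just an example address and must fall inside the configured pool:

```yaml
# optional: request a specific address from the pool
apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.32.100  # must be inside 192.168.32.64/26
  ports:
  - name: http
    port: 80
  selector:
    app: containous
    task: whoami
```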
Inspect
kubectl get all
# NAME READY STATUS RESTARTS AGE
# pod/whoami-577f459d99-bnnm7 1/1 Running 0 17m
# pod/whoami-577f459d99-j595j 1/1 Running 0 17m
# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
# service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h20m
# service/whoami LoadBalancer 10.105.186.75 192.168.32.64 80:30465/TCP 17m
# NAME READY UP-TO-DATE AVAILABLE AGE
# deployment.apps/whoami 2/2 2 2 17m
# NAME DESIRED CURRENT READY AGE
# replicaset.apps/whoami-577f459d99 2 2 2 17m
As shown, the assigned EXTERNAL-IP is 192.168.32.64, the first address in the pool. The 80:30465/TCP column lists the service port plus the NodePort that Kubernetes still allocates behind a LoadBalancer service by default.
Try it from both the VM and the host machine:
curl 192.168.32.64
# Hostname: whoami-577f459d99-j595j
# IP: 127.0.0.1
# IP: 192.168.235.204
# RemoteAddr: 10.0.2.15:61859
# GET / HTTP/1.1
# Host: 192.168.32.64
# User-Agent: curl/7.64.1
# Accept: */*
If the host machine is behind a network proxy, edit ~/.zshrc to add 192.168.32.64 to no_proxy:
# host machine: ~/.zshrc
export http_proxy=http://127.0.0.1:7890
export https_proxy=http://127.0.0.1:7890
export no_proxy=127.0.0.1,localhost,192.168.0.0/16,172.16.0.0/12,10.0.0.0/8 # basic
export no_proxy=aliyuncs.com,kubernetes.docker.internal,$no_proxy # docker
export no_proxy=192.168.32.64,$no_proxy # k8s
# on the host
source ~/.zshrc
Then run the curl again.
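Incidentally, 192.168.32.64 already falls inside the 192.168.0.0/16 entry above, but many tools (older curl versions among them) match no_proxy entries as literal host/suffix strings rather than CIDR ranges, so listing the exact IP is the safe bet. The containment itself is easy to confirm:

```python
import ipaddress

# the MetalLB-assigned IP vs. the LAN CIDR already present in no_proxy
ip = ipaddress.ip_address("192.168.32.64")
lan = ipaddress.ip_network("192.168.0.0/16")

print(ip in lan)  # True
```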
Cleanup
kubectl delete -f whoami.yaml