0002 - Ingress High-Availability Deployment
Option 1: Keepalived + Nginx + Ingress ¶
Nginx can act as both load balancer and reverse proxy. Working at layer 7, it can route requests flexibly by domain name, which makes it a good fit for small-to-medium traffic:

- With daily PV below 10 million and concurrency under 10,000, Nginx is more than capable.
- For large sites or critical services, especially large server farms under very high concurrency, LVS is the better choice: it works at layer 4 and holds a clear performance edge over Nginx at very large traffic volumes.
| Hostname | IP Address | Specs | Software |
|---|---|---|---|
| ha-1 | 10.10.100.80 | 2C / 4G / 1024GB | Keepalived & Nginx (VIP: 10.10.100.100) |
| ha-2 | 10.10.100.90 | 2C / 4G / 1024GB | Keepalived & Nginx |
| k8s-w1 | 10.10.100.40 | 2C / 4G / 1024GB | ingress-controller |
| k8s-w2 | 10.10.100.50 | 2C / 4G / 1024GB | ingress-controller |
| k8s-w3 | 10.10.100.60 | 2C / 4G / 1024GB | ingress-controller |
1. Install and configure Nginx ¶
Run on both ha1 and ha2:
yum install epel-release
yum install nginx keepalived* libnl* popt* nginx-all-modules.noarch -y
Edit nginx.conf:
cat > /etc/nginx/nginx.conf <<"EOF"
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

# Added section: layer-4 (stream) load balancing for the three ingress-controller nodes
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-ingress-controller {
        server 10.10.100.40:80 weight=5 max_fails=3 fail_timeout=30s;   # node1 IP:PORT
        server 10.10.100.50:80 weight=5 max_fails=3 fail_timeout=30s;   # node2 IP:PORT
        server 10.10.100.60:80 weight=5 max_fails=3 fail_timeout=30s;   # node3 IP:PORT
    }

    server {
        listen 80;   # listening port
        proxy_pass k8s-ingress-controller;
    }
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;
}
EOF
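It's worth validating the configuration before starting the service:
nginx -t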
2. Install and configure Keepalived ¶
Differences between the master and backup configuration:

| Parameter | Master (ha1) | Backup (ha2) |
|---|---|---|
| state | MASTER | BACKUP |
| priority | 100 | 90 |
1. Edit keepalived.conf on the master (ha1)
cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
    router_id NGINX_MASTER
}

# health-check script
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_NGINX {
    state MASTER
    interface ens160          # change to the actual NIC name
    virtual_router_id 51      # VRRP virtual router ID; unique per instance
    priority 100              # priority; set to 90 on the backup
    advert_int 1              # VRRP advertisement interval in seconds (default 1)
    authentication {
        auth_type PASS
        auth_pass 123
    }
    # virtual IP
    virtual_ipaddress {
        10.10.100.100/24
    }
    track_script {
        check_nginx
    }
}
EOF
2. Edit keepalived.conf on the backup (ha2)
cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
    router_id NGINX_BACKUP
}

# health-check script
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_NGINX {
    state BACKUP
    interface ens160          # change to the actual NIC name
    virtual_router_id 51      # VRRP virtual router ID; must match the master
    priority 90               # lower than the master's 100
    advert_int 1              # VRRP advertisement interval in seconds (default 1)
    authentication {
        auth_type PASS
        auth_pass 123
    }
    # virtual IP
    virtual_ipaddress {
        10.10.100.100/24
    }
    track_script {
        check_nginx
    }
}
EOF
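On keepalived 2.x the file can be syntax-checked before starting (older builds may lack the flag):
keepalived -t -f /etc/keepalived/keepalived.conf && echo "config OK"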
Create the keepalived health-check script on both ha1 and ha2:
cat > /etc/keepalived/check_nginx.sh <<"EOF"
#!/bin/bash
# Exit 1 when nothing is listening on port 80 so that keepalived triggers a failover.
# (':80 ' with a trailing space avoids false matches on ports like 8080.)
count=$(ss -lnt | grep -c ':80 ')
if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF
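A quick manual run confirms the script's exit code while Nginx is up:
bash /etc/keepalived/check_nginx.sh; echo $?   # expect 0 while something listens on :80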
An alternative check script that first tries to restart Nginx, and only stops keepalived (letting the VIP drift away) if the restart fails:
#!/bin/bash
# 1. Check whether Nginx is alive
counter=$(ps -ef |grep nginx | grep sbin | egrep -cv "grep|$$" )
if [ $counter -eq 0 ]; then
    # 2. If not alive, try to start Nginx
    service nginx start
    sleep 2
    # 3. Wait 2 seconds, then check the Nginx status again
    counter=$(ps -ef |grep nginx | grep sbin | egrep -cv "grep|$$" )
    # 4. If Nginx is still down, stop keepalived so the VIP fails over
    if [ $counter -eq 0 ]; then
        service keepalived stop
    fi
fi
Create a dedicated script user and make the script executable:
useradd keepalived_script
passwd keepalived_script
chown -R keepalived_script:keepalived_script /etc/keepalived/check_nginx.sh
chmod +x /etc/keepalived/check_nginx.sh
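Recent keepalived versions refuse to run check scripts as root when script security is an issue; if the logs show a script-security warning, one common remedy (hedged — verify against your keepalived version's docs) is to declare the dedicated user in global_defs:
global_defs {
    router_id NGINX_MASTER
    enable_script_security
    script_user keepalived_script
}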
3. Start the Nginx and Keepalived services ¶
systemctl daemon-reload
systemctl enable nginx keepalived --now
4. Verify that the VIP is bound ¶
Check where the VIP is bound:
ip -c a
The VIP is bound on the ha1 node.
Stop keepalived and check that the VIP fails over:
After stopping keepalived on the ha1 node, the VIP moves to the ha2 node.
5. Test the Ingress ¶
Create a test Deployment, Service, and Ingress:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: c1
        image: nginx:1.15-alpine
        imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-myapp
spec:
  ingressClassName: nginx
  rules:
  - host: "linuxcdn.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: nginx-service
            port:
              number: 80
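Apply the manifests and confirm the objects are created (the file name below is illustrative):
kubectl apply -f ingress-demo.yaml
kubectl get deploy,svc,ingress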
1. Prepare the web pages served by the containers in each pod
> kubectl get pods |grep nginx-65
nginx-6555d45fcd-27t8k 1/1 Running 0 128m
nginx-6555d45fcd-5j8jf 1/1 Running 0 128m
> kubectl exec -it nginx-6555d45fcd-27t8k -- /bin/sh
echo "ingress web1" > /usr/share/nginx/html/index.html
> kubectl exec -it nginx-6555d45fcd-5j8jf -- /bin/sh
echo "ingress web2" > /usr/share/nginx/html/index.html
2. Simulate client access (first make linuxcdn.com resolve to the VIP 10.10.100.100, e.g. with an /etc/hosts entry on the client)
while true; do
date +%T
curl linuxcdn.com
sleep 1
done
Access results:
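Without touching DNS or /etc/hosts, curl can also pin the hostname to the VIP directly, which is handy for spot checks:
curl --resolve linuxcdn.com:80:10.10.100.100 http://linuxcdn.com/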
Option 2: LVS + Keepalived + Ingress ¶

| Hostname | IP Address | Specs | Software |
|---|---|---|---|
| ha-1 | 10.10.100.80 | 2C / 4G / 1024GB | Keepalived & ipvs (VIP: 10.10.100.100) |
| ha-2 | 10.10.100.90 | 2C / 4G / 1024GB | Keepalived & ipvs |
| k8s-w1 | 10.10.100.40 | 2C / 4G / 1024GB | ingress-controller |
| k8s-w2 | 10.10.100.50 | 2C / 4G / 1024GB | ingress-controller |
| k8s-w3 | 10.10.100.60 | 2C / 4G / 1024GB | ingress-controller |
1. Install and configure ipvsadm ¶
Run on both ha1 and ha2:
yum install ipvsadm libnl* popt* -y
ipvsadm --save > /etc/sysconfig/ipvsadm
systemctl enable ipvsadm.service --now && systemctl restart ipvsadm.service
Load the IPVS kernel modules:
mkdir -p /etc/sysconfig/modules/
cat <<EOF >/etc/sysconfig/modules/ipvs.modules
#!/bin/bash
ipvs_modules_dir="/usr/lib/modules/`uname -r`/kernel/net/netfilter/ipvs"
for i in \`ls \$ipvs_modules_dir | sed -r 's#(.*).ko.xz#\1#'\`; do
    /sbin/modinfo -F filename \$i &> /dev/null
    if [ \$? -eq 0 ]; then
        /sbin/modprobe \$i
    fi
done
EOF
cat <<EOF >/etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
Make the module script executable and run it:
# make it executable
chmod +x /etc/sysconfig/modules/ipvs.modules
# load the IPVS modules
bash /etc/sysconfig/modules/ipvs.modules
# verify the modules are loaded
lsmod | grep ip_vs
2. Install and configure Keepalived ¶
Run on both ha1 and ha2:
1. Install the packages
yum -y install keepalived*
2. Initialize and configure keepalived
On ha1 (master), write /etc/keepalived/keepalived.conf:
cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
    router_id LVS_MASTER_80
}

vrrp_instance VI_1 {
    state MASTER
    interface ens160
    virtual_router_id 51
    priority 100
    advert_int 1
    mcast_src_ip 10.10.100.80
    authentication {
        auth_type PASS
        auth_pass 123
    }
    virtual_ipaddress {
        10.10.100.100
    }
}

virtual_server 10.10.100.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP

    real_server 10.10.100.40 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 10.10.100.50 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 10.10.100.60 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 10.10.100.100 443 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP

    real_server 10.10.100.40 443 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 10.10.100.50 443 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 10.10.100.60 443 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
EOF
On ha2 (backup), write /etc/keepalived/keepalived.conf:
cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
    router_id LVS_BACKUP_90
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    virtual_router_id 51
    priority 95
    advert_int 1
    mcast_src_ip 10.10.100.90
    authentication {
        auth_type PASS
        auth_pass 123
    }
    virtual_ipaddress {
        10.10.100.100
    }
}

virtual_server 10.10.100.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP

    real_server 10.10.100.40 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 10.10.100.50 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 10.10.100.60 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 10.10.100.100 443 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP

    real_server 10.10.100.40 443 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 10.10.100.50 443 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 10.10.100.60 443 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
EOF
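Once both configs are in place, start keepalived on ha1 and ha2 (same as in Option 1):
systemctl enable keepalived.service --now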
3. Suppress ARP for the VIP on the ingress nodes and bind it on loopback ¶
Run on every node where an ingress-controller pod runs. Save the following script as lvs.sh:
#!/bin/bash
# Bind the VIP on loopback and suppress ARP replies for it (required for LVS DR mode).
# Note: iproute2 needs a real device for "dev"; the lo:1 alias goes in via "label".
vip=10.10.100.100
mask=32
dev=lo
label=lo:1

case $1 in
start)
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    /sbin/ip addr add $vip/$mask dev $dev label $label
    /sbin/ip route add $vip dev $dev   # force traffic destined for the VIP through loopback
    echo "The RS Server is Ready!"
    ;;
stop)
    /sbin/ip addr del $vip/$mask dev $dev
    /sbin/ip route del $vip dev $dev
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "The RS Server is Canceled!"
    ;;
*)
    echo "Usage: $(basename $0) start|stop"
    exit 1
    ;;
esac
chmod +x lvs.sh && bash lvs.sh start
Check the resulting route:
ip route show | grep 10.10.100.100
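Also confirm that the ARP sysctls and the loopback VIP took effect:
sysctl net.ipv4.conf.all.arp_ignore net.ipv4.conf.all.arp_announce
ip -c addr show lo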
4. Check the keepalived status and LVS forwarding rules again ¶
journalctl -f -u keepalived
The keepalived log shows the backend real servers being detected and added to the forwarding rules.
while true; do
date +%T
ipvsadm -Ln
sleep 1 # wait 1 second between iterations; adjust as needed
done
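With the VIP active, an end-to-end request can be spot-checked from any client on the network (the Host header is pinned because this lab has no DNS record):
curl -H 'Host: linuxcdn.com' http://10.10.100.100/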
Option 3: LVS + Keepalived + Nginx + Ingress ¶
Request flow ¶
- DNS resolution: the client requests `linuxcdn.com`, and the DNS server resolves the domain to the IP address `1.1.1.1`. This domain-to-IP mapping is what lets clients find the service entry point.
- NAT translation: a NAT rule on the router forwards all requests destined for `1.1.1.1` to the VIP `10.1.1.100`, where LVS takes over. NAT is what steers external requests into the internal service network. (A DNAT rule of this shape is sketched after this list.)
- LVS load balancing: LVS works at layer 4 (transport). Using the configured round-robin (rr) policy, it forwards incoming requests in turn to the backend `nginx-1` and `nginx-2` servers, spreading the traffic simply and evenly.
- Nginx layer-7 balancing and forwarding: nginx works at layer 7 (application). On receiving a request from LVS, it matches the requested domain (wildcard server_name entries allow flexible matching) and forwards it to the ingress controllers. nginx can also act as an ordinary reverse proxy for other business traffic by adding vhost configurations.
| Hostname | IP Address | Specs | Software |
|---|---|---|---|
| ha-1 | 10.10.100.80 | 2C / 4G / 1024GB | Keepalived & ipvs (VIP: 10.10.100.100) |
| ha-2 | 10.10.100.90 | 2C / 4G / 1024GB | Keepalived & ipvs |
| nginx-1 | 10.10.100.81 | 2C / 4G / 1024GB | Nginx |
| nginx-2 | 10.10.100.82 | 2C / 4G / 1024GB | Nginx |
| k8s-w1 | 10.10.100.40 | 2C / 4G / 1024GB | ingress-controller |
| k8s-w2 | 10.10.100.50 | 2C / 4G / 1024GB | ingress-controller |
| k8s-w3 | 10.10.100.60 | 2C / 4G / 1024GB | ingress-controller |
1. Install and configure ipvsadm ¶
Run on both ha1 and ha2 (same as in Option 2):
yum install ipvsadm libnl* popt* -y
ipvsadm --save > /etc/sysconfig/ipvsadm
systemctl enable ipvsadm.service --now && systemctl restart ipvsadm.service
Load the IPVS kernel modules:
mkdir -p /etc/sysconfig/modules/
cat <<EOF >/etc/sysconfig/modules/ipvs.modules
#!/bin/bash
ipvs_modules_dir="/usr/lib/modules/`uname -r`/kernel/net/netfilter/ipvs"
for i in \`ls \$ipvs_modules_dir | sed -r 's#(.*).ko.xz#\1#'\`; do
    /sbin/modinfo -F filename \$i &> /dev/null
    if [ \$? -eq 0 ]; then
        /sbin/modprobe \$i
    fi
done
EOF
cat <<EOF >/etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
Make the module script executable and run it:
# make it executable
chmod +x /etc/sysconfig/modules/ipvs.modules
# load the IPVS modules
bash /etc/sysconfig/modules/ipvs.modules
# verify the modules are loaded
lsmod | grep ip_vs
2. Install and configure Keepalived ¶
Run on both ha1 and ha2:
1. Install the packages
yum -y install keepalived*
2. Initialize and configure keepalived
On ha1 (master), write /etc/keepalived/keepalived.conf:
cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
    router_id LVS_MASTER_80
}

vrrp_instance VI_1 {
    state MASTER
    interface ens160
    virtual_router_id 51
    priority 100
    advert_int 1
    mcast_src_ip 10.10.100.80
    authentication {
        auth_type PASS
        auth_pass 123
    }
    virtual_ipaddress {
        10.10.100.100
    }
}

virtual_server 10.10.100.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP

    real_server 10.10.100.81 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 10.10.100.82 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 10.10.100.100 443 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP

    real_server 10.10.100.81 443 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 10.10.100.82 443 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
EOF
On ha2 (backup), write /etc/keepalived/keepalived.conf:
cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
    router_id LVS_BACKUP_90
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    virtual_router_id 51
    priority 95
    advert_int 1
    mcast_src_ip 10.10.100.90
    authentication {
        auth_type PASS
        auth_pass 123
    }
    virtual_ipaddress {
        10.10.100.100
    }
}

virtual_server 10.10.100.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP

    real_server 10.10.100.81 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 10.10.100.82 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 10.10.100.100 443 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP

    real_server 10.10.100.81 443 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 10.10.100.82 443 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
EOF
3. Start the Keepalived service ¶
systemctl daemon-reload
systemctl enable keepalived.service --now
4. Verify that the VIP is bound ¶
Check where the VIP is bound:
ip -c a
The VIP is on the ha1 (master) node.
Stop keepalived and check that the VIP fails over:
After stopping keepalived on the ha1 node, the VIP moves to the ha2 node.
Check the forwarding rules again (at this point both LVS nodes, ha-1 and ha-2, hold these forwarding rules):
ipvsadm -Ln
5. Install and configure the Nginx service ¶
Run on both nginx-1 and nginx-2:
yum install epel-release
yum install nginx nginx-all-modules.noarch -y
Edit nginx.conf:
server {
    listen 81;
    listen [::]:81;
    server_name _;
    root /usr/share/nginx/html;
    ...
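The configuration above is truncated. For this topology to work end to end, nginx-1 and nginx-2 would still need a top-level stream block (outside http) forwarding VIP traffic on to the ingress controllers, along the lines of the one in Option 1 — a sketch under that assumption:
stream {
    upstream k8s-ingress-controller {
        server 10.10.100.40:80 max_fails=3 fail_timeout=30s;
        server 10.10.100.50:80 max_fails=3 fail_timeout=30s;
        server 10.10.100.60:80 max_fails=3 fail_timeout=30s;
    }
    server {
        listen 80;   # LVS DR delivers VIP traffic here
        proxy_pass k8s-ingress-controller;
    }
}
Note also that in DR mode nginx-1 and nginx-2 are the real servers, so they presumably need the same loopback-VIP / ARP-suppression treatment (lvs.sh) applied to the ingress nodes in Option 2, step 3.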