EKS

Notes on EKS on AWS.

Add the cluster configuration to the kubeconfig file ~/.kube/config-stg, then point kubectl at it:

$ export KUBECONFIG=~/.kube/config-stg
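One way to generate that kubeconfig entry is with the AWS CLI (a sketch; the cluster name my-cluster-stg is a placeholder):

```shell
# Write the EKS cluster entry into the staging kubeconfig.
# "my-cluster-stg" is a placeholder; substitute the real cluster name.
aws eks update-kubeconfig \
  --name my-cluster-stg \
  --kubeconfig ~/.kube/config-stg
```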

Join the worker nodes to the cluster.

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::xxxxxxxx:role/role-name
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes

Save the above content as aws-auth-config-map.yaml, then apply it.

$ kubectl apply -f aws-auth-config-map.yaml
configmap/aws-auth created

Check the nodes.

$ kubectl get nodes
NAME                                               STATUS   ROLES    AGE   VERSION
ip-10-128-22-123.ap-northeast-1.compute.internal   Ready    <none>   34m   v1.13.10-eks-d6460e

rbac

ServiceAccount

A mechanism for restricting what Pods in the cluster can access. A ClusterRoleBinding associates a ServiceAccount with a ClusterRole.

cluster role
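A minimal sketch of that binding (the names read-only-sa, read-only-role, and read-only-binding are hypothetical):

```yaml
# Hypothetical ServiceAccount bound to a read-only ClusterRole
# via a ClusterRoleBinding.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: read-only-sa
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: read-only-role
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-only-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: read-only-role
subjects:
  - kind: ServiceAccount
    name: read-only-sa
    namespace: default
```

Pods that run under read-only-sa can then list and watch Pods cluster-wide, but nothing else.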

Helm

$ helm init
$ kubectl -n kube-system get service,deployment,pod
helm list
helm delete
helm delete name --purge
helm reset
helm search
helm repo update
helm repo add name url
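A sketch of the usual Helm 2 flow with the commands above (the repository URL, release name my-release, and chart stable/nginx-ingress are placeholders):

```shell
# Helm 2 era flow: add a repo, refresh the index, install a chart
# as a named release, then confirm it appears.
helm repo add stable https://example.com/charts   # hypothetical repo URL
helm repo update
helm install --name my-release stable/nginx-ingress
helm list    # the release should appear here
```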

kubectl

kubectl get
kubectl get pod
kubectl get replicaset
kubectl get deploy
kubectl get deploy deploymentname -o yaml --export
kubectl get node
kubectl get persistentvolumeclaim
kubectl get storageclass
kubectl describe
kubectl describe pod
kubectl describe replicaset
kubectl describe deployment
kubectl describe node
kubectl describe persistentvolumeclaim
kubectl describe storageclass

volume

Dynamic Provisioner

A mechanism that automatically creates a Persistent Volume when a Persistent Volume Claim is created. When the annotation pv.kubernetes.io/bind-completed: "yes" was present on the claim, dynamic provisioning did not happen.
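A minimal claim that triggers dynamic provisioning (a sketch; gp2 is the default EBS-backed StorageClass on EKS of this era, and the claim name data-claim is hypothetical):

```yaml
# Creating this claim causes the provisioner behind the "gp2"
# StorageClass to create a matching Persistent Volume automatically.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2
  resources:
    requests:
      storage: 10Gi
```

`kubectl get persistentvolumeclaim` should show the claim move to Bound once the volume is provisioned.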

ingress

Add a tag with key kubernetes.io/role/elb and value 1 to the public subnets.
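One way to add that tag from the CLI (a sketch; the subnet ID is a placeholder):

```shell
# Tag a public subnet so the ALB Ingress Controller can discover it.
# subnet-0123456789abcdef0 is a placeholder ID.
aws ec2 create-tags \
  --resources subnet-0123456789abcdef0 \
  --tags Key=kubernetes.io/role/elb,Value=1
```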

curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.2/docs/examples/iam-policy.json

aws iam create-policy \
--policy-name ALBIngressControllerIAMPolicy \
--policy-document file://iam-policy.json

kubectl -n kube-system describe configmap aws-auth

aws iam attach-role-policy \
--policy-arn arn:aws:iam::111122223333:policy/ALBIngressControllerIAMPolicy \
--role-name eksctl-alb-nodegroup-ng-b1f603c5-NodeInstanceRole-GKNS581EASPU

# rbac
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.2/docs/examples/rbac-role.yaml

# Manifest
curl -sS "https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.2/docs/examples/alb-ingress-controller.yaml" > alb-ingress-controller.yaml

# Edit the --cluster-name, --aws-vpc-id, and --aws-region values, then apply
kubectl apply -f alb-ingress-controller.yaml

Service

apiVersion: v1
kind: Service
metadata:
  name: "test-app-service-target1"
  namespace: "test-app"
spec:
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: NodePort
  selector:
    app: "test-app-target1"

ALB

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "ingress"
  namespace: "test-app"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
  labels:
    app: test-app
spec:
  rules:
    - http:
        paths:
          - path: /target1
            backend:
              serviceName: "test-app-service-target1"
              servicePort: 80
          - path: /target2
            backend:
              serviceName: "test-app-service-target2"
              servicePort: 80
