A step-by-step guide to creating a private cluster, a private VM instance, and a VPC on GCP, interconnecting the private cluster with the private VM instance, and configuring a Cloud NAT gateway.

Saravana Pandian · Nov 22, 2020
Getting started with VPC networks

Creation of a VPC network:

A VPC (Virtual Private Cloud) network is used to isolate private applications and cluster instances. By default, GCP creates clusters and everything else in the default network, whose configuration is well known and therefore easy to guess and attack. With our own VPC network we can lock things down by declaring firewall rules (in a custom network, all ingress traffic is denied and all egress traffic is allowed by default).

1. Go to VPC Network -> VPC networks -> Create a VPC network

2. Fill in the name of the network, create a custom subnet, and choose an IP range for that subnet, e.g. 192.168.100.0/24.

3. Fill in the region as europe-west3 (if your cluster and VM instance live there; europe-west3-a is a zone within that region). Turn on Private Google Access.

4. Choose regional for Dynamic routing mode.

5. Click Create to create network.
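If you prefer the command line, a rough gcloud equivalent of these console steps is sketched below (the network and subnet names are placeholders; adjust the region and range to match yours):

gcloud compute networks create my-net-1 --subnet-mode custom

gcloud compute networks subnets create my-subnet \
    --network my-net-1 \
    --region europe-west3 \
    --range 192.168.100.0/24 \
    --enable-private-ip-google-access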

For reference: https://www.youtube.com/watch?v=cNb7xKyya5c

Creation of a private cluster:

Private kubernetes cluster

A private cluster has several advantages over a public GKE cluster; the main one is that a private GKE cluster's nodes have no external IP addresses, so they are isolated from the internet.

We can use Cloud NAT to let a private cluster reach the internet, but NAT only permits outbound/egress traffic; it cannot accept ingress/inbound traffic. A private cluster can still reach anything inside the VPC network by using an instance's internal IP address. There are three kinds of private cluster:

1. No internet accessible cluster

2. Master authorized network cluster

3. Unrestricted accessible cluster

In a no-internet-accessible cluster, the master's public endpoint is disabled entirely, so the control plane can only be reached through its private endpoint from inside the VPC network.

In a master-authorized-networks cluster, we define the IP ranges that are allowed to reach the master's public endpoint.

In an unrestricted-accessible cluster, anyone on the internet can reach the cluster's master's public endpoint (it is still protected by authentication).
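These three access modes map onto gcloud flags roughly as follows (a sketch based on the standard GKE private-cluster flags; pair them with the create command shown next):

  • No internet accessible: --enable-private-nodes --enable-private-endpoint
  • Master authorized networks: --enable-private-nodes --enable-master-authorized-networks
  • Unrestricted accessible: --enable-private-nodes --no-enable-master-authorized-networks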

Commands to create a private cluster:

gcloud container clusters create private-cluster-1 \
    --create-subnetwork name=my-subnet-1 \
    --enable-master-authorized-networks \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.0/28 \
    --no-enable-basic-auth \
    --no-issue-client-certificate

where,

  • --create-subnetwork name=my-subnet-1 causes GKE to automatically create a subnet named my-subnet-1.
  • --enable-master-authorized-networks specifies that access to the public endpoint is restricted to IP address ranges that you authorize.
  • --enable-ip-alias makes the cluster VPC-native.
  • --enable-private-nodes indicates that the cluster's nodes do not have external IP addresses.
  • --master-ipv4-cidr 172.16.0.0/28 specifies an RFC 1918 range for the master. This setting is permanent for this cluster.
  • --no-enable-basic-auth and --no-issue-client-certificate disable basic authentication and client-certificate issuance, leaving only the more secure authentication methods.

Suppose you have machines outside of my-net-1 (the VPC network) whose addresses fall in a range such as 104.1.0.0/32 (note that a /32 covers exactly one address; use a wider mask to authorize a whole group). You could authorize those machines to access the public endpoint by entering this command:

gcloud container clusters update private-cluster-1 \
    --enable-master-authorized-networks \
    --master-authorized-networks 104.1.0.0/32

Using Cloud Shell to access a private cluster:

Accessing from cloud shell

By default you cannot access a private GKE cluster from Google Cloud Shell; to allow it, do the following:

1. In your Cloud Shell command-line window, use dig to find the external IP address of your Cloud Shell:

dig +short myip.opendns.com @resolver1.opendns.com

2. Add the external address of your Cloud Shell to your cluster’s list of master authorized networks:

gcloud container clusters update private-cluster-1 \
    --zone us-central1-c \
    --enable-master-authorized-networks \
    --master-authorized-networks [EXISTING_AUTH_NETS],[SHELL_IP]/32

where,

  • [EXISTING_AUTH_NETS] is your existing list of master authorized networks. You can find your master authorized networks in the console or by running: gcloud container clusters describe private-cluster-1 --format "flattened(masterAuthorizedNetworksConfig.cidrBlocks[])".
  • [SHELL_IP] is the external IP address of your Cloud Shell.
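The two steps above can also be combined into a small script; a sketch, assuming EXISTING_AUTH_NETS already holds your current authorized ranges:

SHELL_IP=$(dig +short myip.opendns.com @resolver1.opendns.com)
gcloud container clusters update private-cluster-1 \
    --zone us-central1-c \
    --enable-master-authorized-networks \
    --master-authorized-networks "${EXISTING_AUTH_NETS},${SHELL_IP}/32"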

3. Get credentials, so that you can use kubectl to access the cluster:
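For the cluster created above, that is:

gcloud container clusters get-credentials private-cluster-1 \
    --zone us-central1-c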

Now you can use kubectl, in Cloud Shell, to access your private cluster. For example:

kubectl get nodes

Connecting the private cluster to a NAT gateway:

With the cluster above you cannot reach the internet; since the cluster is private, its nodes have no external IP address to connect out with.

You can reach the outside internet using Cloud NAT.

Reminder: Cloud NAT does not support ingress traffic.

Try connecting to the internet from inside the cluster:

Run this command in the cloud shell:

kubectl run test-deploy --image dwdraju/alpine-curl-jq

You will get an ErrImagePull / ImagePullBackOff error; this is because the node has no internet access and cannot pull the image.

Create a Cloud NAT gateway:

1. Go to Network services -> Cloud NAT -> Create NAT gateway. Name the gateway, then select or create a Cloud Router (the router is used to route the traffic of a particular network to the desired destination).

2. Under NAT mapping, select Custom, choose the subnet name and IP ranges, and set NAT IP addresses to Automatic.

3. Click Create to create the NAT gateway.
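The same gateway can also be created from the command line; a sketch (the router and NAT names nat-router and nat-config are placeholders, and the network, region, and subnet are assumed to match the earlier steps):

gcloud compute routers create nat-router \
    --network my-net-1 \
    --region europe-west3

gcloud compute routers nats create nat-config \
    --router nat-router \
    --region europe-west3 \
    --nat-custom-subnet-ip-ranges my-subnet-1 \
    --auto-allocate-nat-external-ips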

Now try to connect to the internet again.

Run curl example.com from inside the cluster and this time you will get a result.
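One way to verify is sketched below (it assumes the test-deploy pod from the earlier kubectl run; on kubectl versions from 2020 onwards, kubectl run creates a bare pod):

kubectl delete pod test-deploy
kubectl run test-deploy --image dwdraju/alpine-curl-jq --restart=Never -- sleep 3600
kubectl exec test-deploy -- curl -s example.com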

Reference: https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters.

Creating a VM instance with no external IP address (a private VM instance):

VM instance with no external ip

Step 1:

gcloud compute instances create nat-test-1 \
    --image-family debian-9 \
    --image-project debian-cloud \
    --network custom-network1 \
    --subnet subnet-us-east-192 \
    --zone us-east4-c \
    --no-address
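To double-check that the instance really has no external IP, you can inspect its network interface (empty output means no external address is attached):

gcloud compute instances describe nat-test-1 \
    --zone us-east4-c \
    --format "value(networkInterfaces[0].accessConfigs)"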

Step 2:

Create a firewall rule that allows SSH connections from Identity-Aware Proxy (35.235.240.0/20 is Google's IAP TCP-forwarding source range):

gcloud compute firewall-rules create allow-ssh \
    --network custom-network1 \
    --source-ranges 35.235.240.0/20 \
    --allow tcp:22

Step 3:

Log in to nat-test-1 and check whether it can reach the internet:

gcloud compute ssh nat-test-1 \
    --zone us-east4-c \
    --command "curl example.com" \
    --tunnel-through-iap

(Run this command in Google Cloud Shell only.)

If you want to run it from your local machine instead, first run this locally:

ssh-add ~/.ssh/google_compute_engine

and then run the previous command from your local gcloud terminal.

Because a Cloud NAT gateway is already attached to the network that also contains this GCE instance, the private instance is able to access the internet via the NAT gateway.

Interconnecting the private cluster with the private Compute instance:

A private Compute instance can only be accessed by instances that are inside the same VPC network.

As our private GKE cluster is already inside the VPC network, we can easily connect it to the GCE instance that hosts the MongoDB server.

1. Installing MongoDB inside the private VM instance:

First, SSH into the private VM via the cloud console only (because only it has access to the instance at this point).

Follow this tutorial to install mongo db inside vm instance:

https://tecadmin.net/install-mongodb-on-ubuntu/

Create a database and insert some data into it. Note that MongoDB listens only on 127.0.0.1 by default, so for the cluster to reach it you must also bind it to the VM's internal IP.
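A minimal sketch of the relevant section of /etc/mongod.conf (assuming the default Ubuntu layout; 192.168.100.31 is the VM's internal IP used below):

net:
  port: 27017
  # bind to localhost plus the VM's internal IP so cluster pods can connect
  bindIp: 127.0.0.1,192.168.100.31

Restart MongoDB with sudo systemctl restart mongod to apply the change.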

2. Accessing MongoDB inside the private VM instance from the private cluster

In Python:

mongo = PyMongo(app, uri='mongodb://192.168.100.31:27017/retailer')

where,

192.168.100.31 is the internal ip address of the compute instance
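For context, a minimal self-contained Flask workload around that line might look like the following sketch (the /items route and the items collection are hypothetical, and flask plus flask_pymongo must be installed in the container image):

from flask import Flask, jsonify
from flask_pymongo import PyMongo

app = Flask(__name__)
# 192.168.100.31 is the private VM's internal IP, as above
mongo = PyMongo(app, uri='mongodb://192.168.100.31:27017/retailer')

@app.route('/items')
def items():
    # read documents from the hypothetical 'items' collection, dropping Mongo's _id
    return jsonify(list(mongo.db.items.find({}, {'_id': 0})))

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)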

When you deploy this Python workload to the cluster and expose it through an Ingress, you can reach the MongoDB database inside the private VM.

Thanks for going through this post; it's always a pleasure to help out and contribute to the community. Hope this helps you out!
