At Mobingi, when we are developing services that run on Kubernetes, we generally use Minikube or Kubernetes in Docker for Mac. We also have a cluster that runs on GKE that we use for development. In this post, I will share how we access some of the services that are running on our development cluster.
Using kubectl port-forward
Using kubectl port-forward is probably the cheapest and most straightforward option. For example, if I want to access a cluster service svc1 through my localhost, I use kubectl port-forward like this:
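A minimal sketch of the command, using the svc1 name and port 8080 from this post (the actual service name in your cluster will differ):

```shell
# Forward local port 8080 to port 8080 of the svc1 service
kubectl port-forward svc/svc1 8080:8080
```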
The 8080 on the left is my local port; the one on the right is the pod port where svc1 is running.
One thing to note with kubectl port-forward, though, is that it won't reconnect automatically when the pod is restarted, say, due to an update from CI. I have to stop the command with Ctrl+C and then rerun it.
Exposing the service using NodePort or LoadBalancer
This part is probably the easiest to set up. You can check the details in the Kubernetes documentation. Be careful, though, especially with load balancers: they are not cheap. We went this route during our early Kubernetes days and ended up with a lot of load balancers. This was when our clusters were still on AWS. In AWS (I'm not sure if it is still the case now), when you specify LoadBalancer as the service type, a classic load balancer is provisioned for your service. That means one load balancer per exposed service!
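As an illustration, a minimal LoadBalancer-type service might look like the sketch below; the service name and ports are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: svc1
spec:
  # On AWS at the time, each service of this type
  # provisioned its own classic load balancer.
  type: LoadBalancer
  selector:
    app: svc1
  ports:
  - port: 80
    targetPort: 8080
```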
When we moved to GKE, we started using GLBC which uses an L7 load balancer via the Ingress API. This improved our costs a little bit since GLBC can support up to five backend services per load balancer using paths. The slight downside was that Ingress updates were a bit slow. It’s not a big deal though since it’s only in the development cluster and we use blue/green deployment in production. But still, some updates can take up to ten minutes.
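The path-based fan-out looks roughly like this in the current networking.k8s.io/v1 API (service names and ports are hypothetical; at the time we used an older Ingress API version):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dev-services
spec:
  rules:
  - http:
      paths:
      # Multiple backend services share one L7 load balancer via paths.
      - path: /svc1
        pathType: Prefix
        backend:
          service:
            name: svc1
            port:
              number: 80
      - path: /svc2
        pathType: Prefix
        backend:
          service:
            name: svc2
            port:
              number: 80
```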
Using nginx as a reverse proxy to cluster services
In our quest to further minimize costs, we are currently using nginx as our way of exposing services. We provisioned a single Ingress that points to an nginx service, which serves as a reverse proxy to our cluster services. This has been the cheapest option for us, as we only have one load balancer for all services. And updating the nginx reverse proxy service takes only a few seconds. So far, this has worked for us with no significant problems over the past couple of months.
Here’s an example of an nginx reverse proxy service:
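A minimal sketch of such a setup, assuming svc2 listens on port 8080 in the default namespace (resource names, image tag, and ports are placeholders; our actual configuration differs in detail):

```yaml
# nginx configuration: proxy /svc2/ to the svc2 cluster service.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-proxy-conf
data:
  default.conf: |
    server {
      listen 80;
      location /svc2/ {
        proxy_pass http://svc2.default.svc.cluster.local:8080/;
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-proxy
  template:
    metadata:
      labels:
        app: nginx-proxy
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
        volumeMounts:
        # Mount the ConfigMap over nginx's default vhost config.
        - name: conf
          mountPath: /etc/nginx/conf.d
      volumes:
      - name: conf
        configMap:
          name: nginx-proxy-conf
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-proxy
spec:
  # NodePort so a GKE Ingress can use it as a backend.
  type: NodePort
  selector:
    app: nginx-proxy
  ports:
  - port: 80
    targetPort: 80
```

Adding another upstream is just another location block in the ConfigMap, which is why updates take only seconds.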
In this example, all services, namely svc2, are running in the default namespace. Save this as service.yaml and deploy:
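```shell
kubectl apply -f service.yaml
```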
A sample Ingress for the reverse proxy service:
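A sketch in the current networking.k8s.io/v1 API, assuming the nginx service is named nginx-proxy and serves port 80 (both are placeholder names; the host is the one used later in this post):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-proxy
spec:
  rules:
  - host: development.mobingi.com
    http:
      paths:
      # All traffic goes to the single nginx reverse proxy backend.
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-proxy
            port:
              number: 80
```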
Save this as ingress.yaml and deploy:
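```shell
kubectl apply -f ingress.yaml
```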
After everything is ready (Ingress provisioning takes some time), you should be able to access https://development.mobingi.com/svc2/another-endpoint, and so on. Of course, you have to point your domain to your Ingress load balancer's IP address, which you can see using the following command:
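Assuming the Ingress is named nginx-proxy (a placeholder name):

```shell
# The load balancer's IP appears in the ADDRESS column once provisioned.
kubectl get ingress nginx-proxy
```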
If you’re wondering how to set up the TLS portion, you can refer to my previous post on that very subject.
If you have any questions or feedback, please reach out @flowerinthenyt.
This work is licensed under a Creative Commons Attribution 4.0 International License.