Securely Connect MongoDB to Cloud-Offered Kubernetes Clusters

Pavel Duchovny • 4 min read • Published Feb 14, 2022 • Updated Jun 27, 2022
Kubernetes • Google Cloud • Atlas

Introduction

Containerized applications are becoming an industry standard for virtualization, and when we talk about managing those containers, Kubernetes comes up almost immediately.
Kubernetes is a well-known open-source system for automating the deployment, scaling, and management of containerized applications. Nowadays, all of the major cloud providers (AWS, Google Cloud, and Azure) have a managed Kubernetes offering that allows organizations to easily get started with and scale their Kubernetes environments.
Not surprisingly, MongoDB Atlas also runs on all of those clouds to give your modern containerized applications the best database offering. However, ease of development can come at the cost of overlooking critical aspects, such as security and connectivity control between your applications and cloud services.
In this article, I will show you how to properly secure your Kubernetes cloud services when connecting to MongoDB Atlas, using the recommended and robust solutions we have.

Prerequisites

You will need a cloud provider account and the ability to deploy one of the managed Kubernetes offerings: Amazon EKS, Google Kubernetes Engine (GKE), or Azure Kubernetes Service (AKS).
And of course, you'll need a MongoDB Atlas project where you are a project owner.
If you haven't yet set up your free cluster on MongoDB Atlas, now is a great time to do so. You can find all the instructions in this blog post. Please note that for this tutorial, you are required to have an M10+ cluster.
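If you prefer the command line, the Atlas CLI can create a dedicated cluster for you. Below is a minimal sketch (wrapped in Python, like the other examples in this post); the cluster name, provider, region, and tier are placeholder assumptions, and flags may vary slightly between Atlas CLI versions.

```python
import subprocess

# Minimal sketch: create a dedicated M10 cluster with the Atlas CLI.
# The cluster name, provider, and region are placeholders -- adjust to your project.
# Depending on your Atlas CLI version, additional flags (e.g., --mdbVersion) may be required.
subprocess.run(
    [
        "atlas", "clusters", "create", "PeeredCluster",
        "--provider", "AWS",
        "--region", "EU_WEST_1",
        "--tier", "M10",
    ],
    check=True,  # raise if the CLI call fails
)
```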

Step 1: Set Up Networks

Atlas connections, by default, use credentials and end-to-end encryption to secure the connection. However, building a trusted network is a must for closing the security loop between your application and the database.
No matter which cloud you choose to build your Kubernetes cluster in, the basic foundation of securing that deployment is creating its own network. You can follow the guides below to create your own network and gather the main information (names, IDs, and subnet Classless Inter-Domain Routing, or CIDR, ranges).
Private Network Creation:
  • AWS: Create an AWS VPC
  • GCP: Create a GCP VPC
  • Azure: Create a VNET
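As an illustration of the AWS flow, here is a minimal sketch using boto3 that creates a VPC with one subnet and enables DNS support and hostname resolution (which Atlas peering requires). The region, CIDR ranges, and availability zone are assumptions; pick values that fit your environment and don't overlap with your Atlas CIDR. The GCP and Azure equivalents follow the same idea with gcloud and az.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # assumed region

# Create the VPC that will later be peered with Atlas.
vpc_id = ec2.create_vpc(CidrBlock="10.10.0.0/16")["Vpc"]["VpcId"]

# Atlas peering requires DNS support and hostname resolution on the VPC.
ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsSupport={"Value": True})
ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsHostnames={"Value": True})

# One subnet per availability zone you plan to use for Kubernetes nodes.
subnet_id = ec2.create_subnet(
    VpcId=vpc_id,
    CidrBlock="10.10.1.0/24",
    AvailabilityZone="eu-west-1a",
)["Subnet"]["SubnetId"]

print(f"VPC: {vpc_id}, subnet: {subnet_id}")  # note these for the peering step
```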

Step 2: Create Network Peerings

Now, we'll connect the virtual network that your Atlas region resides in to the virtual network we created in Step 1. This connectivity is required so that communication between the networks is possible, and we'll configure Atlas to allow connections from the network from Step 1.
This process is called setting up a Network Peering Connection. It's important because it allows internal communication between networks belonging to two different accounts (the Atlas cloud account and your cloud account).
[Screenshot: Atlas peering page]
The network peerings are established under our Projects > Network Access > Peering > "ADD PEERING CONNECTION." For more information, please read our [documentation](https://docs.atlas.mongodb.com/security-vpc-peering/).
However, I will highlight the main points in each cloud for a successful peering setup:
Network Peering Setup
AWS:
  1. Allow outbound traffic to the Atlas CIDR on ports 27015-27017.
  2. Obtain the VPC information (account ID, VPC ID, VPC region, VPC CIDR). Enable DNS and hostname resolution on that VPC.
  3. Using this information, initiate the VPC peering from Atlas.
  4. Approve the peering on the AWS side.
  5. Add a peering route in the relevant subnet(s) targeting the Atlas CIDR, and add those subnets/security groups to the Atlas access list page (see the sketch after this list).
GCP:
  1. Obtain the GCP VPC information (project ID, VPC name, VPC region, and CIDR).
  2. When you initiate a VPC peering on the Atlas side, it will generate the information you need to enter on the GCP VPC network peering page (Atlas project ID and Atlas VPC name).
  3. Approve the peering request on GCP and add the GCP CIDR to the Atlas access list.
Azure:
  1. Obtain the following Azure details from your subscription: subscription ID, Azure Active Directory directory ID, VNET resource group name, VNet name, and VNet region.
  2. Enter the gathered information in Atlas to get a list of commands to run in the Azure console.
  3. Open the Azure console and run the commands, which will create a custom role and permissions for peering.
  4. Validate and initiate the peering.
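To make the AWS side concrete, here is a minimal boto3 sketch of steps 4 and 5 above: accepting the peering connection that Atlas initiated, routing the Atlas CIDR through it, and opening outbound 27015-27017. The peering connection ID, route table ID, security group ID, and Atlas CIDR are placeholders you would copy from your Atlas peering page and AWS console.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # assumed region

# Placeholders -- copy the real values from the Atlas peering page and your AWS console.
peering_id = "pcx-0123456789abcdef0"
route_table_id = "rtb-0123456789abcdef0"
security_group_id = "sg-0123456789abcdef0"
atlas_cidr = "192.168.248.0/21"  # your Atlas project's CIDR block

# Step 4: approve the peering request that Atlas initiated.
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=peering_id)

# Step 5a: route traffic destined for the Atlas CIDR through the peering connection.
ec2.create_route(
    RouteTableId=route_table_id,
    DestinationCidrBlock=atlas_cidr,
    VpcPeeringConnectionId=peering_id,
)

# Step 5b (if your security group restricts egress): allow outbound 27015-27017 to Atlas.
ec2.authorize_security_group_egress(
    GroupId=security_group_id,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 27015,
        "ToPort": 27017,
        "IpRanges": [{"CidrIp": atlas_cidr}],
    }],
)
```

Remember that you still need to add the relevant subnet CIDRs or security groups to the Atlas IP access list, which you can do from the Atlas UI.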

Step 3: Deploy the Kubernetes Cluster in Our Networks

The Kubernetes clusters that we launch must be associated with the peered network. I will highlight each cloud provider's specifics.

AWS EKS

When we launch our EKS cluster via the AWS console, we need to configure the peered VPC under the "Networking" tab.
Place the correct settings:
  • VPC name
  • Relevant subnets (it's recommended to pick at least three availability zones)
  • A security group with ports 27015-27017 open to the Atlas CIDR
  • Optionally, an IP range for your pods
In this case, I placed my peered VPC name, chose three availability zones from my EU West region, and specified a security group allowing 27015-27017 access to the Atlas CIDR.
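The same settings can be applied from code. Below is a minimal sketch using boto3's EKS client, assuming subnet IDs and a security group from the peered VPC; the cluster name and IAM role ARN are placeholders for resources you've already prepared. Worker node groups are created separately and should use the same subnets.

```python
import boto3

eks = boto3.client("eks", region_name="eu-west-1")  # assumed region

eks.create_cluster(
    name="atlas-peered-eks",
    roleArn="arn:aws:iam::123456789012:role/eksClusterRole",  # placeholder IAM role
    resourcesVpcConfig={
        # Subnets from at least two availability zones of the peered VPC.
        "subnetIds": ["subnet-aaa111", "subnet-bbb222", "subnet-ccc333"],
        # Security group that allows 27015-27017 outbound to the Atlas CIDR.
        "securityGroupIds": ["sg-0123456789abcdef0"],
    },
)
```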

GCP GKE

When we launch our GKE service, we need to configure the peered VPC under the "Networking" section.
Place the correct settings:
  • VPC Name
  • Subnet Name
  • Optionally, an IP range for your pods' internal network, which must not overlap with the peered CIDR
In this case, once I got to the Networking section, I selected my peered network and its subnet.
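For reference, the equivalent gcloud command (wrapped here in Python like the other examples) would look roughly like the sketch below. The cluster, network, and subnet names and the region are placeholders for the peered GCP VPC and subnet from Step 1.

```python
import subprocess

# Minimal sketch: create a GKE cluster attached to the peered VPC and subnet.
# Names and region are placeholders -- adjust to your project.
subprocess.run(
    [
        "gcloud", "container", "clusters", "create", "atlas-peered-gke",
        "--region", "europe-west1",
        "--network", "atlas-peered-vpc",
        "--subnetwork", "k8s-subnet",
        # VPC-native (alias IP) mode; pod/service ranges must not overlap the Atlas CIDR.
        "--enable-ip-alias",
    ],
    check=True,
)
```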

Azure AKS

When we launch our AKS service, we need to use the same resource group as the peered VNET and configure the peered VNET as the CNI network in the advanced "Networking" tab.
Place the correct settings:
  • Resource Group
  • VNET Name under "Virtual Network"
  • Cluster Subnet should be the peered subnet range.
  • The other CIDR ranges should not overlap with the peered network.
In this case, I enabled Azure CNI networking and selected my peered VNET and its subnet.
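Here is a minimal sketch of the same configuration with the Azure CLI (again wrapped in Python). The resource group, cluster name, and subnet resource ID are placeholders, and the service CIDR and DNS service IP are example values that must not overlap with the peered VNET.

```python
import subprocess

# Minimal sketch: create an AKS cluster that uses Azure CNI on the peered VNET's subnet.
# Resource group, names, and the subnet resource ID are placeholders -- adjust to your setup.
subnet_id = (
    "/subscriptions/<subscription-id>/resourceGroups/peered-rg"
    "/providers/Microsoft.Network/virtualNetworks/atlas-peered-vnet"
    "/subnets/k8s-subnet"
)
subprocess.run(
    [
        "az", "aks", "create",
        "--resource-group", "peered-rg",
        "--name", "atlas-peered-aks",
        "--network-plugin", "azure",       # Azure CNI
        "--vnet-subnet-id", subnet_id,     # the peered subnet
        "--service-cidr", "10.0.0.0/16",   # must not overlap the peered network
        "--dns-service-ip", "10.0.0.10",
    ],
    check=True,
)
```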

Step 4: Deploy Containers and Test Connectivity

Once the Kubernetes cluster is up and running in your cloud provider, you can test the connectivity to your peered Atlas cluster.
First, we will need to get our connection string and method from the Atlas cluster UI. Please note that GCP and Azure have private connection strings for peering, and those must be used for peered networks.
[Screenshots: Connect to Peering Test, steps 1 and 2]
Now, let's test our connection from one of the Kubernetes pods:
[Screenshot: successful shell test from a Kubernetes pod]
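If you'd rather script the check than run it interactively, the snippet below is a minimal sketch using PyMongo that you could run from any pod in the cluster (for example, a throwaway Python image started with kubectl run). The connection string is a placeholder; for GCP and Azure, make sure you copy the private (peering) connection string from the Atlas UI.

```python
from pymongo import MongoClient
from pymongo.errors import PyMongoError

# Placeholder connection string -- for GCP/Azure peering, use the *private* URI from the Atlas UI.
uri = "mongodb+srv://<user>:<password>@<cluster-host>/?retryWrites=true&w=majority"

try:
    client = MongoClient(uri, serverSelectionTimeoutMS=5000)
    print(client.admin.command("ping"))  # {'ok': 1.0} means we reached the cluster
except PyMongoError as exc:
    print(f"Connection failed: {exc}")
```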
That's it. We are securely connected!

Wrap-Up

Kubernetes-managed clusters offer a simple and modern way to deploy containerized applications to the vendor of your choice. It's great that we can easily secure their connections to work with the best cloud database offering there is, MongoDB Atlas, unlocking other possibilities such as building cross-platform applications with MongoDB Realm and Realm Sync, or using MongoDB Data Lake and Atlas Search to build incredible applications.
If you have questions, please head to our developer community website, where MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.
