Qdrant is a brilliant vector database and, in my opinion, the best choice in most cases. It's easy to set up and extremely performant. Having tried Weaviate, Chroma, OpenSearch, Milvus, Pinecone, and others, Qdrant remains my first choice. I could go into all the reasons for this opinion, but that is not the intention of this post.
This post assumes you have a basic understanding of Qdrant and have chosen it for your project. If you don't know much about Qdrant, you can read the official documentation.
If you want to skip the explainer and go straight to the code, you can find it on GitHub.
Cross-region replication makes your data available in multiple regions. It is most often used to keep data close to your users in every region where they are located, reducing query latency and improving availability.
Many developers prefer to use AWS for their cloud infrastructure. This is because AWS offers a wide range of services that are easy to set up and use, and most developers are already familiar with the platform.
However, one important caveat that you will not find in the Qdrant documentation is how to set up cross-region replication with AWS.
Specifically, the key challenge is getting your nodes to communicate with each other across VPCs. This is because AWS VPCs are isolated from each other by default.
There are three main options for cross-VPC communication:
Transit Gateway is a network transit hub that you can use to connect your VPCs together. It supports a wide range of configurations, including transitive routing between your VPCs, and it is the approach AWS generally recommends for cross-VPC communication. However, it is extremely expensive: most of the cost comes from a per-hour charge for each VPC you attach to the transit gateway.
If you have three regions, you may end up paying hundreds of dollars per month just to allow cross-VPC network traffic; at roughly $0.05 per attachment-hour, three VPC attachments alone come to about $110 per month, before any data-processing charges. For most developers, Transit Gateway is too expensive and unnecessary.
VPC peering connects two VPCs with a direct network route between them. It is a cheaper alternative to Transit Gateway, since you only pay for data transfer rather than an hourly rate per connection, but it is more limited in configuration. In particular, peering is not transitive, so each pair of VPCs needs its own connection.
This is the option we will explore in this post.
In my experience, setting up VPC peering is straightforward, and combined with namespaces and service discovery it works exceptionally well at coordinating your Qdrant nodes across regions.
A site-to-site VPN creates a virtual private network between your VPCs. It is more expensive than VPC peering, since you pay per hour for each VPN connection. Like Transit Gateway, the cost of a site-to-site VPN can be prohibitive for most projects.
To set up VPC peering, you will need to define a peering connection and an accepter for each region pair, plus routes between the peered CIDR blocks in each VPC's route tables. Both are shown below.
You should set up VPC peering before you set up your Qdrant nodes. Once a Qdrant node starts, it attempts to connect to the other nodes you have specified and stores their IP addresses in local storage. Creating the peering at the same time as, or after, the nodes can therefore lead to errors: the nodes may keep relying on the stale network configuration held in their local storage.
To get around this, you can set a Terraform variable that skips creating the Qdrant nodes until the VPC peering has been set up.
In the GitHub code, this is done with the `first_create` Boolean variable, for example:
```hcl
module "qdrant" {
  source = "./modules/qdrant"
  count  = var.first_create ? 0 : 1
  ...
}
```
When applying Terraform for the first time, set the `first_create` variable to `true` using the `-var` flag:
```sh
terraform apply -var="first_create=true"
```

Once the peering connections have been created, run `terraform apply` again with `first_create=false` to create the Qdrant nodes.
The following Terraform code sets up a Qdrant cluster with cross-region replication across three regions: `us-east-1`, `us-west-1`, and `eu-west-2`.
First, ensure your project is structured as follows:
```
.
├── global
├── regional
│   ├── qdrant
├── init
```
Your `init` module should set up the VPCs for each region, as well as the Terraform state bucket and lock table. Make sure that each VPC has a CIDR block that is unique to that region, so that peered routes do not overlap.
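As a rough sketch, the VPC definitions might look like the following, assuming one aliased AWS provider per region (the CIDR values and resource names here are illustrative; any three non-overlapping blocks will do):

```hcl
# init (illustrative) -- one VPC per region, each with a unique,
# non-overlapping CIDR block so that peered routes never collide.
resource "aws_vpc" "us_east_1" {
  provider             = aws.us-east-1
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = { Name = "${var.organisation}-vpc-us-east-1" }
}

resource "aws_vpc" "us_west_1" {
  provider             = aws.us-west-1
  cidr_block           = "10.1.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = { Name = "${var.organisation}-vpc-us-west-1" }
}

resource "aws_vpc" "eu_west_2" {
  provider             = aws.eu-west-2
  cidr_block           = "10.2.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = { Name = "${var.organisation}-vpc-eu-west-2" }
}
```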
Your `regional` module should set up the Qdrant nodes and network configuration for each region.
Your `global` module should contain resources that are shared between regions, such as IAM.
In your `main.tf` file, you can coordinate the configuration between global and regional resources.
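For example, a minimal sketch of the wiring, assuming one provider alias per region and one regional module per region (the module names match the `module.us-east-1.vpc_id` style references used below; the exact module inputs are assumptions):

```hcl
# main.tf (illustrative) -- one aliased provider and one regional stack per region
provider "aws" {
  alias  = "us-east-1"
  region = "us-east-1"
}

provider "aws" {
  alias  = "us-west-1"
  region = "us-west-1"
}

provider "aws" {
  alias  = "eu-west-2"
  region = "eu-west-2"
}

module "us-east-1" {
  source       = "./regional"
  providers    = { aws = aws.us-east-1 }
  organisation = var.organisation
  region       = "us-east-1"
  first_create = var.first_create
}

# ... repeat for us-west-1 and eu-west-2 ...
```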
For each region pair, define a VPC peering connection and an accepter, as follows (with three regions, that means three connection/accepter pairs):
```hcl
resource "aws_vpc_peering_connection_accepter" "us_west_1_accept_eu_west_2" {
  provider                  = aws.us-west-1
  vpc_peering_connection_id = aws_vpc_peering_connection.eu_west_2_to_us_west_1.id
  auto_accept               = true

  tags = {
    Name = "${var.organisation}-vpc-peering-us-west-1-accept-eu-west-2"
  }
}

resource "aws_vpc_peering_connection" "us_east_1_to_us_west_1" {
  provider    = aws.us-east-1
  vpc_id      = module.us-east-1.vpc_id
  peer_vpc_id = module.us-west-1.vpc_id
  peer_region = "us-west-1"

  tags = {
    Name = "${var.organisation}-vpc-peering-us-east-1-to-us-west-1"
  }
}
```
In `regional/modules/network`, add route tables and route table associations to each VPC.
```hcl
resource "aws_route_table" "main" {
  vpc_id = data.aws_vpc.selected.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main.id
  }

  dynamic "route" {
    for_each = var.vpc_peering_connection_ids

    content {
      cidr_block                = var.region_cidr_blocks[route.key]
      vpc_peering_connection_id = route.value
    }
  }

  tags = {
    Name = var.route_table_name
  }
}

resource "aws_route_table_association" "main" {
  count          = var.subnet_count
  subnet_id      = element(aws_subnet.main.*.id, count.index)
  route_table_id = aws_route_table.main.id
}
```
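Note that the `dynamic "route"` block iterates a map, so `route.key` is the peer region and `route.value` its peering connection ID; the two input maps must therefore share keys. A sketch of the variable shapes this implies (the exact definitions are assumptions):

```hcl
# regional/modules/network/variables.tf (illustrative)
variable "vpc_peering_connection_ids" {
  description = "Peering connection IDs, keyed by peer region"
  type        = map(string) # e.g. { "us-west-1" = "pcx-..." }
}

variable "region_cidr_blocks" {
  description = "CIDR blocks of the peer VPCs, keyed by the same region names"
  type        = map(string) # e.g. { "us-west-1" = "10.1.0.0/16" }
}
```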
In `global/modules/namespace`, create a namespace for each region.
```hcl
data "aws_vpc" "current" {
  filter {
    name   = "tag:Name"
    values = ["${var.organisation}-vpc-${var.region}"]
  }
}

resource "aws_service_discovery_private_dns_namespace" "internal" {
  name        = "${var.organisation}.${var.region}.internal"
  description = "Private DNS namespace for ${var.organisation} in ${var.region}"
  vpc         = data.aws_vpc.current.id

  tags = {
    Name         = "${var.organisation}-namespace-${var.region}"
    Organisation = var.organisation
    Region       = var.region
    Terraform    = "true"
  }
}
```
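The invocation is not shown above, but here is a hedged sketch, assuming the module is called once per region from the root configuration with that region's provider:

```hcl
# main.tf (illustrative)
module "namespace_us_east_1" {
  source       = "./global/modules/namespace"
  providers    = { aws = aws.us-east-1 }
  organisation = var.organisation
  region       = "us-east-1"
}

# ... repeat for us-west-1 and eu-west-2 ...
```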
Use `regional/modules/service-discovery` to create a service discovery name for each region.
```hcl
module "service_discovery_qdrant" {
  source       = "./modules/service-discovery"
  organisation = var.organisation
  region       = var.region
  namespace_id = module.namespace.namespace_id
  service_name = "${var.organisation}-qdrant-${var.region}"
}
```
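Inside the module, the service itself might look something like this sketch, using Cloud Map's `aws_service_discovery_service` with A records so that each Qdrant task registers its private IP under the namespace (the TTL, routing policy, and health check settings are assumptions, not the repository's exact values):

```hcl
# regional/modules/service-discovery/main.tf (illustrative)
resource "aws_service_discovery_service" "this" {
  name = var.service_name

  dns_config {
    namespace_id = var.namespace_id

    dns_records {
      ttl  = 10 # short TTL so peers notice replaced tasks quickly
      type = "A"
    }

    routing_policy = "MULTIVALUE"
  }

  health_check_custom_config {
    failure_threshold = 1
  }
}
```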
Use `regional/modules/qdrant` to define the Qdrant nodes for each region.
The important part to note here is the startup command: the primary node (in `eu-west-2`) advertises its own `--uri`, while all replica nodes bootstrap from it with `--bootstrap`.
```hcl
# ... container definitions ...

command = var.region == "eu-west-2" ? [
  "./qdrant",
  "--uri",
  "http://${var.service_discovery_name}.${var.namespace_name}:6335"
] : [
  "./qdrant",
  "--bootstrap",
  "http://${var.primary_service_discovery_name}.${var.primary_namespace_name}:6335"
]

# ... container definitions ...
```
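One detail the snippet above leaves implicit: Qdrant only runs in distributed mode when cluster mode is enabled, which in a container definition is typically done via the `QDRANT__CLUSTER__ENABLED` environment variable. A sketch of the surrounding container definition fragment (the repository's exact task definition may differ):

```hcl
# ... container definitions (illustrative) ...

environment = [
  {
    name  = "QDRANT__CLUSTER__ENABLED"
    value = "true" # required for --uri / --bootstrap to form a cluster
  }
]

portMappings = [
  { containerPort = 6333, protocol = "tcp" }, # HTTP API
  { containerPort = 6334, protocol = "tcp" }, # gRPC API
  { containerPort = 6335, protocol = "tcp" }  # internal cluster port (used above)
]
```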
If you need to test the connection across VPCs, try using `ping` or `telnet` to check that VPC peering is working.
For example:
```sh
ping service-discovery-name.namespace-name
```
or to check that the port is open:
```sh
telnet service-discovery-name.namespace-name 6335
```
You can find the full Terraform code on GitHub. Did you find this post helpful? Give us a star!