Replication is the primary data sync and consensus mechanism in MongoDB, but it operates only within a single #ReplicaSet deployment. Data in different replica sets, or in different clusters, is isolated and cannot be accessed across them. In some scenarios we need full data synchronization across clusters so that the data can be read from any Mongo instance. Motivated by this, we developed the "Lamda" system to sync data from one Mongo cluster to one or more other clusters, mirroring the data so that every Mongo instance looks the same. Imagine establishing many data pipelines between MongoDB instances (or clusters). Our use cases for the Lamda system include backup and mirroring, geography-based data distribution and sync, and offline data analysis. We have successfully deployed dozens of Lamda instances across MongoDB data centers hundreds of miles apart, for disaster recovery and load balancing. We also addressed data center traffic switching and long-distance transmission: when one data center fails, we can redirect network traffic to the others easily, since the separate clusters hold equivalent data.
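
The post does not describe how a Lamda pipeline is implemented internally. As a rough illustration of what one direction of such a "data pipeline" could look like, the sketch below tails the source cluster's change stream and replays each operation on the target cluster. The connection strings, the use of PyMongo change streams, and the upsert-based replay are assumptions made here for illustration, not the actual Lamda design.

# Minimal sketch of one direction of a cross-cluster data pipeline.
# Assumption: we tail the source cluster's change stream (MongoDB 4.0+)
# and replay each operation on the target cluster. Connection strings
# are hypothetical; the real Lamda internals are not described in the post.
from pymongo import MongoClient

src = MongoClient("mongodb://source-cluster:27017")   # hypothetical source
dst = MongoClient("mongodb://target-cluster:27017")   # hypothetical target

# Watch every database/collection on the source cluster and mirror each
# change onto the same namespace in the target cluster.
with src.watch(full_document="updateLookup") as stream:
    for change in stream:
        ns = change["ns"]
        target = dst[ns["db"]][ns["coll"]]
        doc_id = change["documentKey"]["_id"]
        op = change["operationType"]
        if op in ("insert", "update", "replace"):
            # Upsert the full document so the target converges on the
            # source's current state for this _id.
            target.replace_one({"_id": doc_id},
                               change["fullDocument"], upsert=True)
        elif op == "delete":
            target.delete_one({"_id": doc_id})

Running a pipeline like this from each cluster toward the others is what would keep the separated clusters holding equivalent data, which in turn is what makes the traffic switching between data centers described above possible.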
