Restoring the configsvr Replica Set
Preparing Directories
rm -rf /compile/cluster-restore/cfg*
mkdir -p /compile/cluster-restore/cfg1/data/db
mkdir -p /compile/cluster-restore/cfg1/log
mkdir -p /compile/cluster-restore/cfg2/data/db
mkdir -p /compile/cluster-restore/cfg2/log
mkdir -p /compile/cluster-restore/cfg3/data/db
mkdir -p /compile/cluster-restore/cfg3/log
Procedure
- Prepare the configuration file and data directory of a single node and start the process in single-node mode.
- The configuration file is as follows (restoreconfig/single_40303.yaml):
net:
  bindIp: 127.0.0.1
  port: 40303
  unixDomainSocket: {enabled: false}
processManagement: {fork: true, pidFilePath: /compile/cluster-restore/cfg1/configsvr.pid}
storage:
  dbPath: /compile/cluster-restore/cfg1/data/db/
  directoryPerDB: true
  engine: wiredTiger
  wiredTiger:
    collectionConfig: {blockCompressor: snappy}
    engineConfig: {directoryForIndexes: true, journalCompressor: snappy}
    indexConfig: {prefixCompression: true}
systemLog: {destination: file, logAppend: true, logRotate: reopen, path: /compile/cluster-restore/cfg1/log/configsingle.log}
- Copy the decompressed configsvr files to the dbPath directory of the single node.
cp -aR /compile/download/backups/cac1efc8e65e42ecad8953352321bfeein02_41c8a32fb10245899708dea453a8c5c9no02/* /compile/cluster-restore/cfg1/data/db/
- Start the process with the single-node configuration file (restoreconfig/single_40303.yaml), as shown below.
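A typical start command, assuming the mongod binary is in the current directory (the shell examples here use ./mongo) and the file is saved as restoreconfig/single_40303.yaml; the exact invocation is an assumption:
./mongod -f restoreconfig/single_40303.yaml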
- Connect to the single node:
./mongo --host 127.0.0.1 --port 40303
- Run the following commands to modify the replica set configuration:
var cf=db.getSiblingDB('local').system.replset.findOne();
cf['members'][0]['host']='127.0.0.1:40303';
cf['members'][1]['host']='127.0.0.1:40304';
cf['members'][2]['host']='127.0.0.1:40305';
cf['members'][0]['hidden']=false;
cf['members'][1]['hidden']=false;
cf['members'][2]['hidden']=false;
cf['members'][0]['priority']=1;
cf['members'][1]['priority']=1;
cf['members'][2]['priority']=1;
db.getSiblingDB('local').system.replset.remove({});
db.getSiblingDB('local').system.replset.insert(cf)
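Optionally, re-read the document to confirm the rewritten member list; a minimal check in the same shell session (not required by the procedure):
db.getSiblingDB('local').system.replset.findOne();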
- Run the following commands to clear the built-in accounts:
db.getSiblingDB('admin').dropAllUsers();
db.getSiblingDB('admin').dropAllRoles();
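An optional sanity check (not required by the procedure) that the accounts and roles were cleared; both counts should return 0:
db.getSiblingDB('admin').system.users.find().count();
db.getSiblingDB('admin').system.roles.find().count();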
- Run the following commands to update the DDS mongos and shard information:
db.getSiblingDB('config').mongos.remove({});
Query the _id values of the shards in the config.shards collection and use each _id as the query condition in the following update statements. Update the records one by one.
db.getSiblingDB('config').shards.update({'_id' : 'shard_1'},{$set: {'host': 'shard_1/127.0.0.1:40306,127.0.0.1:40307,127.0.0.1:40308'}})
db.getSiblingDB('config').shards.update({'_id' : 'shard_2'},{$set: {'host': 'shard_2/127.0.0.1:40309,127.0.0.1:40310,127.0.0.1:40311'}})
Check the updated mongos and shard records:
db.getSiblingDB('config').mongos.find({});
db.getSiblingDB('config').shards.find({});
- Run the following command to stop the single-node process:
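One common way to stop the node cleanly from the mongo shell connected on port 40303 (shown as an assumption):
db.getSiblingDB('admin').shutdownServer();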
- Create a configsvr replica set.
- Copy the dbPath directory of the configsvr-1 node to the data directories of the other two configsvr nodes.
cp -aR /compile/cluster-restore/cfg1/data/db/* /compile/cluster-restore/cfg2/data/db/
cp -aR /compile/cluster-restore/cfg1/data/db/* /compile/cluster-restore/cfg3/data/db/
- Add the replica set configuration attribute to the configuration file (restoreconfig/configsvr_40303.yaml) of the configsvr-1 node.
net:
  bindIp: 127.0.0.1
  port: 40303
  unixDomainSocket: {enabled: false}
processManagement: {fork: true, pidFilePath: /compile/cluster-restore/cfg1/configsvr.pid}
replication: {replSetName: config}
sharding: {archiveMovedChunks: false, clusterRole: configsvr}
storage:
  dbPath: /compile/cluster-restore/cfg1/data/db/
  directoryPerDB: true
  engine: wiredTiger
  wiredTiger:
    collectionConfig: {blockCompressor: snappy}
    engineConfig: {directoryForIndexes: true, journalCompressor: snappy}
    indexConfig: {prefixCompression: true}
systemLog: {destination: file, logAppend: true, logRotate: reopen, path: /compile/cluster-restore/cfg1/log/configsvr.log}
- Start the process.
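A possible start command for the configsvr-1 node, assuming ./mongod is in the current directory and the file above is saved as restoreconfig/configsvr_40303.yaml (assumed invocation):
./mongod -f restoreconfig/configsvr_40303.yaml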
- Add the replica set configuration attribute to the configuration file (restoreconfig/configsvr_40304.yaml) of the configsvr-2 node.
net:
  bindIp: 127.0.0.1
  port: 40304
  unixDomainSocket: {enabled: false}
processManagement: {fork: true, pidFilePath: /compile/cluster-restore/cfg2/configsvr.pid}
replication: {replSetName: config}
sharding: {archiveMovedChunks: false, clusterRole: configsvr}
storage:
  dbPath: /compile/cluster-restore/cfg2/data/db/
  directoryPerDB: true
  engine: wiredTiger
  wiredTiger:
    collectionConfig: {blockCompressor: snappy}
    engineConfig: {directoryForIndexes: true, journalCompressor: snappy}
    indexConfig: {prefixCompression: true}
systemLog: {destination: file, logAppend: true, logRotate: reopen, path: /compile/cluster-restore/cfg2/log/configsvr.log}
- Start the process.
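Likewise for the configsvr-2 node (assumed invocation):
./mongod -f restoreconfig/configsvr_40304.yaml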
- Add the replica set configuration attribute to the configuration file (restoreconfig/configsvr_40305.yaml) of the configsvr-3 node.
net:
  bindIp: 127.0.0.1
  port: 40305
  unixDomainSocket: {enabled: false}
processManagement: {fork: true, pidFilePath: /compile/cluster-restore/cfg3/configsvr.pid}
replication: {replSetName: config}
sharding: {archiveMovedChunks: false, clusterRole: configsvr}
storage:
  dbPath: /compile/cluster-restore/cfg3/data/db/
  directoryPerDB: true
  engine: wiredTiger
  wiredTiger:
    collectionConfig: {blockCompressor: snappy}
    engineConfig: {directoryForIndexes: true, journalCompressor: snappy}
    indexConfig: {prefixCompression: true}
systemLog: {destination: file, logAppend: true, logRotate: reopen, path: /compile/cluster-restore/cfg3/log/configsvr.log}
- Start the process.
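Likewise for the configsvr-3 node (assumed invocation):
./mongod -f restoreconfig/configsvr_40305.yaml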
- Wait until the primary node is selected.
./mongo --host 127.0.0.1 --port 40303
Run the rs.status() command to check whether a primary node has been elected.
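For example, the following shell snippet (illustrative, not part of the original steps) lists the member that currently reports the PRIMARY state; an empty result means the election has not completed yet:
rs.status().members.filter(function (m) { return m.stateStr === 'PRIMARY'; });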