Restoring the shardsvr1 Replica Set
Preparing Directories
rm -rf /compile/cluster-restore/shd1*
mkdir -p /compile/cluster-restore/shd11/data/db
mkdir -p /compile/cluster-restore/shd11/log
mkdir -p /compile/cluster-restore/shd12/data/db
mkdir -p /compile/cluster-restore/shd12/log
mkdir -p /compile/cluster-restore/shd13/data/db
mkdir -p /compile/cluster-restore/shd13/log
Procedure
- Prepare the configuration file and directory of a single node and start the process in single-node mode.
- The configuration file is as follows (restoreconfig/single_40306.yaml):
net:
  bindIp: 127.0.0.1
  port: 40306
  unixDomainSocket: {enabled: false}
processManagement: {fork: true, pidFilePath: /compile/cluster-restore/shd11/mongod.pid}
storage:
  dbPath: /compile/cluster-restore/shd11/data/db/
  directoryPerDB: true
  engine: wiredTiger
  wiredTiger:
    collectionConfig: {blockCompressor: snappy}
    engineConfig: {directoryForIndexes: true, journalCompressor: snappy}
    indexConfig: {prefixCompression: true}
systemLog: {destination: file, logAppend: true, logRotate: reopen, path: /compile/cluster-restore/shd11/log/mongod.log}
- Copy the decompressed shardsvr1 backup files to the dbPath directory of the single node.
cp -aR /compile/download/backups/cac1efc8e65e42ecad8953352321bfeein02_6cfa6167d4114d7c8cec5b47f9a78dc5no02/* /compile/cluster-restore/shd11/data/db/
- Start the process.
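A minimal start command, assuming mongod is run from its bin directory and reads the configuration file above through the standard -f option:
./mongod -f restoreconfig/single_40306.yaml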
- Connect to the single node and run the following configuration command:
Connection command: ./mongo --host 127.0.0.1 --port 40306
- Run the following commands to modify the replica set configuration:
// Rewrite the member addresses to the local single-node ports, unhide all
// members, and reset their priorities so a primary can be elected locally.
var cf=db.getSiblingDB('local').system.replset.findOne();
cf['members'][0]['host']='127.0.0.1:40306';
cf['members'][1]['host']='127.0.0.1:40307';
cf['members'][2]['host']='127.0.0.1:40308';
cf['members'][0]['hidden']=false;
cf['members'][1]['hidden']=false;
cf['members'][2]['hidden']=false;
cf['members'][0]['priority']=1;
cf['members'][1]['priority']=1;
cf['members'][2]['priority']=1;
// Replace the stored replica set configuration with the modified document.
db.getSiblingDB('local').system.replset.remove({});
db.getSiblingDB('local').system.replset.insert(cf);
- Run the following commands to clear the built-in accounts:
db.getSiblingDB('admin').dropAllUsers();
db.getSiblingDB('admin').dropAllRoles();
- Run the following commands to update the configsvr information:
Connection command: ./mongo --host 127.0.0.1 --port 40306
// Point this shard at the locally restored configsvr replica set by updating
// the configsvrConnectionString stored in admin.system.version.
var vs = db.getSiblingDB('admin').system.version.find();
while (vs.hasNext()) {
    var curr = vs.next();
    if (curr.hasOwnProperty('configsvrConnectionString')) {
        db.getSiblingDB('admin').system.version.update({'_id' : curr._id}, {$set: {'configsvrConnectionString': 'config/127.0.0.1:40303,127.0.0.1:40304,127.0.0.1:40305'}});
    }
}
- Run the following command to stop the single-node process:
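A minimal sketch of the stop step, assuming the mongo shell is still connected to the node on port 40306:
db.getSiblingDB('admin').shutdownServer();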
- Create the shardsvr1 replica set.
- Copy the dbPath file of the shardsvr1 node to the directories of the other two shardsvr nodes.
cp -aR /compile/cluster-restore/shd11/data/db/* /compile/cluster-restore/shd12/data/db/
cp -aR /compile/cluster-restore/shd11/data/db/* /compile/cluster-restore/shd13/data/db/
- Add the replica set configuration attribute to the configuration file (restoreconfig/shardsvr_40306.yaml) of the shardsvr1-1 node.
For details about the value of replication.replSetName, see the shard _id information in 2.c.
net:
  bindIp: 127.0.0.1
  port: 40306
  unixDomainSocket: {enabled: false}
processManagement: {fork: true, pidFilePath: /compile/cluster-restore/shd11/mongod.pid}
replication: {replSetName: shard_1}
sharding: {archiveMovedChunks: false, clusterRole: shardsvr}
storage:
  dbPath: /compile/cluster-restore/shd11/data/db/
  directoryPerDB: true
  engine: wiredTiger
  wiredTiger:
    collectionConfig: {blockCompressor: snappy}
    engineConfig: {directoryForIndexes: true, journalCompressor: snappy}
    indexConfig: {prefixCompression: true}
systemLog: {destination: file, logAppend: true, logRotate: reopen, path: /compile/cluster-restore/shd11/log/mongod.log}
- Start the process.
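For example, under the same assumptions as the single-node start (mongod run from its bin directory, configuration passed with -f):
./mongod -f restoreconfig/shardsvr_40306.yaml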
- Add the replica set configuration attribute to the configuration file (restoreconfig/shardsvr_40307.yaml) of the shardsvr1-2 node.
For details about the value of replication.replSetName, see the shard _id information in 2.c.
net:
  bindIp: 127.0.0.1
  port: 40307
  unixDomainSocket: {enabled: false}
processManagement: {fork: true, pidFilePath: /compile/cluster-restore/shd12/mongod.pid}
replication: {replSetName: shard_1}
sharding: {archiveMovedChunks: false, clusterRole: shardsvr}
storage:
  dbPath: /compile/cluster-restore/shd12/data/db/
  directoryPerDB: true
  engine: wiredTiger
  wiredTiger:
    collectionConfig: {blockCompressor: snappy}
    engineConfig: {directoryForIndexes: true, journalCompressor: snappy}
    indexConfig: {prefixCompression: true}
systemLog: {destination: file, logAppend: true, logRotate: reopen, path: /compile/cluster-restore/shd12/log/mongod.log}
- Start the process.
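Likewise, under the same assumptions:
./mongod -f restoreconfig/shardsvr_40307.yaml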
- Add the replica set configuration attribute to the configuration file (restoreconfig/shardsvr_40308.yaml) of the shardsvr1-3 node.
For details about the value of replication.replSetName, see the shard _id information in 2.c.
net:
  bindIp: 127.0.0.1
  port: 40308
  unixDomainSocket: {enabled: false}
processManagement: {fork: true, pidFilePath: /compile/cluster-restore/shd13/mongod.pid}
replication: {replSetName: shard_1}
sharding: {archiveMovedChunks: false, clusterRole: shardsvr}
storage:
  dbPath: /compile/cluster-restore/shd13/data/db/
  directoryPerDB: true
  engine: wiredTiger
  wiredTiger:
    collectionConfig: {blockCompressor: snappy}
    engineConfig: {directoryForIndexes: true, journalCompressor: snappy}
    indexConfig: {prefixCompression: true}
systemLog: {destination: file, logAppend: true, logRotate: reopen, path: /compile/cluster-restore/shd13/log/mongod.log}
- Start the process.
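And for the third node, under the same assumptions:
./mongod -f restoreconfig/shardsvr_40308.yaml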
- Wait until the primary node is selected.
./mongo --host 127.0.0.1 --port 40306
Run the rs.status() command to check whether the primary node exists.
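For example, the following mongo shell snippet (assuming the standard rs.status() member fields) prints each member's state:
// One member should eventually report the state PRIMARY.
rs.status().members.forEach(function (m) { print(m.name + ' : ' + m.stateStr); });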