Restoring the shardsvr2 Replica Set
Preparing Directories
rm -rf /compile/cluster-restore/shd2*
mkdir -p /compile/cluster-restore/shd21/data/db
mkdir -p /compile/cluster-restore/shd21/log
mkdir -p /compile/cluster-restore/shd22/data/db
mkdir -p /compile/cluster-restore/shd22/log
mkdir -p /compile/cluster-restore/shd23/data/db
mkdir -p /compile/cluster-restore/shd23/log
Procedure
- Prepare the configuration file and directory of a single node and start the process in single-node mode.
- The configuration file is as follows (restoreconfig/single_40309.yaml):
net:
  bindIp: 127.0.0.1
  port: 40309
  unixDomainSocket: {enabled: false}
processManagement: {fork: true, pidFilePath: /compile/cluster-restore/shd21/mongod.pid}
storage:
  dbPath: /compile/cluster-restore/shd21/data/db/
  directoryPerDB: true
  engine: wiredTiger
  wiredTiger:
    collectionConfig: {blockCompressor: snappy}
    engineConfig: {directoryForIndexes: true, journalCompressor: snappy}
    indexConfig: {prefixCompression: true}
systemLog: {destination: file, logAppend: true, logRotate: reopen, path: /compile/cluster-restore/shd21/log/mongod.log}
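The start command itself is not shown here; assuming the mongod binary is run from the same directory as the mongo client used below, the single node can be started with:
./mongod -f restoreconfig/single_40309.yaml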
- Connect to the single node and run the following configuration commands:
Connection command: ./mongo --host 127.0.0.1 --port 40309
- Run the following commands to modify the replica set configuration:
var cf=db.getSiblingDB('local').system.replset.findOne();
cf['members'][0]['host']='127.0.0.1:40309';
cf['members'][1]['host']='127.0.0.1:40310';
cf['members'][2]['host']='127.0.0.1:40311';
cf['members'][0]['hidden']=false;
cf['members'][1]['hidden']=false;
cf['members'][2]['hidden']=false;
cf['members'][0]['priority']=1;
cf['members'][1]['priority']=1;
cf['members'][2]['priority']=1;
db.getSiblingDB('local').system.replset.remove({});
db.getSiblingDB('local').system.replset.insert(cf);
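Optionally, re-read the document to confirm the rewritten host, hidden, and priority values (a verification step assumed here, not given in the source):
var check = db.getSiblingDB('local').system.replset.findOne();
printjson(check.members);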
- Run the following commands to clear the built-in accounts:
db.getSiblingDB('admin').dropAllUsers();
db.getSiblingDB('admin').dropAllRoles();
- Run the following commands to update the configsvr information:
var vs = db.getSiblingDB('admin').system.version.find();
while (vs.hasNext()) {
    var curr = vs.next();
    if (curr.hasOwnProperty('configsvrConnectionString')) {
        db.getSiblingDB('admin').system.version.update(
            {'_id': curr._id},
            {$set: {'configsvrConnectionString': 'config/127.0.0.1:40303,127.0.0.1:40304,127.0.0.1:40305'}}
        );
    }
}
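To verify the change (an optional check, not part of the source procedure), re-query the documents that carry the connection string:
db.getSiblingDB('admin').system.version.find({'configsvrConnectionString': {$exists: true}});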
- Run the following command to stop the single-node process:
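The exact command is not shown here; a standard way to stop a mongod from the connected shell (an assumption) is:
db.getSiblingDB('admin').shutdownServer();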
- Create the shardsvr2 replica set.
- Copy the dbPath directory contents of the shardsvr2-1 node to the data directories of the other two shardsvr2 nodes. The trailing /. copies the directory contents rather than nesting a second db directory inside the destination:
cp -aR /compile/cluster-restore/shd21/data/db/. /compile/cluster-restore/shd22/data/db/
cp -aR /compile/cluster-restore/shd21/data/db/. /compile/cluster-restore/shd23/data/db/
- Add the replica set configuration attribute to the configuration file (restoreconfig/shardsvr_40309.yaml) of the shardsvr2-1 node.
--- For details about the value of replication.replSetName, see the shard _id information in 2.c.
net:
  bindIp: 127.0.0.1
  port: 40309
  unixDomainSocket: {enabled: false}
processManagement: {fork: true, pidFilePath: /compile/cluster-restore/shd21/mongod.pid}
replication: {replSetName: shard_2}
sharding: {archiveMovedChunks: false, clusterRole: shardsvr}
storage:
  dbPath: /compile/cluster-restore/shd21/data/db/
  directoryPerDB: true
  engine: wiredTiger
  wiredTiger:
    collectionConfig: {blockCompressor: snappy}
    engineConfig: {directoryForIndexes: true, journalCompressor: snappy}
    indexConfig: {prefixCompression: true}
systemLog: {destination: file, logAppend: true, logRotate: reopen, path: /compile/cluster-restore/shd21/log/mongod.log}
- Start the process.
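Assuming the same mongod invocation as for the single-node start:
./mongod -f restoreconfig/shardsvr_40309.yaml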
- Add the replica set configuration attribute to the configuration file (restoreconfig/shardsvr_40310.yaml) of the shardsvr2-2 node.
--- For details about the value of replication.replSetName, see the shard _id information in 2.c.
net:
  bindIp: 127.0.0.1
  port: 40310
  unixDomainSocket: {enabled: false}
processManagement: {fork: true, pidFilePath: /compile/cluster-restore/shd22/mongod.pid}
replication: {replSetName: shard_2}
sharding: {archiveMovedChunks: false, clusterRole: shardsvr}
storage:
  dbPath: /compile/cluster-restore/shd22/data/db/
  directoryPerDB: true
  engine: wiredTiger
  wiredTiger:
    collectionConfig: {blockCompressor: snappy}
    engineConfig: {directoryForIndexes: true, journalCompressor: snappy}
    indexConfig: {prefixCompression: true}
systemLog: {destination: file, logAppend: true, logRotate: reopen, path: /compile/cluster-restore/shd22/log/mongod.log}
- Start the process.
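Again assuming the same invocation style:
./mongod -f restoreconfig/shardsvr_40310.yaml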
- Add the replica set configuration attribute to the configuration file (restoreconfig/shardsvr_40311.yaml) of the shardsvr2-3 node.
--- For details about the value of replication.replSetName, see the shard _id information in 2.c.
net:
  bindIp: 127.0.0.1
  port: 40311
  unixDomainSocket: {enabled: false}
processManagement: {fork: true, pidFilePath: /compile/cluster-restore/shd23/mongod.pid}
replication: {replSetName: shard_2}
sharding: {archiveMovedChunks: false, clusterRole: shardsvr}
storage:
  dbPath: /compile/cluster-restore/shd23/data/db/
  directoryPerDB: true
  engine: wiredTiger
  wiredTiger:
    collectionConfig: {blockCompressor: snappy}
    engineConfig: {directoryForIndexes: true, journalCompressor: snappy}
    indexConfig: {prefixCompression: true}
systemLog: {destination: file, logAppend: true, logRotate: reopen, path: /compile/cluster-restore/shd23/log/mongod.log}
- Start the process.
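And likewise for the third node:
./mongod -f restoreconfig/shardsvr_40311.yaml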
- Wait until a primary node is elected.
./mongo --host 127.0.0.1 --port 40309
Run the rs.status() command to check whether the primary node exists.
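For example, the following loop (illustrative, not from the source) prints each member's state; one member should eventually report PRIMARY:
rs.status().members.forEach(function (m) { print(m.name + ' -> ' + m.stateStr); });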