Issue with replication when upgrading MongoDB 4.4.8 to 6.0.4

Question

We have a replica set with three nodes running in Docker containers hosted on three hosts, with a 128 GB filesystem backing each MongoDB data node.

Each MongoDB container has a 5 GB memory limit and a 1.5 CPU limit, and the WiredTiger cacheSizeGB is set to 2.5.
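
For reference, the effective cache size can be confirmed from mongosh; this is a minimal check based on the standard serverStatus() output (the field names below are the stock WiredTiger statistics):

// Inspect the WiredTiger cache statistics reported by serverStatus().
// "maximum bytes configured" should match the configured cacheSizeGB (2.5 GB here).
const cache = db.serverStatus().wiredTiger.cache;
print("maximum bytes configured :", cache["maximum bytes configured"]);
print("bytes currently in cache :", cache["bytes currently in the cache"]);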

The upgrade process followed the steps in the documentation provided here:
https://www.mongodb.com/docs/manual/release-notes/5.0-upgrade-replica-set/
https://www.mongodb.com/docs/manual/release-notes/6.0-upgrade-replica-set/
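
For context, after the rolling binary upgrade of each member, those pages end with raising the feature compatibility version on the primary, roughly:

// Run on the primary once every member is running the 5.0 binaries:
db.adminCommand( { setFeatureCompatibilityVersion: "5.0" } )

// Repeat the rolling binary upgrade to 6.0, then once every member runs the 6.0 binaries:
db.adminCommand( { setFeatureCompatibilityVersion: "6.0" } )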

After investigating the logs of the different versions, I see that this error is present in versions 4.4.8 and 5.0:

{"log":"{\"t\":{\"$date\":\"2023-03-04T16:22:54.348+00:00\"},\"s\":\"I\",  \"c\":\"COMMAND\",  \"id\":51803,   \"ctx\":\"conn125400\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"local.oplog.rs\",\"command\":{\"find\":\"oplog.rs\",\"filter\":{},\"sort\":{\"-$natural\":1},\"lsid\":{\"id\":{\"$uuid\":\"9a02cb13-57ee-4793-a27d-ae51ef6774af\"}},\"$clusterTime\":{\"clusterTime\":{\"$timestamp\":{\"t\":1677946973,\"i\":2}},\"signature\":{\"hash\":{\"$binary\":{\"base64\":\"EBMoYzvf/BJE1tf1ghz6ZkQ+E2Y=\",\"subType\":\"0\"}},\"keyId\":7206450133318238212}},\"$db\":\"local\",\"$readPreference\":{\"mode\":\"primaryPreferred\"}},\"planSummary\":\"COLLSCAN\",\"numYields\":71,\"queryHash\":\"A11B6D23\",\"planCacheKey\":\"A11B6D23\",\"ok\":0,\"errMsg\":\"Executor error during find command :: caused by :: Sort exceeded memory limit of 104857600 bytes, but did not opt in to external sorting.\",\"errName\":\"QueryExceededMemoryLimitNoDiskUseAllowed\",\"errCode\":292,\"reslen\":378,\"locks\":{\"ReplicationStateTransition\":{\"acquireCount\":{\"w\":72}},\"Global\":{\"acquireCount\":{\"r\":72}},\"Database\":{\"acquireCount\":{\"r\":72}},\"Mutex\":{\"acquireCount\":{\"r\":1}},\"oplog\":{\"acquireCount\":{\"r\":72}}},\"storage\":{},\"protocol\":\"op_msg\",\"durationMillis\":105}}\r\n","stream":"stdout","time":"2023-03-04T16:22:54.349177116Z"}

This error refers to the command: "command":{"find":"oplog.rs","filter":{},"sort":{"-$natural":1}}

It fails because the result is too large to be stored in memory, and writing to disk is disabled by default in versions 4.4.8 and 5.0, unlike in version 6.0.
In my case it is exactly this command that fills up my disk in version 6.0.
I do not understand which event triggers the execution of this command.
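
For reference, the usual way to read the newest oplog entry from mongosh is a reverse natural-order scan with a limit, which needs no blocking sort, unlike the logged query, which sorts the entire oplog in memory:

// Normal "latest oplog entry" lookup: walk the capped collection backwards and stop at one document.
db.getSiblingDB("local").oplog.rs.find().sort({ $natural: -1 }).limit(1)

// The logged query instead requests a sort on a literal "-$natural" field over the whole
// collection ("find":"oplog.rs","filter":{},"sort":{"-$natural":1}), which forces an
// in-memory sort of every oplog entry and trips the 104857600-byte limit.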

I suppose this event is due to the replication being out of sync, or to the election process, which causes the oplog to be read on every node for data recovery, but is this a normal MongoDB process?
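
As a sanity check on that hypothesis, the member optimes reported by rs.status() (shown further below) can be compared directly to see whether any node is lagging; a small sketch:

// Print each member's replication lag relative to the primary, in seconds.
const status = rs.status();
const primary = status.members.find(m => m.stateStr === "PRIMARY");
status.members.forEach(m =>
  print(m.name, m.stateStr, "lag:", (primary.optimeDate - m.optimeDate) / 1000, "s")
);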

Must I set up a specific configuration for this process to run normally without having to spill to disk?
Is it possible to restrict the size of the temporary files MongoDB writes to the drive? (See the parameter sketch below.)
Is my MongoDB cluster wrongly sized?
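
Regarding the last two questions, the knobs involved (as far as I can tell) are the blocking-sort memory limit and, on 6.0, the allowDiskUseByDefault server parameter; a sketch:

// The 104857600 bytes in the error message is this parameter's default value:
db.adminCommand({ getParameter: 1, internalQueryMaxBlockingSortMemoryUsageBytes: 1 })

// MongoDB 6.0 lets queries spill large sorts to disk by default; turning it off restores
// the 4.4/5.0 behaviour (the query fails instead of writing temporary files under dbPath):
db.adminCommand({ setParameter: 1, allowDiskUseByDefault: false })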

We have restarted the migration process from 4.4.8 to 6.0.4 and we still have the same issue.

Here is the output of the rs.status() command for each version

Version 4.4.8

rs.status()
{
	"set" : "ds-rs",
	"date" : ISODate("2023-03-04T20:28:31.952Z"),
	"myState" : 1,
	"term" : NumberLong(3),
	"syncSourceHost" : "",
	"syncSourceId" : -1,
	"heartbeatIntervalMillis" : NumberLong(2000),
	"majorityVoteCount" : 2,
	"writeMajorityCount" : 2,
	"votingMembersCount" : 3,
	"writableVotingMembersCount" : 3,
	"optimes" : {
		"lastCommittedOpTime" : {
			"ts" : Timestamp(1677961707, 41),
			"t" : NumberLong(3)
		},
		"lastCommittedWallTime" : ISODate("2023-03-04T20:28:27.492Z"),
		"readConcernMajorityOpTime" : {
			"ts" : Timestamp(1677961707, 41),
			"t" : NumberLong(3)
		},
		"readConcernMajorityWallTime" : ISODate("2023-03-04T20:28:27.492Z"),
		"appliedOpTime" : {
			"ts" : Timestamp(1677961707, 41),
			"t" : NumberLong(3)
		},
		"durableOpTime" : {
			"ts" : Timestamp(1677961707, 41),
			"t" : NumberLong(3)
		},
		"lastAppliedWallTime" : ISODate("2023-03-04T20:28:27.492Z"),
		"lastDurableWallTime" : ISODate("2023-03-04T20:28:27.492Z")
	},
	"lastStableRecoveryTimestamp" : Timestamp(1677961703, 1),
	"electionCandidateMetrics" : {
		"lastElectionReason" : "electionTimeout",
		"lastElectionDate" : ISODate("2023-03-04T20:23:43.468Z"),
		"electionTerm" : NumberLong(3),
		"lastCommittedOpTimeAtElection" : {
			"ts" : Timestamp(0, 0),
			"t" : NumberLong(-1)
		},
		"lastSeenOpTimeAtElection" : {
			"ts" : Timestamp(1677959280, 1),
			"t" : NumberLong(2)
		},
		"numVotesNeeded" : 2,
		"priorityAtElection" : 1,
		"electionTimeoutMillis" : NumberLong(10000),
		"numCatchUpOps" : NumberLong(0),
		"newTermStartDate" : ISODate("2023-03-04T20:23:43.564Z"),
		"wMajorityWriteAvailabilityDate" : ISODate("2023-03-04T20:23:44.805Z")
	},
	"members" : [
		{
			"_id" : 1,
			"name" : "11.0.50.9:27017",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 290,
			"optime" : {
				"ts" : Timestamp(1677961707, 41),
				"t" : NumberLong(3)
			},
			"optimeDurable" : {
				"ts" : Timestamp(1677961707, 41),
				"t" : NumberLong(3)
			},
			"optimeDate" : ISODate("2023-03-04T20:28:27Z"),
			"optimeDurableDate" : ISODate("2023-03-04T20:28:27Z"),
			"lastHeartbeat" : ISODate("2023-03-04T20:28:31.779Z"),
			"lastHeartbeatRecv" : ISODate("2023-03-04T20:28:31.022Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "",
			"syncSourceHost" : "11.0.50.11:27017",
			"syncSourceId" : 3,
			"infoMessage" : "",
			"configVersion" : 1,
			"configTerm" : 3
		},
		{
			"_id" : 2,
			"name" : "11.0.50.10:27017",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 298,
			"optime" : {
				"ts" : Timestamp(1677961707, 41),
				"t" : NumberLong(3)
			},
			"optimeDurable" : {
				"ts" : Timestamp(1677961707, 41),
				"t" : NumberLong(3)
			},
			"optimeDate" : ISODate("2023-03-04T20:28:27Z"),
			"optimeDurableDate" : ISODate("2023-03-04T20:28:27Z"),
			"lastHeartbeat" : ISODate("2023-03-04T20:28:31.781Z"),
			"lastHeartbeatRecv" : ISODate("2023-03-04T20:28:30.026Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "",
			"syncSourceHost" : "11.0.50.9:27017",
			"syncSourceId" : 1,
			"infoMessage" : "",
			"configVersion" : 1,
			"configTerm" : 3
		},
		{
			"_id" : 3,
			"name" : "11.0.50.11:27017",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",
			"uptime" : 311,
			"optime" : {
				"ts" : Timestamp(1677961707, 41),
				"t" : NumberLong(3)
			},
			"optimeDate" : ISODate("2023-03-04T20:28:27Z"),
			"syncSourceHost" : "",
			"syncSourceId" : -1,
			"infoMessage" : "",
			"electionTime" : Timestamp(1677961423, 1),
			"electionDate" : ISODate("2023-03-04T20:23:43Z"),
			"configVersion" : 1,
			"configTerm" : 3,
			"self" : true,
			"lastHeartbeatMessage" : ""
		}
	],
	"ok" : 1,
	"$clusterTime" : {
		"clusterTime" : Timestamp(1677961707, 41),
		"signature" : {
			"hash" : BinData(0,"p7TYQ+teX1adsMmdcQUirXqfXIE="),
			"keyId" : NumberLong("7206450133318238212")
		}
	},
	"operationTime" : Timestamp(1677961707, 41)
}

Version 5.0

rs.status()
{
	"set" : "ds-rs",
	"date" : ISODate("2023-03-04T20:35:46.296Z"),
	"myState" : 1,
	"term" : NumberLong(4),
	"syncSourceHost" : "",
	"syncSourceId" : -1,
	"heartbeatIntervalMillis" : NumberLong(2000),
	"majorityVoteCount" : 2,
	"writeMajorityCount" : 2,
	"votingMembersCount" : 3,
	"writableVotingMembersCount" : 3,
	"optimes" : {
		"lastCommittedOpTime" : {
			"ts" : Timestamp(1677962136, 1),
			"t" : NumberLong(4)
		},
		"lastCommittedWallTime" : ISODate("2023-03-04T20:35:36.996Z"),
		"readConcernMajorityOpTime" : {
			"ts" : Timestamp(1677962136, 1),
			"t" : NumberLong(4)
		},
		"appliedOpTime" : {
			"ts" : Timestamp(1677962136, 1),
			"t" : NumberLong(4)
		},
		"durableOpTime" : {
			"ts" : Timestamp(1677962136, 1),
			"t" : NumberLong(4)
		},
		"lastAppliedWallTime" : ISODate("2023-03-04T20:35:36.996Z"),
		"lastDurableWallTime" : ISODate("2023-03-04T20:35:36.996Z")
	},
	"lastStableRecoveryTimestamp" : Timestamp(1677962126, 1),
	"electionCandidateMetrics" : {
		"lastElectionReason" : "electionTimeout",
		"lastElectionDate" : ISODate("2023-03-04T20:34:56.969Z"),
		"electionTerm" : NumberLong(4),
		"lastCommittedOpTimeAtElection" : {
			"ts" : Timestamp(0, 0),
			"t" : NumberLong(-1)
		},
		"lastSeenOpTimeAtElection" : {
			"ts" : Timestamp(1677961933, 1),
			"t" : NumberLong(3)
		},
		"numVotesNeeded" : 2,
		"priorityAtElection" : 1,
		"electionTimeoutMillis" : NumberLong(10000),
		"numCatchUpOps" : NumberLong(0),
		"newTermStartDate" : ISODate("2023-03-04T20:34:56.991Z"),
		"wMajorityWriteAvailabilityDate" : ISODate("2023-03-04T20:34:57.032Z")
	},
	"members" : [
		{
			"_id" : 1,
			"name" : "11.0.50.9:27017",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 47,
			"optime" : {
				"ts" : Timestamp(1677962136, 1),
				"t" : NumberLong(4)
			},
			"optimeDurable" : {
				"ts" : Timestamp(1677962136, 1),
				"t" : NumberLong(4)
			},
			"optimeDate" : ISODate("2023-03-04T20:35:36Z"),
			"optimeDurableDate" : ISODate("2023-03-04T20:35:36Z"),
			"lastAppliedWallTime" : ISODate("2023-03-04T20:35:36.996Z"),
			"lastDurableWallTime" : ISODate("2023-03-04T20:35:36.996Z"),
			"lastHeartbeat" : ISODate("2023-03-04T20:35:45.018Z"),
			"lastHeartbeatRecv" : ISODate("2023-03-04T20:35:44.896Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "",
			"syncSourceHost" : "11.0.50.11:27017",
			"syncSourceId" : 3,
			"infoMessage" : "",
			"configVersion" : 1,
			"configTerm" : 4
		},
		{
			"_id" : 2,
			"name" : "11.0.50.10:27017",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 58,
			"optime" : {
				"ts" : Timestamp(1677962136, 1),
				"t" : NumberLong(4)
			},
			"optimeDurable" : {
				"ts" : Timestamp(1677962136, 1),
				"t" : NumberLong(4)
			},
			"optimeDate" : ISODate("2023-03-04T20:35:36Z"),
			"optimeDurableDate" : ISODate("2023-03-04T20:35:36Z"),
			"lastAppliedWallTime" : ISODate("2023-03-04T20:35:36.996Z"),
			"lastDurableWallTime" : ISODate("2023-03-04T20:35:36.996Z"),
			"lastHeartbeat" : ISODate("2023-03-04T20:35:45.029Z"),
			"lastHeartbeatRecv" : ISODate("2023-03-04T20:35:45.559Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "",
			"syncSourceHost" : "11.0.50.11:27017",
			"syncSourceId" : 3,
			"infoMessage" : "",
			"configVersion" : 1,
			"configTerm" : 4
		},
		{
			"_id" : 3,
			"name" : "11.0.50.11:27017",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",
			"uptime" : 76,
			"optime" : {
				"ts" : Timestamp(1677962136, 1),
				"t" : NumberLong(4)
			},
			"optimeDate" : ISODate("2023-03-04T20:35:36Z"),
			"lastAppliedWallTime" : ISODate("2023-03-04T20:35:36.996Z"),
			"lastDurableWallTime" : ISODate("2023-03-04T20:35:36.996Z"),
			"syncSourceHost" : "",
			"syncSourceId" : -1,
			"infoMessage" : "",
			"electionTime" : Timestamp(1677962096, 1),
			"electionDate" : ISODate("2023-03-04T20:34:56Z"),
			"configVersion" : 1,
			"configTerm" : 4,
			"self" : true,
			"lastHeartbeatMessage" : ""
		}
	],
	"ok" : 1,
	"$clusterTime" : {
		"clusterTime" : Timestamp(1677962136, 1),
		"signature" : {
			"hash" : BinData(0,"4zdWQKCg1WIw8twSfdxRx49MlsU="),
			"keyId" : NumberLong("7206450133318238212")
		}
	},
	"operationTime" : Timestamp(1677962136, 1)
}

Version 6.0.4, before setting featureCompatibilityVersion to "6.0"

rs.status()
{
  set: 'ds-rs',
  date: ISODate("2023-03-04T20:41:13.192Z"),
  myState: 1,
  term: Long("5"),
  syncSourceHost: '',
  syncSourceId: -1,
  heartbeatIntervalMillis: Long("2000"),
  majorityVoteCount: 2,
  writeMajorityCount: 2,
  votingMembersCount: 3,
  writableVotingMembersCount: 3,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1677962471, i: 1 }), t: Long("5") },
    lastCommittedWallTime: ISODate("2023-03-04T20:41:11.565Z"),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1677962471, i: 1 }), t: Long("5") },
    appliedOpTime: { ts: Timestamp({ t: 1677962471, i: 1 }), t: Long("5") },
    durableOpTime: { ts: Timestamp({ t: 1677962471, i: 1 }), t: Long("5") },
    lastAppliedWallTime: ISODate("2023-03-04T20:41:11.565Z"),
    lastDurableWallTime: ISODate("2023-03-04T20:41:11.565Z")
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1677962337, i: 1 }),
  electionCandidateMetrics: {
    lastElectionReason: 'electionTimeout',
    lastElectionDate: ISODate("2023-03-04T20:40:50.481Z"),
    electionTerm: Long("5"),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 0, i: 0 }), t: Long("-1") },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1677962347, i: 1 }), t: Long("4") },
    numVotesNeeded: 2,
    priorityAtElection: 1,
    electionTimeoutMillis: Long("10000"),
    numCatchUpOps: Long("0"),
    newTermStartDate: ISODate("2023-03-04T20:40:51.551Z"),
    wMajorityWriteAvailabilityDate: ISODate("2023-03-04T20:40:56.028Z")
  },
  members: [
    {
      _id: 1,
      name: '11.0.50.9:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 11,
      optime: { ts: Timestamp({ t: 1677962471, i: 1 }), t: Long("5") },
      optimeDurable: { ts: Timestamp({ t: 1677962471, i: 1 }), t: Long("5") },
      optimeDate: ISODate("2023-03-04T20:41:11.000Z"),
      optimeDurableDate: ISODate("2023-03-04T20:41:11.000Z"),
      lastAppliedWallTime: ISODate("2023-03-04T20:41:11.565Z"),
      lastDurableWallTime: ISODate("2023-03-04T20:41:11.565Z"),
      lastHeartbeat: ISODate("2023-03-04T20:41:11.593Z"),
      lastHeartbeatRecv: ISODate("2023-03-04T20:41:11.896Z"),
      pingMs: Long("3"),
      lastHeartbeatMessage: '',
      syncSourceHost: '11.0.50.11:27017',
      syncSourceId: 3,
      infoMessage: '',
      configVersion: 2,
      configTerm: 5
    },
    {
      _id: 2,
      name: '11.0.50.10:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 22,
      optime: { ts: Timestamp({ t: 1677962471, i: 1 }), t: Long("5") },
      optimeDurable: { ts: Timestamp({ t: 1677962471, i: 1 }), t: Long("5") },
      optimeDate: ISODate("2023-03-04T20:41:11.000Z"),
      optimeDurableDate: ISODate("2023-03-04T20:41:11.000Z"),
      lastAppliedWallTime: ISODate("2023-03-04T20:41:11.565Z"),
      lastDurableWallTime: ISODate("2023-03-04T20:41:11.565Z"),
      lastHeartbeat: ISODate("2023-03-04T20:41:11.574Z"),
      lastHeartbeatRecv: ISODate("2023-03-04T20:41:11.586Z"),
      pingMs: Long("1"),
      lastHeartbeatMessage: '',
      syncSourceHost: '11.0.50.11:27017',
      syncSourceId: 3,
      infoMessage: '',
      configVersion: 2,
      configTerm: 5
    },
    {
      _id: 3,
      name: '11.0.50.11:27017',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 38,
      optime: { ts: Timestamp({ t: 1677962471, i: 1 }), t: Long("5") },
      optimeDate: ISODate("2023-03-04T20:41:11.000Z"),
      lastAppliedWallTime: ISODate("2023-03-04T20:41:11.565Z"),
      lastDurableWallTime: ISODate("2023-03-04T20:41:11.565Z"),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      electionTime: Timestamp({ t: 1677962450, i: 1 }),
      electionDate: ISODate("2023-03-04T20:40:50.000Z"),
      configVersion: 2,
      configTerm: 5,
      self: true,
      lastHeartbeatMessage: ''
    }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1677962471, i: 1 }),
    signature: {
      hash: Binary(Buffer.from("a93b92c3b1b03f506d12a906e8b1b2aa853d7ccc", "hex"), 0),
      keyId: Long("7206450133318238212")
    }
  },
  operationTime: Timestamp({ t: 1677962471, i: 1 })
}

Version 6.0.4, after setting featureCompatibilityVersion to "6.0"

rs.status()
{
  set: 'ds-rs',
  date: ISODate("2023-03-04T20:42:30.793Z"),
  myState: 1,
  term: Long("5"),
  syncSourceHost: '',
  syncSourceId: -1,
  heartbeatIntervalMillis: Long("2000"),
  majorityVoteCount: 2,
  writeMajorityCount: 2,
  votingMembersCount: 3,
  writableVotingMembersCount: 3,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1677962538, i: 4 }), t: Long("5") },
    lastCommittedWallTime: ISODate("2023-03-04T20:42:18.702Z"),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1677962538, i: 4 }), t: Long("5") },
    appliedOpTime: { ts: Timestamp({ t: 1677962538, i: 4 }), t: Long("5") },
    durableOpTime: { ts: Timestamp({ t: 1677962538, i: 4 }), t: Long("5") },
    lastAppliedWallTime: ISODate("2023-03-04T20:42:18.702Z"),
    lastDurableWallTime: ISODate("2023-03-04T20:42:18.702Z")
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1677962491, i: 1 }),
  electionCandidateMetrics: {
    lastElectionReason: 'electionTimeout',
    lastElectionDate: ISODate("2023-03-04T20:40:50.481Z"),
    electionTerm: Long("5"),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 0, i: 0 }), t: Long("-1") },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1677962347, i: 1 }), t: Long("4") },
    numVotesNeeded: 2,
    priorityAtElection: 1,
    electionTimeoutMillis: Long("10000"),
    numCatchUpOps: Long("0"),
    newTermStartDate: ISODate("2023-03-04T20:40:51.551Z"),
    wMajorityWriteAvailabilityDate: ISODate("2023-03-04T20:40:56.028Z")
  },
  members: [
    {
      _id: 1,
      name: '11.0.50.9:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 89,
      optime: { ts: Timestamp({ t: 1677962538, i: 4 }), t: Long("5") },
      optimeDurable: { ts: Timestamp({ t: 1677962538, i: 4 }), t: Long("5") },
      optimeDate: ISODate("2023-03-04T20:42:18.000Z"),
      optimeDurableDate: ISODate("2023-03-04T20:42:18.000Z"),
      lastAppliedWallTime: ISODate("2023-03-04T20:42:18.702Z"),
      lastDurableWallTime: ISODate("2023-03-04T20:42:18.702Z"),
      lastHeartbeat: ISODate("2023-03-04T20:42:29.673Z"),
      lastHeartbeatRecv: ISODate("2023-03-04T20:42:29.985Z"),
      pingMs: Long("0"),
      lastHeartbeatMessage: '',
      syncSourceHost: '11.0.50.11:27017',
      syncSourceId: 3,
      infoMessage: '',
      configVersion: 2,
      configTerm: 5
    },
    {
      _id: 2,
      name: '11.0.50.10:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 100,
      optime: { ts: Timestamp({ t: 1677962538, i: 4 }), t: Long("5") },
      optimeDurable: { ts: Timestamp({ t: 1677962538, i: 4 }), t: Long("5") },
      optimeDate: ISODate("2023-03-04T20:42:18.000Z"),
      optimeDurableDate: ISODate("2023-03-04T20:42:18.000Z"),
      lastAppliedWallTime: ISODate("2023-03-04T20:42:18.702Z"),
      lastDurableWallTime: ISODate("2023-03-04T20:42:18.702Z"),
      lastHeartbeat: ISODate("2023-03-04T20:42:29.674Z"),
      lastHeartbeatRecv: ISODate("2023-03-04T20:42:29.675Z"),
      pingMs: Long("1"),
      lastHeartbeatMessage: '',
      syncSourceHost: '11.0.50.11:27017',
      syncSourceId: 3,
      infoMessage: '',
      configVersion: 2,
      configTerm: 5
    },
    {
      _id: 3,
      name: '11.0.50.11:27017',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 115,
      optime: { ts: Timestamp({ t: 1677962538, i: 4 }), t: Long("5") },
      optimeDate: ISODate("2023-03-04T20:42:18.000Z"),
      lastAppliedWallTime: ISODate("2023-03-04T20:42:18.702Z"),
      lastDurableWallTime: ISODate("2023-03-04T20:42:18.702Z"),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      electionTime: Timestamp({ t: 1677962450, i: 1 }),
      electionDate: ISODate("2023-03-04T20:40:50.000Z"),
      configVersion: 2,
      configTerm: 5,
      self: true,
      lastHeartbeatMessage: ''
    }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1677962538, i: 4 }),
    signature: {
      hash: Binary(Buffer.from("13338366b839f370e3acf785eb93e20276953006", "hex"), 0),
      keyId: Long("7206450133318238212")
    }
  },
  operationTime: Timestamp({ t: 1677962538, i: 4 })
}

Here is the output of the rs.printReplicationInfo() command for each version

Version 4.4.8

rs.printReplicationInfo()
configured oplog size:   6481.01123046875MB
log length start to end: 77846secs (21.62hrs)
oplog first event time:  Fri Mar 03 2023 22:48:47 GMT+0000 (UTC)
oplog last event time:   Sat Mar 04 2023 20:26:13 GMT+0000 (UTC)
now:                     Sat Mar 04 2023 20:26:17 GMT+0000 (UTC)

Version 5.0

rs.printReplicationInfo()
configured oplog size:   6481.01123046875MB
log length start to end: 78370secs (21.77hrs)
oplog first event time:  Fri Mar 03 2023 22:48:47 GMT+0000 (UTC)
oplog last event time:   Sat Mar 04 2023 20:34:57 GMT+0000 (UTC)
now:                     Sat Mar 04 2023 20:35:14 GMT+0000 (UTC)

Version 6.0.4, after setting featureCompatibilityVersion to "6.0"

rs.printReplicationInfo()
actual oplog size
'6481.01123046875 MB'
---
configured oplog size
'6481.01123046875 MB'
---
log length start to end
'78843.99999809265 secs (21.9 hrs)'
---
oplog first event time
'Fri Mar 03 2023 22:48:47 GMT+0000 (Coordinated Universal Time)'
---
oplog last event time
'Sat Mar 04 2023 20:42:51 GMT+0000 (Coordinated Universal Time)'
---
now
'Sat Mar 04 2023 20:43:00 GMT+0000 (Coordinated Universal Time)'

Here is the output of the rs.conf() command for version 4.4.8

rs.conf()
{
        "_id" : "ds-rs",
        "version" : 1,
        "term" : 1,
        "protocolVersion" : NumberLong(1),
        "writeConcernMajorityJournalDefault" : true,
        "members" : [
                {
                        "_id" : 1,
                        "host" : "11.0.50.9:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                },
                {
                        "_id" : 2,
                        "host" : "11.0.50.10:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                },
                {
                        "_id" : 3,
                        "host" : "11.0.50.11:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                }
        ],
        "settings" : {
                "chainingAllowed" : true,
                "heartbeatIntervalMillis" : 2000,
                "heartbeatTimeoutSecs" : 10,
                "electionTimeoutMillis" : 10000,
                "catchUpTimeoutMillis" : -1,
                "catchUpTakeoverDelayMillis" : 30000,
                "getLastErrorModes" : {

                },
                "getLastErrorDefaults" : {
                        "w" : 1,
                        "wtimeout" : 0
                },
                "replicaSetId" : ObjectId("6403ba5704b0564c053d341b")
        }
}

Answer 1

Score: 0

Problem solved: these requests were coming from my monitoring solution, Metricbeat, and its mongodb module, more precisely the replstatus metricset.

https://github.com/elastic/beats/blob/main/metricbeat/module/mongodb/replstatus/info.go

https://github.com/elastic/beats/issues/33616

To solve this problem, disable the replstatus metricset in the Metricbeat mongodb module configuration.
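
For anyone hitting the same symptom, one way to confirm which client is issuing the full oplog scan, before touching the Metricbeat configuration, is to look at the in-progress operations from mongosh while the slow query is running, for example:

// List in-progress operations touching local.oplog.rs; the client address and appName
// (if the driver sets one) identify the connection issuing the scan.
db.currentOp({ ns: "local.oplog.rs" }).inprog.forEach(op =>
  printjson({ client: op.client, appName: op.appName, command: op.command })
);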
