Date: 2014-05-16

MongoDB moveChunk Failure Analysis and Resolution (SERVER-5351)
Version: 2.2.0
Bug involved: SERVER-5351
Operation: moving a data chunk from one shard to another (described below as moving from shard shard3_2 to shard shard1_1)
Symptoms
1) The moveChunk command issued on mongos hangs; the mongos log shows:
Tue Mar  4 20:34:12 [conn25194] CMD: movechunk: { moveChunk: "archive.archive", to: "shard1_1", find: { uid: -2130706432 } }
Tue Mar  4 20:34:12 [conn25194] moving chunk ns: archive.archive moving ( ns:archive.archive at: shard3_2:shard3_2/192.168.130.13:15001,192.168.130.14:15002,10.1.2.13:15003 lastmod: 1|2||000000000000000000000000 min: { uid: -2130706432 } max: { uid: MaxKey }) shard3_2:shard3_2/192.168.130.13:15001,192.168.130.14:15002,10.1.2.13:15003 -> shard1_1:shard1_1/10.1.2.12:13003,192.168.130.11:13001,192.168.130.12:13002
Tue Mar  4 20:34:20 [LockPinger] cluster 192.168.130.14:9001,192.168.130.15:9001,192.168.130.16:9001 pinged successfully at Tue Mar  4 20:34:20 2014 by distributed lock pinger '192.168.130.14:9001,192.168.130.15:9001,192.168.130.16:9001/server-a08:27227:1392462352:1804289383', sleeping for 30000ms
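The moveChunk request in the first mongos log line above is an admin command document. A minimal sketch that just rebuilds that document from the log (sending it would require a live mongos, which is not assumed here):

```python
# Command document equivalent to the mongos log entry:
#   CMD: movechunk: { moveChunk: "archive.archive", to: "shard1_1", find: { uid: -2130706432 } }
# Against a live cluster this would be passed to the admin database's
# runCommand; here we only construct it, since no cluster is assumed.
move_chunk_cmd = {
    "moveChunk": "archive.archive",   # sharded namespace whose chunk is moved
    "to": "shard1_1",                 # destination shard
    "find": {"uid": -2130706432},     # any document inside the chunk to move
}
print(move_chunk_cmd["to"])  # → shard1_1
```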
2) The primary of shard3_2 logs entries like the following, and they keep appearing for 10 hours:
Wed Mar  5 05:34:57 [conn4754510] moveChunk data transfer progress: { active: true, ns: "archive.archive", from: "shard3_2/192.168.130.13:15001,192.168.130.14:15002,10.1.2.13:15003", min: { uid: -2130706432 }, max: { uid: MaxKey }, shardKeyPattern: { uid: 1 }, state: "catchup", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
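The two fields worth watching in this progress document are `state` (the recipient's migration state as reported back to the donor) and `counts.cloned`. Note the contradiction in the log above: the state is "catchup", yet `cloned` is 0, i.e. the recipient reports catching up without ever having cloned a document. A minimal sketch for pulling those fields out of such a line (the regexes are assumptions tailored to this 2.2-era log format):

```python
import re

# Abbreviated copy of the donor-side progress line from the log above.
LOG = ('Wed Mar  5 05:34:57 [conn4754510] moveChunk data transfer progress: '
       '{ active: true, ns: "archive.archive", state: "catchup", '
       'counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }')

def migration_state(line):
    """Extract (state, cloned-count) from a 'moveChunk data transfer
    progress' log line; returns None for a field that is absent."""
    state = re.search(r'state: "(\w+)"', line)
    cloned = re.search(r'cloned: (\d+)', line)   # 'clonedBytes:' will not match
    return (state.group(1) if state else None,
            int(cloned.group(1)) if cloned else None)

print(migration_state(LOG))  # → ('catchup', 0)
```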
The log ends with a failure entry:
Wed Mar  5 06:40:13 [conn4754510] distributed lock 'archive.archive/server-a03:15001:1349773025:1657874366' unlocked.
Wed Mar  5 06:40:13 [conn4754510] about to log metadata event: { _id: "server-a03-2014-03-04T22:40:13-163", server: "server-a03", clientAddr: "192.168.130.18:21606", time: new Date(1393972813700), what: "moveChunk.from", ns: "archive.archive", details: { min: { uid: -2130706432 }, max: { uid: MaxKey }, step1 of 6: 1, step2 of 6: 442, step3 of 6: 9, step4 of 6: 36000896, note: "aborted" } }
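In the aborted `moveChunk.from` event, the step timings are recorded in milliseconds, and step4 (which here corresponds to waiting on the recipient's data transfer) accounts for essentially the whole run. A quick check that `step4 of 6: 36000896` matches the roughly 10 hours of "catchup" entries observed:

```python
# step4 of 6 from the aborted moveChunk.from event, in milliseconds.
step4_ms = 36000896
hours = step4_ms / 3_600_000          # ms -> hours
print(round(hours, 2))  # → 10.0, matching the ~10-hour catchup phase
```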
Wed Mar  5 06:40:13 [conn4754510] command admin.$cmd command: