How can data node compaction be made more efficient with a very large number of segments? #38807
Unanswered
xiaobingxia-at asked this question in Q&A and General discussion
Replies: 1 comment, 3 replies
-
If you have many small segments, do not increase dataCoord.compaction.mix.triggerInterval; keep its default of 60 seconds. Otherwise compaction will not run for long stretches of time. The number of compactions the system can execute in parallel is controlled by dataCoord.compaction.maxParallelTaskNum (default 10); if you have no more than 10 data nodes, there is no need to change it. The log message "compactionHandler cannot find datanode for compaction task" means all data nodes are busy running compactions and none is idle.
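The two settings mentioned above live under `dataCoord.compaction` in `milvus.yaml`. A minimal sketch of that section with the default values discussed in this thread (the surrounding structure follows the usual Milvus config layout; verify against your deployed version's reference config):

```yaml
dataCoord:
  compaction:
    # System-wide cap on compactions running in parallel (default 10).
    # Leave as-is unless you run more than 10 data nodes.
    maxParallelTaskNum: 10
    mix:
      # How often the mix-compaction trigger scans for candidates, in seconds.
      # Keep the default 60s when many small segments exist; a large value
      # (e.g. 3600) means long windows with no compaction at all.
      triggerInterval: 60
```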
-
On 2.4.17 in cluster mode, business requirements force me to handle a very large number of segments (about 200k). I changed the mix/l0 compaction trigger interval to 3600 seconds, and now I see a large number of messages like the following in the coordinator log:
```
["compactionHandler cannot find datanode for compaction task"] [planID=454886895760171459] [type=MixCompaction] [vchannel=milvus-50kp-50kb-hnsw-i4i-rootcoord-dml_53_454886895681276984v0]
[datacoord/compaction_trigger.go:411] ["compaction plan skipped due to handler full"]
```
This seems to say the data nodes lack compaction capacity?
The current cluster has two data nodes, each with 15 cores and 60 GB of memory. According to monitoring, data node CPU usage stays below 10%.
My guess is that the data nodes' compaction concurrency is too low. How can I raise the number of compaction jobs a data node runs concurrently, or is adding more data nodes the only option?
The settings I am currently considering:
dataCoord.slot.clusteringCompactionUsage
dataCoord.slot.mixCompactionUsage
dataCoord.slot.l0DeleteCompactionUsage
dataNode.slot.slotCap
Which of these settings should I adjust? Thanks.
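The slot settings listed above interact: each data node advertises a slot budget (`dataNode.slot.slotCap`), each compaction type consumes some slots per task (`dataCoord.slot.*Usage`), and `dataCoord.compaction.maxParallelTaskNum` caps the cluster-wide total. A back-of-envelope estimate of the resulting concurrency, assuming this simple slot-division model holds and using made-up example values (the real defaults and accounting may differ by version):

```python
def max_concurrent_mix_tasks(num_datanodes: int,
                             slot_cap: int,
                             mix_usage: int,
                             max_parallel: int) -> int:
    """Estimate cluster-wide concurrent mix-compaction tasks.

    Assumes each task occupies `mix_usage` slots out of a node's
    `slot_cap`, and the coordinator caps the total at `max_parallel`.
    This is an illustrative model, not Milvus's actual scheduler code.
    """
    per_node = slot_cap // mix_usage          # tasks one data node can host
    return min(num_datanodes * per_node, max_parallel)

# Example: 2 data nodes, hypothetical slotCap=16 and mixCompactionUsage=8,
# maxParallelTaskNum=10 -> only 4 tasks can run at once, so raising
# slotCap (or lowering mixCompactionUsage) increases concurrency long
# before maxParallelTaskNum becomes the bottleneck.
print(max_concurrent_mix_tasks(2, 16, 8, 10))
```

Under this model, with 200k segments and CPU below 10%, raising `dataNode.slot.slotCap` (or lowering the per-type usage values) is the lever that adds per-node concurrency without adding nodes.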