ClickHouse Backup Restore - Inconsistent Metadata #974
-
Our setup has 3 shards and 3 replicas. In my case, a ClickHouse DDL operation (adding a new column) is still in progress on 3 pods while the remaining 6 pods have completed it, because the pending 3 are still waiting in the distributed DDL task queue. Before the DDL could execute on those shards, my backup was triggered, so it captured inconsistent metadata, and the restore fails because the first shard's metadata differs from the second shard's. Is there a workaround for this problem?
Replies: 1 comment
-
Metadata and data are consistent inside a single clickhouse-server.
I don't understand how you got a failed restore.
Could you provide more context?
Logs and error messages?
The table schema after restore can differ between shards, and yes, unfinished distributed DDL queries are not backed up.
As a workaround, you could poll:
SELECT count() FROM system.distributed_ddl_queue WHERE status != 'Finished'
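
Not part of the original reply, but here is a minimal sketch of that polling workaround: it runs the query above via clickhouse-client and waits until the distributed DDL queue is fully drained before starting the backup. The host name and the final clickhouse-backup command are assumptions; adapt them to your cluster.

import subprocess
import time

# Hypothetical host name; replace with any server in your cluster.
HOST = "chi-cluster-0-0"
QUERY = "SELECT count() FROM system.distributed_ddl_queue WHERE status != 'Finished'"

def unfinished_ddl_count() -> int:
    # Run the query via clickhouse-client and parse the plain-text count.
    result = subprocess.run(
        ["clickhouse-client", "--host", HOST, "--query", QUERY],
        capture_output=True, text=True, check=True,
    )
    return int(result.stdout.strip())

# Wait until the distributed DDL queue has fully drained, then back up.
while unfinished_ddl_count() > 0:
    time.sleep(5)

# Only now trigger the backup, e.g.:
# subprocess.run(["clickhouse-backup", "create", "consistent_backup"], check=True)

Since system.distributed_ddl_queue reflects the shared ZooKeeper DDL task queue, polling a single server is usually enough; looping over every replica would only add extra safety.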