________________________________________________________________________________
I'm wondering how they manage table creation on a 100-node cluster. Is it all done by hand?
And why do they use clickhouse_exporter? Doesn't the built-in exporter provide the required data?
I don't know about CloudFlare specifically, but the usual way to create tables across a cluster is CREATE TABLE IF NOT EXISTS <name> ON CLUSTER <cluster>, which executes the DDL on every node. You can automate this easily. [1]
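As a rough sketch (the table name, columns, and cluster name here are illustrative, not taken from the article), the ON CLUSTER form looks like this:

```sql
-- Create the table on every node of the named cluster; {shard} and {replica}
-- are macros each node substitutes from its own configuration.
CREATE TABLE IF NOT EXISTS requests ON CLUSTER my_cluster
(
    timestamp DateTime,
    status    UInt16
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/requests', '{replica}')
ORDER BY timestamp;
```

Running this once against any node propagates the statement to the rest, so a simple script or migration tool can drive it.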
The built-in exporter is relatively new: the original PR was merged in December of last year, and changes were still landing in mid-2020.
Also, compatibility with clickhouse-exporter was unfortunately not a requirement, so the metric names and coverage do not fully match.
[1] https://clickhouse.tech/docs/en/sql-reference/statements/cre...
You are right, for some reason I totally forgot about it (probably because I still do it on each node, since I started back when that feature didn't exist).
Interesting about the exporter, thanks for sharing.
All migrations are applied automatically. We have bootstrap files and the migrations themselves. All of these files are stored in the repository and go through code review, after which they are deployed to a specific cluster.
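To give a flavor of what such a migration might contain (a hypothetical sketch, not Cloudflare's actual files; the table, column, and cluster names are invented):

```sql
-- 0002_add_user_agent.sql: example migration applied cluster-wide
ALTER TABLE requests ON CLUSTER my_cluster
    ADD COLUMN IF NOT EXISTS user_agent String DEFAULT '';
```

Keeping each change as a reviewed SQL file makes the deployment step a matter of replaying whatever files have not yet been applied to the target cluster.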
We have used clickhouse_exporter since the early days. It queries system tables and exposes the metrics to Prometheus in the required format.
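The system tables involved are the standard ClickHouse ones; queries along these lines (a sketch of the general idea, not the exporter's exact SQL) return the counters that get re-exposed as Prometheus metrics:

```sql
-- Current gauge-style metrics (e.g. open connections, running queries)
SELECT metric, value FROM system.metrics;

-- Cumulative event counters (e.g. queries executed, bytes read)
SELECT event, value FROM system.events;
```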
title typo
Thanks, sir, it took me some time to find it, to be honest.