Switch to C3 machine types for prow build cluster #6525
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: upodroid. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
We really shouldn't need 2 cores for daemonsets ...? We should be able to tighten this up; discarding 20% of the node as overhead is excessive.
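One quick way to sanity-check that figure is to total the daemonset CPU requests straight from the cluster. A minimal sketch, assuming the official kubernetes Python client, a kubeconfig already pointed at the build cluster, and (simplifying) that every daemonset lands on every node:

```python
# Rough sketch: sum the per-node CPU requested by all daemonsets.
# Assumes the official `kubernetes` client and a kubeconfig for the
# build cluster; treats every daemonset as if it ran on every node.
from kubernetes import client, config

def cpu_millicores(qty: str) -> int:
    """Parse a Kubernetes CPU quantity such as "100m" or "2" into millicores."""
    return int(qty[:-1]) if qty.endswith("m") else int(float(qty) * 1000)

def daemonset_cpu_requests() -> int:
    config.load_kube_config()
    total = 0
    for ds in client.AppsV1Api().list_daemon_set_for_all_namespaces().items:
        for c in ds.spec.template.spec.containers:
            requests = (c.resources and c.resources.requests) or {}
            total += cpu_millicores(requests.get("cpu", "0"))
    return total

if __name__ == "__main__":
    print(f"daemonset CPU requests per node: ~{daemonset_cpu_requests()}m")
```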
Maybe we can import from the existing metrics collection?
According to the calculations here:
That sounds good to me (have not reviewed machine types yet). I don't recall a discussion about enabling network policy and I don't think we need it; getting 400m of CPU back is quite a bit of room to fit other daemonsets. cc @ameukam
Just checked: k8s-prow-builds in google.com (
One more question: are we OK with nodes having 32 GB of memory instead of 64 GB?
EDIT: jinx. I'm not sure if that's a problem for any of our workloads or not. I think we might have a few that are a bit lopsided in memory>CPU requests currently.
Apply plan: after lgtm is applied on the PR.
I think we should probably query the jobs to see if any have 30+ GB requests or a larger memory:core ratio than C3; if so, we can do N2?
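A rough sketch of that query over a test-infra checkout follows; the config/jobs path, the generic walk over resources.requests, and the 28 Gi cutoff are all assumptions for illustration, not the project's actual tooling:

```python
# Hypothetical scan: flag prowjob containers whose memory request won't
# fit a C3 shape (30+ GB) or whose memory:core ratio exceeds C3's
# 4 GiB per vCPU. Quantity parsing is simplified to the common suffixes.
import pathlib
import yaml

UNITS = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "K": 10**3, "M": 10**6, "G": 10**9}
C3_BYTES_PER_CORE = 4 * 2**30  # C3 standard shapes offer 4 GiB per vCPU

def mem_bytes(qty) -> float:
    qty = str(qty)
    for suffix, mult in UNITS.items():
        if qty.endswith(suffix):
            return float(qty[: -len(suffix)]) * mult
    return float(qty)

def cpu_cores(qty) -> float:
    qty = str(qty)
    return float(qty[:-1]) / 1000 if qty.endswith("m") else float(qty)

def requests_in(node):
    """Recursively yield every resources.requests mapping in a YAML doc."""
    if isinstance(node, dict):
        resources = node.get("resources")
        if isinstance(resources, dict) and isinstance(resources.get("requests"), dict):
            yield resources["requests"]
        for value in node.values():
            yield from requests_in(value)
    elif isinstance(node, list):
        for value in node:
            yield from requests_in(value)

def flag_jobs(jobs_dir="config/jobs"):  # hypothetical checkout path
    for path in pathlib.Path(jobs_dir).rglob("*.yaml"):
        try:
            docs = list(yaml.safe_load_all(path.read_text()))
        except yaml.YAMLError:
            continue  # skip files the plain loader can't parse
        for doc in docs:
            for req in requests_in(doc):
                mem_q, cpu_q = req.get("memory"), req.get("cpu")
                if mem_q is None:
                    continue
                mem = mem_bytes(mem_q)
                bad_ratio = cpu_q and mem / cpu_cores(cpu_q) > C3_BYTES_PER_CORE
                if mem > 28 * 2**30 or bad_ratio:
                    print(f"{path}: memory={mem_q} cpu={cpu_q}")

if __name__ == "__main__":
    flag_jobs()
```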
Network Policy was enabled by default at the time the cluster was created. We never took the opportunity to disable it. +1 for disabling it.
Instead of tweaking the prowjobs, let's go with N2 instances (Ice Lake platform), keep the same machine type, and merge that change this weekend.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle rotten
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
Force-pushed from a4ad41f to 9c4f988.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/cc @ameukam @BenTheElder @xmudrii
We should get better performance out of these instances.
C3 instance types are a bit tricky. Our options are:
I'm in favour of offering 7 cores and 28 GB to all our prowjobs across the board. This setting guarantees a dedicated node for testing; the remaining core belongs to us to run our own components on it.
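For what it's worth, a back-of-the-envelope check of that claim, using the kube-reserved brackets from GKE's documentation (figures approximate, and system pods still need a slice of what is left):

```python
# Approximate GKE node allocatable for a c3-standard-8 (8 vCPU, 32 GiB),
# using the reservation brackets from GKE's documentation.
def reserved(amount, brackets):
    """Sum bracketed reservations, e.g. 25% of the first 4 GiB, 20% of the next 4 ..."""
    total, remaining = 0.0, amount
    for width, rate in brackets:
        take = min(remaining, width)
        total += take * rate
        remaining -= take
    return total

CPU_BRACKETS = [(1, 0.06), (1, 0.01), (2, 0.005), (float("inf"), 0.0025)]
MEM_BRACKETS = [(4, 0.25), (4, 0.20), (8, 0.10), (112, 0.06), (float("inf"), 0.02)]

cores, mem_gib = 8, 32  # c3-standard-8
alloc_cpu = cores - reserved(cores, CPU_BRACKETS)
alloc_mem = mem_gib - reserved(mem_gib, MEM_BRACKETS) - 100 / 1024  # eviction threshold
print(f"allocatable: {alloc_cpu:.2f} vCPU, {alloc_mem:.2f} GiB")  # ~7.91, ~28.34
print(f"left after one 7 vCPU / 28 GiB job: "
      f"{alloc_cpu - 7:.2f} vCPU, {alloc_mem - 28:.2f} GiB")      # ~0.91, ~0.34
```

On those numbers, memory is the binding constraint: a second 7-core/28 GB job cannot fit, so each job gets a node to itself, and the roughly 0.9 vCPU left over is what the daemonsets run on.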