Request for a community owned GCP project for minikube #7414
Can you outline some more detailed requirements so we can determine how best to provide them? We generally try to work from "my project needs VMs for testing cgroups v2, which we cannot do locally in a CI container" => "well, we have AWS credits, let's use EC2, and make sure to use boskos to rent access", or "my project needs to host container images" => use registry.k8s.io (which is AWS+GCP; there are standardized docs in this repo for setting up image hosting). We have to maintain balance across the budgets available to the project. What infra we do provide, we also set up here in git wherever possible (terraform, bash, etc.), so it's auditable and so others can chip in in the future, instead of just creating cloud project admins and having them create random resources. So we need to know what to spin up, exactly. We have a lot of existing shared resources in the project for things like CI and release.
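For illustration, here is a minimal sketch of what the "rent access via boskos" pattern looks like from a CI job, assuming a hypothetical `gcp-project` resource type, a placeholder server URL, and the `boskosctl` CLI from kubernetes-sigs/boskos; the real resource pools and endpoints live in the community test-infra configs, so treat this as a sketch rather than the actual setup:

```bash
#!/usr/bin/env bash
# Sketch only: rent a cloud project from boskos for the duration of a test
# run, then hand it back so the janitor can scrub leaked resources.
# BOSKOS_URL and the "gcp-project" resource type are hypothetical placeholders.
set -euo pipefail

BOSKOS_URL="http://boskos.test-pods.svc.cluster.local"  # placeholder URL
OWNER="minikube-ci-job"                                 # placeholder owner

# Acquire a free resource and mark it busy; boskosctl prints the resource JSON.
resource="$(boskosctl acquire \
  --server-url "${BOSKOS_URL}" \
  --owner-name "${OWNER}" \
  --type gcp-project \
  --state free \
  --target-state busy)"
project="$(jq -r .name <<<"${resource}")"

# Mark the resource dirty on exit so it returns to the pool for cleanup.
cleanup() {
  boskosctl release \
    --server-url "${BOSKOS_URL}" \
    --owner-name "${OWNER}" \
    --name "${project}" \
    --target-state dirty
}
trap cleanup EXIT

echo "Running VM tests in rented project: ${project}"
# ... provision VMs and run tests against ${project} here ...
```

The point of the pattern is that rented resources always return to a shared pool where a janitor can clean them up, rather than each project running its own unmonitored fleet.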
There are multiple aspects to it: this is an 8-year-old infrastructure for both test/release and for hosting live apps and released artifacts (binaries and tarballs, ISOs, Docker images, ...). I agree that we would like to leverage GitHub binaries and GitHub Actions as much as possible where doable, but some cases might not work, such as the minikube preload tarball images. Currently the idea is to get a footprint in the publicly owned infra and then move little by little, without disrupting the system or unrealistic over-capacity re-engineering. The current requirements that come to mind:
This is a good list to start with, but not comprehensive. The idea is to get a footprint in the new project and re-evaluate the path forward.
We have already engineered systems for, e.g., hosting images, though, and we do not want to dig a new unsustainable hole for these. From the specific examples:
We do not want users consuming directly from any paid SaaS like this; it is a liability for the project (we have no flexibility to shift costs when utilization and funding shift). We shouldn't re-introduce this.
See the above comment; also, can this be hosted on GitHub at no cost?
Can we use our existing CI infra? We already have a lot of resources behind this and they're shared/pooled across the project. We care a lot about things like making sure that VMs get cleaned up when they're no longer in use. At the scale that we're supporting, if every project runs custom unmonitored systems we can't keep track of the waste.
That's just not how we run k8s infra, though; it's not transparent or sustainable. For everything we've lifted and shifted previously, we've spun up a new copy in k8s infra, with the specifics checked in, so others can read through, edit/PR, and otherwise take over in the future. We haven't granted any subproject the ability to arbitrarily create cloud resources in a project, because it's not accountable and it's not reproducible. Everything we're running can be traced back to e.g. https://github.com/kubernetes/k8s.io/tree/main/infra/gcp/terraform and the SIG (as steward) has agreed it is reasonable to run (and we have always sought out the most effective answers; we've had to work hard to reach sustainable spend, up to and including things like working with SIG Scalability to evaluate their test workloads and adjust frequency and scheduling).
cc @dims (chair) in addition to the TLs (#7414 (comment))
All of the infra we've migrated has been similarly old, if not older, and it does take a lot of work, but I also think we really don't want to regress from all the effort we've put in so far and the ground rules we've established (such as not permitting non-community-owned accounts into our CI), which are all based on mitigating real issues we've experienced in the past. It's really important that I or any of the other infra leads can quit and someone else can pick up the pieces without blockers, and that we keep an eye on sustainable spend and know what it is that we're funding and what the usage trends are.
I understand, and I agree with leveraging GitHub as much as possible. Some of the artifacts can be hosted on GitHub, such as binaries, as part of the Release Assets. However, there are also many jobs that build ISOs and KIC images per PR and push them to the PR; that would not be doable on free GitHub Actions machines, since building ISOs needs beefy machines. Currently we have 80 internal automation jobs (not Dependabot) that bump new versions of ISO/image software and push a new ISO during off-peak hours (midnight); those would not be implementable using GitHub or GitHub Actions. Also, as mentioned in my previous comment, we have multiple hosted software services running for minikube that are essential to the minikube project, currently deployed to Cloud Run.
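As an illustrative sketch only (the bucket, version source, and build entry point below are hypothetical placeholders, not the actual minikube automation), a nightly bump-and-push job of the kind described above boils down to roughly:

```bash
#!/usr/bin/env bash
# Sketch of a nightly "bump and push" job: check for a new upstream version,
# rebuild the ISO, and upload it. All names here are placeholders.
set -euo pipefail

BUCKET="gs://example-minikube-isos"                                # placeholder
current="$(gsutil cat "${BUCKET}/latest-version.txt")"
latest="$(curl -fsSL https://example.com/upstream/latest-version)" # placeholder

if [[ "${current}" == "${latest}" ]]; then
  echo "Already at ${latest}, nothing to do."
  exit 0
fi

make iso VERSION="${latest}"   # placeholder build step; real builds need big machines
gsutil cp "out/minikube-${latest}.iso" "${BUCKET}/"
printf '%s' "${latest}" | gsutil cp - "${BUCKET}/latest-version.txt"
```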
The content contained in GitHub releases is mutable, even after advertising a release publicly. Are these "preload tarballs" essentially a set of container images? Because that sounds like, if we host it, we're going to have the registry.k8s.io egress problem duplicated. Per the above, it sounds like these are advertised directly from GCS buckets, which is not a cost-effective approach and not something we want to do again. Cost effectiveness aside, it limits our ability to make decisions later about what resources to use for hosting, as users become dependent on the buckets and make assumptions about them. Again, we have an established process and common infra for container image hosting: https://github.com/kubernetes/k8s.io/tree/main/registry.k8s.io#managing-kubernetes-container-registries @upodroid has been working on migrating the staging registries to Artifact Registry and may have some updates for the process, but we don't have to block on that.
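To make the established flow concrete, here is a hedged sketch of the staging half: CI pushes to an assumed staging registry, and promotion to registry.k8s.io happens separately via a reviewed PR against the promoter manifests in kubernetes/k8s.io. The registry and image names below are assumptions for illustration:

```bash
#!/usr/bin/env bash
# Sketch: push a built image to a staging registry, then record the digest to
# add to the promoter manifest. "k8s-staging-minikube" is an assumed name.
set -euo pipefail

STAGING="gcr.io/k8s-staging-minikube"   # assumed staging registry
IMAGE="kicbase"                         # placeholder image name
TAG="v0.0.1"                            # placeholder tag

docker push "${STAGING}/${IMAGE}:${TAG}"

# The promoter manifests pin images by digest, not by tag.
digest="$(crane digest "${STAGING}/${IMAGE}:${TAG}")"
echo "Promote by adding ${digest} (${TAG}) to the manifest in kubernetes/k8s.io"
```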
That's a distinct problem from where they're hosted, though. The output of the jobs can be copied wherever we need it...?
ACK ... We still need an accounting of what, exactly. We should probably prioritize the most critical assets first.
Ok, but we still have to sustainably host the egress if we're paying for it in k8s infra. We have an allocation for the core repos' binaries (we get a bandwidth budget that we negotiated based on that need), and we have registry.k8s.io. We have to be careful with introducing content hosts because we have limited ability to cut usage and manage costs. We've been asking subprojects to use GitHub releases to host files. We would probably do this for Kubernetes too, but we have a huge legacy around that, and we receive an ongoing donation specifically for that problem.
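As a sketch of the "use GitHub releases to host files" ask (repo, tag, and artifact names below are placeholders), the `gh` CLI covers the whole flow:

```bash
#!/usr/bin/env bash
# Sketch: publish artifacts as GitHub release assets instead of serving them
# from project-paid buckets. Repo, tag, and file names are placeholders.
set -euo pipefail

REPO="kubernetes/minikube"   # placeholder repo
TAG="v1.0.0"                 # placeholder tag

gh release create "${TAG}" --repo "${REPO}" \
  --title "minikube ${TAG}" --notes "Release ${TAG}"

gh release upload "${TAG}" --repo "${REPO}" \
  out/minikube-linux-amd64 out/minikube-darwin-amd64
```

Note that the mutability caveat raised earlier still applies: release assets can be replaced after publication.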
IMHO we should break this migration project down into separate conversations. I can't definitively do a lift-and-shift for minikube.
@medyagh Any thoughts on my proposal?
Hello, minikube maintainer here. I would like to ask for a GCP project for minikube owned by the CNCF community. Our release/test infra is in a Google-owned project, and we would like to explore migrating it to a CNCF-owned project. Is this the right place to ask for it?
related: kubernetes/test-infra#33654