🔥 Feature: Loadshed middleware [Proportional Request Rejection Based on Probabilistic CPU Load] #2687
Conversation
This commit introduces a prototype of the load shedding middleware for the gofiber library. The middleware checks CPU usage and determines how to handle incoming requests based on the provided thresholds.
Key components:
- Config struct for middleware customization.
- Uses gopsutil to fetch CPU percentages.
- Responses are based on CPU usage thresholds.
Note: The implementation is not complete. Feedback is needed on how to manage and execute queued requests effectively. |
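For context, a minimal sketch of the shape such a middleware might take, assuming gopsutil's `cpu.Percent` for sampling; the field and function names here are illustrative, not the PR's actual API:

```go
package loadshed

import (
	"github.com/gofiber/fiber/v2"
	"github.com/shirou/gopsutil/v3/cpu"
)

// Config holds the middleware settings (illustrative names, not the final API).
type Config struct {
	Next           func(c *fiber.Ctx) bool // skip the middleware when true
	LowerThreshold float64                 // CPU fraction (0.0-1.0) below which nothing is shed
	UpperThreshold float64                 // CPU fraction (0.0-1.0) above which everything is shed
}

// New returns a handler that starts rejecting requests once CPU usage crosses the thresholds.
func New(cfg Config) fiber.Handler {
	return func(c *fiber.Ctx) error {
		if cfg.Next != nil && cfg.Next(c) {
			return c.Next()
		}
		// Sample aggregate CPU usage; with interval 0 this compares against the
		// previous call instead of blocking. A real implementation would likely
		// sample periodically in the background rather than per request.
		percentages, err := cpu.Percent(0, false)
		if err != nil || len(percentages) == 0 {
			return c.Next() // fail open if sampling is unavailable
		}
		cpuUsage := percentages[0] // 0-100
		if cpuUsage > cfg.UpperThreshold*100 {
			return c.SendStatus(fiber.StatusServiceUnavailable)
		}
		// The proportional rejection between the two thresholds is shown further down.
		return c.Next()
	}
}
```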
Thanks for opening this pull request! 🎉 Please check out our contributing guidelines. If you need help or want to chat with us, join us on Discord https://gofiber.io/discord |
@caner-cetin you can check if it corresponds to your idea |
Yes, it aligns with what I want and what I originally thought of. It leaves request queueing to cloud services, as far as I understood, and just does load shedding. Wonderful, and well done! LGTM. |
In addition to my previous comment: I couldn't and still can't look into the implementation thoroughly for now, so just the idea is LGTM. @efectn I trust you on this lol |
Hi @efectn ,
Would it be okay to ignore this error with a lint comment, since predictability isn't an issue and math/rand would perform faster than crypto/rand? |
Do you have any benchmarks to compare both libs? |
Can we also implement load shedding based on average latency time? Perhaps we should make middleware extendable for other cases. |
I didn't run any benchmark tests on that. I just assumed it because crypto/rand uses a cryptographically secure random number generator, which would reduce performance due to the extra operations, while math/rand implements pseudo-random number generators, which are usually faster. If you look at this link: https://pkg.go.dev/math/rand , the documentation says:
Since security and predictability aren't an issue in this case, I thought it would be better to stick with math/rand. I can still look into running benchmarks if needed; always up for trying something new. |
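If a comparison is ever needed, a quick benchmark along these lines would settle it (a hypothetical test file, not part of this PR); run it with `go test -bench=.`:

```go
package loadshed

import (
	crand "crypto/rand"
	"math/rand"
	"testing"
)

// BenchmarkMathRand measures the pseudo-random generator from math/rand.
func BenchmarkMathRand(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = rand.Float64()
	}
}

// BenchmarkCryptoRand measures reading 8 random bytes from crypto/rand.
func BenchmarkCryptoRand(b *testing.B) {
	buf := make([]byte, 8)
	for i := 0; i < b.N; i++ {
		if _, err := crand.Read(buf); err != nil {
			b.Fatal(err)
		}
	}
}
```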
I don't know how it can be done, and even though I am not the OP, I am curious. Calculate the time passed from the moment c.Next() is called for the average in ms? |
Yup, it's definitely possible. I have seen similar implementations as well and would love to look into it. |
Yeah, I'm guessing we would need to do something like that, but we need a mechanism to smooth out latency, like a sliding window or some sort of Moving Average (MA), so the average is tolerant to sudden spikes and variability and we get a more consistent average latency. |
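As a rough illustration of that idea (a sketch only, not part of this PR; the type, names, and smoothing factor are made up), the middleware could time c.Next() and feed the duration into an exponentially weighted moving average:

```go
package loadshed

import (
	"sync"
	"time"

	"github.com/gofiber/fiber/v2"
)

// ewma keeps an exponentially weighted moving average of request latency.
type ewma struct {
	mu    sync.Mutex
	value float64 // smoothed latency in milliseconds
	alpha float64 // smoothing factor, e.g. 0.1; higher reacts faster to spikes
}

// observe folds one request's latency into the running average and returns it.
func (e *ewma) observe(d time.Duration) float64 {
	e.mu.Lock()
	defer e.mu.Unlock()
	ms := float64(d.Milliseconds())
	if e.value == 0 {
		e.value = ms // seed with the first observation
	} else {
		e.value = e.alpha*ms + (1-e.alpha)*e.value
	}
	return e.value
}

// newLatencyTracker times c.Next() and records it; a latency-based shedder
// could then compare the running average against a configured threshold.
func newLatencyTracker(avg *ewma) fiber.Handler {
	return func(c *fiber.Ctx) error {
		start := time.Now()
		err := c.Next()
		avg.observe(time.Since(start))
		return err
	}
}
```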
@Behzad-Khokher looks very good. Can you create another markdown file for our documentation in this folder https://github.com/gofiber/fiber/tree/master/docs/api/middleware with the same concept as the others, where the middleware is described and the signature, default config, and the individual config settings are covered with examples included? This would also help with the review. |
@ReneWerner87 Yes! Absolutely, will look into it and try my best to finalise everything as soon as I can. Thanks for getting back to me 🙃 |
@Behzad-Khokher any progress ? |
@ReneWerner87 Hey! Really sorry, just got really busy last week. I have made the middleware extendable and modified the code for some minor improvements but still have to write the documentation. Will push some code in a bit. Sorry for the delay. |
@ReneWerner87 Noob question, but was it supposed to be under this, or am I supposed to create a new PR for the modification: #2706 Also it's pretty much done; I'm satisfied with it, just need to clean it up and remove the comments etc. I'll get the documentation done by tomorrow. Again, apologies for the delay 😅 just had a lot going on. |
Thanks for the feedback, will look into that. Right now I'm going to focus on the documentation. I have closed the other PR; my bad for that. |
@ReneWerner87 I have documented it and am satisfied with the code: Documentation: (f77ba00) Should I squash my commits? |
not needed |
@Behzad-Khokher please go through the headlines again and add a hashtag to each of the lower ones under the main headline, so that there is only one main headline |
@ReneWerner87 Done. |
@Behzad-Khokher can you open a pull request with the code there and integrate your middleware into the workflows |
@Behzad-Khokher since this middleware is well separated, the code only needs to be moved |
@ReneWerner87 Hey, thanks! That's great. Will try my best to get this done by tomorrow. |
@Behzad-Khokher hi, hope you haven't forgotten; a port to the contrib repository would be very nice, I think this middleware can be put to good use |
@Behzad-Khokher happy new year ping |
static buzzing, dread noise of empty laboratory in horror scifi movies creeping in at a distance |
Happy New Year, guys! My apologies for the radio silence. Life stuff got in the way 🥹. @ReneWerner87 @caner-cetin I have also been looking into making it extendable, e.g. shedding based on latency. I have a few ideas, but let's just get the first PR merged in contrib. Contrib: Loadshed Middleware - Proportional Request Rejection PR |
Closed by gofiber/contrib#899 |
Description
I have implemented a load shed middleware. The inspiration originally came from this post: #2341
Loadshed middleware works by intentionally shedding or dropping some of the load (e.g., incoming requests), so the system can maintain acceptable performance and avoid potential failures.
The unique aspect of my implementation is Proportional Request Rejection Based on Probabilistic CPU Load. This means that as the CPU load increases, the probability of rejecting a request rises proportionally.
The formula used for this proportional rejection is:
rejectionProbability := (cpuUsage - cfg.LowerThreshold*100) / (cfg.UpperThreshold - cfg.LowerThreshold)
A: cpuUsage - cfg.LowerThreshold * 100: calculates how much the current CPU usage (0-100) exceeds the lower threshold (the threshold is configured as a fraction, so it is scaled by 100 here).
B: cfg.UpperThreshold - cfg.LowerThreshold: calculates the range between the lower and upper thresholds.
C: A/B: the division scales the exceeded amount relative to the range between the lower and upper thresholds.
The result is rejectionProbability. Keeping everything else constant, as cpuUsage increases, rejectionProbability increases, and it can be used to reject requests probabilistically.
If the CPU usage is below the LowerThreshold, no requests are rejected. If it is above the LowerThreshold, every request has some probability of being rejected, depending on how close the CPU usage is to the UpperThreshold. Once usage exceeds the UpperThreshold, all requests are rejected.
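Worked example: with LowerThreshold = 0.90, UpperThreshold = 0.95 and cpuUsage = 92, the formula gives (92 - 90) / 0.05 = 40, i.e. roughly a 40% chance of rejection on the 0-100 scale the formula produces. A sketch of how that decision could be applied per request (names mirror the description above and are illustrative, not necessarily the final code):

```go
package loadshed

import "math/rand"

// shouldShed decides whether to reject a request. cpuUsage is on a 0-100 scale,
// the thresholds are fractions (0.0-1.0), matching the formula in the description.
func shouldShed(cpuUsage, lowerThreshold, upperThreshold float64) bool {
	if cpuUsage < lowerThreshold*100 {
		return false // below the band: never shed
	}
	if cpuUsage >= upperThreshold*100 {
		return true // above the band: always shed
	}
	// Inside the band: with the units above, rejectionProbability lands on 0-100.
	rejectionProbability := (cpuUsage - lowerThreshold*100) / (upperThreshold - lowerThreshold)
	return rand.Float64()*100 < rejectionProbability
}
```

In the handler, a true result would map to something like c.SendStatus(fiber.StatusServiceUnavailable), while false lets the request continue via c.Next().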
This is a draft. Looking forward to feedback from the community. (This is my first PR in open source)
Fixes #2341