What flow-control mechanism(s) for RPC requests? #7942
Replies: 6 comments 8 replies
-
If you are referring to the concrete Substrate RPC API, I am not the right person to reply, given that I have not been involved much in that part of the codebase. If you are referring to flow-control in general: yes, there have been many discussions. Let me link a couple:
See also w3f/polkadot-validator-setup#89 (comment). Let me know if this is of some help.
-
Thank you! The pointers are helpful for getting more context. So, to me, this discussion is about the generic question, i.e. about handling "all" incoming requests, and therefore also RPC API requests. All in-flight requests generate load and should be taken into account when rate limiting.
-
//cc @tomusdrw
-
Both are no longer valid today; you can apply a band-aid in the form of rate-limiting in the reverse-proxy layer.
-
All I can offer here is opinions. Like @gww-parity says in the ticket, there are pros and cons to both approaches. I think that using a proxy in front of the node is good enough, and that the engineering effort required to ensure dynamic back pressure for all use-cases is better spent elsewhere. Until we hear of a concrete user requirement for dynamic rate limiting on the RPC layer, my opinion is that we should encourage everyone to put a proxy in front of their node.
-
Which reverse proxies offer limiting the number of in-flight connections, preferably with different limits for different request types? (For Nginx, e.g., I've found a few parameters, but I am far from a sysadmin ninja, so I don't know whether they answer my question.)
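For what it's worth, Nginx does ship modules for both kinds of limits: `limit_conn` (ngx_http_limit_conn_module) caps concurrent, i.e. in-flight, connections per key, and `limit_req` (ngx_http_limit_req_module) caps request rate with an optional burst queue. A rough sketch, where the zone names, sizes, rates, and the upstream port are all made-up illustration values:

```nginx
http {
    # Track limits per client IP; zone names/sizes are arbitrary examples.
    limit_conn_zone $binary_remote_addr zone=rpc_conn:10m;
    limit_req_zone  $binary_remote_addr zone=rpc_rate:10m rate=50r/s;

    server {
        listen 80;

        location / {
            limit_conn rpc_conn 10;            # at most 10 in-flight connections per IP
            limit_req  zone=rpc_rate burst=20; # queue up to 20 requests above the rate
            proxy_pass http://127.0.0.1:9933;  # hypothetical node RPC endpoint
        }
    }
}
```

Note that "different limits for different request types" is hard to express here: for JSON-RPC over HTTP the method name sits inside the POST body, which Nginx does not parse, so per-`location` limits only help if different request classes are exposed on different URLs or ports.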
-
Flow-control means things like a maximum number of in-flight requests, dynamic control over whether there is capacity to receive one more request, etc.
This can be achieved in two ways: inside the application itself, or by using an external proxy server.
Were there any discussions so far regarding request flow-control and how to tackle it?
Each of the approaches (inside the application and using an external proxy server) has its pros and cons and has to be designed accordingly.
Instead of dynamic flow-control, we may be fine with fixed rate limiting (e.g. on Nginx) for now. That is probably OK, as we may not need to reach maximum utilisation (at least for now). Still, it would be nice to have the informed decision making on this topic captured.
It's related to RPC requests, so it's a kind of infrastructure question, and therefore probably worth treating in a generalized way, as the problem is universal (every kind of request puts some vector of load on the service in principle).