
Performance issue with kusama #1554

Open
jun0tpyrc opened this issue Nov 21, 2024 · 4 comments

Comments


jun0tpyrc commented Nov 21, 2024

Description
Performance issue with Kusama
Private nodes running behind Sidecar hang and get stuck, leading to RPC timeouts and similar failures. This may also be the cause of the explorer slowdown at https://kusama.subscan.io/.

Steps to Reproduce

Run Sidecar (v19.2.0/v19.3.1) against your RPC nodes (tested on polkadot 1.16.2-dba2dd59101) and issue some calls such as /blocks/head.
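
A minimal reproduction sketch, assuming Sidecar is installed from npm (@substrate/api-sidecar) and the Kusama RPC node is reachable at ws://127.0.0.1:9944; adjust the URL and port to your setup:

```bash
# Point Sidecar at the local Kusama RPC node (assumed ws://127.0.0.1:9944) and start it.
SAS_SUBSTRATE_URL=ws://127.0.0.1:9944 substrate-api-sidecar &

# Sidecar listens on port 8080 by default; time a few calls to /blocks/head.
for i in 1 2 3; do
  curl -s -o /dev/null -w "/blocks/head took %{time_total}s\n" http://127.0.0.1:8080/blocks/head
done
```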

Contributor

Imod7 commented Nov 21, 2024

Thank you for the issue @jun0tpyrc.
I just tested Sidecar (v19.3.1) against some public RPC endpoints connected to Kusama, and /blocks/head returns either almost instantly or with only a small delay, so I cannot reproduce the timeout you are experiencing.
On Sidecar's side, there are no significant changes in recent releases that could cause delays in endpoints like /blocks/head.
Have you changed anything in your nodes? I am also checking whether something on the Kusama network in general could explain a delay.


nfekunprdtiunnkge commented Nov 21, 2024

We are experiencing the same; Sidecar is using more than 4 GB of RAM. Try with blocks around 25875378.

Contributor

filvecchiato commented Nov 22, 2024

I'm currently looking into this issue. I have seen some degradation for certain Kusama blocks due to the Spammening reported here, specifically blocks in the range 25875509-25875808.

Those blocks contain roughly 4,000-5,000 extrinsics each (compared to an average of 40-50), arriving every 6 seconds. So the Sidecar endpoints /blocks/:number and /blocks/head (when those blocks were the head) struggle to fetch and decode all extrinsics quickly enough to respond in a timely manner.
We are currently looking into a solution that reduces the extrinsic-decoding overhead for when blocks regularly average that level of traffic.
It would be quite helpful to understand your use cases for Sidecar, specifically around /blocks/head and /blocks/:number, so we can work on a more performant solution. Thanks!!
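
To illustrate the difference, here is a hedged sketch comparing the extrinsic count of a block inside the reported range against an earlier block (it assumes a local Sidecar instance on port 8080 connected to Kusama and jq installed; the block numbers are illustrative picks, not measured examples):

```bash
# Block inside the reported heavy range (25875509-25875808).
curl -s http://127.0.0.1:8080/blocks/25875600 | jq '.extrinsics | length'

# An earlier block outside the range, for comparison.
curl -s http://127.0.0.1:8080/blocks/25875000 | jq '.extrinsics | length'

# Rough end-to-end timing of fetching a heavy block.
curl -s -o /dev/null -w "fetch took %{time_total}s\n" http://127.0.0.1:8080/blocks/25875600
```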

Contributor

Imod7 commented Nov 26, 2024

@jun0tpyrc @nfekunprdtiunnkge

Please expect a possible decline in Sidecar performance starting at approximately 14:00 UTC today and lasting for around 40 minutes. This is due to an expected Spammening event on the Polkadot network.

During this period, Sidecar endpoints like blocks, staking-info and others may be affected, as we expect a high volume of 2,000-5,000 transactions per block.

To mitigate the potential performance impact, we recommend the following:

  • Increase Sidecar's heap memory by using the --max-old-space-size flag in your process startup command.
  • Use the noFees=true and finalizedKey=false query parameters on the blocks endpoints to reduce the data payload (see the sketch below).
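
A hedged sketch of both mitigations, assuming Sidecar is launched from the substrate-api-sidecar binary and the flag is passed via NODE_OPTIONS (the 4096 MB value is only an example; size it to your host):

```bash
# Raise the V8 heap limit for the Sidecar process, then start it against the node.
NODE_OPTIONS="--max-old-space-size=4096" \
SAS_SUBSTRATE_URL=ws://127.0.0.1:9944 substrate-api-sidecar &

# Query the blocks endpoints with the suggested parameters to trim the payload.
curl -s "http://127.0.0.1:8080/blocks/head?noFees=true&finalizedKey=false"
```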

Related docs:
Showcasing Polkadot’s Capabilities: The Spammening
