how to match quality of original implementation? #155

Open
tsaizhenling opened this issue Oct 31, 2023 · 1 comment

Comments

@tsaizhenling

I have been playing with the truck sample in both this repository and in graphdeco-inria/gaussian-splatting.

[Image: rendering from this repo]

[Image: rendering from the reference implementation, trained with default parameters]

I can't seem to replicate the same reconstruction quality with this repository (note that the fence cannot be rendered clearly). I have tried to match the learning rates by making the following changes to the config. What is causing the difference?

--- a/config/tat_truck_every_8_test.yaml
+++ b/config/tat_truck_every_8_test.yaml
@@ -31,8 +31,8 @@ print-metrics-to-console: False
 enable_taichi_kernel_profiler: False
 log_taichi_kernel_profile_interval: 3000
 log_validation_image: False
-feature_learning_rate: 0.005
-position_learning_rate: 0.00005
+feature_learning_rate: 0.0025
+position_learning_rate: 0.00016
 position_learning_rate_decay_rate: 0.9947
 position_learning_rate_decay_interval: 100
 loss-function-config:
@@ -45,8 +45,11 @@ rasterisation-config:
   depth-to-sort-key-scale: 10.0
   far-plane: 2000.0
   near-plane: 0.4
+  grad_s_factor: 2
+  grad_q_factor: 0.4
+  grad_alpha_factor: 20
 summary-writer-log-dir: logs/tat_truck_every_8_experiment
-output-model-dir: logs/tat_truck_every_8_experiment
+output-model-dir: logs/tat_truck_every_8_experiment_matched_lr
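
For comparison, here is a minimal sketch (not code from either repo) of the two position learning-rate schedules, assuming this repo multiplies the LR by `position_learning_rate_decay_rate` once every `position_learning_rate_decay_interval` iterations, and that the reference uses its default log-linear interpolation from 1.6e-4 down to 1.6e-6 over 30k steps:

```python
# Rough sanity check of the two position-LR schedules (assumptions noted above;
# this is not the actual code path of either repo).
import math

def taichi_lr(step, lr0=0.00016, decay_rate=0.9947, decay_interval=100):
    # Assumption: stepwise multiplicative decay every decay_interval iterations.
    return lr0 * decay_rate ** (step // decay_interval)

def official_lr(step, lr_init=0.00016, lr_final=0.0000016, max_steps=30_000):
    # graphdeco-inria default schedule: log-linear interpolation over max_steps.
    t = min(step / max_steps, 1.0)
    return math.exp((1.0 - t) * math.log(lr_init) + t * math.log(lr_final))

for step in (0, 7_500, 15_000, 30_000):
    print(f"step {step:>6}: taichi={taichi_lr(step):.2e}  official={official_lr(step):.2e}")
```

If those assumptions hold, the position LR here is still around 3e-5 at 30k iterations, whereas the reference has decayed to 1.6e-6, so even with a matched initial value the schedules end up quite different.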
@jb-ye (Contributor) commented Nov 1, 2023

Based on my experience, this repo's implementation differs from the official one in many ways, and so does its rendering performance. One notable difference is that its Gaussian densification strategy is much more conservative than the official repo's. Directly matching the parameters won't lead to the same performance. That said, I have found cases where Taichi GS performs better than the official one; it really is case by case.
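
For context, the official repo's densify-and-prune step roughly looks like the sketch below (thresholds are the graphdeco-inria defaults as I recall them; the function name and array layout are illustrative, not the actual API of either repo). If this repo densifies less aggressively, thin structures like the fence may simply never get enough Gaussians, regardless of the learning rates.

```python
# Illustrative sketch of the official clone/split/prune heuristic.
# All names, the array layout, and the helper itself are assumptions for
# illustration; neither repo exposes exactly this function.
import numpy as np

GRAD_THRESHOLD = 0.0002     # densify_grad_threshold (official default)
PERCENT_DENSE = 0.01        # percent_dense (official default)

def densify_and_prune(xyz, scales, opacities, grad_accum, scene_extent,
                      min_opacity=0.005):
    """One densification step: clone small high-gradient Gaussians,
    split large high-gradient ones, prune nearly transparent ones."""
    high_grad = grad_accum >= GRAD_THRESHOLD
    large = scales.max(axis=1) > PERCENT_DENSE * scene_extent

    clone_mask = high_grad & ~large      # duplicate small points in place
    split_mask = high_grad & large       # split big points into two smaller ones

    cloned = xyz[clone_mask]
    split = np.repeat(xyz[split_mask], 2, axis=0)  # (offset sampling / scale shrink omitted)

    keep = opacities >= min_opacity      # prune nearly transparent points
    return np.concatenate([xyz[keep], cloned, split], axis=0)
```

As far as I remember, the official repo runs this every 100 iterations between iteration 500 and 15,000, with opacity resets every 3,000 iterations, so the schedule matters as much as the thresholds.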
