
Numerical issues with direct Fun usage #3

Open

ChrisRackauckas opened this issue May 1, 2017 · 28 comments

@ChrisRackauckas

Direct usage of Funs in OrdinaryDiffEq.jl works... but only kind of.

[screenshot: numerical_errors]

After a few steps, the size of the Fun's coefficient array grows really large and numerical errors occur. On the same problem, the fixed-size Fun does okay, so I think it must be due to how the array just keeps growing.

@ChrisRackauckas

@dlfivefifty

@dlfivefifty

Adaptive time-stepping is a delicate issue. I had an honours student look at this, and we found it can only be done reliably with L-stable schemes. This may be avoidable with smarter plateau detection.

It's also important that the solution's coefficients are chopped to avoid the length getting out of control.

@dlfivefifty

Note that L-stable schemes add a bit of numerical diffusion, which may not be desirable.

@ChrisRackauckas

This doesn't do adaptive timestepping. This is fixed timesteps, adaptive coefficient length.

> It's also important that the solution's coefficients are chopped to avoid the length getting out of control.

This is the issue.

@ChrisRackauckas

> Note that L-stable schemes add a bit of numerical diffusion, which may not be desirable.

Sundials CVODE_BDF seems to do fine. You likely only need L-stability if you're near the stability limit.

@dlfivefifty

I meant adaptive in space

@ChrisRackauckas

I see. Gets confusing with the two meanings.

Yeah, I don't think it's the spatial adaptivity as much as it is the numerical issue with huge numbers of coefficients. It works fine for quite a few steps, but reliably fails at step ~12 no matter what, and the coefficient array is large with tons of <1e-16 coefficients. Is there a way to zero the coefficients when they are sufficiently small?

@dlfivefifty

I think you always need L-stability, as in infinite-dimensions you want the operators to be bounded. That is, you want to invert more differential orders than you apply. (This is more than a numerical issue.)

@dlfivefifty

Yes, chop(f,atol) and chop!(f,atol)
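
For example, a minimal sketch of using that here (the noisy tail below is manufactured just to show the truncation; the exact call pattern is a sketch, not the ODE test code):

```julia
using ApproxFun

f = Fun(x -> exp(x) * sin(20x), Chebyshev())

# Manufacture a Fun with a long tail of tiny coefficients, roughly what the
# ODE steps produce, then truncate the tail.
g = f + Fun(Chebyshev(), 1e-15 .* randn(500))
length(coefficients(g))   # ~500 coefficients, mostly noise

g = chop(g, 1e-14)        # drop trailing coefficients below 1e-14
length(coefficients(g))   # back to roughly the length of f

chop!(g, 1e-14)           # in-place variant
```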

@dlfivefifty

chop should probably be redesigned to mimic isapprox and allow both relative and absolute tolerances.
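
Something along these lines, say (the helper name chopapprox and its defaults are hypothetical, not existing ApproxFun API):

```julia
using ApproxFun, LinearAlgebra

# Hypothetical isapprox-style chop: drop trailing coefficients that fall
# below max(atol, rtol * norm of the coefficient vector).
function chopapprox(f::Fun; atol::Real = 0.0, rtol::Real = eps())
    c   = coefficients(f)
    tol = max(atol, rtol * norm(c))
    n   = length(c)
    while n > 1 && abs(c[n]) ≤ tol
        n -= 1
    end
    Fun(space(f), c[1:n])
end

chopapprox(Fun(exp, Chebyshev()); rtol = 1e-10)
```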

@ChrisRackauckas

Is there a way to set the Fun to "auto-chop" all coefficients below a certain tolerance?

@dlfivefifty

This really should happen in the constructor. The only reason it doesn't is that I'm not sure about the implications.

One of the big missing features in ApproxFun that Chebfun has is plateau detection and chopping, to determine which coefficients are noise. I've been meaning to copy their (very heuristic) chopping algorithm.

@dlfivefifty

The more the constructor is used, the more important this is (which is the case with broadcasting).

@dlfivefifty

Sorry, this really should happen in backslash

@ChrisRackauckas

Backslash isn't used here. This is just testing a Runge-Kutta method. Fixed coefficient length works (with adaptivity in time). Same timesteps doesn't work with adaptivity in space, and the number of coefficients seems to explode. If there's a way that a+b for Funs applies some tolerance on the coefficients of the result and correctly chops, this would work without intervention.
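
A minimal sketch of that "chopped +" idea, assuming ApproxFun's chop as above (the helper addchop and the tolerance are hypothetical):

```julia
using ApproxFun

# Hypothetical "chopped addition": add two Funs and immediately truncate any
# trailing coefficients below a tolerance.
addchop(a::Fun, b::Fun, tol = 1e-14) = chop(a + b, tol)

f = Fun(exp, Chebyshev())
g = Fun(x -> sin(20x), Chebyshev())
h = addchop(f, g)   # same value as f + g, with the sub-tolerance tail removed
```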

@ChrisRackauckas

Also avoiding backslash are things like exponential RK methods, so it's probably good to have a solution that doesn't require backslash.

However, if the RK methods are treated as projective RK methods on manifolds à la #2, then backslash would be used at the end of each step, so I guess that still kind of works.

@dlfivefifty

Chebfun does the chop after every operation, which is something that could be copied

Though when you are using broadcast the chop in the constructor is sufficient.

That said, it's not clear how explicit methods can work with adaptive spatial discretisation unless you also have adaptive time discretisation, because CFL says the time step should decrease as the length increases.
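
To make that coupling concrete: for Chebyshev discretisations the first-derivative operator has spectral radius growing like O(N²) in the coefficient length N (O(N⁴) for the second derivative), so a rough, illustrative sketch of the step restriction is:

```julia
# Illustrative only: explicit stable step sizes shrink as the coefficient
# length N grows, since the first-derivative operator has spectral radius
# O(N^2) and the second derivative O(N^4) on a Chebyshev grid.  C is an
# unspecified method- and problem-dependent constant.
stable_dt(N; deriv_order = 1, C = 1.0) = C / N^(2deriv_order)

for N in (64, 256, 1024)
    println("N = $N:  advection dt ≲ ", stable_dt(N),
            "   diffusion dt ≲ ", stable_dt(N, deriv_order = 2))
end
```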

@dlfivefifty

(L-stable schemes do work with fixed time steps but they are all implicit.)

@ChrisRackauckas

> unless you also have adaptive time discretisation, because CFL says the time step should decrease as the length increases.

Yes, this is just a test while #1 doesn't have a fix (which would allow adaptivity). And now that you mention it, this is what CFL instability looks like... so that, plus the lack of chopping, is probably it.

> (L-stable schemes do work with fixed time steps but they are all implicit.)

Not exactly. There are methods which are L-stable in the linear part. Exponential RK, IIF, etc. That's sufficient for stopping this kind of instability issue. Rosenbrock methods can be L-stable and they are not implicit, though they do require a linear solve.

> Chebfun does the chop after every operation, which is something that could be copied
> Though when you are using broadcast the chop in the constructor is sufficient.

Do you have an issue for tracking this?

@dlfivefifty

@ChrisRackauckas

Alright, it looks like this issue is just a mixture of the other issues, then.

@dlfivefifty

I've copied the Chebfun chopping method, so the lengths should be more controlled

@ChrisRackauckas

I'll wait for your next release and have another go at this.

@dlfivefifty

Can you give the line of code to test the solution growth rate? The constructor is still ad hoc, so I'd like to test that it works as expected.

@ChrisRackauckas

Needs some master branches right now. Just wait a little bit and remind me.

@ChrisRackauckas

https://github.com/JuliaDiffEq/DiffEqApproxFun.jl/blob/master/test/direct_approxfun.jl

Just looking at the Euler one,

sol(0.4).coefficients

shows that by t=0.4 it's at 1636 coefficients. Much lower than before, but still growing fast enough that by t=0.5 it has CFL problems. Is there a way to put a cap on it, or tell it what kind of accuracy you want to chop at?
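
In the meantime, one workaround sketch is to chop inside the right-hand side itself. (This is not what test/direct_approxfun.jl does; the heat-equation problem, the tolerance, the f(u, p, t) signature, and ignoring boundary conditions are all assumptions for illustration.)

```julia
using ApproxFun, OrdinaryDiffEq

S  = Chebyshev()
u0 = Fun(x -> exp(-40x^2), S)
D2 = Derivative(S, 2)

# Truncate trailing near-zero coefficients on every RHS evaluation so the
# coefficient length can't run away between steps.
rhs(u, p, t) = chop(D2 * u, 1e-12)

# dt still has to respect the CFL-type restriction discussed above, so this
# only delays the blow-up rather than curing it.
prob = ODEProblem(rhs, u0, (0.0, 0.5))
sol  = solve(prob, Euler(), dt = 1e-6)

length(sol(0.4).coefficients)   # watch the coefficient growth
```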

@dlfivefifty

At the moment I think no. I think some serious thought needs to be put into Lebesgue constants, etc., before explicit methods can be reliably used with adaptivity. (Implicit methods like backward Euler fare much better since the coefficients don't plateau and so can be reliably chopped.) This is a planned future student project.

Though it's not entirely clear how the "infinite-dimensional" time steps should look. It might be worthwhile doing a simulation with BigFloat coefficients to see what the "true" growth rate of ncoefficients for the time steps should be.
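
A rough sketch of that experiment, assuming ApproxFun's BigFloat support for the constructor and derivatives (the problem, precision, step size, and chop tolerance are placeholders):

```julia
using ApproxFun

setprecision(256)

S  = Chebyshev(big(-1.0)..big(1.0))   # BigFloat domain (assumed supported)
u0 = Fun(x -> exp(-40x^2), S)
D2 = Derivative(S, 2)

# Hand-rolled forward Euler at high precision, chopping far below the target
# accuracy and recording how many coefficients survive each step.
function track_growth(u, L, dt, nsteps; tol = big"1e-40")
    growth = Int[]
    for _ in 1:nsteps
        u = chop(u + dt * (L * u), tol)
        push!(growth, length(coefficients(u)))
    end
    growth
end

track_growth(u0, D2, big"1e-6", 20)   # "true" ncoefficients per step
```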

@dlfivefifty

Just to clarify: it's not as simple as specifying an absolute or relative tolerance at which to chop, as eventually the errors from the simulation will exceed this and then you get a long tail. So essentially one needs some round-off error control to modify the tolerance adaptively. Lebesgue constants should give this (I think).
