Integrate with time stepping codes #1
Note that if you just target the DiffEq common interface, the same code will work for DifferentialEquations.jl, Sundials.jl, ODEInterface.jl, and ODE.jl. That's why I'm suggesting doing that.
All that would be required is that you be able to build a Problem type: https://juliadiffeq.github.io/DiffEqDocs.jl/latest/types/ode_types.html If there's a function that builds the problem from an ApproxFun function, then it should all just work.
Ah OK. Note that the code here is derelict, and is of the variety of "discretize in time, ∞-dimensional in space", whereas your previous request was the more traditional pseudo-spectral approach of "∞-dimensional in time, discretize in space". I think the latter approach is the right way to go, as it means nothing special has to be done to DiffEq.
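For reference, the "discretize in space, ∞-dimensional in time" (method-of-lines) approach can be sketched with a toy heat equation; the finite-difference right-hand side below is illustrative only, not ApproxFun code:

```julia
# Method of lines for u_t = u_xx with u(0) = u(1) = 0:
# discretize in space, then hand the resulting ODE system
# du/dt = heat_rhs(t, u) to any standard time-stepper.
function heat_rhs(t, u)
    N = length(u)
    h = 1/(N + 1)
    du = similar(u)
    for i in 1:N
        left  = i == 1 ? 0.0 : u[i-1]  # Dirichlet boundary at x = 0
        right = i == N ? 0.0 : u[i+1]  # Dirichlet boundary at x = 1
        du[i] = (left - 2u[i] + right)/h^2
    end
    du
end
```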
For periodic problems it should be really easy. How do you support constraints? For example, with a Dirichlet heat equation it's necessary that we constrain the value at ±1.
Also, is it necessary that …?
That can be relaxed. Will a `Fun` need to support indexing?
No. Why would you need indexing like an array?
Let me rephrase that: a vector-valued `Fun` behaves like a vector:

```julia
f = Fun(x -> [exp(x); sin(x)], [-1,1])
f[1] # is Fun(exp, [-1,1])
f[2] # is Fun(sin, [-1,1])
```

A scalar-valued `Fun` doesn't at the moment support indexing, but …
For the update steps which do a linear combination of the partial steps, as in a Runge-Kutta method. For example, say we took two partial RK steps and stored them in `k[1]` and `k[2]`:

```julia
for i in eachindex(u)
    tmp[i] = a31*k[1][i] + a32*k[2][i]
end
```

Is it better then to treat a `Fun` as immutable?
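If a `Fun` is treated as immutable, the indexed loop above can instead be whole-object arithmetic; a minimal sketch (the `rk_combine` helper is illustrative, not DiffEq's actual internals):

```julia
# Out-of-place RK combination: only needs * and +, so the same code
# works for a Float64, a Vector, or an ApproxFun Fun.
rk_combine(k1, k2, a31, a32) = a31*k1 + a32*k2

tmp = rk_combine(2.0, 3.0, 0.5, 0.25)  # -> 1.75
```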
Well here's the big question: is a `Fun` something that can be used directly as the state `u`?
Yes, a `Fun` works directly:

```julia
u0 = Fun(θ -> cos(cos(θ - 0.1)), PeriodicInterval())
prob = ODEProblem((t, u) -> u'', u0, (0, 1)) # ' is overridden for a Fun to mean derivative
```

(In practice, there should be a …)
(My policy is to always use …)
I think that immutable vs mutable is a better way of making the distinction. The reason is that we accept two different kinds of functions: out-of-place `f(t,u)` which returns the derivative, and in-place `f(t,u,du)` which writes into `du`.
I think it's better to leave it immutable in this context: the number of coefficients of a `Fun` is adaptive, so it changes as the solution evolves.
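The two calling conventions just mentioned look like this (toy right-hand sides, for illustration only):

```julia
# Out-of-place: returns a new value; suits immutable states like a Fun.
f_oop(t, u) = -u

# In-place: overwrites du; suits mutable, fixed-size states like Vector.
f_iip(t, u, du) = (du .= -u)
```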
OrdinaryDiffEq.jl (the native DifferentialEquations.jl methods) actually already handles that. See the cell model: https://juliadiffeq.github.io/DiffEqDocs.jl/latest/features/callback_functions.html
But would they change size with every function call? That would be too difficult to handle.
Yes. There is another way we could go about this: send only matrices and arrays to DiffEq.jl, and have this package pre- and post-process the data...
Would they still be changing size every function call? Because then it might be too difficult to handle anyway, and if you're always allocating to resize, I don't think there'd be much of a performance difference from just chopping a few allocations out.
Question: what does broadcasting do to a `Fun`?
Chris, something of interest for time evolution of PDEs is the use of IMEX Runge-Kutta schemes, i.e. schemes with multiple tableaux to handle multi-physics problems. These would work well with ApproxFun, because one can cache partial QR factorizations of linear operators (whose inverses are required in DIRK schemes, for example). Should we consider these in DifferentialEquations.jl? Here are a few IMEX RK schemes: http://www.cs.ubc.ca/~ascher/papers/ars.pdf
These are already being considered, just haven't been implemented yet. It'll look pretty much the same as the ODEProblem:

```julia
prob = ODEIMEXProblem(f, g, u0, tspan)
```

where you give the two functions, stiff and non-stiff, and then `solve(prob, alg; kwargs...)` solves the problem with the chosen algorithm. That's the interface we're looking for. FYI, I'm going to be concentrating on more Rosenbrock and exponential RK methods first, though.
I'm going to take a crack at getting equations on …
Is that something that's done automatically? Can you show the docs for that? |
Ok. We have an ETDRK4 scheme built here somewhere. It uses some special functions from ApproxFun's …
I think you just call it with broadcast syntax:

```julia
f.([1,2,3]) == [f(1),f(2),f(3)]
```
I'll add `qrfact` to the docs.
Sorry, maybe this is more helpful:

```julia
f = Fun(exp, [-1,1])
g = Fun(cos, [-1,1])
h = Fun(x -> exp(cos(x)), [-1,1])
f.(g) == h
```
Or also:

```julia
g = Fun(cos, [-1,1])
h = Fun(x -> exp(cos(x)), [-1,1])
exp.(g) == h
```
The issue is that boundary conditions (other than periodic) are not preserved: just because the initial condition satisfies them doesn't mean the time-stepped solution will. The fact that time-stepping works well with periodic boundary conditions is extremely lucky.
But we might be getting ahead of ourselves: I'm not sure boundary conditions are realistic without implicit-explicit (IMEX) methods. IMEX methods are mostly useful when you have fast implicit solvers, so for that we'd need to specify which solver to use …
They should be able to be imposed on the explicit method as well. For spectral methods it's usually done through the choice of basis (periodic -> Fourier basis). If there is no appropriate basis, then there is an order reduction if you impose the restriction directly on the ODEs. When the appropriate basis can't be formed, a simplified form of a DAE is usually sufficient. I think what we discussed can always be written as a constrained ODE or an ODE with a mass matrix. Getting these right (without simply making everything an implicit ODE) will take some time, but I think we know what we need to do now.
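As a concrete (toy) illustration of the constrained-ODE / mass-matrix idea: for a Dirichlet heat equation the boundary rows become algebraic constraints, giving a singular mass matrix. The finite-difference discretization below is illustrative only, not ApproxFun code:

```julia
# Mass-matrix DAE form M*u' = A*u for u_t = u_xx with u(-1) = u(1) = 0.
N = 8
h = 2/(N - 1)
A = zeros(N, N)
for i in 2:N-1                  # interior rows: second difference
    A[i, i-1] =  1/h^2
    A[i, i]   = -2/h^2
    A[i, i+1] =  1/h^2
end
A[1, 1] = 1.0                   # boundary row: u(-1) = 0
A[N, N] = 1.0                   # boundary row: u(+1) = 0
M = zeros(N, N)
for i in 2:N-1
    M[i, i] = 1.0               # time derivative only on interior rows
end
# Rows 1 and N of M*u' = A*u read 0 = u(±1): algebraic constraints.
```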
What if it's just done with ordinary (finite) matrices?
I agree that this is the simplest way forward to handle general boundary conditions, though you lose the O(N) complexity of ApproxFun's `\`, as it won't recognize that the operators are almost banded.
Short answer: only … The infinite-dimensional nature of ApproxFun works well with implicit-explicit methods, which extend naturally to ∞ dimensions. The benefit of building off of ApproxFun's `\` is …

PS: Here's a spectral time-stepping project in Python. I believe they work using implicit-explicit methods where the user specifies the equation as something like `L*u_t + M*u = f(t,u)`, where

```julia
L = [0; 0; I]
M = [Evaluation(-1); Evaluation(1); -D^2]
f = (t, u) -> [0; 0; u^2]
```

The departure here from DifferentialEquations.jl is that it allows a matrix acting on the time derivative `u_t`.
That's mass matrices. We already know how we want to do that: SciML/DiffEqBase.jl#1. And lots of solvers will be able to handle that with only minor changes (i.e. just exposing the API to pass the mass matrix). However, it'll take some time to implement, since I'll be working on stochastic things more for a while.
Got it. We'll just need to expose the choice of factorization method, and set defaults when using …
Yep, I think that'll work. Then code that treats … This should also resolve another issue:

```julia
S = space(u0)
C = Conversion(S, rangespace(M))
C*u_t + M*u
```
I currently use it with the …
That conversion should just be applied before making the mass-matrix problem and passed as the mass matrix, right? If it needs to be applied each step (to deal with the changing discretization size? Or will it handle that automatically?), then the non-constant mass matrix version will allow it to be a function, which will be able to do what's needed here.
Yep, the conversion operator would be the mass matrix. Operators can handle the adaptivity automatically (though a call to … may be needed). Would it be helpful to have …?
I think so? What does … do? For the "can be indexed" case I use InPlaceOps.jl to make everything in-place (and thus use …).
Ah, but the coefficients are (mutable) vectors, so one could do, for example:

```julia
resize!(f.coefficients, 1000)
f.coefficients[:] = rand(1000)
```

It is possible to make a `Fun` usable as an ∞-dimensional array, via:

```julia
c = Fun(f.coefficients, SequenceSpace())
c[10] # returns the 10th coefficient
```

But I'm not sure why a time-stepper would need to know about indexing. PS: I'm about to change the order to …
That's because in the most common case you update each index at each step. Until all of the broadcast-fusing changes are done, …
Broadcasting should be fine. I could overload:

```julia
function Base.broadcast!(::typeof(identity), a::Fun, b::Fun)
    resize!(a.coefficients, ncoefficients(b))
    a.coefficients[:] = b.coefficients
    a
end
```
Cool. Then it should all work out come v0.6.
I guess to support fusing properly I probably want to also have the definition:

```julia
function Base.broadcast!(f, a::Fun, b::Fun)
    c = f.(b)
    a .= c
end
```
To fuse what statement?
I'm worried about autofusing: if I have …
That is true. I think you do need the extra definition.
@ChrisRackauckas I've added documentation to ApproxFun: http://juliaapproximation.github.io/ApproxFun.jl/stable/ Hopefully this clarifies the relationship between operators, `qrfact`, etc. a bit. If you have any questions/requests, just file an issue. One major change is that the syntax is now …
Using QR / allowing the choice of QR won't be a problem if the linear algebra interface I'm interested in becomes a thing. See the blog post: http://www.stochasticlifestyle.com/modular-algorithms-scientific-computing-julia/ With that, someone could just pass in a method, so I'd default to LU, but one could make it dispatch on …
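The idea sketched in that post, of letting the caller choose the factorization routine, might look roughly like this (the `factorization` keyword is illustrative, not an existing DiffEq API):

```julia
# An implicit step ultimately solves W*x = b; which factorization is
# used becomes a caller choice, defaulting to LU.
function implicit_solve(W, b; factorization = lufact)
    F = factorization(W)        # e.g. lufact, qrfact, cholfact
    return F \ b
end

implicit_solve([2.0 0.0; 0.0 4.0], [2.0, 8.0])  # -> [1.0, 2.0]
```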
Nice article! P.S. I think your linear solves are backwards (or maybe it's just my version of Julia that's missing new methods):

```julia
julia> A = rand(3,3)
3×3 Array{Float64,2}:
 0.628874  0.527016  0.465296
 0.151401  0.401624  0.197612
 0.18347   0.674264  0.4713

julia> b = rand(3)
3-element Array{Float64,1}:
 0.979039
 0.348729
 0.747334

julia> K = lufact(A)
Base.LinAlg.LU{Float64,Array{Float64,2}}([0.628874 0.527016 0.465296; 0.291743 0.52051 0.335553; 0.24075 0.527839 -0.0915263],[1,3,3],0)

julia> x = b\K
ERROR: ctranspose not implemented for Base.LinAlg.LU{Float64,Array{Float64,2}}
 in \(::Array{Float64,1}, ::Base.LinAlg.LU{Float64,Array{Float64,2}}) at ./operators.jl:145

julia> x = K\b
3-element Array{Float64,1}:
  0.528412
 -0.0334175
  1.42779
```
haha, I was silly. Thanks for noticing that.
Looks good! You can also use `factorize`, which picks a suitable factorization automatically.
Oh, that might be a good idea. But is `factorize` type-stable?
I suppose not, but type stability of a factorization object is unlikely to cause overhead, as it immediately dispatches to a `\` call.
Ahh, but I will want to store the factorized object in many cases because it can be re-used. I thought …
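Storing and re-using a factorization is plain Julia linear algebra; for example (values illustrative):

```julia
A = [4.0 1.0;
     1.0 3.0]
F = lufact(A)          # factor once (this became lu(A) in Julia ≥ 0.7)
x1 = F \ [5.0, 4.0]    # -> [1.0, 1.0]
x2 = F \ [4.0, 3.0]    # re-uses the same factorization
```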
Just updating this thread. The discussion over here solves the matrix factorization problem we were talking about. The factorization function could just be what's passed in, and default to `factorize`, and this should cover all linear solvers in the future as well. https://discourse.julialang.org/t/unified-interface-for-linear-solving/699 Is there rootfinding on ApproxFun types? What would it even mean? That would be necessary for things like BDF methods. But whatever it is, if we unify the rootfinding interface as well, it could be overloaded for ApproxFun types and seamlessly work in those kinds of methods: https://discourse.julialang.org/t/a-unified-interface-for-rootfinding/698 That'll be what's needed for implicit methods on the adaptive-basis `Fun` to work.
There's a routine `newton` that does root finding for ODEs, usually BVPs:
```julia
x=Fun()
u0=0.0x # initial guess
N = u -> [u(-1)-1, u(1)+0.5, 0.001u'' + 6*(1-x^2)*u' + u^2 - 1]
u = newton(N, u0)
```
It would be awesome to use a general root-finding package. In this setting, Jacobians are `Operator`s, which I calculate using automatic differentiation via a type `DualFun`.
This should be a home for integrating ApproxFun with standard time-stepping codes, e.g.:
https://github.com/JuliaDiffEq/ODE.jl
https://github.com/luchr/ODEInterface.jl
This follows up on conversations at TU Munich with @luchr and @al-Khwarizmi, and a discourse discussion
https://discourse.julialang.org/t/pros-cons-of-approxfun/396/10
with @ChrisRackauckas.