
ExaModels.jl

An algebraic modeling and automatic differentiation tool in the Julia language, specialized for SIMD abstraction of nonlinear programs.



Overview

ExaModels.jl employs what we call SIMD abstraction for nonlinear programs (NLPs), which preserves the parallelizable structure within the model equations and thereby enables efficient, parallel reverse-mode automatic differentiation on GPU accelerators.

ExaModels.jl is different from other algebraic modeling tools, such as JuMP or AMPL, in the following ways:

  • Modeling Interface: ExaModels.jl requires users to specify model equations always in the form of generators. This restrictive structure allows ExaModels.jl to preserve the SIMD-compatible structure of the model equations, which distinguishes it from other algebraic modeling tools (see the model sketch after this list).
  • Performance: ExaModels.jl compiles (via Julia's compiler) derivative-evaluation code tailored to each computational pattern in the model. Reverse-mode automatic differentiation with this tailored code makes derivative evaluation significantly faster, even on CPUs.
  • Portability: ExaModels.jl goes beyond the traditional boundaries of algebraic modeling systems by enabling derivative evaluation on GPU accelerators. The GPU kernels are implemented with the portable programming paradigm offered by KernelAbstractions.jl, so the same code runs on multi-threaded CPUs, NVIDIA GPUs, AMD GPUs, and Intel GPUs (see the backend sketch below). Note that Apple's Metal is currently not supported because it lacks double-precision arithmetic.
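
To illustrate the generator-based interface, here is a minimal sketch of the Luksan-Vlcek example used in the ExaModels.jl documentation. Every constraint and objective term is written as a generator over an index range, which is what exposes the SIMD structure:

```julia
using ExaModels

function luksan_vlcek_model(N)
    c = ExaCore()  # container holding the model data
    # N variables with alternating starting points
    x = variable(c, N; start = (mod(i, 2) == 1 ? -1.2 : 1.0 for i = 1:N))
    # constraints as a single generator over i = 1:(N-2); all terms share
    # one computational pattern, so one tailored derivative kernel suffices
    constraint(
        c,
        3x[i+1]^3 + 2x[i+2] - 5 + sin(x[i+1] - x[i+2])sin(x[i+1] + x[i+2]) +
        4x[i+1] - x[i]exp(x[i] - x[i+1]) - 3 for i = 1:(N-2)
    )
    # objective terms, again as a generator
    objective(c, 100 * (x[i-1]^2 - x[i])^2 + (x[i-1] - 1)^2 for i = 2:N)
    return ExaModel(c)  # build the NLPModels-compatible model
end
```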
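
The same model can target different devices by passing a KernelAbstractions.jl backend when constructing the core. A minimal sketch, assuming the keyword form `ExaCore(; backend = ...)` from the ExaModels.jl documentation and a CUDA-capable device:

```julia
using ExaModels, CUDA

# Assumption: passing a KernelAbstractions.jl backend places the model data
# on the device and runs derivative evaluations as GPU kernels.
c = ExaCore(; backend = CUDABackend())
x = variable(c, 10; start = (mod(i, 2) == 1 ? -1.2 : 1.0 for i = 1:10))
objective(c, 100 * (x[i-1]^2 - x[i])^2 + (x[i-1] - 1)^2 for i = 2:10)
m = ExaModel(c)  # derivatives are now evaluated on the GPU
```

Swapping `CUDABackend()` for `ROCBackend()` (AMD) or `oneAPIBackend()` (Intel) retargets the same model code without further changes.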

Highlight

The performance comparison of ExaModels.jl with other algebraic modeling systems for evaluating different NLP functions (obj, con, grad, jac, and hess) is shown below. Note that Hessian computations are the typical bottleneck.

[benchmark figure: NLP function evaluation times across modeling systems]

Supporting ExaModels.jl