SABR adjoint is abnormally slow

Hello,

As part of an ongoing exercise, I recently benchmarked your implementation of the asymptotic SABR volatility function. I noticed that the implementation of the adjoint was abnormally slow for beta = 1, slower in fact than a hypothetical finite-difference implementation. The benchmarks are open source and the corresponding write-up may be read here.
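
For reference, the finite-difference baseline I have in mind is nothing more elaborate than bumping a parameter and re-evaluating the volatility, roughly along the lines of the sketch below; the sabrVolatility function there is a toy stand-in, not the Strata implementation.

```java
import java.util.function.DoubleUnaryOperator;

public class FiniteDifferenceBaseline {

    // Toy stand-in for a SABR volatility seen as a function of one
    // parameter (say alpha, with the other inputs held fixed).
    static double sabrVolatility(double alpha) {
        return 0.2 + 0.5 * alpha - 0.1 * alpha * alpha;
    }

    // Central finite difference: two extra evaluations per parameter.
    static double centralDifference(DoubleUnaryOperator f, double x, double h) {
        return (f.applyAsDouble(x + h) - f.applyAsDouble(x - h)) / (2 * h);
    }

    public static void main(String[] args) {
        double alpha = 0.25;
        double sensitivity =
            centralDifference(FiniteDifferenceBaseline::sabrVolatility, alpha, 1e-6);
        System.out.println("d(vol)/d(alpha) ~ " + sensitivity);
    }
}
```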

Is this an issue with Strata, or with the benchmark implementation?

Harshad

Thanks for trying out Strata. Obviously, it is not possible for us to run your benchmark code, as it uses your closed-source product. As such, I won’t delve too deeply into the rights or wrongs of that particular benchmark or its methodology.

It is also the case that Strata has not sought to optimise performance, as our first concern is to get the right numbers with appropriate tests. Suffice it to say that our customers have yet to raise algorithmic performance as a problem. Nevertheless, when a particular issue is found, it can obviously be examined. In this case, the use of Math.pow(n, 4) instead of n*n*n*n (and similarly for powers of 2 and 3) appears to have been the primary cause. A PR is underway with the relevant changes and some other minor optimisations.
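
To illustrate the kind of change involved (this is a standalone sketch, not the actual PR): Math.pow goes through a general-purpose routine for arbitrary real exponents, whereas writing out the multiplication lets the JIT emit a handful of multiply instructions.

```java
public class PowVsMultiply {

    public static void main(String[] args) {
        double n = 1.2345;

        // Same mathematical result, very different cost profiles.
        System.out.println(Math.pow(n, 4) + " vs " + (n * n * n * n));

        // Crude timing loops; a proper comparison would use JMH.
        long t0 = System.nanoTime();
        double s1 = 0.0;
        for (int i = 0; i < 10_000_000; i++) {
            s1 += Math.pow(n + i * 1e-9, 4);
        }
        long t1 = System.nanoTime();
        double s2 = 0.0;
        for (int i = 0; i < 10_000_000; i++) {
            double x = n + i * 1e-9;
            s2 += x * x * x * x;
        }
        long t2 = System.nanoTime();

        System.out.println("pow: " + (t1 - t0) + " ns (" + s1 + ")");
        System.out.println("mul: " + (t2 - t1) + " ns (" + s2 + ")");
    }
}
```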

Hello,

Thank you for your reply. My point was not so much to compare the performance of your library with mine (that would be a non-falsifiable proposition on this forum) as to flag an internal inconsistency within your codebase. Strata, being a well-funded open source library, is a useful standard candle, and I hope to contribute when I can to improve it.

As an addendum, if you believe it is relevant, there also seems to be an issue with your implementation of cubic spline interpolation, in that it is two orders of magnitude slower than Apache Commons Math. I have pushed the benchmarks to the GitHub repository.
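
For reference, the Apache Commons Math side of that comparison is essentially the snippet below; the knot values here are made up for illustration, and the actual benchmark inputs are in the repository.

```java
import org.apache.commons.math3.analysis.interpolation.SplineInterpolator;
import org.apache.commons.math3.analysis.polynomials.PolynomialSplineFunction;

public class CommonsSplineSketch {

    public static void main(String[] args) {
        // Arbitrary knots for illustration only.
        double[] xs = {0.0, 1.0, 2.0, 3.0, 4.0, 5.0};
        double[] ys = {0.0, 0.8, 0.9, 0.1, -0.8, -1.0};

        PolynomialSplineFunction spline = new SplineInterpolator().interpolate(xs, ys);

        // Time many evaluations at points inside the knot range.
        long start = System.nanoTime();
        double sink = 0.0;
        for (int i = 0; i < 1_000_000; i++) {
            sink += spline.value(0.5 + (i % 400) * 0.01);
        }
        long elapsed = System.nanoTime() - start;
        System.out.println("sum = " + sink + ", elapsed = " + elapsed + " ns");
    }
}
```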

Regards,
Harshad

Some validation in the cubic spline interpolation has been improved, which should improve performance. Thanks.