This is an archived static version of the original discussion site.

Efficient detection of repeating sites to accelerate phylogenetic likelihood calculations


Here’s a nice paper from @Alexis_RAxML’s lab:

Efficient detection of repeating sites to accelerate phylogenetic likelihood calculations Kassian Kobert, Alexandros Stamatakis, Tomáš Flouri doi:

The phylogenetic likelihood function is the major computational bottleneck in several applications of evolutionary biology such as phylogenetic inference, species delimitation, model selection and divergence time estimation. Given the alignment, a tree and the evolutionary model parameters, the likelihood function computes the conditional likelihood vectors for every node of the tree. Vector entries for which all input data are identical result in redundant likelihood operations which, in turn, yield identical conditional values. Such operations can be omitted to improve run-time and, using appropriate data structures, to reduce memory usage. We present a fast, novel method for identifying and omitting such redundant operations in phylogenetic likelihood calculations, and assess the performance improvement and memory saving attained by our method. Using empirical and simulated data sets, we show that a prototype implementation of our method yields up to 10-fold speedups and uses up to 78% less memory than one of the fastest and most highly tuned implementations of the phylogenetic likelihood function currently available. Our method is generic and can seamlessly be integrated into any phylogenetic likelihood implementation.

It’s quite common for software packages to collapse identical site patterns. The novelty (well, new to me) is to find identical site patterns at every internal node, and to think hard about smart algorithms to do that efficiently.
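To make the idea concrete, here is a minimal sketch (not the paper's actual algorithm or data structures) of bottom-up repeat detection on a rooted binary tree: each tip site's class is its observed state, and each internal node assigns a site's class from the pair of classes of its two children. Sites that share a class at a node would produce identical conditional likelihood vector entries, so only one representative per class needs to be computed. The function name and input format are made up for illustration.

```python
# Hypothetical sketch of per-node site-repeat detection.
# tip_sequences: dict mapping tip name -> string over {A, C, G, T}
# tree: nested tuples, e.g. (("a", "b"), ("c", "d"))

def detect_repeats(tip_sequences, tree):
    n_sites = len(next(iter(tip_sequences.values())))

    def classes_for(node):
        # Tip: a site's class is simply its observed state.
        if isinstance(node, str):
            return list(tip_sequences[node])
        left, right = node
        lc, rc = classes_for(left), classes_for(right)
        seen = {}   # (left-child class, right-child class) -> new class id
        out = []
        for i in range(n_sites):
            key = (lc[i], rc[i])
            if key not in seen:
                # First occurrence of this pattern: the conditional
                # likelihood entry would be computed only for this site.
                seen[key] = len(seen)
            out.append(seen[key])
        return out

    root_classes = classes_for(tree)
    return root_classes, len(set(root_classes))

tips = {"a": "AACGT", "b": "AACGT", "c": "AAGGT", "d": "AAGGT"}
classes, n_unique = detect_repeats(tips, (("a", "b"), ("c", "d")))
# Here 5 alignment sites collapse to 4 unique entries at the root,
# and to only 3 at the internal node above tips c and d.
```

The key property this sketch illustrates is that repeats can be found incrementally: a node's classes are computed from its children's classes alone, without re-scanning the full alignment columns at every node.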

Alexis, is this going to be folded into the LLPLL?


Dear @ematsen, thanks for posting this. Yes, we do plan to integrate this into the LLPLL. Also, we further optimized the site-repeats code at a technical level (stupidly enough, after submitting the paper), and the speedups are even better than reported there.