First, let me update you on our news:
- Our paper “Junction Tree Variational Autoencoder for Molecular Graph Generation” has been accepted to ICML. The paper explores novel approaches to lead optimization. We are currently working on the final version. If you have any comments or questions, please send them our way.
- On Thursday, I recorded the machine learning tutorial that I gave during the meeting. The recording will be available for download next week.
Many of you have asked for our opinion on the Nature paper on retrosynthesis. Connor, Tommi, and I have summarized our thoughts below:
The paper has some clear strengths and weaknesses from our perspective.
On the positive side, casting the search problem as a stochastic tree search is reasonable (albeit not new). The approach requires one to articulate how to expand each node (the current set of atoms), how to evaluate the feasibility of each transition, and how to quickly evaluate a position (a set of molecules) with a roll-out search. The authors use neural networks for all of these steps, which is where the word “deep” comes from. Their focus on evaluating different search strategies is also a plus.
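The search loop described above can be sketched in miniature. Everything below is a toy stand-in (integer “molecules” that split into smaller ones, a fixed stock of “purchasable” building blocks, hypothetical function names), not the paper's implementation; the point is only to show where the three learned components plug in:

```python
import random

# Toy sketch of the stochastic tree search: the expansion network, the
# in-scope filter, and the roll-out policy are each replaced by a stand-in.

STOCK = {1, 2, 3}  # toy "purchasable building blocks"

def expand(molecules):
    """Propose one-step disconnections (stand-in for the expansion network)."""
    children = []
    for i, m in enumerate(molecules):
        if m in STOCK:
            continue
        rest = molecules[:i] + molecules[i + 1:]
        for k in range(1, m // 2 + 1):
            children.append(sorted(rest + [k, m - k]))
    return children

def in_scope(parent, child):
    """Score a proposed transition (stand-in for the in-scope filter network)."""
    return 1.0  # toy: accept every proposal

def rollout_value(molecules, depth=2):
    """Quickly evaluate a position (a set of molecules) by a random roll-out."""
    state = list(molecules)
    for _ in range(depth + 1):
        if all(m in STOCK for m in state):
            return 1.0  # fully reduced to purchasable building blocks
        options = expand(state)
        if not options:
            return 0.0
        state = random.choice(options)
    return 0.0  # roll-out budget exhausted

def best_first_disconnection(target, n_simulations=100):
    """Rank candidate first moves by averaged roll-out value."""
    candidates = expand([target])
    def score(child):
        avg = sum(rollout_value(child) for _ in range(n_simulations)) / n_simulations
        return avg * in_scope([target], child)
    return max(candidates, key=score)

random.seed(0)
print(best_first_disconnection(8))
```

In the real system each of the three stand-ins is a trained network, and the roll-out statistics are accumulated in a search tree rather than recomputed per candidate.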
On the negative side, the choices for the expansion are all cast in terms of old-style templates, and thus the method suffers from all the associated limitations, including:
a) Insufficient coverage: Their expansion network makes use of 300K templates which, despite the large number, cover just 79% of reactions reported in Reaxys from 2015 onward. Capturing the remaining 21% would require substantially more templates to cover the tail of the distribution. Moreover, if one-step reaction coverage is only 79%, the coverage of multi-step synthesis routes is much lower.
b) Asymmetry, lack of context: Reaction templates often propose reactions that are infeasible in the forward direction. The authors therefore adopt an in-scope filter network to disqualify such proposals. It is trained to discriminate real reactions from pseudo-negative reactions generated artificially by applying forward templates. This does not seem like a good proxy for reaction feasibility. We argue that the asymmetry in the approach stems from the loss of context during template extraction. A fuller account of context would greatly increase the number of templates. Indeed, restricting contextual information (atoms outside the reaction center) is the primary way to control the size of the template set (which is already quite large).
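On the coverage point in (a), the compounding effect is easy to quantify under a simplistic assumption (each step of a route covered with probability 0.79, independently; real steps are of course correlated, so this is only an illustration):

```python
# If one-step template coverage is 0.79 and steps were independent,
# an n-step route would be fully covered with probability 0.79**n.
one_step = 0.79
for n in (1, 3, 5, 10):
    print(f"{n}-step route coverage ~ {one_step ** n:.2f}")
# prints roughly 0.79, 0.49, 0.31, 0.09
```

Even a modest five-step route would fall outside the template set roughly two times out of three under this back-of-the-envelope estimate.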
In contrast, we are building a template-free approach that offers both better coverage and faster evaluation. We have already demonstrated this in our forward-synthesis work (retrosynthesis is in the works).