As a great example of the inter-relatedness of science and math, I would like to lay out a recent chain of connections I made the other day. One cool point is that the last link in the chain is a memory of an experience I had when I was in graduate school (over 30 years ago). It started when I was researching a new project I had discussed with some colleagues. Each item in the following list indicates a different, but related, concept.

- How to compare the cost effectiveness of the treatment of lung cancer with x-rays or protons;
- Markov models are one approach
- In reading a little more about Markov models, I found an intriguing reference to the connection between Markov models and Monte Carlo methods
- A quick Google search on the above-mentioned concepts kept pointing me to Markov Chain Monte Carlo (MCMC)
- Having seen the topic of MCMC in books and articles, but never having had the time or interest to follow up and understand it, I looked for articles on the subject
- I found a great article that caught my attention in the first page or two with some anecdotal history of Monte Carlo methods in physics:
*The Evolution of Markov Chain Monte Carlo Methods, Matthew Richey, The American Mathematical Monthly, Vol. 117, No. 5 (May 2010), pp. 383-413* - The article explains the Metropolis algorithm, which was first used in nuclear physics but quickly applied to topics like the Ising model. The Metropolis algorithm was a way of searching a very large state space efficiently by sampling the most likely states most often; the likelihood was weighted by the energy of an ensemble of particles in thermodynamics.
- From there, the Metropolis algorithm was found to be useful in problems of combinatorial optimization, where, again, many states had to be searched to find the optimal solution.
- One of the consequences was the development of the simulated annealing algorithm. This is one personal connection, since I remember very well the day I sat in the library at PSI (Switzerland) and read Steve Webb’s paper on simulated annealing for IMRT. This was quite early in the development of the algorithm, and I give Steve a lot of credit for understanding and applying it so well and so quickly.
- One of the classic combinatorial optimization problems is the traveling salesman problem.
- At the time this algorithm was being applied to optimization, solid-state circuits were beginning their incredible rise in number of components and density. A big problem was how best to place new components. There is a cost associated with the distance between related components, so the traveling salesman problem was relevant.
- A physicist who had studied spin glasses, Scott Kirkpatrick, went to work for IBM and started working on the circuit board problem. He recognized that the objectives that needed to be met were very similar to the equation for the energy of spin glasses, whose configurations were being found using the Metropolis algorithm.
- Final step: I have always remembered, and often cited as an example of cross-field intellectual fertilization, a talk I heard as a graduate student at the University of Wisconsin in which a physicist described the exact scenario I just recounted above.
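To make the connections above concrete, here is a minimal sketch (not taken from any of the papers mentioned) of simulated annealing on a tiny traveling salesman instance, using the Metropolis acceptance rule: moves that shorten the tour are always accepted, while moves that lengthen it are accepted with probability exp(-ΔE/T) as the temperature T is gradually lowered. The function names, step counts, and temperature schedule are all illustrative choices of mine, not anything from Kirkpatrick's or Webb's work.

```python
import math
import random

def tour_length(tour, dist):
    """Total length of a closed tour over a distance matrix."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def simulated_annealing_tsp(dist, n_steps=20000, t_start=10.0, t_end=0.01, seed=0):
    """Minimize tour length with Metropolis-style acceptance and a
    geometrically decreasing temperature (illustrative schedule)."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    cost = tour_length(tour, dist)
    best, best_cost = tour[:], cost
    cooling = (t_end / t_start) ** (1.0 / n_steps)
    t = t_start
    for _ in range(n_steps):
        # Propose a 2-opt move: reverse a random segment of the tour.
        i, j = sorted(rng.sample(range(n), 2))
        new_tour = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        new_cost = tour_length(new_tour, dist)
        # Metropolis criterion: always accept improvements; accept
        # uphill moves with probability exp(-delta / T).
        delta = new_cost - cost
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            tour, cost = new_tour, new_cost
            if cost < best_cost:
                best, best_cost = tour[:], cost
        t *= cooling
    return best, best_cost
```

On small instances, such as points arranged on a circle, this sketch typically recovers the obvious shortest tour; for a real circuit-placement or treatment-planning problem the cost function and move set would of course be far more elaborate.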

So: loop closed. I am glad (a) to find out who it was, (b) that the point I always took away from it was correct, and (c) to learn more about the actual problem.

P.S. The Bayesian connection of this story will be told in a forthcoming post.