What is LAMMPS?

One of the most widely used techniques in computational materials science is Molecular Dynamics (commonly referred to as MD), in which Newton’s equations of motion are integrated for collections of particles that interact with each other. While the many-body problem is too complex to solve exactly, MD codes solve Newton’s equations numerically, and MD models are often able to reproduce observed phenomena with a remarkable level of accuracy.
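
To make the idea concrete, here is a minimal sketch (not LAMMPS itself, just a toy in reduced Lennard-Jones units with made-up parameters) of the velocity Verlet scheme that most MD codes use to integrate Newton’s equations:

import numpy as np

# Toy velocity Verlet integrator in reduced Lennard-Jones units.
# A pedagogical sketch, not how LAMMPS is actually implemented.

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces for a small set of particles."""
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]          # vector from j to i
            r2 = np.dot(rij, rij)
            sr6 = (sigma**2 / r2) ** 3
            # F = 24*eps*(2*(sigma/r)^12 - (sigma/r)^6) / r, along rij
            f = 24 * eps * (2 * sr6**2 - sr6) / r2 * rij
            forces[i] += f
            forces[j] -= f
    return forces

def velocity_verlet(pos, vel, dt=0.005, steps=1000, mass=1.0):
    """Integrate Newton's equations with the velocity Verlet scheme."""
    f = lj_forces(pos)
    for _ in range(steps):
        pos += vel * dt + 0.5 * (f / mass) * dt**2   # position update
        f_new = lj_forces(pos)
        vel += 0.5 * (f + f_new) / mass * dt         # velocity update
        f = f_new
    return pos, vel

# Two particles starting slightly outside the potential minimum
pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
vel = np.zeros_like(pos)
pos, vel = velocity_verlet(pos, vel)

Real MD engines layer neighbor lists, periodic boundary conditions, and thermostats on top of this core loop, but the integration step is the same in spirit.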

LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) is a very popular open-source molecular dynamics code written in C++. Because LAMMPS is particularly well suited to parallelization over many processors, it is a powerful tool for simulating large numbers of atoms or doing many simulations in a high-throughput manner. There is even a Python interface to LAMMPS, as well as a number of libraries that interface with it, including pymatgen.
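
As a taste of the Python interface, a script along these lines drives LAMMPS directly from Python (a sketch, assuming LAMMPS was built as a shared library with its Python module installed; the commands are adapted from the standard Lennard-Jones melt example):

# Minimal sketch of driving LAMMPS from Python.
from lammps import lammps

lmp = lammps()
# Each string is an ordinary LAMMPS input-script command.
lmp.command("units lj")
lmp.command("atom_style atomic")
lmp.command("lattice fcc 0.8442")
lmp.command("region box block 0 5 0 5 0 5")
lmp.command("create_box 1 box")
lmp.command("create_atoms 1 box")
lmp.command("mass 1 1.0")
lmp.command("pair_style lj/cut 2.5")
lmp.command("pair_coeff 1 1 1.0 1.0 2.5")
lmp.command("velocity all create 1.44 87287")
lmp.command("fix 1 all nve")
lmp.command("run 100")
print("Final potential energy:", lmp.get_thermo("pe"))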

The documentation for LAMMPS is thousands of pages long, but I am putting together a series of posts that detail how to do various simulations in LAMMPS. This is both to help me teach myself how to use LAMMPS and to help other LAMMPS newcomers learn as well. Check back soon for the first post in the series!

A Brief Overview of Density Functional Theory on High Performance Computing Systems

(Note, this blog post also serves as my submission for CS 267 Homework 0)

Bio of Author:

My name is John Dagdelen and I am a graduate student in the Materials Science and Engineering department at UC Berkeley. My advisor, Kristin Persson, is a Staff Scientist at LBNL and a Professor in the MSE department at UC Berkeley. Our group is one of the main contributors to the Materials Project, which aims to model the properties of every material and make the data freely available to the materials research community. I’m including my picture in this post so that potential project partners can find/approach me in class.

2018 portrait of the author

My research interests cover:

  • Computing the properties of materials using Density Functional Theory (DFT).
  • Developing computational screening methods for materials with useful properties.
  • Using machine learning to mine large collections of materials data, including that stored in databases like the Materials Project and the text of scientific papers.
  • Expanding the predictive power of computational methods in materials science through the development of new modeling techniques and more efficient algorithms.

I use NERSC’s high-performance computing systems in my work and I enrolled in CS 267 to learn how to fully utilize these resources to solve challenging problems in computational materials science.

Case Study: Modern Density Functional Theory Calculations on High Performance Computing Systems

Isosurfaces of electron density modeled using density functional theory. Image credit: Ovito.org

Density functional theory (DFT) is a technique that allows us to calculate the properties of materials computationally rather than physically making and testing them in a laboratory. We do this by numerically solving a set of equations called the Kohn-Sham equations to get the density of electrons around all the atoms in a material. From this density we can deduce how materials are held together, their electronic structure, and a host of other properties that are important in materials science.
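
Concretely, the Kohn-Sham equations recast the interacting many-electron problem as a set of effective single-particle equations (written here in atomic units):

\left[ -\tfrac{1}{2}\nabla^2 + v_{\mathrm{eff}}[n](\mathbf{r}) \right] \psi_i(\mathbf{r}) = \varepsilon_i \, \psi_i(\mathbf{r}), \qquad n(\mathbf{r}) = \sum_i^{\mathrm{occ}} |\psi_i(\mathbf{r})|^2

Because the effective potential v_{\mathrm{eff}} itself depends on the density n, the equations must be solved self-consistently, iterating until the density stops changing.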

Modern DFT represents a marriage between quantum mechanical theory and high-performance computation which has led to a revolution in materials science over the last 20 years. These methods, which can accurately predict materials properties, sometimes within a few percent, have led to the discovery of a host of materials with interesting properties including battery materials for ultrafast charging, new thermoelectric materials, and (a little self-promotion here) materials with exotic mechanical properties.

Thanks to advancements in DFT algorithms and software tools that have simplified the process of setting up and running DFT calculations, investigations that used to take an entire year can be done in under an hour. This is allowing these techniques to be used across a wide range of research initiatives and at enormous scales by groups such as the Materials Project, OQMD, and AFLOWLIB. Today, it’s estimated that more than one hundred million CPU core-hours are used for DFT calculations every day (source: Matthias Scheffler). Many of the supercomputers on the Top 500 list are used to run DFT calculations, including NERSC’s Cori system (#12 on the list), which spends somewhere between 1/8 and 1/4 of its time running DFT calculations.

There are two main challenges that make fitting DFT calculations into HPC ecosystems difficult:

  1. DFT calculations are generally “small” compared to “large” calculations like climate modeling, which are effectively favored over small jobs under the queue policies of supercomputing centers.
  2. While some parts of a DFT calculation can be parallelized, other parts can only be performed in sequence, and these require lots of reading and writing of memory.

The first challenge comes about as a result of the “run limits” imposed by computing centers. It’s not fair to let any one user dominate a system’s job queue so there are often limits placed on the number of jobs that any given user can have running/queued up at a time. This is fine if you have a small number of huge calculations to do requiring the use of hundreds of nodes at a time, but what if you have thousands and thousands of smaller calculations? DFT calculations often fall into this second basket: we can theoretically calculate the properties of thousands of materials at a time but the run limits can keep us from doing so. When we try to submit a few hundred DFT calculations that would use the same amount of compute time as a larger climate model, the system de-prioritizes our requests because it looks like we’re trying to monopolize the system.

Fortunately, we have been able to overcome this problem through a clever workaround. By packaging thousands of DFT calculations into single “jobs”, we’re able to make our calculations more suitable for the infrastructure that these HPC systems are built around. This allows us to request the computing resources we need for our work without tripping the necessary filters that keep an HPC system working properly for all of its users.
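
The packing itself can be as simple as a driver script that fans small tasks out across one big allocation. Here is a minimal sketch of the idea (the executable name run_my_dft_code and the input directories are hypothetical; in practice we use workflow tools built for this):

# Sketch: pack many small DFT runs into one batch "job".
from concurrent.futures import ProcessPoolExecutor
import subprocess

def run_dft(input_dir):
    """Launch one small DFT calculation; the command line is illustrative."""
    return subprocess.run(["run_my_dft_code", input_dir]).returncode

# Inside a single large allocation, fan out over thousands of inputs
inputs = [f"calc_{i:05d}" for i in range(5000)]
with ProcessPoolExecutor(max_workers=64) as pool:
    codes = list(pool.map(run_dft, inputs))

print(f"{codes.count(0)} of {len(codes)} calculations finished cleanly")

Each worker grabs the next small calculation as soon as it finishes its current one, so the allocation stays busy even though no single calculation is large.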

“You must be this tall” cartoon. Image credit: Anubhav Jain, hackingmaterials.com

The second challenge I mentioned is more difficult to overcome. There is a lot of complexity involved, but DFT essentially boils down to diagonalizing a Hamiltonian whose dimension grows with the number of electrons. Since dense diagonalization is O(n^3), the cost of DFT scales roughly cubically with the number of atoms. This process includes a number of steps that act as bottlenecks. Some can be GPU accelerated, like the many FFT steps required, but others can only be performed on CPUs, which adds I/O constraints to the problem. There have been some advancements in speeding up VASP (a very common DFT program) through various means, including using a hybrid MPI/OpenMP schema, but there is still lots of work to be done and this is an active area of research.
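
You can see the cubic wall for yourself with a few lines of NumPy (the matrix sizes here are arbitrary and the timings machine-dependent):

import time
import numpy as np

# Empirically illustrate the O(n^3) cost of dense eigendecomposition,
# the step that dominates conventional DFT as system size grows.
for n in [500, 1000, 2000]:
    h = np.random.rand(n, n)
    h = (h + h.T) / 2            # make it symmetric, like a Hamiltonian
    t0 = time.perf_counter()
    np.linalg.eigh(h)            # full diagonalization
    print(f"n={n}: {time.perf_counter() - t0:.2f} s")
# Doubling n should multiply the runtime by roughly 8 (i.e., 2^3).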

The parallel/thread scaling of the hybrid MPI/OpenMP VASP (version 4/13/2017) on the Cori KNL and Haswell nodes. Source: Performance of Hybrid MPI/OpenMP VASP on Cray XC40 Based on Intel Knights Landing Many Integrated Core Architecture. Zhengji Zhao, Martijn Marsman, Florian Wende, and Jeongnim Kim. Cray User Group Proceedings, 2017

Moreover, you can’t parallelize these calculations across an infinite number of processors. At a certain point your efficiency will drop drastically because processors will sit idle while they wait on results from other processors. Using roughly the same number of cores as the number of atoms is a good rule of thumb, but there are lots of situations where this heuristic will not give you the best possible results. You can read a fantastic overview of how to get the most efficiency out of VASP running on multiple processors by Peter Larsson here.
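
Amdahl’s law makes this intuition quantitative: if a fraction p of the work parallelizes perfectly, the best possible speedup on N cores is 1 / ((1 - p) + p/N). A quick sketch shows how even a modest 5% serial fraction caps the achievable speedup at 20x:

# Amdahl's law: ideal speedup vs. core count for parallel fraction p.
def amdahl_speedup(p, n_cores):
    return 1.0 / ((1.0 - p) + p / n_cores)

for n in [1, 8, 64, 512, 4096]:
    print(f"{n:5d} cores: {amdahl_speedup(0.95, n):6.2f}x speedup")
# The speedup approaches 20x asymptotically: the serial 5% dominates.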

The current state of the art allows us to model systems of a few hundred atoms at most, which is unfortunate because most of the interesting problems out there involve the interaction of thousands or millions of atoms at a time. If we could improve how DFT scales, we may be able to extend the capabilities of DFT into the territory where juicy problems in chemistry, biology, and materials science reside.

I think this is a worthy goal for a CS 267 project and I am happy to talk to anyone who is interested in learning more about doing DFT on parallel systems.

2017 MP Workshop

In just one week, scientists from dozens of companies and academic institutions will meet in Berkeley for the 2017 Materials Project Workshop. Members of the Materials Project team will be instructing a series of hands-on lessons on leveraging MP tools and data to accelerate the pace of innovation and discovery in materials science.

The two-day workshop will cover:

  • The Materials Project web interface (materialsproject.org).
  • The Python Materials Genomics (pymatgen) library for materials analysis.
  • Accessing the MP database through the Materials API (MAPI); see the sketch after this list.
  • Case studies of using pymatgen and the MP database to do various analyses.
  • Atomate, a new library for creating easily reproducible computational materials science workflows.
  • MPContribs, a portal that enables users to contribute data from external sources to the Materials Project’s library of compounds.
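
For those curious what MAPI access looks like in practice, here is a minimal sketch using pymatgen’s MPRester client (YOUR_API_KEY is a placeholder for a free key from materialsproject.org, and import paths have moved around between pymatgen versions):

# Minimal example of querying the Materials Project via pymatgen.
from pymatgen.ext.matproj import MPRester

with MPRester("YOUR_API_KEY") as mpr:  # placeholder API key
    structure = mpr.get_structure_by_material_id("mp-22865")  # CsCl
    print(structure.composition.reduced_formula)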

If you can’t attend the workshop but are still interested in the materials, all of the modules will be available on GitHub following the workshop.

Simulating powder x-ray diffraction in Python with pymatgen and xrayutilities

Introduction

Student: “Professor! I made a material!”

Professor: “That’s great! What did you make?”

Student: “I don’t know!”

Professor: “…”

X-ray diffraction (XRD) is probably the most widely-used characterization technique in materials science.  It can be used to probe the atomic structure of unknown compounds and to confirm that you have synthesized a particular phase of interest. Powder XRD, like the name suggests, can be used to characterize the crystal structures of powdered and polycrystalline samples, making it an extremely popular variant of the technique.

It essentially works by shooting x-rays with a known wavelength at a sample and watching for x-rays that come off of it at various angles. These angles correspond to distances between planes of atoms in the crystal structure(s) through a physical principle called Bragg’s Law, and the intensities of the peaks at each angle give us information about the actual atoms that the sample is made of.
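
In equation form, Bragg’s Law says constructive interference occurs when the extra path length between reflections off adjacent atomic planes is a whole number of wavelengths:

n\lambda = 2d\sin\theta

where n is a positive integer, \lambda is the x-ray wavelength, d is the spacing between the planes, and \theta is the angle between the incoming beam and the planes.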

Figure 1: Illustration of Bragg’s Law. Image source: Hydrargyrum (own work), CC BY-SA 3.0.

Now, actually determining a structure from an XRD pattern is incredibly difficult in practice, and only the most expert crystallographers have any confidence in their ability to do so. However, solving the opposite problem of turning a known structure into a simulated XRD pattern isn’t too hard, especially if you have a computer do it for you!

Currently, the Materials Project database includes tens of thousands of simulated XRD patterns. Unfortunately, these calculated patterns (stick plots) don’t do a great job of replicating what we see in the real world (peak broadening, shifting, etc.). As you can see below, these calculated patterns aren’t a dead ringer for the real thing.

Figure 2: Real XRD pattern vs. calculated pattern for β-CsCl (mp-22865) from materialsproject.org.
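
For reference, an idealized stick pattern like the calculated one above can be produced in a few lines with pymatgen’s XRDCalculator (a sketch; it assumes a pymatgen Structure object like the one we build later in this post, and uses Cu K-alpha radiation by default):

# Sketch: generate an idealized "stick" XRD pattern with pymatgen.
# Assumes `structure` is a pymatgen Structure like the CsCl one built below.
from pymatgen.analysis.diffraction.xrd import XRDCalculator

calc = XRDCalculator()  # Cu K-alpha wavelength by default
pattern = calc.get_pattern(structure, two_theta_range=(0, 80))
for angle, intensity in zip(pattern.x, pattern.y):
    print(f"2theta = {angle:6.2f} deg, I = {intensity:6.2f}")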

One of the projects that I’m working on involves doing more advanced simulations of XRD patterns that actually simulates the physics of a diffractometer. This is called the Fundamental Parameters Approach (FPA), an implementation of which is described in a great paper out of NIST. Using FPA to simulate diffraction patterns is useful because it produces extremely accurate patterns that replicate many of the features that we see in real diffraction patterns. The NIST team was also nice enough to release their implementation out into the wild for free, which means starving young researchers like me can play around with it!

While I’m doing shout-outs, I should also thank Eugen Wintersberger and Dominik Kriegner for writing a great open-source library for Python called xrayutilities, which includes both an FPA code and a ton of other useful tools for working with XRD data. Their library is by far the easiest way to get your hands on an FPA code.

Alright, now to start simulating some XRD patterns!

Let’s use the same high-temperature phase β-CsCl (Fm\overline{3}m) that I showed above as our example. We’ll first make the structure using pymatgen and then simulate its diffraction pattern using xrayutilities.

In [1]:

# Set up the imports that we will need
from pymatgen import Lattice, Structure  # newer pymatgen: from pymatgen.core import Lattice, Structure
import numpy
import xrayutilities as xru
from xrayutilities.materials.cif import CIFFile
from xrayutilities.materials.material import Crystal
from tempfile import NamedTemporaryFile
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
%matplotlib inline
matplotlib.rcParams.update({'font.size': 18})
plt.rcParams["figure.figsize"] = [15, 12]
In [2]:
# Create the high-temperature beta-CsCl structure (rock salt, Fm-3m)
a = 6.923  # lattice parameter in Angstrom
latt = Lattice.cubic(a)
structure = Structure(latt,
                      ["Cs", "Cs", "Cs", "Cs", "Cl", "Cl", "Cl", "Cl"],
                      [[0, 0, 0], [0.5, 0.5, 0], [0, 0.5, 0.5], [0.5, 0, 0.5],
                       [0.5, 0.5, 0.5], [0, 0, 0.5], [0, 0.5, 0], [0.5, 0, 0]])

# Round-trip the structure through a temporary CIF file so that
# xrayutilities can construct its own Crystal object from it
temp_cif = NamedTemporaryFile(delete=False)
structure.to(fmt="cif", filename=temp_cif.name)
xu_cif = CIFFile(temp_cif.name)
xu_crystal = Crystal(name="b-CsCl", lat=xu_cif.SGLattice())
temp_cif.close()

# Simulate the powder pattern over a 2-theta range of 0-80 degrees
two_theta = numpy.arange(0, 80, 0.01)
powder = xru.simpack.smaterials.Powder(xu_crystal, 1)
pm = xru.simpack.PowderModel(powder, I0=100)
intensities = pm.simulate(two_theta)

# Plot the result
plt.plot(two_theta, intensities)
plt.xlim(0, 80)
ax = plt.gca()
ax.xaxis.set_major_locator(ticker.MultipleLocator(5))
ax.yaxis.set_major_formatter(ticker.FormatStrFormatter('%.4f'))
plt.title("XRD Pattern for " + xu_crystal.name)
plt.xlabel("2 theta (degrees)")
plt.ylabel("Intensity")
plt.show()
Our simulated diffraction pattern is shown below:
Figure 3: Simulated XRD pattern of β-CsCl (mp-22865) using xrayutilities.
When we compare it with the experimental XRD pattern for β-CsCl below, we can see that our simulation pretty much nailed it!

Figure 4: Real XRD pattern of β-CsCl (mp-22865).