jeffhammond.github.io

About Me

Jeff Hammond, Computational Scientist, jeff_hammond@acm.org

My detailed CV is available as a PDF.

Education and Research Positions

NVIDIA Tesla Business Unit (March 2021 - present). Title: Principal Programming Models Architect (March 2021 - present)

Intel Data Center Group (September 2016 - February 2021). Title: Principal Engineer (April 2020 - February 2021)

Intel Labs Parallel Computing Laboratory (May 2014 - August 2016). Title: Research Scientist. Supervisors: Drs. Tim Mattson and Pradeep Dubey

Argonne National Laboratory Leadership Computing Facility (June 2011 - May 2014). Title: Assistant Computational Scientist in the Performance Engineering group. Supervisors: Drs. Kalyan Kumaran and Ray Bair

The University of Chicago Computation Institute (February 2011 - May 2014). Title: Fellow (since September 2011)

Argonne National Laboratory Leadership Computing Facility (June 2009 - May 2011). Title: Argonne Scholar (Director’s Postdoctoral Fellowship). Supervisor: Dr. Ray Bair

Pacific Northwest National Laboratory EMSL MSCF (June 2006 - May 2009). Title: Alternate Sponsored Fellow (DOE-CSGF practicum). Supervisors: Drs. Karol Kowalski and Wibe A. de Jong

University of Chicago Dept. of Chemistry (September 2003 to May 2009). PhD in Chemistry, May 2009; MS in Chemistry, August 2004. Supervisors: Professors Karl F. Freed and L. Ridgway Scott

University of Washington Dept. of Chemistry (January 2001 to August 2003). BS in Chemistry with Distinction; BA in Mathematics; Minor in Applied Mathematics. Supervisor: Professor Weston T. Borden

Publications

Papers associated with a specific software package are denoted by the Software Name in front of the citation.

Note that my contribution to these packages varies from “nearly everything” to “supervised student developer” to “literally nothing”. In most cases, the relevant version control system will give you all the details, if you’re interested.

Matrix and Tensor Computations

Elemental: Sayan Ghosh, Jeff Hammond, Antonio J. Peña, Pavan Balaji, Assefaw Gebremedhin and Barbara Chapman. International Conference on Parallel Processing (ICPP). Philadelphia, PA, August 16-19, 2016. One-Sided Interface for Matrix Operations using MPI-3 RMA: A Case Study with Elemental (Preprint)

TTC: P. Springer, J.R. Hammond, P. Bientinesi. ACM Transactions on Mathematical Software (TOMS) 44, 2 (2017). TTC: A high-performance Compiler for Tensor Transpositions (Preprint (arXiv:1603.02297 (2016))) (Source Code)

CTF: Edgar Solomonik, Devin Matthews, Jeff Hammond, John Stanton and James Demmel. Journal of Parallel and Distributed Computing (2014). A massively parallel tensor contraction framework for coupled-cluster computations (Preprint) (Source Code)

BLIS: T. M. Smith, R. van de Geijn, M. Smelyanskiy, J. R. Hammond, and F. G. Van Zee. Proceedings of the 28th IEEE International Parallel and Distributed Processing Symposium (IPDPS). Phoenix, Arizona, May 2014. Anatomy of High-Performance Many-Threaded Matrix Multiplication (Preprint). Also known as FLAME Working Note #71. The University of Texas at Austin, Department of Computer Science. Technical Report TR-13-20. 2013. Opportunities for Parallelism in Matrix Multiplication (Home Page) (GitHub Source)

P. Ghosh, J. R. Hammond, S. Ghosh, and B. Chapman, 4th International Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems (PMBS13). Workshop at SC13, Denver, Colorado, USA, November 2013. Performance analysis of the NWChem TCE for different communication patterns (Preprint)

TCE-IE: D. Ozog, J. R. Hammond, J. Dinan, P. Balaji, S. Shende and A. Malony. International Conference on Parallel Processing (ICPP). Ecole Normale Superieure de Lyon, Lyon, France, October 1-4, 2013. Inspector-Executor Load Balancing Algorithms for Block-Sparse Tensor Contractions (Preprint). (Related poster from ICS.)

CTF: Edgar Solomonik, Devin Matthews, Jeff Hammond and James Demmel. Proc. 27th Intl. Parallel and Distributed Processing Symp (IPDPS). Boston, Massachusetts, May 2013. Cyclops Tensor Framework: reducing communication and eliminating load imbalance in massively parallel contractions (Preprint) (Source Code)

CTF: Edgar Solomonik, Jeff Hammond and James Demmel. Electrical Engineering and Computer Sciences, University of California at Berkeley, Technical Report No. UCB/EECS-2012-29, March 9, 2012. A preliminary analysis of Cyclops Tensor Framework.

Elemental: J. Poulson, B. Marker, J. R. Hammond, N. A. Romero, and R. van de Geijn. ACM Trans. Math. Software, 39 (2012). Elemental: A New Framework for Distributed Memory Dense Matrix Computations. (Preprint) (Source Code)

MPI, Global Arrays, ARMCI, OpenSHMEM, PGAS

shmem4py: Marcin Rogowski, Jeff R. Hammond, David E. Keyes, Lisandro Dalcin. Proceedings of the SC ‘23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis. shmem4py: High-Performance One-Sided Communication for Python Applications

shmem4py: Marcin Rogowski, Lisandro Dalcin, Jeff R. Hammond, David E. Keyes. Journal of Open Source Software. shmem4py: OpenSHMEM for Python

Mukautuva: Jeff Hammond, Lisandro Dalcin, Erik Schnetter, Marc Pérache, Jean-Baptiste Besnard, Jed Brown, Gonzalo Brito Gadeschi, Simon Byrne, Joseph Schuchart, Hui Zhou. EuroMPI ‘23: Proceedings of the 30th European MPI Users’ Group Meeting. MPI Application Binary Interface Standardization (Preprint)

Casper: Min Si, Antonio J. Peña, Jeff R. Hammond, Pavan Balaji, Masamichi Takagi and Yutaka Ishikawa. IEEE Transactions on Parallel and Distributed Systems. Dynamic Adaptable Asynchronous Progress Model for MPI RMA Multiphase Applications (Preprint)

A. Amer, H. Lu, Y. Wei, J. Hammond, S. Matsuoka, and P. Balaji, ACM Transactions on Parallel Computing. Lock Contention Management in Multithreaded MPI (Locking Aspects in Multithreaded MPI Implementations)

OpenCoarrays: Alessandro Fanfarillo and Jeff R. Hammond. EuroMPI. Edinburgh, Scotland, Sept. 2016. CAF Events Implementation Using MPI-3 Capabilities (Reprint)

UPCFock: D. Ozog, A. Kamil, Y. Zheng, P. Hargrove, J. R. Hammond, A. Malony, W. de Jong, and K. Yelick. Proc. 30th Intl. Parallel and Distributed Processing Symp (IPDPS). Chicago, IL, May 2016. A Hartree-Fock Application using UPC++ and the New DArray Library

Karthikeyan Vaidyanathan, Dhiraj D. Kalamkar, Kiran Pamnany, Jeff R. Hammond, Pavan Balaji, Dipankar Das, Jongsoo Park, Balint Joo. The International Conference for High Performance Computing, Networking, Storage and Analysis (SC15). Austin, TX, November 15-20, 2015. Improving Concurrency and Asynchrony in Multithreaded MPI Applications Using Software Offloading

PRK: Jeff R. Hammond and Timothy G. Mattson. Proceedings of the International Workshop on OpenCL (IWOCL’19). Evaluating data parallelism in C++ using the Parallel Research Kernels

PRK: R. F. Van der Wijngaart, A. Kayi, J. R. Hammond, G. Jost, T. St. John, S. Sridharan, T. G. Mattson, J. Abercrombie, and J. Nelson. ISC High Performance. June 20-22, 2016. Frankfurt, Germany. Comparing runtime systems with exascale ambitions using the Parallel Research Kernels

PRK: Rob Van der Wijngaart, Srinivas Sridharan, Abdullah Kayi, Gabriele Jost, Jeff Hammond, Tim Mattson, and Jacob Nelson. The 9th International Conference on Partitioned Global Address Space Programming Models (PGAS). September 17-18, 2015. Washington, D.C. Using the Parallel Research Kernels to study PGAS models (Slides) (Source Code)

Casper: Min Si, Antonio J. Peña, Jeff R. Hammond, Pavan Balaji, and Yutaka Ishikawa. IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid). May 4–7, 2015, Shenzhen, China. Scalable Computing Challenge Finalist. Scaling NWChem with Efficient and Portable Asynchronous Communication in MPI RMA (Preprint)

Casper: Min Si, Antonio J. Peña, Jeff Hammond, Pavan Balaji, Masamichi Takagi, Yutaka Ishikawa. Proc. 29th Intl. Parallel and Distributed Processing Symp (IPDPS). Hyderabad, India, May 2015. Casper: An Asynchronous Progress Model for MPI RMA on Many-Core Architectures. (Preprint) (Source Code)

Jeff Hammond. OpenSHMEM User Group (OUG2014), October 7, 2014, Eugene, OR. Towards a matrix-oriented strided interface in OpenSHMEM. (Source Code)

BigMPI: Jeff R. Hammond, Andreas Schaefer, and Rob Latham. Workshop on Exascale MPI at Supercomputing Conference 2014 (ExaMPI14), New Orleans, LA, November 17, 2014. To INT_MAX… and beyond! Exploring large-count support in MPI (Preprint 1) (Preprint 2) (Source Code)
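
The core problem explored in this paper is that MPI communication routines take an int count, so a single operation cannot directly describe more than INT_MAX elements. Below is a minimal sketch of the classic derived-datatype workaround, written against plain MPI in C++; it is illustrative only (the helper name send_large is made up) and is not BigMPI's actual interface.

```cpp
// Illustrative only: send more than INT_MAX doubles with an int count by
// packaging a fixed-size chunk inside a derived datatype. Not BigMPI's API.
#include <mpi.h>
#include <cstddef>

// hypothetical helper for illustration
void send_large(const double* buf, std::size_t total, int dest, int tag, MPI_Comm comm)
{
    const std::size_t chunk   = std::size_t(1) << 20;   // elements per datatype instance
    const std::size_t nchunks = total / chunk;
    const std::size_t rest    = total % chunk;

    MPI_Datatype chunktype;
    MPI_Type_contiguous(static_cast<int>(chunk), MPI_DOUBLE, &chunktype);
    MPI_Type_commit(&chunktype);

    // the count of chunktype instances now fits comfortably in an int
    MPI_Send(buf, static_cast<int>(nchunks), chunktype, dest, tag, comm);
    if (rest > 0)
        MPI_Send(buf + nchunks * chunk, static_cast<int>(rest), MPI_DOUBLE, dest, tag, comm);

    MPI_Type_free(&chunktype);
}
```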

David Ozog, Allen Malony, Jeff Hammond and Pavan Balaji. 20th IEEE International Conference on Parallel and Distributed Systems (ICPADS). Hsinchu, Taiwan, December 16 – 19, 2014. WorkQ: A Many-Core Producer/Consumer Execution Model Applied to PGAS Computations (Preprint 1) (Preprint 2).

OSHMPI: Min Si, Huansong Fu, Jeff Hammond and Pavan Balaji, accepted to OpenSHMEM and Related Technologies Workshop 2021. OpenSHMEM over MPI as a Performance Contender: Thorough Analysis and Optimizations (Slides) (Preprint) (Source Code)

OSHMPI: J. R. Hammond, S. Ghosh, and B. M. Chapman, accepted to First OpenSHMEM Workshop: Experiences, Implementations and Tools. Implementing OpenSHMEM using MPI-3 one-sided communication (Online) (Preprint) (Source Code)
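
The idea summarized in this paper's title is to build OpenSHMEM's one-sided operations directly on MPI-3 RMA. A rough sketch of that mapping is below: window memory plays the role of the symmetric heap, a put becomes MPI_Put, and shmem_quiet corresponds to a flush on the window. This is illustrative C++/MPI only, not OSHMPI's actual implementation.

```cpp
// Illustrative only (not OSHMPI's code): an OpenSHMEM-style put over MPI-3 RMA.
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int me, np;
    MPI_Comm_rank(MPI_COMM_WORLD, &me);
    MPI_Comm_size(MPI_COMM_WORLD, &np);

    double* heap;                                  // "symmetric heap" of one double
    MPI_Win win;
    MPI_Win_allocate(sizeof(double), sizeof(double), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &heap, &win);

    MPI_Win_lock_all(0, win);                      // passive-target epoch on all ranks
    const double value  = 100.0 + me;
    const int    target = (me + 1) % np;
    MPI_Put(&value, 1, MPI_DOUBLE, target, 0, 1, MPI_DOUBLE, win);  // ~ shmem_double_p
    MPI_Win_flush(target, win);                    // ~ shmem_quiet (remote completion)
    MPI_Barrier(MPI_COMM_WORLD);                   // targets now know their data arrived
    MPI_Win_sync(win);                             // sync public/private window copies
    std::printf("rank %d received %.1f\n", me, heap[0]);
    MPI_Win_unlock_all(win);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```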

V. Morozov, J. Meng, V. Vishwanath, J. R. Hammond, K. Kumaran and M. Papka. Parallel Processing Workshops (ICPPW), 41st International Conference, September 2012, Pittsburgh, Pennsylvania. ALCF MPI Benchmarks: Understanding Machine-Specific Communication Behavior (IEEE link) (Slides)

OSPRI: J. R. Hammond, J. Dinan, P. Balaji, I. Kabadshow, S. Potluri, and V. Tipparaju, The 6th Conference on Partitioned Global Address Space Programming Models (PGAS). Santa Barbara, CA, October 2012. OSPRI: An Optimized One-Sided Communication Runtime for Leadership-Class Machines (Preprint).

ARMCI-MPI: J. Dinan, P. Balaji, J. R. Hammond, S. Krishnamoorthy, and V. Tipparaju, Proc. 26th Intl. Parallel and Distributed Processing Symp (IPDPS). Shanghai, China, May 2012. Supporting the Global Arrays PGAS Model Using MPI One-Sided Communication (Preprint) (Source Code)

J. Dinan, S. Krishnamoorthy, P. Balaji, J. R. Hammond, M. Krishnan, V. Tipparaju and A. Vishnu, in Recent Advances in the Message Passing Interface (Lecture Notes in Computer Science, Volume 6960/2011, pp. 282-291), edited by Y. Cotronis, A. Danalis, D. S. Nikolopoulos and J. Dongarra. Noncollective Communicator Creation in MPI (Preprint).

TAU-ARMCI: J. R. Hammond, S. Krishnamoorthy, S. Shende, N. A. Romero and A. D. Malony, Concurrency and Computation: Practice and Experience (DOI: 10.1002/cpe.1881). Performance Characterization of Global Address Space Applications: A Case Study with NWChem (Preprint)

Intra-node programming models (ISO language parallelism, OpenMP, etc.)

StdPar: Jeff R. Hammond, Tom Deakin, Jim H. Cownie, Simon N. McIntosh-Smith. 2022 IEEE/ACM International Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems (PMBS). Benchmarking Fortran DO CONCURRENT on CPUs and GPUs Using BabelStream

StdPar: M. Graham Lopez, Jeff R. Hammond, Jack C. Wells, Tom Gibbs and Timothy B. Costa. SMC 2021: Driving Scientific and Engineering Discoveries Through the Integration of Experiment, Big Data, and Modeling and Simulation. Enabling ISO Standard Languages for Complex HPC Workflows
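
For readers unfamiliar with the term, "ISO standard language parallelism" here means expressing parallelism with constructs of the base language itself (Fortran DO CONCURRENT, C++ parallel algorithms) rather than directives or vendor extensions. A minimal illustration in ISO C++ follows; it is a sketch in the spirit of a BabelStream triad, not code from either paper, and implementations may map the execution policy to CPU threads or to a GPU (e.g., nvc++ with -stdpar).

```cpp
// Minimal illustration of ISO C++ parallelism (not code from the papers above):
// a STREAM-style triad expressed with a standard algorithm and execution policy.
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <execution>
#include <vector>

int main()
{
    const std::size_t n = 1 << 20;
    std::vector<double> a(n), b(n, 1.0), c(n, 2.0);
    const double scalar = 0.4;

    // a[i] = b[i] + scalar * c[i]; the implementation decides how to parallelize
    std::transform(std::execution::par_unseq, b.begin(), b.end(), c.begin(), a.begin(),
                   [=](double bi, double ci) { return bi + scalar * ci; });

    std::printf("a[0] = %.2f\n", a[0]);
    return 0;
}
```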

SYCL: Ben Ashbaugh, Alexey Bader, James Brodman, Jeff Hammond, Michael Kinsner, John Pennycook, Roland Schulz, and Jason Sewall. Proceedings of the International Workshop on OpenCL (IWOCL ‘20). Data Parallel C++: Enhancing SYCL Through Extensions for Productivity and Performance

SYCL: Jeff R. Hammond, Michael Kinsner, and James Brodman. Proceedings of the International Workshop on OpenCL (IWOCL’19). A comparative analysis of Kokkos and SYCL as heterogeneous, parallel programming models for C++ applications

OpenMP: S. J. Pennycook, J. D. Sewall and J. R. Hammond. 2018 IEEE/ACM International Workshop on Performance, Portability and Productivity in HPC (P3HPC). Evaluating the Impact of Proposed OpenMP 5.0 Features on Performance, Portability and Productivity

OpenMP: Yonghong Yan, Jeff R. Hammond, Ali Alqazzaz, Chunhua Liao. International Workshop on OpenMP (IWOMP). Nara, Japan, Oct. 2016. A Proposal to OpenMP for Addressing the CPU Oversubscription Challenge

MEMKIND: Christopher Cantalupo, Vishwanath Venkatesan, Jeff R. Hammond, and Simon Hammond. Submitted. User Extensible Heap Manager for Heterogeneous Memory Platforms and Mixed Memory Policies (Preprint) (Source)

Performance Engineering and Application Scalability

GTFOCK: Edmond Chow, Xing Liu, Sanchit Misra, Marat Dukhan, Mikhail Smelyanskiy, Jeff R. Hammond, Yunfei Du, Xiang-Ke Liao and Pradeep Dubey. International Journal of High Performance Computing Applications, accepted. Scaling up Hartree–Fock Calculations on Tianhe-2 (Preprint)

GTFOCK: Edmond Chow, Xing Liu, Mikhail Smelyanskiy, and Jeff R. Hammond. J. Chem. Phys. 142, 104103 (2015). Parallel scalability of Hartree–Fock calculations (Preprint) (Source 1) (Source 2)

MADNESS: Robert J. Harrison, Gregory Beylkin, Florian A. Bischoff, Justus A. Calvin, George I. Fann, Jacob Fosso-Tande, Diego Galindo, Jeff R. Hammond, Rebecca Hartman-Baker, Judith C. Hill, Jun Jia, Jakob S. Kottmann, M-J. Yvonne Ou, Laura E. Ratcliff, Matthew G. Reuter, Adam C. Richie-Halford, Nichols A. Romero, Hideo Sekino, William A. Shelton, Bryan E. Sundahl, W. Scott Thornton, Edward F. Valeev, Álvaro Vázquez-Mayagoitia, Nicholas Vence, Yukina Yokoi. MADNESS: A Multiresolution, Adaptive Numerical Environment for Scientific Simulation (Preprint) (Source)

MADNESS: Álvaro Vázquez–Mayagoitia, W. Scott Thornton, Jeff R. Hammond, Robert J. Harrison. Annual Reports in Computational Chemistry 10, pp. 3–24 (2014). Quantum Chemistry Methods with Multiwavelet Bases on Massive Parallel Computers

Harvey: Amanda Peters Randles, Vivek Kale, Jeff Hammond, William D. Gropp and Efthimios Kaxiras. Proc. 27th Intl. Parallel and Distributed Processing Symp (IPDPS). Boston, Massachusetts, May 2013. Performance Analysis of the Lattice Boltzmann Model Beyond Navier-Stokes (Preprint)

Reviews and High-level Presentations

Venkatram Vishwanath, Thomas Uram, Lisa Childers, Hal Finkel, Jeff Hammond, Kalyan Kumaran, Paul Messina and Michael E. Papka. DOE ASCR Workshop on Software Productivity for eXtreme-Scale Science (SWP4XS), Rockville, Maryland, January 13-14, 2014. Toward improved scientific software productivity on leadership facilities: An Argonne Leadership Computing Facility View

Jeff R. Hammond. ACM XRDS 19 (3), Spring 2013. Challenges and methods in large-scale computational chemistry applications (invited and proof-read but not refereed in the traditional sense)

Bill Allcock, Anna Maria Bailey, Ray Bair, Charles Bacon, Ramesh Balakrishnan, Adam Bertsch, Barna Bihari, Brian Carnes, Dong Chen, George Chiu, Richard Coffey, Susan Coghlan, Paul Coteus, Kim Cupps, Erik W. Draeger, Thomas W. Fox, Larry Fried, Mark Gary, Jim Glosli, Thomas Gooding, John Gunnels, John Gyllenhaal, Jeff Hammond, Ruud Haring, Philip Heidelberger, Mark Hereld, Todd Inglett, K.H. Kim, Kalyan Kumaran, Steve Langer, Amith Mamidala, Rose McCallen, Paul Messina, Sam Miller, Art Mirin, Vitali Morozov, Fady Najjar, Mike Nelson, Albert Nichols, Martin Ohmacht, Michael E. Papka, Fabrizio Petrini, Terri Quinn, David Richards, Nichols A. Romero, Kyung Dong Ryu, Andy Schram, Rob Shearer, Tom Spelce, Becky Springmeyer, Fred Streitz, Bronis de Supinski, Pavlos Vranas, Bob Walkup, Amy Wang, Timothy Williams, and Robert Wisniewski. Blue Gene/Q: Sequoia and Mira in Contemporary High Performance Computing: From Petascale toward Exascale, edited by Jeffrey S. Vetter.

Jeff R. Hammond. IEEE-TCSC Blog, August 6th, 2012. Challenges for Interoperability of Runtime Systems in Scientific Applications (invited and proof-read but not refereed in the traditional sense)

Resilience

GVR: A. Chien, P. Balaji, P. Beckman, N. Dun, A. Fang, H. Fujita, K. Iskra, Z. Rubenstein, Z. Zheng, R. Schreiber, J. Hammond, J. Dinan, I. Laguna, D. Richards, A. Dubey, B. van Straalen, M. Hoemmen, M. Heroux, K. Teranishi, A. R. Siegel. Submitted. 2015. Versioned Distributed Arrays for Resilience in Scientific Applications: Global View Resilience (Preprint)

Sean Hogan, Jeff R. Hammond and Andrew A. Chien. Fault-Tolerance at Extreme Scale (FTXS). Boston, MA. June, 2012. An Evaluation of Difference and Threshold Techniques for Efficient Checkpoints. (Preprint) (Slides)

Statistical sampling and molecular dynamics

LAMMPS: Rolf Isele-Holder, Wayne Mitchell, Jeff Hammond, Axel Kohlmeyer and Ahmed Ismail, J. Chem. Theory Comput. 9 (12), 5412-5420 (2013). Reconsidering Dispersion Potentials: Reduced Cutoffs in Mesh-Based Ewald Solvers Can Be Faster Than Truncation

LAMMPS-Ensembles: Luke Westby, Mladen Rasic, Adrian Lange and Jeff R. Hammond. See LAMMPS-Ensembles on my Wiki for more information.

NEUS: A. Dickson, M. Maienshein-Cline, A. Tovo-Dwyer, J. R. Hammond and A. R. Dinner, J. Chem. Theory Comput. 7, 2710 (2011). Flow-dependent unfolding and refolding of an RNA by nonequilibrium umbrella sampling. (Preprint)

Quantum chemistry on accelerators

GPUs

Eugene DePrince has incorporated all of the GPU coupled-cluster codes into PSI4. See GitHub for details.

A. E. DePrince III, J. R. Hammond, and C. D. Sherrill, Iterative Coupled-Cluster Methods on Graphics Processing Units, in Electronic Structure Calculations on Graphics Processing Units: From Quantum Chemistry to Condensed Matter Physics, edited by Ross Walker and Andreas Goetz (Wiley, 2016).

A. E. DePrince III, J. R. Hammond and S. K. Gray, Proceedings of SciDAC 2011, Denver, CO, July 10-14, 2011. Many-body quantum chemistry on graphics processing units.

A. E. DePrince III and J. R. Hammond, Symposium on Application Accelerators in High-Performance Computing (SAAHPC) Knoxville, TN, USA, 19-21 July 2011. Quantum chemical many-body theory on heterogeneous nodes. (Slides)

A. E. DePrince III and J. R. Hammond, J. Chem. Theory Comput. 7, 1287 (2011). Coupled Cluster Theory on Graphics Processing Units I. The Coupled Cluster Doubles Method.

A. E. DePrince III and J. R. Hammond, Symposium on Application Accelerators in High-Performance Computing (SAAHPC), Knoxville, TN, USA, 13-15 July 2011. Evaluating one-sided programming models for GPU cluster computations.

Intel Xeon Phi (aka MIC)

NWChem: E. Aprà, et al. J. Chem. Phys. 152, 184102 (2020). NWChem: Past, present, and future.

Eric J. Bylaska, Edoardo Aprà, Karol Kowalski, Mathias Jacquelin, Wibe A. de Jong, Abhinav Vishnu, Bruce Palmer, Jeff Daily, Tjerk P. Straatsma, Jeff R. Hammond, Michael Klemm. In T. Straatsma, K. Antypas, T. Williams, (eds), Exascale Scientific Applications. New York: Chapman and Hall/CRC (2018). Transitioning NWChem to the Next Generation of Manycore Machines.

NWChem: Eric J. Bylaska, Mathias Jacquelin, Wibe A. de Jong, Jeff R. Hammond, and Michael Klemm. In J. Kunkel, R. Yokota, M. Taufer, J. Shalf (eds), High Performance Computing. ISC High Performance 2017. Lecture Notes in Computer Science, vol 10524. Performance Evaluation of NWChem Ab-Initio Molecular Dynamics (AIMD) Simulations on the Intel® Xeon Phi™ Processor. Best Paper Award at the IXPUG 2017 workshop (Slides)

NWChem: Edoardo Apra, Karol Kowalski, Jeff R. Hammond, and Michael Klemm. NWChem: Quantum Chemistry Simulations at Scale in High Performance Parallelism Pearls, edited by James Reinders and James Jeffers (Morgan Kaufmann, 3 Nov. 2014). (Safari Books Online) (Google Books Online)

Coupled-cluster response theory and NWChem

NWChem 101 - an incomplete version of what I hope will become a crash course in how to use NWChem like an expert. Obviously, this is not a refereed publication.

NWChem: Coupled-cluster response theory: parallel algorithms and novel applications (my dissertation).

NWChem: B. Peng, N. Govind, E. Apra, M. Klemm, J.R. Hammond, K. Kowalski, J. Phys. Chem. A 121 (6), 1328-1335 (2017). Coupled Cluster Studies of Ionization Potentials and Electron Affinities of Single-Walled Carbon Nanotubes

NWChem: H. Hu, Y.-F. Zhao, J. Hammond, E. Bylaska, E. Apra, H.J.J. van Dam, J. Li, N. Govind, and K. Kowalski, Chem. Phys. Lett. (2015). Theoretical studies of the global minima and polarizabilities of small lithium clusters

NWChem: K. Kowalski, J. R. Hammond, W. A. de Jong, P.-D. Fan, M. Valiev, D. Wang and N. Govind, in Computational Methods for Large Systems: Electronic Structure Approaches for Biotechnology and Nanotechnology, edited by J. R. Reimers (Wiley, March 2011, Hoboken). Coupled-Cluster Calculations for Large Molecular and Extended Systems

NWChem: K. Kowalski, S. Krishnamoorthy, O. Villa, J. R. Hammond, and N. Govind, J. Chem. Phys. 132, 154103 (2010). Active-space completely-renormalized equation-of-motion coupled-cluster formalism: Excited-state studies of green fluorescent protein, free-base porphyrin, and oligoporphyrin dimer

NWChem: J. R. Hammond, N. Govind, K. Kowalski, J. Autschbach and S. S. Xantheas, J. Chem. Phys. 131, 214103 (2009). Accurate dipole polarizabilities for water clusters N=2-12 at the coupled-cluster level of theory and benchmarking of various density functionals

NWChem: J. R. Hammond and K. Kowalski, J. Chem. Phys. 130, 194108 (2009). Parallel computation of coupled-cluster hyperpolarizabilities

NWChem: K. Kowalski, J. R. Hammond, W. A. de Jong and A. J. Sadlej, J. Chem. Phys. 129, 226101 (2008). Coupled cluster calculations for static and dynamic polarizabilities of C60

NWChem: J. R. Hammond, W. A. de Jong and K. Kowalski, J. Chem. Phys. 128, 224102 (2008). Coupled cluster dynamic polarizabilities including triple excitations

NWChem: K. Kowalski, J. R. Hammond and W. A. de Jong, J. Chem. Phys. 127, 164105 (2007). Linear response coupled cluster singles and doubles approach with modified spectral resolution of the similarity transformed Hamiltonian

NWChem: J. R. Hammond, K. Kowalski and W. A. de Jong, J. Chem. Phys. 127, 144105 (2007). Dynamic polarizabilities of polyaromatic hydrocarbons using coupled-cluster linear response theory

NWChem: J. R. Hammond, M. Valiev, W. A. de Jong and K. Kowalski, J. Phys. Chem. A 111, 5492 (2007). Calculations of properties using a hybrid coupled-cluster and molecular mechanics approach

Chemistry Applications

Sameer Varma, Mohsen Botlani, Jeff R. Hammond, H. Larry Scott, Joseph P.R.O. Orgel, Jay D. Schieber, Proteins: Structure, Function, and Bioinformatics (2015). Effect of Intrinsic and Extrinsic Factors on the Simulated D-band Length of Type I Collagen

R. S. Assary, P. C. Redfern, J. R. Hammond, J. Greeley and L. A. Curtiss, Chem. Phys. Lett., 497 (1-3), 123 (2010). Predicted Thermochemistry for Chemical Conversion of 5-Hydroxymethyl Furfural

R. S. Assary, P. C. Redfern, J. R. Hammond, J. Greeley and L. A. Curtiss, J. Phys. Chem. B, 114, 9002 (2010). Computational Studies of the Thermochemistry for Conversion of Glucose to Levulinic Acid

R. K. Chaudhuri, J. R. Hammond, K. F. Freed, S. Chattopadhyay and U. S. Mahapatra, J. Chem. Phys. 129, 064101 (2008). Reappraisal of cis effect in 1,2-dihaloethenes: An improved virtual orbital multireference approach

M. Lingwood, J. R. Hammond, D. A. Hrovat, J. M. Mayer, and W. T. Borden, J. Chem. Theory Comp. 2, 740 (2006). MPW1K, rather than B3LYP, should be used as the functional for DFT calculations on reactions that proceed by proton-coupled electron transfer (PCET)

RDM Theory

J. R. Hammond and D. A. Mazziotti, Bulletin of the American Physical Society 52 (1) (March 2007). Variational reduced-density-matrix theory applied to the Hubbard model. (These were the first reported variational RDM results on the 2D Hubbard model, which has since been the subject of ongoing interest; see, e.g., http://prl.aps.org/abstract/PRL/v108/i21/e213001, http://prl.aps.org/abstract/PRL/v108/i20/e200404, and http://arxiv.org/abs/1207.4847.)

J. R. Hammond and D. A. Mazziotti, Phys. Rev. A 73, 062505 (2006). Variational reduced-density-matrix calculation of the one-dimensional Hubbard model.

J. R. Hammond and D. A. Mazziotti, Phys. Rev. A 73, 012509 (2006). Variational reduced-density-matrix calculations on small radicals: a new approach to open-shell ab initio quantum chemistry.

J. R. Hammond and D. A. Mazziotti, Phys. Rev. A 71, 062503 (2005). Variational two-electron reduced-density-matrix theory: Partial 3-positivity conditions for N-representability.
