ASC eNews Quarterly Newsletter March 2013

 


ASC    
  NA-ASC-500-13 Issue 23
  March 2013

 

The Meisner Minute

Bob Meisner

Guest editorial by Bill Archer, LANL ASC Program Director (acting)
Los Alamos National Laboratory

Next-generation Technology Impacts on the ASC Program

Recently I’ve had reasons to review the goals of ASCI from 1995. There were two related goals: develop a 100-teraFLOP/s computer and develop 3D weapon codes to run on it. The program succeeded at both of these goals by 2007. What we then found was that there was a long way to go for a practical 3D capability, in both hardware and software. Even with Cielo, a 1.3-petaFLOP/s system, certain high-resolution 3D simulations are still beyond our reach.

The 3D simulations on Cielo have taught us that the 100 terabytes of memory available on half the system is not enough, and that I/O and archiving of 25-terabyte files is a problem. We also learned that node failures during a run must be managed to ensure throughput. These lessons have been incorporated into the mission needs of the next ASC system, Trinity, which is currently being procured by the Alliance for Computing at Extreme Scale (ACES) Los Alamos/Sandia partnership.

At this time all we know for sure about Trinity is that it will not be a conventional cluster like Cielo.

The ASC program has two next-generation technology machines: Roadrunner, the world’s first 1-petaFLOP/s system, and Sequoia, the first 20-petaFLOP/s system. Trinity is likely to incorporate features similar to both machines. The weapon codes will feel the full impact of these technology changes.

As we move forward to Trinity, the weapon codes and supporting models will require a significant amount of rework to be able to use the next generation of hardware efficiently. At the same time, we have to continue supporting the Stockpile Stewardship Program, especially the ongoing and planned Life Extension Programs (LEPs). This will require shifting the program from supporting assessments to supporting primary reuse, design, and certification. And we have to continue developing an improved understanding of both primary and secondary physics to improve our predictive capability. Obviously, doing all of this at once will challenge the program.

Don’t think that getting onto the next-generation systems is just an effort for the code teams, or even just the Integrated Codes (IC) program element; successfully executing this will require the combined effort of the entire program. The Physics & Engineering Models (PEM) program element will need to modify both physics models and data access routines. The Verification & Validation (V&V) program element will have to evaluate the reworked codes. Computational Systems and Software Environment (CSSE) will have to provide a new programming model, as well as resiliency solutions. Facility Operations and User Support (FOUS) will have to provide not only the machines but also the supporting infrastructure for cooling and power, and new ways to handle large data sets for visualization, storage, and archiving. Moving our focus beyond stockpile assessment to design and certification will be equally challenging for the entire program.

The next few years are going to be both exciting and challenging for the ASC program. They will be very similar to 1995–2005, when both computing technology and stockpile stewardship were changing direction. We are at a time when new ideas must be tried, and everyone in the ASC program has to work together to build the future.

______________________________________________________

ACES and NERSC form partnership for next-generation supercomputers, an unprecedented collaboration between NNSA and the Office of Science

The Alliance for Computing at Extreme Scale (ACES), a collaboration between Sandia (SNL) and Los Alamos (LANL) national laboratories, and the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory (LBNL) have fully integrated as a team as they work toward their respective next-generation supercomputer deployments. The goal of the new alliance is to acquire two supercomputers, Trinity and NERSC-8, using a common set of technical requirements and a single procurement process. In addition to strengthening the partnership between NNSA and the DOE Office of Science (DOE/SC) in the area of high performance computing, combining the acquisition strategy has a number of advantages. It leverages shared technical and risk management expertise and saves industry time and money by requiring a response to only a single request for proposals and a single set of requirements. There are also benefits associated with shared experiences in production operations.

Trinity will be the next-generation supercomputer providing high performance computational resources in support of NNSA’s Stockpile Stewardship Program mission. Driving Trinity’s architectural requirements is the need to increase ASC predictive capabilities. Cielo and Sequoia (the current machines at LANL and Lawrence Livermore National Laboratory (LLNL), respectively) are unable to support the higher fidelity models, in both geometry and physics, needed to provide the necessary predictions.

NERSC-8 will increase DOE’s ability to support the rapidly increasing computational demands of the entire spectrum of DOE/SC computational research. It will also help transition DOE scientific applications to more energy-efficient architectures. The NERSC-8 deployment will support well-established fields that already rely on large-scale simulation but are moving to incorporate additional physical processes and higher resolution. It will also support scientific discovery performed using a very large number of individual, mutually independent compute tasks.

The ACES and NERSC team will issue a formal Request for Proposals in the third quarter of calendar year 2013, and plans to have contracts in place by the end of the year. Deployments are planned for late 2015 to early 2016.

______________________________________________________

 Cielo Working at Full Tilt for Stockpile Stewardship

The ASC Program’s capability-class workhorse, Cielo, concluded its third capability computing campaign (CCC) on January 27, 2013. Cielo performed at approximately 80% overall utilization of the machine for CCC-3, which began on May 22, 2012. These campaigns support simulations for Los Alamos, Lawrence Livermore, and Sandia national laboratories.

As these campaigns go forward, there is a large and growing demand for large-scale, up to full-system jobs. Cielo is operated by the New Mexico Alliance for Computing at Extreme Scale (ACES), a partnership between Sandia and Los Alamos national laboratories. It is the petascale resource for conducting NNSA weapons simulations in the 2011–2015 timeframe. Built by Cray, Inc., it is installed at Los Alamos National Laboratory.

After a system outage for upgrading the Cray Linux Environment, CCC-4 started on February 4. CCC-4 has initiated an impressive workload for stockpile stewardship and weapons science. Initial requests oversubscribed the time available by a factor of more than five.

Numerous campaigns are increasing the job size to utilize the full-system, or near full-system, capability. Three users will run jobs on 102,000 cores: 3D weapons studies, a safety study, and 3D particle transport. Co-design scaling studies by Sandia National Laboratories will use the full system.

Results from Cielo’s Capability Computing Campaigns

In addition to unclassified projects, weapons program scientists ran numerous classified stockpile projects. The results from these projects help us understand and accurately calculate nuclear tests that had anomalous results, and they are important for validating our understanding of the physics relevant to nuclear weapons. Simulations such as these began with the inception of the ASC Program (formerly the ASCI Program) in 1995, but the calculations have always needed larger machines than were available at the time. For example, LANL researcher Bob Weaver had to run a particular weapon simulation on the Blue Mountain machine as a “cutoff,” or truncated, problem because the full calculation was too big for the machine, and the resolution was coarser than desired. Since that time, he has run versions of the problem that utilize each successive ASC machine as fully as possible: White, Roadrunner, and now Cielo. With Cielo, he is now able to run full-scale 3D simulations with remarkable fidelity, but he is still not able to run the complete simulation. That will have to wait until the next-generation supercomputer Trinity is deployed at LANL.

The need to run full-scale 3D simulations with increased fidelity is the design basis for the Trinity platform. Five tri-lab projects with unclassified results from CCC-3 are highlighted below.

1. Integrated Simulation of Laser Plasma Interaction (LPI) in NIF Experiments (Steve Langer, PI; pF3D) — This project begins an investigation of how to account for backscattered light in a self-consistent manner in simulations of ignition experiments at the National Ignition Facility (NIF). The pF3D code is used on Cielo to simulate backscatter in overlapping beams, revealing interesting temporal and spatial structure. Cielo was recently used to simulate three interacting quads, and the spatial variability of the backscattered light shows an enhancement from the interacting quads. The CCC-3 runs do a better job of simulating backscatter in regions where quads overlap.

2. Informing Predictive Models Using 3-D Simulations of Localization and Initiation in Polycrystals and HE (Nathan Barton, PI; ALE3D and mdef [crystal mechanics]) — This project enabled exploration of phase transformation and plasticity kinetics. To better model strength and phase dynamics of materials, they ran simulations to provide details that are not experimentally accessible at the rates and pressures needed. The results illuminate interplay between nucleation and growth and among interacting mechanisms. The results contribute to ASC Physics and Engineering Models milestones.

3. First Principles Diffusion Coefficients of DT-Pu (Christopher Ticknor, PI; ofmd.f90) — This work explores first principles calculations of the diffusion coefficients for mass transport in a deuterium-tritium (DT) plutonium warm dense plasma.

4. Mitigating Stimulated Raman Scattering (SRS) in Inertial Confinement Fusion (ICF) Hohlraums (Lin Yin, PI; VPIC) — This work studies SRS, a type of laser-plasma instability of concern for laser-driven fusion experiments such as those underway at the National Ignition Facility (NIF).

5. Towards Accurate Simulation of the Abnormal Thermal Fire Environment for Stockpile Stewardship (Paul Lin, PI; Sierra/Fuego) — The goal for this project is safety and reliability of the stockpile for the abnormal thermal fire environment. It is a B61 LEP requirement to accurately predict the abnormal thermal fire environment.

Summary

Cielo was deployed for stockpile stewardship, and it has excelled as a stable and reliable platform for running capability computing campaigns for the tri-lab weapons community, particularly for problems that cannot be resolved at less than near-petascale size. Cielo’s architecture allowed easy migration of the existing integrated weapons codes, thus extending the ability to conduct critical simulations while the code teams work to adapt the codes to the changing computing architectures. The examples shown in this article illustrate that Cielo is excelling at the job for which it was intended.

______________________________________________________

Sandia, Los Alamos and NERSC Host Joint Mini-App Deep Dive with NVIDIA’s Developer Technologies Group

Hardware and software experts from NVIDIA’s Developer Technologies Group recently visited Sandia for a deep dive on mini-application performance and porting activities. Attendees from ACES and NERSC learned about the latest developments in NVIDIA’s hardware solutions and approaches to optimizing codes for high-performance GPU architectures. NVIDIA also presented several optimization activities in which they are using key ASC and ASCR mini-applications to inform future hardware designs. This gave the laboratories the opportunity to learn how modifications to the algorithms or programming models will improve future application performance.

The visit by NVIDIA builds upon a strong and growing collaboration between ACES and NERSC as the laboratories increasingly focus on opportunities for co-designing future exascale systems. The laboratories also continue active engagement with leading industry vendors to identify future application characteristics and requirements, enabling hardware designers to improve performance, lower power requirements, and increase machine reliability. Follow-up discussions from the NVIDIA visit are also helping to refine requirements for future directive-based programming models, a key facet of leveraging existing investment in legacy application codes, as well as for programming tools such as mathematics libraries and runtime systems.

______________________________________________________

New High-Order Hydrodynamics Algorithms Pave the Way for Future Computing Architectures

Through research funded at Lawrence Livermore, scientists have developed BLAST, a high-order finite element hydrodynamics research code that improves the accuracy of simulations and provides a viable path to extreme parallel computing and exascale architectures.

Scaling a computational physics code to extreme levels of parallelism on supercomputers such as Sequoia, which has over 1.5 million compute cores, is a major challenge. The computational science community widely recognizes that high-order numerical methods will be much better suited to existing and future heterogeneous computing architectures, and the need for such methods will increase over the next decade. It is essential to invest not only in hardware development but also in numerical methods research, developing novel algorithms designed from the beginning to take advantage of both current extremely parallel machines such as Sequoia and new architectures.

High-order finite element methods use additional degrees of freedom per computational element (or zone) to increase the accuracy and robustness of simulations relative to low-order methods, which have historically been used. For example, in the image below, a very high-order calculation of a multi-material shock hydrodynamics problem is shown using Q8–Q7 finite elements (eighth-order polynomials for the kinematic fields, seventh-order polynomials for the thermodynamic fields). The high-order finite elements result in highly curved zones and sub-zonal resolution of the shock waves, which is simply not possible with a low-order method.
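
To put “additional degrees of freedom per zone” in concrete terms, consider the standard tensor-product node count for finite elements (a generic textbook relation, not a statement about BLAST’s particular discretization): a 2D quadrilateral element of polynomial order k, a Qk element, carries

    \text{nodes per scalar field on a 2D } Q_k \text{ zone} = (k+1)^2:
    \qquad Q_1 \to 4, \qquad Q_8 \to (8+1)^2 = 81

so each Q8 kinematic field carries roughly twenty times the nodal data per zone of a traditional Q1 element, which is what buys the curved zones and sub-zonal shock resolution.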

BLAST also provides a high performance computing advantage since high-order methods have greater FLOP/byte ratios, meaning that more time is spent on floating point operations relative to memory transfer, an important characteristic of numerical algorithms for exascale computing. Strong scaling data for the BLAST code on the Sequoia machine is shown below. In strong scaling, a single large problem is solved on more and more processors. Ideally, the time to solution should decrease as the number of processors is increased. In practice, there is a limit to how far an algorithm can be scaled as parallel communication overhead can dominate the run time of a problem as more and more processors are used. Because high-order methods are more compute intensive at the zone level, the communication overhead is not as significant as it is for low-order algorithms; thus, it is possible to achieve excellent strong scaling results down to very few computational zones per processor. “I don’t know of any other code that has such good strong scaling,” said BLAST Principal Investigator Tzanio Kolev. “And I think we can push it even further.”

Strong scaling data is shown for three cases: Q2-Q1 (quadratic), Q3-Q2 (cubic), and Q4-Q3 (quartic) finite element basis functions.
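
For readers who want to produce this kind of plot from their own timing data, the short C++ sketch below computes strong-scaling speedup and parallel efficiency from a table of wall-clock times. The core counts and times in it are invented placeholders for illustration, not measurements from BLAST or Sequoia.

    #include <cstdio>

    int main() {
        // Hypothetical wall-clock times (seconds) for one fixed-size problem
        // run on increasing core counts -- illustrative numbers only.
        const int    cores[]  = {1024, 2048, 4096, 8192, 16384};
        const double time_s[] = {512.0, 260.0, 134.0, 70.0, 38.0};
        const int n = sizeof(cores) / sizeof(cores[0]);

        for (int i = 0; i < n; ++i) {
            // Speedup relative to the smallest run; ideal speedup equals
            // the ratio of core counts.
            double speedup = time_s[0] / time_s[i];
            double ideal   = static_cast<double>(cores[i]) / cores[0];
            // Parallel efficiency: fraction of the ideal speedup achieved.
            double eff = speedup / ideal;
            std::printf("%6d cores: speedup %6.2f (ideal %6.2f), efficiency %5.1f%%\n",
                        cores[i], speedup, ideal, 100.0 * eff);
        }
        return 0;
    }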

For more information about BLAST, visit https://computation.llnl.gov/casc/blast/index.php.

______________________________________________________

New ALE3D Release Means More Efficient Simulations

The ALE3D code team, a multidisciplinary team at Lawrence Livermore, recently released version 4.18, which added significant enhancements and functionality. ALE3D, a multiphysics numerical simulation software tool that uses arbitrary Lagrangian–Eulerian (ALE) techniques, is used to simulate many physical phenomena, including geological materials, rocket and munitions fragment impact, concrete fracture, and heat transfer. The code addresses 2D and 3D physics and engineering problems, using a hybrid finite element and finite volume formulation on an unstructured grid. The added enhancements and functionality support the Lab’s progress and performance in predictive capability for assessment.

The new version provides Lagrangian element erosion and automatic contact, with improved parallelism, robustness, accuracy, and coupling, which allow newly exposed surfaces to interact. These enhancements improve the modeling of complex systems, including material failure and fracture. For the longer time scales required by cook-off simulations, an implicit mortar slide capability was added for the structural components, while the incompressible flow module was augmented to give users the ability to simulate cook-off of high explosives with low melt temperatures. Simulations using ALE capabilities will benefit from a new condition number relaxer, which produces higher quality meshes and leads to more robust and efficient simulations.

The code team also made several improvements that affect problem setup and initiation. Support for CUBIT (a geometry and mesh-generation toolkit produced at Sandia) has been expanded to include meshing of shell elements and the ability to shape geometry into 2D meshes. The ParticlePack library now includes several types of spherical particle defects, new statistical capabilities, and new particle/enclosure options and size-distribution options. The DSD (detonation shock dynamics) package supports a new dynamic capability for initializing explosive burn times.

Usability is improved by the addition of “for” loops in the user input and by initial support for dynamically linked user-defined functions. The flexibility of user controls is also improved. In addition, the code provides a common input syntax for element erosion, slide deletion, and nodal relaxation. A set of “rules” is available that can be combined to give the user very specific control over code behavior.

Code scaling on Sequoia is progressing. For this effort, the code team developed and implemented a new mesh-initialization algorithm and improved the memory control in Smooth-Particle Hydrodynamics. Preliminary results show two-orders-of-magnitude improvements in scaling.

______________________________________________________

Sandia Fracture Challenge Results Presented at 2012 ASME Congress Indicate Fracture Models Are Not Yet Predictive

Mid-summer last year, Sandia National Laboratories, in partnership with the US National Science Foundation (NSF) and the Naval Surface Warfare Center, launched the Sandia Fracture Challenge. The goal of the Challenge was to benchmark the prediction capabilities for ductile fracture across the physics models, computational methods, and numerical approaches currently available in the computational fracture community.

The challenge given to engineering researchers was to predict the onset and propagation of quasi-static ductile fracture of a common engineering alloy in a simple geometry (Figure 1) using modeling and simulation. Twenty-four international research teams initially signed up to participate; however, only 14 teams submitted their predictions before the deadline.

The 14 successful teams presented their methodology and predicted results at a special symposium during the American Society of Mechanical Engineers (ASME) International Mechanical Engineering Congress and Exposition in Houston, TX, November 9-15, 2012.

The approaches the teams took to fracture prediction ranged from very simple engineering calculations to complicated multi-scale simulations. The wide variation in the modeling results presented at the symposium (Figure 2) indicated that the computational fracture models existing in the mechanics community today are not yet predictive, and that predicting ductile failure initiation and crack propagation remains an extremely difficult problem.

Soon after the ASME symposium, discussions began within the research community about how to isolate the shortcomings in computational models and narrow the gaps in predictive capabilities for ductile fracture. A follow-up workshop, hosted by NSF, is in the planning stages. It will document the lessons learned from the Sandia Fracture Challenge and define coordinated future R&D activities.

______________________________________________________

Mantevo 1.0 Release, Co-design Via Mini-Apps and the Launch of Mantevo.org Portal

Sandia researchers, along with colleagues from LLNL, LANL, the Atomic Weapons Establishment, NVIDIA, Intel and AMD, announced the first official release of the Mantevo suite of mini applications and mini drivers on December 14, 2012. The community portal (mantevo.org) for accessing Mantevo packages and related capabilities launched simultaneously.

Mantevo packages are small programs that embody one or more performance-impacting elements of large-scale production applications. Mantevo 1.0 includes eight packages: seven miniapplications (CloverLeaf, CoMD, HPCCG, MiniFE, MiniGhost, MiniMD, and MiniXyce) and one minidriver (EpetraBenchmarkTest).

Mantevo miniapps have been around for several years, but energy-efficiency concerns are now driving change at all levels of computing. Co-design has become an essential activity for jointly answering questions about how memory, processor, operating system, programming model, and application designs can advance simultaneously. Miniapps have emerged as a critical central component of the co-design process, representing concrete, yet malleable, proxies for large-scale applications and enabling rapid exploration of a very complex design space.

The collection of Mantevo reference implementations continues to grow. The base implementation of each Mantevo miniapp includes OpenMP, MPI and sequential execution. As a part of our advanced systems testbed efforts, many other models are also supported, including AVX, CUDA, OpenACC, OpenCL, Intel TBB, qthreads and KokkosArray.
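
To give a flavor of what a miniapp kernel looks like, here is a minimal C++ sketch in the spirit of HPCCG’s WAXPBY vector update (w = alpha*x + beta*y); it is illustrative only, not code taken from the Mantevo sources. Because the OpenMP pragma is simply ignored by a build without OpenMP, the same source can serve as both the sequential and the OpenMP reference implementation:

    #include <cstdio>
    #include <vector>

    // WAXPBY-style kernel: w = alpha*x + beta*y, a building block of a
    // conjugate-gradient solve such as the one HPCCG exercises.
    void waxpby(double alpha, const std::vector<double>& x,
                double beta,  const std::vector<double>& y,
                std::vector<double>& w) {
        const long n = static_cast<long>(w.size());
        #pragma omp parallel for  // no-op when compiled without OpenMP
        for (long i = 0; i < n; ++i)
            w[i] = alpha * x[i] + beta * y[i];
    }

    int main() {
        const long n = 1 << 20;
        std::vector<double> x(n, 1.0), y(n, 2.0), w(n);
        waxpby(2.0, x, 0.5, y, w);
        std::printf("w[0] = %g\n", w[0]);  // expect 3.0
        return 0;
    }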

For more information, or to download these packages, visit the following links.

Mantevo website: http://mantevo.org
Mantevo Suite 1.0 download: http://mantevo.org/download.php

______________________________________________________

Livermore Computing Collaboration Tools Increase Productivity

Collaborating in Livermore Computing's (LC's) high performance computing (HPC) environment has become easier over the past year with the rollout of three Web-based tools at lc.llnl.gov. Confluence is a wiki, a Web site that allows users to easily create and share content using only their Web browser. JIRA is an issue-tracking system that allows users to track tasks and software bugs and also provides powerful project management capabilities. Stash is a source code hosting system (based on git) that allows users to store their code and to collaborate on projects through online code review.

“In a matrix company like LLNL, there are many small, multidisciplinary projects, and they often span divisions or departments,” said Todd Gamblin, who started the sites with ASC funding as a pilot program to enable HPC research and development teams to work together more effectively. Todd continued, “Web-based collaboration tools can help these teams work together, but the overhead of maintaining them is often too large for a small team. Strict security requirements make it difficult for LLNL employees to set up one-off Web servers.” Now, LC staff and users can quickly create wiki spaces, issue trackers, and source code repositories for their HPC-related projects. The login is the standard LC user name, and fine-grained access controls and groups make it easy to stay within LC security guidelines.

The impact of the tools has been far-reaching. Within LC, Confluence was used to rapidly produce and maintain documentation for the Sequoia supercomputer, and it has since been used on many other LC projects for documentation and brainstorming. Pam Hamilton, Software Development group leader, led an effort to use JIRA to improve communication between and among LC groups. Users from Los Alamos and Sandia National Laboratories can log into the system using cross-site credentials. This capability allows LC's TOSS operating system and Lustre file system teams to use JIRA across the tri-labs.

Since going live in January 2012, the lc.llnl.gov tools have grown to serve more than 560 HPC users. There are more than 120 wiki sites, 88 JIRA projects, and 64 Stash repositories. LC HPC users can start their own collaborative projects by visiting https://lc.llnl.gov/. There may be institutional tools in the future for those without LC accounts.

Traditionally, collaboration tools were seen as outside the scope of HPC, and LC did not support them on its platforms. This perception is changing.

“We see the advantage to the ASC program of a common system for the HPC community to share content, track issues, and work together on code,” said Kim Cupps, LC division leader. “We appreciate the work of Todd, Pam, and others in leading the way to establish this valuable set of tools, which have already had a strong, positive impact on LC users.”

______________________________________________________

LANL ASC Projects Showcased in Stockpile Stewardship Quarterly

Since February 2011, the Office of Stockpile Stewardship, NA-11, has published the “Stockpile Stewardship Quarterly” (SSQ). Four ASC researchers at LANL are featured in the February 2013 issue. Alek Zubelewicz of the Physics and Chemistry of Materials Group and coauthor Abigail Hunter of the Lagrangian Codes Group at Los Alamos contributed the article titled “Fracture Model for Beryllium and Other Materials.” Kim Molvig of the Verification and Analysis Group and coauthor Nelson Hoffman of the Plasma Theory and Applications Group contributed the article titled “Knudsen Layer Reduction of Fusion Reactivity.”

These articles can be viewed on the LANL public website at http://www.lanl.gov/asc under Documents.

______________________________________________________

Dealing with Data Overload in the Scientific Realm

Excerpted from Science & Technology Review.

We may think we have problems managing our ever-increasing stream of electronic personal “data,” whether that information comes in the form of e-mails, social network updates, or phone texts. However, the challenges we face are minuscule compared with those faced by scientists who must parse the growing flood of scientific data critical to their work. From sequences to simulations to sensors, modern scientific inquiry is awash in electronic information. At Lawrence Livermore, computer scientists supporting various projects and programs are developing methods and algorithms that provide new ways to tame, control, and understand these large amounts of data.

“Four key steps are involved in solving data-science problems,” explains Dean Williams of Livermore’s Global Security Principal Directorate. “One must organize the data—arrange the numbers and bits into meaningful concepts. One must prioritize—choose the most useful data when time and resources are limited. One must analyze—find meaning in images, graphs, data streams, and so on. Finally, it’s important to make it easy for researchers to use the data; that is, we must create the methods and systems that help users query, retrieve, and visualize.”

Three “V’s” can sum up the type of data and the challenges involved: the variety, the velocity, and the volume. For example, in biological mission areas, the variety is high, but the velocity is low, with the volume that needs to be manipulated ranging from gigabytes to terabytes. By contrast, in the cyber security arena, variety and velocity are high, and the volume, which is continually changing as data streams past, can be very large. For climate research, the variety and velocity of data are also high, with an enormous volume accumulating in databases worldwide (from petabytes to exabytes).

The Laboratory, with its broad expertise in analysis, experience in storage technologies, and strong institutional computing culture, is addressing the data-science challenges of its programs and projects. These efforts range from devising methods for predicting the evolution of viruses, to creating tools for tackling streaming data in the cyber security arena, to fashioning accessible intuitive portals and analytics for climate scientists worldwide.

For the complete article, see the January issue of Science & Technology Review: https://str.llnl.gov/january-2013/williams

______________________________________________________

Computation/Engineering Team Provides Expertise to DOE Carbon Capture

A joint team of Computation and Engineering staff contributed to the first software release for the Carbon Capture Simulation Initiative (CCSI) last fall. The release was a full year ahead of schedule, and the high quality of the tools has elicited praise from both industry partners and the Department of Energy (DOE).

In a February 2010 memorandum, President Obama charged a Carbon Capture and Storage (CCS) Task Force to overcome the barriers to widespread, cost-effective deployment of CCS technology within 10 years. To meet the President’s goal, DOE initiated the CCSI project to accelerate the realization of new carbon capture approaches from the laboratory to the power plant.

James Wood, from DOE headquarters, commended the quality of the initial software release: “The usefulness of the tools was demonstrated in a breakout session [at the CCSI Industry Advisory Board meeting] during which industry representatives began to voice plans for immediate use of these tools. Success in [this] goal has significant uses in many industries that use complex systems and offers a potential to reduce costs, risks, and the enormous time-lag presently required to bring commercial products to market.”

The release is the result of a productive multi-lab collaboration. The software includes uncertainty quantification (UQ) tools, carbon capture simulation models, risk analysis tools, a reduced-order model development tool, and the Turbine Gateway, which allows thousands of UQ simulations to run on the Amazon EC2 cloud simultaneously. Lab employees were involved with the development of many of these tools.

The Lab is a partner in CCSI with other national laboratories, industry, and academic institutions that use state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technologies from discovery to development, demonstration, and ultimately widespread deployment at hundreds of power plants.

The multi-lab CCSI team still has a significant amount of work to accomplish. Although the team has proven the viability of the simulation approach for improving carbon capture, they must now productize their tools, develop more detailed simulations, and develop dynamic simulations of the full power cycle of a power plant. The team also must broaden the simulations to include the solvents that industry partners are most likely to deploy in the near future.

______________________________________________________

Livermore Shares Seismic High Performance Computing Expertise with Academia

Lab researchers and university scientists gathered recently at a Lawrence Livermore-hosted workshop for “Advancing Seismology and Geodynamics through High Performance Computing.”

The workshop's goal was to create a strategy for advancing the state of the art in these fields by partnering the National Science Foundation (NSF) and academia with the high performance computing (HPC) capabilities of the Department of Energy.

Funded mostly by NSF, the November session brought to Livermore many of the nation's academic leaders in computational seismology and geodynamics, including the directors of the Southern California Earthquake Center; the NSF Center on Computational Infrastructure for Geodynamics at the University of California, Davis; and the Western Region Earthquake Center, U.S. Geological Survey. According to Steve Bohlen, deputy program director for Energy and Environmental Security, who organized the event, rapid advancement of important Laboratory programs in nuclear explosion monitoring and nuclear reactor seismic safety relies in part on the development of more refined seismic wave propagation codes and whole-Earth models, both of which academic researchers also need to improve their understanding of the Earth's interior.

Discussions during the workshop focused on some of these academic needs. A tour of Livermore's computing facility showcased the HPC modeling and simulation capabilities that can be shared.

The workshop ended with an outline for a white paper that NSF will use to promote partnerships between academia and DOE. It will present a vision for major advances in seismology and geodynamics that can be enabled with the focused support of an HPC "ecosystem." The white paper will define key attributes and features of that ecosystem and a concept for a more robust partnership between the academic community and the Lab.

______________________________________________________

LDRD Funds Early Career Research Projects

LANL’s Laboratory Directed Research & Development (LDRD) organization awarded twelve Fiscal Year 2013 Early Career Research projects, three of which went to ASC Program researchers. Proposals from Abigail Hunter, Chengkun Huang, and Christopher Ticknor were selected from a total of 27 submitted to the competition by researchers across Los Alamos. Each will receive up to $225,000 per year for two years. “Early career research is one of the most important mechanisms for retaining top-notch scientific talent and aiding in the transition to a staff position,” said Deputy LDRD Program Director Jeanne Robinson.

Abigail Hunter’s proposal is titled “Novel Mesoscale Modeling Approach for Investigating Energetically Driven Nanoscale Defect/Interface Interactions.” She is in the Lagrangian Codes Group in LANL’s Computational Physics Division.

Chengkun Huang, in the Applied Mathematics and Plasma Physics Group in the Theoretical Division, will be funded for his proposal “First Principle Study of Relativistic Beam and Plasma Physics Enabled by Enhanced Particle-in-Cell Capability.”

Christopher Ticknor, who first came to Los Alamos as a Metropolis postdoctoral fellow, works in the Physics and Chemistry of Materials Group in the Theoretical Division. His proposal is titled “Stochastic Modeling of Phase Transitions in Strongly Interacting Quantum Systems.”

______________________________________________________

The Rapid Evolution of Supercomputing Displayed at SC12

If the annual supercomputing (SC) conference proves anything, it is that high performance computing technology changes with stunning speed. But then, the purpose of SC is to galvanize the forces that bring about that change. SC12 keynote speaker, Michio Kaku, a physicist, futurist and author, told conference attendees that "today's supercomputers are tomorrow's infrastructure."

"The rapid evolution of high performance computing should come as no surprise," said Dona Crawford, associate director for Computation at Lawrence Livermore. "The leadership role LLNL plays in supercomputing contributes significantly to the new developments in HPC highlighted at SC."

Sequoia retained its No. 1 Graph 500 ranking, showcasing its ability to conduct analytic calculations, or finding the needle in the haystack, by traversing 15,363 giga-edges per second on a scale-40 graph (a graph with 2^40, roughly 1.1 trillion, vertices). The system's capability also played a role in one of the conference's most watched competitions, the Gordon Bell Prize. Two of the five finalist submissions used Sequoia: a simulation of the human heart's electrophysiology using a code called Cardioid, developed by an LLNL/IBM team, and a cosmology simulation led by Argonne National Laboratory.

Sequoia also was selected by readers of HPCwire, the high performance computing news service, for a 2012 Readers' Choice Award. Michel McCoy, head of LLNL's Advanced Simulation and Computing Program, received the award from Tom Tabor, publisher of HPCwire.

______________________________________________________

Lawrence Livermore Computation Scientists Tapped to Lead Significant External Committees

Jeff Hittinger has been named co-chair of the Exascale Math Steering Committee for the Office of Advanced Scientific Computing Research (ASCR) within the Office of Science. Jeff will gather input from the applied math community and will then help organize a workshop to discuss future directions for applied math research as it relates to exascale computing architecture.

Martin Schulz was tapped to chair the Message Passing Interface (MPI) Forum, a large, open committee with representatives from many organizations that defines and maintains the MPI standard. Among Martin’s responsibilities will be overseeing the MPI 3.1/4.0 effort, which seeks to identify and document additions to the MPI standard that are needed for better platform and application support.

______________________________________________________

Dona Crawford Named HPCwire "Person to Watch" in 2013

Dona Crawford, Associate Director for Computation at Lawrence Livermore, has been named one of HPCwire’s People to Watch in 2013. This is the second time Dona has been given this recognition; she was the first woman to be named to the list in 2002, and she is the only woman who has made the list twice. The 12 people who HPCwire selects each year are among the most talented and outstanding individuals within HPC. Recipients are selected from a pool of potential candidates in academia, government, industrial end-user, and vendor communities. Final selections are made following discussions with the HPCwire editorial and publishing teams and with input garnered from colleagues and other luminaries across the HPC community, including nominations and guidance from past recipients.

______________________________________________________

Maya Gokhale's “Streams-C C-to-FPGA Compiler” Paper Is Honored

A paper by Maya Gokhale and her team, entitled “Stream-Oriented FPGA Computing in the Streams-C High-Level Language” and originally published in 2000, was recently selected as one of the 25 highest impact papers in the 20-year history of the IEEE Symposium on Field-Programmable Custom Computing Machines.

In February 2013, Maya was named a “Distinguished Member of the Technical Staff” at Lawrence Livermore.

To read the paper, visit
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=903392&tag=1.

 ______________________________________________________

ASC Salutes Jeremy Templeton

The Verification and Validation (V&V) subprogram of ASC is pretty important. After all, we want to be highly confident in the predictions we make about the safety, security, and reliability of the weapons in the stockpile.

Sandia’s engineering researcher Jeremy Templeton wants to provide confidence in the safety of nuclear weapons systems in abnormal thermal environments.

State-of-the-art uncertainty quantification theory brings that confidence.

Quantifying margins and uncertainties analysis begins by working with systems engineers to understand the relevant safety themes and environments of concern. Analysts then obtain as much information as possible regarding the weapon’s design. This includes information such as the materials used and the knowns and unknowns about parts configuration. “All this information is used to build a realistic model for the heat transport physics throughout the system,” said Templeton.

After quantifying algorithm accuracy through rigorous verification methods, assessment of the impact of physical uncertainties on system response begins. “This helps us learn which parameters are important to determining the behavior of the systems and which are not, helping us make the best use of limited experimental and computational resources,” he said.

These results provide the basis for validation testing. The model’s predictivity is assessed quantitatively to confirm the assumptions made in the modeling process, followed by tens of thousands of targeted simulations. These simulations show, probabilistically, how the system meets the defined safety requirements.
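
In spirit, that probabilistic step resembles the C++ sketch below, which samples an uncertain input, evaluates a toy response model, and tallies how often the response meets a limit. The distribution, surrogate model, and threshold here are invented for illustration; they are not Sandia’s models or data.

    #include <cstdio>
    #include <random>

    int main() {
        std::mt19937 rng(42);
        // Uncertain input (e.g., a material property), kept positive by
        // sampling a lognormal distribution with median 1.0.
        std::lognormal_distribution<double> input(0.0, 0.15);

        const int    samples   = 100000;
        const double threshold = 2.5;   // hypothetical response limit

        int pass = 0;
        for (int i = 0; i < samples; ++i) {
            double k = input(rng);
            double response = 2.0 / k;  // toy surrogate for the real model
            if (response < threshold) ++pass;
        }
        std::printf("Estimated P(response < limit) = %.4f\n",
                    static_cast<double>(pass) / samples);
        return 0;
    }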

Templeton’s work, and that of his team, in this area has inspired an LDRD project in a seemingly unlikely area – calibrating the engineering models of gas turbine engines. The idea is to inform models using high fidelity data and use computationally efficient methods to impact the engine design cycle, leading to increased efficiency and reduced emissions.

“I also work on developing multi-scale and multi-physics coupling methods for atomistic and continuum descriptions of physical phenomena,” he said. “The goal is to understand the fundamental behavior of electrical energy storage systems at a molecular level.”

Templeton has worked with the Sandia portion of the ASC program for five years. He came to Sandia after completing his Ph.D. in Mechanical Engineering at Stanford University. His thesis was in the area of turbulent flows, developing new wall models for large-eddy simulation (LES) using optimal control theory. “We demonstrated orders-of-magnitude computational savings in some canonical flows with minimal reduction in accuracy as compared to LES without wall models,” he said.

“I enjoy working at the intersection of physics, statistics, and computation,” said Templeton. “The constant process of learning and improvement makes work a lot of fun. I particularly like having the opportunity to think deeply about hard problems, such as how to assess very rare events, and then putting those ideas into practice to solve a real problem.”

His contributions at Sandia are noteworthy. “Jeremy and the teams he’s led have brought an outstanding level of rigor to the verification and validation of full systems thermal analysis for the W87,” said Greg Wagner, manager of the Thermal/Fluid Science and Engineering Department. “He’s pulled in some of the most up-to-date techniques in uncertainty quantification theory and used them to have impact on a very applied problem, setting an example that can be followed by other analysis teams across the labs.”

Wagner further observed, “I’m always impressed with Jeremy’s range of knowledge, his creativity, and his willingness to share his expertise. When I want help thinking through a tough problem, to talk about ideas for new projects, or to get some technical coaching or mentorship for a less experienced staff member, Jeremy is always the first person I turn to.”

Wagner’s comments are quite a statement about the value Templeton brings to the ASC program, and the confidence we can have in our predictions about the safety of nuclear weapons systems in abnormal thermal environments.

ASC Relevant Research


Lawrence Livermore National Laboratory
2011 Publications

  1. Grabowski, B., Soderlind, P., Hickel, T., Neugebauer, J. (2011). “Temperature-driven phase transitions from first principles including all relevant excitations: The fcc-to-bcc transition in Ca,” Phys. Rev. B, Vol. 84, Iss. 21, p. 214107, Dec.

2012 Publications

  1. Adams, M.L., Higdon, D.M., et al. (2012), “Assessing the Reliability of Complex Models,” National Research Council, Washington, D.C., The National Academies Press.
  2. Alam, A., Wilson, B.G., Johnson, D.D. (2012). “Comment on ‘Accurate and fast numerical solution of Poisson's equation for arbitrary, space-filling Voronoi polyhedra: Near-field corrections revisited’,” Physical Review B, Vol. 86, Iss. 12, 127102.
  3. Armstrong, M.R. et al. (2012). “Prospects for achieving high dynamic compression with low energy,” Applied Physics Letters 101, 101904.
  4. Armstrong, M.R. et al. (2012). “Shock compression of precompressed deuterium,” AIP Conf. Proc. 1426.
  5. Badanur, S., Mueller, F., Gamblin, T. (2012). “Memory Trace Compression and Replay for SPMD Systems using Extended PRSDs,” Computer J., 55(2):206–217, Feb.
  6. Barton, N.R., Arsenlis, A., Marian, J. (2012). “A polycrystal plasticity model of strain localization in irradiated iron,” J. of the Mechanics and Physics of Solids, Vol. 61, No. 2, pp. 341–351.
  7. Barton, N.R., Arsenlis, A., Rhee, M., Marian, J., Bernier, J.V., Tang, M., Yang, L. (2012). “A multi-scale strength model with phase transformation,” Proc. of the Conf. of the American Physical Society Topical Group on Shock Compression of Condensed Matter, Vol. 1426, ISBN: 978-0-7354-1006-0.
  8. Barty, A., C. Caleman, A. Aquila, N. Timneanu, L. Lomb, T.A. White, J. Andreasson, D. Arnlund, S. Bajt, T.R.M. Barends, M. Barthelmess, M.J. Bogan, C. Bostedt, J.D. Bozek, R. Coffee, N. Coppola, J. Davisson, D.P. DePonte, R.B. Doak, T. Ekeberg, V. Elser, S.W. Epp, B. Erk, H. Fleckenstein, L. Foucar, P. Fromme, H. Graafsma, L. Gumprecht, J. Hajdu, C.Y. Hampton, R. Hartmann, A. Hartmann, G. Hauser, H. Hirsemann, P. Holl, M.S. Hunter, L. Johansson, S. Kassemeyer, N. Kimmel, R.A. Kirian, M.N. Liang, F.R.N.C. Maia, E. Malmerberg, S. Marchesini, A.V. Martin, K. Nass, R. Neutze, C. Reich, D. Rolles, B. Rudek, A. Rudenko, H. Scott, I. Schlichting, J. Schulz, M.M. Seibert, R.L. Shoeman, R.G. Sierra, H. Soltau, J.C.H. Spence, F. Stellato, S. Stern, L. Struder, J. Ullrich, X. Wang, G. Weidenspointner, U. Weierstall, C.B. Wunderer and H.N. Chapman. (2012). “Self-terminating diffraction gates femtosecond X-ray nanocrystallography measurements,” Nature Photonics 6, pp. 35-40.
  9. Bastea, S. (2012). “Aggregation kinetics of detonation nanocarbon,” Applied Physics Letters 100, 214106.
  10. Bastea, S., Fried, L.E. (2012). “Chemical Equilibrium Detonation,” Chapter 1 in Shock Wave Science and Technology Reference Library Vol. 6, Detonation Dynamics, Springer-Verlag (Berlin Heidelberg).
  11. Benedict, L.X., Surh, M.P., Castor, J.I., Khairallah, S.A., Whitley, H.D., Richards, D.F., Glosli, J.N., Murillo, M.S., Scullard, C.R., Grabowski, P.E., Michta, D., Graziani, F.R. (2012). “Large-scale molecular dynamics simulations of dense plasmas: The Cimarron Project,” High Energy Density Physics, Volume 8, Iss. 1, Mar., pp. 105–131.
  12. Bennett, J., Abbasi, H., Bremer, P-T., Grout, R., Gyulassy, A., Jin, T., Klasky, S., Kolla, H., Parashar, M., Pascucci, V., Pebay, P., Thompson, D., Yu, H., Zhang, F., Chen, J. (2012). “Combining In-situ and In-transit Processing to Enable Extreme-Scale Scientific Analysis,” Proc. ACM/IEEE Conf. on Supercomputing (SC12), Salt Lake City, UT, Nov. 10–16.
  13. Bhatele, A., Gamblin, T. (2012). “OS/Runtime challenges for dynamic topology aware mapping,” U.S. DOE Exascale Operating Systems and Runtime Research Workshop (ExaOSR), Washington, DC, Oct.
  14. Bhatele, A., Gamblin, T., Isaacs, K.E., Gunney, B.T.N., Schulz, M., Bremer, P-T., Hamann, B. (2012). “Novel Views of Performance Data to Analyze Large-Scale Adaptive Applications,” Proc. of the ACM/IEEE Conf. on Supercomputing (SC12), Salt Lake City, UT, Nov. 10–16.
  16. Bhatele, A., Gamblin, T., Langer, S.H., Bremer, P-T., Draeger, E.W., Hamann, B., Isaacs, K.E., Landge, A.G., Levine, J.A., Pascucci, V., Schulz, M., Still, C.H. (2012). “Heuristics for optimizing communication of bandwidth-bound applications,” Supercomputing 2012 Proc., Salt Lake City, UT, Nov.
  17. Bhatele, A., Gamblin, T., Langer, S.H., Bremer, P-T., Draeger, E.W., Hamann, B., Isaacs, K.E., Landge, A.G., Levine, J.A., Pascucci, V., Schulz, M., Still, C.H. (2012). “Mapping Applications with Collectives over Sub-Communicators on Torus Networks,” Proc. of the ACM/IEEE Conf. on Supercomputing (SC12), Salt Lake City, UT, Nov. 10–16.
  18. Bhatia, H., Norgard, G., Pascucci, V., Bremer, P-T. (2012). “The Helmholtz-Hodge Decomposition - A Survey,” J. IEEE Transactions on Visualization and Computer Graphics, Vol. 99, http://doi.ieeecomputersociety.org/10.1109/TVCG.2012.316.
  19. Bhatia, H., Norgard, G., Pascucci, V., Bremer, P-T. (2012). “Comments on the Meshless Helmholtz-Hodge Decomposition,” J. IEEE Transactions on Visualization and Computer Graphics, Vol. 19, No. 3, pp. 527–528, http://doi.ieeecomputersociety.org/10.1109/TVCG.2012.62.
  20. Bihari, B.L. (2012). “Transactional Memory for Unstructured Mesh Simulations,” J. Sci. Comput., Vol. 54, pp. 311–332.
  21. Bihari, B.L., Wong, M., Wang, A., De Supinski, B.R., Chen, W. (2012). “A Case for Including Transactions in OpenMP II: Hardware Transactional Memory,” Eighth Int. Workshop on OpenMP (IWOMP 2012), Rome, Italy, Jun. 11–13, Lect. Notes Comput. Sci. 7312, pp. 44–58.
  23. Boates, B., Teweldeberhan, A.M., Bonev, S.A. (2012). “Stability of dense liquid carbon dioxide,” Proc. of the National Academy of Sciences, Vol. 109, pp. 14808–14812.
  24. Böhme, D., De Supinski, B.R., Geimer, M., Schulz, M., Wolf, F. (2012). “Scalable Critical-Path Based Performance Analysis,” Twenty Sixth Int. Parallel and Distributed Processing Sym. (IPDPS 2012), Shanghai, China, May 21–25.
  25. Brantley, P.S. (2012). “Incorporation of a Modified Closure in a Monte Carlo Particle Transport Algorithm for Binary Stochastic Media,” Trans. Am. Nuc. Soc., 106, 342, on CD-ROM.
  26. Brooks III, E.D., Szoke, A. (2012). “The Difference Formulation of Radiation Transport in 2-D RZ Geometry (U),” NECDC, Oct. 22–26, Livermore, CA.
  27. Buth, C., Liu, J., Chen, M.H., et al. (2012). “Ultrafast absorption of intense x rays by nitrogen molecules,” J. of Chemical Physics, Vol. 136, Iss. 21, 214310.
  28. Caleman, C., Timneanu, N., Martin, A.V., White, T.A., Scott, H.A., Barty, A., Aquila, A., Chapman, H.N. (2012). “Modeling of XFEL induced ionization and atomic displacement in protein nanocrystals,” X-Ray Free-Electron Lasers - Beam Diagnostics, Beamline Instrumentation, and Applications, SPIE Vol. 8504, 85040H.
  29. Casas, M., De Supinski, B.R., Schulz, M., Bronevetsky, G. (2012). “Fault Resilience of the Algebraic Multi-Grid Solver,” Twenty Sixth Int. Conf. on Supercomputing (ICS 2012), Venice, Italy, June 25–29.
  30. Chiang, W.-F., Gopalakrishnan, G., Rakamaric, Z., Ahn, D.H., Lee, G.L. (2012). “Determinism and Reproducibility in Large-Scale HPC Systems,” 4th Workshop on Determinism and Correctness in Parallel Programming (WoDET), Houston, Texas, Mar.
  31. Cho, B.I., Heimann, P.A., Engelhorn, K., Feng, J., Glover, T.E., Hertlein, M.P., Ogitsu, T., Weber, C.P., Correa, A.A., Falcone, R.W. (2012). “Picosecond Single-Shot X-ray Absorption Spectroscopy for Warm and Dense Matter,” J. of Synchrotron Radiation News, Vol. 8, Iss. 3, pp. 303–306.
  32. Cone, K.V., Dunn, J., Baldis, H.A., May, M.J., Purvis, M.A., Scott, H.A., Schneider, M.B. (2012). “Time-resolved soft x-ray spectra from laser-produced Cu plasma,” Rev. Sci. Instr. 83, 10E138.
  33. Ahn, D.H., Brim, M.J., De Supinski, B.R., Gamblin, T., Lee, G.L., LeGendre, M.P., Miller, B.P., Moody, A., Schulz, M. (2012). “Efficient and Scalable Retrieval Techniques for Global File Properties,” Proc. Int. Parallel and Distributed Processing Sym. (IPDPS’13), Boston, MA, May 20–24.
  34. Dobrev, V., Ellis, T., Kolev, T., Rieben, R. (2012). “High-order curvilinear finite elements for axisymmetric Lagrangian hydrodynamics,” Computers and Fluids, Jun., http://dx.doi.org/10.1016/j.compfluid.2012.06.004.
  35. Dobrev, V., Kolev, T., Rieben, R. (2012). “High-order curvilinear finite element methods for Lagrangian hydrodynamics,” SIAM J. on Scientific Computing, Vol. 34, No. 5, pp. B606–B641, Sep.
  36. Escher, J.E., Burke, J.T., Dietrich, F.S., Scielzo, N.D., Thompson, I.J., Younes, W. (2012). “Compound-nuclear Reaction Cross Sections From Surrogate Measurements,” Rev. Mod. Physics 84, 353.
  37. Fried, L.E., Zepeda-Ruiz, L., Howard, W.M. (2012). “The Role of Viscosity in TATB hot spot ignition,” 7th Biennial Conf. of the American-Physical-Society-Topical-Group on Shock Compression of Condensed Matter, Chicago, IL, AIP Conf. Proc., Vol. 1426.
  38. Gaffney, J.A., Clark, D., Sonnad, V., Libby, S.B. (2012). “Bayesian inference of inaccuracies in radiation transport physics from inertial confinement fusion experiments,” J. of Quantitative Spectroscopy and Radiative Transfer, Radiative Properties of Hot Dense Matter, Santa Barbara, CA, Oct.
  39. Gahvari, H., Gropp, W., Jordan, K.E., Schulz, M., Yang, U.M. (2012). “Modeling the Performance of an Algebraic Multigrid Cycle Using Hybrid MPI/OpenMP,” Proc. of the 41st Int. Conf. on Parallel Processing (ICPP), Sep.
  40. Gaither, K., Childs, H., Schulz, K., Harrison, C., Barth, W., Donzis, D., Yeung, P. (2012). “Visual Analytics for Finding Critical Structures in Massive Time-Varying Turbulent-Flow Simulations,” Computer Graphics and Applications, IEEE, Vol. 32, No. 4, pp. 34–45, Jul.–Aug.
  41. Gatu Johnson, M., Frenje, J.A., Casey, D.T., Li, C.K., Séguin, F.H., Petrasso, R., Ashabranner, R., Bionta, R.M., Bleuel, D.L., Bond, E.J., Caggiano, J.A., Carpenter, A., Cerjan, C.J., Clancy, T.J., Doeppner, T., Eckart, M.J., Edwards, M.J., Friedrich, S., Glenzer, S.H., Haan, S.W., Hartouni, E.P., Hatarik, R., Hatchett, S.P., Jones, O.S., Kyrala, G., Le Pape, S., Lerche, R.A., Landen, O.L., Ma, T., MacKinnon, A.J., McKernan, M.A., Moran, M.J., Moses, E., Munro, D.H., McNaney, J., Park, H.S., Ralph, J., Remington, B., Rygg, J.R., Sepke, S.M., Smalyuk, V., Spears, B., Springer, P.T., Yeamans, C.B., Farrell, M., Jasion, D., Kilkenny, J.D., Nikroo, A., Paguio, R., Knauer, J.P., Yu Glebov, V., Sangster, T.C., Betti, R., Stoeckl, C., Magoon, J., Shoup, III, M.J., Grim, G.P., Kline, J., Morgan, G.L., Murphy, T.J., Leeper, R.J., Ruiz, C.L., Cooper, G.W., Nelson, A.J. (2012). “Neutron spectrometry—An essential tool for diagnosing implosions at the National Ignition Facility,” Rev. Sci. Instrum. 83, 10D308.
  42. Gentile, N.A. (2012). “Material Motion Corrections for Implicit Monte Carlo Radiation Transport,” Proc. of the NECDC.
  43. Goldman, N., Fried, L.E. (2012). “Extending the Density Functional Tight Binding Method to Carbon Under Extreme Conditions,” J. Phys. Chem. C, Vol. 116, pp. 2198–2204.
  44. Gopalakrishnan, G., Mueller, M.S., Lecomber, D., de Supinski, B.R., Hilbrich, T. (2012). “Debugging MPI and Hybrid-Heterogenous Applications at Scale,” SC2012, Salt Lake City, Utah, Nov. 11–16.
  45. Gorelli, F.A., Elatresh, S.F., Guilaume, C.L., Marques, M., Ackland, G.J., Santoro, M., Bonev, S.A., Gregoryanz, E. (2012). “On lattice dynamics of dense lithium,” Physical Review Letters, Vol. 108, pp. 055501–4.
  46. Grondalski, J. (2012). “A Monte-Carlo Implementation of Brown Preston Singleton (BPS) Charged Particle Stopping Power,” NECDC Proc.
  47. Grout, R.W., Gruber, A., Kolla, H., Bremer, P-T., Bennett, J., Gyulassy, A., Chen, J.H. (2012). “A direct numerical simulation study of turbulence and flame structure in transverse jets analysed in jet-trajectory based coordinates,” J. of Fluid Mechanics, Vol. 706, pp. 351–383, http://dx.doi.org/10.1017/S0022112012002571, http://dx.doi.org/10.1017/jfm.2012.257.
  48. Gyulassy, A., Bremer, P-T., Pascucci, V. (2012). “Computing Morse-Smale Complexes with Accurate Geometry,” J. IEEE Transactions on Visualization and Computer Graphics, Vol. 18, No. 12, pp. 2014–2022.
  49. Gyulassy, A., Peterka, T., Pascucci, V., Ross, R. (2012). “Characterizing the Parallel Computation of Morse-Smale Complexes,” Proc. IPDPS '12, Shanghai, China.
  50. Hansen, C.E., Klein, R.I., McKee, C.F., Fisher, R.T. (2012). “Feedback Effects on Low-mass Star Formation,” Astrophysical J., 747, 22.
  51. Hardin, D. (2012). “2-D Geometric Approximations,” NECDC Proc., Oct. 22–26, Livermore, CA.
  52. Harrison, C. (2012). “Data Binning in VisIt: A surprisingly flexible analysis tool,” Proc. of Nuclear Explosive Code Development Conf. (NECDC), Oct.
  53. Harrison, C., Krishnan, H. (2012). “Python's role in VisIt,” Proc. of the eleventh annual Scientific Computing with Python Conference (SciPy 2012), Jul.
  54. Harrison, C., Navrátil, P., Moussalem, M., Jiang, M., Childs, H. (2012). “Efficient Dynamic Derived Field Generation on Many-Core Architectures Using Python,” Python for High Performance and Scientific Computing (PyHPC 2012), Nov., online.
  55. Harrison, C., Navrátil, P.A., Moussalem, M., Jiang, M., Childs, H. (2012). “Efficient Dynamic Derived Field Generation on Many-Core Architectures Using Python,” Proc. of Workshop on Python for High Performance and Scientific Computing (PyHPC), Nov. 16.
  56. Haskins, J.B., Moriarty, J.A., Hood, R.Q. (2012).  “Polymorphism and Melt in High-Pressure Tantalum,” Physical Review B, Vol. 86, No. 22, pp. 224104(1–18).
  57. Hau-Riege, S.P., Weisheit, J., Castor, J.I., London, R.A., Scott, H., Richards, D.F. (2012). “Modeling quantum processes in classical molecular dynamics simulations of dense plasmas,” New J. of Physics, Vol. 15, Jan.
  58. Hilbrich, T., Mueller, M.S., De Supinski, B.R., Schulz, M., Nagel, W.E. (2012). “GTI: A Generic Tools Infrastructure for Event Based Tools in Parallel Systems,” Twenty Sixth Int. Parallel and Distributed Processing Sym. (IPDPS 2012), Shanghai, China, May 21–25.
  59. Hilbrich, T., Protze, J., Schulz, M., De Supinski, B.R., Mueller, M.S. (2012). “MPI Runtime Error Detection with MUST: Advances in Deadlock Detection,” SC2012, Salt Lake City, Utah, Nov. 11–16.
  60. Ho, C.-H., De Kruijf, M., Sankaralingam, K., Rountree, B., Schulz, M., De Supinski, B.R. (2012). “Mechanisms and Evaluation of Cross-Layer Fault-Tolerance for Supercomputing,” The Forty First Int. Conf. on Parallel Processing (ICPP-12), Pittsburgh, PA, Sep. 10–13.
  61. Hoffman, R.D., Zimmerman, G., Chen, M., Cerjan, C. (2012). “Nuclear Plasma Interactions on the National Ignition Facility,” SSQ, Vol. 2, No. 4, pp. 5–7, Feb.
  62. Hornung, R., Anderson, R., Elliott, N., Gunney, B., Pudliner, B., Ryujin, B., Wickett, M. (2012). “Building Adaptive Mesh Refinement into a Multiphysics Code: Software Design Issues and Solutions,” Proc. NECDC, Livermore, CA, Oct. 22–26.
  63. Hurricane, O.A., Smalyuk, V.A., Raman, K., Schilling, O., Hansen, J.F., Langstaff, G., Martinez, D., Park, H.-S., Remington, B.A., Robey, H.F., Greenough, J.A., Wallace, R., Di Stefano, C.A., Drake, R.P., Marion, D., Krauland, C.M., Kuranz, C.C. (2012). “Validation of a Turbulent Kelvin-Helmholtz Shear Layer Model Using a High-Energy-Density OMEGA Laser Experiment,” Physical Review Letters 109, 155004-1–155004-4. http://prl.aps.org/abstract/PRL/v109/i15/e155004.
  64. Iglesias, C.A. (2012). “Partially resolved transition array model in intermediate coupling,” High Energy Density Physics 8, 154.
  65. Iglesias, C.A., Sonnad V. (2012). “Statistical line-by-line model for atomic spectra in intermediate coupling,” High Energy Density Physics 8, 253.
  66. Iglesias, C.A., Sonnad, V. (2012). “Partially resolved transition array model for atomic spectra,” High Energy Density Physics 8, 260.
  67. Iglesias, C.A., Sonnad, V. (2012). “Partially Resolved Transition Array Model for Atomic Spectra,” High Energy Density Physics 7, Jan., online.
  68. Islam, T., Mohror, K., Bagchi, S., Moody, A., De Supinski, B.R., Eigenmann, R. (2012). “MCRENGINE: A Scalable Checkpointing System Using Data-Aware Aggregation and Compression,” SC2012, Salt Lake City, Utah, Nov. 11–16.
  69. Islam, T., Mohror, K., Bagchi, S., Moody, A., De Supinski, B.R., Eigenmann, R., (2012). “mcrEngine: A Scalable Checkpointing System using Data-Aware Aggregation and Compression,” Supercomputing 2012, Salt Lake City, UT, Nov.
  70. Johnson, B.M., Katz, J.I., Schilling, O. (2012). “A von Neumann–Smagorinsky turbulent transport model for stratified shear flows,” Int. J. of Computational Fluid Dynamics 26, 173–179, http://www.tandfonline.com/doi/abs/10.1080/10618562.2012.670226.
  71. Jones, O.S., C.J. Cerjan, M.M. Marinak, J.L. Milovich, H.F. Robey, P.T. Springer, L.R. Benedetti, D.L. Bleuel, E.J. Bond, D.K. Bradley, D.A. Callahan, J.A. Caggiano, P.M. Celliers, D.S. Clark, S.M. Dixit, T. Doppner, R.J. Dylla-Spears, E.G. Dzenitis, D.R. Farley, S.M. Glenn, S.H. Glenzer, S.W. Haan, B.J. Haid, C.A. Haynam, D.G. Hicks, B.J. Kozioziemski, K.N. LaFortune, O.L. Landen, E.R. Mapoles, A.J. MacKinnon, J.M. McNaney, N.B. Meezan, P.A. Michel, J.D. Moody, M.J. Moran, D.H. Munro, M.V. Patel, T.G. Parham, J.D. Sater, S.M. Sepke, B.K. Spears, R.P.J. Town, S.V. Weber, K. Widmann, C.C. Widmayer, E.A. Williams, L.J. Atherton, M.J. Edwards, J.D. Lindl, B.J. MacGowan, L.J. Suter, R.E. Olson, H.W. Herrmann, J.L. Kline, G.A. Kyrala, D.C. Wilson, J. Frenje, T.R. Boehly, V. Glebov, J.P. Knauer, A. Nikroo, H. Wilkens, J.D. Kilkenny (2012). “A high-resolution integrated model of the National Ignition Campaign cryogenic layered experiments,” Phys. Plasmas 19, 056315.
  72. Kandalla, K., Yang, U.M., Keasler, J., Kolev, T., Moody, A., Subramoni, H., Tomko, K., Vienne, J., De Supinski, B.R., Panda, D.K. (2012). “Designing Non-blocking Allreduce with Collective Offload on InfiniBand Clusters: A Case Study with Conjugate Gradient Solvers,” Twenty Sixth Int. Parallel and Distributed Processing Sym. (IPDPS2012), Shanghai, China, May 21–25.
  73. Kang, J., Zhu, J., Wei, S.H., Schwegler, E., Kim, Y.H. (2012). “Persistent medium-range order and anomalous liquid properties of Al1-xCux alloys,” Physical Review Letters, Vol. 108, 115901.
  74. Kang, K., Bulatov, V.V., Cai, W. (2012). “Singular orientations and faceted motion of dislocations in body-centered cubic crystals,” Proc. of the National Academy of Sciences, Vol. 109, No. 38, pp. 15174–15178.
  75. Karlin, I., Bhatele, A., Keasler, J., Chamberlain, B.L., Cohen, J., DeVito, Z., Haque, R., Laney, D., Luke, E., Wang, F., Richards, D., Schulz, M., Still, C.H. (2012). “Exploring Traditional and Emerging Parallel Programming Models using a Proxy Application,” IEEE Int. Parallel & Distributed Processing Sym.
  76. Kim, J., Esler, K.P., McMinis, J., Morales, M.A., Clark, B.K., Shulenburger, L., Ceperley, D.M. (2012). “Hybrid Algorithms in Quantum Monte Carlo,” J. of Physics: Conf. Series, Vol. 402, 012008.
  77. Kimpe, D., Mohror, K., Moody, A., Van Essen, B., Gokhale, M., Iskra, K., Ross, R., De Supinski, B.R. (2012). “Integrated In-System Storage Architecture for High Performance Computing,” Int. Workshop on Runtime and Operating Systems for Supercomputers (ROSS) 2012, Venice, Italy, June 29.
  78. Koch, J.A., Stewart, R.E., Beiersdorfer, P., Shepherd, R., Schneider, M.B., Miles, A.R., Scott, H.A., Smalyuk, V.A., Hsing, W.W. (2012). “High-resolution spectroscopy for Doppler-broadening ion temperature measurements of implosions at the National Ignition Facility,” Rev. Sci. Instr. 83, 10E127.
  79. Laguna, I., Ahn, D.H., De Supinski, B.R., Bagchi, S., Gamblin, T. (2012). “Probabilistic Diagnosis of Performance Faults in Large-Scale Parallel Applications,” Twenty First Int. Conf. on Parallel Architectures and Compilation Techniques (PACT-2012), Minneapolis, MN, Sep. 19–23.
  80. Landge, A.G., Levine, J.A., Isaacs, K.E., Bhatele, A., Gamblin, T., Schulz, M., Langer, S.H., Bremer, P-T., Hamann, B., Pascucci, V. (2012). “Visualizing Network Traffic to Understand the Performance of Massively Parallel Simulations,” J. IEEE Transactions on Visualization and Computer Graphics, Vol. 18, No. 12, pp. 2467–2476.
  81. Landge, A.G., Levine, J.A., Isaacs, K.E., Bhatele, A., Gamblin, T., Schulz, M., Langer, S.H., Bremer, P-T., Pascucci, V. (2012). “Visualizing network traffic to understand the performance of massively parallel simulations,” IEEE Sym. on Information Visualization (INFOVIS’12), Seattle, WA, Oct. 14–19.
  82. Langer, S., Bhatele, A., Gamblin, T., Still, B., Hinkel, D., Kumbera, M., Langdon, B., Williams, E. (2012). “Simulating Laser-Plasma Interaction in Experiments at the National Ignition Facility on a Cray XE6,” Cray Users Group (CUG 2012), Stuttgart, Germany, Apr. 29–May 3.
  83. Langer, S., Boyd, W. (2012). “Improving application performance using hardware performance counters,” Proc. of the NECDC.
  84. Li, J., Zhou, J., Ogitsu, T., Ping, Y., Ware, W.D., Cao, J. (2012). “Probing the warm dense copper nano-foil with ultrafast electron shadow imaging and deflectometry,” High Energy Density Physics, Vol. 8, Iss. 3, pp. 298–302.
  85. Liu, S., Levine, J.A., Bremer, P-T., Pascucci, V. (2012). “Gaussian Mixture Model Based Volume Rendering,” Proc. IEEE Sym. On Large-Scale Data Analysis and Visualization, pp. 73–77, Oct.
  86. Lyberis, S., Pratikakis, P., Nikolopoulos, D., Schulz, M., Gamblin, T., De Supinski, B.R. (2012). “The myrmics memory allocator: Hierarchical, message-passing allocation for global address spaces,” Int. Sym. on Memory Management (ISMM’12), Beijing, China, Jun. 15–16.
  87. Mallik, B.S., Kuo, I-F.W., Fried, L.E., Siepmann, J.I. (2012). “Understanding the solubility of triamino trinitro benzene in hydrous tetramethylammonium fluoride: A first principles molecular dynamics study,” Physical Chemistry Chemical Physics, 14, 4884–4890.
  88. Manaa, M.R., Fried, L.E. (2012). “Nearly Equivalent Inter- and Intramolecular Hydrogen Bonding in 1,3,5-Triamino-2,4,6-trinitrobenzene at High Pressure,” J. of Physical Chemistry C, 116, 2116.
  89. Manaa, M.R., Yoo, C-S., Reed, E.J., Strano, M.S. (2012). “Advances in Energetic Materials Research,” MRS Sym. Proc., Vol. 1405, MRS Online Proc. Library, USA.
  90. Managan, R.A. (2012). “Using Tabular EOS at Low Temperatures,” Proc. of the NECDC.
  91. McLendon, W.C., Bansal, G., Bremer, P-T., Chen, J., Kolla, H., Bennett, J. (2012). “On The Use of Graph Search Techniques for The Analysis of Extreme-scale Combustion Simulation Data,” Proc. IEEE Sym. Large-Scale Data Analysis and Visualization, pp. 57–63, Oct.
  92. McMahon, J.M., Morales, M.A., Pierleoni, C., Ceperley, D.M. (2012). “The properties of hydrogen and helium under extreme conditions,” Rev. Mod. Phys., Vol. 84, No. 4, p. 1607.
  93. Miles, A.R., Chung, H.K., Heeter, R., Hsing, W., Koch, J.A., Park, H.-S., Robey, H.F., Scott, H.A., Tommasini, R., Frenje, J., Li, C.K., Petrasso, R., Glebov, V., Lee, R.W. (2012). “Numerical simulation of thin-shell direct drive DHe3-filled capsules fielded at OMEGA,” Phys. Plasmas 19, 072702.
  94. Mirin, A.A., Richards, D.F., Glosli, J.N., Draeger, E.W., Chan, B., Fattebert, J., Krauss, W.D., Oppelstrup, T., Rice, J.J., Gunnels, J.A., Gurev, V., Kim, C., Magerlein, J., Reumann, M., Wen, H.-F. (2012). “Toward Real-Time Modeling of Human Heart Ventricles at Cellular Resolution: Simulation of Drug-Induced Arrhythmias,” Proc. of the ACM/IEEE Supercomputing 2012 Conf.
  95. Mohror, K., Karavanic, K.L. (2012). “Trace Profiling: Scalable Event Tracing on High-End Parallel Systems,” Parallel Computing, Vol. 38, No. 4–5, pp. 194–225, Apr.–May.
  96. Mohror, K., Moody, A., De Supinski, B.R. (2012). “Asynchronous Checkpoint Migration with MRNet in the Scalable Checkpoint/Restart Library,” Second Workshop on Fault Tolerance for HPC at Extreme Scale (FTXS 2012), Boston, MA, Jun. 25.
  97. Molitoris, J.D., Batteux, J.D., Garza, R.G., Tringe, J.W., Souers, P.C., Forbes, J.W. (2012). “Mix and instability growth from oblique shock,” 7th Biennial Conference of the American Physical Society Topical Group on Shock Compression of Condensed Matter, Chicago, IL, Jun. 26–Jul. 1, AIP Conf. Proc., Vol. 1426.
  98. Morales, M.A., Benedict, L.X., Clark, D.S., Schwegler, E., Tamblyn, I., Bonev, S.A., Correa, A.A., Haan, S.W. (2012). “Ab initio calculations of the equation of state of hydrogen in a regime relevant for inertial fusion applications,” High Energy Density Physics, Vol. 8, No. 1, pp. 5–12.
  99. Morales, M.A., McMinis, J., Clark, B.K., Kim, J., Scuseria, G.E. (2012). “Multi-Determinant Wave-functions in Quantum Monte Carlo,” J. Chem. Theory Comput., Vol. 8, No. 7, pp. 2181–2188.
  100. Morán-López, T., Holloway, J.P., Schilling, O. (2012). “Application of a K-ε Turbulence Model to Reshocked Richtmyer-Meshkov Instability Corresponding to a Heavy-to-Light Gas Transition,” Proc. of the Seventh Int. Conf. on Computational Fluid Dynamics (ICCFD7), Big Island, HI, Jul. 9–13, ICCFD7-3704.
  101. Moriarty, J.A., Hood, R.Q., Yang, L.H. (2012). “Quantum-Mechanical Interatomic Potentials with Electron Temperature for Strong-Coupling Transition Metals,” Physical Review Letters, Vol. 108, No. 3, pp. 036401(1–4).
  102. Mousavi, S.E., Pask, J.E., Sukumar, N. (2012). “Efficient adaptive integration of functions with sharp gradients and cusps in n-dimensional parallelepipeds,” Int. J. Numer. Meth. Engng. 91, pp. 343–357.
  103. Murphy, B.F., Fang, L., Chen, M.-H., et al. (2012). “Multiphoton L-shell ionization of H2S using intense x-ray pulses from a free-electron laser,” Physical Review A, Vol. 86, Iss. 5, 053423.
  104. Najjar, F.M., Bazan, G., Cavallo, R. (2012). “Sensitivity Study in Ejecta Modeling,” NECDC Proc., Oct. 22–26, Livermore, CA.
  105. Najjar, F.M., Howard, W.M., Fried, L.E., Manaa, M.R., Nichols, A., Levesque, G. (2012). “Computational study of 3-D hot-spot initiation in shocked insensitive high-explosive,” 7th Biennial Conf. of the American Physical Society Topical Group on Shock Compression of Condensed Matter, Chicago, IL, Jun. 26–Jul. 1, AIP Conf. Proc., Vol. 1426.
  106. Norgard, G., Bremer, P-T. (2012). “Second Derivative Ridges are Straight Lines and the Implications for Computing Lagrangian Coherent Structures,” J. Physica D: Nonlinear Phenomena, Vol. 241, Iss. 18, pp. 1475–1476, Sep.
  107. O'Brien, M., Dawson, S., Brantley, P. (2012). “Scalable Algorithms for Monte Carlo Particle Transport,” NECDC, Oct. 22–26, Livermore, CA.
  108. Ogitsu, T., Ping, Y., Correa, A., Cho, B.-I., Heimann, P., Schwegler, E., Cao, J., Collins, G. W. (2012). “Ballistic electron transport in non-equilibrium warm dense gold,” High Energy Density Physics, Vol. 8, Iss. 3, pp. 303–306.
  109. Ogitsu, T., Schwegler, E. (2012). “The alpha-beta phase boundary of elemental boron,” Solid State Sciences, Vol. 14, Iss. 11–12, pp. 1598–1600.
  110. Olivier, S., De Supinski, B.R., Schulz, M., Prins, J.F. (2012). “Characterizing and Mitigating Work Time Inflation in Task Parallel Programs,” SC2012, Salt Lake City, Utah, Nov. 11–16.
  111. Park, H.-S., Barton, N.R., Belof, J.L., Blobaum, K.J.M., Cavallo, R.M., Comley, A.J., Maddox, B.R., May, M.J., Pollaine, S.M., Prisbrey, S.T., Remington, B.A., Rudd, R.E., Swift, D.W., Wallace, R.J., Wilson, M.J., Nikroo, A., Giraldez, E. (2012). “Experimental results of tantalum material strength at high pressure and high strain rate,” Proc. of the Conf. of the American Physical Society Topical Group on Shock Compression of Condensed Matter, pp. 1371–1374, ISBN: 978-0-7354-1006-0.
  112. Pask, J.E., Sukumar, N., Mousavi, S.E. (2012). “Linear scaling solution of the all-electron Coulomb problem in solids,” Int. J. Multiscale Computational Engng., Vol. 10, pp. 83–99.
  113. Pearce, O., Gamblin, T., De Supinski, B.R., Schulz, M., Amato, N.M. (2012). “Quantifying the Effectiveness of Load Balance Algorithms,” Int. Conf. on Supercomputing (ICS’12), Venice, Italy, June 25–29.
  114. Pearce, O., Gamblin, T., Schulz, M., De Supinski, B.R., Amato, N.M. (2012). “Quantifying the Effectiveness of Load Balance Algorithms,” Twenty Sixth Int. Conf. on Supercomputing (ICS 2012), Venice, Italy, June 25–29.
  115. Phipps, C.R., Baker, K.L., Libby, S.B., Liedahl, D.A., Olivier, S.S., Pleasance, L.D., Rubenchik, A., Trebes, J.E., George, E.V., Marcovici, B., Reilly, J.P., Valley, M.T. (2012). “Removing Orbital Debris With Lasers,” Advances in Space Research, Vol. 49, Iss. 9, May 1, pp. 1283–1300.
  116. Phipps, C.R., Baker, K.L., Libby, S.B., Liedahl, D.A., Olivier, S.S., Pleasance, L.D., Rubenchik, A., Trebes, J.E., George, E.V., Marcovici, B., Reilly, J.P., Valley, M.T., (2012). “Removing Orbital Debris with Pulsed Lasers,” Int. High-Power Laser Ablation Conference, Santa Fe, NM, April 30–May 3, AIP Conf. Proc. 1464, pp. 468-480.
  117. Protze, J., Hilbrich, T., Knüpfer, A., De Supinski, B.R., Mueller, M.S. (2012). “Holistic Debugging of MPI Derived Datatypes,” Twenty Sixth Int. Parallel and Distributed Processing Sym. (IPDPS 2012), Shanghai, China, May 21–25.
  118. Reed, E.J., Rodriguez, A.W., Manaa, M.R., Fried, L.E., Tarver, C.M. (2012). “Ultrafast detonation of hydrazoic acid (HN3),” Physical Review Letters, 109, 038301.
  119. Regan, S.P., R. Epstein, B.A. Hammel, L.J. Suter, J. Ralph, H. Scott, M.A. Barrios, D.K. Bradley, D.A. Callahan, C. Cerjan, G.W. Collins, S.N. Dixit, T. Doeppner, M.J. Edwards, D.R. Farley, S. Glenn, S.H. Glenzer, I.E. Golovkin, S.W. Haan, A. Hamza, D.G. Hicks, N. Izumi, J.D. Kilkenny, J.L. Kline, G.A. Kyrala, O.L. Landen, T. Ma, J.J. MacFarlane, R.C. Mancini, R.L. McCrory, N.B. Meezan, D.D. Meyerhofer, A. Nikroo, K.I. Peterson, T.C. Sangster, P. Springer, R.P.J. Town (2012). “Hot-spot mix in ignition-scale implosions on the NIF,” Phys. Plasmas 19, 056307.
  120. Remington, B.A., Rudd, R.E., Barton, N.R., Cavallo, R.M., Park, H.-S., Belof, J., Comley, A.J., Maddox, B.R., May, M.J., Pollaine, S.M., Prisbrey, S.T. (2012). “Interpretation of laser-driven V and Ta Rayleigh-Taylor strength experiments,” Proc. of the Conf. of the American Physical Society Topical Group on Shock Compression of Condensed Matter, Vol. 1426, ISBN: 978-0-7354-1006-0.
  121. Rountree, B., Ahn, D.H., De Supinski, B.R., Lowenthal, D.K., Schulz, M. (2012). “Beyond DVFS: A First Look at Performance Under a Hardware-Enforced Power Bound,” Eighth Int. Workshop on High Performance Power-Aware Computing (HPPAC 2012), Shanghai, China, May 21.
  122. Saad, K.A. (2012). “Energy Conservation in Safety Problems,” NECDC, Oct. 22–26, Livermore, CA.
  123. Sato, K., Mohror, K., Moody, A., De Supinski, B.R., Gamblin, T., Maruyama, N., Matsuoka, S. (2012). “Design and Modeling of a Non-blocking Checkpointing System,” SC2012, Salt Lake City, Utah, Nov. 11–16.
  124. Sato, K., Moody, A., Mohror, K., Gamblin, T., De Supinski, B.R., Maruyama, N., Matsuoka, S. (2012). “Design and Modeling of a Non-blocking Checkpointing System,” Supercomputing 2012, Salt Lake City, UT, Nov. 10–16.
  125. Sato, K., Moody, A., Mohror, K., Gamblin, T., De Supinski, B.R., Maruyama, N., and Matsuoka, S. (2012). “Towards a light-weight non-blocking checkpointing system,” HPC in Asia Workshop 2012, Hamburg, Germany, Jun.
  126. Sato, K., Moody, A., Mohror, K., Gamblin, T., De Supinski, B.R., Maruyama, N., Matsuoka, S. (2012). “Design and modeling of a non-blocking checkpoint system,” ATIP—A*CRC Workshop on Accelerator Technologies in High Performance Computing, May 7–10.
  127. Schilling, O. (2012). “Progress Towards Comparative Studies of Turbulence Models Applied to Rayleigh-Taylor and Richtmyer-Meshkov Unstable Flows,” Proc. of the NECDC 2012.
  128. Schindewolf, M., Bihari, B.L., Gyllenhaal, J., Schulz, M., Wang, A., Karl, W. (2012). “What Scientific Applications Can Benefit from Hardware Transactional Memory,” Int. Conf. for High Performance Computing, Networking, Storage and Analysis (SC12), Salt Lake City, UT, Nov. 10–16.
  129. Schindewolf, M., Schulz, M., Gyllenhaal, J., Bihari, B., Wang, A., Karl, W. (2012). “What Scientific Applications Can Benefit from Hardware Transactional Memory?” SC2012, Salt Lake City, Utah, Nov.
  130. Schleife, A., Draeger, E.W., Kanai, Y., Correa, A.A. (2012). “Plane-wave pseudopotential implementation of explicit integrators for time-dependent Kohn-Sham equations in large-scale simulations,” J. Chem. Phys., 137, 22A546.
  131. Schofield, S.P., Christon, M.A. (2012). “Effects of element order and interface reconstruction in FEM/volume-of-fluid incompressible flow simulations,” Int. J. for Numerical Methods in Fluids, Vol. 68, pp. 1422–1437.
  132. Schulz, M., Galarowicz, J., Maghrak, D., Montoya, D., Rajan, M., LeGendre, M. (2012). “How to Analyze the Performance of Parallel Codes 101,” SC2012, Salt Lake City, Utah, Nov.
  133. Schulz, M., Hoefler, T. (2012). “Next Generation MPI Programming: Advanced MPI-2 & New Features in MPI-3,” Int. Supercomputing Conf. (ISC) 2012, Hamburg, Germany, June.
  134. Schulz, M., Mohr, B., Wylie, B. (2012). “Supporting Code Developments on Extreme-scale Computer Systems,” Int. Supercomputing Conf. (ISC) 2012, Hamburg, Germany, June.
  135. Schulz, M., Mohr, B., Wylie, B. (2012). “Supporting Performance Analysis and Optimization on Extreme-Scale Computer Systems,” SC2012, Salt Lake City, Utah, Nov. 
  136. Schunck, N., Dobaczewski, J., McDonnell, J., Satuła, W., Sheikh, J.A., Staszczak, A., Stoitsov, M., Toivanen, P. (2012). “Solution of the Skyrme-Hartree-Fock-Bogolyubov equations in the Cartesian deformed harmonic-oscillator basis,” Comp. Phys. Comm., Vol. 183, p. 166.
  137. Scogland, T.R.W., Rountree, B., Feng, W., de Supinski, B.R. (2012). “Heterogeneous Task Scheduling for Accelerated OpenMP,” Twenty Sixth Int. Parallel and Distributed Processing Sym. (IPDPS2012), Shanghai, China, May 21–25.
  138. Smalyuk, V.A., Hansen, J.F., Hurricane, O.A., Langstaff, G., Martinez, D., Park, H.-S., Raman, K., Remington, B.A., Robey, H.F., Schilling, O., Wallace, R., Elbaz, Y., Shimony, A., Shvarts, D., Di Stefano, C., Drake, R.P., Marion, D., Krauland, C.M., Kuranz, C.C. (2012). “Experimental observations of turbulent mixing due to Kelvin–Helmholtz instability on the OMEGA Laser Facility,” Physics of Plasmas 19, 092702-1–092702-8, http://pop.aip.org/resource/1/phpaen/v19/i9/p092702_s1.
  139. Soderlind, P., Grabowski, B., Yang, L., Landa, A., Bjorkman, T., Souvatzis, P., Eriksson, O. (2012). “High-temperature phonon stabilization of γ-uranium from relativistic first-principles theory,” Phys. Rev. B, Vol. 85, Iss. 6, 060301, Feb.
  140. Song, Y., Manaa, M.R. (2012). “New Trends in Chemistry and Materials Science in Extremely Tight Space,” J. of Physical Chemistry C, 116, 2059.
  141. Spears, B.K., Glenzer, S., Edwards, M.J., Brandon, S., Clark, D., Town, R., Cerjan, C., Dylla-Spears, R., Mapoles, E., Munro, D., Salmonson, J., Sepke, S., Weber, S., Hatchett, S., Haan, S., Springer, P., Moses, E., Kline, J., Kyrala, G., Wilson, D. (2012). “Performance metrics for inertial confinement fusion implosions: Aspects of the technical framework for measuring progress in the National Ignition Campaign,” Phys. Plasmas 19, 056316.
  142. Stewart, C., Najjar, F.M., Stewart, D.S., Bdzil, J. (2012). “Computational Meso-Scale Study of Representative Unit Cubes for Inert Spheres Subject to Intense Shocks,” 65th Annual Meeting of the APS Division of Fluid Dynamics, Vol. 57, No. 17, Nov. 18–20, San Diego, CA.
  143. Stewart, D.S., Fried, L.E., Szuck, M. (2012). “Detonation theory for condensed phase explosives with anisotropic properties,” 7th Biennial Conference of the American Physical Society Topical Group on Shock Compression of Condensed Matter, Chicago, IL, Jun. 26–Jul. 1, AIP Conf. Proc., Vol. 1426.
  144. Stewart, D.S., Glumac, N., Najjar, F.M., Szuck, M.J. (2012). “Hydrodynamics Computations of Jet Formation and Penetration for Micro-Shaped Charges,” The 12th Hypervelocity Impact Sym., Procedia Engineering, Baltimore, MD.
  145. Su, C-Y., Li, D., Nikolopoulos, D.S., Cameron, K., De Supinski, B.R., Leon, E.A. (2012). “Model-Based, Memory-Centric Performance and Power Optimization on NUMA Multiprocessors,” 2012 IEEE Int. Sym. on Workload Characterization (IISWC 2012), San Diego, CA, Nov. 4–6.
  146. Swift, D.C., Eggert, J.H., Hicks, D.G., Hamel, S., Caspersen, K., Schwegler, E., Collins, G.W., Nettelmann, N., Ackland, G.J. (2012). “Mass-radius relationships for exoplanets,” Astrophysical J., Vol. 744, 59.
  147. Terboven, C., Duran, A., Klemm, M., van der Pas, R., De Supinski, B.R. (2012). “Advanced OpenMP Tutorial,” SC2012, Salt Lake City, Utah, Nov. 11–16.
  148. Terry, M.R., Perkins, L.J., Sepke, S.M. (2012). “Design of a deuterium and tritium-ablator shock ignition target for the National Ignition Facility,” Phys. Plasmas 19, 112705.
  149. Teweldeberhan, A.M., Dubois, J.L., Bonev, S.A. (2012). “Stability of the high-pressure phases of CaTiO3 perovskite at finite temperature,” Physical Review B, Vol. 86, 064104.
  150. Trahan, T.J., Gentile, N.A. (2012). “Analytic Treatment of Source Photon Emission to Reduce Noise in Implicit Monte Carlo Calculations,” Transport Theory and Statistical Physics, 41, pp. 1–19.
  151. Trahan, T.J., Gentile, N.A. (2012). “Analytic Treatment of Source Photon Emission Times to Reduce Noise in Implicit Monte Carlo Calculations,” Transport Theory and Statistical Physics, 41, pp. 265–283.
  152. Ulitsky, M., Grondalski, J. (2012). “1D Code Comparisons for NIF shot 120205,” NECDC Proc.
  153. Vitello, P., Fried, L.E., Howard, W.M., Levesque, G., Souers, P.C. (2012). “Chemistry resolved kinetic flow modeling of TATB based explosives,” 7th Biennial Conf. of the American Physical Society Topical Group on Shock Compression of Condensed Matter, Chicago, IL, Jun. 26–Jul. 1, AIP Conf. Proc., Vol. 1426.
  154. Weber, G., Bremer, P-T., Pascucci, V. (2012). “Topological Cacti: Visualizing Contour-based Statistics,” Topological Methods in Data Analysis and Visualization II, Springer Verlag, pp. 63–76.
  155. Widanagamaachchi, W., Christensen, C., Bremer, P-T., Pascucci, V. (2012). “Interactive Exploration of Large-scale Time-varying Data using Dynamic Tracking Graphs,” Proc. IEEE Sym. Large-Scale Data Analysis and Visualization, pp. 9–17, Oct.
  156. Zhang, W., Howell, L., Almgren, A., Burrows, A., Dolence, J., Bell, J. (2012). “CASTRO: A New Compressible Astrophysical Solver. III. Multigroup Radiation Hydrodynamics,” Astrophysical J. Supplement Series 204:7.
  157. Zylstra, A.B., Frenje, J.A., Séguin, F.H., Rosenberg, M.J., Rinderknecht, H.G., Gatu Johnson, M., Casey, D.T., Sinenian, N., Manuel, M.J.-E., Waugh, C.J., Sio, H.W., Li, C.K., Petrasso, R.D., Friedrich, S., Knittel, K., Bionta, R., McKernan, M., Callahan, D., Collins, G.W., Dewald, E., Döppner, T., Edwards, M.J., Glenzer, S., Hicks, D.G., Landen, O.L., London, R., Mackinnon, A., Meezan, N., Prasad, R.R., Ralph, J., Richardson, M., Rygg, J.R., Sepke, S., Weber, S., Zacharias, R., Moses, E., Kilkenny, J., Nikroo, A., Sangster, T.C., Glebov, V., Stoeckl, C., Olson, R., Leeper, R.J., Kline, J., Kyrala, G., Wilson, D. (2012). “Charged-particle spectroscopy for diagnosing shock ρR and strength in NIF implosions,” Rev. Sci. Instrum. 83, 10D901.

2013 Publications

  1. Ahn, D.H., Brim, M.J., De Supinski, B.R., Gamblin, T., Lee, G.L., LeGendre, M.P., Miller, B.P., Moody, A., Schulz, M. (2013). “Efficient and Scalable Retrieval Techniques for Global File Properties,” Proc. of the 27th IEEE Int. Parallel and Distributed Processing Sym. (IPDPS), Boston, MA, May.
  2. Cleveland, M., Palmer, T., Gentile, N. (2013). “Using Shannon Entropy to Estimate Convergence of CMFD-accelerated Monte Carlo,” SIAM Conf. on Computational Science and Engineering (CSE13), Boston, MA, Feb. 25–Mar. 1.
  3. Cook, A.W., Greenough, J.A. (2013). “Accounting for Energy Changes in Hydrodynamic Mixing: The Role of Enthalpy Diffusion,” Stockpile Stewardship Quarterly, Vol. 2, No. 3, 2–3.
  4. Cook, A.W., Ulitsky, M.S., Miller, D.S. (2013). “Hyperviscosity for Unstructured ALE Meshes,” Int. J. Comput. Fluid Dyn., DOI:10.1080/10618562.2012.756477.
  5. Dobrev, V., Kolev, T., Rieben, R. (2013). “High-order curvilinear finite elements for elastic-plastic Lagrangian dynamics,” J. of Computational Physics, Feb.
  6. Maljovec, D., Wang, B., Kupresanin, A., Johannesson, G., Pascucci, V., Bremer, P-T. (2013). “Adaptive Sampling with Topological Scores,” Int. J. for Uncertainty Quantification, Vol. 3, Iss. 2.
  7. Morán-López, J.T., Schilling, O. (2013). “Multicomponent Reynolds-averaged Navier–Stokes simulations of reshocked Richtmyer–Meshkov instability-induced mixing,” High Energy Density Physics 9, 112–121, http://www.sciencedirect.com/science/article/pii/S1574181812001279.
  8. Smalyuk, V.A., Hurricane, O.A., Hansen, J.F., Langstaff, G., Martinez, D., Park, H.-S., Raman, K., Remington, B.A., Robey, H.F., Schilling, O., Wallace, R., Elbaz, Y., Shimony, A., Shvarts, D., Di Stefano, C., Drake, R.P., Marion, D., Krauland, C.M., Kuranz, C.C. (2013). “Measurements of turbulent mixing due to Kelvin–Helmholtz instability in high-energy-density plasmas,” High Energy Density Physics 9, 47–51, http://www.sciencedirect.com/science/article/pii/S1574181812001103.

Los Alamos National Laboratory
2012 Citations for Publications (previously not listed)

  1. Akkan, H., Lang, M., Liebrock, L.M. (2012). "Stepping towards noiseless Linux environment," In 2nd International Workshop on Runtime and Operating Systems for Supercomputers, ROSS 2012 - In Conjunction with: ICS 2012, Venice, article 7. DOI:10.1145/2318916.2318925.
  2. Bent, J., Grider, G., Kettering, B., Manzanares, A., McClelland, M., Torres, A., Torrez, A. (2012). "Storage Challenges at Los Alamos National Lab," In IEEE Symposium on Mass Storage Systems and Technologies Proceedings-MSST, San Diego, CA. DOI:10.1109/MSST.2012.6232376.
  3. Boettger, J.C. (2012). "Theoretical calculation of the zero-temperature isotherm and phase stability of silver up to 2 Gbar using the linear combinations of gaussian type orbitals method," International Journal of Quantum Chemistry, Vol. 112, No. 24, pp. 3822-3828. DOI:10.1002/qua.24239.
  4. Booth, T.E., Forster, R.A., Martz, R.L. (2012). "MCNP Variance Reduction Developments in the 21st Century," Nuclear Technology, Vol. 180, No. 3, pp. 355-371.
  5. Chabaud, B.M., Brock, J.S., Williams, T.O. (2012). "Transverse isotropic elastic dynamic sphere problem," In 6th European Congress on Computational Methods in Applied Sciences and Engineering, ECCOMAS 2012, Vienna, pp. 5736-5748.
  6. Chang, C.H., Stagg, A.K. (2012). "A compatible Lagrangian hydrodynamic scheme for multicomponent flows with mixing," Journal of Computational Physics, Vol. 231, No. 11, pp. 4279-4294. DOI:10.1016/j.jcp.2012.02.005.
  7. Clerouin, J., Noiret, P., Blottiau, P., Recoules, V., Siberchicot, B., Renaudin, P., Blancard, C., Faussurier, G., Holst, B., Starrett, C.E. (2012). "A database for equations of state and resistivities measurements in the warm dense matter regime," Physics of Plasmas, Vol. 19, No. 8, article 082702. DOI:10.1063/1.4742317.
  8. Colgan, J., Pindzola, M.S. (2012). "Application of the time-dependent close-coupling approach to few-body atomic and molecular ionizing collisions," European Physical Journal D, Vol. 66, No. 11, article 284. DOI:10.1140/epjd/e2012-30517-2.
  9. Colgan, J., Pindzola, M.S. (2012). "Angular Distributions for the Complete Photofragmentation of the Li Atom," Physical Review Letters, Vol. 108, No. 5, article 053001. DOI:10.1103/PhysRevLett.108.053001.
  10. Densmore, J.D. (2012). "Spatial Moments of Continuous Transport Problems Computed on Grids," Transport Theory and Statistical Physics, Vol. 41, No. 5-6, pp. 389-405. DOI:10.1080/00411450.2012.671223.
  11. Dimonte, G., Bergstralh, E.J., Bolander, M.E., Karnes, R.J., Tindall, D.J. (2012). "Use of tumor dynamics to clarify the observed variability among biochemical recurrence nomograms for prostate cancer," Prostate, Vol. 72, No. 3, pp. 280-290. DOI:10.1002/pros.21429.
  12. Durkee, J.W. (2012). "MCNP geometry transformation and plotter equations," Progress in Nuclear Energy, Vol. 61, pp. 26-40. DOI:10.1016/j.pnucene.2012.06.004.
  13. Durkee, J.W., James, M.R., McKinney, G.W., Waters, L.S., Goorley, T. (2012). "The MCNP6 Delayed-Particle Feature," Nuclear Technology, Vol. 180, No. 3, pp. 336-354.
  14. Fensin, M.L., James, M.R., Hendricks, J.S., Goorley, J.T. (2012). "The new MCNP6 depletion capability," In International Congress on Advances in Nuclear Power Plants 2012, ICAPP 2012, Chicago, IL, Vol. 2, pp. 1536-1545.
  15. Fontes, C.J., Colgan, J., Zhang, H.L., Abdallah, J., Hungerford, A.L., Fryer, C.L., Kilcrease, D.P. (2012). "Atomic data and the modeling of supernova light curves," Journal of Physics: Conference Series, Vol. 388, Part 1, article 012022. DOI:10.1088/1742-6596/388/1/012022.
  16. Goorley, T., James, M., Booth, T., Brown, F., Bull, J., Cox, L.J., Durkee, J., Elson, J., Fensin, M., Forster, R.A., Hendricks, J., Hughes, H.G., Johns, R., Kiedrowski, B., Martz, R., Mashnik, S., McKinney, G., Pelowitz, D., Prael, R., Sweezy, J., Waters, L., Wilcox, T., Zukaitis, T. (2012). "Initial MCNP6 Release Overview," Nuclear Technology, Vol. 180, No. 3, pp. 298-315.
  17. Harribey, T., Breil, J., Maire, P.H., Shashkov, M. (2012). "Hydrodynamic applications using ReALE method," In 6th European Congress on Computational Methods in Applied Sciences and Engineering, ECCOMAS 2012, Vienna, pp. 574-587.
  18. Higdon, D., Geelhood, K., Williams, B., Unal, C. (2013). "Calibration of tuning parameters in the FRAPCON model," Annals of Nuclear Energy, Vol. 52, pp. 95-102. DOI:10.1016/j.anucene.2012.06.018.
  19. Ichikawa, T., Iwamoto, A., Moller, P., Sierk, A.J. (2012). "Contrasting fission potential-energy structure of actinides and mercury isotopes," Physical Review C, Vol. 86, No. 2, article 024610. DOI:10.1103/PhysRevC.86.024610.
  20. Kheifets, A.S., Fursa, D.V., Bray, I., Colgan, J., Pindzola, M.S. (2012). "Differential cross-sections for the double photoionization of lithium," Journal of Physics: Conference Series, Vol. 388, Part 2, article 022053. DOI:10.1088/1742-6596/388/2/022053.
  21. Kulkarni, A., Lang, M., Lumsdaine, A. (2012). "GoDEL: A multidirectional dataflow execution model for large-scale computing," In 1st International Workshop on Data-Flow Models, DFM 2011, Galveston, TX, pp. 10-18. DOI:10.1109/DFM.2011.12.
  22. Kulkarni, A., Lumsdaine, A., Lang, M., Ionkov, L. (2012). "Optimizing latency and throughput for spawning processes on massively multicore processors," In 2nd International Workshop on Runtime and Operating Systems for Supercomputers, ROSS 2012 - In Conjunction with: ICS 2012, Venice, article 6. DOI:10.1145/2318916.2318924.
  23. Lee, T.G., Pindzola, M.S., Colgan, J. (2012). "Antiproton-impact single ionization of H2+ and H2," Journal of Physics B: Atomic, Molecular and Optical Physics, Vol. 45, No. 4, article 045203. DOI:10.1088/0953-4075/45/4/045203.
  24. Liu, N., Cope, J., Carns, P., Carothers, C., Ross, R., Grider, G., Crume, A., Maltzahn, C. (2012). "On the Role of Burst Buffers in Leadership-Class Storage Systems," In IEEE Symposium on Mass Storage Systems and Technologies Proceedings-MSST, San Diego, CA. DOI:10.1109/MSST.2012.6232369.
  25. Luscher, D.J., McDowell, D.L., Bronkhorst, C.A. (2012). "Essential Features of Fine Scale Boundary Conditions for Second Gradient Multiscale Homogenization of Statistical Volume Elements," International Journal for Multiscale Computational Engineering, Vol. 10, No. 5, pp. 461-486. DOI:10.1615/IntJMultCompEng.2012002929.
  26. Martz, R.L. (2012). "MCNP6 Unstructured Mesh Initial Validation and Performance Results," Nuclear Technology, Vol. 180, No. 3, pp. 316-335.
  27. Menikoff, R., Shaw, M.S. (2012). "The SURF model and the curvature effect for PBX 9502," Combustion Theory and Modelling, Vol. 16, No. 6, pp. 1140-1169. DOI:10.1080/13647830.2012.713994.
  28. Park, H., Fellinger, M.R., Lenosky, T.J., Tipton, W.W., Trinkle, D.R., Rudin, S.P., Woodward, C., Wilkins, J.W., Hennig, R.G. (2012). "Ab initio based empirical potential used to study the mechanical properties of molybdenum," Physical Review B, Vol. 85, No. 21, article 214121. DOI:10.1103/PhysRevB.85.214121.
  29. Park, H., Knoll, D.A., Rauenzahn, R.M., Wollaber, A.B., Densmore, J.D. (2012). "A Consistent, Moment-Based, Multiscale Solution Approach for Thermal Radiative Transfer Problems," Transport Theory and Statistical Physics, Vol. 41, No. 3-4, pp. 284-303. DOI:10.1080/00411450.2012.671224.
  30. Pindzola, M.S., Abdel-Naby, S.A., Ludlow, J.A., Robicheaux, F., Colgan, J. (2012). "Electron-impact ionization of Li2 using a time-dependent close-coupling method," Physical Review A, Vol. 85, No. 1, article 012704. DOI:10.1103/PhysRevA.85.012704.
  31. Planes, A., Lloveras, P., Castan, T., Saxena, A., Porta, M. (2012). "Ginzburg-Landau modelling of precursor nanoscale textures in ferroelastic materials," Continuum Mechanics and Thermodynamics, Vol. 24, No. 4-6, pp. 619-627. DOI:10.1007/s00161-011-0203-z.
  32. Randrup, J., Moller, P. (2012). "Brownian shape motion: fission fragment mass distributions," Physica Scripta, Vol. T150, article 014033. DOI:10.1088/0031-8949/2012/t150/014033.
  33. Reisner, J., Serencsa, J., Shkoller, S. (2013). "A space-time smooth artificial viscosity method for nonlinear conservation laws," Journal of Computational Physics, Vol. 235, pp. 912-933. DOI:10.1016/j.jcp.2012.08.027.
  34. Srinivasan, B., Tang, X.Z. (2012). "Mechanism for magnetic field generation and growth in Rayleigh-Taylor unstable inertial confinement fusion plasmas," Physics of Plasmas, Vol. 19, No. 8, article 082703. DOI:10.1063/1.4742176.
  35. Staff, J.E., Menon, A., Herwig, F., Even, W., Fryer, C.L., Motl, P.M., Geballe, T., Pignatari, M., Clayton, G.C., Tohline, J.E., NuGrid Collaboration (2012). "Do R Coronae Borealis Stars Form from Double White Dwarf Mergers?," Astrophysical Journal, Vol. 757, No. 1, article 76. DOI:10.1088/0004-637x/757/1/76.
  36. Starrett, C.E., Clerouin, J., Recoules, V., Kress, J.D., Collins, L.A., Hanson, D.E. (2012). "Average atom transport properties for pure and mixed species in the hot and warm dense matter regimes," Physics of Plasmas, Vol. 19, No. 10, article 102709. DOI:10.1063/1.4764937.
  37. Starrett, C.E., Saumon, D. (2013). "Electronic and ionic structures of warm and hot dense matter," Physical Review E - Statistical, Nonlinear, and Soft Matter Physics, Vol. 87, No. 1, article 013104. DOI:10.1103/PhysRevE.87.013104.
  38. Veselsky, M., Andreyev, A.N., Antalic, S., Huyse, M., Moller, P., Nishio, K., Sierk, A.J., Van Duppen, P., Venhart, M. (2012). "Fission-barrier heights of neutron-deficient mercury nuclei," Physical Review C, Vol. 86, No. 2, article 024308. DOI:10.1103/PhysRevC.86.024308.
  39. Wienke, B.R., Budge, K.G., Chang, J.H., Dahl, J.A., Hungerford, A.L. (2012). "Jacobian transformed and detailed balance approximations for photon induced scattering," Journal of Quantitative Spectroscopy & Radiative Transfer, Vol. 113, No. 2, pp. 150-157. DOI:10.1016/j.jqsrt.2011.09.018.
  40. Zhang, X.C., Davis, K., Jiang, S. (2012). "iTransformer: Using SSD to Improve Disk Scheduling for High-performance I/O," In International Parallel and Distributed Processing Symposium IPDPS, Shanghai, pp. 715-726. DOI:10.1109/ipdps.2012.70.

Sandia National Laboratories
Citations for FY13, Q2

 Key: DOI = Digital Object Identifier; URL prefix of DOI is: http://dx.doi.org/

  1. Barrett, R. F., Hammond, S. D., Vaughan, C. T., Doerfler, D. W., Heroux, M. A., Luitjens, J. P., Roweth, D. (2012).  "Navigating an Evolutionary Fast Path to Exascale," Proceedings of the 3rd International Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computing Systems (PMBS12) at Supercomputing 2012 (SC12): The International Conference for High Performance Computing, Networking, Storage, and Analysis, Salt Lake City, UT.  OSTI Identifier: 1062331. SAND2012-9777 C. 
  2. Barrett, R. F., Hu, X. S., Dosanjh, S. S., Parker, S., Heroux, M. A., Shalf, J. (2012).  "Toward Codesign in High Performance Computing Systems," Proceedings of the 2012 IEEE/ACM International Conference on Computer-Aided Design (ICCAD ’12), pp. 443-449.  DOI: 10.1145/2429384.2429476. SAND2012-7108 C. 
  3. Bochev, P., Peterson, K., Gao, X. (2013).  “A New Control Volume Finite Element Method for the Stable and Accurate Solution of the Drift–diffusion Equations on General Unstructured Grids,” Computer Methods in Applied Mechanics and Engineering, Vol. 254, pp. 126-145.  DOI: 10.1016/j.cma.2012.10.009. SAND2012-1769 J. 
  4. Cartwright, K. L., Hills, R. G., Pointon, T. D., Hinshelwood, D. D., Schumer, J. W., Swanekamp, S. B., Ottinger, P. F. (2012).  “Solution Verification, Validation, and Uncertainty Quantification for a Series of Gas Cell Experiments at NRL,” 2012 Abstracts IEEE International Conference on Plasma Science (ICOPS), Albuquerque, NM, p. 7A-3.  DOI: 10.1109/PLASMA.2012.6384057. SAND2012-5450 C.
  5. Dayal, J., Schwan, K., Oldfield, R. (2012).  “D2T: Doubly Distributed Transactions for High Performance and Distributed Computing,” 2012 IEEE International Conference on Cluster Computing (CLUSTER), Beijing, China, pp. 90-98.  DOI: 10.1109/CLUSTER.2012.79. SAND2012-4599 A. 
  6. Fabian, N. (2012).  “In Situ Fragment Detection at Scale,” 2012 IEEE Symposium on Large Data Analysis and Visualization (LDAV), Seattle, WA, pp. 105-108.  DOI: 10.1109/LDAV.2012.6378983. SAND2012-3811 C. 
  7. Gallis, M. A., Torczynski, J. R. (2012).  “The Effect of Internal Energy on Chemical Reaction Rates as Predicted by Bird’s Quantum-Kinetic Model,” American Institute of Physics (AIP) Conference Proceedings, 28th International Symposium of Rarefied Gas Dynamics 2012, Vol. 1501, pp. 1051-1060 (Invited).  DOI: 10.1063/1.4769658. SAND2012-5386 C.
  8. Gallis, M. A., Torczynski, J. R. (2012).  “The Effect of Rotational Non-equilibrium on Chemical Reaction Rates Predicted by the Quantum-Kinetic (Q-K) Model for Direct Simulation Monte Carlo (DSMC) Simulations,” Bulletin of the American Physical Society, 65th Annual Meeting of the APS Division of Fluid Dynamics, Vol. 57, No. 17.  Abstract ID: BAPS.2012.DFD.A27.1. SAND2012-5030 A.
  9. Lofstead, J., Dayal, J. (2012).  "Transactional Parallel Metadata Services for Integrated Application Workflows," Proceedings of High Performance Computing Meets Databases (HPCDB 2012) at Supercomputing 2012 (SC12): The International Conference for High Performance Computing, Networking, Storage, and Analysis, Salt Lake City, UT.  OSTI Identifier: 1061035. SAND2012-9730 C.
  10. Lofstead, J., Dayal, J., Schwan, K., Oldfield, R. (2012).  "D2T: Doubly Distributed Transactions for High Performance and Distributed Computing," Proceedings of 2012 IEEE International Conference on Cluster Computing (CLUSTER), Beijing, China, pp. 90-98.  DOI: 10.1109/CLUSTER.2012.79. SAND2012-7727 C.
  11. Mankbadi, M. R., Balachandar, S., Brown, A. L. (2012).  “Compressible Instability of Rapidly Expanding Spherical Material Interfaces.” SAND2012-7934. Unlimited release.
  12. Moreland, K. (2012).  “Redirecting Research in Large-Format Displays for Visualization,” in Proceedings of the IEEE Symposium on Large-Scale Data Analysis and Visualization (LDAV 2012), Seattle, WA, pp. 91-95.  SDAV Publications: http://sdav-scidac.org/listpubs.html. SAND2012-2993 C.
  13. Moreland, K. D., King, B., Maynard, R., Ma, K.-L. (2012).  “Flexible Analysis Software for Emerging Architecture,” Petascale Data Analytics: Challenges and Opportunities (PDAC-12) at Supercomputing 2012 (SC12): The International Conference for High Performance Computing, Networking, Storage, and Analysis, Salt Lake City, UT.  OSTI Identifier: 1061060. SAND2012-8450 C.
  14. Reedy, E. D. (2013).  “Adhesion/Atomistic Friction Surface Interaction Model with Application to Interfacial Fracture and Nanofabrication,” International Journal of Solids and Structures, Vol. 50, Issue 6, pp. 937-943.  Available online 15 March 2013.  DOI: 10.1016/j.ijsolstr.2012.11.025. SAND2012-3769 J.
  15. Sayer, R. A., Piekos, E. S., Phinney, L. M. (2012).  “Modified Data Analysis for Thermal Conductivity Measurements of Polycrystalline Silicon Microbridges using a Steady State Joule Heating Technique,” Review of Scientific Instruments, Vol. 83, Issue 12, p. 124904.  DOI: 10.1063/1.4769059. SAND2012-6840 J.
  16. Sun, W., Ostien, J. T., Salinger, A. G. (2013).  “A Stabilized Assumed Deformation Gradient Finite Element Formulation for Strongly Coupled Poromechanical Simulations at Finite Strain,” International Journal for Numerical and Analytical Methods in Geomechanics, Published online 7 January 2013.  DOI: 10.1002/nag.2161. SAND2012-5043 J. 

CORRECTIONS TO PRIOR SUBMITTALS

 Submitted FY13 Q1

FROM:

O’Hern, T., Shelden, B., Romero, L., Torczynski, J. (2012).  “Stably Levitated Large Bubbles in Vertically Vibrating Liquids,” Bulletin of the American Physical Society, 65th Annual Meeting of the APS Division of Fluid Dynamics, Vol. 57, No. 17.  Abstract ID: L11.00005. SAND2012-5184 A.

 TO:

O’Hern, T., Shelden, B., Romero, L., Torczynski, J. (2012).  “Stably Levitated Large Bubbles in Vertically Vibrating Liquids,” Bulletin of the American Physical Society, 65th Annual Meeting of the APS Division of Fluid Dynamics, Vol. 57, No. 17.  Abstract ID: L11.00005. SAND2012-6261 A.

 

LA-UR 13-22122