
ASC eNews Quarterly Newsletter June 2013

 


ASC    
  NA-ASC-500-14 Issue 24
  June 2013

 

The Meisner Minute

Bob Meisner

Accomplishments year-to-date portend another successful year for ASC.  The weapons community is highly complimentary of the support provided on our tri-Lab Linux capacity cluster (TLCC) systems, and clamoring for time on Cielo.  The increasingly capable and stable high performance computing (HPC) platforms and applications you provide are advancing the Nuclear Security Enterprise’s understanding of weapons performance.

The new ASC Computing Strategy I discussed at the beginning of the year has been published as the roadmap that defines our computing capability for the next decade.  The strategy is predicated on our mission to support the stockpile.  But, as I am sure you know, we have been asked to plan for building an exaFLOP/s system in the early 2020s.  We have completed that plan with our partners in the DOE Office of Science and recently sent a copy to Congress, albeit more than a year after their request (the plan has not yet been released to the public).  Should the nation ask us to execute that plan, you can expect our strategy to evolve and accelerate.  I expect I will have more details to report next quarter.

We currently have three acquisition programs in various states of completion.  Sequoia, while procured under our previous ASC Platform Strategy, is transitioning to full operating capability and will run under the new Advanced Technology System (ATS) operating paradigm.  Consequently, following a very successful open science run, it has begun Capability Computing Campaign operations as a tri-lab resource.  The Trinity system will be our first ATS from mission need to retirement.  Trinity also sets an historic milestone as the first major joint HPC procurement with an external partner, Lawrence Berkeley National Laboratory’s National Energy Research Scientific Computing Center (NERSC); this procurement is proceeding extremely well.  Having passed a highly successful Lehman Review and Critical Decision-1, the Trinity/NERSC-8 team will soon release a Request for Proposals.  Finally, SIERRA—Sequoia’s replacement scheduled for a 2017 delivery—has passed its Critical Decision-0 milestone.  It will also be executed as a joint procurement with our DOE Office of Science partners through the Collaboration of Oak Ridge, Argonne and Livermore (CORAL) team.

Interesting activities are rumbling beneath the surface in the world of LINPACK.  Mike Heroux and Jack Dongarra jointly published a paper titled Toward a New Metric for Ranking High Performance Computing Systems that explores a successor to LINPACK.  This is an important step toward defining a universal benchmark that better reflects future HPC architectures in an era where data movement is a prime concern.  If you are interested in shaping the future, read the paper and join the discussion.

Last, but certainly not least, the Predictive Science Academic Alliance Program II (PSAAP II) acquisition team has culminated over two years of work with a recommendation to select six universities for the next five-year phase of our highly successful alliances program.  While in the past we exclusively funded multi-disciplinary science centers (MSCs), PSAAP II will include single discipline centers (SDCs).  Once we conclude negotiations with each university we expect our new partners to be as follows:  University of Florida (SDC); University of Illinois (MSC); Notre Dame (SDC); Stanford (MSC); Texas A&M (SDC); and University of Utah (MSC).

Since the beginning of the year we have also seen an unusual churn in personnel.  Major Paul Adamson is leaving NNSA to return to the United States Air Force but will still be a member of the nuclear weapons family as he transitions to the Nuclear Weapons Center at Kirtland Air Force Base.  Jay Edgeworth, now running the ASC Integrated Codes program, has returned to NNSA Defense Programs from a career-broadening assignment with Defense Nuclear Nonproliferation.  Lt. Col. Mike Severson comes to us from the Defense Threat Reduction Agency (DTRA) to take over the ASC Verification & Validation program from Trieu Truong, who returned to the Engineering Campaign.  Alexis Blanc has also transitioned to ASC from the NNSA Policy Office and will help oversee HPC acquisitions and computer operations.  Daniel Orlikowski from Lawrence Livermore National Laboratory has started a year-long detail with us and will be assisting across the program.  Finally, we will be saying farewell to Wendy Cieslak, Sandia National Laboratories’ ASC Exec, who will be joining her husband in retirement in August.  We welcome Wendy’s replacement, Keith Matzen.

As we enter the second half of the year, I expect we will see changes precipitated by the new fiscal year kicking in and our new Secretary of Energy setting course for the next four years.  My next report to you should be interesting.  In the meanwhile, have fun and make amazing things happen.

 ______________________________________________________

Sequoia Supercomputer Transitions to Classified Work

The Sequoia supercomputer at Lawrence Livermore National Laboratory (LLNL) has completed its transition to classified computing in support of the Stockpile Stewardship Program, which helps the United States ensure the safety, security and effectiveness of its aging nuclear weapons stockpile without the use of underground testing.


The 20 petaFLOP/s (quadrillion floating point operations per second) IBM BlueGene/Q system is now dedicated exclusively to NNSA's Advanced Simulation and Computing (ASC) program. ASC is a tri-lab effort drawing on the computational engineering and scientific computing expertise resident at Los Alamos, Sandia and Lawrence Livermore national laboratories.

"The success of early science runs on Sequoia have prepared the system to take on the complex calculations necessary to continue certifying the nation's aging nuclear stockpile," said NNSA Assistant Deputy Administrator for Stockpile Stewardship Chris Deeney. "Sequoia's mammoth computing power will provide scientists and engineers with a more complete understanding of weapons' performance, notably hydrodynamics and the properties of materials at extreme pressures and temperatures. These capabilities provide confidence in the U.S. deterrent as it is reduced under treaty agreements and represent the nation's continued leadership in high performance computing."

Bob Meisner, director of the ASC program, says that among the critical enhanced capabilities Sequoia provides is uncertainty quantification, or UQ, the quantitative characterization and reduction of uncertainty in computer applications, made possible by running large suites of calculations designed to assess the effects of minor differences in the systems. Sources of uncertainty are rife in the natural sciences and engineering. UQ uses statistical methods to determine likely outcomes.
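As a purely illustrative sketch of that ensemble idea (this is not ASC code; the model, parameter names, and sample sizes below are hypothetical), a UQ study perturbs an uncertain input, runs the simulation many times, and summarizes the spread of the outputs:

    import numpy as np

    # Hypothetical stand-in for an expensive simulation: a simple nonlinear
    # response to one uncertain input parameter.
    def simulate(material_strength):
        return 1.0 + 0.3 * material_strength ** 2

    rng = np.random.default_rng(seed=0)
    # Sample the uncertain input around its nominal value (assumed 5% spread).
    samples = rng.normal(loc=1.0, scale=0.05, size=1000)
    outputs = np.array([simulate(s) for s in samples])

    # UQ summary: the likely range of the output given the input uncertainty.
    print(f"mean = {outputs.mean():.3f}, std = {outputs.std():.3f}")
    print(f"95% interval = [{np.percentile(outputs, 2.5):.3f}, "
          f"{np.percentile(outputs, 97.5):.3f}]")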

"The work we've done to date on Sequoia gives every indication that we will be able to run suites of highly resolved uncertainty quantification calculations in support of ASC's effort to extend the life of aging weapons systems such as the B61 and W78—what we call the life extension program," Meisner said. "By reducing the time required for the studies that underlie life extension, total costs also are reduced. The machine will also provide a means to do an assessment of its entry-level 3-D UQ capabilities. 3-D UQ will become increasingly important as the stockpile ages."

Additionally, NNSA expects the machine to enhance the program's ability to: sustain the stockpile by resolving any significant findings in weapons systems; bring greater computing power to all aspects of the annual assessment of the stockpile; and anticipate and avoid future problems that inevitably result from aging. These capabilities help ensure that the nation will never have to return to nuclear testing. Supercomputers such as Sequoia have allowed the U.S. to have confidence in its nuclear weapons stockpile over the 20 years since nuclear testing ended in 1992. The insights that come from supercomputing simulations also are vital to addressing nonproliferation and counterterrorism issues as well as informing other national security decisions such as nuclear weapon policy and treaty agreements.

Delivered and deployed in early 2012, the 96-rack Sequoia machine not only took the No. 1 ranking on the June 2012 Top500 list of the world's most powerful supercomputers, it was also rated as the world's most energy efficient system and earned top honors on the Graph500 list for its ability to solve big data problems—finding the proverbial needle in the haystack. While Sequoia dropped to No. 2 on the November 2012 Top500 list, it remains one of the most energy efficient high performance computing (HPC) systems and retained its No. 1 Graph500 ranking.

Early unclassified work on the machine allowed NNSA researchers and IBM computer scientists to work out the bugs and optimize the system before it transitioned to classified work. Los Alamos National Laboratory researchers ran turbulence simulations and Sandia National Laboratories scientists explored the properties of tantalum on Sequoia.

LLNL researchers performed record simulations using all 1,572,864 cores of Sequoia to study the interaction of ultra-powerful lasers with dense plasmas in a proposed method to produce fusion energy, the energy source that powers the sun. Sequoia is the first machine to exceed one million computational cores. The simulations are the largest particle-in-cell (PIC) code simulations by number of cores ever performed and are important to laser fusion experiments in LLNL's National Ignition Facility (NIF). PIC simulations are used extensively in plasma physics to model the motion of the charged particles, and the electromagnetic interactions between them, that make up ionized matter. For more, see the March 19th announcement at https://www.llnl.gov/news/newsreleases/2013/Mar/NR-13-03-05.html.

In addition, LLNL scientists investigated burn in doped plasmas, exploiting the full capability of Sequoia and the code developed for this purpose. Following a benchmark exploration of the density and temperature dependence of burn in undoped hydrogen plasma, researchers began a series of extreme-scale simulations of burn in the presence of small fractions of a percent of high-Z dopants. These studies are being used to deepen scientists' understanding of the effect of dopants on burn, physics that is vital to capsule design for NIF, a facility critical to NNSA's stockpile stewardship program.

Early efforts by LLNL scientists also included a QBox first principles molecular dynamics code examination of the electronic structure of heavy metals, research of interest to stockpile stewardship. QBox was developed at LLNL to perform large-scale simulations of materials directly from first-principles, allowing scientists to predict the properties of complex systems without first having to carry out experiments.

Sequoia also demonstrated its great scalability with a 3D simulation of the human heart's electrophysiology. Using Cardioid, a code created in a partnership between LLNL and IBM scientists, researchers are modeling the electrical signals moving throughout the heart. Cardioid has the potential to be used to test drugs and medical devices, paving the way for tests on humans. Techniques employed by the code are useful to Sequoia's national security applications. Development of Cardioid is continuing on a smaller version of Sequoia called Vulcan, a five petaFLOP/s BlueGene/Q system used for unclassified research collaborations.

Sequoia was recognized with a Breakthrough Award from Popular Mechanics as one of the top technology innovations of 2012.

______________________________________________________

Simulations Shed Light on the Formation of Twin Crystals

Members of the ASC/LLNL Physics and Engineering Models Strength and Damage team successfully completed a series of materials simulations under the Capability Computing Campaign (CCC3) on the ASC Cielo machine at Los Alamos. The simulations employed continuum crystal mechanics models in the ALE3D code to investigate the growth of twin domains (a specific type of microstructural region relevant to various programmatic materials) in tantalum and the alpha-to-epsilon phase transformation domains in iron.

“This work is helping us understand twinning as a deformation mechanism, and twinning is a significant mechanism in tantalum, beryllium, uranium, and various alloy systems, especially under high-rate loading conditions,” said Nathan Barton, LLNL computational materials scientist.

Models were partially informed by molecular dynamics and dislocation dynamics calculations as part of an overall multi-scale effort. The continuum crystal mechanics approach allowed scientists to simulate polycrystalline response at length scales relevant to macroscopic observables for polycrystalline materials. These macroscopic observables are now being compared to experimental data from the Dynamic Materials Properties Campaign and from Laboratory Directed Research and Development activities. Experimental measurements include electron microscopy data for twin evolution in tantalum and x-ray microscopy data from Basic Energy Sciences (BES) user facilities for in situ stress and phase content measurements under pressure in a diamond anvil cell for phase transformation in iron. The modeling results have offered details—and therefore insights—not directly available from experimental efforts.

The simulation results are already making for interesting comparisons against experimental results for twin evolution in tantalum. The interaction of twin nucleation and growth with dislocation structure is a complex and interesting area. The simulation results, which nicely reproduce aspects of experimental observations, indicate that if twin boundary motion is significantly impeded by dislocation tangles, then nucleation can saturate early in the deformation of tantalum at high rates. The stress concentrations at the tips of already nucleated twins allow them to continue to grow. It is also seen that, among the many fine twins that may nucleate, comparatively few may win out and consume significant portions of volume in the original material. The figure shows the predicted evolution of twin fraction versus axial strain of the material, with a rapid nucleation-dominated rise followed by a growth-dominated phase. Similar trends are observed in experimental observations of twin fraction evolution.

To capture twin domain evolution in simulations, each crystal in the polycrystal is resolved in detail. The grid resolution, combined with the computational expense of the constitutive model, makes the problem well suited to capability-class ASC computing resources.


______________________________________________________

Foundational Code Capabilities for Future Computational Platforms

Sandia has developed a new set of foundational code capabilities that support the path toward new architectures and extreme-scale unstructured mesh computations.  Unstructured hexahedral meshes of greater than nine billion elements have been used for weak and strong scaling studies on greater than 65,000 cores.  The current effort extends past the 32-bit limit (1.12 billion elements), improves matrix assembly scaling, and improves mesh generation and mesh management.  These foundational capabilities have been deployed in a new low Mach number fluid dynamics simulation capability.  This new capability is built upon the Sierra Toolkit and includes key implicit solver components of the Trilinos/Tpetra infrastructure.  This capability can be used to simulate abnormal thermal environments, which consist of reacting, turbulent, low Mach number fluid dynamics and participating media radiation.  The foundational capabilities are poised to support a wide range of simulation capabilities, from fluid environments to system mechanical and thermal response.

A turbulent open jet test case has demonstrated near optimal scaling for combined systems of up to 60 billion degrees of freedom.  Figure 1 shows the weak and strong scaling studies.  The weak scaling plot at left demonstrates nearly ideal scaling for matrix assembly of the monolithic momentum equation system and the segregated pressure Poisson equation (PPE) system over a range of 17 million to 9 billion elements.  For the 9 billion element case, the momentum equation solve represents a system of 27 billion degrees of freedom.  Weak scaling for the momentum system solve (not shown) is also quite good.  The strong scaling plot at right demonstrates nearly ideal scaling for a mesh resolution typical of routine analysis (140 million elements) to element loads of 8,500 elements/core.  The scaling studies exercise solvers and preconditioners from Trilinos/Tpetra, including the newly developed multilevel preconditioner built under Trilinos/Tpetra for use in the PPE system.  Algorithmic weak and strong scaling for the exercised solvers and preconditioners are also nearly optimal (not shown).
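For readers less familiar with the two measures, the short Python sketch below (with made-up timings, not the data behind Figure 1) shows how strong- and weak-scaling efficiencies are typically computed from measured run times:

    # Strong scaling: fixed total problem size; more cores should mean
    # proportionally less time.  Efficiency = T(p0) * p0 / (T(p) * p).
    def strong_scaling_efficiency(t_base, p_base, t, p):
        return (t_base * p_base) / (t * p)

    # Weak scaling: problem size grows with core count, so run time should
    # stay flat.  Efficiency = T(p0) / T(p).
    def weak_scaling_efficiency(t_base, t):
        return t_base / t

    # Illustrative (made-up) timings in seconds.
    print(strong_scaling_efficiency(t_base=100.0, p_base=1024, t=14.0, p=8192))  # ~0.89
    print(weak_scaling_efficiency(t_base=100.0, t=103.0))                        # ~0.97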


The Percept component of the SIERRA Toolkit has addressed many issues associated with these large problem sizes.  It automatically refines a coarse mesh to create the mesh used at scale and decomposes it in parallel.  This takes only minutes in the current workflow, compared to previous studies that required time-consuming, fault-intolerant, manual intervention.

The new low Mach number simulation capability contains two options for spatial discretization: the canonical control volume finite element method (CVFEM) scheme and the newly developed edge-based vertex-centered (EBVC) scheme.  Exercising two discretizations in the same code base enables characterization of the trade space between speed and accuracy; it also simplifies uncertainty quantification (UQ) studies that are focused on numerical accuracy.  A buoyant helium plume test case has been used to benchmark the performance of these new implementations relative to the currently deployed production capability for low Mach number fluids simulation.  Results indicate a speedup of two-to-four times, depending upon whether the CVFEM or EBVC scheme is used.

______________________________________________________

New Inverse Structural-Acoustics Computational Simulation Capability


 

A new B61 inverse structural acoustics computational simulation capability (illustrated left) was developed this year by Sandia researchers from Engineering and Computational Sciences, together with Duke University. This important capability can simulate under-wing captive-carry vibration environments for the B61 and the vibration environments for reentry vehicles. The ability to perform inverse simulations, in which the desired experimental acoustic conditions are computed to create complex spatially varying environments, is a unique and powerful new tool. Because inverse problems take experimental data as input, this new computational capability further integrates computational and experimental simulation capabilities.

 

 

 

 


 

The inverse structural acoustics capability in SIERRA/SD is implemented in a PDE-constrained optimization framework using adjoint-based gradient and Hessian calculations and an interface to the Rapid Optimization Library (ROL).  This capability enables analysts to inversely estimate the desired experimental acoustic conditions of nuclear weapon components.  The method is currently being used to characterize complex spatially varying acoustic environments for B61 captive carry in both the time and frequency domains. This represents a new paradigm for future integration of advanced computational tools and experimental measurements, and supports the broad national security interests of our nation.

 

 

______________________________________________________

Record Simulations Conducted on Sequoia

Researchers at Lawrence Livermore National Laboratory have performed record simulations using all 1,572,864 cores of Sequoia, the largest supercomputer in the world. Sequoia, based on IBM BlueGene/Q architecture, is the first machine to exceed one million computational cores. It also is No. 2 on the list of the world's fastest supercomputers, operating at 16.3 petaFLOP/s (16.3 quadrillion floating point operations per second).


The simulations are the largest particle-in-cell (PIC) code simulations by number of cores ever performed. PIC simulations are used extensively in plasma physics to model the motion of the charged particles, and the electromagnetic interactions between them, that make up ionized matter. High performance computers such as Sequoia enable these codes to follow the simultaneous evolution of tens of billions to trillions of individual particles in highly complex systems.

Frederico Fiuza, a physicist and Lawrence Fellow at LLNL, performed the simulations in order to study the interaction of ultra-powerful lasers with dense plasmas in a proposed method to produce fusion energy, the energy source that powers the sun, in a laboratory setting. The method, known as fast ignition, uses lasers capable of delivering more than a petawatt of power (a million billion watts) in a fraction of a billionth of a second to heat compressed deuterium and tritium (DT) fuel to temperatures exceeding the 50 million degrees Celsius needed to initiate fusion reactions and release net energy. The project is part of the U.S. Department of Energy's Office of Fusion Energy Science Program.

This method differs from the approach being taken by LLNL's National Ignition Facility to achieve thermonuclear ignition and burn. NIF's approach is called the "central hot spot" scenario, which relies on simultaneous compression and ignition of a spherical fuel capsule in an implosion, much like in a diesel engine. Fast ignition uses the same hardware as the hot spot approach but adds a high-intensity, ultrashort-pulse laser as the "spark" that achieves ignition.

The code used in these simulations was OSIRIS, a PIC code that has been developed over more than 10 years in a collaboration between the University of California, Los Angeles, and Portugal's Instituto Superior Técnico. Using this code, Fiuza demonstrated excellent scaling in parallel performance of OSIRIS to the full 1.6 million cores of Sequoia. By increasing the number of cores for a relatively small problem of fixed size, what computer scientists call "strong scaling," OSIRIS obtained 75% efficiency on the full machine. But when the total problem size was increased, what is called "weak scaling," 97% efficiency was achieved.

"This means that a simulation that would take an entire year to perform on a medium-size cluster of 4,000 cores can be performed in a single day. Alternatively, problems 400 times greater in size can be simulated in the same amount of time," Fiuza said. "The combination of this unique supercomputer and this highly efficient and scalable code is allowing for transformative research."

OSIRIS is routinely used for fundamental science during the test phase of Sequoia in simulations with up to 256,000 cores. These simulations are allowing researchers, for the first time, to model the interaction of realistic fast-ignition-scale lasers with dense plasmas in three dimensions with sufficient speed to explore a large parameter space and optimize the design for ignition. Each simulation evolves the dynamics of more than 100 billion particles for more than 100,000 computational time steps. This is approximately an order of magnitude larger than the previous largest simulations of fast ignition.

Sequoia is a National Nuclear Security Administration (NNSA) machine, developed and fielded as part of NNSA's Advanced Simulation and Computing (ASC) program. Sequoia was recently moved to classified computing in support of stockpile stewardship.

"This historic calculation is an impressive demonstration of the power of high-performance computing to advance our scientific understanding of complex systems," said Bill Goldstein, LLNL's deputy director for Science and Technology. "With simulations like this, we can help transform the outlook for laboratory fusion as a tool for science, energy and stewardship of the nuclear stockpile."

______________________________________________________

On the Way to Trinity — ASC’s First Advanced Technology System

NNSA’s ASC Program has given the Los Alamos and Sandia Alliance for Computing at Extreme Scale (ACES) Project permission to release the request for proposals (RFP) for the Trinity system. Trinity is being procured jointly with the DOE Office of Science, which will acquire the NERSC-8 supercomputer for Lawrence Berkeley National Laboratory’s National Energy Research Scientific Computing Center (NERSC).

Trinity is the first of ASC’s advanced technology systems. According to the NNSA ASC Program Office’s recently published ASC Computing Strategy document [J.A. Ang, P.J. Henning, T.T. Hoang, R. Neely, May 2013 (SAND 2013-3951P)], advanced technology systems are “the vanguards of the high performance computing platform market and incorporate features that, if successful, will become future commodity technologies. These large, first-of-a-kind systems will require application software modifications in order to take full advantage of exceptional capabilities offered by new technology.”

At Los Alamos, work has been underway to accomplish significant facility upgrades required by Trinity. The Trinity system will reside in the Strategic Computing Center (SCC) in the Nicholas C. Metropolis Center for Modeling and Simulation. The SCC is a 300,000-square-foot building. The vast floor of the supercomputing room is 43,500 square feet, almost an acre in size. To read more about the facility, see the article in the National Security Science magazine, April 2013.

Because of a switch from air to water cooling, an SCC infrastructure upgrade is required to bring water-cooling technology to the facility. The infrastructure upgrade design is 90% complete. Because energy conservation is critical, ASC Program staff conducted field trips to observe water, power, and cooling operations at supercomputing facilities around the country, including ORNL, NREL, NCAR, LLNL, and NERSC. Staff from Los Alamos visited the largest hybrid cooling tower operation in the country in Eunice, New Mexico. The trip to URENCO in Eunice aided in the evaluation of hybrid versus evaporative cooling tower technologies. The site visits inspired design changes in the SCC cooling towers; for example, the addition of a strategically located valve will save money by allowing cooling without recirculation during months when the outside air temperature is low enough to provide adequate cooling. The ultimate goal is to maximize availability of computing platforms to the end users with minimum expense and effort required of the computing center.


To meet cooling requirements for the supercomputers in the SCC, LANL is decreasing its use of city/well water for cooling towers and using water from LANL’s Sanitary Effluent Reclamation Facility (SERF). From December 2012 through April 2013, LANL reduced its use of city/well water for cooling towers by roughly 50% while meeting supercomputer cooling requirements.

Trinity’s installation is projected to be in 2015–2016. It is expected to be the first platform large and fast enough to begin to accommodate finely resolved 3D calculations for full-scale, end-to-end weapons calculations.

______________________________________________________

Forged Geometry Machining Model to Help Predict GTS Reservoir Failure Risk

The ability to predict residual stresses in forged, machined parts is important for predicting gas transfer system (GTS) reservoir failure mechanisms. At Sandia, we have successfully demonstrated the capability to model the machining operation of a forged-wedge geometry.  After the forging is cooled to room temperature, the stress state is mapped to a mesh of the “machined” geometry.  The machined part is then allowed to relax to the equilibrium state through a Presto simulation with mass damping to increase the time step. We have also broken the machining operation into multiple steps to study the effects on the final residual stress state.

Simulation predictions of the distortion that occurs in the wedge due to machining have been compared to experimental measurements.  Electrical discharge machining was used to remove tensile specimens from the forged wedge.  Warpage occurred due to the relaxation of the residual stresses in the wedge, creating a step in the bottom of the machined wedge.  Simulation predictions of the step and of three other final dimensions agree well with the corresponding experimental measurements (see figure).  Uncertainty quantification will be performed next.  The modeling tool will then be used to predict the residual stress state in a machined forging used in a gas transfer system reservoir.


______________________________________________________

Optimal Meshes for Generalized Crack Propagation

A new methodology has been developed to mitigate the error associated with generalized crack propagation. The adaptive insertion of cohesive surface or localization elements along pre-existing finite element boundaries for generalized crack propagation introduces geometric and energetic error. We propose to minimize the error in the crack path by augmenting existing paths with conjugate directions via barycentric subdivision and graph theory. Prior work in 2-D has illustrated that this novel methodology minimizes the geometric error, and ongoing work in 2-D and 3-D is quantifying both the geometric and the energetic error.

The Sierra Toolkit (STK) enables a graph representation of the mesh that allows for complex topological manipulations. Through the STK, we have implemented the barycentric algorithm and verified that it scales linearly with the number of elements. The method is advantageous because it easily admits both linear and higher-order elements for brittle and ductile fracture. When coupled with massively parallel algorithms for adaptive insertion (which also employ graph theory), the optimal discretization can be employed for dominant cracks, a multitude of fragments, or both. The ultimate objective of this work is to minimize the error in crack path(s) through intelligent pre-processing. This work is in collaboration with Professor Julian Rimoli at Georgia Tech; a publication on the 2-D work has been submitted to the International Journal for Numerical Methods in Engineering, with work in 3-D forthcoming.


______________________________________________________

Scientists Set a New Simulation Speed Record on the Sequoia Supercomputer

Computer scientists at Lawrence Livermore National Laboratory (LLNL) and Rensselaer Polytechnic Institute have set a high performance computing speed record that opens the way to the scientific exploration of complex planetary-scale systems.

In a paper to be published in May, the joint team announced a record-breaking simulation speed of 504 billion events per second on the ASC Sequoia Blue Gene/Q supercomputer sited at LLNL, dwarfing the previous record set in 2009 of 12.2 billion events per second.

In addition to breaking the record for computing speed, the research team set a record for the most highly parallel "discrete event simulation," with 7.86 million simultaneous tasks using 1.97 million cores. Discrete event simulations are used to model irregular systems with behavior that cannot be described by equations, such as communication networks, traffic flows, economic and ecological models, military combat scenarios, and many other complex systems.
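To put those numbers in perspective, a quick back-of-the-envelope calculation (ours, not the authors') gives the per-core event rate and the task oversubscription implied by the record run:

    events_per_second = 504e9   # reported aggregate event rate
    cores = 1.97e6              # cores used
    tasks = 7.86e6              # simultaneous simulation tasks

    print(f"events per core per second: {events_per_second / cores:,.0f}")  # ~256,000
    print(f"simulation tasks per core:  {tasks / cores:.1f}")               # ~4.0 (BG/Q cores support four hardware threads)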

Prior to the record-setting experiment, a preliminary scaling study was conducted at the Rensselaer supercomputing center, the Computational Center for Nanotechnology Innovation (CCNI). The researchers tuned parameters on the CCNI's two-rack Blue Gene/Q system and optimized the experiment to scale up and run on the 120-rack Sequoia system.


Authors of the study are Peter Barnes, Jr. and David Jefferson of LLNL, and CCNI Director and computer science professor Chris Carothers and graduate student Justin LaPre of Rensselaer.

The records were set using the ROSS (Rensselaer's Optimistic Simulation System) simulation package developed by Carothers and his students, and using the Time Warp synchronization algorithm originally developed by Jefferson.

"The significance of this demonstration is that direct simulation of 'planetary scale' models is now, in principle at least, within reach," said Jefferson. "Planetary scale" in the context of the joint team's work means simulations large enough to represent all 7 billion people in the world or the entire Internet's few billion hosts.

"This is an exciting time to be working in high performance computing, as we explore the petascale and move aggressively toward exascale computing," Carothers said. "We are reaching an interesting transition point where our simulation capability is limited more by our ability to develop, maintain and validate models of complex systems than by our ability to execute them in a timely manner."

The calculations were completed while Sequoia was in unclassified 'early science' service as part of the machine's integration period. The system is now in classified service. The ASC program provided time on Sequoia to the LLNL-RPI team as the capabilities tested have potential relevance to NNSA/DOE missions. This work also was supported by LLNL's Laboratory Directed Research and Development program.

Since opening in 2007, the CCNI has enabled researchers at Rensselaer and around the country to tackle challenges ranging from advanced manufacturing to cancer screening to sustainable energy. External funding for these research activities has exceeded $50 million and has led to an economic impact of over $130 million across New York state. A partnership between Rensselaer and IBM, CCNI currently supports a network of more than 850 researchers, faculty, and students from a mix of universities, government laboratories, and companies across a diverse spectrum of scientific and engineering disciplines.

______________________________________________________

LANL Magazine Highlights Advanced Supercomputing

“But, Will It Work?” is the attention-grabbing title of the April 2013 issue of the National Security Science magazine published by Los Alamos National Laboratory (LANL). The cover depicts images of the US stockpile. The articles in the issue give the reader an understanding of advanced supercomputing and how LANL uses supercomputer simulations to help assess the reliability of the nuclear deterrent. In the preface, LANL Director Charles McMillan reminds us about the beginning of the Accelerated Strategic Computing Initiative. He writes, “When the US stopped underground nuclear testing in 1992, a mission of the nuclear security labs fundamentally changed: we went from designing, building, and testing nuclear weapons to using our science and engineering capabilities to ensure that the stockpile of current weapons remained safe, secure, and effective into the future.”

The issue includes an overview article about the security challenges we face in the US. An article about the first petaFLOP/s supercomputer, Roadrunner, and its role in paving the way to the next supercomputer at LANL, Trinity, gives a compelling argument for why we must continue to grow our supercomputing capabilities. One article gives a look under the floor of the supercomputing facility at LANL. How we are dealing with the massive sets of digital data that we are generating with supercomputing is discussed in an article about big data/fast data. Los Alamos’ powerful Cielo supercomputer is highlighted in an article about the real-world application of simulation results that suggest that a 1-megaton nuclear blast could deter a killer asteroid.

As we move into the future of supercomputing, it is always good to remember the past. An article about early computing and the Manhattan Project tells the history of supercomputing at LANL. Another article in the issue addresses key issues in the debate over the role of nuclear weapons in today’s security environment. A perspective of the nature of science and scientists at LANL is the focus of an interview with Bob Webster, Associate Director for Weapons Physics. A final article, written by Charlie McMillan, reflects on Tom D’Agostino’s leadership of the NNSA and his unique relationship with LANL.

______________________________________________________

ASC Advanced Systems Technology Test Beds get a Facelift

Three computer systems, each part of the ASC Advanced Systems Technology Test Bed project at Sandia National Laboratories, have undergone a technology refresh.  The need to maintain bleeding edge technology is often at odds with budget constraints. A technology refresh, when applicable, enables the project to field critical emerging technologies at greatly reduced cost by reusing portions of the infrastructure. The test beds enable the ASC tri-lab community, DOE/ASCR researchers, university collaborators and vendor partners to explore multiple dimensions of application performance and portability on the latest available technologies. Key portions of the programming environment, such as compilers and tools, can be exercised as part of a co-design process with the vendors. Potential programming models can be evaluated on the various systems to ensure portable viability and performance.

Teller, an AMD Fusion Test Bed, was originally fielded in September 2011. At the time, the A8-3850 (Llano) processor was the latest available technology in the Fusion line available for investigation. In fall 2012, Teller underwent a technology refresh replacing the Llano chips with next-generation Trinity (AMD Fusion A10-5800K) chips. This refresh required a motherboard swap to support the new processor technology but retained the remainder of the infrastructure, including memory and high-speed network components. Along with the processor refresh, Teller was outfitted with a new power monitoring technology co-designed and developed by Sandia and Penguin Computing. This new capability allows component-level, high-frequency monitoring of power and energy use on Teller.


Curie, originally a Cray XK6, was upgraded to an XK7. At the time of installation in March 2012, Curie supported the latest NVIDIA GP-GPU technology, the Fermi 2090X. In January 2013 the Fermi chips were replaced by next-generation Kepler K20X GP-GPUs. This upgrade involved only the NVIDIA GP-GPUs, was accomplished in a single day, and has enabled continued investigations into application performance analysis on state-of-the-art accelerator technology.

Compton, an Intel MIC (now Phi) based cluster, has already been through two technology upgrades and in July of 2013 will undergo another when production Intel Phi cards are installed.  Compton, renamed from Arthur during the first upgrade, was originally installed in September 2011. The main topic of investigation was, and remains, the new Intel many-core technology. A cooperative agreement with Intel allowed Sandia to obtain very early prototype MIC (Many Integrated Core) cards. In August 2012 a second-generation MIC card was integrated along with a technology refresh of the general-purpose x86 processor component (from Westmere to Sandy Bridge). The third technology update will bring Compton up to date with the general availability version of Phi, the official product name of the MIC line.

While occasionally labor intensive, these upgrades have allowed the Advanced Systems Technology Test Bed project at Sandia to field a wide range of emerging technologies and keep these technologies current. Investigations on the latest available technologies are critical to many areas affecting next generation platform research including application performance analysis, programming model investigations, advanced systems software and power research.

______________________________________________________

Gordon Bell Reminisces and Gets a Look at the HPC Revolution he Helped Foment

In opening his Director's Distinguished Lecturer Series presentation at LLNL, Gordon Bell joked that when it comes to the security badge requirements for getting on site "nothing has changed since my first visit in 1961."

But when it comes to high performance computing at LLNL, much has changed thanks to the computing technology revolution Bell helped bring about. Bell's presentation, "The Supercomputer Class Evolution: A Personal Perspective," before a packed Bldg. 123 auditorium Wednesday was a PowerPoint journey through time from the Lab's earliest supercomputing systems in the early 60s to today's era of massively parallel computing systems.

Over the five decades he has worked as a computer engineer, inventor, entrepreneur and futurist, Bell has lost none of his enthusiasm for high performance computing and the possibilities of electronics technologies.

Bell used his own "Bell's Law of Computer Classes," the subject of a 1972 article he authored, as the framework for discussing the evolution of supercomputing since the 1960s. The emergence in the 60s of a new, lower cost computer class based on microprocessors formed the basis of his law. Bell posited that advances in semiconductor, storage and network technologies brought about a new class of computers every decade to fulfill a new need. Classes include: mainframes (1960s), minicomputers (1970s), networked workstations and personal computers (1980s), browser-web-server structure (1990s), palm computing (1995), web services (2000s), convergence of cell phones and computers (2003), and Wireless Sensor Networks, aka motes (2004).


Mixing well documented technical history with personal reminiscences, Bell explained the tangibles and intangibles of what has defined a supercomputer. Apart from technical performance metrics such as clock speed, there is a wow factor that's more difficult to define, he said. "A supercomputer to an engineer is something that is so grand, so big and breathtaking for its time." Talking about the CDC 6600, a mainframe supercomputer built by Control Data Corp. and deployed at LLNL in 1964, Bell said, "I was blown away by the beauty and elegance of the machine."

Another system that captivated his imagination was the UNIVAC Livermore Advanced Research Computer (LARC) system, acquired in 1960. "How do you know a supercomputer?" he quipped. "You have to ask me."

Bell discussed the advent of massively parallel computing in the 1980s, an approach he had championed since the early 70s, noting that the search for parallelism was littered with failures. He credited pioneers such as the Laboratory's George Michael for helping to bring about the revolution in supercomputing design and also acknowledged the contributions of LLNL's Eugene Brooks, who coined the phrase "killer micros"—a reference to the microchip technology that made massive parallelism possible.

"It's hard to understate the importance of Gordon Bell to supercomputing as we know it today. While he was known as an architect and as an entrepreneur, for me personally his great charm and greatest contribution has been his ability to understand and then communicate in a very pithy, often funny and understandable manner very deep or complex trends in computing—for example, comments attributed to him include 'the network becomes the system' or 'the most reliable components are the ones you leave out,' which often popped into my head this past year as we struggled with integrating a 20-PF system," said Michel McCoy, head of LLNL's Advanced Simulation and Computing Program. "He has also been a part of the Lab's history in supercomputing, showing us today that his passion for supercomputers and his belief in their importance in advancing human civilization is undiminished."

Bell later met with McCoy, Computation Associate Director Dona Crawford, Engineering's Rob Sharp, Bert Still and Fred Streitz to discuss current HPC challenges. He toured the TSF and the National Ignition Facility (NIF). During his daylong visit, he also met with Director Parney Albright and lunched with computational engineers and early career computer scientists. Bell, who received his MS in Electrical Engineering from MIT, was recruited in 1960 to the Digital Equipment Corporation, where he spent 23 years designing computers, notably the PDP series. He later helped found several computer companies and worked at the National Science Foundation. He's a member of the National Academy of Engineering and the National Academy of Sciences and has earned many national and international honors. Currently, Bell is a researcher emeritus at Microsoft, working on 'lifelogging.'


______________________________________________________

ASC Salutes Computational Physicist Misha Shashkov

Dr. Mikhail (Misha) Shashkov is making a difference. From his tireless promotion of the mathematical foundation for ASC codes to his drive to work well with technical staff, postdoctoral associates, and students, Misha is making a difference to DOE, NNSA, ASC, the Office of Science, the national laboratories, and the world. He is a world-recognized leader in developing modern Arbitrary-Lagrangian-Eulerian (ALE) methods for high-speed, multi-material flows that lie at the heart of the NNSA ASC Program and of the weapons program at Los Alamos National Laboratory (LANL). Misha’s research is extensively used at LANL, Lawrence Livermore and Sandia national laboratories, at the UK’s Atomic Weapons Establishment (AWE) and France’s Commissariat à l’Energie Atomique (CEA), and around the world.

Misha is dedicated to doing research that matters. During the 19 years that he has been at LANL, he has contributed to the unclassified Advanced Scientific Computing Research Program in the Department of Energy’s Office of Science. He contributed to the DOE Project on Advanced Simulation Capability for Environmental Management and to LDRD. He also contributed to the classified programs in ASC.

As a prolific writer, Misha has published over 250 papers with over 3400 citations, a remarkably high h-index [1] of 34, and a g-index [2] of 53. His publications have typically appeared in the most prestigious journals of his field, such as the Journal of Computational Physics and the SIAM Journal on Numerical Analysis. Of these, seven papers have been cited over 100 times. He is the author of a well-respected book, Conservative Finite Difference Methods on General Grids, CRC Press, Boca Raton, FL, 1996. New recruits at LANL are commonly given a copy of the book as an introduction to advanced Lagrangian methods.

He obtained his Ph.D. at the Keldysh Institute of Applied Mathematics in 1979 and his Doctor of Science (Habilitation) degree at the most prestigious of the Russian universities, Moscow State University. In 1994, Misha joined the Theoretical Division at LANL, where he worked actively and productively as a lead scientist and team leader before moving in 2010 to the Methods and Algorithms Group in the newly formed Computational Physics (XCP) Division to expedite development of LANL’s ASC codes.  In 2012, he was honored with the appointment of LANL Fellow, making him a distinguished member of the scientific staff.

His innovative development of mimetic methods for the numerical solution of complex partial differential equations (PDEs) has been the foundation for a new field driving LANL, and the rest of the world, toward accurate and stable numerical methods that solve governing PDEs for tightly coupled physics, while honoring conservation principles. These methods are keeping the LANL ASC Program at the forefront of numerical methods research.

Misha says that he enjoys working with people who will use the methods he develops. He enjoys addressing real needs by keeping up active interaction with colleagues, postdoctoral researchers, and students. Misha is well recognized as a very active organizer of the specialized conference series “Multimat” — the main unclassified conference for scientists from the US, UK, France, and other laboratories working on problems similar to LANL’s. He is also well known for organizing mini-symposia at SIAM annual meetings and conferences that address mathematical and computational issues in computational science. Misha is an Associate Editor of the SIAM Journal on Numerical Analysis. At LANL, he helped to organize the annual Computational Physics Student Summer Workshop.

Beyond developing the theory, Misha has seen his compatible conservative discretization implemented in the Lagrangian phase of computer programs such as FLAG (LANL), KULL (LLNL), ALEGRA (SNL), and CORVUS (AWE). Mimetic discretizations are in widespread use in many application areas such as fluid and solid dynamics, shock physics, flows in porous media, electromagnetics, radiation diffusion, MHD, image analysis, computational geometry, and many others. A Google search for “mimetic finite difference methods” returns 28,900 results. The total number of citations of his papers related to mimetic compatible discretizations is approximately 2000.

Misha has also played a world-class leadership role in creating a solid mathematical foundation for effective Arbitrary-Lagrangian-Eulerian (ALE) methods. He was a principal organizer of the workshop “ALE – From Art to Science,” with participation by leading computational scientists from LANL, LLNL, SNL, and AWE. He invented a revolutionary new approach for multi-material interface reconstruction — the moment-of-fluid (MOF) method. This is the only method that allows correct reconstruction of a multi-material interface without user intervention. His recent invention in the area of ALE research is the so-called ReALE class of methods — reconnection-based ALE.

Misha will be continuing in his extraordinary career in numerical methods and hydrodynamics at Los Alamos. According to Bob Webster, LANL’s Associate Director for Weapons Physics, "Hydrodynamics equations in the Lagrangian form have been used for the last 70 years. It is Misha's job to bring computational Lagrangian hydrodynamics to the next level."


[1] The h-index measures the productivity and impact of published work; a value over 20 is considered high.
[2] The g-index measures scientific productivity.
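
For readers curious how these indices are computed, here is a small illustrative Python sketch (the citation counts below are hypothetical and do not reproduce Misha's actual record):

    def h_index(citations):
        # Largest h such that at least h papers have >= h citations each.
        counts = sorted(citations, reverse=True)
        return sum(1 for i, c in enumerate(counts, start=1) if c >= i)

    def g_index(citations):
        # Largest g such that the top g papers together have >= g**2 citations.
        counts = sorted(citations, reverse=True)
        total, g = 0, 0
        for i, c in enumerate(counts, start=1):
            total += c
            if total >= i * i:
                g = i
        return g

    # Hypothetical citation counts for a small publication list.
    papers = [120, 85, 40, 33, 20, 12, 9, 5, 2, 0]
    print(h_index(papers), g_index(papers))  # prints: 7 9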

 ASC Relevant Research


 

 Los Alamos National Laboratory
Citations for Publications (previously not listed)

  1. Akula, B., Andrews, M.J., Ranjan, D. (2013). "Effect of shear on Rayleigh-Taylor mixing at small Atwood number," Physical Review E, Vol. 87, No. 3, article 033013. DOI:10.1103/PhysRevE.87.033013.
  2. Castan, T., Planes, A., Saxena, A. (2013). "Precursor Nanoscale Textures in Ferroelastics: Interplay between anisotropy and disorder," 9th European Symposium on Martensitic Transformations (ESOMAT 2012), St Petersburg, RUSSIA, 2013. In Materials Science Forum, Vol. 738-739, pp. 155-159. DOI:10.4028/www.scientific.net/MSF.738-739.155.
  3. Colgan, J., Abdallah, J., Faenov, A.Y., Pikuz, S.A., Wagenaars, E., Booth, N., Culfa, O., Dance, R.J., Evans, R.G., Gray, R.J., Kaempfer, T., Lancaster, K.L., McKenna, P., Rossall, A.L., Skobelev, I.Y., Schulze, K.S., Uschmann, I., Zhidkov, A.G., Woolsey, N.C. (2013). "Exotic Dense-Matter States Pumped by a Relativistic Laser Plasma in the Radiation-Dominated Regime," Physical Review Letters, Vol. 110, No. 12, article 125001. DOI:10.1103/PhysRevLett.110.125001.
  4. Colgan, J., Emmanouilidou, A., Pindzola, M.S. (2013). "Evidence for a T-Shape Break-Up Pattern in the Triple Photoionization of Li," Physical Review Letters, Vol. 110, No. 6, article 063001. DOI:10.1103/PhysRevLett.110.063001.
  5. Dimonte, G., Terrones, G., Cherne, F.J., Ramaprabhu, P. (2013). "Ejecta source model based on the nonlinear Richtmyer-Meshkov instability," Journal of Applied Physics, Vol. 113, No. 2, article 024905. DOI:10.1063/1.4773575.
  6. Fatenejad, M., Fryxell, B., Wohlbier, J., Myra, E., Lamb, D., Fryer, C., Graziani, C. (2013). "Collaborative comparison of simulation codes for high-energy-density physics applications," High Energy Density Physics, Vol. 9, No. 1, pp. 63-66. DOI:10.1016/j.hedp.2012.10.004.
  7. Haines, B.M., Grinstein, F.F., Welser-Sherrill, L., Fincke, J.R. (2013). "Simulations of material mixing in laser-driven reshock experiments," Physics of Plasmas, Vol. 20, No. 2, article 022309. DOI:10.1063/1.4793443.
  8. Hernandez, A., Bdzil, J.B., Stewart, D.S. (2013). "An MPI parallel level-set algorithm for propagating front curvature dependent detonation shock fronts in complex geometries," Combustion Theory and Modelling, Vol. 17, No. 1, pp. 109-141. DOI:10.1080/13647830.2012.725579.
  9. Higdon, D., Geelhood, K., Williams, B., Unal, C. (2013). "Calibration of tuning parameters in the FRAPCON model," Annals of Nuclear Energy, Vol. 52, pp. 95-102. DOI:10.1016/j.anucene.2012.06.018.
  10. Hunter, A., Zhang, R.F., Beyerlein, I.J., Germann, T.C., Koslowski, M. (2013). "Dependence of equilibrium stacking fault width in fcc metals on the gamma-surface," Modelling and Simulation in Materials Science and Engineering, Vol. 21, No. 2, article 025015. DOI:10.1088/0965-0393/21/2/025015.
  11. Jemison, M., Loch, E., Sussman, M., Shashkov, M., Arienti, M., Ohta, M., Wang, Y.H. (2013). "A Coupled Level Set-Moment of Fluid Method for Incompressible Two-Phase Flows," Journal of Scientific Computing, Vol. 54, No. 2-3, pp. 454-491. DOI:10.1007/s10915-012-9614-7.
  12. Kiyanda, C.B., Higgins, A.J. (2013). "Photographic investigation into the mechanism of combustion in irregular detonation waves," Shock Waves, Vol. 23, No. 2, pp. 115-130. DOI:10.1007/s00193-012-0413-8.
  13. Lewis, E.E., Li, Y.Z., Smith, M.A., Yang, W.S., Wollaber, A.B. (2013). "Preconditioned Krylov Solution of Response Matrix Equations," Nuclear Science and Engineering, Vol. 173, No. 3, pp. 222-232.
  14. McDermott, D., Amelang, J., Lopatina, L.M., Reichhardt, C.J.O., Reichhardt, C. (2013). "Domain and stripe formation between hexagonal and square ordered fillings of colloidal particles on periodic pinning substrates," Soft Matter, Vol. 9, No. 18, pp. 4607-4613. DOI:10.1039/c3sm27652j.
  15. Michalak, S.E., Hamada, M.S., Hengartner, N.W. (2013). "Analysis of interval-censored data with random unknown end points: an application to soft error rate estimation," Journal of the Royal Statistical Society Series C-Applied Statistics, Vol. 62, pp. 473-486. DOI:10.1111/rssc.12005.
  16. Nelson, A.F., Ruffert, M. (2013). "Dynamics of core accretion," Monthly Notices of the Royal Astronomical Society, Vol. 429, No. 2, pp. 1791-1826. DOI:10.1093/mnras/sts469.
  17. Ramsey, S.D., Hutchens, G.J. (2013). "High-Fidelity Approximations for Extinction Probability Calculations," Nuclear Science and Engineering, Vol. 173, No. 2, pp. 197-205.
  18. Reisner, J., Serencsa, J., Shkoller, S. (2013). "A space-time smooth artificial viscosity method for nonlinear conservation laws," Journal of Computational Physics, Vol. 235, pp. 912-933. DOI:10.1016/j.jcp.2012.08.027.
  19. Sambasivan, S., Kapahi, A., Udaykumar, H.S. (2013). "Simulation of high speed impact, penetration and fragmentation problems on locally refined Cartesian grids," Journal of Computational Physics, Vol. 235, pp. 334-370. DOI:10.1016/j.jcp.2012.10.031.
  20. Sambasivan, S.K., Shashkov, M.J., Burton, D.E. (2013). "A cell-centered Lagrangian finite volume approach for computing elasto-plastic response of solids in cylindrical axisymmetric geometries," Journal of Computational Physics, Vol. 237, pp. 251-288. DOI:10.1016/j.jcp.2012.11.044.
  21. Skillman, S.W., Xu, H., Hallman, E.J., O'Shea, B.W., Burns, J.O., Li, H., Collins, D.C., Norman, M.L. (2013). "COSMOLOGICAL MAGNETOHYDRODYNAMIC SIMULATIONS OF GALAXY CLUSTER RADIO RELICS: INSIGHTS AND WARNINGS FOR OBSERVATIONS," Astrophysical Journal, Vol. 765, No. 1, article 21. DOI:10.1088/0004-637x/765/1/21.
  22. Starrett, C.E., Saumon, D. (2013). "Electronic and ionic structures of warm and hot dense matter," Physical Review E, Vol. 87, No. 1, article 013104. DOI:10.1103/PhysRevE.87.013104.
  23. Waltz, J. (2013). "Performance of a three-dimensional unstructured mesh compressible flow solver on NVIDIA Fermi-class graphics processing unit hardware," International Journal for Numerical Methods in Fluids, Vol. 72, No. 2, pp. 259-268. DOI:10.1002/fld.3744.
  24. Waltz, J. (2013). "Spatial accuracy and performance of a mixed-order, explicit multi-stage method for unsteady flows," International Journal for Numerical Methods in Fluids, Vol. 71, No. 11, pp. 1361-1368. DOI:10.1002/fld.3715.
  25. Wollaber, A.B., Larsen, E.W., Densmore, J.D. (2013). "A Discrete Maximum Principle for the Implicit Monte Carlo Equations," Nuclear Science and Engineering, Vol. 173, No. 3, pp. 259-275.
  26. Zhang, H.L., Fontes, C.J. (2013). "Relativistic distorted-wave collision strengths for the optically allowed transitions with in the 67 Be-like ions with," Atomic Data and Nuclear Data Tables, Vol. 99, No. 4, pp. 416-430. DOI:10.1016/j.adt.2012.04.004.
  27. Fontes, C.J., Colgan, J., Zhang, H.L., Abdallah, J., Hungerford, A.L., Fryer, C.L., Kilcrease, D.P. (2012). "Atomic Data and the Modeling of Supernova Light Curves," XXVII International Conference on Photonic, Electronic and Atomic Collisions, Belfast, Northern Ireland, UK, 2011. In Journal of Physics Conference Series, Vol. 388, article 012022. DOI:10.1088/1742-6596/388/1/012022.
  28. Lovekin, C.C. (2012). "Mass Loss in 2-D Stellar Models," Four Decades of Research on Massive Stars, St-Michel-des-Saints, Québec, Canada, 2011. In Astronomical Society of the Pacific Conference Series, Vol. 465, pp. 74-79.
  29.  Lovekin, C.C., Guzik, J.A. (2012). "Pulsational Mass Loss in Luminous Blue Variables," Four Decades of Research on Massive Stars, St-Michel-des-Saints, Québec, Canada, 2011. In Astronomical Society of the Pacific Conference Series, Vol. 465, pp. 25-27.

Sandia National Laboratories
Citations for FY13, Q13

 Key: DOI = Digital Object Identifier; URL prefix of DOI is: http://dx.doi.org/

  1.  Allan, B. (2011).  “Optimization of CPAPR for x64 Multicore,” Sandia Technical Report SAND2011-9432.  DOI: 10.2172/1035321
    Unlimited release.
  2. Brake, M. A. (2013).  “The Effect of the Contact Model on the Impact-Vibration Response of Continuous and Discrete Systems,” Journal of Sound and Vibration, Vol. 332, Issue 15, pp. 3849-3878.  DOI: 10.1016/j.jsv.2013.02.003. SAND2012-5619 J. 
  3. Hale, L. M., Wong, B. M., Zimmerman, J. A., Zhou, X. W. (2013).  “Atomistic Potentials for Palladium-Silver Hydrides,” Modelling and Simulation in Materials Science and Engineering, Vol. 21, Issue 4, 045005 (23 pages).  DOI: 10.1088/0965-0393/21/4/045005. SAND2012-8945 J. 
  4. Kostka, T. D. (2013).  “Exomerge User’s Manual: A Lightweight Python Interface for Manipulating Exodus Files,” Sandia Technical Report SAND2013-0725.
    Unlimited release.
  5. Moussa, J. E., Foiles, S. M., Schultz, P. A. (2013).  “Simulation and Modeling of the Electronic Structure of GaAs Damage Clusters,” Journal of Applied Physics, Vol. 113, Issue 9, pp. 093706 - 093706-8.  Published online 5 March 2013.  DOI: 10.1063/1.4794164. SAND2012-9231 J. 
  6. Moussa, J. E., Schultz, P. A. (2013).  “Density Functional Calculations of Point Defects in InAs,” Bulletin of the American Physical Society, APS March Meeting 2013, Vol. 58, No. 1.  Abstract ID: BAPS.2013.MAR.Q1.249.  http://meetings.aps.org/link/BAPS.2013.MAR.Q1.249
    SAND2012-9648 A.  Unclassified title and abstract
  7. Rogers, D., Moreland, K. D., Oldfield, R. A., Fabian, N. D. (2013).  “Data Co-Processing for Extreme Scale Analysis Level II ASC Milestone (4745),” Sandia Technical Report SAND2013-1122.
  8. Shan, T.-R., Wixom, R. R., Mattsson, A. E., Thompson, A. P. (2013).  “Atomistic Simulation of Orientation Dependence in Shock-Induced Initiation of Pentaerythritol Tetranitrate,” The Journal of Physical Chemistry B, Vol. 117, Issue 3, pp. 928-936.  Published online 31 December 2012.  DOI: 10.1021/jp310473h
    SAND2012-7460 J. 
  9. Weinberger, C. R., Boyce, B. L., Battaile, C. C. (2013).  “Slip Planes in BCC Transition Metals,” International Materials Reviews, Vol. 58, No. 5, pp. 296-314.  Published online 18 March 2013.  DOI: 10.1179/1743280412Y.0000000015. SAND2012-4023 J.
  10. Weinberger, C. R., Tucker, G. J., Foiles, S. M. (2013).  “Peierls Potential of Screw Dislocations in BCC Transition Metals: Predictions from Density Functional Theory,” Physical Review B, Vol. 87, Issue 5, 054114 (8 pages).  DOI: 10.1103/PhysRevB.87.054114. SAND2013-0465 J.

CORRECTIONS TO PRIOR SUBMITTALS

Submitted FY12 Q4
Carroll, J. D., Brewer, L. N., Battaile, C. C., Boyce, B. L., Emery, J. M. (2012).  “The Effect of Grain Size on Void Deformation,” International Journal of Plasticity.  Available online 22 June 2012.  DOI: 10.1016/j.ijplas.2012.06.002
SAND2012-4023 J.  Unclassified title and document

Correction:
Carroll, J. D., Brewer, L. N., Battaile, C. C., Boyce, B. L., Emery, J. M. (2012).  “The Effect of Grain Size on Void Deformation,” International Journal of Plasticity.  Available online 22 June 2012.  DOI: 10.1016/j.ijplas.2012.06.002
SAND2012-1144 J.  Unclassified title and document

Submitted FY13 Q2
Dayal, J., Schwan, K., Oldfield, R. (2012).  “D2T: Doubly Distributed Transactions for High Performance and Distributed Computing,” 2012 IEEE International Conference on Cluster Computing (CLUSTER), Beijing, China, pp. 90-98.  DOI: 10.1109/CLUSTER.2012.79. SAND2012-4599 A.  Unclassified title and abstract
(Incorrect citation. Removed.)

 

LA-UR-13-25420