ASC eNews Quarterly Newsletter - March 2012


ASC    
  NA-ASC-500-12 Issue 19
  March 2012


The Meisner Minute

Editorial by Bob Meisner

At the recent ASC Principal Investigators meeting, I had the opportunity to describe why the ASC Program is living La Vida Loca. Many of us were around in the mid 1990s, and many have heard the legend of the Accelerated Strategic Computing Initiative (ASCI). I’ve got to say that tackling the challenge of proving that massively parallel computing could credibly underpin Science-Based Stockpile Stewardship was awesome. But carrying the 100 teraFLOPS (TF) legacy across the memory-constrained, massively parallel architectural inflection point to exascale is crazy. These are the good old days.
 
Today, we run 100 TF high-fidelity simulations on petascale platforms, supporting the safety, surety, and reliability of the nation’s nuclear deterrent. These calculations, once considered the goal of ASCI, are now routine. Weapons stewards are performing many such simulations to address Significant Findings Investigations and to work through the Life Extension Programs, ensuring that the computational basis of these Department of Defense tasks is truly the best we can deliver without nuclear testing. We have surely come a long way since the pre-ASCI days of single-processor vector computing, mainly on Cray platforms. The physics algorithms are much more accurate today, and the geometric fidelity of today’s simulations is far superior to that of 1990; together we call this capability full-system, high-fidelity simulation tools. Although improved from previous generations of codes, significant improvements are still needed to achieve the vision of the current ASC program, i.e., to predict with confidence the performance of our nuclear stockpile.
 
The pegposts of the Predictive Capability Framework track the major milestones yet to be incorporated into the codes for such a predictive capability.  As we proceed to top the Top 500 list once again, we find that the memory footprint of our codes has grown almost exponentially, allowing us to increase the fidelity of our simulations and making us more confident that we can attain predictive stockpile simulations in our lifetimes. The ASC program has also allowed the three defense Laboratories (and their partners) to tackle and solve many challenging problems: uncertainty quantification; physics mysteries from our nuclear tests; qualification of non-nuclear systems through high-fidelity simulations instead of expensive and perhaps impossible testing; and training the next generation of designers in computational simulation instead of testing.

The future is perhaps more challenging than the original ASCI task of moving from single vector processor production codes to massively parallel codes with better predictive capability. Computing platforms and their underlying chips are changing in significant ways. We must address this change in order for our current generation of ASC codes to run effectively. While the design of future computer architectures is unclear, it is clear that the industry is moving in a direction that will drive us to re-work our existing algorithms in significant ways. In fact, as we move forward, counting FLOPS is essentially guaranteed to be the wrong measure of a computer’s worth to the ASC program. Consequently, we are considering new and more meaningful metrics for effective future computers.

Suffice it to say that we are entering a new era in computing and you are leading the charge.  The simulation tools you are providing today were only dreams just over 16 years ago.  Today’s reality provides only a glimpse of what you can achieve.  So when the old timers, myself included, fondly recall the good old days when there was an “I” in ASC, let us have our moment. But, realize that you are living the good old days and you are the architects of our underlying technical capability that ensures a safer nuclear world without testing.

P.S. Special thanks to Bob Weaver for his insights for this newsletter.

Viz Systems at Los Alamos Aid Progress in Predictive Science

The Production Visualization Project at Los Alamos National Laboratory (LANL) provides visualization (viz) systems to weapons designers. The scientists who are experts on the viz systems work directly with weapons designers in a physics-based, iterative discovery process using EnSight, the tool that enables next-generation visualization and data analysis. They provide analytical expertise to help LANL weapons designers utilize the full power of the hardware and software. The ASC Program develops and deploys the hardware and software infrastructure for visualization and data analysis.

Bob Weaver, a weapons designer at LANL, commented about data analysis in a recent interview. “When we do data analysis — when we actually look at the results of these large 3D calculations — that whole process has become extremely user friendly. It’s almost become real time. There is a wealth of data in these calculations.”

Weaver goes on to say, “You can imagine a billion-cell calculation has a lot of detail throughout the problem. Our graphics visualization techniques — both small-file daily interactions with the graphics of these calculations as well as our large three-dimensional visualizations on the stereo PowerWall theatre with our EnSight tools — are state of the art and very quick. We can actually visualize and look at the physics and understand the progress of the calculation almost in real time. This is really a large step forward for us.”

LANL completed the Visualization Cluster Upgrade Project in December 2011 (L2 Milestone 4469). Visualizations of groundbreaking simulations on the petascale systems Cielo and Roadrunner are being performed routinely using ViewMaster2, a new visualization cluster capable of delivering analysis of the scale and precision necessary for predictive scientific progress. This project is an excellent example of a co-designed platform with scientists, scientific and visualization experts, computer scientists, and computer system engineers working together to provide a significant step forward in post-processing performance and capacity.

 

New Lorenz Portal Increases Code Accessibility


The ASC Physics and Engineering Models (PEM) simulation code for material strength and damage, the Parallel Dislocation Simulator (ParaDiS), was recently chosen as the first code to be deployed under the new Lorenz application portal at LLNL. Increasing the accessibility of PEM codes adds value to ASC by enhancing engagement with the wider scientific community. The ParaDiS code models in unprecedented detail the mechanisms of dislocation motion.

The Lorenz portal allows ASC computer users from across the country to quickly access all manner of information about their accounts and Livermore Computing (LC), thus simplifying aspects of high performance computing (HPC) that traditionally have been tedious or difficult. The portal will provide high-level Web interfaces for the setup, launch, monitoring, and analysis stages of user interaction with ParaDiS, thus helping new users become familiar with the code and what it can do. Experienced users will be able to interact with the code and the LC machines more directly.

ParaDiS is a free, large-scale dislocation dynamics simulation code used to study the fundamental mechanisms of plasticity. Originally developed at Lawrence Livermore, it is written in C (with a little C++) and uses the MPI library for communication between processors. It runs routinely on 100–1000 processors, and scalability has been demonstrated on 132,000 processors of BlueGene/L.
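For readers curious what “written in C with MPI” looks like in practice here, the sketch below shows, in greatly simplified form, the domain-decomposed communication pattern a dislocation dynamics code of this kind relies on: exchange ghost segments with neighboring domains, compute local forces, and reduce global diagnostics. The data layout, sizes, and ring-style exchange are illustrative assumptions, not ParaDiS’s actual internals.

    /* Minimal sketch of one force-update cycle in a domain-decomposed
     * dislocation dynamics code.  All names, sizes, and the ring-style
     * neighbor exchange are illustrative assumptions, not ParaDiS code. */
    #include <mpi.h>
    #include <stdlib.h>

    #define N_LOCAL 1000   /* segments owned by this rank */
    #define N_GHOST 200    /* segments mirrored from neighboring domains */

    int main(int argc, char **argv)
    {
        int rank, nprocs;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* x,y,z per segment; ghosts are stored after the owned segments. */
        double *seg   = calloc(3 * (N_LOCAL + N_GHOST), sizeof(double));
        double *force = calloc(3 * N_LOCAL, sizeof(double));

        /* 1. Exchange ghost segments so each rank sees the segments just
         *    outside its own domain (a ring stands in for the real
         *    spatial neighbor lists). */
        int left  = (rank + nprocs - 1) % nprocs;
        int right = (rank + 1) % nprocs;
        MPI_Sendrecv(seg,               3 * N_GHOST, MPI_DOUBLE, left,  0,
                     seg + 3 * N_LOCAL, 3 * N_GHOST, MPI_DOUBLE, right, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* 2. Compute segment-segment interaction forces (physics omitted). */

        /* 3. Reduce a global diagnostic, e.g., total plastic strain rate. */
        double local_rate = 0.0, global_rate;
        MPI_Allreduce(&local_rate, &global_rate, 1, MPI_DOUBLE, MPI_SUM,
                      MPI_COMM_WORLD);

        free(seg); free(force);
        MPI_Finalize();
        return 0;
    }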

 

 

Trios: A Collaborative Vehicle for I/O Software Technology

Trilinos I/O Support (Trios) is an open-source package of libraries developed as part of a newly formed I/O capability area of the Trilinos project; it was first released as part of the Trilinos 10.10.1 software product in February 2012. Trios serves two important roles: it is a repository for production-quality I/O libraries such as Exodus, Nemesis, and IOSS, codes traditionally managed as part of the SIERRA toolkit and in use by ASC codes for more than a decade, and it is a vehicle for collaborative design, evaluation, and distribution of new techniques to improve I/O on advanced platforms.

The development portion of Trios contains several ASC-developed software products, including the Network Scalable Service Interface (Nessie). Nessie is the core framework used to develop "data services," a technology that leverages available compute resources on HPC systems for real-time management and analysis of simulation data. One data service built with Nessie provides caching and staging for applications that have "bursty" I/O operations (such as checkpoints). Published results demonstrate a 10x improvement in effective I/O rates for representative applications. We are actively developing a production-quality version of this staging service for the Exodus I/O library for users of the Alegra multiphysics codes.
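The staging idea is easy to picture: rather than every compute rank blocking while a checkpoint drains to the parallel file system, the application hands its data to a staging service and resumes computing. The sketch below illustrates the pattern with plain MPI; it is a loose analogy, not the Nessie API, and the rank assignment and buffer sizes are invented.

    /* Illustrative checkpoint-staging pattern: compute ranks push their
     * checkpoint to a designated staging rank and keep computing while
     * that rank drains data to disk.  A generic sketch, not Nessie. */
    #include <mpi.h>
    #include <stdlib.h>

    #define CKPT_DOUBLES (1 << 20)

    int main(int argc, char **argv)
    {
        int rank, nprocs;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        int stager = nprocs - 1;        /* last rank acts as the service */
        double *state = calloc(CKPT_DOUBLES, sizeof(double));

        if (rank != stager) {
            MPI_Request req;
            /* The bursty checkpoint becomes one message to the stager. */
            MPI_Isend(state, CKPT_DOUBLES, MPI_DOUBLE, stager, 42,
                      MPI_COMM_WORLD, &req);
            /* The next timestep's computation would overlap here; the
             * rank blocks only when the buffer must be reused. */
            MPI_Wait(&req, MPI_STATUS_IGNORE);
        } else {
            double *buf = malloc(CKPT_DOUBLES * sizeof(double));
            for (int src = 0; src < nprocs - 1; src++) {
                MPI_Recv(buf, CKPT_DOUBLES, MPI_DOUBLE, MPI_ANY_SOURCE, 42,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                /* Staging rank writes buf to the file system at its own
                 * pace, decoupled from the simulation's timestep loop. */
            }
            free(buf);
        }
        free(state);
        MPI_Finalize();
        return 0;
    }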

A second data service under development provides real-time analysis for the Sandia shock physics code CTH. This service uses a separate partition of compute nodes to detect material fragments as they are generated by the CTH simulation. The nature of the analysis required for fragment detection suggests that offloading it to a separate set of compute nodes could substantially reduce the I/O and analysis overheads on CTH, and possibly lead to fragment-tracking techniques that are not practical using traditional post-processing approaches. A detailed study of this work is underway as part of an FY13 ASC Level 2 milestone.

The integration of ASC I/O software into Trios allows us to leverage the professional quality code management and testing infrastructure in Trilinos to ensure a high-quality product. The broad availability of Trilinos has already facilitated a number of collaborative efforts with Oak Ridge National Laboratory, Georgia Institute of Technology, Northwestern University, and others.  An article detailing the R&D associated with Trios is scheduled for publication in a special issue of Scientific Programming later in 2012.

 

 

Improving Application Performance with CSSE-Provided Technology

At times, there can be a staff-resource tradeoff between improving application models and modifying the application for higher performance or scalability. One of CSSE’s (Computational Systems and Software Engineering) roles is to lessen this tension by providing generally available software that can help improve application performance with minimal, if any, source code changes to the applications. CSSE-developed software, underlying or in partnership with the application, can exploit newer technology. Sandia staff members have completed a survey of thirteen of their software contributions over the last three years that address this role, including data designed to quantify the improvements. Some of these contributions have been reported in prior ASC newsletters. We highlight three more here.

  1. Node architecture-oblivious job submission: The Red Storm system, currently in use for ASC’s broader national security mission, has undergone multiple hardware upgrades, resulting in nodes with differing numbers of cores and amounts of memory. In order to maximize the capabilities of each node type, Sandia enhanced the job submission interface. This change enabled more efficient core utilization by accepting requests for compute resources using parameters that match the requirements of the application. It spares the user the task of remapping those requirements onto the peculiarities of a hardware configuration that is both heterogeneous and evolving. Using data from actual workloads, we demonstrated that the interface change resulted in a 10% improvement in core utilization. Figure 1 shows the percent utilization of cores under the old scheme, in which only nodes were specified, and the new scheme, in which MPI ranks and, optionally, memory per rank are specified. A simplified illustration of the new mapping appears below.
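The hypothetical helper below illustrates the mapping the enhanced interface performs on the user’s behalf: a request expressed as MPI ranks and memory per rank is packed onto a mix of node types, with ranks per node limited by whichever of cores or memory binds first. The node table is invented for illustration and does not describe Red Storm’s actual configuration.

    /* Hypothetical node-oblivious resource mapping: given a request in
     * application terms (ranks, memory per rank), choose how many ranks
     * to place on each node type.  The node table is invented; it does
     * not describe Red Storm's actual configuration. */
    #include <stdio.h>

    typedef struct { int cores; double mem_gb; int count; } NodeType;

    int main(void)
    {
        NodeType types[] = { { 4,  8.0, 100 },   /* older nodes    */
                             { 8, 16.0,  50 } }; /* upgraded nodes */
        int ranks = 500;
        double mem_per_rank = 2.5;               /* GB */

        for (int t = 0; t < 2 && ranks > 0; t++) {
            /* Ranks per node limited by both cores and memory. */
            int by_mem = (int)(types[t].mem_gb / mem_per_rank);
            int per_node = types[t].cores < by_mem ? types[t].cores : by_mem;
            int usable = per_node * types[t].count;
            int placed = ranks < usable ? ranks : usable;
            int nodes = (placed + per_node - 1) / per_node;
            printf("type %d: %d ranks on %d nodes (%d per node)\n",
                   t, placed, nodes, per_node);
            ranks -= placed;
        }
        return 0;   /* handling of any leftover ranks is omitted */
    }

Note that on the older node type, the 2.5 GB/rank request limits each node to 3 ranks even though 4 cores are present; automating exactly this kind of bookkeeping is what produced the measured utilization gain.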




  2. Compressed Simulation and Analysis Workflow: The traditional analysis workflow prescribes storing simulation results to disk and later retrieving them to perform analysis and visualization for final results. This workflow is increasingly difficult to sustain due to the ever-widening gap between the ability to generate data and the ability of I/O technology to absorb it. In situ and in transit are techniques that bypass the traditional workflow and manage the data movement problem. In situ refers to linking the visualization tools directly into the simulation code so they run in the same memory space. In transit uses data staging and executes the analysis and visualization in a separate staging job, concurrently with the main computation job. Both techniques have merit depending on the problem; a minimal sketch of the two couplings follows.
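The sketch below contrasts the two couplings in skeletal C/MPI; the stub functions and the choice of the last rank as the staging job are invented for illustration.

    /* Sketch of in situ vs. in transit analysis couplings; the physics
     * and analysis are stubs, and the layout is invented. */
    #include <mpi.h>

    #define NCELLS 4096

    static void simulate_timestep(double *f) { f[0] += 1.0; } /* stub */
    static void analyze(const double *f)     { (void)f;     } /* stub */

    int main(int argc, char **argv)
    {
        int rank, nprocs;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        int in_situ = (argc > 1);     /* any argument selects in situ */
        int stager = nprocs - 1;      /* last rank is the staging job */
        double field[NCELLS] = { 0.0 };

        if (!in_situ && rank == stager) {
            /* In transit: the staging rank analyzes concurrently while
             * the simulation ranks proceed to their next timestep. */
            for (int step = 0; step < 10 * (nprocs - 1); step++) {
                MPI_Recv(field, NCELLS, MPI_DOUBLE, MPI_ANY_SOURCE,
                         MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                analyze(field);
            }
        } else {
            for (int step = 0; step < 10; step++) {
                simulate_timestep(field);
                if (in_situ)
                    analyze(field);   /* same address space, no copy */
                else
                    MPI_Send(field, NCELLS, MPI_DOUBLE, stager, step,
                             MPI_COMM_WORLD);
            }
        }
        MPI_Finalize();
        return 0;
    }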




    Figure 2a shows an example in which the visualization scales at the same rate as the simulation, making in situ analysis a good choice. Figure 2b shows an impedance mismatch between the simulation and the visualization; here an in transit solution, employing the buffering capability of Nessie (another CSSE contribution), will likely be the more viable approach. A more complete analysis of the tradeoffs between the two technologies is the subject of an FY13 ASC L2 milestone.


  3. Process Replication for Reliability: Our last example from the CSSE survey is more forward looking. We used a combination of modeling, empirical analysis (using prototype rMPI software), and simulation to study the costs and benefits of using process replication as the primary fault tolerance mechanism in place of the traditional file checkpoint-restart solution. The results, which cover different failure distributions, hardware mean times between failures, and I/O bandwidths, show that state machine replication is a potentially useful technique for meeting the fault tolerance demands of HPC applications on future highly parallel platforms. A simulation analysis found the “break even” points for replication at various checkpoint bandwidth rates; the shaded region of the resulting plot corresponds to possible socket counts and socket MTBFs for exascale-class machines. The reasoning behind the break-even points is sketched below.
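For readers who want the intuition behind those break-even points, a textbook first-order model (our illustrative assumption here, not necessarily the exact model used in the study) compares the two approaches as follows:

    % First-order model: checkpoint cost \delta, checkpoint interval \tau,
    % machine mean time to interrupt M (illustrative assumption only).
    \[
      E_{\mathrm{ckpt}} \approx 1 - \frac{\delta}{\tau} - \frac{\tau}{2M},
      \qquad
      \tau^{*} = \sqrt{2\delta M}
      \;\Longrightarrow\;
      E_{\mathrm{ckpt}}^{*} \approx 1 - \sqrt{2\delta/M}.
    \]
    % Replication dedicates half the sockets to replicas, so
    % E_{\mathrm{repl}} \approx 1/2.  Replication breaks even when
    % 1 - \sqrt{2\delta/M} = 1/2, i.e., \delta/M = 1/8: frequent failures
    % (small M at large socket counts) or slow checkpoints (large \delta)
    % favor replication.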

 

 

New Robust Contact Capability Dramatically Improves Runtime

Recent additions to the DASH contact algorithm in the Sierra/Solid Mechanics module have dramatically improved the robustness and efficiency of the overall capability. Problems that previously took 2 hours to run now complete in 2 minutes (a 25x reduction in contact iterations and a 100x reduction in overall solution iterations). Users also report that problems that previously could not converge in commercial codes are now running to completion in Sierra. Perhaps even more impressive, these solutions were obtained with minimal, simple contact specifications; i.e., there was no need to specify master or slave surfaces, capture tolerances, or special iteration techniques.

These additions to the DASH contact capabilities have enabled robust simulations of high-deformation forging problems, system preloads, and problems with many layers of contacts. Contact search and enforcement in implicit codes is one of the more difficult problems in computational solid mechanics due to the highly nonlinear nature of the contact phenomenon and its combination with the nonlinear material and geometric effects that can occur in these types of problems. These effects include phenomena such as friction or general stick-slip, multiple contact combinations, interfaces with dramatically different stiffnesses (e.g., foam and steel), thermal softening, and nested contacts (e.g., multiple layers of shells or parts). Such conditions are appearing with increasing frequency in various nuclear weapons applications as analysts include more and more detail in the numerical models.

The recent development activities combined rigorous code verification procedures with advanced features, such as adaptive penalty algorithms and methods to suppress intermediate rigid body modes, to achieve the reported performance and robustness gains.
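As a generic illustration of the adaptive-penalty idea (a standard formulation, not the specific DASH algorithm), penalty contact resists interpenetration with a force proportional to the gap violation and stiffens the penalty when an iterate penetrates too far:

    % Generic penalty contact (illustration only, not the DASH algorithm).
    % Gap g >= 0 means separation; g < 0 means penetration.
    \[
      f_n =
      \begin{cases}
        -\kappa\, g, & g < 0,\\
        0,           & g \ge 0,
      \end{cases}
      \qquad
      \kappa^{(k+1)} = \beta\,\kappa^{(k)}
      \quad \text{if } |g| > g_{\mathrm{tol}},\ \beta > 1.
    \]
    % Adapting \kappa balances two failure modes: too small a penalty
    % allows visible penetration, while too large a penalty ill-conditions
    % the linearized system and stalls the nonlinear iterations.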

 

 
 

Predictive Science Panel Held Spring Meeting at Los Alamos

On March 13–16, 2012, the Predictive Science Panel (PSP) met at Los Alamos National Laboratory (LANL). Under a new charter effective in January 2011, the 15-member panel of experts familiar with relevant scientific disciplines—such as theoretical, computational, and experimental science—came together to provide feedback on the quality and direction of the predictive science work for the NNSA Stockpile Stewardship Program.

At their closeout briefing, the panel acknowledged the knowledge and enthusiasm of the technical presenters and noted that they enjoyed the poster session presented by early-career staff at LANL. The panel provided both technical and programmatic suggestions to strengthen the ASC and Science Campaign programs.

The PSP is chartered by the LANL and LLNL Advanced Simulation and Computing (ASC) and Science Campaign (SC) programs to get feedback on the scope of work executed by the ASC and SC programs. Meetings are scheduled roughly every six months with the location alternating between LANL (spring) and LLNL (fall).

 

Los Alamos’ ASC Program Sponsors Metropolis Postdoctoral Fellowship

Following the death of Nicholas Metropolis on October 17, 1999, then Los Alamos National Laboratory Director John Browne wrote: “Nick’s work in mathematics and the beginnings of computer science forms the basis for nearly everything the Laboratory has done in computing and simulation science.”

In 2010, LANL inaugurated the Nicholas C. Metropolis Postdoctoral Fellowship in Computer and Computational Science. Under the Advanced Simulation & Computing Program, computer simulation capabilities are developed to support the Stockpile Stewardship Program as well as broader national security needs. “Given that much of our weapons work today is done on computers, I wanted to develop a fellowship that specifically targets computational and computer scientists to join LANL,” says Brian Albright, a scientist in the Plasma Theory and Applications Group and one of the architects of the fellowship.

To date, four recipients of this fellowship are pursuing advanced research in computational and computer science, physics, and engineering. Metropolis postdoc fellows have the opportunity to use the most powerful supercomputers in the world to perform cutting-edge research. An article about the fellowship and a few of its recipients appears on the LANL National Security Science magazine website (http://www.lanl.gov/science/NSS/past_issues.shtml). Click on Issue 2, 2011, and find the article “Rolling out a New Supercomputing Fellowship at Los Alamos.”

For more information about the fellowship, go to the LANL Postdocs website at http://www.lanl.gov/science/postdocs/appointments_fellow.shtml.

 
 

Lawrence Livermore Sparks Improvements in HPC Energy Efficiency

As Lawrence Livermore National Laboratory (LLNL) sets its sights on exascale computing, Lab scientists and engineers are researching and developing techniques to improve the energy efficiency of high performance computing (HPC). LLNL is involved in several efforts to reduce the energy use of the computers and the facilities that house them and to promote new standards of quantifying efficiency gains beyond gross energy use.

"Today, U.S. servers and data centers are already using more than 1.5% of the total national electricity consumption," said Anna Maria Bailey, Computation Associate Director Facility Manager. "With 20-megawatt exascale systems expected to come online in the next 7 to 10 years, it's vital that we redefine a supercomputer's relation to energy."

LLNL has been a leader in optimizing the efficiency of HPC through various sustainability projects. For instance, since 2004—when ASC Purple was brought online—until today, through multiple generations of HPC platforms, the Terascale Simulation Facility (TSF) computing power has increased five-fold (in one quarter of the space) while using 2.4 times less electricity.

While enhanced data center efficiency and metrics such as power usage effectiveness (PUE)—a measure of how much of a facility's power goes to the computing equipment itself versus cooling and other overhead—have improved the overall power picture, scientists must now pursue innovations in smart cooling, heat re-use, renewable energy, and full lifecycle sustainability. The core focus areas include benchmarking, computational fluid dynamics, Leadership in Energy and Environmental Design certifications, HPC capability gap analysis, free cooling, liquid cooling, innovative electrical distribution, sustainable HPC solutions, HPC platform power budgets, and power management.
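For reference, PUE is the ratio of total facility power to the power delivered to the computing equipment, so an ideal facility approaches 1.0:

    \[
      \mathrm{PUE} \;=\;
      \frac{P_{\text{total facility}}}{P_{\text{IT equipment}}} \;\ge\; 1.
    \]

By this measure, the facility-wide improvement from 1.3 to 1.15 cited below cuts cooling and distribution overhead from 30% of the IT load to 15%.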

"We're using several new techniques with Sequoia that address sustainability issues," Bailey said. "At 2 gigaFLOP/s per watt, it will be the world's most power-efficient supercomputer."

More than 91% of Sequoia will be cooled using a combination of liquid-cooling and air-cooling techniques. Its efficient design also includes an innovative 480-V electrical distribution system, which provides improved voltage optimization to reduce losses.

"Even with these techniques, Sequoia will use enough energy to power 7,200 homes," Bailey said. "We've got to bring that number down as we plan for exascale."

One way to improve an HPC center's energy efficiency is to implement "free cooling," a technique that uses the outside air to drive the machine-cooling process. Bailey and her team have completed a study that shows it would take a $5.5M investment to implement free cooling in the TSF.

While the upfront funding might seem substantial, free cooling would save an estimated 16M kWh per year and would pay for itself in four years. In addition, this design would allow B453 to increase its computational capacity from 30 MW to 45 MW and improve the overall facility PUE from 1.3 to 1.15.
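The four-year payback follows from simple arithmetic, assuming an electricity rate of roughly $0.085 per kWh (our assumption; the article does not quote a rate):

    % Assumed rate of ~$0.085/kWh; not quoted in the article.
    \[
      16\,\mathrm{M\ kWh/yr} \times \$0.085/\mathrm{kWh}
      \approx \$1.4\mathrm{M/yr},
      \qquad
      \frac{\$5.5\mathrm{M}}{\$1.4\mathrm{M/yr}} \approx 4\ \text{years}.
    \]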

 

The LLNL team is also actively pursuing power management solutions, which will be critical to the success of energy management and, ultimately, exascale computing. The team's solution has been to create and implement a centralized, real-time data management infrastructure spanning all data sources, from individual computer racks to the entire Laboratory site. This effort presents many challenges: understanding how different types of hardware and software affect power utilization, correlating multiple data sources, coordinating with the multiple owners of the data, accessing the data, selecting the best interface, comparing and viewing the data on a common platform, and creating various dashboards. Once complete, the infrastructure can be extended to all LLNL data centers and perhaps throughout DOE.

"One of our goals is to create solutions that can be adopted by the entire DOE complex," Bailey said. "We share a commitment to mission excellence and are working closely together to use computational efficiency as a viable alternative to measuring advances in HPC sustainable stewardship."

 

Popular Mechanics Features Story on Sequoia

 

The Sequoia facility infrastructure project is well underway; however, the facility requirements are challenging in many areas, with heavy emphasis on the structural, mechanical, and electrical systems pushing the envelope of the building. Each rack is 4' W x 4' L x 7' H and weighs 4,500 pounds; the entire system comprises 96 racks, adding over 210 tons of weight to the computer floor. Each rack requires 30 GPM of water at 64°F to 74°F, an air supply of 1,700 CFM, and 100 kW of power, for a system total of 9.6 MW. These technical challenges are coming together in a compartmentalized master space plan to accommodate all of the utilities underneath the floor. December's Popular Mechanics highlighted many of Sequoia's features.
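As a consistency check, the quoted system totals follow directly from the per-rack figures:

    \[
      96 \times 100\ \mathrm{kW} = 9.6\ \mathrm{MW},
      \qquad
      96 \times 4{,}500\ \mathrm{lb} = 432{,}000\ \mathrm{lb}
      \approx 216\ \text{tons}.
    \]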

ASC Salutes Joel Stevenson

Joel is a Principal Member of Technical Staff in the Scientific Applications and User Support department at Sandia National Laboratories. He currently supports customers working on the ACES (Alliance for Computing at Extreme Scale) Cielo platform located at Los Alamos National Laboratory (LANL).

Over the last year or so, Joel has been engaged in helping Sandia code teams and analysts port applications to Cielo. He has been instrumental in enabling applications to run at scale, in providing detailed evidence of failure modes, in helping to isolate and identify problems encountered with the Panasas file system, and in applying defensive work-arounds to let simulations progress.

“I really enjoy working with subject matter experts like Jason Wilke and Steve Attaway on real-world design problems, and I appreciate the opportunity to learn the application space,” says Joel. “Getting a chance to run the Sandia hydrodynamics code CTH over the last few years has provided me with an appreciation for the work of the code developers and analysts. I think first-hand experience running the code gives me a better appreciation for the issues that users face, and makes me a better resource for users.”

In addition to this analytic effort, Joel also managed the call for Capability Computing Campaign (CCC) proposals for Cielo. He led the Sandia process for both the initial CCC-1 series of workload requests and the current CCC-2 campaign. Before this assignment, Joel supported the data gathering and decision process for determining the Sandia workload on the Purple platform at Lawrence Livermore National Laboratory (LLNL). Soliciting project proposals, refining computing estimates, and ensuring that each project was successfully running on Purple exposed Joel to many Sandia codes and code team members, an experience that has helped establish his reputation as someone who has the “right stuff” to handle production computing issues.

Joel originally joined Sandia in 1986, working in the Materials Science and Technology Center doing electrochemistry research, and left in 1997 to co-found Peak Sensor Systems, a supplier to the microelectronics fabrication industry. When he returned to Sandia in 2005, Joel was looking for a new challenge. He spent a little time working with the High Performance Storage System (HPSS) team, where he gained an appreciation for the complexities of moving and managing large data sets. Initially, Joel assisted customers in transferring data off Sandia's Red Storm computer system to centralized file systems on the Sandia Restricted Network and the Sandia Classified Network, as well as to the tape-based archives of the Sandia Mass Storage System, which manages data using HPSS servers.

This experience was beneficial as he progressed to supporting longer-distance data movement for customers using Purple at LLNL and Cielo at LANL. The bulk of this long-distance data transfer runs across the ASC-funded DisCom Wide Area Network, which provides 10-gigabit Ethernet connections between the three laboratories' classified computing environments. These interconnects require constant observation and analysis, as minor changes or error conditions can drastically alter the performance of data transfer between the sites. Joel became very familiar with the various tools for data movement and their performance characteristics, and he created a "cookbook" of recommendations for analysts to consult when they need to move data between systems, either locally or across the DisCom WAN.

As a User Support professional, Joel helps beginners learn about high performance computing systems and often acts on their behalf to diagnose problems or manage production runs efficiently. Joel has helped several projects run efficiently on Purple and Cielo by avoiding the little learning errors that can derail early adopters and delay progress toward deliverables. Most large simulations run for days to weeks and create thousands of files. Managing this complexity takes a thorough understanding of the codes, the file systems, and the limitations of the individual computing platform. Joel has become an expert in all of these areas, making him especially valuable to the ASC program and our NW mission.

 

 

ASC Relevant Research


 Lawrence Livermore National Laboratory

Citations for Publications

 

2009 Publications (previously not listed)

  1. Brecht, S.H., Hewett, D.W., Larson, D.J. (2009), “A Magnetized, Spherical Plasma Expansion in an Inhomogeneous Plasma: Transition from Super- to Sub-Alfvénic,” Geophys. Res. Lett., 36, L15105, doi:10.1029/2009GL038393, 6 Aug.

  2. Hewett, D.W., Brecht, S.H., Larson, D.J. (2009). “The physics of ion decoupling in magnetized plasma expansions,” J. Geophysical Research, Vol. 116, A11310.

  3. Hewett, D.W., Brecht, S.H., Larson, D.J. (2009). “The physics of ion decoupling in magnetized plasma explosions,” JRERE, Proceedings of the 2011 HEART Conference.

 

2010 Publications (previously not listed)

  1. Iglesias, C.A. (2010). “Excited Spectator Electron Effects on Spectral Lines,” High Energy Density Physics, Vol. 6, p. 318.

  2. Iglesias, C.A. (2010). “XUV Absorption by Solid Density Aluminum,” High Energy Density Physics, Vol. 6, p. 311.

  3. Iglesias, C.A., Sonnad, V. (2010). “Robust Algorithms for Computing Quasi-Static Stark Broadening of Spectral Lines,” High Energy Density Physics, Vol. 6, p. 399.

  4. Iglesias, C.A., Sonnad, V. (2010). “The Lanczos Method Applied to Quasi-Static Stark Broadening of Spectral Lines,” High Energy Density Physics, Vol. 6, p. 391.

  5. Liedahl, D.A., Libby, S.B., Rubenchik, A. (2010). “Momentum Transfer by Laser Ablation of Irregularly Shaped Space Debris,” Int. Sym. on High Power Laser Ablation 2010, AIP 1278, pp. 772-779.

  6. Luna, G.J.M., Raymond, J.C., Brickhouse, N.S., Mauche, C. (2010). “Photoionized Features in the X-Ray Spectrum of EX Hydrae,” Astrophysical J., 711, pp. 1333-1337.

 

2011 Publications (previously not listed)

  1. Alam, A., Khan S.N., Wilson B.G., et al. (2011). “Efficient Isoparametric Integration Over Arbitrary Space-Filling Voronoi Polyhedra for Electronic Structure Calculations,” Physical Review B, Vol. 84, #4, Article #045105.

  2. Alam, A., Wilson, B.G., Johnson D.D. (2011). “Accurate and Fast Numerical Solution of Poisson's Equation for Arbitrary, Space-filling Voronoi Polyhedra: Near-field Corrections Revisited,” Physical Review B, Vol. 84, #20, Article #205106. 

  3. Alves, S., Kuhl, A., Najjar, F., Tringe, J., McMichael, L., Glascoe, L. (2011). “Building an Efficient Model for Afterburn Energy Release,” 82nd Shock & Vibration Sym., Baltimore, MD.

  4. Aufderheide III, M., Henderson, G., von Wittenau, A. (2011). “HADES User's Manual,” LLNL-SM-521112.

  5. Bailey, T.S., Chang, J.H., Warsa, J.S., Adams, M.L. (2011). “A Piecewise Bi-Linear Discontinuous Finite Element Spatial Discretization of the Sn Transport Equation,” Int. Conference on Mathematics and Computational Methods Applied to Nuclear Science and Engineering (M&C 2011), Rio de Janeiro, Brazil.

  6. Bailey, T.S., Hawkins, W.D., Adams, M.L. (2011). “A Piecewise Linear Discontinuous Finite Element Spatial Discretization of the Sn Transport Equation for Polyhedral Grids in 3D Cartesian Geometry,” The 22nd Int. Conference on Transport Theory (ICTT-22), Portland, OR.

  7. Baker, A.H., Gamblin, T., Schulz, M., Yang, U.M. (2011). “Challenges of Scaling Algebraic Multigrid across Modern Multicore Architectures,” Proc. of IPDPS, May.

  8. Barton, N.R., Bernier, J.V., Becker, R., Arsenlis, A., Cavallo, R., Marian, J., Rhee, M., Park, H.-S., Remington, B., Olson, R.T. (2011). “A multi-scale strength model for extreme loading conditions,” J. of Applied Physics, Vol. 109, No. 7, pp. 073501.

  9. Barton, N.R., Bernier, J.V., Knap, J., Sunwoo, A.J., Cerreta, E., Turner, T.J. (2011). “A call to arms for task parallelism in multi-scale materials modeling,” Int. J. for Numerical Methods in Engineering, Vol. 86, No. 6, pp. 744–764.

  10. Bastea, S. (2011). “Thermodynamics and diffusion in size-symmetric and asymmetric dense electrolytes,” Journal of Chemical Physics, Vol. 135, Issue 8, Article no. 084515.

  11. Bernard, R., Goutte, H., Gogny, D., Younes, W. (2011). “Microscopic and non-adiabatic Schrödinger equation derived from the generator coordinate method based on zero- and two-quasiparticle states,” Phys. Rev. C, Vol. 84, p. 044308.

  12. Bernier, J.V., Barton, N.R., Lienert, U., Miller, M.P. (2011). “Far-field high energy diffraction microscopy: A tool for intergranular orientation and strain analysis,” J. of Strain Analysis for Engineering Design, Vol. 46, No. 7, pp. 527–547.

  13. Beyer, J.C., Stotzer, E.J., Hart, A., de Supinski, B.R. (2011). “OpenMP for Accelerators,” Seventh Int. Workshop on OpenMP (IWOMP 2011), Chicago, IL, June 13–15.

  14. Bhatele, A., Gamblin, T., Gunney, B.T., Schulz, M., Bremer, P., Isaacs, K.E. (2011). “Revealing Performance Artifacts in Parallel Codes Through Multi-Domain Visualizations,” SIAM Conference on Parallel Processing, Feb. 15-17, Savannah, GA.

  15. Biswas, S., de Supinski, B.R., Schulz, M., Franklin, D., Sherwood, T., Chong, F.T. (2011). “Exploiting Data Similarity to Reduce Memory Footprints,” Twenty Fifth Int. Parallel and Distributed Processing Sym. (IPDPS 2011), Anchorage, AK, May 16–20.

  16. Boates, B., Bonev, S.A., (2011). “Electronic and structural properties of dense liquid and amorphous nitrogen,” Physical Review B, Vol. 83, pp. 174117.

  17. Boates, B., Hamel, S., Schwegler, E., Bonev, S.A., (2011). “Structural and optical properties of liquid CO2 up to 1 terapascal,” J. of Chemical Physics, Vol. 134, pp. 064504.

  18. Böhme, D., de Supinski, B.R., Geimer, M., Schulz, M., Wolf, F. (2011). “Scalable Critical-Path Based Performance Analysis,” Twenty Sixth Int. Parallel and Distributed Processing Sym. (IPDPS 2012), Shanghai, China, May 21–25.

  19. Boutoux, G., Jurado, B., Meot, V., Aiche, M., Bail, A., Barreau, G., Bauge, E., Burke, J.T., Capellan, N., Companis, I., Czajkowski, S., Daugas, J.M., Dassie, D., Derkx, X., Faul, T., Haas, B., Gaudefroy, L., Gunsing, F., Matea, I., Mathieu, L., Morel, P., Pillet, N., Porquet, M.G., Roig, O., Romain, P., Serot, O., Taieb, J., Tassan-Got, L., Theroine, C. (2011). “Neutron-induced Capture Cross Sections via the Surrogate Reaction Method,” J. Korean Phys. Soc., Vol. 59, pp. 1924-1927.

  20. Brandon, S., Domyancic, D., Johnson, B., Nimmakayala, R., Lucas, D., Tannahill, J., Christianson, G., McEnerney, J., and Klein, R.I. (2011). “Response Model Based Analysis of Climate Model Sensitivities and Uncertainties using the LLNL UQ Pipeline,” 2011 American Geophysical Union (AGU) Fall Conference, San Francisco, CA., Dec. 5-30, 2011.

  21. Brantley, P.S. (2011). “A Benchmark Comparison of Monte Carlo Particle Transport Algorithms for Binary Stochastic Mixtures,” J. Quant. Spect. Rad. Trans., Vol. 112, pp. 599-618.

  22. Brantley, P.S., Gentile, N.A., Zimmerman, G.B. (2011). “A Levermore-Pomraning Algorithm for Implicit Monte Carlo Radiative Transfer in Binary Stochastic Media,” Trans. Am. Nuc. Soc., 105, 498.

  23. Brantley, P.S., Martos, J.N. (2011). “Impact of Spherical Inclusion Mean Chord Length and Radius Distribution on Three-Dimensional Binary Stochastic Medium Particle Transport,” Int. Conference on Mathematics and Computational Methods Applied to Nuclear Science and Engineering (M&C 2011), Rio de Janeiro, RJ, Brazil, May 8-12, 2011.

  24. Brown, D., Beck, B., Descalle, M.A., Hoffman, R., Ormand, E., Navratil, P., Summers, N., Thompson, I., Vogt, R., Younes, W., Barnowski, R. (2011). “Overview of the 2009 Release of the Evaluated Nuclear Data Library (ENDL2009),” J. Korean Phys. Soc., Vol. 59, pp. 1084-1087.

  25. Burke, J.T., Ressler, J.J., Escher, J.E., Scielzo, N.D., Thompson, I.J., Henderson, R., Gostic, J., Bernstein, L., Bluel, D., Weideking, M., Meot, V., Roig, O., Phair, L.W., Hatarik, R., Munson, J., Angell, C., Goldblum, B., Beausang, C.W., Ross, T., Hughes, R., Aiche, M., Barreau, C., Cappelan, N., Czajkowski, S., Hass, B., Jurado, B., Mathieu, L., Companis, I. (2011). “Experimental Approaches to Studying the Fission Process Using the Surrogate Reaction Technique,” J. Korean Phys. Soc., Vol. 59, pp. 1892-1895.

  26. Chau, R., Hamel, S., Nellis, W. (2011). “Chemical processes in the deep interior of Uranus,” Nature Communications, Vol. 2, Article #203.

  27. Chen M.H., Cheng K.T. (2011). “Hyperfine Quenching of the Metastable 4s4p (3)P(0) and (3)P(2) States of Zn-like Ions,” Canadian J. of Physics, Vol. 89, # 4, pp. 473-482.

  28. Chen M.H., Cheng K.T. (2011). “Relativistic Configuration-Interaction Calculations of the n=3-3 Transition Energies in Highly Charged Tungsten Ions,” Physical Review A, Vol. 84, #1, Article #012513.

  29. Chester, S.A., Anand, L. (2011). “A thermo-mechanically coupled theory for fluid permeation in elastomeric materials: application to thermally responsive gels,” J. of the Mechanics and Physics of Solids, Vol. 59, No. 10, pp. 1978-2006.

  30. Cho, B.I., Engelhorn, K., Correa, A.A., Ogitsu, T., Weber, C.P., Lee, H.J., Feng, J., Ni, P.A., Ping, Y., Nelson, A.J., Prendergast, D., Lee, R.W., Falcone, R.W., Heimann, P.A. (2011).  “Electronic Structure of Warm Dense Copper Studied by Ultrafast X-Ray Absorption Spectroscopy,” Physical Review Letters, Vol. 106, No. 16, pp. 167601-1–167601-4.

  31. Clark B.K., Morales M.A., McMinis J., Kim J., Scuseria G. E. (2011). “Computing the energy of a water molecule using multideterminants: A simple, efficient algorithm,” J. of Chemical Physics, Vol. 135, No. 24, pp. 244105.

  32. Clark, D.S., Haan, S.W., Cook, A.W., Edwards, M.J., Hammel, B.A., Koning, J.M., Marinak, M.M. (2011). “Short-wavelength and three-dimensional instability evolution in National Ignition Facility ignition capsule designs,” Physics of Plasmas 18, 082701.

  33. Covey, C., Brandon, S., Bremer, P.T., Domyancic, D., Gavaizar, X., Johannesson, G., Klein, R.I., Klein, S.A., Lucas, D., Tannahill, J., Zhang, Y. (2011). “Quantifying the Uncertainty of Climate Predictions,” World Climate Research Programme (WCRP), Denver, CO., Oct. 24-28.

  34. Covey, C., Brandon, S., Bremer, P.T., Domyancic, D., Klein, R.I., Klein, S.A., Lucas, D.D., Tannahill, J., and Zhang, Y. (2011). “Quantifying the Uncertainties of Climate Predictions,” AOS 271 Seminar, UCLA Dept. of Atmospheric and Oceanic Sciences, Los Angeles, CA, July 26.

  35. Covey, C., Brandon, S., Domyancic, D., Johannesson, G., Klein, S., Klein R.I., Lucas, D., Tannahill, J., Zhang, Y. (2011). “Perturbed-Physics Experiments with CICE running with CAM4 & Slab Ocean Mode,” CESM Polar Climate Working Group Meeting, NCAR, Boulder, CO, Feb.-Mar.

  36. Cunningham, A.J., Klein, R.I., Krumholz, M.R., McKee, C.F. (2011). “Radiation-hydrodynamic Simulations of Massive Star Formation with Protostellar Outflows,” Astrophysical J., 740, 107.

  37. Cunningham, A.J., McKee, C.F., Klein, R.I., Krumholz, M.R., Teyssier, R. (2011). “Radiatively Efficient Magnetized Bondi Accretion,” Astrophysical J., 2012, 745, 139L.

  38. Edmiston, J.K., Barton, N.R., Bernier, J.V., Johnson, G.C., Steigmann, D. J. (2011). “Precision of lattice strain and orientation measurements using high energy monochromatic x-ray diffraction,” J. of Applied Crystallography, Vol. 44.

  39. Ellis, I.N., Graziani, F.R., Glosli J.N., Strozzi, D.J., Surh, M.P., Richards, D.F., Decyk, V.K., Mori, W.B. (2011). “Studies of Particle Wake Potentials in Plasmas,” 53rd Annual Meeting of the APS Division of Plasma Physics, Salt Lake City, UT.

  40. Escher, J.E., Burke, J.T., Dietrich, F.S., Scielzo, N.D., Thompson, I.J., Younes, W. (2011). “Compound-nuclear Reaction Cross Sections From Surrogate Measurements,” accepted to Rev. Mod. Physics, Vol. 21, 01001.

  41. Escher, J.E., Dietrich, F.S., Scielzo, N.D. (2011). “Surrogate Approaches for Neutron Capture,” J. Korean Phys. Soc., Vol. 59, pp. 815-820.

  42. Ferencz, R.M., McCallen, R.C. (2011). “ALE3D/ParaDyn Status for Blast Protection for Platforms and Personnel HPC Software Application Institute,” DoD HPC Institute for Blast Protection for Personnel and Platforms, Army Research Laboratory, Aberdeen, MD, Oct.

  43. French, M., Hamel, S., Redmer, R. (2011). “Dynamical screening and ionic conductivity in water from ab initio simulations,” Physical Review Letters, Vol. 107, pp. 185901.

  44. Fried, L., Najjar, F.M., Howard, W.M., Manaa, M.R., Bastea, S., (2011). “Multiscale Simulation of Hot Spot Ignition,” 17th Biennial Int. Conference of the APS Topical Group on Shock Compression of Condensed Matter (SCCM11), Vol. 56, No. 6, Chicago, IL.

  45. Gahvari, H., Baker, A., Schulz, M., Yang, U., Jordan, K., Gropp, W. (2011). “Modeling the Performance of an Algebraic Multigrid Cycle on HPC Platforms,” Int. Conf. on Supercomputing (ICS 2011), Tuscon, AZ, June.

  46. Gurung, T., Laney, D.E., Lindstrom, P., Rossignac, J. (2011). “SQuad: Compact Representation for Triangle Meshes,” Computer Graphics Forum, Vol. 30, Issue 2, pp. 355-364, April.

  47. Gentile, N.A., Morel, J.E. (2011). “Material Motion Corrections for Implicit Monte Carlo Radiation Transport,” Int. Conf. on Mathematics and Computational Methods Applied to Nuclear Science and Engineering (M&C 2011), Rio de Janeiro, Brazil.

  48. Goehner, J., Arnold, D.C., Ahn, D.H., de Supinski, B.R., Lee, G.L., Legendre, M.P., Schulz, M., Miller, B.P. (2011). “A Framework for Bootstrapping Extreme Scale Software Systems,” First Int. Workshop on High-performance Infrastructure for Scalable Tools (WHIST), Tucson, AZ, June 4.

  49. Gopalakrishnan, G., Kirby, R.M., Siegel, S., Thakur, R., Gropp, W., Lusk, E., de Supinski, B.R., Schulz, M., Bronevetsky, G. (2011). “Formal Analysis of MPI-Based Parallel Programs: Present and Future,” Communications of the ACM (CACM), Vol. 54, No. 12, pp. 82-91, Dec.

  50. Graziani, F.R., Batista, V.S., Benedict, L.X., Castor, J.I., Chen, H., Chen, S.N., Fichtl, C.A., Glosli, J.N., Grabowski, P.E., Graf, A.T., Hau-Riege, S.P., Hazi, A.U., Khairallah, S.A., Krauss, L., Langdon, A.B., London, R.A., Markmann, A., Murillo, M.S., Richards, D.F., Scott, H.A. (2011). “Large-scale molecular dynamics simulations of dense plasmas: The Cimarron Project,” High Energy Density Physics, Vol. 8, Iss. 1, Mar., pp.105-131.

  51. Gunney, B.T.N., Bhatele, A., Gamblin, T. (2011). “Tree-based communication for scalable mesh adaptation in the SAMRAI framework,” 2012 SIAM Annual Meeting (AN12), Jul. 9-13, Minneapolis, MN.

  52. Hamel, S., Morales, M.A., and Schwegler, E. (2011). “Signature of helium segregation in hydrogen-helium mixtures,” Physical Review B, Vol. 84, No. 16, pp. 165110.

  53. Hansen, C.E., Klein, R.I., McKee, C.F., Fisher, R.T. (2011). “Feedback Effects on Low-mass Star Formation,” Astrophysical J., 747, 22.

  54. Hansen, C.E., McKee, C.F., Klein, R.I. (2011). “Anistropy Lengthens The Decay Time of Turbulence in Molecular Clouds,” Astrophysical J., 738, 88.

  55. Hatch-Aguilar, T.J., Najjar, F.M., Szymanski, E.W. (2011). “Computational Hydrocode Study of Target Damage due to Fragment-Blast Impact,” 26th Int. Ballistics Sym., Vol. 2. p. 1918, Miami, FL.

  56. Hilbrich, T., Mueller, M., Schulz, M., de Supinski, B.R. (2011). “Order Preserving Event Aggregation in TBONs,” EuroMPI 2011, Santorini, Greece, Sep. 18-21.

  57. Hilbrich, T., Mueller, M.S., De Supinski, B.R., Schulz, M., Nagel, W.E. (2011). “GTI: A Generic Tools Infrastructure for Event Based Tools in Parallel Systems,” Twenty Sixth Int. Parallel and Distributed Processing Sym. (IPDPS 2012), Shanghai, China, May 21–25.

  58. Hill, E. and Pope, G. (2011). “A Software Quality Engineering Maturity Model,” Better Software Conf. East, Int. Conf., Orlando, FL.

  59. Hoefler, T., Rabenseifner, R., Ritzdorf, H., de Supinski, B.R., Thakur, R., Traff, J.L. (2011). “The Scalable Process Topology Interface of MPI 2.2,” Concurrency and Computation: Practice & Experience, Vol. 23, No. 4, pp. 293-310, Mar.

  60. Hoffman, R. (2011). “Supernova Grand Challenges on ATLAS,” LLNL cross-cutting high performance computing external review committee, Aug. 30, Livermore, CA.

  61. Hoffman, R., Woosley, S. (2011). “Nucleosynthesis in Massive Stars, the Role of Electron Screening,” Workshop on Nuclear Physics in Hot Dense Plasmas, London, UK Mar. 12. http://jinaweb.org/events/NP2011/london_rdhoffman_print.pdf.

  62. Iandola, F., O’Brien, M., Procassini, R. (2011). “PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code,” Int. Conference on Mathematics and Computational Methods Applied to Nuclear Science and Engineering. Rio de Janeiro, Brazil, May 8-12.

  63. Iglesias, C.A. (2011). “Comment on ‘Free-free Opacity in Warm Aluminum,’” High Energy Density Physics, Vol. 7, p. 38.

  64. Iglesias, C.A., Sonnad, V. (2011). “Algorithm Comparisons for Stark-Profile Calculations,” High Energy Density Physics, Vol. 7, pp. 391-399.

  65. Iglesias, C.A., Sonnad, V. (2011). “Efficient Algorithm for Generating Spectra Using Line-by-Line Methods,” High Energy Density Physics, Vol. 7, p. 43.

  66. Jena, R.J., Chester, S.A., Srivastava, V., Yue, Y.C., Anand, L., Lam, Y.C. (2011). “Large-strain thermo-mechanical behavior of cyclic olefin copolymers: Application to hot embossing and thermal bonding for the fabrication of microfluidic devices,” Sensors and Actuators B: Chemical, Vol. 155, No. 1, pp. 93-105.

  67. Johnson, B. M., Schilling, O. (2011). “Reynolds-averaged Navier–Stokes model predictions of linear instability. I. Buoyancy- and shear-driven flows,” Journal of Turbulence 12, 36-1–36-38.

  68. Johnson, B. M., Schilling, O. (2011). “Reynolds-averaged Navier–Stokes model predictions of linear instability. II. Shock-driven flows,” Journal of Turbulence 12, 37-1–37-31.

  69. Klein, R.I. (2011). “Radiation Hydrodynamics for Astrophysical Applications,” Lecture I, Winter School in Astrophysics, Tokyo, Japan NOAJ, Feb.

  70. Klein, R.I. (2011). “Radiation Hydrodynamics for Astrophysical Applications,” Lecture II, Winter School in Astrophysics, Tokyo, Japan NOAJ, Feb.

  71. Klein, R.I. (2011). “Radiation Hydrodynamics for Astrophysical Applications,” Lecture III, Winter School in Astrophysics, Tokyo, Japan NOAJ, Feb.

  72. Klein, R.I. (2011). “The Advance of UQ Science,” Institute for Computing in Science, Park City, UT, Aug. 12.

  73. Klein, R.I., Cunningham, A., Krumholz, M., McKee, C., Myer, A. (2011). “Radiation-Hydrodynamics of High Mass Star Formation: The Effects of Feedback from Protostellar Outflows and Radiation,” AstroSim2012 in Davos, Switzerland, Feb. 1.

  74. Klein, R.I., Krumholz, M., McKee, C., Cunningham, A., Myer, A. (2011). “Radiation Hydrodynamic AMR Simulations of High Mass Star Formation: The Effects of Feedback in Cores to Clusters,” Astronum2011, Valencia, Spain, June.

  75. Kritcher, A., Bernstein, L.A., Bleuel, D., Caggiano, J., Cerjan, C., Chen, M.H., Landen, O., Libby, S.B., Mcnabb, D., Schneider, D., Wilson, B. (2011). “Nuclear Excitation by Electron Transition and Capture (NEET & NEEC) in Laser Produced Plasmas at Omega (& NIF),” Joint Institute for Nuclear Astrophysics Workshop - Nuclear Physics in Hot Dense Dynamic Plasmas, Mar. 13, London, UK.

  76. Krumholz, M.R., Klein, R.I., McKee, C.F. (2011). “Radiation-hydrodynamic Simulations of the Formation of Orion-like Star Clusters. I. Implications for the Origin of the Initial Mass Function,” Astrophysical J., 740, 74.

  77. Laguna, I., Gamblin, T., De Supinski, B.R., Bagchi, S., Bronevetsky, G., Ahn, D.H., Schulz, M., Rountree, B. (2011). “Large Scale Debugging of Parallel Tasks with AutomaDeD,” SC2011, Seattle, WA, Nov. 12–18, 2011.

  78. Landa, A., Söderlind, P. (2011). “Alloying-Driven Phase Stability in Group-VB Transition Metals under Compression,” MRS Proceedings, Vol. 1369, mrss11-1369-xx02-06, Nov. 17. 

  79. Langer, S.H., Still, B., Bremer, P., Hinkel, D., Langdon, B., Levine, J., Williams, E. (2011).  “Cielo Full-System Simulations of Multi-Beam Laser-Plasma Interaction in NIF Experiments,” Cray Users' Group Meeting (CUG 2011), Fairbanks, AK.

  80. Leininger, L.D., Minkoff, S.A., Dorgan, R.J., DeFisher, S.E., Springer, H.K., McCallen, R.C. (2011). “Capability Improvements for Modeling Fragment Impact in ALE3D,” 26th Int. Sym. on Ballistics, Miami, FL, Sep. 12-16.

  81. Li, D., Nikolopoulos, D.S., Cameron, K., de Supinski, B.R., Schulz, M. (2011). “Scalable Memory Registration for High Performance Networks Using Helper Threads,” ACM Int. Conf. on Computing Frontiers (CF 2011), Ischia, Italy, May 3–5.

  82. Li, P.S., Martin, D.F., Klein, R.I., McKee, C.F. (2011). “A Stable, Accurate Methodology for High Mach Number, Strong Magnetic Field MHD Turbulence with Adaptive Mesh Refinement: Resolution and Refinement Studies,” Astrophysical J., 745, 139L.

  83. Li, P.S., McKee, C.F., Klein, R.I. (2011). “Ambipolar Diffusion Effects on Weakly Ionized Turbulence Molecular Clouds,” Computational Star Formation, Proc. of the Int. Astronomical Union, IAU Sym., 270, pp. 421-424.

  84. Li, P.S., McKee, C.F., Klein, R.I. (2011). “Sub-Alfvenic Non-Ideal MHD Turbulence Simulations with Ambipolar Diffusion: III. Implications for Observations and Turbulent Enhancement,” Astrophysical J., 744, 743L.

  85. Ling, Y., Balachandar, S., Najjar, F.M., Lieberthal, B., Stewart, D.S., Bdzil, J.B. (2011). “Modeling of Momentum and Energy Coupling on Compliant Particles Subjected to Intense Shocks,” 64th Annual Meeting of the APS Division of Fluid Dynamics, Vol. 56, No. 18, Baltimore, MD.

  86. Lucas, D.D., Brandon, S., Covey, C., Domyancic, D.M., Johannesson, G., Klein, R.I., Tannahill, J., Zhang, Y. (2011). “Uncertainty Quantification of Equilibrium Climate Sensitivity,” American Geophysical Union (AGU) Fall Meeting, San Francisco, CA, Dec. 5-10.

  87. McCallen, R., Anderson, A., Ferencz, R. (2011). “LLNL Modeling and Simulation,” DoD Modeling and Simulation Institute User Forum, Strategic Insight, Crystal City, VA, Feb.

  88. McFarland, J.A., Greenough, J.A., Ranjan, D. (2011). “Computational Parametric Study of a Richtmyer-Meshkov Instability for an Inclined Interface,” Physical Review E, Vol. 84, Issue 2, Article #026303, DOI: 10.1103/PhysRevE.84.026303, Aug.

  89. McGrath, M.J., Kuo, I. F.W., Ghogmu, J.N., Mundy, C.J., Siepmann, J.I. (2011). “Vapor-Liquid Coexistence Curves for Methanol and Methane Using Dispersion-Corrected Density Functional Theory,” J. of Physical Chemistry B, Vol. 115, Iss. 40, pp. 11688-11692.

  90. McGrath, M.J., Kuo, I.F. W., Siepmann, J. I. (2011). “Liquid Structures of Water, Methanol, and Hydrogen Fluoride at Ambient Conditions from First Principles Molecular Dynamics Simulations with a Dispersion Corrected Density Functional,” Physical Chemistry Chemical Physics, Vol. 13, Iss. 44, pp. 19943-19950.

  91. Michta, D., Graziani, F., Surh, M., Glosli, J. (2011). “Thermalization simulations of strongly/weakly coupled mixtures,” 53rd Annual Meeting of the APS Division of Plasma Physics, Salt Lake City, UT.

  92. Moody, A., Ahn, D.H., de Supinski, B.R. (2011). “Exascale Algorithms for Generalized MPI_Comm_split,” EuroMPI 2011, Santorini, Greece, Sep. 18-21.

  93. Morales, M.A., Benedict, L.X., Clark, D.S., Schwegler, E., Tamblyn, I., Bonev, S.A., Correa, A.A., Haan, S.W. (2012). “Ab Initio Calculations of the Equation of State of Hydrogen In a Regime Relevant for Inertial Fusion Applications,” High Energy Density Physics, Vol. 8, no. 1, pp. 5-12.

  94. Mueller, M.S., Gopalakrishnan, G., de Supinski, B.R., Lecomber, D., Hilbrich, T. (2011). “Dealing with MPI Bugs at Scale: Best Practices, Automatic Detection and Formal,” a tutorial at SC2011, Seattle, WA, Nov. 12–18.

  95. Myers, A.T., Krumholz, M.R., Klein, R.I., McKee, C.F. (2011). “Metallicity and the Universality of the Initial Mass Function,” Astrophysical J., 735, 49.

  96. Najjar, F., White, J., Rieben, R., Bazan, G. (2011). “Adiabatic Release Test Problem,” JOWOG42, Aldermaston, UK.

  97. Najjar, F.M., Howard, W.M., Fried, L.E., Manaa, M.R., Nichols, A., Levesque, G., (2011). “Computational Study of 3-D Hot-Spot Initiation in Shocked Insensitive High-Explosive,” 17th Biennial Int. Conference of the APS Topical Group on Shock Compression of Condensed Matter (SCCM11), Vol. 56, No. 6, Chicago, IL.

  98. Nichols, A. (editor), ALE3D Team (2011). “ALE3D: An Arbitrary Lagrange/Eulerian 2D and 3D Code System,” Version 4.14.X, Vol. I and II, Mar. 1.

  99. Nikolov, N., Schunck, N., Nazarewicz, W., Bender, M., Pei, J. (2011). “Surface symmetry energy of nuclear energy density functionals,” Phys. Rev. C, Vol. 83, p. 034305.

  100. Offner, S., Kratter, K.M., Matzner, C.D., Krumholz, M.R., Klein, R.I. (2011). “The Turbulent Fragmentation Origin of Low-Mass Binary Star Systems,” Bulletin of the American Astronomical Society, Vol. 43.

  101. Oliver, W.B. (2011). “Quantifying the Value of Static Analysis,” STARWest Int. Conf. (Software Testing Analysis and Review), Anaheim, CA.

  102. Olson, B.J., Lele, S.K., Larsson, J., Cook, A.W. (2011). “Nonlinear Effects in the Combined Rayleigh-Taylor/Kelvin-Helmholtz Instability,” Physics of Fluids 23, 037111.

  103. Owen, J.M. (2011). “Applications of the Voronoi Tessellation for Mesh-Free Methods,” Int. Conference on Numerical Methods for Multi-Material Flows, Arcachon, France.

  104. Owen, J.M. (2011). “Augmenting Meshless Methods using the Voronoi Tessellation,” SPHERIC Newsletter, Issue 13, Dec.

  105. Pfau, D., Najjar, F.M., Yao, J., McCandless, B., Nichols, A. (2011). “Parallel Detonation Shock Dynamics Algorithm for Insensitive Munitions using ALE3D,” 26th Int. Ballistics Sym., Vol. 1, p. 355, Miami, FL.

  106. Phipps, C.R., Baker, K.L., Bradford, B., George, E.V., Libby, S.B., Liedahl, D.A., Marcovici, B., Olivier, S.S., Pleasance, L.D., Reilly, J.P., Rubenchik, A., Strafford, D.N., Valley, M.T. (2011). “Removing Orbital Debris with Lasers,” Advances in Space Research, online at arXiv:1110.3835v1.

  107. Phipps, C.R., Baker, K.L., Libby, S.B., Liedahl, D.A., Olivier, S.S., Pleasance, L.D., Trebes, J.E., George, E.V., Marcovici, B., Reilly, J.P., Rubenchik, A., Strafford, D.N., Valley, M.T. (2011). “Removing Orbital Debris with Pulsed Lasers,” High Power Laser Ablation Conference, Santa Fe, NM.

  108. Pigni, M.T., Herman, M., Oblozinsky, P., Dietrich, F.S. (2011). “Sensitivity analysis of neutron total and absorption cross sections within the optical model,” Phys. Rev. C, Vol. 83, p. 014601.

  109. Pope, G. (2011). “No Silver Bullet? Silver Buckshot May Work,” Keynote Presentation, Better Software Conf. East, Int. Conf., Orlando, FL.

  110. Protze, J., Hilbrich, T., Knüpfer, A., De Supinski, B.R., Mueller, M.S. (2011). “Holistic Debugging of MPI Derived Datatypes,” Twenty Sixth Int. Parallel and Distributed Processing Sym. (IPDPS 2012), Shanghai, China, May 21–25.

  111. Quaglioni, S. (2011). “Monte Carlo implementation of up- or down-scattering due to collisions with material at finite temperature,” LLNL-TR-488174.

  112. Ressler, J.J., Burke, J.T., Escher, J.E., Angell, C.T., Basunia, M.S., Beausang, C.W., Bernstein, L.A., Bleuel, D.L., Casperson, R.J., Goldblum, B.L., Gostic, J., Hatarik, R., Henderson, R., Hughes, R.O., Munson, J., Phair, L.W., Ross, T.J., Scielzo, N.D., Swanberg, E., Thompson, I.J., Wiedeking, M. (2011). “Surrogate measurement of the (238)Pu(n, f) cross section,” Phys. Rev. C, Vol. 83, p. 054610.

  113. Rountree, B., Cobb, G., Gamblin, T., Schulz, M., de Supinski, B.R., Tufo, H. (2011). “Parallelizing Heavyweight Debugging Tools with MPIecho,” First Int. Workshop on High-performance Infrastructure for Scalable Tools (WHIST), Tucson, AZ, June 4.

  114. Rountree, B., Lowenthal, D.K., Schulz, M., de Supinski, B.R. (2011). “Practical Performance Prediction Under Dynamic Voltage Frequency Scaling,” Second Int. Green Computing Conf. (IGCC11), Orlando, FL, July 25-28.

  115. Sandoval, L.A., Richards, D. (2011). “MD Study of the Nucleation and Growth of Deformation Twins in Polycrystalline Tantalum,” APS Meeting, Mar., Dallas, TX.

  116. Sandoval, L.A., Richards, D. (2011). “MD Study of the Nucleation and Growth of Deformation Twins in Polycrystalline Tantalum,” MRS Spring Meeting, San Francisco, CA.

  117. Schulz, M. (2011). “A Case for More Intuitive Performance Analysis,” Salishan Conference on High-Speed Computing, Apr., Salishan, OR.

  118. Schulz, M. (2011). “Checkpointing,” Encyclopedia of Parallel Computing, D. Padua (ed.), Springer Verlag.

  119. Schulz, M. (2011). “More Intuitive Performance Analysis,” Invited talk, DOE Office of Science, Sep. 8, Germantown, MD.

  120. Schulz, M. (2011). “More Intuitive Performance Analysis,” Invited talk, Institute of Computer Science, Foundation for Research and Technology Hellas (FORTH), Sep., Heraklion, Greece.

  121. Schulz, M. (2011). “Performance and Optimization: A Case for more Modular and Intuitive Tools,” Institute for Nuclear Theory Exascale Workshop, Jun., Seattle, WA.

  122. Schulz, M., Bhatele, A., Bremer, P-T., Gamblin, T., Isaacs, K., Levine, J.A., Pascucci, V. (2011). “Creating a tool set for optimizing topology-aware node mappings,” 5th Int. Workshop on Parallel Tools, Sep.

  123. Schulz, M., Bhatele, A., Bremer, P., Gamblin, T., Isaacs, K., Landge, A., Levine, J., Pascucci, V. (2011). “A Case for More Modular and Intuitive Performance Analysis Tools,” SIAM Conf. on Parallel Processing, Feb. 15-17, Savannah, GA.

  124. Schulz, M., Bhatele, A., Bremer, P., Gamblin, T., Isaacs, K., Levine, J., Pascucci, V. (2011). “Creating a Tool Set for Optimizing Topology-aware Node Mappings,” 5th ZIH Parallel Tools Workshop, Dresden, Germany, Sep. 26-27.

  125. Schulz, M., Galarowicz, J., Legendre, M., Maghrak, D., Rajan, M. (2011). “How to Analyze the Performance of Parallel Codes 101 – A Case Study with Open|SpeedShop,” SC11, Seattle, WA, Nov.

  126. Schulz, M., Galarowicz, J., Legendre, M., Maghrak, D., Rajan, M. (2011). “An Introduction into Performance Analysis for HPC Systems with Open|SpeedShop,” SC11, Seattle, WA, Nov.

  127. Schulz, M., Levine, J.A., Bremer, P., Gamblin, T., Pascucci, V. (2011). “Interpreting performance data across intuitive domains,” Int. Conf. on Parallel Processing (ICPP'11), Taipei, Taiwan, Sep. 13-16.

  128. Schulz, M., Mohr, B., Wylie, B. (2011). “Supporting Code Developments on Extreme-scale Computer Systems,” SC11, Seattle, WA, Nov.

  129. Schunck, N., Dobaczewski, J., McDonnell, J., Satuła, W., Sheikh, J.A., Staszczak, A., Stoitsov, M., Toivanen, P. (2012). “Solution of the Skyrme-Hartree-Fock-Bogolyubov equations in the Cartesian deformed harmonic-oscillator basis,” Comp. Phys. Comm., Vol. 183, p. 166.

  130. Scogland, T.R.W., Rountree, B., Feng, W., de Supinski, B.R. (2011). “Heterogeneous Task Scheduling for Accelerated OpenMP,” Twenty-Sixth Int. Parallel and Distributed Processing Sym. (IPDPS 2012), Shanghai, China, May 21–25.

  131. Sleaford, B.W., Summers, N., Escher, J., Firestone, R.B., Basunia, S., Hurst, A., Krticka, M., Molnar, G., Belgya, T., Revay, Z., Choi, H.D. (2011). “Capture Gamma-ray Libraries for Nuclear Applications,” J. Korean Phys. Soc., Vol. 59, pp. 1473-1478.

  132. Soderlind, P., Moore, K.T., Landa, A., Sadigh, B., Bradley, J.A. (2011). “Pressure-induced changes in the electronic structure of americium metal,” Phys. Rev. B, Vol. 84, No. 7, pp. 075138-1–075138-8.

  133. Souers, P.C., Druce, R.L., Roeske, F., Jr., Vitello, P., May, C. (2011). “A Complete Detonator, Booster and Main Charge Study of LX-07/PBX 9502,” Propellants, Explosives, Pyrotechnics, Vol. 36, No. 2, pp. 119-124.

  134. Souers, P.C., Garza, R., Hornig, H., Lauderbach, L., Owens, C., Vitello, P. (2011). “Metal Angle Correction in the Cylinder Test,” Propellants, Explosives, Pyrotechnics, Vol. 36, No. 1, pp. 9-15.

  135. Souers, P.C., Lewis, P., Hoffman, M., Cunningham, B. (2011). “Thermal Expansion of LX-17, PBX 9502 and Ultrafine TATB,” Propellants, Explosives, Pyrotechnics, Vol. 36, No. 4, pp. 335-340.

  136. Spanu, L., Donadio, D., Hohl, D., Schwegler, E., Galli, G. (2011). “Stability of hydrocarbons at deep Earth pressures and temperatures,” Proc. of the National Academy of Sciences, Vol. 108, pp. 6843-6846.

  137. Spears, B.K., Glenzer, S., Edwards, M.J., Brandon, S., Clark, D., Town, R., Cerjan, C., Dylla-Spears, R., Mapoles, E., Munro, D., Salmonson, J., Sepke, S., Weber, S., Hatchett, S., Haan, S., Springer, P., Moses, E. (2011). “Performance Metrics for Inertial Confinement Fusion Implosions: Aspects of the Technical Framework for Measuring Progress in the National Ignition Campaign,” American Physical Society (APS) Conf., Salt Lake City, UT, Nov. 12.

  138. Stewart, D.S., Najjar, F.M., Szuck, M., Glumac, N. (2011). “Simulations of the Formation and Hydrodynamic Penetration of Micro-Shaped Charge Jets,” 64th Annual Meeting of the APS Division of Fluid Dynamics, Vol. 56, No. 18, Baltimore, MD.

  139. Szebenyi, Z., Gamblin, T., Schulz, M., de Supinski, B.R., Wolf, F., Wylie, B.J.N. (2011). “Reconciling Sampling and Direct Instrumentation for Unintrusive Call-Path Profiling of MPI Programs,” Twenty-Fifth Int. Parallel and Distributed Processing Sym. (IPDPS 2011), Anchorage, AK, May 16–20.

  140. Tannahill, J.R., Lucas, D.D., Domyancic, D.M., Brandon, S.T., Klein, R.I. (2011). “Data Intensive Uncertainty Quantification: Applications to Climate Modeling,” Supercomputing 2011, Seattle, WA, Nov. 12-18.

  141. Teweldeberhan, A.M., Bonev, S.A. (2011). “Structural and thermodynamic properties of liquid Na-Li and Ca-Li alloys at high pressure,” Phys. Rev. B, Vol. 83, p. 134120.

  142. Tommasini, R., Hatchett, S., Hey, D., Iglesias, C.A., Izumi, N., Landen, O.L., MacKinnon, A.J., Sorce, C., Delettrez, J.A., Glebov, V.Y., Sangster, J.C., Stoeckl, C. (2011). “Development of Compton radiography of inertial confinement fusion implosions,” Phys. Plasmas, Vol. 18, Article #056309.

  143. Tubman, N.M., DuBois, J.L., Hood, R.Q., and Alder, B.J. (2011). “Prospects for release-node quantum Monte Carlo,” J. of Chemical Physics, Vol. 135, No. 18, pp. 184109-1–184109-4.

  144. Vo, A., Gopalakrishnan, G., Kirby, R.M., De Supinski, B.R., Schulz, M., Bronevetsky, G. (2011). “Large Scale Verification of MPI Programs Using Lamport Clocks with Lazy Update,” Twentieth Int. Conf. on Parallel Architectures and Compilation Techniques (PACT-2011), Galveston Island, TX, Oct. 10–14.

  145. Vogt, R. (2011). “Generalized Energy-Dependent Q Values for Fission,” J. Korean Phys. Soc., Vol. 59, pp. 899-902.

  146. Vogt, R., Randrup, J. (2011). “Event-by-event study of neutron observables in spontaneous and thermal fission,” Phys. Rev. C, Vol. 84, p. 044621.

  147. Vogt, R., Randrup, J., Pruet, J., Younes, W. (2011). “Calculation of (239)Pu Fission Observables in an Event-by-Event Simulation,” J. Korean Phys. Soc., Vol. 59, pp. 895-898.

  148. Whitlock, B.J. (2011). “2011 Update on VisIt,” DOE Computer Graphics Forum Meeting, Asheville, NC.

  149. Whitlock, B.J. (2011). “Transitioning VisIt to CMake,” ASQ Build System Poster Session, Livermore.

  150. Whitlock, B.J. (2011). “Visualization with VisIt,” Army Research Laboratory, Aberdeen, MD, and Lawrence Livermore National Laboratory, Livermore, CA.

  151. Whitlock, B.J., Biagas, K. S., Rawson, P. (2011). “Creating a Parallel Version of VisIt for Microsoft Windows,” LLNL-TR-519831.

  152. Whitlock, B.J., Favre, J.M., Meredith, J.S. (2011). “Parallel in situ coupling of simulation with a fully featured visualization system,” EGPGV, pp. 101-109.

  153. Wickett, M.E., Anderson, R.W., Elliott, N.S., Gunney, B.T., Hornung, R.D., Howell, L.H., Pudliner, B.S., Ryujin, B.S. (2011). “Structured Adaptive Mesh Refinement in a Multiblock Arbitrary-Lagrangian-Eulerian Radiation-Hydrodynamics Code,” Int. Conf. on Numerical Methods for Multi-Material Fluid Flows, Arcachon, France.

  154. Wilson, B.G., Sonnad, V. (2011). “A Note on Generalized Radial Mesh Generation for Plasma Electronic Structure,” High Energy Density Physics, Vol. 7, No. 3, pp. 161-162.

  155. Wilson, B.G., Johnson, D.D., Alam, A. (2011). “Multi-center Electronic Structure Calculations for Plasma Equation of State,” High Energy Density Physics, Vol. 7, pp. 61-70.

  156. Wu, C.-C., Aubry, S., Chung, P., Arsenlis, A. (2011). “Dislocation dynamics simulations of junctions in hexagonal close-packed crystals,” MRS Proceedings, Fall Meeting, Boston, MA.

  157. Yao, J. (2011). “An Efficient and Locally Conservative Interface Rezone Method,” Int. Conf. on Numerical Methods for Multi-Material Fluid Flows, Arcachon, France.

 

2012 Publications

  1. Iglesias, C.A., Sonnad, V. (2012). “Partially Resolved Transition Array Model for Atomic Spectra,” High Energy Density Physics, Vol. 7, Jan., online.

________________________________________________________________________________________________

 Sandia National Laboratories

Citations for Publications (previously not listed)

 

  1. Axness, C. L., Kerr, B., Keiter, E. R. (2010).  "Analytic 1-D PN Junction Diode Photocurrent Solutions Following Ionizing Radiation and Including Time-Dependent Changes in the Carrier Lifetime From a Nonconcurrent Neutron Pulse," IEEE Transactions on Nuclear Science, Vol. 57, Issue 6, Part 1, pp. 3314-3321.  DOI: 10.1109/TNS.2010.2086484. SAND2010-7063 J. 

  2. Bishop, J. E., Cordova, T. E., Dion, K., Emery, J. M., Foster, J. T., Littlewood, D. J., Mota, A., Boyce, B. L., Cox, J. V., Crenshaw, T. B., Dowding, K. J., Foulk, J. W., III, Ostien, J. T., Robbins, J. H., Silling, S. A., Spencer, B. W., Wellman, G. W. (2011).  “Ductile Failure X-Prize,” Sandia Technical Report. SAND2011-6801.

  3. Bond, B., Keiter, E. R., Mei, T., Thornquist, H. K. (2011).  “Accelerating Transient Simulation of Linear Reduced Order Models,” Sandia Technical Report. SAND2011-6223.

  4. Curry, M. L., Skjellum, A., Ward, H. L., Brightwell, R. (2011).  "Gibraltar: A Reed-Solomon Coding Library for Storage Applications on Programmable Graphics Processors," Concurrency and Computation: Practice and Experience, Vol. 23, Issue 18, pp. 2477-2495.  DOI: 10.1002/cpe.1810.  SAND2010-0079 J.

  5. DeChant, L. J. (2011).  “Modification to the k-Omega Turbulence Model for Vortically Dominated Flows,” AIAA-2011-56, 49th AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition, Orlando, FL. SAND2010-8895 C. 

  6. DeChant, L. J., Smith, J. L. (2011).  “An Approximate Expression for Base Pressure Fluctuation Spectra for Bluff Bodies,” AIAA-2011-180, 49th AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition, Orlando, FL. SAND2010-8901 C.

  7. Franke, B. C., Crawford, M. J., Kensek, R. P., Kraftcheck, J. A. (2009).  "A Weight-Window Generator for Electron-Photon Transport in the Integrated TIGER Series Codes," International Conference on Mathematics, Computational Methods and Reactor Physics (M&C 2009), Saratoga Springs, NY, on CD-ROM, American Nuclear Society, LaGrange Park, IL. SAND2009-1344 C. 

  8. Franke, B. C., Kensek, R. P. (2009).  “An hp-Adaptivity Approach for Monte Carlo Tallies,” International Conference on Mathematics, Computational Methods and Reactor Physics (M&C 2009), Saratoga Springs, NY, on CD-ROM, American Nuclear Society, LaGrange Park, IL. SAND2010-1201 C.

  9. Franke, B. C., Prinja, A. K. (2010).  “Flux-Probability Distributions for Radiation Transport in Binary Stochastic Media,” Proceedings, Joint International Conference on Supercomputing in Nuclear Applications and Monte Carlo 2010 (SNA + MC2010), Tokyo, Japan. SAND2010-5354 J. 

  10. Franke, B. C., Prinja, A. K.  (2011). “Flux-Probability Distributions from the Master Equation for Radiation Transport in Stochastic Media,” Proceedings, International Conference on Mathematics and Computational Methods applied to Nuclear Science and Engineering (M&C 2011), Rio de Janeiro, Brazil. SAND2010-8851 C. 

  11. Gallis, M. A., Torczynski, J. R. (2011).  “Steady Isothermal Gas Mass Flow Rate in a Microscale Tube from Continuum to Free-Molecular Conditions,” AIAA 2011-3994, 41st AIAA Fluid Dynamics Conference and Exhibit, Honolulu, HI. SAND2011-4152 C. 

  12. Griffin, J. D., Fowler, K. R., Gray, G. A., Hemker, T., Parno, M. D. (2011).  “Derivative-Free Optimization via Evolutionary Algorithms Guiding Local Search (EAGLS) for MINLP,” Special Issue on Derivative-Free Hybrid Optimization Methods for Solving Simulation-Based Problems in Hydrology, Pacific Journal of Optimization, Vol. 7, No. 3, pp. 425-443. SAND2010-3023 J.

  13. Kraynik, A. M., Romero, L. A., Torczynski, J. R. (2010).  “Simulations of Bubble Motion in an Oscillating Liquid,” Bulletin of the American Physical Society, 63rd Annual Meeting of the APS Division of Fluid Dynamics, Vol. 55, No. 16, p. 287.  Abstract ID: BAPS.2010.DFD.LR.9. SAND2010-4914 A.

  14. Lee, H. K. H., Gramacy, R. B., Linkletter, C., Gray, G. A. (2011).  “Optimization Subject to Hidden Constraints via Statistical Emulation,” Special Issue on Derivative-Free Hybrid Optimization Methods for Solving Simulation-Based Problems in Hydrology, Pacific Journal of Optimization, Vol. 7, No. 3, pp. 467-478. SAND2010-2252 J. 

  15. Lloyd, J.T., Zimmerman, J. A., Jones, R. E., Zhou, X. W., McDowell, D. L. (2011).  “Finite Element Analysis of an Atomistically Derived Cohesive Model for Brittle Fracture,” Modelling and Simulation in Materials Science and Engineering, Vol. 19, No. 6, 065007 (18 pp).  DOI: 10.1088/0965-0393/19/6/065007. SAND2010-7536 J. 

  16. O’Hern, T. J., Torczynski, J. R., Romero, E. F., Shelden, B. (2010).  “Vibration-Induced Gas-Liquid Interface Breakup,” Bulletin of the American Physical Society, 63rd Annual Meeting of the APS Division of Fluid Dynamics, Vol. 55, No. 16, p. 370.  Abstract ID: BAPS.2010.DFD.QM.3.  SAND2010-5155 A.

  17. Pautz, S. D., Drumm, C. R., Bohnhoff, W. J., Fan, W. C. (2009).  "Software Engineering in the SCEPTRE Code," International Conference on Mathematics, Computational Methods and Reactor Physics (M&C 2009), Saratoga Springs, NY, on CD-ROM, American Nuclear Society, LaGrange Park, IL. SAND2009-1242 C.

  18. Pautz, S. D., Pandya, T. M., Adams, M. L. (2009).  "Scalable Parallel Prefix Solvers for Discrete Ordinates Transport," International Conference on Mathematics, Computational Methods and Reactor Physics (M&C 2009), Saratoga Springs, NY, on CD-ROM, American Nuclear Society, LaGrange Park, IL. SAND2009-1206 C.

  19. Romero, L. A., Kraynik, A. M., Torczynski, J. R. (2010).  “The Terminal Velocity of a Bubble in an Oscillating Flow,” Bulletin of the American Physical Society, 63rd Annual Meeting of the APS Division of Fluid Dynamics, Vol. 55, No. 16, p. 287.  Abstract ID: BAPS.2010.DFD.LR.8. SAND2010-4916 A.

 

ASC CSSE:

  1. Ang, J. (2011).  "Arthur: Sandia’s NNSA/ASC Experimental Architecture Testbed with 84 Intel® Knights Ferry Cards", Invited talk at IEEE/ACM International Conference for High Performance Computing, Networking, Storage, and Analysis (SC '11), Seattle, WA. SAND2011-8667 P. 

  2. Barrett, B., Brightwell, R., Hemmert, K. S., Wheeler, K., Underwood, K. (2011).  “Using Triggered Operations to Offload Rendezvous Messages”, Proceedings, 18th EuroMPI Conference, Santorini, Greece. SAND2011-3491 C. 

  3. Brandt, J., Chen, F., Gentile, A., Leangsuksun, C., Mayo, J., Pebay, P., Roe, D., Taerat, N., Thompson, D., Wong, M. (2011).  “Framework for Enabling System Understanding,” Proceedings, 4th Workshop on Resiliency (Resilience) in High Performance Computing at Euro-Par 2011, Bordeaux, France. SAND2011-4450 C. 

  4. Bridges, P., Arnold, D., Pedretti, K. (2011).  “VM-based Slack Emulation of Large-scale Systems,” Proceedings, ACM/SIGARCH International Conference on Supercomputing, Workshop on Runtime and Operating Systems for Supercomputers, Tucson, AZ.  SAND2011-3054 C.

  5. Brightwell, R., Pedretti, K. (2011). “An Intra-Node Implementation of OpenSHMEM Using Virtual Address Space Mapping,” Proceedings, Conference on Partitioned Global Address Space Programming Models (PGAS), Galveston Island, TX.  SAND2011-5348 C.

  6. Curry, M. L., Ward, H. L., Grider, G., Gemmill, J., Harris, J., Martinez, D. (2011).  "Power Use of Disk Subsystems in Supercomputers," Proceedings, 6th Parallel Data Storage Workshop, Seattle, WA. SAND2011-6886 C. 

  7. Doerfler, D., Rajan, M., Nuss, C., Wright, C., Spelce, T. (2011).  “Application-Driven Acceptance of Cielo, an XE6 Petascale Capability Platform,” Proceedings, Cray User Group Meeting, Fairbanks, AK. SAND2011-3362 C.

  8. Doerfler, D., Vigil, M., Dosanjh, S., Morrison, J. (2011).  “The Cielo Petascale Capability Supercomputer,” Proceedings, Cray User Group Meeting, Fairbanks, AK. SAND2011-3420 C.

  9. Edwards, H. C., Sunderland, D. (2012).  "Kokkos Array Performance-Portable Manycore Programming Model," Proceedings, The 2012 International Workshop on Programming Models and Applications for Multicores and Manycores at the 17th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, New Orleans, LA. SAND2011-9311 C.

  10. Edwards, H. C., Sunderland, D., Amsler, C., Mish, S. (2011).  “Multicore/GPGPU Portable Computational Mechanics Kernels via Multidimensional Arrays,” Proceedings, Workshop on Parallel Programming on Accelerator Clusters at IEEE Cluster 2011, Austin, TX. SAND2011-3953 C. 

  11. Fabian, N., Moreland, K., Thompson, D., Bauer, A. C., Marion, P., Geveci, B., Rasquin, M., Jansen, K. E. (2011).  "The ParaView Coprocessing Library: A Scalable, General Purpose In Situ Visualization Library," Proceedings, IEEE Symposium on Large-Scale Data Analysis and Visualization (LDAV), Providence, RI. SAND2011-2684 C. 

  12. Ferreira, K., Stearley, J., Laros, J., Oldfield, R., Pedretti, K., Brightwell, R., Riesen, R., Bridges, P., Arnold, D. (2011).  “Evaluating the Viability of Process Replication Reliability for Exascale Systems,” Proceedings, IEEE/ACM International Conference for High Performance Computing, Networking, Storage, and Analysis (SC '11), Seattle, WA. SAND2011-2634 C. 

  13. Greenfield, J., Ice, L., Corwell, S., Haskell, K., Pavlakos, C., Noe, J. (2011).  "One Stop High Performance Computing User Support at SNL," State of the Practice Reports (SC '11), Seattle, WA. SAND2011-4356 C.

  14. Janssen, C. L., Adalsteinsson, H., Kenny, J. P. (2011).  “Using Simulation to Design Extreme-Scale Applications and Architectures: Programming Model Exploration”, ACM SIGMETRICS Performance Evaluation Review, Vol. 38, pp. 4-8. SAND2011-0585 J.

  15. Kelly, S., Klundt, R., Laros, J. (2011).  “Shared Libraries on a Capability Class Computer,” Proceedings, Cray User Group Meeting, Fairbanks, AK. SAND2011-3455 C. 

  16. Lindsay, A., Galloway-Carson, M., Johnson, C., Bunde, D., Leung, V. (2011).  "Backfilling with Guarantees Granted upon Job Submission", Proceedings, Euro-Par 2011, Bordeaux, France. SAND2011-0010 C. 

  17. Lofstead, G. F., II, Polte, M., Gibson, G., Klasky, S. A., Schwan, K., Oldfield, R., Wolf, M., Liu, Q. (2011).  "Six Degrees of Scientific Data: Reading Patterns for Extreme Scale Science IO," Proceedings, 20th International ACM Symposium on High-Performance Parallel and Distributed Computing, San Jose, CA. SAND2011-0442 C. 

  18. Lofstead, J., Oldfield, R., Kordenbrock, T., Reiss, C. (2011).  "Extending Scalability of Collective IO Through Nessie and Staging," Proceedings, Parallel Data Storage Workshop at Supercomputing 2011, Seattle, WA. SAND2011-8597 C. 

  19. Logan, J., Klasky, S., Lofstead, J. F., II, Abbasi, H., Ethier, S., Grout, R., Ku, S., Liu, Q., Ma, X., Parashar, M., Podhorszki, N., Schwan, K., Wolf, M. (2011).  "Skel: Generative Software for Producing Skeletal I/O Applications," Proceedings, D3Science Workshop at IEEE e-Science Conference, Stockholm, Sweden. SAND2011-7850 C.

  20. Moreland, K., Kendall, W., Peterka, T., Huang, J. (2011).  "An Image Compositing Solution at Scale," Proceedings, 2011 International Conference for High Performance Computing, Networking, Storage and Analysis (SC '11), Seattle, WA. SAND2011-2482 C. 

  21. Moreland, K., Oldfield, R., Marion, P., Jourdain, S., Podhorszki, N., Vishwanath, V., Fabian, N., Docan, C., Parashar, M., Hereld, M., Papka, M. E., Klasky, S. (2011).  "Examples of In Transit Visualization," Proceedings, Petascale Data Analytics: Challenges and Opportunities (PDAC-11), Seattle, WA. SAND2011-6534 C. 

  22. Olivier, S., Porterfield, A., Wheeler, K., Prins, J. (2011).  "Scheduling Task Parallelism on Multi-Socket Multicore Systems," Proceedings, International Conference on Supercomputing, Tucson, AZ. SAND2011-0228 C. 

  23. Pedretti, K., Brightwell, R., Doerfler, D., Hemmert, K. S., Laros, J. (2011).  “The Impact of Injection Bandwidth Performance on Application Scalability,” Proceedings, European MPI Users Group Conference, Santorini, Greece. SAND2011-4617 C. 

  24. Tian, Y., Klasky, S., Abbasi, H., Lofstead, G. F., II, Grout, R., Podhorszki, N., Liu, Q., Wang, Y., Yu, W. (2011).  "EDO: Improving Read Performance for Scientific Applications Through Elastic Data Organization," Proceedings, IEEE Cluster 2011, Austin, TX.  SAND2011-0443 C. 

  25. Vaughan, C. T. (2011).  “Application Characteristics and Performance on a Cray XE6,” Proceedings, Cray User Group Meeting, Fairbanks, AK. SAND2011-3182 C. 

  26. Wheeler, K., Murphy, R., Stark, D., Chamberlain, B. (2011).  "The Chapel Tasking Layer Over Qthreads," Proceedings, Cray User Group Meeting, Fairbanks, AK.  SAND2011-3299 C. 

 

CORRECTIONS TO PRIOR SUBMITTALS

Submitted FY11 Q4

  1. Pébay, P., Thompson, D., Bennett, J., Mascarenhas, A. (2011).  "Design and Performance of a Scalable, Parallel Statistics Toolkit," Proceedings, 2011 IEEE International Symposium on Parallel and Distributed Processing Workshops and PhD Forum (IPDPSW), pp. 1475-1484. SAND2010-8143 C.

 

Submitted FY12 Q1

  1. Reedy, E. D., Boyce, B. L., Foulk, J. W., III, Field, R. V., de Boer, M. P., Hazra, S. S. (2011).  “Predicting Fracture in Micrometer-Scale Polycrystalline Silicon MEMS Structures,” Journal of Microelectromechanical Systems (JMEMS), Vol. 20, Issue 4, pp. 922-932.  DOI: 10.1109/JMEMS.2011.2153824.  SAND2010-8918 J.

  2. Vaughan, C. T., Rajan, M., Barrett, R. F., Doerfler, D., Pedretti, K. T.  (2011). “Investigating the Impact of the Cielo Cray XE6 Architecture on Scientific Application Codes”, Workshop Proceedings, Workshop on Large-Scale Parallel Processing (LSPP), 25th IEEE International Symposium on Parallel and Distributed Processing, IPDPS 2011, Anchorage, AK, USA.  DOI: 10.1109/IPDPS.2011.342. SAND2010-8925 C. 

 

LA-UR 12-01448
