Ayan joined the Data Science at Scale team as a postdoctoral researcher on January 9, 2017. He received his Ph.D. in Computer Graphics and Visualization from The Ohio State University in December 2016 and had worked with the team as a summer intern since 2013. He has worked with flow field data and particle tracing using streamlines and stream surfaces, on time-varying multivariate data exploration, and on using information theory to gain insight into data. He is also working on turbulent flow structures and vortex visualization for unstable, time-varying complex flows.
Data scientists from LANL won the Best Visualization and Data Analytics Showcase award at Supercomputing 2016 for their video detailing the science and high performance computing (HPC) behind the study of asteroid impacts in the ocean. The award-winning team included scientists from LANL’s Data Science at Scale team (John Patchett, Boonthanome Nouanesengsy, Karen Tsai and David Rogers), LANL’s Integrated Design & Assessment group (Galen Gisler), and UT-Austin (Francesca Samsel, Greg Abram and Terece Turton). They analyzed and visualized large scientific data from a simulation of asteroid ocean impacts in order to better understand the threats they pose. The LANL Data Science at Scale team also won this award at Supercomputing 2015. Their work is summarized in a paper and video. John Patchett is shown here presenting their work at Supercomputing 2016 in Salt Lake City.
The first Discovery Jam was held at IEEE VIS 2016 in Baltimore. It brought scientists together with visualization and analysis experts to brainstorm, generate ideas and present pitches of innovative solutions to the discovery problems that this community faces. The Discovery Jam team is Cecilia Aragon of the University of Washington, Dan Keefe of the University of Minnesota, Ethan Kerzner, Nina McCurdy and Miriah Meyer of the University of Utah, David Rogers of Los Alamos National Laboratory and Francesca Samsel of the University of Texas at Austin.
Hamish Carr presented a paper titled “Parallel Peak Pruning for Scalable SMP Contour Tree Computation” at the Sixth IEEE Symposium on Large Data Analysis and Visualization, held in conjunction with IEEE VIS 2016 in Baltimore. The paper received the Best Paper award and was authored by Hamish Carr of the University of Leeds, Gunther Weber of Lawrence Berkeley National Laboratory, and Christopher Sewell and James Ahrens of Los Alamos National Laboratory.
From October 19-21, Divya, Anne, and Roxana were at the Grace Hopper Celebration of Women in Computing. On Friday the 21st at 9 a.m., they gave a one-hour workshop on Using ParaView for Scientific Visualization. Workshop materials: Session Info Flyer, ParaView Slides, ParaView Cheat Sheet, Sandia National Laboratories Tutorials, and the Cinema project (software and sample data). ParaView executables: Windows64bit.zip (from Los Alamos or AWS), Windows32bit.zip (from Los Alamos or AWS), MacOSX.zip (from Los Alamos or AWS), Linux.zip (from Los Alamos or AWS), or download your choice from ParaView.org.
Kathrin Feige visited the Data Science at Scale Team on October 3 and 4. She is currently a postdoc at the University of Kaiserslautern with the Computer Graphics and HCI Group. During her visit she presented a talk titled “Footprint Computation for Urban Canopy Layer Measurements”. The abstract for her talk was “Relating observed air temperatures to the urban morphology surrounding the utilized sensor is essential in order to assess the microclimatic impact of urban form and design. The footprint of a sensor is a means to describe the impact of upwind surfaces on an observation, and can thus be used to quantify this relationship. While the most frequently applied footprint models are two-dimensional, treating the elements upwind of a sensor as surface patches with certain characteristics, the three-dimensional geometry of urban canopies requires a more complex approach. In my talk, I will present our approach to this problem, which is based on CFD simulations to estimate a steady-state, three-dimensional wind field at the time of an observation taken within the urban canopy layer to determine a sensor’s upwind area.”
Data scientists from the LANL Data Science at Scale team, John Patchett, Boonthanome Nouanesengsy, Karen Tsai and David Rogers, and UT-Austin, Francesca Samsel, Greg Abram and Terece Turton, worked with Galen Gisler of LANL’s Integrated Design & Assessment group to produce visualization and analysis of threats from asteroid ocean impacts. Their work is summarized in a paper and video that have been accepted as finalists in the SC16 Visualization Showcase. HD 1080p (1920 x 1080 / 89MB), HD 720p (1280 x 720 / 50MB), SD 540p (960 x 540 / 34MB), SD 360p (640 x 360 / 16MB).
Dr. Roxana Bujack joined the Data Science at Scale team in July 2016. She graduated in mathematics and computer science and received her PhD from the Image and Signal Processing Group at Leipzig University. Roxana then worked as a postdoctoral researcher at IDAV at the University of California, Davis, and with the Computer Graphics and HCI Group at the Technical University of Kaiserslautern. Her research interests include visualization, pattern recognition, vector fields, moment invariants, high performance computing, massive data analysis, Lagrangian flow representations, and Clifford analysis.
Greg Abram will be visiting from mid-June to mid-August. Greg is a visualization researcher at the Texas Advanced Computing Center, a research division of the University of Texas at Austin. Prior to joining TACC, he was at the IBM TJ Watson Research Center. He received his Ph.D. from the University of North Carolina at Chapel Hill in 1986.
Terry Turton will be visiting intermittently from mid-June to mid-August. Terry is an Associate Research Scientist at the University of Texas at Austin’s Center for Agile Technology. She received her Ph.D. in Physics from the University of Michigan and did postdoctoral work at the Superconducting Super Collider, Michigan State University and the University of Cincinnati. Her current work focuses on improved colormaps for scientific visualization and on creating, implementing, running and analyzing user studies to improve visualizations of scientific data.
A paper authored by Francesca Samsel, Sebastian Klaasen, Mark Petersen, Terece L. Turton, Gregory Abram, David H. Rogers and James Ahrens and titled ‘Interactive Colormapping: Enabling Multiple Data Ranges, Detailed Views of Ocean Salinity’ was presented at the CHI2016 Conference held in San Jose, California, May 7-12, 2016.
Rising temperatures are rapidly thawing Arctic permafrost. As it thaws, permafrost releases carbon that will eventually impact ocean currents. Permafrost: Connecting Observations to Models is a multi-faceted look at permafrost research in the Arctic. It follows scientists as they collect data in the field, analyze it in the lab, and then use the data to model the dynamics of permafrost as temperatures continue to rise. The video also represents one possible look at Climate Perspectives: Change in the Terrestrial Arctic, an interactive exhibit of Arctic climate science through the eyes of scientists and artists. It is being developed by the Bradbury Science Museum, a division of Los Alamos National Laboratory.
Bill Hoffman visited to discuss the lab’s use of CMake and present a talk on building science with CMake. He is a founder of Kitware and currently serves as Vice President and Chief Technical Officer. He is the original author and lead architect of CMake , an open-source, cross-platform build and configuration tool that is used by hundreds of projects around the world and the co-author of the accompanying text, Mastering CMake. Using his 20+ years of experience with large software systems development, Mr. Hoffman is also a major technical contributor to the Visualization Toolkit, Insight Toolkit and ParaView open source projects.
‘Visualization of Ocean Currents and Eddies in a High-Resolution Global Ocean-Climate Model’ was awarded ‘Best Scientific Visualization & Data Analytics Showcase’ at Supercomputing 2015 in Austin, Texas. The showcase was based upon work supported by Dr. Lucy Nowell of the U.S. Department of Energy Office of Science, Advanced Scientific Computing Research, under Award Number DE-SC-0012516 and released under LA-UR-15-20105. The MPAS-Ocean model is developed at Los Alamos National Laboratory by the Climate, Ocean, and Sea Ice Model team (COSIM). Core MPAS-Ocean developers include Mark Petersen, Todd Ringler, and Douglas Jacobsen. MPAS-Ocean is a component of the Accelerated Climate Model for Energy (ACME). MPAS-Ocean and ACME are supported by Dr. Dorothy Koch of the U.S. Department of Energy Office of Science, Earth Modeling Program of the Office of Biological and Environmental Research. Additionally, the authors would like to thank David Astrada for his videography expertise and the Bradbury Science Museum for its generous sharing of facilities and support. Gallery from the award presentation.
Vignesh Adhinarayanan of Virginia Tech and the Data Science at Scale School explains a poster on performance, power, and energy of in-situ and post-processing visualization at SC15. The poster was nominated for best poster at Supercomputing 2015 in Austin, Texas. The work being described is part of the ECX Project led by Jim Ahrens and David Rogers of the Data Science at Scale Team.
Qiang joined the Data Science at Scale team as a staff scientist in November 2016 following a successful postdoctoral research position with the Ultra-scale System Research Center. His interests include cloud performance modeling and optimization; cloud dependability and reliability analysis; cloud failure detection and prediction; virtualization; power management and green computing in cloud infrastructures; resilience analysis in HPC; resource management in cloud systems; data mining and machine learning; and signal and image processing in biometrics.
Anne Berres joined the Data Science at Scale team as a postdoctoral researcher in November 2016 after earning her Ph.D. in Computer Science from the Technical University of Kaiserslautern. Her research interests include topology, differential manifolds, differential geometry, medical visualization, neural diseases and probabilistic tractography. She was the lead author of ‘Tractography in Context: Multimodal Visualization of Probabilistic Tractograms in Anatomical Context’, published in the Eurographics Workshop on Visual Computing for Biology and Medicine 2012.
Jon Woodring presented the ‘In Situ Eddy Analysis in a High-Resolution Ocean Climate Model‘ paper at IEEE Vis 2015 in Chicago. The paper’s other authors are Mark Petersen, John Patchett and James Ahrens of Los Alamos National Lab, and Andre Schmeisser and Hans Hagen of the Kaiserslautern Technical University. The paper was previewed in a short video.
Francesca Samsel participated in the Color Mapping Panel at IEEE Vis 2015. The panel highlighted optimal solutions for designing and building color maps in visualization applications and presentations. The panelists represented artists, software engineers, cartographers, color scientists, perceptual psychologists, and visualization researchers who have contributed effective solutions to applying color to data visualization. Each panelist highlighted their perspective as well as tips and tricks for color map solutions. Drawing on perspectives from many disciplines, the panel identified gaps in the understanding about the use of color in visualization and identified future research directions. Francesca’s presentation is here.
In a previous paper Robert Kares described some numerical experiments performed using the ParaView/Catalyst in-situ visualization infrastructure deployed in the Los Alamos RAGE radiation-hydrodynamics code to produce images from a running large-scale 3D ICF simulation. One challenge of the in-situ approach apparent in these experiments was the difficulty of choosing parameters, such as isosurface values, for the visualizations produced from the running simulation without the benefit of prior knowledge of the simulation results. Choosing sub-optimal parameters carries the cost of recomputing the in-situ generated images. A proposed method of addressing this difficulty is simply to render multiple images at runtime across a range of possible parameter values, producing a large database of images, and to provide the user with a tool for managing the resulting imagery. Recently, ParaView Catalyst has been extended to include such a capability via the so-called Cinema framework. Here Kares describes some initial experiments with the first delivery of Cinema and makes some recommendations for future extensions of Cinema’s capabilities.
Divya Banesh, David Barnes, Sebastian Klaassen and Uzma Shaikh, students in the Data Science at Scale School, presented a poster on some of their 2015 work at LANL. Their mentors were Jim Ahrens, David Rogers, Ollie Lo and Chris Sewell. They were assisted by another student, Arnold Eatmon, staff member Jon Woodring and artist Francesca Samsel.
The University of Washington eScience Institute is hosting an NSF-sponsored Graduate Data Science Workshop that will bring together 100 graduate students from diverse domain sciences and engineering with data scientists from industry and academia to discuss and collaborate on Big Data / Data Science challenges. To participate in the workshop, submit a white paper in PDF format that describes a Big Data / Data Science challenge faced by your scientific or engineering discipline, or an idea for a new tool or method addressing a Big Data / Data Science problem. The white paper submission deadline is June 20th, 2015. Invitees will be notified on July 1st, 2015.
Working with COSIM climate modelers at LANL, artist Francesca Samsel and the Data Science at Scale team developed new colormaps that enable scientists to see more detail within their simulations. The colormaps move through changes in hue, saturation and value to create a longer line through colorspace, and they have been user tested for their ability to show even and accurate detail. A number of these colormaps will be incorporated by Kitware into the next release of ParaView. The new colormaps draw a longer line through color space by independently specifying H, S, and V values; some maps have 20 control points, others over 40. Removing some control points is fine, as long as you are adjusting in LAB space, but you will lose some of the detailed color contrast that makes these maps effective.
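The idea of drawing a longer line through color space by interpolating independently specified H, S, and V control points can be sketched as follows. The control points below are illustrative placeholders, not the published COSIM colormaps:

```python
import colorsys

# Hypothetical control points: (position in [0, 1], hue, saturation, value).
# Real maps in this work use 20 to 40+ such points; these are illustrative only.
CONTROL_POINTS = [
    (0.00, 0.60, 0.90, 0.30),
    (0.25, 0.50, 0.70, 0.60),
    (0.50, 0.33, 0.80, 0.80),
    (0.75, 0.15, 0.60, 0.95),
    (1.00, 0.05, 0.90, 1.00),
]

def sample_colormap(t, points=CONTROL_POINTS):
    """Linearly interpolate H, S, and V independently at t in [0, 1],
    then convert the result to RGB."""
    t = min(max(t, 0.0), 1.0)
    for (t0, h0, s0, v0), (t1, h1, s1, v1) in zip(points, points[1:]):
        if t <= t1:
            f = 0.0 if t1 == t0 else (t - t0) / (t1 - t0)
            h = h0 + f * (h1 - h0)
            s = s0 + f * (s1 - s0)
            v = v0 + f * (v1 - v0)
            return colorsys.hsv_to_rgb(h, s, v)
    return colorsys.hsv_to_rgb(*points[-1][1:])
```

Because hue, saturation, and value each vary between every pair of control points, the path through color space is longer than a simple two-endpoint ramp, which is what yields the extra perceptual detail.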
Jim Ahrens gave an invited presentation on Thursday morning titled ‘Shared Analysis for Resilience, Debugging, Verification, Validation and Discovery’. In this talk he describes, from an application perspective, the interconnectedness of simulation analysis, debugging and resilience. He advocates for a joint approach to addressing these challenges based on a framework that supports information gathering, event detection and triggered actions.
Scientists from the Climate, Ocean and Sea Ice Modeling Team (COSIM) at the Los Alamos National Laboratory (LANL) are interested in gaining a deeper understanding of three primary ocean currents: the Gulf Stream, the Kuroshio Current, and the Agulhas Current & Retroflection. To address these needs, visual artist Francesca Samsel teamed up with experts from the areas of computer science, climate science, statistics, and perceptual science. By engaging an artist specializing in color, we created colormaps that provide the ability to see greater detail in these high-resolution datasets. The new colormaps applied to the POP dataset enabled scientists to see areas of interest that were unclear with standard colormaps. Improvements in the perceptual range of color allowed scientists to highlight structures within specific ocean currents. Work with the COSIM team members drove development of nested colormaps, which provide further detail to the scientists. Francesca will present a paper titled ‘Colormaps that Improve Perception of High-Resolution Ocean Data’ at the CHI2015 Conference being held in Seoul, Republic of Korea, on April 18 to 23, 2015.
Extreme scale scientific simulations are pushing the limits of scientific computation, and are stressing the limits of the data that we can store, explore, and understand. Options for extreme scale data analysis are often presented as a stark contrast: save massive data files to disk for interactive, exploratory visualization, or perform in situ analysis to save detailed data about phenomena a scientist knows about in advance. We propose that there is an alternative approach: a highly interactive, image-based approach that promotes exploration of simulation results, and is easily accessed through extensions to widely used open source tools. This new approach supports interactive exploration of a wide range of results, while still significantly reducing data movement and storage. We call this new approach Cinema. The tutorial linked to here assumes that you are somewhat familiar with the technical motivations that drove the creation of Cinema, but further technical detail can be found in the SC14 paper also linked to below. We have created the Cinema Virtual Machine (CVM) to provide a hands-on way of understanding some of the concepts behind Cinema, along with the deeper technical ideas. With the CVM, you can explore several small Cinema databases, explore concrete examples of the Cinema file specification, and get hands-on experience creating in situ export scripts with an MPAS-coupled analysis workflow.
Jim Ahrens gave an invited plenary presentation on Wednesday morning titled ‘Implications of Numerical and Data Intensive Technology Trends on Scientific Visualization and Analysis’. Jon Woodring organized and co-presented a minitutorial titled ‘Python Visual Analytics for Big Data’ on Sunday. Presentations from select sessions at the SIAM Conference on Computational Science and Engineering (CSE15) are now available on SIAM Presents…Featured Lectures from Our Archives. You may also view and listen to invited lectures, minitutorials, minisymposia, and Linda Petzold’s talk Celebrating 15 Years of SIAM CSE. You do not need to log in to view presentations, though registering will allow you to track the presentations you access. Audio/slides can be viewed by selecting the meeting/course. You can filter the list of sessions by using the selections under “All Tracks” and then connecting to a specific session. You can also view photos from the conference on the SIAM Facebook page. You do not need to log in to Facebook to view the photo albums.
Jim Ahrens participated in the mentoring program for students at SIAM CSE15. The program was part of an effort to broaden engagement with a more diverse community. In addition to small group and one-on-one mentoring sessions, activities included the Workshop Celebrating Diversity, Professional Development Evening, Association for Women in Mathematics activities, Student Days, Student Careers Panel, and Job Fair.
Over decades and centuries, the practices of art and science have diverged as separate disciplines and, driven by scrutiny and opinions, have sought to define what makes a great artist or scientist. It is not surprising, therefore, that many scientists remain unfamiliar with the many and varied artistic contributions to scientific advancement. Art-science case studies aren’t encountered in our everyday work, but they can be highly suggestive of approaches for creative thinking and innovation. Today you can readily find scientists whose work could be shared with the general public more effectively. By introducing an Art on Graphics department to CG&A, the department editors aim to expose the work of teams that draw on the skills of art, science, and technology professions to make rigorous innovative contributions to the domain of computer graphics and applications. Bruce Campbell and Francesca Samsel edited ‘Pursuing Value in Art-Science Collaborations‘ for the ‘Art on Graphics’ department of the January/February 2015 IEEE Computer Graphics and Applications.
Cinema is a novel open-source framework built on top of ParaView that couples processing exporters and a client user interface for output analysis using images or other types of reduced data. Cinema captures images based on several camera positions and filter configurations. The overall design of Cinema was discussed in the paper ‘An Image-based Approach to Extreme Scale In Situ Visualization and Analysis’. A Cinema Database is a collection of data that supports this image-based approach to interactive data exploration: a set of images and associated metadata. The Cinema Database Specification is available on the Download page of the CinemaScience.org website.
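The core idea of an image-based database, a set of pre-rendered images indexed by the parameters that produced them, can be sketched in a few lines. The parameter names and path layout below are hypothetical illustrations, not the official Cinema Database Specification:

```python
import itertools
import json

# Illustrative parameter sweep (names and values are hypothetical,
# not taken from the official Cinema Database Specification).
parameters = {
    "time":  [0, 10, 20],
    "phi":   [0, 90, 180, 270],   # camera azimuth, degrees
    "theta": [-45, 0, 45],        # camera elevation, degrees
}

def build_index(parameters):
    """Map every combination of sweep parameters to an image path,
    mimicking how an image database pairs metadata with renderings."""
    names = list(parameters)
    index = {}
    for combo in itertools.product(*(parameters[n] for n in names)):
        key = dict(zip(names, combo))
        path = "image/" + "_".join(f"{n}={v}" for n, v in key.items()) + ".png"
        index[json.dumps(key, sort_keys=True)] = path
    return index

index = build_index(parameters)
```

A viewer then answers "show me time 10 from camera (90, 0)" by a dictionary lookup rather than a re-render, which is what makes interactive exploration of in situ output feasible.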
Our hypothesis was that by applying artistic principles of color theory, we could create colormaps that would provide the ability to see greater detail in high-resolution datasets, thereby facilitating the discovery and study of features of interest within the data. The collaboration between F. Samsel and M. Petersen led to the further development of nested colormaps, which insert a complete colormap within a small range of the full data space. This study was designed to test a domain scientist’s exploratory task: identify and follow the location of the Gulf Stream across a range of latitudes.
The traditional post-processing visualization and analysis approach is becoming unworkable. This presentation given at SC14 by Los Alamos and Kitware scientists describes a new image based approach to extreme scale in-situ visualization and analysis. The full paper and video provide additional information.
Boonthanome Nouanesengsy, John Patchett and Jonathan Woodring attended the IEEE VIS2014 Conference in Paris. Boonth presented a talk titled ‘ADR Visualization: A Generalized Framework for Ranking Large-Scale Scientific Data using Analysis-Driven Refinement’ during the LDAV 2014 Symposium on Large Data Analysis and Visualization (LDAV).
Jon Woodring of the Data Science at Scale team has done all of his research in Python since 2001 or so, and recently he has become a Python evangelist. He will be at The Ohio State University, IEEE Vis 2014, and SIAM CSE 2015 this year teaching Python for Visualization and Analysis tutorials with his colleagues at Continuum Analytics, Kitware, and Indiana University.
In this paper Bob Kares describes some numerical experiments performed using the ParaView/Catalyst in-situ visualization infrastructure deployed in the Los Alamos RAGE radiation-hydrodynamics code to produce images from a running large-scale 3D ICF simulation on the Cielo supercomputer at Los Alamos. The detailed procedures for the creation of the visualizations using ParaView/Catalyst are discussed, and several image sequences from the ICF simulation problem produced with the in-situ method are presented. His impressions and conclusions concerning the use of the in-situ visualization method in RAGE are discussed.
As supercomputing moves towards exascale, scientists, engineers and medical researchers will look for efficient and cost-effective ways to enable data analysis and visualization for the products of their computational efforts. The ‘exa’ metric prefix stands for quintillion, and the proposed exascale computers would perform approximately as many operations per second as 50 million laptops. Clearly, typical spatial and temporal data reduction techniques employed for post-processing will not yield desirable results, since reductions of 10^3, 10^6, or 10^9 may still produce petabytes, terabytes or gigabytes of data to transfer or store. Since transferring or storing data may no longer be viable for many simulation applications, data analysis and visualization must now be performed in situ. ParaView Catalyst is an open-source data analysis and visualization library which aims to reduce IO by tightly coupling simulation, data analysis and visualization codes. This tutorial presented the architecture of ParaView Catalyst and the fundamentals of in situ data analysis and visualization. Attendees learned the basics of using ParaView Catalyst with hands-on exercises. The tutorial featured detailed guidance in implementing C++, Fortran and Python examples. Attendees installed a VirtualBox image from the ‘Download File’ link below for demonstrations and the exercises.
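The coupling pattern described above can be sketched abstractly: the simulation hands its current state to an analysis hook at chosen timesteps instead of writing full state to disk. All names here are hypothetical stand-ins, not the actual ParaView Catalyst API:

```python
# A minimal sketch of the in situ coupling pattern (illustrative only;
# the real Catalyst API works through adaptors and pipeline scripts).

def analyze(step, field):
    """Stand-in for an analysis pipeline: reduce the full field to a
    small summary, avoiding any full-state IO."""
    return {"step": step, "min": min(field), "max": max(field)}

def run_simulation(n_steps, analysis_frequency, results):
    field = [0.0] * 8                       # toy simulation state
    for step in range(n_steps):
        field = [x + 0.5 for x in field]    # stand-in for the solver update
        if step % analysis_frequency == 0:  # co-processing trigger
            results.append(analyze(step, field))

results = []
run_simulation(n_steps=10, analysis_frequency=5, results=results)
```

The key design point is that only the reduced summaries leave the simulation, which is what makes the approach viable when writing the full state is no longer affordable.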
Hans Hagen, Christoph Garth, Anne Berres and Roger Daneker of the Kaiserslautern Technical University visited the Data Science at Scale School on October 1 & 2. The meeting covered ways to further the collaboration and included presentations by the Kaiserslautern students on their summer projects.
Anne Berres holds B.Sc. and M.Sc. degrees in Computer Science from the Technical University of Kaiserslautern, where she is currently a Ph.D. student in Computer Science. Her research interests include topology, differential manifolds, differential geometry, medical visualization, neural diseases and probabilistic tractography. She was the lead author of ‘Tractography in Context: Multimodal Visualization of Probabilistic Tractograms in Anatomical Context’, published in the Eurographics Workshop on Visual Computing for Biology and Medicine 2012. Flyer
This is a visualization of ocean temperature, in degrees Celsius, from the MPAS-Ocean Model, developed at Los Alamos National Laboratory. The goal of the visualization is to enable detailed views of the eddies and currents in specific regions of the model. The colormap developed allows for a detailed rendering of specific data ranges. The image of the North Atlantic shows that the Gulf Stream departs the US coast at Cape Hatteras, NC, and turns left just south of Greenland, a feature known as the ‘Northwest Corner’. This pathway compares well with observations, and is highlighted by a narrow range of greens on the colormap. In the western Pacific, the Kuroshio Current is a narrow stream of warm water that sheds eddies as it departs eastward from Japan. The colormap was chosen to use light yellow at 22°C at the center of the jet, highlighting cooler eddies to the north in blue and warmer eddies to the south in green. It was produced by Francesca G. Samsel of the Center for Agile Technologies at the University of Texas at Austin and Mark Petersen of the Climate, Ocean and Sea Ice Modeling Team at the Los Alamos National Laboratory.
Ian Foster visited the Data Science at Scale School on September 18 and presented ‘Distributed Scientific Computing – The Acceleration of Discovery in a Networked World‘ as a Director’s Unclassified Colloquium at 1:10PM in the Physics Auditorium. Ian Foster is Director of the Computation Institute, a joint institute of the University of Chicago and Argonne National Laboratory. He is also an Argonne Senior Scientist and Distinguished Fellow and the Arthur Holly Compton Distinguished Service Professor of Computer Science. Ian received a BSc (Hons I) degree from the University of Canterbury, New Zealand, and a PhD from Imperial College, United Kingdom, both in computer science. His research deals with distributed, parallel, and data-intensive computing technologies, and innovative applications of those technologies to scientific problems in such domains as climate change and biomedicine. Methods and software developed under his leadership underpin many large national and international cyberinfrastructures. Dr. Foster is a fellow of the American Association for the Advancement of Science, the Association for Computing Machinery, and the British Computer Society. His awards include the Global Information Infrastructure (GII) Next Generation award, the British Computer Society’s Lovelace Medal, R&D Magazine’s Innovator of the Year, and an honorary doctorate from the University of Canterbury, New Zealand. He was a co-founder of Univa UD, Inc., a company established to deliver grid and cloud computing solutions.
Data from a Los Alamos National Lab direct numerical simulation (DNS) of homogeneous buoyancy-driven turbulence on a 1024^3 periodic grid has been incorporated into the Johns Hopkins Turbulence Databases. Access to the data is facilitated by a Web services interface that permits numerical experiments to be run across the Internet. C, Fortran and Matlab interfaces are layered above the Web services so that scientists can use familiar programming tools on their client platforms. Calls to fetch subsets of the data can be made directly from within a program being executed on the client’s platform. Manual queries for data at individual points and times via web browser are also supported. Evaluation of velocity and pressure at arbitrary points and times is supported using interpolations executed on the database nodes. Spatial differentiation using various orders of approximation (up to 8th order) and filtering are also supported (for details, see the documentation page). Particle tracking can be performed both forward and backward in time using a second-order accurate Runge-Kutta integration scheme. Subsets of the data can be downloaded in HDF5 file format using the data cutout service. The Johns Hopkins Turbulence Databases are located at http://turbulence.pha.jhu.edu/. Descriptions of the data can be found at http://turbulence.pha.jhu.edu/datasets.htm, including the LANL DNS dataset provided by Daniel Livescu. A full description of Livescu’s data is at http://turbulence.pha.jhu.edu/docs/README-HBDT.pdf. Queries can be run using http://turbulence.pha.jhu.edu/webquery/query.aspx; select the ‘mixing’ dataset in the drop-down box for Livescu’s dataset. You can perform these small web queries without an authentication token. Additional queries are available using the C, Fortran, Matlab, or .NET frameworks.
Instructions for these queries are at http://turbulence.pha.jhu.edu/instructionswebserv.htm.
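The second-order Runge-Kutta (midpoint) scheme the database uses for particle tracking can be sketched generically. The velocity field below is a simple analytic stand-in, not a query against the turbulence database:

```python
# Generic second-order Runge-Kutta (midpoint) particle advection,
# illustrating the integration scheme used for particle tracking.
# The velocity field is an analytic stand-in (solid-body rotation),
# not an actual database query.

def velocity(pos, t):
    x, y, z = pos
    return (-y, x, 0.0)   # rotation about the z-axis at unit angular speed

def rk2_step(pos, t, dt):
    """Advance one particle by dt using the midpoint rule."""
    u1 = velocity(pos, t)
    mid = tuple(p + 0.5 * dt * u for p, u in zip(pos, u1))
    u2 = velocity(mid, t + 0.5 * dt)
    return tuple(p + dt * u for p, u in zip(pos, u2))

def track(pos, t0, dt, n_steps):
    """Integrate a trajectory forward (or backward, with dt < 0)."""
    t = t0
    for _ in range(n_steps):
        pos = rk2_step(pos, t, dt)
        t += dt
    return pos
```

In the database setting, each `velocity` call would instead be an interpolated lookup served by the database nodes, which is why the tracking can run server-side over data too large to download.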
John Patchett visited Kaiserslautern Technical University to further the collaboration between LANL and Kaiserslautern. Patchett gave an invited talk on the LANL Data Science at Scale team’s efforts to enable large-scale visualization and analysis by using in situ techniques for visualization, analysis and data triage.
On January 17, President Obama spoke at the Justice Department about changes in the technology that we use for national security purposes, and what these technologies mean for our privacy broadly. He called on the administration to conduct a broad 90-day review of big data and privacy: how these technologies affect the way we live and the way we work, and how big data is being used by universities, the private sector, and the government. Jim Ahrens assisted Dimitri Kusnezov, Senior Advisor to the Secretary of Energy, in preparing DOE’s briefing for the review. The briefing described the relationship between big data and exascale. An article on this topic, ‘The Big Data-High Performance Computing Convergence’, was submitted to Innovations magazine in 2014.
Jim Ahrens was interviewed by HPCwire for an article on prioritizing data in the age of exascale. The interview was based, in part, on Jim’s paper ‘Increasing Scientific Data Insights About Exascale Class Simulations Under Power and Storage Constraints’ for the Big Data and Extreme-Scale Computing Conference earlier in the year.
Jim Ahrens gave an invited talk and led data sessions at the Big Data and Extreme-Scale Computing Conference in Fukuoka, Japan from Feb. 26-28, 2014. The title of Jim’s talk was ‘Increasing Scientific Data Insights About Exascale Class Simulations Under Power and Storage Constraints’. Jim also described the talk in a podcast interview with HPCWire.