Develop proposal for special issue



Background: Why a Special Issue on Geoscience Papers of the Future?

Include here our discussion for the vision

Background should be 1-2 pages.

Motivated by need to fully document and make research accessible and reproducible.

Motivation: The EarthCube Initiative and the GeoSoft Project

Include here background about GeoSoft from the web site

OSTP memo. EarthCube reports. Other reports that talk about the need for new approaches to editing.

It's possible that small or very large contributions are not well captured in the current publishing paradigms. Nanopublications.

For example, nanopublications are a possible way to reflect advances in a research process that may not merit a full publication but are still useful to share with the community. A challenge here is the stigma attached to publishing units of work that are very small.

Alternatively, a very large piece of research or work with many parts may be better suited to a GPF style publication.


Perhaps the concept of a 'paper' can be better reflected in the concept of a 'wrapper', or a collection of materials and resources. The purpose is to assure that publications are representative of the work, effort, and results achieved in the research process.

What is a GPF

Include here our discussion of what is a GPF

The challenges of creating GPFs

The articles in this issue reflect the current best practice for generating a Geoscience Paper of the Future.

Figure discussions: Do we want to reproduce exactly the same figure automatically? Figures in the paper may be cleaned-up versions of an image generated by software. To the extent possible, authors have included clear delineations of provenance. The goal is to assure that readers may regenerate the figures using documented workflows, data, and codes. An important note (Allen, Sandra) is that figures are frequently generated by code, scripts, etc., yet the actual figure is often finalized by hand by the user. Mimi's point: is it really worth belaboring how the prettified version of the figure is made? If it is, both of the visualization packages I've used (Matlab and SigmaPlot) have actual code in the background that specifies how to set up the prettification, and this code can be found, copied out, and rerun to generate the exact same figure with all of the prettification in the same place (SigmaPlot uses Visual Basic, I think, in its macros). If explicit code is the important point, this should be doable, but it may not be strictly necessary to specify exactly where all the prettifications are to get the gist across.
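
As a concrete illustration of the point above, here is a minimal, hypothetical sketch (Python/matplotlib, not taken from any of the papers in this issue) of a figure script in which all of the "prettification" lives in the code, so the exact published figure can be regenerated from the archived data. The file name and column names are placeholders.

 import pandas as pd
 import matplotlib.pyplot as plt
 # "results.csv" is a placeholder for a data file archived alongside the paper
 data = pd.read_csv("results.csv")
 fig, ax = plt.subplots(figsize=(5, 3.5))
 ax.plot(data["time"], data["discharge"], color="steelblue", linewidth=1.5)
 ax.set_xlabel("Time (days)")
 ax.set_ylabel("Discharge (m$^3$ s$^{-1}$)")
 ax.set_title("Simulated discharge")   # all styling is code, not manual edits
 fig.tight_layout()
 fig.savefig("figure_1.pdf", dpi=300)  # identical output on every rerun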

How much of one's experimental history should be included? (Ibrahim). The experimental process often leads to dead ends. Should we document all of the failed experiments? Should one DOI be assigned to the results of the successful experiment and another to the failed trials?


Documenting: Timing and Intermediate Processes. When should we document, and what are the bounds on what we document? For example, should we document and include data and workflows for 'failed' experiments? Or should we assign datasets DOIs before we know the results from using them? The group thinks that good practices may include documenting and sharing data once there is a clear understanding of which outcomes are worth reporting. For example, successful experiments should have clear, clean data documented and shared, whereas one strategy for 'failed' experiments could be to bundle the intermediate datasets under one DOI together with a more general discussion of the process and methods.

Related work

Include here the related work we have discussed

Papers to be included

Would it be worthwhile to group the papers into broader categories rather than giving specifics about every single paper?

For each submission, we describe:

  • Authors and affiliations
  • Keywords of research area
  • Tentative title
  • Short abstract
  • Challenge
  • Relationship to other publications (is the article based on a previously published article? is it new content? If previously published, please provide a pointer to the published article and specify what percentage of the work presented will be new)
  • Pointer to the wiki page that documents the article
  • Expected submission date

[David 2015]

  • Authors and affiliations: Cedric David
  • Keywords of research area: Hydrology, Rivers, Modeling, Testing, Reproducibility.
  • Tentative title: Going beyond triple-checking, allowing for peace of mind in community model development.
  • Short abstract: The development of computer models in the general field of geoscience is often made incrementally over many years. Endeavors that generally start on a single researcher's own machine evolve over time into software that is often much larger than was initially anticipated. Looking back at years of building on their computer code, sometimes without much training in computer science, geoscience software developers can easily experience an overwhelming sense of incompetence when contemplating ways to further community usage of their software. How does one allow others to use their code? How can one foster survival of their tool? How shall one ensure the scientific integrity of ongoing developments, including those made by others? Common issues faced by geoscience developers include selecting a license, learning how to track and document past and ongoing changes, choosing a software repository, and allowing for community development. This paper provides a brief summary of experience with the first three of these steps of software growth by focusing on a computer code designed for river routing. The core of this study, however, focuses on reproducing previously published experiments. This step is highly repetitive and can therefore benefit greatly from automation. Additionally, enabling automated software testing can arguably be considered the final step for sustainable software sharing, by allowing the main software developer to let go of a mental block concerning scientific integrity. Creating tools to automatically compare the results of an updated version of a software package with previous studies can not only save the main developer some time, it can also empower other researchers to check and justify that their potential additions have retained scientific integrity. (A minimal sketch of such an automated comparison follows this entry.)
  • Challenge: Ensure that updates to an existing model are able to reproduce a series of simulations published previously.
  • Relationship to other publications: This research is related to past and ongoing development of the Routing Application for Parallel computatIon of Discharge (RAPID). The primary focus of this paper is to allow automated reproducibility of at least the first RAPID publication.
  • Pointer to the wiki page that documents the article: Page
  • Expected submission date:
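
For illustration only, the following minimal sketch (in Python, and not RAPID's actual test suite) shows one way such an automated comparison against a published experiment could look: the output of an updated model run is compared to archived published results within a tolerance. The file names, variable name, and tolerance are assumptions.

 # Minimal sketch: does an updated run reproduce a published simulation?
 import numpy as np
 from netCDF4 import Dataset
 
 def outputs_match(new_file, published_file, varname="Qout", rtol=1e-5):
     """Return True if the updated run reproduces the archived discharge
     within a relative tolerance (names above are illustrative)."""
     with Dataset(new_file) as new, Dataset(published_file) as ref:
         q_new = np.asarray(new.variables[varname][:])
         q_ref = np.asarray(ref.variables[varname][:])
     return np.allclose(q_new, q_ref, rtol=rtol)
 
 if __name__ == "__main__":
     ok = outputs_match("Qout_updated.nc", "Qout_published_2011.nc")
     print("Published simulation reproduced" if ok else "Regression detected")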

[Demir 2015]

  • Authors and affiliations: Ibrahim Demir
  • Keywords of research area: hydrologic network, optimization, network representation, database query
  • Tentative title: Optimization of hydrological network representation for fast access and query in web-based system
  • Short abstract: The article is about benchmarking various network representation techniques for optimization of hydrological network access and query.
  • Challenge:
  • Relationship to other publications: The article is based on a new study
  • Pointer to the wiki page that documents the article: Page
  • Expected submission date:

[Fulweiler 2015]

  • Authors and affiliations: Wally Fulweiler
  • Keywords of research area:
  • Tentative title:
  • Short abstract:
  • Challenge:
  • Relationship to other publications: (is the article based on a previously published article? is it new content?)
  • Pointer to the wiki page that documents the article: Page
  • Expected submission date:

[Loh and Karlstrom 2015]

  • Authors and affiliations: Lay Kuan Loh and Leif Karlstrom
  • Keywords of research area: Spatial clustering, Eigenvector selection, Entropy Ranking, Cascades Volcanic Region, Afar Depression, Tharsis province
  • Tentative title: Characterization of volcanic vent distributions using spectral clustering with eigenvector selection and entropy ranking
  • Short abstract: Volcanic vents on the surface of Earth and other planets often appear in groups that exhibit spatial patterning. Such vent distributions reflect complex interplay between time-evolving mechanical controls on the pathways of magma ascent, background tectonic stresses, and unsteady supply of rising magma. With the ultimate aim of connecting surface vent distributions with the dynamics of magma ascent, we have developed a clustering method to quantify spatial patterns in vents. Clustering is typically used in exploratory data analysis to identify groups with similar behavior by partitioning a dataset into clusters that share similar attributes. Traditional clustering algorithms that work well on simple point-cloud type synthetic datasets generally do not scale well to the real-world data we are interested in, where there are poor boundaries between clusters and much ambiguity in cluster assignments. We instead use a spectral clustering algorithm with eigenvector selection based on entropy ranking, following work by Zhao et al. (2010), that outperforms traditional spectral clustering algorithms in choosing the right number of clusters for point data. We benchmark this algorithm on synthetic vent data with increasingly complex spatial distributions, to test the ability to accurately cluster vent data with variable spatial density, skewness, number of clusters, and proximity of clusters. We then apply our algorithm to several real-world datasets from the Cascades, Afar Depression, and Mars. (A rough illustrative sketch of this style of clustering is given after this entry.)
  • Challenge: Quantifying clustering. We plan to study how varying the statistical distribution, density, skewness, background noise, number of clusters, proximity of clusters, and combinations of any of these factors affects the performance of our algorithm. We test it against synthetic and real-world datasets.
  • Relationship to other publications: New content, but one of the databases we are studying in the paper (Cascades Volcanic Range) would be based on a different paper we are preparing and planning to submit earlier.
  • Pointer to the wiki page that documents the article: Page
  • Expected submission date: June 2015
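
As a rough, illustrative sketch of this family of methods (not the authors' implementation), the Python snippet below builds a spectral embedding of 2-D vent coordinates and ranks the candidate eigenvectors by the entropy of their value distributions before k-means. The affinity scale, the number of candidate eigenvectors, and the histogram-based entropy measure are simplifying assumptions.

 import numpy as np
 from scipy.spatial.distance import cdist
 from scipy.stats import entropy
 from sklearn.cluster import KMeans
 
 def spectral_cluster_vents(xy, n_clusters=3, n_eig=10, sigma=1.0):
     # Gaussian affinity and symmetric normalized graph Laplacian
     w = np.exp(-cdist(xy, xy, "sqeuclidean") / (2 * sigma**2))
     d = w.sum(axis=1)
     lap = np.eye(len(xy)) - w / np.sqrt(np.outer(d, d))
     vals, vecs = np.linalg.eigh(lap)
     candidates = vecs[:, 1:n_eig + 1]          # skip the trivial eigenvector
     # Rank candidate eigenvectors by the entropy of their histogrammed values;
     # low entropy ~ strong cluster structure (simplified from Zhao et al. 2010)
     scores = [entropy(np.histogram(v, bins=20)[0] + 1e-12) for v in candidates.T]
     keep = np.argsort(scores)[:n_clusters]
     return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(candidates[:, keep])
 
 # Example on synthetic vent coordinates (placeholder data)
 labels = spectral_cluster_vents(np.random.rand(200, 2) * 10, n_clusters=3)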

[Lee 2015]

  • Authors and affiliations: Kyo Lee, Maziyar Boustani and Chris Mattmann, Jet Propulsion Laboratory
  • Keywords of research area: Regional Climate Model Evaluation System, Open Climate Workbench, regional climate change
  • Tentative title: Regional Climate Model Evaluation System to facilitate climate model evaluation using observational datasets from various sources
  • Short abstract: Regional Climate Model Evaluation System (RCMES) is open source software developed to facilitate climate model evaluation. Recognizing the need for a comprehensive tool for studying climate science, RCMES also provides easy access to widely used observational data from its own database. We provide a clear and easy-to-follow RCMES workflow to replicate published papers.
  • Challenge: Sharing big data, better documenting source codes, encouraging the climate science community to use RCMES
  • Relationship to other publications: Kim et al. 2013, Kim et al. 2014, Lee et al. 2014
  • Pointer to the wiki page that documents the article: Page
  • Expected submission date: End of June 2015

[Miller 2015]

  • Authors and affiliations: Kim Miller
  • Keywords of research area:
  • Tentative title:
  • Short abstract:
  • Challenge:
  • Relationship to other publications: (is the article based on a previously published article? is it new content?)
  • Pointer to the wiki page that documents the article: Page
  • Expected submission date:

[Mills 2015]

  • Authors and affiliations: Heath Mills, University of Houston Clear Lake; Brandi Kiel Reese, Texas A&M Corpus Christi
  • Keywords of research area:
  • Tentative title: Iron and Sulfur Cycling Biogeography Using Advanced Geochemical and Molecular Analyses
  • Short abstract:
  • Challenge: My paper will develop and document a new pipeline to analyze a combined and robust genetic and geochemical data set. New, reproducible methods will be highlighted in this manuscript to help others better analyze similar data sets. There is a general lack of guidance within my field for such challenges. This manuscript will be unique and helpful from an analysis standpoint as well as for the science being presented.
  • Relationship to other publications: Original Manuscript
  • Pointer to the wiki page that documents the article: Page
  • Expected submission date:

[Oh 2015]

  • Authors and affiliations: Ji-Hyun Oh, Jet Propulsion Laboratory / University of Southern California
  • Keywords of research area: Tropical Meteorology, Madden-Julian Oscillation, Momentum budget analysis
  • Tentative title: Tools for computing momentum budget for the westerly wind event associated with the Madden-Julian Oscillation
  • Short abstract: As one of the most pronounced modes of tropical intraseasonal variability, the Madden-Julian Oscillation (MJO) prominently connects global weather and climate and serves as one of the critical sources of predictability for extended-range forecasting. The zonal circulation of the MJO is characterized by low-level westerlies (easterlies) in and to the west (east) of the convective center. The direction of the zonal winds in the upper troposphere is opposite to that in the lower troposphere. In addition to the convective signal as an identifier of MJO initiation, certain characteristics of the zonal circulation have been used as a standard metric for monitoring the state of the MJO and for investigating features of the MJO and its impact on other atmospheric phenomena. This paper documents a tool for investigating the generation of low-level westerly winds during the MJO life cycle. The tool is used for momentum budget analysis to understand the respective contributions of the various processes involved in the wind evolution associated with the MJO, using European Centre for Medium-Range Weather Forecasts operational analyses from the Dynamics of the Madden–Julian Oscillation field campaign.
  • Challenge: This paper will cover how to reproduce two key figures from the paper that I recently submitted to the Journal of the Atmospheric Sciences. This will include detailed procedures for generating the figures, such as how and where to download the data, how to transform the data format for use as input to my codes, and so on.
  • Relationship to other publications: This article is related to part of the paper submitted to the Journal of the Atmospheric Sciences. (An illustrative sketch of one momentum budget term is given after this entry.)
  • Pointer to the wiki page that documents the article: Page
  • Expected submission date:
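
Purely as an illustration of the kind of calculation involved (a sketch under assumed file and variable names, not the paper's analysis code), the snippet below computes the horizontal advection terms of the zonal momentum budget, -u du/dx - v du/dy, from gridded analysis winds.

 import numpy as np
 import xarray as xr
 
 R_EARTH = 6.371e6  # Earth radius in metres
 
 # Hypothetical input file and variable names (u, v, latitude, longitude)
 ds = xr.open_dataset("ecmwf_analysis.nc")
 u = ds["u"].values   # zonal wind (m/s), dims (..., lat, lon)
 v = ds["v"].values   # meridional wind (m/s)
 
 lat = np.deg2rad(ds["latitude"].values)
 lon = np.deg2rad(ds["longitude"].values)
 
 # Grid spacing in metres per index step (dx shrinks toward the poles)
 dy = R_EARTH * np.gradient(lat)                                  # shape (nlat,)
 dx = R_EARTH * np.cos(lat)[:, None] * np.gradient(lon)[None, :]  # shape (nlat, nlon)
 
 du_dy = np.gradient(u, axis=-2) / dy[:, None]
 du_dx = np.gradient(u, axis=-1) / dx
 
 horizontal_advection = -(u * du_dx + v * du_dy)   # m s^-2, same shape as u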

[Pierce 2015]

  • Authors and affiliations: Suzanne Pierce
  • Keywords of research area:
  • Tentative title:
  • Short abstract:
  • Challenge: Fully document a new software application and framework using example case study data and tutorials.
  • Relationship to other publications: (is the article based on a previously published article? is it new content?)
  • Pointer to the wiki page that documents the article: Page
  • Expected submission date:

[Pope 2015]

  • Authors and affiliations: Allen Pope, National Snow and Ice Data Center, University of Colorado, Boulder
  • Keywords of research area: Glaciology, Remote Sensing, Landsat 8, Polar Science
  • Tentative title: Data and Code for Estimating and Evaluating Supraglacial Lake Depth With Landsat 8 and other Multispectral Sensors
  • Short abstract: Supraglacial lakes play a significant role in glacial hydrological systems – for example, transporting water to the glacier bed in Greenland or leading to ice shelf fracture and disintegration in Antarctica. To investigate these important processes, multispectral remote sensing provides multiple methods for estimating supraglacial lake depth – either through single-band or band-ratio methods, both empirical and physically-based. Landsat 8 is the newest satellite in the Landsat series. With new bands, higher dynamic range, and higher radiometric resolution, the Operational Land Imager (OLI) aboard Landsat 8 has significant potential for supraglacial lake studies.

This paper will document the data and code used in processing in situ reflectance spectra and depth measurements to investigate the ability of Landsat 8 to estimate lake depths using multiple methods, as well as quantify improvements over Landsat 7’s ETM+. A workflow, data, and code are provided to detail promising methods as applied to Landsat 8 OLI imagery of case study areas in Greenland, allowing calculation of regional volume estimates using 2013 and 2014 summer-season imagery. Altimetry from WorldView DEMs is used to validate lake depth estimates. The optimal method for supraglacial lake depth estimation with Landsat 8 is shown to be an average of the single-band depths from the red and panchromatic bands. With this best method, a preliminary investigation of the seasonal behavior and elevation distribution of lakes is also discussed and documented. (A sketch of one single-band depth formulation is given after this entry.)

  • Challenge: Reproducibility, Dark Code
  • Relationship to other publications: Documenting and explaining the data and code behind the analysis and results presented in another paper.
  • Pointer to the wiki page that documents the article: Page
  • Expected submission date: Late June 2015
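
As an illustration of what a single-band, physically based depth estimate can look like (a sketch only, with placeholder parameters rather than calibrated values from this work), one commonly used formulation inverts reflectance for depth as z = [ln(Ad - Rinf) - ln(Rw - Rinf)] / g:

 import numpy as np
 
 def lake_depth(reflectance, bottom_albedo=0.45, deep_water_refl=0.05, g=0.80):
     """Estimate water depth (m) from single-band surface reflectance.
     Parameter values here are placeholders, not calibrated values."""
     r = np.asarray(reflectance, dtype=float)
     valid = r > deep_water_refl            # avoid taking the log of non-positive values
     depth = np.full(r.shape, np.nan)
     depth[valid] = (np.log(bottom_albedo - deep_water_refl)
                     - np.log(r[valid] - deep_water_refl)) / g
     return depth
 
 # Example: brighter (more reflective) lake pixels map to shallower depths
 print(lake_depth([0.40, 0.25, 0.10]))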

[Read and Winslow 2015]

  • Authors and affiliations: Jordan Read and Luke Winslow
  • Keywords of research area:
  • Tentative title:
  • Short abstract:
  • Challenge:
  • Relationship to other publications: (is the article based on a previously published article? is it new content?)
  • Pointer to the wiki page that documents the article: Page
  • Expected submission date:

[Tzeng 2015]

  • Authors and affiliations: Mimi Tzeng, Brian Dzwonkowski (DISL); Kyeong Park (TAMU Galveston)
  • Keywords of research area: physical oceanography, remote sensing
  • Tentative title: Fisheries Oceanography of Coastal Alabama (FOCAL): A Subset of a Time-Series of Hydrographic and Current Data from a Permanent Moored Station Outside Mobile Bay (27 Jan to 18 May 2011)
  • Short abstract: The Fisheries Oceanography in Coastal Alabama (FOCAL) program began in 2006 as a way for scientists at the Dauphin Island Sea Lab (DISL) to study the natural variability of Alabama's nearshore environment as it relates to fisheries production. FOCAL provided a long-term baseline data set that included time-series hydrographic data from a permanent offshore mooring (ADCP, vertical thermistor array, and CTDs at surface and bottom) and shipboard surveys (vertical CTD profiles and water sampling), as well as monthly ichthyoplankton and zooplankton (depth-discrete) sample collections at FOCAL sites. The subset of data presented here is from the mooring and includes a vertical array of thermistors, CTDs at surface and bottom, an ADCP at the bottom, and vertical CTD profiles collected at the mooring during maintenance surveys. The mooring is located at 30°05.410'N, 88°12.694'W, 25 km southwest of the entrance to Mobile Bay. Temperature, salinity, density, depth, and current velocity data were collected at 20-minute intervals from 2006 to 2012. Other parameters, such as dissolved oxygen, are available for portions of the time series depending on which instruments were deployed at the time.
  • Challenge: My paper will be about the processing of data in a larger dataset from which peer-reviewed papers have been written. The processing I did was not specific to any particular paper. I can point to an example paper that used some of the data I processed from this dataset; however, all of the figures in that paper are composites that also include other data from elsewhere that I had nothing to do with (and it would not be feasible to obtain that other data within our timeframe).
  • Relationship to other publications: A recent paper that used the part of the FOCAL data I'm documenting as the sample from the larger dataset: Dzwonkowski, Brian, Kyeong Park, Jungwoo Lee, Bret M. Webb, and Arnoldo Valle-Levinson. 2014. "Spatial variability of flow over a river-influenced inner shelf in coastal Alabama during spring." Continental Shelf Research 74:25-34.
  • Pointer to the wiki page that documents the article: Page
  • Expected submission date:

[Villamizar 2015]

  • Authors and affiliations: Sandra Villamizar, University of California, Merced
  • Keywords of research area: river ecohydrology
  • Tentative title: Producing long-term series of whole-stream metabolism using readily available data.
  • Short abstract: Continuous water quality and discharge data that are readily available through government websites may be used to produce useful information about the processes within a river ecosystem. This paper will provide a detailed description of how to produce a long-term series of whole-stream metabolism for the restoration reach of the San Joaquin River in California.
  • Challenge: Document new software/applications
  • Relationship to other publications: This will be a new publication
  • Pointer to the wiki page that documents the article: Page
  • Expected submission date: To be defined

[Yu and Bhatt 2015]

  • Authors and affiliations: Xuan Yu, Department of Geological Sciences, University of Delaware. Gopal Bhatt, Department of Civil & Environmental Engineering, Pennsylvania State University.
  • Keywords of research area: coupled processes, integrated hydrologic modeling, PIHM, surface flow, subsurface flow, open science
  • Tentative title: Learning integrated modeling of surface and subsurface flow from scratch
  • Short abstract: Integrated modeling of surface and subsurface flow has been of great interest for understanding not only the intimate interconnectedness of hydrological processes, but also land-surface energy balance, biogeochemical and ecological processes, and landscape evolution. Although a growing number of complex hydrologic models have been used for resolving environmental processes, hypothesis testing, and hydrologic prediction for effective watershed management, very few model implementation resources have been made accessible to the large community of model users. Users therefore have to invest a significant amount of time and effort to reproduce and to understand the workflow of the hydrologic simulations in a modeling paper. To provide a challenging and stimulating introduction to integrated modeling of surface and subsurface flow, we revisit the development of the Penn State Integrated Hydrologic Model (PIHM) by reproducing a numerical benchmarking example and a real-world catchment-scale application. Specifically, we document PIHM and its modeling workflow to enable a basic understanding of simulating coupled surface and subsurface flow processes. We provide the model and data to highlight the reciprocal roles of the two. In addition, we incorporate user experience as a third dimension of the modeling workflow to enable deeper communication between model developers and users. The workflow has important implications for smoothing and accelerating open scientific collaboration in geosciences research.
  • Challenge: Reproduce previously published simulations from an existing model using its latest version. Benchmark the modeling application against a numerical experiment and field data.
  • Relationship to other publications: The article is based on a previously published article.
  • Pointer to the wiki page that documents the article: Page
  • Expected submission date: End of June 2015

Special Issue Editors

  • Co-editor: Chris Duffy and/or Scott Peckham
  • Co-editor: Cedric David
  • Co-editor: possibly Karan Venayagamoorthy

The editors will only accept submissions that follow the special issue review criteria.

The editors will select a set of reviewers to handle the submissions. Reviewers will include computer scientists, library scientists, and geoscientists.

Special Issue Review Criteria

The reviewers will be asked to provide feedback on the papers according to the following criteria. Note that some papers will have good reasons for limiting the information (e.g. the data is from third parties and not openly available, etc), and in that case they would document those reasons.

  • Documentation of the datasets: descriptions of datasets, unique identifiers, repositories.
  • Documentation of software: description of all software used (including pre-processing of data, visualization steps, etc), unique identifiers, repositories.
  • Documentation of the provenance of results: provenance for each figure or result, such as the workflow or the provenance record.

Tentative Timeline

  • Journal committed to special issue: April 15, 2015
  • Submissions due to editors: June 30, 2015
  • Reviews due: Sept 15, 2015
  • Decisions out to authors: Sept 30, 2015
  • Revisions due: October 31, 2015
  • Final versions due November 15, 2015
  • Issue published December 31, 2015