41 datasets found
  1. Data from: Data to Assess Nitrogen Export from Forested Watersheds in and near the Long Island Sound Basin with Weighted Regressions on Time, Discharge, and Season (WRTDS)

    • catalog.data.gov
    • data.usgs.gov
    Updated Sep 12, 2025
    + more versions
    Cite
    U.S. Geological Survey (2025). Data to Assess Nitrogen Export from Forested Watersheds in and near the Long Island Sound Basin with Weighted Regressions on Time, Discharge, and Season (WRTDS) [Dataset]. https://catalog.data.gov/dataset/data-to-assess-nitrogen-export-from-forested-watersheds-in-and-near-the-long-island-sound-
    Explore at:
    Dataset updated
    Sep 12, 2025
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Area covered
    Long Island, Long Island Sound
    Description

    The U.S. Geological Survey, in cooperation with the U.S. Environmental Protection Agency's Long Island Sound Study (https://longislandsoundstudy.net), characterized nitrogen export from forested watersheds and whether nitrogen loading has been increasing or decreasing to help inform Long Island Sound management strategies. The Weighted Regressions on Time, Discharge, and Season (WRTDS; Hirsch and others, 2010) method was used to estimate annual concentrations and fluxes of nitrogen species using long-term records (14 to 37 years in length) of stream total nitrogen, dissolved organic nitrogen, nitrate, and ammonium concentrations and daily discharge data from 17 watersheds located in the Long Island Sound basin or in nearby areas of Massachusetts, New Hampshire, or New York. This data release contains the input water-quality and discharge data, annual outputs (including concentrations, fluxes, yields, and confidence intervals about these estimates), statistical tests for trends between the periods of water years 1999-2000 and 2016-2018, and model diagnostic statistics. These datasets are organized into one zip file (WRTDSeLists.zip) and six comma-separated values (csv) data files (StationInformation.csv, AnnualResults.csv, TrendResults.csv, ModelStatistics.csv, InputWaterQuality.csv, and InputStreamflow.csv). The csv file (StationInformation.csv) contains information about the stations and input datasets. Finally, a short R script (SampleScript.R) is included to facilitate viewing the input and output data and to re-run the model. Reference: Hirsch, R.M., Moyer, D.L., and Archfield, S.A., 2010, Weighted Regressions on Time, Discharge, and Season (WRTDS), with an application to Chesapeake Bay River inputs: Journal of the American Water Resources Association, v. 46, no. 5, p. 857–880.
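    A hedged sketch of how the annual outputs might be explored in R (the column names below are assumptions, not taken from the release documentation; SampleScript.R in the release is the authoritative starting point):

    # Minimal sketch: read the annual WRTDS outputs and plot total nitrogen
    # flux for one station. Column names (Station, Year, FluxTN) are
    # assumptions; check StationInformation.csv for the real ones.
    annual <- read.csv("AnnualResults.csv", stringsAsFactors = FALSE)
    info <- read.csv("StationInformation.csv", stringsAsFactors = FALSE)

    one <- subset(annual, Station == info$Station[1])
    plot(one$Year, one$FluxTN, type = "b",
         xlab = "Water year", ylab = "Total nitrogen flux")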

  2. Data from: Export unit value across markets: dampened by export subsidies

    • figshare.com
    application/gzip
    Updated Oct 26, 2022
    Cite
    Aadil Nakhoda (2022). Export unit value across markets: dampened by export subsidies [Dataset]. http://doi.org/10.6084/m9.figshare.21353661.v3
    Explore at:
    Available download formats: application/gzip
    Dataset updated
    Oct 26, 2022
    Dataset provided by
    figshare
    Authors
    Aadil Nakhoda
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description
    1. The programming language used is R. Please download and install R and RStudio.
    2. The code is saved in an R file named ‘export subsidies R code’. Please open it.
    3. Please set the working directory.
    4. You will need to make sure that you have the following files before you run the code; these should be saved in the working directory. Data files for the tables: (i) exportdata_used1.RDS (ii) export_data_prod_level.RDS (iii) export_data_kg_unit.RDS. Data files for the figures: (i) data_fig1.RDS (ii) data_fig2.RDS (iii) data_fig3.RDS (iv) data_fig4_5.RDS (v) data_fig6_7.RDS (vi) data_fig8.RDS
    5. Please make sure all the required packages are installed. (A minimal R sketch of steps 3-5 follows.)
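    A minimal sketch of the setup steps above, assuming the working-directory path shown is a placeholder for wherever you saved the files:

    # Steps 3-5: set the working directory, check the inputs, load the .RDS files.
    setwd("~/export-subsidies")  # placeholder path; point this at your own copy

    table_files <- c("exportdata_used1.RDS", "export_data_prod_level.RDS",
                     "export_data_kg_unit.RDS")
    figure_files <- c("data_fig1.RDS", "data_fig2.RDS", "data_fig3.RDS",
                      "data_fig4_5.RDS", "data_fig6_7.RDS", "data_fig8.RDS")

    stopifnot(file.exists(c(table_files, figure_files)))  # all inputs present?
    tables <- lapply(table_files, readRDS)
    figures <- lapply(figure_files, readRDS)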
  3. Test data and model for the FlowCam data processing pipeline

    • zenodo.org
    Updated Mar 22, 2025
    Cite
    Anonymous; Anonymous (2025). Test data and model for the FlowCam data processing pipeline [Dataset]. http://doi.org/10.5281/zenodo.14832978
    Explore at:
    Dataset updated
    Mar 22, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Anonymous; Anonymous
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Testing data for the processing pipeline for FlowCam data

    The data are fully processed but can be used to test each pipeline component. You can download the scripts at https://anonymous.4open.science/r/LabelChecker_Pipeline-2539/README.md

    LabelChecker software can be downloaded from https://anonymous.4open.science/r/LabelChecker-4312/README.md

    Pipeline scripts

    To use the model, unzip the freshwater_phytoplankton_model.zip and place the folder in the respective model folder in the services.

    |-- services
        |-- ProcessData.py
        |-- config.py
        |-- classification
            |-- ObjectClassification
                |-- models
                    |-- ...

    Once you unzip the data.zip file, each folder corresponds to the data export of a FlowCam run. You have the TIF collage files, a CSV file with the sample name containing all the parameters measured by the FlowCam, and a LabelChecker_ CSV file.

    You can run the preprocessing.py script directly on the files by including the -R (reprocess) argument. Otherwise you can do it by removing the LabelChecker CSV from the folders. The PreprocessingTrue column will remain the same.

    When running the classification.py script you can get new predictions on the data. In this case, only the LabelPredicted column will be updated and the validated labels (LabelTrue column) will not be lost.

    You could also use these files to try out the train_model.ipynb, although the resulting model will not be very good with so little data. We recommend trying it with your own data.

    LabelChecker

    These files can be used to test LabelChecker. You can open them one by one or all together and try all functionalities. We provide a label_file.csv but you can also make your own.

  4. Data supporting the Master thesis "Monitoring von Open Data Praktiken - Herausforderungen beim Auffinden von Datenpublikationen am Beispiel der Publikationen von Forschenden der TU Dresden"

    • zenodo.org
    zip
    Updated Nov 21, 2024
    Cite
    Katharina Zinke (2024). Data supporting the Master thesis "Monitoring von Open Data Praktiken - Herausforderungen beim Auffinden von Datenpublikationen am Beispiel der Publikationen von Forschenden der TU Dresden" [Dataset]. http://doi.org/10.5281/zenodo.14196539
    Explore at:
    Available download formats: zip
    Dataset updated
    Nov 21, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Katharina Zinke
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data supporting the Master thesis "Monitoring von Open Data Praktiken - Herausforderungen beim Auffinden von Datenpublikationen am Beispiel der Publikationen von Forschenden der TU Dresden" (Monitoring open data practices - challenges in finding data publications using the example of publications by researchers at TU Dresden) - Katharina Zinke, Institut für Bibliotheks- und Informationswissenschaften, Humboldt-Universität Berlin, 2023

    This ZIP file contains the data the thesis is based on, interim exports of the results, and the R script with all pre-processing, data merging, and analyses carried out. The documentation of the additional, explorative analysis is also available. The actual PDFs and text files of the scientific papers used are not included, as they are published open access.

    The folder structure is shown below with the file names and a brief description of the contents of each file. For details concerning the analysis approach, please refer to the master's thesis (publication following soon).

    ## Data sources

    Folder 01_SourceData/

    - PLOS-Dataset_v2_Mar23.csv (PLOS-OSI dataset)

    - ScopusSearch_ExportResults.csv (export of Scopus search results from Scopus)

    - ScopusSearch_ExportResults.ris (export of Scopus search results from Scopus)

    - Zotero_Export_ScopusSearch.csv (export of the file names and DOIs of the Scopus search results from Zotero)

    ## Automatic classification

    Folder 02_AutomaticClassification/

    - (NOT INCLUDED) PDFs folder (Folder for PDFs of all publications identified by the Scopus search, named AuthorLastName_Year_PublicationTitle_Title)

    - (NOT INCLUDED) PDFs_to_text folder (Folder for all texts extracted from the PDFs by ODDPub, named AuthorLastName_Year_PublicationTitle_Title)

    - PLOS_ScopusSearch_matched.csv (merge of the Scopus search results with the PLOS_OSI dataset for the files contained in both)

    - oddpub_results_wDOIs.csv (results file of the ODDPub classification)

    - PLOS_ODDPub.csv (merge of the results file of the ODDPub classification with the PLOS-OSI dataset for the publications contained in both)

    ## Manual coding

    Folder 03_ManualCheck/

    - CodeSheet_ManualCheck.txt (Code sheet with descriptions of the variables for manual coding)

    - ManualCheck_2023-06-08.csv (Manual coding results file)

    - PLOS_ODDPub_Manual.csv (Merge of the results file of the ODDPub and PLOS-OSI classification with the results file of the manual coding)

    ## Explorative analysis for the discoverability of open data

    Folder 04_FurtherAnalyses/

    - Proof_of_of_Concept_Open_Data_Monitoring.pdf (Description of the explorative analysis of the discoverability of open data publications using the example of a researcher; in German)

    ## R-Script

    Analyses_MA_OpenDataMonitoring.R (R-Script for preparing, merging and analyzing the data and for performing the ODDPub algorithm)
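    As a hedged illustration, the merged classification results could be inspected in R like this (the column names are assumptions; CodeSheet_ManualCheck.txt documents the real variables):

    # Sketch: cross-tabulate the automatic ODDPub classification against the
    # manual coding. Column names (is_open_data, manual_open_data) are assumed.
    merged <- read.csv("03_ManualCheck/PLOS_ODDPub_Manual.csv",
                       stringsAsFactors = FALSE)
    table(ODDPub = merged$is_open_data, Manual = merged$manual_open_data)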

  5. Raw Temperature Data from the Antarctic Peninsula acquired during R/V Laurence M. Gould expedition LMG1901 (2018)

    • marine-geo.org
    txt
    Updated 2021
    + more versions
    Cite
    William Fraser (2021). Raw Temperature Data from the Antarctic Peninsula acquired during R/V Laurence M. Gould expedition LMG1901 (2018) [Dataset]. http://doi.org/10.1594/IEDA/324747
    Explore at:
    Available download formats: txt
    Dataset updated
    2021
    Dataset provided by
    Marine Geoscience Data System (MGDS)
    Authors
    William Fraser
    License

    Attribution-NonCommercial-ShareAlike 3.0 (CC BY-NC-SA 3.0): https://creativecommons.org/licenses/by-nc-sa/3.0/
    License information was derived automatically

    Area covered
    Description

    Abstract: This data set was acquired with a Sippican MK21 Expendable BathyThermograph during R/V Laurence M. Gould expedition LMG1901 conducted in 2018 (Chief Scientist: Dr. William Fraser, Investigator: Dr. William Fraser). These data files are of Sippican MK21 Export Data File format and include Temperature data that have not been processed. Data were acquired as part of the project(s): LTER Palmer, Antarctica (PAL): Land-Shelf-Ocean Connectivity, Ecosystem Resilience and Transformation in a Sea-Ice Influenced Pelagic Ecosystem. Funding was provided by NSF award(s): PLR14-40435.

  6. Data from: Commercial harvest and export of snapping turtles (Chelydra serpentina) in the United States: trends and the efficacy of size limits at reducing harvest

    • datadryad.org
    • search.dataone.org
    zip
    Updated Nov 17, 2017
    Cite
    Benjamin C. Colteaux; Derek M. Johnson (2017). Commercial harvest and export of snapping turtles (Chelydra serpentina) in the United States: trends and the efficacy of size limits at reducing harvest [Dataset]. http://doi.org/10.5061/dryad.j5v05
    Explore at:
    Available download formats: zip
    Dataset updated
    Nov 17, 2017
    Dataset provided by
    Dryad
    Authors
    Benjamin C. Colteaux; Derek M. Johnson
    Time period covered
    Nov 16, 2016
    Area covered
    United States
    Description

    State Harvest Data (csv): Commercial snapping turtle harvest data (in individuals) for eleven states from 1998 - 2013. States reporting are Arkansas, Delaware, Iowa, Maryland, Massachusetts, Michigan, Minnesota, New Jersey, North Carolina, Pennsylvania, and Virginia. File: StateHarvestData.csv

    Input and execution code for Colteaux_Johnson_2016: Attached R file includes the code described in the listed publication. The companion JAGS (just another Gibbs sampler) code is also stored in this repository under separate cover. File: ColteauxJohnsonNatureConservation.R

    JAGS model code for Colteaux_Johnson_2016: Attached R file includes the JAGS (just another Gibbs sampler) code described in the listed publication. The companion input and execution code is also stored in this repository under separate cover. File: ColteauxJohnsonNatureConservationJAGS.R
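    A hedged sketch of how the harvest data might be summarized in R (the column names are assumptions about the CSV layout; the JAGS model itself is run from the R files above):

    # Sketch: total reported snapping turtle harvest by year across states.
    # Column names (State, Year, Harvest) are assumptions.
    harvest <- read.csv("StateHarvestData.csv", stringsAsFactors = FALSE)
    totals <- aggregate(Harvest ~ Year, data = harvest, FUN = sum)
    plot(totals$Year, totals$Harvest, type = "b",
         xlab = "Year", ylab = "Reported harvest (individuals)")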

  7. 2007-08 V3 CEAMARC-CASO Bathymetry Plots Over Time During Events

    • researchdata.edu.au
    Updated Jun 23, 2025
    + more versions
    Cite
    Australian Ocean Data Network (2025). 2007-08 V3 CEAMARC-CASO Bathymetry Plots Over Time During Events [Dataset]. https://researchdata.edu.au/2007-08-v3-during-events/3718183
    Explore at:
    Dataset updated
    Jun 23, 2025
    Dataset provided by
    Data.gov (https://data.gov/)
    Authors
    Australian Ocean Data Network
    Area covered
    Description

    A routine was developed in R ('bathy_plots.R') to plot bathymetry data over time during individual CEAMARC events, so that benthic data can be analysed in relation to habitat, i.e., did we trawl over a slope or was the sea floor relatively flat. Note that the depth range in the plots is autoscaled to the data, so a small range in depths appears as a scattering of points; as long as you check the depth scale, interpretation will be fine.

    The R files need a file of bathymetry data, '200708V3_one_minute.csv', which contains a data export from the underway PostgreSQL ship database, and 'events.csv', which is a stripped-down version of the events export from the shipboard events database. If you wish to run the code again you may need to change the pathnames in the R script to relevant locations. If you have opened the csv files in Excel at any stage and the R script gets an error, you may need to format the date/time columns as yyyy-mm-dd hh:mm:ss, save and close the file as csv without opening it again, and then run the R script.

    However, all output files are provided here for every CEAMARC event. Filenames contain a reference to the CEAMARC event id. Files are in eps format and can be viewed using Ghostview, which is available as a free download on the internet.
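    A hedged sketch of the kind of plot 'bathy_plots.R' produces (the column names and the start/end fields are assumptions about the two csv exports):

    # Sketch: plot depth over time for a single CEAMARC event.
    # Column names (date_time, depth, event_id, start_time, end_time) are assumed.
    bathy <- read.csv("200708V3_one_minute.csv", stringsAsFactors = FALSE)
    events <- read.csv("events.csv", stringsAsFactors = FALSE)

    fmt <- "%Y-%m-%d %H:%M:%S"
    bathy$date_time <- as.POSIXct(bathy$date_time, format = fmt, tz = "UTC")
    ev <- events[1, ]
    during <- subset(bathy,
                     date_time >= as.POSIXct(ev$start_time, format = fmt, tz = "UTC") &
                     date_time <= as.POSIXct(ev$end_time, format = fmt, tz = "UTC"))
    plot(during$date_time, -abs(during$depth), type = "l",
         xlab = "Time", ylab = "Depth (m)",
         main = paste("CEAMARC event", ev$event_id))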

  8. Replication Package: Unboxing Default Argument Breaking Changes in 1 + 2 Data Science Libraries in Python

    • zenodo.org
    application/gzip
    Updated Jul 15, 2024
    Cite
    João Eduardo Montandon; Luciana Lourdes Silva; Cristiano Politowski; Daniel Prates; Arthur Bonifácio; Ghizlane El Boussaidi (2024). Replication Package: Unboxing Default Argument Breaking Changes in 1 + 2 Data Science Libraries in Python [Dataset]. http://doi.org/10.5281/zenodo.11584961
    Explore at:
    Available download formats: application/gzip
    Dataset updated
    Jul 15, 2024
    Dataset provided by
    Zenodo
    Authors
    João Eduardo Montandon; Luciana Lourdes Silva; Cristiano Politowski; Daniel Prates; Arthur Bonifácio; Ghizlane El Boussaidi
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Replication Package

    This repository contains data and source files needed to replicate our work described in the paper "Unboxing Default Argument Breaking Changes in Scikit Learn".

    Requirements

    We recommend the following setup to replicate our study:

    1. Internet access
    2. At least 100GB of space
    3. Docker installed
    4. Git installed

    Package Structure

    We relied on Docker containers to provide a working environment that is easier to replicate. Specifically, we configure the following containers:

    • data-analysis, an R-based Container we used to run our data analysis.
    • data-collection, a Python Container we used to collect Scikit's default arguments and detect them in client applications.
    • database, a Postgres Container we used to store clients' data, obtained from Grotov et al.
    • storage, a directory used to store the data processed in data-analysis and data-collection. This directory is shared in both containers.
    • docker-compose.yml, the Docker file that configures all containers used in the package.

    In the remainder of this document, we describe how to set up each container properly.

    Using VSCode to Set Up the Package

    We selected VSCode as the IDE of choice because its extensions allow us to implement our scripts directly inside the containers. In this package, we provide configuration parameters for both the data-analysis and data-collection containers, so you can access and run each container directly without any additional configuration.

    You first need to set up the containers:

    $ cd /replication/package/folder
    $ docker-compose build
    $ docker-compose up
    # Wait for Docker to create and run all containers
    

    Then, you can open them in Visual Studio Code:

    1. Open VSCode in project root folder
    2. Access the command palette and select "Dev Container: Reopen in Container"
      1. Select either Data Collection or Data Analysis.
    3. Start working

    If you want/need a more customized organization, the remainder of this file describes it in detail.

    Longest Road: Manual Package Setup

    Database Setup

    The database container will automatically restore the dump in dump_matroskin.tar on its first launch. To set up and run the container, you should:

    Build an image:

    $ cd ./database
    $ docker build --tag 'dabc-database' .
    $ docker image ls
    REPOSITORY     TAG     IMAGE ID      CREATED         SIZE
    dabc-database  latest  b6f8af99c90d  50 minutes ago  18.5GB
    

    Create and enter inside the container:

    $ docker run -it --name dabc-database-1 dabc-database
    $ docker exec -it dabc-database-1 /bin/bash
    root# psql -U postgres -h localhost -d jupyter-notebooks
    jupyter-notebooks=# \dt
                 List of relations
     Schema |       Name        | Type  | Owner
    --------+-------------------+-------+-------
     public | Cell              | table | root
     public | Code_cell         | table | root
     public | Md_cell           | table | root
     public | Notebook          | table | root
     public | Notebook_features | table | root
     public | Notebook_metadata | table | root
     public | repository        | table | root
    

    If you got the tables list as above, your database is properly setup.

    It is important to mention that this database is extended from the one provided by Grotov et al. Basically, we added three columns to the table Notebook_features (API_functions_calls, defined_functions_calls, and other_functions_calls) containing the function calls performed by each client in the database.

    Data Collection Setup

    This container is responsible for collecting the data to answer our research questions. It has the following structure:

    • dabcs.py, extract DABCs from Scikit Learn source code, and export them to a CSV file.
    • dabcs-clients.py, extract function calls from clients and export them to a CSV file. We rely on a modified version of Matroskin to leverage the function calls. You can find the tool's source code in the matroskin directory.
    • Makefile, commands to set up and run both dabcs.py and dabcs-clients.py
    • matroskin, the directory containing the modified version of matroskin tool. We extended the library to collect the function calls performed on the client notebooks of Grotov's dataset.
    • storage, a docker volume where the data-collection should save the exported data. This data will be used later in Data Analysis.
    • requirements.txt, Python dependencies adopted in this module.

    Note that the container will automatically configure this module for you, e.g., install dependencies, configure matroskin, download scikit learn source code, etc. For this, you must run the following commands:

    $ cd ./data-collection
    $ docker build --tag "data-collection" .
    $ docker run -it -d --name data-collection-1 -v $(pwd)/:/data-collection -v $(pwd)/../storage/:/data-collection/storage/ data-collection
    $ docker exec -it data-collection-1 /bin/bash
    $ ls
    Dockerfile Makefile config.yml dabcs-clients.py dabcs.py matroskin storage requirements.txt utils.py
    

    If you see project files, it means the container is configured accordingly.

    Data Analysis Setup

    We use this container to conduct the analysis over the data produced by the Data Collection container. It has the following structure:

    • dependencies.R, an R script containing the dependencies used in our data analysis.
    • data-analysis.Rmd, the R notebook we used to perform our data analysis
    • datasets, a docker volume pointing to the storage directory.

    Execute the following commands to run this container:

    $ cd ./data-analysis
    $ docker build --tag "data-analysis" .
    $ docker run -it -d --name data-analysis-1 -v $(pwd)/:/data-analysis -v $(pwd)/../storage/:/data-collection/datasets/ data-analysis
    $ docker exec -it data-analysis-1 /bin/bash
    $ ls
    data-analysis.Rmd datasets dependencies.R Dockerfile figures Makefile
    

    If you see project files, it means the container is configured accordingly.

    A note on storage shared folder

    As mentioned, the storage folder is mounted as a volume and shared between the data-collection and data-analysis containers. We compressed the content of this folder due to space constraints. Therefore, before starting work on Data Collection or Data Analysis, make sure you have extracted the compressed files. You can do this by running the Makefile inside the storage folder.

    $ make unzip # extract files
    $ ls
    clients-dabcs.csv clients-validation.csv dabcs.csv Makefile scikit-learn-versions.csv versions.csv
    $ make zip # compress files
    $ ls
    csv-files.tar.gz Makefile
  9. Raw Temperature Data from the Antarctic Peninsula acquired during R/V Laurence M. Gould expedition LMG1603 (2016)

    • search.dataone.org
    • marine-geo.org
    • +1more
    Updated Mar 4, 2019
    + more versions
    Cite
    IEDA: Marine-Geo Digital Library (2019). Raw Temperature Data from the Antarctic Peninsula acquired during R/V Laurence M. Gould expedition LMG1603 (2016) [Dataset]. http://doi.org/10.1594/IEDA/323830
    Explore at:
    Dataset updated
    Mar 4, 2019
    Dataset provided by
    IEDA: Marine-Geo Digital Library
    Time period covered
    Mar 19, 2016 - Apr 13, 2016
    Area covered
    Description

    This data set was acquired with a Sippican MK21 Expendable CTD during R/V Laurence M. Gould expedition LMG1603 conducted in 2016 (Chief Scientist: USAP Marine Manager, Investigator: USAP Marine Manager). These data files are of Sippican MK21 Export Data File format and include Temperature data that have not been processed.

  10. Raw Temperature Data from the Antarctic Peninsula acquired during R/V Laurence M. Gould expedition LMG1801 (2017)

    • get.iedadata.org
    • search.dataone.org
    • +1more
    edf v.1, xml
    Updated Feb 12, 2020
    + more versions
    Cite
    default publisher (2020). Raw Temperature Data from the Antarctic Peninsula acquired during R/V Laurence M. Gould expedition LMG1801 (2017) [Dataset]. http://doi.org/10.1594/IEDA/324471
    Explore at:
    Available download formats: edf v.1, xml
    Dataset updated
    Feb 12, 2020
    Dataset provided by
    default publisher
    Area covered
    Description

    This data set was acquired with a Sippican MK21 Expendable BathyThermograph during R/V Laurence M. Gould expedition LMG1801 conducted in 2017 (Chief Scientist: Dr. William Fraser, Investigator: Dr. William Fraser). These data files are of Sippican MK21 Export Data File format and include Temperature data that have not been processed. Data were acquired as part of the project(s): LTER Palmer, Antarctica (PAL): Land-Shelf-Ocean Connectivity, Ecosystem Resilience and Transformation in a Sea-Ice Influenced Pelagic Ecosystem. Funding was provided by NSF award(s): PLR14-40435.

  11. COALMOD-World 2.0 data, results, figures for: Stranded Assets in the Coal Export Industry? The Case of the Australian Galilee Basin

    • zenodo.org
    bin, zip
    Updated Jan 13, 2023
    + more versions
    Cite
    Christian Hauenstein; Franziska Holz; Lennart Rathje; Thomas Mitterecker (2023). COALMOD-World 2.0 data, results, figures for: Stranded Assets in the Coal Export Industry? The Case of the Australian Galilee Basin [Dataset]. http://doi.org/10.5281/zenodo.6483464
    Explore at:
    Available download formats: bin, zip
    Dataset updated
    Jan 13, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Christian Hauenstein; Franziska Holz; Lennart Rathje; Thomas Mitterecker
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    World, Galilee Basin
    Description

    This dataset contains all COALMOD-World 2.0 data for Hauenstein et al. (in preparation): Stranded Assets in the Coal Export Industry? The Case of the Australian Galilee Basin.

    With the input data files and the GAMS scenario file the model (https://github.com/chauenstein/COALMOD-World_v2.0) can be run to reproduce the model results.

    Furthermore, the output.zip folder contains the results file, the R code to compile the figures, and PDFs of the figures.

  12. Global map of tree density

    • figshare.com
    zip
    Updated May 31, 2023
    Cite
    Crowther, T. W.; Glick, H. B.; Covey, K. R.; Bettigole, C.; Maynard, D. S.; Thomas, S. M.; Smith, J. R.; Hintler, G.; Duguid, M. C.; Amatulli, G.; Tuanmu, M. N.; Jetz, W.; Salas, C.; Stam, C.; Piotto, D.; Tavani, R.; Green, S.; Bruce, G.; Williams, S. J.; Wiser, S. K.; Huber, M. O.; Hengeveld, G. M.; Nabuurs, G. J.; Tikhonova, E.; Borchardt, P.; Li, C. F.; Powrie, L. W.; Fischer, M.; Hemp, A.; Homeier, J.; Cho, P.; Vibrans, A. C.; Umunay, P. M.; Piao, S. L.; Rowe, C. W.; Ashton, M. S.; Crane, P. R.; Bradford, M. A. (2023). Global map of tree density [Dataset]. http://doi.org/10.6084/m9.figshare.3179986.v2
    Explore at:
    Available download formats: zip
    Dataset updated
    May 31, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Crowther, T. W.; Glick, H. B.; Covey, K. R.; Bettigole, C.; Maynard, D. S.; Thomas, S. M.; Smith, J. R.; Hintler, G.; Duguid, M. C.; Amatulli, G.; Tuanmu, M. N.; Jetz, W.; Salas, C.; Stam, C.; Piotto, D.; Tavani, R.; Green, S.; Bruce, G.; Williams, S. J.; Wiser, S. K.; Huber, M. O.; Hengeveld, G. M.; Nabuurs, G. J.; Tikhonova, E.; Borchardt, P.; Li, C. F.; Powrie, L. W.; Fischer, M.; Hemp, A.; Homeier, J.; Cho, P.; Vibrans, A. C.; Umunay, P. M.; Piao, S. L.; Rowe, C. W.; Ashton, M. S.; Crane, P. R.; Bradford, M. A.
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Crowther_Nature_Files.zip: This description pertains to the original download. Details on revised (newer) versions of the datasets are listed below. When more than one version of a file exists in Figshare, the original DOI will take users to the latest version, though each version technically has its own DOI.

    Two global maps (raster files) of tree density. These maps highlight how the number of trees varies across the world. One map was generated using biome-level models of tree density, and applied at the biome scale. The other map was generated using ecoregion-level models of tree density, and applied at the ecoregion scale. For this reason, transitions between biomes or between ecoregions may be unrealistically harsh, but large-scale estimates are robust (see Crowther et al 2015 and Glick et al 2016). At the outset, this study was intended to generate reliable estimates at broad spatial scales, which inherently comes at the cost of fine-scale precision. For this reason, country-scale (or larger) estimates are generally more robust than individual pixel-level estimates. Additionally, due to data limitations, estimates for Mangroves and Tropical coniferous forest (as identified by WWF and TNC) were generated using models constructed from Tropical moist broadleaf forest data and Temperate coniferous forest data, respectively. Because we used ecological analogy, the estimates for these two biomes should be considered less reliable than those of other biomes. These two maps initially appeared in Crowther et al (2015), with the biome map being featured more prominently. Explicit publication of the data is associated with Glick et al (2016). As they are produced, updated versions of these datasets, as well as alternative formats, will be made available under Additional Versions (see below).

    Methods: We collected over 420,000 ground-sourced estimates of tree density from around the world. We then constructed linear regression models using vegetative, climatic, topographic, and anthropogenic variables to produce forest tree density estimates for all locations globally. All modeling was done in R. Mapping was done using R and ArcGIS 10.1.

    Viewing Instructions: Load the files into an appropriate geographic information system (GIS). For the original download (ArcGIS geodatabase files), load the files into ArcGIS to view or export the data to other formats. Because these datasets are large and have a unique coordinate system that is not read by many GIS, we suggest loading them into an ArcGIS dataframe whose coordinate system matches that of the data (see File Format). For GeoTiff files (see Additional Versions), load them into any compatible GIS or image management program.
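    For the GeoTiff versions, a lightweight alternative to a full GIS is reading the raster in R; a minimal sketch, assuming the terra package and a placeholder file name:

    # Sketch: quick look at the biome-level tree density GeoTiff.
    # "tree_density_biome.tif" is a placeholder; use the actual file name from
    # Crowther_Nature_Files_Revision_01_WGS84_GeoTiff.zip.
    library(terra)
    density <- rast("tree_density_biome.tif")
    print(density)  # extent, resolution, and coordinate system
    plot(density)   # quick-look map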

    Comments: The original download provides a zipped folder that contains (1) an ArcGIS File Geodatabase (.gdb) containing one raster file for each of the two global models of tree density - one based on biomes and one based on ecoregions; (2) a layer file (.lyr) for each of the global models with the symbology used for each respective model in Crowther et al (2015); and an ArcGIS Map Document (.mxd) that contains the layers and symbology for each map in the paper. The data is delivered in the Goode homolosine interrupted projected coordinate system that was used to compute biome, ecoregion, and global estimates of the number and density of trees presented in Crowther et al (2015). To obtain maps like those presented in the official publication, raster files will need to be reprojected to the Eckert III projected coordinate system. Details on subsequent revisions and alternative file formats are listed below under Additional Versions.

    Additional Versions: Crowther_Nature_Files_Revision_01.zip contains tree density predictions for small islands that are not included in the data available in the original dataset. These predictions were not taken into consideration in production of maps and figures presented in Crowther et al (2015), with the exception of the values presented in Supplemental Table 2. The file structure follows that of the original data and includes both biome- and ecoregion-level models.

    Crowther_Nature_Files_Revision_01_WGS84_GeoTiff.zip contains Revision_01 of the biome-level model, but stored in WGS84 and GeoTiff format. This file was produced by reprojecting the original Goode homolosine files to WGS84 using nearest neighbor resampling in ArcMap. All areal computations presented in the manuscript were computed using the Goode homolosine projection. This means that comparable computations made with projected versions of this WGS84 data are likely to differ (substantially at greater latitudes) as a product of the resampling. Included in this .zip file are the primary .tif and its visualization support files.

    References:

    Crowther, T. W., Glick, H. B., Covey, K. R., Bettigole, C., Maynard, D. S., Thomas, S. M., Smith, J. R., Hintler, G., Duguid, M. C., Amatulli, G., Tuanmu, M. N., Jetz, W., Salas, C., Stam, C., Piotto, D., Tavani, R., Green, S., Bruce, G., Williams, S. J., Wiser, S. K., Huber, M. O., Hengeveld, G. M., Nabuurs, G. J., Tikhonova, E., Borchardt, P., Li, C. F., Powrie, L. W., Fischer, M., Hemp, A., Homeier, J., Cho, P., Vibrans, A. C., Umunay, P. M., Piao, S. L., Rowe, C. W., Ashton, M. S., Crane, P. R., and Bradford, M. A. 2015. Mapping tree density at a global scale. Nature, 525(7568): 201-205. DOI: http://doi.org/10.1038/nature14967

    Glick, H. B., Bettigole, C. B., Maynard, D. S., Covey, K. R., Smith, J. R., and Crowther, T. W. 2016. Spatially explicit models of global tree density. Scientific Data, 3(160069). DOI: 10.1038/sdata.2016.69

  13. Data from: Composition of Foods Raw, Processed, Prepared USDA National Nutrient Database for Standard Reference, Release 28

    • agdatacommons.nal.usda.gov
    • datasetcatalog.nlm.nih.gov
    • +4more
    pdf
    Updated Apr 30, 2025
    + more versions
    Cite
    David B. Haytowitz; Jaspreet K.C. Ahuja; Bethany Showell; Meena Somanchi; Melissa Nickle; Quynh Anh Nguyen; Juhi R. Williams; Janet M. Roseland; Mona Khan; Kristine Y. Patterson; Jacob Exler; Shirley Wasswa-Kintu; Robin Thomas; Pamela R. Pehrsson (2025). Composition of Foods Raw, Processed, Prepared USDA National Nutrient Database for Standard Reference, Release 28 [Dataset]. http://doi.org/10.15482/USDA.ADC/1324304
    Explore at:
    Available download formats: pdf
    Dataset updated
    Apr 30, 2025
    Dataset provided by
    Agricultural Research Service (https://www.ars.usda.gov/)
    Authors
    David B. Haytowitz; Jaspreet K.C. Ahuja; Bethany Showell; Meena Somanchi; Melissa Nickle; Quynh Anh Nguyen; Juhi R. Williams; Janet M. Roseland; Mona Khan; Kristine Y. Patterson; Jacob Exler; Shirley Wasswa-Kintu; Robin Thomas; Pamela R. Pehrsson
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    [Note: Integrated as part of FoodData Central, April 2019.] The database consists of several sets of data: food descriptions, nutrients, weights and measures, footnotes, and sources of data. The Nutrient Data file contains mean nutrient values per 100 g of the edible portion of food, along with fields to further describe the mean value. Information is provided on household measures for food items. Weights are given for edible material without refuse. Footnotes are provided for a few items where information about food description, weights and measures, or nutrient values could not be accommodated in existing fields. Data have been compiled from published and unpublished sources. Published data sources include the scientific literature. Unpublished data include those obtained from the food industry, other government agencies, and research conducted under contracts initiated by USDA’s Agricultural Research Service (ARS). Updated data have been published electronically on the USDA Nutrient Data Laboratory (NDL) web site since 1992. Standard Reference (SR) 28 includes composition data for all the food groups and nutrients published in the 21 volumes of "Agriculture Handbook 8" (US Department of Agriculture 1976-92), and its four supplements (US Department of Agriculture 1990-93), which superseded the 1963 edition (Watt and Merrill, 1963). SR28 supersedes all previous releases, including the printed versions, in the event of any differences.

    Attribution for photos: Photo 1: k7246-9, copyright-free, public domain photo by Scott Bauer. Photo 2: k8234-2, copyright-free, public domain photo by Scott Bauer.

    Resources in this dataset:

    • READ ME - Documentation and User Guide - Composition of Foods Raw, Processed, Prepared - USDA National Nutrient Database for Standard Reference, Release 28. File Name: sr28_doc.pdf. Recommended software: Adobe Acrobat Reader, url: http://www.adobe.com/prodindex/acrobat/readstep.html

    • ASCII (6.0Mb; ISO/IEC 8859-1). File Name: sr28asc.zip. Delimited file suitable for importing into many programs. The tables are organized in a relational format, and can be used with a relational database management system (RDBMS), which will allow you to form your own queries and generate custom reports.

    • ACCESS (25.2Mb). File Name: sr28db.zip. This file contains the SR28 data imported into a Microsoft Access (2007 or later) database. It includes relationships between files and a few sample queries and reports.

    • ASCII (Abbreviated; 1.1Mb; ISO/IEC 8859-1). File Name: sr28abbr.zip. Delimited file suitable for importing into many programs. This file contains data for all food items in SR28, but not all nutrient values: starch, fluoride, betaine, vitamin D2 and D3, added vitamin E, added vitamin B12, alcohol, caffeine, theobromine, phytosterols, individual amino acids, individual fatty acids, and individual sugars are not included. These data are presented per 100 grams, edible portion. Up to two household measures are also provided, allowing the user to calculate the values per household measure, if desired.

    • Excel (Abbreviated; 2.9Mb). File Name: sr28abxl.zip. For use with Microsoft Excel (2007 or later), but can also be used by many other spreadsheet programs. This file contains data for all food items in SR28, but not all nutrient values (the same exclusions as the abbreviated ASCII file). These data are presented per 100 grams, edible portion. Up to two household measures are also provided, allowing the user to calculate the values per household measure, if desired. Recommended software: Microsoft Excel, url: https://www.microsoft.com/

    • ASCII (Update Files; 1.1Mb; ISO/IEC 8859-1). File Name: sr28upd.zip. Contains updates for those users who have loaded Release 27 into their own programs and wish to do their own updates. These files contain the updates between SR27 and SR28. Delimited file suitable for import into many programs.
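    The delimited ASCII files can be read with standard tools. A hedged R sketch (the caret/tilde delimiting convention and the illustrative column names are assumptions; sr28_doc.pdf is the authoritative format reference):

    # Sketch: read the SR28 food description table from the ASCII release,
    # assuming caret-delimited fields with tilde text qualifiers.
    food_des <- read.table("FOOD_DES.txt", sep = "^", quote = "~",
                           fileEncoding = "ISO-8859-1", stringsAsFactors = FALSE)
    names(food_des)[1:3] <- c("NDB_No", "FdGrp_Cd", "Long_Desc")  # illustrative
    head(food_des$Long_Desc)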

  14. Raw Temperature Data from the Rio Grande Rise acquired during R/V Nathaniel B. Palmer expedition NBP1808 (2018)

    • marine-geo.org
    • search.dataone.org
    txt
    Updated 2020
    Cite
    Anthony Koppers (2020). Raw Temperature Data from the Rio Grande Rise acquired during R/V Nathaniel B. Palmer expedition NBP1808 (2018) [Dataset]. http://doi.org/10.1594/IEDA/324651
    Explore at:
    Available download formats: txt
    Dataset updated
    2020
    Dataset provided by
    Marine Geoscience Data System (MGDS)
    Authors
    Anthony Koppers
    License

    Attribution-NonCommercial-ShareAlike 3.0 (CC BY-NC-SA 3.0): https://creativecommons.org/licenses/by-nc-sa/3.0/
    License information was derived automatically

    Area covered
    Description

    Abstract: This data set was acquired with a Sippican MK21 Expendable BathyThermograph during R/V Nathaniel B. Palmer expedition NBP1808 conducted in 2018 (Chief Scientist: Dr. Anthony Koppers, Investigator: Dr. Anthony Koppers). These data files are of Sippican MK21 Export Data File format and include Temperature data that have not been processed.

  15. Uncalibrated Hydrographic Data from the Antarctic Peninsula acquired during R/V Laurence M. Gould expedition LMG1612 (2016)

    • dx.doi.org
    • search.dataone.org
    • +2more
    txt
    Updated 2018
    Cite
    Elizabeth Shadwick (2018). Uncalibrated Hydrographic Data from the Antarctic Peninsula acquired during R/V Laurence M. Gould expedition LMG1612 (2016) [Dataset]. http://doi.org/10.1594/IEDA/324121
    Explore at:
    Available download formats: txt
    Dataset updated
    2018
    Dataset provided by
    Marine Geoscience Data System (MGDS)
    Authors
    Elizabeth Shadwick
    License

    Attribution-NonCommercial-ShareAlike 3.0 (CC BY-NC-SA 3.0): https://creativecommons.org/licenses/by-nc-sa/3.0/
    License information was derived automatically

    Area covered
    Description

    Abstract: This data set was acquired with a Sippican MK21 Expendable CTD during R/V Laurence M. Gould expedition LMG1612 conducted in 2016 (Chief Scientist: Dr. Elizabeth Shadwick, Investigator: Dr. Elizabeth Shadwick). These data files are of Sippican MK21 Export Data File format and include Temperature data that have not been processed. Data were acquired as part of the project(s): Resolving CO2 System Seasonality in the West Antarctic Peninsula with Autonomous Observations. Funding was provided by NSF award(s): PLR15-43380.

  16. Electronic Disclosure System - State and Local Election Funding and Donations

    • researchdata.edu.au
    • data.qld.gov.au
    • +1more
    Updated Jan 10, 2019
    Cite
    data.qld.gov.au (2019). Electronic Disclosure System - State and Local Election Funding and Donations [Dataset]. https://researchdata.edu.au/electronic-disclosure-state-funding-donations/1360703
    Explore at:
    Dataset updated
    Jan 10, 2019
    Dataset provided by
    Queensland Government (http://qld.gov.au/)
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Electoral Commission of Queensland is responsible for the Electronic Disclosure System (EDS), which provides real-time reporting of political donations. It aims to streamline the disclosure process while increasing transparency surrounding gifts.

    All entities conducting or supporting political activity in Queensland are required to submit a disclosure return to the Electoral Commission of Queensland. These include reporting of gifts and loans, as well as periodic reporting of other dealings such as advertising and expenditure. EDS makes these returns readily available to the public, providing faster and easier access to political financial disclosure information.

    The EDS is an outcome of the Electoral Commission of Queensland's ongoing commitment to the people of Queensland, to drive improvements to election services and meet changing community needs.

    To export the data from the EDS as a CSV file, consult this page: https://helpcentre.disclosures.ecq.qld.gov.au/hc/en-us/articles/115003351428-Can-I-export-the-data-I-can-see-in-the-map-

    For a detailed glossary of terms used by the EDS, please consult this page: https://helpcentre.disclosures.ecq.qld.gov.au/hc/en-us/articles/115002784587-Glossary-of-Terms-in-EDS

    For other information about how to use the EDS, please consult the FAQ page here: https://helpcentre.disclosures.ecq.qld.gov.au/hc/en-us/categories/115000599068-FAQs

  17. GAL Assessment Units 1000m 20160522 v01

    • researchdata.edu.au
    Updated Dec 7, 2018
    Cite
    Bioregional Assessment Program (2018). GAL Assessment Units 1000m 20160522 v01 [Dataset]. https://researchdata.edu.au/gal-assessment-units-20160522-v01/2989375
    Explore at:
    Dataset updated
    Dec 7, 2018
    Dataset provided by
    Data.gov (https://data.gov/)
    Authors
    Bioregional Assessment Program
    License

    Attribution 2.5 (CC BY 2.5): https://creativecommons.org/licenses/by/2.5/
    License information was derived automatically

    Description

    Abstract

    The dataset was derived by the Bioregional Assessment Programme from multiple source datasets. Links to the parent datasets are in the Lineage field of this metadata statement, and the History field describes how this dataset was derived.

    To ensure efficiency in processing speed and rendering, this is a clip of the vector reference grid for the GAL region. It was created by buffering the extent of the Hunter PAE by 50 km and then selecting all grid cells that intersect with that extent.

    The unique ID field for each grid cell is AUID, which starts from 1 in the reference grid. The grid also has column and row fields for easy reference. The grid is in Australian Albers (GDA94) (EPSG 3577).

    Purpose

    This is an attempt to standardise (where possible) outputs of models from BA assessments and is the template to be used for GAL (clipped from the whole-of-BA reference grid) for the groundwater and potentially surface water model outputs.

    Dataset History

    The minimum bounding geometry tool in ArcGIS 10.1 was used to return the extent of the bioregion boundary, which was then buffered with a 50 km radius. The select by location tool in ArcGIS 10.1 was then used to select all grid cells within the buffered extent, and an export of those grid cells was created to produce a rectangular reference grid of the GAL region.

    The file contains 2 shapefiles:

    1) The grid cells clipped to the boundary

    2) The boundary extents as a reference for the region

    Dataset Citation

    Bioregional Assessment Programme (XXXX) GAL Assessment Units 1000m 20160522 v01. Bioregional Assessment Derived Dataset. Viewed 12 December 2018, http://data.bioregionalassessments.gov.au/dataset/96dffeea-5208-4cfc-8c5d-408af9ac508e.

    Dataset Ancestors

    * Derived From BA ALL Assessment Units 1000m Reference 20160516_v01

    * Derived From BA ALL Assessment Units 1000m 'super set' 20160516_v01

  18. 2012 Economic Surveys: CF1200E1 | Exports Series: Shipment Characteristics by Commodity by Export Mode: 2012 and 2007 (ECNSVY Commodity Flow Survey Exports Series)

    • data.census.gov
    + more versions
    Cite
    ECN, 2012 Economic Surveys: CF1200E1 | Exports Series: Shipment Characteristics by Commodity by Export Mode: 2012 and 2007 (ECNSVY Commodity Flow Survey Exports Series) [Dataset]. https://data.census.gov/table/CFSEXPORT2012.CF1200E1
    Explore at:
    Dataset provided by
    United States Census Bureau (http://census.gov/)
    Authors
    ECN
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Time period covered
    2012
    Description

    Release Date: 2014-12-09

    Table Name: Exports Series: Shipment Characteristics by Commodity by Export Mode: 2012 and 2007

    Release Schedule: The data in this file are scheduled for release in December 2014.

    Key Table Information: None.

    Universe: The 2012 Commodity Flow Survey (CFS) covers business establishments with paid employees that are located in the United States and are classified using the 2007 North American Industry Classification System (NAICS) in mining, manufacturing, wholesale trade, and selected retail trade and services industries, namely, electronic shopping and mail-order houses, fuel dealers, and publishers. Establishments classified in transportation, construction, and all other retail and services industries are excluded from the survey. Farms, fisheries, foreign establishments, and most government-owned establishments are also excluded. The survey also covers auxiliary establishments (i.e., warehouses and managing offices) of multi-establishment companies. For the 2012 CFS, an advance survey (pre-canvass) of approximately 100,000 establishments was conducted to identify establishments with shipping activity and to try and obtain an accurate measure of their shipping activity. Surveyed establishments that indicated undertaking shipping activities and the non-respondents to the pre-canvass were included in the CFS sample universe.

    Geography Coverage: The data are shown at the U.S. level only.

    Industry Coverage: None.

    Data Items and Other Identifying Records: This file contains data on Value ($ Millions), Tons (Thousands), Ton-miles (Millions), percent change from 2007, and coefficient of variation or standard error for all of the above data items. The data are shown by commodity code (COMM) and export mode of transportation (XMODE).

    Sort Order: Data are presented in ascending YEAR by COMM by XMODE sequence.

    FTP Download: Download the entire table at Table E1 FTP.

    Contact Information: U.S. Census Bureau, Commodity Flow Survey. Tel: (301) 763-2108. Email: erd.cfs@census.gov

    The estimates presented are based on data from the 2012 and 2007 Commodity Flow Surveys (CFS) and supersede data previously released in the 2012 CFS Preliminary Report. These estimates only cover businesses with paid employees. All dollar values are expressed in current dollars relative to each sample year (2012 and 2007), i.e., they are based on price levels in effect at the time of each sample. Estimates may not be additive due to rounding.

    For information on Commodity Flow Survey geographies, including changes for 2012, see Census Geographies.

    Symbols:

    S - Estimate does not meet publication standards because of high sampling variability, poor response quality, or other concerns about the estimate quality. Unpublished estimates derived from this table by subtraction are subject to these same limitations and should not be attributed to the U.S. Census Bureau. For a description of publication standards and the total quantity response rate, see the link to the program methodology page.

    Z - Rounds to zero.

    X - Not applicable.

    For a complete list of all economic programs symbols, see the Symbols Glossary.

    Source: U.S. Department of Transportation, Bureau of Transportation Statistics and U.S. Census Bureau, 2012 Commodity Flow Survey.

    Note: The noise infusion data protection method has been applied to prevent data disclosure and to protect respondents' confidentiality. Estimates are based on a sample of establishments and are subject to both sampling and nonsampling error. Estimated measures of sampling variability are provided in the tables. For information on confidentiality protection, sampling error, and nonsampling error, see Survey Methodology.

    Commodity Code changes for the 2012 CFS:

    (CFS10) 07-R - Prior to the 2012 CFS, oils and fats treated for use as biodiesel were included in Commodity Code 07. In the 2012 CFS, oils and fats treated for use as biodiesel moved to Commodity Code 18.

    (CFS20) 08-R - Prior to the 2012 CFS, alcohols intended for use as fuel such as ethanol, although not specifically identified, were included in Commodity Code 08. In the 2012 CFS, ethanol moved to Commodity Code 17.

    (CFS30) 17-R - Prior to the 2012 CFS, fuel alcohols such as ethanol were included in Commodity Code 08, although not specifically identified. Also, kerosene was included in Commodity Code 19. In the 2012 CFS, ethanol, fuel alcohols, and kerosene moved to Commodity Code 17.

    (CFS40) 18-R - Prior to the 2012 CFS, biodiesel, although not specifically identified, was included in Commodity Code 07. In the 2012 CFS, biodiesel moved to Commodity Code 18.

    (CFS11) 074-R - Prior to the 2012 CFS, oils and fats treated for use as biodiesel were included in Commodity Code 074. In the 2012 CFS, oils and fats treated for use as biodiesel moved to Commodity Code 182.

    (CFS21) 083-R - Prior to the 2012 CFS, denatured alcohol of more than 80% by volume was included in Commodity Code 083. In the 2012 CFS, denatured alcohol of mor...

  19. ESG rating of general stock indices

    • narcis.nl
    • data.mendeley.com
    Updated Oct 22, 2021
    Cite
    Erhart, S (via Mendeley Data) (2021). ESG rating of general stock indices [Dataset]. http://doi.org/10.17632/58mwkj5pf8.1
    Explore at:
    Dataset updated
    Oct 22, 2021
    Dataset provided by
    Data Archiving and Networked Services (DANS)
    Authors
    Erhart, S (via Mendeley Data)
    Description
    The files have been created by Szilárd Erhart for the research Erhart (2021): ESG ratings of general stock exchange indices, International Review of Financial Analysis. Users of the files agree to quote the above paper.

    • The Python script (PYTHONESG_ERHART.TXT) helps users get tickers by stock exchange and extract ESG scores for the underlying stocks from Yahoo Finance.
    • The R script (ESG_UA.TXT) helps to replicate the Monte Carlo experiment detailed in the study.
    • The EXPORT_ALL CSV contains the downloaded ESG data (scores, controversies, etc.) organized by stocks and exchanges.

    Disclaimer: The author takes no responsibility for the timeliness, accuracy, completeness or quality of the information provided. The author is in no event liable for damages of any kind incurred or suffered as a result of the use or non-use of the information presented or the use of defective or incomplete information. The contents are subject to confirmation and not binding. The author expressly reserves the right to alter, amend, whole and in part, without prior notice, or to discontinue publication for a period of time or even completely.

    Read me: before using the Monte Carlo simulations script:

    (1) Copy the goascores.csv and goalscores_alt.csv files onto your own computer drive. The two files are identical.
    (2) Set the exact file location information in the 'Read in data' section of the Monte Carlo script and for the output files at the end of the script.
    (3) Load miscTools and matrixStats in your R application.
    (4) Run the code.
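    A hedged sketch of the kind of Monte Carlo weighting experiment the R script runs (the file layout, column meanings, and uniform random weights are assumptions; the study's actual design is in ESG_UA.TXT):

    # Sketch: Monte Carlo uncertainty analysis of composite ESG scores,
    # assuming goalscores_alt.csv holds one row per index and one numeric
    # column per pillar score. Both the layout and the uniform random
    # weights are illustrative assumptions.
    library(matrixStats)

    scores <- as.matrix(read.csv("goalscores_alt.csv", row.names = 1))
    n_sim <- 10000
    composite <- matrix(NA_real_, nrow = nrow(scores), ncol = n_sim)
    for (i in seq_len(n_sim)) {
      w <- runif(ncol(scores))
      composite[, i] <- scores %*% (w / sum(w))  # random weights summing to 1
    }
    cbind(mean = rowMeans(composite), sd = rowSds(composite))  # spread per index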
  20. Surface Water Disinfection Byproducts and Organic Matter Characterization Data Associated with: “Disinfection byproducts formed during drinking water treatment reveal an export control point for dissolved organic matter in a subalpine headwater stream”

    • data.nceas.ucsb.edu
    • osti.gov
    Updated Aug 20, 2024
    Cite
    Laura T. Leonard; Curtis A. Beutler; Rosalie Chu; Robert E. Danczak; Brieanne Forbes; Vanessa A. Garayburu-Caruso; Amy E. Goldman; Stephanie S. Lau; Sophia A. McKever; William A. Mitch; Alexander W. Newman; Lupita Renteria; Jason G. Toyoda; James C. Stegen; Gary F. Vanzin; Kenneth H. Williams; Jonathon O. Sharp (2024). Surface Water Disinfection Byproducts and Organic Matter Characterization Data Associated with: “Disinfection byproducts formed during drinking water treatment reveal an export control point for dissolved organic matter in a subalpine headwater stream” [Dataset]. http://doi.org/10.15485/1969118
    Explore at:
    Dataset updated
    Aug 20, 2024
    Dataset provided by
    ESS-DIVE
    Authors
    Laura T. Leonard; Curtis A. Beutler; Rosalie Chu; Robert E. Danczak; Brieanne Forbes; Vanessa A. Garayburu-Caruso; Amy E. Goldman; Stephanie S. Lau; Sophia A. McKever; William A. Mitch; Alexander W. Newman; Lupita Renteria; Jason G. Toyoda; James C. Stegen; Gary F. Vanzin; Kenneth H. Williams; Jonathon O. Sharp
    Time period covered
    Jul 30, 2020 - Jul 28, 2021
    Area covered
    Description

    This dataset is associated with the publication “Disinfection byproducts formed during drinking water treatment reveal an export control point for dissolved organic matter in a subalpine headwater stream” published in Water Research X (Leonard et al. 2022; https://doi.org/10.1016/j.wroa.2022.100144). The associated study analyzed temporal trends from the Town of Crested Butte water treatment facility and synoptic sampling at Coal Creek in Crested Butte, Colorado, US. This work demonstrates how drinking water quality archives combined with synoptic sampling and targeted analyses can be used to identify and understand export control points for dissolved organic matter.

    This dataset is comprised of one main data folder containing (1) file-level metadata; (2) data dictionary; (3) metadata and international geo-sample number (IGSN) mapping file; (4) dissolved organic carbon (DOC), ultraviolet absorbance at 254 nanometers (UV254), total nitrogen (TN), and specific ultraviolet absorbance (SUVA) data; (5) disinfection byproduct formation potential (DBP-FP) data; (6) readme; (7) methods codes; (8) water collection protocol; (9) folder of high resolution characterization of organic matter via 12 Tesla Fourier transform ion cyclotron resonance mass spectrometry (FTICR-MS) through the Environmental Molecular Sciences Laboratory (EMSL; https://www.pnnl.gov/environmental-molecular-sciences-laboratory); and (10) folder of excitation emissions matrix (EEM) spectra.

    The FTICR folder contains a file of DOC (measured as non-purgeable organic carbon; NPOC) used for FTICR sample preparation. The FTICR folder also contains three subfolders: one containing the raw .xml data files, one containing the processed data, and one containing instructions for using Formularity (https://omics.pnl.gov/software/formularity) and an R script to process the data based on the user's specific needs. All files are .csv, .pdf, .dat, .R, .ref, or .xml.
