3 datasets found
  1. Code and Data for "Green Growth in the Mirror of History: Long-Term Evidence on Decoupling Emissions from GDP"

    • zenodo.org
    • produccioncientifica.ugr.es
    zip
    Updated Oct 17, 2024
    Cite
    Juan Infante-Amate; Emiliano Travieso; Eduardo Aguilera (2024). Code and Data for "Green Growth in the Mirror of History: Long-Term Evidence on Decoupling Emissions from GDP" [Dataset]. http://doi.org/10.5281/zenodo.13944279
    Available download formats: zip
    Dataset updated: Oct 17, 2024
    Dataset provided by: Zenodo (http://zenodo.org/)
    Authors: Juan Infante-Amate; Emiliano Travieso; Eduardo Aguilera
    License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/

    Description

    The code and files in this repository allow reproduction of the results and figures from the paper “Green Growth in the Mirror of History: Long-Term Evidence on Decoupling Emissions from GDP” (submitted to Nature Communications). In particular, the code generates the historical database of gross domestic product (GDP), population, and greenhouse gas emissions (GHGe) at the national, regional, and global levels from 1820 to 2022. The repository also includes the code required for data processing and reproduces all figures and analyses in the manuscript.

    The repository is organized into a set of folders functioning as an R project. These folders are as follows:

    · data_input: Contains the databases and files, preprocessed by us, that are needed to run the code. Databases that we have not preprocessed are downloaded directly from their original sources via the links provided in the R scripts; these databases were created by other researchers, so please cite the original sources if you use them directly.

    · code: Contains R scripts used to process the datasets from data_input and to produce the figures. These scripts are:

    o common.R contains code that is sourced in the other scripts and does not need to be run by the user.

    o input_processing.R processes the data input to generate the main dataset saved in the folder data_output and used to make the figures.

    o figures.R contains the code that processes the main dataset to produce each figure.

    · data_output: Contains the datasets generated by input_processing.R that are used to produce the figures.

    · figures: Contains all the figures produced by figures.R. Note that some figures also underwent purely aesthetic edits in external software (Adobe Illustrator) for clarity purposes.
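
    Based on the folder and script layout described above, a minimal reproduction sketch in R might look as follows. This assumes the repository root is the working directory; the output file name in data_output is illustrative, not taken from the repository.

    # Minimal reproduction sketch (assumption: repository root is the working directory).
    # common.R is sourced by the other scripts, so it does not need to be run directly.
    source("code/input_processing.R")   # builds the main dataset and saves it to data_output/
    source("code/figures.R")            # processes the main dataset and writes the figures to figures/

    # Inspect the generated output, e.g. (file name illustrative):
    # main_db <- read.csv("data_output/main_dataset.csv")
    # head(main_db)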
  2. OpenStreetMaps raw data for Münster | gimi9.com

    • gimi9.com
    + more versions
    Cite
    OpenStreetMaps raw data for Münster | gimi9.com [Dataset]. https://gimi9.com/dataset/eu_e9be6594-2e67-4b8c-879b-90f4904c8f9a/
    License: Open Database License (ODbL) v1.0, https://www.opendatacommons.org/licenses/odbl/1.0/

    Description

    OpenStreetMap is a project founded in 2004 with the aim of creating a free world map. Volunteers from many countries work on the further development of the software as well as the collection and processing of geodata. Data are collected on roads, railways, rivers, forests, houses, and everything else that is commonly seen on maps. OpenStreetMap data may be used and processed free of charge as long as the source is mentioned (see also: https://www.openstreetmap.org/copyright). This dataset contains an excerpt from the OpenStreetMap planet file covering the administrative district of Münster. Other formats such as OSM-XML, shapefiles, SVG, Adobe Illustrator, Garmin GPS, GPX, GML, KML, Manifold GIS, or raster graphics can be exported via http://wiki.openstreetmap.org/wiki/Export. For questions about OpenStreetMap data, there is a German-speaking user forum: http://forum.openstreetmap.org/viewforum.php?id=14
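
    As an illustration of how such an extract can be loaded, here is a minimal sketch in R using the sf package. It assumes the download is provided as an .osm.pbf file and that the local GDAL build includes the OSM driver; the file name below is illustrative, not the actual download name.

    # Minimal sketch: load the Münster OSM extract with sf (file name illustrative).
    library(sf)

    pbf <- "muenster.osm.pbf"
    st_layers(pbf)                              # list the layers GDAL exposes (points, lines, multipolygons, ...)
    osm_lines <- st_read(pbf, layer = "lines")  # linear features such as roads, railways, rivers
    plot(st_geometry(osm_lines))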

  3. Evaluating the accuracy of binary classifiers for geomorphic applications by Rossi (2024) - Accuracy assessment software and figure generation

    • figshare.com
    zip
    Updated Apr 29, 2024
    Cite
    Matthew Rossi (2024). Evaluating the accuracy of binary classifiers for geomorphic applications by Rossi (2024) - Accuracy assessment software and figure generation [Dataset]. http://doi.org/10.6084/m9.figshare.23796024.v1
    Available download formats: zip
    Dataset updated: Apr 29, 2024
    Dataset provided by: figshare
    Authors: Matthew Rossi
    License: MIT License, https://opensource.org/licenses/MIT

    Description

    This dataset contains the data and scripts needed to reproduce the figures from "Evaluating the accuracy of binary classifiers for geomorphic applications", published in Earth Surface Dynamics (Rossi, 2024).

    Figure 1: elevation data were downloaded from OpenTopography (2010 Channel Islands Lidar Collection, 2012; Anderson et al., 2012; Reed, 2006). GIS files for the elevation data and transect locations are provided in the zipped geodatabase gis_fig1.gdb.zip.

    Figure 2: based on the bedrock mapping at site P01 from Rossi et al. (2020). GIS files for the 1-m slope, the air photo mapping, its conversion to a truth raster, and the accuracy classification using a 38-degree slope threshold are provided in the zipped geodatabase gis_fig2.gdb.zip.

    Figures 3-7: ultimately based on synthetic_feature_maps_main.py and synthetic_feature_maps_functions.py. The former uses the latter to plot example classified maps along with how accuracy scores vary as a function of feature fraction for a given set of user-defined input parameters; results are saved as a .csv file. Because these master scripts are designed for one set of input parameters, I provide a number of additional scripts (listed below) that aid in reproducing the figures shown in the manuscript.

    Figures 3a and 3c: can be reproduced with generate_fig3.py using input parameters l = 100, scl = 1, sflag = 2, and fmap = 0.5. This plots the 'match scene' scenario only; commented-out code also lets you plot the 'all feature' scenario.

    Figures 3b and 3d: can be reproduced with generate_fig3.py using input parameters l = 100, scl = 10, sflag = 2, and fmap = 0.5. This plots the 'match scene' scenario only; commented-out code also lets you plot the 'all feature' scenario.

    Figure 4: can be reproduced with generate_Fig4.py, which uses saved results from synthetic_feature_maps_main.py stored in the folder results_rand_only.

    Figure 5: can be reproduced with generate_Fig5.py, which uses saved results from synthetic_feature_maps_main.py stored in the folder results_syst_only.

    Figure 6: can be reproduced with generate_Fig6.py, which uses saved results from synthetic_feature_maps_main.py stored in the folder results_rand_plus_syst.

    Figure 7: can be reproduced with generate_Fig7.py, which uses saved results from synthetic_feature_maps_main.py stored in the folders results_rand_only, results_syst_only, and results_rand_plus_syst.

    Figure 8: conceptual. Figs. 8a-b were drawn in Adobe Illustrator; the plot in Fig. 8c can be reproduced with generate_Fig8c.py and requires the associated file fig8_examples.txt.

    Figure 9: conceptual. Fig. 9a was drawn in Adobe Illustrator; the plot in Fig. 9b can be reproduced with generate_Fig9b.py. Because this script does not use saved results and runs the 'systematic error' scenario from scratch via synthetic_feature_maps_functions.py, it takes some time to run.

    Table 1: uses the data from the classified map in Fig. 2a and can be derived directly from eqs. 1-7.

    Table 2: requires merging two scenes with different feature fractions to produce an average feature fraction of 0.50. Each cell in the table can be calculated with generate_Table2_contents.py, which uses saved results from synthetic_feature_maps_main.py stored in the folders results_rand_only, results_syst_only, and results_rand_plus_syst.
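
    The paper's eqs. 1-7 are not reproduced in this description, so the following R sketch is an illustration only: it computes standard confusion-matrix accuracy scores for a binary truth/prediction pair of the kind evaluated here. The function, variable names, and toy data are made up for the example and are not part of the dataset.

    # Illustrative only: standard confusion-matrix scores for a binary classifier.
    accuracy_scores <- function(truth, pred) {
      tp <- sum(truth == 1 & pred == 1)   # true positives
      fp <- sum(truth == 0 & pred == 1)   # false positives
      fn <- sum(truth == 1 & pred == 0)   # false negatives
      tn <- sum(truth == 0 & pred == 0)   # true negatives
      c(accuracy  = (tp + tn) / (tp + fp + fn + tn),
        precision = tp / (tp + fp),
        recall    = tp / (tp + fn),
        f1        = 2 * tp / (2 * tp + fp + fn))
    }

    # Toy example: a synthetic truth map with feature fraction 0.5 and a noisy classification.
    set.seed(1)
    truth <- rbinom(100 * 100, 1, 0.5)
    pred  <- ifelse(runif(length(truth)) < 0.9, truth, 1 - truth)  # 10% of cells misclassified
    accuracy_scores(truth, pred)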
