27 datasets found
  1. Data from: Data to Assess Nitrogen Export from Forested Watersheds in and near the Long Island Sound Basin with Weighted Regressions on Time, Discharge, and Season (WRTDS)

    • catalog.data.gov
    • data.usgs.gov
    Updated Sep 12, 2025
    Cite
    U.S. Geological Survey (2025). Data to Assess Nitrogen Export from Forested Watersheds in and near the Long Island Sound Basin with Weighted Regressions on Time, Discharge, and Season (WRTDS) [Dataset]. https://catalog.data.gov/dataset/data-to-assess-nitrogen-export-from-forested-watersheds-in-and-near-the-long-island-sound-
    Explore at:
    Dataset updated
    Sep 12, 2025
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Area covered
    Long Island Sound, Long Island
    Description

    The U.S. Geological Survey, in cooperation with the U.S. Environmental Protection Agency's Long Island Sound Study (https://longislandsoundstudy.net), characterized nitrogen export from forested watersheds and whether nitrogen loading has been increasing or decreasing to help inform Long Island Sound management strategies. The Weighted Regressions on Time, Discharge, and Season (WRTDS; Hirsch and others, 2010) method was used to estimate annual concentrations and fluxes of nitrogen species using long-term records (14 to 37 years in length) of stream total nitrogen, dissolved organic nitrogen, nitrate, and ammonium concentrations and daily discharge data from 17 watersheds located in the Long Island Sound basin or in nearby areas of Massachusetts, New Hampshire, or New York. This data release contains the input water-quality and discharge data, annual outputs (including concentrations, fluxes, yields, and confidence intervals about these estimates), statistical tests for trends between the periods of water years 1999-2000 and 2016-2018, and model diagnostic statistics. These datasets are organized into one zip file (WRTDSeLists.zip) and six comma-separated values (csv) data files (StationInformation.csv, AnnualResults.csv, TrendResults.csv, ModelStatistics.csv, InputWaterQuality.csv, and InputStreamflow.csv). The csv file (StationInformation.csv) contains information about the stations and input datasets. Finally, a short R script (SampleScript.R) is included to facilitate viewing the input and output data and to re-run the model. Reference: Hirsch, R.M., Moyer, D.L., and Archfield, S.A., 2010, Weighted Regressions on Time, Discharge, and Season (WRTDS), with an application to Chesapeake Bay River inputs: Journal of the American Water Resources Association, v. 46, no. 5, p. 857–880.
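
    A minimal sketch of how such a WRTDS run can be reproduced with the USGS EGRET R package (which implements the Hirsch and others, 2010 method). This is an illustration, not the data release's SampleScript.R; the paths and read calls are assumptions.

    # Minimal WRTDS sketch using the EGRET package; file layout is assumed
    library(EGRET)

    Daily  <- readUserDaily("data", "InputStreamflow.csv")     # daily discharge
    Sample <- readUserSample("data", "InputWaterQuality.csv")  # concentration samples
    INFO   <- readUserInfo("data", "StationInformation.csv")   # station metadata

    eList <- mergeReport(INFO, Daily, Sample)  # bundle inputs into one EGRET object
    eList <- modelEstimation(eList)            # fit the WRTDS surfaces (can be slow)

    tableResults(eList)   # annual concentrations, fluxes, and yields
    plotConcHist(eList)   # long-term concentration history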

  2. Petre_Slide_CategoricalScatterplotFigShare.pptx

    • figshare.com
    pptx
    Updated Sep 19, 2016
    Cite
    Benj Petre; Aurore Coince; Sophien Kamoun (2016). Petre_Slide_CategoricalScatterplotFigShare.pptx [Dataset]. http://doi.org/10.6084/m9.figshare.3840102.v1
    Explore at:
    pptx
    Dataset updated
    Sep 19, 2016
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Benj Petre; Aurore Coince; Sophien Kamoun
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    Categorical scatterplots with R for biologists: a step-by-step guide

    Benjamin Petre1, Aurore Coince2, Sophien Kamoun1

    1 The Sainsbury Laboratory, Norwich, UK; 2 Earlham Institute, Norwich, UK

    Weissgerber and colleagues (2015) recently stated that ‘as scientists, we urgently need to change our practices for presenting continuous data in small sample size studies’. They called for more scatterplot and boxplot representations in scientific papers, which ‘allow readers to critically evaluate continuous data’ (Weissgerber et al., 2015). In the Kamoun Lab at The Sainsbury Laboratory, we recently implemented a protocol to generate categorical scatterplots (Petre et al., 2016; Dagdas et al., 2016). Here we describe the three steps of this protocol: 1) formatting of the data set in a .csv file, 2) execution of the R script to generate the graph, and 3) export of the graph as a .pdf file.

    Protocol

    • Step 1: format the data set as a .csv file. Store the data in a three-column Excel file as shown in the PowerPoint slide. The first column ‘Replicate’ indicates the biological replicates. In the example, the month and year during which the replicate was performed is indicated. The second column ‘Condition’ indicates the conditions of the experiment (in the example, a wild type and two mutants called A and B). The third column ‘Value’ contains the continuous values. Save the Excel file as a .csv file (File -> Save as -> in ‘File Format’, select .csv). This .csv file is the input file to import into R.

    • Step 2: execute the R script (see Notes 1 and 2; a sketch of the full script is given after the Notes). Copy the script shown on the PowerPoint slide and paste it into the R console. Execute the script. In the dialog box, select the input .csv file from step 1. The categorical scatterplot will appear in a separate window. Dots represent the values for each sample; colors indicate replicates. Boxplots are superimposed; black dots indicate outliers.

    • Step 3: save the graph as a .pdf file. Shape the window at your convenience and save the graph as a .pdf file (File -> Save as). See the PowerPoint slide for an example.

    Notes

    • Note 1: install the ggplot2 package. The R script requires the package ‘ggplot2’ to be installed. To install it, Packages & Data -> Package Installer -> enter ‘ggplot2’ in the Package Search space and click on ‘Get List’. Select ‘ggplot2’ in the Package column and click on ‘Install Selected’. Install all dependencies as well.

    • Note 2: use a log scale for the y-axis. To use a log scale for the y-axis of the graph, use the command line below in place of command line #7 in the script.

    # 7 Display the graph in a separate window. Dot colors indicate replicates
    graph + geom_boxplot(outlier.colour='black', colour='black') + geom_jitter(aes(col=Replicate)) + scale_y_log10() + theme_bw()
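
    The full script itself is shown on the PowerPoint slide. For readers working from this text alone, a minimal sketch consistent with the protocol's description (dialog-box file selection, jittered dots colored by replicate, superimposed boxplots) could be:

    # Load ggplot2 (see Note 1)
    library(ggplot2)
    # Select the input .csv file from step 1 in a dialog box
    # (columns: Replicate, Condition, Value)
    data <- read.csv(file.choose(), header = TRUE)
    # Treat replicates as categories rather than numbers
    data$Replicate <- as.factor(data$Replicate)
    # Build the base plot: conditions on the x-axis, values on the y-axis
    graph <- ggplot(data, aes(x = Condition, y = Value))
    # Display the graph: boxplots with jittered, replicate-colored dots
    graph + geom_boxplot(outlier.colour = 'black', colour = 'black') +
      geom_jitter(aes(col = Replicate)) + theme_bw()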

    References

    Dagdas YF, Belhaj K, Maqbool A, Chaparro-Garcia A, Pandey P, Petre B, et al. (2016) An effector of the Irish potato famine pathogen antagonizes a host autophagy cargo receptor. eLife 5:e10856.

    Petre B, Saunders DGO, Sklenar J, Lorrain C, Krasileva KV, Win J, et al. (2016) Heterologous Expression Screens in Nicotiana benthamiana Identify a Candidate Effector of the Wheat Yellow Rust Pathogen that Associates with Processing Bodies. PLoS ONE 11(2):e0149035.

    Weissgerber TL, Milic NM, Winham SJ, Garovic VD (2015) Beyond Bar and Line Graphs: Time for a New Data Presentation Paradigm. PLoS Biol 13(4):e1002128.

    https://cran.r-project.org/

    http://ggplot2.org/

  3. Data export CSV files from HDX Workbench, software platform for the analysis...

    • figshare.com
    txt
    Updated Oct 18, 2023
    Cite
    Christiane Brugger; Jacob Schwartz; Scott Novick; Song Tong; Joel Hoskins; Nadim Majdalani; Rebecca Kim; Martin Filipovski; Sue Wickner; Susan Gottesman; Patrick R. Griffin; Alexandra M. Deaconescu (2023). Data export CSV files from HDX Workbench, software platform for the analysis of hydrogen/deuterium exchange (HDX) mass spectrometry data. [Dataset]. http://doi.org/10.6084/m9.figshare.24329482.v1
    Explore at:
    txt
    Dataset updated
    Oct 18, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Christiane Brugger; Jacob Schwartz; Scott Novick; Song Tong; Joel Hoskins; Nadim Majdalani; Rebecca Kim; Martin Filipovski; Sue Wickner; Susan Gottesman; Patrick R. Griffin; Alexandra M. Deaconescu
    License

    MIT License (https://opensource.org/licenses/MIT)
    License information was derived automatically

    Description

    In enterobacteria such as Escherichia coli, the general stress response is mediated by σs, the stationary phase dissociable promoter specificity subunit of RNA polymerase. σs is degraded by ClpXP during active growth in a process dependent on the RssB adaptor, which is thought to be stimulated by phosphorylation of a conserved aspartate in its N-terminal receiver domain. Here we present the crystal structure of full-length RssB bound to a beryllofluoride phosphomimic. Compared to the structure of RssB bound to the IraD anti-adaptor, our new RssB structure with bound beryllofluoride reveals conformational differences and coil-to-helix transitions in the C-terminal region of the RssB receiver domain and in the inter-domain segmented helical linker. These are accompanied by masking of the α4-β5-α5 (4-5-5) “signaling” face of the RssB receiver domain by its C-terminal domain. Critically, using hydrogen-deuterium exchange mass spectrometry we identify σs binding determinants on the 4-5-5 face, implying that this surface needs to be unmasked to effect an interdomain interface switch and enable full σs engagement and hand-off to ClpXP. In activated receiver domains, the 4-5-5 face is often the locus of intermolecular interactions, but its masking by intramolecular contacts upon phosphorylation is unusual, emphasizing that RssB is a response regulator that undergoes atypical regulation.

    The files included are data exports from the HDX Workbench software from the HDX-MS experiments in support of this work. The files are in CSV format.

  4. Data from: Indoor air quality in California homes with code-required...

    • datadryad.org
    • data.niaid.nih.gov
    • +1more
    zip
    Updated Apr 22, 2020
    Cite
    Wanyu Chan; Yang-Seon Kim; William Delp; Iain Walker; Brett Singer (2020). Indoor air quality in California homes with code-required mechanical ventilation [Dataset]. http://doi.org/10.7941/D1ZS7X
    Explore at:
    zip
    Dataset updated
    Apr 22, 2020
    Dataset provided by
    Dryad
    Authors
    Wanyu Chan; Yang-Seon Kim; William Delp; Iain Walker; Brett Singer
    Time period covered
    Feb 7, 2020
    Area covered
    California
    Description

    Time Series Data Handling and Quality Assurance Review

    Most instruments logged data internally, and dedicated software was used to download the data from the field instruments as binary or ASCII/CSV files. The instruments whose files downloaded as binary provide software to view the data or export it to CSV files.

    One-minute resolution time-series data files were created for each house using an R script that pulled data from the CSV files, aligned data by time, executed unit conversions, and translated data from instruments with longer or different logging intervals (e.g., 30-minute formaldehyde data and 1.5-minute anemometer data). Visual review was conducted on the compiled files (and the primary CSV or binary files were consulted as needed) to check for translation or writing errors (especially from the terminal emulator), indications of instrument malfunction, mislabeled units or unit-conversion errors, mislabeled locations, and time-stamp errors.
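
    As an illustration of that alignment step only (the dataset's actual R script is not reproduced here), a minimal sketch with hypothetical file and column names:

    # Build a common 1-minute grid and align two instruments onto it.
    # File names, column names, and the date range are hypothetical.
    grid <- data.frame(time = seq(as.POSIXct("2019-06-01 00:00:00", tz = "UTC"),
                                  as.POSIXct("2019-06-02 00:00:00", tz = "UTC"),
                                  by = "1 min"))

    pm <- read.csv("pm25_1min.csv")               # 1-minute particle data
    pm$time <- as.POSIXct(pm$time, tz = "UTC")

    hcho <- read.csv("formaldehyde_30min.csv")    # 30-minute formaldehyde data
    hcho$time <- as.POSIXct(hcho$time, tz = "UTC")

    # Hold each 30-minute value constant across its interval on the 1-min grid
    grid$hcho_ppb <- approx(hcho$time, hcho$hcho_ppb, xout = grid$time,
                            method = "constant", rule = 2)$y

    out <- merge(grid, pm, by = "time", all.x = TRUE)
    write.csv(out, "house01_1min.csv", row.names = FALSE)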

    The draft final set of time-series data...

  5. Electronic Disclosure System - State and Local Election Funding and...

    • researchdata.edu.au
    • data.qld.gov.au
    • +1more
    Updated Jan 10, 2019
    Cite
    data.qld.gov.au (2019). Electronic Disclosure System - State and Local Election Funding and Donations [Dataset]. https://researchdata.edu.au/electronic-disclosure-state-funding-donations/1360703
    Explore at:
    Dataset updated
    Jan 10, 2019
    Dataset provided by
    Queensland Government (http://qld.gov.au/)
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    The Electoral Commission of Queensland is responsible for the Electronic Disclosure System (EDS), which provides real-time reporting of political donations. It aims to streamline the disclosure process while increasing transparency surrounding gifts.

    All entities conducting or supporting political activity in Queensland are required to submit a disclosure return to the Electoral Commission of Queensland. These include reporting of gifts and loans, as well as periodic reporting of other dealings such as advertising and expenditure. EDS makes these returns readily available to the public, providing faster and easier access to political financial disclosure information.

    The EDS is an outcome of the Electoral Commission of Queensland's ongoing commitment to the people of Queensland, to drive improvements to election services and meet changing community needs.

    To export the data from the EDS as a CSV file, consult this page: https://helpcentre.disclosures.ecq.qld.gov.au/hc/en-us/articles/115003351428-Can-I-export-the-data-I-can-see-in-the-map-

    For a detailed glossary of terms used by the EDS, please consult this page: https://helpcentre.disclosures.ecq.qld.gov.au/hc/en-us/articles/115002784587-Glossary-of-Terms-in-EDS

    For other information about how to use the EDS, please consult the FAQ page here: https://helpcentre.disclosures.ecq.qld.gov.au/hc/en-us/categories/115000599068-FAQs

  6. Data supporting the Master thesis "Monitoring von Open Data Praktiken -...

    • zenodo.org
    • data.niaid.nih.gov
    • +1more
    zip
    Updated Nov 21, 2024
    Cite
    Katharina Zinke (2024). Data supporting the Master thesis "Monitoring von Open Data Praktiken - Herausforderungen beim Auffinden von Datenpublikationen am Beispiel der Publikationen von Forschenden der TU Dresden" [Dataset]. http://doi.org/10.5281/zenodo.14196539
    Explore at:
    zip
    Dataset updated
    Nov 21, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Katharina Zinke
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Area covered
    Dresden
    Description

    Data supporting the Master thesis "Monitoring von Open Data Praktiken - Herausforderungen beim Auffinden von Datenpublikationen am Beispiel der Publikationen von Forschenden der TU Dresden" (Monitoring open data practices - challenges in finding data publications using the example of publications by researchers at TU Dresden) - Katharina Zinke, Institut für Bibliotheks- und Informationswissenschaften, Humboldt-Universität Berlin, 2023

    This ZIP-File contains the data the thesis is based on, interim exports of the results and the R script with all pre-processing, data merging and analyses carried out. The documentation of the additional, explorative analysis is also available. The actual PDFs and text files of the scientific papers used are not included as they are published open access.

    The folder structure is shown below with the file names and a brief description of the contents of each file. For details concerning the analysis approach, please refer to the master's thesis (publication following soon).

    ## Data sources

    Folder 01_SourceData/

    - PLOS-Dataset_v2_Mar23.csv (PLOS-OSI dataset)

    - ScopusSearch_ExportResults.csv (export of Scopus search results from Scopus)

    - ScopusSearch_ExportResults.ris (export of Scopus search results from Scopus)

    - Zotero_Export_ScopusSearch.csv (export of the file names and DOIs of the Scopus search results from Zotero)

    ## Automatic classification

    Folder 02_AutomaticClassification/

    - (NOT INCLUDED) PDFs folder (Folder for PDFs of all publications identified by the Scopus search, named AuthorLastName_Year_PublicationTitle_Title)

    - (NOT INCLUDED) PDFs_to_text folder (Folder for all texts extracted from the PDFs by ODDPub, named AuthorLastName_Year_PublicationTitle_Title)

    - PLOS_ScopusSearch_matched.csv (merge of the Scopus search results with the PLOS_OSI dataset for the files contained in both)

    - oddpub_results_wDOIs.csv (results file of the ODDPub classification)

    - PLOS_ODDPub.csv (merge of the results file of the ODDPub classification with the PLOS-OSI dataset for the publications contained in both)

    ## Manual coding

    Folder 03_ManualCheck/

    - CodeSheet_ManualCheck.txt (Code sheet with descriptions of the variables for manual coding)

    - ManualCheck_2023-06-08.csv (Manual coding results file)

    - PLOS_ODDPub_Manual.csv (Merge of the results file of the ODDPub and PLOS-OSI classification with the results file of the manual coding)

    ## Explorative analysis for the discoverability of open data

    Folder 04_FurtherAnalyses/

    Proof_of_of_Concept_Open_Data_Monitoring.pdf (Description of the explorative analysis of the discoverability of open data publications using the example of a researcher) - in German

    ## R-Script

    Analyses_MA_OpenDataMonitoring.R (R-Script for preparing, merging and analyzing the data and for performing the ODDPub algorithm)
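
    For orientation, the classification step that produced the files in 02_AutomaticClassification/ could be sketched as follows with the oddpub R package (https://github.com/quest-bih/oddpub). This is an illustrative reconstruction, not the thesis's actual Analyses_MA_OpenDataMonitoring.R, and the output file name is assumed.

    library(oddpub)

    # Convert the publication PDFs to plain text (PDFs -> PDFs_to_text)
    pdf_convert("02_AutomaticClassification/PDFs/",
                "02_AutomaticClassification/PDFs_to_text/")

    # Load the extracted texts and screen them for open data/code statements
    texts   <- pdf_load("02_AutomaticClassification/PDFs_to_text/")
    results <- open_data_search(texts)

    write.csv(results, "02_AutomaticClassification/oddpub_results.csv",
              row.names = FALSE)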

  7. QA/QC-ed Groundwater Level Time Series in PLM-1 and PLM-6 Monitoring Wells,...

    • dataone.org
    • knb.ecoinformatics.org
    • +1more
    Updated Feb 8, 2024
    Cite
    Boris Faybishenko; Roelof Versteeg; Kenneth Williams; Rosemary Carroll; Wenming Dong; Tetsu Tokunaga; Dylan O'Ryan (2024). QA/QC-ed Groundwater Level Time Series in PLM-1 and PLM-6 Monitoring Wells, East River, Colorado (2016-2022) [Dataset]. http://doi.org/10.15485/1866836
    Explore at:
    Dataset updated
    Feb 8, 2024
    Dataset provided by
    ESS-DIVE
    Authors
    Boris Faybishenko; Roelof Versteeg; Kenneth Williams; Rosemary Carroll; Wenming Dong; Tetsu Tokunaga; Dylan O'Ryan
    Time period covered
    Nov 30, 2016 - Oct 13, 2022
    Area covered
    Description

    This data set contains QA/QC-ed (Quality Assurance and Quality Control) water level data for the PLM1 and PLM6 wells. PLM1 and PLM6 are location identifiers used by the Watershed Function SFA project for two groundwater monitoring wells along an elevation gradient located along the lower montane life zone of a hillslope near the Pumphouse location at the East River Watershed, Colorado, USA. These wells are used to monitor subsurface water and carbon inventories and fluxes, and to determine the seasonally dependent flow of groundwater under the PLM hillslope. The downslope flow of groundwater, in combination with data on groundwater chemistry (see related references), can be used to estimate rates of solute export from the hillslope to the floodplain and river. QA/QC analysis of measured groundwater levels in monitoring wells PLM-1 and PLM-6 included identification and flagging of duplicated timestamp values, gap filling of missing timestamps and water levels, and removal of abnormal values and outliers from the measured water levels. The QA/QC analysis also tested the application of different QA/QC methods and developed regular (5-minute, 1-hour, and 1-day) time series datasets, which can serve as a benchmark for testing other QA/QC techniques and will be applicable for ecohydrological modeling.

    The package includes a Readme file, one R code file used to perform QA/QC, a series of 8 csv data files (six QA/QC-ed regular time series datasets of varying intervals (5-min, 1-hr, 1-day) and two files with QA/QC flagging of the original data), and three files for the reporting format adoption of this dataset (InstallationMethods, file-level metadata (flmd), and data dictionary (dd) files). The QA/QC-ed data herein were derived from the original/raw data publication available at Williams et al., 2020 (DOI: 10.15485/1818367). For more information about running the R code file (10.15485_1866836_QAQC_PLM1_PLM6.R) to reproduce the QA/QC output files, see the README (QAQC_PLM_readme.docx). This dataset replaces the previously published raw data time series and is the final groundwater data product for the PLM wells in the East River. Complete metadata information on the PLM1 and PLM6 wells is available in a related dataset on ESS-DIVE: Varadharajan C, et al. (2022), https://doi.org/10.15485/1660962. These data products are part of the Watershed Function Scientific Focus Area collection effort to further scientific understanding of biogeochemical dynamics from genome to watershed scales.

    2022/09/09 Update: Converted data files using ESS-DIVE's Hydrological Monitoring Reporting Format. With the adoption of this reporting format, three new files (v1_20220909_flmd.csv, v1_20220909_dd.csv, and InstallationMethods.csv) were added. The file-level metadata file (v1_20220909_flmd.csv) contains information specific to the files contained within the dataset. The data dictionary file (v1_20220909_dd.csv) contains definitions of column headers and other terms across the dataset. The installation methods file (InstallationMethods.csv) contains a description of methods associated with installation and deployment at the PLM1 and PLM6 wells. Additionally, eight data files were re-formatted to follow the reporting format guidance (er_plm1_waterlevel_2016-2020.csv, er_plm1_waterlevel_1-hour_2016-2020.csv, er_plm1_waterlevel_daily_2016-2020.csv, QA_PLM1_Flagging.csv, er_plm6_waterlevel_2016-2020.csv, er_plm6_waterlevel_1-hour_2016-2020.csv, er_plm6_waterlevel_daily_2016-2020.csv, QA_PLM6_Flagging.csv). The major changes to the data files include the addition of header rows above the data containing metadata about the particular well, units, and sensor description.

    2023/01/18 Update: Dataset updated to include additional QA/QC-ed water level data up until 2022-10-12 for ER-PLM1 and 2022-10-13 for ER-PLM6. Reporting-format-specific files (v2_20230118_flmd.csv, v2_20230118_dd.csv, v2_20230118_InstallationMethods.csv) were updated to reflect the additional data. An R code file (QAQC_PLM1_PLM6.R) was added to replace the previously uploaded HTML files and enable execution of the associated code. The R code file (QAQC_PLM1_PLM6.R) and ReadMe file (QAQC_PLM_readme.docx) were revised to clarify where the original data were retrieved from and to remove local file paths.
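
    For illustration, the kind of QA/QC regularization described above could be sketched as follows. This is not the dataset's 10.15485_1866836_QAQC_PLM1_PLM6.R script; file and column names are assumptions.

    # Sketch: flag duplicates and outliers, then rebuild a regular 5-min series
    wl <- read.csv("er_plm1_waterlevel_raw.csv")   # hypothetical input file
    wl$time <- as.POSIXct(wl$time, tz = "UTC")

    # Flag duplicated timestamps, then keep the first observation of each
    wl$flag_dup <- duplicated(wl$time)
    wl <- wl[!wl$flag_dup, ]

    # Flag outliers, e.g. beyond 3 standard deviations of the series
    z <- (wl$level_m - mean(wl$level_m, na.rm = TRUE)) / sd(wl$level_m, na.rm = TRUE)
    wl$flag_outlier <- abs(z) > 3
    wl$level_m[wl$flag_outlier] <- NA

    # Rebuild a regular 5-minute series, interpolating across the gaps
    grid <- seq(min(wl$time), max(wl$time), by = "5 min")
    out  <- data.frame(time = grid,
                       level_m = approx(wl$time, wl$level_m, xout = grid)$y)
    write.csv(out, "er_plm1_waterlevel_5min.csv", row.names = FALSE)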

  8. 2007-08 V3 CEAMARC-CASO Bathymetry Plots Over Time During Events | gimi9.com...

    • gimi9.com
    Updated Apr 20, 2008
    Cite
    (2008). 2007-08 V3 CEAMARC-CASO Bathymetry Plots Over Time During Events | gimi9.com [Dataset]. https://gimi9.com/dataset/au_2007-08-v3-ceamarc-caso-bathymetry-plots-over-time-during-events1/
    Explore at:
    Dataset updated
    Apr 20, 2008
    Description

    A routine was developed in R ('bathy_plots.R') to plot bathymetry data over time during individual CEAMARC events. This is so we can analyse benthic data in relation to habitat, i.e., did we trawl over a slope or was the sea floor relatively flat? Note that the depth range in the plots is autoscaled to the data, so a small range in depths appears as a scattering of points; as long as you look at the depth scale, interpretation will be fine. The R files need a file of bathymetry data, '200708V3_one_minute.csv', which contains a data export from the underway PostgreSQL ship database, and 'events.csv', which is a stripped-down version of the events export from the shipboard events database. If you wish to run the code again you may need to change the pathnames in the R script to relevant locations. If you have opened the csv files in Excel at any stage and the R script gets an error, you may need to format the date/time columns as yyyy-mm-dd hh:mm:ss, save and close the file as csv without opening it again, and then run the R script. However, all output files are here for every CEAMARC event. Filenames contain a reference to the CEAMARC event id. Files are in eps format and can be viewed using Ghostview, which is available as a free download on the internet.
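
    A minimal sketch of a routine like 'bathy_plots.R' is given below; column names and the event-window logic are assumptions, not the actual script.

    # Plot depth over time for each event, one EPS file per event
    bathy  <- read.csv("200708V3_one_minute.csv")
    events <- read.csv("events.csv")
    bathy$date_time <- as.POSIXct(bathy$date_time, tz = "UTC")  # assumed column
    events$start    <- as.POSIXct(events$start,    tz = "UTC")  # assumed column
    events$end      <- as.POSIXct(events$end,      tz = "UTC")  # assumed column

    for (i in seq_len(nrow(events))) {
      ev  <- events[i, ]
      sel <- bathy$date_time >= ev$start & bathy$date_time <= ev$end
      postscript(sprintf("bathy_event_%s.eps", ev$event_id))
      # Note: the y-axis autoscales to the depth range, as the description warns
      plot(bathy$date_time[sel], bathy$depth_m[sel], type = "l",
           xlab = "Time (UTC)", ylab = "Depth (m)",
           main = sprintf("CEAMARC event %s", ev$event_id))
      dev.off()
    }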

  9. Replication Package: Unboxing Default Argument Breaking Changes in 1 + 2...

    • zenodo.org
    application/gzip
    Updated Jul 15, 2024
    Cite
    João Eduardo Montandon; Luciana Lourdes Silva; Cristiano Politowski; Daniel Prates; Arthur Bonifácio; Ghizlane El Boussaidi (2024). Replication Package: Unboxing Default Argument Breaking Changes in 1 + 2 Data Science Libraries in Python [Dataset]. http://doi.org/10.5281/zenodo.11584961
    Explore at:
    application/gzip
    Dataset updated
    Jul 15, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    João Eduardo Montandon; Luciana Lourdes Silva; Cristiano Politowski; Daniel Prates; Arthur Bonifácio; Ghizlane El Boussaidi
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    Replication Package

    This repository contains data and source files needed to replicate our work described in the paper "Unboxing Default Argument Breaking Changes in Scikit Learn".

    Requirements

    We recommend the following requirements to replicate our study:

    1. Internet access
    2. At least 100GB of space
    3. Docker installed
    4. Git installed

    Package Structure

    We relied on Docker containers to provide a working environment that is easier to replicate. Specifically, we configure the following containers:

    • data-analysis, an R-based Container we used to run our data analysis.
    • data-collection, a Python Container we used to collect Scikit's default arguments and detect them in client applications.
    • database, a Postgres Container we used to store clients' data, obtained from Grotov et al.
    • storage, a directory used to store the data processed in data-analysis and data-collection. This directory is shared in both containers.
    • docker-compose.yml, the Docker file that configures all containers used in the package.

    In the remainder of this document, we describe how to set up each container properly.

    Using VSCode to Setup the Package

    We selected VSCode as the IDE of choice because its extensions allow us to implement our scripts directly inside the containers. In this package, we provide configuration parameters for both data-analysis and data-collection containers. This way you can directly access and run each container inside it without any specific configuration.

    You first need to set up the containers:

    $ cd /replication/package/folder
    $ docker-compose build
    $ docker-compose up
    # Wait for Docker to create and start all containers
    

    Then, you can open them in Visual Studio Code:

    1. Open VSCode in project root folder
    2. Access the command palette and select "Dev Container: Reopen in Container"
      1. Select either Data Collection or Data Analysis.
    3. Start working

    If you want/need a more customized organization, the remainder of this file describes it in detail.

    Longest Road: Manual Package Setup

    Database Setup

    The database container will automatically restore the dump in dump_matroskin.tar on its first launch. To set up and run the container, you should:

    Build an image:

    $ cd ./database
    $ docker build --tag 'dabc-database' .
    $ docker image ls
    REPOSITORY  TAG    IMAGE ID    CREATED     SIZE
    dabc-database latest  b6f8af99c90d  50 minutes ago  18.5GB
    

    Create and enter inside the container:

    $ docker run -it --name dabc-database-1 dabc-database
    $ docker exec -it dabc-database-1 /bin/bash
    root# psql -U postgres -h localhost -d jupyter-notebooks
    jupyter-notebooks=# \dt
           List of relations
     Schema |    Name    | Type | Owner
    --------+-------------------+-------+-------
     public | Cell       | table | root
     public | Code_cell     | table | root
     public | Md_cell      | table | root
     public | Notebook     | table | root
     public | Notebook_features | table | root
     public | Notebook_metadata | table | root
     public | repository    | table | root
    

    If you get the tables list as above, your database is properly set up.

    It is important to mention that this database is extended from the one provided by Grotov et al. Basically, we added three columns to the table Notebook_features (API_functions_calls, defined_functions_calls, and other_functions_calls) containing the function calls performed by each client in the database.

    Data Collection Setup

    This container is responsible for collecting the data to answer our research questions. It has the following structure:

    • dabcs.py, extract DABCs from Scikit Learn source code, and export them to a CSV file.
    • dabcs-clients.py, extract function calls from clients and export them to a CSV file. We rely on a modified version of Matroskin to collect the function calls. You can find the tool's source code in the `matroskin` directory.
    • Makefile, commands to set up and run both dabcs.py and dabcs-clients.py
    • matroskin, the directory containing the modified version of the matroskin tool. We extended the library to collect the function calls performed in the client notebooks of Grotov's dataset.
    • storage, a docker volume where the data-collection should save the exported data. This data will be used later in Data Analysis.
    • requirements.txt, Python dependencies adopted in this module.

    Note that the container will automatically configure this module for you, e.g., install dependencies, configure matroskin, download scikit learn source code, etc. For this, you must run the following commands:

    $ cd ./data-collection
    $ docker build --tag "data-collection" .
    $ docker run -it -d --name data-collection-1 -v $(pwd)/:/data-collection -v $(pwd)/../storage/:/data-collection/storage/ data-collection
    $ docker exec -it data-collection-1 /bin/bash
    $ ls
    Dockerfile Makefile config.yml dabcs-clients.py dabcs.py matroskin storage requirements.txt utils.py
    

    If you see project files, it means the container is configured accordingly.

    Data Analysis Setup

    We use this container to conduct the analysis over the data produced by the Data Collection container. It has the following structure:

    • dependencies.R, an R script containing the dependencies used in our data analysis.
    • data-analysis.Rmd, the R notebook we used to perform our data analysis
    • datasets, a docker volume pointing to the storage directory.

    Execute the following commands to run this container:

    $ cd ./data-analysis
    $ docker build --tag "data-analysis" .
    $ docker run -it -d --name data-analysis-1 -v $(pwd)/:/data-analysis -v $(pwd)/../storage/:/data-collection/datasets/ data-analysis
    $ docker exec -it data-analysis-1 /bin/bash
    $ ls
    data-analysis.Rmd datasets dependencies.R Dockerfile figures Makefile
    

    If you see project files, it means the container is configured accordingly.

    A note on storage shared folder

    As mentioned, the storage folder is mounted as a volume and shared between the data-collection and data-analysis containers. We compressed the content of this folder due to space constraints. Therefore, before you start working on Data Collection or Data Analysis, make sure you have extracted the compressed files. You can do this by running the Makefile inside the storage folder.

    $ make unzip # extract files
    $ ls
    clients-dabcs.csv clients-validation.csv dabcs.csv Makefile scikit-learn-versions.csv versions.csv
    $ make zip # compress files
    $ ls
    csv-files.tar.gz Makefile
  10. Raw data (CSVs and pipelines) for Cell Painting and beta catenin...

    • deepblue.lib.umich.edu
    Updated Jul 31, 2024
    Cite
    A. Tapaswi; N. Cemalovic; K. Polemi; J. Sexton; J. Colacino (2024). Raw data (CSVs and pipelines) for Cell Painting and beta catenin immunofluorescence in MCF10A cells exposed to common chemical exposures [Dataset]. http://doi.org/10.7302/seb7-cc14
    Explore at:
    Dataset updated
    Jul 31, 2024
    Dataset provided by
    Deep Blue Data
    Authors
    A. Tapaswi; N. Cemalovic; K. Polemi; J. Sexton; J. Colacino
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Time period covered
    Aug 13, 2020
    Description

    MCF10A non-tumorigenic breast cells were dosed with environmental toxicants and stained with multiple cellular stains to study morphological perturbations. Following up on feature results, MCF10A cells were stained with an anti-beta catenin antibody to study beta catenin nuclear translocation. CellProfiler software was used to measure and export per-cell data in .CSV format, to be further analyzed in BMDExpress2 and RStudio.

  11. ESG rating of general stock indices

    • narcis.nl
    • data.mendeley.com
    Updated Oct 22, 2021
    Cite
    Erhart, S (via Mendeley Data) (2021). ESG rating of general stock indices [Dataset]. http://doi.org/10.17632/58mwkj5pf8.1
    Explore at:
    Dataset updated
    Oct 22, 2021
    Dataset provided by
    Data Archiving and Networked Services (DANS)
    Authors
    Erhart, S (via Mendeley Data)
    Description
    The files have been created by Szilárd Erhart for the research Erhart (2021): ESG ratings of general stock exchange indices, International Review of Financial Analysis. Users of the files agree to quote the above paper.

    The Python script (PYTHONESG_ERHART.TXT) helps users get tickers by stock exchange and extract ESG scores for the underlying stocks from Yahoo Finance. The R script (ESG_UA.TXT) helps to replicate the Monte Carlo experiment detailed in the study. The EXPORT_ALL CSV contains the downloaded ESG data (scores, controversies, etc.) organized by stocks and exchanges.

    Disclaimer: The author takes no responsibility for the timeliness, accuracy, completeness or quality of the information provided. The author is in no event liable for damages of any kind incurred or suffered as a result of the use or non-use of the information presented or the use of defective or incomplete information. The contents are subject to confirmation and not binding. The author expressly reserves the right to alter or amend, in whole and in part, without prior notice, or to discontinue publication for a period of time or even completely.

    Read me: before using the Monte Carlo simulations script, (1) copy the goascores.csv and goalscores_alt.csv files onto your own computer drive (the two files are identical); (2) set the exact file location information in the 'Read in data' section of the Monte Carlo script and for the output files at the end of the script; (3) load miscTools and matrixStats in your R application; (4) run the code.
  12. Data from: Commercial harvest and export of snapping turtles (Chelydra...

    • datadryad.org
    • data-staging.niaid.nih.gov
    zip
    Updated Nov 17, 2017
    Cite
    Benjamin C. Colteaux; Derek M. Johnson (2017). Commercial harvest and export of snapping turtles (Chelydra serpentina) in the United States: trends and the efficacy of size limits at reducing harvest [Dataset]. http://doi.org/10.5061/dryad.j5v05
    Explore at:
    zip
    Dataset updated
    Nov 17, 2017
    Dataset provided by
    Dryad
    Authors
    Benjamin C. Colteaux; Derek M. Johnson
    Time period covered
    Nov 16, 2016
    Area covered
    United States
    Description

    State Harvest Data (StateHarvestData.csv): Commercial snapping turtle harvest data (in individuals) for eleven states from 1998 to 2013. States reporting are Arkansas, Delaware, Iowa, Maryland, Massachusetts, Michigan, Minnesota, New Jersey, North Carolina, Pennsylvania, and Virginia.

    Input and execution code for Colteaux_Johnson_2016 (ColteauxJohnsonNatureConservation.R): The attached R file includes the code described in the listed publication. The companion JAGS (Just Another Gibbs Sampler) code is also stored in this repository under separate cover.

    JAGS model code for Colteaux_Johnson_2016 (ColteauxJohnsonNatureConservationJAGS.R): The attached R file includes the JAGS (Just Another Gibbs Sampler) code described in the listed publication. The companion input and execution code is also stored in this repository under separate cover.

  13. Queensland Police Service Crime Statistics – Recorded Offences for the Gold...

    • researchdata.edu.au
    • cloud.csiss.gmu.edu
    • +2more
    Updated May 31, 2013
    Cite
    City of Gold Coast (2013). Queensland Police Service Crime Statistics – Recorded Offences for the Gold Coast [Dataset]. https://researchdata.edu.au/queensland-police-service-gold-coast/
    Explore at:
    Dataset updated
    May 31, 2013
    Dataset provided by
    Data.gov (https://data.gov/)
    Authors
    City of Gold Coast
    License

    Attribution 3.0 (CC BY 3.0) (https://creativecommons.org/licenses/by/3.0/)
    License information was derived automatically

    Area covered
    Description

    For updated crime statistics please refer to the Queensland Police Online Crime Maps website (http://www.police.qld.gov.au/online/crimemap/), which allows users to search on a range of variables and export data in CSV format under a Creative Commons Attribution Licence.

    The datasets published on this page have been provided by the Queensland Police Service under a Creative Commons Attribution 2.5 Australia Licence. To attribute this material, cite the Queensland Police Service.

  14. S1 Supporting information -

    • plos.figshare.com
    zip
    Updated Oct 28, 2024
    Cite
    Jens Winther Johannsen; Julian Laabs; Magdalena M. E. Bunbury; Morten Fischer Mortensen (2024). S1 Supporting information - [Dataset]. http://doi.org/10.1371/journal.pone.0301938.s001
    Explore at:
    zip
    Dataset updated
    Oct 28, 2024
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Jens Winther Johannsen; Julian Laabs; Magdalena M. E. Bunbury; Morten Fischer Mortensen
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    S1 File. SI_C01_SPD_KDE_models. R script for analysing radiocarbon dates. The code computes the over-regional and regional SPD and KDE models and exports them to CSV files (Rmd).

    S2 File. SI_C02_aoristic_dating. R script for exporting aoristic time series, derived from typochronologically dated archaeological material, as CSV files (Rmd).

    S3 File. SI_C03_vegetation_openness_score_example. R script computing a vegetation openness score from pollen records and exporting the generated time series as a CSV file (Rmd).

    S4 File. SI_C04_data_preparation. Jupyter Notebook that imports and transforms the relevant data for the plots exhibited in the paper (ipynb).

    S5 File. SI_C05_figures_extra. Jupyter Notebook visualizing the plots exhibited in the paper (ipynb).

    S1 Data. SI_D01_reg_data_no_dups. Spreadsheet holding radiocarbon dates, with laboratory identification, site name, geographical coordinates, site type, material, source and regional affiliation (csv).

    S2 Data. SI_D02_reg_axe_dagger_graves. Spreadsheet holding entries of axes and daggers, with context, site, parish, artefact identification, type, subtype, absolute dating, typochronological dating, references, geographical coordinates and regional affiliations (csv).

    S3 Data. SI_D03_pollen_example. Spreadsheet holding sample entries of the pollen records from Krageholm (Neotoma Site ID 3204) and Bjäresjöholmsjön (Neotoma Site ID 3017) for an example run of S3 File. Records can be accessed via the Neotoma Explorer (https://apps.neotomadb.org/explorer/) with their given IDs. Each entry holds the record type, regional affiliation, absolute BP and BCE dating, as well as the counts of given plant taxa (csv).

    S4 Data. SI_D04_PAP_303600_TOC_LOI. Table holding sample entries of TOC content, LOI and SST reconstruction of sediment core PAP_303600, for correlations of population development with Baltic sea surface temperature. Available via 10.1594/PANGAEA.883292 (tab).

    S5 Data. SI_D05_vos_[…]. Spreadsheets holding the vegetation openness score time series of lake Belau, Vinge, Northern Jutland and Zealand (csv).

    (ZIP)
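
    The SPD computation in S1 File could be sketched, for orientation, with the rcarbon package. This is an assumption-laden illustration (the column names c14age/c14error and the time range are invented), not the supplementary script itself.

    library(rcarbon)

    dates <- read.csv("SI_D01_reg_data_no_dups.csv")
    # Calibrate the uncalibrated 14C ages (hypothetical column names)
    cal <- calibrate(x = dates$c14age, errors = dates$c14error,
                     calCurves = "intcal20")
    # Summed probability distribution over an assumed window (cal BP)
    mySPD <- spd(cal, timeRange = c(6000, 3000))
    plot(mySPD)
    write.csv(mySPD$grid, "spd_overregional.csv", row.names = FALSE)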

  15. Data and script for the GenABEL paper

    • zenodo.org
    • data.niaid.nih.gov
    bin, csv
    Updated Jan 24, 2020
    Cite
    Lennart C. Karssen; Cornelia M. Van Duijn; Yurii S. Aulchenko (2020). Data and script for the GenABEL paper [Dataset]. http://doi.org/10.5281/zenodo.51008
    Explore at:
    csv, bin
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Lennart C. Karssen; Cornelia M. Van Duijn; Yurii S. Aulchenko
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0) (https://creativecommons.org/licenses/by-sa/4.0/)
    License information was derived automatically

    Description

    This dataset contains the automatically collected data used for the overview paper about the GenABEL Project (Karssen et al, 2016, DOI:10.12688/f1000research.8733.1). Some data used for the paper was collected manually and is therefore not included in this dataset.

    The file "tracker_report-2016-04-16.csv" is an export of the bug reports from the GenABEL R-forge bug tracker on the date listed in the file name.

    The file "Analytics www.genabel.org Locatie Lennart 20150428-20160428.csv" is a custom export of the Google Analytics data for visits to the GenABEL website (www.genabel.org) in the period marked by the dates listed in the file name. The columns contain the ISO code of the country, city, number of sessions, number of new viewers, bounce percentage, pages per session and average session duration, respectively.

    The file analysis_GenABELpaper.org contains the source code used for the automated data extraction for this paper, in Emacs Org mode literate programming format (http://orgmode.org; Schulte 2012, doi:10.18637/jss.v046.i03).

  16. Data from: Data & Scripts underlying the publication: Abiotic origins of...

    • data.4tu.nl
    zip
    Updated Nov 22, 2023
    Cite
    G.S. (Gregory) Fivash; Marte Stoorvogel; Jaco de Smit; Floris van Rees; Jeroen van Dalen; Tim Grandjean; Roeland van de Vijsel; T.J. (Tjeerd) Bouma; Stijn Temmerman; Jim van Belzen (2023). Data & Scripts underlying the publication: Abiotic origins of self-organized ridge-runnel patterns on tidal flats [Dataset]. http://doi.org/10.4121/b6c37815-dd5e-4cae-9776-9395ab0b3d9c.v1
    Explore at:
    zip
    Dataset updated
    Nov 22, 2023
    Dataset provided by
    4TU.ResearchData
    Authors
    G.S. (Gregory) Fivash; Marte Stoorvogel; Jaco de Smit; Floris van Rees; Jeroen van Dalen; Tim Grandjean; Roeland van de Vijsel; T.J. (Tjeerd) Bouma; Stijn Temmerman; Jim van Belzen
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Data & Scripts featured in the currently unpublished manuscript (as of 2023-11-14)

    Fivash et al. 2023b: 'Abiotic origins of self-organized ridge-runnel patterns on tidal flats'


    These files include the data and scripts used to create each figure in the manuscript.


    File types: (.tif, .png, .pptx, .clip, .r, .csv)

    Organization:


    The main directory for each figure '/Figure X/' contains a copy of the final figure used in the manuscript:

    Figure X.tif or .png, Figure X.clip or .pptx - Final figure as .tif or .png, and as a layered .clip or .pptx image file (readable in Clip Studio or PowerPoint)


    src - Source folder containing the R Scripts (Figure X.r) used to create each figure panel. Scripts reference data found in '/data' and export figure panels to '/panels'

    data - Data folder containing .csv data tables used to make each figure panel. In '/Figure 5 & S2/data' a raster file (.tif) is also included.

    panels - Figure panel folder containing .tif files produced by the R Scripts arranged later in Powerpoint or Clip Studio Paint into final figures found in the manuscript.


    Content:


    Main manuscript figures

    Figure 1; Figure 2; Figure 3; Figure 4; Figure 5


    Supplementary figures

    Figure S1; Figure S2 (included with Figure 5)

  17. Salmon age, sex, and length data from Westward and Southeast Alaska,...

    • search.dataone.org
    • knb.ecoinformatics.org
    • +3more
    Updated Aug 19, 2021
    Cite
    Alaska Department of Fish and Game, Division of Commercial Fisheries (2021). Salmon age, sex, and length data from Westward and Southeast Alaska, 1979-2017 [Dataset]. http://doi.org/10.5063/J38QX8
    Explore at:
    Dataset updated
    Aug 19, 2021
    Dataset provided by
    Knowledge Network for Biocomplexity
    Authors
    Alaska Department of Fish and Game, Division of Commercial Fisheries
    Time period covered
    Jul 21, 1979 - Mar 18, 2017
    Area covered
    Variables measured
    SEX, Sex, GEAR, Gear, MESH, FW_AGE, Length, SW_AGE, Source, WEIGHT, and 32 more
    Description

    Age, sex and length data provide population dynamics information that can indicate how population trends occur and may be changing. These data can help researchers estimate population growth rates, age-class distribution and population demographics. Knowing population demographics, growth rates and trends is particularly valuable to fisheries managers who must perform population assessments to inform management decisions. These data are therefore particularly important in valuable fisheries like the salmon fisheries of Alaska. This dataset includes age, sex and length data compiled from annual sampling of commercial and subsistence salmon harvests and research projects in westward and southeast Kodiak. It includes data on five salmon species: chinook, chum, coho, pink and sockeye. Age estimates were made by examining scales or bony structures (e.g., otoliths, the ear bones). Scales were removed from the side of the fish, usually the left side above the lateral line. Scales or bony structures were then mounted on gummed cards and pressed on acetate to make an impression. The number of freshwater and saltwater annuli (i.e., rings) was counted to estimate age in years. Age is recorded in European notation, which records both the freshwater and saltwater annuli. For example, a fish that spent one year in freshwater and 3 years in saltwater has its age recorded as 1.3. The total fish age is the sum of the first and second numbers, plus one to account for the time between egg deposition and emergence; the fish in this example is therefore 5 years old. Fish sex was determined by examining either external morphology (e.g., head and belly shape) or the internal sex organs. Length was measured in millimeters, generally from mid-eye to the fork of the tail. This data package includes the original data file (ASL DATA EXPORT.csv), a reformatting script that reformats the original data file into a consistent format (ASL_Formatting_SoutheastKodiak.R), and the reformatted dataset as a .csv file (ASL_formatted_SoutheastKodiak.csv).
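
    As a worked illustration of that notation (not part of the data package), the total age could be derived in R like this; the column name age_european is hypothetical:

    # Parse European-notation ages such as "1.3" (freshwater.saltwater)
    asl <- read.csv("ASL_formatted_SoutheastKodiak.csv")

    parts <- strsplit(as.character(asl$age_european), ".", fixed = TRUE)
    fw <- as.integer(sapply(parts, `[`, 1))   # freshwater annuli
    sw <- as.integer(sapply(parts, `[`, 2))   # saltwater annuli

    # Total age = freshwater years + saltwater years + 1
    # (+1 accounts for the time between egg deposition and emergence)
    asl$total_age <- fw + sw + 1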

  18. Data from: Replication code and data for: Tracking green space along...

    • data.niaid.nih.gov
    • zenodo.org
    Updated Oct 11, 2024
    Cite
    Falchetta, Giacomo; Hammad, T. Ahmed (2024). Replication code and data for: Tracking green space along streets of world cities [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_8001676
    Explore at:
    Dataset updated
    Oct 11, 2024
    Dataset provided by
    IIASA
    Decatab
    Authors
    Falchetta, Giacomo; Hammad, T. Ahmed
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    Replication code and data for: Tracking green space along streets of world cities

    By Giacomo Falchetta and Ahmed T. Hammad

    Preprint: https://doi.org/10.21203/rs.3.rs-3916891/v1

    To replicate the analysis, the results, and the figures of the paper:

    Download input data from this Zenodo repository and code from GitHub: https://github.com/giacfalk/urban_green_space_mapping_and_tracking

    Optional data extraction steps (processed output data are already available in the Zenodo repository):

    Adjust your working directory

    Run [lines 4-11] of workflow/sourcer.R

    Run the JavaScript scripts written by the string_generator_training.R and string_generator_prediction.R files in Google Earth Engine (https://code.earthengine.google.com) and complete the export-to-Drive tasks to generate the output .csv files

    Run workflow/sourcer.R [lines 15-46] to train the ML model and make predictions (including figures and tables replication)

  19. BTC/USDT Monthly OHLCV (2017-Aug to 2025-Sep)

    • kaggle.com
    zip
    Updated Oct 2, 2025
    Cite
    R. Kukuh (2025). BTC/USDT Monthly OHLCV (2017-Aug to 2025-Sep) [Dataset]. https://www.kaggle.com/datasets/rkukuh/btcusdt-monthly-2017-aug-2025-sep-binance
    Explore at:
    zip (2948 bytes)
    Dataset updated
    Oct 2, 2025
    Authors
    R. Kukuh
    License

    MIT License (https://opensource.org/licenses/MIT)
    License information was derived automatically

    Description

    Bitcoin vs USDT Monthly OHLCV Data (2017-Aug to 2025-Sep)

    This dataset contains monthly candlestick (OHLCV) data for the BTC/USDT pair from Binance Exchange, covering the period from August 2017 up to September 2025.

    Data was exported directly from TradingView to ensure historical consistency with chart analysis tools widely used by traders.

    📊 Dataset Overview

    • Symbol: BTC/USDT
    • Exchange: Binance
    • Timeframe: 1M (monthly)
    • Period Covered: 2017-08 → 2025-09
    • Total Rows: 99 (as of Sep 2025)
    • Data Type: OHLCV (Open, High, Low, Close, Volume)

    📂 File Information

    • File Name: BINANCE_BTCUSDT_1M_201708-202509.csv
    • Format: CSV (comma-separated)
    • Rows: 99
    • Columns: 6

    📑 Column Descriptions

    • date → The first day of the month (UTC)
    • open → Opening price of BTC/USDT for the month
    • high → Highest traded price in that month
    • low → Lowest traded price in that month
    • close → Closing price at the end of the month
    • volume → Total traded volume of BTC in that month
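
    As a quick-start illustration (not part of the dataset), the file can be loaded in R and a simple month-over-month return series derived:

    # Load the monthly OHLCV file and compute close-to-close returns
    btc <- read.csv("BINANCE_BTCUSDT_1M_201708-202509.csv")
    btc$date <- as.Date(btc$date)

    btc$return <- c(NA, diff(btc$close) / head(btc$close, -1))

    plot(btc$date, btc$close, type = "l", log = "y",
         xlab = "Month", ylab = "BTC/USDT close (log scale)")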

    ✅ Provenance

    • Source: Binance Exchange
    • Exported via: TradingView chart export feature
    • Collector: Dataset author (manual export, no modifications)

    🔄 Update Frequency

    This dataset will be updated monthly as new candles close on Binance.

    💡 Example Use Cases

    • Financial time series forecasting (ARIMA, Prophet, LSTM)
    • Volatility analysis and risk modeling
    • Technical indicators (RSI, MACD, moving averages)
    • Portfolio allocation backtesting
    • Crypto market academic research
  20. Data from: Our Block

    • data.cityofchicago.org
    Updated Nov 29, 2025
    Cite
    Chicago Police Department (2025). Our Block [Dataset]. https://data.cityofchicago.org/Public-Safety/Our-Block/285v-myf3
    Explore at:
    xml, csv, kmz, kml, application/geo+json, xlsx
    Dataset updated
    Nov 29, 2025
    Authors
    Chicago Police Department
    Description

    This dataset reflects reported incidents of crime (with the exception of murders, where data exists for each victim) that occurred in the City of Chicago from 2001 to present, minus the most recent seven days. Data is extracted from the Chicago Police Department's CLEAR (Citizen Law Enforcement Analysis and Reporting) system. In order to protect the privacy of crime victims, addresses are shown at the block level only and specific locations are not identified. Should you have questions about this dataset, you may contact the Research & Development Division of the Chicago Police Department at 312.745.6071 or RandD@chicagopolice.org.

    Disclaimer: These crimes may be based upon preliminary information supplied to the Police Department by the reporting parties that have not been verified. The preliminary crime classifications may be changed at a later date based upon additional investigation and there is always the possibility of mechanical or human error. Therefore, the Chicago Police Department does not guarantee (either expressed or implied) the accuracy, completeness, timeliness, or correct sequencing of the information and the information should not be used for comparison purposes over time. The Chicago Police Department will not be responsible for any error or omission, or for the use of, or the results obtained from the use of this information. All data visualizations on maps should be considered approximate and attempts to derive specific addresses are strictly prohibited. The Chicago Police Department is not responsible for the content of any off-site pages that are referenced by or that reference this web page other than an official City of Chicago or Chicago Police Department web page. The user specifically acknowledges that the Chicago Police Department is not responsible for any defamatory, offensive, misleading, or illegal conduct of other users, links, or third parties and that the risk of injury from the foregoing rests entirely with the user.

    The unauthorized use of the words "Chicago Police Department," "Chicago Police," or any colorable imitation of these words or the unauthorized use of the Chicago Police Department logo is unlawful. This web page does not, in any way, authorize such use.

    Data is updated daily Tuesday through Sunday. The dataset contains more than 65,000 records/rows of data and cannot be viewed in full in Microsoft Excel. Therefore, when downloading the file, select CSV from the Export menu. Open the file in an ASCII text editor, such as WordPad, to view and search. To access a list of Chicago Police Department - Illinois Uniform Crime Reporting (IUCR) codes, go to http://data.cityofchicago.org/Public-Safety/Chicago-Police-Department-Illinois-Uniform-Crime-R/c7ck-438e
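
    Beyond the portal's Export menu, the data can also be pulled programmatically; a minimal sketch in R, assuming the portal's standard Socrata resource endpoint for this dataset ID:

    # Read the first 50,000 rows via the Socrata CSV endpoint (assumed pattern)
    url <- "https://data.cityofchicago.org/resource/285v-myf3.csv?$limit=50000"
    crime <- read.csv(url)
    str(crime)  # inspect columns before analysis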
