73 datasets found
  1. A Baseflow Filter for Hydrologic Models in R

    • catalog.data.gov
    Updated Apr 21, 2025
    Cite
    Agricultural Research Service (2025). A Baseflow Filter for Hydrologic Models in R [Dataset]. https://catalog.data.gov/dataset/a-baseflow-filter-for-hydrologic-models-in-r-41440
    Explore at:
    Dataset updated
    Apr 21, 2025
    Dataset provided by
    Agricultural Research Service (https://www.ars.usda.gov/)
    Description

    A Baseflow Filter for Hydrologic Models in R. Resources in this dataset: Resource Title: A Baseflow Filter for Hydrologic Models in R. File Name: Web Page. URL: https://www.ars.usda.gov/research/software/download/?softwareid=383&modecode=20-72-05-00 (download page)

  2. Filter Import Data | Soluciones En Logistica Rcl S De R

    • seair.co.in
    Updated Jan 29, 2025
    + more versions
    Cite
    Seair Exim (2025). Filter Import Data | Soluciones En Logistica Rcl S De R [Dataset]. https://www.seair.co.in
    Explore at:
    Available download formats: .bin, .xml, .csv, .xls
    Dataset updated
    Jan 29, 2025
    Dataset provided by
    Seair Exim Solutions
    Authors
    Seair Exim
    Area covered
    United States
    Description

    Subscribers can look up export and import data for 23 countries by HS code or product name. This demo is helpful for market analysis.

  3. libritts-r-filtered-speaker-descriptions

    • huggingface.co
    + more versions
    Cite
    Parler TTS, libritts-r-filtered-speaker-descriptions [Dataset]. https://huggingface.co/datasets/parler-tts/libritts-r-filtered-speaker-descriptions
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset authored and provided by
    Parler TTS
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Dataset Card for Annotated LibriTTS-R

    This dataset is an annotated version of a filtered LibriTTS-R [1]. LibriTTS-R [1] is a sound-quality-improved version of the LibriTTS corpus, a multi-speaker English corpus of approximately 960 hours of read English speech at a 24 kHz sampling rate, published in 2019. In the text_description column, it provides natural language annotations on the characteristics of speakers and utterances that have been generated using the Data-Speech… See the full description on the dataset page: https://huggingface.co/datasets/parler-tts/libritts-r-filtered-speaker-descriptions.

  4. Data from: Dataset from: Browsing is a strong filter for savanna tree...

    • data.niaid.nih.gov
    Updated Oct 1, 2021
    Cite
    Wayne Twine (2021). Dataset from : Browsing is a strong filter for savanna tree seedlings in their first growing season [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_4972083
    Explore at:
    Dataset updated
    Oct 1, 2021
    Dataset provided by
    Archibald, Sally
    Wayne Twine
    Nicola Stevens
    Craddock Mthabini
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The data presented here were used to produce the following paper:

    Archibald, Twine, Mthabini, Stevens (2021) Browsing is a strong filter for savanna tree seedlings in their first growing season. J. Ecology.

    The project under which these data were collected is: Mechanisms Controlling Species Limits in a Changing World. NRF/SASSCAL Grant number 118588

    For information on the data or analysis please contact Sally Archibald: sally.archibald@wits.ac.za

    Description of file(s):

    File 1: cleanedData_forAnalysis.csv (required to run the R code: "finalAnalysis_PostClipResponses_Feb2021_requires_cleanData_forAnalysis_.R")

    The data represent monthly survival and growth data for ~740 seedlings from 10 species under various levels of clipping.

    The data consist of one .csv file with the following column names:

    treatment: Clipping treatment (1 - 5 months clip plus control unclipped)
    plot_rep: One of three randomised plots per treatment
    matrix_no: Where in the plot the individual was placed
    species_code: First three letters of the genus name and first three letters of the species name; uniquely identifies the species
    species: Full species name
    sample_period: Classification of sampling period into time since clip
    status: Alive or Dead
    standing.height: Vertical height above ground (in mm)
    height.mm: Length of the longest branch (in mm)
    total.branch.length: Total length of all the branches (in mm)
    stemdiam.mm: Basal stem diameter (in mm)
    maxSpineLength.mm: Length of the longest spine
    postclipStemNo: Number of resprouting stems (only recorded AFTER clipping)
    date.clipped: Date clipped
    date.measured: Date measured
    date.germinated: Date germinated
    Age.of.plant: Date measured minus date germinated
    newtreat: Treatment as a numeric variable, with 8 being the control plot (for plotting purposes)

    File 2: Herbivory_SurvivalEndofSeason_march2017.csv (required to run the R code: "FinalAnalysisResultsSurvival_requires_Herbivory_SurvivalEndofSeason_march2017.R")

    The data consist of one .csv file with the following column names:

    treatment: Clipping treatment (1 - 5 months clip plus control unclipped)
    plot_rep: One of three randomised plots per treatment
    matrix_no: Where in the plot the individual was placed
    species_code: First three letters of the genus name and first three letters of the species name; uniquely identifies the species
    species: Full species name
    sample_period: Classification of sampling period into time since clip
    status: Alive or Dead
    standing.height: Vertical height above ground (in mm)
    height.mm: Length of the longest branch (in mm)
    total.branch.length: Total length of all the branches (in mm)
    stemdiam.mm: Basal stem diameter (in mm)
    maxSpineLength.mm: Length of the longest spine
    postclipStemNo: Number of resprouting stems (only recorded AFTER clipping)
    date.clipped: Date clipped
    date.measured: Date measured
    date.germinated: Date germinated
    Age.of.plant: Date measured minus date germinated
    newtreat: Treatment as a numeric variable, with 8 being the control plot (for plotting purposes)
    genus: Genus
    MAR: Mean annual rainfall for that species' distribution (mm)
    rainclass: High/medium/low

    File 3: allModelParameters_byAge.csv (required to run the R code: "FinalModelSeedlingSurvival_June2021_.R")

    Consists of a .csv file with the following column headings

    Age.of.plant: Age in days
    species_code: Species code
    pred_SD_mm: Predicted stem diameter in mm
    pred_SD_up: Top 75th quantile of stem diameter in mm
    pred_SD_low: Bottom 25th quantile of stem diameter in mm
    treatdate: Date when clipped
    pred_surv: Predicted survival probability
    pred_surv_low: Predicted 25th quantile survival probability
    pred_surv_high: Predicted 75th quantile survival probability
    Bite.probability: Daily probability of being eaten
    max_bite_diam_duiker_mm: Maximum bite diameter of a duiker for this species
    duiker_sd: Standard deviation of duiker bite diameter for this species
    max_bite_diameter_kudu_mm: Maximum bite diameter of a kudu for this species
    kudu_sd: Standard deviation of kudu bite diameter for this species
    mean_bite_diam_duiker_mm: Mean bite diameter of a duiker for this species
    duiker_mean_sd: Standard deviation of mean duiker bite diameter
    mean_bite_diameter_kudu_mm: Mean bite diameter of a kudu for this species
    kudu_mean_sd: Standard deviation of mean kudu bite diameter
    genus: Genus
    rainclass: Low/med/high

    File 4: EatProbParameters_June2020.csv (required to run the R code: "FinalModelSeedlingSurvival_June2021_.R")

    Consists of a .csv file with the following column headings

    shtspec: Species name
    species_code: Species code
    genus: Genus
    rainclass: Low/medium/high
    seed mass: Mass of seed (g per 1000 seeds)
    Surv_intercept: Coefficient of the model predicting survival from age of clip for this species
    Surv_slope: Coefficient of the model predicting survival from age of clip for this species
    GR_intercept: Coefficient of the model predicting stem diameter from seedling age for this species
    GR_slope: Coefficient of the model predicting stem diameter from seedling age for this species
    max_bite_diam_duiker_mm: Maximum bite diameter of a duiker for this species
    duiker_sd: Standard deviation of duiker bite diameter for this species
    max_bite_diameter_kudu_mm: Maximum bite diameter of a kudu for this species
    kudu_sd: Standard deviation of kudu bite diameter for this species
    mean_bite_diam_duiker_mm: Mean bite diameter of a duiker for this species
    duiker_mean_sd: Standard deviation of mean duiker bite diameter
    mean_bite_diameter_kudu_mm: Mean bite diameter of a kudu for this species
    kudu_mean_sd: Standard deviation of mean kudu bite diameter
    AgeAtEscape_duiker[t]: Age of plant when its stem diameter is larger than a mean duiker bite
    AgeAtEscape_duiker_min[t]: Age of plant when its stem diameter is larger than a min duiker bite
    AgeAtEscape_duiker_max[t]: Age of plant when its stem diameter is larger than a max duiker bite
    AgeAtEscape_kudu[t]: Age of plant when its stem diameter is larger than a mean kudu bite
    AgeAtEscape_kudu_min[t]: Age of plant when its stem diameter is larger than a min kudu bite
    AgeAtEscape_kudu_max[t]: Age of plant when its stem diameter is larger than a max kudu bite

  5. Small form factor filter based sampling data - Ambient and chamber

    • catalog.data.gov
    Updated Mar 12, 2022
    Cite
    U.S. EPA Office of Research and Development (ORD) (2022). Small form factor filter based sampling data - Ambient and chamber [Dataset]. https://catalog.data.gov/dataset/small-form-factor-filter-based-sampling-data-ambient-and-chamber
    Explore at:
    Dataset updated
    Mar 12, 2022
    Dataset provided by
    United States Environmental Protection Agency (http://www.epa.gov/)
    Description

    Small form factor filter based PM collection data from both co-located ambient sampling and chamber studies conducted on controlled smoke environments. Metadata are contained within the files. This dataset is associated with the following publication: Krug, J.D., R. Long, M. Colon, A. Habel, S. Urbanski, and M. Landis. Evaluation of Small Form Factor, Filter-Based PM2.5 Samplers for Temporary Non-Regulatory Monitoring During Wildland Fire Smoke Events. ATMOSPHERIC ENVIRONMENT. Elsevier Science Ltd, New York, NY, USA, 265: 0, (2021).

  6. Sloan Digital Sky Survey DR3 - Filter r

    • b2find.eudat.eu
    Updated Aug 29, 2022
    Cite
    (2022). Sloan Digital Sky Survey DR3 - Filter r [Dataset]. https://b2find.eudat.eu/dataset/52e60e4a-7986-5773-bcb8-254aa968ff2e
    Explore at:
    Dataset updated
    Aug 29, 2022
    Description

    The Sloan Digital Sky Survey is a project to survey a 10,000 square degree area of the Northern sky over a 5-year period. A dedicated 2.5 m telescope is specially designed to take wide-field (3 degrees in diameter) images using a 5x6 mosaic of 2048x2048 CCDs, in five wavelength bands, operating in drift-scan mode. The total raw data will exceed 40 TB. A processed subset, about 1 TB in size, will consist of 1 million spectra, positions, and image parameters for over 100 million objects, plus a mini-image centered on each object in every color. The data will be made available to the public after the completion of the survey.

  7. Raw data and R filtering code for "An investigation of genetic connectivity...

    • opal.latrobe.edu.au
    • researchdata.edu.au
    txt
    Updated Mar 7, 2024
    Cite
    James O'Dwyer; Nicholas Murphy; Katherine Harrisson; Zeb Tonkin; Jarod Lyon; Wayne Koster; Frank Amtstaetter; David Dawson (2024). Raw data and R filtering code for "An investigation of genetic connectivity shines a light on the relative roles of isolation by distance and oceanic currents in three diadromous fish species" [Dataset]. http://doi.org/10.26181/14671884.v2
    Explore at:
    Available download formats: txt
    Dataset updated
    Mar 7, 2024
    Dataset provided by
    La Trobe
    Authors
    James O'Dwyer; Nicholas Murphy; Katherine Harrisson; Zeb Tonkin; Jarod Lyon; Wayne Koster; Frank Amtstaetter; David Dawson
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This data set contains the raw SNP output files for each of the three diadromous species studied in this manuscript. Additionally, the covariates file containing all environmental and individual data for all individuals is included. All R code used to filter SNPs to the quality thresholds in this paper is also provided.

  8. THEMIS-C: Probe Electric Field Instrument and Search Coil Magnetometer...

    • heliophysicsdata.gsfc.nasa.gov
    application/x-cdf +2
    Updated Jul 30, 2023
    + more versions
    Cite
    Angelopoulos, Vassilis; Bonnell, John, W.; Ergun, Robert, E.; Mozer, Forrest, S.; Roux, Alain (2023). THEMIS-C: Probe Electric Field Instrument and Search Coil Magnetometer Instrument, Digital Fields Board - digitally computed Filter Bank spectra and E12 peak and average in HF band (FBK). [Dataset]. http://doi.org/10.48322/ken1-sn21
    Explore at:
    Available download formats: csv, bin, application/x-cdf
    Dataset updated
    Jul 30, 2023
    Dataset provided by
    NASA (http://nasa.gov/)
    Authors
    Angelopoulos, Vassilis; Bonnell, John, W.; Ergun, Robert, E.; Mozer, Forrest, S.; Roux, Alain
    License

    https://spdx.org/licenses/CC0-1.0

    Variables measured
    thc_fbh, thc_fb_yaxis, thc_fbh_time, thc_fbk_fband, thc_fb_v1_time, thc_fb_v2_time, thc_fb_v3_time, thc_fb_v4_time, thc_fb_v5_time, thc_fb_v6_time, and 12 more
    Description

    The Filter Bank is part of the Digital Fields Board and provides band-pass filtering for EFI and SCM spectra, as well as E12HF peak and average value calculations. The Filter Bank provides band-pass filtering for less computationally and power intensive spectra than the FFT would provide. The process is as follows: signals are fed to the Filter Bank via a low-pass FIR filter with a cut-off frequency half that of the original signal maximum. The output is passed to the band-pass filters and differenced from the original signal; then the absolute value of the data is taken and averaged. The output from the low-pass filter is also sent to a second FIR filter with 2:1 decimation, and this output is then fed back through the system. The process runs through 12 cascades for input at 8,192 samples/s and 13 for input at 16,384 samples/s (EAC input only), reducing the signal and computing power by a factor of 2 at each cascade. At each cascade a set of data is produced at a sampling frequency of 2^n, from 2 Hz to the initial sampling frequency (frequency characteristics for each step are shown below in Table 1). The average from the Filter Bank is compressed to 8 bits with a pseudo-logarithmic encoder. The data are stored in sets of six frequency bins at 2.689 kHz, 572 Hz, 144.2 Hz, 36.2 Hz, 9.05 Hz, and 2.26 Hz. The average of the coupled E12HF signal and its peak value are recorded over 62.5 ms windows (i.e., a 16 Hz sampling rate). Accumulation of values from the signal's 31.25 ms windows is performed externally. The analog signals fed into the FBK are E12DC and SCM1. Sensor and electronics design provided by UCB (J. W. Bonnell, F. S. Mozer); Digital Fields Board provided by LASP (R. Ergun); search coil data provided by CETP (A. Roux).

    Table 1: Frequency Properties.

    Cascade  Input Signal Content  Low-pass Cutoff  Low-pass Output Content  Filter Bank Band
    0*       0 - 8 kHz             4 kHz            0 - 4 kHz                4 - 8 kHz
    1        0 - 4 kHz             2 kHz            0 - 2 kHz                2 - 4 kHz
    2        0 - 2 kHz             1 kHz            0 - 1 kHz                1 - 2 kHz
    3        0 - 1 kHz             512 Hz           0 - 512 Hz               512 Hz - 1 kHz
    4        0 - 512 Hz            256 Hz           0 - 256 Hz               256 - 512 Hz
    5        0 - 256 Hz            128 Hz           0 - 128 Hz               128 - 256 Hz
    6        0 - 128 Hz            64 Hz            0 - 64 Hz                64 - 128 Hz
    7        0 - 64 Hz             32 Hz            0 - 32 Hz                32 - 64 Hz
    8        0 - 32 Hz             16 Hz            0 - 16 Hz                16 - 32 Hz
    9        0 - 16 Hz             8 Hz             0 - 8 Hz                 8 - 16 Hz
    10       0 - 8 Hz              4 Hz             0 - 4 Hz                 4 - 8 Hz
    11       0 - 4 Hz              2 Hz             0 - 2 Hz                 2 - 4 Hz
    12       0 - 2 Hz              1 Hz             0 - 1 Hz                 1 - 2 Hz
    *Only available for 16,384 Hz sampling.
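    The cascade described above follows a simple halving rule: each stage band-passes the top octave of its input, then low-pass filters and decimates 2:1 before handing the signal to the next stage. A minimal Python sketch of that band-edge arithmetic (an illustration of the rule in Table 1, not the THEMIS flight code):

```python
def filter_bank_bands(fs_hz, n_cascades):
    """Return (low, high) band-pass edges in Hz for each cascade stage.

    Each stage keeps the top octave of its input (Nyquist/2 .. Nyquist);
    2:1 decimation then halves the Nyquist frequency for the next stage.
    """
    bands = []
    nyquist = fs_hz / 2.0
    for _ in range(n_cascades):
        bands.append((nyquist / 2.0, nyquist))  # top octave of current input
        nyquist /= 2.0                          # decimation halves Nyquist
    return bands

# 16,384 samples/s input runs 13 cascades: 4-8 kHz down to 1-2 Hz.
bands = filter_bank_bands(16384, 13)
```

    For 8,192 samples/s input the same rule with 12 cascades starts at the 2-4 kHz band, matching rows 1-12 of the table.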

  9. Preprocessing and Binomial Filtering of Bisulfite Sequencing Coverage Data

    • figshare.com
    txt
    Updated May 22, 2025
    Cite
    Eamonn Mallon (2025). Preprocessing and Binomial Filtering of Bisulfite Sequencing Coverage Data [Dataset]. http://doi.org/10.6084/m9.figshare.29126867.v1
    Explore at:
    Available download formats: txt
    Dataset updated
    May 22, 2025
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Eamonn Mallon
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This R script performs data preprocessing and statistical filtering on bisulfite sequencing output files (*.cov format) using a binomial test approach. The primary objective is to prepare methylation count data in a format compatible with the DSS package for downstream differential methylation analysis. The script applies a coverage threshold, computes binomial p-values against a specified background rate (default: 0.007), adjusts p-values using the Benjamini-Hochberg method, and exports three structured output files per sample:

    [Sample]_Binomial_Applied.txt: Full filtered data with FDR-adjusted p-values.
    [Sample]_DSS_Format.txt: Reformatted table with columns chr, pos, N, and X for DSS.
    [Sample]_w_Unique.txt: Same as the DSS format, with a unique site identifier column (chr_pos).

    Inputs: Tab-delimited coverage files matching *evidence.cov (e.g., from the Bismark methylation extractor). Columns must include chromosome, start, end, percent methylation, count methylated (C), and count unmethylated (T).

    Software requirements: R (≥ 4.0); packages readr, sqldf, doBy, dplyr, foreach, doParallel.

    Usage notes: This script is intended to run in a directory containing .cov files. Parallel processing is used for speed; adjust the number of cores with doParallel::registerDoParallel(). Downstream DSS analysis expects the *_DSS_Format.txt files to be loaded using makeBSseqData().
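    The script itself is R, but the statistical step it describes (a one-sided binomial test of methylated counts against a background rate, followed by Benjamini-Hochberg adjustment) is compact enough to sketch. A hypothetical pure-Python illustration of that logic, with made-up example sites, not the distributed script:

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): one-sided binomial test p-value."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def bh_adjust(pvals):
    """Benjamini-Hochberg FDR adjustment (mirrors R's p.adjust(method="BH"))."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotonicity.
    for offset, i in enumerate(reversed(order)):
        rank = m - offset
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

# Hypothetical sites as (methylated_count, total_coverage); background 0.007.
sites = [(5, 20), (0, 30), (2, 15)]
pvals = [binom_sf(k, n, 0.007) for k, n in sites]
fdr = bh_adjust(pvals)
```

    The real script additionally applies a coverage threshold and writes the DSS-formatted tables; this sketch only shows the test-and-adjust core.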

  10. Storage and Transit Time Data and Code

    • zenodo.org
    zip
    Updated Oct 29, 2024
    + more versions
    Cite
    Andrew Felton; Andrew Felton (2024). Storage and Transit Time Data and Code [Dataset]. http://doi.org/10.5281/zenodo.14009758
    Explore at:
    Available download formats: zip
    Dataset updated
    Oct 29, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Andrew Felton; Andrew Felton
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Author: Andrew J. Felton
    Date: 10/29/2024

    This R project contains the primary code and data (following pre-processing in Python) used for data production, manipulation, visualization, analysis, and figure production for the study entitled:

    "Global estimates of the storage and transit time of water through vegetation"

    Please note that 'turnover' and 'transit' are used interchangeably. Also note that this R project has been updated multiple times as the analysis has evolved.

    Data information:

    The data folder contains key data sets used for analysis. In particular:

    "data/turnover_from_python/updated/august_2024_lc/" contains the core datasets used in this study, including global arrays summarizing five-year (2016-2020) averages of mean (annual) and minimum (monthly) transit time, storage, canopy transpiration, and number of months of data, available as both an array (.nc) and a data table (.csv). These data were produced in Python using the scripts found in the "supporting_code" folder. The remaining files in the "data" and "data/supporting_data" folders primarily contain ground-based estimates of storage and transit found in public databases or through a literature search, but have been extensively processed and filtered here. The "supporting_data" folder also contains annual (2016-2020) MODIS land cover data used in the analysis, with separate folders containing the original data (.hdf) and the final processed (filtered) data in .nc format. The resulting annual land cover distributions were used in the pre-processing of data in Python.

    Code information:

    Python scripts can be found in the "supporting_code" folder.

    Each R script in this project has a role:

    "01_start.R": This script sets the working directory, loads in the tidyverse package (the remaining packages in this project are called using the `::` operator), and can run two other scripts: one that loads the customized functions (02_functions.R) and one for importing and processing the key dataset for this analysis (03_import_data.R).

    "02_functions.R": This script contains custom functions. Load this using the
    `source()` function in the 01_start.R script.

    "03_import_data.R": This script imports and processes the .csv transit data. It joins the mean (annual) transit time data with the minimum (monthly) transit data to generate one dataset for analysis: annual_turnover_2. Load this using the
    `source()` function in the 01_start.R script.

    "04_figures_tables.R": This is the main workhorse for figure/table production and supporting analyses. This script generates the key figures and summary statistics used in the study, which are then saved in the manuscript_figures folder. Note that all maps were produced using Python code found in the "supporting_code" folder.

    "supporting_generate_data.R": This script processes supporting data used in the analysis, primarily the varying ground-based datasets of leaf water content.

    "supporting_process_land_cover.R": This takes annual MODIS land cover distributions and processes them through a multi-step filtering process so that they can be used in preprocessing of datasets in python.

  11. Data from: Filtered Data from the Retrospective Analysis of Antarctic...

    • data.niaid.nih.gov
    • researchdata.edu.au
    • +3more
    Updated Mar 24, 2020
    + more versions
    Cite
    Kieran Lawton (2020). Filtered Data from the Retrospective Analysis of Antarctic Tracking Data Project from the Scientific Committee on Antarctic Research [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_3722948
    Explore at:
    Dataset updated
    Mar 24, 2020
    Dataset provided by
    Horst Bornemann
    Mike Double
    Joachim Plötz
    Phil O'B. Lyver
    Wayne Trivelpiece
    Ben Arthur
    Kit M. Kovacs
    Klemens Pütz
    P. J. Nico de Bruyn
    Peter G. Ryan
    Jefferson T. Hinke
    Erling S. Nordøy
    Clive R. McMahon
    Mike Goebel
    Ian D. Jonsen
    Kerstin Jerosch
    Louise Emmerson
    Luciano Dalla Rosa
    Knowles R. Kerry
    John Bengtson
    Mike Fedak
    Christian Lydersen
    Luis A. Hückstädt
    Ben Raymond
    Grant Ballard
    Ari Friedlaende
    Kieran Lawton
    Daniel P. Costa
    Henri Weimerskirch
    Ryan R. Reisinger
    Keith W. Nicholls
    Ewan Wakefield
    Christophe Guinet
    David Thompson
    Sébastien Descamps
    Simon Wotherspoon
    Virginia Andrews-Goff
    Bruno Danis
    Robert J. M. Crawford
    Simon D. Goldsworthy
    Maria E. I. Márquez
    Rochelle Constantine
    Roger Kirkwood
    Pierre Pistorius
    Leigh G. Torres
    Kimberly T. Goetz
    Jaimie Cleeland
    Azwianewi B. Makhado
    Charles-André Bost
    Nick Gales
    Iain Staniland
    Colin Southwell
    Marthán N. Bester
    Silvia Olmastroni
    Rob Harcourt
    Arnaud Tarroux
    José C. Xavier
    Barbara Wienecke
    Karine Delord
    Andrew D. Lowther
    Jean-Benoît Charrassin
    Mark A. Hindell
    Arnoldus Schytte Blix
    Anton P. Van de Putte
    David G. Ainley
    Norman Ratcliffe
    Mary-Anne Lea
    Dominik Nachtsheim
    Peter Boveng
    Philip N. Trathan
    Gerald L. Kooyman
    Akiko Kato
    Yan Ropert-Coudert
    Mercedes Santos
    Birgitte I. McDonald
    Monica Muelbert
    Lars Boehme
    Rachael Alderman
    Richard A. Phillips
    Akinori Takahashi
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Antarctica
    Description

    The Retrospective Analysis of Antarctic Tracking Data (RAATD) is a Scientific Committee for Antarctic Research (SCAR) project led jointly by the Expert Groups on Birds and Marine Mammals and Antarctic Biodiversity Informatics, and endorsed by the Commission for the Conservation of Antarctic Marine Living Resources. The RAATD project team consolidated tracking data for multiple species of Antarctic meso- and top-predators to identify Areas of Ecological Significance. These datasets constitute the compiled tracking data from a large number of research groups that have worked in the Antarctic since the 1990s.

    This metadata record pertains to the "filtered" version of the data files. These files contain position estimates that have been processed using a state-space model in order to estimate locations at regular time intervals. For technical details of the filtering process, consult the data paper. The filtering code can be found in the https://github.com/SCAR/RAATD repository.

    This data set comprises one metadata csv file that describes all deployments, along with data files (3 files for each of 17 species). For each species there is:
    - an RDS file that contains the fitted TMB filter model object and model predictions (RDS format can be read by the R statistical software package)
    - a PDF file that shows the quality control results for each individual model
    - a CSV file containing the interpolated position estimates

    For details of the file contents and formats, consult the data paper.

    The original copy of these data is available through the Australian Antarctic Data Centre (https://data.aad.gov.au/metadata/records/SCAR_EGBAMM_RAATD_2018_Filtered).

    The data are also available in a standardized version (see https://data.aad.gov.au/metadata/records/SCAR_EGBAMM_RAATD_2018_Standardised) that contains position estimates as provided by the original data collectors (generally raw Argos or GPS locations, or estimated GLS locations) without state-space filtering.

  12. Processed data for the analysis of human mobility changes from COVID-19...

    • data.niaid.nih.gov
    • search.dataone.org
    • +2more
    zip
    Updated Mar 28, 2024
    Cite
    Jin Bai; Michael Caslin; Madhusudan Katti (2024). Processed data for the analysis of human mobility changes from COVID-19 lockdown on bird occupancy in North Carolina, USA [Dataset]. http://doi.org/10.5061/dryad.gb5mkkwxr
    Explore at:
    Available download formats: zip
    Dataset updated
    Mar 28, 2024
    Dataset provided by
    North Carolina State University
    Authors
    Jin Bai; Michael Caslin; Madhusudan Katti
    License

    https://spdx.org/licenses/CC0-1.0.html

    Area covered
    United States, North Carolina
    Description

    The COVID-19 pandemic lockdown worldwide provided a unique research opportunity for ecologists to investigate the human-wildlife relationship under abrupt changes in human mobility, also known as the Anthropause. Here we chose 15 common non-migratory bird species with different levels of synanthropy, and we aimed to compare how human mobility changes could influence the occupancy of fully synanthropic species such as House Sparrow (Passer domesticus) versus casual to tangential synanthropic species such as White-breasted Nuthatch (Sitta carolinensis). We extracted data from the eBird citizen science project during three study periods in the spring and summer of 2020 when human mobility changed unevenly across different counties in North Carolina. We used the COVID-19 Community Mobility Reports from Google to examine how community mobility changes towards workplaces, an indicator of overall human movements at the county level, could influence bird occupancy.

    Methods: The data source we used for bird data was eBird, a global citizen science project run by the Cornell Lab of Ornithology. We used the COVID-19 Community Mobility Reports by Google to represent the pause of human activities at the county level in North Carolina. These data are publicly available and were last updated on 10/15/2022. We used forest land cover data from NC One Map, high-resolution (1-meter pixel) raster data from 2016 imagery, to represent canopy cover at each eBird checklist location. We also used the raster data of the 2019 National Land Cover Database to represent the degree of development/impervious surface at each eBird checklist location. All three measurements used the highest resolution available. We downloaded the eBird Basic Dataset (EBD) that contains the 15 study species from February to June 2020. We also downloaded the sampling event data that contains the checklist effort information.
    First, we used the R package auk (version 0.6.0) in R (version 4.2.1) to filter data on the following conditions: (1) Date: 02/19/2020 - 03/29/2020; (2) Checklist type: stationary; (3) Complete checklists; (4) Time: 07:00 am - 06:00 pm; (5) Checklist duration: 5-20 mins; (6) Location: North Carolina. After filtering the data, we used the zero-fill function from auk to create detection/non-detection data for each study species in NC. Then we used the repeat-visits filter from auk to select eBird checklist locations where at least 2 checklists (max 10 checklists) had been submitted to the same location by the same observer, allowing us to create a hierarchical data frame where both the detection and state processes can be analyzed using occupancy modeling. This data frame was in a matrix format in which each row represents a sampling location and the columns represent the detection and non-detection of the 2-10 repeat sampling events. For the Google Community Mobility data, we chose the “Workplaces” category of mobility data to analyze the Anthropause effect because it was highly relevant to the pause of human activities in urban areas. The mobility data from Google is a percentage change compared to a baseline for each day. A baseline day represents a normal value for the day of the week from the 5-week period (01/03/2020-02/06/2020). For example, a mobility value of -30.0 for Wake County on Apr 15, 2020, means the overall mobility in Wake County on that day decreased by 30% compared to the baseline day a few months ago. Because the eBird data we used cover a wider range of dates rather than each day, we took the average value of mobility before lockdown, during lockdown, and after lockdown in each county in NC. For the environmental variables, we calculated the values in ArcGIS Pro (version 3.1.0). We created a 200 m buffer at each eligible eBird checklist location.
For the forest cover data, we used "Zonal Statistics as Table" to extract the percentage of forest cover within each checklist location's 200-meter circular buffer. For the National Land Cover Database (NLCD) data, we combined low-, medium-, and high-intensity development into a single development cover class and used "Summarize Within" to extract the percentage of development cover from the polygon version of the NLCD. A correlation matrix of the three predictors (workplace mobility, percent forest cover, and percent development cover) showed no collinearity, so these three predictors, plus the interaction between workplace mobility and percent development cover, served as the site covariates of the occupancy models. Four detection covariates were considered: time of observation, checklist duration, number of observers, and workplace mobility; these were also not highly correlated. We then merged all data into an unmarked data frame using the "unmarked" R package (version 1.2.5), with eBird sampling locations as sites (rows in the data frame) and repeat checklists at the same sampling locations as repeat visits (columns in the data frame).
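The sites-by-visits layout described above can be sketched in a few lines. This is a plain-Python illustration of the data structure only, not the authors' R/auk code; the record format and helper name are hypothetical:

```python
# Build a detection/non-detection matrix: one row per site,
# one column per repeat visit (2-10 visits per site), padded with None.
MAX_VISITS = 10

def detection_matrix(checklists):
    """checklists: list of (site_id, detected) tuples, in visit order."""
    sites = {}
    for site_id, detected in checklists:
        sites.setdefault(site_id, []).append(1 if detected else 0)
    matrix = {}
    for site_id, visits in sites.items():
        if len(visits) < 2:           # repeat-visits filter: need >= 2 checklists
            continue
        visits = visits[:MAX_VISITS]  # cap at 10 repeat visits
        matrix[site_id] = visits + [None] * (MAX_VISITS - len(visits))
    return matrix

records = [("L1", True), ("L1", False), ("L1", True), ("L2", True)]
m = detection_matrix(records)
# "L2" has only one checklist, so it is dropped; "L1" keeps its 3 visits.
```

In the actual analysis this structure corresponds to the unmarked data frame, with occupancy (state) modeled across rows and detection modeled across the repeat-visit columns.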

  13. Theft Filter

    • data.cityofchicago.org
    Updated Jul 18, 2025
    Chicago Police Department (2025). Theft Filter [Dataset]. https://data.cityofchicago.org/Public-Safety/Theft-Filter/aqvv-ggim
    Explore at:
    csv, tsv, xml, application/rdfxml, application/rssxml, application/geo+json, kml, kmz (available download formats)
    Dataset updated
    Jul 18, 2025
    Authors
    Chicago Police Department
    Description

    This dataset reflects reported incidents of crime (with the exception of murders where data exists for each victim) that occurred in the City of Chicago from 2001 to present, minus the most recent seven days. Data is extracted from the Chicago Police Department's CLEAR (Citizen Law Enforcement Analysis and Reporting) system. In order to protect the privacy of crime victims, addresses are shown at the block level only and specific locations are not identified. Should you have questions about this dataset, you may contact the Research & Development Division of the Chicago Police Department at 312.745.6071 or RandD@chicagopolice.org. Disclaimer: These crimes may be based upon preliminary information supplied to the Police Department by the reporting parties that have not been verified. The preliminary crime classifications may be changed at a later date based upon additional investigation and there is always the possibility of mechanical or human error. Therefore, the Chicago Police Department does not guarantee (either expressed or implied) the accuracy, completeness, timeliness, or correct sequencing of the information and the information should not be used for comparison purposes over time. The Chicago Police Department will not be responsible for any error or omission, or for the use of, or the results obtained from the use of this information. All data visualizations on maps should be considered approximate and attempts to derive specific addresses are strictly prohibited. The Chicago Police Department is not responsible for the content of any off-site pages that are referenced by or that reference this web page other than an official City of Chicago or Chicago Police Department web page. The user specifically acknowledges that the Chicago Police Department is not responsible for any defamatory, offensive, misleading, or illegal conduct of other users, links, or third parties and that the risk of injury from the foregoing rests entirely with the user. 
The unauthorized use of the words "Chicago Police Department," "Chicago Police," or any colorable imitation of these words or the unauthorized use of the Chicago Police Department logo is unlawful. This web page does not, in any way, authorize such use. Data is updated daily Tuesday through Sunday. The dataset contains more than 65,000 records/rows of data and cannot be viewed in full in Microsoft Excel. Therefore, when downloading the file, select CSV from the Export menu. Open the file in an ASCII text editor, such as Wordpad, to view and search. To access a list of Chicago Police Department - Illinois Uniform Crime Reporting (IUCR) codes, go to http://data.cityofchicago.org/Public-Safety/Chicago-Police-Department-Illinois-Uniform-Crime-R/c7ck-438e

  14. Marine geophysical data exchange files for R/V Kilo Moana: 2002 to 2018

    • data.niaid.nih.gov
    • zenodo.org
    Updated Apr 27, 2021
    Hamilton, Michael (2021). Marine geophysical data exchange files for R/V Kilo Moana: 2002 to 2018 [Dataset]. https://data.niaid.nih.gov/resources?id=ZENODO_4699568
    Explore at:
    Dataset updated
    Apr 27, 2021
    Dataset authored and provided by
    Hamilton, Michael
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Summary:

    Marine geophysical exchange files for R/V Kilo Moana: 2002 to 2018 includes 328 geophysical archive files spanning km0201, the vessel's very first expedition, through km1812, the last survey included in this data synthesis.

    Data formats (you will likely require only one of these):

    MGD77T (M77T): ASCII - the current standard format for marine geophysical data exchange, tab delimited, low human readability

    MGD77: ASCII - legacy format for marine geophysical data exchange (no longer recommended due to truncated data precision and low human readability)

    GMT DAT: ASCII - the Generic Mapping Tools format in which these archive files were built, best human readability but largest file size

    MGD77+: highly flexible and disk space saving binary NetCDF-based format, enables adding additional columns and application of errata-based data correction methods (i.e., Chandler et al, 2012), not human readable

    The process by which formats were converted is explained below.

    Data Reduction and Explanation:

    R/V Kilo Moana routinely acquired bathymetry data using two concurrently operated sonar systems; hence, for this analysis, a best effort was made to extract center-beam depth values from the appropriate sonar system. No resampling or decimation of center-beam depth data has been performed, with the exception that all depth measurements were required to be temporally separated by at least 1 second. The initial sonar systems were the Kongsberg EM120 for deep-water and the EM1002 for shallow-water mapping. The vessel's deep sonar system was upgraded to the Kongsberg EM122 in January 2010 and the shallow system to the EM710 in March 2012.

    The vessel deployed a Lacoste and Romberg spring-type gravity meter (S-33) from 2002 until March 2012 when it was replaced with a Bell Labs BGM-3 forced feedback-type gravity meter. Of considerable importance is that gravity tie-in logs were by and large inadequate for the rigorous removal of gravity drift and tares. Hence a best effort has been made to remove gravity meter drift via robust regression to satellite-derived gravity data. Regression slope and intercept are analogous to instrument drift and DC shift hence their removal markedly improves the agreement between shipboard and satellite gravity anomalies for most surveys. These drift corrections were applied to both observed gravity and free air anomaly fields. If the corrections are undesired by users, the correction coefficients have been supplied within the metadata headers for all gravity surveys, thereby allowing users to undo these drift corrections.
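As a numerical illustration of the drift removal described above (with ordinary least squares standing in for the robust regression actually used, and made-up values rather than survey data):

```python
import numpy as np

t = np.arange(10.0)                     # survey time (days)
satellite = 5.0 * np.sin(t)             # satellite-derived anomaly (mGal), made up
drift, dc = 0.8, 12.0                   # instrument drift (mGal/day) and DC shift
shipboard = satellite + drift * t + dc  # shipboard anomaly with drift + offset

# Regress the ship-minus-satellite difference on time:
# the slope is analogous to instrument drift, the intercept to the DC shift.
slope, intercept = np.polyfit(t, shipboard - satellite, 1)
corrected = shipboard - (slope * t + intercept)
# After removing the fitted trend, shipboard and satellite anomalies agree.
```

Because the fitted coefficients are supplied in the metadata headers, the same subtraction can be run in reverse to undo the correction.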

    The L&R gravity meter had a 180-second hardware filter, so for this analysis the data were Gaussian filtered a further 180 seconds and resampled at 10-second intervals. BGM-3 data are not hardware filtered, hence a 360-second Gaussian filter was applied for this analysis, and BGM-3 gravity anomalies were resampled at 15-second intervals. For both meter types, data gaps exceeding the filter length were not interpolated through. Eotvos corrections were computed via the standard formula (e.g., Dehlinger, 1978) and were subjected to the same filtering as the respective gravity meter data.
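For reference, the standard Eötvös correction alluded to above is commonly written as follows (transcribed from standard references, not from the archive files), in mGal, with ship speed \(V\) in knots, latitude \(\varphi\), and heading \(\alpha\) measured from true north:

```latex
E_{\mathrm{eotvos}} = 7.503\, V \cos\varphi \sin\alpha + 0.004154\, V^{2}
```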

    The vessel also deployed a Geometrics G-882 cesium vapor magnetometer on several expeditions. A Gaussian filter length of 135 seconds has been applied and resampling was performed at 15 second intervals with the same exception that no interpolation was performed through data gaps exceeding the filter length.

    Archive file production:

    At all depth, gravity and magnetic measurement times, vessel GPS navigation was resampled using linear interpolation as most geophysical measurement times did not exactly coincide with GPS position times. The geophysical fields were then merged with resampled vessel navigation and listed sequentially in the GMT DAT format to produce data records.

    Archive file header fields were populated with relevant information such as port names, PI names, instrument and data processing details, and others whereas survey geographic and temporal boundary fields were automatically computed from the data records.

    Archive file conversion:

    Once completed, each marine geophysical data exchange file was converted to the other formats using the Generic Mapping Tools program known as mgd77convert. For example, conversions to the other formats were carried out as follows:

    mgd77convert km0201.dat -Ft -Tm # gives mgd77t (m77t file extension)

    mgd77convert km0201.dat -Ft -Ta # gives mgd77

    mgd77convert km0201.dat -Ft -Tc # gives mgd77+ (nc file extension)

    Disclaimers:

    These data have not been edited in detail using a visual data editor and data outliers are known to exist. Several hardware malfunctions are known to have occurred during the 2002 to 2018 time frame and these malfunctions are apparent in some of the data sets. No guarantee is made that the data are accurate and they are not meant to be used for vessel navigation. Close scrutiny and further removal of outliers and other artifacts is recommended before making scientific determinations from these data.

    The archive file production method employed for this analysis is explained in detail by Hamilton et al (2019).

  15. SNP dataset, analysis scripts, and raw phylogenetic & phylogenomic trees for...

    • researchdata.edu.au
    Updated Jun 25, 2021
    Steven L. Chown; Helena Baird; Seunggwan Shin; Rolf G. Oberprieler; Maurice Hullé; Philippe Vernon; Katherine L. Moon; Richard H. Adams; Duane McKenna (2021). SNP dataset, analysis scripts, and raw phylogenetic & phylogenomic trees for the sub-Antarctic Ectemnorhinini weevils [Dataset]. http://doi.org/10.26180/14446023
    Explore at:
    Dataset updated
    Jun 25, 2021
    Dataset provided by
    Monash University
    Authors
    Steven L. Chown; Helena Baird; Seunggwan Shin; Rolf G. Oberprieler; Maurice Hullé; Philippe Vernon; Katherine L. Moon; Richard H. Adams; Duane McKenna
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data and files associated with the manuscript: Baird et al (2021) 'Fifty Million Years of Beetle Evolution Along the Antarctic Polar Front', published in PNAS.

    - Full, quality-filtered dataset of 5,859 genome-wide SNPs for the sub-Antarctic weevil Palirhoeus eatoni (Coleoptera: Curculionidae), provided in vcf format

    - A metadata spreadsheet (providing the collection details for every P. eatoni individual, corresponding to sample IDs in the SNP dataset)

    - R script used to filter the SNP dataset and perform basic SNP-based phylogeographic analyses

    - Phylogenetic trees in .newick format, both dated and undated, are provided for the sub-Antarctic Ectemnorhinini weevil tribe as inferred from concatenated COI, cytochrome b, 16S, 28S, and Elongation factor 1-alpha sequences

    - R script is provided for RPANDA macroevolutionary analyses, which are based on the dated phylogenetic tree generated in BEAST.

    - R script and input files (with prefix 'BGB') are provided for BioGeoBEARS biogeographic inference analysis

    - Phylogenomic trees in .nexus format are provided for representatives of the sub-Antarctic Ectemnorhinini together with a worldwide sample of Entiminae weevils, based on a concatenated dataset of 515 genes (1st and 2nd codons)

  16. Link to SeaSoar CTD Data from R/V Wecoma cruise W0005A in the Northeast...

    • search.dataone.org
    • bco-dmo.org
    Updated Dec 5, 2021
    Timothy Cowles; Jack Barth (2021). Link to SeaSoar CTD Data from R/V Wecoma cruise W0005A in the Northeast Pacific in 2000 as part of the U.S. GLOBEC program (NEP project) [Dataset]. https://search.dataone.org/view/http%3A%2F%2Flod.bco-dmo.org%2Fid%2Fdataset%2F2467
    Explore at:
    Dataset updated
    Dec 5, 2021
    Dataset provided by
    Biological and Chemical Oceanography Data Management Office (BCO-DMO)
    Authors
    Timothy Cowles; Jack Barth
    Description
    W0005 R/V Wecoma 29 May - 17 June 2000
     
    SeaSoar data from the U.S. GLOBEC Northeast Pacific Program
    are available from the SeaSoar Web Site
    at Oregon State University
    Contact Jack Barth at OSU (Phone: 541-737-1607; email barth@oce.orst.edu)
    
    SeaSoar data are available in two formats:
    
    \"1Hz data\" or \"gridded\".
    
    Each of these is described below.
    
    1Hz Data
    --------
    The *.dat2c files give final 1Hz SeaSoar CTD data.
    
    Here is the first line of inshore.line1.dat2c:
    
     44.64954 -125.25666 108.7 8.6551 33.5239 26.0164 8.6439 26.0181 155.64019
    000603152152 0001 0.069 0.288 0.476  0.23
    
    The format of the *.dat2c files is given by:
    
       col 1: latitude (decimal degrees) 
       col 2: longitude (decimal degrees)
       col 3: pressure (dbars)
       col 4: temperature (C) 
       col 5: salinity (psu) 
       col 6: Sigma-t (kg/cubic meter)
       col 7: potential temperature (C)
       col 8: sigma-theta (kg/cubic meter) 
       col 9: time (decimal year-day of 2000)
       col 10: date and time (integer year, month, day, hour, minute, second)
       col 11: flag
       col 12: PAR (volts)
       col 13: FPK010 FL (violet filter) (volts)
       col 14: FPK016 FL (green filter) (volts)
       col 15: chlorophyll-a (micro g/liter)
    
    The ones place of the flags variable indicates which of the
    two sensor pairs was selected as the preferred sensor, giving
    the values for T, S, and sigma-t:
    
         0 indicates use of sensor pair 1 (T1, C1)
         1 indicates use of sensor pair 2 (T2, C2)
    
    Voltage values (columns 12 - 14) are in the range of 0-5 volts.
    A value of 9.999 indicates "no value" for those columns
    
    Chlorophyll was calculated based on the voltage values of
    the green-filtered FPK016; if that FPK016 value was 9-filled, then the 
    chlorophyll value was set at 999.99; if the calibrated value 
    was negative (due to noise in the calibration) the chlorophyll 
    value was set at 0.00; otherwise the calibration equation
    used was:
    
        chl_a = 7.6727(volts) - 3.4208
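The calibration rules above (9-filled values flagged as 999.99, negative calibrated values clamped to 0.00, otherwise the linear equation) can be written compactly. This is an illustrative Python sketch, not the original processing code:

```python
def chlorophyll(volts):
    """Chlorophyll-a (micro g/liter) from the green-filtered FPK016 voltage."""
    if volts == 9.999:                # 9-filled: no value recorded
        return 999.99
    chl_a = 7.6727 * volts - 3.4208   # calibration equation
    return max(chl_a, 0.00)           # clamp negative values (calibration noise) to 0.00
```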
    
    Gridded Data
    ------------
    The *1.25km files give the final SeaSoar CTD data gridded
    at a spacing of 1.25 km in the horizontal, and 2 db in the 
    vertical. In general this was used for the mapping surveys
    that were on the continental shelf.
    
    The *2.5km files give the final SeaSoar CTD data gridded at
    a spacing of 2.5 km in the horizontal (and 2 db in the
    vertical). These were used for the deeper, offshore survey.
    
    Here is the first line of inshore.line1.1.25km:
    
      6.25  155.92008    44.651726   -124.13853    1.0  9  9.5228777 
       33.127800    25.569221    240.63866    9.5227690   
    0.24063867E-01  3.7872221    1.1320001   0.78988892  
    
    The format of the *km files is given by:
    
       col 1 = distance       (km)
       col 2 = julian day + fractional day (noon, Jan 1 = 1.5)
       col 3 = latitude       (decimal degrees)
       col 4 = longitude       (decimal degrees)
       col 5 = pressure       (dbar)
       col 6 = count
       col 7 = temperature      (degrees C)
       col 8 = salinity       (psu)
       col 9 = density (sigma-t)   (kg/cubic meter)
       col 10 = specific vol anomaly (1.0E-8 cubic meter/kg)
       col 11 = potential temperature (degrees C)
       col 12 = dynamic height    (dynamic meters)
       col 13 = PAR          (volts)
       col 14 = FPK010        (volts)  (violet filter)
       col 15 = FPK016        (volts)  (green filter)
    
    \"missing data\" was set at 1.0e35
    
    columns 1 - 4 give the average location and time of the
    values contained in the column at that location. The
    column gives values for every two dbars of depth, starting at 
    1db and extending down to a value at 121 db. The column
    then shifts to the next location, 1.25km further along the
    line. If we are working with the 2.5km sections, then the column
    goes down to a value of 329 db, and the next column then shifts
    2.5km further along the line.
    
    For the E-W lines, column 1 gives the distance from the coastline;
    for the N-S lines, column 1 gives the distance from southernmost point.
    
    column 6 (count) gives the number of samples in that 2db bin
    
  17. Filtered NARS Occurrence Data for Range Maps

    • figshare.com
    application/csv
    Updated Apr 8, 2024
    Ethan Brown; Ronald Hellenthal; Michael Mahon; Samantha Rumschlag; Jason R Rohr (2024). Filtered NARS Occurrence Data for Range Maps [Dataset]. http://doi.org/10.6084/m9.figshare.25517455.v1
    Explore at:
    application/csv (available download formats)
    Dataset updated
    Apr 8, 2024
    Dataset provided by
    Figsharehttp://figshare.com/
    Authors
    Ethan Brown; Ronald Hellenthal; Michael Mahon; Samantha Rumschlag; Jason R Rohr
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    CSV file containing the filtered benthic macroinvertebrate occurrence data used to generate range maps. NARS source data available through the NARS Data Download Tool (https://owshiny.epa.gov/nars-data-download/).

  18. Meta-Analysis and modeling of vegetated filter removal of sediment using...

    • catalog.data.gov
    Updated Nov 22, 2021
    + more versions
    U.S. EPA Office of Research and Development (ORD) (2021). Meta-Analysis and modeling of vegetated filter removal of sediment using global dataset [Dataset]. https://catalog.data.gov/dataset/meta-analysis-and-modeling-of-vegetated-filter-removal-of-sediment-using-global-dataset
    Explore at:
    Dataset updated
    Nov 22, 2021
    Dataset provided by
    United States Environmental Protection Agencyhttp://www.epa.gov/
    Description

    Data on vegetated filter strips, sediment loading into and out of riparian corridors/buffers (VFS), removal efficiency of sediment, meta-analysis of removal efficiencies, dimensional analysis of predictor variables, and regression modeling of VFS removal efficiencies. This dataset is associated with the following publication: Ramesh, R., L. Kalin, M. Hantush, and A. Chaudhary. A secondary assessment of sediment trapping effectiveness by vegetated buffers. ECOLOGICAL ENGINEERING. Elsevier Science Ltd, New York, NY, USA, 159: 106094, (2021).

  19. HOMICIDE FILTER

    • data.cityofchicago.org
    Updated Jul 30, 2025
    + more versions
    Chicago Police Department (2025). HOMICIDE FILTER [Dataset]. https://data.cityofchicago.org/Public-Safety/HOMICIDE-FILTER/4ser-6e2h
    Explore at:
    application/rssxml, csv, xml, tsv, application/rdfxml, kml, kmz, application/geo+json (available download formats)
    Dataset updated
    Jul 30, 2025
    Authors
    Chicago Police Department
    Description

    This dataset reflects reported incidents of crime (with the exception of murders where data exists for each victim) that occurred in the City of Chicago from 2001 to present, minus the most recent seven days. Data is extracted from the Chicago Police Department's CLEAR (Citizen Law Enforcement Analysis and Reporting) system. In order to protect the privacy of crime victims, addresses are shown at the block level only and specific locations are not identified. Should you have questions about this dataset, you may contact the Research & Development Division of the Chicago Police Department at 312.745.6071 or RandD@chicagopolice.org. Disclaimer: These crimes may be based upon preliminary information supplied to the Police Department by the reporting parties that have not been verified. The preliminary crime classifications may be changed at a later date based upon additional investigation and there is always the possibility of mechanical or human error. Therefore, the Chicago Police Department does not guarantee (either expressed or implied) the accuracy, completeness, timeliness, or correct sequencing of the information and the information should not be used for comparison purposes over time. The Chicago Police Department will not be responsible for any error or omission, or for the use of, or the results obtained from the use of this information. All data visualizations on maps should be considered approximate and attempts to derive specific addresses are strictly prohibited. The Chicago Police Department is not responsible for the content of any off-site pages that are referenced by or that reference this web page other than an official City of Chicago or Chicago Police Department web page. The user specifically acknowledges that the Chicago Police Department is not responsible for any defamatory, offensive, misleading, or illegal conduct of other users, links, or third parties and that the risk of injury from the foregoing rests entirely with the user. 
The unauthorized use of the words "Chicago Police Department," "Chicago Police," or any colorable imitation of these words or the unauthorized use of the Chicago Police Department logo is unlawful. This web page does not, in any way, authorize such use. Data is updated daily Tuesday through Sunday. The dataset contains more than 65,000 records/rows of data and cannot be viewed in full in Microsoft Excel. Therefore, when downloading the file, select CSV from the Export menu. Open the file in an ASCII text editor, such as Wordpad, to view and search. To access a list of Chicago Police Department - Illinois Uniform Crime Reporting (IUCR) codes, go to http://data.cityofchicago.org/Public-Safety/Chicago-Police-Department-Illinois-Uniform-Crime-R/c7ck-438e

  20. Open data: Visual load effects on the auditory steady-state responses to...

    • su.figshare.com
    • researchdata.se
    txt
    Updated May 30, 2023
    Stefan Wiens; Malina Szychowska (2023). Open data: Visual load effects on the auditory steady-state responses to 20-, 40-, and 80-Hz amplitude-modulated tones [Dataset]. http://doi.org/10.17045/sthlmuni.12582002.v1
    Explore at:
    txt (available download formats)
    Dataset updated
    May 30, 2023
    Dataset provided by
    Stockholm University
    Authors
    Stefan Wiens; Malina Szychowska
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The main results files are saved separately:

    - ASSR2.html: R output of the main analyses (N = 33)
    - ASSR2_subset.html: R output of the main analyses for the smaller sample (N = 25)

    FIGSHARE METADATA

    Categories: Biological psychology; Neuroscience and physiological psychology; Sensory processes, perception, and performance
    Keywords: crossmodal attention; electroencephalography (EEG); early-filter theory; task difficulty; envelope following response
    References: https://doi.org/10.17605/OSF.IO/6FHR8; https://github.com/stamnosslin/mn; https://doi.org/10.17045/sthlmuni.4981154.v3; https://biosemi.com/; https://www.python.org/; https://mne.tools/stable/index.html#; https://www.r-project.org/; https://rstudio.com/products/rstudio/

    GENERAL INFORMATION

    1. Title of dataset: Open data: Visual load effects on the auditory steady-state responses to 20-, 40-, and 80-Hz amplitude-modulated tones
    2. Author information:
    - Principal investigator: Stefan Wiens, Department of Psychology, Stockholm University, Sweden (https://www.su.se/profiles/swiens-1.184142; sws@psychology.su.se)
    - Co-investigator: Malina Szychowska, Department of Psychology, Stockholm University, Sweden (https://www.researchgate.net/profile/Malina_Szychowska; malina.szychowska@psychology.su.se)
    3. Date of data collection: subjects (N = 33) were tested between 2019-11-15 and 2020-03-12
    4. Geographic location of data collection: Department of Psychology, Stockholm, Sweden
    5. Funding: Swedish Research Council (Vetenskapsrådet) 2015-01181

    SHARING/ACCESS INFORMATION

    1. License: CC BY 4.0
    2. Publications that cite or use the data: Szychowska, M., & Wiens, S. (2020). Visual load effects on the auditory steady-state responses to 20-, 40-, and 80-Hz amplitude-modulated tones. Submitted manuscript. The study was preregistered: https://doi.org/10.17605/OSF.IO/6FHR8
    3. Other publicly accessible locations of the data: N/A
    4. Links/relationships to ancillary data sets: N/A
    5. Was data derived from another source? No
    6. Recommended citation for this dataset: Wiens, S., & Szychowska, M. (2020). Open data: Visual load effects on the auditory steady-state responses to 20-, 40-, and 80-Hz amplitude-modulated tones. Stockholm: Stockholm University. https://doi.org/10.17045/sthlmuni.12582002

    DATA & FILE OVERVIEW

    The files contain the raw data, scripts, and results of main and supplementary analyses of an electroencephalography (EEG) study. Links to the hardware and software are provided under methodological information.

    - ASSR2_experiment_scripts.zip: Python files to run the experiment
    - ASSR2_rawdata.zip: raw data files for each subject (data_EEG: EEG data in bdf format, generated by Biosemi; data_log: logfiles of the EEG session, generated by Python)
    - ASSR2_EEG_scripts.zip: Python-MNE scripts to process the EEG data
    - ASSR2_EEG_preprocessed_data.zip: EEG data in fif format after preprocessing with the Python-MNE scripts
    - ASSR2_R_scripts.zip: R scripts to analyze the data, together with the main datafiles; the main files in the folder are ASSR2.html (R output of the main analyses) and ASSR2_subset.html (R output of the main analyses after excluding eight subjects who were recorded as pilots before the study was preregistered)
    - ASSR2_results.zip: all figures and tables created by Python-MNE and R

    METHODOLOGICAL INFORMATION

    1. Data collection: The auditory stimuli were amplitude-modulated tones with a carrier frequency (fc) of 500 Hz and modulation frequencies (fm) of 20.48 Hz, 40.96 Hz, or 81.92 Hz. The experiment was programmed in Python (https://www.python.org/) with extra functions from https://github.com/stamnosslin/mn. The EEG data were recorded with an Active Two BioSemi system (BioSemi, Amsterdam, Netherlands; www.biosemi.com) and saved in .bdf format. For more information, see the linked publication.
    2. Data processing: frequency analyses were conducted and event-related potentials computed; see the linked publication
    3. Software needed to interpret the data: MNE-Python (Gramfort, A., et al., 2013; https://mne.tools/stable/index.html#); RStudio with R (R Core Team, 2020; https://rstudio.com/products/rstudio/); Wiens, S. (2017). Aladins Bayes Factor in R (Version 3). https://www.doi.org/10.17045/sthlmuni.4981154.v3
    4. Standards and calibration information: see the linked publication
    5. Environmental/experimental conditions: see the linked publication
    6. Quality-assurance procedures: see the linked publication
    7. People involved: data collection by Malina Szychowska with assistance from Jenny Arctaedius; data processing, analysis, and submission by Malina Szychowska and Stefan Wiens

    DATA-SPECIFIC INFORMATION

    All relevant information can be found in the MNE-Python and R scripts (in the EEG_scripts and analysis_scripts folders) that process the raw data; for example, notes explain what the different variables mean.
