56 datasets found
  1. Supplement 1. R code for fitting the random-walk state-space model using...

    • wiley.figshare.com
    html
    Updated May 31, 2023
    Cite
    Jonas Knape; Perry de Valpine (2023). Supplement 1. R code for fitting the random-walk state-space model using particle filter MCMC. [Dataset]. http://doi.org/10.6084/m9.figshare.3552534.v1
    Explore at:
    html (available download formats)
    Dataset updated
    May 31, 2023
    Dataset provided by
    Wiley
    Authors
    Jonas Knape; Perry de Valpine
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    File List
    adaptiveMH.r (md5: 1c7f3697e28dca0aceda63360930e29f)
    adaptiveMHfuns.r (md5: cabc33a60ab779b954d853816c9e3cce)
    PF.r (md5: eff6f6611833c86c1d1a8e8135af7e04)

    Description
      adaptiveMH.r – Contains a script for fitting a random-walk model with drift for kangaroo population dynamics on the log scale, using particle-filter Metropolis-Hastings with an initial adaptive phase.
      adaptiveMHfuns.r – Contains functions that are used for estimating and handling the normal mixture proposals.
      PF.r – Contains functions that perform the particle filtering and define the model.
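    The particle-filter MCMC approach these scripts implement can be illustrated with a minimal sketch. This is Python rather than the supplement's R, and the function names, fixed variance parameters, and flat prior are assumptions of this sketch, not the authors' code:

```python
import math
import random

def particle_loglik(y, drift, sig_proc, sig_obs, n_particles=200, seed=0):
    """Bootstrap particle filter estimate of the log-likelihood for a
    random-walk-with-drift state-space model on the log scale:
        x_t = x_{t-1} + drift + N(0, sig_proc^2)   (latent log-abundance)
        y_t = x_t + N(0, sig_obs^2)                (log-scale observation)
    """
    rng = random.Random(seed)
    parts = [y[0] + rng.gauss(0, sig_obs) for _ in range(n_particles)]
    loglik = 0.0
    for obs in y[1:]:
        # propagate particles through the process model
        parts = [x + drift + rng.gauss(0, sig_proc) for x in parts]
        # weight each particle by the observation density
        w = [math.exp(-0.5 * ((obs - x) / sig_obs) ** 2) /
             (sig_obs * math.sqrt(2 * math.pi)) for x in parts]
        loglik += math.log(max(sum(w) / n_particles, 1e-300))
        # multinomial resampling
        parts = rng.choices(parts, weights=w, k=n_particles)
    return loglik

def pmmh(y, n_iter=200, step=0.05, seed=1):
    """Particle-marginal Metropolis-Hastings over the drift parameter
    (flat prior; process/observation sds held fixed for brevity)."""
    rng = random.Random(seed)
    drift = 0.0
    ll = particle_loglik(y, drift, 0.1, 0.1, seed=rng.randrange(10**6))
    chain = []
    for _ in range(n_iter):
        prop = drift + rng.gauss(0, step)
        ll_prop = particle_loglik(y, prop, 0.1, 0.1, seed=rng.randrange(10**6))
        # accept with probability min(1, ratio of likelihood estimates)
        if math.log(rng.random()) < ll_prop - ll:
            drift, ll = prop, ll_prop
        chain.append(drift)
    return chain
```

    The key property of this scheme is that an unbiased particle-filter estimate of the likelihood can stand in for the exact likelihood inside the Metropolis-Hastings acceptance ratio; the supplement's adaptive phase (normal mixture proposals) is omitted here.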
    
  2. The Global Baghouse Filter - Fabric Dust Collector market size is USD 2815.2...

    • cognitivemarketresearch.com
    pdf, excel, csv, ppt
    Updated Jun 27, 2024
    Cite
    Cognitive Market Research (2024). The Global Baghouse Filter - Fabric Dust Collector market size is USD 2815.2 million in 2024. [Dataset]. https://www.cognitivemarketresearch.com/baghouse-filter-fabric-dust-collector-market-report
    Explore at:
    pdf, excel, csv, ppt (available download formats)
    Dataset updated
    Jun 27, 2024
    Dataset authored and provided by
    Cognitive Market Research
    License

    https://www.cognitivemarketresearch.com/privacy-policy

    Time period covered
    2021 - 2033
    Area covered
    Global
    Description

    According to Cognitive Market Research, the global Baghouse Filter - Fabric Dust Collector market size is USD 2815.2 million in 2024. It will expand at a compound annual growth rate (CAGR) of 8.00% from 2024 to 2031.

    North America held the major market share for more than 40% of the global revenue with a market size of USD 1126.08 million in 2024 and will grow at a compound annual growth rate (CAGR) of 6.2% from 2024 to 2031.
    Europe accounted for a market share of over 30% of the global revenue with a market size of USD 844.56 million.
    Asia Pacific held a market share of around 23% of the global revenue with a market size of USD 647.50 million in 2024 and will grow at a compound annual growth rate (CAGR) of 10.0% from 2024 to 2031.
    Latin America had a market share of more than 5% of the global revenue with a market size of USD 140.76 million in 2024 and will grow at a compound annual growth rate (CAGR) of 7.4% from 2024 to 2031.
    Middle East and Africa had a market share of around 2% of the global revenue and was estimated at a market size of USD 56.30 million in 2024 and will grow at a compound annual growth rate (CAGR) of 7.7% from 2024 to 2031.
    The Power Plant held the highest Baghouse Filter - Fabric Dust Collector market revenue share in 2024.
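    As a reading aid for the figures above, compounding a base-year market size forward at a stated CAGR is simple arithmetic (illustrative only; the report's own projections may use different base years or rounding):

```python
def project(size, cagr, years):
    """Compound a base-year market size forward: size * (1 + cagr) ** years."""
    return size * (1 + cagr) ** years

# Global market of USD 2815.2 million in 2024 at an 8.00% CAGR over 2024-2031
global_2031 = project(2815.2, 0.08, 7)
# Regional example: North America, USD 1126.08 million at 6.2% over the same span
na_2031 = project(1126.08, 0.062, 7)
```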
    

    Market Dynamics of Baghouse Filter - Fabric Dust Collector Market

    Key Drivers for Baghouse Filter - Fabric Dust Collector Market

    Rise of Sustainable Technology to Increase the Demand Globally

    Energy-efficient baghouse dust collectors are increasingly in demand due to tighter environmental regulations and rising energy costs. Compared with traditional shaking or reverse-air cleaning, manufacturers are focusing on newer technologies such as pulse-jet cleaning systems, which use short bursts of compressed air to minimize energy use. There is also growing demand for fabric materials with longer lifespans and better filtration efficiency, which reduce waste production and maintenance costs.

    Strict Environment Regulation and Industrialization to Propel Market Growth

    Baghouse filters are increasingly popular due to stricter environmental rules governing emissions management and air quality. Compliance with emission standards, which aim to minimize the discharge of particulate matter and other pollutants into the atmosphere, mandates the use of efficient air pollution control devices such as baghouse filters. Demand is also driven by the growth of industrial activity across sectors such as manufacturing, power generation, mining, and chemical processing: the need to regulate emissions and preserve ambient air quality grows with the scale of industrial output.

    Restraint Factor for the Baghouse Filter - Fabric Dust Collector Market

    High Cost to Limit the Sales

    Purchasing and installing baghouse filtration systems can require a considerable initial outlay, particularly for large industrial plants. This cost can discourage some businesses, especially smaller ones, from adopting baghouse filters if they judge the upfront investment too high. And although baghouse filters are generally considered cost-effective over their operating life, ongoing maintenance and running costs can mount: routine cleaning, filter replacement, and the energy needed for operation may prove burdensome for certain sectors.

    Impact of Covid-19 on the Baghouse Filter - Fabric Dust Collector Market

    The COVID-19 pandemic affected the baghouse filter business in both positive and negative ways. The pandemic raised public awareness of the significance of both indoor and outdoor air quality, and this awareness could increase demand for baghouse filters and other air pollution control technology as industries look to improve the air inside their buildings and in the surrounding areas. Furthermore, the heightened focus on workplace safety and cleanliness to stop the spread of COVID-19 makes businesses likely to invest in reducing airborne pollutants such as dust and particulate matter; by capturing airborne contaminants, baghouse filters can help create safer and cleaner work environments. On the other hand, the slowdown in manufacturing activity weighed significantly on market expansion. Introduction of...

  3. Table_2_webGQT: A Shiny Server for Genotype Query Tools for Model-Based...

    • frontiersin.figshare.com
    xlsx
    Updated Jun 2, 2023
    + more versions
    Cite
    Meharji Arumilli; Ryan M. Layer; Marjo K. Hytönen; Hannes Lohi (2023). Table_2_webGQT: A Shiny Server for Genotype Query Tools for Model-Based Variant Filtering.xlsx [Dataset]. http://doi.org/10.3389/fgene.2020.00152.s004
    Explore at:
    xlsx (available download formats)
    Dataset updated
    Jun 2, 2023
    Dataset provided by
    Frontiers
    Authors
    Meharji Arumilli; Ryan M. Layer; Marjo K. Hytönen; Hannes Lohi
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Summary: Genotype Query Tools (GQT) were developed to discover disease-causing variants among billions of genotypes and millions of genomes, processing data at substantially higher speed than other existing methods. While GQT has been available to a wide audience as command-line software, the difficulty of constructing queries has limited its uptake among non-IT and non-bioinformatics researchers. To overcome this limitation, we developed webGQT, an easy-to-use tool with a graphical user interface. With pre-built queries across three modules, webGQT supports pedigree analysis, case-control studies, and population frequency studies, allowing researchers with little or no applied bioinformatics/IT experience to mine potential disease-causing variants from billions of genotypes.

    Results: webGQT offers a flexible and easy-to-use interface for model-based candidate variant filtering for Mendelian diseases from thousands to millions of genomes at reduced computation time. Additionally, webGQT provides adjustable parameters to reduce false positives and rescue missing genotypes across all modules. Using a case study, we demonstrate the applicability of webGQT to query non-human genomes. We also demonstrate its scalability on large data sets by implementing complex population-specific queries on the 1000 Genomes Project Phase 3 data set, which includes 8.4 billion variants from 2504 individuals across 26 populations. Furthermore, webGQT supports filtering single-nucleotide variants, short insertions/deletions, copy number variants, or any other variant genotypes supported by the VCF specification. Our results show that webGQT can be used as an online web service, or deployed on personal computers or local servers within research groups.

    Availability: webGQT is made available in three forms: 1) as a web server at https://vm1138.kaj.pouta.csc.fi/webgqt/, 2) as an R package to install on personal computers, and 3) as part of the same R package to configure on the user's own servers. The application is available for installation at https://github.com/arumds/webgqt.
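    The kind of model-based filtering webGQT exposes can be illustrated with a toy example. This Python sketch uses a hypothetical genotype encoding and function name; it is not the webGQT/GQT implementation, which operates on indexed VCF data at scale:

```python
def recessive_candidates(variants, affected, parents):
    """Toy model-based variant filter in the spirit of GQT-style pedigree
    queries (illustrative only, not the webGQT implementation): keep
    variants where the affected sample is homozygous-alt (encoded 2)
    and both parents are heterozygous carriers (encoded 1)."""
    return [v for v in variants
            if v["gt"][affected] == 2
            and all(v["gt"][p] == 1 for p in parents)]

# Hypothetical trio with two candidate sites
variants = [
    {"id": "chr1:1000", "gt": {"child": 2, "mom": 1, "dad": 1}},
    {"id": "chr1:2000", "gt": {"child": 1, "mom": 1, "dad": 0}},
]
hits = recessive_candidates(variants, "child", ["mom", "dad"])  # keeps chr1:1000
```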

  4. THEMIS-B Digital Fields Board, Filter Bank

    • hpde.io
    Updated May 5, 2019
    + more versions
    Cite
    (2019). THEMIS-B Digital Fields Board, Filter Bank [Dataset]. https://hpde.io/SMWG/Instrument/THEMIS/B/FBK.html
    Explore at:
    Dataset updated
    May 5, 2019
    License

    https://cdla.io/permissive-1-0/

    Description

    The Filter Bank is part of the Digital Fields Board and provides band-pass filtering for EFI and SCM spectra as well as E12HF peak and average values. It provides band-pass filtered spectra that are less computationally and power intensive than the FFT would provide. The process is as follows: signals are fed to the Filter Bank via a low-pass FIR filter with a cut-off frequency half that of the original signal maximum. The output is passed to the band-pass filters, is differenced from the original signal, then the absolute value of the data is taken and averaged. The output from the first low-pass filter is also sent to a second FIR filter with 2:1 decimation, and this output is then fed back through the system. The cascade runs 12 cycles for input at 8,192 samples/s and 13 for input at 16,384 samples/s (EAC input only), reducing the signal (and computing power) by a factor of 2 at each cascade. Each cascade produces a set of data at a sampling frequency of 2^n, from 2 Hz up to the initial sampling frequency (frequency characteristics for each step are shown below in Table 1). The average from the Filter Bank is compressed to 8 bits with a pseudo-logarithmic encoder. Analog signals sent to the FBK are E12DC and SCM1. The average of the coupled E12HF signal and its peak value are recorded over 62.5 ms windows (i.e. a 16 Hz sampling rate). Accumulation of values from the signal over 31.25 ms windows is performed externally. Sensor and electronics design provided by UCB (J. W. Bonnell, F. S. Mozer), Digital Fields Board provided by LASP (R. Ergun), search coil data provided by CETP (A. Roux).
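    The cascade logic can be sketched numerically. This is a toy Python illustration of the decimating band-pass idea only; the boxcar FIR, stage count, and averaging details are assumptions of this sketch, not the flight hardware's filters:

```python
def moving_average(x, n=4):
    """Crude low-pass FIR (boxcar) filter with ramp-in at the start."""
    return [sum(x[max(0, i - n + 1): i + 1]) / min(i + 1, n)
            for i in range(len(x))]

def filter_bank(signal, n_cascades=4):
    """Cascaded band-pass averages, in the spirit of the description:
    at each stage, band-pass = input - low-pass; record the mean of
    |band-pass|; then decimate the low-pass output 2:1 (halving the
    sample rate) and feed it back through the same stage."""
    bands = []
    x = list(signal)
    for _ in range(n_cascades):
        lp = moving_average(x)
        band = [a - b for a, b in zip(x, lp)]      # difference from input
        bands.append(sum(abs(v) for v in band) / len(band))
        x = lp[::2]                                 # 2:1 decimation
        if len(x) < 2:
            break
    return bands
```

    Because each stage halves the sample rate, band k summarizes roughly one octave of the spectrum, which is why the cascade is so much cheaper than a full FFT.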

  5. Data from: Comparison of capture and storage methods for aqueous macrobial...

    • zenodo.org
    • datadryad.org
    txt
    Updated May 29, 2022
    Cite
    Johan Spens; Alice R. Evans; David Halfmaerten; Steen W. Knudsen; Mita E. Sengupta; Sarah S. T. Mak; Eva E. Sigsgaard; Micaela Hellström (2022). Data from: Comparison of capture and storage methods for aqueous macrobial eDNA using an optimized extraction protocol: advantage of enclosed filter [Dataset]. http://doi.org/10.5061/dryad.p2q4r
    Explore at:
    txtAvailable download formats
    Dataset updated
    May 29, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Johan Spens; Alice R. Evans; David Halfmaerten; Steen W. Knudsen; Mita E. Sengupta; Sarah S. T. Mak; Eva E. Sigsgaard; Micaela Hellström
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Aqueous environmental DNA (eDNA) is an emerging efficient non-invasive tool for species inventory studies. To maximize performance of downstream quantitative PCR (qPCR) and next-generation sequencing (NGS) applications, quality and quantity of the starting material is crucial, calling for optimized capture, storage and extraction techniques of eDNA. Previous comparative studies for eDNA capture/storage have tested precipitation and 'open' filters. However, practical 'enclosed' filters which reduce unnecessary handling have not been included. Here, we fill this gap by comparing a filter capsule (Sterivex-GP polyethersulfone, pore size 0·22 μm, hereafter called SX) with commonly used methods. Our experimental set-up, covering altogether 41 treatments combining capture by precipitation or filtration with different preservation techniques and storage times, sampled one single lake (and a fish-free control pond). We selected documented capture methods that have successfully targeted a wide range of fauna. The eDNA was extracted using an optimized protocol modified from the DNeasy® Blood & Tissue kit (Qiagen). We measured total eDNA concentrations and Cq-values (cycles used for DNA quantification by qPCR) to target specific mtDNA cytochrome b (cyt b) sequences in two local keystone fish species. SX yielded higher amounts of total eDNA along with lower Cq-values than polycarbonate track-etched filters (PCTE), glass fibre filters (GF) or ethanol precipitation (EP). SX also generated lower Cq-values than cellulose nitrate filters (CN) for one of the target species. DNA integrity of SX samples did not decrease significantly after 2 weeks of storage in contrast to GF and PCTE. Adding preservative before storage improved SX results. In conclusion, we recommend SX filters (originally designed for filtering micro-organisms) as an efficient capture method for sampling macrobial eDNA. Ethanol or Longmire's buffer preservation of SX immediately after filtration is recommended. 
Preserved SX capsules may be stored at room temperature for at least 2 weeks without significant degradation. Reduced handling and less exposure to outside stress compared with other filters may contribute to better eDNA results. SX capsules are easily transported and enable eDNA sampling in remote and harsh field conditions as samples can be filtered/preserved on site.

  6. Aircraft Filters Market Size Worth $1,040.4 Million By 2028 | CAGR: 4.3%

    • polarismarketresearch.com
    Updated Jan 2, 2025
    Cite
    Polaris Market Research (2025). Aircraft Filters Market Size Worth $1,040.4 Million By 2028 | CAGR: 4.3% [Dataset]. https://www.polarismarketresearch.com/press-releases/aircraft-filters-market
    Explore at:
    Dataset updated
    Jan 2, 2025
    Dataset provided by
    Polaris Market Research & Consulting
    Authors
    Polaris Market Research
    License

    https://www.polarismarketresearch.com/privacy-policy

    Description

    The global aircraft filters market size is expected to reach USD 1,040.4 million by 2028 according to a new study by Polaris Market Research.

  7. (high-temp) No 3. Filtering: (16S rRNA/ITS) Output

    • search.dataone.org
    • smithsonian.figshare.com
    Updated Aug 15, 2024
    Cite
    Jarrod Scott (2024). (high-temp) No 3. Filtering: (16S rRNA/ITS) Output [Dataset]. https://search.dataone.org/view/urn%3Auuid%3A1a55e979-6a62-4c6c-b738-8288b98deac1
    Explore at:
    Dataset updated
    Aug 15, 2024
    Dataset provided by
    Smithsonian Research Data Repository
    Authors
    Jarrod Scott
    Description

    Output files from the No 3. Filtering Workflow page of the SWELTR high-temp study. ASV filtering for 16S rRNA & ITS using a) Arbitrary filtering, b) PERFect (PERmutation Filtering test for microbiome data), and c) PIME (Prevalence Interval for Microbiome Evaluation).

    Workflow objects:

    filtering_wf.rdata: contains all variables and phyloseq objects from 16S rRNA and ITS ASV filtering. To see the objects, in R run load("filtering_wf.rdata", verbose=TRUE)

    Additional files:

    For convenience, we also include individual phyloseq objects for each filtered data set.

    Arbitrary:

    ssu18_ps_filt.rds: phyloseq object for Arbitrary filtered 16S rRNA ASVs.
    its18_ps_filt.rds: phyloseq object for Arbitrary filtered ITS ASVs.

    PERFect:

    ssu18_ps_perfect.rds: phyloseq object for PERFect filtered 16S rRNA ASVs.
    its18_ps_perfect.rds: phyloseq object for PERFect filtered ITS ASVs.

    PIME:

    ssu18_ps_pime.rds: phyloseq object for PIME filtered 16S rRNA ASVs.
    its18_ps_pime.rds: phyloseq object for PIME filtered ITS ASVs.

    Source code for the workflow can be found here:

    https://github.com/sweltr/high-temp/blob/master/filtering.Rmd

  8. Filter Pad absorption measurements of suspended particulate matter - data...

    • data.aad.gov.au
    • researchdata.edu.au
    • +2more
    Updated Aug 27, 2010
    Cite
    SCHWARZ, JILL (2010). Filter Pad absorption measurements of suspended particulate matter - data from the BROKE-West voyage of the Aurora Australis, 2006 [Dataset]. http://doi.org/10.4225/15/598d150b8150c
    Explore at:
    Dataset updated
    Aug 27, 2010
    Dataset provided by
    Australian Antarctic Division (https://www.antarctica.gov.au/)
    Australian Antarctic Data Centre
    Authors
    SCHWARZ, JILL
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jan 9, 2006 - Feb 28, 2006
    Area covered
    Description

    Particulates in the water were concentrated onto 25mm glass fibre filters.

    Light transmission and reflection through the filters was measured using a spectrophotometer to yield spectral absorption coefficients.

    Data Acquisition:

    Water samples were taken from Niskin bottles mounted on the CTD rosette. Two or three depths were selected at each station, using the CTD fluorometer profile to identify the depth of maximum fluorescence and a depth below it. One sample was always taken at 10 m, provided water was available, as a reference depth for comparisons with satellite data (remote sensing international standard). Water sampling was carried out after other groups, leading to a considerable time delay of between half an hour and 3 hours, during which particulates are likely to have sedimented within the Niskin bottle and algae to have photoadapted to the dark. To minimise problems of sedimentation, as large a sample as practical was taken. Often so little water remained in the Niskin bottle that the entire remnant was taken. Where less than one litre remained, leftover sample water was taken from the HPLC group. Water samples were filtered through 25 mm diameter GF/F filters under a low vacuum (less than 5 mmHg), in the dark. Filters were stored in tissue capsules in liquid nitrogen and transported to the lab for analysis after the cruise. Three water samples were filtered through GF/F filters under gravity, with two 30 ml pre-rinses to remove organic substances from the filter, and brought to the laboratory for further filtration through 0.2 micron membrane filters.

    Filters were analysed in batches of 3 to 7, with all depths at each station being analysed within the same batch to ensure comparability. Filters were removed one batch at a time and placed on ice in the dark. Once defrosted, the filters were placed upon a drop of filtered seawater in a clean petri dish and returned to cold, dark conditions. One by one, the filters were placed on a clean glass plate and scanned from 200 to 900 nm in a spectrophotometer equipped with an integrating sphere. A fresh baseline was taken with each new batch using two blank filters from the same batch as the sample filters, soaked in filtered seawater. After scanning, the filters were placed on a filtration manifold, soaked in methanol for between 1 and 2 hours to extract pigments, and rinsed with filtered seawater. They were then scanned again against blanks soaked in methanol and rinsed in filtered seawater.

    Data Processing:

    The initial scan of total particulate matter, ap, and the second scan of non-pigmented particles, anp, were corrected for baseline wandering by setting the near-infrared absorption to zero.

    This technique requires correction for enhanced scattering within the filter, which has been reported to vary with species. One dilution series was carried out at station 118 to allow calculation of the correction (beta-factor). Since it is debatable whether this factor will be applicable to all samples, no correction has been applied to the dataset. Potential users should contact JSchwarz for advice on this matter when using the data quantitatively.

    Not yet complete:

    Comparison of the beta-factor calculated for station 118 with the literature values.

    Comparison of phytoplankton populations from station 118 with those found at other stations to evaluate the applicability of the beta-factor.

    Dataset Format:

    Two files: phyto_absorp_brokew.txt and phyto_absorp_brokew_2.txt: covering stations 4 to 90 and 91 to 118, respectively. Note that not every station was sampled.

    File format: Matlab-readable ascii text with 3 'header' lines:
    Row 1: col. 1 = -999, col. 2 to end = ctd number
    Row 2: col. 1 = -999, col. 2 to end = sample depth in metres
    Row 3: col. 1 = -999, col. 2 to end = 1 for total absorption by particulates, 2 for absorption by non-pigmented particles
    Row 4 to end: col. 1 = wavelength in nanometres, col. 2 to end = absorption coefficient corresponding to the station, depth and type given in rows 1 to 3 of the same column.
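    Under the header layout described above, a minimal reader might look like this. The function name and record layout are illustrative assumptions of this Python sketch, and real files should be checked against the header spec:

```python
def read_absorption(path):
    """Parse the whitespace-delimited ascii layout described above:
    three header rows (ctd number, sample depth, type flag) followed by
    wavelength + absorption-coefficient rows. Column 1 of each header
    row holds the -999 placeholder and is dropped."""
    with open(path) as f:
        rows = [[float(v) for v in line.split()] for line in f if line.strip()]
    ctd, depth, kind = (r[1:] for r in rows[:3])
    records = []
    for row in rows[3:]:
        wavelength, coeffs = row[0], row[1:]
        for c, d, k, a in zip(ctd, depth, kind, coeffs):
            records.append({"ctd": c, "depth_m": d,
                            "type": "total" if k == 1 else "non-pigmented",
                            "wavelength_nm": wavelength, "absorption": a})
    return records
```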

    This work was completed as part of ASAC projects 2655 and 2679 (ASAC_2655, ASAC_2679).

  9. MATLAS color (filters i, r, g) survey collection

    • b2find.dkrz.de
    + more versions
    Cite
    MATLAS color (filters i, r, g) survey collection - Dataset - B2FIND [Dataset]. https://b2find.dkrz.de/dataset/91a5018f-ed34-5589-9018-27875b2cfecb
    Explore at:
    Description

    MATLAS (Mass Assembly of early-Type GaLAxies with their fine Structures) investigates the mass assembly of Early-Type Galaxies (ETGs) and the build-up of their scaling relations, using extremely deep optical images. The stellar populations in the outermost regions of ETGs, the fine structures (tidal tails, stellar streams, and shells) around them, and the Globular Clusters (GCs) and dwarf satellites preserve a record of past merger events and, more generally, of the evolution and transformation of galaxies. The MATLAS color HiPS has been generated from the i, r, g HiPS. Original acknowledgement for data: MATLAS collaboration

  10. Data from: Filtered Data from the Retrospective Analysis of Antarctic...

    • data.niaid.nih.gov
    • catalogue-temperatereefbase.imas.utas.edu.au
    • +3more
    Updated Mar 24, 2020
    + more versions
    Cite
    Wayne Trivelpiece (2020). Filtered Data from the Retrospective Analysis of Antarctic Tracking Data Project from the Scientific Committee on Antarctic Research [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_3722948
    Explore at:
    Dataset updated
    Mar 24, 2020
    Dataset provided by
    Peter G. Ryan
    Jefferson T. Hinke
    Ian D. Jonsen
    Joachim Plötz
    Phil O'B. Lyver
    P. J. Nico de Bruyn
    Ben Arthur
    Kit M. Kovacs
    Rochelle Constantine
    Roger Kirkwood
    David Thompson
    Sébastien Descamps
    Simon Wotherspoon
    Erling S. Nordøy
    Clive R. McMahon
    Henri Weimerskirch
    Ryan R. Reisinger
    Horst Bornemann
    Knowles R. Kerry
    John Bengtson
    Mike Goebel
    Keith W. Nicholls
    Ewan Wakefield
    Azwianewi B. Makhado
    Charles-André Bost
    Christophe Guinet
    Mike Double
    Marthán N. Bester
    Silvia Olmastroni
    Rob Harcourt
    David G. Ainley
    Norman Ratcliffe
    Mary-Anne Lea
    Pierre Pistorius
    Mike Fedak
    Christian Lydersen
    Klemens Pütz
    Wayne Trivelpiece
    Yan Ropert-Coudert
    Mercedes Santos
    Birgitte I. McDonald
    Monica Muelbert
    Lars Boehme
    Virginia Andrews-Goff
    Bruno Danis
    Robert J. M. Crawford
    Arnaud Tarroux
    José C. Xavier
    Barbara Wienecke
    Karine Delord
    Andrew D. Lowther
    Kerstin Jerosch
    Louise Emmerson
    Luciano Dalla Rosa
    Rachael Alderman
    Richard A. Phillips
    Akinori Takahashi
    Simon D. Goldsworthy
    Maria E. I. Márquez
    Nick Gales
    Iain Staniland
    Colin Southwell
    Jean-Benoît Charrassin
    Mark A. Hindell
    Leigh G. Torres
    Kimberly T. Goetz
    Ari Friedlaende
    Kieran Lawton
    Daniel P. Costa
    Luis A. Hückstädt
    Ben Raymond
    Grant Ballard
    Dominik Nachtsheim
    Peter Boveng
    Philip N. Trathan
    Gerald L. Kooyman
    Akiko Kato
    Arnoldus Schytte Blix
    Anton P. Van de Putte
    Jaimie Cleeland
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Antarctica
    Description

    The Retrospective Analysis of Antarctic Tracking Data (RAATD) is a Scientific Committee for Antarctic Research (SCAR) project led jointly by the Expert Groups on Birds and Marine Mammals and Antarctic Biodiversity Informatics, and endorsed by the Commission for the Conservation of Antarctic Marine Living Resources. The RAATD project team consolidated tracking data for multiple species of Antarctic meso- and top-predators to identify Areas of Ecological Significance. These datasets constitute the compiled tracking data from a large number of research groups that have worked in the Antarctic since the 1990s.

    This metadata record pertains to the "filtered" version of the data files. These files contain position estimates that have been processed using a state-space model in order to estimate locations at regular time intervals. For technical details of the filtering process, consult the data paper. The filtering code can be found in the https://github.com/SCAR/RAATD repository.

    This data set comprises one metadata csv file that describes all deployments, along with data files (3 files for each of 17 species). For each species there is:
    - an RDS file that contains the fitted TMB filter model object and model predictions (RDS format can be read by the R statistical software package)
    - a PDF file that shows the quality control results for each individual model
    - a CSV file containing the interpolated position estimates

    For details of the file contents and formats, consult the data paper.

    The original copy of these data are available through the Australian Antarctic Data Centre (https://data.aad.gov.au/metadata/records/SCAR_EGBAMM_RAATD_2018_Filtered)

    The data are also available in a standardized version (see https://data.aad.gov.au/metadata/records/SCAR_EGBAMM_RAATD_2018_Standardised) that contain position estimates as provided by the original data collectors (generally, raw Argos or GPS locations, or estimated GLS locations) without state-space filtering.

  11. Soluble Fe passed through 0.2 um Anopore filter from R/V Knorr cruise...

    • search.dataone.org
    • bco-dmo.org
    Updated Apr 15, 2022
    Cite
    Edward A. Boyle; Christopher I. Measures; Jingfeng Wu; Jessica N. Fitzsimmons (2022). Soluble Fe passed through 0.2 um Anopore filter from R/V Knorr cruise KN204-01 in the Subtropical northern Atlantic Ocean in 2011 (U.S. GEOTRACES NAT project) [Dataset]. https://search.dataone.org/view/sha256:0a37c2686c58da90309cae078ca4354ed9a7a5da78370ae09d32787f1e7ac124
    Explore at:
    Dataset updated
    Apr 15, 2022
    Dataset provided by
    Biological and Chemical Oceanography Data Management Office (BCO-DMO)
    Authors
    Edward A. Boyle; Christopher I. Measures; Jingfeng Wu; Jessica N. Fitzsimmons
    Area covered
    Atlantic Ocean
    Description

    Soluble iron (Fe), the Fe passing through a 0.02 µm Anodisc membrane filter, is reported in nmol Fe per kg of seawater. Samples were collected on the U.S. GEOTRACES North Atlantic Zonal Transect, Leg 2, in 2011.

    In comparing this data to other published profiles of soluble Fe, it is valuable to know that soluble Fe is a highly operationally-defined parameter. The two most common methods of collecting soluble Fe samples are via 0.02 µm Anopore membrane filtration (this study) and by cross-flow filtration. An intercalibration between the two methods used to collect soluble Fe samples on the U.S. Atlantic GEOTRACES cruises are described in this excerpt (PDF) from a Fitzsimmons manuscript (in preparation). The intercalibration determined that "soluble Fe produced by cross-flow filtration (10 kDa membrane) is only ~65-70% of the soluble Fe produced by Anopore filtration."

    Please note that some US GEOTRACES data may not be final, pending intercalibration results and further analysis. If you are interested in following changes to US GEOTRACES NAT data, there is an RSS feed available via the BCO-DMO US GEOTRACES project page (scroll down and expand the "Datasets" section).

  12. Table_1_webGQT: A Shiny Server for Genotype Query Tools for Model-Based...

    • figshare.com
    • frontiersin.figshare.com
    xlsx
    Updated May 31, 2023
    + more versions
    Cite
    Meharji Arumilli; Ryan M. Layer; Marjo K. Hytönen; Hannes Lohi (2023). Table_1_webGQT: A Shiny Server for Genotype Query Tools for Model-Based Variant Filtering.xlsx [Dataset]. http://doi.org/10.3389/fgene.2020.00152.s003
    Explore at:
    xlsx (available download formats)
    Dataset updated
    May 31, 2023
    Dataset provided by
    Frontiers
    Authors
    Meharji Arumilli; Ryan M. Layer; Marjo K. Hytönen; Hannes Lohi
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Summary
    Genotype Query Tools (GQT) were developed to discover disease-causing variants among billions of genotypes and millions of genomes, and they process data at substantially higher speeds than other existing methods. While GQT has been available to a wide audience as command-line software, the difficulty of constructing queries among non-IT or non-bioinformatics researchers has limited its applicability. To overcome this limitation, we developed webGQT, an easy-to-use tool with a graphical user interface. With pre-built queries across three modules, webGQT allows for pedigree analysis, case-control studies, and population frequency studies. As a package, webGQT allows researchers with little or no applied bioinformatics/IT experience to mine potential disease-causing variants from billions of genotypes.
    Results
    webGQT offers a flexible and easy-to-use interface for model-based candidate variant filtering for Mendelian diseases from thousands to millions of genomes at a reduced computation time. Additionally, webGQT provides adjustable parameters to reduce false positives and rescue missing genotypes across all modules. Using a case study, we demonstrate the applicability of webGQT to query non-human genomes. In addition, we demonstrate the scalability of webGQT on large data sets by implementing complex population-specific queries on the 1000 Genomes Project Phase 3 data set, which includes 8.4 billion variants from 2504 individuals across 26 different populations. Furthermore, webGQT supports filtering single-nucleotide variants, short insertions/deletions, copy number variants, or any other variant genotypes supported by the VCF specification. Our results show that webGQT can be used as an online web service, or deployed on personal computers or local servers within research groups.
    Availability
    webGQT is made available to users in three forms: 1) as a web server available at https://vm1138.kaj.pouta.csc.fi/webgqt/, 2) as an R package to install on personal computers, and 3) as part of the same R package to configure on the user's own servers. The application is available for installation at https://github.com/arumds/webgqt.

  13. Machine learning pipeline to train toxicity prediction model of...

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Jan 24, 2020
    Cite
    Jan Ewald; Jan Ewald (2020). Machine learning pipeline to train toxicity prediction model of FunTox-Networks [Dataset]. http://doi.org/10.5281/zenodo.3529162
    Explore at:
    zipAvailable download formats
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Zenodo: http://zenodo.org/
    Authors
    Jan Ewald; Jan Ewald
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Machine Learning pipeline used to provide toxicity prediction in FunTox-Networks

    01_DATA # preprocessing and filtering of raw activity data from ChEMBL
    - Chembl_v25 # latest activity assay data set from ChEMBL (retrieved Nov 2019)
    - filt_stats.R # Filtering and preparation of raw data
    - Filtered # output data sets from filt_stats.R
    - toxicity_direction.csv # table of toxicity measurements and their proportionality to toxicity

    02_MolDesc # Calculation of molecular descriptors for all compounds within the filtered ChEMBL data set
    - datastore # files with all compounds and their calculated molecular descriptors based on SMILES
    - scripts
    - calc_molDesc.py # calculates the molecular descriptors for all compounds based on their SMILES
    - chemopy-1.1 # Python package used for descriptor calculation, as described in: https://doi.org/10.1093/bioinformatics/btt105

    03_Averages # Calculation of moving averages for levels and organisms as required for calculation of Z-scores
    - datastore # output files with statistics calculated by make_Z.R
    - scripts
    - make_Z.R # script to calculate the statistics needed for the Z-scores used by the regression models

    04_ZScores # Calculation of Z-scores and preparation of table to fit regression models
    - datastore # Z-normalized activity data and molecular descriptors in the form as used for fitting regression models
    - scripts
    - calc_Ztable.py # combines activity data, molecular descriptors, and Z-statistics to produce the learning data

    05_Regression # Performing regression. Preparation of data by removing of outliers based on a linear regression model. Learning of random forest regression models. Validation of learning process by cross validation and tuning of hyperparameters.

    - datastore # storage of all random forest regression models and average level of Z output value per level and organism (zexp_*.tsv)
    - scripts
    - data_preperation.R # set up of regression data set, removal of outliers and optional removal of fields and descriptors
    - Rforest_CV.R # analysis of machine learning by cross validation, importance of regression variables and tuning of hyperparameters (number of trees, split of variables)
    - Rforest.R # based on analysis of Rforest_CV.R learning of final models

    rregrs_output # early analysis of regression model performance with the package RRegrs, as described in: https://doi.org/10.1186/s13321-015-0094-2
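    The Z-normalization step (folders 03_Averages and 04_ZScores) can be sketched in a few lines. This is a hypothetical re-implementation of the idea, not the actual calc_Ztable.py; in the real pipeline the mean and standard deviation come from the moving averages computed per level and organism.

    ```python
    import statistics

    def z_normalize(values):
        """Z-normalize activity values: z = (x - mean) / sd.
        The pipeline computes mean and sd per level/organism group;
        here they are taken from the input list itself."""
        mean = statistics.fmean(values)
        sd = statistics.stdev(values)
        return [(v - mean) / sd for v in values]

    # Toy activity values: mean 7, sample sd ~1.58
    zs = z_normalize([5.0, 6.0, 7.0, 8.0, 9.0])
    ```

    The Z-scored values are what the random forest regression models in 05_Regression are fit against.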

  14. Theft Filter

    • data.cityofchicago.org
    Updated Mar 18, 2025
    + more versions
    Cite
    Chicago Police Department (2025). Theft Filter [Dataset]. https://data.cityofchicago.org/Public-Safety/Theft-Filter/aqvv-ggim
    Explore at:
    csv, tsv, xml, application/rdfxml, application/rssxml, application/geo+json, kml, kmzAvailable download formats
    Dataset updated
    Mar 18, 2025
    Authors
    Chicago Police Department
    Description

    This dataset reflects reported incidents of crime (with the exception of murders where data exists for each victim) that occurred in the City of Chicago from 2001 to present, minus the most recent seven days. Data is extracted from the Chicago Police Department's CLEAR (Citizen Law Enforcement Analysis and Reporting) system. In order to protect the privacy of crime victims, addresses are shown at the block level only and specific locations are not identified. Should you have questions about this dataset, you may contact the Research & Development Division of the Chicago Police Department at 312.745.6071 or RandD@chicagopolice.org.

    Disclaimer: These crimes may be based upon preliminary information supplied to the Police Department by the reporting parties that have not been verified. The preliminary crime classifications may be changed at a later date based upon additional investigation and there is always the possibility of mechanical or human error. Therefore, the Chicago Police Department does not guarantee (either expressed or implied) the accuracy, completeness, timeliness, or correct sequencing of the information and the information should not be used for comparison purposes over time. The Chicago Police Department will not be responsible for any error or omission, or for the use of, or the results obtained from the use of this information. All data visualizations on maps should be considered approximate and attempts to derive specific addresses are strictly prohibited. The Chicago Police Department is not responsible for the content of any off-site pages that are referenced by or that reference this web page other than an official City of Chicago or Chicago Police Department web page. The user specifically acknowledges that the Chicago Police Department is not responsible for any defamatory, offensive, misleading, or illegal conduct of other users, links, or third parties and that the risk of injury from the foregoing rests entirely with the user.

    The unauthorized use of the words "Chicago Police Department," "Chicago Police," or any colorable imitation of these words or the unauthorized use of the Chicago Police Department logo is unlawful. This web page does not, in any way, authorize such use.

    Data is updated daily Tuesday through Sunday. The dataset contains more than 65,000 records/rows of data and cannot be viewed in full in Microsoft Excel. Therefore, when downloading the file, select CSV from the Export menu. Open the file in an ASCII text editor, such as Wordpad, to view and search.

    To access a list of Chicago Police Department - Illinois Uniform Crime Reporting (IUCR) codes, go to http://data.cityofchicago.org/Public-Safety/Chicago-Police-Department-Illinois-Uniform-Crime-R/c7ck-438e
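    For exports too large for a spreadsheet, the CSV can be processed in streaming chunks instead. A minimal sketch with Python's standard csv module; the filename in the commented usage is hypothetical.

    ```python
    import csv
    import io

    def stream_rows(fileobj, chunk_size=50000):
        """Yield a large CSV export in chunks of rows so the whole
        file never has to fit in memory (or in a spreadsheet)."""
        reader = csv.DictReader(fileobj)
        chunk = []
        for row in reader:
            chunk.append(row)
            if len(chunk) == chunk_size:
                yield chunk
                chunk = []
        if chunk:
            yield chunk

    # Tiny in-memory demo; with a real download you would pass
    # open("Theft_Filter.csv", newline="") instead (filename hypothetical).
    demo = io.StringIO("id,type\n" + "1,THEFT\n" * 5)
    chunks = list(stream_rows(demo, chunk_size=2))
    ```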

  15. THEMIS-A: Probe Electric Field Instrument and Search Coil Magnetometer...

    • heliophysicsdata.gsfc.nasa.gov
    • hpde.io
    application/x-cdf +2
    Updated Jul 30, 2023
    + more versions
    Cite
    Angelopoulos, Vassilis; Bonnell, John, W.; Ergun, Robert, E.; Mozer, Forrest, S.; Roux, Alain (2023). THEMIS-A: Probe Electric Field Instrument and Search Coil Magnetometer Instrument, Digital Fields Board - digitally computed Filter Bank spectra and E12 peak and average in HF band (FBK). [Dataset]. http://doi.org/10.48322/ed7r-tt72
    Explore at:
    bin, application/x-cdf, csvAvailable download formats
    Dataset updated
    Jul 30, 2023
    Dataset provided by
    NASA: http://nasa.gov/
    Authors
    Angelopoulos, Vassilis; Bonnell, John, W.; Ergun, Robert, E.; Mozer, Forrest, S.; Roux, Alain
    License

    https://cdla.io/permissive-1-0/

    Description

    The Filter Bank is part of the Digital Fields Board and provides band-pass filtering for EFI and SCM spectra as well as E12HF peak and average value calculations. The Filter Bank produces band-pass-filtered spectra that are less computationally and power intensive than the FFT would provide. The process is as follows: signals are fed to the Filter Bank via a low-pass FIR filter with a cut-off frequency half that of the original signal maximum. The output is passed to the band-pass filters, is differenced from the original signal, then the absolute value of the data is taken and averaged. The output from the low-pass filter is also sent to a second FIR filter with 2:1 decimation. This output is then fed back through the system. The process runs through 12 cascades for input at 8,192 samples/s and 13 for input at 16,384 samples/s (EAC input only), reducing the signal and computing power by a factor of 2 at each cascade. At each cascade a set of data is produced at a sampling frequency of 2^n, from 2 Hz up to the initial sampling frequency (frequency characteristics for each step are shown below in Table 1). The average from the Filter Bank is compressed to 8 bits with a pseudo-logarithmic encoder. The data is stored in sets of six frequency bins at 2.689 kHz, 572 Hz, 144.2 Hz, 36.2 Hz, 9.05 Hz, and 2.26 Hz. The average of the coupled E12HF signal and its peak value are recorded over 62.5 ms windows (i.e., a 16 Hz sampling rate). Accumulation of values from signal 31.25 ms windows is performed externally. The analog signals fed into the FBK are E12DC and SCM1. Sensor and electronics design provided by UCB (J. W. Bonnell, F. S. Mozer), Digital Fields Board provided by LASP (R. Ergun), search coil data provided by CETP (A. Roux).

    Table 1: Frequency Properties.
    Cascade | Input Signal Content | Low-pass Cutoff | Low-pass Output Content | Filter Bank Band
    0*      | 0 - 8 kHz   | 4 kHz  | 0 - 4 kHz   | 4 - 8 kHz
    1       | 0 - 4 kHz   | 2 kHz  | 0 - 2 kHz   | 2 - 4 kHz
    2       | 0 - 2 kHz   | 1 kHz  | 0 - 1 kHz   | 1 - 2 kHz
    3       | 0 - 1 kHz   | 512 Hz | 0 - 512 Hz  | 512 Hz - 1 kHz
    4       | 0 - 512 Hz  | 256 Hz | 0 - 256 Hz  | 256 - 512 Hz
    5       | 0 - 256 Hz  | 128 Hz | 0 - 128 Hz  | 128 - 256 Hz
    6       | 0 - 128 Hz  | 64 Hz  | 0 - 64 Hz   | 64 - 128 Hz
    7       | 0 - 64 Hz   | 32 Hz  | 0 - 32 Hz   | 32 - 64 Hz
    8       | 0 - 32 Hz   | 16 Hz  | 0 - 16 Hz   | 16 - 32 Hz
    9       | 0 - 16 Hz   | 8 Hz   | 0 - 8 Hz    | 8 - 16 Hz
    10      | 0 - 8 Hz    | 4 Hz   | 0 - 4 Hz    | 4 - 8 Hz
    11      | 0 - 4 Hz    | 2 Hz   | 0 - 2 Hz    | 2 - 4 Hz
    12      | 0 - 2 Hz    | 1 Hz   | 0 - 1 Hz    | 1 - 2 Hz
    *Only available for 16,384 Hz sampling.
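    The cascade described above (low-pass, difference to get the upper half-band, average the absolute value, decimate 2:1, repeat) can be sketched as a toy in pure Python. The 2-tap average here stands in for the instrument's real FIR filter; everything else is illustrative, not flight code.

    ```python
    def filter_bank(signal, n_cascades):
        """Toy sketch of the FBK cascade: at each stage, low-pass the
        input, subtract to get the upper half-band residual, average
        its absolute value, then decimate 2:1 for the next stage."""
        bands = []
        x = list(signal)
        for _ in range(n_cascades):
            # crude 2-tap moving average stands in for the FIR low-pass
            low = [x[0] * 0.5] + [(x[i] + x[i - 1]) * 0.5 for i in range(1, len(x))]
            band = [xi - li for xi, li in zip(x, low)]
            bands.append(sum(abs(b) for b in band) / len(band))
            x = low[::2]  # 2:1 decimation feeds the next cascade
        return bands

    # A rapidly alternating input concentrates energy in the first band
    bands = filter_bank([1.0, -1.0] * 64, 3)
    ```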

  16. filtered_cc100_25gb

    • huggingface.co
    Updated Mar 26, 2025
    Cite
    Xingming Li (2025). filtered_cc100_25gb [Dataset]. https://huggingface.co/datasets/xmli/filtered_cc100_25gb
    Explore at:
    Dataset updated
    Mar 26, 2025
    Authors
    Xingming Li
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description


    The CC100 dataset comprises monolingual data for 100+ languages and also includes data for romanized languages. It was constructed using the URLs and paragraph indices provided by the CC-Net repository, by processing January-December 2018 Commoncrawl snapshots. Each file comprises documents separated by double newlines, with paragraphs within the same document separated by a single newline. The data is generated using the open-source CC-Net repository. This dataset loader implements streaming to iterate over the CC100 dataset. It applies strict filtering criteria to remove short, noisy, or repetitive sentences and keeps the language proportions similar to the ones used for XLM-R pre-training. The filtered CC100 dataset is ~25 GB.
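    A filter of the kind described ("remove short, noisy, or repetitive sentences") might look like the sketch below. The thresholds and heuristics are assumptions for illustration, not the dataset's actual criteria.

    ```python
    def keep_line(text, min_chars=20, max_digit_ratio=0.3):
        """Illustrative line filter: drop short, noisy, or highly
        repetitive lines. Thresholds are assumed, not the real ones."""
        text = text.strip()
        if len(text) < min_chars:                    # too short
            return False
        if sum(c.isdigit() for c in text) / len(text) > max_digit_ratio:
            return False                             # mostly numeric noise
        words = text.split()
        if len(set(words)) < len(words) / 2:         # highly repetitive
            return False
        return True
    ```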

  17. RRR/RAPID input and output files corresponding to "Underlying Fundamentals...

    • zenodo.org
    • data.niaid.nih.gov
    bin, csv, nc, zip
    Updated Jul 2, 2022
    Cite
    Charlotte M. Emery; Charlotte M. Emery; Cédric H. David; Cédric H. David; Konstantinos M. Andreadis; Konstantinos M. Andreadis; Michael J. Turmon; Michael J. Turmon; John T. Reager; John T. Reager; Jonathan M. Hobbs; Ming Pan; Ming Pan; James S. Famiglietti; James S. Famiglietti; R. Edward Beighley; R. Edward Beighley; Matthew Rodell; Matthew Rodell; Jonathan M. Hobbs (2022). RRR/RAPID input and output files corresponding to "Underlying Fundamentals of Kalman Filtering for River Network Modeling" [Dataset]. http://doi.org/10.5281/zenodo.6789028
    Explore at:
    nc, csv, bin, zipAvailable download formats
    Dataset updated
    Jul 2, 2022
    Dataset provided by
    Zenodo: http://zenodo.org/
    Authors
    Charlotte M. Emery; Charlotte M. Emery; Cédric H. David; Cédric H. David; Konstantinos M. Andreadis; Konstantinos M. Andreadis; Michael J. Turmon; Michael J. Turmon; John T. Reager; John T. Reager; Jonathan M. Hobbs; Ming Pan; Ming Pan; James S. Famiglietti; James S. Famiglietti; R. Edward Beighley; R. Edward Beighley; Matthew Rodell; Matthew Rodell; Jonathan M. Hobbs
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Corresponding peer-reviewed publication

    This dataset corresponds to all the RRR/RAPID input and output files that were used in the study reported in:

    • Emery, C. M., C. H. David, K. M. Andreadis, M. J. Turmon, J. T. Reager, and J. M. Hobbs (2020), Underlying Fundamentals of Kalman Filtering for River Network Modeling, Journal of Hydrometeorology, 21, 453-474, DOI: 10.1175/JHM-D-19-0084.1.

    When making use of any of the files in this dataset, please cite both the aforementioned article and the dataset herein.

    Known bugs and limitations in this dataset or the associated manuscript.

    In the final version of the published manuscript, Figure 5a, Figure 5b, Figure 5c, and Figure SF1 are inaccurate. The issue is that these figures were all prepared with an incorrect indexing relating observed and simulated discharge, so observations at any one location were consistently compared to simulations at a different location. As a result, all values of "measured" discharge errors (i.e., Bias, STDE, and RMSE) are incorrect. This issue did not affect the values of "estimated" errors, nor did it affect the values of the Nash-Sutcliffe efficiency that are presented. The figures published in the manuscript can all be recreated using the files in which "BUG_DO_NOT_USE" was appended to the name. Correct figures can also be created using corresponding file names that were not so appended.

    Note that corrected versions of Figure 5a, Figure 5b, Figure 5c, and Figure SF1 all retain the same strong linear relationships that are discussed in the paper. The slope of the daily discharge STDE trend initially reported as \(\alpha = 0.3876\) in Figure 5c changes to \(\alpha = 0.4507\) after correction. The resulting value of the ideal inflation factor hence changes from \(I = {1 \over 0.3876} \approx 2.58\) to \(I = {1 \over 0.4507} \approx 2.22\). This updated ideal inflation factor has no impact on the conclusions reached in the manuscript because it remains closer to \(I = 2.58\) than to \(I = 1\) or \(I = 5\), i.e. the three values that were evaluated.
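    The ideal inflation factor is simply the reciprocal of the fitted slope, so the corrected value can be checked directly (slopes taken from the paragraph above):

    ```python
    # Fitted slopes of the daily discharge STDE trend (Figure 5c)
    alpha_before, alpha_after = 0.3876, 0.4507

    I_before = 1 / alpha_before  # ideal inflation factor, original figure
    I_after = 1 / alpha_after    # ideal inflation factor, corrected figure
    ```

    Both values round to the ~2.58 and ~2.22 quoted above, and the corrected factor indeed remains closer to 2.58 than to 1 or 5.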

    Additionally, a faulty version 1.3.1 of the Python toolbox netCDF4 led to incorrect interpretation of _FillValue in which every data point of value greater than _FillValue was interpreted as masked. This created discrepancies in the following three files, which were updated between V1 and V2 of this dataset: "timeseries_rap_exp01.csv", "timeseries_rap_exp18.csv", and "stats_rap_exp18.csv". Faulty versions of the same files have "BUG_NETCDF4" appended to their names. Correct files have been recreated with file names that were not so appended.
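    The described netCDF4 1.3.1 fault can be illustrated with a small masked-array sketch. numpy stands in for the netCDF layer here, and the fill value and data are illustrative, not taken from the dataset.

    ```python
    import numpy as np

    FILL = -9999.0
    data = np.array([1.5, FILL, 3.2, 0.0])

    # Correct behavior: mask only points exactly equal to _FillValue
    correct = np.ma.masked_equal(data, FILL)

    # Faulty behavior as described: every point *greater* than
    # _FillValue treated as masked, hiding all the valid data
    faulty = np.ma.masked_greater(data, FILL)
    ```

    With a typical large-negative fill value, the faulty interpretation masks essentially every valid data point, which is what produced the discrepancies in the three listed files.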

  18. Darwin Harbour Habitat Mapping Program: Probability of occurrence of filter...

    • researchdata.edu.au
    • ecat.ga.gov.au
    Updated Feb 20, 2019
    Cite
    Galaiduk, R.; Radford, B. (2019). Darwin Harbour Habitat Mapping Program: Probability of occurrence of filter feeders habitat [Dataset]. https://researchdata.edu.au/darwin-harbour-habitat-feeders-habitat/1442249
    Explore at:
    Dataset updated
    Feb 20, 2019
    Dataset provided by
    Geoscience Australia: http://ga.gov.au/
    Authors
    Galaiduk, R.; Radford, B.
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    2011 - 2017
    Area covered
    Description

    This resource contains a probability of occurrence grid of filter feeders for the greater Darwin Harbour region as part of a baseline seabed mapping program of Darwin Harbour and Bynoe Harbour. This project was funded through offset funds provided by an INPEX-led Ichthys LNG Project to the Northern Territory Government’s Department of Environment and Natural Resources (NTG-DENR) with co-investment from Geoscience Australia (GA) and the Australian Institute of Marine Science (AIMS). The intent of this program is to improve knowledge of the marine environments in the Darwin and Bynoe Harbour regions by collating and collecting baseline data that enable the creation of thematic habitat maps and information to underpin marine resource management decisions. The probability of occurrence grid of filter feeders was derived from a compilation of multiple surveys undertaken by GA, AIMS and NTG-DENR between 2011 and 2017, including GA0333 (Siwabessy et al., 2015), GA0341 (Siwabessy et al., 2015), GA0351/SOL6187 (Siwabessy et al., 2016), GA4452/SOL6432 (Siwabessy et al., 2017), GA0356 (Radke et al., 2017), and GA0358 and GA0359 (Radke et al., 2018), adding to those from a previous survey GA0333 collected by GA, AIMS and NTG-DENR.

  19. Storage and Transit Time Data and Code

    • zenodo.org
    zip
    Updated Nov 15, 2024
    + more versions
    Cite
    Andrew Felton; Andrew Felton (2024). Storage and Transit Time Data and Code [Dataset]. http://doi.org/10.5281/zenodo.14171251
    Explore at:
    zipAvailable download formats
    Dataset updated
    Nov 15, 2024
    Dataset provided by
    Zenodo: http://zenodo.org/
    Authors
    Andrew Felton; Andrew Felton
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Author: Andrew J. Felton
    Date: 11/15/2024

    This R project contains the primary code and data (following pre-processing in Python) used for data production, manipulation, visualization, analysis, and figure production for the study entitled:

    "Global estimates of the storage and transit time of water through vegetation"

    Please note that 'turnover' and 'transit' are used interchangeably. Also please note that this R project has been updated multiple times as the analysis was revised throughout the peer review process.

    #Data information:

    The data folder contains key data sets used for analysis. In particular:

    "data/turnover_from_python/updated/august_2024_lc/" contains the core datasets used in this study, including global arrays summarizing five-year (2016-2020) averages of mean (annual) and minimum (monthly) transit time, storage, canopy transpiration, and the number of months of data, available as both an array (.nc) and a data table (.csv). These data were produced in Python using the scripts found in the "supporting_code" folder. The remaining files in the "data" and "data/supporting_data" folders primarily contain ground-based estimates of storage and transit found in public databases or through a literature search, but they have been extensively processed and filtered here. The "supporting_data" folder also contains the annual (2016-2020) MODIS land cover data used in the analysis, with separate files containing the original data (.hdf) and the final processed (filtered) data in .nc format. The resulting annual land cover distributions were used in the pre-processing of data in Python.

    #Code information

    Python scripts can be found in the "supporting_code" folder.

    Each R script in this project has a role:

    "01_start.R": This script sets the working directory, loads in the tidyverse package (the remaining packages in this project are called using the `::` operator), and can run two other scripts: one that loads the customized functions (02_functions.R) and one for importing and processing the key dataset for this analysis (03_import_data.R).

    "02_functions.R": This script contains custom functions. Load this using the `source()` function in the 01_start.R script.

    "03_import_data.R": This script imports and processes the .csv transit data. It joins the mean (annual) transit time data with the minimum (monthly) transit data to generate one dataset for analysis: annual_turnover_2. Load this using the
    `source()` function in the 01_start.R script.

    "04_figures_tables.R": This is the main workhorse for figure/table production and supporting analyses. This script generates the key figures and summary statistics used in the study, which are then saved in the "manuscript_figures" folder. Note that all maps were produced using Python code found in the "supporting_code" folder. Also note that within the "manuscript_figures" folder there is an "extended_data" folder, which contains tables of the summary statistics (e.g., quartiles and sample sizes) behind figures containing box plots or depicting regression coefficients.

    "supporting_generate_data.R": This script processes supporting data used in the analysis, primarily the varying ground-based datasets of leaf water content.

    "supporting_process_land_cover.R": This takes annual MODIS land cover distributions and processes them through a multi-step filtering process so that they can be used in preprocessing of datasets in python.

  20. Concentration data of NO3-, NH4+, PO4, and Si from filtered sediment...

    • search.datacite.org
    • dataservices.gfz-potsdam.de
    • +1more
    Updated Sep 15, 2006
    Cite
    Beat Müller; Martin Märki; Martin Schmid; Elena Vologina; Bernhard Wehrli; Alfred Wüest; Michael Sturm (2006). Concentration data of NO3-, NH4+, PO4, and Si from filtered sediment porewaters. Basis for the flux data “F” in Table 1 [Dataset]. http://doi.org/10.1594/gfz.sddb.1070
    Explore at:
    Dataset updated
    Sep 15, 2006
    Dataset provided by
    DataCite: https://www.datacite.org/
    Deutsches GeoForschungsZentrum GFZ
    Authors
    Beat Müller; Martin Märki; Martin Schmid; Elena Vologina; Bernhard Wehrli; Alfred Wüest; Michael Sturm
    Description

    Porewater samples were conserved with 0.2% chloroform and analyzed at EAWAG, Switzerland, for NO3-, NH4+, SiO2, and o-PO4 using standard photometric methods (DEW, 1996). In 2002, NH4+ in porewater was measured on board the research vessel with the indophenol method (DEW, 1996) and a portable photometer (Merck Spectroquant). In March and July 2001, porewater measurements of O2, NO3-, and NH4+ were performed with ion-selective electrodes on retrieved sediment cores from the South Basin, Vydrino, and Posolskoe High, on the ice and in the hydrological institute on shore at Listvijanka, to give vertical concentration profiles with high spatial resolution. Measurements are described in detail by Maerki et al. (submitted for publication). Diffusive fluxes across the sediment-water interface were calculated from the gradients of the porewater concentration profiles, assuming steady-state conditions and using Fick's first law of diffusion (Berner, 1980).
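    The flux calculation invoked above follows Fick's first law for porewater, J = -phi * D * (dC/dz). A minimal sketch; the porosity, diffusion coefficient, and gradient values below are illustrative only, not taken from the dataset.

    ```python
    def diffusive_flux(porosity, diff_coeff, dc, dz):
        """Fick's first law for porewater: J = -phi * D * (dC/dz).
        With D in m^2/s, dC in mol/m^3, and dz in m, J is in mol m^-2 s^-1.
        A negative J means flux directed toward decreasing concentration."""
        return -porosity * diff_coeff * (dc / dz)

    # Illustrative values only (not from the dataset):
    J = diffusive_flux(porosity=0.9, diff_coeff=1.0e-9, dc=0.04, dz=0.01)
    ```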
