100+ datasets found
  1. Diel and synoptic sampling data from Boulder Creek and South Boulder Creek, near Boulder, Colorado, September–October 2019

    • catalog.data.gov
    • data.usgs.gov
    • +1more
    Updated Jul 6, 2024
    + more versions
    Cite
    U.S. Geological Survey (2024). Diel and synoptic sampling data from Boulder Creek and South Boulder Creek, near Boulder, Colorado, September–October 2019 [Dataset]. https://catalog.data.gov/dataset/diel-and-synoptic-sampling-data-from-boulder-creek-and-south-boulder-creek-near-boulder-co
    Explore at:
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    U.S. Geological Survey
    Area covered
    Colorado, South Boulder Creek, Boulder
    Description

    Multiple sampling campaigns were conducted near Boulder, Colorado, to quantify constituent concentrations and loads in Boulder Creek and its tributary, South Boulder Creek. Diel sampling was initiated at approximately 1100 hours on September 17, 2019, and continued until approximately 2300 hours on September 18, 2019. During this time period, samples were collected at two locations on Boulder Creek approximately every 3.5 hours to quantify the diel variability of constituent concentrations at low flow. Synoptic sampling campaigns on South Boulder Creek and Boulder Creek were conducted October 15-18, 2019, to develop spatial profiles of concentration, streamflow, and load. Numerous main stem and inflow locations were sampled during each synoptic campaign using the simple grab technique (17 main stem and 2 inflow locations on South Boulder Creek; 34 main stem and 17 inflow locations on Boulder Creek). Streamflow at each main stem location was measured using acoustic doppler velocimetry. Bulk samples from all sampling campaigns were processed within one hour of sample collection. Processing steps included measurement of pH and specific conductance, and filtration using 0.45-micron filters. Laboratory analyses were subsequently conducted to determine dissolved and total recoverable constituent concentrations. Filtered samples were analyzed for a suite of dissolved anions using ion chromatography. Filtered, acidified samples and unfiltered acidified samples were analyzed by inductively coupled plasma-mass spectrometry and inductively coupled plasma-optical emission spectroscopy to determine dissolved and total recoverable cation concentrations, respectively. This data release includes three data tables, three photographs, and a kmz file showing the sampling locations. Additional information on the data table contents, including the presentation of data below the analytical detection limits, is provided in a Data Dictionary.
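
    A minimal sketch of how constituent loads along such a profile can be derived from the paired measurements, assuming hypothetical column and variable names (they are not taken from the data tables): instantaneous load in kg/day is concentration (mg/L) multiplied by streamflow (m³/s) and by the unit factor 86.4.

    ```python
    # Hypothetical sketch: compute instantaneous constituent load from paired
    # concentration and streamflow values. Column names are assumptions, not
    # taken from the USGS data release.
    import pandas as pd

    UNIT_FACTOR = 86.4  # (mg/L) x (m^3/s) -> kg/day

    def add_load(df: pd.DataFrame,
                 conc_col: str = "concentration_mg_per_L",
                 q_col: str = "streamflow_m3_per_s") -> pd.DataFrame:
        """Append a load column (kg/day) for one constituent."""
        out = df.copy()
        out["load_kg_per_day"] = out[conc_col] * out[q_col] * UNIT_FACTOR
        return out

    # Example with made-up numbers in the spirit of a synoptic profile:
    synoptic = pd.DataFrame({
        "distance_m": [0, 500, 1000],
        "concentration_mg_per_L": [0.12, 0.15, 0.11],
        "streamflow_m3_per_s": [0.8, 0.9, 1.1],
    })
    print(add_load(synoptic))
    ```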

  2. Sample names, sampling descriptions and contextual data.

    • plos.figshare.com
    • figshare.com
    xls
    Updated May 30, 2023
    Cite
    Linda A. Amaral-Zettler; Elizabeth A. McCliment; Hugh W. Ducklow; Susan M. Huse (2023). Sample names, sampling descriptions and contextual data. [Dataset]. http://doi.org/10.1371/journal.pone.0006372.t001
    Explore at:
    xls
    Dataset updated
    May 30, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Linda A. Amaral-Zettler; Elizabeth A. McCliment; Hugh W. Ducklow; Susan M. Huse
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Sample names, sampling descriptions and contextual data.

  3. A unified approach to enhanced sampling - Dataset - B2FIND

    • b2find.eudat.eu
    Updated Oct 23, 2023
    Cite
    (2023). A unified approach to enhanced sampling - Dataset - B2FIND [Dataset]. https://b2find.eudat.eu/dataset/efb6201d-0d85-500e-aa12-c0b6f32f9d22
    Explore at:
    Dataset updated
    Oct 23, 2023
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The sampling problem lies at the heart of atomistic simulations and over the years many different enhanced sampling methods have been suggested towards its solution. These methods are often grouped into two broad families. On the one hand methods such as umbrella sampling and metadynamics that build a bias potential based on few order parameters or collective variables. On the other hand tempering methods such as replica exchange that combine different thermodynamic ensembles in one single expanded ensemble. We adopt instead a unifying perspective, focusing on the target probability distribution sampled by the different methods. This allows us to introduce a new method that can sample any of the ensembles normally sampled via replica exchange, but does so in a collective-variables-based scheme. This method is an extension of the recently developed on-the-fly probability enhanced sampling method [Invernizzi and Parrinello, J. Phys. Chem. Lett. 11.7 (2020)] that has been previously used for metadynamics-like sampling. The method is thus very general and can be used to achieve different types of enhanced sampling. It is also reliable and simple to use, since it presents only few and robust external parameters and has a straightforward reweighting scheme. Furthermore, it can be used with any number of parallel replicas. We show the versatility of our approach with applications to multicanonical and multithermal-multibaric simulations, thermodynamic integration, umbrella sampling, and combinations thereof.
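
    As an illustration of the "straightforward reweighting scheme" mentioned above (a generic expanded-ensemble identity, not the paper's specific estimator), expectations in any member ensemble λ can be recovered from configurations drawn from the sampled target distribution p_tg by importance weighting:

    ```latex
    % Generic importance-reweighting identity (an illustration, not the paper's
    % own estimator): observables in any member ensemble \lambda are recovered
    % from configurations sampled under the target distribution p_{tg}(x).
    \begin{align}
      w_\lambda(x) &= \frac{p_\lambda(x)}{p_{tg}(x)}, \\
      \langle O \rangle_\lambda
        &= \frac{\langle O(x)\, w_\lambda(x) \rangle_{tg}}
                {\langle w_\lambda(x) \rangle_{tg}} .
    \end{align}
    ```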

  4. The Sampling Problem when Mining Inter-Library Usage Patterns

    • zenodo.org
    zip
    Updated Oct 4, 2024
    Cite
    Anonymous; Anonymous (2024). The Sampling Problem when Mining Inter-Library Usage Patterns [Dataset]. http://doi.org/10.5281/zenodo.13889885
    Explore at:
    zip
    Dataset updated
    Oct 4, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Anonymous; Anonymous
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Tool support in software engineering often depends on relationships, regularities, patterns, or rules, mined from sampled code. Examples are approaches to bug prediction, code recommendation, and code autocompletion. Samples are relevant to scale the analysis of data. Many such samples consist of software projects taken from GitHub; however, the specifics of sampling might influence the generalization of the patterns.

    In this paper, we focus on how to sample software projects that are clients of libraries and frameworks when mining for inter-library usage patterns. We notice that when limiting the sample to a very specific library, inter-library patterns in the form of implications from one library to another may not generalize well. Using a simulation and a real case study, we analyze different sampling methods. Most importantly, our simulation shows that the implication generalizes well only when sampling for the disjunction of both libraries involved in the implication. Second, we show that real empirical data sampled from GitHub does not behave as we would expect from our simulation. This identifies a potential problem with using such an API for studying inter-library usage patterns.
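
    A toy simulation of the sampling setup described above (all numbers, names, and probabilities are invented, and it does not attempt to reproduce the paper's findings) contrasts a sampling frame restricted to clients of one library with a frame drawn from the disjunction of both libraries when estimating the confidence of an implication A → B:

    ```python
    # Toy sketch (invented parameters, not the paper's simulation): estimate the
    # confidence of the implication "uses A -> uses B" under two sampling frames.
    import random

    random.seed(0)

    def make_population(n=100_000, p_a=0.05, p_b_given_a=0.6, p_b_given_not_a=0.02):
        """Each project is a pair of booleans (uses_a, uses_b)."""
        pop = []
        for _ in range(n):
            a = random.random() < p_a
            b = random.random() < (p_b_given_a if a else p_b_given_not_a)
            pop.append((a, b))
        return pop

    def confidence(projects):
        """conf(A -> B) = support(A and B) / support(A)."""
        with_a = [p for p in projects if p[0]]
        if not with_a:
            return float("nan")
        return sum(1 for a, b in with_a if b) / len(with_a)

    pop = make_population()
    frame_a = [p for p in pop if p[0]]               # frame: clients of A only
    frame_a_or_b = [p for p in pop if p[0] or p[1]]  # frame: disjunction A or B

    sample_a = random.sample(frame_a, 500)
    sample_a_or_b = random.sample(frame_a_or_b, 500)

    print("population conf(A->B):      ", round(confidence(pop), 3))
    print("sample from A-only frame:   ", round(confidence(sample_a), 3))
    print("sample from A-or-B frame:   ", round(confidence(sample_a_or_b), 3))
    ```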

  5. FSIS Laboratory Sampling Data - Raw Beef Sampling

    • catalog.data.gov
    • s.cnmilf.com
    Updated May 8, 2025
    + more versions
    Cite
    Food Safety and Inspection Service (2025). FSIS Laboratory Sampling Data - Raw Beef Sampling [Dataset]. https://catalog.data.gov/dataset/fsis-raw-beef-sampling-data
    Explore at:
    Dataset updated
    May 8, 2025
    Dataset provided by
    Food Safety and Inspection Service (https://www.fsis.usda.gov/)
    Description

    Establishment specific sampling results for Raw Beef sampling projects. Current data is updated quarterly; archive data is updated annually. Data is split by FY. See the FSIS website for additional information.

  6. dummy-cot-sampling-dataset

    • huggingface.co
    Updated Mar 1, 2024
    Cite
    crumb (2024). dummy-cot-sampling-dataset [Dataset]. https://huggingface.co/datasets/crumb/dummy-cot-sampling-dataset
    Explore at:
    Dataset updated
    Mar 1, 2024
    Authors
    crumb
    Description

    crumb/dummy-cot-sampling-dataset dataset hosted on Hugging Face and contributed by the HF Datasets community

  7. Details of the sampling sites.

    • datasetcatalog.nlm.nih.gov
    • plos.figshare.com
    Updated Feb 13, 2017
    + more versions
    Cite
    Minuto, Luigi; Guerrina, Maria; Dovana, Francesco; Arnulfo, Annamaria; Ercole, Enrico; Mucciarelli, Marco; Casazza, Gabriele; Lumini, Erica; Fusconi, Anna (2017). Details of the sampling sites. [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0001769531
    Explore at:
    Dataset updated
    Feb 13, 2017
    Authors
    Minuto, Luigi; Guerrina, Maria; Dovana, Francesco; Arnulfo, Annamaria; Ercole, Enrico; Mucciarelli, Marco; Casazza, Gabriele; Lumini, Erica; Fusconi, Anna
    Description

    Details of the sampling sites.

  8. Data from: Evaluating Supplemental Samples in Longitudinal Research: Replacement and Refreshment Approaches

    • tandf.figshare.com
    txt
    Updated Feb 9, 2024
    Cite
    Laura K. Taylor; Xin Tong; Scott E. Maxwell (2024). Evaluating Supplemental Samples in Longitudinal Research: Replacement and Refreshment Approaches [Dataset]. http://doi.org/10.6084/m9.figshare.12162072.v1
    Explore at:
    txt
    Dataset updated
    Feb 9, 2024
    Dataset provided by
    Taylor & Francis
    Authors
    Laura K. Taylor; Xin Tong; Scott E. Maxwell
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Despite the wide application of longitudinal studies, they are often plagued by missing data and attrition. The majority of methodological approaches focus on participant retention or modern missing data analysis procedures. This paper, however, takes a new approach by examining how researchers may supplement the sample with additional participants. First, refreshment samples use the same selection criteria as the initial study. Second, replacement samples identify auxiliary variables that may help explain patterns of missingness and select new participants based on those characteristics. A simulation study compares these two strategies for a linear growth model with five measurement occasions. Overall, the results suggest that refreshment samples lead to less relative bias, greater relative efficiency, and more acceptable coverage rates than replacement samples or not supplementing the missing participants in any way. Refreshment samples also have high statistical power. The comparative strengths of the refreshment approach are further illustrated through a real data example. These findings have implications for assessing change over time when researching at-risk samples with high levels of permanent attrition.
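
    A minimal sketch of the kind of data-generating process described above, assuming invented parameter values (it is not the authors' simulation code): a linear growth model with five measurement occasions, monotone attrition, and a refreshment sample drawn from the same population but first observed at a later wave.

    ```python
    # Illustrative sketch only: linear growth data with attrition plus a
    # refreshment sample. All parameter values are invented.
    import numpy as np

    rng = np.random.default_rng(42)
    N, T = 200, 5

    def growth_sample(n):
        intercepts = rng.normal(10.0, 2.0, size=n)   # random intercepts
        slopes = rng.normal(0.5, 0.3, size=n)        # random slopes
        t = np.arange(T)
        return intercepts[:, None] + slopes[:, None] * t + rng.normal(0, 1.0, size=(n, T))

    y = growth_sample(N)

    # Monotone attrition: from wave 1 on, ~15% chance of dropping out at each wave.
    dropout_wave = np.where(rng.random((N, T - 1)) < 0.15, np.arange(1, T), T).min(axis=1)
    for i, w in enumerate(dropout_wave):
        y[i, w:] = np.nan

    # Refreshment: new participants from the same population, first observed at wave 3.
    refresh = growth_sample(50)
    refresh[:, :3] = np.nan
    y_supplemented = np.vstack([y, refresh])
    print("observed cases per wave:", np.sum(~np.isnan(y_supplemented), axis=0))
    ```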

  9. Description of the sampling sites and samples.

    • plos.figshare.com
    xls
    Updated May 30, 2023
    Cite
    Stefania Daghino; Claude Murat; Elisa Sizzano; Mariangela Girlanda; Silvia Perotto (2023). Description of the sampling sites and samples. [Dataset]. http://doi.org/10.1371/journal.pone.0044233.t001
    Explore at:
    xls
    Dataset updated
    May 30, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Stefania Daghino; Claude Murat; Elisa Sizzano; Mariangela Girlanda; Silvia Perotto
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Location, presence of fibrous minerals (including asbestos), brief description of the sites and of the samples, extractable fraction of macro- and micronutrients (µg of ions/g of soil ± standard deviation) C%, N%, C/N (the statistical analysis was performed by ANOVA with Tukey as post-hoc test (P

  10. ScienceBase Item Summary Page

    • datadiscoverystudio.org
    Updated Feb 3, 2016
    Cite
    (2016). ScienceBase Item Summary Page [Dataset]. http://datadiscoverystudio.org/geoportal/rest/metadata/item/67dabfe8bae6459b8a66190c2c1df67a/html
    Explore at:
    Dataset updated
    Feb 3, 2016
    Description

    Link to the ScienceBase Item Summary page for the item described by this metadata record. Service Protocol: Link to the ScienceBase Item Summary page for the item described by this metadata record. Application Profile: Web Browser. Link Function: information

  11. sampling-distill-train-data-kth-shift4

    • huggingface.co
    Updated May 23, 2024
    + more versions
    Cite
    Chenchen Gu (2024). sampling-distill-train-data-kth-shift4 [Dataset]. https://huggingface.co/datasets/cygu/sampling-distill-train-data-kth-shift4
    Explore at:
    Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    May 23, 2024
    Authors
    Chenchen Gu
    Description

    Dataset Card for "sampling-distill-train-data-kth-shift4"

    Training data for sampling-based watermark distillation using the KTH s=4 watermarking strategy in the paper On the Learnability of Watermarks for Language Models. Llama 2 7B with decoding-based watermarking was used to generate 640,000 watermarked samples, each 256 tokens long. Each sample is prompted with 50-token prefixes from OpenWebText (prompts not included in the samples).
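
    A minimal sketch for loading the dataset with the Hugging Face datasets library; the split name "train" is an assumption about this particular repository.

    ```python
    # Minimal sketch using the standard Hugging Face `datasets` API; the split
    # name "train" and the record layout are assumptions about this dataset.
    from datasets import load_dataset

    ds = load_dataset("cygu/sampling-distill-train-data-kth-shift4", split="train")
    print(ds)      # dataset size and column names
    print(ds[0])   # first watermarked sample
    ```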

  12. Data from: Sample metadata

    • fairdomhub.org
    xlsx
    Updated Jul 1, 2021
    Cite
    Thomas Harvey (2021). Sample metadata [Dataset]. https://fairdomhub.org/data_files/1440
    Explore at:
    xlsx (43.9 KB)
    Dataset updated
    Jul 1, 2021
    Authors
    Thomas Harvey
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Information on samples submitted for RNAseq

    Rows are individual samples

    Columns are: ID, Sample Name, Date sampled, Species, Sex, Tissue, Geographic location, Date extracted, Extracted by, Nanodrop Conc. (ng/µl), 260/280, 260/230, RIN, Plate ID, Position, Index name, Index Seq, Qubit BR kit Conc. (ng/ul), BioAnalyzer Conc. (ng/ul), BioAnalyzer bp (region 200-1200), Submission reference, Date submitted, Conc. (nM), Volume provided, PE/SE, Number of reads, Read length
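
    A hypothetical sketch for loading the sheet with pandas and checking a few of the expected columns; the local file name and the exact column labels are assumptions, so adjust them to match the downloaded xlsx.

    ```python
    # Hypothetical sketch: load the metadata sheet and verify a few expected
    # columns. File name and column labels are assumptions.
    import pandas as pd

    expected = ["ID", "Sample Name", "Date sampled", "Species", "Sex", "Tissue"]
    meta = pd.read_excel("Sample_metadata.xlsx")  # requires openpyxl
    missing = [c for c in expected if c not in meta.columns]
    print(f"{len(meta)} samples; missing columns: {missing or 'none'}")
    ```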

  13. Dataset for Figs 2–4.

    • plos.figshare.com
    • datasetcatalog.nlm.nih.gov
    xls
    Updated Nov 9, 2023
    Cite
    Esther L. German; Helen M. Nabwera; Ryan Robinson; Farah Shiham; Kostas Liatsikos; Christopher M. Parry; Claire McNamara; Sanjana Kattera; Katie Carter; Ashleigh Howard; Sherin Pojar; Joshua Hamilton; Agnes Matope; Jonathan M. Read; Stephen J. Allen; Helen Hill; Daniel B. Hawcutt; Britta C. Urban; Andrea M. Collins; Daniela M. Ferreira; Elissavet Nikolaou (2023). Dataset for Figs 2–4. [Dataset]. http://doi.org/10.1371/journal.pone.0294133.s004
    Explore at:
    xls
    Dataset updated
    Nov 9, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Esther L. German; Helen M. Nabwera; Ryan Robinson; Farah Shiham; Kostas Liatsikos; Christopher M. Parry; Claire McNamara; Sanjana Kattera; Katie Carter; Ashleigh Howard; Sherin Pojar; Joshua Hamilton; Agnes Matope; Jonathan M. Read; Stephen J. Allen; Helen Hill; Daniel B. Hawcutt; Britta C. Urban; Andrea M. Collins; Daniela M. Ferreira; Elissavet Nikolaou
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Longitudinal, community-based sampling is important for understanding prevalence and transmission of respiratory pathogens. Using a minimally invasive sampling method, the FAMILY Micro study monitored the oral, nasal and hand microbiota of families for 6 months. Here, we explore participant experiences and opinions. A mixed methods approach was utilised. A quantitative questionnaire was completed after every sampling timepoint to report levels of discomfort and pain, as well as time taken to collect samples. Participants were also invited to discuss their experiences in a qualitative structured exit interview. We received questionnaires from 36 families. Most adults and children >5y experienced no pain (94% and 70%) and little discomfort (73% and 47% no discomfort) regardless of sample type, whereas children ≤5y experienced variable levels of pain and discomfort (48% no pain but 14% hurts even more, whole lot or worst; 38% no discomfort but 33% moderate, severe, or extreme discomfort). The time taken for saliva and hand sampling decreased over the study. We conducted interviews with 24 families. Families found the sampling method straightforward, and adults and children >5y preferred nasal sampling using a synthetic absorptive matrix over nasopharyngeal swabs. It remained challenging for families to fit sampling into their busy schedules. Adequate fridge/freezer space and regular sample pick-ups were found to be important factors for feasibility. Messaging apps proved extremely effective for engaging with participants. Our findings provide key information to inform the design of future studies, specifically that self-sampling at home using minimally invasive procedures is feasible in a family context.

  14. FSIS Laboratory Sampling Data - Siluriformes Product Sampling

    • catalog.data.gov
    • s.cnmilf.com
    Updated May 8, 2025
    Cite
    Food Safety and Inspection Service (2025). FSIS Laboratory Sampling Data - Siluriformes Product Sampling [Dataset]. https://catalog.data.gov/dataset/fsis-raw-siluriformes-product-sampling-data
    Explore at:
    Dataset updated
    May 8, 2025
    Dataset provided by
    Food Safety and Inspection Service (https://www.fsis.usda.gov/)
    Description

    Establishment specific sampling results for Siluriformes Product sampling projects. Current data is updated quarterly; archive data is updated annually. Data is split by FY. See the FSIS website for additional information.

  15. (I Can’t Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research

    • plos.figshare.com
    • figshare.com
    docx
    Updated May 30, 2023
    Cite
    Frank J. van Rijnsoever (2023). (I Can’t Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research [Dataset]. http://doi.org/10.1371/journal.pone.0181689
    Explore at:
    docx
    Dataset updated
    May 30, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Frank J. van Rijnsoever
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: “random chance,” which is based on probability sampling, “minimal information,” which yields at least one new code per sampling step, and “maximum information,” which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimum information scenario.
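
    An illustrative toy version of the "random chance" scenario described above (not the author's simulation; all parameters are invented): information sources are drawn at random, each revealing every code independently with a fixed probability, and the sample size at which all codes have been observed at least once is recorded.

    ```python
    # Toy "random chance" saturation simulation; parameters are invented.
    import random

    random.seed(1)

    def sample_size_to_saturation(n_codes=30, p_observe=0.2, max_steps=10_000):
        seen = set()
        for step in range(1, max_steps + 1):
            # one randomly drawn source: each code observed with prob. p_observe
            seen.update(c for c in range(n_codes) if random.random() < p_observe)
            if len(seen) == n_codes:
                return step
        return max_steps

    runs = [sample_size_to_saturation() for _ in range(1_000)]
    print("mean sample size to reach saturation:", sum(runs) / len(runs))
    ```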

  16. Appendix A. Descriptions of the sampling design and dates.

    • datasetcatalog.nlm.nih.gov
    • figshare.com
    • +1more
    Updated Aug 10, 2016
    Cite
    Harms, Kyle E.; Connell, Joseph H.; Kerr, Alexander M.; Tanner, Jason E.; Wallace, Carden C.; Hughes, Terence P. (2016). Appendix A. Descriptions of the sampling design and dates. [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0001587759
    Explore at:
    Dataset updated
    Aug 10, 2016
    Authors
    Harms, Kyle E.; Connell, Joseph H.; Kerr, Alexander M.; Tanner, Jason E.; Wallace, Carden C.; Hughes, Terence P.
    Description

    Descriptions of the sampling design and dates.

  17. NEON Biorepository Mammal Collection (Vouchers [Standard Sampling])

    • gbif.org
    • demo.gbif.org
    Updated Jun 23, 2025
    + more versions
    Cite
    GBIF (2025). NEON Biorepository Mammal Collection (Vouchers [Standard Sampling]) [Dataset]. http://doi.org/10.15468/25vq9q
    Explore at:
    Dataset updated
    Jun 23, 2025
    Dataset provided by
    National Ecological Observatory Network (http://www.neonscience.org/)
    Global Biodiversity Information Facility (https://www.gbif.org/)
    License

    CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This collection contains small mammal vouchers collected during small mammal sampling (NEON sample classes: mam_pertrapnight_in.voucherSampleID). Small mammal sampling is based on the lunar calendar, with timing of sampling constrained to occur within 10 days before or after the new moon. Typically, core sites are sampled 6 times per year, and gradient sites 4 times per year. Small mammals are sampled using box traps (models LFA, XLK, H.B. Sherman Traps, Inc., Tallahassee, FL, USA). Box traps are arrayed in three to eight (depending on the size of the site) 10 x 10 grids with 10m spacing between traps at all sites. Small mammal trapping bouts are comprised of one or three nights of trapping, depending on whether a grid is designated for pathogen sample collection (3 nights) or not (1 night). Only mortalities and individuals that require euthanasia due to injuries are vouchered. The NEON Biorepository receives whole frozen specimens and prepares vouchers as either study skins with skulls (or full skeletons) or in 70-95% ethanol. Standard mammalian measurements are taken during specimen preparation (in mm; total length, tail length, hind foot length, ear length; and in g: mass) and are accessible in downloaded records (note: field measurements are listed in parentheses after preparation measurements, when available). Additional notes about parasites and reproductive condition are also accessible in downloaded records. See related links below for protocols and NEON related data products.

  18. Sampling data.

    • figshare.com
    xls
    Updated Jun 1, 2023
    Cite
    Lena Teuber; Anna Schukat; Wilhelm Hagen; Holger Auel (2023). Sampling data. [Dataset]. http://doi.org/10.1371/journal.pone.0077590.t001
    Explore at:
    xls
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Lena Teuber; Anna Schukat; Wilhelm Hagen; Holger Auel
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Sampling intervals highlighted in bold numbers indicate the approximate vertical extent of the oxygen minimum zone (O2≤45 µmol kg−1). D = Discovery cruise, MSM = Maria S. Merian cruises, UTC = universal time code, O2 min = lowest oxygen concentration at the respective station, O2 min depth = depth of the oxygen minimum at the respective station, SST = sea surface temperature, n.d. = no data, * = stations analysed for copepod abundance.

  19. IE GSI MI Seabed Sediment Samples Irish Waters WGS84 LAT

    • geohive.ie
    • ga.geohive.ie
    • +3more
    Updated Feb 11, 2014
    + more versions
    Cite
    geohive_curator (2014). IE GSI MI Seabed Sediment Samples Irish Waters WGS84 LAT [Dataset]. https://www.geohive.ie/maps/04113903bbd04f1fbd7c83efe3261e0d
    Explore at:
    Dataset updated
    Feb 11, 2014
    Dataset authored and provided by
    geohive_curator
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Description

    Research ships working at sea map the seafloor. The ships collect bathymetry data. Bathymetry is the measurement of how deep the sea is. Bathymetry is the study of the shape and features of the seabed. The name comes from Greek words meaning "deep" and “measure". Backscatter is the measurement of how hard the seabed is.

    Bathymetry and backscatter data are collected on board boats working at sea. The boats use special equipment called a multibeam echosounder. A multibeam echosounder is a type of sonar that is used to map the seabed. Sound waves are emitted in a fan shape beneath the boat. The amount of time it takes for the sound waves to bounce off the bottom of the sea and return to a receiver is used to find out the water depth. The strength of the sound wave is used to find out how hard the bottom of the sea is. A strong sound wave indicates a hard surface (rocks, gravel), and a weak signal indicates a soft surface (silt, mud). The word backscatter comes from the fact that different bottom types “scatter” sound waves differently.

    Using the equipment also allows predictions as to the type of material present on the seabed, e.g. rocks, pebbles, sand, mud. To confirm this, sediment samples are taken from the seabed. This process is called ground-truthing or sampling.

    Grab sampling is the most popular method of ground-truthing. There are three main types of grab used depending on the size of the vessel and the weather conditions: Day Grab, Shipek or Van Veen grabs. The grabs take a sample of sediment from the surface layer of the seabed. The samples are then sent to a lab for analysis. Particle size analysis (PSA) has been carried out on samples collected since 2004. The results are used to cross-reference the seabed sediment classifications that are made from the bathymetry and backscatter datasets and are used to create seabed sediment maps (mud, sand, gravel, rock). Sediments have been classified based on percentage sand, mud and gravel (after Folk 1954).

    This dataset shows locations that have completed samples from the seabed around Ireland. The bottom of the sea is known as the seabed or seafloor. These samples are known as grab samples. This is a dataset collected from 2001 to 2019. It is a vector dataset. Vector data portrays the world using points, lines and polygons (areas). The sample data is shown as points. Each point holds information on the surveyID, year, vessel name, sample id, instrument used, date, time, latitude, longitude, depth, report, recovery, percentage of mud, sand and gravel, description and Folk classification.

    The dataset was mapped as part of the Irish National Seabed Survey (INSS) and INFOMAR (Integrated Mapping for the Sustainable Development of Ireland’s Marine Resource). Samples from related projects are also included: ADFish, DCU, FEAS, GATEWAYS, IMAGIN, IMES, INIS_HYRDO, JIBS, MESH, SCALLOP, SEAI and UCC.
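
    For illustration only, a much-simplified Folk-style classifier from the percentages of gravel, sand and mud; the real Folk (1954) triangle distinguishes more classes and uses finer sand:mud ratio bands, so the thresholds below are assumptions rather than the scheme used for this dataset.

    ```python
    # Much-simplified Folk-style sediment classifier (illustrative thresholds only).
    def classify_sediment(gravel_pct: float, sand_pct: float, mud_pct: float) -> str:
        total = gravel_pct + sand_pct + mud_pct
        if not 99.0 <= total <= 101.0:
            raise ValueError("percentages should sum to ~100")
        sand_to_mud = sand_pct / mud_pct if mud_pct > 0 else float("inf")
        if gravel_pct >= 80:
            return "Gravel"
        if gravel_pct >= 30:
            return "Sandy gravel" if sand_to_mud >= 1 else "Muddy gravel"
        if gravel_pct >= 5:
            return "Gravelly sand" if sand_to_mud >= 1 else "Gravelly mud"
        if sand_to_mud >= 9:
            return "Sand"
        if sand_to_mud >= 1:
            return "Muddy sand"
        if sand_to_mud >= 1 / 9:
            return "Sandy mud"
        return "Mud"

    print(classify_sediment(2, 70, 28))  # -> "Muddy sand"
    ```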

  20. Stationary Sampling Systems Report

    • archivemarketresearch.com
    doc, pdf, ppt
    Updated Apr 16, 2025
    Cite
    Archive Market Research (2025). Stationary Sampling Systems Report [Dataset]. https://www.archivemarketresearch.com/reports/stationary-sampling-systems-202113
    Explore at:
    doc, pdf, ppt
    Dataset updated
    Apr 16, 2025
    Dataset authored and provided by
    Archive Market Research
    License

    https://www.archivemarketresearch.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The global stationary sampling systems market is experiencing robust growth, driven by increasing demand across diverse industries like pharmaceuticals, food processing, and chemicals. The market, valued at approximately $1.5 billion in 2025, is projected to witness a Compound Annual Growth Rate (CAGR) of 6% from 2025 to 2033. This expansion is fueled by stringent regulatory compliance requirements necessitating accurate and reliable sample collection, the rising adoption of automation in various industries for improved efficiency and reduced human error, and growing investments in research and development leading to the development of advanced sampling systems with enhanced features. Liquid sampling systems currently dominate the market due to widespread applications in diverse sectors; however, the gas and powder sampling systems segments are poised for significant growth, fueled by increasing demand in specialized industries such as environmental monitoring and materials science. Geographic expansion is another key driver. While North America and Europe currently hold a significant market share, rapid industrialization and infrastructural development in Asia-Pacific, particularly in China and India, are creating lucrative opportunities for stationary sampling system providers. The market faces challenges such as high initial investment costs associated with advanced systems and the need for skilled personnel to operate and maintain them. However, ongoing technological advancements leading to more cost-effective and user-friendly systems are expected to mitigate these restraints and support continued market expansion. Competitive rivalry among established players like Parker, GEMü, and Swagelok, alongside the emergence of niche players focusing on specialized applications, ensures a dynamic and innovative market landscape.
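
    As a quick check of the headline figures, compounding the stated 2025 base value at the reported 6% CAGR over the 2025–2033 forecast window implies a market size of roughly $2.4 billion by 2033:

    ```latex
    % Implied 2033 market size from the stated 2025 base and 6% CAGR.
    \[
      \$1.5\,\text{billion} \times (1 + 0.06)^{\,2033 - 2025}
      = 1.5 \times 1.06^{8}
      \approx \$2.4\,\text{billion}.
    \]
    ```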
