Multiple sampling campaigns were conducted near Boulder, Colorado, to quantify constituent concentrations and loads in Boulder Creek and its tributary, South Boulder Creek. Diel sampling was initiated at approximately 1100 hours on September 17, 2019, and continued until approximately 2300 hours on September 18, 2019. During this time period, samples were collected at two locations on Boulder Creek approximately every 3.5 hours to quantify the diel variability of constituent concentrations at low flow. Synoptic sampling campaigns on South Boulder Creek and Boulder Creek were conducted October 15-18, 2019, to develop spatial profiles of concentration, streamflow, and load. Numerous main stem and inflow locations were sampled during each synoptic campaign using the simple grab technique (17 main stem and 2 inflow locations on South Boulder Creek; 34 main stem and 17 inflow locations on Boulder Creek). Streamflow at each main stem location was measured using acoustic Doppler velocimetry. Bulk samples from all sampling campaigns were processed within one hour of sample collection. Processing steps included measurement of pH and specific conductance, and filtration using 0.45-micron filters. Laboratory analyses were subsequently conducted to determine dissolved and total recoverable constituent concentrations. Filtered samples were analyzed for a suite of dissolved anions using ion chromatography. Filtered, acidified samples and unfiltered, acidified samples were analyzed by inductively coupled plasma-mass spectrometry and inductively coupled plasma-optical emission spectroscopy to determine dissolved and total recoverable cation concentrations, respectively. This data release includes three data tables, three photographs, and a KMZ file showing the sampling locations. Additional information on the data table contents, including the presentation of data below the analytical detection limits, is provided in a Data Dictionary.
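As a point of reference for how the synoptic concentration and streamflow measurements combine into loads, the sketch below shows the standard instantaneous-load calculation in Python; the function name and example values are illustrative and are not part of the data release.

```python
def load_kg_per_day(concentration_mg_per_L: float, streamflow_m3_per_s: float) -> float:
    """Instantaneous load (kg/day) from a dissolved concentration and a streamflow.

    1 mg/L equals 1 g/m^3, so g/m^3 * m^3/s * 86,400 s/day / 1,000 g/kg = a factor of 86.4.
    """
    return concentration_mg_per_L * streamflow_m3_per_s * 86.4

# Hypothetical example: 12 mg/L of a dissolved constituent at 0.5 m^3/s -> ~518 kg/day
print(load_kg_per_day(12.0, 0.5))
```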
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Sample names, sampling descriptions and contextual data.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
The sampling problem lies at the heart of atomistic simulations, and over the years many different enhanced sampling methods have been suggested towards its solution. These methods are often grouped into two broad families. On the one hand are methods such as umbrella sampling and metadynamics, which build a bias potential based on a few order parameters or collective variables. On the other hand are tempering methods such as replica exchange, which combine different thermodynamic ensembles into one single expanded ensemble. We adopt instead a unifying perspective, focusing on the target probability distribution sampled by the different methods. This allows us to introduce a new method that can sample any of the ensembles normally sampled via replica exchange, but does so in a collective-variables-based scheme. This method is an extension of the recently developed on-the-fly probability enhanced sampling (OPES) method [Invernizzi and Parrinello, J. Phys. Chem. Lett. 11.7 (2020)] that has previously been used for metadynamics-like sampling. The method is thus very general and can be used to achieve different types of enhanced sampling. It is also reliable and simple to use, since it has only a few, robust external parameters and a straightforward reweighting scheme. Furthermore, it can be used with any number of parallel replicas. We show the versatility of our approach with applications to multicanonical and multithermal-multibaric simulations, thermodynamic integration, umbrella sampling, and combinations thereof.
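To make the reweighting idea concrete, here is a minimal sketch (not the authors' code) of how canonical averages at a target temperature could be recovered from a multithermal expanded-ensemble run, assuming the potential energy U and the bias V are available for every stored frame.

```python
import numpy as np

kB = 0.0083144621  # Boltzmann constant in kJ/(mol K)

def reweight_to_temperature(U, V, obs, T0, T_target):
    """Weighted average of `obs` at T_target from frames sampled at T0 under bias V.

    Sampled distribution ~ exp(-beta0*(U+V)); target ~ exp(-beta*U),
    so the per-frame weight is exp(beta0*V - (beta - beta0)*U).
    """
    beta0, beta = 1.0 / (kB * T0), 1.0 / (kB * T_target)
    log_w = beta0 * np.asarray(V) - (beta - beta0) * np.asarray(U)
    log_w -= log_w.max()  # subtract the maximum for numerical stability
    w = np.exp(log_w)
    return np.average(obs, weights=w)

# Hypothetical usage, assuming energy, bias, and an observable were written to a text file:
# U, V, obs = np.loadtxt("colvar.dat", usecols=(1, 2, 3), unpack=True)
# print(reweight_to_temperature(U, V, obs, T0=300.0, T_target=350.0))
```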
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Tool support in software engineering often depends on relationships, regularities, patterns, or rules mined from sampled code. Examples are approaches to bug prediction, code recommendation, and code autocompletion. Sampling is what makes such analyses scale to large amounts of data. Many such samples consist of software projects taken from GitHub; however, the specifics of the sampling may influence how well the mined patterns generalize.
In this paper, we focus on how to sample software projects that are clients of libraries and frameworks when mining for inter-library usage patterns. We notice that when the sample is limited to clients of one very specific library, inter-library patterns in the form of implications from one library to another may not generalize well. Using a simulation and a real case study, we analyze different sampling methods. Most importantly, our simulation shows that only when sampling for the disjunction of both libraries involved in an implication does the implication generalize well. Second, we show that real empirical data sampled from GitHub does not behave as we would expect from our simulation. This identifies a potential problem with using such an API for studying inter-library usage patterns.
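The following toy simulation (an illustration with made-up probabilities, not the paper's experimental setup) shows why the sampling frame matters: the support, in particular, of an implication "uses A => uses B" can look very different depending on whether projects are drawn from the whole population, from clients of A only, or from the disjunction of clients of A or B.

```python
import random

random.seed(1)

def make_project():
    """One synthetic project: does it use library A and/or library B? (assumed rates)"""
    uses_a = random.random() < 0.30
    uses_b = random.random() < (0.80 if uses_a else 0.10)  # B co-occurs strongly with A
    return uses_a, uses_b

population = [make_project() for _ in range(100_000)]

def support_and_confidence(sample):
    n_a = sum(1 for a, _ in sample if a)
    n_ab = sum(1 for a, b in sample if a and b)
    support = n_ab / len(sample)
    confidence = n_ab / n_a if n_a else float("nan")
    return support, confidence

frames = {
    "whole population": population,
    "clients of A only": [p for p in population if p[0]],
    "clients of A or B (disjunction)": [p for p in population if p[0] or p[1]],
}
for name, frame in frames.items():
    sample = random.sample(frame, 1000)
    support, confidence = support_and_confidence(sample)
    print(f"{name:31s}  support={support:.2f}  confidence={confidence:.2f}")
```

The absolute numbers here are arbitrary; the sketch only illustrates that mined association metrics depend on the frame from which projects are sampled.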
Establishment specific sampling results for Raw Beef sampling projects. Current data is updated quarterly; archive data is updated annually. Data is split by FY. See the FSIS website for additional information.
crumb/dummy-cot-sampling-dataset, a dataset hosted on Hugging Face and contributed by the HF Datasets community.
Details of the sampling sites.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Despite the wide application of longitudinal studies, they are often plagued by missing data and attrition. The majority of methodological approaches focus on participant retention or modern missing data analysis procedures. This paper, however, takes a new approach by examining how researchers may supplement the sample with additional participants. First, refreshment samples use the same selection criteria as the initial study. Second, replacement samples identify auxiliary variables that may help explain patterns of missingness and select new participants based on those characteristics. A simulation study compares these two strategies for a linear growth model with five measurement occasions. Overall, the results suggest that refreshment samples lead to less relative bias, greater relative efficiency, and more acceptable coverage rates than replacement samples or not supplementing the missing participants in any way. Refreshment samples also have high statistical power. The comparative strengths of the refreshment approach are further illustrated through a real data example. These findings have implications for assessing change over time when researching at-risk samples with high levels of permanent attrition.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Location, presence of fibrous minerals (including asbestos), brief description of the sites and of the samples, extractable fraction of macro- and micronutrients (µg of ions/g of soil ± standard deviation), C%, N%, and C/N (statistical analysis was performed by ANOVA with Tukey's post-hoc test; P
Link to the ScienceBase Item Summary page for the item described by this metadata record. Application Profile: Web Browser. Link Function: information.
Dataset Card for "sampling-distill-train-data-kth-shift4"
Training data for sampling-based watermark distillation using the KTH s = 4 watermarking strategy in the paper On the Learnability of Watermarks for Language Models. Llama 2 7B with decoding-based watermarking was used to generate 640,000 watermarked samples, each 256 tokens long. Each sample is prompted with 50-token prefixes from OpenWebText (prompts are not included in the samples).
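A minimal sketch of loading the data with the Hugging Face datasets library is shown below; the repository namespace is a placeholder, so check the dataset card for the actual Hub ID.

```python
from datasets import load_dataset

# "your-org" is a placeholder namespace; use the one shown on the dataset card.
ds = load_dataset("your-org/sampling-distill-train-data-kth-shift4", split="train")

print(ds)     # number of rows and column names
print(ds[0])  # a single watermarked training sample
```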
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Information on samples submitted for RNAseq
Rows are individual samples
Columns are: ID; Sample Name; Date sampled; Species; Sex; Tissue; Geographic location; Date extracted; Extracted by; Nanodrop Conc. (ng/µl); 260/280; 260/230; RIN; Plate ID; Position; Index name; Index Seq; Qubit BR kit Conc. (ng/ul); BioAnalyzer Conc. (ng/ul); BioAnalyzer bp (region 200-1200); Submission reference; Date submitted; Conc. (nM); Volume provided; PE/SE; Number of reads; Read length
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Longitudinal, community-based sampling is important for understanding prevalence and transmission of respiratory pathogens. Using a minimally invasive sampling method, the FAMILY Micro study monitored the oral, nasal and hand microbiota of families for 6 months. Here, we explore participant experiences and opinions. A mixed methods approach was utilised. A quantitative questionnaire was completed after every sampling timepoint to report levels of discomfort and pain, as well as time taken to collect samples. Participants were also invited to discuss their experiences in a qualitative structured exit interview. We received questionnaires from 36 families. Most adults and children >5y experienced no pain (94% and 70%) and little discomfort (73% and 47% no discomfort) regardless of sample type, whereas children ≤5y experienced variable levels of pain and discomfort (48% no pain but 14% hurts even more, whole lot or worst; 38% no discomfort but 33% moderate, severe, or extreme discomfort). The time taken for saliva and hand sampling decreased over the study. We conducted interviews with 24 families. Families found the sampling method straightforward, and adults and children >5y preferred nasal sampling using a synthetic absorptive matrix over nasopharyngeal swabs. It remained challenging for families to fit sampling into their busy schedules. Adequate fridge/freezer space and regular sample pick-ups were found to be important factors for feasibility. Messaging apps proved extremely effective for engaging with participants. Our findings provide key information to inform the design of future studies, specifically that self-sampling at home using minimally invasive procedures is feasible in a family context.
Establishment specific sampling results for Siluriformes Product sampling projects. Current data is updated quarterly; archive data is updated annually. Data is split by FY. See the FSIS website for additional information.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: “random chance,” which is based on probability sampling, “minimal information,” which yields at least one new code per sampling step, and “maximum information,” which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimal information scenario.
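A minimal version of the "random chance" scenario can be simulated directly; the code count and observation probability below are arbitrary assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

n_codes = 20      # number of distinct codes in the hypothetical population
p_observe = 0.15  # mean probability that a sampled source exhibits any given code
n_runs = 1_000    # Monte Carlo repetitions

sizes = []
for _ in range(n_runs):
    seen = np.zeros(n_codes, dtype=bool)
    n_sources = 0
    while not seen.all():
        # "random chance": each newly sampled source reveals each code independently
        seen |= rng.random(n_codes) < p_observe
        n_sources += 1
    sizes.append(n_sources)

print(f"mean sample size to reach saturation: {np.mean(sizes):.1f} sources")
```

In this toy model, raising p_observe shrinks the required sample much faster than reducing n_codes does, consistent with the finding that saturation depends more on the mean probability of observing codes than on the number of codes.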
Descriptions of the sampling design and dates.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
This collection contains small mammal vouchers collected during small mammal sampling (NEON sample classes: mam_pertrapnight_in.voucherSampleID). Small mammal sampling is based on the lunar calendar, with timing of sampling constrained to occur within 10 days before or after the new moon. Typically, core sites are sampled 6 times per year, and gradient sites 4 times per year. Small mammals are sampled using box traps (models LFA, XLK, H.B. Sherman Traps, Inc., Tallahassee, FL, USA). Box traps are arrayed in three to eight (depending on the size of the site) 10 × 10 grids with 10 m spacing between traps at all sites. Small mammal trapping bouts consist of one or three nights of trapping, depending on whether a grid is designated for pathogen sample collection (3 nights) or not (1 night). Only mortalities and individuals that require euthanasia due to injuries are vouchered. The NEON Biorepository receives whole frozen specimens and prepares vouchers as either study skins with skulls (or full skeletons) or in 70-95% ethanol. Standard mammalian measurements are taken during specimen preparation (in mm: total length, tail length, hind foot length, ear length; and in g: mass) and are accessible in downloaded records (note: field measurements are listed in parentheses after preparation measurements, when available). Additional notes about parasites and reproductive condition are also accessible in downloaded records. See related links below for protocols and NEON related data products.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Sampling intervals highlighted in bold numbers indicate the approximate vertical extent of the oxygen minimum zone (O2 ≤ 45 µmol kg−1). D = Discovery cruise, MSM = Maria S. Merian cruises, UTC = Coordinated Universal Time, O2 min = lowest oxygen concentration at the respective station, O2 min depth = depth of the oxygen minimum at the respective station, SST = sea surface temperature, n.d. = no data, * = stations analysed for copepod abundance.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Research ships working at sea map the seafloor. The ships collect bathymetry data. Bathymetry is the measurement of how deep the sea is, and the study of the shape and features of the seabed. The name comes from Greek words meaning "deep" and "measure". Backscatter is the measurement of how hard the seabed is.

Bathymetry and backscatter data are collected on board boats working at sea. The boats use special equipment called a multibeam echosounder. A multibeam echosounder is a type of sonar that is used to map the seabed. Sound waves are emitted in a fan shape beneath the boat. The amount of time it takes for the sound waves to bounce off the bottom of the sea and return to a receiver is used to find out the water depth. The strength of the returned sound wave is used to find out how hard the bottom of the sea is. A strong return indicates a hard surface (rocks, gravel), and a weak return indicates a soft surface (silt, mud). The word backscatter comes from the fact that different bottom types "scatter" sound waves differently.

Using this equipment also allows predictions of the type of material present on the seabed, e.g. rocks, pebbles, sand, or mud. To confirm this, sediment samples are taken from the seabed. This process is called ground-truthing or sampling. Grab sampling is the most popular method of ground-truthing. There are three main types of grab used, depending on the size of the vessel and the weather conditions: Day, Shipek, or Van Veen grabs. The grabs take a sample of sediment from the surface layer of the seabed. The samples are then sent to a lab for analysis. Particle size analysis (PSA) has been carried out on samples collected since 2004. The results are used to cross-reference the seabed sediment classifications that are made from the bathymetry and backscatter datasets, and are used to create seabed sediment maps (mud, sand, gravel, rock). Sediments have been classified based on percentage sand, mud, and gravel (after Folk 1954).

This dataset shows locations with completed samples from the seabed (seafloor) around Ireland. These samples are known as grab samples. The data were collected from 2001 to 2019. It is a vector dataset. Vector data portrays the world using points, lines, and polygons (areas). The sample data are shown as points. Each point holds information on the survey ID, year, vessel name, sample ID, instrument used, date, time, latitude, longitude, depth, report, recovery, percentage of mud, sand and gravel, description, and Folk classification.

The dataset was mapped as part of the Irish National Seabed Survey (INSS) and INFOMAR (Integrated Mapping for the Sustainable Development of Ireland's Marine Resource). Samples from related projects are also included: ADFish, DCU, FEAS, GATEWAYS, IMAGIN, IMES, INIS_HYDRO, JIBS, MESH, SCALLOP, SEAI and UCC.
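As a small illustration of the depth calculation described above (assuming a nominal sound speed; real surveys apply measured sound-velocity profiles):

```python
SOUND_SPEED_SEAWATER_M_PER_S = 1500.0  # nominal value; the real speed varies with temperature and salinity

def depth_from_two_way_travel_time(t_seconds: float) -> float:
    """Water depth from the time a sonar pulse takes to reach the seabed and return."""
    # The pulse travels down and back, so only half the travel time contributes to depth.
    return SOUND_SPEED_SEAWATER_M_PER_S * t_seconds / 2.0

print(depth_from_two_way_travel_time(0.2))  # a 0.2 s round trip corresponds to ~150 m of water
```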
https://www.archivemarketresearch.com/privacy-policy
The global stationary sampling systems market is experiencing robust growth, driven by increasing demand across diverse industries such as pharmaceuticals, food processing, and chemicals. The market, valued at approximately $1.5 billion in 2025, is projected to grow at a compound annual growth rate (CAGR) of 6% from 2025 to 2033. This expansion is fueled by stringent regulatory compliance requirements that necessitate accurate and reliable sample collection, the rising adoption of automation across industries for improved efficiency and reduced human error, and growing research and development investment that is yielding advanced sampling systems with enhanced features. Liquid sampling systems currently dominate the market owing to their widespread application across sectors; however, the gas and powder sampling system segments are poised for significant growth, fueled by increasing demand in specialized fields such as environmental monitoring and materials science. Geographic expansion is another key driver. While North America and Europe currently hold a significant market share, rapid industrialization and infrastructural development in Asia-Pacific, particularly in China and India, are creating lucrative opportunities for stationary sampling system providers. The market faces challenges such as the high initial investment costs associated with advanced systems and the need for skilled personnel to operate and maintain them. However, ongoing technological advancements leading to more cost-effective and user-friendly systems are expected to mitigate these restraints and support continued market expansion. Competitive rivalry among established players such as Parker, GEMÜ, and Swagelok, alongside the emergence of niche players focusing on specialized applications, ensures a dynamic and innovative market landscape.
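For readers checking the cited figures, the projection implied by a $1.5 billion 2025 base and a 6% CAGR through 2033 (eight compounding years, if those assumptions hold) works out as follows:

```python
base_value_busd = 1.5   # stated 2025 market size, USD billions
cagr = 0.06             # stated compound annual growth rate
years = 2033 - 2025     # eight compounding years

projected = base_value_busd * (1 + cagr) ** years
print(f"Implied 2033 market size: ${projected:.2f}B")  # roughly $2.39B
```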