Multiple sampling campaigns were conducted near Boulder, Colorado, to quantify constituent concentrations and loads in Boulder Creek and its tributary, South Boulder Creek. Diel sampling was initiated at approximately 1100 hours on September 17, 2019, and continued until approximately 2300 hours on September 18, 2019. During this time period, samples were collected at two locations on Boulder Creek approximately every 3.5 hours to quantify the diel variability of constituent concentrations at low flow. Synoptic sampling campaigns on South Boulder Creek and Boulder Creek were conducted October 15-18, 2019, to develop spatial profiles of concentration, streamflow, and load. Numerous main stem and inflow locations were sampled during each synoptic campaign using the simple grab technique (17 main stem and 2 inflow locations on South Boulder Creek; 34 main stem and 17 inflow locations on Boulder Creek). Streamflow at each main stem location was measured using acoustic Doppler velocimetry. Bulk samples from all sampling campaigns were processed within one hour of sample collection. Processing steps included measurement of pH and specific conductance, and filtration using 0.45-micron filters. Laboratory analyses were subsequently conducted to determine dissolved and total recoverable constituent concentrations. Filtered samples were analyzed for a suite of dissolved anions using ion chromatography. Filtered, acidified samples and unfiltered, acidified samples were analyzed by inductively coupled plasma-mass spectrometry and inductively coupled plasma-optical emission spectroscopy to determine dissolved and total recoverable cation concentrations, respectively. This data release includes three data tables, three photographs, and a KMZ file showing the sampling locations. Additional information on the data table contents, including the presentation of data below the analytical detection limits, is provided in a Data Dictionary.
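Load at each synoptic location is the product of concentration and streamflow. A minimal sketch of that arithmetic (the function name and unit choices here are illustrative, not taken from the data release):

```python
def load_kg_per_day(conc_mg_per_l, flow_m3_per_s):
    """Instantaneous constituent load from concentration and streamflow.

    mg/L is equivalent to g/m^3, so conc * flow gives g/s;
    multiply by 86,400 s/day and divide by 1,000 g/kg for kg/day.
    """
    return conc_mg_per_l * flow_m3_per_s * 86400 / 1000

# e.g. 0.5 mg/L of a constituent at 2.0 m^3/s -> 86.4 kg/day
print(load_kg_per_day(0.5, 2.0))
```

Summing such point loads along the spatial profile is what lets a synoptic campaign attribute loading to individual inflows.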
Biological sampling data are derived from biological samples of fish harvested in Virginia, collected for aging purposes to aid in coastal stock assessments.
crumb/dummy-cot-sampling-dataset, a dataset hosted on Hugging Face and contributed by the HF Datasets community.
This view presents data selected from the geochemical mapping of North Greenland that are relevant for an evaluation of the potential for zinc mineralisation: CaO, K2O, Ba, Cu, Sr, Zn. The data represent the most reliable analytical values from 2469 stream sediment and 204 soil samples collected and analysed over the period from 1978 to 1999, plus a large number of reanalyses in 2011. The compiled data have been quality controlled and calibrated to eliminate bias between methods and times of analysis, as described in Thrane et al., 2011. In the present dataset, all values below the lower detection limit are indicated by the digit 0. Sampling: The regional geochemical surveys undertaken in North Greenland follow the procedure for stream sediment sampling given in Steenfelt, 1999; Thrane et al., 2011 give more information on sampling campaigns in the area. Each sample consists of 500 g of sediment collected into paper bags from the stream bed and banks, or, in areas devoid of streams, of soil. The sampling density is not consistent throughout the covered area and varies from regular, with 1 sample per 30 to 50 km², to scarce and irregular in other areas. Analyses were made on screened <0.1 mm or <0.075 mm grain-size fractions.
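Because values below the lower detection limit are encoded as the digit 0, downstream analysis should treat zeros as censored values rather than measured concentrations. A sketch of one common convention, half-detection-limit substitution (the function and the choice of substitution are assumptions, not prescribed by the dataset):

```python
def substitute_censored(values, detection_limit):
    """Replace below-detection-limit zeros with half the detection limit.

    In this dataset a 0 means "below the lower detection limit", not a
    measured zero; half-DL substitution is one simple, widely used fix.
    """
    return [detection_limit / 2 if v == 0 else v for v in values]

# e.g. element values (ppm) with an assumed lower detection limit of 2 ppm
print(substitute_censored([0, 5.0, 0, 12.0], 2.0))  # [1.0, 5.0, 1.0, 12.0]
```

More rigorous treatments (e.g. regression on order statistics) exist, but any of them first requires distinguishing the encoded zeros from true values, as above.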
Establishment-specific sampling results for Raw Beef sampling projects. Current data is updated quarterly; archive data is updated annually. Data is split by FY. See the FSIS website for additional information.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Sample names, sampling descriptions and contextual data.
Location, presence of fibrous minerals (including asbestos), brief description of the sites and of the samples, extractable fraction of macro- and micronutrients (µg of ions/g of soil ± standard deviation), C%, N%, and C/N (statistical analysis was performed by ANOVA with Tukey's post-hoc test).
Tool support in software engineering often depends on relationships, regularities, patterns, or rules mined from sampled code. Examples are approaches to bug prediction, code recommendation, and code autocompletion. Sampling is necessary to scale the analysis of such data. Many samples consist of software projects taken from GitHub; however, the specifics of sampling may influence how well the mined patterns generalize.
In this paper, we focus on how to sample software projects that are clients of libraries and frameworks when mining for inter-library usage patterns. We notice that when limiting the sample to a very specific library, inter-library patterns in the form of implications from one library to another may not generalize well. Using a simulation and a real case study, we analyze different sampling methods. Most importantly, our simulation shows that the implication generalizes well only when sampling for the disjunction of both libraries involved in it. Second, we show that real empirical data sampled from GitHub does not behave as our simulation would predict, which identifies a potential problem with using such an API to study inter-library usage patterns.
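The underlying hazard, that the sampling frame changes what a mined rule looks like, can be illustrated with a toy simulation (the adoption rates below are invented for illustration; this is not the paper's actual experimental setup):

```python
import random

random.seed(1)

# Toy population of projects; each independently uses library A and/or B.
pop = [(random.random() < 0.2, random.random() < 0.4) for _ in range(200_000)]

def support(sample):
    """Fraction of projects in the sample using both A and B."""
    return sum(1 for a, b in sample if a and b) / len(sample)

frames = {
    "whole population": pop,
    "clients of A only": [p for p in pop if p[0]],
    "clients of A or B": [p for p in pop if p[0] or p[1]],
}
for name, frame in frames.items():
    # the same rule's support differs depending on the sampling frame
    print(f"{name}: support = {support(frame):.2f}")
```

Restricting the frame to clients of one library inflates the apparent support of any rule involving that library, which is why the choice of frame matters when mining implications between libraries.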
Details of the sampling sites.
(n): natural substratum; (a): artificial substratum; (0.5 m): 0.5 meter deep; (2.0 m): 2.0 meters deep; (5.0 m): 5.0 meters deep. Total number of samples: 72.
Link to the ScienceBase Item Summary page for the item described by this metadata record. Application Profile: Web Browser. Link Function: information.
Sampling overview.
Longitudinal, community-based sampling is important for understanding prevalence and transmission of respiratory pathogens. Using a minimally invasive sampling method, the FAMILY Micro study monitored the oral, nasal and hand microbiota of families for 6 months. Here, we explore participant experiences and opinions. A mixed methods approach was utilised. A quantitative questionnaire was completed after every sampling timepoint to report levels of discomfort and pain, as well as time taken to collect samples. Participants were also invited to discuss their experiences in a qualitative structured exit interview. We received questionnaires from 36 families. Most adults and children >5y experienced no pain (94% and 70%) and little discomfort (73% and 47% no discomfort) regardless of sample type, whereas children ≤5y experienced variable levels of pain and discomfort (48% no pain but 14% hurts even more, whole lot or worst; 38% no discomfort but 33% moderate, severe, or extreme discomfort). The time taken for saliva and hand sampling decreased over the study. We conducted interviews with 24 families. Families found the sampling method straightforward, and adults and children >5y preferred nasal sampling using a synthetic absorptive matrix over nasopharyngeal swabs. It remained challenging for families to fit sampling into their busy schedules. Adequate fridge/freezer space and regular sample pick-ups were found to be important factors for feasibility. Messaging apps proved extremely effective for engaging with participants. Our findings provide key information to inform the design of future studies, specifically that self-sampling at home using minimally invasive procedures is feasible in a family context.
Establishment-specific sampling results for Siluriformes Product sampling projects. Current data is updated quarterly; archive data is updated annually. Data is split by FY. See the FSIS website for additional information.
Survey research in the Global South has traditionally required large budgets and lengthy fieldwork. The expansion of digital connectivity presents an opportunity for researchers to engage global subject pools and study settings where in-person contact is challenging. This paper evaluates Facebook advertisements as a tool to recruit diverse survey samples in the Global South. Using Facebook's advertising platform, we quota-sample respondents in Mexico, Kenya, and Indonesia and assess how well these samples perform on a range of survey indicators, identify sources of bias, replicate a canonical experiment, and highlight trade-offs for researchers to consider. This method can quickly and cheaply recruit respondents, but these samples tend to be more educated than the corresponding national populations. Weighting ameliorates sample imbalances. This method generates data comparable to a commercial online sample for a fraction of the cost. Our analysis demonstrates the potential of Facebook advertisements to cost-effectively conduct research in diverse settings.
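The weighting step can be sketched as simple post-stratification on a single margin (the education shares below are invented for illustration, not figures from the paper):

```python
def poststrat_weights(sample_share, population_share):
    """Per-stratum weight = population share / sample share."""
    return {k: population_share[k] / sample_share[k] for k in sample_share}

# Online-recruited respondents tend to skew more educated than the
# national population; weighting down the over-represented stratum
# restores the population margin.
sample = {"primary": 0.10, "secondary": 0.30, "tertiary": 0.60}
population = {"primary": 0.30, "secondary": 0.45, "tertiary": 0.25}
weights = poststrat_weights(sample, population)

# weighted sample shares now match the population margin exactly
reweighted = {k: sample[k] * weights[k] for k in sample}
```

In practice weights are raked over several margins at once, but the principle is the same: inflate under-represented strata and deflate over-represented ones.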
Dataset Card for "sampling-distill-train-data-kth-shift4"
Training data for sampling-based watermark distillation using the KTH s=4 watermarking strategy from the paper On the Learnability of Watermarks for Language Models. Llama 2 7B with decoding-based watermarking was used to generate 640,000 watermarked samples, each 256 tokens long. Each sample is prompted with a 50-token prefix from OpenWebText (prompts not included in the samples).
Information on samples submitted for RNAseq
Rows are individual samples
Columns are: ID; Sample Name; Date sampled; Species; Sex; Tissue; Geographic location; Date extracted; Extracted by; Nanodrop Conc. (ng/µl); 260/280; 260/230; RIN; Plate ID; Position; Index name; Index Seq; Qubit BR kit Conc. (ng/ul); BioAnalyzer Conc. (ng/ul); BioAnalyzer bp (region 200-1200); Submission reference; Date submitted; Conc. (nM); Volume provided; PE/SE; Number of reads; Read length
https://pgmapinfo.princegeorge.ca/opendata/CityofPrinceGeorge_Open_Government_License_Open_Data.pdf
A sampling station is a facility that is used for collecting water samples. Sampling stations may be dedicated sampling devices, or they may be other devices of the system where a sample may be obtained.
Descriptions of the sampling design and dates.
Number of samples for each sampling interval.