Facebook
TwitterResearchers in different social science disciplines have successfully used Facebook to recruit subjects for their studies. However, such convenience samples are not generally representative of the population. We develop and validate a new quota sampling method to recruit respondents using Facebook advertisements, and publish an R package to semi-automate this quota sampling process using the Facebook Marketing API. To test the method, we used Facebook advertisements to quota sample 2432 U.S. respondents for a survey on climate change public opinion. We conducted a contemporaneous nationally representative survey asking identical questions using a high-quality online survey panel whose respondents were recruited using probability sampling. Many results from the Facebook-sampled survey are similar to those from the online panel survey; furthermore, results from the Facebook-sampled survey approximate results from the American Community Survey (ACS) for a set of validation questions. These findings suggest that using Facebook to recruit respondents is a viable option for survey researchers wishing to approximate population-level public opinion.
Facebook
TwitterThis dataset was created by Prajwal Kumar
Facebook
Twitterhttps://www.wiseguyreports.com/pages/privacy-policyhttps://www.wiseguyreports.com/pages/privacy-policy
| BASE YEAR | 2024 |
| HISTORICAL DATA | 2019 - 2023 |
| REGIONS COVERED | North America, Europe, APAC, South America, MEA |
| REPORT COVERAGE | Revenue Forecast, Competitive Landscape, Growth Factors, and Trends |
| MARKET SIZE 2024 | 935.9(USD Million) |
| MARKET SIZE 2025 | 1023.0(USD Million) |
| MARKET SIZE 2035 | 2500.0(USD Million) |
| SEGMENTS COVERED | Application, Service Type, End Use, Deployment Model, Regional |
| COUNTRIES COVERED | US, Canada, Germany, UK, France, Russia, Italy, Spain, Rest of Europe, China, India, Japan, South Korea, Malaysia, Thailand, Indonesia, Rest of APAC, Brazil, Mexico, Argentina, Rest of South America, GCC, South Africa, Rest of MEA |
| KEY MARKET DYNAMICS | growing data volume, increasing demand for efficiency, advancements in algorithm technology, rising adoption of big data analytics, need for unbiased sampling methods |
| MARKET FORECAST UNITS | USD Million |
| KEY COMPANIES PROFILED | Amazon, SAP, Tibco Software, Google, Dell Technologies, Microsoft, Salesforce, Hewlett Packard Enterprise, Cisco, Rustam Group, Intel, Cloudera, IBM, DataStax, Facebook, Oracle |
| MARKET FORECAST PERIOD | 2025 - 2035 |
| KEY MARKET OPPORTUNITIES | Increasing demand for data analytics, Growth in cloud-based service adoption, Rising need for real-time data processing, Expanding use in machine learning, Advancements in big data technologies |
| COMPOUND ANNUAL GROWTH RATE (CAGR) | 9.3% (2025 - 2035) |
Facebook
TwitterThis is a database of a variety of biological, reproductive, and energetic data collected from fish on the continental shelf in the northwest Atlantic Ocean. Species sampled in this database thus far include winter flounder, yellowtail flounder, summer flounder, haddock, cusk, Atlantic wolffish, and Atlantic herring. Data are collected from fish provided principally from fishermen participating...
Facebook
TwitterA data set of cross-nationally comparable microdata samples for 15 Economic Commission for Europe (ECE) countries (Bulgaria, Canada, Czech Republic, Estonia, Finland, Hungary, Italy, Latvia, Lithuania, Romania, Russia, Switzerland, Turkey, UK, USA) based on the 1990 national population and housing censuses in countries of Europe and North America to study the social and economic conditions of older persons. These samples have been designed to allow research on a wide range of issues related to aging, as well as on other social phenomena. A common set of nomenclatures and classifications, derived on the basis of a study of census data comparability in Europe and North America, was adopted as a standard for recoding. This series was formerly called Dynamics of Population Aging in ECE Countries. The recommendations regarding the design and size of the samples drawn from the 1990 round of censuses envisaged: (1) drawing individual-based samples of about one million persons; (2) progressive oversampling with age in order to ensure sufficient representation of various categories of older people; and (3) retaining information on all persons co-residing in the sampled individual''''s dwelling unit. Estonia, Latvia and Lithuania provided the entire population over age 50, while Finland sampled it with progressive over-sampling. Canada, Italy, Russia, Turkey, UK, and the US provided samples that had not been drawn specially for this project, and cover the entire population without over-sampling. Given its wide user base, the US 1990 PUMS was not recoded. Instead, PAU offers mapping modules, which recode the PUMS variables into the project''''s classifications, nomenclatures, and coding schemes. Because of the high sampling density, these data cover various small groups of older people; contain as much geographic detail as possible under each country''''s confidentiality requirements; include more extensive information on housing conditions than many other data sources; and provide information for a number of countries whose data were not accessible until recently. Data Availability: Eight of the fifteen participating countries have signed the standard data release agreement making their data available through NACDA/ICPSR (see links below). Hungary and Switzerland require a clearance to be obtained from their national statistical offices for the use of microdata, however the documents signed between the PAU and these countries include clauses stipulating that, in general, all scholars interested in social research will be granted access. Russia requested that certain provisions for archiving the microdata samples be removed from its data release arrangement. The PAU has an agreement with several British scholars to facilitate access to the 1991 UK data through collaborative arrangements. Statistics Canada and the Italian Institute of statistics (ISTAT) provide access to data from Canada and Italy, respectively. * Dates of Study: 1989-1992 * Study Features: International, Minority Oversamples * Sample Size: Approx. 1 million/country Links: * Bulgaria (1992), http://www.icpsr.umich.edu/icpsrweb/ICPSR/studies/02200 * Czech Republic (1991), http://www.icpsr.umich.edu/icpsrweb/ICPSR/studies/06857 * Estonia (1989), http://www.icpsr.umich.edu/icpsrweb/ICPSR/studies/06780 * Finland (1990), http://www.icpsr.umich.edu/icpsrweb/ICPSR/studies/06797 * Romania (1992), http://www.icpsr.umich.edu/icpsrweb/ICPSR/studies/06900 * Latvia (1989), http://www.icpsr.umich.edu/icpsrweb/ICPSR/studies/02572 * Lithuania (1989), http://www.icpsr.umich.edu/icpsrweb/ICPSR/studies/03952 * Turkey (1990), http://www.icpsr.umich.edu/icpsrweb/ICPSR/studies/03292 * U.S. (1990), http://www.icpsr.umich.edu/icpsrweb/ICPSR/studies/06219
Facebook
TwitterThe data products are the sampling results from FSIS’ National Antimicrobial Resistance Monitoring System (NARMS) Cecal sampling program. Data for sampling results from NARMS Product sampling program is currently posted on the FSIS Website and are grouped by commodity (https://www.fsis.usda.gov/science-data/data-sets-visualizations/laboratory-sampling-data). The antimicrobials and bacteria tested under NARMS are selected are based on their importance to human health and use in food-producing animals (FDA Guidance for Industry # 152 (https://www.fda.gov/media/69949/download)). Cecal contents from cattle, swine, chicken, and turkeys were sampled as part of FSIS’s routine NARMS cecal sampling program for major species.
Facebook
Twitterhttps://dataintelo.com/privacy-and-policyhttps://dataintelo.com/privacy-and-policy
According to our latest research, the global Telemetry Sampling Strategies market size reached USD 2.87 billion in 2024, driven by accelerating demand for efficient real-time data collection across industries. The market is exhibiting a robust compound annual growth rate (CAGR) of 12.1% from 2025 to 2033. By 2033, the market is projected to reach USD 7.96 billion. This strong growth trajectory is underpinned by the proliferation of IoT devices, increasing complexity of enterprise networks, and the critical need for actionable insights from vast telemetry data streams.
The growth of the Telemetry Sampling Strategies market is primarily fueled by the exponential increase in connected devices and systems generating vast volumes of telemetry data. Enterprises across sectors such as telecommunications, healthcare, automotive, and energy are deploying advanced telemetry solutions to monitor, analyze, and optimize operational performance. The adoption of sophisticated sampling strategies allows organizations to capture high-quality, representative data efficiently while minimizing storage and processing costs. The evolution of network architectures, including the rise of 5G and edge computing, further amplifies the need for scalable and intelligent telemetry sampling, as real-time monitoring becomes mission-critical for maintaining service quality and security.
Another significant growth driver is the rising importance of network security, compliance, and performance management. Organizations are increasingly leveraging telemetry sampling strategies to detect anomalies, prevent cyber threats, and ensure regulatory compliance. The integration of artificial intelligence and machine learning with telemetry platforms enables predictive analytics and automated decision-making, enhancing the value derived from sampled data. Furthermore, as organizations transition to cloud-native and hybrid environments, the demand for flexible and scalable telemetry sampling solutions continues to surge, supporting seamless visibility across complex, distributed infrastructures.
The market’s expansion is also propelled by the growing adoption of telemetry sampling in data analytics and business intelligence applications. Enterprises are seeking to harness the full potential of telemetry data to drive digital transformation, optimize resource allocation, and improve customer experience. The ability to extract actionable insights from sampled telemetry data is becoming a key differentiator, particularly as data volumes outpace traditional processing capabilities. Vendors are responding with innovative solutions that offer customizable sampling techniques, real-time analytics, and integration with leading cloud platforms, further accelerating market growth.
Regionally, North America maintains a leading position in the Telemetry Sampling Strategies market, supported by early adoption of advanced technologies, significant investments in network infrastructure, and a strong presence of key market players. Asia Pacific is emerging as the fastest-growing region, driven by rapid digitalization, expanding IoT deployments, and government initiatives to enhance industrial automation and smart city projects. Europe exhibits steady growth, underpinned by stringent data privacy regulations and increasing demand for secure, high-performance network monitoring solutions. Latin America and the Middle East & Africa are also witnessing rising adoption, albeit from a smaller base, as enterprises in these regions embark on digital transformation journeys.
The Telemetry Sampling Strategies market is segmented by component into hardware, software, and services. Hardware components encompass devices such as sensors, telemetry modules, and network probes that enable data collection and transmission. The hardware segment continues to witness steady growth as organizations upgrade legacy systems and deploy new infrastructure to support real-time telemetry. The proliferation of IoT devices and the advent of 5G networks are driving demand for advanced hardware that can handle higher data volumes and more complex sampling requirements. Manufacturers are focusing on developing energy-efficient, compact, and interoperable devices to cater to diverse industry needs.
Software solutions play a pivotal role in the telemetry sampling ecosystem, providing the intelligence required to implement, man
Facebook
TwitterThe main table contains all data collected from each bottomfish received and processed over the course of the Federal Disaster Relief Program (FDRP)-funded Hawaiian Archipelago Bottomfish Sampling Program as well as fish collected on the Oscar Elton Sette, other research cruises including small boat operations, hapu'upu'puu received from the Laysan during a separate project, and any bottomfish...
Facebook
TwitterCC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
When researchers design an experiment, they usually hold potentially relevant features of the experiment constant. We call these details the “topic” of the experiment. For example, researchers studying the impact of party cues on attitudes must inform respondents of the parties’ positions on a particular policy. In doing so, researchers implement just one of many possible designs. Clifford, Leeper, and Rainey (2023) argue that researchers should implement many of the possible designs in parallel—what they call “topic sampling”—to generalize to a larger population of topics. We describe two estimators for topic-sampling designs. First, we describe a nonparametric estimator of the typical effect that is unbiased under the assumptions of the design. Second, we describe a hierarchical model that researchers can use to describe the heterogeneity. We suggest describing the heterogeneity across topics in three ways: (1) the standard deviation in treatment effects across topics, (2) the treatment effects for particular topics, and (3) how the treatment effects for particular topics vary with topic-level predictors. We evaluate the performance of the hierarchical model using the Strengthening Democracy Challenge megastudy and show that the hierarchical model works well.
Facebook
TwitterSurvey research in the Global South has traditionally required large budgets and lengthy fieldwork. The expansion of digital connectivity presents an opportunity for researchers to engage global subject pools and study settings where in-person contact is challenging. This paper evaluates Facebook advertisements as a tool to recruit diverse survey samples in the Global South. Using Facebook's advertising platform we quota-sample respondents in Mexico, Kenya, and Indonesia and assess how well these samples perform on a range of survey indicators, identify sources of bias, replicate a canonical experiment, and highlight trade-offs for researchers to consider. This method can quickly and cheaply recruit respondents, but these samples tend to be more educated than corresponding national populations. Weighting ameliorates sample imbalances. This method generates comparable data to a commercial online sample for a fraction of the cost. Our analysis demonstrates the potential of Facebook advertisements to cost-effectively conduct research in diverse settings.
Facebook
TwitterBetween 1984 January - 2002 June, personnel from NMFS/PIFSC/FRMD/FMB/FMAP and Hawaii Department of Aquatic Resources (DAR) conducted port sampling at the United Fishing Agency (UFA) Fish Auction. They recorded the total landing at the UFA Fish Auction, with a frequency of six times a week during the earlier years to twice a week during the later years.
In 2000 January, DAR implemented a Dea...
Facebook
TwitterAttribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Two different problems, i.e. a low-dimensional (LD) and a high-dimensional (HD) problems are considered. The LD problem has 2 variables for a 4-ply symmetric square composite laminate. Similarly, the HD problem consists of 16 variables for a 32-ply symmetric square composite laminate. The value of h for LD and HD problems is taken as 0.005 and 0.04 respectively.
For each problem, three different types of sampling technique, i.e. random sampling (RS), Latin hypercube sampling (LHS) [1] and Hammersley sampling (HS) [2] are adopted. The RS, LHS and HS primarily differ in the uniformity of sample points over the design space such that RS has the least and HS has the maximum uniform distributions of sample points. Based on the recommendations of Jin et al. [3], and Zhao and Xue [4], 72 and 612 sample points are considered in each training dataset of LD and HD problems respectively.
Based on the FE formulation, several high-fidelity datasets for the LD and HD problems are generated, as presented in the Supplementary Material file “Predictive modelling of laminated composite plates.xlsx” in nine sheets that are organized as detailed out in Table 1.
References:
McKay, M. D.; Beckman, R. J.; Conover, W. J. A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics, 2000, 42, 55-61.
Hammersley, J. M. Monte Carlo methods for solving multivariable problems. Annals of the New York Academy of Sciences, 1960, 86, 844-874.
Jin, R.; Chen, W.; Simpson, T. W. Comparative studies of metamodelling techniques under multiple modelling criteria. Structural and Multidisciplinary Optimization, 2001, 23, 1-13.
Zhao, D.; Xue, D. A comparative study of metamodeling methods considering sample quality merits. Structural and Multidisciplinary Optimization, 2010, 42, 923-938.
Facebook
TwitterCC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Scholars have made considerable strides in evaluating and improving the external validity of experimental research. However, little attention has been paid to a crucial aspect of external validity – the topic of study. Researchers frequently develop a general theory and hypotheses (e.g., about policy attitudes), then conduct a study on a specific topic (e.g., environmental attitudes). Yet, the results may vary depending on the topic chosen. In this paper, we develop the idea of topic sampling – rather than studying a single topic, we randomly sample many topics from a defined population. As an application, we combine topic sampling with a classic survey experiment design on partisan cues. Using a hierarchical model, we efficiently estimate the effect of partisan cues for each policy, showing that the size of the effect varies considerably, and predictably, across policies. We conclude with advice on implementing our approach and using it to improve theory testing.
Facebook
TwitterDiversity and Distributions, 00, 1–16. https://doi.org/10.1111/ddi.13749
Access this dataset on Dryad: https://doi.org/10.5068/D1769Q
Description: R script for calculating bias in: 1) all iNaturalist plant observations and 2) iNaturalist and professional observations of the 4 study species, Hedychium gardnerianum, Lantana camara, Leucaena leucocephala, and Psidium cattleianum.
Description: R script for: 1) producing Hedychium gardnerianum, Lantana camara, Leucaena leucocephala, and Psidium cattleianum habitat suitability models and 2) calculating overlap among model series with Schoener's D.
Description: Comma-delimited file containing the ...
Facebook
Twitter
According to our latest research, the global Social Sampling Platform market size reached USD 1.27 billion in 2024 and is projected to grow at a strong CAGR of 14.2% during the forecast period. By 2033, the market is expected to attain a value of USD 3.76 billion, driven by the rising need among brands and retailers to engage consumers directly, gather actionable feedback, and optimize product launches. The rapid expansion of digital marketing strategies, the increasing influence of social media, and the growing emphasis on personalized customer experiences are collectively fueling the robust growth trajectory of the Social Sampling Platform market worldwide.
The primary growth factor for the Social Sampling Platform market is the increasing adoption of digital-first consumer engagement strategies. Brands across sectors such as consumer goods, food & beverage, beauty & personal care, and healthcare are leveraging these platforms to distribute product samples, collect real-time feedback, and foster authentic interactions with their target audiences. The shift from traditional sampling methods to digital platforms enables companies to reach a broader, more targeted demographic while reducing operational costs and improving ROI. Additionally, the ability to track consumer responses and behavior through analytics empowers brands to make data-driven decisions, further accelerating the adoption of social sampling solutions.
Another significant driver is the growing power of social media and influencer marketing. Social sampling platforms are increasingly integrated with social media channels, allowing brands to amplify their reach and tap into user-generated content. The viral nature of social sharing, combined with the authenticity of peer recommendations, enhances brand visibility and credibility. As consumers become more discerning and expect personalized experiences, platforms that facilitate seamless sample distribution and feedback collection on social channels are becoming indispensable tools for marketers. This trend is particularly pronounced among younger demographics who value digital interactions and are more likely to engage with brands online.
Furthermore, technological advancements in artificial intelligence, machine learning, and data analytics are transforming the capabilities of social sampling platforms. These innovations enable platforms to offer sophisticated targeting, predictive analytics, and automated campaign management, resulting in higher engagement rates and more meaningful insights. The integration of AI-powered chatbots, real-time sentiment analysis, and automated reporting tools streamlines the sampling process, reduces manual intervention, and enhances the overall efficiency of marketing campaigns. As a result, both large enterprises and small and medium-sized businesses are increasingly investing in advanced social sampling solutions to stay competitive in a rapidly evolving digital landscape.
In the realm of digital marketing, the Sampler Workstation has emerged as a pivotal tool for brands seeking to enhance their social sampling strategies. This innovative platform offers a comprehensive suite of features designed to streamline the sampling process, from campaign creation to execution and analysis. With the ability to integrate seamlessly with existing marketing stacks, the Sampler Workstation empowers brands to target specific demographics with precision, ensuring that product samples reach the right audience at the right time. The platform's robust analytics capabilities provide valuable insights into consumer behavior, enabling marketers to refine their strategies and maximize return on investment. As the demand for personalized marketing experiences continues to grow, the Sampler Workstation stands out as a versatile solution that caters to the evolving needs of modern brands.
From a regional perspective, North America continues to dominate the Social Sampling Platform market, accounting for the largest share in 2024, followed closely by Europe and Asia Pacific. The high penetration of digital marketing technologies, widespread use of social media, and a mature e-commerce ecosystem in these regions are key contributors to market growth. Meanwhile, emerging markets in Asia Pacific and Latin America are witnessing accelerated adoption, dri
Facebook
TwitterU.S. Government Workshttps://www.usa.gov/government-works
License information was derived automatically
This sampling frame is a set of grid-based, finite-area frames spanning the offshore areas surrounding Mexico, and is intended for use with the North American Bat Monitoring Program (NABat). A Generalized Random-Tessellation Stratified (GRTS) Survey Design draw was added to the sample units from the raw sampling grids (https://doi.org/10.5066/P9XBOCVV). The GRTS survey design algorithm assigns a spatially balanced and randomized ordering (GRTS order) to each cell within its respective framework. Grid cells are prioritized numerically; the lower the number, the higher the sampling priority. Cells can then be selected for monitoring following the GRTS order, ensuring both randomization and spatial balance. Monitoring within this standardized framework allows statistical inference to non-surveyed locations and ensures the validity of analyses at regional and range-wide scales. NABat is a continental collaboration including state and provincial, federal, and local agencies intended to ...
Facebook
TwitterU.S. Government Workshttps://www.usa.gov/government-works
License information was derived automatically
This data release contains calculated metrics which summarize various biodiversity and functional/life history trait information about fish communities sampled across the Chesapeake Bay Watershed as well as ancillary data related to time/place of sampling and sampling methodology. The fish sampling data used to compute these metrics were compiled from various fish sampling programs conducted by state and federal agencies, county governments, universities, and river basin commissions across the watershed. Prior to computation of community metrics data from individual fish sampling programs were investigated for completeness and data entry errors. Subsequently, desired fields were extracted from each individual dataset and manipulated into a common form and compiled, including standardization of species names and conversion of coordinates, to similar datum. Following compilation of the disparate datasets, fish species were linked to species-specific trait information including nativ ...
Facebook
TwitterThis dataset contains information on fishing activity collecting biololgical samples for better understanding life history of bottomfish in Guam. The data contain information on trip, catch, and effort. The trip specific data contains information on fisher and vessel, observer (person who entered the data), the port trip started in, start time and day and end time and day of trip, the fishing a...
Facebook
TwitterNortheast Cooperative Research Study Fleet (SF) Program partners with a subset of commercial fishermen to collect high quality, high resolution, haul by haul self-reported fishing data. SF staff routinely sails with program participants in order to collect independent catch weight measurements to compare to the Captains’ kept and discard records for verification.The SF Program creates a unique...
Facebook
Twitterhttps://spdx.org/licenses/CC0-1.0.htmlhttps://spdx.org/licenses/CC0-1.0.html
Recent guidance on environmental modeling and global land-cover validation stresses the need for a probability-based design. Additionally, spatial balance has also been recommended as it ensures more efficient sampling, which is particularly relevant for understanding land use change. In this paper I describe a global sample design and database called the Global Grid (GG) that has both of these statistical characteristics, as well as being flexible, multi-scale, and globally comprehensive. The GG is intended to facilitate collaborative science and monitoring of land changes among local, regional, and national groups of scientists and citizens, and it is provided in a variety of open source formats to promote collaborative and citizen science. Since the GG sample grid is provided at multiple scales and is globally comprehensive, it provides a universal, readily-available sample. It also supports uneven probability sample designs through filtering sample locations by user-defined strata. The GG is not appropriate for use at locations above ±85° because the shape and topological distortion of quadrants becomes extreme near the poles. Additionally, the file sizes of the GG datasets are very large at fine scale (resolution ~600 m × 600 m) and require a 64-bit integer representation.
Facebook
TwitterResearchers in different social science disciplines have successfully used Facebook to recruit subjects for their studies. However, such convenience samples are not generally representative of the population. We develop and validate a new quota sampling method to recruit respondents using Facebook advertisements, and publish an R package to semi-automate this quota sampling process using the Facebook Marketing API. To test the method, we used Facebook advertisements to quota sample 2432 U.S. respondents for a survey on climate change public opinion. We conducted a contemporaneous nationally representative survey asking identical questions using a high-quality online survey panel whose respondents were recruited using probability sampling. Many results from the Facebook-sampled survey are similar to those from the online panel survey; furthermore, results from the Facebook-sampled survey approximate results from the American Community Survey (ACS) for a set of validation questions. These findings suggest that using Facebook to recruit respondents is a viable option for survey researchers wishing to approximate population-level public opinion.