Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Online surveys often include quantitative attention checks, but inattentive participants might also be identified using their qualitative responses. We used the software Turnitin™ to assess the originality of open-ended responses in four mixed-method online surveys that included validated multi-item rating scales. Across surveys, 18-35% of participants were identified as having copied responses from online sources. We assessed indicator reliability and internal consistency reliability and found that both were lower for participants identified as using copied text versus those who wrote more original responses. Those who provided more original responses also provided more consistent responses to the validated scales, suggesting that these participants were more attentive. We conclude that this process can be used to screen qualitative responses from online surveys. We encourage future research to replicate this screening process using similar tools, investigate strategies to reduce copying behaviour, and explore the motivation of participants to search for information online.
https://dataverse.harvard.edu/api/datasets/:persistentId/versions/2.null/customlicense?persistentId=doi:10.18738/T8/Y3HT9K
Data set to accompany article.
By means of a split-ballot survey experiment, we study whether a normative instruction not to use the internet when answering political knowledge questions reduces cheating in web surveys. The knowledge questions refer to basic facts about the European Union, and the data come from the Italian National Election Study web panel carried out in Italy before the 2014 European Election. Our analysis shows that a simple normative instruction significantly reduces cheating. We also show that reducing cheating is important for a correct assessment of the reliability of knowledge scales, while a decrease in cheating leaves the knowledge gap between less and more highly educated respondents unaltered. These results invite caution when including political knowledge questions in an online survey. Our advice is to include a normative instruction not to search the internet, to reduce cheating and obtain more genuine answers. More generally, we conclude by stressing the need to consider the implications of online data collection when building questionnaires for public opinion research.
The Quarterly Labour Force Survey (QLFS) is a household-based sample survey conducted by Statistics South Africa (Stats SA). It collects data on the labour market activities of individuals aged 15 years or older who live in South Africa.
National coverage
Individuals
The QLFS sample covers the non-institutional population of South Africa with one exception: the only institutional subpopulation included in the QLFS sample is individuals in workers' hostels. Persons living in private dwelling units within institutions are also enumerated. For example, within a school compound, one would enumerate the schoolmaster's house and teachers' accommodation because these are private dwellings. Students living in a dormitory on the school compound would, however, be excluded.
Sample survey data [ssd]
The QLFS uses a master sampling frame that is shared by several household surveys conducted by Statistics South Africa. This wave of the QLFS is based on the 2013 master frame, which was created from the 2011 census. There are 3,324 PSUs in the master frame and roughly 33,000 dwelling units.
The sample for the QLFS is based on a stratified two-stage design with probability proportional to size (PPS) sampling of PSUs in the first stage, and sampling of dwelling units (DUs) with systematic sampling in the second stage.
For each quarter of the QLFS, a quarter of the sampled dwellings are rotated out of the sample. These dwellings are replaced by new dwellings from the same PSU or the next PSU on the list. For more information see the statistical release.
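The two-stage design described above can be illustrated with a minimal sketch of cumulative-size systematic PPS selection followed by systematic sampling of dwelling units. This is a simplified illustration under stated assumptions, not Stats SA's actual procedure; the PSU names and sizes below are hypothetical.

```python
import random

def pps_systematic(psus, n_select, seed=0):
    """Systematic probability-proportional-to-size selection of PSUs
    (cumulative-size method). `psus` is a list of (psu_id, size) pairs,
    where size is the PSU's count of dwelling units."""
    rng = random.Random(seed)
    total = sum(size for _, size in psus)
    interval = total / n_select
    start = rng.uniform(0, interval)
    targets = [start + i * interval for i in range(n_select)]
    selected, cum, t = [], 0.0, 0
    for psu_id, size in psus:
        cum += size
        while t < len(targets) and targets[t] <= cum:
            selected.append(psu_id)  # a very large PSU can be hit twice
            t += 1
    return selected

def systematic_dus(n_dus, n_sample, seed=0):
    """Stage 2: systematic sample of dwelling units within a selected PSU."""
    rng = random.Random(seed)
    step = n_dus / n_sample
    start = rng.uniform(0, step)
    return [int(start + i * step) for i in range(n_sample)]

# Hypothetical frame: 100 PSUs with 50-200 dwelling units each
frame = [(f"PSU-{i:03d}", random.Random(i).randint(50, 200)) for i in range(100)]
sample = pps_systematic(frame, 10)
```

Because the selection interval is `total / n_select`, a PSU's probability of selection is proportional to its dwelling-unit count, which is what the PPS stage of the design requires.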
Computer Assisted Telephone Interview [cati]
The survey questionnaire consists of the following sections:
- Biographical information (marital status, education, etc.)
- Economic activities in the last week for persons aged 15 years and older
- Unemployment and economic inactivity for persons aged 15 years and older
- Main work activity in the last week for persons aged 15 years and older
- Earnings in the main job for employees, employers and own-account workers aged 15 years and older
From 2010, the income data collected by South Africa's Quarterly Labour Force Survey are no longer provided in the QLFS dataset (except for a brief return in QLFS 2010 Q3, which may be an error). Possibly because the data are unreliable at the level of the quarter, Statistics South Africa now provides the income data from the QLFS in an annualised dataset called Labour Market Dynamics in South Africa (LMDSA). The datasets for LMDSA are available from DataFirst's website.
This dataset relates to a study exploring off-grid sanitation practices in Kenya, Peru, and South Africa, with a focus on how various user demographics access and utilize sanitation facilities. The study contrasts container-based sanitation with alternative methods. Participants, acting as citizen researchers, gathered confidential information using a specialized mobile application. The primary objective was to uncover obstacles and challenges, with the intention of sharing insights with other municipalities interested in implementing container-based sanitation solutions for off-grid regions.
Over the course of 12 months, participants received incentives for consistent involvement, following a micro-payment for micro-tasks model. Selection of participants was randomized, involving attendance at a training session and, if necessary, provision of a smartphone which they retained at the conclusion of the project. Weekly smartphone surveys were conducted in more than 300 households within informal settlements across the three countries throughout the project duration. These surveys aimed to capture daily routines, well-being, income levels, usage of infrastructure services, livelihood or environmental shocks and other socioeconomic factors on a weekly basis, contributing to more comprehensive analyses and informed decision-making processes.
The smartphone-based methodology offered an efficient and adaptable means of data collection, facilitating broad coverage across diverse geographical areas and subjects, while promoting regular engagement. Open Data Kit (ODK) tools were utilized to support data collection in resource-limited settings with unreliable connectivity.
Southerners tend to slip through the cracks between state surveys, which are unreliable for generalizing to the region, on the one hand, and national sample surveys, which usually contain too few Southerners to allow detailed examination, on the other. Moreover, few surveys routinely include questions specifically about the South.
To remedy this situation, the Odum Institute for Research in Social Science and the Center for the Study of the American South sponsor a Southern regional survey, called the Southern Focus Poll. Respondents in both the South and non-South are asked questions about economic conditions in their communities, cultural issues (such as Southern accent and the Confederate flag), race relations, religious involvement, and characteristics of Southerners and Northerners.
All of the data sets from the Southern Focus Polls archived here are generously made available by the Odum Institute for Research in Social Science of the University of North Carolina at Chapel Hill (OIRSS).
Quality of data
Definition: An area within which a uniform assessment of the quality of the data exists.
Distinction: accuracy of data; survey reliability.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Characteristics of study participants, by study group.
The map reliability calculator was developed to provide guidance for those mapping American Community Survey (ACS) data with regard to data uncertainty and its impact on a map. ACS estimates are derived from a survey, and such statistics are subject to sampling error: divergence from the actual characteristics of the surveyed population. Sampling error can have a considerable impact on the representativeness of maps. One way to measure the impact of sampling error on quantitative choropleth maps is to calculate the probability that geographic units are misclassified.

The calculator determines probable misclassification by examining published estimates, their associated Margins of Error (MOEs), and category break points. Using these numbers along with a standard probability density function, the calculator determines the likelihood that an estimate's actual value falls in a different category. The cumulative probability of erroneously classed units is summed for all geographies in a category and averaged to produce a relative reliability statistic. The same averaging of the cumulative probability of error is also calculated for the entire map. From these statistics, ACS data mappers can see the likelihood, on average, that any given geography in a map, or map category, actually falls in a different category and has therefore been misclassified.

To provide further guidance, the map reliability calculator also tells ACS data mappers whether their proposed map passes a reliability test, suggesting that the map is suitable for general use. There are two criteria in this reliability threshold. First, a map must have less than a 10% chance of erroneously classed geographies. This matches the Census Bureau's standard of publishing MOEs at a 90% confidence interval and using the 90% confidence level to determine statistically significant differences. Additionally, all individual categories must have reliability scores under 20%. This second criterion ensures that even categories with relatively few geographies, and therefore little impact on overall map reliability, are still reasonably trustworthy representations of reality.
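The misclassification calculation described above can be sketched in Python. The sketch assumes a normal sampling distribution and converts each MOE to a standard error by dividing by 1.645, consistent with the Census Bureau's 90% confidence level; the break points, estimates, and function names are illustrative, not the calculator's actual code.

```python
from statistics import NormalDist

Z90 = 1.645  # ACS margins of error are published at the 90% confidence level

def misclassification_prob(estimate, moe, breaks):
    """Probability that an estimate's true value lies outside the
    map class that its published value falls into."""
    se = moe / Z90
    dist = NormalDist(mu=estimate, sigma=se)
    # Class boundaries surrounding the published estimate
    lower = max((b for b in breaks if b <= estimate), default=float("-inf"))
    upper = min((b for b in breaks if b > estimate), default=float("inf"))
    return 1.0 - (dist.cdf(upper) - dist.cdf(lower))

def passes_reliability_test(estimates, moes, breaks, categories):
    """Overall average error must be under 10% and every category's
    average error under 20%, mirroring the two-criterion threshold."""
    probs = [misclassification_prob(e, m, breaks) for e, m in zip(estimates, moes)]
    overall = sum(probs) / len(probs)
    by_cat = {}
    for p, c in zip(probs, categories):
        by_cat.setdefault(c, []).append(p)
    cat_ok = all(sum(v) / len(v) < 0.20 for v in by_cat.values())
    return overall < 0.10 and cat_ok
```

As expected, an estimate sitting far from any break point with a small MOE has a near-zero misclassification probability, while an estimate close to a break with a large MOE is quite likely to be misclassified.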
In this project, data were used from:
- the British Household Panel Survey (waves 1-18), public release data file (GN33196)
- several waves of the Longitudinal Internet Studies for the Social Sciences (LISS), data available through www.lissdata.nl
The data were used to study nonresponse and measurement errors in panel surveys, and to study the trade-off between the two in a longitudinal setting.
Panel surveys involve groups of people or households that are followed over time.
From panel surveys, a lot can be learnt, as long as the measurement of all topics of interest is without error. There are two sources of error that threaten to make panel data invalid and unreliable: first, nonresponse among specific respondents, and second, errors in the measurement of the topic of interest using survey questions. Survey methodologists worry that errors due to nonresponse and measurement interact: some reasons for nonresponse might at the same time be reasons for reporting with more measurement error. Lower cognitive abilities, complex income compositions, and language difficulties are among these. In this research project, trade-offs and common causes for both nonresponse error and measurement error are studied with a latent variable modeling approach, using data from the British Household Panel Survey. Understanding the trade-off better will enable researchers to compare the nature and size of both errors, make better informed decisions in trying to limit survey errors, and reduce the costs of trying to minimise such errors.
Noncommunicable diseases (NCDs) are the leading cause of death worldwide. Efficient monitoring and surveillance are cornerstones to track progress of NCD burden, related risk factors and policy interventions. The systematic monitoring of risk factors to generate accurate and timely data is essential for a country’s ability to prioritize essential resources and make sound policy decisions to address the growing NCD burden.
With increasing access and use of mobile phones globally, opportunities exist to explore the feasibility of using mobile phone technology as an interim method to collect data and supplement household surveys. Such technologies have the potential to allow for efficiencies in producing timely, affordable, and accurate data to monitor trends, and augment traditional health surveys with new, faster mobile phone surveys.
The Bloomberg Data for Health initiative aims to strengthen the collection and use of critical public health information. One of the components of the initiative aims to explore innovative approaches to NCD surveillance, including the use of mobile phone surveys for NCDs. The main objectives of this component are to assess the feasibility, quality, and validity of nationally representative NCD Mobile Phone Surveys and propose a globally standardized protocol.
In order to develop comparable methods of data collection on health and health system responsiveness, WHO started a scientific survey study in 2000-2001. This study used a common survey instrument in nationally representative populations, with a modular structure for assessing the health of individuals in various domains, health system responsiveness, household health care expenditures, and additional areas such as adult mortality and health state valuations.
The health module of the survey instrument was based on selected domains of the International Classification of Functioning, Disability and Health (ICF) and was developed after a rigorous scientific review of various existing assessment instruments. The responsiveness module has been the result of ongoing work over the last 2 years that has involved international consultations with experts and key informants and has been informed by the scientific literature and pilot studies.
Questions on household expenditure and proportionate expenditure on health have been borrowed from existing surveys. The survey instrument has been developed in multiple languages using cognitive interviews and cultural applicability tests, stringent psychometric tests for reliability (i.e. test-retest reliability to demonstrate the stability of application) and most importantly, utilizing novel psychometric techniques for cross-population comparability.
The study was carried out in 61 countries, completing 71 surveys, because two different modes were intentionally used for comparison purposes in 10 countries. Surveys were conducted in different modes: 90-minute in-person household interviews in 14 countries; brief face-to-face interviews in 27 countries; computerized telephone interviews in 2 countries; and postal surveys in 28 countries. All samples were selected from nationally representative sampling frames with a known probability, so as to make estimates based on general population parameters.
The survey study tested novel techniques to control for reporting bias between different groups of people in different cultures or demographic groups (i.e. differential item functioning), so as to produce comparable estimates across cultures and groups. To achieve comparability, individuals' self-reports of their own health were calibrated against well-known performance tests (e.g. self-reported vision was measured against the standard Snellen visual acuity test) or against short descriptions in vignettes that marked known anchor points of difficulty (e.g. people with different levels of mobility, such as a paraplegic person or an athlete who runs 4 km each day), so as to adjust the responses for comparability. The same method was also used for individuals' self-reports assessing the responsiveness of their health systems, where vignettes describing different levels of responsiveness on different responsiveness domains were used to calibrate the individual responses.
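The vignette-anchoring idea can be sketched in its simplest nonparametric form: a respondent's self-rating is located relative to their own ratings of the anchoring vignettes, yielding a position that is comparable across respondents who use the response scale differently. This is a minimal illustration of the general technique, not WHO's actual statistical model; the ratings below are hypothetical and ties with a vignette are collapsed downward for simplicity.

```python
def anchor_self_report(self_rating, vignette_ratings):
    """Recode a self-rating relative to the respondent's own ratings
    of anchoring vignettes (higher ratings assumed to mean better
    functioning). Returns an integer position on a common scale with
    len(vignette_ratings) + 1 levels."""
    position = 1
    for v in sorted(vignette_ratings):
        if self_rating > v:
            position += 1
    return position

# Two hypothetical respondents who use a 1-5 scale differently but
# stand in the same position relative to the same vignette anchors:
lenient = anchor_self_report(5, [2, 4])  # rates everything high
strict = anchor_self_report(3, [1, 2])   # rates everything low
```

Both respondents end up at the same anchored position, which is the point of the calibration: the vignettes absorb each respondent's idiosyncratic use of the response scale.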
These data are useful in their own right to standardize indicators for different domains of health (such as cognition, mobility, self-care, affect, usual activities, pain, social participation, etc.), but they also provide a better measurement basis for assessing the health of populations in a comparable manner. The data from the surveys can be fed into composite measures such as "Healthy Life Expectancy" and improve the empirical data input for health information systems in different regions of the world. Data from the surveys were also useful to improve the measurement of the responsiveness of different health systems to the legitimate expectations of the population.
Sample survey data [ssd]
The last census was carried out in Georgia in 1989. Because of various political and economic events in the country, such as the conflict in the Abkhazia and Tskhinvali regions, civil war, etc., which caused migration, there are no population lists available that could be used for sampling purposes. Lists prepared for elections are inaccurate. Based on the existing statistical data, a random sample design was used and a Random Walk Procedure was followed. This design was exceptionally accepted by WHO. A total of 10 regions were sampled and 10,000 were drawn from these regions:
Region 1: Tbilisi
Region 2: Ajara
Region 3: Guria
Region 4: Imereti
Region 5: Kakheti
Region 6: Mstkheta-Mtianeti
Region 7: Samegrelo
Region 8: Samtskhe-Javakheti
Region 9: Kvemo Kartli
Region 10: Shida Kartli
The sampling frame covered urban and rural areas; however, due to the political situation, the Abkhazia and Tskhinvali regions were excluded. More females (57.8%) than males (42.2%) were interviewed.
Because of the questionnaire size and the difficult winter period of the fieldwork, a higher non-response rate was anticipated. However, the total percentage of non-response was much lower than expected. The main reasons for refusal to participate were mistrust, fear, and irritation due to respondents' bad socioeconomic conditions. Interview duration was also reported as a problem. Further, in regions and subregions of Georgia with a predominantly non-Georgian population, the language barrier became an additional negative factor, even though a bilingual questionnaire was used. In the Kvemo Kartli region, the Azeri population hardly understood either Georgian or Russian. Another problem was religion: female Muslim respondents were not allowed to participate in the survey without the permission of their husbands, who often were present during the interviews.
Face-to-face [f2f]
Data Coding At each site the data was coded by investigators to indicate the respondent status and the selection of the modules for each respondent within the survey design. After the interview was edited by the supervisor and considered adequate it was entered locally.
Data Entry Program A data entry program was developed at WHO specifically for the survey study and provided to the sites. It was developed using a database program called the I-Shell (short for Interview Shell), a tool designed for easy development of computerized questionnaires and data entry (34). This program allows for easy data cleaning and processing.
The data entry program checked for inconsistencies and validated the entries in each field by checking for valid response categories and range checks. For example, the program didn’t accept an age greater than 120. For almost all of the variables there existed a range or a list of possible values that the program checked for.
In addition, the data was entered twice to capture other data entry errors. The data entry program was able to warn the user whenever a value that did not match the first entry was entered at the second data entry. In this case the program asked the user to resolve the conflict by choosing either the 1st or the 2nd data entry value to be able to continue. After the second data entry was completed successfully, the data entry program placed a mark in the database in order to enable the checking of whether this process had been completed for each and every case.
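The range checks and double-entry comparison described above can be sketched as follows. The field names, valid ranges, and function names are illustrative, not the I-Shell implementation.

```python
# Illustrative validation rules: numeric ranges or category lists per field
VALID_RANGES = {"age": (0, 120), "sex": ("M", "F")}

def check_range(field, value):
    """True if the value is within the field's allowed range or
    category list; fields without a rule always pass."""
    rule = VALID_RANGES.get(field)
    if rule is None:
        return True
    if all(isinstance(x, (int, float)) for x in rule):
        lo, hi = rule
        return lo <= value <= hi
    return value in rule

def double_entry_conflicts(first, second):
    """Fields whose values differ between the first and second data
    entry; these must be resolved manually before the case is accepted."""
    return sorted(k for k in first if first.get(k) != second.get(k))

entry1 = {"id": 101, "age": 34, "sex": "F"}
entry2 = {"id": 101, "age": 43, "sex": "F"}  # digits transposed at re-entry
conflicts = double_entry_conflicts(entry1, entry2)
```

Here the second entry's transposed age is caught by the comparison, which is exactly the class of keying error that double entry is designed to detect.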
Data Transfer The data entry program was capable of exporting the data that was entered into one compressed database file which could be easily sent to WHO using email attachments or a file transfer program onto a secure server no matter how many cases were in the file. The sites were allowed the use of as many computers and as many data entry personnel as they wanted. Each computer used for this purpose produced one file and they were merged once they were delivered to WHO with the help of other programs that were built for automating the process. The sites sent the data periodically as they collected it enabling the checking procedures and preliminary analyses in the early stages of the data collection.
Data quality checks Once the data was received it was analyzed for missing information, invalid responses and representativeness. Inconsistencies were also noted and reported back to sites.
Data Cleaning and Feedback After receipt of cleaned data from sites, another program was run to check for missing information, incorrect information (e.g. wrong use of center codes), duplicated data, etc. The output of this program was fed back to sites regularly. Mainly, this consisted of cases with duplicate IDs, duplicate cases (where the data for two respondents with different IDs were identical), wrong country codes, missing age, sex, education and some other important variables.
Commercially available regional economic data for Alaska fisheries, such as IMpact analysis for PLANning (IMPLAN), are unreliable. These data therefore need to be either collected or estimated from more reliable information. Data have accordingly been collected or estimated for important economic variables such as cost, employment, and factor income (labor income and capital) for Alaska fisheries, and have been used to develop regional economic models for Alaska fisheries in order to estimate their economic impacts.
Large Language Models (LLMs) offer new research possibilities for social scientists, but their potential as "synthetic data" is still largely unknown. In this paper, we investigate how accurately the popular LLM ChatGPT can recover public opinion, prompting the LLM to adopt different "personas" and then provide feeling thermometer scores for 11 sociopolitical groups. The average scores generated by ChatGPT correspond closely to the averages in our baseline survey, the 2016-2020 American National Election Study. Nevertheless, sampling by ChatGPT is not reliable for statistical inference: there is less variation in responses than in the real surveys, and regression coefficients often differ significantly from equivalent estimates obtained using ANES data. We also document how the distribution of synthetic responses varies with minor changes in prompt wording, and we show how the same prompt yields significantly different results over a three-month period. Altogether, our findings raise serious concerns about the quality, reliability, and reproducibility of synthetic survey data generated by LLMs.
Land Surveying Equipment Market Size 2025-2029
The land surveying equipment market size is forecast to increase by USD 2.95 billion, at a CAGR of 6.3% between 2024 and 2029.
The market is experiencing significant growth due to the increasing demand for accurate mapping and data analysis in various industries, including construction, engineering, and real estate. This trend is driving the adoption of advanced surveying technologies, such as robotic total stations and 3D scanners, which offer higher precision and efficiency compared to traditional methods. Robotic total stations, in particular, are gaining popularity due to their ability to automate the surveying process, reducing human error and increasing productivity. These devices use GPS technology and self-leveling mechanisms to capture data and generate accurate surveys.

However, the market faces challenges in the form of regulatory and compliance requirements. As surveying data is critical for infrastructure projects, governments and regulatory bodies impose stringent regulations to ensure data accuracy and security. Compliance with these regulations can be time-consuming and costly, posing a significant challenge for market players.

Moreover, the emergence of drone technology in surveying applications is another trend transforming the market landscape. Drones equipped with high-resolution cameras and LiDAR sensors are increasingly being used for topographic surveys, volumetric analysis, and infrastructure inspections. However, the use of drones in surveying raises concerns regarding data privacy, security, and safety, which need to be addressed through regulatory frameworks and technological solutions.

In conclusion, the market is poised for growth due to the increasing demand for accurate mapping and data analysis. The adoption of advanced surveying technologies, such as robotic total stations and drones, is driving innovation and efficiency in the market.
However, regulatory and compliance challenges, data privacy concerns, and safety issues pose significant obstacles that market players need to navigate effectively to capitalize on market opportunities and maintain a competitive edge.
What will be the Size of the Land Surveying Equipment Market during the forecast period?
The market is characterized by continuous evolution and dynamic market activities. Construction staking and cadastral surveying remain key applications, with data integrity and report generation playing essential roles in ensuring accuracy and reliability. Data processing, survey software, and RTK systems facilitate efficient data collection and analysis, while total stations and control surveys ensure alignment and precision. Alignment surveys and as-built surveys are crucial for infrastructure development, as are elevation surveys and field data collection on construction sites.

Aerial surveying, site plans, and BIM integration are transforming the industry with advanced technologies such as laser scanning and drone surveying. Mining surveying, land development, and engineering surveying require high levels of data analysis and GIS integration for effective planning and execution. Hydrographic surveying, ground control points, and coordinate systems ensure data accuracy in various applications.

Emerging technologies like artificial intelligence, machine learning, and point cloud processing are revolutionizing the industry, offering new possibilities for data analysis and automation. The market's ongoing development is marked by the integration of GPS mapping, infrastructure development, and environmental monitoring, among others. Construction surveying, site preparation, and boundary surveys are essential components of real estate development, with 3D modeling and GNSS receivers streamlining the process. The market's continuous evolution underscores the importance of staying updated with the latest trends and technologies.
How is this Land Surveying Equipment Industry segmented?
The land surveying equipment industry research report provides comprehensive data (region-wise segment analysis), with forecasts and estimates in USD million for the period 2025-2029, as well as historical data from 2019-2023, for the following segments.
Product: TS and TL; UAV; GNSS system; Pipe lasers; Others
End-user: Construction; Mining; Oil and gas; Others
Geography: North America (US, Canada); Europe (France, Germany, UK); APAC (Australia, China, India, Japan); South America (Brazil); Rest of World (ROW)
By Product Insights
The TS and TL segment is estimated to witness significant growth during the forecast period. The market experiences significant growth due to the increasing adoption of advanced technologies in surveying applications. Total stations (TS) and theodolites (TL) play a pivotal role in this market. These instruments, which measure angles, distances, and elevations with high precision, are indispensable in construction and infrastructure applications.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
This is the replication package for "Accounting for Individual-Specific Reliability of Self-Assessed Measures of Economic Preferences and Personality Traits," accepted in 2023 by the Journal of Political Economy Microeconomics.
http://dcat-ap.ch/vocabulary/licenses/terms_by
Data of the official survey in the simplified data model (MOpublic).

Fixed points: The data level contains the location fixed points of category 1 (points of the national survey), category 2 (points of the canton) and category 3 (points of the official survey), as well as the height fixed points of category 1 (points of the national levelling), category 2 (points of the cantonal levelling) and category 3 (points of the municipal levelling). The points make reference to the coordinate system. The heights or working heights are based on the official height system of the national levelling of 1902 (LN02). The accuracy and reliability requirements depend on the tolerance levels (TVAV Art. 3). The position and height accuracy of the fixed points is defined as a standard deviation (mean error) (TVAV Art. 28). The external reliability of each fixed point is demonstrated by suitable statistical parameters (TVAV Art. 34).

Land cover: The data level contains the land cover areas (buildings, roads, water bodies, forests, etc.), which are defined nationwide as a division of areas, the building or insurance numbers, and the object names (e.g. building names, water body names, etc.). The accuracy requirements for the land cover areas, as well as the minimum areas to be included, depend on the tolerance levels (TVAV Art. 3). The positional accuracy of individual points is defined as a standard deviation (mean error) (TVAV Art. 29).

Individual objects: The data layer contains individual objects (e.g. walls, wells, masts, bridges, etc.), which are represented as point, line or surface elements. The accuracy requirements for the individual objects depend on the tolerance levels (TVAV Art. 3). The positional accuracy of the individual points is defined as a standard deviation (mean error) (TVAV Art. 29).

Nomenclature: The data level contains the field, place and terrain names. The field names refer to parts of the terrain and are depicted throughout as a division of territory. The place names refer to demarcated parts of the terrain and overlap the field names. The terrain names stand for individual terrain points which are not place or field names and have no demarcation.

Properties: The data level contains the boundary points, the plots of land, and the independent and permanent rights insofar as these can be delimited in terms of area. The accuracy and reliability requirements depend on the tolerance levels (TVAV Art. 3). The positional accuracy of the boundary points is defined as a standard deviation (mean error) (TVAV Art. 31). The external reliability of the boundary points is demonstrated by suitable parameters (TVAV Art. 34).

Pipelines: The data level contains the high-pressure pipelines for gas. The accuracy requirements for the pipelines depend on the tolerance levels (TVAV Art. 3). The positional accuracy of the individual points is defined as a standard deviation (mean error) (TVAV Art. 31).

Building addresses: The data level contains the building addresses, consisting of house or police numbers as well as street and square names.

Sovereign borders: The data level contains the sovereign border points and sovereign borders (municipal, district, cantonal and state borders). The accuracy and reliability requirements depend on the tolerance levels (TVAV Art. 3). The positional accuracy of the sovereign border points is defined as a standard deviation (mean error) (TVAV Art. 31). The external reliability of the sovereign border points is demonstrated by suitable parameters (TVAV Art. 34).

Data format INTERLIS (1 file per municipality) contains the following topics of the data model MOpublic: Control_points, Land_cover, Single_objects, Heights, Local_names, Ownership, Pipelines, Territorial_boundaries and Building_addresses.
Data format Shape contains the topics of the data model MOpublic listed above in the following shape files (geometry type in parentheses): FP_LFP (point), BB_BoFlaeche (area), BB_BBText (point), BB_ProjBoFlaeche (area), BB_ProjBBText (point), EO_Flaechenelement (area), EO_FlaechenelementText (point), EO_Linienelement (line), EO_LinienelementText (point), EO_Punktelement (point), EO_PunktelementText (point), NK_Namen (area), NK_NamenPos (point), LS_Grenzpunkt (point), LS_Liegenschaft (area), LS_LiegenschaftPos (point), LS_SelbstRecht-Bergwerk (area), LS_SelbstRecht-BergwerkPos (point), LS_ProjLiegenschaft (area), LS_ProjLiegenschaftPos (point), LS_ProjSelbstRecht-Bergwerk (area), LS_ProjSelbstRecht-BergwerkPos (point), RL_Linienelement (line), RL_LinienelementText (point), GEM_LocalisationPoint (point), GEM_MunicipalityLimit (area), GEB_BuildingEntrance (point), GEB_LocalisationNamePos (point) and GEB_HouseNumberPos (point). Data format DXF contains the topics of the data model MOpublic listed above in the layer structure according to the «MOpublic» data model instructions, chapter 9.6 DXF.
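For readers scripting against the Shape delivery, the naming convention above (a topic prefix such as FP_, BB_, EO_, NK_, LS_, RL_, GEM_ or GEB_, followed by the layer name, with the geometry type given in parentheses) lends itself to sorting a delivered directory by topic. The sketch below is illustrative only, not part of the MOpublic specification; the layer names are taken from the list above, and the grouping helper is an assumed convenience, not an official tool.

```python
from collections import defaultdict

# A small subset of the MOpublic shape layers listed above,
# mapped to their geometry types (illustrative, not exhaustive).
SHAPE_LAYERS = {
    "FP_LFP": "point",
    "BB_BoFlaeche": "area",
    "EO_Linienelement": "line",
    "LS_Grenzpunkt": "point",
    "LS_Liegenschaft": "area",
    "NK_NamenPos": "point",
}

def group_by_topic(layers):
    """Group layer names by their MOpublic topic prefix (text before '_')."""
    topics = defaultdict(list)
    for name, geometry in layers.items():
        prefix = name.split("_", 1)[0]
        topics[prefix].append((name, geometry))
    return dict(topics)

groups = group_by_topic(SHAPE_LAYERS)
# e.g. groups["LS"] holds both property layers from the subset above
```

The same prefix split would work on the file names of an actual municipal delivery, since every layer in the list follows the `TOPIC_Name` pattern.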
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The National Centre for Social Research (EKKE), in the framework of the ELIDEK-funded pilot project "Public Discourse Fact Checking - Check4facts", conducted an online survey on the sources of information and the public's views on the credibility of public information. A quantitative survey using an online questionnaire was conducted among a nationwide convenience sample of 1,370 people (aged 17+) between 26/11/2021 and 26/05/2022. The data file and survey questionnaire are freely available in the Resources section of this data project (in Greek).
Open Government Licence - Canada 2.0: https://open.canada.ca/en/open-government-licence-canada
License information was derived automatically
Between May 7th and June 4th, 2021, the Government of Ontario ran a consultation on the proposed Trustworthy Artificial Intelligence (AI) Framework. An open and anonymous survey was developed for input on:
* how the government can ensure AI is used responsibly to minimize misuse and maximize benefits for Ontarians
* ideas to improve the public's trust in AI
* experiences, concerns, and insights that will help the government shape its approach to using AI

The survey asked respondents to rank and comment on action items related to three draft commitments:
* No AI in secret
* AI use Ontarians can trust
* AI for all Ontarians

The survey data includes rankings of action items for each commitment.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Online surveys often include quantitative attention checks, but inattentive participants might also be identified using their qualitative responses. We used the software Turnitin™ to assess the originality of open-ended responses in four mixed-method online surveys that included validated multi-item rating scales. Across surveys, 18-35% of participants were identified as having copied responses from online sources. We assessed indicator reliability and internal consistency reliability and found that both were lower for participants identified as using copied text versus those who wrote more original responses. Those who provided more original responses also provided more consistent responses to the validated scales, suggesting that these participants were more attentive. We conclude that this process can be used to screen qualitative responses from online surveys. We encourage future research to replicate this screening process using similar tools, investigate strategies to reduce copying behaviour, and explore the motivation of participants to search for information online.