CBS News and The New York Times were partners in a series of election surveys covering the 1976 United States presidential election campaign. The surveys were intended to provide another dimension to the political reporting of the two organizations. The surveys, with extensive coverage early in the primary campaign, were designed to monitor the public's changing perception of the candidates, the issues, and the candidates' positions vis-a-vis the issues. Parts 1-9 contain separate nationwide surveys conducted by telephone, each with approximately 1,500 randomly selected adults. Five surveys were conducted monthly from February through June, and four more between early September and the general election -- one in September and one following each presidential debate. A final survey was conducted two days after the general election. Respondents were asked for their preferred presidential candidate, their ratings of the candidates' qualifications and positions, and their opinions on a variety of political issues. Part 10, the Election Day Survey, contains a national sample of voters who were interviewed at the polls. Respondents were asked to fill out a questionnaire that asked for the name of the presidential candidate for whom they had just voted, and other questions about their political preferences. Part 11 contains data for respondents who were first interviewed in Part 9, the Debate Three Survey, and recontacted and reinterviewed for the Post-Election Survey. Data include respondents' voting history, their evaluation of the nominees' positions on various political issues, and their opinions on current political and social issues. Parts 12-26 contain surveys conducted in 11 states on the day of the primary at the polling place, among a random sample of people who had just voted in either the Democratic or Republican presidential primary election. These surveys were conducted in the following primary states: California, Florida, Illinois, Indiana, Massachusetts, Michigan, New Hampshire, New York, Ohio, Pennsylvania, and Wisconsin. There are separate files for the Democratic and Republican primaries in Michigan, Ohio, Indiana, and California, making a total of fifteen primary day "exit" surveys. Respondents were asked whom they voted for and why, the issues that were important in making their choice, and their voting history. Demographic information on respondents in all surveys may include sex, race, age, religion, education, occupation, and labor union affiliation. These files were processed by the Roper Center under a cooperative arrangement with ICPSR. Most of these data were collected by CBS News and The New York Times. The Election Day Survey was conducted solely by CBS News. Parts 1-11 were made available to the ICPSR by CBS News.
Datasets:
DS0: Study-Level Files
DS1: February Survey
DS2: March Survey
DS3: April Survey
DS4: May Survey
DS5: June Survey
DS6: September Survey
DS7: Debate One Survey
DS8: Debate Two Survey (Registered Only)
DS9: Debate Three Survey (Registered Only)
DS10: The Election Day Survey
DS11: The Post-Election Survey (All)
DS12: New Hampshire Primary Survey
DS13: Massachusetts Primary Survey
DS14: Florida Primary Survey
DS15: Illinois Primary Survey
DS16: New York Primary Survey
DS17: Wisconsin Primary Survey
DS18: Pennsylvania Primary Survey
DS19: Indiana Democratic Primary Survey
DS20: Indiana Republican Primary Survey
DS21: Michigan Democratic Primary Survey
DS22: Michigan Republican Primary Survey
DS23: California Democratic Primary Survey
DS24: California Republican Primary Survey
DS25: Ohio Democratic Primary Survey
DS26: Ohio Republican Primary Survey
DS27: Codebook Introduction

Notes: (1) These files contain weights, which must be used in any data analysis. (2) There is no card-image data for Part 3, and Parts 11-19 are available only as card-image data. Also, this collection does not contain data for Oregon, although the machine-readable documentation indicates otherwise.

Universe: Parts 1-6: Persons in households with telephones in the coterminous United States. Parts 7-9 and 11: Registered voters with telephones in the coterminous United States. Parts 10 and 12-26: Voters in the 1976 primary election.
This dataset contains data about domestic absentee voting, provisional balloting, poll books, polling places, precincts, poll workers, and voting technology used in the 2022 election cycle. The corresponding comprehensive report covers these topics as well as voter registration and uniformed and overseas citizen voting in the 2022 election. The report is part of the EAC's biennial Election Administration and Voting Survey (EAVS) project.
https://search.gesis.org/research_data/datasearch-httpwww-da-ra-deoaip--oaioai-da-ra-de443090
Abstract (en): This poll is part of a continuing series of monthly surveys that solicit public opinion on the presidency and on a range of other political and social issues. Interviews were conducted with respondents in each state as they left their polling places on election day, November 6, 1984. Respondents were asked about their vote for president, political party identification, and opinions on several issues such as the United States budget deficit, national tax policies, and characteristics of the candidates that influenced voting decisions. The survey also includes state-specific questions that were only asked of voters in that state. Respondents were asked for their marital status, veteran status, religion, income, and whether they were a government employee or a school teacher. ICPSR data undergo a confidentiality review and are altered when necessary to limit the risk of disclosure. ICPSR also routinely creates ready-to-go data files along with setups in the major statistical software formats as well as standard codebooks to accompany the data. In addition to these procedures, ICPSR performed the following processing steps for this data collection: performed consistency checks, standardized missing values, and checked for undocumented or out-of-range codes. The universe is voters in the United States in the November 6, 1984 election. Precincts were selected with a probability proportionate to the total vote count cast in a recent past election. The data collection instrument is provided by ICPSR as Portable Document Format (PDF) files. The PDF file format was developed by Adobe Systems Incorporated and can be accessed using PDF reader software, such as the Adobe Acrobat Reader. Information on how to obtain a copy of the Acrobat Reader is provided on the ICPSR Web site.
According to the survey published by YouGov on July 1, 2025, the CDU/CSU would have received 28 percent of the vote if there had been a federal election in Germany on the Sunday after the survey. The SPD would have been well behind the Union with 14 percent of the vote. The right-wing populist AfD would have received 23 percent.
https://www.icpsr.umich.edu/web/ICPSR/studies/7814/terms
This data collection consists of two election surveys. Part 1, Pre-Congressional Poll, contains a nationwide telephone survey conducted in late September 1978, focusing on the respondents' voting intentions for the 1978 United States Congressional elections. A total of 1,451 randomly selected adults were surveyed. Respondents were asked whether they intended to vote and what issues would influence their vote, their reactions to President Carter's policies, and their preferences for presidential candidates in 1980. Demographic information including age, race, religion, income, political orientation, and education is available for each respondent. Part 2, Nationwide Election Day Poll, contains a nationwide "exit" survey conducted at the polls on election day, November 7, 1978. A total of 8,808 randomly selected voters were asked to fill out a questionnaire asking which party they voted for in the Congressional election and their opinion on a number of current political issues. Demographic information for respondents in Part 2 includes age, race, religion, income, and labor union affiliation. These datasets were made available to the ICPSR by the Election and Survey Unit of CBS News. The Pre-Congressional Poll was conducted solely by CBS News.
This survey provides demographic information on persons who did and did not register to vote, and also measures the number of persons who voted and their reasons for not registering.
https://www.archivemarketresearch.com/privacy-policy
The online polling software market is experiencing robust growth, driven by increasing digital adoption across various sectors. The market, currently estimated at $2.5 billion in 2025, is projected to witness a Compound Annual Growth Rate (CAGR) of 15% from 2025 to 2033. This expansion is fueled by the rising demand for efficient and cost-effective data collection methods in businesses, educational institutions, and political campaigns. The ease of use, real-time data analysis capabilities, and diverse polling options offered by these platforms are key drivers. Furthermore, the integration of online polling software with other communication and collaboration tools enhances its appeal and functionality, leading to wider adoption. Different polling types like browser, app, and scan polling cater to diverse needs, with applications ranging from simple opinion polls to complex surveys for market research. The market is segmented by application type (meeting, education, business, and others), reflecting the diverse use cases across industries. Geographic expansion, particularly in developing economies with increasing internet penetration, contributes to market growth. The competitive landscape is also dynamic, with a blend of established players and emerging companies offering a wide range of features and pricing models.

However, challenges remain. Data security and privacy concerns are paramount, requiring robust security measures to maintain user trust. The increasing sophistication of online polling tools requires continuous investment in research and development to stay ahead of the curve. Market saturation in certain regions and the cost of implementing and maintaining these systems can also pose restraints. Furthermore, effective user training and support is crucial to ensure widespread adoption and maximum value realization. To overcome these limitations, future market players will need to focus on innovative data security solutions, affordable pricing models, and user-friendly interfaces that can cater to diverse technical capabilities. The continuous evolution of functionalities and integration with other platforms will ensure competitiveness in the rapidly evolving digital landscape.
In many emerging democracies women are less likely to vote than men and, when they do vote, are likely to follow the wishes of male household and clan heads. We assess the impact of a voter awareness campaign on female turnout, candidate choice and party vote shares. Geographic clusters within villages were randomly assigned to treatment or control, and within treated clusters, some households were not targeted. Compared to women in control clusters, both targeted and untargeted women in treated clusters are 12 percentage points more likely to vote, and are also more likely to exercise independence in candidate choice, indicating large spillovers. Data from polling stations suggest that treating 10 women increased female turnout by about 7 votes, resulting in a cost per vote of US$3.1. Finally, a 10 percent increase in the share of treated women at the polling station led to a 6 percent decrease in the share of votes of the winning party.
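As a check on the arithmetic behind these figures: at US$3.1 per additional vote and about 7 additional votes per 10 treated women, the implied campaign cost is roughly US$2.2 per woman treated. The snippet below simply restates that back-of-the-envelope calculation; the per-woman cost is derived here for illustration and is not reported in the abstract.

```python
# Back-of-the-envelope restatement of the abstract's cost-per-vote figure.
# Only "cost_per_woman" is derived here; it is not reported in the abstract.

women_treated = 10        # group size used in the abstract's statement
additional_votes = 7      # extra female votes per 10 treated women
cost_per_vote = 3.1       # reported cost per additional vote, in US$

group_cost = cost_per_vote * additional_votes   # ~US$21.7 to treat 10 women
cost_per_woman = group_cost / women_treated     # ~US$2.17 per treated woman

print(f"Implied cost per treated woman: US${cost_per_woman:.2f}")
```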
Rural areas of districts Sukkur and Khairpur in the southern province of Sindh.
Household, Individual
The survey was administered to all women in the household above 18 years old as well as the male household head or the male spouse if the head was a woman.
Sample survey data [ssd]
A typical sample cluster yielded about 15 sample households and 41 sample women. In total, 2,736 women from 1,018 households were reached. During the door-to-door visit, basic data on each sample household was collected, including the GPS location of the house and a basic roster of all adult women with their past voting record and the name and address of their closest friend or confidant in the village. The confidant was selected as follows: in every even numbered household, the confidant of a woman who was either a daughter or a daughter in law of the household head was selected, while in every odd numbered household, the confidant of the household head (if the head was a woman) or the head’s wife, sister, mother or aunt was selected. Not all households yielded at least one “eligible” woman using this rule, so the final sample includes 797 confidants. Of these, almost all were in the same cluster as the sample women who identified them, but only 18 confidants were also in a sampled household. The door-to-door visit took 20 to 25 minutes for treated households and 5 to 10 minutes for control households. No selected household refused to be interviewed, although in a few cases a repeat visit took place on the same day. None of the households refused to participate in the awareness campaign.
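The even/odd household rule for recording a confidant can be expressed as a small function. The sketch below is an illustrative reconstruction with hypothetical field names (such as "relation_to_head"), not the survey firm's actual instrument logic.

```python
# Illustrative reconstruction of the confidant-selection rule; the field
# names and data layout are hypothetical.

def select_confidant_source(household_id, women):
    """Return the woman whose confidant is recorded, based on household parity."""
    if household_id % 2 == 0:
        # Even-numbered household: a daughter or daughter-in-law of the head.
        eligible_relations = {"daughter", "daughter_in_law"}
    else:
        # Odd-numbered household: the head herself (if female) or the head's
        # wife, sister, mother or aunt.
        eligible_relations = {"head", "wife", "sister", "mother", "aunt"}
    for woman in women:
        if woman["relation_to_head"] in eligible_relations:
            return woman
    return None  # household yields no "eligible" woman under this rule

# Example: household 12 (even) records the daughter-in-law's confidant.
household = [{"name": "A", "relation_to_head": "wife"},
             {"name": "B", "relation_to_head": "daughter_in_law"}]
print(select_confidant_source(12, household))
```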
Vote verification took place between the evening of February 18th, Election Day, and all day February 19th. On the evening of February 19th, the survey firm sent out a field team to each village to check 10 percent of the verifier's assignment at random. They found no significant differences. However, the village-based vote verifiers were unable to locate 99 sample women (in 27 households), roughly 3 percent of the sample. The final sample, therefore, has 2,637 women and 991 households. All 797 confidants were found and their votes verified. Attrition is, therefore, quite low and unrelated to treatment assignment (see Panel A, Appendix Table OA8). In addition, 158 women claimed to have cast a vote but did not have the requisite ink mark. To be conservative, we treat these women as not having voted, although the results do not change when these women are coded as voters.
Face-to-face [f2f]
The questionnaires for this survey include:
Pre-Election Visit Questionnaire: Due to time and budget constraints, only one of the control clusters in each village was included in the pre-election survey, with the exception of one large village in which two control clusters were selected.
Voting Verification Form - For Village Informant
Final Voting Questionnaire (Female) - Administered to all women in the household above 18 years old
Final Voting Questionnaire (Male) - Administered to the male household head, or the male spouse if the head was a woman
During elections, political polls provide critical data on the support each candidate receives. For that reason, the measurement of questions asking about candidate support has been receiving research attention. As online surveys increasingly become a widely used tool for public opinion and election polls, evaluating the measurement error associated with this survey mode is important. This study examines whether a candidate name order effect exists in presidential primary election surveys in the US. The findings show that, contrary to previous studies, the order of names does not have a significant impact on the support candidates received.
https://dataverse.harvard.edu/api/datasets/:persistentId/versions/1.2/customlicense?persistentId=doi:10.7910/DVN/6ITQKC
Project Description: Every two years, the U.S. Election Assistance Commission collects election administration-related data from 50 States, the District of Columbia, and four territories (Guam, Puerto Rico, American Samoa, and the U.S. Virgin Islands). Data are collected at the State and county levels from the entities listed above. This is administrative data collected directly from the States and territories; there is no sampling or weighting of the data involved. Topics include: voter registration, overseas voting, domestic absentee voting, provisional balloting, first-time voters, poll workers, electronic poll books, turnout, and type of voting systems used. The data cover the period from Election Day +1 2006 to Election Day 2008.
https://www.thebusinessresearchcompany.com/privacy-policy
The global public opinion and election polling market size is expected to reach $10.23 billion by 2029, growing at 3.5%, segmented by mode: online surveys, paper surveys, telephonic surveys, and one-to-one interviews.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Analysis of ‘US non-voters poll data’ provided by Analyst-2 (analyst-2.ai), based on source dataset retrieved from https://www.kaggle.com/yamqwe/us-non-voters-poll-datae on 28 January 2022.
--- Dataset description provided by original source is as follows ---
This dataset contains the data behind Why Many Americans Don't Vote.
Data presented here comes from polling done by Ipsos for FiveThirtyEight, using Ipsos’s KnowledgePanel, a probability-based online panel that is recruited to be representative of the U.S. population. The poll was conducted from Sept. 15 to Sept. 25 among a sample of U.S. citizens that oversampled young, Black and Hispanic respondents, with 8,327 respondents, and was weighted according to general population benchmarks for U.S. citizens from the U.S. Census Bureau’s Current Population Survey March 2019 Supplement. The voter file company Aristotle then matched respondents to a voter file to more accurately understand their voting history using the panelist’s first name, last name, zip code, and eight characters of their address, using the National Change of Address program if applicable. Sixty-four percent of the sample (5,355 respondents) matched, although we also included respondents who did not match the voter file but described themselves as voting “rarely” or “never” in our survey, so as to avoid underrepresenting nonvoters, who are less likely to be included in the voter file to begin with. We dropped respondents who were only eligible to vote in three elections or fewer. We defined those who almost always vote as those who voted in all (or all but one) of the national elections (presidential and midterm) they were eligible to vote in since 2000; those who vote sometimes as those who voted in at least two elections, but fewer than all the elections they were eligible to vote in (or all but one); and those who rarely or never vote as those who voted in no elections, or just one.
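The classification rule described above can be written as a short function. The sketch below is an illustrative reimplementation, not FiveThirtyEight's original code; it assumes two inputs per respondent, the number of national elections they were eligible for and the number they voted in.

```python
# Illustrative reimplementation of the voter-category rule described above
# (not FiveThirtyEight's original code).

def classify_voter(votes_cast, elections_eligible):
    """Classify a respondent's voting frequency since 2000."""
    if elections_eligible <= 3:
        return None  # dropped: eligible for three elections or fewer
    if votes_cast >= elections_eligible - 1:
        return "almost always"   # voted in all, or all but one
    if votes_cast <= 1:
        return "rarely/never"    # voted in no elections, or just one
    return "sometimes"           # at least two, but not (nearly) all

assert classify_voter(9, 10) == "almost always"
assert classify_voter(1, 8) == "rarely/never"
assert classify_voter(4, 8) == "sometimes"
assert classify_voter(2, 3) is None
```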
The data included here is the final sample we used: 5,239 respondents who matched to the voter file and whose verified vote history we have, and 597 respondents who did not match to the voter file and described themselves as voting "rarely" or "never," all of whom have been eligible for at least 4 elections.
If you find this information useful, please let us know.
License: Creative Commons Attribution 4.0 International License
Source: https://github.com/fivethirtyeight/data/tree/master/non-voters
This dataset was created by data.world's Admin and contains around 6000 samples along with Race, Q27 6, technical information and other features such as:
- Q4 6
- Q8 3
- and more
- Analyze Q10 3 in relation to Q8 6
- Study the influence of Q6 on Q10 4
If you use this dataset in your research, please credit data.world's Admin
--- Original source retains full ownership of the source dataset ---
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Granite State Poll is a quarterly poll conducted by the University of New Hampshire Survey Center. The poll sample consists of about 500 New Hampshire adults with a working telephone across the state. Each poll contains a series of basic demographic questions that are repeated in future polls, as well as a set of unique questions submitted by clients. This poll includes four questions related to preferences about dams, designed by Natallia Leuchanka Diessner, Catherine M. Ashcraft, Kevin H. Gardner, and Lawrence C. Hamilton as part of the "Future of Dams" project. This Technical Report, written by the UNH Survey Center, describes the protocols and standards of Granite State Poll #69 (Client Poll). The first file is a screenshot of the Technical Report to provide a preview for Figshare. The second file is the Technical Report in Microsoft Word format.
AP VoteCast is a survey of the American electorate conducted by NORC at the University of Chicago for Fox News, NPR, PBS NewsHour, Univision News, USA Today Network, The Wall Street Journal and The Associated Press.
AP VoteCast combines interviews with a random sample of registered voters drawn from state voter files with self-identified registered voters selected using nonprobability approaches. In general elections, it also includes interviews with self-identified registered voters conducted using NORC’s probability-based AmeriSpeak® panel, which is designed to be representative of the U.S. population.
Interviews are conducted in English and Spanish. Respondents may receive a small monetary incentive for completing the survey. Participants selected as part of the random sample can be contacted by phone and mail and can take the survey by phone or online. Participants selected as part of the nonprobability sample complete the survey online.
In the 2020 general election, the survey of 133,103 interviews with registered voters was conducted between Oct. 26 and Nov. 3, concluding as polls closed on Election Day. AP VoteCast delivered data about the presidential election in all 50 states as well as all Senate and governors’ races in 2020.
This is survey data and must be properly weighted during analysis: DO NOT REPORT THIS DATA AS RAW OR AGGREGATE NUMBERS!!
Instead, use statistical software such as R or SPSS to weight the data.
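For example, a weighted proportion is the sum of the weights in each category divided by the total weight, rather than a raw row count. The sketch below illustrates this in Python with pandas; the column names ("final_weight", "presvote") are placeholders, not the actual AP VoteCast variable names, and the equivalent R or SPSS procedures work the same way.

```python
import pandas as pd

# Minimal sketch of a weighted proportion. Column names ("final_weight",
# "presvote") are placeholders, not the actual AP VoteCast variable names.
df = pd.DataFrame({
    "presvote":     ["Biden", "Trump", "Biden", "Other", "Trump"],
    "final_weight": [0.8, 1.3, 1.1, 0.7, 1.0],
})

# Weighted share = sum of weights in each category / total weight,
# never the raw row count.
weighted_share = df.groupby("presvote")["final_weight"].sum() / df["final_weight"].sum()
print(weighted_share.round(3))
```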
National Survey
The national AP VoteCast survey of voters and nonvoters in 2020 is based on the results of the 50 state-based surveys and a nationally representative survey of 4,141 registered voters conducted between Nov. 1 and Nov. 3 on the probability-based AmeriSpeak panel. It included 41,776 probability interviews completed online and via telephone, and 87,186 nonprobability interviews completed online. The margin of sampling error is plus or minus 0.4 percentage points for voters and 0.9 percentage points for nonvoters.
State Surveys
In 20 states in 2020, AP VoteCast is based on roughly 1,000 probability-based interviews conducted online and by phone, and roughly 3,000 nonprobability interviews conducted online. In these states, the margin of sampling error is about plus or minus 2.3 percentage points for voters and 5.5 percentage points for nonvoters.
In an additional 20 states, AP VoteCast is based on roughly 500 probability-based interviews conducted online and by phone, and roughly 2,000 nonprobability interviews conducted online. In these states, the margin of sampling error is about plus or minus 2.9 percentage points for voters and 6.9 percentage points for nonvoters.
In the remaining 10 states, AP VoteCast is based on about 1,000 nonprobability interviews conducted online. In these states, the margin of sampling error is about plus or minus 4.5 percentage points for voters and 11.0 percentage points for nonvoters.
Although there is no statistically agreed upon approach for calculating margins of error for nonprobability samples, these margins of error were estimated using a measure of uncertainty that incorporates the variability associated with the poll estimates, as well as the variability associated with the survey weights as a result of calibration. After calibration, the nonprobability sample yields approximately unbiased estimates.
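One common way to reflect weight variability in a margin of error is to replace the raw sample size with Kish's effective sample size. The sketch below shows that adjustment as a general illustration only; it is not the exact uncertainty measure used for AP VoteCast.

```python
import math

def weighted_margin_of_error(weights, p=0.5, z=1.96):
    """Approximate MOE that reflects unequal weights via Kish's effective
    sample size, n_eff = (sum w)^2 / sum(w^2). A general illustration only,
    not the exact uncertainty measure used for AP VoteCast."""
    n_eff = sum(weights) ** 2 / sum(w ** 2 for w in weights)
    return z * math.sqrt(p * (1 - p) / n_eff)

# Example: 1,000 interviews with moderately variable weights.
weights = [1.0] * 600 + [0.5] * 200 + [2.0] * 200
print(f"about +/- {100 * weighted_margin_of_error(weights):.1f} percentage points")
```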
As with all surveys, AP VoteCast is subject to multiple sources of error, including from sampling, question wording and order, and nonresponse.
Sampling Details
Probability-based Registered Voter Sample
In each of the 40 states in which AP VoteCast included a probability-based sample, NORC obtained a sample of registered voters from Catalist LLC’s registered voter database. This database includes demographic information, as well as addresses and phone numbers for registered voters, allowing potential respondents to be contacted via mail and telephone. The sample is stratified by state, partisanship, and a modeled likelihood to respond to the postcard based on factors such as age, race, gender, voting history, and census block group education. In addition, NORC attempted to match sampled records to a registered voter database maintained by L2, which provided additional phone numbers and demographic information.
Prior to dialing, all probability sample records were mailed a postcard inviting them to complete the survey either online using a unique PIN or via telephone by calling a toll-free number. Postcards were addressed by name to the sampled registered voter if that individual was under age 35; postcards were addressed to “registered voter” in all other cases. Telephone interviews were conducted with the adult that answered the phone following confirmation of registered voter status in the state.
Nonprobability Sample
Nonprobability participants include panelists from Dynata or Lucid, including members of its third-party panels. In addition, some registered voters were selected from the voter file, matched to email addresses by V12, and recruited via an email invitation to the survey. Digital fingerprint software and panel-level ID validation is used to prevent respondents from completing the AP VoteCast survey multiple times.
AmeriSpeak Sample
During the initial recruitment phase of the AmeriSpeak panel, randomly selected U.S. households were sampled with a known, non-zero probability of selection from the NORC National Sample Frame and then contacted by mail, email, telephone and field interviewers (face-to-face). The panel provides sample coverage of approximately 97% of the U.S. household population. Those excluded from the sample include people with P.O. Box-only addresses, some addresses not listed in the U.S. Postal Service Delivery Sequence File and some newly constructed dwellings. Registered voter status was confirmed in field for all sampled panelists.
Weighting Details
AP VoteCast employs a four-step weighting approach that combines the probability sample with the nonprobability sample and refines estimates at a subregional level within each state. In a general election, the 50 state surveys and the AmeriSpeak survey are weighted separately and then combined into a survey representative of voters in all 50 states.
State Surveys
First, weights are constructed separately for the probability sample (when available) and the nonprobability sample for each state survey. These weights are adjusted to population totals to correct for demographic imbalances in age, gender, education and race/ethnicity of the responding sample compared to the population of registered voters in each state. In 2020, the adjustment targets are derived from a combination of data from the U.S. Census Bureau’s November 2018 Current Population Survey Voting and Registration Supplement, Catalist’s voter file and the Census Bureau’s 2018 American Community Survey. Prior to adjusting to population totals, the probability-based registered voter list sample weights are adjusted for differential non-response related to factors such as availability of phone numbers, age, race and partisanship.
Second, all respondents receive a calibration weight. The calibration weight is designed to ensure the nonprobability sample is similar to the probability sample in regard to variables that are predictive of vote choice, such as partisanship or direction of the country, which cannot be fully captured through the prior demographic adjustments. The calibration benchmarks are based on regional level estimates from regression models that incorporate all probability and nonprobability cases nationwide.
Third, all respondents in each state are weighted to improve estimates for substate geographic regions. This weight combines the weighted probability (if available) and nonprobability samples, and then uses a small area model to improve the estimate within subregions of a state.
Fourth, the survey results are weighted to the actual vote count following the completion of the election. This weighting is done in 10–30 subregions within each state.
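Steps one and four above both amount to adjusting weights so that weighted margins match known totals (demographic benchmarks in step one, actual vote counts in step four). A common technique for this kind of adjustment is raking, or iterative proportional fitting. The sketch below illustrates the general idea with hypothetical variables and targets; it is not NORC's actual implementation.

```python
import pandas as pd

def rake(df, weights, targets, max_iter=100, tol=1e-8):
    """Iterative proportional fitting: rescale weights until the weighted
    margins of each variable match its target totals. Generic sketch only."""
    w = weights.astype(float).copy()
    for _ in range(max_iter):
        max_shift = 0.0
        for var, target in targets.items():
            current = w.groupby(df[var]).sum()          # weighted margin
            factors = pd.Series(target) / current       # per-category scaling
            new_w = w * df[var].map(factors)
            max_shift = max(max_shift, (new_w - w).abs().max())
            w = new_w
        if max_shift < tol:
            break
    return w

# Hypothetical sample and targets (targets are desired weighted counts).
df = pd.DataFrame({"age_group": ["18-34", "35-64", "65+", "18-34", "35-64"],
                   "gender":    ["F", "M", "F", "M", "F"]})
initial = pd.Series(1.0, index=df.index)
targets = {"age_group": {"18-34": 2.0, "35-64": 2.0, "65+": 1.0},
           "gender":    {"F": 2.6, "M": 2.4}}
print(rake(df, initial, targets).round(3))
```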
National Survey
In a general election, the national survey is weighted to combine the 50 state surveys with the nationwide AmeriSpeak survey. Each of the state surveys is weighted as described. The AmeriSpeak survey receives a nonresponse-adjusted weight that is then adjusted to national totals for registered voters that in 2020 were derived from the U.S. Census Bureau’s November 2018 Current Population Survey Voting and Registration Supplement, the Catalist voter file and the Census Bureau’s 2018 American Community Survey. The state surveys are further adjusted to represent their appropriate proportion of the registered voter population for the country and combined with the AmeriSpeak survey. After all votes are counted, the national data file is adjusted to match the national popular vote for president.
This statistic shows the results of a 2016 survey on voting in general elections in Sweden, by gender. When asked during the survey period, 5 percent of respondents stated that they do not vote in general elections.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Granite State Poll is a quarterly poll conducted by the University of New Hampshire Survey Center. The poll sample consists of about 500 New Hampshire adults with a working telephone across the state. Each poll contains a series of basic demographic questions that are repeated in future polls, as well as a set of unique questions submitted by clients. This poll includes four questions related to preferences about dams, designed by Natallia Leuchanka Diessner, Catherine M. Ashcraft, Kevin H. Gardner, and Lawrence C. Hamilton as part of the "Future of Dams" project. This Technical Report, written by the UNH Survey Center, describes the protocols and standards of Granite State Poll #68 (Client Poll). The first file is a screenshot of the Technical Report to provide a preview for Figshare. The second file is the Technical Report in Microsoft Word format.
The New York City Health Opinion Poll (HOP) is a periodic rapid online poll conducted by New York City Department of Health and Mental Hygiene. The goals of the poll are to measure adult New Yorkers’ awareness, acceptance and use — or barriers to use — of our programs; knowledge, opinions and attitudes about health care and practices; and opinions about public events that are related to health. The data collected through public health polling are rapidly analyzed and disseminated. This real-time community input informs programming and policy development at the Health Department to better meet the needs of New Yorkers.
"Southerners tend to slip through the cracks between state surveys, which are unreliable for generalizing to the region, on the one hand, and national sample surveys, which usually contain too few Southerners to allow detailed examination, on the other. And few surveys routinely include questions specifically about the South. To remedy this situation, the [Odum] Institute and the Center for the Study of the American South sponsor the Southern Focus Poll" (Odum Institute).
Southern and non-Southern residents are surveyed yearly and "are asked questions about economic conditions in their communities; cultural issues such as Southern accent, the Confederate flag and "Dixie"; race relations; feelings toward migrants to the South; and characteristics of Southerners vs. Northerners" (Odum Institute).
All of the data sets from the Southern Focus Polls archived here are generously made available by the Odum Institute for Research in Social Science of the University of North Carolina at Chapel Hill (OIRSS).
https://www.icpsr.umich.edu/web/ICPSR/studies/36383/terms
This data collection is comprised of responses from two sets of survey questionnaires, the basic Current Population Survey (CPS) and a survey on the topic of voting and registration in the United States, which was administered as a supplement to the November 2012 CPS questionnaire. The CPS, administered monthly, is a labor force survey providing current estimates of the economic status and activities of the population of the United States. Specifically, the CPS provides estimates of total employment (both farm and nonfarm), nonfarm self-employed persons, domestics, and unpaid helpers in nonfarm family enterprises, wage and salaried employees, and estimates of total unemployment. Data from the CPS are provided for the week prior to the survey. The voting and registration supplement data are collected every two years to monitor trends in the voting and nonvoting behavior of United States citizens in terms of their different demographic and economic characteristics. The supplement was designed to be a proxy response supplement, meaning a single respondent could provide answers for all eligible household members. The supplement questions were asked of all persons who were both United States citizens and 18 years of age or older. The CPS instrument determined who was eligible for the voting and registration supplement through the use of check items that referred to basic CPS items, including age and citizenship. Respondents were queried on whether they were registered to vote in the November 6, 2012 election, main reasons for not being registered to vote, main reasons for not voting, whether they voted in person or by mail, and method used to register to vote. Demographic variables include age, sex, race, Hispanic origin, marital status, veteran status, disability status, educational attainment, occupation, and income.
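The supplement's eligibility check reduces to a simple filter on age and citizenship. The sketch below illustrates it with placeholder column names rather than the actual CPS variable names.

```python
import pandas as pd

# Sketch of the supplement-eligibility check: citizens aged 18 or older.
# "age" and "us_citizen" are placeholder names, not actual CPS variables.
cps = pd.DataFrame({
    "person_id":  [1, 2, 3, 4],
    "age":        [17, 22, 45, 68],
    "us_citizen": [True, False, True, True],
})

supplement_eligible = cps[(cps["age"] >= 18) & cps["us_citizen"]]
print(supplement_eligible)
```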
https://www.datainsightsmarket.com/privacy-policy
The online polling software market is experiencing robust growth, driven by the increasing need for efficient and cost-effective data collection across diverse sectors. The market's expansion is fueled by several key factors. Businesses are leveraging online polling tools for market research, customer feedback gathering, and employee surveys, enhancing decision-making processes. Educational institutions utilize these platforms for interactive learning, assessment, and gauging student understanding. Furthermore, the rise of virtual meetings and remote work has significantly increased the demand for seamless, real-time polling solutions integrated into online collaboration platforms. The adoption of various polling methodologies, including browser, app, and scan-based polling, caters to diverse user preferences and technological capabilities, further boosting market penetration. While the market is fragmented, with numerous players offering specialized solutions, the continuous innovation in features like advanced analytics, data visualization, and personalized reporting contributes to the market's dynamic nature. Despite the positive trends, the market faces certain restraints. Concerns regarding data privacy and security are paramount, requiring robust security measures and transparent data handling practices. The cost of implementation and training can pose a barrier for small businesses or organizations with limited budgets. However, the availability of various pricing models and free versions caters to a wider range of users. Competition is fierce, with established players and emerging startups continually vying for market share. The future growth will likely depend on the ability of vendors to deliver innovative features, improve user experience, and build trust through enhanced data security protocols. We project a substantial increase in market size over the forecast period, driven by technological advancements and expanding adoption across multiple sectors. The market is expected to maintain a healthy Compound Annual Growth Rate (CAGR), reflecting continuous innovation and the growing demand for efficient data collection and analysis solutions.