CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
The Electoral Integrity Project at Harvard University and the University of Sydney (www.electoralintegrityproject.com) developed the AVE data, release 1.0. The dataset contains information from a three-wave panel survey designed to gather the views of a representative sample of ordinary Australians just before and after the 2 July 2016 Australian federal election. The survey monitored Australian voters’ experience at the polls, perceptions of the integrity and convenience of the registration and voting process, patterns of civic engagement, public confidence in electoral administration, and attitudes towards reforms such as civic education campaigns and convenience voting facilities.

Respondents were initially contacted in the week before the election, between 28 June and 1 July, and completed an online questionnaire lasting approximately 15 minutes. This forms the pre-election baseline survey (wave 1). The same individuals were contacted again after the election to complete a longer survey, averaging 25 minutes in length. Respondents in wave 2 were contacted between 4 July and 19 July, with two thirds completing the survey after the first week. About six weeks later, the same respondents were interviewed again (wave 3), beginning on 23 August and ending on 13 September. The initial sample contains 2,139 valid responses for the first wave, 1,838 for the second wave (an 86 percent retention rate), and 1,543 for the third wave (an 84 percent retention rate). Overall, 72 percent of the respondents were carried over from the pre-election wave to the final wave.

The following files can be accessed: a) the dataset in Stata and SPSS formats; b) the codebook; c) the questionnaire. The EIP acknowledges support from the Kathleen Fitzpatrick Australian Laureate Fellowship from the Australian Research Council (ARC ref: FL110100093).

EIP further publications:

BOOKS
• LeDuc, Lawrence, Richard Niemi and Pippa Norris. Eds. 2014. Comparing Democracies 4: Elections and Voting in a Changing World. London: Sage Publications.
• Nai, Alessandro and Annemarie Walter. Eds. 2015. New Perspectives on Negative Campaigning: Why Attack Politics Matters. Colchester: ECPR Press.
• Norris, Pippa, Richard W. Frank and Ferran Martínez i Coma. Eds. 2014. Advancing Electoral Integrity. New York: Oxford University Press.
• Norris, Pippa, Richard W. Frank and Ferran Martínez i Coma. Eds. 2015. Contentious Elections: From Ballots to the Barricades. New York: Routledge.
• Norris, Pippa. 2014. Why Electoral Integrity Matters. New York: Cambridge University Press.
• Norris, Pippa. 2015. Why Elections Fail. New York: Cambridge University Press.
• Norris, Pippa and Andrea Abel van Es. Eds. 2016. Checkbook Elections? Political Finance in Comparative Perspective. New York: Oxford University Press.

ARTICLES AND CHAPTERS
• Norris, Pippa, Richard W. Frank and Ferran Martínez i Coma. 2013. ‘Assessing the quality of elections.’ Journal of Democracy 24(4): 124-135.
• Lago, Ignacio and Ferran Martínez i Coma. 2016. ‘Challenge or Consent? Understanding Losers’ Reactions in Mass Elections.’ Government and Opposition. doi:10.1017/gov.2015.31
• Martínez i Coma, Ferran and Ignacio Lago. 2016. ‘Gerrymandering in Comparative Perspective.’ Party Politics. doi:10.1177/1354068816642806
• Norris, Pippa. 2013. ‘Does the world agree about standards of electoral integrity? Evidence for the diffusion of global norms.’ Special issue of Electoral Studies 32(4): 576-588.
• Norris, Pippa. 2013. ‘The new research agenda studying electoral integrity.’ Special issue of Electoral Studies 32(4): 563-575.
• Norris, Pippa. 2014. ‘Electoral integrity and political legitimacy.’ In Comparing Democracies 4, Lawrence LeDuc, Richard Niemi and Pippa Norris. Eds. London: Sage.
• Norris, Pippa, Richard W. Frank and Ferran Martínez i Coma. 2014. ‘Measuring electoral integrity: A new dataset.’ PS: Political Science & Politics 47(4): 789-798.
• Norris, Pippa. 2016 (forthcoming). ‘Electoral integrity in East Asia.’ In Routledge Handbook on Democratization in East Asia, Tun-jen Cheng and Yun-han Chu. Eds. New York: Routledge.
• Norris, Pippa. 2016 (forthcoming). ‘Electoral transitions: Stumbling out of the gate.’ In Rebooting Transitology – Democratization in the 21st Century, Mohammad-Mahmoud Ould Mohamedou and Timothy D. Sisk. Eds.
• Pietsch, Juliet, Michael Miller and Jeffrey Karp. 2015. ‘Public support for democracy in transitional regimes.’ Journal of Elections, Public Opinion and Parties 25(1): 1–9. DOI: 10.1080/17457289.2014.
• Smith, Rodney. 2016 (forthcoming). ‘Confidence in paper-based and electronic voting channels: Evidence from Australia.’ Australian Journal of Political Science. DOI: 10.1080/10361146.2015.1093091
• Van Ham, Carolien and Staffan Lindberg. 2015. ‘From sticks to carrots: Electoral manipulation in Africa, 1986-2012.’ Government and Opposition 50(3): 521-548. http://dx.doi.org/10.1017/gov.2015.6
• Van Ham, Carolien and Staffan Lindberg. 2015. ‘When Guardians Matter...
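The retention figures above follow directly from the reported wave sizes, and the released Stata or SPSS files can be read with standard tools. The Python sketch below is a minimal illustration only: the file name is a placeholder, not the name used in the actual release.

```python
import pandas as pd

# Reported wave sizes from the AVE release 1.0 description.
wave_n = {1: 2139, 2: 1838, 3: 1543}

# Wave-to-wave and overall panel retention (matches the quoted 86%, 84%, 72%).
print(f"Wave 2 retention: {wave_n[2] / wave_n[1]:.0%}")
print(f"Wave 3 retention: {wave_n[3] / wave_n[2]:.0%}")
print(f"Overall retention: {wave_n[3] / wave_n[1]:.0%}")

# Loading the Stata release file (placeholder file name; the release also
# ships an SPSS version readable with pd.read_spss).
# ave = pd.read_stata("ave_release1.dta")
# print(ave.shape)
```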
https://search.gesis.org/research_data/datasearch-httpwww-da-ra-deoaip--oaioai-da-ra-de456289
Abstract (en): This survey is the first broad-based, systematic examination of the nature of civil litigation in state general jurisdiction trial courts. Data collection was carried out by the National Center for State Courts with assistance from the National Association of Criminal Justice Planners and the United States Bureau of the Census. The data collection produced two datasets. Part 1, Tort, Contract, and Real Property Rights Data, is a merged sample of approximately 30,000 tort, contract, and real property rights cases disposed during the 12-month period ending June 30, 1992. Part 2, Civil Jury Cases Data, is a sample of about 6,500 jury trial cases disposed over the same time period. Data collected include information about litigants, case type, disposition type, processing time, case outcome, and award amounts for civil jury cases.

ICPSR data undergo a confidentiality review and are altered when necessary to limit the risk of disclosure. ICPSR also routinely creates ready-to-go data files along with setups in the major statistical software formats, as well as standard codebooks to accompany the data. In addition to these procedures, ICPSR performed the following processing steps for this data collection: performed consistency checks; standardized missing values; and checked for undocumented or out-of-range codes.

Forty-five jurisdictions were chosen to represent the 75 most populous counties in the nation. The sample for this study was designed and selected by the United States Bureau of the Census. It was a two-stage stratified sample, with 45 of the 75 most populous counties selected at the first stage. The top 75 counties account for about 37 percent of the United States population and about half of all civil filings. The 75 counties were divided into four strata based on aggregate civil disposition data for 1990, obtained through telephone interviews with court staffs in the general jurisdiction trial courts. The sample consisted of tort, contract, and real property rights cases disposed between July 1, 1991, and June 30, 1992.

2011-11-02 All parts are being moved to restricted access and will be available only using the restricted access procedures.
2006-03-30 File CB6587.ALL.PDF was removed from any previous datasets and flagged as a study-level file, so that it will accompany all downloads.
2005-11-04 On 2005-03-14 new files were added to one or more datasets. These files included additional setup files as well as one or more of the following: SAS program, SAS transport, SPSS portable, and Stata system files. The metadata record was revised 2005-11-04 to reflect these additions.
2004-06-01 The data have been updated by the principal investigator to include replicate weights and a few other variables. The codebook and SAS and SPSS data definition statements have been revised to reflect these changes.
2001-03-26 The data have been updated by the principal investigator to include replicate weights. The codebook and SAS and SPSS data definition statements have been revised to reflect these changes.
1997-07-29 The codebook has been revised to correct errors documenting both data files. The column location (and width) of variable WGHT "TOTAL WEIGHT" was incorrectly shown as 10.4 for Part 1, Tort, Contract, and Real Property Data; it was accurately shown in the data definition statements as 9.4. Variables listed after WGHT were inaccurately reported one column off in the codebook. Similarly, the column location (and width) of variable WGHT "TOTAL WEIGHT" was incorrectly shown as 10.2 for Part 2, Civil Jury Data; it was accurately shown in the data definition statements as 9.2. Variables listed after WGHT were inaccurately reported one column off in the codebook.

Fundi...
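The codebook note above identifies WGHT ("TOTAL WEIGHT") as the case weight in both parts. As a minimal illustration of how such weights enter an estimate, the Python sketch below computes a weighted case count and a weighted mean award from a hypothetical extract; only WGHT is a documented variable name, and published estimates would also use the replicate weights added in the later updates.

```python
import pandas as pd

# Hypothetical extract of the Civil Jury Cases file (Part 2). Only WGHT is a
# variable name documented above; the award column is an illustrative placeholder.
jury = pd.DataFrame({
    "WGHT": [12.5, 8.0, 15.2, 9.7],           # case-level total weights
    "award_amount": [50000, 0, 250000, 12000]  # placeholder award amounts
})

# Weighted number of jury trial cases represented by the sampled rows.
weighted_cases = jury["WGHT"].sum()

# Simple weight-adjusted mean award.
weighted_mean_award = (jury["WGHT"] * jury["award_amount"]).sum() / weighted_cases

print(f"Weighted case count: {weighted_cases:,.1f}")
print(f"Weighted mean award: ${weighted_mean_award:,.0f}")
```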
https://timssandpirls.bc.edu/Copyright/index.html
For the TIMSS 2015 fourth grade assessment, the database includes student mathematics and science achievement data as well as the student, parent, teacher, school, and curricular background data for the 47 participating countries and 6 benchmarking entities. For the TIMSS 2015 eighth grade assessment, the database includes student mathematics and science achievement data as well as the student, teacher, school, and curricular background data for the 39 participating countries and 6 benchmarking entities. The TIMSS 2015 International Database also includes data from the TIMSS Numeracy 2015 assessment, with the participation of 7 countries and 1 benchmarking entity. The student, parent, teacher, and school data files are in SAS and SPSS formats. The entire database and its supporting documents are described in the TIMSS 2015 User Guide (Foy, 2017) and its three supplements. The data can be analyzed using the downloadable IEA IDB Analyzer (version 4.0), an application developed by the IEA Data Processing and Research Center to facilitate the analysis of the TIMSS data. A restricted use version of the TIMSS 2015 International Database is available to users who require access to variables removed from the public use version (see Chapter 4 of the User Guide). Users who require access to the restricted use version of the International Database to conduct their analyses should contact the IEA through its Study Data Repository.
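For users working outside the IEA IDB Analyzer, the SPSS-format student files can also be read directly with general-purpose tools. The Python sketch below is a minimal illustration, not part of the official documentation: it assumes pyreadstat, the file name is a placeholder, and the plausible-value variable names should be confirmed against the TIMSS 2015 codebooks.

```python
import pyreadstat

# Placeholder file name for a TIMSS 2015 fourth grade student achievement file
# in SPSS format; actual file and variable names are documented in the
# TIMSS 2015 User Guide and its codebook supplements.
df, meta = pyreadstat.read_sav("asgXXXm6.sav")

# TIMSS reports achievement as five plausible values per student. Assuming the
# mathematics plausible values are named ASMMAT01..ASMMAT05 (check the codebook),
# a quick unweighted check can average them; proper estimates combine all five
# plausible values with sampling weights, as the IDB Analyzer does.
pv_cols = [f"ASMMAT0{i}" for i in range(1, 6)]
print(df[pv_cols].mean(axis=1).describe())
```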
https://search.gesis.org/research_data/datasearch-httpwww-da-ra-deoaip--oaioai-da-ra-de455344
Abstract (en): These data were collected using the National Electronic Injury Surveillance System (NEISS), the primary data system of the United States Consumer Product Safety Commission (CPSC). CPSC began operating NEISS in 1972 to monitor product-related injuries treated in United States hospital emergency departments (EDs). In June 1992, the National Center for Injury Prevention and Control (NCIPC), within the Centers for Disease Control and Prevention, established an interagency agreement with CPSC to begin collecting data on nonfatal firearm-related injuries, in order to monitor the incidence and characteristics of persons with nonfatal firearm-related injuries treated in United States hospital EDs over time. This dataset represents all nonfatal firearm-related injuries (i.e., injuries associated with powder-charged guns) and all nonfatal BB and pellet gun-related injuries reported through NEISS from 1993 through 2000. The cases consist of initial ED visits for treatment of the injuries. Cases were reported even if the patients subsequently died. Secondary visits and transfers from other hospitals were excluded. Information is available on injury diagnosis, firearm type, use of drugs or alcohol, criminal incident, and locale of the incident. Demographic information includes age, sex, and race of the injured person.

The universe comprises United States hospitals providing emergency services. The sample is a stratified probability sample of all United States hospitals that had at least six beds and provided 24-hour emergency services. There were four hospital size strata (defined as very large, large, medium, and small, based on the number of annual ED visits) and one children's hospital stratum. From 1993 through 1996, there were 91 NEISS hospital EDs in the sample. In 1997, the sampling frame was updated so that from 1997 through 1999 the sample included 101 NEISS hospital EDs. In 2000, one NEISS hospital dropped out of the system, so there were 100 NEISS hospital EDs in the sample. In 1997, CPSC collected firearm-related cases using the "old" and "new" NEISS hospital samples for a 9-month period. This dataset includes data from the "new" sample; the overlapping "old" sample is not included. Comparisons of weighted estimates based on the "old" and "new" samples indicated a difference of about 1 percent in the overall national estimate, and the characteristics of firearm-related cases from the two overlapping samples were also very similar.

2005-11-04 On 2005-03-14 new files were added to one or more datasets. These files included additional setup files as well as one or more of the following: SAS program, SAS transport, SPSS portable, and Stata system files. The metadata record was revised 2005-11-04 to reflect these additions.
2003-09-16 The 2000 data have been added to the cumulative data. The codebook and SAS and SPSS data definition statements have been updated to reflect these changes.
2002-09-19 The 1999 data have been added to the cumulative data and a variable was removed. The codebook and data definition statements have been updated to reflect these changes.
2001-05-18 The 1998 data have been added to this study, and the codebook has been updated to reflect these changes.

Funding institution(s): United States Department of Health and Human Services. Centers for Disease Control and Prevention. National Center for Injury Prevention and Control. United States Department of Justice. Office of Justice Programs. Bureau of Justice Statistics.
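Because NEISS is a stratified probability sample, national estimates are produced by summing case weights rather than counting sampled visits. The Python sketch below is a minimal illustration under assumed column names; the actual weight and injury variables are defined in the study codebook.

```python
import pandas as pd

# Hypothetical NEISS firearm-injury extract; "weight" stands in for the
# sample-weight variable defined in the codebook, and "year" / "firearm_type"
# are illustrative column names.
cases = pd.DataFrame({
    "year": [1993, 1993, 1994, 1994],
    "firearm_type": ["handgun", "BB/pellet", "handgun", "unknown"],
    "weight": [52.3, 18.9, 47.1, 60.4],
})

# Each sampled ED visit represents `weight` visits nationally, so a national
# estimate for a subgroup is the sum of its weights.
national_by_year = cases.groupby("year")["weight"].sum()
print(national_by_year)
```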
The National Congregations Study (NCS) dataset "fills a void in the sociological study of congregations by providing, for the first time, data that can be used to draw a nationally aggregate picture of congregations" (Chaves et al. 1999, p. 460). Thanks to innovations in sampling techniques, the NCS data is the first nationally representative sample of American congregations. In 2006-07, a panel component was added to the NCS. In addition to the new cross-section of congregations generated in conjunction with the 2006 General Social Survey (GSS), a stratified random sample was drawn from congregations that participated in the 1998 NCS. A full codebook, prepared by the principal investigator, is available for download at https://sites.duke.edu/ncsweb/. The codebook contains the original questionnaire, as well as detailed information on survey methodology, weights, coding, and more.
Variable names have been shortened to allow for downloading of the data set as an SPSS portable file. Original variable names are shown in parentheses at the beginning of each variable description.
The "/data-archive?fid=NCSIV" Target="_blank">NCS Cumulative Dataset is also available from the ARDA.
A longitudinal study of firms that are in the process of being established (nascent firms) and firms that have recently been established (young firms). The datasets comprise 1,412 qualifying cases (obtained from a screening sample of 30,430 randomly selected households in Year 1). The data were collected in four annual waves, each with over 500 variables per record, plus a fifth, follow-up wave focusing on outcomes. Wave 1 was completed in May 2008, Wave 2 in July 2009, Wave 3 in July 2010, and Wave 4 in July 2011. Wave 5 (the outcomes follow-up) was completed in late 2013. The comprehensive data were collected through computer-assisted telephone interviews (CATI) lasting approximately 45 minutes each and have been converted into SPSS and database formats. There is extensive documentation on the dataset in the related codebook. The dataset and documentation are available at http://eprints.qut.edu.au/49327/.
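Analyses that follow cases across waves typically require merging the wave files on a case identifier. The Python sketch below is a rough illustration under assumed file and variable names; the actual identifiers and file names are documented in the codebook linked above.

```python
import pandas as pd

# Placeholder wave file names and case identifier; the real names are
# documented in the codebook at http://eprints.qut.edu.au/49327/.
wave_files = {f"w{i}": f"wave{i}.sav" for i in range(1, 6)}

panel = None
for wave, path in wave_files.items():
    df = pd.read_spss(path)
    # Suffix wave-specific variables so the ~500 variables per wave stay distinct,
    # then restore the shared case identifier for merging.
    df = df.add_suffix(f"_{wave}").rename(columns={f"case_id_{wave}": "case_id"})
    panel = df if panel is None else panel.merge(df, on="case_id", how="outer")

print(panel.shape)
```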