10 datasets found
1. Global Indicators 2015 Dataset (Cross-Sectional)

    • dataverse.harvard.edu
    tsv
    Updated Dec 18, 2017
    Cite
    Harvard Dataverse (2017). Global Indicators 2015 Dataset (Cross-Sectional) [Dataset]. http://doi.org/10.7910/DVN/ZN6MWY
    Available download formats: tsv(68037), tsv(6171)
    Dataset updated: Dec 18, 2017
    Dataset provided by: Harvard Dataverse
    License: CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This is a small dataset of various global indicators developed for use in a research methods course at the Croft Institute for International Studies at the University of Mississippi. The data are ready to be imported directly into SPSS, Stata, or other statistical packages. A brief codebook gives a description of each variable, the indicator's reference year(s), and links to the original sources. The data are cross-sectional and country-level, centered on 2015 as the primary reference year; some indicators instead reflect the most recent election or an average over several years. The dataset includes socioeconomic and political indicators from the World Bank, the UNDP, and International IDEA, as well as popular indexes (and some of their key components) from Freedom House, Polity IV, the Economist's Democracy Index, the Heritage Foundation's Index of Economic Freedom, and the Fund for Peace's Fragile States Index. The variables span all four levels of measurement (nominal, ordinal, interval, and ratio), which makes the dataset useful for pedagogical examples of how to handle different types of statistical data.
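    Because the files are distributed as plain TSV, they also load directly in Python; a minimal sketch with pandas (the file name is hypothetical; use the name shown in the Dataverse deposit):

    ```python
    # Minimal sketch: read the TSV release into a DataFrame.
    import pandas as pd

    df = pd.read_csv("global_indicators_2015.tsv", sep="\t")  # hypothetical file name
    print(df.dtypes)  # variables span nominal, ordinal, interval, and ratio scales
    ```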

  2. SEM/EDS hyperspectral data set from a Famatinite sample

    • data.nist.gov
    • datasets.ai
    • +1 more
    Updated Sep 27, 2021
    Cite
    Nicholas Ritchie (2021). SEM/EDS hyperspectral data set from a Famatinite sample [Dataset]. http://doi.org/10.18434/mds2-2469
    Dataset updated: Sep 27, 2021
    Dataset provided by: National Institute of Standards and Technology (http://www.nist.gov/)
    Authors: Nicholas Ritchie
    License: https://www.nist.gov/open/license

    Description

    Famatinite is a mineral with nominal chemical formula Cu3SbS4. This electron-excited X-ray data set was collected from a natural flat-polished sample and the surrounding silicate mineral.
    Live time/pixel: 0.70 * 4.0 * 0.95 * 3600.0 / (512 * 512) s ≈ 0.0365 s (0.95 hours on 4 detectors)
    Probe current: 1.0 nA
    Beam energy: 20 keV
    Energy scale: 10 eV/ch, 0.0 eV offset
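    As a quick check on the quoted acquisition figures, the per-pixel live time evaluates as follows (plain arithmetic from the numbers above):

    ```python
    # Per-pixel live time for the 512 x 512 map, as quoted in the description.
    live_time_s = 0.70 * 4.0 * 0.95 * 3600.0 / (512 * 512)
    print(f"{live_time_s:.4f} s/pixel")  # ~0.0365 s/pixel
    ```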

  3. DXC'09 Industrial Track Sample Data

    • catalog.data.gov
    • datadiscoverystudio.org
    • +2 more
    Updated Apr 11, 2025
    Cite
    Dashlink (2025). DXC'09 Industrial Track Sample Data [Dataset]. https://catalog.data.gov/dataset/dxc09-industrial-track-sample-data
    Dataset updated: Apr 11, 2025
    Dataset provided by: Dashlink
    Description

    Sample data, including nominal and faulty scenarios, for Tier 1 and Tier 2 of the First International Diagnostic Competition. Three file formats are provided: tab-delimited .txt files, Matlab .mat files, and tab-delimited .scn files. The scenario (.scn) files are read by the DXC framework. See the Support/Documentation section below and the First International Diagnostic Competition project page for more information.
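    For users working in Python, a minimal sketch for reading the .mat variant (the file name is hypothetical; scipy handles classic MATLAB files):

    ```python
    # Minimal sketch: inspect one sample scenario from the .mat distribution.
    # The file name is hypothetical; use the names shipped in the archive.
    from scipy.io import loadmat

    scenario = loadmat("dxc09_tier1_sample.mat")
    print([k for k in scenario if not k.startswith("__")])  # variables stored in the file
    ```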

  4. DXC'10 Industrial Track Sample Data

    • data.nasa.gov
    • gimi9.com
    • +1 more
    Updated Mar 31, 2025
    Cite
    nasa.gov (2025). DXC'10 Industrial Track Sample Data [Dataset]. https://data.nasa.gov/dataset/dxc10-industrial-track-sample-data
    Dataset updated: Mar 31, 2025
    Dataset provided by: NASA (http://nasa.gov/)
    Description

    Sample data, including nominal and faulty scenarios, for Diagnostic Problems I and II of the Second International Diagnostic Competition. Three file formats are provided: tab-delimited .txt files, Matlab .mat files, and tab-delimited .scn files. The scenario (.scn) files are read by the DXC framework. See the Second International Diagnostic Competition project page for more information.

  5. Fundamental Data Record for Atmospheric Composition [ATMOS_L1B]

    • earth.esa.int
    Updated Sep 12, 2024
    Cite
    European Space Agency (2024). Fundamental Data Record for Atmospheric Composition [ATMOS_L1B] [Dataset]. https://earth.esa.int/eogateway/catalog/fdr-for-atmospheric-composition
    Dataset updated: Sep 12, 2024
    Dataset authored and provided by: European Space Agency (http://www.esa.int/)
    License: https://earth.esa.int/eogateway/documents/20142/1564626/Terms-and-Conditions-for-the-use-of-ESA-Data.pdf

    Time period covered: Jun 28, 1995 - Apr 7, 2012
    Description

    The Fundamental Data Record (FDR) for Atmospheric Composition UVN v.1.0 dataset is a cross-instrument Level-1 product [ATMOS_L1B] generated in 2023 and resulting from the ESA FDR4ATMOS project. The FDR contains selected Earth Observation Level 1b parameters (irradiance/reflectance) from the nadir-looking measurements of the ERS-2 GOME and Envisat SCIAMACHY missions for the period 1995 to 2012. The data record offers harmonised cross-calibrated spectra, with a focus on spectral windows in the ultraviolet-visible-near-infrared regions, for the retrieval of critical atmospheric constituents such as ozone (O3), sulphur dioxide (SO2), and nitrogen dioxide (NO2) column densities, alongside cloud parameters.

    The FDR4ATMOS products should be regarded as experimental due to the innovative approach and the current use of a limited-size test dataset to investigate the impact of harmonisation on the Level 2 target species, specifically SO2, O3 and NO2. Presently, this analysis is being carried out within follow-on activities. The FDR4ATMOS V1 is currently being extended to include the MetOp GOME-2 series.

    Product format

    In many respects, the FDR product improves on the existing individual mission datasets:

    - GOME solar irradiances are harmonised using a validated SCIAMACHY solar reference spectrum, solving the problem of the fast-changing etalon present in the original GOME Level 1b data.
    - Reflectances for both GOME and SCIAMACHY are provided in the FDR product. GOME reflectances are harmonised to degradation-corrected SCIAMACHY values, using collocated data from the CEOS PIC sites.
    - SCIAMACHY data are scaled to the lowest integration time within the spectral band using high-frequency PMD measurements from the same wavelength range. This simplifies the use of the SCIAMACHY spectra, which were split into a complex cluster structure (each cluster with its own integration time) in the original Level 1b data.
    - The harmonisation process mitigates the viewing-angle dependency observed in the UV spectral region for GOME data.
    - Uncertainties are provided.

    Each FDR product provides, within the same file, irradiance/reflectance data for the UV-VIS-NIR spectral regions across all orbits on a single day, including information from the individual ERS-2 GOME and Envisat SCIAMACHY measurements. The FDR has been generated in two formats, Level 1A and Level 1B, targeting expert users and nominal applications respectively. The Level 1A [ATMOS_L1A] data include additional parameters such as harmonisation factors, PMD, and polarisation data extracted from the original mission Level 1 products. The ATMOS_L1A dataset is not part of the nominal dissemination to users; in case of specific requirements, please contact EOHelp.

    Please refer to the README file for essential guidance before using the data. All the new products are conveniently formatted in NetCDF. Free standard tools, such as Panoply, can be used to read NetCDF data. Panoply is sourced and updated by external entities; for further details, please consult our Terms and Conditions page.

    Uncertainty characterisation

    One of the main aspects of the project was the characterisation of Level 1 uncertainties for both instruments, based on metrological best practices. The following documents are provided:

    - General guidance on a metrological approach to Fundamental Data Records (FDR)
    - Uncertainty Characterisation document
    - Effect tables
    - NetCDF files containing example uncertainty propagation analysis and spectral error correlation matrices for SCIAMACHY (Atlantic and Mauretania scenes for 2003 and 2010) and GOME (Atlantic scene for 2003): reflectance_uncertainty_example_FDR4ATMOS_GOME.nc, reflectance_uncertainty_example_FDR4ATMOS_SCIA.nc

    Known issues

    Non-monotonous wavelength axis for SCIAMACHY in FDR data version 1.0: in the SCIAMACHY OBSERVATION group of the atmospheric FDR v1.0 dataset (DOI: 10.5270/ESA-852456e), the wavelength axis (lambda variable) is not monotonically increasing. This issue affects all spectral channels (UV, VIS, NIR) in the SCIAMACHY group, while GOME OBSERVATION data remain unaffected. The root cause lies in the incorrect indexing of the lambda variable during the NetCDF writing process. Notably, the wavelength values themselves are calculated correctly within the processing chain.

    Temporary workaround: the wavelength axis is correct in the first record of each product. Users can therefore extract the wavelength axis from the first record and apply it to all subsequent measurements within the same product. The first record can be retrieved by setting the first two indices (time and scanline) to 0 (assuming counting of array indices starts at 0). Note that this must be repeated separately for each spectral range (UV, VIS, NIR) and for every daily product. Since the wavelength axis of SCIAMACHY is highly stable over time, using the first record introduces no expected impact on retrieval results.
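    A minimal Python sketch of this workaround, using the netCDF4 library (the file name and the group/variable paths here are assumptions based on the description; check the product's actual structure):

    ```python
    # Minimal sketch of the documented workaround for the non-monotonic
    # SCIAMACHY wavelength axis in FDR v1.0. File name and group/variable
    # paths are assumptions; the "lambda" variable and the (time, scanline,
    # spectral) index order follow the description above.
    import netCDF4 as nc
    import numpy as np

    with nc.Dataset("FDR4ATMOS_daily_product.nc") as ds:  # hypothetical file name
        obs = ds["SCIAMACHY/OBSERVATION_UV"]              # assumed path; repeat for VIS and NIR
        lam = obs["lambda"][:]                            # axis is wrong except in the first record
        # The first record (time=0, scanline=0) carries the correct axis;
        # apply it to all subsequent measurements in this product.
        lam_fixed = np.broadcast_to(lam[0, 0, :], lam.shape)
    ```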

  6. Controlled Anomalies Time Series (CATS) Dataset

    • data.niaid.nih.gov
    Updated Jul 11, 2024
    Cite
    Patrick Fleith (2024). Controlled Anomalies Time Series (CATS) Dataset [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7646896
    Dataset updated: Jul 11, 2024
    Dataset authored and provided by: Patrick Fleith
    License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Controlled Anomalies Time Series (CATS) Dataset consists of commands, external stimuli, and telemetry readings of a simulated complex dynamical system with 200 injected anomalies.

    The CATS Dataset exhibits a set of desirable properties that make it very suitable for benchmarking Anomaly Detection Algorithms in Multivariate Time Series [1]:

    Multivariate (17 variables) including sensor readings and control signals. It simulates the operational behaviour of an arbitrary complex system, including:

    4 Deliberate Actuations / Control Commands sent by a simulated operator/controller, for instance an operator's commands to turn some equipment ON or OFF.

    3 Environmental Stimuli / External Forces acting on the system and affecting its behaviour, for instance, the wind affecting the orientation of a large ground antenna.

    10 Telemetry Readings representing the observable states of the complex system by means of sensors, for instance position, temperature, pressure, voltage, current, humidity, velocity, or acceleration.

    5 million timestamps. Sensor readings are at 1 Hz sampling frequency.

    1 million nominal observations (the first 1 million datapoints). This is suitable to start learning the "normal" behaviour.

    4 million observations that include both nominal and anomalous segments. This is suitable for evaluating both semi-supervised approaches (novelty detection) and unsupervised approaches (outlier detection); a minimal loading/split sketch follows this list.

    200 anomalous segments. One anomalous segment may contain several successive anomalous observations / timestamps. Only the last 4 million observations contain anomalous segments.

    Different types of anomalies to understand what anomaly types can be detected by different approaches. The categories are available in the dataset and in the metadata.

    Fine control over ground truth. As this is a simulated system with deliberate anomaly injection, the start and end times of the anomalous behaviour are known very precisely. In contrast to real-world datasets, there is no risk that the ground truth contains mislabelled segments, which is often the case for real data.

    Suitable for root cause analysis. In addition to the anomaly category, the time series channel in which the anomaly first developed is recorded and made available as part of the metadata. This can be useful for evaluating the performance of algorithms in tracing anomalies back to the right root cause channel.

    Affected channels. In addition to knowledge of the root cause channel in which the anomaly first developed, we provide information on channels possibly affected by the anomaly. This can also be useful for evaluating the explainability of anomaly detection systems that point to the anomalous channels (root cause and affected).

    Obvious anomalies. The simulated anomalies have been designed to be "easy" for human eyes to detect (i.e., there are very large spikes or oscillations), and hence detectable by most algorithms. This makes the synthetic dataset useful for screening tasks (i.e., to eliminate algorithms that are not capable of detecting these obvious anomalies). However, during our initial experiments the dataset turned out to be challenging enough even for state-of-the-art anomaly detection approaches, making it suitable for regular benchmark studies as well.

    Context provided. Some variables can only be considered anomalous in relation to other behaviours. A typical example consists of a light and switch pair. The light being either on or off is nominal, the same goes for the switch, but having the switch on and the light off shall be considered anomalous. In the CATS dataset, users can choose (or not) to use the available context, and external stimuli, to test the usefulness of the context for detecting anomalies in this simulation.

    Pure signal ideal for robustness-to-noise analysis. The simulated signals are provided without noise: while this may seem unrealistic at first, it is an advantage, since users of the dataset can add any type of noise at any amplitude on top of the provided series. This makes it well suited to testing how sensitive and robust detection algorithms are to various levels of noise.

    No missing data. You can drop whatever data you want to assess the impact of missing values on your detector with respect to a clean baseline.
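    As referenced above, a minimal loading/split sketch in Python following the described layout (the file name is hypothetical, and the flat-table assumption should be checked against the actual distribution):

    ```python
    # Minimal sketch: split CATS into a purely nominal training set and a
    # mixed evaluation set, per the description above. File name is hypothetical.
    import pandas as pd

    df = pd.read_parquet("cats.parquet")  # a CSV variant is also provided
    train = df.iloc[:1_000_000]           # first 1M rows: nominal only, for learning "normal" behaviour
    test = df.iloc[1_000_000:]            # remaining 4M rows: contain the 200 anomalous segments
    # Fit a novelty detector on `train`, then score `test` against the metadata labels.
    ```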

    Change Log

    Version 2

    Metadata: we include a metadata.csv with information about:

    Anomaly categories

    Root cause channel (signal in which the anomaly is first visible)

    Affected channel (signal into which the anomaly might propagate through coupled system dynamics)

    Removal of anomaly overlaps: version 1 contained anomalies that overlapped with each other, resulting in only 190 distinct anomalous segments. There are now no anomaly overlaps.

    Two data files: CSV and parquet for convenience.

    [1] Example benchmark of anomaly detection in time series: Sebastian Schmidl, Phillip Wenig, and Thorsten Papenbrock. Anomaly Detection in Time Series: A Comprehensive Evaluation. PVLDB, 15(9): 1779-1797, 2022. doi:10.14778/3538598.3538602

    About Solenix

    Solenix is an international company providing software engineering, consulting services and software products for the space market. Solenix is a dynamic company that brings innovative technologies and concepts to the aerospace market, keeping up to date with technical advancements and actively promoting spin-in and spin-out technology activities. We combine modern solutions which complement conventional practices. We aspire to achieve maximum customer satisfaction by fostering collaboration, constructivism, and flexibility.

  7. Data from: Test for Trend With a Multinomial Outcome

    • tandf.figshare.com
    application/gzip
    Updated Jun 3, 2023
    Cite
    Aniko Szabo (2023). Test for Trend With a Multinomial Outcome [Dataset]. http://doi.org/10.6084/m9.figshare.5718232.v2
    Available download formats: application/gzip
    Dataset updated: Jun 3, 2023
    Dataset provided by: Taylor & Francis
    Authors: Aniko Szabo
    License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    There is no established procedure for testing for trend with nominal outcomes that would provide both a global hypothesis test and outcome-specific inference. We derive a simple formula for such a test using a weighted sum of Cochran–Armitage test statistics evaluating the trend in each outcome separately. The test is shown to be equivalent to the score test for multinomial logistic regression; however, the new formulation enables the derivation of a sample size formula and multiplicity-adjusted inference for individual outcomes. The proposed methods are implemented in the R package multiCA.
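    The reference implementation is the R package multiCA; as a Python illustration of the building block, here is the standard Cochran–Armitage trend statistic computed for a single outcome (the paper's weighting scheme for combining outcomes into the global test is not reproduced here):

    ```python
    # Sketch: standard Cochran-Armitage trend z-statistic for one outcome
    # across k ordered groups. The paper combines such per-outcome statistics
    # with specific weights to form the global multinomial test (see multiCA).
    import numpy as np
    from scipy.stats import norm

    def cochran_armitage(successes, totals, scores=None):
        successes = np.asarray(successes, dtype=float)
        totals = np.asarray(totals, dtype=float)
        s = np.arange(len(totals), dtype=float) if scores is None else np.asarray(scores, dtype=float)
        N = totals.sum()
        p = successes.sum() / N                                  # pooled proportion under H0
        t = np.sum(s * (successes - totals * p))                 # trend statistic
        var = p * (1 - p) * (np.sum(s**2 * totals) - np.sum(s * totals)**2 / N)
        z = t / np.sqrt(var)
        return z, 2 * norm.sf(abs(z))                            # two-sided p-value

    z, pval = cochran_armitage([2, 5, 9, 14], [20, 20, 20, 20])  # increasing-trend example
    ```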

  8. Degradation Measurement of Robot Arm Position Accuracy

    • data.nist.gov
    Updated Sep 7, 2018
    Cite
    Helen Qiao (2018). Degradation Measurement of Robot Arm Position Accuracy [Dataset]. http://doi.org/10.18434/M31962
    Dataset updated: Sep 7, 2018
    Dataset provided by: National Institute of Standards and Technology (http://www.nist.gov/)
    Authors: Helen Qiao
    License: https://www.nist.gov/open/license

    Description

    The dataset contains both the robot's high-level tool center position (TCP) health data and controller-level component information (i.e., joint positions, velocities, currents, and temperatures). The datasets can be used by users (e.g., software developers, data scientists) who work on robot health management (including accuracy) but have limited or no access to robots that can capture real data. The datasets can support:
    - development of robot health monitoring algorithms and tools;
    - research on technologies and tools for robot monitoring, diagnostics, prognostics, and health management (collectively called PHM);
    - validation and verification of industrial PHM implementations, for example verifying a robot's TCP accuracy after the work cell has been reconfigured, or whenever a manufacturer wants to determine whether the robot arm has experienced degradation.
    For data collection, a trajectory was programmed for the Universal Robots UR5, approaching and stopping at randomly selected locations in its workspace. The robot moves along this preprogrammed trajectory under different conditions of temperature, payload, and speed. The TCP positions (x, y, z) of the robot are measured by a 7-D measurement system developed at NIST. Differences between the positions measured by the 7-D system and the nominal positions calculated from the nominal robot kinematic parameters are recorded in the dataset. Controller-level sensing data are also collected from each joint (direct output from the controller of the UR5) to understand how temperature, payload, and speed influence position degradation. Controller-level data can be used for root cause analysis of robot performance degradation, providing joint positions, velocities, currents, accelerations, torques, and temperatures. For example, the cold-start temperatures of the six joints were approximately 25 degrees Celsius; after two hours of operation, the joint temperatures increased to approximately 35 degrees Celsius. Control variables are listed in the header file in the data set (UR5TestResult_header.xlsx). If you'd like to comment on this data and/or offer recommendations on future datasets, please email guixiu.qiao@nist.gov.
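    A minimal sketch of that measured-vs-nominal comparison (file and column names are assumptions; the actual variables are listed in UR5TestResult_header.xlsx):

    ```python
    # Minimal sketch: Euclidean TCP position error between the 7-D system
    # measurements and the kinematics-derived nominal positions.
    # File and column names are hypothetical.
    import numpy as np
    import pandas as pd

    df = pd.read_csv("UR5TestResult.csv")
    err = np.sqrt((df["x_meas"] - df["x_nom"])**2
                  + (df["y_meas"] - df["y_nom"])**2
                  + (df["z_meas"] - df["z_nom"])**2)
    print(err.describe())  # distribution of TCP position error across conditions
    ```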

  9. Dataset of X-Ray Micro Computed Tomography Measurements of Porosity for Nominal and Low Power Coupons Fabricated by Powder Bed Fusion-Laser Beam

    • kilthub.cmu.edu
    txt
    Updated Jun 25, 2025
    Cite
    Justin Miner; Sneha Prabha Narra (2025). Dataset of X-Ray Micro Computed Tomography Measurements of Porosity for Nominal and Low Power Coupons Fabricated by Powder Bed Fusion-Laser Beam [Dataset]. http://doi.org/10.1184/R1/28152209.v1
    Available download formats: txt
    Dataset updated: Jun 25, 2025
    Dataset provided by: Carnegie Mellon University
    Authors: Justin Miner; Sneha Prabha Narra
    License: Attribution-NonCommercial 3.0 (CC BY-NC 3.0), https://creativecommons.org/licenses/by-nc/3.0/
    License information was derived automatically

    Description

    Dataset of porosity data in Powder Bed Fusion-Laser Beam of Ti-6Al-4V, obtained via X-ray Micro Computed Tomography. This work was conducted on an EOS M290. The coupons in this dataset were fabricated at 150 W and 280 W.
    Contents:
    - poredf.csv: a csv file with pore measurements for each sample scanned.
    - parameters.csv: a csv file containing the process parameters and extreme value statistics (EVS) parameters for each sample scanned.
    WARNING: parameters.csv is too large to open in Excel; saving it in Excel will cause data loss.
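    Given the warning above, the safer way to inspect parameters.csv is programmatic; a minimal pandas sketch (no row limit, unlike Excel):

    ```python
    # Minimal sketch: load both CSVs without Excel's row limit.
    import pandas as pd

    pores = pd.read_csv("poredf.csv")
    params = pd.read_csv("parameters.csv")  # too large for Excel; loads fine here
    print(pores.shape, params.shape)
    ```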

  10. Statistical Area 3 2025

    • catalogue.data.govt.nz
    Updated Dec 2, 2024
    Cite
    (2024). Statistical Area 3 2025 - Dataset - data.govt.nz - discover and use data [Dataset]. https://catalogue.data.govt.nz/dataset/statistical-area-3-2025
    Dataset updated: Dec 2, 2024
    License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Refer to the current geographies boundaries table for a list of all current geographies and recent updates.

    This dataset is the definitive version of the annually released statistical area 3 (SA3) boundaries as at 1 January 2025, as defined by Stats NZ. This version contains 929 SA3s, including 4 non-digitised SA3s.

    The SA3 geography aims to meet three purposes:
    - approximate suburbs in major, large, and medium urban areas;
    - in predominantly rural areas, provide geographical areas that are larger in area and population size than SA2s but smaller than territorial authorities;
    - minimise data suppression.

    SA3s in major, large, and medium urban areas were created by combining SA2s to approximate suburbs as delineated in the Fire and Emergency NZ (FENZ) Localities dataset. Some of the resulting SA3s have very large populations. Outside of major, large, and medium urban areas, SA3s generally have populations of 5,000–10,000. These SA3s may represent either a single small urban area, a combination of small urban areas and their surrounding rural SA2s, or a combination of rural SA2s.

    Zero or nominal population SA3s

    To maximise the amount of unsuppressed data that can be provided in multivariate statistical tables, SA2s with fewer than 1,000 residents are combined with other SA2s wherever possible to reach the 1,000 SA3 population target. However, there are still a number of SA3s with zero or nominal populations:
    - Small population SA2s designed to maintain alignment between territorial authority and regional council geographies are merged with other SA2s to reach the 5,000–10,000 SA3 population target. These merges mean that some SA3s do not align with regional council boundaries but are aligned to territorial authorities.
    - Small population island SA2s are included in their adjacent land-based SA3.
    - Island SA2s outside territorial authority or region are unchanged in the SA3 geography.
    - Inland water SA2s are aggregated and named by territorial authority, as in the urban rural classification.
    - Inlet SA2s are aggregated and named by territorial authority, or by regional council where the water area is outside the territorial authority.
    - Oceanic SA2s translate directly to SA3s, as they are already aggregated to regional council.

    The 16 non-digitised SA2s are aggregated to the following 4 non-digitised SA3s (SA3 code: SA3 name):
    - 70001: Oceanic outside region
    - 70002: Oceanic oil rigs
    - 70003: Islands outside region
    - 70004: Ross Dependency outside region

    SA3 numbering and naming

    Each SA3 is a single geographic entity with a name and a numeric code. The name refers to a suburb, recognised place name, or portion of a territorial authority. In some instances where place names are the same or very similar, the SA3s are differentiated by their territorial authority, for example, Hillcrest (Hamilton City) and Hillcrest (Rotorua District).

    SA3 codes have five digits. North Island SA3 codes start with a 5, South Island SA3 codes start with a 6, and non-digitised SA3 codes start with a 7. They are numbered approximately north to south within their respective territorial authorities. When first created in 2025, the last digit of each code was 0. When SA3 boundaries change in future, only the last digit of the code will change, to ensure the north-south pattern is maintained. A small sketch of this code convention follows this description.

    High-definition version

    This high definition (HD) version is the most detailed geometry, suitable for use in GIS for geometric analysis operations and for the computation of areas, centroids, and other metrics. The HD version is aligned to the LINZ cadastre.

    Macrons

    Names are provided with and without tohutō/macrons. The column name for those without macrons is suffixed 'ascii'.

    Digital data

    Digital boundary data became freely available on 1 July 2007.

    Further information

    To download geographic classifications in table formats such as CSV, please use Ariā. For more information, please refer to the Statistical standard for geographic areas 2023. Contact: geography@stats.govt.nz
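    As referenced above, a small sketch of the SA3 code convention (the function name is ours; the digit rules come directly from the description):

    ```python
    # Minimal sketch of the five-digit SA3 code convention described above.
    def sa3_region(code: str) -> str:
        """Classify an SA3 code by its leading digit (5/6/7 per the description)."""
        leading = {"5": "North Island", "6": "South Island", "7": "non-digitised"}
        return leading[code[0]]

    print(sa3_region("70004"))  # Ross Dependency outside region -> "non-digitised"
    ```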

