100+ datasets found
  1. Entire World Educational Data

    • kaggle.com
    zip
    Updated Dec 23, 2023
    + more versions
    Cite
    Bhavik Jikadara (2023). Entire World Educational Data [Dataset]. https://www.kaggle.com/datasets/bhavikjikadara/entire-world-educational-data
    Explore at:
    Available download formats: zip (9465 bytes)
    Dataset updated
    Dec 23, 2023
    Authors
    Bhavik Jikadara
    License

    Apache License, v2.0 (https://www.apache.org/licenses/LICENSE-2.0)
    License information was derived automatically

    Area covered
    World
    Description

    This curated dataset offers a global view of education, providing insight into the educational landscape across diverse countries and regions. It covers key metrics including out-of-school rates, completion rates, proficiency levels, literacy rates, birth rates, and primary and tertiary education enrollment statistics, making it a valuable resource for researchers, educators, and policymakers assessing and improving education systems worldwide.

    Key Features:

    • Countries and Areas: Name of the countries and areas.
    • Latitude: Latitude coordinates of the geographical location.
    • Longitude: Longitude coordinates of the geographical location.
    • OOSR_Pre0Primary_Age_Male: Out-of-school rate for pre-primary age males.
    • OOSR_Pre0Primary_Age_Female: Out-of-school rate for pre-primary age females.
    • OOSR_Primary_Age_Male: Out-of-school rate for primary age males.
    • OOSR_Primary_Age_Female: Out-of-school rate for primary age females.
    • OOSR_Lower_Secondary_Age_Male: Out-of-school rate for lower secondary age males.
    • OOSR_Lower_Secondary_Age_Female: Out-of-school rate for lower secondary age females.
    • OOSR_Upper_Secondary_Age_Male: Out-of-school rate for upper secondary age males.
    • OOSR_Upper_Secondary_Age_Female: Out-of-school rate for upper secondary age females.
    • Completion_Rate_Primary_Male: Completion rate for primary education among males.
    • Completion_Rate_Primary_Female: Completion rate for primary education among females.
    • Completion_Rate_Lower_Secondary_Male: Completion rate for lower secondary education among males.
    • Completion_Rate_Lower_Secondary_Female: Completion rate for lower secondary education among females.
    • Completion_Rate_Upper_Secondary_Male: Completion rate for upper secondary education among males.
    • Completion_Rate_Upper_Secondary_Female: Completion rate for upper secondary education among females.
    • Grade_2_3_Proficiency_Reading: Proficiency in reading for grade 2-3 students.
    • Grade_2_3_Proficiency_Math: Proficiency in math for grade 2-3 students.
    • Primary_End_Proficiency_Reading: Proficiency in reading at the end of primary education.
    • Primary_End_Proficiency_Math: Proficiency in math at the end of primary education.
    • Lower_Secondary_End_Proficiency_Reading: Proficiency in reading at the end of lower secondary education.
    • Lower_Secondary_End_Proficiency_Math: Proficiency in math at the end of lower secondary education.
    • Youth_15_24_Literacy_Rate_Male: Literacy rate among male youths aged 15-24.
    • Youth_15_24_Literacy_Rate_Female: Literacy rate among female youths aged 15-24.
    • Birth_Rate: Birth rate in the respective countries/areas.
    • Gross_Primary_Education_Enrollment: Gross enrollment in primary education.
    • Gross_Tertiary_Education_Enrollment: Gross enrollment in tertiary education.
    • Unemployment_Rate: Unemployment rate in the respective countries/areas.
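
    As a quick illustration of how these columns can be used, here is a minimal pandas sketch that loads the file and ranks countries by the gender gap in youth literacy. The CSV filename is an assumption (the exact name inside the Kaggle zip is not listed here); the column names follow the list above.

```python
import pandas as pd

# Hypothetical filename; the CSV inside the Kaggle zip may be named differently.
df = pd.read_csv("Global_Education.csv")

# Gender gap in youth (15-24) literacy, using the columns listed above.
df["Youth_Literacy_Gap"] = (
    df["Youth_15_24_Literacy_Rate_Male"] - df["Youth_15_24_Literacy_Rate_Female"]
)

# Ten countries/areas with the largest male-female literacy gap.
# "Countries and Areas" follows the feature list above; adjust if the header differs.
print(df.nlargest(10, "Youth_Literacy_Gap")[["Countries and Areas", "Youth_Literacy_Gap"]])
```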

  2. Job Offers Web Scraping Search

    • kaggle.com
    zip
    Updated Feb 11, 2023
    Cite
    The Devastator (2023). Job Offers Web Scraping Search [Dataset]. https://www.kaggle.com/datasets/thedevastator/job-offers-web-scraping-search
    Explore at:
    Available download formats: zip (5322 bytes)
    Dataset updated
    Feb 11, 2023
    Authors
    The Devastator
    License

    CC0 1.0 Universal (https://creativecommons.org/publicdomain/zero/1.0/)

    Description

    Job Offers Web Scraping Search

    Targeted Results to Find the Optimal Work Solution

    By [source]

    About this dataset

    This dataset collects job offers via web scraping, filtered by specific keywords, locations, and times. It gives users precise search capabilities to find the working arrangement that suits them best. With the information collected, users can explore options that match their personal situation, skill set, and preferences in terms of location and schedule. The columns provide detailed information on job titles, employer names, locations, and time frames, as well as other parameters needed to make an informed choice about your next career opportunity.

    How to use the dataset

    This dataset is a great resource for those looking to find an optimal work solution based on keywords, location and time parameters. With this information, users can quickly and easily search through job offers that best fit their needs. Here are some tips on how to use this dataset to its fullest potential:

    • Start by identifying the type of job offer you want. The keyword column lets you narrow your search to postings that contain the word or phrase you are looking for.

    • Next, consider where the job is located: the Location (Ubicació) column shows where in the world each posting is from, so you can confirm it suits your needs.

    • Then look at when the position is available: the Time frame (Temps_Oferta) column indicates when each posting was made and whether the role is full-time, part-time, or casual/temporary, so you can check it meets your requirements before applying.

    • If details such as weekly hours or further schedule information are important criteria, see the Horari (schedule) column as well. Once keywords, location, and time frame are settled, check the Empresa (company) and Nom_Oferta (offer name) columns to see who would be employing you should you land the role.

      Together, these fields give a motivated job seeker everything needed to find an optimal work solution. Good luck!

    Research Ideas

    • Machine learning can be used to group job offers, making it easier to identify similarities and differences between them and allowing users to target their search for a work solution more precisely.
    • The data can be used to compare job offerings across different areas or types of jobs, enabling users to make better-informed decisions about their career options and goals.
    • It may also provide insight into the local job market, enabling companies and employers to identify potential new opportunities or trends that may previously have gone unnoticed.

    Acknowledgements

    If you use this dataset in your research, please credit the original authors.

    License

    License: CC0 1.0 Universal (CC0 1.0) Public Domain Dedication. No copyright: you can copy, modify, distribute, and perform the work, even for commercial purposes, all without asking permission.

    Columns

    File: web_scraping_information_offers.csv

    | Column name  | Description                          |
    |:-------------|:-------------------------------------|
    | Nom_Oferta   | Name of the job offer. (String)      |
    | Empresa      | Company offering the job. (String)   |
    | Ubicació     | Location of the job offer. (String)  |
    | Temps_Oferta | Time of the job offer. (String)      |
    | Horari       | Schedule of the job offer. (String)  |
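
    For the keyword/location/time filtering described in the "How to use the dataset" section, a minimal pandas sketch over the columns above is shown below; the keyword and city values are purely illustrative.

```python
import pandas as pd

# Load the scraped job offers using the columns documented above.
df = pd.read_csv("web_scraping_information_offers.csv")

keyword, city = "analista", "Barcelona"  # illustrative values, not taken from the dataset

# Keep offers whose title contains the keyword and whose location matches the city.
mask = (
    df["Nom_Oferta"].str.contains(keyword, case=False, na=False)
    & df["Ubicació"].str.contains(city, case=False, na=False)
)

print(df.loc[mask, ["Nom_Oferta", "Empresa", "Ubicació", "Temps_Oferta", "Horari"]])
```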

  3. COVID Impact Survey - Public Data

    • data.world
    csv, zip
    Updated Oct 16, 2024
    Cite
    The Associated Press (2024). COVID Impact Survey - Public Data [Dataset]. https://data.world/associatedpress/covid-impact-survey-public-data
    Explore at:
    Available download formats: csv, zip
    Dataset updated
    Oct 16, 2024
    Authors
    The Associated Press
    Description

    Overview

    The Associated Press is sharing data from the COVID Impact Survey, which provides statistics about physical health, mental health, economic security and social dynamics related to the coronavirus pandemic in the United States.

    Conducted by NORC at the University of Chicago for the Data Foundation, the probability-based survey provides estimates for the United States as a whole, as well as in 10 states (California, Colorado, Florida, Louisiana, Minnesota, Missouri, Montana, New York, Oregon and Texas) and eight metropolitan areas (Atlanta, Baltimore, Birmingham, Chicago, Cleveland, Columbus, Phoenix and Pittsburgh).

    The survey is designed to allow for an ongoing gauge of public perception, health and economic status to see what is shifting during the pandemic. When multiple sets of data are available, it will allow for the tracking of how issues ranging from COVID-19 symptoms to economic status change over time.

    The survey is focused on three core areas of research:

    • Physical Health: Symptoms related to COVID-19, relevant existing conditions and health insurance coverage.
    • Economic and Financial Health: Employment, food security, and government cash assistance.
    • Social and Mental Health: Communication with friends and family, anxiety and volunteerism. (Questions based on those used on the U.S. Census Bureau’s Current Population Survey.)

    Using this Data - IMPORTANT

    This is survey data and must be properly weighted during analysis: DO NOT REPORT THIS DATA AS RAW OR AGGREGATE NUMBERS!!

    Instead, use our queries linked below or statistical software such as R or SPSS to weight the data.

    Queries

    If you'd like to create a table to see how people nationally or in your state or city feel about a topic in the survey, use the survey questionnaire and codebook to match a question (the variable label) to a variable name. For instance, "How often have you felt lonely in the past 7 days?" is variable "soc5c".

    Nationally: Go to this query and enter soc5c as the variable. Hit the blue Run Query button in the upper right hand corner.

    Local or State: To find figures for that response in a specific state, go to this query and type in a state name and soc5c as the variable, and then hit the blue Run Query button in the upper right hand corner.

    The resulting sentence you could write out of these queries is: "People in some states are less likely to report loneliness than others. For example, 66% of Louisianans report feeling lonely on none of the last seven days, compared with 52% of Californians. Nationally, 60% of people said they hadn't felt lonely."
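
    If you prefer working directly with the microdata rather than the linked queries, the tabulation above can be approximated with a weighted frequency table. The sketch below is illustrative only: the filename follows the naming convention described under "About the Data", and the weight variable name is a placeholder; the actual weight variable is documented in the codebook.

```python
import pandas as pd

# Placeholder filename (naming convention: <number>_<embargo date>_covid_impact_survey).
df = pd.read_csv("01_April_30_covid_impact_survey.csv")

weight_col = "survey_weight"  # placeholder; use the weight variable named in the codebook

# Weighted distribution of soc5c ("How often have you felt lonely in the past 7 days?").
weighted = df.groupby("soc5c")[weight_col].sum()
print((100 * weighted / weighted.sum()).round(1))
```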

    Margin of Error

    The margin of error for the national and regional surveys is found in the attached methods statement. You will need the margin of error to determine if the comparisons are statistically significant. If the difference is:

    • At least twice the margin of error, you can report there is a clear difference.
    • At least as large as the margin of error, you can report there is a slight or apparent difference.
    • Less than or equal to the margin of error, you can report that the respondents are divided or there is no difference.

    A Note on Timing

    Survey results will generally be posted under embargo on Tuesday evenings. The data is available for release at 1 p.m. ET Thursdays.

    About the Data

    The survey data will be provided under embargo in both comma-delimited and statistical formats.

    Each set of survey data will be numbered and have the date the embargo lifts in front of it, in the format: 01_April_30_covid_impact_survey. The survey has been organized by the Data Foundation, a non-profit, non-partisan think tank, and is sponsored by the Federal Reserve Bank of Minneapolis and the Packard Foundation. It is conducted by NORC at the University of Chicago, a non-partisan research organization. (NORC is not an abbreviation; it is part of the organization's formal name.)

    Data for the national estimates are collected using the AmeriSpeak Panel, NORC’s probability-based panel designed to be representative of the U.S. household population. Interviews are conducted with adults age 18 and over representing the 50 states and the District of Columbia. Panel members are randomly drawn from AmeriSpeak with a target of achieving 2,000 interviews in each survey. Invited panel members may complete the survey online or by telephone with an NORC telephone interviewer.

    Once all the study data have been made final, an iterative raking process is used to adjust for any survey nonresponse as well as any noncoverage or under and oversampling resulting from the study specific sample design. Raking variables include age, gender, census division, race/ethnicity, education, and county groupings based on county level counts of the number of COVID-19 deaths. Demographic weighting variables were obtained from the 2020 Current Population Survey. The count of COVID-19 deaths by county was obtained from USA Facts. The weighted data reflect the U.S. population of adults age 18 and over.

    Data for the regional estimates are collected using a multi-mode, address-based sampling (ABS) approach that allows residents of each area to complete the interview via web or with an NORC telephone interviewer. All sampled households are mailed a postcard inviting them to complete the survey either online using a unique PIN or via telephone by calling a toll-free number. Interviews are conducted with adults age 18 and over with a target of achieving 400 interviews in each region in each survey. Additional details on the survey methodology and the survey questionnaire are attached below or can be found at https://www.covid-impact.org.

    Attribution

    Results should be credited to the COVID Impact Survey, conducted by NORC at the University of Chicago for the Data Foundation.

    AP Data Distributions

    ​To learn more about AP's data journalism capabilities for publishers, corporations and financial institutions, go here or email kromano@ap.org.

  4. COVID-19 Cases, Hospitalizations, and Deaths (By County) - ARCHIVE

    • catalog.data.gov
    • data.ct.gov
    Updated Aug 12, 2023
    + more versions
    Cite
    data.ct.gov (2023). COVID-19 Cases, Hospitalizations, and Deaths (By County) - ARCHIVE [Dataset]. https://catalog.data.gov/dataset/covid-19-cases-hospitalizations-and-deaths-by-county
    Explore at:
    Dataset updated
    Aug 12, 2023
    Dataset provided by
    data.ct.gov
    Description

    Note: DPH is updating and streamlining the COVID-19 cases, deaths, and testing data. As of 6/27/2022, the data will be published in four tables instead of twelve.

    The COVID-19 Cases, Deaths, and Tests by Day dataset contains cases and test data by date of sample submission. The death data are by date of death. This dataset is updated daily and contains information back to the beginning of the pandemic. The data can be found at https://data.ct.gov/Health-and-Human-Services/COVID-19-Cases-Deaths-and-Tests-by-Day/g9vi-2ahj.

    The COVID-19 State Metrics dataset contains over 93 columns of data. This dataset is updated daily and currently contains information starting June 21, 2022 to the present. The data can be found at https://data.ct.gov/Health-and-Human-Services/COVID-19-State-Level-Data/qmgw-5kp6.

    The COVID-19 County Metrics dataset contains 25 columns of data. This dataset is updated daily and currently contains information starting June 16, 2022 to the present. The data can be found at https://data.ct.gov/Health-and-Human-Services/COVID-19-County-Level-Data/ujiq-dy22.

    The COVID-19 Town Metrics dataset contains 16 columns of data. This dataset is updated daily and currently contains information starting June 16, 2022 to the present. The data can be found at https://data.ct.gov/Health-and-Human-Services/COVID-19-Town-Level-Data/icxw-cada. To protect confidentiality, if a town has fewer than 5 cases or positive NAAT tests over the past 7 days, those data will be suppressed.

    This archive covers COVID-19 cases, hospitalizations, and associated deaths that have been reported among Connecticut residents. All data in this report are preliminary; data for previous dates will be updated as new reports are received and data errors are corrected. Hospitalization data were collected by the Connecticut Hospital Association and reflect the number of patients currently hospitalized with laboratory-confirmed COVID-19. Deaths reported to either the Office of the Chief Medical Examiner (OCME) or the Department of Public Health (DPH) are included in the daily COVID-19 update.

    Data on Connecticut deaths were obtained from the Connecticut Deaths Registry maintained by the DPH Office of Vital Records. Cause of death was determined by a death certifier (e.g., physician, APRN, medical examiner) using their best clinical judgment. Additionally, all COVID-19 deaths, including suspected or related, are required to be reported to OCME. On April 4, 2020, CT DPH and OCME released a joint memo to providers and facilities within Connecticut providing guidelines for certifying deaths due to COVID-19 that were consistent with the CDC's guidelines and a reminder of the required reporting to OCME.25,26 As of July 1, 2021, OCME had reviewed every case reported and performed additional investigation on about one-third of reported deaths to better ascertain whether COVID-19 did or did not cause or contribute to the death. Some of these investigations resulted in the OCME performing postmortem swabs for PCR testing on individuals whose deaths were suspected to be due to COVID-19 but for whom an antemortem diagnosis could not be made.31 The OCME issued or re-issued about 10% of COVID-19 death certificates and, when appropriate, removed COVID-19 from the death certificate.

    For standardization and tabulation of mortality statistics, written cause of death statements made by the certifiers on death certificates are sent to the National Center for Health Statistics (NCHS) at the CDC, which assigns cause of death codes according to the International Classification of Diseases, 10th Revision (ICD-10).25,26 COVID-19 deaths in this report are defined as those for which the death certificate has an ICD-10 code of U07.1 as either a primary (underlying) or a contributing cause of death. More information on COVID-19 mortality can be found at the following link: https://portal.ct.gov/DPH/Health-Information-Systems--Reporting/Mortality/Mortality-Statistics

    Data are reported d

  5. Data from: Data on xylem sap proteins from Mn- and Fe-deficient tomato...

    • agdatacommons.nal.usda.gov
    • datasets.ai
    • +3more
    bin
    Updated Nov 21, 2025
    + more versions
    Cite
    Laura Ceballos-Laita; Elain Gutierrez-Carbonell; Daisuke Takahashi; Anunciación Abadía; Matsuo Uemura; Javier Abadía; Ana Flor López-Millán (2025). Data from: Data on xylem sap proteins from Mn- and Fe-deficient tomato plants obtained using shotgun proteomics [Dataset]. http://doi.org/10.1016/j.dib.2018.01.034
    Explore at:
    Available download formats: bin
    Dataset updated
    Nov 21, 2025
    Dataset provided by
    ProteomeXchange
    Authors
    Laura Ceballos-Laita; Elain Gutierrez-Carbonell; Daisuke Takahashi; Anunciación Abadía; Matsuo Uemura; Javier Abadía; Ana Flor López-Millán
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This article contains consolidated proteomic data obtained from xylem sap collected from tomato plants grown under Fe- and Mn-sufficient control, Fe-deficient, and Mn-deficient conditions. Data presented here cover proteins identified and quantified by shotgun proteomics and Progenesis LC-MS analyses: proteins identified with at least two peptides and showing changes that are statistically significant (ANOVA; p ≤ 0.05) and above a biologically relevant threshold (fold ≥ 2) between treatments are listed. A comparison between Fe-deficient, Mn-deficient, and control xylem sap samples using a multivariate statistical analysis (Principal Component Analysis, PCA) is also included. Data included in this article are discussed in depth in "Effects of Fe and Mn deficiencies on the protein profiles of tomato (Solanum lycopersicum) xylem sap as revealed by shotgun analyses", Ceballos-Laita et al., J. Proteomics, 2018. This dataset is made available to support the cited study as well as to extend analyses at a later stage.

    Resources in this dataset:

    Resource Title: ProteomeXchange submission PXD007517. Xylem sap shotgun proteomics from Fe- and Mn-deficient and Mn-toxic tomato plants. File Name: Web Page, url: http://proteomecentral.proteomexchange.org/cgi/GetDataset?ID=PXD007517

    The MS proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifier PXD007517. The record also includes the FTP location. Files available at https://www.ebi.ac.uk/pride/archive/projects/PXD007517 via HTML, FTP, or Fast (Aspera) download: 1 SEARCH.xml file, 1 Peak file, 24 RAW files, 1 Mascot information.xlsx file. Supplementary data associated with this article can be found in the online version at http://dx.doi.org/10.1016/j.dib.2018.01.034

  6. RADSeq Data to assess population structure of Desmophyllum pertusum found...

    • data.usgs.gov
    • datasets.ai
    • +1more
    Updated Aug 24, 2024
    + more versions
    Cite
    Alexis Weinnig; Aaron Aunins; Veronica Salamone; Cheryl Morrison (2024). RADSeq Data to assess population structure of Desmophyllum pertusum found along the United States eastern continental margin [Dataset]. http://doi.org/10.5066/P145JOIO
    Explore at:
    Dataset updated
    Aug 24, 2024
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Authors
    Alexis Weinnig; Aaron Aunins; Veronica Salamone; Cheryl Morrison
    License

    U.S. Government Works (https://www.usa.gov/government-works)
    License information was derived automatically

    Time period covered
    Aug 6, 2009 - Apr 29, 2019
    Area covered
    United States
    Description

    This dataset contains metadata about the origin of the cold-water coral samples that were RAD sequenced and the single nucleotide polymorphisms (SNPs) generated for population genomic analyses. These data were used to examine patterns of genomic structure of Desmophyllum pertusum from throughout US waters along the eastern continental margin and the Gulf of Mexico. The raw sequence data are archived in GenBank BioProject PRJNA1027916 at: https://www.ncbi.nlm.nih.gov/bioproject/

  7. Final Report for Phased Data Recovery at AZ U:15:1(REC) on SCIDD Property...

    • search.dataone.org
    Updated Jun 21, 2016
    + more versions
    Cite
    the Digital Archaeological Record (2016). Final Report for Phased Data Recovery at AZ U:15:1(REC) on SCIDD Property and Trenching for Additional Canal (AZ U:15:8[REC]) Exposures on Federal Land Near Ashurst-Hayden Diversion Dam, Pinal County, Arizona: Redacted Pages [Dataset]. http://doi.org/10.6067/XCV8TQ62BG
    Explore at:
    Dataset updated
    Jun 21, 2016
    Dataset provided by
    the Digital Archaeological Record
    Area covered
    Description

    As authorized under the Arizona Water Settlements Act of 2004, the San Carlos Irrigation and Drainage District (SCIDD) is undertaking a 10-year rehabilitation project of its irrigation system. SCIDD is the non-Indian irrigation component of the San Carlos Irrigation Project (SCIP), which provides irrigation water to the communities of Florence, Coolidge, and Casa Grande in Pinal County, Arizona. The initial focus of the SCIDD Rehabilitation Project is the rehabilitation of the Ashurst-Hayden Diversion Dam and its associated headworks, which diverts water from the Gila River for SCIP; the armoring of a segment of the south bank of the Gila River; and the construction of a sediment removal pond and storage area downstream of Ashurst-Hayden Diversion Dam.

    Following consultation with project stakeholders on the results of Phase 1 data testing at AZ U:15:1(REC) (later AZ U:15:676[ASM]), a prehistoric site, and additional recording of historic features at two other sites, AZ U:16:303(ASM), the Dam Tender's Complex, and AZ AA:3:215(ASM), the Florence-Casa Grande Canal, Reclamation directed Archaeological Consulting Services, Ltd. (ACS) to conduct Phase 2 data recovery at AZ U:15:1(REC) to excavate the known features and expose the prehistoric canal.

    ACS excavated 13 features and subfeatures at AZ U:15:1(REC), and two additional exposures of the canal were located northeast of AZ U:15:1(REC). The excavation and subsequent artifact analysis reveal that AZ U:15:1(REC) was a Late Sedentary (Sacaton phase) short-term, seasonal farmstead. The additional canal (AZ U:15:8[REC]) exposures to the northeast of AZ U:15:1(REC) reflect a depositional environment that suggests they were very near the canal's intake point along the river. Samples processed from the canal produced a weak ostracode result, but succeeded in providing a good radiocarbon date of A.D. 990 to 1150, which overlaps with the dates from AZ U:15:1(REC).

    The size of the canal, and the lack of laterals or parallel canals, strongly suggests that this was the uppermost extent of the Grewe-Casa Grande Canal system. The project provides important new data for understanding Hohokam movement into the upper portion of the Middle Gila River Valley, its timing, and the resources, wild and domesticated, that they focused on. Based on the results of the Phase 2 data recovery, Reclamation determined that these efforts had mitigated the adverse effects to both AZ U:15:1(REC) and AZ U:15:8(REC) that would result from the proposed project.

    These are the redacted pages. The final report can be found at tDAR ID: 377904. Photos taken during Phase 2 data recovery can be found at tDAR ID: 378156. The photo log can be found at tDAR ID: 378175. Ceramic data can be found at tDAR ID: 377912. Lithic data can be found at tDAR ID: 377913. Shell data can be found at tDAR ID: 377921. Flotation data can be found at tDAR ID: 377922. Pollen data can be found at tDAR ID: 377914.

  8. Data release for Wind turbine wakes can impact down-wind vegetation...

    • catalog.data.gov
    • data.usgs.gov
    • +1more
    Updated Oct 1, 2025
    + more versions
    Cite
    U.S. Geological Survey (2025). Data release for Wind turbine wakes can impact down-wind vegetation greenness [Dataset]. https://catalog.data.gov/dataset/data-release-for-wind-turbine-wakes-can-impact-down-wind-vegetation-greenness
    Explore at:
    Dataset updated
    Oct 1, 2025
    Dataset provided by
    U.S. Geological Survey
    Description

    Global wind energy has expanded 5-fold since 2010 and is predicted to expand another 8–10-fold over the next 30 years. Wakes generated by wind turbines can alter downwind microclimates and potentially downwind vegetation. However, the design of past studies has made it difficult to isolate the impact of wake effects on vegetation from land cover change. We used hourly wind data to model wake and non-wake zones around 17 wind facilities across the U.S. and compared remotely-sensed vegetation greenness in wake and non-wake zones before and after construction. We located sampling sites only in the dominant vegetation type and in areas that were not disturbed before or after construction. We found evidence for wake effects on vegetation greenness at 10 of 17 facilities for portions of, or the entire growing season. Evidence included statistical significance in Before After Control Impact statistical models, differences >3% between expected and observed values of vegetation greenness, and consistent spatial patterns of anomalies in vegetation greenness relative to turbine locations and wind direction. Wakes induced both increases and decreases in vegetation greenness, which may be difficult to predict prior to construction. The magnitude of wake effects depended primarily on precipitation and to a lesser degree aridity. Wake effects did not show trends over time following construction, suggesting the changes impact vegetation greenness within a growing season, but do not accrue over years. Even small changes in vegetation greenness, similar to those found in this study, have been seen to affect higher trophic levels. Given the rapid global growth of wind energy, and the importance of vegetation condition for agriculture, grazing, wildlife, and carbon storage, understanding how wakes from wind turbines impact vegetation is essential to exploit or ameliorate these effects.

  9. Vehicle licensing statistics data tables

    • gov.uk
    • s3.amazonaws.com
    Updated Oct 15, 2025
    Cite
    Department for Transport (2025). Vehicle licensing statistics data tables [Dataset]. https://www.gov.uk/government/statistical-data-sets/vehicle-licensing-statistics-data-tables
    Explore at:
    Dataset updated
    Oct 15, 2025
    Dataset provided by
    GOV.UK
    Authors
    Department for Transport
    Description

    Data files containing detailed information about vehicles in the UK are also available, including make and model data.

    Some tables have been withdrawn and replaced. The table index for this statistical series has been updated to provide a full map between the old and new numbering systems used in this page.

    The Department for Transport is committed to continuously improving the quality and transparency of our outputs, in line with the Code of Practice for Statistics. In line with this, we have recently concluded a planned review of the processes and methodologies used in the production of vehicle licensing statistics. The review sought to identify and introduce further improvements and efficiencies in the coding technologies we use to produce our data, and as part of that we identified several historical errors across the published data tables affecting different historical periods. These errors are the result of mistakes in past production processes that we have now identified, corrected, and taken steps to eliminate going forward.

    Most of the revisions to our published figures are small, typically changing values by less than 1% to 3%. The key revisions are:

    Licensed Vehicles (2014 Q3 to 2016 Q3)

    We found that some unlicensed vehicles during this period were mistakenly counted as licensed. This caused a slight overstatement, about 0.54% on average, in the number of licensed vehicles during this period.

    3.5 - 4.25 tonnes Zero Emission Vehicles (ZEVs) Classification

    Since 2023, ZEVs weighing between 3.5 and 4.25 tonnes have been classified as light goods vehicles (LGVs) instead of heavy goods vehicles (HGVs). We have now applied this change to earlier data and corrected an error in table VEH0150. As a result, the number of newly registered HGVs has been reduced by:

    • 3.1% in 2024

    • 2.3% in 2023

    • 1.4% in 2022

    Table VEH0156 (2018 to 2023)

    Table VEH0156, which reports average CO₂ emissions for newly registered vehicles, has been updated for the years 2018 to 2023. Most changes are minor (under 3%), but the e-NEDC measure saw a larger correction, up to 15.8%, due to a calculation error. Other measures (WLTP and Reported) were less notable, except for April 2020 when COVID-19 led to very few new registrations which led to greater volatility in the resultant percentages.

    Neither these specific revisions, nor any of the others introduced, have had a material impact on the overall statistics, the direction of trends, or the key messages they previously conveyed.

    Specific details of each revision have been included in the relevant data table notes to ensure transparency and clarity. Users are advised to review these notes as part of their regular use of the data to ensure their analysis accounts for these changes accordingly.

    If you have questions regarding any of these changes, please contact the Vehicle statistics team.

    All vehicles

    Licensed vehicles

    Overview

    VEH0101: Vehicles at the end of the quarter by licence status and body type: Great Britain and United Kingdom (ODS, 99.7 KB), available at https://assets.publishing.service.gov.uk/media/68ecf5acf159f887526bbd7c/veh0101.ods

    Detailed breakdowns

    VEH0103: Licensed vehicles at the end of the year by tax class: Great Britain and United Kingdom (ODS, 23.8 KB), available at https://assets.publishing.service.gov.uk/media/68ecf5abf159f887526bbd7b/veh0103.ods

    VEH0105: Licensed vehicles at ... (https://assets.publishing.service.gov.uk/media/68ecf5ac2adc28a81b4acfc8/veh0105.ods)

  10. Data from: Wind Turbine / Reviewed Data

    • data.openei.org
    • s.cnmilf.com
    • +2more
    b0
    Updated Oct 5, 2019
    + more versions
    Cite
    Andy Scholbrock; Andy Scholbrock (2019). Wind Turbine / Reviewed Data [Dataset]. https://data.openei.org/submissions/4189
    Explore at:
    Available download formats: b0
    Dataset updated
    Oct 5, 2019
    Dataset provided by
    United States Department of Energy (http://energy.gov/)
    Office of Energy Efficiency and Renewable Energy (http://energy.gov/eere)
    Wind Energy Technologies Office (WETO)
    Open Energy Data Initiative (OEDI)
    Authors
    Andy Scholbrock
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Overview

    The SUMR-D CART2 turbine data are recorded by the CART2 wind turbine's supervisory control and data acquisition (SCADA) system for the Advanced Research Projects Agency–Energy (ARPA-E) SUMR-D project located at the National Renewable Energy Laboratory (NREL) Flatirons Campus. For the project, the CART2 wind turbine was outfitted with a highly flexible rotor specifically designed and constructed for the project. More details about the project can be found here: https://sumrwind.com/. The data include power, loads, and meteorological information from the turbine during startup, operation, and shutdown, and when it was parked and idle.

    Data Details

    Additional files are attached:
    • sumr_d_5-Min_Database.mat - a database file in MATLAB format of this dataset, which can be used to search for desired data files.
    • sumr_d_5-Min_Database.xlsx - a database file in Microsoft Excel format of this dataset, which can be used to search for desired data files.
    • loadcartU.m - loads a CART data file and puts it in your workspace as a MATLAB matrix (you can call this script from your own MATLAB scripts to do your own analysis).
    • charts.mat - a dependency file needed for the other scripts (it allows you to make custom preselections for cartPlotU.m).
    • cartLoadHdrU.m - loads the header information for a data file (the header is embedded at the beginning of each data file).
    • cartPlotU.m - a graphical user interface (GUI) that allows you to interactively look at different channels. To use it, run the script in MATLAB and load in the data file(s) of interest; from there, you can select different channels and plot them against each other. Note that this script has issues with later versions of MATLAB; the preferred version is R2011b.
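
    For users working outside MATLAB, the database files can also be inspected from Python. This is a hedged sketch, not part of the submission's own tooling: it assumes the .mat file is not saved in MATLAB v7.3 (HDF5) format, in which case h5py would be needed instead of scipy.

```python
from scipy.io import loadmat
import pandas as pd

# Load the 5-minute database and list its variables (ignoring MATLAB metadata keys).
db = loadmat("sumr_d_5-Min_Database.mat", squeeze_me=True)
print([key for key in db if not key.startswith("__")])

# The Excel version of the same database can be browsed directly with pandas.
xl = pd.read_excel("sumr_d_5-Min_Database.xlsx")
print(xl.head())
```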

    Data Quality

    Wind turbine blade loading data were calibrated using blade gravity calibrations prior to data collection and throughout the data collection period. Blade loading was also checked for data quality following data collection as strain gauge measurements drifted throughout the data collection. These drifts in the strain gauge measurements were removed in post processing.

  11. Data from: Examining LGBTQ+-related Concepts in the Semantic Web: Link...

    • data.niaid.nih.gov
    Updated Jan 18, 2025
    Cite
    Wang, Shuai; Adamidou, Maria (2025). Examining LGBTQ+-related Concepts in the Semantic Web: Link Discovery, Concept Drift, Ambiguity, and Multilingual Information Reuse [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_12684869
    Explore at:
    Dataset updated
    Jan 18, 2025
    Dataset provided by
    Vrije Universiteit Amsterdam
    Authors
    Wang, Shuai; Adamidou, Maria
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Examining LGBTQ+-related Concepts in the Semantic Web

    Introduction

    Welcome to the project. We study the links between LGBTQ+ ontologies and structured vocabularies. More specifically, we focus on GSSO, Homosaurus, QLIT, and Wikidata. The code is free to use under the CC-BY 4.0 license. You can reuse/extend the code for free as long as you give credit to us in your publication/data. Citation information will be added after the corresponding paper is accepted. The paper is under submission and will be included soon.

    If you would like to extend this work, you may want to contact the experts listed in the acknowledgement about legal and ethical issues before releasing your data/code. The DOI for this version is 10.5281/zenodo.12684870. The latest code can be found at https://github.com/Multilingual-LGBTQIA-Vocabularies/Examing_LGBTQ_Concepts.

    To reproduce the results or extend our work, you need to take the following steps.

    Step 1: Preparing the data

    In this project, the following datasets were used:

    QLIT: version 1.0

    Homosaurus: version 3.5 and version 2.3

    Wikidata: retrieved from the SPARQL Endpoint (https://query.wikidata.org/sparql) and processed between 5th May and 8th May, 2024.

    GSSO: we used gsso.owl (version 2.0.10) obtained from its Github (https://github.com/Superraptor/GSSO).

    LCSH was obtained from the official website: https://id.loc.gov/authorities/subjects.html on 9th May, 2024. The LCSH data was converted to its HDT format.

    Please put the corresponding files in the following folders (renaming them where necessary) so that the Python scripts can find them.

    ./data/GSSO/gsso.owl

    ./data/Homosaurus/v2.ttl and ./data/Homosaurus/v3.ttl

    ./data/LCSH/lcsh.hdt (we used its HDT format for fast query and analysis). The original file is also attached: subjects.skosrdf.nt.

    ./data/QLIT/Qlit-v1.ttl

    The case of Wikidata is more complicated. The following scripts were used for the retrieval of data. These scripts are all in the folder ./data/wikidata/

    We used the Wikidata SPARQL endpoint: https://query.wikidata.org/

    The following relations from Wikidata were used while extracting triples.

    Wikidata - GSSO: http://www.wikidata.org/prop/direct/P9827

    Wikidata - Homosaurus 2: http://www.wikidata.org/prop/direct/P6417

    Wikidata - Homosaurus 3: http://www.wikidata.org/prop/direct/P10192

    Wikidata - LCSH: http://www.wikidata.org/prop/direct/P244

    The generated files are:

    'wikidata-homosaurus-v2-links.nt'

    'wikidata-homosaurus-v3-links.nt'

    'wikidata-gsso-links.nt'

    'wikidata-qlit-links.nt'

    'wikidata-lcsh-links-all.nt'

    Please note that the case of Wikidata-LCSH is more complicated: there are many links that have nothing to do with the entities in our scope. We restrict it to only entities in the scope of this paper. See below for more details.

    You can find all the scripts in the corresponding folder in the data folder.

    All the SPARQL queries used can be found in the folder ./SPARQL/
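
    As an illustration of this retrieval step (not a copy of the queries in ./SPARQL/), the sketch below fetches Wikidata-Homosaurus v3 links via the P10192 property from the public endpoint using SPARQLWrapper.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Query the public Wikidata endpoint for entities carrying a Homosaurus v3 ID (P10192).
endpoint = SPARQLWrapper("https://query.wikidata.org/sparql", agent="example-links-script/0.1")
endpoint.setQuery("""
SELECT ?item ?homosaurusId WHERE {
  ?item <http://www.wikidata.org/prop/direct/P10192> ?homosaurusId .
}
LIMIT 100
""")
endpoint.setReturnFormat(JSON)
results = endpoint.query().convert()

for row in results["results"]["bindings"]:
    print(row["item"]["value"], row["homosaurusId"]["value"])
```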

    Note! For GSSO, the following two mistakes were corrected while preprocessing:

    https://www.wikidata.org/wiki/Q1823134 should not be used as a relation. We have replaced it with http://www.wikidata.org/prop/direct/P244.

    Instead of referring to the page, we refer to the entity. We use http://www.wikidata.org/entity/* instead of https://www.wikidata.org/wiki/*

    The redirection test was conducted on 30th April, 2024, between 6PM and 8PM. The files can be found in the folder of ./data/Homosaurus/redirect/.

    Integrating the data

    In the folder ./integrated_data/, you can find all the scripts related to the integrated data. Unfortunately, due to the CC-BY-NC-ND license of GSSO and Homosaurus, the integrated data will not be made available. But you can generate it with the instructions above and by using the following scripts.

    The script ./integrated_data/integrate.py takes advantage of the data generated. It first integrates a list of files of links. Then we go through the links between Wikidata and LCSH. Only those that are in the scope of the study are included.

    If your steps are correct and you are using the same versions as we did, you should be able to get four files:

    a) the integrated file as integrated.nt

    b) the links that are relevant for this study: wikidata-lcsh-links-selected.nt.

    c) a plot of the distribution of the size of WCCs

    d) a mapping of entities and their corresponding ID of WCCs.

    Weakly Connected Components

    The weakly connected components (WCCs) were computed for the following three purposes:

    a) Discovering missing links. See the section below for details.

    b) The WCCs can be used for manual examination. These are entities that form clusters about related concepts. The intuition is that the larger they are, the more likely there is concept drift/change, ambiguity, and mistakes.

    c) Multilingual information reuse. Smaller WCCs with exactly one entity from each dataset (e.g. Homosaurus and Wikidata) can then be used to suggest labels for the one with fewer labels for some given languages. See below for more details.

    As mentioned above, the distribution has been plotted. You can find this plot here: ./integrated_data/frequency.png

    In the folder ./integrated_data/weakly_connected_components/, you can find all the WCCs and their links.
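
    A minimal sketch of how these components can be computed from the integrated N-Triples file is shown below; it uses rdflib and networkx and is illustrative rather than a copy of integrate.py.

```python
import networkx as nx
from rdflib import Graph

# Parse the integrated link file produced by ./integrated_data/integrate.py.
rdf = Graph()
rdf.parse("integrated.nt", format="nt")

# Treat every link triple as a directed edge between its subject and object.
g = nx.DiGraph()
for s, _p, o in rdf:
    g.add_edge(str(s), str(o))

# Weakly connected components, largest first.
wccs = sorted(nx.weakly_connected_components(g), key=len, reverse=True)
print("number of WCCs:", len(wccs))
print("size of the largest WCC:", len(wccs[0]))
```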

    Two examples are given in the folder. The largest WCC is about sex, gender, fucking, etc. The other is about BDSM and fetish.

    Discovering missing and outdated links

    Taking advantage of WCCs, we can further find missing and outdated links. The scripts are in the folder ./discover_missing_links.

    Three examples are given. The first two are about discovering missing links. The last one is about finding outdated links.

    The scripts ./discover_missing_links/discover_H3_LCSH.py and ./discover_missing_links/discover_QLIT_LCSH.py output links that could be missing in Homosaurus and QLIT, respectively. This is computed by looking at the WCCs: if two entities are involved in the same WCC, there could be a link between them. The csv files in the same folder contain the corresponding links found.

    The script ./discover_missing_links/find_qlit_outdated_links/ is used to discover the outdated links between QLIT and Homosaurus v3. There was only one link found.

    The 105 potentially missing links were taken for further review by Swedish-speaking experts from the QLIT team, which showed that 76 (72.38%) of the suggested links should be included: 38 (36.19%) can be included using skos:exactMatch and another 38 (36.19%) using skos:closeMatch. 28 (26.67%) of the suggested links are incorrect. The manual annotations are included in the file ./discover_missing_links/Annotated_found_new_links_qlit-lcsh.xlsx.

    Multilingual Information Reuse

    You can find two attempts in the folders about the use of GSSO and Wikidata for Homosaurus respectively.

    ./WCC-based-gsso-multilingual_info_reuse/

    ./WCC-based-wikidata-multilingual_info_reuse/

    Additionally, we provide also some code for the reuse of Wikidata multilingual info for QLIT. It's in the folder

    ./WCC-based-QLIT-info-reuse-from-Wikidata/

    They follow very similar steps:

    Compute the one-to-one mapping using the WCCs. The script is named compute-one-to-one-mapping.py

    Extract the multilingual labels from sources. The corresponding file is extract_multilingual_labels_from_one_to_one_mappings.py

    Provide the extracted multilingual as suggestions for targeting entities. The name of the corresponding files are like "*suggesting-labels.py", where the * is replaced by the actual source/target.

    For GSSO, we use the following relations:

    http://www.w3.org/2000/01/rdf-schema#label

    http://www.geneontology.org/formats/oboInOwl#hasRelatedSynonym

    http://www.geneontology.org/formats/oboInOwl#hasSynonym

    http://www.geneontology.org/formats/oboInOwl#hasExactSynonym

    http://purl.org/dc/terms/replaces

    https://www.wikidata.org/wiki/Property:P5191

    https://www.wikidata.org/wiki/Property:P1813

    https://schema.org/alternateName

    http://www.w3.org/2002/07/owl#annotatedTarget

    Additionally, we found a relation to be studied in the future: http://www.geneontology.org/formats/oboInOwl#hasNarrowSynonym

    For Wikidata, there are only two:

    http://www.w3.org/2000/01/rdf-schema#label

    http://www.w3.org/2004/02/skos/core#altLabel
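
    A minimal sketch of the label-suggestion step using these two relations is shown below. It assumes a Python dict of one-to-one Wikidata-to-Homosaurus mappings derived from the WCCs and a local N-Triples dump of Wikidata labels; both names are hypothetical.

```python
from rdflib import Graph, URIRef
from rdflib.namespace import RDFS, SKOS

# Hypothetical local dump containing rdfs:label and skos:altLabel triples from Wikidata.
wikidata = Graph()
wikidata.parse("wikidata_labels.nt", format="nt")

def suggest_labels(one_to_one, languages=("tr", "es", "fr", "da")):
    """Suggest labels in the given languages for each mapped Homosaurus entity."""
    suggestions = {}
    for wd_uri, hom_uri in one_to_one.items():
        labels = []
        for prop in (RDFS.label, SKOS.altLabel):
            for label in wikidata.objects(URIRef(wd_uri), prop):
                if getattr(label, "language", None) in languages:
                    labels.append((str(label), label.language))
        if labels:
            suggestions[hom_uri] = labels
    return suggestions
```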

    Additional analysis

    Additionally, we performed an analysis using only redirection and replacement for GSSO and Homosaurus. The scripts are in the folder ./additional_test_gsso_multilingual_info_reuse. We also consider Homosaurus v2. This additional analysis shows the following:

    For Turkish, there are in total 103 label triples covering 23 entities. The average number of suggested labels per entity is 3.0.

    For Spanish, there are in total 205 label triples covering 43 entities. The average number of suggested labels per entity is 2.12.

    For French, there are in total 277 label triples covering 47 entities. The average number of suggested labels per entity is 2.19.

    For Danish, there are in total 115 label triples covering 47 entities. The average number of suggested labels per entity is 2.70.

    Some analysis about the replacement relations of Homosaurus is in the folder ./data/Homosaurus/replace_relations_homosaurus/.

    Finally, some additional analysis is included in the folder ./analysis_integrated_graph. Currently, there is only one that is about outdated entities in Homosaurus v3. Some more analysis will be added in the future.

    Acknowledgement

    The authors appreciate the help of the following researchers:

    Siska Humlesjö, QLIT, Göteborgs Universitet (siska.humlesjo@lir.gu.se)

    Olov Kriström, former member of QLIT

    Jack van der Wel, IHLIA (jack@ihlia.nl)

    Clair Kronk, GSSO (clair.kronk@mountsinai.org)

    If you would like to extend this work, you may want to contact them about legal and ethical issues before releasing your data/code.

    Contact

    Shuai Wang, Vrije Universiteit Amsterdam

  12. found Price Prediction Data

    • coinbase.com
    Updated Nov 5, 2025
    + more versions
    Cite
    (2025). found Price Prediction Data [Dataset]. https://www.coinbase.com/price-prediction/base-found-05a8
    Explore at:
    Dataset updated
    Nov 5, 2025
    Variables measured
    Growth Rate, Predicted Price
    Measurement technique
    User-defined projections based on compound growth. This is not a formal financial forecast.
    Description

    This dataset contains the predicted prices of the asset found over the next 16 years. This data is calculated initially using a default 5 percent annual growth rate, and after page load, it features a sliding scale component where the user can then further adjust the growth rate to their own positive or negative projections. The maximum positive adjustable growth rate is 100 percent, and the minimum adjustable growth rate is -100 percent.
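
    The arithmetic behind these projections is plain compound growth. A minimal sketch is given below; the starting price is a placeholder, not the asset's actual price.

```python
# Compound-growth projection: price after t years at a constant annual growth rate r.
def project_prices(current_price: float, annual_rate: float, years: int = 16) -> list:
    # The page allows rates between -100% and +100%; the default is +5%.
    assert -1.0 <= annual_rate <= 1.0
    return [current_price * (1.0 + annual_rate) ** t for t in range(1, years + 1)]

# Example with a placeholder starting price of 1.0 and the default 5% growth rate.
projection = project_prices(current_price=1.0, annual_rate=0.05)
print(round(projection[-1], 4))  # projected price after 16 years
```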

  13. Model, data, and code for paper "Modeling of streamflow in a...

    • osti.gov
    • knb.ecoinformatics.org
    • +1more
    Updated Sep 13, 2021
    Cite
    Environmental System Science Data Infrastructure for a Virtual Ecosystem (2021). Model, data, and code for paper "Modeling of streamflow in a 30-kilometer-long reach spanning 5 years using OpenFOAM 5.x" [Dataset]. http://doi.org/10.15485/1819956
    Explore at:
    Dataset updated
    Sep 13, 2021
    Dataset provided by
    Environmental System Science Data Infrastructure for a Virtual Ecosystem
    River Corridor and Watershed Biogeochemistry SFA
    U.S. DOE > Office of Science > Biological and Environmental Research (BER)
    Description

    The data package includes the data, model, and code that support the analyses and conclusions in the paper titled "Modeling of streamflow in a 30-kilometer-long reach spanning 5 years using OpenFOAM 5.x". The primary goal of this paper is to demonstrate that key streamflow properties such as water depth, flow velocity, and dynamic pressure in a natural river at 30-kilometer scale over 5 years can be reliably and efficiently modeled using the computational framework presented in the paper. To support the paper, various data types from remote sensing, field observations, and computational models are used. Specific details are described as follows.

    Firstly, the river bathymetry data were obtained from a Light Detection and Ranging (LiDAR) survey. These data were then converted to a triangulated surface format, STL, for mesh generation in OpenFOAM. The STL data can be found in Model_Setups/BaseCase_2013To2015/constant/triSurface. The OpenFOAM mesh generated using this STL file can be found in constant/polyMesh. Other model setups, boundary and initial conditions can be found in /system and /0.org under the folder BaseCase_2013To2015. A similar data structure can also be found in BaseCase_2018To2019 for the simulations during 2018 and 2019.

    Secondly, the OpenFOAM simulations need the upstream discharge and water depth information at the upstream boundary to drive the model. These data are generated from a one-dimensional hydraulic model and can be found under the folder Model_Setups/1D model Mass1 data. The mass1_65.csv and mass1_191.csv files include the results of the 1D model at the model inlet and outlet, respectively. The Matlab source code Mass1ToOFBC20182019.m is used to convert these data into OpenFOAM boundary condition setups.

    With the above OpenFOAM model, data can be generated for water surface elevation, flow velocity, and dynamic pressure. In this paper, the water surface elevation was measured at 7 locations during different periods between 2011 and 2019. The exact survey locations (see Fig1_SurveyLocations.txt) can be found in folder Fig_1. The variation of water stage over time at the 7 locations can be found in folder /Observation_WSE. The data types include .txt, .csv, .xlsx, and .mat. The .mat data can be loaded by Matlab.

    We also measured the flow velocities at 12 cross-sections along the river. At each cross-section, we recorded the x and y locations, depth, and the three velocity components u, v, w. These data are saved in a Matlab format which can be found under folders /Observation_Velocity and /Fig_1. The relative locations of the velocity survey points with respect to the river bathymetry can be found in Figure 1c.

    The water stage data at the 7 locations from the OpenFOAM, 1D, and 2D hydraulic models are also provided to evaluate the long-term performance of 3D models versus 1D/2D models. The water stage data for the 7 locations from OpenFOAM have been saved to .mat format and can be found in /OpenFOAM_WSE. The water stage data from the 1D model are saved in .csv format and can be found in /Mass1_WSE. The water stage from the 2D model is saved in .mat format and can be found in /Mass2_WSE.

    In addition, the OpenFOAM model outputs information on hydrostatic and hydrodynamic pressure. These are saved in .mat format under folder /Fig_11/2013_1. As the files are too large, we only uploaded the data for January 2013. The area of different ratios of dynamic pressure to static pressure for the whole simulation range, i.e., 2013-2015, is saved in .mat format and can be found in /Fig_11/PA.

    Further, the data of wall clock time versus the solution time of the OpenFOAM modeling are also saved in .mat format under folder /Fig_13/LogsMat.

    In summary, the data package contains seven data types, including .txt, .csv, .xlsx, .dat, .stl, .m, and .mat. The first four types can be opened directly using a text editor or Microsoft Office. The .mat format needs to be read by Matlab. The Matlab source code .m files need to be run with Matlab. The OpenFOAM setups can be visualized in ParaView. The .stl file can be opened in ParaView or Blender. The data in subfolders Fig_1 to Fig_10 and Fig_12 are copied from the aforementioned data folders to generate specific figures for the paper. A readME.txt file is included in each subfolder to further describe how the data in each folder are generated and used to support the paper.

    Please use the data package's DOI to cite the data package. Please contact yunxiang.chen@pnnl.gov if you need more data related to the paper.

  14. NOAA Continuously Operating Reference Stations (CORS) Network (NCN)

    • registry.opendata.aws
    Updated Jul 15, 2019
    + more versions
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    NOAA (2019). NOAA Continuously Operating Reference Stations (CORS) Network (NCN) [Dataset]. https://registry.opendata.aws/noaa-ncn/
    Explore at:
    Dataset updated
    Jul 15, 2019
    Dataset provided by
    National Oceanic and Atmospheric Administration (http://www.noaa.gov/)
    Description

    The NOAA Continuously Operating Reference Stations (CORS) Network (NCN), managed by NOAA/National Geodetic Survey (NGS), provides Global Navigation Satellite System (GNSS) data supporting three-dimensional positioning, meteorology, space weather, and geophysical applications throughout the United States. The NCN is a multi-purpose, multi-agency cooperative endeavor, combining the efforts of hundreds of government, academic, and private organizations. The stations are independently owned and operated. Each agency shares their GNSS/GPS carrier phase and code range measurements and station metadata with NGS, which are analyzed and distributed free of charge. NGS provides access to all NCN data collected since 9 February (040) 1994.

    • Access to NCN Data and Products

    • NCN Data and Products

      • RINEX: The GPS/GNSS data collected at NCN stations are made available to the public by NGS in Receiver INdependent EXchange (RINEX) format. Most data are available within 1 hour (60 minutes) from when they were recorded at the remote site, and a few sites have a delay of 24 hours (1440 minutes).
        RINEX data can be found at: rinex/YYYY/DDD/ssss/
      • Station logs:
        • Station log files contain all the historical equipment (receiver/antenna) used at that station, the approximate location, owner and operating agency, etc.
          Station log files can be found at: station_log/ssss.log.txt
        • Historical and current equipment information of all NCN stations, except those that are considered IGS stations.
          These data can be found at: station_log/cumulative.station.info.cors
      • Published Coordinates and Velocities: NAD83 and ITRF coordinates and velocities of each NCN station. All published coordinates and velocities are given for the Antenna Reference Point (ARP).
        Published coordinate and velocity files can be found at: coord/coord_YY/
        In July 2019, NGS published MYCS2!
      • Time-series Plots:
        • Short-term plots show the repeatability of a site for the last 90-days with respect to the current published position, corrected for the effect of the published velocity. These plots are updated daily.
          Short-term plots can be found at: /Plots/ssssYY.short.png
        • Long-term plots show the weekly residual positions with respect to the current published coordinates from our stacked solution. Newer sites may not have a long-term plot if they were added after our Multi-year Solution Processing campaign.
          Long-term plots can be found at: /Plots/Longterm/ssssYY.long.png
      • Daily Broadcast Ephemeris:
        • Daily GPS Broadcast ephemeris can be found at: rinex/YYYY/DDD/brdcDDD0.YYn.gz
        • Daily GLONASS-only Broadcast ephemeris can be found at: rinex/YYYY/DDD/brdcDDD0.YYg.gz
      • Daily final, rapid, and hourly ultra-rapid GNSS Orbit can be found at:
        • Daily final and rapid GNSS Orbit can be found at: rinex/YYYY/DDD/AAAWWWWD.sp3.gz
        • Hourly ultra-rapid GNSS Orbit can be found at: rinex/YYYY/DDD/AAAWWWWD_HH.sp3.gz
      • In which (a short path-construction sketch follows this list):
        • YYYY: 4-digit year
        • YY: The last 2-digit of year
        • DDD: 3-digit day of year [001,002,..366]
        • D: day of week [Sun=0, Mon=1,..,Sat=6]
        • ssss: 4-char station ID
        • h: 1-char hour of day (a=00, b=01, c=02,..,x=23)
        • HH: 2-digit hour of day (00,01,02,..,23)
        • WWWW: 4-digit GPS week number
        • AAA: 3-char analysis center name/type of solution, such as:
          • igs: IGS final solution combination
          • igl: IGS final solution combination (GLONASS-only)
          • igr: IGS rapid solution combination
          • igu: IGS ultra-rapid solution combination
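    As a convenience, the naming conventions above can be assembled mechanically. The helper functions below are an illustration only (not an official NGS tool); the station ID, date, and analysis-center code are arbitrary examples, and all paths are relative to the archive root.

    ```python
    # Illustrative path builders for the NCN archive naming conventions listed above.
    from datetime import date

    def rinex_dir(d: date, station: str) -> str:
        # rinex/YYYY/DDD/ssss/
        return f"rinex/{d.year:04d}/{d.timetuple().tm_yday:03d}/{station.lower()}/"

    def gps_broadcast_ephemeris(d: date) -> str:
        # rinex/YYYY/DDD/brdcDDD0.YYn.gz  (YY = last two digits of the year)
        doy = d.timetuple().tm_yday
        return f"rinex/{d.year:04d}/{doy:03d}/brdc{doy:03d}0.{d.year % 100:02d}n.gz"

    def final_orbit(d: date, center: str = "igs") -> str:
        # rinex/YYYY/DDD/AAAWWWWD.sp3.gz, with WWWW = GPS week and D = day of week (Sun=0)
        gps_epoch = date(1980, 1, 6)               # start of GPS week numbering
        week, dow = divmod((d - gps_epoch).days, 7)
        doy = d.timetuple().tm_yday
        return f"rinex/{d.year:04d}/{doy:03d}/{center}{week:04d}{dow}.sp3.gz"

    d = date(2019, 7, 15)
    print(rinex_dir(d, "SSSS"))            # rinex/2019/196/ssss/  ("SSSS" is a placeholder station ID)
    print(gps_broadcast_ephemeris(d))      # rinex/2019/196/brdc1960.19n.gz
    print(final_orbit(d))                  # rinex/2019/196/igs20621.sp3.gz
    ```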

  15. Data analyzed in this study.

    • datasetcatalog.nlm.nih.gov
    • plos.figshare.com
    Updated Aug 7, 2024
    Cite
    Gerstorf, Denis; Ng, Michelle; Ram, Nilàm; Pincus, Aaron L.; Conroy, David E. (2024). Data analyzed in this study. [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0001423481
    Explore at:
    Dataset updated
    Aug 7, 2024
    Authors
    Gerstorf, Denis; Ng, Michelle; Ram, Nilàm; Pincus, Aaron L.; Conroy, David E.
    Description

    Individuals' sensitivity to climate hazards is a central component of their vulnerability to climate change. In this paper, we introduce and outline the utility of a new intraindividual variability construct, affective sensitivity to air pollution (ASAP), defined as the extent to which an individual's affective states fluctuate in accordance with daily changes in air quality. As such, ASAP pushes beyond examination of differences in individuals' exposures to air pollution to examination of differences in individuals' sensitivities to air pollution. Building on known associations between air pollution exposure and adverse mental health outcomes, we empirically illustrate how application of Bayesian multilevel models to intensive repeated measures data obtained in an experience sampling study (N = 150) over one year can be used to examine whether and how individuals' daily affective states fluctuate with the daily concentrations of outdoor air pollution in their county. Results indicate construct viability, as we found substantial interindividual differences in ASAP for both affect arousal and affect valence. This suggests that repeated measures of individuals' day-to-day affect provide a new way of measuring their sensitivity to climate change. In addition to contributing to discourse around climate vulnerability, the intraindividual variability construct and methodology proposed here can help better integrate affect and mental health in climate adaptation policies, plans, and programs.
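    The ASAP construct described above is, in model terms, a person-specific slope of daily affect on daily air quality in a multilevel model, with interindividual differences captured by the variance of that random slope. The sketch below only illustrates that structure: it uses a frequentist mixed model from statsmodels rather than the authors' Bayesian implementation, and the input file and column names (id, pm25, affect_valence) are made up.

    ```python
    # Illustrative only: person-specific sensitivity of daily affect to daily PM2.5,
    # estimated as a random slope. Not the authors' model or data.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("daily_affect_and_air_quality.csv")   # hypothetical input file

    # Random intercept and random PM2.5 slope per participant; the slope variance
    # reflects interindividual differences in affective sensitivity to air pollution.
    model = smf.mixedlm("affect_valence ~ pm25", data=df,
                        groups=df["id"], re_formula="~pm25")
    result = model.fit()
    print(result.summary())
    ```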

  16. Data from: A new digital method of data collection for spatial point pattern analysis in grassland communities

    • data.niaid.nih.gov
    • search.dataone.org
    • +1 more
    zip
    Updated Jul 6, 2021
    Cite
    Chao Jiang; Xinting Wang (2021). A new digital method of data collection for spatial point pattern analysis in grassland communities [Dataset]. http://doi.org/10.5061/dryad.brv15dv70
    Explore at:
    zipAvailable download formats
    Dataset updated
    Jul 6, 2021
    Dataset provided by
    Inner Mongolia University of Technology
    Chinese Academy of Agricultural Sciences
    Authors
    Chao Jiang; Xinting Wang
    License

    https://spdx.org/licenses/CC0-1.0.htmlhttps://spdx.org/licenses/CC0-1.0.html

    Description

    A major objective of plant ecology research is to determine the underlying processes responsible for the observed spatial distribution patterns of plant species. Plants can be approximated as points in space for this purpose, and thus spatial point pattern analysis has become increasingly popular in ecological research. The basic piece of data for point pattern analysis is the point location of an ecological object in some study region, so point pattern analysis can only be performed if such locations can be collected. However, due to the lack of a convenient sampling method, few previous studies have used point pattern analysis to examine the spatial patterns of grassland species. This is unfortunate because being able to explore point patterns in grassland systems has widespread implications for population dynamics, community-level patterns and ecological processes. In this study, we develop a new method to measure the individual coordinates of species in grassland communities. This method records plant growing positions via digital picture samples that have been sub-blocked within a geographical information system (GIS). Here, we tested the new method by measuring the individual coordinates of Stipa grandis in grazed and ungrazed S. grandis communities in a temperate steppe ecosystem in China. Furthermore, we analyzed the pattern of S. grandis by using the pair correlation function g(r) with both a homogeneous Poisson process and a heterogeneous Poisson process. Our results showed that individuals of S. grandis were overdispersed according to the homogeneous Poisson process at 0-0.16 m in the ungrazed community, while they were clustered at 0.19 m according to the homogeneous and heterogeneous Poisson processes in the grazed community. These results suggest that competitive interactions dominated the ungrazed community, while facilitative interactions dominated the grazed community. In sum, we successfully executed a new sampling method, using digital photography and a geographical information system, to collect experimental data on the spatial point patterns of the populations in this grassland community.

    Methods 1. Data collection using digital photographs and GIS

    A flat 5 m x 5 m sampling block was chosen in a study grassland community and divided with bamboo chopsticks into 100 sub-blocks of 50 cm x 50 cm (Fig. 1). A digital camera was then mounted to a telescoping stake and positioned in the center of each sub-block to photograph vegetation within a 0.25 m2 area. Pictures were taken 1.75 m above the ground at an approximate downward angle of 90° (Fig. 2). Automatic camera settings were used for focus, lighting and shutter speed. After photographing the plot as a whole, photographs were taken of each individual plant in each sub-block. In order to identify each individual plant from the digital images, each plant was uniquely marked before the pictures were taken (Fig. 2 B).

    Digital images were imported into a computer as JPEG files, and the position of each plant in the pictures was determined using GIS. This involved four steps: 1) A reference frame (Fig. 3) was established using R2V software to designate control points, or the four vertexes of each sub-block (Appendix S1), so that all plants in each sub-block were within the same reference frame. The parallax and optical distortion in the raster images was then geometrically corrected based on these selected control points; 2) Maps, or layers in GIS terminology, were set up for each species as PROJECT files (Appendix S2), and all individuals in each sub-block were digitized using R2V software (Appendix S3). For accuracy, the digitization of plant individual locations was performed manually; 3) Each plant species layer was exported from a PROJECT file to a SHAPE file in R2V software (Appendix S4); 4) Finally each species layer was opened in Arc GIS software in the SHAPE file format, and attribute data from each species layer was exported into Arc GIS to obtain the precise coordinates for each species. This last phase involved four steps of its own, from adding the data (Appendix S5), to opening the attribute table (Appendix S6), to adding new x and y coordinate fields (Appendix S7) and to obtaining the x and y coordinates and filling in the new fields (Appendix S8).
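    The coordinate-extraction step described above can also be performed programmatically once the species layers have been exported to SHAPE files. The sketch below is not part of the authors' workflow; the filename is hypothetical, and it assumes each digitized layer stores point geometries.

    ```python
    # Illustrative alternative to the Arc GIS steps: read an exported species layer
    # and write its x/y point coordinates to a table for point pattern analysis.
    import geopandas as gpd

    layer = gpd.read_file("stipa_grandis_subblock_01.shp")   # hypothetical layer name
    layer["x"] = layer.geometry.x
    layer["y"] = layer.geometry.y
    print(layer[["x", "y"]].head())
    layer[["x", "y"]].to_csv("stipa_grandis_coordinates.csv", index=False)
    ```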

    2. Data reliability assessment

    To determine the accuracy of our new method, we measured the individual locations of Leymus chinensis, a perennial rhizome grass, in representative community blocks 5 m x 5 m in size in typical steppe habitat in the Inner Mongolia Autonomous Region of China in July 2010 (Fig. 4 A). As our standard for comparison, we used a ruler to measure the individual coordinates of L. chinensis. We then tested for significant differences between the two measurement approaches in (1) the coordinates of L. chinensis and (2) the pair correlation function g of L. chinensis (see section 3.2 Data Analysis). If neither differed significantly, we could conclude that our new method of measuring the coordinates of L. chinensis was reliable.

    We compared the results using a t-test (Table 1). We found no significant differences in either (1) the coordinates of L. chinensis or (2) the pair correlation function g of L. chinensis. Further, we compared the pattern characteristics of L. chinensis as measured by our new method against the ruler measurements using a null model. We found that the two pattern characteristics of L. chinensis did not differ significantly based on the homogeneous Poisson process, or complete spatial randomness (Fig. 4 B). Thus, we concluded that the data obtained using our new method was reliable enough to perform point pattern analysis with a null model in grassland communities.
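    For reference, the pair correlation function g(r) used above compares the density of point pairs separated by distance r with the density expected under complete spatial randomness, for which g(r) = 1. The sketch below is a rough, edge-correction-free estimate with simulated coordinates standing in for the digitized ones; it is not the authors' analysis code.

    ```python
    # Rough box-kernel estimate of g(r) for a mapped point pattern (no edge correction).
    import numpy as np

    def pair_correlation(xy, area, r, h=0.05):
        n = len(xy)
        lam = n / area                                   # intensity (points per unit area)
        d = np.sqrt(((xy[:, None, :] - xy[None, :, :]) ** 2).sum(-1))
        d = d[np.triu_indices(n, k=1)]                   # unique pairwise distances
        g = np.empty_like(r)
        for i, ri in enumerate(r):
            pairs = np.count_nonzero(np.abs(d - ri) < h / 2)
            # each unordered pair contributes twice to the sum over ordered pairs
            g[i] = 2 * pairs / (n * lam * 2 * np.pi * ri * h)
        return g

    rng = np.random.default_rng(0)
    side, n_pts = 5.0, 400                               # 5 m x 5 m sampling block
    pts = rng.uniform(0, side, size=(n_pts, 2))          # CSR pattern for illustration
    r = np.linspace(0.05, 1.0, 20)
    print(np.round(pair_correlation(pts, side * side, r), 2))
    # Values hover around 1 under CSR; a dip at larger r reflects the missing edge correction.
    ```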

  17. Data from: The Sign Found in The Movie "Mr Harrigan's Phone"

    • data.mendeley.com
    Updated Aug 22, 2023
    Cite
    PRAGMATICA; Journal of Linguistics and Literature (2023). The Sign Found in The Movie “Mr Harrigan’s Phone” [Dataset]. http://doi.org/10.17632/34gysk34xv.1
    Explore at:
    Dataset updated
    Aug 22, 2023
    Authors
    PRAGMATICA; Journal of Linguistics and Literature
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The title of this research is "The Sign Found in The Movie Mr Harrigan's Phone". This research differs from previous studies: earlier work analyzed the kinds and functions of signs, whereas this study analyzes their types and meanings using Peirce and Barthes. Previous research also used action, adventure, and fantasy movie genres, while this research uses the horror and drama genres. The purpose of this research is to observe and inform readers about the types of signs and the meanings contained in the movie "Mr. Harrigan's Phone", so that it can serve as a reference for readers who want to learn about signs. This research uses Creswell's qualitative method to analyze the data and addresses two problems. The first is type, based on Peirce's theory, which distinguishes three types of signs, namely icon, index, and symbol; identification in the Mr Harrigan's Phone movie yielded 5 icons, 5 indexes, and 5 symbols. The second is meaning, identified using Barthes' theory, which comprises three systems, namely denotation, connotation, and myth.

  18. Data from: Upper Freeport Coal Bed County Statistics (Chemistry) in Pennsylvania, Ohio, West Virginia, and Maryland

    • data.wu.ac.at
    • data.usgs.gov
    • +2 more
    zip
    Updated Jun 8, 2018
    + more versions
    Cite
    Department of the Interior (2018). Upper Freeport Coal Bed County Statistics (Chemistry) in Pennsylvania, Ohio, West Virginia, and Maryland [Dataset]. https://data.wu.ac.at/schema/data_gov/NmM4M2NlMTItZTY4Mi00NzFiLTgzMWYtYjlkMzYzNTcyNzY3
    Explore at:
    zipAvailable download formats
    Dataset updated
    Jun 8, 2018
    Dataset provided by
    Department of the Interior
    Area covered
    Description

    This dataset is a polygon coverage of counties limited to the extent of the Upper Freeport coal bed resource areas and attributed with statistics on these coal quality parameters: ash yield (percent), sulfur (percent), SO2 (lbs per million Btu), calorific value (Btu/lb), arsenic (ppm) content and mercury (ppm) content. The file has been generalized from detailed geologic coverages found elsewhere in Professional Paper 1625-C. The attributes were generated from public data found in the geochemical dataset found in Chap. D, Appendix 8, Disc 1, as well as some additional proprietary data. Please see the metadata file found in Chap. D, Appendix 9, Disc 1, for more detailed information on the geochemical attributes. The county statistical data used for this data set are found in Tables 2-5 and 17-18, Chap. D, Disc 1. Additional county geochemical statistics for other parameters are found in Tables 6-16, Chap. D, Disc 1.
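    A typical way to work with a county polygon coverage like this is to read it into a GIS library and summarize the attribute table. The sketch below is illustrative only: the file path and attribute field names are placeholders, and the real field definitions are documented in the metadata file referenced above.

    ```python
    # Illustrative only: load the county coverage and summarize coal quality attributes.
    import geopandas as gpd
    import matplotlib.pyplot as plt

    counties = gpd.read_file("upper_freeport_county_stats.shp")   # hypothetical path
    print(counties.columns)                                       # inspect the actual field names

    # Example summary, assuming (made-up) field names for ash yield and sulfur content.
    print(counties[["ASH_PCT", "SULFUR_PCT"]].describe())

    # Quick choropleth of SO2 (lbs per million Btu) by county, again with a made-up field name.
    counties.plot(column="SO2_LBS", legend=True)
    plt.show()
    ```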

  19. OBSOLETE Land Use

    • open-data-massgis.hub.arcgis.com
    • gis.data.mass.gov
    Updated Jul 1, 2014
    Cite
    City of Cambridge (2014). OBSOLETE Land Use [Dataset]. https://open-data-massgis.hub.arcgis.com/items/48941ca460364a759e16ba432c153225
    Explore at:
    Dataset updated
    Jul 1, 2014
    Dataset authored and provided by
    City of Cambridge
    License

    ODC Public Domain Dedication and Licence (PDDL) v1.0http://www.opendatacommons.org/licenses/pddl/1.0/
    License information was derived automatically

    Area covered
    Description

    This dataset is OBSOLETE as of 12/3/2024 and will be removed from ArcGIS Online on 12/3/2025. An updated version of this dataset is available at Land Use FY2024.

    This data set derives from several sources and is updated annually with data current through July 1 of the reported year. The primary source is a data dump from the VISION assessing data system, which provided data up to date as of January 1, 2012, supplemented by information from subsequent building permits and Development Logs. (Use codes provided by this system combine aspects of land use, tax status, and condominium status. To clarify land use type, the data has been cleaned and subdivided to break the original use code into several different fields.) The data set has further been supplemented and updated with development information from building permits issued by the Inspectional Services Department and from data found in the Development Log publication. Information from these sources is added to the data set periodically. Land use status is up to date as of the Last Modified date.

    Differences From "Official" Parcel Layer: The Cambridge GIS system maintains a separate layer of land parcels reflecting up-to-date subdivision and ownership. The parcel data associated with the Land Use data set differs from the "official" parcel layer in a number of cases. For that reason, this separate parcel layer is provided to work with land use data in a GIS environment. See the Assessing Department's Parcel layer for the most up-to-date land parcel boundaries.

    Table of Land Use Codes. The following table lists all land use codes found in the data layer, as code: description (category):
    - 0101: MXD SNGL-FAM-RE (Mixed Use Residential)
    - 0104: MXD TWO-FAM-RES (Mixed Use Residential)
    - 0105: MXD THREE-FM-RE (Mixed Use Residential)
    - 0111: MXD 4-8-UNIT-AP (Mixed Use Residential)
    - 0112: MXD >8-UNIT-APT (Mixed Use Residential)
    - 0121: MXD BOARDING-HS (Mixed Use Residential)
    - 013: MULTIUSE-RES (Mixed Use Residential)
    - 031: MULTIUSE-COM (Mixed Use Commercial)
    - 0340: MXD GEN-OFFICE (Mixed Use Commercial)
    - 041: MULTIUSE-IND (Mixed Use Industrial)
    - 0942: Higher Ed and Comm Mixed (Mixed Use Education)
    - 101: SNGL-FAM-RES (Residential)
    - 1014: SINGLE FAM W/AU (Residential)
    - 104: TWO-FAM-RES (Residential)
    - 105: THREE-FM-RES (Residential)
    - 106: RES-LAND-IMP (Transportation)
    - 1067: RES-COV-PKG (Transportation)
    - 111: 4-8-UNIT-APT (Residential)
    - 112: >8-UNIT-APT (Residential)
    - 113: ASSISTED-LIV (Assisted Living/Boarding House)
    - 121: BOARDING-HSE (Assisted Living/Boarding House)
    - 130: RES-DEV-LAND (Vacant Residential)
    - 131: RES-PDV-LAND (Vacant Residential)
    - 132: RES-UDV-LAND (Vacant Residential)
    - 1322: RES-UDV-PARK (OS) LN (Vacant Residential)
    - 140: CHILD-CARE (Commercial)
    - 300: HOTEL (Commercial)
    - 302: INN-RESORT (Commercial)
    - 304: NURSING-HOME (Health)
    - 316: WAREHOUSE (Commercial)
    - 323: SH-CNTR/MALL (Commercial)
    - 324: SUPERMARKET (Commercial)
    - 325: RETAIL-STORE (Commercial)
    - 326: EATING-ESTBL (Commercial)
    - 327: RETAIL-CONDO (Commercial)
    - 330: AUTO-SALES (Commercial)
    - 331: AUTO-SUPPLY (Commercial)
    - 332: AUTO-REPAIR (Commercial)
    - 334: GAS-STATION (Commercial)
    - 335: CAR-WASH (Commercial)
    - 336: PARKING-GAR (Transportation)
    - 337: PARKING-LOT (Transportation)
    - 340: GEN-OFFICE (Office)
    - 341: BANK (Commercial)
    - 342: MEDICAL-OFFC (Health)
    - 343: OFFICE-CONDO (Office)
    - 345: RETAIL-OFFIC (Office)
    - 346: INV-OFFICE (Office)
    - 353: FRAT-ORGANIZ (Commercial)
    - 362: THEATRE (Commercial)
    - 370: BOWLING-ALLY (Commercial)
    - 375: TENNIS-CLUB (Commercial)
    - 390: COM-DEV-LAND (Vacant Commercial)
    - 391: COM-PDV-LAND (Vacant Commercial)
    - 392: COM-UDV-LAND (Vacant Commercial)
    - 3922: CRMCL REC LND (Vacant Commercial)
    - 400: MANUFACTURNG (Industrial)
    - 401: WAREHOUSE (Industrial)
    - 404: RES-&-DEV-FC (Office/R&D)
    - 406: HIGH-TECH (Office/R&D)
    - 407: CLEAN-MANUF (Industrial)
    - 409: INDUST-CONDO (Industrial)
    - 413: RESRCH IND CND (Industrial)
    - 422: ELEC GEN PLANT (Utility)
    - 424: PUB UTIL REG (Utility)
    - 428: GAS-CONTROL (Utility)
    - 430: TELE-EXCH-STA (Utility)
    - 440: IND-DEV-LAND (Vacant Industrial)
    - 442: IND-UDV-LAND (Vacant Industrial)
    - 920: Parklands (Public Open Space)
    - 930: Government Operations (Government Operations)
    - 934: Public Schools (Education)
    - 940: Private Pre & Elem School (Education)
    - 941: Private Secondary School (Education)
    - 942: Private College (Higher Education)
    - 9421: Private College Res Units (Education Residential)
    - 943: Other Educ & Research Org (Higher Education)
    - 953: Cemeteries (Cemetery)
    - 955: Hospitals & Medical Offic (Health)
    - 956: Museums (Higher Education)
    - 957: Charitable Services (Charitable/Religious)
    - 960: Religious (Charitable/Religious)
    - 971: Water Utility (Utility)
    - 972: Road Right of Way (Transportation)
    - 975: MBTA/Railroad (Transportation)
    - 9751: MBTA/Railroad (Transportation)
    - 995: Private Open Space (Privately-Owned Open Space)

    Explore all our data on the Cambridge GIS Data Dictionary.

    Attributes (name, type details, description):

    ML (type: String, width: 16, precision: 0): Map-Lot. This is a unique parcel identifier found in the deed and used by the Assessing data system. In a few cases, where parcels have been subdivided subsequent to January 1, 2012, a placeholder Map-Lot number is assigned that differs from that used elsewhere.

    MAP (type: String, width: 5, precision: 0): This is the Map portion of the unique parcel identifier found in the deed and used by the Assessing data system. In a few cases, where parcels have been subdivided subsequent to January 1, 2012, a placeholder Map-Lot number is assigned that differs from that used elsewhere.

    LOT (type: String, width: 5, precision: 0): This is the Lot portion of the unique parcel identifier found in the deed and used by the Assessing data system. In a few cases, where parcels have been subdivided subsequent to January 1, 2012, a placeholder Map-Lot number is assigned that differs from that used elsewhere.

    Location (type: String, width: 254, precision: 0): In the great majority of cases this is the street address of the parcel as it is recorded in the Registry of Deeds record. In instances where edits were made to the base parcel layer, the best address available at the time is employed.

    LandArea (type: Double, width: 8, precision: 15)

    LUCode (type: String, width: 254, precision: 0): The four-digit text string in this field indicates the primary usage of a parcel. While the codes are based on the standard Massachusetts assessing land use classification system, they differ in a number of cases; the coding system used here is unique to this data set. Note that other minor uses may occur on a property and, in some cases, tenants may introduce additional uses not reflected here (e.g., office space used as a medical office, home-based businesses).

    LUDesc (type: String, width: 254, precision: 0): The short description gives more detail about the specific use indicated by the Land Use Code. Most descriptions are taken from the standard Massachusetts assessing land use classification system.

    Category (type: String, width: 254, precision: 0): This broader grouping of land uses can be used to map land use data. You can find the land use data mapped at: https://www.cambridgema.gov/CDD/factsandmaps/mapgalleries/othermaps

    ExistUnits (type: Double, width: 8, precision: 15): This value indicates the number of existing residential units as of July 1 of the reported year. A residential unit may be a house, an apartment, a mobile home, a group of rooms, or a single room that is occupied (or, if vacant, intended for occupancy) as separate living quarters. Units found in apartment-style graduate student housing residences, as well as rooms in assisted living facilities and boarding houses, are also treated as housing units. The unit count does not include college or graduate student dormitories, nursing home rooms, group homes, or other group quarters living arrangements.

    MixedUseTy (type: String, width: 254, precision: 0): Two flags are used for this field. "Groundfloor" indicates that a commercial use is found on the ground floor of the primary building and upper floors are used for residential purposes. "Mixed" indicates that two or more uses are found throughout the structure or multiple structures on the parcel, one of which is residential.

    GQLodgingH (type: String, width: 254, precision: 0): A value of "Yes" indicates that the primary use of the property is as a group quarters living arrangement. Group quarters are a place where people live or stay, in a group living arrangement, that is owned or managed by an entity or organization providing housing and/or services for the residents. Group quarters include such places as college residence halls, residential treatment centers, skilled nursing facilities, group homes, military barracks, correctional facilities, and workers' dormitories.

    Most university dormitories are included under the broader higher education land use code, as most dormitories are included in the larger parcels comprising the bulk of higher education campuses.

    GradStuden (type: String, width: 254, precision: 0): A value of "Yes" indicates the parcel is used to house graduate students in apartment-style units. Graduate student dormitories are treated as a higher education land use.

    CondoFlag (type: String, width: 254, precision: 0): "Yes" indicates that the parcel is owned as a condominium. Condo properties can include one or more uses, including residential, commercial, and parking. The great majority of such properties in Cambridge are residential only.

    TaxStatus (type: String, width: 254, precision: 0): A value indicates that the parcel is not subject to local property taxes. The following general rules are employed to assign properties to subcategories, though special situations exist in a number of cases.

    - Authority: Properties owned by the Cambridge Redevelopment Authority and Cambridge Housing Authority.
    - City: Properties owned by the City of Cambridge or cemetery land owned by the Town of Belmont.
    - Educ: Includes properties used for education purposes, ranging from pre-schools to university research facilities. (More detail about the level of education can be found using the Land Use Code.)
    - Federal: Properties owned by the federal government, including the Post Office. Certain properties with assessing data indicating Cambridge Redevelopment Authority ownership are in fact owned by the federal government as part of the Volpe Transportation Research Center and are so treated here.
    - Other: Nontaxable properties owned by a nonprofit organization and not
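    A quick way to use the attribute fields documented above is to aggregate parcels by the broad Category field. The sketch below is illustrative: the export filename is hypothetical, while the column names (ML, LUCode, Category, LandArea, ExistUnits, MixedUseTy) follow the attribute definitions listed above.

    ```python
    # Illustrative summary of a tabular export of the land use layer.
    import pandas as pd

    parcels = pd.read_csv("cambridge_land_use.csv",               # hypothetical export
                          dtype={"ML": str, "LUCode": str})

    # Total land area and existing residential units by broad land use category.
    summary = (parcels.groupby("Category")[["LandArea", "ExistUnits"]]
                      .sum()
                      .sort_values("LandArea", ascending=False))
    print(summary.head(10))

    # Parcels flagged as mixed use with ground-floor commercial space.
    ground_floor = parcels[parcels["MixedUseTy"] == "Groundfloor"]
    print(len(ground_floor), "parcels with ground-floor commercial over residential")
    ```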

  20. The COVID Tracking Project

    • covidtracking.com
    google sheets
    Cite
    The COVID Tracking Project [Dataset]. https://covidtracking.com/
    Explore at:
    google sheetsAvailable download formats
    Description

    The COVID Tracking Project collects information from 50 US states, the District of Columbia, and 5 other US territories to provide the most comprehensive testing data we can collect for the novel coronavirus, SARS-CoV-2. We attempt to include positive and negative results, pending tests, and total people tested for each state or district currently reporting that data.

    Testing is a crucial part of any public health response, and sharing test data is essential to understanding this outbreak. The CDC is currently not publishing complete testing data, so we’re doing our best to collect it from each state and provide it to the public. The information is patchy and inconsistent, so we’re being transparent about what we find and how we handle it—the spreadsheet includes our live comments about changing data and how we’re working with incomplete information.

    From here, you can also learn about our methodology, see who makes this, and find out what information states provide and how we handle it.
