As of June 2024, the most popular database management system (DBMS) worldwide was Oracle, with a ranking score of 1244.08; MySQL and Microsoft SQL Server rounded out the top three. Although the database management industry includes some of the largest companies in tech, such as Microsoft, Oracle, and IBM, a number of free and open-source DBMSs such as PostgreSQL and MariaDB remain competitive.

Database management systems

As the name implies, DBMSs provide a platform through which developers can organize, update, and control large databases. Given the business world's growing focus on big data and data analytics, knowledge of SQL has become an important asset for software developers around the world, and database management skills are seen as highly desirable. In addition to providing developers with the tools needed to operate databases, DBMSs are also integral to the way consumers access information through applications, which further illustrates the importance of the software.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This folder contains the Spider-Realistic dataset used for evaluation in the paper "Structure-Grounded Pretraining for Text-to-SQL". The dataset is created based on the dev split of the Spider dataset (2020-06-07 version from https://yale-lily.github.io/spider). We manually modified the original questions to remove the explicit mention of column names while keeping the SQL queries unchanged to better evaluate the model's capability in aligning the NL utterance and the DB schema. For more details, please check our paper at https://arxiv.org/abs/2010.12773.
It contains the following files:
- spider-realistic.json
# The spider-realistic evaluation set
# Examples: 508
# Databases: 19
- dev.json
# The original dev split of Spider
# Examples: 1034
# Databases: 20
- tables.json
# The original DB schemas from Spider
# Databases: 166
- README.txt
- license
The Spider-Realistic dataset is created based on the dev split of the Spider dataset released by Yu, Tao, et al. "Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task." It is a subset of the original dataset with explicit mentions of the column names removed. The SQL queries and databases are kept unchanged.
For the format of each json file, please refer to the github page of Spider https://github.com/taoyds/spider.
For the database files please refer to the official Spider release https://yale-lily.github.io/spider.
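As a quick sanity check of the files listed above, the snippet below loads the evaluation set with Python's standard library; it assumes the standard Spider example format (fields such as "question", "query", and "db_id"), which should be confirmed against the Spider GitHub page:

import json

# Load the Spider-Realistic evaluation set (assumes the standard Spider
# example format with "question", "query" and "db_id" fields).
with open("spider-realistic.json", encoding="utf-8") as f:
    examples = json.load(f)

print(len(examples))  # expected: 508 examples
for ex in examples[:3]:
    print(ex.get("db_id"), "|", ex.get("question"))
    print("  ->", ex.get("query"))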
This dataset is distributed under the CC BY-SA 4.0 license.
If you use the dataset, please cite the following papers, including the original Spider dataset (Yu et al., 2018), Finegan-Dollak et al. (2018), and the original datasets for Restaurants, GeoQuery, Scholar, Academic, IMDB, and Yelp.
@article{deng2020structure,
title={Structure-Grounded Pretraining for Text-to-SQL},
author={Deng, Xiang and Awadallah, Ahmed Hassan and Meek, Christopher and Polozov, Oleksandr and Sun, Huan and Richardson, Matthew},
journal={arXiv preprint arXiv:2010.12773},
year={2020}
}
@inproceedings{Yu&al.18c,
year = 2018,
title = {Spider: A Large-Scale Human-Labeled Dataset for Complex and Cross-Domain Semantic Parsing and Text-to-SQL Task},
booktitle = {EMNLP},
author = {Tao Yu and Rui Zhang and Kai Yang and Michihiro Yasunaga and Dongxu Wang and Zifan Li and James Ma and Irene Li and Qingning Yao and Shanelle Roman and Zilin Zhang and Dragomir Radev }
}
@InProceedings{P18-1033,
author = "Finegan-Dollak, Catherine
and Kummerfeld, Jonathan K.
and Zhang, Li
and Ramanathan, Karthik
and Sadasivam, Sesh
and Zhang, Rui
and Radev, Dragomir",
title = "Improving Text-to-SQL Evaluation Methodology",
booktitle = "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
year = "2018",
publisher = "Association for Computational Linguistics",
pages = "351--360",
location = "Melbourne, Australia",
url = "http://aclweb.org/anthology/P18-1033"
}
@InProceedings{data-sql-imdb-yelp,
dataset = {IMDB and Yelp},
author = {Navid Yaghmazadeh and Yuepeng Wang and Isil Dillig and Thomas Dillig},
title = {SQLizer: Query Synthesis from Natural Language},
booktitle = {International Conference on Object-Oriented Programming, Systems, Languages, and Applications, ACM},
month = {October},
year = {2017},
pages = {63:1--63:26},
url = {http://doi.org/10.1145/3133887},
}
@article{data-academic,
dataset = {Academic},
author = {Fei Li and H. V. Jagadish},
title = {Constructing an Interactive Natural Language Interface for Relational Databases},
journal = {Proceedings of the VLDB Endowment},
volume = {8},
number = {1},
month = {September},
year = {2014},
pages = {73--84},
url = {http://dx.doi.org/10.14778/2735461.2735468},
}
@InProceedings{data-atis-geography-scholar,
dataset = {Scholar, and Updated ATIS and Geography},
author = {Srinivasan Iyer and Ioannis Konstas and Alvin Cheung and Jayant Krishnamurthy and Luke Zettlemoyer},
title = {Learning a Neural Semantic Parser from User Feedback},
booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
year = {2017},
pages = {963--973},
location = {Vancouver, Canada},
url = {http://www.aclweb.org/anthology/P17-1089},
}
@inproceedings{data-geography-original,
dataset = {Geography, original},
author = {John M. Zelle and Raymond J. Mooney},
title = {Learning to Parse Database Queries Using Inductive Logic Programming},
booktitle = {Proceedings of the Thirteenth National Conference on Artificial Intelligence - Volume 2},
year = {1996},
pages = {1050--1055},
location = {Portland, Oregon},
url = {http://dl.acm.org/citation.cfm?id=1864519.1864543},
}
@inproceedings{data-restaurants-logic,
author = {Lappoon R. Tang and Raymond J. Mooney},
title = {Automated Construction of Database Interfaces: Integrating Statistical and Relational Learning for Semantic Parsing},
booktitle = {2000 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora},
year = {2000},
pages = {133--141},
location = {Hong Kong, China},
url = {http://www.aclweb.org/anthology/W00-1317},
}
@inproceedings{data-restaurants-original,
author = {Ana-Maria Popescu and Oren Etzioni and Henry Kautz},
title = {Towards a Theory of Natural Language Interfaces to Databases},
booktitle = {Proceedings of the 8th International Conference on Intelligent User Interfaces},
year = {2003},
location = {Miami, Florida, USA},
pages = {149--157},
url = {http://doi.acm.org/10.1145/604045.604070},
}
@inproceedings{data-restaurants,
author = {Alessandra Giordani and Alessandro Moschitti},
title = {Automatic Generation and Reranking of SQL-derived Answers to NL Questions},
booktitle = {Proceedings of the Second International Conference on Trustworthy Eternal Systems via Evolving Software, Data and Knowledge},
year = {2012},
location = {Montpellier, France},
pages = {59--76},
url = {https://doi.org/10.1007/978-3-642-45260-4_5},
}
analyze the current population survey (cps) annual social and economic supplement (asec) with r

the annual march cps-asec has been supplying the statistics for the census bureau's report on income, poverty, and health insurance coverage since 1948. wow. the us census bureau and the bureau of labor statistics (bls) tag-team on this one. until the american community survey (acs) hit the scene in the early aughts (2000s), the current population survey had the largest sample size of all the annual general demographic data sets outside of the decennial census - about two hundred thousand respondents. this provides enough sample to conduct state- and a few large metro area-level analyses. your sample size will vanish if you start investigating subgroups by state - consider pooling multiple years. county-level is a no-no.

despite the american community survey's larger size, the cps-asec contains many more variables related to employment, sources of income, and insurance - and can be trended back to harry truman's presidency. aside from questions specifically asked about an annual experience (like income), many of the questions in this march data set should be treated as point-in-time statistics. cps-asec generalizes to the united states non-institutional, non-active-duty military population.

the national bureau of economic research (nber) provides sas, spss, and stata importation scripts to create a rectangular file (rectangular data means only person-level records; household- and family-level information gets attached to each person). to import these files into r, the parse.SAScii function uses nber's sas code to determine how to import the fixed-width file, then RSQLite to put everything into a schnazzy database. you can try reading through the nber march 2012 sas importation code yourself, but it's a bit of a proc freak show.

this new github repository contains three scripts:

2005-2012 asec - download all microdata.R
download the fixed-width file containing household, family, and person records
import by separating this file into three tables, then merge 'em together at the person-level
download the fixed-width file containing the person-level replicate weights
merge the rectangular person-level file with the replicate weights, then store it in a sql database
create a new variable - one - in the data table

2012 asec - analysis examples.R
connect to the sql database created by the 'download all microdata' program
create the complex sample survey object, using the replicate weights
perform a boatload of analysis examples

replicate census estimates - 2011.R
connect to the sql database created by the 'download all microdata' program
create the complex sample survey object, using the replicate weights
match the sas output shown in the png file below

2011 asec replicate weight sas output.png
statistic and standard error generated from the replicate-weighted example sas script contained in this census-provided person replicate weights usage instructions document.

click here to view these three scripts

for more detail about the current population survey - annual social and economic supplement (cps-asec), visit:
the census bureau's current population survey page
the bureau of labor statistics' current population survey page
the current population survey's wikipedia article

notes: interviews are conducted in march about experiences during the previous year. the file labeled 2012 includes information (income, work experience, health insurance) pertaining to 2011. when you use the current population survey to talk about america, subtract a year from the data file name. as of the 2010 file (the interview focusing on america during 2009), the cps-asec contains exciting new medical out-of-pocket spending variables most useful for supplemental (medical spending-adjusted) poverty research.

confidential to sas, spss, stata, sudaan users: why are you still rubbing two sticks together after we've invented the butane lighter? time to transition to r. :D
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
In our work, we have designed and implemented a novel workflow with several heuristic methods to combine state-of-the-art methods related to CVE fix commit gathering. As a consequence of our improvements, we have been able to gather the largest programming language-independent real-world dataset of CVE vulnerabilities with the associated fix commits. Our dataset, containing 26,617 unique CVEs coming from 6,945 unique GitHub projects, is, to the best of our knowledge, by far the biggest CVE vulnerability dataset with fix commits available today. These CVEs are associated with 31,883 unique commits that fixed those vulnerabilities. Compared to prior work, our dataset brings about a 397% increase in CVEs, a 295% increase in covered open-source projects, and a 480% increase in commit fixes. Our larger dataset thus substantially improves over the current real-world vulnerability datasets and enables further progress in research on vulnerability detection and software security. We used the NVD (nvd.nist.gov) and the GitHub Security Advisory Database as the main sources of our pipeline.
We release to the community a 14GB PostgreSQL database that contains information on CVEs up to January 24, 2024, CWEs of each CVE, files and methods changed by each commit, and repository metadata. Additionally, patch files related to the fix commits are available as a separate package. Furthermore, we make our dataset collection tool also available to the community.
cvedataset-patches.zip contains the fix patches, and dump_morefixes_27-03-2024_19_52_58.sql.zip contains a PostgreSQL dump of the fixes, together with several other fields such as CVEs, CWEs, repository metadata, commit data, file changes, methods changed, etc.
The MoreFixes data-storage strategy is based on CVEFixes for storing CVE fix commits from open-source repositories, and uses a modified version of Prospector (part of ProjectKB from SAP) as a module to detect the fix commits of a CVE. Our full methodology is presented in the paper titled "MoreFixes: A Large-Scale Dataset of CVE Fix Commits Mined through Enhanced Repository Discovery", which will be published at the PROMISE conference (2024).
For more information about usage and sample queries, visit the GitHub repository: https://github.com/JafarAkhondali/Morefixes
If you are using this dataset, please be aware that the repositories we mined carry different licenses and you are responsible for handling any licensing issues. The same applies to CVEFixes.
This product uses the NVD API but is not endorsed or certified by the NVD.
This research was partially supported by the Dutch Research Council (NWO) under the project NWA.1215.18.008 Cyber Security by Integrated Design (C-SIDe).
To restore the dataset, you can use the docker-compose file available in the GitHub repository. Dataset default credentials after restoring the dump:
POSTGRES_USER=postgrescvedumper POSTGRES_DB=postgrescvedumper POSTGRES_PASSWORD=a42a18537d74c3b7e584c769152c3d
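As an illustration, a minimal Python sketch for connecting to the restored dump with the default credentials above; the host, port, and information_schema query are generic PostgreSQL assumptions, and the actual table names should be discovered from the dump or the GitHub repository:

import psycopg2  # pip install psycopg2-binary

# Connect using the default credentials listed above; host and port assume a
# local docker-compose deployment and may need to be adjusted.
conn = psycopg2.connect(
    host="localhost",
    port=5432,
    dbname="postgrescvedumper",
    user="postgrescvedumper",
    password="a42a18537d74c3b7e584c769152c3d",
)
with conn, conn.cursor() as cur:
    # List the public tables in the dump to discover the schema before querying.
    cur.execute(
        "SELECT table_name FROM information_schema.tables "
        "WHERE table_schema = 'public' ORDER BY table_name"
    )
    for (table_name,) in cur.fetchall():
        print(table_name)
conn.close()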
The World Bank has launched a fast-deploying, high-frequency phone-based survey of households to generate near real-time insights into the socio-economic impact of COVID-19 on households, which can then be used to support evidence-based policy responses to the crisis. At a time when conventional modes of data collection are not feasible, this phone-based rapid data collection method offers a way to gather granular information on the transmission mechanisms of the crisis to the population, to identify gaps in policy responses, and to generate insights to inform scaling up or redirection of resources as the crisis unfolds.
National
Individual, Household-level
A mobile frame was generated via random digit dialing (RDD), based on the National Numbering Plans from the Malaysian Communications and Multimedia Commission (MCMC). All possible subscriber combinations were generated in DRUID (D Force Sampling's Reactive User Interface Database), an SQL database interface which houses the complete sampling frame. From this database, complete random telephone numbers were sampled. For Round 1, a sample of 33,894 phone numbers was drawn (without replacement within the survey wave) from a total of 102,780,000 possible mobile numbers from more than 18 mobile providers in the sampling frame, which were not stratified. Once the sample was drawn in the form of replicates (subsamples) of n = 10,000, the numbers were filtered by D-Force Sampling using an auto-dialer to determine each number's working status. All numbers that yielded a working call disposition for at least one of the two filtering attempts were then passed to the CATI center's human interviewing team. Mobile devices were assumed to be personal, and therefore the person who answered the call was the selected respondent. Screening questions were used to ensure that the respondent was at least 18 years old and was either contributing to, making, or knowledgeable about household finances. Respondents who had participated in Round 1 were sampled for Round 2. Fresh respondents were introduced in Round 3 in addition to panel respondents from Round 2; fresh respondents in Round 3 were selected using the same procedure used for sampling respondents in Round 1.
Computer Assisted Telephone Interview [cati]
The questionnaire is available in three languages, including English, Bahasa Melayu, and Mandarin Chinese. It can be downloaded from the Downloads section.
In Round 1, the survey successfully interviewed 2,210 individuals out of 33,894 sampled phone numbers. In Round 2, the survey successfully re-interviewed 1,047 individuals, recording a 47% response rate. In Round 3, the survey successfully re-interviewed 667 respondents who had been previously interviewed in Round 2, recording a 64% response rate. The panel respondents in Round 3 were supplemented with 446 fresh respondents.
In Round 1, assuming a simple random sample with p=0.5 and n=2,210, the 95% confidence level yields a margin of sampling error (MOE) of 2.09 percentage points. Incorporating the design effect into this estimate yields a margin of sampling error of 2.65 percentage points.
In Round 2, the complete weight for the entire sample was adjusted to the 2021 population estimates from DOSM's annual intercensal population projections. Assuming a simple random sample with p=0.5 and n=1,047, the 95% confidence level yields a margin of sampling error (MOE) of 3.803 percentage points. Incorporating the design effect into this estimate yields a margin of sampling error of 3.54 percentage points.
Among both fresh and panel samples in Round 3, assuming a simple random sample with p=0.5 and n=1,113, the 95% confidence level yields a margin of sampling error (MOE) of 2.94 percentage points. Incorporating the design effect into this estimate yields a margin of sampling error of 3.34 percentage points.
Among panel samples in Round 3, with p=0.5 and n=667, the 95% confidence level yields a margin of sampling error (MOE) of 3.80 percentage points. Incorporating the design effect into this estimate yields a margin of sampling error of 4.16 percentage points.
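For reference, the simple-random-sample figures quoted above follow the textbook formula MOE = z * sqrt(p(1-p)/n) with z = 1.96; the short sketch below (an illustration, not part of the original survey documentation) reproduces the Round 1 and Round 3 values to within rounding:

import math

def moe_percentage_points(n, p=0.5, z=1.96, deff=1.0):
    # deff is the design effect; deff=1.0 corresponds to a simple random sample.
    return 100 * z * math.sqrt(deff * p * (1 - p) / n)

print(round(moe_percentage_points(2210), 2))  # -> 2.08 (Round 1, SRS assumption)
print(round(moe_percentage_points(1113), 2))  # -> 2.94 (Round 3, fresh + panel)
print(round(moe_percentage_points(667), 2))   # -> 3.79 (Round 3, panel only)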
The establishment of a BES Multi-User Geodatabase (BES-MUG) allows for the storage, management, and distribution of geospatial data associated with the Baltimore Ecosystem Study. At present, BES data is distributed over the internet via the BES website. While having geospatial data available for download is a vast improvement over having the data housed at individual research institutions, it still suffers from some limitations. BES-MUG overcomes these limitations, improving the quality of the geospatial data available to BES researchers and thereby leading to more informed decision-making.
BES-MUG builds on Environmental Systems Research Institute's (ESRI) ArcGIS and ArcSDE technology. ESRI was selected because its geospatial software offers robust capabilities. ArcGIS is implemented agency-wide within the USDA and is the predominant geospatial software package used by collaborating institutions.
Commercially available enterprise database packages (DB2, Oracle, SQL) provide an efficient means to store, manage, and share large datasets. However, standard database capabilities are limited with respect to geographic datasets because they lack the ability to deal with complex spatial relationships. By using ESRI's ArcSDE (Spatial Database Engine) in conjunction with database software, geospatial data can be handled much more effectively through the implementation of the Geodatabase model. Through ArcSDE and the Geodatabase model the database's capabilities are expanded, allowing for multiuser editing, intelligent feature types, and the establishment of rules and relationships. ArcSDE also allows users to connect to the database using ArcGIS software without being burdened by the intricacies of the database itself.
For an example of how BES-MUG will help improve the quality and timeliness of BES geospatial data, consider a census block group layer that is in need of updating. Rather than the researcher downloading the dataset, editing it, and resubmitting it through ORS, access rules will allow the authorized user to edit the dataset over the network. Established rules will ensure that attribute and topological integrity is maintained, so that key fields are not left blank and block group boundaries stay within tract boundaries. Metadata will automatically be updated, showing who edited the dataset and when, in the event any questions arise.
Currently, a functioning prototype Multi-User Database has been developed for BES at the University of Vermont Spatial Analysis Lab, using ArcSDE and IBM's DB2 Enterprise Database as a back-end architecture. This database, which is currently only accessible to those on the UVM campus network, will shortly be migrated to a Linux server where it will be accessible for database connections over the Internet. Passwords can then be handed out to all interested researchers on the project, who will be able to make a database connection through the Geographic Information Systems software interface on their desktop computer.
This database will include a very large number of thematic layers. Those layers are currently divided into biophysical, socio-economic and imagery categories. Biophysical includes data on topography, soils, forest cover, habitat areas, hydrology and toxics. Socio-economics includes political and administrative boundaries, transportation and infrastructure networks, property data, census data, household survey data, parks, protected areas, land use/land cover, zoning, public health and historic land use change. Imagery includes a variety of aerial and satellite imagery.
See the readme: http://96.56.36.108/geodatabase_SAL/readme.txt
See the file listing: http://96.56.36.108/geodatabase_SAL/diroutput.txt
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is sample code that uses Spark SQL to perform an interval based join to the contents of sampleReads.adam with the contents of Additional File 5. (SCALA 1 kb)
https://creativecommons.org/publicdomain/zero/1.0/
The Google Merchandise Store sells Google branded merchandise. The data is typical of what you would see for an ecommerce website.
The sample dataset contains Google Analytics 360 data from the Google Merchandise Store, a real ecommerce store. The Google Merchandise Store sells Google branded merchandise. The data is typical of what you would see for an ecommerce website. It includes the following kinds of information:
Traffic source data: information about where website visitors originate. This includes data about organic traffic, paid search traffic, display traffic, etc. Content data: information about the behavior of users on the site. This includes the URLs of pages that visitors look at, how they interact with content, etc. Transactional data: information about the transactions that occur on the Google Merchandise Store website.
Fork this kernel to get started.
Banner Photo by Edho Pratama from Unsplash.
What is the total number of transactions generated per device browser in July 2017?
The real bounce rate is defined as the percentage of visits with a single pageview. What was the real bounce rate per traffic source?
What was the average number of product pageviews for users who made a purchase in July 2017?
What was the average number of product pageviews for users who did not make a purchase in July 2017?
What was the average total transactions per user that made a purchase in July 2017?
What is the average amount of money spent per session in July 2017?
What is the sequence of pages viewed?
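The first of these questions can, for example, be answered with a query against the BigQuery export of this sample. The sketch below is only an illustration: the table path and field names follow the commonly documented bigquery-public-data.google_analytics_sample export schema and should be verified against the dataset page before use.

from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()  # requires Google Cloud credentials

# Total transactions per device browser in July 2017 (schema assumed, see note above).
query = """
SELECT
  device.browser AS browser,
  SUM(totals.transactions) AS total_transactions
FROM `bigquery-public-data.google_analytics_sample.ga_sessions_*`
WHERE _TABLE_SUFFIX BETWEEN '20170701' AND '20170731'
GROUP BY browser
ORDER BY total_transactions DESC
"""

for row in client.query(query).result():
    print(row.browser, row.total_transactions)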
The Household Income and Expenditure Survey is a survey collecting data on income, consumption and expenditure patterns of households, in accordance with methodological principles of statistical enquiries, which are linked to demographic and socio-economic characteristics of households. A Household Income and Expenditure Survey is the sole source of information on expenditure, consumption and income patterns of households, which is used to calculate poverty and income distribution indicators. It also serves as a statistical infrastructure for the compilation of the national basket of goods used to measure changes in price levels. Furthermore, it is used for updating the national accounts.
The main objective of the NHIES 2009/2010 is to comprehensively describe the levels of living of Namibians using actual patterns of consumption and income, as well as a range of other socio-economic indicators based on collected data. This survey was designed to inform policy making at the international, national and regional levels within the context of the Fourth National Development Plan, in support of monitoring and evaluation of Vision 2030 and the Millennium Development Goals. The NHIES was designed to provide policy decision making with reliable estimates at regional levels as well as to meet rural - urban disaggregation requirements.
National Coverage
Individuals and Households
Every week of the four-week period of a survey round, all persons in the household were asked if they had spent at least 4 nights of the week in the household. Any person who spent at least 4 nights in the household was taken as having spent the whole week in the household. To qualify as a household member, a person must have stayed in the household for at least two weeks out of the four weeks.
Sample survey data [ssd]
The targeted population of NHIES 2009/2010 was the private households of Namibia. The population living in institutions, such as hospitals, hostels, police barracks and prisons were not covered in the survey. However, private households residing within institutional settings were covered. The sample design for the survey was a stratified two-stage probability sample, where the first stage units were geographical areas designated as the Primary Sampling Units (PSUs) and the second stage units were the households. The PSUs were based on the 2001 Census EAs and the list of PSUs serves as the national sample frame. The urban part of the sample frame was updated to include the changes that take place due to rural to urban migration and the new developments in housing. The sample frame is stratified first by region followed by urban and rural areas within region. In urban areas further stratification is carried out by level of living which is based on geographic location and housing characteristics. The first stage units were selected from the sampling frame of PSUs and the second stage units were selected from a current list of households within each selected PSU, which was compiled just before the interviews.
PSUs were selected using probability proportional to size sampling coupled with the systematic sampling procedure where the size measure was the number of households within the PSU in the 2001 Population and Housing Census. The households were selected from the current list of households using systematic sampling procedure.
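As an illustration of this selection scheme (not code from the survey itself), probability-proportional-to-size systematic sampling over the cumulative size measures can be sketched as follows:

import random

def pps_systematic_sample(sizes, n):
    # Select n units with probability proportional to size, using a systematic
    # pass over the cumulative size totals (sizes = household counts per PSU).
    total = sum(sizes)
    interval = total / n                 # sampling interval
    start = random.uniform(0, interval)  # random start in [0, interval)
    targets = [start + k * interval for k in range(n)]

    selected, cumulative, unit = [], 0.0, 0
    for t in targets:
        # advance until the cumulative range of the current unit contains t
        while cumulative + sizes[unit] <= t:
            cumulative += sizes[unit]
            unit += 1
        selected.append(unit)
    return selected

# Example: 10 PSUs with household counts as the size measure; select 3 PSUs.
psu_households = [120, 80, 200, 150, 60, 90, 300, 40, 110, 170]
print(pps_systematic_sample(psu_households, 3))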
The sample size was designed to achieve reliable estimates at the region level and for urban and rural areas within each region. However, the actual sample sizes in urban or rural areas within some of the regions may not satisfy the expected precision levels for certain characteristics. The final sample consists of 10 660 households in 533 PSUs. The selected PSUs were randomly allocated to the 13 survey rounds.
All the expected sample of 533 PSUs was covered. However, a number of originally selected PSUs had to be substituted by new ones due to the following reasons.
Urban areas: Movement of people for resettlement in informal settlement areas from one place to another caused a selected PSU to be empty of households.
Rural areas: In addition to Caprivi region (where one constituency is generally flooded every year), Ohangwena and Oshana regions were badly affected by an unusual flood situation. Although this situation was generally addressed by interchanging the PSUs between survey rounds, some PSUs were still under water close to the end of the survey period. There were five empty PSUs in the urban areas of Hardap (1), Karas (3) and Omaheke (1) regions. Since these PSUs were found in the low strata within the urban areas of the relevant regions, the substituting PSUs were selected from the same strata. The PSUs under water were also five in rural areas of Caprivi (1), Ohangwena (2) and Oshana (2) regions. Wherever possible the substituting PSUs were selected from the same constituency where the original PSU was selected. If not, the selection was carried out from the rural stratum of the particular region. One sampled PSU in the urban area of Khomas region (Windhoek city) had grown so large that it had to be split into 7 PSUs. This was incorporated into the geographical information system (GIS) and one PSU out of the seven was selected for the survey. In one PSU in Erongo region only fourteen households were listed, and one PSU in Omusati region listed only eleven households. All these households were interviewed and no additional selection was done to cover for the loss in sample.
Face-to-face [f2f]
The instruments for data collection were, as in the previous survey, the questionnaires and manuals. The Form I questionnaire collected demographic and socio-economic information on household members, such as sex, age, education and employment status, among others. It also collected information on household possessions like animals, land, housing, household goods, utilities, household income and expenditure, etc.
Form II or the Daily Record Book is a diary for recording daily household transactions. A book was administered to each sample household each week for four consecutive weeks (survey round). Households were asked to record transactions, item by item, for all expenditures and receipts, including incomes and gifts received or given out. Own produce items were also recorded. Prices of items from different outlets were also collected in both rural and urban areas. The price collection was needed to supplement information from areas where price collection for consumer price indices (CPI) does not currently take place.
The questionnaires received from the regions were registered and counterchecked at the survey head office. The data processing team consisted of a systems administrator, an IT technician, programmers, statisticians and data typists.
Data capturing
The data capturing process was undertaken in the following ways: Form 1 was scanned, interpreted and verified using the “Scan”, “Interpret” & “Verify” modules of the Eyes & Hands software respectively. Some basic checks were carried out to ensure that each PSU was valid and every household was unique. Invalid characters were removed. The scanned and verified data was converted into text files using the “Transfer” module of the Eyes & Hands. Finally, the data was transferred to a SQL database for further processing, using the “TranScan” application. The Daily Record Books (DRB or Form 2) were manually entered after the scanned data had been transferred to the SQL database. The reason was to ensure that all DRBs were linked to the correct Form 1, i.e. each household’s Form 1 was linked to the corresponding Daily Record Book. In total, 10 645 questionnaires (Form 1), comprising around 500 questions each, were scanned, and close to one million transactions from the Form 2 (DRBs) were manually captured.
Household response rate: Total number of responding households and non-responding households and the reason for non-response are shown below. Non-contacts and incomplete forms, which were rejected due to a lot of missing data in the questionnaire, at 3.4 and 4.0 percent, respectively, formed the largest part of non-response. At the regional level Erongo, Khomas, and Kunene reported the lowest response rate and Caprivi and Kavango the highest. See page 17 of the report for a detailed breakdown of response rates by region.
To be able to compare with the previous survey in 2003/2004 and to follow up the development of the country, methodology and definitions were kept the same. Comparisons between the surveys can be found in the different chapters of this report. Experiences from the previous survey gave valuable input to this one, and the data collection was improved to avoid previously experienced errors. Also, some additional questions in the questionnaire helped to confirm the accuracy of reported data. During the data cleaning process it turned out that some households had difficulty separating their household consumption from their business consumption when recording their daily transactions in the DRB. This was particularly applicable to guest farms, the number of which has shown a big increase during the past five years. All households with extremely high consumption were examined manually, and business transactions were recorded and separated from private consumption.
The Household Income and Expenditure Survey (NHIES) 2009 was a survey collecting data on income, consumption and expenditure patterns of households, in accordance with methodological principles of statistical enquiries, which were linked to demographic and socio-economic characteristics of households. A Household Income and expenditure Survey was the sole source of information on expenditure, consumption and income patterns of households, which was used to calculate poverty and income distribution indicators. It also served as a statistical infrastructure for the compilation of the national basket of goods used to measure changes in price levels. It was also used for updating the national accounts.
The main objective of the NHIES 2009-2010 was to comprehensively describe the levels of living of Namibians using actual patterns of consumption and income, as well as a range of other socio-economic indicators based on collected data. This survey was designed to inform policy making at the international, national and regional levels within the context of the Fourth National Development Plan, in support of monitoring and evaluation of Vision 2030 and the Millennium Development Goals (MDG's). The NHIES was designed to provide policy decision making with reliable estimates at regional levels as well as to meet rural - urban disaggregation requirements.
National
Every week of the four-week period of a survey round, all persons in the household were asked if they had spent at least 4 nights of the week in the household. Any person who spent at least 4 nights in the household was taken as having spent the whole week in the household. To qualify as a household member, a person must have stayed in the household for at least two weeks out of the four weeks.
Sample survey data [ssd]
The targeted population of NHIES 2009-2010 was the private households of Namibia. The population living in institutions, such as hospitals, hostels, police barracks and prisons were not covered in the survey. However, private households residing within institutional settings were covered. The sample design for the survey was a stratified two-stage probability sample, where the first stage units were geographical areas designated as the Primary Sampling Units (PSUs) and the second stage units were the households. The PSUs were based on the 2001 Census EAs and the list of PSUs serves as the national sample frame. The urban part of the sample frame was updated to include the changes that take place due to rural to urban migration and the new developments in housing. The sample frame is stratified first by region followed by urban and rural areas within region. In urban areas, further stratification is carried out by level of living which is based on geographic location and housing characteristics. The first stage units were selected from the sampling frame of PSUs and the second stage units were selected from a current list of households within each selected PSU, which was compiled just before the interviews.
PSUs were selected using probability proportional to size sampling coupled with the systematic sampling procedure where the size measure was the number of households within the PSU in the 2001 Population and Housing Census (PHC). The households were selected from the current list of households using systematic sampling procedure.
The sample size was designed to achieve reliable estimates at the region level and for urban and rural areas within each region. However, the actual sample sizes in urban or rural areas within some of the regions may not satisfy the expected precision levels for certain characteristics. The final sample consists of 10 660 households in 533 PSUs. The selected PSUs were randomly allocated to the 13 survey rounds.
All the expected sample of 533 PSUs was covered. However, a number of originally selected PSUs had to be substituted by new ones due to the following reasons.
Urban areas: Movement of people for resettlement in informal settlement areas from one place to another caused a selected PSU to be empty of households.
Rural areas: In addition to Caprivi region (where one constituency is generally flooded every year), Ohangwena and Oshana regions were badly affected by an unusual flood situation. Although this situation was generally addressed by interchanging the PSUs between survey rounds, some PSUs were still under water close to the end of the survey period.
There were five empty PSUs in the urban areas of Hardap (1), Karas (3) and Omaheke (1) regions. Since these PSUs were found in the low strata within the urban areas of the relevant regions the substituting PSUs were selected from the same strata. The PSUs under water were also five in rural areas of Caprivi (1), Ohangwena (2) and Oshana (2) regions. Wherever possible the substituting PSUs were selected from the same constituency where the original PSU was selected. If not, the selection was carried out from the rural stratum of the particular region.
One sampled PSU in urban area of Khomas region (Windhoek city) had grown so large that it had to be split into 7 PSUs. This was incorporated into the geographical information system (GIS) and one PSU out of the seven was selected for the survey. In one PSU in Erongo region only fourteen households were listed and one in Omusati region listed only eleven households. All these households were interviewed and no additional selection was done to cover for the loss in sample.
Face-to-face [f2f]
The instruments for data collection were, as in the previous survey, the questionnaires and manuals. The Form I questionnaire collected demographic and socio-economic information on household members, such as sex, age, education and employment status, among others. It also collected information on household possessions like animals, land, housing, household goods, utilities, household income and expenditure, etc.
Form II or the Daily Record Book is a diary for recording daily household transactions. A book was administered to each sample household each week for four consecutive weeks (survey round). Households were asked to record transactions, item by item, for all expenditures and receipts, including incomes and gifts received or given out. Own produce items were also recorded. Prices of items from different outlets were also collected in both rural and urban areas. The price collection was needed to supplement information from areas where price collection for consumer price indices (CPI) does not currently take place.
The data capturing process was undertaken in the following ways: Form 1 was scanned, interpreted and verified using the “Scan”, “Interpret” & “Verify” modules of the Eyes & Hands software respectively. Some basic checks were carried out to ensure that each PSU was valid and every household was unique. Invalid characters were removed. The scanned and verified data was converted into text files using the “Transfer” module of the Eyes & Hands. Finally, the data was transferred to a SQL database for further processing, using the “TranScan” application. The Daily Record Books (DRB or form 2) were manually entered after the scanned data had been transferred to the SQL database. The reason was to ensure that all DRBs were linked to the correct Form 1, i.e. each household's Form 1 was linked to the corresponding Daily Record Book. In total, 10 645 questionnaires (Form 1), comprising around 500 questions each, were scanned and close to one million transactions from the Form 2 (DRBs) were manually captured.
Household response rate: Total number of responding households and non-responding households and the reason for non-response are shown below. Non-contacts and incomplete forms, which were rejected due to a lot of missing data in the questionnaire, at 3.4 and 4.0 percent, respectively, formed the largest part of non-response. At the regional level Erongo, Khomas, and Kunene reported the lowest response rate and Caprivi and Kavango the highest.
To be able to compare with the previous survey in 2003/2004 and to follow up the development of the country, methodology and definitions were kept the same. Comparisons between the surveys can be found in the different chapters of this report. Experiences from the previous survey gave valuable input to this one, and the data collection was improved to avoid previously experienced errors. Also, some additional questions in the questionnaire helped to confirm the accuracy of reported data. During the data cleaning process it turned out that some households had difficulty separating their household consumption from their business consumption when recording their daily transactions in the DRB. This was particularly applicable to guest farms, the number of which has shown a big increase during the past five years. All households with extremely high consumption were examined manually, and business transactions were recorded and separated from private consumption.
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
Dataset Card for Spider
Dataset Summary
Spider is a large-scale complex and cross-domain semantic parsing and text-to-SQL dataset annotated by 11 Yale students. The goal of the Spider challenge is to develop natural language interfaces to cross-domain databases.
Supported Tasks and Leaderboards
The leaderboard can be seen at https://yale-lily.github.io/spider
Languages
The text in the dataset is in English.
Dataset… See the full description on the dataset page: https://huggingface.co/datasets/xlangai/spider.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
A data delivery application that provides web-based access to soil, water, climate, land management, and geospatial data produced by Conservation Effects Assessment Project (CEAP) watershed research sites across the United States. Features include: data access via ArcGIS Server and MS SQL Server; enhanced data searches and summary options in Tools; access to high-resolution imagery in the Map > Table of Contents; enhanced graphing options on the Get Data page; and transparency sliders for individual map components in the Map > Table of Contents. Resources in this dataset: Resource Title: STEWARDS - A data delivery application for the USDA/ARS Conservation Effects Assessment Project. File Name: Web Page, url: https://www.nrrig.mwa.ars.usda.gov/stewards/stewards.html
Sampling strategy: Samples from trawls or sledges are sieved on the trawl deck then sorted in the wet lab per taxonomic group. Sorting may vary from high taxonomic levels (order, family) to specific ones according to expertise on board.
For some taxa, sampling includes: up to 10 voucher specimens with a unique batch number; photos; tissue samples in 80% ethanol for DNA analysis (Barcoding and Phylogeny); 30 samples minimum for population genetics (for abundant species); sampling for isotopic measures; fish chromosomes preparations; primary fish cell lines and cryopreservation of fish tissues for permanent cell lines
The database was intended to contain information about stations, events, gear, all material collected and the associated samples listed above. Currently it only contains information on material collected and samples.
Data was recorded on log sheets and then transcribed into an Oracle database called cabo, via a tailor-made user interface for entering data; there is no export functionality. An SQL database dump has been provided, but there was no one on the voyage to elaborate on its structure. This was promised post-voyage, along with some simple data exports to match the log sheets, so that we have access to the data without the unfriendly database.
Introduction

OntoNotes Release 5.0 is the final release of the OntoNotes project, a collaborative effort between BBN Technologies, the University of Colorado, the University of Pennsylvania and the University of Southern California's Information Sciences Institute. The goal of the project was to annotate a large corpus comprising various genres of text (news, conversational telephone speech, weblogs, usenet newsgroups, broadcast, talk shows) in three languages (English, Chinese, and Arabic) with structural information (syntax and predicate argument structure) and shallow semantics (word sense linked to an ontology and coreference).

OntoNotes Release 5.0 contains the content of earlier releases -- OntoNotes Release 1.0 LDC2007T21, OntoNotes Release 2.0 LDC2008T04, OntoNotes Release 3.0 LDC2009T24 and OntoNotes Release 4.0 LDC2011T03 -- and adds source data from and/or additional annotations for newswire (News), broadcast news (BN), broadcast conversation (BC), telephone conversation (Tele) and web data (Web) in English and Chinese, and newswire data in Arabic. Also contained is English pivot text (Old Testament and New Testament text). This cumulative publication consists of 2.9 million words, with counts shown in the table below.

        Arabic   English   Chinese
News    300k     625k      250k
BN      n/a      200k      250k
BC      n/a      200k      150k
Web     n/a      300k      150k
Tele    n/a      120k      100k
Pivot   n/a      n/a       300k

The OntoNotes project built on two time-tested resources, following the Penn Treebank for syntax and the Penn PropBank for predicate-argument structure. Its semantic representation includes word sense disambiguation for nouns and verbs, with some word senses connected to an ontology, and coreference.

Data

Documents describing the annotation guidelines and the routines for deriving various views of the data from the database are included in the documentation directory of this release. The annotation is provided both in separate text files for each annotation layer (Treebank, PropBank, word sense, etc.) and in the form of an integrated relational database (ontonotes-v5.0.sql.gz) with a Python API to provide convenient cross-layer access. It is a known issue that this release contains some non-validating XML files. The included tools, however, use a non-validating XML parser to parse the .xml files and load the appropriate values.

Tools

This release includes OntoNotes DB Tool v0.999 beta, the tool used to assemble the database from the original annotation files. It can be found in the directory tools/ontonotes-db-tool-v0.999b. This tool can be used to derive various views of the data from the database, and it provides an API that can implement new queries or views. Licensing information for the OntoNotes DB Tool package is included in its source directory.

Samples

Please view these samples: Chinese, Arabic, English.

Updates

Additional documentation was added on December 11, 2014 and is included in downloads after that date.

Acknowledgment

This work is supported in part by the Defense Advanced Research Projects Agency, GALE Program Grant No. HR0011-06-1-003. The content of this publication does not necessarily reflect the position or policy of the Government, and no official endorsement should be inferred.
Copyright Portions © 2006 Abu Dhabi TV, © 2006 Agence France Presse, © 2006 Al-Ahram, © 2006 Al Alam News Channel, © 2006 Al Arabiya, © 2006 Al Hayat, © 2006 Al Iraqiyah, © 2006 Al Quds-Al Arabi, © 2006 Anhui TV, © 2002, 2006 An Nahar, © 2006 Asharq-al-Awsat, © 2010 Bible League International, © 2005 Cable News Network, LP, LLLP, © 2000-2001 China Broadcasting System, © 2000-2001, 2005-2006 China Central TV, © 2006 China Military Online, © 2000-2001 China National Radio, © 2006 Chinanews.com, © 2000-2001 China Television System, © 1989 Dow Jones & Company, Inc., © 2006 Dubai TV, © 2006 Guangming Daily, © 2006 Kuwait TV, © 2005-2006 National Broadcasting Company, Inc., © 2006 New Tang Dynasty TV, © 2006 Nile TV, © 2006 Oman TV, © 2006 PAC Ltd, © 2006 Peoples Daily Online, © 2005-2006 Phoenix TV, © 2000-2001 Sinorama Magazine, © 2006 Syria TV, © 1996-1998, 2006 Xinhua News Agency, © 1996, 1997, 2005, 2007, 2008, 2009, 2011, 2013 Trustees of the University of Pennsylvania
BASIC CHARACTERISTICS OF PEOPLE AND HOUSING FOR INDIVIDUAL 2010 CENSUS TRACT PORTIONS INSIDE OR OUTSIDE KCMO - Some demographic data are from the 2010 Census while other data are from the 2015-2019 American Community Survey - ACS. The ACS replaces what until 2000 was the Long Form of the census; both have been based on surveys of a partial sample of people. The ACS sample is so small that surveys from five years must be combined to be reliable. The 2015-2019 ACS is the most recent grouping of 5 years of data. ACS data have been proportioned to conform with 2010 Census total population and total households.
Splitgraph serves as an HTTP API that lets you run SQL queries directly on this data to power Web applications. For example:
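A minimal sketch of such a query over HTTP is shown below; note that both the endpoint URL and the table reference are placeholders (assumptions) and must be replaced with the values given in the Splitgraph documentation and on this dataset's page:

import requests

# Placeholder endpoint and table name -- check the Splitgraph docs for the
# actual HTTP SQL endpoint and this dataset's fully-qualified table name.
SQL_ENDPOINT = "https://data.splitgraph.com/sql/query/ddn"              # assumed
QUERY = 'SELECT * FROM "namespace/repository"."table_name" LIMIT 10;'  # hypothetical

response = requests.post(SQL_ENDPOINT, json={"sql": QUERY}, timeout=30)
response.raise_for_status()
print(response.json())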
See the Splitgraph documentation for more information.
http://www.apache.org/licenses/LICENSE-2.0
This dataset contains 56,964 conversations between an AI assistant and a (fake, generated) "Human" in Dutch, specifically in the domain of programming (Stack Overflow). They are translations of Baize's machine-generated answers to the Stack Overflow dataset.
Data Instances
{'id': 9, 'input': 'Het gesprek tussen de mens en de AI-assistent. ' '[|Human|] Ik heb wat hulp nodig met Spark SQL union en het ordenen ' 'van kolommen. ' '[|AI|] Natuurlijk, ik help je graag. Waar heb je specifiek hulp ' 'bij nodig? ' '[|Human|] Ik wil de output van een UNION-query ordenen op een ' 'kolom die niet aanwezig is in de SELECT-instructie. Is dat ' 'mogelijk? ' '[|AI|] Ja, het is mogelijk met Spark SQL. Je kunt de ordinale ' 'positie van de kolom in de ORDER BY-clausule gebruiken in plaats ' 'van de kolomnaam. Als je bijvoorbeeld wilt ordenen op de derde ' 'kolom in de SELECT-instructie, kun je ORDER BY 3 gebruiken. ' '[|Human|] Dat is nuttig. Kan ik ook een alias gebruiken in de ' 'ORDER BY-clausule? ' '[|AI|] Ja, je kunt een alias gebruiken in de ORDER BY-clausule. De ' 'alias moet worden gedefinieerd in de SELECT-instructie. Als je ' 'bijvoorbeeld de derde kolom als "col3" alias geeft, kun je ordenen ' 'met ORDER BY col3. ' '[|Human|] Oké, ik denk dat ik het begrijp. Kan ik dezelfde alias ' 'gebruiken voor kolommen uit beide tabellen in de UNION? ' '[|AI|] Nee, je kunt dezelfde alias niet gebruiken voor kolommen ' 'uit beide tabellen in de UNION. Elke tabel in de UNION moet een ' 'unieke alias hebben voor de kolommen. ' '[|Human|] ', 'topic': 'Spark SQL UNION - ORDER BY kolom niet in SELECT'},
Data Fields
id: the ID of the item. The following 82 IDs are not included because they could not be translated: [1713, 1937, 1960, 4326, 4356, 8357, 8542, 8827, 9137, 9782, 11560, 11961, 12244, 12362, 12488, 13259, 13621, 14445, 14835, 15006, 17746, 18808, 19285, 19426, 19491, 21270, 21661, 22098, 23352, 23840, 23869, 25148, 25928, 27102, 27856, 28387, 29942, 30041, 30251, 32396, 32742, 32941, 33628, 34116, 34648, 34859, 35977, 35987, 36035, 36456, 37028, 37238, 37640, 38107, 38735, 39015, 40984, 41115, 41567, 42397, 43219, 43783, 44599, 44980, 45239, 47676, 48922, 49534, 50282, 50683, 50804, 50919, 51076, 51211, 52000, 52183, 52489, 52595, 53884, 54726, 55795, 56992]
input: the machine-generated conversation between AI and "Human". Always starts with Het gesprek tussen de mens en de AI-assistent. and has at least one occurrence of both [|AI|] and [|Human|].
topic: the topic description
Dataset Creation
Both the conversations and the topics were translated with OpenAI's API for gpt-3.5-turbo, using max_tokens=1024 and temperature=0 as parameters.
The prompt template to translate the input is (where src_lang was English and tgt_lang Dutch):
CONVERSATION_TRANSLATION_PROMPT = """You are asked to translate a conversation between an AI assistant and a human from {src_lang} into {tgt_lang}.
Here are the requirements that you should adhere to:
1. maintain the format: the conversation consists of the AI (marked as [|AI|]) and the human ([|Human|]) talking in turns and responding to each other;
2. do not translate the speaker identifiers [|AI|] and [|Human|] but always copy them into the translation in appropriate places;
3. ensure accurate translation and keep the correctness of the conversation;
4. make sure that text is fluent to read and does not contain grammatical errors. Use standard {tgt_lang} without regional bias;
5. translate the human's text using informal, but standard, language;
6. make sure to avoid biases (such as gender bias, grammatical bias, social bias);
7. if the human asks to correct grammar mistakes or spelling mistakes then you have to generate a similar mistake in {tgt_lang}, and then also generate a corrected output version for the AI in {tgt_lang};
8. if the human asks to translate text from one to another language, then you only translate the human's question to {tgt_lang} but you keep the translation that the AI provides in the language that the human requested;
9. do not translate code fragments but copy them as they are. If there are English examples, variable names or definitions in code fragments, keep them in English.
Now translate the following conversation with the requirements set out above. Do not provide an explanation and do not add anything else.
"""
The prompt to translate the topic is:
TOPIC_TRANSLATION_PROMPT = "Translate the following title of a conversation from {src_lang} to {tgt_lang} in a succinct,"
" summarizing manner. Translate accurately and formally. Do not provide any explanation"
" about the translation and do not include the original title.
"
The system message was:
You are a helpful assistant that translates English to Dutch to the requirements that are given to you.
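For illustration only, a minimal sketch of how such a call might look with the legacy openai Python client, combining the system message and conversation prompt above (the exact code used by the dataset authors is not included here, so treat this as an assumption-laden reconstruction):

import openai  # legacy 0.x client; reads OPENAI_API_KEY from the environment

def translate_conversation(conversation: str, src_lang: str = "English", tgt_lang: str = "Dutch") -> str:
    # CONVERSATION_TRANSLATION_PROMPT is the template quoted earlier in this card.
    prompt = CONVERSATION_TRANSLATION_PROMPT.format(src_lang=src_lang, tgt_lang=tgt_lang)
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,
        max_tokens=1024,
        messages=[
            {"role": "system",
             "content": "You are a helpful assistant that translates English to Dutch "
                        "to the requirements that are given to you."},
            {"role": "user", "content": prompt + "\n\n" + conversation},
        ],
    )
    return response["choices"][0]["message"]["content"]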
Note that 82 items (0.1%) were not successfully translated: the translation was missing the AI identifier [|AI|] and/or the human one [|Human|]. The IDs of these missing items are the 82 IDs listed in the id field description above.
The translation quality has not been verified. Use at your own risk!
Licensing Information
Licensing info for Stack Overflow Questions is listed as Apache 2.0. If you use the current dataset, you should also adhere to the original license.
This text was generated (either in part or in full) with GPT-3 (gpt-3.5-turbo), OpenAI’s large-scale language-generation model. Upon generating draft language, the author reviewed, edited, and revised the language to their own liking and takes ultimate responsibility for the content of this publication.
If you use this dataset, you must also follow the Sharing and Usage policies.
As clearly stated in their Terms of Use, specifically 2c.iii, "[you may not] use output from the Services to develop models that compete with OpenAI". That means that you cannot use this dataset to build models that are intended to commercially compete with OpenAI. As far as I am aware, that is a specific restriction that should serve as an addendum to the current license.
This dataset is also available on the Hugging Face hub with the same DOI and license. See that README for more info.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
MOCAT is a highly configurable, modular pipeline for fast, standardized processing of single or paired-end sequencing data generated by the Illumina platform. The pipeline uses state-of-the-art programs to quality control, map, and assemble reads from metagenomic samples sequenced at a depth of several billion base pairs, and predict protein-coding genes on assembled metagenomes. Mapping against reference databases allows for read extraction or removal, as well as abundance calculations. Relevant statistics for each processing step can be summarized into multi-sheet Excel documents and queryable SQL databases. MOCAT runs on UNIX machines and integrates seamlessly with the SGE and PBS queuing systems, commonly used to process large datasets. The open source code and modular architecture allow users to modify or exchange the programs that are utilized in the various processing steps. Individual processing steps and parameters were benchmarked and tested on artificial, real, and simulated metagenomes resulting in an improvement of selected quality metrics. MOCAT can be freely downloaded at http://www.bork.embl.de/mocat/.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The file set is a freely downloadable aggregation of information about Australian schools. The individual files represent a series of tables which, when considered together, form a relational database. The records cover the years 2008-2014 and include information on approximately 9500 primary and secondary school main-campuses and around 500 subcampuses. The records all relate to school-level data; no data about individuals is included. All the information has previously been published and is publicly available but it has not previously been released as a documented, useful aggregation. The information includes:
(a) the names of schools
(b) staffing levels, including full-time and part-time teaching and non-teaching staff
(c) student enrolments, including the number of boys and girls
(d) school financial information, including Commonwealth government, state government, and private funding
(e) test data, potentially for school years 3, 5, 7 and 9, relating to an Australian national testing programme known by the trademark 'NAPLAN'
Documentation of this Edition 2016.1 is incomplete but the organization of the data should be readily understandable to most people. If you are a researcher, the simplest way to study the data is to make use of the SQLite3 database called 'school-data-2016-1.db'. If you are unsure how to use an SQLite database, ask a guru.
The database was constructed directly from the other included files by running the following command at a command-line prompt:
sqlite3 school-data-2016-1.db < school-data-2016-1.sql
Note that a few non-consequential errors will be reported if you run this command yourself. The reason for the errors is that the SQLite database is created by importing a series of '.csv' files. Each of the .csv files contains a header line with the names of the variables relevant to each column. That information is useful for many statistical packages, but it is not what SQLite expects, so it complains about the header. Despite the complaint, the database will be created correctly.
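If you would rather avoid those complaints, the import can also be scripted. The sketch below is an illustration only: it reads one of the '|'-delimited files with Python's csv module, uses the header line for column names, and loads the rows into the database. The table name 'students' is an assumption chosen for the example.

import csv
import sqlite3

con = sqlite3.connect("school-data-2016-1.db")
with open("students-headed-bar.csv", newline="", encoding="utf-8") as f:
    reader = csv.reader(f, delimiter="|")   # the files use '|' as the field separator
    header = next(reader)                   # consume the header line that SQLite complains about
    cols = ", ".join('"{}"'.format(name) for name in header)
    placeholders = ", ".join("?" for _ in header)
    con.execute("CREATE TABLE IF NOT EXISTS students ({})".format(cols))  # 'students' is an assumed table name
    con.executemany("INSERT INTO students VALUES ({})".format(placeholders), reader)
con.commit()
con.close()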
Briefly, the data are organized as follows.
(1) The .csv files ('comma separated values') do not actually use a comma as the field delimiter. Instead, the vertical bar character '|' (ASCII: octal 174, decimal 124, hex 7C) is used. If you read the .csv files using Microsoft Excel, OpenOffice, or LibreOffice, you will need to set the field separator to '|'. Check your software documentation to understand how to do this.
(2) Each school-related record is indexed by an identifier called 'ageid'. The ageid uniquely identifies each school and consequently serves as the appropriate variable for JOIN-ing records in different data files. For example, the first school-related record after the header line in the file 'students-headed-bar.csv' shows the ageid of the school as 40000. The relevant school name can be found by looking in the file 'ageidtoname-headed-bar.csv', which shows that the ageid 40000 corresponds to a school called 'Corpus Christi Catholic School'.
(3) In addition to the variable 'ageid', each record is also identified by one or two 'year' variables. The most important purpose of a year identifier is to indicate the year to which the record relates. For example, if one turns again to the file 'students-headed-bar.csv', one sees that the first seven school-related records after the header line all relate to Corpus Christi Catholic School with an ageid of 40000. The variable that identifies the important differences between these seven records is 'studentyear', which shows the year to which the student data refer. One can see, for example, that in 2008 there were a total of 410 students enrolled, of whom 185 were girls and 225 were boys (look at the variable names in the header line).
(4) The variables relating to years are given different names in each of the different files ('studentsyear' in the file 'students-headed-bar.csv', 'financesummaryyear' in the file 'financesummary-headed-bar.csv'). Despite the different names, the year variables provide the second-level means for joining information across files. For example, if you wanted to relate the enrolments at a school in each year to its financial state, you might JOIN records using 'ageid' in the two files and, secondarily, match 'studentsyear' with 'financesummaryyear'. A sketch of such a query appears after this list.
(5) The manipulation of the data is most readily done using the SQL language with the SQLite database but it can also be done in a variety of statistical packages.
(6) It is our intention for Edition 2016-2 to create large 'flat' files suitable for use by non-researchers who want to view the data with spreadsheet software. The disadvantage of such 'flat' files is that they contain vast amounts of redundant information and might not display the data in the form that the user most wants.
(7) Geocoding of the schools is not available in this edition.
(8) Some files, such as 'sector-headed-bar.csv' are not used in the creation of the database but are provided as a convenience for researchers who might wish to recode some of the data to remove redundancy.
(9) A detailed example of a suitable SQLite query can be found in the file 'school-data-sqlite-example.sql'. The same query, used in the context of analyses done with the excellent, freely available R statistical package (http://www.r-project.org) can be seen in the file 'school-data-with-sqlite.R'.
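To make points (2)-(4) concrete, here is a hedged sketch of the two-level join described above, written with Python's sqlite3 module. The table names ('students', 'financesummary') and the year column names are assumptions inferred from the file names and variables mentioned in this description; the distributed 'school-data-sqlite-example.sql' remains the authoritative example of a real query against this database.

import sqlite3

con = sqlite3.connect("school-data-2016-1.db")
query = """
SELECT s.ageid, s.studentyear, f.financesummaryyear
FROM students AS s
JOIN financesummary AS f
  ON f.ageid = s.ageid                      -- first-level join on the school identifier
 AND f.financesummaryyear = s.studentyear   -- second-level match on the year variables
WHERE s.ageid = 40000                       -- Corpus Christi Catholic School, as in the example above
ORDER BY s.studentyear
"""
for row in con.execute(query):
    print(row)
con.close()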