https://creativecommons.org/publicdomain/zero/1.0/
YouTube was created in 2005, with the first video – "Me at the Zoo" – uploaded on 23 April 2005. Since then, 1.3 billion people have set up YouTube accounts. As of 2018, people watch nearly 5 billion videos each day and upload 300 hours of video to the site every minute.
According to 2016 research undertaken by Pexeso, music accounts for only 4.3% of YouTube's content, yet it draws 11% of the views. Clearly, an awful lot of people watch a comparatively small number of music videos. It should be no surprise, therefore, that the most-watched videos of all time on YouTube are predominantly music videos.
On August 13, BTS became the most-viewed artist in YouTube history, accumulating over 26.7 billion views across all their official channels. This count includes all music videos and dance practice videos.
Justin Bieber and Ed Sheeran hold the second- and third-highest view counts, with over 26 billion views each.
Currently, BTS’s most viewed videos are their music videos for “**Boy With Luv**,” “**Dynamite**,” and “**DNA**,” which all have over 1.4 billion views.
Headers of the dataset:
- Total = Total views (in millions) across all official channels
- Avg = Current daily average of all videos combined
- 100M = Number of videos with more than 100 million views
I am working in the area of Privacy Preserving Big Data Publishing. The state-of-the-art approaches were tested on the Adult dataset. I found that the Adult dataset is available at the UCI repository, but a synthetic version wasn't available anywhere. As I am working with big data, I need large datasets to justify my contribution. Therefore, I created my own synthetic versions with 100 thousand, 1 million, 10 million and 100 million records. Here I am sharing the original Adult dataset with approximately 33 thousand records and the synthetic versions Adult100k, Adult1m, Adult10m and Adult100m.
Adult dataset contains census information.
I would like to thank the UCI repository for providing the base dataset, without which I would not have been able to synthesize the large datasets.
The datasets might be helpful to all those who want to work on Big Data Privacy.
50 Million Rows MSSQL Backup File with Clustered Columnstore Index.
This dataset contains:
- ~27K categorized Turkish supermarket items
- 81 stores (every city of Turkey has a store)
- 100K customers with real Turkish names and addresses
- 10M rows of randomly generated sales data
- Near-real prices throughout, with an inflation factor over time
All the data was generated randomly, so the usernames were built from real Turkish names and surnames but do not belong to real people.

The sales data was also generated randomly, but it follows some rules. For example, every order can contain 1-9 kinds of item, and every order line can be 1-9 pieces. The randomization function works according to the population of each city: the number of orders for Istanbul (the biggest city of Turkey) is about 20% of all data, while orders for Gaziantep (whose population is 2.5% of Turkey's) are about 2.5% of all data.
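The generation rules above can be sketched in a few lines of Python. The city populations and catalogue size below are illustrative only, not the dataset's real figures:

```python
import random

# Illustrative city populations (not the dataset's real figures)
city_population = {"Istanbul": 15_900_000, "Ankara": 5_800_000, "Gaziantep": 2_150_000}

def draw_order(rng: random.Random) -> dict:
    """Generate one random order following the rules above."""
    cities = list(city_population)
    weights = [city_population[c] for c in cities]
    n_kinds = rng.randint(1, 9)                   # 1-9 kinds of item per order
    return {
        # City chosen with probability proportional to population
        "city": rng.choices(cities, weights=weights, k=1)[0],
        "lines": [
            {"item_id": rng.randint(1, 27_000),   # ~27K catalogued items
             "amount": rng.randint(1, 9)}         # 1-9 pieces per order line
            for _ in range(n_kinds)
        ],
    }

rng = random.Random(42)
orders = [draw_order(rng) for _ in range(10_000)]
```

With only three toy cities, Istanbul ends up with roughly two-thirds of the orders; over the full list of 81 cities, the same weighting yields the ~20% Istanbul share described above.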
The MarketScan health claims database is a compilation of nearly 110 million patient records with information from more than 100 private insurance carriers and large self-insuring companies. Public forms of insurance (i.e., Medicare and Medicaid) are not included, nor are small (<100 employees) or medium (~1,000 employees) employers. We excluded the relatively few (n=6,735) individuals over 65 years of age because Medicare is the primary insurance of U.S. adults over 65.

The EQI was constructed for 2000-2005 for all US counties and is composed of five domains (air, water, built, land, and sociodemographic), each composed of variables to represent the environmental quality of that domain. Domain-specific EQIs were developed using principal components analysis (PCA) to reduce these variables within each domain, while the overall EQI was constructed from a second PCA over these individual domains (L. C. Messer et al., 2014). To account for differences in environment across rural and urban counties, the overall and domain-specific EQIs were stratified by rural-urban continuum codes (RUCCs) (U.S. Department of Agriculture, 2015).

This dataset is not publicly accessible because EPA cannot release personally identifiable information regarding living individuals, according to the Privacy Act and the Freedom of Information Act (FOIA). This dataset contains information about human research subjects. Because there is potential to identify individual participants and disclose personal information, either alone or in combination with other datasets, individual-level data are not appropriate to post for public access. Restricted access may be granted to authorized persons by contacting the party listed. Human health data are not available publicly. EQI data are available at: https://edg.epa.gov/data/Public/ORD/NHEERL/EQI. Format: Data are stored as CSV files.

This dataset is associated with the following publication: Gray, C., D. Lobdell, K. Rappazzo, Y. Jian, J. Jagai, L. Messer, A. Patel, S. Deflorio-Barker, C. Lyttle, J. Solway, and A. Rzhetsky. Associations between environmental quality and adult asthma prevalence in medical claims data. Environmental Research 166: 529-536 (2018).
https://creativecommons.org/publicdomain/zero/1.0/
NYC Open Data is an opportunity to engage New Yorkers in the information that is produced and used by City government. We believe that every New Yorker can benefit from Open Data, and Open Data can benefit from every New Yorker. Source: https://opendata.cityofnewyork.us/overview/
Thanks to NYC Open Data, which makes public data generated by city agencies available for public use, and Citi Bike, we've incorporated over 150 GB of data in 5 open datasets into Google BigQuery Public Datasets, including:
Over 8 million 311 service requests from 2012-2016
More than 1 million motor vehicle collisions 2012-present
Citi Bike stations and 30 million Citi Bike trips 2013-present
Over 1 billion Yellow and Green Taxi rides from 2009-present
Over 500,000 sidewalk trees surveyed decennially in 1995, 2005, and 2015
This dataset is deprecated and not being updated.
Fork this kernel to get started with this dataset.
https://opendata.cityofnewyork.us/
This dataset is publicly available for anyone to use under the following terms provided by the Dataset Source - https://data.cityofnewyork.us/ - and is provided "AS IS" without any warranty, express or implied, from Google. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.
By accessing datasets and feeds available through NYC Open Data, the user agrees to all of the Terms of Use of NYC.gov as well as the Privacy Policy for NYC.gov. The user also agrees to any additional terms of use defined by the agencies, bureaus, and offices providing data. Public data sets made available on NYC Open Data are provided for informational purposes. The City does not warranty the completeness, accuracy, content, or fitness for any particular purpose or use of any public data set made available on NYC Open Data, nor are any such warranties to be implied or inferred with respect to the public data sets furnished therein.
The City is not liable for any deficiencies in the completeness, accuracy, content, or fitness for any particular purpose or use of any public data set, or application utilizing such data set, provided by any third party.
Banner photo by @bicadmedia from Unsplash.
On which New York City streets are you most likely to find a loud party?
Can you find the Virginia Pines in New York City?
Where was the only collision caused by an animal that injured a cyclist?
What’s the Citi Bike record for the Longest Distance in the Shortest Time (on a route with at least 100 rides)?
https://cloud.google.com/blog/big-data/2017/01/images/148467900588042/nyc-dataset-6.png
https://www.pioneerdatahub.co.uk/data/data-request-process/
OMOP dataset: Hospital COVID patients: severity, acuity, therapies, outcomes Dataset number 2.0
Coronavirus disease 2019 (COVID-19) was identified in January 2020. To date, there have been more than 6 million cases & more than 1.5 million deaths worldwide. Some individuals experience severe manifestations of infection, including viral pneumonia, adult respiratory distress syndrome (ARDS) & death. There is a pressing need for tools to stratify patients and identify those at greatest risk. Acuity scores are composite scores which help identify patients who are more unwell, to support & prioritise clinical care. There are no validated acuity scores for COVID-19 & it is unclear whether standard tools are accurate enough to provide this support. This secondary-care COVID OMOP dataset contains granular demographic, morbidity, serial acuity and outcome data to inform risk prediction tools in COVID-19.
PIONEER geography The West Midlands (WM) has a population of 5.9 million & includes a diverse ethnic & socio-economic mix. There is a higher than average percentage of minority ethnic groups. WM has a large number of elderly residents but is the youngest population in the UK. Each day >100,000 people are treated in hospital, see their GP or are cared for by the NHS. The West Midlands was one of the hardest hit regions for COVID admissions in both wave 1 & 2.
EHR. University Hospitals Birmingham NHS Foundation Trust (UHB) is one of the largest NHS Trusts in England, providing direct acute services & specialist care across four hospital sites, with 2.2 million patient episodes per year, 2750 beds & 100 ITU beds. UHB runs a fully electronic healthcare record (EHR) (PICS; Birmingham Systems), a shared primary & secondary care record (Your Care Connected) & a patient portal “My Health”. UHB has cared for >5000 COVID admissions to date. This is a subset of data in OMOP format.
Scope: All COVID swab confirmed hospitalised patients to UHB from January – August 2020. The dataset includes highly granular patient demographics & co-morbidities taken from ICD-10 & SNOMED-CT codes. Serial, structured data pertaining to care process (timings, staff grades, specialty review, wards), presenting complaint, acuity, all physiology readings (pulse, blood pressure, respiratory rate, oxygen saturations), all blood results, microbiology, all prescribed & administered treatments (fluids, antibiotics, inotropes, vasopressors, organ support), all outcomes.
Available supplementary data: Health data preceding & following admission event. Matched “non-COVID” controls; ambulance, 111, 999 data, synthetic data. Further OMOP data available as an additional service.
Available supplementary support: Analytics, Model build, validation & refinement; A.I.; Data partner support for ETL (extract, transform & load) process, Clinical expertise, Patient & end-user access, Purchaser access, Regulatory requirements, Data-driven trials, “fast screen” services.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The CIFAR-10 and CIFAR-100 datasets contain labeled subsets of the 80 million tiny images dataset. They were collected by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton.
* More info on CIFAR-100: https://www.cs.toronto.edu/~kriz/cifar.html
* TensorFlow listing of the dataset: https://www.tensorflow.org/datasets/catalog/cifar100
* GitHub repo for converting CIFAR-100 tarball files to png format: https://github.com/knjcode/cifar2png
The CIFAR-10 dataset consists of 60,000 32x32 colour images in 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images [in the original dataset].

This dataset is just like the CIFAR-10, except it has 100 classes containing 600 images each. There are 500 training images and 100 testing images per class. The 100 classes in the CIFAR-100 are grouped into 20 superclasses. Each image comes with a "fine" label (the class to which it belongs) and a "coarse" label (the superclass to which it belongs). However, this project does not contain the superclasses.
* Superclasses version: https://universe.roboflow.com/popular-benchmarks/cifar100-with-superclasses/
More background on the dataset:
https://i.imgur.com/5w8A0Vm.png (CIFAR-100 dataset classes and superclasses)
The dataset provides a train set (83.33% of images; 50,000 images) and a test set (16.67%; 10,000 images) only. The train set was further split to provide 80% of its images to the training set (approximately 40,000 images) and 20% of its images to the validation set (approximately 10,000 images).

Citation:

@TECHREPORT{Krizhevsky09learningmultiple,
author = {Alex Krizhevsky},
title = {Learning multiple layers of features from tiny images},
institution = {},
year = {2009}
}
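The split arithmetic above can be reproduced with a plain index shuffle; this is a generic sketch, not the tooling actually used for this project:

```python
import numpy as np

n_train_orig = 50_000   # CIFAR-100's original training set size

rng = np.random.default_rng(0)
indices = rng.permutation(n_train_orig)

# Further split the original train set 80/20 into train and validation
n_val = int(0.2 * n_train_orig)   # 10,000 images
val_idx = indices[:n_val]
train_idx = indices[n_val:]       # 40,000 images remain for training
```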
The dataset is a relational dataset of 8,000 households, representing a sample of the population of an imaginary middle-income country. The dataset contains two data files: one with variables at the household level, the other with variables at the individual level. It includes variables that are typically collected in population censuses (demography, education, occupation, dwelling characteristics, fertility, mortality, and migration) and in household surveys (household expenditure, anthropometric data for children, assets ownership). The data only includes ordinary households (no community households). The dataset was created using REaLTabFormer, a model that leverages deep learning methods. The dataset was created for the purpose of training and simulation and is not intended to be representative of any specific country.
The full-population dataset (with about 10 million individuals) is also distributed as open data.
The dataset is a synthetic dataset for an imaginary country. It was created to represent the population of this country by province (equivalent to admin1) and by urban/rural areas of residence.
Household, Individual
The dataset is a fully-synthetic dataset representative of the resident population of ordinary households for an imaginary middle-income country.
ssd
The sample size was set to 8,000 households. The fixed number of households to be selected from each enumeration area was set to 25. In a first stage, the number of enumeration areas to be selected in each stratum was calculated, proportional to the size of each stratum (stratification by geo_1 and urban/rural). Then 25 households were randomly selected within each enumeration area. The R script used to draw the sample is provided as an external resource.
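A minimal sketch of the two-stage design described above, with an invented sampling frame (the R script distributed with the dataset is the authoritative version):

```python
import random

# Invented sampling frame: stratum (province, urban/rural) -> {ea_id: [household ids]}
frame = {
    ("prov1", "urban"): {f"u-ea{i}": [f"u-ea{i}-hh{h}" for h in range(60)]
                         for i in range(40)},
    ("prov1", "rural"): {f"r-ea{i}": [f"r-ea{i}-hh{h}" for h in range(60)]
                         for i in range(60)},
}

TOTAL_EAS = 16   # toy total of enumeration areas to select (8,000/25 = 320 in the real design)
HH_PER_EA = 25   # fixed take per enumeration area, as in the dataset description

rng = random.Random(1)
total_frame_eas = sum(len(eas) for eas in frame.values())

sample = []
for stratum, eas in frame.items():
    # Stage 1: allocate EAs to each stratum proportionally to its size
    n_eas = round(TOTAL_EAS * len(eas) / total_frame_eas)
    # Stage 2: draw 25 households at random within each selected EA
    for ea_id in rng.sample(list(eas), n_eas):
        sample.extend(rng.sample(eas[ea_id], HH_PER_EA))
```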
other
The dataset is a synthetic dataset. Although the variables it contains are variables typically collected from sample surveys or population censuses, no questionnaire is available for this dataset. A "fake" questionnaire was however created for the sample dataset extracted from this dataset, to be used as training material.
The synthetic data generation process included a set of "validators" (consistency checks, based on which synthetic observations were assessed and rejected/replaced when needed). Also, some post-processing was applied to the data to produce the distributed data files.
This is a synthetic dataset; the "response rate" is 100%.
This data was collected as part of a university research paper in which COVID-19 cases were analysed using a cross-sectional regression model as at 17th May 2020. In order to better understand COVID-19 case growth at a country level, I decided to create a dataset containing key dates in the progression of the virus globally.
210 rows, 6 columns.
This dataset contains data relating to COVID-19 cases for 210 countries globally. Data was collected using the most recent and reliable information as at 17th May 2020. The majority of data was collected from Worldometer. https://www.worldometers.info/coronavirus/#countries
This dataset contains dates for the 1st coronavirus case, the 100th coronavirus case, and the 50th coronavirus case per 1 million people, for 210 countries. Data is also provided for the number of days between the 1st case and the 100th, as well as between the 1st case and the 50th per 1 million people.
Data prior to 15th February 2020 was not easily accessible at the country level from Worldometer. Therefore any dates before then were sourced not from Worldometer but from reputable government and local media sources.
Blanks (null values) indicate that the country in question has not reached either 50 coronavirus cases per 1 million people or 100 coronavirus cases. These were left blank.
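The day-count columns follow directly from the milestone dates; for an invented example country:

```python
from datetime import date

# Invented milestone dates for one country (not from the dataset)
first_case = date(2020, 2, 1)
hundredth_case = date(2020, 3, 15)

# Days between the 1st and the 100th case; left blank (None) where
# the country has not yet reached the milestone
days_to_100 = (hundredth_case - first_case).days
print(days_to_100)  # 43
```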
I would like to acknowledge Worldometer for providing the vast majority of the data in this file. Worldometer is a website that provides real time statistics on topics such as coronavirus cases. Its sources include government official reports as well as trusted local media sources all of which are referenced on their website.
Hopefully this data can be used to better understand the growth of COVID-19 cases globally.
https://spdx.org/licenses/CC0-1.0.html
This is the dataset for the study of "Social dilemma in the excess use of antimicrobials incurring antimicrobial resistance". The emergence of antimicrobial resistance (AMR) caused by the excess use of antimicrobials has come to be recognized as a global threat to public health. There is a ‘tragedy of the commons’ type social dilemma behind this excessive use of antimicrobials, which should be recognized by all stakeholders. To address this global threat, we thus surveyed eight countries/areas to determine whether people recognize this dilemma and showed that although more than half of the population pays little, if any, attention to it, almost 20% recognize this social dilemma, and 15–30% of those have a positive attitude toward solving that dilemma. We suspect that increasing individual awareness of this social dilemma contributes to decreasing the frequency of AMR emergencies. Methods We designed a questionnaire to observe a social dilemma in the excess use of antimicrobials incurring antimicrobial resistance by placing two types of imaginary artificial-intelligence (AI) physicians who perform medical practice from either an individual or societal perspective. We assume two AI medical diagnosis systems: “Individual precedence AI” (abbreviated Individual-AI) and “World precedence AI” (abbreviated World-AI). Both AIs diagnose and prescribe medicine automatically. The Individual-AI system diagnoses patients and prescribes medicine to prevent infections based on an individual perspective, including all prophylactic prescriptions against rare accidental infections (not yet present and unlikely to occur). It does not consider the global risk of AMR in the decision. The World-AI system, instead, takes into account the global mortality rate of AMR, aiming to reduce the total number of all AMR-related deaths. Because of this, this AI system does not prescribe antimicrobials against rare and not-yet-present infections. 
This questionnaire design allows us to observe the social dilemma. For example, it shows a typical social dilemma caused by preferring the use of Individual-AI for diagnosing oneself but preferring the use of World-AI for diagnosing strangers.
The survey entitled “Survey on Medical Advancement” was administered in 8 countries/areas and conducted 4 times. For the two surveys in Japan, an internet survey company, Cross Marketing Inc. (https://www.cross-m.co.jp/en/), created the questionnaire webpages based on our study design and also collected the data. As of April 2020, Cross Marketing Inc. had 4.79 million people in its active panel (survey participants who registered in advance); here, an active panel member is defined as a survey respondent who has been active within the last year. For the panel, the questionnaire and response column were displayed on the website, through which the respondents could complete and submit their responses. We extracted 500 submissions for each gender and each age group by random sampling from all samples collected during the survey periods. The surveys in the 7 other countries/areas (i.e., the United States, the United Kingdom, Sweden, Taiwan, Australia, Brazil, and Russia) were conducted by Cint (https://www.cint.com/), the world’s largest consumer network for digital survey-based research, headquartered in Sweden. Cint maintains a survey platform that contained more than 100 million consumer monitors in over 80 countries as of May 2020. For these surveys, Cint Japan (https://jp.cint.com/), the Japanese distributor of Cint, created translated questionnaire webpages based on our study design and also collected the data. We extracted at least 500 (US, UK, SWE, BRA, RUS) or 250 (TWN, AUS) submissions for each gender (male and female) and each age group (20s, 30s, 40s, 50s, and 60s) by random sampling from all samples collected during the survey periods. Note that both companies eliminated inconsistent or apathetic respondents.
For example, respondents with inconsistent responses (e.g., the registered age of the respondent differed from the reported age at the time of the survey.) were eliminated before reaching the authors. In addition, respondents with significantly short response times (i.e., shorter than 1 min) were eliminated because they may not have read the questions carefully.
https://publichealthscotland.scot/services/data-research-and-innovation-services/electronic-data-research-and-innovation-service-edris/services-we-offer/
The Brain Health Data Pilot (BHDP) project aims to be a shared database (like a library) of information for scientists studying brain health, especially for diseases like dementia, which affects about 900,000 people in the UK. Its main feature is a huge collection of brain images linked to routinely collected health records, both from NHS Scotland, which will help scientists learn more about dementia and other brain diseases. What is special about this database is that it will get better over time – as scientists use it and add their discoveries, it becomes more valuable.
This dataset is a subset of the Prescribing Information System (PIS) data for use with the BHDP project.
The information is supplied by the Practitioner & Counter Fraud Services Division (P&CFS), which is responsible for the processing and pricing of all prescriptions dispensed in Scotland. These data are augmented with information on prescriptions written in Scotland that were dispensed elsewhere in the United Kingdom. GPs write the vast majority of these prescriptions, with the remainder written by other authorised prescribers such as nurses and dentists. Also included in the dataset are prescriptions written in hospitals that are dispensed in the community. Note that prescriptions dispensed within hospitals are not included. Data includes CHI number, prescriber and dispenser details for community prescribing, costs and drug information. Data on practices (e.g. list size), organisational structures (e.g. practices within Community Health Partnerships (CHPs) and NHS Boards) and prescribable items (e.g. manufacturer, formulation code, strength) are also included. Around 100 million data items are loaded per annum.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The economic factors present in this dataset include gross domestic product (GDP) (100 million yuan), per-capita GDP (yuan/person), primary industry (100 million yuan), secondary industry (100 million yuan), tertiary industry (100 million yuan) and total investment in fixed assets (100 million yuan). Time-series data from 1949 to 2013 for the whole of China and all provinces are included. All data were collected from the China Statistical Yearbook (1981 to 2014) and the China Compendium of Statistics (1949 to 2008). These data are not intended for demarcation.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Context
The dataset tabulates the population of the United States by gender across 18 age groups. It lists the male and female population in each age group along with the gender ratio. The dataset can be used to understand the population distribution of the United States by gender and age. For example, it can identify the largest age group for both men and women. Additionally, it can show how the gender ratio changes from birth to the senior-most age group, and the male-to-female ratio across each age group.
Key observations
Largest age group (population): Male # 30-34 years (11.65 million) | Female # 30-34 years (11.41 million). Source: U.S. Census Bureau American Community Survey (ACS) 2019-2023 5-Year Estimates.
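A gender ratio of this kind is conventionally expressed as males per 100 females; a quick sketch using the key-observation figures above:

```python
# Largest age group (30-34 years), in millions, from the key observations
male_30_34 = 11.65
female_30_34 = 11.41

# Gender ratio: males per 100 females
gender_ratio = 100 * male_30_34 / female_30_34
print(round(gender_ratio, 1))  # 102.1
```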
When available, the data consists of estimates from the U.S. Census Bureau American Community Survey (ACS) 2019-2023 5-Year Estimates.
Age groups:
Scope of gender:
Please note that the American Community Survey asks a question about the respondent's current sex, but not about gender, sexual orientation, or sex at birth. The question is intended to capture data for biological sex, not gender. Respondents are asked to answer either Male or Female. Our research and this dataset mirror the data reported as Male and Female for gender distribution analysis.
Variables / Data Columns
Good to know
Margin of Error
Data in the dataset are based on estimates and are subject to sampling variability, and thus a margin of error. Neilsberg Research recommends using caution when presenting these estimates in your research.
Custom data
If you need custom data for any of your research projects, reports or presentations, you can contact our research staff at research@neilsberg.com to discuss the feasibility of a custom tabulation on a fee-for-service basis.
The Neilsberg Research team curates, analyzes and publishes demographic and economic data from a variety of public and proprietary sources, each of which often includes multiple surveys and programs. The large majority of Neilsberg Research aggregated datasets and insights are made available for free download at https://www.neilsberg.com/research/.
This dataset is a part of the main dataset for United States Population by Gender. You can refer to the full dataset here.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Many people believe that news media they dislike are biased, while their favorite news source isn't. Can we move beyond such subjectivity and measure media bias objectively, from data alone? The auto-generated figure below answers this question with a resounding "yes", showing left-leaning media on the left, right-leaning media on the right, establishment-critical media at the bottom, etc.
https://space.mit.edu/home/tegmark/phrasebias.jpg (media bias landscape)
Our algorithm analyzed over a million articles from over a hundred newspapers. It first auto-identifies phrases that help predict which newspaper a given article is from (e.g. "undocumented immigrant" vs. "illegal immigrant"). It then analyzes the frequencies of such phrases across newspapers and topics, producing the media bias landscape shown above. This means that although news bias is inherently political, its measurement need not be.
Here's our paper: arXiv:2109.00024. Our Kaggle dataset contains the discriminative phrases and phrase counts needed to reproduce all the plots in our paper. The files contain the following data:
- The directory phrase_selection contains tables such as immigration_phrases.csv that you can open with Microsoft Excel. They contain the phrases that our method found most informative for predicting which newspaper an article is from, sorted by decreasing utility. Our analysis uses only the ones passing all our screenings, i.e., with ones in columns D, E and F.
- The directory counts contains tables such as immigration_counts.csv, listing the number of times that each phrase occurs in each newspaper's coverage of that topic.
- The file blacklist.csv contains journalist names and other phrases that were discarded because they helped reveal the identity of a newspaper without reflecting any political bias.
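In spirit, the counts tables can be reproduced with a simple per-newspaper phrase tally. This is a toy sketch with made-up articles and phrases, not the paper's actual pipeline:

```python
from collections import Counter

# Toy corpus: (newspaper, article text) pairs -- invented for illustration
articles = [
    ("paper_a", "the undocumented immigrant population grew"),
    ("paper_b", "the illegal immigrant population grew"),
    ("paper_a", "an undocumented immigrant was detained"),
]

# Discriminative phrases of interest (cf. the phrase_selection tables)
phrases = ["undocumented immigrant", "illegal immigrant"]

# counts[phrase][newspaper] = occurrences of phrase in that newspaper's articles
counts = {p: Counter() for p in phrases}
for paper, text in articles:
    for phrase in phrases:
        counts[phrase][paper] += text.count(phrase)
```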
If you have questions, please contact Samantha at sdalonzo@mit.edu or Max at tegmark@mit.edu.
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
Multi-modal Machine Translation (MMT) enables the use of visual information to enhance the quality of translations, especially where the full context needed for unambiguous translation is not available to standard machine translation. Despite the increasing popularity of the technique, it lacks sufficient, high-quality datasets to realize its full potential. Hausa, a Chadic language, is a member of the Afro-Asiatic language family. It is estimated that about 100 to 150 million people speak the language, with more than 80 million indigenous speakers. This is more than any of the other Chadic languages. Despite the large number of speakers, the Hausa language is considered a low-resource language in natural language processing (NLP), due to the absence of sufficient resources to implement most NLP tasks. While some datasets exist, they are either scarce, machine-generated or in the religious domain. Therefore, there is a need to create training and evaluation data for implementing machine learning tasks and bridging the research gap in the language. This work presents the Hausa Visual Genome (HaVG), a dataset that contains the description of an image or a section within the image in Hausa and its equivalent in English. The dataset was prepared by automatically translating the English descriptions of the images in the Hindi Visual Genome (HVG). The synthetic Hausa data was then carefully post-edited, taking the respective images into account. The data consists of 32,923 images and their descriptions, divided into training, development, test, and challenge test sets. The Hausa Visual Genome is the first dataset of its kind and can be used for Hausa-English machine translation, multi-modal research, image description, and various other natural language processing and generation tasks.
The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike License.
Compared to the preprocessed version of Penn Treebank (PTB), WikiText-2 is over 2 times larger and WikiText-103 is over 110 times larger. The WikiText dataset also features a far larger vocabulary and retains the original case, punctuation and numbers - all of which are removed in PTB. As it is composed of full articles, the dataset is well suited for models that can take advantage of long term dependencies.
Spotify Million Playlist Dataset Challenge
Summary
The Spotify Million Playlist Dataset Challenge consists of a dataset and an evaluation framework to enable research in music recommendation. It is a continuation of the RecSys Challenge 2018, which ran from January to July 2018. The dataset contains 1,000,000 playlists, including playlist titles and track titles, created by users on the Spotify platform between January 2010 and October 2017. The evaluation task is automatic playlist continuation: given a seed playlist title and/or an initial set of tracks in a playlist, predict the subsequent tracks in that playlist. This is an open-ended challenge intended to encourage research in music recommendation, and no prizes will be awarded (other than bragging rights).
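To make the continuation task concrete, here is a naive popularity baseline: recommend the tracks that appear most often across training playlists and are not already in the seed. This is purely an illustrative sketch with toy data, not a competitive system or the challenge's reference implementation.

```python
from collections import Counter

def popularity_baseline(train_playlists, seed_tracks, k=3):
    """Recommend the k most frequent training tracks not in the seed."""
    counts = Counter(t for pl in train_playlists for t in pl)
    seed = set(seed_tracks)
    ranked = [t for t, _ in counts.most_common() if t not in seed]
    return ranked[:k]

# Toy data; real MPD playlists would be lists of Spotify track URIs.
train = [
    ["a", "b", "c"],
    ["a", "b", "d"],
    ["a", "e"],
]
print(popularity_baseline(train, seed_tracks=["a"], k=2))
```

Real submissions replace the global popularity ranking with models conditioned on the seed title and tracks.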
Background
Playlists like Today’s Top Hits and RapCaviar have millions of loyal followers, while Discover Weekly and Daily Mix are just a couple of our personalized playlists made especially to match your unique musical tastes.
Our users love playlists too. In fact, the Digital Music Alliance, in their 2018 Annual Music Report, states that 54% of consumers say that playlists are replacing albums in their listening habits.
But our users don’t love just listening to playlists, they also love creating them. To date, over 4 billion playlists have been created and shared by Spotify users. People create playlists for all sorts of reasons: some playlists group together music categorically (e.g., by genre, artist, year, or city), by mood, theme, or occasion (e.g., romantic, sad, holiday), or for a particular purpose (e.g., focus, workout). Some playlists are even made to land a dream job, or to send a message to someone special.
The other thing we love here at Spotify is playlist research. By learning from the playlists that people create, we can learn all sorts of things about the deep relationship between people and music. Why do certain songs go together? What is the difference between “Beach Vibes” and “Forest Vibes”? And what words do people use to describe their playlists?
By learning more about the nature of playlists, we may also be able to suggest other tracks that a listener would enjoy in the context of a given playlist. This can make playlist creation easier, and ultimately help people find more of the music they love.
Dataset
To enable this type of research at scale, in 2018 we sponsored the RecSys Challenge 2018, which introduced the Million Playlist Dataset (MPD) to the research community. Sampled from the over 4 billion public playlists on Spotify, this dataset of 1 million playlists consists of over 2 million unique tracks by nearly 300,000 artists, and represents the largest public dataset of music playlists in the world. The dataset includes public playlists created by US Spotify users between January 2010 and November 2017. The challenge ran from January to July 2018, and received 1,467 submissions from 410 teams. A summary of the challenge and the top scoring submissions was published in the ACM Transactions on Intelligent Systems and Technology.
In September 2020, we re-released the dataset as an open-ended challenge on AIcrowd.com. The dataset can now be downloaded by registered participants from the Resources page.
Each playlist in the MPD contains a playlist title, the track list (including track IDs and metadata), and other metadata fields (last edit time, number of playlist edits, and more). All data is anonymized to protect user privacy. Playlists are sampled with some randomization, are manually filtered for playlist quality and to remove offensive content, and have some dithering and fictitious tracks added to them. As such, the dataset is not representative of the true distribution of playlists on the Spotify platform, and must not be interpreted as such in any research or analysis performed on the dataset.
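The dataset is distributed as JSON slices, so a playlist record can be processed with nothing beyond the standard library. The sketch below builds a toy record in roughly the shape described above; the field names (`name`, `num_edits`, `tracks`, `track_uri`) are assumptions for illustration, not a guaranteed schema, so check the downloaded slice files.

```python
import json

# Toy slice mimicking the assumed MPD structure: a top-level list of
# playlists, each with a title, edit metadata, and a track list.
slice_json = json.dumps({
    "playlists": [
        {
            "name": "Beach Vibes",
            "num_edits": 4,
            "tracks": [
                {"track_uri": "spotify:track:AAA", "artist_name": "X"},
                {"track_uri": "spotify:track:BBB", "artist_name": "Y"},
            ],
        }
    ]
})

data = json.loads(slice_json)
for pl in data["playlists"]:
    # Title plus track count -- the seed information used in the challenge.
    print(pl["name"], len(pl["tracks"]))
```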
Dataset Contains
1000 examples of each scenario:
- Title only (no tracks)
- Title and first track
- Title and first 5 tracks
- First 5 tracks only
- Title and first 10 tracks
- First 10 tracks only
- Title and first 25 tracks
- Title and 25 random tracks
- Title and first 100 tracks
- Title and 100 random tracks
Download Link
Full Details: https://www.aicrowd.com/challenges/spotify-million-playlist-dataset-challenge
Download Link: https://www.aicrowd.com/challenges/spotify-million-playlist-dataset-challenge/dataset_files
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is the second version of the Google Landmarks dataset (GLDv2), which contains images annotated with labels representing human-made and natural landmarks. The dataset can be used for landmark recognition and retrieval experiments. This version of the dataset contains approximately 5 million images, split into 3 sets of images: train, index and test. The dataset was presented in our CVPR'20 paper. In this repository, we present download links for all dataset files and relevant code for metric computation. This dataset was associated with two Kaggle challenges, on landmark recognition and landmark retrieval. Results were discussed as part of a CVPR'19 workshop. We also provide scores for the top 10 teams in the challenges, based on the latest ground-truth version. Please visit the challenge and workshop webpages for more details on the data, tasks and technical solutions from top teams.
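The official challenges define their own evaluation metrics (see the repository's metric code). As a simpler illustration of how retrieval results are scored against ground truth, a precision@k computation might look like the following; the image IDs are hypothetical.

```python
def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved images that are relevant."""
    top_k = retrieved[:k]
    return sum(1 for img in top_k if img in relevant) / k

# Hypothetical ranked retrieval results for one query image.
retrieved = ["img3", "img7", "img1", "img9"]
relevant = {"img7", "img9"}
print(precision_at_k(retrieved, relevant, 2))
```

The challenge metrics additionally account for ranking order and per-query averaging, which this sketch omits.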
Success.ai’s Education Industry Data provides access to comprehensive profiles of global professionals in the education sector. Sourced from over 700 million verified LinkedIn profiles, this dataset includes actionable insights and verified contact details for teachers, school administrators, university leaders, and other decision-makers. Whether your goal is to collaborate with educational institutions, market innovative solutions, or recruit top talent, Success.ai ensures your efforts are supported by accurate, enriched, and continuously updated data.
Why Choose Success.ai’s Education Industry Data?
1. Comprehensive Professional Profiles: Access verified LinkedIn profiles of teachers, school principals, university administrators, curriculum developers, and education consultants. AI-validated profiles ensure 99% accuracy, reducing bounce rates and enabling effective communication.
2. Global Coverage Across Education Sectors: Includes professionals from public schools, private institutions, higher education, and educational NGOs. Covers markets across North America, Europe, APAC, South America, and Africa for a truly global reach.
3. Continuously Updated Dataset: Real-time updates reflect changes in roles, organizations, and industry trends, ensuring your outreach remains relevant and effective.
4. Tailored for Educational Insights: Enriched profiles include work histories, academic expertise, subject specializations, and leadership roles for a deeper understanding of the education sector.
Data Highlights:
- 700M+ Verified LinkedIn Profiles: Access a global network of education professionals.
- 100M+ Work Emails: Direct communication with teachers, administrators, and decision-makers.
- Enriched Professional Histories: Gain insights into career trajectories, institutional affiliations, and areas of expertise.
- Industry-Specific Segmentation: Target professionals in K-12 education, higher education, vocational training, and educational technology.
Key Features of the Dataset:
1. Education Sector Profiles: Identify and connect with teachers, professors, academic deans, school counselors, and education technologists. Engage with individuals shaping curricula, institutional policies, and student success initiatives.
2. Detailed Institutional Insights: Leverage data on school sizes, student demographics, geographic locations, and areas of focus. Tailor outreach to align with institutional goals and challenges.
3. Advanced Filters for Precision Targeting: Refine searches by region, subject specialty, institution type, or leadership role. Customize campaigns to address specific needs, such as professional development or technology adoption.
4. AI-Driven Enrichment: Enhanced datasets include actionable details for personalized messaging and targeted engagement. Highlight educational milestones, professional certifications, and key achievements.
Strategic Use Cases:
1. Product Marketing and Outreach: Promote educational technology, learning platforms, or training resources to teachers and administrators. Engage with decision-makers driving procurement and curriculum development.
2. Collaboration and Partnerships: Identify institutions for collaborations on research, workshops, or pilot programs. Build relationships with educators and administrators passionate about innovative teaching methods.
3. Talent Acquisition and Recruitment: Target HR professionals and academic leaders seeking faculty, administrative staff, or educational consultants. Support hiring efforts for institutions looking to attract top talent in the education sector.
4. Market Research and Strategy: Analyze trends in education systems, curriculum development, and technology integration to inform business decisions. Use insights to adapt products and services to evolving educational needs.
Why Choose Success.ai?
1. Best Price Guarantee: Access industry-leading Education Industry Data at unmatched pricing for cost-effective campaigns and strategies.
2. Seamless Integration: Easily integrate verified data into CRMs, recruitment platforms, or marketing systems using downloadable formats or APIs.
3. AI-Validated Accuracy: Depend on 99% accurate data to reduce wasted outreach and maximize engagement rates.
4. Customizable Solutions: Tailor datasets to specific educational fields, geographic regions, or institutional types to meet your objectives.
Strategic APIs for Enhanced Campaigns:
1. Data Enrichment API: Enrich existing records with verified education professional profiles to enhance engagement and targeting.
2. Lead Generation API: Automate lead generation for a consistent pipeline of qualified professionals in the education sector.
Success.ai’s Education Industry Data enables you to connect with educators, administrators, and decision-makers transforming global...
Cristiano Ronaldo has one of the most popular Instagram accounts as of April 2024.
The Portuguese footballer is the most-followed person on the photo-sharing platform, with 628 million followers; only Instagram's own account ranks higher, with roughly 672 million followers.
How popular is Instagram?
Instagram is a photo-sharing social networking service that enables users to take pictures and edit them with filters. The platform allows users to post and share their images online and directly with their friends and followers on the social network. The cross-platform app reached one billion monthly active users in mid-2018. In 2020, there were over 114 million Instagram users in the United States, a figure experts projected to surpass 127 million in 2023.
Who uses Instagram?
Instagram audiences are predominantly young: recent data shows that almost 60 percent of U.S. Instagram users are aged 34 years or younger. Fall 2020 data reveals that Instagram is also one of the most popular social media platforms among teens and one of the social networks with the biggest reach among teens in the United States.
Celebrity influencers on Instagram
Many celebrities and athletes are brand spokespeople and generate additional income with social media advertising and sponsored content. Unsurprisingly, Ronaldo ranked first again, as the average media value of one of his Instagram posts was 985,441 U.S. dollars.