100+ datasets found
  1. Trusted Research Environments: Analysis of Characteristics and Data Availability

    • researchdata.tuwien.ac.at
    bin, csv
    Updated Jun 25, 2024
    Cite
    Martin Weise; Martin Weise; Andreas Rauber; Andreas Rauber (2024). Trusted Research Environments: Analysis of Characteristics and Data Availability [Dataset]. http://doi.org/10.48436/cv20m-sg117
    Explore at:
    Available download formats: bin, csv
    Dataset updated
    Jun 25, 2024
    Dataset provided by
    TU Wien
    Authors
    Martin Weise; Martin Weise; Andreas Rauber; Andreas Rauber
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Trusted Research Environments (TREs) enable analysis of sensitive data under strict security assertions that protect the data with technical, organizational, and legal measures from (accidentally) being leaked outside the facility. While many TREs exist in Europe, little information is publicly available on their architecture, descriptions of their building blocks, and their slight technical variations. To shed light on these problems, we give an overview of existing, publicly described TREs and a bibliography linking to the system descriptions. We further analyze their technical characteristics, especially their commonalities and variations, and provide insight into their data type characteristics and availability. Our literature study shows that 47 TREs worldwide provide access to sensitive data, of which two-thirds provide data themselves, predominantly via secure remote access. Statistical offices make available the majority of the sensitive data records included in this study.

    Methodology

    We performed a literature study covering 47 TREs worldwide using scholarly databases (Scopus, Web of Science, IEEE Xplore, Science Direct), a computer science library (dblp.org), Google and grey literature focusing on retrieving the following source material:

    • Peer-reviewed articles where available,
    • TRE websites,
    • TRE metadata catalogs.

    The goal of this literature study is to discover existing TREs and analyze their characteristics and data availability, giving an overview of the infrastructure available for sensitive-data research, as many European initiatives have been emerging in recent months.

    Technical details

    This dataset consists of five comma-separated values (.csv) files describing our inventory:

    • countries.csv: Table of countries with columns id (number), name (text) and code (text, in ISO 3166-A3 encoding, optional)
    • tres.csv: Table of TREs with columns id (number), name (text), countryid (number, referring to column id of table countries), structureddata (bool, optional), datalevel (one of [1=de-identified, 2=pseudonymized, 3=anonymized], optional), outputcontrol (bool, optional), inceptionyear (date, optional), records (number, optional), datatype (one of [1=claims, 2=linked records], optional), statistics_office (bool), size (number, optional), source (text, optional), comment (text, optional)
    • access.csv: Table of access modes of TREs with columns id (number), suf (bool, optional), physical_visit (bool, optional), external_physical_visit (bool, optional), remote_visit (bool, optional)
    • inclusion.csv: Table of TREs included in the literature study with columns id (number), included (bool), exclusion reason (one of [peer review, environment, duplicate], optional), comment (text, optional)
    • major_fields.csv: Table of data categorization into the major research fields with columns id (number), life_sciences (bool, optional), physical_sciences (bool, optional), arts_and_humanities (bool, optional), social_sciences (bool, optional).

    Additionally, a MariaDB (10.5 or higher) schema definition .sql file is provided, modelling the schema for databases:

    • schema.sql: Schema definition file to create the tables and views used in the analysis.

    The analysis was done in a Jupyter notebook, which can be found in our source code repository: https://gitlab.tuwien.ac.at/martin.weise/tres/-/blob/master/analysis.ipynb
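
    As a quick illustration of how the inventory files fit together, here is a minimal sketch, assuming Python with pandas; the aggregation at the end is an illustrative query, not part of the published analysis.

    ```python
    import pandas as pd

    countries = pd.read_csv("countries.csv")  # columns: id, name, code
    tres = pd.read_csv("tres.csv")            # columns: id, name, countryid, ...

    # countryid in tres.csv refers to column id of table countries
    inventory = tres.merge(
        countries, left_on="countryid", right_on="id",
        suffixes=("_tre", "_country"),
    )

    # Illustrative query: number of TREs with output control, per country
    print(inventory.groupby("name_country")["outputcontrol"].sum())
    ```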

  2. An Insight Into What Is Data Analytics?

    • kaggle.com
    zip
    Updated Sep 19, 2022
    Cite
    itcourses (2022). An Insight Into What Is Data Analytics? [Dataset]. https://www.kaggle.com/itcourses/an-insight-into-what-is-data-analytics
    Explore at:
    Available download formats: zip (60771 bytes)
    Dataset updated
    Sep 19, 2022
    Authors
    itcourses
    Description

    What exactly is data analytics, and do you want to learn it? Visit BookMyShiksha; they provide the Best Data Analytics Course in Delhi, India. Analytics can be defined as "the science of analysis." A more practical definition, however, would be how an entity, such as a business, arrives at an optimal or realistic decision based on available data. Business managers may choose to make decisions based on past experiences or rules of thumb, or there may be other qualitative aspects to decision-making; still, it is not an analytical decision-making process unless data is considered.

    Analytics has been used in business since Frederick Winslow Taylor pioneered time management exercises in the late 1800s. Henry Ford revolutionized manufacturing by measuring the pacing of the assembly line. However, analytics gained popularity in the late 1960s, when computers were used in decision support systems. Analytics has evolved since then, with the development of enterprise resource planning (ERP) systems, data warehouses, and a wide range of other hardware and software tools and applications.

    Analytics is now used by businesses of all sizes. For example, if you ask my fruit vendor why he stopped servicing our street, he will tell you that we try to bargain a lot, which causes him to lose money, but on the road next to mine he has some great customers for whom he provides excellent service. This is the nucleus of analytics. Our fruit vendor TESTED servicing my street and realised he was losing money; within a month, he stopped servicing us and will not show up even if we ask him. How many companies today know who their MOST PROFITABLE CUSTOMERS are? And, knowing which customers are the most profitable, how should you direct your efforts to acquire more of them?

    Analytics is used to drive the overall organizational strategy in large corporations. Here are a few examples: • Capital One, a credit card company based in the United States, employs analytics to differentiate customers based on credit risk and to match customer characteristics with appropriate product offerings.

    • Harrah's Casino, another American company, discovered that, contrary to popular belief, their most profitable customers are those who play slots. They have developed a marketing program to attract and retain their MOST PROFITABLE CUSTOMERS in order to capitalise on this insight.

    • Netflix, an online movie service, recommends the most logical movies based on past behavior. This model has increased their sales because the movie choices are based on the customers' preferences, and thus the experience is tailored to each individual.

    Analytics is commonly used to study business data using statistical analysis to discover and understand historical patterns in order to predict and improve future business performance. In addition, some people use the term to refer to the application of mathematics in business. Others believe that the field of analytics includes the use of operations research, statistics, and probability; however, limiting the field to statistics and mathematics would be incorrect.

    While the concept is simple and intuitive, the widespread use of analytics to drive business is still in its infancy. Stay tuned for the second part of this article to learn more about the Science of Analytics.

  3. Data Use in Academia Dataset

    • datacatalog.worldbank.org
    csv, utf-8
    Updated Nov 27, 2023
    Cite
    Semantic Scholar Open Research Corpus (S2ORC) (2023). Data Use in Academia Dataset [Dataset]. https://datacatalog.worldbank.org/search/dataset/0065200/data_use_in_academia_dataset
    Explore at:
    Available download formats: utf-8, csv
    Dataset updated
    Nov 27, 2023
    Dataset provided by
    Semantic Scholar Open Research Corpus (S2ORC)
    Brian William Stacy
    License

    https://datacatalog.worldbank.org/public-licenses?fragment=cc

    Description

    This dataset contains metadata (title, abstract, date of publication, field, etc.) for around 1 million academic articles. Each record contains additional information on the country of study and whether the article makes use of data. Machine learning tools were used to classify the country of study and data use.


    Our data source of academic articles is the Semantic Scholar Open Research Corpus (S2ORC) (Lo et al. 2020). The corpus contains more than 130 million English language academic papers across multiple disciplines. The papers included in the Semantic Scholar corpus are gathered directly from publishers, from open archives such as arXiv or PubMed, and crawled from the internet.


    We placed some restrictions on the articles to make them usable and relevant for our purposes. First, only articles with an abstract and a parsed PDF or LaTeX file are included in the analysis. The full text of the abstract is necessary to classify the country of study and whether the article uses data. The parsed PDF or LaTeX file is important for extracting key information such as the date of publication and field of study. This restriction eliminated a large number of articles in the original corpus. Around 30 million articles remain after keeping only articles with a parsable (i.e., suitable for digital processing) PDF, and around 26% of those 30 million are eliminated when removing articles without an abstract. Second, only articles from the years 2000 to 2020 were considered. This restriction eliminated an additional 9% of the remaining articles. Finally, articles from the following fields of study were excluded, as we aim to focus on fields that are likely to use data produced by countries' national statistical systems: Biology, Chemistry, Engineering, Physics, Materials Science, Environmental Science, Geology, History, Philosophy, Math, Computer Science, and Art. Fields that are included are: Economics, Political Science, Business, Sociology, Medicine, and Psychology. This third restriction eliminated around 34% of the remaining articles. From an initial corpus of 136 million articles, this resulted in a final corpus of around 10 million articles.


    Due to the intensive computational resources required, a set of 1,037,748 articles was randomly selected from the 10 million articles in our restricted corpus as a convenience sample.


    The empirical approach employed in this project utilizes text mining with Natural Language Processing (NLP). The goal of NLP is to extract structured information from raw, unstructured text. In this project, NLP is used to extract the country of study and whether the paper makes use of data. We will discuss each of these in turn.


    To determine the country or countries of study in each academic article, two approaches are employed based on information found in the title, abstract, or topic fields. The first approach uses regular expression searches based on the presence of ISO 3166 country names. A defined set of country names is compiled, and the presence of these names is checked in the relevant fields. This approach is transparent, widely used in social science research, and easily extended to other languages. However, there is a potential for exclusion errors if a country's name is spelled in a non-standard way.


    The second approach is based on Named Entity Recognition (NER), which uses machine learning to identify objects from text, utilizing the spaCy Python library. The Named Entity Recognition algorithm splits text into named entities, and NER is used in this project to identify countries of study in the academic articles. SpaCy supports multiple languages and has been trained on multiple spellings of countries, overcoming some of the limitations of the regular expression approach. If a country is identified by either the regular expression search or NER, it is linked to the article. Note that one article can be linked to more than one country.
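
    A minimal sketch, assuming Python with spaCy, of how the two detection approaches could be combined; the country list is truncated for illustration and the model name (en_core_web_sm) is an assumption, not taken from the paper.

    ```python
    import re
    import spacy

    COUNTRY_NAMES = ["Iraq", "Kenya", "Brazil"]  # in practice, the full ISO 3166 list
    country_re = re.compile(r"\b(" + "|".join(COUNTRY_NAMES) + r")\b")

    nlp = spacy.load("en_core_web_sm")  # assumed pretrained English pipeline

    def countries_of_study(title: str, abstract: str) -> set:
        text = f"{title}. {abstract}"
        found = set(country_re.findall(text))  # approach 1: regex over country names
        # Approach 2: NER; in a real pipeline, GPE mentions would still be
        # normalized to country names (handling alternative spellings).
        found |= {ent.text for ent in nlp(text).ents if ent.label_ == "GPE"}
        return found  # one article can be linked to more than one country
    ```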


    The second task is to classify whether the paper uses data. A supervised machine learning approach is employed: 3,500 publications were first randomly selected and manually labeled by human raters using the Mechanical Turk service (Paszke et al. 2019).[1] To make sure the human raters had a similar and appropriate definition of data in mind, they were given the following instructions before seeing their first paper:


    Each of these documents is an academic article. The goal of this study is to measure whether a specific academic article is using data and from which country the data came.

    There are two classification tasks in this exercise:

    1. Identifying whether an academic article is using data from any country.

    2. Identifying from which country that data came.

    For task 1, we are looking specifically at the use of data. Data is any information that has been collected, observed, generated, or created to produce research findings. As an example, a study that reports findings or analysis using survey data uses data. Some clues that a study does use data include whether a survey or census is described, a statistical model is estimated, or a table of means or summary statistics is reported.

    After an article is classified as using data, please note the type of data used. The options are population or business census, survey data, administrative data, geospatial data, private sector data, and other data. If no data is used, then mark "Not applicable". In cases where multiple data types are used, please click multiple options.[2]

    For task 2, we are looking at the country or countries that are studied in the article. In some cases, no country may be applicable. For instance, if the research is theoretical and has no specific country application. In some cases, the research article may involve multiple countries. In these cases, select all countries that are discussed in the paper.

    We expect between 10 and 35 percent of all articles to use data.


    The median amount of time that a worker spent on an article, measured as the time between when the article was accepted for classification by the worker and when the classification was submitted, was 25.4 minutes. If human raters were used exclusively rather than machine learning tools, the corpus of 1,037,748 articles examined in this study would take around 50 years of human work time to review, at a cost of $3,113,244 (assuming a cost of $3 per article, as was paid to the MTurk workers).


    A model is next trained on the 3,500 labelled articles. We use a distilled version of the BERT (Bidirectional Encoder Representations from Transformers) model to encode raw text into a numeric format suitable for predictions (Devlin et al. 2018). BERT is pre-trained on a large corpus comprising the Toronto Book Corpus and Wikipedia. The distilled version (DistilBERT) is a compressed model that is 60% the size of BERT, retains 97% of its language understanding capabilities, and is 60% faster (Sanh, Debut, Chaumond, and Wolf 2019). We use PyTorch to produce a model that classifies articles based on the labeled data. Of the 3,500 articles hand-coded by the MTurk workers, 900 were fed to the machine learning model; 900 articles were selected because of computational limitations in training the NLP model. A classification of "uses data" was assigned if the model predicted an article used data with at least 90% confidence.
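
    A minimal sketch, assuming the Hugging Face transformers and PyTorch libraries, of a DistilBERT classifier with a 90% decision threshold; the checkpoint name and two-label head are illustrative assumptions, and the head would first need fine-tuning on the hand-labeled articles.

    ```python
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    # Illustrative checkpoint; the classification head is untrained until
    # fine-tuned on the labeled articles.
    tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "distilbert-base-uncased", num_labels=2)  # labels: 0=no data, 1=uses data

    def predict_uses_data(abstract: str) -> bool:
        inputs = tokenizer(abstract, truncation=True, return_tensors="pt")
        with torch.no_grad():
            probs = model(**inputs).logits.softmax(dim=-1)
        # assign "uses data" only at >= 90% predicted confidence, as in the text
        return probs[0, 1].item() >= 0.90
    ```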


    The performance of the models classifying articles to countries and as using data or not can be compared to the classification by the human raters. We treat the human raters as providing the ground truth. This may underestimate model performance if the raters at times got the allocation wrong in a way that would not apply to the model; for instance, a human rater could mistake the Republic of Korea for the Democratic People's Republic of Korea. If both humans and the model make the same kinds of errors, then the performance reported here will be overestimated.


    The model was able to predict whether an article made use of data with 87% accuracy, evaluated on the set of articles held out of model training. The correlation between the number of articles written about each country using data, estimated under the two approaches, is given in the figure below. The number of articles represents an aggregate total of

  4. Criteria and definitions for study selection and data analysis.

    • datasetcatalog.nlm.nih.gov
    • plos.figshare.com
    Updated Feb 22, 2013
    + more versions
    Cite
    Junghanss, Thomas; Eckerle, Isabella; Zwahlen, Marcel; Rosenberger, Kerstin Daniela (2013). Criteria and definitions for study selection and data analysis. [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0001660452
    Explore at:
    Dataset updated
    Feb 22, 2013
    Authors
    Junghanss, Thomas; Eckerle, Isabella; Zwahlen, Marcel; Rosenberger, Kerstin Daniela
    Description

    Inclusion and exclusion criteria (A) and definitions used for study selection and data analysis (B). *If a shorter time interval was accepted exceptionally, this is indicated in the text or by a footnote.

  5. Descriptive statistics.

    • plos.figshare.com
    xls
    Updated Oct 31, 2023
    + more versions
    Cite
    Mrinal Saha; Aparna Deb; Imtiaz Sultan; Sujat Paul; Jishan Ahmed; Goutam Saha (2023). Descriptive statistics. [Dataset]. http://doi.org/10.1371/journal.pgph.0002475.t003
    Explore at:
    Available download formats: xls
    Dataset updated
    Oct 31, 2023
    Dataset provided by
    PLOS Global Public Health
    Authors
    Mrinal Saha; Aparna Deb; Imtiaz Sultan; Sujat Paul; Jishan Ahmed; Goutam Saha
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Vitamin D insufficiency appears to be prevalent in SLE patients. Multiple factors potentially contribute to lower vitamin D levels, including limited sun exposure, the use of sunscreen, darker skin complexion, aging, obesity, specific medical conditions, and certain medications. The study aims to assess the risk factors associated with low vitamin D levels in SLE patients in the southern part of Bangladesh, a region noted for a high prevalence of SLE. The research additionally investigates the possible correlation between vitamin D and the SLEDAI score, seeking to understand the potential benefits of vitamin D in enhancing disease outcomes for SLE patients.

    The study incorporates a dataset consisting of 50 patients from the southern part of Bangladesh and evaluates their clinical and demographic data. An initial exploratory data analysis is conducted to gain insights into the data, which includes calculating means and standard deviations, performing correlation analysis, and generating heat maps. Relevant inferential statistical tests, such as the Student's t-test, are also employed. In the machine learning part of the analysis, this study utilizes supervised learning algorithms, specifically Linear Regression (LR) and Random Forest (RF). To optimize the hyperparameters of the RF model and mitigate the risk of overfitting given the small dataset, a 3-fold cross-validation strategy is implemented. The study also calculates bootstrapped confidence intervals to provide robust uncertainty estimates and further validate the approach. A comprehensive feature importance analysis is carried out using RF feature importance, permutation-based feature importance, and SHAP values.

    The LR model yields an RMSE of 4.83 (CI: 2.70, 6.76) and an MAE of 3.86 (CI: 2.06, 5.86), whereas the RF model achieves better results, with an RMSE of 2.98 (CI: 2.16, 3.76) and an MAE of 2.68 (CI: 1.83, 3.52). Both models identify Hb, CRP, ESR, and age as significant contributors to vitamin D level predictions. Despite the lack of a significant association between SLEDAI and vitamin D in the statistical analysis, the machine learning models suggest a potential nonlinear dependency of vitamin D on SLEDAI. These findings highlight the importance of these factors in managing vitamin D levels in SLE patients.

    The study concludes that there is a high prevalence of vitamin D insufficiency in SLE patients. Although a direct linear correlation between the SLEDAI score and vitamin D levels is not observed, machine learning models suggest the possibility of a nonlinear relationship. Furthermore, factors such as Hb, CRP, ESR, and age are identified as more significant in predicting vitamin D levels. Thus, the study suggests that monitoring these factors may be advantageous in managing vitamin D levels in SLE patients. Given the immunological nature of SLE, the potential role of vitamin D in SLE disease activity could be substantial. Therefore, it underscores the need for further large-scale studies to corroborate this hypothesis.
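
    A minimal sketch, assuming Python with scikit-learn, of the RF-with-3-fold-CV setup the abstract describes; the feature matrix, hyperparameter grid, and random data are placeholders, not the study's actual variables or settings.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import GridSearchCV

    rng = np.random.default_rng(0)
    X = rng.random((50, 4))   # placeholder for Hb, CRP, ESR, age (n=50 patients)
    y = rng.random(50) * 40   # placeholder vitamin D levels

    # 3-fold CV over a small hyperparameter grid to limit overfitting on n=50
    search = GridSearchCV(
        RandomForestRegressor(random_state=0),
        param_grid={"n_estimators": [100, 300], "max_depth": [2, 4, None]},
        cv=3,
        scoring="neg_root_mean_squared_error",
    )
    search.fit(X, y)
    print(search.best_params_, -search.best_score_)  # best mean RMSE across folds
    ```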

  6. Surveillance Data Analytics Services Market Research Report 2033

    • dataintelo.com
    csv, pdf, pptx
    Updated Sep 30, 2025
    Cite
    Dataintelo (2025). Surveillance Data Analytics Services Market Research Report 2033 [Dataset]. https://dataintelo.com/report/surveillance-data-analytics-services-market
    Explore at:
    Available download formats: csv, pdf, pptx
    Dataset updated
    Sep 30, 2025
    Dataset authored and provided by
    Dataintelo
    License

    https://dataintelo.com/privacy-and-policy

    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Surveillance Data Analytics Services Market Outlook



    According to our latest research, the global Surveillance Data Analytics Services market size reached USD 7.9 billion in 2024, reflecting a robust expansion driven by escalating security needs and technological advancements. The market is expected to grow at a CAGR of 16.2% during the forecast period, reaching USD 36.2 billion by 2033. This remarkable growth is primarily fueled by the increasing adoption of AI-powered surveillance solutions across various sectors, as organizations and governments worldwide invest heavily in data-driven security infrastructure to enhance real-time monitoring, threat detection, and operational efficiency.




    One of the primary growth factors propelling the Surveillance Data Analytics Services market is the rapid proliferation of smart cities and the corresponding need for intelligent security systems. Urbanization is accelerating globally, with city administrations focusing on deploying integrated surveillance networks that leverage advanced analytics to monitor traffic, prevent crime, and ensure public safety. The integration of IoT devices, high-definition video cameras, and AI-based analytics platforms allows for real-time data processing and actionable insights, enabling authorities to respond swiftly to incidents. Moreover, the emphasis on predictive analytics for crime prevention and traffic management is pushing both public and private sector entities to invest in state-of-the-art surveillance data analytics services.




    Another significant driver for market growth is the digital transformation initiatives undertaken by enterprises in sectors such as retail, banking, industrial, and transportation. Businesses are increasingly recognizing the value of surveillance data not just for security, but also for operational intelligence and customer behavior analysis. Retailers, for instance, are utilizing video analytics to optimize store layouts, monitor foot traffic, and reduce shrinkage. Similarly, the financial sector employs surveillance analytics to detect suspicious activities and enhance compliance with regulatory mandates. The convergence of AI, machine learning, and big data analytics is enabling organizations to extract actionable intelligence from vast volumes of surveillance footage, thereby improving decision-making and operational resilience.




    The expansion of cloud-based surveillance data analytics services is another key factor contributing to market growth. Cloud deployment offers scalability, remote accessibility, and cost efficiencies, making advanced analytics solutions accessible to a broader range of organizations, including small and medium enterprises. The ability to aggregate and analyze data from distributed locations in real-time is particularly valuable for multinational corporations and government agencies with widespread operations. Additionally, the ongoing evolution of edge computing is enhancing the capabilities of surveillance analytics by enabling faster data processing at the source, reducing latency, and improving response times in critical situations.




    From a regional perspective, North America continues to dominate the Surveillance Data Analytics Services market, owing to its early adoption of advanced security technologies and robust investments in smart city initiatives. However, the Asia Pacific region is expected to exhibit the highest growth rate, driven by rapid urbanization, increasing security concerns, and governmental focus on public safety infrastructure. Europe also remains a significant market, supported by stringent regulatory frameworks and the proliferation of surveillance networks across urban centers. Latin America and the Middle East & Africa are gradually catching up, with increasing investments in infrastructure modernization and national security.



    Component Analysis



    The Component segment of the Surveillance Data Analytics Services market is bifurcated into Software and Services, each playing a pivotal role in shaping the industry landscape. The software segment, which includes video analytics platforms, AI algorithms, and data visualization tools, is witnessing substantial demand due to the growing need for automated threat detection, real-time monitoring, and predictive analytics. Advanced software solutions are now equipped with deep learning capabilities, enabling them to recognize patterns, detect anomalies, and generate alert

  7. Household Health Survey 2012-2013, Economic Research Forum (ERF) Harmonization Data - Iraq

    • catalog.ihsn.org
    • datacatalog.ihsn.org
    Updated Jun 26, 2017
    + more versions
    Cite
    Central Statistical Organization (CSO) (2017). Household Health Survey 2012-2013, Economic Research Forum (ERF) Harmonization Data - Iraq [Dataset]. https://catalog.ihsn.org/index.php/catalog/6937
    Explore at:
    Dataset updated
    Jun 26, 2017
    Dataset provided by
    Kurdistan Regional Statistics Office (KRSO)
    Economic Research Forum
    Central Statistical Organization (CSO)
    Time period covered
    2012 - 2013
    Area covered
    Iraq
    Description

    Abstract

    The harmonized data set on health, created and published by the ERF, is a subset of the Iraq Household Socio Economic Survey (IHSES) 2012. It was derived from the household, individual, and health modules collected in the context of the above-mentioned survey. The sample was then used to create a harmonized health survey, comparable with the Iraq Household Socio Economic Survey (IHSES) 2007 micro data set.

    ----> Overview of the Iraq Household Socio Economic Survey (IHSES) 2012:

    Iraq is considered a leader in household expenditure and income surveys; the first was conducted in 1946, followed by surveys in 1954 and 1961. After the establishment of the Central Statistical Organization, household expenditure and income surveys were carried out every 3-5 years (1971/1972, 1976, 1979, 1984/1985, 1988, 1993, 2002/2007). Implementing the cooperation between the CSO and the World Bank, the Central Statistical Organization (CSO) and the Kurdistan Region Statistics Office (KRSO) launched fieldwork on IHSES on 1/1/2012. The survey was carried out over a full year, covering all governorates including those in the Kurdistan Region.

    The survey has six main objectives. These objectives are:

    1. Provide data for poverty analysis and measurement, and monitor, evaluate, and update the implementation of the Poverty Reduction National Strategy issued in 2009.
    2. Provide a comprehensive data system to assess household social and economic conditions and prepare indicators related to human development.
    3. Provide data that meet the needs and requirements of national accounts.
    4. Provide detailed indicators on consumption expenditure that support decision-making related to production, consumption, export, and import.
    5. Provide detailed indicators on the sources of household and individual income.
    6. Provide data necessary for the formulation of a new consumer price index.

    The raw survey data provided by the Statistical Office were then harmonized by the Economic Research Forum, to create a comparable version with the 2006/2007 Household Socio Economic Survey in Iraq. Harmonization at this stage only included unifying variables' names, labels and some definitions. See: Iraq 2007 & 2012- Variables Mapping & Availability Matrix.pdf provided in the external resources for further information on the mapping of the original variables on the harmonized ones, in addition to more indications on the variables' availability in both survey years and relevant comments.

    Geographic coverage

    National coverage: Covering a sample of urban, rural and metropolitan areas in all the governorates including those in Kurdistan Region.

    Analysis unit

    1- Household/family. 2- Individual/person.

    Universe

    The survey was carried out over a full year covering all governorates including those in Kurdistan Region.

    Kind of data

    Sample survey data [ssd]

    Sampling procedure

    ----> Design:

    The sample size was 25488 households for the whole of Iraq: 216 households in each of 118 districts, organized into 2832 clusters of 9 households each, distributed across districts and governorates for both rural and urban areas.

    ----> Sample frame:

    The listing and numbering results of the 2009-2010 Population and Housing Survey were adopted in all governorates, including the Kurdistan Region, as a frame from which to select households. The sample was selected in two stages. Stage 1: primary sampling units (blocks) within each stratum (district), for urban and rural areas, were systematically selected with probability proportional to size, yielding 2832 units (clusters). Stage 2: 9 households were selected from each primary sampling unit to create a cluster; thus the total sample was 25488 households distributed across the governorates, 216 households in each district.

    ----> Sampling Stages:

    In each district, the sample was selected in two stages. Stage 1: based on the 2010 listing and numbering frame, 24 sample points were selected within each stratum through systematic sampling with probability proportional to size, with implicit urban/rural and geographic breakdowns (sub-district, quarter, street, county, village, and block). Stage 2: using households as secondary sampling units, 9 households were selected from each sample point using systematic equal-probability sampling. Sampling frames for each stage can be developed based on the 2010 building listing and numbering without updating household lists. In some small districts, the random selection of primary sampling units may yield fewer than 24 units; in that case a sampling unit is selected more than once, and two or more clusters may be drawn from the same enumeration unit when necessary.
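
    A minimal sketch, in Python, of systematic probability-proportional-to-size (PPS) selection as described for stage 1; block names and household counts here are hypothetical.

    ```python
    import random

    def systematic_pps(units, sizes, n_select):
        """Systematic PPS: selection probability proportional to unit size."""
        total = sum(sizes)
        step = total / n_select              # sampling interval
        point = random.uniform(0, step)      # random start in the first interval
        selected, cum, i = [], 0.0, 0
        for _ in range(n_select):
            while cum + sizes[i] < point:    # walk cumulative sizes to the point
                cum += sizes[i]
                i += 1
            selected.append(units[i])        # a large unit can be drawn twice
            point += step
        return selected

    # Hypothetical frame: blocks in one district with household counts
    blocks = [f"block_{k}" for k in range(1, 101)]
    sizes = [random.randint(20, 200) for _ in blocks]
    clusters = systematic_pps(blocks, sizes, 24)  # 24 sample points per stratum
    ```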

    Mode of data collection

    Face-to-face [f2f]

    Research instrument

    ----> Preparation:

    The questionnaire of the 2006 survey was adopted in designing the questionnaire of the 2012 survey, on which many revisions were made. Two rounds of pre-testing were carried out. Revisions were made based on feedback from the fieldwork team, World Bank consultants, and others; further revisions were made before the final version was implemented in a pilot survey in September 2011. After the pilot survey, additional revisions were made based on the challenges and feedback that emerged during implementation, and the final version was used in the actual survey.

    ----> Questionnaire Parts:

    The questionnaire consists of four parts, each with several sections:

    Part 1: Socio-Economic Data:
    • Section 1: Household Roster
    • Section 2: Emigration
    • Section 3: Food Rations
    • Section 4: Housing
    • Section 5: Education
    • Section 6: Health
    • Section 7: Physical Measurements
    • Section 8: Job Seeking and Previous Job

    Part 2: Monthly, Quarterly and Annual Expenditures:
    • Section 9: Expenditures on Non-Food Commodities and Services (past 30 days)
    • Section 10: Expenditures on Non-Food Commodities and Services (past 90 days)
    • Section 11: Expenditures on Non-Food Commodities and Services (past 12 months)
    • Section 12: Expenditures on Non-food Frequent Food Stuff and Commodities (7 days)
    • Section 12, Table 1: Meals Had Within the Residential Unit
    • Section 12, Table 2: Number of Persons Participating in the Meals within Household Expenditure Other Than its Members

    Part 3: Income and Other Data:
    • Section 13: Job
    • Section 14: Paid Jobs
    • Section 15: Agriculture, Forestry and Fishing
    • Section 16: Household Non-Agricultural Projects
    • Section 17: Income from Ownership and Transfers
    • Section 18: Durable Goods
    • Section 19: Loans, Advances and Subsidies
    • Section 20: Shocks and Strategy of Dealing in the Households
    • Section 21: Time Use
    • Section 22: Justice
    • Section 23: Satisfaction in Life
    • Section 24: Food Consumption During Past 7 Days

    Part 4: Diary of Daily Expenditures: The diary of expenditures is an essential component of this survey. It is left with the household to record all daily purchases, such as expenditures on food and frequent non-food items (gasoline, newspapers, etc.), during 7 days. Two pages are allocated for recording each day's expenditures, so the diary consists of 14 pages.

    Cleaning operations

    ----> Raw Data:

    Data Editing and Processing: To ensure accuracy and consistency, the data were edited at the following stages:
    1. Interviewer: checks all answers on the household questionnaire, confirming that they are clear and correct.
    2. Local supervisor: checks to make sure that questions have been correctly completed.
    3. Statistical analysis: after exporting data files from Excel to SPSS, the Statistical Analysis Unit uses program commands to identify irregular or non-logical values, in addition to auditing some variables.
    4. World Bank consultants, in coordination with the CSO data management team: the World Bank technical consultants use additional programs in SPSS and STATA to examine and correct remaining inconsistencies within the data files. The software detects errors by analyzing questionnaire items according to the expected parameters for each variable.

    ----> Harmonized Data:

    • The SPSS package is used to harmonize the Iraq Household Socio Economic Survey (IHSES) 2007 with the Iraq Household Socio Economic Survey (IHSES) 2012.
    • The harmonization process starts with the raw data files received from the Statistical Office.
    • A program is generated for each dataset to create harmonized variables.
    • Data are saved at the household and individual levels in SPSS, then converted to STATA for dissemination.

    Response rate

    The Iraq Household Socio Economic Survey (IHSES) reached a total of 25488 households. The number of households that refused to respond was 305, giving a response rate of 98.6%. The highest interview rates were in Ninevah and Muthanna (100%), while the lowest was in Sulaimaniya (92%).

  8. Conceptualization of public data ecosystems

    • data.niaid.nih.gov
    Updated Sep 26, 2024
    Cite
    Anastasija, Nikiforova; Martin, Lnenicka (2024). Conceptualization of public data ecosystems [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_13842001
    Explore at:
    Dataset updated
    Sep 26, 2024
    Dataset provided by
    University of Hradec Králové
    University of Tartu
    Authors
    Anastasija, Nikiforova; Martin, Lnenicka
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains data collected during a study "Understanding the development of public data ecosystems: from a conceptual model to a six-generation model of the evolution of public data ecosystems" conducted by Martin Lnenicka (University of Hradec Králové, Czech Republic), Anastasija Nikiforova (University of Tartu, Estonia), Mariusz Luterek (University of Warsaw, Warsaw, Poland), Petar Milic (University of Pristina - Kosovska Mitrovica, Serbia), Daniel Rudmark (Swedish National Road and Transport Research Institute, Sweden), Sebastian Neumaier (St. Pölten University of Applied Sciences, Austria), Karlo Kević (University of Zagreb, Croatia), Anneke Zuiderwijk (Delft University of Technology, Delft, the Netherlands), Manuel Pedro Rodríguez Bolívar (University of Granada, Granada, Spain).

    There is a lack of understanding of the elements that constitute different types of value-adding public data ecosystems and of how these elements form and shape the development of these ecosystems over time, which can lead to misguided efforts to develop future public data ecosystems. The aim of the study is therefore: (1) to explore how public data ecosystems have developed over time and (2) to identify the value-adding elements and formative characteristics of public data ecosystems. Using an exploratory retrospective analysis and a deductive approach, we systematically review 148 studies published between 1994 and 2023. Based on the results, this study presents a typology of public data ecosystems, develops a conceptual model of the elements and formative characteristics that contribute most to value-adding public data ecosystems, and develops a conceptual model of the evolutionary generations of public data ecosystems, called the Evolutionary Model of Public Data Ecosystems (EMPDE). Finally, three avenues for a future research agenda are proposed.

    This dataset is being made public both to act as supplementary data for "Understanding the development of public data ecosystems: from a conceptual model to a six-generation model of the evolution of public data ecosystems", Telematics and Informatics, and for the Systematic Literature Review component that informs the study.

    Description of the data in this data set

    PublicDataEcosystem_SLR provides the structure of the protocol:

    • Spreadsheet #1 provides the list of results after the search over three indexing databases and the filtering out of irrelevant studies.

    • Spreadsheet #2 provides the protocol structure.

    • Spreadsheet #3 provides the filled protocol for relevant studies.

    The information on each selected study was collected in four categories: (1) descriptive information, (2) approach- and research design-related information, (3) quality-related information, (4) HVD determination-related information.

    Descriptive Information

    Article number

    A study number, corresponding to the study number assigned in an Excel worksheet

    Complete reference

    The complete source information to refer to the study (in APA style), including the author(s) of the study, the year in which it was published, the study's title and other source information.

    Year of publication

    The year in which the study was published.

    Journal article / conference paper / book chapter

    The type of the paper, i.e., journal article, conference paper, or book chapter.

    Journal / conference / book

    Journal article, conference, where the paper is published.

    DOI / Website

    A link to the website where the study can be found.

    Number of words

    A number of words of the study.

    Number of citations in Scopus and WoS

    The number of citations of the paper in Scopus and WoS digital libraries.

    Availability in Open Access

    Availability of a study in the Open Access or Free / Full Access.

    Keywords

    Keywords of the paper as indicated by the authors (in the paper).

    Relevance for our study (high / medium / low)

    What is the relevance level of the paper for our study?

    Approach- and research design-related information


    Objective / Aim / Goal / Purpose & Research Questions

    The research objective and established RQs.

    Research method (including unit of analysis)

    The methods used to collect data in the study, including the unit of analysis, which refers to the country, organisation, or other specific unit that has been analysed, such as the number of use cases or policy documents, the number and scope of the SLR, etc.

    Study’s contributions

    The study’s contribution as defined by the authors

    Qualitative / quantitative / mixed method

    Whether the study uses a qualitative, quantitative, or mixed-methods approach.

    Availability of the underlying research data

    Whether the paper has a reference to the public availability of the underlying research data (e.g., transcriptions of interviews, collected data, etc.), or explains why these data are not openly shared.

    Period under investigation

    Period (or moment) in which the study was conducted (e.g., January 2021-March 2022)

    Use of theory / theoretical concepts / approaches? If yes, specify them

    Does the study mention any theory / theoretical concepts / approaches? If yes, what theory / concepts / approaches? If any theory is mentioned, how is theory used in the study? (e.g., mentioned to explain a certain phenomenon, used as a framework for analysis, tested theory, theory mentioned in the future research section).

    Quality-related information

    Quality concerns

    Whether there are any quality concerns (e.g., limited information about the research methods used).

    Public Data Ecosystem-related information

    Public data ecosystem definition

    How the public data ecosystem is defined in the paper, including any equivalent term used (most often "infrastructure"). If an alternative term is used, what is the public data ecosystem called in the paper?

    Public data ecosystem evolution / development

    Does the paper define the evolution of the public data ecosystem? If yes, how is it defined and what factors affect it?

    What constitutes a public data ecosystem?

    What constitutes a public data ecosystem (components & relationships) - their "FORM / OUTPUT" presented in the paper (general description with more detailed answers to further additional questions).

    Components and relationships

    What components does the public data ecosystem consist of and what are the relationships between these components? Alternative names for components - element, construct, concept, item, helix, dimension etc. (detailed description).

    Stakeholders

    What stakeholders (e.g., governments, citizens, businesses, Non-Governmental Organisations (NGOs) etc.) does the public data ecosystem involve?

    Actors and their roles

    What actors does the public data ecosystem involve? What are their roles?

    Data (data types, data dynamism, data categories etc.)

    What data does the public data ecosystem cover (what is it intended / designed for)? Refer to all data-related aspects, including but not limited to data types, data dynamism (static data, dynamic data, real-time data, stream), prevailing data categories / domains / topics, etc.

    Processes / activities / dimensions, data lifecycle phases

    What processes, activities, dimensions and data lifecycle phases (e.g., locate, acquire, download, reuse, transform, etc.) does the public data ecosystem involve or refer to?

    Level (if relevant)

    What is the level of the public data ecosystem covered in the paper? (e.g., city, municipal, regional, national (=country), supranational, international).

    Other elements or relationships (if any)

    What other elements or relationships does the public data ecosystem consist of?

    Additional comments

    Additional comments (e.g., what other topics affected the public data ecosystems and their elements, what is expected to affect the public data ecosystems in the future, what were important topics by which the period was characterised etc.).

    New papers

    Does the study refer to any other potentially relevant papers?

    Additional references to potentially relevant papers that were found in the analysed paper (snowballing).

    Format of the files: .xls, .csv (for the first spreadsheet only), .docx

    Licenses or restrictions: CC-BY

    For more info, see README.txt

  9. Google Data Analytics Case 2

    • kaggle.com
    zip
    Updated Sep 28, 2021
    Cite
    LMVBA (2021). Google Data Analytics Case 2 [Dataset]. https://www.kaggle.com/luisgmolina/google-data-analytics-case-2
    Explore at:
    Available download formats: zip (25280280 bytes)
    Dataset updated
    Sep 28, 2021
    Authors
    LMVBA
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Dataset

    This dataset was created by LMVBA

    Released under CC0: Public Domain


  10. Surveillance Data Analytics Services Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Aug 22, 2025
    Cite
    Growth Market Reports (2025). Surveillance Data Analytics Services Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/surveillance-data-analytics-services-market
    Explore at:
    Available download formats: csv, pptx, pdf
    Dataset updated
    Aug 22, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Surveillance Data Analytics Services Market Outlook




    According to our latest research, the global Surveillance Data Analytics Services market size reached USD 8.7 billion in 2024, supported by a robust year-on-year growth trajectory. The market is expected to expand at a CAGR of 17.2% from 2025 to 2033, reaching a projected value of USD 35.7 billion by 2033. This remarkable surge is driven by the increasing adoption of advanced analytics in surveillance systems, rapid urbanization, and heightened security concerns across both public and private sectors. The proliferation of smart cities, the evolution of AI-powered analytics, and the integration of IoT devices into surveillance networks are further accelerating the demand for surveillance data analytics services globally.




    One of the primary growth factors for the Surveillance Data Analytics Services market is the exponential increase in the volume and complexity of surveillance data generated by modern video and sensor systems. With the deployment of high-definition and 4K cameras in urban, industrial, and transportation environments, organizations are seeking advanced analytics solutions to efficiently process, analyze, and extract actionable insights from vast data streams. This need is compounded by the shift from reactive surveillance to predictive and real-time monitoring, where analytics play a crucial role in threat detection, anomaly identification, and operational optimization. The integration of machine learning, deep learning, and computer vision technologies into analytics platforms is enabling more accurate and timely decision-making, thus fueling the market’s expansion.




    Another significant driver is the growing emphasis on public safety and crime prevention, which is prompting governments and private enterprises to invest in sophisticated surveillance infrastructure and analytics services. Law enforcement agencies, city administrations, and critical infrastructure operators are leveraging surveillance data analytics to enhance situational awareness, automate incident detection, and optimize resource allocation. The COVID-19 pandemic further accelerated the adoption of surveillance analytics, with applications ranging from crowd monitoring and social distancing enforcement to contact tracing and health compliance. As security threats become increasingly sophisticated, the demand for comprehensive analytics solutions capable of integrating video, audio, and sensor data is expected to rise steadily over the forecast period.




    The evolution of cloud computing and edge analytics is also transforming the Surveillance Data Analytics Services market. Organizations are increasingly adopting cloud-based analytics services to achieve greater scalability, flexibility, and cost efficiency. Cloud deployment enables centralized data storage, real-time analytics, and remote access to surveillance insights, which is particularly valuable for large-scale enterprises and government agencies. Simultaneously, edge analytics is gaining traction in scenarios where low latency and real-time processing are critical, such as traffic monitoring and industrial surveillance. The hybrid deployment model, combining both cloud and edge capabilities, is emerging as a preferred approach to address diverse operational requirements and regulatory constraints.




    From a regional perspective, North America currently leads the global market, accounting for the largest revenue share in 2024, followed closely by Europe and Asia Pacific. The United States, in particular, is a frontrunner due to its advanced surveillance infrastructure, high security spending, and early adoption of AI-based analytics. Asia Pacific is expected to witness the fastest growth over the forecast period, driven by rapid urbanization, government-led smart city initiatives, and increasing investments in security across China, India, and Southeast Asia. Meanwhile, Europe continues to experience steady growth, supported by stringent data protection regulations and the modernization of transportation and critical infrastructure. Latin America and the Middle East & Africa are gradually emerging as promising markets, fueled by rising security concerns and digital transformation efforts.



    "https://growthmarketreports.com/request-sample/75205">
    <button class="btn btn-lg text-center" id="

  11. Data dictionary for the ACTORDS 20-year follow-up study

    • auckland.figshare.com
    csv
    Updated Oct 16, 2025
    Cite
    Robyn May (2025). Data dictionary for the ACTORDS 20-year follow-up study [Dataset]. http://doi.org/10.17608/k6.auckland.28732205.v1
    Explore at:
    Available download formats: csv
    Dataset updated
    Oct 16, 2025
    Dataset provided by
    The University of Auckland
    Authors
    Robyn May
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    Metadata (data dictionary) and statistical analysis plan (including outcome definitions for the data dictionary) for the ACTORDS 20-year follow-up study. The DOI for the primary study publication is https://doi.org/10.1371/journal.pmed.1004618.

    Data and associated documentation for participants who have consented to future re-use of their data are available to other users under the data sharing arrangements provided by the University of Auckland's Human Health Research Services (HHRS) platform (https://research-hub.auckland.ac.nz/subhub/human-health-research-services-platform). The data dictionary and metadata are published on the University of Auckland's data repository Figshare, which allocates a DOI and thus makes these details searchable and available indefinitely. Researchers can use this information and the provided contact address (dataservices@auckland.ac.nz) to request a de-identified dataset through the HHRS Data Access Committee.

    Data will be shared with researchers who provide a methodologically sound proposal and have appropriate ethical approval, where necessary, to achieve the research aims in the approved proposal. Data requestors are required to sign a Data Access Agreement that includes a commitment to use the data only for the specified proposal, not to attempt to identify any individual participant, to store and use the data securely, and to destroy or return the data after completion of the project. The HHRS platform reserves the right to charge a fee to cover the costs of making data available, if needed, for data requests that require additional work to prepare.

  12. Data from: Generalizable EHR-R-REDCap pipeline for a national multi-institutional rare tumor patient registry

    • data.niaid.nih.gov
    • datadryad.org
    zip
    Updated Jan 9, 2022
    Cite
    Sophia Shalhout; Farees Saqlain; Kayla Wright; Oladayo Akinyemi; David Miller (2022). Generalizable EHR-R-REDCap pipeline for a national multi-institutional rare tumor patient registry [Dataset]. http://doi.org/10.5061/dryad.rjdfn2zcm
    Explore at:
    Available download formats: zip
    Dataset updated
    Jan 9, 2022
    Dataset provided by
    Massachusetts General Hospital
    Harvard Medical School
    Authors
    Sophia Shalhout; Farees Saqlain; Kayla Wright; Oladayo Akinyemi; David Miller
    License

    https://spdx.org/licenses/CC0-1.0.html

    Description

    Objective: To develop a clinical informatics pipeline designed to capture large-scale structured EHR data for a national patient registry.

    Materials and Methods: The EHR-R-REDCap pipeline is implemented using R statistical software to remap and import structured EHR data into the REDCap-based multi-institutional Merkel Cell Carcinoma (MCC) Patient Registry (MCCPR) using an adaptable data dictionary.

    Results: Clinical laboratory data were extracted from EPIC Clarity across several participating institutions. Labs were transformed, remapped and imported into the MCC registry using the EHR labs abstraction (eLAB) pipeline. Forty-nine clinical tests encompassing 482,450 results were imported into the registry for 1,109 enrolled MCC patients. Data-quality assessment revealed highly accurate, valid labs. Univariate modeling was performed for labs at baseline on overall survival (N=176) using this clinical informatics pipeline.

    Conclusion: We demonstrate feasibility of the facile eLAB workflow. EHR data are successfully transformed and bulk-loaded/imported into a REDCap-based national registry to execute real-world data analysis and support interoperability.

    Methods

    eLAB Development and Source Code (R statistical software):

    eLAB is written in R (version 4.0.3), and utilizes the following packages for processing: DescTools, REDCapR, reshape2, splitstackshape, readxl, survival, survminer, and tidyverse. Source code for eLAB can be downloaded directly (https://github.com/TheMillerLab/eLAB).

    eLAB reformats EHR data abstracted for an identified population of patients (e.g., a medical record number (MRN)/name list) under an Institutional Review Board (IRB)-approved protocol. The MCCPR does not host MRNs/names; eLAB converts these to MCCPR-assigned record identification numbers (record_id) before import, de-identifying the data.
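
    For illustration, a minimal Python sketch of this de-identification step (the pipeline itself is R; the mapping values and column names here are invented):

    ```python
    import pandas as pd

    # Hypothetical MRN -> registry record_id map; the real MCCPR assignment
    # is internal and never leaves the IRB-approved environment.
    mrn_to_record_id = {"12345": "MCC-0001", "67890": "MCC-0002"}

    labs = pd.DataFrame({"patient_mrn": ["12345", "67890"],
                         "potassium": [4.1, 3.8]})
    labs["record_id"] = labs["patient_mrn"].map(mrn_to_record_id)
    labs = labs.drop(columns=["patient_mrn"])  # MRNs never reach the registry
    print(labs)
    ```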

    Functions were written to remap EHR bulk lab data pulls/queries from several sources, including Clarity/Crystal reports or institutional enterprise data warehouses such as the Research Patient Data Registry (RPDR) at MGB. The input, a csv/delimited file of labs for user-defined patients, may vary; users may therefore need to adapt the initial data-wrangling script to the input format. However, the downstream transformation, code-lab lookup tables, outcomes analysis, and LOINC remapping are standard for use with the provided REDCap Data Dictionary, DataDictionary_eLAB.csv. The available R markdown (https://github.com/TheMillerLab/eLAB) provides suggestions and instructions on where or when upfront script modifications may be necessary to accommodate input variability.

    The eLAB pipeline takes several inputs. For example, the input for use with the ‘ehr_format(dt)’ single-line command is non-tabular data assigned as R object ‘dt’ with 4 columns: 1) Patient Name (MRN), 2) Collection Date, 3) Collection Time, and 4) Lab Results wherein several lab panels are in one data frame cell. A mock dataset in this ‘untidy-format’ is provided for demonstration purposes (https://github.com/TheMillerLab/eLAB).
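
    To make the 'untidy' input concrete, here is a minimal Python sketch of the same reshaping idea (eLAB itself is R; the column names and the ';'/'=' delimiters are assumptions, not the mock dataset's actual format):

    ```python
    import pandas as pd

    # Mock 'untidy' extract: several panel results packed into one cell.
    dt = pd.DataFrame({
        "patient_name_mrn": ["12345", "67890"],
        "collection_date": ["2021-01-05", "2021-01-06"],
        "collection_time": ["08:30", "09:10"],
        "lab_results": ["Potassium=4.1 mmol/L;Sodium=139 mmol/L",
                        "Potassium(POC)=3.8 mmol/L"],
    })

    # One row per individual lab: split the packed cell, then explode.
    tidy = (dt.assign(lab_results=dt["lab_results"].str.split(";"))
              .explode("lab_results", ignore_index=True))
    tidy[["lab_name", "value_unit"]] = tidy["lab_results"].str.split("=", expand=True)
    print(tidy[["patient_name_mrn", "collection_date", "lab_name", "value_unit"]])
    ```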

    Bulk lab data pulls often result in subtypes of the same lab. For example, potassium labs are reported as “Potassium,” “Potassium-External,” “Potassium(POC),” “Potassium,whole-bld,” “Potassium-Level-External,” “Potassium,venous,” and “Potassium-whole-bld/plasma.” eLAB utilizes a key-value lookup table with ~300 lab subtypes for remapping labs to the Data Dictionary (DD) code. eLAB reformats/accepts only those lab units pre-defined by the registry DD. The lab lookup table is provided for direct use or may be re-configured/updated to meet end-user specifications. eLAB is designed to remap, transform, and filter/adjust value units of semi-structured/structured bulk laboratory values data pulls from the EHR to align with the pre-defined code of the DD.
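
    A sketch of the lookup-table remap in Python (the real table has ~300 entries and ships with eLAB; the entries and the allowed-units mapping below are illustrative):

    ```python
    import pandas as pd

    # Tiny illustrative slice of the ~300-entry key-value lookup table.
    lab_lookup = {
        "Potassium": "potassium",
        "Potassium-External": "potassium",
        "Potassium(POC)": "potassium",
        "Potassium,whole-bld": "potassium",
    }
    # Units accepted by the registry DD (assumed values for the sketch).
    allowed_units = {"potassium": "mmol/L"}

    labs = pd.DataFrame({
        "lab_name": ["Potassium(POC)", "Potassium-External", "Hemoglobin"],
        "value": [3.8, 4.4, 13.1],
        "unit": ["mmol/L", "mmol/L", "g/dL"],
    })

    # Remap subtypes to the DD code, then keep only DD-defined labs/units.
    labs["dd_code"] = labs["lab_name"].map(lab_lookup)
    remapped = labs[labs["dd_code"].notna()
                    & (labs["unit"] == labs["dd_code"].map(allowed_units))]
    print(remapped)
    ```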

    Data Dictionary (DD)

    EHR clinical laboratory data are captured in REDCap using the 'Labs' repeating instrument (Supplemental Figures 1-2). The DD is provided for use by researchers at REDCap-participating institutions and is optimized to accommodate the same lab type captured more than once on the same day for the same patient. The instrument captures 35 clinical lab types.

    The DD serves several major purposes in the eLAB pipeline. First, it defines every lab type of interest and its associated lab unit of interest with a set field/variable name. It also restricts/defines the type of data allowed for entry in each data field, such as string or numeric. The DD is uploaded into REDCap by every participating site/collaborator and ensures each site collects and codes the data the same way. Automation pipelines such as eLAB are designed to remap/clean and reformat data/units utilizing key-value lookup tables that filter and select only the labs/units of interest. eLAB ensures the data pulled from the EHR contain the correct unit and format pre-configured by the DD. Using the same DD at every participating site ensures that the data field codes, formats, and relationships in the database are uniform across sites, allowing simple aggregation of the multi-site data: since every site in the MCCPR uses the same DD, different site csv files are simply combined.

    Study Cohort

    This study was approved by the MGB IRB. Search of the EHR was performed to identify patients diagnosed with MCC between 1975-2021 (N=1,109) for inclusion in the MCCPR. Subjects diagnosed with primary cutaneous MCC between 2016-2019 (N= 176) were included in the test cohort for exploratory studies of lab result associations with overall survival (OS) using eLAB.

    Statistical Analysis

    OS is defined as the time from date of MCC diagnosis to date of death. Data was censored at the date of the last follow-up visit if no death event occurred. Univariable Cox proportional hazard modeling was performed among all lab predictors. Due to the hypothesis-generating nature of the work, p-values were exploratory and Bonferroni corrections were not applied.
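
    A hedged sketch of this univariable step using Python's lifelines package (the published analysis used R's survival/survminer; the column names and values are mock data):

    ```python
    import pandas as pd
    from lifelines import CoxPHFitter

    # Mock frame: one row per patient with a baseline lab, follow-up time
    # in months from MCC diagnosis, and a death indicator (1=event, 0=censored).
    df = pd.DataFrame({
        "baseline_potassium": [4.1, 3.8, 4.6, 5.0, 4.2, 3.9],
        "os_months": [12.0, 30.5, 8.2, 24.0, 40.1, 15.3],
        "death": [1, 0, 1, 0, 0, 1],
    })

    # Univariable Cox proportional hazards model for a single lab predictor.
    cph = CoxPHFitter()
    cph.fit(df, duration_col="os_months", event_col="death")
    cph.print_summary()  # hazard ratio, CI, exploratory p-value
    ```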

  13. Statistical Analysis on Interdisciplinary Papers in 2016

    • stemfellowship.figshare.com
    • datasetcatalog.nlm.nih.gov
    png
    Updated Feb 5, 2017
    Lunjun Zhang; Justin Palombo; Sarah Costa (2017). Statistical Analysis on Interdisciplinary Papers in 2016 [Dataset]. http://doi.org/10.6084/m9.figshare.4621036.v1
    Explore at:
    pngAvailable download formats
    Dataset updated
    Feb 5, 2017
    Dataset provided by
    STEM Fellowship Big Data Challenge
    Authors
    Lunjun Zhang; Justin Palombo; Sarah Costa
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The goal of the project is to analyze altmetric data on scientific papers published in 2016 and, specifically, to determine which interdisciplinary fields are impactful. After taking random samples, the program applies data clustering and data representation techniques to the altmetric data. To classify the papers into different levels of impact, k-means clustering is applied. With the focus on interdisciplinary fields, three matrices are calculated to illustrate the strength of the connection between every possible pairing of subjects: average altmetric score, percentage of published papers in the interdisciplinary field, and total altmetric score. Sorting the values in these matrices and comparing the three matrices yields insight into the connections between different subjects.
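
    As an illustration of the clustering step, a minimal Python sketch that groups papers into impact levels by altmetric score (the cluster count and the scores are assumptions):

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    # Mock altmetric scores for a sample of 2016 papers (values illustrative).
    scores = np.array([1.2, 3.5, 80.0, 2.1, 150.4, 4.0, 95.3, 0.5]).reshape(-1, 1)

    # Partition papers into three impact levels.
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scores)
    for level in range(3):
        members = scores[km.labels_ == level].ravel()
        print(f"cluster {level}: n={len(members)}, mean score={members.mean():.1f}")
    ```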

  14. Goalkeeper and Midfielder Statistics

    • kaggle.com
    zip
    Updated Dec 8, 2022
    The Devastator (2022). Goalkeeper and Midfielder Statistics [Dataset]. https://www.kaggle.com/datasets/thedevastator/maximizing-player-performance-with-goalkeeper-an
    Explore at:
    zip(108659 bytes)Available download formats
    Dataset updated
    Dec 8, 2022
    Authors
    The Devastator
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Goalkeeper and Midfielder Statistics

    Leveraging Statistical Data Of Goalkeepers and Midfielders

    By [source]

    About this dataset

    Welcome to Kaggle's dataset, where we provide rich and detailed insights into professional football players. Analyze player performance and team data with over 125 different metrics covering everything from goal involvement to tackles won, errors made and clean sheets kept. With the high levels of granularity included in our analysis, you can identify which players are underperforming or stand out from their peers for areas such as defense, shot stopping and key passes. Discover current trends in the game or uncover players' hidden value with this comprehensive dataset - a must-have resource for any aspiring football analyst!


    How to use the dataset

    • Define Performance: The first step of using this dataset is defining what type of performance you are measuring. Are you looking at total goals scored? Assists made? Shots on target? This will allow you to choose which metrics from the dataset best fit your criteria.

    • Descriptive Analysis: Once you have chosen your metric(s), it's time for descriptive analysis. This means analyzing the patterns within the data that contribute towards those metrics. Does one team have more potential assist makers than another? What about shot accuracy or tackles-won percentage? With descriptive analysis, we look for general trends across teams or specific players that influence performance in a meaningful way.

    • Predictive Analysis: Finally, we can move on to predictive analysis. This type of analysis seeks to answer two questions: which factors predict player performance, and which factors matter most? Utilizing various predictive models (e.g., logistic regression or random forest), we can determine which variables in our dataset best explain a certain metric's outcome (for example, expected goals per match) and build models that accurately predict future outcomes based on given input values associated with those factors (see the sketch below).

    By following these steps outlined here, you'll be able to get started in finding relationships between different metrics from this dataset and leveraging these insights into predictions about player performance!
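
    A minimal Python sketch of that predictive-analysis step with a random forest (the column names and values are invented; the dataset's 125+ real metrics would replace them):

    ```python
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    # Mock per-player rows standing in for the real metrics.
    players = pd.DataFrame({
        "shots": [30, 12, 45, 8, 25, 38, 15, 50],
        "shots_on_target": [14, 5, 22, 2, 10, 18, 6, 27],
        "key_passes": [20, 35, 15, 40, 22, 12, 30, 18],
        "expected_goals": [5.1, 1.8, 8.3, 1.0, 4.0, 6.7, 2.2, 9.5],  # target
    })

    X = players.drop(columns=["expected_goals"])
    y = players["expected_goals"]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    # Which factors best predict the metric? Fit a forest, inspect importances.
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    print(dict(zip(X.columns, model.feature_importances_.round(2))))
    print("held-out R^2:", model.score(X_test, y_test))
    ```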

    Research Ideas

    • Creating an advanced predictive analytics model: By using the data in this dataset, it would be possible to create an advanced predictive analytics model that can analyze player performance and provide more accurate insights on which players are likely to have the most impact during a given season.
    • Using Machine Learning algorithms to identify potential transfer targets: By using a variety of metrics included in this dataset, such as shots, shots on target and goals scored, it would be possible to use Machine Learning algorithms to identify potential transfer targets for a team.
    • Analyzing positional differences between players: This dataset contains information about each player's position as well as their performance metrics across various aspects of the game (e.g., crosses attempted, defensive clearances). Thus it could be used for analyzing how certain positional groupings perform differently from one another in certain aspects of their play over different stretches of time or within one season or matchday in particular.

    Acknowledgements

    If you use this dataset in your research, please credit the original authors.

    License

    License: CC0 1.0 Universal (CC0 1.0) Public Domain Dedication (No Copyright). You can copy, modify, distribute and perform the work, even for commercial purposes, all without asking permission.

    Columns

    File: DEF PerApp 2GWs.csv

    | Column name     | Description                          |
    |:----------------|:-------------------------------------|
    | Name            | Name of the player. (String)         |
    | App.            | Number of appearances. (Integer)     |
    | Minutes         | Number of minutes played. (Integer)  |
    | Shots           | Number of shots taken. (Integer)     |
    | Shots on Target | Number of shots on target. (Integer) |
    | ...             | ...                                  |

  15. Data from: An Evaluation of the Use of Statistical Procedures in Soil Science

    • scielo.figshare.com
    xls
    Updated Jun 1, 2023
    Laene de Fátima Tavares; André Mundstock Xavier de Carvalho; Lucas Gonçalves Machado (2023). An Evaluation of the Use of Statistical Procedures in Soil Science [Dataset]. http://doi.org/10.6084/m9.figshare.19944438.v1
    Explore at:
    xlsAvailable download formats
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    SciELO journals
    Authors
    Laene de Fátima Tavares; André Mundstock Xavier de Carvalho; Lucas Gonçalves Machado
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    ABSTRACT Experimental statistical procedures used in almost all scientific papers are fundamental for clearer interpretation of the results of experiments conducted in agrarian sciences. However, incorrect use of these procedures can lead the researcher to incorrect or incomplete conclusions. Therefore, the aim of this study was to evaluate the characteristics of the experiments and quality of the use of statistical procedures in soil science in order to promote better use of statistical procedures. For that purpose, 200 articles, published between 2010 and 2014, involving only experimentation and studies by sampling in the soil areas of fertility, chemistry, physics, biology, use and management were randomly selected. A questionnaire containing 28 questions was used to assess the characteristics of the experiments, the statistical procedures used, and the quality of selection and use of these procedures. Most of the articles evaluated presented data from studies conducted under field conditions and 27 % of all papers involved studies by sampling. Most studies did not mention testing to verify normality and homoscedasticity, and most used the Tukey test for mean comparisons. Among studies with a factorial structure of the treatments, many had ignored this structure, and data were compared assuming the absence of factorial structure, or the decomposition of interaction was performed without showing or mentioning the significance of the interaction. Almost none of the papers that had split-block factorial designs considered the factorial structure, or they considered it as a split-plot design. Among the articles that performed regression analysis, only a few of them tested non-polynomial fit models, and none reported verification of the lack of fit in the regressions. The articles evaluated thus reflected poor generalization and, in some cases, wrong generalization in experimental design and selection of procedures for statistical analysis.

  16. Project for Statistics on Living Standards and Development 1993 - South Africa

    • microdata.fao.org
    • catalog.ihsn.org
    • +2more
    Updated Oct 20, 2020
    Southern Africa Labour and Development Research Unit (2020). Project for Statistics on Living Standards and Development 1993 - South Africa [Dataset]. https://microdata.fao.org/index.php/catalog/1527
    Explore at:
    Dataset updated
    Oct 20, 2020
    Dataset authored and provided by
    Southern Africa Labour and Development Research Unit
    Time period covered
    1993
    Area covered
    South Africa
    Description

    Abstract

    The Project for Statistics on Living Standards and Development was a countrywide World Bank Living Standards Measurement Survey. It covered approximately 9,000 households drawn from a representative sample of South African households. The fieldwork was undertaken during the nine months leading up to the country's first democratic elections at the end of April 1994. The purpose of the survey was to collect statistical information about the conditions under which South Africans live, in order to provide policymakers with the data necessary for planning strategies. This data would aid the implementation of goals such as those outlined in the Government of National Unity's Reconstruction and Development Programme.

    Geographic coverage

    National

    Analysis unit

    Households

    Universe

    All household members. Individuals in hospitals, old age homes, hotels and hostels of educational institutions were not included in the sample; migrant labour hostels were included. In addition to those that turned up in the selected ESDs, a sample of three hostels was chosen from a national list provided by the Human Sciences Research Council, and within each of these hostels a representative sample was drawn on a similar basis as described below for the households in ESDs.

    Kind of data

    Sample survey data [ssd]

    Sampling procedure

    (a) SAMPLING DESIGN

    Sample size is 9,000 households. The sample design adopted for the study was a two-stage self-weighting design in which the first-stage units were Census Enumerator Subdistricts (ESDs, or their equivalent) and the second-stage units were households. The advantage of using such a design is that it provides a representative sample that need not be based on an accurate census population distribution: in the case of South Africa, the sample will automatically include many poor people, without the need to go beyond this and oversample the poor. Proportionate sampling in such a self-weighting sample design offers the simplest possible data files for further analysis, as weights do not have to be added. However, in the end this advantage could not be retained, and weights had to be added.

    (b) SAMPLE FRAME

    The sampling frame was drawn up on the basis of small, clearly demarcated area units, each with a population estimate. The nature of the self-weighting procedure adopted ensured that this population estimate was not important for determining the final sample, however. For most of the country, census ESDs were used. Where some ESDs comprised relatively large populations, as for instance in some black townships such as Soweto, aerial photographs were used to divide the areas into blocks of approximately equal population size. In other instances, particularly in some of the former homelands, the area units were not ESDs but villages or village groups.

    In the sample design chosen, the area-stage units (generally ESDs) were selected with probability proportional to size, based on the census population. Systematic sampling was used throughout, that is, sampling at a fixed interval in a list of ESDs, starting at a randomly selected starting point. Given that sampling was self-weighting, the impact of stratification was expected to be modest. The main objective was to ensure that the racial and geographic breakdown approximated the national population distribution. This was done by listing the area-stage units (ESDs) by statistical region, then within the statistical region by urban or rural, and within these sub-statistical regions in order of percentage African. The sampling interval for the selection of the ESDs was obtained by dividing the 1991 census population of 38,120,853 by the 300 clusters to be selected. This yielded 105,800. Starting at a randomly selected point, every 105,800th person down the cluster list was selected. This ensured both geographic and racial diversity (ESDs were ordered by statistical sub-region and proportion of the population African). In three or four instances, the ESD chosen was judged inaccessible and replaced with a similar one.

    In the second sampling stage the unit of analysis was the household. In each selected ESD a listing or enumeration of households was carried out by means of a field operation. From the households listed in an ESD, a sample of households was selected by systematic sampling. Even though the ultimate enumeration unit was the household, in most cases "stands" were used as enumeration units. However, when a stand was chosen as the enumeration unit, all households on that stand had to be interviewed.
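
    A sketch of the first-stage systematic PPS selection described above (Python; the ESD names and populations are mock, and the cluster count is scaled down from the survey's 300):

    ```python
    import random

    random.seed(0)

    # Mock frame: 5,000 area units with population estimates. The survey
    # selected 300 clusters at an interval of 105,800 persons; this sketch
    # scales that down but follows the same cumulative-interval walk.
    esds = [(f"ESD-{i:04d}", random.randint(300, 3000)) for i in range(1, 5001)]

    n_clusters = 30
    total_pop = sum(pop for _, pop in esds)
    interval = total_pop / n_clusters      # sampling interval in persons
    start = random.uniform(0, interval)    # random starting point

    selected, cumulative, i = [], 0, 0
    for target in (start + k * interval for k in range(n_clusters)):
        # Walk the ordered list until the cumulative population passes target:
        # larger ESDs are proportionally more likely to contain a target person.
        while cumulative < target:
            cumulative += esds[i][1]
            i += 1
        selected.append(esds[i - 1][0])

    print(len(selected), "clusters selected, e.g.", selected[:3])
    ```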

    Mode of data collection

    Face-to-face [f2f]

    Cleaning operations

    All the questionnaires were checked when received. Where information was incomplete or appeared contradictory, the questionnaire was sent back to the relevant survey organization. As soon as the data were available, they were captured using the local development platform ADE. This was completed in February 1994. Following this, a series of exploratory programs were written to highlight inconsistencies and outliers. For example, all person-level files were linked together to ensure that the same person code reported in different sections of the questionnaire corresponded to the same person. The error reports from these programs were compared to the questionnaires and the necessary alterations made. This was a lengthy process, as several files were checked more than once; it was completed at the beginning of August 1994. In some cases, questionnaires contained missing values, or comments that the respondent did not know or refused to answer a question.

    These responses are coded in the data files with the following values:

    • -1 : The data was not available on the questionnaire or form
    • -2 : The field is not applicable
    • -3 : Respondent refused to answer
    • -4 : Respondent did not know answer to question
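
    A minimal pandas sketch of recoding these special values before analysis (the column names are invented; whether -2 "not applicable" should also become missing depends on the analysis):

    ```python
    import pandas as pd

    # -1 (missing), -3 (refused) and -4 (don't know) become NA;
    # -2 (not applicable) is kept here as a distinct code.
    special_to_na = {-1: pd.NA, -3: pd.NA, -4: pd.NA}

    df = pd.DataFrame({"hh_income": [1500, -1, 2200, -3, -4, 1800],
                       "remittances": [-2, 300, -2, 150, -1, 0]})
    cleaned = df.replace(special_to_na)
    print(cleaned.isna().sum())  # missing-value count per column
    ```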

    Data appraisal

    The data collected in clusters 217 and 218 should be viewed as highly unreliable and therefore removed from the data set. The data currently available on the web site has been revised to remove the data from these clusters. Researchers who have downloaded the data in the past should revise their data sets. For information on the data in those clusters, contact SALDRU http://www.saldru.uct.ac.za/.

  17. Replication Data for: Upcoming issues, new methods: using Interactive Qualitative Analysis (IQA) in Management Research

    • data.mendeley.com
    Updated Oct 18, 2021
    Gustavo Behling (2021). Replication Data for: Upcoming issues, new methods: using Interactive Qualitative Analysis (IQA) in Management Research [Dataset]. http://doi.org/10.17632/kb76h5jtvw.1
    Explore at:
    Dataset updated
    Oct 18, 2021
    Authors
    Gustavo Behling
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    These data refer to the paper "Upcoming issues, new methods: using Interactive Qualitative Analysis (IQA) in Management Research". The article is a guide to applying the IQA method in management research, and the available files comprise:

    1. 1-Affinities, definitions, and cards produced by focus group.docx: all cards, affinities, and definitions created by the focus group session
    2. 2-Step-by-step - Analysis procedures.docx: detailed data analysis procedures
    3. 3-Axial Coding Tables – Individual Interviews.docx: detailed axial coding procedures
    4. 4-Theoretical Coding Table – Individual Interviews.docx: detailed theoretical coding procedures

  18. Global High Definition Digital Video Capture Module Market Forecast and Trend Analysis 2025-2032

    • statsndata.org
    excel, pdf
    Updated Nov 2025
    Stats N Data (2025). Global High Definition Digital Video Capture Module Market Forecast and Trend Analysis 2025-2032 [Dataset]. https://www.statsndata.org/report/high-definition-digital-video-capture-module-market-85669
    Explore at:
    pdf, excelAvailable download formats
    Dataset updated
    Nov 2025
    Dataset authored and provided by
    Stats N Data
    License

    https://www.statsndata.org/how-to-order

    Area covered
    Global
    Description

    The High Definition Digital Video Capture Module market has gained significant traction in recent years, driven by the increasing demand for high-resolution video recording across various industries, including broadcasting, surveillance, entertainment, and education. These modules play a pivotal role in capturing re

  19. High Definition HD Voice Market Size & Share Analysis - Industry Research Report - Growth Trends

    • mordorintelligence.com
    pdf,excel,csv,ppt
    Updated May 19, 2025
    Mordor Intelligence (2025). High Definition HD Voice Market Size & Share Analysis - Industry Research Report - Growth Trends [Dataset]. https://www.mordorintelligence.com/industry-reports/high-definition-hd-voice-market
    Explore at:
    pdf,excel,csv,pptAvailable download formats
    Dataset updated
    May 19, 2025
    Dataset authored and provided by
    Mordor Intelligence
    License

    https://www.mordorintelligence.com/privacy-policy

    Time period covered
    2019 - 2030
    Area covered
    Global
    Description

    High Definition HD Voice Market is segmented by User Type (Enterprise user and Consumer), Access Type (Mobile and Broadband), Application (Video Conferencing, Audio Conferencing, Web Conferencing, Multimedia Conferencing, Audio Broadcast, Announcement Services), and Geography.

  20. 5G Network Data Analytics Function Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Oct 3, 2025
    Growth Market Reports (2025). 5G Network Data Analytics Function Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/5g-network-data-analytics-function-market
    Explore at:
    csv, pptx, pdfAvailable download formats
    Dataset updated
    Oct 3, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    5G Network Data Analytics Function Market Outlook

    According to our latest research, the global 5G Network Data Analytics Function market size in 2024 stands at USD 1.98 billion, marking a pivotal year for the industry. The market is projected to grow at a robust CAGR of 29.5% during the forecast period, reaching a substantial USD 17.01 billion by 2033. This impressive growth trajectory is primarily driven by the accelerating adoption of 5G networks, the increasing demand for real-time data analytics, and the rapid digital transformation across various industries. As per the latest research, the integration of advanced analytics functions in 5G infrastructure is becoming indispensable for optimizing network performance, enhancing customer experiences, and enabling new revenue streams for telecom operators and enterprises globally.

    The primary growth factor fueling the expansion of the 5G Network Data Analytics Function market is the exponential surge in data traffic resulting from the widespread deployment of 5G networks. With the proliferation of IoT devices, smart applications, and bandwidth-intensive services such as augmented reality, virtual reality, and ultra-high-definition video streaming, network operators are increasingly seeking sophisticated analytics solutions to manage and optimize network resources efficiently. These analytics functions provide actionable insights, enabling dynamic traffic management, predictive maintenance, and real-time decision-making, which are critical in ensuring seamless connectivity and superior service quality in the 5G era. Furthermore, the need for robust network analytics is further amplified by the growing complexity and heterogeneity of 5G infrastructures, which demand advanced tools for monitoring, troubleshooting, and performance optimization.

    Another significant driver is the rising emphasis on enhancing customer experience and operational efficiency among telecom operators and enterprises. The 5G Network Data Analytics Function market is witnessing increased investments in solutions that enable proactive customer experience management, churn prediction, and personalized service delivery. By leveraging AI and machine learning-powered analytics, organizations can gain a holistic view of customer behavior, network usage patterns, and service quality metrics, allowing them to tailor offerings, resolve issues promptly, and foster long-term customer loyalty. Additionally, the deployment of analytics functions has become crucial in identifying and mitigating security threats, detecting fraud, and ensuring compliance with stringent regulatory requirements, thereby safeguarding network integrity and building trust among end-users.

    The ongoing digital transformation across various industry verticals, including healthcare, manufacturing, transportation, and government, is also contributing to the robust growth of the 5G Network Data Analytics Function market. Enterprises are increasingly adopting 5G-enabled analytics solutions to drive innovation, enhance operational agility, and unlock new business models. For instance, in smart manufacturing, real-time analytics facilitate predictive maintenance, process optimization, and quality assurance, while in healthcare, they enable remote patient monitoring, telemedicine, and data-driven clinical decision-making. The convergence of 5G and advanced analytics is thus creating new opportunities for value creation and competitive differentiation across sectors, propelling market expansion further.

    Regionally, the 5G Network Data Analytics Function market exhibits strong momentum in Asia Pacific and North America, driven by early 5G rollouts, substantial investments in network infrastructure, and a vibrant ecosystem of technology providers. Europe is also witnessing steady growth, supported by government initiatives and increasing enterprise adoption. Meanwhile, emerging markets in Latin America and the Middle East & Africa are gradually catching up, fueled by rising digitalization efforts and growing awareness of the strategic benefits of network analytics. The regional landscape is characterized by varying levels of technological maturity, regulatory frameworks, and market readiness, influencing the pace and scale of analytics adoption across geographies.



