https://paper.erudition.co.in/terms
Question Paper Solutions of chapter Overview and Concepts of Data Warehousing of Data Warehousing & Data Mining, 7th Semester, Information Technology
This chapter presents theoretical and practical aspects associated with the implementation of a combined model-based/data-driven approach for failure prognostics based on particle filtering algorithms, in which the current estimate of the state PDF is used to determine the operating condition of the system and predict the progression of a fault indicator, given a dynamic state model and a set of process measurements. In this approach, the task of estimating the current value of the fault indicator, as well as other important changing parameters in the environment, involves two basic steps: a prediction step, based on the process model, and an update step, which incorporates the new measurement into the a priori state estimate. This framework allows the probability of failure at future time instants (the RUL PDF) to be estimated in real time, providing information about time-to-failure (TTF) expectations, statistical confidence intervals, and long-term predictions, using for this purpose empirical knowledge about critical conditions for the system (also referred to as hazard zones). This information is of paramount significance for improving system reliability and the cost-effective operation of critical assets, as has been shown in a case study where feedback correction strategies (based on uncertainty measures) were implemented to lengthen the RUL of a rotorcraft transmission system with propagating fatigue cracks on a critical component. Although the feedback loop is implemented using simple linear relationships, it provides quick insight into how the system reacts, in terms of its predicted RUL, to changes in its input signals. The method is able to handle non-Gaussian PDFs, since it includes concepts such as nonlinear state estimation and confidence intervals in its formulation.
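The prediction/update cycle described above can be sketched as a minimal bootstrap particle filter. The fault-growth model, noise levels, measurements, and hazard threshold below are illustrative placeholders, not the chapter's rotorcraft model:

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, noise_std=0.05):
    # Prediction step: propagate each particle through a hypothetical
    # fault-growth model x_k = 1.02 * x_{k-1} + process noise.
    return 1.02 * particles + rng.normal(0.0, noise_std, particles.shape)

def update(weights, particles, z, meas_std=0.1):
    # Update step: reweight particles by the likelihood of the new
    # measurement z under Gaussian measurement noise, then normalize.
    likelihood = np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
    weights = weights * likelihood
    return weights / weights.sum()

n = 1000
particles = rng.normal(1.0, 0.1, n)   # initial state PDF of the fault indicator
weights = np.full(n, 1.0 / n)

for z in [1.05, 1.12, 1.18]:          # incoming process measurements
    particles = predict(particles)
    weights = update(weights, particles, z)

state_estimate = float(np.sum(weights * particles))

# Probability of being in the hazard zone: weighted mass of particles
# beyond a hypothetical failure threshold.
hazard_threshold = 1.3
p_failure = float(np.sum(weights[particles > hazard_threshold]))
print(state_estimate, p_failure)
```

Repeating the predict step alone over future time instants (without further measurements) is what yields the long-term predictions and the RUL PDF the text describes.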
Real data from a fault-seeded test showed that the proposed framework was able to anticipate modifications to the system input that lengthen its RUL. Results of this test indicate that the method successfully suggested the correction that the system required. In this sense, future work will focus on developing and testing similar strategies using different input-output uncertainty metrics.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
Python code generated in the context of the dissertation 'Improving the semantic quality of conceptual models through text mining. A proof of concept' (Postgraduate studies Big Data & Analytics for Business and Management, KU Leuven Faculty of Economics and Business, 2018)
Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
Opal is Australia's national gemstone; however, until recently, most significant opal discoveries were those made in the early 1900s, more than 100 years ago. Currently there is no formal exploration model for opal, meaning there are no widely accepted concepts or methodologies available to suggest where new opal fields may be found. As a consequence, opal mining in Australia is a cottage industry, with the majority of opal exploration focused around old opal fields. The EarthByte Group has developed a new opal exploration methodology for the Great Artesian Basin. The work is based on the concept of applying "big data mining" approaches to data sets relevant for identifying regions that are prospective for opal. The group combined a multitude of geological and geophysical data sets that were jointly analysed to establish associations between particular features in the data and known opal mining sites. A "training set" of 1036 known opal localities was assembled, using localities featured in published reports and on maps. The data used include rock types, soil type, regolith type, topography, radiometric data, and a stack of digital palaeogeographic maps. The different data layers were analysed via spatio-temporal data mining, combining the GPlates PaleoGIS software (www.gplates.org) with the Orange data mining software (orange.biolab.si), to produce the first opal prospectivity map for the Great Artesian Basin. One of the main results of the study is that the geological conditions favourable for opal were found to be related to a particular sequence of surface environments over geological time: alternating shallow seas and river systems followed by uplift and erosion. The approach reduces the area deemed prospective for opal exploration to a mere 6% of the entire Great Artesian Basin. The work is described in two companion papers in the Australian Journal of Earth Sciences and Computers and Geosciences.
Age-coded multi-layered geological datasets are becoming increasingly prevalent with the surge in open-access geodata, yet there are few methodologies for extracting geological information and knowledge from these data. We present a novel methodology, based on the open-source GPlates software in which age-coded digital palaeogeographic maps are used to “data-mine” spatio-temporal patterns related to the occurrence of Australian opal. Our aim is to test the concept that only a particular sequence of depositional/erosional environments may lead to conditions suitable for the formation of gem quality sedimentary opal. Time-varying geographic environment properties are extracted from a digital palaeogeographic dataset of the eastern Australian Great Artesian Basin (GAB) at 1036 opal localities. We obtain a total of 52 independent ordinal sequences sampling 19 time slices from the Early Cretaceous to the present-day. We find that 95% of the known opal deposits are tied to only 27 sequences all comprising fluvial and shallow marine depositional sequences followed by a prolonged phase of erosion. We then map the total area of the GAB that matches these 27 opal-specific sequences, resulting in an opal-prospective region of only about 10% of the total area of the basin. The key patterns underlying this association involve only a small number of key environmental transitions. We demonstrate that these key associations are generally absent at arbitrary locations in the basin. This new methodology allows for the simplification of a complex time-varying geological dataset into a single map view, enabling straightforward application for opal exploration and for future co-assessment with other datasets/geological criteria. This approach may help unravel the poorly understood opal formation process using an empirical spatio-temporal data-mining methodology and readily available datasets to aid hypothesis testing.
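The sequence-matching idea, classifying a location as prospective when its through-time environment sequence matches one of the 27 opal-specific sequences, can be sketched as follows; the environment codes and sequences here are invented stand-ins, not the published ones:

```python
# Hypothetical environment codes per time slice (oldest to youngest):
# "S" = shallow marine, "F" = fluvial, "E" = erosion/uplift.
# Stand-ins for the 27 published opal-specific sequences:
opal_sequences = {("S", "F", "E"), ("F", "S", "E")}

def is_prospective(site_sequence, known=opal_sequences):
    """Return True if the site's depositional/erosional history matches
    a known opal-associated sequence of environments."""
    return tuple(site_sequence) in known

# Environment histories extracted at two hypothetical grid locations.
sites = {
    "site_A": ["S", "F", "E"],   # shallow sea, then rivers, then erosion
    "site_B": ["E", "S", "F"],   # same environments, wrong order
}
prospective = [name for name, seq in sites.items() if is_prospective(seq)]
print(prospective)
```

Note that order matters: site_B samples the same three environments as site_A but in a non-matching sequence, which is exactly why a time-varying dataset is needed rather than a static environment map.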
Andrew Merdith - EarthByte Research Group, School of Geosciences, The University of Sydney, Australia. ORCID: 0000-0002-7564-8149
Thomas Landgrebe - EarthByte Research Group, School of Geosciences, The University of Sydney, Australia
Adriana Dutkiewicz - EarthByte Research Group, School of Geosciences, The University of Sydney, Australia
R. Dietmar Müller - EarthByte Research Group, School of Geosciences, The University of Sydney, Australia. ORCID: 0000-0002-3334-5764
This collection contains geological data from Australia used for data mining in the publications Merdith et al. (2013) and Landgrebe et al. (2013). The resulting maps of opal prospectivity are also included.
Note: For details on the files included in this data collection, see “Description_of_Resources.txt”.
Note: For information on file formats and what programs to use to interact with various file formats, see “File_Formats_and_Recommended_Programs.txt”.
For more information on this data collection, and links to other datasets from the EarthByte Research Group, please visit EarthByte.
For more information about using GPlates, including tutorials and a user manual, please visit GPlates or EarthByte.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Technological advances in mass spectrometry (MS) toward more accurate and faster data acquisition result in highly informative but also more complex data sets. Especially the hyphenation of liquid chromatography (LC) and MS yields large data files containing a high amount of compound-specific information. Using electrospray ionization for compounds such as polymers enables highly sensitive detection, yet results in very complex spectra containing multiply charged ions and adducts. Recent years have seen the development of novel or updated data mining strategies to reduce the complexity of MS spectra and ultimately simplify the data analysis workflow. Among other techniques, Kendrick mass defect analysis, which graphically highlights compounds containing a given repeating unit, has been revitalized with applications in multiple fields of study, such as lipids and polymers. Especially for the latter, various data mining concepts have been developed that extend regular Kendrick mass defect analysis to multiply charged ion series. The aim of this work is to collect and subsequently implement these concepts in one of the most popular open-source MS data mining software packages, MZmine 2, to make them rapidly available for different MS-based measurement techniques and various vendor formats, with a special focus on hyphenated techniques such as LC-MS. In combination with already existing data mining modules, an example data set was processed and simplified, enabling even faster evaluation and polymer characterization.
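The Kendrick analysis mentioned above rescales each measured mass so that the chosen repeat unit (classically CH2) contributes an exactly integer mass; members of a homologous series then share the same mass defect. A minimal sketch using the standard single-charge formula (sign conventions for the defect vary, and the multiply-charged extensions implemented in MZmine 2 add parameters not shown here):

```python
# Kendrick mass analysis for a CH2 repeat unit (exact mass 14.01565 Da).
CH2_EXACT = 14.01565
CH2_NOMINAL = 14

def kendrick_mass(m):
    # Rescale the IUPAC mass so each CH2 unit adds exactly 14 mass units.
    return m * CH2_NOMINAL / CH2_EXACT

def kendrick_mass_defect(m):
    # One common convention: nominal (rounded) Kendrick mass minus Kendrick mass.
    km = kendrick_mass(m)
    return round(km) - km

# A homologous series differing by CH2 shares (nearly) the same defect,
# so it falls on a horizontal line in a KMD-vs-m/z plot.
series = [300.2089 + n * CH2_EXACT for n in range(3)]
kmds = [kendrick_mass_defect(m) for m in series]
print(kmds)
```

Plotting the defect against m/z is what makes repeat-unit families visually separable in an otherwise crowded polymer spectrum.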
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
SPHERE is a students' performance in physics education research dataset. It is presented as a multi-domain learning dataset of students' performance in physics, collected through several research-based assessments (RBAs) established by the physics education research (PER) community. A total of 497 eleventh-grade students were involved, from three large public high schools and one small public high school located in a suburban district of a highly populated province in Indonesia. Variables related to demographics, accessibility of literature resources, and students' physics identity were also investigated. The RBAs used in this dataset were selected based on concepts learned by the students in the Indonesian physics curriculum. We commenced the survey of students' understanding of Newtonian mechanics at the end of the first semester using the Force Concept Inventory (FCI) and the Force and Motion Conceptual Evaluation (FMCE). In the second semester, we assessed the students' scientific abilities and learning attitudes through the Scientific Abilities Assessment Rubrics (SAAR) and the Colorado Learning Attitudes about Science Survey (CLASS), respectively. The conceptual assessments continued in the second semester with the Rotational and Rolling Motion Conceptual Survey (RRMCS), Fluid Mechanics Concept Inventory (FMCI), Mechanical Waves Conceptual Survey (MWCS), Thermal Concept Evaluation (TCE), and Survey of Thermodynamic Processes and First and Second Laws (STPFaSL). We expect SPHERE to be a valuable dataset for supporting the advancement of the PER field, particularly in quantitative studies. For example, there is a need to advance research on using machine learning and data mining techniques in PER, which has faced challenges due to the lack of datasets dedicated to PER studies.
SPHERE can be reused as a students' performance dataset in physics, specifically dedicated to PER scholars who wish to apply machine learning techniques in physics education.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Technical notes and documentation on the common data model of the project CONCEPT-DM2.
This publication corresponds to the Common Data Model (CDM) specification of the CONCEPT-DM2 project for the implementation of a federated network analysis of the healthcare pathway of type 2 diabetes.
Aims of the CONCEPT-DM2 project:
General aim: To analyse the chronic care effectiveness and efficiency of care pathways in diabetes, assuming the relevance of care pathways as independent factors in health outcomes, using real-world data (RWD) from five Spanish Regional Health Systems.
Main specific aims:
Study Design: A population-based retrospective observational study centered on all T2D patients diagnosed in five Regional Health Services within the Spanish National Health Service. We will include all contacts of these patients with the health services, using electronic medical record systems covering Primary Care data, Specialized Care data, Hospitalizations, Urgent Care data, and Pharmacy Claims, as well as other registers such as the mortality and population registers.
Cohort definition: All patients with a Type 2 Diabetes code in their clinical health records.
Files included in this publication:
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Four cases of judgment.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Top five keyword counts by month.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data and model checkpoints for paper "Weakly Supervised Concept Map Generation through Task-Guided Graph Translation" by Jiaying Lu, Xiangjue Dong, and Carl Yang. The paper has been accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE).
GT-D2G-*.tar.gz are model checkpoints for the GT-D2G variants; these models were trained with seed=27.
nyt/dblp/yelp.*.win5.pickle.gz are initial graphs generated by NLP pipelines.
glove.840B.restaurant.400d.vec.gz is the pre-trained embedding for the Yelp dataset.
For more instructions, please refer to our GitHub repo.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Electronic health records (EHR) represent a rich and relatively untapped resource for characterizing the true nature of clinical practice and for quantifying the degree of inter-relatedness of medical entities such as drugs, diseases, procedures and devices. We provide a unique set of co-occurrence matrices, quantifying the pairwise mentions of 3 million terms mapped onto 1 million clinical concepts, calculated from the raw text of 20 million clinical notes spanning 19 years of data. Co-frequencies were computed by means of a parallelized annotation, hashing, and counting pipeline that was applied over clinical notes from Stanford Hospitals and Clinics. The co-occurrence matrix quantifies the relatedness among medical concepts which can serve as the basis for many statistical tests, and can be used to directly compute Bayesian conditional probabilities, association rules, as well as a range of test statistics such as relative risks and odds ratios. This dataset can be leveraged to quantitatively assess comorbidity, drug-drug, and drug-disease patterns for a range of clinical, epidemiological, and financial applications.
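As the description notes, the co-frequencies support direct computation of conditional probabilities and odds ratios. A minimal sketch with invented counts (not the Stanford figures):

```python
# Invented co-occurrence counts between two clinical concepts across N notes.
N = 10_000   # total clinical notes
n_a = 400    # notes mentioning concept A (say, a drug)
n_b = 300    # notes mentioning concept B (say, a disease)
n_ab = 60    # notes mentioning both

# Conditional probability P(B | A) straight from the co-frequencies.
p_b_given_a = n_ab / n_a

# Odds ratio from the implied 2x2 contingency table.
both, a_only = n_ab, n_a - n_ab
b_only, neither = n_b - n_ab, N - n_a - n_b + n_ab
odds_ratio = (both * neither) / (a_only * b_only)
print(p_b_given_a, odds_ratio)
```

Here P(B | A) = 0.15 against a baseline P(B) = 0.03, and the odds ratio is well above 1, the kind of signal the matrices are meant to expose for comorbidity and drug-disease patterns.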
This dataset consists of:
I. Source code and documentation for the "Shared Lexis Tool", a Windows desktop application that provides a means of exploring all of the words that are statistically associated with a word provided by the user, in a given corpus of text (for certain predefined corpora), over a given date range.
II. Source code and documentation for the "Coassociation Grapher", a Windows desktop application. Given a particular word of interest (a "focal token") in a particular corpus of text, the Coassociation Grapher allows you to view the relative probability of observing other terms ("bound tokens") before or after the focal token.
III. Numerous precomputed files that need to be hosted on a webserver in order for the Shared Lexis Tool to function properly.
IV. Files that were created in the course of conducting the research described in "Tracing shifting conceptual vocabularies through time" and "The idea of liberty" (full citations in the section 'SHARING/ACCESS INFORMATION' above), including "cliques" (https://en.wikipedia.org/wiki/Clique_(graph_theory)) of words that frequently appear together.
V. Source code of text-processing scripts developed by the Concept Lab, primarily for the purpose of generating the precomputed files described in item III, and associated data.
The Shared Lexis Tool and Coassociation Grapher (and the required precomputed files) are also being hosted at https://concept-lab.lib.cam.ac.uk/ from 2018 to 2023, and therefore those who are merely interested in using the tools within this time frame will have no use for the present dataset. However, these files may be useful for individuals who wish to host the files on their own webserver, for example, in order to use the Shared Lexis tool past 2023. See README.txt for more information.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The detailed data of Experiment C.
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
The PRELEARN dataset contains 6607 concept pairs and a “Wikipedia pages file” containing the raw text of the Wikipedia pages referring to the concepts extracted (using WikiExtractor on a Wikipedia dump of Jan. 2020). The dataset has been used for the PRELEARN shared task (https://sites.google.com/view/prelearn20/), organised as part of Evalita 2020 evaluation campaign (http://www.evalita.it/2020). It was extracted from the ITA-PREREQ dataset (Miaschi et al., 2019), built upon the AL-CPL dataset (Liang et al., 2018), a collection of binary-labelled concept pairs extracted from textbooks on four domains: data mining, geometry, physics and pre-calculus.
The concept pairs consist of target and prerequisite concepts (A, B), labelled as follows:
1 if B is a prerequisite of A;
0 in all other cases.
Domain experts were asked to manually annotate whether each pair of concepts showed a prerequisite relation or not. The dataset is split into a training set (5908 pairs) and a test set (699 pairs). The distribution of prerequisite and non-prerequisite labels was balanced (50/50) for each domain only in the test sets.
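Under this labelling scheme, each record pairs a target concept A with a candidate prerequisite B and a binary label. A minimal sketch of reading such pairs and checking the label balance (the CSV layout and the concept pairs are assumptions for illustration, not the task's exact distribution format):

```python
import csv
import io

# Hypothetical excerpt of (target A, candidate prerequisite B, label) rows;
# label 1 means B is a prerequisite of A, 0 in all other cases.
raw = """decision tree,entropy,1
geometry,data mining,0
derivative,limit,1
triangle,angle,1
"""

pairs = [(a, b, int(lbl)) for a, b, lbl in csv.reader(io.StringIO(raw))]
n_pos = sum(lbl for _, _, lbl in pairs)
n_neg = len(pairs) - n_pos
print(n_pos, n_neg)
```

Checking this balance matters because, per the description above, only the test sets are guaranteed to be 50/50; training-set skew should be accounted for when fitting a classifier.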
Real life business processes change over time
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The F-measure values of three experiments.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Extracted phenotypical concepts per cohort. For each cohort, we list the top 50 concepts ranked by frequency and by TF-IDF. The columns are: (1) the UMLS code of the phenotypical concept; (2) the French preferred term; (3) the English preferred term; (4) the frequency score (FREQ); (5) the TF-IDF score; (6) the concept's rank by frequency score; (7) the concept's rank by TF-IDF score; (8) the expert evaluation (1: relevant concept, 0: non-relevant concept). (XLS 93 kb)
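The FREQ and TF-IDF rankings reported in the table can diverge: a concept that appears in every cohort scores high on raw frequency but low on TF-IDF. A minimal sketch with invented counts and placeholder UMLS-style codes:

```python
import math

# Invented per-cohort concept counts (the UMLS-style codes are placeholders).
cohorts = {
    "cohort_1": {"C0011849": 40, "C0027051": 3},
    "cohort_2": {"C0011849": 25, "C0020538": 30},
}

def tf_idf(concept, cohort, corpus=cohorts):
    counts = corpus[cohort]
    tf = counts[concept] / sum(counts.values())           # frequency within cohort
    df = sum(1 for c in corpus.values() if concept in c)  # cohorts containing it
    idf = math.log(len(corpus) / df)
    return tf * idf

ranked_freq = sorted(cohorts["cohort_1"], key=cohorts["cohort_1"].get, reverse=True)
ranked_tfidf = sorted(cohorts["cohort_1"],
                      key=lambda c: tf_idf(c, "cohort_1"), reverse=True)
print(ranked_freq[0], ranked_tfidf[0])
```

Here the concept shared by both cohorts tops the frequency ranking but gets a zero TF-IDF score, while the cohort-specific concept tops the TF-IDF ranking, which is why the table reports both ranks.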
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Customs records are available for DEEP CONCEPT THYSSEN MINING S.A. Learn about its importers, supply capabilities, and the countries to which it supplies goods.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
As there was no large publicly available cross-domain dataset for comparative argument mining, we created one composed of sentences annotated with BETTER / WORSE markers (the first object is better / worse than the second object) or NONE (the sentence does not contain a comparison of the target objects). The BETTER sentences stand for a pro-argument in favor of the first compared object; WORSE sentences represent a con-argument and favor the second object. We aimed to minimize domain-specific biases in the dataset, in order to capture the nature of comparison rather than the nature of the particular domains, and thus decided to control the specificity of domains through the selection of comparison targets. We hypothesized, and could confirm in preliminary experiments, that comparison targets usually have a common hypernym (i.e., are instances of the same class), which we utilized for the selection of compared object pairs. The most specific domain we chose is computer science, with comparison targets such as programming languages, database products, and technology standards such as Bluetooth or Ethernet. Many computer science concepts can be compared objectively (e.g., on transmission speed or suitability for certain applications). The objects for this domain were manually extracted from "List of"-articles on Wikipedia. In the annotation process, annotators were asked to label sentences from this domain only if they had some basic knowledge of computer science. The second, broader domain is brands. It contains objects of different types (e.g., cars, electronics, and food). As brands are present in everyday life, anyone should be able to label the majority of sentences containing well-known brands such as Coca-Cola or Mercedes. Again, targets for this domain were manually extracted from "List of"-articles on Wikipedia. The third domain is not restricted to any topic: random.
For each of 24 randomly selected seed words, 10 similar words were collected based on the distributional similarity API of JoBimText (http://www.jobimtext.org). The seed words were created using randomlists.com: book, car, carpenter, cellphone, Christmas, coffee, cork, Florida, hamster, hiking, Hoover, Metallica, NBC, Netflix, ninja, pencil, salad, soccer, Starbucks, sword, Tolkien, wine, wood, XBox, Yale. Especially for brands and computer science, the resulting object lists were large (4493 objects in brands and 1339 in computer science). In a manual inspection, low-frequency and ambiguous objects were removed from all object lists (e.g., RAID (a hardware concept) and Unity (a game engine) are also regularly used nouns). The remaining objects were combined into pairs. For each object type (seed Wikipedia list page or seed word), all possible combinations were created. These pairs were then used to find sentences containing both objects. The aforementioned approaches to selecting compared object pairs tend to minimize the inclusion of domain-specific data, but do not fully solve the problem; we leave extending the dataset with more diverse object pairs, including abstract concepts, for future work. For the sentence mining, we used the publicly available index of dependency-parsed sentences from the Common Crawl corpus, containing over 14 billion English sentences filtered for duplicates. This index was queried for sentences containing both objects of each pair. For 90% of the pairs, we also added comparative cue words (better, easier, faster, nicer, wiser, cooler, decent, safer, superior, solid, terrific, worse, harder, slower, poorly, uglier, poorer, lousy, nastier, inferior, mediocre) to the query in order to bias the selection towards comparisons, while still admitting comparisons that do not contain any of the anticipated cues. This was necessary, as random sampling would have resulted in only a very tiny fraction of comparisons.
Note that even sentences containing a cue word do not necessarily express a comparison between the desired targets (dog vs. cat: He's the best pet that you can get, better than a dog or cat.). It is thus especially crucial to enable a classifier to learn not to rely on the existence of cue words alone (very likely in a random sample of sentences with very few comparisons). For our corpus, we kept pairs with at least 100 retrieved sentences. From all sentences of those pairs, 2500 per category were randomly sampled as candidates for a crowdsourced annotation that we conducted on figure-eight.com in several small batches. Each sentence was annotated by at least five trusted workers. We ranked annotations by confidence, which is the figure-eight internal measure combining annotator trust and voting, and discarded annotations with a confidence below 50%. Of all annotated items, 71% received unanimous votes, and for over 85% at least 4 out of 5 workers agreed -- rendering the collection procedure, aimed at ease of annotation, successful. The final dataset contains 7199 sentences with 271 distinct object pairs. The majority of sentences (over 72%) are non-comparative despite biasing the selection with cue words; in 70% of the comparative sentences, the favored target is named first. You can browse through the data here: https://docs.google.com/spreadsheets/d/1U8i6EU9GUKmHdPnfwXEuBxi0h3aiRCLPRC-3c9ROiOE/edit?usp=sharing. A full description of the dataset is available in the workshop paper at the ACL 2019 conference. Please cite this paper if you use the data:
""Categorization of Comparative Sentences for Argument Mining."" arXiv preprint arXiv:1809.06152 (2018).@inproceedings{franzek2018categorization, title={Categorization of Comparative Sentences for Argument Mining}, author={Panchenko, Alexander and Bondarenko, and Franzek, Mirco and Hagen, Matthias and Biemann, Chris}, booktitle={Proceedings of the 6th Workshop on Argument Mining at ACL'2019}, year={2019}, address={Florence, Italy}}
The ASIAS effort builds on demonstrations that an open exchange of information contributes to improved aviation safety. ASIAS is a comprehensive effort, covering the collection and secure maintenance of aviation data, the analysis performed on that data, and long-term research to better extract safety information from the data. In the mid-90s, NASA researchers started briefing the JIMDAT of CAST on how extracting and integrating information from many sources and multiple perspectives (including controllers and flight crews) could help them improve aviation safety. The NASA Integrated Safety Data for Strategic Response (ISDSR) concept was incorporated into the JIMDAT concept, ASIAS, which was presented to CAST as a set of essential capabilities and was then adopted. In parallel with these activities, the FAA encouraged NASA to undertake the Information Sharing Initiative (ISI), a collaborative effort among the FAA, NASA, the air carriers, and the unions, to develop the DNFA and DNAA (two key sources of data for ASIAS). A 5-year plan for collaboration between NASA and the FAA to develop ISDSR was proposed but was never put into place. That plan would have continued the collaboration, with provisions for NASA to develop the analytical tools and transfer them to the FAA for implementation. NASA has developed, and continues to develop, advanced algorithms to mine the various data sources for information that could help maintain and improve the safety of the air transportation system. Such algorithms have already been developed by NASA to identify atypical flights revealing unexpected events and determine why they were anomalous, to identify anomalous cockpit procedures (switches flipped in the cockpit) during takeoff and landing for possible evidence of problems with the automated systems, and to categorize submitted safety reports, such as those submitted to ASRS or ASAP, into one or more defined categories to aid the search for clues as to why safety-related events may have occurred.
ASIAS provides a vital mechanism for monitoring safety concerns as we transition to the Next Generation Air Transportation System (NextGen). Not only can ASIAS examine the data for any indication of hypothesized concerns, but, with the NASA-developed data mining tools, ASIAS can also monitor for statistical trends suggesting the potential emergence of new issues unanticipated or unimagined during the design and testing of NextGen concepts. ASIAS has been carefully developed to capitalize on the best attributes of earlier research at NASA, while also providing necessary guarantees for anonymity and data protection and while using scientifically justified, rigorous methods for estimating frequencies and causality. NASA's role in the ASIAS effort is to continue to develop these advanced data mining tools and methods to better analyze data voluntarily provided by the aviation community.
Acronym list:
ASAP: Aviation Safety Action Program
ASRS: Aviation Safety Reporting System
ASIAS: Aviation Safety Information Analysis & Sharing
CAST: Commercial Aviation Safety Team
FAA: Federal Aviation Administration
ISDSR: Integrated Safety Data for Strategic Response
ISI: Information Sharing Initiative
JIMDAT: Joint Implementation Monitoring Data Analysis Team
NASA: National Aeronautics and Space Administration