Attribution-ShareAlike 4.0 (CC BY-SA 4.0) https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
The graph shows the changes in the impact factor of ^ and its corresponding percentile, for comparison with the entire literature. The Impact Factor is the most common scientometric index: the number of citations received in a given year by papers published in the two preceding years, divided by the number of papers published in those two years.
Attribution-ShareAlike 4.0 (CC BY-SA 4.0) https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
This graph shows how the impact factor of ^ is computed. The left axis shows the number of papers published in years X-1 and X-2, and the right axis shows their citations in year X.
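The arithmetic behind the graph is simple enough to sketch; the counts below are made up purely for illustration:

```python
def impact_factor(citations_in_year_x, papers_year_x_minus_1, papers_year_x_minus_2):
    """Two-year impact factor for year X: citations received in year X by
    papers published in years X-1 and X-2, divided by the number of papers
    published in those two years."""
    return citations_in_year_x / (papers_year_x_minus_1 + papers_year_x_minus_2)

# Illustrative (made-up) counts: 600 citations in year X to the
# 150 + 130 papers published in the two preceding years.
if_x = impact_factor(600, 150, 130)  # 600 / 280, roughly 2.14
```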
Journal of Big Data Impact Factor 2024-2025 - ResearchHelpDesk - The Journal of Big Data publishes high-quality, scholarly research papers, methodologies and case studies covering a broad range of topics, from big data analytics to data-intensive computing and all applications of big data research. The journal examines the challenges facing big data today and going forward including, but not limited to: data capture and storage; search, sharing, and analytics; big data technologies; data visualization; architectures for massively parallel processing; data mining tools and techniques; machine learning algorithms for big data; cloud computing platforms; distributed file systems and databases; and scalable storage systems. Academic researchers and practitioners will find the Journal of Big Data to be a seminal source of innovative material. All articles published by the Journal of Big Data are made freely and permanently accessible online immediately upon publication, without subscription charges or registration barriers. As authors of articles published in the Journal of Big Data, you are the copyright holders of your article and have granted to any third party, in advance and in perpetuity, the right to use, reproduce or disseminate your article, according to the SpringerOpen copyright and license agreement. For those of you who are US government employees or are prevented from being copyright holders for similar reasons, SpringerOpen can accommodate non-standard copyright lines.
This paper proposes a scalable, locally privacy-preserving algorithm for distributed Peer-to-Peer (P2P) data aggregation, useful for many advanced data mining/analysis tasks such as average/sum computation, decision tree induction, feature selection, and more. Unlike most multi-party privacy-preserving data mining algorithms, this approach works asynchronously through local interactions and is highly scalable. It particularly deals with the distributed computation of the sum of a set of numbers stored at different peers in a P2P network, in the context of a P2P web mining application. The proposed optimization-based privacy-preserving technique for computing the sum allows different peers to specify different privacy requirements without having to adhere to a global set of parameters for the chosen privacy model. Since distributed sum computation is a frequently used primitive, the proposed approach is likely to have significant impact on many data mining tasks such as multi-party privacy-preserving clustering, frequent itemset mining, and statistical aggregate computation.
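The paper's optimization-based technique is not reproduced here, but the underlying primitive, privacy-preserving distributed summation, can be sketched with the classical additive secret-sharing scheme. All names and parameters below are illustrative assumptions, not the authors' code:

```python
import random

def make_shares(value, n_peers, lo=-1e6, hi=1e6):
    """Split one peer's private value into n_peers random additive shares."""
    shares = [random.uniform(lo, hi) for _ in range(n_peers - 1)]
    shares.append(value - sum(shares))  # last share makes the total exact
    return shares

def secure_sum(values):
    """Classical additive secret-sharing sum: no peer ever sees another
    peer's raw value, only individual shares and published partial sums."""
    n = len(values)
    # Each peer i splits its private value into one share per peer.
    all_shares = [make_shares(v, n) for v in values]
    # Peer j adds up the shares it received and publishes only that partial sum.
    partials = [sum(all_shares[i][j] for i in range(n)) for j in range(n)]
    # The published partial sums add up to the global sum.
    return sum(partials)
```

Each share in isolation is uniformly random, which is what hides individual values; the paper's contribution goes further by letting each peer tune its own privacy parameters rather than using one global setting.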
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Introduction: Hospitals have seen a rise in Medical Emergency Team (MET) reviews. We hypothesised that the commonest MET calls result in similar treatments. Our aim was to design a pre-emptive management algorithm that allowed direct institution of treatment to patients without having to wait for attendance of the MET team, and to model its potential impact on MET call incidence and patient outcomes. Methods: Data were extracted for all MET calls from the hospital database. Association rule data mining techniques were used to identify the most common combinations of MET call causes, outcomes and therapies. Results: There were 13,656 MET calls during the 34-month study period in 7936 patients. The most common MET call was for hypotension (31%, 2459/7936). These MET calls were strongly associated with the immediate administration of intravenous fluid (70% [1714/2459] v 13% [739/5477], p
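The study's data are not public, so the sketch below illustrates association rule mining of the kind described, on made-up MET-call transactions; the miner, thresholds and item names are all hypothetical:

```python
from itertools import combinations
from collections import Counter

def mine_rules(transactions, min_support=0.3, min_confidence=0.6):
    """Tiny association-rule miner over item sets (one transaction per MET
    call, items being causes, therapies and outcomes). Returns rules
    (lhs, rhs, support, confidence) meeting both thresholds."""
    n = len(transactions)
    item_counts = Counter()
    pair_counts = Counter()
    for t in transactions:
        for item in t:
            item_counts[item] += 1
        for a, b in combinations(sorted(t), 2):
            pair_counts[(a, b)] += 1
    rules = []
    for (a, b), c in pair_counts.items():
        support = c / n
        if support < min_support:
            continue
        for lhs, rhs in ((a, b), (b, a)):
            confidence = c / item_counts[lhs]
            if confidence >= min_confidence:
                rules.append((lhs, rhs, support, confidence))
    return rules

# Made-up MET-call transactions for illustration only.
calls = [
    {"hypotension", "iv_fluid"},
    {"hypotension", "iv_fluid"},
    {"hypotension", "iv_fluid"},
    {"tachycardia", "ecg"},
    {"hypotension", "review_only"},
]
rules = mine_rules(calls)
```

On this toy data the rule hypotension → iv_fluid survives both thresholds, mirroring the paper's finding that hypotension calls strongly associate with immediate intravenous fluid.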
International Journal of Computational Intelligence Systems Impact Factor 2024-2025 - ResearchHelpDesk - The International Journal of Computational Intelligence Systems is an international peer-reviewed journal and the official publication of the European Society for Fuzzy Logic and Technologies (EUSFLAT). The journal publishes original research on all aspects of applied computational intelligence, especially targeting papers demonstrating the use of techniques and methods originating from computational intelligence theory. This is an open access journal, i.e. all articles are immediately and permanently free to read, download, copy & distribute. The journal is published under the CC BY-NC 4.0 user license, which defines the permitted third-party reuse of its articles. Aims & Scope: The core theories of computational intelligence are fuzzy logic, neural networks, evolutionary computation and probabilistic reasoning.
The journal publishes only articles related to the use of computational intelligence and broadly covers the following topics: Autonomous reasoning; Bio-informatics; Cloud computing; Condition monitoring; Data science; Data mining; Data visualization; Decision support systems; Fault diagnosis; Intelligent information retrieval; Human-machine interaction and interfaces; Image processing; Internet and networks; Noise analysis; Pattern recognition; Prediction systems; Power (nuclear) safety systems; Process and system control; Real-time systems; Risk analysis and safety-related issues; Robotics; Signal and image processing; IoT and smart environments; Systems integration; System control; System modelling and optimization; Telecommunications; Time series prediction; Warning systems; Virtual reality; Web intelligence; Deep learning.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Preventive healthcare is a crucial pillar of health, as it contributes to staying healthy and to receiving immediate treatment when needed. Mining knowledge from longitudinal studies has the potential to significantly improve preventive healthcare. Unfortunately, data originating from such studies are characterized by high complexity, huge volume, and a plethora of missing values. Machine Learning, Data Mining and Data Imputation models are utilized as part of solving these challenges, respectively. To this end, we focus on the development of a complete methodology for the ATHLOS Project, funded by the European Union's Horizon 2020 Research and Innovation Program, which aims to achieve a better interpretation of the impact of aging on health. The inherent complexity of the provided dataset lies in the fact that the project includes 15 independent European and international longitudinal studies of aging. In this work, we mainly focus on the HealthStatus (HS) score, an index that estimates a person's state of health, aiming to examine the effect of various data imputation models on the prediction power of classification and regression models. Our results are promising, indicating the critical importance of data imputation in enhancing preventive medicine's crucial role.
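As a minimal illustration of one baseline imputation model of the kind compared in such studies (column-mean imputation, not the ATHLOS pipeline itself; data and names below are made up):

```python
from statistics import mean

def impute_mean(rows):
    """Replace missing values (None) in each column with that column's
    mean over the observed values - the simplest imputation baseline."""
    cols = list(zip(*rows))
    means = [mean(v for v in col if v is not None) for col in cols]
    return [[means[j] if v is None else v for j, v in enumerate(row)]
            for row in rows]

# Tiny made-up table with missing entries; column means are 2.0 and 6.0.
data = [[1.0, None], [3.0, 4.0], [None, 8.0]]
filled = impute_mean(data)
```

Downstream classifiers and regressors are then trained on the filled table, which is exactly where the choice of imputation model starts to influence prediction power.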
Journal of Computational Design and Engineering Impact Factor 2024-2025 - ResearchHelpDesk - The Journal of Computational Design and Engineering is an international journal that aims to provide academia and industry with a venue for rapid publication of research papers reporting innovative computational methods and applications to achieve a major breakthrough, practical improvements, and bold new research directions within a wide range of design and engineering: Theory and its progress in computational advancement for design and engineering; Development of computational frameworks to support large-scale design and engineering; Interaction issues among humans, designed artifacts, and systems; Knowledge-intensive technologies for intelligent and sustainable systems; Emerging technology and convergence of technology fields presented with convincing design examples; Educational issues for academia, practitioners, and future generations; Proposals on new research directions as well as surveys and retrospectives on mature fields.
Examples of relevant topics include traditional and emerging issues in design and engineering, but are not limited to: Field-specific issues in mechanical, aerospace, shipbuilding, industrial, architectural, plant, and civil engineering as well as industrial design; Geometric modeling and processing, solid and heterogeneous modeling, computational geometry, features, and virtual prototyping; Computer graphics, virtual and augmented reality, and scientific visualization; Human modeling and engineering, user interaction and experience, HCI, HMI, human-vehicle interaction (HVI), cognitive engineering, and human factors and ergonomics with computers; Knowledge-based engineering, intelligent CAD, AI and machine learning in design, and ontology; Product data exchange and management, PDM/PLM/CPC, PDX/PDQ, interoperability, data mining, and database issues; Design theory and methodology, sustainable design and engineering, concurrent engineering, and collaborative engineering; Digital/virtual manufacturing, rapid prototyping and tooling, and CNC machining; Computer-aided inspection, geometric and engineering tolerancing, and reverse engineering; Finite element analysis, optimization, meshes and discretization, and virtual engineering; Bio-CAD, Nano-CAD, and medical applications; Industrial design, aesthetic design, new media, and design education; Survey and benchmark reports.
International Journal of Artificial Intelligence Impact Factor 2024-2025 - ResearchHelpDesk - The main aim of the International Journal of Artificial Intelligence™ (ISSN 0974-0635) is to publish refereed, well-written original research articles and studies that describe the latest research and developments in the area of Artificial Intelligence. This is a broad-based journal covering all branches of Artificial Intelligence and its applications in the following topics: Technology & Computing; Fuzzy Logic; Neural Networks; Reasoning and Evolution; Automatic Control; Mechatronics; Robotics; Parallel Processing; Programming Languages; Software & Hardware Architectures; CAD Design & Testing; Web Intelligence Applications; Computer Vision and Speech Understanding; Multimedia & Cognitive Informatics; Data Mining and Machine Learning Tools; Heuristic and AI Planning Strategies and Tools; Computational Theories of Learning; Signal, Image & Speech Processing; Intelligent System Architectures; Knowledge Representation; Bioinformatics; Natural Language Processing; Mathematics & Physics. The International Journal of Artificial Intelligence (IJAI) is a peer-reviewed online journal published in spring and autumn, i.e. twice a year. The International Journal of Artificial Intelligence (ISSN 0974-0635) was reviewed, abstracted and indexed in the past by INSPEC (The IET), SCOPUS (Elsevier Bibliographic Databases), Zentralblatt MATH (io-port.net) of the European Mathematical Society, Indian Science Abstracts, getCITED, SCImago Journal & Country Rank, Newjour, JournalSeek, Math-jobs.com's Journal Index, Academic Keys, Ulrich's Periodicals Directory, IndexCopernicus, and the International Statistical Institute (ISI, Netherlands) Journal Index.
The IJAI has also applied to be reviewed, abstracted and indexed by the Clarivate Analytics Web of Science (also known as the Thomson ISI Web of Knowledge SCI), by Mathematical Reviews and MathSciNet of the American Mathematical Society, and by other agencies.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Sewol Ferry Disaster, which took place on 16 April 2014, was a national-level disaster in South Korea that caused severe social distress nationwide. No research at the domestic level thus far has examined the influence of the disaster on social stress through a sentiment analysis of social media data. Data extracted from YouTube, Twitter, and Facebook were used in this study. The population was users, randomly selected from the aforementioned social media platforms, who had posted texts related to the disaster from April 2014 to March 2015. ANOVA was used for statistical comparison between negative, neutral, and positive sentiments at a 95% confidence level. For the NLP-based data mining results, bar graph and word cloud analyses as well as analyses of phrases, entities, and queries were implemented. The results showed a significantly negative sentiment on all social media platforms, mainly related to fundamental agents such as ex-president Park and her related political parties and politicians. YouTube, Twitter, and Facebook results showed negative sentiment in phrases (63.5%, 69.4%, and 58.9%, respectively), entities (81.1%, 69.9%, and 76.0%, respectively), and query topics (75.0%, 85.4%, and 75.0%, respectively). All results were statistically significant (p < 0.001). This research provides scientific evidence of the negative psychological impact of the disaster on the Korean population. This study is significant because it is the first to conduct sentiment analysis of data extracted from the three largest existing social media platforms regarding the disaster.
Additional file 2: Supplementary Table 2. TF-IDF analysis for mind split disorder after the revision of the schizophrenia name
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Raw and analyzed data for climate change research in Malaysia using scientometrics-based analysis.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data mining analysis results of the impact of production and operation factors on delayed production control.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A good understanding of the practices followed by software development projects can positively impact their success, particularly for attracting talent and on-boarding new members. In this paper, we perform a cluster analysis to classify software projects that follow continuous integration in terms of their activity, popularity, size, testing, and stability. Based on this analysis, we identify and discuss four different groups of repositories with distinct characteristics that separate them from the other groups. With this new understanding, we encourage open source projects to acknowledge and advertise their preferences according to these defining characteristics, so that they can recruit developers who share similar values.
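A cluster analysis of this kind can be sketched with plain k-means over numeric repository features; k-means is only one common choice, and the implementation and data below are illustrative assumptions, not the study's method:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means over numeric feature vectors (e.g. per-repository
    activity, popularity, size, testing and stability metrics)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest center (squared distance).
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        # Recompute each center as the mean of its cluster.
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Two obvious groups of made-up 2-D repository features.
repos = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
centers, clusters = kmeans(repos, k=2)
```

In practice one would standardize the features first, since activity counts and star counts live on very different scales.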
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The LSC (Leicester Scientific Corpus)
April 2020, by Neslihan Suzen, PhD student at the University of Leicester (ns433@leicester.ac.uk). Supervised by Prof Alexander Gorban and Dr Evgeny Mirkes.

The data are extracted from the Web of Science [1]. You may not copy or distribute these data in whole or in part without the written consent of Clarivate Analytics.

[Version 2] A further cleaning is applied in Data Processing for LSC Abstracts in Version 1*. Details of the cleaning procedure are explained in Step 6.
* Suzen, Neslihan (2019): LSC (Leicester Scientific Corpus). figshare. Dataset. https://doi.org/10.25392/leicester.data.9449639.v1

Getting Started
This text provides information on the LSC (Leicester Scientific Corpus) and the pre-processing steps applied to abstracts, and describes the structure of the files that organise the corpus. The corpus was created for future work on the quantification of the meaning of research texts, and is made available for use in Natural Language Processing projects.

LSC is a collection of abstracts of articles and proceedings papers published in 2014 and indexed by the Web of Science (WoS) database [1]. The corpus contains only documents in English. Each document in the corpus contains the following parts:
1. Authors: the list of authors of the paper
2. Title: the title of the paper
3. Abstract: the abstract of the paper
4. Categories: one or more categories from the list of categories [2]. The full list of categories is presented in the file 'List_of_Categories.txt'.
5. Research Areas: one or more research areas from the list of research areas [3]. The full list of research areas is presented in the file 'List_of_Research_Areas.txt'.
6. Total Times Cited: the number of times the paper was cited by other items from all databases within the Web of Science platform [4]
7. Times Cited in Core Collection: the total number of times the paper was cited by other papers within the WoS Core Collection [4]

The corpus was collected online in July 2018 and contains the number of citations from publication date to July 2018. We describe a document as the collection of information (about a paper) listed above. The total number of documents in LSC is 1,673,350.

Data Processing
Step 1: Downloading the Data Online
The dataset was collected manually by exporting documents as tab-delimited files online. All documents are available online.

Step 2: Importing the Dataset to R
The LSC was collected as TXT files. All documents are imported into R.

Step 3: Cleaning the Data of Documents with an Empty Abstract or without a Category
As our research is based on the analysis of abstracts and categories, all documents with empty abstracts and all documents without categories were removed.

Step 4: Identification and Correction of Concatenated Words in Abstracts
Medicine-related publications in particular use 'structured abstracts'. Abstracts of this type are divided into sections with distinct headings such as Introduction, Aim, Objective, Method, Result, Conclusion, etc. The tool used for extracting abstracts concatenates section headings with the first word of the section, producing words such as 'ConclusionHigher' and 'ConclusionsRT'. Such words were detected and identified by sampling medicine-related publications with human intervention, and each detected concatenated word was split into two words; for instance, 'ConclusionHigher' was split into 'Conclusion' and 'Higher'. The section headings found in such abstracts are listed below:

Background Method(s) Design Theoretical Measurement(s) Location Aim(s) Methodology Process Abstract Population Approach Objective(s) Purpose(s) Subject(s) Introduction Implication(s) Patient(s) Procedure(s) Hypothesis Measure(s) Setting(s) Limitation(s) Discussion Conclusion(s) Result(s) Finding(s) Material(s) Rationale(s) Implications for health and nursing policy

Step 5: Extracting (Sub-setting) the Data Based on Lengths of Abstracts
After correction, the lengths of the abstracts were calculated. 'Length' is the total number of words in the text, calculated by the same rule as Microsoft Word's 'word count' [5]. According to the APA style manual [6], an abstract should contain between 150 and 250 words. In LSC, we limited the length of abstracts to between 30 and 500 words, in order to study documents with abstracts in typical length ranges and to avoid the effect of length on the analysis.

Step 6: [Version 2] Cleaning Copyright Notices, Permission Policies, Journal Names and Conference Names from LSC Abstracts in Version 1
Conferences and journals can append a footer with a copyright notice, permission policy, journal name, licence, authors' rights or conference name below the text of an abstract. The tool used for extracting and processing abstracts from the WoS database attaches such footers to the text; for example, casual observation shows that copyright notices such as 'Published by Elsevier Ltd.' appear in many texts. To avoid abnormal appearances of words in further analysis, such as bias in frequency calculations, we cleaned such sentences and phrases from the abstracts of LSC Version 1. We removed copyright notices, conference names, journal names, authors' rights, licences and permission policies identified by sampling abstracts.

Step 7: [Version 2] Re-extracting (Sub-setting) the Data Based on Lengths of Abstracts
The cleaning procedure described in the previous step left some abstracts below our minimum length criterion (30 words); 474 such texts were removed.

Step 8: Saving the Dataset into CSV Format
Documents are saved into 34 CSV files. In the CSV files, the information is organised with one record per line, and the abstract, title, list of authors, list of categories, list of research areas, and times cited are recorded in fields.

To access the LSC for research purposes, please email ns433@le.ac.uk.

References
[1] Web of Science. (15 July). Available: https://apps.webofknowledge.com/
[2] WoS Subject Categories. Available: https://images.webofknowledge.com/WOKRS56B5/help/WOS/hp_subject_category_terms_tasca.html
[3] Research Areas in WoS. Available: https://images.webofknowledge.com/images/help/WOS/hp_research_areas_easca.html
[4] Times Cited in WoS Core Collection. (15 July). Available: https://support.clarivate.com/ScientificandAcademicResearch/s/article/Web-of-Science-Times-Cited-accessibility-and-variation?language=en_US
[5] Word Count. Available: https://support.office.com/en-us/article/show-word-count-3c9e6a11-a04d-43b4-977c-563a0e0d5da3
[6] American Psychological Association, Publication Manual. American Psychological Association, Washington, DC, 1983.
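The heading-splitting step (Step 4) can be sketched with a heading-aware regular expression; the heading list and function below are an illustrative subset and an assumed implementation, not the corpus's actual cleaning code:

```python
import re

# Illustrative subset of the section headings listed in the corpus
# description; the real cleaning used headings identified by sampling
# abstracts with human intervention.
HEADINGS = [
    "Conclusions", "Conclusion", "Results", "Result", "Findings",
    "Background", "Methods", "Method", "Objectives", "Objective",
    "Introduction", "Discussion", "Aims", "Aim",
]

# A known heading immediately followed by another capitalised word
# signals a fused pair such as 'ConclusionHigher'. Longer variants come
# first in the alternation so 'Conclusions' wins over 'Conclusion'.
_FUSED = re.compile(r"\b(" + "|".join(HEADINGS) + r")(?=[A-Z][a-z])")

def split_fused_headings(text):
    """Insert a space between a section heading and the word fused onto it."""
    return _FUSED.sub(r"\1 ", text)

split_fused_headings("ConclusionHigher scores were associated with age")
# -> 'Conclusion Higher scores were associated with age'
```

The lookahead keeps ordinary uses of the heading words intact, since a heading followed by a space never matches.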
Additional file 4: Supplementary Table 4. Hospitalization statistics of schizophrenia patients in South Korea
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A 20-minute lightning talk by Claudia Wolff, from Christian-Albrechts-University Kiel, at the Better Science through Better Data 2018 event. The video recording, slides and scribe notes are included.
https://www.datainsightsmarket.com/privacy-policy
The AI for Big Data Analytics market is booming, projected to reach $250 billion by 2033 with a 25% CAGR. Discover key trends, leading companies, and regional insights in this comprehensive market analysis. Explore applications across various sectors and the impact of advanced AI technologies.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data set for the paper What are the Effects of History Length and Age on Mining Software Change Impact? by Leon Moonen, Thomas Rolfsnes, David Binkley and Stefano di Alesio. In Journal of Empirical Software Engineering (EMSE), 2018, Springer. https://doi.org/10.1007/s10664-017-9588-z Available from https://evolveit.bitbucket.io/publications/emse2018/
Please cite this work by referring to the corresponding journal publication (a preprint is included in this package).
The goal of Software Change Impact Analysis is to identify artifacts (typically source-code files or individual methods therein) potentially affected by a change. Recently, there has been increased interest in mining software change impact based on evolutionary coupling. A particularly promising approach uses association rule mining to uncover potentially affected artifacts from patterns in the system’s change history. Two main considerations when using this approach are the history length, the number of transactions from the change history used to identify the impact of a change, and history age, the number of transactions that have occurred since patterns were last mined from the history. Although history length and age can significantly affect the quality of mining results, few guidelines exist on how to best select appropriate values for these two parameters.
In this paper, we empirically investigate the effects of history length and age on the quality of change impact analysis using mined evolutionary coupling. Specifically, we report on a series of systematic experiments using three state-of-the-art mining algorithms that involve the change histories of two large industrial systems and 17 large open source systems. In these experiments, we vary the length and age of the history used to mine software change impact, and assess how this affects precision and applicability. Results from the study are used to derive practical guidelines for choosing history length and age when applying association rule mining to conduct software change impact analysis.
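The two parameters can be made concrete with a hypothetical helper that selects the mining window from a chronologically ordered change history; this is an assumed illustration, not the paper's implementation:

```python
def recent_history(change_history, length, age=0):
    """Pick the mining window from a change history ordered oldest to
    newest: skip the `age` most recent transactions (history age), then
    keep the `length` transactions immediately before them (history
    length). Association rules are mined only from that window."""
    end = len(change_history) - age
    start = max(0, end - length)
    return change_history[start:end]

# With 10 transactions, a history length of 4 and an age of 2,
# transactions 4..7 are mined while 8 and 9 form the "aged" gap.
window = recent_history(list(range(10)), length=4, age=2)
```

The experiments in the paper effectively sweep `length` and `age` over such windows and measure how precision and applicability of the mined impact rules respond.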