Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Summary of network visualisation tools commonly used for the analysis of biological data.
https://www.shibatadb.com/license/data/proprietary/v1.0/license.txt
Network of 44 papers and 52 citation links related to "Narrative configuration in qualitative analysis".
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Excel spreadsheet contains the quantitative questions (Questions 1, 3 and 4). Each question is analysed in the form of a frequency distribution table and a pie chart.
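A frequency distribution of the kind tabulated in the spreadsheet can be sketched in a few lines of Python; the response values below are hypothetical, not taken from the dataset:

```python
from collections import Counter

def frequency_table(responses):
    """Rows of (category, count, percent) for one categorical question --
    the numbers behind a frequency distribution table and its pie chart."""
    counts = Counter(responses)
    total = len(responses)
    return [(cat, n, round(100 * n / total, 1)) for cat, n in counts.most_common()]

# Hypothetical answers to one of the quantitative questions
answers = ["Yes", "No", "Yes", "Unsure", "Yes", "No", "Yes", "Yes"]
table = frequency_table(answers)
```

Each percentage column entry corresponds to one pie-chart slice.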
https://www.shibatadb.com/license/data/proprietary/v1.0/license.txt
Network of 43 papers and 65 citation links related to "Social media as a data gathering tool for international business qualitative research: opportunities and challenges".
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
We include the course syllabus used to teach quantitative research design and analysis methods to graduate Linguistics students using a blended teaching and learning approach. The blended course took place over two weeks and builds on a face-to-face course presented over two days in 2019. Students worked through the topics in preparation for a live interactive video session each Friday to go through the activities. Additional communication took place on Slack for two hours each week. A survey was conducted at the start and end of the course to ascertain participants' perceptions of the usefulness of the course. The links to online elements and the evaluations have been removed from the uploaded course guide.

Participants who complete this workshop will be able to:
- outline the steps and decisions involved in quantitative data analysis of linguistic data
- explain common statistical terminology (sample, mean, standard deviation, correlation, nominal, ordinal and scale data)
- perform common statistical tests using jamovi (e.g. t-test, correlation, ANOVA, regression)
- interpret and report common statistical tests
- describe and choose from the various graphing options used to display data
- use jamovi to perform common statistical tests and graph results

Evaluation
Participants who complete the course will use these skills and knowledge to complete the following activities for evaluation:
- analyse the data for a project and/or assignment (in part or in whole)
- plan the results section of an Honours research project (where applicable)

Feedback and suggestions can be directed to M Schaefer (schaemn@unisa.ac.za).
WIDEa is R-based software that aims to provide users with a range of functionalities to explore, manage, clean and analyse "big" environmental and (in/ex situ) experimental data. These functionalities are the following:
1. Loading/reading different data types: basic (called normal), temporal, and mid/near-region infrared spectra (called IR), with frequency (wavenumber) used as the unit (in cm-1);
2. Interactive data visualization from a multitude of graph representations: 2D/3D scatter plot, box plot, histogram, bar plot, correlation matrix;
3. Manipulation of variables: concatenation of qualitative variables, transformation of quantitative variables by generic functions in R;
4. Application of mathematical/statistical methods;
5. Creation/management of data (named flag data) considered atypical;
6. Study of normal distribution model results for different strategies: calibration (checking assumptions on residuals) and validation (comparison between measured and fitted values). The model form can be more or less complex: mixed effects, main/interaction effects, weighted residuals.
Requires R 3.5 (minimal). This software is referenced within the In-Sylva France Research Infrastructure: https://www6.inrae.fr/in-sylva-france_eng/Services/In-Silico/Analysis-software (French-language listing: https://www6.inrae.fr/in-sylva-france/Services/In-Silico/Logiciels-d-analyse) https://doi.org/10.15454/1A0P-HE21
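WIDEa itself is R-based; functionality 3 (variable manipulation) can nonetheless be sketched in plain Python to show the idea. Field names and values here are purely illustrative, not WIDEa's own API:

```python
import math

# Illustrative rows standing in for environmental/experimental data
rows = [
    {"site": "A", "depth": "top", "ir_peak": 1650.0},
    {"site": "B", "depth": "deep", "ir_peak": 1032.0},
]

for row in rows:
    # concatenate two qualitative variables into a single factor
    row["site_depth"] = f'{row["site"]}_{row["depth"]}'
    # transform a quantitative variable with a generic function (log here)
    row["log_ir_peak"] = math.log(row["ir_peak"])
```

In WIDEa the equivalent transformation would be expressed as a generic R function applied to the selected variable.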
Vision and Change in Undergraduate Biology Education encouraged faculty to focus on core concepts and competencies in the undergraduate curriculum. We created a sophomore-level course, Biologists' Toolkit, to focus on the competencies of quantitative reasoning and scientific communication. We introduce students to the statistical analysis of data using the open-source statistical language and environment R, together with RStudio, in the first two-thirds of the course. During this time the students learn to write basic computer commands to input data and conduct common statistical analyses. The students also learn to graphically represent their data using R. In a final project, we assign students unique data sets that require them to develop a hypothesis that can be explored with the data, analyze and graph the data, search literature related to their data set, and write a report that emulates a scientific paper. The final report includes publication-quality graphs and proper reporting of data and statistical results. At the end of the course students reported greater confidence in their ability to read and make graphs, analyze data, and develop hypotheses. Although programming in R has a steep learning curve, we found that students who learned programming in R developed a robust strategy for data analyses, and they retained and successfully applied those skills in other courses during their junior and senior years.
MIT Licensehttps://opensource.org/licenses/MIT
License information was derived automatically
Results of the paper 'Assessing the Overlap of Science Knowledge Graphs: A Quantitative Analysis'. There are 2 datasets:
The detailed information refers to the following column:
https://darus.uni-stuttgart.de/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.18419/DARUS-3387
Dataset containing supplemental material for the publication "2D, 2.5D, or 3D? An Exploratory Study on Multilayer Network Visualizations in Virtual Reality". This dataset contains:
1) an archive containing all raw quantitative results,
2) an archive containing all raw qualitative data,
3) an archive containing the graphs used for the experiment (.graphml file format),
4) the code to generate the graph library (C++ files using OGDF),
5) a PDF document containing detailed results (with p-values and more charts),
6) a video showing the experiment from a participant's point of view,
7) the complete graph library generated by our graph generator for the experiment.
https://www.shibatadb.com/license/data/proprietary/v1.0/license.txt
Network of 44 papers and 87 citation links related to "Qualitative analysis of an interrupted electric circuit with spike noise".
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Transparency in data visualization is an essential ingredient for scientific communication. The traditional approach of visualizing continuous quantitative data solely in the form of summary statistics (i.e., measures of central tendency and dispersion) has repeatedly been criticized for not revealing the underlying raw data distribution. Remarkably, however, systematic and easy-to-use solutions for raw data visualization using the most commonly reported statistical software package for data analysis, IBM SPSS Statistics, are missing. Here, a comprehensive collection of more than 100 SPSS syntax files and an SPSS dataset template is presented and made freely available that allow the creation of transparent graphs for one-sample designs, for one- and two-factorial between-subject designs, for selected one- and two-factorial within-subject designs as well as for selected two-factorial mixed designs and, with some creativity, even beyond (e.g., three-factorial mixed-designs). Depending on graph type (e.g., pure dot plot, box plot, and line plot), raw data can be displayed along with standard measures of central tendency (arithmetic mean and median) and dispersion (95% CI and SD). The free-to-use syntax can also be modified to match with individual needs. A variety of example applications of syntax are illustrated in a tutorial-like fashion along with fictitious datasets accompanying this contribution. The syntax collection is hoped to provide researchers, students, teachers, and others working with SPSS a valuable tool to move towards more transparency in data visualization.
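The summary statistics such graphs overlay on the raw data can be computed outside SPSS as well; a minimal Python sketch, using a normal-approximation 95% CI (z = 1.96) rather than SPSS's own computation, with fictitious values:

```python
import statistics as st

def summary_for_graph(values, z=1.96):
    """Mean, sample SD, and a normal-approximation 95% CI for the mean --
    the summaries displayed alongside the raw data points."""
    n = len(values)
    m = st.mean(values)
    sd = st.stdev(values)
    half = z * sd / n ** 0.5
    return m, sd, (m - half, m + half)

raw = [4.1, 5.0, 3.8, 4.6, 5.2, 4.4, 4.9, 4.0]
m, sd, ci = summary_for_graph(raw)
```

A transparent graph would then plot every value in `raw` together with `m`, `sd`, and `ci`, rather than the summaries alone.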
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
We propose a novel approach to predict saturation vapor pressures using group contribution-assisted graph convolutional neural networks (GC2NN), which use both molecular descriptors (e.g., molar mass and functional group counts) and molecular graphs containing atom and bond features as representations of molecular structure. Molecular graphs allow the ML model to better infer molecular connectivity and spatial relations than methods using other, non-structural embeddings. We achieve the best results with an adaptive-depth GC2NN, where the number of evaluated graph layers depends on molecular size. We apply the model to compounds relevant for the formation of secondary organic aerosol (SOA), achieving strong agreement between predicted and experimentally determined vapor pressures. In this study, we present two models: a general model with broader scope, achieving a mean absolute error (MAE) of 0.69 log-units (R2 = 0.86), and a specialized model focused on atmospheric compounds (MAE = 0.37 log-units, R2 = 0.94).
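The two reported metrics can be reproduced from any set of predictions; a minimal sketch with hypothetical log10 vapor pressures (not values from the study):

```python
def mae_r2(y_true, y_pred):
    """Mean absolute error and coefficient of determination (R^2)."""
    n = len(y_true)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    mean_t = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return mae, 1 - ss_res / ss_tot

# Hypothetical measured vs predicted log10 saturation vapor pressures
measured = [-5.2, -3.1, -7.8, -4.0, -6.5]
predicted = [-5.0, -3.5, -7.4, -4.1, -6.9]
mae, r2 = mae_r2(measured, predicted)
```

An MAE in log10 units means predictions are off by that factor of ten on average; 0.69 log-units corresponds to roughly a factor of five in vapor pressure.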
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Inhibitory effect of antisense TcCaNA2 oligonucleotides on T. cruzi cell invasion and proliferation. (XLSX)
Techsalerator has access to some of the highest-quality B2C data in the Netherlands.
Thanks to our unique tools and data specialists, we can select the ideal targeted dataset based on elements such as location/country, gender, age, and more.
Whether you are looking for a full install, access to one of our APIs, or a one-time targeted purchase, get in touch with our company and we will fulfill your international data needs.
https://www.shibatadb.com/license/data/proprietary/v1.0/license.txt
Network of 44 papers and 88 citation links related to "Qualitative temporal analysis: Towards a full implementation of the Fault Tree Handbook".
https://dataverse.harvard.edu/api/datasets/:persistentId/versions/2.null/customlicense?persistentId=doi:10.5064/F6JOQXNF
This is an Annotation for Transparent Inquiry (ATI) data project. The annotated article can be viewed on the Publisher's Website.

Data Generation
The research project engages a story about perceptions of fairness in criminal justice decisions. The specific focus involves a debate between ProPublica, a news organization, and Northpointe, the owner of a popular risk tool called COMPAS. ProPublica wrote that COMPAS was racist against blacks, while Northpointe posted online a reply rejecting such a finding. These two documents were the obvious foci of the qualitative analysis because of the further media attention they attracted, the confusion their competing conclusions caused readers, and the power both companies wield in public circles. There were no barriers to retrieval, as both documents have been publicly available on their corporate websites. This public access was one of the motivators for choosing them, as it meant that they were also easily attainable by the general public, thus extending the documents’ reach and impact. Additional materials from ProPublica relating to the main debate were also freely downloadable from its website and a third-party, open source platform. Access to secondary source materials comprising additional writings from Northpointe representatives that could assist in understanding Northpointe’s main document, though, was more limited. Because of a claim of trade secrets on its tool and the underlying algorithm, it was more difficult to reach Northpointe’s other reports. Nonetheless, largely because its clients are governmental bodies with transparency and accountability obligations, some Northpointe-associated reports were retrievable from third parties who had obtained them, largely through Freedom of Information Act queries. Together, the primary and (retrievable) secondary sources allowed for a triangulation of themes, arguments, and conclusions.
The quantitative component uses a dataset of over 7,000 individuals with information that was collected and compiled by ProPublica and made available to the public on GitHub. Because ProPublica gathered the data directly from criminal justice officials via Freedom of Information Act requests, the dataset is in the public domain, and thus no confidentiality issues are present. The dataset was loaded into SPSS v. 25 for data analysis.

Data Analysis
The qualitative enquiry used critical discourse analysis, which investigates ways in which parties in their communications attempt to create, legitimate, rationalize, and control mutual understandings of important issues. Each of the two main discourse documents was parsed on its own merit. Yet the project was also intertextual in studying how the discourses correspond with each other and with other relevant writings by the same authors. Several more specific types of discursive strategies attracted further critical examination:
- Testing claims and rationalizations that appear to serve the speaker’s self-interest
- Examining conclusions and determining whether sufficient evidence supported them
- Revealing contradictions and/or inconsistencies within the same text and intertextually
- Assessing strategies underlying justifications and rationalizations used to promote a party’s assertions and arguments
- Noticing strategic deployment of lexical phrasings, syntax, and rhetoric
- Judging sincerity of voice and the objective consideration of alternative perspectives
Of equal importance in a critical discourse analysis is consideration of what is not addressed, that is, uncovering facts and/or topics missing from the communication. For this project, this included parsing issues that were either briefly mentioned and then neglected, asserted with their significance left unstated, or not suggested at all. This task required understanding common practices in the algorithmic data science literature.
The paper could have been completed with just the critical discourse analysis. However, because one of its salient findings highlighted that the discourses overlooked numerous definitions of algorithmic fairness, the call to fill this gap seemed obvious. The availability of the same dataset used by the parties in conflict made this opportunity more appealing: calculating additional algorithmic equity equations would not be troubled by irregularities arising from diverse sample sets. New variables were created as relevant to calculate algorithmic fairness equations. In addition to various SPSS Analyze functions (e.g., regression, crosstabs, means), online statistical calculators were useful to compute z-test comparisons of proportions and t-test comparisons of means.

Logic of Annotation
Annotations were employed to fulfil a variety of functions, including supplementing the main text with context, observations, counter-points, analysis, and source attributions. These fall under a few categories.

Space considerations. Critical discourse analysis offers a rich method for studying speech and text. The discourse analyst wishes not simply to describe, but to critically assess, explain, and offer insights about the underlying discourses. In practice, this often means the researcher generates far more material than can comfortably be included in the final paper. As a result, many draft passages, evaluations, and issues typically need to be excised. Annotation offered opportunities to incorporate dozens of findings, explanations, and supporting materials that otherwise would have been redacted. Readers wishing to learn more than is within the four corners of the official, published article can review these supplementary offerings through the links.

Visuals. The annotations use multiple data sources to provide visuals to explain, illuminate, or otherwise contextualize particular points in the main body of the paper and/or in the analytic notes.
For example, a conclusion that the tool was not calibrated the same for blacks and whites could be better understood with reference to a graph comparing the range of risk scores for these two groups. Overall, the visuals deployed here include graphs, screenshots, page extracts, diagrams, and statistical software output.

Context. The data for the qualitative segment involved long discourses. Thus, annotations were employed to embed longer portions of quotations from the source material than was justified in the main text. This allows the reader to confirm whether quotations were taken in proper context, and thus hold the author accountable for potential errors in this regard.

Sources. Annotations incorporated extra source materials, along with quotations from them, to aid the discussion. Sources carrying some indication that they might not remain permanently available in the same form and format were more likely to be archived and activated. This practice helps ensure that readers continue to have access to the third-party materials relied upon in the research, for transparency and authentication purposes.
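The z-test comparisons of proportions computed with online statistical calculators can be reproduced in a few lines; a sketch with hypothetical group counts (not figures from the COMPAS dataset):

```python
from statistics import NormalDist

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z-test; returns (z, two-sided p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = (pooled * (1 - pooled) * (1 / n1 + 1 / n2)) ** 0.5
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))         # two-sided
    return z, p_value

# Hypothetical high-risk classification counts for two groups
z, p_value = two_proportion_z(300, 1000, 240, 1000)
```

A small p-value here would indicate that the two groups' classification rates differ beyond chance, the kind of comparison underlying several algorithmic fairness definitions.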
With close to 1B records worldwide, Techsalerator has access to some of the highest-quality B2C count data.
Thanks to our unique tools and data specialists, we can select the ideal targeted dataset based on elements such as location/country, gender, age, and more.
Whether you are looking for a full install, access to one of our APIs, or a one-time targeted purchase, get in touch with our company and we will fulfill your international data needs.
With close to 38M records in France, Techsalerator has access to some of the highest-quality B2C data in France.
Thanks to our unique tools and data specialists, we can select the ideal targeted dataset based on elements such as location/country, gender, age, and more.
Whether you are looking for a full install, access to one of our APIs, or a one-time targeted purchase, get in touch with our company and we will fulfill your international data needs.
With close to 30M records in Spain, Techsalerator has access to some of the highest-quality B2C data in Spain.
Thanks to our unique tools and data specialists, we can select the ideal targeted dataset based on elements such as location/country, gender, age, and more.
Whether you are looking for a full install, access to one of our APIs, or a one-time targeted purchase, get in touch with our company and we will fulfill your international data needs.
The ability to observe and interpret images and clinical information is essential for veterinarians in clinical practice. The purpose of this study is to determine the utility of a novel teaching method in veterinary medicine, the incorporation of art interpretation using Visual Thinking Strategies (VTS), on students’ observational and clinical interpretation skills when evaluating radiographs and patient charts. Students were asked to observe and interpret a set of radiographs and a patient chart, were subsequently involved in art interpretation using VTS, and were then asked to observe and interpret a different set of radiographs and a different patient chart. Qualitative and quantitative analysis was performed, including scoring of observations and interpretations by a radiologist and an emergency and critical care resident. For radiographs, observation and interpretation scores increased significantly after VTS. There was no change in patient chart observation or interpretation scores after VTS. Broadly, VTS provided creative thinking and visual literacy exercises that students felt pushed them to think more openly, notice subtleties, use evidential reasoning, identify thinking processes, and integrate details into a narrative. However, its impact on clinical reasoning, as assessed by chart observation and interpretation scores, was uncertain. Further studies are needed to determine the optimal way to incorporate art interpretation in the veterinary medical curriculum.