9 datasets found
  1. Dataset of development of business during the COVID-19 crisis

    • data.mendeley.com
    • narcis.nl
    Updated Nov 9, 2020
    Cite
    Tatiana N. Litvinova (2020). Dataset of development of business during the COVID-19 crisis [Dataset]. http://doi.org/10.17632/9vvrd34f8t.1
    Dataset updated
    Nov 9, 2020
    Authors
    Tatiana N. Litvinova
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    To create the dataset, the top 10 countries by COVID-19 incidence in the world as of October 22, 2020 (on the eve of the second wave of the pandemic) that are represented in the Global 500 ranking for 2020 were selected: USA, India, Brazil, Russia, Spain, France, and Mexico. For each of these countries, up to 10 of the largest transnational corporations included in the Global 500 ranking for 2020 and 2019 were selected separately. Arithmetic averages and the change (increase) were calculated for indicators such as profit and profitability of enterprises, their ranking position (competitiveness), asset value, and number of employees. The arithmetic means of these indicators across all countries of the sample were then found, characterizing the situation in international entrepreneurship as a whole in the context of the COVID-19 crisis in 2020, on the eve of the second wave of the pandemic. The data are collected in a single Microsoft Excel table.

    The dataset is a unique database that combines COVID-19 statistics and entrepreneurship statistics, and it is flexible: it can be supplemented with data from other countries and newer statistics on the COVID-19 pandemic. Because the cells of the dataset hold formulas rather than ready-made numbers, adding or changing values in the original table at the beginning of the dataset automatically recalculates most of the subsequent tables and updates the graphs. This allows the dataset to be used not just as an array of data but as an analytical tool for automating scientific research on the impact of the COVID-19 pandemic and crisis on international entrepreneurship. The dataset includes not only tabular data but also charts that provide data visualization.

    The dataset contains both actual and forecast data on morbidity and mortality from COVID-19 for the period of the second wave of the pandemic in 2020. The forecasts are presented as a normal distribution of predicted values together with the probability of their occurrence in practice. This allows a broad scenario analysis of the impact of the COVID-19 pandemic and crisis on international entrepreneurship: various predicted morbidity and mortality rates can be substituted into the risk assessment tables to obtain automatically calculated consequences (changes) for the characteristics of international entrepreneurship. It is also possible to substitute the actual values identified during and after the second wave of the pandemic, to check the reliability of the earlier forecasts and conduct a plan-fact analysis. The dataset contains not only the numerical initial and predicted values of the studied indicators but also their qualitative interpretation, reflecting the presence and level of risks of the pandemic and COVID-19 crisis for international entrepreneurship.
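
    The sketch below (Python; every number is a hypothetical placeholder, not taken from the dataset) illustrates the kind of scenario analysis the spreadsheet automates: a normal distribution models forecast incidence, and each scenario maps to an automatically computed change in a business indicator via an assumed sensitivity.

        import numpy as np
        from scipy import stats

        # Hypothetical forecast of second-wave case counts: mean and spread of the
        # normal distribution of predicted values (placeholders, not dataset values).
        mu, sigma = 9_000_000, 1_200_000
        scenarios = np.array([7e6, 9e6, 11e6])           # low / central / high scenarios
        likelihood = stats.norm.pdf(scenarios, mu, sigma)

        # Hypothetical sensitivity: change in average profit per extra million cases.
        elasticity = -0.05
        profit_change = elasticity * (scenarios - mu) / 1e6

        for s, l, p in zip(scenarios, likelihood, profit_change):
            print(f"cases={s:,.0f}  likelihood={l:.2e}  profit change={p:+.1%}")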

  2. Data for: Review on Current Research Directions in Energy Harvesting Power Conversion (EHPC) System

    • data.mendeley.com
    Updated Jun 17, 2019
    + more versions
    Cite
    roskhatijah radzuan (2019). Data for: Review on Current Research Directions in Energy Harvesting Power Conversion (EHPC) System [Dataset]. http://doi.org/10.17632/x4nfg7p7p4.2
    Dataset updated
    Jun 17, 2019
    Authors
    roskhatijah radzuan
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Original Excel files of the tabular data used to generate the visual presentations (graphs and charts) of current research trends over the six years from 2013 to 2018.

  3. A study on real graphs of fake news spreading on Twitter

    • zenodo.org
    bin
    Updated Aug 20, 2021
    + more versions
    Cite
    Amirhosein Bodaghi (2021). A study on real graphs of fake news spreading on Twitter [Dataset]. http://doi.org/10.5281/zenodo.5225338
    Available download formats: bin
    Dataset updated
    Aug 20, 2021
    Dataset provided by
    Zenodo, http://zenodo.org/
    Authors
    Amirhosein Bodaghi
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    *** Fake News on Twitter ***

    These 5 datasets are the results of an empirical study on the spreading process of newly emerged fake news on Twitter. In particular, we focused on fake news stories that gave rise to a truth spreading simultaneously against them. The story of each fake news item is as follows:

    1- FN1: A Muslim waitress refused to seat a church group at a restaurant, claiming "religious freedom" allowed her to do so.

    2- FN2: Actor Denzel Washington said electing President Trump saved the U.S. from becoming an "Orwellian police state."

    3- FN3: Joy Behar of "The View" sent a crass tweet about a fatal fire in Trump Tower.

    4- FN4: The animated children's program 'VeggieTales' introduced a cannabis character in August 2018.

    5- FN5: In September 2018, the University of Alabama football program ended its uniform contract with Nike, in response to Nike's endorsement deal with Colin Kaepernick.

    The data collection was done in two stages, each of which provided a new dataset: 1- obtaining the Dataset of Diffusion (DD), which includes information on fake-news/truth tweets and retweets; 2- querying the neighbors of the tweet spreaders, which provides the Dataset of Graph (DG).

    DD

    DD for each fake news story is an Excel file, named FNx_DD, where x is the number of the fake news story, and has the following structure:

    • Each row corresponds to one captured tweet/retweet related to the rumor, and each column presents a specific piece of information about it. From left to right, the columns hold the following information (a minimal loading sketch follows this list):
    • User ID (user who has posted the current tweet/retweet)
    • The number of tweets/retweets the user had published by the time of posting the current tweet/retweet
    • Language of the tweet/retweet
    • Number of followers
    • Number of followings (friends)
    • Date and time of posting the current tweet/retweet
    • Number of likes (favorites) the current tweet had acquired before it was crawled
    • Number of times the current tweet had been retweeted before it was crawled
    • Whether another tweet is embedded in the current tweet/retweet (this happens, for example, when the current tweet is a quote, reply, or retweet)
    • The source (device/OS) from which the current tweet/retweet was posted
    • Tweet/Retweet ID
    • Retweet ID (if the post is a retweet then this feature gives the ID of the tweet that is retweeted by the current post)
    • Quote ID (if the post is a quote then this feature gives the ID of the tweet that is quoted by the current post)
    • Reply ID (if the post is a reply then this feature gives the ID of the tweet that is replied by the current post)
    • Frequency of occurrence, i.e., the number of times the current tweet is repeated in the dataset (for example, as retweets posted by others)
    • State of the tweet which can be one of the following forms (achieved by an agreement between the annotators):
    • r : The tweet/retweet is a fake news post
    • a : The tweet/retweet is a truth post
    • q : The tweet/retweet questions the fake news, neither confirming nor denying it
    • n : The tweet/retweet is not related to the fake news (it contains queries related to the rumor but does not refer to the given fake news)
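
    A minimal loading sketch (Python; the short column labels below are illustrative assumptions, since the files encode the columns by position rather than by header):

        import pandas as pd

        # Positional column names matching the order described above (assumed labels).
        columns = [
            "user_id", "user_statuses_count", "lang", "followers", "followings",
            "created_at", "likes", "retweets", "embedded_tweet", "source",
            "tweet_id", "retweet_id", "quote_id", "reply_id", "frequency", "state",
        ]

        # Assumes the sheet has no header row; adjust if the actual file has one.
        dd = pd.read_excel("FN1_DD.xlsx", header=None, names=columns)

        # Distribution of annotated states: r (fake), a (truth), q (question), n (unrelated)
        print(dd["state"].value_counts())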

    DG

    DG for each fake news contains two files:

    • A file in graph format (.graph), which includes the graph information, i.e., who is linked to whom (this file is named FNx_DG.graph, where x is the number of the fake news story)
    • A file in JSONL format (.jsonl), which includes the real user IDs of the nodes in the graph file (this file is named FNx_Labels.jsonl)

    In the graph file, each node is labeled by the order of its entrance into the graph. For example, if the node with user ID 12345637 is the first node entered into the graph file, its label in the graph is 0, and its real ID (12345637) is at row number 1 of the jsonl file (row number 0 holds the column labels); the other node IDs follow in subsequent rows, one user ID per row. Therefore, to find, for example, the user ID of node 200 (labeled 200 in the graph), we should look at row number 202 of the jsonl file.
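
    A minimal lookup sketch (Python, assuming one JSON value per line with the first line holding the column labels; note that the worked example above points at row 202 for node 200, so the exact offset should be verified against the actual files):

        import json

        def load_user_ids(path):
            """Return user IDs indexed by graph label, per the layout described above."""
            with open(path) as f:
                rows = [json.loads(line) for line in f]
            return rows[1:]  # skip row 0, which holds the column labels

        user_ids = load_user_ids("FN1_Labels.jsonl")
        label = 200
        print(f"node {label} -> user ID {user_ids[label]}")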

    The user IDs of spreaders in DG (those who have a post in DD) are available in DD, where extra information about them and their tweets/retweets can be found. The other user IDs in DG are the neighbors of these spreaders and might not exist in DD.

  4. Field Variable Permeability Tests (Slug Tests) in Boreholes Made by Driven...

    • borealisdata.ca
    • search.dataone.org
    Updated Oct 29, 2024
    Cite
    Robert P. Chapuis (2024). Field Variable Permeability Tests (Slug Tests) in Boreholes Made by Driven Flush-Joint Casings, or Driven Flush-Joint Casing Permeameters, or Between Packers in Cored Rock Boreholes, or in Monitoring Wells ― Overdamped Response / Essais de perméabilité à niveau variable (Slug Tests) dans des forages faits avec un tubage battu à joints lisses, ou un perméamètre battu à joints lisses, ou entre des obturateurs dans un trou foré dans le roc, ou dans un puits de surveillance ― Cas de la réponse suramortie [Dataset]. http://doi.org/10.5683/SP2/YUAUGX
    Available download formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Oct 29, 2024
    Dataset provided by
    Borealis
    Authors
    Robert P. Chapuis
    License

    CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Civil and geological engineers have used field variable-head permeability tests (VH tests or slug tests) for over a century to assess the local hydraulic conductivity of tested soils and rocks. The water level in the pipe or riser casing reaches, after some rest time, a static position or elevation, z2. Then, the water level position is changed rapidly, by adding or removing some water volume, or by inserting or removing a solid slug. Afterward, the water level position or elevation z1(t) is recorded vs. time t, yielding a difference in hydraulic head or water column defined as Z(t) = z1(t) - z2. The water level at rest is assumed to be the piezometric level or PL for the tested zone, before drilling a hole and installing test equipment. All equations use Z(t) or Z*(t) = Z(t) / Z(t=0). The water-level response vs. time may be a slow return to equilibrium (overdamped test) or an oscillation back to equilibrium (underdamped test). This document deals exclusively with overdamped tests.

    Their data may be analyzed using several methods, known to yield different results for the hydraulic conductivity. The methods fit in three groups: group 1 neglects the influence of the solid matrix strain, group 2 is for tests in aquitards with delayed strain caused by consolidation, and group 3 takes into account some elastic and instant solid matrix strain. This document briefly explains what is wrong with certain theories and why. It shows three ways to plot the data, which are the three diagnostic graphs. According to experience with thousands of tests, most test data are biased by an incorrect estimate z2 of the piezometric level at rest. The derivative or velocity plot does not depend upon this assumed piezometric level, but can verify its correctness. The document presents experimental results and explains the three-diagnostic-graphs approach, which unifies the theories and, most importantly, yields a user-independent result.

    Two free spreadsheet files are provided. The spreadsheet "Lefranc-Test-English-Model" follows the Canadian standards and is used to explain how to treat the test data correctly to reach a user-independent result. The user does not modify this model spreadsheet but can make as many copies as needed, with different names; the user can treat any other data set in a copy, and can also modify any copy if needed. The second Excel spreadsheet contains several sets of data that can be used to practice with the copies of the model spreadsheet.
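
    A minimal sketch (Python with synthetic data, not from the provided spreadsheets) of the derivative or velocity diagnostic graph: an error in the assumed z2 only shifts Z(t) by a constant, so dZ/dt is unaffected, and for a simple overdamped response the plot of -dZ/dt vs. Z is a straight line through the origin when z2 is correct.

        import numpy as np
        import matplotlib.pyplot as plt

        t = np.linspace(0, 600, 61)            # time (s), synthetic test record
        z1 = 2.0 + 1.5 * np.exp(-t / 180.0)    # water-level elevation (m), synthetic
        z2 = 2.0                               # assumed piezometric level at rest (m)

        Z = z1 - z2                            # hydraulic-head difference Z(t)
        dZdt = np.gradient(Z, t)               # velocity dZ/dt

        # A wrong z2 shifts the points horizontally, so the fitted line misses the
        # origin; this is how the velocity plot checks the assumed piezometric level.
        plt.plot(Z, -dZdt, "o")
        plt.xlabel("Z (m)")
        plt.ylabel("-dZ/dt (m/s)")
        plt.title("Velocity diagnostic graph (synthetic overdamped test)")
        plt.show()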

  5. Replication data for: Job-to-Job Mobility and Inflation

    • search.dataone.org
    • dataverse.harvard.edu
    Updated Nov 8, 2023
    Cite
    Faccini, Renato; Melosi, Leonardo (2023). Replication data for: Job-to-Job Mobility and Inflation [Dataset]. http://doi.org/10.7910/DVN/SMQFGS
    Dataset updated
    Nov 8, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    Faccini, Renato; Melosi, Leonardo
    Description

    Replication files for "Job-to-Job Mobility and Inflation". Authors: Renato Faccini and Leonardo Melosi. Review of Economics and Statistics. Date: February 2, 2023.

    ORDER OF TOPICS
    • Section 1 explains the code used to replicate all the figures in the paper (except Figure 6).
    • Section 2 explains how Figure 6 is constructed.
    • Section 3 explains how the data are constructed.

    SECTION 1

    Replication_Main.m reproduces all the figures of the paper except Figure 6. All the primitive variables are defined in the code, and all the steps are commented to facilitate the replication of our results. Replication_Main.m should be run in MATLAB. The authors tested it on a DELL XPS 15 7590 laptop with the following characteristics: Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz processor; 64.0 GB installed RAM; 64-bit operating system, x64-based processor. It took this machine 2 minutes and 57 seconds to construct Figures 1, 2, 3, 4a, 4b, 5, 7a, and 7b.

    The following versions of MATLAB and MATLAB toolboxes were used for the test: MATLAB Version 9.7.0.1190202 (R2019b); MATLAB License Number 363305; Operating System: Microsoft Windows 10 Enterprise Version 10.0 (Build 19045); Java Version 1.8.0_202-b08 with Oracle Corporation Java HotSpot(TM) 64-Bit Server VM mixed mode; Financial Toolbox Version 5.14 (R2019b); Optimization Toolbox Version 8.4 (R2019b); Statistics and Machine Learning Toolbox Version 11.6 (R2019b); Symbolic Math Toolbox Version 8.4 (R2019b).

    The replication code uses auxiliary files and saves the figures in various subfolders:

    • \JL_models: contains the equations describing the model, including the observation equations, and the routine used to solve the model; to do so, it calls other routines located in some of the subfolders below.
    • \gensystoama: contains a set of codes that allow us to solve linear rational expectations models. We use the AMA solver; more information is provided in the file AMASOLVE.m. The codes in this subfolder were developed by Alejandro Justiniano.
    • \filters: contains the Kalman filter, augmented with a routine that makes sure the zero-lower-bound constraint on the nominal interest rate is satisfied in every period of our sample.
    • \SteadyStateSolver: contains a set of routines used to solve the steady state of the model numerically.
    • \NLEquations: contains some of the equations of the model that are log-linearized using the MATLAB symbolic toolbox.
    • \NberDates: contains routines that add shaded areas to graphs to denote NBER recessions.
    • \Graphics: contains useful codes for constructing some of the graphs in the paper.
    • \Data: contains the data set used in the paper.
    • \Params: contains a spreadsheet with the values attributed to the model parameters.
    • \VAR_Estimation: contains the forecasts implied by the Bayesian VAR model of Section 2.

    The output of Replication_Main.m is the set of figures of the paper, stored in the subfolder \Figures.

    SECTION 2

    The Excel file "Figure-6.xlsx" is used to create the charts in Figure 6. All three panels (A, B, and C) plot a measure of unexpected wage inflation against the unemployment rate, then fit separate linear regressions for the periods 1960-1985, 1986-2007, and 2008-2009. Unexpected wage inflation is the difference between wage growth and a measure of expected wage growth. In all three panels, the unemployment rate used is the civilian unemployment rate (UNRATE), seasonally adjusted, from the BLS. The sheet "Panel A" uses quarterly manufacturing-sector average hourly earnings growth, seasonally adjusted (CES3000000008), from the Bureau of Labor Statistics (BLS) Employment Situation report as the measure of wage inflation; unexpected wage inflation is the difference between earnings growth at time t and the average of earnings growth across the previous four months. Growth rates are annualized quarterly values. The sheet "Panel B" uses quarterly Nonfarm Business Sector Compensation Per Hour, seasonally adjusted (COMPNFB), from the BLS Productivity and Costs report as its measure of wage inflation. As in Panel A, expected wage inflation is given by the... Visit https://dataone.org/datasets/sha256%3A44c88fe82380bfff217866cac93f85483766eb9364f66cfa03f1ebdaa0408335 for complete metadata about this dataset.
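
    A minimal sketch (Python; the input file and its column names are hypothetical, and this is not the authors' replication code) of the Panel A construction described above: unexpected wage inflation at time t is wage growth minus the average of the previous four observations, followed by a separate linear fit against the unemployment rate for each sample period.

        import pandas as pd
        import numpy as np

        # Hypothetical CSV with columns: date, wage_growth (annualized), unrate.
        df = pd.read_csv("panel_a.csv", parse_dates=["date"])

        # Expected wage inflation: average of the four preceding observations.
        df["expected"] = df["wage_growth"].shift(1).rolling(4).mean()
        df["unexpected"] = df["wage_growth"] - df["expected"]

        for start, end in [("1960", "1985"), ("1986", "2007"), ("2008", "2009")]:
            sub = df[(df["date"] >= start) & (df["date"] <= end)].dropna()
            slope, intercept = np.polyfit(sub["unrate"], sub["unexpected"], 1)
            print(f"{start}-{end}: slope={slope:.3f}, intercept={intercept:.3f}")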

  6. Data from: Can calmodulin bind to lipids of the cytosolic leaflet of plasma membranes?

    • data.niaid.nih.gov
    • zenodo.org
    Updated Mar 23, 2024
    Cite
    Jurkiewicz, Piotr (2024). Can calmodulin bind to lipids of the cytosolic leaflet of plasma membranes? [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_10843994
    Dataset updated
    Mar 23, 2024
    Dataset provided by
    Scollo, Federica
    Evci, Hüseyin
    Jurkiewicz, Piotr
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0), https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    Can calmodulin bind to lipids of the cytosolic leaflet of plasma membranes?:

    This data set contains all the experimental raw data, analyses, and source files for the final figures reported in the manuscript "Can calmodulin bind to lipids of the cytosolic leaflet of plasma membranes?". It is divided into five (1-5) zipped folders, named after the technique used to obtain the data. Each of them, where applicable, consists of three subfolders (raw data, analysed data, final graph). Read below for more details.

    1) ConfocalMicroscopy

    1a) Raw_Data: the raw images are reported as .dat and .tif formats, divided into folders (according to date first yymmdd, and within the same day according to composition). Each folder contains a .txt file reporting the experimental details

    1b) GUVs_Statistics - GUVs_Statistics.txt explains how we generated the bar plot shown in Fig. 1E

    1c) Final_Graph:
      - Figure_1B_1D.png is the image for figures 1B and 1D
      - Figure1E_%ofGUVswithCaMAdsorbptions.csv is the x-y source file of the bar plot shown in figure 1E (% of GUVs that showed adsorption of CaM over the total number of measured GUVs)
      - Where_To_Find_Representative_Images.txt states the folders where the raw images chosen for figure 1 can be found

    2) FCS

    2a) Raw_Data:
      - 1_points: .ptu files
      - 2_points: .ht3 files
      - Raw_Data_Description.docx: which compositions and conditions correspond to which point in the two data sets

    2b) Final_Graphs:
      - Figure_2A.xlsx contains the x-y source file for figure 2A

    2c) Analysis:
      - FCS_Fits.xlsx: outcome of the global fitting procedure described in the .docx below (each group of points represents a certain composition and calcium concentration; see Raw_Data_Description.docx in FCS > Raw_Data)
      - Notes_for_FCS_Analysis.docx: a brief description of the analysis of the autocorrelation curves

    3) GPLaurdan

    3a) Raw_Data: all the spectra are stored in folders named by date (yymmdd_lipidcomposition_Laurdan) and are in both .FS and .txt formats

    3b) GP calculations: contains all the .xlsx files calculating the GP values from the raw emission and excitation spectra

    3c) Final_Graphs:
      - Data_Processing_For_Fig_2D.csv contains the data processing from the GP values calculated from the spectra to the DeltaGP (GP with CaM minus GP without CaM) reported in fig. 2D
      - Figure_2C_2D.xlsx contains the x-y source file for figures 2C and 2D

    4) LiveCellsImaging

    4a) Intensity_Protrusions_vs_Cell_Body:
      - contains all the .xlsx files calculating the intensity in the various images, named by date (yymmdd)
      - all data from all Excel sheets are gathered in another Excel file to create a final graph

    4b) Final_Graphs:
      - Figure_S2B.xlsx contains the x-y source file for figure S2B

    5) LiveCellImaging_Raw_Data: contains some of the images, given in .tif format. They are divided by date (yymmdd), and each date folder contains subfolders named by sample name and ionomycin concentration. Within the subfolders, the images are divided into folders distinguishing the data acquired before and after the ionomycin treatment and the incubation time.

    6) 211124_BioCev_Imaging_1: contains the .jpg files of the time lapses shown in fig. 1A and S2.

    7) 211124_BioCev_Imaging_2 and 8) 211124_BioCev_Imaging_3: contain the images of HeLa cells expressing EGFP-CaM after treatment with 200 nM (A1) and 1 uM (A2) ionomycin, respectively.

    9) SPR

    9a) Raw_Data:
      - SPR_Raw_Data.xlsx: exported x-y sensorgrams
      - the .jpg files from the software are also included, named by lipid composition

    9b) Final_Graph:
      - Fig.2B.xlsx contains the x-y source file for figure 2B

    9c) Analysis:
      - SPR_Analysis.xlsx: an Excel file showing step by step (sheet by sheet) how we processed the raw data to obtain the final figure (details explained in the .docx below)
      - Analysis of SPR data_notes.docx: a read-me with a detailed explanation

  7. Dow Jones Industrial Average

    • fred.stlouisfed.org
    json
    Updated Mar 26, 2025
    + more versions
    Cite
    (2025). Dow Jones Industrial Average [Dataset]. https://fred.stlouisfed.org/series/DJIA
    Available download formats: json
    Dataset updated
    Mar 26, 2025
    License

    https://fred.stlouisfed.org/legal/#copyright-pre-approval

    Description

    Graph and download economic data for Dow Jones Industrial Average (DJIA) from 2015-03-27 to 2025-03-26 about stock market, average, industry, and USA.
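
    A minimal sketch (Python, assuming the pandas-datareader package is installed; use of the data is subject to the FRED terms linked above) for pulling the same series programmatically instead of downloading it from the site:

        import pandas_datareader.data as web

        # Daily DJIA values over the range shown above; non-trading days appear as NaN.
        djia = web.DataReader("DJIA", "fred", start="2015-03-27", end="2025-03-26")
        print(djia.tail())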

  8. ICSE 2025 - Artifact

    • figshare.com
    pdf
    Updated Jan 24, 2025
    + more versions
    Cite
    FARIDAH AKINOTCHO (2025). ICSE 2025 - Artifact [Dataset]. http://doi.org/10.6084/m9.figshare.28194605.v1
    Available download formats: pdf
    Dataset updated
    Jan 24, 2025
    Dataset provided by
    figshare
    Authors
    FARIDAH AKINOTCHO
    License

    MIT License, https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    Mobile Application Coverage: The 30% Curse and Ways Forward

    ## Purpose

    In this artifact, we provide information about our benchmarks used for manual and tool exploration. We include coverage results achieved by tools and human analysts, as well as plots of the coverage progression over time for the analysts. We further provide manual analysis results for our case study, more specifically extracted reasons for unreachability for the case-study apps and extracted code-level properties, which constitute a ground truth for future work in coverage explainability. Finally, we identify a list of beyond-GUI exploration tools and categorize them for future work to take inspiration from. We are claiming the available and reusable badges; the artifact is fully aligned with the results described in our paper and comprehensively documented.

    ## Provenance

    The paper preprint is available here: https://people.ece.ubc.ca/mjulia/publications/Mobile_Application_Coverage_ICSE2025.pdf

    ## Data

    The artifact submission is organized into five parts:

    - 'BenchInfo' excel sheet describing our experiment dataset
    - 'Coverage' folder containing coverage results for tools and analysts (RQ1)
    - 'Reasons' excel sheet describing our manually extracted reasons for unreachability (RQ2)
    - 'ActivationProperties' excel sheet describing our manually extracted code properties of unreached activities (RQ3)
    - 'ActivationProperties-Graph' pdf, which presents combinations of the extracted code properties in a graph format
    - 'BeyondGUI' folder containing information about identified techniques that go beyond GUI exploration

    The artifact requires about 15MB of storage.

    ### Dataset: 'BenchInfo.xlsx'

    This file lists the full application dataset used for experiments in three tabs: 'BenchNotGP' (apps from the AndroTest dataset which are not on Google Play), 'BenchGP' (apps from AndroTest which are also on Google Play) and 'TopGP' (top-ranked free apps from Google Play). Each tab contains the following information:

    - Application Name
    - Package Name
    - Version Used (Latest)
    - Original Version
    - # Activities
    - Minimum SDK
    - Target SDK
    - # Permissions (in Manifest)
    - List of Permissions (in Manifest)
    - # Features (in Manifest)
    - List of Features (in Manifest)

    The 'TopGP' sheet also includes Google-Play-specific information, namely:

    - Category (one of 32 app categories)
    - Downloads
    - Popularity Rank

    The 'BenchGP' and 'BenchNotGP' sheets also include the original version (included in the AndroTest benchmark) and the source (one of F-Droid, GitHub or Google Code Archives).

    ### RQ1: 'Coverage'

    The 'Coverage' folder includes coverage results for tools and analysts, and is structured as follows:

    - 'CoverageResults.xlsx': an excel sheet containing the coverage results achieved by each human analyst and tool.
      - The first tab describes the results over all apps for analysts combined, tools combined, and analysts + tools, which map to Table II in the paper.
      - Each of the following 42 tabs, one per app in TopGP, marks the activities reached by Analyst 1, Analyst 2, Tool 1 (ape) and Tool 2 (fastbot), with an 'x' in the corresponding column indicating that the activity was reached by the given agent.
    - 'Plots': a folder containing plots of the progressive coverage over time of the analysts, split into one folder for 'Analyst1' and one for 'Analyst2'.
      - Each analyst's folder includes a subfolder per benchmark ('BenchNotGP', 'BenchGP' and 'TopGP'), containing as many png files as applications in the benchmark (respectively 47, 14 and 42 image files), named 'ANALYST_[X]_[APP_PACKAGE_NAME].png'.

    ### RQ2: 'Reasons.xlsx'

    This file contains the extracted reasons for unreachability for the 11 apps manually analyzed.

    - The 'Summary' tab provides an overview of unreached activities per reason, over all apps and per app, which corresponds to Table III in the paper.
    - The following 11 tabs, each corresponding to and named after a single application, describe the reasons associated with each activity of that application. Each column corresponds to a single reason, and an 'x' indicates that the activity is unreached due to the reason in that column. The top row sums up the total number of activities unreached due to a given reason in each column.
    - The activities at the bottom which are greyed out correspond to activities that were reached during exploration and are thus excluded from the reason extraction.

    ### RQ3: 'ActivationProperties.xlsx'

    This file contains the full list of activation properties extracted for each of the 185 activities analyzed for RQ2. The first half of the columns (C-M) corresponds to the reasons (excluding Transitive, Inconclusive and No Caller) and the second half (columns N-AD) corresponds to the properties described in Figure 5 in the paper, namely:

    - Exported
    - Activation Location:
      - Code: GUI/lifecycle, Other Android or App-specific
      - Manifest
    - Activation Guards:
      - Enforcement: In Code or In Resources
      - Restriction: Mandatory or Discretionary
    - Data:
      - Type: Parameters, Execution Dependencies
      - Format: Primitive, Strings, Objects

    The rows are grouped by application, and each row corresponds to an activity of that application. An 'x' in a given column indicates the presence of the property in that column within the analyzed path to the activity. The third and fourth rows sum up the numbers and percentages for each property, as reported in Figure 5.

    ### RQ3: 'ActivationProperties-Graph.pdf'

    This file shows combinations of the individual properties listed in 'ActivationProperties.xlsx' in a graph format, extending the combinations described in Table IV with data (types and format) and reasons for unreachability.

    ### BeyondGUI

    This folder includes:

    - 'ToolInfo.xlsx': an excel sheet listing the 22 identified beyond-GUI papers, their date of publication, availability, invasiveness (source code, bytecode, framework, OS) and their targeting strategy (None, Manual or Automated).
    - 'ToolClassification.pdf': a pdf file describing our paper-selection methodology as well as a classification of the techniques in terms of Invocation Strategy, Navigation Strategy, Value Generation Strategy, and Value Generation Types. We fully introduce these categories in the pdf file.

    ## Requirements & technology skills assumed by the reviewer evaluating the artifact

    The artifact consists entirely of Excel sheets, which can be opened with common spreadsheet software (i.e., Microsoft Excel), coverage plots as PNG files, and PDF files. It requires about 15MB of storage in total. No other specific technology skills are required of the reviewer evaluating the artifact.
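
    A minimal sketch (Python; the tab layout and column names are assumptions based on the description above) of recomputing per-app combined coverage from 'CoverageResults.xlsx', counting an activity as covered by a group if any agent in the group marked it with an 'x':

        import pandas as pd

        xls = pd.ExcelFile("CoverageResults.xlsx")
        for app_tab in xls.sheet_names[1:]:  # first tab is the overall summary
            df = xls.parse(app_tab)
            # Column names assumed from the description: one column per agent.
            analysts = df[["Analyst 1", "Analyst 2"]].eq("x").any(axis=1)
            tools = df[["Tool 1 (ape)", "Tool 2 (fastbot)"]].eq("x").any(axis=1)
            print(app_tab,
                  f"analysts={analysts.mean():.0%}",
                  f"tools={tools.mean():.0%}",
                  f"combined={(analysts | tools).mean():.0%}")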

  9. EMUE-D4-2-OrificePlate

    • zenodo.org
    bin, pdf, zip
    Updated Jul 18, 2024
    + more versions
    Cite
    M.J. Reader-Harris; Claire Forsyth; Tariq Boussouara (2024). EMUE-D4-2-OrificePlate [Dataset]. http://doi.org/10.5281/zenodo.5142095
    Available download formats: zip, bin, pdf
    Dataset updated
    Jul 18, 2024
    Dataset provided by
    Zenodo, http://zenodo.org/
    Authors
    M.J. Reader-Harris; Claire Forsyth; Tariq Boussouara
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The report shows how the uncertainty should be evaluated for a measurement from an orifice meter where the flowrate is evaluated using a discharge-coefficient equation based on data from a population of orifice meters: the orifice meter in service is not itself calibrated in a flowing fluid, but is similar (with permitted variability) to those on which the discharge-coefficient equation is based.

    The data on which the discharge-coefficient equation in ISO 5167-2:2003 (the Reader-Harris/Gallagher (1998) Equation) is based are given in the uploaded dataset together with the analysis required by the report (to follow).
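
    A minimal sketch (Python) of the Reader-Harris/Gallagher (1998) discharge-coefficient equation, transcribed from commonly published statements of ISO 5167-2:2003; the coefficients should be verified against the standard and the uploaded dataset before any use.

        import math

        def rhg_discharge_coefficient(beta, re_d, d_mm, l1, l2p):
            """beta = d/D; re_d = pipe Reynolds number; d_mm = pipe diameter in mm;
            l1, l2p = tapping spacings divided by D (flange taps: 25.4 / d_mm)."""
            a = (19000.0 * beta / re_d) ** 0.8
            m2p = 2.0 * l2p / (1.0 - beta)
            c = (0.5961 + 0.0261 * beta**2 - 0.216 * beta**8
                 + 0.000521 * (1e6 * beta / re_d) ** 0.7
                 + (0.0188 + 0.0063 * a) * beta**3.5 * (1e6 / re_d) ** 0.3
                 + (0.043 + 0.080 * math.exp(-10.0 * l1)
                    - 0.123 * math.exp(-7.0 * l1)) * (1.0 - 0.11 * a)
                   * beta**4 / (1.0 - beta**4)
                 - 0.031 * (m2p - 0.8 * m2p**1.1) * beta**1.3)
            if d_mm < 71.12:  # additional small-pipe term
                c += 0.011 * (0.75 - beta) * (2.8 - d_mm / 25.4)
            return c

        # Example: flange tappings in a 100 mm pipe, beta = 0.5, ReD = 1e5
        print(rhg_discharge_coefficient(beta=0.5, re_d=1e5, d_mm=100.0,
                                        l1=25.4 / 100.0, l2p=25.4 / 100.0))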

    Files contained in the dataset are:

    • D4-2_Report.pdf: Report “The uncertainty of the orifice-plate discharge coefficient”
    • D4-2 LaTeX Source Files.zip: LaTeX files to be compiled in order to produce D4-2_Report.pdf.
    • D4-2_Figures.zip: figures/graphs from the report, including data points.
    • EMUE-D4-2-OrificePlate.xlsm: Excel spreadsheet containing all data associated with the example.