Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
To create the dataset, the top 10 countries by COVID-19 incidence worldwide were identified as of October 22, 2020 (on the eve of the second wave of the pandemic); of these, the countries also represented in the Global 500 ranking for 2020 were selected: USA, India, Brazil, Russia, Spain, France, and Mexico. For each of these countries, up to 10 of the largest transnational corporations included in the Global 500 rating for 2020 and 2019 were selected. For each country, arithmetic averages were calculated for the change (growth) in indicators such as the profit and profitability of enterprises, their ranking position (competitiveness), asset value, and number of employees. The arithmetic means of these indicators across all countries in the sample were then found, characterizing the situation in international entrepreneurship as a whole in the context of the COVID-19 crisis in 2020, on the eve of the second wave of the pandemic. The data are collected in a single Microsoft Excel table. The dataset is a unique database that combines COVID-19 statistics with entrepreneurship statistics. It is flexible and can be supplemented with data from other countries and newer statistics on the COVID-19 pandemic. Because the dataset stores formulas rather than ready-made numbers, adding or changing values in the original table at the beginning of the dataset automatically recalculates most of the subsequent tables and updates the graphs. This allows the dataset to be used not just as an array of data, but as an analytical tool for automating research on the impact of the COVID-19 pandemic and crisis on international entrepreneurship. The dataset includes not only tabular data but also charts that visualize the data. It contains both actual and forecast data on morbidity and mortality from COVID-19 for the period of the second wave of the pandemic in 2020. The forecasts are presented as a normal distribution of predicted values together with the probability of their occurrence in practice. This enables broad scenario analysis: various predicted morbidity and mortality rates can be substituted into the risk assessment tables to obtain automatically calculated consequences (changes) for the characteristics of international entrepreneurship. Actual values identified during and after the second wave of the pandemic can also be substituted, to check the reliability of the forecasts and conduct a plan-fact analysis. Finally, the dataset contains not only the numerical values of the initial and predicted indicators but also their qualitative interpretation, reflecting the presence and level of risk posed by the pandemic and COVID-19 crisis for international entrepreneurship.
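To make the two mechanisms concrete, here is a minimal sketch (illustrative only; the dataset itself performs these steps with live Excel formulas, and all numbers below are hypothetical) of the averaging of indicator changes and the normal-distribution scenario draws described above:

```python
# Illustrative sketch of the dataset's two core computations; all values hypothetical.
import numpy as np

# Change (growth) in an indicator, e.g. profitability, for the sampled corporations:
profitability_change = {"USA": [-0.12, -0.05, 0.02], "India": [-0.08, -0.03]}
country_means = {c: np.mean(v) for c, v in profitability_change.items()}
overall_mean = np.mean(list(country_means.values()))  # sample-wide situation

# Scenario analysis: predicted morbidity modeled as a normal distribution.
rng = np.random.default_rng(0)
forecast_mean, forecast_sd = 60000.0, 8000.0          # hypothetical daily cases
scenarios = rng.normal(forecast_mean, forecast_sd, size=1000)
print(overall_mean, scenarios.mean())
```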
Excel spreadsheets by species (the 4-letter code is an abbreviation for the genus and species used in the study, the year 2010 or 2011 is the year the data were collected, SH indicates data for Science Hub, and the date is the date of file preparation). The data in each file are described in a read-me file, which is the first worksheet in each file. Each row in a species spreadsheet is for one plot (plant). The data themselves are in the data worksheet. One file includes a read-me description of the columns in the data set for chemical analysis; in this file, one row is an herbicide treatment and sample for chemical analysis (if taken). This dataset is associated with the following publication: Olszyk, D., T. Pfleeger, T. Shiroyama, M. Blakely-Smith, E. Lee, and M. Plocher. Plant reproduction is altered by simulated herbicide drift to constructed plant communities. ENVIRONMENTAL TOXICOLOGY AND CHEMISTRY. Society of Environmental Toxicology and Chemistry, Pensacola, FL, USA, 36(10): 2799-2813, (2017).
The program PanPlot 2 was developed as a visualization tool for the information system PANGAEA. It can be used as a stand-alone application to plot data versus depth or time. The data input format is tab-delimited ASCII (e.g. as exported from MS-Excel or from PANGAEA). The default scales and graphic features can be individually modified. PanPlot 2 graphs can be exported in several image formats (BMP, PNG, PDF, and SVG), which can be imported into graphics software for further processing.
PanPlot has been retired since 2017. It is free of charge, is no longer actively developed or supported, and is provided as-is without warranty.
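For orientation, a minimal sketch (not part of PanPlot; the file name and column headers are hypothetical) of producing the tab-delimited ASCII input that PanPlot 2 reads:

```python
# Write a depth-indexed data table as tab-delimited ASCII, the input format
# PanPlot 2 expects (e.g. as exported from MS-Excel or PANGAEA).
import csv

rows = [
    ("Depth [m]", "Temperature [degC]", "Salinity"),  # hypothetical column headers
    (0.0, 18.2, 35.1),
    (10.0, 17.8, 35.2),
    (20.0, 16.5, 35.4),
]

with open("profile.tab", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")  # tab-delimited, as PanPlot reads
    writer.writerows(rows)
```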
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This article describes a free, open-source collection of templates for the popular Excel (2013 and later versions) spreadsheet program. These templates are spreadsheet files that allow easy and intuitive learning and the implementation of practical examples concerning descriptive statistics, random variables, confidence intervals, and hypothesis testing. Although they are designed to be used with Excel, they can also be employed with other free spreadsheet programs (changing some particular formulas). Moreover, we exploit some possibilities of the ActiveX controls of the Excel Developer Menu to create interactive Gaussian density charts. Finally, it is important to note that they can often be embedded in a web page, so it is not necessary to use the Excel software itself. These templates have been designed as a useful tool for teaching basic statistics and carrying out data analysis even when students are not familiar with Excel. Additionally, they can be used as a complement to other analytical software packages. They aim to assist students in learning statistics within an intuitive working environment. Supplementary materials with the Excel templates are available online.
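As an illustration of the kind of computation the confidence-interval templates implement (this example is ours, not taken from the templates), a t-based interval for a mean:

```python
# Two-sided t confidence interval for a sample mean; data are hypothetical.
import math
from statistics import mean, stdev
from scipy import stats

sample = [4.1, 5.0, 4.7, 5.3, 4.9, 5.1]        # hypothetical measurements
n, m, s = len(sample), mean(sample), stdev(sample)
alpha = 0.05
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)   # critical value for n-1 df
half_width = t_crit * s / math.sqrt(n)
print(f"{1 - alpha:.0%} CI for the mean: [{m - half_width:.3f}, {m + half_width:.3f}]")
```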
Authors: Brian Brown
Date: 27th November 1981
Brief Description: Data were recorded from Rod Smallwood's arm on the 27th November 1981; the dot matrix image shows the ulna and radius bones. We made a 'radiotherapy type' mould of the arm and then put drawing pins through the plastic (pin head inwards) as electrodes. There are two sets of data: one recorded from the arm and the other with saline filling the mould. The data were published in: D.C. Barber, B.H. Brown, and I.L. Freeston, "Imaging spatial distributions of resistivity using applied potential tomography", Electronics Letters, 19(22):933-935, 1983. http://digital-library.theiet.org/content/journals/10.1049/el_19830637
License: Creative Commons Artistic License (with Attribution)
Attribution Requirement: Use or presentation of these data must reference this publication: D.C. Barber, B.H. Brown, and I.L. Freeston, "Imaging spatial distributions of resistivity using applied potential tomography", Electronics Letters, 19(22):933-935, 1983.
Format: The data are handwritten and scanned into the linked pdf file. The adjacent drive/receive data sets for both the uniform (saline) and arm cases are included in the attached Excel file. There are 6 columns of data in the xls file: the first three are for the uniform case and give the two reciprocal data sets and their mean; columns 4-6 are for the arm. A quick reconstruction using columns 3 and 6 as reference and data, respectively, looked OK.
Methods: The attached pdf file shows the line printer output of the data recorded from Rod Smallwood's arm on the 27th November 1981, and the dot matrix image showing the ulna and radius bones. The pdf file also shows my plot of the XY positions of the electrodes. The data set on the line printer is a complete data set, i.e. drive 1/2, then 1/3, then 1/4, etc. for every combination. I could only find the print-out for one of the data sets; however, I found my notebook with the adjacent drive/receive data set, which is page 7 of the pdf file. I extracted the adjacent drive/receive data sets for both the uniform (saline) and arm data and included them in the attached Excel file. The first column of data is 104 points, ordered as follows:
Drive 1/2, receive 3/4; drive 1/2, receive 4/5; ... drive 1/2, receive 16/1
Drive 2/3, receive 4/5; drive 2/3, receive 5/6; ... drive 2/3, receive 16/1
Drive 4/5, receive 6/7; drive 4/5, receive 7/8; ... drive 4/5, receive 16/1
etc., down to drive 14/15, receive 16/1.
The second column is the other reciprocal set. I think these data are the ones used to produce the image in the Electronics Letters paper of 1983 (page 1 of my pdf file).
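A short sketch (illustrative, assuming the standard 16-electrode adjacent protocol; not the original acquisition code) that enumerates the drive/receive combinations and shows where the 104 points per column come from: 16 drive pairs times 13 non-overlapping receive pairs gives 208 measurements, halved to 104 once reciprocal duplicates (drive and receive swapped) are removed.

```python
# Enumerate the adjacent drive/receive protocol for 16 electrodes.
N = 16
adjacent_pairs = [(i, i % N + 1) for i in range(1, N + 1)]  # (1,2), (2,3), ..., (16,1)

measurements = set()
for drive in adjacent_pairs:
    for receive in adjacent_pairs:
        if set(drive) & set(receive):        # skip pairs sharing an electrode
            continue
        if (receive, drive) in measurements:  # reciprocity: keep one of each pair
            continue
        measurements.add((drive, receive))

print(len(measurements))  # 104
```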
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Civil and geological engineers have used field variable-head permeability tests (VH tests or slug tests) for over a century to assess the local hydraulic conductivity of tested soils and rocks. The water level in the pipe or riser casing reaches, after some rest time, a static position or elevation, z2. Then, the water level position is changed rapidly, by adding or removing some water volume, or by inserting or removing a solid slug. Afterward, the water level position or elevation z1(t) is recorded versus time t, yielding a difference in hydraulic head or water column defined as Z(t) = z1(t) - z2. The water level at rest is assumed to be the piezometric level or PL for the tested zone, before drilling a hole and installing test equipment. All equations use Z(t) or Z*(t) = Z(t) / Z(t=0). The water-level response versus time may be a slow return to equilibrium (overdamped test) or an oscillation back to equilibrium (underdamped test). This document deals exclusively with overdamped tests. Their data may be analyzed using several methods, known to yield different results for the hydraulic conductivity. The methods fit in three groups: group 1 neglects the influence of the solid matrix strain, group 2 is for tests in aquitards with delayed strain caused by consolidation, and group 3 takes into account some elastic and instant solid matrix strain. This document briefly explains what is wrong with certain theories and why. It shows three ways to plot the data, which are the three diagnostic graphs. According to experience with thousands of tests, most test data are biased by an incorrect estimate z2 of the piezometric level at rest. The derivative or velocity plot does not depend upon this assumed piezometric level, but can verify its correctness. The document presents experimental results and explains the three-diagnostic-graphs approach, which unifies the theories and, most importantly, yields a user-independent result. Two free spreadsheet files are provided. The spreadsheet "Lefranc-Test-English-Model" follows the Canadian standards and is used to explain how to treat the test data correctly to reach a user-independent result. The user does not modify this model spreadsheet but can make as many copies as needed, with different names. The user can treat any other data set in a copy, and can also modify any copy if needed. The second Excel spreadsheet contains several sets of data that can be used to practice with copies of the model spreadsheet.
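A minimal sketch (synthetic, hypothetical values; not taken from the provided spreadsheets) of the quantities defined above, and of why the derivative (velocity) plot is insensitive to an error in the assumed piezometric level z2:

```python
# Z(t) = z1(t) - z2, normalized Z*(t) = Z(t)/Z(0), and the velocity plot dZ/dt.
import numpy as np

t = np.linspace(0.0, 600.0, 61)          # time [s]
z2 = 10.00                               # assumed static (piezometric) level [m]
z1 = z2 + 0.50 * np.exp(-t / 150.0)      # synthetic overdamped response [m]

Z = z1 - z2                              # head difference Z(t)
Z_star = Z / Z[0]                        # normalized head Z*(t)
velocity = np.gradient(Z, t)             # dZ/dt for the derivative (velocity) plot

# An error in the assumed z2 shifts Z(t) by a constant but leaves dZ/dt unchanged,
# which is why the velocity plot can check the assumed piezometric level:
Z_biased = z1 - (z2 + 0.05)
assert np.allclose(np.gradient(Z_biased, t), velocity)
```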
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
These datasets contain many economic variables related to agriculture, such as crop output value, profit, and several others. They can be used to test several hypotheses in agricultural economics, at both plot level and household level.
Users can also reproduce these datasets using the STATA 14 do file 'VDSA data management for agricultural performance'. This STATA program file uses the Village Dynamics in South Asia (VDSA) raw data files in Excel format. The resulting output is two data files in STATA format, one at plot level and the other at household level.
These plot-level and household-level data sets are also included in this repository. The Word file 'guidelines' contains instructions for extracting the VDSA raw data from the VDSA knowledge bank and using them as inputs to run the STATA do file 'VDSA data management for agricultural performance'.
The VDSA raw data files in Excel format needed to run the STATA do file are also available in this repository for users' convenience.
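For orientation, a minimal sketch (illustrative only; the published STATA do file is the canonical pipeline, and the file and column names below are hypothetical) of the kind of plot-to-household aggregation it performs:

```python
# Read a VDSA-style raw Excel file and collapse plot-level records to household level.
import pandas as pd

plots = pd.read_excel("vdsa_cultivation_raw.xlsx")   # hypothetical raw input

# Plot-level file: one row per household x plot, with economic variables.
plot_level = plots[["hh_id", "plot_id", "crop_output_value", "profit"]]

# Household-level file: sum economic variables over each household's plots.
household_level = (
    plot_level.groupby("hh_id")[["crop_output_value", "profit"]]
    .sum()
    .reset_index()
)

plot_level.to_stata("plot_level.dta")         # mirror the do file's STATA outputs
household_level.to_stata("household_level.dta")
```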
The raw VDSA data were generated by the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) in partnership with Indian Council of Agricultural Research (ICAR) institutes and the International Rice Research Institute (IRRI), and funded by the Bill & Melinda Gates Foundation (BMGF) (Grant ID: 51937). The data were acquired in surveys by resident field investigators. Data collection was mostly through paper-based questionnaires; Samsung tablets have also been used since 2012. The survey instruments used for the different modules are available at http://vdsa.icrisat.ac.in/vdsa-questionaires.aspx
Study sites were selected using stepwise purposive sampling covering the agro-ecological diversity of the region. Three districts within each zone were selected based on soil and climate parameters as well as the share of agricultural land under ICRISAT mandate crops. Along similar lines, one typical sub-district within each district and two villages within each sub-district were selected. Within each village, ten random households from four landholding groups were selected.
Selected farmers were visited by well-trained, agriculture-graduate resident field investigators once every three weeks to collect information on various socioeconomic indicators. Some data modules, such as details of crop cultivation activities including plot-wise inputs and outputs, were collected every three weeks, while others, such as general endowments, were collected once at the beginning of each agricultural year.
The compiled data, source data, data descriptions, and data management code are all published in a public repository at http://dataverse.icrisat.org/dataverse/socialscience (https://doi.org/10.21421/D2/HDEUKU).
Some of the several benefits of these data are:
Scientists, students, and development practitioners can use these data to track changes in the livelihood options of the rural poor, as they provide a long-term, multi-generational perspective on agricultural, social, and economic change in rural livelihoods.
The survey sites provide a socio-economic field laboratory for teaching and training students and researchers.
The data can be used for diverse agricultural, development, and socio-economic analyses and to better understand the dynamics of Indian agriculture.
The data help to provide feedback for designing policy interventions, setting research priorities, and refining technologies.
The data shed light on the pathways by which new technologies, policies, and programs impact poverty, village economies, and societies.
TIME PERIOD COVERED: These data were collected between May-July 2004 by field crews working for the California Department of Fish and Game.
GEOGRAPHIC EXTENT OF THE RECORDS: Vegetation plots occur in two study areas on public lands in Yuba and Tehama Counties in the Sierra Nevada foothills.
NUMBER OF RECORDS: There are 183 records with habitat attributes measured from 0.05-0.10 hectare sampling plots.
BASE DATA STRUCTURE: The file is a flat Excel table which gives vegetation attributes for each habitat plot. Each habitat plot is represented by two key fields called "SAMPLE_ID" and "PLOT_NUM" which relate these habitat records to bird and herpetile survey data collected in the same year from the same sample points.
WHAT EACH RECORD REPRESENTS: Each record in the table represents the average values for habitat attributes from plots that can be linked with bird count data from the same points. Average values for each sample represent the mean of average values from 3 habitat plots measured at each point.
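A minimal sketch (hypothetical file names and column usage, based on the key fields named above) of linking the habitat table to companion bird survey data and averaging habitat plots per sample point:

```python
# Link habitat attributes to bird counts via the SAMPLE_ID key field.
import pandas as pd

habitat = pd.read_excel("habitat_plots.xlsx")  # the flat table described above
birds = pd.read_csv("bird_counts.csv")         # hypothetical companion survey file

# Average habitat attributes over the plots measured at each sample point...
point_means = habitat.groupby("SAMPLE_ID").mean(numeric_only=True).reset_index()

# ...then link with bird counts collected at the same points.
linked = birds.merge(point_means, on="SAMPLE_ID", how="left")
```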
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The data set for the article [Bahl, C. R., Engelbrecht, K., Gideon, A., Levy, M. A. V., Marcussen, J. B., Imbaquingo, C., & Bjørk, R. (2024). Design, optimization and operation of a high power thermomagnetic harvester. Applied Energy, 376, 124304]. DOI for the publication: 10.1016/j.apenergy.2024.124304. The data are stored in four zip files, each containing a single folder, according to the different sections in the paper.
The folder "Simulation_design" contains three Origin plot files, which contain the data for Figs. 3-5, Fig. 6, and Fig. 7. Both plots and data are contained in the Origin files.
The folder "Experiment_thin_coils" contains an Origin file with the data and plot for Fig. 9. It also contains Matlab scripts for plotting Fig. 10 and Fig. 11, as well as the supporting file "lvm_import.m" for importing the lvm files that contain the raw experimental data. The RMS voltage plotted in Fig. 11 is given in the file "Voltage_RMS.txt".
The folder "Experiment_big_coils" contains an Excel sheet with the data shown in Fig. 13, as well as the raw data, Raw_data.xlsx, from the experiments needed to produce the average data in the Fig_13_data.xlsx file. The data are described within the Excel files.
The folder "Experiment_big_coils" also contains the raw data in lvm format for the experiments with and without foam. The files are named according to the frequency, flow rate, and temperature span, and can be read with the "lvm_import.m" file in Matlab.
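For readers working outside MATLAB, a minimal sketch (an assumption-laden alternative to the provided "lvm_import.m"; it assumes standard LabVIEW .lvm files, which are tab-delimited text whose header block ends with "***End_of_Header***"):

```python
# Load a LabVIEW .lvm measurement file as a DataFrame.
from io import StringIO
import pandas as pd

def read_lvm(path: str) -> pd.DataFrame:
    with open(path) as f:
        lines = f.readlines()
    # Data follow the last header terminator as tab-separated columns.
    start = max(i for i, line in enumerate(lines)
                if "***End_of_Header***" in line) + 1
    return pd.read_csv(StringIO("".join(lines[start:])), sep="\t")

df = read_lvm("experiment.lvm")  # hypothetical file name
```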
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The LDMI experiment (Low-Disturbance Manure Incorporation) was designed to evaluate nutrient losses with conventional and improved liquid dairy manure management practices in a corn silage (Zea mays) / rye cover-crop (Secale cereale) system. The improved manure management treatments were designed to incorporate manure while maintaining crop residue for erosion control. Field observations included greenhouse gas (GHG) fluxes from soil, soil nutrient concentrations, crop growth and harvest biomass and nutrient content, as well as monitoring of soil physical and chemical properties. Observations from LDMI have been used for parameterization and validation of computer simulation models of GHG emissions from dairy farms (Gaillard et al., submitted). The LDMI experiment was performed as part of the Dairy CAP, described below. The experiment included ten different treatments: (1) broadcast manure with disk-harrow incorporation, (2) broadcast manure with no tillage incorporation, (3) "strip-tillage" manure application (sweep injection ridged with paired disks), (4) aerator band manure application, (5) low-disturbance sweep injection of manure, (6) coulter injection of manure with sweep tillage, (7) no manure with urea to supply 60 lb N/acre (67 kg N/ha), (8) no manure with urea to supply 120 lb N/acre (135 kg N/ha), (9) no manure with urea to supply 180 lb N/acre (202 kg N/ha), (10) no manure / no fertilizer control. Manure was applied in the fall; fertilizer was applied in the spring. These ten treatments were replicated four times in a randomized complete block design. The LDMI experiment was conducted at the Marshfield Research Station of the University of Wisconsin and the USDA Agricultural Research Service (ARS) in Stratford, WI (Marathon County, Latitude 44.7627, Longitude -90.0938). Soils at the research station are from the Withee soil series, fine-loamy, mixed, superactive, frigid Aquic Glossudalf. Each experimental plot was approximately 70 square meters. A weather station was located at the south edge of the field site. A secondary weather station (MARS South), for snow and snow water equivalence data and for backup of the main weather station, was located at Latitude 44.641445 and Longitude -90.133526 (16,093 meters southwest of the field site). The experiment was initiated on November 28, 2011, with fall tillage and manure application in each plot according to its treatment type. Each spring, corn silage was planted in rows at a rate of 87,500 plants per hectare. The cultivar was Pioneer P8906HR. The LDMI experiment ended on November 30, 2015. The manure applied in this experiment was from the dairy herd at the Marshfield Research Station. Cows were fed a diet of 48% dry matter, 17.45% protein, and 72.8% total digestible nutrients. Liquid slurry manure, including feces, urine, and rain, was collected and stored in a lagoon on the site. Manure was withdrawn from the lagoon, spread on the plots, and sampled for analysis, all on the same day, once per year. Manure samples were analyzed at the University of Wisconsin Soil and Forage Lab in Marshfield (NH4-N, total P and total K) and at the Marshfield ARS (pH, dry matter, volatile solids, total N and total C). GHG fluxes from soil (CO2, CH4, N2O) were measured using static chambers, as described in Parkin and Venterea (2010); an illustrative flux-calculation sketch follows the resource list below. Measurements were made with the chambers placed across the rows of corn.
Additional soil chemical and physical characteristics were measured as noted in the data dictionary and other metadata of the LDMI data set, included here. This experiment was part of "Climate Change Mitigation and Adaptation in Dairy Production Systems of the Great Lakes Region," also known as the Dairy Coordinated Agricultural Project (Dairy CAP), funded by the United States Department of Agriculture - National Institute of Food and Agriculture (award number 2013-68002-20525). The main goal of the Dairy CAP was to improve understanding of the magnitudes and controlling factors of GHG emissions from dairy production in the Great Lakes region. Using this knowledge, the Dairy CAP has improved life cycle analysis (LCA) of GHG production by Great Lakes dairy farms, developed farm management tools, and conducted extension, education, and outreach activities.
Resources in this dataset:
Resource Title: Data_dictionary_DairyCAP_LDMI. File Name: Data_dictionary_DairyCAP_LDMI.xlsx. Resource Description: This is the data dictionary for the Low-Disturbance Manure Incorporation (LDMI) experiment, conducted at the USDA-ARS research station in Marshfield, WI (separate spreadsheet tabs). Resource Software Recommended: Microsoft Excel 2016, url: https://products.office.com/en-us/excel
Resource Title: DairyCAP_LDMI. File Name: DairyCAP_LDMI.xlsx. Resource Description: This is the data from the Low-Disturbance Manure Incorporation (LDMI) experiment, conducted at the USDA-ARS research station in Marshfield, WI. Resource Software Recommended: Microsoft Excel 2016, url: https://products.office.com/en-us/excel
Resource Title: Data Dictionary DairyCAP LDMI. File Name: Data_dictionary_DairyCAP_LDMI.csv. Resource Description: This is the data dictionary for the Low-Disturbance Manure Incorporation (LDMI) experiment, conducted at the USDA-ARS research station in Marshfield, WI.
Resource Title: Biomass Data. File Name: LDMI_Biomass.csv
Resource Title: Experimental Set-up Data. File Name: LDMI_Exp_setup.csv
Resource Title: Gas Flux Data. File Name: LDMI_Gas_Fluxes.csv
Resource Title: Management History Data. File Name: LDMI_Management_History.csv
Resource Title: Manure Analysis Data. File Name: LDMI_Manure_Analysis.csv
Resource Title: Soil Chemical Data. File Name: LDMI_Soil_Chem.csv
Resource Title: Soil Physical Data. File Name: LDMI_Soil_Phys.csv
Resource Title: Weather Data. File Name: LDMI_Weather.csv
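As referenced above, a minimal sketch (illustrative; Parkin and Venterea (2010) describe the actual protocol, and all numbers here are hypothetical) of the standard static-chamber flux calculation: regress headspace concentration against time and scale the slope by chamber geometry.

```python
# Static-chamber GHG flux: linear fit of concentration vs. time, scaled by geometry.
import numpy as np

time_min = np.array([0.0, 15.0, 30.0, 45.0])    # sampling times [min]
conc_ppm = np.array([0.33, 0.41, 0.50, 0.58])   # hypothetical N2O readings [ppm]

slope_ppm_per_min = np.polyfit(time_min, conc_ppm, 1)[0]  # linear regression slope

chamber_height_m = 0.15                          # hypothetical chamber height [m]
# Ideal gas law at 25 degC and 1 atm gives ~40.9 mol of air per m3:
mol_per_m3 = 101325.0 / (8.314 * 298.15)
# ppm/min -> mol fraction/min -> mol m-3 min-1 -> mol m-2 min-1 -> umol m-2 min-1:
flux = slope_ppm_per_min * 1e-6 * mol_per_m3 * chamber_height_m * 1e6
print(f"N2O flux ~ {flux:.3f} umol m-2 min-1")
```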
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This supplementary material, namely Supporting Material 2, is organized to help readers better understand our article; here we describe how to use it when reading the paper. Two compressed files ('Analysis Data.zip' and 'Source Simulation Data.zip') are attached to this supplementary material ('All Source Data.zip'). The compressed file 'Source Simulation Data.zip' contains seven folders ('Source_Data_for_Fig3_and_Fig7', 'Source_Data_for_Fig4', 'Source_Data_for_Fig6_and_Fig9', 'Source_Data_for_Fig8', 'J Error Bar(100txt+1excel)', 'Rho Error Bar(100txt+1excel)', 'Source_Data_for_OurDegeneratedModel') and two Origin files ('FigurePlot.opju' and 'OurDegeneratedModel.opju'). The other compressed file, 'Analysis Data.zip', contains 10 folders ('Local_Detail_Balance_Points_for_Fig3_and_Fig7', 'Local_Detail_Balance_Points_for_Fig4_and_Fig8', 'Local_Detail_Balance_Points_for_Fig6_and_Fig9', 'Local_Detail_Balance_Solution_for_Fig3_and_Fig7', 'Local_Detail_Balance_Solution_for_Fig4_and_Fig8', 'Local_Detail_Balance_Solution_for_Fig6_and_Fig9', 'Global_Balance_Solution_for_Fig3_and_Fig7', 'Global_Balance_Solution_for_Fig4_and_Fig8', 'Global_Balance_Solution_for_Fig6_and_Fig9', 'DB_Calculation_Results'). Altogether, these folders contain 307 '.txt' files and 2 '.xlsx' files holding all our original source data, including the density profiles, current profiles, and particle number profiles derived from both theoretical analyses and Monte-Carlo simulations used in our work. 12 'Readme.txt' files in these folders explain how to read the source data. The 2 Excel workbooks show how mean values and standard deviations were obtained from the original data when making the error bars for the Monte-Carlo simulations. The Origin files provide all the original source data we plot and use in the article. The text files are best viewed with Notepad, the Excel workbooks with Microsoft Office 2016 or a higher version, and the Origin files with OriginPro 2021b. The contents of all source data used in our paper are described in detail in this supplementary material (Supplementary Material 2).
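A minimal sketch (in Python, with the folder name taken from above but the file layout otherwise assumed) of how such error bars are formed: load the Monte-Carlo runs stored as '.txt' files and take the mean and standard deviation across runs, as the two Excel workbooks do.

```python
# Build Monte-Carlo error bars: mean and standard deviation across runs.
# Assumes each .txt file holds one run with the same number of data points.
import glob
import numpy as np

paths = sorted(glob.glob("J Error Bar(100txt+1excel)/*.txt"))
runs = [np.loadtxt(p) for p in paths]
stack = np.stack(runs)                  # shape: (n_runs, n_points)

mean = stack.mean(axis=0)               # central value per point
std = stack.std(axis=0, ddof=1)         # sample standard deviation -> error bar
```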
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
Can calmodulin bind to lipids of the cytosolic leaflet of plasma membranes?:
This data set contains all the experimental raw data, analysis and source files for the final figures reported in the manuscript: "Can calmodulin bind to lipids of the cytosolic leaflet of plasma membranes?". It is divided into five (1-5) zipped folders, named as the technique used to obtain the data. Each of them, where applicable, consists of three different subfolders (raw data, analysed data, final graph). Read below for more details.
1) ConfocalMicroscopy
1a) Raw_Data: the raw images are reported as .dat and .tif formats, divided into folders (according to date first yymmdd, and within the same day according to composition). Each folder contains a .txt file reporting the experimental details
1b) GUVs_Statistics - GUVs_Statistics.txt explains how we generated the bar plot shown in Fig. 1E
1c) Final_Graph - Figure_1B_1D.png contains figures 1B and 1D - Figure1E_%ofGUVswithCaMAdsorbptions.csv is the x-y source file of the bar plot shown in figure 1E (% of GUVs which showed adsorption of CaM over the total number of measured GUVs) - Where_To_Find_Representative_Images.txt states the folders where the raw images chosen for figure 1 can be found
2) FCS 2a) Raw_Data: - 1_points: .ptu files - 2_points: .ht3 files - Raw_Data_Description.docx explains which compositions and conditions correspond to which point in the two data sets 2b) Final_Graphs: - Figure_2A.xlsx contains the x-y source file for figure 2A
2c) Analysis: - FCS_Fits.xlsx: outcome of the global fitting procedure described in the .docx below (each group of points represents a certain composition and calcium concentration; see Raw_Data_Description.docx in FCS > Raw_Data) - Notes_for_FCS_Analysis.docx contains a brief description of the analysis of the autocorrelation curves
3) GPLaurdan 3a) Raw Data: all the spectra are stored in folders named by date (yymmdd_lipidcomposition_Laurdan) and are in both .FS and .txt formats
3b) GP calculations: contains all the .xlsx files calculating the GP values from the raw emission and excitation spectra (an illustrative sketch of this calculation follows this list)
3c) Final_Graphs - Data_Processing_For_Fig_2D.csv contains the data processing from the GP values calculated from the spectra to the DeltaGP (GP with - GP without CaM) reported in fig. 2D - Figure_2C_2D.xlsx contains the x-y source file for figures 2C and 2D
4) LiveCellsImaging
4a) Intensity_Protrusions_vs_Cell_Body: - contains all the .xlsx files calculating the intensity of the various images, with files named by date (yymmdd) - the data from all Excel sheets are gathered in another Excel file to create a final graph
4b) Final_Graphs - Figure_S2B.xlsx contains the x-y source file for figure S2B
5) LiveCellImaging_Raw_Data: contains some of the images, provided as .tif files. They are divided by date (yymmdd), and each date folder contains subfolders named by sample name and ionomycin concentration. Within the subfolders, the images are divided into folders distinguishing the data acquired before and after the ionomycin treatment and by incubation time.
6) The 211124_BioCev_Imaging_1 folder has the .jpg files of the time lapses; these are shown in figs. 1A and S2.
7) 211124_BioCev_Imaging_2 and 8) 211124_BioCev_Imaging_3 contain the images of HeLa cells expressing EGFP-CaM after treatment with ionomycin at 200 nM (A1) and 1 uM (A2), respectively.
9) SPR
9a) Raw Data: - SPR_Raw_Data.xlsx: x/y exported sensorgrams - the .jpg files from the software are also included, named by lipid composition
9b) Final_Graph: - Fig.2B.xlsx contains the x-y source file for figure 2B
9c) Analysis - SPR_Analysis.xlsx: Excel file showing step by step (sheet by sheet) how we processed the raw data to obtain the final figure (details explained in the .docx below) - Analysis of SPR data_notes.docx: read-me with a detailed explanation
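As a companion to the GP calculations in section 3b, a minimal sketch (using the standard Laurdan GP definition with 440 nm and 490 nm emission bands; the workbooks' exact wavelengths and integration windows may differ, and the spectrum here is synthetic) of how a GP value is computed from an emission spectrum:

```python
# Laurdan generalized polarization: GP = (I440 - I490) / (I440 + I490).
import numpy as np

wavelength_nm = np.arange(400, 551)                       # synthetic emission axis
intensity = np.exp(-((wavelength_nm - 450) / 35.0) ** 2)  # synthetic spectrum

def band_mean(wl, inten, center, half_width=5):
    """Mean intensity in a band of +/- half_width nm around center."""
    mask = (wl >= center - half_width) & (wl <= center + half_width)
    return inten[mask].mean()

I_440 = band_mean(wavelength_nm, intensity, 440)  # ordered-phase channel
I_490 = band_mean(wavelength_nm, intensity, 490)  # disordered-phase channel
gp = (I_440 - I_490) / (I_440 + I_490)
print(f"GP = {gp:.3f}")
```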