20 datasets found
  1. Petre_Slide_CategoricalScatterplotFigShare.pptx

    • figshare.com
    pptx
    Updated Sep 19, 2016
    Cite
    Benj Petre; Aurore Coince; Sophien Kamoun (2016). Petre_Slide_CategoricalScatterplotFigShare.pptx [Dataset]. http://doi.org/10.6084/m9.figshare.3840102.v1
    Dataset updated
    Sep 19, 2016
    Dataset provided by
    figshare
    Authors
    Benj Petre; Aurore Coince; Sophien Kamoun
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Categorical scatterplots with R for biologists: a step-by-step guide

    Benjamin Petre1, Aurore Coince2, Sophien Kamoun1

    1 The Sainsbury Laboratory, Norwich, UK; 2 Earlham Institute, Norwich, UK

    Weissgerber and colleagues (2015) recently stated that ‘as scientists, we urgently need to change our practices for presenting continuous data in small sample size studies’. They called for more scatterplot and boxplot representations in scientific papers, which ‘allow readers to critically evaluate continuous data’ (Weissgerber et al., 2015). In the Kamoun Lab at The Sainsbury Laboratory, we recently implemented a protocol to generate categorical scatterplots (Petre et al., 2016; Dagdas et al., 2016). Here we describe the three steps of this protocol: 1) formatting of the data set in a .csv file, 2) execution of the R script to generate the graph, and 3) export of the graph as a .pdf file.

    Protocol

    • Step 1: format the data set as a .csv file. Store the data in a three-column Excel file as shown in the PowerPoint slide. The first column ‘Replicate’ indicates the biological replicates; in the example, the month and year during which the replicate was performed are indicated. The second column ‘Condition’ indicates the conditions of the experiment (in the example, a wild type and two mutants called A and B). The third column ‘Value’ contains the continuous values. Save the Excel file as a .csv file (File -> Save as -> in ‘File Format’, select .csv). This .csv file is the input file to import in R (see the sketch after this protocol for an example layout).

    • Step 2: execute the R script (see Notes 1 and 2). Copy the script shown in the PowerPoint slide and paste it into the R console. Execute the script. In the dialog box, select the input .csv file from step 1. The categorical scatterplot will appear in a separate window. Dots represent the values for each sample; colors indicate replicates. Boxplots are superimposed; black dots indicate outliers. (A minimal sketch of such a script is shown after this protocol.)

    • Step 3: save the graph as a .pdf file. Shape the window at your convenience and save the graph as a .pdf file (File -> Save as). See the PowerPoint slide for an example.
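    For illustration, below is a minimal sketch of the step-1 input file and the kind of script described in step 2. The column names (Replicate, Condition, Value) follow the protocol; the data values are made up, and the script is an assumed reconstruction using ggplot2, not the exact script from the slide.

    # Example input .csv from step 1 (values are hypothetical):
    # Replicate,Condition,Value
    # Jan2016,WT,4.2
    # Jan2016,mutantA,7.9
    # Feb2016,WT,3.8

    # Minimal sketch of the step-2 script
    library(ggplot2)
    data <- read.csv(file.choose())   # opens the dialog box to select the input .csv
    graph <- ggplot(data, aes(x = Condition, y = Value))
    graph + geom_boxplot(outlier.colour='black', colour='black') +
      geom_jitter(aes(col=Replicate)) + theme_bw()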

    Notes

    • Note 1: install the ggplot2 package. The R script requires the package ‘ggplot2’ to be installed. To install it, go to Packages & Data -> Package Installer, enter ‘ggplot2’ in the Package Search field and click ‘Get List’. Select ‘ggplot2’ in the Package column and click ‘Install Selected’. Install all dependencies as well. (Alternatively, see the console one-liner after these notes.)

    • Note 2: use a log scale for the y-axis. To use a log scale for the y-axis of the graph, use the command line below in place of command line #7 in the script.

    # 7 Display the graph in a separate window. Dot colors indicate replicates
    graph + geom_boxplot(outlier.colour='black', colour='black') + geom_jitter(aes(col=Replicate)) + scale_y_log10() + theme_bw()
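    As an alternative to the GUI route in Note 1, the same installation can be done with one line in the R console:

    install.packages('ggplot2', dependencies = TRUE)   # installs ggplot2 and its dependencies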

    References

    Dagdas YF, Belhaj K, Maqbool A, Chaparro-Garcia A, Pandey P, Petre B, et al. (2016) An effector of the Irish potato famine pathogen antagonizes a host autophagy cargo receptor. eLife 5:e10856.

    Petre B, Saunders DGO, Sklenar J, Lorrain C, Krasileva KV, Win J, et al. (2016) Heterologous Expression Screens in Nicotiana benthamiana Identify a Candidate Effector of the Wheat Yellow Rust Pathogen that Associates with Processing Bodies. PLoS ONE 11(2):e0149035.

    Weissgerber TL, Milic NM, Winham SJ, Garovic VD (2015) Beyond Bar and Line Graphs: Time for a New Data Presentation Paradigm. PLoS Biol 13(4):e1002128.

    https://cran.r-project.org/

    http://ggplot2.org/

  2. Large Datasets in R - Plant Phenology & Temperature Data from NEON

    • qubeshub.org
    Updated May 10, 2018
    Cite
    Megan Jones Patterson; Lee Stanish; Natalie Robinson; Katherine Jones; Cody Flagg (2018). Large Datasets in R - Plant Phenology & Temperature Data from NEON [Dataset]. http://doi.org/10.25334/Q4DQ3F
    Dataset updated
    May 10, 2018
    Dataset provided by
    QUBES
    Authors
    Megan Jones Patterson; Lee Stanish; Natalie Robinson; Katherine Jones; Cody Flagg
    Description

    This module series covers how to import, manipulate, format, and plot time series data stored in .csv format in R. Originally designed to teach researchers to use NEON plant phenology and air temperature data, it has also been used in undergraduate classrooms.
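    A minimal sketch of the kind of workflow the module series covers; the file and column names here are hypothetical stand-ins, not the module's actual NEON files:

    library(ggplot2)
    # Hypothetical phenology file with 'date' and 'value' columns
    pheno <- read.csv("phenology.csv")
    pheno$date <- as.Date(pheno$date)   # parse the date column before plotting
    ggplot(pheno, aes(x = date, y = value)) + geom_line()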

  3. titanic5 Dataset

    • paperswithcode.com
    Cite
    titanic5 Dataset [Dataset]. https://paperswithcode.com/dataset/titanic5-dataset
    Description

    titanic5 Dataset Created by David Beltran del Rio March 2016.

    Notes This is the final (for now) version of my update to the Titanic data. I think it’s finally ready for publishing if you’d like. What I did was to strip all the passenger and crew data from the Encyclopedia Titanica (ET) web pages (excluding channel crossing passengers), create a unique ID for each passenger and crew member (Name_ID), then (painstakingly and hopefully 100% correctly) match to your earlier titanic3 dataset, in order to compare the two and to get your sibsp and parch variables. Since the ET is updated occasionally the work put into the ID and matching can be reused and refined later. I did eventually hear back from the ET people, they are willing to make the underlying database available in the future, I have not yet taken them up on it.

    The two datasets line up nicely, most of the differences in the newer titanic5 dataset are in the age variable, as I had mentioned before - the new set has less missing ages - 51 missing (vs 263) out of 1309.

    I am in the process of refining my analysis of the data as well, based on your comments below and your Regression Modeling Strategies example.

    titanic3_wID data can be matched to titanic5 using the Name_ID variable. Tab titanic5 Metadata has the variable descriptions and allowable values for Class and Class/Dept.
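    A minimal sketch of that match in base R, assuming both tables are loaded as data frames sharing the Name_ID key (the file names below are hypothetical):

    titanic3_wID <- read.csv("titanic3_wID.csv")
    titanic5 <- read.csv("titanic5.csv")
    # Join on the shared passenger/crew identifier
    matched <- merge(titanic3_wID, titanic5, by = "Name_ID")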

    A note about the ages - instead of using the add 0.5 trick to indicate estimated birth day / date I have a flag that indicates how the “final” age (Age_F) was arrived at. It’s the Age_F_Code variable - the allowable values are in the Titanic5_metadata tab in the attached excel. The reason for this is that I already had some fractional ages for infants where I had age in months instead of years and I wanted to avoid confusion for 6 month old infants, although I don’t think there are any in the data! Also, I was thinking to make fractional ages or age in days for all passengers for whom I have DoB, but I have not yet done so.

    Here’s what the tabs are:

    Titanic5_all - all (mostly cleaned) Titanic passenger and crew records
    Titanic5_work - working dataset, crew removed, unnecessary variables removed - this is the one I import into SAS / R to work on
    Titanic5_metadata - variable descriptions and allowable values
    titanic3_wID - original Titanic3 dataset with Name_ID added for merging to Titanic5

    I have a csv, R dataset, and SAS dataset, but the variable names are an older version, so I won’t send those along for now to avoid confusion.

    If it helps send my contact info along to your student in case any questions arise. Gmail address probably best, on weekends for sure: davebdr@gmail.com

    The tabs in titanic5.xls are

    Titanic5_all
    Titanic5_passenger (the one to be used for analysis)
    Titanic5_metadata (used during analysis file creation)
    Titanic3_wID

  4. Updated Data for: Food-washing monkeys recognize the law of diminishing...

    • zenodo.org
    bin, csv
    Updated Oct 28, 2024
    Cite
    Luke Fannin; Luke Fannin (2024). Updated Data for: Food-washing monkeys recognize the law of diminishing returns [Dataset]. http://doi.org/10.5281/zenodo.14002737
    Dataset updated
    Oct 28, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Luke Fannin; Luke Fannin
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This is a new draft version of the data files for "Food-washing monkeys recognize the law of diminishing returns" by Rosien et al.

    The original reviewed preprint was published on the eLife website on 22 July 2024: https://elifesciences.org/reviewed-preprints/98520. The data stored here are for the updated version of record.

    The published text contains methods justifications and supporting citations.

    This dataset was revised based on the recommendations of three reviewers. It now contains:

    • Two text files, to be run in the R programming environment (version of record is 4.4.1), containing code to replicate the GLMM analyses and produce the base figure files displayed in the paper.
    • Two .csv files for running the GLMM statistics included in the revised text.
    • One .csv file for figure 1, which contains sand geometric and compositional data.
    • One .csv file containing intake rate data.
    • Six .csv files used to create figures 2 and S2 in the text.
    • One Mathematica notebook file for producing the optimal cleaning model of figure 3.

    A general note: when running the scripts, your file paths will differ from those shown in the current text, since they depend on where the .csv files are stored on your computer. The "read.csv" commands in the R code will need to be edited to point to your own file paths.
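    For example, a sketch of the kind of edit required (the path and file name below are hypothetical placeholders):

    # Point read.csv at wherever the .csv files live on your machine
    intake <- read.csv("C:/path/to/your/data/intake_rate_data.csv")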

  5. Data from: Data and code from: Environmental influences on drying rate of...

    • agdatacommons.nal.usda.gov
    txt
    Updated May 14, 2024
    Cite
    Warren Copes; Quentin Read; Barbara J. Smith (2024). Data and code from: Environmental influences on drying rate of spray applied disinfestants from horticultural production services [Dataset]. http://doi.org/10.15482/USDA.ADC/25673073.v1
    Dataset updated
    May 14, 2024
    Dataset provided by
    Ag Data Commons
    Authors
    Warren Copes; Quentin Read; Barbara J. Smith
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset includes all the data and R code needed to reproduce the analyses in a forthcoming manuscript: Copes, W. E., Q. D. Read, and B. J. Smith. Environmental influences on drying rate of spray applied disinfestants from horticultural production services. PhytoFrontiers, DOI pending.

    Study description: Instructions for disinfestants typically specify a dose and a contact time to kill plant pathogens on production surfaces. A problem occurs when disinfestants are applied to large production areas where the evaporation rate is affected by weather conditions. The common contact time recommendation of 10 min may not be achieved under hot, sunny conditions that promote fast drying. This study is an investigation into how the evaporation rates of six commercial disinfestants vary when applied to six types of substrate materials under cool to hot and cloudy to sunny weather conditions. Initially, disinfestants with low surface tension spread out to provide 100% coverage and disinfestants with high surface tension beaded up to provide about 60% coverage when applied to hard smooth surfaces. Disinfestants applied to porous materials, such as wood and concrete, were quickly absorbed into the body of the material. Even though disinfestants evaporated faster under hot sunny conditions than under cool cloudy conditions, coverage was reduced considerably in the first 2.5 min under most weather conditions and reduced to less than or equal to 50% coverage by 5 min.

    Dataset contents: This dataset includes R code to import the data and fit Bayesian statistical models using the model fitting software CmdStan, interfaced with R using the packages brms and cmdstanr. The models (one for 2022 and one for 2023) compare how quickly different spray-applied disinfestants dry, depending on what chemical was sprayed, what surface material it was sprayed onto, and what the weather conditions were at the time. Next, the statistical models are used to generate predictions and compare mean drying rates between the disinfestants, surface materials, and weather conditions. Finally, tables and figures are created. These files are included:

    • Drying2022.csv: drying rate data for the 2022 experimental run
    • Weather2022.csv: weather data for the 2022 experimental run
    • Drying2023.csv: drying rate data for the 2023 experimental run
    • Weather2023.csv: weather data for the 2023 experimental run
    • disinfestant_drying_analysis.Rmd: RMarkdown notebook with all data processing, analysis, and table creation code
    • disinfestant_drying_analysis.html: rendered output of the notebook
    • MS_figures.R: additional R code to create figures formatted for journal requirements
    • fit2022_discretetime_weather_solar.rds: fitted brms model object for 2022. This will allow users to reproduce the model prediction results without having to refit the model, which was originally fit on a high-performance computing cluster
    • fit2023_discretetime_weather_solar.rds: fitted brms model object for 2023
    • data_dictionary.xlsx: descriptions of each column in the CSV data files
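    As a sketch of one convenience noted above, the pre-fitted model objects can be loaded with readRDS() to reproduce predictions without refitting (assuming brms is installed):

    library(brms)
    # Load the fitted 2022 model object instead of refitting it on a cluster
    fit2022 <- readRDS("fit2022_discretetime_weather_solar.rds")
    summary(fit2022)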

  6. Syria town database

    • dataverse.harvard.edu
    Updated Nov 22, 2018
    Cite
    Kheder Khaddour; Kevin Mazur (2018). Syria town database [Dataset]. http://doi.org/10.7910/DVN/YQQ07L
    Dataset updated
    Nov 22, 2018
    Dataset provided by
    Harvard Dataverse
    Authors
    Kheder Khaddour; Kevin Mazur
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Area covered
    Syria
    Description

    The purpose of this dataset is to provide a detailed picture of the characteristics of Syrian towns in the years preceding the 2011 Syrian uprising and ensuing civil war. It incorporates the 2004 national census, the last before the uprising, and a newly collected set of data on ethnic identity. The level of analysis is the town (the Syrian Census Bureau’s fourth administrative level).

    TECHNICAL NOTE: The .csv files in this data package contain both Arabic and English, so they are encoded in UTF-8. The Arabic script should render if opened directly in Open Office, Numbers, Google Drive, or R statistical software. To read the Arabic in Excel, you can open the .csv file in any of these applications and save it as an .xlsx file, or open it through Excel using the following steps: (1) open a blank Excel document, (2) import the data using “Data -> Get External Data -> Import text file”, (3) select “File Origin: Unicode (UTF-8)”, (4) select “Delimiters: comma”, (5) select the top left cell to place the data. See the following post for further details: https://stackoverflow.com/questions/6002256/is-it-possible-to-force-excel-recognize-utf-8-csv-files-automatically
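    In R, a minimal sketch of an import that preserves the Arabic text (the file name is a hypothetical placeholder):

    # fileEncoding tells R to decode the file as UTF-8
    towns <- read.csv("syria_towns.csv", fileEncoding = "UTF-8")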

  7. Data from: Data and code from: Cover crop and crop rotation effects on...

    • catalog.data.gov
    Updated Apr 21, 2025
    Cite
    Agricultural Research Service (2025). Data and code from: Cover crop and crop rotation effects on tissue and soil population dynamics of Macrophomina phaseolina and yield in no-till system - V2 [Dataset]. https://catalog.data.gov/dataset/data-and-code-from-cover-crop-and-crop-rotation-effects-on-tissue-and-soil-population-dyna-831b9
    Dataset updated
    Apr 21, 2025
    Dataset provided by
    Agricultural Research Service (https://www.ars.usda.gov/)
    Description

    [Note 2023-08-14 - Supersedes version 1, https://doi.org/10.15482/USDA.ADC/1528086]

    This dataset contains all code and data necessary to reproduce the analyses in the manuscript: Mengistu, A., Read, Q. D., Sykes, V. R., Kelly, H. M., Kharel, T., & Bellaloui, N. (2023). Cover crop and crop rotation effects on tissue and soil population dynamics of Macrophomina phaseolina and yield under no-till system. Plant Disease. https://doi.org/10.1094/pdis-03-23-0443-re

    The .zip archive cropping-systems-1.0.zip contains the data and code files listed below.

    Data:

    • stem_soil_CFU_by_plant.csv: soil disease load (SoilCFUg) and stem tissue disease load (StemCFUg) for individual plants in CFU per gram, with columns indicating year, plot ID, replicate, row, plant ID, previous crop treatment, cover crop treatment, and comments. Missing data are indicated with "."
    • yield_CFU_by_plot.csv: yield data (YldKgHa) at the plot level in units of kg/ha, with columns indicating year, plot ID, replicate, and treatments, as well as means of soil and stem disease load at the plot level.

    Code:

    • cropping_system_analysis_v3.0.Rmd: RMarkdown notebook with all data processing, analysis, and visualization code
    • equations.Rmd: RMarkdown notebook with formatted equations
    • formatted_figs_revision.R: R script to produce figures formatted exactly as they appear in the manuscript

    The R project file cropping-systems.Rproj is used to organize the RStudio project. Scripts and notebooks used in older versions of the analysis are found in the testing/ subdirectory. Excel spreadsheets containing the raw data from which the cleaned CSV files were created are found in the raw_data subdirectory.
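    A minimal sketch of importing the plant-level file in R while honoring the "." missing-data marker described above:

    # "." marks missing values in this file
    stem_soil <- read.csv("stem_soil_CFU_by_plant.csv", na.strings = ".")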

  8. Consumer Expenditure Survey (CE)

    • dataverse.harvard.edu
    Updated May 30, 2013
    Cite
    Anthony Damico (2013). Consumer Expenditure Survey (CE) [Dataset]. http://doi.org/10.7910/DVN/UTNJAH
    Dataset updated
    May 30, 2013
    Dataset provided by
    Harvard Dataverse
    Authors
    Anthony Damico
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    analyze the consumer expenditure survey (ce) with r

    the consumer expenditure survey (ce) is the primo data source to understand how americans spend money. participating households keep a running diary about every little purchase over the year. those diaries are then summed up into precise expenditure categories. how else are you gonna know that the average american household spent $34 (±2) on bacon, $826 (±17) on cellular phones, and $13 (±2) on digital e-readers in 2011? an integral component of the market basket calculation in the consumer price index, this survey recently became available as public-use microdata and they're slowly releasing historical files back to 1996. hooray!

    for a taste of what's possible with ce data, look at the quick tables listed on their main page - these tables contain approximately a bazillion different expenditure categories broken down by demographic groups. guess what? i just learned that americans living in households with $5,000 to $9,999 of annual income spent an average of $283 (±90) on pets, toys, hobbies, and playground equipment (pdf page 3). you can often get close to your statistic of interest from these web tables. but say you wanted to look at domestic pet expenditure among only households with children between 12 and 17 years old. another one of the thirteen web tables - the consumer unit composition table - shows a few different breakouts of households with kids, but none matching that exact population of interest. the bureau of labor statistics (bls) (the survey's designers) and the census bureau (the survey's administrators) have provided plenty of the major statistics and breakouts for you, but they're not psychic. if you want to comb through this data for specific expenditure categories broken out by a you-defined segment of the united states' population, then let a little r into your life. fun starts now.

    fair warning: only analyze the consumer expenditure survey if you are nerd to the core. the microdata ship with two different survey types (interview and diary), each containing five or six quarterly table formats that need to be stacked, merged, and manipulated prior to a methodologically-correct analysis. the scripts in this repository contain examples to prepare 'em all, just be advised that magnificent data like this will never be no-assembly-required. the folks at bls have posted an excellent summary of what's available - read it before anything else. after that, read the getting started guide. don't skim. a few of the descriptions below refer to sas programs provided by the bureau of labor statistics. you'll find these in the C:\My Directory\CES\2011\docs directory after you run the download program.

    this new github repository contains three scripts:

    2010-2011 - download all microdata.R
    • loop through every year and download every file hosted on the bls's ce ftp site
    • import each of the comma-separated value files into r with read.csv
    • depending on user-settings, save each table as an r data file (.rda) or stata-readable file (.dta)

    2011 fmly intrvw - analysis examples.R
    • load the r data files (.rda) necessary to create the 'fmly' table shown in the ce macros program documentation.doc file
    • construct that 'fmly' table, using five quarters of interviews (q1 2011 thru q1 2012)
    • initiate a replicate-weighted survey design object
    • perform some lovely li'l analysis examples
    • replicate the %mean_variance() macro found in "ce macros.sas" and provide some examples of calculating descriptive statistics using unimputed variables
    • replicate the %compare_groups() macro found in "ce macros.sas" and provide some examples of performing t-tests using unimputed variables
    • create an rsqlite database (to minimize ram usage) containing the five imputed variable files, after identifying which variables were imputed based on pdf page 3 of the user's guide to income imputation
    • initiate a replicate-weighted, database-backed, multiply-imputed survey design object
    • perform a few additional analyses that highlight the modified syntax required for multiply-imputed survey designs
    • replicate the %mean_variance() macro found in "ce macros.sas" and provide some examples of calculating descriptive statistics using imputed variables
    • replicate the %compare_groups() macro found in "ce macros.sas" and provide some examples of performing t-tests using imputed variables
    • replicate the %proc_reg() and %proc_logistic() macros found in "ce macros.sas" and provide some examples of regressions and logistic regressions using both unimputed and imputed variables

    replicate integrated mean and se.R
    • match each step in the bls-provided sas program "integrated mean and se.sas" but with r instead of sas
    • create an rsqlite database when the expenditure table gets too large for older computers to handle in ram
    • export a table "2011 integrated mean and se.csv" that exactly matches the contents of the sas-produced "2011 integrated mean and se.lst" text file

    click here to view these three scripts for...
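    a minimal sketch of the download-and-convert step that the first script loops over (the url and file names here are hypothetical placeholders, not the script's actual targets):

    # hypothetical single-file version of the download loop
    url <- "https://www.bls.gov/cex/pumd/data/comma/intrvw11.zip"   # hypothetical url
    download.file(url, "intrvw11.zip")
    unzip("intrvw11.zip")
    fmli <- read.csv("fmli111x.csv")    # hypothetical member file
    save(fmli, file = "fmli111x.rda")   # save as an r data file (.rda)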

  9. case study 1 bike share

    • kaggle.com
    Updated Oct 8, 2022
    Cite
    mohamed osama (2022). case study 1 bike share [Dataset]. https://www.kaggle.com/ososmm/case-study-1-bike-share/discussion
    Dataset updated
    Oct 8, 2022
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    mohamed osama
    Description

    Cyclistic: Google Data Analytics Capstone Project

    Cyclistic - Google Data Analytics Certification Capstone Project
    Moirangthem Arup Singh

    How Does a Bike-Share Navigate Speedy Success?

    Background: This project is for the Google Data Analytics Certification capstone project. I am wearing the hat of a junior data analyst working in the marketing analyst team at Cyclistic, a bike-share company in Chicago. Cyclistic is a bike-share program that features more than 5,800 bicycles and 600 docking stations. Cyclistic sets itself apart by also offering reclining bikes, hand tricycles, and cargo bikes, making bike-share more inclusive to people with disabilities and riders who can’t use a standard two-wheeled bike. The majority of riders opt for traditional bikes; about 8% of riders use the assistive options. Cyclistic users are more likely to ride for leisure, but about 30% use them to commute to work each day. Customers who purchase single-ride or full-day passes are referred to as casual riders. Customers who purchase annual memberships are Cyclistic members. The director of marketing believes the company’s future success depends on maximizing the number of annual memberships. Therefore, my team wants to understand how casual riders and annual members use Cyclistic bikes differently. From these insights, my team will design a new marketing strategy to convert casual riders into annual members. But first, Cyclistic executives must approve the recommendations, so they must be backed up with compelling data insights and professional data visualizations.

    This project will be completed using the six data analytics stages:

    • Ask: Identify the business task and determine the key stakeholders.
    • Prepare: Collect the data, identify how it’s organized, and determine its credibility.
    • Process: Select the tool for data cleaning, check for errors, and document the cleaning process.
    • Analyze: Organize and format the data, aggregate it so that it’s useful, perform calculations, and identify trends and relationships.
    • Share: Use design thinking principles and a data-driven storytelling approach to present the findings with effective visualization. Ensure the analysis has answered the business task.
    • Act: Share the final conclusions and the recommendations.

    Ask: Business task: recommend marketing strategies aimed at converting casual riders into annual members by better understanding how annual members and casual riders use Cyclistic bikes differently. Stakeholders: Lily Moreno, the director of marketing and my manager; the Cyclistic executive team, a detail-oriented executive team who will decide whether to approve the recommended marketing program; and the Cyclistic marketing analytics team, a team of data analysts responsible for collecting, analyzing, and reporting data that helps guide Cyclistic’s marketing strategy.

    Prepare: For this project, I will use Cyclistic’s public historical trip data to analyze and identify trends. The data has been made available by Motivate International Inc. under license. I downloaded the ZIP files containing the csv files from the above link, but while uploading the files to Kaggle (as I am using a Kaggle notebook), it gave me a warning that the dataset is already available on Kaggle. So I will be using the cyclistic-bike-share dataset from Kaggle. The dataset has 13 csv files from April 2020 to April 2021. For the purpose of my analysis I will use the csv files from April 2020 to March 2021. The source csv files are on Kaggle, so I can rely on their integrity.

    I am using Microsoft Excel to get a glimpse of the data. There is one csv file for each month, containing details of each ride: ride id, rideable type, start and end time, start and end station, and latitude and longitude of the start and end stations.

    Process: I will use R in Kaggle to import the dataset, check how it’s organized, check whether all the columns have appropriate data types, find outliers, and check whether any of the data have sampling bias. I will be using the R libraries below.

    Load the tidyverse, lubridate, ggplot2, sqldf and psych libraries

    library(tidyverse)
    library(lubridate)
    library(ggplot2)
    library(plotrix)

    ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.1 ──

    ✔ ggplot2 3.3.5 ✔ purrr 0.3.4 ✔ tibble 3.1.4 ✔ dplyr 1.0.7 ✔ tidyr 1.1.3 ✔ stringr 1.4.0 ✔ readr 2.0.1 ✔ forcats 0.5.1

    ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ── ✖ dplyr::filter() masks stats::filter() ✖ dplyr::lag() masks stats::lag()

    Attaching package: ‘lubridate’

    The following objects are masked from ‘package:base’:

    date, intersect, setdiff, union
    

    Set the working directory

    setwd("/kaggle/input/cyclistic-bike-share")

    Import the csv files

    r_202004 <- read.csv("202004-divvy-tripdata.csv") r_202005 <- read.csv("20...
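    After importing, the monthly files are typically stacked into a single data frame for analysis; a minimal sketch, assuming the imports above (bind_rows() is from dplyr, which is loaded with the tidyverse):

    # Stack the monthly trip files into one data frame
    all_trips <- bind_rows(r_202004, r_202005)   # ...and the remaining months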

  10. Time-Series Matrix (TSMx): A visualization tool for plotting multiscale...

    • dataverse.harvard.edu
    Updated Jul 8, 2024
    Cite
    Georgios Boumis; Brad Peter (2024). Time-Series Matrix (TSMx): A visualization tool for plotting multiscale temporal trends [Dataset]. http://doi.org/10.7910/DVN/ZZDYM9
    Dataset updated
    Jul 8, 2024
    Dataset provided by
    Harvard Dataverse
    Authors
    Georgios Boumis; Brad Peter
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Time-Series Matrix (TSMx): A visualization tool for plotting multiscale temporal trends

    TSMx is an R script that was developed to facilitate multi-temporal-scale visualizations of time-series data. The script requires only a two-column CSV of years and values to plot the slope of the linear regression line for all possible year combinations from the supplied temporal range. The outputs include a time-series matrix showing slope direction based on the linear regression, slope values plotted with colors indicating magnitude, and results of a Mann-Kendall test. The start year is indicated on the y-axis and the end year is indicated on the x-axis. In the example below, the cell in the top-right corner is the direction of the slope for the temporal range 2001–2019. The red line corresponds with the temporal range 2010–2019 and an arrow is drawn from the cell that represents that range. One cell is highlighted with a black border to demonstrate how to read the chart: that cell represents the slope for the temporal range 2004–2014.

    This publication entry also includes an Excel template that produces the same visualizations without a need to interact with any code, though minor modifications will need to be made to accommodate year ranges other than what is provided. TSMx for R was developed by Georgios Boumis; TSMx was originally conceptualized and created by Brad G. Peter in Microsoft Excel. Please refer to the associated publication: Peter, B.G., Messina, J.P., Breeze, V., Fung, C.Y., Kapoor, A. and Fan, P., 2024. Perspectives on modifiable spatiotemporal unit problems in remote sensing of agriculture: evaluating rice production in Vietnam and tools for analysis. Frontiers in Remote Sensing, 5, p.1042624. https://www.frontiersin.org/journals/remote-sensing/articles/10.3389/frsen.2024.1042624

    [Figure: TSMx sample chart from the supplied Excel template. Data represent the productivity of rice agriculture in Vietnam as measured via EVI (enhanced vegetation index) from the NASA MODIS data product (MOD13Q1.V006).]
    TSMx R script:

    # import packages
    library(dplyr)
    library(readr)
    library(ggplot2)
    library(tibble)
    library(tidyr)
    library(forcats)
    library(Kendall)
    options(warn = -1) # disable warnings

    # read data (.csv file with "Year" and "Value" columns)
    data <- read_csv("EVI.csv")

    # prepare row/column names for output matrices
    years <- data %>% pull("Year")
    r.names <- years[-length(years)]
    c.names <- years[-1]
    years <- years[-length(years)]

    # initialize output matrices
    sign.matrix <- matrix(data = NA, nrow = length(years), ncol = length(years))
    pval.matrix <- matrix(data = NA, nrow = length(years), ncol = length(years))
    slope.matrix <- matrix(data = NA, nrow = length(years), ncol = length(years))

    # function to return remaining years given a start year
    getRemain <- function(start.year) {
      years <- data %>% pull("Year")
      start.ind <- which(data[["Year"]] == start.year) + 1
      remain <- years[start.ind:length(years)]
      return(remain)
    }

    # function to subset data for a start/end year combination
    splitData <- function(end.year, start.year) {
      keep <- which(data[['Year']] >= start.year & data[['Year']] <= end.year)
      batch <- data[keep,]
      return(batch)
    }

    # function to fit linear regression and return slope direction
    fitReg <- function(batch) {
      trend <- lm(Value ~ Year, data = batch)
      slope <- coefficients(trend)[[2]]
      return(sign(slope))
    }

    # function to fit linear regression and return slope magnitude
    fitRegv2 <- function(batch) {
      trend <- lm(Value ~ Year, data = batch)
      slope <- coefficients(trend)[[2]]
      return(slope)
    }

    # function to implement Mann-Kendall (MK) trend test and return significance
    # the test is implemented only for n>=8
    getMann <- function(batch) {
      if (nrow(batch) >= 8) {
        mk <- MannKendall(batch[['Value']])
        pval <- mk[['sl']]
      } else {
        pval <- NA
      }
      return(pval)
    }

    # function to return slope direction for all combinations given a start year
    getSign <- function(start.year) {
      remaining <- getRemain(start.year)
      combs <- lapply(remaining, splitData, start.year = start.year)
      signs <- lapply(combs, fitReg)
      return(signs)
    }

    # function to return MK significance for all combinations given a start year
    getPval <- function(start.year) {
      remaining <- getRemain(start.year)
      combs <- lapply(remaining, splitData, start.year = start.year)
      pvals <- lapply(combs, getMann)
      return(pvals)
    }

    # function to return slope magnitude for all combinations given a start year
    getMagn <- function(start.year) {
      remaining <- getRemain(start.year)
      combs <- lapply(remaining, splitData, start.year = start.year)
      magns <- lapply(combs, fitRegv2)
      return(magns)
    }

    # retrieve slope direction, MK significance, and slope magnitude
    signs <- lapply(years, getSign)
    pvals <- lapply(years, getPval)
    magns <- lapply(years, getMagn)

    # fill-in output matrices
    dimension <- nrow(sign.matrix)
    for (i in 1:dimension) {
      sign.matrix[i, i:dimension] <- unlist(signs[i])
      pval.matrix[i, i:dimension] <- unlist(pvals[i])
      slope.matrix[i, i:dimension] <- unlist(magns[i])
    }
    sign.matrix <-...

  11. Database of Uniaxial Cyclic and Tensile Coupon Tests for Structural Metallic...

    • zenodo.org
    • data.niaid.nih.gov
    bin, csv, zip
    Updated Dec 24, 2022
    Cite
    Alexander R. Hartloper; Alexander R. Hartloper; Selimcan Ozden; Albano de Castro e Sousa; Dimitrios G. Lignos; Dimitrios G. Lignos; Selimcan Ozden; Albano de Castro e Sousa (2022). Database of Uniaxial Cyclic and Tensile Coupon Tests for Structural Metallic Materials [Dataset]. http://doi.org/10.5281/zenodo.6965147
    Dataset updated
    Dec 24, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Alexander R. Hartloper; Alexander R. Hartloper; Selimcan Ozden; Albano de Castro e Sousa; Dimitrios G. Lignos; Dimitrios G. Lignos; Selimcan Ozden; Albano de Castro e Sousa
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Database of Uniaxial Cyclic and Tensile Coupon Tests for Structural Metallic Materials

    Background

    This dataset contains data from monotonic and cyclic loading experiments on structural metallic materials. The materials are primarily structural steels and one iron-based shape memory alloy is also included. Summary files are included that provide an overview of the database and data from the individual experiments is also included.

    The files included in the database are outlined below and the format of the files is briefly described. Additional information regarding the formatting can be found through the post-processing library (https://github.com/ahartloper/rlmtp/tree/master/protocols).

    Usage

    • The data is licensed through the Creative Commons Attribution 4.0 International.
    • If you have used our data and are publishing your work, we ask that you please reference both:
      1. this database through its DOI, and
      2. any publication that is associated with the experiments. See the Overall_Summary and Database_References files for the associated publication references.

    Included Files

    • Overall_Summary_2022-08-25_v1-0-0.csv: summarises the specimen information for all experiments in the database.
    • Summarized_Mechanical_Props_Campaign_2022-08-25_v1-0-0.csv: summarises the average initial yield stress and average initial elastic modulus per campaign.
    • Unreduced_Data-#_v1-0-0.zip: contain the original (not downsampled) data
      • Where # is one of: 1, 2, 3, 4, 5, 6. The unreduced data is broken into separate archives because of upload limitations to Zenodo. Together they provide all the experimental data.
      • We recommend you un-zip all the folders and place them in one "Unreduced_Data" directory, similar to the "Clean_Data" directory.
      • The experimental data is provided through .csv files for each test that contain the processed data. The experiments are organised by experimental campaign and named by load protocol and specimen. A .pdf file accompanies each test showing the stress-strain graph.
      • There is a "db_tag_clean_data_map.csv" file that is used to map the database summary with the unreduced data.
      • The computed yield stresses and elastic moduli are stored in the "yield_stress" directory.
    • Clean_Data_v1-0-0.zip: contains all the downsampled data
      • The experimental data is provided through .csv files for each test that contain the processed data. The experiments are organised by experimental campaign and named by load protocol and specimen. A .pdf file accompanies each test showing the stress-strain graph.
      • There is a "db_tag_clean_data_map.csv" file that is used to map the database summary with the clean data.
      • The computed yield stresses and elastic moduli are stored in the "yield_stress" directory.
    • Database_References_v1-0-0.bib
      • Contains a bibtex reference for many of the experiments in the database. Corresponds to the "citekey" entry in the summary files.

    File Format: Downsampled Data

    These are the "LP_

    • The header of the first column is empty: the first column corresponds to the index of the sample point in the original (unreduced) data
    • Time[s]: time in seconds since the start of the test
    • e_true: true strain
    • Sigma_true: true stress in MPa
    • (optional) Temperature[C]: the surface temperature in degC

    These data files can be easily loaded using the pandas library in Python through:

    import pandas
    data = pandas.read_csv(data_file, index_col=0)

    The data is formatted so it can be used directly in RESSPyLab (https://github.com/AlbanoCastroSousa/RESSPyLab). Note that the column names "e_true" and "Sigma_true" were kept for backwards compatibility reasons with RESSPyLab.
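    For R users, a minimal equivalent sketch, assuming the same file layout (index in the first column):

    # data_file is the path to one of the processed .csv test files
    data <- read.csv(data_file, row.names = 1)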

    File Format: Unreduced Data

    These are the "LP_

    • The first column is the index of each data point
    • S/No: sample number recorded by the DAQ
    • System Date: Date and time of sample
    • Time[s]: time in seconds since the start of the test
    • C_1_Force[kN]: load cell force
    • C_1_Déform1[mm]: extensometer displacement
    • C_1_Déplacement[mm]: cross-head displacement
    • Eng_Stress[MPa]: engineering stress
    • Eng_Strain[]: engineering strain
    • e_true: true strain
    • Sigma_true: true stress in MPa
    • (optional) Temperature[C]: specimen surface temperature in degC

    The data can be loaded and used similarly to the downsampled data.

    File Format: Overall_Summary

    The overall summary file provides data on all the test specimens in the database. The columns include:

    • hidden_index: internal reference ID
    • grade: material grade
    • spec: specifications for the material
    • source: base material for the test specimen
    • id: internal name for the specimen
    • lp: load protocol
    • size: type of specimen (M8, M12, M20)
    • gage_length_mm_: unreduced section length in mm
    • avg_reduced_dia_mm_: average measured diameter for the reduced section in mm
    • avg_fractured_dia_top_mm_: average measured diameter of the top fracture surface in mm
    • avg_fractured_dia_bot_mm_: average measured diameter of the bottom fracture surface in mm
    • fy_n_mpa_: nominal yield stress
    • fu_n_mpa_: nominal ultimate stress
    • t_a_deg_c_: ambient temperature in degC
    • date: date of test
    • investigator: person(s) who conducted the test
    • location: laboratory where test was conducted
    • machine: setup used to conduct test
    • pid_force_k_p, pid_force_t_i, pid_force_t_d: PID parameters for force control
    • pid_disp_k_p, pid_disp_t_i, pid_disp_t_d: PID parameters for displacement control
    • pid_extenso_k_p, pid_extenso_t_i, pid_extenso_t_d: PID parameters for extensometer control
    • citekey: reference corresponding to the Database_References.bib file
    • yield_stress_mpa_: computed yield stress in MPa
    • elastic_modulus_mpa_: computed elastic modulus in MPa
    • fracture_strain: computed average true strain across the fracture surface
    • c,si,mn,p,s,n,cu,mo,ni,cr,v,nb,ti,al,b,zr,sn,ca,h,fe: chemical compositions in units of %mass
    • file: file name of corresponding clean (downsampled) stress-strain data

    File Format: Summarized_Mechanical_Props_Campaign

    Meant to be loaded in Python as a pandas DataFrame with multi-indexing, e.g.,

    # 'date' and 'version' hold the file's date and version strings,
    # e.g. '2022-08-25' and '_v1-0-0' per the file names listed above
    import pandas as pd

    tab1 = pd.read_csv('Summarized_Mechanical_Props_Campaign_' + date + version + '.csv',
              index_col=[0, 1, 2, 3], skipinitialspace=True, header=[0, 1],
              keep_default_na=False, na_values='')
    • citekey: reference in "Campaign_References.bib".
    • Grade: material grade.
    • Spec.: specifications (e.g., J2+N).
    • Yield Stress [MPa]: initial yield stress in MPa
      • size, count, mean, coefvar: number of experiments in campaign, number of experiments in mean, mean value for campaign, coefficient of variation for campaign
    • Elastic Modulus [MPa]: initial elastic modulus in MPa
      • size, count, mean, coefvar: number of experiments in campaign, number of experiments in mean, mean value for campaign, coefficient of variation for campaign

    Caveats

    • The files in the following directories were tested before the protocol was established. Therefore, only the true stress-strain is available for each:
      • A500
      • A992_Gr50
      • BCP325
      • BCR295
      • HYP400
      • S460NL
      • S690QL/25mm
      • S355J2_Plates/S355J2_N_25mm and S355J2_N_50mm
  12. Data from: United States wildlife and wildlife product imports from...

    • agdatacommons.nal.usda.gov
    bin
    Updated May 6, 2025
    Cite
    Evan A. Eskew; Allison M. White; Naom Ross; Kristine M. Smith; Katherine F. Smith; Jon Paul Rodríguez; Carlos Zambrana-Torrelio; William B. Karesh; Peter Daszak (2025). Data from: United States wildlife and wildlife product imports from 2000–2014 [Dataset]. https://agdatacommons.nal.usda.gov/articles/dataset/Data_from_United_States_wildlife_and_wildlife_product_imports_from_2000_2014/24853503
    Dataset updated
    May 6, 2025
    Dataset provided by
    Scientific Data
    Authors
    Evan A. Eskew; Allison M. White; Naom Ross; Kristine M. Smith; Katherine F. Smith; Jon Paul Rodríguez; Carlos Zambrana-Torrelio; William B. Karesh; Peter Daszak
    License

    U.S. Government Works: https://www.usa.gov/government-works
    License information was derived automatically

    Description

    The global wildlife trade network is a massive system that has been shown to threaten biodiversity, introduce non-native species and pathogens, and cause chronic animal welfare concerns. Despite its scale and impact, comprehensive characterization of the global wildlife trade is hampered by data that are limited in their temporal or taxonomic scope and detail. To help fill this gap, we present data on 15 years of the importation of wildlife and their derived products into the United States (2000–2014), originally collected by the United States Fish and Wildlife Service. We curated and cleaned the data and added taxonomic information to improve data usability. These data include >2 million wildlife or wildlife product shipments, representing >60 biological classes and >3.2 billion live organisms. Further, the majority of species in the dataset are not currently reported on by CITES parties. These data will be broadly useful to both scientists and policymakers seeking to better understand the volume, sources, biological composition, and potential risks of the global wildlife trade.

    Resources in this dataset:

    Resource Title: United States LEMIS wildlife trade data curated by EcoHealth Alliance (Version 1.1.0) - Zenodo. File Name: Web Page, url: https://doi.org/10.5281/zenodo.3565869

    Over 5.5 million USFWS LEMIS wildlife or wildlife product records spanning 15 years and 28 data fields. These records were derived from >2 million unique shipments processed by USFWS during the time period and represent >3.2 billion live organisms. We provide the final cleaned data as a single comma-separated value file. Original raw data as provided by the USFWS are also available. Although relatively large (~1 gigabyte), the cleaned data file can be imported into a software environment of choice for data analysis. Alternatively, the associated R package provides access to a release of the same cleaned dataset, with a data download and manipulation framework designed to work well with this large dataset. Both the Zenodo data repository and the R package contain a metadata file describing each of the data fields as well as a lookup table to retrieve full values for the abbreviated codes used throughout the dataset.

    Contents:

    • lemis_2000_2014_cleaned.csv: the compiled, cleaned LEMIS data from 2000-2014. Identical to the version 1.1.0 dataset available through the lemis R package.
    • lemis_codes.csv: full values for all coded values used in the LEMIS data. Identical to the output of the lemis R package function "lemis_codes()".
    • lemis_metadata.csv: data fields and field descriptions for all variables in the LEMIS data. Identical to the output of the lemis R package function "lemis_metadata()".
    • raw_data.zip: all of the raw LEMIS data files that are processed and cleaned with the code contained in the 'data-raw' subdirectory of the lemis R package repository.

    Resource Software Recommended: R package, url: https://github.com/ecohealthalliance/lemis
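    A minimal sketch of importing the ~1 gigabyte cleaned file in R; data.table::fread is assumed here as a faster alternative to read.csv for a file of this size:

    library(data.table)
    # fread handles the ~1 GB file much faster than read.csv
    lemis <- fread("lemis_2000_2014_cleaned.csv")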

  13. ‘School Dataset’ analyzed by Analyst-2

    • analyst-2.ai
    Updated Feb 13, 2022
    Cite
    Analyst-2 (analyst-2.ai) / Inspirient GmbH (inspirient.com) (2022). ‘School Dataset’ analyzed by Analyst-2 [Dataset]. https://analyst-2.ai/analysis/kaggle-school-dataset-3c70/2a80983f/?iid=004-125&v=presentation
    Dataset updated
    Feb 13, 2022
    Dataset authored and provided by
    Analyst-2 (analyst-2.ai) / Inspirient GmbH (inspirient.com)
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Analysis of ‘School Dataset’ provided by Analyst-2 (analyst-2.ai), based on source dataset retrieved from https://www.kaggle.com/smeilisa07/number of school teacher student class on 13 February 2022.

    --- Dataset description provided by original source is as follows ---

    Context

    This is my first data analysis project. I got this dataset from the Open Data Jakarta website (http://data.jakarta.go.id/), so most of the dataset is in Indonesian. But I have tried to describe it, and you can find the description in the VARIABLE DESCRIPTION.txt file.

    Content

    The title of this dataset is jumlah-sekolah-guru-murid-dan-ruang-kelas-menurut-jenis-sekolah-2011-2016, and it is a CSV file, so you can easily access it. The title means “the number of schools, teachers, students, and classrooms by type of school, 2011-2016”. If you just read the title, you can imagine the contents. The dataset has 50 observations and 8 variables, covering 2011 through 2016.

    In general, this dataset is about the quality of education in Jakarta; each year some school levels decrease and some increase, but not significantly.

    Acknowledgements

    This dataset comes from the Indonesian education authorities and was published as a CSV file by Open Data Jakarta.

    Inspiration

    Although this data is publicly available from Open Data Jakarta, I want to keep improving my data science skills, especially in R programming, because I think R is easy to learn and keeps me curious about data science. I am still struggling with the problems below, and I need solutions.

    Question :

    1. How can I clean this dataset? I have tried cleaning it, but I am still not sure. You can check the my_hypothesis.txt file, where I try to clean and visualize this dataset.

    2. How can I specify a model for machine learning? What steps do you recommend I take?

    3. How should I cluster my dataset if I want the labels to be not numbers but tingkat_sekolah for every tahun and jenis_sekolah? You can check the my_hypothesis.txt file.

    --- Original source retains full ownership of the source dataset ---

  14. Replication Data for: Lameness during the dry period: epidemiology and...

    • borealisdata.ca
    • open.library.ubc.ca
    Updated Sep 9, 2019
    Cite
    Ruan R. Daros; Hanna K. Eriksson; Daniel M. Weary; Marina A. G. von Keyserlingk (2019). Replication Data for: Lameness during the dry period: epidemiology and associated factors [Dataset]. http://doi.org/10.5683/SP2/YTZMKX
    Dataset updated
    Sep 9, 2019
    Dataset provided by
    Borealis
    Authors
    Ruan R. Daros; Hanna K. Eriksson; Daniel M. Weary; Marina A. G. von Keyserlingk
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Original data, R script (code), and code output for the paper published in the Journal of Dairy Science. For best use, replicate the analysis using R. Importing the data via the .csv file may cause some variables (columns of the spreadsheet) to be imported in the wrong format. If you run into any issues, do not hesitate to get in touch. Happy coding!
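    A minimal sketch of guarding against wrongly formatted columns on import (the file and column names below are hypothetical; check the dataset's codebook for the real ones):

    # Declare column classes explicitly so variables are not mis-typed on import
    lameness <- read.csv("lameness_dry_period.csv",
                         colClasses = c(cow_id = "character", parity = "integer"))
    str(lameness)   # verify the imported formats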

  15. Data and Code for "Does Organic Farming Jeopardize Food Security of Farm...

    • data.niaid.nih.gov
    Updated Apr 30, 2024
    Cite
    Henningsen, Arne (2024). Data and Code for "Does Organic Farming Jeopardize Food Security of Farm Households in Benin?" [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_10899544
    Dataset updated
    Apr 30, 2024
    Dataset provided by
    Aïhounton, Ghislain Boris Dossou
    Henningsen, Arne
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Benin
    Description

    This data and code archive provides all the data and code for replicating the empirical analysis that is presented in the journal article "Does Organic Farming Jeopardize Food Security of Farm Households in Benin?" authored by Ghislain B.D. Aïhounton and Arne Henningsen and published in the journal Food Policy (Volume 124, April 2024, 102622, DOI: 10.1016/j.foodpol.2024.102622).

    We conducted the empirical analysis with the "R" statistical software (version 4.3.3) using the add-on packages "AER" (version 1.2.12), "DescTools" (version 0.99.54), "lmtest" (version 0.9.40), "moments" (version 0.14.1), "sandwich" (version 3.1.0), "stargazer" (version 5.2.3), and "xtable" (version 1.8.4) that are all available at CRAN.

    This replication package contains the following files:

    • READMEThis file.

    • R/dataBenin.csvA CSV file that contains the (unprepared) data set. The variables in this file are described in file R/Variables.csv. This CSV file is imported by R script PrepareDataFoodNutrition.R.

    • R/Variables.csvA CSV file that describes the variables in the (unprepared) data set (file R/dataBenin.csv).

    • R/PrepareData.RAn R script that imports the (unprepared) data set (file R/dataBenin.csv), calculates additional variables and add theses variables to the data set, removes observations that should not be used in the empirical analysis, and saves the prepared data set as CSV file (R/dataFoodNutrition.csv).

    • R/dataPrepared.csvA CSV file that contains the (prepared) data set used in the empirical analysis. This CSV file is created by the R script R/PrepareDataFoodNutrition.R. It is imported by the R scripts R/DescriptiveTab.R, FoodNutritionImpact.R, and GridSearchFoodSecurity.R.

    • R/DescriptiveTab.RAn R script that imports the prepared data set (file R/dataFoodNutrition.R) and creates Table 1 of the paper ("Descriptive statistics", file paper/tables/DescriptiveStat.tex) as LaTeX file.

    • R/Estimations.RAn R script that imports the prepared data set (file R/dataFoodNutrition.R), conducts all the analyses presented in the paper, creates Tables 2 and 3 of the paper ("OLS and IV regression results of the conditional associations between organic farming and outcomes" and "OLS and IV regression results of the conditional associations between organic farming and mediating outcomes", LaTeX files paper/tables/estMainReg.tex and paper/tables/estMedReg.tex), creates Figures 1 and 2 of the paper ("Estimated conditional associations of organic farming with outcomes" and "Estimated conditional associations of organic farming with mediating outcomes", 12 PDF files paper/figures/*.pdf), and 45 Tables that are included in the Supplementary Information: 36 tables with detailed regression results (LaTeX files paper/tables/tabels/est*.tex), one table with results of the first-stage probit regression (LaTeX file paper/tables/tabels/estProbit.tex), 6 tables with detailed regression results of estimations for testing the exogeneity of the instrument as suggested by Di Falco et al. (2011) (LaTeX files paper/tables/tabels/estOLS*Falco.tex), and 2 tables with coefficient bounds obtained as suggested by Oster (2019) (LaTeX files paper/tables/tabels/Oster*.tex).

    • R/GridSearch.R: An R script that re-runs our regression analyses with different units of measurement of the IHS-transformed variables, calculates various indicators that can be used to assess the appropriateness of the different units of measurement as suggested by Aïhounton and Henningsen (2021), and creates 28 tables that are included in the Supplementary Information (LaTeX files paper/tables/tabels/grid*.tex).

    • R/functions/calcOsterBounds.R: An R script that defines the R function calcOsterBounds(), which calculates coefficient bounds using the method suggested by Oster (2019). This function is used by the R script R/Estimations.R.

    • R/functions/calcSemiElaOrg.R: An R script that defines the R function calcSemiElaOrg(), which calculates the semi-elasticity of various log-transformed or IHS-transformed variables with respect to the dummy variable for organic farming. This function is used by the R scripts R/Estimations.R and R/GridSearch.R.

    • R/functions/createFormula.R: An R script that defines the R function createFormula(), which creates the regression formulas for the various empirical analyses that are presented in the paper. This function is used by the R scripts R/Estimations.R and R/GridSearch.R.

    • R/functions/functionsTables.R: An R script that defines various R functions that are used to create tables in LaTeX format. These functions are used by the R scripts R/Estimations.R and R/GridSearch.R.

    • R/functions/predR2.R: An R script that defines the R function predR2(), which calculates the predictive R-squared value. This R script has been obtained from the replication package of the article: Aïhounton, G. B. D. and Henningsen, A. (2021). Units of measurement and the inverse hyperbolic sine transformation. The Econometrics Journal, 24(2):334–351. https://doi.org/10.1093/ectj/utaa032. The function is a slightly modified version of the code available at https://tomhopper.me/2014/05/16/can-we-do-better-than-r-squared/. This function is used by the R script R/GridSearch.R.

    • paper/figures/*.pdf: 12 PDF files that are the (sub)figures of Figures 1 and 2 of the paper ("Estimated conditional associations of organic farming with outcomes" and "Estimated conditional associations of organic farming with mediating outcomes"). These 12 files are created by the R script R/Estimations.R.

    • paper/tables/DescriptiveStat.tex: A LaTeX file that creates Table 1 of the paper ("Descriptive statistics"). This file is created by the R script R/DescriptiveTab.R.

    • paper/tables/estMainReg.tex: A LaTeX file that creates Table 2 of the paper ("OLS and IV regression results of the conditional associations between organic farming and outcomes"). This file is created by the R script R/Estimations.R.

    • paper/tables/estMedReg.tex: A LaTeX file that creates Table 3 of the paper ("OLS and IV regression results of the conditional associations between organic farming and mediating outcomes"). This file is created by the R script R/Estimations.R.

    • paper/tables/tabels/est*.tex: 36 LaTeX files that create the 36 tables in the Supplementary Information that present detailed regression results. These 36 files are created by the R script R/Estimations.R.

    • paper/tables/tabels/estProbit.tex: A LaTeX file that creates the table in the Supplementary Information that presents the results of the first-stage probit regression. This file is created by the R script R/Estimations.R.

    • paper/tables/tabels/estOLS*Falco.tex: 6 LaTeX files that create the 6 tables in the Supplementary Information that present detailed regression results for testing the exogeneity of the instrument as suggested by Di Falco et al. (2011). These 6 files are created by the R script R/Estimations.R.

    • paper/tables/tabels/Oster*.tex: 2 LaTeX files that create the 2 tables in the Supplementary Information that present coefficient bounds obtained as suggested by Oster (2019). These 2 files are created by the R script R/Estimations.R.

    • paper/tables/tabels/grid*.tex: 28 LaTeX files that create the 28 tables in the Supplementary Information that present various indicators for assessing the appropriateness of different units of measurement of IHS-transformed variables, as suggested by Aïhounton and Henningsen (2021). These 28 files are created by the R script R/GridSearch.R.
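
    For orientation only, a minimal sketch of how the prepared data set could be loaded and an instrumental-variable regression run with the "AER" and "sandwich" packages named above. The column names (outcome, organic, distance) are hypothetical placeholders, not variables from the actual data set, and this is not the authors' estimation code.

    # Minimal sketch; run from the root directory of the replication package.
    library(AER)       # provides ivreg() for instrumental-variable estimation
    library(sandwich)  # provides heteroskedasticity-robust covariance estimators
    
    dat <- read.csv("R/dataPrepared.csv")
    
    # Hypothetical IV specification: regressors before "|", instruments after it.
    fit <- ivreg(outcome ~ organic | distance, data = dat)
    
    # Robust inference plus weak-instrument and related diagnostics.
    summary(fit, vcov. = vcovHC(fit), diagnostics = TRUE)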

  16. MovieLens ratings

    • kaggle.com
    zip
    Updated Apr 17, 2024
    Cite
    Khoa Hoàngg (2024). MovieLens ratings [Dataset]. https://www.kaggle.com/datasets/khoahongg/movielens-ratings/code
    Explore at:
    zip (0 bytes). Available download formats
    Dataset updated
    Apr 17, 2024
    Authors
    Khoa Hoàngg
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    Each folder contains the following files:

    • train.csv: A CSV file that contains the training data.

    • test.csv: A CSV file that contains the testing data.

    • movie_to_index.pkl: A Python pickle file that contains a dictionary mapping a movie_id to its corresponding index in the similarity matrix.

    • user_to_index.pkl: A Python pickle file that contains a dictionary mapping a user_id to an index.

    • rating_matrix.npy: A .npy file that contains the rating matrix of the training data: \( \text{rating\_matrix}[u, i] = r_{\text{user\_at\_index\_}u,\, \text{movie\_at\_index\_}i} \)

    • similarity_matrix.npy: A .npy file that contains a precomputed similarity matrix between the movies in the training data: \( \text{similarity\_matrix}[i, j] = \text{purecosine}(R_{\text{movie\_at\_index\_}i}, R_{\text{movie\_at\_index\_}j}) \)

    • qtus.pkl: A Python pickle file that contains a dictionary. Keys: pairs (u, t) of user_index and movie_index. Values: indexes of the movies rated by u, sorted by similarity to t in descending order, so that for a neighborhood size k: \( \text{qtus}[(u,t)][:k] = Q_t(u) \)

    Loading the Similarity Matrix

    import numpy as np
    
    # Load the similarity matrix
    similarity_matrix = np.load('path_to_your_folder/similarity_matrix.npy')
    

    Loading the Dictionaries

    import pickle
    
    # Load the movie_to_index dictionary
    with open('path_to_your_folder/movie_to_index.pkl', 'rb') as f:
      movie_to_index = pickle.load(f)
    
    # Load the user_to_index dictionary
    with open('path_to_your_folder/user_to_index.pkl', 'rb') as f:
      user_to_index = pickle.load(f)
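
    Putting the pieces together: the files above contain everything needed for item-based neighborhood prediction. One standard way to combine them (an assumption about the intended use, not documentation from the dataset author) is a similarity-weighted average over the k most similar movies the user has rated:

    \( \hat{r}_{u,t} = \dfrac{\sum_{j \in Q_t(u)} \text{similarity\_matrix}[t, j] \cdot \text{rating\_matrix}[u, j]}{\sum_{j \in Q_t(u)} \left| \text{similarity\_matrix}[t, j] \right|} \), where \( Q_t(u) = \text{qtus}[(u,t)][:k] \).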
    
  17. m

    Dataset to run examples in SmartPLS 3 (teaching and learning)

    • data.mendeley.com
    • narcis.nl
    Updated Mar 7, 2019
    + more versions
    Cite
    Diógenes de Bido (2019). Dataset to run examples in SmartPLS 3 (teaching and learning) [Dataset]. http://doi.org/10.17632/4tkph3mxp9.2
    Explore at:
    Dataset updated
    Mar 7, 2019
    Authors
    Diógenes de Bido
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This zip file contains:

    • 3 .zip files = projects to be imported into SmartPLS 3:
      - DLOQ-A model with 7 dimensions
      - DLOQ-A model with a second-order latent variable
      - ECSI model (Tenenhaus et al., 2005) to exemplify direct, indirect and total effects, as well as the importance-performance map and moderation with continuous variables
      - ECSI model (Sanches, 2013) to exemplify MGA (multi-group analysis)

    • 5 files (csv, txt) with data to run 7 examples in SmartPLS 3

    Note:
    - DLOQ-A = new dataset (ours)
    - ECSI-Tenenhaus et al. [model for mediation and moderation] = available at http://www.smartpls.com > Resources > SmartPLS Project Examples
    - ECSI-Sanches [dataset for MGA] = available in R via library(plspm) > data(satisfaction); see the sketch below
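
    As the note above indicates, the ECSI-Sanches data can be pulled straight from R. A minimal sketch for exporting them as a CSV file that SmartPLS 3 can import (the output file name is arbitrary):

    # Load the ECSI satisfaction data shipped with the 'plspm' package
    # and export them as CSV for import into SmartPLS 3.
    install.packages("plspm")   # install once if needed
    library(plspm)
    data(satisfaction)          # customer-satisfaction data used for the MGA example
    write.csv(satisfaction, file = "satisfaction.csv", row.names = FALSE)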

  18. f

    Data Sheet 1_An investigation of the load-velocity relationship between...

    • frontiersin.figshare.com
    • figshare.com
    csv
    Updated May 30, 2025
    + more versions
    Cite
    Ziwei Zhu; Jiayong Chen; Ruize Sun; Renchen Wang; Jiaxin He; Wenfeng Zhang; Weilong Lin; Duanying Li (2025). Data Sheet 1_An investigation of the load-velocity relationship between flywheel eccentric and barbell training methods.csv [Dataset]. http://doi.org/10.3389/fpubh.2025.1579291.s001
    Explore at:
    csv. Available download formats
    Dataset updated
    May 30, 2025
    Dataset provided by
    Frontiers
    Authors
    Ziwei Zhu; Jiayong Chen; Ruize Sun; Renchen Wang; Jiaxin He; Wenfeng Zhang; Weilong Lin; Duanying Li
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Objective: Flywheel resistance training (FRT) is a training modality for developing lower-limb athletic performance. The relationship between FRT load parameters and barbell squat loading remains ambiguous in practice, resulting in experience-driven load selection during training. This study therefore investigates optimal FRT loading for specific training goals (maximal strength, power, muscular endurance) by analyzing concentric velocity at varying percentages of barbell 1RM (%1RM), establishes correlations between flywheel load, velocity, and %1RM, and integrates force-velocity profiling to develop evidence-based guidelines for individualized load prescription.

    Methods: Thirty-nine participants completed 1RM barbell squats to establish submaximal loads (20–90% 1RM). Concentric velocities were monitored via a linear-position transducer (GymAware) for FRT inertial-load quantification, with test–retest measurements confirming protocol reliability. Simple and multiple linear regression modeled the load-velocity interactions and multivariable relationships, while Pearson's r and R² quantified correlations and model fit. Predictive equations estimated inertial loads (kg·m²), supported by ICC(2,1) and CV assessments of relative/absolute reliability.

    Results: A strong inverse correlation (r = −0.88) and high linearity (R² = 0.78) emerged between rotational inertia and velocity. The multivariate model demonstrated an excellent fit (R² = 0.81) and a robust correlation (r = 0.90), yielding the predictive equation y = 0.769 − 0.846v + 0.002·kg.

    Conclusion: The strong linear inertial load-velocity relationship enables individualized load prescription through regression equations incorporating velocity and strength parameters. While FRT demonstrates limited efficacy for developing speed-strength, its longitudinal periodization effects require further investigation. Optimal FRT loading ranges were identified: 40–60% 1RM for strength-speed, 60–80% 1RM for power development, and 80–100%+ 1RM for maximal strength adaptations.
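
    Purely as an illustration of the modeling step described above: the sketch below fits a multiple linear regression of inertial load on velocity and load. The file name and column names are hypothetical placeholders, and treating the kg term as the prescribed barbell load is an assumption; the coefficients reported in the paper come from the authors' data, not from this code.

    # Hypothetical sketch of the multiple linear regression described above.
    # 'squat_data.csv' and its columns (inertia, velocity, load_kg) are
    # placeholders, not the study's actual data.
    df <- read.csv("squat_data.csv")
    
    fit <- lm(inertia ~ velocity + load_kg, data = df)
    summary(fit)$r.squared  # model fit (the paper reports R² = 0.81)
    coef(fit)               # intercept and slopes for load prescription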

  19. Data files for: Huston, D.C. et al. 2021. Stable isotope signatures of an...

    • zenodo.org
    bin, csv
    Updated Sep 20, 2021
    Cite
    Daniel Colgan Huston; Daniel Colgan Huston (2021). Data files for: Huston, D.C. et al. 2021. Stable isotope signatures of an acanthocephalan and trematode from the herbivorous marine fish Kyphosus bigibbus (Perciformes: Kyphosidae). Journal of Parasitology. 107: 726–730 [Dataset]. http://doi.org/10.5281/zenodo.4886698
    Explore at:
    csv, bin. Available download formats
    Dataset updated
    Sep 20, 2021
    Dataset provided by
    Zenodo: http://zenodo.org/
    Authors
    Daniel Colgan Huston; Daniel Colgan Huston
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data files for the paper: Huston, D.C. et al. 2021. Stable isotope signatures of an acanthocephalan and trematode from the herbivorous marine fish Kyphosus bigibbus (Perciformes: Kyphosidae). Journal of Parasitology 107(5): 726–730.

    Includes the raw data, .csv files for importing the data into R, an R script file, and the Excel spreadsheet used to create Figure 1.

  20. Z

    Dataset for Repeated double cross validation applied to the PCA-LDA...

    • data.niaid.nih.gov
    Updated Dec 2, 2020
    Cite
    Pascut, Devis (2020). Dataset for Repeated double cross validation applied to the PCA-LDA classification of SERS spectra: a case study with serum samples from hepatocellular carcinoma patients [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_4277796
    Explore at:
    Dataset updated
    Dec 2, 2020
    Dataset provided by
    Sergo, Valter
    Bonifacio, Alois
    Mitri, Elisa
    Crocè, Lory Saveria
    Di Silvestre, Alessia
    Tiribelli, Claudio
    Gurian, Elisa
    Pascut, Devis
    Giuffrè, Mauro
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains all the spectra used in the paper "Repeated double cross validation applied to the PCA-LDA classification of SERS spectra: a case study with serum samples from hepatocellular carcinoma patients", plus the R code to import the TXT (ASCII) files into a dataset, preprocess the data, set up and cross-validate the PCA-LDA model, and generate the figures shown in the paper.

    Data are available in 2 different formats:

    • 1 compressed archive ("dataset.zip") containing all the 144 TXT files (1 file = 1 spectrum)

    • 1 single CSV file (“dataset.csv”) with all the 144 spectra in the form of a table. The data are structured as follows, with each row being 1 spectrum, preceded by its metadata: "acquisition_date", "substrate_batch", "class", "sample_code".

    The R code is available as a single file, "Rcode.R"; a generic sketch of the import-and-analysis workflow is given below.
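
    For orientation only (this is not the authors' Rcode.R), a minimal sketch of how the CSV variant might be read and fed into a PCA-LDA pipeline, using the four metadata columns described above:

    # Read the table of spectra, split off the metadata columns,
    # and run PCA followed by LDA on the resulting scores.
    library(MASS)  # provides lda()
    
    dat <- read.csv("dataset.csv")
    meta_cols <- c("acquisition_date", "substrate_batch", "class", "sample_code")
    meta <- dat[, meta_cols]
    spectra <- as.matrix(dat[, setdiff(names(dat), meta_cols)])
    
    # PCA on the spectra, then LDA on the first few principal components;
    # the number of components (10) is an arbitrary choice for illustration.
    pca <- prcomp(spectra, center = TRUE)
    model <- lda(pca$x[, 1:10], grouping = meta$class)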

