Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
CSV file used to create the Gephi file / visualization, "All Artists at the Tate Modern". Original data set retrieved from: https://github.com/tategallery/collection
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
The score of this CSV file is 0.37799. This is the number to beat, so make sure your score doesn't fall below it.
This is the Titanic CSV file, but everyone survives.
I also have another CSV file, https://www.kaggle.com/brendan45774/test-file, which may help you on your mission to get a perfect score.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
CSV file used for the visualization, "Data Visualization: Claes Oldenburg and Joseph Beuys" (PDF). Original data set retrieved from: https://github.com/tategallery/collection
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Results from a BoF session about visualizing biomedical data.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Time-Series Matrix (TSMx): A visualization tool for plotting multiscale temporal trends

TSMx is an R script that was developed to facilitate multi-temporal-scale visualizations of time-series data. The script requires only a two-column CSV of years and values to plot the slope of the linear regression line for all possible year combinations from the supplied temporal range. The outputs include a time-series matrix showing slope direction based on the linear regression, slope values plotted with colors indicating magnitude, and results of a Mann-Kendall test.

The start year is indicated on the y-axis and the end year is indicated on the x-axis. In the example below, the cell in the top-right corner is the direction of the slope for the temporal range 2001–2019. The red line corresponds with the temporal range 2010–2019 and an arrow is drawn from the cell that represents that range. One cell is highlighted with a black border to demonstrate how to read the chart: that cell represents the slope for the temporal range 2004–2014.

This publication entry also includes an Excel template that produces the same visualizations without a need to interact with any code, though minor modifications will be needed to accommodate year ranges other than what is provided.

TSMx for R was developed by Georgios Boumis; TSMx was originally conceptualized and created by Brad G. Peter in Microsoft Excel. Please refer to the associated publication: Peter, B.G., Messina, J.P., Breeze, V., Fung, C.Y., Kapoor, A. and Fan, P., 2024. Perspectives on modifiable spatiotemporal unit problems in remote sensing of agriculture: evaluating rice production in Vietnam and tools for analysis. Frontiers in Remote Sensing, 5, p.1042624. https://www.frontiersin.org/journals/remote-sensing/articles/10.3389/frsen.2024.1042624

TSMx sample chart from the supplied Excel template. Data represent the productivity of rice agriculture in Vietnam as measured via EVI (enhanced vegetation index) from the NASA MODIS data product (MOD13Q1.V006).
TSMx R script:

# import packages
library(dplyr)
library(readr)
library(ggplot2)
library(tibble)
library(tidyr)
library(forcats)
library(Kendall)

options(warn = -1) # disable warnings

# read data (.csv file with "Year" and "Value" columns)
data <- read_csv("EVI.csv")

# prepare row/column names for output matrices
years <- data %>% pull("Year")
r.names <- years[-length(years)]
c.names <- years[-1]
years <- years[-length(years)]

# initialize output matrices
sign.matrix <- matrix(data = NA, nrow = length(years), ncol = length(years))
pval.matrix <- matrix(data = NA, nrow = length(years), ncol = length(years))
slope.matrix <- matrix(data = NA, nrow = length(years), ncol = length(years))

# function to return remaining years given a start year
getRemain <- function(start.year) {
  years <- data %>% pull("Year")
  start.ind <- which(data[["Year"]] == start.year) + 1
  remain <- years[start.ind:length(years)]
  return(remain)
}

# function to subset data for a start/end year combination
splitData <- function(end.year, start.year) {
  keep <- which(data[['Year']] >= start.year & data[['Year']] <= end.year)
  batch <- data[keep,]
  return(batch)
}

# function to fit linear regression and return slope direction
fitReg <- function(batch) {
  trend <- lm(Value ~ Year, data = batch)
  slope <- coefficients(trend)[[2]]
  return(sign(slope))
}

# function to fit linear regression and return slope magnitude
fitRegv2 <- function(batch) {
  trend <- lm(Value ~ Year, data = batch)
  slope <- coefficients(trend)[[2]]
  return(slope)
}

# function to implement Mann-Kendall (MK) trend test and return significance
# the test is implemented only for n >= 8
getMann <- function(batch) {
  if (nrow(batch) >= 8) {
    mk <- MannKendall(batch[['Value']])
    pval <- mk[['sl']]
  } else {
    pval <- NA
  }
  return(pval)
}

# function to return slope direction for all combinations given a start year
getSign <- function(start.year) {
  remaining <- getRemain(start.year)
  combs <- lapply(remaining, splitData, start.year = start.year)
  signs <- lapply(combs, fitReg)
  return(signs)
}

# function to return MK significance for all combinations given a start year
getPval <- function(start.year) {
  remaining <- getRemain(start.year)
  combs <- lapply(remaining, splitData, start.year = start.year)
  pvals <- lapply(combs, getMann)
  return(pvals)
}

# function to return slope magnitude for all combinations given a start year
getMagn <- function(start.year) {
  remaining <- getRemain(start.year)
  combs <- lapply(remaining, splitData, start.year = start.year)
  magns <- lapply(combs, fitRegv2)
  return(magns)
}

# retrieve slope direction, MK significance, and slope magnitude
signs <- lapply(years, getSign)
pvals <- lapply(years, getPval)
magns <- lapply(years, getMagn)

# fill in output matrices
dimension <- nrow(sign.matrix)
for (i in 1:dimension) {
  sign.matrix[i, i:dimension] <- unlist(signs[i])
  pval.matrix[i, i:dimension] <- unlist(pvals[i])
  slope.matrix[i, i:dimension] <- unlist(magns[i])
}
sign.matrix <-...
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
The data are located at the following link:
https://divvy-tripdata.s3.amazonaws.com/index.html
Data are organized in CSV files and updated monthly. For this analysis I used the 12 months of data up to April 2022. Each CSV file contains 13 columns. A loading sketch follows below.
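A minimal loading sketch, assuming pandas is available and the monthly files are named like 202104-divvy-tripdata.csv covering May 2021 through April 2022; both the naming pattern and the exact range are assumptions, so adjust them to the files actually downloaded from the index above.

import pandas as pd

# read the 12 monthly Divvy trip CSVs and stack them into one table
months = pd.period_range("2021-05", "2022-04", freq="M")  # assumed range
frames = [pd.read_csv(f"{m.strftime('%Y%m')}-divvy-tripdata.csv") for m in months]
trips = pd.concat(frames, ignore_index=True)
print(trips.shape)  # expect 13 columns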
No license specified: https://academictorrents.com/nolicensespecified
The Hubway trip history data includes every trip taken through Nov 2013, with date, time, origin and destination stations, plus the bike number and more. Data from 2011/07 through 2013/11.

The Hubway trip history data: every time a Hubway user checks a bike out from a station, the system records basic information about the trip. Those anonymous data points have been exported into the spreadsheet. Please note, all private data including member names have been removed from these files.

What can the data tell us? The CSV file contains data for every Hubway trip from the system launch on July 28th, 2011, through the end of September, 2012. The file contains the data points listed below for each trip. We've also posed some of the questions you could answer with this dataset (one approach is sketched below); we're sure you'll have lots more of your own.

Duration - Duration of trip. What's the average trip duration for annual members vs. casual users?
Start date - Includes start date and time. What are the peak Hubway hours?
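A minimal sketch of both posed questions, assuming pandas; the file name hubway_trips.csv and the "Member type" column are assumptions (only "Duration" and "Start date" are documented above), so check the CSV header for the actual names.

import pandas as pd

trips = pd.read_csv("hubway_trips.csv")  # assumed file name

# average trip duration for annual members vs. casual users
print(trips.groupby("Member type")["Duration"].mean())  # "Member type" is assumed

# peak Hubway hours: count trips by hour of the start time
trips["start_hour"] = pd.to_datetime(trips["Start date"]).dt.hour
print(trips["start_hour"].value_counts().sort_index())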
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The CsvViewer is a component designed to read and display tabular data files. This component is useful for quickly visualizing CSV data produced as output of a CWL workflow.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository contains all necessary data to reproduce the participation of RMLStreamer with support of RML-view-to-CSV in the KGCW 2024 Challenge, as well as the reported results.
For more information we refer to the related paper:
E. de Vleeschauwer, B. De Meester, RMLStreamer supported by RML-view-to-CSV in the performance track of the KGCW Challenge 2024, in: Proceedings of the 5th International Workshop on Knowledge Graph Construction (KGCW 2024) co-located with the 21st Extended Semantic Web Conference (ESWC 2024), 2024.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains the raw experimental data and supplementary materials for the "Asymmetry Effects in Virtual Reality Rod and Frame Test". The materials included are:
• Raw Experimental Data: older.csv and young.csv
• Mathematica Notebooks: a collection of Mathematica notebooks used for data analysis and visualization. These notebooks provide scripts for processing the experimental data, performing statistical analyses, and generating the figures used in the project.
• Unity Package: a Unity package featuring a sample scene related to the project. The scene was built using Unity’s Universal Rendering Pipeline (URP). To utilize this package, ensure that URP is enabled in your Unity project. Instructions for enabling URP can be found in the Unity URP Documentation.
Requirements:
• For Data Files: software capable of opening CSV files (e.g., Microsoft Excel, Google Sheets, or any programming language that can read CSV formats).
• For Mathematica Notebooks: Wolfram Mathematica software to run and modify the notebooks.
• For Unity Package: Unity Editor version compatible with URP (2019.3 or later recommended). URP must be installed and enabled in your Unity project.
Usage Notes:
• The dataset facilitates comparative studies between different age groups based on the collected variables.
• Users can modify the Mathematica notebooks to perform additional analyses.
• The Unity scene serves as a reference to the project setup and can be expanded or integrated into larger projects.
Citation: Please cite this dataset when using it in your research or publications.
Mapping incident locations from a CSV file in a web map (YouTube video).
This data set includes gravity measurements for the Island of Hawai`i collected as the source data for "Deep magmatic structures of Hawaiian volcanoes, imaged by three-dimensional gravity models" (Kauahikaua, Hildenbrand, and Webring, 2000). Data for 3,611 observations are stored as a single table and disseminated in .CSV format. Each observation record includes values for field station ID, latitude and longitude (in both Old Hawaiian and WGS84 projections), elevation, and Observed Gravity value. See associated publication for reduction and interpretation of these data.
Please refer to Yelp for the original JSON file and other datasets. This dataset was created in June 2020 by Yelp. Usage of this dataset should be for academic purposes.
I read the JSON file in Python and converted it to three CSV files.
Please read Dataset_User_Agreement.pdf before you proceed with all data files.
It would be interesting to see how virtual services were offered by restaurants during COVID in 2020 and how restaurant businesses strove to communicate and connect with customers on Yelp. There is no numeric data to play with; however, it's still valuable to do some visualizations.
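A minimal sketch of the JSON-to-CSV conversion described above, assuming the Yelp source files are JSON Lines (one JSON object per line) and pandas is available; the input file names and the trio of tables are illustrative, not the dataset's actual names.

import pandas as pd

# convert each JSON Lines file into a CSV file
for name in ["business", "review", "user"]:  # illustrative file names
    df = pd.read_json(f"yelp_academic_dataset_{name}.json", lines=True)
    df.to_csv(f"{name}.csv", index=False)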
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This is a network of 14 systematic reviews on the salt controversy and their included studies. Each edge in the network represents an inclusion from one systematic review to an article. Systematic reviews were collected from Trinquart et al. (Trinquart, L., Johns, D. M., & Galea, S. (2016). Why do we think we know what we know? A metaknowledge analysis of the salt controversy. International Journal of Epidemiology, 45(1), 251–260. https://doi.org/10.1093/ije/dyv184).

FILE FORMATS
1) Article_list.csv - Unicode CSV
2) Article_attr.csv - Unicode CSV
3) inclusion_net_edges.csv - Unicode CSV
4) potential_inclusion_link.csv - Unicode CSV
5) systematic_review_inclusion_criteria.csv - Unicode CSV
6) Supplementary Reference List.pdf - PDF

ROW EXPLANATIONS
1) Article_list.csv - Each row describes a systematic review or included article.
2) Article_attr.csv - Each row is the attributes of a systematic review/included article.
3) inclusion_net_edges.csv - Each row represents an inclusion from a systematic review to an article.
4) potential_inclusion_link.csv - Each row shows the available evidence base of a systematic review.
5) systematic_review_inclusion_criteria.csv - Each row is the inclusion criteria of a systematic review.
6) Supplementary Reference List.pdf - Each item is a bibliographic record of a systematic review/included paper.

COLUMN HEADER EXPLANATIONS
1) Article_list.csv:
ID - Numeric ID of a paper
paper assigned ID - ID of the paper from Trinquart et al. (2016)
Type - Systematic review / primary study report
Study Groupings - Groupings for related primary study reports from the same report, from Trinquart et al. (2016) (if applicable, otherwise blank)
Title - Title of the paper
year - Publication year of the paper
Attitude - Scientific opinion about the salt controversy from Trinquart et al. (2016)
Doi - DOI of the paper (if applicable, otherwise blank)
Retracted (Y/N) - Whether the paper was retracted or withdrawn (Y); blank if not retracted or withdrawn
2) Article_attr.csv:
ID - Numeric ID of a paper
year - Publication year
Attitude - Scientific opinion about the salt controversy from Trinquart et al. (2016)
Type - Systematic review / primary study report
3) inclusion_net_edges.csv:
citing_ID - The numeric ID of a systematic review
cited_ID - The numeric ID of the included article
4) potential_inclusion_link.csv - This data was translated from the Sankey diagram given in Trinquart et al. (2016) as Web Figure 4. Each row indicates a systematic review and each column indicates a primary study. In the matrix, "p" indicates that a given primary study had been published as of the search date of a given systematic review.
5) systematic_review_inclusion_criteria.csv:
ID - The numeric ID of a systematic review
paper assigned ID - ID of the paper from Trinquart et al. (2016)
attitude - Its scientific opinion about the salt controversy from Trinquart et al. (2016)
No. of studies included - Number of articles included in the systematic review
Study design - Study designs to include, per inclusion criteria
population - Populations to include, per inclusion criteria
Exposure/Intervention - Exposures/Interventions to include, per inclusion criteria
outcome - Study outcomes required for inclusion, per inclusion criteria
Language restriction - Report languages to include, per inclusion criteria
follow-up period - Follow-up period required for inclusion, per inclusion criteria
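The edge and attribute files documented above map directly onto a directed graph; a minimal sketch with pandas and networkx, using the citing_ID/cited_ID and ID columns described in the column header explanations:

import pandas as pd
import networkx as nx

edges = pd.read_csv("inclusion_net_edges.csv")
attrs = pd.read_csv("Article_attr.csv").set_index("ID")

# one directed edge per inclusion: systematic review -> included article
G = nx.from_pandas_edgelist(edges, source="citing_ID", target="cited_ID",
                            create_using=nx.DiGraph)
# attach year / attitude / type attributes to each node
nx.set_node_attributes(G, attrs.to_dict(orient="index"))
print(G.number_of_nodes(), G.number_of_edges())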
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Categorical scatterplots with R for biologists: a step-by-step guide
Benjamin Petre1, Aurore Coince2, Sophien Kamoun1
1 The Sainsbury Laboratory, Norwich, UK; 2 Earlham Institute, Norwich, UK
Weissgerber and colleagues (2015) recently stated that ‘as scientists, we urgently need to change our practices for presenting continuous data in small sample size studies’. They called for more scatterplot and boxplot representations in scientific papers, which ‘allow readers to critically evaluate continuous data’ (Weissgerber et al., 2015). In the Kamoun Lab at The Sainsbury Laboratory, we recently implemented a protocol to generate categorical scatterplots (Petre et al., 2016; Dagdas et al., 2016). Here we describe the three steps of this protocol: 1) formatting of the data set in a .csv file, 2) execution of the R script to generate the graph, and 3) export of the graph as a .pdf file.
Protocol
• Step 1: format the data set as a .csv file. Store the data in a three-column Excel file as shown in the PowerPoint slide. The first column ‘Replicate’ indicates the biological replicates. In the example, the month and year during which the replicate was performed is indicated. The second column ‘Condition’ indicates the conditions of the experiment (in the example, a wild type and two mutants called A and B). The third column ‘Value’ contains the continuous values. Save the Excel file as a .csv file (File -> Save as -> in ‘File Format’, select .csv). This .csv file is the input file to import into R.
• Step 2: execute the R script (see Notes 1 and 2). Copy the script shown in the PowerPoint slide and paste it into the R console. Execute the script. In the dialog box, select the input .csv file from step 1. The categorical scatterplot will appear in a separate window. Dots represent the values for each sample; colors indicate replicates. Boxplots are superimposed; black dots indicate outliers.
• Step 3: save the graph as a .pdf file. Adjust the window to your convenience and save the graph as a .pdf file (File -> Save as). See the PowerPoint slide for an example.
Notes
• Note 1: install the ggplot2 package. The R script requires the package ‘ggplot2’ to be installed. To install it, Packages & Data -> Package Installer -> enter ‘ggplot2’ in the Package Search space and click on ‘Get List’. Select ‘ggplot2’ in the Package column and click on ‘Install Selected’. Install all dependencies as well.
• Note 2: use a log scale for the y-axis. To use a log scale for the y-axis of the graph, use the command line below in place of command line #7 in the script.
graph + geom_boxplot(outlier.colour = 'black', colour = 'black') +
  geom_jitter(aes(col = Replicate)) +
  scale_y_log10() +
  theme_bw()
References
Dagdas YF, Belhaj K, Maqbool A, Chaparro-Garcia A, Pandey P, Petre B, et al. (2016) An effector of the Irish potato famine pathogen antagonizes a host autophagy cargo receptor. eLife 5:e10856.
Petre B, Saunders DGO, Sklenar J, Lorrain C, Krasileva KV, Win J, et al. (2016) Heterologous Expression Screens in Nicotiana benthamiana Identify a Candidate Effector of the Wheat Yellow Rust Pathogen that Associates with Processing Bodies. PLoS ONE 11(2):e0149035.
Weissgerber TL, Milic NM, Winham SJ, Garovic VD (2015) Beyond Bar and Line Graphs: Time for a New Data Presentation Paradigm. PLoS Biol 13(4):e1002128.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
AiFi Store is an autonomous store for a cashier-less shopping experience, achieved by multi-modal sensing (vision, weight, and location modalities). AiFi Nano store layout (Fig 1) (Image credits: AIM3S research paper).
Overview: The store is organized in gondolas; each gondola has shelves that hold the products, and each shelf has weight sensor plates. Data from these weight sensor plates are used to find the event trigger (pick up, put down, or no event), from which we can find the weight of the product picked.
A gondola is similar to a vertical fixture of horizontal shelves in any normal store; in this case there are 5 to 6 shelves in a gondola. Every shelf in turn is composed of weight-sensing plates (the weight-sensing modality); there are around 12 plates on each shelf.
Every plate has a sampling rate of 60 Hz, so 60 samples are collected every second from each plate.
A pick-up event can be observed and marked when the weight sensor reading decreases with time; the reading increases with time when a put-down event happens.
Event Detection:
An event is said to be detected if the moving variance calculated from the raw weight sensor readings exceeds a set threshold of 10,000 g^2 (0.01 kg^2) over a sliding window of 0.5 seconds, half of the sensors' 1-second reporting interval; see the sketch after the event list below.
There are 3 types of events:
Pick Up Event (Fig 2) = object being taken by the customer from the particular gondola and shelf
Put Down Event (Fig 3) = object being placed back by the customer on that particular gondola and shelf
No Event (Fig 4) = no object being picked up from that shelf
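A minimal sketch of this moving-variance trigger, assuming pandas and the five-column weight.csv layout described below (timestamp, reading in grams, gondola, shelf, plate number); at 60 Hz, a 0.5 s window is 30 samples. The file name and single-plate framing are assumptions.

import pandas as pd

THRESHOLD_G2 = 10_000  # 10,000 g^2 == 0.01 kg^2
WINDOW = 30            # 0.5 s of samples at 60 Hz

plate = pd.read_csv("weight.csv")  # one plate's readings, assumed file name
moving_var = plate["reading"].rolling(WINDOW).var()
plate["event"] = moving_var > THRESHOLD_G2
# the sign of the weight change separates pick up (reading drops) from put down (reading rises)
print(plate.loc[plate["event"], "timestamp"].head())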
NOTE:
1. The Python script must be in the same folder as the weight.csv files; the .csv files should not be placed in other subdirectories.
2. The videos for the corresponding weight sensor data can be found in the "Videos" folder in the repository and are named similarly to their corresponding .csv files.
3. Each video file consists of video data from 13 different camera angles.
Details of the weight sensor files:
These weight.csv files (baseline cases and team-particular cases) are from the AIFI CPS IoT 2020 week. There are over 50 cases in total, and each file has 5 columns (Fig 5): timestamp, reading (in grams), gondola, shelf, plate number.
Each of these files holds around 2-5 minutes of data indexed by timestamp. To unpack date and time from a timestamp, use the datetime module from Python, as in the sketch below.
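A minimal sketch of that unpacking, assuming the timestamps are Unix epoch seconds; adjust if the files use another unit (e.g. milliseconds).

from datetime import datetime

ts = 1588888888                    # example value; read real values from the timestamp column
print(datetime.fromtimestamp(ts))  # -> local date and time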
Details of the product.csv files:
There is a product.csv file for each test case; these files provide detailed information about the product name, the product location in the store (gondola number, shelf number and plate number), the product weight (in grams), and a link to an image of the product.
Instructions to run the script:
To analyse the weight.csv files using the Python script and plot the time-series plot for the corresponding files:
Download the dataset.
Make sure the Python/Jupyter notebook file is placed in the same directory as the .csv files.
Install the requirements: $ pip3 install -r requirements.txt
Run the Python script Plot.py: $ python3 Plot.py
After the script has run successfully you will find, for each weight.csv file, a corresponding folder containing the figures (weight vs. timestamp) named in the format
gondola_number,shelf_number.png
Ex: 1,1.png (Fig 4) (time-series graph)
Instructions to run the Jupyter Notebook:
Run the Plot.ipynb file using Jupyter Notebook by placing the .csv files in the same directory as the Plot.ipynb script.
The latest National Statistics for England about the experience of patients in the NHS, produced by the Department of Health and the Care Quality Commission, in Excel and .csv format.
Full publications can be found in the patient experience statistics series.
Supporting documentation including a methodology paper is also available for this series.
<p class="gem-c-attachment_metadata"><span class="gem-c-attachment_attribute">MS Excel Spreadsheet</span>, <span class="gem-c-attachment_attribute">84 KB</span></p>
<p class="gem-c-attachment_metadata">This file may not be suitable for users of assistive technology.</p>
<details data-module="ga4-event-tracker" data-ga4-event='{"event_name":"select_content","type":"detail","text":"Request an accessible format.","section":"Request an accessible format.","index_section":1}' class="gem-c-details govuk-details govuk-!-margin-bottom-0" title="Request an accessible format.">
Request an accessible format.
If you use assistive technology (such as a screen reader) and need a version of this document in a more accessible format, please email <a href="mailto:publications@dhsc.gov.uk" target="_blank" class="govuk-link">publications@dhsc.gov.uk</a>. Please tell us what format you need. It will help us if you say what assistive technology you use.
<p class="gem-c-attachment_metadata"><span class="gem-c-attachment_attribute">MS Excel Spreadsheet</span>, <span class="gem-c-attachment_attribute">5.78 KB</span></p>
<p class="gem-c-attachment_metadata"><a class="govuk-link" aria-label="View Patient experience overall statistics: latest results online" href="/media/5a7b5374e5274a34770eaefc/results_csv_format.csv/preview">View online</a></p>
<p class="gem-c-attachment_metadata">This file may not be suitable for users of assistive technology.</p>
<details data-module="ga4-event-tr
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This archive contains the evaluation results of the master thesis 'Semantic Zoom With Immersive Detail View for ExplorViz'.
The evaluation is divided into a user evaluation of usability and user performance and a rendering performance evaluation.
The evaluation compares the version of ExplorViz with Semantic Zoom and without Semantic Zoom.
The complete user survey can be viewed in the PDF: 'Printed version of the survey - ExplorViz with Semantic Zoom.pdf'.
- The file 'survey_archive_277626.lsa' is exported from LimeSurvey and contains the survey and the responses.
- 'results-survey277626.csv' contains the results in CSV format.
- The file 'results-statistics.pdf' is a PDF that contains statistics about the survey results.
- The file 'results-all-answers-ExplorViz with Semantic Zoom.pdf' lists all the participants' answers in text format.
- The file 'allChartImages.zip' displays the results data in graphs.
As part of a performance evaluation of the frontend, a Python script using Selenium was used.
The results can be found in the csv files:
- 'performance_RendertimeTracegen - XXXL world with high communication2024-11-19--22-13-36-SZLongTerm'
- 'performance_RendertimeTracegen - XXXL world with high communication2024-11-19--22-09-24-NoSZLongTerm'
The Python script is split into two files:
- 'selenium_test.py'
- 'helpers.py'
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Geospatial_Coordinates.csv [Postal code, latitude & longitude of data points in Toronto]
FourSquareCategories.json [Categories and category IDs of the FourSquare API]
Processed_data_for_analysis.csv [Data file after data preparation, available for analysis]
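A minimal sketch of joining the coordinates onto the processed data with pandas; the shared postal-code key and the exact column names ("Postal code", "Latitude", "Longitude") are assumptions based on the file descriptions above.

import pandas as pd

coords = pd.read_csv("Geospatial_Coordinates.csv")
data = pd.read_csv("Processed_data_for_analysis.csv")

# attach latitude/longitude to each row by postal code (assumed shared key)
merged = data.merge(coords, on="Postal code", how="left")
print(merged[["Postal code", "Latitude", "Longitude"]].head())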
== Quick starts ==
Batch export podcast metadata to CSV files:
1) Export by search keyword: https://www.listennotes.com/podcast-datasets/keyword/
2) Export by category: https://www.listennotes.com/podcast-datasets/category/
== Quick facts ==
The most up-to-date and comprehensive podcast database available
All languages & all countries
Includes over 3,500,000 podcasts
Features 35+ data fields, such as basic metadata, global rank, RSS feed (with audio URLs), Spotify links, and more
Delivered in CSV format
== Data Attributes ==
See the full list of data attributes on this page: https://www.listennotes.com/podcast-datasets/fields/?filter=podcast_only
How to access podcast audio files: Our dataset includes RSS feed URLs for all podcasts. You can retrieve audio for over 170 million episodes directly from these feeds. With access to the raw audio, you’ll have high-quality podcast speech data ideal for AI training and related applications.
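A minimal sketch of pulling episode audio URLs from one RSS feed, using only the Python standard library; the feed URL is a placeholder for a URL taken from the dataset's RSS feed field.

import urllib.request
import xml.etree.ElementTree as ET

feed_url = "https://example.com/feed.xml"  # placeholder; use an RSS feed URL from the dataset
with urllib.request.urlopen(feed_url) as resp:
    root = ET.fromstring(resp.read())

# RSS 2.0 carries the audio file of each <item> in an <enclosure url="..."> element
audio_urls = [enc.get("url") for enc in root.iter("enclosure")]
print(audio_urls[:5])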
== Custom Offers ==
We can provide custom datasets based on your needs, such as language-specific data, daily/weekly/monthly update frequency, or one-time purchases.
We also provide a RESTful API at PodcastAPI.com
Contact us: hello@listennotes.com
== Need Help? ==
If you have any questions about our products, feel free to reach out at hello@listennotes.com.
== About Listen Notes, Inc. ==
Since 2017, Listen Notes, Inc. has provided the leading podcast search engine and podcast database.