This tutorial will teach you how to take time-series data from many field sites and create a shareable online map, where clicking on a field location brings you to a page with interactive graph(s).
The tutorial can be completed with a sample dataset (provided via a Google Drive link within the document) or with your own time-series data from multiple field sites.
Part 1 covers how to make interactive graphs in Google Data Studio and Part 2 covers how to link data pages to an interactive map with ArcGIS Online. The tutorial will take 1-2 hours to complete.
An example interactive map and data portal can be found at: https://temple.maps.arcgis.com/apps/View/index.html?appid=a259e4ec88c94ddfbf3528dc8a5d77e8
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
🎯 1. Define the Goal
Ask yourself: what do you want to do with the data?
Examples:
📊 Analyze sales, profit, and inventory
🧠 Predict car prices based on features
🧾 Build a car showroom management system (SQL/Flask)
🖥️ Create a dashboard showing cars, sales, and customers
Tools You Can Use

| Goal | Tools |
| ------------- | ----------------------------------------- |
| Data Creation | Excel / Python (Pandas) |
| Database | MySQL / SQLite / PostgreSQL |
| Dashboard | Power BI / Tableau / Streamlit / Flask |
| ML Models | scikit-learn (e.g., car price prediction) |
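As a minimal, hedged sketch of the "ML Models" row, here is what a car-price prediction baseline with scikit-learn could look like. The toy columns (year, mileage, price) and values are hypothetical, not from any real dataset.

```
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Toy dataset created with pandas (the "Data Creation" row of the table);
# the columns and values here are hypothetical.
cars = pd.DataFrame({
    "year":    [2015, 2018, 2020, 2012, 2019, 2016, 2021, 2014],
    "mileage": [80000, 40000, 20000, 120000, 30000, 70000, 10000, 95000],
    "price":   [9500, 15000, 21000, 6000, 18000, 11000, 24000, 8000],
})

# Hold out a quarter of the rows to check generalization
X_train, X_test, y_train, y_test = train_test_split(
    cars[["year", "mileage"]], cars["price"], test_size=0.25, random_state=0
)

model = LinearRegression().fit(X_train, y_train)
print(f"R^2 on held-out data: {model.score(X_test, y_test):.2f}")
```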
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
In 2012, GreyNet published a page on its website and made accessible the first edition of IDGL, the International Directory of Organizations in Grey Literature. The latest update of this PDF publication was in August 2016, providing a list of some 280 organizations in 40 countries worldwide that have contact with the Grey Literature Network Service. The listing appears by country followed by the names of the organizations in alphabetical order, which are then linked to a URL.

This year GreyNet International marks its Twenty-Fifth Anniversary and seeks to more fully showcase organizations whose involvement in grey literature is in one or more ways linked to GreyNet.org. Examples include members, partners, conference hosts, sponsors, authors, service providers, committee members, and associate editors.

This revised and updated edition of IDGL will benefit from the use of visualization software mapping the cities in which GreyNet's contacts are located. Behind each point of contact are a number of fields that can be grouped and cross-tabulated for further data analysis. Such fields include the source, name of organization, acronym, affiliate's job title, sector of information, subject/discipline, city, state, country, ISO code, continent, and URL. Eight of the twelve fields require input, while the other four do not.

The population of the study was derived by extracting records from GreyNet's in-house administrative file. Only recipients on GreyNet's Distribution List as of February 2017 were included. The records were then further filtered, and only those that allowed for completion of the required fields remained. This set of records was then converted to Excel format, duplications were removed, and further normalization of field entries took place. In the end, 510 records form the corpus of this study. In the coming months, an in-depth analysis of the data will be carried out, the results of which will be recorded and made visually accessible.

The expected outcome of the project will not only be a revised, expanded, and updated publication of IDGL, but also a visual overview of GreyNet as an international organization serving diverse communities with shared interests in grey literature. It will be a demonstration of GreyNet's commitment to research, publication, open access, education, and public awareness in this field of library and information science. Finally, this study will serve to pinpoint geographic and subject-based areas currently within as well as outside of GreyNet's catchment.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
This dataset is a cleaned version of the Chicago Crime Dataset, which can be found here. All rights for the dataset go to the original owners. The purpose of this dataset is to display my skills in visualizations and creating dashboards. Specifically, I will attempt to create a dashboard that allows users to see metrics for a specific crime within a given year using filters. Because of this focus, there will not be much analysis of the data, but there will be portions discussing the validity of the dataset, the steps I took to clean the data, and how I organized it. The cleaned datasets can be found below, the query (which utilized BigQuery) can be found here, and the Tableau dashboard can be found here.

The dataset comes directly from the City of Chicago's website under the page "City Data Catalog." The data is gathered directly from the Chicago Police's CLEAR (Citizen Law Enforcement Analysis and Reporting) system and is updated daily to keep the information accurate. This means that a record for a crime on a specific date may later be revised to better reflect the case. The dataset covers crimes from 2001 up to seven days prior to the current date.
Using the ROCCC method, we can see that:

* The data has high reliability: The data covers the entirety of Chicago over a little more than two decades. It covers all the wards within Chicago and even gives the street names. While we may not know exactly how large the sample size is, I believe the dataset has high reliability since it geographically covers the entirety of Chicago.
* The data has high originality: The dataset was obtained directly from the Chicago Police Department's database, so we can say this dataset is original.
* The data is somewhat comprehensive: While we have important information such as the types of crimes committed and their geographic locations, I do not think this gives us proper insight into why these crimes take place. We can pinpoint the location of a crime, but we are limited by the information we have. How hot was the day of the crime? Did the crime take place in a low-income neighborhood? These missing factors prevent us from getting proper insight into why these crimes take place, so I would say the dataset is subpar in how comprehensive it is.
* The data is current: The dataset is updated frequently to include crimes that took place up to seven days prior to the current date, and past crimes may even be updated as more information comes to light. Due to the frequent updates, I believe the data is current.
* The data is cited: As mentioned before, the data is collected directly from the police's CLEAR system, so we can say the data is cited.
The purpose of this step is to clean the dataset such that there are no outliers in the dashboard. To do this, we are going to do the following:

* Check for any null values and determine whether we should remove them.
* Update any values where there may be typos.
* Check for outliers and determine if we should remove them.
The following steps will be explained in the code segments below. (I used BigQuery for this, so the code follows BigQuery's syntax.)
```
-- Preview the table to get familiar with the data
SELECT
  *
FROM
  `portfolioproject-350601.ChicagoCrime.Crime`
LIMIT 1000;

-- Find rows with NULL values in any of the key columns
SELECT
  *
FROM
  `portfolioproject-350601.ChicagoCrime.Crime`
WHERE
  unique_key IS NULL OR
  case_number IS NULL OR
  date IS NULL OR
  primary_type IS NULL OR
  location_description IS NULL OR
  arrest IS NULL OR
  longitude IS NULL OR
  latitude IS NULL;

-- Delete those rows, since they cannot be mapped or categorized reliably
DELETE FROM
  `portfolioproject-350601.ChicagoCrime.Crime`
WHERE
  unique_key IS NULL OR
  case_number IS NULL OR
  date IS NULL OR
  primary_type IS NULL OR
  location_description IS NULL OR
  arrest IS NULL OR
  longitude IS NULL OR
  latitude IS NULL;

-- Check for duplicate unique_key values
SELECT unique_key, COUNT(unique_key) FROM `portfolioproject-350601.ChicagoCrime....
```
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Public health-related decision-making on policies aimed at controlling the COVID-19 pandemic outbreak depends on complex epidemiological models that must be robust and use all relevant available data. This data article provides a new combined worldwide COVID-19 dataset obtained from official data sources, with corrections for systematic measurement errors and a dedicated dashboard for online data visualization and summary. The dataset adds new measures and attributes to the usual attributes of official data sources, such as daily mortality and fatality rates.

We used comparative statistical analysis to evaluate the measurement errors of COVID-19 official data collections from the Chinese Center for Disease Control and Prevention (Chinese CDC), the World Health Organization (WHO), and the European Centre for Disease Prevention and Control (ECDC). The data was collected by using text-mining techniques and reviewing PDF reports, metadata, and reference data. The combined dataset includes complete spatial data such as countries' areas, international numbers of countries, Alpha-2 codes, Alpha-3 codes, latitude, longitude, and additional attributes such as population.

The improved dataset benefits from major corrections to the referenced datasets and official reports, such as adjustments to reporting dates (which suffered from a one- to two-day lag), removal of negative values, detection of unreasonable changes to historical data in new reports, and corrections of systematic measurement errors, which have been increasing as the pandemic outbreak spreads and more countries contribute data to the official repositories. Additionally, the root mean square error of attributes in paired comparisons of datasets was used to identify the main data problems. The data for China is presented separately and in more detail; it has been extracted from the attached reports available on the main page of the CCDC website.

This dataset is a comprehensive and reliable source of worldwide COVID-19 data that can be used in epidemiological models assessing the magnitude and timeline of confirmed cases, long-term predictions of deaths or hospital utilization, the effects of quarantine, stay-at-home orders and other social distancing measures, or the pandemic's turning point, as well as in economic and social impact analysis, helping to inform national and local authorities on how to implement an adaptive response approach to re-opening the economy, re-opening schools, alleviating business and social distancing restrictions, designing economic programs, or allowing sports events to resume.
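As an illustration of the paired-comparison approach described above, a minimal sketch follows. The file names (who_daily.csv, ecdc_daily.csv) and column names are assumptions for illustration, not the article's actual schema.

```
import numpy as np
import pandas as pd

# Assumed inputs: daily case counts per country from two official sources;
# file and column names are placeholders, not the article's real schema.
who = pd.read_csv("who_daily.csv", parse_dates=["date"])
ecdc = pd.read_csv("ecdc_daily.csv", parse_dates=["date"])

# Align the two sources on country and reporting date before comparing
paired = who.merge(ecdc, on=["country", "date"], suffixes=("_who", "_ecdc"))

# RMSE of the paired attribute highlights systematic disagreement
rmse = np.sqrt(np.mean((paired["cases_who"] - paired["cases_ecdc"]) ** 2))
print(f"RMSE of daily cases between sources: {rmse:.1f}")
```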
This is an example Showcase demonstrating the use of Showcase tags rather than the more restrictive 'type' dropdown. Showcases link to datasets in use: datasets used in an app, website or visualization, or featured in an article, report or blog post, can be showcased within the CKAN website. Showcases can include an image, description, tags and an external link. Showcases may contain several datasets, helping users discover related datasets being used together. Showcases can be discovered by searching and filtered by tag. Site sysadmins can promote selected users to 'Showcase Admins' to help create, populate and maintain showcases. ckanext-showcase is intended to be a more powerful replacement for the 'Related Item' feature.
Abstract

The dataset provided here contains the results of independent data aggregation, quality control, and visualization efforts for the University of Arizona (UofA) COVID-19 testing programs during the 2019 novel coronavirus pandemic. The dataset is provided in the form of machine-readable tables in comma-separated value (.csv) and Microsoft Excel (.xlsx) formats.

Additional Information

As part of the UofA response to the 2019-20 coronavirus pandemic, testing was conducted on students, staff, and faculty prior to the start of the academic year and throughout the school year. This testing was done at the UofA Campus Health Center and through the university's "Test All Test Smart" (TATS) program. These tests identify active cases of SARS-CoV-2 infection using the reverse transcription polymerase chain reaction (RT-PCR) test and the antigen test. Because the antigen test provided more rapid diagnosis, it was used extensively in the three weeks prior to the start of the Fall semester and throughout the academic year.

As these tests were occurring, results were provided on the COVID-19 websites. First, beginning in early March, the Campus Health Alerts website reported the total number of positive cases. Later, numbers were provided for the total number of tests (March 12 and thereafter). According to the website, these numbers were updated daily for positive cases and weekly for total tests. These numbers were reported until early September, when they were then included in the reporting for the TATS program.

For the TATS program, numbers were provided through the UofA COVID-19 Update website. Initially, on August 21, the numbers provided were the total number (July 31 and thereafter) of tests and positive cases. Later (August 25), additional information was provided where both PCR and antigen testing were available; here, the daily numbers were also included. On September 3, this website then provided both the Campus Health and TATS data. Here, PCR and antigen were combined and referred to as "Total", and daily and cumulative numbers were provided.

No official data dashboard was available until September 16, and aside from the information provided on these websites, the full dataset was not made publicly available. As such, the authors of this dataset independently aggregated data from multiple sources. These data were made publicly available through a Google Sheet, with graphical illustration provided through the spreadsheet and on social media. The goal of providing the data and illustrations publicly was to provide factual information and to understand the infection rate of SARS-CoV-2 in the UofA community.

Because of differences in reported data between Campus Health and the TATS program, the dataset provides Campus Health numbers on September 3 and thereafter. TATS numbers are provided beginning on August 14, 2020.

Description of Dataset Content

The following terms are used in describing the dataset.

1. "Report Date" is the date and time at which the website was updated to reflect the new numbers
2. "Test Date" is the date of testing/sample collection
3. "Total" is the combination of Campus Health and TATS numbers
4. "Daily" is the new data associated with the Test Date
5. "To Date (07/31--)" provides the cumulative numbers from 07/31 and thereafter
6. "Sources" provides the source of information. The number prior to the colon refers to the number of sources. Here, "UACU" refers to the UA COVID-19 Update page, and "UARB" refers to the UA Weekly Re-Entry Briefing. "SS" and "WBM" refer to screenshots (manually acquired) and the "Wayback Machine" (see the Reference section for links), with initials provided to indicate which author recorded the values. These screenshots are available in the records.zip file.

The dataset is distinguished, where available, by the testing program and the methods of testing. Where data are not available, calculations are made to fill in missing data (e.g., extrapolating backwards on the total number of tests based on daily numbers that are deemed reliable). Where errors are found (by comparing to previous numbers), those are reported on the above Google Sheet with specifics noted.

For inquiries regarding the contents of this dataset, please contact the Corresponding Author listed in the README.txt file. Administrative inquiries (e.g., removal requests, trouble downloading, etc.) can be directed to data-management@arizona.edu
The Tableau View extension for CKAN enables the display of Tableau Public visualizations directly within CKAN datasets. By providing a view plugin, this extension allows users to embed interactive Tableau vizzes, enhancing data presentation and exploration capabilities within the CKAN platform. This offers a seamless integration path for organizations already utilizing Tableau Public to share insights drawn from their data.

Key Features:

* Tableau Public Viz Integration: Embed Tableau Public visualizations within CKAN resources through a dedicated view plugin. This plugin allows for the display of interactive Tableau dashboards alongside the underlying data.
* Simple Configuration: The extension primarily requires enabling the tableau_view plugin within the CKAN configuration file. Further configuration details and display examples may be available on the extension's wiki page (if any wiki pages exist).
* Streamlined Data Visualization: Provides a direct method to visually represent data managed in CKAN, improving user engagement and comprehension.

Use Cases:

* Open Data Portals: Governments and organizations can use this extension to embed publicly available Tableau visualizations in their open data portals, enhancing the accessibility and understandability of data.
* Internal Data Dashboards: Organizations using CKAN for internal data management can use the extension to embed Tableau dashboards providing data summaries, trends, and performance metrics.

Technical Integration: The extension integrates into CKAN as a view plugin. Once the tableau_view plugin is enabled in the CKAN configuration file (ckan.plugins), it becomes available as a view option for resources that support it. The readme suggests referring to a wiki page for additional configuration details, which, if available, is crucial for proper setup and usage.

Benefits & Impact: The Tableau View extension streamlines data visualization for CKAN users. By embedding interactive Tableau Public visualizations, it becomes easier for users to explore, analyze, and understand the data managed by CKAN. This can lead to improved data literacy, more informed decision-making, and broader engagement with open data initiatives.
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
Data covering the period from September 18, 2023, to October 17, 2023. Regions: across all of Russia.
1) Comparative analysis of offered salaries in the IT industry: tasks in this group focus on analyzing IT salaries by various criteria. Examples of charts that can be created:
* Median salary by city
* Median salary by professional role
* Median salary by type of employment

2) Examination of the distribution of experience required from applicants, and analysis of remuneration depending on experience:
* Distribution of required work experience in the IT industry
* Distribution of required experience by work schedule
* Dependence of salary on work experience

3) Determination of the top employers: tasks in this group aim to identify the companies most actively posting vacancies in the IT industry and to analyze their personnel needs, key skills, and required work experience. Examples of charts:
* Distribution of work experience in large companies
* Distribution of programming languages in large companies

4) Determination of the most sought-after skills for the developer profession.

5) Since the coordinates of the workplace are specified in each job listing, the data can be placed on a geographical map and a heat map of vacancies can be created (see the sketch after this list).

6) Forecasting salary based on experience and skills: you can build a model that predicts salary from given parameters, such as experience and skills.
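As a sketch of idea 5, vacancy coordinates could be rendered as a heat map with folium. The input file it_vacancies.csv and its lat/lon columns are assumptions for illustration, not part of the published dataset.

```
import pandas as pd
import folium
from folium.plugins import HeatMap

# Assumed input: one row per vacancy with latitude/longitude columns;
# the file and column names are placeholders.
vacancies = pd.read_csv("it_vacancies.csv")

# Center the map roughly on Russia and overlay vacancy density
m = folium.Map(location=[60.0, 90.0], zoom_start=3)
HeatMap(vacancies[["lat", "lon"]].values.tolist()).add_to(m)
m.save("vacancy_heatmap.html")
```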
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This resource contains a SQLite database of temperature values recorded at 15-minute intervals between 2014 and 2018 at the iUTAH GAMUT sites in the Logan River Watershed. Additionally contained within this resource is a Jupyter Notebook that:

1) defines a function for querying the SQLite database and extracting the data for each site
2) gets temperature time series from the database using that function
3) creates a time series for each year of data at each site
4) resamples the complete time series at each site to get the mean daily temperature
5) creates a figure showing (a) a plot of the mean daily temperature at each site between 2014 and 2018 and (b) a plot for each year of data showing a box plot of the temperature data recorded that year for each site
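A minimal sketch of steps 1, 2, and 4 follows. The table and column names (readings, site, timestamp, temperature) are assumptions for illustration; the actual schema is defined in the resource's SQLite database.

```
import sqlite3
import pandas as pd

# Assumed schema: a 'readings' table with site, timestamp, and temperature
# columns; the database file name is also a placeholder.
con = sqlite3.connect("logan_river_temperature.db")
df = pd.read_sql_query(
    "SELECT site, timestamp, temperature FROM readings",
    con,
    parse_dates=["timestamp"],
)
con.close()

# Resample each site's 15-minute series to mean daily temperature
daily = (
    df.set_index("timestamp")
      .groupby("site")["temperature"]
      .resample("D")
      .mean()
)
print(daily.head())
```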
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Additional file 1: Source code and sample data for SEQing. The Python code and samples of Arabidopsis thaliana iCLIP (GSE99427) and RNA-seq (GSE99615) data used to start the sample dataset dashboard.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
4D STEM dataset recorded with DECTRIS ARINA detector.
Sample is a monocrystalline domain of SmB6 oriented along the <110> zone axis, prepared with FIB by Elisabeth Mueller at PSI.
Data was collected with a probe-corrected 200 kV TEM microscope, supported by Mingjian Wu at FAU.
Further experimental parameters are listed in the included .txt file.
Data visualization and processing can be done with the NOVENA software, freely available from the DECTRIS website.
Alternatively, the files can be opened using an HDF5 file reader.
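For example, a minimal sketch of inspecting the data with h5py; the file name is a placeholder for the actual file included in this dataset.

```
import h5py

# Placeholder file name; substitute the actual HDF5 file from this dataset
with h5py.File("arina_4dstem_data.h5", "r") as f:
    # Walk the file tree and report every dataset's path, shape, and dtype
    def report(name, obj):
        if isinstance(obj, h5py.Dataset):
            print(name, obj.shape, obj.dtype)
    f.visititems(report)
```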
https://www.technavio.com/content/privacy-notice
Web Development Market Size 2025-2029
The web development market size is forecast to increase by USD 40.98 billion at a CAGR of 10.4% between 2024 and 2029.
The market is experiencing significant growth, driven by the increasing digital transformation across industries and the integration of artificial intelligence (AI) into web applications. This trend is fueled by the need for businesses to enhance user experience, streamline operations, and gain a competitive edge in the market. Furthermore, the rapid evolution of technologies such as Progressive Web Apps (PWAs), serverless architecture, and the Internet of Things (IoT) is creating new opportunities for innovation and expansion. However, this market is not without challenges. The ever-changing technological landscape requires web developers to continuously update their skills and knowledge. Additionally, ensuring web applications are secure and compliant with data protection regulations is becoming increasingly complex.
Companies seeking to capitalize on market opportunities and navigate challenges effectively should focus on building a team of skilled developers, investing in continuous learning and development, and prioritizing security and compliance in their web development projects. By staying abreast of the latest trends and technologies and adapting quickly to market shifts, organizations can successfully navigate the dynamic market and drive business growth.
What will be the Size of the Web Development Market during the forecast period?
The market continues to evolve at an unprecedented pace, driven by advancements in technology and shifting consumer preferences. Key trends include the adoption of Agile methodologies, DevOps tools, and version control systems for streamlined project management. JavaScript frameworks, such as React and Angular, dominate front-end development, while Magento, Shopify, and WordPress lead in content management and e-commerce. Back-end development sees a rise in Python, PHP, and Ruby on Rails frameworks, enabling faster development and more efficient scalability. Interaction design, user-centered design, and mobile-first design prioritize user experience, while security audits, penetration testing, and disaster recovery solutions ensure website safety.
Marketing automation, email marketing platforms, and CRM systems enhance digital marketing efforts, while social media analytics and Google Analytics provide valuable insights for data-driven decision-making. Progressive enhancement, headless CMS, and cloud migration further expand the market's potential. Overall, the market remains a dynamic, innovative space, with continuous growth fueled by evolving business needs and technological advancements.
How is this Web Development Industry segmented?
The web development industry research report provides comprehensive data (region-wise segment analysis), with forecasts and estimates in 'USD million' for the period 2025-2029, as well as historical data from 2019-2023 for the following segments.
End-user
Retail and e-commerce
BFSI
IT and telecom
Healthcare
Others
Business Segment
SMEs
Large enterprise
Service Type
Front-End Development
Back-End Development
Full-Stack Development
E-Commerce Development
Deployment Type
Cloud-Based
On-Premises
Technology Specificity
JavaScript
Python
PHP
Ruby
Geography
North America
US
Canada
Europe
France
Germany
Spain
UK
APAC
China
India
Japan
South America
Brazil
Rest of World (ROW)
By End-user Insights
The retail and e-commerce segment is estimated to witness significant growth during the forecast period. The market is experiencing significant growth due to the digital transformation sweeping various industries. E-commerce and retail sectors lead the market, driven by the increasing preference for online shopping and improved Internet penetration. To cater to this trend, businesses demand user-engaging web applications with smooth navigation, secure payment gateways, and seamless product search and purchase features. Mobile shopping's rise necessitates mobile app development and mobile-optimized websites. Agile development, microservices architecture, and UI/UX design are essential elements in creating engaging and efficient web solutions. Furthermore, AI, machine learning, and data analytics enable data-driven decision making, customer loyalty, and business intelligence.
Web hosting, cloud computing, API integration, and growth hacking are other critical components. Ensuring web accessibility, data security, and e-commerce development is also crucial for businesses in the digital age. Online advertising, email marketing, content strategy, brand building, and data visualization are essential aspects of digital marketing. Serverless computing, u
https://www.technavio.com/content/privacy-notice
Web Analytics Market Size 2025-2029
The web analytics market size is forecast to increase by USD 3.63 billion, at a CAGR of 15.4% between 2024 and 2029.
The market is experiencing significant growth, driven by the rising preference for online shopping and the increasing adoption of cloud-based solutions. The shift towards e-commerce is fueling the demand for advanced web analytics tools that enable businesses to gain insights into customer behavior and optimize their digital strategies. Furthermore, cloud deployment models offer flexibility, scalability, and cost savings, making them an attractive option for businesses of all sizes. However, the market also faces challenges associated with compliance with data privacy regulations. With the increasing amount of data being generated and collected, ensuring data security and privacy is becoming a major concern for businesses.
Regulatory compliance, such as GDPR and CCPA, adds complexity to the implementation and management of web analytics solutions. Companies must navigate these challenges effectively to maintain customer trust and avoid potential legal issues. To capitalize on market opportunities and address these challenges, businesses should invest in robust web analytics solutions that prioritize data security and privacy while providing actionable insights to inform strategic decision-making and enhance customer experiences.
What will be the Size of the Web Analytics Market during the forecast period?
Explore in-depth regional segment analysis with market size data - historical 2019-2023 and forecasts 2025-2029 - in the full report.
The market continues to evolve, with dynamic market activities unfolding across various sectors. Entities such as reporting dashboards, schema markup, conversion optimization, session duration, organic traffic, attribution modeling, conversion rate optimization, call to action, content calendar, SEO audits, website performance optimization, link building, page load speed, user behavior tracking, and more, play integral roles in this ever-changing landscape. Data visualization tools like Google Analytics and Adobe Analytics provide valuable insights into user engagement metrics, helping businesses optimize their content strategy, website design, and technical SEO. Goal tracking and keyword research enable marketers to measure the return on investment of their efforts and refine their content marketing and social media marketing strategies.
Mobile optimization, form optimization, and landing page optimization are crucial aspects of website performance optimization, ensuring a seamless user experience across devices and improving customer acquisition cost. Search console and page speed insights offer valuable insights into website traffic analysis and help businesses address technical issues that may impact user behavior. Continuous optimization efforts, such as multivariate testing, data segmentation, and data filtering, allow businesses to fine-tune their customer journey mapping and cohort analysis. Search engine optimization, both on-page and off-page, remains a critical component of digital marketing, with backlink analysis and page authority playing key roles in improving domain authority and organic traffic.
The ongoing integration of user behavior tracking, click-through rate, and bounce rate into marketing strategies enables businesses to gain a deeper understanding of their audience and optimize their customer experience accordingly. As market dynamics continue to evolve, the integration of these tools and techniques into comprehensive digital marketing strategies will remain essential for businesses looking to stay competitive in the digital landscape.
How is this Web Analytics Industry segmented?
The web analytics industry research report provides comprehensive data (region-wise segment analysis), with forecasts and estimates in 'USD million' for the period 2025-2029, as well as historical data from 2019-2023 for the following segments.
Deployment
Cloud-based
On-premises
Application
Social media management
Targeting and behavioral analysis
Display advertising optimization
Multichannel campaign analysis
Online marketing
Component
Solutions
Services
Geography
North America
US
Canada
Europe
France
Germany
Italy
UK
APAC
China
India
Japan
South Korea
Rest of World (ROW)
By Deployment Insights
The cloud-based segment is estimated to witness significant growth during the forecast period.
In today's digital landscape, web analytics plays a pivotal role in driving business growth and optimizing online performance. Cloud-based deployment of web analytics is a game-changer, enabling on-demand access to computing resources for data analysis. This model streamlines business intelligence processes by collecting, integra
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background: The cornerstone of low anterior resection syndrome (LARS) treatment is self-management, which requires patient engagement. Colorectal surgeons and nurses may use patient-generated health data (PGHD) to help guide patients in their use of self-management strategies for LARS. However, the perspectives of LARS experts on the use of PGHD remain largely unexplored. The objective of this study was to explore the perspectives and experiences of LARS experts regarding the use of PGHD in the management of LARS.

Methods: We utilized purposive snowball sampling to identify international LARS experts, including surgeons, nurses, and LARS researchers with knowledge and expertise in LARS. We conducted individual semi-structured interviews with these experts between August 2022 and February 2024. We performed thematic analysis using the framework method to identify domains and associated themes.

Results: Our sample included 16 LARS experts from five countries. Thematic analysis identified four domains and associated themes. The domains included: data collection practices, data review practices, perceived usefulness, and future directions. Within the data collection practices domain, we found that most experts asked LARS patients to collect some form of PGHD, including bowel diaries, patient-reported outcome measures, or both. Within the data review practices domain, we found that both surgeons and nurses reviewed PGHD. Most participants described finding it difficult to interpret the data and identified time constraints, legibility, and completeness as the most common barriers to reviewing data in clinic. In terms of perceived usefulness, data collection was felt to help clinicians understand symptoms and their impact and assist patients with self-management. The future directions domain revealed that most experts felt that a clinical tool in the form of an online app or website to support data collection and enhance data visualization would be useful. Finally, some participants saw promise in leveraging PGHD to inform the creation of automated treatment algorithms for LARS management.

Conclusions: This study highlights many gaps in the processes of patient-generated LARS data collection and review. A clinical tool including various data collection templates and data visualization prototypes could help to address these gaps. Future research will focus on incorporating the patient perspective.
The Describable Textures Dataset (DTD) is an evolving collection of textural images in the wild, annotated with a series of human-centric attributes, inspired by the perceptual properties of textures. This data is made available to the computer vision community for research purposes.
The "label" of each example is its "key attribute" (see the official website). The official release of the dataset defines a 10-fold cross-validation partition. Our TRAIN/TEST/VALIDATION splits are those of the first fold.
To use this dataset:
```
import tensorflow_datasets as tfds

# Load the train split of DTD and inspect a few examples
ds = tfds.load('dtd', split='train')
for ex in ds.take(4):
    print(ex)
```
See the guide for more information on tensorflow_datasets.
Visualization: https://storage.googleapis.com/tfds-data/visualization/fig/dtd-3.0.1.png
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The rainbow color map is scientifically incorrect and hinders people with color vision deficiency from viewing visualizations correctly. Due to perceptually non-uniform color gradients within the rainbow color map, the data representation is distorted, which can lead to misinterpretation of results and flaws in science communication. Here we present the data of a survey of 797 scientific publications in the journal Hydrology and Earth System Sciences. In the survey, all papers were classified according to color issues. Find details about the data below.
* year = year of publication (YYYY)
* date = date (YYYY-MM-DD) of publication
* title = full paper title from journal website
* authors = list of authors, comma-separated
* n_authors = number of authors (integer between 1 and 27)
* col_code = color-issue classification (see below)
* volume = journal volume
* start_page = first page of paper (consecutive)
* end_page = last page of paper (consecutive)
* base_url = base URL to access the PDF of the paper with /volume/start_page/year/
* filename = specific file name of the paper PDF (e.g. hess-9-111-2005.pdf)

Color classification is stored in the col_code variable with:

* 0 = chromatic and issue-free
* 1 = red-green issues
* 2 = rainbow issues
* bw = black-and-white paper
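A minimal sketch of analyzing the survey table with pandas, assuming the data is exported to CSV (the file name is hypothetical; the column names follow the description above):

```
import pandas as pd

# Hypothetical CSV export of the survey; columns follow the description above
papers = pd.read_csv("hess_rainbow_survey.csv")

# Share of papers classified with rainbow issues (col_code == 2) per year;
# col_code mixes numbers and "bw", so compare as strings
rainbow_share = (
    papers.assign(rainbow=papers["col_code"].astype(str) == "2")
          .groupby("year")["rainbow"]
          .mean()
)
print(rainbow_share)
```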
See more details (e.g., sample code to analyse the survey data) on https://github.com/modche/rainbow_hydrology
Paper: Stoelzle, M. and Stein, L.: Rainbow color map distorts and misleads research in hydrology – guidance for better visualizations and science communication, Hydrol. Earth Syst. Sci., 25, 4549–4565, https://doi.org/10.5194/hess-25-4549-2021, 2021.
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
Dataset: Online Shopping Dataset

| Field | Description | Data Type |
| ----- | ----------- | --------- |
| CustomerID | Unique identifier for each customer. | Numeric |
| Gender | Gender of the customer (e.g., Male, Female). | Categorical |
| Location | Location or address information of the customer. | Text |
| Tenure_Months | Number of months the customer has been associated with the platform. | Numeric |
| Transaction_ID | Unique identifier for each transaction. | Numeric |
| Transaction_Date | Date of the transaction. | Date |
| Product_SKU | Stock Keeping Unit (SKU) identifier for the product. | Text |
| Product_Description | Description of the product. | Text |
| Product_Category | Category to which the product belongs. | Categorical |
| Quantity | Quantity of the product purchased in the transaction. | Numeric |
| Avg_Price | Average price of the product. | Numeric |
| Delivery_Charges | Charges associated with the delivery of the product. | Numeric |
| Coupon_Status | Status of the coupon associated with the transaction. | Categorical |
| GST | Goods and Services Tax associated with the transaction. | Numeric |
| Date | Date of the transaction (potentially redundant with Transaction_Date). | Date |
| Offline_Spend | Amount spent offline by the customer. | Numeric |
| Online_Spend | Amount spent online by the customer. | Numeric |
| Month | Month of the transaction. | Categorical |
| Coupon_Code | Code associated with a coupon, if applicable. | Text |
| Discount_pct | Percentage of discount applied to the transaction. | Numeric |
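A minimal sketch of loading the dataset with pandas using the schema above; the file name is an assumption.

```
import pandas as pd

# File name is a placeholder; dtypes follow the schema table above
orders = pd.read_csv(
    "online_shopping.csv",
    parse_dates=["Transaction_Date", "Date"],
    dtype={
        "CustomerID": "Int64",
        "Transaction_ID": "Int64",
        "Gender": "category",
        "Product_Category": "category",
        "Coupon_Status": "category",
    },
)
print(orders.dtypes)
```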
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
This dataset was created by our teams at PromptCloud and DataStock. This dataset holds up to 30K records of sample data. You can download the full dataset here.
This dataset holds the following contents:
We wouldn't be here without the help of our in-house web scraping and data mining teams from PromptCloud and DataStock.
This data was created primarily for the analysis and visualization of data from different websites, and it is intended to help data visualization practitioners.
This dataset contains information about posts made on a famous cosmetic brand's Facebook page from January 1 to December 31, 2014. Each row represents a single post and includes the following attributes:
Citation: (Moro et al., 2016) S. Moro, P. Rita and B. Vala. Predicting social media performance metrics and evaluation of the impact on brand building: A data mining approach. Journal of Business Research, Elsevier, In press. Available at: http://dx.doi.org/10.1016/j.jbusres.2016.02.010