Research dissemination and knowledge translation are imperative in social work. Methodological developments in data visualization techniques have improved the ability to convey meaning and reduce erroneous conclusions. The purpose of this project is to examine: (1) How are empirical results presented visually in social work research? (2) To what extent do top social work journals vary in the publication of data visualization techniques? (3) What is the predominant type of analysis presented in tables and graphs? (4) How can current data visualization methods be improved to increase understanding of social work research? Method: A database was built from a systematic literature review of the four most recent issues of Social Work Research and six other highly ranked journals in social work, based on the 2009 5-year impact factor (Thomson Reuters ISI Web of Knowledge). Overall, 294 articles were reviewed. Articles without any form of data visualization were not included in the final database. The number of articles reviewed by journal: Child Abuse & Neglect (38), Child Maltreatment (30), American Journal of Community Psychology (31), Family Relations (36), Social Work (29), Children and Youth Services Review (112), and Social Work Research (18). Articles with any type of data visualization (table, graph, other) were included in the database and coded sequentially by two reviewers based on the type of visualization method and the type of analyses presented (descriptive, bivariate, measurement, estimate, predicted value, other). Additional review by the entire research team was required for 68 articles. Codes were discussed until 100% agreement was reached. The final database includes 824 data visualization entries.
This dataset was created by Dishani Mishra.
https://creativecommons.org/publicdomain/zero/1.0/
The reference for the dataset and the dashboard was the YouTube channel codebasics. I have used a fictitious company called Atlix, where the Sales Director wants the sales data in a format that supports decision making.
We have a total of five tables: customers, products, markets, date, and transactions. The data is exported from MySQL to Tableau.
In Tableau, inner joins were used to combine the tables.
In the transactions table, we notice that some sales amount figures are negative or zero while the sales quantity is 1 or more. This cannot be right. Therefore, we filter the sales amount in Tableau so that the minimum sales amount is 1.
When the currency column from the transactions table was grouped in MySQL, both 'USD' and 'INR' showed up. Sales data cannot mix two currencies. This was rectified by converting the USD sales amounts into INR at the latest exchange rate of Rs. 81.
We make the above change in Tableau by creating a new calculated field called 'Normalised Sales Amount': IF [Currency] = "USD" THEN [Sales Amount] * 81 ELSE [Sales Amount] END.
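For readers working outside Tableau, the same preparation steps can be sketched in pandas (a sketch only; the file names, key columns, and field names below are assumptions, not the actual export schema):

```python
import pandas as pd

# Load the exported tables (hypothetical file names)
transactions = pd.read_csv("transactions.csv")
customers = pd.read_csv("customers.csv")
products = pd.read_csv("products.csv")
markets = pd.read_csv("markets.csv")

# Inner joins, mirroring the Tableau data model (assumed key names)
df = (transactions
      .merge(customers, on="customer_code", how="inner")
      .merge(products, on="product_code", how="inner")
      .merge(markets, on="market_code", how="inner"))

# Keep only rows with a sales amount of at least 1
df = df[df["sales_amount"] >= 1]

# Normalised sales amount: convert USD rows to INR at Rs. 81
df["normalised_sales_amount"] = df["sales_amount"].where(
    df["currency"] != "USD", df["sales_amount"] * 81
)
```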
Conclusion: The dashboard is interactive, with filters. For example, clicking on Mumbai under "Sales by Markets" changes the other charts so that they show only the results pertaining to Mumbai. The same can be done by year, month, customer, product, and so on. A parameter with a filter has also been created for top customers and top products; it produces a slider that can be adjusted to view, for example, the top 10 customers and products.
The following information can be passed on to the sales team or director.
Total Sales: Sales from Jun '17 to Feb '20 totalled INR 12.83 million. Sales revenue dropped 57% from 2018 to 2019. The year 2020 has not been considered, as it accounts for only two months of data.
Markets: Mumbai, the top-performing market with 51% of total sales, saw a drop of almost 64% from 2018 to 2019.
Top Customers: In 2018, Electricalslytical was the top customer by sales (21% of total sales), followed by Path (19%). In 2019, Electricalslytical and Path were the 2nd and 4th highest customers by sales. By targeting specific markets and customers with new ideas such as promotions and discounts, we can look to reverse the trend of decreasing sales.
Lookup tables used to assign standardized colormaps to land cover rasters for all sources (NLCD, CCAP) and all types (Level 1, Level 2, Natural versus Converted).
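Such a lookup table maps each integer class code to a label and a fixed color. A minimal sketch of applying one to a raster with matplotlib (the codes and colors below are a hypothetical subset, not the actual NLCD/CCAP tables):

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap, BoundaryNorm

# Hypothetical lookup table: class code -> (label, hex color)
lut = {
    11: ("Open Water", "#476ba1"),
    21: ("Developed", "#d89382"),
    41: ("Forest", "#68ab63"),
    81: ("Agriculture", "#dcca8f"),
}
codes = sorted(lut)
cmap = ListedColormap([lut[c][1] for c in codes])
norm = BoundaryNorm(codes + [codes[-1] + 1], cmap.N)  # one color bin per code

raster = np.random.choice(codes, size=(50, 50))  # stand-in for a land cover tile
plt.imshow(raster, cmap=cmap, norm=norm)
plt.colorbar(ticks=codes, label="Land cover class")
plt.show()
```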
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Presentation Date: Monday, April 1, 2019. Location: Radcliffe Institute for Advanced Study at Harvard, Cambridge, MA. Abstract: Innovative data visualization reveals patterns and trends otherwise unseen. The four speakers in this program represent a range of visualization expertise, from human cognition to user interaction to tool design to the use of visualizations in journalism. As data sets in science, medicine, and business become larger and more diverse, the need for—and the impact of—good visualization is growing rapidly. The presentations will highlight a wide scope of visualization’s applicability, using examples from personalized medicine, government, education, basic science, climate change, and more.
Data Visualizations: How to make them, use them and show them
I created this dataset for anyone wishing to study Turkey's Covid-19 data. I used the Republic of Turkey Ministry of Health as the source.
This dataset contains total and daily numbers from the first day of the Covid-19 spread in Turkey. You can use it to better understand the course of the outbreak in Turkey, and it can also help in estimating cases and deaths.
Visual map at kumu.io/access2perspectives/covid19-resources
Dataset DOI: 10.5281/zenodo.3732377 // available in different formats (PDF, XLS, ODS, CSV)
Correspondence: (JH) info@access2perspectives.com
Objectives
Provide citizens with crucial and reliable information
Encourage and facilitate South South collaboration
Bridge language barriers
Provide local governments and cities with lessons learned about COVID-19 crisis response
Facilitate global cooperation and immediate response on all societal levels
Enable LMICs to collaborate and innovate across distances and leverage locally available and context-relevant resources
Methodology
The data feeding the map at kumu.io was compiled from online resources and information shared in various community communication channels.
Kumu.io is a visualization platform for mapping complex systems and providing a deeper understanding of their intrinsic relationships. It blends systems thinking, stakeholder mapping, and social network analysis.
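Under the hood, such a map is a simple element/connection structure; a toy networkx sketch of the same idea (all nodes and edges here are made up for illustration, not taken from the actual map):

```python
import networkx as nx

# Country nodes linked to the resource types catalogued for them
G = nx.Graph()
G.add_edges_from([
    ("South Africa", "info hotline"),
    ("South Africa", "fact-checking resources"),
    ("Kenya", "info hotline"),
    ("Kenya", "DIY resources"),
])

# Resource types shared by two countries
shared = set(G.neighbors("South Africa")) & set(G.neighbors("Kenya"))
print(shared)  # {'info hotline'}
```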
Explore the map // https://kumu.io/access2perspectives/covid19-resources#global
Click on individual nodes and view the information by country
info hotlines
governmental informational websites, Twitter feeds & Facebook pages
fact checking online resources
language indicator
DIY resources
clinical staff capacity building
etc.
With the navigation buttons to the right, you can zoom in and out, select and focus on specific elements.
If you have comments, questions, or suggestions for improving this map, email us at info@access2perspectives.com
Contribute
Please add data to the spreadsheet at https://tinyurl.com/COVID19-global-response
You can add additional information at the country, city, or neighbourhood level (see, e.g., the Cape Town entry)
Related documents
Google Doc: tinyurl.com/COVID19-Africa-Response
https://dataintelo.com/privacy-and-policy
The global biological data visualization market size was valued at approximately USD 800 million in 2023 and is expected to reach USD 2.2 billion by 2032, growing at a Compound Annual Growth Rate (CAGR) of 12%. The rising volume of biological data generated through various research activities and the increasing need for advanced analytical tools are key factors driving this market's growth. The integration of artificial intelligence and machine learning in data visualization tools, combined with the growing application of biological data visualization in personalized medicine, are also significant growth drivers.
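As a quick arithmetic check, the quoted CAGR is consistent with the report's endpoints (standard CAGR definition; the figures are the report's own):

\[
\mathrm{CAGR} = \left(\frac{V_{2032}}{V_{2023}}\right)^{1/9} - 1 = \left(\frac{2.2}{0.8}\right)^{1/9} - 1 \approx 0.119 \approx 12\%
\]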
One of the primary growth factors of the biological data visualization market is the exponential increase in biological data generation due to advancements in high-throughput technologies such as next-generation sequencing (NGS), mass spectrometry, and microarray technology. These technologies produce vast amounts of data that require sophisticated visualization tools for proper analysis and interpretation. Without effective visualization, the potential insights and discoveries within this data may remain untapped, underscoring the market's critical role in modern biological research.
Additionally, the increasing prevalence of complex diseases and the subsequent demand for personalized medicine are fueling the demand for advanced data visualization tools. Personalized medicine relies heavily on the analysis of genetic, proteomic, and other biological data to tailor treatments to individual patients. Effective visualization tools facilitate the interpretation of this complex data, enabling healthcare providers to make informed clinical decisions. This trend is expected to drive substantial growth in the biological data visualization market over the forecast period.
Moreover, there is a growing adoption of cloud-based visualization solutions. Cloud deployment offers significant advantages, including scalability, cost-effectiveness, and accessibility from various locations. This is particularly beneficial for academic and research institutions and smaller biotech companies with limited resources. The integration of cloud computing with advanced visualization tools is expected to further propel market growth, as it allows for more efficient handling and analysis of large datasets.
From a regional perspective, North America currently holds the largest market share, driven by significant investments in research and development, advanced healthcare infrastructure, and high adoption rates of advanced technologies. Europe follows closely, with substantial growth attributed to government support for research initiatives and a strong presence of pharmaceutical and biotech companies. The Asia Pacific region is anticipated to witness the highest CAGR, owing to increasing investments in biotech research, growing healthcare infrastructure, and expanding adoption of advanced technologies in countries like China and India.
In the realm of Life Sciences Analytics, the role of data visualization is becoming increasingly pivotal. Life Sciences Analytics involves the use of data-driven insights to enhance research and development, clinical trials, and patient care. By leveraging advanced visualization tools, researchers and healthcare professionals can gain a deeper understanding of complex biological data, leading to more informed decisions and innovative solutions. The integration of Life Sciences Analytics with data visualization not only facilitates the interpretation of vast datasets but also accelerates the discovery of new patterns and correlations, ultimately advancing the field of personalized medicine.
The biological data visualization market by component is segmented into software and services. Software solutions constitute the bulk of the market, providing tools that are essential for processing and visually representing complex biological data. These software tools range from basic data plotting programs to advanced systems incorporating machine learning algorithms for predictive modeling. The demand for these tools is driven by their ability to handle large datasets, provide user-friendly interfaces, and offer real-time data visualization capabilities, which are crucial for both research and clinical applications.
In contrast, the services segment, although smaller, plays a crucial role in the market. Services include co
https://dataintelo.com/privacy-and-policy
According to our latest research, the global Learning Data Visualization Tools Market size reached USD 2.8 billion in 2024, demonstrating robust growth driven by the increasing demand for data literacy and analytics skills across various sectors. The market is expected to grow at a CAGR of 13.7% from 2025 to 2033, projecting a value of USD 8.8 billion by 2033. This surge is primarily attributed to the rapid digitization of education and corporate learning environments, the proliferation of big data, and the critical need for interactive, accessible analytical tools to foster effective data comprehension and decision-making.
One of the most significant growth factors for the Learning Data Visualization Tools Market is the widespread integration of data-driven decision-making processes within organizations and educational institutions. As businesses and academic settings increasingly rely on data to guide strategies, there is a parallel surge in the demand for professionals who possess strong data visualization skills. This has led to a marked increase in the adoption of user-friendly data visualization tools such as Tableau, Power BI, and Google Data Studio in both formal education and corporate training programs. The ability of these tools to simplify complex datasets into intuitive visual representations is a key driver, enabling learners to grasp intricate concepts more efficiently and apply them in real-world scenarios.
Technological advancements and the evolution of cloud-based learning platforms have further propelled the market. The shift toward digital and remote learning, especially post-pandemic, has accelerated the adoption of cloud-based data visualization tools, which offer scalability, accessibility, and seamless integration with other e-learning resources. Cloud deployment eliminates geographical barriers, allowing learners and organizations from diverse regions to access advanced visualization tools and resources at any time. Additionally, the increasing availability of free and open-source visualization libraries such as D3.js has democratized access to these technologies, further expanding the market’s reach across different socioeconomic segments.
Another crucial growth driver is the rising emphasis on upskilling and reskilling initiatives across industries. As automation and artificial intelligence reshape job requirements, data literacy has become a fundamental skill for both students and working professionals. Enterprises are investing heavily in learning platforms that incorporate data visualization tools to train their workforce, ensuring they remain competitive in the digital economy. The trend is mirrored in higher education, where curricula are being revamped to include data visualization modules, reflecting the growing recognition of its importance in fostering analytical and critical thinking skills among learners.
From a regional perspective, North America dominates the Learning Data Visualization Tools Market, accounting for the largest revenue share in 2024. This can be attributed to the presence of leading technology providers, a mature e-learning ecosystem, and high levels of digital adoption in both educational and corporate sectors. However, Asia Pacific is emerging as the fastest-growing region, driven by rapid digital transformation, government initiatives to enhance digital literacy, and the increasing penetration of internet and mobile devices. Europe also contributes significantly, with a strong focus on educational innovation and enterprise training. These regional dynamics are shaping the competitive landscape and driving the global expansion of learning data visualization tools.
The Tool Type segment of the Learning Data Visualization Tools Market is highly diverse, encompassing established platforms like Tableau, Power BI, and Qlik, as well as newer entrants such as Google Data Studio and open-source solutions like D3.js. Tableau remains a market leader due to its intuitive drag-and-drop interface, robust analytics capabilities, and widespread adoption in both academic and corporate settings. Its ability to handle large datasets and integrate seamlessly with various data sources makes it a preferred choice for institutions aiming to provide hands-on, practical training in data visualization. Power BI, backed by Microsoft’s ecosystem, is gaining significant traction, particularly among enterpr
https://www.datainsightsmarket.com/privacy-policy
The global Visual Analytics market is poised for exceptional growth, projected to reach a substantial $5307.6 million by 2025, with a remarkable Compound Annual Growth Rate (CAGR) of 17.9% extending through 2033. This robust expansion is primarily fueled by the escalating need for businesses across diverse sectors to derive actionable insights from increasingly vast and complex datasets. Key drivers include the digital transformation initiatives gaining momentum across industries, the proliferation of big data technologies, and the imperative for enhanced data-driven decision-making to maintain competitive advantages. Organizations are actively investing in visual analytics solutions to improve operational efficiency, identify new market opportunities, understand customer behavior, and mitigate risks. The growing adoption of cloud-based visual analytics platforms further democratizes access to powerful analytical tools, enabling small and medium-sized enterprises to leverage sophisticated data visualization capabilities.
Emerging trends are further shaping the visual analytics landscape. The integration of artificial intelligence (AI) and machine learning (ML) into visual analytics platforms is a significant development, empowering users with automated data discovery, predictive analytics, and intelligent recommendations. This synergy allows for deeper and more intuitive exploration of data. Real-time analytics, enabled by advancements in processing power and data infrastructure, is becoming critical for time-sensitive industries like finance and e-commerce. Furthermore, the increasing demand for self-service BI tools that empower business users to create their own visualizations and reports without extensive IT support is a major market influencer. While the market enjoys strong growth, potential restraints such as data security and privacy concerns, along with the need for skilled professionals to effectively utilize these advanced tools, will require strategic attention from market players to ensure sustained and inclusive growth.
This report offers an in-depth examination of the global Visual Analytics market, providing a comprehensive overview of its historical performance, current standing, and future trajectory. Covering the study period from 2019 to 2033, with a base year of 2025 and a forecast period extending from 2025 to 2033, this analysis delves into market dynamics, key players, technological advancements, and segment-specific trends. The estimated market size for Visual Analytics is projected to reach $25 million in 2025, with significant growth anticipated.
https://researchintelo.com/privacy-and-policy
According to our latest research, the global Animal Health Data Visualization Market size in 2024 stands at USD 1.42 billion, fueled by increasing digitalization in veterinary healthcare, growing demand for real-time data analytics, and the rising focus on animal welfare. The market is expected to grow at a CAGR of 11.8% from 2025 to 2033, reaching a projected value of USD 3.59 billion by 2033. This robust expansion is driven by the integration of advanced analytics, AI-powered visualization tools, and the growing adoption of cloud-based platforms within the veterinary sector. As per our latest research, the market is witnessing a notable shift towards data-driven decision-making, enhancing disease management and improving overall animal health outcomes worldwide.
One of the primary growth factors driving the Animal Health Data Visualization Market is the rapid advancement in veterinary informatics and the proliferation of digital health records across veterinary practices. The increasing awareness among veterinary professionals regarding the benefits of data visualization—such as improved disease surveillance, faster diagnostics, and enhanced treatment planning—has resulted in a surge in demand for robust visualization tools. These technologies enable veterinarians and animal health professionals to interpret complex datasets with greater accuracy, leading to more informed clinical decisions and better patient outcomes. Additionally, the rise in zoonotic diseases and the need for efficient outbreak tracking have further propelled the adoption of these solutions, as authorities strive to curb the spread of infectious diseases among both companion and livestock animals.
The growing emphasis on precision livestock farming and companion animal care is another significant growth driver in this market. With the increasing adoption of wearable devices, IoT sensors, and smart monitoring systems, a vast amount of animal health data is being generated daily. Data visualization platforms play a crucial role in transforming this raw data into actionable insights, allowing stakeholders to monitor animal health parameters in real time, optimize nutrition, and detect anomalies at an early stage. Furthermore, the integration of artificial intelligence and machine learning algorithms into visualization tools has enhanced predictive analytics capabilities, enabling proactive interventions and reducing operational costs for veterinarians, farmers, and animal health organizations.
Government initiatives and regulatory mandates aimed at improving animal health surveillance and reporting standards have also contributed significantly to market growth. Numerous countries are investing in digital infrastructure to support veterinary diagnostics, clinical trials, and research initiatives. These efforts are complemented by collaborations between public health agencies, veterinary institutes, and pharmaceutical companies to develop standardized data visualization frameworks. Such initiatives not only facilitate comprehensive disease monitoring but also foster innovation in veterinary research and development. As a result, the market is experiencing heightened investment from both public and private stakeholders, further accelerating the adoption of advanced data visualization solutions across the animal health ecosystem.
Regionally, North America continues to dominate the Animal Health Data Visualization Market, accounting for the largest revenue share in 2024, followed closely by Europe. The Asia Pacific region, however, is witnessing the fastest growth rate, driven by rising pet adoption, increasing investments in veterinary infrastructure, and growing awareness of animal health management. Latin America and the Middle East & Africa are gradually emerging as promising markets, supported by expanding livestock sectors and government-led digital health initiatives. The regional landscape is characterized by varying adoption rates, regulatory environments, and technological readiness, shaping the competitive dynamics and growth opportunities for market participants.
The Animal Health Data Visualization Market is segmented by component into software and services, each playing a pivotal role in shaping the industry landscape. Software solutions form the backbone of this market, offering advanced visualization capabilities that transform raw animal health data into meaningful visual representations. These plat
https://researchintelo.com/privacy-and-policy
According to our latest research, the Global Synthetic-Data Visualization Display market size was valued at $1.2 billion in 2024 and is projected to reach $6.8 billion by 2033, expanding at an impressive CAGR of 21.4% during the forecast period of 2025–2033. The primary driver fueling this robust growth is the escalating demand for advanced data visualization solutions that can securely leverage synthetic data, especially in highly regulated sectors such as healthcare and finance. As organizations strive to harness artificial intelligence and machine learning while ensuring data privacy and compliance, synthetic-data visualization displays are becoming indispensable for developing, testing, and deploying data-driven applications without exposing sensitive information. This trend is further amplified by the proliferation of big data analytics and the growing necessity for real-time insights, which require sophisticated visualization tools capable of handling complex, high-volume synthetic datasets.
North America currently dominates the global Synthetic-Data Visualization Display market, accounting for over 38% of the total market share in 2024. The region's leadership is underpinned by its mature technology landscape, early adoption of artificial intelligence, and a strong presence of key industry players specializing in synthetic data and visualization technologies. The United States, in particular, benefits from robust regulatory frameworks that encourage privacy-preserving data analytics, as well as substantial investments in R&D by both private enterprises and government agencies. The region’s advanced infrastructure, coupled with a high concentration of Fortune 500 companies in finance, healthcare, and IT, has created a fertile environment for the rapid deployment of synthetic-data visualization displays. Furthermore, North America’s focus on innovation and continuous upskilling of its workforce ensures sustained demand for cutting-edge visualization solutions.
The Asia Pacific region is poised to be the fastest-growing market, with an anticipated CAGR of 25.8% from 2025 to 2033. This remarkable growth is attributed to the region’s burgeoning digital transformation initiatives, increasing investments in artificial intelligence, and expanding IT infrastructure across emerging economies such as China, India, and Southeast Asia. Governments and enterprises in the Asia Pacific are actively exploring synthetic data solutions to overcome data scarcity and privacy concerns, especially in sectors like healthcare and finance where data sensitivity is paramount. The rise of local tech startups and the inflow of venture capital are further catalyzing innovation in synthetic-data visualization displays. Additionally, the region’s focus on smart city projects and digital education is creating new avenues for the adoption of these advanced visualization tools.
Emerging economies in Latin America and the Middle East & Africa are also witnessing gradual adoption of synthetic-data visualization display technologies, albeit at a slower pace compared to developed regions. These markets face unique challenges such as limited digital infrastructure, lower awareness of synthetic data benefits, and regulatory uncertainties. However, localized demand is steadily growing, particularly in sectors like government and education, where the need for secure data sharing and analysis is critical. Policy reforms aimed at digital transformation, coupled with international collaborations and technology transfer initiatives, are expected to accelerate adoption in these regions over the next decade. The potential for leapfrogging traditional data analytics methods with synthetic-data visualization displays presents significant long-term growth opportunities, provided that infrastructure and regulatory challenges are addressed.
| Attributes | Details |
| --- | --- |
| Report Title | Synthetic-Data Visualization Display Market Research Report 2033 |
| By Component | Software, Hardware, Services |
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Objectives: Develop a tool for applying various COVID-19 re-opening guidelines to the more than 120 U.S. Environmental Protection Agency (EPA) facilities.
Methods: A geographic information system boundary was created for each EPA facility encompassing the county where the EPA facility is located and the counties where employees commuted from. This commuting area is used for display in the Dashboard and to summarize population and COVID-19 health data for analysis.
Results: Scientists in EPA's Office of Research and Development developed the EPA Facility Status Dashboard, an easy-to-use web application that displays data and statistical analyses on COVID-19 cases, testing, hospitalizations, and vaccination rates.
Conclusion: The Dashboard was designed to provide readily accessible information for EPA management and staff to view and understand the COVID-19 risk surrounding each facility. It has been modified several times based on user feedback, availability of new data sources, and updated guidance. The views expressed in this article are those of the authors and do not necessarily represent the views or the policies of the U.S. Environmental Protection Agency.
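The commuting-area boundary from the Methods can be approximated as a union of county polygons; a minimal sketch assuming geopandas and hypothetical file and column names (the Dashboard's actual GIS workflow is not published here):

```python
import geopandas as gpd

# County polygons with FIPS codes (hypothetical file and column names)
counties = gpd.read_file("us_counties.shp")

# Home county of the facility plus counties employees commute from
commute_fips = {"08031", "08005", "08059"}  # made-up FIPS codes
facility_area = counties[counties["GEOID"].isin(commute_fips)]

# Dissolve the selected counties into one commuting-area polygon
boundary = facility_area.dissolve()  # single row with the unioned geometry
boundary.to_file("epa_facility_boundary.gpkg", driver="GPKG")
```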
https://spdx.org/licenses/CC0-1.0.html
read-tv
The main paper is about read-tv, open-source software for longitudinal data visualization. We uploaded sample surgical flow disruption data as a use case to highlight read-tv's capabilities. We scrubbed the data of protected health information and uploaded it as a single CSV file. The original data are described below.
Data source
Surgical workflow disruptions, defined as “deviations from the natural progression of an operation thereby potentially compromising the efficiency or safety of care”, provide a window on the systems of work through which it is possible to analyze mismatches between the work demands and the ability of the people to deliver the work. They have been shown to be sensitive to different intraoperative technologies, surgical errors, surgical experience, room layout, checklist implementation and the effectiveness of the supporting team. The significance of flow disruptions lies in their ability to provide a hitherto unavailable perspective on the quality and efficiency of the system. This allows for a systematic, quantitative and replicable assessment of risks in surgical systems, evaluation of interventions to address them, and assessment of the role that technology plays in exacerbation or mitigation.
In 2014, Drs. Catchpole and Anger were awarded NIBIB R03 EB017447 to investigate flow disruptions in robotic surgery, which has resulted in a detailed, multi-level analysis of over 4,000 flow disruptions. Direct observation of 89 RAS (robotic-assisted surgery) cases found a mean of 9.62 flow disruptions per hour, varying across surgical phases and predominantly caused by coordination, communication, equipment, and training problems.
Methods
This section does not describe the methods of read-tv software development, which can be found in the associated manuscript from JAMIA Open (JAMIO-2020-0121.R1). It describes the methods involved in the surgical workflow disruption data collection. A curated, PHI-free (protected health information) version of this dataset was used as a use case for this manuscript.
Observer training
Trained human factors researchers conducted each observation following the completion of observer training. The researchers were two full-time research assistants based in the department of surgery at site 3 who visited the other two sites to collect data. Human Factors experts guided and trained each observer in the identification and standardized collection of FDs. The observers were also trained in the basic components of robotic surgery in order to be able to tangibly isolate and describe such disruptive events.
Comprehensive observer training was ensured with both classroom and floor training. Observers were required to review relevant literature, understand general practice guidelines for observing in the OR (e.g., where to stand, what to avoid, who to speak to), and conduct practice observations. The practice observations were broken down into three phases, all performed under the direct supervision of an experienced observer. During phase one, the trainees oriented themselves to the real-time events of both the OR and the general steps in RAS. The trainee was also introduced to the OR staff and any other involved key personnel. During phase two, the trainer and trainee observed three RAS procedures together to practice collecting FDs and become familiar with the data collection tool. Phase three was dedicated to determining inter-rater reliability by having the trainer and trainee simultaneously, yet independently, conduct observations for at least three full RAS procedures. Observers were considered fully trained if, after three full case observations, intra-class correlation coefficients (based on number of observed disruptions per phase) were greater than 0.80, indicating good reliability.
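The training criterion is an intraclass correlation on paired per-phase counts; a minimal sketch assuming the pingouin library and made-up counts (not the study's data):

```python
import pandas as pd
import pingouin as pg

# Disruption counts per surgical phase, recorded independently by
# trainer and trainee during the same procedure (hypothetical values)
df = pd.DataFrame({
    "phase": [1, 2, 3, 4, 5] * 2,
    "rater": ["trainer"] * 5 + ["trainee"] * 5,
    "count": [4, 6, 12, 5, 2, 5, 6, 11, 5, 3],
})
icc = pg.intraclass_corr(data=df, targets="phase", raters="rater", ratings="count")
print(icc[["Type", "ICC"]])  # observers pass training when ICC > 0.80
```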
Data collection
Following the completion of training, observers individually conducted observations in the OR. All relevant RAS cases were pre-identified on a monthly basis by scanning the surgical schedule and recording a list of procedures. All procedures observed were conducted with the Da Vinci Xi surgical robot, with the exception of one procedure at Site 2, which was performed with the Si robot. Observers attended those cases that fit within their allotted work hours and schedule. Observers used Microsoft Surface Pro tablets configured with a customized data collection tool developed using Microsoft Excel to collect data. The data collection tool divided procedures into five phases, as opposed to the four phases previously used in similar research, to more clearly distinguish between task demands throughout the procedure. Phases consisted of phase 1 - patient in the room to insufflation, phase 2 -insufflation to surgeon on console (including docking), phase 3 - surgeon on console to surgeon off console, phase 4 - surgeon off console to patient closure, and phase 5 - patient closure to patient leaves the operating room. During each procedure, FDs were recorded into the appropriate phase, and a narrative, time-stamp, and classification (based off of a robot-specific FD taxonomy) were also recorded.
Each FD was categorized into one of ten categories: communication, coordination, environment, equipment, external factors, other, patient factors, surgical task considerations, training, or unsure. The categorization system is modeled after previous studies, as well as the examples provided for each FD category.
Once in the OR, observers remained as unobtrusive as possible. They stood at an appropriate vantage point in the room without getting in the way of team members. Once an appropriate time presented itself, observers introduced themselves to the circulating nurse and explained the reason for their presence. Observers did not directly engage in conversations with operating room staff; however, if a staff member approached them with questions or comments, they would respond.
Data Reduction and PHI (Protected Health Information) Removal
This dataset uses 41 of the aforementioned surgeries. All columns have been removed except disruption type, a numeric timestamp for the number of minutes into the day, and surgical phase. In addition, each surgical case had its initial disruption set to 12 noon (720 minutes).
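The noon re-basing can be expressed in a few lines of pandas (a sketch; the file and column names are assumptions, not the dataset's actual headers):

```python
import pandas as pd

# columns (assumed): case_id, disruption_type, minutes, phase
df = pd.read_csv("flow_disruptions.csv")

# Within each case, shift timestamps so the first disruption
# falls at 12 noon (720 minutes into the day)
first = df.groupby("case_id")["minutes"].transform("min")
df["minutes"] = df["minutes"] - first + 720
```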
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background: Healthcare data is a rich yet underutilized resource due to its disconnected, heterogeneous nature. A means of connecting healthcare data and integrating it with additional open and social data in a secure way can support the monumental challenge policy-makers face in safely accessing all relevant data to assist in managing the health and wellbeing of all. The goal of this study was to develop a novel health data platform within the MIDAS (Meaningful Integration of Data Analytics and Services) project, that harnesses the potential of latent healthcare data in combination with open and social data to support evidence-based health policy decision-making in a privacy-preserving manner.
Methods: The MIDAS platform was developed in an iterative and collaborative way with close involvement of academia, industry, healthcare staff and policy-makers, to solve tasks including data storage, data harmonization, data analytics and visualizations, and open and social data analytics. The platform has been piloted and tested by health departments in four European countries, each focusing on different region-specific health challenges and related data sources.
Results: A novel health data platform solving the needs of Public Health decision-makers was successfully implemented within the four pilot regions, connecting heterogeneous healthcare datasets and open datasets and turning large amounts of previously isolated data into actionable information, allowing for evidence-based health policy-making and risk stratification through the application and visualization of advanced analytics.
Conclusions: The MIDAS platform delivers a secure, effective and integrated solution to deal with health data, providing support for health policy decision-making, planning of public health activities and the implementation of the Health in All Policies approach. The platform has proven transferable, sustainable and scalable across policies, data and regions.
The U.S. Geological Survey, in cooperation with the California Department of Water Resources (DWR), has constructed a new spatially distributed Precipitation-Runoff Modeling System (PRMS) for the Merced River Basin (Koczot and others, 2021), which is a tributary of the San Joaquin River in California. PRMS is a deterministic, distributed-parameter, physical-process-based modeling system developed to evaluate the response of streamflow and basin hydrology to various combinations of climate and land use (Markstrom and others, 2015). Although further refinement may be required to apply the Merced PRMS for official streamflow forecast operations, this application of PRMS is calibrated with the intention of simulating (and eventually, forecasting) year-to-year variations of inflows to Lake McClure during the critical April–July snowmelt season, and may become part of a suite of methods used by DWR for forecasting streamflow in and from the basin. The Merced application of PRMS is a high-resolution model defined spatially by discrete, georeferenced mapping units (i.e., "hydrologic response units"; HRUs). Daily inputs of precipitation and maximum and minimum temperature are used to force the application. This application is designed to capture the effects of land use and climate change on streamflows and general hydrogeology from subareas of the model domain. As described in detail in Koczot and others (2021), simulations were calibrated against (1) solar radiation, (2) potential evapotranspiration, and (3) streamflows at 5 nodes representing locations of measured or reconstructed (at the outlet) flows. This application uses the PRMS 4.0.2 executable. Users should review the performance of this model to ensure applicability for their specific purpose. The PRMS application developed for this study can be operated through a customized Object User Interface (OUI; Markstrom and Koczot, 2008) coupled with a version of the Ensemble Streamflow Prediction (ESP; Day, 1985) forecasting tool, a parameter-file editor, and data visualization tools. It also includes daily-climate distribution preprocessing tools (Draper Climate-Distribution Software; Donovan and Koczot, 2019). Hereafter referred to as the Merced OUI, this framework is the platform used to operate the Merced River Basin PRMS and perform streamflow simulations and forecasts.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data and R-script for a tutorial that explains how to convert spreadsheet data to tidy data. The tutorial is published in a blog for The Node (https://thenode.biologists.com/converting-excellent-spreadsheets-tidy-data/education/)
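The tutorial itself works in R; the equivalent wide-to-tidy step in pandas, with hypothetical columns, looks like this:

```python
import pandas as pd

# Wide spreadsheet layout: one column per measurement day (made-up data)
wide = pd.DataFrame({
    "sample": ["A", "B"],
    "day_1": [0.12, 0.08],
    "day_2": [0.34, 0.21],
})

# Tidy layout: one row per observation
tidy = wide.melt(id_vars="sample", var_name="day", value_name="intensity")
print(tidy)
```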
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Introduction: The electronic health record (EHR) has greatly expanded healthcare communication between patients and health workers. However, the volume and complexity of EHR messages have increased health workers' cognitive load, impeding effective care delivery and contributing to burnout.
Methods: To understand these potential detriments resulting from EHR communication, we analyzed EHR messages sent between patients and health workers at Emory Healthcare, a large academic healthcare system in Atlanta, Georgia. We quantified the burden of messages interacted with by each health worker type and visualized the communication patterns using graph theory. Our analysis included 76,694 conversations comprising 144,369 messages sent between 47,460 patients and 3,749 health workers across 85 healthcare specialties.
Results: On average, nurses/certified nursing assistants/medical assistants (nurses/CNA/MA) interacted with the most messages (350), followed by non-physician practitioners (NPP) (241), physicians (166), and support staff (155), with the average conversation involving 10.51 interactions before resolution. Network analysis of the communication flow revealed that each health worker was connected to approximately two other health workers (average degree = 2.10). In message sending, support staff led in closeness centrality (0.44), followed by nurses/CNA/MA (0.41), highlighting their key role in fast information spread. For message reception, nurses/CNA/MA (0.51) and support staff (0.41) also had the highest values, underscoring their vital role in the communication network on the receiving end as well.
Discussion: Our analysis demonstrates the feasibility of applying graph theory to understand communication dynamics between patients and health workers and highlights the burden of EHR-based messaging.
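The reported measures (average degree, closeness centrality for sending and receiving) can be illustrated with networkx on a toy message network (the edges below are hypothetical; the study derives them from EHR message logs):

```python
import networkx as nx

# Directed edges: sender -> recipient (made-up examples)
G = nx.DiGraph()
G.add_edges_from([
    ("patient_1", "nurse_1"), ("nurse_1", "physician_1"),
    ("patient_2", "support_1"), ("support_1", "nurse_1"),
])

# Average degree across all nodes
avg_degree = sum(d for _, d in G.degree()) / G.number_of_nodes()

# networkx's closeness uses incoming paths by default, so reverse
# the graph to measure how quickly a node's *sent* messages spread
send_closeness = nx.closeness_centrality(G.reverse())
recv_closeness = nx.closeness_centrality(G)
print(avg_degree, send_closeness["nurse_1"], recv_closeness["nurse_1"])
```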