This dataset collects the slides that were presented at the Data Collaborations Across Boundaries session in SciDataCon 2022, part of the International Data Week.
The following session proposal was prepared by Tyng-Ruey Chuang and submitted to SciDataCon 2022 organizers for consideration on 2022-02-28. The proposal was accepted on 2022-03-28. Six abstracts were submitted and accepted to this session. Five presentations were delivered online in a virtual session on 2022-06-21.
Data Collaborations Across Boundaries
There are many good stories about data collaborations across boundaries. We need more. We also need to share the lessons each of us has learned from collaborating with parties and communities not in our familiar circles.
By boundaries, we mean not just the regulatory borders between nation states that constrain data sharing, but the various barriers, readily conceivable or not, that hinder collaboration in aggregating, sharing, and reusing data for social good. These barriers to collaboration exist between academic disciplines, between economic players, and between the many user communities, just to name a few. There are also cross-domain barriers, for example those that lie among data practitioners, public administrators, and policy makers when they articulate the why, what, and how of "open data" and debate its economic significance and fair distribution. This session aims to bring together experiences and thoughts on good data practices for facilitating collaborations across boundaries and domains.
The success of Wikipedia proves that collaborative content production and service, by way of copyleft licenses, can be sustainable when coordinated by a non-profit and funded by the general public. Collaborative code repositories like GitHub and GitLab demonstrate the enormous value and mass scale of systems-facilitated integration of user contributions that run across multiple programming languages and developer communities. Research data aggregators and repositories such as GBIF, GISAID, and Zenodo have served numerous researchers across academic disciplines. Citizen science projects and platforms, for instance eBird, Galaxy Zoo, and Taiwan Roadkill Observation Network (TaiRON), not only collect data from diverse communities but also manage and release datasets for research use and public benefit (e.g. TaiRON datasets being used to improve road design and reduce animal mortality). At the same time, large-scale data collaborations depend on standards, protocols, and tools for building registries (e.g. Archival Resource Key), ontologies (e.g. Wikidata and schema.org), repositories (e.g. CKAN and Omeka), and computing services (e.g. Jupyter Notebook). There are many types of data collaborations. The above lists only a few.
This session proposal calls for contributions that bring forward lessons learned from collaborative data projects and platforms, especially those that involve multiple communities and/or span organizational boundaries. Presentations focusing on the following (non-exclusive) topics are sought:
Support mechanisms and governance structures for data collaborations across organizations/communities.
Data policies --- such as data sharing agreements, memoranda of understanding, terms of use, privacy policies, etc. --- for facilitating collaborations across organizations/communities.
Traditional and non-traditional funding sources for data collaborations across multiple parties; sustainability of data collaboration projects, platforms, and communities.
Data workflows --- collection, processing, aggregation, archiving, and publishing, etc. --- designed with considerations of (external) collaboration.
Collaborative web platforms for data acquisition, curation, analysis, visualization, and education.
Examples and insights from data trusts, data coops, as well as other formal and informal forms of data stewardship.
Debates on the pros and cons of centralized, distributed, and/or federated data services.
Practical lessons learned from data collaboration stories: failures, successes, incidents, unexpected turns of events, aftermaths, etc. (no story is too small!).
Researchers across the country and around the world expend tremendous resources to gather and analyze vast stores of data and populate models to better understand the processes they are studying. Each of those researchers has limited money, time, computational capacity, data storage, and ability to put that data to productive use. What if they could combine their efforts to make collaboration easier? What if those collected data sets and processed model outputs could be used collaboratively to help advance knowledge beyond their original purpose? These questions motivate the movement towards open data, better data management, and collaboration and sharing in the use of data and models. In short, researchers are relying more on teamwork to tackle the big problems of the day. This presentation will describe HydroShare, the web-based hydrologic information system operated by the Consortium of Universities for the Advancement of Hydrologic Science Inc. (CUAHSI), which is available for use as a service to the hydrology community. HydroShare includes a repository for users to share and publish data and models in a variety of formats, and to make this information available in a citable, shareable, and discoverable manner. HydroShare also includes tools (web apps) that can act on content in HydroShare, providing users with a gateway to high-performance computing and computing in the cloud. HydroShare has components that support: (1) resource storage, (2) resource exploration, and (3) web apps for actions on resources. The HydroShare data discovery, sharing, and publishing functions, as well as HydroShare web apps, provide the capability to analyze data and execute models completely in the cloud, overcoming desktop platform limitations.
We will discuss how these developments can be used to support collaborative research and modeling in hydrology, where being web-based is of value because collaborators all have access to the same functionality regardless of their computers. We will illustrate the use of HydroShare for collecting, and making accessible to the community, data from the US National Water Model and the 2017 Atlantic hurricanes Harvey, Irma, and Maria, which had significant impacts on parts of the US and islands in the Caribbean. HydroShare is being used to assemble, document, and archive hydrologic data from these events to support research to improve our understanding of, and capability to prepare for and respond to, such extreme events in the future.
Presentation at 2018 AWRA Spring Specialty Conference: Geographic Information Systems (GIS) and Water Resources X, Orlando, Florida, April 23-25, http://awra.org/meetings/Orlando2018/.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background: Sharing of epidemiological and clinical data sets among researchers is poor at best, to the detriment of science and the community at large. The purpose of this paper is therefore to (1) describe a novel Web application designed to share information on study data sets, focusing on epidemiological clinical research in a collaborative environment, and (2) create a policy model placing this collaborative environment into the current scientific social context.
Methodology: The Database of Databases application was developed based on feedback from epidemiologists and clinical researchers requiring a Web-based platform that would allow for sharing of information about epidemiological and clinical study data sets in a collaborative environment. This platform should ensure that researchers can modify the information. Model-based predictions of the number of publications and funding resulting from combinations of different policy implementation strategies (for metadata and data sharing) were generated using System Dynamics modeling.
Principal Findings: The application allows researchers to easily upload information about clinical study data sets, which is searchable and modifiable by other users in a wiki environment. All modifications are filtered by the database principal investigator in order to maintain quality control. The application has been extensively tested and currently contains 130 clinical study data sets from the United States, Australia, China, and Singapore.
Model results indicated that any policy implementation would be better than the current strategy, that metadata sharing is better than data sharing, and that combined policies achieve the best results in terms of publications.
Conclusions: Based on our empirical observations and the resulting model, the social network environment surrounding the application can assist epidemiologists and clinical researchers in contributing and searching for metadata in a collaborative environment, thus potentially facilitating collaboration efforts among research communities distributed around the globe.
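The paper's System Dynamics model is not reproduced here, but the stock-and-flow idea behind it can be sketched in a few lines. All rates, coefficients, and initial values below are hypothetical illustrations, not parameters from the paper; the sketch only shows how policy switches feed stocks of shared metadata and data, which in turn drive a publication flow.

```python
# Minimal stock-and-flow sketch (Euler integration) of how metadata- and
# data-sharing policies could drive publication output. All parameters are
# invented for illustration.

def simulate(metadata_policy: bool, data_policy: bool,
             years: float = 10.0, dt: float = 0.1) -> float:
    shared_metadata = 10.0   # stock: study descriptions available to others
    shared_data = 5.0        # stock: data sets available to others
    publications = 0.0       # stock: cumulative publications

    metadata_rate = 8.0 if metadata_policy else 1.0   # new metadata records/year
    data_rate = 3.0 if data_policy else 0.5           # new shared data sets/year

    for _ in range(int(years / dt)):
        # publication flow: reuse of metadata (discovery) and data (analysis)
        pub_flow = 0.02 * shared_metadata + 0.05 * shared_data
        shared_metadata += metadata_rate * dt
        shared_data += data_rate * dt
        publications += pub_flow * dt
    return publications
```

Running the four policy combinations reproduces the qualitative ordering reported above: any policy beats the status quo, metadata sharing edges out data sharing, and the combined policy does best.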
How do you manage, track, and share hydrologic data and models within your research group? Do you find it difficult to keep track of who has access to which data and who has the most recent version of a dataset or research product? Do you sometimes find it difficult to share data and models and collaborate with colleagues outside your home institution? Would it be easier if you had a simple way to share and collaborate around hydrologic datasets and models? HydroShare is a new, web-based system for sharing hydrologic data and models with specific functionality aimed at making collaboration easier. Within HydroShare, we have developed new functionality for creating datasets, describing them with metadata, and sharing them with collaborators. In HydroShare we cast hydrologic datasets and models as “social objects” that can be published, collaborated around, annotated, discovered, and accessed. In this presentation, we will discuss and demonstrate the collaborative and social features of HydroShare and how it can enable new, collaborative workflows for you, your research group, and your collaborators across institutions. HydroShare’s access control and sharing functionality enable both public and private sharing with individual users and collaborative user groups, giving you flexibility over who can access data and at what point in the research process. HydroShare can make it easier for collaborators to iterate on shared datasets and models, creating multiple versions along the way, and publishing them with a permanent landing page, metadata description, and citable Digital Object Identifier (DOI). Functionality for creating and sharing resources within collaborative groups can also make it easier to overcome barriers such as institutional firewalls that can make collaboration around large datasets difficult. Functionality for commenting on and rating resources supports community collaboration and quality evaluation of resources in HydroShare.
This presentation was delivered as part of a Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) Cyberseminar in June 2016. Cyberseminars are recorded, and archived recordings are available via the CUAHSI website at http://www.cuahsi.org.
Website for brain experimental data and other resources such as stimuli and analysis tools. Provides a marketplace and discussion forum for sharing tools and data in neuroscience. Data repository and collaborative tool that supports integration of theoretical and experimental neuroscience through collaborative research projects. CRCNS offers funding for a new class of proposals focused on data sharing and other resources.
HydroShare is an online, collaborative system for open sharing of hydrologic data, analytical tools, and models. It supports the sharing of and collaboration around “resources” which are defined primarily by standardized metadata, content data models for each resource type, and an overarching resource data model based on the Open Archives Initiative’s Object Reuse and Exchange (OAI-ORE) standard and a hierarchical file packaging system called “BagIt”. HydroShare expands the data sharing capability of the CUAHSI Hydrologic Information System by broadening the classes of data accommodated to include geospatial and multidimensional space-time datasets commonly used in hydrology. HydroShare also includes new capability for sharing models, model components, and analytical tools, and will take advantage of emerging social media functionality to enhance information about and collaboration around hydrologic data and models. It also supports web services and server/cloud-based computation operating on resources for the execution of hydrologic models and analysis and visualization of hydrologic data. HydroShare uses iRODS as a network file system for underlying storage of datasets and models. Collaboration is enabled by casting datasets and models as "social objects". Social functions include both private and public sharing, formation of collaborative groups of users, and value-added annotation of shared datasets and models. The HydroShare web interface and social media functions were developed using the Django web application framework coupled to iRODS. Data visualization and analysis are supported through the Tethys Platform web GIS software stack. Links to external systems are supported by RESTful web service interfaces to HydroShare’s content. This presentation will introduce the HydroShare functionality developed to date and describe ongoing development of functionality to support collaboration and integration of data and models.
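As a rough illustration of the BagIt idea the resource data model builds on (a payload directory plus a checksum manifest and bag declaration, per the BagIt specification, RFC 8493), one could package a resource's files like this. The payload file names are invented, and HydroShare additionally wraps such a bag with an OAI-ORE resource map describing the aggregation.

```python
# Sketch of BagIt-style packaging: payload files under data/ plus a
# sha256 manifest and a bag declaration. File names are illustrative only.
import hashlib
from pathlib import Path

def make_bag(bag_dir: Path, payload: dict) -> None:
    """Write payload (name -> bytes) into a minimal BagIt-style layout."""
    data_dir = bag_dir / "data"
    data_dir.mkdir(parents=True, exist_ok=True)
    manifest_lines = []
    for name, content in payload.items():
        (data_dir / name).write_bytes(content)
        digest = hashlib.sha256(content).hexdigest()
        manifest_lines.append(f"{digest}  data/{name}")
    (bag_dir / "manifest-sha256.txt").write_text("\n".join(manifest_lines) + "\n")
    (bag_dir / "bagit.txt").write_text(
        "BagIt-Version: 1.0\nTag-File-Character-Encoding: UTF-8\n"
    )
```

A consumer can then verify the bag by recomputing each payload file's checksum against the manifest, which is what makes the package safely exchangeable between systems.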
Slides for AGU 2015 presentation H42A-04, December 17, 2015
Advances in many domains of earth science increasingly require integration of information from multiple sources, reuse and repurposing of data, and collaboration. HydroShare is a web-based hydrologic information system operated by the Consortium of Universities for the Advancement of Hydrologic Science Inc. (CUAHSI). HydroShare includes a repository for users to share and publish data and models in a variety of formats, and to make this information available in a citable, shareable, and discoverable manner. HydroShare also includes tools (web apps) that can act on content in HydroShare, providing users with a gateway to high performance computing and computing in the cloud. Jupyter notebooks and associated code and data are an effective way to document and make a research analysis or modeling procedure reproducible. This presentation will describe how a Jupyter notebook in a HydroShare resource can be opened from a JupyterHub app using the HydroShare web app resource and API capabilities that enable linking a web app to HydroShare, reading data from HydroShare, and writing results back to the HydroShare repository in a way that results can be shared among HydroShare users and groups to support research collaboration. This interoperability between HydroShare and other cyberinfrastructure elements serves as an example for how EarthCube cyberinfrastructure may integrate. Base functionality within JupyterHub supports data organization, simple scripting, and visualization, while Docker containers are used to encapsulate models that have specific dependency requirements. This presentation will describe the strategy for, and challenges of, using models in Docker containers, as well as using GeoTrust software to package computational experiments as 'geounits', which are reproducible research objects that describe and package computational experiments.
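The read-from/write-back pattern described above can be sketched as two REST calls. The endpoint paths below follow HydroShare's hsapi convention but should be checked against the current API documentation, and the resource id and token are placeholders; the sketch only prepares the requests rather than sending them.

```python
# Sketch of the HydroShare REST calls a JupyterHub app could use: pull a
# resource's content, then push a result file back. Endpoint paths are an
# assumption to verify against the hsapi docs; id and token are placeholders.
from urllib.request import Request

BASE = "https://www.hydroshare.org/hsapi"

def download_request(resource_id: str, token: str) -> Request:
    # GET the resource (served as a zipped BagIt archive)
    return Request(
        f"{BASE}/resource/{resource_id}/",
        headers={"Authorization": f"Bearer {token}"},
        method="GET",
    )

def upload_request(resource_id: str, token: str) -> Request:
    # POST a result file into the resource's file list
    return Request(
        f"{BASE}/resource/{resource_id}/files/",
        headers={"Authorization": f"Bearer {token}"},
        method="POST",
    )
```

In a notebook one would send these with `urllib.request.urlopen`, or use the hsclient package, which wraps the same API.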
Presentation at EarthCube all hands meeting, June 6-8, 2018, Washington, DC https://www.earthcube.org/ECAHM2018
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
This data set contains 40 instances of the Dynamic Pickup and Delivery Problem with Time Windows, each containing 1000 orders, used in the article The Value of Information Sharing for Platform-Based Collaborative Vehicle Routing by J. Los, F. Schulte, M.T.J. Spaan, and R.R. Negenborn, published in Transportation Research Part E.
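The instance file format is not described here, but the core object in such a problem, an order with pickup and delivery time windows that becomes known dynamically, can be sketched as follows. Field names are illustrative, not the dataset's actual schema.

```python
# Sketch of a dynamic pickup-and-delivery order with time windows. Field
# names are invented; consult the dataset documentation for the real format.
from dataclasses import dataclass

@dataclass
class Order:
    release_time: float                   # when the order becomes known
    pickup: tuple                         # pickup location (x, y)
    delivery: tuple                       # delivery location (x, y)
    pickup_window: tuple                  # (earliest, latest) pickup time
    delivery_window: tuple                # (earliest, latest) delivery time

def feasible_times(order: Order, pickup_t: float, delivery_t: float) -> bool:
    """Check a planned pickup/delivery pair against the order's constraints."""
    return (
        order.release_time <= pickup_t            # cannot serve before it exists
        and order.pickup_window[0] <= pickup_t <= order.pickup_window[1]
        and order.delivery_window[0] <= delivery_t <= order.delivery_window[1]
        and pickup_t <= delivery_t                # pickup precedes delivery
    )
```

A routing platform would apply such a check to every order on a candidate vehicle route before accepting an exchange between carriers.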
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
This is the poster presented at the Earthcube all hands meeting June 7, 2017.
HydroShare is an online collaboration system for sharing hydrologic data, analytical tools, and models. It supports the sharing of and collaboration around “resources” which are defined by standardized content types for data formats and models commonly used in hydrology. Currently, with HydroShare you can: share your data and models with colleagues; manage who has access to the content that you share; share, access, visualize, and manipulate a broad set of hydrologic data types and models; publish data and models and obtain a citable digital object identifier (DOI); aggregate your resources into collections; discover and access data and models published by others; use the web services application programming interface (API) to programmatically access resources; and use integrated web applications to visualize, analyze and run models on data in HydroShare. Composite resources allow multiple file types from a study to be combined together, providing, as a single resource, an aggregation of all the data elements associated with a model or study. HydroShare’s composite resource construct can be used to support software that enables transparency and reproducibility, and thereby enhance trust in the research findings. Toward this, as part of the EarthCube GeoTrust project, we are investigating how the composite resource construct can be extended to support transparency and reproducibility. The EarthCube GeoTrust project is creating “geounits”, which are self-contained packages of computational experiments that can be guaranteed to repeat or reproduce regardless of deployment issues. Since geounits provide a complete description of all the data elements of an instance (run) of a computational experiment, including input files, parameter files, the model executable, associated libraries, and output files produced, they can be mapped to a specialization of HydroShare’s composite resource type.
This has the direct effect of transforming HydroShare into a repository of geounits, making published and cited experiments not only accessible but also reproducible, thereby enhancing trust in them. Tools that create geounits use HydroShare’s REST API to load them into HydroShare, where they can then be shared with other users and downloaded for reproduction of the computational experiment, or for further research with additional or alternate data. This presentation will describe the functionality and architecture of HydroShare that enables the creation of geounits, comprising: (1) resource storage, (2) resource exploration, and (3) actions on resources by web applications. HydroShare’s components are loosely coupled and interact through APIs, which enhances robustness, as components can be upgraded and advanced relatively independently. The full power of this paradigm is the extensibility it supports, in that anybody can develop a web application that interacts with resources stored in HydroShare. We welcome discussion of the opportunities this enables for interoperability with other EarthCube tools, to the benefit of the geoscience research community.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
Background: Scientific research in the 21st century is more data intensive and collaborative than in the past. It is important to study the data practices of researchers: data accessibility, discovery, re-use, preservation and, particularly, data sharing. Data sharing is a valuable part of the scientific method, allowing for verification of results and extending research from prior results. Methodology/Principal Findings: A total of 1329 scientists participated in this survey exploring current data sharing practices and perceptions of the barriers and enablers of data sharing. Scientists do not make their data electronically available to others for various reasons, including insufficient time and lack of funding. Most respondents are satisfied with their current processes for the initial and short-term parts of the data or research lifecycle (collecting their research data; searching for, describing or cataloging, analyzing, and short-term storage of their data) but are not satisfied with long-term data preservation. Many organizations do not provide support to their researchers for data management in either the short or the long term. If certain conditions are met (such as formal citation and sharing reprints), respondents agree they are willing to share their data. There are also significant differences in data management practices and approaches based on primary funding agency, subject discipline, age, work focus, and world region. Conclusions/Significance: Barriers to effective data sharing and preservation are deeply rooted in the practices and culture of the research process as well as the researchers themselves. New mandates for data management plans from NSF and other federal agencies and world-wide attention to the need to share and preserve data could lead to changes.
Large scale programs, such as the NSF-sponsored DataNET (including projects like DataONE) will both bring attention and resources to the issue and make it easier for scientists to apply sound data management principles.
Integrated geospatial infrastructure is the modern pattern for connecting organizations across borders, jurisdictions, and sectors to address shared challenges. Implementation starts with a strategy, followed by the pillars of collaborative governance, data and technology, capacity building, and engagement. It is inherently multi-organizational. Whether you call your initiative Open Data, Regional GIS, Spatial Data Infrastructure (SDI), Digital Twin, Knowledge Infrastructure, Digital Ecosystem, or otherwise, collaboration is key. This guide shares good practices for sharing and collaborating among multiple partners in your OneMap initiative. You'll learn to create a great group-sharing experience for your contributing partners, invite partners to groups, index groups to your Hub, and populate your OneMap Contributors page with category cards.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Results of a survey sent to University College London (UCL) students who have undertaken Gene Ontology biocuration projects.
https://www.cognitivemarketresearch.com/privacy-policy
According to Cognitive Market Research, the global collaborative writing software market size will be USD 24154.2 million in 2024. It will expand at a compound annual growth rate (CAGR) of 9.50% from 2024 to 2031.
North America held the major market share for more than 40% of the global revenue with a market size of USD 9661.68 million in 2024 and will grow at a compound annual growth rate (CAGR) of 7.7% from 2024 to 2031.
Europe accounted for a market share of over 30% of the global revenue with a market size of USD 7246.26 million.
Asia Pacific held a market share of around 23% of the global revenue with a market size of USD 5555.47 million in 2024 and will grow at a compound annual growth rate (CAGR) of 11.5% from 2024 to 2031.
Latin America had a market share of more than 5% of the global revenue with a market size of USD 1207.71 million in 2024 and will grow at a compound annual growth rate (CAGR) of 8.9% from 2024 to 2031.
Middle East and Africa had a market share of around 2% of the global revenue and was estimated at a market size of USD 483.08 million in 2024 and will grow at a compound annual growth rate (CAGR) of 9.2% from 2024 to 2031.
The real-time editing category is the fastest-growing segment of the collaborative writing software industry.
Market Dynamics of Collaborative Writing Software Market
Key Drivers for Collaborative Writing Software Market
Increasing Reliance on Cloud-Based Applications to Boost Market Growth
The growing reliance on cloud-based apps is driving the global market for collaborative writing software. The demand for cloud-based collaboration solutions has increased as companies and educational institutions continue to adopt online learning environments and remote work. Multiple users can edit documents at the same time from different locations with cloud-based writing software, which also seamlessly integrates with other cloud services like project management, communication, and storage. Its scalability and ease of use make it very attractive to enterprises looking for flexible and affordable solutions. Furthermore, improved security, automatic backups, and real-time updates are provided by cloud-based applications, all of which are necessary to guarantee the continuity and integrity of collaborative projects. For instance, teams may collaborate on documents simultaneously from different places using cloud-based solutions like Google Docs and Microsoft Office 365, which cuts down on delays and promotes more effective workflows. The increasing trend towards remote work and dispersed teams is also fueling the need for collaborative writing tools that facilitate smooth, virtual teamwork.
The Growth of Industries Focused on Content Creation to Drive Market Growth
The content creation sectors are driving the robust expansion of the global market for collaborative writing software. The increasing adoption of digital platforms by enterprises, media houses, and educational institutions has led to a growing demand for collaborative tools that optimize content generation and editing procedures. This tendency is especially noticeable in fields such as academic research, marketing, and media, where teams must collaborate in real time on documents, articles, and reports. Furthermore, cloud-based writing platforms are becoming more popular as a result of the growing need for remote and hybrid work environments, which facilitates easy collaboration across different locations. Publishing, content marketing, and digital media are just a few of the industries that are always looking for cutting-edge technologies to streamline their processes and boost output.
Restraint Factor for the Collaborative Writing Software Market
Data Privacy will Limit Market Growth
Concerns over data privacy are a major obstacle to the growth of the global collaborative writing software market. There is an increased risk of data breaches and unauthorized access as firms use these platforms more frequently for team collaboration, real-time document editing, and file sharing. Compliance with international data privacy legislation, such as the CCPA in the United States and the GDPR in Europe, has become essential for businesses that exchange sensitive information through these platforms. It is difficult for software suppliers to maintain compliance across numerous jurisdictions because of these legal restrictions, which place strict limits on how data is handled, s...
https://www.imarcgroup.com/privacy-policy
The Japan team collaboration software market size reached USD 1.1 Billion in 2024. Looking forward, IMARC Group expects the market to reach USD 3.5 Billion by 2033, exhibiting a growth rate (CAGR) of 12.73% during 2025-2033. The increasing shift toward remote work and hybrid work models, the rising adoption of collaboration tools for sensitive information, and the growing popularity of cloud-based solutions represent some of the key factors driving the market.
| Report Attribute | Key Statistics |
|---|---|
| Base Year | 2024 |
| Forecast Years | 2025-2033 |
| Historical Years | 2019-2024 |
| Market Size in 2024 | USD 1.1 Billion |
| Market Forecast in 2033 | USD 3.5 Billion |
| Market Growth Rate 2025-2033 | 12.73% |
Team collaboration software, often referred to as collaborative work management or team productivity software, is a type of application or platform designed to facilitate and enhance communication, coordination, and collaboration among team members within an organization. Its primary purpose is to streamline teamwork, improve productivity, and enable more efficient project management. It typically includes a range of communication tools, such as chat, instant messaging, and discussion boards. These features allow team members to communicate in real-time, share information, ask questions, and discuss ideas without the need for lengthy email exchanges or in-person meetings. They also enable easy sharing of documents, files, and resources within the team. Team members can upload, access, edit, and collaborate on documents in a centralized location. This feature helps prevent version control issues and ensures everyone is working with the most up-to-date information. Team collaboration software often includes task and project management tools. Team leaders can assign tasks, set deadlines, and track progress. This helps ensure that everyone knows their responsibilities and that projects are completed on time.
The COVID-19 pandemic accelerated the shift toward remote work and hybrid work models in Japan. As organizations adapted to these new working arrangements, the demand for team collaboration software surged. Businesses needed tools that could facilitate communication, project management, and collaboration among remote and dispersed teams. In addition, many Japanese companies are actively pursuing digital transformation initiatives to stay competitive in the global market. Team collaboration software plays a crucial role in these efforts by enabling efficient communication and collaboration, driving productivity, and supporting innovation. Besides, Japanese businesses are placing a greater emphasis on improving productivity and efficiency in their operations. Team collaboration software helps streamline processes, reduce manual tasks, and enhance communication, ultimately leading to improved productivity levels. Moreover, integration capabilities have become a key driver for team collaboration software. Organizations in Japan seek solutions that can seamlessly integrate with other software tools they use, such as project management, CRM, and document management systems. This integration enhances workflow automation and data sharing. Additionally, cloud-based team collaboration software solutions are preferred for their scalability, accessibility, and reduced IT infrastructure costs. As a result, the adoption of cloud-based collaboration tools is on the rise in Japan, catering to businesses of all sizes. Furthermore, with the increasing use of collaboration tools for sensitive information, organizations in Japan are placing a strong emphasis on security and compliance features. Ensuring data protection and compliance with relevant regulations is a critical driver in the adoption of team collaboration software.
IMARC Group provides an analysis of the key trends in each segment of the market, along with forecasts at the country level for 2025-2033. Our report has categorized the market based on component, software type, deployment mode, and industry vertical.
Component Insights:
The report has provided a detailed breakup and analysis of the market based on the component. This includes solution and services.
Software Type Insights:
A detailed breakup and analysis of the market based on the software type have also been provided in the report. This includes conferencing as well as communication and coordination.
Deployment Mode Insights:
The report has provided a detailed breakup and analysis of the market based on deployment mode. This includes on-premises and cloud-based.
Industry Vertical Insights:
A detailed breakup and analysis of the market based on the industry vertical has also been provided in the report. This includes BFSI, manufacturing, healthcare, IT and telecommunications, retail and e-commerce, government and defense, media and entertainment, education, and others.
Regional Insights:
The report has also provided a comprehensive analysis of all the major regional markets, which include the Kanto Region, Kansai/Kinki Region, Central/Chubu Region, Kyushu-Okinawa Region, Tohoku Region, Chugoku Region, Hokkaido Region, and Shikoku Region.
The market research report has also provided a comprehensive analysis of the competitive landscape, covering market structure, key player positioning, top winning strategies, a competitive dashboard, and a company evaluation quadrant. Detailed profiles of all major companies are also provided.
Report Features | Details
---|---
Base Year of the Analysis | 2024
Historical Period | 2019-2024
Forecast Period | 2025-2033
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background: The use of routinely collected health data for secondary research purposes is increasingly recognised as a methodology that advances medical research, improves patient outcomes, and guides policy. This secondary data, as found in electronic medical records (EMRs), can be optimised through conversion into a uniform data structure to enable analysis alongside other comparable health metric datasets. This can be achieved with the Observational Medical Outcomes Partnership Common Data Model (OMOP-CDM), which employs a standardised vocabulary to facilitate systematic analysis across various observational databases. The concept behind the OMOP-CDM is the conversion of data into a common format through the harmonisation of terminologies, vocabularies, and coding schemes within a single repository. The OMOP model enhances research capacity through the development of shared analytic and prediction techniques; pharmacovigilance for the active surveillance of drug safety; and 'validation' analyses across multiple institutions in Australia, the United States, Europe, and the Asia Pacific. In this research, we aim to investigate the use of the open-source OMOP-CDM in the PATRON primary care data repository.

Methods: We used standard structured query language (SQL) to construct extract, transform, and load (ETL) scripts to convert the data to the OMOP-CDM. Mapping distinct free-text terms extracted from various EMRs presented a substantial challenge, as many terms could not be automatically matched to standard vocabularies through direct text comparison, leaving a number of terms that required manual assignment. To address this issue, our clinical mappers were instructed to focus only on terms that appeared with sufficient frequency. We established a specific threshold value for each domain, ensuring that more than 95% of all records were linked to an approved vocabulary such as SNOMED once mapping was completed. To assess the data quality of the resultant OMOP dataset, we used the OHDSI Data Quality Dashboard (DQD) to evaluate the plausibility, conformity, and comprehensiveness of the data in the PATRON repository according to the Kahn framework.

Results: Across three primary care EMR systems, we converted data on 2.03 million active patients to version 5.4 of the OMOP common data model. The DQD assessment involved a total of 3,570 individual evaluations, each comparing an outcome against a predefined threshold; a 'FAIL' occurred when the percentage of non-compliant rows exceeded the specified threshold value. The primary care OMOP database described here achieved an overall pass rate of 97%.

Conclusion: The OMOP-CDM's widespread international use, support, and training provide a well-established pathway for data standardisation in collaborative research. Its compatibility allows analysis packages to be shared across local and international research groups, facilitating rapid and reproducible data comparisons. A suite of open-source tools, including the OHDSI Data Quality Dashboard (version 1.4.1), supports the model. Its simplicity and standards-based approach facilitate adoption and integration into existing data processes.
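The frequency-threshold strategy described above (map only the most frequent free-text terms until a target share of records is covered) can be sketched as follows. This is an illustrative sketch only: the term names and counts are invented, and the PATRON ETL itself is SQL-based rather than Python.

```python
from collections import Counter

def terms_to_map(term_counts: Counter, coverage: float = 0.95) -> list[str]:
    """Select the most frequent free-text terms whose manual mapping would
    link at least `coverage` of all records to a standard vocabulary."""
    total = sum(term_counts.values())
    selected, covered = [], 0
    for term, count in term_counts.most_common():
        if covered / total >= coverage:  # target coverage already reached
            break
        selected.append(term)
        covered += count
    return selected

# Hypothetical free-text medication terms with record counts.
counts = Counter({"paracetamol 500mg": 600, "amoxycillin": 250,
                  "pcm 500": 100, "amoxyl caps": 30, "misc note": 20})
print(terms_to_map(counts))  # the three high-frequency terms cover 95% of records
```

Terms below the cut-off are left unmapped, trading a small loss of coverage for a large reduction in manual mapping effort.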
Poster for AGU Fall Meeting, December 11, 2023
https://agu.confex.com/agu/fm23/meetingapp.cgi/Paper/1336263
HydroShare (http://www.hydroshare.org) is a web-based repository and hydrologic information system operated by the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) that enables users to share, collaborate around, and publish data, models, scripts, and applications associated with water-related research to meet Findable, Accessible, Interoperable, and Reusable (FAIR) open data mandates. The HydroShare repository also links with connected computational systems, enabling users to reproducibly run models and analyses and share documented workflows. This presentation will overview the capabilities and best practices developed for collaboration and sharing of data and other research products, along with the use of HydroShare and linked computing. It will focus on successes and challenges in engaging scholars, researchers, and practitioners as individuals and as communities, including lessons learned in sharing data across large scientific communities such as the Critical Zone Collaborative Network. It will also cover collaboration functions being developed for the Institute for Geospatial Understanding through an Integrative Discovery Environment (I-GUIDE) and the Cooperative Institute for Research to Operations in Hydrology (CIROH), which face challenges associated with large-scale input/output data preparation, staging, and subsetting, along with the execution of large-scale models.
https://www.verifiedmarketresearch.com/privacy-policy/
Collaboration Software Market size was valued at USD 20.17 Billion in 2024 and is projected to reach USD 8.41 Billion by 2031, growing at a CAGR of 12.75% during the forecast period 2024-2031.
Global Collaboration Software Market Drivers
The market drivers for the Collaboration Software Market can be influenced by various factors. These may include:
Remote Work and Distributed Teams: As remote work and distributed teams become more common, strong collaboration technologies are needed to support coordination and communication amongst personnel who are spread out geographically. Productivity and teamwork are supported by collaboration software, which makes it possible to collaborate seamlessly regardless of one’s physical location.
Digital Transformation Efforts: To modernize their operations and processes, organizations across industries are pursuing digital transformation initiatives. Collaboration software plays a critical part in these efforts, enhancing operational efficiency and agility by offering tools for real-time communication, document sharing, project management, and workflow automation.
Need for Improved Communication and Connectivity: In today's fast-paced corporate climate, innovation, decision-making, and problem-solving depend heavily on efficient communication and connectivity. Collaboration software lets teams interact and work together in real time regardless of physical location, with capabilities such as video conferencing, instant messaging, and virtual meeting rooms.
Rise of Remote and Team Collaboration Tools: The popularity of team collaboration platforms and remote collaboration tools reflects rising demand for solutions that let team members collaborate seamlessly regardless of location or time zone. In a collaborative digital workspace, teams can work effectively, discuss ideas, iterate on projects, and make well-informed decisions.
Emphasis on Employee Engagement and Satisfaction: Businesses understand how critical it is to create a positive workplace and encourage employee engagement and satisfaction. Collaboration software makes it easier for staff to collaborate, share expertise, and engage socially, promoting a sense of community, camaraderie, and belonging that can ultimately increase worker satisfaction and retention.
Integration with Productivity and Business Applications: The value proposition of collaboration software is improved by integration capabilities with productivity and business applications like calendars, email, document management systems, project management tools, and customer relationship management (CRM) software. Users may access and exchange data across many platforms and apps with seamless integration, which improves productivity and streamlines workflows.
Growing Adoption of Cloud-Based Solutions: Because of their affordability, scalability, and flexibility, cloud-based collaboration software solutions are becoming more and more popular. Cloud-based collaboration solutions provide for rapid deployment and flexibility to meet changing company needs. They also remove the need for on-premises infrastructure maintenance and provide anytime, anywhere access to collaboration tools.
Prioritizing Security and Compliance in Collaboration: Organizations are giving collaboration security and compliance more attention due to the increase in cyber threats and concerns about data privacy. To protect sensitive data and comply with legal requirements, collaboration software providers are investing in strong security features, encryption mechanisms, access controls, and compliance certifications.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This fileset contains the L2TAP (Linked Data Log to Transparency, Accountability, and Privacy) privacy audit logs used in the experimental evaluation described in the associated Blockchain Enabled Privacy Audit Logs, ISWC 2017 research paper. The datasets contain RDF named graphs that illustrate various privacy events in an L2TAP audit log, as well as the signature and block RDF graphs described in the associated publication. Each zip folder holds synthetic L2TAP log data, simulating the process of an auditor checking the integrity of an audit log. A basic log consists of eight events: log initialization, participants registration, privacy preferences and policies, access request, access response, obligation acceptance, performed obligation, and actual access. The zip file name refers to the number of events in the log, e.g. 9998, while individual .rdf files refer to specific events logged. Data are provided in .rdf format, accessible from standard text editing software, within compressed .zip files, which can be uncompressed using standard compression utilities.

Background (associated publication abstract): Privacy audit logs are used to capture the actions of participants in a data sharing environment in order for auditors to check compliance with privacy policies. However, collusion may occur between the auditors and participants to obfuscate actions that should be recorded in the audit logs. In this paper, we propose a Linked Data based method of utilizing blockchain technology to create tamper-proof audit logs that provide proof of log manipulation and non-repudiation. We also provide experimental validation of the scalability of our solution using an existing Linked Data privacy audit log model.
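The tamper-evidence idea behind such blockchain-backed audit logs can be illustrated with a minimal hash-chain sketch: each log entry is hashed together with the previous entry's hash, so altering any earlier event invalidates every later hash. This is only an illustrative analogy in Python, not the paper's RDF/Linked Data implementation; the event names are borrowed from the eight-event list above, and the structure is hypothetical.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def entry_hash(event: dict, prev_hash: str) -> str:
    """Hash an event together with the previous entry's hash,
    chaining entries so any alteration invalidates later hashes."""
    payload = json.dumps(event, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_log(events: list[dict]) -> list[dict]:
    """Build a hash-chained log from a sequence of events."""
    chain, prev = [], GENESIS
    for ev in events:
        h = entry_hash(ev, prev)
        chain.append({"event": ev, "hash": h})
        prev = h
    return chain

def verify(chain: list[dict]) -> bool:
    """Recompute every hash; fail if any entry was altered."""
    prev = GENESIS
    for entry in chain:
        if entry_hash(entry["event"], prev) != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log = build_log([{"type": "log-initialization"},
                 {"type": "access-request", "by": "requester-1"},
                 {"type": "access-response", "granted": True}])
print(verify(log))            # an untouched chain verifies
log[0]["event"]["type"] = "tampered"
print(verify(log))            # any edit breaks the chain
```

An auditor holding only the most recent hash can detect retroactive manipulation anywhere in the log, which is the non-repudiation property the paper anchors on a blockchain.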
This data set contains sublimation rate data from laboratory studies of snow. Parameters include flow rate, measured sublimation rate, and theoretical maximum sublimation rate. Data were collected in cold rooms at the Cold Regions Research and Engineering Laboratory (CRREL), in Hanover, NH, during 2005 and 2006. The data were collected as part of a collaborative research project. The project aims to develop a quantitative understanding of the processes active in isotopic exchange between snow/firn and water vapor, which is important to Antarctic ice core interpretation. Data are available via FTP in Microsoft Excel (.xls) format.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This paper aims to get a better understanding of the motivational and transaction cost features of building global scientific research commons, with a view to contributing to the debate on the design of appropriate policy measures under the recently adopted Nagoya Protocol. For this purpose, the paper analyses the results of a world-wide survey of managers and users of microbial culture collections, which focused on the role of social and internalized motivations, organizational networks, and external incentives in promoting the public availability of upstream research assets. Overall, the study confirms the hypotheses of the social production model of information and shareable goods, but it also shows the need to complete this model. For the sharing of materials, the underlying collaborative economy in excess capacity plays a key role in addition to social production, while for data, competitive pressures amongst scientists tend to play a bigger role.
This session proposal calls for contributions to bring forward lessons learned from collaborative data projects and platforms, especially about those that involve multiple communities and/or across organizational boundaries. Presentations focusing on the following (non-exclusive) topics are sought after:
Support mechanisms and governance structures for data collaborations across organizations/communities.
Data policies --- such as data sharing agreements, memoranda of understanding, terms of use, privacy policies, etc. --- for facilitating collaborations across organizations/communities.
Traditional and non-traditional funding sources for data collaborations across multiple parties; sustainability of data collaboration projects, platforms, and communities.
Data workflows --- collection, processing, aggregation, archiving, and publishing, etc. --- designed with considerations of (external) collaboration.
Collaborative web platforms for data acquisition, curation, analysis, visualization, and education.
Examples and insights from data trusts, data coops, as well as other formal and informal forms of data stewardship.
Debates on the pros and cons of centralized, distributed, and/or federated data services.
Practical lessons learned from data collaboration stories: failures, successes, incidents, unexpected turns of events, aftermaths, etc. (no story is too small!).