34 datasets found
  1. Scientific Workflow Management Platforms Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Aug 21, 2025
    Cite
    Growth Market Reports (2025). Scientific Workflow Management Platforms Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/scientific-workflow-management-platforms-market
    Explore at:
    Available download formats: pptx, csv, pdf
    Dataset updated
    Aug 21, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Scientific Workflow Management Platforms Market Outlook



    As per our latest research, the global scientific workflow management platforms market size is valued at USD 2.47 billion in 2024, exhibiting robust momentum driven by rapid digitalization of research and development processes across scientific domains. The market is anticipated to expand at a CAGR of 12.8% from 2025 to 2033, reaching an estimated value of USD 7.34 billion by 2033. This significant growth is underpinned by increasing adoption of automation, data-driven decision making, and the need for reproducibility and scalability in scientific research. The integration of artificial intelligence, cloud computing, and big data analytics into scientific workflow management platforms is further accelerating market expansion and transforming the landscape of research and innovation worldwide.
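
The growth figures quoted above can be sanity-checked by compounding the 2024 base at the stated CAGR; this is a quick arithmetic sketch of that consistency check, not part of the report itself:

```python
# Sanity-check of the report's projection: USD 2.47 billion (2024) compounded
# at a 12.8% CAGR over the nine years 2025 through 2033.
base_2024 = 2.47          # market size in USD billion, per the report
cagr = 0.128              # compound annual growth rate, per the report
years = 9                 # 2025 through 2033 inclusive

projected_2033 = base_2024 * (1 + cagr) ** years
# projected_2033 comes out near 7.3, in line with the reported USD 7.34 billion
```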




    One of the primary growth factors for the scientific workflow management platforms market is the surge in data-intensive research activities, particularly in genomics, bioinformatics, and drug discovery. The explosion of next-generation sequencing, high-throughput screening, and other advanced research methodologies has resulted in the generation of vast and complex datasets. Managing, analyzing, and sharing these datasets efficiently requires robust workflow management solutions that can automate routine processes, ensure data integrity, and facilitate collaboration among researchers. As research organizations and laboratories strive to enhance productivity and reduce manual errors, the demand for sophisticated scientific workflow management platforms continues to rise, driving market growth.




    Another key driver is the increasing emphasis on reproducibility and transparency in scientific research. The scientific community is facing mounting pressure to ensure that experimental results can be reliably replicated and validated by independent researchers. Scientific workflow management platforms address this challenge by providing standardized, traceable, and version-controlled environments for executing and documenting research workflows. These platforms not only enhance the credibility and reliability of research findings but also streamline compliance with regulatory requirements, particularly in highly regulated sectors such as pharmaceuticals and biotechnology. As a result, both academic institutions and commercial enterprises are investing heavily in workflow management solutions to strengthen their research governance frameworks.




    Technological advancements are also shaping the future of the scientific workflow management platforms market. The integration of artificial intelligence, machine learning, and cloud-based architectures is enabling new levels of automation, scalability, and accessibility. Cloud-based workflow platforms, in particular, are gaining traction due to their ability to support remote collaboration, elastic computing resources, and seamless integration with other digital research tools. These innovations are democratizing access to advanced scientific computing capabilities, allowing smaller research teams and organizations in emerging markets to participate in global scientific endeavors. The ongoing digital transformation of research and development is expected to create new opportunities for vendors and fuel sustained market growth over the forecast period.




    From a regional perspective, North America continues to dominate the scientific workflow management platforms market, accounting for the largest revenue share in 2024. This leadership position is attributed to the presence of leading research institutions, robust funding for life sciences and healthcare research, and early adoption of digital technologies. However, Asia Pacific is emerging as the fastest-growing region, driven by increasing investments in research infrastructure, expanding pharmaceutical and biotechnology sectors, and government initiatives to promote digital innovation. Europe also holds a significant share, supported by strong academic research networks and collaborative projects across the region. The global market landscape is becoming increasingly competitive, with both established players and new entrants vying for market share through technological innovation and strategic partnerships.



  2. Workflow Based Software Development Environment, Phase II

    • data.nasa.gov
    application/rdfxml +5
    Updated Jun 26, 2018
    Cite
    (2018). Workflow Based Software Development Environment, Phase II [Dataset]. https://data.nasa.gov/dataset/Workflow-Based-Software-Development-Environment-Ph/gx9w-367n
    Explore at:
    Available download formats: xml, application/rssxml, csv, tsv, json, application/rdfxml
    Dataset updated
    Jun 26, 2018
    License

    U.S. Government Works: https://www.usa.gov/government-works
    License information was derived automatically

    Description

    The goal of this proposed research is to investigate and develop a workflow-based tool, the Software Developers Assistant, to facilitate collaboration between participants in the multiple activities of a Software Development Process. Distributed development teams are becoming the norm for today's software projects. These distributed teams face the challenge of keeping software projects on track and keeping all involved developers on a consistent and efficient process. Workflow tools have been used for several years to support the activities of distributed organizations such as the International Space Station Program. Workflow tools are efficient at automating very constrained and tightly controlled processes such as Change Request processes. A Software Development Process, though, requires a more informal type of process automation, giving the project manager more control rather than enforcing tight rules through the workflow engine. Issues to be addressed during this project include researching the effects of the multiple factors involved in the successful insertion of this technology within NASA organizations. Engine characteristics required to give flexibility to the software development team will be researched. Multiple processes will be captured within the workflow tool to evaluate the different needs of process participants, such as Project Management, Requirements, Design, Implementation, and Testing.

  3. Comparison of SoS with other workflow management systems.

    • datasetcatalog.nlm.nih.gov
    • figshare.com
    • +1more
    Updated Feb 27, 2019
    Cite
    Peng, Bo; Wang, Gao (2019). Comparison of SoS with other workflow management systems. [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0000174276
    Explore at:
    Dataset updated
    Feb 27, 2019
    Authors
    Peng, Bo; Wang, Gao
    Description

    The table compares SoS with several popular bioinformatics workflow systems, including Nextflow, Snakemake, Bpipe, CWL, and Galaxy, in three broad aspects: 1) basic features (syntax, file format, user interface, etc.), 2) workflow features (workflow specification, dependency handling, execution and monitoring, etc.), and 3) built-in support for external tools and services (container support, HPC systems, distributed systems, and cloud support). It is a snapshot of an interactive table online at https://vatlab.github.io/blog/post/comparison where comments and potential contributions from the community can be continuously incorporated through GitHub issues or pull requests. (XLSX)

  4. qPortal: A platform for data-driven biomedical research

    • plos.figshare.com
    • datasetcatalog.nlm.nih.gov
    docx
    Updated Jun 1, 2023
    Cite
    Christopher Mohr; Andreas Friedrich; David Wojnar; Erhan Kenar; Aydin Can Polatkan; Marius Cosmin Codrea; Stefan Czemmel; Oliver Kohlbacher; Sven Nahnsen (2023). qPortal: A platform for data-driven biomedical research [Dataset]. http://doi.org/10.1371/journal.pone.0191603
    Explore at:
    Available download formats: docx
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS: http://plos.org/
    Authors
    Christopher Mohr; Andreas Friedrich; David Wojnar; Erhan Kenar; Aydin Can Polatkan; Marius Cosmin Codrea; Stefan Czemmel; Oliver Kohlbacher; Sven Nahnsen
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Modern biomedical research aims at drawing biological conclusions from large, highly complex biological datasets. It has become common practice to make extensive use of high-throughput technologies that produce large amounts of heterogeneous data. In addition to ever-improving accuracy, methods are getting faster and cheaper, resulting in a steadily increasing need for scalable data management and easily accessible means of analysis.

    We present qPortal, a platform providing users with an intuitive way to manage and analyze quantitative biological data. The backend leverages a variety of concepts and technologies, such as relational databases, data stores, data models, and means of data transfer, as well as front-end solutions to give users access to data management and easy-to-use analysis options. Users can conduct their experiments through the platform, from experimental design to the visualization of results. Here, we illustrate the feature-rich portal by simulating a biomedical study based on publicly available data. We demonstrate the software's strength in supporting the entire project life cycle: the software supports project design and registration, empowers users to do all-digital project management, and provides means to perform analysis. We compare our approach to Galaxy, one of the most widely used scientific workflow and analysis platforms in computational biology. Application of both systems to a small case study shows the differences between a data-driven approach (qPortal) and a workflow-driven approach (Galaxy).

    qPortal, a one-stop-shop solution for biomedical projects, offers up-to-date analysis pipelines, quality control workflows, and visualization tools. Through intensive user interactions, appropriate data models have been developed. These models build the foundation of our biological data management system and provide possibilities to annotate data, query metadata for statistics, and enable future re-analysis on high-performance computing systems via coupling of workflow management systems. Integration of project and data management as well as workflow resources in one place presents clear advantages over existing solutions.

  5. Data from: Bioinformatics workflow for the detection of eQTL in the cattle...

    • data.europa.eu
    unknown
    Cite
    Zenodo, Bioinformatics workflow for the detection of eQTL in the cattle genome using Nextflow DSL2 [Dataset]. https://data.europa.eu/data/datasets/oai-zenodo-org-7949616?locale=el
    Explore at:
    Available download formats: unknown (21366119)
    Dataset authored and provided by
    Zenodo: http://zenodo.org/
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The in silico detection of expression quantitative trait loci (eQTL) demands high-throughput processing of hundreds of samples, and handling and running such large datasets is often a challenge. In order to focus on the core analysis, it is convenient to have simple coding and hassle-free installation of the different software tools required for the bioinformatics workflow. In this context, newly available technologies like workflow managers and software containers have made it possible to develop workflows with less complexity. In this study, we developed an eQTL bioinformatics pipeline with the workflow manager Nextflow and the Docker container software for coding and installing the required software tools. This workflow is portable to different computer environments, and its results are reproducible. We tested the functionality of our workflow with a sample dataset; the runtime estimates from this demo run will provide important information for planning future analyses with much larger datasets.

  6. CWL run of Somatic Variant Calling Workflow (CWLProv 0.5.0 Research Object)

    • data.mendeley.com
    • data.niaid.nih.gov
    • +2more
    Updated Oct 27, 2018
    + more versions
    Cite
    Stian Soiland-Reyes (2018). CWL run of Somatic Variant Calling Workflow (CWLProv 0.5.0 Research Object) [Dataset]. http://doi.org/10.17632/97hj93mkfd.1
    Explore at:
    Dataset updated
    Oct 27, 2018
    Authors
    Stian Soiland-Reyes
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    The somatic variant calling workflow included in this case study is designed by Blue Collar Bioinformatics (bcbio), a community-driven initiative to develop best-practice pipelines for variant calling, RNA-seq, and small RNA analysis workflows. According to the documentation, the goal of this project is to facilitate the automated analysis of high-throughput data by making the resources quantifiable, analyzable, scalable, accessible, and reproducible.

    All the underlying tools are containerized, facilitating software use in the workflow. The somatic variant calling workflow defined in CWL is available on GitHub and equipped with a well-defined test dataset.

    This dataset folder is a CWLProv Research Object that captures the Common Workflow Language execution provenance; see https://w3id.org/cwl/prov/0.5.0, or use https://pypi.org/project/cwlprov/ to explore it.

  7. CWL run of RNA-seq Analysis Workflow (CWLProv 0.5.0 Research Object)

    • data.mendeley.com
    • data.niaid.nih.gov
    • +3more
    Updated Dec 4, 2018
    Cite
    Farah Zaib Khan (2018). CWL run of RNA-seq Analysis Workflow (CWLProv 0.5.0 Research Object) [Dataset]. http://doi.org/10.17632/xnwncxpw42.1
    Explore at:
    Dataset updated
    Dec 4, 2018
    Authors
    Farah Zaib Khan
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This workflow adapts the approach and parameter settings of Trans-Omics for Precision Medicine (TOPMed). The RNA-seq pipeline originated from the Broad Institute. There are in total five steps in the workflow:

    1. Read alignment using STAR which produces aligned BAM files including the Genome BAM and Transcriptome BAM.
    2. The Genome BAM file is processed using Picard MarkDuplicates producing an updated BAM file containing information on duplicate reads (such reads can indicate biased interpretation).
    3. SAMtools index is then employed to generate an index for the BAM file, in preparation for the next step.
    4. The indexed BAM file is processed further with RNA-SeQC which takes the BAM file, human genome reference sequence and Gene Transfer Format (GTF) file as inputs to generate transcriptome-level expression quantifications and standard quality control metrics.
    5. In parallel with transcript quantification, isoform expression levels are quantified by RSEM. This step depends only on the output of the STAR tool, and additional RSEM reference sequences.
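
    The five steps above can be sketched as an ordered command list. This is a hypothetical illustration only: the tool flags and file names (`genes.gtf`, `rsem_ref`, the `qc_out` directory, and the `build_pipeline` helper) are placeholders of ours, not the workflow's actual CWL definition or the TOPMed parameters.

```python
# Hypothetical sketch of the five-step RNA-seq pipeline described above.
# Flags and paths are illustrative placeholders, not the real parameters.

def build_pipeline(sample: str) -> list[list[str]]:
    """Return the commands for one sample, in dependency order."""
    genome_bam = f"{sample}.genome.bam"
    dedup_bam = f"{sample}.dedup.bam"
    return [
        # 1. Align reads with STAR (produces genome and transcriptome BAMs)
        ["STAR", "--runMode", "alignReads", "--readFilesIn", f"{sample}.fastq"],
        # 2. Mark duplicate reads in the genome BAM with Picard
        ["picard", "MarkDuplicates", f"I={genome_bam}", f"O={dedup_bam}"],
        # 3. Index the deduplicated BAM with SAMtools
        ["samtools", "index", dedup_bam],
        # 4. QC and transcriptome-level quantification with RNA-SeQC
        ["rnaseqc", "genes.gtf", dedup_bam, "qc_out"],
        # 5. Isoform quantification with RSEM (depends only on the STAR output)
        ["rsem-calculate-expression", f"{sample}.transcriptome.bam", "rsem_ref"],
    ]

commands = build_pipeline("sample12")
```

    Note that steps 2-4 form a chain over the genome BAM, while step 5 branches off the STAR output directly, so it can run in parallel, as the description says.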

    For testing and analysis, the workflow author provided example data created by down-sampling the read files of a TOPMed public-access dataset. Chromosome 12 was extracted from the Homo sapiens Assembly 38 reference sequence and provided by the workflow authors. The required GTF and RSEM reference data files are also provided. The workflow is well documented, and a detailed set of instructions for the steps performed to down-sample the data is also provided for transparency. The availability of example input data, the use of containerization for the underlying software, and detailed documentation were important factors in choosing this specific CWL workflow for CWLProv evaluation.

    This dataset folder is a CWLProv Research Object that captures the Common Workflow Language execution provenance, see https://w3id.org/cwl/prov/0.5.0 or use https://pypi.org/project/cwl

  8. Synthetic AR Medical Dataset with Realistic Denial

    • kaggle.com
    zip
    Updated Aug 31, 2025
    Cite
    Abuthahir1998 (2025). Synthetic AR Medical Dataset with Realistic Denial [Dataset]. https://www.kaggle.com/datasets/abuthahir1998/synthetic-ar-medical-dataset-with-realistic-denial
    Explore at:
    Available download formats: zip (13843 bytes)
    Dataset updated
    Aug 31, 2025
    Authors
    Abuthahir1998
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    Subtitle

    A fully synthetic dataset simulating real-world medical billing scenarios, including claim status, denials, team allocation, and AR follow-up logic.

    Description

    This dataset represents a synthetic Account Receivable (AR) data model for medical billing, created using realistic healthcare revenue cycle management (RCM) workflows. It is designed for data analysis, machine learning modeling, automation testing, and process simulation in the healthcare billing domain.

    The dataset includes realistic business logic, mimicking the actual process of claim submission, denial management, follow-ups, and payment tracking. This is especially useful for:

    • Medical billing training
    • Predictive modeling (claim outcomes, denial prediction, payment forecasting)
    • RCM process automation and AI research
    • Data visualization and dashboard creation

    Key Features of This Dataset

    Patient & Claim Information:

    • Visit ID: Unique alphanumeric ID in the format XXXXXZXXXXXX
    • Patient Name: Randomly generated names
    • Date of Service (DOS): In MM/DD/YYYY format
    • Aging Days: Calculated as Today - DOS
    • Aging Bucket: Categorized as 0-30, 31-60, 61-90, 91-120, 120+
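
    The two aging fields above follow directly from the DOS; a minimal sketch of that derivation in Python (the function names are ours, not part of the dataset):

```python
from datetime import date

def aging_days(dos: date, today: date) -> int:
    """Aging Days = Today - DOS, as described above."""
    return (today - dos).days

def aging_bucket(days: int) -> str:
    """Map aging days onto the dataset's five buckets."""
    for upper, label in [(30, "0-30"), (60, "31-60"), (90, "61-90"), (120, "91-120")]:
        if days <= upper:
            return label
    return "120+"
```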

    Claim Status & Denial Logic:

    • Status Column: Indicates whether response received or not
    • If No Response → Simulates a follow-up call → Claim may result in denial
    • Status Code: Reflects actual denial reason (e.g., Dx inconsistent with CPT)
    • Action Code: Required follow-up action (e.g., Need Coding Assistance)
    • Team Allocation: Based on denial type

      • Coding-related denial → Coding Team
      • Submission/Claim-related denial → Billing Team
      • Payment-related denial → Payment Team

    Realistic Denial Scenarios Covered:

    • Coding Errors (Dx inconsistent with CPT, Missing Modifier)
    • Claim Issues (Duplicate Claim, Invalid Subscriber ID)
    • Payment Issues (Allowed Amount Paid, No Coverage)

    Other Important Columns:

    • Claim Amount, Paid Amount, Balance
    • Insurance Details (Primary, Secondary, Tertiary)
    • Notes explaining denial or next steps

    Columns in the Dataset

    • Client: Name of the client/provider
    • State: US State where service provided
    • Visit ID#: Unique alphanumeric ID (XXXXXZXXXXXX)
    • Patient Name: Patient’s full name
    • DOS: Date of Service (MM/DD/YYYY)
    • Aging Days: Days from DOS to today
    • Aging Bucket: Aging category
    • Claim Amount: Original claim billed
    • Paid Amount: Amount paid so far
    • Balance: Remaining balance
    • Status: Initial claim status (No Response, Paid, etc.)
    • Status Code: Actual reason (e.g., Dx inconsistent with CPT)
    • Action Code: Next step (e.g., Need Coding Assistance)
    • Team Allocation: Responsible team (Coding, Billing, Payment)
    • Notes: Follow-up notes

    Data Generation Rules Applied

    • Date format: MM/DD/YYYY
    • Aging Days: Calculated dynamically based on DOS
    • Visit ID: Always follows the XXXXXZXXXXXX format
    • Denial Workflow:

      • If claim denied → Status Code & Action Code updated
      • Team allocation based on denial type
    • Payments: Realistic logic where payment may be partial, full, or none

    • Insurance Flow: Balance moves from primary → secondary → tertiary → patient responsibility
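
    The insurance-flow rule above can be sketched as a cascade of the balance across payers; this is a hypothetical illustration (the `apply_payments` function and the payer amounts are ours, not fields of the dataset):

```python
def apply_payments(claim_amount: float, payments: dict[str, float]) -> list[tuple[str, float]]:
    """Move the remaining balance through primary -> secondary -> tertiary;
    whatever is left becomes patient responsibility."""
    balance = claim_amount
    trail = []
    for payer in ("primary", "secondary", "tertiary"):
        # Each payer covers at most the remaining balance.
        paid = min(payments.get(payer, 0.0), balance)
        balance -= paid
        trail.append((payer, paid))
    trail.append(("patient responsibility", balance))
    return trail
```

    For example, a 100.00 claim with a 70.00 primary and 20.00 secondary payment leaves a 10.00 patient responsibility, matching the partial-payment logic described above.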

    Use Cases

    • Predictive modeling for claim outcome
    • Identifying high-risk claims for early intervention
    • Denial pattern analysis for improving first-pass resolution rate
    • Building RCM dashboards and AR management tools

    License

    CC BY 4.0 – Free to use, modify, and share with attribution.

  9. Universal System Management Tool (USMT)

    • catalog.data.gov
    Updated Oct 8, 2022
    Cite
    Office of the Assistant Secretary for Administration and Management (2022). Universal System Management Tool (USMT) [Dataset]. https://catalog.data.gov/dataset/universal-system-management-tool-usmt
    Explore at:
    Dataset updated
    Oct 8, 2022
    Dataset provided by
    Office of the Assistant Secretary for Administration and Management
    Description

    USMT is an ITIL-compliant workflow system and database for managing incidents, problems, changes, assets, and configurations. It uses the BMC/Remedy COTS product to support several IT Help Desks, and change management processes for segments of DOL IT infrastructure.

  10. Vertex - Other-Operating-Expenses

    • macro-rankings.com
    csv, excel
    Updated Jul 1, 2025
    + more versions
    Cite
    macro-rankings (2025). Vertex - Other-Operating-Expenses [Dataset]. https://www.macro-rankings.com/markets/stocks/verx-nasdaq/income-statement/other-operating-expenses
    Explore at:
    Available download formats: csv, excel
    Dataset updated
    Jul 1, 2025
    Dataset authored and provided by
    macro-rankings
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    United States
    Description

    Other-Operating-Expenses Time Series for Vertex. Vertex, Inc., together with its subsidiaries, provides enterprise tax technology solutions for the retail trade, wholesale trade, and manufacturing industries in the United States and internationally. The company offers tax determination; compliance and reporting, including workflow management tools; tax data management and document management solutions; analytics and insights; pre-built integrations, including mapping of data fields, business logic, and configurations; industry-specific solutions that support the indirect tax needs of certain industries, such as retail, communications, and leasing; and technology-specific solutions, such as chain flow accelerator and SAP-specific tools. It provides implementation services, such as configuration, data migration and implementation, and support and training; E-invoicing, an end-to-end e-invoicing process; and managed services, including indirect tax return preparation, filing and tax payment, and notice management. The company sells its software products through software licenses and software-as-a-service subscriptions. Vertex, Inc. was founded in 1978 and is headquartered in King of Prussia, Pennsylvania.

  11. Taiwan_Covid19_Dataset

    • kaggle.com
    zip
    Updated Nov 1, 2022
    Cite
    Burger Wu (2022). Taiwan_Covid19_Dataset [Dataset]. https://www.kaggle.com/burgerwu/taiwan-covid19-dataset
    Explore at:
    Available download formats: zip (4423276 bytes)
    Dataset updated
    Nov 1, 2022
    Authors
    Burger Wu
    Area covered
    Taiwan
    Description

    Context

    These datasets are collected from the Taiwan Centers for Disease Control (CDC) and the National Center for High-performance Computing (NCHC). With the help of ETL automation and a workflow management tool, we are able to update the data on a daily basis and upload it to Kaggle on a monthly basis.

    Content

    There are four tables in this dataset group (last updated on 2022-11-01):

    • covid19_tw_cases: Daily confirmed COVID-19 case statistics
    • covid19_tw_suspects: Daily number of examinations
    • covid19_tw_vaccination: Daily vaccinations by vaccine brand
    • covid19_tw_positive_rate: Aggregated result of covid19_cases and covid19_suspects, with the positive rate for every day

    Acknowledgements

    Special thanks to the Taiwan CDC and NCHC for providing high-quality and reliable data, as well as to the developers who created amazing tools for making life easier.

    Inspiration

    This data was originally used to create a dashboard with a web framework. You are welcome to apply these datasets to different purposes.

  12. Reproducible HPC software deployments, simulations and workflows

    • materialscloud-archive-failover.cineca.it
    • archive.materialscloud.org
    bin, text/markdown
    Updated Aug 28, 2025
    Cite
    Lars Bilke; Thomas Fischer; Tobias Meisel; Dmitri Naumov; Lars Bilke; Thomas Fischer; Tobias Meisel; Dmitri Naumov (2025). Reproducible HPC software deployments, simulations and workflows [Dataset]. http://doi.org/10.24435/materialscloud:4w-5x
    Explore at:
    bin, text/markdownAvailable download formats
    Dataset updated
    Aug 28, 2025
    Dataset provided by
    Materials Cloud
    Authors
    Lars Bilke; Thomas Fischer; Tobias Meisel; Dmitri Naumov; Lars Bilke; Thomas Fischer; Tobias Meisel; Dmitri Naumov
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    Reproducibility in running scientific simulations on high-performance computing (HPC) environments is a persistent challenge due to variations in software and hardware stacks. Differences in software versions or hardware-specific optimizations often lead to discrepancies in simulation outputs. While Linux containers are commonly used to standardize software environments, tools like Docker lack reproducibility in image creation, requiring archiving of binary image blobs for future use. This method turns containers into black boxes, preventing verification of how the contained software was built.

    In the linked paper, we demonstrate how we use GNU Guix to create our software stack bit-by-bit reproducible from a source bootstrap. Our approach incorporates a portable OpenMPI implementation, optimized software builds, and deployment via Apptainer images across three HPC environments. We show that our reproducible software stack facilitates consistent multi-physics simulations and complex workflows on diverse HPC platforms, exemplified by the OpenGeoSys software project. To ensure provenance of our findings, we utilized the AiiDA workflow manager.

    This dataset includes the complete AiiDA provenance database underlying the results presented in the paper. The AiiDA workflow itself is defined in and can be reproduced with this repository: https://gitlab.opengeosys.org/bilke/hpc-container-study.

    Version 1 of this record did not use optimized packages (Guix --tune option was not passed properly). This has been fixed in version 2 of this record.

  13. Data from: ServiceNow

    • s.cnmilf.com
    • catalog.data.gov
    Updated Apr 8, 2025
    Cite
    Office of the Assistant Secretary for Administration and Management (2025). ServiceNow [Dataset]. https://s.cnmilf.com/user74170196/https/catalog.data.gov/dataset/servicenow-9919e
    Explore at:
    Dataset updated
    Apr 8, 2025
    Dataset provided by
    Office of the Assistant Secretary for Administration and Management
    Description

    Automation of Enterprise Services customer/employee experiences. Called a Universal Service Management Tool (USMT), ServiceNow is an ITIL-compliant workflow system and database for managing user incidents and problems, IT hardware and software changes, hardware and software assets, and change management processes for the DOL IT infrastructure. ServiceNow also supports several Agency application Help Desks.

  14. Connecticut Open Data Portal - Sites - CKAN Ecosystem Catalog Beta

    • catalog.civicdataecosystem.org
    Updated Apr 22, 2025
    Cite
    (2025). Connecticut Open Data Portal - Sites - CKAN Ecosystem Catalog Beta [Dataset]. https://catalog.civicdataecosystem.org/dataset/connecticut-open-data-portal
    Explore at:
    Dataset updated
    Apr 22, 2025
    Area covered
    Connecticut
    Description

    The Collaborative is a public-private partnership that advocates for the public availability of open and accessible data to drive planning, policy, budgeting, and decision making in Connecticut at the state, regional, and local levels. We are a user-driven organization, serving nonprofits, advocates, policymakers, community groups, and funders in using data to drive policy and improve programs and services. Michelle is responsible for executing the vision and strategy of the Collaborative, which seeks to advance open data statewide and to facilitate data-driven decision making. She seeks to increase the use of public open data and grow the community of users across the state. Sasha Cuerda, Director of Technology: Sasha is a web developer with experience building data visualization tools, developing database systems for managing spatial data, and developing data processing workflows. He is also a trained urban planner and geographer. Brendan Swiniarski, Civics Applications Developer: Brendan is a web developer with experience designing user-centered web sites. He also processes data and develops metadata for CTData.org. Contact: Email us at info@ctdata.org

  15.

    Location Identifiers, Metadata, and Map for Field Measurements at the East...

    • knb.ecoinformatics.org
    • search.dataone.org
    • +1 more
    Updated Oct 11, 2023
    + more versions
    Cite
    Charuleka Varadharajan; Zarine Kakalia; Madison Burrus; Dylan O'Ryan; Erek Alper; Jillian Banfield; Max Berkelhammer; Curtis Beutler; Eoin Brodie; Wendy Brown; Mariah S. Carbone; Rosemary Carroll; Danielle Christianson; Chunwei Chou; Robert Crystal-Ornelas; K. Dana Chadwick; John Christensen; Baptiste Dafflon; Hesham Elbashandy; Brian J. Enquist; Patricia Fox; David Gochis; Matthew Henderson; Douglas Johnson; Lara Kueppers; Paula Matheus Carnevali; Alexander Newman; Thomas Powell; Kamini Singha; Patrick Sorensen; Matthias Sprenger; Tetsu Tokunaga; Roelof Versteeg; Mike Wilkins; Kenneth Williams; Marshall Worsham; Catherine Wong; Yuxin Wu; Deborah Agarwal (2023). Location Identifiers, Metadata, and Map for Field Measurements at the East River Watershed, Colorado, USA (Version 3.0) [Dataset]. http://doi.org/10.15485/1660962
    Explore at:
    Dataset updated
    Oct 11, 2023
    Dataset provided by
    ESS-DIVE
    Authors
    Charuleka Varadharajan; Zarine Kakalia; Madison Burrus; Dylan O'Ryan; Erek Alper; Jillian Banfield; Max Berkelhammer; Curtis Beutler; Eoin Brodie; Wendy Brown; Mariah S. Carbone; Rosemary Carroll; Danielle Christianson; Chunwei Chou; Robert Crystal-Ornelas; K. Dana Chadwick; John Christensen; Baptiste Dafflon; Hesham Elbashandy; Brian J. Enquist; Patricia Fox; David Gochis; Matthew Henderson; Douglas Johnson; Lara Kueppers; Paula Matheus Carnevali; Alexander Newman; Thomas Powell; Kamini Singha; Patrick Sorensen; Matthias Sprenger; Tetsu Tokunaga; Roelof Versteeg; Mike Wilkins; Kenneth Williams; Marshall Worsham; Catherine Wong; Yuxin Wu; Deborah Agarwal
    Time period covered
    Sep 14, 2015 - Jun 13, 2022
    Area covered
    Description

    This dataset contains identifiers, metadata, and a map of the locations where field measurements have been conducted at the East River Community Observatory located in the Upper Colorado River Basin, United States. This is version 3.0 of the dataset and replaces the prior version 2.0, which should no longer be used (see below for details on changes between the versions). Dataset description: The East River is the primary field site of the Watershed Function Scientific Focus Area (WFSFA) and the Rocky Mountain Biological Laboratory. Researchers from several institutions generate highly diverse hydrological, biogeochemical, climate, vegetation, geological, remote sensing, and model data at the East River in collaboration with the WFSFA. Thus, the purpose of this dataset is to maintain an inventory of the field locations and instrumentation to provide information on the field activities in the East River and coordinate data collected across different locations, researchers, and institutions. The dataset contains (1) a README file with information on the various files, (2) three csv files describing the metadata collected for each surface point location, plot and region registered with the WFSFA, (3) csv files with metadata and contact information for each surface point location registered with the WFSFA, (4) a csv file with metadata and contact information for plots, (5) a csv file with metadata for geographic regions and sub-regions within the watershed, (6) a compiled xlsx file with all the data and metadata which can be opened in Microsoft Excel, (7) a kml map of the locations plotted in the watershed which can be opened in Google Earth, (8) a jpeg image of the kml map which can be viewed in any photo viewer, and (9) a zipped file with the registration templates used by the SFA team to collect location metadata.
    The zipped template file contains two csv files with the blank templates (point and plot), two csv files with instructions for filling out the location templates, and one compiled xlsx file with the instructions and blank templates together. Additionally, the templates in the xlsx include drop-down validation for any controlled metadata fields. Persistent location identifiers (Location_ID) are determined by the WFSFA data management team and are used to track data and samples across locations. Dataset uses: This location metadata is used to update the Watershed SFA’s publicly accessible Field Information Portal (an interactive field sampling metadata exploration tool; https://wfsfa-data.lbl.gov/watershed/), the kml map file included in this dataset, and other data management tools internal to the Watershed SFA team. Version Information: The latest version of this dataset publication is version 3.0, which contains a breaking change to the Location Map (EastRiverCommunityObservatory_Map_v3_0_20220613.kml). If you had previously downloaded the map file prior to version 3.0, it will no longer work; use the updated Location Map (EastRiverCommunityObservatory_Map_v3_0_20220613.kml) in this version of the dataset. This version also contains a total of 51 new point locations, 8 new plot locations, and 1 new geographic region. Additionally, it corrects inconsistencies in existing metadata. Refer to methods for further details on the version history. This dataset will be updated on a periodic basis with new measurement location information. Researchers interested in having their East River measurement locations added to this list should reach out to the WFSFA data management team at wfsfa-data@googlegroups.com. Acknowledgements: Please cite this dataset if using any of the location metadata in other publications or derived products. If using the location metadata for the NEON hyperspectral campaign, additionally cite Chadwick et al. (2020), doi:10.15485/1618130.
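    Because the persistent Location_IDs are the keys used to track data and samples across locations, downstream scripts typically start by loading the point-location CSV and indexing on that field. A minimal sketch, noting that the column names used here are hypothetical; the real schema is defined by the registration templates in the dataset:

    ```python
    import csv
    import io

    # Hypothetical sketch: column names (Location_ID, Location_Type, Latitude,
    # Longitude) are illustrative; the dataset's registration templates define
    # the actual schema.
    sample = """Location_ID,Location_Type,Latitude,Longitude
    ER-PLM1,surface point,38.9226,-106.9508
    ER-PLM6,surface point,38.9187,-106.9433
    ER-UPPER,region,,
    """

    def load_points(csv_text):
        """Index surface point locations by their persistent Location_ID."""
        reader = csv.DictReader(io.StringIO(csv_text))
        return {
            row["Location_ID"].strip(): (float(row["Latitude"]), float(row["Longitude"]))
            for row in reader
            if row["Location_Type"] == "surface point"
        }

    points = load_points(sample)
    print(sorted(points))  # Location_IDs usable as join keys across datasets
    ```

    Keeping the Location_ID as the dictionary key mirrors how the WFSFA uses it: as the stable identifier that joins measurements collected by different researchers at the same spot.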

  16.

    ckanext-rems - Extensions - CKAN Ecosystem Catalog Beta

    • catalog.civicdataecosystem.org
    Updated Jun 4, 2025
    Cite
    (2025). ckanext-rems - Extensions - CKAN Ecosystem Catalog Beta [Dataset]. https://catalog.civicdataecosystem.org/dataset/ckanext-rems
    Explore at:
    Dataset updated
    Jun 4, 2025
    Description

    The REMS (Resource Entitlement Management System) extension for CKAN brings access-rights management capabilities to datasets. By integrating with REMS, this extension allows organizations to manage and control access to sensitive or restricted data through application workflows and approval processes, enabling a more secure and governed environment for data sharing within CKAN. Key features:
    • REMS Integration: connects CKAN to the REMS system for managing access rights to datasets, providing a centralized control point for permissions.
    • Application Form and Workflow Design: uses REMS' tools for designing application forms and defining workflows for requesting access to datasets.
    • Access Request Management: lets end-users apply for access to datasets through the defined REMS application workflows.
    • Workflow Processing: gives administrators and authorized users the tools to process access requests, manage approvals, and administer granted access rights within the REMS interface.
    • Shibboleth Configuration Support: supports Shibboleth configuration for authentication, potentially enabling single sign-on (SSO) for accessing REMS-protected datasets.
    Technical Integration: The extension integrates with CKAN through configuration settings defined in the .ini file and depends on the Kata extension. Shibboleth configuration details are outlined in the config/shibboleth/README.txt file, which explains how to set up single sign-on. The extension essentially connects CKAN datasets to the permissioning framework of a separate REMS instance. Benefits & Impact: The extension provides enhanced security and control over dataset access within CKAN, helping organizations comply with data governance policies and regulations through a structured and auditable process for granting permissions. Using a separate REMS system offloads access-rights administration from CKAN itself.
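    The apply/approve flow that the extension delegates to REMS can be pictured as a small state machine. The states and actions below are an illustrative sketch of the general pattern, not REMS's actual data model:

    ```python
    # Illustrative sketch of an access-request workflow like the one the
    # extension delegates to REMS; states and actions are invented, not
    # taken from REMS itself.
    ALLOWED = {
        ("draft", "submit"): "applied",
        ("applied", "approve"): "approved",
        ("applied", "reject"): "rejected",
    }

    def transition(state, action):
        """Advance an access request, refusing undefined moves."""
        try:
            return ALLOWED[(state, action)]
        except KeyError:
            raise ValueError(f"cannot {action!r} from state {state!r}")

    state = "draft"
    for action in ("submit", "approve"):
        state = transition(state, action)
    print(state)  # prints "approved"; an approved request maps to granted access
    ```

    Modeling the workflow as an explicit transition table is what makes the process auditable: every permission grant is the result of a recorded, valid sequence of actions.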

  17. Table_1_Management and Quality Control of Large Neuroimaging Datasets:...

    • frontiersin.figshare.com
    • datasetcatalog.nlm.nih.gov
    • +1 more
    xlsx
    Updated Jun 10, 2023
    Cite
    Jordi Huguet; Carles Falcon; David Fusté; Sergi Girona; David Vicente; José Luis Molinuevo; Juan Domingo Gispert; Grégory Operto; for the ALFA Study (2023). Table_1_Management and Quality Control of Large Neuroimaging Datasets: Developments From the Barcelonaβeta Brain Research Center.XLSX [Dataset]. http://doi.org/10.3389/fnins.2021.633438.s001
    Explore at:
    xlsx (available download formats)
    Dataset updated
    Jun 10, 2023
    Dataset provided by
    Frontiers Media (http://www.frontiersin.org/)
    Authors
    Jordi Huguet; Carles Falcon; David Fusté; Sergi Girona; David Vicente; José Luis Molinuevo; Juan Domingo Gispert; Grégory Operto; for the ALFA Study
    License

    Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Recent decades have witnessed an increasing number of large to very large imaging studies, prominently in the field of neurodegenerative diseases. The datasets collected during these studies form essential resources for the research aiming at new biomarkers. Collecting, hosting, managing, processing, or reviewing those datasets is typically achieved through a local neuroinformatics infrastructure. In particular for organizations with their own imaging equipment, setting up such a system is still a hard task, and relying on cloud-based solutions, albeit promising, is not always possible. This paper proposes a practical model guided by core principles including user involvement, lightweight footprint, modularity, reusability, and facilitated data sharing. This model is based on the experience from an 8-year-old research center managing cohort research programs on Alzheimer’s disease. Such a model gave rise to an ecosystem of tools aiming at improved quality control through seamless automatic processes combined with a variety of code libraries, command line tools, graphical user interfaces, and instant messaging applets. The present ecosystem was shaped around XNAT and is composed of independently reusable modules that are freely available on GitLab/GitHub. This paradigm is scalable to the general community of researchers working with large neuroimaging datasets.
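    The "seamless automatic processes" for quality control described here can be as simple as rule-based checks run on each incoming imaging session. A toy sketch of such a check, with a protocol and scan names invented for illustration rather than taken from the ALFA study:

    ```python
    # Toy rule-based QC pass: flag imaging sessions that are missing scans
    # required by the study protocol. Protocol and scan names are invented
    # for illustration.
    REQUIRED_SCANS = {"T1w", "T2w", "FLAIR"}

    def qc_session(session_id, scans):
        """Return a per-session report listing any protocol scans that are absent."""
        missing = REQUIRED_SCANS - set(scans)
        return {"session": session_id, "ok": not missing, "missing": sorted(missing)}

    reports = [
        qc_session("sub-001_ses-01", ["T1w", "T2w", "FLAIR"]),
        qc_session("sub-002_ses-01", ["T1w", "FLAIR"]),  # T2w absent -> flagged
    ]
    for report in reports:
        print(report)
    ```

    In an ecosystem like the one described, such checks would run automatically as data lands in the archive, with failing reports routed to reviewers via the instant-messaging applets mentioned above.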

  18.

    Vertex - Long-Term-Debt

    • macro-rankings.com
    csv, excel
    Updated Aug 10, 2025
    + more versions
    Cite
    macro-rankings (2025). Vertex - Long-Term-Debt [Dataset]. https://www.macro-rankings.com/Markets/Stocks/VERX-NASDAQ/Balance-Sheet/Long-Term-Debt
    Explore at:
    csv, excel (available download formats)
    Dataset updated
    Aug 10, 2025
    Dataset authored and provided by
    macro-rankings
    License

    Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    united states
    Description

    Long-Term-Debt Time Series for Vertex. Vertex, Inc., together with its subsidiaries, provides enterprise tax technology solutions for retail trade, wholesale trade, and manufacturing industries in the United States and internationally. The company offers tax determination; compliance and reporting, including workflow management tools; tax data management and document management solutions; analytics and insights; pre-built integrations that include mapping of data fields, business logic, and configurations; industry-specific solutions that support indirect tax needs in industries such as retail, communications, and leasing; and technology-specific solutions, such as chain flow accelerator and SAP-specific tools. It provides implementation services, such as configuration, data migration and implementation, and support and training; E-invoicing, an end-to-end e-invoicing process; and managed services, including indirect tax return preparation, filing and tax payment, and notice management. The company sells its software products through software licenses and software-as-a-service subscriptions. Vertex, Inc. was founded in 1978 and is headquartered in King of Prussia, Pennsylvania.

  19.

    Supporting materials for "Kronos: a workflow assembler for genome analytics...

    • gigadb.org
    • aspera.gigadb.org
    Updated Jun 14, 2017
    Cite
    (2017). Supporting materials for "Kronos: a workflow assembler for genome analytics and informatics" [Dataset]. http://doi.org/10.5524/100307
    Explore at:
    Dataset updated
    Jun 14, 2017
    License

    U.S. Government Works https://www.usa.gov/government-works
    License information was derived automatically

    Description

    The field of next generation sequencing informatics has matured to a point where algorithmic advances in sequence alignment and individual feature detection methods have stabilized. Practical and robust implementation of complex analytical workflows (where such tools are structured into 'best practices' for automated analysis of NGS datasets) still requires significant programming investment and expertise.
    We present Kronos, a software platform for facilitating the development and execution of modular, auditable and distributable bioinformatics workflows. Kronos obviates the need for explicit coding of workflows by compiling a text configuration file into executable Python applications; creating new analysis modules, however, still requires programming. The framework of each workflow includes a run manager to execute the encoded workflows locally (or on a cluster or cloud), parallelize tasks, and log all runtime events. Resulting workflows are highly modular and configurable by construction, facilitating flexible and extensible meta-applications which can be modified easily through configuration file editing. The workflows are fully encoded for ease of distribution and can be instantiated on external systems, a step towards reproducible research and comparative analyses. We introduce a framework for building Kronos components which function as shareable, modular nodes in Kronos workflows.
    The Kronos platform provides a standard framework for developers to implement custom tools, reuse existing tools, and contribute to the community at large. Kronos is shipped with both Docker and Amazon AWS machine images. It is free, open source and available through PyPI (Python Package Index) and https://github.com/jtaghiyar/kronos.
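    The core job of a workflow assembler like Kronos, turning a declarative description of tasks and dependencies into an executable run order, can be sketched in a few lines. The config structure below is illustrative only; Kronos defines its own configuration format:

    ```python
    from graphlib import TopologicalSorter  # stdlib, Python 3.9+

    # Illustrative task graph; Kronos's real configuration schema is not
    # reproduced here. This shows only the assembly step: from declared
    # dependencies to a runnable task order.
    config = {
        "align":  {"after": []},
        "dedup":  {"after": ["align"]},
        "call":   {"after": ["dedup"]},
        "report": {"after": ["call", "align"]},
    }

    def compile_order(cfg):
        """Flatten the declared task graph into a dependency-respecting order."""
        graph = {name: spec["after"] for name, spec in cfg.items()}
        return list(TopologicalSorter(graph).static_order())

    print(compile_order(config))
    ```

    A run manager like the one Kronos ships would walk this order, dispatching independent tasks in parallel and logging each runtime event.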

  20.

    Vision Dataset Versioning Platform Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Oct 3, 2025
    Cite
    Growth Market Reports (2025). Vision Dataset Versioning Platform Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/vision-dataset-versioning-platform-market
    Explore at:
    csv, pptx, pdf (available download formats)
    Dataset updated
    Oct 3, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Vision Dataset Versioning Platform Market Outlook



    According to our latest research, the global Vision Dataset Versioning Platform market size reached USD 420 million in 2024, driven by the accelerating adoption of AI-powered computer vision applications across key industries. The market is expected to grow at a robust CAGR of 19.2% from 2025 to 2033, reaching a projected value of USD 1.99 billion by 2033. This remarkable growth is fueled by the increasing demand for scalable, collaborative, and traceable data management solutions to support the full lifecycle of machine learning and computer vision projects.




    A primary growth factor for the Vision Dataset Versioning Platform market is the exponential rise in the deployment of computer vision systems in sectors such as autonomous vehicles, healthcare, manufacturing, and surveillance. As organizations scale their AI initiatives, the need for robust dataset management, version control, and reproducibility becomes critical. Vision dataset versioning platforms enable teams to track changes, manage large volumes of image and video data, and ensure compliance with regulatory standards. The surge in data complexity and the need for continuous model improvement have made these platforms indispensable for both enterprises and research institutions seeking to maintain a competitive edge.
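    The mechanism underneath most dataset versioning is content addressing: a version is a hash over the entire file set, so any change to any file, or to the set itself, yields a new, traceable version identifier. A toy sketch of the idea; real platforms layer storage, annotation, and collaboration features on top:

    ```python
    import hashlib

    # Toy content-addressed versioning: the version id is a hash over the
    # whole file set, so relabeling a single image or adding a file produces
    # a new, distinguishable dataset version.
    def version_id(files):
        """files: mapping of path -> raw bytes; returns a stable manifest hash."""
        manifest = hashlib.sha256()
        for path in sorted(files):
            manifest.update(path.encode())
            manifest.update(hashlib.sha256(files[path]).digest())
        return manifest.hexdigest()

    v1 = version_id({"img/001.png": b"...pixels...", "labels.csv": b"001,cat"})
    v2 = version_id({"img/001.png": b"...pixels...", "labels.csv": b"001,dog"})
    print(v1 != v2)  # changing one label yields a distinct dataset version
    ```

    Sorting paths before hashing makes the id independent of insertion order, which is what gives teams a stable, reproducible reference for each dataset state.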




    Another significant driver is the growing emphasis on data quality, governance, and collaboration within AI development workflows. Vision dataset versioning platforms provide advanced tools for annotation, metadata management, and lineage tracking, which are essential for transparent and auditable AI pipelines. These platforms facilitate seamless collaboration among distributed teams, allowing multiple stakeholders to work on the same dataset while maintaining a clear history of changes. This capability not only accelerates innovation but also reduces the risk of data drift and model degradation, ensuring that AI solutions remain reliable and effective in real-world applications.




    The proliferation of cloud computing and the increasing adoption of cloud-based machine learning infrastructure have further accelerated the uptake of vision dataset versioning platforms. Cloud deployment enables organizations to scale storage and compute resources dynamically while benefiting from integrated security and compliance features. As AI projects become more data-intensive and geographically distributed, cloud-based versioning solutions offer the flexibility and accessibility required to support global teams and large-scale initiatives. This trend is particularly pronounced in industries such as autonomous vehicles and medical imaging, where data volumes are immense and collaboration is key to success.




    Regionally, North America remains the largest market for vision dataset versioning platforms, accounting for over 38% of global revenue in 2024, followed by Europe and Asia Pacific. The United States is at the forefront of adoption, driven by a strong ecosystem of AI startups, research institutions, and enterprise innovation. In Asia Pacific, rapid digital transformation and government investments in AI infrastructure are propelling market growth, especially in China, Japan, and South Korea. Europe is witnessing increased adoption in automotive and industrial automation sectors, while Latin America and the Middle East & Africa are gradually emerging as new frontiers for market expansion, albeit at a slower pace.





    Component Analysis



    The Component segment of the Vision Dataset Versioning Platform market is bifurcated into software and services, each playing a crucial role in enabling seamless data management and collaboration for AI and computer vision projects. Software solutions form the backbone of this segment, offering capabilities such as dataset version control, annotation tools, metadata management, and integration with popular machine learning frameworks. These platforms are designed to address the complexities of managing larg

Scientific Workflow Management Platforms Market Research Report 2033

Description

Scientific Workflow Management Platforms Market Outlook







Another key driver is the increasing emphasis on reproducibility and transparency in scientific research. The scientific community is facing mounting pressure to ensure that experimental results can be reliably replicated and validated by independent researchers. Scientific workflow management platforms address this challenge by providing standardized, traceable, and version-controlled environments for executing and documenting research workflows. These platforms not only enhance the credibility and reliability of research findings but also streamline compliance with regulatory requirements, particularly in highly regulated sectors such as pharmaceuticals and biotechnology. As a result, both academic institutions and commercial enterprises are investing heavily in workflow management solutions to strengthen their research governance frameworks.




Technological advancements are also shaping the future of the scientific workflow management platforms market. The integration of artificial intelligence, machine learning, and cloud-based architectures is enabling new levels of automation, scalability, and accessibility. Cloud-based workflow platforms, in particular, are gaining traction due to their ability to support remote collaboration, elastic computing resources, and seamless integration with other digital research tools. These innovations are democratizing access to advanced scientific computing capabilities, allowing smaller research teams and organizations in emerging markets to participate in global scientific endeavors. The ongoing digital transformation of research and development is expected to create new opportunities for vendors and fuel sustained market growth over the forecast period.




From a regional perspective, North America continues to dominate the scientific workflow management platforms market, accounting for the largest revenue share in 2024. This leadership position is attributed to the presence of leading research institutions, robust funding for life sciences and healthcare research, and early adoption of digital technologies. However, Asia Pacific is emerging as the fastest-growing region, driven by increasing investments in research infrastructure, expanding pharmaceutical and biotechnology sectors, and government initiatives to promote digital innovation. Europe also holds a significant share, supported by strong academic research networks and collaborative projects across the region. The global market landscape is becoming increasingly competitive, with both established players and new entrants vying for market share through technological innovation and strategic partnerships.


