3 datasets found
  1. Intelligent Monitor

    • kaggle.com
    Updated Apr 12, 2024
    Cite
    ptdevsecops (2024). Intelligent Monitor [Dataset]. http://doi.org/10.34740/kaggle/ds/4383210
    Explore at:
    Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Apr 12, 2024
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    ptdevsecops
    License

    MIT License (https://opensource.org/licenses/MIT)
    License information was derived automatically

    Description

    IntelligentMonitor: Empowering DevOps Environments With Advanced Monitoring and Observability aims to improve monitoring and observability in complex, distributed DevOps environments by leveraging machine learning and data analytics. This repository contains a sample implementation of the IntelligentMonitor system proposed in the research paper, presented and published as part of the 11th International Conference on Information Technology (ICIT 2023).

    If you use this dataset and code, or any modified part of it, in any publication, please cite this paper:

    P. Thantharate, "IntelligentMonitor: Empowering DevOps Environments with Advanced Monitoring and Observability," 2023 International Conference on Information Technology (ICIT), Amman, Jordan, 2023, pp. 800-805, doi: 10.1109/ICIT58056.2023.10226123.

    For any questions and research queries, please reach out via email.

    Abstract - In the dynamic field of software development, DevOps has become a critical tool for enhancing collaboration, streamlining processes, and accelerating delivery. However, monitoring and observability within DevOps environments pose significant challenges, often leading to delayed issue detection, inefficient troubleshooting, and compromised service quality. These issues stem from DevOps environments' complex and ever-changing nature, where traditional monitoring tools often fall short, creating blind spots that can conceal performance issues or system failures. This research addresses these challenges by proposing an innovative approach to improve monitoring and observability in DevOps environments. Our solution, IntelligentMonitor, leverages real-time data collection, intelligent analytics, and automated anomaly detection powered by advanced technologies such as machine learning and artificial intelligence. The experimental results demonstrate that IntelligentMonitor effectively manages data overload, reduces alert fatigue, and improves system visibility, thereby enhancing performance and reliability. For instance, the average CPU usage across all components showed a decrease of 9.10%, indicating improved CPU efficiency. Similarly, memory utilization and network traffic showed an average increase of 7.33% and 0.49%, respectively, suggesting more efficient use of resources. By providing deep insights into system performance and facilitating rapid issue resolution, this research contributes to the DevOps community by offering a comprehensive solution to one of its most pressing challenges. This fosters more efficient, reliable, and resilient software development and delivery processes.

    Components The key components that would need to be implemented are:

    • Data Collection - Collect performance metrics and log data from the distributed system components. Could use technologies like Kafka or telemetry libraries.
    • Data Processing - Preprocess and aggregate the collected data into an analyzable format. Could use Spark for distributed data processing.
    • Anomaly Detection - Apply machine learning algorithms to detect anomalies in the performance metrics. Could use isolation forest or LSTM models.
    • Alerting - Generate alerts when anomalies are detected. Could integrate with tools like PagerDuty.
    • Visualization - Create dashboards to visualize system health and key metrics. Could use Grafana or Kibana.
    • Data Storage - Store the collected metrics and log data. Could use Elasticsearch or InfluxDB.
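The anomaly-detection component above names isolation forest and LSTM models; as a minimal stand-in for illustration, a simple z-score rule over a window of metric readings captures the same idea (flag readings that deviate sharply from the baseline). This is a sketch using only the standard library, not the system's actual detector:

```python
import statistics

def detect_anomalies(values, threshold=2.5):
    """Flag indices whose value deviates from the mean by more than
    `threshold` population standard deviations (a simple z-score rule)."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]

# Steady CPU readings with one obvious spike at index 5.
cpu_usage = [22.1, 23.5, 21.8, 22.9, 23.0, 95.0, 22.4, 23.1]
print(detect_anomalies(cpu_usage))
```

In practice this threshold rule would be replaced by the trained isolation-forest or LSTM models, which handle seasonality and multivariate metrics far better.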

    Implementation Details The core of the implementation would involve:

    • Setting up the data collection pipelines.
    • Building and training anomaly detection ML models on historical data.
    • Developing a real-time data processing pipeline.
    • Creating an alerting framework that ties into the ML models.
    • Building visualizations and dashboards.

    The code would need to handle scaled-out, distributed execution for production environments.

    Proper code documentation, logging, and testing would be added throughout the implementation.
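The alerting framework mentioned above would turn a detector hit into a structured incident. A hedged sketch of that hand-off is below; the payload fields and the `send` hook are illustrative assumptions, not a real PagerDuty schema (a production version would POST to the incident tool's API):

```python
import json
from datetime import datetime, timezone

def build_alert(component, metric, value, threshold):
    """Build an alert payload for an anomalous reading. Field names are
    illustrative, not a real incident-management schema."""
    return {
        "summary": f"{metric} anomaly on {component}",
        "severity": "critical" if value > 2 * threshold else "warning",
        "source": component,
        "custom_details": {"metric": metric, "value": value, "threshold": threshold},
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

def on_anomaly(alert, send=print):
    """Dispatch an alert; in production `send` would call the alerting
    tool's API instead of printing."""
    send(json.dumps(alert, indent=2))

on_anomaly(build_alert("api-server-1", "cpu_usage_percent", 95.0, 40.0))
```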

    Usage Examples Typical usage could include:

    • Running the data collection agents on each system component.
    • Visualizing system metrics through Grafana dashboards.
    • Investigating anomalies detected by the ML models.
    • Tuning the alerting rules to minimize false positives.
    • Correlating metrics with log data to troubleshoot issues.
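The last usage example, correlating metrics with log data, amounts to joining the two streams on time. A minimal sketch of that join (assuming ISO-8601 timestamps and a hypothetical `ts`/`msg` log layout) might look like:

```python
from datetime import datetime, timedelta

def logs_near(anomaly_time, log_entries, window_seconds=60):
    """Return log entries whose timestamp falls within +/- window_seconds
    of the anomaly. Timestamps are ISO-8601 strings."""
    t = datetime.fromisoformat(anomaly_time)
    window = timedelta(seconds=window_seconds)
    return [e for e in log_entries
            if abs(datetime.fromisoformat(e["ts"]) - t) <= window]

logs = [
    {"ts": "2024-04-12T10:00:05", "msg": "GC pause 2.1s"},
    {"ts": "2024-04-12T10:30:00", "msg": "routine health check"},
]
print(logs_near("2024-04-12T10:00:30", logs))
```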

    References The implementation would follow the details provided in the original research paper: P. Thantharate, "IntelligentMonitor: Empowering DevOps Environments with Advanced Monitoring and Observability," 2023 International Conference on Information Technology (ICIT), Amman, Jordan, 2023, pp. 800-805, doi: 10.1109/ICIT58056.2023.10226123.

    Any additional external libraries or sources used would be properly cited.

    Tags - DevOps, Software Development, Collaboration, Streamlini...

  2. Data set from Fischertechnik Smart Factory Model at University of St.Gallen

    • zenodo.org
    • data.niaid.nih.gov
    bin, txt
    Updated Jan 25, 2024
    Cite
    Ronny Seiger (2024). Data set from Fischertechnik Smart Factory Model at University of St.Gallen [Dataset]. http://doi.org/10.5281/zenodo.7440490
    Explore at:
    Available download formats: bin, txt
    Dataset updated
    Jan 25, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Ronny Seiger
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Area covered
    St. Gallen
    Description

    This is the data set of IoT data from the Fischertechnik Smart Factory Model deployed at the Institute of Computer Science at the University of St.Gallen. It is used as the basis for the interactive identification of process activity executions from the IoT data. The corresponding publication can be found here:

    Seiger, R., Franceschetti, M., & Weber, B. (2023). An Interactive Method for Detection of Process Activity Executions from IoT Data. Future Internet, 15(2), 77.
    https://doi.org/10.3390/fi15020077

    The data set contains:

    • cps_log.txt: A file of all sensor and actuator readings (in JSON format) from the smart factory during the execution of 3 instances of the storage process and 3 instances of the production process. For visualization, it can be fed line-by-line into an Influx database and Grafana can then be used to create visualizations of the data.
    • wfms_log.txt: A file containing the corresponding event log (in JSON format) recorded and extracted from the Camunda Platform workflow management system during the execution of the process instances. For visualization, it can be fed line-by-line into an Influx database and Grafana can then be used to create visualizations of the data.
    • storage_process.bpmn: Executable BPMN 2.0 model of the storage process executed in the smart factory model.
    • production_process.bpmn: Executable BPMN 2.0 model of the production process executed in the smart factory model.
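The two log files are described as JSON lines to be fed into an Influx database. One way to do that feed is to convert each line into InfluxDB line protocol before writing; the sketch below shows the conversion only, and the field names (`id`, `value`, `ts`) are assumptions to adapt to the actual keys in cps_log.txt:

```python
import json

def to_line_protocol(json_line, measurement="sensor_reading"):
    """Convert one JSON log line into InfluxDB line protocol:
    "measurement,tag_set field_set timestamp". The keys `id`, `value`,
    and `ts` are assumed; adjust to the real log schema."""
    rec = json.loads(json_line)
    tags = f"sensor={rec['id']}"
    fields = f"value={rec['value']}"
    return f"{measurement},{tags} {fields} {rec['ts']}"

line = '{"id": "conveyor_light_barrier", "value": 1, "ts": 1671000000000000000}'
print(to_line_protocol(line))
```

The resulting strings could then be written to InfluxDB via its HTTP write endpoint and visualized in Grafana, as the dataset description suggests.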

    More details on the systems architecture used to execute the processes and record the data from the smart factory can be found in the following publication:

    Ronny Seiger, Lukas Malburg, Barbara Weber, Ralph Bergmann, "Integrating process management and event processing in smart factories: A systems architecture and use cases," Journal of Manufacturing Systems, Volume 63, 2022, Pages 575-592, ISSN 0278-6125, https://doi.org/10.1016/j.jmsy.2022.05.012

  3. 6GSmart - CPE exporter to report 5G and WiFi metrics

    • data.niaid.nih.gov
    Updated Mar 12, 2025
    Cite
    i2CAT (2025). 6GSmart - CPE exporter to report 5G and WiFi metrics [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_14995970
    Explore at:
    Dataset updated
    Mar 12, 2025
    Dataset authored and provided by
    i2CAT
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    The dataset provided in this document originates from a monitoring system developed within the 6GSmart project, an initiative under the UNICO I+D framework (registration number 023031500). It provides context for deliverable 6GSMART-SP3-L4-P2-P3-D2.3.3.

    The system deployment incorporates a MultiRAT architecture, integrating 5G small cells and WiFi6 to enable traffic aggregation through MPTCP mechanisms. The diagram below illustrates the deployment architecture.

    [Figure: 6GSmart MultiRAT network deployment]

    A Grafana dashboard was also developed to report the collected metrics, which can be found at this link.

    The collected dataset contains more than 1,800 metric entries (introduced below), covering one hour of network activity during which three rounds of tests were performed. To identify the values relevant to each test round, map the timestamps in the dataset to the times shown in the Grafana dashboard for each round.

    In the first round, 100 captured images were uploaded to the image processing server. Each image was 1 MB. The relevant part in Grafana can be seen here. Relevant entries can be found in the dataset with timestamps from 11:58:09 until 11:58:49.

    In the second round, 100 captured images were uploaded to the image processing server. Each image was 10 MB. The relevant part in Grafana can be seen here. Relevant entries can be found in the dataset with timestamps from 12:00:00 until 12:06:42.

    In the third round, 100 captured images were uploaded to the image processing server. Each image was 150 MB. The relevant part in Grafana can be seen here. Relevant entries can be found in the dataset with timestamps from 12:30:54 until 12:40:07.
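Selecting the entries for one test round amounts to filtering the dataset by the time-of-day windows listed above. A minimal sketch, assuming each entry carries a parsed time field (the `time` and `throughput_mbps` names are hypothetical placeholders for the actual dataset columns):

```python
from datetime import time

def entries_for_round(entries, start, end):
    """Select entries whose time-of-day falls within [start, end],
    matching the per-round windows listed above (e.g. 11:58:09-11:58:49
    for the first round)."""
    return [e for e in entries if start <= e["time"] <= end]

entries = [
    {"time": time(11, 58, 20), "throughput_mbps": 42.0},   # round 1 window
    {"time": time(12, 3, 10), "throughput_mbps": 87.5},    # round 2 window
]
round1 = entries_for_round(entries, time(11, 58, 9), time(11, 58, 49))
print(len(round1))
```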

