https://www.datainsightsmarket.com/privacy-policy
Discover the booming Government Open Data Management (ODM) Platform market! This comprehensive analysis reveals key trends, drivers, and challenges shaping this $2B+ sector, including regional insights, leading companies, and future growth projections through 2033. Learn how cloud-based solutions and AI are transforming government data management.
City of Tempe Security and Privacy Worksheet includes: Section 1. DATASET NAME; Section 2. PERSONALLY IDENTIFIABLE INFORMATION QUESTIONS; Section 3. SECURITY: PROTECTED DATA; Section 4. SECURITY: SENSITIVE DATA
The documents contained in this dataset reflect NASA's comprehensive IT policy in compliance with Federal Government laws and regulations.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Sensitive Regulated Data: Permitted and Restricted UsesPurposeScope and AuthorityStandardViolation of the Standard - Misuse of InformationDefinitionsReferencesAppendix A: Personally Identifiable Information (PII)Appendix B: Security of Personally Owned Devices that Access or Maintain Sensitive Restricted DataAppendix C: Sensitive Security Information (SSI)
https://www.technavio.com/content/privacy-notice
http://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html
Dataset Description:
Explore the ever-evolving landscape of cybersecurity threats and vulnerabilities with our comprehensive Security Vulnerabilities dataset. With the increasing integration of technology into every aspect of our lives, the importance of identifying and understanding security vulnerabilities cannot be overstated. This dataset provides a valuable resource for cybersecurity professionals, data enthusiasts, and researchers aiming to delve into the realm of digital security.
Key Features:
This dataset encompasses a wide range of security advisories, offering detailed insights into the following aspects:
Advisory Details: Each advisory comes with a title, link, severity level, summary, and publication date, providing a holistic understanding of the vulnerability.
Threat Severity: Understand the criticality of each vulnerability with severity levels, ranging from minor to critical, allowing you to prioritize analysis and response.
Expert Analysis: Benefit from the expertise of security researchers who have dissected and documented these vulnerabilities, aiding in comprehending intricate technical nuances.
Temporal Trends: Analyze historical data to identify patterns, evolutions, and emerging trends in security vulnerabilities over time.
Potential Applications:
The Security Vulnerabilities dataset can serve various purposes, including:
Cybersecurity Research: Explore and analyze the characteristics and trends of security vulnerabilities to enhance threat intelligence and mitigation strategies.
Data Analysis and Visualization: Employ the dataset for educational purposes, showcasing real-world data analysis and visualization techniques in the realm of cybersecurity.
Educational Use: Utilize the dataset as a valuable resource in cybersecurity courses, allowing students to understand the practical aspects of security threats.
Best Practices: Extract insights to develop informed security practices by understanding the common vulnerabilities plaguing various domains.
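As a sketch of the kind of analysis described above, the snippet below tallies advisories by severity level using only the standard library. The column names (title, link, severity, summary, published) mirror the advisory fields listed under Key Features, and the inline CSV sample is invented for illustration; point the reader at the real export instead.

```python
import csv
import io
from collections import Counter

# Invented sample standing in for a CSV export of the advisories.
sample = """title,link,severity,summary,published
Buffer overflow in libfoo,https://example.com/a1,Critical,Remote code execution,2023-01-10
Weak TLS default,https://example.com/a2,Moderate,Downgrade attack,2023-02-02
Path traversal in bard,https://example.com/a3,Critical,Arbitrary file read,2023-03-15
"""

def severity_counts(csv_text: str) -> Counter:
    """Count advisories per severity level."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return Counter(row["severity"] for row in reader)

counts = severity_counts(sample)
print(counts.most_common())  # [('Critical', 2), ('Moderate', 1)]
```

A tally like this is a quick way to prioritize analysis by threat severity before digging into individual advisories.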
https://creativecommons.org/publicdomain/zero/1.0/
This dataset corresponds to the paper "BETH Dataset: Real Cybersecurity Data for Anomaly Detection Research" by Kate Highnam* (@jinxmirror13), Kai Arulkumaran* (@kaixhin), Zachary Hanif*, and Nicholas R. Jennings (@LboroVC).
This paper was published in the ICML Workshop on Uncertainty and Robustness in Deep Learning 2021 and the Conference on Applied Machine Learning for Information Security (CAMLIS 2021).
When deploying machine learning (ML) models in the real world, anomalous data points and shifts in the data distribution are inevitable. From a cyber security perspective, these anomalies and dataset shifts are driven by both defensive and adversarial advancement. To withstand the cost of critical system failure, the development of robust models is therefore key to the performance, protection, and longevity of deployed defensive systems.
We present the BPF-extended tracking honeypot (BETH) dataset as the first cybersecurity dataset for uncertainty and robustness benchmarking. Collected using a novel honeypot tracking system, our dataset has the following properties that make it attractive for the development of robust ML methods:
1. At over eight million data points, it is one of the largest cybersecurity datasets available.
2. It contains modern host activity and attacks.
3. It is fully labelled.
4. It contains highly structured but heterogeneous features.
5. Each host contains benign activity and at most a single attack, which is ideal for behavioural analysis and other research tasks.
In addition to the described dataset, further data is currently being collected and analysed to add alternative attack vectors.
There are several existing cyber security datasets used in ML research, including the KDD Cup 1999 Data (Hettich & Bay, 1999), the 1998 DARPA Intrusion Detection Evaluation Dataset (Labs, 1998; Lippmann et al., 2000), the ISCX IDS 2012 dataset (Shiravi et al., 2012), and NSL-KDD (Tavallaee et al., 2009), which primarily removes duplicates from the KDD Cup 1999 Data. Each includes millions of records of realistic activity for enterprise applications, with labels for attacks or benign activity. The KDD1999, NSL-KDD, and ISCX datasets contain network traffic, while the DARPA1998 dataset also includes limited process calls. However, these datasets are at best almost a decade old and were collected on on-premises servers. In contrast, BETH contains modern host activity and activity collected from cloud services, making it relevant for current real-world deployments. In addition, some datasets include artificial user activity (Shiravi et al., 2012), while BETH contains only real activity. BETH is also one of the few datasets to include both kernel-process and network logs, providing a holistic view of malicious behaviour.
The BETH dataset currently comprises 8,004,918 events collected from 23 honeypots, running for about five non-contiguous hours on a major cloud provider. For benchmarking and discussion, we selected the initial subset of the process logs. This subset was further divided into training, validation, and testing sets with a rough 60/20/20 split based on host, quantity of logs generated, and the activity logged; only the test set includes an attack.
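A host-based split like the one described above keeps every record from a given host in a single subset, so no host leaks across train/validation/test boundaries. The sketch below illustrates the idea under invented record and host names; BETH's actual schema and split are described in the paper.

```python
import random
from collections import defaultdict

def split_by_host(records, seed=0):
    """Partition records 60/20/20 by host, so no host spans two subsets."""
    by_host = defaultdict(list)
    for rec in records:
        by_host[rec["host"]].append(rec)
    hosts = sorted(by_host)
    random.Random(seed).shuffle(hosts)
    n = len(hosts)
    cut1, cut2 = int(0.6 * n), int(0.8 * n)
    def gather(host_slice):
        return [r for h in host_slice for r in by_host[h]]
    return gather(hosts[:cut1]), gather(hosts[cut1:cut2]), gather(hosts[cut2:])

# Illustrative stand-in: 100 events spread evenly over 10 hosts.
records = [{"host": f"h{i % 10}", "event": i} for i in range(100)]
train, val, test = split_by_host(records)
print(len(train), len(val), len(test))  # 60 20 20
```

Splitting on host rather than on individual events is what makes the test set's attack genuinely unseen during training.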
The dataset is composed of two sensor logs: kernel-level process calls and network traffic. The initial benchmark subset only includes process logs. Each process call consists of 14 raw features and 2 hand-crafted labels.
See the paper for more details. For details on the events recorded within the logs, see this report.
Code for our benchmarks, as detailed in the paper, is available on GitHub at: https://github.com/jinxmirror13/BETH_Dataset_Analysis
Thank you to Dr. Arinbjörn Kolbeinsson for his assistance in analysing the data and the reviewers for their positive feedback.
Presented November 30, 2022, this webinar is part of the Child Welfare Information Technology Managers and Staff Series. The presentation focuses on threat prevention, new federal guidance on data security, and data breach or incident protocols.
Audio Description Version
Metadata-only record linking to the original dataset.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
Cloud Vulnerabilities Dataset (VUL0001-VUL1200)
Overview
The Cloud Vulnerabilities Dataset is a comprehensive collection of 1200 unique cloud security vulnerabilities, covering major cloud providers including AWS, Azure, Google Cloud Platform (GCP), Oracle Cloud, IBM Cloud, and Alibaba Cloud. This dataset is designed for cybersecurity professionals, penetration testers, machine learning engineers, and data scientists to analyze, train AI models, and enhance cloud security practices. Each entry details a specific vulnerability, including its description, category, cloud provider, vulnerable code (where applicable), proof of concept (PoC), and source references. The dataset emphasizes advanced and niche attack vectors such as misconfigurations, privilege escalations, data exposures, and denial-of-service (DoS) vulnerabilities, making it a valuable resource for red team exercises, security research, and AI-driven threat detection.
Dataset Details
Total Entries: 1200
Format: JSONL (JSON Lines)
File Name: cloud_vulnerabilities_dataset_1-1200.jsonl
Timestamp: Entries are timestamped as of June 19, 2025.
Categories:
Access Control
Data Exposure
Privilege Escalation
Data Exfiltration
Denial of Service
Code Injection
Authentication
Encryption
Network Security
Session Management
Domain Hijacking
Data Loss
Cloud Providers Covered:
Amazon Web Services (AWS)
Microsoft Azure
Google Cloud Platform (GCP)
Oracle Cloud
IBM Cloud
Alibaba Cloud
Dataset Structure
Each entry in the dataset is a JSON object with the following fields:
id: Unique identifier for the vulnerability (e.g., VUL0001).
description: Detailed description of the vulnerability.
category: Type of vulnerability (e.g., Data Exposure, Privilege Escalation).
cloud_provider: The cloud platform affected (e.g., AWS, Azure).
vulnerable_code: Example of misconfigured code or settings (if applicable).
poc: Proof of concept command or script to demonstrate the vulnerability.
source: Reference to CVE or documentation link.
timestamp: Date and time of the entry (ISO 8601 format, e.g., 2025-06-19T12:10:00Z).
Example Entry
```json
{
  "id": "VUL1190",
  "description": "Alibaba Cloud ECS with misconfigured snapshot policy allowing data exposure.",
  "category": "Data Exposure",
  "cloud_provider": "Alibaba Cloud",
  "vulnerable_code": "{ \"SnapshotPolicy\": { \"publicAccess\": true } }",
  "poc": "aliyun ecs DescribeSnapshots --SnapshotId snapshot-id",
  "source": {
    "cve": "N/A",
    "link": "https://www.alibabacloud.com/help/doc-detail/25535.htm"
  },
  "timestamp": "2025-06-19T12:10:00Z"
}
```
Usage
This dataset can be used for:
Penetration Testing: Leverage PoC scripts to test cloud environments for vulnerabilities.
AI/ML Training: Train machine learning models for anomaly detection, vulnerability classification, or automated remediation.
Security Research: Analyze trends in cloud misconfigurations and attack vectors.
Education: Teach cloud security best practices and vulnerability mitigation strategies.
Prerequisites
Tools: Familiarity with cloud CLI tools (e.g., AWS CLI, Azure CLI, gcloud, oci, ibmcloud, aliyun).
Programming: Knowledge of Python, JSON parsing, or scripting for processing JSONL files.
Access: Valid cloud credentials for testing PoCs in a controlled, authorized environment.
Getting Started
Download the Dataset: Obtain the JSONL file: cloud_vulnerabilities_dataset_1-1200.jsonl
Parse the Dataset: Use a JSONL parser (e.g., Python’s json module) to read and process entries.
```python
import json

with open('cloud_vulnerabilities_dataset_1-1200.jsonl', 'r') as file:
    for line in file:
        entry = json.loads(line.strip())
        print(entry['id'], entry['description'])
```
Run PoCs:
Execute PoC commands in a sandboxed environment to verify vulnerabilities (ensure proper authorization).
Example: aws s3 ls s3://bucket for AWS S3 vulnerabilities.
Analyze Data: Use data analysis tools (e.g., Pandas, Jupyter) to explore vulnerability patterns or train ML models.
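As a minimal illustration of that analysis step using only the standard library, the snippet below aggregates entries per category directly from JSONL lines. The two sample lines are invented stand-ins; in practice, iterate over the real file instead of the `lines` list.

```python
import json
from collections import Counter

# Invented JSONL lines standing in for the real dataset file.
lines = [
    '{"id": "VUL0001", "category": "Data Exposure", "cloud_provider": "AWS"}',
    '{"id": "VUL0002", "category": "Privilege Escalation", "cloud_provider": "Azure"}',
]

# Count how many vulnerabilities fall into each category.
by_category = Counter(json.loads(line)["category"] for line in lines)
print(by_category)
```

The same pattern extends to grouping by `cloud_provider` or cross-tabulating provider against category before handing the counts to a plotting library.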
Security Considerations
Ethical Use: Only test PoCs in environments where you have explicit permission.
Data Sensitivity: Handle dataset entries with care, as they contain sensitive configuration examples.
Mitigation: Refer to source links for official documentation on fixing vulnerabilities.
Contributing
Contributions to expand or refine the dataset are welcome. Please submit pull requests with:
New vulnerability entries in JSONL format.
Clear documentation of the vulnerability, PoC, and source.
No duplicate IDs or entries.
License
This dataset is released under the MIT License. You are free to use, modify, and distribute it, provided the original attribution is maintained.
Contact
For questions, feedback, or contributions, please reach out via:
Email: sunny48445@gmail.com
Acknowledgments
Inspir...
Distributed data mining from privacy-sensitive multi-party data is likely to play an important role in the next generation of integrated vehicle health monitoring systems. For example, consider an aircraft manufacturer $\mathcal{C}$ manufacturing an aircraft model $A$ and selling it to five different airline operating companies $\mathcal{V}_1 \dots \mathcal{V}_5$. These aircraft, during their operation, generate huge amounts of data. Mining this data can reveal useful information regarding the health and operability of the aircraft, which can be useful for disaster management and prediction of efficient operating regimes. Now if the manufacturer $\mathcal{C}$ wants to analyze the performance data collected from different aircraft of model-type $A$ belonging to different airlines, then central collection of data for subsequent analysis may not be an option. It should be noted that the result of this analysis may be statistically more significant if the data for aircraft model $A$ across all companies were available to $\mathcal{C}$. The potential problems arising out of such a data mining scenario are:
https://www.marketreportanalytics.com/privacy-policy
The Government Open Data Management Platform market is booming, projected to reach $163.29 million in 2025 with a 9.73% CAGR. Discover key trends, leading companies, and regional insights in this comprehensive market analysis. Learn how governments leverage open data for improved transparency and citizen engagement.
https://creativecommons.org/publicdomain/zero/1.0/
This dataset contains sensor and actuator measurements collected from the Secure Water Treatment (SWaT) testbed. It includes both normal operational data and cyber-attack scenarios, simulating real-world industrial control system intrusions. The dataset is suitable for research in anomaly detection, intrusion detection, cybersecurity, and machine learning applications in critical infrastructure.
The SWaT dataset is a benchmark dataset widely used in industrial control system (ICS) security research. It consists of time-series sensor and actuator data collected from a real-world water treatment testbed. It includes both normal and attack scenarios, making it highly suitable for tasks such as anomaly detection, intrusion detection, time-series classification, and ICS fault detection.
Data Overview
The dataset contains timestamped measurements from various sensors and actuators across multiple stages of the water treatment process. Key columns include:
Timestamp — Date and time of the recorded data point
FIT101 — Flow Indicator Transmitter at stage 1
LIT101 — Level Indicator Transmitter at stage 1
MV101 — Motorized Valve at stage 1
P101, P102 — Pumps at stage 1
AIT201, AIT202, AIT203 — Analyzer Indicators for pH, conductivity, and ORP at stage 2
FIT201 — Flow Indicator Transmitter at stage 2
MV201 — Motorized Valve at stage 2
P201, P202, P203, P204, P205, P206 — Pumps at stage 2
DPIT301 — Differential Pressure Indicator Transmitter at stage 3
FIT301, LIT301 — Flow and Level indicators at stage 3
MV301, MV302, MV303, MV304 — Motorized Valves at stage 3
P301, P302 — Pumps at stage 3
AIT401, AIT402 — Analyzer Indicators at stage 4
FIT401, LIT401 — Flow and Level indicators at stage 4
P401, P402, P403, P404 — Pumps at stage 4
UV401 — UV disinfection unit
AIT501, AIT502, AIT503, AIT504 — Analyzer Indicators at stage 5
FIT501, FIT502, FIT503, FIT504 — Flow indicators at stage 5
P501, P502 — Pumps at stage 5
PIT501, PIT502, PIT503 — Pressure Indicator Transmitters at stage 5
FIT601 — Flow Indicator Transmitter at stage 6
P601, P602, P603 — Pumps at stage 6
Normal/Attack — Label indicating whether the data point corresponds to normal or attack operation
Files Included
normal.csv — Normal operational data
attack.csv — Cyber attack data
merged.csv — Combined data (normal + attack)
Use Cases
Anomaly detection in industrial control systems
Binary or multiclass classification for attack type detection
Real-time monitoring systems for critical infrastructure
Time-series forecasting under adversarial conditions
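A first sanity check for the use cases above is the class balance of the `Normal/Attack` label in merged.csv. The snippet below computes the attack fraction; the inline sample (with the `Timestamp`, `FIT101`, and `Normal/Attack` columns described above) is an invented stand-in for the real file.

```python
import csv
import io

# Invented sample rows standing in for merged.csv.
sample = """Timestamp,FIT101,Normal/Attack
2015-12-28 10:00:00,2.47,Normal
2015-12-28 10:00:01,2.48,Attack
2015-12-28 10:00:02,2.45,Normal
2015-12-28 10:00:03,2.44,Attack
"""

rows = list(csv.DictReader(io.StringIO(sample)))
# Fraction of rows labelled as attack operation.
attack_fraction = sum(r["Normal/Attack"] == "Attack" for r in rows) / len(rows)
print(attack_fraction)  # 0.5
```

Knowing this ratio up front matters because anomaly-detection benchmarks on ICS data are typically heavily imbalanced, which shapes the choice of metrics and resampling strategy.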
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The dataset is from a survey of undergraduate students that measured engagement with the research participation consent process and attitudes and behaviours toward data privacy and security. The survey was conducted anonymously in 2023 using Qualtrics survey software.
AutoTrain Dataset for project: security-texts-classification-distilroberta
Dataset Description
This dataset has been automatically processed by AutoTrain for project security-texts-classification-distilroberta.
Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
Data Instances
A sample from this dataset looks as follows: [ { "text": "Netgear launches Bug Bounty Program for Hacker; Offering up to $15,000 in… See the full description on the dataset page: https://huggingface.co/datasets/vlsb/autotrain-data-security-texts-classification-distilroberta.
Open Government Licence 2.0: http://www.nationalarchives.gov.uk/doc/open-government-licence/version/2/
License information was derived automatically
Privacy notices used in recent City of York Council consultations. For past consultation privacy notices please see the archived consultation privacy notices page. For further consultations data please see the consultations group page in York Open Data. For further information on consultations please visit City of York Council's website.
U.S. Government Works: https://www.usa.gov/government-works
License information was derived automatically
This document provides guidance to State agencies on evaluating datasets with PII, PHI, or other forms of private or confidential data. This guidance includes a sample risk benefit analysis form and process to enable agencies to evaluate datasets for publication and help select appropriate privacy protections for open datasets.
https://creativecommons.org/publicdomain/zero/1.0/
This dataset captures internal access behavior within healthcare information systems, focusing on security events that occur while authorized users interact with medical records. It is designed to support analysis and detection of insider-related security threats in trusted health data environments.
The dataset contains 4,000 records with 15 columns, representing session-level access attributes commonly observed in electronic health record (EHR) systems. Each record corresponds to a single data access session and is labeled according to the type of internal activity observed.
The data supports research on healthcare data security, insider threat detection, access control monitoring, and governance within trusted health data spaces.
Dataset Description
Number of Rows: 4,000
Number of Columns: 15
Domain: Healthcare data security
Data Type: Tabular (categorical and numerical features)
Primary Task: Internal attack classification
The dataset models realistic access patterns, including normal internal usage and multiple forms of policy violations or misuse by authenticated users within healthcare systems.
Key Features (Columns)
duration – Length of the medical data access session (in seconds)
protocol_type – Type of healthcare access protocol used (e.g., EHR, PACS, API)
service – Healthcare service or module accessed
flag – Status indicator of the access session
src_bytes – Volume of data read from medical records
dst_bytes – Volume of data written or exported
logged_in – Indicates whether access was authenticated
num_failed_logins – Count of failed login attempts
root_shell – Indicates elevated or administrative access
su_attempted – Privilege escalation attempt indicator
num_file_creations – Number of files created or modified
num_access_files – Number of patient records accessed
same_srv_rate – Ratio of repeated access to the same service
diff_srv_rate – Ratio of access to different services
Target Column
attack_type – Class label indicating the type of internal activity (normal access or specific internal attack category)
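Before training real classifiers on the 4,000-session table, a majority-class baseline over `attack_type` gives a floor that any model must beat. The sketch below uses invented session rows with a subset of the columns described above; the class names are illustrative, not the dataset's actual label values.

```python
from collections import Counter

# Invented session records standing in for the real table.
sessions = [
    {"duration": 120, "protocol_type": "EHR", "attack_type": "normal"},
    {"duration": 15,  "protocol_type": "API", "attack_type": "privilege_misuse"},
    {"duration": 300, "protocol_type": "EHR", "attack_type": "normal"},
]

labels = [s["attack_type"] for s in sessions]
# Baseline: always predict the most frequent class.
majority_label, majority_count = Counter(labels).most_common(1)[0]
baseline_accuracy = majority_count / len(labels)
print(majority_label, round(baseline_accuracy, 2))  # normal 0.67
```

If most sessions are normal access, this baseline will look deceptively strong, which is why per-class recall (especially on the attack categories) is the more informative metric for insider threat detection.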
On August 25th, 2022, Metro Council passed the Open Data Ordinance; previously, open data reports were published under Mayor Fischer's Executive Order. You can find here both the Open Data Ordinance, 2022 (PDF) and the Mayor's Open Data Executive Order, 2013. Open Data Annual Reports: per page 6 of the Open Data Ordinance, within one year of the effective date of the Ordinance, and thereafter no later than September 1 of each year, the Open Data Management Team shall submit to the Mayor and Metro Council an annual Open Data Report. The Open Data Management Team (also known as the Data Governance Team) is currently led by the city's Data Officer, Andrew McKinney, in the Office of Civic Innovation and Technology. Previously, it was led by the former Data Officer, Michael Schnuerle, and prior to that by the Director of IT.
Open Data Ordinance O-243-22 Text
Louisville Metro Government
Legislation Text
File #: O-243-22, Version: 3
ORDINANCE NO. _, SERIES 2022
AN ORDINANCE CREATING A NEW CHAPTER OF THE LOUISVILLE/JEFFERSON COUNTY METRO CODE OF ORDINANCES CREATING AN OPEN DATA POLICY AND REVIEW. (AMENDMENT BY SUBSTITUTION) (AS AMENDED).
SPONSORED BY: COUNCIL MEMBERS ARTHUR, WINKLER, CHAMBERS ARMSTRONG, PIAGENTINI, DORSEY, AND PRESIDENT JAMES
WHEREAS, Metro Government is the catalyst for creating a world-class city that provides its citizens with safe and vibrant neighborhoods, great jobs, a strong system of education and innovation, and a high quality of life;
WHEREAS, it should be easy to do business with Metro Government.
Online government interactions mean more convenient services for citizens and businesses, and online government interactions improve the cost effectiveness and accuracy of government operations;
WHEREAS, an open government also makes certain that every aspect of the built environment also has reliable digital descriptions available to citizens and entrepreneurs for deep engagement mediated by smart devices;
WHEREAS, every citizen has the right to prompt, efficient service from Metro Government;
WHEREAS, the adoption of open standards improves transparency, access to public information, and improved coordination and efficiencies among Departments and partner organizations across the public, non-profit and private sectors;
WHEREAS, by publishing structured standardized data in machine-readable formats, Metro Government seeks to encourage the local technology community to develop software applications and tools to display, organize, analyze, and share public record data in new and innovative ways;
WHEREAS, Metro Government's ability to review data and datasets will facilitate a better understanding of the obstacles the city faces with regard to equity;
WHEREAS, Metro Government's understanding of inequities, through data and datasets, will assist in creating better policies to tackle inequities in the city;
WHEREAS, through this Ordinance, Metro Government desires to maintain its continuous improvement in open data and transparency that it initiated via Mayoral Executive Order No. 1, Series 2013;
WHEREAS, Metro Government's open data work has repeatedly been recognized, as evidenced by its achieving What Works Cities Silver (2018), Gold (2019), and Platinum (2020) certifications. What Works Cities recognizes and celebrates local governments for their exceptional use of data to inform policy and funding decisions, improve services, create operational efficiencies, and engage residents.
The Certification program assesses cities on their data-driven decision-making practices, such as whether they are using data to set goals and track progress, allocate funding, evaluate the effectiveness of programs, and achieve desired outcomes. These data-informed strategies enable Certified Cities to be more resilient, respond in crisis situations, increase economic mobility, protect public health, and increase resident satisfaction; and
WHEREAS, in commitment to the spirit of Open Government, Metro Government will consider public information to be open by default and will proactively publish data and data containing information, consistent with the Kentucky Open Meetings and Open Records Act.
NOW, THEREFORE, BE IT ORDAINED BY THE COUNCIL OF THE LOUISVILLE/JEFFERSON COUNTY METRO GOVERNMENT AS FOLLOWS:
SECTION I: A new chapter of the Louisville Metro Code of Ordinances ("LMCO") mandating an Open Data Policy and review process is hereby created as follows:
§ XXX.01 DEFINITIONS. For the purpose of this Chapter, the following definitions shall apply unless the context clearly indicates or requires a different meaning.
OPEN DATA. Any public record as defined by the Kentucky Open Records Act, which could be made available online using Open Format data, as well as best practice Open Data structures and formats when possible, that is not Protected Information or Sensitive Information, with no legal restrictions on use or reuse. Open Data is not information that is treated as exempt under KRS 61.878 by Metro Government.
OPEN DATA REPORT.
The annual report of the Open Data Management Team, which shall (i) summarize and comment on the state of Open Data availability in Metro Government Departments from the previous year, including, but not limited to, the progress toward achieving the goals of Metro Government's Open Data portal, an assessment of the current scope of compliance, a list of datasets currently available on the Open Data portal, and a description and publication timeline for datasets envisioned to be published on the portal in the following year; and (ii) provide a plan for the next year to improve online public access to Open Data and maintain data quality.
OPEN DATA MANAGEMENT TEAM. A group consisting of representatives from each Department within Metro Government and chaired by the Data Officer, who is responsible for coordinating implementation of an Open Data Policy and creating the Open Data Report.
DATA COORDINATORS. The members of an Open Data Management Team facilitated by the Data Officer and the Office of Civic Innovation and Technology.
DEPARTMENT. Any Metro Government department, office, administrative unit, commission, board, advisory committee, or other division of Metro Government.
DATA OFFICER. The staff person designated by the city to coordinate and implement the city's open data program and policy.
DATA. The statistical, factual, quantitative or qualitative information that is maintained or created by or on behalf of Metro Government.
DATASET. A named collection of related records, with the collection containing data organized or formatted in a specific or prescribed way.
METADATA. Contextual information that makes the Open Data easier to understand and use.
OPEN DATA PORTAL. The internet site established and maintained by or on behalf of Metro Government, located at https://data.louisvilleky.gov/ or its successor website.
OPEN FORMAT. Any widely accepted, nonproprietary, searchable, platform-independent, machine-readable method for formatting data which permits automated processes.
PROTECTED INFORMATION. Any Dataset or portion thereof to which the Department may deny access pursuant to any law, rule or regulation.
SENSITIVE INFORMATION. Any Data which, if published on the Open Data Portal, could raise privacy, confidentiality or security concerns or have the potential to jeopardize public health, safety or welfare to an extent that is greater than the potential public benefit of publishing that data.
§ XXX.02 OPEN DATA PORTAL
(A) The Open Data Portal shall serve as the authoritative source for Open Data provided by Metro Government.
(B) Any Open Data made accessible on Metro Government's Open Data Portal shall use an Open Format.
(C) In the event a successor website is used, the Data Officer shall notify the Metro Council and shall provide notice to the public on the main city website.
§ XXX.03 OPEN DATA MANAGEMENT TEAM
(A) The Data Officer of Metro Government will work with the head of each Department to identify a Data Coordinator in each Department. The Open Data Management Team will work to establish a robust, nationally recognized platform that addresses digital infrastructure and Open Data.
(B) The Open Data Management Team will develop an Open Data Policy that will adopt prevailing Open Format standards for Open Data and develop agreements with regional partners to publish and maintain Open Data that is open and freely available while respecting exemptions allowed by the Kentucky Open Records Act or other federal or state law.
§ XXX.04 DEPARTMENT OPEN DATA CATALOGUE
(A) Each Department shall retain ownership over the Datasets they submit to the Open Data Portal.
The Departments shall also be responsible for all aspects of the quality, integrity and security of the Dataset contents, including updating its Data and associated Metadata.
(B) Each Department shall be responsible for creating an Open Data catalogue, which shall include comprehensive inventories of information possessed and/or managed by the Department.
(C) Each Department's Open Data catalogue will classify information holdings as currently "public" or "not yet public"; Departments will work with the Office of Civic Innovation and Technology to develop strategies and timelines for publishing Open Data containing information in a way that is complete, reliable and has a high level of detail.
§ XXX.05 OPEN DATA REPORT AND POLICY REVIEW
(A) Within one year of the effective date of this Ordinance, and thereafter no later than September 1 of each year, the Open Data Management Team shall submit to the Mayor and Metro Council an annual Open Data Report.
(B) Metro Council may request a specific Department to report on any data or dataset that may be beneficial or pertinent in implementing policy and legislation.
(C) In acknowledgment that technology changes rapidly, the Open Data Policy shall be reviewed annually and considered for revisions or additions that will continue to