6 datasets found
  1. Home Sites Niagara Open Data

    • catalog.civicdataecosystem.org
    Updated May 13, 2025
    Cite
    (2025). Home Sites Niagara Open Data [Dataset]. https://catalog.civicdataecosystem.org/dataset/niagara-open-data
    Explore at:
    Dataset updated
    May 13, 2025
    Description

    The Ontario government generates and maintains thousands of datasets. Since 2012, we have shared data with Ontarians via a data catalogue. Open data is data that is shared with the public. Click here to learn more about open data and why Ontario releases it. Ontario’s Open Data Directive states that all data must be open, unless there is good reason for it to remain confidential. Ontario’s Chief Digital and Data Officer also has the authority to make certain datasets available publicly. Datasets listed in the catalogue that are not open will have one of the following labels: If you want to use data you find in the catalogue, that data must have a licence – a set of rules that describes how you can use it. A licence: Most of the data available in the catalogue is released under Ontario’s Open Government Licence. However, each dataset may be shared with the public under other kinds of licences or no licence at all. If a dataset doesn’t have a licence, you don’t have the right to use the data. If you have questions about how you can use a specific dataset, please contact us.

    The Ontario Data Catalogue endeavours to publish open data in a machine-readable format. For machine-readable datasets, you can simply retrieve the file you need using the file URL. The Ontario Data Catalogue is built on CKAN, which means the catalogue has the following features you can use when building applications. APIs (application programming interfaces) let software applications communicate directly with each other. If you are using the catalogue in a software application, you might want to extract data from the catalogue through the catalogue API. Note: All Datastore API requests to the Ontario Data Catalogue must be made server-side. The catalogue's collection of dataset metadata (and dataset files) is searchable through the CKAN API. The Ontario Data Catalogue has more than just CKAN's documented search fields; you can also search these custom fields. You can also use the CKAN API to retrieve metadata about a particular dataset and check for updated files. Read the complete documentation for CKAN's API. Some of the open data in the Ontario Data Catalogue is available through the Datastore API, which lets you search and access the machine-readable open data in the catalogue. How to use the API feature: Read the complete documentation for CKAN's Datastore API.

    The Ontario Data Catalogue contains a record for each dataset that the Government of Ontario possesses. Some of these datasets will be available to you as open data; others will not. This is because the Government of Ontario is unable to share data that would break the law or put someone's safety at risk. You can search for a dataset with a word that might describe a dataset or topic. Use words like “taxes” or “hospital locations” to discover what datasets the catalogue contains. You can search for a dataset from three spots on the catalogue: the homepage, the dataset search page, or the menu bar available across the catalogue. On the dataset search page, you can also filter your search results. You can select filters on the left-hand side of the page to limit your search to datasets with your favourite file format, datasets that are updated weekly, datasets released by a particular organization, or datasets that are released under a specific licence. Go to the dataset search page to see the filters that are available to make your search easier.
    You can also do a quick search by selecting one of the catalogue’s categories on the homepage. These categories can help you see the types of data we have on key topic areas. When you find the dataset you are looking for, click on it to go to the dataset record. Each dataset record will tell you whether the data is available and, if so, tell you about the data available. An open dataset might contain several data files. These files might represent different periods of time, different subsets of the dataset, different regions, language translations, or other breakdowns. You can select a file and either download it or preview it. Make sure to read the licence agreement to confirm you have permission to use it the way you want. Read more about previewing data. A non-open dataset may not be available for many reasons. Read more about non-open data. Read more about restricted data. Data that is non-open may still be subject to freedom of information requests.

    The catalogue has tools that enable all users to visualize the data in the catalogue without leaving the catalogue – no additional software needed. Have a look at our walk-through of how to make a chart in the catalogue. Get automatic notifications when datasets are updated. You can choose to get notifications for individual datasets, an organization’s datasets or the full catalogue. You don’t have to provide any personal information – just subscribe to our feeds using any feed reader you like, using the corresponding notification web addresses. Copy those addresses and paste them into your reader. Your feed reader will let you know when the catalogue has been updated.

    The catalogue provides open data in several file formats (e.g., spreadsheets, geospatial data, etc.). Learn about each format and how you can access and use the data each file contains.

    CSV: A file that has a list of items and values separated by commas, without formatting (e.g. colours, italics, etc.) or extra visual features. This format provides just the data that you would display in a table. XLSX (Excel) files may be converted to CSV so they can be opened in a text editor. How to access the data: Open with any spreadsheet software application (e.g., Open Office Calc, Microsoft Excel) or text editor. Note: This format is considered machine-readable; it can be easily processed and used by a computer. Files that have visual formatting (e.g. bolded headers and colour-coded rows) can be hard for machines to understand; these elements make a file more human-readable and less machine-readable.

    TXT: A file that provides information without formatted text or extra visual features and that may not follow a pattern of separated values like a CSV. How to access the data: Open with any word processor or text editor available on your device (e.g., Microsoft Word, Notepad).

    XLSX: A spreadsheet file that may also include charts, graphs, and formatting. How to access the data: Open with a spreadsheet software application that supports this format (e.g., Open Office Calc, Microsoft Excel). Data can be converted to a CSV for a non-proprietary version of the same data without formatted text or extra visual features.

    SHP: A shapefile provides geographic information that can be used to create a map or perform geospatial analysis based on location, points/lines and other data about the shape and features of the area. It includes required files (.shp, .shx, .dbf) and might include corresponding files (e.g., .prj). How to access the data: Open with a geographic information system (GIS) software program (e.g., QGIS).
    ZIP: A package of files and folders. The package can contain any number of different file types. How to access the data: Open with an unzipping software application (e.g., WinZip, 7-Zip). Note: If a ZIP file contains .shp, .shx, and .dbf file types, it is an ArcGIS ZIP: a package of shapefiles that provide information to create maps or perform geospatial analysis and that can be opened with ArcGIS (a geographic information system software program).

    GeoJSON: A file that provides information related to a geographic area (e.g., phone number, address, average rainfall, number of owl sightings in 2011, etc.) and its geospatial location (i.e., points/lines). How to access the data: Open using a GIS software application to create a map or do geospatial analysis. It can also be opened with a text editor to view raw information. Note: This format is machine-readable, and it can be easily processed and used by a computer. Human-readable data (including visual formatting) is easy for users to read and understand.

    JSON: A text-based format for sharing data in a machine-readable way that can store data with more unconventional structures such as complex lists. How to access the data: Open with any text editor (e.g., Notepad) or access through a browser. Note: This format is machine-readable, and it can be easily processed and used by a computer.

    XML: A text-based format to store and organize data in a machine-readable way that can store data with more unconventional structures (not just data organized in tables). How to access the data: Open with any text editor (e.g., Notepad). Note: This format is machine-readable, and it can be easily processed and used by a computer.

    KML: A file that provides information related to an area (e.g., phone number, address, average rainfall, number of owl sightings in 2011, etc.) and its geospatial location (i.e., points/lines). How to access the data: Open with a geospatial software application that supports the KML format (e.g., Google Earth). Note: This format is machine-readable, and it can be easily processed and used by a computer.

    Beyond 20/20: This format contains files with data from tables used for statistical analysis and data visualization of Statistics Canada census data. How to access the data: Open with the Beyond 20/20 application.

    MDB: A database which links and combines data from different files or applications (including HTML, XML, Excel, etc.). The database file can be converted to a CSV/TXT to make the data machine-readable, but human-readable formatting will be lost. How to access the data: Open with Microsoft Office Access (a database management system used to develop application software).

    A file that keeps the original layout and
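    Since the description above refers to the standard CKAN and Datastore APIs, the following minimal Python sketch shows what a typical query against a CKAN-based catalogue looks like. The base URL, dataset id, and resource id are placeholders rather than values taken from this listing; the real endpoint and identifiers should be taken from the catalogue's own documentation.

```python
# Minimal sketch of querying a CKAN-based catalogue such as the Ontario Data Catalogue.
# The base URL and dataset/resource identifiers below are placeholders, not values
# taken from this listing; substitute the catalogue's real endpoint and ids.
import requests

CKAN_BASE = "https://example-ckan-catalogue.ca/api/3/action"  # placeholder base URL

# 1. Retrieve metadata for a dataset (a CKAN "package") and inspect its resources for updates.
pkg = requests.get(f"{CKAN_BASE}/package_show",
                   params={"id": "some-dataset-id"}, timeout=30).json()
for resource in pkg["result"]["resources"]:
    print(resource["name"], resource.get("last_modified"), resource["url"])

# 2. Query a machine-readable resource through the standard CKAN Datastore API.
#    (The description notes that Datastore requests must be made server-side.)
rows = requests.get(f"{CKAN_BASE}/datastore_search",
                    params={"resource_id": "some-resource-id", "limit": 5},
                    timeout=30).json()
print(rows["result"]["records"])
```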

  2. COKI Open Access Dataset

    • zenodo.org
    • explore.openaire.eu
    • +1 more
    zip
    Updated Oct 3, 2023
    Cite
    Richard Hosking; James P. Diprose; Aniek Roelofs; Tuan-Yow Chien; Lucy Montgomery; Cameron Neylon (2023). COKI Open Access Dataset [Dataset]. http://doi.org/10.5281/zenodo.7048603
    Explore at:
    Available download formats: zip
    Dataset updated
    Oct 3, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Richard Hosking; James P. Diprose; Aniek Roelofs; Tuan-Yow Chien; Lucy Montgomery; Cameron Neylon
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The COKI Open Access Dataset measures open access performance for 142 countries and 5117 institutions and is available in JSON Lines format. The data is visualised at the COKI Open Access Dashboard: https://open.coki.ac/.

    The COKI Open Access Dataset is created with the COKI Academic Observatory data collection pipeline, which fetches data about research publications from multiple sources, synthesises the datasets and creates the open access calculations for each country and institution.

    Each week a number of specialised research publication datasets are collected. The datasets that are used for the COKI Open Access Dataset release include Crossref Metadata, Microsoft Academic Graph, Unpaywall and the Research Organization Registry.

    After the datasets are fetched, they are synthesised to produce aggregate time series statistics for each country and institution in the dataset. These aggregate time series statistics include publication count, open access status and citation count.

    See https://open.coki.ac/data/ for the dataset schema. A new version of the dataset is deposited every week.
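    Because the dataset is distributed in JSON Lines format (one JSON object per line), a downloaded release file can be read with a few lines of Python. This is a minimal sketch only: the file name and any field access are illustrative assumptions, and the real schema is documented at https://open.coki.ac/data/.

```python
# Minimal sketch of reading a JSON Lines release of the COKI Open Access Dataset.
# The file name is hypothetical; consult https://open.coki.ac/data/ for the schema.
import json

records = []
with open("country.jsonl", encoding="utf-8") as fh:  # hypothetical file name
    for line in fh:
        line = line.strip()
        if line:                      # skip blank lines
            records.append(json.loads(line))

print(f"Loaded {len(records)} records")
if records:
    print(sorted(records[0].keys()))  # inspect the top-level fields of one record
```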

    Code

    License
    COKI Open Access Dataset © 2022 by Curtin University is licensed under CC BY 4.0.

    Attributions
    This work contains information from:

  3. API Management Market Report

    • promarketreports.com
    doc, pdf, ppt
    Updated Jan 30, 2025
    Cite
    Pro Market Reports (2025). API Management Market Report [Dataset]. https://www.promarketreports.com/reports/api-management-market-9119
    Explore at:
    Available download formats: ppt, doc, pdf
    Dataset updated
    Jan 30, 2025
    Dataset authored and provided by
    Pro Market Reports
    License

    https://www.promarketreports.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The API Management Market was valued at USD 4.044 billion in 2024 and is projected to reach USD 15.72 billion by 2033, with an expected CAGR of 21.40% over the forecast period (the standard CAGR formula is sketched after the lists below). The market is experiencing significant growth, driven by the increasing adoption of cloud computing, digital transformation initiatives, and the rising need for organizations to enhance connectivity across applications and services. Businesses across various industries are leveraging API management solutions to streamline integrations, improve security, and enhance user experiences. The growing popularity of microservices architecture and the shift towards API-driven ecosystems are further fueling market expansion. Key components of API management include API gateways, developer portals, analytics, and security solutions, enabling organizations to efficiently manage, monitor, and monetize their APIs. The demand for hybrid and multi-cloud API management platforms is rising as enterprises seek flexibility in deployment and scalability. Additionally, regulatory compliance requirements and the need for robust security mechanisms are pushing businesses to invest in advanced API management tools. Leading technology providers are focusing on innovation through AI-driven automation, API lifecycle management enhancements, and strategic collaborations to stay competitive. As digital ecosystems continue to evolve, the API management market is expected to witness sustained growth, with increasing adoption across industries such as healthcare, finance, retail, and telecommunications.

    Recent developments include:
    • Jan 2024: The latest release of IBM API Connect 10.0.5.5 LTS brings notable enhancements. Kubernetes deployments now mandate an upgrade to cert-manager 1.11.5 for compatibility. Enhancements to LDAP configuration, notably for Microsoft Active Directory integration, provide improved accuracy. Furthermore, API consumers gain the ability to download APIs in JSON format directly from the Developer Portal UI, enhancing accessibility to API documentation.
    • October 2023: Axway, a leader in API management, was recognized by Gartner as a Leader in the 2023 Magic Quadrant™ for API Management. Its offering, Amplify API Management, is praised for its completeness of vision and ability to execute. Axway's Amplify platform is the only open, independent platform for managing and governing APIs across teams, the hybrid cloud, and third-party solutions. With a focus on reducing complexity and growing API adoption, Axway enables enterprises to securely open everything by integrating and moving data across various technologies.
    • September 2023: Kong Inc. introduced Kong Konnect Dedicated Cloud Gateways, providing fully managed API gateway services on dedicated cloud infrastructure and catering to the needs of larger enterprises for heightened connectivity, performance, security, and control. The offering, announced at the 2023 API Summit, simplifies adoption with managed services, supports secure private networking, and offers multi-cloud and multi-region coverage. Additionally, Kong Insomnia 8.0 brings AI-powered testing, real-time collaboration features, and support for Server-Sent Events, enhancing developer productivity and API creation capabilities.
    • March 2022: Microsoft launched Azure Private Link support for the Azure API Management service, a fully managed service that enables customers to publish, secure, transform, maintain, and monitor APIs. API Management now supports Azure Private Link, which allows customers to configure a private endpoint so that clients in their private network can securely access the instance. Additionally, the Azure API Management gateway communicates with the customer's virtual network privately and securely over the Microsoft backbone network, so there is no need to expose the service to the public.

    Key drivers for this market are:
    • Increasing adoption of cloud-based services and microservices architectures.
    • Heightened concerns over API security and compliance.
    • Growing demand for real-time API analytics and monitoring.
    • Emergence of API monetization as a revenue stream.

    Potential restraints include:
    • Complexity of API management implementations.
    • Lack of skilled API management professionals.
    • Integration challenges with legacy systems.

    Notable trends are:
    • Integration of AI and machine learning (ML) for enhanced security and management.
    • Adoption of serverless API management solutions.
    • Rise of self-service API marketplaces.
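    For reference, the CAGR figure quoted above is conventionally understood as the constant annual growth rate that carries a starting market value to an ending value over an n-year horizon. The standard definition, stated generically rather than taken from the report itself, is:

```latex
% Conventional definition of compound annual growth rate over an n-year horizon
\mathrm{CAGR} \;=\; \left(\frac{V_{\mathrm{end}}}{V_{\mathrm{start}}}\right)^{1/n} - 1
```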

  4. GDPR Services Industry Report

    • marketsignalreports.com
    doc, pdf, ppt
    Updated Jun 3, 2025
    + more versions
    Cite
    Market Signal Reports (2025). GDPR Services Industry Report [Dataset]. https://www.marketsignalreports.com/reports/gdpr-services-industry-13126
    Explore at:
    Available download formats: pdf, doc, ppt
    Dataset updated
    Jun 3, 2025
    Dataset authored and provided by
    Market Signal Reports
    License

    https://www.marketsignalreports.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The GDPR Services market, valued at $3.33 billion in 2025, is experiencing robust growth and is projected to expand at a compound annual growth rate (CAGR) of 27.66% from 2025 to 2033. This significant expansion is driven by increasing global data privacy regulations, rising cyber threats necessitating robust data protection measures, and the growing adoption of cloud-based solutions, which require sophisticated data governance frameworks. Key market segments include data management, data discovery and mapping, and data governance services, catering primarily to large enterprises across sectors such as BFSI, telecom, and healthcare. The preference for cloud-based deployment models further fuels market growth, offering scalability and cost-effectiveness compared to on-premise solutions. While the market faces challenges such as the complexity of GDPR compliance and the associated high costs of implementation, these are being mitigated by the emergence of specialized service providers offering comprehensive solutions and managed services. The competitive landscape features a mix of established IT giants like IBM, Microsoft, and Accenture alongside specialized GDPR service providers, leading to innovation and enhanced service offerings. North America currently holds a substantial market share due to stringent data privacy regulations and a high level of technological adoption; however, the Asia-Pacific region is expected to witness significant growth owing to expanding digital economies and increasing awareness of data protection.

    The forecast period (2025-2033) promises sustained growth, particularly in regions like Asia-Pacific and Europe, driven by increasing data breaches and escalating regulatory scrutiny. The continued evolution of data privacy regulations globally, coupled with the rise of emerging technologies like AI and IoT that amplify data security concerns, will contribute to this growth. The market will likely see further consolidation as larger players acquire smaller specialized companies to expand their service portfolios. The focus will shift towards more holistic, integrated solutions that combine data security, compliance, and risk management capabilities, addressing the full breadth of enterprise needs. A key trend will be the increasing demand for AI-powered solutions for data privacy management, offering automation and improved efficiency in compliance efforts.

    Recent developments include:
    • November 2022: Informatica, an enterprise cloud data management player, announced during the Informatica World Tour in Washington, DC that its Intelligent Data Management Cloud (IDMC) platform is now available for state and local governments. Informatica's IDMC platform, which currently processes over 44 trillion cloud transactions monthly, is intended to assist state and local government agencies in providing timely and efficient public services.
    • October 2022: Gravitee.io, the open-source API management platform, and Solace, the leading facilitator of event-driven architecture for real-time enterprises, announced a strategic alliance, bringing to market a unified API management experience for synchronous RESTful and asynchronous event-driven APIs. With the expansion of web apps and the rise of digital enterprises that require the exposure and connection of applications and assets using recognized architectural patterns and protocols like HTTP/Representational State Transfer, the API industry has grown.

    Key drivers for this market are:
    • Rapid Digital Transformation in the Financial Service Sector.
    • Robust Roll Out of 5G.

    Potential restraints include:
    • Lack of Awareness among Professionals.

    Notable trends are:
    • Need for data security and privacy in the wake of a data breach.

  5. Reddit Sentiment VS Stock Price

    • zenodo.org
    bin, csv, json, png +2
    Updated May 8, 2025
    Cite
    Will Baysingar (2025). Reddit Sentiment VS Stock Price [Dataset]. http://doi.org/10.5281/zenodo.15367306
    Explore at:
    Available download formats: csv, bin, png, text/x-python, txt, json
    Dataset updated
    May 8, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Will Baysingar
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Overall, this project was meant to test the relationship between social media posts and their short-term effect on stock prices. We decided to use Reddit posts from finance-specific subreddit communities like r/wallstreetbets, r/investing, and r/stocks to see the changes in the market associated with a variety of posts made by users. This idea came to light because of the GameStop short squeeze, which showed the power of social media in the market. Typically, a stock price should purely represent the present value of the company's expected future value, but the question we are asking is whether social media can impact that intrinsic value. Our research question was known from the start: do Reddit posts for or against a certain stock provide insight into how the market will move over a short window? To address this question, we selected five large tech companies: Apple, Tesla, Amazon, Microsoft, and Google. These companies were likely to generate more data in the subreddits and to show less day-to-day volatility, making an experiment easier to simulate. Because they trade at very high valuations, any change attributable to a Reddit post would have to be significant, giving us stronger evidence of an effect.

    Next, we had to choose our data sources. First, we tried to collect the Reddit data through the Reddit API, but because Reddit requires approval to use its data, we switched to a Kaggle dataset that contained Reddit post metadata. For our second dataset we had planned to use Yahoo Finance through yfinance, but the large amount of data we were pulling from this public API led to our IP address being temporarily blocked, so we switched our price data source to Alpha Vantage. While this was a significant change of public API, it was only a minor roadblock: fixing the finance-pulling section allowed everything else to continue to work. Once both datasets were pulled programmatically into our local VS Code environment, we implemented a pipeline to clean, merge, and analyze the data, and we added a Snakemake workflow to make the project easily reproducible. We then used TextBlob to label each Reddit post as positive, negative, or neutral and to give us a numeric sentiment score to analyze (a rough sketch of this step is shown below). Finally, we matched the time frame of each post with the stock data, computed the corresponding price changes, calculated a correlation coefficient, and graphed our findings.
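    The sentiment-labelling and correlation step referred to above can be sketched roughly as follows. This is not the authors' actual code: the file names, column names, and the neutrality threshold are assumptions made purely for illustration.

```python
# Rough sketch of the sentiment-vs-price step described above, not the authors' code.
# File names, column names ("title", "created", "date", "close"), and the neutral
# threshold are assumptions.
import pandas as pd
from textblob import TextBlob

posts = pd.read_csv("reddit_posts.csv", parse_dates=["created"])   # hypothetical file
prices = pd.read_csv("stock_prices.csv", parse_dates=["date"])     # hypothetical file

# TextBlob polarity is a float in [-1, 1]; label posts as positive/negative/neutral.
posts["polarity"] = posts["title"].apply(lambda t: TextBlob(str(t)).sentiment.polarity)
posts["label"] = posts["polarity"].apply(
    lambda p: "positive" if p > 0.05 else ("negative" if p < -0.05 else "neutral")
)

# Match each post to the next day's price change and compute a correlation coefficient.
posts["date"] = posts["created"].dt.normalize()
prices["next_day_return"] = prices["close"].pct_change().shift(-1)
merged = posts.merge(prices[["date", "next_day_return"]], on="date", how="inner")
print(merged["polarity"].corr(merged["next_day_return"]))
```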

    In conclusion, the analysis found little or no correlation across the companies as a whole, although Microsoft and Google show stronger correlations when analyzed on their own. However, this may be due to other factors, such as why a post was made or whether the market was already trending on those dates. A larger analysis with more data from other social media platforms would be needed to support our hypothesis that there is a strong correlation.

  6. NLS Historic Maps API: Historical Maps of Great Britain

    • data.catchmentbasedapproach.org
    • hub.arcgis.com
    Updated Sep 19, 2017
    Cite
    klokantech (2017). NLS Historic Maps API: Historical Maps of Great Britain [Dataset]. https://data.catchmentbasedapproach.org/maps/131be1ff1498429eacf806f939807f20
    Explore at:
    Dataset updated
    Sep 19, 2017
    Dataset authored and provided by
    klokantech
    License

    Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
    License information was derived automatically

    Area covered
    Description

    National Library of Scotland Historic Maps API: Historical Maps of Great Britain for use in mashups and ArcGIS Online.
    https://nls.tileserver.com/
    https://maps.nls.uk/projects/api/index.html

    This seamless historic map can be: embedded in your own website; used for research purposes; used as a backdrop for your own markers or geographic data; used to create derivative work (such as OpenStreetMap) from it. The mapping is based on out-of-copyright Ordnance Survey maps, dating from the 1920s to the 1940s. The map can be opened directly in a web browser at https://nls.tileserver.com/ and supports natural zooming and panning with finger pinching and dragging.

    How to embed the historic map in your website: The easiest way of embedding the historical map in your website is to copy and paste the provided HTML code into your web page (simple embedding; try: hello.html). You can automatically position the historic map to open at a particular place or postal address by appending the name as a "q" parameter, for example ?q=edinburgh (embedding with a zoom to a place; try: placename.html). You can also position the map to open at particular latitude and longitude coordinates, e.g. ?lat=51.5&lng=0&zoom=11; there are many ways of obtaining geographic coordinates (embedding with a zoom to coordinates; try: coordinates.html). The map can also automatically detect the geographic location of the visitor to display the place where you are right now, with ?q=auto (try: auto.html).

    How to use the map in a mashup: The historic map can be used as a background map for your own data. You can place markers on top of it, or implement any functionality you want. A simple-to-use JavaScript API is provided to access the map from popular APIs such as the Google Maps API, the Microsoft Bing SDK, or the open-source OpenLayers or KHTML. To use the map in your mashups based on these tools, include the NLS API in your web page.
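    The embedding parameters quoted above (q, lat, lng, zoom) can be assembled into viewer URLs programmatically. The sketch below is based only on the parameters mentioned in this description, not on the full NLS API documentation, so treat the URL structure as an assumption to verify against https://maps.nls.uk/projects/api/index.html.

```python
# Sketch of building NLS Historic Maps viewer URLs from the query parameters quoted
# above (?q=..., ?lat=...&lng=...&zoom=...). Based only on this description, not the
# full API documentation.
from urllib.parse import urlencode

BASE = "https://nls.tileserver.com/"

def viewer_url(q=None, lat=None, lng=None, zoom=None):
    """Return a viewer URL positioned by place name or by coordinates."""
    params = {}
    if q is not None:
        params["q"] = q                      # place name, postal address, or "auto"
    if lat is not None and lng is not None:
        params.update({"lat": lat, "lng": lng})
        if zoom is not None:
            params["zoom"] = zoom
    return BASE + ("?" + urlencode(params) if params else "")

print(viewer_url(q="edinburgh"))             # open at a place name
print(viewer_url(lat=51.5, lng=0, zoom=11))  # open at coordinates, as in the example above
print(viewer_url(q="auto"))                  # detect the visitor's location
```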
