10 datasets found
  1. BigQuery GIS Utility Datasets (U.S.)

    • kaggle.com
    zip
    Updated Mar 20, 2019
    Cite
    Google BigQuery (2019). BigQuery GIS Utility Datasets (U.S.) [Dataset]. https://www.kaggle.com/bigquery/utility-us
    Explore at:
    zip (0 bytes). Available download formats
    Dataset updated
    Mar 20, 2019
    Dataset provided by
    BigQuery (https://cloud.google.com/bigquery)
    Google (http://google.com/)
    Authors
    Google BigQuery
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Querying BigQuery tables: You can use the BigQuery Python client library to query tables in this dataset in Kernels. Note that methods available in Kernels are limited to querying data. Tables are at bigquery-public-data.utility_us.[TABLENAME].

    • Project: "bigquery-public-data"
    • Table: "utility_us"

    Fork this kernel to get started and to learn how to safely analyze large BigQuery datasets.

    If you're using Python, you can start with this code:

    import pandas as pd
    from bq_helper import BigQueryHelper

    # Helper object pointed at the public "utility_us" dataset
    bq_assistant = BigQueryHelper("bigquery-public-data", "utility_us")
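
    A follow-up sketch, continuing from the snippet above and assuming the bq_helper interface (list_tables, estimate_query_size, query_to_pandas_safe); the table name in the query is a placeholder to be replaced with one of the names returned by list_tables():

        # Discover the tables in the dataset and estimate a query's cost before running it.
        print(bq_assistant.list_tables())

        query = """
            SELECT *
            FROM `bigquery-public-data.utility_us.us_states_area`  -- placeholder table name
            LIMIT 10
        """
        print(bq_assistant.estimate_query_size(query))  # estimated GB scanned
        df = bq_assistant.query_to_pandas_safe(query, max_gb_scanned=1)
        print(df.head())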
    
  2. FitBit Fitness Tracker Data (revised)

    • kaggle.com
    zip
    Updated Dec 17, 2022
    + more versions
    Cite
    duart2688 (2022). FitBit Fitness Tracker Data (revised) [Dataset]. https://www.kaggle.com/duart2688/fitabase-data-cleaned-using-sql
    Explore at:
    zip (12763010 bytes). Available download formats
    Dataset updated
    Dec 17, 2022
    Authors
    duart2688
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Content

    This dataset was generated by respondents to a distributed survey via Amazon Mechanical Turk between 03.12.2016 and 05.12.2016. Thirty eligible Fitbit users consented to the submission of personal tracker data, including minute-level output for physical activity, heart rate, and sleep monitoring. Individual reports can be parsed by export session ID (column A) or timestamp (column B). Variation between outputs reflects the use of different types of Fitbit trackers and individual tracking behaviors and preferences.

    Main modifications

    This is the list of manipulations performed on the original dataset, published by Möbius. All of the cleaning and rearrangement was performed in BigQuery using SQL functions. 1) After taking a closer look at the source dataset, I realized that for my case study I did not need some of the tables contained in the original archive. I therefore did not import dailyCalories_merged.csv, dailyIntensities_merged.csv, and dailySteps_merged.csv, as they proved redundant: their content can be found in dailyActivity_merged.csv. In addition, minutesCaloriesWide_merged.csv, minutesIntensitiesWide_merged.csv, and minuteStepsWide_merged.csv were not imported, as they present the same data contained in other files in a wide format. Hence, only the long-format files containing the same data were imported into the BigQuery database.

    2) To be able to compare and measure the correlation among different variables based on hourly records, I created a new table with a LEFT JOIN on the Id and ActivityHour columns. I repeated the same JOIN on the tables with minute-level records, obtaining two new tables: hourly_activity.csv and minute_activity.csv.
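
    A minimal sketch of that hourly join as BigQuery SQL run through the google-cloud-bigquery client; the dataset path (your_project.fitbit) and the measure columns (Calories, TotalIntensity, StepTotal) are assumptions based on the original CSV names, not taken from this listing:

        from google.cloud import bigquery

        client = bigquery.Client(project="your_project")  # replace with your GCP project

        # LEFT JOIN the three hourly tables on Id and ActivityHour to build hourly_activity.
        sql = """
            SELECT
              cal.Id,
              cal.ActivityHour,
              cal.Calories,
              ints.TotalIntensity,
              stp.StepTotal
            FROM `your_project.fitbit.hourlyCalories_merged` AS cal
            LEFT JOIN `your_project.fitbit.hourlyIntensities_merged` AS ints
              ON cal.Id = ints.Id AND cal.ActivityHour = ints.ActivityHour
            LEFT JOIN `your_project.fitbit.hourlySteps_merged` AS stp
              ON cal.Id = stp.Id AND cal.ActivityHour = stp.ActivityHour
        """
        hourly_activity = client.query(sql).to_dataframe()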

    3) To validate most of the columns containing DATE and DATETIME values, which had been imported as the STRING data type, I used the PARSE_DATE() and PARSE_DATETIME() functions. While importing the heartrate_seconds_merged.csv, hourlyCalories_merged.csv, hourlyIntensities_merged.csv, hourlySteps_merged.csv, minutesCaloriesNarrow_merged.csv, minuteIntensitiesNarrow_merged.csv, minuteMETsNarrow_merged.csv, minuteSleep_merged.csv, minuteSteps_merged.csv, sleepDay_merge.csv, and weigthLog_Info_merged.csv files into BigQuery, it was necessary to import the DATETIME and DATE columns as STRING, because the original syntax used in the CSV files could not be recognized as a valid DATETIME data type due to the "AM" and "PM" text at the end of each expression.
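
    A hedged example of that conversion, assuming the CSV timestamps look like "4/12/2016 9:00:00 AM" (the format string may need adjusting to the actual files):

        from google.cloud import bigquery

        client = bigquery.Client(project="your_project")  # replace with your GCP project

        # PARSE_DATETIME turns the STRING timestamps into DATETIME;
        # %I (12-hour clock) and %p (AM/PM) handle the trailing "AM"/"PM" text.
        sql = """
            SELECT
              Id,
              PARSE_DATETIME('%m/%d/%Y %I:%M:%S %p', ActivityHour) AS activity_hour
            FROM `your_project.fitbit.hourlyCalories_merged`
            LIMIT 10
        """
        print(client.query(sql).to_dataframe())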

    Acknowledgement

    1. Möbius' version of the data set can be found here.
    2. Furberg, Robert; Brinton, Julia; Keating, Michael; Ortiz, Alexa. https://zenodo.org/record/53894#.YMoUpnVKiP9-
  3. gnomAD

    • console.cloud.google.com
    Updated Jul 25, 2023
    Cite
    Broad Institute of MIT and Harvard (2023). gnomAD [Dataset]. https://console.cloud.google.com/marketplace/product/broad-institute/gnomad?hl=zh_TW
    Explore at:
    Dataset updated
    Jul 25, 2023
    Dataset provided by
    Google (http://google.com/)
    Description

    The Genome Aggregation Database (gnomAD) is maintained by an international coalition of investigators to aggregate and harmonize data from large-scale sequencing projects. These public datasets are available in VCF format in Google Cloud Storage and in Google BigQuery as integer range partitioned tables. Each dataset is sharded by chromosome, meaning variants are distributed across 24 tables (indicated with a "_chr*" suffix); using the sharded tables reduces query costs significantly. Variant Transforms was used to process these VCF files and import them into BigQuery, and VEP annotations were parsed into separate columns for easier analysis using Variant Transforms' annotation support. These public datasets are included in BigQuery's 1 TB/month free tier: each user receives 1 TB of free BigQuery processing every month, which can be used to run queries on this public dataset. Watch this short video to learn how to get started quickly using BigQuery to access public datasets, and use this quick start guide to learn how to access public datasets on Google Cloud Storage. Find out more in our blog post, Providing open access to gnomAD on Google Cloud. Questions? Contact gcp-life-sciences-discuss@googlegroups.com.
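
    A minimal sketch of querying one chromosome shard while staying inside the free tier; the shard name and column names below are assumptions (check the dataset in the BigQuery console for the exact table names and Variant Transforms schema):

        from google.cloud import bigquery

        client = bigquery.Client(project="your_project")  # replace with your GCP project

        # Placeholder shard: gnomAD tables are split per chromosome ("_chr*" suffix).
        sql = """
            SELECT reference_name, start_position, names
            FROM `bigquery-public-data.gnomAD.v3_genomes__chr21`
            LIMIT 10
        """

        # Dry run first: reports the bytes a query would scan without using any quota.
        dry = client.query(sql, job_config=bigquery.QueryJobConfig(dry_run=True, use_query_cache=False))
        print(f"Would scan {dry.total_bytes_processed / 1e9:.2f} GB")

        # Run the query for real once the estimate looks reasonable.
        for row in client.query(sql).result():
            print(dict(row))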

  4. Intellectual Property Investigations by the USITC

    • kaggle.com
    zip
    Updated Feb 12, 2019
    Cite
    Google BigQuery (2019). Intellectual Property Investigations by the USITC [Dataset]. https://www.kaggle.com/bigquery/usitc-investigations
    Explore at:
    zip (0 bytes). Available download formats
    Dataset updated
    Feb 12, 2019
    Dataset provided by
    BigQuery (https://cloud.google.com/bigquery)
    Google (http://google.com/)
    Authors
    Google BigQuery
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    Context

    Section 337, Tariff Act of 1930: Investigations of Unfair Practices in Import Trade. Under Section 337, the USITC determines whether there is unfair competition in the importation of products into, or their subsequent sale in, the United States. Section 337 prohibits the importation into the US, or the sale by owners, importers, or consignees, of articles that infringe a patent, copyright, trademark, or semiconductor mask work, or where unfair competition or unfair acts exist that can destroy or substantially injure a US industry, prevent one from developing, or restrain or monopolize trade in US commerce. These latter categories are very broad: unfair competition can involve counterfeit, mismarked, or misbranded goods; sales at unfairly low prices; other antitrust violations such as price fixing or market division; or goods that violate a standard applicable to such goods.

    Content

    US International Trade Commission 337Info Unfair Import Investigations Information System contains data on investigations done under Section 337. Section 337 declares the infringement of certain statutory intellectual property rights and other forms of unfair competition in import trade to be unlawful practices. Most Section 337 investigations involve allegations of patent or registered trademark infringement.

    Fork this notebook to get started accessing data in the BigQuery dataset, using the bq_helper package to write SQL queries.
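
    A minimal sketch along those lines, assuming the bq_helper interface; the dataset ID comes from the acknowledgements below, and the first table returned by list_tables() is previewed as a starting point:

        from bq_helper import BigQueryHelper

        # Helper object pointed at the USITC investigations dataset under patents-public-data.
        bq_assistant = BigQueryHelper("patents-public-data", "usitc_investigations")

        # Discover the available tables, then preview one before writing real SQL queries.
        tables = bq_assistant.list_tables()
        print(tables)
        print(bq_assistant.head(tables[0], num_rows=5))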

    Acknowledgements

    Data Origin: https://bigquery.cloud.google.com/dataset/patents-public-data:usitc_investigations

    "US International Trade Commission 337Info Unfair Import Investigations Information System" by the USITC, for public use.

    Banner photo by João Silas on Unsplash

  5. Data and code for: The Contributor Role Taxonomy (CRediT) at ten: a...

    • figshare.com
    pdf
    Updated Nov 19, 2025
    Cite
    Simon Porter; Ruth Whittam; Liz Allen; Veronique Kiermer (2025). Data and code for: The Contributor Role Taxonomy (CRediT) at ten: a retrospective analysis of the diversity of contributions to published research output [Dataset]. http://doi.org/10.6084/m9.figshare.28816703.v1
    Explore at:
    pdf. Available download formats
    Dataset updated
    Nov 19, 2025
    Dataset provided by
    figshare
    Figshare (http://figshare.com/)
    Authors
    Simon Porter; Ruth Whittam; Liz Allen; Veronique Kiermer
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    About this notebook

    This notebook was created using a helper script, Base.ipynb. The script has some helper functions that push output directly to Datawrapper to generate the graphs included in the opinion piece. To run without the helper functions, using BigQuery alone, first install the client library:

        !pip install google-cloud-bigquery

    then add:

        from google.cloud import bigquery
        from google.cloud.bigquery import magics

        project_id = "your_project"  # update as needed
        magics.context.project = project_id
        bq_params = {}
        client = bigquery.Client(project=project_id)
        %load_ext google.cloud.bigquery

    and finally comment out the make_chart lines.

    About dimensions-ai-integrity.ds_dp_pipeline_ripeta_staging.trust_markers_raw

    dimensions-ai-integrity.ds_dp_pipeline_ripeta_staging.trust_markers_raw is an internal table that is the result of running a process over the text of publications in order to identify trust-marker segments, including author contributions. The process works as follows. It aims to automatically segment research papers into their constituent sections. It operates by identifying headings within the text based on a pre-defined set of patterns and a rule-based system. The system first cleans and normalizes the input text. It then employs regular expressions to detect potential section headings. These potential headings are validated against a set of rules that consider factors such as capitalization, the context of surrounding words, and the typical order of sections within a research paper (e.g., certain sections not appearing after "References" or before "Abstract"). Specific rules also handle exceptions for particular heading types like "Keywords" or "Appendices". Once valid headings are identified, the system extracts the corresponding textual content for each section. The output is a structured representation of the paper, categorizing text segments under their respective heading types. Any text that does not fall under a recognized heading is identified as unlabeled content. The overall process aims to provide a structured understanding of the document's organization for subsequent analysis.

    Author Contributions segments are identified using the following regexes:

        "author_contributions": [
            "((credit|descript(ion(?:s)?|ive)| )*author(s|'s|ship|s')?( |contribution(?:s)?|statement(?:s)?|role(?:s)?){2,})",
            "contribution(?:s)"
        ]

    Access to dimensions-ai-integrity.ds_dp_pipeline_ripeta_staging.trust_markers_raw is available to peer reviewers of the opinion piece. Datasets that allow external validation of the CRediT contribution identification process have also been produced.
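
    A small sketch that exercises the author-contributions patterns quoted above on a few candidate headings; the IGNORECASE flag is an assumption, since the pipeline may normalize case differently:

        import re

        # Patterns copied from the description above.
        author_contribution_patterns = [
            r"((credit|descript(ion(?:s)?|ive)| )*author(s|'s|ship|s')?( |contribution(?:s)?|statement(?:s)?|role(?:s)?){2,})",
            r"contribution(?:s)",
        ]

        headings = ["Author Contributions", "CRediT authorship contribution statement", "Methods"]
        for heading in headings:
            matched = any(re.search(p, heading, flags=re.IGNORECASE) for p in author_contribution_patterns)
            print(f"{heading!r}: {'author contributions' if matched else 'other section'}")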

  6. Kimia Farma: Performance Analysis 2020-2023

    • kaggle.com
    zip
    Updated Feb 27, 2025
    Cite
    Anggun Dwi Lestari (2025). Kimia Farma: Performance Analysis 2020-2023 [Dataset]. https://www.kaggle.com/datasets/anggundwilestari/kimia-farma-performance-analysis-2020-2023
    Explore at:
    zip (30284703 bytes). Available download formats
    Dataset updated
    Feb 27, 2025
    Authors
    Anggun Dwi Lestari
    Description

    This project analyzes Kimia Farma's performance from 2020 to 2023 using Google Looker Studio. The analysis is based on a pre-processed dataset stored in BigQuery, which serves as the data source for the dashboard.

    Project Scope

    The dashboard is designed to provide insights into branch performance, sales trends, customer ratings, and profitability. The development is ongoing, with multiple pages planned for a more in-depth analysis.

    Current Progress

    ✅ The first page of the dashboard is completed
    ✅ A sample dashboard file is available on Kaggle
    🔄 Development will continue with additional pages

    Dataset Overview

    The dataset consists of transaction records from Kimia Farma branches across different cities and provinces. Below are the key columns used in the analysis:
    - transaction_id: Transaction ID code
    - date: Transaction date
    - branch_id: Kimia Farma branch ID code
    - branch_name: Kimia Farma branch name
    - kota: City of the Kimia Farma branch
    - provinsi: Province of the Kimia Farma branch
    - rating_cabang: Customer rating of the Kimia Farma branch
    - customer_name: Name of the customer who made the transaction
    - product_id: Product ID code
    - product_name: Name of the medicine
    - actual_price: Price of the medicine
    - discount_percentage: Discount percentage applied to the medicine
    - persentase_gross_laba: Gross profit percentage based on the following conditions (see the sketch after this list):
        Price ≤ Rp 50,000 → 10% profit
        Price > Rp 50,000 - 100,000 → 15% profit
        Price > Rp 100,000 - 300,000 → 20% profit
        Price > Rp 300,000 - 500,000 → 25% profit
        Price > Rp 500,000 → 30% profit
    - nett_sales: Price after discount
    - nett_profit: Profit earned by Kimia Farma
    - rating_transaksi: Customer rating of the transaction
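
    A hedged sketch of the tier rules above expressed in code; the formulas for nett_sales and nett_profit are assumptions based on the column descriptions, not confirmed by the project:

        def gross_profit_percentage(actual_price: float) -> float:
            """Map a price in Rupiah to the gross profit tier listed above."""
            if actual_price <= 50_000:
                return 0.10
            elif actual_price <= 100_000:
                return 0.15
            elif actual_price <= 300_000:
                return 0.20
            elif actual_price <= 500_000:
                return 0.25
            return 0.30

        # Assumed relationships (not confirmed by the dataset description):
        # nett_sales  = actual_price * (1 - discount_percentage)
        # nett_profit = nett_sales * gross_profit_percentage(actual_price)
        print(gross_profit_percentage(75_000))  # 0.15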

    Files Provided

    📌 kimia farma_query.txt – Contains SQL queries used for data analysis in Looker Studio
    📌 kimia farma_analysis_table.csv – Preprocessed dataset ready for import and analysis

    📢 Published on: My LinkedIn

  7. CoEdIT

    • kaggle.com
    • huggingface.co
    zip
    Updated Nov 26, 2023
    Cite
    The Devastator (2023). CoEdIT [Dataset]. https://www.kaggle.com/datasets/thedevastator/coedit-nlp-editing-dataset
    Explore at:
    zip (4681073 bytes). Available download formats
    Dataset updated
    Nov 26, 2023
    Authors
    The Devastator
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    CoEdIT

    Enhancing AI Text Editing Through 69,000 Instances

    By Huggingface Hub [source]

    About this dataset

    This dataset provides 69,000 instances of natural language processing (NLP) editing tasks to help researchers develop more effective AI text-editing models. Compiled into a convenient JSON format, this collection offers easy access so that researchers have the tools they need to create groundbreaking AI models that efficiently and effectively redefine natural language processing. This is your chance to be at the forefront of NLP technology and make history through innovative AI capabilities. So join in and unlock a world of possibilities with CoEdIT's Text Editing Dataset!

    More Datasets

    For more datasets, click here.

    Featured Notebooks

    • 🚨 Your notebook can be here! 🚨!

    How to use the dataset

    • Familiarize yourself with the format of the dataset by looking at its columns: task, src, tgt. Each row contains a specific NLP editing task, the source text (src), and the target text (tgt) that should result from that editing task.
    • Import the files of this dataset into your machine-learning environment or analysis tool of choice. Popular options include Python's Pandas library, BigQuery on Google Cloud Platform for larger datasets like this one, or Excel (see the loading sketch after this list).
    • Once you've imported the data, start exploring: look at various rows to see how different types of edits transform the source text into target text that meets the given criteria, and read any documentation associated with each column to understand the context before beginning your analysis or coding.
    • Test coding solutions that handle different types and scales of edits. For example, if understanding how punctuation affects sentence-similarity measures gives key insight into the meaning being conveyed, develop code accordingly, experimenting with common ML/NLP algorithms and libraries such as NLTK.
    • Finally, once you have tested your ideas, build efficient and effective AI models using training data tailored to the tasks at hand, and evaluate performance on the validation and test datasets before moving to production.
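
    A minimal loading sketch based on the train.csv / validation.csv files listed under Columns below; the file paths inside the downloaded archive are assumptions:

        import pandas as pd

        # Load the training split; columns are task, src, tgt as described below
        # (lower-cased here in case the header uses a capitalized "Task").
        train = pd.read_csv("train.csv")
        train.columns = [c.lower() for c in train.columns]

        print(train[["task", "src", "tgt"]].head())
        print(train["task"].value_counts())  # how many rows per editing task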

    Research Ideas

    • Automated Grammar Checking Solutions: This dataset can be used to train machine learning models to detect grammatical errors and suggest proper corrections.
    • Text Summarization: Using this dataset, researchers can create AI-powered summarization algorithms that summarize long-form passages into shorter summaries while preserving accuracy and readability
    • Natural Language Generation: This dataset could be used to develop AI solutions that generate accurately formatted natural language sentences when given a prompt or some other form of input

    Acknowledgements

    If you use this dataset in your research, please credit the original authors. Data Source

    License

    License: CC0 1.0 Universal (CC0 1.0) - Public Domain Dedication No Copyright - You can copy, modify, distribute and perform the work, even for commercial purposes, all without asking permission. See Other Information.

    Columns

    File: validation.csv
    | Column name | Description |
    |:------------|:------------|
    | Task        | This column describes the task that the dataset is intended to be used for. (String) |
    | src         | This column contains the source text input. (String) |
    | tgt         | This column contains the target text output. (String) |

    File: train.csv
    | Column name | Description |
    |:------------|:------------|
    | Task        | This column describes the task that the dataset is intended to be used for. (String) |
    | src         | This column contains the source text input. (String) |
    | tgt         | This column contains the target text output. (String) ...

  8. Meio Ambiente: Taxa de Precipitação (GOES-16)

    • data.rio
    • datario-pcrj.hub.arcgis.com
    • +1more
    Updated Jun 3, 2022
    + more versions
    Cite
    Prefeitura da Cidade do Rio de Janeiro (2022). Meio Ambiente: Taxa de Precipitação (GOES-16) [Dataset]. https://www.data.rio/documents/48c0210e96074b48b401ec2fa4ad99b3
    Explore at:
    Dataset updated
    Jun 3, 2022
    Dataset authored and provided by
    Prefeitura da Cidade do Rio de Janeiro
    License

    Attribution-NoDerivs 3.0 (CC BY-ND 3.0): https://creativecommons.org/licenses/by-nd/3.0/
    License information was derived automatically

    Description

    Estimated precipitation rate for areas of southeastern Brazil. Estimates are made every hour, with each record containing the data from one estimate. Each area is a square 4 km on a side. Data collected by the GOES-16 satellite.

      How to access


      On this page

      Here you will find a button to download the data as a gzip-compressed CSV file. For the same result, you can also click here.


      BigQuery

          SELECT *
          FROM `datario.meio_ambiente_clima.taxa_precipitacao_satelite`
          LIMIT 1000

      Click here to go directly to this table in BigQuery. If you have no experience with BigQuery,
      see our documentation to learn how to access the data.
    
    
      Python

        import basedosdados as bd

        # To load the data directly into pandas
        df = bd.read_sql(
            "SELECT * FROM `datario.meio_ambiente_clima.taxa_precipitacao_satelite` LIMIT 1000",
            billing_project_id="<your_gcp_project_id>"
        )
    
    
    
    
      R

        install.packages("basedosdados")
        library("basedosdados")

        # Set your Google Cloud project
        set_billing_id("<your_gcp_project_id>")

        # To load the data directly into R
        tb <- read_sql("SELECT * FROM `datario.meio_ambiente_clima.taxa_precipitacao_satelite` LIMIT 1000")
    
    
    
    
    
    
      Temporal coverage

      From 2020 to the current date


      Update frequency

      Daily


      Managing agency

      Centro de Operações da Prefeitura do Rio (COR)
    
    
    
    
      Columns

        Name            Description
        latitude        Latitude of the center of the area.
        longitude       Longitude of the center of the area.
        rrqpe           Estimated precipitation rate, measured in millimeters per hour.
        primary_key     Primary key created by concatenating the date, time, latitude and longitude columns; used to avoid duplicate records.
        horario         Time at which the measurement was taken.
        data_particao   Date on which the measurement was taken.
    
    
    
    
    
    
    
      Publisher details

      Name: Patrícia Catandi
      E-mail: patriciabcatandi@gmail.com
    
  9. Meio Ambiente: Estações pluviométricas (AlertaRio)

    • data.rio
    • datario-pcrj.hub.arcgis.com
    Updated Jun 2, 2022
    Cite
    Prefeitura da Cidade do Rio de Janeiro (2022). Meio Ambiente: Estações pluviométricas (AlertaRio) [Dataset]. https://www.data.rio/documents/cc4863712d65418abd8b2063a50bf453
    Explore at:
    Dataset updated
    Jun 2, 2022
    Dataset authored and provided by
    Prefeitura da Cidade do Rio de Janeiro
    License

    Attribution-NoDerivs 3.0 (CC BY-ND 3.0): https://creativecommons.org/licenses/by-nd/3.0/
    License information was derived automatically

    Description

    Data on the rain gauge stations of AlertaRio (the Rio de Janeiro City Hall's Alerta Rio system) in the city of Rio de Janeiro.

      How to access


      On this page

      Here you will find a button to download the data as a gzip-compressed CSV file. For the same result, you can also click here.


      BigQuery

          SELECT *
          FROM `datario.meio_ambiente_clima.estacoes_alertario`
          LIMIT 1000

      Click here to go directly to this table in BigQuery. If you have no experience with BigQuery,
      see our documentation to learn how to access the data.
    
    
      Python

        import basedosdados as bd

        # To load the data directly into pandas
        df = bd.read_sql(
            "SELECT * FROM `datario.meio_ambiente_clima.estacoes_alertario` LIMIT 1000",
            billing_project_id="<your_gcp_project_id>"
        )
    
    
    
    
      R

        install.packages("basedosdados")
        library("basedosdados")

        # Set your Google Cloud project
        set_billing_id("<your_gcp_project_id>")

        # To load the data directly into R
        tb <- read_sql("SELECT * FROM `datario.meio_ambiente_clima.estacoes_alertario` LIMIT 1000")
    
    
    
    
    
    
      Temporal coverage

      N/A


      Update frequency

      Annual


      Managing agency

      COR
    
    
    
    
      Columns

        Name                   Description
        x                      X UTM coordinate (SAD69, zone 23).
        longitude              Longitude where the station is located.
        id_estacao             Station ID assigned by AlertaRIO.
        estacao                Station name.
        latitude               Latitude where the station is located.
        cota                   Elevation, in meters, at which the station is located.
        endereco               Full address of the station.
        situacao               Indicates whether the station is operational or failing.
        data_inicio_operacao   Date on which the station began operating.
        data_fim_operacao      Date on which the station stopped operating.
        data_atualizacao       Date on which the operation-date information was last updated.
        y                      Y UTM coordinate (SAD69, zone 23).
    
    
    
    
    
    
    
      Publisher details

      Name: Patricia Catandi
      E-mail: patriciabcatandi@gmail.com
    
  10. Dados do sistema Comando (COR): procedimento operacional padrao

    • data.rio
    • hub.arcgis.com
    Updated Oct 4, 2022
    + more versions
    Cite
    Prefeitura da Cidade do Rio de Janeiro (2022). Dados do sistema Comando (COR): procedimento operacional padrao [Dataset]. https://www.data.rio/documents/b26f700285ab4fde9495f2851adcf3d8
    Explore at:
    Dataset updated
    Oct 4, 2022
    Dataset authored and provided by
    Prefeitura da Cidade do Rio de Janeiro
    License

    Attribution-NoDerivs 3.0 (CC BY-ND 3.0): https://creativecommons.org/licenses/by-nd/3.0/
    License information was derived automatically

    Description

    Standard operating procedures (POPs) existing in the PCRJ. A POP is a procedure used to resolve an event and is composed of several activities. An event is an occurrence in the city of Rio de Janeiro that requires monitoring and, in most cases, action by the PCRJ, for example a pothole in the street. The data can also be accessed through the Escritório de Dados API: https://api.dados.rio/v1/

      How to access


      On this page

      Here you will find a button to download the data as a gzip-compressed CSV file. For the same result, you can also click here.


      BigQuery

          SELECT *
          FROM `datario.adm_cor_comando.procedimento_operacional_padrao`
          LIMIT 1000

      Click here to go directly to this table in BigQuery. If you have no experience with BigQuery,
      see our documentation to learn how to access the data.
    
    
      Python

        import basedosdados as bd

        # To load the data directly into pandas
        df = bd.read_sql(
            "SELECT * FROM `datario.adm_cor_comando.procedimento_operacional_padrao` LIMIT 1000",
            billing_project_id="<your_gcp_project_id>"
        )
    
    
    
    
      R

        install.packages("basedosdados")
        library("basedosdados")

        # Set your Google Cloud project
        set_billing_id("<your_gcp_project_id>")

        # To load the data directly into R
        tb <- read_sql("SELECT * FROM `datario.adm_cor_comando.procedimento_operacional_padrao` LIMIT 1000")
    
    
    
    
    
    
      Temporal coverage

      Not informed.


      Update frequency

      Monthly


      Managing agency

      COR
    
    
    
    
      Columns

        Name         Description
        id_pop       Identifier of the POP (standard operating procedure).
        pop_titulo   Name of the standard operating procedure.
    
    
    
    
    
    
    
      Publisher details

      Name: Patrícia Catandi
      E-mail: patriciabcatandi@gmail.com
    
