100+ datasets found
  1. Python Import Data India – Buyers & Importers List

    • seair.co.in
    Cite
    Seair Exim, Python Import Data India – Buyers & Importers List [Dataset]. https://www.seair.co.in
    Explore at:
    Available download formats: .bin, .xml, .csv, .xls
    Dataset provided by
    Seair Info Solutions PVT LTD
    Authors
    Seair Exim
    Area covered
    India
    Description

    Subscribers can look up export and import data for 23 countries by HS code or product name. This demo is helpful for market analysis.

  2. Python International Export Import Data | Eximpedia

    • eximpedia.app
    Updated Nov 25, 2025
    + more versions
    Cite
    (2025). Python International Export Import Data | Eximpedia [Dataset]. https://www.eximpedia.app/companies/python-international/17665863
    Explore at:
    Dataset updated
    Nov 25, 2025
    Description

    Python International Export Import Data. Follow the Eximpedia platform for HS code, importer-exporter records, and customs shipment details.

  3. Code to import PSCAD data into Python (Spyder)

    • ieee-dataport.org
    Updated Nov 20, 2025
    Cite
    Franz Guzman Llanos (2025). Code to import PSCAD data into Python (Spyder) [Dataset]. https://ieee-dataport.org/documents/code-import-pscad-data-python-spyder
    Explore at:
    Dataset updated
    Nov 20, 2025
    Authors
    Franz Guzman Llanos
    Description

    minimizes errors

  4. Python Import Data in February - Seair.co.in

    • seair.co.in
    Updated Feb 18, 2016
    Cite
    Seair Exim (2016). Python Import Data in February - Seair.co.in [Dataset]. https://www.seair.co.in
    Explore at:
    Available download formats: .bin, .xml, .csv, .xls
    Dataset updated
    Feb 18, 2016
    Dataset provided by
    Seair Info Solutions PVT LTD
    Authors
    Seair Exim
    Area covered
    Argentina, Nauru, Malaysia, Austria, Gibraltar, Slovakia, French Guiana, Tokelau, Timor-Leste, Korea (Democratic People's Republic of)
    Description

    Subscribers can look up export and import data for 23 countries by HS code or product name. This demo is helpful for market analysis.

  5. Antonin Python Export Import Data | Eximpedia

    • eximpedia.app
    Updated Sep 2, 2025
    Cite
    (2025). Antonin Python Export Import Data | Eximpedia [Dataset]. https://www.eximpedia.app/companies/antonin-python/40213244
    Explore at:
    Dataset updated
    Sep 2, 2025
    Description

    Antonin Python Export Import Data. Follow the Eximpedia platform for HS code, importer-exporter records, and customs shipment details.

  6. Python Import Data in August - Seair.co.in

    • seair.co.in
    Updated Aug 20, 2016
    Cite
    Seair Exim (2016). Python Import Data in August - Seair.co.in [Dataset]. https://www.seair.co.in
    Explore at:
    Available download formats: .bin, .xml, .csv, .xls
    Dataset updated
    Aug 20, 2016
    Dataset provided by
    Seair Info Solutions PVT LTD
    Authors
    Seair Exim
    Area covered
    South Africa, Lebanon, Nepal, Belgium, Christmas Island, Virgin Islands (U.S.), Gambia, Ecuador, Saint Pierre and Miquelon, Falkland Islands (Malvinas)
    Description

    Subscribers can look up export and import data for 23 countries by HS code or product name. This demo is helpful for market analysis.

  7. Acerola Extract Import Data | Python Logistics Llc Prod

    • seair.co.in
    Updated Mar 7, 2024
    Cite
    Seair Exim (2024). Acerola Extract Import Data | Python Logistics Llc Prod [Dataset]. https://www.seair.co.in
    Explore at:
    Available download formats: .bin, .xml, .csv, .xls
    Dataset updated
    Mar 7, 2024
    Dataset provided by
    Seair Info Solutions PVT LTD
    Authors
    Seair Exim
    Area covered
    United States
    Description

    Subscribers can look up export and import data for 23 countries by HS code or product name. This demo is helpful for market analysis.

  8. Storage and Transit Time Data and Code

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jun 12, 2024
    + more versions
    Cite
    Andrew Felton (2024). Storage and Transit Time Data and Code [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_8136816
    Explore at:
    Dataset updated
    Jun 12, 2024
    Dataset provided by
    Montana State University
    Authors
    Andrew Felton
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Author: Andrew J. Felton
    Date: 5/5/2024

    This R project contains the primary code and data (following pre-processing in Python) used for data production, manipulation, visualization, analysis, and figure production for the study entitled:

    "Global estimates of the storage and transit time of water through vegetation"

    Please note that 'turnover' and 'transit' are used interchangeably in this project.

    Data information:

    The data folder contains key data sets used for analysis. In particular:

    "data/turnover_from_python/updated/annual/multi_year_average/average_annual_turnover.nc" contains a global array summarizing five year (2016-2020) averages of annual transit, storage, canopy transpiration, and number of months of data. This is the core dataset for the analysis; however, each folder has much more data, including a dataset for each year of the analysis. Data are also available is separate .csv files for each land cover type. Oterh data can be found for the minimum, monthly, and seasonal transit time found in their respective folders. These data were produced using the python code found in the "supporting_code" folder given the ease of working with .nc and EASE grid in the xarray python module. R was used primarily for data visualization purposes. The remaining files in the "data" and "data/supporting_data"" folder primarily contain ground-based estimates of storage and transit found in public databases or through a literature search, but have been extensively processed and filtered here.

    Code information

    Python scripts can be found in the "supporting_code" folder.

    Each R script in this project has a particular function:

    01_start.R: This script loads the R packages used in the analysis, sets the directory, and imports custom functions for the project. You can also load in the main transit time (turnover) datasets here using the source() function.

    02_functions.R: This script contains the custom function for this analysis, primarily to work with importing the seasonal transit data. Load this using the source() function in the 01_start.R script.

    03_generate_data.R: This script is not necessary to run and is primarily for documentation. The main role of this code was to import and wrangle the data needed to calculate ground-based estimates of aboveground water storage.

    04_annual_turnover_storage_import.R: This script imports the annual turnover and storage data for each landcover type. You load in these data from the 01_start.R script using the source() function.

    05_minimum_turnover_storage_import.R: This script imports the minimum turnover and storage data for each landcover type. Minimum is defined as the lowest monthly estimate. You load in these data from the 01_start.R script using the source() function.

    06_figures_tables.R: This is the main workhorse for figure/table production and supporting analyses. This script generates the key figures and summary statistics used in the study, which are then saved in the manuscript_figures folder. Note that all maps were produced using Python code found in the "supporting_code" folder.

  9. converted json to CSV Traffy Fondue data

    • kaggle.com
    zip
    Updated Jan 15, 2025
    Cite
    Hansen (2025). converted json to CSV Traffy Fondue data [Dataset]. https://www.kaggle.com/datasets/motethansen/converted-json-to-csv-traffy-fondue-data
    Explore at:
    Available download formats: zip (31705770 bytes)
    Dataset updated
    Jan 15, 2025
    Authors
    Hansen
    License

    GNU General Public License v3.0: https://www.gnu.org/licenses/gpl-3.0.html

    Description

    Traffy Fondue Data

    Data pulled from Traffy Fondue by accessing the Traffy Fondue Open API, covering January 2022 until January 2025.

    The following code pulled the data:

    
    import os
    import json
    import requests
    from datetime import datetime, timedelta
    import time
    
    class TraffyDataFetcher:
      def __init__(self, start_date, subfolder='traffyfonduedata'):
        self.url = "https://publicapi.traffy.in.th/share/teamchadchart/search"
        self.query = {'offset': '0'}
        self.payload = {}
        self.headers = {}
        self.start_date = datetime.strptime(start_date, '%Y-%m-%d')
        self.end_date = datetime.now()
        self.subfolder = subfolder
        self.max_requests_per_minute = 99
    
        if not os.path.exists(self.subfolder):
          os.makedirs(self.subfolder)
    
      def add_days_to_date(self, start_date_str, days_to_add):
        start_date = datetime.strptime(start_date_str, '%Y-%m-%d')
        new_date = start_date + timedelta(days=days_to_add)
        return new_date.strftime('%Y-%m-%d')
    
      def fetch_data(self):
        current_date = self.start_date
        index = 0
    
        while current_date <= self.end_date:
          start_time = datetime.now()
    
          self.query['start'] = current_date.strftime('%Y-%m-%d')
          new_date = self.add_days_to_date(self.query['start'], 10)
          self.query['end'] = new_date
          response = requests.request("GET", self.url, headers=self.headers, data=self.payload, params=self.query)
          print(f"offset: {index} response: {response.status_code}")
    
          filename = f"traffy_{current_date.strftime('%Y-%m-%d')}.json"
          file_path = os.path.join(self.subfolder, filename)
    
          with open(file_path, "w") as outfile:
            json_object = json.dumps(response.json(), indent=4)
            outfile.write(json_object)
    
          end_time = datetime.now()
          elapsed_time = (end_time - start_time).total_seconds()
          print(f"Elapsed time: {elapsed_time} s")
    
          index += 950
          current_date = datetime.strptime(new_date, '%Y-%m-%d') + timedelta(days=1)
    
          if index % self.max_requests_per_minute == 0:
            time.sleep(max(0, 60 - elapsed_time))  # avoid a negative sleep if a cycle took over a minute
    
    if __name__ == "__main__":
      fetcher = TraffyDataFetcher(start_date='2022-01-01')
      fetcher.fetch_data()
    

    --

    And the following code converted the JSON files to CSV files:

    import os
    import glob
    import json
    import pandas as pd
    #import numpy as np
    
    class TraffyJSONFixer:
      def __init__(self, path_to_json='*.json', subfolder='traffyfonduedata'):
        self.path_to_json = path_to_json
        self.subfolder = subfolder
        self.outputfolder = 'fixedjson'
        self.excelfolder = 'exceloutput'
        self.file_path = os.path.join(self.subfolder, self.path_to_json)
        self.json_files = glob.glob(self.file_path)
        
        # Ensure the subfolder exists
        if not os.path.exists(self.subfolder):
          os.makedirs(self.subfolder)
        # Ensure the outputfolder exists
        if not os.path.exists(self.outputfolder):
          os.makedirs(self.outputfolder)
        # Ensure the excelfolder exists
        if not os.path.exists(self.excelfolder):
          os.makedirs(self.excelfolder)
        
        # Debugging: Print the current working directory and the list of JSON files
        print(f"Current working directory: {os.getcwd()}")
        print(f"Found JSON files: {self.json_files}")
        
      def fix_json_files(self):
        for count, ele in enumerate(self.json_files):
          new_file_name = os.path.join(self.outputfolder, f"data_{os.path.basename(ele)}")
          
          try:
            with open(ele, 'r', encoding='utf-8') as f:
              data = json.load(f)
    
            # Debugging: Print the type of data
            print(f"Processing file: {ele}")
            print(f"Type of data: {type(data)}")
            
            # Handle different JSON structures
            if isinstance(data, dict) and "results" in data:
              results = data["results"]
            elif isinstance(data, list):
              results = data
            else:
              print(f"Unexpected JSON structure in file: {ele}")
              continue
    
            # Ensure results is a list or dict before writing
            if isinstance(results, (list, dict)):
              with open(new_file_name, 'w', encoding='utf-8') as f:
                f.write(json.dumps(results, indent=4))
            else:
              print(f"Unexpected type for results in file: {ele}")
          except (json.JSONDecodeError, KeyError) as e:
            print(f"Error processing file {ele}: {e}")
    
      def jsontoexcel(self):
        jsonfile_path = os.path.join(self.out...
    
  10. Ballroom Python South Export Import Data | Eximpedia

    • eximpedia.app
    Updated Jan 8, 2025
    + more versions
    Cite
    (2025). Ballroom Python South Export Import Data | Eximpedia [Dataset]. https://www.eximpedia.app/companies/ballroom-python-south/34498842
    Explore at:
    Dataset updated
    Jan 8, 2025
    Description

    Ballroom Python South Export Import Data. Follow the Eximpedia platform for HS code, importer-exporter records, and customs shipment details.

  11. Acerola Extract USA Import Data, US Acerola Extract Importers / Buyers List

    • seair.co.in
    Updated Mar 7, 2024
    Cite
    Seair Exim Solutions (2024). Acerola Extract USA Import Data, US Acerola Extract Importers / Buyers List [Dataset]. https://www.seair.co.in/us-import/product-acerola-extract/i-python-logistics-llc-prod/e-delphi-fretes-internacionais-ltda.aspx
    Explore at:
    Available download formats: .text, .csv, .xml, .xls, .bin
    Dataset updated
    Mar 7, 2024
    Dataset authored and provided by
    Seair Exim Solutions
    Area covered
    United States
    Description

    View details of Acerola Extract import data and shipment reports in the US, with product description, price, date, quantity, major US ports, countries, a US buyers/importers list, and an overseas suppliers/exporters list.

  12. Open Context Database SQL Dump

    • zenodo.org
    • data-staging.niaid.nih.gov
    • +2 more
    zip
    Updated Jan 23, 2025
    + more versions
    Cite
    Eric Kansa; Eric Kansa; Sarah Whitcher Kansa; Sarah Whitcher Kansa (2025). Open Context Database SQL Dump [Dataset]. http://doi.org/10.5281/zenodo.14728229
    Explore at:
    Available download formats: zip
    Dataset updated
    Jan 23, 2025
    Dataset provided by
    Open Context
    Authors
    Eric Kansa; Eric Kansa; Sarah Whitcher Kansa; Sarah Whitcher Kansa
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Open Context (https://opencontext.org) publishes free and open access research data for archaeology and related disciplines. An open source (but bespoke) Django (Python) application supports these data publishing services. The software repository is here: https://github.com/ekansa/open-context-py

    The Open Context team runs ETL (extract, transform, load) workflows to import data contributed by researchers from various source relational databases and spreadsheets. Open Context uses a PostgreSQL (https://www.postgresql.org) relational database to manage these imported data in a graph-style schema. The Open Context Python application interacts with the PostgreSQL database via the Django Object-Relational-Model (ORM).

    This database dump includes all published structured data organized and used by Open Context (table names that start with 'oc_all_'). The binary media files referenced by these structured data records are stored elsewhere. Binary media files for some projects, still in preparation, are not yet archived with long-term digital repositories.
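
    As a rough illustration only (not part of the archive), once the dump has been restored into a local PostgreSQL instance, the 'oc_all_' tables can be listed from Python with psycopg2; the connection parameters below are placeholders, not settings documented by Open Context.

    import psycopg2

    # Placeholder connection details; point these at wherever the dump was restored.
    conn = psycopg2.connect(dbname="opencontext", user="postgres", host="localhost")
    with conn.cursor() as cur:
      # Published structured data lives in tables whose names start with 'oc_all_'.
      cur.execute(
        "SELECT table_name FROM information_schema.tables "
        "WHERE table_name LIKE %s ORDER BY table_name",
        ("oc_all_%",),
      )
      for (table_name,) in cur.fetchall():
        print(table_name)
    conn.close()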

    These data comprehensively reflect the structured data currently published and publicly available on Open Context. Other data (such as user and group information) used to run the Website are not included.

    IMPORTANT

    This database dump contains data from roughly 190+ different projects. Each project dataset has its own metadata and citation expectations. If you use these data, you must cite each data contributor appropriately, not just this Zenodo archived database dump.

  13. Python-DPO-Large

    • huggingface.co
    Updated Mar 15, 2023
    + more versions
    Cite
    NextWealth Entrepreneurs Private Limited (2023). Python-DPO-Large [Dataset]. https://huggingface.co/datasets/NextWealth/Python-DPO-Large
    Explore at:
    Available download formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Mar 15, 2023
    Dataset authored and provided by
    NextWealth Entrepreneurs Private Limited
    Description

    Dataset Card for Python-DPO

    This dataset is the larger version of the Python-DPO dataset and was created using Argilla.

      Load with datasets
    

    To load this dataset with the datasets library, install it with pip install datasets --upgrade and then use the following code:

    from datasets import load_dataset
    ds = load_dataset("NextWealth/Python-DPO")

      Data Fields
    

    Each data instance contains:

    • instruction: The problem description/requirements
    • chosen_code: … See the full description on the dataset page: https://huggingface.co/datasets/NextWealth/Python-DPO-Large.

  14. Customers order for a Printing Company (2D Bin Packing and Scheduling)

    • data.mendeley.com
    Updated May 26, 2019
    + more versions
    Cite
    mahdi mostajabdaveh (2019). Customers order for a Printing Company (2D Bin Packing and Scheduling) [Dataset]. http://doi.org/10.17632/bxh46tps75.4
    Explore at:
    Dataset updated
    May 26, 2019
    Authors
    mahdi mostajabdaveh
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    These data belong to an actual printing company. Each record in the Excel file Raw Data/Big_Data represents an order from a customer. In the "ColorMode" column, 4+0 means the order is one-sided and 4+4 means it is two-sided. Files in the Instances folder correspond to the instances used for computational tests in the article. Each of these instances has two related files with the same characteristics: one with a gdx suffix and one without any file extension. These files are used to import data into the Python implementation. The code and relevant description can be found in the Read_input.py file.
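
    As a rough sketch (the exact file name, extension, and sheet layout are assumptions, so adjust the path to the actual Raw Data file), the orders can be loaded with pandas and the "ColorMode" values interpreted as described above:

    import pandas as pd

    # Path is an assumption based on the folder and file names mentioned above.
    orders = pd.read_excel("Raw Data/Big_Data.xlsx")

    # "4+0" marks one-sided orders and "4+4" marks two-sided orders.
    orders["two_sided"] = orders["ColorMode"].eq("4+4")
    print(orders["two_sided"].value_counts())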

  15. US Livestock Meat Imports

    • kaggle.com
    zip
    Updated Dec 8, 2023
    Cite
    Pushkar Ambastha (2023). US Livestock Meat Imports [Dataset]. https://www.kaggle.com/datasets/pushkar007/us-livestock-meat-imports
    Explore at:
    Available download formats: zip (2880894 bytes)
    Dataset updated
    Dec 8, 2023
    Authors
    Pushkar Ambastha
    License

    U.S. Government Works: https://www.usa.gov/government-works/

    Area covered
    United States
    Description

    Livestock and Meat International Trade Data

    The Livestock and Meat International Trade Data product includes monthly and annual data for imports of live cattle, hogs, sheep, goats, beef and veal, pork, lamb and mutton, chicken meat, turkey meat, eggs, and egg products. This product does not include any Dairy Data. Using official trade statistics reported by the U.S. Census, this data product provides data aggregated by commodity and converted to the same units used in the USDA’s World Agricultural Supply and Demand Estimates (WASDE). These units are carcass-weight-equivalent (CWE) pounds for meat products and dozen equivalents for eggs and egg products. Live animal numbers are not converted. With breakdowns by partner country and historical data back to 1989, these data can be used to analyze trends in livestock, meat, and poultry shipments alongside domestic production data and WASDE estimates. Timely analysis and discussion can be found in the monthly Livestock, Dairy, and Poultry Outlook report.

    This includes all of the same monthly data as the Excel tables, as well as disaggregated, unconverted data. These files are machine-readable, providing a convenient format for Python users and programmers.
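
    For example, any of the machine-readable files can be read directly with pandas; the file name below is a placeholder for whichever CSV is downloaded from the dataset page.

    import pandas as pd

    # Placeholder file name; substitute the CSV actually downloaded from Kaggle.
    imports = pd.read_csv("livestock_meat_imports.csv")
    print(imports.head())  # inspect the columns before aggregating by commodity or partner country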

    Details

    The Livestock and Meat Trade Data Set contains monthly and annual data for imports of live cattle, hogs, sheep, and goats, as well as beef and veal, pork, lamb and mutton, chicken meat, turkey meat, and eggs. The tables report physical quantities, not dollar values or unit prices. Data on beef and veal, pork, lamb, and mutton are on a carcass-weight-equivalent basis. Breakdowns by country are included.

  16. Hyperfine tensor and defect structure of T center

    • datasetcatalog.nlm.nih.gov
    Updated Apr 16, 2025
    Cite
    Xiong, Yihuang; Hautier, Geoffroy (2025). Hyperfine tensor and defect structure of T center [Dataset]. http://doi.org/10.5281/zenodo.15231707
    Explore at:
    Dataset updated
    Apr 16, 2025
    Authors
    Xiong, Yihuang; Hautier, Geoffroy
    Description

    About the dataset

    This record provides all data to reproduce the hyperfine coupling analysis of a T center defect in silicon. The results are reported in "Long-lived entanglement of a spin-qubit register in silicon photonics" (DOI: https://doi.org/10.48550/arXiv.2504.15467).

    Files included

    • hyperfine_terms.npz: NumPy archive containing three float arrays (Fermi_contact, Dipolar, Hyperfine) saved via numpy.savez.
    • POSCAR: VASP-format structure file specifying a 1002-atom silicon supercell with a T center defect.

    Computational methods

    • Electronic structure calculations were performed with VASP using the HSE06 screened hybrid functional.

    Usage & reuse

    1. Load hyperfine data in Python:

    import numpy as np
    data = np.load('hyperfine_terms.npz')
    Fc = data['Fermi_contact']
    Dip = data['Dipolar']
    Hf = data['Hyperfine']

    2. Load defect structures using pymatgen:

    from pymatgen.io.vasp import Poscar
    poscar = Poscar.from_file('POSCAR')
    structure = poscar.structure

  17. Hydroinformatics Instruction Module Example Code: Programmatic Data Access with USGS Data Retrieval

    • hydroshare.org
    • beta.hydroshare.org
    • +1 more
    zip
    Updated Mar 3, 2022
    Cite
    Amber Spackman Jones; Jeffery S. Horsburgh (2022). Hydroinformatics Instruction Module Example Code: Programmatic Data Access with USGS Data Retrieval [Dataset]. https://www.hydroshare.org/resource/a58b5d522d7f4ab08c15cd05f3fd2ad3
    Explore at:
    Available download formats: zip (34.5 KB)
    Dataset updated
    Mar 3, 2022
    Dataset provided by
    HydroShare
    Authors
    Amber Spackman Jones; Jeffery S. Horsburgh
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This resource contains Jupyter Notebooks with examples for accessing USGS NWIS data via web services and performing subsequent analysis related to drought, with a particular focus on sites in Utah and the southwestern United States (the notebooks could be modified for any USGS sites). The code uses the Python DataRetrieval package. The resource is part of a set of materials for hydroinformatics and water data science instruction. Complete learning module materials are found in HydroLearn: Jones, A.S., Horsburgh, J.S., Bastidas Pacheco, C.J. (2022). Hydroinformatics and Water Data Science. HydroLearn. https://edx.hydrolearn.org/courses/course-v1:USU+CEE6110+2022/about.
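
    A minimal sketch of the kind of request these notebooks build on, using the dataretrieval package's NWIS interface, is shown below; the site number and date range are arbitrary examples rather than values taken from the notebooks.

    import dataretrieval.nwis as nwis

    # Daily-values ("dv") streamflow for one example USGS site over one example year.
    df = nwis.get_record(sites="09380000", service="dv", start="2021-01-01", end="2021-12-31")
    print(df.head())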

    This resource consists of 6 example notebooks:
    1. Example 1: Import and plot daily flow data
    2. Example 2: Import and plot instantaneous flow data for multiple sites
    3. Example 3: Perform analyses with USGS annual statistics data
    4. Example 4: Retrieve data and find daily flow percentiles
    5. Example 5: Further examination of drought year flows
    6. Coding challenge: Assess drought severity

  18. Data Management Project for Collaborative Groundwater Research

    • search.dataone.org
    • hydroshare.org
    Updated Apr 26, 2025
    Cite
    Abbygael Johnson; Collins Stephenson; Brett Safely; Brooklyn Taylor (2025). Data Management Project for Collaborative Groundwater Research [Dataset]. https://search.dataone.org/view/sha256%3A1fa221662b0ba8fc29abb240dba6a668cd9388334e34eda28a18b4a84e2c51ab
    Explore at:
    Dataset updated
    Apr 26, 2025
    Dataset provided by
    Hydroshare
    Authors
    Abbygael Johnson; Collins Stephenson; Brett Safely; Brooklyn Taylor
    Description

    This project developed a comprehensive data management system designed to support collaborative groundwater research across institutions by establishing a centralized, structured database for hydrologic time series data. Built on the Observations Data Model (ODM), the system stores time series data and metadata in a relational SQLite database. Key project components included database construction, automation of data formatting and importation, development of analytical and visualization tools, and integration with ArcGIS for geospatial representation. The data import workflow standardizes and validates diverse .csv datasets by aligning them with ODM formatting. A Python-based module was created to facilitate data retrieval, analysis, visualization, and export, while an interactive map feature enables users to explore site-specific data availability. Additionally, a custom ArcGIS script was implemented to generate maps that incorporate stream networks, site locations, and watershed boundaries using DEMs from USGS sources. The system was tested using real-world datasets from groundwater wells and surface water gages across Utah, demonstrating its flexibility in handling diverse formats and parameters. The relational structure enabled efficient querying and visualization, and the developed tools promoted accessibility and alignment with FAIR principles.
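
    As an illustrative sketch only, a SQLite database following the Observations Data Model can be queried from Python with the standard sqlite3 module; the file name below is a placeholder, and the table and column names follow the conventional ODM layout rather than anything documented for this specific database.

    import sqlite3

    # Placeholder database file name.
    conn = sqlite3.connect("groundwater_odm.sqlite")
    cur = conn.cursor()

    # Conventional ODM-style join of observed values to their sites and variables.
    cur.execute("""
      SELECT s.SiteName, v.VariableName, d.LocalDateTime, d.DataValue
      FROM DataValues AS d
      JOIN Sites AS s ON s.SiteID = d.SiteID
      JOIN Variables AS v ON v.VariableID = d.VariableID
      LIMIT 5
    """)
    for row in cur.fetchall():
      print(row)
    conn.close()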

  19. Digitisation of Weather Records of Seungjeongwon Ilgi: A Historical Weather Dynamics Dataset of the Korean Peninsula (1623-1910)

    • zenodo.org
    bin, csv, json, txt
    Updated Sep 27, 2023
    + more versions
    Cite
    Zeyu Lyu; Zeyu Lyu; Kohei Ichikawa; Kohei Ichikawa; Yongchao Cheng; Yongchao Cheng; Hisashi Hayakawa; Hisashi Hayakawa; Yukiko Kawamoto; Yukiko Kawamoto (2023). Digitisation of Weather Records of Seungjeongwon Ilgi: A Historical Weather Dynamics Dataset of the Korean Peninsula (1623-1910) [Dataset]. http://doi.org/10.5281/zenodo.7453644
    Explore at:
    Available download formats: csv, json, bin, txt
    Dataset updated
    Sep 27, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Zeyu Lyu; Zeyu Lyu; Kohei Ichikawa; Kohei Ichikawa; Yongchao Cheng; Yongchao Cheng; Hisashi Hayakawa; Hisashi Hayakawa; Yukiko Kawamoto; Yukiko Kawamoto
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Korea
    Description

    Introduction

    This study has exploited the daily weather records of Seungjeongwon Ilgi from the NIKH database. Seungjeongwon Ilgi (http://sjw.history.go.kr/main.do) is a daily record of the Seungjeongwon, the Royal Secretariat of the Joseon Dynasty of Korea. These diaries span from 1623 to 1910 and generally include daily weather records in the entry header. The observational site would have been located in Seoul (N37°35′, E126°59′). We have extracted the weather records from the NIKH database and classified the daily weather using a text-mining method. We have also converted the report dates from the traditional lunisolar calendar to the Gregorian calendar to better contextualise our data against contemporary daily measurements.

    Data

    We provide different formats (csv, xlsx, json) to facilitate the usage of the data. The main contents of the data are listed below.

    • ID: The unique identifier of a specific record in the metadata, which can also serve as the identifier to merge with external data in the NIKH digital database.
    • Traditional calendar: The original lunar dates in the NIKH digital database, which are listed in data format "YYYY-MM-DD". More specifically, "L0" implies the leap year and "L1" implies the common year.
    • Leap: The identifier of a leap year.
    • Gregorian calendar: The Gregorian calendar date converted from the traditional calendar date.
    • Weather Text: The text that describes the weather conditions. Specifically, multiple weather descriptions for the same day have been put together.
    • Flag: The computed value that indicates different combinations of weather conditions.
    • Volume: The volume of text in the original record.
    • Herbal Volume: The volume of text in the herbal record.
    • Sunny: A dummy variable that represents whether the weather description contains the expression of sunny.
    • Cloudy: A dummy variable that represents whether the weather description contains the expression of cloudy.
    • Rainy: A dummy variable that represents whether the weather description contains the expression of rainy.
    • Snow: A dummy variable that represents whether the weather description contains the expression of snow.
    • Wind: A dummy variable that represents whether the weather description contains the expression of wind.

    Import Data

    # Python
    # CSV file
    import pandas as pd
    data = pd.read_csv('~/SJWilgi_Seoul_Weather_YR1623_1910.csv', encoding="utf-8")
    # JSON file
    data = pd.read_json('~/SJWilgi_Seoul_Weather_YR1623_1910.json', encoding="utf-8")
    # Excel file
    data = pd.read_excel('~/SJWilgi_Seoul_Weather_YR1623_1910.xlsx')
    # R
    # CSV file
    library(readr)
    data <- read_csv("~/SJWilgi_Seoul_Weather_YR1623_1910.csv")
    # Excel file
    library(readxl)
    data <- read_excel("~/SJWilgi_Seoul_Weather_YR1623_1910.xlsx")

  20. Eximpedia Export Import Trade

    • eximpedia.app
    Updated Jan 12, 2025
    + more versions
    Cite
    Seair Exim (2025). Eximpedia Export Import Trade [Dataset]. https://www.eximpedia.app/
    Explore at:
    Available download formats: .bin, .xml, .csv, .xls
    Dataset updated
    Jan 12, 2025
    Dataset provided by
    Eximpedia PTE LTD
    Eximpedia Export Import Trade Data
    Authors
    Seair Exim
    Area covered
    Moldova (Republic of), Mozambique, Haiti, Austria, Curaçao, Switzerland, Comoros, El Salvador, Gambia, Fiji
    Description

    Python Llc Export Import Data. Follow the Eximpedia platform for HS code, importer-exporter records, and customs shipment details.
