100+ datasets found
  1. Python Import Data India – Buyers & Importers List

    • seair.co.in
    Cite
    Seair Exim, Python Import Data India – Buyers & Importers List [Dataset]. https://www.seair.co.in
    Explore at:
    Available download formats: .bin, .xml, .csv, .xls
    Dataset provided by
    Seair Info Solutions PVT LTD
    Authors
    Seair Exim
    Area covered
    India
    Description

    Subscribers can find export and import data for 23 countries by HS code or product name. This demo is helpful for market analysis.

  2. Python International Export Import Data | Eximpedia

    • eximpedia.app
    Updated Nov 25, 2025
    + more versions
    Cite
    (2025). Python International Export Import Data | Eximpedia [Dataset]. https://www.eximpedia.app/companies/python-international/17665863
    Explore at:
    Dataset updated
    Nov 25, 2025
    Description

    Python International Export Import Data. Follow the Eximpedia platform for HS code, importer-exporter records, and customs shipment details.

  3. Code to import PSCAD data into Python (Spyder)

    • ieee-dataport.org
    Updated Nov 20, 2025
    Cite
    Franz Guzman Llanos (2025). Code to import PSCAD data into Python (Spyder) [Dataset]. https://ieee-dataport.org/documents/code-import-pscad-data-python-spyder
    Explore at:
    Dataset updated
    Nov 20, 2025
    Authors
    Franz Guzman Llanos
    Description

    minimizes errors

  4. Python Import Data in February - Seair.co.in

    • seair.co.in
    Updated Feb 18, 2016
    Cite
    Seair Exim (2016). Python Import Data in February - Seair.co.in [Dataset]. https://www.seair.co.in
    Explore at:
    Available download formats: .bin, .xml, .csv, .xls
    Dataset updated
    Feb 18, 2016
    Dataset provided by
    Seair Info Solutions PVT LTD
    Authors
    Seair Exim
    Area covered
    Malaysia, Gibraltar, Austria, Nauru, Argentina, Slovakia, Tokelau, Timor-Leste, French Guiana, Korea (Democratic People's Republic of)
    Description

    Subscribers can find export and import data for 23 countries by HS code or product name. This demo is helpful for market analysis.

  5. Antonin Python Export Import Data | Eximpedia

    • eximpedia.app
    Updated Sep 2, 2025
    Cite
    (2025). Antonin Python Export Import Data | Eximpedia [Dataset]. https://www.eximpedia.app/companies/antonin-python/40213244
    Explore at:
    Dataset updated
    Sep 2, 2025
    Description

    Antonin Python Export Import Data. Follow the Eximpedia platform for HS code, importer-exporter records, and customs shipment details.

  6. Python Import Data in August - Seair.co.in

    • seair.co.in
    Updated Aug 20, 2016
    Cite
    Seair Exim (2016). Python Import Data in August - Seair.co.in [Dataset]. https://www.seair.co.in
    Explore at:
    Available download formats: .bin, .xml, .csv, .xls
    Dataset updated
    Aug 20, 2016
    Dataset provided by
    Seair Info Solutions PVT LTD
    Authors
    Seair Exim
    Area covered
    Christmas Island, Nepal, Belgium, South Africa, Virgin Islands (U.S.), Lebanon, Gambia, Ecuador, Saint Pierre and Miquelon, Falkland Islands (Malvinas)
    Description

    Subscribers can find export and import data for 23 countries by HS code or product name. This demo is helpful for market analysis.

  7. Ballroom Python South Export Import Data | Eximpedia

    • eximpedia.app
    Updated Jan 8, 2025
    + more versions
    Cite
    (2025). Ballroom Python South Export Import Data | Eximpedia [Dataset]. https://www.eximpedia.app/companies/ballroom-python-south/34498842
    Explore at:
    Dataset updated
    Jan 8, 2025
    Description

    Ballroom Python South Export Import Data. Follow the Eximpedia platform for HS code, importer-exporter records, and customs shipment details.

  8. Storage and Transit Time Data and Code

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jun 12, 2024
    + more versions
    Cite
    Andrew Felton (2024). Storage and Transit Time Data and Code [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_8136816
    Explore at:
    Dataset updated
    Jun 12, 2024
    Dataset provided by
    Montana State University
    Authors
    Andrew Felton
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Author: Andrew J. Felton
    Date: 5/5/2024

    This R project contains the primary code and data (following pre-processing in Python) used for data production, manipulation, visualization, analysis, and figure production for the study entitled:

    "Global estimates of the storage and transit time of water through vegetation"

    Please note that 'turnover' and 'transit' are used interchangeably in this project.

    Data information:

    The data folder contains key data sets used for analysis. In particular:

    "data/turnover_from_python/updated/annual/multi_year_average/average_annual_turnover.nc" contains a global array summarizing five year (2016-2020) averages of annual transit, storage, canopy transpiration, and number of months of data. This is the core dataset for the analysis; however, each folder has much more data, including a dataset for each year of the analysis. Data are also available is separate .csv files for each land cover type. Oterh data can be found for the minimum, monthly, and seasonal transit time found in their respective folders. These data were produced using the python code found in the "supporting_code" folder given the ease of working with .nc and EASE grid in the xarray python module. R was used primarily for data visualization purposes. The remaining files in the "data" and "data/supporting_data"" folder primarily contain ground-based estimates of storage and transit found in public databases or through a literature search, but have been extensively processed and filtered here.

    Code information

    Python scripts can be found in the "supporting_code" folder.

    Each R script in this project has a particular function:

    01_start.R: This script loads the R packages used in the analysis, sets the directory, and imports custom functions for the project. You can also load in the main transit time (turnover) datasets here using the source() function.

    02_functions.R: This script contains the custom function for this analysis, primarily to work with importing the seasonal transit data. Load this using the source() function in the 01_start.R script.

    03_generate_data.R: This script is not necessary to run and is primarily for documentation. The main role of this code was to import and wrangle the data needed to calculate ground-based estimates of aboveground water storage.

    04_annual_turnover_storage_import.R: This script imports the annual turnover and storage data for each land cover type. You load in these data from the 01_start.R script using the source() function.

    05_minimum_turnover_storage_import.R: This script imports the minimum turnover and storage data for each land cover type. Minimum is defined as the lowest monthly estimate. You load in these data from the 01_start.R script using the source() function.

    06_figures_tables.R: This is the main workhorse for figure/table production and supporting analyses. This script generates the key figures and summary statistics used in the study, which are then saved in the manuscript_figures folder. Note that all maps were produced using Python code found in the "supporting_code" folder.

  9. Acerola Extract Import Data | Python Logistics Llc Prod

    • seair.co.in
    Updated Mar 7, 2024
    Cite
    Seair Exim (2024). Acerola Extract Import Data | Python Logistics Llc Prod [Dataset]. https://www.seair.co.in
    Explore at:
    Available download formats: .bin, .xml, .csv, .xls
    Dataset updated
    Mar 7, 2024
    Dataset provided by
    Seair Info Solutions PVT LTD
    Authors
    Seair Exim
    Area covered
    United States
    Description

    Subscribers can find export and import data for 23 countries by HS code or product name. This demo is helpful for market analysis.

  10. converted json to CSV Traffy Fondue data

    • kaggle.com
    zip
    Updated Jan 15, 2025
    Cite
    Hansen (2025). converted json to CSV Traffy Fondue data [Dataset]. https://www.kaggle.com/datasets/motethansen/converted-json-to-csv-traffy-fondue-data
    Explore at:
    Available download formats: zip (31705770 bytes)
    Dataset updated
    Jan 15, 2025
    Authors
    Hansen
    License

    GNU General Public License 3.0: https://www.gnu.org/licenses/gpl-3.0.html

    Description

    Traffy Fondue Data

    Data pulled from Traffy Fondue by accessing the Traffy Fondue Open API, covering January 2022 through January 2025.

    The following code pulled the data:

    
    import os
    import json
    import requests
    from datetime import datetime, timedelta
    import time
    
    class TraffyDataFetcher:
      def __init__(self, start_date, subfolder='traffyfonduedata'):
        self.url = "https://publicapi.traffy.in.th/share/teamchadchart/search"
        self.query = {'offset': '0'}
        self.payload = {}
        self.headers = {}
        self.start_date = datetime.strptime(start_date, '%Y-%m-%d')
        self.end_date = datetime.now()
        self.subfolder = subfolder
        self.max_requests_per_minute = 99
    
        if not os.path.exists(self.subfolder):
          os.makedirs(self.subfolder)
    
      def add_days_to_date(self, start_date_str, days_to_add):
        start_date = datetime.strptime(start_date_str, '%Y-%m-%d')
        new_date = start_date + timedelta(days=days_to_add)
        return new_date.strftime('%Y-%m-%d')
    
      def fetch_data(self):
        current_date = self.start_date
        index = 0
    
        while current_date <= self.end_date:
          start_time = datetime.now()
    
          self.query['start'] = current_date.strftime('%Y-%m-%d')
          new_date = self.add_days_to_date(self.query['start'], 10)
          self.query['end'] = new_date
          response = requests.request("GET", self.url, headers=self.headers, data=self.payload, params=self.query)
          print(f"offset: {index} response: {response.status_code}")
    
          filename = f"traffy_{current_date.strftime('%Y-%m-%d')}.json"
          file_path = os.path.join(self.subfolder, filename)
    
          with open(file_path, "w") as outfile:
            json_object = json.dumps(response.json(), indent=4)
            outfile.write(json_object)
    
          end_time = datetime.now()
          elapsed_time = (end_time - start_time).total_seconds()
          print(f"Elapsed time: {elapsed_time} s")
    
          index += 950
          current_date = datetime.strptime(new_date, '%Y-%m-%d') + timedelta(days=1)
    
          if index % self.max_requests_per_minute == 0:
            time.sleep(max(0, 60 - elapsed_time))  # avoid a negative sleep if the request took over a minute
    
    if _name_ == "_main_":
      fetcher = TraffyDataFetcher(start_date='2022-01-01')
      fetcher.fetch_data()
    

    --

    And the following code converted the JSON files to CSV:

    import os
    import glob
    import json
    import pandas as pd
    #import numpy as np
    
    class TraffyJSONFixer:
      def __init__(self, path_to_json='*.json', subfolder='traffyfonduedata'):
        self.path_to_json = path_to_json
        self.subfolder = subfolder
        self.outputfolder = 'fixedjson'
        self.excelfolder = 'exceloutput'
        self.file_path = os.path.join(self.subfolder, self.path_to_json)
        self.json_files = glob.glob(self.file_path)
        
        # Ensure the subfolder exists
        if not os.path.exists(self.subfolder):
          os.makedirs(self.subfolder)
        # Ensure the outputfolder exists
        if not os.path.exists(self.outputfolder):
          os.makedirs(self.outputfolder)
        # Ensure the excelfolder exists
        if not os.path.exists(self.excelfolder):
          os.makedirs(self.excelfolder)
        
        # Debugging: Print the current working directory and the list of JSON files
        print(f"Current working directory: {os.getcwd()}")
        print(f"Found JSON files: {self.json_files}")
        
      def fix_json_files(self):
        for count, ele in enumerate(self.json_files):
          new_file_name = os.path.join(self.outputfolder, f"data_{os.path.basename(ele)}")
          
          try:
            with open(ele, 'r', encoding='utf-8') as f:
              data = json.load(f)
    
            # Debugging: Print the type of data
            print(f"Processing file: {ele}")
            print(f"Type of data: {type(data)}")
            
            # Handle different JSON structures
            if isinstance(data, dict) and "results" in data:
              results = data["results"]
            elif isinstance(data, list):
              results = data
            else:
              print(f"Unexpected JSON structure in file: {ele}")
              continue
    
            # Ensure results is a list or dict before writing
            if isinstance(results, (list, dict)):
              with open(new_file_name, 'w', encoding='utf-8') as f:
                f.write(json.dumps(results, indent=4))
            else:
              print(f"Unexpected type for results in file: {ele}")
          except (json.JSONDecodeError, KeyError) as e:
            print(f"Error processing file {ele}: {e}")
    
      def jsontoexcel(self):
        jsonfile_path = os.path.join(self.out...
    
  11. Acerola Extract USA Import Data, US Acerola Extract Importers / Buyers List

    • seair.co.in
    Updated Mar 7, 2024
    Cite
    Seair Exim Solutions (2024). Acerola Extract USA Import Data, US Acerola Extract Importers / Buyers List [Dataset]. https://www.seair.co.in/us-import/product-acerola-extract/i-python-logistics-llc-prod/e-delphi-fretes-internacionais-ltda.aspx
    Explore at:
    Available download formats: .text, .csv, .xml, .xls, .bin
    Dataset updated
    Mar 7, 2024
    Dataset authored and provided by
    Seair Exim Solutions
    Area covered
    United States
    Description

    View details of Acerola Extract import data and shipment reports in the US, with product description, price, date, quantity, major US ports, countries, and lists of US buyers/importers and overseas suppliers/exporters.

  12. Eximpedia Export Import Trade

    • eximpedia.app
    Updated Jan 12, 2025
    + more versions
    Cite
    Seair Exim (2025). Eximpedia Export Import Trade [Dataset]. https://www.eximpedia.app/
    Explore at:
    Available download formats: .bin, .xml, .csv, .xls
    Dataset updated
    Jan 12, 2025
    Dataset provided by
    Eximpedia PTE LTD
    Eximpedia Export Import Trade Data
    Authors
    Seair Exim
    Area covered
    Austria, Fiji, Curaçao, Mozambique, Switzerland, Haiti, Moldova (Republic of), Gambia, Comoros, El Salvador
    Description

    Python Llc Export Import Data. Follow the Eximpedia platform for HS code, importer-exporter records, and customs shipment details.

  13. Smartwatch Purchase Data

    • kaggle.com
    zip
    Updated Dec 30, 2022
    Cite
    Aayush Chourasiya (2022). Smartwatch Purchase Data [Dataset]. https://www.kaggle.com/datasets/albedo0/smartwatch-purchase-data/discussion
    Explore at:
    Available download formats: zip (2230268 bytes)
    Dataset updated
    Dec 30, 2022
    Authors
    Aayush Chourasiya
    Description

    Disclaimer: This is artificially generated data, produced by a Python script based on the arbitrary assumptions listed below.

    The data consists of 100,000 examples of training data and 10,000 examples of test data, each representing a user who may or may not buy a smart watch.

    ----- Version 1 -------

    trainingDataV1.csv, testDataV1.csv (or trainingData.csv, testData.csv)

    The data includes the following features for each user:
    1. age: The age of the user (integer, 18-70)
    2. income: The income of the user (integer, 25,000-200,000)
    3. gender: The gender of the user (string, "male" or "female")
    4. maritalStatus: The marital status of the user (string, "single", "married", or "divorced")
    5. hour: The hour of the day (integer, 0-23)
    6. weekend: A boolean indicating whether it is the weekend (True or False)

    The data also includes a label for each user indicating whether they are likely to buy a smart watch or not (string, "yes" or "no"). The label is determined based on the following arbitrary conditions:
    - If the user is divorced and a random number generated by the script is less than 0.4, the label is "no" (i.e., assuming 40% of divorcees are not likely to buy a smart watch).
    - If it is the weekend and a random number generated by the script is less than 1.3, the label is "yes" (i.e., assuming sales are 30% more likely to occur on weekends).
    - If the user is male and under 30 with an income over 75,000, the label is "yes".
    - If the user is female and 30 or over with an income over 100,000, the label is "yes".
    - Otherwise, the label is "no".

    The training data is intended to be used to build and train a classification model, and the test data is intended to be used to evaluate the performance of the trained model.

    The following Python script was used to generate this dataset:

    import random
    import csv
    
    # Set the number of examples to generate
    numExamples = 100000
    
    # Generate the training data
    with open("trainingData.csv", "w", newline="") as csvfile:
      fieldnames = ["age", "income", "gender", "maritalStatus", "hour", "weekend", "buySmartWatch"]
      writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
    
      writer.writeheader()
    
      for i in range(numExamples):
        age = random.randint(18, 70)
        income = random.randint(25000, 200000)
        gender = random.choice(["male", "female"])
        maritalStatus = random.choice(["single", "married", "divorced"])
        hour = random.randint(0, 23)
        weekend = random.choice([True, False])
    
        # Randomly assign the label based on some arbitrary conditions
        # assuming 40% of divorcees won't buy a smart watch
        if maritalStatus == "divorced" and random.random() < 0.4:
          buySmartWatch = "no"
        # assuming sales are 30% more likely to occur on weekends.
        elif weekend == True and random.random() < 1.3:
          buySmartWatch = "yes"
        elif gender == "male" and age < 30 and income > 75000:
          buySmartWatch = "yes"
        elif gender == "female" and age >= 30 and income > 100000:
          buySmartWatch = "yes"
        else:
          buySmartWatch = "no"
    
        writer.writerow({
          "age": age,
          "income": income,
          "gender": gender,
          "maritalStatus": maritalStatus,
          "hour": hour,
          "weekend": weekend,
          "buySmartWatch": buySmartWatch
        })
    

    ----- Version 2 -------

    trainingDataV2.csv, testDataV2.csv

    The data includes the following features for each user:
    1. age: The age of the user (integer, 18-70)
    2. income: The income of the user (integer, 25,000-200,000)
    3. gender: The gender of the user (string, "male" or "female")
    4. maritalStatus: The marital status of the user (string, "single", "married", or "divorced")
    5. educationLevel: The education level of the user (string, "high school", "associate's degree", "bachelor's degree", "master's degree", or "doctorate")
    6. occupation: The occupation of the user (string, "tech worker", "manager", "executive", "sales", "customer service", "creative", "manual labor", "healthcare", "education", "government", "unemployed", or "student")
    7. familySize: The number of people in the user's family (integer, 1-5)
    8. fitnessInterest: A boolean indicating whether the user is interested in fitness (True or False)
    9. priorSmartwatchOwnership: A boolean indicating whether the user has owned a smartwatch in the past (True or False)
    10. hour: The hour of the day when the user was surveyed (integer, 0-23)
    11. weekend: A boolean indicating whether the user was surveyed on a weekend (True or False)
    12. buySmartWatch: A boolean indicating whether the user purchased a smartwatch (True or False)

    Python script used to generate the data:

    import random
    import csv
    
    # Set the number of examples to generate
    numExamples = 100000
    
    with open("t...
    
  14. Open Context Database SQL Dump

    • zenodo.org
    • data-staging.niaid.nih.gov
    • +2more
    zip
    Updated Jan 23, 2025
    + more versions
    Cite
    Eric Kansa; Eric Kansa; Sarah Whitcher Kansa; Sarah Whitcher Kansa (2025). Open Context Database SQL Dump [Dataset]. http://doi.org/10.5281/zenodo.14728229
    Explore at:
    Available download formats: zip
    Dataset updated
    Jan 23, 2025
    Dataset provided by
    Open Context
    Authors
    Eric Kansa; Eric Kansa; Sarah Whitcher Kansa; Sarah Whitcher Kansa
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Open Context (https://opencontext.org) publishes free and open access research data for archaeology and related disciplines. An open source (but bespoke) Django (Python) application supports these data publishing services. The software repository is here: https://github.com/ekansa/open-context-py

    The Open Context team runs ETL (extract, transform, load) workflows to import data contributed by researchers from various source relational databases and spreadsheets. Open Context uses a PostgreSQL (https://www.postgresql.org) relational database to manage these imported data in a graph-style schema. The Open Context Python application interacts with the PostgreSQL database via the Django Object-Relational-Model (ORM).

    This database dump includes all published structured data organized and used by Open Context (table names that start with 'oc_all_'). The binary media files referenced by these structured data records are stored elsewhere. Binary media files for some projects, still in preparation, are not yet archived with long-term digital repositories.

    These data comprehensively reflect the structured data currently published and publicly available on Open Context. Other data (such as user and group information) used to run the Website are not included.
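
    As an illustrative sketch only (assuming the dump has been restored into a local PostgreSQL database named opencontext and that the psycopg2 package is installed; the connection details are placeholders), the 'oc_all_' tables can be listed from Python like this:

    import psycopg2

    # Connect to a locally restored copy of the dump; dbname/user/host are assumptions
    conn = psycopg2.connect(dbname="opencontext", user="postgres", host="localhost")
    cur = conn.cursor()

    # The published structured-data tables all start with 'oc_all_'
    cur.execute(
      "SELECT table_name FROM information_schema.tables "
      "WHERE table_schema = 'public' AND table_name LIKE 'oc_all_%' "
      "ORDER BY table_name"
    )
    for (table_name,) in cur.fetchall():
      print(table_name)

    cur.close()
    conn.close()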

    IMPORTANT

    This database dump contains data from roughly 190+ different projects. Each project dataset has its own metadata and citation expectations. If you use these data, you must cite each data contributor appropriately, not just this Zenodo archived database dump.

  15. Hyperfine tensor and defect structure of T center

    • datasetcatalog.nlm.nih.gov
    Updated Apr 16, 2025
    Cite
    Xiong, Yihuang; Hautier, Geoffroy (2025). Hyperfine tensor and defect structure of T center [Dataset]. http://doi.org/10.5281/zenodo.15231707
    Explore at:
    Dataset updated
    Apr 16, 2025
    Authors
    Xiong, Yihuang; Hautier, Geoffroy
    Description

    About the dataset

    This record provides all data to reproduce the hyperfine coupling analysis of a T center defect in silicon. The results are reported in "Long-lived entanglement of a spin-qubit register in silicon photonics". DOI: https://doi.org/10.48550/arXiv.2504.15467

    Files included
    - hyperfine_terms.npz: NumPy archive containing three float arrays (Fermi_contact, Dipolar, Hyperfine) saved via numpy.savez.
    - POSCAR: VASP format structure file specifying a 1002-atom silicon supercell with a T center defect.

    Computational methods
    - Electronic structure calculations were performed with VASP using the HSE06 screened hybrid functional.

    Usage & reuse
    1. Load hyperfine data in Python:

      import numpy as np
      data = np.load('hyperfine_terms.npz')
      Fc = data['Fermi_contact']
      Dip = data['Dipolar']
      Hf = data['Hyperfine']

    2. Load defect structures using pymatgen:

      from pymatgen.io.vasp import Poscar
      poscar = Poscar.from_file('POSCAR')
      structure = poscar.structure

  16. Customers order for a Printing Company (2D Bin Packing and Scheduling)

    • data.mendeley.com
    Updated Dec 30, 2021
    + more versions
    Cite
    mahdi mostajabdaveh (2021). Customers order for a Printing Company (2D Bin Packing and Scheduling) [Dataset]. http://doi.org/10.17632/bxh46tps75.5
    Explore at:
    Dataset updated
    Dec 30, 2021
    Authors
    mahdi mostajabdaveh
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    These data belong to an actual printing company. Each record in the Excel file Raw Data/Big_Data represents an order from a customer. In the column "ColorMode", 4+0 means the order is one-sided and 4+4 means it is two-sided. Files in the Instances folder correspond to the instances used for computational tests in the article. Each of these instances has two related files with the same characteristics: one with a gdx suffix and one without any file extension.

    Files with the gdx suffix can be read by GAMS.

    Files without a suffix are imported by the pickle package in Python as objects of class Input (defined in "Input.py"). You can read the files using the pickle package and Input.py; a minimal loading sketch follows below. More information on the pickle package is at docs.python.org/3/library/pickle.

    These files are used to import data into the Python implementation. The code and a relevant description can be found in the Read_input.py file.
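
    A minimal sketch of loading one instance with pickle, assuming Input.py from this dataset is importable (e.g., placed in the working directory) and using a hypothetical instance filename:

    import pickle

    # Input.py must be importable so that pickle can reconstruct objects of class Input
    import Input  # noqa: F401

    # "instance_01" is a placeholder; use any instance file that has no file extension
    with open("Instances/instance_01", "rb") as f:
      instance = pickle.load(f)

    print(type(instance))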

  17. Python-DPO-Large

    • huggingface.co
    Updated Mar 15, 2023
    + more versions
    Cite
    NextWealth Entrepreneurs Private Limited (2023). Python-DPO-Large [Dataset]. https://huggingface.co/datasets/NextWealth/Python-DPO-Large
    Explore at:
    Available download formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Mar 15, 2023
    Dataset authored and provided by
    NextWealth Entrepreneurs Private Limited
    Description

    Dataset Card for Python-DPO

    This dataset is the larger version of the Python-DPO dataset and has been created using Argilla.

      Load with datasets
    

    To load this dataset with the datasets library, install it with pip install datasets --upgrade and then use the following code:

    from datasets import load_dataset

    ds = load_dataset("NextWealth/Python-DPO")

      Data Fields
    

    Each data instance contains:

    instruction: The problem description/requirements
    chosen_code: … See the full description on the dataset page: https://huggingface.co/datasets/NextWealth/Python-DPO-Large.

  18. Hydroinformatics Instruction Module Example Code: Programmatic Data Access with USGS Data Retrieval

    • hydroshare.org
    • beta.hydroshare.org
    • +1more
    zip
    Updated Mar 3, 2022
    Cite
    Amber Spackman Jones; Jeffery S. Horsburgh (2022). Hydroinformatics Instruction Module Example Code: Programmatic Data Access with USGS Data Retrieval [Dataset]. https://www.hydroshare.org/resource/a58b5d522d7f4ab08c15cd05f3fd2ad3
    Explore at:
    Available download formats: zip (34.5 KB)
    Dataset updated
    Mar 3, 2022
    Dataset provided by
    HydroShare
    Authors
    Amber Spackman Jones; Jeffery S. Horsburgh
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This resource contains Jupyter Notebooks with examples for accessing USGS NWIS data via web services and performing subsequent analysis related to drought, with particular focus on sites in Utah and the southwestern United States (the code could be modified for any USGS sites). The code uses the Python DataRetrieval package. The resource is part of a set of materials for hydroinformatics and water data science instruction. Complete learning module materials are found in HydroLearn: Jones, A.S., Horsburgh, J.S., Bastidas Pacheco, C.J. (2022). Hydroinformatics and Water Data Science. HydroLearn. https://edx.hydrolearn.org/courses/course-v1:USU+CEE6110+2022/about.

    This resource consists of 6 example notebooks:
    1. Example 1: Import and plot daily flow data
    2. Example 2: Import and plot instantaneous flow data for multiple sites
    3. Example 3: Perform analyses with USGS annual statistics data
    4. Example 4: Retrieve data and find daily flow percentiles
    5. Example 5: Further examination of drought year flows
    6. Coding challenge: Assess drought severity
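
    For illustration, a minimal sketch of retrieving daily flow data with the DataRetrieval package, in the spirit of Example 1 (assuming the dataretrieval package is installed; the site number and date range are placeholders, not values taken from the notebooks):

    from dataretrieval import nwis

    # Retrieve daily mean values for a placeholder USGS site; parameter 00060 is discharge
    df, metadata = nwis.get_dv(
      sites="09380000",      # hypothetical site number
      parameterCd="00060",   # discharge, cubic feet per second
      start="2021-01-01",
      end="2021-12-31",
    )
    print(df.head())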

  19. Digitisation of Weather Records of Seungjeongwon Ilgi: A Historical Weather Dynamics Dataset of the Korean Peninsula (1623-1910)

    • zenodo.org
    bin, csv, json, txt
    Updated Sep 27, 2023
    + more versions
    Cite
    Zeyu Lyu; Zeyu Lyu; Kohei Ichikawa; Kohei Ichikawa; Yongchao Cheng; Yongchao Cheng; Hisashi Hayakawa; Hisashi Hayakawa; Yukiko Kawamoto; Yukiko Kawamoto (2023). Digitisation of Weather Records of Seungjeongwon Ilgi: A Historical Weather Dynamics Dataset of the Korean Peninsula (1623-1910) [Dataset]. http://doi.org/10.5281/zenodo.7453644
    Explore at:
    Available download formats: csv, json, bin, txt
    Dataset updated
    Sep 27, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Zeyu Lyu; Zeyu Lyu; Kohei Ichikawa; Kohei Ichikawa; Yongchao Cheng; Yongchao Cheng; Hisashi Hayakawa; Hisashi Hayakawa; Yukiko Kawamoto; Yukiko Kawamoto
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Korea
    Description

    Introduction

    This study has exploited the daily weather records of Seungjeongwon Ilgi from the NIKH database. Seungjeongwon Ilgi (http://sjw.history.go.kr/main.do) is a daily record of the Seungjeongwon, the Royal Secretariat of the Joseon Dynasty of Korea. These diaries span from 1623 to 1910 and generally include daily weather records in the entry header. Their observational site would have been located in Seoul (N37°35′, E126°59′). We have extracted the weather records from the NIKH database and classified the daily weather using a text-mining method. We have also converted the report dates from the traditional lunisolar calendar to the Gregorian calendar, to better contextualise our data alongside contemporary daily measurements.

    Data

    We provide multiple formats (csv, xlsx, json) to facilitate the use of the data. The main contents of the data are listed below.

    • ID: The unique identifier of a specific record in the metadata, which can also serve as the identifier to merge with external data in the NIKH digital database.
    • Traditional calendar: The original lunar dates in the NIKH digital database, listed in the format "YYYY-MM-DD". More specifically, "L0" implies the leap year and "L1" implies the common year.
    • Leap: The identifier of a leap year.
    • Gregorian calendar: The Gregorian calendar date that converted by the traditional calendar date.
    • Weather Text: The text that describes the weather conditions. Specifically, multiple weather descriptions of the same day have been put together.
    • Flag: The computed value that indicates different combinations of weather conditions.
    • Volume: The volume of text in the original record.
    • Herbal Volume: The volume of text in the herbal record.
    • Sunny: A dummy variable that represents whether the weather description contains the expression of sunny.
    • Cloudy: A dummy variable that represents whether the weather description contains the expression of cloudy.
    • Rainy: A dummy variable that represents whether the weather description contains the expression of rainy.
    • Snow: A dummy variable that represents whether the weather description contains the expression of snow.
    • Wind: A dummy variable that represents whether the weather description contains the expression of wind.

    Import Data

    # Python
    # CSV file
    import pandas as pd
    data = pd.read_csv('~/SJWilgi_Seoul_Weather_YR1623_1910.csv', encoding="utf-8")
    # JSON file
    data = pd.read_json('~/SJWilgi_Seoul_Weather_YR1623_1910.json', encoding="utf-8")
    # Excel file
    data = pd.read_excel('~/SJWilgi_Seoul_Weather_YR1623_1910.xlsx')
    # R
    # CSV file
    library(readr)
    data<- read_csv("~/SJWilgi_Seoul_Weather_YR1623_1910.csv")
    # Excel file
    library(readxl)
    data <- read_excel("~/SJWilgi_Seoul_Weather_YR1623_1910.xlsx")

  20. Eximpedia Export Import Trade

    • eximpedia.app
    Updated Sep 8, 2025
    + more versions
    Cite
    Seair Exim (2025). Eximpedia Export Import Trade [Dataset]. https://www.eximpedia.app/
    Explore at:
    Available download formats: .bin, .xml, .csv, .xls
    Dataset updated
    Sep 8, 2025
    Dataset provided by
    Eximpedia PTE LTD
    Eximpedia Export Import Trade Data
    Authors
    Seair Exim
    Area covered
    Iceland, Montenegro, Trinidad and Tobago, Belgium, Nauru, Botswana, Paraguay, Macedonia (the former Yugoslav Republic of), Uganda, Maldives
    Description

    Python Logistics Llc Prod Export Import Data. Follow the Eximpedia platform for HS code, importer-exporter records, and customs shipment details.
