6 datasets found
  1. Flow map data of the singel pendulum, double pendulum and 3-body problem

    • data.niaid.nih.gov
    Updated Apr 23, 2024
    Cite
    Horn, Philipp; Veronica, Saz Ulibarrena; Koren, Barry; Simon, Portegies Zwart (2024). Flow map data of the singel pendulum, double pendulum and 3-body problem [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_11032351
    Dataset updated
    Apr 23, 2024
    Dataset provided by
    Leiden Observatory
    Eindhoven University of Technology
    Authors
    Horn, Philipp; Veronica, Saz Ulibarrena; Koren, Barry; Simon, Portegies Zwart
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset was constructed to compare the performance of various neural network architectures learning the flow maps of Hamiltonian systems. It was created for the paper: A Generalized Framework of Neural Networks for Hamiltonian Systems.

    The dataset consists of trajectory data from three different Hamiltonian systems: the single pendulum, the double pendulum and the 3-body problem. The data was generated using numerical integrators. For the single pendulum, the symplectic Euler method with a step size of 0.01 was used. The double-pendulum data was also computed with the symplectic Euler method, but with an adaptive step size. The trajectories of the 3-body problem were calculated with the arbitrarily high-precision code Brutus.
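
    To illustrate the kind of integrator involved (not the authors' exact data-generation code), a symplectic Euler loop for a single pendulum with step size 0.01 looks roughly like the sketch below; the Hamiltonian form and parameter values are assumptions.

    ```python
    import numpy as np

    def symplectic_euler_pendulum(q0, p0, n_steps, dt=0.01, g=9.81, l=1.0, m=1.0):
        """One possible symplectic Euler integration of the single pendulum with
        Hamiltonian H(q, p) = p**2 / (2*m*l**2) - m*g*l*cos(q)."""
        q, p = q0, p0
        trajectory = [(q, p)]
        for _ in range(n_steps):
            p = p - dt * m * g * l * np.sin(q)   # kick: p_{n+1} = p_n - dt * dH/dq(q_n)
            q = q + dt * p / (m * l**2)          # drift: q_{n+1} = q_n + dt * dH/dp(p_{n+1})
            trajectory.append((q, p))
        return np.array(trajectory)

    # e.g. 1000 steps of size 0.01, starting from an angle of 1 rad at rest
    traj = symplectic_euler_pendulum(q0=1.0, p0=0.0, n_steps=1000)
    ```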

    For each Hamiltonian system, there is one file containing the entire trajectory information (*_all_runs.h5.1). In these files, the states along all trajectories are recorded with a step size of 0.01. Each file is composed of several Pandas DataFrames: one DataFrame per trajectory, called "run0", "run1", ..., and one large DataFrame combining all trajectories, called "all_runs". Additionally, each file contains a Pandas Series called "constants", which lists several parameters of the data.

    There is also a second file per Hamiltonian system, in which the data is prepared as features and labels ready for training neural networks (*_training.h5.1). Like the first type of file, it contains a Series called "constants". The features and labels are separated into six DataFrames called "features", "labels", "val_features", "val_labels", "test_features" and "test_labels". The data is split into 80% training data, 10% validation data and 10% test data.
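
    A minimal sketch of how these files might be opened with pandas (the concrete file names below are assumptions based on the naming pattern above; the keys come from the description, and reading HDF5 files with pandas requires PyTables):

    ```python
    import pandas as pd

    # File names assumed for illustration; use the *_all_runs.h5.1 / *_training.h5.1
    # files you actually downloaded.
    all_runs = pd.read_hdf("single_pendulum_all_runs.h5.1", key="all_runs")    # all trajectories combined
    run0 = pd.read_hdf("single_pendulum_all_runs.h5.1", key="run0")            # one individual trajectory
    constants = pd.read_hdf("single_pendulum_all_runs.h5.1", key="constants")  # Series of data parameters

    features = pd.read_hdf("single_pendulum_training.h5.1", key="features")    # training features
    labels = pd.read_hdf("single_pendulum_training.h5.1", key="labels")        # training labels
    print(all_runs.shape, features.shape, labels.shape)
    ```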

    The code used to train various neural network architectures on this data can be found on GitHub at: https://github.com/AELITTEN/GHNN.

    Already trained neural networks can be found on GitHub at: https://github.com/AELITTEN/NeuralNets_GHNN.

                                  Single pendulum                   Double pendulum   3-body problem
    Number of trajectories        500                               2000              5000
    Final time in all_runs        T (one period of the pendulum)    10                10
    Final time in training data   0.25*T                            5                 5
    Step size in training data    0.1                               0.1               0.5

  2. Convert Text to Pandas

    • kaggle.com
    zip
    Updated Sep 22, 2024
    Cite
    Zeyad Usf (2024). Convert Text to Pandas [Dataset]. https://www.kaggle.com/datasets/zeyadusf/convert-text-to-pandas
    Available download formats: zip (4,333,134 bytes)
    Dataset updated
    Sep 22, 2024
    Authors
    Zeyad Usf
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    Kaggle notebook
    GitHub repo

    I found two datasets on Hugging Face for converting text with context into pandas code, but the challenge is in the context: it is formatted differently in the two datasets, which degrades the model's results. First, let's describe the data I found, then show examples, the solution and some other problems.

    • Rahima411/text-to-pandas:

      • The data is divided into Train with 57.5k examples and Test with 19.2k examples.

      • The data has two columns, as you can see in the example:

        • "Input": contains the context and the question together; the context gives the metadata about the data frame.
        • "Pandas Query": the Pandas code.

      ```txt
      Input                                                      | Pandas Query
      -----------------------------------------------------------|------------------------------------------
      Table Name: head (age (object), head_id (object))          | result = management['head.age'].unique()
      Table Name: management (head_id (object),                  |
      temporary_acting (object))                                 |
      What are the distinct ages of the heads who are acting?    |
      ```
    • hiltch/pandas-create-context:

      • It contains 17k rows with three columns:
        • question: the question text.
        • context: code that creates a data frame with the column names, unlike the first dataset, whose context gives the data frame name, column names and data types.
        • answer: the Pandas code.

      ```txt
      question                                | context                                                | answer
      ----------------------------------------|--------------------------------------------------------|---------------------------------------
      What was the lowest # of total votes?   | df = pd.DataFrame(columns=['_number_of_total_votes'])  | df['_number_of_total_votes'].min()
      ```
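
    Both source datasets can presumably be pulled straight from the Hugging Face Hub; a minimal sketch, assuming the repository ids above are still available (the split names are an assumption):

    ```python
    from datasets import load_dataset

    text_to_pandas = load_dataset("Rahima411/text-to-pandas")       # columns: Input, Pandas Query
    create_context = load_dataset("hiltch/pandas-create-context")   # columns: question, context, answer

    print(text_to_pandas)
    print(create_context["train"][0])
    ```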
    

    As you can see, the problem with these datasets is that their inputs are not alike: the structure of the context differs between them. My solution to this problem was:
    - Convert the first dataset so that its context matches the second. I chose this direction because it is difficult to recover the data type of each column in the second dataset. It was easy to convert the context from this shape `Table Name: head (age (object), head_id (object))` to this `head = pd.DataFrame(columns=['age','head_id'])` with the code I wrote below.
    - Then separate the question from the context. This was easy because, if you look at the data, the context always ends with ")", followed by a blank and then the question. You will find all of this in the code below.
    - You will also notice that a context can produce more than one DataFrame-creation line, and this is handled in the code as well.

    ```py
    import re


    def extract_table_creation(text: str) -> (str, str):
        """
        Extracts DataFrame creation statements and questions from the given text.

        Args:
          text (str): The input text containing table definitions and questions.

        Returns:
          tuple: A tuple containing a concatenated DataFrame creation string and a question.
        """
        # Define patterns
        table_pattern = r'Table Name: (\w+) \(([\w\s,()]+)\)'
        column_pattern = r'(\w+)\s*\((object|int64|float64)\)'

        # Find all table names and column definitions
        matches = re.findall(table_pattern, text)

        # Initialize a list to hold DataFrame creation statements
        df_creations = []

        for table_name, columns_str in matches:
            # Extract column names
            columns = re.findall(column_pattern, columns_str)
            column_names = [col[0] for col in columns]

            # Format DataFrame creation statement
            df_creation = f"{table_name} = pd.DataFrame(columns={column_names})"
            df_creations.append(df_creation)

        # Concatenate all DataFrame creation statements
        df_creation_concat = '\n'.join(df_creations)

        # Extract and clean the question
        question = text[text.rindex(')')+1:].strip()

        return df_creation_concat, question
    ```
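
    As a quick sanity check (an added illustration, not part of the original notebook), applying the function to the example input shown earlier gives:

    ```python
    sample = (
        "Table Name: head (age (object), head_id (object)) "
        "Table Name: management (head_id (object), temporary_acting (object)) "
        "What are the distinct ages of the heads who are acting?"
    )

    context, question = extract_table_creation(sample)
    print(context)
    # head = pd.DataFrame(columns=['age', 'head_id'])
    # management = pd.DataFrame(columns=['head_id', 'temporary_acting'])
    print(question)
    # What are the distinct ages of the heads who are acting?
    ```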
    
    After both datasets had the same structure, they were merged into one set and divided into _72.8K_ train and _18.6K_ test examples. We analyzed this dataset, and you can see it all in the **[`notebook`](https://www.kaggle.com/code/zeyadusf/text-2-pandas-t5#Exploratory-Data-Analysis(EDA))**, but we also found some problems in the dataset, such as:
    > - `Answer`: `df['Id'].count()` is repeated, but this is plausible, so we do not need to drop these rows.
    > - `Context`: it contains `147` rows that do not contain any text. We will see through the experiments whether this affects the results negatively or positively.
    > - `Question`: It is ...
    
  3. Llama 3.1 8B Correct Labels

    • kaggle.com
    zip
    Updated Aug 26, 2025
    Cite
    Jatin Mehra_666 (2025). Llama 3.1 8B Correct Labels [Dataset]. https://www.kaggle.com/datasets/jatinmehra666/llama-3-1-8b-correct-labels
    Available download formats: zip (11,853,454,078 bytes)
    Dataset updated
    Aug 26, 2025
    Authors
    Jatin Mehra_666
    Description

    Training code:

    ```Python
    import os
    import numpy as np
    import pandas as pd
    from sklearn.preprocessing import LabelEncoder
    from sklearn.model_selection import train_test_split

    os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3"
    TEMP_DIR = "tmp"
    os.makedirs(TEMP_DIR, exist_ok=True)

    train = pd.read_csv('input/map-charting-student-math-misunderstandings/train.csv')

    # Fill missing Misconception values with 'NA'
    train.Misconception = train.Misconception.fillna('NA')

    # Create a combined target label (Category:Misconception)
    train['target'] = train.Category + ":" + train.Misconception

    # Encode target labels to numerical format
    le = LabelEncoder()
    train['label'] = le.fit_transform(train['target'])
    n_classes = len(le.classes_)  # Number of unique target classes
    print(f"Train shape: {train.shape} with {n_classes} target classes")
    print("Train head:")
    train.head()

    # For each question, find the most frequent answer among rows whose Category starts with 'True'
    idx = train.apply(lambda row: row.Category.split('_')[0], axis=1) == 'True'
    correct = train.loc[idx].copy()
    correct['c'] = correct.groupby(['QuestionId', 'MC_Answer']).MC_Answer.transform('count')
    correct = correct.sort_values('c', ascending=False)
    correct = correct.drop_duplicates(['QuestionId'])
    correct = correct[['QuestionId', 'MC_Answer']]
    correct['is_correct'] = 1  # Mark these as correct answers

    # Merge 'is_correct' flag into the main training DataFrame
    train = train.merge(correct, on=['QuestionId', 'MC_Answer'], how='left')
    train.is_correct = train.is_correct.fillna(0)

    from transformers import AutoTokenizer, AutoModelForSequenceClassification
    import torch

    Model_Name = "unsloth/Meta-Llama-3.1-8B-Instruct"

    model = AutoModelForSequenceClassification.from_pretrained(
        Model_Name,
        num_labels=n_classes,
        torch_dtype=torch.bfloat16,
        device_map="balanced",
        cache_dir=TEMP_DIR,
    )
    tokenizer = AutoTokenizer.from_pretrained(Model_Name, cache_dir=TEMP_DIR)

    def format_input(row):
        x = "Yes"
        if not row['is_correct']:
            x = "No"
        return (
            f"Question: {row['QuestionText']} "
            f"Answer: {row['MC_Answer']} "
            f"Correct? {x} "
            f"Student Explanation: {row['StudentExplanation']}"
        )

    train['text'] = train.apply(format_input, axis=1)
    print("Example prompt for our LLM:")
    print()
    print(train.text.values[0])

    from datasets import Dataset

    # Split data into training and validation sets
    train_df, val_df = train_test_split(train, test_size=0.2, random_state=42)

    # Convert to Hugging Face Dataset
    COLS = ['text', 'label']

    # Create clean DataFrame with the full training data
    train_df_clean = train[COLS].copy()  # Use 'train' instead of 'train_df'

    # Ensure labels are proper integers
    train_df_clean['label'] = train_df_clean['label'].astype(np.int64)

    # Reset index to ensure clean DataFrame structure
    train_df_clean = train_df_clean.reset_index(drop=True)

    # Create dataset with the full training data
    train_ds = Dataset.from_pandas(train_df_clean, preserve_index=False)

    def tokenize(batch):
        """Tokenizes a batch of text inputs."""
        return tokenizer(batch["text"], truncation=True, max_length=256)

    # Apply tokenization to the full dataset
    train_ds = train_ds.map(tokenize, batched=True, remove_columns=['text'])

    # Add a new padding token
    tokenizer.add_special_tokens({'pad_token': '[PAD]'})

    # Resize the model's token embeddings to match the new tokenizer
    model.resize_token_embeddings(len(tokenizer))

    # Set the pad token id in the model's config
    model.config.pad_token_id = tokenizer.pad_token_id

    # 2. Clear HF cache after loading
    from huggingface_hub import scan_cache_dir

    # Then clear cache to free ~16GB (delete_revisions expects revision hashes)
    cache_info = scan_cache_dir()
    cache_info.delete_revisions(
        *[rev.commit_hash for repo in cache_info.repos for rev in repo.revisions]
    ).execute()

    # --- Training Arguments ---
    from transformers import TrainingArguments, Trainer, DataCollatorWithPadding
    import tempfile
    import shutil

    # Ensure temp directories exist
    os.makedirs(f"{TEMP_DIR}/training_output/", exist_ok=True)
    os.makedirs(f"{TEMP_DIR}/logs/", exist_ok=True)

    # --- Training Arguments (FIXED) ---
    training_args = TrainingArguments(
        output_dir=f"{TEMP_DIR}/training_output/",
        do_train=True,
        do_eval=False,
        save_strategy="no",
        num_train_epochs=3,
        per_device_train_batch_size=16,
        learning_rate=5e-5,
        logging_dir=f"{TEMP_DIR}/logs/",
        logging_steps=500,
        bf16=True,
        fp16=False,
        report_to="none",
        warmup_ratio=0.1,
        lr_scheduler_type="cosine",
        dataloader_pin_memory=False,
        gradient_checkpointing=True,
    )

    # --- Custom Metric Computation (MAP@3) ---
    def compute_map3(eval_pred):
        """Computes Mean Average Precision at 3 (MAP@3) for evaluation."""
        logits, labels = eval_pred
        probs = torch.nn.functional.softmax(torch.tensor(logits), dim=-1).numpy()

        # Get top 3 predicted class indi...
    ```
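
    The listing above is cut off mid-function. For orientation only, a MAP@3 metric of this shape could be finished roughly as sketched below, together with a guess at how the `Trainer` might be wired up from the objects already defined; this is a generic illustration under those assumptions, not the author's truncated code.

    ```python
    import numpy as np

    def map_at_3(probs: np.ndarray, labels: np.ndarray) -> float:
        """MAP@3 for a single ground-truth label per sample: a hit at rank 1, 2 or 3
        contributes 1, 1/2 or 1/3 respectively."""
        top3 = np.argsort(-probs, axis=1)[:, :3]   # top 3 class indices per sample
        scores = np.zeros(len(labels), dtype=float)
        for rank in range(3):
            hit = (top3[:, rank] == labels) & (scores == 0)
            scores[hit] = 1.0 / (rank + 1)
        return float(scores.mean())

    # Hypothetical wiring of the Trainer with the objects defined above
    trainer = Trainer(
        model=model,
        args=training_args,
        train_dataset=train_ds,
        data_collator=DataCollatorWithPadding(tokenizer=tokenizer),
    )
    trainer.train()
    ```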
    
  4. Modified Swiss Dwellings

    • kaggle.com
    zip
    Updated Nov 7, 2023
    Cite
    Casper van Engelenburg (2023). Modified Swiss Dwellings [Dataset]. https://www.kaggle.com/datasets/caspervanengelenburg/modified-swiss-dwellings
    Available download formats: zip (4,996,692,802 bytes)
    Dataset updated
    Nov 7, 2023
    Authors
    Casper van Engelenburg
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    Modified Swiss Dwellings

    The Modified Swiss Dwellings (MSD) dataset is an ML-ready dataset for floor plan generation and analysis at building scale. It is derived entirely from the Swiss Dwellings database (v3.0.0) and contains 5,372 highly detailed floor plans of single- as well as multi-unit building complexes across Switzerland, hence extending the building scale relative to other well-known floor plan datasets such as RPLAN.


    Naming and dataset split

    The naming (IDs in the folders) is based on the original dataset.

    The dataset is split into train and test based on the buildings the floor plans originate from. There is, for obvious reasons, no overlap between building identities in the train and test sets; hence, all floor plans that originate from the same building are either all in the train set or all in the test set. We also included a cleaned, filtered, and modified Pandas dataframe with all geometries (such as rooms, walls, etc.) derived from the original dataset. The unique floor plan IDs in the dataframe are the same as the train and test sets combined. We included it to allow users to develop their own algorithms on top of it, such as image, structure, and graph extraction.

    Example use-case: Floor plan auto-completion

    The MSD dataset is developed with the goal of enabling the computer science community to develop (deep learning) models for tasks such as floor plan auto-completion. The floor plan auto-completion task takes as input the boundary of a building, the structural elements necessary for the building's structural integrity, and a set of user constraints formalized in a graph structure, with the goal of automatically generating the full floor plan. Specifically, the goal is to learn the correlation between the joint distribution of graph_in and struct_in and that of full_out. graph_out is provided for researchers who want to use or develop methods from graph signal processing or graph machine learning specifically. This was a challenge at the 1st Workshop on Computer Vision Aided Architectural Design (CVAAD), an official half-day workshop at ICCV 2023.


    Important links

  5. V2 Balloon Detection Dataset

    • kaggle.com
    zip
    Updated Jul 7, 2022
    Cite
    vbookshelf (2022). V2 Balloon Detection Dataset [Dataset]. https://www.kaggle.com/vbookshelf/v2-balloon-detection-dataset
    Available download formats: zip (49,788,043 bytes)
    Dataset updated
    Jul 7, 2022
    Authors
    vbookshelf
    Description

    Context

    I needed a simple image dataset that I could use when trying different object detection algorithms for the first time. It had to be something that could be quickly understood and easily loaded. I didn't want to spend a lot of time doing EDA or trying to remember how the data is structured. Moreover, I wanted to be able to clearly see when a model's prediction was correct or when it had made a mistake. When working with chest x-ray images, for example, it takes an expert to know whether a model's predictions are correct.

    I found the Balloons dataset and simplified it. The original data is split into train and test sets and it has two json files that need to be parsed. In this new version, I copied all images into a single folder and replaced the json files with one csv file that can be easily loaded with Pandas.

    Content

    The dataset consists of 74 jpg images and one csv file. Each image contains one or more balloons.

    The csv file has five columns:

    fname - The image file name.
    height - The image height.
    width - The image width.
    num_balloons - The number of balloons on the image.
    bbox - The coordinates of each bounding box on the image.
    

    The coordinates of each bbox are stored in a dictionary. The format is as follows:

    {"xmin": 100, "ymin": 100, "xmax": 300, "ymax": 300}
    
    Where xmin and ymin are the coordinates of the top left corner, and xmax and ymax are the coordinates of the bottom right corner.
    

    Each entry in the bbox column is a list of dictionaries. For example, if an image has two balloons and hence two bounding boxes, the entry will be as follows:

    [{"xmin": 100, "ymin": 100, "xmax": 300, "ymax": 300}, {"xmin": 100, "ymin": 100, "xmax": 300, "ymax": 300}]

    When loaded into a Pandas dataframe, all items in the bbox column are of type string. The strings can be converted to Python lists like this:

    import ast
    
    # convert each item in the bbox column from type str to type list
    df['bbox'] = df['bbox'].apply(ast.literal_eval)
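
    Building on the snippet above, the parsed boxes can be drawn back onto an image to verify the annotations; a minimal sketch, with the csv and image paths assumed for illustration:

    ```python
    import ast
    import pandas as pd
    from PIL import Image, ImageDraw

    df = pd.read_csv("balloon-data.csv")        # file name assumed; adjust to the dataset's csv
    df['bbox'] = df['bbox'].apply(ast.literal_eval)

    row = df.iloc[0]
    img = Image.open(row['fname'])              # assumes the images sit next to the csv
    draw = ImageDraw.Draw(img)
    for box in row['bbox']:
        # each box is {"xmin": ..., "ymin": ..., "xmax": ..., "ymax": ...}
        draw.rectangle([box['xmin'], box['ymin'], box['xmax'], box['ymax']],
                       outline="red", width=3)
    img.save("check_boxes.png")
    print(row['num_balloons'], "balloon(s) expected in", row['fname'])
    ```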
    
    

    Acknowledgements

    Many thanks to Waleed Abdulla who created this dataset.

    The original dataset can be downloaded and unzipped using this code:

    !wget https://github.com/matterport/Mask_RCNN/releases/download/v2.1/balloon_dataset.zip
    !unzip balloon_dataset.zip > /dev/null
    

    Inspiration

    Can you create an app that can look at an image and tell you:
    - how many balloons are in the image, and
    - what the colours of those balloons are?

    This is something that could help blind people. To help you get started, here's an example of a similar project.

    License

    In this blog post the dataset's creator mentions that the images were sourced from Flickr. All images have a "Commercial use & mods allowed" license.



    Header image by andremsantana on Pixabay.

  6. Arabic(Indian) digits MADBase

    • kaggle.com
    zip
    Updated Jul 26, 2023
    Cite
    HOSSAM_AHMED_SALAH (2023). Arabic(Indian) digits MADBase [Dataset]. https://www.kaggle.com/datasets/hossamahmedsalah/arabicindian-digits-madbase/code
    Available download formats: zip (15,373,598 bytes)
    Dataset updated
    Jul 26, 2023
    Authors
    HOSSAM_AHMED_SALAH
    License

    Database Contents License (DbCL) v1.0: http://opendatacommons.org/licenses/dbcl/1.0/

    Area covered
    India
    Description

    This dataset consists of flattened images, where each image is represented as a row.

    - Objective: Establish benchmark results for Arabic digit recognition using different classification techniques.
    - Objective: Compare performances of different classification techniques on Arabic and Latin digit recognition problems.
    - A valid comparison requires the Arabic and Latin digit databases to be in the same format.
    - A modified version of the ADBase (MADBase) with the same size and format as MNIST was created.
    - MADBase is derived from ADBase by size-normalizing each digit to a 20x20 box while preserving aspect ratio.
    - The size-normalization procedure results in gray levels due to the anti-aliasing filter.
    - MADBase and MNIST have the same size and format.
    - MNIST is a modified version of the NIST digits database.
    - MNIST is available for download.

    I used the following code to turn the 70k Arabic digit images into tabular data, for ease of use and to spend less time on preprocessing:

    ```python
    import os

    import numpy as np
    import pandas as pd
    from PIL import Image

    # Define the root directory of the dataset
    root_dir = "MAHD"

    # Define the names of the folders containing the images
    folder_names = ['Part{:02d}'.format(i) for i in range(1, 13)]

    # Define the names of the subfolders containing the training and testing images
    train_test_folders = ['MAHDBase_TrainingSet', 'test']

    # Initialize empty lists to store the image data and labels
    data = []
    labels = []

    # Loop over the training and testing subfolders in each Part folder
    for tt in train_test_folders:
        for folder_name in folder_names:
            # The test set only goes up to Part02
            if tt == train_test_folders[1] and folder_name == 'Part03':
                break
            subfolder_path = os.path.join(root_dir, tt, folder_name)
            print(subfolder_path)
            print(os.listdir(subfolder_path))
            for filename in os.listdir(subfolder_path):
                # Check the file format: only process .bmp images
                if os.path.splitext(filename)[1].lower() != '.bmp':
                    continue
                # Load the image
                img_path = os.path.join(subfolder_path, filename)
                img = Image.open(img_path)

                # Convert the image to grayscale and flatten it into a 1D array
                img_grey = img.convert('L')
                img_data = np.array(img_grey).flatten()

                # Extract the label from the filename and convert it to an integer
                label = int(filename.split('_')[2].replace('digit', '').split('.')[0])

                # Add the image data and label to the lists
                data.append(img_data)
                labels.append(label)

    # Convert the image data and labels to a pandas dataframe
    df = pd.DataFrame(data)
    df['label'] = labels
    ```

    This dataset was made by https://datacenter.aucegypt.edu/shazeem, which provides 2 datasets:
    - ADBase
    - MADBase (✅ the one this dataset is derived from, similar in form to MNIST)
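
    To sanity-check the result, a single row can be reshaped back into an image; a minimal sketch, assuming the MNIST-like 28x28 format mentioned above and the `df` built by the code:

    ```python
    import numpy as np
    from PIL import Image

    # Take the first row, drop the label column and restore the 28x28 image
    pixels = df.drop(columns='label').iloc[0].to_numpy(dtype=np.uint8)
    img = Image.fromarray(pixels.reshape(28, 28), mode='L')
    print("label:", df['label'].iloc[0])
    img.save("sample_digit.png")
    ```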
