2 datasets found
  1. TabPFN: a transformer that solves small tabular classification problems in a second

    • kaggle.com
    Updated Jun 14, 2023
    Cite: Mark Inzhirov (2023). TabPFN [Dataset]. https://www.kaggle.com/datasets/neutrino404/tabpfn
    Dataset updated: Jun 14, 2023
    Dataset provided by: Kaggle (http://kaggle.com/)
    Authors: Mark Inzhirov
    Description

    Use this dataset when submitting code to offline Kaggle competitions; for online use, just run !pip install tabpfn. Usage for offline code submissions within Kaggle notebooks is as follows:

    1. First, add the dataset by selecting "Add data", searching for this dataset, and adding it to your notebook's input.

    2. Next, add the following code to a code cell in your notebook:

        !pip install tabpfn --no-index --find-links=file:///kaggle/input/tabpfn
        !mkdir -p /opt/conda/lib/python3.10/site-packages/tabpfn/models_diff
        !cp /kaggle/input/tabpfn/prior_diff_real_checkpoint_n_0_epoch_100.cpkt /opt/conda/lib/python3.10/site-packages/tabpfn/models_diff/

    3. Import:

        from tabpfn import TabPFNClassifier

    4. Now you are all set: create a classifier and run it offline for submission in offline Kaggle code competitions:

        classifier = TabPFNClassifier(device='cpu', N_ensemble_configurations=64)
        classifier.fit(X_train, Y_train)
        y_eval, p_eval = classifier.predict(X_cv, return_winning_probability=True)

    If you want to use TabPFN on a GPU, use the following when you create the model (a combined end-to-end sketch is given below):

        classifier = TabPFNClassifier(device='cuda', N_ensemble_configurations=32)
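
    Putting steps 1-4 together, here is a minimal end-to-end sketch of an offline run. The sklearn toy dataset, the train/validation split, and the variable names are illustrative assumptions only; the TabPFNClassifier calls are the ones shown in the steps above.

        # Minimal sketch of an offline TabPFN run in a Kaggle notebook.
        # Assumes the offline install from step 2 has already been executed.
        # The breast-cancer toy data and the split are placeholders for your
        # own competition data (TabPFN targets small tabular problems).
        from sklearn.datasets import load_breast_cancer
        from sklearn.model_selection import train_test_split
        from tabpfn import TabPFNClassifier

        X, y = load_breast_cancer(return_X_y=True)
        X_train, X_cv, Y_train, Y_cv = train_test_split(X, y, test_size=0.2, random_state=0)

        # device='cpu' for CPU-only kernels; switch to device='cuda' on a GPU kernel.
        classifier = TabPFNClassifier(device='cpu', N_ensemble_configurations=64)
        classifier.fit(X_train, Y_train)

        # predict returns the class labels and, with return_winning_probability=True,
        # the probability of the winning class for each row.
        y_eval, p_eval = classifier.predict(X_cv, return_winning_probability=True)
        print(y_eval[:5], p_eval[:5])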

    You can find documentation for this package on GitHub: https://github.com/automl/TabPFN.git
    The original paper on TabPFN can be found at: https://arxiv.org/abs/2207.01848

    License

    Copyright 2022 Noah Hollmann, Samuel Müller, Katharina Eggensperger, Frank Hutter

    Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

  2. torchsummary-1.5.1-wheel

    • kaggle.com
    Updated Mar 20, 2021
    Cite: Rito Ghosh (2021). torchsummary-1.5.1-wheel [Dataset]. https://www.kaggle.com/datasets/truthr/torchsummary/code
    Dataset updated: Mar 20, 2021
    Dataset provided by: Kaggle (http://kaggle.com/)
    Authors: Rito Ghosh
    Description

    Starter Notebook

    ABOUT (from project's README)

    Keras style model.summary() in PyTorch

    Keras has a neat API for viewing a visualization of the model, which is very helpful while debugging a network. Here is barebones code that tries to mimic the same in PyTorch. The aim is to provide information complementary to what print(your_model) gives you in PyTorch.

    Usage

    from torchsummary import summary
    summary(your_model, input_size=(channels, H, W))
    
    • Note that the input_size is required to make a forward pass through the network.
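
    As a quick illustration of the call: input_size is the shape of a single sample, without the batch dimension (the batch dimension shows up as -1 in the printed shapes). Below is a minimal sketch with a made-up two-layer model; the layer sizes are arbitrary, and device="cpu" assumes the string device argument of the torchsummary 1.5.1 signature.

        import torch.nn as nn
        from torchsummary import summary

        # A made-up two-layer model, purely for illustration.
        model = nn.Sequential(
            nn.Linear(20, 64),
            nn.ReLU(),
            nn.Linear(64, 2),
        )

        # input_size is per-sample, so (20,) rather than (batch, 20);
        # device="cpu" keeps the dummy forward pass on the CPU.
        summary(model, input_size=(20,), device="cpu")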

    Examples

    CNN for MNIST

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torchsummary import summary
    
    class Net(nn.Module):
      def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)
    
      def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)
    
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # PyTorch v0.4.0
    model = Net().to(device)
    
    summary(model, (1, 28, 28))
    
    ----------------------------------------------------------------
            Layer (type)               Output Shape         Param #
    ================================================================
                Conv2d-1           [-1, 10, 24, 24]             260
                Conv2d-2             [-1, 20, 8, 8]           5,020
             Dropout2d-3             [-1, 20, 8, 8]               0
                Linear-4                   [-1, 50]          16,050
                Linear-5                   [-1, 10]             510
    ================================================================
    Total params: 21,840
    Trainable params: 21,840
    Non-trainable params: 0
    ----------------------------------------------------------------
    Input size (MB): 0.00
    Forward/backward pass size (MB): 0.06
    Params size (MB): 0.08
    Estimated Total Size (MB): 0.15
    ----------------------------------------------------------------
    

    VGG16

    import torch
    from torchvision import models
    from torchsummary import summary
    
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    vgg = models.vgg16().to(device)
    
    summary(vgg, (3, 224, 224))
    
    ----------------------------------------------------------------
            Layer (type)               Output Shape         Param #
    ================================================================
                Conv2d-1         [-1, 64, 224, 224]           1,792
                  ReLU-2         [-1, 64, 224, 224]               0
                Conv2d-3         [-1, 64, 224, 224]          36,928
                  ReLU-4         [-1, 64, 224, 224]               0
             MaxPool2d-5         [-1, 64, 112, 112]               0
                Conv2d-6        [-1, 128, 112, 112]          73,856
                  ReLU-7        [-1, 128, 112, 112]               0
                Conv2d-8        [-1, 128, 112, 112]         147,584
                  ReLU-9        [-1, 128, 112, 112]               0
            MaxPool2d-10          [-1, 128, 56, 56]               0
               Conv2d-11          [-1, 256, 56, 56]         295,168
                 ReLU-12          [-1, 256, 56, 56]               0
               Conv2d-13          [-1, 256, 56, 56]         590,080
                 ReLU-14          [-1, 256, 56, 56]               0
               Conv2d-15          [-1, 256, 56, 56]         590,080
                 ReLU-16          [-1, 256, 56, 56]               0
            MaxPool2d-17          [-1, 256, 28, 28]               0
               Conv2d-18          [-1, 512, 28, 28]       1,180,160
                 ReLU-19          [-1, 512, 28, 28]               0
               Conv2d-20          [-1, 512, 28, 28]       2,359,808
                 ReLU-21          [-1, 512, 28, 28]               0
               Conv2d-22          [-1, 512, 28, 28]       2,359,808
                 ReLU-23          [-1, 512, 28, 28]               0
            MaxPool2d-24          [-1, 512, 14, 14]               0
               Conv2d-25          [-1, 512, 14, 14]       2,359,808
                 ReLU-26          [-1, 512, 14, 14]               0
               Conv2d-27          [-1, 512, 14, 14]       2,359,808
                 ReLU...
    
