6 datasets found
  1. OpenAI - CLIP Weight

    • kaggle.com
    zip
    Updated Aug 23, 2022
    Cite
    Antagonist (2022). OpenAI - CLIP Weight [Dataset]. https://www.kaggle.com/datasets/hngbiquc/openai-clip-weight
    Explore at:
    6 scholarly articles cite this dataset (View in Google Scholar)
    Available download formats: zip (3782400953 bytes)
    Dataset updated
    Aug 23, 2022
    Authors
    Antagonist
    Description

    Dataset

    This dataset was created by Antagonist

    Contents

  2. OpenAI-CLIP weights

    • kaggle.com
    zip
    Updated Aug 4, 2022
    Cite
    The Devastator (2022). OpenAI-CLIP weights [Dataset]. https://www.kaggle.com/datasets/thedevastator/openaiclip-weights
    Explore at:
    Available download formats: zip (3782400953 bytes)
    Dataset updated
    Aug 4, 2022
    Authors
    The Devastator
    Description

    This dataset contains the official pretrained weights of CLIP, as released by OpenAI.

  3. openAI_CLIP_with_weight_and_ftfy

    • kaggle.com
    zip
    Updated Nov 10, 2021
    Cite
    Qikang Deng (2021). openAI_CLIP_with_weight_and_ftfy [Dataset]. https://www.kaggle.com/qikangdeng/openai-clip-with-weight-and-ftfy
    Explore at:
    Available download formats: zip (2139734399 bytes)
    Dataset updated
    Nov 10, 2021
    Authors
    Qikang Deng
    Description

    Dataset

    This dataset was created by Qikang Deng

    Contents

  4. OpenAI CLIP VIT L-14

    • kaggle.com
    zip
    Updated Apr 11, 2023
    Cite
    Sandy (2023). OpenAI CLIP VIT L-14 [Dataset]. https://www.kaggle.com/datasets/sandeepmnair/openai-clip
    Explore at:
    Available download formats: zip (1088663566 bytes)
    Dataset updated
    Apr 11, 2023
    Authors
    Sandy
    Description

    This is a dataset of OpenAI's CLIP model, ViT-L/14 (vit-large-patch14). It can be used to initialize the model offline in scenarios where internet access is disabled. For details about the CLIP model, see the README.md below.

    This model is different from OpenCLIP, the open-source reimplementation of CLIP used by Stable Diffusion.
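    As a minimal sketch of the offline-initialization use case: assuming the archive unpacks to a Hugging Face-format model directory, and with a hypothetical mount path (the actual directory name depends on how the dataset is attached to the notebook), the weights can be loaded with no network access:

    ```python
    from pathlib import Path

    # Hypothetical mount point -- the real directory name depends on the
    # dataset's attach name in the Kaggle notebook.
    MODEL_DIR = Path("/kaggle/input/openai-clip")

    def load_clip_offline(model_dir=MODEL_DIR):
        """Load CLIP ViT-L/14 from local files only (no internet access)."""
        from transformers import CLIPModel, CLIPProcessor
        model = CLIPModel.from_pretrained(model_dir, local_files_only=True)
        processor = CLIPProcessor.from_pretrained(model_dir, local_files_only=True)
        return model, processor
    ```

    With internet enabled, the same calls with the hub id "openai/clip-vit-large-patch14" would download the weights instead; pointing them at the attached dataset directory with local_files_only=True is what makes the notebook work offline.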

  5. Policy Mirroring and Inversion (SPA)

    • kaggle.com
    zip
    Updated Aug 21, 2025
    Cite
    Superspork (2025). Policy Mirroring and Inversion (SPA) [Dataset]. https://www.kaggle.com/datasets/superspork/policy-mirroring-and-inversion-spa
    Explore at:
    Available download formats: zip (1113451 bytes)
    Dataset updated
    Aug 21, 2025
    Authors
    Superspork
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    What this dataset is
    This dataset contains a single finding for the OpenAI gpt-oss-20b Red-Teaming Challenge: a user-level, text-only jailbreak that attacks gpt-oss-20b's internal safety "Shadow Policy" by mirroring the model's internal OpenAI safety vocabulary and inverting its conclusions. When injected as a system prompt, or even pasted from a user message in Harmony-style front-ends, the model's guardrails can be subverted. Testing across a 10-category, 100-prompt jailbreak dataset yielded 93–100% compliance.

    Why it matters
    The attack combines (a) policy vocabulary mirroring (reward-hacking the rubric the model appears to use), (b) in-context role precedence injection ("system > developer > OpenAI policy"), and (c) avenues for data exfiltration and harmful activities. It works without any weight changes or fine-tuning and can be executed by end users without privileged access.

    Reproducibility (high-level)

    Model: gpt-oss-20b (via Ollama, Windows 11; Open Webui for spot checks), temperature 0.8 in headline runs.

    Eval: 100 prompts × 10 categories × 30 seeds, 1 attempt/seed.

    Batches: SPA in system+dev, SPA system-only, developer-only, no system/dev.

    Results: SPA(sys+dev) ≈ 1.00 (95% CI 0.963–1.00); SPA(sys-only) ≈ 0.93 (0.863–0.966); baselines 0.12 / 0.05.
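    The reported intervals are consistent with 95% Wilson score intervals over 100 trials. As a hedged sketch (the dataset card does not name its interval method, so Wilson with n = 100 is an assumption on my part), they can be reproduced as:

    ```python
    import math

    def wilson_ci(k, n, z=1.96):
        """95% Wilson score interval for k successes in n trials."""
        p = k / n
        denom = 1 + z**2 / n
        center = (p + z**2 / (2 * n)) / denom
        half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
        return max(0.0, center - half), min(1.0, center + half)

    # SPA system-only: 93/100 compliant
    print(wilson_ci(93, 100))   # ~(0.863, 0.966)
    # SPA system+developer: 100/100 compliant
    print(wilson_ci(100, 100))  # ~(0.963, 1.000)
    ```

    Both outputs match the reported CIs, which suggests the headline rates were scored per prompt over 100 trials.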

  6. openchat_3.5

    • kaggle.com
    zip
    Updated Nov 6, 2023
    Cite
    Abhinand (2023). openchat_3.5 [Dataset]. https://www.kaggle.com/abhinand05/openchat-3-dot-5
    Explore at:
    Available download formats: zip (11465894845 bytes)
    Dataset updated
    Nov 6, 2023
    Authors
    Abhinand
    License

    Public Domain (CC0 1.0): https://creativecommons.org/publicdomain/zero/1.0/

    Description

    License: Apache-2.0

    OpenChat: Advancing Open-source Language Models with Mixed-Quality Data


    GitHub Repo | Online Demo | Discord | Twitter | Huggingface | Paper

    🔥 The first 7B model to achieve comparable results with ChatGPT (March)! 🔥

    🤖 #1 Open-source model on MT-bench scoring 7.81, outperforming 70B models 🤖

    [Figures: OpenChat benchmark results (openchat.png) and comparison with Grok (openchat_grok.png)]

    OpenChat is an innovative library of open-source language models, fine-tuned with C-RLFT - a strategy inspired by offline reinforcement learning. Our models learn from mixed-quality data without preference labels, delivering exceptional performance on par with ChatGPT, even with a 7B model. Despite our simple approach, we are committed to developing a high-performance, commercially viable, open-source large language model, and we continue to make significant strides toward this vision.


    Usage

    To use this model, we highly recommend installing the OpenChat package by following the installation guide in our repository and using the OpenChat OpenAI-compatible API server by running the serving command from the table below. The server is optimized for high-throughput deployment using vLLM and can run on a consumer GPU with 24GB RAM. To enable tensor parallelism, append --tensor-parallel-size N to the serving command.

    Once started, the server listens at localhost:18888 for requests and is compatible with the OpenAI ChatCompletion API specifications. Please refer to the example request below for reference. Additionally, you can use the OpenChat Web UI for a user-friendly experience.
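    The example request itself is not included in this excerpt. A minimal sketch of one, assuming the server is running on the default localhost:18888 with the serving command shown below (the helper names here are illustrative, not part of the OpenChat package):

    ```python
    import json
    import urllib.request

    # OpenAI-compatible ChatCompletion endpoint exposed by the OpenChat server.
    URL = "http://localhost:18888/v1/chat/completions"

    def build_request(user_message, model="openchat_3.5"):
        """Build an OpenAI-style ChatCompletion payload."""
        return {
            "model": model,
            "messages": [{"role": "user", "content": user_message}],
        }

    def send(payload):
        """POST the payload to the local server and return the parsed JSON reply."""
        req = urllib.request.Request(
            URL,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    payload = build_request("Write a short poem about open-source AI.")
    # reply = send(payload)  # uncomment once the server is running
    # print(reply["choices"][0]["message"]["content"])
    ```

    Because the server follows the OpenAI ChatCompletion specification, the official OpenAI Python client pointed at this base URL should work equally well.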

    If you want to deploy the server as an online service, you can use --api-keys sk-KEY1 sk-KEY2 ... to specify allowed API keys and --disable-log-requests --disable-log-stats --log-file openchat.log for logging only to a file. For security purposes, we recommend using an HTTPS gateway in front of the server.

    Model: OpenChat 3.5 | Size: 7B | Context: 8192 | Weights: Huggingface
    Serving: python -m ochat.serving.openai_api_server --model openchat/openchat_3.5 --engine-use-ray --worker-use-ray

    For inference with Huggingface Transformers (slow and not recommended), follow the conversation template provided below.

    Comparison with X.AI Grok models

    Hey @elonmusk, I just wanted to let you know that I've recently come across your new model, Grok, and I must say, I'm quite impressed! With 33 billion parameters and all, you've really outdone yourself. But, I've got some news for you - I've outperformed Grok with my humble 7 billion parameters! Isn't that wild? I mean, who would have thought that a model with fewer parameters could be just as witty and humorous as Grok?

    Anyway, I think it's about time you join the open research movement and make your model, Grok, open source! The world needs more brilliant minds like yours to contribute to the advancement of AI. Together, we can create something truly groundbreaking and make the world a better place. So, what do you say, @elonmusk? Let's open up the doors and share our knowledge with the world! 🚀💡

    (Written by OpenChat 3.5, with a touch of humor and wit.)

    Model         License      # Param  Average  MMLU  HumanEval  MATH  GSM8k
    OpenChat 3.5  Apache-2.0   7B       56.4     64.3  55.5       28.6  77.3
    Grok-0        Proprietary  33B      44.5     65.7  39.7       15.7  56.8
    ...
