MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
The dataset includes the Fantasy RPG context.txt file, which is utilized in the notebook Fantasy RPG Game with Gemini: Ultimate Tutorial as the foundational game context for a Fantasy RPG text adventure. This context can be uploaded to Gemini using its context caching feature.
The notebook demonstrates how to generate consistent and expanded game contexts, showcasing the method previously used to create the original Fantasy RPG context.txt. The newly generated game context is stored in separate text files, which are subsequently concatenated to form the updated Fantasy RPG context new.txt. This updated file, exceeding 100,000 tokens, satisfies Gemini's minimum token requirement for context caching and is uploaded as the cached context, allowing users or players to interact with the enhanced game world.
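As a rough illustration of the caching workflow described above, here is a minimal sketch of uploading the concatenated context file to Gemini's context caching feature. It assumes the google-generativeai Python SDK; the API key, model name, and system instruction are placeholders rather than values taken from the notebook.
import google.generativeai as genai
from google.generativeai import caching

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# Read the concatenated game context (it must exceed the caching minimum token count).
with open("Fantasy RPG context new.txt", encoding="utf-8") as f:
    game_context = f.read()

# Create a cached content entry and build a model that reuses it.
cache = caching.CachedContent.create(
    model="models/gemini-1.5-flash-001",  # assumed model; any caching-capable Gemini model works
    system_instruction="You are the narrator of a fantasy RPG text adventure.",  # placeholder
    contents=[game_context],
)
model = genai.GenerativeModel.from_cached_content(cached_content=cache)

response = model.generate_content("I enter the tavern. What do I see?")
print(response.text)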
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
This dataset contains questions and answers related to injection molding, covering topics such as 'Materials', 'Techniques', 'Machinery', 'Troubleshooting', 'Safety', 'Design', 'Maintenance', 'Manufacturing', 'Development', and 'R&D'. The dataset is provided in CSV format with two columns: Questions and Answers.
Researchers, practitioners, and enthusiasts in the field of injection molding can use this dataset for a variety of tasks. To load it with pandas:
import pandas as pd
# Load the dataset
dataset = pd.read_csv('injection_molds_dataset.csv')
# Display the first few rows
print(dataset.head())
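Alternatively, the dataset can be loaded directly from the Hugging Face Hub with the datasets library: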
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("mustafakeser/injection-molding-QA")
# Display dataset info
print(dataset)
# Accessing the first few examples
print(dataset['train'][:5])
# or convert the train split to a pandas DataFrame
dataset['train'].to_pandas()
If you use this dataset in your work, please consider citing it as:
@misc{injectionmold_dataset,
  author       = {Your Name},
  title        = {Injection Molds Dataset},
  year         = {2024},
  publisher    = {Hugging Face},
  journal      = {Hugging Face Datasets},
  howpublished = {\url{https://huggingface.co/datasets/mustafakeser/injection-molding-QA}},
}
Dataset page: https://huggingface.co/datasets/mustafakeser/injection-molding-QA
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The data used in this tutorial are a subset of the data published previously in Training material for the course "Exome analysis with GALAXY". Credit for uploading the original data goes to Paolo Uva and Gianmauro Cuccuru!
Specifically, you may need the following datasets to work through the tutorial:
Raw sequencing reads
Premapped sequencing reads
Reference sequence (human chromosome 8)
If you would just like to play with GEMINI rather than work through the full tutorial, you'll find below a prebuilt GEMINI database (for GEMINI version 0.20.1) for the family trio. You can start exploring this database without having to run GEMINI load and, in fact, without having to install GEMINI's bundled annotation data.
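If you just want to poke at the prebuilt database, here is a minimal sketch using GEMINI's Python query interface (available once GEMINI 0.20.1 is installed); the database filename trio.db is an assumption, so substitute the name of the file you downloaded.
from gemini import GeminiQuery

# Open the prebuilt trio database (filename assumed) and run an ad-hoc query.
gq = GeminiQuery("trio.db")
gq.run("SELECT chrom, start, end, ref, alt, gene, impact "
       "FROM variants WHERE impact_severity != 'LOW' LIMIT 10")

# Iterate over the resulting rows.
for row in gq:
    print(row)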
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
This dataset originates from a multi-year enterprise survey conducted across industries and countries. It focuses on the organizational effects of adopting Generative AI tools such as ChatGPT, Claude, Gemini, Mixtral, LLaMA, and Groq. The dataset captures detailed metrics on job role creation, workforce transformation, productivity changes, and employee sentiment.
columns = [
"Company Name", # Anonymized name
"Industry", # Sector (e.g., Finance, Healthcare)
"Country", # Country of operation
"GenAI Tool", # GenAI platform used
"Adoption Year", # Year of initial deployment (2022–2024)
"Number of Employees Impacted", # Affected staff count
"New Roles Created", # Number of AI-driven job roles introduced
"Training Hours Provided", # Upskilling time investment
"Productivity Change (%)", # % shift in reported productivity
"Employee Sentiment" # Textual feedback from employees
]
import pandas as pd

# Load the survey data and take a first look
df = pd.read_csv("Large_Enterprise_GenAI_Adoption_Impact.csv")
df.shape
df.head(10)
df.describe()

# Which tools and industries appear in the survey
df["GenAI Tool"].value_counts()
df["Industry"].unique()

# Example filter: companies in India that adopted GenAI in 2023
df[(df["Adoption Year"] == 2023) & (df["Country"] == "India")]

# Industries with the largest average productivity change
df.groupby("Industry")["Productivity Change (%)"].mean().sort_values(ascending=False).head()
from collections import Counter
import re

# Most frequent words across all employee sentiment text
text = " ".join(df["Employee Sentiment"].dropna().tolist())
words = re.findall(r'\b\w+\b', text.lower())
common_words = Counter(words).most_common(20)
print(common_words)

# Distribution of sentiment length in words (fillna guards against missing feedback)
df["Sentiment Length"] = df["Employee Sentiment"].fillna("").apply(lambda x: len(x.split()))
df["Sentiment Length"].hist(bins=50)

# Average new roles per tool and average training hours per industry
df.groupby("GenAI Tool")["New Roles Created"].mean().sort_values(ascending=False)
df.groupby("Industry")["Training Hours Provided"].mean().sort_values(ascending=False)