The dataset, in .csv format, includes the top daily tweets containing the keyword 'Climate Change'.
It contains 11 columns and covers the period from 01/01/2022 to 19/07/2022.
You can perform an exploratory data analysis of the dataset with Pandas and NumPy (if you use Python) or other data analysis libraries.
You can also use the dataset to run queries and plot graphs with Matplotlib, Seaborn, and other libraries.
The tweets can also be analyzed with NLP tasks such as sentiment analysis.
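To get started, here is a minimal exploratory sketch in Python using the libraries mentioned above; the file name and the 'date' column are assumptions, since the 11 column names are not listed here.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Load the dataset; the file name is illustrative.
df = pd.read_csv("climate_change_tweets.csv", parse_dates=["date"])

# Structural overview of the 11 columns.
print(df.info())

# Tweets per day over 01/01/2022-19/07/2022 (assumes a 'date' column).
df.groupby(df["date"].dt.date).size().plot(kind="line")
plt.title("Daily 'Climate Change' tweets")
plt.xlabel("Date")
plt.ylabel("Tweet count")
plt.tight_layout()
plt.show()
```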
Remember to upvote if you found the dataset useful :).
The tool used to scrape the data from Twitter can be found here.
https://github.com/Altimis/Scweet https://github.com/Altimis/Scweet/blob/master/LICENSE.txt
This code was used for a northern Idaho northern leopard frog reintroduction feasibility analysis. It can be quite cumbersome to run: the current version is intended to be run using parallel computing and can take several days to weeks to complete. For questions or to discuss further, please reach out to Laura Keating at LauraK@calgaryzoo.com.
Technical report: https://www.researchgate.net/publication/356069387_Feasibility_assessmen...
Journal article: https://pubsonline.informs.org/doi/10.1287/deca.2023.0472
Code Use
License
MIT License
Recommended Citation
Keating L, Randall L, Seaborn T. 2023. Code from: Using Decision Analysis to Determine the Feasibility of a Conservation Translocation (Version 1.0.0). GitHub. https://github.com/conservationresearch/dapva4nlf
Funding
Wilder Institute/Calgary Zoo
US Fish and Wildlife Service: F18AS00095
US National Science Foundation and Idaho EPSCoR: OIA-1757324
Hunt Family Foundation
Data from the 2023 Ecological Applications manuscript: Using social-ecological models to explore stream connectivity outcomes for stakeholders and Yellowstone cutthroat trout.
Input files and R scripts for running YCT connectivity simulations in CDMetaPOP, plus full-resolution mental models constructed by Teton Valley stakeholders. Data are accessible from Zenodo and constitute the v1.0.0 release of the Connectivity_YCT_2022 GitHub repository.
Data Use
License
Open
Recommended Citation
Jossie L, Seaborn T, Baxter CV, Burnham M. 2023. lizziejossie/Connectivity_YCT_2022: YCT_Connectivity_EcologicalApplications (v1.0.0) [Dataset]. Zenodo. https://doi.org/10.5281/zenodo.8161826
Funding
US National Science Foundation and Idaho EPSCoR: OIA-1757324
Custom license: https://darus.uni-stuttgart.de/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.18419/DARUS-4576
This entry contains the data used for the bachelor thesis, which investigated how embeddings can be used to analyze supersecondary structures.

Abstract of the thesis: This thesis analyzes the behavior of supersecondary structures in the context of embeddings. For this purpose, data from the Protein Topology Graph Library was annotated with embeddings. The result is a structured graph database, which will be used for future work and analyses. In addition, several projections into two-dimensional space were made to analyze how the embeddings behave there.

The Jupyter Notebook 1_data_retrival.ipynb documents the download of the graph files from the Protein Topology Graph Library (https://ptgl.uni-frankfurt.de). The downloaded .gml files, also available in graph_files.zip, describe graphs that represent the relationships of supersecondary structures in the proteins; they form the data basis for further analyses.

These graph files are then processed in the Jupyter Notebook 2_data_storage_and_embeddings.ipynb and loaded into a graph database. The sequences of the supersecondary and secondary structures from the PTGL can be found in fastas.zip. The embeddings were calculated with the ESM model of the Facebook Research Group (huggingface.co/facebook/esm2_t12_35M_UR50D) and are provided in three .h5 files; they are added to the database in this step. This notebook builds up the database, which can then be searched using Cypher queries.

In the Jupyter Notebook 3_data_science.ipynb, different visualizations and analyses are carried out, which were made with the help of UMAP.

To install the dependencies, it is recommended to create a Conda environment and install all packages there. To use the project, PyEED should be installed from the snapshot of the original repository (source repository: https://github.com/PyEED/pyeed); the easiest way is to run pip install -e . in the pyeed_BT folder. The dependencies can also be installed with Poetry and the .toml file. In addition, seaborn, h5py and umap-learn are required, which can be installed with:

pip install h5py==3.12.1
pip install seaborn==0.13.2
pip install umap-learn==0.5.7
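As a rough illustration of the analysis step, the sketch below loads embeddings from one of the .h5 files and projects them with UMAP, using the packages pinned above (h5py, umap-learn, seaborn). The file name and the internal .h5 layout (one fixed-length vector per structure ID) are assumptions, not taken from the notebooks.

```python
import h5py
import numpy as np
import umap
import seaborn as sns
import matplotlib.pyplot as plt

# Load precomputed ESM-2 embeddings; file name and layout are assumptions:
# one fixed-length embedding vector stored per structure ID.
with h5py.File("embeddings.h5", "r") as f:
    ids = list(f.keys())
    X = np.stack([f[k][()] for k in ids])

# Project into two-dimensional space, as done for the thesis visualizations.
X_2d = umap.UMAP(n_components=2, random_state=42).fit_transform(X)

sns.scatterplot(x=X_2d[:, 0], y=X_2d[:, 1], s=10)
plt.title("UMAP projection of supersecondary-structure embeddings")
plt.show()
```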
Input files for MigClim simulations. All scripts for re-creating the analyses are available at https://github.com/trasea986/cc_disp_amphib.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
I have created artificial intelligence software that predicts the year of the orbital elements of near-Earth comets. The software is built on regression principles and achieves 99.25% accuracy with an MAE loss of 0.0406. I have open-sourced the code publicly on Kaggle and GitHub, both as a notebook and as plain Python code. Data taken from Nasa.gov.

Emirhan BULUT
Senior Artificial Intelligence Engineer
Python 3.9.8
scikit-learn (sklearn)
NumPy
Matplotlib
Pandas
glob
os
Seaborn
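The repository holds the author's actual pipeline; as a hedged sketch of the regression setup described above, the following uses scikit-learn with an illustrative file name and a hypothetical 'year' target column.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Illustrative file and column names; not the author's exact code.
df = pd.read_csv("near_earth_comets.csv")

# Predict the year from the remaining numeric orbital elements.
y = df["year"]                                # hypothetical target column
X = df.drop(columns=["year"]).select_dtypes("number")

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = RandomForestRegressor(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# The project itself reports 99.25% accuracy and an MAE of 0.0406.
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```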
Project image: https://github.com/emirhanai/Predictions-the-Year-of-the-Orbital-Elements-of-Near-Earth-Comets---Artificial-Intelligence-Project/blob/main/Predictions%20the%20Year%20of%20the%20Orbital%20Elements%20of%20Near-Earth%20Comets%20-%20Artificial%20Intelligence%20Project.png?raw=true
Name-Surname: Emirhan BULUT
Contact (Email) : emirhan@isap.solutions
LinkedIn : https://www.linkedin.com/in/artificialintelligencebulut/
Kaggle: https://www.kaggle.com/emirhanai
Official Website: https://www.emirhanbulut.com.tr
CC0 1.0: https://creativecommons.org/publicdomain/zero/1.0/
I made CNN-based artificial intelligence software that detects and classifies human movements. I trained nine ready-made models and added a 36-layer CNN model that I created myself. To make the code instructive and understandable, I annotated the code blocks with notebook-style headings. I shared the dataset and software with humanity for free on Kaggle and GitHub.
Enjoyable software...
Emirhan BULUT
AI Inventor - Senior Artificial Intelligence Engineer
Python 3.9.8
Tensorflow - Keras
NumPy
Matplotlib
Pandas
glob
os
Seaborn
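The notebook contains the author's actual 36-layer architecture; the block below is only a minimal Keras classification sketch under assumed input shape and class count, to show the general shape of such a model.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Minimal sketch; the input shape and the number of action classes are
# assumptions, and the real model in the repository has 36 layers.
NUM_CLASSES = 15
model = models.Sequential([
    layers.Input(shape=(160, 160, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```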
Project image: https://github.com/emirhanai/Human-Action-Detection-with-Artificial-Intelligence/blob/main/Human%20Action%20Detection%20with%20Artificial%20Intelligence.png?raw=true
Name-Surname: Emirhan BULUT
Contact (Email) : emirhan@isap.solutions
LinkedIn : https://www.linkedin.com/in/artificialintelligencebulut/
Kaggle: https://www.kaggle.com/emirhanai
Official Website: https://www.emirhanbulut.com.tr
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Code and data to reproduce figures and analyses of the paper: Gourgue, O., van Belzen, J., Schwarz, C., Vandenbruwaene, W., Vanlede, J., Belliard, J.-P., Fagherazzi, S., Bouma, T.J., van de Koppel, J., and Temmerman, S.: Biogeomorphic modeling to assess resilience of tidal marsh restoration to sea level rise and sediment supply, Earth Surf. Dynam., submitted.

Standard Python dependencies:
GDAL
Geopandas
Matplotlib
NumPy
Rasterio
SciPy
Seaborn
Shapely
scikit-learn

Third-party Python dependencies:
Centerline (https://github.com/fitodic/centerline)
pputils (https://github.com/pprodano/pputils)
pysheds (https://github.com/mdbartos/pysheds)

In-house Python dependencies:
Demeter 1.0.5 (https://doi.org/10.5281/zenodo.5205258)
OGTools 1.1 (https://doi.org/10.5281/zenodo.3994952)
TidalGeoPro 0.1 (https://doi.org/10.5281/zenodo.5205285)
MIT License: https://opensource.org/licenses/MIT
This project aims to develop a model for identifying five different flower species (rose, tulip, sunflower, dandelion, daisy) using Convolutional Neural Networks (CNNs).
The dataset consists of 5,000 images (1,000 images per class) collected from various online sources. The model achieved an accuracy of 98.58% on the test set.

Usage
TensorFlow: for building neural networks.
numpy: for numerical computing and array operations.
pandas: for data manipulation and analysis.
matplotlib: for creating visualizations such as line plots, bar plots, and histograms.
seaborn: for advanced data visualization and statistically informed graphics.
scikit-learn: for machine learning algorithms and model training.

To run the project:
Install the required libraries.
Run the Jupyter Notebook: jupyter notebook flower_classification.ipynb

Additional Information
Link to code: https://github.com/Harshjaglan01/flower-classification-cnn
License: MIT License
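For orientation, here is a hedged sketch of such a five-class training pipeline in TensorFlow/Keras; the directory layout (one folder per species), image size, and architecture are assumptions, not the notebook's exact code.

```python
import tensorflow as tf

# Assumes a 'flowers/' directory with one subfolder per species.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "flowers/", validation_split=0.2, subset="training",
    seed=42, image_size=(180, 180), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "flowers/", validation_split=0.2, subset="validation",
    seed=42, image_size=(180, 180), batch_size=32)

# Small CNN for the five classes; the notebook's model may differ.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```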
MIT License: https://opensource.org/licenses/MIT
Daily Machine Learning Practice – 1 Commit per Day
Author: Astrid Villalobos
Location: Montréal, QC
LinkedIn: https://www.linkedin.com/in/astridcvr/
Objective
The goal of this project is to strengthen Machine Learning and data analysis skills through small, consistent daily contributions. Each commit focuses on a specific aspect of data processing, feature engineering, or modeling using Python, Pandas, and Scikit-learn.
Dataset
Source: Kaggle – Sample Sales Data
File: data/sales_data_sample.csv
Variables: ORDERNUMBER, QUANTITYORDERED, PRICEEACH, SALES, COUNTRY, etc.
Goal: Analyze e-commerce performance, predict sales trends, segment customers, and forecast demand.
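As one example of the kind of daily exercise described here, the sketch below fits a simple baseline on the listed variables; the Latin-1 encoding is a common requirement for this Kaggle file but should be treated as an assumption.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Encoding is an assumption; adjust if the file reads cleanly as UTF-8.
df = pd.read_csv("data/sales_data_sample.csv", encoding="latin-1")

# SALES is roughly QUANTITYORDERED * PRICEEACH, so a linear model is a
# natural first baseline for the sales-forecasting exercises.
X = df[["QUANTITYORDERED", "PRICEEACH"]]
y = df["SALES"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = LinearRegression().fit(X_train, y_train)
print("R^2 on held-out orders:", r2_score(y_test, model.predict(X_test)))
```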
**Project Rules**
🟩 1 Commit per Day: minimum one line of code daily to ensure consistency and discipline
🌍 Bilingual Comments: code and documentation in English and French
📈 Visible Progress: daily green squares = daily learning

🧰 Tech Stack
Languages: Python
Libraries: Pandas, NumPy, Scikit-learn, Matplotlib, Seaborn
Tools: Jupyter Notebook, GitHub, Kaggle
Learning Outcomes
By the end of this challenge:
Develop a stronger understanding of data preprocessing, modeling, and evaluation.
Build consistent coding habits through daily practice.
Apply ML techniques to real-world sales data scenarios.