Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This upload contains data and documentation for the Python analysis undertaken in Google Colab as part of Episode 1 of the webinar series, conducted by Sambodhi's Center for Health Systems Research and Implementation (CHSRI). You can find the link to the Google Colab notebook here.
All the data uploaded here is open data published by the Toronto Police Public Safety Data Portal and the Ontario Ministry of Health.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The purpose of this code is to produce a line graph visualization of COVID-19 data. This Jupyter notebook was built and run on Google Colab. This code will serve mostly as a guide and will need to be adapted where necessary to be run locally. The separate COVID-19 datasets uploaded to this Dataverse can be used with this code. This upload is made up of the IPYNB and PDF files of the code.
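For orientation, a minimal sketch of the kind of line-graph cell the notebook builds is shown below; the file name and the 'date' and 'cases' column names are placeholders to be adapted to whichever COVID-19 dataset from this Dataverse you use.
import pandas as pd
import matplotlib.pyplot as plt

# Placeholder file and column names -- adjust to the downloaded dataset
covid = pd.read_csv('covid19_data.csv', parse_dates=['date'])
covid = covid.sort_values('date')

plt.figure(figsize=(10, 5))
plt.plot(covid['date'], covid['cases'])
plt.xlabel('Date')
plt.ylabel('Reported cases')
plt.title('COVID-19 cases over time')
plt.tight_layout()
plt.show()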
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains only the COCO 2017 train images (118K images) and a caption annotation JSON file, designed to fit within Google Colab's available disk space of approximately 50GB when connected to a GPU runtime.
If you're using PyTorch on Google Colab, you can easily utilize this dataset as follows:
Manually downloading and uploading the file to Colab can be time-consuming. Therefore, it's more efficient to download this data directly into Google Colab. Please ensure you have first added your Kaggle key to Google Colab; you can find more details on this process here.
import os
import torch
import torchvision.datasets as dset
import torchvision.transforms as transforms
from google.colab import userdata  # needed for userdata.get() below

# Set Kaggle credentials from the Colab secrets manager
os.environ["KAGGLE_KEY"] = userdata.get('KAGGLE_KEY')
os.environ["KAGGLE_USERNAME"] = userdata.get('KAGGLE_USERNAME')
# Download the Dataset and unzip it
!kaggle datasets download -d seungjunleeofficial/coco2017-image-caption-train
!mkdir "/content/Dataset"
!unzip "coco2017-image-caption-train" -d "/content/Dataset"
# load the dataset
cap = dset.CocoCaptions(root='/content/Dataset/COCO2017 Image Captioning Train/train2017',
                        annFile='/content/Dataset/COCO2017 Image Captioning Train/captions_train2017.json',
                        transform=transforms.PILToTensor())
You can then use the dataset in the following way:
print(f"Number of samples: {len(cap)}")
img, target = cap[3]
print(img.shape)
print(target)
# Output example: torch.Size([3, 425, 640])
# ['A zebra grazing on lush green grass in a field.', 'Zebra reaching its head down to ground where grass is.',
# 'The zebra is eating grass in the sun.', 'A lone zebra grazing in some green grass.',
# 'A Zebra grazing on grass in a green open field.']
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
📥 Load Dataset in Python
To load this dataset in Google Colab or any Python environment:
!pip install huggingface_hub pandas openpyxl
from huggingface_hub import hf_hub_download
import pandas as pd
repo_id = "onurulu17/Turkish_Basketball_Super_League_Dataset"
files = [ "leaderboard.xlsx", "player_data.xlsx", "team_data.xlsx", "team_matches.xlsx", "player_statistics.xlsx", "technic_roster.xlsx" ]
datasets = {}
for f in files: path =…
The description is truncated at the source; see the full description on the dataset page: https://huggingface.co/datasets/onurulu17/Turkish_Basketball_Super_League_Dataset.
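The loop body above is cut off; a plausible completion, assuming each Excel file is fetched with hf_hub_download and read into the datasets dictionary (a reconstruction, not the author's original code), would be:
# Download each Excel file from the dataset repo and load it into a DataFrame
for f in files:
    path = hf_hub_download(repo_id=repo_id, filename=f, repo_type="dataset")
    datasets[f] = pd.read_excel(path)

print(datasets["leaderboard.xlsx"].head())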
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
This dataset provides information about top-rated TV shows, collected from The Movie Database (TMDb) API. It can be used for data analysis, recommendation systems, and insights on popular television content.
Key Stats:
Total Pages: 109
Total Results: 2098 TV shows
Data Source: TMDb API
Sorting Criteria: Highest-rated by vote_average (average rating) with a minimum vote count of 200

Data Fields (Columns):

id: Unique identifier for the TV show
name: Title of the TV show
vote_average: Average rating given by users
vote_count: Total number of votes received
first_air_date: The date when the show was first aired
original_language: Language in which the show was originally produced
genre_ids: Genre IDs linked to the show's genres
overview: A brief summary of the show
popularity: Popularity score based on audience engagement
poster_path: URL path for the show's poster image

Accessing the Dataset via API (Python Example):
import requests

api_key = 'YOUR_API_KEY_HERE'
url = "https://api.themoviedb.org/3/discover/tv"
params = {
    'api_key': api_key,
    'include_adult': 'false',
    'language': 'en-US',
    'page': 1,
    'sort_by': 'vote_average.desc',
    'vote_count.gte': 200
}

response = requests.get(url, params=params)
data = response.json()
print(data['results'][0])
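The example above fetches only the first of the 109 pages; a hedged sketch for collecting all pages (reusing the api_key, url, and params defined above) could look like this:
# Collect every page of results; the API reports the page count in 'total_pages'
all_results = []
for page in range(1, data['total_pages'] + 1):
    params['page'] = page
    resp = requests.get(url, params=params)
    all_results.extend(resp.json().get('results', []))

print(len(all_results))  # should be close to the 2098 shows described above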
Dataset Use Cases:

Data Analysis: Explore trends in highly-rated TV shows.
Recommendation Systems: Build personalized TV show suggestions.
Visualization: Create charts to showcase ratings or genre distribution.
Machine Learning: Predict show popularity using historical data.

Exporting and Sharing the Dataset (Google Colab Example):
import pandas as pd

df = pd.DataFrame(data['results'])

from google.colab import drive
drive.mount('/content/drive')
df.to_csv('/content/drive/MyDrive/top_rated_tv_shows.csv', index=False)

Ways to Share the Dataset:
Google Drive: Upload and share a public link.
Kaggle: Create a public dataset for collaboration.
GitHub: Host the CSV file in a repository for easy sharing.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This deposit contains a sample park analysis notebook and the Hope_Park_original.csv file.

## Contents
- sample park analysis.ipynb — The main analysis notebook (Colab/Jupyter format)
- Hope_Park_original.csv — Source dataset containing park information
- README.md — Documentation for the contents and usage

## Usage
1. Open the notebook in Google Colab or Jupyter.
2. Upload the Hope_Park_original.csv file to the working directory (or adjust the file path in the notebook).
3. Run each cell sequentially to reproduce the analysis.

## Requirements
The notebook uses standard Python data science libraries:
```python
pandas
numpy
matplotlib
seaborn
```
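As a quick sanity check after step 2, a minimal cell along these lines (a sketch, assuming the CSV sits in the working directory) previews the data before running the full analysis:

```python
import pandas as pd

# Load the source dataset and take a first look
parks = pd.read_csv('Hope_Park_original.csv')
print(parks.shape)
parks.head()
```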
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
Use this dataset with Misra's Pandas tutorial: How to use the Pandas GroupBy function | Pandas tutorial
The original dataset came from this site: https://data.cityofnewyork.us/City-Government/NYC-Jobs/kpav-sd4t/data
I used Google Colab to filter the columns with the following Pandas commands. Here's a Colab Notebook you can use with the commands listed below: https://colab.research.google.com/drive/17Jpgeytc075CpqDnbQvVMfh9j-f4jM5l?usp=sharing
Once the csv file is uploaded to Google Colab, use these commands to process the file.
import pandas as pd

# load the file and create a pandas dataframe
df = pd.read_csv('/content/NYC_Jobs.csv')

# keep only these columns
df = df[['Job ID', 'Civil Service Title', 'Agency', 'Posting Type', 'Job Category', 'Salary Range From', 'Salary Range To']]

# save the csv file without the index column
df.to_csv('/content/NYC_Jobs_filtered_cols.csv', index=False)
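Since the file is intended for the GroupBy tutorial, a small illustrative aggregation on the filtered columns (not part of the original notebook) might look like this:
# Example GroupBy: average salary range per agency in the filtered file
df = pd.read_csv('/content/NYC_Jobs_filtered_cols.csv')
salary_by_agency = df.groupby('Agency')[['Salary Range From', 'Salary Range To']].mean()
print(salary_by_agency.sort_values('Salary Range From', ascending=False).head(10))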
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset Description
This dataset supports the research paper "Nou Pa Bèt: Civic Substitution and Expressive Freedoms in Post-State Governance" which examines how civic participation functions as institutional substitution in fragile states, with Haiti as the primary case study. The dataset combines governance indicators from the World Bank's Worldwide Governance Indicators (WGI) with civic engagement measures from the Varieties of Democracy (V-Dem) project.
Files Included:
How to Use in Google Colab:
Step 1: Upload Files
from google.colab import files
import pandas as pd
import numpy as np
# Upload the files to your Colab environment
uploaded = files.upload()
# Select and upload: CivicEngagement_SelectedCountries_Last10Years.xlsx and wgidataset.xlsx
Step 2: Load the Datasets
# Load the civic engagement data (main analysis dataset)
civic_data = pd.read_excel('CivicEngagement_SelectedCountries_Last10Years.xlsx')
# Load the WGI data (if needed for extended analysis)
wgi_data = pd.read_excel('wgidataset.xlsx')
# Display basic information
print("Civic Engagement Dataset Shape:", civic_data.shape)
print("
Columns:", civic_data.columns.tolist())
print("
First few rows:")
civic_data.head()
Step 3: Run the Analysis Notebook
# Download and run the complete analysis notebook
!wget https://zenodo.org/record/[RECORD_ID]/files/civic.ipynb
# Then open civic.ipynb in Colab or copy/paste the code cells
Key Variables:
Dependent Variables (WGI):
Control_of_Corruption - Extent to which public power is exercised for private gain
Government_Effectiveness - Quality of public services and policy implementation

Independent Variables (V-Dem):

v2x_partip - Participatory Component Index
v2x_cspart - Civil Society Participation Index
v2cademmob - Freedom of Peaceful Assembly
v2cafres - Freedom of Expression
v2csantimv - Anti-System Movements
v2xdd_dd - Direct Popular Vote Index

Sample Countries: 21 fragile states including Haiti, Sierra Leone, Liberia, DRC, CAR, Guinea-Bissau, Chad, Niger, Burundi, Yemen, South Sudan, Mozambique, Sudan, Eritrea, Somalia, Mali, Afghanistan, Papua New Guinea, Togo, Cambodia, and Timor-Leste.
Quick Start Analysis:
# Install required packages
!pip install statsmodels scipy
# Basic regression replication
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
# Prepare variables for regression, dropping missing rows jointly so X and y stay aligned
predictors = ['v2x_partip', 'v2x_cspart', 'v2cademmob', 'v2cafres', 'v2csantimv', 'v2xdd_dd']
analysis = civic_data[predictors + ['Control_of_Corruption', 'Government_Effectiveness']].dropna()
X = analysis[predictors]
y_corruption = analysis['Control_of_Corruption']
y_effectiveness = analysis['Government_Effectiveness']
# Run regression (example for Control of Corruption)
X_const = sm.add_constant(X)
model = sm.OLS(y_corruption, X_const).fit(cov_type='HC3')
print(model.summary())
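Since variance_inflation_factor is imported above, a multicollinearity check along these lines is presumably intended (a sketch, not the authors' exact code):
# Variance inflation factors for the V-Dem predictors (plus constant)
vif = pd.DataFrame({
    'variable': X_const.columns,
    'VIF': [variance_inflation_factor(X_const.values, i) for i in range(X_const.shape[1])]
})
print(vif)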
Citation: Brown, Scott M., Fils-Aime, Jempsy, & LaTortue, Paul. (2025). Nou Pa Bèt: Civic Substitution and Expressive Freedoms in Post-State Governance [Dataset]. Zenodo. https://doi.org/10.5281/zenodo.15058161
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Contact: For questions about data usage or methodology, please contact the corresponding author through the institutional affiliations provided in the paper.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
The full semantic dataset is hosted on Kaggle:
👉 https://www.kaggle.com/datasets/cjc0013/epstein-bge-large-hdbscan-bm25/data
Epstein Semantic Explorer v5 is a lightweight, open-source investigation toolkit for analyzing the text fragments released by the House Oversight Committee (November 2025).
This tool does not add new allegations. It simply makes the chaotic, fragmented congressional release usable, by providing:
Everything runs locally in Colab, with no external APIs, servers, or private models.
Explore semantically grouped themes: legal strategy, PR coordination, iMessage logs, internal disputes, travel notes, media monitoring, and more.
view_cluster(96)
Instant relevance-ranked search across all 9,666 documents.
search("Prince Andrew")
search("Clinton")
search("Ghislaine")
Get a fast narrative overview of what a cluster contains.
summarize_cluster(96)
Shows the most meaningful terms defining each cluster.
show_topics()
Identify the most-referenced people, places, and organizations in any cluster.
cluster_entities(12)
Searches all documents for dates and assembles a chronological list.
show_timeline()
See which clusters relate to which — using cosine similarity on text centroids.
cluster_similarity()
Find out where a name appears most often across the entire corpus.
entity_to_clusters("Epstein")
entity_to_clusters("Maxwell")
entity_to_clusters("Barak")
You only need one file:
epstein_semantic.jsonl

Each line is:
{"id": "HOUSE_OVERSIGHT_023051", "cluster": 96, "text": "...document text..."}
{"id": "HOUSE_OVERSIGHT_028614", "cluster": 122, "text": "...document text..."}
id — original document identifier
cluster — HDBSCAN semantic cluster
text — raw text fragment

No PDFs, images, or external metadata required.
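If you want to inspect the file outside the notebook, a minimal snippet (an illustration, not part of the explorer itself) loads the JSONL into pandas:
import pandas as pd

# Each line is a JSON object with id, cluster, and text fields
docs = pd.read_json('epstein_semantic.jsonl', lines=True)
print(docs.shape)
print(docs['cluster'].value_counts().head())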
Open Google Colab → upload:
Epstein_Semantic_Explorer_v5.ipynb
Colab → Runtime → Run all
When prompted:
Upload epstein_semantic.jsonl
If the file is already in /content/, the notebook will auto-detect it.
Now try:
view_cluster(96)
search("Prince Andrew")
show_topics()
cluster_entities(96)
Everything runs on CPU. No GPU required.
No. This only reorganizes public text fragments released by Congress.
No. All analysis stays inside your Colab runtime.
Yes. It’s intentionally simple and transparent — point, click, search.
Yes, as long as you clarify:
Epstein Semantic Explorer v5 turns the unstructured House Oversight text archive into a searchable, analyzable, cluster-organized dataset, enabling:
This tool makes the archive usable — but does not alter or invent any content.
Attribution-NonCommercial 3.0 (CC BY-NC 3.0): https://creativecommons.org/licenses/by-nc/3.0/
License information was derived automatically
Prediction of Phakic Intraocular Lens Vault Using Machine Learning of Anterior Segment Optical Coherence Tomography Metrics. Authors: Kazutaka Kamiya, MD, PhD1, Ik Hee Ryu, MD, MS2, Tae Keun Yoo, MD2, Jung Sub Kim MD2, In Sik Lee, MD, PhD2, Jin Kook Kim MD2, Wakako Ando CO3, Nobuyuki Shoji, MD, PhD3, Tomofusa, Yamauchi, MD, PhD4, Hitoshi Tabuchi, MD, PhD4. Author Affiliation: 1Visual Physiology, School of Allied Health Sciences, Kitasato University, Kanagawa, Japan, 2B&VIIT Eye Center, Seoul, Korea, 3Department of Ophthalmology, School of Medicine, Kitasato University, Kanagawa, Japan, 4Department of Ophthalmology, Tsukazaki Hospital, Hyogo, Japan.
We hypothesize that machine learning of preoperative biometric data obtained by the As-OCT may be clinically beneficial for predicting the actual ICL vault. Therefore, we built the machine learning model using Random Forest to predict ICL vault after surgery.
This multicenter study comprised one thousand seven hundred forty-five eyes of 1745 consecutive patients (656 men and 1089 women), who underwent EVO ICL implantation (V4c and V5 Visian ICL with KS-AquaPORT) for the correction of moderate to high myopia and myopic astigmatism, and who completed at least a 1-month follow-up, at Kitasato University Hospital (Kanagawa, Japan), or at B&VIIT Eye Center (Seoul, Korea).
This data file (RFR_model(feature=12).mat) is the final trained random forest model for MATLAB 2020a.
Python version:
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor

# Authenticate and mount Google Drive in Colab
from google.colab import auth
auth.authenticate_user()
from google.colab import drive
drive.mount('/content/gdrive')

# Load the preoperative biometric dataset
dataset = pd.read_csv('gdrive/My Drive/ICL/data_icl.csv')
dataset.head()

# Target: postoperative vault at 1 month; features: the remaining columns
y = dataset['Vault_1M']
X = dataset.drop(['Vault_1M'], axis=1)

train_X, test_X, train_y, test_y = train_test_split(X, y, test_size=0.2, random_state=0)

parameters = {'bootstrap': True, 'min_samples_leaf': 3, 'n_estimators': 500,
              'criterion': 'mae', 'min_samples_split': 10, 'max_features': 'sqrt',
              'max_depth': 6, 'max_leaf_nodes': None}

RF_model = RandomForestRegressor(**parameters)
RF_model.fit(train_X, train_y)
RF_predictions = RF_model.predict(test_X)
importance = RF_model.feature_importances_
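To gauge prediction error on the held-out split, one might add something like the following (an illustrative addition, not part of the original script):
from sklearn.metrics import mean_absolute_error

# Mean absolute error of the predicted vault on the 20% test split
mae = mean_absolute_error(test_y, RF_predictions)
print('Test MAE (in the units of Vault_1M):', mae)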
Please follow the steps below to download and use Kaggle data within Google Colab:
1) Upload your Kaggle API token:

from google.colab import files
files.upload()

Choose the kaggle.json file that you downloaded.

2) Make a directory named .kaggle and copy the kaggle.json file there:

! mkdir ~/.kaggle
! cp kaggle.json ~/.kaggle/

3) Change the permissions of the file:

! chmod 600 ~/.kaggle/kaggle.json

4) Check that everything is okay by listing the available datasets:

! kaggle datasets list

5) Use the unzip command to unzip the downloaded data into a train directory:

! unzip train.zip -d train
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This deposit contains the dataset and analysis code supporting the research paper "Recognition Without Implementation: Institutional Gaps and Forestry Expansion in Post-Girjas Swedish Sápmi" by Stefan Holgersson and Scott Brown.
Research Overview: This study examines forestry permit trends in Swedish Sámi territories following the landmark 2020 Girjas Supreme Court ruling, which recognized exclusive Sámi rights over hunting and fishing in traditional lands. Using 432 region-year observations (1998-2024) from the Swedish Forest Agency, we document a 242% increase in clearcutting approvals during 2020-2024 compared to pre-2020 averages, with state/corporate actors showing 313% increases and private landowners 197%.
Key Findings:
Important Limitation: We cannot isolate causal effects of the Girjas ruling from concurrent shocks including COVID-19 economic disruption, EU Taxonomy implementation, and commodity price volatility. The analysis documents institutional conditions and correlational patterns rather than establishing causation.
Dataset Contents:
Clearcut.xlsx: Swedish Forest Agency clearcutting permit data (1998-2024) disaggregated by region, ownership type, and year
SAMI.ipynb: Jupyter notebook containing Python code for descriptive statistics, time series analysis, and figure generation

How to Use These Files in Google Colab:
1. Upload SAMI.ipynb from your downloads.
2. Upload Clearcut.xlsx from your downloads to the /content/ directory.
3. The notebook reads Clearcut.xlsx from the current directory.

Alternative method (direct from Zenodo):
# Add this cell at the top of the notebook to download files directly
!wget https://zenodo.org/record/[RECORD_ID]/files/Clearcut.xlsx
Replace [RECORD_ID] with the actual Zenodo record number after publication.
Requirements: The notebook uses standard Python libraries: pandas, numpy, matplotlib, seaborn. These are pre-installed in Google Colab. No additional setup required.
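A minimal first cell for inspecting the permit data (the column layout is whatever the Swedish Forest Agency export uses, so this sketch only previews the sheet):
import pandas as pd

# Load the clearcutting permit data and preview its structure
clearcut = pd.read_excel('/content/Clearcut.xlsx')
print(clearcut.shape)
clearcut.head()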
Methodology: Descriptive statistical analysis combined with institutional document review. Data covers eight administrative regions in northern Sweden with mountain-adjacent forests relevant to Sámi reindeer herding territories.
Policy Relevance: Findings inform debates on Indigenous land rights implementation, forestry governance reform, ESG disclosure requirements, and the gap between legal recognition and operational constraints in resource extraction contexts.
Keywords: Indigenous rights, Sámi, forestry governance, legal pluralism, Sweden, Girjas ruling, land tenure, corporate accountability, ESG disclosure
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Dataset Card for "lsun_church_train"
Uploading the LSUN church train dataset for convenience. I've split this into 119,915 train and 6,312 test images, but if you want the original test set, see https://github.com/fyu/lsun. Notebook that I used to download and then upload this dataset: https://colab.research.google.com/drive/1_f-D2ENgmELNSB51L1igcnLx63PkveY2?usp=sharing
More Information needed
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset accompanies the study The Cultural Resource Curse: How Trade Dependence Undermines Creative Industries. It contains country-year panel data for 2000–2023 covering both OECD economies and the ten largest Latin American countries by land area. Variables include GDP per capita (constant PPP, USD), trade openness, internet penetration, education indicators, cultural exports per capita, and executive constraints from the Polity V dataset.
The dataset supports a comparative analysis of how economic structure, institutional quality, and infrastructure shape cultural export performance across development contexts. Within-country fixed effects models show that trade openness constrains cultural exports in OECD economies but has no measurable effect in resource-dependent Latin America. In contrast, strong executive constraints benefit cultural industries in advanced economies while constraining them in extraction-oriented systems. The results provide empirical evidence for a two-stage development framework in which colonial extraction legacies create distinct constraints on creative industry growth.
All variables are harmonized to ISO3 country codes and aligned on a common panel structure. The dataset is fully reproducible using the included Jupyter notebooks (OECD.ipynb, LATAM+OECD.ipynb, cervantes.ipynb).
Contents:
GDPPC.csv — GDP per capita series from the World Bank.
explanatory.csv — Trade openness, internet penetration, and education indicators.
culture_exports.csv — UNESCO cultural export data.
p5v2018.csv — Polity V institutional indicators.
Jupyter notebooks for data processing and replication.
Potential uses: Comparative political economy, cultural economics, institutional development, and resource curse research.
These steps reproduce the OECD vs. Latin America analyses from the paper using the provided CSVs and notebooks.
Click File → New notebook.
(Optional) If your files are in Google Drive, mount it:
from google.colab import drive
drive.mount('/content/drive')
You have two easy options:
A. Upload the 4 CSVs + notebooks directly
In the left sidebar, click the folder icon → Upload.
Upload: GDPPC.csv, explanatory.csv, culture_exports.csv, p5v2018.csv, and any .ipynb you want to run.
B. Use Google Drive
Put those files in a Drive folder.
After mounting Drive, refer to them with paths like /content/drive/MyDrive/your_folder/GDPPC.csv.
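Once the files are in place, a short cell like the following (a sketch; adjust paths if you used the Drive option) loads the four inputs for the notebooks to use:
import pandas as pd

# Load the four input files uploaded in option A (or point to your Drive folder)
gdppc = pd.read_csv('GDPPC.csv')
explanatory = pd.read_csv('explanatory.csv')
culture = pd.read_csv('culture_exports.csv')
polity = pd.read_csv('p5v2018.csv')

for name, frame in [('GDPPC', gdppc), ('explanatory', explanatory),
                    ('culture_exports', culture), ('p5v2018', polity)]:
    print(name, frame.shape)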
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Description:
This dataset accompanies the empirical analysis in Legality Without Justice, a study examining the relationship between public trust in institutions and perceived governance legitimacy using data from the World Values Survey Wave 7 (2017–2022). It includes:
WVS_Cross-National_Wave_7_csv_v6_0.csv — World Values Survey Wave 7 core data.
GDP.csv — World Bank GDP per capita (current US$) for 2022 by country.
denial.ipynb — Fully documented Jupyter notebook with code for data merging, exploratory statistics, and ordinal logistic regression using OrderedModel. Includes GDP as a control for institutional trust and perceived governance.
All data processing and analysis were conducted in Python using FAIR reproducibility principles and can be replicated or extended on Google Colab.
DOI: 10.5281/zenodo.16361108
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Authors: Anon Annotator
Publication date: 2025-07-23
Language: English
Version: 1.0.0
Publisher: Zenodo
Programming language: Python
Go to https://colab.research.google.com
Click File > Upload notebook, and upload the denial.ipynb file.
Also upload the CSVs (WVS_Cross-National_Wave_7_csv_v6_0.csv and GDP.csv) using the file browser on the left sidebar.
In denial.ipynb, ensure file paths match:
wvs = pd.read_csv('/content/WVS_Cross-National_Wave_7_csv_v6_0.csv')
gdp = pd.read_csv('/content/GDP.csv')
Execute the notebook cells from top to bottom. You may need to install required libraries:
!pip install statsmodels pandas numpy
The notebook performs:
Data cleaning
Merging WVS and GDP datasets
Summary statistics
Ordered logistic regression to test if confidence in courts/police (Q57, Q58) predicts belief that the country is governed in the interest of the people (Q183), controlling for GDP.
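For orientation, a hedged sketch of that core regression is given below; the WVS question codes Q57, Q58, and Q183 are taken from the description above, while the merge keys and the GDP column name are placeholders to be replaced with the actual identifiers in the files.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Placeholder key names ('B_COUNTRY_ALPHA', 'country_code') and GDP column
# ('gdp_per_capita') -- substitute the real column names from the two CSVs.
merged = wvs.merge(gdp, left_on='B_COUNTRY_ALPHA', right_on='country_code')

# Ordered logit: confidence in courts (Q57) and police (Q58) predicting
# perceived governance in the people's interest (Q183), controlling for GDP
data = merged[['Q183', 'Q57', 'Q58', 'gdp_per_capita']].dropna()
model = OrderedModel(data['Q183'], data[['Q57', 'Q58', 'gdp_per_capita']], distr='logit')
result = model.fit(method='bfgs')
print(result.summary())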
This is the data of a social media platform of an organization. You have been hired by the organization and given their social media data to analyze, visualize, and prepare a report on.
You are required to prepare a neat notebook using Jupyter Notebook/JupyterLab or Google Colab. Then zip everything, including the notebook file (.ipynb) and the dataset, and upload it through the Google Forms link stated below. The notebook should be neat, containing code with explanatory details, visualizations, and a description of your purpose for each task.
You are encouraged, but not limited, to go through general steps such as data cleaning, data preparation, exploratory data analysis (EDA), finding correlations, feature extraction, and more. (There is no limit to your skills and ideas.)
After doing what needs to be done, you are to give your organization insights and facts. For example, are they reaching more audiences on weekends? Does posting content on weekdays turn out to be more effective? Does posting many pieces of content on the same day make more sense? Or should they post content regularly and keep day-to-day consistency? Did you find any trend patterns in the data? What is your advice after completing the analysis? Mention these clearly at the end of the notebook. (These are just a few examples; your findings may be entirely different, and that is totally acceptable.)
Note that we will value clear documentation which states clear insights from the analysis of data and visualizations more than anything else. It will not matter how complex the methods you apply are if they ultimately do not find anything useful.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Instructions (with screenshots) to replicate results from Section 3 of the manuscript are available in "Step-by-step Instructions to Replicate Results.pdf".

Step 1: Download the replication materials
Download the whole replication folder on figshare containing the code, data and replication files.

Step 2: Replicate Tables in Section 3
All of the data is available inside the sub-folder replication/Data. To replicate Tables 1 and 2 from Section 3 of the manuscript, run the Python file replicate_section3_tables.py locally on your computer. This will produce two .csv files containing Tables 1 and 2 (already provided). Note that it is not necessary to run the code in order to replicate the tables. The output data needed for replication is provided.

Step 3: Replicate Figures in QGIS
The Figures must be replicated using QGIS, freely available at https://www.qgis.org/. Open the QGIS project replicate_figures.qgz inside the replication/Replicate Figures sub-folder. It should auto-find the layer data. The Figures are replicated as layers in the project.

Step 4: Running the code from scratch
The accompanying code for the manuscript IJGIS-2024-1305, entitled "Route-based Geocoding of Traffic Congestion-Related Social Media Texts on a Complex Network", runs on Google Colab as Python notebooks. Please follow the instructions below to run the entire geocoder and network mapper from scratch. The expected running time is of the order of 10 hours on free-tier Google Colab.

4a) Upload to Google Drive
Upload the entire replication folder to your Google Drive. Note the path (location) to which you have uploaded it. There are two Google Colab notebooks that need to be executed in their entirety. These are Code/Geocoder/The_Geocoder.ipynb and Code/Complex_Network/Complex_network_code.ipynb. They need to be run in order (Geocoder first and Complex Network second).

4b) Set the path
In each Google Colab notebook, you have to set the variable called "REPL_PATH" to the location on your Google Drive where you uploaded the replication folder. Include the replication folder in the path, for example "/content/drive/MyDrive/replication".

4c) Run the code
The code is available in two sub-folders, replication/Code/Geocoder and replication/Code/Complex_Network. You may simply open the Google Colab notebooks inside each folder, mount your Google Drive, set the path and run all cells.
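For step 4b, the relevant cells near the top of each notebook would look roughly like this (the path shown is the example from the instructions; change it to wherever you uploaded the folder):
from google.colab import drive

# Mount Google Drive and point REPL_PATH at the uploaded replication folder
drive.mount('/content/drive')
REPL_PATH = "/content/drive/MyDrive/replication"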
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
2,121,458 records
I used Google Colab to check out this dataset and pull the column names using Pandas.
Sample code example: Python Pandas read csv file compressed with gzip and load into Pandas dataframe https://pastexy.com/106/python-pandas-read-csv-file-compressed-with-gzip-and-load-into-pandas-dataframe
Columns: ['Date received', 'Product', 'Sub-product', 'Issue', 'Sub-issue', 'Consumer complaint narrative', 'Company public response', 'Company', 'State', 'ZIP code', 'Tags', 'Consumer consent provided?', 'Submitted via', 'Date sent to company', 'Company response to consumer', 'Timely response?', 'Consumer disputed?', 'Complaint ID']
I did not modify the dataset.
Use it to practice with dataframes - Pandas or PySpark on Google Colab:
!unzip complaints.csv.zip
import pandas as pd
df = pd.read_csv('complaints.csv')
df.columns
df.head() etc.
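If you want to practice with PySpark instead of Pandas, a minimal equivalent in Colab (an illustrative sketch) would be:
!pip install pyspark
from pyspark.sql import SparkSession

# Start a local Spark session and load the unzipped CSV
spark = SparkSession.builder.appName("complaints").getOrCreate()
sdf = spark.read.csv('complaints.csv', header=True, inferSchema=True)
sdf.printSchema()
print(sdf.count())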
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
Australia -> Aus200
Brazil -> Bra50 and MinDol
Spain -> Esp35
France -> Fra40
Germany -> Ger40
Hong Kong -> HkInd
Italy-> Ita40
Netherlands -> Neth25
Switzerland -> Swi20
United Kingdom -> UK100
United States -> Usa500, UsaTec and UsaRus
Note: the MinDol, Swi20 and Neth25 data were taken from their monthly contracts, because MetaTrader 5 does not have their continuous historical series (unlike the S&P 500, which has both 'Usa500' and 'Usa500Mar24'):
[Screenshot: futures_dailycontract.png]
import MetaTrader5 as mt5
import pandas as pd
import numpy as np
import pytz
from datetime import datetime
# LOGIN, "server" and the password are placeholders: use your own login and
# password if you have an account on a broker that supports MT5
if not mt5.initialize(login=LOGIN, server="server", password=""):
    print("initialize() failed, error code =", mt5.last_error())
    quit()
symbols = mt5.symbols_get()
list_symbols = []
for num in range(0, len(symbols)):
    list_symbols.append(symbols[num].name)
print(list_symbols)
list_futures = ['Aus200', 'Bra50', 'Esp35', 'Fra40', 'Ger40', 'HKInd', 'Ita40Mar24', 'Jp225', 'MinDolFeb24', 'Neth25Jan24', 'UK100', 'Usa500', 'UsaRus', 'UsaTec', 'Swi20Mar24']
time_frame = mt5.TIMEFRAME_D1
dynamic_vars = {}
time_zone = pytz.timezone('Etc/UTC')
time_start = datetime(2017, 1, 1, tzinfo= time_zone)
time_end = datetime(2023, 12, 31, tzinfo= time_zone)
for sym in list_futures:
    var = f'{sym}'
    rates = mt5.copy_rates_range(sym, time_frame, time_start, time_end)
    rates_frame = pd.DataFrame(rates)
    rates_frame['time'] = pd.to_datetime(rates_frame['time'], unit='s')
    rates_frame = rates_frame[['time', 'close']]
    rates_frame.rename(columns={'close': var}, inplace=True)
    dynamic_vars[var] = rates_frame
    dynamic_vars[sym].to_csv(f'{sym}.csv', index=False)
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
The dataset is a Neo4j knowledge graph based on TMF Business Process Framework v22.0 data.
CSV files contain data about the model entities, and the JSON file contains knowledge graph mapping.
The script used to generate CSV files based on the XML model can be found here.
To import the dataset, download the zip archive and upload it to Neo4j.
You can also check this dataset here.