Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset is based on the TravisTorrent dataset released 2017-01-11 (https://travistorrent.testroots.org), the Google BigQuery GHTorrent dataset accessed 2017-07-03, and the Git log history of all projects in the dataset, retrieved 2017-07-16 and 2017-07-17.
We selected projects hosted on GitHub that employ the Continuous Integration (CI) system Travis CI. We identified the projects using the TravisTorrent data set and considered projects that:
used GitHub from the beginning (first commit not more than seven days before project creation date according to GHTorrent),
were active for at least one year (365 days) before the first build with Travis CI (before_ci),
used Travis CI at least for one year (during_ci),
had commit or merge activity on the default branch in both of these phases, and
used the default branch to trigger builds.
To derive the time frames, we employed the GHTorrent BigQuery data set. The resulting sample contains 113 projects. Of these projects, 89 are Ruby projects and 24 are Java projects. For our analysis, we only consider the activity one year before and after the first build.
We cloned the selected project repositories and extracted the version history for all branches (see https://github.com/sbaltes/git-log-parser). For each repo and branch, we created one log file with all regular commits and one log file with all merges. We only considered commits changing non-binary files and applied a file extension filter to only consider changes to Java or Ruby source code files. From the log files, we then extracted metadata about the commits and stored this data in CSV files (see https://github.com/sbaltes/git-log-parser).
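A minimal sketch of this extraction step (not the actual git-log-parser tool; the separator, output columns, and paths are illustrative):

import csv
import subprocess

# One record per commit: hash, author name/email/date, committer name/email/date
LOG_FORMAT = "%H;%an;%ae;%aI;%cn;%ce;%cI"

def extract_commit_metadata(repo_path, branch, out_csv):
    # --no-merges yields the "regular commits" log; a second run with --merges yields the merge log
    log = subprocess.run(
        ["git", "-C", repo_path, "log", branch, "--no-merges",
         f"--pretty=format:{LOG_FORMAT}"],
        capture_output=True, text=True, check=True,
    ).stdout
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["hash_value", "author_name", "author_email", "author_date",
                         "commit_name", "commit_email", "commit_date"])
        for line in log.splitlines():
            if line.strip():
                writer.writerow(line.split(";"))

extract_commit_metadata("path/to/repo", "master", "commits_master.csv")  # hypothetical path and branch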
We also retrieved a random sample of GitHub projects to validate the effects we observed in the CI project sample. We only considered projects that:
have Java or Ruby as their project language
used GitHub from the beginning (first commit not more than seven days before project creation date according to GHTorrent)
have commit activity for at least two years (730 days)
are engineered software projects (at least 10 watchers)
were not in the TravisTorrent dataset
In total, 8,046 projects satisfied those constraints. We drew a random sample of 800 projects from this sampling frame and retrieved the commit and merge data in the same way as for the CI sample. We then split the development activity at the median development date, removed projects without commits or merges in either of the two resulting time spans, and then manually checked the remaining projects to remove the ones with CI configuration files. The final comparison sample contained 60 non-CI projects.
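A sketch of the median split and filtering step in pandas (the input file names are hypothetical; project and commit_date correspond to the columns documented below):

import pandas as pd

commits = pd.read_csv("comparison_commits.csv", parse_dates=["commit_date"])  # hypothetical input
merges = pd.read_csv("comparison_merges.csv", parse_dates=["commit_date"])    # hypothetical input

kept = []
for project, pc in commits.groupby("project"):
    pm = merges[merges["project"] == project]
    mid = pc["commit_date"].quantile(0.5)  # median development date of the project
    before = (pc["commit_date"] < mid).any() or (pm["commit_date"] < mid).any()
    after = (pc["commit_date"] >= mid).any() or (pm["commit_date"] >= mid).any()
    if before and after:  # keep only projects with commits or merges in both time spans
        kept.append(project)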
This dataset contains the following files:
tr_projects_sample_filtered_2.csv A CSV file with information about the 113 selected projects.
tr_sample_commits_default_branch_before_ci.csv tr_sample_commits_default_branch_during_ci.csv One CSV file with information about all commits to the default branch before and after the first CI build. Only commits modifying, adding, or deleting Java or Ruby source code files were considered. Those CSV files have the following columns:
project: GitHub project name ("/" replaced by "_").
branch: The branch to which the commit was made.
hash_value: The SHA1 hash value of the commit.
author_name: The author name.
author_email: The author email address.
author_date: The authoring timestamp.
commit_name: The committer name.
commit_email: The committer email address.
commit_date: The commit timestamp.
log_message_length: The length of the git commit messages (in characters).
file_count: Files changed with this commit.
lines_added: Lines added to all files changed with this commit.
lines_deleted: Lines deleted in all files changed with this commit.
file_extensions: Distinct file extensions of files changed with this commit.
tr_sample_merges_default_branch_before_ci.csv tr_sample_merges_default_branch_during_ci.csv One CSV file with information about all merges into the default branch before and after the first CI build. Only merges modifying, adding, or deleting Java or Ruby source code files were considered. Those CSV files have the following columns:
project: GitHub project name ("/" replaced by "_").
branch: The destination branch of the merge.
hash_value: The SHA1 hash value of the merge commit.
merged_commits: Unique hash value prefixes of the commits merged with this commit.
author_name: The author name.
author_email: The author email address.
author_date: The authoring timestamp.
commit_name: The committer name.
commit_email: The committer email address.
commit_date: The commit timestamp.
log_message_length: The length of the git commit messages (in characters).
file_count: Files changed with this commit.
lines_added: Lines added to all files changed with this commit.
lines_deleted: Lines deleted in all files changed with this commit.
file_extensions: Distinct file extensions of files changed with this commit.
pull_request_id: ID of the GitHub pull request that has been merged with this commit (extracted from log message).
source_user: GitHub login name of the user who initiated the pull request (extracted from log message).
source_branch: Source branch of the pull request (extracted from log message).
comparison_project_sample_800.csv A CSV file with information about the 800 projects in the comparison sample.
commits_default_branch_before_mid.csv commits_default_branch_after_mid.csv One CSV file with information about all commits to the default branch before and after the median date of the commit history. Only commits modifying, adding, or deleting Java or Ruby source code files were considered. Those CSV files have the same columns as the commits tables described above.
merges_default_branch_before_mid.csv merges_default_branch_after_mid.csv One CSV file with information about all merges into the default branch before and after the median date of the commit history. Only merges modifying, adding, or deleting Java or Ruby source code files were considered. Those CSV files have the same columns as the merge tables described above.
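As a usage sketch, the commit tables described above can be loaded with pandas and compared per project (column names as documented):

import pandas as pd

before = pd.read_csv("tr_sample_commits_default_branch_before_ci.csv")
during = pd.read_csv("tr_sample_commits_default_branch_during_ci.csv")

summary = pd.DataFrame({
    "commits_before_ci": before.groupby("project").size(),
    "commits_during_ci": during.groupby("project").size(),
    "lines_added_before_ci": before.groupby("project")["lines_added"].sum(),
    "lines_added_during_ci": during.groupby("project")["lines_added"].sum(),
})
print(summary.describe())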
CC0 1.0 (Public Domain): https://creativecommons.org/publicdomain/zero/1.0/
This dataset was generated by respondents to a distributed survey via Amazon Mechanical Turk between 03.12.2016 and 05.12.2016. Thirty eligible Fitbit users consented to the submission of personal tracker data, including minute-level output for physical activity, heart rate, and sleep monitoring. Individual reports can be parsed by export session ID (column A) or timestamp (column B). Variation between outputs represents the use of different types of Fitbit trackers and individual tracking behaviors/preferences.
This is the list of manipulations performed on the original dataset, published by Möbius.
All of the cleaning and rearrangement steps were performed in BigQuery, using SQL functions.
1) After I took a closer look at the source dataset, I realized that for my case study I did not need some of the tables contained in the original archive. Therefore, I decided not to import:
- dailyCalories_merged.csv,
- dailyIntensities_merged.csv,
- dailySteps_merged.csv,
as they proved redundant; their content can be found in the dailyActivity_merged.csv file.
In addition, the files
- minutesCaloriesWide_merged.csv,
- minutesIntensitiesWide_merged.csv,
- minuteStepsWide_merged.csv
were not imported, as they present the same data contained in other files, only in a wide format. Hence, only the long-format files containing the same data were imported into the BigQuery database.
2) To be able to compare and measure the correlation among different variables based on hourly records, I created a new table using a LEFT JOIN on the columns Id and ActivityHour. I repeated the same JOIN on the tables with minute records. Hence, I obtained 2 new tables: hourly_activity.csv and minute_activity.csv.
3) To validate most of the columns containing DATE and DATETIME values that had been imported as the STRING data type, I used the PARSE_DATE() and PARSE_DATETIME() functions. While importing the files
- heartrate_seconds_merged.csv,
- hourlyCalories_merged.csv,
- hourlyIntensities_merged.csv,
- hourlySteps_merged.csv,
- minutesCaloriesNarrow_merged.csv,
- minuteIntensitiesNarrow_merged.csv,
- minuteMETsNarrow_merged.csv,
- minuteSleep_merged.csv,
- minuteSteps_merged.csv,
- sleepDay_merged.csv,
- weightLogInfo_merged.csv
into BigQuery, it was necessary to import the DATETIME and DATE columns as STRING, because the original syntax used in the CSV files could not be recognized as a valid DATETIME value, due to the "AM" and "PM" text at the end of the expression.
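A sketch combining steps 2) and 3) with the BigQuery Python client (the dataset path my_project.fitbit and the non-key columns Calories, TotalIntensity, and StepTotal are assumptions; only Id and ActivityHour come from the description above):

from google.cloud import bigquery

client = bigquery.Client()
query = """
SELECT
  cal.Id,
  PARSE_DATETIME('%m/%d/%Y %I:%M:%S %p', cal.ActivityHour) AS activity_hour,  -- handles the trailing AM/PM text
  cal.Calories,           -- assumed column in hourlyCalories_merged
  intens.TotalIntensity,  -- assumed column in hourlyIntensities_merged
  steps.StepTotal         -- assumed column in hourlySteps_merged
FROM `my_project.fitbit.hourlyCalories_merged` AS cal
LEFT JOIN `my_project.fitbit.hourlyIntensities_merged` AS intens
  ON cal.Id = intens.Id AND cal.ActivityHour = intens.ActivityHour
LEFT JOIN `my_project.fitbit.hourlySteps_merged` AS steps
  ON cal.Id = steps.Id AND cal.ActivityHour = steps.ActivityHour
"""
client.query(query).to_dataframe().to_csv("hourly_activity.csv", index=False)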
Landsat is an ongoing mission of Earth observation satellites developed under a joint program of the USGS and NASA. The Landsat mission provides the longest continuous space-based record of Earth's land, dating back to 1972 and the Landsat 1 satellite. Starting with Landsat 4, each of the satellites imaged the Earth's surface at a 30-meter resolution about once every two weeks using multispectral and thermal instruments. This collection includes the complete USGS archive from Landsat 4, 5, 7, and 8. It covers their full operational lifetimes, with over four million unique scenes over 35 years:
Landsat 4: 1982 - 1993
Landsat 5: 1984 - 2013
Landsat 7: 1999 - present
Landsat 8: 2013 - present
Landsat data has set the standard for Earth observation data due to the length of the mission and the rich data provided by the multispectral sensors. Landsat data has proven invaluable to agriculture, geology, forestry, regional planning, education, mapping, and tracking global change. Landsat images have also been invaluable for emergency response and disaster relief. The image data can be used easily with any software that recognizes GeoTIFF files. Each scene also includes metadata in an accompanying text file. To help locate data of interest, an index CSV file of the Landsat data is available. This CSV file lists basic properties of the available images, including their acquisition dates and their spatial extent as minimum and maximum latitudes and longitudes. The file is found in the Landsat Cloud Storage bucket: gs://gcp-public-data-landsat/index.csv.gz
Alternatively, this index data is available in BigQuery for you to easily query using SQL, at: https://bigquery.cloud.google.com/table/bigquery-public-data:cloud_storage_geo_index.landsat_index
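A sketch of querying that index from Python (the table path comes from the link above; the column names are assumptions about the index schema and should be checked against the table):

from google.cloud import bigquery

client = bigquery.Client()
query = """
SELECT scene_id, sensing_time, north_lat, south_lat, west_lon, east_lon, base_url  -- column names assumed
FROM `bigquery-public-data.cloud_storage_geo_index.landsat_index`
WHERE spacecraft_id = 'LANDSAT_8'
  AND north_lat >= 47.0 AND south_lat <= 48.0   -- scenes overlapping a small bounding box
  AND west_lon <= 8.0 AND east_lon >= 7.0
LIMIT 100
"""
scenes = client.query(query).to_dataframe()
print(scenes[["scene_id", "sensing_time", "base_url"]].head())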
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Content of this repository
This is the repository that contains the scripts and dataset for the MSR 2019 mining challenge.
GitHub repository with the software used: here.
=======
DATASET
The dataset was retrieved using Google BigQuery and dumped to a CSV file for further processing. This original, untreated file is called jsanswers.csv; in it we can find the following information:
1. The Id of the question (PostId)
2. The Content (in this case the code block)
3. The length of the code block
4. The line count of the code block
5. The score of the post
6. The title
A quick look at this file shows that a PostId can have multiple rows related to it; that is how multiple code blocks are saved in the database.
Filtered Dataset:
Extracting code from CSV
We used a Python script called "ExtractCodeFromCSV.py" to extract the code from the original CSV and merge all code blocks into their respective JavaScript file, named after the PostId. This resulted in 336 thousand files.
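A minimal sketch of that grouping step (not the actual ExtractCodeFromCSV.py; the column headers, assumed here to be PostId and Content, may differ in jsanswers.csv):

import os
import pandas as pd

df = pd.read_csv("jsanswers.csv")
os.makedirs("js_files", exist_ok=True)

# Concatenate all code blocks belonging to the same post into one .js file named after the PostId
for post_id, rows in df.groupby("PostId"):
    code = "\n\n".join(rows["Content"].astype(str))
    with open(os.path.join("js_files", f"{post_id}.js"), "w", encoding="utf-8") as f:
        f.write(code)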
Running ESlint
Due to the single-threaded nature of ESLint, running it directly on 336 thousand files took a huge toll on the machine, so we created a script to drive it, named "ESlintRunnerScript.py". It splits the files into 20 evenly distributed parts and runs 20 ESLint processes to generate the reports; as a result, it produces 20 JSON files.
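A sketch of that splitting approach (not the actual ESlintRunnerScript.py; directory names and the ESLint invocation via npx are illustrative):

import glob
import subprocess

files = sorted(glob.glob("js_files/*.js"))
n_parts = 20
chunks = [files[i::n_parts] for i in range(n_parts)]  # 20 roughly equal parts

processes = []
for i, chunk in enumerate(chunks):
    if not chunk:
        continue
    # ESLint's JSON formatter produces one machine-readable report per chunk
    with open(f"report_{i}.json", "w") as out:
        processes.append(subprocess.Popen(["npx", "eslint", "--format", "json", *chunk], stdout=out))

for p in processes:
    p.wait()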
Number of Violations per Rule
This information was extracted using the script named "parser.py". It generated the file "NumberofViolationsPerRule.csv", which contains the number of violations per rule used in the linter configuration across the dataset.
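A sketch of that aggregation over the generated reports (not the actual parser.py; it relies on ESLint's JSON report format, where each file entry carries a messages list with a ruleId per violation):

import csv
import glob
import json
from collections import Counter

rule_counts = Counter()
for report_path in glob.glob("report_*.json"):
    with open(report_path) as f:
        for file_result in json.load(f):
            for message in file_result.get("messages", []):
                if message.get("ruleId"):  # parsing errors carry ruleId = None and are skipped
                    rule_counts[message["ruleId"]] += 1

with open("NumberofViolationsPerRule.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["rule", "violations"])
    writer.writerows(rule_counts.most_common())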
Number of violations per Category
To produce relevant statistics about the dataset, we also generated the number of violations per rule category as defined on the ESLint website; this information was extracted using the same "parser.py" script.
Individual Reports
This information was extracted from the JSON reports; it is a CSV file with the PostId and the violations per rule.
Rules
The file "Rules with categories" contains all the rules used and their categories.
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
Here is a description of how the datasets for the training notebook used for the Telegram ML Contest solution were prepared.
The first part of the code samples was taken from a private version of this notebook.
Here are the statistics about the classes of programming languages from the GitHub Code Snippets database:
[Chart: https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F833757%2F2fdc091661198e80559f8cb1d1a306ff%2FScreenshot%202023-11-07%20at%2021.24.42.png?generation=1699390166413391&alt=media]
From this database, 2 CSV files were created with 50,000 code samples for each of the 20 programming languages included, using equal-number and stratified sampling, respectively. The related files are sample_equal_prop_50000.csv and sample_stratified_50000.csv.
The second option for capturing additional examples was to run this notebook with a larger number of queries, 10,000.
The resulting file is dataset-10000.csv, included in the data card.
The statistics for the programming languages are shown on the next chart; it has 32 labeled classes:
[Chart: https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F833757%2F7c04342da8ec1df266cd90daf00204f9%2FScreenshot%202023-10-13%20at%2020.52.13.png?generation=1699392769199533&alt=media]
To make the model more robust, code samples of 20 additional languages were collected, about 10 to 15 samples each, covering more or less popular use cases. Also, for the class "OTHER" (regular natural-language examples, as required by the task of the competition), the text examples from this dataset with prompts on Hugging Face were added to the file. The resulting file here is rare_languages.csv, also in the data card.
The statistics for the rare-language code snippets are as follows:
[Chart: https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F833757%2F0b340781c774d2acb988ce1567f4afa3%2FScreenshot%202023-11-08%20at%2001.13.07.png?generation=1699402436798661&alt=media]
For this stage of dataset creation, the columns in sample_equal_prop_50000.csv and sample_stratified_50000.csv were cut down to just two, "snippet" and "language". The version of the file with equal numbers is in the data card as sample_equal_prop_50000_clean.csv.
To prepare the BigQuery dataset file, the index column was removed and the column "content" was renamed to "snippet". These changes were saved in dataset-10000-clean.csv.
After that, the files sample_equal_prop_50000_clean.csv and dataset-10000-clean.csv were combined and saved as github-combined-file.csv.
The prepared files took too much RAM to be read with the pandas library, which is why additional preprocessing was done: symbols such as quotes, commas, ampersands, newlines, and tab characters were cleaned out. After cleaning, the files were merged with the rare_languages.csv file and saved as github-combined-file-no-symbols-rare-clean.csv and sample_equal_prop_50000_-no-symbols-rare-clean.csv, respectively.
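A sketch of that cleaning and merging step (the exact character set, chunk size, and the columns of rare_languages.csv are assumptions; the file and column names otherwise come from the steps above):

import pandas as pd

def strip_symbols(text: str) -> str:
    # Remove characters that reportedly inflated memory use when loading the combined file
    for ch in ['"', "'", ",", "&", "\n", "\t"]:
        text = text.replace(ch, " ")
    return text

# Read the large combined file in chunks instead of all at once, cleaning only the snippet column
cleaned_chunks = []
for chunk in pd.read_csv("github-combined-file.csv", chunksize=100_000):
    chunk["snippet"] = chunk["snippet"].astype(str).map(strip_symbols)
    cleaned_chunks.append(chunk)

rare = pd.read_csv("rare_languages.csv")  # assumed to share the "snippet"/"language" columns
pd.concat(cleaned_chunks + [rare], ignore_index=True).to_csv(
    "github-combined-file-no-symbols-rare-clean.csv", index=False)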
The final distribution of classes turned out as follows:
[Chart: https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F833757%2Ff43e0cea4c565c9f7c808527b0dfa2da%2FScreenshot%202023-11-09%20at%2020.26.30.png?generation=1699558064765454&alt=media]
To be suitable for the TF-DF format, each programming language was also assigned a label. The final labels are in the data card.
I used dailyActivity_merged and hourlyIntensities_merged from the FitBit Fitness Tracker Data (Möbius, owner). I used BigQuery with SQL to pre-clean and analyze the data, and I saved the results of this analysis in .csv files. I imported the files chosen for analysis into BigQuery:
dailyActivity_merged --> dailyActivity_merged (table)
hourlyIntensities_merged --> hourlyIntensities_mo (table)
and I aggregated the data with SQL.
"Studying the activity progress from the first day till the last day of observation"
select
ActivityDate,
max(TotalSteps) as max_total_steps,
max(TotalDistance) as max_total_distance,
max(Calories) as max_calories,
round(avg(TotalSteps)) as avg_total_steps,
round(avg(TotalDistance)) as avg_total_distance,
round(avg(Calories)) as avg_calories,
count(distinct(id)) as number_of_active_people
from casestuy.BellaBeatData.dailyActivity_merged
group by ActivityDate
order by ActivityDate asc;
I saved the result in a .csv file named activity_progress_from_the_first_to_the_last_day.csv
"Studying the average activities intensity per each day of the week"
select
FORMAT_TIMESTAMP("%w", ActivityDay) AS dow_number,
FORMAT_TIMESTAMP("%A", ActivityDay) AS day_of_week,
max(SedentaryMinutes) as max_sedentary_minutes,
max(LightlyActiveMinutes) as max_lightly_active_minutes,
max(FairlyActiveMinutes) as max_fairly_active_minutes,
max(VeryActiveMinutes) as max_very_active_minutes,
min(SedentaryMinutes) as min_sedentary_minutes,
min(LightlyActiveMinutes) as min_lightly_active_minutes,
min(FairlyActiveMinutes) as min_fairly_active_minutes,
min(VeryActiveMinutes) as min_very_active_minutes,
round(avg(SedentaryMinutes)) as avg_sedentary_minutes,
round(avg(LightlyActiveMinutes)) as avg_lightly_active_minutes,
round(avg(FairlyActiveMinutes)) as avg_fairly_active_minutes,
round(avg(VeryActiveMinutes)) as avg_very_active_minutes
from casestuy.BellaBeatData.dailyIntensities_merged
group by dow_number, day_of_week
order by dow_number asc;
I saved the result in a .csv file named avg_max_intensities_for_each_day_of_the_week.csv
"Studying number of days in which steps are above or below average"
select id,
round(avg_steps) as average_steps,
sum(count_step_greater_avg) as number_of_days_above_avg_steps,
sum(count_step_less_avg) as number_of_days_below_avg_steps,
case when sum(count_step_greater_avg) > sum(count_step_less_avg) then 1 else 0 end as people_with_more_days_above_avg_steps,
case when sum(count_step_greater_avg) < sum(count_step_less_avg) then 1 else 0 end as people_with_more_days_below_avg_steps,
case when round(avg_steps) >7500 then 1 else 0 end as people_with_avg_steps_above_7500_recommended,
case when round(avg_steps) <7500 then 1 else 0 end as people_with_avg_steps_below_7500_recommended
from (
select a.Id,avg_steps, case when a.TotalSteps > table_avg_steps.avg_steps then 1 else 0 end as count_step_greater_avg,
case when a.TotalSteps < table_avg_steps.avg_steps then 1 else 0 end as count_step_less_avg
from
(select id, avg(TotalSteps) as avg_steps
from casestuy.BellaBeatData.dailyActivity_merged
group by id) table_avg_steps, casestuy.BellaBeatData.dailyActivity_merged as a
where table_avg_steps.Id=a.Id)
group by Id, average_steps, people_with_avg_steps_below_7500_recommended, people_with_avg_steps_above_7500_recommended
order by Id;
I saved the result in a .csv file named number_of_day_with_steps_greater_or_lower_the_avg.csv
This dataset shows reported crimes that happened in the City of Chicago from 2001 to the present, excluding the most recent seven days, with the exception of murders, for which data is available for each victim. The data is taken from the CLEAR (Citizen Law Enforcement Analysis and Reporting) system of the Chicago Police Department. Addresses are only displayed at the block level, and exact locations are not specified, in order to preserve the anonymity of crime victims. The data includes reports that were given to the Police Department but have not been validated. There is always a chance of human or technological error, and the initial crime classifications may change later based on further investigation. Therefore, the Chicago Police Department does not guarantee (either expressed or implied) the accuracy, completeness, timeliness, or correct sequencing of the information, and the information should not be used for comparison purposes over time.
CC0 1.0 (Public Domain): https://creativecommons.org/publicdomain/zero/1.0/
The CMS National Plan and Provider Enumeration System (NPPES) was developed as part of the Administrative Simplification provisions in the original HIPAA act. The primary purpose of NPPES was to develop a unique identifier for each physician that billed Medicare and Medicaid. This identifier is now known as the National Provider Identifier Standard (NPI), which is a required 10-digit number that is unique to an individual provider at the national level.
Once an NPI record is assigned to a healthcare provider, the parts of the NPI record that have public relevance, including the provider's name, specialty, and practice address, are published on a searchable website as well as in a downloadable file of zipped data containing all of the FOIA-disclosable health care provider data in NPPES, along with a separate PDF file of code values that documents and lists the descriptions for all of the codes found in the data file.
The dataset contains the latest NPI downloadable file in an easy-to-query BigQuery table, npi_raw. In addition, there is a second table, npi_optimized, which harnesses the power of BigQuery's next-generation columnar storage format to provide an analytical view of the NPI data, containing description fields for the codes based on the mappings in the Data Dissemination Public File - Code Values documentation, as well as external lookups to the healthcare provider taxonomy codes. While this generates hundreds of columns, BigQuery makes it possible to process all this data effectively and have a convenient single lookup table for all provider information.
Fork this kernel to get started.
https://console.cloud.google.com/marketplace/details/hhs/nppes?filter=category:science-research
Dataset Source: Center for Medicare and Medicaid Services. This dataset is publicly available for anyone to use under the following terms provided by the Dataset Source - http://www.data.gov/privacy-policy#data_policy — and is provided "AS IS" without any warranty, express or implied, from Google. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.
Banner Photo by @rawpixel from Unsplash.
What are the top ten most common types of physicians in Mountain View?
What are the names and phone numbers of dentists in California who studied public health?
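For the first of these questions, a query sketch might look like the following (the table path and all column names are assumptions about the npi_optimized schema, not confirmed by this description):

from google.cloud import bigquery

client = bigquery.Client()
query = """
SELECT
  healthcare_provider_taxonomy_1_classification AS provider_type,               -- assumed column name
  COUNT(*) AS provider_count
FROM `bigquery-public-data.nppes.npi_optimized`                                 -- assumed table path
WHERE provider_business_practice_location_address_city_name = 'MOUNTAIN VIEW'   -- assumed column name
GROUP BY provider_type
ORDER BY provider_count DESC
LIMIT 10
"""
print(client.query(query).to_dataframe())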
CC0 1.0 (Public Domain): https://creativecommons.org/publicdomain/zero/1.0/
This dataset contains information on Chicago crime reported between 2015 and 2020.
This dataset is a subset of the BigQuery public database on Chicago Crime.
I appreciate the efforts of BigQuery in hosting and allowing access to their public databases, and of Kaggle in providing a space for the widespread sharing of data and knowledge.
This dataset is a useful learning tool for applying descriptive statistics, analytics, and visualisations. For example, one could look at crime trends over time, identify areas with the lowest amount of crime, calculate the probability that an arrest is made based on crime type or area, and determine the days of the week with the highest and lowest crime.
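For example, the arrest-probability question can be sketched as a BigQuery query (the table path bigquery-public-data.chicago_crime.crime and the primary_type, arrest, and year columns are assumptions based on the public dataset's commonly documented schema):

from google.cloud import bigquery

client = bigquery.Client()
query = """
SELECT
  primary_type,
  COUNT(*) AS reported_crimes,
  ROUND(AVG(IF(arrest, 1, 0)), 3) AS arrest_rate  -- share of reports that led to an arrest
FROM `bigquery-public-data.chicago_crime.crime`
WHERE year BETWEEN 2015 AND 2020
GROUP BY primary_type
ORDER BY reported_crimes DESC
"""
print(client.query(query).to_dataframe())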
This dataset is a custom reference of Google Analytics field definitions.
It was specifically compiled to enhance datasets like the Google Analytics 360 data from the Google Merchandise Store, which lacks field descriptions in its original BigQuery schema. By providing detailed definitions for each field, this reference aims to improve the interpretability of the data—especially when used by language models or analytics tools that rely on contextual understanding to process and answer queries effectively.