Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset consists of multiple files which contain bug prediction training data.
The entries in the dataset are JavaScript functions that are either buggy or non-buggy. Bug-related information was obtained from the ESLint project contained in BugsJS (https://github.com/BugsJS/eslint). The buggy instances were collected throughout the lifetime of the project; however, we added non-buggy entries from the latest version, which is tagged as fix (entries that were previously included as buggy were not included as non-buggy later on).
The dataset is based on hybrid call graphs constructed by https://github.com/sed-szeged/hcg-js-framework. This tool produces a call graph in which each edge is associated with a confidence level indicating how likely the edge is to be a valid call edge.
We used different threshold values at or above which edges were considered valid (a filtering sketch follows the list below). The following threshold values were used:
0.00
0.05
0.20
0.30
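As an illustration, here is a minimal sketch of how such a confidence threshold might be applied. The (source, target, confidence) edge format is an assumption for illustration only; the actual output format of hcg-js-framework may differ.

```python
# Sketch: keep only call edges whose confidence meets the chosen threshold.
# The (source, target, confidence) edge format is assumed for illustration.

THRESHOLDS = [0.00, 0.05, 0.20, 0.30]

def filter_edges(edges, threshold):
    """Return the call edges considered valid at the given confidence threshold."""
    return [(src, dst) for src, dst, conf in edges if conf >= threshold]

edges = [
    ("a.js:foo", "b.js:bar", 0.95),  # static edge, high confidence
    ("a.js:foo", "c.js:baz", 0.10),  # dynamic-only edge, low confidence
]

for t in THRESHOLDS:
    print(t, filter_edges(edges, t))
```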
The prefix in the dataset file names comes from the threshold used. The datasets include the coupling metrics NII (Number of Incoming Invocations) and NOI (Number of Outgoing Invocations), which were calculated by a static source code analyzer called SourceMeter. The hybrid counterparts of these metrics (HNII and HNOI) are based on the given threshold values.
There are four variants of each of these datasets (a loading sketch follows the list):
Both static (NII, NOI) and hybrid (HNII, HNOI) coupling metrics are included, with additional static source code metrics and information about the entries (file without any postfix). Columns contained only in this variant are:
ID
Name
Longname
Parent ID
Component ID
Path
Line
Column
EndLine
EndColumn
Both static (NII, NOI) and hybrid (HNII, HNOI) coupling metrics are included with additional static source code metrics (file with '_h+s' postfix)
Only static (NII, NOI) coupling metrics are included with additional static source code metrics (file with '_s' postfix)
Only hybrid (HNII, HNOI) coupling metrics are included with additional static source code metrics (file with '_h' postfix)
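As a rough illustration of the naming scheme described above (threshold prefix plus variant postfix), a loading sketch follows; the exact file name pattern is an assumption.

```python
import pandas as pd

# Sketch: build a dataset file name from the threshold prefix and variant
# postfix described above. The exact naming pattern is an assumption.
def load_variant(threshold: str, postfix: str = "") -> pd.DataFrame:
    return pd.read_csv(f"{threshold}{postfix}.csv")

df_full = load_variant("0.20")          # with entry info columns (no postfix)
df_hs = load_variant("0.20", "_h+s")    # static + hybrid coupling metrics
print(df_hs[["NII", "NOI", "HNII", "HNOI"]].describe())
```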
Static source code metrics contained in all datasets are the following:
McCC - McCabe Cyclomatic Complexity
NL - Nesting Level
NLE - Nesting Level Else If
CD - Comment Density
CLOC - Comment Lines of Code
DLOC - Documentation Lines of Code
TCD - Total Comment Density (comment lines in embedded functions are also considered)
TCLOC - Total Comment Lines of Code (comment lines in embedded functions are also considered)
LLOC - Logical Lines of Code (Comment and empty lines not counted)
LOC - Lines of Code (Comment and empty lines are counted)
NOS - Number of Statements
NUMPAR - Number of Parameters
TLLOC - Logical Lines of Code (Lines in embedded functions are also counted)
TLOC - Lines of Code (Lines in embedded functions are also counted)
TNOS - Total Number of Statements (Statements in embedded functions are also counted)
Open Government Licence 3.0: http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/
License information was derived automatically
This Website Statistics dataset has four resources showing usage of the Lincolnshire Open Data website. Web analytics terms used in each resource are defined in their accompanying Metadata file.
Website Usage Statistics: This document shows a statistical summary of usage of the Lincolnshire Open Data site for the latest calendar year.
Website Statistics Summary: This dataset shows a website statistics summary for the Lincolnshire Open Data site for the latest calendar year.
Webpage Statistics: This dataset shows statistics for individual Webpages on the Lincolnshire Open Data site by calendar year.
Dataset Statistics: This dataset shows cumulative totals for Datasets on the Lincolnshire Open Data site that have also been published on the national Open Data site Data.Gov.UK - see the Source link.
Note: Website and Webpage statistics (the first three resources above) show only UK users and exclude API calls (automated requests for datasets). The Dataset Statistics are confined to users with JavaScript enabled, which excludes web crawlers and API calls.
These Website Statistics resources are updated annually in January by the Lincolnshire County Council Business Intelligence team. For any enquiries about the information, contact opendata@lincolnshire.gov.uk.
saurabh5/rlvr-code-data-JavaScript dataset hosted on Hugging Face and contributed by the HF Datasets community
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
After scenario questionnaire results. The data contains the results of the After Scenario Questionnaire answered by 14 participants. (CSV 149 kb)
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains an anonymized list of surveyed developers who provided their expertise level on three popular JavaScript libraries:
ReactJS, a library for building enriched web interfaces
MongoDB, a driver for accessing MongoDB databases
Socket.IO, a library for realtime communication
AutoTrain Dataset for project: javascript-traing-1
Dataset Description
This dataset has been automatically processed by AutoTrain for project javascript-traing-1.
Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
Data Instances
A sample from this dataset looks as follows: [ { "target": "test/NavbarSpec.js", "feat_repo_name": "aabenoja/react-bootstrap", "text": "import React from 'react'; import… See the full description on the dataset page: https://huggingface.co/datasets/ars-1/autotrain-data-javascript-traing-1.
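A minimal loading sketch using the Hugging Face datasets library; the split names are not stated above, so the sketch simply takes the first split that exists.

```python
from datasets import load_dataset

# Sketch: load the AutoTrain dataset from the Hugging Face Hub and inspect
# a sample. Split names are not given above, so we just take the first one.
ds = load_dataset("ars-1/autotrain-data-javascript-traing-1")
split = next(iter(ds))            # e.g. "train"
sample = ds[split][0]
print(sample["target"], sample["feat_repo_name"])
```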
This dataset contains the predicted prices of the asset JavaScript over the next 16 years. The data is calculated initially using a default 5 percent annual growth rate; after page load, a sliding-scale component lets the user further adjust the growth rate to their own positive or negative projections. The maximum positive adjustable growth rate is 100 percent, and the minimum is -100 percent.
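The projection described above is plain compound growth; a minimal sketch, with a hypothetical starting price:

```python
# Sketch: project a price over 16 years at an adjustable annual growth rate,
# as described above. The starting price of 100.0 is a hypothetical placeholder.
def project(price: float, rate: float, years: int = 16):
    """rate is a fraction, e.g. 0.05 for the default 5 percent."""
    return [round(price * (1 + rate) ** y, 2) for y in range(years + 1)]

print(project(100.0, 0.05))    # default 5 percent growth
print(project(100.0, -0.10))   # user-adjusted negative projection
```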
This data was collected by the team at https://dou.ua/. This resource is very popular in Ukraine: it provides salary statistics, shows current vacancies, and publishes useful articles related to the life of an IT specialist. The dataset was taken from the public repository https://github.com/devua/csv/tree/master/salaries. It includes the following data for each developer: salary, position (e.g., Junior, Middle), experience, city, and tech (e.g., C#/.NET, JavaScript, Python). I think this dataset will be useful to our community. Thank you.
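A minimal loading sketch; the local file name and the exact column names are assumptions based on the description above, so check the repository for the real layout.

```python
import pandas as pd

# Sketch: load one salary snapshot downloaded from the repository above.
# The file name "salaries.csv" and the column names "position" and "salary"
# are assumptions based on the description; inspect the repo for specifics.
df = pd.read_csv("salaries.csv")
print(df.groupby("position")["salary"].median())
```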
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Laboratory testing tasks. The data contains the task identifier and the instructions given to the participants to complete the task. (CSV 618 kb)
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Content of this repository
This is the repository that contains the scripts and dataset for the MSR 2019 mining challenge.
GitHub repository with the software used: here.
DATASET
=======
The dataset was retrieved using Google BigQuery and dumped to a CSV file for further processing. This original, untreated file is called jsanswers.csv; in it we can find the following information:
1. The Id of the question (PostId)
2. The Content (in this case the code block)
3. The length of the code block
4. The line count of the code block
5. The score of the post
6. The title
A quick look at this file shows that a PostId can have multiple rows related to it; that is how multiple code blocks are saved in the database.
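For example, a quick way to see this structure with pandas (column names taken from the list above):

```python
import pandas as pd

# Sketch: count how many code blocks each question has in jsanswers.csv.
# Column names follow the list above; adjust if the header differs.
df = pd.read_csv("jsanswers.csv")
blocks_per_post = df.groupby("PostId").size().sort_values(ascending=False)
print(blocks_per_post.head())
```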
Filtered Dataset:
Extracting code from CSV
We used a Python script called "ExtractCodeFromCSV.py" to extract the code from the original CSV and merge all the code blocks of each post into a JavaScript file named after its PostId; this resulted in 336 thousand files.
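A minimal sketch of that merge step, assuming the column names listed above (the actual ExtractCodeFromCSV.py may differ):

```python
import pandas as pd
from pathlib import Path

# Sketch of the merge step: concatenate all code blocks belonging to a
# PostId into a single <PostId>.js file. Column names are assumed from the
# list above; the actual ExtractCodeFromCSV.py may differ.
df = pd.read_csv("jsanswers.csv")
out = Path("js_files")
out.mkdir(exist_ok=True)
for post_id, group in df.groupby("PostId"):
    (out / f"{post_id}.js").write_text("\n".join(group["Content"].astype(str)))
```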
Running ESLint
Due to the single-threaded nature of ESLint, running it over 336 thousand files took a huge toll on the machine, so we created a script to drive it. This script, named "ESlintRunnerScript.py", splits the files into 20 evenly distributed parts and runs 20 ESLint processes to generate the reports, producing 20 JSON files.
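A sketch of this chunk-and-run approach (paths and ESLint flags are assumptions; the actual ESlintRunnerScript.py may differ):

```python
import subprocess
from concurrent.futures import ProcessPoolExecutor

# Sketch: split the file list into 20 evenly distributed chunks and lint
# each chunk in its own ESLint process, writing one JSON report per chunk.
def lint_chunk(args):
    idx, files = args
    with open(f"report_{idx}.json", "w") as report:
        subprocess.run(["eslint", "--format", "json", *files], stdout=report)

def run(files, parts=20):
    chunks = [files[i::parts] for i in range(parts)]  # evenly distributed
    with ProcessPoolExecutor(max_workers=parts) as pool:
        list(pool.map(lint_chunk, enumerate(chunks)))
```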
Number of Violations per Rule
This information was extracted using the script named "parser.py". It generated the file "NumberofViolationsPerRule.csv", which contains the number of violations per rule used in the linter configuration.
Number of violations per Category
To produce relevant statistics about the dataset, we also generated the number of violations per rule category as defined on the ESLint website; this information was extracted using the same "parser.py" script.
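A sketch of the per-rule aggregation: ESLint's JSON formatter emits a list of per-file results whose messages each carry a ruleId. The report file names here are assumptions.

```python
import json
from collections import Counter

# Sketch: aggregate violation counts per rule across the 20 JSON reports.
# Report file names are assumptions matching the runner sketch above.
counts = Counter()
for i in range(20):
    with open(f"report_{i}.json") as fh:
        for result in json.load(fh):
            for msg in result["messages"]:
                if msg.get("ruleId"):       # skip fatal parse errors
                    counts[msg["ruleId"]] += 1
print(counts.most_common(10))
```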
Individual Reports
This information was extracted from the JSON reports; it is a CSV file with the PostId and the violations per rule.
Rules
The file "Rules with categories" contains all the rules used and their categories.
This dataset provides geospatial location data and scripts used to analyze the relationship between MODIS-derived NDVI and solar and sensor angles in a pinyon-juniper ecosystem in Grand Canyon National Park. The data are provided in support of the following publication: "Solar and sensor geometry, not vegetation response, drive satellite NDVI phenology in widespread ecosystems of the western United States". The data and scripts allow users to replicate, test, or further explore results.
The file GrcaScpnModisCellCenters.csv contains locations (latitude-longitude) of all the 250-m MODIS (MOD09GQ) cell centers associated with the Grand Canyon pinyon-juniper ecosystem that the Southern Colorado Plateau Network (SCPN) is monitoring through its land surface phenology and integrated upland monitoring programs. The file SolarSensorAngles.csv contains MODIS angle measurements for the pixel at the phenocam location plus a random 100-point subset of pixels within the GRCA-PJ ecosystem.
The script files (folder: 'Code') consist of 1) a Google Earth Engine (GEE) script used to download MODIS data through the GEE JavaScript interface, and 2) a script used to calculate derived variables and to test relationships between solar and sensor angles and NDVI using the statistical software package 'R'.
The file Fig_8_NdviSolarSensor.JPG shows NDVI dependence on solar and sensor geometry, demonstrated both for a single pixel/year and for multiple pixels over time. (Left) MODIS NDVI versus solar-to-sensor angle for the Grand Canyon phenocam location in 2018, the year for which there is corresponding phenocam data. (Right) Modeled r-squared values by year for 100 randomly selected MODIS pixels in the SCPN-monitored Grand Canyon pinyon-juniper ecosystem. The model for forward-scatter MODIS-NDVI is log(NDVI) ~ solar-to-sensor angle. The model for back-scatter MODIS-NDVI is log(NDVI) ~ solar-to-sensor angle + sensor zenith angle. Boxplots show interquartile ranges; whiskers extend to 10th and 90th percentiles. The horizontal line marking the average median value for forward-scatter r-squared (0.835) is nearly indistinguishable from the back-scatter line (0.833).
The dataset folder also includes supplemental R-project and packrat files that allow the user to apply the workflow by opening a project that will use the same package versions used in this study (e.g., the folders Rproj.user and packrat, and the files .RData and PhenocamPR.Rproj). The empty folder GEE_DataAngles is included so that the user can save the data files from the Google Earth Engine scripts to this location, where they can then be incorporated into the R-processing scripts without needing to change folder names. To use the packrat information to replicate the exact processing steps that were used, the user should refer to the packrat documentation available at https://cran.r-project.org/web/packages/packrat/index.html and at https://www.rdocumentation.org/packages/packrat/versions/0.5.0. Alternatively, the user may use the phenopix package documentation and the description/references provided in the associated journal article to process the data and achieve the same results using newer packages or other software programs.
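As a rough Python analogue of the two R models described above (the study itself used R; the column names here are hypothetical placeholders):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Python analogue of the R models described above (the study itself used R).
# Column names are assumptions; see SolarSensorAngles.csv for the real ones.
df = pd.read_csv("SolarSensorAngles.csv")
df["log_ndvi"] = np.log(df["ndvi"])

# Forward-scatter model: log(NDVI) ~ solar-to-sensor angle
fwd = smf.ols("log_ndvi ~ solar_to_sensor_angle", data=df).fit()
# Back-scatter model: log(NDVI) ~ solar-to-sensor angle + sensor zenith angle
back = smf.ols("log_ndvi ~ solar_to_sensor_angle + sensor_zenith_angle",
               data=df).fit()
print(fwd.rsquared, back.rsquared)
```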
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset is the result of three crawls of the web performed in May 2018. The data contains raw crawl data and instrumentation captured by OpenWPM-Mobile, as well as analysis that identifies which scripts access mobile sensors and which ones perform some form of browser fingerprinting, along with a clustering of scripts based on their intended use. The dataset is described in the included README.md file; more details about the methodology can be found in our ACM CCS'18 paper: Anupam Das, Gunes Acar, Nikita Borisov, Amogh Pradeep. The Web's Sixth Sense: A Study of Scripts Accessing Smartphone Sensors. In Proceedings of the 25th ACM Conference on Computer and Communications Security (CCS), Toronto, Canada, October 15–19, 2018. (Forthcoming)
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This data comes from an effort to render the top 1M domains on the web in a scripted browser, recording performance metrics for each page. These metrics are published here in NumPy format. See the starter notebook for an example showing how to use the data and what the columns contain. See the following posts for more in-depth write-ups:
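A minimal loading sketch; the file name is a hypothetical placeholder, so consult the starter notebook for the real file names and column meanings.

```python
import numpy as np

# Sketch: load one of the published NumPy arrays. The file name is a
# hypothetical placeholder; see the starter notebook for the real files.
metrics = np.load("page_metrics.npy")
print(metrics.shape, metrics.dtype)
```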
This dataset contains daily snapshots of offers scraped from JustJoinIT, one of the biggest IT job boards in Poland. The dataset covers offers across a variety of programming languages and areas (Java, C#, Python, JavaScript, data engineering, and more).
Job offers were fetched from an API endpoint that exposed all job offers. I created a simple AWS Lambda function that was invoked once per day and persisted the extracted data on S3. The data is raw: the original JSON served by the API was saved to S3 with no processing in between. A sketch of this collection step follows.
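A minimal sketch of that daily collection step; the API URL and bucket name are hypothetical placeholders.

```python
import urllib.request
from datetime import date

import boto3

# Sketch of the daily collection step: fetch the full offer list and persist
# the raw JSON to S3, keyed by capture date. The API URL and bucket name are
# hypothetical placeholders.
def handler(event, context):
    with urllib.request.urlopen("https://example.com/api/offers") as resp:
        body = resp.read()
    boto3.client("s3").put_object(
        Bucket="justjoinit-snapshots",
        Key=f"{date.today().isoformat()}.json",
        Body=body,
    )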
First captured day: 23rd of October, 2021. Last captured day: 25th of September, 2023.
The dataset is incomplete (due to the lack of retries in the data-fetching script). The missing days are listed below; a sketch for recomputing this list from the snapshots follows the list.
2022-06-05
2022-09-12
2022-10-03
2022-10-10
2022-10-14
2022-10-17
2022-10-22
2022-10-23
2022-10-25
2022-10-29
2022-11-06
2022-11-12
2022-11-13
2022-12-11
2022-12-18
2022-12-26
2023-02-04
2023-02-07
2023-02-08
2023-02-26
2023-03-11
2023-03-12
2023-03-27
2023-04-03
2023-04-12
2023-04-14
2023-04-17
2023-04-19
2023-04-20
2023-04-21
2023-04-22
2023-04-24
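A sketch for recomputing the missing-day list from the captured snapshots (the snapshots/ directory and the YYYY-MM-DD.json naming scheme are assumptions):

```python
from datetime import date, timedelta
from pathlib import Path

# Sketch: compare the full capture range against the snapshot files present.
# The snapshots/ directory and <YYYY-MM-DD>.json naming are assumptions.
captured = {p.stem for p in Path("snapshots").glob("*.json")}
day, last = date(2021, 10, 23), date(2023, 9, 25)
missing = []
while day <= last:
    if day.isoformat() not in captured:
        missing.append(day.isoformat())
    day += timedelta(days=1)
print(missing)
```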
Descriptive statistics of the number of missed frames for SVG+JavaScript animations.
Traffic analytics, rankings, and competitive metrics for javascript.com as of September 2025
Comprehensive YouTube channel statistics for JavaScript Mastery, featuring 1,200,000 subscribers and 104,656,847 total views. This dataset includes detailed performance metrics such as subscriber growth, video views, engagement rates, and estimated revenue. The channel operates in the Lifestyle category and is based in HR. Track 187 videos with daily and monthly performance data, including view counts, subscriber changes, and earnings estimates. Analyze growth trends, engagement patterns, and compare performance against similar channels in the same category.
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
The proposed dataset aims to facilitate research in automatic dark pattern detection on e-commerce websites. Unlike previous approaches that relied on manually extracted features, this dataset focuses solely on text data automatically extracted from web pages. The inspiration for this dataset comes from previous work by Mathur et al. in 2019, which contained 1,818 dark pattern texts from shopping sites. To create a balanced dataset, non-dark pattern texts were added to this existing dataset.
A. Dark Pattern Texts in E-commerce Sites: The initial dataset of dark patterns, manually curated by Mathur et al., contained 1,818 dark pattern texts from 1,254 shopping sites. From this dataset, texts with missing or duplicate data were excluded, resulting in 1,178 dark pattern texts.
B. Non-Dark Pattern Texts in E-commerce Sites: Negative samples, or non-dark pattern texts, were collected from the same e-commerce websites where the dark patterns were sourced. This involved the following steps:
Collecting web pages: Web pages from e-commerce sites were gathered using headless Chrome. If a website was unreachable or encountered errors, it was ignored. JavaScript execution was employed to ensure comprehensive content retrieval, as most websites rely on JavaScript for page rendering.
Extracting texts: After collecting web pages, the Puppeteer library was used to scrape content, including screenshots and text. Unlike Mathur et al.'s approach, which focused on text within UI components, this method targeted text from the entire web page.
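A rough sketch of the collection and extraction steps, using Playwright for Python as a stand-in (the authors used headless Chrome driven by the Node Puppeteer library):

```python
from playwright.sync_api import sync_playwright

# Stand-in sketch using Playwright for Python; the authors used headless
# Chrome with the Node Puppeteer library. Errors are swallowed, matching
# the "skip unreachable sites" behaviour described above.
def fetch_page_text(url: str):
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        try:
            page.goto(url, wait_until="networkidle")  # let JS render the page
            return page.inner_text("body")            # whole-page text
        except Exception:
            return None
        finally:
            browser.close()
```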
By combining these steps, the dataset comprises both dark pattern and non-dark pattern texts, enabling research into automatic dark pattern detection without the need for manually extracted features.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository contains the data and analysis from an empirical study investigating the adoption trends of modern JavaScript features introduced with ECMAScript 6 (ES6) and beyond. By mining the source code history of 158 open-source JavaScript projects, the study identifies efforts to rejuvenate legacy code by replacing outdated constructs with modern ones. The findings highlight the extensive use of modern features, their widespread adoption within one to two years after ES6's release, and ongoing trends in the rejuvenation of JavaScript codebases.
scripts.zip: Contains Python scripts used to analyze data and generate the graphs presented in the study's results.
jsminer-tool.zip: Includes the tool developed to analyze GitHub repository history and collect metrics on the adoption of modern JavaScript features.
jsminer_database_backup.zip: Provides a PostgreSQL database dump containing all code review comments from the repositories analyzed in the study.
U.S. Government Works: https://www.usa.gov/government-works
License information was derived automatically
This dataset shows whether each dataset on data.maryland.gov has been updated recently enough. For example, datasets containing weekly data should be updated at least every 7 days. Datasets containing monthly data should be updated at least every 31 days. This dataset also shows a compendium of metadata from all data.maryland.gov datasets.
This report was created by the Department of Information Technology (DoIT) on August 12, 2015. New reports are uploaded daily (this report is itself included, so that users can see whether new reports are consistently being uploaded). Generation of this report uses the Socrata Open Data API to retrieve metadata on the date of the last data update and the update frequency. Analysis and formatting of the metadata use JavaScript, jQuery, and AJAX. A rough equivalent of the freshness check is sketched below.
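A rough Python equivalent of that freshness check (the actual report uses JavaScript/jQuery/AJAX; rowsUpdatedAt is a standard Socrata view field, but the path to the update-frequency custom field is an assumption):

```python
from datetime import datetime, timezone

import requests

# Rough Python equivalent of the freshness check described above.
# The freshness rules follow the text: weekly data within 7 days,
# monthly data within 31 days.
LIMITS_DAYS = {"Weekly": 7, "Monthly": 31}

views = requests.get("https://data.maryland.gov/api/views.json").json()
now = datetime.now(timezone.utc).timestamp()
for view in views:
    custom = view.get("metadata", {}).get("custom_fields", {})
    freq = custom.get("Update", {}).get("Update Frequency")  # assumed path
    updated = view.get("rowsUpdatedAt")  # epoch seconds of last data update
    if freq in LIMITS_DAYS and updated:
        age_days = (now - updated) / 86400
        print(view["name"], "STALE" if age_days > LIMITS_DAYS[freq] else "ok")
```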
This report will be used during meetings of the Maryland Open Data Council to curate datasets for maintenance and make sure the Open Data Portal's data stays up to date.