License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
The dataset contains attributes extracted from websites that can be used for classification of webpages as malicious or benign. It also includes the raw page content, including JavaScript code, which can be used as unstructured data in deep learning or for extracting further attributes. The data was collected by crawling the Internet using MalCrawler [1]. The labels have been verified using the Google Safe Browsing API [2], and the attributes were selected based on their relevance [3]. The dataset attributes are as follows:
'url' - The URL of the webpage.
'ip_add' - IP address of the webpage.
'geo_loc' - The geographic location where the webpage is hosted.
'url_len' - The length of the URL.
'js_len' - Length of the JavaScript code on the webpage.
'js_obf_len' - Length of the obfuscated JavaScript code.
'tld' - The top-level domain of the webpage.
'who_is' - Whether the WHOIS domain information is complete or not.
'https' - Whether the site uses HTTPS or HTTP.
'content' - The raw webpage content, including JavaScript code.
'label' - The class label: benign or malicious.
Python code for extracting the dataset attributes listed above is attached, as are a visualisation of this dataset and its Python code. The visualisation can also be viewed online on Kaggle [5].
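As a rough illustration of how structural attributes such as 'url_len', 'tld', 'https', and 'js_len' could be derived from a crawled page, here is a hypothetical sketch using only the standard library; it is not the attached extraction code, and a real crawler would use a proper HTML parser:

```python
from urllib.parse import urlparse

def extract_attributes(url: str, content: str) -> dict:
    """Compute a few of the dataset's structural attributes for one webpage.
    'js_len' is approximated as the total length of <script> element bodies."""
    parsed = urlparse(url)
    # Crude <script> body extraction by scanning for tag boundaries.
    js_parts = []
    lower = content.lower()
    start = 0
    while True:
        i = lower.find("<script", start)
        if i == -1:
            break
        j = lower.find(">", i)          # end of the opening tag
        k = lower.find("</script>", j)  # matching closing tag
        if j == -1 or k == -1:
            break
        js_parts.append(content[j + 1:k])
        start = k + len("</script>")
    js_code = "".join(js_parts)
    return {
        "url": url,
        "url_len": len(url),
        "tld": parsed.hostname.rsplit(".", 1)[-1] if parsed.hostname else "",
        "https": parsed.scheme == "https",
        "js_len": len(js_code),
    }
```

Attributes such as 'who_is' and 'geo_loc' require external lookups (WHOIS and IP geolocation services) and are omitted from this sketch.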
License: CC0 1.0 Universal (https://creativecommons.org/publicdomain/zero/1.0/)
Data set containing Tweets captured during the Nintendo E3 2018 Conference.
All Twitter APIs that return Tweets provide that data encoded using JavaScript Object Notation (JSON). JSON is based on key-value pairs, with named attributes and associated values. The JSON file includes the following objects and attributes:
Tweet - Tweets are the basic atomic building block of all things Twitter. The Tweet object has a long list of ‘root-level’ attributes, including fundamental attributes such as id, created_at, and text. Tweet child objects include user, entities, and extended_entities. Tweets that are geo-tagged will have a place child object.
User - Contains public Twitter account metadata and describes the author of the Tweet with attributes such as name, description, followers_count, friends_count, etc.
Entities - Provide metadata and additional contextual information about content posted on Twitter. The entities section provides arrays of common things included in Tweets: hashtags, user mentions, links, stock tickers (symbols), Twitter polls, and attached media.
Extended Entities - All Tweets with attached photos, videos and animated GIFs will include an extended_entities JSON object.
Places - Tweets can be associated with a location, generating a Tweet that has been ‘geo-tagged.’
I used the filterStream() function to open a connection to Twitter's Streaming API, using the keywords #NintendoE3 and #NintendoDirect. The capture started on Tuesday, June 12th at 04:00 am UTC and finished on Tuesday, June 12th at 05:00 am UTC.
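A minimal sketch of navigating one captured Tweet object in Python, pulling out root-level attributes and the user and entities child objects described above; the sample payload is fabricated for illustration and is not drawn from the actual capture:

```python
import json

# Illustrative stand-in for one line of the captured JSON file.
sample = json.dumps({
    "id": 1,
    "created_at": "Tue Jun 12 04:15:00 +0000 2018",
    "text": "So hyped! #NintendoE3",
    "user": {"name": "fan", "followers_count": 42},
    "entities": {"hashtags": [{"text": "NintendoE3"}]},
})

tweet = json.loads(sample)
# Root-level attributes of the Tweet object.
root = {k: tweet[k] for k in ("id", "created_at", "text")}
# Child objects: user metadata and entity arrays.
author = tweet["user"]["name"]
tags = [h["text"] for h in tweet["entities"]["hashtags"]]
```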
License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
The description of the attributes from the Dimension class in version 1.0 of the CSD model.
License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
The description of the attributes from the DependentVariable class in version 1.0 of the CSD model.
License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
Overview of Data
This dataset is a data dump containing data from June 2008 to March 2013. Note that Stack Overflow originated only in June 2008. Therefore, this dump includes all the questions and answers on Stack Overflow until March 2013.
Stack Overflow provides data dumps of all user generated data, including questions asked with the list of answers, the accepted answer per question, up/down votes, favourite counts, post score, comments, and anonymized user reputation. Stack Overflow allows users to tag discussions and has a reputation-based mechanism to rank users based on their active participation and contributions.
Attribute Information
The datasets are in XML format and include questions and answers for the following topics:
* CSS
* CSS-mobile
* HTML5
* HTML5-mobile
* JavaScript
* Javascript-mobile
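The XML files can be walked with Python's standard library. In the sketch below, the <row> attribute names (Id, PostTypeId, Score, Tags) follow the public Stack Overflow data-dump convention, where PostTypeId 1 marks a question and 2 an answer; treating these particular files the same way is an assumption:

```python
import xml.etree.ElementTree as ET

# Fabricated two-row snippet in the Stack Overflow dump layout.
xml_snippet = """<posts>
  <row Id="1" PostTypeId="1" Score="12" Tags="&lt;javascript&gt;" />
  <row Id="2" PostTypeId="2" Score="7" />
</posts>"""

root = ET.fromstring(xml_snippet)
# Split rows into questions and answers by PostTypeId.
questions = [r.attrib for r in root if r.get("PostTypeId") == "1"]
answers = [r.attrib for r in root if r.get("PostTypeId") == "2"]
```

For the full multi-gigabyte dump files, `ET.iterparse` would avoid loading everything into memory at once.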
License: CC0 1.0 Universal (https://creativecommons.org/publicdomain/zero/1.0/)
Understanding JSON Data Extraction:
Have you ever wondered how datasets are prepared from JSON after calling their APIs? This repository aims to demystify this process by providing five JSON files for exploration. Each file represents a snapshot of data obtained from different API endpoints.
Dataset Overview:
Data Source: API endpoints providing JSON data.
File Format: JSON (JavaScript Object Notation).
Number of Files: 5
Total Records: Varies across files.
Data Exploration:
Each JSON file contains structured data representing various aspects of the dataset. Explore different attributes and nested structures within the JSON files. Understand how to navigate and extract relevant information using programming languages like Python.
Included Files:
file1.json file2.json file3.json file4.json file5.json
Final Dataset: Zomato_Final_Data.csv
After extracting and preprocessing data from the five JSON files, a consolidated data frame has been created. The data frame provides a unified view of the data, facilitating analysis and modeling tasks.
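The consolidation step can be sketched with only the standard library: parse each JSON file, pool the records, and write them out as one CSV. The field names below (name, rating) are placeholders, not the actual Zomato schema, and the in-memory strings stand in for the five files:

```python
import csv
import io
import json

# Stand-ins for file1.json ... file5.json; each is assumed to hold a
# list of restaurant records with a shared set of fields.
files = {
    "file1.json": json.dumps([{"name": "A", "rating": 4.1}]),
    "file2.json": json.dumps([{"name": "B", "rating": 3.8}]),
}

# Pool all records from all files into one flat list.
rows = []
for payload in files.values():
    rows.extend(json.loads(payload))

# Write the consolidated table (in practice, to Zomato_Final_Data.csv).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["name", "rating"])
writer.writeheader()
writer.writerows(rows)
```

With deeply nested JSON, `pandas.json_normalize` is a common alternative for flattening records before export.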
Contribute Your Version: Feel free to contribute your code snippets for data extraction. Share your insights and techniques with the community to foster learning and collaboration.
Acknowledgements: Special thanks to Krish Naik and Zomato for providing the data used in this repository.
Feedback and Support: For any questions, feedback, or assistance, please reach out via [contact information].
License: CC0 1.0 Universal (https://creativecommons.org/publicdomain/zero/1.0/)
Data set containing Tweets captured during the 2018 UEFA Champions League Final between Real Madrid and Liverpool.
All Twitter APIs that return Tweets provide that data encoded using JavaScript Object Notation (JSON). JSON is based on key-value pairs, with named attributes and associated values. The JSON file includes the following objects and attributes:
Tweet - Tweets are the basic atomic building block of all things Twitter. The Tweet object has a long list of ‘root-level’ attributes, including fundamental attributes such as id, created_at, and text. Tweet child objects include user, entities, and extended_entities. Tweets that are geo-tagged will have a place child object.
User - Contains public Twitter account metadata and describes the author of the Tweet with attributes such as name, description, followers_count, friends_count, etc.
Entities - Provide metadata and additional contextual information about content posted on Twitter. The entities section provides arrays of common things included in Tweets: hashtags, user mentions, links, stock tickers (symbols), Twitter polls, and attached media.
Extended Entities - All Tweets with attached photos, videos and animated GIFs will include an extended_entities JSON object.
Places - Tweets can be associated with a location, generating a Tweet that has been ‘geo-tagged.’
I used the filterStream() function to open a connection to Twitter's Streaming API, using the keyword #UCLFinal.
The capture started on Saturday, May 27th at 6:45 pm UTC (beginning of the match) and finished on Saturday, May 27th at 8:45 pm UTC.
License: https://researchintelo.com/privacy-and-policy
According to our latest research, the Global JavaScript Supply Chain Security market size was valued at $1.8 billion in 2024 and is projected to reach $7.4 billion by 2033, expanding at a robust CAGR of 16.7% during the forecast period of 2024–2033. The primary growth driver for the JavaScript Supply Chain Security market is the escalating frequency and sophistication of supply chain attacks targeting JavaScript dependencies within enterprise application ecosystems. As organizations continue to rely heavily on third-party libraries and open-source components, the attack surface has expanded, making comprehensive supply chain security solutions critical for safeguarding enterprise data and maintaining business continuity.
North America currently commands the largest share of the global JavaScript Supply Chain Security market, accounting for approximately 38% of the total market value in 2024. This dominance is attributed to the region's mature cybersecurity infrastructure, high digitalization rates, and early adoption of advanced security technologies. The presence of major technology firms and robust regulatory frameworks, such as the Cybersecurity Maturity Model Certification (CMMC) and the National Institute of Standards and Technology (NIST) guidelines, further propel market growth. Enterprises in the United States and Canada are increasingly investing in proactive supply chain risk management solutions, driven by the growing threat landscape and stringent compliance mandates. These factors collectively ensure that North America remains at the forefront of innovation and market adoption within the JavaScript Supply Chain Security sector.
Asia Pacific emerges as the fastest-growing region in the JavaScript Supply Chain Security market, with a projected CAGR of 20.4% from 2024 to 2033. This rapid expansion is fueled by accelerating digital transformation initiatives, the proliferation of web and mobile applications, and increased awareness of supply chain vulnerabilities among enterprises. Major economies such as China, India, Japan, and South Korea are witnessing significant investments in cybersecurity infrastructure, driven by both government mandates and heightened cyberattack incidents. The growing presence of technology startups, coupled with favorable government policies supporting cybersecurity innovation, is catalyzing market growth. Furthermore, the region's expanding e-commerce and fintech sectors are creating substantial demand for robust supply chain security solutions to protect sensitive customer and financial data.
Emerging economies in Latin America and the Middle East & Africa are gradually embracing JavaScript Supply Chain Security solutions, though adoption remains in its nascent stages. These regions face unique challenges, including limited cybersecurity budgets, a shortage of skilled professionals, and varying regulatory landscapes. However, increasing digitalization, the rising adoption of cloud-based applications, and growing awareness of supply chain risks are expected to drive steady market growth. Localized demand for tailored security solutions, combined with international collaborations and capacity-building initiatives, is helping bridge the adoption gap. Nevertheless, the pace of implementation will depend on continued investments in digital infrastructure, regulatory harmonization, and the development of local cybersecurity expertise.
| Attributes | Details |
| Report Title | JavaScript Supply Chain Security Market Research Report 2033 |
| By Component | Software, Services |
| By Deployment Mode | On-Premises, Cloud |
| By Organization Size | Small and Medium Enterprises, Large Enterprises |
| By Application | Web Applications, Mobile Applications, Enterprise Applic |
License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
The datasets used for this manuscript were derived from multiple sources: Denver Public Health, Esri, Google, and SafeGraph. Any reuse or redistribution of the datasets is subject to the restrictions of these data providers; consult the relevant parties for permissions.
1. COVID-19 case data were retrieved from Denver Public Health (Link: https://storymaps.arcgis.com/stories/50dbb5e7dfb6495292b71b7d8df56d0a).
2. Point of Interest (POI) data were retrieved from Esri and SafeGraph (Link: https://coronavirus-disasterresponse.hub.arcgis.com/datasets/6c8c635b1ea94001a52bf28179d1e32b/data?selectedAttribute=naics_code) and verified with the Google Places Service (Link: https://developers.google.com/maps/documentation/javascript/reference/places-service).
3. The activity risk information is accessible from the Texas Medical Association (TMA) (Link: https://www.texmed.org/TexasMedicineDetail.aspx?id=54216).
The datasets for risk assessment and mapping are included in a geodatabase. Per SafeGraph data sharing guidelines, raw data cannot be shared publicly. To view the content of the geodatabase, users should have ArcGIS Pro 2.7 installed. The geodatabase includes the following:
1. POI. Major attributes are location, name, and daily popularity.
2. Denver neighborhoods with weekly COVID-19 cases and computed regional risk levels.
3. Four simulated travel logs with anchor points provided. Each is a separate point layer.
License: MIT License (https://opensource.org/licenses/MIT)
1- The Zieni Dataset (2024): This is a recent, balanced dataset comprising 10,000 websites, with 5,000 phishing and 5,000 legitimate samples. The phishing URLs were sourced from PhishTank and Tranco, while legitimate URLs came from Alexa. Each of the 10,000 instances is characterized by 74 features, with 70 being numerical and 4 binary. These features comprehensively describe various components of a URL, including the domain, path, filename, and parameters.
2- The UCI Phishing Websites Dataset: This dataset contains 11,055 website instances, each labeled as either phishing (1) or legitimate (-1). It provides 30 diverse features that capture address bar characteristics, domain-based attributes, and other HTML and JavaScript elements (e.g., prefix-suffix, google_index, iframe, https_token). The data was aggregated from several reputable sources, including the PhishTank and MillerSmiles archives.
3- The Mendeley Phishing Dataset: This dataset includes 10,000 webpages, evenly split between phishing and legitimate categories. It describes each sample using 48 features. The data was collected in two periods: from January to May 2015 and from May to June 2017.
References
[1] R. Zieni, "Zieni dataset for Phishing detection," vol. 1, 2024. doi: 10.17632/8MCZ8JSGNB.1.
[2] R. Mohammad et al., "An assessment of features related to phishing websites using an automated technique," in International Conference for Internet Technology and Secured Transactions, 2012.
[3] C. L. Tan, "Phishing Dataset for Machine Learning: Feature Evaluation," vol. 1, 2018. doi: 10.17632/H3CGNJ8HFT.1.
This dataset is for software vulnerability detection and includes source code in eight programming languages (C, C++, Java, JavaScript, Go, PHP, Ruby, Python). All data is collected from GitHub.
data{programming language}_vul.json: a set of vulnerable code samples in a given programming language.
data{programming language}_patch.json: a set of patching code samples in a given programming language.
Each source code sample includes the following 16 properties:
index - index of the code. If is_vulnerable==False, this index indicates that this code is a patch of the vulnerable code with the same index.
code - raw source code (may include comments).
is_vulnerable - whether the code is vulnerable (True) or a patch (False).
programming_language - programming language of the code.
method_name - name of the method.
file_name - name of the file from which the source code was extracted.
repo_url - URL of the project repository.
repo_owner - owner of the repository.
committer - developer who pushed the commit.
committer_date - date when the commit was pushed.
commit_msg - the commit message.
cwe_id - if is_vulnerable==True, the CWE id; otherwise None.
cwe_name - if is_vulnerable==True, the name of the corresponding CWE; otherwise None.
cwe_description - if is_vulnerable==True, the description of the corresponding CWE; otherwise None.
cwe_url - if is_vulnerable==True, the URL with more details on the corresponding CWE; otherwise None.
cve_id - if is_vulnerable==True, the CVE id; otherwise None.
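Since a patch shares its index with the vulnerable sample it fixes, the _vul.json and _patch.json files can be joined on that property. A minimal sketch with fabricated stand-in records (not actual dataset content):

```python
import json

# Stand-ins for data{language}_vul.json and data{language}_patch.json.
vul = json.loads('[{"index": 0, "is_vulnerable": true, "cwe_id": "CWE-79"}]')
patch = json.loads('[{"index": 0, "is_vulnerable": false, "cwe_id": null}]')

# Index the patches, then pair each vulnerable sample with its fix.
patch_by_index = {p["index"]: p for p in patch}
pairs = [(v, patch_by_index[v["index"]]) for v in vul]
```

Such (vulnerable, patched) pairs are a common input format for training code-level vulnerability detectors.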
License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
Mapping of CSD model attribute values to JSON serialized values.