https://webtechsurvey.com/terms
A complete list of live websites using the data-urls technology, compiled through global website indexing conducted by WebTechSurvey.
PredictLeads Job Openings Data provides high-quality hiring insights sourced directly from company websites - not job boards. Using advanced web scraping technology, our dataset offers real-time access to job trends, salaries, and skills demand, making it a valuable resource for B2B sales, recruiting, investment analysis, and competitive intelligence.
Key Features:
✅ 232M+ Job Postings Tracked – Data sourced from 92 million company websites worldwide.
✅ 7.1M+ Active Job Openings – Updated in real time to reflect hiring demand.
✅ Salary & Compensation Insights – Extract salary ranges, contract types, and job seniority levels.
✅ Technology & Skill Tracking – Identify emerging tech trends and industry demands.
✅ Company Data Enrichment – Link job postings to employer domains, firmographics, and growth signals.
✅ Web Scraping Precision – Directly sourced from employer websites for unmatched accuracy.
Primary Attributes:
Job Metadata:
Salary Data (salary_data)
Occupational Data (onet_data) (object, nullable)
Additional Attributes:
📌 Trusted by enterprises, recruiters, and investors for high-precision job market insights.
PredictLeads Dataset: https://docs.predictleads.com/v3/guide/job_openings_dataset
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
China Internet Service: Number of Website data was reported at 4.460 Unit mn in Dec 2024. This records an increase from the previous figure of 3.910 Unit mn for Jun 2024. China Internet Service: Number of Website data is updated semiannually, averaging (median) 2.939 Unit mn from Dec 2000 to Dec 2024, with 49 observations. The data reached an all-time high of 5.440 Unit mn in Jun 2018 and a record low of 0.243 Unit mn in Jun 2001. The series remains in active status in CEIC and is reported by the China Internet Network Information Center. The data is categorized under China Premium Database's Information and Communication Sector – Table CN.ICE: Internet: Number of Domain and Website.
https://webtechsurvey.com/terms
A complete list of live websites using the Advanced Database Cleaner technology, compiled through global website indexing conducted by WebTechSurvey.
Web Designer Express is a reputable Miami-based company that has been in business for 20 years. With a team of experienced web designers and developers, they offer a wide range of services, including web design, e-commerce development, web development, and more. Their portfolio showcases over 10,000 websites designed, with a focus on creating custom, unique solutions for each client. With a presence in Miami, Florida, they cater to businesses and individuals seeking to establish a strong online presence. As a company, Web Designer Express is dedicated to building long-lasting relationships with their clients, providing personalized service, and exceeding expectations.
Daily utilization metrics for data.lacity.org and geohub.lacity.org. Updated monthly.
During a 2024 survey, ** percent of responding consumers from the United States said they were fine with a website or app that they trusted or valued using their personal data to send them relevant advertising. The share stood at ** percent for Generation Z respondents.
As of September 2024, 75 percent of the 100 most visited websites in the United States shared personal data with third-party advertisers even when users opted out. Moreover, 70 percent of them dropped third-party advertising cookies even when users opted out.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset contains the results of a large-scale survey of 708 websites, conducted in December 2019, measuring various features related to their size and structure: DOM tree size, maximum degree, depth, and diversity of element types and CSS classes, among others. The goal of this research is to serve as a reference point for studies that include an empirical evaluation on samples of web pages.
See the Readme.md file inside the archive for more details about its contents.
Data from the State of California. From website:
Access raw State data files, databases, geographic data, and other data sources. Raw State data files can be reused by citizens and organizations for their own web applications and mashups.
Open. Effectively in the public domain. Terms of use page says:
In general, information presented on this web site, unless otherwise indicated, is considered in the public domain. It may be distributed or copied as permitted by law. However, the State does make use of copyrighted data (e.g., photographs) which may require additional permissions prior to your use. In order to use any information on this web site not owned or created by the State, you must seek permission directly from the owning (or holding) sources. The State shall have the unlimited right to use for any purpose, free of any charge, all information submitted via this site except those submissions made under separate legal contract. The State shall be free to use, for any purpose, any ideas, concepts, or techniques contained in information provided through this site.
List of State of Oklahoma city government websites.
The data may be used only for scientific research; commercial use is strictly prohibited. This is an underground-industry website dataset containing nearly 400,000 records, each with 14 attributes. All properties are contained in the result.json file.

| Property | Description | Data type |
| --- | --- | --- |
| ip | IP address | character string |
| port | port number | continuous data |
| server | web container | discrete data |
| domain | domain name | text (domain name) |
| title | site title | text |
| org | organization | discrete data |
| country | country | discrete data |
| city | city | discrete data |
| html | HTML source code | text |
| screen | website screenshot | image |
| header | web response header information | text |
| subject.CN | common name information for SSL certificates | text (domain name) |
| subject.N | SSL certificate subject alternative name | text (list of domain names) |
| links | site external links | text (list of domain names) |
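Given the attributes above, the file can be worked with programmatically. The sketch below assumes result.json holds a JSON array of such records; the load_records and by_country helpers are hypothetical names for illustration, not part of the dataset.

```python
import json

def load_records(path="result.json"):
    """Load the dataset, assuming the file is a JSON array of records."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def by_country(records, country):
    """Index one country's records by their domain attribute."""
    return {r["domain"]: r for r in records if r.get("country") == country}
```

For example, by_country(load_records(), "CN") would map each Chinese site's domain to its full record, including the title, html, and links fields.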
OpenWeb Ninja’s Website Contacts Scraper API provides real-time access to B2B contact data directly from company websites and related public sources. The API delivers clean, structured results including B2B email data, phone number data, and social profile links, making it simple to enrich leads and build accurate company contact lists at scale.
What's included:
- Emails & Phone Numbers: extract business emails and phone contacts from a website domain.
- Social Profile Links: capture company accounts on LinkedIn, Facebook, Instagram, TikTok, Twitter/X, YouTube, GitHub, and Pinterest.
- Domain Search: input a company website domain and get all available contact details.
- Company Name Lookup: find a company’s website domain by name, then retrieve its contact data.
- Comprehensive Coverage: scrape across all accessible website pages for maximum data capture.

Coverage & Scale:
- 1,000+ emails and phone numbers per company website supported.
- 8+ major social networks covered.
- Real-time REST API for fast, reliable delivery.

Use cases:
- B2B contact enrichment and CRM updates.
- Targeted email marketing campaigns.
- Sales prospecting and lead generation.
- Digital ads audience targeting.
- Marketing and sales intelligence.
With OpenWeb Ninja’s Website Contacts Scraper API, you get structured B2B email data, phone numbers, and social profiles straight from company websites - always delivered in real time via a fast and reliable API.
https://www.icpsr.umich.edu/web/ICPSR/studies/34895/terms
The Congressional Candidate Websites study uses congressional candidate Web site data from 2002 to 2006 to understand campaign behavior. The content analysis data includes information on major party House and Senate candidates, their districts/states, and aspects of their campaign Web sites including their use of technology and political variables such as endorsements, issue positions, image promotion, and negative commentary.
The majority of the Swedes who took part in a survey conducted in 2019 stated that they were concerned their online information was not kept secure by websites (** percent). ** percent of the respondents disagreed with that statement.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Code:
Packet_Features_Generator.py & Features.py
To run this code:
pkt_features.py [-h] -i TXTFILE [-x X] [-y Y] [-z Z] [-ml] [-s S] -j
-h, --help  show this help message and exit
-i TXTFILE  input text file
-x X        add first X number of total packets as features
-y Y        add first Y number of negative packets as features
-z Z        add first Z number of positive packets as features
-ml         output to text file all websites in the format of websiteNumber1,feature1,feature2,...
-s S        generate samples using size S
-j
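For illustration, the flags above map naturally onto an argparse parser. This is a hypothetical reconstruction from the usage string alone, not the actual parser in Packet_Features_Generator.py; the meaning of -j is not documented, so it is treated here as a bare required flag.

```python
import argparse

def build_parser():
    """Sketch of a parser matching the documented pkt_features.py options."""
    p = argparse.ArgumentParser(prog="pkt_features.py")
    p.add_argument("-i", dest="txtfile", required=True, help="input text file")
    p.add_argument("-x", type=int, help="add first X total packets as features")
    p.add_argument("-y", type=int, help="add first Y negative packets as features")
    p.add_argument("-z", type=int, help="add first Z positive packets as features")
    p.add_argument("-ml", action="store_true",
                   help="output all websites as websiteNumber1,feature1,feature2,...")
    p.add_argument("-s", type=int, help="generate samples using size S")
    p.add_argument("-j", action="store_true")  # purpose undocumented in the README
    return p
```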
Purpose:
Turns a text file containing lists of incoming and outgoing network packet sizes into separate website objects with associated features.
Uses Features.py to calculate the features.
startMachineLearning.sh & machineLearning.py
To run this code:
bash startMachineLearning.sh
This code then runs machineLearning.py in a tmux session with the necessary file paths and flags.
Options (to be edited within this file):
--evaluate-only to test 5 fold cross validation accuracy
--test-scaling-normalization to test 6 different combinations of scalers and normalizers
Note: once the best combination is determined, it should be added to the data_preprocessing function in machineLearning.py for future use
--grid-search to test the best grid search hyperparameters. Note: the candidate hyperparameters must be added to train_model under 'if not evaluateOnly:'. Once the best hyperparameters are determined, add them to train_model under 'if evaluateOnly:'.
Purpose:
Using the .ml file generated by Packet_Features_Generator.py & Features.py, this program trains a RandomForest classifier on the provided data and reports results using cross-validation. These results include the best scaling and normalization options for each data set as well as the best grid search hyperparameters based on the provided ranges.
Data
Encrypted network traffic was collected on an isolated computer visiting different Wikipedia and New York Times articles, different Google search queries (collected in the form of their autocomplete results and their results page), and different actions taken on a virtual reality headset.
Data for this experiment was stored and analyzed in the form of a txt file for each experiment which contains:
The first number is a classification number denoting which website, query, or VR action is taking place.
The remaining numbers in each line denote the size of a packet and the direction it is traveling:
negative numbers denote incoming packets
positive numbers denote outgoing packets
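Assuming the numbers on each line are whitespace-separated, a line in this format splits cleanly into its label and the two packet directions. The parse_line helper below is illustrative, not part of the released code.

```python
def parse_line(line):
    """Split one experiment line into (label, incoming_sizes, outgoing_sizes).

    The first number is the classification label; negative packet sizes
    are incoming and positive sizes are outgoing.
    """
    nums = [int(tok) for tok in line.split()]
    label, sizes = nums[0], nums[1:]
    incoming = [-n for n in sizes if n < 0]  # stored as positive sizes
    outgoing = [n for n in sizes if n > 0]
    return label, incoming, outgoing
```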
Figure 4 Data
This data uses specific lines from the Virtual Reality.txt file.
The action 'LongText Search' refers to a user searching for "Saint Basils Cathedral" with text in the Wander app.
The action 'ShortText Search' refers to a user searching for "Mexico" with text in the Wander app.
The .xlsx and .csv files are identical.
Each file includes (from right to left):
The original packet data,
each line of data sorted from smallest to largest packet size in order to calculate the mean and standard deviation of each packet capture,
and the final Cumulative Distribution Function (CDF) calculation that generated the Figure 4 graph.
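The sort, summary-statistics, and CDF steps described above can be sketched as follows. This is an illustrative reimplementation of the spreadsheet calculation, not the authors' actual formulas.

```python
import statistics

def capture_summary(sizes):
    """Sort a packet capture, compute its mean and standard deviation,
    and return the empirical CDF as (size, fraction <= size) pairs."""
    xs = sorted(sizes)
    n = len(xs)
    cdf = [(x, (i + 1) / n) for i, x in enumerate(xs)]
    return cdf, statistics.mean(xs), statistics.stdev(xs)
```

Plotting the returned (size, fraction) pairs produces a CDF curve of the kind used for the Figure 4 graph.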
The Business Websites Database of European Companies is a comprehensive, carefully curated collection of links to the official websites of prominent companies headquartered or operating in Europe. The database spans a wide range of industries and sectors, from technology and finance to manufacturing, healthcare, and retail. By offering direct access to each company's online presence, it lets users explore detailed information about products, services, corporate values, and market activities, making it a useful tool for researchers, professionals, and anyone engaging with the European business landscape.
Company information such as employee credentials is one of the most common assets online vendors trade illegally on the darknet. According to the source, Zalando.com has suffered thousands of data leakage incidents on the deep web in the 12 months leading up to ********, in which more than ***** employee credentials were compromised. Amazon registered a relatively low number of deep web data leaks, with roughly *** in the last 12 months.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset is the result of merging two datasets with identical features. However, not all features from the original datasets have been retained in the merged dataset. This selective feature inclusion was done to focus on the most relevant data and to avoid redundancy. The resulting dataset provides a comprehensive view of the shared characteristics between the two original datasets, while maintaining a streamlined and focused set of features.
Dataset 1 : Web page phishing detection Hannousse, Abdelhakim; Yahiouche, Salima (2021), “Web page phishing detection”, Mendeley Data, V3, doi: 10.17632/c2gw7fy2j4.3
Dataset 2: Phishing Websites Dataset Vrbančič, Grega (2020), “Phishing Websites Dataset”, Mendeley Data, V1, doi: 10.17632/72ptz43s9v.1
The data is provided in CSV format, with each row representing a website and each column representing a feature. The last column contains the label for each website.
This dataset contains the following features:
1. url_length: The length of the URL.
2. n_dots: The count of ‘.’ characters in the URL.
3. n_hypens: The count of ‘-’ characters in the URL.
4. n_underline: The count of ‘_’ characters in the URL.
5. n_slash: The count of ‘/’ characters in the URL.
6. n_questionmark: The count of ‘?’ characters in the URL.
7. n_equal: The count of ‘=’ characters in the URL.
8. n_at: The count of ‘@’ characters in the URL.
9. n_and: The count of ‘&’ characters in the URL.
10. n_exclamation: The count of ‘!’ characters in the URL.
11. n_space: The count of ‘ ’ characters in the URL.
12. n_tilde: The count of ‘~’ characters in the URL.
13. n_comma: The count of ‘,’ characters in the URL.
14. n_plus: The count of ‘+’ characters in the URL.
15. n_asterisk: The count of ‘*’ characters in the URL.
16. n_hastag: The count of ‘#’ characters in the URL.
17. n_dollar: The count of ‘$’ characters in the URL.
18. n_percent: The count of ‘%’ characters in the URL.
19. n_redirection: The count of redirections in the URL.
20. phishing: The label of the URL; 1 is phishing and 0 is legitimate.
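Most of these features are simple character counts, so they can be recomputed from a raw URL. The sketch below is assumption-laden: in particular, n_redirection is taken here to mean occurrences of '//' after the scheme, which may not match the original datasets' exact definition.

```python
def url_features(url):
    """Compute the character-count features listed above for one URL."""
    chars = {
        "n_dots": ".", "n_hypens": "-", "n_underline": "_", "n_slash": "/",
        "n_questionmark": "?", "n_equal": "=", "n_at": "@", "n_and": "&",
        "n_exclamation": "!", "n_space": " ", "n_tilde": "~", "n_comma": ",",
        "n_plus": "+", "n_asterisk": "*", "n_hastag": "#", "n_dollar": "$",
        "n_percent": "%",
    }
    feats = {"url_length": len(url)}
    feats.update({name: url.count(c) for name, c in chars.items()})
    # Assumption: a "redirection" is an embedded '//' after the scheme.
    scheme_end = url.find("://")
    body = url[scheme_end + 3:] if scheme_end != -1 else url
    feats["n_redirection"] = body.count("//")
    return feats
```

Feeding each URL through url_features and appending the phishing label reproduces one row of the CSV described above, under these assumptions.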
http://opendatacommons.org/licenses/dbcl/1.0/
Sites that were or are currently banned.
This data was created by each country's own users.