https://webtechsurvey.com/terms
A complete list of live websites using the data-urls technology, compiled through global website indexing conducted by WebTechSurvey.
Open Government Licence 3.0: http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/
License information was derived automatically
This Website Statistics dataset has four resources showing usage of the Lincolnshire Open Data website. Web analytics terms used in each resource are defined in their accompanying Metadata file.
Website Usage Statistics: This document shows a statistical summary of usage of the Lincolnshire Open Data site for the latest calendar year.
Website Statistics Summary: This dataset shows a website statistics summary for the Lincolnshire Open Data site for the latest calendar year.
Webpage Statistics: This dataset shows statistics for individual Webpages on the Lincolnshire Open Data site by calendar year.
Dataset Statistics: This dataset shows cumulative totals for Datasets on the Lincolnshire Open Data site that have also been published on the national Open Data site Data.Gov.UK - see the Source link.
Note: Website and Webpage statistics (the first three resources above) show only UK users, and exclude API calls (automated requests for datasets). The Dataset Statistics are confined to users with javascript enabled, which excludes web crawlers and API calls.
These Website Statistics resources are updated annually in January by the Lincolnshire County Council Business Intelligence team. For any enquiries about the information contact opendata@lincolnshire.gov.uk.
PredictLeads Job Openings Data provides high-quality hiring insights sourced directly from company websites - not job boards. Using advanced web scraping technology, our dataset offers real-time access to job trends, salaries, and skills demand, making it a valuable resource for B2B sales, recruiting, investment analysis, and competitive intelligence.
Key Features:
✅ 232M+ Job Postings Tracked – Data sourced from 92 million company websites worldwide.
✅ 7.1M+ Active Job Openings – Updated in real time to reflect hiring demand.
✅ Salary & Compensation Insights – Extract salary ranges, contract types, and job seniority levels.
✅ Technology & Skill Tracking – Identify emerging tech trends and industry demands.
✅ Company Data Enrichment – Link job postings to employer domains, firmographics, and growth signals.
✅ Web Scraping Precision – Directly sourced from employer websites for unmatched accuracy.
Primary Attributes:
- Job Metadata
- Salary Data (salary_data)
- Occupational Data (onet_data) (object, nullable)
Additional Attributes:
📌 Trusted by enterprises, recruiters, and investors for high-precision job market insights.
PredictLeads Dataset: https://docs.predictleads.com/v3/guide/job_openings_dataset
https://webtechsurvey.com/terms
A complete list of live websites using the pretty-data technology, compiled through global website indexing conducted by WebTechSurvey.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This data about nola.gov provides a window into how people are interacting with the City of New Orleans online. The data comes from a unified Google Analytics account for New Orleans. We do not track individuals, and we anonymize the IP addresses of all visitors.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
1- The Zieni Dataset (2024): This is a recent, balanced dataset comprising 10,000 websites, with 5,000 phishing and 5,000 legitimate samples. The phishing URLs were sourced from PhishTank and Tranco, while legitimate URLs came from Alexa. Each of the 10,000 instances is characterized by 74 features, with 70 being numerical and 4 binary. These features comprehensively describe various components of a URL, including the domain, path, filename, and parameters.
2- The UCI Phishing Websites Dataset: This dataset contains 11,055 website instances, each labeled as either phishing (1) or legitimate (-1). It provides 30 diverse features that capture address bar characteristics, domain-based attributes, and other HTML and JavaScript elements (e.g., prefix-suffix, google_index, iframe, https_token). The data was aggregated from several reputable sources, including the PhishTank and MillerSmiles archives.
3- The Mendeley Phishing Dataset: This dataset includes 10,000 webpages, evenly split between phishing and legitimate categories. It describes each sample using 48 features. The data was collected in two periods: from January to May 2015 and from May to June 2017.
References:
[1] R. Zieni, “Zieni dataset for Phishing detection,” vol. 1, 2024. doi: 10.17632/8MCZ8JSGNB.1.
[2] R. Mohammad et al., “An assessment of features related to phishing websites using an automated technique,” in International Conference for Internet Technology and Secured Transactions, 2012.
[3] C. L. Tan, “Phishing Dataset for Machine Learning: Feature Evaluation,” vol. 1, 2018. doi: 10.17632/H3CGNJ8HFT.1.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Code:
Packet_Features_Generator.py & Features.py
To run this code:
pkt_features.py [-h] -i TXTFILE [-x X] [-y Y] [-z Z] [-ml] [-s S] -j
-h, --help  show this help message and exit
-i TXTFILE  input text file
-x X        add first X number of total packets as features
-y Y        add first Y number of negative packets as features
-z Z        add first Z number of positive packets as features
-ml         output to text file all websites in the format websiteNumber1,feature1,feature2,...
-s S        generate samples using size S
-j
Purpose:
Turns a text file containing lists of incoming and outgoing network packet sizes into separate website objects with associated features.
Uses Features.py to calculate the features.
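As a rough illustration of the feature extraction described above, here is a minimal Python sketch; the function name and details are hypothetical, and the actual Features.py may differ:

```python
# Hypothetical sketch of the -x/-y/-z feature extraction described above;
# the real Features.py may structure this differently.
def extract_features(sizes, x=0, y=0, z=0):
    """Build a feature vector from a list of signed packet sizes.

    sizes: signed packet sizes (negative = incoming, positive = outgoing).
    x/y/z: how many total/negative/positive packets to keep as features.
    """
    negatives = [s for s in sizes if s < 0]
    positives = [s for s in sizes if s > 0]
    features = []
    features += sizes[:x]        # first X packets overall
    features += negatives[:y]    # first Y incoming packets
    features += positives[:z]    # first Z outgoing packets
    return features
```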
startMachineLearning.sh & machineLearning.py
To run this code:
bash startMachineLearning.sh
This script then runs machineLearning.py in a tmux session with the necessary file paths and flags.
Options (to be edited within this file):
--evaluate-only to test 5-fold cross-validation accuracy
--test-scaling-normalization to test 6 different combinations of scalers and normalizers
Note: once the best combination is determined, it should be added to the data_preprocessing function in machineLearning.py for future use
--grid-search to test for the best grid-search hyperparameters
Note: the candidate hyperparameters must be added to train_model under 'if not evaluateOnly:'; once the best hyperparameters are determined, add them to train_model under 'if evaluateOnly:'
Purpose:
Using the .ml file generated by Packet_Features_Generator.py & Features.py, this program trains a RandomForest classifier on the provided data and reports results using cross-validation. These results include the best scaling and normalization options for each dataset, as well as the best grid-search hyperparameters from the provided ranges.
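For orientation, a minimal sketch of this evaluation flow using scikit-learn; the structure and hyperparameter ranges are illustrative assumptions, not the actual machineLearning.py:

```python
# Minimal sketch of the evaluation flow described above (assumes scikit-learn);
# the actual machineLearning.py may differ.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler, Normalizer, StandardScaler

def evaluate(X, y):
    # 5-fold cross-validation accuracy (--evaluate-only)
    clf = RandomForestClassifier(random_state=0)
    print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

    # Compare scaler/normalizer combinations (--test-scaling-normalization)
    for scaler in (StandardScaler(), MinMaxScaler()):
        for norm in (None, Normalizer()):
            steps = [scaler] + ([norm] if norm else []) + [RandomForestClassifier(random_state=0)]
            pipe = make_pipeline(*steps)
            score = cross_val_score(pipe, X, y, cv=5).mean()
            print(type(scaler).__name__, type(norm).__name__ if norm else "none", score)

    # Hyperparameter search (--grid-search); these ranges are made up for illustration
    grid = GridSearchCV(RandomForestClassifier(random_state=0),
                        {"n_estimators": [100, 300], "max_depth": [None, 20]}, cv=5)
    grid.fit(X, y)
    print("Best params:", grid.best_params_)
```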
Data
Encrypted network traffic was collected on an isolated computer visiting different Wikipedia and New York Times articles, issuing different Google search queries (collected in the form of their autocomplete results and their results pages), and performing different actions on a virtual reality headset.
Data for each experiment was stored and analyzed as a .txt file containing:
The first number on each line is a classification number denoting which website, query, or VR action is taking place.
The remaining numbers on each line denote:
the size of a packet,
and the direction it is traveling:
negative numbers denote incoming packets,
positive numbers denote outgoing packets.
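For illustration, a minimal Python sketch of reading one such line; the delimiter is assumed to be whitespace or commas, since the exact format is not specified above:

```python
# Sketch of parsing one capture line as described above; handles either
# whitespace- or comma-separated numbers (an assumption about the files).
def parse_line(line):
    numbers = [int(tok) for tok in line.replace(",", " ").split()]
    label, sizes = numbers[0], numbers[1:]
    incoming = [s for s in sizes if s < 0]   # negative = incoming
    outgoing = [s for s in sizes if s > 0]   # positive = outgoing
    return label, incoming, outgoing
```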
Figure 4 Data
This data uses specific lines from the Virtual Reality.txt file.
The action 'LongText Search' refers to a user searching for "Saint Basils Cathedral" with text in the Wander app.
The action 'ShortText Search' refers to a user searching for "Mexico" with text in the Wander app.
The .xlsx and .csv files are identical.
Each file includes (from right to left):
the original packet data,
each line of data sorted from smallest to largest packet size, used to calculate the mean and standard deviation of each packet capture,
and the final Cumulative Distribution Function (CDF) calculation that generated the Figure 4 graph.
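A small Python sketch of that mean, standard deviation, and empirical CDF calculation, under the assumption that the CDF value for each packet size is the fraction of packets at or below that size (the spreadsheet may compute it differently):

```python
# Sketch of the summary statistics and empirical CDF described above.
import statistics

def empirical_cdf(sizes):
    ordered = sorted(sizes)                 # smallest to largest, as in the files
    mean = statistics.mean(ordered)
    stdev = statistics.stdev(ordered)
    n = len(ordered)
    # One CDF point per packet: fraction of packets <= that size
    cdf = [(size, (i + 1) / n) for i, size in enumerate(ordered)]
    return mean, stdev, cdf
```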
NOTE: To review the latest plan, make sure to filter the "Report Year" column to the latest year.
Data on public websites maintained by or on behalf of city agencies.
https://webtechsurvey.com/terms
A complete list of live websites using the Export User Data technology, compiled through global website indexing conducted by WebTechSurvey.
https://webtechsurvey.com/terms
A complete list of live websites using the Experian Data Quality technology, compiled through global website indexing conducted by WebTechSurvey.
https://webtechsurvey.com/terms
A complete list of live websites using the Head Meta Data technology, compiled through global website indexing conducted by WebTechSurvey.
WP-Script is a company that provides WordPress themes and plugins for creating adult sites. They offer a range of products, including seven customizable adult WordPress themes and thirteen powerful adult WordPress plugins. Their products are designed to be easy to use and can help entrepreneurs create professional-looking adult sites with minimal technical expertise.
With WP-Script, you can start your adult site in six easy steps. They also offer a 14-day money-back guarantee, giving you the opportunity to test their products risk-free. Additionally, they provide premium support to help you resolve any issues you may encounter. Their customers love their products, citing excellent themes, easy installation, and good customer support.
By Throwback Thursday [source]
Here are some tips on how to make the most out of this dataset:
Data Exploration:
- Begin by understanding the structure and contents of the dataset. Evaluate the number of rows (sites) and columns (attributes) available.
- Check for missing values or inconsistencies in data entry that may impact your analysis.
- Assess column descriptions to understand what information is included in each attribute.
Geographical Analysis:
- Leverage geographical features such as latitude and longitude coordinates provided in this dataset.
- Plot these sites on a map using any mapping software or library like Google Maps or Folium for Python. Visualizing their distribution can provide insights into patterns based on location, climate, or cultural factors.
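As one possible starting point, a short Folium sketch; the file name and the Name/Latitude/Longitude column names are assumptions about this dataset's schema:

```python
# Sketch of plotting the sites with folium; the CSV file name and the
# "Name"/"Latitude"/"Longitude" columns are assumed, not confirmed.
import folium
import pandas as pd

sites = pd.read_csv("whc-sites.csv")        # hypothetical file name
world = folium.Map(location=[20, 0], zoom_start=2)
for _, row in sites.iterrows():
    folium.Marker([row["Latitude"], row["Longitude"]],
                  popup=row["Name"]).add_to(world)
world.save("heritage_sites.html")
```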
Analyzing Attributes:
- Familiarize yourself with different attributes available for analysis. Possible attributes include Name, Description, Category, Region, Country, etc.
- Understand each attribute's format and content type (categorical, numerical) for better utilization during data analysis.
Exploring Categories & Regions:
- Look at unique categories mentioned in the Category column (e.g., Cultural Site, Natural Site) to explore specific interests. This could help identify clusters within particular heritage types across countries/regions worldwide.
- Analyze regions with high concentrations of heritage sites using data visualizations like bar plots or word clouds based on frequency counts.
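For example, a minimal frequency bar plot with pandas and Matplotlib; the file name and the Region column name are assumptions about this dataset's schema:

```python
# Sketch of a frequency bar plot by region; "Region" is an assumed column name.
import matplotlib.pyplot as plt
import pandas as pd

sites = pd.read_csv("whc-sites.csv")        # hypothetical file name
sites["Region"].value_counts().plot(kind="bar")
plt.ylabel("Number of heritage sites")
plt.tight_layout()
plt.show()
```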
Identify Trends & Patterns:
- Discover recurring themes across various sites by analyzing descriptive text attributes such as names and descriptions.
- Identify patterns and correlations between attributes by performing statistical analysis or utilizing machine learning techniques.
Comparison:
- Compare different attributes to gain a deeper understanding of the sites.
- For example, analyze the number of heritage sites per country/region or compare the distribution between cultural and natural heritage sites.
Additional Data Sources:
- Use this dataset as a foundation to combine it with other datasets for in-depth analysis. There are several sources available that provide additional data on UNESCO World Heritage Sites, such as travel blogs, official tourism websites, or academic research databases.
Remember to cite this dataset appropriately if you use it in your research or projects.
- Travel Planning: This dataset can be used to identify and plan visits to UNESCO World Heritage sites around the world. It provides information about the location, category, and date of inscription for each site, allowing users to prioritize their travel destinations based on personal interests or preferences.
- Cultural Preservation: Researchers or organizations interested in cultural preservation can use this dataset to analyze trends in UNESCO World Heritage site listings over time. By studying factors such as geographical distribution, types of sites listed, and inscription dates, they can gain insights into patterns of cultural heritage recognition and protection.
- Statistical Analysis: The dataset can be used for statistical analysis to explore various aspects related to UNESCO World Heritage sites. For example, it could be used to examine the correlation between a country's economic indicators (such as GDP per capita) and the number or type of World Heritage sites it possesses. This analysis could provide insights into the relationship between economic development and cultural preservation efforts at a global scale.
If you use this dataset in your research, please credit the original author, Throwback Thursday. See the dataset description for more information.
A list of State of Oklahoma city government websites.
OpenWeb Ninja’s Website Contacts Scraper API provides real-time access to B2B contact data directly from company websites and related public sources. The API delivers clean, structured results including B2B email data, phone number data, and social profile links, making it simple to enrich leads and build accurate company contact lists at scale.
What's included:
- Emails & Phone Numbers: extract business emails and phone contacts from a website domain.
- Social Profile Links: capture company accounts on LinkedIn, Facebook, Instagram, TikTok, Twitter/X, YouTube, GitHub, and Pinterest.
- Domain Search: input a company website domain and get all available contact details.
- Company Name Lookup: find a company’s website domain by name, then retrieve its contact data.
- Comprehensive Coverage: scrape across all accessible website pages for maximum data capture.
Coverage & Scale:
- 1,000+ emails and phone numbers per company website supported.
- 8+ major social networks covered.
- Real-time REST API for fast, reliable delivery.
Use cases:
- B2B contact enrichment and CRM updates.
- Targeted email marketing campaigns.
- Sales prospecting and lead generation.
- Digital ads audience targeting.
- Marketing and sales intelligence.
With OpenWeb Ninja’s Website Contacts Scraper API, you get structured B2B email data, phone numbers, and social profiles straight from company websites - always delivered in real time via a fast and reliable API.
The dataset contains a hierarchical listing of New York State counties, cities, towns, and villages, as well as official locality websites.
This dataset was created by Shivam Mishra.
A site analytics story page discussing data freshness on the Maryland Open Data Portal, with links to the State's Data Freshness Homepage.
Information about pages on the City's website, including their age and their Google Analytics data (everything from "PageViews" to the right). If the Google Analytics fields are empty, the page has not been visited recently.
During a study conducted among e-commerce professionals in the UK and the U.S. in *********, respondents were asked about their use of personalization on their websites. According to the results, ** percent of survey participants were already using real-time behavioral data to personalize user experience on their e-commerce websites.