Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Comprehensive dataset containing 42 verified Standard locations in the United States with complete contact information, ratings, reviews, and location data.
Quadrant provides insightful, accurate, and reliable mobile location data.
Our privacy-first mobile location data unveils hidden patterns and opportunities, provides actionable insights, and fuels data-driven decision-making at the world's biggest companies.
These companies rely on our privacy-first Mobile Location and Points-of-Interest Data to unveil hidden patterns and opportunities, provide actionable insights, and fuel data-driven decision-making. They build better AI models, uncover business insights, and enable location-based services using our robust and reliable real-world data.
We conduct stringent evaluations of data providers to ensure authenticity and quality. Our proprietary algorithms detect and cleanse corrupted and duplicated data points, allowing you to leverage our datasets rapidly with minimal processing or cleaning. During the ingestion process, our proprietary data filtering algorithms remove events based on a number of qualitative factors, as well as latency and other integrity variables, to provide more efficient data delivery. The deduplication algorithm focuses on a combination of four important attributes: Device ID, Latitude, Longitude, and Timestamp. It scours our data for rows that contain the same combination of these four attributes, retains a single copy, and eliminates the duplicates, ensuring our customers receive only complete and unique datasets.
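A minimal sketch of that four-attribute deduplication using pandas; the column names below are assumed for illustration and may differ from the actual delivery schema:

```python
import pandas as pd

# Hypothetical event feed; first two rows share all four key attributes.
events = pd.DataFrame({
    "device_id": ["a1", "a1", "b2"],
    "latitude":  [1.30, 1.30, 52.52],
    "longitude": [103.85, 103.85, 13.40],
    "timestamp": [1700000000, 1700000000, 1700000123],
})

# Keep one copy of each (device_id, latitude, longitude, timestamp) combination.
deduped = events.drop_duplicates(
    subset=["device_id", "latitude", "longitude", "timestamp"], keep="first"
)
```

Deduplicating on the composite key rather than the full row means two events that differ only in, say, IP address still collapse to one record.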
We actively identify overlapping values at the provider level to determine the value each offers. Our data science team has developed a sophisticated overlap analysis model that helps us maintain a high-quality data feed by qualifying providers based on unique data values rather than volumes alone – measures that provide significant benefit to our end-use partners.
Quadrant mobility data contains all standard attributes such as Device ID, Latitude, Longitude, Timestamp, Horizontal Accuracy, and IP Address, and non-standard attributes such as Geohash and H3. In addition, we have historical data available back through 2022.
Through our in-house data science team, we offer sophisticated technical documentation, location data algorithms, and queries that help data buyers get a head start on their analyses. Our goal is to provide you with data that is “fit for purpose”.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Related parameters are set in the paper and code.
This feature class is part of the Cadastral National Spatial Data Infrastructure (NSDI) CADNSDI publication data set for rectangular and non-rectangular Public Land Survey System (PLSS) data. The metadata description in the Cadastral Reference System Feature Data Set more fully describes the entire data set. The PLSS Reference Grid is a generalized data set providing the Township and First Divisions of the PLSS as a separate feature class to support data requests, mapping, and indexing. The spatial location, position, and attributes of this feature class are the same as those in the primary data sets from which it is built. These data are often used for map sheet layouts and general location reference.
According to our latest research, the global Port Call Data Standardization Services market size reached USD 487.2 million in 2024, and is expected to grow at a robust CAGR of 12.6% from 2025 to 2033. By the end of 2033, the market is forecasted to achieve a value of USD 1,427.8 million. This growth is primarily driven by the increasing need for accurate and harmonized data across the maritime industry, which is fueling investments in advanced data management and standardization solutions worldwide.
One of the primary growth factors propelling the Port Call Data Standardization Services market is the dramatic surge in global maritime trade volumes. As international shipping becomes more complex, the need for seamless, standardized, and interoperable data exchange between ports, shipping companies, and logistics providers has become paramount. The proliferation of digitalization initiatives within the maritime sector, including the adoption of smart port technologies and integrated logistics platforms, is further amplifying demand for services that ensure data consistency, accuracy, and reliability. The implementation of regulations from international maritime organizations, such as the International Maritime Organization (IMO), is also compelling stakeholders to invest in robust data standardization frameworks, thereby accelerating market expansion.
Another significant driver is the growing emphasis on operational efficiency and cost optimization within port and shipping operations. Standardized port call data enables real-time visibility, predictive analytics, and enhanced coordination among various stakeholders, leading to reduced vessel turnaround times and streamlined port operations. This, in turn, results in substantial cost savings and improved resource utilization for both port authorities and shipping companies. Furthermore, the integration of advanced technologies such as Artificial Intelligence (AI), Machine Learning (ML), and the Internet of Things (IoT) into maritime analytics is creating new opportunities for data-driven decision-making, which is heavily reliant on standardized and high-quality data inputs.
The rapid advancement of cloud computing and the increasing adoption of cloud-based solutions are also playing a pivotal role in the growth of the Port Call Data Standardization Services market. Cloud platforms offer scalability, flexibility, and centralized data management capabilities, making them an attractive option for ports and maritime organizations seeking to modernize their IT infrastructure. The shift towards cloud-based deployment models not only facilitates seamless data integration and collaboration across geographically dispersed locations but also enhances data security and compliance with evolving regulatory standards. As a result, cloud-based data standardization services are witnessing accelerated uptake, particularly among large ports and multinational shipping conglomerates.
From a regional perspective, Asia Pacific continues to dominate the Port Call Data Standardization Services market, accounting for the largest share in 2024. This is attributable to the region’s status as a global maritime trade hub, with high port activity in countries such as China, Singapore, and South Korea. North America and Europe are also significant contributors, driven by early adoption of digitalization and stringent regulatory mandates. Meanwhile, the Middle East & Africa and Latin America are witnessing steady growth, supported by increasing investments in port infrastructure and modernization initiatives. Each region presents unique opportunities and challenges, shaping the overall trajectory of the global market.
The Service Type segment within the Port Call Data Standardization Services market encompasses a range of offerings, including Data Cleansing, Data Integration, Data Validation, Data Mapping, and other specialized services. Data Cleansing remains a
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Comprehensive dataset containing 16 verified Standard locations in RS with complete contact information, ratings, reviews, and location data.
According to our latest research, the global Indoor Map Data Standards Compliance market size reached USD 2.4 billion in 2024, driven by the rapid digitalization of indoor spaces and the increasing demand for standardized mapping frameworks. The market is projected to expand at a robust CAGR of 13.2% from 2025 to 2033, reaching an estimated USD 7.3 billion by 2033. This growth trajectory is primarily attributed to the proliferation of smart infrastructure, growing adoption of IoT devices, and the imperative need for interoperability and accuracy in indoor mapping solutions across diverse sectors.
A significant growth factor for the Indoor Map Data Standards Compliance market is the escalating demand for precise indoor navigation and wayfinding solutions in complex environments such as airports, hospitals, shopping malls, and corporate campuses. As organizations increasingly invest in digital transformation initiatives, the need for interoperable and standardized indoor mapping data has become paramount. The adoption of standards such as OGC IndoorGML, ISO/TC 211, IMDF, and CityGML enables seamless integration of indoor maps with various location-based services, ensuring consistency, reliability, and scalability. This not only enhances the user experience but also supports critical applications such as asset tracking, emergency response, and facility management, further fueling market expansion.
Another key driver is the growing emphasis on public safety and emergency response. Regulatory authorities and building managers are prioritizing compliance with indoor mapping standards to facilitate efficient evacuation planning, incident management, and real-time location tracking during emergencies. Standardized indoor maps enable first responders to access accurate spatial data, navigate complex building layouts, and coordinate rescue operations effectively. This increasing focus on safety compliance, coupled with stringent regulatory mandates in developed regions, is propelling the adoption of indoor map data standards across public and private sectors.
The rapid evolution of smart buildings and the integration of IoT technologies are also catalyzing the growth of the Indoor Map Data Standards Compliance market. Modern facilities are equipped with a multitude of connected devices and sensors that generate vast amounts of spatial data. To harness the full potential of these technologies, organizations require standardized frameworks that ensure data interoperability, security, and real-time accessibility. The convergence of indoor mapping with advanced analytics, artificial intelligence, and location-based services is opening new avenues for innovation, operational efficiency, and enhanced occupant experiences, thereby driving sustained market growth.
From a regional perspective, North America currently dominates the Indoor Map Data Standards Compliance market, accounting for over 36% of the global revenue in 2024. This leadership is underpinned by the presence of leading technology vendors, early adoption of digital mapping solutions, and robust investments in smart infrastructure projects. Europe follows closely, driven by stringent regulatory frameworks and widespread implementation of indoor mapping standards in transportation and public sector applications. Meanwhile, the Asia Pacific region is witnessing the fastest growth, fueled by rapid urbanization, expanding smart city initiatives, and increasing deployment of indoor navigation solutions in commercial and healthcare sectors. Latin America and the Middle East & Africa are also emerging as promising markets, supported by government-led digital transformation programs and growing awareness of the benefits of standards compliance in indoor mapping.
The Indoor Map Data Standards Compliance market is segmented by component into software and services, each playing a pivotal role in the ecosystem. The software segment encompasses platforms and solutions designed to create, manage, and vi
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The USDA Long-Term Agroecosystem Research (LTAR) Network was established to develop national strategies for sustainable intensification of agricultural production. As part of the Agricultural Research Service, the LTAR Network incorporates numerous geographies consisting of experimental areas and locations where data are being gathered. Starting in early 2019, two working groups of the LTAR Network (Remote Sensing and GIS, and Data Management) set a major goal of jointly developing a geodatabase of LTAR Standard GIS Data Layers. The purpose of the geodatabase was to enhance the Network's ability to use coordinated, harmonized datasets and to reduce the redundancy and potential errors associated with multiple copies of similar datasets. Project organizers met at least twice with each of the 18 LTAR sites from September 2019 through December 2020, compiling and editing a set of detailed geospatial data layers, comprising a geodatabase, that describe essential data collection areas within the LTAR Network.
The LTAR Standard GIS Data Layers geodatabase consists of geospatial data that represent locations and areas associated with the LTAR Network as of late 2020, including LTAR site locations, addresses, experimental plots, fields and watersheds, eddy flux towers, and phenocams. There are six data layers in the geodatabase available to the public. This geodatabase was created in 2019-2020 by the LTAR network as a national collaborative effort among working groups and LTAR sites. The creation of the geodatabase began with initial requests to LTAR site leads and data managers for geospatial data, followed by meetings with each LTAR site to review the initial draft. Edits were documented, and the final draft was again reviewed and certified by LTAR site leads or their delegates. Revisions to this geodatabase will occur biennially, with the next revision scheduled to be published in 2023.
Resources in this dataset:

Resource Title: LTAR Standard GIS Data Layers, 2020 version, File Geodatabase.
File Name: LTAR_Standard_GIS_Layers_v2020.zip
Resource Description: This file geodatabase consists of authoritative GIS data layers of the Long-Term Agroecosystem Research Network. Data layers include: LTAR site locations, LTAR site points of contact and street addresses, LTAR experimental boundaries, LTAR site "legacy region" boundaries, LTAR eddy flux tower locations, and LTAR phenocam locations.
Resource Software Recommended: ArcGIS, url: esri.com

Resource Title: LTAR Standard GIS Data Layers, 2020 version, GeoJSON files.
File Name: LTAR_Standard_GIS_Layers_v2020_GeoJSON_ADC.zip
Resource Description: The LTAR Standard GIS Data Layers include geospatial data that represent locations and areas associated with the LTAR Network as of late 2020. This collection of GeoJSON files includes spatial data describing LTAR site locations, addresses, experimental plots, fields and watersheds, eddy flux towers, and phenocams. There are six data layers in the geodatabase available to the public. This dataset was created in 2019-2020 by the LTAR network as a national collaborative effort among working groups and LTAR sites.
Resource Software Recommended: QGIS, url: https://qgis.org/en/site/
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Comprehensive dataset containing 73 verified American Standard locations in the United States with complete contact information, ratings, reviews, and location data.
Xavvy fuel is the leading source for location data and market insights worldwide. We specialize in data quality and enrichment, providing high-quality POI data for convenience stores in the United States.
Base data: • Name/Brand • Address • Geocoordinates • Opening Hours • Phone • ...
15+ Services: • Fuel • Wifi • ChargePoints • …
10+ Payment options: • Visa • MasterCard • Google Pay • individual Apps • ...
Our data offering is highly customizable and flexible in delivery – whether one-time or regular data delivery, push or pull services, and various data formats – we adapt to our customers' needs.
Brands included: • 7-Eleven • Circle K • Alimentation Couche-Tard • Speedway • Casey's • ...
The total number of convenience stores per region, market share distribution among competitors, or the ideal location for new branches – our convenience store data provides valuable insights into the market and serves as the perfect foundation for in-depth analyses and statistics. Our data helps businesses across various industries make informed decisions regarding market development, expansion, and competitive strategies. Additionally, our data contributes to the consistency and quality of existing datasets. A simple data mapping allows for accuracy verification and correction of erroneous entries.
Especially when displaying information about convenience stores on maps or in applications, high data quality is crucial for an optimal customer experience. Therefore, we continuously optimize our data processing procedures: • Regular quality controls • Geocoding systems to refine location data • Cleaning and standardization of datasets • Consideration of current developments and mergers • Continuous expansion and cross-checking of various data sources
Integrate the most comprehensive database of convenience store locations in the USA into your business. Explore our additional data offerings and gain valuable market insights directly from the experts!
ISO 6709:2008 is applicable to the interchange of coordinates describing geographic point location. It specifies the representation of coordinates, including latitude and longitude, to be used in data interchange. (29.11.2011)
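As a sketch, one common ISO 6709 string form uses signed decimal degrees with fixed widths (two integer digits for latitude, three for longitude) and a terminating solidus; the formatter below assumes that representation:

```python
def iso6709(lat: float, lon: float) -> str:
    # Signed decimal-degrees string, e.g. "+40.20361-075.00417/".
    # Width 9 for latitude (+DD.DDDDD) and 10 for longitude (+DDD.DDDDD);
    # zero-padding after the sign gives the fixed integer-digit counts.
    return f"{lat:+09.5f}{lon:+010.5f}/"

print(iso6709(40.20361, -75.00417))  # → +40.20361-075.00417/
```

Keeping both hemispheres signed and the widths fixed makes the string trivially parseable by position, which is the point of a coordinate interchange format.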
The ROTI index is the standard deviation of the rate of change of the total electron content (TEC) during a 15-minute interval. The TEC values are measured between a Global Positioning System (GPS) satellite and a ground receiver station.
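A minimal NumPy sketch of that definition, assuming TEC samples at a fixed 30-second cadence covering one 15-minute window (the sampling rate is an assumption, not part of the dataset description):

```python
import numpy as np

def roti(tec: np.ndarray, dt_minutes: float = 0.5) -> float:
    """ROTI for one window: the standard deviation of ROT, the rate of
    change of TEC (in TECU/min), over the samples in `tec`."""
    rot = np.diff(tec) / dt_minutes   # finite-difference rate of change
    return float(np.std(rot))

# 31 samples at 30 s spacing span a 15-minute interval (synthetic data).
tec = np.linspace(20.0, 23.0, 31)     # a perfectly linear TEC ramp
value = roti(tec)                     # ≈ 0, since a linear ramp has constant ROT
```

A constant ROT yields ROTI near zero; scintillation-driven TEC fluctuations raise the spread of ROT and hence the index.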
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Comprehensive dataset containing 76 verified Standard Parking locations in the United States with complete contact information, ratings, reviews, and location data.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
Dataset Description
This dataset is a collection of customer, product, sales, and location data extracted from a CRM and ERP system for a retail company. It has been cleaned and transformed through various ETL (Extract, Transform, Load) processes to ensure data consistency, accuracy, and completeness. Below is a breakdown of the dataset components:
1. Customer Information (s_crm_cust_info)
This table contains information about customers, including their unique identifiers and demographic details.
Columns:
cst_id: Customer ID (Primary Key)
cst_gndr: Gender
cst_marital_status: Marital status
cst_create_date: Customer account creation date
Cleaning Steps:
Removed duplicates and handled missing or null cst_id values.
Trimmed leading and trailing spaces in cst_gndr and cst_marital_status.
Standardized gender values and identified inconsistencies in marital status.
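The customer-table steps above can be sketched with pandas; the raw values below are invented to illustrate each cleaning rule, and the gender-code mapping is an assumption:

```python
import pandas as pd

# Hypothetical raw extract mirroring the s_crm_cust_info columns.
raw = pd.DataFrame({
    "cst_id": [101, 101, None, 103],
    "cst_gndr": [" M ", " M ", "F", "female"],
    "cst_marital_status": ["Single ", "Single ", " Married", "Single"],
})

cust = (
    raw.dropna(subset=["cst_id"])           # drop rows with missing IDs
       .drop_duplicates(subset=["cst_id"])  # keep one row per customer
       .assign(                             # trim stray whitespace
           cst_gndr=lambda d: d["cst_gndr"].str.strip(),
           cst_marital_status=lambda d: d["cst_marital_status"].str.strip(),
       )
)
# Standardize gender codes to one set of labels (mapping assumed).
cust["cst_gndr"] = cust["cst_gndr"].replace(
    {"M": "Male", "F": "Female", "male": "Male", "female": "Female"}
)
```

Doing the null/duplicate drop before the value standardization keeps the `replace` map from ever touching rows that will be discarded anyway.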
2. Product Information
This table contains information about products, including product identifiers, names, costs, and lifecycle dates.
Columns:
prd_id: Product ID
prd_key: Product key
prd_nm: Product name
prd_cost: Product cost
prd_start_dt: Product start date
prd_end_dt: Product end date
Cleaning Steps:
Checked for duplicates and null values in the prd_key column.
Validated product dates to ensure prd_start_dt is earlier than prd_end_dt.
Corrected product costs to remove invalid entries (e.g., negative values).
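The date and cost validations can be sketched as follows; the rows are fabricated so that one violates each rule, and flagged costs are nulled for later imputation rather than silently dropped (a design choice, not something the description mandates):

```python
import pandas as pd

# Hypothetical extract with the product columns named in the description.
prd = pd.DataFrame({
    "prd_key": ["P-1", "P-2", "P-3"],
    "prd_cost": [10.0, -5.0, 7.5],
    "prd_start_dt": pd.to_datetime(["2023-01-01", "2023-02-01", "2023-06-01"]),
    "prd_end_dt":   pd.to_datetime(["2023-12-31", "2023-01-15", "2023-07-01"]),
})

# Flag rows whose lifecycle dates are out of order (start must precede end).
bad_dates = prd["prd_start_dt"] >= prd["prd_end_dt"]
# Treat negative costs as invalid and null them out.
prd.loc[prd["prd_cost"] < 0, "prd_cost"] = float("nan")
```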
3. Sales Transactions
This table contains information about sales transactions, including order dates, quantities, prices, and sales amounts.
Columns:
sls_order_dt: Sales order date
sls_due_dt: Sales due date
sls_sales: Total sales amount
sls_quantity: Number of products sold
sls_price: Product unit price
Cleaning Steps:
Validated sales order dates and corrected invalid entries.
Checked for discrepancies where sls_sales did not match sls_price * sls_quantity and corrected them.
Removed null and negative values from sls_sales, sls_quantity, and sls_price.
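The sales-integrity check, recomputing the amount where the stored value disagrees with price times quantity, can be sketched like this (sample rows are invented; the half-cent tolerance is an assumption to absorb rounding):

```python
import pandas as pd

# Hypothetical sales-fact rows; the middle row is deliberately inconsistent.
sales = pd.DataFrame({
    "sls_quantity": [2, 3, 1],
    "sls_price":    [9.99, 5.00, 20.00],
    "sls_sales":    [19.98, 14.00, 20.00],
})

expected = sales["sls_price"] * sales["sls_quantity"]
# Flag rows where the stored amount disagrees with price x quantity...
mismatch = (sales["sls_sales"] - expected).abs() > 0.005
# ...and overwrite them with the recomputed amount.
sales.loc[mismatch, "sls_sales"] = expected[mismatch]
```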
4. Customer Demographics (ERP)
This table contains additional customer demographic data, including gender and birthdate.
Columns:
cid: Customer ID
gen: Gender
bdate: Birthdate
Cleaning Steps:
Checked for missing or null gender values and standardized inconsistent entries.
Removed leading/trailing spaces from gen and bdate.
Validated birthdates to ensure they were within a realistic range.
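A sketch of the birthdate range check; the 120-year window and the fixed reference date are assumptions for illustration, since the description does not state what "realistic" means:

```python
import pandas as pd

# Hypothetical demographic rows with the cid/bdate columns described above.
demo = pd.DataFrame({
    "cid": [1, 2, 3],
    "bdate": pd.to_datetime(["1985-04-12", "1880-01-01", "2031-06-30"]),
})

today = pd.Timestamp("2024-01-01")   # fixed reference date for reproducibility
# A birthdate is "realistic" if it falls within the last 120 years.
realistic = demo["bdate"].between(today - pd.DateOffset(years=120), today)
demo.loc[~realistic, "bdate"] = pd.NaT   # null out impossible birthdates
```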
5. Customer Location (ERP)
This table contains country information related to the customers' locations.
Columns:
cntry: Country
Cleaning Steps:
Standardized country names (e.g., "US" and "USA" were mapped to "United States").
Removed special characters (e.g., carriage returns) and trimmed whitespace.
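Both country-cleaning steps fit in a small normalization helper; only the US/USA mapping comes from the description, the other entries are placeholders:

```python
# "US" and "USA" -> "United States" is from the description; the German
# entries are hypothetical examples of how the map would be extended.
country_map = {
    "US": "United States",
    "USA": "United States",
    "DE": "Germany",
    "GER": "Germany",
}

def clean_country(raw: str) -> str:
    # Strip carriage returns and surrounding whitespace, then map
    # known abbreviations to canonical names; pass unknowns through.
    value = raw.replace("\r", "").strip()
    return country_map.get(value, value)

print(clean_country("US\r"))  # → United States
```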
6. Product Categories
This table contains product category information.
Columns:
Product category data (no significant cleaning required).
Key Features:
Customer demographics, including gender and marital status
Product details such as cost, start date, and end date
Sales data with order dates, quantities, and sales amounts
ERP-specific customer and location data
Data Cleaning Process:
This dataset underwent extensive cleaning and validation, including:
Null and Duplicate Removal: Ensuring no duplicate or missing critical data (e.g., customer IDs, product keys).
Date Validations: Ensuring correct date ranges and chronological consistency.
Data Standardization: Standardizing categorical fields (e.g., gender, country names) and fixing inconsistent values.
Sales Integrity Checks: Ensuring sales amounts match the expected product of price and quantity.
This dataset is now ready for analysis and modeling, with clean, consistent, and validated data for retail analytics, customer segmentation, product analysis, and sales forecasting.
Improving the quality of places is a crucial element in addressing the inequalities that exist across the UK. While standardised tools exist to structure conversations about place, the extent to which these capture inequalities remains unclear. This study examined the utility of the Place Standard Tool (PST) as a means of understanding inequalities in relation to place. A dataset of 8,218 PST responses collected in the north of England, and the PST itself, were analysed using an inequalities lens, with a particular focus on the qualitative data collected through the tool. The results showed that despite limits to the demographic data recorded by the PST, such as the lack of ethnicity and disability data, key themes relating to protected characteristic groups were captured in the data. The analysis identified the themes of ethnicity, gender, physical mobility, economic status, and housing situation as particularly prominent within the dataset, and reflects on how these themes affect people's relationships with place. In its current form, the PST demonstrates an ability to improve understanding of inequalities in relation to place. However, extra consideration, particularly relating to ensuring the PST is applied equitably, and some adaptation of questions, would unlock its full potential.
Improving the quality of demographic data collected is a key part of improving the accuracy and equity of data collection.
Responding proactively to gaps in response rates during data collection exercises can improve the overall quality of data collected, particularly for minority groups.
Considering equitable and accessible ways to collect data using the Place Standard Tool is key to fulfilling its potential as a tool for examining inequalities in relation to place.
By Tim Renner [source]
Welcome to the world of UFO sightings! This vast dataset contains records of reported UFO sightings from North America, including detailed information about the time, location, duration, shape and more for each sighting. All reports come directly from the NUFORC site's public database. With this dataset you can uncover everything from strange glowing orbs in the night sky to mysterious cylindrical objects circling high above us.
What will you find? Dive deep into a world of mystery as you explore date and time stamps of sightings along with GPS coordinates and full descriptions of what was seen in communities near and far. Examine shapes that range beyond simple circles or triangles; zig zags, chevrons or crescents may all make an appearance here - who knows what strange anomalies lurk in these reports? For those seeking something concrete look no further: each record also contains a direct link to the original report posted on NUFORC’s website where further details are provided. So study up on your ESTs (Estimated Time Sightings) - your journey begins now!
For more datasets, click here.
UFO sightings have long been a subject of curiosity and debate among people from all walks of life. With the ease of global travel, it's now possible to uncover mysterious UFO sightings from all over the world, including North America. This dataset contains detailed reports on hundreds of thousands of UFOs reported in North America over the past five years, giving researchers an unprecedented opportunity to investigate these phenomena.
In this guide we will explore how to get started with using this dataset and extracting valuable insights about unknown aircrafts in our skies. To begin with, let's take a look at some key pieces of information contained within the dataset.
Each record includes the date_time and duration of the sighting, enabling us to investigate trends in visibility across timeframes or locations, as well as details about the reported object: its country, city, and state, and, most notably, its physical description under the ‘shape’ field (e.g. triangle). Other essential attributes include a text summary of the report (under ‘summary’) and a direct link to further information about each report (under ‘report_link’). In addition, the ‘stats’ field contains fused raw records, including date/time information expressed differently from the date_time field, which can be useful in other analysis scenarios. By leveraging data science techniques such as clustering algorithms or trend analyses, academics and curious enthusiasts alike can not only note patterns in individual sightings but also potentially cast light on unsolved mysteries of aerial activity that have circulated since ancient times.
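As a quick illustration of working with these fields, here is a minimal sketch that tallies sightings per year and per reported shape. The rows and the column subset below are invented stand-ins for the real nuforc_reports.csv, used here only so the snippet is self-contained:

```python
import csv
from collections import Counter
from io import StringIO

# Hypothetical sample mirroring a few of the columns described above;
# in practice you would open nuforc_reports.csv instead.
sample = StringIO(
    "date_time,city,state,country,shape\n"
    "2021-07-04 22:15,Phoenix,AZ,USA,triangle\n"
    "2021-07-04 23:40,Tucson,AZ,USA,light\n"
    "2022-01-12 03:05,Reno,NV,USA,chevron\n"
)
rows = list(csv.DictReader(sample))

# Trend example: count sightings per year and per reported shape.
per_year = Counter(r["date_time"][:4] for r in rows)
per_shape = Counter(r["shape"] for r in rows)
print(per_year)   # Counter({'2021': 2, '2022': 1})
print(per_shape)
```

The same two counters, computed over the full file, are the starting point for the time-series and shape-frequency analyses mentioned above.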
- Building a geographic heatmap of UFO sightings around North America over time that can track spikes in particular regions to better understand where they may originate from.
- Creating a Natural Language Processing algorithm that classifies the features of each UFO sighting, such as shape and duration, in order to more effectively detect patterns between sightings or compare them against one another for research purposes.
- Using city location data to calculate the distance travelled by certain UFOs, in order to estimate the speed at which they traverse the sky and to analyze potential relationships between speed and sighting attributes such as shape or duration, or other correlated variables from the surrounding area.
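The third idea above can be sketched with a plain haversine calculation. Everything here is illustrative: the coordinates are approximate city centers, the 10-minute gap between reports is invented, and treating two reports as the same object is a modelling assumption, not something the dataset asserts:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))

# Hypothetical pair of sightings of the "same" object 10 minutes apart:
# Phoenix, AZ -> Tucson, AZ (approximate coordinates).
dist = haversine_km(33.4484, -112.0740, 32.2226, -110.9747)
speed_kmh = dist / (10 / 60)  # distance divided by hours elapsed
print(round(dist), "km at", round(speed_kmh), "km/h")
```

An implausibly high implied speed between paired reports is itself a useful signal that the pairing assumption is wrong.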
If you use this dataset in your research, please credit the original authors.
Unknown License - Please check the dataset description for more information.
File: nuforc_reports.csv

| Column name | Description |
|:------------|:-----------------------------------------------|
| level_0     | Unique identifier for each sighting. (Integer) |
| text        | Full text of the report. (String)              |
| stats       | Date/time, location, etc. (String)             |
| ...         |                                                |
License: U.S. Government Works (https://www.usa.gov/government-works)
This dataset is sourced from the U.S. Department of Transportation Bureau of Transportation Statistics. All data and metadata are sourced from the page linked below. Metadata is not updated automatically; data updates weekly.
Source Data Link: https://data.bts.gov/Research-and-Statistics/Trips-by-Distance/w96p-f2qv
How many people are staying at home? How far are people traveling when they don’t stay home? Which states and counties have more people taking trips? The Bureau of Transportation Statistics (BTS) now provides answers to those questions through our new mobility statistics.
The Trips by Distance data and number of people staying home and not staying home are estimated for the Bureau of Transportation Statistics by the Maryland Transportation Institute and Center for Advanced Transportation Technology Laboratory at the University of Maryland. The travel statistics are produced from an anonymized national panel of mobile device data from multiple sources. All data sources used in the creation of the metrics contain no personal information. Data analysis is conducted at the aggregate national, state, and county levels. A weighting procedure expands the sample of millions of mobile devices, so the results are representative of the entire population in a nation, state, or county. To assure confidentiality and support data quality, no data are reported for a county if it has fewer than 50 devices in the sample on any given day.
Trips are defined as movements that include a stay of longer than 10 minutes at an anonymized location away from home. Home locations are imputed on a weekly basis. A movement with multiple stays of longer than 10 minutes before returning home is counted as multiple trips. Trips capture travel by all modes of transportation, including driving, rail, transit, and air.
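A minimal sketch of that trip definition, assuming a simplified per-device timeline of (location, stay duration in minutes) observations. The data model and threshold handling here are illustrative, not the BTS pipeline:

```python
# A "trip" per the definition above: a stay of more than 10 minutes at a
# location away from home. A day with several qualifying stops before
# returning home therefore counts as several trips.
def count_trips(stays, min_stay_minutes=10):
    """Count stays longer than the threshold at non-home locations."""
    return sum(
        1
        for location, duration in stays
        if location != "home" and duration > min_stay_minutes
    )

# Illustrative day: two qualifying stops -> two trips.
day = [("home", 600), ("coffee_shop", 25), ("office", 480), ("home", 300)]
print(count_trips(day))  # 2
```

Note that a brief 5-minute stop would not count as a trip under this rule, since only stays longer than 10 minutes qualify.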
The daily travel estimates are derived from a mobile device data panel merged from multiple data sources, which addresses the geographic and temporal sample-variation issues often observed in any single data source. The merged panel only includes mobile devices whose anonymized location data meet a set of data quality standards, which further ensures overall data quality and consistency. These standards consider the temporal frequency and spatial accuracy of anonymized location point observations, temporal coverage and representativeness at the device level, spatial representativeness at the sample and county level, and related criteria. A multi-level weighting method that employs both device-level and trip-level weights expands the sample to the underlying population at the county and state levels before travel statistics are computed.
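The expansion-weighting idea can be sketched as simple post-stratification: weight each sampled device by (county population / devices sampled in that county), so device-level trip counts scale to population-level estimates. The figures and the single-level weighting below are illustrative simplifications of the multi-level BTS method:

```python
# Invented figures for illustration only.
county_population = {"A": 100_000, "B": 40_000}
device_trips = [("A", 3), ("A", 1), ("B", 2), ("B", 2)]  # (county, trips)

# Devices observed per county.
devices_per_county = {}
for county, _ in device_trips:
    devices_per_county[county] = devices_per_county.get(county, 0) + 1

# Expansion weight: population represented by each sampled device.
weights = {c: county_population[c] / n for c, n in devices_per_county.items()}

# Population-level trip estimate: weighted sum of device trip counts.
estimated_trips = sum(trips * weights[county] for county, trips in device_trips)
print(estimated_trips)  # 280000.0
```

Here each of the two devices in county A stands in for 50,000 residents and each in county B for 20,000, so 4 + 4 observed trips expand to 280,000 estimated trips.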
These data are experimental and may not meet all of our quality standards. Experimental data products are created using new data sources or methodologies that benefit data users in the absence of other relevant products. We are seeking feedback from data users and stakeholders on the quality and usefulness of these new products. Experimental data products that meet our quality standards and demonstrate sufficient user demand may enter regular production if resources permit.