Hydrographic and Impairment Statistics (HIS) is a National Park Service (NPS) Water Resources Division (WRD) project established to track certain goals created in response to the Government Performance and Results Act of 1993 (GPRA). One water resources management goal established by the Department of the Interior under GPRA requires NPS to track the percent of its managed surface waters that meet Clean Water Act (CWA) water quality standards. This goal requires an accurate inventory that spatially quantifies the surface water hydrography that each bureau manages, and a procedure to determine and track which waterbodies are or are not meeting water quality standards as outlined by Section 303(d) of the CWA. This project helps meet this DOI GPRA goal by inventorying and monitoring, in a geographic information system for the NPS: (1) CWA 303(d) quality-impaired waters and their causes; and (2) hydrographic statistics based on the United States Geological Survey (USGS) National Hydrography Dataset (NHD). Hydrographic and 303(d) impairment statistics were evaluated based on a combination of 1:24,000-scale NHD data and finer-scale data (frequently provided by state GIS layers).
GRSDB is a database of G-quadruplexes and contains information on composition and distribution of putative Quadruplex-forming G-Rich Sequences (QGRS) mapped in eukaryotic pre-mRNA sequences, including those that are alternatively processed (alternatively spliced or alternatively polyadenylated). The data stored in GRSDB are based on computational analysis of NCBI Entrez Gene entries and their corresponding annotated genomic nucleotide sequences of RefSeq/GenBank.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Ice-rich permafrost in the circum-Arctic and sub-Arctic, such as late Pleistocene Yedoma, is especially prone to degradation due to climate change or human activity. When Yedoma deposits thaw, large amounts of frozen organic matter and biogeochemically relevant elements return to current biogeochemical cycles. Building on previous mapping efforts, the objective of this paper is to compile the first digital pan-Arctic Yedoma map and spatial database of Yedoma coverage. To this end, we (1) synthesized, analyzed, and digitized geological and stratigraphical maps allowing identification of Yedoma occurrence at all available scales, and (2) compiled field data and expert knowledge to create Yedoma map confidence classes. We used GIS techniques to vectorize maps and harmonize site information based on expert knowledge. Hence, here we synthesize data on the circum-Arctic and sub-Arctic distribution and thickness of Yedoma to compile a preliminary circum-polar Yedoma map.
To harmonize the different datasets and to avoid merging artifacts, we applied map edge cleaning while merging data from different database layers. For the digitization and spatial integration, we used Adobe Photoshop CS6 (Version 13.0 x64), Adobe Illustrator CS6 (Version 16.0.3 x64), Avenza MAPublisher 9.5.4 (Illustrator plug-in), and ESRI ArcGIS 10.6.1 for Desktop (Advanced License). Generally, we followed the workflow shown in figure 2 of the related publication (IRYP Version 2, Strauss et al 2021, https://doi.org/10.3389/feart.2021.758360).
We included a range of attributes for Yedoma areas based on lithological and stratigraphic information from the source maps and assigned three different confidence levels for the presence of Yedoma (confirmed, likely, or uncertain). Using a spatial buffer of 20 km around mapped Yedoma occurrences, we derived the extent of the Yedoma domain. Our result is a vector-based map of the current pan-Arctic Yedoma domain that covers approximately 2,587,000 km², of which Yedoma deposits are found within 480,000 km². We estimate that 35% of the total Yedoma area today is located in the tundra zone and 65% in the taiga zone. With this Yedoma mapping, we outlined the substantial spatial extent of late Pleistocene Yedoma deposits and created a unique pan-Arctic dataset including confidence estimates.
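The headline figures above can be cross-checked with a few lines of Python. This is simple arithmetic on the numbers reported in the text, not the GIS workflow itself:

```python
# Cross-check of the reported Yedoma map statistics (values from the text).
yedoma_domain_km2 = 2_587_000    # pan-Arctic Yedoma domain (20 km buffer)
yedoma_deposits_km2 = 480_000    # mapped Yedoma deposits within the domain

coverage = yedoma_deposits_km2 / yedoma_domain_km2
tundra_km2 = 0.35 * yedoma_deposits_km2   # 35% in the tundra zone
taiga_km2 = 0.65 * yedoma_deposits_km2    # 65% in the taiga zone

print(f"Deposits cover {coverage:.1%} of the domain")                 # about 18.6%
print(f"tundra: {tundra_km2:,.0f} km2, taiga: {taiga_km2:,.0f} km2")  # 168,000 / 312,000
```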
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Comprehensive dataset containing 12 verified Rich locations in the United States with complete contact information, ratings, reviews, and location data.
This map shows the USGS (United States Geological Survey) NWIS (National Water Information System) hydrologic data sites for Rich County, Utah. The scope and purpose of NWIS are defined on the web site: http://water.usgs.gov/public/pubs/FS/FS-027-98/
Richardson, TX, Ground-based Vector Magnetic Field Level 2 Data, 0.5 s Time Resolution, Station Code: (RICH), Station Location: (GEO Latitude 33.0, Longitude 263.2), McMAC Network
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
We investigate wealth returns on an administrative panel containing the disaggregated balance sheets of Swedish residents. The expected return on household net wealth is strongly persistent, determined primarily by systematic risk, and increasing in net worth, exceeding the risk-free rate by the size of the equity premium for households in the top 0.01%. Idiosyncratic risk is transitory but generates substantial long-term dispersion in returns in top brackets. Systematic and idiosyncratic risk both drive the cross-sectional distribution of the geometric average return over a generation. Furthermore, wealth returns explain most of the historical increase in top wealth shares.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset motivation and summary
The human brain achieves visual object recognition through multiple stages of linear and nonlinear transformations operating at a millisecond scale. To predict and explain these rapid transformations, computational neuroscientists employ machine learning modeling techniques. However, state-of-the-art models require massive amounts of data to properly train, and to the present day there is a lack of vast brain datasets which extensively sample the temporal dynamics of visual object recognition. Here we collected a large and rich dataset of high temporal resolution EEG responses to images of objects on a natural background. This dataset includes 10 participants, each with 82,160 trials spanning 16,740 image conditions coming from the THINGS database. We release this dataset as a tool to foster research in visual neuroscience and computer vision.
Useful material
Additional dataset information: For information regarding the experimental paradigm, the EEG recording protocol, and the dataset validation through computational modeling analyses, please refer to our paper.
Additional dataset resources: Please visit the dataset page for the paper, dataset tutorial, code, and more.
OSF: For additional data and resources, visit our OSF project, where you can find:
- A detailed description of the raw EEG data files
- The preprocessed EEG data
- The stimuli images
- The EEG resting state data
Citations: If you use any of our data, please cite our paper.
This dataset contains the predicted prices of the asset Eat the Rich over the next 16 years. This data is calculated initially using a default 5 percent annual growth rate, and after page load, it features a sliding scale component where the user can then further adjust the growth rate to their own positive or negative projections. The maximum positive adjustable growth rate is 100 percent, and the minimum adjustable growth rate is -100 percent.
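The projection logic described above amounts to compound annual growth with a clamped rate. A minimal sketch (not the site's actual code; the $100 starting price is a made-up example):

```python
# Compound annual growth over 16 years, with the adjustable rate clamped
# to the stated [-100%, +100%] range.
def project_prices(start_price, annual_rate_pct, years=16):
    """Return year-by-year projected prices under compound annual growth."""
    rate = max(-100.0, min(100.0, annual_rate_pct)) / 100.0
    return [round(start_price * (1 + rate) ** y, 2) for y in range(1, years + 1)]

# Default 5 percent growth applied to a hypothetical $100 starting price:
print(project_prices(100.0, 5.0)[:3])  # [105.0, 110.25, 115.76]
```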
Updated 30 January 2023
There has been some confusion around licensing for this data set. Dr. Carla Patalano and Dr. Rich Huebner are the original authors of this dataset.
We provide a license to anyone who wishes to use this dataset for learning or teaching. For the purposes of sharing, please follow this license:
CC-BY-NC-ND This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
https://rpubs.com/rhuebner/hrd_cb_v14
PLEASE NOTE: I recently updated the codebook; please use the above link. A few minor discrepancies were identified between the codebook and the dataset. Please feel free to contact me through LinkedIn (www.linkedin.com/in/RichHuebner) to report discrepancies and make requests.
HR data can be hard to come by, and HR professionals generally lag behind with respect to analytics and data visualization competency. Thus, Dr. Carla Patalano and I set out to create our own HR-related dataset, which is used in one of our graduate MSHRM courses, HR Metrics and Analytics, at New England College of Business. We created this data set ourselves. We use it to teach HR students how to analyze data in Tableau Desktop, a data visualization tool that's easy to learn.
This version provides a variety of features that are useful for both data visualization AND creating machine learning / predictive analytics models. We are working on expanding the data set even further by generating even more records and a few additional features. We will be keeping this as one file/one data set for now. There is a possibility of creating a second file perhaps down the road where you can join the files together to practice SQL/joins, etc.
Note that this dataset isn't perfect. By design, there are some issues that are present. It is primarily designed as a teaching data set - to teach human resources professionals how to work with data and analytics.
We have reduced the dataset to a single data file (v14). The CSV revolves around a fictitious company, and the core data set contains names, DOBs, age, gender, marital status, date of hire, reasons for termination, department, whether they are active or terminated, position title, pay rate, manager name, and performance score.
Recent additions to the data include: - Absences - Most Recent Performance Review Date - Employee Engagement Score
Dr. Carla Patalano provided the baseline idea for creating this synthetic data set, which has been used now by over 200 Human Resource Management students at the college. Students in the course learn data visualization techniques with Tableau Desktop and use this data set to complete a series of assignments.
We've included some open-ended questions that you can explore and try to address through creating Tableau visualizations, or R or Python analyses. Good luck and enjoy the learning!
There are so many other interesting questions that could be addressed through this interesting data set. Dr. Patalano and I look forward to seeing what we can come up with.
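For the Python route suggested above, a minimal pandas sketch of the kind of analysis this data supports. The column names and rows here are illustrative stand-ins, not the dataset's exact headers; with the real file you would start from pd.read_csv on the v14 CSV:

```python
import pandas as pd

# Stand-in for a few rows of the HR data set (illustrative values only).
df = pd.DataFrame({
    "Department": ["Sales", "Sales", "IT", "IT", "Production"],
    "EmploymentStatus": ["Active", "Terminated", "Active", "Active", "Terminated"],
    "PayRate": [55.0, 48.0, 62.0, 60.0, 21.0],
})

# Headcount, terminations, and average pay rate by department.
summary = df.groupby("Department").agg(
    headcount=("EmploymentStatus", "size"),
    terminated=("EmploymentStatus", lambda s: (s == "Terminated").sum()),
    avg_pay=("PayRate", "mean"),
)
summary["term_rate"] = summary["terminated"] / summary["headcount"]
print(summary)
```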
If you have any questions or comments about the dataset, please do not hesitate to reach out to me on LinkedIn: http://www.linkedin.com/in/RichHuebner
You can also reach me via email at: Richard.Huebner@go.cambridgecollege.edu
The data included here were used to evaluate the prospectivity for lithium in brines of playas of the western part of the Basin and Range Physiographic Province of the United States. Prospectivity is derived from the mappable criteria used in the descriptive deposit model published by Bradley and others (2013), focused mainly on the remote sensing point of view. The playas in the study area have been ranked according to size (compared to Clayton Valley, the only area in the country where lithium is being produced from brines), the presence and abundance of source rocks, vegetation (as an indicator of water), reported prospects, and remote sensing data. The remote sensing products used are from data acquired by the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) sensor, because it has regional coverage not available with other sensors. New in this version: four records in the Playas feature class (and the corresponding shapefile and CSV files) were modified, affecting the Prospects, Score, and Rank fields.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Comprehensive dataset containing 43 verified Rich Oil locations in the United States with complete contact information, ratings, reviews, and location data.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
661 global export shipment records of resin-rich coils, with prices, volumes, and current buyer-supplier relationships, based on an actual global export trade database.
This dataset contains the predicted prices of the asset Rich Guy over the next 16 years. This data is calculated initially using a default 5 percent annual growth rate, and after page load, it features a sliding scale component where the user can then further adjust the growth rate to their own positive or negative projections. The maximum positive adjustable growth rate is 100 percent, and the minimum adjustable growth rate is -100 percent.
Unlock powerful B2B marketing with the AmeriList U.S. Business Database, your gateway to connecting with over 20 million public and private companies across the U.S. and Canada.
Whether your goal is lead generation, account-based marketing, email campaigns, sales outreach, or market analysis, this database gives you the depth, accuracy, and segmentation you need to reach key decision makers efficiently.
AmeriList is a proven leader in direct marketing and data services since 2002. We combine multiple data sources, rigorous verification processes, and ongoing hygiene services to deliver one of the most dependable B2B data assets in the market.
Key Features & Data Coverage:
Aggregated from multiple trusted sources: Yellow Pages, white pages, SEC filings, government records, trade publications, etc.
Rich Firmographic & Demographic Selects: For precise targeting, you can filter and segment by:
SIC & NAICS codes (industry classification)
Business size: employee count, sales volume, year established
Executive names, titles, decision makers
Public vs private status, location, executive roles, and more
Data Quality & Hygiene Services: Your success hinges on clean data. AmeriList offers:
List hygiene services including merge/purge, data suppression, deceased handling, DMA suppression, etc.
Address correction & postal accuracy via NCOA, LACS, DSF2, CASS, ZIP+4 processing
Data enhancement services to append missing emails, phone numbers, firmographics, and demographic data
Specialty & Vertical Lists: In addition to the main business database, you can access more than 65,000 specialty mailing lists (e.g. auto owners, executives on the go, brides-to-be, healthcare professionals, etc.).
Some niche examples: dentists, lawyers, real estate professionals, contractors, home-based businesses (SOHO), credit-seeking businesses, start-ups, and more.
SOHO (Home-based Businesses) database: reach entrepreneurs running their business from home with selective targeting on industry, revenue, email, etc.
Booming Start-Ups database: newly formed, rapidly growing businesses that may be highly responsive to service providers.
Credit-Seeking Businesses list: businesses actively seeking financing, great for loan, leasing, or financial service vendors.
Channel & Delivery Options:
Receive your data in flexible formats (electronic lists, print, mail house fulfillment)
Ready for postal, telemarketing, or email campaigns depending on your strategy
Turnaround and fulfillment options are competitive, with support from AmeriList’s list services team
Benefits & Use Cases:
✔ Boost Sales & Lead Generation: Use the database to identify potential customers in your target verticals, then build campaigns to reach them via email, direct mail, phone, or multi-channel strategies.
✔ Precision Targeting & Better ROI: Eliminate guesswork; segment by industry, revenue, business size, location, executive role, and more. Your marketing budget goes further with high-conversion prospects.
✔ Decision-Maker Access: Reach business owners, executives, and purchasing managers directly with accurate contact details that cut through gatekeepers.
✔ Market Expansion & Competitive Intelligence: Find new markets or underserved geographies. Analyze competitive landscapes and business trends across industries.
✔ List Maintenance & Data Refresh: Ensure that your internal CRM or lead lists stay clean, up-to-date, and enriched, reducing bounce rates, undeliverables, and wasted outreach.
✔ Specialized Campaigns & Niche Targeting: Tap into industry-specific, interest-based, or buyer-behavior lists (e.g. credit-seeking businesses, start-ups, niche professionals) to tailor outreach campaigns.
Why Choose AmeriList:
The AmeriList U.S. Business Database is the ultimate resource for marketers, sales teams, and agencies looking to connect with verified companies and decision makers across every industry. With over 20 million U.S. businesses, rich firmographics, executive contacts, and advanced segmentation options, this B2B database ...
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This database belongs to the paper "Sulfide-rich continental roots at cratonic margins formed by carbonated melts", and includes Extended Data Table 1, Supplementary Data 1, Supplementary Data 2, Supplementary Data 3, Supplementary Data 4, and Supplementary Data 5.
https://www.nist.gov/open/license
The NIST Extensible Resource Data Model (NERDm) is a set of schemas for encoding, in JSON format, metadata that describe digital resources. The variety of digital resources it can describe includes not only digital data sets and collections, but also software, digital services, web sites and portals, and digital twins. It was created to serve as the internal metadata format used by the NIST Public Data Repository and Science Portal to drive rich presentations on the web and to enable discovery; however, it was also designed to enable programmatic access to resources and their metadata by external users. Interoperability was also a key design aim: the schemas are defined using the JSON Schema standard, metadata are encoded as JSON-LD, and their semantics are tied to community ontologies, with an emphasis on DCAT and the US federal Project Open Data (POD) models. Finally, extensibility is central to its design: the schemas are composed of a central core schema and various extension schemas. New extensions to support richer metadata concepts can be added over time without breaking existing applications. Validation is central to NERDm's extensibility model. Consuming applications should be able to choose which metadata extensions they care to support and ignore terms and extensions they don't support. Furthermore, they should not fail when a NERDm document leverages extensions they don't recognize, even when on-the-fly validation is required. To support this flexibility, the NERDm framework allows documents to declare which extensions are being used and where. We have developed an optional extension to standard JSON Schema validation (see ejsonschema below) to support flexible validation: while a standard JSON Schema validator can validate a NERDm document against the NERDm core schema, our extension will validate a NERDm document against any recognized extensions and ignore those that are not recognized.
The NERDm data model is based around the concept of a resource, semantically equivalent to a schema.org Resource, and as in schema.org, there can be different types of resources, such as data sets and software. A NERDm document indicates which types the resource qualifies as via the JSON-LD "@type" property. All NERDm Resources are described by metadata terms from the core NERDm schema; however, different resource types can be described by additional metadata properties (often drawing on particular NERDm extension schemas). A Resource contains Components of various types (including DCAT-defined Distributions) that are considered part of the Resource; specifically, these can include downloadable data files, hierarchical data collections, links to web sites (such as software repositories), software tools, or other NERDm Resources. Through the NERDm extension system, domain-specific metadata can be included at either the resource or component level. The direct semantic and syntactic connections to the DCAT, POD, and schema.org schemas are intended to ensure unambiguous conversion of NERDm documents into those schemas. As of this writing, the core NERDm schema and its framework stand at version 0.7 and are compatible with the "draft-04" version of JSON Schema. Version 1.0 is projected for release in 2025; in that release, the NERDm schemas will be updated to the "draft2020" version of JSON Schema. Other improvements will include stronger support for RDF and the Linked Data Platform through its support of JSON-LD.
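To make the validation idea concrete, here is a toy sketch using the standard jsonschema package. The schema below is a simplified stand-in, not the actual NERDm core schema, and the property names in the document are invented; the real framework (ejsonschema) additionally validates against any recognized extension schemas rather than merely tolerating unknown properties:

```python
# Illustrative only: core-schema validation that tolerates extension fields.
import jsonschema

core_schema = {
    "type": "object",
    "required": ["@type", "title"],
    "properties": {
        "@type": {"type": "array", "items": {"type": "string"}},
        "title": {"type": "string"},
        "components": {"type": "array"},
    },
    # Unrecognized extension properties are allowed, so a document using
    # extensions an application does not know about still validates.
    "additionalProperties": True,
}

doc = {
    "@type": ["nrdp:DataPublication", "dcat:Dataset"],   # hypothetical types
    "title": "Example resource",
    "components": [{"@type": ["nrdp:DataFile"], "filepath": "data.csv"}],
    "ex:someExtensionField": {"anything": "goes"},       # ignored by core validation
}

jsonschema.validate(doc, core_schema)  # raises ValidationError on failure
print("document is valid against the core sketch")
```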
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Context
The dataset tabulates the Rich Square population distribution across 18 age groups. It lists the population in each age group along with its percentage of the total population of Rich Square. The dataset can be utilized to understand the population distribution of Rich Square by age. For example, using this dataset, we can identify the largest age group in Rich Square.
Key observations
The largest age group in Rich Square, NC was 65-69 years, with a population of 80 (10.75%), according to the 2021 American Community Survey. At the same time, the smallest age group in Rich Square, NC was 30-34 years, with a population of 0 (0.00%). Source: U.S. Census Bureau American Community Survey (ACS) 2017-2021 5-Year Estimates.
When available, the data consists of estimates from the U.S. Census Bureau American Community Survey (ACS) 2017-2021 5-Year Estimates.
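The "largest age group" lookup described above is a one-liner in pandas. Only the two groups given in the key observations are real; the "all other groups" row is a placeholder that brings the total to the roughly 744 residents implied by 80 being 10.75%:

```python
import pandas as pd

# Population counts by age group (two real values plus a placeholder).
ages = pd.Series({"65-69 years": 80, "30-34 years": 0, "all other groups": 664})
total = ages.sum()  # about 744 residents

largest = ages.drop("all other groups").idxmax()
print(largest, f"{ages[largest] / total:.2%}")  # 65-69 years 10.75%
```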
Good to know
Margin of Error
Data in the dataset are based on estimates and are subject to sampling variability and thus a margin of error. Neilsberg Research recommends using caution when presenting these estimates in your research.
Custom data
If you need custom data for a research project, report, or presentation, you can contact our research staff at research@neilsberg.com to assess the feasibility of a custom tabulation on a fee-for-service basis.
The Neilsberg Research team curates, analyzes, and publishes demographic and economic data from a variety of public and proprietary sources, each of which often includes multiple surveys and programs. The large majority of Neilsberg Research aggregated datasets and insights are made available for free download at https://www.neilsberg.com/research/.
This dataset is part of the main dataset for Rich Square Population by Age.
https://www.worldbank.org/en/about/legal/terms-of-use-for-datasets
Derived from publicly available sources, this dataset contains data on a variety of indicators from the years 2001 and 2011 for Bangladesh at four levels of administrative geography.
Thematic areas include:
Business Demographics, Economic Activity, Education, Environment, Finance, Health, Information Technology, Infrastructure, Jobs, Living Standards, and Urban Extent.
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
For details about the scraping process, visit the code repository on GitHub.
The final_data.csv file is a consolidated dataset combining data for the most popular 500–600 movies per year from 1920 to 2025, extracted from IMDb. This dataset aggregates all the yearly merged_movies_data_[year].csv files into a comprehensive CSV file for streamlined analysis.
The final_data.csv file includes:
- Basic movie details: id, title, year, duration, MPA, rating, votes, meta_score, description, Movie_Link.
- Financial data: budget, opening_weekend_gross, gross_worldwide, gross_us_canada.
- Credits: directors, writers, stars.
- Additional details: genres, countries_origin, filming_locations, production_companies, languages.
- Awards: awards_content (wins, nominations, Oscars).
- Release info: release_date.
Columns:
id,title,year,duration,MPA,rating,votes,meta_score,description,Movie_Link,writers,directors,stars,budget,opening_weekend_gross,gross_worldwide,gross_us_canada,release_date,countries_origin,filming_locations,production_companies,awards_content,genres,languages
The final_data.csv file is updated annually in December to reflect the most recent data additions and corrections.
This dataset is ideal for:
- Longitudinal Analysis: Studying trends in movie production, popularity, and financial performance over a century.
- Predictive Analytics: Building models to forecast box office performance or award outcomes.
- Recommender Systems: Leveraging attributes like genres, cast, and ratings for personalized recommendations.
- Comparative Studies: Comparing cinematic trends across different eras, regions, or genres.
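A minimal sketch of the longitudinal analysis suggested above, using pandas. The column names come from the Columns list; the rows here are made-up stand-ins, and with the real file you would start from pd.read_csv("final_data.csv"):

```python
import pandas as pd

# Stand-in rows with a subset of the final_data.csv columns.
df = pd.DataFrame({
    "title": ["A", "B", "C", "D"],
    "year": [1994, 1999, 2004, 2008],
    "rating": [8.8, 7.9, 8.1, 9.0],
    "gross_worldwide": [300e6, 120e6, 250e6, 1000e6],
})

# Aggregate by decade: film count, mean rating, total worldwide gross.
df["decade"] = (df["year"] // 10) * 10
by_decade = df.groupby("decade").agg(
    n_movies=("title", "size"),
    avg_rating=("rating", "mean"),
    total_gross=("gross_worldwide", "sum"),
)
print(by_decade)
```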
Please feel free to contact me, by mail or by opening an issue on GitHub, about errors in the data, feature requests, suggestions, or enhancements.