The database consists of filing data for Top Hat plan notices covering management and highly compensated employees (HCEs), who defer income until termination of employment; such plans are exempt from most of ERISA's substantive requirements.
The statistic displays the most popular SQL databases used by software developers worldwide, as of April 2015. According to the survey, 64 percent of software developers were using MySQL, an open-source relational database management system (RDBMS).
Leverage high-quality B2B data with 468 enriched attributes, covering firmographics, financial stability, and industry classifications. Our AI-optimized dataset ensures accuracy through advanced deduplication and continuous updates. With 30+ years of expertise and 1,100+ trusted sources, we provide fully compliant, structured business data to power lead generation, risk assessment, CRM enrichment, market research, and more.
Key use cases of B2B Data have helped our customers in several areas:
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
John Ioannidis and co-authors [1] created a publicly available database of the world's top-cited scientists. This database, intended to address the misuse of citation metrics, has generated a lot of interest among the scientific community, institutions, and the media. Many institutions have used it as a yardstick to assess the quality of researchers. At the same time, some view the list with skepticism, citing problems with the methodology. Two separate databases were created, one based on career-long impact and one on single recent-year impact. The database is built from Scopus data provided by Elsevier [1-3]. The scientists included are classified into 22 scientific fields and 174 sub-fields. The parameters considered for this analysis are total citations from 1996 to 2022 (nc9622), the h-index in 2022 (h22), the c-score, and the world rank based on c-score (Rank ns). Citations excluding self-citations are used in all cases (indicated as ns). In the single-year case, citations during 2022 (nc2222) are used instead of nc9622.
To evaluate the robustness of the c-score-based ranking, I have done a detailed analysis of the metric parameters for the last 25 years (1998-2022) of Nobel laureates in physics, chemistry, and medicine, and compared them with the top 100 rank holders in the list. The latest career-long and single-year databases (2022) were used for this analysis. The details of the analysis are presented below:
Though the article says the selection is based on the top 100,000 scientists by c-score (with and without self-citations) or a percentile rank of 2% or above in the sub-field, the actual career-based ranking list has 204,644 names [1], and the single-year database contains 210,199 names. The published list therefore covers roughly the top 4% of scientists. In the career-based rank list, the person with the lowest rank (4,809,825) had nc9622, h22, and c-score values of 41, 3, and 1.3632, respectively, whereas the person ranked No. 1 had values of 345,061, 264, and 5.5927. Three people on the list had fewer than 100 citations during 1996-2022, 1,155 had an h22 below 10, and 6 had a c-score below 2.
In the single-year rank list, the person with the lowest rank (6,547,764) had nc2222, h22, and c-score values of 1, 1, and 0.6, respectively, whereas the person ranked No. 1 had values of 34,582, 68, and 5.3368. On this list, 4,463 people had fewer than 100 citations in 2022, 71,512 had an h22 below 10, and 313 had a c-score below 2. The presence of many authors with single-digit h-indexes and very few total citations points to serious shortcomings in the c-score-based ranking methodology.
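The low-metric checks described above can be sketched with pandas. This is an illustrative example, not the authors' code: the column names (nc9622, h22, c_score) follow the text, but the rows here are synthetic.

```python
import pandas as pd

# Synthetic stand-in for the career-long ranking table described above.
df = pd.DataFrame({
    "author": ["A", "B", "C", "D"],
    "nc9622": [345061, 41, 95, 12000],
    "h22": [264, 3, 9, 45],
    "c_score": [5.5927, 1.3632, 1.9, 3.8],
})

# The same filters used in the analysis: low citations, low h-index, low c-score.
low_citations = df[df["nc9622"] < 100]
low_h_index = df[df["h22"] < 10]
low_c_score = df[df["c_score"] < 2]

print(len(low_citations), len(low_h_index), len(low_c_score))
```

Applied to the real database, these filters reproduce the counts reported in the text (e.g., 1,155 career-list entries with h22 below 10).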
Open Database License (ODbL) v1.0 https://www.opendatacommons.org/licenses/odbl/1.0/
License information was derived automatically
This is a dataset of the 10,000 most popular movies across the world, irrespective of language and recency. They were extracted using the TMDb API.
What is TMDb's API? The closed-source API service is for anyone interested in using TMDb's movie, TV show, or actor images and/or data in their application. TMDb's API is a system the site provides for developers to programmatically fetch and use TMDb's data and/or images. The API is free to use as long as you attribute TMDb as the source of the data and/or images. TMDb also updates the API from time to time.
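A minimal sketch of calling the API: the `/movie/popular` endpoint and `api_key` query parameter match TMDb's v3 API, while `YOUR_API_KEY` is a placeholder you would replace with a key obtained from themoviedb.org.

```python
from urllib.parse import urlencode

BASE = "https://api.themoviedb.org/3"

def popular_movies_url(api_key: str, page: int = 1) -> str:
    """Build the URL for one page of TMDb's most popular movies."""
    return f"{BASE}/movie/popular?" + urlencode({"api_key": api_key, "page": page})

url = popular_movies_url("YOUR_API_KEY")
# To actually fetch (requires network access and a valid key):
# import requests
# results = requests.get(url).json()["results"]
```

Paging through this endpoint, 500 results at a time per TMDb's page size limits, is how a dataset like this one can be assembled.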
This dataset lists the 10,000 most popular movies across the globe. Information held inside the dataset:

A. Dataset 1: Movies dataset
1. title - Title of the movie in English.
2. overview - A short summary of the plot.
3. original_lang - Original language it was shot in.
4. rel_date - Date of release.
5. popularity - Popularity score.
6. vote_count - Number of votes received.
7. vote_average - Average of all votes received.
B. Dataset 2: Genres dataset
1. id
2. Movie ID
3. Genre
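The two files join on the movie identifier. A hedged sketch with pandas, using the column names from the listing above but made-up rows:

```python
import pandas as pd

# Tiny stand-ins for the two datasets described above.
movies = pd.DataFrame({
    "id": [1, 2],
    "title": ["Movie One", "Movie Two"],
    "vote_average": [7.8, 6.4],
})
genres = pd.DataFrame({
    "Movie ID": [1, 1, 2],
    "Genre": ["Action", "Thriller", "Drama"],
})

# One row per (movie, genre) pair after the join.
merged = movies.merge(genres, left_on="id", right_on="Movie ID")
print(merged[["title", "Genre"]])
```

Since a movie can carry several genres, the merged frame has one row per movie-genre pair rather than one per movie.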
This statistic shows the leading vendors of big data and analytics software from 2015 to 2017. In 2017, Splunk was the largest big data and analytics software provider with 11 percent of the market.
Annual Excel pivot tables display the top 25 MS-DRGs (Medicare Severity-Diagnosis Related Groups) per hospital. The ranking can be sorted by the number of discharges, average charge per stay, or average length of stay.
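The ranking behind those pivot tables can be sketched as follows; the column names here are illustrative, not the dataset's actual headers, and the sample has far fewer than 25 MS-DRGs per hospital.

```python
import pandas as pd

# Synthetic discharge summary rows for one hospital.
df = pd.DataFrame({
    "hospital": ["H1"] * 4,
    "ms_drg": ["470", "871", "291", "392"],
    "discharges": [120, 300, 80, 150],
    "avg_charge": [45000, 60000, 52000, 30000],
})

# Top 25 MS-DRGs per hospital, ranked by number of discharges;
# sorting by avg_charge instead would give the average-charge view.
top = (df.sort_values("discharges", ascending=False)
         .groupby("hospital")
         .head(25))
print(top["ms_drg"].tolist())
```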
Developers using the DOL-wide API have access to a variety of queries providing usage metrics for their app's key.
Previous studies on supporting free-form keyword queries over RDBMSs provide users with linked structures (e.g., a set of joined tuples) that are relevant to a given keyword query. Most of them focus on ranking individual tuples from one table or joins of multiple tables containing a set of keywords. In this paper, we study the problem of keyword search in a data cube with text-rich dimension(s) (a so-called text cube). The text cube is built on a multidimensional text database, where each row is associated with some text data (a document) and other structural dimensions (attributes). A cell in the text cube aggregates a set of documents with matching attribute values in a subset of dimensions. We define a keyword-based query language and an IR-style relevance model for scoring/ranking cells in the text cube. Given a keyword query, our goal is to find the top-k most relevant cells. We propose four approaches: inverted-index one-scan, document sorted-scan, bottom-up dynamic programming, and search-space ordering. The search-space ordering algorithm explores only a small portion of the text cube for finding the top-k answers, and enables early termination. Extensive experimental studies are conducted to verify the effectiveness and efficiency of the proposed approaches. Citation: B. Ding, B. Zhao, C. X. Lin, J. Han, C. Zhai, A. N. Srivastava, and N. C. Oza, "Efficient Keyword-Based Search for Top-K Cells in Text Cube," IEEE Transactions on Knowledge and Data Engineering, 2011.
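A minimal sketch of the idea, not the paper's algorithms: each cell aggregates documents, and a cell's relevance to a keyword query is scored here as the total frequency of the query terms in its documents, with the top-k cells selected by that score. The paper's IR-style model and pruning strategies are considerably more sophisticated.

```python
from collections import Counter
import heapq

# Toy text cube: each key is a cell (attribute-value combination),
# each value is the set of documents aggregated into that cell.
cells = {
    ("2011", "sensor"): ["fault detection in sensor data", "sensor noise model"],
    ("2011", "engine"): ["engine fault report", "routine maintenance log"],
    ("2012", "sensor"): ["sensor calibration notes"],
}

def score(docs, query_terms):
    """Total frequency of the query terms across a cell's documents."""
    counts = Counter(w for d in docs for w in d.split())
    return sum(counts[t] for t in query_terms)

def top_k(cells, query_terms, k=2):
    """Return the k cells with the highest relevance score."""
    return heapq.nlargest(k, cells, key=lambda c: score(cells[c], query_terms))

print(top_k(cells, ["sensor", "fault"]))
```

This brute-force scan scores every cell; the paper's search-space ordering avoids exactly that by exploring only a small portion of the cube before terminating early.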
The statistic displays the most wanted data science skills in the United States as of April 2019. As of the measured period, 76.13 percent of data scientist job openings on LinkedIn required knowledge of the programming language Python.
A dataset containing drug response profiles for over 600 compounds across multiple cancer cell lines.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The code shows how to extract data from the MIMIC-III database. (7z archive)
https://whoisdatacenter.com/terms-of-use/
The .TOP Whois database: discover comprehensive ownership details, registration dates, and more for the .TOP TLD with Whois Data Center.
Data contain the dominant soil type in each grid cell for the whole world at 30 arc-second (~1 km) horizontal resolution. The data are based on the Harmonized World Soil Database (HWSD), but reclassified according to the State Soil Geographic (STATSGO) classification table of the Weather Research and Forecasting (WRF) Model for the NOAH and NOAH-MP Land Surface Models (LSMs). The source of the data is HWSD version 1.21, provided by the Food and Agriculture Organization of the United Nations (FAO), the International Institute for Applied Systems Analysis (IIASA), the International Soil Reference and Information Centre (ISRIC), the Institute of Soil Science - Chinese Academy of Sciences (ISSCAS), and the Joint Research Centre of the European Commission (JRC) in 2012 (FAO/IIASA/ISRIC/ISSCAS/JRC, 2012. Harmonized World Soil Database (version 1.21). FAO, Rome, Italy and IIASA, Laxenburg, Austria).
Horizontal resolution: 0.00833333333333°
Type/units: categorical, 16 categories (1-16)
Missing value: -9999 (ASCII file), 241.0 (WRF binary)
Projection: regular latitude-longitude
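When reading the ASCII version of the grid, the documented missing value (-9999) should be masked before computing statistics. A hedged sketch with NumPy, using a tiny synthetic array in place of the real global grid:

```python
import numpy as np

# Small stand-in for the 30 arc-second soil-category grid; valid
# categories run 1-16, and -9999 marks missing cells (ASCII file).
grid = np.array([
    [3, 7, -9999],
    [1, -9999, 16],
])

# Mask the missing value so it is excluded from any statistics.
masked = np.ma.masked_equal(grid, -9999)
print(int(masked.count()), int(masked.min()), int(masked.max()))
```

The WRF binary variant uses 241.0 as its missing value instead, so the same masking step would use that constant there.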
U.S. Government Works https://www.usa.gov/government-works
License information was derived automatically
The Species of Greatest Conservation Need National Database is an aggregation of lists from State Wildlife Action Plans. Species of Greatest Conservation Need (SGCN) are wildlife species that need conservation attention as listed in action plans. In this database, we have validated scientific names from original documents against taxonomic authorities to increase consistency among names, enabling aggregation and summary. This database does not replace the information contained in the original State Wildlife Action Plans. The database includes SGCN lists from 56 states, territories, and districts, encompassing action plans spanning from 2005 to 2022. State Wildlife Action Plans undergo updates at least once every 10 years by respective wildlife agencies. The SGCN list data from these action plans have been compiled in partnership with individual wildlife management agencies, the United States Fish and Wildlife Service, and the Association of Fish and Wildlife Agencies. The SGCN ...
https://creativecommons.org/publicdomain/zero/1.0/
This dataset contains structured and cleaned records of athletic performances from international track and field events. It includes metadata about the athletes, event types, wind conditions, venues, marks, and scores.
Contains over 620k rows.
Latest data: 24-06-2025
If you want the latest data, go to the GitHub page, fork it, and run the code.
Apache License, v2.0 https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
This dataset contains historical price data for the top global cryptocurrencies, sourced from Yahoo Finance. The data spans the following time frames for each cryptocurrency:
BTC-USD (Bitcoin): From 2014 to December 2024
ETH-USD (Ethereum): From 2017 to December 2024
XRP-USD (Ripple): From 2017 to December 2024
USDT-USD (Tether): From 2017 to December 2024
SOL-USD (Solana): From 2020 to December 2024
BNB-USD (Binance Coin): From 2017 to December 2024
DOGE-USD (Dogecoin): From 2017 to December 2024
USDC-USD (USD Coin): From 2018 to December 2024
ADA-USD (Cardano): From 2017 to December 2024
STETH-USD (Staked Ethereum): From 2020 to December 2024
Key Features:
Date: The date of the record.
Open: The opening price of the cryptocurrency on that day.
High: The highest price during the day.
Low: The lowest price during the day.
Close: The closing price of the cryptocurrency on that day.
Adj Close: The adjusted closing price, factoring in stock splits or dividends (for stablecoins like USDT and USDC, this value should be the same as the closing price).
Volume: The trading volume for that day.
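A short illustrative use of the daily fields above: computing simple daily returns and a rolling volatility from the Close column. The prices here are synthetic; with the real dataset you would load the CSV into the series instead.

```python
import pandas as pd

# Synthetic stand-in for a Close-price series from the dataset.
prices = pd.Series([100.0, 102.0, 101.0, 105.0, 104.0],
                   index=pd.date_range("2024-01-01", periods=5))

# Day-over-day percentage change (first value is NaN by construction).
returns = prices.pct_change()

# Rolling standard deviation of returns, a common volatility proxy.
volatility = returns.rolling(3).std()

print(returns.round(4).tolist())
```

On the full daily history, a longer window (e.g., 30 days) gives a smoother volatility estimate for the risk analyses listed under Use Cases.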
Data Source:
The dataset is sourced from Yahoo Finance and spans daily data from 2014 to December 2024, offering a rich set of data points for cryptocurrency analysis.
Use Cases:
Market Analysis: Analyze price trends and historical market behavior of leading cryptocurrencies.
Price Prediction: Use the data to build predictive models, such as time-series forecasting for future price movements.
Backtesting: Test trading strategies and financial models on historical data.
Volatility Analysis: Assess the volatility of top cryptocurrencies to gauge market risk.

Overview of the Cryptocurrencies in the Dataset:

Bitcoin (BTC): The pioneer cryptocurrency, often referred to as digital gold and used as a store of value.
Ethereum (ETH): A decentralized platform for building smart contracts and decentralized applications (DApps).
Ripple (XRP): A payment protocol focused on enabling fast and low-cost international transfers.
Tether (USDT): A popular stablecoin pegged to the US Dollar, providing price stability for trading and transactions.
Solana (SOL): A high-speed blockchain known for low transaction fees and scalability, often seen as a competitor to Ethereum.
Binance Coin (BNB): The native token of Binance, the world's largest cryptocurrency exchange, used for various purposes within the Binance ecosystem.
Dogecoin (DOGE): Initially a meme-inspired coin, Dogecoin has gained a strong community and mainstream popularity.
USD Coin (USDC): A fully-backed stablecoin pegged to the US Dollar, commonly used in decentralized finance (DeFi) applications.
Cardano (ADA): A proof-of-stake blockchain focused on scalability, sustainability, and security.
Staked Ethereum (STETH): A token representing Ethereum staked in the Ethereum 2.0 network, earning staking rewards.
This dataset provides a comprehensive overview of key cryptocurrencies that have shaped and continue to influence the digital asset market. Whether you're conducting research, building prediction models, or analyzing trends, this dataset is an essential resource for understanding the evolution of cryptocurrencies from 2014 to December 2024.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Variability in mean payment per physician, number of physicians, and aggregated payments for transactions in the Open Payments database, 2014–2018, for each top-category specialty available for allopathic and osteopathic physicians.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
18,226 United States import shipment records of wooden table tops, with prices, volume, and current buyer-supplier relationships, based on the actual United States import trade database.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This layer contains the fire perimeters from the previous calendar year, and those dating back to 1878, for California. Perimeters are sourced from the Fire and Resource Assessment Program (FRAP) and are updated shortly after the end of each calendar year. Information below is from the FRAP web site. There is also a tile cache version of this layer.
About the Perimeters in this Layer
Initially CAL FIRE and the USDA Forest Service jointly developed a fire perimeter GIS layer for public and private lands throughout California. The data covered the period 1950 to 2001 and included USFS wildland fires 10 acres and greater, and CAL FIRE fires 300 acres and greater. BLM and NPS joined the effort in 2002, collecting fires 10 acres and greater. Also in 2002, CAL FIRE’s criteria expanded to include timber fires 10 acres and greater in size, brush fires 50 acres and greater in size, grass fires 300 acres and greater in size, wildland fires destroying three or more structures, and wildland fires causing $300,000 or more in damage. As of 2014, the monetary requirement was dropped and the damage requirement is 3 or more habitable structures or commercial structures.
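The post-2014 CAL FIRE mapping criteria above can be encoded as a simple predicate. This is an illustrative sketch; the function name and field choices are mine, not CAL FIRE's.

```python
# Minimum acreage by vegetation type for inclusion, per the criteria above.
THRESHOLDS = {"timber": 10, "brush": 50, "grass": 300}

def meets_criteria(veg_type: str, acres: float, structures_destroyed: int = 0) -> bool:
    """Return True if a fire qualifies for the perimeter database.

    A fire qualifies if it destroyed 3 or more habitable or commercial
    structures, or if it meets the acreage threshold for its vegetation type.
    """
    if structures_destroyed >= 3:
        return True
    return acres >= THRESHOLDS.get(veg_type, float("inf"))

print(meets_criteria("brush", 60), meets_criteria("grass", 100))
```

Note that the thresholds changed over the program's history (e.g., CAL FIRE used a blanket 300-acre minimum before 2002), so a predicate like this would need a year parameter to filter the full historical layer consistently.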
In 1989, CAL FIRE units were requested to fill in gaps in their fire perimeter data as part of the California Fire Plan. FRAP provided each unit with a preliminary map of 1950-89 fire perimeters. Unit personnel also verified the pre-1989 perimeter maps to determine if any fires were missing or should be re-mapped. Each CAL FIRE Unit then generated a list of 300+ acre fires that started since 1989 using the CAL FIRE Emergency Activity Reporting System (EARS). The CAL FIRE personnel used this list to gather post-1989 perimeter maps for digitizing. The final product is a statewide GIS layer spanning the period 1950-1999.
CAL FIRE has completed inventory for the majority of its historical perimeters back to 1950. BLM fire perimeters are complete from 2002 to the present. The USFS has submitted records as far back as 1878. The NPS records date to 1921.
About the Program
FRAP compiles fire perimeters and has established an on-going fire perimeter data capture process. CAL FIRE, the United States Forest Service Region 5, the Bureau of Land Management, and the National Park Service jointly develop the fire perimeter GIS layer for public and private lands throughout California at the end of the calendar year. Upon release, the data is current as of the last calendar year.
The fire perimeter database represents the most complete digital record of fire perimeters in California. However, it is still incomplete in many respects. Fire perimeter database users must exercise caution to avoid inaccurate or erroneous conclusions. For more information on potential errors and their sources, please review the methodology section of these pages.
The fire perimeters database is an Esri ArcGIS file geodatabase with three data layers (feature classes).
There are many uses for fire perimeter data. For example, it is used on incidents to locate recently burned areas that may affect fire behavior.
Other uses include: