CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The USDA-NRCS Soil Series Classification Database contains the taxonomic classification of each soil series identified in the United States, Territories, Commonwealths, and Island Nations served by USDA-NRCS. Along with the taxonomic classification, the database contains other information about the soil series, such as office of responsibility, series status, dates of origin and establishment, and geographic areas of usage. The database is maintained by the soils staff of the NRCS MLRA Soil Survey Region Offices across the country. Additions and changes are continually being made as a result of ongoing soil survey work and refinement of the soil classification system. As the database is updated, the changes are immediately available to the user, so the data retrieved is always the most current. Web access to this soil classification database provides capabilities to view the contents of individual series records, to query the database on any data element and produce a report with the selected soils, or to produce national reports with all soils in the database. The standard reports allow the user to display the soils by series name or by taxonomic classification. The SC database was migrated into the NASIS database with version 6.2. Resources in this dataset: Resource Title: Website Pointer to Soil Series Classification Database (SC). File Name: Web Page, url: https://www.nrcs.usda.gov/wps/portal/nrcs/detail/soils/survey/class/data/?cid=nrcs142p2_053583 Supports the following queries:
U.S. Government Workshttps://www.usa.gov/government-works
License information was derived automatically
The Organic INTEGRITY Database is a certified organic operations database that contains up-to-date and accurate information about operations that may and may not sell as organic. It deters fraud, increases supply chain transparency for buyers and sellers, and promotes market visibility for organic operations. Only certified operations can sell, label, or represent products as organic, unless exempt or excluded from certification. The INTEGRITY database improves access to certified organic operation information by giving industry and public users an easier way to search for data, with greater precision than the formerly posted Annual Lists of Certified Operations. You can find a certified organic farm or business, or search for an operation with specific characteristics such as:
The status of an operation: Certified, Surrendered, Revoked, or Suspended
The scopes for which an operation is certified: Crops, Livestock, Wild Crops, or Handling
The organic commodities and services that operations offer. A new, more structured classification system (sample provided) will help you find more of what you’re looking for, and details about the flexible taxonomy can be found in the INTEGRITY Categories and Items list. Resources in this dataset: Resource Title: Organic INTEGRITY Database. File Name: Web Page, url: https://organic.ams.usda.gov/integrity/Default.aspx Find a specific certified organic farm or business, or search for an operation with specific characteristics. Listings come from USDA-Accredited Certifying Agents. Historical Annual Lists of Certified Organic Operations and monthly snapshots of the full data set are available for download on the Data History page. Only certified operations can sell, label, or represent products as organic, unless exempt or excluded from certification.
U.S. Government Workshttps://www.usa.gov/government-works
License information was derived automatically
National Land Cover Database 2011 (NLCD 2011) is the most recent national land cover product created by the Multi-Resolution Land Characteristics (MRLC) Consortium. NLCD 2011 provides - for the first time - the capability to assess wall-to-wall, spatially explicit, national land cover changes and trends across the United States from 2001 to 2011. As with the two previous NLCD land cover products, NLCD 2011 keeps the same 16-class land cover classification scheme, which has been applied consistently across the United States at a spatial resolution of 30 meters. NLCD 2011 is based primarily on a decision-tree classification of circa 2011 Landsat satellite data. [Note: The scheduled release date for NLCD 2016 products is Friday, December 28, 2018] Resources in this dataset: Resource Title: Website Pointer to National Land Cover Database 2011 (NLCD 2011). File Name: Web Page, url: https://www.mrlc.gov/nlcd2011.php Includes product description, data downloads (Conterminous United States, Alaska, Hawaii, Puerto Rico), production statistics, and related references.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset description:
The Encyclopedia of Domains (TED) is a joint effort by CATH (Orengo group) and the Jones group at University College London to identify and classify protein domains in AlphaFold2 models from AlphaFold Database version 4, covering over 188 million unique sequences and 324 million domain assignments.
In this data release, we will be making available to the community a table of domain boundaries and additional metadata on quality (pLDDT, globularity, number of secondary structures), taxonomy and putative CATH SuperFamily or Fold assignments for all 324 million domains in TED100.
For all chains in the TED-redundant dataset, the attached file contains boundary predictions, the consensus level, and information on the TED100 representative.
Additionally, an archive with chain-level consensus domain assignments is available for 21 model organisms and 25 global health proteomes.
For both TED100 and TED-redundant we provide the domain boundary predictions output by each of the three methods employed in the project (Chainsaw, Merizo, UniDoc).
We are also making available PDB files for 7,427 novel folds identified during the TED classification process, together with an annotation table sorted by novelty.
Please use the gunzip command to extract files with a '.gz' extension.
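For users who prefer Python over the gunzip command line tool, here is a minimal sketch that decompresses a '.gz' file; the file name below is just one of the files listed later in this description.

```python
import gzip
import shutil

# Decompress a gzip-compressed TSV file (roughly equivalent to `gunzip -k <file>.gz`).
# The file name is only an example; substitute any of the '.gz' files in this dataset.
src = "ted_365m_domain_boundaries_consensus_level.tsv.gz"
dst = src.removesuffix(".gz")

with gzip.open(src, "rb") as fin, open(dst, "wb") as fout:
    shutil.copyfileobj(fin, fout)  # stream-copy so large files do not need to fit in memory
```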
CATH annotations have been assigned using the FoldSeek algorithm applied in various modes and the FoldClass algorithm, both of which are used to report significant structural similarity to a known CATH domain. Note: The TED protocol differs from that of our standard CATH Assignment protocol for superfamily assignment, which also involves HMM-based protocols and manual curation for remote matches.
This dataset contains:
ted_214m_per_chain_segmentation.tsv: The file contains all 214M protein chains in TED with consensus domain boundaries and proteome information in the following columns. 1. AFDB_model_ID: chain identifier from AFDB in the format AF-
ted_365m_domain_boundaries_consensus_level.tsv.gz: The file contains all domain assignments in TED100 and TED-redundant (365M) in the format: 1. TED_ID: TED domain identifier in the format AF-
ted_100_324m.domain_summary.cath.globularity.taxid.tsv and novel_folds_set.domain_summary.tsv are header-less with the following columns separated by tabs (.tsv).
ted_324m_seq_clustering.cathlabels.tsv: The file contains the results of the domain sequence clustering with MMseqs2. Columns:
1. Cluster_representative
2. Cluster_member
3. CATH code assignment if available, i.e. 3.40.50.300 for a domain with a homologous match or 3.20.20 for a domain matching at the fold level in the CATH classification
4. CATH assignment type - either Foldseek-T, Foldseek-H or Foldclass
novel_folds_set.domain_summary.tsv is sorted by novelty. 1. ted_id - TED domain identifier in the format AF-
Domain assignments for TED-redundant using single-chain and multi-chain consensus are provided in ted_redundant_39m.multichain.consensus_domain_summary.taxid.tsv and ted_redundant_39m.singlechain.consensus_domain_summary.taxid.tsv. Each file contains a header with the following fields, with columns tab-separated (.tsv). 1. TED_redundant_id - TED chain identifier in the format AF-
novel_folds_set_models.tar.gz contains PDB files of all novel folds identified in TED100.
All per-tool domain boundary predictions are in the same format with the following columns.
1. TED_chainID - TED chain identifier in the format AF-<UniProtID>-F1-model_v4 i.e. AF-A0A1V6M2Y0-F1-model_v4
2. TED_chain_md5 - md5 hash for the chain sequence
3. TED_chain_length - number of residues in the chain
4. ndoms - number of domains predicted in the chain
5. Domain boundaries - domain boundaries in the format <start>-<stop> or <start>-<stop>_<start>-<stop> for discontinuous domains
6. Prediction probability - probability of each per-chain prediction
Domain boundary predictions share the same format, with each segment separated by '_' and segment boundaries (start, stop) separated by '-'. For example, the domain prediction by Merizo for AF-A0A000-F1-model_v4 is:
AF-A0A000-F1-model_v4 e8872c7a0261b9e88e6ff47eb34e4162 394 2 10-52_289-394,53-288 0.90077
Here Merizo predicts one discontinuous domain and one continuous domain:
Domain 1 (discontinuous): 10-52_289-394 (segment 1: 10-52, segment 2: 289-394)
Domain 2 (continuous): 53-288
A short Python sketch that parses this format is given after the file list below.
ted-tools-main.zip - copy of the https://github.com/psipred/ted-tools repository, containing tools and software used to generate TED.
cath-alphaflow-main.zip - copy of CATH-AlphaFlow, used to generate globularity scores for TED domains.
ted-web-master.zip - copy of TED-web, containing code to generate the web interface of TED (https://ted.cathdb.info)
gofocus_data.tar.bz2 - GOFocus model weights
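To make the per-tool prediction format described above concrete, here is a minimal Python sketch (not part of the dataset) that parses the Merizo example record shown earlier into per-domain segment lists.

```python
# Parse one whitespace-separated prediction record in the per-tool format described above:
# chain_id, md5, chain_length, ndoms, domain boundaries, prediction probability.
record = "AF-A0A000-F1-model_v4 e8872c7a0261b9e88e6ff47eb34e4162 394 2 10-52_289-394,53-288 0.90077"

chain_id, md5, length, ndoms, boundaries, prob = record.split()

domains = []
for dom in boundaries.split(","):        # domains are comma-separated
    segments = []
    for seg in dom.split("_"):           # discontinuous domains have '_'-separated segments
        start, stop = map(int, seg.split("-"))
        segments.append((start, stop))
    domains.append(segments)

assert len(domains) == int(ndoms)
print(chain_id, domains)
# -> AF-A0A000-F1-model_v4 [[(10, 52), (289, 394)], [(53, 288)]]
```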
The Easiest Way to Collect Data from the Internet. Download anything you see on the internet into spreadsheets within a few clicks using our ready-made web crawlers, or with a few lines of code using our APIs.
We have made it as simple as possible to collect data from websites.
Easy to Use Crawlers
Amazon Product Details and Pricing Scraper: Get product information, pricing, FBA, best seller rank, and much more from Amazon.
Google Maps Search Results: Get details like place name, phone number, address, website, ratings, and open hours from Google Maps or Google Places search results.
Twitter Scraper: Get tweets, Twitter handle, content, number of replies, number of retweets, and more. All you need to provide is a URL to a profile, hashtag, or an advanced search URL from Twitter.
Amazon Product Reviews and Ratings: Get customer reviews for any product on Amazon and get details like product name, brand, reviews and ratings, and more from Amazon.
Google Reviews Scraper: Scrape Google reviews and get details like business or location name, address, review, ratings, and more for businesses and places.
Walmart Product Details & Pricing: Get the product name, pricing, number of ratings, reviews, product images, URL, and other product-related data from Walmart.
Amazon Search Results Scraper: Get product search rank, pricing, availability, best seller rank, and much more from Amazon.
Amazon Best Sellers: Get the bestseller rank, product name, pricing, number of ratings, rating, product images, and more from any Amazon Bestseller List.
Google Search Scraper: Scrape Google search results and get details like search rank, paid and organic results, knowledge graph, related search results, and more.
Walmart Product Reviews & Ratings: Get customer reviews for any product on Walmart.com and get details like product name, brand, reviews, and ratings.
Scrape Emails and Contact Details: Get emails, addresses, contact numbers, and social media links from any website.
Walmart Search Results Scraper: Get product details such as pricing, availability, reviews, ratings, and more from Walmart search results and categories.
Glassdoor Job Listings: Scrape job details such as job title, salary, job description, location, company name, number of reviews, and ratings from Glassdoor.
Indeed Job Listings: Scrape job details such as job title, salary, job description, location, company name, number of reviews, and ratings from Indeed.
LinkedIn Jobs Scraper (Premium): Scrape job listings on LinkedIn and extract job details such as job title, job description, location, company name, number of reviews, and more.
Redfin Scraper (Premium): Scrape real estate listings from Redfin. Extract property details such as address, price, mortgage, Redfin estimate, broker name, and more.
Yelp Business Details Scraper: Scrape business details from Yelp such as phone number, address, website, and more from Yelp search and business details pages.
Zillow Scraper (Premium): Scrape real estate listings from Zillow. Extract property details such as address, price, broker name, and more.
Amazon Product Offers and Third-Party Sellers: Get product pricing, delivery details, FBA, seller details, and much more from the Amazon offer listing page.
Realtor Scraper (Premium): Scrape real estate listings from Realtor.com. Extract property details such as address, price, area, broker, and more.
Target Product Details & Pricing: Get product details from search results and category pages such as pricing, availability, rating, reviews, and 20+ data points from Target.
Trulia Scraper (Premium): Scrape real estate listings from Trulia. Extract property details such as address, price, area, mortgage, and more.
Amazon Customer FAQs: Get FAQs for any product on Amazon and get details like the question, answer, answering user name, and more.
Yellow Pages Scraper: Get details like business name, phone number, address, website, ratings, and more from Yellow Pages search results.
✔️ Easy-to-handle Excel Sheet ✔️ Human-researched and verified leads with direct contacts ✔️ Up to 🇨🇳 10K Chinese active Amazon third-party private-label seller leads with direct contact info ✔️ Up to 30+ data points for each prospect ✔️ Sort your list by store size, product category, company location, and much more! ✔️ Enjoy a list that has been hand-researched and verified. No scraped contacts!
Data includes:
Seller ID
Seller Business Model: Private Label Seller/Wholesaler
Estimated Annual Revenue in $ (accurate +/-30%)
Annual Revenue Bracket [$]
% of goods shipped in FBA
Amazon Seller Page
Seller Website
Decision Maker First Name
Decision Maker Last Name
Direct Email
Generic Email
Office Phone Number
Decision Maker LinkedIn URL
Seller Name
Seller Storefront Link
Seller Business Name
Seller Business Full Address
City
State
Zip
Seller Business Country
Number of Reviews [30 days]
Number of Reviews [90 days]
Number of Reviews [12 months]
Number of Reviews [lifetime]
Seller Rating Lifetime
Total number of products
Total number of brands
Brands
Top Product URL
Top Product Shipping From
Top Product Category
Top Product SubCategory
About SellerDirectories: SellerDirectories.com provides human-researched and verified data on Amazon sellers and eCommerce Brands.
Our human-researched and verified data is trusted by companies like Walmart, Microsoft, Tencent, Helium10, and 250 more! ⭐ Check our verified 5-star reviews on Trustpilot ⭐ https://www.trustpilot.com/review/sellerdirectories.com ⭐
✅ Data Sources: Aggregated from 50+ sources
✅ Data Collection: Human-researched and verified (we don't like scraped contact data)
✅ 98%+ accurate and up-to-date data (verified)
✅ Brand/seller Targeting Options: Multiple filters available (including revenue, location, business model, and product category)
✅ Customer Service: Lifetime support and accuracy guarantee on your list. Our lists include resources on how best to run outreach campaigns to turn a prospect list into actual business opportunities. Buy B2B contact database with SellerDirectories.com, and get your B2B contacts database sorted!
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Classification of matrix metalloproteinase enzymes.
Multipurpose Landcover Database For Egypt - Africover
This dataset falls under the category Traffic Generating Parameters Land Cover.
It contains the following data: The full resolution land cover has been produced from visual interpretation of digitally enhanced high-resolution LANDSAT TM images (Bands 4,3,2) acquired mainly in the year 1997. The land cover classes have been developed using the FAO/UNEP international standard LCCS classification system.
This dataset was scouted on 2022-02-03 as part of a data sourcing project conducted by TUMI. License information might be outdated: Check original source for current licensing.
The data can be accessed using the following URL / API endpoint: https://data.apps.fao.org/map/catalog/srv/eng/catalog.search#/metadata/a7fd1a64-475f-4e34-a3b3-c79e014311ec
See the URL for data access and license information.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0)https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
For more information, please visit HART.ubc.ca. Housing Assessment Resource Tools (HART). This database was created to accompany a report prepared by Joe Daniels, PhD, and Martine August, PhD, entitled “Acquisitions Programs for Affordable Housing: Creating non-market supply and preserving affordability with existing multi-family housing.” The database and report form part of the work performed under the HART project, and the report can be found at HART’s website: HART.ubc.ca. The database is a single table that summarizes 11 key elements, plus notes and references, of a growing list of policies from governments across the world. There are currently 108 policies included in the database. The authors expect to update this database with additional policies from time to time. The authors hope this database will serve as a resource for governments looking to become familiar with a variety of policies in order to help them evaluate which policies might be most applicable in their communities.
Data Fields:
List of data fields (15 total): 1. Government Order 2. Government Jurisdiction 3. Policy Name/Action 4. Acquisition Target 5. Years Active 6. Funder/Funding 7. Funding Amount (Program) 8. Funding Form 9. Affordability Standard 10. Affordability Term 11. Features/Requirements 12. Comments 13. Reference link 1 14. Reference link 2 15. Reference link 3
Description of data fields (15):
1. Government Order:
- Categorizes the relative political authority in terms of one of three categories: Municipal (responsible for a city or small region), Provincial (responsible for multiple municipalities), or Country (responsible for multiple provinces; highest political authority).
- This field may be used to help identify those policies most relevant to the reader.
2. Government Jurisdiction:
- Indicates the name of the government.
- For example, a country might be named “Canada,” a province might be named “Quebec,” and a municipality might be named “Calgary.”
3. Policy Name/Action:
- Indicates the name of the policy.
- This generally serves as the unique identifier for the record. However, there may be some programs that are only known by a common term; for example, “Right of First Refusal.”
4. Acquisition Target:
- Describes the type of housing asset that the policy is concerned with; for example, acquiring land, acquiring existing rental buildings, or renovating existing supportive housing.
5. Years Active:
- The time period that the policy has been active.
- Typically formatted as “[Year started] - [Year ended]”. If just a single year is listed (e.g. “2009”), the policy was only active that one year.
- If the policy is active with no end date, the format will be “[Year started] - ongoing.” If the policy has a specified end date in the future, that year will be listed instead: “[Year started] - [Expected final year].”
6. Funder/Funding:
- The government, government agency, or organization responsible for the use of those funds made available through the policy.
7. Funding Amount (Program):
- The dollar value of funds connected to the policy.
- Sometimes this is the total value of funds available to the policy, and sometimes it is the actual value of funds that were used.
- The funds indicated here do not necessarily correspond to the time period indicated in the ‘Years Active’ field. Additional detail will be added to clarify whenever possible.
- If a policy has “N/A” listed here, see ‘Features/Requirements’ for more information.
8. Funding Form:
- Indicates the type of financial tools available to the policy. For example, “capital funding,” “forgivable loans,” or “rent supplements.”
- If a policy has “N/A” listed here, see ‘Features/Requirements’ for more information.
9. Affordability Standard:
- Indicates whether the policy includes an explicit standard or benchmark of affordability that is used to guide or otherwise inform the policy’s goals.
10. Affordability Term:
- Indicates whether the affordability standard applies to a specific time period.
- This field may also contain other information on time periods that are relevant to the policy; for example, an operating loan guaranteed to be active for a specific number of years.
11. Features/Requirements:
- Describes the broad objectives of the policy as well as any specific guidelines that the policy must follow.
12. Comments:
- Author’s commentary on the policy.
13. Reference link 1:
- A web address (URL) or citation indicating the source of the details on the policy.
14. Reference link 2:
- A second web address (URL) or citation indicating the source of the details on the policy.
15. Reference link 3:
- A third web address (URL) or citation indicating the source of the details on the policy.
File list (1): 1. Property Acquisition Policy Database.xlsx
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This table provides the name, category, vocabulary and URL of disease-related databases integrated into SIDD. All the databases are divided into 10 major categories. In addition, the last column presents the number of diseases documented in each database.
This dataset is a collection of marine environmental data layers suitable for use in Southern Ocean species distribution modelling. All environmental layers have been generated at a spatial resolution of 0.1 degrees, covering the Southern Ocean extent (80 degrees S - 45 degrees S, -180 - 180 degrees). The layers include information relating to bathymetry, sea ice, ocean currents, primary production, particulate organic carbon, and other oceanographic data.
An example of reading and using these data layers in R can be found at https://australianantarcticdivision.github.io/blueant/articles/SO_SDM_data.html.
The following layers are provided:
Source: This study. Derived from GEBCO URL: https://www.gebco.net/data_and_products/gridded_bathymetry_data/ Citation: Fabri-Ruiz S, Saucede T, Danis B and David B (2017). Southern Ocean Echinoids database_An updated version of Antarctic, Sub-Antarctic and cold temperate echinoid database. ZooKeys, (697), 1.
Layer name: geomorphology Description: Last update on the biodiversity.aq portal. Derived from the O'Brien et al. (2009) seafloor geomorphic feature dataset (mapping based on GEBCO contours, ETOPO2, and seismic lines). Value range: 27 categories Units: categorical Source: This study. Derived from Australian Antarctic Data Centre URL: https://data.aad.gov.au/metadata/records/Polar_Environmental_Data Citation: O'Brien, P.E., Post, A.L., and Romeyn, R. (2009) Antarctic-wide geomorphology as an aid to habitat mapping and locating vulnerable marine ecosystems. CCAMLR VME Workshop 2009. Document WS-VME-09/10
Layer name: sediments Description: Sediment features Value range: 14 categories Units: categorical Source: Griffiths 2014 (unpublished) URL: http://share.biodiversity.aq/GIS/antarctic/
Layer name: slope Description: Seafloor slope derived from bathymetry with the terrain function of the raster R package. Computation according to Horn (1981), i.e. option neighbors=8. The computation was done on the GEBCO bathymetry layer (0.0083 degrees resolution) and the resolution was then changed to 0.1 degrees. Value range: 0.000252378 - 16.94809 Units: degrees Source: This study. Derived from GEBCO URL: https://www.gebco.net/data_and_products/gridded_bathymetry_data/ Citation: Horn, B.K.P., 1981. Hill shading and the reflectance map. Proceedings of the IEEE 69:14-47
Layer name: roughness Description: Seafloor roughness derived from bathymetry with the terrain function of the raster R package. Roughness is the difference between the maximum and the minimum value of a cell and its 8 surrounding cells (see the illustrative sketch after the layer list below). The computation was done on the GEBCO bathymetry layer (0.0083 degrees resolution) and the resolution was then changed to 0.1 degrees. Value range: 0 - 5171.278 Units: unitless Source: This study. Derived from GEBCO URL: https://www.gebco.net/data_and_products/gridded_bathymetry_data/
Layer name: mixed layer depth Description: Summer mixed layer depth climatology from ARGOS data. Regridded from 2-degree grid using nearest neighbour interpolation Value range: 13.79615 - 461.5424 Units: m Source: https://data.aad.gov.au/metadata/records/Polar_Environmental_Data
Layer name: seasurface_current_speed Description: Current speed near the surface (2.5m depth), derived from the CAISOM model (Galton-Fenzi et al. 2012, based on ROMS model) Value range: 1.50E-04 - 1.7 Units: m/s Source: This study. Derived from Australian Antarctic Data Centre URL: https://data.aad.gov.au/metadata/records/Polar_Environmental_Data Citation: see Galton-Fenzi BK, Hunter JR, Coleman R, Marsland SJ, Warner RC (2012) Modeling the basal melting and marine ice accretion of the Amery Ice Shelf. Journal of Geophysical Research: Oceans, 117, C09031. http://dx.doi.org/10.1029/2012jc008214, https://data.aad.gov.au/metadata/records/polar_environmental_data
Layer name: seafloor_current_speed Description: Current speed near the sea floor, derived from the CAISOM model (Galton-Fenzi et al. 2012, based on ROMS) Value range: 3.40E-04 - 0.53 Units: m/s Source: This study. Derived from Australian Antarctic Data Centre URL: https://data.aad.gov.au/metadata/records/Polar_Environmental_Data Citation: see Galton-Fenzi BK, Hunter JR, Coleman R, Marsland SJ, Warner RC (2012) Modeling the basal melting and marine ice accretion of the Amery Ice Shelf. Journal of Geophysical Research: Oceans, 117, C09031. http://dx.doi.org/10.1029/2012jc008214, https://data.aad.gov.au/metadata/records/polar_environmental_data
Layer name: distance_antarctica Description: Distance to the nearest part of the Antarctic continent Value range: 0 - 3445 Units: km Source: https://data.aad.gov.au/metadata/records/Polar_Environmental_Data
Layer name: distance_canyon Description: Distance to the axis of the nearest canyon Value range: 0 - 3117 Units: km Source: https://data.aad.gov.au/metadata/records/Polar_Environmental_Data
Layer name: distance_max_ice_edge Description: Distance to the mean maximum winter sea ice extent (derived from daily estimates of sea ice concentration) Value range: -2614.008 - 2314.433 Units: km Source: https://data.aad.gov.au/metadata/records/Polar_Environmental_Data
Layer name: distance_shelf Description: Distance to nearest area of seafloor of depth 500m or shallower Value range: -1296 - 1750 Units: km Source: https://data.aad.gov.au/metadata/records/Polar_Environmental_Data
Layer name: ice_cover_max Description: Ice concentration fraction, maximum over the 1957-2017 period Value range: 0 - 1 Units: unitless Source: Bio-ORACLE, accessed 24/04/2018; see Assis et al. (2018) URL: http://www.bio-oracle.org/ Citation: Assis J, Tyberghein L, Bosch S, Verbruggen H, Serrao EA and De Clerck O (2018). Bio-ORACLE v2.0: Extending marine data layers for bioclimatic modelling. Global Ecology and Biogeography, 27(3), 277-284; see also https://www.ecmwf.int/en/research/climate-reanalysis/ocean-reanalysis
Layer name: ice_cover_mean Description: Ice concentration fraction, mean over the 1957-2017 period Value range: 0 - 0.9708595 Units: unitless Source: Bio-ORACLE, accessed 24/04/2018; see Assis et al. (2018) URL: http://www.bio-oracle.org/ Citation: Assis J, Tyberghein L, Bosch S, Verbruggen H, Serrao EA and De Clerck O (2018). Bio-ORACLE v2.0: Extending marine data layers for bioclimatic modelling. Global Ecology and Biogeography, 27(3), 277-284; see also https://www.ecmwf.int/en/research/climate-reanalysis/ocean-reanalysis
Layer name: ice_cover_min Description: Ice concentration fraction, minimum over the 1957-2017 period Value range: 0 - 0.8536261 Units: unitless Source: Bio-ORACLE, accessed 24/04/2018; see Assis et al. (2018) URL: http://www.bio-oracle.org/ Citation: Assis J, Tyberghein L, Bosch S, Verbruggen H, Serrao EA and De Clerck O (2018). Bio-ORACLE v2.0: Extending marine data layers for bioclimatic modelling. Global Ecology and Biogeography, 27(3), 277-284; see also https://www.ecmwf.int/en/research/climate-reanalysis/ocean-reanalysis
Layer name: ice_cover_range Description: Ice concentration fraction, difference maximum-minimum over the 1957-2017 period Value range: 0 - 1 Units: unitless Source: Bio-ORACLE, accessed 24/04/2018; see Assis et al. (2018) URL: http://www.bio-oracle.org/ Citation: Assis J, Tyberghein L, Bosch S, Verbruggen H, Serrao EA and De Clerck O (2018). Bio-ORACLE v2.0: Extending marine data layers for bioclimatic modelling. Global Ecology and Biogeography, 27(3), 277-284; see also https://www.ecmwf.int/en/research/climate-reanalysis/ocean-reanalysis
Layer name: ice_thickness_max Description: Ice thickness, maximum over the 1957-2017 period Value range: 0 - 3.471811 Units: m Source: Bio-ORACLE, accessed 24/04/2018; see Assis et al. (2018) URL: http://www.bio-oracle.org/ Citation: Assis J, Tyberghein L, Bosch S, Verbruggen H, Serrao EA and De Clerck O (2018). Bio-ORACLE v2.0: Extending marine data layers for bioclimatic modelling. Global Ecology and Biogeography, 27(3), 277-284; see also https://www.ecmwf.int/en/research/climate-reanalysis/ocean-reanalysis
Layer name: ice_thickness_mean Description: Ice thickness, mean over the 1957-2017 period Value range: 0 - 1.614133 Units: m Source: Bio-ORACLE, accessed 24/04/2018; see Assis et al. (2018) URL: http://www.bio-oracle.org/ Citation: Assis J, Tyberghein L, Bosch S, Verbruggen H, Serrao EA and De Clerck O (2018). Bio-ORACLE v2.0: Extending marine data layers for bioclimatic modelling. Global Ecology and Biogeography, 27(3), 277-284; see also https://www.ecmwf.int/en/research/climate-reanalysis/ocean-reanalysis
Layer name: ice_thickness_min Description: Ice thickness, minimum over the 1957-2017 period Value range: 0 - 0.7602701 Units: m Source: Bio-ORACLE, accessed 24/04/2018; see Assis et al. (2018) URL: http://www.bio-oracle.org/ Citation: Assis J, Tyberghein L, Bosch S, Verbruggen H, Serrao EA and De Clerck O (2018). Bio-ORACLE v2.0: Extending marine data layers for bioclimatic modelling. Global Ecology and Biogeography, 27(3), 277-284; see also https://www.ecmwf.int/en/research/climate-reanalysis/ocean-reanalysis
Layer name: ice_thickness_range Description: Ice thickness, difference maximum-minimum over the 1957-2017 period Value range: 0 - 3.471811 Units: m Source: Bio-ORACLE, accessed 24/04/2018; see Assis et al. (2018) URL: http://www.bio-oracle.org/ Citation: Assis J, Tyberghein L, Bosch S, Verbruggen H, Serrao EA and De Clerck O (2018). Bio-ORACLE v2.0: Extending marine data layers for bioclimatic modelling. Global Ecology and Biogeography, 27(3), 277-284; see also https://www.ecmwf.int/en/research/climate-reanalysis/ocean-reanalysis
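As an illustration of how the roughness layer above is defined (the difference between the maximum and the minimum of a cell and its 8 surrounding cells), here is a rough Python/numpy sketch. The published layer was produced in R with the terrain function of the raster package from GEBCO bathymetry, so this is only an equivalent-by-definition example on an arbitrary array, not the original workflow.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def roughness(bathymetry: np.ndarray) -> np.ndarray:
    """Difference between the max and min of each cell and its 8 neighbours (3x3 window)."""
    # 'nearest' edge handling is an arbitrary choice for this sketch;
    # raster::terrain handles grid edges differently.
    return (maximum_filter(bathymetry, size=3, mode="nearest")
            - minimum_filter(bathymetry, size=3, mode="nearest"))

# Toy example on a small synthetic depth grid (values in metres).
demo = np.array([[0.0, 10.0, 20.0],
                 [5.0, 15.0, 25.0],
                 [10.0, 20.0, 5000.0]])
print(roughness(demo))
```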
Attribution-NonCommercial 4.0 (CC BY-NC 4.0)https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Dataset description
Based on databases and on scientific and internet publications, this dataset lists small armed UAVs and missiles deployed and used worldwide, as well as systems under research and development, together with their properties. Unarmed UAVs are included in order to investigate the global usage of small UAVs and thus the overall interest in smaller systems; this comprises unarmed systems that could be fitted with weapons or used as weapons themselves.
The two datasets list properties of small and very small UAVs below 2 m in size (wingspan, length, or rotor diameter) and of missiles with diameters below 69 mm. Currently (Version 2.0) the datasets comprise 152 UAVs and 50 missiles, respectively.
In order to minimise any contribution to the proliferation of these systems, only public sources were investigated, i.e. the internet as well as publicly available databases and catalogues. Furthermore, where information is incomplete, no estimates based on the laws of physics or stemming from engineering expertise are given. Improvised or modified versions of UAVs or missiles already in use by non-state actors are left out for the same reason.
Where available, the basic properties of UAVs are listed together with the year of introduction, to allow statements about trends in UAV capabilities in recent years. Due to the sheer number of UAV types available today, we focused mainly on UAVs intended to fulfil military roles, such as reconnaissance or combat. An exception is UAVs that fall under the very small (<0.2 m) category: there, most UAVs are still in the research or development stage and are neither in military service nor designed for military use. However, research and development (R&D) of some systems was originally funded by military institutions. In any case, these projects are important indicators of the future potential of these small-sized aircraft.
Project Webpage
The datasets are a part of the research project "Preventive Arms Control for Small and Very Small Aircraft and Missiles" of TU Dortmund University. The project has been funded by the German Foundation for Peace Research (DSF, https://bundesstiftung-friedensforschung.de/) in its funding line "New Technologies: Risks and Chances for International Security and Peace".
For a full description of this project, visit https://url.tu-dortmund.de/pacsam.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Open-access database of englacial temperature measurements compiled from data submissions and published literature. It is developed on GitHub and published to Zenodo.
Data structure
The dataset adheres to the Frictionless Data Tabular Data Package specification. The metadata in datapackage.json describes, in detail, the contents of the tabular data files in the data folder:
source.csv: Description of each data source (either a personal communication or the reference to a published study).
borehole.csv: Description of each borehole (location, elevation, etc), linked to source.csv via source_id and less formally via source identifiers in notes.
profile.csv: Description of each profile (date, etc), linked to borehole.csv via borehole_id and to source.csv via source_id and less formally via source identifiers in notes.
measurement.csv: Description of each measurement (depth and temperature), linked to profile.csv via borehole_id and profile_id.
For boreholes with many profiles (e.g. from automated loggers), pairs of profile.csv and measurement.csv are stored separately in subfolders of data named {source.id}-{glacier}, where glacier is a simplified and kebab-cased version of the glacier name (e.g. flowers2022-little-kluane).
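For instance, a tiny Python sketch of the kebab-casing convention used for these subfolder names (illustrative only; the database's own tooling may implement it differently):

```python
import re

def subfolder_name(source_id: str, glacier_name: str) -> str:
    """Build the data subfolder name {source_id}-{glacier}, kebab-casing the glacier name."""
    glacier = re.sub(r"[^a-z0-9]+", "-", glacier_name.lower()).strip("-")
    return f"{source_id}-{glacier}"

print(subfolder_name("flowers2022", "Little Kluane"))  # -> flowers2022-little-kluane
```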
data/source.csv
Sources of information considered in the compilation of this database. Column names and categorical values closely follow the Citation Style Language (CSL) 1.0.2 specification. Names of people in non-Latin scripts are followed by a latinization in square brackets (e.g. В. С. Загороднов [V. S. Zagorodnov]) and non-English titles are followed by a translation in square brackets.
name type description
id (required) string Unique identifier constructed from the first author's lowercase, latinized, family name and the publication year, followed as needed by a lowercase letter to ensure uniqueness (e.g. Загороднов 1981 → zagorodnov1981a).
author (required) string Author names (optionally followed by their ORCID in parentheses) as a pipe-delimited list.
year (required) year Year of publication.
type (required) string Item type.
- article-journal: Journal article
- book: Book (if the entire book is relevant)
- chapter: Book section
- document: Document not fitting into any other category
- dataset: Collection of data
- map: Geographic map
- paper-conference: Paper published in conference proceedings
- personal-communication: Personal communication between individuals
- speech: Presentation (talk, poster) at a conference
- report: Report distributed by an institution
- thesis: Thesis written to satisfy degree requirements
- webpage: Website or page on a website
title string Item title.
url string URL (DOI if available).
language (required) string Language as ISO 639-1 two-letter language code.
- de: German
- en: English
- fr: French
- ko: Korean
- ru: Russian
- sv: Swedish
- zh: Chinese
container_title string Title of the container (e.g. journal, book).
volume integer Volume number of the item or container.
issue string Issue number (e.g. 1) or range (e.g. 1-2) of the item or container, with an optional letter prefix (e.g. F1).
page string Page number (e.g. 1) or range (e.g. 1-2) of the item in the container.
version string Version number (e.g. 1.0) of the item.
editor string Editor names (e.g. of the containing book) as a pipe-delimited list.
collection_title string Title of the collection (e.g. book series).
collection_number string Number (e.g. 1) or range (e.g. 1-2) in the collection (e.g. book series volume).
publisher string Publisher name.
data/borehole.csv
Metadata about each borehole.
name type description
id (required) integer Unique identifier.
source_id (required) string Identifier of the source of the earliest temperature measurements. This is also the source of the borehole attributes unless otherwise stated in notes.
glacier_name (required) string Glacier or ice cap name (as reported).
glims_id string Global Land Ice Measurements from Space (GLIMS) glacier identifier.
location_origin (required) string Origin of location (latitude, longitude).
- submitted: Provided in data submission
- published: Reported as coordinates in original publication
- digitized: Digitized from published map with complete axes
- estimated: Estimated from published plot by comparing to a map (e.g. Google Maps, CalTopo)
- guessed: Estimated with difficulty, for example by comparing elevation to a map (e.g. Google Maps, CalTopo)
latitude (required) number [degree] Latitude (EPSG 4326).
longitude (required) number [degree] Longitude (EPSG 4326).
elevation_origin (required) string Origin of elevation (elevation).
- submitted: Provided in data submission
- published: Reported as number in original publication
- digitized: Digitized from published plot with complete axes
- estimated: Estimated from elevation contours in published map
- guessed: Estimated with difficulty, for example by comparing location (latitude, longitude) to a map of contemporary elevations (e.g. CalTopo, Google Maps)
elevation (required) number [m] Elevation above sea level.
label string Borehole name (e.g. as labeled on a plot).
date_min date (%Y-%m-%d) Begin date of drilling, or if not known precisely, the first possible date (e.g. 2019 → 2019-01-01).
date_max date (%Y-%m-%d) End date of drilling, or if not known precisely, the last possible date (e.g. 2019 → 2019-12-31).
drill_method string Drilling method.
- mechanical: Push, percussion, rotary
- thermal: Hot point, electrothermal, steam
- combined: Mechanical and thermal
ice_depth number [m] Starting depth of ice. Infinity (INF) indicates that ice was not reached.
depth number [m] Total borehole depth (not including drilling in the underlying bed).
to_bed boolean Whether the borehole reached the glacier bed.
temperature_accuracy number [°C] Thermistor accuracy or precision (as reported). Typically understood to represent one standard deviation.
notes string Additional remarks about the study site, the borehole, or the measurements therein. Sources are referenced by their id.
curator string Names of people who added the data to the database, as a pipe-delimited list.
data/profile.csv
Date and time of each measurement profile.
name type description
borehole_id (required) integer Borehole identifier.
id (required) integer Borehole profile identifier (starting from 1 for each borehole).
source_id (required) string Source identifier.
measurement_origin (required) string Origin of measurements (measurement.depth, measurement.temperature).
- submitted: Provided as numbers in data submission
- published: Numbers read from original publication
- digitized: Digitized from published plot(s) with Plot Digitizer
date_min date (%Y-%m-%d) Measurement date, or if not known precisely, the first possible date (e.g. 2019 → 2019-01-01).
date_max (required) date (%Y-%m-%d) Measurement date, or if not known precisely, the last possible date (e.g. 2019 → 2019-12-31).
time time (%H:%M:%S) Measurement time.
utc_offset number [h] Time offset relative to Coordinated Universal Time (UTC).
equilibrated boolean Whether temperatures have equilibrated following drilling.
notes string Additional remarks about the profile or the measurements therein. Sources are referenced by source.id.
data/measurement.csv
Temperature measurements with depth.
name type description
borehole_id (required) integer Borehole identifier.
profile_id (required) integer Borehole profile identifier.
depth (required) number [m] Depth below the glacier surface.
temperature (required) number [°C] Temperature.
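As a usage illustration (not part of the data package itself), here is a minimal pandas sketch that joins the three core tables via the keys described above. The relative file paths and the use of pandas are assumptions, and the per-glacier subfolders mentioned earlier are ignored.

```python
import pandas as pd

# Load the three core tables (paths assume the top-level data folder layout;
# the per-glacier subfolders described above are not read in this sketch).
borehole = pd.read_csv("data/borehole.csv")
profile = pd.read_csv("data/profile.csv")
measurement = pd.read_csv("data/measurement.csv")

# measurement -> profile via (borehole_id, profile_id); profile -> borehole via borehole_id.
temps = (measurement
         .merge(profile, left_on=["borehole_id", "profile_id"],
                right_on=["borehole_id", "id"], suffixes=("", "_profile"))
         .merge(borehole, left_on="borehole_id", right_on="id",
                suffixes=("", "_borehole")))

# Example use: mean measured temperature per glacier.
print(temps.groupby("glacier_name")["temperature"].mean().head())
```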
https://www.nist.gov/open/licensehttps://www.nist.gov/open/license
Non-rigid 3D objects are commonly seen in our surroundings. However, previous efforts have mainly been devoted to the retrieval of rigid 3D models, so comparing non-rigid 3D shapes is still a challenging problem in content-based 3D object retrieval. We therefore organize this track to promote the development of non-rigid 3D shape retrieval. The objective of this track is to evaluate the performance of 3D shape retrieval approaches on a subset of a publicly available non-rigid 3D model database, the McGill Articulated Shape Benchmark database. Task description: The task is to evaluate the dissimilarity between every two objects in the database and then output the dissimilarity matrix. Data set: The McGill Articulated Shape Benchmark database consists of 255 non-rigid 3D models classified into 10 categories. The maximum number of objects in a class is 31, while the minimum number is 20. 200 models were selected (or modified) to generate our test database, ensuring that every class contains an equal number of models. The models are represented as watertight triangle meshes and the file format is the ASCII Object File Format (*.off). The original database is publicly available on the website: http://www.cim.mcgill.ca/~shape/benchMark/ Evaluation Methodology: We will employ the following evaluation measures: Precision-Recall curve; Average Precision (AP) and Mean Average Precision (MAP); E-Measure; Discounted Cumulative Gain; Nearest Neighbor, First-Tier (Tier1) and Second-Tier (Tier2). Please cite the paper: SHREC'10 Track: Non-rigid 3D Shape Retrieval, Z. Lian, A. Godil, T. Fabry, T. Furuya, J. Hermans, R. Ohbuchi, C. Shu, D. Smeets, P. Suetens, D. Vandermeulen, S. Wuhrer. In: M. Daoudi, T. Schreck, M. Spagnuolo, I. Pratikakis, R. Veltkamp (eds.), Proceedings of the Eurographics/ACM SIGGRAPH Symposium on 3D Object Retrieval, 2010.
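For illustration only (this is not part of the official track evaluation code), here is a short Python sketch of how the Nearest Neighbor and First-Tier measures can be computed from a dissimilarity matrix and class labels, under their usual definitions.

```python
import numpy as np

def nn_and_first_tier(D: np.ndarray, labels: np.ndarray):
    """Nearest Neighbor and First-Tier scores from an N x N dissimilarity matrix.

    D[i, j] is the dissimilarity between objects i and j; labels[i] is the class of object i.
    Illustrative sketch only; assumes every class has at least two members.
    """
    n = len(labels)
    nn_hits, tier1 = 0, 0.0
    for i in range(n):
        order = np.argsort(D[i])
        order = order[order != i]                          # exclude the query itself
        same = labels[order] == labels[i]
        class_size = int((labels == labels[i]).sum()) - 1  # relevant items, excluding the query
        nn_hits += bool(same[0])                           # is the nearest neighbour in the same class?
        tier1 += same[:class_size].sum() / class_size      # recall within the first tier
    return nn_hits / n, tier1 / n

# Toy usage example: 4 objects in 2 classes.
D = np.array([[0, 1, 5, 6],
              [1, 0, 6, 5],
              [5, 6, 0, 2],
              [6, 5, 2, 0]], dtype=float)
labels = np.array([0, 0, 1, 1])
print(nn_and_first_tier(D, labels))  # -> (1.0, 1.0)
```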
PROSITE consists of documentation entries describing protein domains, families and functional sites, as well as associated patterns and profiles to identify them. PROSITE is complemented by ProRule, a collection of rules based on profiles and patterns, which increases the discriminatory power of profiles and patterns by providing additional information about functionally and/or structurally critical amino acids.
Open Database License (ODbL) v1.0https://www.opendatacommons.org/licenses/odbl/1.0/
License information was derived automatically
Open Prices
What is Open Prices? Open Prices is a project to collect and share prices of products around the world. It's a publicly available dataset that can be used for research, analysis, and more. Open Prices is developed and maintained by Open Food Facts. There are currently few companies that own large databases of product prices at the barcode level. These prices are not freely available, but are sold at a high price to private actors, researchers, and other organizations that can afford them. Open Prices aims to democratize access to price data by collecting and sharing product prices under an open licence. The data is available under the Open Database License (ODbL), which means that it can be used for any purpose, as long as you credit Open Prices and share any modifications you make to the dataset. Images submitted as proof are licensed under the Creative Commons Attribution-ShareAlike 4.0 International.
Dataset description: This dataset contains, in Parquet format, all price information contained in the Open Prices database. The dataset is updated daily. Here is a description of the most important columns:
id: The ID of the price in the DB
product_code: The barcode of the product, null if the product is a "raw" product (fruit, vegetable, etc.)
category_tag: The category of the product, only present for "raw" products. We follow the Open Food Facts category taxonomy for category IDs.
labels_tags: The labels of the product, only present for "raw" products. We follow the Open Food Facts label taxonomy for label IDs.
origins_tags: The origins of the product, only present for "raw" products. We follow the Open Food Facts origin taxonomy for origin IDs.
price: The price of the product, with the discount if any
price_is_discounted: Whether the price is discounted or not
price_without_discount: The price of the product without discount, null if the price is not discounted
price_per: The unit for which the price is given (e.g. "KILOGRAM", "UNIT")
currency: The currency of the price
location_osm_id: The OpenStreetMap ID of the location where the price was recorded. We use OpenStreetMap to uniquely identify the store where the price was recorded.
location_osm_type: The type of the OpenStreetMap location (e.g. "NODE", "WAY")
location_id: The ID of the location in the Open Prices database
date: The date when the price was recorded
proof_id: The ID of the proof of the price in the Open Prices DB
owner: A hash of the owner of the price, for privacy
created: The date when the price was created in the Open Prices DB
updated: The date when the price was last updated in the Open Prices DB
proof_file_path: The path to the proof file in the Open Prices DB
proof_type: The type of the proof. Possible values are RECEIPT, PRICE_TAG, GDPR_REQUEST, SHOP_IMPORT
proof_date: The date of the proof
proof_currency: The currency of the proof, which should be the same as the price currency
proof_created: The datetime when the proof was created in the Open Prices DB
proof_updated: The datetime when the proof was last updated in the Open Prices DB
location_osm_display_name: The display name of the OpenStreetMap location
location_osm_address_city: The city of the OpenStreetMap location
location_osm_address_postcode: The postcode of the OpenStreetMap location
How can I download images? All images can be accessed under the https://prices.openfoodfacts.org/img/ base URL. You just have to concatenate the proof_file_path column to this base URL to get the full URL of the image (e.g. https://prices.openfoodfacts.org/img/0010/lqGHf3ZcVR.webp).
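As an illustration (not part of the dataset itself), here is a minimal Python sketch that loads the Parquet export with pandas and builds proof-image URLs from proof_file_path; the local file name prices.parquet is an assumption for this sketch.

```python
import pandas as pd

# Load the Open Prices Parquet export (local file name is an assumption; requires pyarrow or fastparquet).
prices = pd.read_parquet("prices.parquet")

# Build the full proof-image URL by concatenating proof_file_path to the base URL described above.
BASE_URL = "https://prices.openfoodfacts.org/img/"
prices["proof_image_url"] = BASE_URL + prices["proof_file_path"]

# Example use: average recorded price per currency, split by discounted vs non-discounted prices.
print(prices.groupby(["currency", "price_is_discounted"])["price"].mean().head())
print(prices["proof_image_url"].iloc[0])
```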
Can I contribute to Open Prices? Of course! You can contribute by adding prices, through the Open Prices website or through the Open Food Facts mobile app. To participate in the technical development, you can check the Open Prices GitHub repository.
The ImageNet dataset contains 14,197,122 annotated images organized according to the WordNet hierarchy. Since 2010 the dataset has been used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection. The publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld. ILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”. The ImageNet project does not own the copyright of the images; therefore only thumbnails and URLs of images are provided.
Total number of non-empty WordNet synsets: 21,841
Total number of images: 14,197,122
Number of images with bounding box annotations: 1,034,908
Number of synsets with SIFT features: 1,000
Number of images with SIFT features: 1.2 million
✔️ Easy-to-handle Excel Sheet ✔️ Human-researched and verified leads with direct contacts ✔️ Up to 🇫🇷 10K French active Amazon third-party private-label seller leads with direct contact info ✔️ Up to 30+ data points for each prospect ✔️ Sort your list by store size, product category, company location, and much more! ✔️ Enjoy a list that has been hand-researched and verified. No scraped contacts! Buy a B2B contact database with SellerDirectories.com, and get your B2B contacts database sorted!
Data includes:
Seller ID
Seller Business Model: Private Label Seller/Wholesaler
Estimated Annual Revenue in $ (accurate +/-30%)
Annual Revenue Bracket [$]
% of goods shipped in FBA
Amazon Seller Page
Seller Website
Decision Maker First Name
Decision Maker Last Name
Direct Email
Generic Email
Office Phone Number
Decision Maker LinkedIn URL
Seller Name
Seller Storefront Link
Seller Business Name
Seller Business Full Address
City
State
Zip
Seller Business Country
Number of Reviews [30 days]
Number of Reviews [90 days]
Number of Reviews [12 months]
Number of Reviews [lifetime]
Seller Rating Lifetime
Total number of products
Total number of brands
Brands
Top Product URL
Top Product Shipping From
Top Product Category
Top Product SubCategory
About SellerDirectories: SellerDirectories.com provides human-researched and verified data on Amazon sellers and eCommerce Brands.
Our human-researched and verified data is trusted by companies like Walmart, Microsoft, Tencent, Helium10, and 250 more! ⭐ Check our verified 5-star reviews on Trustpilot ⭐ https://www.trustpilot.com/review/sellerdirectories.com ⭐
✅ Data Sources: Aggregated from 50+ sources
✅ Data Collection: Human-researched and verified (we don't like scraped contact data)
✅ 98%+ accurate and up-to-date data (verified)
✅ Brand/seller Targeting Options: Multiple filters available (including revenue, location, business model, and product category)
✅ Customer Service: Lifetime support and accuracy guarantee on your list. Our lists include resources on how best to run outreach campaigns to turn a prospect list into actual business opportunities.