15 datasets found
  1. [Crypto] CoinGecko vs CoinMarketCap Data
     A tiny dataset for comparing CG's "Trust Score" with CMC's "Liquidity Metric"

    • kaggle.com
    Updated May 11, 2020
    Cite
    Sherpa (2020). [Crypto] CoinGecko vs CoinMarketCap Data [Dataset]. https://www.kaggle.com/thesherpafromalabama/coingecko-vs-coinmarketcap-data/discussion
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    May 11, 2020
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Sherpa
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Use the CMC_CG_Combo dataset, unless you want to recollect and DIY!

    Context

    On a quest to compare different crypto exchanges, I came up with the idea of comparing metrics across multiple platforms (at the moment just two). CoinGecko and CoinMarketCap are two of the biggest websites for monitoring both exchanges and crypto projects. In response to over-inflated volumes faked by crypto exchanges, both websites came up with independent metrics for assessing the worth of a given exchange.

    Content

    Collected on May 10, 2020

    CoinGecko's data is a bit more holistic, containing metrics across a multitude of areas (you can read more in the original blog post here). The data from CoinGecko consists of the following:

    • Exchange Name
    • Trust Score (on a scale of N/A-10)
    • Type (centralized/decentralized)
    • AML (risk: how well prepared are they to handle financial crime?)
    • API Coverage (blanket measure that includes: (1) Tickers Data, (2) Historical Trades Data, (3) Order Book Data, (4) Candlestick/OHLC, (5) WebSocket API, (6) API Trading, (7) Public Documentation)
    • API Last Updated (when was the API last updated?)
    • Bid Ask Spread (average buy/sell spread across all pairs)
    • Candlestick (available/not)
    • Combined Orderbook Percentile (see above link)
    • Estimated_Reserves (estimated holdings of major crypto)
    • Grade_Score (overall API score)
    • Historical Data (available/not)
    • Jurisdiction Risk (risk: risk of terrorist activity/bribery/corruption?)
    • KYC Procedures (risk: Know Your Customer?)
    • License and Authorization (risk: has the exchange sought regulatory approval?)
    • Liquidity (don't confuse with "CMC Liquidity"; THIS column is a combo of (1) web traffic & reported volume, (2) order book spread, (3) trading activity, (4) trust score on trading pairs)
    • Negative News (risk: any bad news?)
    • Normalized Trading Volume (trading volume normalized to web traffic)
    • Normalized Volume Percentile (see above blog link)
    • Orderbook (available/not)
    • Public Documentation (got a well-documented API available to everyone?)
    • Regulatory Compliance (risk rating from a compliance perspective)
    • Regulatory Last Updated (last time regulatory metrics were updated)
    • Reported Trading Volume (volume as listed by the exchange)
    • Reported Normalized Trading Volume (ratio of normalized to reported volume [0-1])
    • Sanctions (risk: risk of sanctions?)
    • Scale (based on: (1) Normalized Trading Volume Percentile, (2) Normalized Order Book Depth Percentile)
    • Senior Public Figure (risk: does the exchange have transparent public relations? etc.)
    • Tickers (tick tick tick...)
    • Trading via API (can data be traded through the API?)
    • Websocket (got websockets?)
    • Green Pairs (percentage of trading pairs deemed to have good liquidity)
    • Yellow Pairs (percentage of trading pairs deemed to have fair liquidity)
    • Red Pairs (percentage of trading pairs deemed to have poor liquidity)
    • Unknown Pairs (percentage of trading pairs that do not have sufficient order book data)

    ~

    Again, CoinMarketCap has only one metric (recently updated; it scales from 1 to 1000, with 1000 being very liquid and 1 not). You can go check the article out for yourself. In the dataset, this is the "CMC Liquidity" column, not to be confused with the "Liquidity" column, which refers to the CoinGecko metric!
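
    For a quick look at how the two metrics line up, a minimal pandas sketch along these lines could be used. It assumes the combined file is saved as "CMC_CG_Combo.csv" and that the columns are literally named "Exchange Name", "Trust Score", "Liquidity", and "CMC Liquidity"; adjust the names to whatever the actual headers are.

    ```python
    # Hedged sketch: compare CoinGecko's scores against CMC's "CMC Liquidity".
    # Assumes a CSV export named "CMC_CG_Combo.csv" with the column names below.
    import pandas as pd

    df = pd.read_csv("CMC_CG_Combo.csv")

    # Coerce the metric columns to numeric (Trust Score can be "N/A").
    metrics = ["Trust Score", "Liquidity", "CMC Liquidity"]
    for col in metrics:
        df[col] = pd.to_numeric(df[col], errors="coerce")

    # Spearman rank correlation is a sensible first look, since the metrics
    # live on different scales (N/A-10 vs 1-1000).
    print(df[metrics].corr(method="spearman"))

    # Exchanges where the two sites disagree most, by rank difference.
    df["rank_gap"] = (df["Liquidity"].rank() - df["CMC Liquidity"].rank()).abs()
    print(df.nlargest(10, "rank_gap")[["Exchange Name", "Liquidity", "CMC Liquidity"]])
    ```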

    Acknowledgements

    Thanks to CoinGecko and CMC for making their data scrapable :)

    [CMC, you should try to give us a little more access to the figures that define your metric. Thanks!]

    Inspiration

    Your data will be in front of the world's largest data science community. What questions do you want to see answered?

  2. Metabolomics data for crude protein content in diets for Huangjiang...

    • scidb.cn
    Updated Apr 12, 2024
    Cite
    Md. Abul Kalam Azad (2024). Metabolomics data for crude protein content in diets for Huangjiang mini-pigs [Dataset]. http://doi.org/10.57760/sciencedb.17962
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Apr 12, 2024
    Dataset provided by
    Science Data Bank
    Authors
    Md. Abul Kalam Azad
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Huangjiangzhen
    Description

    The metabolite contents in the jejunum and ileum of Huanjiang mini-pigs were determined using a non-targeted metabolomics approach with UPLC-HDMS. The metabolomics procedures included sample preparation, metabolite separation and detection, data preprocessing, and statistical analysis.

    For metabolite identification, approximately 25 mg of each sample was weighed into a 2-mL EP tube, and 500 µL of extraction solution (acetonitrile:methanol:water = 2:2:1, v/v, containing an isotopically labeled internal standard mixture) was added. After 30 s of vortexing, the mixed samples were homogenized at 35 Hz for 4 min and sonicated in an ice-water bath for 5 min; the homogenization and sonication cycles were repeated three times. The samples were then incubated for 1 h at -40 °C and centrifuged at 12,000 × g for 15 min at 4 °C. The resulting supernatants were filtered through a 0.22-µm membrane and transferred to fresh glass vials for further analysis. The quality control (QC) sample was obtained by mixing equal aliquots of the supernatants from all samples.

    An ultra-performance liquid chromatography (UPLC) system (Vanquish, Thermo Fisher Scientific, Waltham, MA, USA) with a UPLC BEH Amide column (2.1 × 100 mm, 1.7 µm) coupled with a Q Exactive HFX mass spectrometer (Orbitrap MS, Thermo Fisher Scientific, Waltham, MA, USA) was used to perform the LC-MS/MS analyses. Mobile phase A contained 25 mmol/L ammonium acetate and 25 mmol/L ammonia hydroxide in water, and mobile phase B was acetonitrile. The injection volume was 3 µL, and the auto-sampler temperature was set at 4 °C. MS/MS spectra were acquired in information-dependent acquisition (IDA) mode, with the QE HFX mass spectrometer operated under the control of the acquisition software (Xcalibur, Thermo Fisher Scientific, Waltham, MA, USA); in this mode, the software continuously evaluates the full-scan MS spectrum. The ESI source conditions were set as follows: sheath gas flow rate 30 Arb, aux gas flow rate 25 Arb, capillary temperature 350 °C, full MS resolution 60,000, MS/MS resolution 7,500, collision energy 10/30/60 in NCE mode, and spray voltage 3.6 kV (positive ion mode) or -3.2 kV (negative ion mode).

    For peak detection, extraction, alignment, and integration, the raw data were converted into mzXML format with ProteoWizard and then processed with an in-house program developed in R and based on XCMS. Metabolites were annotated using an in-house MS2 (secondary mass spectrometry) database (BiotreeDB v2.1), with the annotation cutoff set to 0.3. PCA and orthogonal partial least squares discriminant analysis (OPLS-DA) were performed in SIMCA v16.0.2 (Sartorius Stedim Data Analytics AB, Umeå, Sweden) to visualize the separation and detect differential metabolites among the different CP content groups. The Kyoto Encyclopedia of Genes and Genomes (KEGG) database and MetaboAnalyst 5.0 were used for pathway analysis.
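
    For exploratory reanalysis outside SIMCA, the same kind of PCA overview can be approximated with open tools. The sketch below is only an illustration: it assumes a hypothetical table "metabolite_intensities.csv" with one row per sample, one column per metabolite, and a "group" column for the dietary CP level. It is not the authors' OPLS-DA workflow, and the file name and layout are not those of the deposited data.

    ```python
    # Minimal PCA illustration on an assumed samples-by-metabolites table.
    # File and column names here are hypothetical placeholders.
    import pandas as pd
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    data = pd.read_csv("metabolite_intensities.csv")
    groups = data["group"]
    X = data.drop(columns=["group"])

    # Autoscale (mean-center, unit variance), a common choice for metabolomics PCA.
    X_scaled = StandardScaler().fit_transform(X)

    pca = PCA(n_components=2)
    scores = pca.fit_transform(X_scaled)

    summary = pd.DataFrame(scores, columns=["PC1", "PC2"])
    summary["group"] = groups.values
    print("Explained variance ratio:", pca.explained_variance_ratio_)
    print(summary.groupby("group")[["PC1", "PC2"]].mean())
    ```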

  3. Introductory Data Science Pipeline Activity – Yellow Fever and Global...

    • qubeshub.org
    Updated May 15, 2024
    Cite
    Mary Mulcahy (2024). Introductory Data Science Pipeline Activity – Yellow Fever and Global Precipitation [Dataset]. http://doi.org/10.25334/SP15-ZK40
    Explore at:
    Dataset updated
    May 15, 2024
    Dataset provided by
    QUBES
    Authors
    Mary Mulcahy
    Description

    Students follow the steps of a tiny data science project from start to finish. They are given the research question: "Are the number of cases of yellow fever associated with global average precipitation?" The students locate the data from the World Health Organization and the Environmental Protection Agency, download it, and use the merged and cleaned data to see whether the evidence supports the hypothesis that yellow fever cases are higher in wetter years than in drier ones. The activity is intended to be used early in a course to prepare introductory students to eventually explore their own questions.
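
    A minimal sketch of the merge-and-compare step might look as follows; the file and column names are hypothetical stand-ins, since the exact WHO and precipitation exports students download will differ.

    ```python
    # Sketch of the activity's merge/clean/compare step with placeholder
    # file and column names ("year", "cases", "precip_anomaly" are assumed).
    import pandas as pd

    cases = pd.read_csv("who_yellow_fever_cases.csv")    # assumed columns: year, cases
    precip = pd.read_csv("global_precipitation.csv")     # assumed columns: year, precip_anomaly

    merged = pd.merge(cases, precip, on="year", how="inner").dropna()

    # Compare wetter vs drier years, splitting at the median precipitation.
    wet = merged["precip_anomaly"] > merged["precip_anomaly"].median()
    print("Mean cases, wetter years:", merged.loc[wet, "cases"].mean())
    print("Mean cases, drier years:", merged.loc[~wet, "cases"].mean())
    print("Correlation:", merged["cases"].corr(merged["precip_anomaly"]))
    ```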

  4. Mini Data Center Market Analysis North America, APAC, Europe, South America,...

    • technavio.com
    Updated May 15, 2024
    Cite
    Technavio (2024). Mini Data Center Market Analysis North America, APAC, Europe, South America, Middle East and Africa - US, China, Japan, UK, Germany - Size and Forecast 2024-2028 [Dataset]. https://www.technavio.com/report/mini-data-center-market-industry-analysis
    Explore at:
    Dataset updated
    May 15, 2024
    Dataset provided by
    TechNavio
    Authors
    Technavio
    Time period covered
    2021 - 2025
    Area covered
    Japan, China, United States, United Kingdom, Germany, Global
    Description


    Mini Data Center Market Size 2024-2028

    The mini data center market size is forecast to increase by USD 8.68 billion, at a CAGR of 21.68% between 2023 and 2028.
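
    As an arithmetic illustration of how the two quoted figures relate, the snippet below backs out the base-year value implied by a USD 8.68 billion increase at a 21.68% CAGR over 2023-2028; it adds no data beyond that calculation.

    ```python
    # Arithmetic illustration only: relate the stated absolute increase to the stated CAGR.
    increase_bn = 8.68   # USD billion increase, 2023 -> 2028 (from the report summary)
    cagr = 0.2168        # 21.68% CAGR (from the report summary)
    years = 5            # 2023 to 2028

    growth_factor = (1 + cagr) ** years
    implied_base_bn = increase_bn / (growth_factor - 1)
    implied_2028_bn = implied_base_bn * growth_factor

    print(f"Implied 2023 market size: ~USD {implied_base_bn:.2f} billion")
    print(f"Implied 2028 market size: ~USD {implied_2028_bn:.2f} billion")
    ```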

    The market is experiencing significant growth, driven by the increasing demand from Small and Medium Enterprises (SMEs) for reliable and efficient data storage solutions. This trend is further fueled by the growing need for edge computing, which requires data processing to occur closer to the source, reducing latency and enhancing responsiveness. However, the market faces a notable challenge: the lack of awareness and understanding among businesses regarding the benefits and implementation of mini data centers. This obstacle presents an opportunity for market participants to educate potential clients and demonstrate the value proposition of mini data centers in addressing their specific data management needs.
    Companies that successfully navigate this challenge and effectively communicate the advantages of mini data centers will be well-positioned to capitalize on the market's potential for growth.
    

    What will be the Size of the Mini Data Center Market during the forecast period?

    Explore in-depth regional segment analysis with market size data - historical 2018-2022 and forecasts 2024-2028 - in the full report.

    The market continues to evolve, driven by the ever-increasing demand for reliable and efficient IT infrastructure. Businesses across various sectors are adopting modular data centers to address their unique requirements, from server consolidation and disaster recovery to network optimization and capacity planning. These data centers incorporate advanced technologies such as redundant power supplies, precision cooling, and remote monitoring, seamlessly integrated into their design. Mini data centers come in various forms, including micro data centers and edge data centers, catering to the diverse needs of organizations. Their modular design allows for easy deployment, scalability, and flexibility, making them an attractive option for businesses seeking to minimize their carbon footprint and optimize operational efficiency.

    Data center construction and lifecycle management are crucial aspects of mini data center operations. From site selection and network infrastructure to HVAC systems and energy efficiency, every detail is meticulously planned and executed to ensure high availability and reliability. As the market continues to unfold, we see the integration of innovative technologies such as network virtualization, liquid cooling, and data center relocation services. These advancements enable businesses to optimize their IT infrastructure, reduce energy consumption, and enhance their overall IT infrastructure's performance and security. Maintenance services and support contracts are essential components of mini data center management, ensuring the seamless operation of these complex systems.

    Capacity planning and space optimization are also critical considerations, as businesses look to maximize their investment in IT infrastructure while minimizing costs and ensuring business continuity. New technologies and applications emerge regularly, and the market's diverse offerings, from rackmount servers and blade servers to fiber optic cables and Ethernet switches, cater to the changing needs of businesses in various sectors. In short, mini data centers offer businesses a range of solutions to meet their unique requirements, from server consolidation and disaster recovery to network optimization and capacity planning, driven by the need for reliable, efficient, and flexible IT infrastructure.

    With a focus on energy efficiency, operational efficiency, and carbon footprint reduction, these data centers are an essential component of modern IT infrastructure.

    How is this Mini Data Center Industry segmented?

    The mini data center industry research report provides comprehensive data (region-wise segment analysis), with forecasts and estimates in 'USD million' for the period 2024-2028, as well as historical data from 2018-2022 for the following segments.

    Type
    
      Containerized data centers
      Micro data centers
    
    
    Business Segment
    
      SMEs
      Large enterprises
    
    
    Geography
    
      North America
    
        US
    
    
      Europe
    
        Germany
        UK
    
    
      APAC
    
        China
        Japan
    
    
      Rest of World (ROW)
    

    By Type Insights

    The containerized data centers segment is estimated to witness significant growth during the forecast period.

    Containerized modular data centers are gaining prominence in the business landscape, serving as crucial infrastructure for edge computing and disaster recovery applications. As companies strive for operational efficiency and expansion, the demand for reliable data centers with robust storage and processing capacities is

  5. Global Tiny Homes Market Demand Forecasting 2025-2032

    • statsndata.org
    excel, pdf
    Updated Jun 2025
    Cite
    Stats N Data (2025). Global Tiny Homes Market Demand Forecasting 2025-2032 [Dataset]. https://www.statsndata.org/report/tiny-homes-market-145732
    Explore at:
    Available download formats: pdf, excel
    Dataset updated
    Jun 2025
    Authors
    Stats N Data
    License

    https://www.statsndata.org/how-to-order

    Area covered
    Global
    Description

    The Tiny Homes market has emerged as a revolutionary concept in the housing industry, driven by the need for affordable living solutions and a growing desire for sustainable lifestyles. As urban populations swell and housing costs spiral, more individuals and families are turning to tiny homes as a feasible alternat

  6. JASMINE mini-Mock survey dataset

    • zenodo.org
    application/gzip, pdf
    Updated Aug 21, 2024
    Cite
    Ryou Ohsawa; Ryou Ohsawa; Daisuke Kawata; Daisuke Kawata (2024). JASMINE mini-Mock survey dataset [Dataset]. http://doi.org/10.5281/zenodo.10403895
    Explore at:
    Available download formats: application/gzip, pdf
    Dataset updated
    Aug 21, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Ryou Ohsawa; Ryou Ohsawa; Daisuke Kawata; Daisuke Kawata
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Japan Astrometry Satellite Mission for INfrared Exploration (JASMINE) is a satellite mission that measures the precise positions and motions of stars in the Galactic nucleus region and inspects a periodic subtle dimming caused by exoplanets orbiting around M-type dwarf stars.

    We have generated small-scale JASMINE Galactic Centre survey data, JASMINE mini-Mock Survey (JmMS) data, for testing and validating JASMINE astrometric data analysis, especially for correction of optical distortion. For preliminary software validation, we design a simplified small-scale mock survey for “plate analysis”, where stellar motions and the variation in image distortion are treated as negligible within a dataset.

    To this end, we select a 1′×1′ target region around the Galactic center. We assume that JASMINE observes 4 fields per orbit. Two of these fields overlap by half of their field of view in the Galactic longitude direction, and the other two overlap by half of their field of view in the Galactic latitude direction. The fields of these observations are randomly selected, but at least one of the 4 fields in each orbit covers the selected target region. In this mini-mock survey data, we consider 100 orbits. These data allow us to validate whether our astrometric analysis code can achieve the expected astrometric accuracy by correcting the image distortion, under the assumption that the stellar positions in the sky and the higher-order image distortion do not change during the observational time (∼50 min) of one orbit. We summarise how the mock data for the mini JASMINE Galactic Centre survey were generated.

    The JASMINE mini-survey dataset contains the following files:

    1. readme.pdf: A document explaining how the dataset is generated.
    2. jasmine_validation_augumented_source_catalog.fits.gz: A FITS binary table of the source catalog that contains the ground truth astrometric parameters.
    3. jasmine_validation_reference_catalog.fits.gz: A FITS binary table of the reference catalog that contains the astrometric parameters resampled from the source catalog.
    4. jasmine_validation_survey_strategy.fits.gz: A FITS binary table of the survey strategy that describes the observation schedules with the telescope pointings and position angles.
    5. jasmine_validation_observation-vanilla_e4.0_20240530.tar.gz: An archive of the JmMS dataset in case the telescope has no image distortion and the focal length does not change throughout the mission. The measurement errors are set to ~4.0 mas for all the sources.
    6. jasmine_validation_observation-distortion_e4.0_20240602.tar.gz: An archive of the JmMS dataset in case the telescope suffers from image distortion and the focal length changes with every plate. The measurement errors are set to ~4.0 mas for all the sources.
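
    One hedged way to inspect the FITS binary tables listed above is with Astropy; the file names come from the list, while the printed columns are whatever each table actually defines.

    ```python
    # Minimal sketch: open two of the JmMS FITS binary tables with Astropy
    # and report their sizes and column names (pass hdu=1 explicitly if needed).
    from astropy.table import Table

    source = Table.read("jasmine_validation_augumented_source_catalog.fits.gz")
    strategy = Table.read("jasmine_validation_survey_strategy.fits.gz")

    print(f"Source catalog: {len(source)} rows, columns: {source.colnames}")
    print(f"Survey strategy: {len(strategy)} rows, columns: {strategy.colnames}")
    print(source[:5])  # first few ground-truth entries
    ```
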
  7. Feature Engineering Data

    • kaggle.com
    Updated Jul 23, 2019
    Cite
    Mat Leonard (2019). Feature Engineering Data [Dataset]. https://www.kaggle.com/matleonard/feature-engineering-data/metadata
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jul 23, 2019
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Mat Leonard
    Description

    This dataset is a sample from the TalkingData AdTracking competition. I kept all the positive examples (where is_attributed == 1), while discarding 99% of the negative samples. The sample has roughly 20% positive examples.

    For this competition, your objective was to predict whether a user will download an app after clicking a mobile app advertisement.

    File descriptions

    train_sample.csv - Sampled data

    Data fields

    Each row of the training data contains a click record, with the following features.

    • ip: IP address of the click.
    • app: app ID for marketing.
    • device: device type ID of the user's mobile phone (e.g., iPhone 6 Plus, iPhone 7, Huawei Mate 7, etc.)
    • os: OS version ID of the user's mobile phone
    • channel: channel ID of the mobile ad publisher
    • click_time: timestamp of the click (UTC)
    • attributed_time: if the user downloads the app after clicking an ad, this is the time of the app download
    • is_attributed: the target to be predicted, indicating whether the app was downloaded

    Note that ip, app, device, os, and channel are encoded.

    I'm also including Parquet files with various features for use within the course.
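
    A minimal pandas sketch for loading the sample and checking the class balance described above (column names follow the field list; click_time and attributed_time parse as timestamps):

    ```python
    # Load the sampled click data and check the roughly 20% positive rate described above.
    import pandas as pd

    clicks = pd.read_csv("train_sample.csv", parse_dates=["click_time", "attributed_time"])

    print(clicks["is_attributed"].value_counts(normalize=True))  # expect ~20% positives

    # Simple time-based features of the kind used for feature engineering.
    clicks["hour"] = clicks["click_time"].dt.hour
    clicks["day"] = clicks["click_time"].dt.day
    print(clicks.groupby("hour")["is_attributed"].mean().head())
    ```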

  8. Global Tiny Machine Learning (TinyML) Market Risk Analysis 2025-2032

    • statsndata.org
    excel, pdf
    Updated Jun 2025
    Cite
    Stats N Data (2025). Global Tiny Machine Learning (TinyML) Market Risk Analysis 2025-2032 [Dataset]. https://www.statsndata.org/report/tiny-machine-learning-tinyml-market-350861
    Explore at:
    Available download formats: excel, pdf
    Dataset updated
    Jun 2025
    Authors
    Stats N Data
    License

    https://www.statsndata.org/how-to-order

    Area covered
    Global
    Description

    The Tiny Machine Learning (TinyML) market is rapidly transforming the landscape of artificial intelligence by enabling machine learning capabilities on resource-constrained devices. With its rise, TinyML leverages the capabilities of lightweight models to perform inference tasks directly on edge devices, minimizing

  9. Global SME Big Data market size is USD xx million in 2024.

    • cognitivemarketresearch.com
    pdf, excel, csv, ppt
    Cite
    Cognitive Market Research, Global SME Big Data market size is USD xx million in 2024. [Dataset]. https://www.cognitivemarketresearch.com/sme-big-data-market-report
    Explore at:
    Available download formats: pdf, excel, csv, ppt
    Dataset authored and provided by
    Cognitive Market Research
    License

    https://www.cognitivemarketresearch.com/privacy-policy

    Time period covered
    2021 - 2033
    Area covered
    Global
    Description

    According to Cognitive Market Research, the global SME Big Data market size is USD xx million in 2024. It will expand at a compound annual growth rate (CAGR) of 4.60% from 2024 to 2031.

    • North America held the major market share, accounting for more than 40% of global revenue with a market size of USD xx million in 2024, and will grow at a CAGR of 2.8% from 2024 to 2031.
    • Europe accounted for a market share of over 30% of global revenue, with a market size of USD xx million.
    • Asia Pacific held a market share of around 23% of global revenue, with a market size of USD xx million in 2024, and will grow at a CAGR of 6.6% from 2024 to 2031.
    • Latin America had a market share of more than 5% of global revenue, with a market size of USD xx million in 2024, and will grow at a CAGR of 4.0% from 2024 to 2031.
    • Middle East and Africa had a market share of around 2% of global revenue, with an estimated market size of USD xx million in 2024, and will grow at a CAGR of 4.3% from 2024 to 2031.

    Software held the highest SME Big Data market revenue share in 2024.

    Market Dynamics of the SME Big Data Market

    Key Drivers for the SME Big Data Market

    Growing Recognition of Data-Driven Decision Making

    The growing recognition of data-driven decision making is a key driver in the SME Big Data market, as businesses increasingly understand the value of leveraging data for strategic decisions. This shift enables SMEs to optimize operations, enhance customer experiences, and gain competitive advantages. Access to affordable big data technologies and analytics tools has democratized data usage, making it feasible for smaller enterprises to adopt these solutions. SMEs can now analyze market trends, customer behaviors, and operational inefficiencies, leading to more informed and agile business strategies. This recognition propels demand for big data solutions, as SMEs seek to harness data insights to improve outcomes, innovate, and stay competitive in a rapidly evolving business landscape.

    Growing Number of Affordable Big Data Solutions

    The growing number of affordable big data solutions is driving the SME Big Data market by lowering the entry barrier for smaller enterprises to adopt advanced analytics. Cost-effective technologies, particularly cloud-based services, allow SMEs to access powerful data analytics tools without substantial upfront investments in infrastructure. This affordability enables SMEs to harness big data to gain insights into customer behavior, streamline operations, and enhance decision-making processes. As a result, more SMEs are integrating big data into their business models, leading to improved efficiency, innovation, and competitiveness. The availability of scalable and flexible solutions tailored to SME needs further accelerates adoption, making big data analytics an accessible and valuable resource for small and medium-sized businesses aiming for growth and success.

    Restraint Factor for the SME Big Data Market

    High Initial Investment Costs Limit Sales

    High initial costs are a significant restraint on the SME Big Data market, as they can deter smaller businesses from adopting big data technologies. Implementing big data solutions often requires substantial investment in hardware, software, and skilled personnel, which can be prohibitively expensive for SMEs with limited budgets. These costs include purchasing or subscribing to analytics platforms, upgrading IT infrastructure, and hiring data scientists or analysts. The financial burden associated with these initial expenses can make SMEs hesitant to commit to big data projects, despite the potential long-term benefits. Consequently, high initial costs limit the accessibility of big data analytics for SMEs, slowing the market's overall growth and the widespread adoption of these transformative technologies among smaller enterprises.

    Impact of Covid-19 on the SME Big Data Market

    The COVID-19 pandemic significantly impacted the SME Big Data market, accelerating digital transformation as businesses sought to adapt to rapidly changing conditions. With disruptions in traditional operations and a shift towards remote work, SMEs increasingly turned to big data analytics to maintain efficiency, manage supply chains, and understand evolving customer behaviors. The pandemic underscored the importance of real-time data insights for agile decision-making, dr...

  10. Mini Data Center Market, By Products (Containerized Data Center and Micro...

    • prophecymarketinsights.com
    pdf
    Updated Aug 2023
    Cite
    Prophecy Market Insights (2023). Mini Data Center Market, By Products (Containerized Data Center and Micro Data Center) and By Region (North America, Europe, Asia Pacific, Latin America, and Middle East & Africa) - Trends, Analysis and Forecast till 2030 [Dataset]. https://www.prophecymarketinsights.com/market_insight/Global-Mini-Data-Center-Market-1565
    Explore at:
    Available download formats: pdf
    Dataset updated
    Aug 2023
    Dataset authored and provided by
    Prophecy Market Insights
    License

    https://www.prophecymarketinsights.com/privacy_policy

    Time period covered
    2024 - 2034
    Area covered
    Global
    Description

    The mini data center market is estimated to reach USD 13.29 billion by 2030, growing at a CAGR of 16.2% during the forecast period.

  11. Global Tiny Machine Learning Devices Market Technological Advancements...

    • statsndata.org
    excel, pdf
    Updated Jun 2025
    Cite
    Stats N Data (2025). Global Tiny Machine Learning Devices Market Technological Advancements 2025-2032 [Dataset]. https://www.statsndata.org/report/tiny-machine-learning-devices-market-270343
    Explore at:
    Available download formats: excel, pdf
    Dataset updated
    Jun 2025
    Dataset authored and provided by
    Stats N Data
    License

    https://www.statsndata.org/how-to-order

    Area covered
    Global
    Description

    The Tiny Machine Learning Devices market is an emerging sector that encapsulates the confluence of miniaturized computing power and sophisticated artificial intelligence, enabling devices to perform complex tasks with minimal human intervention. These compact devices, often embedded in various consumer electronics,

  12. Article Dataset (Mini)

    • kaggle.com
    Updated Oct 18, 2024
    Cite
    Sani Kamal (2024). Article Dataset (Mini) [Dataset]. https://www.kaggle.com/datasets/sanikamal/article-50
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Oct 18, 2024
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Sani Kamal
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Overview

    This dataset contains 50 articles sourced from Medium, focusing on AI-related content. It is designed for business owners, content creators, and AI developers looking to analyze successful articles, improve engagement, and fine-tune large language models (LLMs). The data can be used to explore what makes articles perform well, including sentiment analysis, follower counts, and headline effectiveness.

    Dataset Contents

    • articles_50.db - Sample database with 50 articles (free version)

    The database includes pre-analyzed data such as sentiment scores, follower counts, and headline metadata, helping users gain insights into high-performing content.
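
    Because the schema of articles_50.db is not documented here, a safe first step is to list its tables and preview a few rows, assuming only that it is a standard SQLite file:

    ```python
    # Inspect articles_50.db without assuming its schema: list the tables,
    # then preview the first rows of each one.
    import sqlite3
    import pandas as pd

    conn = sqlite3.connect("articles_50.db")
    tables = pd.read_sql_query(
        "SELECT name FROM sqlite_master WHERE type = 'table'", conn
    )["name"].tolist()
    print("Tables:", tables)

    for name in tables:
        preview = pd.read_sql_query(f"SELECT * FROM {name} LIMIT 3", conn)
        print(f"\n{name}: columns = {list(preview.columns)}")
        print(preview)

    conn.close()
    ```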

    Use Cases

    • Content Strategy Optimization: Identify trends in successful AI-related articles to enhance your content approach.
    • Headline Crafting: Study patterns in top-performing headlines to create more compelling article titles.
    • LLM Fine-Tuning: Utilize the dataset to fine-tune AI models with real-world data on content performance.
    • Sentiment-Driven Content: Create content that resonates with readers by aligning with sentiment insights.

    This dataset is a valuable tool for anyone aiming to harness the power of data-driven insights to enhance their content or AI models.

  13. Global Tiny Home Design Software Market Overview and Outlook 2025-2032

    • statsndata.org
    excel, pdf
    Updated Jun 2025
    + more versions
    Cite
    Stats N Data (2025). Global Tiny Home Design Software Market Overview and Outlook 2025-2032 [Dataset]. https://www.statsndata.org/report/tiny-home-design-software-market-8011
    Explore at:
    Available download formats: excel, pdf
    Dataset updated
    Jun 2025
    Dataset authored and provided by
    Stats N Data
    License

    https://www.statsndata.org/how-to-order

    Area covered
    Global
    Description

    The Tiny Home Design Software market has emerged as a vital niche within the broader architectural and design software industry, reflecting a growing trend toward minimalist living and sustainable housing solutions. As the demand for tiny homes skyrockets, fueled by changing consumer attitudes toward real estate, af

  14. Data from: SCIENTIFIC MINI OFAD CYCLE 4 CALIBRATION

    • esdcdoi.esac.esa.int
    Updated Apr 29, 1994
    Cite
    European Space Agency (1994). SCIENTIFIC MINI OFAD CYCLE 4 CALIBRATION [Dataset]. http://doi.org/10.57780/esa-aqwpfqs
    Explore at:
    Available download formats: FITS (https://www.iana.org/assignments/media-types/application/fits)
    Dataset updated
    Apr 29, 1994
    Dataset authored and provided by
    European Space Agency (http://www.esa.int/)
    Time period covered
    Apr 27, 1994
    Description
  15. Data from: Bank loan dataset

    • kaggle.com
    Updated Sep 21, 2022
    Cite
    Mini Pathak (2022). Bank loan dataset [Dataset]. https://www.kaggle.com/datasets/minipathak/bank-loan-dataset
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Sep 21, 2022
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Mini Pathak
    Description

    Dataset

    This dataset was created by Mini Pathak

    Contents

