34 datasets found
  1. Books called D3.js 4.x data visualization : learn to visualize your data...

    • workwithdata.com
    Updated Mar 3, 2003
    + more versions
    Cite
    Work With Data (2003). Books called D3.js 4.x data visualization : learn to visualize your data with JavaScript [Dataset]. https://www.workwithdata.com/datasets/books?f=1&fcol0=book&fop0=%3D&fval0=D3.js+4.x+data+visualization+%3A+learn+to+visualize+your+data+with+JavaScript
    Dataset updated
    Mar 3, 2003
    Dataset authored and provided by
    Work With Data
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset is about books and is filtered where the book is D3.js 4.x data visualization : learn to visualize your data with JavaScript, featuring 7 columns including author, BNB id, book, book publisher, and ISBN. The preview is ordered by publication date (descending).

  2. Data_Sheet_1_“R” U ready?: a case study using R to analyze changes in gene...

    • frontiersin.figshare.com
    docx
    Updated Mar 22, 2024
    + more versions
    Cite
    Amy E. Pomeroy; Andrea Bixler; Stefanie H. Chen; Jennifer E. Kerr; Todd D. Levine; Elizabeth F. Ryder (2024). Data_Sheet_1_“R” U ready?: a case study using R to analyze changes in gene expression during evolution.docx [Dataset]. http://doi.org/10.3389/feduc.2024.1379910.s001
    Explore at: docx (available download formats)
    Dataset updated
    Mar 22, 2024
    Dataset provided by
    Frontiers
    Authors
    Amy E. Pomeroy; Andrea Bixler; Stefanie H. Chen; Jennifer E. Kerr; Todd D. Levine; Elizabeth F. Ryder
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    As high-throughput methods become more common, training undergraduates to analyze data must include having them generate informative summaries of large datasets. This flexible case study provides an opportunity for undergraduate students to become familiar with the capabilities of R programming in the context of high-throughput evolutionary data collected using macroarrays. The story line introduces a recent graduate hired at a biotech firm and tasked with analysis and visualization of changes in gene expression from 20,000 generations of the Lenski Lab’s Long-Term Evolution Experiment (LTEE). Our main character is not familiar with R and is guided by a coworker to learn about this platform. Initially this involves a step-by-step analysis of the small Iris dataset built into R which includes sepal and petal length of three species of irises. Practice calculating summary statistics and correlations, and making histograms and scatter plots, prepares the protagonist to perform similar analyses with the LTEE dataset. In the LTEE module, students analyze gene expression data from the long-term evolutionary experiments, developing their skills in manipulating and interpreting large scientific datasets through visualizations and statistical analysis. Prerequisite knowledge is basic statistics, the Central Dogma, and basic evolutionary principles. The Iris module provides hands-on experience using R programming to explore and visualize a simple dataset; it can be used independently as an introduction to R for biological data or skipped if students already have some experience with R. Both modules emphasize understanding the utility of R, rather than creation of original code. Pilot testing showed the case study was well-received by students and faculty, who described it as a clear introduction to R and appreciated the value of R for visualizing and analyzing large datasets.

  3. How Python Can Work For You

    • code-deegsnccu.hub.arcgis.com
    • cope-open-data-deegsnccu.hub.arcgis.com
    Updated Aug 26, 2023
    Cite
    East Carolina University (2023). How Python Can Work For You [Dataset]. https://code-deegsnccu.hub.arcgis.com/items/6d5c27fa87564d52b0b753d4a3168ef1
    Dataset updated
    Aug 26, 2023
    Dataset authored and provided by
    East Carolina University
    Description

    Python is a free programming language that prioritizes human readability and general-purpose application. It is one of the easier languages to learn, especially with no prior programming knowledge. I have been using Python for Excel spreadsheet automation, data analysis, and data visualization, which has let me focus on automating my data analysis workload. I am currently examining the North Carolina Department of Environmental Quality (NCDEQ) water quality sampling database for the Town of Nags Head, NC. It spans 26 years (1997-2023) and currently lists 41 different testing site locations. As shown at the bottom of image 2 below, there are 148,204 testing data points for the entirety of the NCDEQ testing for the state; of this large dataset, 34,759 data points are from Dare County (Nags Head) specifically, subdivided into testing sites.
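The subsetting step described here (148,204 statewide records narrowed to the 34,759 Dare County points) is a one-liner in Python. A sketch with hypothetical records; the field names are assumptions, not NCDEQ's actual schema:

```python
# Hypothetical records mimicking the NCDEQ sampling table described above;
# "site", "county", and "enterococci_mpn" are made-up field names.
samples = [
    {"site": "NH-01", "county": "Dare",    "enterococci_mpn": 8.0},
    {"site": "NH-02", "county": "Dare",    "enterococci_mpn": 52.0},
    {"site": "WS-11", "county": "Forsyth", "enterococci_mpn": 3.0},
]

# Subset to Dare County, as the description does for Nags Head
dare = [s for s in samples if s["county"] == "Dare"]
avg = sum(s["enterococci_mpn"] for s in dare) / len(dare)
print(f"{len(dare)} Dare County samples, mean {avg:.1f} MPN/100 mL")
```

With the real dataset this filter would more likely be a pandas `DataFrame` boolean mask, but the logic is the same.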

  4. High Interactivity Visualization Software for Large Computational Data Sets,...

    • data.nasa.gov
    application/rdfxml +5
    Updated Jun 26, 2018
    Cite
    (2018). High Interactivity Visualization Software for Large Computational Data Sets, Phase II [Dataset]. https://data.nasa.gov/dataset/High-Interactivity-Visualization-Software-for-Larg/ttzp-wtjx
    Explore at: application/rdfxml, xml, csv, application/rssxml, tsv, json (available download formats)
    Dataset updated
    Jun 26, 2018
    License

    U.S. Government Works: https://www.usa.gov/government-works
    License information was derived automatically

    Description

    Existing scientific visualization tools have specific limitations for large-scale scientific data sets. Of these, four limitations can be seen as paramount: (i) memory management, (ii) remote visualization, (iii) interactivity, and (iv) specificity. In Phase I, we proposed and successfully developed a prototype of a collection of computer tools and libraries called SciViz that overcomes these limitations and enables researchers to visualize large-scale data sets (greater than 200 gigabytes) on HPC resources remotely from their workstations at interactive rates. A key element of our technology is its stack-oriented rather than framework-driven approach, which allows it to interoperate with common existing scientific visualization software, thereby eliminating the need for the user to switch to and learn new software. The result is a versatile 3D visualization capability that will significantly decrease the time to knowledge discovery from large, complex data sets.

    Typical visualization activity can be organized into a simple stack of steps that leads to the visualization result. These steps can broadly be classified into data retrieval, data analysis, visual representation, and rendering. Our approach will be to continue with the technique selected in Phase I of utilizing existing visualization tools at each point in the visualization stack and to develop specific tools that address the core limitations identified and seamlessly integrate them into the visualization stack. Specifically, we intend to complete technical objectives in four areas that will complete the development of visualization tools for interactive visualization of very large data sets in each layer of the visualization stack. These four areas are: Feature Objectives, C++ Conversion and Optimization, Testing Objectives, and Domain Specifics and Integration. The technology will be developed and tested at NASA and the San Diego Supercomputer Center.
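The visualization stack described above (data retrieval, data analysis, visual representation, rendering) can be sketched as a chain of composable stages. This is an illustration of the stack-oriented idea only, not SciViz's actual API; every function name here is hypothetical:

```python
# A toy "visualization stack": each layer is a plain function and the
# stack is their left-to-right composition.
from functools import reduce

def retrieve(source):                 # data retrieval (stub: would load `source`)
    return [1.0, 4.0, 9.0, 16.0]

def analyze(data):                    # data analysis: normalize to [0, 1]
    hi = max(data)
    return [x / hi for x in data]

def represent(data):                  # visual representation: bar lengths
    return ["#" * round(x * 10) for x in data]

def render(bars):                     # rendering: one string per row
    return "\n".join(bars)

def run_stack(source, layers):
    # Feed each layer's output into the next, starting from the source.
    return reduce(lambda value, layer: layer(value), layers, source)

print(run_stack("demo.dat", [retrieve, analyze, represent, render]))
```

Because the layers only agree on their inputs and outputs, any one of them can be swapped for an existing tool, which is the interoperability argument the description makes.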

  5. ArcGIS Pro: Mapping and Visualization

    • hub.arcgis.com
    • arc-gis-hub-home-arcgishub.hub.arcgis.com
    Updated May 3, 2019
    Cite
    State of Delaware (2019). ArcGIS Pro: Mapping and Visualization [Dataset]. https://hub.arcgis.com/documents/delaware::arcgis-pro-mapping-and-visualization/about?path=
    Dataset updated
    May 3, 2019
    Dataset authored and provided by
    State of Delaware
    Description

    Discover how to display and symbolize both 2D and 3D data. Search, access, and create new map symbols. Learn to specify and configure text symbols for your map. Complete your map by creating an effective layout to display and distribute your work.

  6. Alternative Data Market Analysis North America, Europe, APAC, South America,...

    • technavio.com
    Cite
    Technavio, Alternative Data Market Analysis North America, Europe, APAC, South America, Middle East and Africa - US, Canada, China, UK, Mexico, Germany, Japan, India, Italy, France - Size and Forecast 2025-2029 [Dataset]. https://www.technavio.com/report/alternative-data-market-industry-analysis
    Dataset provided by
    TechNavio
    Authors
    Technavio
    Time period covered
    2021 - 2025
    Area covered
    Europe, Canada, France, Germany, United States, United Kingdom, Mexico, Global
    Description


    Alternative Data Market Size 2025-2029

    The alternative data market size is forecast to increase by USD 60.32 billion at a CAGR of 52.5% between 2024 and 2029.

    The market is experiencing significant growth due to the increased availability and diversity of data sources. This trend is driven by the rise of alternative data-driven investment strategies, which offer unique insights and opportunities for businesses and investors. However, challenges persist in the form of issues related to data quality and standardization. Big data analytics and machine learning help businesses gain insights from vast amounts of data, enabling data-driven innovation and competitive advantage. Data governance, data security, and data ethics are crucial aspects of managing alternative data.
    As more data becomes available, ensuring its accuracy and consistency is crucial for effective decision-making. The market analysis report provides an in-depth examination of these factors and their impact on the growth of the market. With the increasing importance of data-driven strategies, staying informed about the latest trends and challenges is essential for businesses looking to remain competitive in today's data-driven economy.
    

    What will be the Size of the Alternative Data Market During the Forecast Period?


    Alternative data, the non-traditional information sourced from various industries and domains, is revolutionizing business landscapes by offering new opportunities for data monetization. This trend is driven by the increasing availability of data from various sources such as credit card transactions, IoT devices, satellite data, social media, and more. Data privacy is a critical consideration in the market. With the increasing focus on data protection regulations, businesses must ensure they comply with stringent data privacy standards. Data storytelling and data-driven financial analysis are essential applications of alternative data, providing valuable insights for businesses to make informed decisions. Data-driven product development and sales prediction are other significant areas where alternative data plays a pivotal role.
    Moreover, data management platforms and analytics tools facilitate data integration, data quality, and data visualization, ensuring data accuracy and consistency. Predictive analytics and data-driven risk management help businesses anticipate trends and mitigate risks. Data enrichment and data-as-a-service are emerging business models that enable businesses to access and utilize alternative data. Economic indicators and data-driven operations are other areas where alternative data is transforming business processes.
    

    How is the Alternative Data Market Segmented?

    The market research report provides comprehensive data (region-wise segment analysis), with forecasts and estimates in 'USD billion' for the period 2025-2029, as well as historical data from 2019-2023 for the following segments.

    Type
    
      Credit and debit card transactions
      Social media
      Mobile application usage
      Web scraped data
      Others
    
    
    End-user
    
      BFSI
      IT and telecommunication
      Retail
      Others
    
    
    Geography
    
      North America
    
        Canada
        Mexico
        US
    
    
      Europe
    
        Germany
        UK
        France
        Italy
    
    
      APAC
    
        China
        India
        Japan
    
    
      South America
    
    
    
      Middle East and Africa
    

    By Type Insights

    The credit and debit card transactions segment is estimated to witness significant growth during the forecast period.
    

    Alternative data derived from credit and debit card transactions offers valuable insights into consumer spending behaviors and lifestyle choices. This data is essential for market analysts, financial institutions, and businesses seeking to enhance their strategies and customer experiences. The two primary categories of card transactions are credit and debit. Credit card transactions provide information on discretionary spending, luxury purchases, and credit management skills. In contrast, debit card transactions reveal essential spending habits, budgeting strategies, and daily expenses. By analyzing this data using advanced methods, businesses can gain a competitive advantage, understand market trends, and cater to consumer needs effectively. IT & telecommunications companies, hedge funds, and other organizations rely on web scraped data, social and sentiment analysis, and public data to supplement their internal data sources. Adhering to GDPR regulations ensures ethical data usage and compliance.
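The credit-vs-debit split described above is, at its simplest, an aggregation by transaction type. A minimal sketch with entirely hypothetical records and field names:

```python
# Hypothetical transactions; the credit (discretionary) vs. debit
# (essential) split mirrors the two categories described above.
transactions = [
    {"kind": "credit", "category": "travel",    "amount": 410.00},
    {"kind": "credit", "category": "dining",    "amount": 86.50},
    {"kind": "debit",  "category": "groceries", "amount": 132.25},
    {"kind": "debit",  "category": "utilities", "amount": 97.40},
]

# Total spend per transaction type
totals = {}
for t in transactions:
    totals[t["kind"]] = totals.get(t["kind"], 0.0) + t["amount"]

for kind, total in sorted(totals.items()):
    print(f"{kind}: {total:.2f}")
```

Real alternative-data pipelines would of course aggregate millions of anonymized records, but the per-type rollup is the same shape.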


    The credit and debit card transactions segment was valued at USD 228.40 million in 2019 and showed a gradual increase during the forecast period.

    Regional Analysis

    North America is estimated to contribute 56% to the growth of the global market during the forecast period.
    


  7. 03.1 Survey123 for ArcGIS: Ask Questions, Get the Facts, Make Decisions

    • training-iowadot.opendata.arcgis.com
    • hub.arcgis.com
    Updated Feb 17, 2017
    Cite
    Iowa Department of Transportation (2017). 03.1 Survey123 for ArcGIS: Ask Questions, Get the Facts, Make Decisions [Dataset]. https://training-iowadot.opendata.arcgis.com/documents/260affb0c46a49ad87741c1b53acb8e2
    Dataset updated
    Feb 17, 2017
    Dataset authored and provided by
    Iowa Department of Transportation
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    In this seminar, you will learn how to create, share, and analyze smart form-based surveys using a lightweight app called Survey123 for ArcGIS. You will discover a three-step approach to deploying surveys that simplifies the data collection experience, facilitates easy data visualization and analysis, and supports informed decision making. This seminar was developed to support the following: Survey123 for ArcGIS

  8. Data_Sheet_2_“R” U ready?: a case study using R to analyze changes in gene...

    • figshare.com
    docx
    Updated Mar 22, 2024
    Cite
    Amy E. Pomeroy; Andrea Bixler; Stefanie H. Chen; Jennifer E. Kerr; Todd D. Levine; Elizabeth F. Ryder (2024). Data_Sheet_2_“R” U ready?: a case study using R to analyze changes in gene expression during evolution.docx [Dataset]. http://doi.org/10.3389/feduc.2024.1379910.s002
    Explore at: docx (available download formats)
    Dataset updated
    Mar 22, 2024
    Dataset provided by
    Frontiers
    Authors
    Amy E. Pomeroy; Andrea Bixler; Stefanie H. Chen; Jennifer E. Kerr; Todd D. Levine; Elizabeth F. Ryder
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    As high-throughput methods become more common, training undergraduates to analyze data must include having them generate informative summaries of large datasets. This flexible case study provides an opportunity for undergraduate students to become familiar with the capabilities of R programming in the context of high-throughput evolutionary data collected using macroarrays. The story line introduces a recent graduate hired at a biotech firm and tasked with analysis and visualization of changes in gene expression from 20,000 generations of the Lenski Lab’s Long-Term Evolution Experiment (LTEE). Our main character is not familiar with R and is guided by a coworker to learn about this platform. Initially this involves a step-by-step analysis of the small Iris dataset built into R which includes sepal and petal length of three species of irises. Practice calculating summary statistics and correlations, and making histograms and scatter plots, prepares the protagonist to perform similar analyses with the LTEE dataset. In the LTEE module, students analyze gene expression data from the long-term evolutionary experiments, developing their skills in manipulating and interpreting large scientific datasets through visualizations and statistical analysis. Prerequisite knowledge is basic statistics, the Central Dogma, and basic evolutionary principles. The Iris module provides hands-on experience using R programming to explore and visualize a simple dataset; it can be used independently as an introduction to R for biological data or skipped if students already have some experience with R. Both modules emphasize understanding the utility of R, rather than creation of original code. Pilot testing showed the case study was well-received by students and faculty, who described it as a clear introduction to R and appreciated the value of R for visualizing and analyzing large datasets.

  9. The nose knows: How tri-trophic interactions and natural history shape bird...

    • qubeshub.org
    Updated May 28, 2019
    + more versions
    Cite
    Pamela Scheffler (2019). The nose knows: How tri-trophic interactions and natural history shape bird foraging behavior. Introduction to data visualization. [Dataset]. http://doi.org/10.25334/Q40F2K
    Dataset updated
    May 28, 2019
    Dataset provided by
    QUBES
    Authors
    Pamela Scheffler
    Description

    Students investigate the role of olfaction and infochemicals in bird foraging behavior. In this module students learn to analyze and present graphic data in ways that simplify interpretation.

  10. Big Data Infrastructure Market Analysis North America, Europe, APAC, South...

    • technavio.com
    Updated Aug 15, 2024
    Cite
    Technavio (2024). Big Data Infrastructure Market Analysis North America, Europe, APAC, South America, Middle East and Africa - US, China, UK, Germany, Canada - Size and Forecast 2024-2028 [Dataset]. https://www.technavio.com/report/big-data-infrastructure-market-analysis
    Dataset updated
    Aug 15, 2024
    Dataset provided by
    TechNavio
    Authors
    Technavio
    Time period covered
    2021 - 2025
    Area covered
    Global, United States
    Description


    Big Data Infrastructure Market Size 2024-2028

    The big data infrastructure market size is forecast to increase by USD 1.12 billion, at a CAGR of 5.72% between 2023 and 2028. The growth of the market depends on several factors, including increasing data generation, increasing demand for data-driven decision-making across organizations, and rapid expansion in the deployment of big data infrastructure by SMEs. Big data infrastructure refers to the systems and technologies used to collect, process, analyze, and store large amounts of data. It is important because it helps organizations capture and use insights from large datasets that would otherwise be inaccessible.

    What will be the Size of the Market During the Forecast Period?


    Market Dynamics

    In the dynamic landscape of big data infrastructure, cluster design, and concurrent processing are pivotal for handling vast amounts of data created daily. Organizations rely on technology roadmaps to navigate through the evolving landscape, leveraging data processing engines and cloud-native technologies. Specialized tools and user-friendly interfaces enhance accessibility and efficiency, while integrated analytics and business intelligence solutions unlock valuable insights. The market landscape depends on the Organization Size, Data creation, and Technology roadmap. Emerging technologies like quantum computing and blockchain are driving innovation, while augmented reality and virtual reality offer great experiences. However, assumptions and fragmented data landscapes can lead to bottlenecks, performance degradation, and operational inefficiencies, highlighting the need for infrastructure solutions to overcome these challenges and ensure seamless data management and processing. Also, the market is driven by solutions like IBM Db2 Big SQL and the Internet of Things (IoT). Key elements include component (solution and services), decentralized solutions, and data storage policies, aligning with client requirements and resource allocation strategies.

    Key Market Driver

    Increasing data generation is notably driving market growth. The market plays a pivotal role in enabling businesses and organizations to manage and derive insights from the massive volumes of structured and unstructured data generated daily. This data, characterized by its high volume, velocity, and variety, is collected from diverse sources, including transactions, social media activities, and Machine-to-Machine (M2M) data. The data can be of various types, such as texts, images, audio, and structured data. Big Data Infrastructure solutions facilitate advanced analytics, business intelligence, and customer insights, powering digital transformation initiatives across industries. Solutions like Azure Databricks and SAP Analytics Cloud offer real-time processing capabilities, advanced machine learning algorithms, and data visualization tools.

    Digital Solutions, including telecommunications, social media platforms, and e-commerce, are major contributors to the data generation. Large Enterprises and Small & Medium Enterprises (SMEs) alike are adopting these solutions to gain a competitive edge, improve operational efficiency, and make data-driven decisions. The implementation of these technologies also addresses security concerns and cybersecurity risks, ensuring data privacy and protection. Advanced analytics, risk management, precision farming, virtual assistants, and smart city development are some of the industry sectors that significantly benefit from Big Data Infrastructure. Blockchain technology and decentralized solutions are emerging trends in the market, offering decentralized data storage and secure data sharing. The financial sector, IT, and the digital revolution are also major contributors to the growth of the market. Scalability, query languages, and data valuation are essential factors in selecting the right Big Data Infrastructure solution. Use cases include fraud detection, real-time processing, and industry-specific applications. The market is expected to continue growing as businesses increasingly rely on data for decision-making and digital strategies. Thus, such factors are driving the growth of the market during the forecast period.

    Significant Market Trends

    Increasing use of data analytics in various sectors is the key trend in the market. In today's digital transformation era, Big Data Infrastructure plays a pivotal role in enabling businesses to derive valuable insights from vast amounts of data. Large Enterprises and Small & Medium Enterprises alike are adopting advanced analytical tools, including Azure Databricks, SAP Analytics Cloud, and others, to gain customer insights, improve operational efficiency, and enhance business intelligence. These tools facilitate the use of Artificial Intelligence (AI) and Machine Learning (ML) algorithms for predictive ana

  11. Understanding the Influence of Parameter Value Uncertainty on Climate Model...

    • dataone.org
    • search.dataone.org
    • +1more
    Updated May 31, 2024
    Cite
    Sofia Ingersoll; Heather Childers; Sujan Bhattarai (2024). Understanding the Influence of Parameter Value Uncertainty on Climate Model Output: Developing an Interactive Web Dashboard [Dataset]. http://doi.org/10.5061/dryad.vq83bk422
    Dataset updated
    May 31, 2024
    Dataset provided by
    Dryad Digital Repository
    Authors
    Sofia Ingersoll; Heather Childers; Sujan Bhattarai
    Description

    Scientists at the National Center for Atmospheric Research have recently carried out several experiments to better understand the uncertainties associated with future climate projections. In particular, the NCAR Climate and Global Dynamics Lab (CGDL) working group has completed a large Parameter Perturbation Experiment (PPE) utilizing the Community Land Model (CLM), testing the effects of 32 parameters over thousands of simulations spanning 250 years. The CLM model experiment is focused on understanding uncertainty around biogeophysical parameters that influence the balance of chemical cycling and sequestration variables. The current website for displaying model results is not intuitive or informative to the broader scientific audience or the general public. The goal of this project is to develop an improved data visualization dashboard for communicating the results of the CLM PPE. The interactive dashboard would provide an interface where new or experienced users can query the e...

    Data Source:

    University of California, Santa Barbara – Climate and Global Dynamics Lab, National Center for Atmospheric Research: Parameter Perturbation Experiment (CGD NCAR PPE-5). https://webext.cgd.ucar.edu/I2000/PPEn11_OAAT/ (Only public version of the data currently accessible. Data leveraged in this project is currently stored on the NCAR server and is not publicly available), https://www.cgd.ucar.edu/events/seminar/2023/katie-dagon-and-daniel-kennedy-132940 (Learn more about this complex data via this presentation by Katie Dagon & Daniel Kennedy.) The Parameter Perturbation Experiment data leveraged by our project was generated utilizing the Community Land Model v5 (CLM5) predictions. https://www.earthsystemgrid.org/dataset/ucar.cgd.ccsm4.CLM_LAND_ONLY.html

    Data Processing: We were working inside NCAR’s Casper cluster HPC server, which gave us direct access to the raw data files. We created a script to read in 500 LHC PPE simulations as a data set with input...

    # Understanding the Influence of Parameter Value Uncertainty on Climate Model Output: Developing an Interactive Dashboard

    This README.md file was generated on 2024-05-22 by SOFIA INGERSOLL

    https://doi.org/10.5061/dryad.vq83bk422

    GENERAL INFORMATION

    1. Title of the Project: Understanding the Influence of Parameter Value Uncertainty on Climate Model Output: Developing an Interactive Dashboard

    2. Author Information: Sofia Ingersoll

    A. Principal Investigator Contact Information

    Name: Sofia Ingersoll

    Institution: Bren School of Environmental Science & Management

    Email: singersoll@bren.edu

    B. Associate or Co-investigator Contact Information

    Name: Heather Childers

    Institution: Bren School of Environmental Science & Management

    Email: hmchilders@bren.edu

    C. Alternate Contact Information

    Name: Sofia Ingersoll

    Institution: Bren School of Environmental Science & M...

  12. Searching for Abrupt Climate Change Precursors Using Ultra High Resolution...

    • arcticdata.io
    • catalog.northslopescience.org
    Updated Jul 17, 2020
    + more versions
    Cite
    Andrei Kurbatov (2020). Searching for Abrupt Climate Change Precursors Using Ultra High Resolution Ice Core Analysis [Dataset]. http://doi.org/10.18739/A2CC0TT6Z
    Dataset updated
    Jul 17, 2020
    Dataset provided by
    Arctic Data Center
    Authors
    Andrei Kurbatov
    Time period covered
    Sep 1, 2012 - Aug 31, 2015
    Area covered
    Description

    The project will undertake an ultra high resolution, multi-parameter investigation of past climate to develop predictors for future abrupt climate change through identification of the "tipping points" which occur prior to abrupt climate change events. Ultra-high resolution (up to 4 micron) ice core records will be produced -- well beyond the ~1cm resolution currently available -- using continuous flow injection into both a Thermo Element 2 ICP-MS and a Picarro Inc. L-2130-I H2O isotope analyzer sampled using a newly developed state-of-the-art cryocell chamber integrated with digital image recording and annotating software. The PI will look for precursors that reveal evidence of changes in, for example, the timing and magnitude of temperature, precipitation and atmospheric circulation, from multi-decadal resolution down to hundreds of sampling levels per year. The project will train students to be well versed in advanced laboratory and numerical modeling methods. Students will learn data reduction and visualization algorithms while also helping to provide a data stream that has broad applicability to climate. Cyberinfrastructure developed under NSF funding to the Climate Change Institute will be used to store, process, and visualize this high volume of ice core data while also making the information available to the public with high transparency for climate change use and attribution.

  13. Enterprise Data Warehouse (Edw) Market Analysis North America, Europe, APAC,...

    • technavio.com
    Updated Oct 1, 2002
    Cite
    Technavio (2002). Enterprise Data Warehouse (Edw) Market Analysis North America, Europe, APAC, Middle East and Africa, South America - US, China, UK, India, Germany - Size and Forecast 2024-2028 [Dataset]. https://www.technavio.com/report/enterprise-data-warehouse-market-industry-analysis
    Dataset updated
    Oct 1, 2002
    Dataset provided by
    TechNavio
    Authors
    Technavio
    Time period covered
    2021 - 2025
    Area covered
    Global, United States
    Description

    Snapshot img

    Enterprise Data Warehouse Market Size 2024-2028

    The enterprise data warehouse market size is forecast to increase by USD 39.24 billion, at a CAGR of 30.08% between 2023 and 2028. The market is experiencing significant growth due to the data explosion across various industries. With the increasing volume, velocity, and variety of data, businesses are investing heavily in EDW solutions and data warehousing to gain insights and make informed decisions. A key growth driver is the spotlight on innovative solution launches, designed with cutting-edge features and functionalities to keep pace with the ever-evolving demands of modern businesses.

    However, concerns related to data security continue to pose a challenge in the market. With the increasing amount of sensitive data being stored in EDWs, ensuring its security has become a top priority for organizations. Despite these challenges, the market is expected to grow at a strong pace, driven by the need for efficient data management and analysis.

    What will be the Size of the Enterprise Data Warehouse Market During the Forecast Period?

    To learn more about the EDW market report, Request Free Sample

    An enterprise data warehouse (EDW) is a centralized, large-scale database designed to collect, store, and manage an organization's valuable business information from multiple sources. The EDW acts as the 'brain' of an organization, processing and integrating data from various physical recordings, flat files, and real-time data sources. Data engineering plays a crucial role in the EDW, responsible for data ingestion, cleaning, and digital transformation. Business units across the organization rely on Business Intelligence (BI) tools like Tableau, PowerBI, Qlik, and data visualization tools to extract insights from the EDW. The EDW is a collection of databases, including Teradata, Netezza, Exadata, Amazon Redshift, and Google BigQuery, which serve as the backbone for data-driven decision-making.

    Moreover, the cloud has significantly impacted the EDW market, enabling cost-effective and scalable solutions for businesses of all sizes. BI tools and data visualization tools enable departments to access and analyze data, improving operational efficiency and driving innovation. The EDW market continues to grow, with organizations recognizing the importance of a centralized, integrated data platform for managing their valuable assets.

    Enterprise Data Warehouse Market Segmentation

    The enterprise data warehouse market research report provides comprehensive data (region-wise segment analysis), with forecasts and estimates in 'USD Billion' for the period 2024-2028, as well as historical data from 2018 - 2022 for the following segments.

    Product Type

      Information and analytical processing
      Data mining

    Deployment

      Cloud based
      On-premises

    Geography

      North America (US)
      Europe (Germany, UK)
      APAC (China, India)
      Middle East and Africa
      South America

    By Product Type

    The information and analytical processing segment is estimated to witness significant growth during the forecast period. The market is witnessing significant growth due to the increasing data requirements of various industries such as IT, BFSI, education, healthcare, and retail. The primary function of an EDW system is to extract, transform, and load data from source systems into a central repository for data integration and analysis. This process enables businesses to gain timely insights and make informed decisions based on historical data and real-time analytics. EDW systems are designed to be scalable to cater to the data processing needs of the largest organizations. The use of Extract, Transform, Load (ETL) and Extract, Load, Transform (ELT) processes in data warehousing has become a popular trend to address processing bottlenecks and ensure Service Level Agreements (SLAs) are met.

    Furthermore, business users increasingly rely on these systems for business intelligence and data analytics. Big Data technologies like Hadoop MapReduce and Apache Spark are being integrated with ETL tools to enable the processing of large volumes of data. Precisely, as a pioneer in data integration, offers solutions that cater to the needs of various business teams and departments. Data visualization tools like Tableau, PowerBI, Qlik, Teradata, Netezza, Exadata, Amazon Redshift, Google BigQuery, Snowflake, and Data virtualization are being used to gain insights from the data in the EDW. The history of transactions and multiple users accessing the data make the need for data warehousing more critical than ever.
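    The extract-transform-load pattern described above can be sketched in a few lines. This is a minimal, hedged illustration only: the table name, column names, and sample values are hypothetical, and SQLite stands in for a real warehouse engine.

    ```python
    # ETL sketch: extract rows from a CSV source, transform them (cast types,
    # derive a field), and load them into a warehouse table. All names and
    # values are made up for illustration; SQLite stands in for the warehouse.
    import csv
    import io
    import sqlite3

    # Stand-in for an extract from a source system
    source = io.StringIO("order_id,amount\n1,10.50\n2,7.25\n")

    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE fact_orders (order_id INTEGER, amount REAL, amount_cents INTEGER)"
    )

    rows = []
    for rec in csv.DictReader(source):          # Extract: read source records
        amount = float(rec["amount"])           # Transform: cast types
        rows.append((int(rec["order_id"]), amount, round(amount * 100)))

    conn.executemany("INSERT INTO fact_orders VALUES (?, ?, ?)", rows)  # Load
    conn.commit()

    total = conn.execute("SELECT SUM(amount) FROM fact_orders").fetchone()[0]
    print(total)  # → 17.75
    ```

    An ELT variant would instead load the raw rows first and push the type casting and derivation into SQL executed inside the warehouse.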

    Get a glance at the market share of various segments. Request Free Sample

    The information and analytical processing segment was valued at USD 3.65 billion in 2018 and showed a gradual increase during the forecast period.

    Regional Insights

    APAC is estimated to contribute 32% to the growth.

  14. "glue-ing together the Universe"

    • search.dataone.org
    • dataverse.harvard.edu
    Updated Nov 22, 2023
    Cite
    Goodman, Alyssa (2023). "glue-ing together the Universe" [Dataset]. http://doi.org/10.7910/DVN/59NPHX
    Explore at:
    Dataset updated
    Nov 22, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    Goodman, Alyssa
    Description

    Presentation Date: Friday, March 1, 2019
    Location: Visual Communication Symposium, Rice University, Houston, TX
    Abstract: Astronomy has long been a field reliant on visualization. First, it was literal visualization—looking at the Sky. Today, though, astronomers are faced with the daunting task of understanding gigantic digital images from across the electromagnetic spectrum and contextualizing them with hugely complex physics simulations, in order to make more sense of our Universe. In this talk, I will explain how new approaches to simultaneously exploring and explaining vast data sets allow astronomers—and other scientists—to make sense of what the data have to say, and to communicate what they learn, to each other and to the public. I will focus on the multi-dimensional linked-view data visualization environment known as “glue” (glueviz.org), explaining how it is being used in astronomy, medical imaging, and geographic information sciences. I will discuss its future potential to expand into all fields where diverse but related multi-dimensional data sets can be profitably analyzed together. Toward the aim of bringing the fruits of visualization to a broader audience, I will also introduce the new “10 Questions to Ask When Creating a Visualization” website, 10QViz.org.
    Full program downloadable from: https://vcs.rice.edu/sites/g/files/bxs2036/f/VCS%202019%20program%20booklet.pdf

  15. Dataset visualization service: Areas covered by fire - Year 2020 sc. 1:10000...

    • gimi9.com
    Cite
    Dataset visualization service: Areas covered by fire - Year 2020 sc. 1:10000 | gimi9.com [Dataset]. https://gimi9.com/dataset/eu_r_liguri-d-2563-vs
    Explore at:
    Description

    Acquisition via the Mountain Information System, GPS surveys, and digitization on technical and/or cadastral maps. The areas covered by fire refer to events that occurred in 2020. They were surveyed by the "Liguria" Carabinieri Forestry Region Command and acquired by the offices of the same Command. The data are not valid pursuant to paragraph 2, Article 10 of Law 353/2000. To learn more, you can also consult Areas covered by fire - Year 1996/2002 sc. 1:10000 and Areas covered by fire - Year 2003/2019 sc. 1:10000. The data may be subject to further verification - Coverage: entire regional territory - Scale 1:10000 - Projection System: Gauss Boaga - West Cast

  16. PM/IDE - An Integrated Development Environment for Planning Models, Phase II...

    • data.nasa.gov
    • catalog.data.gov
    application/rdfxml +5
    Updated Jun 26, 2018
    Cite
    (2018). PM/IDE - An Integrated Development Environment for Planning Models, Phase II [Dataset]. https://data.nasa.gov/dataset/PM-IDE-An-Integrated-Development-Environment-for-P/frip-3wx6
    Explore at:
    Available download formats: application/rdfxml, application/rssxml, json, tsv, xml, csv
    Dataset updated
    Jun 26, 2018
    License

    U.S. Government Works (https://www.usa.gov/government-works)
    License information was derived automatically

    Description

    We propose to develop a planning model integrated development environment (PM/IDE) that will help people construct, review, understand, test, and debug high-quality planning domain models expressed in the Action Notation Modeling Language (ANML) more quickly and effectively. PM/IDE will enable novice modelers to review and understand models more expediently, so they can learn modeling techniques more efficiently. PM/IDE also will enable experienced modelers to review the models of others more quickly, so they can share modeling techniques and best practices. Interactive graphical displays will enable modelers to describe planning domain models under construction and the plans they can generate to domain experts in order to facilitate more efficient knowledge elicitation and model review. Without PM/IDE, ANML modeling will remain a tedious and difficult task that can be carried out only by the small number of people who have the necessary specialized skills and patience. This, in turn, will severely limit the use of ANML-based automated planning systems.

    During Phase I, we characterized the planning domain modeling task to identify the types of analyses and decisions that modelers carry out and the kinds of information they review and assess. Based on this understanding of the task, we designed and prototyped PM/IDE capabilities and user-system interactions that help people develop ANML models. During Phase 2, we propose to develop a TRL 7 version of PM/IDE. Our design approach draws upon our experience using a top-down, decision-centered software requirements and design process to develop data visualization and decision support systems.

  17. wider_face

    • tensorflow.org
    • opendatalab.com
    Updated Dec 6, 2022
    Cite
    (2022). wider_face [Dataset]. https://www.tensorflow.org/datasets/catalog/wider_face
    Explore at:
    Dataset updated
    Dec 6, 2022
    Description

    The WIDER FACE dataset is a face detection benchmark dataset whose images are selected from the publicly available WIDER dataset. We choose 32,203 images and label 393,703 faces with a high degree of variability in scale, pose, and occlusion, as depicted in the sample images. The WIDER FACE dataset is organized into 61 event classes. For each event class, we randomly select 40%/10%/50% of the data as training, validation, and testing sets. We adopt the same evaluation metric employed in the PASCAL VOC dataset. As with the MALF and Caltech datasets, we do not release bounding box ground truth for the test images. Users are required to submit final prediction files, which we then evaluate.

    To use this dataset:

    import tensorflow_datasets as tfds
    
    ds = tfds.load('wider_face', split='train')
    for ex in ds.take(4):
      print(ex)
    

    See the guide for more information on tensorflow_datasets.
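    The 40%/10%/50% per-event-class split described above can be sketched as follows. This is a hedged illustration with made-up image ids; the actual dataset ships with fixed official splits, so you would not re-split it yourself.

    ```python
    # Sketch of a 40%/10%/50% train/val/test split performed independently
    # within each event class, as described in the dataset summary.
    import random

    def split_by_event(images, seed=0):
        """images: dict mapping event class -> list of image ids."""
        rng = random.Random(seed)
        train, val, test = [], [], []
        for event, ids in images.items():
            ids = ids[:]                      # avoid mutating the caller's list
            rng.shuffle(ids)
            n_train = int(len(ids) * 0.4)
            n_val = int(len(ids) * 0.1)
            train += ids[:n_train]
            val += ids[n_train:n_train + n_val]
            test += ids[n_train + n_val:]
        return train, val, test

    # Hypothetical event class with ten images
    train, val, test = split_by_event({"parade": [f"p{i}" for i in range(10)]})
    print(len(train), len(val), len(test))  # → 4 1 5
    ```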

    Visualization: https://storage.googleapis.com/tfds-data/visualization/fig/wider_face-0.1.0.png

  18. Global New Vehicle Registrations Dataset | S&P Global Marketplace

    • marketplace.spglobal.com
    Cite
    S&P Global, Global New Vehicle Registrations Dataset | S&P Global Marketplace [Dataset]. https://www.marketplace.spglobal.com/ja/datasets/global-new-vehicle-registrations-(248)
    Explore at:
    Dataset authored and provided by
    S&P Global (https://www.spglobal.com/)
    Description

    Use the Global New Vehicle Registrations dataset for monthly monitoring and analysis of new vehicle registrations worldwide.

  19. Pergola: boosting visualization and analysis of longitudinal data by...

    • data.niaid.nih.gov
    Updated Jan 24, 2020
    Cite
    Notredame, Cedric (2020). Pergola: boosting visualization and analysis of longitudinal data by unlocking genomic analysis tools - Mouse feeding behavior dataset [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_1154827
    Explore at:
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Erb, Ionas
    Notredame, Cedric
    Espinosa-Carrasco, Jose
    Dierssen, Mara
    License

    https://www.gnu.org/licenses/old-licenses/gpl-2.0-standalone.html

    Description

    The dataset contains feeding and drinking behavioral recordings of C57BL6/J male mice. Mice were distributed into 2 groups (9 control mice and 8 high-fat diet mice) and tracked individually in Phecomp cages for 9 weeks. During the first experimental week all animals were given ad libitum access to a standard chow (habituation phase). After this first week, control mice continued with the same diet regime, while high-fat mice were exclusively given ad libitum access to a high-fat chow. The data were originally used in the publication doi:10.1111/adb.12595.

    The dataset consists of:

    • a "mouse_recordings" folder containing a CSV file with the mouse recordings;

    • a "mappings" folder containing all the mappings used by Pergola in the pipeline to convert data;

    • a "phases" folder containing a CSV file with the experimental phases;

    • a "chromHMM_files" folder containing a cellmarkfiletable table used by chromHMM to learn an HMM.

  20. Data from: Macrosystems EDDIE Module 8: Using Ecological Forecasts to Guide...

    • portal.edirepository.org
    zip
    Updated Sep 22, 2022
    Cite
    Whitney Woelmer; R. Thomas; Tadhg Moore; Cayelan Carey (2022). Macrosystems EDDIE Module 8: Using Ecological Forecasts to Guide Decision-Making (Instructor Materials) [Dataset]. http://doi.org/10.6073/pasta/ad8adb1329f2a75bdd522fd22f2cb201
    Explore at:
    Available download formats: zip (3939643 bytes)
    Dataset updated
    Sep 22, 2022
    Dataset provided by
    EDI
    Authors
    Whitney Woelmer; R. Thomas; Tadhg Moore; Cayelan Carey
    Time period covered
    Jan 23, 2022 - Sep 22, 2022
    Description

    Because of increased variability in populations, communities, and ecosystems due to land use and climate change, there is a pressing need to know the future state of ecological systems across space and time. Ecological forecasting is an emerging approach which provides an estimate of the future state of an ecological system with uncertainty, allowing society to preemptively prepare for fluctuations in important ecosystem services. However, forecasts must be effectively designed and communicated to those who need them to make decisions in order to realize their potential for protecting natural resources. In this module, students will explore real ecological forecast visualizations, identify ways to represent uncertainty, make management decisions using forecast visualizations, and learn decision support techniques. Lastly, students customize a forecast visualization for a specific stakeholder's decision needs. The overarching goal of this module is for students to understand how forecasts are connected to decision-making of stakeholders, or the managers, policy-makers, and other members of society who use forecasts to inform decision-making. The A-B-C structure of this module makes it flexible and adaptable to a range of student levels and course structures. This EDI data package contains instructional materials and the files necessary to teach the module. Readers are referred to the Zenodo data package (Woelmer et al. 2022; DOI: 10.5281/zenodo.7074674) for the R Shiny application code needed to run the module locally.
