100+ datasets found
  1. Input data and results of WRTDS models and seasonal rank-sum tests to determine trends in the quality of water in New Jersey streams, water years 1971-2011

    • data.usgs.gov
    • search.dataone.org
    • +4more
    Updated Aug 11, 2022
    Cite
    R. Hickman (2022). Input data and results of WRTDS models and seasonal rank-sum tests to determine trends in the quality of water in New Jersey streams, water years 1971-2011 [Dataset]. http://doi.org/10.5066/F7NS0RZ3
    Explore at:
    Dataset updated
    Aug 11, 2022
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Authors
    R. Hickman
    License

    U.S. Government Works: https://www.usa.gov/government-works
    License information was derived automatically

    Time period covered
    Oct 1, 1970 - Sep 30, 2011
    Area covered
    New Jersey
    Description

    This USGS data release represents the input data used to identify trends in New Jersey streams, water years 1971-2011, and the results of Weighted Regression on Time, Discharge, and Season (WRTDS) models and seasonal rank-sum tests. The data set consists of CSV tables and Excel workbooks of:

    • trends_InputData_NJ_1971_2011: Reviewed water-quality values and qualifiers at selected stream stations in New Jersey over water years 1971-2011
    • trends_WRTDS_AnnualValues_NJ_1971_2011: Annual concentrations and fluxes for each water-quality characteristic at each station from WRTDS models
    • trends_WRTDS_Changes_NJ_1971_2011: Changes and trends in flow-normalized concentrations and fluxes determined from WRTDS models
    • trends_SeasonalRankSum_results_NJ_1971_2011: Results of seasonal rank-sum tests to identify step trends between concentrations in the 1970s, 1980s, 1990s, and 2000s at selected stations on streams in New Jersey

    These data support the following publication: Hickman, R.E. ...
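
    As an illustration of the kind of seasonal rank-sum comparison described above, the sketch below runs a Wilcoxon rank-sum (Mann-Whitney U) test season by season between two decades of concentrations. The file name and columns ("date", "concentration") are placeholders, not the actual layout of the USGS tables, and the WRTDS modeling itself is not reproduced here.

```python
# Hedged sketch of a season-by-season rank-sum (Mann-Whitney U) comparison
# between two decades of concentrations. File and column names are placeholders.
import pandas as pd
from scipy.stats import mannwhitneyu

wq = pd.read_csv("trends_InputData_NJ_1971_2011.csv", parse_dates=["date"])
wq["season"] = wq["date"].dt.month % 12 // 3          # 0=DJF, 1=MAM, 2=JJA, 3=SON
wq["decade"] = (wq["date"].dt.year // 10) * 10

for season, grp in wq.groupby("season"):
    a = grp.loc[grp["decade"] == 1970, "concentration"]
    b = grp.loc[grp["decade"] == 2000, "concentration"]
    if len(a) and len(b):
        stat, p = mannwhitneyu(a, b, alternative="two-sided")
        print(f"season {season}: U = {stat:.1f}, p = {p:.3f}")
```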

  2. Global Deforestation Trends and Hotspots

    • hub.arcgis.com
    Updated Apr 17, 2020
    Cite
    World Wide Fund for Nature (2020). Global Deforestation Trends and Hotspots [Dataset]. https://hub.arcgis.com/maps/28ccef7736f0400ba348b831e86052ac
    Explore at:
    Dataset updated
    Apr 17, 2020
    Dataset authored and provided by
    World Wide Fund for Nature
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Area covered
    South Pacific Ocean, Pacific Ocean
    Description

    WWF developed a global analysis of the world's most important deforestation areas, or deforestation fronts, in 2015. This assessment was revised in 2020 as part of the WWF Deforestation Fronts Report.

    Emerging hotspots analysis. The goal of this analysis was to assess the presence of deforestation fronts: areas where deforestation is significantly increasing and is threatening remaining forests. We selected the emerging hotspots analysis to assess spatio-temporal trends of deforestation in the pan-tropics.

    Spatial unit. We selected hexagons as the spatial unit for the hotspots analysis for several reasons: they have a low perimeter-to-area ratio, straightforward neighbor relationships, and reduced distortion due to the curvature of the earth. For the hexagon size we decided on a unit of 1,000 ha; given the 250 m resolution of the deforestation data, this meant that several deforestation events could be aggregated inside a unit over time. Hexagons close to or equal to the size of a single deforestation event would allow only one event before the forest is gone, limiting statistical analysis. We processed over 13 million hexagons for this analysis and limited the emerging hotspots analysis to hexagons with at least 15% forest cover remaining (from the all-evidence forest map). This prevented including hotspots in agricultural areas or areas where all forest has been converted.

    Outputs. This analysis uses the Getis-Ord and Mann-Kendall statistics to identify spatial clusters of deforestation which have a non-parametric significant trend across a time series. The spatial clusters are defined by the spatial unit and a temporal neighborhood parameter. We use a neighborhood parameter of 5 km to include spatial neighbors in the hotspots assessment, and time slices for each country as described below. Deforestation events are summarized by spatial unit (the hexagons described above), and the results comprise a trends assessment, which identifies increasing or decreasing deforestation in the units at 3 different confidence intervals (90%, 95%, and 99%), and the spatio-temporal analysis, which classifies areas into unique hot or cold spot categories. Our analysis identified 7 hotspot categories:

    • New: A location that is a statistically significant hotspot only in the final time step.
    • Consecutive: An uninterrupted run of statistically significant hotspots in the final time steps.
    • Intensifying: A statistically significant hotspot for >90% of the bins, including the final time step.
    • Persistent: A statistically significant hotspot for >90% of the bins, with no upward or downward trend in clustering intensity.
    • Diminishing: A statistically significant hotspot for >90% of the time steps, where the clustering is decreasing or the most recent time step is not hot.
    • Sporadic: An on-again, off-again hotspot where <90% of the time-step intervals have been statistically significant hot spots and none have been statistically significant cold spots.
    • Historical: At least 90% of the time-step intervals have been statistically significant hot spots, with the exception of the final time steps.

    For the evaluation of spatio-temporal trends of tropical deforestation we selected the Terra-i deforestation dataset to define the temporal deforestation patterns. Terra-i is a freely available monitoring system derived from the analysis of MODIS (NDVI) and TRMM (rainfall) data, which are used to assess forest cover changes due to anthropic interventions at a 250 m resolution [ref]. It was first developed for Latin American countries in 2012 and then expanded to pan-tropical countries around the world. Terra-i has generated maps of vegetation loss every 16 days since January 2004. This relatively high temporal resolution of twice-monthly observations allows for a more detailed emerging hotspots analysis, increasing the number of time steps or bins available for assessing spatio-temporal patterns relative to annual datasets. Next, the 250 m spatial resolution is more relevant for detecting forest loss than changes in individual tree cover or canopies, and is better adapted to processing trends at large scales. Finally, the added value of the Terra-i algorithm is that it employs an additional neural-network machine-learning step to identify vegetation loss that is due to anthropic causes as opposed to natural events or other causes. Our dataset comprised all Terra-i deforestation events observed between 2004 and 2017.

    Temporal unit. The temporal unit, or time slice, was selected for each country according to the distribution of the data. The deforestation data comprised 16-day periods between 2004 and 2017, for a total of 312 potential observation time periods. These were aggregated into time bins to overcome any seasonality in the detection of deforestation events (due to clouds). The temporal unit is combined with the spatial parameter (i.e., 5 km) to create the space-time bins for the hotspot analysis. For dense time series or countries with many deforestation events (e.g., Brazil), a smaller time slice was used (i.e., 3 months, n=54) with a neighborhood interval of 8 months, meaning that the previous year and the next year together were combined to assess statistical trends. The rule we employed was that the time slice multiplied by the neighborhood interval equaled 24 months, or 2 years, in order to look at general trends over the entire time period and prevent the hotspots analysis from being biased toward short time intervals of a few months.

    Deforestation fronts. Finally, using the trends and hotspots we identify 24 major deforestation fronts: areas of significantly increasing deforestation and the focus of WWF's call for action to slow deforestation.
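
    For readers who want to prototype the temporal half of this workflow, the sketch below applies a Mann-Kendall-style monotonic trend test (Kendall's tau from SciPy) to per-hexagon deforestation counts. It omits the Getis-Ord Gi* spatial clustering and the space-time binning described above, and the file and column names (hexagon_defo_counts.csv, hex_id, time_bin, events) are assumptions, not part of the WWF release.

```python
# Illustrative temporal component of an emerging hotspot analysis: a
# Mann-Kendall-style trend test per hexagon (Getis-Ord Gi* step omitted).
import pandas as pd
from scipy.stats import kendalltau

defo = pd.read_csv("hexagon_defo_counts.csv")   # assumed columns: hex_id, time_bin, events

def mann_kendall(series):
    tau, p = kendalltau(range(len(series)), series)
    return pd.Series({"tau": tau, "p": p})

trends = (defo.sort_values("time_bin")
              .groupby("hex_id")["events"]
              .apply(mann_kendall)
              .unstack())
hot = trends[(trends["tau"] > 0) & (trends["p"] < 0.10)]   # increasing at 90% confidence
print(f"{len(hot)} hexagons with a significant increasing trend")
```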

  3. Introduction to Time Series Analysis for Hydrologic Data

    • hydroshare.org
    • beta.hydroshare.org
    • +1more
    zip
    Updated Jan 29, 2021
    Cite
    Introduction to Time Series Analysis for Hydrologic Data [Dataset]. https://www.hydroshare.org/resource/ee2a4c2151f24115a12e34d4d22d96fe
    Explore at:
    zip (1.1 MB). Available download formats.
    Dataset updated
    Jan 29, 2021
    Dataset provided by
    HydroShare
    Authors
    Gabriela Garcia; Kateri Salk
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Oct 1, 1974 - Jan 27, 2021
    Area covered
    Description

    This lesson was adapted from educational material written by Dr. Kateri Salk for her Fall 2019 Hydrologic Data Analysis course at Duke University. This is the first part of a two-part exercise focusing on time series analysis.

    Introduction

    Time series are a special class of dataset, where a response variable is tracked over time. The frequency of measurement and the timespan of the dataset can vary widely. At its most simple, a time series model includes an explanatory time component and a response variable. Mixed models can include additional explanatory variables (check out the nlme and lme4 R packages). We will be covering a few simple applications of time series analysis in these lessons.

    Opportunities

    Analysis of time series presents several opportunities. In aquatic sciences, some of the most common questions we can answer with time series modeling are:

    • Has there been an increasing or decreasing trend in the response variable over time?
    • Can we forecast conditions in the future?

      Challenges

    Time series datasets come with several caveats, which need to be addressed in order to effectively model the system. A few common challenges that arise (and can occur together within a single dataset) are:

    • Autocorrelation: Data points are not independent from one another (i.e., the measurement at a given time point depends on previous time point(s)).

    • Data gaps: Data are not collected at regular intervals, necessitating interpolation between measurements. There are often gaps between monitoring periods, and many time series analyses require equally spaced points (see the sketch after this list for one way to regularize spacing).

    • Seasonality: Cyclic patterns in variables occur at regular intervals, impeding clear interpretation of a monotonic (unidirectional) trend. For example, summer temperatures are predictably higher than winter temperatures.

    • Heteroscedasticity: The variance of the time series is not constant over time.

    • Covariance: The covariance of the time series is not constant over time. Many time series models assume that the variance and covariance remain constant over time (i.e., no heteroscedasticity).
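
    As referenced in the data-gaps bullet, here is a minimal Python sketch (the lesson itself works in R with packages such as nlme and lme4) of two common preprocessing steps: resampling to equally spaced monthly points with interpolation, and inspecting seasonality with a classical decomposition. The file and column names are placeholders.

```python
# Minimal preprocessing sketch: force equally spaced monthly points, fill gaps
# by linear interpolation, then inspect seasonality with a classical decomposition.
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

ts = (pd.read_csv("discharge.csv", parse_dates=["date"], index_col="date")["discharge"]
        .resample("MS").mean()      # equally spaced monthly values
        .interpolate()              # fill internal gaps
        .dropna())                  # drop any leading/trailing gaps

decomp = seasonal_decompose(ts, model="additive", period=12)   # 12-month cycle
print(decomp.trend.dropna().tail())                            # smoothed trend component
```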

      Learning Objectives

    After successfully completing this notebook, you will be able to:

    1. Choose appropriate time series analyses for trend detection and forecasting

    2. Discuss the influence of seasonality on time series analysis

    3. Interpret and communicate results of time series analyses

  4. Groundwater level and its trend/cluster analysis results in the Murray-Darling Basin

    • data.csiro.au
    • researchdata.edu.au
    Updated Mar 6, 2024
    Cite
    Guobin Fu; Dennis Gonzalez; Stephanie Clark; Rodrigo Rojas; Sreekanth Janardhanan (2024). Groundwater level and its trend/cluster analysis results in the Murray-Darling Basin [Dataset]. http://doi.org/10.25919/6fkm-9a54
    Explore at:
    Dataset updated
    Mar 6, 2024
    Dataset provided by
    CSIRO (http://www.csiro.au/)
    Authors
    Guobin Fu; Dennis Gonzalez; Stephanie Clark; Rodrigo Rojas; Sreekanth Janardhanan
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jan 1, 1971 - Jan 1, 2021
    Area covered
    Dataset funded by
    CSIRO (http://www.csiro.au/)
    Description

    There are three parts in this dataset: 1) Annual time series of groundwater level (in terms of depth to water level, DTW) for 910 groundwater bores in the main alluvial systems in the Murray-Darling Basin (MDB); 2) Trend analysis results with three trend detection methods; and 3) Clustering results of temporal patterns of groundwater levels from both HCA and SOM. Lineage: 1) Bore depth to water table (DTW) data (available at http://www.bom.gov.au/water/groundwater/ngis/) were accessed using the National Groundwater Information System (NGIS) Version 1.7.0, last updated in July 2021. 2) Suspicious observations are very common in groundwater level measurements; a simple data quality control method was used to remove all obvious errors and outliers (https://doi.org/10.3390/w14111808). 3) Three trend analysis methods (the non-parametric MK test, linear regression, and the innovative trend analysis (ITA)) were employed to detect long-term (1971–2021) trends in annual mean DTW (https://doi.org/10.3390/w14111808). 4) The two most popular clustering analysis methods, hierarchical clustering analysis (HCA) and self-organizing maps (SOM), were used to investigate the temporal patterns of groundwater levels (https://doi.org/10.3390/su152316295).
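
    The sketch below illustrates only the HCA step, clustering standardized annual DTW series with Ward linkage in SciPy. The SOM step, the trend tests, and the actual CSIRO file layout are not reproduced; the file name (annual_dtw_by_bore.csv), index (bore_id), and column layout are assumptions.

```python
# Hedged sketch of the HCA step: Ward linkage on standardized annual
# depth-to-water series, one row per bore.
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster

dtw = pd.read_csv("annual_dtw_by_bore.csv", index_col="bore_id").dropna()
z = dtw.sub(dtw.mean(axis=1), axis=0).div(dtw.std(axis=1), axis=0)   # standardize each bore

link = linkage(z.values, method="ward")                 # cluster on temporal patterns
clusters = fcluster(link, t=5, criterion="maxclust")    # e.g. cut the tree into 5 clusters
print(pd.Series(clusters, index=z.index).value_counts())
```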

  5. Desights: The Future of Crypto: Google Trends Decomposition Analysis & Forecasting Models

    • market.oceanprotocol.com
    Updated Apr 30, 2024
    Cite
    Desights User (2024). Desights: The Future of Crypto: Google Trends Decomposition Analysis & Forecasting Models [Dataset]. https://market.oceanprotocol.com/asset/did:op:0d15341dad61a616e99bf27bd4996d0fb41a1697dcafe87d80f6cb508f767af5
    Explore at:
    Dataset updated
    Apr 30, 2024
    Dataset authored and provided by
    Desights User
    Description

    This is a submission for Challenge #24 by Desights User

    Note: This submission is in REVIEW state and is only accessible to Challenge Reviewers, so you might get errors when you try to download this asset directly from Ocean Market.

    Submission Description

    Cryptocurrency is not just a new form of value storage and exchange; it is a revolution of its own. Beginning with its use as a peer-to-peer payment network (or digital money) like Bitcoin, today's cryptocurrency, or crypto for short, has evolved well beyond its humble start. Underlying the crypto world lies a remarkable technology called blockchain. In simple terms, a blockchain is a decentralized and shared digital ledger that records transactions transparently and immutably across nodes in the network. Today's crypto community has slowly turned into an industry of its own, introducing a whole spectrum of enigmatic patterns, trends, and economic frameworks. In this report we explore the trends, correlations, and dynamics of 20 selected crypto projects to derive insights and build models that predict the future of crypto. Key findings: Our exploratory data analysis (EDA) outlines the span and general pattern of the Google Trends and price-related data, which run from the earliest entry on 2014-09-17 up to the latest on 2024-04-07. Time series decomposition was performed to extract the trend, seasonal cycle, and residuals that make up the Google interest data. Analysis of the decomposition helps us distinguish cluster (a), projects on the rise such as Solana, SingularityNet, Fetch.ai, and Ocean Protocol, from cluster (b), older projects such as Dogecoin, Litecoin, Filecoin, and Tezos that face stagnant or declining trends. Based on the Google Trends correlations across projects, we characterize a highly correlated cluster with correlations above about 0.8, and up to 0.92, with Bitcoin, Ethereum, Chainlink, Litecoin, and Monero as the prominent members. By introducing additional Google Trends data to understand crypto narratives, we worked toward building interpretable event/entity features that drive market sentiment and explain our decomposed time series. Based on the lag characteristics in the correlation between Google Trends and price/trade volume, we highlight the tendency for the correlation to accumulate at longer lag times. Using the NeuralProphet framework we built forecasting models for Google Trends interest and token price for all 20 projects investigated here, and deployed these models to predict trend and price for the following 52 weeks (up until April 2025). The developed models performed extraordinarily well, with R^2 values for most models falling between 0.75 and 0.88 and the highest reaching 0.919. We highlight the correlations between Bitcoin, Ethereum, and Ocean and the rest of the projects: Ocean and Bitcoin, as well as Ethereum and Solana, are the most correlated pairs, both with a correlation of 0.89. Kucoin's KCS token is the least correlated with both Ocean and Bitcoin (0.31), while Filecoin has the least correlation with Ethereum (0.41).
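
    As a small illustration of the lag analysis mentioned above, the sketch below checks how the correlation between a weekly Google Trends interest series and a token price changes as interest is shifted to lead price by several weeks. The file and column names are assumptions; the submission's own pipeline (including the NeuralProphet forecasting) is not reproduced here.

```python
# Illustrative lagged-correlation check between weekly Google Trends interest
# and token price. File and column names are assumptions.
import pandas as pd

df = pd.read_csv("ocean_trend_price_weekly.csv", parse_dates=["week"], index_col="week")
for lag in (0, 4, 8, 12):                                # weeks by which interest leads price
    r = df["interest"].corr(df["price"].shift(-lag))     # align interest[t] with price[t+lag]
    print(f"lag {lag:2d} weeks: r = {r:.2f}")
```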

    Conclusion: This investigative study presents a thorough data analysis and exploration of correlations, time-lag characteristics, and time-series decomposition concerning Google Trends and token prices for 20 selected crypto/blockchain projects. By decomposing the time-series data, we have identified several clusters of crypto projects that are moving up in popularity, such as Fetch.ai, SingularityNet, Solana, and Ocean, and some others that are stagnant or in a downward trend, such as Dogecoin and Litecoin. Our analysis also includes a detailed exploration of various factors that contribute to understanding the data better, such as the incorporation of event-driven trends that explain outlier spikes in the residuals of our decomposed time series.

    In addition to our in-depth analysis, we built a mini-library of forecasting models for predicting Google Trends interest as well as price for the upcoming year, with R^2 scores as high as 0.88 in most cases. Moreover, to demonstrate the utility of our exploratory data analysis tools and pipeline in full, we also include all the results and analysis output produced in this work.

    Looking ahead, we plan to expand our developed forecasting models and the presented data into a "CryptoForecast MiniApp." This application, based on the Streamlit package, will be hosted on a decentralized cloud (Akash) and connected to the Ocean marketplace and Predictoor, enhancing accessibility and utility for users interested in real-time data for Google Trends and Crypto Token Price forecasts.

  6. Trend analysis for sites used in RESTORE Streamflow alteration assessments

    • gimi9.com
    • data.usgs.gov
    • +2more
    Updated Dec 13, 2018
    Cite
    (2018). Trend analysis for sites used in RESTORE Streamflow alteration assessments [Dataset]. https://gimi9.com/dataset/data-gov_trend-analysis-for-sites-used-in-restore-streamflow-alteration-assessments
    Explore at:
    Dataset updated
    Dec 13, 2018
    Description

    Daily streamflow discharge data from 139 streamgages located on tributaries and streams flowing to the Gulf of Mexico were used to calculate mean monthly, mean seasonal, and decile values. Streamgages used to calculate trends required a minimum of 65 years of continuous daily streamflow data. These values were used to analyze trends in streamflow using the Mann-Kendall trend test in the R package entitled “Trends” and a new methodology created by Robert M. Hirsch known as a “Quantile-Kendall” plot. Data were analyzed based on water year using the Mann-Kendall trend test and by climate year using the Quantile-Kendall methodology to: (1) identify regions which are statistically similar for estimating streamflow characteristics; (2) identify trends related to changing streamflow and streamflow alteration over time; and (3) identify possible correlations with estuary health in the Gulf of Mexico.
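
    A minimal sketch of a water-year monotonic trend check using Kendall's tau from SciPy (the description cites the R “Trends” package; the Quantile-Kendall methodology is not reproduced here). The gage file name and column names are placeholders.

```python
# Sketch of a Mann-Kendall-style trend test on water-year mean streamflow.
import pandas as pd
from scipy.stats import kendalltau

q = pd.read_csv("daily_streamflow_gage.csv", parse_dates=["date"])   # hypothetical gage file
q["water_year"] = q["date"].dt.year + (q["date"].dt.month >= 10)     # Oct-Sep water year
annual = q.groupby("water_year")["discharge_cfs"].mean()

tau, p = kendalltau(annual.index, annual.values)
print(f"Kendall tau = {tau:.3f}, p = {p:.4f}")
```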

  7. Clinical Data Analytics in Healthcare Market Research Report 2032

    • dataintelo.com
    csv, pdf, pptx
    Updated Jan 7, 2025
    Cite
    Dataintelo (2025). Clinical Data Analytics in Healthcare Market Research Report 2032 [Dataset]. https://dataintelo.com/report/global-clinical-data-analytics-in-healthcare-market
    Explore at:
    pdf, pptx, csv. Available download formats.
    Dataset updated
    Jan 7, 2025
    Dataset authored and provided by
    Dataintelo
    License

    https://dataintelo.com/privacy-and-policy

    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Clinical Data Analytics in Healthcare Market Outlook



    The Clinical Data Analytics in Healthcare Market is experiencing a significant surge in demand, with a market size valued at $12 billion in 2023 and projected to reach approximately $35 billion by 2032, expanding at an impressive compound annual growth rate (CAGR) of 12.5%. The driving force behind this robust growth is the increasing need for data-driven decision-making processes in healthcare that enhance operational efficiency and improve patient outcomes. This demand is further fueled by the global shift towards value-based healthcare, which emphasizes the quality of care provided and patient satisfaction over the quantity of services rendered.
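
    As a quick arithmetic check of the figures quoted above (a $12 billion market in 2023 compounding at a 12.5% CAGR through 2032):

```python
# $12B in 2023 compounding at 12.5% per year for 9 years (2023 -> 2032).
base, cagr, years = 12.0, 0.125, 2032 - 2023
print(f"${base * (1 + cagr) ** years:.1f}B")   # about $34.6B, consistent with the ~$35B projection
```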



    A primary growth factor propelling this market is the technological advancements in data processing and storage capacities, allowing healthcare providers to manage and analyze vast amounts of clinical data efficiently. The integration of technologies such as artificial intelligence and machine learning into healthcare data analytics has revolutionized the way data is interpreted, enabling predictive analytics and personalized medicine. These technologies aid in early disease detection and facilitate the creation of tailored treatment plans, which are proving to be more effective than traditional approaches in managing chronic diseases and improving patient care outcomes.



    Another significant growth factor is the increasing adoption of electronic health records (EHRs) across healthcare facilities worldwide. EHRs play a crucial role in data collection, providing a comprehensive view of patient histories that is essential for effective data analytics. The widespread implementation of EHRs improves data accuracy and accessibility, which are critical for successful clinical data analytics. Furthermore, healthcare regulations globally are increasingly mandating the digital recording and sharing of patient data, further accelerating the adoption of EHRs and subsequently driving the demand for data analytics solutions.



    The growing emphasis on population health management is also a strong catalyst for market growth. As healthcare systems shift towards a more holistic approach to patient care, there is a heightened focus on understanding and managing the health of entire populations. Clinical data analytics provides the tools necessary for identifying health trends and risk factors within populations, allowing healthcare providers to develop targeted interventions and preventive measures. This trend is especially pertinent amid the increasing prevalence of lifestyle-related diseases, which require ongoing monitoring and management to mitigate their impact on healthcare systems.



    In the realm of healthcare, operational analytics plays a pivotal role in streamlining processes and enhancing the efficiency of healthcare delivery systems. By leveraging Healthcare Operational Analytics, healthcare organizations can optimize resource allocation, reduce operational costs, and improve patient flow management. This approach enables healthcare providers to identify bottlenecks and inefficiencies within their operations, allowing for data-driven decisions that enhance overall service delivery. As healthcare systems continue to face increasing demands and financial pressures, the adoption of operational analytics becomes essential in maintaining high standards of care while ensuring sustainability and cost-effectiveness.



    Regionally, North America dominates the Clinical Data Analytics in Healthcare Market, accounting for the largest market share due to advanced healthcare infrastructure and significant investments in R&D. The region's well-established EHR systems and the presence of major market players spearheading technological innovations further bolster this dominance. However, Asia Pacific is expected to witness the highest growth rate, driven by the rapid adoption of healthcare IT solutions, increasing government initiatives towards digital health transformation, and the growing burden of chronic diseases. Europe follows closely, benefiting from stringent healthcare regulations and a strong focus on improving healthcare outcomes through data analytics.



    Component Analysis



    The component segment of the Clinical Data Analytics in Healthcare Market is bifurcated into software and services, both integral to the effective deployment of data analytics solutions. Software, the larger of the two segments, encompasses a range of applications designed to

  8. Online Search Trends Data API | Track Market Behavior | Best Price Guarantee...

    • datarade.ai
    Updated Oct 27, 2021
    Cite
    Success.ai (2021). Online Search Trends Data API | Track Market Behavior | Best Price Guarantee [Dataset]. https://datarade.ai/data-products/online-search-trends-data-api-track-market-behavior-best-success-ai
    Explore at:
    .bin, .json, .xml, .csv, .xls, .sql, .txt. Available download formats.
    Dataset updated
    Oct 27, 2021
    Dataset provided by
    Area covered
    Honduras, Tuvalu, Sint Eustatius and Saba, Macedonia (the former Yugoslav Republic of), Senegal, Rwanda, Czech Republic, Jersey, Myanmar, Croatia
    Description

    Success.ai’s Online Search Trends Data API empowers businesses, marketers, and product teams to stay ahead by monitoring real-time online search behaviors of over 700 million users worldwide. By tapping into continuously updated, AI-validated data, you can track evolving consumer interests, pinpoint emerging keywords, and better understand buyer intent.

    This intelligence allows you to refine product positioning, anticipate market shifts, and deliver hyper-relevant campaigns. Backed by our Best Price Guarantee, Success.ai’s solution provides the valuable insight needed to outpace competitors, adapt to changing market dynamics, and consistently meet consumer expectations.

    Why Choose Success.ai’s Online Search Trends Data API?

    1. Real-Time Global Insights

      • Leverage up-to-the-minute search data from users spanning all major industries, regions, and demographics.
      • Confidently tailor campaigns, content, and product roadmaps to match dynamic consumer interests and seasonality.
    2. AI-Validated Accuracy

      • Rely on 99% data accuracy through AI-driven validation, reducing guesswork and improving conversion rates.
      • Make data-driven decisions supported by credible, continuously refreshed intelligence.
    3. Continuous Data Updates

      • Stay aligned with changing market conditions, competitor moves, and evolving consumer behaviors as they happen.
      • Adapt swiftly to shifting trends, product demands, and industry developments, maintaining long-term relevance.
    4. Ethical and Compliant

      • Fully adheres to GDPR, CCPA, and other global data privacy regulations, ensuring responsible data usage and brand protection.

    Data Highlights:

    • 700M+ Global User Insights: Access search trends, queries, and user behaviors for unparalleled audience understanding.
    • Real-Time Updates: Maintain agility in content creation, product development, and marketing strategies.
    • AI-Validated Accuracy: Trust in high-fidelity data to inform critical decisions, reducing wasted investments.
    • Best Price Guarantee: Maximize ROI by accessing premium-quality data at unbeatable value.

    Key Features of the Online Search Trends Data API:

    1. On-Demand Trend Analysis

      • Query the API to identify emerging keywords, popular topics, and changing consumer priorities.
      • React rapidly to new opportunities, delivering content and offers that resonate with current market interests.
    2. Advanced Filtering and Segmentation

      • Filter by region, industry vertical, time frames, or user attributes.
      • Focus on audiences and themes most relevant to your strategic goals, improving campaign performance and message relevance.
    3. Real-Time Validation and Reliability

      • Benefit from AI-driven validation to ensure data integrity and accuracy.
      • Reduce risk, optimize resource allocation, and confidently direct initiatives supported by up-to-date, trustworthy data.
    4. Scalable and Flexible Integration

      • Easily integrate the API into existing marketing automation platforms, analytics tools, or product management software.
      • Adjust parameters as goals evolve, ensuring long-term flexibility and alignment with strategic objectives.

    Strategic Use Cases:

    1. Product Development and Innovation

      • Identify rising user interests, unmet needs, or competitive gaps by analyzing search trends.
      • Shape product features, enhancements, or entirely new offerings based on verified consumer demand.
    2. Content Marketing and SEO

      • Uncover trending topics, popular keywords, and seasonal interests to produce relevant content.
      • Improve organic reach, engagement, and lead generation by meeting users at the intersection of their search intent.
    3. Market Entry and Expansion

      • Validate market readiness and user curiosity in new regions or niches.
      • Enter unfamiliar territories or launch product lines confidently, backed by real-time search insights.
    4. Advertising and Campaign Optimization

      • Align ad creatives, messaging, and promotions with the most popular search terms.
      • Increase CTRs, conversions, and overall campaign efficiency by resonating more deeply with consumer interests.

    Why Choose Success.ai?

    1. Best Price Guarantee

      • Access high-quality search trends data at the most competitive prices, ensuring exceptional ROI on data-driven initiatives.
    2. Seamless Integration

      • Incorporate the API into your workflow with ease, enhancing productivity and eliminating data silos.
    3. Data Accuracy with AI Validation

      • Trust in 99% accuracy to guide strategies, refine targeting, and achieve stronger engagement outcomes.
    4. Customizable and Scalable Solutions

      • Tailor datasets, filters, and time frames to your evolving market conditions, strategic ambitions, and audience needs.

    Additional APIs for Enhanced Functionality:

    1. Data Enrichment API
      • Combine search trends data with o...
  9. Airbnb Data | Travel Data | Airbnb Listings | Pricing, Rating, Amenities | Custom Web Scraping & Data Extraction Solutions, Globally | PromptCloud

    • datarade.ai
    .json, .xml, .csv
    Updated Feb 1, 2024
    Cite
    PromptCloud (2024). Airbnb Data | Travel Data | Airbnb Listings | Pricing, Rating, Amenities | Custom Web Scraping & Data Extraction Solutions, Globally | PromptCloud [Dataset]. https://datarade.ai/data-products/airbnb-data-scrape-airbnb-listings-pricing-rating-ameni-promptcloud
    Explore at:
    .json, .xml, .csv. Available download formats.
    Dataset updated
    Feb 1, 2024
    Dataset authored and provided by
    PromptCloud
    Area covered
    French Southern Territories, Andorra, French Guiana, Bonaire, Turkmenistan, Grenada, San Marino, Barbados, Finland, Yemen
    Description

    Our Airbnb data scraping solutions offer unparalleled access to extensive data from listings worldwide. In seconds, extract vital information such as host details, property addresses, location specifics, pricing, availability, star ratings, guest reviews, images, and more. This service is invaluable for those in the travel and tourism industry seeking a comprehensive understanding of market trends and customer preferences.

    Use our scraped Airbnb data to: - Monitor Real-Time Market Changes: Stay updated with the latest price changes and listing details in your selected locations. - Forecast Pricing Trends: Predict future pricing for specific locations, enhancing your strategy for the upcoming tourist season. - Identify Market Trends: Discover emerging trends, gaining a competitive edge by adapting your pricing and offers accordingly. - Understand Customer Preferences: Dive deep into customer expectations concerning price ranges, property sizes, features, and local infrastructure. - Sentiment Analysis on Reviews: Employ sentiment analysis on reviews to pinpoint the most successful locations, understanding customer satisfaction at a deeper level. - Data-Driven Decision Making: Base your decisions on robust data when considering opening or exploring new spots, especially those away from mainstream destinations.

    Our service ensures that you receive the most comprehensive and up-to-date information, in user-friendly formats, to support your business decisions and strategies in the dynamic world of travel and hospitality.

    With a decade-long track record in data extraction, PromptCloud is your go-to partner for reliable, high-quality Airbnb data. Our stringent data verification process ensures the highest level of data accuracy, offering you trustworthy insights for informed decision-making.

    We are committed to putting data at the heart of your business. Reach out for a no-frills PromptCloud experience: professional, technologically ahead, and reliable.

  10. Data from: Assessing the Performance of Parametric and Non-Parametric Tests for Trend Detection in Partial Duration Time Series

    • beta.hydroshare.org
    • hydroshare.org
    • +1more
    zip
    Updated Oct 17, 2023
    Cite
    Renato Amorim (2023). Assessing the Performance of Parametric and Non-Parametric Tests for Trend Detection in Partial Duration Time Series [Dataset]. http://doi.org/10.4211/hs.547308e5f90d4ff499e67c37f7cdd621
    Explore at:
    zip (406.9 MB). Available download formats.
    Dataset updated
    Oct 17, 2023
    Dataset provided by
    HydroShare
    Authors
    Renato Amorim
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The detection of non-stationarities in partial duration time series (or peak-over-threshold, POT) depends on a number of factors, including the length of the time series, the selected statistical test, and the heaviness of the tail of the distribution. Because POT records have received more limited attention in the literature than trend detection on block maxima variables, we perform a Monte Carlo simulation study to evaluate the performance of different approaches (Spearman's rho (SP), the Mann-Kendall test (MK), Ordinary Least Squares regression (OLS), Sen's slope estimator (SEN), and the non-stationary Generalized Pareto distribution fit (GPD_NS)) in identifying the presence of trends in POT records characterized by different sample sizes (n), shape parameters, and degrees of non-stationarity. We also estimate the probability of occurrence of Type S errors when using OLS and SEN to determine the magnitude of trends. The results point to a power gain for all tests with increasing sample size and degree of non-stationarity. The same increased detection is noted when reducing the shape parameter (i.e., going from unbounded to bounded distributions). While the GPD_NS has the best performance overall, the OLS performs well when detecting trends for low or negative shape values. On the other hand, the use of a non-parametric test is recommended in samples with a high positive skew. Furthermore, the use of sampling rates greater than 1 (i.e., selecting more than just one event per year on average) to increase the POT sample size is encouraged, especially when dealing with small records. In this case, gains in power of detection and a reduction in the probability of Type S error occurrence are observed, especially when the shape parameter corresponds to an unbounded distribution. Moreover, the use of SEN to estimate the magnitude of a trend is preferable over OLS due to its slightly smaller probability of occurrence of Type S error when the shape parameter is positive.
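
    To make two of the listed approaches concrete, the sketch below applies the Mann-Kendall test (via Kendall's tau) and Sen's slope (Theil-Sen) to a synthetic peaks-over-threshold record drawn from a Generalized Pareto distribution with a weak linear trend. It is an illustration only, not the paper's Monte Carlo design or parameter grid.

```python
# Mann-Kendall (Kendall's tau) and Sen's slope (Theil-Sen) on a synthetic
# peaks-over-threshold record with GPD noise plus a weak linear trend.
import numpy as np
from scipy.stats import kendalltau, theilslopes, genpareto

rng = np.random.default_rng(42)
n = 60                                          # ~60 exceedances
t = np.arange(n)
pot = genpareto.rvs(c=0.1, scale=10.0, size=n, random_state=rng) + 0.05 * t

tau, p_mk = kendalltau(t, pot)
slope, intercept, lo, hi = theilslopes(pot, t)  # Sen's slope with a confidence band
print(f"Mann-Kendall: tau = {tau:.2f}, p = {p_mk:.3f}")
print(f"Sen's slope: {slope:.3f} (confidence band: {lo:.3f} to {hi:.3f})")
```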

  11. Consumer Behavior Data | Consumer Goods & Electronics Industry Leaders in Asia, US, and Europe | Verified Global Profiles from 700M+ Dataset

    • datarade.ai
    Updated Jan 1, 2018
    Cite
    Success.ai (2018). Consumer Behavior Data | Consumer Goods & Electronics Industry Leaders in Asia, US, and Europe | Verified Global Profiles from 700M+ Dataset [Dataset]. https://datarade.ai/data-products/consumer-behavior-data-consumer-goods-electronics-industr-success-ai
    Explore at:
    .bin, .json, .xml, .csv, .xls, .sql, .txt. Available download formats.
    Dataset updated
    Jan 1, 2018
    Dataset provided by
    Area covered
    United States
    Description

    Success.ai’s Consumer Behavior Data for Consumer Goods & Electronics Industry Leaders in Asia, the US, and Europe offers a robust dataset designed to empower businesses with actionable insights into global consumer trends and professional profiles. Covering executives, product managers, marketers, and other professionals in the consumer goods and electronics sectors, this dataset includes verified contact information, professional histories, and geographic business data.

    With access to over 700 million verified global profiles and firmographic data from leading companies, Success.ai ensures your outreach, market analysis, and strategic planning efforts are powered by accurate, continuously updated, and GDPR-compliant data. Backed by our Best Price Guarantee, this solution is ideal for businesses aiming to navigate and lead in these fast-paced industries.

    Why Choose Success.ai’s Consumer Behavior Data?

    1. Verified Contact Data for Precision Engagement

      • Access verified email addresses, phone numbers, and LinkedIn profiles of professionals in the consumer goods and electronics industries.
      • AI-driven validation ensures 99% accuracy, optimizing communication efficiency and minimizing data gaps.
    2. Comprehensive Global Coverage

      • Includes profiles from key markets in Asia, the US, and Europe, covering regions such as China, India, Germany, and the United States.
      • Gain insights into region-specific consumer trends, product preferences, and purchasing behaviors.
    3. Continuously Updated Datasets

      • Real-time updates capture career progressions, company expansions, market shifts, and consumer trend data.
      • Stay aligned with evolving market dynamics and seize emerging opportunities effectively.
    4. Ethical and Compliant

      • Fully adheres to GDPR, CCPA, and other global data privacy regulations, ensuring responsible use and legal compliance for all data-driven campaigns.

    Data Highlights:

    • 700M+ Verified Global Profiles: Connect with industry leaders, marketers, and decision-makers in consumer goods and electronics industries worldwide.
    • Consumer Trend Insights: Gain detailed insights into product preferences, purchasing patterns, and demographic influences.
    • Business Locations: Access geographic data to identify regional markets, operational hubs, and emerging consumer bases.
    • Professional Histories: Understand career trajectories, skills, and expertise of professionals driving innovation and strategy.

    Key Features of the Dataset:

    1. Decision-Maker Profiles in Consumer Goods and Electronics

      • Identify and engage with professionals responsible for product development, marketing strategy, and supply chain optimization.
      • Target individuals making decisions on consumer engagement, distribution, and market entry strategies.
    2. Advanced Filters for Precision Campaigns

      • Filter professionals by industry focus (consumer electronics, FMCG, luxury goods), geographic location, or job function.
      • Tailor campaigns to align with specific industry trends, market demands, and regional preferences.
    3. Consumer Trend Data and Insights

      • Access data on regional product preferences, spending behaviors, and purchasing influences across key global markets.
      • Leverage these insights to shape product development, marketing campaigns, and customer engagement strategies.
    4. AI-Driven Enrichment

      • Profiles enriched with actionable data allow for personalized messaging, highlight unique value propositions, and improve engagement outcomes.

    Strategic Use Cases:

    1. Marketing and Demand Generation

      • Design campaigns tailored to consumer preferences, regional trends, and target demographics in the consumer goods and electronics industries.
      • Leverage verified contact data for multi-channel outreach, including email, social media, and direct marketing.
    2. Market Research and Competitive Analysis

      • Analyze global consumer trends, spending patterns, and product preferences to refine your product portfolio and market positioning.
      • Benchmark against competitors to identify gaps, emerging needs, and growth opportunities in target regions.
    3. Sales and Partnership Development

      • Build relationships with key decision-makers at companies specializing in consumer goods or electronics manufacturing and distribution.
      • Present innovative solutions, supply chain partnerships, or co-marketing opportunities to grow your market share.
    4. Product Development and Innovation

      • Utilize consumer trend insights to inform product design, pricing strategies, and feature prioritization.
      • Develop offerings that align with regional preferences and purchasing behaviors to maximize market impact.

    Why Choose Success.ai?

    1. Best Price Guarantee
      • Access premium-quality consumer behavior data at competitive prices, ensuring maximum ROI for your outreach, research, and ma...
  12. Narragansett Bay Estuarine and Marine Wetlands Trend Analysis

    • hub.arcgis.com
    • rigis.org
    Updated Oct 30, 2004
    Cite
    Environmental Data Center (2004). Narragansett Bay Estuarine and Marine Wetlands Trend Analysis [Dataset]. https://hub.arcgis.com/datasets/edc::narragansett-bay-estuarine-and-marine-wetlands-trend-analysis/about
    Explore at:
    Dataset updated
    Oct 30, 2004
    Dataset authored and provided by
    Environmental Data Center
    Area covered
    Description

    A trend analysis of estuarine and marine wetlands in Narragansett Bay and their 500-foot upland buffer, delineated from 1990s era true color aerial photography and 1950s era black and white aerial photography and coded according to the U.S. Fish and Wildlife Service's Classification of Wetlands and Deepwater Habitats of the United States (Cowardin, L.M., V. Carter, F.C. Golet, and E.T. LaRoe. 1979 (reprinted 1992). U.S. Fish and Wildlife Service, Washington, DC. FWS/OBS-79/31. 103 pp.) and Anderson, J.R., E.E. Hardy, J.T. Roach, and R.E. Witmer. 1976. A Land Use and Land Cover Classification System for Use with Remote Sensor Data. U.S. Geological Survey Professional Paper 964. U.S. Government Printing Office, Washington, D.C. 28 pp. These data identify coastal wetland and buffer zone trends, including losses, gains, and changes in classification, in the Rhode Island portion of the Narragansett Bay Estuary, and identify these same trends in 6 pilot areas including 1930s to 1950s trends. The data support planning for the protection and restoration of coastal wetlands and buffer zones. The target minimum polygonal mapping unit was 0.25 acre for discrete coastal wetlands.

  13. Drift versus Shift: Decoupling Trends and Changepoint Analysis

    • tandf.figshare.com
    txt
    Updated Feb 18, 2025
    Cite
    Haoxuan Wu; Toryn L. J. Schafer; Sean Ryan; David S. Matteson (2025). Drift versus Shift: Decoupling Trends and Changepoint Analysis [Dataset]. http://doi.org/10.6084/m9.figshare.26018126.v2
    Explore at:
    txt. Available download formats.
    Dataset updated
    Feb 18, 2025
    Dataset provided by
    Taylor & Francis
    Authors
    Haoxuan Wu; Toryn L. J. Schafer; Sean Ryan; David S. Matteson
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    We introduce a new approach for decoupling trends (drift) and changepoints (shifts) in time series. Our locally adaptive, model-based approach to robust decoupling combines Bayesian trend filtering and machine-learning-based regularization. An over-parameterized Bayesian dynamic linear model (DLM) is first applied to characterize drift. Then a weighted penalized likelihood estimator is paired with the estimated DLM posterior distribution to identify shifts. We show how Bayesian DLMs specified with so-called shrinkage priors can provide smooth estimates of underlying trends in the presence of complex noise components. However, their inability to shrink exactly to zero inhibits direct changepoint detection. In contrast, penalized likelihood methods are highly effective in locating changepoints. However, they require data with simple patterns in both signal and noise. The proposed decoupling approach combines the strengths of both, that is, the flexibility of Bayesian DLMs with the hard thresholding property of penalized likelihood estimators, to provide changepoint analysis in complex, modern settings. The proposed framework is outlier robust and can identify a variety of changes, including in mean and slope. It is also easily extended for analysis of parameter shifts in time-varying parameter models like dynamic regressions. We illustrate the flexibility and contrast the performance and robustness of our approach with several alternative methods across a wide range of simulations and application examples.
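
    The toy sketch below illustrates only the drift-versus-shift idea: estimate a smooth drift with LOWESS, then scan the detrended residuals for a single mean shift with a CUSUM-like statistic. It is a simplified stand-in, not the authors' Bayesian DLM plus weighted penalized likelihood procedure.

```python
# Toy drift-versus-shift illustration: LOWESS drift estimate, then a single
# mean-shift scan on the residuals. Not the paper's method.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(0)
n = 300
t = np.arange(n)
y = 0.01 * t + np.where(t >= 200, 3.0, 0.0) + rng.normal(0.0, 1.0, n)  # drift + shift + noise

drift = lowess(y, t, frac=0.3, return_sorted=False)   # smooth trend (drift) estimate
resid = y - drift

# Score each candidate split by the standardized difference in segment means.
scores = [abs(resid[:k].mean() - resid[k:].mean()) * np.sqrt(k * (n - k) / n)
          for k in range(10, n - 10)]
cp = int(np.argmax(scores)) + 10
print(f"estimated shift location: t = {cp}")
```

    Note that a flexible smoother can absorb part of a shift into the estimated drift, which is closely related to the limitation of shrinkage-prior trend estimates that motivates the paper's decoupling approach.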

  14. Tempe COVID-19 Wastewater Collection Data Dashboard v4

    • catalog.data.gov
    • s.cnmilf.com
    • +1more
    Updated Nov 15, 2024
    Cite
    City of Tempe (2024). Tempe COVID-19 Wastewater Collection Data Dashboard v4 [Dataset]. https://catalog.data.gov/dataset/tempe-covid-19-wastewater-collection-data-dashboard-v4
    Explore at:
    Dataset updated
    Nov 15, 2024
    Dataset provided by
    City of Tempe
    Area covered
    Tempe
    Description

    Wastewater collection areas are comprised of merged sewage drainage basins that flow to a shared testing location for the COVID-19 wastewater study. The collection area polygons are published with related wastewater testing data, which are provided by scientists from Arizona State University's Biodesign Institute. Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) causes COVID-19. People infected with SARS-CoV-2 excrete the virus in their feces in a process known as “shedding”. The municipal wastewater treatment system (sewage system) collects and aggregates these bathroom contributions across communities. Tempe wastewater samples are collected downstream of a community and brought to the ASU lab to be analyzed for the virus. Analysis is based on the genetic material inside the virus, and this dashboard focuses on genome copies per liter. The absence of a value in a chart indicates that either no samples were collected or that samples are still being analyzed. A value of 5,000 represents samples that are below detection or reporting limits for the test being used.

    Note of caution: The influence of this data on community health decisions in the future is unknown. Data collection is being used to depict overall weekly trends and should not be interpreted without a holistic assessment of public health data. The purpose of this weekly data is to support research as well as to identify overall trends of the genome copies in each liter of wastewater per collection area. In the future these trend data could be used alongside other authoritative data, including the number of daily new confirmed cases in Tempe published by the Arizona Department of Health and data documenting state and local interventions (i.e., social distancing, closures, and safe openings). The numeric values of the results should not be viewed as actionable right now; they represent one potentially helpful piece of information among various data sources. We share this information with the public with the disclaimer that only the future can tell how much “diagnostic value” we can and should attribute to the numeric measurements we obtain from the sewer. However, what we measure, the COVID-19-related RNA in wastewater, we know is real, and we share that information with our community.

    In the Tempe COVID-19 Wastewater Results Dashboard, please note: These data illustrate a trend of the signal of the weekly average of COVID-19 genome copies per liter of wastewater in Tempe's sewage. The dashboard and collection area map do not depict the number of individuals infected. Each collection area includes at least one sampling location, which collects wastewater from across the collection area; it does not reflect the specific location where the deposit occurs. While testing can successfully quantify the results, research has not yet determined the relationship between these genome values and the number of people who are positive for COVID-19 in the community. The quantity of RNA detected in sewage is real; the interpretation of that signal and its implication for public health is ongoing research. Currently, there is not a baseline for determining a strong or weak signal. The shedding rate and shedding duration for individuals, both symptomatic and asymptomatic, are still unknown. Data are shared as the testing results become available; because results may not be released at the same time, testing results for each area may not yet be seen for a given day or week. The dashboard presents the weekly averages. Data are collected from 2-7 days per week. The quantifiable level of 5,000 copies per liter is the lowest amount measurable with current testing. Results that are below the quantifiable level of 5,000 copies per liter do not suggest the absence of the virus in the collection area. It is possible to have results below the quantifiable level of 5,000 on one day/week and then have a greater signal on a subsequent day/week. For Collection Area 1, Tempe's wastewater co-mingles with wastewater from a regional sewage line; Tempe's sewage makes up the majority of Collection Area 1 samples, and after the collection period of April 7-24, 2020, Collection Area 1 samples include only Tempe wastewater. For Collection Area 3, Tempe's wastewater co-mingles with wastewater from a regional sewage line; for analysis and reporting, Tempe's wastewater is separated from regional sewage. This operations dashboard is used in an associated story map, Fighting Coronavirus/COVID-19 with Public Health Data (https://storymaps.arcgis.com/stories/e6a45aad50c24e22b7285412d2d6ff2a), about the COVID-19 wastewater testing project. This operations dashboard also supports the main Tempe Wastewater BioIntel Program hub site, https://wastewater.tempe.gov/.
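
    A minimal sketch of the kind of weekly aggregation the dashboard describes: averaging genome copies per liter by collection area and week, and flagging samples at or below the 5,000 copies per liter reporting limit. The file and column names are hypothetical, not the City of Tempe/ASU schema.

```python
# Weekly average genome copies per liter by collection area, with a flag for
# samples at or below the 5,000 copies/L reporting limit. Names are hypothetical.
import pandas as pd

ww = pd.read_csv("tempe_wastewater_samples.csv", parse_dates=["sample_date"])
ww["below_limit"] = ww["genome_copies_per_l"] <= 5000
print(f"{ww['below_limit'].mean():.1%} of samples at or below the reporting limit")

weekly = (ww.set_index("sample_date")
            .groupby("collection_area")["genome_copies_per_l"]
            .resample("W").mean())          # weekly average per collection area
print(weekly.tail())
```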

  15. Opah Labs | Product Data & Consumer Insights | Puerto Rico | 112M+ Records

    • datarade.ai
    .json, .csv
    Cite
    Opah Labs, Opah Labs | Product Data & Consumer Insights | Puerto Rico | 112M+ Records [Dataset]. https://datarade.ai/data-products/opah-labs-product-data-consumer-insights-puerto-rico-opah-labs
    Explore at:
    .json, .csv. Available download formats.
    Dataset authored and provided by
    Opah Labs
    Area covered
    Puerto Rico
    Description

    This Food & Grocery dataset offers unparalleled depth and accuracy, providing comprehensive insights into the Puerto Rican market with a strong focus on authentic Hispanic food products and consumer behavior. The data is sourced from a wide range of industry channels, including grocery retailers, product data, and real-time transactional data, ensuring its relevance and precision. This dataset is particularly valuable for businesses seeking to understand niche consumer preferences, offering detailed visibility into product demand, purchasing habits, and market trends within the Hispanic food and grocery sector. It serves as a powerful tool for optimizing decision-making in retail, supply chain management, and marketing strategies.

    | Volume and Stats |

    Industry records undergo an unmatched refresh every two weeks. Many prominent sales and marketing platforms rely on curating firsthand data.

    | Use Cases | 1. Supply Chain Optimization: Improve efficiency by leveraging detailed insights on product demand and delivery patterns. 2. Inventory Management: Streamline stock levels based on real-time data on product performance and consumer purchasing trends. 3. Personalized Marketing: Tailor marketing campaigns to target specific consumer behaviors and preferences within the Hispanic market. 4. Trend Identification: Spot emerging trends in food and grocery consumption to stay ahead of market demands. 5. Retail Strategy: Enhance retail strategies by understanding sales dynamics and customer preferences in Puerto Rico.

    This data product integrates seamlessly into broader data solutions, complementing datasets on consumer behavior, retail trends, and market performance, enabling businesses to drive informed decision-making across multiple sectors.

    | Delivery Options | Choose from various delivery options such as flat files, databases, APIs, and more, tailored to your needs. JSON, XLS, CSV

    | Other key features | Free data samples

    Tags: B2B2C Platform, Hispanic Grocers, Authentic Hispanic Food, User Engagement, Latam User Base, Ecommerce Dataset, Mobile Application Insights, User Behavior, User Experiences, Strategic Decisions, Hispanic Food Market Landscape.

  16. Fields of rapidly shifting trends South Korea 2024

    • statista.com
    Updated Nov 7, 2024
    Cite
    Statista (2024). Fields of rapidly shifting trends South Korea 2024 [Dataset]. https://www.statista.com/statistics/1535410/south-korea-fields-of-rapidly-shifting-trends/
    Explore at:
    Dataset updated
    Nov 7, 2024
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    Jul 9, 2024 - Jul 12, 2024
    Area covered
    South Korea
    Description

    According to a July 2024 survey on rapidly shifting trends in South Korea, approximately 71 percent of respondents identified fashion as one of the fastest-changing fields, making it the category with the highest response rate. In contrast, the area with the slowest rate of change was identified as the stock market, with only 10.7 percent of respondents selecting it.

  17. Global Wet Alarm Check Valves Market Forecast and Trend Analysis 2025-2032

    • statsndata.org
    excel, pdf
    Updated Feb 2025
    Cite
    Stats N Data (2025). Global Wet Alarm Check Valves Market Forecast and Trend Analysis 2025-2032 [Dataset]. https://www.statsndata.org/report/wet-alarm-check-valves-market-231487
    Explore at:
    Available download formats: excel, pdf
    Dataset updated
    Feb 2025
    Dataset authored and provided by
    Stats N Data
    License

    https://www.statsndata.org/how-to-order

    Area covered
    Global
    Description

    The Wet Alarm Check Valves market is a critical segment within the broader fire protection industry, primarily used in fire suppression systems to prevent water from flowing back into the supply line while allowing alarm systems to function correctly during emergencies. These valves play an essential role in ensuring ...

  18. Data from: Monitoring wildlife population trends with sample counts: A case...

    • data.niaid.nih.gov
    • datadryad.org
    zip
    Updated Oct 3, 2023
    Cite
    Matteo Panaccio; Alice Brambilla; Bruno Bassano; Tessa Smith; Achaz von Hardenberg (2023). Monitoring wildlife population trends with sample counts: A case study on the Alpine ibex (Capra ibex) [Dataset]. http://doi.org/10.5061/dryad.cfxpnvxcj
    Explore at:
    Available download formats: zip
    Dataset updated
    Oct 3, 2023
    Dataset provided by
    University of Chester
    University of Zurich
    Gran Paradiso National Park
    Authors
    Matteo Panaccio; Alice Brambilla; Bruno Bassano; Tessa Smith; Achaz von Hardenberg
    License

    https://spdx.org/licenses/CC0-1.0.html

    Area covered
    Alps
    Description

    Monitoring population dynamics is of fundamental importance in conservation, but assessing trends in abundance can be costly, especially over large and rugged areas. Obtaining trend estimates from counts performed in only a portion of the total area (sample counts) can be a cost-effective way to improve the monitoring and conservation of species that are difficult to count. We tested the effectiveness of sample counts in monitoring population trends of wild animals, using the Alpine ibex (Capra ibex) in the Gran Paradiso National Park (Italy) as a model population, both with computer simulations and with historical count data collected over the last 65 years. Although sample counts failed to correctly estimate the true population abundance, sampling half of the target area was sufficient to reliably monitor the trend of the target population. In the case of strong changes in abundance, an even smaller proportion of the total area could be sufficient to identify the direction of the population trend. However, when year-to-year variability in the trend is high, the required number of samples increases and even counting the entire area can be ineffective for detecting population trends. The effect of other parameters, such as which portion of the area is sampled and detectability, was smaller, but these should be tested case by case. Sample counts could therefore constitute a viable alternative for assessing population trends, allowing for important, cost-effective improvements in the monitoring of wild animals of conservation interest. Methods: We provide the R script used to run all the simulations in the paper. See the Methods section and Supplementary Materials S1 and S2 for more information.
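    The sample-count idea can be illustrated with a short simulation: count animals in a random subset of blocks each year and check whether the sign of the fitted trend matches the true population trend. The sketch below is a minimal illustration in Python under invented assumptions (population size, decline rate, block structure, and sampling scheme are all hypothetical); the authors' actual simulations are in the R script distributed with the dataset.

        import numpy as np

        rng = np.random.default_rng(42)

        def simulate_trend_detection(n_years=20, n_blocks=100, sampled_frac=0.5,
                                     start_pop=3000, annual_rate=-0.03, n_reps=200):
            """Fraction of simulated surveys whose sample counts recover the
            true direction of the population trend. All parameter values are
            illustrative assumptions, not values from the study."""
            n_sampled = int(n_blocks * sampled_frac)
            years = np.arange(n_years)
            true_sign = np.sign(annual_rate)
            hits = 0
            for _ in range(n_reps):
                # Count the same randomly chosen subset of blocks every year.
                sampled_blocks = rng.choice(n_blocks, size=n_sampled, replace=False)
                counts = []
                for t in years:
                    pop = int(start_pop * (1 + annual_rate) ** t)
                    # Redistribute individuals among blocks each year (equal expected use).
                    block_counts = rng.multinomial(pop, np.full(n_blocks, 1.0 / n_blocks))
                    counts.append(block_counts[sampled_blocks].sum())
                # Estimated trend direction: sign of the slope of log counts over years.
                slope = np.polyfit(years, np.log(np.maximum(counts, 1)), 1)[0]
                hits += int(np.sign(slope) == true_sign)
            return hits / n_reps

        print(f"True trend direction recovered in {simulate_trend_detection():.0%} of runs")

    With these particular settings the simulated 3 percent annual decline is detected in virtually every run; smaller sampled fractions or noisier year-to-year dynamics make detection less reliable, which is the trade-off the study quantifies.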

  19. Flight Data Monitoring Market Report

    • promarketreports.com
    doc, pdf, ppt
    Updated Feb 10, 2025
    Cite
    Pro Market Reports (2025). Flight Data Monitoring Market Report [Dataset]. https://www.promarketreports.com/reports/flight-data-monitoring-market-719
    Explore at:
    Available download formats: doc, pdf, ppt
    Dataset updated
    Feb 10, 2025
    Dataset authored and provided by
    Pro Market Reports
    License

    https://www.promarketreports.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    Flight data monitoring (FDM) systems offer a comprehensive suite of product features to enhance aviation safety and operational efficiency. Flight Data Recording and Storage captures and stores critical flight parameters, including aircraft performance, environmental data, and pilot inputs. Data Analysis and Visualization analyzes recorded data to identify trends, anomalies, and potential safety concerns, presenting insights through intuitive visualizations. Trend Analysis and Reporting generates custom reports to monitor performance metrics over time, allowing airlines to identify areas for improvement and optimize operations. Predictive Maintenance and Alerting utilizes advanced algorithms to predict potential equipment failures, enabling proactive maintenance and minimizing operational disruptions. Event Reconstruction and Investigation provides a detailed record of flight events in case of incidents or accidents, aiding investigations and improving safety protocols. Recent developments include: in April 2020, Airbus, one of the prominent market participants, was reported to have as many as 7,645 aircraft in its backlog, with more than 80 percent of these orders for the A320 Family; A350 XWBs and A220s account for approximately 14 percent of the order backlog. Planned deliveries from new aircraft programs such as the Boeing 777X, COMAC C919, and MC-21 over the coming years are anticipated to propel demand for commercial aircraft flight data monitoring systems. Key drivers for this market are: increased focus on aviation safety and technological advancements in data analysis. Potential restraints include: data security and privacy concerns and the high cost of FDM systems. Notable trends are: predictive analytics for proactive maintenance and integration with other aviation systems.
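    As a rough illustration of the exceedance-detection step behind the data analysis and alerting features described above, the hypothetical Python sketch below scans recorded flight-parameter samples against configured limits. The parameter names, limits, and sample recording are invented for illustration and do not reflect any particular FDM product.

        from dataclasses import dataclass

        # Hypothetical exceedance limits; real FDM programmes define many more,
        # typically per aircraft type and flight phase.
        LIMITS = {
            "vertical_speed_fpm": (-1500.0, 1500.0),
            "pitch_deg": (-5.0, 15.0),
            "g_load": (0.5, 2.0),
        }

        @dataclass
        class Exceedance:
            time_s: float
            parameter: str
            value: float

        def find_exceedances(samples):
            """Scan recorded samples (dicts of parameter -> value) for out-of-limit values."""
            events = []
            for sample in samples:
                for name, (lo, hi) in LIMITS.items():
                    value = sample.get(name)
                    if value is not None and not (lo <= value <= hi):
                        events.append(Exceedance(sample["time_s"], name, value))
            return events

        # Invented two-sample recording for demonstration purposes.
        recording = [
            {"time_s": 0.0, "vertical_speed_fpm": -700.0, "pitch_deg": 3.0, "g_load": 1.0},
            {"time_s": 1.0, "vertical_speed_fpm": -1800.0, "pitch_deg": -6.0, "g_load": 1.1},
        ]
        for event in find_exceedances(recording):
            print(f"t={event.time_s}s: {event.parameter}={event.value} outside limits")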

  20. Amount of data created, consumed, and stored 2010-2023, with forecasts to...

    • statista.com
    Updated Nov 21, 2024
    Cite
    Statista (2024). Amount of data created, consumed, and stored 2010-2023, with forecasts to 2028 [Dataset]. https://www.statista.com/statistics/871513/worldwide-data-created/
    Explore at:
    Dataset updated
    Nov 21, 2024
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    May 2024
    Area covered
    Worldwide
    Description

    The total amount of data created, captured, copied, and consumed globally is forecast to increase rapidly, reaching 149 zettabytes in 2024. Over the next five years up to 2028, global data creation is projected to grow to more than 394 zettabytes. In 2020, the amount of data created and replicated reached a new high; growth exceeded previous expectations, driven by increased demand during the COVID-19 pandemic as more people worked and learned from home and made greater use of home entertainment options. Storage capacity is also growing, although only a small percentage of this newly created data is kept: just two percent of the data produced and consumed in 2020 was saved and retained into 2021. In line with the strong growth of the data volume, the installed base of storage capacity is forecast to increase at a compound annual growth rate of 19.2 percent over the forecast period from 2020 to 2025. In 2020, the installed base of storage capacity reached 6.7 zettabytes.
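    The quoted 19.2 percent compound annual growth rate translates into a rough projection of 2025 installed capacity with a one-line calculation, shown below using only the figures given in the description (a back-of-the-envelope sketch, not a Statista output):

        # Compound growth: capacity_end = capacity_start * (1 + CAGR) ** years
        capacity_2020_zb = 6.7    # installed storage base in 2020, zettabytes (from the description)
        cagr = 0.192              # 19.2% compound annual growth rate (from the description)
        years = 5                 # forecast period 2020-2025

        capacity_2025_zb = capacity_2020_zb * (1 + cagr) ** years
        print(f"Projected installed storage capacity in 2025: ~{capacity_2025_zb:.1f} ZB")  # ~16.1 ZB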
