This data release makes available the data used to test a method for remote sensing of river discharge based on critical flow theory. In rivers where flow conditions are near critical and well-defined standing waves are present, simple measurements of the wavelengths of the standing waves and the width of the channel made using readily available image data can be used to infer river discharge. This approach could provide an efficient, non-contact method of estimating streamflow in certain rivers.

Sites near established USGS gaging stations with standing waves (also known as undular hydraulic jumps) were identified by examining images within Google Earth Pro (https://www.google.com/earth/about/versions/#earth-pro). The time slider within that software was then used to select specific image dates on which waves were clearly expressed. For each site on each date, Google Earth Pro was used to manually digitize four types of features: 1) a shadow to infer the image acquisition time using a sundial method; 2) one initial wavelength measurement made between the first and second crests in a standing wave train; 3) a second wavelength measurement made between the second and third crests in a standing wave train and thus located adjacent to and immediately downstream of the first wavelength measurement; and 4) the channel width, measured perpendicular to the two waves at the midpoint between them. These features are saved in the file CriticalFlowMeasurements.kmz, which consists of a separate folder for each site with subfolders for each date. In cases where more than one site was associated with a given gaging station, the description field in the parent folder name was used to provide a unique name for each rapid.

In addition to the CriticalFlowMeasurements.kmz file created in Google Earth Pro, the file CriticalFlowResultCompilationWithMeasurements.csv also contains the latitude and longitude coordinates of each of the digitized features. This tabular data file also contains the measurements of wavelength and width derived from these features, the image acquisition time inferred from shadow orientation using the sundial method, the discharge recorded at the gaging station at that time, the results of the critical flow calculations, and an accuracy assessment based on a comparison of the inferred discharge to that recorded at the gaging station. Please refer to the entity and attribute section of the metadata for further detail regarding this file.

Associated primary publications provide further detail on the sundial method and the underlying critical flow theory, but the relevant calculations can be summarized briefly as follows. Measuring the orientation of a shadow provides a means of inferring the acquisition time of an image by calculating the position of the sun at various times throughout the day and identifying when the sun would have been positioned so as to cast the observed shadow. Note that the latitude, longitude, and image acquisition date must be known to perform this analysis. Under critical flow conditions, relationships between the wavelengths observed in a standing wave train and the depth and velocity of the flow can be used to estimate river discharge. Critical flow occurs when the Froude number (Fr) is equal to 1 and the velocity v is equal to the square root of the product of depth d and acceleration due to gravity g: Fr = 1 --> v = sqrt(gd).
Velocity is also related to wavelength L as v = sqrt((Lg)/(2pi)), which can in turn be used to calculate depth via the definition of the Froude number: d = v^2/g. Finally, the discharge Q is calculated by multiplying the estimated depth by the measured width to obtain the cross-sectional area and then multiplying this result by the estimated velocity: Q = wdv. Users are advised to thoroughly read the metadata file associated with this data release to understand the appropriate use and limitations of the data and code provided herein. Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. Government.
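For reference, the calculation chain summarized above (velocity from wavelength, depth from the critical-flow condition, and discharge from width, depth, and velocity) can be written out in a few lines. The sketch below is an illustrative Python implementation of those formulas only; it is not part of the data release, and the example numbers are arbitrary.

```python
import math

G = 9.81  # acceleration due to gravity (m/s^2)

def critical_flow_discharge(wavelength_m: float, width_m: float) -> float:
    """Estimate discharge Q (m^3/s) from a standing-wave wavelength and channel width.

    Under critical flow (Fr = 1): v = sqrt(L * g / (2 * pi)), d = v**2 / g, Q = w * d * v.
    """
    v = math.sqrt(wavelength_m * G / (2.0 * math.pi))  # velocity from wavelength
    d = v ** 2 / G                                     # depth from the Fr = 1 condition
    return width_m * d * v                             # Q = w * d * v

# Example with arbitrary values: a 5 m wavelength in a 20 m wide channel.
print(critical_flow_discharge(wavelength_m=5.0, width_m=20.0))
```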
This dataset contains 3.5 GHz drone-based wireless channel measurements collected at the University of Southern California (USC). The measurements were designed for analyzing cell-free massive MIMO systems and developing path loss models.
The Building Assessment Survey and Evaluation (BASE) study was a five-year study to characterize determinants of indoor air quality and occupant perceptions in representative public and commercial office buildings across the U.S. This data source comprises the raw indoor air quality data from that study.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Performance metrics (compared with DXA-based measurements) for different prediction algorithms using all predictors in the test data, based on 25 datasets in which the DXA-based outcomes were randomly permuted to provide a null (baseline) scenario against which performance on the real data can be compared.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository contains the measurement and calibration data used in the CIT article 'Qualification of Image‐Based Measurement Systems for Characterization of Sprays'.
The data set contains three archives:
https://creativecommons.org/publicdomain/zero/1.0/
This dataset contains multi-source educational data for evaluating teaching quality in higher education. It reflects realistic records from a Comprehensive Education Management Platform (CEMP), combining academic performance, evaluation scores, attendance, student feedback, course completion statistics, and technology integration measures.
The target variable, Teaching_Quality, has four categories (Excellent, Good, Average, Poor) assigned using a weighted combination of core performance indicators. It provides a comprehensive view of teaching effectiveness across multiple dimensions, making it suitable for academic quality monitoring, decision-making, and research in higher education management.
Key Features
Teacher_ID / Course_ID: Identifiers for instructors and courses.
Student_Avg_Score: Average student academic performance.
Teacher_Evaluation_Score: Peer or management evaluation results.
Attendance_Rate: Percentage of student attendance.
Student_Feedback_Rating: Student survey-based rating of teaching.
Course_Completion_Rate: Percentage of students completing the course.
Interactive_Sessions_Percent: Proportion of classes with active participation.
Assignments_On_Time_Percent: Timeliness of assignment submissions.
Teaching_Experience_Years: Instructor’s teaching experience.
Research_Publications: Number of publications in the last three years.
Tech_Integration_Score: Use of technology in teaching.
Teaching_Quality (Target): Quality classification label.
This dataset supports educational analytics, institutional quality assurance, and data-driven policy formulation in higher education.
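Since the description states only that Teaching_Quality is derived from a weighted combination of core performance indicators, the sketch below illustrates one way such a label could be assigned in Python. The weights, thresholds, rescaling assumption, and file name are all hypothetical and are not taken from the dataset documentation.

```python
import pandas as pd

# Hypothetical weights over a subset of the indicators listed above; the
# dataset's actual weighting scheme and thresholds are not documented here.
WEIGHTS = {
    "Student_Avg_Score": 0.30,
    "Teacher_Evaluation_Score": 0.25,
    "Student_Feedback_Rating": 0.25,
    "Course_Completion_Rate": 0.20,
}

def label_quality(row: pd.Series) -> str:
    # Assumes each indicator has already been rescaled to a 0-100 range.
    score = sum(row[col] * w for col, w in WEIGHTS.items())
    if score >= 85:
        return "Excellent"
    if score >= 70:
        return "Good"
    if score >= 50:
        return "Average"
    return "Poor"

df = pd.read_csv("cemp_teaching_quality.csv")  # hypothetical file name
df["Teaching_Quality_illustrative"] = df.apply(label_quality, axis=1)
print(df["Teaching_Quality_illustrative"].value_counts())
```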
https://www.law.cornell.edu/uscode/text/17/106
Preventing eutrophication of inland freshwater ecosystems requires quantifying the excess nutrient (i.e., phosphorus (P) and nitrogen (N)) content of the streams and rivers that feed them. Identification of these at-risk waterways will help prevent harmful algae blooms that make drinking water unsafe. Here we present a novel method for measuring the more relevant bioavailable P (BAP). Where typical methods for measuring P assess soluble reactive P (SRP) or total P (TP) and require expensive analytical techniques that produce hazardous waste, this assay utilizes the growth of familiar baker’s yeast, avoids production of hazardous waste, and reduces cost relative to measurements of SRP and TP. The yeast BAP (yBAP) assay takes advantage of the observation that yeast density at saturating growth increases linearly with provided P. We show that this relationship can be used to measure P in freshwater in concentration ranges relevant to eutrophication. In addition, we measured yBAP in water containing a known amount of fertilizer and in samples from agricultural waterways. We observed that the majority of yBAP values were between those obtained from standard SRP and TP measurements, demonstrating that the assay is compatible with real-world settings. To expand the yBAP assay, we developed two additional derivatives for use with turbid samples that utilize the fact that growing cultures produce the gas carbon dioxide. The first derivative directly measures the released carbon dioxide by measuring pressure build-up in a closed culture chamber. The second derivative indirectly measures carbon dioxide release by measuring pH in the headspace (space above the yeast culture in a closed system) as indicated by the color of a gel containing cresol red suspended above the liquid culture. In either case, the amount of P in a sample can be determined by comparing the observed result to standard curves constructed from solutions of known P concentration. These new variants of the assay allow for the detection of yBAP in turbid (sediment-laden) or soil samples due to their readouts not requiring measurement through the sample matrix itself but rather above it. The final portion of this thesis focuses on applying the same principles of growth to measure the excess nutrient nitrogen, a large component of eutrophication of saltwater systems, using the yeast strain Blastobotrys adeninivorans, which is capable of metabolizing more complex forms of N. In either system the cost-effective and nonhazardous nature of yeast-based assays suggests that they could have utility in a range of settings, offering added insight to identify water systems at risk of eutrophication.
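Because the assay rests on a linear standard curve (assay readout versus known P concentration) that is then inverted to estimate P in an unknown sample, the fit-and-invert step can be sketched in a few lines of Python. The concentrations and readouts below are placeholder values, not data from the thesis.

```python
import numpy as np

# Standard curve: known P concentrations and the corresponding assay readout
# (saturating culture density, pressure build-up, or headspace-pH signal).
# All numbers are placeholders for illustration only.
p_standards = np.array([0.0, 25.0, 50.0, 100.0, 200.0])   # e.g. ug P / L
readout = np.array([0.05, 0.21, 0.38, 0.71, 1.38])        # arbitrary units

# Fit readout = slope * P + intercept (the linear relationship the assay exploits).
slope, intercept = np.polyfit(p_standards, readout, deg=1)

def estimate_p(sample_readout: float) -> float:
    """Invert the standard curve to estimate bioavailable P in a sample."""
    return (sample_readout - intercept) / slope

print(estimate_p(0.55))  # P concentration implied by a readout of 0.55
```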
The goal of this study was to develop a suite of inter-related water quality monitoring approaches capable of modeling and estimating the spatial and temporal gradients of particulate and dissolved total mercury (THg) concentration, and particulate and dissolved methylmercury (MeHg) concentration, in surface waters across the Sacramento / San Joaquin River Delta (SSJRD). This suite of monitoring approaches included: a) data collection at fixed continuous monitoring stations (CMS) outfitted with in-situ sensors, b) spatial mapping using boat-mounted flow-through sensors, and c) satellite-based remote sensing. The focus of this specific Child Page is to present all field and laboratory-based data associated with discrete surface water samples collected as part of the CMS and boat mapping components of the study. The data provided in the table herein constitute a collection of field-based and laboratory-based measurements that coincide with the timestamps of samples collected at 33 sites across the Delta. Laboratory-based measurements presented herein were conducted by the U.S. Geological Survey (USGS) Organic Matter Research Laboratory (OMRL) in Sacramento, CA; the USGS Earth System Processes Division (ESPD) microbial biogeochemistry laboratory in Menlo Park, CA; the USGS Reston Stable Isotope Laboratory (RSIL) in Reston, VA; and the USGS National Water Quality Laboratory (NWQL) in Denver, CO. The machine-readable (comma separated value, *.csv) file presented herein includes laboratory-based measurements for discrete samples collected from 33 established field sites (sampled repeatedly). In addition, field-based sensor data from continuous measurement platforms (CMS locations or as part of the mapping boat flow-through system) are also included in this discrete sample dataset by ensuring that the field sensor measurements were both spatially and temporally coincident with the physically discrete water sample collected for laboratory analysis.
A load estimation algorithm based on k-means cluster analysis was developed. The developed algorithm applies cluster centres (of previously clustered load profiles) and distance functions to estimate missing and future measurements. Canberra, Manhattan, Euclidean, and Pearson correlation distances were investigated. Several case studies were implemented using daily and segmented load profiles of aggregated smart meters. Segmented profiles cover a time window that is less than or equal to 24 hours. Simulation results show that the Canberra distance outperforms the other distance functions. Results also show that the segmented cluster centres produce more accurate load estimates than daily cluster centres. Higher accuracy estimates were obtained with cluster centres in the range of 16-24 hours. The developed load estimation algorithm can be integrated with state estimation or other network operational tools to enable better monitoring and control of distribution networks.

This dataset provides details of the input load profiles, output load profiles, and cluster centres, which comprise the average active power demand (measured in kilowatts) at each half-hourly time step during a day. The dataset also includes the values of the Mean Absolute Percentage Error (MAPE) between the actual and the estimated values of the active power demand. A readme.txt file has been included in each folder to help the reader trace the type of information provided within the folders. Research results based upon these data are published at http://dx.doi.org/10.1016/j.apenergy.2016.06.046
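As a rough illustration of the estimation step described above (not the authors' code), the sketch below fills missing half-hourly readings in a load profile by finding the nearest cluster centre under the Canberra distance, computed only over the time steps that were actually observed, and copying the centre's values into the gaps.

```python
import numpy as np

def canberra(a: np.ndarray, b: np.ndarray) -> float:
    """Canberra distance: sum(|a_i - b_i| / (|a_i| + |b_i|)), skipping zero pairs."""
    denom = np.abs(a) + np.abs(b)
    mask = denom > 0
    return float(np.sum(np.abs(a - b)[mask] / denom[mask]))

def estimate_missing(profile: np.ndarray, centres: np.ndarray) -> np.ndarray:
    """Fill NaN half-hourly demand values (kW) from the closest cluster centre.

    `profile` has shape (48,) with NaN where measurements are missing;
    `centres` has shape (k, 48) holding previously derived cluster centres.
    """
    observed = ~np.isnan(profile)
    distances = [canberra(profile[observed], c[observed]) for c in centres]
    nearest = centres[int(np.argmin(distances))]
    filled = profile.copy()
    filled[~observed] = nearest[~observed]
    return filled

# Illustrative use: 3 synthetic centres and a profile missing its last 8 readings.
rng = np.random.default_rng(0)
centres = rng.uniform(0.2, 3.0, size=(3, 48))
profile = centres[1] + rng.normal(0.0, 0.05, size=48)
profile[40:] = np.nan
print(estimate_missing(profile, centres)[40:])
```

Segmented profiles would simply restrict `profile` and `centres` to the corresponding subset of the 48 half-hourly steps.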
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Two different machine-learning methods (classification and regression trees (CART) and random forests (RF)) were applied to the artificially generated data set 4, which comprises two groups, with n = 1000 cases and d = 20 variables. The results represent the medians of test performance measures from 100-fold cross-validation runs using random splits of the data set into training data (2/3 of the data set) and test data (1/3 of the data set). In addition, a negative control data set was created by permuting the variables from the training data set, with the expectation that the machine learning algorithms should not perform group assignment better than chance when trained with such data; otherwise, overfitting could be involved.
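A hedged sketch of that evaluation scheme in Python follows: repeated random 2/3 : 1/3 splits, the median test accuracy over 100 runs, and a negative control built by independently permuting each variable so that any association with the group labels is destroyed (one plausible reading of the permutation described above). The synthetic data stand in for "data set 4", which is not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import ShuffleSplit
from sklearn.tree import DecisionTreeClassifier

def median_test_accuracy(model, X, y, n_splits=100, seed=0):
    """Median accuracy over repeated random 2/3 train : 1/3 test splits."""
    splitter = ShuffleSplit(n_splits=n_splits, test_size=1 / 3, random_state=seed)
    scores = []
    for train_idx, test_idx in splitter.split(X):
        model.fit(X[train_idx], y[train_idx])
        scores.append(model.score(X[test_idx], y[test_idx]))
    return float(np.median(scores))

# Stand-in data: two groups, n = 1000 cases, d = 20 variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Negative control: permute each variable independently to break any group structure.
X_perm = np.column_stack([rng.permutation(X[:, j]) for j in range(X.shape[1])])

for name, model in [("CART", DecisionTreeClassifier(random_state=0)),
                    ("RF", RandomForestClassifier(random_state=0))]:
    print(name,
          "real:", round(median_test_accuracy(model, X, y), 3),
          "permuted:", round(median_test_accuracy(model, X_perm, y), 3))
```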
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Synthetic data used in the case study (section 3) of the paper.
The data in XDMF (Milou.xdmf) + hdf5 (Milou.hdf5) format comprises:
This chapter presents theoretical and practical aspects associated with the implementation of a combined model-based/data-driven approach for failure prognostics based on particle filtering algorithms, in which the current estimate of the state PDF is used to determine the operating condition of the system and predict the progression of a fault indicator, given a dynamic state model and a set of process measurements. In this approach, the task of estimating the current value of the fault indicator, as well as other important changing parameters in the environment, involves two basic steps: the prediction step, based on the process model, and an update step, which incorporates the new measurement into the a priori state estimate. This framework allows estimation of the probability of failure at future time instants (the RUL PDF) in real time, providing information about time-to-failure (TTF) expectations, statistical confidence intervals, and long-term predictions, using for this purpose empirical knowledge about critical conditions for the system (also referred to as the hazard zones). This information is of paramount significance for the improvement of system reliability and cost-effective operation of critical assets, as has been shown in a case study where feedback correction strategies (based on uncertainty measures) were implemented to lengthen the RUL of a rotorcraft transmission system with propagating fatigue cracks on a critical component. Although the feedback loop is implemented using simple linear relationships, it is helpful to provide a quick insight into the manner in which the system reacts to changes in its input signals, in terms of its predicted RUL. The method is able to manage non-Gaussian PDFs since it includes concepts such as nonlinear state estimation and confidence intervals in its formulation. Real data from a fault-seeded test showed that the proposed framework was able to anticipate modifications to the system input to lengthen its RUL. Results of this test indicate that the method was able to successfully suggest the correction that the system required. In this sense, future work will be focused on the development and testing of similar strategies using different input-output uncertainty metrics.
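The prediction/update cycle and the long-term RUL prediction described above can be illustrated with a minimal bootstrap particle filter. The code below is a generic Python sketch under simplified assumptions (a linear degradation model, Gaussian measurement noise, and an arbitrary hazard threshold), not the chapter's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(particles: np.ndarray, process_noise: float = 0.05) -> np.ndarray:
    """Prediction step: propagate each particle through a placeholder degradation model."""
    return particles + 0.1 + rng.normal(0.0, process_noise, size=particles.shape)

def update(particles: np.ndarray, measurement: float, meas_noise: float = 0.2) -> np.ndarray:
    """Update step: weight particles by measurement likelihood, then resample."""
    weights = np.exp(-0.5 * ((measurement - particles) / meas_noise) ** 2)
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

# Filter a short sequence of synthetic fault-indicator measurements.
particles = rng.normal(0.0, 0.1, size=500)
for z in [0.12, 0.25, 0.33, 0.48, 0.61]:
    particles = predict(particles)    # a priori state estimate from the process model
    particles = update(particles, z)  # incorporate the new measurement

# Long-term prediction: propagate without further updates and record when each
# particle crosses a hazard threshold; the crossing times approximate the RUL PDF.
threshold = 1.5
future = particles.copy()
crossing_step = np.zeros(len(particles), dtype=int)
for step in range(1, 200):
    future = predict(future)
    newly_crossed = (crossing_step == 0) & (future >= threshold)
    crossing_step[newly_crossed] = step
rul = crossing_step[crossing_step > 0]
print("median RUL (steps):", int(np.median(rul)),
      "90% interval:", np.percentile(rul, [5, 95]))
```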
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data/code files for the following project: I study the viability of Twitter-based measures of public attitudes about the police. I find that Twitter-based measures track Gallup's measure of public attitudes starting around 2014, when Twitter's user base stabilized, but not before 2014. Increases in Black Lives Matter protests are also associated with increases in negative sentiment measures from Twitter. The findings suggest that Twitter-based measures can be used to acquire granular evaluations of police performance, but that they are more useful for analyzing panel data of multiple agencies over time than for tracking a single geographical area over time.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Abstract: How can we measure and explain the precision of international organizations’ (IOs) founding treaties? We define precision by its negative – imprecision – as indeterminate language that intentionally leaves a wide margin of interpretation for actors after agreements enter into force. Compiling a “dictionary of imprecision” from almost 500 scholarly contributions and leveraging the insight from linguistics that a single vague word renders the whole sentence vague, we introduce a dictionary-based measure of imprecision (DIMI) that is replicable, applicable to all written documents, and yields a continuous measure bounded between zero and one. To demonstrate that DIMI usefully complements existing approaches and advances the study of (im-)precision, we apply it to a sample of 76 IOs. Our descriptive results show high face validity and closely track previous characterizations of these IOs. Finally, we explore patterns in the data, expecting that imprecision in IO treaties increases with the number of states, power asymmetries, and the delegation of authority, while it decreases with the pooling of authority. In a sample of major IOs, we find robust empirical support for the power asymmetries and delegation propositions. Overall, DIMI provides exciting new avenues to study precision in International Relations and beyond. The files uploaded contain the material necessary to replicate the results from the article and online appendix published in: Gastinger, M. and Schmidtke, H. (2022) ‘Measuring precision precisely: A dictionary-based measure of imprecision’, The Review of International Organizations, DOI: 10.1007/s11558-022-09476-y. Please let us know if you spot any mistakes or if we may be of any further assistance!
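One plausible operationalization of a measure like the one described (the published DIMI dictionary and code are not reproduced here) is the share of a treaty's sentences that contain at least one dictionary term, which is naturally bounded between zero and one. The word list below is a toy stand-in, not the authors' dictionary.

```python
import re

# Toy stand-in for the "dictionary of imprecision"; the published dictionary is
# compiled from almost 500 scholarly contributions and is not reproduced here.
VAGUE_TERMS = {"appropriate", "reasonable", "as far as possible", "endeavour", "adequate"}

def imprecision_score(text: str) -> float:
    """Share of sentences containing at least one vague term (bounded in [0, 1])."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    def is_vague(sentence: str) -> bool:
        lowered = sentence.lower()
        return any(term in lowered for term in VAGUE_TERMS)
    return sum(is_vague(s) for s in sentences) / len(sentences)

sample = ("Members shall reduce tariffs by five percent each year. "
          "Parties shall take appropriate measures as far as possible.")
print(imprecision_score(sample))  # 0.5 under this toy dictionary
```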
The data presented in this level 2 orbital product are rain rate estimates expressed as mm/hour determined from brightness temperatures (Tbs) obtained from the Special Sensor Microwave/Imager (SSM/I) flown on the US Defense Meteorological Satellite Program (DMSP) F08 mission. Most of the products generated in this data set are based upon the algorithms developed for the 3rd Algorithm Intercomparison Project (AIP-3) of the Global Precipitation Climatology Project (GPCP). Details of these 15 algorithms and the development of a quality score, which is a measure of confidence in the estimate, along with processing and algorithmic flags, can be found in the Algorithm Theoretical Basis Document (ATBD). The data in this product cover the period from 1987 to 1991 with one file per orbit.
http://inspire.ec.europa.eu/metadata-codelist/LimitationsOnPublicAccess/noLimitations
This dataset comprises a map of groundwater recharge for Africa and a database of the 134 observations used to generate the map. The map shows long-term average annual groundwater recharge in mm per annum for the period 1970 to 2020. It is in the form of a GIS shapefile and is available as a layer package for ESRI and also as a georeferenced TIFF and BIL file for easy exchange with other software. The database contains 134 sites for which ground-based observations of groundwater recharge are available. These 134 sites are from previously published material and have gone through a QA procedure and been accurately geolocated to be included in the dataset. For each record there is a latitude, longitude, recharge estimate, recharge range, time period for the measurement, scale for which the estimate is made, methods used, a confidence rating and the reason for this rating, and the reference from which the data originate. In addition, the database includes for each observation information from other continental datasets, including climate data, landcover, aquifer type, soil group, and the normalized difference vegetation index (NDVI).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
General overview of the criteria used for assessment of methodological quality.
https://dataintelo.com/privacy-and-policy
According to our latest research, the global cross-platform measurement market size in 2024 stands at USD 4.95 billion, driven by the increasing complexity of digital ecosystems and the growing need for unified measurement solutions across multiple channels. The market is experiencing robust growth, with a compound annual growth rate (CAGR) of 11.2% projected from 2025 to 2033. By the end of 2033, the cross-platform measurement market is expected to reach a value of USD 13.1 billion. The primary growth factor fueling this expansion is the relentless demand for accurate, real-time analytics that enable brands and advertisers to optimize their marketing strategies across diverse media platforms.
One of the most significant growth drivers for the cross-platform measurement market is the proliferation of digital channels and connected devices. As consumers engage with content across smartphones, tablets, desktops, smart TVs, and other connected devices, brands face increasing challenges in tracking user journeys and measuring campaign effectiveness holistically. This fragmentation necessitates advanced cross-platform measurement solutions that can aggregate and analyze data from multiple sources, offering a unified view of audience behavior. The integration of artificial intelligence and machine learning technologies into measurement platforms further enhances their accuracy and predictive capabilities, enabling marketers to make data-driven decisions in real time.
Another critical factor contributing to the growth of the cross-platform measurement market is the rising focus on return on investment (ROI) and accountability in advertising spend. Organizations are under pressure to justify marketing budgets and demonstrate tangible business outcomes from their campaigns. Cross-platform measurement tools empower advertisers to attribute conversions and sales to specific touchpoints across the customer journey, providing granular insights into what drives engagement and revenue. This level of transparency is increasingly demanded by both brands and agencies, fostering greater adoption of sophisticated measurement solutions and fueling market expansion.
Moreover, regulatory changes and evolving data privacy standards such as GDPR and CCPA are reshaping the digital measurement landscape. These regulations require organizations to adopt more transparent and compliant data collection practices, which has accelerated the adoption of cross-platform measurement solutions that prioritize privacy and consent management. Vendors are responding by developing platforms with robust security features and transparent data handling processes, ensuring compliance while still delivering actionable insights. This regulatory environment is pushing the industry toward more ethical and reliable measurement methodologies, which in turn is boosting market growth.
Regionally, North America continues to dominate the cross-platform measurement market, accounting for the largest share in 2024 due to its advanced digital advertising ecosystem and early adoption of measurement technologies. Europe follows closely, driven by strong regulatory frameworks and mature media industries. The Asia Pacific region, however, is expected to witness the highest growth rate over the forecast period, fueled by rapid digital transformation, increasing internet penetration, and a burgeoning e-commerce sector. Latin America and the Middle East & Africa are also experiencing steady growth as local businesses embrace digital marketing and seek integrated measurement solutions to stay competitive.
The cross-platform measurement market is primarily segmented by component into software and services, each playing a pivotal role in the overall ecosystem. Software solutions form the backbone of cross-platform measurement, encompassing data aggregation, analytics engines, dashboards, and reporting tools. These platforms are designed to collect data from disparate sources such as websites, mobile apps, connected TVs, and social media channels, consolidating them into a unified view for marketers and advertisers. The software segment is witnessing rapid innovation, with vendors incorporating advanced analytics, artificial intelligence, and machine learning to enhance data accuracy and deliver predictive insights. The growing complexity of digital campaigns and the need for real-time, actio
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This data set contains the replication data for the article "Knowing and doing: The development of information literacy measures to assess knowledge and practice," published in the Journal of Information Literacy in June 2021. The data was collected as part of the contact author's PhD research on information literacy (IL). One goal of this study is to assess students' levels of IL using three measures: 1) a 21-item IL test for assessing students' knowledge of three aspects of IL: evaluating sources, using sources, and seeking information. The test is multiple choice, with four alternative answers for each item. This test is a "KNOW-measure," intended to measure what students know. 2) a source-evaluation measure to assess students' abilities to critically evaluate information sources in practice. This is a "DO-measure," intended to measure what students do in practice, in actual assignments. 3) a source-use measure to assess students' abilities to use sources correctly when writing. This is a "DO-measure," intended to measure what students do in practice, in actual assignments. The data set contains survey results from 626 Norwegian and international students at three levels of higher education: bachelor's, master's, and PhD. The data was collected in Qualtrics from fall 2019 to spring 2020. In addition to the data set and this README file, two other files are available here: 1) test questions in the survey, including answer alternatives (IL_knowledge_tests.txt); 2) details of the assignment-based measures for assessing source evaluation and source use (Assignment_based_measures_assessing_IL_skills.txt). Publication abstract: This study touches upon three major themes in the field of information literacy (IL): the assessment of IL, the association between IL knowledge and skills, and the dimensionality of the IL construct. Three quantitative measures were developed and tested with several samples of university students to assess knowledge and skills for core facets of IL. These measures are freely available, applicable across disciplines, and easy to administer. Results indicate they are likely to be reliable and support valid interpretations. By measuring both knowledge and practice, the tools indicated low to moderate correlations between what students know about IL and what they actually do when evaluating and using sources in authentic, graded assignments. The study is unique in using actual coursework to compare knowing and doing regarding students' evaluation and use of sources. It provides one of the most thorough documentations of the development and testing of IL assessment measures to date. Results also urge us to ask whether the source-focused components of IL – information seeking, source evaluation and source use – can be considered unidimensional constructs or sets of disparate and more loosely related components, and findings support their heterogeneity.
https://dataintelo.com/privacy-and-policy
According to our latest research, the global Marketing Measurement Platform market size reached USD 4.9 billion in 2024. The market is projected to grow at a robust CAGR of 14.2% during the forecast period, propelling the market to a value of USD 15.2 billion by 2033. This impressive growth trajectory is primarily fueled by the increasing demand for data-driven marketing strategies and the proliferation of digital channels, which have made precise campaign measurement and analytics indispensable for organizations worldwide. The market’s rapid expansion is also underpinned by technological advancements in artificial intelligence and machine learning, enabling more sophisticated and actionable marketing insights.
One of the key growth factors driving the Marketing Measurement Platform market is the escalating complexity of consumer journeys across multiple touchpoints and channels. Today’s marketers are tasked with understanding and optimizing interactions that span web, mobile, social media, email, and offline channels, making integrated measurement platforms essential. As organizations strive to maximize ROI and prove marketing value to stakeholders, the demand for platforms that can unify disparate data sources and deliver holistic analytics has surged. Furthermore, the rising adoption of omnichannel marketing strategies, coupled with heightened competition in virtually every sector, has elevated the importance of real-time campaign performance tracking and attribution modeling.
Another significant factor contributing to market growth is the increased emphasis on regulatory compliance and data privacy. With stringent regulations such as GDPR and CCPA reshaping how companies collect, store, and process consumer data, marketing measurement platforms have evolved to provide robust governance, consent management, and secure data handling capabilities. This has not only built greater trust among end-users but has also encouraged enterprises to invest in advanced measurement solutions that ensure compliance while delivering actionable insights. Additionally, the proliferation of cloud-based deployment models has democratized access to sophisticated analytics tools, enabling small and medium enterprises (SMEs) to leverage capabilities previously reserved for large corporations.
The rapid integration of artificial intelligence and machine learning is transforming the capabilities of marketing measurement platforms. These technologies enable predictive analytics, automated reporting, and advanced customer journey mapping, significantly enhancing marketers’ ability to make data-driven decisions. The growing availability of big data and the need for real-time insights have further accelerated the adoption of AI-powered measurement tools. As organizations increasingly prioritize agility and responsiveness in their marketing operations, platforms that offer automated and scalable analytics solutions have become critical assets for maintaining a competitive edge.
Regionally, North America continues to dominate the Marketing Measurement Platform market, accounting for the largest revenue share in 2024, followed closely by Europe and Asia Pacific. North America’s leadership is attributed to the high concentration of technology-driven enterprises, early adoption of digital marketing solutions, and a mature ecosystem of analytics vendors. Meanwhile, Asia Pacific is witnessing the fastest growth, driven by rapid digital transformation, expanding e-commerce sectors, and rising investments in marketing technologies across emerging economies such as China and India. Europe’s market is characterized by a strong focus on data privacy and regulatory compliance, which has spurred demand for measurement platforms with advanced security features. Latin America and the Middle East & Africa are also experiencing steady growth, albeit from a smaller base, as businesses in these regions increasingly recognize the value of data-driven marketing.
The Component segment of the Marketing Measurement Platform market is bifurcated into Software and Services, each playing a critical role in the value chain. Software solutions form the backbone of this market, providing marketers with the tools necessary to collect, analyze, and visualize data from various marketing channels. These platforms offer a suite of functionalities, including campaign tracking, attribution mo