https://www.nist.gov/open/license
In this paper we describe an enhanced three-antenna gain extrapolation technique that determines antenna gain with significantly fewer data points and at closer distances than the well-established traditional three-antenna gain extrapolation technique, which has been in use for over five decades. Whereas the traditional technique purposely ignores high-order scattering so as to isolate only the direct antenna-to-antenna coupling, we show that the enhanced technique is obtained by incorporating third-order scattering. The theoretical foundation based on third-order scattering is developed, and experimental results are presented comparing the enhanced and traditional techniques for two sets of internationally recognized NIST reference standard gain horn antennas at X-band and Ku-band. We show that with the enhanced technique, gain values for these antennas are readily obtained to within stated uncertainties of ±0.07 dB using as few as 10 data points per antenna pair, as opposed to the approximately 4000 to 8000 data points per antenna pair needed with the traditional technique. Furthermore, with the enhanced technique, antenna-to-antenna distances can be reduced by a factor of three, and up to a factor of six in some cases, compared to the traditional technique, significantly reducing the overall size requirement of facilities used to perform gain extrapolation measurements.
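The abstract does not reproduce the underlying equations, but the heart of any extrapolation technique of this kind is a least-squares fit of the measured coupling versus antenna separation to a truncated series in inverse distance, which is then extrapolated to the far-field limit. The sketch below illustrates only that generic fitting step; the distances, synthetic |S21| values, and truncation order are assumptions for illustration, not the NIST implementation.

```python
import numpy as np

# Hedged sketch of the core extrapolation fit (not the NIST code): the
# measured coupling magnitude |S21(R)| times R is fit to a low-order
# polynomial in 1/R; the intercept (the 1/R -> 0 limit) estimates the
# direct far-field coupling from which gain is computed.
R = np.linspace(2.0, 6.0, 50)                      # separations (m), assumed
s21 = 0.05 / R * (1 + 0.02 / R + 0.004 / R**2)     # synthetic |S21| data

order = 2                                          # truncation order, assumed
coeffs = np.polyfit(1.0 / R, s21 * R, order)       # fit in powers of 1/R
a0 = coeffs[-1]                                    # constant term = far-field limit
print(f"extrapolated far-field coupling A0 = {a0:.5f}")
```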
The total amount of data created, captured, copied, and consumed globally is forecast to increase rapidly, reaching *** zettabytes in 2024. Over the next five years up to 2028, global data creation is projected to grow to more than *** zettabytes. In 2020, the amount of data created and replicated reached a new high. The growth was higher than previously expected, driven by increased demand during the COVID-19 pandemic, as more people worked and learned from home and used home entertainment options more often. Storage capacity is also growing. Only a small percentage of this newly created data is kept, though: just * percent of the data produced and consumed in 2020 was saved and retained into 2021. In line with the strong growth of the data volume, the installed base of storage capacity is forecast to increase, growing at a compound annual growth rate of **** percent over the forecast period from 2020 to 2025. In 2020, the installed base of storage capacity reached *** zettabytes.
This Level 1 (L1) dataset contains the Version 2.1 geo-located Delay Doppler Maps (DDMs) calibrated into Power Received (Watts) and Bistatic Radar Cross Section (BRCS), expressed in units of meters squared, from the Delay Doppler Mapping Instrument aboard the CYGNSS satellite constellation. This version supersedes Version 2.0. Other useful scientific and engineering measurement parameters include the DDM of Normalized Bistatic Radar Cross Section (NBRCS), the Delay Doppler Map Average (DDMA) of the NBRCS near the specular reflection point, and the Leading Edge Slope (LES) of the integrated delay waveform. The L1 dataset contains a number of other engineering and science measurement parameters, including sets of quality flags/indicators, error estimates, and bias estimates, as well as a variety of orbital, spacecraft/sensor health, timekeeping, and geolocation parameters. At most, 8 netCDF data files (each file corresponding to a unique spacecraft in the CYGNSS constellation) are provided each day; under nominal conditions, there are typically 6-8 spacecraft retrieving data each day, but this can reach 8 spacecraft under special circumstances in which higher than normal retrieval frequency is needed (e.g., during tropical storms and/or hurricanes). Latency is approximately 6 days (or better) from the last recorded measurement time. The Version 2.1 release represents the second science-quality release. Here is a summary of improvements in the Version 2.1 data release: 1) data are now available when the CYGNSS satellites are rolled away from nadir during orbital high beta-angle periods, resulting in a significant amount of additional data; 2) corrections to coordinate frames result in more accurate estimates of receiver antenna gain at the specular point; 3) improved calibration for analog-to-digital conversion results in better consistency between CYGNSS satellite measurements at nearly the same location and time; 4) improved GPS EIRP and transmit antenna pattern calibration results in significantly reduced PRN-dependence in the observables; 5) improved estimation of the location of the specular point within the DDM; 6) an altitude-dependent scattering area is used to normalize the scattering cross section (v2.0 used a simpler scattering area model that varied with incidence and azimuth angles but not altitude); 7) corrections were added for noise floor-dependent biases in scattering cross section and leading edge slope of the delay waveform observed in the v2.0 data. Users should also note that the receiver antenna pattern calibration is not applied per-DDM-bin in this v2.1 release.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Included here are data from Study 1 (Sheet 1 in .xlsx) and Study 2 (Sheet 2 in .xlsx) of a research article published in the journal PLOS One. The title of the article is: "Social Bonds and Exercise: Evidence for a Reciprocal Relationship" and the authors are Arran Davis (University of Oxford), Jacob Taylor (University of Oxford), and Dr. Emma Cohen (University of Oxford, Wadham College).
Abstract: In two experimental studies, we investigated mechanisms hypothesized to underpin two pervasive and interrelated phenomena: that certain forms of group movement and exercise lead to social bonding and that social bonding can lead to enhanced exercise performance. In Study 1 we manipulated synchrony and exercise intensity among rowers and found that, compared with low intensity exercise, moderate intensity exercise led to significantly higher levels of cooperation in an economic game; no effect of synchrony vs. non-synchrony was found. In Study 2, an elite, highly bonded team of rugby players participated in solo, synchronized, and non-synchronized warm-up sessions; participants' anaerobic performance significantly improved after the brief synchronous warm-up relative to a non-synchronous warm-up. The findings substantiate claims concerning the reciprocal links between group exercise and social bonding, and may help to explain the ubiquity of collective physical activity across cultural domains as varied as play, ritual, sport, and dance. Please see the research article for a description of the methods used in Study 1 and Study 2. Data Labels – Study 1: Participant #: The number assigned to each participant for consent forms, bookkeeping, analysis, etc. (n.b., participants 51-53 should be excluded from data analysis as they did not follow the experimental procedure). Gaps in participant numbers are due to cancelled sessions resulting from last-minute participant cancellations. Group: This is the experimental trial or group with which each participant did the study. Some groups have 2 participants and some have 3, depending on whether there was a cancellation and a confederate was used to fill in. Intensity: 0 represents low intensity rowing and 1 represents high intensity rowing. Synchrony: 0 represents asynchronous rowing and 1 represents synchronous rowing. Condition: This is the participant’s experimental condition. 1 represents asynchronous, low intensity rowing. 2 represents synchronous, low intensity rowing. 3 represents asynchronous, high intensity rowing. 4 represents synchronous, high intensity rowing. Group Fund Contribution: this is how much in GBP the participant donated to the group fund in the public goods game (from £0-£5). IOS: this is the participant’s score on the modified Inclusion of the Other in the Self Scale (from 1-7, with 7 representing the highest interdependent self-construal). PWB: this is the participant’s score on the “psychological well-being” component of the Subjective Exercise Experience Scale. PD: this is the participant’s score on the “psychological distress” component of the Subjective Exercise Experience Scale. FAT: this is the participant’s score on the “fatigue” component of the Subjective Exercise Experience Scale. Cooperation: this is the participant’s answer to the question “How much did you and the other participants cooperate during the experiment?” on a 7-point Likert Scale (with 1 being “not at all”, 4 being “moderately”, and 7 being “very much so”). Similarity: this is the participant’s answer to the question “How similar are you to the other participants?” on a 7-point Likert Scale (with 1 being “not at all”, 4 being “moderately”, and 7 being “very much so”). Trust: this is the participant’s answer to the question “How much do you trust the other participants?” on a 7-point Likert Scale (with 1 being “not at all”, 4 being “moderately”, and 7 being “very much so” – see attached post-experiment questionnaire).
Like: this is the participant’s answer to the question “How much do you like the other participants?” on a 7-point Likert Scale (with 1 being “not at all”, 4 being “moderately”, and 7 being “very much so” – see attached post-experiment questionnaire). Same Team: this is the participant’s answer to the question “How much do you feel that you and the other participants were on the same team?” on a 7-point Likert Scale (with 1 being “not at all”, 4 being “moderately”, and 7 being “very much so”). Difficulty: this is the participant’s answer to the question “How difficult was the rowing trial?” on a 7-point Likert Scale (with 1 being “not at all”, 4 being “moderately”, and 7 being “very much so”). Know: this is the participant’s answer to the question “How much do you know the other participants? (Please make a circle for each of the other two participants – you can circle the same number twice).” on a 7-point Likert Scale (with 1 being “not at all”, 4 being “moderately”, and 7 being “very much so”). Pain - Pre: this is the participant’s pre-experiment pain threshold measured in mmHg (see research article). Pain - Post: this is the participant’s post-experiment pain threshold measured in mmHg (see research article). Heart BPM: this is the participant’s average heart rate in beats per minute (BPM) during the rowing portion of the experiment. Confederate: this is whether or not a confederate was used. This was done when a participant did not show up for the experiment and an extra rower was needed. 0 represents trials in which the confederate was not used, 1 represents trials in which the confederate was used. Pain - Change: this is the difference between the participant’s pre-experiment and post-experiment pain thresholds, measured in mmHg (see research article). Sex: this is the participant’s sex - 0 represents female and 1 represents male. Mixed Group: this is whether the group with which the participant rowed was same sex or mixed sex – 0 represents same sex groups (all male or all female) and 1 represents mixed sex groups. Bondedness Factor Score: a principal components analysis (PCA) was conducted on the IOS scale and the questions from the post-experiment questionnaire on similarity, trust, cooperation, team work, and how much participants liked each other (see research article) due to the potential that they could represent a single underlying construct related to bondedness. Indeed, the PCA revealed that one factor should be extracted from these questions. It was theorized that this factor represents a single underlying construct related to bondedness. The number in this column is the participant’s score on this factor, with larger numbers representing higher perceived bondedness with the other group members in the participant’s trial. Cooperation level: another possible way to analyze PGG contributions is through an ordinal logistic regression. Cooperation levels offer potential bins for participant group fund contributions in the public goods game: ‘low cooperation’ (£0 - £1.67), ‘medium cooperation’ (£1.68 - £3.33), and ‘high cooperation’ (£3.34 - £5) categories. 1 represents low cooperation, 2 represents medium cooperation, and 3 represents high cooperation (a small binning sketch follows these data labels). Data Labels – Study 2: Participant #: The number assigned to each participant for consent forms, bookkeeping, analysis, etc. Condition: This is the participant’s experimental condition. 1 represents solo warm-up manipulation, 2 represents asynchronous warm-up manipulation, and 3 represents synchronous warm-up manipulation.
In all conditions, warm-up manipulations were immediately followed by the EAET performance test. EAET Performance: this is the performance in the England Anaerobic Endurance Test, measured in time (seconds). HR Max: this denotes the participant’s maximum heart rate in beats per minute (BPM) for each condition. RPE: this is the participant’s score on the Borg Scale for Rating Perceived Exertion (with 1 being “nothing at all” and 10 being “impossible”). FAT: this is the participant’s score on the “fatigue” component of the Subjective Exercise Experience Scale. PD: this is the participant’s score on the “psychological distress” component of the Subjective Exercise Experience Scale. PWB: this is the participant’s score on the “psychological well-being” component of the Subjective Exercise Experience Scale. Condition Order: this is the order in which the participants performed each condition. Pain Change – Pre: this is the change in pain threshold after the warm-up manipulation and before the EAET, calculated against a baseline pain threshold measure recorded before exercise. Pain Change – Post: this is the change in pain threshold after the EAET performance test, calculated against a baseline pain threshold measure recorded before exercise.
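As a worked illustration of the Study 1 codebook above, here is a hedged sketch deriving the Pain - Change and Cooperation level columns from the raw variables; the column names follow the labels above, the example values are invented, and the post-minus-pre sign convention for Pain - Change is an assumption.

```python
import pandas as pd

# Hedged sketch (invented values) of two derived Study 1 variables:
# Pain - Change as post minus pre threshold (sign convention assumed),
# and Cooperation level as ordinal bins of the PGG contribution in GBP.
df = pd.DataFrame({
    "Pain - Pre": [210.0, 185.0, 240.0],
    "Pain - Post": [230.0, 180.0, 255.0],
    "Group Fund Contribution": [1.50, 2.75, 4.20],
})

df["Pain - Change"] = df["Pain - Post"] - df["Pain - Pre"]

# Bins from the codebook: low (0-1.67), medium (1.68-3.33), high (3.34-5)
df["Cooperation level"] = pd.cut(
    df["Group Fund Contribution"],
    bins=[-0.01, 1.67, 3.33, 5.00],
    labels=[1, 2, 3],  # 1 = low, 2 = medium, 3 = high cooperation
)
print(df)
```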
https://www.marketreportanalytics.com/privacy-policy
The Higher Education Data Management Service market is experiencing robust growth, driven by the increasing need for efficient data management within educational institutions. The rising adoption of cloud-based solutions, coupled with the expanding use of data analytics for improved decision-making and personalized learning experiences, is fueling market expansion. An illustrative Compound Annual Growth Rate (CAGR) of 12% points to a significant market expansion from an estimated $5 billion in 2025 to potentially over $10 billion by 2033. This growth is further supported by the increasing digitization of higher education processes and the rising demand for advanced data security and compliance solutions. Key players like Ellucian, Oracle, Workday, and Blackboard are actively shaping market dynamics through innovation and strategic partnerships, further driving competition and service enhancements. While the on-premises segment currently holds a substantial market share, cloud-based solutions are rapidly gaining traction due to their scalability, cost-effectiveness, and accessibility. Geographic segmentation reveals that North America currently dominates the market, owing to high technology adoption rates and significant investments in educational infrastructure. However, the Asia-Pacific region is poised for significant growth, fueled by increasing government initiatives promoting digital education and rising student enrollment. Europe also represents a substantial market, driven by technological advancements and a growing demand for efficient data management in higher education institutions. Market restraints include challenges related to data integration across disparate systems, concerns about data security and privacy, and the need for robust IT infrastructure within institutions. Overcoming these challenges through innovative solutions and robust cybersecurity measures will be crucial for sustained market growth in the coming years.
https://spdx.org/licenses/CC0-1.0.html
Early seedling emergence can increase plant fitness under competition. Seed oil composition (the types and relative amounts of fatty acids in the oils) may play an important role in determining emergence timing and early growth rate in oilseeds. Saturated fatty acids provide more energy per carbon atom than unsaturated fatty acids but have substantially higher melting points (when chain length is held constant). This characteristic forms the basis of an adaptive hypothesis that lower melting point seeds (lower proportion of saturated fatty acids) should be favored under colder germination temperatures due to earlier germination and faster growth before photosynthesis, while at warmer germination temperatures, seeds with a higher amount of energy (higher proportion of saturated fatty acids) should be favored. To assess the effects of seed oil melting point on timing of seedling emergence and fitness, high- and low-melting point lines from a recombinant inbred cross of Arabidopsis thaliana were competed in a fully factorial experiment at warm and cold temperatures with two different density treatments. Emergence timing between these lines was not significantly different at either temperature, which aligned with warm temperature predictions, but not cold temperature predictions. Under all conditions, plants competing against high-melting point lines had lower fitness relative to those against low-melting point lines, which matched expectations for undifferentiated emergence times.
[1] Status is determined using the baseline, final, and target value. The statuses used in Healthy People 2020 were:
1 - Target met or exceeded—One of the following applies: (i) At baseline, the target was not met or exceeded, and the most recent value was equal to or exceeded the target. (The percentage of targeted change achieved was equal to or greater than 100%.); (ii) The baseline and most recent values were equal to or exceeded the target. (The percentage of targeted change achieved was not assessed.)
2 - Improved—One of the following applies: (i) Movement was toward the target, standard errors were available, and the percentage of targeted change achieved was statistically significant; (ii) Movement was toward the target, standard errors were not available, and the objective had achieved 10% or more of the targeted change.
3 - Little or no detectable change—One of the following applies: (i) Movement was toward the target, standard errors were available, and the percentage of targeted change achieved was not statistically significant; (ii) Movement was toward the target, standard errors were not available, and the objective had achieved less than 10% of the targeted change; (iii) Movement was away from the baseline and target, standard errors were available, and the percent change relative to the baseline was not statistically significant; (iv) Movement was away from the baseline and target, standard errors were not available, and the objective had moved less than 10% relative to the baseline; (v) No change was observed between the baseline and the final data point.
4 - Got worse—One of the following applies: (i) Movement was away from the baseline and target, standard errors were available, and the percent change relative to the baseline was statistically significant; (ii) Movement was away from the baseline and target, standard errors were not available, and the objective had moved 10% or more relative to the baseline.
5 - Baseline only—The objective only had one data point, so progress toward target attainment could not be assessed. Note that if additional data points did not meet the criteria for statistical reliability, data quality, or confidentiality, the objective was categorized as baseline only.
6 - Informational—A target was not set for this objective, so progress toward target attainment could not be assessed.
[2] The final value is generally based on data available on the Healthy People 2020 website as of January 2020. For objectives that are continuing into Healthy People 2030, more recent data are available on the Healthy People 2030 website: https://health.gov/healthypeople.
[3] For objectives that moved toward their targets, movement toward the target was measured as the percentage of targeted change achieved (unless the target was already met or exceeded at baseline):
Percentage of targeted change achieved = (Final value - Baseline value) / (HP2020 target - Baseline value) * 100
[4] For objectives that were not improving, did not meet or exceed their targets, and did not move towards their targets, movement away from the baseline was measured as the magnitude of the percent change from baseline:
Magnitude of percent change from baseline = |Final value - Baseline value| / Baseline value * 100
[5] Statistical significance was tested when the objective had a target, at least two data points (of unequal value), and available standard errors of the data. A normal distribution was assumed. All available digits were used to test statistical significance. Statistical significance of the percentage of targeted change achieved or the magnitude of the percentage change from baseline was assessed at the 0.05 level using a normal one-sided test. A computational sketch of notes [3]-[5] follows these notes.
[6] For more information on the Healthy People 2020 methodology for measuring progress toward target attainment and the elimination of health disparities, see: Healthy People Statistical Notes, no 27; available from: https://www.cdc.gov/nchs/data/sta
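For concreteness, the sketch below implements the arithmetic of notes [3]-[5] in Python; the function names are illustrative, and the one-sided normal test assumes the standard error is supplied, as the notes describe.

```python
# Hedged sketch of the progress-status arithmetic in notes [3]-[5] above.
# Function and variable names are illustrative, not from Healthy People.

def pct_targeted_change(baseline: float, final: float, target: float) -> float:
    """Note [3]: percentage of targeted change achieved."""
    return (final - baseline) / (target - baseline) * 100.0

def pct_change_from_baseline(baseline: float, final: float) -> float:
    """Note [4]: magnitude of percent change from baseline."""
    return abs(final - baseline) / baseline * 100.0

def one_sided_significant(estimate: float, se: float) -> bool:
    """Note [5]: one-sided normal test at the 0.05 level (SE assumed known)."""
    z = estimate / se
    return z > 1.6449  # one-sided 0.05 critical value of the normal

# Example: baseline 20%, final 17%, target 15% -> 60% of targeted change.
print(pct_targeted_change(20.0, 17.0, 15.0))  # 60.0
```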
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Acceptance of students to medical school places considerable emphasis on performance in standardized tests and undergraduate grade point average (uGPA). Traditionally, applicants may be judged as a homogeneous population according to simple quantitative thresholds that implicitly assume a linear relationship between scores and academic success. This ‘one-size-fits-all’ approach ignores the notion that individuals may show distinct patterns of achievement and follow diverse paths to success. In this study, we examined a dataset composed of 53 variables extracted from the admissions application records of 1,088 students matriculating to NYU School of Medicine between the years 2006–2014. We defined training and test groups and applied K-means clustering to search for distinct groups of applicants. Building an optimized logistic regression model, we then tested the predictive value of this clustering for estimating the success of applicants in medical school, aggregating eight performance measures during the subsequent medical school training into a success factor. We found evidence for four distinct clusters of students, which we termed ‘signatures’, that differ most substantially according to the absolute level of the applicant’s uGPA and its trajectory over the course of undergraduate education. The ‘risers’ signature showed a relatively higher uGPA and a steeper trajectory; the other signatures showed each remaining combination of these two main factors: ‘improvers’ a relatively lower uGPA with a steeper trajectory; ‘solids’ a higher uGPA with a flatter trajectory; ‘statics’ both a lower uGPA and a flatter trajectory. Examining the success index across signatures, we found that the risers and the statics have significantly higher and lower likelihoods of quantifiable success in medical school, respectively. We also found that each signature has a unique set of features that correlate with its success in medical school. The big data approach presented here can more sensitively uncover success potential since it takes into account the inherent heterogeneity within the student population.
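The abstract names two modelling stages (K-means clustering, then an optimized logistic regression); the hedged sketch below shows one minimal way such a pipeline can be wired together with scikit-learn, using synthetic stand-ins for the 53 application variables and the aggregated success factor. It is not the authors' actual pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Synthetic stand-ins: 1,088 applicants x 53 features, binary success flag.
rng = np.random.default_rng(0)
X = rng.normal(size=(1088, 53))
y = rng.integers(0, 2, size=1088)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)

# Stage 1: K-means with four clusters ("signatures").
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scaler.transform(X_train))

# Stage 2: logistic regression with cluster membership as an extra predictor
# (one-hot encoding the cluster label would be the more careful choice).
train_sig = km.predict(scaler.transform(X_train)).reshape(-1, 1)
test_sig = km.predict(scaler.transform(X_test)).reshape(-1, 1)
clf = LogisticRegression(max_iter=1000).fit(
    np.hstack([scaler.transform(X_train), train_sig]), y_train
)
print("held-out accuracy:", clf.score(np.hstack([scaler.transform(X_test), test_sig]), y_test))
```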
https://dataintelo.com/privacy-and-policy
The global pour point tester market size was valued at approximately USD 57 million in 2023 and is projected to reach around USD 98 million by 2032, reflecting a Compound Annual Growth Rate (CAGR) of 6.4%. The growth in this market is influenced by the expanding industrial applications and the increasing importance of ensuring product quality under varying temperature conditions.
One of the primary growth factors for the pour point tester market is the ever-increasing demand from the petroleum industry. Pour point testers are essential in the petroleum sector for determining the lowest temperature at which petroleum products flow, which is critical for operational efficiency and safety. The burgeoning oil and gas industry, coupled with the ongoing need for high-quality fuel and lubricants, has significantly driven the demand for pour point testers. Additionally, advancements in technology have led to the development of more precise and user-friendly automatic pour point testers, further boosting market growth.
Another significant driver is the chemical industry, where pour point testers are used to ensure the quality and performance of various chemical products under low-temperature conditions. As the global chemical industry continues to expand, especially in developing regions, the necessity for pour point testing equipment is increasing. Moreover, stringent regulatory standards regarding the quality of chemical products are also propelling the market. These regulations necessitate accurate testing to ensure that products meet the required standards, thereby driving the demand for pour point testers.
The pharmaceutical industry also plays a crucial role in the growth of the pour point tester market. In pharmaceuticals, the stability and efficacy of certain products at low temperatures are critical. Pour point testers help in assessing these attributes, ensuring that pharmaceutical products are safe and effective for use. With the increasing focus on biosimilars and biopharmaceuticals, which require precise temperature control, the need for pour point testers is anticipated to rise. Additionally, the growing investment in pharmaceutical research and development is expected to bolster the market further.
Regionally, the Asia Pacific is expected to witness substantial growth in the pour point tester market. The rapid industrialization and urbanization in countries like China and India have led to increased demand for petroleum, chemicals, and pharmaceuticals. Consequently, the need for pour point testers in these industries is on the rise. Furthermore, favorable government policies and the presence of a large number of manufacturing units in the region are likely to contribute significantly to market growth.
In the pour point tester market, the product type segment is divided into automatic pour point testers and manual pour point testers. Automatic pour point testers are witnessing a higher demand due to their enhanced accuracy, efficiency, and ease of use. These devices are equipped with advanced technologies that allow for precise measurements, which are crucial in industries like petroleum and pharmaceuticals. The automation of the testing process reduces human error and increases throughput, making them a preferred choice for large-scale industrial applications.
Manual pour point testers, on the other hand, are typically used in smaller operations and research settings where the volume of testing is relatively low. While they are less expensive than their automatic counterparts, manual pour point testers require more skilled technicians to operate and interpret the results accurately. Despite these limitations, manual testers continue to hold a significant market share due to their cost-effectiveness and applicability in various small-scale operations.
The trend towards automation is expected to continue, with automatic pour point testers capturing a larger market share over the forecast period. This shift is driven by the increasing demand for high-throughput testing in large industrial settings and the continuous advancements in automation technologies. Moreover, the integration of advanced features such as digital displays, automated temperature control, and data logging capabilities in automatic pour point testers is making them more attractive to end-users.
However, the market for manual pour point testers is not expected to diminish entirely. There is still a considerable demand for these devices in educational institutions and small-scale research settings.
https://dataintelo.com/privacy-and-policy
The global Gigabit Ethernet Adapters market, valued at an estimated $2.5 billion in 2023, is anticipated to grow significantly, reaching $5.8 billion by 2032, driven by a robust CAGR of 10.2% during the forecast period. This growth is primarily fueled by the increasing demand for high-speed internet connectivity and the proliferation of data-intensive applications across various sectors.
The growth of the Gigabit Ethernet Adapters market is underpinned by several key factors. One major driver is the escalating need for enhanced network performance and speed in both residential and commercial settings. As more households and businesses rely on connected devices and applications that demand high bandwidth, such as 4K streaming, online gaming, and cloud services, the necessity for efficient and reliable Ethernet solutions becomes paramount. Additionally, the growing trend of remote work and the subsequent increase in home office setups have significantly contributed to the rising demand for Gigabit Ethernet Adapters, as consumers seek to optimize their home networks for professional use.
Another pivotal growth factor is the expansion of data centers and the increasing adoption of cloud computing services. Data centers are the backbone of modern digital infrastructure, requiring robust and high-speed networking solutions to handle vast amounts of data traffic efficiently. Gigabit Ethernet Adapters play a crucial role in ensuring seamless data transmission and enhancing the overall performance of data centers. Furthermore, the shift towards virtualization and the implementation of software-defined networking (SDN) technologies are driving the need for scalable and flexible Ethernet solutions, further propelling market growth.
Technological advancements in Ethernet technology, such as the development of 10 Gigabit Ethernet (10GbE) and beyond, are also contributing to market expansion. These advancements provide improved data transfer rates, reduced latency, and enhanced reliability, making them highly desirable for applications that require high-speed connectivity, such as industrial automation, telecommunications, and advanced consumer electronics. The continuous innovation in Ethernet adapters, including the integration of advanced features like power over Ethernet (PoE) and improved security protocols, is expected to create new growth opportunities in the market.
In the realm of network connectivity, the role of an Ethernet Access Point is becoming increasingly significant. These devices serve as a bridge between wired and wireless networks, facilitating seamless data transmission and enhancing network performance. As businesses and homes continue to integrate more smart devices, the demand for efficient and reliable Ethernet Access Points is on the rise. They provide a stable connection that is often more secure and faster than traditional Wi-Fi, making them ideal for environments where high-speed data transfer is crucial. The integration of Ethernet Access Points in network setups ensures that all connected devices can communicate effectively, supporting the growing trend of interconnected smart technologies.
Geographically, the Gigabit Ethernet Adapters market exhibits varying growth patterns across different regions. North America is expected to maintain a dominant position due to the region's early adoption of advanced networking technologies and the presence of major market players. Asia Pacific is projected to witness the highest growth rate, driven by rapid urbanization, increasing internet penetration, and the burgeoning demand for high-speed connectivity in emerging economies like China and India. Europe, Latin America, and the Middle East & Africa are also anticipated to contribute to the market's growth, albeit at a more moderate pace, owing to varying levels of technological adoption and infrastructure development.
The Gigabit Ethernet Adapters market can be segmented by product type into USB Ethernet Adapters, PCI Ethernet Adapters, Thunderbolt Ethernet Adapters, and Others. Each of these product types caters to different usage scenarios and offers unique benefits, contributing to the overall growth of the market.
USB Ethernet Adapters are widely favored for their portability and ease of use. These adapters can be easily connected to various devices, including laptops and desktops, and even devices that lack a built-in Ethernet port.
This is a MD iMAP hosted service. Find more information at http://imap.maryland.gov. The Maryland Department of the Environment (MDE) identifies and maintains locations of significant wastewater treatment plants throughout Maryland. Feature Service Link: https://mdgeodata.md.gov/imap/rest/services/Environment/MD_PointSourceDischarges/FeatureServer ADDITIONAL LICENSE TERMS: The Spatial Data and the information therein (collectively "the Data") is provided "as is" without warranty of any kind, either expressed, implied, or statutory. The user assumes the entire risk as to quality and performance of the Data. No guarantee of accuracy is granted, nor is any responsibility for reliance thereon assumed. In no event shall the State of Maryland be liable for direct, indirect, incidental, consequential, or special damages of any kind. The State of Maryland does not accept liability for any damages or misrepresentation caused by inaccuracies in the Data or as a result of changes to the Data, nor is there responsibility assumed to maintain the Data in any manner or form. The Data can be freely distributed as long as the metadata entry is not modified or deleted. Any data derived from the Data must acknowledge the State of Maryland in the metadata.
Planck is ESA's third generation space-based cosmic microwave background experiment, operating at nine frequencies between 30 and 857 GHz; it was launched in May 2009. Planck provides all-sky survey data at all nine frequencies, with higher resolution at the six higher frequencies. It provides substantially higher resolution and sensitivity than WMAP. Planck orbits about the L2 Lagrange point. These data come from the legacy Release 3 of the Planck mission. These products include polarization information available to visualize in several ways. The data contain the Stokes parameters I, Q, and U, and in addition to these, it is possible to visualize the polarized intensity PI = sqrt(Q^2 + U^2) and the polarization angle PA = (1/2) atan(U/Q). Note that at their native resolution of a few arcmin (depending on the frequency), these polarization data will appear very noisy. In order to visualize the polarization information, it is highly recommended that the data be resampled with the "Clip (intensive)" sampler and the result smoothed. That sampler will average all the data points within a given output pixel rather than using the more common nearest neighbor. It will do this averaging before computing either PI or PA to reduce the effects of the noise. This sampler is set as the default for this survey. If the output pixel resolution is not significantly larger than the native resolution, a smoothing of the output pixels will also be necessary. Note also that Q and U are defined relative to a given coordinate system, in this case Galactic, and following the CMB convention (not the IAU); see https://lambda.gsfc.nasa.gov/product/about/pol_convention.cfm. This means that they will appear to vary rapidly near the pole of that coordinate system. The PI and PA will be computed correctly for any position on the sky. The original data are stored in HEALPix pixels. SkyView treats HEALPix as a standard projection but assumes that the HEALPix data is in a projection plane with a rotation of -45 degrees. The rotation transforms the HEALPix pixels from diamonds to squares so that the boundaries of the pixels are treated properly. The special HealPixImage class is used so that SkyView can use the HEALPix FITS files directly. The HealPixImage simulates a rectangular image but translates the pixels from that image to the nested HEALPix structure that is used by the HEALPix data. Users of the SkyView Jar will be able to access this survey through the web, but performance may be poor since the FITS files are 150 to 600 MB in size and must be completely read in. SkyView will not automatically cache these files on the user machine as is done for non-HEALPix surveys. Data from frequencies of 100 GHz or higher are stored in a HEALPix file with a resolution of approximately 1.7' while lower frequencies are stored with half that resolution, approximately 3.4'. Provenance: Data split using skyview.survey.HealPixSplitter from the PR3 distributed by the Planck Science team. This is a service of NASA HEASARC.
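Given the definitions above, computing the derived polarization quantities from Stokes Q and U is straightforward; the sketch below (synthetic values, not SkyView code) uses atan2 so the angle lands in the correct quadrant, and the noise argument in the text is why Q and U should be averaged over output pixels before PI and PA are formed.

```python
import numpy as np

# Hedged sketch of the polarization quantities defined above, computed
# from Stokes Q and U values (synthetic stand-ins for HEALPix pixels).
# Per the text, average Q and U over the output pixel *before* forming
# PI and PA, to reduce the noise bias in these nonlinear quantities.
Q = np.array([0.8, -0.3, 0.1])
U = np.array([0.2, 0.4, -0.5])

PI = np.sqrt(Q**2 + U**2)        # polarized intensity
PA = 0.5 * np.arctan2(U, Q)      # polarization angle (radians, CMB convention)

print(PI, np.degrees(PA))
```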
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The prevalence of low birth weight (LBW) remains disproportionately high in sub-Saharan Africa. LBW is a significant risk factor for infant mortality and is associated with long-term physical and cognitive impairments. While antenatal care (ANC) has the potential to improve birth outcomes, there is limited causal evidence on its impact in sub-Saharan settings. This study estimates the causal effects of ANC on birth weight and LBW in The Gambia using data from the 2019–20 Gambia Demographic and Health Survey (GDHS). The GDHS recorded birth weight for 8,362 children born in the five years preceding the survey; after excluding cases with missing data, the final analytical sample included 4,443 children. Multivariable regression and propensity score matching (PSM) methods were used to assess the relationship between ANC and birth outcomes, controlling for child sex and birth order, maternal age and education, household wealth, marital status, rural residence, number of children under five, and regional fixed effects. Multivariable regression estimates indicate that each additional ANC visit is associated with a 22-gram increase in birth weight and a 1.2 percentage point reduction in the likelihood of LBW. Mothers who attended four or more ANC visits (ANC 4+) had a 3.9 percentage point lower risk of delivering an LBW infant compared to those with fewer visits. PSM analysis corroborated these findings, showing that ANC 4+ was associated with a 71-gram increase in birth weight and a 4.7 percentage point reduction in LBW probability. These findings highlight the importance of health policies that promote adequate ANC coverage to reduce the high burden of LBW in resource-limited settings.
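As a minimal sketch of the PSM step described in the abstract (synthetic data, illustrative one-to-one nearest-neighbor matching on the propensity score; not the authors' GDHS code):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Hedged sketch of propensity score matching: estimate each mother's
# propensity to attend 4+ ANC visits from covariates, match each treated
# case to the nearest-propensity control, and compare mean birth weight.
# All data below are synthetic stand-ins, not the GDHS sample.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))                # covariates (wealth, education, ...)
treat = rng.integers(0, 2, size=500)         # 1 = ANC 4+, 0 = fewer visits
bw = 3000 + 60 * treat + rng.normal(0, 400, size=500)  # birth weight (g)

ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]

treated, control = np.where(treat == 1)[0], np.where(treat == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched = control[idx.ravel()]

att = bw[treated].mean() - bw[matched].mean()  # effect on the treated
print(f"matched ANC 4+ effect on birth weight: {att:.0f} g")
```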
In 2024, the number of data compromises in the United States stood at 3,158 cases. Meanwhile, over 1.35 billion individuals were affected in the same year by data compromises, including data breaches, leakage, and exposure. While these are three different events, they have one thing in common: as a result of all three incidents, sensitive data is accessed by an unauthorized threat actor. Industries most vulnerable to data breaches: Some industry sectors usually see more significant cases of private data violations than others. This is determined by the type and volume of the personal information that organizations in these sectors store. In 2024, financial services, healthcare, and professional services were the three industry sectors that recorded the most data breaches. Overall, the number of data breaches in some industry sectors in the United States, such as healthcare, has gradually increased within the past few years, although other sectors saw a decrease. Largest data exposures worldwide: In 2020, an adult streaming website, CAM4, experienced a leakage of nearly 11 billion records, by far the most extensive reported data leakage. This case, though, is unique because cyber security researchers found the vulnerability before the cyber criminals did. The second-largest data breach is the Yahoo data breach, dating back to 2013. The company first reported about one billion exposed records, then later, in 2017, revised the number of leaked records to three billion. In March 2018, the third biggest data breach happened, involving India’s national identification database Aadhaar. As a result of this incident, over 1.1 billion records were exposed.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
1) Autonomous sound recorders are increasingly used to survey birds and other wildlife taxa. Species richness estimates from sound recordings are usually compared with estimates obtained from established methods like point counts, but so far such comparisons have been biased: detection ranges usually differ between the survey methods, and bird detection distance data are needed for standardizing data from sound recordings. 2) We devised and tested a method for estimating bird detection distances from sound recordings, using a reference recording of test sounds at different frequencies emitted from known distances. We used our method to estimate bird detection distances in sound recordings from tropical forest sites where point counts were also used. We derived bird abundance and richness measures and compared them between point counts and sound recordings using unlimited-radius and fixed-radius counts, as well as distance sampling modelling. 3) First, we show that it is possible to accurately estimate bird detection distances in sound recordings. We then demonstrate that these data can be used to standardize the detection ranges between point counts and sound recordings with a fixed-radius approach, leading to higher abundance and richness estimates for sound recordings. Our distance-sampling approach also revealed that sound recorders sampled significantly higher bird densities than human point counts. 4) We show for the first time that it is possible to standardize detection ranges in sound recordings and that distance sampling can be used successfully as well. We revealed that birds were flushed by human observers, which possibly leads to lower density estimates in point counts, although sound recorders could also have sampled more birds because of their earlier deployment times. Sound recordings are more amenable to distance-sampling modelling than point counts as they do not exhibit an observer-induced avoidance effect, and they can easily collect more replicates for obtaining more accurate bird density estimates. The difficulty of quantifying bird detection distances was so far one important shortcoming that hindered the adoption of modern autonomous sound recording methods for ecological surveys.
May 2013 - performed by Kjeldsen, Sinnock & Neudeck Inc. (KSN) at the request of Reclamation District 2058 to assess bed elevation change versus 2004 and 2005 DWR surveys. The KSN survey consisted of one single-beam sonar along-axis profile that included direct GPS observation points through the non-navigable, weed-filled eastern reach of the slough (starting at about cross section 8). The nine cross sections were surveyed with direct GPS observations of limited data density due to the method used. January 2018 - performed by DWR, North Central Region Office at the request of Bay Delta Office to assess channel elevations. This survey was intended to fully cover the target area with three along-axis profiles and the nine cross sections. All data was collected with sonar. The 2013 KSN dataset was very sparse in areas, with as much as 125 feet separating points along the profile, on average, and as few as five observations within the channel banks for cross sections. The 2018 DWR survey had much improved data density, but the comparison is necessarily limited by the availability of data from 2013. An estimate of bed elevation change in Tom Paine Slough along the centerline was made by subtracting the 2013 elevation from the calculated 2018 elevation for points that were within a 5-foot radius of one another, according to an algorithm calculating the weighted average of nearest neighbor points (WANN). Change for the cross sections was analyzed separately. No attempt was made to produce an estimate of change for the whole of Tom Paine Slough because the data from 2013 only contained one centerline and the cross sections were too far apart to reasonably interpolate a surface. Change along the 2013 centerline was further broken down by 3,000-foot segments along the profile to examine change more locally. To prevent under-weighting areas with fewer qualifying data points, all statistics were calculated on an evenly-spaced dataset, with change in elevation interpolated to one-foot intervals. Analysis of change along the centerline indicates a small, but significant, net deposition of sediment in Tom Paine Slough between 2013 and 2018. Sediment trends vary locally and are sensitive to differences in boat path, but suggest increasing deposition from west to east, with a short reach of scour near cross section 7. Since the accuracies of the two surveys were 0.24 ft and 0.16 ft for KSN and DWR, respectively, as much as 0.4 ft of difference could be explained by systematic survey error. However, this does not explain the increasing trend of deposition along Tom Paine Slough. The difference in data collection methods used by KSN (sonar for most of the slough, with direct GPS observation in weedy areas) could also be responsible for some of the increase seen towards the eastern end of the survey. There was a much higher mean elevation change for KSN points collected by direct observation (1.1 ft) compared to points collected by sonar (0.6 ft). Despite this, it is entirely possible that this observed difference between collection techniques was caused by an actual trend in deposition. It appears most likely that both are true: some of the increase in elevation towards the east is due to differing collection techniques, but most of it probably reflects an actual increase in elevation. Mean change in elevation and change in cross-sectional area were calculated for each individual cross section.
Because the 2013 KSN cross sections were all conducted by labor-intensive manual GPS observations, data points within the channel were sparse, but land elevations beyond the channel banks were included. The 2018 DWR survey was conducted by sonar within the navigable channel limits, and thus had much greater data density within the channel, but no land elevations beyond the banks.
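A hedged sketch of the centerline comparison described above: for each 2018 sounding, 2013 points within a 5-foot radius are combined by a weighted nearest-neighbor average and differenced. The inverse-distance weighting and all coordinates below are assumptions for illustration, not the WANN implementation used in the study.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hedged sketch: difference each 2018 sounding against a weighted average
# of 2013 points within a 5-ft radius (inverse-distance weights assumed;
# coordinates and elevations are synthetic stand-ins).
rng = np.random.default_rng(2)
xy13 = rng.uniform(0, 100, size=(200, 2))
z13 = rng.normal(-8.0, 0.5, 200)
xy18 = rng.uniform(0, 100, size=(400, 2))
z18 = rng.normal(-7.4, 0.5, 400)

tree = cKDTree(xy13)
changes = []
for p, z in zip(xy18, z18):
    idx = tree.query_ball_point(p, r=5.0)      # 2013 points within 5 ft
    if not idx:
        continue                               # no qualifying neighbors
    d = np.linalg.norm(xy13[idx] - p, axis=1)
    w = 1.0 / np.maximum(d, 0.1)               # inverse-distance weights
    changes.append(z - np.average(z13[idx], weights=w))

print(f"mean bed elevation change: {np.mean(changes):+.2f} ft")
```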
https://www.datainsightsmarket.com/privacy-policy
The Wireless Data Communication System (WDCS) market is experiencing robust growth, driven by the increasing demand for reliable and high-speed data transmission across diverse sectors. The market, estimated at $15 billion in 2025, is projected to expand significantly over the forecast period (2025-2033), fueled by a Compound Annual Growth Rate (CAGR) of approximately 12%. Key drivers include the proliferation of Internet of Things (IoT) devices, the expansion of 5G networks, and the growing adoption of WDCS in critical infrastructure applications such as military operations and industrial automation. The military field, benefiting from enhanced situational awareness and secure communication, represents a significant segment, while the business field is experiencing growth due to increased reliance on efficient data transfer for improved operational efficiency. Technological advancements, particularly in multiple-point information transmission technologies, are further boosting market expansion. However, factors like the high initial investment cost of implementing WDCS and potential security concerns related to data breaches could pose challenges to market growth. Nevertheless, the overall trend points toward significant expansion, particularly in emerging economies with rapidly developing infrastructure. Segmentation analysis reveals a strong preference for two-point information transmission systems, although multiple-point systems are gaining traction due to their capacity for handling larger data volumes and supporting more interconnected devices. The geographic distribution showcases significant market share in North America and Europe, driven by early adoption and technological maturity. However, Asia-Pacific is expected to witness the highest growth rate due to increasing digitalization and substantial infrastructure investment. Major players like SCI Technology, Inc., Silicon Laboratories, Inc., and others are actively involved in product innovation and strategic partnerships to maintain their competitive edge. The continued adoption of sophisticated WDCS in diverse application areas will solidify the market's long-term growth trajectory.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Non-thermal atmospheric-pressure plasma (NTAPP) has been widely studied for clinical applications, e.g., disinfection, wound healing, cancer therapy, hemostasis, and bone regeneration. It is increasingly being revealed that the physical and chemical actions of plasma enable these clinical applications. Based on our previous report regarding plasma-stimulated bone regeneration, this study focused on Achilles tendon repair by NTAPP. This is the first study to reveal that exposure to NTAPP can accelerate Achilles tendon repair, using a well-established Achilles tendon injury rat model. Histological evaluation using the Stoll's and histological scores showed a significant improvement at 2 and 4 weeks, with type I collagen content being substantial at the early time point of 2 weeks post-surgery. Notably, the replacement of type III collagen with type I collagen occurred more frequently in the plasma-treated groups at the early stage of repair. Tensile strength test results showed that the maximum breaking strength in the plasma-treated group at two weeks was significantly higher than that in the untreated group. Overall, our results indicate that a single NTAPP treatment during surgery can contribute to early recovery of an injured tendon.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains macroseismic data points and source parameters for 28 shallow 20th-century earthquakes in the Hainaut area, as well as for the 1983-11-08 Liège earthquake, all in Belgium. For each earthquake, there is one CSV file containing minimum and maximum evaluated macroseismic intensity, latitude, longitude, commune name, epicentral distance, and azimuth. The source parameters (origin time, epicentral coordinates, hypocentral depth, magnitude, maximum intensity, macroseismic radius, and number of observations) are listed in an XLSX file. The ID_EARTH column in this file corresponds to the first part of the CSV filenames.
The most significant difference with respect to the original dataset is an update of the coordinates of several Belgian localities that are used to locate the IDPs, resulting mainly in insignificant changes (<1 km difference for 96% of the IDPs used here), though a few outliers differing by up to 22 km occur as well. This can result in significantly higher or lower epicentral distances for a few IDPs. Other adjustments to the data include the addition of intensity 1 values (not felt) and the removal, addition, or modification of some IDPs (>20 IDPs in total). For the 1983 Liège earthquake (ID_EARTH=651), more significant changes were made to the IDP dataset as part of a major update of the ROB traditional macroseismic database, such as the addition of 297 new IDPs (~90% not felt), the modification of intensity values of 19 IDPs, and the removal of 3 IDPs that were previously assigned to the wrong municipality.
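Since each CSV lists an epicentral distance per locality, a reader can sanity-check how the updated coordinates shift those distances; the sketch below recomputes a great-circle distance with the haversine formula, which is assumed here for illustration and is not necessarily the formula used to build the dataset.

```python
import math

# Hedged sketch: recomputing an IDP's epicentral distance (km) from
# epicentre and commune coordinates via the haversine formula.
# The coordinates in the example are illustrative, not dataset values.
def epicentral_distance_km(lat1, lon1, lat2, lon2, r_earth=6371.0):
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r_earth * math.asin(math.sqrt(a))

# Example: a hypothetical epicentre near Liège vs. a commune ~25 km away.
print(f"{epicentral_distance_km(50.63, 5.57, 50.85, 5.69):.1f} km")
```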
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Our multidisciplinary team of legal, clinician, and perinatal epidemiology experts designed a study to assess the effects of state regulation of midwives on patient access to high-quality maternity care in the US. We developed a novel, weighted scoring system that ranks all 50 states and DC on level of midwifery integration, and then linked state scores to maternal and newborn outcomes. In our study we demonstrate that greater integration of midwives is associated with significantly higher rates of physiologic birth outcomes, lower rates of obstetric interventions, and fewer adverse neonatal outcomes. Our new Midwifery Integration Scoring System provides an evidence-informed tool that can identify barriers to effective health human resource allocation in maternity care, based on population-level health outcomes data. In the current context of the Sustainable Development Goals to facilitate equitable access to skilled maternity providers, we believe that our findings will be of broad interest. We uploaded 1) the Midwifery Integration Scoring System and 2) the data set that includes all data points needed to replicate the results presented in our paper. Most of the data is for the year 2014 and comes from the CDC. Other data sources are detailed in the publication, and a short data dictionary will be uploaded soon.