Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The scaled-uniform model has recently been used to illustrate problems in frequentist point estimation that arise when the minimal sufficient statistic is not complete. Here we consider the problem of interval estimation and derive pivotal quantities based on a series of point estimators proposed in the literature. We compare the resulting intervals of a given confidence level in terms of expected length. Pivotal quantities, confidence intervals and expected lengths are all computed using simulations implemented in R (code is available). Numerical results suggest that the maximum likelihood estimator, despite its inefficiency, yields confidence intervals that outperform the other available sets of the same level.
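The pivotal construction described above can be sketched as follows. This assumes the common parameterisation of the scaled-uniform model as X_1,...,X_n iid U(theta, 2*theta), under which the MLE is X_(n)/2 and Q = X_(n)/theta is pivotal with CDF (q - 1)^n on [1, 2]; the paper's own R code remains the reference implementation.

```python
import random

random.seed(0)

# Scaled-uniform model assumed here: X_1,...,X_n iid U(theta, 2*theta).
# The MLE is X_(n)/2, and Q = X_(n)/theta is pivotal with
# CDF P(Q <= q) = (q - 1)^n for q in [1, 2].

def mle_interval(x, level=0.95):
    """Equal-tailed confidence interval for theta from the pivot Q = X_(n)/theta."""
    n = len(x)
    alpha = 1 - level
    # Invert the pivot CDF (q - 1)^n analytically at alpha/2 and 1 - alpha/2.
    q_lo = 1 + (alpha / 2) ** (1 / n)
    q_hi = 1 + (1 - alpha / 2) ** (1 / n)
    xmax = max(x)
    return xmax / q_hi, xmax / q_lo

# Monte Carlo check of coverage and expected length, mirroring the paper's setup.
theta, n, reps = 3.0, 20, 2000
hits, total_length = 0, 0.0
for _ in range(reps):
    x = [random.uniform(theta, 2 * theta) for _ in range(n)]
    lo, hi = mle_interval(x)
    hits += lo <= theta <= hi
    total_length += hi - lo
print("coverage:", hits / reps, "mean length:", total_length / reps)
```

Since the pivot CDF is available in closed form, the interval endpoints need no simulation; the Monte Carlo loop only verifies coverage and estimates the expected length.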
http://www.donorhealth-btru.nihr.ac.uk/wp-content/uploads/2020/04/Data-Access-Policy-v1.0-14Apr2020.pdf
In over 100 years of blood donation practice, INTERVAL is the first randomised controlled trial to assess the impact of varying the frequency of blood donation on donor health and the blood supply. It provided policy-makers with evidence that collecting blood more frequently than current intervals can be implemented over two years without impacting on donor health, allowing better management of the supply to the NHS of units of blood with in-demand blood groups. INTERVAL was designed to deliver a multi-purpose strategy: an initial purpose related to blood donation research aiming to improve NHS Blood and Transplant’s core services and a longer-term purpose related to the creation of a comprehensive resource that will enable detailed studies of health-related questions.
Approximately 50,000 generally healthy blood donors were recruited between June 2012 and June 2014 from 25 NHS Blood Donation centres across England. The cohort comprised approximately equal numbers of men and women, aged 18-80, with ~93% of white ancestry. All participants completed brief online questionnaires at baseline and gave blood samples for research purposes. Participants were randomised to giving blood every 8/10/12 weeks (men) or 12/14/16 weeks (women) over a 2-year period. ~30,000 participants returned after 2 years, completed a brief online questionnaire, and gave further blood samples for research purposes.
The baseline questionnaire includes brief lifestyle information (smoking, alcohol consumption, etc), iron-related questions (e.g., red meat consumption), self-reported height and weight, etc. The SF-36 questionnaire was completed online at baseline and 2 years, with a 6-monthly SF-12 questionnaire between baseline and 2 years.
All participants were genotyped on the Affymetrix Axiom UK Biobank array, with imputation to the 1000G+UK10K combined reference panel (80M variants in total). 4,000 participants have 50X whole-exome sequencing and 12,000 participants have 15X whole-genome sequencing. Whole-blood RNA sequencing has commenced in ~5,000 participants.
The dataset also contains data on clinical chemistry biomarkers, blood cell traits, >200 lipoproteins, metabolomics (Metabolon HD4), lipidomics, and proteomics (SomaLogic, Olink), either cohort-wide or in large sub-sets of the cohort.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
The mean and standard deviation (μ ± σ) of correlation coefficients when the sequence sizes vary from 15 to 50 in equal steps of 5.
https://www.elsevier.com/about/policies/open-access-licenses/elsevier-user-license/cpc-license/
Abstract: A calculator program has been written to give confidence intervals on branching ratios for rare decay modes (or similar quantities), calculated from the number of events observed, the acceptance factor, the background estimate and the associated errors. Results from different experiments (or different channels from the same experiment) can be combined. The calculator is available at http://www.slac.stanford.edu/~barlow/limits.html.
Title of program: syslimit Catalogue Id: ADQN_v1_0
Nature of problem Calculating confidence intervals for a Poisson mean based on observed data, with uncertainties in efficiencies and backgrounds.
Versions of this program held in the CPC repository in Mendeley Data ADQN_v1_0; syslimit; 10.1016/S0010-4655(02)00588-X
This program has been imported from the CPC Program Library held at Queen's University Belfast (1969-2019)
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
Question: How equal is access to power? Clarification: The Equal Access subcomponent is based on the idea that neither the protections of rights and freedoms nor the equal distribution of resources is sufficient to ensure adequate representation. Ideally, all groups should enjoy equal de facto capabilities to participate, to serve in positions of political power, to put issues on the agenda, and to influence policymaking. Scale: Interval, from low to high (0-1).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
We divided the world into 9 equal elevation intervals of 500 meters each and calculated the global climate diversity for each interval separately over the period 1901-2098.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
The 2010 Census Production Settings Demographic and Housing Characteristics (DHC) Approximate Monte Carlo (AMC) method seed Privacy Protected Microdata File (PPMF0) and PPMF replicates (PPMF1, PPMF2, ..., PPMF25) are a set of microdata files intended for use in estimating the magnitude of error(s) introduced by the 2020 Decennial Census Disclosure Avoidance System (DAS) into the Redistricting and DHC products. The PPMF0 was created by executing the 2020 DAS TopDown Algorithm (TDA) using the confidential 2010 Census Edited File (CEF) as the initial input; the replicates were then created by executing the 2020 DAS TDA repeatedly with the PPMF0 as its initial input. Inspired by analogy to the use of bootstrap methods in non-private contexts, U.S. Census Bureau (USCB) researchers explored whether simple calculations based on comparing each PPMFi to the PPMF0 could be used to reliably estimate the scale of errors introduced by the 2020 DAS, and generally found this approach worked well.
The PPMF0 and PPMFi files contained here are provided so that external researchers can estimate properties of DAS-introduced error without privileged access to internal USCB-curated data sets; further information on the estimation methodology can be found in Ashmead et al. 2024.
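The replicate-comparison idea described above can be illustrated with a hypothetical sketch: for a statistic tabulated from each file, the spread of the replicate-minus-seed differences estimates the scale of the error the DAS introduces into that statistic. The function name and inputs here are illustrative, not part of the USCB methodology.

```python
import statistics

def das_error_scale(seed_stat, replicate_stats):
    # Hypothetical sketch: compare the same tabulated statistic computed
    # from the seed file (PPMF0) and from each replicate (PPMF1..PPMF25),
    # and use the spread of the differences as an error-scale estimate.
    diffs = [r - seed_stat for r in replicate_stats]
    return statistics.stdev(diffs)
```

With 25 replicates, `replicate_stats` would hold 25 values of the same tabulation, one per PPMFi.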
The 2010 DHC AMC seed PPMF0 and PPMF replicates have been cleared for public dissemination by the USCB Disclosure Review Board (CBDRB-FY24-DSEP-0002). The 2010 PPMF0 included in these files was produced using the same parameters and settings as were used to produce the 2010 Demonstration Data Product Suite (2023-04-03) PPMF, but represents an independent execution of the TopDown Algorithm. The PPMF0 and PPMF replicates contain all Person and Units attributes necessary to produce the Redistricting and DHC publications for both the United States and Puerto Rico, and include geographic detail down to the Census Block level. They do not include attributes specific to either the Detailed DHC-A or Detailed DHC-B products; in particular, data on Major Race (e.g., White Alone) is included, but data on Detailed Race (e.g., Cambodian) is not included in the PPMF0 and replicates.
The 2020 AMC replicate files for estimating confidence intervals for the official 2020 Census statistics are available.
The SACS Tier 1 Cultural Resources Exposure Index depicts a weighted aggregation of national GIS datasets related to cultural resources within the SACS study area. Input datasets include the National Register of Historic Places as well as the USGS Protected Areas Database – Historic or Cultural Areas. These national datasets were clipped to the SACS study area and weighted using values cited in the USACE NACCS effort (page 109, https://www.nad.usace.army.mil/Portals/40/docs/NACCS/NACCS_Appendix_C.pdf). These vector data were then converted to a uniform grid based on the NACCS weighting, and summed. The resulting raster was then normalized between 0 and 1, with 1 containing the highest value, or the most overlapping datasets. The resulting index is displayed with a 4-class equal interval symbology so that point features converted to grid pixels can be identified. The grid resolution is 30 m. Input datasets, weighting, and download locations are referenced as follows:
NPS National Register of Historic Places – Weight: 75 – https://www.nps.gov/subjects/nationalregister/data-downloads.htm
USGS Protected Areas Database – Historic or Cultural Areas – Weight: 75 – https://www.usgs.gov/core-science-systems/science-analytics-and-synthesis/gap/science/protected-areas
This Tier 1 dataset is available for download here: Tier 1 Risk Assessment Download
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Companion data for the creation of a banksia plot:
Background: In research evaluating statistical analysis methods, a common aim is to compare point estimates and confidence intervals (CIs) calculated from different analyses. This can be challenging when the outcomes (and their scale ranges) differ across datasets. We therefore developed a plot to facilitate pairwise comparisons of point estimates and confidence intervals from different statistical analyses, both within and across datasets.
Methods: The plot was developed and refined over the course of an empirical study. To compare results from a variety of different studies, a system of centring and scaling is used. First, the point estimates from reference analyses are centred to zero, and their confidence intervals are scaled to span a range of one. The point estimates and confidence intervals from matching comparator analyses are then adjusted by the same amounts. This enables the relative positions of the point estimates and the CI widths to be quickly assessed, while maintaining the relative magnitudes of the differences in point estimates and confidence interval widths between the two analyses. Banksia plots can be graphed in a matrix, showing all pairwise comparisons of multiple analyses.
In this paper, we show how to create a banksia plot and present two examples: the first relates to an empirical evaluation assessing the difference between various statistical methods across 190 interrupted time series (ITS) datasets with widely varying characteristics, while the second assesses data extraction accuracy by comparing results obtained from analysing original study data (43 ITS studies) with those obtained by four researchers from datasets digitally extracted from graphs in the accompanying manuscripts.
Results: In the banksia plot of the statistical method comparison, it was clear that there was no difference, on average, in point estimates, and it was straightforward to ascertain which methods resulted in smaller, similar or larger confidence intervals than others. In the banksia plot comparing analyses from digitally extracted data with those from the original data, it was clear that both the point estimates and confidence intervals were very similar among data extractors and the original data.
Conclusions: The banksia plot, a graphical representation of centred and scaled confidence intervals, provides a concise summary of comparisons between multiple point estimates and associated CIs in a single graph. Through this visualisation, patterns and trends in the point estimates and confidence intervals can be easily identified. This collection of files allows the user to create the images used in the companion paper and to amend the code to create their own banksia plots using either Stata version 17 or R version 4.3.1.
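The centring-and-scaling step described in the Methods can be sketched as follows; the companion files provide the authoritative Stata and R code, so this Python fragment is only an illustration of the arithmetic.

```python
def centre_and_scale(reference, comparator):
    # reference, comparator: (point_estimate, ci_lower, ci_upper) tuples.
    # Centre the reference point estimate at zero and scale its CI to span
    # a range of one; apply the same shift and scale to the comparator, so
    # relative positions and relative CI widths are preserved.
    est, lo, hi = reference
    shift, scale = est, hi - lo
    transform = lambda v: (v - shift) / scale
    return tuple(map(transform, reference)), tuple(map(transform, comparator))
```

After the transform, the reference interval always spans [-0.5, 0.5] around zero when its CI is symmetric, and the comparator's deviation from it can be read off directly.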
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
The mean percentage decrease (%) in z-coordinate error of SSI-RIK compared to EM-SFM, CSF and RIK, when the sequence sizes vary from 15 to 50 in equal steps of 5.
This dataset uses Census Data following published social vulnerability index literature to provide an index at the Place level.
The Corps of Engineers has chosen SoVI as the “foundational SVA (Social Vulnerability Analysis) method for characterizing social vulnerability….” (Dunning and Durden 2013) The University of South Carolina has provided extensive and historic data for this methodology. Susan Cutter and her team have published their methodology and continue to maintain their database. Thus it was chosen as the “primary tool for [Army] Corps SVA applications.” (ibid) The downside is that this method is complex and at times hard to communicate and understand. (S. Cutter, Boruff, and Shirley 2003) The Social Vulnerability Index (SoVI) for this study was constructed at the U.S. Census Place level for the state of Utah. We followed the conventions put forth by Cutter (2011) as closely as possible using the five-year American Community Survey (ACS) data from 2008 to 2012. The ACS collects a different, more expansive set of variables than the Census Long Form utilized in Cutter et al. (2003), which required some deviation in variable selection from the original method. However, Holand and Lujala (2013) demonstrated that the SoVI could be constructed using regionally and contextually appropriate variables rather than the specific variables presented by Cutter et al. (2003). Where possible, variables were selected that matched the Cutter et al. (2003) work. The Principal Components Analysis was conducted using the statistical software R version 3.2.3 (R 2015) and the prcomp function. Using the Cutter (2011) conventions for component selection, we chose to use the first ten principal components, which explained 76% of the variance in the data. Once the components were selected, we assessed the correlation coefficients for each component and determined the tendency (how it increases or decreases) of each component for calculating the final index values.
With the component tendencies assessed, we created an arithmetic function to calculate the final index scores in ESRI’s ArcGIS software (ESRI 2014). The scores were then classified using an equal interval classification in ArcGIS to produce five classes of vulnerability, ranging from very low to very high. The SoVI constructed for our study is largely consistent with previous indices published by Susan Cutter at a macro scale, which were used as a crude validation for the analysis. The pattern of vulnerability in the state is clustered, with the lowest vulnerability in the most densely populated area of the state, centered on Salt Lake City (see Figure [UT_SoVI.png]). Most of the state falls in the moderate vulnerability class, which is to be expected.
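The equal-interval classification used for the final scores can be sketched as follows. The study performed this step in ArcGIS; this Python fragment only illustrates the binning rule (equal-width intervals spanning the observed range, with the maximum value assigned to the top class).

```python
def equal_interval_classes(values, n_classes=5):
    # Classify values into n_classes equal-width intervals spanning
    # [min, max]; the maximum value is placed in the highest class.
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_classes
    return [min(int((v - lo) / width), n_classes - 1) for v in values]
```

With `n_classes=5`, class 0 corresponds to "very low" vulnerability and class 4 to "very high".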
https://spdx.org/licenses/CC0-1.0.html
Music perception remains challenging for many cochlear implant (CI) recipients, due perhaps in part to the frequency mismatch that occurs between the electrode-neural interface and the frequencies allocated by the programming. Individual differences in ear anatomy, electrode array length, and surgical insertion can lead to great variability in the positions of electrodes within the cochlea, but these differences are not typically accounted for by current CI programming techniques. Flat panel computed tomography (FPCT) can be used to visualize the location of the electrodes and calculate the corresponding spiral ganglion characteristic frequencies. Such FPCT-based CI frequency mapping may improve pitch perception accuracy, and thus music appreciation, as well as speech perception. The present study seeks to develop a behavioral assessment metric for how well place-based pitch is represented across the frequency spectrum. Listeners were asked to match the pitch interval created by two tones, played sequentially, across different frequency ranges to estimate the extent to which pitch is evenly distributed across the CI array. This test was piloted with pure tones in normal hearing listeners, using both unprocessed and vocoder-processed sounds to simulate both matched and mismatched frequency-to-place maps. We hypothesized that the vocoded stimuli would be more difficult to match in terms of pitch intervals than unprocessed stimuli and that a warped map (as may occur with current clinical maps) would produce poorer matches than a veridical and even map (as may be achieved using FPCT-based frequency allocation). Preliminary results suggest that the task can reveal differences between veridical and warped maps in normal-hearing listeners under vocoded conditions. A small cohort of CI recipients performed similarly to a vocoded condition employing the same pitch map. 
The next steps will be to test this procedure in CI users and compare results with traditional clinical maps and FPCT-based frequency allocation to determine whether the FPCT-based maps result in improved pitch-interval perception.
Methods
Subjects
Two primary groups were enlisted for this study: normal hearing (NH) individuals and cochlear implant (CI) recipients. NH listeners were used to establish baseline data that could be used to compare against CI recipients. CI recipients are included here as a pilot to determine whether this approach is feasible for these listeners.
Normal Hearing (NH) Participants
Thirty-one NH individuals, recruited through the University of Minnesota, participated. The group that assessed unprocessed stimuli comprised 15 participants (average age: 22.6 years, SD: ±1.5; gender distribution: 5 males, 10 females) with an average of 8.1 years (SD: ±3.9) of musical experience. The vocoded stimuli group included 16 participants (average age: 28.6 years, SD: ±13.8; gender distribution: 7 males, 9 females), reporting an average of 11.1 years (SD: ±11.1) of musical experience. Testing for both NH groups was completed remotely via an online MATLAB platform, requiring the use of headphones.
Cochlear Implant (CI) Recipients
Nine CI recipients (Table 1, average age: 57.4 years, SD: ±13.2; gender distribution: 6 males, 3 females) were recruited through UCSF. This group consisted of one bilateral and eight unilateral CI users, all equipped with MED-EL CIs and using their clinical everyday listening programs. Their reported musical experience averaged 11.3 years (SD: ±12.3). Similar to the NH group, the CI cohort completed the task via an online MATLAB platform. CI recipients were instructed to choose the transducer that they regularly use with success at home; this could have included sound field speakers, headphones, or streaming, with care taken to isolate the test ear.
Pitch Interval Assessment Procedure
For these experiments, we focused on pitch interval comparisons across a frequency range utilized by contemporary CI processors. This frequency range was divided into three regions, assuming a logarithmic distribution of frequencies, resulting in low (root note 150 Hz, interval range 126-505 Hz), mid (root note 572 Hz, interval range 480-1924 Hz), and high (root note 2181 Hz, interval range 1833-7314 Hz) categories.
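The logarithmic division into three regions can be sketched as follows; the outer limits (126 Hz and 7314 Hz) are the interval-range extremes quoted above, and the exact boundaries used in the study may differ slightly from this simple log-equal split.

```python
def log_spaced_regions(f_low, f_high, n_regions=3):
    # Split [f_low, f_high] into n_regions spans of equal width on a
    # log-frequency axis, i.e. each region covers the same frequency ratio.
    ratio = (f_high / f_low) ** (1 / n_regions)
    edges = [f_low * ratio ** k for k in range(n_regions + 1)]
    return list(zip(edges, edges[1:]))
```

Because each region spans the same ratio, each one covers the same number of semitones.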
NH participants completed the pitch interval assessment with pure tones across both frequency test ranges (Low vs Mid, Mid vs High), whereas CI recipients and NH subjects with vocoded stimuli were tested only in the Mid vs High ranges due to limitations of the vocoded conditions in low frequencies and time constraints.
The task involved comparing two pitch intervals between two frequency regions presented in succession. Participants identified the larger of two presented pitch intervals in a forced choice paradigm (Zarate et al., 2012; McDermott et al., 2010).
A single pitch interval consisted of a 3-tone melody in a low-high-low sequence, where the first and last notes were the same (e.g., C4 - G4 - C4). The melody's root note was roved within a half-octave range. Each note was a pure tone of 300 ms, including 30-ms onset and 50-ms offset raised-cosine ramps. The notes within each 3-note sequence were separated by 150-ms gaps.
For the NH listeners presented with pure tones, the fixed interval was either 4 ST (a major 3rd in music notation) or 7 ST (a perfect 5th); for the CI users and NH listeners presented with vocoded stimuli, the fixed interval was always 7 ST. Intervals were defined using equal-temperament tuning, in which 1 ST always represents a frequency ratio of 2^(1/12).
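The equal-temperament definition above amounts to the following arithmetic (the function name is illustrative, not from the study's MATLAB code):

```python
SEMITONE = 2 ** (1 / 12)  # equal-temperament frequency ratio for 1 ST

def transpose(freq_hz, semitones):
    # Frequency after moving by a (possibly fractional) number of semitones.
    return freq_hz * SEMITONE ** semitones
```

For example, 12 ST doubles the frequency (one octave), and 7 ST multiplies it by 2^(7/12) ≈ 1.498 (a perfect 5th).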
Adaptive Tracking Procedure
The assessment employed an adaptive testing approach (e.g., Jesteadt, 1980) to determine each participant's point of subjective equality (PSE) for pitch intervals across different frequency regions. One of the two intervals was fixed, and the other interval was adaptively varied, based on the listener’s previous responses. A value of 0 semitones (ST) in this procedure indicates that the adaptively varying interval was the same size as the fixed interval (either 4 or 7 ST).
Each run consisted of four randomly interleaved adaptive tracks, two of which used a 2-down 1-up procedure and two of which used a 2-up 1-down procedure, tracking the 71% and 29% points of the psychometric function, respectively (Levitt, 1971). Of these tracks, one pair varied the first (lower) interval and the other pair varied the second (higher) interval. For each track, the starting size of the varying interval was ±3 ST, and the starting step size was 4 ST, decreasing to 2 ST after the first two reversals. Four reversals were required during the initial phase and two reversals were required during the measurement phase.
Once all the tracks had terminated, the PSE was defined as the average of the four tracks (as the mean of the 71% and 29% points approximates the 50% point). A total of 5 runs were completed per participant in each condition, with a prompt for participants to rest between runs.
The adaptive tracking procedure was limited to values of between -7 and +10 ST. If the adaptive procedure called for a value exceeding the maximum or minimum more than 6 times in one track, the track was terminated and a value of -8 or +11 ST was assigned to that track.
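The tracking rule described above can be sketched in outline. This is a minimal, hypothetical 2-down 1-up staircase (the 2-up 1-down variant mirrors the direction rule); `respond(level)` stands in for the listener's forced-choice answer and is not part of the original procedure, and the track-limit and termination safeguards described above are omitted for brevity.

```python
def two_down_one_up(respond, start, big_step, small_step, n_reversals=6):
    # Minimal 2-down 1-up staircase sketch: two consecutive correct
    # responses move the level down, one incorrect response moves it up.
    # The step size drops from big_step to small_step after the first two
    # reversals; the estimate averages the measurement-phase reversals.
    level, correct_in_row, last_direction = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        step = big_step if len(reversals) < 2 else small_step
        if respond(level):
            correct_in_row += 1
            if correct_in_row < 2:
                continue
            correct_in_row, direction = 0, -1
        else:
            correct_in_row, direction = 0, +1
        if last_direction and direction != last_direction:
            reversals.append(level)
        last_direction = direction
        level += direction * step
    measurement = reversals[2:]
    return sum(measurement) / len(measurement)
```

Averaging one 2-down 1-up track (71% point) with one 2-up 1-down track (29% point) approximates the 50% point, i.e. the PSE, as described above.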
Prior to data collection, participants completed a short training module in the initial phase of the experiment that utilized 7 ST intervals and provided feedback to ensure understanding of the task. No feedback was given during testing.
Stimuli
Normal Hearing Cohort
Normal hearing (NH) participants were assessed using either pure-tone (unprocessed) stimuli or vocoded stimuli (to simulate aspects of cochlear implant sound perception).
To create the vocoded stimuli, a frequency warp was first applied to simulate either a full-length (28 mm) or a shorter (24 mm) electrode array placement. The basis for these two frequency warps was generated by calculation of the spiral ganglion characteristic frequencies of electrodes measured using lateral wall cochlear duct length measurements of our previously FPCT-imaged cohort (Jiam et al., 2021; Helpard et al., 2020; Li et al., 2021). The most apical electrode of the 28 mm array corresponds to 350 Hz (Figures 1A and 1B, black dots), and of the 24 mm array corresponds to 500 Hz (Figure 1C, black dots), consistent with larger cohorts reported elsewhere (Canfarotta et al., 2020, ***).
The second step to create the vocoded stimuli was to apply a frequency allocation table to simulate either default or custom CI filterbank settings, which yielded the following three conditions: (1) Vocoded 28 mm Array with Default Frequencies (“Voc Default”, black bars in Figure 1A), (2) Vocoded 28 mm Array with Middle Frequencies Matched (“Voc MidFreq Match”, gray bars, Figure 1B), and (3) Vocoded 24 mm Array with All Frequencies Matched (“Voc AllFreq Match”, gray bars, Figure 1C).
The Voc Default setting (Figure 1A, black bars) used a frequency range of 70-8500 Hz, logarithmically divided into 12 channels, and was modeled after the manufacturer’s default frequency allocation table (i.e., “LogFS”).
The Voc MidFreq Match applied a strict match of center frequencies to the mid-frequency range (950-4000 Hz) and the remaining frequency ranges were then redistributed across the most apical and basal electrodes (70-950 Hz, 4000-8500 Hz, as seen in Figure 1B, gray bars). This approach attempted to maintain audibility across the entire frequency range while also maintaining pitch interval integrity where feasible.
The Voc AllFreq Match utilized a strictly CT-based approach (Figure 1C, gray bars), which matched all channel center frequencies to the electrode contact locations as much as is feasible (<4000 Hz). To avoid deactivating electrodes that were located above the bandwidth limit of the software (8500 Hz), a logarithmic redistribution was applied in the highest frequencies (>4000 Hz) to make the best use of available
This layer shows median household income by race and by age of householder, by tract, county, and state boundaries. This service is updated annually to contain the most currently released American Community Survey (ACS) 5-year data, and contains estimates and margins of error. There are also additional calculated attributes related to this topic, which can be mapped or used within analysis. Median income and income source are based on income in the past 12 months of the survey. This layer is symbolized to show median household income. To see the full list of attributes available in this service, go to the "Data" tab and choose "Fields" at the top right.
Current Vintage: 2019-2023
ACS Table(s): B19013B, B19013C, B19013D, B19013E, B19013F, B19013G, B19013H, B19013I, B19049, B19053
Data downloaded from: Census Bureau's API for American Community Survey
Date of API call: December 12, 2024
National Figures: data.census.gov
The United States Census Bureau's American Community Survey (ACS): About the Survey; Geography & ACS; Technical Documentation; News & Updates
This ready-to-use layer can be used within ArcGIS Pro, ArcGIS Online, its configurable apps, dashboards, Story Maps, custom apps, and mobile apps. Data can also be exported for offline workflows. For more information about ACS layers, visit the FAQ. Please cite the Census and ACS when using this data.
Data Note from the Census: Data are based on a sample and are subject to sampling variability. The degree of uncertainty for an estimate arising from sampling variability is represented through the use of a margin of error. The value shown here is the 90 percent margin of error. The margin of error can be interpreted as providing a 90 percent probability that the interval defined by the estimate minus the margin of error and the estimate plus the margin of error (the lower and upper confidence bounds) contains the true value.
In addition to sampling variability, the ACS estimates are subject to nonsampling error (for a discussion of nonsampling variability, see Accuracy of the Data). The effect of nonsampling error is not represented in these tables.
Data Processing Notes: This layer is updated automatically when the most current vintage of ACS data is released each year, usually in December. The layer always contains the latest available ACS 5-year estimates. It is updated annually within days of the Census Bureau's release schedule. Click here to learn more about ACS data releases. Boundaries come from the US Census TIGER geodatabases, specifically, the National Sub-State Geography Database (named tlgdb_(year)_a_us_substategeo.gdb). Boundaries are updated at the same time as the data updates (annually), and the boundary vintage appropriately matches the data vintage as specified by the Census. These are Census boundaries with water and/or coastlines erased for cartographic and mapping purposes. For census tracts, the water cutouts are derived from a subset of the 2020 Areal Hydrography boundaries offered by TIGER. Water bodies and rivers which are 50 million square meters or larger (mid to large sized water bodies) are erased from the tract level boundaries, as well as additional important features. For state and county boundaries, the water and coastlines are derived from the coastlines of the 2023 500k TIGER Cartographic Boundary Shapefiles. These are erased to more accurately portray the coastlines and Great Lakes.
The original AWATER and ALAND fields are still available as attributes within the data table (units are square meters). The States layer contains 52 records: all US states, Washington D.C., and Puerto Rico. Census tracts with no population that occur in areas of water, such as oceans, are removed from this data service (Census Tracts beginning with 99). Percentages, derived counts, and associated margins of error are calculated values (identifiable by the "_calc_" stub in the field name) and abide by the specifications defined by the American Community Survey. Field alias names were created based on the Table Shells file available from the American Community Survey Summary File Documentation page. Negative values (e.g., -4444...) have been set to null, with the exception of -5555..., which has been set to zero. These negative values exist in the raw API data to indicate the following situations:
- The margin of error column indicates that either no sample observations or too few sample observations were available to compute a standard error, and thus the margin of error. A statistical test is not appropriate.
- Either no sample observations or too few sample observations were available to compute an estimate, or a ratio of medians cannot be calculated because one or both of the median estimates falls in the lowest or upper interval of an open-ended distribution.
- The median falls in the lowest interval or the upper interval of an open-ended distribution. A statistical test is not appropriate.
- The estimate is controlled. A statistical test for sampling variability is not appropriate.
- The data for this geographic area cannot be displayed because the number of sample cases is too small.
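The margin-of-error interpretation in the Data Note amounts to the following simple computation (the function name is illustrative):

```python
def acs_confidence_bounds(estimate, margin_of_error):
    # The published ACS margin of error is at the 90 percent level, so
    # [estimate - MOE, estimate + MOE] gives the 90 percent confidence
    # bounds described in the Census data note.
    return estimate - margin_of_error, estimate + margin_of_error
```

For example, a median income estimate of 50,000 with an MOE of 2,500 yields 90 percent confidence bounds of 47,500 and 52,500.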
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
The mean percentage increase (%) in correlation coefficient of SSI-RIK compared with EM-SFM, CSF, and RIK, as the sequence size varies from 15 to 50 in equal steps of 5.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
FluidHarmony is an algorithmic method for defining a hierarchical harmonic lexicon in equal temperaments. It uses an enharmonic weighted Fourier transform space to represent pitch-class set (pcset) relations, and ranks pcsets based on user-defined constraints: the importance of interval classes (ICs) and a reference pcset. Evaluation on 5,184 Western musical pieces from the 16th to 20th centuries shows that FluidHarmony captures 8% of the corpus's harmony in its top pcsets. This highlights the role of ICs and a reference pcset in regulating harmony in Western tonal music, while enabling systematic approaches to defining hierarchies and establishing metrics beyond 12-TET.
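The Fourier transform space mentioned above builds on the discrete Fourier transform of a pitch-class set's characteristic function. A minimal sketch in plain 12-TET, without FluidHarmony's enharmonic weighting (which the method defines separately):

```python
import cmath

def pcset_dft(pcset, edo=12):
    """DFT of a pitch-class set's characteristic function in an
    equal temperament with `edo` steps per octave."""
    chi = [1 if p in pcset else 0 for p in range(edo)]
    return [sum(chi[n] * cmath.exp(-2j * cmath.pi * k * n / edo)
                for n in range(edo))
            for k in range(edo)]

# Magnitudes of components 1..6 summarize interval-class content;
# component 0 is simply the cardinality of the set.
major_triad = {0, 4, 7}
mags = [abs(c) for c in pcset_dft(major_triad)]
```

Ranking pcsets then amounts to scoring such magnitudes against the user-supplied IC weights and the reference pcset.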
This map contains the 2020 Vulnerable Population Index along with the component demographic layers. The following seven populations were determined to be vulnerable based on an understanding of both federal requirements and regional demographics: 1) Low-Income Population (below 200% of poverty level); 2) Non-Hispanic Minority Population; 3) Hispanic or Latino Population (all races); 4) Population with Limited English Proficiency (LEP); 5) Population with Disabilities; 6) Elderly Population (age 75 and up); 7) Households with No Car. For each of these populations, census tracts with concentrations above the regional mean are divided into two categories, calculated by splitting the range of values between the regional mean and the regional maximum into two equal-sized intervals. Tracts in the lower interval are given a score of 1 and tracts in the upper interval a score of 2 for that demographic variable. The scores from the seven individual demographic variables are totaled to yield the Vulnerable Population Index (VPI), which ranges from 0 to 14.
A lower VPI indicates a less vulnerable area, while a higher VPI indicates a more vulnerable area.

FIELDS:
P_PovL100: Percent Below 100% of the Poverty Level
P_PovL200: Percent Below 200% of the Poverty Level
P_Minrty: Percent Minority (non-White, non-Hispanic)
P_Hisp: Percent Hispanic
P_LEP: Percent Limited English Proficiency (speak English "not well" or "not at all")
P_Disabld: Percent with Disabilities
P_Elderly: Percent Elderly (age 75 and over)
P_NoCarHH: Percent Households with No Vehicle
RG_PovL100: Regional Average (Mean) of Percent Below 100% of the Poverty Level
RG_PovL200: Regional Average (Mean) of Percent Below 200% of the Poverty Level
RG_Minrty: Regional Average (Mean) of Percent Minority (non-White, non-Hispanic)
RG_Hisp: Regional Average (Mean) of Percent Hispanic
RG_LEP: Regional Average (Mean) of Percent Limited English Proficiency (speak English "not well" or "not at all")
RG_Disabld: Regional Average (Mean) of Percent with Disabilities
RG_Elderly: Regional Average (Mean) of Percent Elderly (age 75 and over)
RG_NoCarHH: Regional Average (Mean) of Percent Households with No Vehicle
SC_PovL100: not used in the VPI 2020 calculation
SC_PovL200: VPI Score for Below 200% of the Poverty Level (0, 1, or 2)
SC_Minrty: VPI Score for Minority (non-White, non-Hispanic) (0, 1, or 2)
SC_Hisp: VPI Score for Hispanic (0, 1, or 2)
SC_LEP: VPI Score for Limited English Proficiency (0, 1, or 2)
SC_Disabld: VPI Score for Disabilities (0, 1, or 2)
SC_Elderly: VPI Score for Elderly (age 75 and over) (0, 1, or 2)
SC_NoCarHH: VPI Score for Households with No Vehicle (0, 1, or 2)
VPI_2020: Total VPI Score (0 minimum to 14 maximum)

Additional information on equity planning at BMC is available on the BMC website.
Sources: Baltimore Metropolitan Council, U.S. Census Bureau 2016–2020 American Community Survey 5-Year Estimates. Margins of error are not shown.
Updated: April 2022
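The VPI scoring rule described above (a score of 0, 1, or 2 per variable, summed over seven variables) can be sketched as follows; the function names are illustrative, and the treatment of values exactly at an interval boundary is an assumption:

```python
def vpi_score(value, regional_mean, regional_max):
    """Score one demographic variable: tracts at or below the regional
    mean score 0; the range between mean and max is split into two
    equal intervals scoring 1 (lower) and 2 (upper)."""
    if value <= regional_mean:
        return 0
    midpoint = regional_mean + (regional_max - regional_mean) / 2
    return 1 if value <= midpoint else 2

def vpi(tract_values, means, maxes):
    """Total VPI for a tract: sum of the seven variable scores (0-14)."""
    return sum(vpi_score(v, m, x)
               for v, m, x in zip(tract_values, means, maxes))
```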
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The service displays data in the form of an indicator for emissions related to the macro-sector "Other sources and removals" of the Regional Inventory of Atmospheric Emissions (IREA). Estimates are calculated with the INEMAR (air emissions inventory) system, based on the EMEP-CORINAIR methodology, and relate to sources classified according to the SNAP (Selected Nomenclature for Air Pollution) nomenclature. They are classified according to the following parameters: reference year, province and municipality, reference activity according to the SNAP methodology (macrosector, sector, and emissive activity), fuel used, and pollutant emitted. The main pollutants exposed are: CH4 (t/year); CO (t/year); CO2 (kt/year); N2O (t/year); NH3 (t/year); NMVOC (t/year); NOx (t/year); PM10 (t/year); PM2.5 (t/year); PTS (t/year); SO2 (t/year). The data are rounded to the fourth decimal place. The service exposes the data at four different spatial resolutions: Municipalities, Provinces, Region, and Air Quality Zones. Through a special function in the Environmental Knowledge System it is possible to view the themed inventory data according to three different types of statistical classification (Jenks, Equal Interval, Quantile). The WFS service can also be used in any desktop GIS (e.g. QGIS).
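Of the three classification schemes the service offers for theming the data, Equal Interval and Quantile are simple enough to sketch (Jenks optimization is more involved); these helper functions are illustrative only, not part of the service:

```python
def equal_interval_breaks(values, k):
    """Upper class breaks for k classes of equal width."""
    lo, hi = min(values), max(values)
    step = (hi - lo) / k
    return [lo + step * i for i in range(1, k + 1)]

def quantile_breaks(values, k):
    """Upper class breaks placing roughly equal counts in each class."""
    s = sorted(values)
    n = len(s)
    return [s[min(n - 1, (n * i) // k)] for i in range(1, k + 1)]

emissions = [0.1, 0.4, 1.2, 2.5, 3.3, 7.8, 9.0, 10.0]
print(equal_interval_breaks(emissions, 4))  # equal-width classes
print(quantile_breaks(emissions, 4))        # equal-count classes
```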
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The service displays data in the form of an indicator for emissions related to the macro-sector "Non-industrial combustion" of the Regional Inventory of Atmospheric Emissions (IREA). Estimates are calculated with the INEMAR (air emissions inventory) system, based on the EMEP-CORINAIR methodology, and relate to sources classified according to the SNAP (Selected Nomenclature for Air Pollution) nomenclature. They are classified according to the following parameters: reference year, province and municipality, reference activity according to the SNAP methodology (macrosector, sector, and emissive activity), fuel used, and pollutant emitted. The main pollutants exposed are: CH4 (t/year); CO (t/year); CO2 (kt/year); N2O (t/year); NH3 (t/year); NMVOC (t/year); NOx (t/year); PM10 (t/year); PM2.5 (t/year); PTS (t/year); SO2 (t/year). The data are rounded to the fourth decimal place. The service exposes the data at four different spatial resolutions: Municipalities, Provinces, Region, and Air Quality Zones. Through a special function in the Environmental Knowledge System it is possible to view the themed inventory data according to three different types of statistical classification (Jenks, Equal Interval, Quantile). The WFS service can also be used in any desktop GIS (e.g. QGIS).