Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Preliminary analysis report generated automatically by the iMAP to provide a summary of conserved taxonomy assigned to OTUs and the initial analysis of OTUs and taxa data. The preliminary analysis report was automatically saved in the “reports” folder as “report4_preliminary_analysis.html”. (HTML 20379 kb)
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Unsupervised exploratory data analysis (EDA) is often the first step in understanding complex data sets. While summary statistics are among the most efficient and convenient tools for exploring and describing sets of data, they are often overlooked in EDA. In this paper, we show multiple case studies that compare the performance, including clustering, of a series of summary statistics in EDA. The summary statistics considered here are pattern recognition entropy (PRE), the mean, standard deviation (STD), 1-norm, range, sum of squares (SSQ), and X4, which are compared with principal component analysis (PCA), multivariate curve resolution (MCR), and/or cluster analysis. PRE and the other summary statistics are direct methods for analyzing data; they are not factor-based approaches. To quantify the performance of summary statistics, we use the concept of the “critical pair,” which is employed in chromatography. The data analyzed here come from different analytical methods. Hyperspectral images, including one of a biological material, are also analyzed. In general, PRE outperforms the other summary statistics, especially in image analysis, although a suite of summary statistics is useful in exploring complex data sets. While PRE results were generally comparable to those from PCA and MCR, PRE is easier to apply. For example, there is no need to determine the number of factors that describe a data set. Finally, we introduce the concept of divided spectrum-PRE (DS-PRE) as a new EDA method. DS-PRE increases the discrimination power of PRE. We also show that DS-PRE can be used to provide the inputs for the k-nearest neighbor (kNN) algorithm. We recommend PRE and DS-PRE as rapid new tools for unsupervised EDA.
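As a rough illustration of how such summary statistics can be computed directly from a data matrix (one spectrum or sample per row), the sketch below uses NumPy and treats PRE as the Shannon entropy of each spectrum after normalizing it to unit sum; that definition, the array names, and the omission of X4 are assumptions made for illustration, not details taken from the paper.

```python
# Hedged sketch: row-wise summary statistics for unsupervised EDA of a data
# matrix X (one spectrum/sample per row). PRE is computed here as the Shannon
# entropy of each spectrum normalized to unit sum -- an assumed definition.
import numpy as np

def summary_statistics(X):
    X = np.asarray(X, dtype=float)
    stats = {
        "mean":   X.mean(axis=1),
        "std":    X.std(axis=1, ddof=1),
        "1-norm": np.abs(X).sum(axis=1),
        "range":  X.max(axis=1) - X.min(axis=1),
        "ssq":    (X ** 2).sum(axis=1),
    }
    # Assumed PRE: treat each spectrum as a probability distribution and
    # compute its Shannon entropy in bits.
    p = np.abs(X) / np.abs(X).sum(axis=1, keepdims=True)
    logp = np.log2(np.where(p > 0, p, 1.0))   # log2(1) = 0 avoids log(0)
    stats["pre"] = -(p * logp).sum(axis=1)
    return stats

# Example: 5 synthetic "spectra" with 100 channels each
rng = np.random.default_rng(0)
X = rng.random((5, 100))
for name, values in summary_statistics(X).items():
    print(name, np.round(values, 3))
```

DS-PRE, as the name suggests, presumably applies the same entropy calculation to sections of each spectrum rather than to the whole spectrum, which is what gives it finer discrimination.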
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
BACKGROUND: The Health Insurance Institute of Slovenia (ZZZS) began publishing service-related data in May 2023, following a directive from the Ministry of Health (MoH). The ZZZS website provides easily accessible information about the services provided by individual doctors, including their names. The user is provided relevant information about the doctor's employer, including whether it is a public or private institution. The data provided is useful for studying the public system's operations and identifying any errors or anomalies.
METHODS: The data for services provided in May 2023 was downloaded and analysed. The published data were cross-referenced using the provider's RIZDDZ number with the daily updated data on ambulatory workload from June 9, 2023, published by ZZZS. The data mentioned earlier were found to be inaccurate and were improved using alerts from the zdravniki.sledilnik.org portal. Therefore, they currently provide an accurate representation of the current situation. The total number of services provided by each provider in a given month was determined by adding up the individual services and then assigning them to the corresponding provider.
RESULTS: A pivot table was created to identify 307 unique operators, with 15 operators not appearing in both lists. There are 66 public providers, which make up about 72% of the contractual programme in the public system, and 241 private providers, which account for about 28%. In May 2023, public providers accounted for 69% (n=646,236) of services in the family medicine system, while private providers contributed 31% (n=291,660); in total, 937,896 services were provided. Three linear correlations were analysed. The initial analysis of the entire sample yielded an R-squared of .998 (adjusted R-squared = .996) with a significance level below 0.001. The second analysis, of data from private providers, showed an R-squared of .904 (adjusted R-squared = .886), indicating a strong correlation between the variables, again with a significance level below 0.001. The third analysis, of data from public providers, showed the strongest explanatory power, with an R-squared of 1.000 (adjusted R-squared = 1.000) and a p-value below 0.001.
CONCLUSION: Our analysis shows a strong linear correlation between the size of the contracted programme and the number of services rendered by family medicine providers. The linear correlation is stronger among providers in the public system than among those in the private system. Our study found that private providers generally offer more services than public providers. However, it is important to acknowledge that the framework for evaluating services may have inherent flaws: issuing a prescription and resuscitating a patient are both counted as one service. It is crucial to closely monitor trends and identify comparable databases for pairing at the secondary and tertiary levels.
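The linear fits reported above can be reproduced on a comparable extract with a few lines of Python; the sketch below is a minimal illustration with hypothetical file and column names, not the authors' actual code.

```python
# Hedged sketch: simple linear regression of services rendered on contracted
# programme size, reporting R-squared, adjusted R-squared, and the p-value.
# The input file and column names are hypothetical placeholders.
import pandas as pd
from scipy import stats

df = pd.read_csv("providers_may2023.csv")          # hypothetical extract
x = df["contract_programme_size"]
y = df["services_rendered"]

res = stats.linregress(x, y)
n, k = len(df), 1                                   # observations, predictors
r2 = res.rvalue ** 2
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)

print(f"R-squared = {r2:.3f}, adjusted R-squared = {adj_r2:.3f}, p = {res.pvalue:.3g}")

# The same fit can be repeated on the public-only and private-only subsets,
# e.g. df[df["provider_type"] == "public"], to compare the three correlations.
```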
Open Government Licence - Canada 2.0: https://open.canada.ca/en/open-government-licence-canada
License information was derived automatically
Offshore wind represents a potentially significant source of low-carbon energy for Canada, and ensuring that relevant, high-quality data and scientifically sound analyses are brought forward into decision-making processes will increase the chances of success for any future deployment of offshore wind in Canada. To support this objective, CanmetENERGY-Ottawa (CE-O), a federal laboratory within Natural Resources Canada (NRCan), completed a preliminary analysis of relevant considerations for offshore wind, with an initial focus on Atlantic Canada. To conduct the analysis, CE-O used geographic information system (GIS) software and methods and engaged with multiple federal government departments to acquire relevant data and obtain insights from subject matter experts on the appropriate use of these data in the context of the analysis. The purpose of this work is to support the identification of candidate regions within Atlantic Canada that could become designated offshore wind energy areas in the future. The study area for the analysis included the Gulf of St. Lawrence, the western and southern coasts of the island of Newfoundland, and the coastal waters south of Nova Scotia. Twelve input data layers representing various geophysical, ecological, and ocean use considerations were incorporated as part of a multi-criteria analysis (MCA) approach to evaluate the effects of multiple inputs within a consistent framework. Six scenarios were developed which allow for visualization of a range of outcomes according to the influence weighting applied to the different input layers and the suitability scoring applied within each layer. This preliminary assessment resulted in the identification of several areas which could be candidates for future designated offshore wind areas, including the areas of the Gulf of St. Lawrence north of Prince Edward Island and west of the island of Newfoundland, and areas surrounding Sable Island. This study is subject to several limitations, namely missing and incomplete data, lack of emphasis on temporal and cumulative effects, and the inherent subjectivity of the scoring scheme applied. Further work is necessary to address data gaps and take ecosystem wide impacts into account before deployment of offshore wind projects in Canada’s coastal waters. Despite these limitations, this study and the data compiled in its preparation can aid in identifying promising locations for further review. A description of the methodology used to undertake this study is contained in the accompanying report, available at the following link: https://doi.org/10.4095/331855. This report provides in depth detail into how these data layers were compiled and details any analysis that was done on the data to produce the final data layers in this package.
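As an illustration of the weighted-overlay logic behind such a multi-criteria analysis, the sketch below combines toy raster layers into a composite suitability surface; the layers, suitability scoring, and weights are illustrative assumptions, not the twelve layers or six scenarios used in the study.

```python
# Hedged sketch of a raster-style multi-criteria analysis (MCA): each input
# layer is rescaled to a 0-1 suitability score, multiplied by a scenario
# weight, and summed into a composite suitability surface. All values here
# are illustrative only.
import numpy as np

shape = (200, 200)                                  # toy grid
rng = np.random.default_rng(1)

# Illustrative input layers (in a real analysis these come from GIS rasters)
wind_speed   = rng.uniform(6, 11, shape)            # m/s at hub height
water_depth  = rng.uniform(5, 300, shape)           # m
dist_to_port = rng.uniform(0, 200, shape)           # km

def rescale(layer, best, worst):
    """Map layer values to 0-1 suitability, 1 = best, 0 = worst."""
    score = (layer - worst) / (best - worst)
    return np.clip(score, 0.0, 1.0)

suitability = {
    "wind":  rescale(wind_speed, best=11, worst=6),
    "depth": rescale(water_depth, best=20, worst=300),
    "port":  rescale(dist_to_port, best=0, worst=200),
}
weights = {"wind": 0.5, "depth": 0.3, "port": 0.2}   # one hypothetical scenario

composite = sum(weights[k] * suitability[k] for k in weights)
print(f"mean composite suitability: {composite.mean():.3f}")
```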
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Pre-analysis plans (PAPs) have been championed as a solution to the problem of research credibility, but without any evidence that PAPs actually bolster the credibility of research. We analyze a representative sample of 195 PAPs registered on the Evidence in Governance and Politics (EGAP) and American Economic Association (AEA) registration platforms to assess whether PAPs registered in the early days of pre-registration (2011-2016) were sufficiently clear, precise and comprehensive to achieve their objective of preventing “fishing” and reducing the scope for post-hoc adjustment of research hypotheses. We also analyze a subset of 93 PAPs from projects that resulted in publicly available papers to ascertain how faithfully they adhere to their pre-registered specifications and hypotheses. We find significant variation in the extent to which PAPs registered during this period accomplished the goals they were designed to achieve. We discuss these findings in light of both the costs and benefits of pre-registration, showing how our results speak to the various arguments that have been made in support of and against PAPs. We also highlight the norms and institutions that will need to be strengthened to augment the power of PAPs to improve research credibility and to create incentives for researchers to invest in both producing and policing them.
This project delves into the workflow and results of regression models on monthly and daily utility data (meter readings of electricity consumption), outlining a process for screening and gathering useful results from inverse models. Energy modeling predictions created in Building Energy Optimization software (BEopt) Version 2.0.0.3 (BEopt 2013) are used to infer causes of differences among similar homes. This simple data analysis is useful for the purposes of targeting audits and maximizing the accuracy of energy savings predictions with minimal costs. The data for this project are from two adjacent military housing communities of 1,166 houses in the southeastern United States. One community was built in the 1970s, and the other was built in the mid-2000s. Both communities are all electric; the houses in the older community were retrofitted with ground source heat pumps in the early 1990s, and the newer community was built to an early version of ENERGY STAR with air source heat pumps. The houses in the older community will receive phased retrofits (approximately 10 per month) in the coming years. All houses have had daily electricity metering readings since early 2011. This project explores a dataset at a simple level and describes applications of a utility data normalization. There are far more sophisticated ways to analyze a dataset of dynamic, high resolution data; however, this report focuses on simple processes to create big-picture overviews of building portfolios as an initial step in a community-scale analysis. TO4 9.1.2: Comm. Scale Military Housing Upgrades
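For readers unfamiliar with this kind of inverse model, the sketch below regresses monthly electricity use on heating and cooling degree days, one common normalization for all-electric homes; the column names, degree-day base, and model form are assumptions for illustration and do not reproduce the project's actual analysis.

```python
# Hedged sketch: a simple inverse model regressing monthly electricity use on
# heating and cooling degree days (base 65F). File and column names are
# hypothetical; BEopt itself is not involved here.
import pandas as pd
import numpy as np

df = pd.read_csv("monthly_kwh.csv")                 # hypothetical meter extract
# Expected columns (assumed): kwh, hdd65, cdd65

X = np.column_stack([np.ones(len(df)), df["hdd65"], df["cdd65"]])
y = df["kwh"].to_numpy()

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
baseload, heating_slope, cooling_slope = coef
pred = X @ coef
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()

print(f"baseload ~ {baseload:.0f} kWh/month, "
      f"heating ~ {heating_slope:.1f} kWh/HDD, "
      f"cooling ~ {cooling_slope:.1f} kWh/CDD, R2 = {r2:.2f}")
# Homes whose residuals or slopes differ sharply from their neighbors are
# candidates for targeted audits.
```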
Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
Supplementary tables and figure for the paper "Spatio-temporal analysis for detection of pre-symptomatic shape changes in neuro-degenerative diseases: initial application to the GENFI cohort", testing different numbers of clusters for the spatio-temporal regression. The supplementary materials show results for 2, 4, 6, 8, 12, 14, and 16 clusters; the 10-cluster analysis is in the paper.
Background: Central line-associated bloodstream infections (CLABSI) are associated with significant morbidity and mortality. This condition is therefore the focus of quality initiatives, which primarily use audit and feedback to improve performance. However, feedback of quality data inconsistently affects clinician behavior. A hypothesis for this inconsistency is that a lack of comprehension of CLABSI data by decision makers prevents behavior change. In order to rigorously test this hypothesis, a comprehension scale is necessary. Therefore, we sought to develop a scale to assess comprehension of CLABSI quality metric data.
Methods: The initial instrument was constructed via an exploratory approach, including literature review and iterative item development. The developed instrument was administered to a sample of clinicians, and each item was scored dichotomously as correct or incorrect. Psychometric evaluation via exploratory factor analyses (using tetrachoric correlations) and Cronbach’s alpha were used to assess dimensionality and internal consistency.
Results: 97 clinicians responded and were included. Factor analyses yielded a scale with one factor containing four items, with an eigenvalue of 2.55 and a Cronbach’s alpha of 0.82. The final solution was interpreted as an overall CLABSI “comprehension” scale given its unidimensionality and assessment of each piece of data within the CLABSI feedback report. The cohort had a mean performance on the scale of 49% correct (median = 50%).
Conclusions: We present the first psychometric evaluation of a preliminary scale that assesses clinician comprehension of CLABSI quality metric data. This scale has internal consistency, assesses clinically relevant concepts related to CLABSI comprehension, and is brief, which will assist in response rates. This scale has potential policy relevance, as it could aid efforts to make quality metrics more effective in driving practice change.
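As a pointer to how the internal-consistency figure reported above is computed, the sketch below calculates Cronbach's alpha from a respondents-by-items matrix of dichotomous scores; the toy data are made up, and the exploratory factor analysis with tetrachoric correlations is not reproduced.

```python
# Hedged sketch: Cronbach's alpha for a scale of dichotomously scored items
# (1 = correct, 0 = incorrect), rows = respondents, columns = items.
import numpy as np

def cronbach_alpha(scores):
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                              # number of items
    item_vars = scores.var(axis=0, ddof=1)           # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)       # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy example: 6 respondents answering a 4-item comprehension scale
scores = np.array([
    [1, 1, 1, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 1],
    [1, 0, 1, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 0],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```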
Three combination pollen, starch, and phytolith samples were examined from three separate pits at site IBB S4 on Tinian. Pollen, starch, and phytoliths were examined from these pits in an effort to identify whether or not any were agricultural pits or perhaps used in processing agricultural crops grown in this area. Occupation is dated to the end of the transitional Pre-Latte/Early Latte Phase, anchored by a radiocarbon age of AD 690-790 on charcoal recovered from the large pit represented by sample 26.
Thirty-five samples of sediment adhering to a variety of vessels and grinding stones from the site of Gegharot in Armenia were examined for pollen and phytoliths. Samples were obtained by washing the tools in the field, then submitting the washes to PaleoResearch Institute. They were received in two shipments, allowing preliminary analysis of the first fifteen samples to assess issues of preservation prior to completing the study. Recovery of both pollen and phytoliths from the initial samples indicated sufficient remains for a complete analysis. Because many of these washes yielded similar remains, the analysis focused on recovering the “minor elements” of each record. Crops, weeds, and regional vegetation all appear to be represented. Samples span the Early and Late Bronze Age. One sample attributed to the Iron III occupation was also examined.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This collection contains a snapshot of the learning resource metadata from ESIP's Data Management Training Clearinghouse (DMTC) associated with the closeout (March 30, 2023) of the Institute of Museum and Library Services funded project (Award Number: LG-70-18-0092-18), Development of an Enhanced and Expanded Data Management Training Clearinghouse. The shared metadata are a snapshot associated with the final reporting date for the project, and the associated data report is based upon the same data snapshot on the same date.
The materials included in the collection consist of the following:
esip-dev-02.edacnm.org.json.zip - a zip archive containing the metadata for 587 published learning resources as of March 30, 2023. These metadata include all publicly available metadata elements for the published learning resources with the exception of the metadata elements containing individual email addresses (submitter and contact) to reduce the exposure of these data.
statistics.pdf - an automatically generated report summarizing information about the collection of materials in the DMTC Clearinghouse, including both published and unpublished learning resources. This report includes the numbers of published and unpublished resources through time; the number of learning resources within subject categories and detailed subject categories, the dates items assigned to each category were first added to the Clearinghouse, and the most recent date on which items were added to each category; the distribution of learning resources across target audiences; and the frequency of keywords within the learning resource collection. This report is based on the metadata for published resources included in this collection and on preliminary metadata for unpublished learning resources that are not included in the shared dataset.
The metadata fields consist of the following:
abstract_data: A brief synopsis or abstract of the learning resource.
abstract_format: Declaration of how the abstract description is represented.
access_conditions: Conditions under which the resource can be accessed beyond cost, e.g., login required.
access_cost: Yes or No choice stating whether there is a fee for access to or use of the resource.
accessibililty_features_name: Content features of the resource, such as accessible media, alternatives, and supported enhancements for accessibility.
accessibililty_summary: A human-readable summary of specific accessibility features or deficiencies.
author_names: List of authors for a resource, derived by the system from the given/first and family/last names of the personal author fields.
author_org:
- name: Name of the organization authoring the learning resource.
- name_identifier: The unique identifier for the organization authoring the resource.
- name_identifier_type: The identifier scheme associated with the unique identifier for the organization authoring the resource.
authors:
- givenName: Given or first name of person(s) authoring the resource.
- familyName: Last or family name of person(s) authoring the resource.
- name_identifier: The unique identifier for the person(s) authoring the resource.
- name_identifier_type: The identifier scheme associated with the unique identifier for the person(s) authoring the resource, e.g., ORCID.
citation: Preferred form of citation.
completion_time: Intended time to complete.
contact:
- name: Name of person(s) asserted as the contact(s) for the resource in case of questions or follow-up by resource users.
- org: Name of the organization asserted as the contact for the resource in case of questions or follow-up by resource users.
- email: (excluded) Contact email address.
contributor_orgs:
- name: Name of an organization that is a secondary contributor to the learning resource. A contributor can also be an individual person.
- name_identifier: The unique identifier for the organization contributing to the resource.
- name_identifier_type: The identifier scheme associated with the unique identifier for the organization contributing to the resource.
- type: Type of contribution to the resource made by an organization.
contributors: familyName, givenName, name_identifier, and name_identifier_type for person(s) contributing to the resource.
contributors.type: Type of contribution to the resource made by a person.
created: The date on which the metadata record was first saved as part of the input workflow.
creator: The name of the person creating the metadata record for a resource.
credential_status: Declaration of whether a credential is offered for completion of the resource.
ed_frameworks:
- name: The name of the educational framework to which the resource is aligned, if any. An educational framework is a structured description of educational concepts such as a shared curriculum, syllabus, or set of learning objectives, or a vocabulary for describing some other aspect of education such as educational levels or reading ability.
- description: A description of one or more subcategories of an educational framework with which a resource is associated.
- nodes.name: The name of a subcategory of an educational framework with which a resource is associated.
expertise_level: The skill level targeted for the topic being taught.
id: Unique identifier for the metadata record, generated by the system in UUID format.
keywords: Important phrases or words used to describe the resource.
language_primary: Original language in which the learning resource being described is published or made available.
languages_secondary: Additional languages in which the resource is translated or made available, if any.
license: A license for use that applies to the resource, typically indicated by URL.
locator_data: The identifier for the learning resource used as part of a citation, if available.
locator_type: Designation of the citation locator type, e.g., DOI, ARK, Handle.
lr_outcomes: Descriptions of what knowledge, skills, or abilities students should learn from the resource.
lr_type: A characteristic that describes the predominant type or kind of learning resource.
media_type: Media type of the resource.
modification_date: System-generated date and time when the metadata record is modified.
notes: Metadata record input notes.
pub_status: Status of the metadata record within the system, i.e., in-process, in-review, pre-pub-review, deprecate-request, deprecated, or published.
published: Date of first broadcast/publication.
publisher: The organization credited with publishing or broadcasting the resource.
purpose: The purpose of the resource in the context of education, e.g., instruction, professional education, assessment.
rating: The aggregation of input from all user assessments evaluating users' reactions to the learning resource, following Kirkpatrick's model of training evaluation.
ratings: Inputs from users assessing each user's reaction to the learning resource, following Kirkpatrick's model of training evaluation.
resource_modification_date: Date on which the resource was last modified from the original published or broadcast version.
status: System-generated publication status of the resource within the registry: yes for published, no for not published.
subject: Subject domain(s) toward which the resource is targeted. There may be more than one value for this field.
submitter_email: (excluded) Email address of the person who submitted the resource.
submitter_name: Submission contact person.
target_audience: Audience(s) for which the resource is intended.
title: The name of the resource.
url: URL that resolves to a downloadable version of the learning resource or to a landing page for the resource that contains important contextual information, including the direct resolvable link to the resource, if applicable.
usage_info: Descriptive information about using the resource not addressed by the license field.
version: The specific version of the resource, if declared.
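A minimal sketch of how the shared metadata snapshot might be loaded and summarized is shown below; it assumes the zip archive unpacks to JSON documents containing the fields listed above (either one record per file or a single array), which may not match the actual internal layout of esip-dev-02.edacnm.org.json.zip.

```python
# Hedged sketch: load learning-resource metadata records from the zip archive
# and tally keywords and subjects. The assumed file layout inside the archive
# may differ from the real one.
import json
import zipfile
from collections import Counter

records = []
with zipfile.ZipFile("esip-dev-02.edacnm.org.json.zip") as zf:
    for name in zf.namelist():
        if not name.endswith(".json"):
            continue
        data = json.loads(zf.read(name))
        records.extend(data if isinstance(data, list) else [data])

print(f"{len(records)} learning resources loaded")

keyword_counts = Counter(kw for rec in records for kw in rec.get("keywords", []))
subject_counts = Counter(s for rec in records for s in rec.get("subject", []))

print("Top keywords:", keyword_counts.most_common(10))
print("Top subjects:", subject_counts.most_common(10))
```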
The Sri Lanka Contraceptive Prevalence Survey (CPS) is a national sample survey designed to obtain information on contraceptive use and fertility. This survey was conducted in 1982 by the Department of Census and Statistics, Ministry of Plan Implementation, in collaboration with Westinghouse Health Systems of Columbia, Maryland, U.S.A.
The Department of Census and Statistics, through the CPS, has obtained and tabulated data on levels of fertility and on knowledge, use, and availability of contraceptives for the entire island as well as for urban and rural areas. These data were obtained by interviewing a nationally representative probability sample of about 4,500 ever-married women in the age group 15-49. The interviews were conducted by rigorously trained female interviewers of the Department of Census and Statistics under careful supervision.
The field work lasted approximately two months, from February to March 1982. Preliminary findings from the survey were presented and discussed at a seminar held in Colombo on 4th August 1982.
Western Province Colombo Gampaha Kalutara
Central Province Kandy Matale Nuwara Eliya
Southern Province Galle Matara Hambantota
Northern Province Jaffna Mannar Vavuniya Mullaitivu
Eastern Province Batticaloa Trincomalee Amparai
North Western Province Kurunegala Puttalam
North Central Province Anuradhapura Polonnaruwa
Uva Province Badulla Moneragala
Sabaragamuwa Province Ratnapura Kegalle
All ever-married women 15-49 years old living in housing units, a housing unit being defined as a place of residence separate from other places of residence and with independent access. (One or more households could occupy one housing unit.) The population living in places other than housing units, such as institutions, was excluded.
All ever-married women 15-49 years old living in housing units.
Sample survey data [ssd]
The sample was a nationally representative probability sample drawn from a two stage design. In the first stage, a sample of Census Blocks was drawn from the predetermined strata. In the second stage a sample of housing units was drawn from each selected Census Block. All ever-married women aged 15-49 who lived in the selected housing units or who spent the previous night in the unit were interviewed in detail.
First Stage Selection
The country was stratified into two strata: urban and rural areas. It was decided to select a sample of about 4,500 respondents spread over 540 Census Blocks. A Census Block is an area assigned to an enumerator at the 1981 Census of Population and Housing for the purpose of enumeration. The survey estimates were required at the national level, and it was therefore decided to allocate the sample proportionally to the stratum population, defined as the female population aged 15-49. This required selecting 90 Census Blocks from the urban stratum and 450 from the rural stratum. The required number of blocks within each stratum was then selected from among the 24 administrative districts, the number selected from each district being proportional to the stratum population within the district.
Second Stage Selection
The second stage consisted of selecting households from lists of housing units. These lists were obtained from the Pre-listing Forms prepared for the 1981 Census and were updated by the procedure outlined in the next section. The procedure for selection of households was as follows.
In the urban Census Blocks, a systematic sample of 15 housing units was selected from the list of such units: starting from a randomly selected unit, every unit at an interval equal to one fifteenth of the number of units in the block was selected into the sample. In the rural Census Blocks, clusters of approximately ten housing units were formed and one cluster was selected at random from each block. All households in every selected housing unit, whenever there was more than one in a unit, were included in the sample.
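The urban second-stage rule described above amounts to a standard systematic sample; the sketch below illustrates it with a synthetic block list standing in for the 1981 Census pre-listing forms.

```python
# Hedged sketch: systematic selection of 15 housing units from an urban Census
# Block list (random start, then every unit at an interval of one fifteenth of
# the block size). The block list is synthetic.
import random

def systematic_sample(units, n=15, seed=None):
    rng = random.Random(seed)
    interval = len(units) / n                        # one fifteenth of the block
    start = rng.uniform(0, interval)                 # random starting point
    picks = [int(start + i * interval) for i in range(n)]
    return [units[i] for i in picks]

block = [f"HU-{i:03d}" for i in range(1, 121)]       # a block of 120 housing units
print(systematic_sample(block, n=15, seed=42))
```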
Listing of Housing Units
The target population of the survey was all ever-married women 15-49 years old living in housing units. A housing unit was defined as a place of residence separate from other places of residence and with independent access. One or more households could occupy one housing unit.
The population living in places other than housing units, such as institutions, was excluded. The effect of this exclusion on the survey estimates was considered to be small, as the population living in non-housing units at the 1981 Census was a very small proportion, approximately 2 per cent. The sample frame for the survey was the Pre-listing Forms of the 1981 Census. A Pre-listing Form was prepared for each Census Block and contained a list of all housing units and non-housing units in the Census Block. The Pre-listing Forms of the selected Census Blocks were updated by the range Statistical Investigators of the Department. These officers were also the ones who had prepared and later updated the lists for the Census and were quite familiar with the updating procedures. However, they were given specific instructions on updating, asking them to delete demolished and vacant units and to insert in the proper place any new units that had come up since the Census.
While the survey was going on, it was found that some selected housing units were vacant, some were non-existent, and some could not be located by their addresses. However, the proportion of such units was quite small, only 2.7%, and is unlikely to have caused a bias in the selection procedure.
Face-to-face [f2f]
The survey questionnaire was an adaptation of the core questionnaire developed by the Westinghouse Health System to collect information relating to family planning management. The questionnaire has two main sections:
Household Schedule
The household schedule was used for listing all females present, regardless of their eligibility, and for recording their background information. Names of females who usually resided in the household and of female visitors who spent the previous night in the household were recorded in this schedule. For each of these women, age, date of birth, and marital status were entered, and based on this information the interviewers decided and recorded the eligibility of each woman for the individual interview. A woman was eligible for the individual interview if she met all of the following three criteria: 1. she was 15 through 49 years old; 2. she had been or was currently married; 3. she was in the household on the night prior to the interview.
Individual Questionnaire
The individual questionnaire consisted of the following five sections: Section I - Respondent's Background; Section II - Fertility; Section III - Fertility Regulation; Section IV - Contraceptive Availability; Section V - Husband's Status.
In adapting the core questionnaire to meet the country's requirements, some additional questions were included. Timing of future births and breast-feeding were added to Section II; motivation to adopt family planning, approval of family planning, and induced abortions were added to Section III; and problems related to family planning services were added to Section IV.
The questionnaire was translated into the two national languages, Sinhala and Tamil. The translations were independently re-translated into English and compared with the original to ensure exactness of the translation. The questionnaires and all other survey documents were printed by the Printing Division of the Department.
EDITING, CODING, TABULATION AND ANALYSIS
Seventeen of the interviewers and two supervisors were retained for manual editing and coding. These officers were given detailed instructions in editing and coding procedures by two senior officers who were also responsible for the preparation of edit specifications and the coding instructions. A coder was, on average, expected to edit, code, and check 20 schedules per day. All responses to questions were given specific numeric, machine-readable values. Since all but two questions used pre-coded responses, the work of the coders was fairly simple and it progressed smoothly.
Computer processing of the data was carried out by the Data Processing Division of the Department of Census and Statistics. Data were key punched directly from the schedules. Error printouts were returned to the editors and coders for correction. At the end of each correction, the files were updated and the edit program was re-run until a clean data file was obtained. The specified tabulations were prepared well within the allotted time of 2½ months, from June to early August. Each tabulation was checked for likely errors and internal consistency, and it was possible to make the necessary corrections without much delay. These tabulations were made available to any interested institution in order to enable the data from the survey to be used as early as possible.
A preliminary analysis of the data was carried out by a team of six staff members of the Department of Census and Statistics. In this task they were assisted by the Westinghouse representative, whose advice and comments were particularly valuable in the presentation of results. These findings were presented at the seminar held in Colombo in August 1982.
INTRODUCTION: COVID-19 emerged in late 2019 and quickly became a serious public health problem worldwide. This study aims to describe the epidemiological course of cases and deaths due to COVID-19 and their impact on hospital bed occupancy rates in the first 45 days of the epidemic in the state of Ceará, Northeastern Brazil. METHODS: The study used an ecological design with data gathered from multiple government and health care sources. Data were analyzed using Epi Info software. RESULTS: The first cases were confirmed on March 15, 2020. After 45 days, 37,268 cases had been reported in 85.9% of Ceará’s municipalities, with 1,019 deaths. Laboratory test positivity reached 84.8% at the end of April, a period in which more than 700 daily tests were processed. The average age of cases was 67 (<1 - 101) years, most occurred in a hospital environment (91.9%), and 58% required hospitalization in an ICU bed. The average time between the onset of symptoms and death was 18 (1 - 56) days. Patients who died in the hospital had spent an average of six (0 - 40) days hospitalized. Across Ceará, the bed occupancy rate reached 71.3% in the wards and 80.5% in the ICU. CONCLUSIONS: The first 45 days of the COVID-19 epidemic in Ceará revealed a large number of cases and deaths, spreading initially among the population with a high socioeconomic status. Despite the efforts of the health services and the social isolation measures, the health system still collapsed.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
A tile layer map service was created using the TIFF file "al092024_HELENE_storm_surge_inundation_estimate1.tiff" (SLOSH Hindcast Storm Surge Estimate) from the National Hurricane Program. The zipped file (al092024_0927_HELENE_stormsurge_inundation_estimate.zip) was downloaded by Sarah Ellis. The TIFF file was clipped to Hillsborough County, projected to WGS 1984 Web Mercator (auxiliary sphere) from GCS_North_American_1983, cached, and published as a tile layer map service.
Initial preliminary best estimate of storm surge inundation (above ground) for Hurricane HELENE (2024) from post-landfall SLOSH simulation(s) and inundation post-processing produced by the National Hurricane Center Storm Surge Unit. See raster attribute table for 1 foot inundation bin ranges displayed by the raster values. Some inundation areas mapped may occur over wetlands, water areas, or other uninhabited areas. Some wetlands areas do not include inundation data. Levee areas not mapped for inundation are labeled with bin value 99.
This preliminary analysis is based on very limited initial post-storm observations from tide gauges and does not incorporate the extensive survey data that becomes available long after the immediate post-storm response. The SLOSH simulation includes tide and storm surge. The analysis does not include rainfall, freshwater flooding, river discharge, wave setup, or wave runup.
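The clipping and reprojection steps described above can be approximated with open-source tooling; the sketch below uses rasterio and geopandas rather than the GIS software actually used, and the county boundary file and output names are hypothetical.

```python
# Hedged sketch: clip a storm-surge GeoTIFF to a county boundary and reproject
# it to WGS 1984 Web Mercator (EPSG:3857). The boundary shapefile and output
# file names are assumptions; only the source TIFF name comes from the record.
import rasterio
import rasterio.mask
from rasterio.warp import calculate_default_transform, reproject, Resampling
import geopandas as gpd

SRC = "al092024_HELENE_storm_surge_inundation_estimate1.tiff"
DST = "helene_surge_hillsborough_webmercator.tif"

# Clip to the county boundary (boundary file is hypothetical)
county = gpd.read_file("hillsborough_county.shp")
with rasterio.open(SRC) as src:
    county = county.to_crs(src.crs)                  # match the raster CRS
    clipped, clip_transform = rasterio.mask.mask(src, county.geometry, crop=True)
    meta = src.meta.copy()
    meta.update(height=clipped.shape[1], width=clipped.shape[2],
                transform=clip_transform)

with rasterio.open("clipped_tmp.tif", "w", **meta) as tmp:
    tmp.write(clipped)

# Reproject the clipped raster to Web Mercator
with rasterio.open("clipped_tmp.tif") as src:
    dst_crs = "EPSG:3857"
    transform, width, height = calculate_default_transform(
        src.crs, dst_crs, src.width, src.height, *src.bounds)
    out_meta = src.meta.copy()
    out_meta.update(crs=dst_crs, transform=transform, width=width, height=height)
    with rasterio.open(DST, "w", **out_meta) as dst:
        for band in range(1, src.count + 1):
            reproject(
                source=rasterio.band(src, band),
                destination=rasterio.band(dst, band),
                src_transform=src.transform, src_crs=src.crs,
                dst_transform=transform, dst_crs=dst_crs,
                resampling=Resampling.nearest)       # nearest keeps integer bins
```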
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Overview: This repository contains supplementary materials for the qualitative study presented in the paper "Exploring a user-centered approach for movement-based features in interaction design" by Antonio Escamilla, Javier Melenchón, Carlos Monzo, Jose Antonio Morán, and Juan Pablo Carrascal. These materials provide additional context, methodological details, and analytical tools that complement the findings discussed in the main publication.
Contents: * Codebook for Motion Analysis in Interaction Design Study: A comprehensive codebook detailing the coding scheme used for analyzing interview data about motion and movement-based features in interaction design contexts. * Flow of Codes into Categories and Preliminary Themes: Interactive Jupyter notebook demonstrating the analytical process of how initial codes were consolidated into categories and preliminary themes, providing transparency into the qualitative analysis workflow. * Interview Protocol: The complete interview protocol used for data collection with interaction design practitioners, including questions and prompts that guided the semi-structured interviews. * Open Coding Summary: Summary document of the initial open coding phase, showing the range of codes identified during the first round of analysis. * Thematic Map Visualization: Interactive Jupyter notebook containing visualizations of the thematic structure that emerged from the analysis, illustrating relationships between themes and categories. * Themes Search: Documentation of the iterative process of theme refinement and development, showing how themes evolved throughout the analytical process. * coding_analysis_flow: Interactive HTML visualization of the complete qualitative analysis workflow, from initial coding to final theme development. * Manuscript Appendix with Technical Specifications of the Implementation: Technical context to understand the movement-based features and their visualization strategies, with comprehensive hardware and software specifications.
Purpose: These supplementary materials provide researchers, practitioners, and educators with transparent access to the methodological approach and analytical process used in our study of how designers conceptualize and implement movement-based features in interaction design. The materials support reproducibility and offer additional context for interpreting the findings presented in the main paper. Citation: If you use these materials in your research or practice, please cite: Escamilla, A., Melenchón, J., Monzo, C., Morán, J. A., & Carrascal, J. P. (2025). Exploring a user-centered approach for movement-based features in interaction design. Contact: For questions regarding these materials, please contact the corresponding author, Antonio Escamilla, at the Escuela de Ingenierías, Universidad Pontificia Bolivariana, Medellín, Colombia.
https://spdx.org/licenses/CC0-1.0.html
Discrepancies in geographic variation patterns between nuclear DNA and mitochondrial DNA (mtDNA) are the result of the complicated differentiation processes in organisms and the key to understanding their true evolutionary process. The genetic differentiation of the northern and southern Izu lineages of the Japanese newt Cynops pyrrhogaster was investigated through their single nucleotide polymorphism variations by multiplexed ISSR genotyping by sequencing (MIG-seq). We found three genetic groups (Tohoku, N-Kanto, and S-Kanto) in the northern lineage that were not detected by mtDNA. N-Kanto has intermediate genetic characteristics between Tohoku and S-Kanto. The western populations of N-Kanto are close to S-Kanto, whereas the eastern populations of N-Kanto are close to Tohoku. Tohoku, N-Kanto, and S-Kanto are now moderately isolated from each other and have unique genetic characteristics. An estimation of the evolutionary history by the Approximate Bayesian Computation approach suggested that Tohoku diverged from the common ancestor of S-Kanto and S-Izu. Then, S-Kanto and S-Izu split, and recent hybridization between Tohoku and S-Kanto gave rise to N-Kanto. The origin of N-Kanto through hybridization is relatively young and seems to be related to changes in the distributions of Tohoku and S-Kanto as a result of the climatic oscillation in the Pleistocene. We concluded that the mitochondrial genome of S-Kanto was captured into Tohoku and that the original mitochondrial genome of Tohoku was entirely swept out through the hybridization.
Methods
To obtain de novo genome-wide SNP data for C. pyrrhogaster, multiplexed inter-simple sequence repeat (ISSR) genotyping by sequencing (MIG-seq) analysis (Suyama & Matsuki, 2015) was conducted. The MIG-seq method is a PCR-based reduced-representation sequencing approach that has been applied to various species (e.g., Matsui et al., 2019; Sato et al., 2021). All experimental procedures for MIG-seq analysis are the same as those described by Suyama & Matsuki (2015); the procedure is briefly described here. The first PCR was conducted to amplify ISSR regions from genomic DNA with MIG-seq primer set-1 (Suyama & Matsuki, 2015) and the Multiplex PCR Assay Kit Ver. 2 (TaKaRa). The first PCR conditions were as follows: initial heating at 94°C (1 min), 27 cycles of 94°C (30 sec), 48°C (1 min), and 72°C (1 min), and a final 10 min extension at 72°C. The second PCR was done using PrimeSTAR GXL DNA Polymerase (TaKaRa) under the following conditions: initial heating at 94°C (1 min), then 15 cycles of 98°C (10 sec), 54°C (15 sec), and 68°C (30 sec). Then, 3 μl of each second PCR product was collected in one tube to form a mixed library. The purification and size selection (300–800 bp) of the mixed library were conducted using the BluePippin DNA size selection system (Sage Science, Beverly, MA, USA). DNA concentration of the size-selected library was measured using the Agilent 4200 TapeStation (Agilent Technologies, CA, USA). The sequencing was conducted on the Illumina MiSeq System (Illumina, San Diego, CA, USA) using the mixed library at a final concentration of 10 pM with a MiSeq Reagent Kit v3 (150 cycles, Illumina). Sequencing of the first 17 bases of the R1 read and 3 bases (anchor region) of the R2 read was skipped using the 'DarkCycle' option of the MiSeq system. Both ends of the fragment (R1 and R2) were read by paired-end sequencing to determine 80 and 94 bp, respectively.
Obtained raw sequence data were preprocessed and quality-filtered, mainly following the procedures of Sato et al. (2021). Sequences derived from adapter primers that were occasionally left in or repeated at the opposite 3'-ends of the reads were cut out twice by Cutadapt 1.14 (Martin, 2011), with ≥ 8 bp accordance and ≤ 10% base mismatch. Then, we trimmed the low-quality (Phred score < 10) 3'-tails of each read using Cutadapt and excluded the shorter sequences (R1 reads < 70 bp and R2 reads < 80 bp in length) with a custom Perl script. All quality-filtered R2 reads, which were longer on average and confirmed to have sufficient quality relative to R1 reads, were analyzed for de novo SNP calling with Stacks 1.45 (Catchen et al., 2013). First, all R2 reads from each sample were clustered into putative loci within the sample based on sequence resemblance using the Ustacks program. From a preliminary analysis of 20 randomly selected samples from nearly all sampling locations, we set the thresholds for minimum depth of coverage and maximum number of mismatched bases between alleles at each locus to 4 and 4, respectively. Shared loci among samples were detected using the Cstacks program, with a threshold value of 2 for the maximum number of mismatched bases among alleles from other samples, as determined by a preliminary analysis of the same samples as above. Finally, SNP calling was made using the Stacks and Populations programs to create a SNP matrix table with less than 50% missing values for each locus in all samples. To avoid biased output of missing values due to subpopulation structure, all samples were tentatively treated as a single population (predefinition of subpopulations was not adopted). We further selected SNPs that were found in more than 70% of the samples (more than 126 samples) and with heterozygosity less than 0.5, to avoid using loci with excess heterozygosity, and then selected 177 ingroup samples of 28 populations from the 181 ingroup samples, excluding the outgroup samples of the central and western lineages, with missing values of less than 40% (dataset 1) using TASSEL 5.0 software (Bradbury et al., 2007). The ancestry inference and genetic structure of the entire sample based on the SNP loci obtained by MIG-seq were then assessed using Admixture 1.23 (Alexander and Lange, 2011). The most probable number of genetic clusters (K) was estimated in the Admixture software by computing maximum-likelihood estimates of the parameters. Ten independent simulations with cross-validation were run for each value of K from 1 to 10 to investigate the convergence of samples. The minimization of cross-validation error among all runs was used to estimate the most likely value of K. To compare with the results from the Admixture analysis, principal component analysis (PCA) was conducted on the same dataset in TASSEL 5.0. To validate the effect of isolation by distance, we calculated FST/(1-FST) among the 28 ingroup populations as genetic distances using the Populations program of Stacks. The Mantel test based on Pearson's product-moment correlation (r) was applied to assess the significance of the correlation between FST/(1-FST) and geographic distances using the vegan R package (Dixon, 2003). We also constructed a Neighbor-Net network based on concatenated SNP sequences using estimated uncorrected p-distances, including the outgroup samples, with SplitsTree 4 software (Huson and Bryant, 2006).
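As an illustration of the Mantel test step, the sketch below correlates a linearized FST matrix (FST/(1-FST)) with geographic distances using a simple permutation test in Python; the matrices are synthetic placeholders, and the study itself used the vegan R package.

```python
# Hedged sketch of a simple Mantel test: Pearson correlation between two
# distance matrices with significance assessed by permuting one matrix.
# Synthetic matrices stand in for FST/(1-FST) and geographic distance.
import numpy as np

def mantel(d1, d2, permutations=999, seed=0):
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(d1, k=1)               # use upper triangle only
    v1, v2 = d1[iu], d2[iu]
    r_obs = np.corrcoef(v1, v2)[0, 1]
    count, n = 0, d1.shape[0]
    for _ in range(permutations):
        perm = rng.permutation(n)
        r_perm = np.corrcoef(d1[np.ix_(perm, perm)][iu], v2)[0, 1]
        if abs(r_perm) >= abs(r_obs):
            count += 1
    return r_obs, (count + 1) / (permutations + 1)

# Synthetic example with 28 "populations"
rng = np.random.default_rng(1)
coords = rng.random((28, 2)) * 100
geo = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
fst_lin = geo / 500 + rng.normal(0, 0.02, geo.shape)
fst_lin = (fst_lin + fst_lin.T) / 2                  # keep the matrix symmetric
np.fill_diagonal(fst_lin, 0)

r, p = mantel(fst_lin, geo)
print(f"Mantel r = {r:.3f}, p = {p:.3f}")
```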
For this analysis, we selected SNPs that were found in more than 50% of the samples (more than 95 of 189 samples) and less than 0.5 heterozygosity to avoid using the loci that have excess heterozygosity and then selected 186 samples, including those of the central and western lineages, with missing values of less than 40% in each sample using TASSEL 5.0 (dataset 2). To infer the genetic relationships among recognized genetic groups in Admixture analysis and PCA, we conducted Approximate Bayesian Computation (ABC) to test for different diversification process scenarios (Beaumont et al. 2002; Beaumont 2010). The analysis was conducted with DIYABC 2.1.0 (Cornuet et al. 2014), following the detailed manual of the software. For this step, we divided the ingroup samples (dataset 1) into four populations recognized in Admixture analysis and PCA and excluded the SNPs that were missing for all samples in each population according to DIYABC 2.1.0 software settings. Each scenario involved four genetic groups of samples, corresponding to Group 1 (Tohoku: pops. 1–7), Group 2 (N-Kanto: pops. 8–21), Group 3 (S-Kanto: pops. 22–27), and Group 4 (S-Izu: pop. 28). The following eight scenarios were tested for the four populations (Fig. 2).
Background and Aims: Blood metabolite abnormalities have revealed an association with cholestatic liver diseases (CLDs), while the underlying metabolic mechanisms have remained unclear. Accordingly, the present evaluation aims to investigate the causal relationship between blood metabolites and the risk of two major CLDs, primary biliary cholangitis (PBC) and primary sclerosing cholangitis (PSC).
Methods: Univariable and multivariable Mendelian randomization (MR) approaches were employed to uncover potential causal associations between blood metabolites and the two CLDs, PBC and PSC, by extracting instrumental variables (IVs) for metabolites from genome-wide association studies (GWAS) conducted on European individuals. The GWAS summary data for PBC and PSC were sourced from two distinct datasets. The initial analysis employed inverse variance weighted (IVW) estimation and an array of sensitivity analyses, followed by replication and meta-analysis utilizing FinnGen consortium data. Finally, a multivariable MR analysis was carried out to ascertain the independent effects of each metabolite. Furthermore, the web-based tool MetaboAnalyst 5.0 was used to perform metabolic pathway examination.
Results: A genetic causality between 15 metabolites and CLDs was recognized after preliminary analysis and false discovery rate (FDR) correction. Subsequently, 9 metabolites consistently showed an association through replication and meta-analysis. Additionally, the independent causal effects of 7 metabolites were corroborated by multivariable MR analysis. Specifically, the metabolites isovalerylcarnitine (odds ratio [OR] = 3.146, 95% confidence interval [CI]: 1.471–6.726, p = 0.003), valine (OR = 192.44, 95% CI: 4.949–7483.27, p = 0.005), and mannose (OR = 0.184, 95% CI: 0.068–0.499, p < 0.001) were found to have a causal relationship with the occurrence of PBC. Furthermore, erythrose (OR = 5.504, 95% CI: 1.801–16.821, p = 0.003), 1-stearoylglycerophosphocholine (OR = 6.753, 95% CI: 2.621–17.399, p = 7.64 × 10−5), X-11847 (OR = 0.478, 95% CI: 0.352–0.650, p = 2.28 × 10−6), and X-12405 (OR = 3.765, 95% CI: 1.771–8.005, p = 5.71 × 10−4) were independently associated with the occurrence of PSC. The analysis of metabolic pathways identified seven significant pathways across the two CLDs.
Conclusion: The findings of the present study have unveiled robust causal relationships between 7 metabolites and 2 CLDs, thereby providing novel insights into the metabolic mechanisms and therapeutic strategies for these disorders.
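For context on the IVW estimator named in the Methods, the sketch below computes a fixed-effect inverse-variance-weighted MR estimate from per-SNP summary statistics; the inputs are synthetic, and the sensitivity analyses, replication, and FDR correction are not reproduced.

```python
# Hedged sketch: fixed-effect inverse-variance-weighted (IVW) Mendelian
# randomization estimate from per-SNP summary statistics (exposure and outcome
# effect sizes with outcome standard errors). All inputs are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_snp = 20
beta_exp = rng.normal(0.1, 0.03, n_snp)              # SNP -> metabolite effects
true_effect = 0.4                                    # assumed causal effect
beta_out = true_effect * beta_exp + rng.normal(0, 0.01, n_snp)
se_out = np.full(n_snp, 0.01)

weights = (beta_exp / se_out) ** 2                   # IVW weights
ratio = beta_out / beta_exp                          # per-SNP Wald ratios
beta_ivw = np.sum(weights * ratio) / np.sum(weights)
se_ivw = 1.0 / np.sqrt(np.sum(weights))
z = beta_ivw / se_ivw
p = 2 * stats.norm.sf(abs(z))

print(f"IVW estimate = {beta_ivw:.3f} (SE {se_ivw:.3f}), "
      f"OR = {np.exp(beta_ivw):.2f}, p = {p:.2g}")
```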
https://dataintelo.com/privacy-and-policy
According to our latest research, the global hair drug testing market size reached USD 1.32 billion in 2024, and it is poised for robust growth with a projected CAGR of 7.1% from 2025 to 2033. By the end of 2033, the market is expected to attain a value of USD 2.44 billion. This upward trajectory is primarily driven by the increasing adoption of hair drug testing methods across various industries for their accuracy and long detection window, as well as growing regulatory requirements and workplace safety initiatives worldwide.
The expansion of the hair drug testing market is significantly influenced by the rising awareness about the limitations of traditional drug testing methods, such as urine and blood tests, which often have shorter detection windows and are more susceptible to adulteration. Hair drug testing offers a much longer detection period, sometimes up to 90 days, which makes it highly valuable for organizations aiming to maintain drug-free environments, especially in safety-sensitive industries. Additionally, advancements in analytical technologies have made hair drug testing more reliable, efficient, and cost-effective, further boosting its adoption. The integration of automation and AI-driven analyzers has also improved the throughput and accuracy of these tests, making them more attractive for large-scale screening programs.
Another critical growth driver for the hair drug testing market is the tightening of regulations and the implementation of stringent workplace policies across multiple regions, especially in North America and Europe. Employers are increasingly required to ensure a drug-free workplace, not only to comply with legal mandates but also to enhance productivity, reduce workplace accidents, and lower insurance costs. This regulatory push is particularly strong in sectors such as transportation, construction, and healthcare, where safety is paramount. Furthermore, the criminal justice system's growing reliance on hair drug testing for probation, parole, and child custody cases is expanding the end-user base for these solutions, contributing to sustained market growth.
Emerging economies, particularly in the Asia Pacific region, are also witnessing a surge in demand for hair drug testing, driven by rapid industrialization, urbanization, and a growing focus on workplace safety and public health. As awareness of substance abuse issues increases and governments introduce more comprehensive drug testing policies, the adoption of hair drug testing technologies is expected to accelerate. Moreover, the expansion of drug treatment centers and hospitals in these regions is creating new avenues for market penetration. This global momentum is supported by continuous innovation, partnerships, and investments from leading market players, ensuring that the hair drug testing market remains on a growth trajectory well into the next decade.
From a regional perspective, North America currently dominates the hair drug testing market, accounting for the largest revenue share, followed by Europe and Asia Pacific. The United States, in particular, has a well-established framework for workplace drug testing, and the presence of key market players enhances the region's leadership. Europe is experiencing steady growth due to increasing regulatory harmonization and public health initiatives, while Asia Pacific is emerging as the fastest-growing region, propelled by economic development and rising awareness. Latin America and the Middle East & Africa are also expected to witness moderate growth, supported by expanding healthcare infrastructure and evolving legal frameworks.
The hair drug testing market by product type is segmented into hair drug testing kits, hair drug testing analyzers, and others. Hair drug testing kits represent a significant share of the market due to their convenience, portability, and ease of use, making them suitable for on-site screening in workplaces, law enforcement, and healthcare settings. These kits typically include collection tools and reagents that allow for rapid sample collection and preliminary analysis, streamlining the initial stages of the testing process. The growing demand for point-of-care diagnostics and the need for immediate results are fueling the adoption of these kits, especially in remote or resource-limited environments.
Hair drug testing anal
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
An Open Context "types" dataset item. Open Context publishes structured data as granular, URL identified Web resources. This record is part of the "Digital Companion to "A Preliminary Analysis of the Iron Age III Faunal Remains from Tell Ta`yinat, Turkey (Ancient Kunulua)"" data publication.
The Integrated Household Survey is one of the primary instruments implemented by the Government of Malawi through the National Statistical Office (NSO) roughly every 5 years to monitor and evaluate the changing conditions of Malawian households. The IHS data have, among other insights, provided benchmark poverty and vulnerability indicators to foster evidence-based policy formulation and monitor the progress of meeting the Millennium Development Goals (MDGs) as well as the goals listed as part of the Malawi Growth and Development Strategy (MGDS).
National
Households
Members of the following households are not eligible for inclusion in the survey: - All people who live outside the selected EAs, whether in urban or rural areas. - All residents of dwellings other than private dwellings, such as prisons, hospitals and army barracks. - Members of the Malawian armed forces who reside within a military base. (If such individuals reside in private dwellings off the base, however, they should be included among the households eligible for random selection for the survey.) - Non-Malawian diplomats, diplomatic staff, and members of their households. (However, note that non-Malawian residents who are not diplomats or diplomatic staff and are resident in private dwellings are eligible for inclusion in the survey. The survey is not restricted to Malawian citizens alone.) - Non-Malawian tourists and others on vacation in Malawi.
Sample survey data [ssd]
The IHS3 sampling frame is based on the listing information and cartography from the 2008 Malawi Population and Housing Census (PHC); includes the three major regions of Malawi, namely North, Center and South; and is stratified into rural and urban strata. The urban strata include the four major urban areas: Lilongwe City, Blantyre City, Mzuzu City, and the Municipality of Zomba. All other areas are considered as rural areas, and each of the 27 districts were considered as a separate sub-stratum as part of the main rural stratum. It was decided to exclude the island district of Likoma from the IHS3 sampling frame, since it only represents about 0.1% of the population of Malawi, and the corresponding cost of enumeration would be relatively high. The sampling frame further excludes the population living in institutions, such as hospitals, prisons and military barracks. Hence, the IHS3 strata are composed of 31 districts in Malawi. A stratified two-stage sample design was used for the IHS3. Note: Detailed sample design information is presented in the "Third Integrated Household Survey 2010-2011, Basic Information Document" document.
The total sample size for the IHS3 was 12,288 households sampled from a total of 768 EAs. At the end of the survey, a total of 12,271 households were interviewed. Of the 12,271 interviewed households, 688 were replacements (6 percent).
Face-to-face [f2f]
Data Entry:
Data Entry Clerks
Each IHS3 field team was assigned one data entry clerk to process completed questionnaires at the team's field-based residence. Each data entry clerk was issued a laptop with the CSPro-based data entry application, a printer to produce error reports on entered questionnaires, and flash disks for transferring files. The field-based data entry clerk's primary responsibilities included: (1) receiving the completed questionnaires following the field supervisor's initial screening; (2) organizing and entering completed questionnaires in a timely manner; (3) generating and printing error reports for supervisor review; (4) modifying data after errors were resolved and authorized by the field supervisor; and (5) managing data files and local data back-ups.
The data entry clerk was responsible for beginning initial data entry upon receipt of questionnaires from the field and for generating error reports as quickly as possible after interviews were complete in the EA. When long-distance travel to an enumeration area was required and the field team had to spend multiple days away from their field residence, the data entry clerk was required to travel with the team in order to maintain data processing schedules.
Field Based Data Entry and CAFE
To better facilitate higher-quality data and increase the timely availability of data during the data capture process, the IHS3 utilized computer-assisted field entry (CAFE). First data entry was conducted by field-based data entry clerks immediately following completion of the team's daily field activities. Each team was equipped with one laptop computer for field-based data entry using a CSPro-based application. The range and consistency checks built into the CSPro application were informed by the LSMS-ISA experience in Tanzania and Uganda and by the review of the IHS2 data. Prior programming of the data entry application allowed a wide variety of range and consistency checks to be conducted and reported, and potential issues to be investigated and corrected, before closing the assigned enumeration area. Completed data were frequently relayed to the NSO central office in Zomba via email and were tracked and processed upon receipt.
Double Data Entry
Double data entry was implemented by a team of data entry clerks based at the NSO central office. Electronic data and questionnaires received from the field were catalogued by the Data Manager, and electronic data were loaded onto a central server to enable data entry verification on networked computers.
Quality Checks:
To increase quality, the Data Entry Manager monitored the data verification staff and conducted quality assessments by randomly selecting processed questionnaires and comparing the physical questionnaires to the results of double data entry. Data verification clerks were coached on inconsistencies when required.
Data Cleaning
The data cleaning process was done in several stages over the course of field work and through preliminary analysis. The first stage of data cleaning was conducted in the field by the field teams, utilizing error reports produced by the data entry applications. Field supervisors collected reports for each enumeration area and household and, in coordination with the enumerators, reviewed, investigated, and corrected errors. Due to the quick turn-around in error reporting, it was possible to conduct call-backs while the team was still operating in the enumeration area when required. Corrections to the data were entered by the field-based data entry clerk before transmitting data to the NSO central office. Upon receipt of the data from the field, module and cross-module checks were performed using Stata to identify systematic issues and, where applicable, field teams were asked to investigate, revise, and resend data for questionnaires still in their possession. Revised data files were catalogued and then replaced the previous version of the data. After data verification by the headquarters' double data entry team, data from the first data entry and the second data entry were compared. Cases that revealed large inconsistencies between the first and second data entry, specifically large amounts of missing case-level data in the second data entry relative to the first, were completely re-entered. Further, variable-specific inconsistency reports were generated, investigated, and corrected by the double data entry team. Additional cleaning was performed after the double data entry team's cleaning activities, where appropriate, to resolve systematic errors and organize data modules for consistency and efficient use. Case-by-case cleaning was also performed during the preliminary analysis, specifically pertaining to out-of-range and outlier variables. All cleaning activities were conducted in collaboration with the WB staff providing technical assistance to the NSO in the design and implementation of the IHS3.
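The double-entry comparison described above can be illustrated with a small script that aligns the two entry files and flags disagreements; the file names and key columns below are hypothetical, and the IHS3 itself used CSPro applications for this step.

```python
# Hedged sketch: compare first and second data entry of the same questionnaires
# and report cells that disagree or are missing in one entry. File names and
# key columns are hypothetical placeholders.
import pandas as pd

KEYS = ["ea_code", "household_id"]                   # assumed case identifiers

first = pd.read_csv("entry_first.csv").set_index(KEYS).sort_index()
second = pd.read_csv("entry_second.csv").set_index(KEYS).sort_index()

# Align on common cases and columns before comparing
common_cols = first.columns.intersection(second.columns)
first, second = first[common_cols].align(second[common_cols], join="inner")

diff = first.compare(second)                         # cells that differ
missing_in_second = second.isna() & first.notna()

print(f"{len(diff)} cases with at least one mismatching value")
print(f"{int(missing_in_second.values.sum())} values present in the first entry but missing in the second")
```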
99.9 percent