Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Correlations (r) among cognitive variables, collapsing across all four studies.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
analyze the consumer expenditure survey (ce) with r

the consumer expenditure survey (ce) is the primo data source to understand how americans spend money. participating households keep a running diary about every little purchase over the year. those diaries are then summed up into precise expenditure categories. how else are you gonna know that the average american household spent $34 (±2) on bacon, $826 (±17) on cellular phones, and $13 (±2) on digital e-readers in 2011? an integral component of the market basket calculation in the consumer price index, this survey recently became available as public-use microdata, and they're slowly releasing historical files back to 1996. hooray!

for a taste of what's possible with ce data, look at the quick tables listed on their main page - these tables contain approximately a bazillion different expenditure categories broken down by demographic groups. guess what? i just learned that americans living in households with $5,000 to $9,999 of annual income spent an average of $283 (±90) on pets, toys, hobbies, and playground equipment (pdf page 3). you can often get close to your statistic of interest from these web tables. but say you wanted to look at domestic pet expenditure among only households with children between 12 and 17 years old. another one of the thirteen web tables - the consumer unit composition table - shows a few different breakouts of households with kids, but none matching that exact population of interest. the bureau of labor statistics (bls) (the survey's designers) and the census bureau (the survey's administrators) have provided plenty of the major statistics and breakouts for you, but they're not psychic. if you want to comb through this data for specific expenditure categories broken out by a you-defined segment of the united states' population, then let a little r into your life. fun starts now.

fair warning: only analyze the consumer expenditure survey if you are a nerd to the core. the microdata ship with two different survey types (interview and diary), each containing five or six quarterly table formats that need to be stacked, merged, and manipulated prior to a methodologically-correct analysis. the scripts in this repository contain examples to prepare 'em all, just be advised that magnificent data like this will never be no-assembly-required. the folks at bls have posted an excellent summary of what's available - read it before anything else. after that, read the getting started guide. don't skim.

a few of the descriptions below refer to sas programs provided by the bureau of labor statistics. you'll find these in the C:\My Directory\CES\2011\docs directory after you run the download program.
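for a quick, hypothetical taste of the download-and-import step (the actual download program loops through every year and every file on the bls site; the url and file names below are placeholders, not real paths):

    # not the repository's download program - just a one-file sketch
    library(foreign)                                   # write.dta(), for a stata-readable copy

    setwd( "C:/My Directory/CES/2011" )

    one.file.url <- "https://example.invalid/ce/2011/somefile.csv"   # placeholder url

    tf <- tempfile()
    download.file( one.file.url , tf , mode = 'wb' )

    # import the comma-separated file into r
    x <- read.csv( tf , stringsAsFactors = FALSE )

    # save as an r data file (.rda) and a stata-readable file (.dta)
    save( x , file = "somefile.rda" )
    write.dta( x , "somefile.dta" )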
this new github repository contains three scripts:

2010-2011 - download all microdata.R
• loop through every year and download every file hosted on the bls's ce ftp site
• import each of the comma-separated value files into r with read.csv
• depending on user-settings, save each table as an r data file (.rda) or stata-readable file (.dta)

2011 fmly intrvw - analysis examples.R
• load the r data files (.rda) necessary to create the 'fmly' table shown in the ce macros program documentation.doc file
• construct that 'fmly' table, using five quarters of interviews (q1 2011 thru q1 2012)
• initiate a replicate-weighted survey design object
• perform some lovely li'l analysis examples
• replicate the %mean_variance() macro found in "ce macros.sas" and provide some examples of calculating descriptive statistics using unimputed variables
• replicate the %compare_groups() macro found in "ce macros.sas" and provide some examples of performing t-tests using unimputed variables
• create an rsqlite database (to minimize ram usage) containing the five imputed variable files, after identifying which variables were imputed based on pdf page 3 of the user's guide to income imputation
• initiate a replicate-weighted, database-backed, multiply-imputed survey design object
• perform a few additional analyses that highlight the modified syntax required for multiply-imputed survey designs
• replicate the %mean_variance() macro found in "ce macros.sas" and provide some examples of calculating descriptive statistics using imputed variables
• replicate the %compare_groups() macro found in "ce macros.sas" and provide some examples of performing t-tests using imputed variables
• replicate the %proc_reg() and %proc_logistic() macros found in "ce macros.sas" and provide some examples of regressions and logistic regressions using both unimputed and imputed variables

replicate integrated mean and se.R
• match each step in the bls-provided sas program "integrated mean and se.sas" but with r instead of sas
• create an rsqlite database when the expenditure table gets too large for older computers to handle in ram
• export a table "2011 integrated mean and se.csv" that exactly matches the contents of the sas-produced "2011 integrated mean and se.lst" text file

click here to view these three scripts for...
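here's a minimal sketch of the replicate-weighted design step from the analysis script, written for flavor rather than as the repository's actual code. the weight column names (finlwt21, wtrep01-wtrep44), the brr replication type, and the analysis columns (totexppq, sexref) are all assumptions - confirm them against the ce macros program documentation before relying on any numbers.

    # minimal sketch only - assumes the 'fmly' table has already been constructed
    library(survey)

    load( "fmly.rda" )                                  # hypothetical file holding the 'fmly' table

    fmly.design <-
        svrepdesign(
            data = fmly ,
            weights = ~finlwt21 ,                       # full-sample weight (assumed column name)
            repweights = "wtrep[0-9]+" ,                # replicate weight columns (assumed names)
            type = "BRR" ,
            combined.weights = TRUE
        )

    # descriptive statistic, in the spirit of the %mean_variance() macro
    svymean( ~totexppq , fmly.design , na.rm = TRUE )   # totexppq: hypothetical expenditure column

    # t-test between two groups, in the spirit of the %compare_groups() macro
    svyttest( totexppq ~ sexref , fmly.design )         # sexref: hypothetical grouping column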
Intermediate-age (IA) star clusters in the Large Magellanic Cloud (LMC) present extended main-sequence turn-offs (MSTO) that have been attributed to either multiple stellar populations or an effect of stellar rotation. Recently it has been proposed that these extended main sequences can also be produced by ill-characterized stellar variability. Here we present Gemini-S/Gemini Multi-Object Spectrograph (GMOS) time-series observations of the IA cluster NGC 1846. Using differential image analysis, we identified 73 new variable stars, 55 of them of the Delta Scuti type, that is, pulsating variables close to the MSTO for the cluster age. Considering completeness and background contamination effects, we estimate the number of δ Sct stars belonging to the cluster to be between 40 and 60 members, although this number is based on the detection of a single δ Sct within the cluster half-light radius. This number of variable stars at the MSTO level will not produce significant broadening of the MSTO, although higher-resolution imaging will be needed to rule out variable stars as a major contributor to the extended MSTO phenomenon. Though modest, this number of δ Sct stars makes NGC 1846 the star cluster with the largest number of these variables discovered to date. Lastly, our results present a cautionary tale about the adequacy of shallow variability surveys in the LMC (like OGLE) for deriving properties of its δ Sct population. Cone search capability for table J/AJ/155/183/table1 (positions, mean magnitudes, r amplitudes, periods, and classification for all of the variables discovered in the NGC 1846 field).
Attribution-NonCommercial 4.0 (CC BY-NC 4.0) https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
The ability to predict enzyme–metal–organic framework (MOF) properties such as enzyme loading, immobilization yield, activity retention, and reusability can maximize product yield and extend the operational life of enzyme–MOF biocatalysts. However, this is challenging due to the vast combinations of available metal and ligand building blocks for MOFs and enzymes. Therefore, several machine learning (ML) algorithms are applied in this study, using data collected from 127 journal articles, to estimate these biocatalyst parameters. Twelve input variables, including the metal and ligand properties of the MOF as well as the enzyme properties, were integrated and fed into two ML algorithms, random forest and Gaussian process regression (GPR), to predict the model outputs. A 10-fold cross-validation approach with grid search was applied to obtain the optimal hyperparameter values. The random forest model (RFM) provided more accurate estimates of the enzyme loading, immobilization yield, and reusability of the biocatalyst than the GPR model, with relatively high R² values of 0.85, 0.77, and 0.91, respectively. Both models are less effective in predicting enzyme activity retention, however, with R² values of 0.63 or lower. Sensitivity analysis of the input variables revealed the most significant variables for each output parameter, allowing further optimization of the RFM. The final RFM was then tested against a second, unseen dataset collected from experiments. The findings confirmed the validity of the predictive model, with a relative error of less than 25%. Our model can aid in the synthesis of enzyme–MOF biocatalysts by providing valuable estimates of these output parameters for different MOF precursors and enzymes, saving experimental time and cost.
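As a rough illustration of the random-forest portion of this workflow (not the authors' code), the sketch below performs a manual 10-fold cross-validated grid search over the mtry hyperparameter in R; the data frame mof_data, its descriptor columns, and the response column enzyme_loading are hypothetical stand-ins for the curated literature dataset.

    # Sketch only: mof_data is assumed to hold the twelve descriptor columns plus
    # one response column, enzyme_loading (all hypothetical names).
    library(randomForest)

    set.seed(1)
    folds <- sample(rep(1:10, length.out = nrow(mof_data)))   # assign each row to one of 10 folds
    mtry.grid <- c(2, 4, 6, 8)                                 # candidate values for the grid search

    cv.rmse <- sapply(mtry.grid, function(m) {
        fold.err <- sapply(1:10, function(k) {
            fit  <- randomForest(enzyme_loading ~ ., data = mof_data[folds != k, ],
                                 mtry = m, ntree = 500)
            pred <- predict(fit, mof_data[folds == k, ])
            sqrt(mean((pred - mof_data$enzyme_loading[folds == k])^2))
        })
        mean(fold.err)                                         # average RMSE across the 10 folds
    })

    best.mtry <- mtry.grid[which.min(cv.rmse)]
    final.rfm <- randomForest(enzyme_loading ~ ., data = mof_data,
                              mtry = best.mtry, ntree = 500, importance = TRUE)
    importance(final.rfm)   # variable importance, loosely analogous to the sensitivity analysis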
We describe the construction of a highly reliable sample of ~7000 optically faint periodic variable stars with light curves obtained by the asteroid survey LINEAR across 10,000 deg^2 of the northern sky. The majority of these variables have not been cataloged yet. The sample flux limit is several magnitudes fainter than most other wide-angle surveys; the photometric errors range from ~0.03 mag at r=15 to ~0.20 mag at r=18. Light curves include on average 250 data points, collected over about a decade. Using Sloan Digital Sky Survey (SDSS) based photometric recalibration of the LINEAR data for about 25 million objects, we selected ~200,000 most probable candidate variables with r<17 and visually confirmed and classified ~7000 periodic variables using phased light curves. The reliability and uniformity of visual classification across eight human classifiers was calibrated and tested using a catalog of variable stars from the SDSS Stripe 82 region and verified using an unsupervised machine learning approach. The resulting sample of periodic LINEAR variables is dominated by 3900 RR Lyrae stars and 2700 eclipsing binary stars of all subtypes and includes small fractions of relatively rare populations such as asymptotic giant branch stars and SX Phoenicis stars. We discuss the distribution of these mostly uncataloged variables in various diagrams constructed with optical-to-infrared SDSS, Two Micron All Sky Survey, and Wide-field Infrared Survey Explorer photometry, and with LINEAR light-curve features. We find that the combination of light-curve features and colors enables classification schemes much more powerful than when colors or light curves are each used separately. An interesting side result is a robust and precise quantitative description of a strong correlation between the light-curve period and color/spectral type for close and contact eclipsing binary stars (β Lyrae and W UMa): as the color-based spectral type varies from K4 to F5, the median period increases from 5.9 hr to 8.8 hr. These large samples of robustly classified variable stars will enable detailed statistical studies of the Galactic structure and the physics of binary and other stars, and we make these samples publicly available.
The high-frequency phone survey of refugees monitors the economic and social impacts of the COVID-19 pandemic on refugees and nationals, and the responses to it, by calling a sample of households every four weeks. The main objective is to inform timely and adequate policy and program responses. Since the outbreak of the COVID-19 pandemic in Ethiopia, two rounds of data collection with refugees were completed between September and November 2020. The first round of the joint national and refugee HFPS was implemented between 24 September and 17 October 2020, and the second round between 20 October and 20 November 2020.
Household
Sample survey data [ssd]
The sample was drawn using simple random sampling without replacement. Expecting a high non-response rate based on experience from the HFPS-HH, we drew a stratified sample of 3,300 refugee households for the first round. More details on the sampling methodology are provided in the Survey Methodology Document, available for download under Related Materials.
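A minimal sketch of drawing such a stratified sample without replacement in R is shown below; the sampling frame, stratum labels, and per-stratum allocation are hypothetical, and the actual design is documented in the Survey Methodology Document.

    # Sketch only: 'frame' is a hypothetical sampling frame with a 'stratum' column.
    set.seed(2020)
    n.per.stratum <- c(camp = 1650, urban = 1650)      # illustrative allocation summing to 3,300

    sampled <- do.call(rbind, lapply(names(n.per.stratum), function(s) {
        stratum.rows <- frame[frame$stratum == s, ]
        stratum.rows[sample(nrow(stratum.rows), n.per.stratum[s]), ]   # SRS without replacement
    }))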
Computer Assisted Telephone Interview [cati]
The Ethiopia COVID-19 High Frequency Phone Survey of Refugee questionnaire consists of the following sections:
A more detailed description of the questionnaire is provided in Table 1 of the Survey Methodology Document, which is provided as Related Materials. The Round 1 and Round 2 questionnaires are available for download.
DATA CLEANING
At the end of data collection, the raw dataset was cleaned by the Research team. This included formatting and correcting results based on monitoring issues, enumerator feedback, and survey changes. The data cleaning carried out is detailed below.
Variable naming and labeling:
• Variable names were changed to reflect the lowercase question name in the paper survey copy, and a word or two related to the question.
• Variables were labeled with longer descriptions of their contents, and the full question text was stored in Notes for each variable.
• “Other, specify” variables were named similarly to their related question, with “_other” appended to the name.
• Value labels were assigned where relevant, with options shown in English for all variables, unless preloaded from the roster in Amharic.
Variable formatting:
• Variables were formatted as their object type (string, integer, decimal, time, date, or datetime).
• Multi-select variables were saved both as space-separated single variables and as multiple binary variables showing the yes/no value of each possible response (see the sketch after this list).
• Time and date variables were stored as POSIX timestamp values and formatted to show Gregorian dates.
• Location information was left in separate ID and Name variables, following the format of the incoming roster. IDs were formatted to include only the variable-level digits (2-3 digits only), and not the higher-level prefixes.
• Only consented surveys were kept in the dataset, and all personal information and internal survey variables were dropped from the clean dataset.
• Roster data is separated from the main dataset and kept in long form, but can be merged on the key variable (the key can also be used to merge with the raw data).
• The variables were arranged in the same order as the paper instrument, with observations arranged according to their submission time.
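The sketch below illustrates the multi-select convention referenced above: the space-separated response string is kept and also expanded into one binary variable per possible option. The data frame clean, the variable water_sources, and the option codes are hypothetical.

    # Sketch only: 'clean' and 'water_sources' are hypothetical names.
    response.codes <- c("1", "2", "3", "4")                        # possible response codes

    selected <- strsplit(as.character(clean$water_sources), " ")   # split the space-separated values
    for (opt in response.codes) {
        clean[[paste0("water_sources_", opt)]] <-
            as.integer(vapply(selected, function(x) opt %in% x, logical(1)))
    }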
Backcheck data review: Results of the backcheck survey are compared against the originally captured survey results using the bcstats command in Stata. This command compares the variables across the two datasets and identifies any discrepancies. Any discrepancies identified are then examined individually to determine whether they are within reason.
The following data quality checks were completed:
• Daily SurveyCTO monitoring: This included outlier checks, skipped questions, a review of “Other, specify” and other text responses, and enumerator comments. Enumerator comments were used to suggest new response options or to highlight situations where existing options should be used instead. Monitoring also included a review of variable relationship logic checks and checks of the logic of answers. Finally, outliers in phone variables such as survey duration or the percentage of time audio was at a conversational level were monitored. A survey duration of close to 15 minutes and a conversation-level audio percentage of around 40% were considered normal.
• Dashboard review: This included monitoring individual enumerator performance, such as the number of calls logged, duration of calls, percentage of calls responded to, and percentage of non-consents. Non-consent reason rates and attempts per household were monitored as well. Duration analysis using R was used to monitor each module's duration and estimate the time required for subsequent rounds. The dashboard was also used to track overall survey completion and preview the results of key questions.
• Daily Data Team reporting: The Field Supervisors and the Data Manager reported daily feedback on call progress, enumerator feedback on the survey, and any suggestions to improve the instrument, such as adding options to multiple-choice questions or adjusting translations.
• Audio audits: Audio recordings were captured during the consent portion of the interview for all completed interviews, for the enumerators' side of the conversation only. The recordings were reviewed for any surveys flagged by enumerators as having data quality concerns and for an additional random sample of 2% of respondents. A range of lengths was selected to observe edge cases. Most consent readings took around one minute, with some longer recordings due to questions on the survey or holding for the respondent. All reviewed audio recordings were completed satisfactorily.
• Back-check survey: Field Supervisors made back-check calls to a random sample of 5% of the households that completed a survey in Round 1. Field Supervisors called these households and administered a short survey, including (i) identifying the same respondent; (ii) determining the respondent's position within the household; (iii) confirming that a member of the data collection team had completed the interview; and (iv) a few questions from the original survey.
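As an illustration of the duration analysis in R mentioned above (not the team's actual monitoring code), the sketch below computes per-enumerator summaries of one module's duration; the submissions data frame, timestamp columns, and module name are hypothetical.

    # Sketch only: column names are hypothetical placeholders.
    library(dplyr)

    module.durations <- submissions %>%
        mutate(module_a_min = as.numeric(difftime(module_a_end, module_a_start, units = "mins"))) %>%
        group_by(enumerator) %>%
        summarise(
            n_calls    = n(),
            median_min = median(module_a_min, na.rm = TRUE),
            p90_min    = quantile(module_a_min, 0.9, na.rm = TRUE)
        )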