Using data from five Spouse Assault Replication Program (SARP) sites, this study examined the extent to which domestic violence offenders exhibit a specialized proclivity toward violence and the extent to which attack severity escalates, de-escalates, or stays about the same over time. The specialization question was examined using official arrest records from the Charlotte, North Carolina; Colorado Springs, Colorado; Milwaukee, Wisconsin; and Omaha, Nebraska sites. Escalation was examined using victim interview data from the Charlotte, Milwaukee, Omaha, and Miami-Dade, Florida sites. This collection consists of 18 SAS setup files used to recode the variables from the original datasets, organized into five groups by data collection site. This collection does not contain the original data files themselves.
The dataset was collected via a combination of the following:

1. Manual extraction of EHR-based data, followed by entry into REDCap and then analysis and further processing in SAS 9.4.
2. A pull of Epic EHR-based data from the Clarity database using standard programming techniques, followed by processing in SAS 9.4 and merging with data from REDCap.
3. Collection of data directly from participants via telephone, with entry into REDCap and further processing in SAS 9.4.
4. Collection of process measures from study team tracking records, followed by entry into REDCap and further processing in SAS 9.4.

One file in the dataset contains aggregate data generated by merging the Clarity-derived dataset with a REDCap dataset, followed by further manual processing.

Recruitment for the randomized trial began at an epilepsy clinic visit, with EHR-embedded validated anxiety and depression instruments, followed by automated EHR-based research screening consent and eligibility assessment. Fully eligible individuals later completed telephone consent, enrollment, and randomization. Thirty participants were randomized 1:1 to EHR portal versus telephone outcome assessment, and patient-reported and process outcomes were collected at 3 and 6 months, with the primary outcome being 6-month retention in the EHR arm (feasibility target: ≥11 participants retained).

Variables in this dataset include recruitment flow diagram data, baseline participant sociodemographic and clinical characteristics, retention (successful PROM collection at 6 months), and process measures. The process measures included research staff time to collect outcomes, research staff time to collect outcomes and enter data, time from the initial outcome collection reminder to outcome collection, and the number of reminders sent to participants for outcome collection. At 3 months, PROMs were collected via the randomized method only. At 6 months, if the retention criterion was not met by the randomized method (failure to return outcomes within 1 week after 5 post-due-date reminders), up to 3 additional attempts were made to collect outcomes by the alternative method, and process measures were also collected during this hybrid outcome collection approach.

Objective: To close gaps between research and clinical practice, tools are needed for efficient pragmatic trial recruitment and patient-reported outcome measure (PROM) collection. The objective was to assess feasibility and process measures for PROM collection in a randomized trial comparing electronic health record (EHR) patient portal questionnaires to telephone interviews among adults with epilepsy and anxiety or depression symptoms.

Results: Participants were 60% women and 77% White/non-Hispanic, with a mean age of 42.5 years. Among the 15 individuals randomized to the EHR portal, 10 (67%, CI 41.7-84.8%) met the 6-month retention endpoint, versus 100% (CI 79.6-100%) in the telephone group (p=0.04). EHR outcome collection at 6 months required 11.8 fewer minutes of research staff time per participant than telephone collection (5.9, CI 3.3-7.7, vs. 17.7, CI 14.1-20.2). Subsequent telephone contact after unsuccessful EHR attempts enabled near-complete data collection and still saved staff time.
Discussion: Data from this randomized pilot study of pragmatic outcome collection methods for patients with anxiety or depression symptoms in epilepsy include baseline participant characteristics, recruitment flow resulting from a novel EHR-based, care-embedded recruitment process, and retention data along with various process measures at 6 months.
Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
The Intellectual Property Government Open Data (IPGOD) includes over 100 years of registry data on all intellectual property (IP) rights administered by IP Australia. It also has derived information about the applicants who filed these IP rights, to allow for research and analysis at the regional, business and individual level. This is the 2019 release of IPGOD.
IPGOD is large, with millions of data points across up to 40 tables, making it too large to open with Microsoft Excel. Furthermore, analysis often requires information from separate tables, which calls for specialised software to merge them. We recommend that advanced users interact with the IPGOD data using appropriate tools with enough memory and compute power. This includes a wide range of analytics, programming, and statistical software such as Tableau, Power BI, Stata, SAS, R, Python, and Scala.
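As a rough illustration, here is a minimal R sketch of such a merge using data.table; the file and column names (ipgod101.csv for applications, ipgod102.csv for applicants, and the key australian_appl_no) are assumptions that should be checked against the IPGOD data dictionary.

library(data.table)

# read two large IPGOD tables without going through Excel
applications <- fread("ipgod101.csv")  # patent application table (assumed name)
applicants   <- fread("ipgod102.csv")  # applicant details table (assumed name)

# join applicant details onto applications by the shared application key
merged <- merge(applications, applicants, by = "australian_appl_no")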
IP Australia is also providing free trials of a cloud-based analytics platform, the IP Data Platform, which enables working with large intellectual property datasets such as IPGOD through the web browser, without any software installation.
The following pages can help you gain an understanding of intellectual property administration and processes in Australia to support your analysis of the dataset.
Due to changes in our systems, some tables have been affected.
Data quality has been improved across all tables.
The “Sustainable Energy for All (SE4ALL)” initiative, launched in 2010 by the UN Secretary-General, established three global objectives to be accomplished by 2030: to ensure universal access to modern energy services, to double the global rate of improvement in energy efficiency, and to double the share of renewable energy in the global energy mix. The SE4ALL database supports this initiative and provides country-level historical data for access to electricity and non-solid fuel; the share of renewable energy in total final energy consumption, by technology; and the rate of improvement in energy intensity.
Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Along-track temperature, salinity, backscatter, chlorophyll fluorescence, and normalized water-leaving radiance (nLw).
On the bow of the R/V Roger Revelle was a Satlantic SeaWiFS Aircraft Simulator (MicroSAS) system, used to estimate water-leaving radiance from the ship, analogous to the nLw derived by the SeaWiFS and MODIS satellite sensors but free from atmospheric error (hence, it can provide data below clouds).
The system consisted of a down-looking radiance sensor and a sky-viewing radiance sensor, both mounted on a steerable holder on the bow. A downwelling irradiance sensor was mounted at the top of the ship's meteorological mast, on the bow, far from any potentially shading structures. These data were used to estimate normalized water-leaving radiance as a function of wavelength. The radiance detector was set to view the water at 40deg from nadir, as recommended by Mueller et al. [2003b]. The water radiance sensor was able to view over an azimuth range of ~180deg across the ship's heading with no viewing of the ship's wake. The direction of the sensor was adjusted to view the water 90-120deg from the sun's azimuth, to minimize sun glint. This was adjusted continually: the time and the ship's gyro heading were used to calculate the sun's position with an astronomical solar-position subroutine, which was interfaced to a stepping motor attached to the radiometer mount (designed and fabricated at Bigelow Laboratory for Ocean Sciences). Protocols for operation and calibration followed Mueller [Mueller et al., 2003a; Mueller et al., 2003b; Mueller et al., 2003c]. Before 1000h and after 1400h, data quality was poorer because the solar zenith angle was too high. Post-cruise, the 10Hz data were filtered to remove as much residual whitecap and glint as possible (we accept the lowest 5% of the data). Reflectance plaque measurements were made several times at local apparent noon on sunny days to verify the radiometer calibrations.
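As a rough illustration of the steering logic (not the cruise's actual subroutine), the following R sketch computes the solar azimuth from time and position using the standard NOAA low-accuracy formulas, then offsets it to pick a sensor azimuth 90-120deg away from the sun; the 105deg offset and the example coordinates are assumptions.

# solar azimuth (degrees clockwise from north) from UTC time and position
solar_azimuth <- function(time_utc, lat, lon) {
  doy  <- as.numeric(strftime(time_utc, "%j", tz = "UTC"))
  hr   <- as.numeric(strftime(time_utc, "%H", tz = "UTC")) +
          as.numeric(strftime(time_utc, "%M", tz = "UTC")) / 60
  g    <- 2 * pi / 365 * (doy - 1 + (hr - 12) / 24)           # fractional year (rad)
  decl <- 0.006918 - 0.399912 * cos(g) + 0.070257 * sin(g) -  # solar declination (rad)
          0.006758 * cos(2 * g) + 0.000907 * sin(2 * g) -
          0.002697 * cos(3 * g) + 0.001480 * sin(3 * g)
  eqt  <- 229.18 * (0.000075 + 0.001868 * cos(g) - 0.032077 * sin(g) -
                    0.014615 * cos(2 * g) - 0.040849 * sin(2 * g))  # equation of time (min)
  ha   <- ((hr * 60 + eqt + 4 * lon) / 4 - 180) * pi / 180    # hour angle (rad), lon east-positive
  phi  <- lat * pi / 180
  cosz <- sin(phi) * sin(decl) + cos(phi) * cos(decl) * cos(ha)
  zen  <- acos(pmin(pmax(cosz, -1), 1))                       # solar zenith angle (rad)
  az   <- acos(pmin(pmax((sin(decl) - sin(phi) * cosz) /
                         (cos(phi) * sin(zen)), -1), 1)) * 180 / pi
  ifelse(ha > 0, 360 - az, az)                                # afternoon sun is west of north
}

# aim the water-viewing sensor ~105deg from the sun (example position only)
sun_az    <- solar_azimuth(Sys.time(), lat = 35, lon = -50)
sensor_az <- (sun_az + 105) %% 360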
Within an hour of local apparent noon each day, a Satlantic OCP sensor was deployed off the stern of the R/V Revelle after the ship was oriented so that the sun was off the stern. The ship would secure the starboard Z-drive and use the port Z-drive and bow thruster to move ahead at about 25 cm s-1. The OCP was then trailed aft and brought to the surface ~100m aft of the ship, then allowed to sink to 100m while downwelling spectral irradiance and upwelling spectral radiance were recorded continuously along with temperature and salinity. This procedure ensured there were no ship-shadow effects in the radiometry.
Instruments include a WET Labs WETStar fluorometer, a WET Labs ECO Triplet, and a Sea-Bird MicroTSG.
Radiometry was done using a Satlantic 7-channel MicroSAS system with Es, Lt, and Li sensors.
Chl data are based on intercalibrating discrete surface chlorophyll measurements with the temporally closest fluorescence measurements and applying the regression results to all fluorescence data.
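A minimal R sketch of that intercalibration, assuming hypothetical data frames discrete_chl (columns time, chl) and fluor_log (columns time, fluorescence):

# pair each discrete chlorophyll sample with the nearest-in-time fluorescence reading
idx   <- sapply(discrete_chl$time, function(t) which.min(abs(fluor_log$time - t)))
pairs <- data.frame(chl = discrete_chl$chl, fl = fluor_log$fluorescence[idx])

# regress discrete chlorophyll on fluorescence, then apply to the full underway record
fit <- lm(chl ~ fl, data = pairs)
fluor_log$chl_est <- predict(fit, newdata = data.frame(fl = fluor_log$fluorescence))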
Data have been corrected for instrument biofouling and drift based on weekly pure-water calibrations of the system. Radiometric data have been processed using standard Satlantic processing software and checked with periodic plaque measurements using a 2% Spectralon standard.
Lw is calculated from Lt and Lsky and is "what Lt would be if the sensor were looking straight down." Since our sensors are mounted at 40deg, per various NASA protocols, we need to do that conversion. nLw adds Es to the mix: Es is used to normalize Lw. nLw is related to Rrs, the remote-sensing reflectance.
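A minimal R sketch of the standard above-water chain implied here; rho (the sea-surface reflectance factor, ~0.028 for this viewing geometry, after Mobley [1999]) and the single-band example values are assumptions, not values from this dataset:

rho  <- 0.028   # sea-surface reflectance factor for ~40deg viewing (Mobley 1999)
Lt   <- 1.25    # total radiance from the water-viewing sensor (example value)
Lsky <- 3.10    # radiance from the sky-viewing sensor (example value)
Es   <- 150.0   # downwelling irradiance from the mast sensor (example value)
F0   <- 185.0   # mean extraterrestrial solar irradiance for the band (example value)

Lw  <- Lt - rho * Lsky   # remove surface-reflected skylight from Lt
Rrs <- Lw / Es           # remote-sensing reflectance (sr-1)
nLw <- Rrs * F0          # normalized water-leaving radiance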
Techniques used are as described in:
Balch WM, Drapeau DT, Bowler BC, Booth ES, Windecker LA, Ashe A (2008) Space-time variability of carbon standing stocks and fixation rates in the Gulf of Maine, along the GNATS transect between Portland, ME, USA, and Yarmouth, Nova Scotia, Canada. J Plankton Res 30:119-139
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
analyze the national health interview survey (nhis) with r

the national health interview survey (nhis) is a household survey about health status and utilization. each annual data set can be used to examine the disease burden and access to care that individuals and families are currently experiencing across the country. check out the wikipedia article (ohh hayy i wrote that) for more detail about its current and potential uses. if you're cooking up a health-related analysis that doesn't need medical expenditures or monthly health insurance coverage, look at nhis before the medical expenditure panel survey (its sample is twice as big). the centers for disease control and prevention (cdc) has been keeping nhis real since 1957, and the scripts below automate the download, importation, and analysis of every file back to 1963. what happened in 1997, you ask? scientists cloned dolly the sheep, clinton started his second term, and the national health interview survey underwent its most recent major questionnaire re-design. here's how all the moving parts work:

a person-level file (personsx) that merges onto other files using unique household (hhx), family (fmx), and person (fpx) identifiers. [note to data historians: prior to 2004, person number was (px) and unique within each household.] this file includes the complex sample survey variables needed to construct a taylor-series linearization design, and should be used if your analysis doesn't require variables from the sample adult or sample child files. this survey setup generalizes to the noninstitutional, non-active-duty military population.

a family-level file that merges onto other files using unique household (hhx) and family (fmx) identifiers.

a household-level file that merges onto other files using the unique household (hhx) identifier.

a sample adult file that includes questions asked of only one adult within each household (selected at random) - a subset of the main person-level file. hhx, fmx, and fpx identifiers will merge with each of the files above, but since not every adult gets asked these questions, this file contains its own set of weights: wtfa_sa instead of wtfa. you can merge on whatever other variables you need from the three files above, but if your analysis requires any variables from the sample adult questionnaire, you can't use records in the person-level file that aren't also in the sample adult file (a big sample size cut). this survey setup generalizes to the noninstitutional, non-active-duty military adult population.

a sample child file that includes questions asked of only one child within each household (if available, and also selected at random) - another subset of the main person-level file. same deal as the sample adult description, except use wtfa_sc instead of wtfa. oh yeah, and this one generalizes to the child population.

five imputed income files. if you want income and/or poverty variables incorporated into any part of your analysis, you'll need these puppies. the replication example below uses these, but if that's impenetrable, post in the comments describing where you get stuck.

some injury stuff and other miscellanea that varies by year. if anyone uses this, please share your experience.

if you use anything more than the personsx file alone, you'll need to merge some tables together. make sure you understand the difference between setting the parameter all = TRUE versus all = FALSE -- not everyone in the personsx file has a record in the samadult and samchild files.
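here's a tiny sketch of that distinction, assuming you've already loaded personsx and samadult data frames with the (2004-onward) hhx, fmx, and fpx identifiers:

# all = FALSE keeps only the sampled adults (an inner join -- the big sample size cut)
adults_only <- merge(personsx, samadult, by = c("hhx", "fmx", "fpx"), all = FALSE)

# all = TRUE keeps every person, with NA in the samadult-only columns (a full join)
everyone <- merge(personsx, samadult, by = c("hhx", "fmx", "fpx"), all = TRUE)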
this new github repository contains four scripts:

1963-2011 - download all microdata.R
- loop through every year and download every file hosted on the cdc's nhis ftp site
- import each file into r with SAScii
- save each file as an r data file (.rda)
- download all the documentation into the year-specific directory

2011 personsx - analyze.R
- load the r data file (.rda) created by the download script (above)
- set up a taylor-series linearization survey design outlined on page 6 of this survey document (see the sketch after this list)
- perform a smattering of analysis examples

2011 personsx plus samadult with multiple imputation - analyze.R
- load the personsx and samadult r data files (.rda) created by the download script (above)
- merge the personsx and samadult files, highlighting how to conduct analyses that need both
- create tandem survey designs for both personsx-only and merged personsx-samadult files
- perform just a touch of analysis examples
- load and loop through the five imputed income files, tack them onto the personsx-samadult file
- conduct a poverty recode or two
- analyze the multiply-imputed survey design object, just like mom used to analyze

replicate cdc tecdoc - 2000 multiple imputation.R
- download and import the nhis 2000 personsx and imputed income files, using SAScii and this imputed income sas importation script (no longer hosted on the cdc's nhis ftp site)
- loop through each of the five imputed income files, merging each to the personsx file and performing the same set of...
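for orientation, here's roughly what that taylor-series linearization design looks like with the survey package; the design variable names (strat_p, psu_p, wtfa) match the 2011 personsx file, but double-check your year's codebook:

library(survey)

# complex sample design for the 2011 personsx file
nhis_design <-
  svydesign(
    id      = ~psu_p,    # primary sampling unit
    strata  = ~strat_p,  # stratum identifier
    weights = ~wtfa,     # person-level weight (wtfa_sa / wtfa_sc for sample adult / child)
    nest    = TRUE,
    data    = personsx
  )

# example: a weighted mean of any variable in the file
svymean(~age_p, nhis_design, na.rm = TRUE)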