States are required by the CCDF Final Rule to ensure that families receiving child care assistance have equal access to comparable care purchased by private-paying parents. A market rate survey (MRS) is a tool States use to achieve this program objective. Some States conduct surveys to collect the child care market rate and others use administrative data, such as data collected by child care resource and referral (CCR&R) and State licensing agencies, to analyze the market rate for child care. This survey was one strategy used to collect child care market price data. Comparing findings garnered from different methods allows one to evaluate whether different data collection methods produce different price findings (convergent validity) and how well these data collection methods represent the child care market (criterion-related validity). These data can also be used to explore several validity issues of concern with market price studies.
Units of Response: Program
Type of Data: Survey
Tribal Data: No
Periodicity: One-time
Demographic Indicators: Not Applicable
SORN: Not Applicable
Data Use Agreement: Yes
Data Use Agreement Location: https://www.icpsr.umich.edu/rpxlogin
Granularity: Childcare Providers;Individual;Program;Region
Spatial: United States
Geocoding: Unavailable
National Center for Health Statistics (NCHS) population health survey data have been linked to VA administrative data containing information on military service history and VA benefit program utilization. The linked data can provide information on the health status and access to health care for VA program beneficiaries. In addition, researchers can compare the health of Veterans within and outside the VA health care system and compare Veterans to non-Veterans in the civilian non-institutionalized U.S. population. Due to confidentiality requirements, the Restricted-use NCHS-VA Linked Data Files are accessible only through the NCHS Research Data Center (RDC) Network. All interested researchers must submit a research proposal to the RDC. Please see the NCHS RDC website (https://www.cdc.gov/rdc/index.htm) for instructions on submitting a proposal.
https://www.icpsr.umich.edu/web/ICPSR/studies/38688/terms
The National Incident-Based Reporting System (NIBRS) is a part of the Uniform Crime Reporting Program (UCR), administered by the Federal Bureau of Investigation (FBI). In the late 1970s, the law enforcement community called for a thorough evaluative study of the UCR with the objective of recommending an expanded and enhanced UCR program to meet law enforcement needs into the 21st century. The FBI fully concurred with the need for an updated program to meet contemporary needs and provided its support, formulating a comprehensive redesign effort. Following a multiyear study, a "Blueprint for the Future of the Uniform Crime Reporting Program" was developed. Using the "Blueprint," and in consultation with local and state law enforcement executives, the FBI formulated new guidelines for the Uniform Crime Reports. The National Incident-Based Reporting System (NIBRS) was implemented to meet these guidelines.

NIBRS data as formatted by the FBI are stored in a single file. These data are organized by various segment levels (record types). There are six main segment levels: administrative, offense, property, victim, offender, and arrestee. Each segment level has a different length and layout. There are other segment levels which occur with less frequency than the six main levels. Significant computing resources are necessary to work with the data in its single-file format. In addition, the user must be sophisticated in working with data in complex file types. While it is convenient to think of NIBRS as a hierarchical file, its structure is more similar to a relational database in that there are key variables that link the different segment levels together. NIBRS data are archived at ICPSR as 11 separate data files per year, which may be merged by using linkage variables. Prior to 2013 the data were archived and distributed as 13 separate data files, including three separate batch header record files. Starting with the 2013 data, the FBI combined the three batch header files into one file. Consequently, ICPSR instituted new file numbering for the data.

NIBRS data focus on a variety of aspects of a crime incident:
- Part 2 (formerly Part 4), Administrative Segment, offers data on the incident itself (date and time). Each crime incident is delineated by one administrative segment record.
- Part 3 (formerly Part 5), Offense Segment: offense type, location, weapon use, and bias motivation.
- Part 4 (formerly Part 6), Property Segment: type of property loss, property description, property value, drug type and quantity.
- Part 5 (formerly Part 7), Victim Segment: age, sex, race, ethnicity, and injuries.
- Part 6 (formerly Part 8), Offender Segment: age, sex, and race.
- Part 7 (formerly Part 9), Arrestee Segment: arrest date, age, sex, race, and weapon use.
- The Batch Header Segment (Part 1, formerly Parts 1-3) separates and identifies individual police agencies by Originating Agency Identifier (ORI). Batch Header information, which is contained on three records for each ORI, includes agency name, geographic location, and population of the area.
- Part 8 (formerly Part 10), Group B Arrest Report Segment: arrestee data for Group B crimes.
- Window Segments files (Parts 9-11, formerly Parts 11-13) pertain to incidents for which the complete Group A Incident Report was not submitted to the FBI. In general, a Window Segment record will be generated if the incident occurred prior to January 1 of the previous year or if the incident occurred prior to when the agency started NIBRS reporting.
As with the UCR, participation in NIBRS is voluntary on the part of law enforcement agencies, and the data are not a representative sample of crime in the United States. Recognizing the many differences in computing resources and that many users will be interested in only one or two segment levels, ICPSR has decided to make the data available as multiple files. Each NIBRS segment level in the FBI's single-file format has been made into a separate rectangular ASCII data file. Linkage (key) variables are used to perform analyses that involve two or more segment levels. If the user is interested in variables contained in one segment level, then the data are easy to work with, since each segment level file is simply a rectangular ASCII data file. Setup files are available to read each segment level. Also, with only one segment level, the issue of linking records across segment levels does not arise.
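To make the segment-level linkage concrete, here is a minimal pandas sketch of a two-segment merge. The column names and example values are hypothetical stand-ins for the ICPSR segment files and their linkage variables, not the actual codebook names.

```python
import pandas as pd

# Hypothetical stand-ins for two NIBRS segment files; the real ICPSR files
# are rectangular ASCII data read with the provided setup files.
offenses = pd.DataFrame({
    "ORI": ["KY0340100", "KY0340100"],
    "INCIDENT_NUMBER": ["A1", "A2"],
    "OFFENSE_TYPE": ["13A", "23F"],
})
victims = pd.DataFrame({
    "ORI": ["KY0340100", "KY0340100", "KY0340100"],
    "INCIDENT_NUMBER": ["A1", "A1", "A2"],
    "VICTIM_AGE": [34, 29, 51],
})

# Cross-segment analyses reduce to merges on the linkage (key) variables:
# the Originating Agency Identifier plus the incident number. The merge is
# many-to-many because an incident can have several offenses and victims.
victim_offense = victims.merge(offenses, on=["ORI", "INCIDENT_NUMBER"], how="inner")
print(victim_offense)
```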
Effective management of Pathology and Laboratory Medicine Service (P&LMS) laboratories requires indicators capable of demonstrating each individual laboratory's productivity and efficiency. Local sites require the capability to determine, in real time, the effects of any procedural or policy changes relating to productivity and efficiency. Data collected by each individual medical center are compiled on a national level at the Austin Information Technology Center (AITC) for P&LMS Central Office utilization. Each local medical center will have the capability to independently monitor laboratory trends and make appropriate decisions. A detailed view of workload data will be provided to support a variety of management and clinical requirements and needs. Measurements of productivity and efficiency data are capable of providing medical-center-to-medical-center comparisons. In addition, workload data are suitable for comparison to private sector facilities that capture laboratory workload based on Current Procedural Terminology (CPT). The National Laboratory Workload & Laboratory Management Index Program has been selected as the efficiency and productivity logic model. The National Laboratory Workload & Laboratory Management Index Program report replaces the Lab Automated Management Information System (AMIS) segment used in the past. Each local site identifies the reportable units based on CPT and VA guidelines. Reportable units are extracted by laboratory software and are transmitted to the AITC. The transmitted data are compiled and stored in the National Laboratory Workload & Laboratory Management Index Program database. This database supports P&LMS Headquarters and the Veterans Integrated Service Network directors' offices.
https://www.icpsr.umich.edu/web/ICPSR/studies/23261/terms
Starting with the Family Support Act of 1988, requirements for federal funding stipulate that child care subsidy rates be informed by market rates. In 1990 the federal government began a major investment in child care with the passage of the Child Care and Development Block Grant Act of 1990. Support of parental choice was a key component of this new block grant program that sent new money to states to support child care. Parental choice and state control of policy remained central when the program was expanded in 1996 as a part of welfare reform legislation. At that time, child care funding became known as the Child Care and Development Fund (CCDF). States are required by the CCDF Final Rule to ensure that families receiving child care assistance have equal access to comparable care purchased by private-paying parents. A market rate survey (MRS) is a tool States use to achieve this program objective. Some States conduct surveys to collect the child care market rate and others use administrative data, such as data collected by child care resource and referral (CCR&R) and State licensing agencies, to analyze the market rate for child care. This survey was one strategy used to collect child care market price data. Comparing findings garnered from different methods allows one to evaluate whether different data collection methods produce different price findings (convergent validity) and how well these data collection methods represent the child care market (criterion-related validity). These data can also be used to explore several validity issues of concern with market price studies. The major areas of investigation in this survey include child care prices by type of care, geographic location, and price mode (hourly, daily, weekly, monthly). Other areas of investigation include capacity by age group, additional fees facilities charge, whether they care for subsidized children, and what affects the prices that they charge parents.
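As one illustration of the convergent-validity comparison described above, the sketch below computes the 75th price percentile (a benchmark often used in subsidy rate setting) by care type from two sources and contrasts them. The data and column names are invented for illustration; they are not the study's variables.

```python
import pandas as pd

# Invented weekly full-time prices from a survey source and an
# administrative (CCR&R-style) source.
survey = pd.DataFrame({
    "care_type": ["center", "center", "center", "family", "family"],
    "weekly_price": [210.0, 250.0, 265.0, 150.0, 175.0],
})
admin = pd.DataFrame({
    "care_type": ["center", "center", "center", "family", "family"],
    "weekly_price": [205.0, 240.0, 255.0, 160.0, 170.0],
})

def p75(df: pd.DataFrame) -> pd.Series:
    # 75th percentile of prices within each care type.
    return df.groupby("care_type")["weekly_price"].quantile(0.75)

comparison = pd.DataFrame({"survey_p75": p75(survey), "admin_p75": p75(admin)})
comparison["gap"] = comparison["survey_p75"] - comparison["admin_p75"]
print(comparison)  # similar percentiles across sources suggest convergence
```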
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Direct observations of the oceans acquired on oceanographic research ships operated across the international community support fundamental research into the many disciplines of ocean science and provide essential information for monitoring the health of the oceans. A comprehensive knowledge base is needed to support the responsible stewardship of the oceans with easy access to all data acquired globally. In the United States, the multidisciplinary shipboard sensor data routinely acquired each year on the fleet of coastal, regional and global ranging vessels supporting academic marine research are managed by the Rolling Deck to Repository (R2R, rvdata.us) program. With over a decade of operations, the R2R program has developed a robust routinized system to transform diverse data contributions from different marine data providers into a standardized and comprehensive collection of global-ranging observations of marine atmosphere, ocean, seafloor and subseafloor properties that is openly available to the international research community. In this article we describe the elements and framework of the R2R program and the services provided. To manage all expeditions conducted annually, a fleet-wide approach has been developed using data distributions submitted from marine operators with a data management workflow designed to maximize automation of data curation. Other design goals are to improve the completeness and consistency of the data and metadata archived, to support data citability, provenance tracking and interoperable data access aligned with FAIR (findable, accessible, interoperable, reusable) recommendations, and to facilitate delivery of data from the fleet for global data syntheses. Findings from a collection-level review of changes in data acquisition practices and quality over the past decade are presented. Lessons learned from R2R operations are also discussed including the benefits of designing data curation around the routine practices of data providers, approaches for ensuring preservation of a more complete data collection with a high level of FAIRness, and the opportunities for homogenization of datasets from the fleet so that they can support the broadest re-use of data across a diverse user community.
To educate consumers about responsible use of financial products, many governments, non-profit organizations and financial institutions have started to provide financial literacy courses. However, participation rates for non-compulsory financial education programs are typically extremely low.
Researchers from the World Bank conducted randomized experiments around a large-scale financial literacy course in Mexico City to understand the reasons for low take-up among a general population, and to measure the impact of this financial education course. The free, 4-hour financial literacy course was offered by a major financial institution and covered savings, retirement, and credit use. Motivated by different theoretical and logistical reasons why individuals may not attend training, researchers randomized the treatment group into different subgroups, which received incentives designed to provide evidence on some key barriers to take-up. These incentives included monetary payments for attendance equivalent to $36 or $72 USD, a one-month deferred payment of $36 USD, free transportation to the training location, and a video CD with positive testimonials about the training.
A follow-up survey conducted on clients of financial institutions six months after the course was used to measure the impacts of the training on financial knowledge, behaviors and outcomes, all relating to topics covered in the course.
The baseline dataset documented here is administrative data received from a screener that was used to get people to enroll in the financial course. The follow-up dataset contains data from the follow-up questionnaire.
Mexico City
Individuals
Participants in a financial education evaluation
Sample survey data [ssd]
Researchers used three different approaches to obtain a sample for the experiment.
The first one was to send 40,000 invitation letters from a collaborating financial institution asking about interest in participating. However, only 42 clients (0.1 percent) expressed interest.
The second approach was to advertise through Facebook, with an ad displayed 16 million times to individuals residing in Mexico City, receiving 119 responses.
The third approach was to conduct screener surveys on streets in Mexico City and outside branches of the partner institution. Together this yielded a total sample of 3,503 people. Researchers divided this sample into a control group of 1,752 individuals, and a treatment group of 1,751 individuals, using stratified randomization. A key variable used in stratification was whether or not individuals were financial institution clients. The analysis of treatment impacts is based on the sample of 2,178 individuals who were financial institution clients.
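A minimal sketch of this kind of stratified randomization (illustrative only, not the researchers' code), stratifying on client status and splitting each stratum roughly in half:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)

# Invented frame standing in for the screener sample; is_client mirrors the
# key stratification variable described above.
sample = pd.DataFrame({
    "person_id": range(3503),
    "is_client": rng.integers(0, 2, size=3503).astype(bool),
})

# Within each stratum, shuffle the row labels and assign the first half
# to treatment; the rest stay in the control group.
sample["treatment"] = 0
for _, labels in sample.groupby("is_client").groups.items():
    shuffled = rng.permutation(np.asarray(labels))
    sample.loc[shuffled[: len(shuffled) // 2], "treatment"] = 1

print(sample.groupby(["is_client", "treatment"]).size())
```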
The treatment group received an invitation to participate in the financial education course and the control group did not receive this invitation. Those who were selected for treatment were given a reminder call the day before their training session, which was at a day and time of their choosing.
Face-to-face [f2f]
The follow-up survey was conducted between February and July 2012 to measure post-training financial knowledge, behavior and outcomes. The questionnaire was relatively short (about 15 minutes) to encourage participation.
Interviewers first attempted to conduct the follow-up survey over the phone. If the person did not respond to the survey during the first attempt, researchers offered a 500-peso (US$36) Walmart gift card for completing the survey during the second attempt. If the person was still unavailable for the phone interview, a surveyor visited his/her house to conduct a face-to-face interview. If the participant was not at home, the surveyor delivered a letter with information about the study and instructions for how to participate in the survey and receive the Walmart gift card. Surveyors made two more attempts (three attempts in total) to conduct a face-to-face interview if a respondent was not at home.
72.8 percent of the sample was interviewed in the follow-up survey. The attrition rate was slightly higher in the treatment group (29 percent) than in the control group (25.3 percent).
https://www.icpsr.umich.edu/web/ICPSR/studies/39080/terms
Employment coaching involves trained staff working collaboratively with participants to help them set individualized goals directly or indirectly related to employment and providing motivation, support, and feedback as participants work toward those goals. Unlike most traditional case managers, coaches work in partnership with participants and do not tell the participants what goals they should pursue or what action steps to take in pursuing them. Recently, there has been growing interest among policymakers, practitioners, researchers, and others in using employment coaching to assist Temporary Assistance for Needy Families (TANF) recipients and other adults with low incomes. To learn more about the potential of employment coaching, the Administration for Children and Families (ACF) funded an experimental impact study of four employment coaching programs conducted as part of the Evaluation of Employment Coaching for TANF and Related Populations. The impact study evaluated the effectiveness of each program on study participants' self-regulation skills, employment, earnings, and other measures of personal and family well-being during the 21 months after participants enrolled in the study. Data and documentation for the longer-term follow-up, 48 to 67 months after study enrollment, are forthcoming.

The four employment coaching programs included in the evaluation are:
- Family Development and Self-Sufficiency (FaDSS), which serves TANF recipients and their family members in Iowa. Participation in FaDSS is voluntary, and most coaching sessions occur in the participant's home.
- Goal4 It!™, which provides employment coaching to TANF recipients in Jefferson County, Colorado, in lieu of traditional case management. Receipt of TANF benefits is conditional on participation in either Goal4 It! or traditional case management.
- LIFT, which is a voluntary coaching program operated in four U.S. cities. Most coaching is conducted by unpaid student interns from Master of Social Work programs.
- MyGoals for Employment Success (MyGoals), which is a voluntary coaching program that served recipients of public housing assistance in Baltimore, Maryland, and Houston, Texas.

The impact study addressed the following primary research questions: Do the coaching programs improve the outcomes of adults with low incomes? Specifically:
- Do the coaching programs affect participants' intermediate outcomes related to self-regulation and other skills associated with labor market success?
- Do the coaching programs affect participants' employment and economic security outcomes?
- How do the impacts of the coaching programs change over time?
- Are the coaching programs more effective for some groups of participants than others?

Between February 2017 and November 2019, about 4,300 adults who were eligible for one of the four employment coaching programs and who consented to participate in the evaluation were randomly assigned either to (1) a program group that had access to employment coaching, or (2) a control group that did not have access to employment coaching but could receive other services available in the community. The effectiveness of each employment coaching program was assessed based on differences in average outcomes between program and control group members. Impacts were estimated during two follow-up periods: at 9 to 12 months after study enrollment (depending on the program; Moore et al. 2023) and at 21 months after study enrollment (Moore et al. forthcoming).
To estimate the impacts of employment coaching, the study used data from: (1) a baseline survey or form administered to study participants at the time of study enrollment, (2) follow-up surveys administered to study participants approximately 9 to 12 months after study enrollment, and again approximately 21 months after study enrollment, (3) administrative employment and Unemployment Insurance records from the National Directory of New Hires (NDNH), and (4) administrative records from state and local agencies on participation in public assistance programs. The employment coaching restricted-use data collection includes nine files with data from these sources, excluding the administrative employment and Unemployment Insurance records from the NDNH. Some of the files include data for a single program, while others combine data for more than one program. A user guide provides documentation for each file.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Village Law, enacted in 2014, mandated the transfer of funds to villages with the goals of reducing poverty and improving living standards in villages through village-led development and community empowerment. Village Law (VL) builds on Indonesia's 17-year history of participatory and community-driven development (CDD) approaches such as under the Kecamatan Development Project (KDP) and Program Nasional Permberdayaan Masyarakat (PNPM). The changes consequent upon the closing down of PNPM and its replacement by Village Law transfers (Dana Desa and Alokasi Dana Desa) and implementation arrangements form a critical backdrop to the report titled Indonesia Village Law: Technical Evaluation of Infrastructure Built with Village Funds. The Technical Evaluation of Village Infrastructure evaluates the development process, quality, costs, and operations and maintenance (O&M) of 168 village infrastructure projects (VIPs) with budgets greater than USD 10,000, from 39 villages in six provinces. The five types of projects assessed were: A) buildings (33); B) bridges (15); C) water supply (14); D) roads and drainage (94); and E) irrigation (12). Assessors evaluated the physical structures and related files (budgets, design, approvals, etc.), implementation methods, and operations and maintenance (O&M) procedures. The technical evaluation covers VIPs in the same provinces as in 2012 under the PNPM program. This collection of data comprises audit results from seven field tools, plus one administrative data file. The technical evaluation team collected data on five types of infrastructure projects, for a total of 168 observations, as described above. The seven field tools are included in this data deposit, for reference. Data were originally collected and assembled as eight data files: one for administrative data and one for each of the seven field tools. The technical evaluation team stored data primarily in binary format, using hundreds of variables per field tool to accommodate the options available for each question within each of the field tools. These data were reorganized into five data sets, one for each infrastructure type (rather than one for each field tool). The data were also consolidated from many sets of binary variables to encoded numeric variables, where applicable, for efficiency. Responses to open-ended questions were left as string variables. Responses to simple yes/no questions were left as binary numeric variables. The public versions of the datasets included here exclude variables containing PII, including: (1) name of infrastructure project inspector; (2) name or firm of infrastructure project design consultant; (3) narrative description of infrastructure project, in Indonesian; and (4) narrative description of infrastructure project, in English. Total infrastructure variables sum to 736 across all five datasets. All variables are named logically and include descriptions in their labels.
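The binary-to-encoded consolidation described above can be illustrated with a small pandas sketch; the column names and codes are hypothetical, not those of the deposited files.

```python
import pandas as pd

# Mutually exclusive 0/1 indicator columns (one per infrastructure type)...
df = pd.DataFrame({
    "proj_id": [1, 2, 3],
    "type_building": [1, 0, 0],
    "type_bridge": [0, 1, 0],
    "type_road": [0, 0, 1],
})

type_cols = ["type_building", "type_bridge", "type_road"]
codes = {col: i + 1 for i, col in enumerate(type_cols)}  # 1=building, 2=bridge, 3=road

# ...are collapsed into a single encoded numeric variable: idxmax picks the
# column holding the 1 in each row, which is then mapped to its code.
df["infra_type"] = df[type_cols].idxmax(axis=1).map(codes)
df = df.drop(columns=type_cols)
print(df)
```

Simple yes/no questions would stay as single binary columns, and open-ended responses as strings, as the deposit describes.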
https://www.usa.gov/government-works
We are releasing data that compares the HHS Provider Relief Fund and the CMS Accelerated and Advance Payments by State and provider as of May 15, 2020. This data is already available on other websites, but this chart brings the information together into one view for comparison. You can find additional information on the Accelerated and Advance Payments at the following links:
Fact Sheet: https://www.cms.gov/files/document/Accelerated-and-Advanced-Payments-Fact-Sheet.pdf;
Zip file on providers in each state: https://www.cms.gov/files/zip/accelerated-payment-provider-details-state.zip
Medicare Accelerated and Advance Payments State-by-State information and by Provider Type: https://www.cms.gov/files/document/covid-accelerated-and-advance-payments-state.pdf.
This file was assembled by HHS, via CMS and HRSA, and reviewed by leadership; it compares the HHS Provider Relief Fund and the CMS Accelerated and Advance Payments by State and provider as of December 4, 2020.
HHS Provider Relief Fund

President Trump is providing support to healthcare providers fighting the coronavirus disease 2019 (COVID-19) pandemic through the bipartisan Coronavirus Aid, Relief, & Economic Security Act and the Paycheck Protection Program and Health Care Enhancement Act, which provide a total of $175 billion for relief funds to hospitals and other healthcare providers on the front lines of the COVID-19 response. This funding supports healthcare-related expenses or lost revenue attributable to COVID-19 and ensures uninsured Americans can get treatment for COVID-19. HHS is distributing this Provider Relief Fund money, and these payments do not need to be repaid. The Department allocated $50 billion of the Provider Relief Fund for general distribution to Medicare facilities and providers impacted by COVID-19, based on eligible providers' net reimbursement. It allocated another $22 billion to providers in areas particularly impacted by the COVID-19 outbreak, rural providers, and providers who serve low-income populations and uninsured Americans. HHS will be allocating the remaining funds in the near future.
As part of the Provider Relief Fund distribution, all providers have 45 days to attest that they meet certain criteria to keep the funding they received, including public disclosure. As of May 15, 2020, there has been a total of $34 billion in attested payments. The chart only includes those providers that have attested to the payments by that date. We will continue to update this information and add the additional providers and payments once their attestation is complete.
CMS Accelerated and Advance Payments Program

On March 28, 2020, to increase cash flow to providers of services and suppliers impacted by the coronavirus disease 2019 (COVID-19) pandemic, the Centers for Medicare & Medicaid Services (CMS) expanded the Accelerated and Advance Payment Program to a broader group of Medicare Part A providers and Part B suppliers. Beginning on April 26, 2020, CMS stopped accepting new applications for the Advance Payment Program, and CMS began reevaluating all pending and new applications for Accelerated Payments in light of the availability of direct payments made through HHS's Provider Relief Fund.
Since expanding the AAP program on March 28, 2020, CMS approved over 21,000 applications totaling $59.6 billion in payments to Part A providers, which include hospitals, through May 18, 2020. For Part B suppliers, including doctors, non-physician practitioners, and durable medical equipment suppliers, CMS approved almost 24,000 applications advancing $40.4 billion in payments during the same time period. The AAP program is not a grant, and providers and suppliers are required to repay the loan.
CMS has published AAP data, as required by the Continuing Appropriations and Other Extensions Act of 2021, on this website: https://www.cms.gov/files/document/covid-medicare-accelerated-and-advance-payments-program-covid-19-public-health-emergency-payment.pdf. Requests for additional data related to the program must be submitted through the CMS FOIA office. For more information on how to submit a FOIA request please visit our website at https://www.cms.gov/Regulations-and-Guidance/Legislation/FOIA. The PRF is administered by the Health Resources & Services Administration (HRSA). For more information on how to submit a request for unpublished program data from HRSA, please visit https://www.hrsa.gov/foia/index.html.
Provider Relief Fund Data - https://data.cdc.gov/Administrative/Provider-Relief-Fund-COVID-19-High-Impact-Payments/b58h-s9zx
We study the impact of the USDA's Broadband Initiatives Program (BIP) on business outcomes in program recipient areas. The BIP was established by the American Recovery and Reinvestment Act (ARRA) of 2009 and implemented by the Rural Utilities Service (RUS) of the USDA Rural Development Mission Area. It was a $2.5 billion program (appropriations) that provided grants and loans to support broadband provision in unserved and underserved areas that were primarily rural. This research combines RUS program administrative data on BIP loans and grants with business outcomes and attributes data from the National Establishment Time Series (NETS) data. We use a quasi-experimental research design that combines matching with difference-in-differences (DiD) estimation to identify the causal effect of the BIP program on employment change at the establishment level and on business survival. Focusing on businesses that already existed in 2010, we find that average employment decreased in both BIP and non-BIP area businesses during the post-program period, but the decline was slower for businesses in BIP areas. The statistical significance of the differences in employment change between the two groups indicates a positive impact of the program. A disaggregated view of the employment impacts shows that the positive employment impact is statistically significant mainly in metro counties, the service sector, and among employer establishments. Results also show that businesses in BIP areas were less likely to fail than businesses in non-BIP areas, and this effect differs across metro/nonmetro counties, employer vs. nonemployer businesses, and broad industrial sectors.
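A stylized version of the DiD step (on simulated data, not the NETS extract, and omitting the matching stage) might look like the following; variable names are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "bip": rng.integers(0, 2, n),   # 1 = establishment located in a BIP area
    "post": rng.integers(0, 2, n),  # 1 = post-program period
})
# Simulated outcome: employment declines post-program everywhere, but the
# decline is slower for establishments in BIP areas.
df["employment"] = (
    10 - 1.5 * df["post"] + 0.6 * df["bip"] * df["post"] + rng.normal(0, 2, n)
)

# The coefficient on bip:post is the difference-in-differences estimate.
did = smf.ols("employment ~ bip * post", data=df).fit(cov_type="HC1")
print(did.params["bip:post"], did.bse["bip:post"])
```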
On August 25, 2022, Metro Council passed the Open Data Ordinance; previously, open data reports were published under Mayor Fischer's Executive Order. You can find here both the Open Data Ordinance, 2022 (PDF) and the Mayor's Open Data Executive Order, 2013.

Open Data Annual Reports

Per page 6 of the Open Data Ordinance: "Within one year of the effective date of this Ordinance, and thereafter no later than September 1 of each year, the Open Data Management Team shall submit to the Mayor and Metro Council an annual Open Data Report." The Open Data Management Team (also known as the Data Governance Team) is currently led by the city's Data Officer, Andrew McKinney, in the Office of Civic Innovation and Technology. Previously it was led by the former Data Officer, Michael Schnuerle, and prior to that by the Director of IT.

Open Data Ordinance O-243-22 Text

Louisville Metro Government
Legislation Text
File #: O-243-22, Version: 3
ORDINANCE NO. _, SERIES 2022

AN ORDINANCE CREATING A NEW CHAPTER OF THE LOUISVILLE/JEFFERSON COUNTY METRO CODE OF ORDINANCES CREATING AN OPEN DATA POLICY AND REVIEW. (AMENDMENT BY SUBSTITUTION) (AS AMENDED).

SPONSORED BY: COUNCIL MEMBERS ARTHUR, WINKLER, CHAMBERS ARMSTRONG, PIAGENTINI, DORSEY, AND PRESIDENT JAMES

WHEREAS, Metro Government is the catalyst for creating a world-class city that provides its citizens with safe and vibrant neighborhoods, great jobs, a strong system of education and innovation, and a high quality of life;

WHEREAS, it should be easy to do business with Metro Government. Online government interactions mean more convenient services for citizens and businesses, and online government interactions improve the cost effectiveness and accuracy of government operations;

WHEREAS, an open government also makes certain that every aspect of the built environment also has reliable digital descriptions available to citizens and entrepreneurs for deep engagement mediated by smart devices;

WHEREAS, every citizen has the right to prompt, efficient service from Metro Government;

WHEREAS, the adoption of open standards improves transparency, access to public information, and improved coordination and efficiencies among Departments and partner organizations across the public, non-profit and private sectors;

WHEREAS, by publishing structured standardized data in machine readable formats, Metro Government seeks to encourage the local technology community to develop software applications and tools to display, organize, analyze, and share public record data in new and innovative ways;

WHEREAS, Metro Government's ability to review data and datasets will facilitate a better understanding of the obstacles the city faces with regard to equity;

WHEREAS, Metro Government's understanding of inequities, through data and datasets, will assist in creating better policies to tackle inequities in the city;

WHEREAS, through this Ordinance, Metro Government desires to maintain its continuous improvement in open data and transparency that it initiated via Mayoral Executive Order No. 1, Series 2013;

WHEREAS, Metro Government's open data work has repeatedly been recognized, as evidenced by its achieving What Works Cities Silver (2018), Gold (2019), and Platinum (2020) certifications. What Works Cities recognizes and celebrates local governments for their exceptional use of data to inform policy and funding decisions, improve services, create operational efficiencies, and engage residents. The Certification program assesses cities on their data-driven decision-making practices, such as whether they are using data to set goals and track progress, allocate funding, evaluate the effectiveness of programs, and achieve desired outcomes. These data-informed strategies enable Certified Cities to be more resilient, respond in crisis situations, increase economic mobility, protect public health, and increase resident satisfaction; and

WHEREAS, in commitment to the spirit of Open Government, Metro Government will consider public information to be open by default and will proactively publish data and data containing information, consistent with the Kentucky Open Meetings and Open Records Act.

NOW, THEREFORE, BE IT ORDAINED BY THE COUNCIL OF THE LOUISVILLE/JEFFERSON COUNTY METRO GOVERNMENT AS FOLLOWS:

SECTION I: A new chapter of the Louisville Metro Code of Ordinances ("LMCO") mandating an Open Data Policy and review process is hereby created as follows:

§ XXX.01 DEFINITIONS. For the purpose of this Chapter, the following definitions shall apply unless the context clearly indicates or requires a different meaning.

OPEN DATA. Any public record as defined by the Kentucky Open Records Act, which could be made available online using Open Format data, as well as best practice Open Data structures and formats when possible, that is not Protected Information or Sensitive Information, with no legal restrictions on use or reuse. Open Data is not information that is treated as exempt under KRS 61.878 by Metro Government.

OPEN DATA REPORT. The annual report of the Open Data Management Team, which shall (i) summarize and comment on the state of Open Data availability in Metro Government Departments from the previous year, including, but not limited to, the progress toward achieving the goals of Metro Government's Open Data portal, an assessment of the current scope of compliance, a list of datasets currently available on the Open Data portal, and a description and publication timeline for datasets envisioned to be published on the portal in the following year; and (ii) provide a plan for the next year to improve online public access to Open Data and maintain data quality.

OPEN DATA MANAGEMENT TEAM. A group consisting of representatives from each Department within Metro Government and chaired by the Data Officer, who is responsible for coordinating implementation of an Open Data Policy and creating the Open Data Report.

DATA COORDINATORS. The members of an Open Data Management Team facilitated by the Data Officer and the Office of Civic Innovation and Technology.

DEPARTMENT. Any Metro Government department, office, administrative unit, commission, board, advisory committee, or other division of Metro Government.

DATA OFFICER. The staff person designated by the city to coordinate and implement the city's open data program and policy.

DATA. The statistical, factual, quantitative or qualitative information that is maintained or created by or on behalf of Metro Government.

DATASET. A named collection of related records, with the collection containing data organized or formatted in a specific or prescribed way.

METADATA. Contextual information that makes the Open Data easier to understand and use.

OPEN DATA PORTAL. The internet site established and maintained by or on behalf of Metro Government, located at https://data.louisvilleky.gov/ or its successor website.

OPEN FORMAT. Any widely accepted, nonproprietary, searchable, platform-independent, machine-readable method for formatting data which permits automated processes.

PROTECTED INFORMATION. Any Dataset or portion thereof to which the Department may deny access pursuant to any law, rule or regulation.

SENSITIVE INFORMATION. Any Data which, if published on the Open Data Portal, could raise privacy, confidentiality or security concerns or have the potential to jeopardize public health, safety or welfare to an extent that is greater than the potential public benefit of publishing that data.

§ XXX.02 OPEN DATA PORTAL

(A) The Open Data Portal shall serve as the authoritative source for Open Data provided by Metro Government.
(B) Any Open Data made accessible on Metro Government's Open Data Portal shall use an Open Format.
(C) In the event a successor website is used, the Data Officer shall notify the Metro Council and shall provide notice to the public on the main city website.

§ XXX.03 OPEN DATA MANAGEMENT TEAM

(A) The Data Officer of Metro Government will work with the head of each Department to identify a Data Coordinator in each Department. The Open Data Management Team will work to establish a robust, nationally recognized platform that addresses digital infrastructure and Open Data.
(B) The Open Data Management Team will develop an Open Data Policy that will adopt prevailing Open Format standards for Open Data and develop agreements with regional partners to publish and maintain Open Data that is open and freely available while respecting exemptions allowed by the Kentucky Open Records Act or other federal or state law.

§ XXX.04 DEPARTMENT OPEN DATA CATALOGUE

(A) Each Department shall retain ownership over the Datasets they submit to the Open Data Portal. The Departments shall also be responsible for all aspects of the quality, integrity and security of the Dataset contents, including updating its Data and associated Metadata.
(B) Each Department shall be responsible for creating an Open Data catalogue, which shall include comprehensive inventories of information possessed and/or managed by the Department.
(C) Each Department's Open Data catalogue will classify information holdings as currently "public" or "not yet public"; Departments will work with the Office of Civic Innovation and Technology to develop strategies and timelines for publishing Open Data containing information in a way that is complete, reliable and has a high level of detail.

§ XXX.05 OPEN DATA REPORT AND POLICY REVIEW

(A) Within one year of the effective date of this Ordinance, and thereafter no later than September 1 of each year, the Open Data Management Team shall submit to the Mayor and Metro Council an annual Open Data Report.
(B) Metro Council may request a specific Department to report on any data or dataset that may be beneficial or pertinent in implementing policy and legislation.
(C) In acknowledgment that technology changes rapidly, the Open Data Policy shall be reviewed annually and considered for revisions or additions that will continue to position Metro Government as a leader on issues of open data.
In the second half of 2010, the Ministry of Agriculture and Food (MAF) carried out the farm structure survey (FSS) and the survey on agricultural production methods (SAPM) on the entire country's territory, in accordance with the Law on Agricultural Census 2010 in Bulgaria. This was the first census carried out with Bulgaria as a member of the European Union (EU) and the second conducted in compliance with EU legislation. The census was conducted using a methodology consistent with the requirements of Regulation (EC) No 1166/2008 of the European Parliament and of the Council of 19 November 2008 on farm structure surveys and the survey on agricultural production methods and repealing Council Regulation (EEC) No 571/88, and of Regulation (EC) No 1200/2009 of 30 November 2009 implementing Regulation (EC) No 1166/2008 as regards livestock unit coefficients and definitions of the characteristics. This ensured comparability of the results on the structure of agricultural holdings in Bulgaria and agricultural production methods with those of the EU Member States (MS). The Agricultural Census is the main source of information on the status and trends in agriculture. It provides a current economic, social, and environmental overview of the agrarian sector needed for decision making in the Common Agricultural Policy (CAP). The census data will be taken as a basis for sampling of the annual production surveys, to determine the framework of the Rural Development Program for the programming period after 2013, to define the field of observation of the Farm Accountancy Data Network (FADN), and to start the creation of a statistical farm register.
National coverage
Households
In compliance with the EU Regulations, Bulgaria applied the following national thresholds:
0.5 ha of utilised agricultural area; or 0.3 ha of arable land; or 0.5 ha of natural meadows; or 0.1 ha of orchard (compact plantation), vineyard, vegetables, hops, tobacco, spices, medical and essential oil crops, flowers, ornamental plants; or 0.05 ha of greenhouses; or 1 cow/ buffalo-cow; or 2 cattle/ buffaloes; 1 female for reproduction (equidae); or 2 working animals (equidae); or 5 pigs; or 1 breeding-sow; or 5 breeding-ewes; or 2 breeding she-goats; or 50 laying hens; or 100 chicken for fattening; or 1 reproductive male animal used for natural mating - bull, stud, boar, etc.
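For illustration, these thresholds amount to an any-criterion eligibility check: a holding is in scope if it meets at least one of the area or livestock criteria. The sketch below encodes that logic with simplified field names; it is not the official census implementation.

```python
# Area thresholds in hectares (ha) and livestock thresholds in head,
# mirroring the national threshold list above.
AREA_THRESHOLDS_HA = {
    "utilised_agricultural_area": 0.5,
    "arable_land": 0.3,
    "natural_meadows": 0.5,
    "orchard_vineyard_vegetables_etc": 0.1,
    "greenhouses": 0.05,
}
LIVESTOCK_THRESHOLDS = {
    "cows_or_buffalo_cows": 1,
    "cattle_or_buffaloes": 2,
    "female_equidae_for_reproduction": 1,
    "working_equidae": 2,
    "pigs": 5,
    "breeding_sows": 1,
    "breeding_ewes": 5,
    "breeding_she_goats": 2,
    "laying_hens": 50,
    "chickens_for_fattening": 100,
    "reproductive_male_for_natural_mating": 1,
}

def meets_threshold(holding: dict) -> bool:
    """A holding is in scope if ANY single criterion is met."""
    area_ok = any(holding.get(k, 0.0) >= v for k, v in AREA_THRESHOLDS_HA.items())
    stock_ok = any(holding.get(k, 0) >= v for k, v in LIVESTOCK_THRESHOLDS.items())
    return area_ok or stock_ok

print(meets_threshold({"arable_land": 0.4}))  # True: 0.4 ha >= 0.3 ha
print(meets_threshold({"pigs": 3}))           # False: below the 5-pig line
```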
Census/enumeration data [cen]
(a) Frame: All agricultural holdings throughout the country on the list of agricultural holdings prepared by the Agrostatistics Department of the Ministry of Agriculture and Food. The list contained 750,733 agricultural holdings and was based on data from the previous census, agricultural administrative records, and annual updates from twelve major sources.
(b) Complete or Sample Enumeration Methods: There was no sampling, as the Census was an enumeration of all agricultural holdings for both the Farm Structure Survey and the Survey on Agricultural Production Methods.
Face-to-face [f2f]
EU Regulations require information on holding location and geo-coordinates, legal status, ownership and tenancy, land use and crops grown, irrigation, livestock, organic farming, machinery (mandatory in the 2013 FSS), renewable energy installations, other gainful activities, socio-economic circumstances (full- and part-time farming), labour force (family, non-family, contractors), agricultural and vocational training of the manager, inclusion in rural development support programmes, soil tillage methods, crop rotation, erosion protection, livestock housing and livestock management, grazing of animals, manure application, manure storage and treatment facilities, and maintenance and installation of landscape features. In addition, Bulgaria included a more detailed breakdown of land ownership; area with aromatic crops (oil rose, coriander, lavender, spearmint, valeriana); questions on the holding's bookkeeping; the application of mineral fertilizers and plant protection products on open-field areas; and the availability and types of milking facilities. There were three collection forms. The main statistical questionnaire (Form No. 1) collected information on farm characteristics. The household-listing questionnaire (Form No. 2) was used to determine whether households in urban areas met the criteria for an agricultural holding. Form No. 3 was used for temporarily or permanently inactive holdings that were part of the farm holdings list or the Farm Register.
(a) Data Entry, Edits and Imputations, Estimation and Tabulation: Data processing, estimation, and analysis were carried out at the central level. The data file was prepared and sent to Eurostat for final validation. A special computer module was prepared for data entry. Data entry from the completed questionnaires into the computer module began in mid-September 2010, performed by operators in the regional offices of the Ministry of Agriculture and Food. Data regarding Rural Development Support were cross-checked against the administrative records of the Paying Agency. In cases of doubt, data from the Paying Agency were imputed into the database.
(b) Census Data Quality: The individual and aggregated data control at the regional and central levels started in mid-September 2010, together with the data entry of the questionnaires into the computer program. The 28 regional offices sent data to the headquarters database on a weekly basis. The Agrostatistics Department at the Ministry of Agriculture and Food conducted multiple checks of the logical links within each data record. Obviously erroneous questionnaires with incoherent data were compared with data from administrative sources. In cases of significant differences, holdings were revisited for follow-up interviews. The data were summarized and analyzed at the central level for the 28 districts and the 6 statistical regions. The data from regular crop, livestock, poultry, and beekeeping surveys proved to be comparable with the Census data. Some of the differences were attributable to the different survey reference periods. The difference in annual crop estimates was often due to non-harvested area and was normally within the published survey sampling errors.
The primary effort to minimize non-sampling error was placed in the interviewer and supervisor training programs and the instruction and procedures manuals for the field collection operation. Processes were also put in place to correct the anticipated under-coverage, duplicate records, non-response, and non-contacts. Measurement errors were mostly detected by controls in the computer module or by additional monitoring of the data at the central level. When errors were discovered, the regional experts and the enumerators contacted the holder for data clarification and correction.
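The administrative cross-check described above amounts to flagging records whose census and administrative values diverge beyond some tolerance and queueing them for a revisit. The sketch below is a simplified illustration with hypothetical field names and an assumed 20 percent tolerance.

```python
import pandas as pd

# Invented census responses and Paying Agency records for the same holdings.
census = pd.DataFrame({"holding_id": [1, 2, 3], "rds_area_ha": [2.0, 5.5, 0.9]})
paying_agency = pd.DataFrame({"holding_id": [1, 2, 3], "rds_area_ha": [2.1, 9.0, 0.9]})

merged = census.merge(paying_agency, on="holding_id", suffixes=("_census", "_admin"))

# Flag holdings whose reported value differs from the administrative record
# by more than 20 percent; these would be revisited for follow-up interviews.
merged["revisit"] = (
    (merged["rds_area_ha_census"] - merged["rds_area_ha_admin"]).abs()
    > 0.2 * merged["rds_area_ha_admin"]
)
print(merged[merged["revisit"]])
```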
The preliminary results were published in May 2011 on the website of the Ministry of Agriculture and Food, seven months after the end of the reference period (crop year). Final detailed results were released in October 2012. The census results reflect the state of agriculture in Bulgaria in 2010 and are the basis for decision-making by state and local governments, as well as by the European Union and other European institutions, in the implementation of the Common Agricultural Policy in the EU.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Data collected from a study of obese patients who took part in a Virtual Chronic Disease Intervention (VCDI). Results were collected on patients experiencing various health illnesses, but the study primarily focused on the intervention program's influence on Type 2 Diabetes remission.
The dataset covers the 93 individuals who took part in this research program, with the columns described below:
Patient: Patient ID
Age: Age of patient at the beginning of the intervention
Gender: Gender code (M, F)
TimeInProgramInWeeks: Duration of the individual's time in the program, in weeks
StartingWeight: Weight at the beginning of the program
AfterProgramWeight: Weight after the program
WeightLost: Difference in weight before and after the program
WeightLostPercentage: Weight difference as a percentage
StartingBMI: Body Mass Index at the beginning of the program
AfterProgramBMI: Body Mass Index after the program
CompletedPodcastSession: Number of podcast sessions completed in the program
CompletedPodcastSessionPercentage: Percentage of podcasts completed, out of 60
CompletedFlag: Flag for patients who completed over 25 podcasts
T2DFlag: Flag for Type 2 Diabetes (T2D) patients
T2DRemissionFlag: Flag for T2D patients who entered remission
DroppedOutFlag: Flag for patients who had to drop out of the program
T2DDroppedOutFlag: Flag for T2D patients who didn't complete the program
T2DNoncompletedFlag: Flag for patients who did not complete 25 or more podcasts during the program
T2DInProgramFlag: Flag for T2D individuals in the program
T2DCompletedFlag: Flag for T2D individuals who completed 25 or more podcasts
T2DCompletedInRemissionFlag: Flag for T2D individuals who completed 25 or more podcasts and whose diabetes was in remission
T2DNonCompletedInRemissionFlag: Flag for T2D individuals who did not complete 25 or more podcasts and whose diabetes was in remission
The included R code provides a step-by-step guide to constructing the Fisher's exact (odds ratio) test that was used in this research, as well as computing the average weight loss among the completer and non-completer groups.
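For readers working in Python rather than R, a minimal equivalent of that analysis might look like the sketch below. The 2x2 counts are invented for illustration; they are not the study's results.

```python
from scipy.stats import fisher_exact

# Completion (25+ podcasts) by T2D remission status, as a 2x2 table:
#                   remission   no remission
table = [
    [12, 8],   # completers
    [3, 19],   # non-completers
]

# Fisher's exact test on the 2x2 table also yields the odds ratio.
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```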
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The FEMP EISA 432 Compliance Tracking System (CTS) enables the public to analyze and generate custom reports of CTS facility-level data. This data warehouse includes only non-restricted CTS data. The data warehouse consists of five data sets (or "data cubes") that may be used to build custom reports. There are four detail-level data sets used to explore data per module, including prior-year records: Facility Annual Detail, Comprehensive Evaluation Detail, Benchmarked Building Detail, and Project and Follow-up Detail. The fifth data set, Most Recent Facility Overview, allows users to compare the most recent data from fields across all of the facility sub-sets.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
We test whether different identification strategies give similar results when evaluating activation programs. Budgetary problems at the Dutch unemployment insurance (UI) administration in March 2010 caused a sharp drop in the availability of these programs. Using administrative data provided by the UI administration, we evaluate the effect of the program (1) exploiting the policy discontinuity as a quasi-experiment, (2) using dynamic matching assuming conditional independence, and (3) applying the timing-of-events model. All three strategies use the same data to consider the same program in the same setting, and show that the program reduces job finding directly after enrollment. However, the magnitude of the estimated drop in job finding differs between the three estimation methods. In the longer run, all three methods show a zero effect on employment.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A Minimum Data Set (MDS) enables integration in data collection, uniform data reporting, and data exchange across clinical and research information systems. The current study was conducted to determine a comprehensive national MDS for the Epidermolysis Bullosa (EB) information management system in Iran. This cross-sectional descriptive study consisted of three steps: a systematic review, a focus group discussion, and the Delphi technique. A systematic review was conducted using relevant databases. Then, a focus group discussion was held to review the extracted data elements with the help of contributing multidisciplinary experts. Finally, MDSs were selected through the Delphi technique in two rounds. The collected data were analyzed using Microsoft Excel 2019. In total, 103 data elements were included in the Delphi survey. The data elements, based on the experts' opinions, were classified into two main categories: administrative data and clinical data. The final categories of data elements consisted of 11 administrative items and 92 clinical items. The national MDS, as the core of the EB surveillance program, is essential for enabling appropriate and informed decisions by healthcare policymakers, physicians, and healthcare providers. In this study, an MDS was developed and internally validated for EB. This research generated new knowledge to enable healthcare professionals to collect relevant and meaningful data. The use of this standardized approach can help benchmark clinical practice and target improvements worldwide.
This study was a program evaluation of the Reasoning and Rehabilitation Cognitive Skills Development Program, an educational program that taught cognitive skills to offenders, as implemented in juvenile intensive supervision probation in Colorado. Using an experimental design, researchers sought to measure the extent of change in attitudes and behaviors due to the cognitive skills program by administering pre- and post-test interviews. Researchers also measured recidivism by conducting interviews with probation officers who supervised the offenders in the sample six months after termination from intensive supervision. These interviews were supplemented with administrative records data that provided background information about the sample. In addition, administrative data were collected on all juveniles sentenced to intensive supervision during fiscal years 1994 and 1995 to compare juveniles in the sample with all juveniles in the intensive program. Variables in this collection include cognitive measures, such as impulsivity, problem-solving ability, egocentricity, and cognitive style. Other variables measure emotional responses to various situations, attitudes toward the law, values, drug abuse, program participation, and recidivism. Administrative data include age, gender, ethnicity, offense of conviction, and basic assessment data.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Abstract

The dataset provided here contains the efforts of independent data aggregation, quality control, and visualization of the University of Arizona (UofA) COVID-19 testing programs for the 2019 novel Coronavirus pandemic. The dataset is provided in the form of machine-readable tables in comma-separated value (.csv) and Microsoft Excel (.xlsx) formats.

Additional Information

As part of the UofA response to the 2019-20 Coronavirus pandemic, testing was conducted on students, staff, and faculty prior to the start of the academic year and throughout the school year. This testing was done at the UofA Campus Health Center and through the "Test All Test Smart" (TATS) program. These tests identify active cases of SARS-CoV-2 infection using the reverse transcription polymerase chain reaction (RT-PCR) test and the antigen test. Because the antigen test provided more rapid diagnosis, it was used heavily in the three weeks prior to the start of the Fall semester and throughout the academic year.

As these tests were occurring, results were provided on the COVID-19 websites. First, beginning in early March, the Campus Health Alerts website reported the total number of positive cases. Later, numbers were provided for the total number of tests (March 12 and thereafter). According to the website, these numbers were updated daily for positive cases and weekly for total tests. These numbers were reported until early September, when they were then included in the reporting for the TATS program.

For the TATS program, numbers were provided through the UofA COVID-19 Update website. Initially, on August 21, the numbers provided were the total number (July 31 and thereafter) of tests and positive cases. Later (August 25), additional information was provided where both PCR and antigen testing were available. Here, the daily numbers were also included. On September 3, this website then provided both the Campus Health and TATS data. Here, PCR and antigen were combined and referred to as "Total", and daily and cumulative numbers were provided.

At this time, no official data dashboard was available until September 16, and aside from the information provided on these websites, the full dataset was not made publicly available. As such, the authors of this dataset independently aggregated data from multiple sources. These data were made publicly available through a Google Sheet, with graphical illustration provided through the spreadsheet and on social media. The goal of providing the data and illustrations publicly was to provide factual information and to understand the infection rate of SARS-CoV-2 in the UofA community.

Because of differences in reported data between Campus Health and the TATS program, the dataset provides Campus Health numbers on September 3 and thereafter. TATS numbers are provided beginning on August 14, 2020.

Description of Dataset Content

The following terms are used in describing the dataset.
1. "Report Date" is the date and time at which the website was updated to reflect the new numbers.
2. "Test Date" is the date of testing/sample collection.
3. "Total" is the combination of Campus Health and TATS numbers.
4. "Daily" is the new data associated with the Test Date.
5. "To Date (07/31--)" provides the cumulative numbers from 07/31 and thereafter.
6. "Sources" provides the source of information. The number prior to the colon refers to the number of sources. Here, "UACU" refers to the UA COVID-19 Update page, and "UARB" refers to the UA Weekly Re-Entry Briefing. "SS" and "WBM" refer to screenshots (manually acquired) and the "Wayback Machine" (see Reference section for links), with initials provided to indicate which author recorded the values. These screenshots are available in the records.zip file.

The dataset is distinguished, where available, by the testing program and the methods of testing. Where data are not available, calculations are made to fill in missing data (e.g., extrapolating backwards on the total number of tests based on daily numbers that are deemed reliable). Where errors are found (by comparing to previous numbers), those are reported on the above Google Sheet with specifics noted.

For inquiries regarding the contents of this dataset, please contact the Corresponding Author listed in the README.txt file. Administrative inquiries (e.g., removal requests, trouble downloading, etc.) can be directed to data-management@arizona.edu