Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Excel file contains data about 10- to 18-year-old students' affect/well-being during lockdown in Pakistan. The cells in the top row contain the items. The numbers in columns 'C' to 'J' represent the following: 1 = Very Often; 2 = Often; 3 = Sometimes; 4 = Rarely; 5 = Never. The numbers in column 'K' have been coded in reverse order. Column 'L' contains responses to the question "Have you started fighting more with your siblings?" Columns 'M' and 'N' include open-ended, short responses.
My Grandpa asked if the programs I was using could calculate his Golf League’s handicaps, so I decided to play around with SQL and Google Sheets to see if I could functionally recreate what they were doing.
The goal is to calculate a player's handicap: the average of their scores from the last six months, minus 29. How many scores go into the average depends on how many games they have actually played in that window. For example, Clem played more than 20 games, so his handicap is calculated from the maximum number of scores, which is eight. Schomo only played six games, so the lowest four are used for his average. The handicap is always calculated from the lowest available scores.
This league uses Excel, so upon receiving the data I converted it into a CSV and uploaded it to BigQuery.
The first thing I did was change the column names to better represent what they held and to simplify the code. It is much easier to remember 'schomo_scores' than 'int64_field_4'. It also seemed to confuse SQL less, since 'int64' can carry meaning on its own.
ALTER TABLE `grandpa-golf.grandpas_golf_35.should only need the one`
RENAME COLUMN int64_field_4 TO schomo_scores;
To find the average of Clem's lowest eight scores (the limit has to be applied before the average, so the ordering and LIMIT go in a subquery):

SELECT AVG(clem_scores)
FROM (
  SELECT clem_scores
  FROM `grandpa-golf.grandpas_golf_35.should only need the one`
  WHERE clem_scores IS NOT NULL
  ORDER BY clem_scores ASC
  LIMIT 8
);

RESULT: 43.1
Remembering that the handicap is the average minus 29, the final computation looks like:

SELECT AVG(clem_scores) - 29
FROM (
  SELECT clem_scores
  FROM `grandpa-golf.grandpas_golf_35.should only need the one`
  WHERE clem_scores IS NOT NULL
  ORDER BY clem_scores ASC
  LIMIT 8
);

RESULT: 14.1
Find Schomo's handicap (only six games played, so per the league's rule the lowest four scores are averaged):

SELECT AVG(schomo_scores) - 29
FROM (
  SELECT schomo_scores
  FROM `grandpa-golf.grandpas_golf_35.should only need the one`
  WHERE schomo_scores IS NOT NULL
  ORDER BY schomo_scores ASC
  LIMIT 4
);

RESULT: 10.5
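For what it's worth, the per-golfer queries could be folded into a single statement using the score-count tiers the league uses (detailed below). This is a rough sketch, not something the league actually ran, and it assumes the scores were first reshaped into a long-format table (hypothetical name `scores_long`, with hypothetical `player` and `score` columns):

WITH ranked AS (
  SELECT
    player,
    score,
    -- rank each golfer's scores from lowest to highest
    ROW_NUMBER() OVER (PARTITION BY player ORDER BY score ASC) AS rank_low,
    -- total games each golfer has played
    COUNT(score) OVER (PARTITION BY player) AS games_played
  FROM `grandpa-golf.grandpas_golf_35.scores_long`
  WHERE score IS NOT NULL
)
SELECT
  player,
  AVG(score) - 29 AS handicap
FROM ranked
WHERE rank_low <= CASE
    WHEN games_played >= 20 THEN 8
    WHEN games_played >= 15 THEN 6
    WHEN games_played >= 6 THEN 4
    ELSE 2
  END
GROUP BY player;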
This data was already automated to calculate a handicap in the league's Excel spreadsheet, so I asked for more data to see if I could recreate those functions.
Grandpa provided the past three years of league data, with the names all replaced by generic labels ("Golfer 001", "Golfer 002", etc.). I had planned on converting this Excel sheet into a CSV and manipulating it in SQL as with the smaller sample, but that did not work. Immediately, there were problems: the file I had been emailed was functionally a PDF, and its functions did not transfer properly into a CSV. So instead of working in SQL, I pulled the data into Google Sheets and recreated the functions there. We only need the most recent six months of scores to calculate a handicap, so once I made a working copy I deleted the data from before that period. With that cleaned up, I started on a function to pull the working average from these values, where the number of scores averaged is still determined by how many total scores there are. This correlates as follows: for 20 or more scores, average the lowest 8; for 15 to 19 scores, the lowest 6; for 6 to 14 scores, the lowest 4; and for 5 or fewer scores, the lowest 2. We also need an empty set of scores to return a value of 0 so our handicap calculator works. My formula ended up being:
=IF(COUNT(E2:AT2)>=20, AVERAGE(SMALL(E2:AT2, ROW(INDIRECT("1:"&8)))),
 IF(COUNT(E2:AT2)>=15, AVERAGE(SMALL(E2:AT2, ROW(INDIRECT("1:"&6)))),
 IF(COUNT(E2:AT2)>=6,  AVERAGE(SMALL(E2:AT2, ROW(INDIRECT("1:"&4)))),
 IF(COUNT(E2:AT2)>=1,  AVERAGE(SMALL(E2:AT2, ROW(INDIRECT("1:"&2)))),
 IF(COUNT(E2:AT2)=0, 0, "")))))
The handicap is just this value minus 29, so for the handicap column the script is relatively simple: =IF(D2=0,0,IF(D2>47,18,D2-29)) This pulls the basic average from the right place, caps the handicap at 18 for any average above 47, and sets the handicap to zero if there are no scores present.
Now that we have our spreadsheet back in working order with our new scripts, we are functionally done. We have recreated what my Grandpa’s league uses to generate handicaps.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Abstract
The dataset provided here contains the efforts of independent data aggregation, quality control, and visualization of the University of Arizona (UofA) COVID-19 testing programs during the 2019 novel Coronavirus pandemic. The dataset is provided in the form of machine-readable tables in comma-separated value (.csv) and Microsoft Excel (.xlsx) formats.
Additional Information
As part of the UofA response to the 2019-20 Coronavirus pandemic, testing was conducted on students, staff, and faculty prior to the start of the academic year and throughout the school year. This testing was done at the UofA Campus Health Center and through the university's "Test All Test Smart" (TATS) program. These tests identify active cases of SARS-CoV-2 infection using the reverse transcription polymerase chain reaction (RT-PCR) test and the antigen test. Because the antigen test provided more rapid diagnosis, it was used extensively from three weeks prior to the start of the Fall semester and throughout the academic year.
As these tests were occurring, results were provided on the COVID-19 websites. First, beginning in early March, the Campus Health Alerts website reported the total number of positive cases. Later, numbers were provided for the total number of tests (March 12 and thereafter). According to the website, these numbers were updated daily for positive cases and weekly for total tests. These numbers were reported until early September, when they were folded into the reporting for the TATS program.
For the TATS program, numbers were provided through the UofA COVID-19 Update website. Initially, on August 21, the numbers provided were the total number (July 31 and thereafter) of tests and positive cases. Later (August 25), additional information was provided where both PCR and antigen testing were available; here, the daily numbers were also included. On September 3, this website began providing both the Campus Health and TATS data. Here, PCR and antigen were combined and referred to as "Total", and daily and cumulative numbers were provided.
No official data dashboard was available until September 16, and aside from the information provided on these websites, the full dataset was not made publicly available. As such, the authors of this dataset independently aggregated data from multiple sources. These data were made publicly available through a Google Sheet, with graphical illustration provided through the spreadsheet and on social media. The goal of providing the data and illustrations publicly was to provide factual information and to understand the infection rate of SARS-CoV-2 in the UofA community.
Because of differences in reported data between Campus Health and the TATS program, the dataset provides Campus Health numbers on September 3 and thereafter. TATS numbers are provided beginning on August 14, 2020.
Description of Dataset Content
The following terms are used in describing the dataset.
1. "Report Date" is the date and time at which the website was updated to reflect the new numbers.
2. "Test Date" is the date of testing/sample collection.
3. "Total" is the combination of Campus Health and TATS numbers.
4. "Daily" is the new data associated with the Test Date.
5. "To Date (07/31--)" provides the cumulative numbers from 07/31 and thereafter.
6. "Sources" provides the source of information. The number before the colon refers to the number of sources. Here, "UACU" refers to the UA COVID-19 Update page, and "UARB" refers to the UA Weekly Re-Entry Briefing. "SS" and "WBM" refer to screenshots (manually acquired) and the "Wayback Machine" (see Reference section for links), with initials provided to indicate which author recorded the values. These screenshots are available in the records.zip file.
The dataset is distinguished, where available, by the testing program and the methods of testing. Where data are not available, calculations are made to fill in missing data (e.g., extrapolating backwards on the total number of tests based on daily numbers that are deemed reliable). Where errors are found (by comparing to previous numbers), they are reported on the above Google Sheet with specifics noted.
For inquiries regarding the contents of this dataset, please contact the Corresponding Author listed in the README.txt file. Administrative inquiries (e.g., removal requests, trouble downloading, etc.) can be directed to data-management@arizona.edu
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
These data were mainly obtained from the Sliceomatic software, used for the measurements of angles, lever arms, and volume of the reconstructions. The ratios were calculated in Excel.
This dataset contains all current and active business licenses issued by the Department of Business Affairs and Consumer Protection. This dataset contains a large number of records/rows and may not be viewable in full in Microsoft Excel. Therefore, when downloading the file, select CSV from the Export menu. Open the file in an ASCII text editor, such as Notepad or WordPad, to view and search.
Data fields requiring description are detailed below.
APPLICATION TYPE: 'ISSUE' is the record associated with the initial license application. 'RENEW' is a subsequent renewal record. All renewal records are created with a term start date and term expiration date. 'C_LOC' is a change of location record; it means the business moved. 'C_CAPA' is a change of capacity record; only a few license types may file this type of application. 'C_EXPA' only applies to businesses that have liquor licenses; it means the business location expanded.
LICENSE STATUS: 'AAI' means the license was issued.
Business license owners may be accessed at: http://data.cityofchicago.org/Community-Economic-Development/Business-Owners/ezma-pppn. To identify the owner of a business, you will need the account number or legal name.
Data Owner: Business Affairs and Consumer Protection
Time Period: Current
Frequency: Data is updated daily
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A Microsoft Excel file containing the data that form the figures in the main text of the publication. The Excel file contains the ultrafast transient absorption and photoluminescence data for TXO-TPA and 4CzIPN, presented in wavelength (nm) and time (ps). Also included are the steady-state Raman and impulsive vibrational spectra of TXO-TPA and 4CzIPN at 0.5, 3, and 10 ps (in wavenumbers). The full datasets for the quantum-chemical molecular dynamics simulations of TXO-TPA are also presented. See the main manuscript for more details.
Version 5 release notes:
Removes support for SPSS and Excel data.
Changes the crimes that are stored in each file. There are more files now with fewer crimes per file. The files and their included crimes have been updated below.
Adds in agencies that report 0 months of the year.
Adds a column that indicates the number of months reported, generated by summing the number of unique months an agency reports data for. Note that this indicates the number of months an agency reported arrests for ANY crime; they may not necessarily report every crime every month. Agencies that did not report a crime will have a value of NA for every arrest column for that crime.
Removes data on runaways.
Version 4 release notes:
Changes column names from "poss_coke" and "sale_coke" to "poss_heroin_coke" and "sale_heroin_coke" to clearly indicate that these columns include the sale of heroin as well as similar opiates such as morphine, codeine, and opium. Also changes the column names for the narcotic columns to indicate that they are only for synthetic narcotics.
Version 3 release notes:
Add data for 2016.
Order rows by year (descending) and ORI.
Version 2 release notes:
Fix bug where Philadelphia Police Department had incorrect FIPS county code.
The Arrests by Age, Sex, and Race data is an FBI data set that is part of the annual Uniform Crime Reporting (UCR) Program data. This data contains highly granular data on the number of people arrested for a variety of crimes (see below for a full list of included crimes). The data sets here combine data from the years 1980-2015 into a single file. These files are quite large and may take some time to load.
All the data was downloaded from NACJD as ASCII+SPSS Setup files and read into R using the package asciiSetupReader. All work to clean the data and save it in various file formats was also done in R. For the R code used to clean this data, see here: https://github.com/jacobkap/crime_data. If you have any questions, comments, or suggestions please contact me at jkkaplan6@gmail.com.
I did not make any changes to the data other than the following. When an arrest column has a value of "None/not reported", I change that value to zero. This makes the (possibly incorrect) assumption that these values represent zero crimes reported. The original data has no value other than "None/not reported" when an agency reports zero arrests; in other words, this data does not differentiate between real zeros and missing values. Some agencies also incorrectly report the following numbers of arrests, which I change to NA: 10000, 20000, 30000, 40000, 50000, 60000, 70000, 80000, 90000, 100000, 99999, 99998.
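That cleaning was done in R (see the repository linked above). Purely to illustrate the two recodes in BigQuery-flavored SQL terms, here is a sketch with a hypothetical `arrests` table and a hypothetical text column `raw_value` holding one arrest count:

SELECT
  CASE
    -- assume zero arrests when the agency reported 'None/not reported'
    WHEN raw_value = 'None/not reported' THEN 0
    -- treat implausible reported counts as missing
    WHEN SAFE_CAST(raw_value AS INT64) IN (10000, 20000, 30000, 40000, 50000,
                                           60000, 70000, 80000, 90000, 100000,
                                           99999, 99998) THEN NULL
    ELSE SAFE_CAST(raw_value AS INT64)
  END AS arrest_count
FROM arrests;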
To reduce file size and make the data more manageable, all of the data is aggregated yearly. All of the data is in agency-year units such that every row indicates an agency in a given year. Columns are crime-arrest category units. For example, if you choose the data set that includes murder, you would have rows for each agency-year and columns with the number of people arrested for murder. The ASR data breaks down arrests by age and gender (e.g. Male aged 15, Male aged 18). They also provide the number of adults or juveniles arrested by race. Because most agencies and years do not report the arrestee's ethnicity (Hispanic or not Hispanic) or juvenile outcomes (e.g. referred to adult court, referred to welfare agency), I do not include these columns.
To make it easier to merge with other data, I merged this data with the Law Enforcement Agency Identifiers Crosswalk (LEAIC) data. The data from the LEAIC add FIPS (state, county, and place) and agency type/subtype. Please note that some of the FIPS codes have leading zeros and if you open it in Excel it will automatically delete those leading zeros.
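If the leading zeros have already been lost (for example after an Excel round-trip), they can be restored by zero-padding to the known FIPS code widths. A BigQuery-flavored sketch, with hypothetical table and column names:

SELECT
  LPAD(CAST(fips_state AS STRING), 2, '0') AS fips_state,    -- state codes are 2 digits
  LPAD(CAST(fips_county AS STRING), 3, '0') AS fips_county,  -- county codes are 3 digits
  LPAD(CAST(fips_place AS STRING), 5, '0') AS fips_place     -- place codes are 5 digits
FROM arrests;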
I created 9 arrest categories myself. The categories are:
Total Male Juvenile
Total Female Juvenile
Total Male Adult
Total Female Adult
Total Male
Total Female
Total Juvenile
Total Adult
Total Arrests
All of these categories are based on the sums of the sex-age categories (e.g. Male under 10, Female aged 22) rather than the provided age-race categories (e.g. adult Black, juvenile Asian). As not all agencies report the race data, my method is more accurate. These categories also make up the data in the "simple" version of the data. The "simple" file only includes the above 9 columns as the arrest data (all other columns in the data are just agency identifier columns). Because this "simple" data set needs fewer columns, I include all offenses.
As the arrest data is very granular, and each category of arrest is its own column, there are dozens of columns per crime. To keep the data somewhat manageable, there are nine different files: eight contain different sets of crimes, and the ninth is the "simple" file. Each file contains the data for all years. The eight categories each cover a major crime category and do not overlap in crimes other than the index offenses. Please note that the crime names provided below are not the same as the column names in the data. Because Stata limits column names to a maximum of 32 characters, I have abbreviated the crime names in the data. The files and their included crimes are:
Index Crimes: Murder, Rape, Robbery, Aggravated Assault, Burglary, Theft, Motor Vehicle Theft, Arson
Alcohol Crimes: DUI, Drunkenness, Liquor
Drug Crimes: Total Drug, Total Drug Sales, Total Drug Possession, Cannabis Possession, Cannabis Sales, Heroin or Cocaine Possession, Heroin or Cocaine Sales, Other Drug Possession, Other Drug Sales, Synthetic Narcotic Possession, Synthetic Narcotic Sales
Grey Collar and Property Crimes: Forgery, Fraud, Stolen Property, Financial Crimes, Embezzlement, Total Gambling, Other Gambling, Bookmaking, Numbers Lottery
Sex or Family Crimes: Offenses Against the Family and Children, Other Sex Offenses, Prostitution, Rape
Violent Crimes: Aggravated Assault, Murder, Negligent Manslaughter, Robbery, Weapon Offenses
Other Crimes: Curfew, Disorderly Conduct, Other Non-traffic, Suspicion, Vandalism, Vagrancy
Simple: This data set has every crime and only the arrest categories that I created (see above).
If you have any questions, comments, or suggestions please contact me at jkkaplan6@gmail.com.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This package contains the files required for replicating the results reported in the paper “The Flexible Reverse Approach for Decomposing Economic Inefficiency: With an Application to Taiwanese Banks”, coauthored with Jesús T. Pastor, Juan Aparicio, and Javier Alcaraz and accepted for publication in June 2024 in Economic Modelling.
The package contains:
A Word™ file describing the content of the accompanying Excel file, where the results of the example reported in Table 1 are replicated. The file includes basic instructions for running Excel’s Solver and the Visual Basic for Applications (VBA) macros that automate the optimization processes for all firms.
An Excel™ file consisting of four tabs. The first tab presents the data, while each successive tab includes the models and results for the weighted additive technical inefficiency (Model_1), profit inefficiency (Model_4), and the closest benchmarks maximizing profit (Model_5).
The replication files correspond to the example used to illustrate the flexible reverse approach for measuring and decomposing profit inefficiency. The data on Taiwanese banks, collected and studied by Juo et al. (2015), were kindly provided by Prof. Tsu-Tan Fu. Since these data are not publicly available, readers interested in replicating the empirical application should contact the above authors. The spreadsheets can be easily modified to measure and decompose the profit inefficiency of any dataset of choice.
Reference: Juo, J. C., Fu, T. T., Yu, M. M., & Lin, Y. H. (2015). Profit-oriented productivity change. Omega, 57, 176-187.
Dryad_MatedPair_Data
This spreadsheet (MS Excel format) contains data related to raven mate-pairing behavior with respect to their mtDNA haplotypes. See the associated ReadMe file for additional details.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This study investigates pricing and coordination strategies for a dual-channel supply chain (DCSC), considering technological innovation in emergencies. We establish a DCSC model consisting of a manufacturer, a retailer, and an e-commerce platform (ECP). Depending on whether the manufacturer chooses to invest in technological innovation during emergencies, the chain operates in either a traditional production mode or a technological innovation mode. Using backward induction to solve the Stackelberg game, we explore the pricing and channel selection strategies of each member of a DCSC under the different modes. In addition, a revenue-sharing contract for a DCSC under emergencies is designed and improved. The research shows that under emergencies, consumers' preference for technological innovation can increase the profits of each member of the DCSC and the manufacturer's level of technological innovation. Manufacturers are more willing to choose the technological innovation mode than the traditional production mode. However, an increase in the ECP's commission rate can hinder the manufacturer's technological innovation and affects the choice between the offline channel and the ECP channel; specifically, when the commission rate exceeds a certain threshold, the offline channel should be chosen. Finally, traditional revenue-sharing contracts fail to effectively coordinate a DCSC that incorporates technological innovation during emergencies. To address this limitation, an improved revenue-sharing contract is proposed, which enhances the level of technological innovation while achieving Pareto improvements within the DCSC.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Nigeria adopted dolutegravir (DTG) as part of first-line (1L) antiretroviral therapy (ART) in 2017. However, there is limited documented experience using DTG in sub-Saharan Africa. Our study assessed DTG acceptability from the patient’s perspective as well as treatment outcomes at 3 high-volume facilities in Nigeria. This is a mixed-method prospective cohort study with 12 months of follow-up between July 2017 and January 2019. Patients who had intolerance or contraindications to non-nucleoside reverse-transcriptase inhibitors were included. Patient acceptability was assessed through one-on-one interviews at 2, 6, and 12 months following DTG initiation. ART-experienced participants were asked about side effects and regimen preference compared to their previous regimen. Viral load (VL) and CD4+ cell count tests were assessed according to the national schedule. Data were analysed in MS Excel and SAS 9.4. A total of 271 participants were enrolled in the study; the median age was 45 years, and 62% were female. 229 (206 ART-experienced, 23 ART-naive) of the enrolled participants were interviewed at 12 months. 99.5% of ART-experienced study participants preferred DTG to their previous regimen. 32% of participants reported at least one side effect: “increase in appetite” was most frequently reported (15%), followed by insomnia (10%) and bad dreams (10%). Average adherence as measured by drug pick-up was 99%, and 3% reported a missed dose in the 3 days preceding their interview. Among participants with VL results (n = 199), 99% were virally suppressed (
http://guides.library.uq.edu.au/deposit_your_data/terms_and_conditions
The data set is derived from survey responses provided by individual participants in the study, which are stored in an Excel file. As the data is original, no modifications, coding, and/or reversing have been applied. Only numerical data from the survey questions (on a Likert scale) and written responses from the open-ended questions are stored in the file. For detailed information regarding the survey questions, it is recommended to contact the project leader via email. All names have been removed from the data set to safeguard participants' anonymity.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Excel file containing source data for Figs 1–4 and 6, and for Figs B and C in S1 Text.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Literature search methods: A systematic review was done in November 2014 using the databases Web of Science and Google Scholar to collect and analyse the primary literature written to date about tool use and tool-making behaviour in non-human animals. The search on Google Scholar used the search terms “tool+using+making+animals”, including only articles, with no restriction on publication period, sorted by relevance. Since Google Scholar provided a large number of articles in descending order of relevance, we identified relevant articles with a first manual scan of titles and abstracts, continuing until the results were no longer consistently relevant; this produced a total of 23 possible publications. The search using Web of Science used the search terms “Tool*” (Topic) AND “Use* OR Utilization*” (Topic) AND “Mak*” (Topic) AND “Animal*” (Topic). This produced 316 possible publications. We then refined the results using the following search categories: “Behavioral Sciences”, “Ecology”, and “Zoology”, and again selected only articles, after which 9 articles were left. These then underwent a title and abstract scan for relevance to the specific topic. The full text of the remaining articles was processed, and articles that did not provide specific information about the occurrence of tool use and/or tool behaviour in animals were excluded. We also excluded all secondary literature, as it reviewed primary literature without providing its own data. Articles whose content was not focused on the specified topic and articles that did not provide sufficient data were also excluded. Of the 339 initial publications, 32 were screened: 2 were removed for not being primary research articles, 24 were directly related to the topic, and 6 were excluded for the reasons listed above. The remaining 24 studies included in the analysis comprised experiments from 1973 to 2014. Of the 24 articles, 4 were written in 2005, 2 in 1982, 2 in 1990, 2 in 1994, 2 in 2003, and 2 in 2014. All articles included in this review were published in English, in a total of 17 journals; the Journal of Comparative Psychology published 4 of the 24 articles, and Primates published 3.
Analysis of the literature: Studies were coded by geographical location (country name), duration (total length of the research, measured in months), type of experiment performed (observational, experimental), the common name of the animal observed or used as the experimental subject, the activity that was the scope of the tool-use behaviour, and the kind of tool being used.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Graphical analysis of the toxicity testing and of the potency of millet extracts in reversing tachycardic and bradycardic conditions. The results show significant changes, supported by statistical analysis (correlation analysis) performed using the basic functions of Microsoft Excel.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Excel spreadsheet containing, in separate sheets, the underlying numerical data presented in the manuscript.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Introduction
In March 2023, a Marburg Virus Disease (MVD) outbreak was declared in Kagera region, northwestern Tanzania. This was the first MVD outbreak in the country. We describe the epidemiological characteristics of MVD cases and contacts.
Methods
The Ministry of Health activated an outbreak response team. Outbreak investigation methods were applied to cases identified through MVD standard case definitions and confirmed through reverse-transcriptase polymerase chain reaction (RT-PCR). All identified case contacts were added to the contact listing form and followed up in person daily for any signs or symptoms for 21 days. Data collected from the various forms were managed and analyzed using Excel, with QGIS software used for mapping.
Results
A total of nine MVD cases were reported: eight laboratory-confirmed and one probable. Two of the reported cases were frontline healthcare workers and seven were related family members. Cases were children and adults between 1 and 59 years of age, with a median age of 34 years; six were male. Six cases died, equivalent to a case fatality rate (CFR) of 66.7%. A total of 212 individuals were identified as contacts, and two became cases. The outbreak was localized in two geo-administrative wards (Maruku and Kanyangereko) of Bukoba District Council.
Conclusion
Transmission during this outbreak occurred among family members and the healthcare workers who provided care to the cases. The delay in detection aggravated the spread and possibly the consequent fatalities, but once the outbreak was confirmed, the swift response stemmed further transmission, containing the disease in the epicenter wards. The outbreak lasted 72 days, but as its origin is still unknown, further research is required to explore the source of this outbreak.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
!!!WARNING~~~
This dataset has a large number of flaws and is unable to properly answer many questions that people generally use it to answer, such as whether national hate crimes are changing (or at least they use the data so improperly that they get the wrong answer). A large number of people using this data (academics, advocates, reporters, US Congress) do so inappropriately and get the wrong answer to their questions as a result. Indeed, many published papers using this data should be retracted. Before using this data I highly recommend that you thoroughly read my book on UCR data, particularly the chapter on hate crimes (https://ucrbook.com/hate-crimes.html), as well as the FBI's own manual on this data. The questions you could potentially answer well are relatively narrow and generally exclude any causal relationships.
~~~WARNING!!!
For a comprehensive guide to this data and other UCR data, please see my book at ucrbook.com
Version 10 release notes:
Adds 2022 data.
Version 9 release notes:
Adds 2021 data.
Version 8 release notes:
Adds 2019 and 2020 data. Please note that the FBI has retired UCR data ending in 2020, so this will be the last UCR hate crime data they release. Changes .rda file to .rds.
Version 7 release notes:
Changes release notes description; does not change data.
Version 6 release notes:
Adds 2018 data.
Version 5 release notes:
Adds data in the following formats: SPSS, SAS, and Excel.
Changes project name to avoid confusing this data with the one done by NACJD.
Adds data for 1991.
Fixes bug where the bias motivation "anti-lesbian, gay, bisexual, or transgender, mixed group (lgbt)" was labeled "anti-homosexual (gay and lesbian)" prior to 2013, causing there to be two columns and zero values for years with the wrong label.
All data is now directly from the FBI, not NACJD. The data initially comes as ASCII+SPSS Setup files and was read into R using the package asciiSetupReader. All work to clean the data and save it in various file formats was also done in R.
Version 4 release notes:
Adds data for 2017.
Adds rows that submitted a zero-report (i.e. the agency reported no hate crimes in the year). This is for all years 1992-2017.
Made changes to categorical variables (e.g. bias motivation columns) to make categories consistent over time. Different years had slightly different names (e.g. 'anti-am indian' and 'anti-american indian'), which I made consistent.
Made the 'population' column, which is the total population in that agency.
Version 3 release notes:
Adds data for 2016.
Orders rows by year (descending) and ORI.
Version 2 release notes:
Fixes bug where Philadelphia Police Department had an incorrect FIPS county code.
The Hate Crime data is an FBI data set that is part of the annual Uniform Crime Reporting (UCR) Program data. This data contains information about hate crimes reported in the United States. Please note that the files are quite large and may take some time to open. Each row indicates a hate crime incident for an agency in a given year. I have made a unique ID column ("unique_id") by combining the year, agency ORI9 (the 9-character Originating Identifier code), and incident number columns together. Each column is a variable related to that incident or to the reporting agency. Some of the important columns are the incident date, what crime occurred (up to 10 crimes), the number of victims for each of these crimes, the bias motivation for each of these crimes, and the location of each crime. It also includes the total number of victims, total number of offenders, and race of offenders (as a group). Finally, it has a number of columns indicating whether the victim of each offense was a certain type of victim or not (e.g. individual victim, business victim, religious victim, etc.). The only changes I made to the data are the following: minor changes to column names to make all column names 32 characters or fewer (so the data can be saved in a Stata format), making all character values lower case, and reordering columns. I also generated incident month, weekday, and month-day variables from the incident date variable included in the original data.
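A unique ID of that general form can be rebuilt by concatenating the three columns. A minimal BigQuery-flavored sketch, with hypothetical table and column names (the author's actual separator and formatting are not specified here):

SELECT
  CONCAT(CAST(year AS STRING), '-', ori9, '-', CAST(incident_number AS STRING)) AS unique_id
FROM hate_crimes;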
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
T1 values for intraobserver reproducibility assessment; Excel data with manual ROI placement by observer 1.