Attribution-ShareAlike 4.0 (CC BY-SA 4.0) https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
This dataset includes all the datafiles and computational notebooks required to reproduce the work reported in the paper “Characterisation of Dansgaard-Oeschger events in palaeoclimate time series using the Matrix Profile”.
Input datafiles
time series (20-year resolution) of oxygen isotope ratios (δ18O) from the NGRIP ice core on the GICC05 time scale (source: https://www.iceandclimate.nbi.ku.dk, DOI: 10.1016/j.quascirev.2014.09.007): the 1st column is the time in ka (10³ years) b2k (before A.D. 2000), and the 2nd column the oxygen isotope ratio;
time series (20-year resolution) of calcium concentration (Ca2+) from the NGRIP ice core on the GICC05 time scale (source: https://www.iceandclimate.nbi.ku.dk, DOI: 10.1016/j.quascirev.2014.09.007): the 1st column is the time in ka b2k, and the 2nd column the Ca2+ concentration;
time series (20-year resolution) of calcium concentration (Ca2+) from the NGRIP ice core on the GICC05 time scale, artificially shifted by 10 ka (500 data points): the 1st column is the time in ka b2k, and the 2nd column the Ca2+ concentration;
time series (20-year resolution) of calcium concentration (Ca2+) from the NGRIP ice core on the GICC05 time scale, trimmed by 10 ka (500 data points): the 1st column is the time in ka b2k, and the 2nd column the Ca2+ concentration;
Code and computational notebooks
R code for visualisation of matrix profile calculations;
jupyter notebook (Python) containing the matrix profile analysis of the oxygen isotope time series;
jupyter notebook (Python) containing the matrix profile analysis of the calcium time series;
jupyter notebook (Python) containing the join matrix profile analysis of the oxygen isotope and calcium time series;
jupyter notebook (R) for visualisation of the matrix profile results of the oxygen isotope time series;
jupyter notebook (R) for visualisation of the matrix profile results of the calcium time series;
jupyter notebook (R) for visualisation of the join matrix profile results;
Output datafiles
matrix profile of the oxygen isotope time series (sub-sequence length of 2,500 years): the 1st column contains the matrix profile value (distance to the nearest sub-sequence), the 2nd column contains the profile index (the zero-based index location of the nearest sub-sequence);
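The analysis notebooks in this deposit are written in Python, but as a rough illustration of the computation described above, here is a minimal R sketch using the tsmp package. The package choice and the input file name are assumptions, not part of the deposit; the window size follows the 2,500-year sub-sequence length at 20-year resolution quoted above.

```r
# Hedged sketch (not the deposit's notebooks): compute a matrix profile for
# the NGRIP d18O series with the 'tsmp' package.
library(tsmp)

# hypothetical file name; the real input has time (ka b2k) in column 1
# and the oxygen isotope ratio in column 2
ngrip <- read.table("NGRIP_d18O_20yr.txt",
                    col.names = c("age_ka_b2k", "d18O"))

w <- 2500 / 20  # 2,500-year sub-sequences at 20-year resolution = 125 points

mp <- tsmp(ngrip$d18O, window_size = w)

head(mp$mp)  # matrix profile: distance to the nearest sub-sequence
head(mp$pi)  # profile index: location of that nearest sub-sequence
```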
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
File List
Supplement_Avian data.csv
Supplement_R code.r
Description
The Supplement_Avian data.csv file contains data on stand-level habitat covariates and visit-specific detections of avian species, Oregon, USA, 2008–2009. Column definitions:
Stand id
Percent cover of conifer species
Percent cover of broadleaf species
Percent cover of deciduous broadleaf species
Percent cover of hardwood species
Percent cover of hardwood species in a 2000 m radius circle around each sample stand
Elevation (m) of stand
Age of stand
Year of sampling
Visit number
Detection of MacGillivray’s Warbler on Visit 1
Detection of MacGillivray’s Warbler on Visit 2
Detection of Orange-crowned Warbler on Visit 1
Detection of Orange-crowned Warbler on Visit 2
Detection of Swainson’s Thrush on Visit 1
Detection of Swainson’s Thrush on Visit 2
Detection of Willow Flycatcher on Visit 1
Detection of Willow Flycatcher on Visit 2
Detection of Wilson’s Warbler on Visit 1
Detection of Wilson’s Warbler on Visit 2
Checksum values are:
Column 2 (Percent cover of conifer species – CONIFER): SUM = 5862.83
Column 3 (Percent cover of broadleaf species – BROAD): SUM = 7043.17
Column 4 (Percent cover of deciduous broadleaf species – DECBROAD): SUM = 5475.17
Column 5 (Percent cover of hardwood species – HARDWOOD): SUM = 2151.96
Column 6 (Percent cover of hardwood species in a 2000 m radius circle around each sample stand– HWD2000): SUM = 3486.07
Column 7 (Stand elevation – ELEVM): SUM = 83240.58
Column 8 (Stand age – AGE): SUM = 1537; NA indicates a stand was harvested in 2008
Column 9 (Year of sampling – YEAR): SUM = 425792
Column 11 (MGWA.1): SUM = 70
Column 12 (MGWA.2): SUM = 71
Column 13 (OCWA.1): SUM = 121
Column 14 (OCWA.2): SUM = 76
Column 15 (SWTH.1): SUM = 90
Column 16 (SWTH.2): SUM = 95
Column 17 (WIFL.1): SUM = 85
Column 18 (WIFL.2): SUM = 85
Column 19 (WIWA.1): SUM = 36
Column 20 (WIWA.2): SUM = 37
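The checksums above can be verified directly after loading the file. A minimal R sketch, assuming the column order given in the definitions (column 8, AGE, contains NA for stands harvested in 2008):

```r
# Load the supplement and recompute the column sums listed above.
avian <- read.csv("Supplement_Avian data.csv")

# Columns 2-9 are the habitat covariates; AGE (column 8) contains NAs.
round(colSums(avian[, 2:9], na.rm = TRUE), 2)

# Columns 11-20 are the visit-specific detections, so their sums are counts.
colSums(avian[, 11:20], na.rm = TRUE)
```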
The Supplement_R code.r file is R source code for simulation and empirical analyses conducted in Jones et al.
Version 5 release notes:
Removes support for SPSS and Excel data.
Changes the crimes that are stored in each file. There are more files now with fewer crimes per file. The files and their included crimes have been updated below.
Adds in agencies that report 0 months of the year.
Adds a column that indicates the number of months reported, generated by summing the number of unique months an agency reports data for. Note that this indicates the number of months an agency reported arrests for ANY crime; they may not necessarily report every crime every month. Agencies that did not report a crime will have a value of NA for every arrest column for that crime.
Removes data on runaways.
Version 4 release notes:
Changes column names from "poss_coke" and "sale_coke" to "poss_heroin_coke" and "sale_heroin_coke" to clearly indicate that these columns include the sale of heroin as well as similar opiates such as morphine, codeine, and opium. Also changes the column names for the narcotic columns to indicate that they are only for synthetic narcotics.
Version 3 release notes:
Adds data for 2016.
Orders rows by year (descending) and ORI.
Version 2 release notes:
Fixes a bug where Philadelphia Police Department had an incorrect FIPS county code.
The Arrests by Age, Sex, and Race data is an FBI data set that is part of the annual Uniform Crime Reporting (UCR) Program data. This data contains highly granular data on the number of people arrested for a variety of crimes (see below for a full list of included crimes). The data sets here combine data from the years 1980-2015 into a single file. These files are quite large and may take some time to load.
All the data was downloaded from NACJD as ASCII+SPSS Setup files and read into R using the package asciiSetupReader. All work to clean the data and save it in various file formats was also done in R. For the R code used to clean this data, see https://github.com/jacobkap/crime_data. If you have any questions, comments, or suggestions, please contact me at jkkaplan6@gmail.com.
I did not make any changes to the data other than the following. When an arrest column has a value of "None/not reported", I change that value to zero. This makes the (possibly incorrect) assumption that these values represent zero crimes reported. The original data has no value other than "None/not reported" when an agency reports zero arrests; in other words, this data does not differentiate between real zeros and missing values. Some agencies also incorrectly report the following numbers of arrests, which I change to NA: 10000, 20000, 30000, 40000, 50000, 60000, 70000, 80000, 90000, 100000, 99999, 99998.
To reduce file size and make the data more manageable, all of the data is aggregated yearly. All of the data is in agency-year units, such that every row indicates an agency in a given year. Columns are crime-arrest category units. For example, if you choose the data set that includes murder, you would have rows for each agency-year and columns with the number of people arrested for murder. The ASR data breaks down arrests by age and gender (e.g. Male aged 15, Male aged 18). They also provide the number of adults or juveniles arrested by race. Because most agencies and years do not report the arrestee's ethnicity (Hispanic or not Hispanic) or juvenile outcomes (e.g. referred to adult court, referred to welfare agency), I do not include these columns.
To make it easier to merge with other data, I merged this data with the Law Enforcement Agency Identifiers Crosswalk (LEAIC) data. The data from the LEAIC add FIPS (state, county, and place) codes and agency type/subtype. Please note that some of the FIPS codes have leading zeros; if you open the file in Excel, it will automatically delete those leading zeros.
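If you load the data in R instead, a minimal sketch of how to preserve those leading zeros is to force the FIPS columns to character. The file and column names below are hypothetical, for illustration only:

```r
# Hedged sketch: read FIPS codes as character so leading zeros survive.
# Replace the file and column names with the ones in the actual data.
arrests <- read.csv("ucr_arrests_yearly.csv",
                    colClasses = c(fips_state_code  = "character",
                                   fips_county_code = "character",
                                   fips_place_code  = "character"))
```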
I created 9 arrest categories myself. The categories are:
Total Male Juvenile
Total Female Juvenile
Total Male Adult
Total Female Adult
Total Male
Total Female
Total Juvenile
Total Adult
Total Arrests
All of these categories are based on the sums of the sex-age categories (e.g. Male under 10, Female aged 22) rather than the provided age-race categories (e.g. adult Black, juvenile Asian). As not all agencies report the race data, my method is more accurate. These categories also make up the data in the "simple" version of the data. The "simple" file only includes the above 9 columns as the arrest data (all other columns are agency identifier columns). Because this "simple" data set needs fewer columns, I include all offenses in it.
As the arrest data is very granular, and each category of arrest is its own column, there are dozens of columns per crime. To keep the data somewhat manageable, there are nine different files: eight contain different crimes, and the ninth is the "simple" file. Each file contains the data for all years. The eight categories each cover a major crime category and do not overlap in crimes other than the index offenses. Please note that the crime names provided below are not the same as the column names in the data; because Stata limits column names to 32 characters, I have abbreviated the crime names in the data. The files and their included crimes are:
Index Crimes
Murder, Rape, Robbery, Aggravated Assault, Burglary, Theft, Motor Vehicle Theft, Arson
Alcohol Crimes
DUI, Drunkenness, Liquor
Drug Crimes
Total Drug, Total Drug Sales, Total Drug Possession, Cannabis Possession, Cannabis Sales, Heroin or Cocaine Possession, Heroin or Cocaine Sales, Other Drug Possession, Other Drug Sales, Synthetic Narcotic Possession, Synthetic Narcotic Sales
Grey Collar and Property Crimes
Forgery, Fraud, Stolen Property
Financial Crimes
Embezzlement, Total Gambling, Other Gambling, Bookmaking, Numbers Lottery
Sex or Family Crimes
Offenses Against the Family and Children, Other Sex Offenses, Prostitution, Rape
Violent Crimes
Aggravated Assault, Murder, Negligent Manslaughter, Robbery, Weapon Offenses
Other Crimes
Curfew, Disorderly Conduct, Other Non-traffic, Suspicion, Vandalism, Vagrancy
Simple
This data set has every crime and only the arrest categories that I created (see above).
If you have any questions, comments, or suggestions please contact me at jkkaplan6@gmail.com.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset and R code accompany a manuscript submitted by Marks et al. to Lancet Public Health entitled "Identifying Counties at Risk of High Overdose Mortality Burden Throughout the Emerging Fentanyl Epidemic in the United States: A Predictive Statistical Modeling Study". The analyses and results are available in the manuscript. All publicly available data used in the study are included in this dataset, along with several additional variables. Since the study used restricted mortality records from the CDC, we have censored all variables derived from this restricted data. Given access to the restricted data, researchers can add these variables to this dataset in the indicated columns. The accompanying R code was used for the analysis.
The data contains inequality measures at the municipality level for 1892 and 1871, as estimated in the PhD thesis "Institutions, Inequality and Societal Transformations" by Sara Moricz. The data also contains the source publications: 1) table 1 from “Bidrag till Sveriges officiella statistik R) Valstatistik. XI. Statistiska Centralbyråns underdåniga berättelse rörande kommunala rösträtten år 1892” (biSOS R 1892); 2) table 1 from “Bidrag till Sveriges officiella statistik R) Valstatistik. II. Statistiska Centralbyråns underdåniga berättelse rörande kommunala rösträtten år 1871” (biSOS R 1871).
A UTF-8 encoded .csv-file. Each row is a municipality of the agricultural sample (2222 in total). Each column is a variable.
R71muncipality_id: a unique identifier for the municipalities in the R1871 publication (the municipality name can be obtained from the source data)
R92muncipality_id: a unique identifier for the municipalities in the R1892 publication (the municipality name can be obtained from the source data)
agriTop1_1871: an ordinal measure (ranking) of the top 1 income share in the agricultural sector for 1871
agriTop1_1892: an ordinal measure (ranking) of the top 1 income share in the agricultural sector for 1892
highestFarm_1871: a cardinal measure of the top 1 person share in the agricultural sector for 1871
highestFarm_1892: a cardinal measure of the top 1 person share in the agricultural sector for 1892
A UTF-8 encoded .csv-file. Each row is a municipality of the industrial sample (1328 in total). Each column is a variable.
R71muncipality_id: see above description
R92muncipality_id: see above description
indTop1_1871: an ordinal measure (ranking) of the top 1 income share in the industrial sector for 1871
indTop1_1892: an ordinal measure (ranking) of the top 1 income share in the industrial sector for 1892
A UTF-8 encoded .csv-file with the source data. The variables are described in the adherent codebook moricz_R1892_source_data_codebook.csv.
Contains table 1 from “Bidrag till Sveriges officiella statistik R) Valstatistik. XI. Statistiska Centralbyråns underdåniga berättelse rörande kommunala rösträtten år 1892” (biSOS R 1892). SCB provides the scanned publication on their website. Dollar Typing Service typed and delivered the data in 2015. All numerical variables but two have been checked; this is easy to do, since nearly all columns should sum to another column. For "Folkmangd" (population) the numbers have been corrected against U1892. The highest estimate of errors in the checked variables is 0.005 percent (0.5 promille), calculated at cell level. The two numerical variables that have not been checked are "hogsta_fyrk_jo" and "hogsta_fyrk_ov", as these cannot readily be checked internally against the rest of the data. By my worst-case calculations, those variables carry measurement errors of at most 0.0043 percent (0.43 promille).
A UTF-8 encoded .csv-file with the source data. The variables are described in the adherent codebook moricz_R1871_source_data_codebook.csv.
Contains table 1 from “Bidrag till Sveriges officiella statistik R) Valstatistik. II. Statistiska Centralbyråns underdåniga berättelse rörande kommunala rösträtten år 1871” (biSOS R 1871). SCB provides the scanned publication on their website. Dollar Typing Service typed and delivered the data in 2015. The variables have been checked for accuracy, which is feasible since columns and rows should sum to matching totals. The variables most likely to carry mistakes are "hogsta_fyrk_al" and "hogsta_fyrk_jo".
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Marcelo A. Aizen, Gabriela R. Gleiser, Thomas Kitzberger, Ruben Milla. Being a tree crop increases the odds of experiencing yield declines irrespective of pollinator dependence (to be submitted to PCI)
Data and R scripts to reproduce the analyses and the figures shown in the paper. All analyses were performed using R 4.0.2.
Data
This file includes yearly data (1961-2020, column 8) on yield and cultivated area (columns 6 and 10) at the country, sub-regional, and regional levels (column 2) for each crop (column 4), drawn from the United Nations Food and Agriculture Organization database (data available at http://www.fao.org/faostat/en; accessed 21-12-2021). [Used in Script 1 to generate the synthesis dataset]
This file provides information on the region (column 2) to which each country (column 1) belongs. [Used in Script 1 to generate the synthesis dataset]
This file provides information on the pollinator dependence category (column 2) of each crop (column 1).
This file provides information on the traits of each crop other than pollinator dependence, including, besides the crop name (column 1), the type of harvested organ (column 5) and growth form (column 6). [Used in Script 1 to generate the synthesis dataset]
The synthesis dataset generated by Script 1.
The yield growth dataset generated by Script 1 and used as input by Scripts 2 and 3.
This file lists all the crops (column 1) and their equivalent tip names in the crop phylogeny (column 2). [Used in Script 2 for the phylogenetically-controlled analyses]
8.phylo137.tre
File containing the phylogenetic tree.
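A minimal R sketch for inspecting the tree; the choice of the ape package is mine, only the file name comes from this deposit:

```r
# Hedged sketch: load and plot the crop phylogeny with 'ape'.
library(ape)

crop_tree <- read.tree("8.phylo137.tre")
crop_tree                   # prints the number of tips and internal nodes
plot(crop_tree, cex = 0.4)  # small tip labels; the name suggests 137 tips
```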
Scripts
This R script curates and merges all the individual datasets mentioned above into a single dataset, estimating and adding to this single dataset the growth rate for each crop and country, and the (log) cumulative harvested area per crop and country over the period 1961-2020.
This R script includes all the analyses described in the article’s main text.
This R script creates all the main and supplementary figures of this article.
R function written by Li and Bolker (2019) to carry out phylogenetically-controlled generalized linear mixed-effects models as described in the main text of the article.
References
Li, M., and B. Bolker. 2019. wzmli/phyloglmm: First release of phylogenetic comparative analysis in lme4- verse. Zenodo. https://doi.org/10.5281/zenodo.2639887.
A bike-sharing system is a service in which bikes are made available for shared use to individuals on a short term basis for a price or free. Many bike share systems allow people to borrow a bike from a "dock" which is usually computer-controlled wherein the user enters the payment information, and the system unlocks it. This bike can then be returned to another dock belonging to the same system.
A US bike-sharing provider, BoomBikes, has recently suffered a considerable dip in revenue due to the Corona pandemic. The company is finding it very difficult to sustain itself in the current market scenario. So, it has decided to come up with a mindful business plan to accelerate its revenue.
In such an attempt, BoomBikes aspires to understand the demand for shared bikes among the people. They have planned this to prepare themselves to cater to the people's needs once the situation gets better all around and stand out from other service providers and make huge profits.
They have contracted a consulting company to understand the factors on which the demand for these shared bikes depends. Specifically, they want to know which factors affect the demand for shared bikes in the American market.
Based on various meteorological surveys and people's styles, the service provider firm has gathered a large dataset on daily bike demands across the American market based on some factors.
You are required to model the demand for shared bikes with the available independent variables. The model will be used by management to understand how exactly demand varies with different features. They can accordingly adjust the business strategy to meet demand levels and customer expectations. Further, the model will be a good way for management to understand the demand dynamics of a new market.
In the dataset provided, you will notice that there are three columns named 'casual', 'registered', and 'cnt'. The variable 'casual' indicates the number of casual users who have made a rental. The variable 'registered', on the other hand, shows the total number of registered users who have made a booking on a given day. Finally, the 'cnt' variable indicates the total number of bike rentals, including both casual and registered. The model should be built with 'cnt' as the target variable.
When you're done with model building and residual analysis and have made predictions on the test set, just make sure you use the following two lines of code to calculate the R-squared score on the test set.
```python
from sklearn.metrics import r2_score
r2_score(y_test, y_pred)
```
- where y_test is the test data for the target variable, and y_pred is the variable containing the predicted values of the target variable on the test set.
- Please perform this step, as the R-squared score on the test set serves as the benchmark for your model.
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0) https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
This data repository provides the Food and Agriculture Biomass Input Output (FABIO) database, a global set of multi-regional physical supply-use and input-output tables covering global agriculture and forestry.
The work is based on mostly freely available data from FAOSTAT, IEA, EIA, and UN Comtrade/BACI. FABIO currently covers 191 countries + RoW, 118 processes and 125 commodities (raw and processed agricultural and food products) for 1986-2013. All R codes and auxiliary data are available on GitHub. For more information please refer to https://fabio.fineprint.global.
The database consists of the following main components, in compressed .rds format:
Z: the inter-commodity input-output matrix, displaying the relationships of intermediate use of each commodity in the production of each commodity, in physical units (tons). The matrix has 24000 rows and columns (125 commodities x 192 regions), and is available in two versions, based on the method to allocate inputs to outputs in production processes: Z_mass (mass allocation) and Z_value (value allocation). Note that the row sums of the Z matrix (= total intermediate use by commodity) are identical in both versions.
Y: the final demand matrix, denoting the consumption of all 24000 commodities by destination country and final use category. There are six final use categories (yielding 192 x 6 = 1152 columns): 1) food use, 2) other use (non-food), 3) losses, 4) stock addition, 5) balancing, and 6) unspecified.
X: the total output vector of all 24000 commodities. Total output is equal to the sum of intermediate and final use by commodity.
L: the Leontief inverse, computed as (I – A)^-1, where A is the matrix of input coefficients derived from Z and x. Again, there are two versions, depending on the underlying version of Z (L_mass and L_value).
E: environmental extensions for each of the 24000 commodities, including four resource categories: 1) primary biomass extraction (in tons), 2) land use (in hectares), 3) blue water use (in m³), and 4) green water use (in m³).
mr_sup_mass/mr_sup_value: For each allocation method (mass/value), the supply table gives the physical supply quantity of each commodity by producing process, with processes in the rows (118 processes x 192 regions = 22656 rows) and commodities in columns (24000 columns).
mr_use: the use table captures the quantities of each commodity (rows) used as an input in each process (columns).
A description of the included countries and commodities (i.e. the rows and columns of the Z matrix) can be found in the auxiliary file io_codes.csv. Separate lists of the country sample (including ISO3 codes and continental grouping) and commodities (including moisture content) are given in the files regions.csv and items.csv, respectively. For information on the individual processes, see auxiliary file su_codes.csv. RDS files can be opened in R. Information on how to read these files can be obtained here: https://www.rdocumentation.org/packages/base/versions/3.6.2/topics/readRDS
Except for X.rds, which contains a matrix, all variables are organized as lists, where each element contains a sparse matrix. Please note that values are always given in physical units, i.e. tonnes or head, as specified in items.csv. The suffixes value and mass only indicate the form of allocation chosen for the construction of the symmetric IO tables (for more details see Bruckner et al. 2019). Product, process and country classifications can be found in the file fabio_classifications.xlsx.
Footprint results are not contained in the database but can be calculated, e.g. by using this script: https://github.com/martinbruckner/fabio_comparison/blob/master/R/fabio_footprints.R
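As a hedged illustration of what such a footprint calculation involves (the year indexing, the structure of E, and the extension name are assumptions; the linked script is authoritative):

```r
# Minimal sketch of a land-use footprint, assuming the components described
# above: lists of yearly sparse matrices for L/Y/E and a matrix for X.
library(Matrix)

L <- readRDS("L_mass.rds")[["2013"]]  # Leontief inverse, 24000 x 24000
Y <- readRDS("Y.rds")[["2013"]]       # final demand, 24000 x 1152
X <- readRDS("X.rds")[, "2013"]       # total output, length 24000
E <- readRDS("E.rds")[["2013"]]       # extensions; 'landuse' element assumed

int <- as.vector(E$landuse) / X       # hectares per ton of output
int[!is.finite(int)] <- 0             # guard against zero output

# footprint: scale the embodied flows L %*% Y row-wise by intensity
FP <- int * (L %*% Y)
head(colSums(FP))                     # land footprint by final-use column
```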
How to cite:
To cite FABIO work please refer to this paper:
Bruckner, M., Wood, R., Moran, D., Kuschnig, N., Wieland, H., Maus, V., Börner, J. 2019. FABIO – The Construction of the Food and Agriculture Input–Output Model. Environmental Science & Technology 53(19), 11302–11312. DOI: 10.1021/acs.est.9b03554
License:
This data repository is distributed under the CC BY-NC-SA 4.0 License. You are free to share and adapt the material for non-commercial purposes using proper citation. If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original. In case you are interested in a collaboration, I am happy to receive enquiries at martin.bruckner@wu.ac.at.
Known issues:
The underlying FAO data have been manipulated to the minimum extent necessary; data filling and supply-use balancing nevertheless required some adaptations. These are documented in the code and are also reflected in the balancing item in the final demand matrices. For proper use of the database, I recommend distributing the balancing item proportionally over all other uses (a sketch follows below) and running analyses both with and without balancing to illustrate uncertainties.
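A hedged sketch of that recommendation, spreading each country's balancing column (final-use category 5 of 6) proportionally over its other final-use columns; the column ordering is an assumption, so check the column metadata of Y.rds first:

```r
# Minimal sketch: redistribute the balancing item in the final demand matrix.
Y <- as.matrix(readRDS("Y.rds")[["2013"]])  # dense copy for simplicity

n_cat  <- 6  # food, other, losses, stock addition, balancing, unspecified
blocks <- split(seq_len(ncol(Y)), ceiling(seq_len(ncol(Y)) / n_cat))

for (b in blocks) {
  bal   <- b[5]                           # balancing column of this country
  other <- setdiff(b, bal)
  tot   <- rowSums(Y[, other])
  share <- ifelse(tot > 0, Y[, bal] / tot, 0)
  Y[, other] <- Y[, other] * (1 + share)  # proportional redistribution by row
  Y[, bal]   <- 0
}
# note: rows where all other uses are zero simply lose their balancing value
# in this sketch; handle such cases explicitly if they matter for your analysis
```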
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
File List
Merow_et_al_R_code.r -- R code for all analyses
fynbos_abundance_matrix.csv -- matrix of relative abundance (rows) by sites (columns)
fynbos_trait_matrix.csv -- matrix of species (rows) by traits (columns)
Description
The file Merow_et_al_R_code.r contains R code for constructing the EM models shown in the main text. This includes the basic EM model, calculating the Hellinger fit metric, Lagrange multipliers, and plotting results for both local and regional communities. We also include code for predicting community-aggregated traits from splines, cross validation, generating informative priors, and permutation tests. The file fynbos_abundance_matrix.csv contains relative abundance data, by site, for the eight elevational communities we used (derived from 43 5 × 10 m relevés) from the Baviaanskloof Mountains, South Africa. The file fynbos_trait_matrix.csv contains data for each species on the following traits: graminoid (binary), succulent (binary), maximum height, leaf longevity (ordinal 1–3, 1 is lowest), flowering duration, pubescence (binary), leaf width, leaf perimeter^2/leaf area, leaf area/basal diameter, stem length/stem basal diameter^(2/3). These traits have been rescaled to lie on the interval [0,1].
Checksum values are as follows:
For fynbos_trait_matrix.csv the columns sum to: col 1 (life.form.graminoid) = 19, col 2 = 2, col. 3 = 5.84708, col. 4 = 24, col. 5 = 18.09093, col. 6 = 14, col. 7 = 7.75024, col. 8 = 8.16681, col. 9 = 6.7815, and last col. = 8.2096.
For fynbos_abundance_matrix.csv, all columns sum to 1.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
For a comprehensive guide to this data and other UCR data, please see my book at ucrbook.com
Version 13 release notes:
Adds 2022 data.
Version 12 release notes:
Adds 2021 data.
Version 11 release notes:
Adds 2020 data. Please note that the FBI has retired UCR data ending in 2020, so this will (probably; I haven't seen confirmation either way) be the last LEOKA data they release. Changes the .rda file to .rds.
Version 10 release notes:
Changes the release notes description; does not change the data.
Version 9 release notes:
Adds data for 2019.
Version 8 release notes:
Fixes a bug for years 1960-1971 where the number of months reported variable was incorrectly down by 1 month. I recommend caution when using these years, as they only report either 0 or 12 months of the year, which differs from every other year in the data. Adds the variable officers_killed_total, which is the sum of officers_killed_by_felony and officers_killed_by_accident.
Version 7 release notes:
Adds data from 2018.
Version 6 release notes:
Adds data in the following formats: SPSS and Excel. Changes the project name to avoid confusing this data with the versions done by NACJD.
Version 5 release notes:
Adds data for 1960-1974 and 2017. Note: many columns (including number of female officers) will always have a value of 0 for years prior to 1971, because those variables weren't collected before 1971. These should be NA, not 0, but I'm keeping them as 0 to be consistent with the raw data.
Removes support for .csv and .sav files.
Adds a number_of_months_reported variable for each agency-year. A month is considered reported if the month_indicator column for that month has a value of "normal update" or "reported, not data."
The formatting of the monthly data has changed from wide to long: each agency-month now has a single row. The old data had each agency as a single row, with each month-category (e.g. jan_officers_killed_by_felony) as a column. Now there is a single column for each category (e.g. officers_killed_by_felony), and the month can be identified in the month column. This also changes most column names. Be careful when aggregating the monthly data, since some variables are the same every month (e.g. the number of officers employed is measured annually), so a naive sum will be 12 times the real value for those variables (see the sketch after this section).
Adds a date column, always set to the first of the month. It is NOT the date a crime occurred or was reported; it is only there to make it easier to create time-series graphs that require a date input.
All the data in this version was acquired from the FBI as text/DAT files and read into R using the package asciiSetupReader. The FBI also provided a PDF file explaining how to create the setup file to read the data. Both the FBI's PDF and the setup file I made are included in the zip files. The data is the same as from NACJD, but using all FBI files makes cleaning easier, as all column names are already identical.
Version 4 release notes:
Adds data for 2016. Orders rows by year (descending) and ORI.
Version 3 release notes:
Fixes a bug where Philadelphia Police Department had an incorrect FIPS county code.
The LEOKA data sets contain highly detailed data about the number of officers/civilians employed by an agency and how many officers were killed or assaulted. All the data was acquired from the FBI as text/DAT files and read into R using the package asciiSetupReader. The FBI also provided a PDF file explaining how to create the setup file to read the data. Both the FBI's PDF and the setup file I made are included in the zip files.
About 7% of all agencies in the data report more officers or civilians than population; as such, I removed the officers/civilians per 1,000 population variables. You should exercise caution if deciding to generate and use these variables yourself. Several agencies had impossibly large (>15) officer deaths in a single month; for those months I changed the value to NA. The UCR Handbook (https://ucr.fbi.gov/additional-ucr-publications/ucr_handbook.pdf/view) describes the LEOKA data as follows: "The UCR Program collects data from all contributing agencies ... on officer line-of-duty deaths and assaults. Reporting agencies must submit data on ... their own duly sworn officers feloniously or accidentally killed or assaulted in the line of duty. The purpose of this data collection ..."
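Given the wide-to-long format, a minimal dplyr sketch of the aggregation caution from the version 5 notes (the file name and the employee-count column name are hypothetical; ori, month, and officers_killed_by_felony are named in the notes above):

```r
# Hedged sketch: aggregate monthly LEOKA rows to agency-year units, summing
# true monthly counts but taking annually measured variables only once.
library(dplyr)

leoka <- readRDS("leoka_monthly.rds")  # hypothetical file name

yearly <- leoka %>%
  group_by(ori, year) %>%
  summarise(
    officers_killed_by_felony = sum(officers_killed_by_felony, na.rm = TRUE),
    # measured annually, so summing across 12 months would inflate it 12x:
    total_employees = first(total_employees),  # hypothetical column name
    .groups = "drop"
  )
```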