8 datasets found
  1. A dataset for temporal analysis of files related to the JFK case

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jan 24, 2020
    Cite
    Luczak-Roesch, Markus (2020). A dataset for temporal analysis of files related to the JFK case [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_1042153
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Victoria University of Wellington
    Authors
    Luczak-Roesch, Markus
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains the content of the subset of all files with a correct publication date from the 2017 release of files related to the JFK case (retrieved from https://www.archives.gov/research/jfk/2017-release). This content was extracted from the source PDF files using the R OCR libraries tesseract and pdftools.

    The code to derive the dataset is given as follows:

    BEGIN R DATA PROCESSING SCRIPT

    library(tesseract)
    library(pdftools)

    pdfs <- list.files("[path to your output directory containing all PDF files]")

    # the meta file containing all metadata for the PDF files (e.g. publication date)
    meta <- read.csv2("[path to your input directory]/jfkrelease-2017-dce65d0ec70a54d5744de17d280f3ad2.csv", header = TRUE, sep = ',')

    meta$Doc.Date <- as.character(meta$Doc.Date)

    # drop rows with an empty publication date or a zero year
    meta.clean <- meta[-which(meta$Doc.Date == "" | grepl("/0000", meta$Doc.Date)), ]

    for (i in 1:nrow(meta.clean)) {
      # replace "00" day/month placeholders with "01"
      meta.clean$Doc.Date[i] <- gsub("00", "01", meta.clean$Doc.Date[i])
      # expand dates with two-digit years to the full %m/%d/%Y format
      if (nchar(meta.clean$Doc.Date[i]) < 10) {
        meta.clean$Doc.Date[i] <- format(strptime(meta.clean$Doc.Date[i], format = "%d/%m/%y"), "%m/%d/%Y")
      }
    }

    meta.clean$Doc.Date <- strptime(meta.clean$Doc.Date, format = "%m/%d/%Y")
    meta.clean <- meta.clean[order(meta.clean$Doc.Date), ]

    docs <- data.frame(content = character(0), dpub = character(0), stringsAsFactors = FALSE)

    for (i in 1:nrow(meta.clean)) {
      pdf_prop <- pdftools::pdf_info(paste0("[path to your output directory]/", tolower(meta.clean$File.Name[i])))

      # one temporary image file per PDF page (adjust the path to your environment)
      tmp_files <- c()
      for (k in 1:pdf_prop$pages) {
        tmp_files <- c(tmp_files, paste0("/home/STAFF/luczakma/RProjects/JFK/data/tmp/", k))
      }

      img_file <- pdftools::pdf_convert(paste0("[path to your output directory]/", tolower(meta.clean$File.Name[i])),
                                        format = 'tiff', pages = NULL, dpi = 700, filenames = tmp_files)

      # OCR every page image and concatenate the text
      txt <- ""
      for (j in 1:length(img_file)) {
        extract <- ocr(img_file[j], engine = tesseract("eng"))
        txt <- paste(txt, extract, collapse = " ")
      }

      # strip punctuation, collapse whitespace, lower-case, and store with the publication date
      docs <- rbind(docs,
                    data.frame(content = iconv(tolower(gsub("\\s+", " ", gsub("[[:punct:]]|[ ]", " ", txt))), to = "UTF-8"),
                               dpub = format(meta.clean$Doc.Date[i], "%Y/%m/%d"),
                               stringsAsFactors = FALSE),
                    stringsAsFactors = FALSE)
    }

    write.table(docs, "[path to your output directory]/documents.csv", row.names = FALSE)

    END R DATA PROCESSING SCRIPT

  2. Google Data Analytics Case Study Cyclistic

    • kaggle.com
    zip
    Updated Sep 27, 2022
    + more versions
    Cite
    Udayakumar19 (2022). Google Data Analytics Case Study Cyclistic [Dataset]. https://www.kaggle.com/datasets/udayakumar19/google-data-analytics-case-study-cyclistic/suggestions
    zip (1299 bytes)
    Dataset updated
    Sep 27, 2022
    Authors
    Udayakumar19
    Description

    Introduction

    Welcome to the Cyclistic bike-share analysis case study! In this case study, you will perform many real-world tasks of a junior data analyst. You will work for a fictional company, Cyclistic, and meet different characters and team members. In order to answer the key business questions, you will follow the steps of the data analysis process: ask, prepare, process, analyze, share, and act. Along the way, the Case Study Roadmap tables — including guiding questions and key tasks — will help you stay on the right path.

    Scenario

    You are a junior data analyst working in the marketing analyst team at Cyclistic, a bike-share company in Chicago. The director of marketing believes the company’s future success depends on maximizing the number of annual memberships. Therefore, your team wants to understand how casual riders and annual members use Cyclistic bikes differently. From these insights, your team will design a new marketing strategy to convert casual riders into annual members. But first, Cyclistic executives must approve your recommendations, so they must be backed up with compelling data insights and professional data visualizations.

    Ask

    How do annual members and casual riders use Cyclistic bikes differently?

    Guiding Question:

    What is the problem you are trying to solve?
      How do annual members and casual riders use Cyclistic bikes differently?
    How can your insights drive business decisions?
      The insights will help the marketing team design a strategy to convert casual riders into annual members.
    

    Prepare

    Guiding Question:

    Where is your data located?
      The data is located in the Cyclistic organization's internal data.
    
    How is data organized?
      The datasets are CSV files, one per month, covering financial year 2022.
    
    Are there issues with bias or credibility in this data? Does your data ROCCC? 
      Yes, the data is ROCCC (reliable, original, comprehensive, current, and cited) because it was collected by the Cyclistic organization itself.
    
    How are you addressing licensing, privacy, security, and accessibility?
      The company holds its own license over the dataset, and the dataset does not contain any personal information about the riders.
    
    How did you verify the data’s integrity?
      All the files have consistent columns and each column has the correct type of data.
    
    How does it help you answer your questions?
      Insights are always hidden in the data; we have to interpret the data to find them.
    
    Are there any problems with the data?
      Yes, the starting station name and ending station name columns contain null values.
    

    Process

    Guiding Question:

    What tools are you choosing and why?
      I used RStudio to clean and transform the data for the analysis phase, both because the dataset is large and to gain experience with the language.
    
    Have you ensured the data’s integrity?
     Yes, the data is consistent throughout the columns.
    
    What steps have you taken to ensure that your data is clean?
      First, duplicates and null values were removed; then new columns were added for the analysis.
    
    How can you verify that your data is clean and ready to analyze?
      Make sure the column names are consistent throughout all datasets before combining them with the bind_rows() function.

    Make sure the column data types are consistent throughout all the datasets by using compare_df_cols() from the “janitor” package.
    Combine all the datasets into a single data frame so the analysis is consistent throughout.
    Remove the start_lat, start_lng, end_lat, and end_lng columns from the data frame because they are not required for the analysis.
    Create new columns day, date, month, and year from the started_at column; this provides additional opportunities to aggregate the data.
    Create the ride_length column from the started_at and ended_at columns to find riders' average ride duration.
    Remove the null rows from the dataset by using the na.omit() function (a sketch of these steps follows below).
    Have you documented your cleaning process so you can review and share those results? 
      Yes, the cleaning process is documented clearly.
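    A minimal sketch of these cleaning steps in R. The data/ directory and the monthly file-name pattern are hypothetical placeholders; bind_rows(), compare_df_cols(), and na.omit() are the functions named above.

    ```r
    # Minimal sketch of the cleaning steps above; the file path and naming
    # pattern are hypothetical placeholders for the monthly exports.
    library(dplyr)
    library(janitor)
    library(readr)

    files <- list.files("data", pattern = "tripdata.csv$", full.names = TRUE)
    trips_list <- lapply(files, read_csv)

    compare_df_cols(trips_list)  # check column names/types agree across files

    trips <- bind_rows(trips_list) %>%              # combine into one data frame
      select(-start_lat, -start_lng, -end_lat, -end_lng) %>%
      distinct()                                    # drop duplicates

    trips <- trips %>%
      mutate(date  = as.Date(started_at),
             day   = format(date, "%A"),
             month = format(date, "%m"),
             year  = format(date, "%Y"),
             ride_length = as.numeric(difftime(ended_at, started_at,
                                               units = "mins"))) %>%
      na.omit()                                     # remove null rows
    ```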
    

    Analyze Phase:

    Guiding Questions:

    How should you organize your data to perform analysis on it?
      The data has been organized into one single data frame by using the read_csv function in R.
    Has your data been properly formatted?
      Yes, all the columns have their correct data types.

    What surprises did you discover in the data?
      Casual riders' average ride duration is higher than annual members'.
      Casual riders use docked bikes far more than annual members.
    What trends or relationships did you find in the data?
      Annual members ride mainly for commuting.
      Casual riders prefer docked bikes.
      Annual members prefer electric or classic bikes.
    How will these insights help answer your business questions?
      These insights help build a profile of each rider type (see the aggregation sketch below).
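    A minimal sketch of the aggregation behind these findings, reusing the cleaned trips data frame from the Process sketch; member_casual and rideable_type are assumed column names for the rider type and bike type.

    ```r
    # Sketch: compare rider types by bike type, ride count, and mean duration;
    # member_casual and rideable_type are assumed column names.
    trips %>%
      group_by(member_casual, rideable_type) %>%
      summarise(rides = n(),
                avg_ride_length = mean(ride_length),
                .groups = "drop") %>%
      arrange(member_casual, desc(rides))
    ```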
    

    Share

    Guiding Questions:

    Were you able to answer the question of how ...
    
  3. RUNNING"calorie:heartrate

    • kaggle.com
    zip
    Updated Jan 6, 2022
    Cite
    romechris34 (2022). RUNNING"calorie:heartrate [Dataset]. https://www.kaggle.com/datasets/romechris34/wellness
    zip (25272804 bytes)
    Dataset updated
    Jan 6, 2022
    Authors
    romechris34
    Description

    ---
    title: 'BellaBeat Fitbit'
    author: 'C Romero'
    date: '`r Sys.Date()`'
    output:
      html_document:
        number_sections: true
        toc: true
    ---

    ```{r}
    ## Install the packages used as data analysis tools
    install.packages("base")       # base package
    install.packages("ggplot2")    # data visualization
    install.packages("lubridate")  # makes it easier to work with dates and times
    install.packages("tidyverse")  # metapackage of tidyverse packages
    install.packages("dplyr")      # data manipulation
    install.packages("readr")      # read rectangular text data
    install.packages("tidyr")      # tidy data
    ```

    Importing packages

    ```{r}
    # metapackage of all tidyverse packages
    library(base)
    library(lubridate)  # make dealing with dates a little easier
    library(ggplot2)    # create elegant data visualizations using the grammar of graphics
    library(dplyr)      # a grammar of data manipulation
    library(readr)      # read rectangular text data
    library(tidyr)
    ```

    
    ## Running code

    In a notebook, you can run a single code cell by clicking in the cell and then hitting
    the blue arrow to the left, or by clicking in the cell and pressing Shift+Enter. In a script,
    you can run code by highlighting the code you want to run and then clicking the blue arrow
    at the bottom of this window.

    ## Reading in files

    ```{r}
    list.files(path = "../input")
    ```

    ```{r}
    # load the activity and sleep data sets
    dailyActivity <- read_csv("../input/wellness/dailyActivity_merge.csv")
    sleepDay <- read_csv("../input/wellness/sleepDay_merged.csv")
    ```
    
    

    check for duplicates and NAs

    ```{r}
    sum(duplicated(dailyActivity))
    sum(duplicated(sleepDay))
    sum(is.na(dailyActivity))
    sum(is.na(sleepDay))
    ```

    now we will remove duplicates from sleep and create a new data frame

    ```{r}
    sleepy <- sleepDay %>% distinct()
    head(sleepy)
    head(dailyActivity)
    ```

    count the number of unique ids in the sleepy and dailyActivity frames

    ```{r}
    n_distinct(dailyActivity$Id)
    n_distinct(sleepy$Id)
    ```

    get the total steps and total distance for each member id

    ```{r}
    dailyActivity %>%
      group_by(Id) %>%
      summarise(freq = sum(TotalSteps)) %>%
      arrange(-freq)

    Tot_dist <- dailyActivity %>%
      mutate(Id = as.character(Id)) %>%
      group_by(Id) %>%
      summarise(dizzy = sum(TotalDistance)) %>%
      arrange(-dizzy)
    ```

    now get total minutes asleep and total time in bed

    ```{r}
    sleepy %>% group_by(Id) %>% summarise(Msleep = sum(TotalMinutesAsleep)) %>% arrange(Msleep)
    sleepy %>% group_by(Id) %>% summarise(inBed = sum(TotalTimeInBed)) %>% arrange(inBed)
    ```

    plot graphs for "in bed and sleep data" and "total steps and distance"

    ```{r}
    ggplot(Tot_dist) +
      geom_count(mapping = aes(y = dizzy, x = Id, color = Id, fill = Id, size = 2)) +
      labs(x = "member id's", title = "distance miles") +
      theme(axis.text.x = element_text(angle = 90))
    ```
    
  4. Data from: Lower complexity of motor primitives ensures robust control of high-speed human locomotion

    • data.niaid.nih.gov
    Updated Jun 18, 2022
    + more versions
    Cite
    Santuz, Alessandro; Ekizos, Antonis; Kunimasa, Yoko; Kijima, Kota; Ishikawa, Masaki; Arampatzis, Adamantios (2022). Lower complexity of motor primitives ensures robust control of high-speed human locomotion [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_3764760
    Dataset updated
    Jun 18, 2022
    Dataset provided by
    Humboldt-Universität zu Berlin
    Osaka University of Health and Sport Sciences
    Humboldt-Universität zu Berlin, Dalhousie University
    Authors
    Santuz, Alessandro; Ekizos, Antonis; Kunimasa, Yoko; Kijima, Kota; Ishikawa, Masaki; Arampatzis, Adamantios
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Walking and running are mechanically and energetically different locomotion modes. For selecting one or another, speed is a parameter of paramount importance. Yet, both are likely controlled by similar low-dimensional neuronal networks that reflect in patterned muscle activations called muscle synergies. Here, we investigated how humans synergistically activate muscles during locomotion at different submaximal and maximal speeds. We analysed the duration and complexity (or irregularity) over time of motor primitives, the temporal components of muscle synergies. We found that the challenge imposed by controlling high-speed locomotion forces the central nervous system to produce muscle activation patterns that are wider and less complex relative to the duration of the gait cycle. The motor modules, or time-independent coefficients, were redistributed as locomotion speed changed. These outcomes show that robust locomotion control at challenging speeds is achieved by modulating the relative contribution of muscle activations and producing less complex and wider control signals, whereas slow speeds allow for more irregular control.

    In this supplementary data set we made available: a) the metadata with anonymized participant information, b) the raw EMG, c) the touchdown and lift-off timings of the recorded limb, d) the filtered and time-normalized EMG, e) the muscle synergies extracted via NMF and f) the code to process the data, including the scripts to calculate the Higuchi's fractal dimension (HFD) of motor primitives. In total, 180 trials from 30 participants are included in the supplementary data set.

    The file “metadata.dat” is available in ASCII and RData format and contains:

    Code: the participant’s code

    Group: the experimental group in which the participant was involved (G1 = walking and submaximal running; G2 = submaximal and maximal running)

    Sex: the participant’s sex (M or F)

    Speeds: the type of locomotion (W for walking or R for running) and speed at which the recordings were conducted in 10*[m/s]

    Age: the participant’s age in years

    Height: the participant’s height in [cm]

    Mass: the participant’s body mass in [kg]

    PB: 100 m-personal best time (for G2).

    The "RAW_DATA.RData" R list consists of elements of S3 class "EMG", each of which is a human locomotion trial containing cycle segmentation timings and raw electromyographic (EMG) data from 13 muscles of the right-side leg. Cycle times are structured as data frames containing two columns that correspond to touchdown (first column) and lift-off (second column). Raw EMG data sets are also structured as data frames with one row for each recorded data point and 14 columns. The first column contains the incremental time in seconds. The remaining 13 columns contain the raw EMG data, named with the following muscle abbreviations: ME = gluteus medius, MA = gluteus maximus, FL = tensor fasciæ latæ, RF = rectus femoris, VM = vastus medialis, VL = vastus lateralis, ST = semitendinosus, BF = biceps femoris, TA = tibialis anterior, PL = peroneus longus, GM = gastrocnemius medialis, GL = gastrocnemius lateralis, SO = soleus. Please note that the following trials include less than 30 gait cycles (the actual number shown between parentheses): P16_R_83 (20), P16_R_95 (25), P17_R_28 (28), P17_R_83 (24), P17_R_95 (13), P18_R_95 (23), P19_R_95 (18), P20_R_28 (25), P20_R_42 (27), P20_R_95 (25), P22_R_28 (23), P23_R_28(29), P24_R_28 (28), P24_R_42 (29), P25_R_28 (29), P25_R_95 (28), P26_R_28 (29), P26_R_95 (28), P27_R_28 (28), P27_R_42 (29), P27_R_95 (24), P28_R_28 (29), P29_R_95 (17). All the other trials consist of 30 gait cycles. Trials are named like “P20_R_20,” where the characters “P20” indicate the participant number (in this example the 20th), the character “R” indicate the locomotion type (W=walking, R=running), and the numbers “20” indicate the locomotion speed in 10*m/s (in this case the speed is 2.0 m/s). The filtered and time-normalized emg data is named, following the same rules, like “FILT_EMG_P03_R_30”.

    Old versions not compatible with the R package musclesyneRgies

    The files containing the gait cycle breakdown are available in RData format, in the file named “CYCLE_TIMES.RData”. The files are structured as data frames with as many rows as the available number of gait cycles and two columns. The first column, named “touchdown”, contains the touchdown incremental times in seconds. The second column, named “stance”, contains the duration of each stance phase of the right foot in seconds. Each trial is saved as an element of a single R list. Trials are named like “CYCLE_TIMES_P20_R_20”, where the characters “CYCLE_TIMES” indicate that the trial contains the gait cycle breakdown times, the characters “P20” indicate the participant number (in this example the 20th), the character “R” indicates the locomotion type (W = walking, R = running), and the numbers “20” indicate the locomotion speed in 10*[m/s] (in this case the speed is 2.0 m/s). Please note that the following trials include fewer than 30 gait cycles (the actual number is shown between parentheses): P16_R_83 (20), P16_R_95 (25), P17_R_28 (28), P17_R_83 (24), P17_R_95 (13), P18_R_95 (23), P19_R_95 (18), P20_R_28 (25), P20_R_42 (27), P20_R_95 (25), P22_R_28 (23), P23_R_28 (29), P24_R_28 (28), P24_R_42 (29), P25_R_28 (29), P25_R_95 (28), P26_R_28 (29), P26_R_95 (28), P27_R_28 (28), P27_R_42 (29), P27_R_95 (24), P28_R_28 (29), P29_R_95 (17).

    The files containing the raw, filtered, and normalized EMG data are available in RData format, in the files named “RAW_EMG.RData” and “FILT_EMG.RData”. The raw EMG files are structured as data frames with as many rows as the number of recorded data points and 13 columns. The first column, named “time”, contains the incremental time in seconds. The remaining 12 columns contain the raw EMG data, named with muscle abbreviations that follow those reported above. Each trial is saved as an element of a single R list. Trials are named like “RAW_EMG_P03_R_30”, where the characters “RAW_EMG” indicate that the trial contains raw EMG data, the characters “P03” indicate the participant number (in this example the 3rd), the character “R” indicates the locomotion type (see above), and the numbers “30” indicate the locomotion speed (see above). The filtered and time-normalized EMG data are named, following the same rules, like “FILT_EMG_P03_R_30”.

    The files containing the muscle synergies extracted from the filtered and normalized EMG data are available in RData format, in the files named “SYNS_H.RData” and “SYNS_W.RData”. The muscle synergies files are divided into motor primitives and motor modules and are presented as the direct output of the factorisation, not in any functional order. Motor primitives are data frames with 6000 rows and a number of columns equal to the number of synergies (which might differ from trial to trial) plus one. The rows contain the time-dependent coefficients (motor primitives), one column for each synergy plus the time points (columns are named e.g. “time, Syn1, Syn2, Syn3”, where “Syn” is the abbreviation for “synergy”). Each gait cycle contains 200 data points, 100 for the stance and 100 for the swing phase, which, multiplied by the 30 recorded cycles, result in 6000 data points distributed in as many rows. This output is transposed as compared to the one discussed in the methods section to improve user readability. Each set of motor primitives is saved as an element of a single R list. Trials are named like “SYNS_H_P12_W_07”, where the characters “SYNS_H” indicate that the trial contains motor primitive data, the characters “P12” indicate the participant number (in this example the 12th), the character “W” indicates the locomotion type (see above), and the numbers “07” indicate the speed (see above). Motor modules are data frames with 12 rows (number of recorded muscles) and a number of columns equal to the number of synergies (which might differ from trial to trial). The rows, named with muscle abbreviations that follow those reported above, contain the time-independent coefficients (motor modules), one for each synergy and for each muscle. Each set of motor modules relative to one synergy is saved as an element of a single R list. Trials are named like “SYNS_W_P22_R_20”, where the characters “SYNS_W” indicate that the trial contains motor module data, the characters “P22” indicate the participant number (in this example the 22nd), the character “R” indicates the locomotion type (see above), and the numbers “20” indicate the speed (see above). Given the nature of the NMF algorithm for the extraction of muscle synergies, the supplementary data set might show non-significant differences as compared to the one used for obtaining the results of this paper.

    The files containing the HFD calculated from motor primitives are available in RData format, in the file named “HFD.RData”. HFD results are presented in a list of lists containing, for each trial, 1) the HFD, and 2) the interval time k used for the calculations. HFDs are presented as one number (mean HFD of the primitives for that trial), as are the interval times k. Trials are named like “HFD_P01_R_95”, where the characters “HFD” indicate that the trial contains HFD data, the characters “P01” indicate the participant number (in this example the 1st), the character “R” indicates the locomotion type (see above), and the numbers “95” indicate the speed (see above).

    All the code used for the pre-processing of EMG data, the extraction of muscle synergies and the calculation of HFD is available in R format. Explanatory comments are provided throughout the script “muscle_synergies.R”.
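    A minimal sketch of loading and inspecting these files in R, under the assumption that each .RData file restores a single list named after the file (e.g. RAW_EMG); the trial and column names follow the conventions described above.

    ```r
    # Minimal sketch: load and inspect the supplementary lists; it assumes each
    # .RData file restores a list named after the file (e.g. RAW_EMG).
    load("RAW_EMG.RData")
    load("SYNS_H.RData")

    names(RAW_EMG)                        # trial names, e.g. "RAW_EMG_P03_R_30"
    trial <- RAW_EMG[["RAW_EMG_P03_R_30"]]
    str(trial)                            # data frame: "time" + 12 muscle columns

    # raw signal of one muscle (VL = vastus lateralis) against time
    plot(trial$time, trial$VL, type = "l",
         xlab = "time [s]", ylab = "raw EMG (VL)")

    # motor primitives of one trial: "time" plus one column per synergy
    prim <- SYNS_H[["SYNS_H_P12_W_07"]]
    matplot(prim$time, as.matrix(prim[, -1]), type = "l",
            xlab = "time points", ylab = "motor primitives")
    ```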

  5. Data from: A Phanerozoic gridded dataset for palaeogeographic reconstructions

    • portalcientifico.uvigo.gal
    • data.niaid.nih.gov
    • +1more
    Updated 2024
    + more versions
    Cite
    Jones, Lewis A.; Domeier, Mathew (2024). A Phanerozoic gridded dataset for palaeogeographic reconstructions [Dataset]. https://portalcientifico.uvigo.gal/documentos/668fc42bb9e7c03b01bd5735
    Dataset updated
    2024
    Authors
    Jones, Lewis A.; Domeier, Mathew
    Description

    This repository provides access to five pre-computed reconstruction files as well as the static polygons and rotation files used to generate them. This set of palaeogeographic reconstruction files provide palaeocoordinates for three global grids at H3 resolutions 2, 3, and 4, which have an average cell spacing of ~316 km, ~119 km, and ~45 km, respectively. Grids were reconstructed at a temporal resolution of one million years throughout the entire Phanerozoic (540–0 Ma). The reconstruction files are stored as comma-separated-value (CSV) files which can be easily read by almost any spreadsheet program (e.g. Microsoft Excel and Google Sheets) or programming language (e.g. Python, Julia, and R). In addition, R Data Serialization (RDS) files—a common format for saving R objects—are also provided as lighter (and compressed) alternatives to the CSV files. The structure of the reconstruction files follows a wide-form data frame structure to ease indexing. Each file consists of three initial index columns relating to the H3 cell index (i.e. the 'H3 address'), present-day longitude of the cell centroid, and the present-day latitude of the cell centroid. The subsequent columns provide the reconstructed longitudinal and latitudinal coordinate pairs for their respective age of reconstruction in ascending order, indicated by a numerical suffix. Each row contains a unique spatial point on the Earth's continental surface reconstructed through time. NA values within the reconstruction files indicate points which are not defined in deeper time (i.e. either the static polygon does not exist at that time, or it is outside the temporal coverage as defined by the rotation file).
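    A minimal sketch of indexing one of these files in R. The file name ("WR13_res3.rds") and the coordinate column names ("lng_250"/"lat_250") are illustrative assumptions following the numerical-suffix convention described above.

    ```r
    # Minimal sketch: read one reconstruction file and pull palaeocoordinates
    # for one age; file and column names below are illustrative assumptions.
    recon <- readRDS("WR13_res3.rds")   # hypothetical RDS file name

    head(recon[, 1:5])  # H3 index, present-day lng/lat, then reconstructed pairs

    # palaeocoordinates of every grid cell at 250 Ma, assuming the age suffix
    # convention yields columns named lng_250 and lat_250
    palaeo_250 <- recon[, c(1, match(c("lng_250", "lat_250"), names(recon)))]

    # NA rows are points not defined at that reconstruction age
    palaeo_250 <- palaeo_250[complete.cases(palaeo_250), ]
    ```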

    The following five Global Plate Models are provided (abbreviation, temporal coverage, reference) within the GPMs folder:

    WR13, 0–550 Ma, (Wright et al., 2013)

    MA16, 0–410 Ma, (Matthews et al., 2016)

    TC16, 0–540 Ma, (Torsvik and Cocks, 2016)

    SC16, 0–1100 Ma, (Scotese, 2016)

    ME21, 0–1000 Ma, (Merdith et al., 2021)

    In addition, the H3 grids for resolutions 2, 3, and 4 are provided within the grids folder. Finally, we also provide two scripts (Python and R) within the code folder which can be used to generate reconstructed coordinates for user data from the reconstruction files.

    For access to the code used to generate these files:

    https://github.com/LewisAJones/PhanGrids

    For more information, please refer to the article describing the data:

    Jones, L.A. and Domeier, M.M. (2024). A Phanerozoic gridded dataset for palaeogeographic reconstructions.

    For any additional queries, contact:

    Lewis A. Jones (lewisa.jones@outlook.com) or Mathew M. Domeier (mathewd@uio.no)

    If you use these files, please cite:

    Jones, L.A. and Domeier, M.M. 2024. A Phanerozoic gridded dataset for palaeogeographic reconstructions. DOI: 10.5281/zenodo.10069221

    References

    Matthews, K. J., Maloney, K. T., Zahirovic, S., Williams, S. E., Seton, M., & Müller, R. D. (2016). Global plate boundary evolution and kinematics since the late Paleozoic. Global and Planetary Change, 146, 226–250. https://doi.org/10.1016/j.gloplacha.2016.10.002.

    Merdith, A. S., Williams, S. E., Collins, A. S., Tetley, M. G., Mulder, J. A., Blades, M. L., Young, A., Armistead, S. E., Cannon, J., Zahirovic, S., & Müller, R. D. (2021). Extending full-plate tectonic models into deep time: Linking the Neoproterozoic and the Phanerozoic. Earth-Science Reviews, 214, 103477. https://doi.org/10.1016/j.earscirev.2020.103477.

    Scotese, C. R. (2016). Tutorial: PALEOMAP paleoAtlas for GPlates and the paleoData plotter program: PALEOMAP Project, Technical Report.

    Torsvik, T. H., & Cocks, L. R. M. (2017). Earth history and palaeogeography. Cambridge University Press. https://doi.org/10.1017/9781316225523.

    Wright, N., Zahirovic, S., Müller, R. D., & Seton, M. (2013). Towards community-driven paleogeographic reconstructions: Integrating open-access paleogeographic and paleobiology data with plate tectonics. Biogeosciences, 10, 1529–1541. https://doi.org/10.5194/bg-10-1529-2013.

  6. Demographic and Health Survey 1996 - Zambia

    • catalog.ihsn.org
    • microdata.worldbank.org
    Updated Jul 6, 2017
    + more versions
    Cite
    Central Statistical Office (2017). Demographic and Health Survey 1996 - Zambia [Dataset]. https://catalog.ihsn.org/catalog/2475
    Dataset updated
    Jul 6, 2017
    Dataset authored and provided by
    Central Statistical Office
    Time period covered
    1996 - 1997
    Area covered
    Zambia
    Description

    Abstract

    The 1996 Zambia Demographic and Health Survey (ZDHS) is a nationally representative survey conducted by the Central Statistical Office at the request of the Ministry of Health, with the aim of gathering reliable information on fertility, childhood and maternal mortality rates, maternal and child health indicators, contraceptive knowledge and use, and knowledge and prevalence of sexually transmitted diseases (STDs) including AIDS. The survey is a follow-up to the Zambia DHS survey carried out in 1992.

    The primary objectives of the ZDHS are: - To collect up-to-date information on fertility, infant and child mortality and family planning; - To collect information on health-related matters such as breastfeeding, antenatal care, children's immunisations and childhood diseases; - To assess the nutritional status of mothers and children; - To support dissemination and utilisation of the results in planning, managing and improving family planning and health services in the country; and - To enhance the survey capabilities of the institutions involved in order to facilitate the implementation of surveys of this type in the future.

    SUMMARY OF FINDINGS

    FERTILITY

    • Fertility Trends. The 1996 ZDHS survey results indicate that the level of fertility in Zambia is continuing to decline.
    • Fertility Differentials. Some women are apparently leading the fertility decline. Moreover, women who have received some secondary education have the lowest level of fertility.
    • Age at First Birth. Childbearing begins early in Zambia, with over one-third of women becoming mothers by the time they reach age 18 and around two-thirds having had a child by the time they reach age 20.
    • Birth Intervals. The majority of Zambian children (81 percent) are born after a "safe" birth interval (24 or more months apart), with 36 percent born at least 36 months after a prior birth. Nevertheless, 19 percent of non-first births occur less than 24 months after the preceding birth. The overall median birth interval is 32 months.
    • Fertility Preferences. Survey data indicate that there is a strong desire for children and a preference for large families in Zambian society.
    • Unplanned Fertility. Despite the increasing level of contraceptive use, ZDHS data indicate that unplanned pregnancies are still common.

    FAMILY PLANNING

    • Increasing Use of Contraception. The contraceptive prevalence rate in Zambia has increased significantly over the past five years, rising from 15 percent in 1992 to 26 percent in 1996.
    • Differentials in Family Planning Use. Differentials in current use of family planning by province are large.
    • Source of Contraception. Six in ten users obtain their methods from public sources, while 24 percent use non-governmental medical sources, and shops and friends account for the remaining 13 percent. Government health centres (41 percent) and government hospitals (16 percent) are the most common sources of contraceptive methods.
    • Knowledge of Contraceptive Methods. Knowledge of contraceptive methods is nearly universal, with 96 percent of all women and men knowing at least one method of family planning.
    • Family Planning Messages. One reason for the increase in level of contraceptive awareness is that family planning messages are prevalent.
    • Unmet Need for Family Planning. ZDHS data show that there is a considerable unmet need for family planning services in Zambia.

    MATERNAL AND CHILD HEALTH

    • Maternal Health Care. ZDHS data show some encouraging results regarding maternal health care, as well as some areas in which improvements could be made. Results show that most Zambian mothers receive antenatal care, 3 percent from a doctor and 93 percent from a nurse or trained midwife.
    • High Childhood Mortality. One of the more disturbing findings from the survey is that child survival has not improved over the past few years.
    • Childhood Vaccination Coverage. Vaccination coverage against the most common childhood illnesses has increased recently.
    • Childhood Health. ZDHS data indicate that Zambian mothers are reasonably well-informed about childhood illnesses and that a high proportion are treated appropriately.
    • Breastfeeding Practices. The ZDHS results indicate that breastfeeding is almost universally practised in Zambia, with a median duration of 20 months.
    • Knowledge and Behaviour Regarding AIDS. Survey results indicate that virtually all respondents had heard of AIDS. Common sources of information were friends/relatives, the radio, and health workers. The vast majority of respondents--80 percent of women and 94 percent of men--say they have changed their behaviour in order to avoid contracting AIDS, mostly by restricting themselves to one sexual partner.

    Geographic coverage

    The 1996 Zambia Demographic and Health Survey (ZDHS) is a nationally representative survey. The sample was designed to produce reliable estimates for the country as a whole, for the urban and the rural areas separately, and for each of the nine provinces in the country.

    Analysis unit

    • Household
    • women age 15-49
    • Men age 15-59
    • Children under five years

    Universe

    The survey covered all de jure household members (usual residents): all women of reproductive age (15-49 years) in the total sample of households, men aged 15-59, and children under age 5 resident in the household.

    Kind of data

    Sample survey data

    Sampling procedure

    The 1996 ZDHS covered the population residing in private households in the country. The design for the ZDHS called for a representative probability sample of approximately 8,000 completed individual interviews with women between the ages of 15 and 49. It is designed principally to produce reliable estimates for the country as a whole, for the urban and the rural areas separately, and for each of the nine provinces in the country. In addition to the sample of women, a sub-sample of about 2,000 men between the ages of 15 and 59 was also designed and selected to allow for the study of AIDS knowledge and other topics.

    SAMPLING FRAME

    Zambia is divided administratively into nine provinces and 57 districts. For the Census of Population, Housing and Agriculture of 1990, the whole country was demarcated into census supervisory areas (CSAs). Each CSA was in turn divided into standard enumeration areas (SEAs) of approximately equal size. For the 1992 ZDHS, this frame of about 4,200 CSAs and their corresponding SEAs served as the sampling frame. The measure of size was the number of households obtained during a quick count operation carried out in 1987. These same CSAs and SEAs were later updated with new measures of size which are the actual numbers of households and population figures obtained in the census. The sample for the 1996 ZDHS was selected from this updated CSA and SEA frame.

    CHARACTERISTICS OF THE SAMPLE

    The sample for ZDHS was selected in three stages. At the first stage, 312 primary sampling units corresponding to the CSAs were selected from the frame of CSAs with probability proportional to size, the size being the number of households obtained from the 1990 census. At the second stage, one SEA was selected, again with probability proportional to size, within each selected CSA. An updating of the maps as well as a complete listing of the households in the selected SEAs was carried out. The list of households obtained was used as the frame for the third-stage sampling in which households were selected for interview. Women between the ages of 15 and 49 were identified in these households and interviewed. Men between the ages of 15 and 59 were also interviewed, but only in one-fourth of the households selected for the women's survey.

    SAMPLE ALLOCATION

    The provinces, stratified by urban and rural areas, were the sampling strata. There were thus 18 strata. The proportional allocation would result in a completely self-weighting sample but would not allow for reliable estimates for at least three of the nine provinces, namely Luapula, North-Western and Western. Results of other demographic and health surveys show that a minimum sample of 800-1,000 women is required in order to obtain estimates of fertility and childhood mortality rates at an acceptable level of sampling errors. It was decided to allocate a sample of 1,000 women to each of the three largest provinces, and a sample of 800 women to the two smallest provinces. The remaining provinces got samples of 850 women. Within each province, the sample was distributed approximately proportionally to the urban and rural areas.

    STRATIFICATION AND SYSTEMATIC SELECTION OF CLUSTERS

    A cluster is the ultimate area unit retained in the survey. In the 1992 ZDHS and the 1996 ZDHS, the cluster corresponds exactly to an SEA selected from the CSA that contains it. In order to decrease sampling errors of comparisons over time between 1992 and 1996, it was decided that as many as possible of the 1992 clusters be retained. After carefully examining the 262 CSAs that were included in the 1992 ZDHS, locating them in the updated frame and verifying their SEA composition, it was decided to retain 213 CSAs (and their corresponding SEAs). This amounted to almost 70 percent of the new sample. Only 99 new CSAs and their corresponding SEAs were selected.

    As in the 1992 ZDHS, stratification of the CSAs was only geographic. In each stratum, the CSAs were listed by districts ordered geographically. The procedure for selecting CSAs in each stratum consisted of: (1) calculating the sampling interval I for the stratum; (2) calculating the cumulated size of each CSA; (3) calculating the series of sampling numbers R, R+I, R+2I, ..., R+(a-1)I, where R is a random number between 1 and I and a is the number of CSAs to be selected in the stratum; (4) comparing each sampling number with the cumulated sizes, so that each CSA whose cumulated size range contains a sampling number is selected.
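    As an illustration of steps (1)-(4) (not the original survey code), a minimal R sketch of systematic selection with probability proportional to size:

    ```r
    # Illustrative sketch (not the original survey code) of systematic PPS
    # selection; sizes are household counts, a is the number of CSAs to select.
    # Assumes no single CSA is larger than the sampling interval.
    select_pps <- function(sizes, a) {
      I <- sum(sizes) / a                 # (1) sampling interval
      cum <- cumsum(sizes)                # (2) cumulated sizes
      R <- runif(1, min = 1, max = I)     # (3) random start between 1 and I
      targets <- R + I * (0:(a - 1))      #     sampling numbers R, R+I, ..., R+(a-1)I
      findInterval(targets, cum) + 1      # (4) first CSA whose cumulated size reaches each target
    }

    set.seed(1996)
    sizes <- c(120, 340, 80, 560, 210, 90, 400)  # hypothetical CSA household counts
    select_pps(sizes, a = 3)
    ```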

    The reasons for not

  7. Data from: Code and Data Schimmelradar manuscript 1.1

    • data-staging.niaid.nih.gov
    Updated Apr 3, 2025
    Cite
    Kortenbosch, Hylke (2025). Code and Data Schimmelradar manuscript 1.1 [Dataset]. https://data-staging.niaid.nih.gov/resources?id=zenodo_14851614
    Dataset updated
    Apr 3, 2025
    Dataset provided by
    Wageningen University & Research
    Authors
    Kortenbosch, Hylke
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Read me – Schimmelradar manuscript

    The code in this repository was written to analyse the data and generate figures for the manuscript “Land use drives spatial structure of drug resistance in a fungal pathogen”.

    This repository consists of two original .csv raw data files, two .tif files that are minimally reformatted after being downloaded from LGN.nl and www.pdok.nl/introductie/-/article/basisregistratie-gewaspercelen-brp-, and nine scripts written in R. The remaining files are intermediate .tif and .csv files that skip the more computationally heavy steps of the analysis and facilitate the reproduction of the analysis.

    Data files:

    Schimmelradar_360_submission.csv: The raw phenotypic resistance spatial data from the air samples.

    • Sample: an arbitrary sample code given to each of the participants

    • Area: A random number assigned to each of the 100 areas the Netherlands was split up into to facilitate an even spread of samples across the country during participant selection.

    • Logistics status: Variable used to indicate whether the sample was returned in good order, not otherwise used in the analysis.

    • Arrived back on: The date by which the sample arrived back at Wageningen University

    • Quality seals: quality of the seals upon sample return, only samples of a quality designated as good seals were processed. (also see Supplement file – section A).

    • Start sampling: The date on which the trap was deployed and the stickers exposed to the air, recorded by the participant

    • End sampling: The date on which the trap was taken down and the stickers were re-covered and no longer exposed to the air, recorded by the participant

    • 3 back in area?: Binary indicating whether at least three samples have been returned in the respective area (see Area)

    • Batch: The date on which processing of the sample was started. To be more specific, the date at which Flamingo medium was poured over the seals of the sample and incubation was started.

    • Lab processing: Binary indicating completion of lab processing

    • Tot ITR: A. fumigatus CFU count in the permissive layer of the itraconazole-treated plate

    • RES ITR: CFU count of colonies that had breached the surface of the itraconazole-treated layer after incubation and were visually (with the unaided eye) sporulating.

    • RF ITR: The itraconazole (~4 mg/L) resistance fraction = RES ITR/Tot ITR

    • Muccor ITR: Indication of the presence of Mucorales spp. growth on the itraconazole treatment plate

    • Tot VOR: A. fumigatus CFU count in the permissive layer of the voriconazole-treated plate

    • RES VOR: CFU count of colonies that had breached the surface of the voriconazole-treated layer after incubation and were visually (with the unaided eye) sporulating.

    • RF VOR: The voriconazole (~2 mg/L) resistance fraction = RES VOR/Tot VOR

    • Muccor VOR: Indication of the presence of Mucorales spp. growth on the voriconazole treatment plate

    • Tot CON: CFU count on the untreated growth control plate

    • Note: a note on the sample based on either information given by the participant or observations in the lab. The exclude label was given if the sample had either too few (<25) or too many (>300) CFUs on one or more of the plates (also see Supplement file – section A).

    • Lat: Exact latitude of the address where the sample was taken. Not used in the published version of the code and hidden for privacy reasons.

    • Long: Exact longitude of the address where the sample was taken. Not used in the published version of the code and hidden for privacy reasons.

    • Round_Lat: Rounded latitude of the address where the sample was taken. Rounded down to two decimals (the equivalent of a 1 km2 area), so they could not be linked to a specific address. Used in the published version of the code.

    • Round_Long: Rounded longitude of the address where the sample was taken. Rounded down to two decimals (the equivalent of a 1 km2 area), so they could not be linked to a specific address. Used in the published version of the code.
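    As a minimal sketch (not one of the repository's nine scripts), the two resistance fractions can be recomputed directly from these columns, assuming the CSV headers match the names listed above:

    ```r
    # Minimal sketch (not a repository script): recompute the resistance
    # fractions, assuming the CSV headers match the column names listed above.
    radar <- read.csv("Schimmelradar_360_submission.csv", check.names = FALSE)

    rf_itr <- radar[["RES ITR"]] / radar[["Tot ITR"]]   # itraconazole RF
    rf_vor <- radar[["RES VOR"]] / radar[["Tot VOR"]]   # voriconazole RF

    # should match the stored RF columns
    all.equal(rf_itr, radar[["RF ITR"]])
    all.equal(rf_vor, radar[["RF VOR"]])
    ```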

    Analysis_genotypic_schimmelradar_TR_types.csv: The genotype data inferred from gel electrophoresis for all resistant isolates

    • TR type: Indicates the length of the tandem repeats in bp, as judged from a gel. 34 bp, 46 bp, or multiples of 46.

    • Plate: 96-well plate on which the isolate was cultured

    • 96-well: well in which the isolate was cultured

    • Azole: Azole on which the isolate was grown and resistant to. Itraconazole (ITRA) or Voriconazole (VORI).

    • Sample: The air sample the isolate was taken from, corresponds to “Sample” in “Schimmelradar_360_submission.csv”.

    • Strata: The number that equates to “Area” in “Schimmelradar_360_submission.csv”.

    • WT: A binary that indicates whether an isolate had a wildtype cyp51a promotor.

    • TR34: A binary that indicates whether an isolate had a TR34 cyp51a promotor.

    • TR46: A binary that indicates whether an isolate had a TR46 cyp51a promotor.

    • TR46_3: A binary that indicates whether an isolate had a TR46*3 cyp51a promotor.

    • TR46_4: A binary that indicates whether an isolate had a TR46*4 cyp51a promotor.

    Script 1 - generation_100_equisized_areas_NL

    NOTE: Running this code is not necessary for the other analyses; it was used primarily for sample selection. The area distribution was used during the analysis in script 2B, yet each sample is already linked to an area in “Schimmelradar_360_submission.csv”. This script was written to generate a spatial polygons data frame of 100 equisized areas of the Netherlands. The registrations for the citizen science project Schimmelradar were binned into these areas to facilitate a relatively even distribution of samples throughout the country, which can be seen in Figure S1. The spatial polygons data frame can be opened and displayed in open-source software such as QGIS. The package “spcosa” used to generate the areas has rJava as a dependency, so Java must be installed to run this script. The R script uses a shapefile of the Netherlands from the tmap package to generate the areas within the Netherlands. Generating a similar distribution for other countries will require different shape files!

    Script 2 - Spatial_data_integration_fungalradar

    This script produces four data files that describe land use in the Netherlands: the three focal .RData files with land use and resistant/colony counts, as well as the “Predictor_raster_NL.tif” land use file.

    In this script, both the phenotypic and genotypic resistance spatial data from the air samples taken during the Fungal radar citizen science project are integrated with the land use and weather data used to model them. It is not recommended to run this code, because the data extraction is fairly computationally demanding and it does not itself contain key statistical analyses. Rather, it is used to generate the objects used for modelling and spatial predictions, which are also included in this repository.

    The phenotypic resistance is summarised in Table 1, which is generated in this script. Subsequently, the spatial data from the LGN22 and BRP datasets are integrated into the data. These datasets can be loaded from the "LGN2022.tif" and "Gewas22rast.tiff" raster files, respectively. Links to the webpages where these files can be downloaded can be found in the code.

    Once the raster files are loaded, the code generates heatmaps and calculates the proportions of all the land use classes in both a 5 km and a 10 km radius around every sample and across the country to make spatial predictions. Only the 10 km radius data are used in the later analysis; the 5 km radius was generated during an earlier stage of the analyses to test whether that radius would be more appropriate, and was left in for completeness. For documentation of the LGN22 data set, we refer to https://lgn.nl/documentatie, and for BRP to https://nationaalgeoregister.nl/geonetwork/srv/dut/catalog.search#/metadata/44e6d4d3-8fc5-47d6-8712-33dd6d244eef; both of these online resources are in Dutch but can be readily translated. A list of the variables that were included from these datasets during model selection can be found in Table S3. Alongside land-use data, the code extracts weather data from data files that can be downloaded from https://cds.climate.copernicus.eu/datasets/sis-agrometeorological-indicators?tab=download for the Netherlands during the sampling window; dates and dimensions are listed within the code. The Weather_schimmelradar folder contains four subfolders, one for each weather variable that was considered during modelling: temperature, wind speed, precipitation and humidity. Each of these subfolders contains 44 .nc files that each cover the daily mean of the respective weather variable across the Netherlands for each of the 44 days of the sampling window the citizen scientists were given.
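    As an illustrative sketch (not the repository's Script 2 itself), the 10 km land-use proportions could be computed with the terra package, reusing the radar data frame from the sketch above; the raster file name follows the description.

    ```r
    # Illustrative sketch (not Script 2 itself): proportion of each land use
    # class within a 10 km radius of every sample, using the terra package.
    library(terra)

    lgn <- rast("LGN2022.tif")  # LGN22 land use raster (projected, metre units)

    pts <- vect(radar, geom = c("Round_Long", "Round_Lat"), crs = "EPSG:4326")
    pts <- project(pts, crs(lgn))

    buf  <- buffer(pts, width = 10000)   # 10 km radius, in metres
    vals <- extract(lgn, buf)            # one row per raster cell per buffer

    # rows = samples (buffer ID), columns = land use classes
    prop_by_sample <- prop.table(table(vals$ID, vals[[2]]), margin = 1)
    ```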

    All spatial objects (weather + land use) are eventually merged into one predictor raster, "Predictor_raster_NL.tif". The land use fractions and weather data are subsequently integrated with the air sample data into a single spatial data frame, along with the resistance data, and saved into an R object, "Schimmelradar360spat_focal.RData". The script concludes by merging the cyp51a haplotype data with this object as well, to create two different objects: "Schimmelradar360spat_focal_TR_VORI.RData" for the haplotype data of the voriconazole-resistant isolates and "Schimmelradar360spat_focal_TR_ITRA.RData" including the haplotype data of the itraconazole-resistant isolates. These two datasets are modelled separately in scripts 5 and 9 and in scripts 6 and 8, respectively. This final section of the script also generates summary table S2, which summarises the frequency of the cyp51a haplotypes per azole treatment.

    If the relevant objects are loaded

  8. Market Basket Analysis

    • kaggle.com
    zip
    Updated Dec 9, 2021
    Cite
    Aslan Ahmedov (2021). Market Basket Analysis [Dataset]. https://www.kaggle.com/datasets/aslanahmedov/market-basket-analysis
    zip (23875170 bytes)
    Dataset updated
    Dec 9, 2021
    Authors
    Aslan Ahmedov
    Description

    Market Basket Analysis

    Market basket analysis with Apriori algorithm

    The retailer wants to target customers with suggestions for the itemsets they are most likely to purchase. I was given a retailer's dataset; the transaction data covers all the transactions that happened over a period of time. The retailer will use the results to grow in his industry and to provide customers with suggestions on itemsets; this way we are able to increase customer engagement, improve the customer experience, and identify customer behaviour. I will solve this problem using association rules, a type of unsupervised learning technique that checks for the dependency of one data item on another data item.

    Introduction

    Association rule mining is most useful when you are planning to find associations between different objects in a set. It works well when you are looking for frequent patterns in a transaction database: it can tell you what items customers frequently buy together, and it allows the retailer to identify relationships between the items.

    An Example of Association Rules

    Assume there are 100 customers: 10 of them bought a computer mouse, 9 bought a mouse mat, and 8 bought both. For the rule "bought Computer Mouse => bought Mouse Mat": - support = P(Mouse & Mat) = 8/100 = 0.08 - confidence = support / P(Computer Mouse) = 0.08/0.10 = 0.8 - lift = confidence / P(Mouse Mat) = 0.8/0.09 ≈ 8.9. This is just a simple example. In practice, a rule needs the support of several hundred transactions before it can be considered statistically significant, and datasets often contain thousands or millions of transactions.

    Strategy

    • Data Import
    • Data Understanding and Exploration
    • Transformation of the data – so that it is ready to be consumed by the association rules algorithm
    • Running association rules
    • Exploring the rules generated
    • Filtering the generated rules
    • Visualization of Rule

    Dataset Description

    • File name: Assignment-1_Data
    • List name: retaildata
    • File format: .xlsx
    • Number of Rows: 522065
    • Number of Attributes: 7

      • BillNo: 6-digit number assigned to each transaction. Nominal.
      • Itemname: Product name. Nominal.
      • Quantity: The quantities of each product per transaction. Numeric.
      • Date: The day and time when each transaction was generated. Numeric.
      • Price: Product price. Numeric.
      • CustomerID: 5-digit number assigned to each customer. Nominal.
      • Country: Name of the country where each customer resides. Nominal.


    Libraries in R

    First, we need to load the required libraries. Below I briefly describe each library.

    • arules - Provides the infrastructure for representing, manipulating and analyzing transaction data and patterns (frequent itemsets and association rules).
    • arulesViz - Extends package 'arules' with various visualization techniques for association rules and item-sets. The package also includes several interactive visualizations for rule exploration.
    • tidyverse - An opinionated collection of R packages designed for data science; it makes it easy to install and load multiple 'tidyverse' packages in a single step.
    • readxl - Read Excel Files in R.
    • plyr - Tools for Splitting, Applying and Combining Data.
    • ggplot2 - A system for 'declaratively' creating graphics, based on "The Grammar of Graphics". You provide the data, tell 'ggplot2' how to map variables to aesthetics, what graphical primitives to use, and it takes care of the details.
    • knitr - Dynamic Report generation in R.
    • magrittr- Provides a mechanism for chaining commands with a new forward-pipe operator, %>%. This operator will forward a value, or the result of an expression, into the next function call/expression. There is flexible support for the type of right-hand side expressions.
    • dplyr - A fast, consistent tool for working with data frame like objects, both in memory and out of memory.
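    As a sketch of the strategy above (not the author's exact code): the file name follows the dataset description, and the column names BillNo and Itemname are those listed under the attributes.

    ```r
    # Sketch of the association-rules pipeline described above; the file and
    # column names (BillNo, Itemname) follow the dataset description.
    library(readxl)
    library(arules)
    library(arulesViz)

    retaildata <- read_excel("Assignment-1_Data.xlsx")
    retaildata <- retaildata[!is.na(retaildata$Itemname) & !is.na(retaildata$BillNo), ]

    # one character vector of (unique) item names per invoice
    items_by_bill <- lapply(split(retaildata$Itemname, retaildata$BillNo), unique)
    trans <- as(items_by_bill, "transactions")

    # run the Apriori algorithm with minimum support and confidence thresholds
    rules <- apriori(trans, parameter = list(supp = 0.001, conf = 0.8))

    # strongest rules by lift, then a network view of the top 20
    inspect(head(sort(rules, by = "lift"), 10))
    plot(head(sort(rules, by = "lift"), 20), method = "graph")
    ```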


    Data Pre-processing

    Next, we need to load Assignment-1_Data.xlsx into R to read the dataset. Now we can see our data in R.


    Next we will clean our data frame and remove missing values.


    To apply association rule mining, we need to convert the data frame into transaction data so that all items bought together in one invoice will be in ...

