53 datasets found
  1. Data Set S5 - Splice interval table for the corrected revised composite...

    • doi.pangaea.de
    html, tsv
    Updated Sep 2, 2019
    Cite
    Carlotta Cappelli; Paul R Bown; Thomas Westerhold; Yuhji Yamamoto; Claudia Agnini; Steven M Bohaty; Martina de Riu; Veronica Lobba (2019). Data Set S5 - Splice interval table for the corrected revised composite depth scale (crmcd) for Site 342-U1408 [Dataset]. http://doi.org/10.1594/PANGAEA.905419
    Available download formats: tsv, html
    Dataset updated
    Sep 2, 2019
    Dataset provided by
    PANGAEA
    Authors
    Carlotta Cappelli; Paul R Bown; Thomas Westerhold; Yuhji Yamamoto; Claudia Agnini; Steven M Bohaty; Martina de Riu; Veronica Lobba
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Area covered
    Variables measured
    Tie point, Reference/source, Sample code/label, DEPTH, sediment/rock, Depth, composite revised, corrected
    Description

    This dataset is about: Data Set S5 - Splice interval table for the corrected revised composite depth scale (crmcd) for Site 342-U1408. Please consult parent dataset @ https://doi.org/10.1594/PANGAEA.905432 for more information.

  2. Additive Hazards Regression Analysis of Massive Interval-Censored Data via...

    • tandf.figshare.com
    pdf
    Updated May 12, 2025
    Cite
    Peiyao Huang; Shuwei Li; Xinyuan Song (2025). Additive Hazards Regression Analysis of Massive Interval-Censored Data via Data Splitting [Dataset]. http://doi.org/10.6084/m9.figshare.27103243.v1
    Available download formats: pdf
    Dataset updated
    May 12, 2025
    Dataset provided by
    Taylor & Francis
    Authors
    Peiyao Huang; Shuwei Li; Xinyuan Song
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    With the rapid development of data acquisition and storage, massive datasets with large sample sizes are emerging at an increasing rate, making more advanced statistical tools urgently needed. To accommodate such volume in the analysis, a variety of methods have been proposed for complete or right-censored survival data. However, existing big data methodology has not attended to interval-censored outcomes, which are ubiquitous in cross-sectional or periodic follow-up studies. In this work, we propose an easily implemented divide-and-combine approach for analyzing massive interval-censored survival data under the additive hazards model. We establish the asymptotic properties of the proposed estimator, including consistency and asymptotic normality. In addition, the divide-and-combine estimator is shown to be asymptotically equivalent to the full-data-based estimator obtained by analyzing all the data together. Simulation studies suggest that, relative to the full-data-based approach, the proposed divide-and-combine approach has a desirable advantage in computation time, making it more applicable to large-scale data analysis. An application to a set of interval-censored data also demonstrates the practical utility of the proposed method.
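
    A minimal Python sketch of the generic divide-and-combine scheme the abstract describes (not the authors' implementation): split the data into K blocks, fit the same model to each block, and average the block-level estimates. Here fit_block is a hypothetical placeholder for a single-block additive hazards fit.

      import numpy as np

      def divide_and_combine(data, K, fit_block):
          # data: array of shape (n, p); fit_block: callable returning a
          # (beta_hat, cov_hat) pair for one block of rows (hypothetical).
          # Averaging K roughly independent block estimates divides the
          # average block covariance by K.
          blocks = np.array_split(data, K)
          betas, covs = zip(*(fit_block(b) for b in blocks))
          beta_dc = np.mean(betas, axis=0)
          cov_dc = np.mean(covs, axis=0) / K
          return beta_dc, cov_dc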

  3. Data from: A randomized controlled trial of positive outcome expectancies...

    • catalog.data.gov
    • datasets.ai
    Updated Apr 21, 2025
    Cite
    Agricultural Research Service (2025). Data from: A randomized controlled trial of positive outcome expectancies during high-intensity interval training in inactive adults [Dataset]. https://catalog.data.gov/dataset/data-from-a-randomized-controlled-trial-of-positive-outcome-expectancies-during-high-inten-9219d
    Dataset updated
    Apr 21, 2025
    Dataset provided by
    Agricultural Research Service
    Description

    Includes accelerometer data using an ActiGraph to assess usual sedentary, moderate, vigorous, and very vigorous activity at baseline, 6 weeks, and 10 weeks. Includes relative reinforcing value (RRV) data showing how participants rated how much they would want to perform both physical and sedentary activities on a scale of 1-10 at baseline, week 6, and week 10. Includes data on the breakpoint, or Pmax, of the RRV, which was the last schedule of reinforcement (i.e. 4, 8, 16, …) completed for the behavior (exercise or sedentary). For both Pmax and RRV score, greater scores indicated a greater reinforcing value, with scores exceeding 1.0 indicating increased exercise reinforcement. Includes questionnaire data regarding preference and tolerance for exercise intensity using the Preference for and Tolerance of the Intensity of Exercise Questionnaire (PRETIE-Q) and positive and negative outcome expectancy of exercise using the Outcome Expectancy Scale (OES). Includes data on height, weight, and BMI. Includes demographic data such as gender and race/ethnicity.

    Resources in this dataset:
    Resource Title: Actigraph activity data. File Name: AGData.csv. Resource Description: Includes data from the ActiGraph accelerometer for each participant at baseline, 6 weeks, and 10 weeks.
    Resource Title: RRV Data. File Name: RRVData.csv. Resource Description: Includes data from the RRV at baseline, 6 weeks, and 10 weeks, OES survey data, PRETIE-Q survey data, and demographic data (gender, weight, height, race, ethnicity, and age).

  4. Dataset used in the paper: "Scaling laws and dynamics of hashtags on...

    • data.niaid.nih.gov
    Updated Apr 27, 2020
    Cite
    Hongjia H. Chen (2020). Dataset used in the paper: "Scaling laws and dynamics of hashtags on Twitter" [Dataset]. https://data.niaid.nih.gov/resources?id=ZENODO_3673743
    Dataset updated
    Apr 27, 2020
    Dataset provided by
    Tristram J. Alexander
    Hongjia H. Chen
    Eduardo G. Altmann
    Diego F. M.Oliveira
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    This dataset was used in the manuscript "Scaling laws and dynamics of hashtags on Twitter".

    The Twitter data was obtained from a sample of 10% of all public tweets, provided by the Twitter streaming application programming interface. We extracted the hashtags from each tweet and counted how many times they were used in different time intervals. Time intervals of three different lengths were used: days, hours, and minutes. The tweets were published between November 1st 2015 and November 30th 2016, but not all time intervals between these dates are available.

    Each of the four files in this dataset corresponds to one folder (archived using tar). Each folder contains .csv files compressed using gzip. The contents of the .csv files in each folder are:

    hashtags_frequency_day.tar Counts of hashtags in each day. The name of each file in the folder indicates the date (GMT). The entries in each file are the hashtag and the count in the interval.

    hashtags_frequency_hour.tar Counts of hashtags in each hour. The name of each file in the folder indicates the date (GMT). The entries in each file are the hashtag and the count in the interval.

    hashtags_frequency_minutes.tar Counts of hashtags in each minute. The name of each file in the folder indicates the date (GMT, only a fraction of all days is available). The entries in each file are the hashtag and the count in the interval.

    number_of_tweets.tar Counts of the number of tweets in each minute. The name of each file in the folder indicates the day. The entries in each file are the minute in the day (GMT) and count of tweets in our dataset.
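
    A minimal Python sketch for aggregating counts from one of these archives, assuming each tar member is a gzip-compressed CSV with one hashtag,count pair per row (the exact delimiter inside the files is an assumption):

      import csv, gzip, io, tarfile

      counts = {}
      with tarfile.open("hashtags_frequency_day.tar") as tar:
          for member in tar.getmembers():
              if not member.isfile():
                  continue
              # each member is a gzip-compressed .csv for one day
              text = gzip.decompress(tar.extractfile(member).read()).decode("utf-8")
              for hashtag, count in csv.reader(io.StringIO(text)):
                  counts[hashtag] = counts.get(hashtag, 0) + int(count)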

  5. The banksia plot: a method for visually comparing point estimates and...

    • bridges.monash.edu
    • researchdata.edu.au
    txt
    Updated Oct 15, 2024
    Cite
    Simon Turner; Amalia Karahalios; Elizabeth Korevaar; Joanne E. McKenzie (2024). The banksia plot: a method for visually comparing point estimates and confidence intervals across datasets [Dataset]. http://doi.org/10.26180/25286407.v2
    Available download formats: txt
    Dataset updated
    Oct 15, 2024
    Dataset provided by
    Monash University
    Authors
    Simon Turner; Amalia Karahalios; Elizabeth Korevaar; Joanne E. McKenzie
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    Companion data for the creation of a banksia plot.

    Background: In research evaluating statistical analysis methods, a common aim is to compare point estimates and confidence intervals (CIs) calculated from different analyses. This can be challenging when the outcomes (and their scale ranges) differ across datasets. We therefore developed a plot to facilitate pairwise comparisons of point estimates and confidence intervals from different statistical analyses, both within and across datasets.

    Methods: The plot was developed and refined over the course of an empirical study. To compare results from a variety of different studies, a system of centring and scaling is used. First, the point estimates from reference analyses are centred to zero, and their confidence intervals are scaled to span a range of one. The point estimates and confidence intervals from matching comparator analyses are then adjusted by the same amounts. This enables the relative positions of the point estimates and CI widths to be quickly assessed while maintaining the relative magnitudes of the differences in point estimates and confidence interval widths between the two analyses. Banksia plots can be graphed in a matrix, showing all pairwise comparisons of multiple analyses. In this paper, we show how to create a banksia plot and present two examples: the first relates to an empirical evaluation assessing the difference between various statistical methods across 190 interrupted time series (ITS) datasets with widely varying characteristics, while the second assesses data extraction accuracy by comparing results obtained from analysing original study data (43 ITS studies) with those obtained by four researchers from datasets digitally extracted from graphs in the accompanying manuscripts.

    Results: In the banksia plot of the statistical method comparison, it was clear that there was no difference, on average, in point estimates, and it was straightforward to ascertain which methods resulted in smaller, similar or larger confidence intervals than others. In the banksia plot comparing analyses from digitally extracted data to those from the original data, it was clear that both the point estimates and confidence intervals were all very similar among data extractors and the original data.

    Conclusions: The banksia plot, a graphical representation of centred and scaled confidence intervals, provides a concise summary of comparisons between multiple point estimates and associated CIs in a single graph. Through this visualisation, patterns and trends in the point estimates and confidence intervals can be easily identified. This collection of files allows the user to create the images used in the companion paper and to amend this code to create their own banksia plots using either Stata version 17 or R version 4.3.1.
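
    The centring and scaling described under Methods can be written in a few lines. The Python sketch below is a plain reading of that description (the companion code itself is in Stata and R); the function name and arguments are illustrative, not from the dataset.

      def banksia_transform(ref_est, ref_lo, ref_hi, cmp_est, cmp_lo, cmp_hi):
          # Centre the reference point estimate at zero and scale its CI to
          # span one, then apply the same shift and scale to the comparator.
          shift = ref_est
          scale = ref_hi - ref_lo  # reference CI width
          rescale = lambda x: (x - shift) / scale
          return tuple(rescale(v) for v in
                       (ref_est, ref_lo, ref_hi, cmp_est, cmp_lo, cmp_hi))

      # Example: reference 2.0 (1.5, 2.5) vs comparator 2.2 (1.4, 3.0)
      print(banksia_transform(2.0, 1.5, 2.5, 2.2, 1.4, 3.0))
      # -> (0.0, -0.5, 0.5, 0.2, -0.6, 1.0), up to floating-point noise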

  6. HRV-ACC: a dataset with R-R intervals and accelerometer data for the...

    • zenodo.org
    • data.niaid.nih.gov
    csv, txt, zip
    Updated Aug 9, 2023
    Cite
    Kamil Książek; Wilhelm Masarczyk; Przemysław Głomb; Michał Romaszewski; Iga Stokłosa; Piotr Ścisło; Paweł Dębski; Robert Pudlo; Piotr Gorczyca; Magdalena Piegza (2023). HRV-ACC: a dataset with R-R intervals and accelerometer data for the diagnosis of psychotic disorders using a Polar H10 wearable sensor [Dataset]. http://doi.org/10.5281/zenodo.8171266
    Available download formats: txt, zip, csv
    Dataset updated
    Aug 9, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Kamil Książek; Wilhelm Masarczyk; Przemysław Głomb; Michał Romaszewski; Iga Stokłosa; Piotr Ścisło; Paweł Dębski; Robert Pudlo; Piotr Gorczyca; Magdalena Piegza
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    ABSTRACT

    Diagnosing psychotic disorders, including schizophrenia and bipolar disorder, and in particular objectifying the assessment of symptom severity, remains a problem requiring the attention of researchers. Two measures that can be helpful in patient diagnosis are heart rate variability calculated from the electrocardiographic signal and accelerometer mobility data. The following dataset contains data from 30 psychiatric ward patients with schizophrenia or bipolar disorder and 30 healthy persons. The duration of the measurements for individuals was usually between 1.5 and 2 hours. R-R intervals necessary for heart rate variability calculation were collected simultaneously with accelerometer data using a wearable Polar H10 device. The Positive and Negative Syndrome Scale (PANSS) test was performed for each patient participating in the experiment, and its results are attached to the dataset. Furthermore, the code for loading and preprocessing the data, as well as for statistical analysis, is included in the corresponding GitHub repository.

    BACKGROUND

    Heart rate variability (HRV), calculated from electrocardiographic (ECG) recordings of R-R intervals stemming from the heart's electrical activity, may be used as a biomarker of mental illnesses, including schizophrenia and bipolar disorder (BD) [Benjamin et al]. Variations in R-R interval values correspond to changes in the heart's autonomic regulation [Berntson et al, Stogios et al]. Moreover, the HRV measure reflects the activity of the sympathetic and parasympathetic parts of the autonomic nervous system (ANS) [Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology, Matusik et al]. Patients with psychotic mental disorders show a tendency toward a change in the centrally regulated ANS balance in the direction of less dynamic changes in ANS activity in response to different environmental conditions [Stogios et al]. Greater sympathetic activity relative to parasympathetic activity leads to lower HRV, while higher parasympathetic activity translates to higher HRV. This loss of dynamic response may be an indicator of mental illness. Additional benefits may come from measuring the daily activity of patients using accelerometry, which may be used to register periods of physical activity and inactivity or withdrawal for further correlation with HRV values recorded at the same time.

    EXPERIMENTS

    In our experiment, the participants were 30 psychiatric ward patients with schizophrenia or BD and 30 healthy people. All measurements were performed using a Polar H10 wearable device. The sensor collects ECG recordings and accelerometer data and, additionally, performs detection of R-wave peaks. Participants had to wear the sensor for a given time, typically between 1.5 and 2 hours, although the shortest recording was 70 minutes. During this time, participants could perform any activity starting a few minutes after the beginning of the measurement. Participants were encouraged to undertake physical activity, specifically to take a walk. Because the patients were in the medical ward, they were instructed to take a walk in the corridors at the beginning of the experiment and to repeat the walk 30 minutes and 1 hour after the first walk, with the subsequent walks slightly longer (about 3, 5 and 7 minutes, respectively). We did not remind participants of these instructions or supervise compliance during the experiment, in either the treatment or the control group. Seven persons from the control group did not receive this instruction, and their measurements correspond to freely selected activities with rest periods, although at least three of them performed physical activities during this time. Nevertheless, at the start of the experiment, all participants were asked to rest in a sitting position for 5 minutes. Moreover, for each patient, disease severity was assessed using the PANSS test, and its scores are attached to the dataset.

    The data from sensors were collected using Polar Sensor Logger application [Happonen]. Such extracted measurements were then preprocessed and analyzed using the code prepared by the authors of the experiment. It is publicly available on the GitHub repository [Książek et al].

    First, we performed manual artifact detection to remove abnormal heartbeats due to non-sinus beats and technical issues of the device (e.g. temporary disconnections and inappropriate electrode readings). We also performed anomaly detection using the Daubechies wavelet transform. Nevertheless, the dataset includes the raw data, while the full code necessary to reproduce our anomaly detection approach is available in the repository. Optionally, it is also possible to perform cubic spline data interpolation. After that step, rolling windows of a particular size, with set time intervals between them, are created. Then, a statistical analysis is prepared, e.g. mean HRV calculation using the RMSSD (Root Mean Square of Successive Differences) approach, measuring the relationship between mean HRV and PANSS scores, calculating a mobility coefficient based on accelerometer data, and verifying dependencies between HRV and mobility scores.
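
    As a quick illustration of the RMSSD measure mentioned above (the full analysis code is in the authors' repository [Książek et al]), a minimal Python sketch:

      import numpy as np

      def rmssd(rr_ms):
          # Root Mean Square of Successive Differences of R-R intervals (ms),
          # a standard time-domain HRV measure.
          diffs = np.diff(np.asarray(rr_ms, dtype=float))
          return np.sqrt(np.mean(diffs ** 2))

      # R-R values from the sample rows shown in the data description below
      print(rmssd([651, 632, 618, 621, 661]))  # ~23.3 ms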

    DATA DESCRIPTION

    The structure of the dataset is as follows. One folder, called HRV_anonymized_data, contains values of R-R intervals together with timestamps for each experiment participant. The data were anonymized, i.e. the day of the measurement was removed to prevent identification. Files concerning patients are named treatment_X.csv, where X is the number of the person, while files related to the healthy controls are named control_Y.csv, where Y is the identification number of the person. Furthermore, for visualization purposes, an image of the raw R-R intervals for each participant is provided, named raw_RR_{control,treatment}_N.png, where N is the number of the person from the control/treatment group. The collected data are raw, i.e. from before the anomaly removal; the code enabling reproduction of the anomaly detection stage and removal of suspicious heartbeats is publicly available in the repository [Książek et al]. The structure of the files containing R-R intervals is as follows:

    Phone timestamp     RR-interval [ms]
    12:43:26.538000     651
    12:43:27.189000     632
    12:43:27.821000     618
    12:43:28.439000     621
    12:43:29.060000     661
    ...                 ...

    The first column contains the timestamp at which the distance between two consecutive R peaks was registered. The corresponding R-R interval is given in the second column of the file and is expressed in milliseconds.
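
    A possible way to load one participant's file with pandas, assuming a two-column CSV matching the layout above (the actual delimiter and header text in the files may differ):

      import pandas as pd

      rr = pd.read_csv("HRV_anonymized_data/treatment_1.csv")
      rr.columns = ["timestamp", "rr_ms"]  # "Phone timestamp", "RR-interval [ms]"
      rr["timestamp"] = pd.to_datetime(rr["timestamp"], format="%H:%M:%S.%f")
      print(rr.head())
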
    The second folder, called accelerometer_anonymized_data, contains values of accelerometer data collected at the same time as the R-R intervals. The naming convention is similar to that of the R-R interval data: treatment_X.csv and control_X.csv represent the data coming from persons in the treatment and control groups, respectively, where X is the identification number of the participant. The numbers are exactly the same as for the R-R intervals. The structure of the files with accelerometer recordings is as follows:

    Phone timestamp     X [mg]   Y [mg]   Z [mg]
    13:00:17.196000     -961     -23      182
    13:00:17.205000     -965     -21      181
    13:00:17.215000     -966     -22      187
    13:00:17.225000     -967     -26      193
    13:00:17.235000     -965     -27      191
    ...                 ...      ...      ...

    The first column contains a timestamp, while the next three columns correspond to the currently registered acceleration in the three axes X, Y and Z, in milli-g units.

    We also attached a file with the PANSS test scores (PANSS.csv) for all patients participating in the measurement. The structure of this file is as follows:

    no_of_person   PANSS_P   PANSS_N   PANSS_G   PANSS_total
    1              8         13        22        43
    2              11        7         18        36
    3              14        30        44        88
    4              18        13        27        58
    ...            ...       ...       ...       ...


    The first column contains the identification number of the patient, while the three following columns refer to the PANSS scores related to positive, negative and general symptoms, respectively.

    USAGE NOTES

    All the files necessary to run the HRV and/or accelerometer data analysis are available on the GitHub repository [Książek et al]. HRV data loading, preprocessing (i.e. anomaly detection and removal), as well as the

  7. Data from: Crete. 1:125,000 scale. 50m contour interval

    • abacus.library.ubc.ca
    • borealisdata.ca
    pdf
    Updated Jul 26, 2010
    Cite
    Abacus Data Network (2010). Crete. 1:125,000 scale. 50m contour interval [Dataset]. https://abacus.library.ubc.ca/dataset.xhtml;jsessionid=efce012a732d80ccb30abac749f7?persistentId=hdl%3A11272.1%2FAB2%2FK545J0&version=&q=&fileTypeGroupFacet=%22Document%22&fileAccess=
    Available download formats: pdf (32569240)
    Dataset updated
    Jul 26, 2010
    Dataset provided by
    Abacus Data Network
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0) (https://creativecommons.org/licenses/by-sa/4.0/)
    License information was derived automatically

    Area covered
    Greece, Greece (GR)
    Description

    Crete (Greece) 50m contour interval. Includes roads and places of interest. Coordinate system: GCS European 1950. Projection: Albers Equal Area Conic.

  8. Uncertainty Intervals and Evaluation Metrics for Simulated Streamflow and...

    • catalog.data.gov
    Updated Jul 6, 2024
    Cite
    U.S. Geological Survey (2024). Uncertainty Intervals and Evaluation Metrics for Simulated Streamflow and Runoff from a Continental-Scale Monthly Water Balance Model [Dataset]. https://catalog.data.gov/dataset/uncertainty-intervals-and-evaluation-metrics-for-simulated-streamflow-and-runoff-from-a-co
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Description

    This dataset consists of time series and evaluation metrics (in comma-separated value format [.csv]) which are described in the Bock and others (2018) Advances in Water Resources research article “Quantifying uncertainty in simulated streamflow and runoff from a continental-scale monthly water balance model.” In this paper, uncertainty was quantified in simulated monthly runoff produced by a monthly water balance model for gaged and ungaged locations across the conterminous United States. The compressed folder UI_byGage.zip contains two files. The file UI_byGage.csv contains the monthly time-step uncertainty intervals and measured and simulated time series of streamflow developed at 1,575 streamgages across the conterminous United States (CONUS). The period of record varies by streamgage. The file Met_byGage.csv contains three metrics (coverage ratio, average width index, and interval skill score), which are evaluations of the uncertainty interval at each of the streamgages. The compressed folder RUN_byHRU.zip contains simulated runoff for 109,951 hydrologic response units (HRUs) across the CONUS. Files are organized by nineteen hydrologic regions (NHDPlus, 2010) and available at a monthly time-step from January 1949 through December 2010. The compressed folder UI_byHRU.zip contains uncertainty intervals (rXX_High.csv and rXX_Low.csv) bounding the simulated runoff at the HRUs. The files have naming conventions and formats identical to the files in the RUN_byHRU.zip folder. The file AWI_byHRU.csv is the average width index calculated for each HRU. See Bock and others (2018) for a full description of the data and metrics.
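
    As a rough illustration, a coverage ratio and a mean interval width can be computed from UI_byGage.csv-style columns as below; the exact definitions of the average width index and interval skill score used for Met_byGage.csv are given in Bock and others (2018), so treat these formulas as simplified assumptions:

      import numpy as np

      def coverage_ratio(obs, lo, hi):
          # fraction of observations falling inside the uncertainty interval
          obs, lo, hi = (np.asarray(a, dtype=float) for a in (obs, lo, hi))
          return np.mean((obs >= lo) & (obs <= hi))

      def mean_interval_width(lo, hi):
          # raw mean width; Bock and others (2018) normalize this into an
          # average width index
          return np.mean(np.asarray(hi, dtype=float) - np.asarray(lo, dtype=float))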

  9. Interval censored regression with fixed effects (replication data)

    • jda-test.zbw.eu
    • journaldata.zbw.eu
    .rmd, csv, r, txt
    Updated Jul 22, 2024
    Cite
    Jason Abrevaya; Chris Muris (2024). Interval censored regression with fixed effects (replication data) [Dataset]. https://jda-test.zbw.eu/dataset/interval-censored-regression-with-fixed-effects
    Available download formats: txt (3460), csv (4118642), .rmd (2070), .rmd (3797), .rmd (2506), r (5699)
    Dataset updated
    Jul 22, 2024
    Dataset provided by
    ZBW - Leibniz Informationszentrum Wirtschaft
    Authors
    Jason Abrevaya; Chris Muris
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    This paper considers identification and estimation of a fixed-effects model with an interval-censored dependent variable. In each time period, the researcher observes the interval (with known endpoints) in which the dependent variable lies but not the value of the dependent variable itself. Two versions of the model are considered: a parametric model with logistic errors and a semiparametric model with errors having an unspecified distribution. In both cases, the error disturbances can be heteroskedastic over cross-sectional units as long as they are stationary within a cross-sectional unit; the semiparametric model also allows for serial correlation of the error disturbances. A conditional-logit-type composite likelihood estimator is proposed for the logistic fixed-effects model, and a composite maximum-score-type estimator is proposed for the semiparametric model. In general, the scale of the coefficient parameters is identified by these estimators, meaning that the causal effects of interest are estimated directly in cases where the latent dependent variable is of primary interest (e.g., pure data-coding situations). Monte Carlo simulations and an empirical application to birthweight outcomes illustrate the performance of the parametric estimator.

  10. Supporting data for "Investigating Changes of COVID-19 Epidemiological...

    • datahub.hku.hk
    zip
    Updated Mar 29, 2025
    Cite
    Dongxuan Chen (2025). Supporting data for "Investigating Changes of COVID-19 Epidemiological Parameters from Different Perspectives" [Dataset]. http://doi.org/10.25442/hku.27929508.v1
    Available download formats: zip
    Dataset updated
    Mar 29, 2025
    Dataset provided by
    HKU Data Repository
    Authors
    Dongxuan Chen
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0) (https://creativecommons.org/licenses/by-nc/4.0/)
    License information was derived automatically

    Description

    My PhD thesis, titled "Investigating Changes in COVID-19 Epidemiological Parameters from Different Perspectives", focuses on using anonymized line list data, patient hospitalization data and viral load data to improve the estimation of key epidemiological parameters during the COVID-19 pandemic in Hong Kong. This dataset contains supporting data for reproducibility. It has six subfolders corresponding to six chapters of the thesis (chapters 2, 4, 5, 6, 7, 8) that contain figures and data analyses; each subfolder contains the data and R code for reproducing the figures and other analytical results, accompanied by a README file.

    In chapter 2, I provide an overview of the COVID-19 pandemic in Hong Kong and worldwide, using datasets containing case incidence data and R code to generate the incidence figure. I also conducted a systematic review of latent period estimation; the EndNote library and a spreadsheet of the EndNote output documenting my paper-screening process are included in subfolder dataset chapter 2.

    In chapter 4, I performed detailed statistical analyses of the changing serial interval of COVID-19 in Hong Kong; subfolder dataset chapter 4 contains anonymized transmission pair line list data for estimating the serial interval, together with R code and an essential subset of the data output for reproducibility. The related work is published in the American Journal of Epidemiology; the DOI is given in README chapter4.txt.

    In chapter 5, I developed an inferential framework to infer the generation interval on a temporal scale; subfolder dataset chapter 5 contains publicly available line list data from mainland China, plus R code and an essential subset of the data output for reproducibility. The related work is published in Nature Communications, and the data and code are also available on GitHub; the DOI and GitHub link are in README chapter5.txt.

    In chapter 6, I investigated the superspreading potential and setting-specific generation interval in Hong Kong; subfolder dataset chapter 6 contains simplified and anonymized transmission cluster size information, the R code to reproduce the results, and the R code for model building and the estimation summary of the generation interval estimates.

    In chapter 7, I estimated the latent period of COVID-19 in different settings in Hong Kong; subfolder dataset chapter 7 contains processed and anonymized viral load records and transmission pair information for COVID-19 cases in Hong Kong, the R code to reproduce the results, and two spreadsheets with the estimation summary. Because the analysis involves many R scripts, two subfolders (R and Stan) sit under subfolder dataset chapter 7, and the original GitHub link for the method's code is in README chapter 7.txt.

    In chapter 8, I analyzed the length of stay in hospital of COVID-19 patients in Hong Kong and its potential association with vaccination status. In subfolder dataset chapter 8, I put a simplified and anonymized dataset of patients' hospitalization records with vaccination status and length of stay, together with R code and an essential subset of the data output to reproduce the results.

  11. The Response Scale Transformation Project

    • datacatalogue.cessda.eu
    • ssh.datastations.nl
    Updated Apr 11, 2023
    Cite
    J.J. de Jonge; R. Veenhoven (2023). The Response Scale Transformation Project [Dataset]. http://doi.org/10.17026/dans-zx5-p7pe
    Dataset updated
    Apr 11, 2023
    Dataset provided by
    Erasmus Happiness Economics Research Organisation, Erasmus University Rotterdam
    Authors
    J.J. de Jonge; R. Veenhoven
    Description

    In this project we reviewed existing methods used to homogenize data and developed several new methods for dealing with the diversity in survey questions on the same subject. The project is a spin-off from the World Database of Happiness, the main aim of which is to collate and make available research findings on the subjective enjoyment of life and to prepare these data for research synthesis. The first methods we discuss were proposed in the book ‘Happiness in Nations’ and were used at the inception of the World Database of Happiness. Some 10 years later a new method was introduced: the International Happiness Scale Interval Study (HSIS). Taking the HSIS as a basis, the Continuum Approach was developed. Then, building on this approach, we developed the Reference Distribution Method.

  12. IRV2V Dataset

    • paperswithcode.com
    Updated Oct 9, 2023
    Cite
    (2023). IRV2V Dataset [Dataset]. https://paperswithcode.com/dataset/irv2v
    Dataset updated
    Oct 9, 2023
    Description

    To facilitate research on asynchrony in collaborative perception, we simulate the first collaborative perception dataset with different temporal asynchronies based on CARLA, named IRregular V2V (IRV2V). We set 100 ms as the ideal sampling time interval and simulate various asynchronies in real-world scenarios from two main aspects: i) considering that agents are unsynchronized with the unified global clock, we uniformly sample a time shift $\delta_s\sim \mathcal{U}(-50,50)\text{ms}$ for each agent in the same scene, and ii) considering the trigger noise of the sensors, we uniformly sample a time turbulence $\delta_d\sim \mathcal{U}(-10,10)\text{ms}$ for each sampling timestamp. The final asynchronous time interval between adjacent timestamps is the sum of the time shift and time turbulence. In experiments, we also sample the frame intervals to achieve large-scale and diverse asynchrony. Each scene includes 2 to 5 collaborative agents. Each agent is equipped with 4 cameras with a resolution of 600 $\times$ 800 and a 32-channel LiDAR. The detection range is 281.6 m $\times$ 80 m. This results in 34K images and 8.5K LiDAR sweeps. See more details in the Appendix.
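
    A small numpy sketch of this sampling scheme, under the plain-reading assumption that each agent's timestamps are its ideal 100 ms grid plus a per-agent clock shift and per-sample trigger turbulence:

      import numpy as np

      rng = np.random.default_rng(0)
      ideal = np.arange(20) * 100.0                        # ideal 100 ms grid
      shift = rng.uniform(-50, 50)                         # delta_s: one draw per agent per scene
      turbulence = rng.uniform(-10, 10, size=ideal.shape)  # delta_d: one draw per timestamp
      timestamps = ideal + shift + turbulence
      intervals = np.diff(timestamps)                      # asynchronous intervals around 100 ms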

  13. (Table AT3) Depth scale conversion for the analyzed interval in IODP Hole...

    • doi.pangaea.de
    html, tsv
    Updated Feb 8, 2019
    Cite
    S Morgan; Jenny Inwood; A McGrath; Michael J Norry; S Davies; H Foster (2019). (Table AT3) Depth scale conversion for the analyzed interval in IODP Hole 313-M0029A [Dataset]. http://doi.org/10.1594/PANGAEA.898174
    Available download formats: html, tsv
    Dataset updated
    Feb 8, 2019
    Dataset provided by
    PANGAEA
    Authors
    S Morgan; Jenny Inwood; A McGrath; Michael J Norry; S Davies; H Foster
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Time period covered
    Jun 21, 2009
    Area covered
    Variables measured
    Ratio, Liner Length, Cumulative depth, Sample code/label, Overlap of overlying core, Section Bot in meters below surface, Section Top in meters below surface
    Description

    This dataset is about: (Table AT3) Depth scale conversion for the analyzed interval in IODP Hole 313-M0029A. Please consult parent dataset @ https://doi.org/10.1594/PANGAEA.898245 for more information.

  14. FSL Flood Return Interval

    • catalogue.data.govt.nz
    Cite
    FSL Flood Return Interval - Dataset - data.govt.nz - discover and use data [Dataset]. https://catalogue.data.govt.nz/dataset/fsl-flood-return-interval
    Description

    The New Zealand Fundamental Soil Layer originates from a relational join of features from two databases: the New Zealand Land Resource Inventory (NZLRI) and the National Soils Database (NSD). The NZLRI is a national polygon database of physical land resource information, including a soil unit. Soil is one of an inventory of five physical factors (along with rock, slope, erosion, and vegetation) delineated by physiographic polygons at approximately 1:50,000 scale. The NSD is a point database of soil physical, chemical, and mineralogical characteristics for over 1500 soil profiles nationally. A relational join between the NZLRI dominant soil and derivative tables from the NSD was the means by which 14 important soil attributes were attached to the NZLRI polygons. Some of these attributes originate from exact matches with NSD records, while others derive from matches to similar soils or professional estimates. This layer contains flood return interval attributes. The classes originate from and are described more fully in Webb and Wilson (1995).

  15. Large-scale temporal graph datasets

    • data.niaid.nih.gov
    • explore.openaire.eu
    Updated Feb 2, 2022
    Cite
    Simmhan, Yogesh (2022). Large-scale temporal graph datasets [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_5937375
    Dataset updated
    Feb 2, 2022
    Dataset provided by
    Simmhan, Yogesh
    Baranawal, Animesh
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    Large-scale datasets used for evaluation in the article "Optimizing the Interval-centric Distributed Computing Model for Temporal Graph Algorithms", to appear in EuroSys 2022.

  16. High frequency dataset for event-scale concentration-discharge analysis in a...

    • hydroshare.org
    • search.dataone.org
    zip
    Updated Sep 19, 2024
    Cite
    Andreas Musolff (2024). High frequency dataset for event-scale concentration-discharge analysis in a forested headwater 01/2018-08/2023 [Dataset]. http://doi.org/10.4211/hs.9be43573ba754ec1b3650ce233fc99de
    Available download formats: zip (17.1 MB)
    Dataset updated
    Sep 19, 2024
    Dataset provided by
    HydroShare
    Authors
    Andreas Musolff
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Time period covered
    Jan 1, 2018 - Aug 23, 2023
    Area covered
    Description

    This composite repository contains high-frequency data of discharge, electrical conductivity, nitrate-N, DOC and water temperature obtained in the Rappbode headwater catchment in the Harz mountains, Germany. This catchment was affected by a bark-beetle infestation and forest dieback from 2018 onwards. The data extend previous observations from the same catchment (RB) published as part of Musolff (2020). Details on the catchment can be found in Werner et al. (2019, 2021) and Musolff et al. (2021). The file RB_HF_data_2018_2023.txt states measurements for each timestep using the following columns: "index" (number of observation), "Date.Time" (timestamp in YYYY-MM-DD HH:MM:SS), "WT" (water temperature in degree Celsius), "Q.smooth" (discharge in mm/d smoothed using a moving average), "NO3.smooth" (nitrate concentration in mg N/L smoothed using a moving average), "DOC.smooth" (dissolved organic carbon concentration in mg/L smoothed using a moving average), "EC.smooth" (electrical conductivity in µS/cm smoothed using a moving average); NA - no data.

    Water quality data and discharge was measured at a high-frequency interval of 15 min in the time period between January 2018 and August 2023. Both, NO3-N and DOC were measured using an in-situ UV-VIS probe (s::can spectrolyser, scan Austria). EC was measured using an in-situ probe (CTD Diver, Van Essen Canada). Discharge measurements relied on an established stage-discharge relationship based on water level observations (CTD Diver, Van Essen Canada, see Werner et al. [2019]). Data loggers were maintained every two weeks, including manual cleaning of the UV-VIS probes and grab sampling for subsequent lab analysis, calibration and validation.

    Data preparation included five steps: drift correction, outlier detection, gap filling, calibration and moving averaging:
    - Drift was corrected by distributing the offset between the mean values one hour before and after cleaning over the two-week maintenance interval as exponential growth.
    - Outliers were detected with a two-step procedure. First, values outside a physically plausible range were removed. Second, the Grubbs test was applied to a moving window of 100 values to detect and remove outliers (see the sketch after this list).
    - Data gaps smaller than two hours were filled using cubic spline interpolation.
    - The resulting time series were globally calibrated against the lab-measured concentrations of NO3-N and DOC. EC was calibrated against field values obtained with a handheld WTW probe (WTW Multi 430, Xylem Analytics Germany).
    - Noise in the signal of both discharge and water quality was reduced by a moving average with a window length of 2.5 hours.
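
    A hedged Python sketch of the outlier-screening step, using non-overlapping windows of 100 values for brevity (the authors' exact moving-window and iteration scheme may differ):

      import numpy as np
      from scipy import stats

      def grubbs_outlier(window, alpha=0.05):
          # Index of the most extreme value if it fails the two-sided
          # Grubbs test, else None.
          x = np.asarray(window, dtype=float)
          n = len(x)
          dev = np.abs(x - x.mean())
          g = dev.max() / x.std(ddof=1)
          t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
          g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
          return int(dev.argmax()) if g > g_crit else None

      rng = np.random.default_rng(1)
      series = rng.normal(500.0, 20.0, 10_000)         # synthetic EC-like signal
      series[rng.integers(0, 10_000, 5)] = 800.0       # planted spikes
      series[(series < 0) | (series > 2000)] = np.nan  # step 1: plausible range
      for start in range(0, len(series) - 99, 100):    # step 2: Grubbs per window
          window = series[start:start + 100]
          if np.isnan(window).any():
              continue
          idx = grubbs_outlier(window)
          if idx is not None:
              series[start + idx] = np.nan             # mark for later gap filling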

    References:
    Musolff, A. (2020). High frequency dataset for event-scale concentration-discharge analysis. http://www.hydroshare.org/resource/27c93a3f4ee2467691a1671442e047b8
    Musolff, A., Zhan, Q., Dupas, R., Minaudo, C., Fleckenstein, J. H., Rode, M., Dehaspe, J., & Rinke, K. (2021). Spatial and Temporal Variability in Concentration-Discharge Relationships at the Event Scale. Water Resources Research, 57(10).
    Werner, B. J., A. Musolff, O. J. Lechtenfeld, G. H. de Rooij, M. R. Oosterwoud, and J. H. Fleckenstein (2019), High-frequency measurements explain quantity and quality of dissolved organic carbon mobilization in a headwater catchment, Biogeosciences, 16(22), 4497-4516.
    Werner, B. J., Lechtenfeld, O. J., Musolff, A., de Rooij, G. H., Yang, J., Grundling, R., Werban, U., & Fleckenstein, J. H. (2021). Small-scale topography explains patterns and dynamics of dissolved organic carbon exports from the riparian zone of a temperate, forested catchment. Hydrology and Earth System Sciences, 25(12), 6067-6086.

  17. Data tables of well locations, perforated intervals, and time series of...

    • s.cnmilf.com
    • data.cnra.ca.gov
    Updated Nov 28, 2024
    Cite
    U.S. Geological Survey (2024). Data tables of well locations, perforated intervals, and time series of hydraulic-head observations for the Central Valley Hydrologic Model (CVHM) [Dataset]. https://s.cnmilf.com/user74170196/https/catalog.data.gov/dataset/data-tables-of-well-locations-perforated-intervals-and-time-series-of-hydraulic-head-obser
    Dataset updated
    Nov 28, 2024
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Area covered
    Central Valley
    Description

    This digital dataset defines the well locations, perforated intervals, and time series of hydraulic-head observations used in the calibration of the transient hydrologic model of the Central Valley flow system. The Central Valley encompasses an approximate 50,000 square-kilometer region of California. The complex hydrologic system of the Central Valley is simulated using the U.S. Geological Survey (USGS) numerical modeling code MODFLOW-FMP (Schmid and others, 2006b). This simulation is referred to here as the Central Valley Hydrologic Model (CVHM) (Faunt, 2009). Utilizing MODFLOW-FMP, the CVHM simulates groundwater and surface-water flow, irrigated agriculture, land subsidence, and other key processes in the Central Valley on a monthly basis from 1961-2003. The USGS and CA-DWR maintain databases of key wells in the Central Valley that are web-accessible (http://waterdata.usgs.gov and http://www.water.ca.gov/waterdatalibrary/, respectively). These data were combined to form a database of available water levels throughout the Central Valley from 1961 to 2003. More than 850,000 water-level altitude measurements from more than 21,400 wells have been compiled by the USGS or CA-DWR and have been entered into their respective databases. However, only a small portion of these wells have both sufficient construction information to determine the well-perforation interval and water-level measurements for the simulation period. For model calibration, water-level altitude data were needed that were (1) distributed spatially (both geographically and vertically) throughout the Central Valley; (2) distributed temporally throughout the simulation period (years 1961-2003); and (3) available during both wet and dry climatic regimes. From the available well records, a subset of comparison wells was selected on the basis of perforation depths, completeness of record, climatic intervals, and locations throughout the Central Valley. Water-level altitude observations (19,725) for 206 wells were used as calibration targets during parameter estimation. The CVHM is the most recent regional-scale model of the Central Valley developed by the U.S. Geological Survey (USGS). The CVHM was developed as part of the USGS Groundwater Resources Program (see "Foreword", Chapter A, page iii, for details).

  18. High frequency dataset for event-scale concentration-discharge analysis

    • hydroshare.org
    zip
    Updated Sep 27, 2021
    Cite
    Andreas Musolff (2021). High frequency dataset for event-scale concentration-discharge analysis [Dataset]. http://doi.org/10.4211/hs.27c93a3f4ee2467691a1671442e047b8
    Available download formats: zip (28.4 MB)
    Dataset updated
    Sep 27, 2021
    Dataset provided by
    HydroShare
    Authors
    Andreas Musolff
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Time period covered
    Jan 1, 2013 - Dec 31, 2014
    Area covered
    Description

    This composite repository contains high-frequency data of discharge, electrical conductivity, nitrate-N, spectral absorbance at 254 nm and water temperature obtained in four neighboring catchments in the Harz mountains, Germany. The repository contains four files - one for each catchment (WB - Warme Bode, RB - Rappbode, HS - Hassel, SK - Selke). Details on the catchments can be found here: WB - Kong et al. (2019), RB - Werner et al. (2019), HS and SK - Musolff et al. (2015). Data for the SK catchment are part of the TERENO initiative (https://www.tereno.net/). Each file states measurements for each timestep using the following columns: "index" (number of observation), "Date.Time" (timestamp in YYYY-MM-DD HH:MM:SS), "WT" (water temperature in degree Celsius), "discharge.mm" (discharge in mm/d), "Q.smooth" (discharge in mm/d smoothed using a moving average), "EC.smooth" (electrical conductivity in µS/cm smoothed using a moving average), "NO3.smooth" (NO3-N concentration in mg N/L smoothed using a moving average), "SAC.smooth" (spectral absorbance at 254 nm in 1/m smoothed using a moving average); NA - no data.

    Water quality data and discharge were measured at a high-frequency interval of 15 min in the period between January 2013 and December 2014. Both NO3-N and SAC were measured using in-situ UV-VIS probes (TRIOS ProPS, Trios Germany in WB, HS and SK; s::can spectrolyser, scan Austria in RB). EC was measured using in-situ probes (YSI6800, YSI, USA for WB, HS and SK; CTD Diver, Van Essen Canada for RB). Discharge measurements were provided by the state authority (LHW, 2018) or relied on an established stage-discharge relationship (RB; Werner et al. [2019]). Data loggers were maintained every two weeks, including manual cleaning of the UV-VIS probes and grab sampling for subsequent calibration and validation.

    Data preparation included five steps: drift correction, outlier detection, gap filling, calibration and moving averaging:
    - Drift was corrected by distributing the offset between the mean values one hour before and after cleaning over the two-week maintenance interval as exponential growth.
    - Outliers were detected with a two-step procedure. First, values outside a physically plausible range were removed. Second, the Grubbs test was applied to a moving window of 100 values to detect and remove outliers.
    - Data gaps smaller than two hours were filled using cubic spline interpolation (see the sketch after this list).
    - The resulting time series were globally calibrated against the lab-measured concentrations of NO3-N (all stations) and SAC254 (all stations but SK), using average probe values one hour before and after sampling. EC was calibrated against field values obtained with a handheld WTW probe (WTW Multi 430, Xylem Analytics Germany) for RB, while the YSI probes for WB, HS and SK were regularly calibrated in the field, making later corrections unnecessary.
    - Noise in the signal of both discharge and water quality was reduced by a moving average with windows between 2.5 and 6 hours.
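
    The gap-filling rule (cubic splines, but only for gaps shorter than two hours, i.e. fewer than eight 15-minute steps) could look like the pandas sketch below; this is an illustration under those assumptions, not the authors' code:

      import numpy as np
      import pandas as pd

      def fill_short_gaps(s: pd.Series, max_steps: int = 8) -> pd.Series:
          # s: regularly spaced 15-min series with NaN gaps.
          filled = s.interpolate(method="spline", order=3)
          # measure the length of each NaN run and restore long gaps to NaN
          run_id = s.notna().cumsum()
          run_len = s.isna().groupby(run_id).transform("sum")
          filled[s.isna() & (run_len >= max_steps)] = np.nan
          return filled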

    References:
    Kong, X. Z., Q. Zhan, B. Boehrer, and K. Rinke (2019), High frequency data provide new insights into evaluating and modeling nitrogen retention in reservoirs, Water Res, 166, 115017.
    LHW (2018), Datenportal Gewaesserkundlicher Landesdienst Sachsen-Anhalt (GLD), Landesbetrieb fuer Hochwasserschutz und Wasserwirtschaft Sachsen-Anhalt. Accessed 2018-08-15.
    Musolff, A., C. Schmidt, B. Selle, and J. H. Fleckenstein (2015), Catchment controls on solute export, Advances in Water Resources, 86, 133-146.
    Werner, B. J., A. Musolff, O. J. Lechtenfeld, G. H. de Rooij, M. R. Oosterwoud, and J. H. Fleckenstein (2019), High-frequency measurements explain quantity and quality of dissolved organic carbon mobilization in a headwater catchment, Biogeosciences, 16(22), 4497-4516.

  19. Data from: Urban Traffic Flow Dataset

    • kaggle.com
    Updated Nov 26, 2024
    Cite
    Ziya (2024). Urban Traffic Flow Dataset [Dataset]. https://www.kaggle.com/datasets/ziya07/urban-traffic-flow-dataset
    Available download formats: Croissant (a machine-learning dataset format; see mlcommons.org/croissant)
    Dataset updated
    Nov 26, 2024
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Ziya
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)

    Description

    This dataset is designed for urban traffic flow prediction and includes temporal, spatial, and categorical features essential for analyzing traffic patterns.

    Key Features:
    - Timestamp: Records the exact date and time in 15-minute intervals, enabling the modeling of temporal dependencies.
    - Location: Identifies the traffic sensor locations (e.g., Sensor_01, Sensor_02), capturing spatial variability.
    - Vehicle_Count: The number of vehicles detected by sensors during each interval.
    - Vehicle_Speed: The average speed of vehicles in km/h, indicating traffic conditions.
    - Congestion_Level: An ordinal variable representing traffic congestion on a scale (e.g., 0 for no congestion, 5 for high congestion).
    - Peak_Off_Peak: Categorical data distinguishing between peak and off-peak hours for better contextual analysis.
    - Target_Vehicle_Count: The vehicle count for the subsequent time interval, serving as the target variable for predictive modeling (see the sketch after this list).

    Data Overview: Rows: 200; Columns: 7; Temporal Coverage: 2 days at 15-minute intervals, providing high-resolution data for short-term prediction.
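
    Given these columns, the target can be reconstructed by shifting each sensor's vehicle count one 15-minute step into the future, as in the pandas sketch below (the CSV filename is assumed):

      import pandas as pd

      df = pd.read_csv("urban_traffic_flow.csv", parse_dates=["Timestamp"])
      df = df.sort_values(["Location", "Timestamp"])
      # next-interval vehicle count per sensor becomes the prediction target
      df["Target_Vehicle_Count"] = df.groupby("Location")["Vehicle_Count"].shift(-1)
      df = df.dropna(subset=["Target_Vehicle_Count"])  # last row per sensor has no target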

  20. NZ Active Fault Datasets

    • gwrc-open-data-11-1-gwrc.hub.arcgis.com
    Updated May 6, 2025
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Greater Wellington Regional Council (2025). NZ Active Fault Datasets [Dataset]. https://gwrc-open-data-11-1-gwrc.hub.arcgis.com/maps/4ee736bfda9b4fd99067634d8433612d
    Dataset updated
    May 6, 2025
    Dataset authored and provided by
    Greater Wellington Regional Council
    Area covered
    New Zealand,
    Description

    The active fault data displayed here are from a variety of sources. They include the New Zealand Active Faults Database (NZAFD), which comes in two versions - a 1:250,000 scale version (NZAFD-AF250) and a high-resolution version (NZAFD-HighRes) - and is prepared by the Institute of Geological and Nuclear Sciences Limited (GNS Science). The active fault datasets also include Fault Avoidance Zones (FAZs) and Fault Awareness Areas (FAAs). The NZAFD-AF250 database covers the New Zealand mainland, while the NZAFD-HighRes database, FAZs and FAAs are only available for restricted areas of New Zealand (updated periodically and without prior notification). If the FAZs are used to assist future land-use planning, this should be done in accordance with the Ministry for the Environment guideline "Planning for Development on or Close to Active Faults" (Kerr et al. 2003). The FAAs show where there may be a surface fault rupture hazard, but further work is needed to define a FAZ; it is recommended that this dataset be used in conjunction with the guidelines developed by Barrell et al. (2015).

    The NZAFD is produced by GNS Science and represents the most current mapping of active faults for New Zealand in a single database. The NZAFD can be accessed on the GNS webmap via the link below. It contains two distinct datasets based on scale:
    - The high-resolution (NZAFD-HighRes) dataset (1:10,000 scale or better), designed for portrayal and use at cadastral (property) scale. This is currently only available to view on the GNS webmap for some regions.
    - The generalised (NZAFD-AF250) dataset, designed for portrayal and use at regional scale (1:250,000). This can be viewed and downloaded on the GNS webmap for the entire country.

    Both datasets comprise polylines that represent the location of an active fault trace at or near the surface, at different scales. Each fault trace has attributes that describe its name, sense of movement, displacement, recurrence interval and other parameters.

    The high-resolution dataset group on the GNS webmap also includes two polygon layers derived from the NZAFD:
    - Fault Avoidance Zones, which delineate areas of surface rupture hazard, as defined by the Ministry for the Environment Active Fault Guidelines (Kerr et al. 2003), or modifications thereof.
    - Fault Awareness Areas, which highlight areas where a surface rupture hazard may exist (Barrell et al. 2015) and where more work is needed.
