https://spdx.org/licenses/CC0-1.0.html
Many capture-recapture surveys of wildlife populations operate in continuous time but detections are typically aggregated into occasions for analysis, even when exact detection times are available. This discards information and introduces subjectivity, in the form of decisions about occasion definition. We develop a spatio-temporal Poisson process model for spatially explicit capture-recapture (SECR) surveys that operate continuously and record exact detection times. We show that, except in some special cases (including the case in which detection probability does not change within occasion), temporally aggregated data do not provide sufficient statistics for density and related parameters, and that when detection probability is constant over time our continuous-time (CT) model is equivalent to an existing model based on detection frequencies. We use the model to estimate jaguar density from a camera-trap survey and conduct a simulation study to investigate the properties of a CT estimator and discrete-occasion estimators with various levels of temporal aggregation. This includes investigation of the effect on the estimators of spatio-temporal correlation induced by animal movement. The CT estimator is found to be unbiased and more precise than discrete-occasion estimators based on binary capture data (rather than detection frequencies) when there is no spatio-temporal correlation. It is also found to be only slightly biased when there is correlation induced by animal movement, and to be more robust to inadequate detector spacing, while discrete-occasion estimators with binary data can be sensitive to occasion length, particularly in the presence of inadequate detector spacing. Our model includes as a special case a discrete-occasion estimator based on detection frequencies, and at the same time lays a foundation for the development of more sophisticated CT models and estimators. It allows modelling within-occasion changes in detectability, readily accommodates variation in detector effort, removes subjectivity associated with user-defined occasions, and fully utilises CT data. We identify a need for developing CT methods that incorporate spatio-temporal dependence in detections and see potential for CT models being combined with telemetry-based animal movement models to provide a richer inference framework.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Clinical Endpoints: Continuous Data. WMD, weighted mean difference; CI, confidence interval. *The numerals indicate the total number of cases and controls. †P for heterogeneity < 0.1 was considered significant.
U.S. Government Works https://www.usa.gov/government-works
License information was derived automatically
The DWR continuous groundwater level measurements dataset contains continuous time-series data from automated recorders at sites operated by the Department of Water Resources. Readings are taken at 15-minute to one-hour intervals. Some of the readings are relayed to the California Data Exchange Center. However, most of the monitoring sites are visited once every month or two, when readings are off-loaded from the data recorders and then finalized and published. Wells monitored for this dataset are located within Butte, Colusa, Glenn, Mendocino, Modoc, Sacramento, San Joaquin, Shasta, Siskiyou, Solano, Sutter, Tehama, Yolo, and Yuba Counties.
Water-level measurements are the principal source of information about changes in groundwater storage and movement in a basin, and how these are affected by various forms of recharge (e.g., precipitation, seepage from streams, irrigation return) and discharge (e.g., seepage to streams, groundwater pumping).
Water-level monitoring involves "continuous" or periodic measurements. Continuous monitoring makes use of automatic water-level sensing and recording instruments that are programmed to make scheduled measurements in wells. This provides a high-resolution record of water-level fluctuations. Resulting hydrographs can accurately identify the effects of various stresses on the aquifer system and provide measurements of maximum and minimum water levels in aquifers. Continuous monitoring may be the best technique for tracking fluctuations in groundwater levels during droughts and other critical periods when hydraulic stresses may change at relatively rapid rates, or when real-time data are needed for making water management decisions (see USGS reference).
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Experimental data can broadly be divided into discrete and continuous data. Continuous data are obtained from measurements that are performed as a function of another quantitative variable, e.g., time, length, concentration, or wavelength. The results from these types of experiments are often used to generate plots that visualize the measured variable on a continuous, quantitative scale. To simplify state-of-the-art data visualization and annotation of data from such experiments, an open-source tool was created with R/shiny that does not require coding skills to operate. The freely available web app accepts wide (spreadsheet) and tidy data and offers a range of options to normalize the data. The data from individual objects can be shown in 3 different ways: (1) lines with unique colors, (2) small multiples, and (3) heatmap-style display. In addition, the mean can be displayed with a 95% confidence interval for the visual comparison of different conditions. Several color-blind-friendly palettes are available to label the data and/or statistics. The plots can be annotated with graphical features and/or text to indicate any perturbations that are relevant. All user-defined settings can be stored for reproducibility of the data visualization. The app is dubbed PlotTwist and runs locally or online: https://huygens.science.uva.nl/PlotTwist
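For readers who want to reproduce this style of plot outside the app, the following is a minimal sketch in R using dplyr and ggplot2; it is not the PlotTwist code itself, and the simulated tidy data frame and its column names (Object, Time, Condition, Value) are hypothetical stand-ins.

```r
library(dplyr)
library(ggplot2)

# Simulated tidy data: 6 objects, 21 time points, 2 conditions
set.seed(1)
tidy_df <- expand.grid(Object = 1:6, Time = 0:20, Condition = c("control", "treated"))
tidy_df$Value <- with(tidy_df,
                      ifelse(Condition == "treated", 1 - exp(-Time / 5), 0.2) +
                        rnorm(nrow(tidy_df), sd = 0.1))

# Per-condition mean and 95% confidence interval at each time point
summary_df <- tidy_df %>%
  group_by(Condition, Time) %>%
  summarise(m     = mean(Value),
            sem   = sd(Value) / sqrt(n()),
            lower = m - qt(0.975, n() - 1) * sem,
            upper = m + qt(0.975, n() - 1) * sem,
            .groups = "drop")

ggplot() +
  geom_line(data = tidy_df,
            aes(Time, Value, group = interaction(Object, Condition), colour = Condition),
            alpha = 0.3) +                                    # individual objects
  geom_ribbon(data = summary_df,
              aes(Time, ymin = lower, ymax = upper, fill = Condition), alpha = 0.2) +
  geom_line(data = summary_df, aes(Time, m, colour = Condition), linewidth = 1) +
  theme_minimal()
```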
Recent BOLD-fMRI studies have revealed a spatial distinction between variability- and mean-based between-condition differences, suggesting that BOLD variability could offer complementary and even orthogonal views of brain function relative to traditional activation. However, these findings were mainly observed in block-designed fMRI studies. As block designs may not be appropriate for characterizing the low-frequency dynamics of the BOLD signal, the evidence suggesting a distinction between BOLD variability and mean is less convincing. Based on the high reproducibility of signal variability modulation between continuous eyes-open (EO) and eyes-closed (EC) states, here we employed an EO/EC paradigm and BOLD-fMRI to compare variability- and mean-based EO/EC differences while the subjects were in light. The comparisons were made on both block-designed and continuous EO/EC data. Our results demonstrated that the spatial patterns of variability- and mean-based EO/EC differences were largely distinct from each other, for both block-designed and continuous data. For continuous data, increases of BOLD variability were found in secondary visual cortex and decreases were mainly in primary auditory cortex, primary sensorimotor cortex and medial nuclei of the thalamus, whereas no significant mean-based differences were observed. For the block-designed data, the pattern of increased variability resembled that of the continuous data and the negative regions were restricted to the medial thalamus and a few clusters in auditory and sensorimotor networks, whereas activation regions were mainly located in primary visual cortex and lateral nuclei of the thalamus. Furthermore, with the expanding-window analyses we found that the variability results of the continuous data exhibited a rather slower dynamical process than typically considered for task activation, suggesting that a block design is less optimal than a continuous design for characterizing BOLD variability. In sum, we provide more solid evidence that variability-based modulation could represent an orthogonal view of brain function relative to traditional mean-based activation.
The world-wide aviation system is one of the most complex dynamical systems ever developed and is generating data at an extremely rapid rate. Most modern commercial aircraft record several hundred flight parameters, including information from the guidance, navigation, and control systems, the avionics and propulsion systems, and the pilot inputs into the aircraft. These parameters may be continuous measurements or binary or categorical measurements recorded at one-second intervals for the duration of the flight. Currently, most approaches to aviation safety are reactive, meaning that they are designed to react to an aviation safety incident or accident. Here, we discuss a novel approach based on the theory of multiple kernel learning to detect potential safety anomalies in very large databases of discrete and continuous data from world-wide operations of commercial fleets. We pose a general anomaly detection problem which includes both discrete and continuous data streams, where we assume that the discrete streams have a causal influence on the continuous streams. We also assume that atypical sequences of events in the discrete streams can lead to off-nominal system performance. We discuss the application domain, novel algorithms, and also briefly discuss results on synthetic and real-world data sets. Our algorithm uncovers operationally significant events in high-dimensional data streams in the aviation industry which are not detectable using state-of-the-art methods.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This data is based on the Seshat data release in https://zenodo.org/record/6642230 and aims to dissect the time series of each NGA into culturally and institutionally continuous time series. For both continuity criteria, the central continuous time series is marked in the data (central meaning that this is the time interval during which the NGA has crossed a specified threshold between low-complexity and high-complexity societies). Details can be found in v3 of https://arxiv.org/abs/2212.00563
The world-wide aviation system is one of the most complex dynamical systems ever developed and is generating data at an extremely rapid rate. Most modern commercial aircraft record several hundred flight parameters, including information from the guidance, navigation, and control systems, the avionics and propulsion systems, and the pilot inputs into the aircraft. These parameters may be continuous measurements or binary or categorical measurements recorded at one-second intervals for the duration of the flight. Currently, most approaches to aviation safety are reactive, meaning that they are designed to react to an aviation safety incident or accident. In this paper, we discuss a novel approach based on the theory of multiple kernel learning to detect potential safety anomalies in very large databases of discrete and continuous data from world-wide operations of commercial fleets. We pose a general anomaly detection problem which includes both discrete and continuous data streams, where we assume that the discrete streams have a causal influence on the continuous streams. We also assume that atypical sequences of events in the discrete streams can lead to off-nominal system performance. We discuss the application domain, novel algorithms, and also discuss results on real-world data sets. Our algorithm uncovers operationally significant events in high-dimensional data streams in the aviation industry which are not detectable using state-of-the-art methods.
This dataset contains crash information from the last five years to the current date. The data is based on the National Incident Based Reporting System (NIBRS). The data is dynamic, allowing for additions, deletions and modifications at any time, resulting in more accurate information in the database. Due to ongoing and continuous data entry, the numbers of records in subsequent extractions are subject to change.
About Crash Data
The Cary Police Department strives to make crash data as accurate as possible, but there is no avoiding the introduction of errors into this process, which relies on data furnished by many people and that cannot always be verified. As the data on this site is updated, there will be instances of adding new incidents and updating existing data with information gathered through the investigative process. Not surprisingly, crash data becomes more accurate over time, as new crashes are reported and more information comes to light during investigations. This dynamic nature of crash data means that content provided here today will probably differ from content provided a week from now. Likewise, content provided on this site will probably differ somewhat from crash statistics published elsewhere by the Town of Cary, even though they draw from the same database.
About Crash Locations
Crash locations reflect the approximate locations of the crash. Certain crashes may not appear on maps if there is insufficient detail to establish a specific, mappable location.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
(See text for details).(SBP: systolic blood pressure. DBP: diastolic blood pressure. HR: heart rate. SpO2: pulse-oximeter saturation)
Daily maximum and minimum air temperature data were obtained from the Global Historical Climatology Network daily (GHCNd; Menne et al. 2012) and the Great Lakes Air Temperature/Degree Day Climatology, 1897-1983 (Assel et al. 1995). Daily air temperature was calculated by taking a simple average of the daily maximum and minimum air temperature. To accurately capture climate trends and variability, it is critical to ensure data consistency across the historical record, including spatial coverage, the number of representative weather stations, and measurement details (e.g., sensor types and heights, measurement protocols), as any inconsistencies could appear as spurious climate change in the data record. With this in mind, and following Cohn et al. (2021), a total of 24 coastal locations along the Great Lakes were selected (see Figure 1 in the Method Document). These 24 locations had relatively consistent station data records since the 1890s, while data from other locations had large temporal gaps or inconsistencies among neighboring stations. Each of the selected locations had multiple weather stations in its proximity covering the historical period from the 1890s to 2023, representing the weather conditions around the location. Only a couple of stations covered the whole historical period (e.g., Green Bay, WI). Therefore, for most of the locations, datasets from multiple stations in the proximity of each location were combined to create a continuous data record from the 1890s to 2023 (see Table 1 in the Method Document for station information and the periods for which each station's data were used). When doing so, data consistency was verified by comparing the data during the periods when station datasets overlap. This procedure resulted in almost continuous time series, except for a few locations that still had temporal gaps of one to several days (e.g., Escanaba, MI). Therefore, any temporal gaps of less than 10 days in the combined time series were filled by linear interpolation, resulting in completely continuous time series for all locations. Average daily air temperature was calculated from January 1, 1897 to October 22, 2023 by averaging the time series from the corresponding locations around each lake. This resulted in daily air temperature records for all five Great Lakes (Lake Superior, Lake Huron, Lake Michigan, Lake Erie, and Lake Ontario). The cumulative freezing degree days (CFDDs) and the net melting degree days (NMDDs) were also added to this version of the dataset. The description of the calculation methods for CFDD and NMDD can be found in the method document included in this dataset.
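As a rough illustration of the gap-filling and degree-day steps described above, here is a minimal R sketch; the simulated data frame, its column names, and the 32 °F freezing base are assumptions for illustration only, and the dataset's own CFDD/NMDD definitions are given in its method document.

```r
library(zoo)

# Stand-in daily station data (degrees F) for one location and one year
set.seed(2)
daily <- data.frame(
  date = seq(as.Date("1897-01-01"), as.Date("1897-12-31"), by = "day"),
  tmax = rnorm(365, mean = 50, sd = 15),
  tmin = rnorm(365, mean = 35, sd = 15)
)
daily$tmean <- (daily$tmax + daily$tmin) / 2          # daily mean = average of max and min

daily$tmean[sample(2:364, 20)] <- NA                  # simulate short gaps in the combined record
daily$tmean_filled <- na.approx(daily$tmean, x = daily$date,
                                maxgap = 10, na.rm = FALSE)   # fill gaps of up to 10 days

# Cumulative freezing degree days: accumulate (32 - Tmean) on days below freezing
daily$cfdd <- cumsum(pmax(32 - daily$tmean_filled, 0))
```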
View crash information from the last five years to the current date.
This dataset includes crashes in the Town of Cary for the previous four calendar years plus the current year to date.
The data is based on the National Incident Based Reporting System (NIBRS). The data is dynamic, allowing for additions, deletions and modifications at any time, resulting in more accurate information in the database. Due to ongoing and continuous data entry, the numbers of records in subsequent extractions are subject to change.
About Crash Data
The Cary Police Department strives to make crash data as accurate as possible, but there is no avoiding the introduction of errors into this process, which relies on data furnished by many people and that cannot always be verified. As the data on this site is updated, there will be instances of adding new incidents and updating existing data with information gathered through the investigative process.
Not surprisingly, crash data becomes more accurate over time, as new crashes are reported and more information comes to light during investigations.
This dynamic nature of crash data means that content provided here today will probably differ from content provided a week from now. Likewise, content provided on this site will probably differ somewhat from crash statistics published elsewhere by the Town of Cary, even though they draw from the same database.
About Crash Locations
Crash locations reflect the approximate locations of the crash. Certain crashes may not appear on maps if there is insufficient detail to establish a specific, mappable location.
This data is updated daily.
In our changing world, it is critical to understand and predict plant community responses to global change drivers. Plant functional traits promise to be a key predictive tool for many ecosystems, including grasslands; however, their use requires both complete plant community and functional trait data. Yet representation of these data in global databases is incredibly sparse, particularly beyond a handful of the most used traits and common species. Here we present the CoRRE Trait Database, spanning 17 traits (9 categorical, 8 continuous) anticipated to predict species’ responses to global change for 4,079 vascular plant species across 173 plant families present in 390 grassland experiments from around the world. The database contains complete categorical trait records for all 4,079 plant species, obtained from a comprehensive literature search. Additionally, the database contains nearly complete coverage (99.97%) of species mean values for continuous traits for a subset of 2,927 plant species, predicted from observed trait data drawn from TRY and a variety of other plant trait databases using Bayesian Hierarchical Probabilistic Matrix Factorization (BHPMF) and multivariate imputation by chained equations (MICE). These data will shed light on mechanisms underlying population, community, and ecosystem responses to global change in grasslands worldwide.
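To make the chained-equations imputation step concrete, here is a minimal R sketch using the mice package; the simulated trait table and its column names are hypothetical, and this is a generic illustration rather than the authors' BHPMF/MICE pipeline.

```r
library(mice)

# Hypothetical continuous trait table with sparse coverage
set.seed(42)
traits <- data.frame(
  height_cm = rlnorm(200, meanlog = 3, sdlog = 0.5),
  sla       = rlnorm(200, meanlog = 2, sdlog = 0.4),
  leaf_n    = rlnorm(200, meanlog = 1, sdlog = 0.3)
)
for (j in 1:3) traits[sample(200, 40), j] <- NA       # knock out values to mimic missing trait records

imp    <- mice(traits, m = 5, method = "pmm", printFlag = FALSE)  # 5 chained-equation imputations
filled <- complete(imp, 1)                            # one completed trait table
colSums(is.na(filled))                                # should be all zero after imputation
```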
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains data and code associated with the study "Quantifying accuracy and precision from continuous response data in studies of spatial perception and crossmodal recalibration" by Patrick Bruns, Caroline Thun, and Brigitte Röder.
example_code.R contains analysis code that can be used to calculate error-based and regression-based localization performance metrics from single-subject response data, with a working example in R. It requires as inputs a numeric vector containing the stimulus location (true value) in each trial and a numeric vector containing the corresponding localization response (perceived value) in each trial.
example_data.csv contains the data used in the working example of the analysis code.
localization.csv contains extracted localization performance metrics from 188 subjects which were analyzed in the study to assess the agreement between error-based and regression-based measures of accuracy and precision. The subjects had all naively performed an azimuthal sound localization task (see related identifiers for the underlying raw data).
recalibration.csv contains extracted localization performance metrics from a subsample of 57 subjects in whom data from a second sound localization test, performed after exposure to audiovisual stimuli in which the visual stimulus was consistently presented 13.5° to the right of the sound source, were available. The file contains baseline performance (pre) and changes in performance after audiovisual exposure relative to baseline (delta) in each of the localization performance metrics.
Localization performance metrics were either derived from the single-trial localization errors (error-based approach) or from a linear regression of localization responses on the actual target locations (regression-based approach). The following localization performance metrics were included in the study (a short R sketch after this list illustrates how they can be computed):
bias: overall bias of localization responses to the left (negative values) or to the right (positive values), equivalent to constant error (CE) in error-based approaches and intercept in regression-based approaches
absolute constant error (aCE): absolute value of bias (or CE), indicates the amount of bias irrespective of direction
mean absolute constant error (maCE): mean of the aCE per target location, reflects over- or underestimation of peripheral target locations
variable error (VE): mean of the standard deviations (SD) of the single-trial localization errors at each target location
pooled variable error (pVE): SD of the single-trial localization errors pooled across trials from all target locations
absolute error (AE): mean of the absolute values of the single-trial localization errors, sensitive to both bias and variability of the localization responses
slope: slope of the regression model function, indicates an overestimation (values > 1) or underestimation (values < 1) of peripheral target locations
R2: coefficient of determination of the regression model, indicates the goodness of the fit of the localization responses to the regression line
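As a rough guide, the following R sketch (not the repository's example_code.R) shows how these metrics can be computed from a vector of target locations and a vector of responses; the simulated values are hypothetical.

```r
# Simulated single-subject data: 5 target locations, 30 trials each
set.seed(1)
target   <- rep(c(-20, -10, 0, 10, 20), each = 30)          # degrees azimuth (example values)
response <- 0.9 * target + 2 + rnorm(length(target), sd = 4)

err <- response - target                                    # single-trial localization errors

CE   <- mean(err)                                            # bias / constant error
aCE  <- abs(CE)                                              # absolute constant error
maCE <- mean(abs(tapply(err, target, mean)))                 # mean absolute CE per target location
VE   <- mean(tapply(err, target, sd))                        # variable error
pVE  <- sd(err)                                              # pooled variable error
AE   <- mean(abs(err))                                       # absolute error

fit       <- lm(response ~ target)                           # regression-based approach
intercept <- coef(fit)[1]                                    # regression analogue of bias
slope     <- coef(fit)[2]                                    # <1 under-, >1 overestimation of periphery
R2        <- summary(fit)$r.squared                          # goodness of fit
```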
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Context: Use of continuous glucose monitoring (CGM) is increasing for insulin-requiring patients with diabetes. While data on glycemic profiles of healthy, non-diabetic individuals exists for older sensors, assessment of glycemic metrics with new generation CGM devices is lacking. Objective: To establish reference sensor glucose ranges in healthy, non-diabetic individuals across different age groups, using a current generation CGM sensor. Design: Multicenter, prospective study. Setting: 12 centers within the T1D Exchange Clinic Network. Patients or Participants: Non-pregnant, healthy, non-diabetic children and adults (age ≥6 years); with non-obese body mass index. Intervention: A blinded Dexcom G6 CGM, with once daily calibration, was worn for up to 10 days in each participant. Main Outcome Measure: CGM metrics of mean glucose, hyperglycemia, hypoglycemia, and glycemic variability. Results: 153 participants (age 7-80 years) were included in the analyses. Mean average glucose was 98-99 mg/dL (5.4-5.5 mmol/L) for all age groups except those over 60 years in whom mean average glucose was 104 mg/dL (5.8 mmol/L). The median % time between 70-140 mg/dL (3.9-7.8 mmol/L) was 96% (IQR 93%-98%). Mean within-individual coefficient of variation (CV) was 17±3%. Median time spent with glucose levels >140mg/dL was 2.1% (30 min/day) and <70 mg/dL (3.9 mmol/L) was 1.1% (15 min/day). Conclusion: By assessing across age groups in a healthy, non-diabetic population, normative sensor glucose data have been derived, and will be useful as a benchmark for future research studies.
ATL22 is a derivative of the continuous Level 3A ATL13 Along Track Inland Surface Water Data product. ATL13 contains the high-resolution, along-track inland water surface profiles derived from analysis of the geolocated photon clouds from the ATL03 product. Starting from ATL13, ATL22 computes the mean surface water quantities with no additional photon analysis. The two data products, ATL22 and ATL13, can be used in conjunction as they include the same orbit and water body nomenclature, independent of version numbers.
The NASA Making Earth System Data Records for Use in Research Environments (MEaSUREs) Vegetation Continuous Fields (VCF) Version 1 data product (VCF5KYR) provides global fractional vegetation cover at 0.05 degree (5,600 meter) spatial resolution at yearly intervals from 1982 to 2016. The VCF5KYR data product is derived from a bagged linear model algorithm using Long Term Data Record Version 4 (LTDR V4) data compiled from Advanced Very High Resolution Radiometer (AVHRR) observations. Fractional vegetation cover (FVC) is the ratio of the area of the vertical projection of green vegetation above ground to the total area, capturing the horizontal distribution and density of vegetation on the Earth's surface. FVC is a primary means for measuring global forest cover change and is a key parameter for a variety of environmental and climate-related applications, including carbon land surface models and biomass measurements. The three bands included in each VCF5KYR Version 1 GeoTIFF are: percent of tree cover, non-tree vegetation, and bare ground. A water mask was applied with all pure water pixels (defined as ≥ 95% water coverage) set to zero. Data from years 1994 and 2000 were excluded due to a lack of data in LTDR V4. Known issues, including constraints and limitations, are provided on page 10 of the Algorithm Theoretical Basis Document (ATBD).
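As a rough illustration of the masking rule described above, here is a minimal R sketch using the terra package; the file names and the separate water-fraction raster are hypothetical, and this is not part of the product's own processing code.

```r
library(terra)

vcf   <- rast("VCF5KYR_1990.tif")     # hypothetical file; bands: tree cover, non-tree vegetation, bare ground
water <- rast("water_fraction.tif")   # hypothetical water-fraction layer on the same 0.05-degree grid

vcf_masked <- vcf * (water < 0.95)    # zero out pixels with >= 95% water coverage in all three bands
writeRaster(vcf_masked, "VCF5KYR_1990_masked.tif", overwrite = TRUE)
```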
The K-means algorithm is one of the most popular clustering algorithms in current use, as it is relatively fast yet simple to understand and deploy in practice. Nevertheless, its use entails certain restrictive assumptions about the data, the negative consequences of which are not always immediately apparent, as we demonstrate. While more flexible algorithms have been developed, their widespread use has been hindered by their computational and technical complexity. Motivated by these considerations, we present a flexible alternative to K-means that relaxes most of the assumptions, whilst remaining almost as fast and simple. This novel algorithm, which we call MAP-DP (maximum a posteriori Dirichlet process mixtures), is statistically rigorous as it is based on nonparametric Bayesian Dirichlet process mixture modeling. This approach allows us to overcome most of the limitations imposed by K-means. The number of clusters K is estimated from the data instead of being fixed a priori as in K-means. In addition, while K-means is restricted to continuous data, the MAP-DP framework can be applied to many kinds of data, for example, binary, count or ordinal data. Also, it can efficiently separate outliers from the data. This additional flexibility does not incur a significant computational overhead compared to K-means, with MAP-DP convergence typically achieved in the order of seconds for many practical problems. Finally, in contrast to K-means, since the algorithm is based on an underlying statistical model, the MAP-DP framework can deal with missing data and enables model testing such as cross-validation in a principled way. We demonstrate the simplicity and effectiveness of this algorithm on the health informatics problem of clinical sub-typing in a cluster of diseases known as parkinsonism.
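The restrictive K-means assumptions referred to above (for example, roughly equal, spherical clusters) can be seen in a small simulation; the following R sketch is only an illustration of that limitation, not an implementation of MAP-DP.

```r
# Two clusters with very different spreads: K-means tends to split the diffuse
# cluster and absorb part of it into the tight cluster's partition.
set.seed(3)
tight <- cbind(rnorm(200, mean = 0, sd = 0.3), rnorm(200, mean = 0, sd = 0.3))  # small, dense cluster
wide  <- cbind(rnorm(200, mean = 4, sd = 3.0), rnorm(200, mean = 0, sd = 3.0))  # large, diffuse cluster
x     <- rbind(tight, wide)
truth <- rep(1:2, each = 200)

km <- kmeans(x, centers = 2, nstart = 25)
table(truth, km$cluster)   # cross-tabulate true clusters against K-means labels
```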
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset Description
This dataset contains a simulated collection of 10,000 patient records designed to explore hypertension management in resource-constrained settings. It provides comprehensive data for analyzing blood pressure control rates, associated risk factors, and complications. The dataset is ideal for predictive modelling, risk analysis, and treatment optimization, offering insights into demographic, clinical, and treatment-related variables.
Dataset Structure
Dataset Volume
• Size: 10,000 records. • Features: 19 variables, categorized into Sociodemographic, Clinical, Complications, and Treatment/Control groups.
Variables and Categories
A. Sociodemographic Variables
1. Age:
• Continuous variable in years.
• Range: 18–80 years.
• Mean ± SD: 49.37 ± 12.81.
2. Sex:
• Categorical variable.
• Values: Male, Female.
3. Education:
• Categorical variable.
• Values: No Education, Primary, Secondary, Higher Secondary, Graduate, Post-Graduate, Madrasa.
4. Occupation:
• Categorical variable.
• Values: Service, Business, Agriculture, Retired, Unemployed, Housewife.
5. Monthly Income:
• Categorical variable in Bangladeshi Taka.
• Values: <5000, 5001–10000, 10001–15000, >15000.
6. Residence:
• Categorical variable.
• Values: Urban, Sub-urban, Rural.
B. Clinical Variables
7. Systolic BP:
• Continuous variable in mmHg.
• Range: 100–200 mmHg.
• Mean ± SD: 140 ± 15 mmHg.
8. Diastolic BP:
• Continuous variable in mmHg.
• Range: 60–120 mmHg.
• Mean ± SD: 90 ± 10 mmHg.
9. Elevated Creatinine:
• Binary variable (≥ 1.4 mg/dL).
• Values: Yes, No.
10. Diabetes Mellitus:
• Binary variable.
• Values: Yes, No.
11. Family History of CVD:
• Binary variable.
• Values: Yes, No.
12. Elevated Cholesterol:
• Binary variable (≥ 200 mg/dL).
• Values: Yes, No.
13. Smoking:
• Binary variable.
• Values: Yes, No.
C. Complications
14. LVH (Left Ventricular Hypertrophy):
• Binary variable (ECG diagnosis).
• Values: Yes, No.
15. IHD (Ischemic Heart Disease):
• Binary variable.
• Values: Yes, No.
16. CVD (Cerebrovascular Disease):
• Binary variable.
• Values: Yes, No.
17. Retinopathy:
• Binary variable.
• Values: Yes, No.
D. Treatment and Control
18. Treatment:
• Categorical variable indicating therapy type.
• Values: Single Drug, Combination Drugs.
19. Control Status:
• Binary variable.
• Values: Controlled, Uncontrolled.
Dataset Applications
1. Predictive Modeling:
• Develop models to predict blood pressure control status using demographic and clinical data (see the sketch after this list).
2. Risk Analysis:
• Identify significant factors influencing hypertension control and complications.
3. Severity Scoring:
• Quantify hypertension severity for patient risk stratification.
4. Complications Prediction:
• Forecast complications like IHD, LVH, and CVD for early intervention.
5. Treatment Guidance:
• Analyze therapy efficacy to recommend optimal treatment strategies.
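A minimal R sketch of the first application follows; the simulated data frame, its column names, and the chosen predictors are assumptions based on the variable list above, not the actual file layout of the published dataset.

```r
# Logistic regression predicting control status from demographic and clinical variables
set.seed(9)
n <- 1000
htn <- data.frame(                                   # stand-in for the real records
  Age            = round(runif(n, 18, 80)),
  Sex            = sample(c("Male", "Female"), n, replace = TRUE),
  Systolic_BP    = round(rnorm(n, 140, 15)),
  Diastolic_BP   = round(rnorm(n, 90, 10)),
  Diabetes       = sample(c("Yes", "No"), n, replace = TRUE),
  Smoking        = sample(c("Yes", "No"), n, replace = TRUE),
  Treatment      = sample(c("Single Drug", "Combination Drugs"), n, replace = TRUE),
  Control_Status = sample(c("Controlled", "Uncontrolled"), n, replace = TRUE)
)

fit <- glm(I(Control_Status == "Controlled") ~ Age + Sex + Systolic_BP + Diastolic_BP +
             Diabetes + Smoking + Treatment,
           data = htn, family = binomial)

summary(fit)                                      # coefficients on the log-odds scale
exp(cbind(OR = coef(fit), confint.default(fit)))  # odds ratios with Wald 95% CIs
```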