https://spdx.org/licenses/CC0-1.0.html
Many capture-recapture surveys of wildlife populations operate in continuous time but detections are typically aggregated into occasions for analysis, even when exact detection times are available. This discards information and introduces subjectivity, in the form of decisions about occasion definition. We develop a spatio-temporal Poisson process model for spatially explicit capture-recapture (SECR) surveys that operate continuously and record exact detection times. We show that, except in some special cases (including the case in which detection probability does not change within occasion), temporally aggregated data do not provide sufficient statistics for density and related parameters, and that when detection probability is constant over time our continuous-time (CT) model is equivalent to an existing model based on detection frequencies. We use the model to estimate jaguar density from a camera-trap survey and conduct a simulation study to investigate the properties of a CT estimator and discrete-occasion estimators with various levels of temporal aggregation. This includes investigation of the effect on the estimators of spatio-temporal correlation induced by animal movement. The CT estimator is found to be unbiased and more precise than discrete-occasion estimators based on binary capture data (rather than detection frequencies) when there is no spatio-temporal correlation. It is also found to be only slightly biased when there is correlation induced by animal movement, and to be more robust to inadequate detector spacing, while discrete-occasion estimators with binary data can be sensitive to occasion length, particularly in the presence of inadequate detector spacing. Our model includes as a special case a discrete-occasion estimator based on detection frequencies, and at the same time lays a foundation for the development of more sophisticated CT models and estimators. It allows modelling within-occasion changes in detectability, readily accommodates variation in detector effort, removes subjectivity associated with user-defined occasions, and fully utilises CT data. We identify a need for developing CT methods that incorporate spatio-temporal dependence in detections and see potential for CT models being combined with telemetry-based animal movement models to provide a richer inference framework.
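As a toy illustration of the model this abstract describes, the sketch below (Python, with invented parameter values; not the authors' code) simulates exact detection times at an array of camera traps as a Poisson process whose rate decays with distance from an animal's activity centre under a half-normal hazard, then evaluates the Poisson-process log-likelihood of the resulting capture history. With a time-constant hazard, as here, the detection counts alone are sufficient statistics, which is the special case the abstract notes.

```python
# Minimal continuous-time SECR sketch: one animal, a grid of detectors,
# time-constant half-normal detection hazard. All values are illustrative.
import numpy as np

rng = np.random.default_rng(1)

h0, sigma, T = 0.05, 300.0, 90.0     # baseline hazard (/day), spatial scale (m), survey length (days)
detectors = np.array([[x, y] for x in range(0, 2000, 500)
                              for y in range(0, 2000, 500)], dtype=float)
centre = np.array([900.0, 1100.0])   # activity centre of the animal

# Hazard at each detector: h_k(s) = h0 * exp(-d_k(s)^2 / (2 sigma^2)).
d2 = ((detectors - centre) ** 2).sum(axis=1)
rate = h0 * np.exp(-d2 / (2 * sigma ** 2))

# Exact detection times: a homogeneous Poisson process per detector over [0, T].
times = [np.sort(rng.uniform(0.0, T, rng.poisson(r * T))) for r in rate]

# Poisson-process log-likelihood of this capture history (density part omitted):
# sum of log-rates at observed times minus the integrated hazard over [0, T].
loglik = sum(len(t) * np.log(r) for t, r in zip(times, rate) if len(t)) - (rate * T).sum()
print("detections per detector:", [len(t) for t in times], " loglik:", round(loglik, 2))
```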
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
These datasets are continuous parameter grids (CPG) of annual mean daily maximum air temperature data for the years 2000 through 2016 in the Pacific Northwest. Source temperature data was produced by the PRISM Climate Group at Oregon State University.
View crash information from the last five years to the current date. This dataset includes crashes in the Town of Cary for the previous four calendar years plus the current year to date. The data is based on the National Incident Based Reporting System (NIBRS). The data is dynamic, allowing for additions, deletions and modifications at any time, resulting in more accurate information in the database. Due to ongoing and continuous data entry, the numbers of records in subsequent extractions are subject to change.

About Crash Data: The Cary Police Department strives to make crash data as accurate as possible, but there is no avoiding the introduction of errors into this process, which relies on data furnished by many people and that cannot always be verified. As the data is updated on this site there will be instances of new incidents being added and existing data being updated with information gathered through the investigative process. Not surprisingly, crash data become more accurate over time, as new crashes are reported and more information comes to light during investigations. This dynamic nature of crash data means that content provided here today will probably differ from content provided a week from now. Likewise, content provided on this site will probably differ somewhat from crime statistics published elsewhere by the Town of Cary, even though they draw from the same database.

About Crash Locations: Crash locations reflect the approximate location of the crash. Certain crashes may not appear on maps if there is insufficient detail to establish a specific, mappable location. This data is updated daily.
These datasets are continuous parameter grids (CPG) of monthly mean evapotranspiration data for March through September, years 2000 through 2015, in the Pacific Northwest. Source evapotranspiration data was produced using the operational Simplified Surface Energy Balance (SSEBop) model.
This data release contains three different datasets that were used in the Scientific Investigations Report (SIR): Spatial and Temporal Distribution of Bacterial Indicators and Microbial Source Tracking within Tumacácori National Historical Park and the Upper Santa Cruz River, Arizona, 2015-16. These datasets contain regression model data, estimated discharge data, and calculated flux and yields data.

Regression Model Data: This dataset contains data used in regression model development in the SIR. The period of data ranged from May 25, 1994 to May 19, 2017. Data from 2015 to 2017 were collected by the U.S. Geological Survey. Data prior to 2015 were provided by various agencies. Listed below are the different data contained within this dataset:
- Season, represented as an indicator variable (Fall, Spring, Summer, and Winter)
- Hydrologic condition, represented as an indicator variable (rising limb, recession limb, peak, or unable to classify)
- Flood (binary variable indicating whether the sample was collected during a flood event)
- Decimal date (DT), represented as a continuous variable
- Sine of DT, represented as a continuous variable for a periodic function to describe seasonal variation
- Cosine of DT, represented as a continuous variable for a periodic function to describe seasonal variation

Estimated Discharge: This dataset contains estimated discharge at four different sites between 03/02/2015 and 12/14/2016. The discharge was estimated using nearby streamgage relations, and the methods are described in detail in the SIR. The sites where discharge was estimated are listed below.
- NW8; 312551110573901; Nogales Wash at Ruby Road
- SC3; 312654110573201; Santa Cruz River abv Nogales Wash
- SC10; 313343110024701; Santa Cruz River at Santa Gertrudis Lane
- SC14; 09481740; Santa Cruz River at Tubac, AZ

Calculated Flux and Yields: This dataset contains calculated flux and yields for E. coli and suspended sediment concentrations. Mean daily flux was calculated when mean daily discharge was available at a corresponding streamgage. Instantaneous flux was calculated when instantaneous discharge (at 15-minute intervals) was available at a corresponding streamgage, or from a measured or estimated discharge value. The yields were calculated using the calculated flux values and the area of the different watersheds. Methods and equations are described in detail in the SIR. Listed below are the data contained within this dataset:
- Mean daily E. coli flux, in most probable number per day
- Mean daily suspended sediment flux, in tons per day
- Instantaneous E. coli flux, in most probable number per second
- Instantaneous suspended sediment flux, in tons per second
- E. coli yield, in most probable number per square mile
- Suspended sediment yield, in tons per square mile
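A worked version of the flux and yield arithmetic summarized above may help readers; the conversion factors below are standard unit conversions, while the sample concentration, discharge, and watershed area are invented, not values from the SIR.

```python
# Flux and yield arithmetic sketch. Constants are standard unit conversions;
# example inputs are illustrative only.
FT3_TO_L = 28.3168             # litres per cubic foot
MGL_CFS_TO_TONS_DAY = 0.0027   # standard mg/L * ft3/s -> tons/day factor

def ecoli_flux_per_s(conc_mpn_per_100ml, q_cfs):
    """Instantaneous E. coli flux in MPN per second."""
    return conc_mpn_per_100ml * 10.0 * q_cfs * FT3_TO_L  # MPN/100 mL -> MPN/L

def ecoli_flux_per_day(conc_mpn_per_100ml, q_cfs):
    """Mean daily E. coli flux in MPN per day (from mean daily discharge)."""
    return ecoli_flux_per_s(conc_mpn_per_100ml, q_cfs) * 86_400

def sediment_flux_tons_day(conc_mg_l, q_cfs):
    """Suspended-sediment flux in tons per day."""
    return conc_mg_l * q_cfs * MGL_CFS_TO_TONS_DAY

def yield_per_sq_mile(flux, watershed_area_sq_mi):
    """Yield: flux normalised by contributing watershed area."""
    return flux / watershed_area_sq_mi

# Illustrative values: 2400 MPN/100 mL at 35 ft3/s over a 533 mi2 watershed.
daily = ecoli_flux_per_day(2400, 35)
print(f"{daily:.3e} MPN/day, {yield_per_sq_mile(daily, 533):.3e} MPN/mi2/day")
```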
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset Description
This dataset contains a simulated collection of 10,000 patient records designed to explore hypertension management in resource-constrained settings. It provides comprehensive data for analyzing blood pressure control rates, associated risk factors, and complications. The dataset is ideal for predictive modelling, risk analysis, and treatment optimization, offering insights into demographic, clinical, and treatment-related variables.
Dataset Structure
Dataset Volume
• Size: 10,000 records.
• Features: 19 variables, categorized into Sociodemographic, Clinical, Complications, and Treatment/Control groups.
Variables and Categories
A. Sociodemographic Variables
1. Age:
• Continuous variable in years.
• Range: 18–80 years.
• Mean ± SD: 49.37 ± 12.81.
2. Sex:
• Categorical variable.
• Values: Male, Female.
3. Education:
• Categorical variable.
• Values: No Education, Primary, Secondary, Higher Secondary, Graduate, Post-Graduate, Madrasa.
4. Occupation:
• Categorical variable.
• Values: Service, Business, Agriculture, Retired, Unemployed, Housewife.
5. Monthly Income:
• Categorical variable in Bangladeshi Taka.
• Values: <5000, 5001–10000, 10001–15000, >15000.
6. Residence:
• Categorical variable.
• Values: Urban, Sub-urban, Rural.
B. Clinical Variables
7. Systolic BP:
• Continuous variable in mmHg.
• Range: 100–200 mmHg.
• Mean ± SD: 140 ± 15 mmHg.
8. Diastolic BP:
• Continuous variable in mmHg.
• Range: 60–120 mmHg.
• Mean ± SD: 90 ± 10 mmHg.
9. Elevated Creatinine:
• Binary variable (≥ 1.4 mg/dL).
• Values: Yes, No.
10. Diabetes Mellitus:
• Binary variable.
• Values: Yes, No.
11. Family History of CVD:
• Binary variable.
• Values: Yes, No.
12. Elevated Cholesterol:
• Binary variable (≥ 200 mg/dL).
• Values: Yes, No.
13. Smoking:
• Binary variable.
• Values: Yes, No.
C. Complications
14. LVH (Left Ventricular Hypertrophy):
• Binary variable (ECG diagnosis).
• Values: Yes, No.
15. IHD (Ischemic Heart Disease):
• Binary variable.
• Values: Yes, No.
16. CVD (Cerebrovascular Disease):
• Binary variable.
• Values: Yes, No.
17. Retinopathy:
• Binary variable.
• Values: Yes, No.
D. Treatment and Control
18. Treatment:
• Categorical variable indicating therapy type.
• Values: Single Drug, Combination Drugs.
19. Control Status:
• Binary variable.
• Values: Controlled, Uncontrolled.
Dataset Applications
1. Predictive Modeling:
• Develop models to predict blood pressure control status using demographic and clinical data.
2. Risk Analysis:
• Identify significant factors influencing hypertension control and complications.
3. Severity Scoring:
• Quantify hypertension severity for patient risk stratification.
4. Complications Prediction:
• Forecast complications like IHD, LVH, and CVD for early intervention.
5. Treatment Guidance:
• Analyze therapy efficacy to recommend optimal treatment strategies.
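For readers who want a concrete starting point, the following sketch shows how a synthetic table matching the schema above could be constructed with NumPy and pandas. The distributions are simple placeholders chosen to match the stated ranges and summary statistics; this is not the generator actually used to produce the dataset, and only a subset of the 19 variables is shown.

```python
# Placeholder generator for a table matching the documented schema.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 10_000

df = pd.DataFrame({
    # Continuous variables clipped to the documented ranges.
    "age": rng.normal(49.37, 12.81, n).clip(18, 80).round(),
    "systolic_bp": rng.normal(140, 15, n).clip(100, 200).round(),
    "diastolic_bp": rng.normal(90, 10, n).clip(60, 120).round(),
    # Categorical/binary variables; category probabilities are assumptions.
    "sex": rng.choice(["Male", "Female"], n),
    "residence": rng.choice(["Urban", "Sub-urban", "Rural"], n),
    "diabetes": rng.choice(["Yes", "No"], n, p=[0.25, 0.75]),
    "smoking": rng.choice(["Yes", "No"], n, p=[0.3, 0.7]),
    "treatment": rng.choice(["Single Drug", "Combination Drugs"], n),
    "control_status": rng.choice(["Controlled", "Uncontrolled"], n, p=[0.45, 0.55]),
})
print(df.describe(include="all").T.head(9))
```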
Daily maximum and minimum air temperature data were obtained from the Global Historical Climatology Network daily (GHCNd; Menne et al. 2012) and the Great Lakes Air Temperature/Degree Day Climatology, 1897-1983 (Assel et al. 1995). Daily air temperature was calculated by taking a simple average of the daily maximum and minimum air temperature. To accurately capture climate trends and variability, it is critical to ensure data consistency across the historical record, including spatial coverage, the number of representative weather stations, and measurement details (e.g., sensor types and heights, measurement protocols), as any inconsistencies could result in apparent climate change in the data record. Bearing this in mind and following Cohn et al. (2021), a total of 24 coastal locations along the Great Lakes were selected (see Figure 1 in the Method Document). These 24 locations had relatively consistent station data records since the 1890s, while data from other locations had large gaps in time or inconsistencies among data from neighboring stations. Each of the selected locations had multiple weather stations in its proximity covering the historical period from the 1890s to 2023, representing the weather conditions around the location.

Only a couple of stations covered the whole historical period (e.g., Green Bay, WI). Therefore, for most of the locations, datasets from multiple stations in the proximity of each location were combined to create a continuous data record from the 1890s to 2023 (see Table 1 in the Method Document for station information and the periods for which each station's data were used). When doing so, data consistency was verified by comparing the data during the periods when station datasets overlapped. This procedure resulted in almost continuous timeseries, except for a few locations that still had temporal gaps of one to several days (e.g., Escanaba, MI). Therefore, any temporal gaps of less than 10 days in the combined timeseries were filled by linear interpolation, resulting in completely continuous timeseries for all the locations. Average daily air temperature was calculated from January 1, 1897 to October 22, 2023 by simply averaging the timeseries data from the corresponding locations around each lake. This resulted in daily air temperature records for all five Great Lakes (Lake Superior, Lake Huron, Lake Michigan, Lake Erie, and Lake Ontario).

The cumulative freezing degree days (CFDDs) and the net melting degree days (NMDDs) were also added to this version of the dataset. The description of the calculation methods for CFDD and NMDD can be found in the method document included in this dataset.
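A compact illustration of the gap-filling and degree-day steps described above is sketched below in pandas. The 0 °C threshold and the running-sum definition of CFDD are assumptions made for illustration; the exact CFDD and NMDD formulas are given in the method document included with the dataset.

```python
# Gap filling by linear interpolation plus a simple CFDD accumulation.
import numpy as np
import pandas as pd

# Fictitious daily mean air temperature series with a short gap.
idx = pd.date_range("1897-01-01", periods=60, freq="D")
temp = pd.Series(np.concatenate([np.linspace(-5, -15, 30),
                                 np.linspace(-15, 2, 30)]), index=idx)
temp.iloc[20:24] = np.nan

# Fill short temporal gaps by linear interpolation; `limit=9` caps the
# number of consecutive days filled, mirroring the <10-day rule above.
temp = temp.interpolate(method="time", limit=9)

# Cumulative freezing degree days: accumulated degrees below 0 degC
# (one common definition; the method document gives the exact one).
FREEZE = 0.0
cfdd = (FREEZE - temp).clip(lower=0).cumsum()
print(round(cfdd.iloc[-1], 1), "degree-days of accumulated freezing")
```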
https://www.archivemarketresearch.com/privacy-policy
The High Definition (HD) Maps market is experiencing robust growth, driven by the escalating demand for autonomous vehicles and Advanced Driver-Assistance Systems (ADAS). The market size in 2025 is estimated at $15.49 billion, projecting a significant expansion over the forecast period (2025-2033). While the provided CAGR (Compound Annual Growth Rate) is missing, considering the rapid technological advancements and increasing adoption of autonomous driving technologies, a conservative estimate would place the CAGR between 15% and 20% for the forecast period. This growth is fueled by several key factors, including the increasing accuracy and detail offered by HD maps compared to traditional maps, enabling safer and more efficient navigation for autonomous vehicles. The market is segmented by type (centralized vs. crowdsourced mapping) and application (autonomous vehicles, ADAS, others), with autonomous vehicles currently dominating the market share due to their critical reliance on precise and up-to-date map data. Major players like TomTom, Google, HERE Technologies, and Baidu Apollo are heavily investing in research and development, fostering innovation and competition within the market. Regional growth is expected to be geographically diverse, with North America and Europe leading the initial adoption, followed by a rapid expansion in the Asia-Pacific region driven by significant investments in autonomous vehicle infrastructure and technological advancements. The competitive landscape is characterized by both established map providers and technology giants entering the market. This intense competition is pushing innovation forward, leading to more accurate, detailed, and frequently updated HD maps. Challenges include the high cost of creating and maintaining HD maps, the need for continuous data updates to reflect dynamic road conditions, and data privacy concerns surrounding the collection and use of location data. Despite these challenges, the long-term outlook for the HD Maps market remains incredibly positive, fueled by the continuous advancement of autonomous driving technology and the increasing demand for improved road safety and traffic management solutions. The market's growth trajectory suggests significant opportunities for both established players and emerging companies in the years to come. We project a substantial increase in market size by 2033, exceeding the 2025 figures by a considerable margin, based on the estimated CAGR.
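The projection arithmetic behind that closing statement is simple compounding; the snippet below works through the 2025 base of $15.49 billion at the assumed 15-20% CAGR range.

```python
# Compound the 2025 market-size base through 2033 at the estimated CAGR range.
base_2025 = 15.49  # USD billions
years = 2033 - 2025
for cagr in (0.15, 0.20):
    projected = base_2025 * (1 + cagr) ** years
    print(f"CAGR {cagr:.0%}: ${projected:.1f}B by 2033")
# ~ $47.4B at 15% and ~ $66.6B at 20%, i.e. well above the 2025 figure.
```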
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The K-means algorithm is one of the most popular clustering algorithms in current use, as it is relatively fast yet simple to understand and deploy in practice. Nevertheless, its use entails certain restrictive assumptions about the data, the negative consequences of which are not always immediately apparent, as we demonstrate. While more flexible algorithms have been developed, their widespread use has been hindered by their computational and technical complexity. Motivated by these considerations, we present a flexible alternative to K-means that relaxes most of the assumptions, whilst remaining almost as fast and simple. This novel algorithm, which we call MAP-DP (maximum a posteriori Dirichlet process mixtures), is statistically rigorous, as it is based on nonparametric Bayesian Dirichlet process mixture modeling. This approach allows us to overcome most of the limitations imposed by K-means. The number of clusters K is estimated from the data instead of being fixed a priori as in K-means. In addition, while K-means is restricted to continuous data, the MAP-DP framework can be applied to many kinds of data, for example binary, count or ordinal data. Also, it can efficiently separate outliers from the data. This additional flexibility does not incur a significant computational overhead compared to K-means, with MAP-DP convergence typically achieved on the order of seconds for many practical problems. Finally, in contrast to K-means, since the algorithm is based on an underlying statistical model, the MAP-DP framework can deal with missing data and enables model testing such as cross-validation in a principled way. We demonstrate the simplicity and effectiveness of this algorithm on the health informatics problem of clinical sub-typing in a cluster of diseases known as parkinsonism.
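The paper's MAP-DP implementation is not reproduced here, but scikit-learn's truncated Dirichlet-process Gaussian mixture offers a readily available analogue of the key behaviour described: the effective number of clusters is inferred from the data rather than fixed a priori as in K-means. A minimal sketch on synthetic data:

```python
# Contrast K-means (fixed K) with a Dirichlet-process mixture (K inferred).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Three well-separated Gaussian clusters in 2D.
X = np.vstack([rng.normal(loc, 0.5, size=(100, 2))
               for loc in ([0, 0], [4, 0], [0, 4])])

km = KMeans(n_clusters=10, n_init=10).fit(X)  # K fixed a priori (deliberately wrong)
dp = BayesianGaussianMixture(
    n_components=10,  # truncation level, not a fixed K
    weight_concentration_prior_type="dirichlet_process",
).fit(X)

# K-means splits the data across all 10 clusters; the DP mixture leaves the
# superfluous components with negligible weight.
print("K-means clusters used:", len(np.unique(km.labels_)))
print("DP component weights:", np.round(dp.weights_, 2))
```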
One of the primary goals of Integrated Vehicle Health Management (IVHM) is to detect, diagnose, predict, and mitigate adverse events during the flight of an aircraft, regardless of the subsystem(s) from which the adverse event arises. To properly address this problem, it is critical to develop technologies that can integrate large, heterogeneous (meaning that they contain both continuous and discrete signals), asynchronous data streams from multiple subsystems in order to detect a potential adverse event, diagnose its cause, predict the effect of that event on the remaining useful life of the vehicle, and then take appropriate steps to mitigate the event if warranted. These data streams may have highly non-Gaussian distributions and can also contain discrete signals, such as caution and warning messages, which exhibit non-stationary behavior and obey arbitrary noise models. At the aircraft level, a Vehicle-Level Reasoning System (VLRS) can be developed to provide aircraft with at least two significant capabilities: improvement of aircraft safety due to enhanced monitoring and reasoning about the aircraft’s health state, and potential cost savings through Condition Based Maintenance (CBM). Along with achieving the benefits of CBM, an important challenge facing aviation safety today is safeguarding against system- and component-level failures and malfunctions. Citation: A. N. Srivastava, D. Mylaraswamy, R. Mah, and E. Cooper, “Vehicle Level Reasoning Systems: Concept and Future Directions,” Society of Automotive Engineers Integrated Vehicle Health Management Book, Ian Jennions, Ed., 2011.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Transparency in data visualization is an essential ingredient for scientific communication. The traditional approach of visualizing continuous quantitative data solely in the form of summary statistics (i.e., measures of central tendency and dispersion) has repeatedly been criticized for not revealing the underlying raw data distribution. Remarkably, however, systematic and easy-to-use solutions for raw data visualization using the most commonly reported statistical software package for data analysis, IBM SPSS Statistics, are missing. Here, a comprehensive collection of more than 100 SPSS syntax files and an SPSS dataset template is presented and made freely available that allow the creation of transparent graphs for one-sample designs, for one- and two-factorial between-subject designs, for selected one- and two-factorial within-subject designs as well as for selected two-factorial mixed designs and, with some creativity, even beyond (e.g., three-factorial mixed designs). Depending on graph type (e.g., pure dot plot, box plot, and line plot), raw data can be displayed along with standard measures of central tendency (arithmetic mean and median) and dispersion (95% CI and SD). The free-to-use syntax can also be modified to match individual needs. A variety of example applications of the syntax are illustrated in a tutorial-like fashion along with fictitious datasets accompanying this contribution. It is hoped that the syntax collection will provide researchers, students, teachers, and others working with SPSS a valuable tool to move towards more transparency in data visualization.
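The collection itself consists of SPSS syntax files; as a rough Python analogue of the kind of transparent graph described (raw data displayed alongside a measure of central tendency and dispersion), here is a minimal matplotlib sketch with fictitious data:

```python
# Transparent two-group plot: jittered raw data points plus mean and 95% CI.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
groups = {"A": rng.normal(10, 2, 30), "B": rng.normal(12, 3, 30)}

fig, ax = plt.subplots()
for i, (name, y) in enumerate(groups.items()):
    x = np.full_like(y, i) + rng.uniform(-0.08, 0.08, y.size)  # jittered raw data
    ax.plot(x, y, "o", alpha=0.4)
    m, se = y.mean(), y.std(ddof=1) / np.sqrt(y.size)
    ax.errorbar(i + 0.25, m, yerr=1.96 * se, fmt="s", capsize=4)  # mean with 95% CI
ax.set_xticks(range(len(groups)))
ax.set_xticklabels(groups.keys())
ax.set_ylabel("outcome (arbitrary units)")
plt.show()
```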
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Recent work in survey research has made progress in estimating models involving selection bias in a particularly difficult circumstance: all nonrespondents are unit nonresponders, meaning that no data are available for them. These models are reasonably successful in circumstances where the dependent variable of interest is continuous, but they are less practical empirically when it is latent and only discrete outcomes or choices are observed. I develop a method in this article to estimate these models that is much more practical in terms of estimation. The model uses a small amount of auxiliary information to estimate the selection equation parameters, which are then held fixed while estimating the equation of interest parameters in a maximum-likelihood setting. After presenting Monte Carlo analyses to support the model, I apply the technique to a substantive problem: Which interest groups are likely to be involved in support of potential initiatives to achieve their policy goals?
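A minimal sketch of the two-step idea described in the abstract, under assumed probit functional forms and fully simulated data (this is not the article's estimator): the selection-equation parameters are estimated first from the auxiliary response information, then held fixed while the outcome-equation parameters are estimated by maximum likelihood.

```python
# Two-step ML for a binary outcome with unit nonresponse (illustrative only).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(7)
n = 400
z = np.column_stack([np.ones(n), rng.normal(size=n)])  # selection covariates
x = np.column_stack([np.ones(n), rng.normal(size=n)])  # outcome covariates
gamma_true, beta_true, rho = np.array([0.3, 0.8]), np.array([-0.2, 1.0]), 0.5
u, e = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n).T
respond = z @ gamma_true + u > 0                       # unit response indicator
y = np.where(x @ beta_true + e > 0, 1.0, 0.0)          # observed only if respond

# Step 1: probit ML for the selection equation; gamma_hat is then held fixed.
def sel_nll(g):
    p = norm.cdf(z @ g).clip(1e-12, 1 - 1e-12)
    return -(np.log(p[respond]).sum() + np.log(1 - p[~respond]).sum())
gamma_hat = minimize(sel_nll, np.zeros(2)).x

# Step 2: outcome-equation ML with gamma fixed. Respondents contribute a
# bivariate-probit term; nonrespondents contribute only Phi(-z'gamma).
def out_nll(theta):
    b, r = theta[:2], np.tanh(theta[2])                # tanh keeps rho in (-1, 1)
    zi, xi, yi = z[respond] @ gamma_hat, x[respond] @ b, y[respond]
    cov = [[1.0, r], [r, 1.0]]
    p11 = np.array([multivariate_normal.cdf([a, c], mean=[0, 0], cov=cov)
                    for a, c in zip(xi, zi)])
    p01 = norm.cdf(zi) - p11                           # y = 0 among respondents
    ll = np.log(np.where(yi == 1, p11, p01).clip(1e-12, None)).sum()
    ll += np.log(norm.cdf(-(z[~respond] @ gamma_hat)).clip(1e-12, None)).sum()
    return -ll

res = minimize(out_nll, np.zeros(3), method="Nelder-Mead")
print("beta_hat:", res.x[:2].round(2), " rho_hat:", round(np.tanh(res.x[2]), 2))
```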
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Indonesia Export: Volume: Weighing Machines Scales for Continuous Weighing of Goods on Conveyors; Using Electronic Means for Gauging Weight data was reported at 0.000 kg mn in Nov 2024. This records a decrease from the previous number of 0.001 kg mn for Oct 2024. Indonesia Export: Volume: Weighing Machines Scales for Continuous Weighing of Goods on Conveyors; Using Electronic Means for Gauging Weight data is updated monthly, averaging 0.000 kg mn (median) from Jan 2019 to Nov 2024, with 35 observations. The data reached an all-time high of 0.003 kg mn in Feb 2022 and a record low of 0.000 kg mn in Jul 2023. Indonesia Export: Volume: Weighing Machines Scales for Continuous Weighing of Goods on Conveyors; Using Electronic Means for Gauging Weight data remains active status in CEIC and is reported by Statistics Indonesia. The data is categorized under Indonesia Premium Database’s Foreign Trade – Table ID.JAH083: Foreign Trade: by HS 8 Digits: Export: HS84: Nuclear Reactors, Boilers, Machinery, and Mechanical Appliances, Parts Thereof.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Continuous data were presented as mean ± SD (standard deviation), while categorical data were presented as number and percentage (%). For comparisons of means between groups, the Mann-Whitney U test or Student’s independent t-test was used, depending on the normality assumption. Categorical data were tested using the Chi-square test or Fisher’s exact test (if an expected value ≤ 5 was found). Spearman’s correlation coefficient was used to assess the relations among independent variables. Further, univariate and multivariate logistic regression models were used to analyze the association between independent variables and survival outcomes. Independent variables that were significant in the univariate analysis were entered into a multivariate model. Two kinds of multivariate models were used: the enter method and the forward (Wald test) method. In the enter method, significant variables were recognized as associated factors. In the forward method with the Wald test, the combination of independent variables that best explained the variation was reported. The estimated odds ratio (OR) and its 95% confidence interval (CI) were reported for all logistic regression results. The probabilities generated from the final multivariate logistic regression model were further validated by ROC analysis. The AUC and its 95% confidence interval (CI) were reported. All of the above analyses were performed using IBM SPSS Version 25 (SPSS Statistics V25, IBM Corporation, Somers, New York).
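The analyses above were run in IBM SPSS Version 25; purely as an illustration of the same pipeline, here is a minimal Python sketch with simulated placeholder variables, with scipy, statsmodels, and scikit-learn standing in for the SPSS procedures:

```python
# Normality-gated group comparison, categorical test, logistic regression, ROC.
import numpy as np
from scipy import stats
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
group = rng.integers(0, 2, 200)            # e.g. survivor vs non-survivor
biomarker = rng.normal(5 + group, 1.5)     # continuous variable
exposure = rng.integers(0, 2, 200)         # categorical variable

# Continuous variable: t-test if approximately normal, otherwise Mann-Whitney U.
if stats.shapiro(biomarker).pvalue > 0.05:
    cont_test = stats.ttest_ind(biomarker[group == 1], biomarker[group == 0])
else:
    cont_test = stats.mannwhitneyu(biomarker[group == 1], biomarker[group == 0])

# Categorical variable: chi-square, or Fisher's exact if any expected count <= 5.
table = np.array([[np.sum((exposure == i) & (group == j)) for j in (0, 1)]
                  for i in (0, 1)])
chi2, p_chi, dof, expected = stats.chi2_contingency(table)
cat_p = stats.fisher_exact(table)[1] if (expected <= 5).any() else p_chi

# Logistic regression: odds ratios with 95% CIs, then ROC validation.
X = sm.add_constant(np.column_stack([biomarker, exposure]))
fit = sm.Logit(group, X).fit(disp=0)
print("OR:", np.exp(fit.params), "\n95% CI:\n", np.exp(fit.conf_int()))
print("AUC:", roc_auc_score(group, fit.predict(X)))
```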
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Supplemental Table S1. Predicted means (categorical data), beta values (continuous data) and standard errors of univariable models for crown-rump length (CRL) by different independent variables.
Executive Statement: To support analysis of the Landsat long-term data record that began in 1972, the USGS Landsat data archive was reorganized into a formal tiered data collection structure. This structure ensures all Landsat Level 1 products provide a consistent archive of known data quality to support time-series analysis and data “stacking”, while controlling continuous improvement of the archive and access to all data as they are acquired. Collection 1 Level 1 processing began in August 2016 and continued until all archived data were processed, completing in May 2018. Newly acquired Landsat 8 and Landsat 7 data continue to be processed into Collection 1 shortly after they are downlinked to USGS EROS. Learn more: https://www.usgs.gov/media/files/landsat-collection-1-level-1-product-definition
The merged-satellite Solar Backscattered Ultraviolet (SBUV) Level-3 monthly zonal mean (MZM) product (MSO3L3zm5) contains 1 month zonal means for profile layer and total column ozone based on v8.6 SBUV data from the Nimbus-4 BUV, Nimbus-7 SBUV, and NOAA-9 through NOAA-18 SBUV/2 instruments. The v8.6 SBUV algorithm estimates the ozone nadir profile and total column from SBUV measurements, and differs from the v8.0 SBUV algorithm via the use of 1) the Brion-Daumont-Malicet ozone cross sections, 2) an OMI-derived cloud-height climatology, 3) a revised a priori ozone climatology, and 4) inter-instrument calibration based on comparisons with no local time difference. The MSO3L3zm5 product is stored as a single HDF5 file, and has a size of 0.4 MB. The MZM product contains 5.0-degree-wide latitude zones with data between latitude -80.0 and 80.0 degrees. The data cover the time period from May 1970 through July 2013. Data coverage during the BUV mission from 1970 - 1977 contains many gaps after October 1973, and there are no data between November 1976 and November 1978. Continuous data coverage begins with SBUV and SBUV/2 missions starting November 1978.
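If working with the product file programmatically, h5py can list the file's actual internal layout; the dataset names in the commented lines below are hypothetical placeholders, not the documented structure of MSO3L3zm5.

```python
# Inspect the MZM HDF5 product; the filename and commented dataset names
# are assumptions, so list the real layout first.
import h5py

with h5py.File("MSO3L3zm5.h5", "r") as f:
    f.visit(print)                   # print every group/dataset path in the file
    # Example access once the real names are known (names here are hypothetical):
    # ozone = f["ProfileOzone"][:]   # hypothetical layer-ozone dataset
    # lat   = f["Latitude"][:]       # hypothetical 5-degree zone centres
```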
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Recent BOLD-fMRI studies have revealed a spatial distinction between variability-based and mean-based between-condition differences, suggesting that BOLD variability could offer complementary and even orthogonal views of brain function relative to traditional activation. However, these findings were mainly observed in block-designed fMRI studies. As a block design may not be appropriate for characterizing the low-frequency dynamics of the BOLD signal, the evidence suggesting the distinction between BOLD variability and mean is less convincing. Based on the high reproducibility of signal variability modulation between continuous eyes-open (EO) and eyes-closed (EC) states, here we employed the EO/EC paradigm and BOLD-fMRI to compare variability- and mean-based EO/EC differences while the subjects were in light. The comparisons were made on both block-designed and continuous EO/EC data. Our results demonstrated that the spatial patterns of variability- and mean-based EO/EC differences were largely distinct from each other, for both block-designed and continuous data. For continuous data, increases of BOLD variability were found in secondary visual cortex and decreases were mainly in primary auditory cortex, primary sensorimotor cortex and medial nuclei of the thalamus, whereas no significant mean-based differences were observed. For the block-designed data, the pattern of increased variability resembled that of the continuous data and the negative regions were restricted to the medial thalamus and a few clusters in auditory and sensorimotor networks, whereas activation regions were mainly located in primary visual cortex and lateral nuclei of the thalamus. Furthermore, with expanding-window analyses we found that the variability results of the continuous data exhibited a much slower dynamical process than typically considered for task activation, suggesting that a block design is less optimal than a continuous design for characterizing BOLD variability. In sum, we provide more solid evidence that variability-based modulation could represent an orthogonal view of brain function relative to traditional mean-based activation.
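The core contrast in the study reduces to two per-voxel maps; the NumPy sketch below (fictitious data, shapes, and effect sizes) computes both and shows that they can dissociate:

```python
# Per-voxel mean-based (activation) vs variability-based (SD) EO/EC contrasts.
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_vols = 5000, 200
eo = rng.normal(0.0, 1.0, (n_vox, n_vols))   # BOLD time series, eyes-open
ec = rng.normal(0.0, 1.2, (n_vox, n_vols))   # eyes-closed, higher variability

mean_diff = eo.mean(axis=1) - ec.mean(axis=1)              # traditional activation contrast
sd_diff = eo.std(axis=1, ddof=1) - ec.std(axis=1, ddof=1)  # variability contrast

# The two maps can dissociate: voxels with a large |sd_diff| need not show
# any mean difference, which is the spatial distinction reported above.
print("correlation of the two maps:", round(np.corrcoef(mean_diff, sd_diff)[0, 1], 3))
```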
ATL22 is a derivative of the continuous Level 3A ATL13 Along Track Inland Surface Water Data product. ATL13 contains the high-resolution, along-track inland water surface profiles derived from analysis of the geolocated photon clouds from the ATL03 product. Starting from ATL13, ATL22 computes mean surface water quantities with no additional photon analysis. The two data products, ATL22 and ATL13, can be used in conjunction, as they share the same orbit and water body nomenclature independent of version numbers.