37 datasets found
  1. Data from: Continuous-time spatially explicit capture-recapture models, with...

    • data.niaid.nih.gov
    • datadryad.org
    zip
    Updated Apr 21, 2014
    Cite
    Rebecca Foster; Bart Harmsen; Lorenzo Milazzo; Greg Distiller; David Borchers (2014). Continuous-time spatially explicit capture-recapture models, with an application to a jaguar camera-trap survey [Dataset]. http://doi.org/10.5061/dryad.mg5kv
    Explore at:
    Available download formats: zip
    Dataset updated
    Apr 21, 2014
    Dataset provided by
    University of St Andrews
    University of Cambridge
    University of Belize
    University of Cape Town
    Authors
    Rebecca Foster; Bart Harmsen; Lorenzo Milazzo; Greg Distiller; David Borchers
    License

    https://spdx.org/licenses/CC0-1.0.html

    Area covered
    Belize, Cockscomb Basin Wildlife Sanctuary
    Description

    Many capture-recapture surveys of wildlife populations operate in continuous time but detections are typically aggregated into occasions for analysis, even when exact detection times are available. This discards information and introduces subjectivity, in the form of decisions about occasion definition. We develop a spatio-temporal Poisson process model for spatially explicit capture-recapture (SECR) surveys that operate continuously and record exact detection times. We show that, except in some special cases (including the case in which detection probability does not change within occasion), temporally aggregated data do not provide sufficient statistics for density and related parameters, and that when detection probability is constant over time our continuous-time (CT) model is equivalent to an existing model based on detection frequencies. We use the model to estimate jaguar density from a camera-trap survey and conduct a simulation study to investigate the properties of a CT estimator and discrete-occasion estimators with various levels of temporal aggregation. This includes investigation of the effect on the estimators of spatio-temporal correlation induced by animal movement. The CT estimator is found to be unbiased and more precise than discrete-occasion estimators based on binary capture data (rather than detection frequencies) when there is no spatio-temporal correlation. It is also found to be only slightly biased when there is correlation induced by animal movement, and to be more robust to inadequate detector spacing, while discrete-occasion estimators with binary data can be sensitive to occasion length, particularly in the presence of inadequate detector spacing. Our model includes as a special case a discrete-occasion estimator based on detection frequencies, and at the same time lays a foundation for the development of more sophisticated CT models and estimators. It allows modelling within-occasion changes in detectability, readily accommodates variation in detector effort, removes subjectivity associated with user-defined occasions, and fully utilises CT data. We identify a need for developing CT methods that incorporate spatio-temporal dependence in detections and see potential for CT models being combined with telemetry-based animal movement models to provide a richer inference framework.
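    As a rough illustration of the kind of continuous-time detection model the abstract describes (not the authors' code, and not using this dataset), the sketch below evaluates a Poisson-process log-likelihood for one animal with a known activity centre and a half-normal encounter rate. With a rate that is constant in time, the exact detection times enter only through per-trap counts, consistent with the equivalence to frequency-based models noted above.

      # Minimal sketch, assuming a half-normal encounter rate lambda0 * exp(-d^2 / (2 sigma^2))
      # and a survey of length T; all names and values here are illustrative.
      import numpy as np

      def ct_secr_loglik(s, traps, detection_traps, lambda0, sigma, T):
          """s: (2,) activity centre; traps: (K, 2) trap coordinates;
          detection_traps: trap index of each detection event."""
          d2 = np.sum((traps - s) ** 2, axis=1)            # squared distance to each trap
          rate = lambda0 * np.exp(-d2 / (2 * sigma ** 2))  # per-trap detection rate
          return np.sum(np.log(rate[detection_traps])) - T * rate.sum()

      traps = np.array([[0.0, 0.0], [1.0, 0.0]])
      print(ct_secr_loglik(np.array([0.2, 0.1]), traps, np.array([0, 0, 1]),
                           lambda0=0.05, sigma=0.5, T=90.0))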

  2. Data from: Nitrogen addition alleviates the adverse effects of drought on...

    • figshare.com
    txt
    Updated Feb 3, 2024
    Cite
    Yonghong Luo; Zhuwen Xu (2024). Nitrogen addition alleviates the adverse effects of drought on plant productivity in a temperate steppe [Dataset]. http://doi.org/10.6084/m9.figshare.25137836.v1
    Explore at:
    Available download formats: txt
    Dataset updated
    Feb 3, 2024
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Yonghong Luo; Zhuwen Xu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    N-Dr dataset. Data associated with the following paper:
    Title: Nitrogen addition alleviates the adverse effects of drought on plant productivity in a temperate steppe
    Authors: Yonghong Luo, Lan Du, Jiatao Zhang, Haiyan Ren, Yan Shen, Jinbao Zhang, Na Li, Ru Tian, Shan Wang, Heyong Liu, Zhuwen Xu
    Journal: Ecological Applications
    Description of the data and file structure: The uploaded .csv file (N-Dr dataset.csv) includes means of variables calculated across the sampling period (2019-2021).
    Description of variables included in N-Dr dataset.csv:
    * Block (categorical data): Block
    * Drought (categorical data): Drought treatment; D0 for ambient precipitation, D1 for excluding precipitation in June of each year, D2 for reducing the precipitation amount by half from June to August of each year, D3 for reducing precipitation frequency by half without changing the precipitation amount from June to August of each year
    * Nitrogen (categorical data): Nitrogen addition treatment; N0 for ambient nitrogen and N1 for addition of 10 g N m-2 yr-1
    * SM (continuous data): Soil moisture
    * Soil IN (continuous data): Soil inorganic nitrogen concentration
    * NRI (continuous data): The net relatedness index
    * Species richness (continuous data): Species richness
    * FDis (continuous data): Functional dispersion
    * CWMPH (continuous data): Community weighted mean of plant stature
    * CWMLA (continuous data): Community weighted mean of leaf area
    * CWMSLA (continuous data): Community weighted mean of specific leaf area
    * CWMLDMC (continuous data): Community weighted mean of leaf dry matter content (LDMC)
    * CWMSDMC (continuous data): Community weighted mean of stem dry matter content (SDMC)
    * CWMSLR (continuous data): Community weighted mean of stem leaf ratio (SLR)
    * CWMLN (continuous data): Community weighted mean of leaf nitrogen concentration (LN)
    * ANPP (continuous data): Community aboveground net primary productivity
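    A minimal sketch of reading the file described above with pandas; the file name and column labels ("Drought", "Nitrogen", "SM", "Soil IN", "ANPP") are taken from the variable list, and the grouping shown is only one possible summary.

      # Treatment-level means of productivity and soil variables (illustrative use only)
      import pandas as pd

      df = pd.read_csv("N-Dr dataset.csv")
      summary = (df.groupby(["Drought", "Nitrogen"])[["SM", "Soil IN", "ANPP"]]
                   .mean()
                   .round(2))
      print(summary)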

  3. Air temperature, annual mean daily maximum, 2000-2016, Region 17, Continuous...

    • catalog.data.gov
    • datasets.ai
    • +1more
    Updated Jun 15, 2024
    + more versions
    Cite
    Climate Adaptation Science Centers (2024). Air temperature, annual mean daily maximum, 2000-2016, Region 17, Continuous Parameter Grid (CPG) [Dataset]. https://catalog.data.gov/dataset/air-temperature-annual-mean-daily-maximum-2000-2016-region-17-continuous-parameter-grid-cp
    Explore at:
    Dataset updated
    Jun 15, 2024
    Dataset provided by
    Climate Adaptation Science Centers
    Description

    These datasets are continuous parameter grids (CPG) of annual mean daily maximum air temperature data for the years 2000 through 2016 in the Pacific Northwest. Source temperature data was produced by the PRISM Climate Group at Oregon State University.
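    A minimal sketch of inspecting one of these CPG rasters in Python; the file name is hypothetical and the grids may be distributed in a different format or projection.

      import rasterio

      with rasterio.open("tmax_annual_mean_2000_2016_region17.tif") as src:  # hypothetical name
          grid = src.read(1, masked=True)   # read band 1 with no-data cells masked
          print(src.crs, src.res)           # coordinate reference system and cell size
          print("Grid-wide mean of annual mean daily maximum temperature:", float(grid.mean()))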

  4. Crash Data

    • catalog.data.gov
    • data.amerigeoss.org
    • +1more
    Updated Nov 12, 2020
    + more versions
    Cite
    Town of Cary (2020). Crash Data [Dataset]. https://catalog.data.gov/no/dataset/crash-data
    Explore at:
    Dataset updated
    Nov 12, 2020
    Dataset provided by
    Town of Cary
    Description

    View crash information from the last five years to the current date. This dataset includes crashes in the Town of Cary for the previous four calendar years plus the current year to date. The data is based on the National Incident Based Reporting System (NIBRS). The data is dynamic, allowing for additions, deletions and modifications at any time, resulting in more accurate information in the database. Due to ongoing and continuous data entry, the numbers of records in subsequent extractions are subject to change.
    About Crash Data: The Cary Police Department strives to make crash data as accurate as possible, but there is no avoiding the introduction of errors into this process, which relies on data furnished by many people and that cannot always be verified. As the data is updated on this site there will be instances of adding new incidents and updating existing data with information gathered through the investigative process. Not surprisingly, crash data become more accurate over time, as new crashes are reported and more information comes to light during investigations. This dynamic nature of crash data means that content provided here today will probably differ from content provided a week from now. Likewise, content provided on this site will probably differ somewhat from crash statistics published elsewhere by the Town of Cary, even though they draw from the same database.
    About Crash Locations: Crash locations reflect the approximate locations of the crash. Certain crashes may not appear on maps if there is insufficient detail to establish a specific, mappable location. This data is updated daily.

  5. Evapotranspiration (ET), monthly mean, 2000-2015, Region 17, Continuous...

    • datasets.ai
    • s.cnmilf.com
    • +1more
    + more versions
    Cite
    Department of the Interior, Evapotranspiration (ET), monthly mean, 2000-2015, Region 17, Continuous Parameter Grid (CPG) [Dataset]. https://datasets.ai/datasets/evapotranspiration-et-monthly-mean-2000-2015-region-17-continuous-parameter-grid-cpg
    Explore at:
    Available download formats
    Dataset authored and provided by
    Department of the Interior
    Description

    These datasets are continuous parameter grids (CPG) of monthly mean evapotranspiration data for March through September, years 2000 through 2015, in the Pacific Northwest. Source evapotranspiration data was produced using the operational Simplified Surface Energy Balance (SSEBop) model.

  6. Overview Metadata for the Regression Model Data, Estimated Discharge Data,...

    • catalog.data.gov
    • data.usgs.gov
    • +1more
    Updated Jul 6, 2024
    + more versions
    Cite
    U.S. Geological Survey (2024). Overview Metadata for the Regression Model Data, Estimated Discharge Data, and Calculated Flux and Yields Data at Tumacácori National Historical Park and the Upper Santa Cruz River, Arizona (1994-2017) [Dataset]. https://catalog.data.gov/dataset/overview-metadata-for-the-regression-model-data-estimated-discharge-data-and-calculat-1994
    Explore at:
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    U.S. Geological Survey
    Area covered
    Tumacacori-Carmen, Santa Cruz River, Arizona
    Description

    This data release contains three different datasets that were used in the Scientific Investigations Report (SIR): Spatial and Temporal Distribution of Bacterial Indicators and Microbial Source Tracking within Tumacácori National Historical Park and the Upper Santa Cruz River, Arizona, 2015-16. These datasets contain regression model data, estimated discharge data, and calculated flux and yields data.
    Regression Model Data: This dataset contains data used in regression model development in the SIR. The period of data ranged from May 25, 1994 to May 19, 2017. Data from 2015 to 2017 were collected by the U.S. Geological Survey. Data prior to 2015 were provided by various agencies. Listed below are the different data contained within this dataset:
    - Season, represented as an indicator variable (Fall, Spring, Summer, and Winter)
    - Hydrologic Condition, represented as an indicator variable (rising limb, recession limb, peak, or unable to classify)
    - Flood (binary variable indicating whether the sample was collected during a flood event)
    - Decimal Date (DT), represented as a continuous variable
    - Sine of DT, represented as a continuous variable for a periodic function to describe seasonal variation
    - Cosine of DT, represented as a continuous variable for a periodic function to describe seasonal variation
    Estimated Discharge: This dataset contains estimated discharge at four different sites between 03/02/2015 and 12/14/2016. The discharge was estimated using nearby streamgage relations; methods are described in detail in the SIR. The sites where discharge was estimated are listed below.
    - NW8; 312551110573901; Nogales Wash at Ruby Road
    - SC3; 312654110573201; Santa Cruz River abv Nogales Wash
    - SC10; 313343110024701; Santa Cruz River at Santa Gertrudis Lane
    - SC14; 09481740; Santa Cruz River at Tubac, AZ
    Calculated Flux and Yields: This dataset contains calculated flux and yields for E. coli and suspended sediment concentrations. Mean daily flux was calculated when mean daily discharge was available at a corresponding streamgage. Instantaneous flux was calculated when instantaneous discharge (at 15-minute intervals) was available at a corresponding streamgage, or from a measured or estimated discharge value. The yields were calculated using the calculated flux values and the area of the different watersheds. Methods and equations are described in detail in the SIR. Listed below are the data contained within this dataset:
    - Mean daily E. coli flux, in most probable number per day
    - Mean daily suspended sediment flux, in tons per day
    - Instantaneous E. coli flux, in most probable number per second
    - Instantaneous suspended sediment flux, in tons per second
    - E. coli, in most probable number per square mile
    - Suspended sediment, in tons per square mile
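    A minimal sketch of rebuilding the periodic seasonal terms described for the regression model data (decimal date plus its sine and cosine); the dates are taken from the description and the variable names are illustrative rather than the release's column names.

      import numpy as np
      import pandas as pd

      dates = pd.to_datetime(["1994-05-25", "2015-03-02", "2017-05-19"])
      days_in_year = np.where(dates.is_leap_year, 366, 365)
      dt = dates.year + (dates.dayofyear - 1) / days_in_year      # decimal date (DT)
      seasonal = pd.DataFrame({"DT": dt,
                               "sin_DT": np.sin(2 * np.pi * dt),   # periodic terms describing
                               "cos_DT": np.cos(2 * np.pi * dt)})  # seasonal variation
      print(seasonal)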

  7. Bridging the Gap in Hypertension Management: Evaluating Blood Pressure...

    • data.mendeley.com
    Updated Jan 15, 2025
    + more versions
    Cite
    abu sufian (2025). Bridging the Gap in Hypertension Management: Evaluating Blood Pressure Control and Associated Risk Factors in a Resource-Constrained Setting [Dataset]. http://doi.org/10.17632/56jyjndvcr.1
    Explore at:
    Dataset updated
    Jan 15, 2025
    Authors
    abu sufian
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Dataset Description

    This dataset contains a simulated collection of 1,00000 patient records designed to explore hypertension management in resource-constrained settings. It provides comprehensive data for analyzing blood pressure control rates, associated risk factors, and complications. The dataset is ideal for predictive modelling, risk analysis, and treatment optimization, offering insights into demographic, clinical, and treatment-related variables.

    Dataset Structure

    1. Dataset Volume

      • Size: 10,000 records.
      • Features: 19 variables, categorized into Sociodemographic, Clinical, Complications, and Treatment/Control groups.

    2. Variables and Categories

    A. Sociodemographic Variables

    1. Age:
    •  Continuous variable in years.
    •  Range: 18–80 years.
    •  Mean ± SD: 49.37 ± 12.81.
    2. Sex:
    •  Categorical variable.
    •  Values: Male, Female.
    3. Education:
    •  Categorical variable.
    •  Values: No Education, Primary, Secondary, Higher Secondary, Graduate, Post-Graduate, Madrasa.
    4. Occupation:
    •  Categorical variable.
    •  Values: Service, Business, Agriculture, Retired, Unemployed, Housewife.
    5. Monthly Income:
    •  Categorical variable in Bangladeshi Taka.
    •  Values: <5000, 5001–10000, 10001–15000, >15000.
    6. Residence:
    •  Categorical variable.
    •  Values: Urban, Sub-urban, Rural.
    

    B. Clinical Variables

    7. Systolic BP:
    •  Continuous variable in mmHg.
    •  Range: 100–200 mmHg.
    •  Mean ± SD: 140 ± 15 mmHg.
    8. Diastolic BP:
    •  Continuous variable in mmHg.
    •  Range: 60–120 mmHg.
    •  Mean ± SD: 90 ± 10 mmHg.
    9. Elevated Creatinine:
    •  Binary variable (≥ 1.4 mg/dL).
    •  Values: Yes, No.
    10. Diabetes Mellitus:
    •  Binary variable.
    •  Values: Yes, No.
    11. Family History of CVD:
    •  Binary variable.
    •  Values: Yes, No.
    12. Elevated Cholesterol:
    •  Binary variable (≥ 200 mg/dL).
    •  Values: Yes, No.
    13. Smoking:
    •  Binary variable.
    •  Values: Yes, No.
    

    C. Complications

    14. LVH (Left Ventricular Hypertrophy):
    •  Binary variable (ECG diagnosis).
    •  Values: Yes, No.
    15. IHD (Ischemic Heart Disease):
    •  Binary variable.
    •  Values: Yes, No.
    16. CVD (Cerebrovascular Disease):
    •  Binary variable.
    •  Values: Yes, No.
    17. Retinopathy:
    •  Binary variable.
    •  Values: Yes, No.
    

    D. Treatment and Control

    18. Treatment:
    •  Categorical variable indicating therapy type.
    •  Values: Single Drug, Combination Drugs.
    19. Control Status:
    •  Binary variable.
    •  Values: Controlled, Uncontrolled.
    

    Dataset Applications

    1. Predictive Modeling:
    •  Develop models to predict blood pressure control status using demographic and clinical data.
    2. Risk Analysis:
    •  Identify significant factors influencing hypertension control and complications.
    3. Severity Scoring:
    •  Quantify hypertension severity for patient risk stratification.
    4. Complications Prediction:
    •  Forecast complications like IHD, LVH, and CVD for early intervention.
    5. Treatment Guidance:
    •  Analyze therapy efficacy to recommend optimal treatment strategies.
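    A minimal sketch of the predictive-modelling use case listed above (forecasting control status from the other variables); the file name and exact column labels are assumptions based on the variable list, not part of the dataset as distributed.

      import pandas as pd
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      df = pd.read_csv("hypertension_simulated.csv")          # hypothetical file name
      y = (df["Control Status"] == "Controlled").astype(int)  # 1 = controlled, 0 = uncontrolled
      X = pd.get_dummies(df.drop(columns=["Control Status"]), drop_first=True)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
      model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
      print("Held-out AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))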
    
  8. Data from: Daily mean air temperature data for the North American Great...

    • s.cnmilf.com
    • catalog.data.gov
    Updated Jun 1, 2025
    Cite
    (Point of Contact) (2025). Daily mean air temperature data for the North American Great Lakes based on coastal weather stations; 1897-2023 (NCEI Accession 0291722) [Dataset]. https://s.cnmilf.com/user74170196/https/catalog.data.gov/dataset/daily-mean-air-temperature-data-for-the-north-american-great-lakes-based-on-coastal-weather-sta
    Explore at:
    Dataset updated
    Jun 1, 2025
    Dataset provided by
    (Point of Contact)
    Area covered
    North America, The Great Lakes
    Description

    Daily maximum and minimum air temperature data were obtained from the Global Historical Climatology Network daily (GHCNd, Menne, et al. 2012) and the Great Lakes Air Temperature/Degree Day Climatology, 1897-1983 (Assel et al. 1995). Daily air temperature was calculated by taking a simple average of daily maximum and minimum air temperature. To accurately capture climate trends and variability, it is critical to ensure data consistency across the historical record, such as spatial coverage, the number of representative weather stations, and measurement details (e.g., sensor types and heights, measurement protocols), as any inconsistencies could result in apparent climate change in the data record. Bearing this in mind and following Cohn et al. (2021), a total of 24 coastal locations along the Great Lakes were selected (see Figure 1 in the Method Document). These 24 locations had relatively consistent station data records since the 1890s, while data from other locations had large gaps in time or had inconsistencies among data from neighboring stations. Each of the selected locations had multiple weather stations in their proximity covering the historical period from the 1890s to 2023, representing the weather conditions around the location. Only a couple of stations covered the whole historical period (e.g., Green Bay, WI). Therefore, for most of the locations, datasets from multiple stations in the proximity of each location were combined to create a continuous data record from the 1890s to 2023 (see Table 1 in the Method Document for station information and periods for which the station data was used). When doing so, data consistency was verified by comparing the data during the period when station datasets overlap. This procedure resulted in almost continuous timeseries, except for a few locations that still had temporal gaps of one to several days (e.g., Escanaba, MI). Therefore, any temporal data gaps of less than 10 days in the combined timeseries were filled based on linear interpolation. This resulted in completely continuous timeseries for all the locations. Average daily air temperature was calculated from January 1, 1897 to October 22, 2023 by simply averaging the timeseries data from corresponding locations around each lake. This resulted in daily air temperature records for all five Great Lakes (Lake Superior, Lake Huron, Lake Michigan, Lake Erie, and Lake Ontario). The cumulative freezing degree days (CFDDs) and the net melting degree days (NMDDs) were also added to this version of the dataset. The description of the calculation methods for CFDD and NMDD can be found in the method document included in this dataset.
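    A minimal sketch of two of the processing steps described above (daily mean from maximum/minimum, and linear interpolation of gaps shorter than 10 days) for a single station; the file and column names are placeholders, not part of the accession.

      import pandas as pd

      obs = pd.read_csv("station_tmax_tmin.csv", parse_dates=["date"]).set_index("date")
      daily_mean = (obs["tmax"] + obs["tmin"]) / 2      # simple average of daily max and min
      daily_mean = daily_mean.asfreq("D")               # expose missing days as NaN
      filled = daily_mean.interpolate("time", limit=9)  # fill gaps of fewer than 10 days
      print(filled.describe())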

  9. High Definition Maps Report

    • archivemarketresearch.com
    doc, pdf, ppt
    Updated Mar 7, 2025
    Cite
    Archive Market Research (2025). High Definition Maps Report [Dataset]. https://www.archivemarketresearch.com/reports/high-definition-maps-52988
    Explore at:
    Available download formats: ppt, pdf, doc
    Dataset updated
    Mar 7, 2025
    Dataset authored and provided by
    Archive Market Research
    License

    https://www.archivemarketresearch.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The High Definition (HD) Maps market is experiencing robust growth, driven by the escalating demand for autonomous vehicles and Advanced Driver-Assistance Systems (ADAS). The market size in 2025 is estimated at $15.49 billion, with significant expansion projected over the forecast period (2025-2033). While the report's CAGR (Compound Annual Growth Rate) is not provided, considering the rapid technological advancements and increasing adoption of autonomous driving technologies, a conservative estimate would place the CAGR between 15% and 20% for the forecast period. This growth is fueled by several key factors, including the greater accuracy and detail offered by HD maps compared to traditional maps, enabling safer and more efficient navigation for autonomous vehicles. The market is segmented by type (centralized vs. crowdsourced mapping) and application (autonomous vehicles, ADAS, others), with autonomous vehicles currently dominating the market share due to their critical reliance on precise and up-to-date map data. Major players such as TomTom, Google, HERE Technologies, and Baidu Apollo are investing heavily in research and development, fostering innovation and competition within the market. Regional growth is expected to be geographically diverse, with North America and Europe leading initial adoption, followed by rapid expansion in the Asia-Pacific region driven by significant investments in autonomous vehicle infrastructure and technological advancements. The competitive landscape is characterized by both established map providers and technology giants entering the market. This intense competition is pushing innovation forward, leading to more accurate, detailed, and frequently updated HD maps. Challenges include the high cost of creating and maintaining HD maps, the need for continuous data updates to reflect dynamic road conditions, and data privacy concerns surrounding the collection and use of location data. Despite these challenges, the long-term outlook for the HD Maps market remains strongly positive, fueled by the continuous advancement of autonomous driving technology and the increasing demand for improved road safety and traffic management solutions. The market's growth trajectory suggests significant opportunities for both established players and emerging companies in the years to come. We project a substantial increase in market size by 2033, exceeding the 2025 figure by a considerable margin, based on the estimated CAGR.
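    A back-of-envelope check of the projection above, compounding the $15.49 billion 2025 base over the eight years to 2033 at the 15-20% CAGR range estimated in the text; the resulting figures are arithmetic only, not values from the report.

      base_2025 = 15.49  # USD billions (2025 estimate quoted above)
      for cagr in (0.15, 0.20):
          size_2033 = base_2025 * (1 + cagr) ** 8   # compound growth over 2025-2033
          print(f"CAGR {cagr:.0%}: roughly ${size_2033:.1f}B by 2033")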

  10. What to Do When K-Means Clustering Fails: A Simple yet Principled...

    • plos.figshare.com
    • researchdata.aston.ac.uk
    txt
    Updated May 30, 2023
    Cite
    Yordan P. Raykov; Alexis Boukouvalas; Fahd Baig; Max A. Little (2023). What to Do When K-Means Clustering Fails: A Simple yet Principled Alternative Algorithm [Dataset]. http://doi.org/10.1371/journal.pone.0162259
    Explore at:
    Available download formats: txt
    Dataset updated
    May 30, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Yordan P. Raykov; Alexis Boukouvalas; Fahd Baig; Max A. Little
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The K-means algorithm is one of the most popular clustering algorithms in current use, as it is relatively fast yet simple to understand and deploy in practice. Nevertheless, its use entails certain restrictive assumptions about the data, the negative consequences of which are not always immediately apparent, as we demonstrate. While more flexible algorithms have been developed, their widespread use has been hindered by their computational and technical complexity. Motivated by these considerations, we present a flexible alternative to K-means that relaxes most of the assumptions, whilst remaining almost as fast and simple. This novel algorithm, which we call MAP-DP (maximum a-posteriori Dirichlet process mixtures), is statistically rigorous as it is based on nonparametric Bayesian Dirichlet process mixture modeling. This approach allows us to overcome most of the limitations imposed by K-means. The number of clusters K is estimated from the data instead of being fixed a priori as in K-means. In addition, while K-means is restricted to continuous data, the MAP-DP framework can be applied to many kinds of data, for example, binary, count or ordinal data. Also, it can efficiently separate outliers from the data. This additional flexibility does not incur a significant computational overhead compared to K-means, with MAP-DP convergence typically achieved on the order of seconds for many practical problems. Finally, in contrast to K-means, since the algorithm is based on an underlying statistical model, the MAP-DP framework can deal with missing data and enables model testing such as cross-validation in a principled way. We demonstrate the simplicity and effectiveness of this algorithm on the health informatics problem of clinical sub-typing in a cluster of diseases known as parkinsonism.
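    The paper's own MAP-DP implementation is not shown here; as a rough stand-in for the Dirichlet-process idea, the sketch below contrasts K-means with scikit-learn's BayesianGaussianMixture (a variational Dirichlet-process mixture) on toy data with unequal cluster spreads, a setting where K-means assumptions are strained.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.mixture import BayesianGaussianMixture

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(0.0, 0.3, (200, 2)),   # tight cluster
                     rng.normal(3.0, 2.0, (200, 2))])  # diffuse cluster

      km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
      dpm = BayesianGaussianMixture(n_components=10,   # upper bound on the number of clusters
                                    weight_concentration_prior_type="dirichlet_process",
                                    random_state=0).fit(X)
      print("K-means cluster sizes:", np.bincount(km.labels_))
      print("Dirichlet-process components with non-negligible weight:",
            int(np.sum(dpm.weights_ > 0.01)))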

  11. Vehicle-Level Reasoning Systems: Integrating System-Wide data to Estimate...

    • catalog.data.gov
    • gimi9.com
    • +2more
    Updated Apr 11, 2025
    Cite
    Dashlink (2025). Vehicle-Level Reasoning Systems: Integrating System-Wide data to Estimate Instantaneous Health State [Dataset]. https://catalog.data.gov/dataset/vehicle-level-reasoning-systems-integrating-system-wide-data-to-estimate-instantaneous-hea
    Explore at:
    Dataset updated
    Apr 11, 2025
    Dataset provided by
    Dashlink
    Description

    One of the primary goals of Integrated Vehicle Health Management (IVHM) is to detect, diagnose, predict, and mitigate adverse events during the flight of an aircraft, regardless of the subsystem(s) from which the adverse event arises. To properly address this problem, it is critical to develop technologies that can integrate large, heterogeneous (meaning that they contain both continuous and discrete signals), asynchronous data streams from multiple subsystems in order to detect a potential adverse event, diagnose its cause, predict the effect of that event on the remaining useful life of the vehicle, and then take appropriate steps to mitigate the event if warranted. These data streams may have highly non-Gaussian distributions and can also contain discrete signals such as caution and warning messages which exhibit non-stationary behavior and obey arbitrary noise models. At the aircraft level, a Vehicle-Level Reasoning System (VLRS) can be developed to provide aircraft with at least two significant capabilities: improvement of aircraft safety due to enhanced monitoring and reasoning about the aircraft’s health state, and also potential cost savings through Condition Based Maintenance (CBM). Along with achieving the benefits of CBM, an important challenge facing aviation safety today is safeguarding against system- and component-level failures and malfunctions. Citation: A. N. Srivastava, D. Mylaraswamy, R. Mah, and E. Cooper, “Vehicle Level Reasoning Systems: Concept and Future Directions,” Society of Automotive Engineers Integrated Vehicle Health Management Book, Ian Jennions, Ed., 2011.

  12. Data_Sheet_1_Raw Data Visualization for Common Factorial Designs Using SPSS:...

    • frontiersin.figshare.com
    zip
    Updated Jun 2, 2023
    + more versions
    Cite
    Florian Loffing (2023). Data_Sheet_1_Raw Data Visualization for Common Factorial Designs Using SPSS: A Syntax Collection and Tutorial.ZIP [Dataset]. http://doi.org/10.3389/fpsyg.2022.808469.s001
    Explore at:
    Available download formats: zip
    Dataset updated
    Jun 2, 2023
    Dataset provided by
    Frontiers
    Authors
    Florian Loffing
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Transparency in data visualization is an essential ingredient for scientific communication. The traditional approach of visualizing continuous quantitative data solely in the form of summary statistics (i.e., measures of central tendency and dispersion) has repeatedly been criticized for not revealing the underlying raw data distribution. Remarkably, however, systematic and easy-to-use solutions for raw data visualization using the most commonly reported statistical software package for data analysis, IBM SPSS Statistics, are missing. Here, a comprehensive collection of more than 100 SPSS syntax files and an SPSS dataset template is presented and made freely available that allow the creation of transparent graphs for one-sample designs, for one- and two-factorial between-subject designs, for selected one- and two-factorial within-subject designs as well as for selected two-factorial mixed designs and, with some creativity, even beyond (e.g., three-factorial mixed-designs). Depending on graph type (e.g., pure dot plot, box plot, and line plot), raw data can be displayed along with standard measures of central tendency (arithmetic mean and median) and dispersion (95% CI and SD). The free-to-use syntax can also be modified to match with individual needs. A variety of example applications of syntax are illustrated in a tutorial-like fashion along with fictitious datasets accompanying this contribution. The syntax collection is hoped to provide researchers, students, teachers, and others working with SPSS a valuable tool to move towards more transparency in data visualization.
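    The collection itself consists of SPSS syntax; purely as an analogue (an assumption, not part of the dataset), the sketch below produces the same kind of transparent graph in Python: jittered raw data points overlaid with group means and 95% confidence intervals for a one-factorial between-subject design.

      import numpy as np
      import matplotlib.pyplot as plt
      from scipy import stats

      rng = np.random.default_rng(1)
      groups = {"A": rng.normal(10, 2, 30), "B": rng.normal(12, 3, 30)}  # fictitious data

      for i, (name, y) in enumerate(groups.items()):
          x = np.full(y.size, i) + rng.uniform(-0.08, 0.08, y.size)  # jittered raw data
          plt.scatter(x, y, alpha=0.5)
          ci = stats.sem(y) * stats.t.ppf(0.975, y.size - 1)         # 95% CI half-width
          plt.errorbar(i, y.mean(), yerr=ci, fmt="o", color="black", capsize=4)

      plt.xticks(range(len(groups)), groups.keys())
      plt.ylabel("Outcome")
      plt.show()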

  13. Replication data for: Using Auxiliary Data to Estimate Selection Bias...

    • dataverse.harvard.edu
    Updated Mar 8, 2010
    + more versions
    Cite
    Frederick J. Boehmke (2010). Replication data for: Using Auxiliary Data to Estimate Selection Bias Models, with an Application to Interest Group Use of the Direct Initiative Process [Dataset]. http://doi.org/10.7910/DVN/RMSOVN
    Explore at:
    Croissant: Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Mar 8, 2010
    Dataset provided by
    Harvard Dataverse
    Authors
    Frederick J. Boehmke
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Recent work in survey research has made progress in estimating models involving selection bias in a particularly difficult circumstance—all nonrespondents are unit nonresponders, meaning that no data are available for them. These models are reasonably successful in circumstances where the dependent variable of interest is continuous, but they are less practical empirically when it is latent and only discrete outcomes or choices are observed. I develop a method in this article to estimate these models that is much more practical in terms of estimation. The model uses a small amount of auxiliary information to estimate the selection equation parameters, which are then held fixed while estimating the equation of interest parameters in a maximum-likelihood setting. After presenting Monte Carlo analyses to support the model, I apply the technique to a substantive problem: Which interest groups are likely to be involved in support of potential initiatives to achieve their policy goals?

  14. Indonesia Export: Volume: Weighing Machines Scales for Continuous Weighing...

    • ceicdata.com
    Updated Sep 26, 2023
    + more versions
    Cite
    CEICdata.com (2023). Indonesia Export: Volume: Weighing Machines Scales for Continuous Weighing of Goods on Conveyors ; Using Electronic Means for Gauging Weight [Dataset]. https://www.ceicdata.com/en/indonesia/foreign-trade-by-hs-8-digits-export-hs84-nuclear-reactors-boilers-machinery-and-mechanical-appliances-parts-thereof/export-volume-weighing-machines-scales-for-continuous-weighing-of-goods-on-conveyors--using-electronic-means-for-gauging-weight
    Explore at:
    Dataset updated
    Sep 26, 2023
    Dataset provided by
    CEICdata.com
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Dec 1, 2022 - Nov 1, 2024
    Area covered
    Indonesia
    Description

    Indonesia Export: Volume: Weighing Machines Scales for Continuous Weighing of Goods on Conveyors ; Using Electronic Means for Gauging Weight data was reported at 0.000 kg mn in Nov 2024. This records a decrease from the previous number of 0.001 kg mn for Oct 2024. Indonesia Export: Volume: Weighing Machines Scales for Continuous Weighing of Goods on Conveyors ; Using Electronic Means for Gauging Weight data is updated monthly, averaging 0.000 kg mn from Jan 2019 (Median) to Nov 2024, with 35 observations. The data reached an all-time high of 0.003 kg mn in Feb 2022 and a record low of 0.000 kg mn in Jul 2023. Indonesia Export: Volume: Weighing Machines Scales for Continuous Weighing of Goods on Conveyors ; Using Electronic Means for Gauging Weight data remains active status in CEIC and is reported by Statistics Indonesia. The data is categorized under Indonesia Premium Database’s Foreign Trade – Table ID.JAH083: Foreign Trade: by HS 8 Digits: Export: HS84: Nuclear Reactors, Boilers, Machinery, and Mechanical Appliances, Parts Thereof.

  15. Klebsiella pneumoniae in the communit

    • scidb.cn
    Updated Jan 19, 2024
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    HongKui Sun (2024). Klebsiella pneumoniae in the communit [Dataset]. http://doi.org/10.57760/sciencedb.15375
    Explore at:
    Croissant: Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Jan 19, 2024
    Dataset provided by
    Science Data Bank
    Authors
    HongKui Sun
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    Continuous data were indicated with mean±SD (standard deviation) while categorical data were indicated with number and percentage (%). For comparisons of means between groups, the Mann-Whitney U test or Student’s independent t-test was used depending on the normality assumption. Categorical data were tested using the Chi-square test or Fisher’s exact test (if an expected value ≤ 5 was found). Spearman’s correlation coefficient was used to observe the relation among independent variables. Further, univariate and multivariate logistic regression models were used to analyze the association between independent variables and survival results. The independent variables that were significant in univariate analysis were entered into a multivariate model. Two kinds of multivariate models were used, including the enter method and the forward (Wald test) method. In the enter method, significant variables were recognized as associated factors. In the forward method with the Wald test, the combination of independent variables that best explained variation was reported. The estimated odds ratio (OR) and its 95% confidence interval (CI) were reported in all logistic regression results. The probabilities generated from the final multivariate logistic regression model were further validated by ROC analysis. The AUC and its 95% confidence interval (CI) were reported. All of the above analyses were performed using IBM SPSS Version 25 (SPSS Statistics V25, IBM Corporation, Somers, New York).
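    A minimal sketch of the comparison-and-regression pipeline described above; the file name and the column labels ("group", "age", "survived") are placeholders and the dataset's actual variables may differ.

      import pandas as pd
      from scipy import stats
      import statsmodels.api as sm
      from sklearn.metrics import roc_auc_score

      df = pd.read_csv("kpn_community.csv")                 # hypothetical file name
      a = df.loc[df["group"] == 1, "age"]
      b = df.loc[df["group"] == 0, "age"]

      # Mann-Whitney U test or Student's t-test depending on a normality check
      if stats.shapiro(a).pvalue > 0.05 and stats.shapiro(b).pvalue > 0.05:
          print(stats.ttest_ind(a, b))
      else:
          print(stats.mannwhitneyu(a, b))

      # Univariate logistic regression for one candidate predictor, validated by ROC AUC
      X = sm.add_constant(df[["age"]])
      fit = sm.Logit(df["survived"], X).fit(disp=0)
      print(fit.params)
      print("AUC:", roc_auc_score(df["survived"], fit.predict(X)))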

  16. Data from: Variability of bovine conceptus-related volumes in early...

    • data.mendeley.com
    Updated Mar 2, 2022
    Cite
    Simon Rotheneder (2022). Variability of bovine conceptus-related volumes in early pregnancy measured with transrectal three-dimensional ultrasonography [Dataset]. http://doi.org/10.17632/tw3nnrcrcb.1
    Explore at:
    Dataset updated
    Mar 2, 2022
    Authors
    Simon Rotheneder
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Supplemental Table S1. Predicted means (categorical data), beta values (continuous data) and standard errors of univariable models for crown-rump length (CRL) by different independent variables.

  17. Landsat Collection 1 Level 1 Product Definition

    • amerigeo.org
    Updated Jul 9, 2021
    Cite
    AmeriGEOSS (2021). Landsat Collection 1 Level 1 Product Definition [Dataset]. https://www.amerigeo.org/datasets/landsat-collection-1-level-1-product-definition
    Explore at:
    Dataset updated
    Jul 9, 2021
    Dataset authored and provided by
    AmeriGEOSS
    Description

    Executive Statement: To support analysis of the Landsat long-term data record that began in 1972, the USGS Landsat data archive was reorganized into a formal tiered data collection structure. This structure ensures all Landsat Level 1 products provide a consistent archive of known data quality to support time-series analysis and data “stacking”, while controlling continuous improvement of the archive and access to all data as they are acquired. Collection 1 Level 1 processing began in August 2016 and continued until all archived data was processed, completing in May 2018. Newly acquired Landsat 8 and Landsat 7 data continue to be processed into Collection 1 shortly after data is downlinked to USGS EROS. Learn more: https://www.usgs.gov/media/files/landsat-collection-1-level-1-product-definition

  18. Multi-Satellite Merged Ozone (O3) Profile and Total Column 1 Month Zonal...

    • data.nasa.gov
    • data.staging.idas-ds1.appdat.jsc.nasa.gov
    Updated Apr 1, 2025
    + more versions
    Cite
    nasa.gov (2025). Multi-Satellite Merged Ozone (O3) Profile and Total Column 1 Month Zonal Mean L3 Global 5.0 degree Latitude Zones V1 (MSO3L3zm5) at GES DISC - Dataset - NASA Open Data Portal [Dataset]. https://data.nasa.gov/dataset/multi-satellite-merged-ozone-o3-profile-and-total-column-1-month-zonal-mean-l3-global-5-0--83f2b
    Explore at:
    Dataset updated
    Apr 1, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Description

    The merged-satellite Solar Backscattered Ultraviolet (SBUV) Level-3 monthly zonal mean (MZM) product (MSO3L3zm5) contains 1 month zonal means for profile layer and total column ozone based on v8.6 SBUV data from the Nimbus-4 BUV, Nimbus-7 SBUV, and NOAA-9 through NOAA-18 SBUV/2 instruments. The v8.6 SBUV algorithm estimates the ozone nadir profile and total column from SBUV measurements, and differs from the v8.0 SBUV algorithm via the use of 1) the Brion-Daumont-Malicet ozone cross sections, 2) an OMI-derived cloud-height climatology, 3) a revised a priori ozone climatology, and 4) inter-instrument calibration based on comparisons with no local time difference. The MSO3L3zm5 product is stored as a single HDF5 file, and has a size of 0.4 MB. The MZM product contains 5.0-degree-wide latitude zones with data between latitude -80.0 and 80.0 degrees. The data cover the time period from May 1970 through July 2013. Data coverage during the BUV mission from 1970 - 1977 contains many gaps after October 1973, and there are no data between November 1976 and November 1978. Continuous data coverage begins with SBUV and SBUV/2 missions starting November 1978.

  19. Data_Sheet_1_Distinction Between Variability-Based Modulation and Mean-Based...

    • frontiersin.figshare.com
    pdf
    Updated Jun 1, 2023
    Cite
    Pei-Wen Zhang; Xiu-Juan Qu; Shu-Fang Qian; Xin-Bo Wang; Rui-Di Wang; Qiu-Yue Li; Shi-Yu Liu; Lihong Chen; Dong-Qiang Liu (2023). Data_Sheet_1_Distinction Between Variability-Based Modulation and Mean-Based Activation Revealed by BOLD-fMRI and Eyes-Open/Eyes-Closed Contrast.PDF [Dataset]. http://doi.org/10.3389/fnins.2018.00516.s001
    Explore at:
    Available download formats: pdf
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    Frontiers
    Authors
    Pei-Wen Zhang; Xiu-Juan Qu; Shu-Fang Qian; Xin-Bo Wang; Rui-Di Wang; Qiu-Yue Li; Shi-Yu Liu; Lihong Chen; Dong-Qiang Liu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Recent BOLD-fMRI studies have revealed a spatial distinction between variability- and mean-based between-condition differences, suggesting that BOLD variability could offer complementary and even orthogonal views of brain function relative to traditional activation. However, these findings were mainly observed in block-designed fMRI studies. As block design may not be appropriate for characterizing the low-frequency dynamics of the BOLD signal, the evidence suggesting the distinction between BOLD variability and mean is less convincing. Based on the high reproducibility of signal variability modulation between continuous eyes-open (EO) and eyes-closed (EC) states, here we employed an EO/EC paradigm and BOLD-fMRI to compare variability- and mean-based EO/EC differences while the subjects were in light. The comparisons were made on both block-designed and continuous EO/EC data. Our results demonstrated that the spatial patterns of variability- and mean-based EO/EC differences were largely distinct from each other, for both block-designed and continuous data. For continuous data, increases of BOLD variability were found in secondary visual cortex and decreases were mainly in primary auditory cortex, primary sensorimotor cortex and medial nuclei of thalamus, whereas no significant mean-based differences were observed. For the block-designed data, the pattern of increased variability resembled that of continuous data and the negative regions were restricted to medial thalamus and a few clusters in auditory and sensorimotor networks, whereas activation regions were mainly located in primary visual cortex and lateral nuclei of thalamus. Furthermore, with the expanding window analyses we found that variability results of continuous data exhibited a much slower dynamical process than is typically considered for task activation, suggesting block design is less optimal than continuous design in characterizing BOLD variability. In sum, we provided more solid evidence that variability-based modulation could represent a view of brain function orthogonal to traditional mean-based activation.
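    A minimal sketch of the variability-versus-mean contrast the abstract describes, computed on toy voxel time series rather than the study's fMRI data or preprocessing.

      import numpy as np

      rng = np.random.default_rng(0)
      n_vox, n_t = 1000, 240
      bold_eo = rng.normal(0.0, 1.0, (n_vox, n_t))  # eyes-open runs (toy data)
      bold_ec = rng.normal(0.0, 1.2, (n_vox, n_t))  # eyes-closed runs (toy data)

      mean_diff = bold_eo.mean(axis=1) - bold_ec.mean(axis=1)  # mean-based "activation" contrast
      sd_diff = bold_eo.std(axis=1) - bold_ec.std(axis=1)      # variability-based contrast

      # The two maps can dissociate; their voxelwise correlation quantifies the overlap
      print("Correlation of mean-based and variability-based contrasts:",
            np.corrcoef(mean_diff, sd_diff)[0, 1])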

  20. ATLAS/ICESat-2 L3B Mean Inland Surface Water Data V003 | gimi9.com

    • gimi9.com
    Updated May 28, 2017
    Cite
    (2017). ATLAS/ICESat-2 L3B Mean Inland Surface Water Data V003 | gimi9.com [Dataset]. https://gimi9.com/dataset/data-gov_atlas-icesat-2-l3b-mean-inland-surface-water-data-v003/
    Explore at:
    Dataset updated
    May 28, 2017
    Description

    ATL22 is a derivative of the continuous Level 3A ATL13 Along Track Inland Surface Water Data product. ATL13 contains the high-resolution, along-track inland water surface profiles derived from analysis of the geolocated photon clouds from the ATL03 product. Starting from ATL13, ATL22 computes the mean surface water quantities with no additional photon analysis. The two data products, ATL22 and ATL13, can be used in conjunction as they include the same orbit and water body nomenclature independent from version numbers.
