19 datasets found
  1. simon-arc-histogram-v8

    • huggingface.co
    Updated Jul 26, 2024
    + more versions
    Cite
    Simon Strandgaard (2024). simon-arc-histogram-v8 [Dataset]. https://huggingface.co/datasets/neoneye/simon-arc-histogram-v8
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jul 26, 2024
    Authors
    Simon Strandgaard
    License

    MIT License (https://opensource.org/licenses/MIT)
    License information was derived automatically

    Description

    Version 1

    The counters are in the range 1-20.

    Version 2

    The counters are in the range 1-50.

    Version 3

    The counters are in the range 1-100.

    Version 4

    The counters are in the range 1-200. Histogram.remove_other_colors() added.

    Version 5

    I forgot to update the range of the counters when doing comparisons. Now the counters are in the range 1-100.

    Version 6

    The counters are in the range 1-200.

    Version 7

    The counters are in… See the full description on the dataset page: https://huggingface.co/datasets/neoneye/simon-arc-histogram-v8.
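    The changelog above mentions counter ranges and a Histogram.remove_other_colors() helper. As a rough illustration only — the dataset's actual implementation is not shown here, and the keep-only-shared-colors semantics is an assumption:

    ```python
    from collections import Counter

    class Histogram:
        """Sketch of a color-count histogram; counters fall in a range such as 1-200."""
        def __init__(self, counts):
            self.counts = Counter(counts)

        def remove_other_colors(self, other):
            # Assumed semantics: keep only the colors that also occur in `other`.
            return Histogram({c: n for c, n in self.counts.items() if c in other.counts})

    h = Histogram({"red": 120, "blue": 7, "green": 200})
    mask = Histogram({"red": 1, "green": 3})
    kept = h.remove_other_colors(mask)
    ```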

  2. Crystal Rock and Trib. 104 Histogram Data, 2016, Montgomery County, MD

    • catalog.data.gov
    • search.dataone.org
    Updated Jul 6, 2024
    Cite
    U.S. Geological Survey (2024). Crystal Rock and Trib. 104 Histogram Data, 2016, Montgomery County, MD [Dataset]. https://catalog.data.gov/dataset/crystal-rock-and-trib-104-histogram-data-2016-montgomery-county-md
    Explore at:
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    U.S. Geological Survey
    Area covered
    Montgomery County, Maryland
    Description

    This data release includes the data used to generate histograms comparing total watershed pollutant removal efficiency (TWPRE) in the two study watersheds, Crystal Rock (traditional watershed) and Tributary (Trib.) 104 (low impact development, LID, watershed), to determine whether LID BMP design offered an improved water quality benefit. Input/calibrant data used in the (Monte Carlo) model are described in the manuscript as listed below:

    - BMP Name and Type: referenced in the manuscript
    - BMP Connectivity: proprietary (derived from Montgomery County GIS data)
    - BMP Drainage Areas: proprietary (derived from Montgomery County GIS data)
    - BMP Efficiency Ranges: referenced in the manuscript
    - Baseline Pollutant Loadings: referenced in the manuscript

    Stormwater runoff and associated pollutants from urban areas in the Chesapeake Bay Watershed represent a serious impairment to local streams and downstream ecosystems, despite urbanized land comprising only 7% of the Bay watershed area. Excess nitrogen, phosphorus, and sediment affect local streams in the Bay watershed by causing problems ranging from eutrophication and toxic algal blooms to reduced oxygen levels and loss of biodiversity. Traditional management of urban stormwater has primarily focused on directing runoff away from developed areas as quickly as possible. More recently, stormwater best management practices (BMPs) have been implemented in a low impact development (LID) manner on the landscape to treat stormwater runoff closer to its source. The objective of this research was to use a modeling approach to compare the total watershed pollutant removal efficiency (TWPRE) of two watersheds with differing spatial patterns of SW BMP design (traditional and LID), and to determine if LID SW BMP design offered an improved water quality benefit.
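    The Monte Carlo comparison described above can be approximated in outline: sample each BMP's removal efficiency from its reported range and area-weight the results. Everything below (function name, drainage fractions, efficiency ranges) is a hypothetical sketch, not the study's actual model:

    ```python
    import random

    def simulate_twpre(bmps, n_iter=10_000, seed=1):
        """Monte Carlo sketch: `bmps` is a list of
        (drainage_area_fraction, efficiency_min, efficiency_max) tuples."""
        rng = random.Random(seed)
        return [
            sum(frac * rng.uniform(lo, hi) for frac, lo, hi in bmps)
            for _ in range(n_iter)
        ]

    # Hypothetical watershed: two BMPs treating 40% and 30% of the drainage area.
    samples = simulate_twpre([(0.4, 0.20, 0.60), (0.3, 0.10, 0.50)])
    ```

    Comparing the resulting TWPRE distributions (e.g. as histograms) between the traditional and LID watersheds is then straightforward.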

  3. Daily histograms of wind speed (100m), wind direction (100m) and atmospheric...

    • data.dtu.dk
    zip
    Updated Feb 28, 2025
    Cite
    Marc Imberger (2025). Daily histograms of wind speed (100m), wind direction (100m) and atmospheric stability derived from ERA5 [Dataset]. http://doi.org/10.11583/DTU.27930399.v1
    Explore at:
    Available download formats: zip
    Dataset updated
    Feb 28, 2025
    Dataset provided by
    Technical University of Denmark
    Authors
    Marc Imberger
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains daily histograms of wind speed at 100 m ("WS100"), wind direction at 100 m ("WD100") and an atmospheric stability proxy ("STAB") derived from the ERA5 hourly data on single levels [1], accessed via the Copernicus Climate Change Service (C3S) Climate Data Store [2]. The dataset covers six geographical regions (illustrated in regions.png) on a reduced 0.5 x 0.5 degree regular grid and spans the period 1994 to 2023 (both years included). The dataset is packaged as one zip folder per region, each containing monthly zip folders that follow the convention of zarr ZipStores (more details: https://zarr.readthedocs.io/en/stable/api/storage.html). The monthly zip folders are therefore intended to be used with the xarray Python package (no unzipping of the monthly files needed).

    Wind speed and wind direction are derived from the U- and V-components. The stability metric uses a 5-class classification scheme [3] based on the Obukhov length, which was computed following [4]. The following bins (left edges) were used to create the histograms:

    • Wind speed: [0, 40) m/s (bin width 1 m/s)
    • Wind direction: [0, 360) deg (bin width 15 deg)
    • Stability: 5 discrete stability classes (1: very unstable, 2: unstable, 3: neutral, 4: stable, 5: very stable)

    Main purpose: the dataset serves as minimum input data for the CLIMatological REPresentative PERiods (climrepper) Python package (https://gitlab.windenergy.dtu.dk/climrepper/climrepper) in preparation for its public release.

    References:
    [1] Hersbach, H., Bell, B., Berrisford, P., Biavati, G., Horányi, A., Muñoz Sabater, J., Nicolas, J., Peubey, C., Radu, R., Rozum, I., Schepers, D., Simmons, A., Soci, C., Dee, D., Thépaut, J-N. (2023): ERA5 hourly data on single levels from 1940 to present. Copernicus Climate Change Service (C3S) Climate Data Store (CDS). DOI: 10.24381/cds.adbb2d47 (accessed Nov. 2024)
    [2] Copernicus Climate Change Service, Climate Data Store (2023): ERA5 hourly data on single levels from 1940 to present. Copernicus Climate Change Service (C3S) Climate Data Store (CDS). DOI: 10.24381/cds.adbb2d47 (accessed Nov. 2024)
    [3] Holtslag, M. C., Bierbooms, W. A. A. M., & Bussel, G. J. W. van (2014). Estimating atmospheric stability from observations and correcting wind shear models accordingly. Journal of Physics: Conference Series, 555, 012052. IOP Publishing. https://doi.org/10.1088/1742-6596/555/1/012052
    [4] Copernicus Knowledge Base, ERA5: How to calculate Obukhov Length. URL: https://confluence.ecmwf.int/display/CKB/ERA5:+How+to+calculate+Obukhov+Length (last accessed Nov. 2024)
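    Deriving wind speed and meteorological wind direction from the U- and V-components, and mapping values to the stated bin widths, can be sketched as follows (function names are illustrative; the dataset's own processing code is not shown here):

    ```python
    import math

    def wind_speed(u, v):
        """Horizontal wind speed from the U (eastward) and V (northward) components."""
        return math.hypot(u, v)

    def wind_direction(u, v):
        """Meteorological convention: direction the wind blows FROM,
        in degrees clockwise from north."""
        return (math.degrees(math.atan2(u, v)) + 180.0) % 360.0

    def bin_index(value, width):
        """Left-edge bin index for the histogram layouts described above."""
        return int(value // width)

    # A westerly wind (u = 10 m/s, v = 0) blows from 270 degrees.
    ws = wind_speed(10.0, 0.0)      # 10.0 m/s -> wind-speed bin 10 (1 m/s bins)
    wd = wind_direction(10.0, 0.0)  # 270.0 deg -> direction bin 18 (15 deg bins)
    ```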

  4. MESSENGER H XRS REDUCED DATA RECORD (RDR) MAPS V1.0

    • catalog.data.gov
    • s.cnmilf.com
    • +3more
    Updated Apr 11, 2025
    + more versions
    Cite
    National Aeronautics and Space Administration (2025). MESSENGER H XRS REDUCED DATA RECORD (RDR) MAPS V1.0 [Dataset]. https://catalog.data.gov/dataset/messenger-h-xrs-reduced-data-record-rdr-maps-v1-0-5426b
    Explore at:
    Dataset updated
    Apr 11, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Description

    Abstract: This data set consists of the MESSENGER XRS reduced data record observations, also known as RDRs, which are derived from the calibrated data records, CDRs. Each XRS observation results in four X-ray spectra. When an X-ray interacts with one of the four detectors, a charge or voltage pulse is generated. This signal is converted into one of 2^8 (256) channels, which are correlated to energy. Over a commanded integration time period a histogram of counts as a function of energy (channel number) is recorded. The EDRs are the number of events in each channel of the four detectors accumulated over the integration period. Channels above or below the useful energy range of the detectors are not transmitted. The result is three 244-channel GPC histograms and one 231-channel solar monitor histogram, each of which is designated as a single X-ray spectrum. Each observation is calibrated and processed into the CDR data set and then further processed to produce a map of elemental ratios, the maps of which compose the RDR data set.
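    The per-channel accumulation described above amounts to a simple counts-versus-channel histogram; a minimal sketch (the choice of which edge channels are dropped before transmission is instrument-specific and is assumed here purely for illustration):

    ```python
    def accumulate_spectrum(events, n_channels=256):
        """EDR-style histogram: counts per channel over one integration period."""
        hist = [0] * n_channels
        for channel in events:
            hist[channel] += 1
        return hist

    def transmitted_window(hist, n_keep=244):
        # Assumption for illustration only: untransmitted channels sit
        # symmetrically at the edges of the 256-channel range.
        drop = len(hist) - n_keep
        return hist[drop // 2 : len(hist) - (drop - drop // 2)]

    spectrum = accumulate_spectrum([5, 5, 200, 7])
    gpc_spectrum = transmitted_window(spectrum)  # 244 channels
    ```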

  5. MESSENGER E/V/H XRS UNCALIBRATED (EDR) DATA V1.0

    • catalog.data.gov
    • datasets.ai
    • +4more
    Updated Apr 10, 2025
    Cite
    National Aeronautics and Space Administration (2025). MESSENGER E/V/H XRS UNCALIBRATED (EDR) DATA V1.0 [Dataset]. https://catalog.data.gov/dataset/messenger-e-v-h-xrs-uncalibrated-edr-data-v1-0-49807
    Explore at:
    Dataset updated
    Apr 10, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Description

    Abstract: This data set consists of the MESSENGER XRS uncalibrated observations, also known as EDRs. Each XRS observation results in four X-ray spectra. When an X-ray interacts with one of the four detectors, a charge or voltage pulse is generated. This signal is converted into one of 2^8 (256) channels, which are correlated to energy. Over a commanded integration time period a histogram of counts as a function of energy (channel number) is recorded. The EDRs are the number of events in each channel of the four detectors accumulated over the integration period. Channels above or below the useful energy range of the detectors are not transmitted. The result is three 244-channel GPC histograms and one 231-channel solar monitor histogram, each of which is designated as a single X-ray spectrum. In addition to the science data, associated instrument parameters are included.

  6. MESSENGER H XRS 5 REDUCED DATA RECORD (RDR) FOOTPRINTS V1.0

    • catalog.data.gov
    • data.nasa.gov
    • +2more
    Updated Apr 10, 2025
    + more versions
    Cite
    National Aeronautics and Space Administration (2025). MESSENGER H XRS 5 REDUCED DATA RECORD (RDR) FOOTPRINTS V1.0 [Dataset]. https://catalog.data.gov/dataset/messenger-h-xrs-5-reduced-data-record-rdr-footprints-v1-0-4033a
    Explore at:
    Dataset updated
    Apr 10, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Description

    Abstract: This data set consists of the MESSENGER XRS reduced data record (RDR) footprints which are derived from the navigational meta-data for each calibrated data record (CDR) whose FOV_STATUS is 1 or 3; that is, when the field of view intersects the planet and is either partially or entirely sunlit. Each XRS observation results in four X-ray spectra. When an X-ray interacts with one of the four detectors, a charge or voltage pulse is generated. This signal is converted into one of 2^8 (256) channels, which are correlated to energy. Over a commanded integration time period a histogram of counts as a function of energy (channel number) is recorded. The EDRs are the number of events in each channel of the four detectors accumulated over the integration period. Channels above or below the useful energy range of the detectors are not transmitted. The result is three 244-channel GPC histograms and one 231-channel solar monitor histogram, each of which is designated as a single X-ray spectrum. Each observation is calibrated and processed into the CDR data set. For each CDR whose field of view is contained or partially contained on the planetary surface, a footprint is computed that corresponds to the perimeter of the planetary region within the instrument field of view during the integration time of the observation.

  7. MESSENGER H XRS REDUCED DATA RECORD (RDR) MAPS V1.0 | gimi9.com

    • gimi9.com
    Updated Mar 10, 2014
    + more versions
    Cite
    (2014). MESSENGER H XRS REDUCED DATA RECORD (RDR) MAPS V1.0 | gimi9.com [Dataset]. https://gimi9.com/dataset/data-gov_messenger-h-xrs-reduced-data-record-rdr-maps-v1-0-5426b
    Explore at:
    Dataset updated
    Mar 10, 2014
    License

    U.S. Government Works (https://www.usa.gov/government-works)
    License information was derived automatically

    Description

    Abstract: This data set consists of the MESSENGER XRS reduced data record observations, also known as RDRs, which are derived from the calibrated data records, CDRs. Each XRS observation results in four X-ray spectra. When an X-ray interacts with one of the four detectors, a charge or voltage pulse is generated. This signal is converted into one of 2^8 (256) channels, which are correlated to energy. Over a commanded integration time period a histogram of counts as a function of energy (channel number) is recorded. The EDRs are the number of events in each channel of the four detectors accumulated over the integration period. Channels above or below the useful energy range of the detectors are not transmitted. The result is three 244-channel GPC histograms and one 231-channel solar monitor histogram, each of which is designated as a single X-ray spectrum. Each observation is calibrated and processed into the CDR data set and then further processed to produce a map of elemental ratios, the maps of which compose the RDR data set.

  8. Node status in WSN

    • kaggle.com
    Updated May 5, 2024
    Cite
    SMILIKA SANGAM (2024). Node status in WSN [Dataset]. https://www.kaggle.com/datasets/smilikasangam/node-status-in-wsn
    Explore at:
    Croissant
    Dataset updated
    May 5, 2024
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    SMILIKA SANGAM
    License

    https://www.licenses.ai/ai-licenses

    Description

    This dataset comprises sensor readings collected from various sensors deployed in an environment. Each entry in the dataset includes the following information:

    1. SensorID: Identifier for the sensor.
    2. Timestamp: The date and time when the data was recorded.
    3. SensorType: Type of sensor used.
    4. X, Y: Coordinates of the sensor's location.
    5. SensorData: Recorded data from the sensor.
    6. BatteryLife: Remaining battery life of the sensor.
    7. Temperature: Temperature reading from the sensor.
    8. IsFaulty: Indicates whether the sensor is faulty (binary: 0 for non-faulty, 1 for faulty).
    9. Label: A categorical label assigned to the data.
    10. Count: The count of data instances falling within specific ranges for each label.

    The dataset also includes additional information in the form of histograms and time series data:

    • Histograms depict the distribution of certain parameters like temperature, humidity, etc., across different ranges.
    • Time series data provides sequential readings of temperature, humidity, and pressure over time, along with associated timestamps.

    This dataset is valuable for tasks such as anomaly detection, predictive maintenance, and environmental monitoring.
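    Given the column layout above, a minimal loading sketch (the inline sample rows are invented for illustration; real files from the dataset will differ):

    ```python
    import csv
    import io

    # Invented sample rows using the column names listed above.
    raw = io.StringIO(
        "SensorID,Timestamp,SensorType,X,Y,SensorData,BatteryLife,"
        "Temperature,IsFaulty,Label,Count\n"
        "S1,2024-05-01 10:00,temperature,1.0,2.0,23.5,88,23.5,0,normal,4\n"
        "S2,2024-05-01 10:00,temperature,3.0,4.0,99.9,12,41.2,1,anomaly,1\n"
    )
    rows = list(csv.DictReader(raw))
    faulty_ids = [r["SensorID"] for r in rows if r["IsFaulty"] == "1"]
    ```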

  9. Histogram-Based Calibration Method for Pipeline ADCs

    • plos.figshare.com
    • figshare.com
    xlsx
    Updated Jun 1, 2023
    Cite
    Hyeonuk Son; Jaewon Jang; Heetae Kim; Sungho Kang (2023). Histogram-Based Calibration Method for Pipeline ADCs [Dataset]. http://doi.org/10.1371/journal.pone.0129736
    Explore at:
    Available download formats: xlsx
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Hyeonuk Son; Jaewon Jang; Heetae Kim; Sungho Kang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Measurement and calibration of an analog-to-digital converter (ADC) using a histogram-based method requires a large volume of data and a long test duration, especially for a high-resolution ADC. A fast and accurate calibration method for pipelined ADCs is proposed in this research. The proposed calibration method composes histograms from the outputs of each stage and calculates the error sources. The digitized outputs of a stage are directly influenced by the operation of the prior stage, so the histogram results provide information about errors in the prior stage. The composed histograms reduce the number of required samples, and thus the calibration time, and can be implemented with simple modules. For a 14-bit pipelined ADC, the measured maximum integral non-linearity (INL) is improved from 6.78 to 0.52 LSB, and the spurious-free dynamic range (SFDR) and signal-to-noise-and-distortion ratio (SNDR) are improved from 67.0 to 106.2 dB and from 65.6 to 84.8 dB, respectively.
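    As background, the classic code-density (histogram) test from which DNL/INL figures like those above are derived can be sketched as follows. This is the generic technique, not the authors' specific per-stage algorithm:

    ```python
    def dnl_inl(code_hist):
        """Code-density sketch: with a uniform full-scale ramp input, every code of
        an ideal ADC is hit equally often. DNL_k = h_k / mean(h) - 1, in LSB;
        INL is the running sum of DNL (end-point correction omitted for brevity)."""
        mean = sum(code_hist) / len(code_hist)
        dnl = [h / mean - 1.0 for h in code_hist]
        inl, acc = [], 0.0
        for d in dnl:
            acc += d
            inl.append(acc)
        return dnl, inl

    # Ideal 3-bit converter: all 8 codes equally likely, so DNL and INL are zero.
    dnl, inl = dnl_inl([100] * 8)
    ```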

  10. MBU - A Comprehensive Synthetic Underwater Image Dataset

    • data.mendeley.com
    Updated Nov 15, 2024
    + more versions
    Cite
    Purnima Kuruma (2024). MBU - A Comprehensive Synthetic Underwater Image Dataset [Dataset]. http://doi.org/10.17632/2mcwfc5dvs.3
    Explore at:
    Dataset updated
    Nov 15, 2024
    Authors
    Purnima Kuruma
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The dataset contains 100 photographic images that are treated as ground-truth images. On each ground-truth image, the effects of the underwater environment are applied and 150 synthetic underwater images are generated. Hence, the dataset contains 100 ground-truth photographic images and 100*150 = 15,000 synthetic underwater images. Four effects of the underwater environment, i.e., color cast, blurring, contrast reduction, and low light, are considered. These effects are applied individually and in combinations: the four effects result in a total of 15 combinations, and the effect of each combination is varied over 10 levels, yielding 150 images per ground-truth image. In addition, 21 focus metrics are evaluated on all 15,100 images. The metrics calculated are Absolute central moment (ACMO), Brenner's focus measure (BREN), Image curvature (CURV), Gray-level variance (GLVA), Gray-level local variance (GLLV), Gray-level variance normalized (GLVN), Squared gradient (GRAS), Helmli's measure (HELM), Histogram entropy (HISE), Histogram range (HISR), Energy of Laplacian (LAPE), Diagonal Laplacian (LAPD), Modified Laplacian (LAPM), Variance of Laplacian (LAPV), Tenengrad variance (TENV), Vollath's correlation (VOLA), Wavelet ratio (WAVR), Wavelet sum (WAVS), and Wavelet variance (WAVV). In addition, 7 statistical measures are calculated: Mean Intensity, Standard Deviation, Skewness, Kurtosis, Entropy, Contrast, and Sharpness (Laplacian Variance). The literature categorizes these metrics as gradient-based and non-gradient-based.
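    Several of the listed focus metrics are simple to reproduce; for example, Variance of Laplacian (LAPV) can be sketched with NumPy (a generic formulation, assuming NumPy is available; the dataset's exact implementation may differ):

    ```python
    import numpy as np

    def laplacian_variance(img):
        """LAPV focus metric: variance of the 4-neighbour discrete Laplacian.
        Sharper images have more high-frequency content, hence a larger LAPV."""
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
               + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
        return float(lap[1:-1, 1:-1].var())  # drop the wrap-around border

    rng = np.random.default_rng(0)
    sharp = rng.random((64, 64))
    # Simple neighbour-averaging stands in for the dataset's blurring degradation.
    blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, -1, 0)
               + np.roll(sharp, 1, 1) + np.roll(sharp, -1, 1)) / 5.0
    ```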

  11. Smartphone sensor data (accelerometer, virtual keyboard) collected...

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Dec 9, 2020
    Cite
    Alexandros Papadopoulos; Dimitrios Iakovakis; Lisa Klingelhoefer; Sevasti Bostantjopoulou; Kallol Ray Chaudhuri; Konstantinos Kyritsis; Stelios Hadjidimitriou; Vasileios Charisis; Leontios J. Hadjileontiadis; Anastasios Delopoulos (2020). Smartphone sensor data (accelerometer, virtual keyboard) collected in-the-wild by Parkinson's Disease patients and Healthy Controls [Dataset]. http://doi.org/10.5281/zenodo.4311175
    Explore at:
    Available download formats: zip
    Dataset updated
    Dec 9, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Alexandros Papadopoulos; Dimitrios Iakovakis; Lisa Klingelhoefer; Sevasti Bostantjopoulou; Kallol Ray Chaudhuri; Konstantinos Kyritsis; Stelios Hadjidimitriou; Vasileios Charisis; Leontios J. Hadjileontiadis; Anastasios Delopoulos
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    For a detailed description of the dataset, see the relevant journal article.

    Python code for model inference and training is available here.

    DESCRIPTION

    The dataset contains accelerometer recordings and keyboard typing data contributed by Parkinson's Disease patients and Healthy Controls. Accelerometer data consist of acceleration values recorded during phone calls, and typing data consist of virtual keyboard press and release timestamps. The dataset is divided into two parts: the first part, called SData, contains data from a small, medically evaluated set of users, while the second part, called GData, contains recordings from a large body of users with self-reported PD labels.

    The dataset is organized into 5 pickle files:

    1. imu_sdata.pickle: Contains the tri-axial accelerometer recordings for the SData part of the dataset, with one entry per participating subject. Accelerometer data have been pre-processed to a sampling frequency of 100 Hz and come segmented into non-overlapping 5-second windows; hence, each segment has dimensions 500 x 3 samples.

    Sample Python code for accessing the acceleration data of a subject

    import pickle

    sdata = pickle.load(open('imu_sdata.pickle', 'rb'))
    subject_list = list(sdata.keys())
    
    ## Data for first subject
    subject_data = sdata[subject_list[0]] # subject_data is a list of length 4
    
    ## The actual data is in the last element of the list
    acc_segments = subject_data[-1]
    num_acc_sessions_for_subject = len(acc_segments)
    
    acc_segments_for_first_session = acc_segments[0]
    acc_segments_for_second_session = acc_segments[1]
    # ..etc
    
    In: print(acc_segments_for_first_session.shape)
    Out: (3, 500, 3)
    ## The first accelerometer session for this subject consists of 3 five-second segments.
    
    In: print(acc_segments_for_second_session.shape)
    Out: (8, 500, 3)
    
    ## The second accelerometer session for this subject consists of 8 five-second segments.

    2. imu_gdata.pickle: Same layout as imu_sdata.pickle but with data from GData subjects.

    3. typing_sdata.pickle: This file contains the typing data originating from the SData part of the dataset, with one entry per subject. The typing data are given in the form of concatenated hold time (the time elapsed between press and release of a virtual key) and flight time (the time between releasing a key and pressing the next) histograms, computed over 10 ms bins in the range of [0, 1] s for hold time and [0, 4] s for flight time (for each, an additional overflow bin holds the values in the (1, +oo) or (4, +oo) interval). The total length of the concatenated histogram is therefore 1000/10 + 1 + 4000/10 + 1 = 502.

    Sample Python code for accessing the typing data of a subject:

    import pickle

    sdata = pickle.load(open('typing_sdata.pickle', 'rb'))
    subject_list = list(sdata.keys())
    
    ## Data for first subject
    subject_data = sdata[subject_list[0]]
    
    ## The actual data is in the first element of the list
    typing_histograms = subject_data[0]
    num_typing_sessions_for_subject = len(typing_histograms)
    
    typing_hist_for_first_session = typing_histograms[0]
    typing_hist_for_second_session = typing_histograms[1]
    # ..etc
    
    In: print(typing_hist_for_first_session.shape)
    Out: (502, )
    
    ht_hist = typing_hist_for_first_session[:101] # Hold time histogram of the session
    ft_hist = typing_hist_for_first_session[101:] # Flight time histogram of the session

    4. typing_gdata.pickle: Same layout as typing_sdata.pickle but with data from GData subjects.

    5. subject_metadata.pickle: Contains demographic information, with one entry per subject. The relevant demographic fields have the following interpretation:
    'age': Year of birth,
    'gender_id': 0 indicates male, 1 indicates female
    'healthstatus_id': 0 indicates PD patient, 1 indicates Healthy with PD family history, 2 indicates Healthy without PD family history

    In the case of SData subjects, there are also UPDRS symptom scores from one or two medical examinations. These are encoded in the fields med_eval_1 and med_eval_2.
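    The 502-bin layout described for the typing files (10 ms bins over [0, 1] s for hold time, [0, 4] s for flight time, plus one overflow bin each) can be reproduced from raw timestamps. A sketch with invented sample values, assuming NumPy is available:

    ```python
    import numpy as np

    def typing_histogram(hold_times_s, flight_times_s):
        """Concatenated hold-time (101 bins) and flight-time (401 bins) histogram."""
        ht_edges = np.concatenate([np.arange(101) * 0.01, [np.inf]])  # [0, 1] s + overflow
        ft_edges = np.concatenate([np.arange(401) * 0.01, [np.inf]])  # [0, 4] s + overflow
        ht_hist, _ = np.histogram(hold_times_s, bins=ht_edges)
        ft_hist, _ = np.histogram(flight_times_s, bins=ft_edges)
        return np.concatenate([ht_hist, ft_hist])  # length 101 + 401 = 502

    hist = typing_histogram([0.085, 0.125, 2.0], [0.305, 5.0])
    ```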

    ETHICS & FUNDING

    The study during which the present dataset was collected is a multi-center study approved in each country available (for more info visit: http://www.i-prognosis.eu/?page_id=3606). Informed consent, including permission for third-party access to pseudo-anonymised data, was obtained from all subjects prior to their engagement with the study. The work has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No 690494 - i-PROGNOSIS: Intelligent Parkinson early detection guiding novel supportive interventions (i-prognosis.eu).

    CORRESPONDENCE

    Any inquiries regarding this dataset should be addressed to:

    Mr. Alexandros Papadopoulos (Electrical & Computer Engineer, PhD candidate)

    Multimedia Understanding Group (MUG)
    Department of Electrical & Computer Engineering
    Aristotle University of Thessaloniki
    University Campus, Building C, 3rd floor
    Thessaloniki, Greece, GR54124

    Tel: +30 2310 996359, 996365
    Fax: +30 2310 996398
    E-mail: alpapado@mug.ee.auth.gr


  12. MESSENGER E/V/H XRS CALIBRATED (CDR) SPECTRA V1.0

    • catalog.data.gov
    • datasets.ai
    • +3more
    Updated Apr 10, 2025
    + more versions
    Cite
    National Aeronautics and Space Administration (2025). MESSENGER E/V/H XRS CALIBRATED (CDR) SPECTRA V1.0 [Dataset]. https://catalog.data.gov/dataset/messenger-e-v-h-xrs-calibrated-cdr-spectra-v1-0-40ea1
    Explore at:
    Dataset updated
    Apr 10, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Description

    Abstract: This data set consists of the MESSENGER XRS calibrated observations, also known as CDRs. Each XRS observation results in four X-ray spectra. When an X-ray interacts with one of the four detectors, a charge or voltage pulse is generated. This signal is converted into one of 2^8 (256) channels, which are correlated to energy. Over a commanded integration time period a histogram of counts as a function of energy (channel number) is recorded. The EDRs are the number of events in each channel of the four detectors accumulated over the integration period. Channels above or below the useful energy range of the detectors are not transmitted. The result is three 244-channel GPC histograms and one 231-channel solar monitor histogram, each of which is designated as a single X-ray spectrum. In addition to the science data, associated instrument parameters are included.

  13. [jumbo squid time-at-depth] - Time-at-depth data (to generate histograms)...

    • erddap.bco-dmo.org
    Updated Nov 14, 2019
    + more versions
    Cite
    BCO-DMO (2019). [jumbo squid time-at-depth] - Time-at-depth data (to generate histograms) from tagged jumbo squid from R/V R4107, R/V Pacific Storm, Chartered Vessels, R/V cruises in the Monterey Bay vicinity and Gulf of California from 2004-2009 (Hypoxia and the ecology, behavior and physiology of jumbo squid, Dosidicus gigas) [Dataset]. https://erddap.bco-dmo.org/erddap/info/bcodmo_dataset_471977/index.html
    Explore at:
    Dataset updated
    Nov 14, 2019
    Dataset provided by
    Biological and Chemical Oceanographic Data Management Office (BCO-DMO)
    Authors
    BCO-DMO
    License

    https://www.bco-dmo.org/dataset/471977/license

    Area covered
    Variables measured
    depth, tag_id, descrip, lat_end, lon_end, date_end, latitude, squid_id, count_day, longitude, and 7 more
    Description

    Time-at-depth data (to generate histograms) from tagged jumbo squid from R/V R4107, R/V Pacific Storm, Chartered Vessels, and R/V cruises in the Monterey Bay vicinity and Gulf of California from 2004-2009.

    Access formats: .htmlTable, .csv, .json, .mat, .nc, .tsv, .esriCsv, .geoJson

    Acquisition description: All data were collected with Mk10-PAT tags (Wildlife Computers, Redmond, WA) attached to living Humboldt squid (Dosidicus gigas) as described elsewhere (Gilly et al. 2006). Tags were programmed to sample at 0.5 Hz or 1 Hz. Tags deployed in Monterey Bay (CCS-1 through CCS-6; deployed during OCE-0850839) were programmed to transmit time-series data (75 s intervals = 0.01333 Hz) for depth, temperature and light to the Argos satellite system. Tags deployed in the Gulf of California (GOC-1 through GOC-6; deployed during OCE-0526640) were physically recovered, and the data were subsampled to match the 75 s interval of the CCS tags. This procedure was also carried out for tag CCS-6, which was recovered but never reported to Argos.

    Mk10 PAT tags measure depth from 0 to 2000 m with a resolution of 0.5 m and temperature from 0 to +40 degrees C with a resolution of 0.05 degree C. The tags were used as supplied by the manufacturer without additional calibration.

    References:
    Gilly, W.F., Zeidberg, L.D., Booth, J.A.T, Stewart, J.S., Marshall, G., Abernathy, K., and Bell, L.E. 2012. Locomotion and behavior of Humboldt squid, Dosidicus gigas, in relation to natural hypoxia in the Gulf of California, Mexico. The Journal of Experimental Biology, 215, 3175-3190. doi: 10.1242/jeb.072538.
    Gilly, W.F., Markaida, U., Baxter, C.H., Block, B.A., Boustany, A., Zeidberg, L., Reisenbichler, K., Robinson, B., Bazzino, G., and Salinas, C. 2006. Vertical and horizontal migrations by the jumbo squid Dosidicus gigas revealed by electronic tagging. Marine Ecology Progress Series, 324, 1-17. doi: 10.3354/meps324001.
    Stewart, J.S., Field, J.C., Markaida, U., and Gilly, W.F. 2013. Behavioral ecology of jumbo squid (Dosidicus gigas) in relation to oxygen minimum zones. Deep Sea Research Part II: Topical Studies in Oceanography, 95, 197-208. doi: 10.1016/j.dsr2.2012.06.005.

    Awards: NSF Division of Ocean Sciences (NSF OCE) OCE-0850839 (program manager: David L. Garrison); California Sea Grant (CASG) R/OPCFISH-06; NSF Division of Ocean Sciences (NSF OCE) OCE-0526640 (program manager: David L. Garrison).

    Comment: Jumbo squid (Dosidicus gigas) time-at-depth data from MK10 PAT tags, California Current System (CCS) and Gulf of California (GOC). PI: William Gilly (Stanford University). Version: 22 Nov 2013.

    NOTE: 1 count represents a 75-second interval (in the count_night and count_day columns).

    Conventions: COARDS, CF-1.6, ACDD-1.3. DOI: 10.1575/1912/bco-dmo.471977.1. Geospatial coverage: latitude 27.34 to 37.91 degrees north, longitude -123.48 to -111.22 degrees east, depth 0.0 to 1950.0 m (positive down). More information: https://www.bco-dmo.org/dataset/471977. Institution: BCO-DMO.

    Instrument: Wildlife Computers Mk10 Pop-up Archival Tag (PAT), supplied name "MK10 PAT". The Pop-up Archival Transmitting (Mk10-PAT) tag, manufactured by Wildlife Computers, combines archival and Argos satellite technology. It is designed to track the large-scale movements and behavior of fish and other animals that do not spend enough time at the surface to allow the use of real-time Argos satellite tags. The PAT can be configured to transmit time-at-depth and time-at-temperature histograms, depth-temperature profiles, and/or light-level curves; the histogram duration (1 to 24 hours) and bin ranges can also be configured. The PAT archives depth, temperature, and light-level data while being towed by the animal. At a user-specified date and time, the PAT actively corrodes the pin to which the tether is attached, releasing the PAT from the animal; the PAT then floats to the surface and transmits summarized information via the Argos system, which also uses the transmitted messages to provide the position of the tag at the time of release.

    People: William Gilly (Stanford University), Principal Investigator; Shannon Rauch (Woods Hole Oceanographic Institution, WHOI BCO-DMO), BCO-DMO Data Manager.

    Projects: Jumbo Squid Physiology; Jumbo Squid Vertical Migration. The Jumbo Squid Physiology project concerns the ecological physiology of Dosidicus gigas, a large squid endemic to the eastern Pacific, where it inhabits both open-ocean and continental-shelf environments. Questions to be addressed include: 1) How does utilization of the OML by D. gigas vary on both a daily and seasonal basis, and how do the vertical distributions of the OML and its associated fauna vary? 2) What behaviors of squid are impaired by conditions found in the OML, and how are impairments compensated to minimize the costs of utilizing this environment? 3) What are the physiological and biochemical processes by which squid maintain swimming activity at such remarkable levels under low-oxygen conditions?
The investigators will use an integrated approach involving oceanographic, acoustic, electronic tagging, physiological and biochemical methods. D. gigas provides a trophic connection between small, midwater organisms and top vertebrate predators, and daily vertical migrations between near-surface waters and a deep, low-oxygen environment (OML) characterize normal behavior of adult squid. Electronic tagging has shown that this squid can remain active for extended periods in the cold, hypoxic conditions of the upper OML. Laboratory studies have demonstrated suppression of aerobic metabolism during a cold, hypoxic challenge, but anaerobic metabolism does not appear to account for the level of activity maintained. Utilization of the OML in the wild may permit daytime foraging on midwater organisms. Foraging also occurs near the surface at night, and Dosidicus may thus be able to feed continuously. D. gigas is present in different regions of the Guaymas Basin on a predictable year-round basis, allowing changes in squid distribution to be related to changing oceanographic features on a variety of time scales. This research is of broad interest because Dosidicus gigas has substantially extended its range over the last decade, and foraging on commercially important finfish in invaded areas off California and Chile has been reported. In addition, the OML has expanded during the last several decades, mostly vertically by shoaling, including in the Gulf of Alaska, the Southern California Bight and several productive regions of tropical oceans, and a variety of ecological impacts will almost certainly accompany changes in the OML. Moreover, D. gigas currently supports the world's largest squid fishery, and this study will provide acoustic methods for reliable biomass estimates, with implications for fisheries management in Mexico and elsewhere. This award is funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5).
This is a Collaborative Research project encompassing three NSF-OCE awards. Background Publications: Stewart, J.S., Field, J.C., Markaida, U., and Gilly, W.F. 2013. Behavioral ecology of jumbo squid (Dosidicus gigas) in relation to oxygen minimum zones. Deep Sea Research Part II: Topical Studies in Oceanography, 95, 197-208. doi:10.1016/j.dsr2.2012.06.005. Gilly, W.F., Zeidberg, L.D., Booth, J.A.T, Stewart, J.S., Marshall, G., Abernathy, K., and Bell, L.E. 2012. Locomotion and behavior of Humboldt squid, Dosidicus gigas, in relation to natural hypoxia in the Gulf of California, Mexico. The Journal of Experimental Biology, 215, 3175-3190. doi: 10.1242/jeb.072538. Related Project:

  14. Long-term global vegetation and climate index datasets

    • zenodo.org
    sh, text/x-python
    Updated Mar 25, 2025
    Won-Jun Choi; Hwan-Jin Song (2025). Long-term global vegetation and climate index datasets [Dataset]. http://doi.org/10.5281/zenodo.15048700
    Explore at:
    Available download formats: sh, text/x-python
    Dataset updated
    Mar 25, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Won-Jun Choi; Hwan-Jin Song
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    NDVI Data Set (1. NDVI.nc)

    • Global Vegetation Greenness (NDVI) from AVHRR GIMMS-3G+, 1981-2022
    • Variable: Normalized Difference Vegetation Index (NDVI)
    • Area: Global (60°S ~ 70°N, 180°W ~ 180°E)
    • Period: 1982-01-01 ~ 2022-12-31
    • Horizontal resolution: 0.25° × 0.25° (Regridded from original 0.0833° × 0.0833°)
    • Temporal resolution: Bi-monthly (1st–15th and 16th–end of each month)
    • Source: https://daac.ornl.gov/cgi-bin/dsviewer.pl?ds_id=2187

    Meteorological Data Set (2.Temperature.nc, ... , 6.Cloud_cover.nc)

    • Agrometeorological indicators from 1979 to present derived from reanalysis, Copernicus Climate Change Service
    • Area: Global (60°S ~ 70°N, 180°W ~ 180°E)
    • Period: 1982-01-01 ~ 2022-12-31
    • Horizontal resolution: 0.25° × 0.25° (Regridded from original 0.1° × 0.1°)
    • Temporal resolution: Bi-monthly (1st–15th and 16th–end of each month)
    • The meteorological data were converted from daily values to bi-monthly average values.
    • Variables: 2m temperature (K), 2m relative humidity (%), 10m wind speed (m s⁻¹), Precipitation flux (mm day⁻¹), Solar radiation flux (J m⁻² day⁻¹), Cloud cover (dimensionless)
    • Source: https://cds.climate.copernicus.eu/datasets/sis-agrometeorological-indicators?tab=overview

    Pre-processing code (Set_data_1~3)

    • Set_data_1 : Combining raw data for NDVI and checking for missing values in the original data
    • Set_data_2 : Combining annual data, calculating semi-monthly averages, and setting the latitude and longitude ranges for meteorological data.
    • Set_data_3 : Synchronization of latitude, longitude, and resolution between NDVI and meteorological data.

    Analysis code (code1~5)

    • code_1 : This script processes climate data for variables by calculating their seasonal anomalies and time-averaged values. It performs the following steps:
      • Monthly Mean Calculation: The script first calculates the monthly mean for each variable over a period of 41 years.
      • Semi-Monthly Mean Calculation: It then computes the semi-monthly mean by combining the monthly mean data.
      • Anomaly Calculation: The script calculates the anomaly by subtracting the semi-monthly mean from the original data.
      • Time Mean Calculation: Finally, the time-mean for the entire time period is calculated for each variable.
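Under the layout stated above (41 years, 24 semi-monthly steps per year), the code_1 pipeline can be sketched as follows; the grid sizes, variable names, and synthetic data here are illustrative, not the actual script:

```python
import numpy as np

# Illustrative sketch of code_1 (assumed layout: 41 years x 24 semi-monthly
# steps per year on a lat/lon grid; shapes and names are hypothetical).
rng = np.random.default_rng(0)
data = rng.standard_normal((41 * 24, 10, 20))      # (time, lat, lon)

# Semi-monthly climatology: mean over the 41 years for each of the 24 steps.
clim = data.reshape(41, 24, 10, 20).mean(axis=0)   # (24, lat, lon)

# Anomaly: subtract the matching climatological step from each observation.
anom = data - np.tile(clim, (41, 1, 1))            # (time, lat, lon)

# Time mean over the entire record.
time_mean = data.mean(axis=0)                      # (lat, lon)
```

By construction, the anomalies average to zero across the 41 years for each semi-monthly step.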

    • code_2 : This script calculates the linear regression slope, intercept, correlation coefficient (r-value), p-value, and standard error for a given climate variable (in this case, temperature anomaly) over time at each latitude and longitude point. The steps involved are:
      • Load Data: The script loads the input NetCDF file and extracts the time and temperature anomaly (or other climate data) values.
      • Linear Regression: For each spatial point (latitude, longitude), the script performs a linear regression between time and the corresponding climate data to determine the slope, intercept, r-value, p-value, and standard error.
      • Save Results: The regression results are saved into a new NetCDF file with variables for slope, intercept, r-value, p-value, and standard error for each latitude and longitude point.
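A minimal sketch of this per-grid-point regression, using scipy.stats.linregress on a synthetic anomaly field (the grid size and the 0.05-per-year trend are invented for illustration; the NetCDF I/O is omitted):

```python
import numpy as np
from scipy.stats import linregress

# Illustrative sketch of code_2: fit a linear trend at every grid point.
rng = np.random.default_rng(0)
time = np.arange(100) / 24.0                         # fractional years
anom = 0.05 * time[:, None, None] + 0.001 * rng.standard_normal((100, 3, 4))

nlat, nlon = anom.shape[1:]
slope = np.empty((nlat, nlon))
pval = np.empty((nlat, nlon))
for i in range(nlat):
    for j in range(nlon):
        res = linregress(time, anom[:, i, j])        # slope, intercept, r, p, stderr
        slope[i, j] = res.slope
        pval[i, j] = res.pvalue
```

The same loop would also collect `res.intercept`, `res.rvalue`, and `res.stderr` into their own arrays before writing the output file.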

    • code_3 : This script processes NDVI (Normalized Difference Vegetation Index) data by performing the following steps:
      • Prepare Heatmap Data: It reshapes the NDVI data into a 4D array of shape (latitude, longitude, years, 24 semi-monthly steps), where each year contains 24 semi-monthly values.
      • Compute NDVI Histograms: It computes histograms of the NDVI data for each latitude, longitude, and year, adjusting the NDVI values into 20 bins for analysis.
      • Save Histogram Data: The histogram data is then saved to a .npy file, which stores the data for further analysis.
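The histogram step can be sketched like this, assuming NDVI values in [0, 1] and the (latitude, longitude, years, 24) layout described above; the grid sizes are illustrative:

```python
import numpy as np

# Illustrative sketch of code_3: one 20-bin NDVI histogram per
# (latitude, longitude, year) cell. Grid sizes are hypothetical.
rng = np.random.default_rng(1)
ndvi = rng.uniform(0.0, 1.0, size=(5, 6, 3, 24))   # (lat, lon, years, 24 steps)

bins = np.linspace(0.0, 1.0, 21)                   # 20 equal-width bins
hist = np.empty((5, 6, 3, 20), dtype=np.int64)
for i in range(5):
    for j in range(6):
        for y in range(3):
            hist[i, j, y], _ = np.histogram(ndvi[i, j, y], bins=bins)

np.save("ndvi_histograms.npy", hist)               # stored for the clustering step
```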

    • code_4 : This script performs k-means clustering on NDVI data, based on histograms of NDVI values:
      1. Load Data: It loads NDVI data and histogram data (NDVI values in bins) from files.
      2. Filter Data: It filters out regions with zero values to focus on areas of interest.
      3. Reshape Data: The data is reshaped into a 2D format to prepare for clustering.
      4. K-Means Clustering: The script applies k-means clustering to the reshaped histogram data.
      5. Mean NDVI Calculation: It calculates the mean NDVI value for each cluster by extracting values from the NDVI data.
      6. Reordering Clusters: The clusters are reordered based on their mean NDVI values.
      7. Save Results: Finally, the script saves the cluster labels and non-zero indices into separate files.
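Steps 3 through 6 can be sketched with scikit-learn's KMeans on synthetic histogram features; the reordering step (relabel clusters by the rank of their mean NDVI) is the part worth spelling out. All data here is synthetic; the real inputs are the per-pixel histograms from code_3:

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative sketch of code_4 on synthetic data.
rng = np.random.default_rng(2)
hist = rng.integers(0, 10, size=(50, 20)).astype(float)  # (pixels, 20 bins)
mean_ndvi = rng.uniform(0.0, 1.0, size=50)               # per-pixel mean NDVI

k = 4
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(hist)

# Reorder cluster ids so that label 0 has the lowest mean NDVI.
cluster_means = np.array([mean_ndvi[labels == c].mean() for c in range(k)])
order = np.argsort(cluster_means)
remap = np.empty(k, dtype=int)
remap[order] = np.arange(k)
new_labels = remap[labels]
```

After relabelling, cluster ids increase monotonically with greenness, which makes the cluster maps directly comparable across runs.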

    • code_5 : This script processes NDVI (Normalized Difference Vegetation Index) data by clustering and saving the data for each cluster.
      • Load Data
        • Loads NDVI slope data (slope) from a NetCDF file.
        • Loads precomputed cluster labels (cluster_labels_8.npy) and valid data locations (non_zero_indices_8.npy).
      • Save NDVI Data by Cluster
        • Categorizes NDVI data based on clusters.
        • Creates a 2D array for each cluster and assigns NDVI data to the corresponding cluster coordinates.
        • Saves the clustered NDVI data as .npy files for further analysis.
      • Create Directory and Execute
        • Creates the output directory (if it does not exist).
        • Runs the main function to save the clustered NDVI data.
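The per-cluster save step can be sketched as scattering slope values back onto a 2-D grid, one array per cluster. Grid size, cluster count, and file names below are hypothetical; the random `idx` and `labels` arrays stand in for non_zero_indices_8.npy and cluster_labels_8.npy:

```python
import os
import numpy as np

# Illustrative sketch of code_5: one gridded slope array per cluster,
# with NaN outside that cluster's pixels.
nlat, nlon = 8, 10
rng = np.random.default_rng(3)
slope = rng.standard_normal(nlat * nlon)                 # flattened slope field
idx = rng.choice(nlat * nlon, size=40, replace=False)    # valid (non-zero) pixels
labels = rng.integers(0, 3, size=40)                     # cluster id per valid pixel

os.makedirs("clusters", exist_ok=True)
for c in range(3):
    grid = np.full((nlat, nlon), np.nan)                 # NaN outside the cluster
    sel = idx[labels == c]
    grid.flat[sel] = slope[sel]
    np.save(f"clusters/ndvi_slope_cluster_{c}.npy", grid)
```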

    Acknowledgments

    This work was also supported by Global - Learning & Academic research institution for Master’s·PhD students, and Postdocs (LAMP) Program of the National Research Foundation of Korea (NRF) grant funded by the Ministry of Education (No. RS-2023-00301914).

  15. WWLLN Datasets for "A Terrestrial Gamma-ray Flash from the 2022 Hunga...

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jul 5, 2022
    Briggs, Michael S. (2022). WWLLN Datasets for "A Terrestrial Gamma-ray Flash from the 2022 Hunga Tonga–Hunga Ha'apai Volcanic Eruption" [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_6782048
    Explore at:
    Dataset updated
    Jul 5, 2022
    Dataset provided by
    Holzworth, R. H.
    Briggs, Michael S.
    Mailyan, B.
    Schultz, C.
    Lesage, S.
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Hunga Tonga, Tonga
    Description

    These data files contain data used in the paper "A Terrestrial Gamma-ray Flash from the 2022 Hunga Tonga–Hunga Ha’apai Volcanic Eruption", M. S. Briggs, S. Lesage, C. Schultz, B. Mailyan, R. H. Holzworth, Geophysical Research Letters, 2022.

    The authors wish to thank the World Wide Lightning Location Network (WWLLN), a collaboration among over 50 universities and institutions, for providing the lightning location data used in these datasets and in the paper. Additional WWLLN data are available at nominal cost from http://wwlln.net.

    The file named Fig_1.txt contains the data used to generate Figure 1 in the paper.

    The first two columns list the time ranges for each histogram bin, in UTC on 2022 January 15, while the final column lists the lightning detection rate, in counts per minute, for all WWLLN sferics located within a 400 km radius of the Hunga Tonga–Hunga Ha’apai volcano.
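A minimal reader for this three-column layout might look like the following; the HH:MM:SS timestamp format and the sample rows are assumptions for illustration, not values from the actual file:

```python
# Hypothetical sketch of reading the Fig_1.txt columns described above:
# bin start, bin end, lightning rate (counts per minute).
sample = """03:40:00 03:41:00 12.0
03:41:00 03:42:00 15.5
"""

rows = []
for line in sample.strip().splitlines():
    t_start, t_end, rate = line.split()
    rows.append((t_start, t_end, float(rate)))
```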

    The times when Fermi passed within 1000 km of the volcano, shown as grey bars in Figure 1, are:
    03:47:58.5 to 03:52:56.2 UTC
    05:29:25.1 to 05:33:59.7 UTC
    07:11:04.0 to 07:15:18.3 UTC
    08:52:04.8 to 08:57:05.1 UTC
    10:33:48.1 to 10:37:32.7 UTC

    The time of the Fermi TGF detection, shown as a red line in Figure 1, is: 08:52:40.011500 UTC

    The file named Fig_2.txt contains the WWLLN sferic data used to generate Figure 2 in the aforementioned paper.

    This file has the same format as the text files for the WWLLN maps provided in the Fermi GBM TGF catalog, https://fermi.gsfc.nasa.gov/ssc/data/access/gbm/tgf/.

    Line 1 is the network_name.
    Line 2 is the TGF_name.
    Line 3 is the coordinates of Fermi at the time of the TGF (2022-01-15 08:52:40.011500 UTC).
    Line 4 is the coordinates of the center of the map.
    The second number on line 5 is the number of sferics in a +/- 1 minute interval about the TGF.
    The remaining 104 lines list the properties of each sferic in columns containing the following information: sequence_number, longitude, latitude, time_separation_between_sferic_and_TGF_corrected_for_light-travel-time
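A hypothetical parser for this layout is sketched below. The header values in the sample are invented; only the line structure follows the description (network name, TGF name, Fermi coordinates, map center, sferic count on line 5, then one sferic per line):

```python
# Hypothetical sketch of parsing the Fig_2.txt layout described above.
sample = """WWLLN
TGF_sample
-175.5 -20.5
-175.4 -20.6
0 104
1 -175.27 -20.93 -0.012
2 -175.29 -20.84 0.034
"""

lines = sample.strip().splitlines()
network_name = lines[0]
tgf_name = lines[1]
n_sferics = int(lines[4].split()[1])   # second number on line 5
sferics = [
    (int(seq), float(lon), float(lat), float(dt))
    for seq, lon, lat, dt in (ln.split() for ln in lines[5:])
]
```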

    The two GLM lightning flashes, shown as magenta dots in Figure 2, have longitude and latitude values: -175.27394, -20.9348 -175.29301, -20.8466

    All of the aforementioned longitudes are East longitudes.

  16. Data from: Two-Dimensional Energy Histograms as Features for Machine...

    • acs.figshare.com
    • figshare.com
    xlsx
    Updated Jun 8, 2023
    Kaihang Shi; Zhao Li; Dylan M. Anstine; Dai Tang; Coray M. Colina; David S. Sholl; J. Ilja Siepmann; Randall Q. Snurr (2023). Two-Dimensional Energy Histograms as Features for Machine Learning to Predict Adsorption in Diverse Nanoporous Materials [Dataset]. http://doi.org/10.1021/acs.jctc.2c00798.s003
    Explore at:
    Available download formats: xlsx
    Dataset updated
    Jun 8, 2023
    Dataset provided by
    ACS Publications
    Authors
    Kaihang Shi; Zhao Li; Dylan M. Anstine; Dai Tang; Coray M. Colina; David S. Sholl; J. Ilja Siepmann; Randall Q. Snurr
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    A major obstacle for machine learning (ML) in chemical science is the lack of physically informed feature representations that provide both accurate prediction and easy interpretability of the ML model. In this work, we describe adsorption systems using novel two-dimensional energy histogram (2D-EH) features, which are obtained from the probe-adsorbent energies and energy gradients at grid points located throughout the adsorbent. The 2D-EH features encode both energetic and structural information of the material and lead to highly accurate ML models (coefficient of determination R2 ∼ 0.94–0.99) for predicting single-component adsorption capacity in metal–organic frameworks (MOFs). We consider the adsorption of spherical molecules (Kr and Xe), linear alkanes with a wide range of aspect ratios (ethane, propane, n-butane, and n-hexane), and a branched alkane (2,2-dimethylbutane) over a wide range of temperatures and pressures. The interpretable 2D-EH features enable the ML model to learn the basic physics of adsorption in pores from the training data. We show that these MOF-data-trained ML models are transferrable to different families of amorphous nanoporous materials. We also identify several adsorption systems where capillary condensation occurs, and ML predictions are more challenging. Nevertheless, our 2D-EH features still outperform structural features including those derived from persistent homology. The novel 2D-EH features may help accelerate the discovery and design of advanced nanoporous materials using ML for gas storage and separation in the future.

  17. Statewide Ortho 1994 (raster).

    • datadiscoverystudio.org
    • data.wu.ac.at
    html
    Updated Apr 10, 2015
    (2015). Statewide Ortho 1994 (raster). [Dataset]. http://datadiscoverystudio.org/geoportal/rest/metadata/item/817924f3ff1740b5ae5ded612d9fb713/html
    Explore at:
    Available download formats: html
    Dataset updated
    Apr 10, 2015
    Description

    Data available online through GeoStor at http://www.geostor.arkansas.gov. Orthophotos combine the image characteristics of a photograph with the geometric qualities of a map. The primary digital orthophotoquadrangle (DOQ) is a 1-meter ground resolution, quarter-quadrangle (3.75 minutes of latitude by 3.75 minutes of longitude) image cast on the Universal Transverse Mercator projection (UTM) on the North American Datum of 1983 (NAD83). The geographic extent of the DOQ is equivalent to a quarter-quadrangle plus the overedge, which ranges from a minimum of 50 meters to a maximum of 300 meters beyond the extremes of the primary and secondary corner points. The overedge is included to facilitate tonal matching for mosaicking and for the placement of the NAD83 and secondary datum corner ticks. The normal orientation of data is by lines (rows) and samples (columns). Each line contains a series of pixels ordered from west to east with the order of the lines from north to south. The radiometric image brightness values are stored as 256 gray levels, ranging from 0 to 255. This dataset is a combination of all DOQ images from the State of Arkansas. They have been stitched into a single mosaic through an automated process using ER Mapper software from Earth Resource Mapping Pty Ltd. The DOQ images were contrast balanced (using histogram matching) and the resulting balanced mosaic was forced into a value range of 0 to 255 using a simple linear transformation.
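The final step described above (forcing the balanced mosaic into a 0 to 255 range with a simple linear transformation) can be sketched as a min-max stretch. This illustrates the general technique only, not the ER Mapper implementation:

```python
import numpy as np

# Min-max linear stretch to the 8-bit range, as a generic illustration of
# the "simple linear transformation" described above.
def linear_stretch(img: np.ndarray) -> np.ndarray:
    lo, hi = float(img.min()), float(img.max())
    scaled = (img - lo) / (hi - lo) * 255.0
    return np.round(scaled).astype(np.uint8)

mosaic = np.array([[10.0, 20.0], [30.0, 60.0]])
out = linear_stretch(mosaic)   # spans the full 0-255 range
```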

  18. Histogram data of mean (SEM) and median (+/- percentile) WOB indices...

    • plos.figshare.com
    • figshare.com
    xls
    Updated Jun 3, 2023
    Lauren Ryan; Tariq Rahman; Abigail Strang; Robert Heinle; Thomas H. Shaffer (2023). Histogram data of mean (SEM) and median (+/- percentile) WOB indices expressed as percent within the normal range*. [Dataset]. http://doi.org/10.1371/journal.pone.0226980.t003
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 3, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Lauren Ryan; Tariq Rahman; Abigail Strang; Robert Heinle; Thomas H. Shaffer
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Histogram data of mean (SEM) and median (+/- percentile) WOB indices expressed as percent within the normal range*.

  19. Histograms of the distributions of all descriptors in the dataset.

    • plos.figshare.com
    zip
    Updated Jan 29, 2024
    Puck J. A. M. Mulders; Edwin R. van den Heuvel; Pytrik Reidsma; Wouter Duivesteijn (2024). Histograms of the distributions of all descriptors in the dataset. [Dataset]. http://doi.org/10.1371/journal.pone.0296684.s001
    Explore at:
    Available download formats: zip
    Dataset updated
    Jan 29, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Puck J. A. M. Mulders; Edwin R. van den Heuvel; Pytrik Reidsma; Wouter Duivesteijn
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    On each field, soil samples were taken. These soil samples are evaluated using the Eurofins protocol and provide the amounts of the following macro- and micronutrients: N, P, K, Ca, Mg, S, Si, Fe, Zn, Mn, and B. Two lines are present in these histograms: the left line represents the lower limit of the Eurofins advice range, and the right line represents the maximum of that range. In addition, some categorical variables are provided. The nutrient content of the field is determined by the farmer's team, who classify fields as poor, average, or rich. The field is also classified as dry, average, or wet by the farmer himself. Potato is a rotation crop; potatoes can be grown on the same field only once every four years. The crop cultivated before potatoes were grown on the field is the previously cultivated crop. The "others" category captures all remaining crops; usually a field is cultivated with such a crop only once or twice. Crops in this category are, for example, conifers, salsify, or peas. Finally, some fields suffer from nematodes, which can have a negative effect on potato yield. A: N in soil. B: P in soil. C: K in soil. D: Ca in soil. E: Mg in soil. F: Si in soil. G: S in soil. H: Fe in soil. I: Zn in soil. J: Mn in soil. K: B in soil. L: Tuber weight. M: Nutrient content. N: Contains nematodes? O: Year. P: Dryness. Q: Previously cultivated crop. (ZIP)

  20. Not seeing a result you expected?
    Learn how you can add new datasets to our index.
