100+ datasets found
  1. The Pizza Problem

    • kaggle.com
    zip
    Updated Feb 8, 2019
    Cite
    Jeremy Jeanne (2019). The Pizza Problem [Dataset]. https://www.kaggle.com/jeremyjeanne/google-hashcode-pizza-training-2019
    Explore at:
    zip (178852 bytes). Available download formats
    Dataset updated
    Feb 8, 2019
    Authors
    Jeremy Jeanne
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Problem description

    Pizza

    The pizza is represented as a rectangular, 2-dimensional grid of R rows and C columns. The cells within the grid are referenced using a pair of 0-based coordinates [r, c] , denoting respectively the row and the column of the cell.

    Each cell of the pizza contains either:

    mushroom, represented in the input file as M
    tomato, represented in the input file as T
    

    Slice

    A slice of pizza is a rectangular section of the pizza delimited by two rows and two columns, without holes. The slices we want to cut out must contain at least L cells of each ingredient (that is, at least L cells of mushroom and at least L cells of tomato) and at most H cells of any kind in total - surprising as it is, there is such a thing as too much pizza in one slice. The slices being cut out cannot overlap. The slices being cut do not need to cover the entire pizza.

    Goal

    The goal is to cut correct slices out of the pizza, maximizing the total number of cells in all slices.

    Input data set

    The input data is provided as a data set file - a plain text file containing exclusively ASCII characters, with lines terminated by a single '\n' character at the end of each line (UNIX-style line endings).

    File format

    The file consists of:

    one line containing the following natural numbers separated by single spaces:
    R (1 ≤ R ≤ 1000) is the number of rows
    C (1 ≤ C ≤ 1000) is the number of columns
    L (1 ≤ L ≤ 1000) is the minimum number of each ingredient cells in a slice
    H (1 ≤ H ≤ 1000) is the maximum total number of cells of a slice
    


    R lines describing the rows of the pizza (one after another). Each of these lines contains C characters describing the ingredients in the cells of the row (one cell after another). Each character is either ‘M’ (for mushroom) or ‘T’ (for tomato).

    Example

    3 5 1 6
    TTTTT
    TMMMT
    TTTTT
    

    3 rows, 5 columns, min 1 of each ingredient per slice, max 6 cells per slice

    Example input file.

    Submissions

    File format

    The file must consist of:

    one line containing a single natural number S (0 ≤ S ≤ R × C), representing the total number of slices to be cut,
    S lines describing the slices. Each of these lines must contain the following natural numbers separated by single spaces:
    r1 c1 r2 c2 (0 ≤ r1, r2 < R; 0 ≤ c1, c2 < C), describing a slice of pizza delimited by the rows r1 and r2 and the columns c1 and c2, including the cells of the delimiting rows and columns. The rows (r1 and r2) can be given in any order. The columns (c1 and c2) can be given in any order too.
    

    Example

    3
    0 0 2 1
    0 2 2 2
    0 3 2 4
    

    3 slices.

    First slice between rows (0,2) and columns (0,1).
    Second slice between rows (0,2) and columns (2,2).
    Third slice between rows (0,2) and columns (3,4).
    Example submission file.
    

    © Google 2017, All rights reserved.

    Slices described in the example submission file marked in green, orange and purple.

    Validation

    For the solution to be accepted:

    the format of the file must match the description above,
    each cell of the pizza must be included in at most one slice,
    each slice must contain at least L cells of mushroom,
    each slice must contain at least L cells of tomato,
    total area of each slice must be at most H
    

    Scoring

    The submission gets a score equal to the total number of cells in all slices. Note that there are multiple data sets representing separate instances of the problem. The final score for your team is the sum of your best scores on the individual data sets.

    Scoring example

    The example submission file given above cuts the slices of 6, 3 and 6 cells, earning 6 + 3 + 6 = 15 points.
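    The rules and scoring above can be sketched in a few lines of Python. This is an illustrative checker, not the official grader: it parses the example input, verifies the L/H and no-overlap constraints for each slice, and sums the cells.

```python
# Illustrative scorer for the pizza problem (not the official grader).

EXAMPLE_INPUT = """\
3 5 1 6
TTTTT
TMMMT
TTTTT
"""

# Slices as (r1, c1, r2, c2), matching the example submission file.
EXAMPLE_SUBMISSION = [(0, 0, 2, 1), (0, 2, 2, 2), (0, 3, 2, 4)]

def score(input_text, slices):
    lines = input_text.splitlines()
    R, C, L, H = map(int, lines[0].split())
    grid = lines[1:]
    used = set()   # cells already covered by an earlier slice
    total = 0
    for r1, c1, r2, c2 in slices:
        cells = [(r, c) for r in range(r1, r2 + 1) for c in range(c1, c2 + 1)]
        mushrooms = sum(grid[r][c] == "M" for r, c in cells)
        tomatoes = len(cells) - mushrooms
        assert mushrooms >= L and tomatoes >= L, "too few of one ingredient"
        assert len(cells) <= H, "slice larger than H cells"
        assert not used & set(cells), "overlapping slices"
        used.update(cells)
        total += len(cells)
    return total

print(score(EXAMPLE_INPUT, EXAMPLE_SUBMISSION))  # 6 + 3 + 6 = 15
```

    Running it on the example reproduces the 15 points stated in the scoring example.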

  2. SCOAPE Pandora Column Observations - Dataset - NASA Open Data Portal

    • data.nasa.gov
    Updated Apr 1, 2025
    + more versions
    Cite
    nasa.gov (2025). SCOAPE Pandora Column Observations - Dataset - NASA Open Data Portal [Dataset]. https://data.nasa.gov/dataset/scoape-pandora-column-observations-8c90a
    Explore at:
    Dataset updated
    Apr 1, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Description

    SCOAPE_Pandora_Data is the column NO2 and ozone data collected by Pandora spectrometers during the Satellite Coastal and Oceanic Atmospheric Pollution Experiment (SCOAPE). Pandora instruments were located on the University of Southern Mississippi’s Research Vessel (R/V) Point Sur and at the Louisiana Universities Marine Consortium (LUMCON; Cocodrie, LA). Data collection for this product is complete.

    The Outer Continental Shelf Lands Act (OCSLA) requires the US Department of Interior Bureau of Ocean Energy Management (BOEM) to ensure compliance with the US National Ambient Air Quality Standard (NAAQS) so that Outer Continental Shelf (OCS) oil and natural gas (ONG) exploration, development, and production do not significantly impact the air quality of any US state. In 2017, BOEM and NASA entered into an interagency agreement to begin a study to scope out the feasibility of BOEM personnel using a suite of NASA and non-NASA resources to assess how pollutants from ONG exploration, development, and production activities affect air quality. An important activity of this interagency agreement was SCOAPE, a field deployment in May 2019 that aimed to assess the capability of satellite observations for monitoring offshore air quality. The outcomes of the study are documented in two BOEM reports (Duncan, 2020; Thompson, 2020).

    To address BOEM’s goals, the SCOAPE science team conducted surface-based remote sensing and in-situ measurements, which enabled a systematic assessment of the application of satellite observations, primarily NO2, for monitoring air quality. The SCOAPE field measurements consisted of onshore ground sites, including in the vicinity of LUMCON, as well as those from the University of Southern Mississippi’s R/V Point Sur, which cruised in the Gulf of America from 10-18 May 2019.
Based on the 2014 and 2017 BOEM emissions inventories as well as daily air quality and meteorological forecasts, the cruise track was designed to sample both areas with large oil drilling platforms and areas with dense small natural gas facilities. The R/V Point Sur was instrumented to carry out both remote sensing and in-situ measurements of NO2 and O3 along with in-situ CH4, CO2, CO, and VOC tracers which allowed detailed characterization of airmass type and emissions. In addition, there were also measurements of multi-wavelength AOD and black carbon as well as planetary boundary layer structure and meteorological variables, including surface temperature, humidity, and winds. A ship-based spectrometer instrument provided remotely-sensed total column amounts of NO2 and O3 for direct comparison with satellite measurements. Ozonesondes and radiosondes were also launched 1-3 times daily from the R/V Point Sur to provide O3 and meteorological vertical profile observations. The ground-based observations, primarily at LUMCON, included spectrometer-measured column NO2 and O3, in-situ NO2, VOCs, and planetary boundary layer structure. A NO2sonde was also mounted on a vehicle with the goal to detect pollution onshore from offshore ONG activities during onshore flow; data were collected along coastal Louisiana from Burns Point Park to Grand Isle to the tip of the Mississippi River delta. The in-situ measurements were reported in ICARTT files or Excel files. The remote sensing data are in either HDF or netCDF files.

  3. Video Plankton Recorder data (formatted with taxa displayed in single...

    • bco-dmo.org
    • search.dataone.org
    csv
    Updated Jul 31, 2012
    + more versions
    Cite
    Carin J. Ashjian (2012). Video Plankton Recorder data (formatted with taxa displayed in single column); from R/V Columbus Iselin and R/V Endeavor cruises CI9407, EN259, EN262 in the Gulf of Maine and Georges Bank from 1994-1995 [Dataset]. https://www.bco-dmo.org/dataset/3685
    Explore at:
    csv (370.26 MB). Available download formats
    Dataset updated
    Jul 31, 2012
    Dataset provided by
    Biological and Chemical Data Management Office
    Authors
    Carin J. Ashjian
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Gulf of Maine
    Variables measured
    lat, lon, sal, temp, year, fluor, press, taxon, flvolt, abund_L, and 9 more
    Measurement technique
    Video Plankton Recorder
    Description

    This dataset includes ALL the abundance values, zero and non-zero. Taxonomic groups are displayed in the 'taxon' column, rather than in separate columns, with abundances in the 'abund_L' column. For the original presentation of the data, see VPR_ashjian_orig. For a version of the data with only non-zero data, see VPR_ashjian_nonzero. In the 'nonzero' dataset, values of 0 in the abund_L column (taxon abundance) have been removed.

    Methodology
    The following information was extracted from C.J. Ashjian et al., Deep-Sea Research II 48 (2001) 245-282. An in-depth discussion of the data and sampling methods can be found there.

    The Video Plankton Recorder was towed at 2 m/s, collecting data from the surface to the bottom (towyo). The VPR was equipped with 2-4 cameras, temperature and conductivity probes, fluorometer and transmissometer. Environmental data was collected at 0.25 Hz (CI9407) or 0.5 Hz (EN259, EN262). Video images were recorded at 60 fields per second (fps).

    Video tapes were analyzed for plankton abundances using a semi-automated method discussed in Davis, C.S. et al., Deep-Sea Research II 43 (1996) 1946-1970. In-focus images were extracted from the video tapes and identified by hand to particle type, taxon, or species. Plankton and particle observations were merged with environmental and navigational data by binning the observations for each category into the time intervals at which the environmental data were collected (again see above Davis citation). Concentrations were calculated utilizing the total volume (liters) imaged during that period. For less-abundant categories, usually only a single organism was observed during each time interval so that the resulting concentrations are close to presence or absence data rather than covering a range of values.
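    The binning-and-volume step described above can be illustrated with a toy calculation. This is not the authors' processing code; the observation times, taxa, and imaged volumes below are invented. Counts per environmental time interval are divided by the liters imaged in that interval, giving abundances like those in the 'abund_L' column.

```python
# Toy illustration of binning plankton observations into environmental
# sampling intervals and converting counts to concentrations (per liter).
from collections import Counter

# (timestamp_s, taxon) for each in-focus identification (made up)
observations = [(3.1, "copepod"), (3.7, "copepod"), (7.2, "diatom")]

# Environmental data at 0.25 Hz -> 4-second bins; liters imaged per bin
bin_width = 4.0
volume_per_bin = {0: 2.0, 4: 2.0}  # hypothetical imaged volumes

counts = Counter()
for t, taxon in observations:
    bin_start = int(t // bin_width) * int(bin_width)
    counts[(bin_start, taxon)] += 1

# Concentration = count / liters imaged during that interval
abund_L = {key: n / volume_per_bin[key[0]] for key, n in counts.items()}
print(abund_L)  # {(0, 'copepod'): 1.0, (4, 'diatom'): 0.5}
```

    For rare categories this yields one organism per interval, which is why the resulting concentrations behave almost like presence/absence data.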

  4. Reddit: /r/news

    • kaggle.com
    zip
    Updated Dec 17, 2022
    Cite
    The Devastator (2022). Reddit: /r/news [Dataset]. https://www.kaggle.com/datasets/thedevastator/uncovering-popularity-and-user-engagement-trends/discussion
    Explore at:
    zip (146481 bytes). Available download formats
    Dataset updated
    Dec 17, 2022
    Authors
    The Devastator
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Reddit: /r/news

    Exploring Topics, Scores, and Engagement

    By Reddit [source]

    About this dataset

    This dataset provides an in-depth look at what communities find important and engaging in the news. With this data, researchers can discover trends related to user engagement and popular topics within subreddits. By examining the “score” and “comms_num” columns, researchers will be able to pinpoint which topics are most liked, discussed or shared within the various subreddits. Researchers may also gain insights into not only how popular a topic is but how it is growing over time. Additionally, by exploring the body column of our dataset, researchers can understand more about which types of news stories drive conversation within particular subreddits, providing an opportunity for deeper analysis of that subreddit’s diverse community dynamics.

    The dataset includes eight columns: title, score, id, url, comms_num, created, body and timestamp, which can help us identify key insights into user engagement among popular subreddits. With this data we may also determine relationships between topics of discussion and their impact on user engagement, allowing us to create a better understanding of issue-based conversations online as well as uncover emerging trends in online news consumption habits.


    How to use the dataset

    This dataset is useful for those who are looking to gain insight into the popularity and user engagement of specific subreddits. The data includes 8 different columns including title, score, id, url, comms_num, created, body and timestamp. This can provide valuable information about how users view and interact with particular topics across various subreddits.

    In this guide we’ll look at how you can use this dataset to uncover trends in user engagement on topics within specific subreddits as well as measure the overall popularity of these topics within a subreddit.

    1) Analyzing Score: By analyzing the “score” column you can determine which news stories are popular in a particular subreddit and which ones aren't by looking at how many upvotes each story has received. With this data you will be able to determine trends in what types of stories users preferred within a particular subreddit over time.

    2) Analyzing Comms_Num: Similarly to the score column, you can analyze the “comms_num” column to see which news stories had more engagement from users by tracking the number of comments received on each post. Knowing these points can provide insight into what types of stories tend to draw more comment activity from users in certain subreddits, whether over a single day or an extended period such as multiple weeks or months.

    3) Analyzing Body: Additionally, by looking at the “body” column for each post, researchers can gain a better understanding of which kinds of topics/news draw attention among specific Reddit communities. With that complete picture, researchers have access not only to data measuring Reddit buzz but also to the topic discussions/comments, helping generate further insights into why certain posts might be popular or receive more comments than others.

    Overall, this dataset provides valuable insights about user engagement related specifically to topics trending across subreddits, giving anyone interested in researching such things an easier way to access those insights in one place.
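    As a starting point for the analyses described above, here is a minimal pandas sketch using column names listed in the description (title, score, comms_num). The rows are invented so the snippet runs without the actual news.csv; for the real data you would load the file instead.

```python
# Minimal sketch: rank posts by upvotes and by comment count.
import pandas as pd

# Invented sample; real usage: df = pd.read_csv("news.csv")
df = pd.DataFrame(
    {
        "title": ["Story A", "Story B", "Story C"],
        "score": [120, 45, 300],
        "comms_num": [80, 10, 40],
    }
)

# Most upvoted and most discussed posts
top_by_score = df.sort_values("score", ascending=False).head(1)
top_by_comments = df.sort_values("comms_num", ascending=False).head(1)

print(top_by_score["title"].iloc[0])     # Story C
print(top_by_comments["title"].iloc[0])  # Story A
```

    Grouping by day using the timestamp column would extend this to the trend-over-time questions mentioned above.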

    Research Ideas

    • Grouping news topics within particular subreddits and assessing the overall popularity of those topics in terms of scores/user engagement.
    • Correlating user engagement with certain news topics to understand how they influence discussion or reactions on a subreddit.
    • Examining the potential correlation between score and the actual body content of a given post to assess what types of content are most successful in gaining interest from users and creating positive engagement for posts.

    Acknowledgements

    If you use this dataset in your research, please credit the original authors.

    Data Source

    License

    License: CC0 1.0 Universal (CC0 1.0) - Public Domain Dedication No Copyright - You can copy, modify, distribute and perform the work, even for commercial purposes, all without asking permission. See Other Information.

    Columns

    File: news.csv | Column name | Description ...

  5. EM2040 Water Column Sonar Data Collected During H13177

    • catalog.data.gov
    • datasets.ai
    • +2more
    Updated Sep 17, 2021
    Cite
    NOAA National Centers for Environmental Information (Point of Contact) (2021). EM2040 Water Column Sonar Data Collected During H13177 [Dataset]. https://catalog.data.gov/dataset/em2040-water-column-sonar-data-collected-during-h13177
    Explore at:
    Dataset updated
    Sep 17, 2021
    Dataset provided by
    National Centers for Environmental Information (https://www.ncei.noaa.gov/)
    National Oceanic and Atmospheric Administration (http://www.noaa.gov/)
    Description

    Sea Scout Hydrographic Survey, H13177 (EM2040). Mainline coverage within the survey area consisted of Complete Coverage (100% side scan sonar with concurrent multibeam data) acquisition. The assigned Fish Haven area and associated debris area were surveyed with Object Detection MBES coverage. Bathymetric and water column data were acquired with a Kongsberg EM2040C multibeam echo sounder aboard the R/V Sea Scout and bathymetry data was acquired with a Kongsberg EM3002 multibeam echo sounder aboard the R/V C-Wolf. Side scan sonar acoustic imagery was collected with a Klein 5000 V2 system aboard the R/V Sea Scout and an EdgeTech 4200 aboard the R/V C-Wolf.

  6. Data from: Data corresponding to the paper "Traveling Bubbles and Vortex...

    • portalcientifico.uvigo.gal
    Updated 2025
    Cite
    Michinel, Humberto (2025). Data corresponding to the paper "Traveling Bubbles and Vortex Pairs within Symmetric 2D Quantum Droplets" [Dataset]. https://portalcientifico.uvigo.gal/documentos/682afb714c44bf76b287f3ae
    Explore at:
    Dataset updated
    2025
    Authors
    Michinel, Humberto
    Description

    Datasets generated for the Physical Review E article with title: "Traveling Bubbles and Vortex Pairs within Symmetric 2D Quantum Droplets" by Paredes, Guerra-Carmenate, Salgueiro, Tommasini and Michinel. In particular, we provide the data needed to generate the figures in the publication, which illustrate the numerical results found during this work.

    We also include Python code in the file "plot_from_data_for_repository.py" that generates a version of the figures of the paper from the .pt data sets. Data can be read and plots can be produced with a simple modification of the Python code.

    Figure 1: Data are in fig1.csv

    The csv file has four columns separated by commas. The four columns correspond to values of r (first column) and the function psi(r) for the three cases depicted in the figure (columns 2-4).

    Figures 2 and 4: Data are in data_figs_2_and_4.pt

    This is a data file generated with the torch module of python. It includes eight torch tensors for the spatial grid "x" and "y" and for the complex values of psi for the six eigenstates depicted in figures 2 and 4 ("psia", "psib", "psic", "psid", "psie", "psif"). Notice that figure 2 is the square of the modulus and figure 4 is the argument, both are obtained from the same data sets.

    Figure 3: Data are in fig3.csv

    The csv file has three columns separated by commas. The three columns correspond to values of momentum p (first column), energy E (second column) and velocity U (third column).

    Figure 5: Data are in fig5.csv

    The csv file has three columns separated by commas. The three columns correspond to values of momentum p (first column), the minimum value of |psi|^2 (second column) and the value of |psi|^2 at the center (third column).

    Figure 6: Data are in data_fig_6.pt

    This is a data file generated with the torch module of python. It includes six torch tensors for the spatial grid "x" and "y" and for the complex values of psi for the four instants of time depicted in figure 6 ("psia", "psib", "psic", "psid").

    Figure 7: Data are in data_fig_7.pt

    This is a data file generated with the torch module of python. It includes six torch tensors for the spatial grid "x" and "y" and for the complex values of psi for the four instants of time depicted in figure 7 ("psia", "psib", "psic", "psid").

    Figures 8 and 10: Data are in data_figs_8_and_10.pt

    This is a data file generated with the torch module of python. It includes eight torch tensors for the spatial grid "x" and "y" and for the complex values of psi for the six eigenstates depicted in figures 8 and 10 ("psia", "psib", "psic", "psid", "psie", "psif"). Notice that figure 8 is the square of the modulus and figure 10 is the argument, both are obtained from the same data sets.

    Figure 9: Data are in fig9.csv

    The csv file has two columns separated by commas. The two columns correspond to values of momentum p (first column) and energy (second column).

    Figure 11: Data are in data_fig_11.pt

    This is a data file generated with the torch module of python. It includes ten torch tensors for the spatial grid "x" and "y" and for the complex values of psi for the two cases, four instants of time for each case, depicted in figure 11 ("psia", "psib", "psic", "psid", "psie", "psif", "psig", "psih").

    Figure 12: Data are in data_fig_12.pt

    This is a data file generated with the torch module of python. It includes eight torch tensors for the spatial grid "x" and "y" and for the complex values of psi for the six instants of time depicted in figure 12 ("psia", "psib", "psic", "psid", "psie", "psif").

    Figure 13: Data are in data_fig_13.pt

    This is a data file generated with the torch module of python. It includes ten torch tensors for the spatial grid "x" and "y" and for the complex values of psi for the eight instants of time depicted in figure 13 ("psia", "psib", "psic", "psid", "psie", "psif", "psig", "psih").
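    A minimal sketch of working with these .pt files, assuming (as the quoted key names suggest) that torch.load returns a mapping of named tensors; the exact container type inside the files is an assumption here. The two derived quantities are the modulus squared and argument that the figures plot.

```python
# Sketch: compute the two quantities plotted in the figures from a
# complex field psi stored as a torch tensor.
import torch

def density_and_phase(psi):
    """Return |psi|^2 and arg(psi), the quantities shown in the figures."""
    return psi.abs() ** 2, torch.angle(psi)

# With a real file (file name and keys taken from the description above,
# dict-like access is an assumption):
# data = torch.load("data_fig_6.pt")
# x, y = data["x"], data["y"]
# dens, phase = density_and_phase(data["psia"])

# Synthetic check on a tiny complex tensor:
psi = torch.tensor([1 + 1j, 0 + 2j])
dens, phase = density_and_phase(psi)
print(dens)  # tensor([2., 4.])
```

    Plotting dens or phase over the "x"/"y" grids reproduces the style of the modulus and argument figures, respectively.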

  7. Swath sonar multibeam EM122 water column data of R/V SONNE cruises SO268/1...

    • doi.pangaea.de
    html, tsv
    Updated Apr 3, 2020
    + more versions
    Cite
    Iason-Zois Gazis; Peter Urban (2020). Swath sonar multibeam EM122 water column data of R/V SONNE cruises SO268/1 and SO268/2 with links to wcd data files [Dataset]. http://doi.org/10.1594/PANGAEA.914390
    Explore at:
    html, tsv. Available download formats
    Dataset updated
    Apr 3, 2020
    Dataset provided by
    GEOMAR - Helmholtz Centre for Ocean Research Kiel
    PANGAEA
    Authors
    Iason-Zois Gazis; Peter Urban
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Time period covered
    Feb 26, 2019 - May 20, 2019
    Area covered
    Variables measured
    LATITUDE, DATE/TIME, File name, File size, LONGITUDE, Event label, File format, Uniform resource locator/link to wcd data file
    Description

    Data acquisition was performed using the multibeam echosounder Kongsberg EM122. Raw data are delivered in Kongsberg .wcd format. The data acquisition was part of the international project JPI Oceans - MiningImpact Environmental Impacts and Risks of Deep-Sea Mining.

  8. 💎 r/Books Post & Comments 💎

    • kaggle.com
    zip
    Updated Oct 4, 2024
    Cite
    Curiel (2024). 💎 r/Books Post & Comments 💎 [Dataset]. https://www.kaggle.com/datasets/curiel/rbooks-post-and-comments
    Explore at:
    zip (123445900 bytes). Available download formats
    Dataset updated
    Oct 4, 2024
    Authors
    Curiel
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Important Note: The dataset contains some important information regarding the columns 'title' and 'comments'. It's important to understand their values in order to interpret the data correctly.

    In the 'title' column, there may be a significant number of null values. A null value in this column indicates that the corresponding row pertains to a comment rather than a post. To identify the relationship between comment rows and their associated posts, you can examine the 'post_id' column. Rows with the same 'post_id' value refer to comments that are associated with the post identified by that 'post_id'.

    Similarly, in the 'comments' column, the presence or absence of null values is crucial for determining whether a row represents a comment or a post. If the 'comments' column is null, it signifies a comment row. Conversely, if the 'comments' column is populated (including cases where the value is 0), it indicates a post row.

    Understanding these conventions will enable accurate analysis and interpretation of the dataset.
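    The post/comment convention above can be applied directly in pandas. This is a small sketch with invented rows (the real dataset has more columns): rows whose 'comments' value is populated, including 0, are posts; null rows are comments, linked to their parent post through 'post_id'.

```python
# Split rows into posts and comments using the null conventions above.
import pandas as pd

# Invented sample rows illustrating the convention
df = pd.DataFrame(
    {
        "post_id": ["p1", "p1", "p1", "p2"],
        "title": ["A book post", None, None, "Another post"],
        "comments": [2.0, None, None, 0.0],
    }
)

posts = df[df["comments"].notna()]    # populated (even 0) -> post row
comments = df[df["comments"].isna()]  # null -> comment row

# Attach each comment to its parent post's title via post_id
merged = comments.merge(
    posts[["post_id", "title"]], on="post_id", suffixes=("", "_post")
)
print(len(posts), len(comments))  # 2 2
```

    The same split works unchanged for the other datasets in this series that share the convention.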

  9. 💎 r/Fitness Posts & Comments 💎

    • kaggle.com
    zip
    Updated Apr 2, 2025
    Cite
    Curiel (2025). 💎 r/Fitness Posts & Comments 💎 [Dataset]. https://www.kaggle.com/datasets/curiel/rfitness-posts-and-comments
    Explore at:
    zip (52567764 bytes). Available download formats
    Dataset updated
    Apr 2, 2025
    Authors
    Curiel
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Important Note: The dataset contains some important information regarding the columns 'title' and 'comments'. It's important to understand their values in order to interpret the data correctly.

    In the 'title' column, there may be a significant number of null values. A null value in this column indicates that the corresponding row pertains to a comment rather than a post. To identify the relationship between comment rows and their associated posts, you can examine the 'post_id' column. Rows with the same 'post_id' value refer to comments that are associated with the post identified by that 'post_id'.

    Similarly, in the 'comments' column, the presence or absence of null values is crucial for determining whether a row represents a comment or a post. If the 'comments' column is null, it signifies a comment row. Conversely, if the 'comments' column is populated (including cases where the value is 0), it indicates a post row.

    Understanding these conventions will enable accurate analysis and interpretation of the dataset.

  10. Waterproof and Anti-corrosion Operation Column Report

    • promarketreports.com
    doc, pdf, ppt
    Updated Jun 6, 2025
    Cite
    Pro Market Reports (2025). Waterproof and Anti-corrosion Operation Column Report [Dataset]. https://www.promarketreports.com/reports/waterproof-and-anti-corrosion-operation-column-247677
    Explore at:
    pdf, doc, ppt. Available download formats
    Dataset updated
    Jun 6, 2025
    Dataset authored and provided by
    Pro Market Reports
    License

    https://www.promarketreports.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    Discover the booming market for waterproof & anti-corrosion operation columns! Learn about its $2.5 billion (2025 est.) size, 7% CAGR, key drivers, and top players like Eaton & Emerson. Explore regional market shares and future growth projections in this detailed analysis.

  11. Evaluation of Formaldehyde Column Observations by Pandora Spectrometers

    • catalog.data.gov
    Updated Nov 12, 2020
    Cite
    U.S. EPA Office of Research and Development (ORD) (2020). Evaluation of Formaldehyde Column Observations by Pandora Spectrometers [Dataset]. https://catalog.data.gov/dataset/evaluation-of-formaldehyde-column-observations-by-pandora-spectrometers
    Explore at:
    Dataset updated
    Nov 12, 2020
    Dataset provided by
    United States Environmental Protection Agency (http://www.epa.gov/)
    Description

    Data collected for this research provides information on mixing heights, surface and column formaldehyde during the KORUS-AQ field campaign and over two research sites in South Korea. This dataset is associated with the following publication: Spinei, E., A. Whitehill, A. Fried, M. Tiefengraber, T. Knepp, S. Herndon, J. Herman, M. Muller, N. Abuhassan, A. Cede, D. Richter, J. Walega, J. Crawford, J. Szykman, L. Valin, D. Williams, R. Long, R. Swap, Y. Lee, N. Nowak, and B. Poche. The first evaluation of formaldehyde column observations by improved Pandora spectrometers during the KORUS-AQ field study. Atmospheric Measurement Techniques. Copernicus Publications, Katlenburg-Lindau, GERMANY, 11(9): 4943-4961, (2018).

  12. Water column sample data from predefined locations of the West Florida...

    • data.usgs.gov
    • search.dataone.org
    • +2more
    Updated Feb 15, 2014
    + more versions
    Cite
    United States Geological Survey (2014). Water column sample data from predefined locations of the West Florida Shelf: USGS Cruise 11BHM03 [Dataset]. https://data.usgs.gov/datacatalog/data/USGS:94b95e3f-fe33-40d3-885f-54c70ead5714
    Explore at:
    Dataset updated
    Feb 15, 2014
    Dataset provided by
    United States Geological Survey
    License

    U.S. Government Works: https://www.usa.gov/government-works
    License information was derived automatically

    Time period covered
    Sep 20, 2011 - Sep 28, 2011
    Area covered
    Florida
    Description

    The United States Geological Survey (USGS) is conducting a study on the effects of climate change on ocean acidification within the Gulf of Mexico, dealing specifically with the effect of ocean acidification on marine organisms and habitats. To investigate this, the USGS participated in two cruises in the West Florida Shelf and northern Gulf of Mexico regions aboard the R/V Weatherbird II, a ship of opportunity led by Dr. Kendra Daly of the University of South Florida (USF). The cruises occurred September 20 - 28 and November 2 - 4, 2011. Both left from and returned to Saint Petersburg, Florida, but followed different routes (see Trackline). On both cruises the USGS collected data pertaining to pH, dissolved inorganic carbon (DIC), and total alkalinity in discrete samples. Discrete surface samples were taken during transit approximately hourly on both cruises: 95 were collected in September over a span of 2127 km, and 7 over a 732 km trackline on the November cruise. Along wit ...

  13. 💎 r/TikTokCringe Post & Comments 💎

    • kaggle.com
    zip
    Updated Mar 25, 2025
    Cite
    Curiel (2025). 💎 r/TikTokCringe Post & Comments 💎 [Dataset]. https://www.kaggle.com/datasets/curiel/rtiktokcringe-post-and-comments/discussion
    Explore at:
    zip (189418739 bytes). Available download formats
    Dataset updated
    Mar 25, 2025
    Authors
    Curiel
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Important Note: The columns 'title' and 'comments' follow specific conventions; it's important to understand their values in order to interpret the data correctly.

    In the 'title' column, there may be a significant number of null values. A null value in this column indicates that the corresponding row pertains to a comment rather than a post. To identify the relationship between comment rows and their associated posts, you can examine the 'post_id' column. Rows with the same 'post_id' value refer to comments that are associated with the post identified by that 'post_id'.

    Similarly, in the 'comments' column, the presence or absence of null values is crucial for determining whether a row represents a comment or a post. If the 'comments' column is null, it signifies a comment row. Conversely, if the 'comments' column is populated (including cases where the value is 0), it indicates a post row.

    Understanding these conventions will enable accurate analysis and interpretation of the dataset.
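    The convention above can be sketched in a few lines of Python (illustrative only; the helper name row_kind and the sample rows are invented, but the field names match the dataset's columns):

```python
def row_kind(row):
    """Classify a row as a post or a comment using the null conventions above."""
    # A populated 'comments' count (even 0) marks a post row;
    # a null 'comments' value marks a comment row.
    return "post" if row.get("comments") is not None else "comment"

# Invented sample rows for illustration:
rows = [
    {"post_id": "abc", "title": "Some video", "comments": 0},  # a post with 0 comments
    {"post_id": "abc", "title": None, "comments": None},       # a comment on post 'abc'
]
kinds = [row_kind(r) for r in rows]  # ["post", "comment"]
```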

  14. Numerical code and data for the stellar structure and dynamical instability...

    • datadryad.org
    • search.dataone.org
    • +1more
    zip
    Updated May 10, 2021
    Cite
    Arun Mathew; Malay K. Nandy (2021). Numerical code and data for the stellar structure and dynamical instability analysis of generalised uncertainty white dwarfs [Dataset]. http://doi.org/10.5061/dryad.dncjsxkzt
    Explore at:
    zipAvailable download formats
    Dataset updated
    May 10, 2021
    Dataset provided by
    Dryad
    Authors
    Arun Mathew; Malay K. Nandy
    Time period covered
    Apr 28, 2021
    Description

    There is a total of 17 datasets to produce all the Figures in the article. There are mainly two different data files: GUP White Dwarf Mass-Radius (GUPWD_M-R) data and GUP White Dwarf Profile (GUPWD_Profile) data.

    The file GUPWD_M-R gives only the Mass-Radius relation with Radius (km) in the first column and Mass (solar mass) in the second.

    On the other hand, GUPWD_Profile provides the complete profile with the following columns:

    column 1: Dimensionless central Fermi momentum $\xi_c$
    column 2: Central Density $\rho_c$ (Log10 [$\rho_c$ g cm$^{-3}$])
    column 3: Radius $R$ (km)
    column 4: Mass $M$ (solar mass)
    column 5: Square of fundamental frequency $\omega_0^2$ (sec$^{-2}$)
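    As a hedged sketch (not the authors' code), a GUPWD_Profile row with these five whitespace-separated columns could be parsed like so; the field names are invented for illustration:

```python
# Column order as described above; names are illustrative.
FIELDS = ("xi_c", "log10_rho_c", "radius_km", "mass_msun", "omega0_sq")

def parse_profile_line(line):
    """Parse one whitespace-separated row of a GUPWD_Profile .dat file."""
    return dict(zip(FIELDS, (float(tok) for tok in line.split())))

row = parse_profile_line("1.5  6.2  9000.0  0.6  1.2e-5")
# row["radius_km"] == 9000.0
```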

    =====================================================================================

    Figure 1 (a) gives Mass-Radius (M-R) curves for $\beta_0=10^{42}$, $10^{41}$ and $10^{40}$. The filenames of the corresponding dataset are

    GUPWD_M-R[Beta0=E42].dat GUPWD_M-R[Beta0=E41].dat GUPWD_M-R[Beta0...

  15. KORUS-AQ Pandora Column Observations - Dataset - NASA Open Data Portal

    • data.nasa.gov
    Updated Apr 1, 2025
    + more versions
    Cite
    nasa.gov (2025). KORUS-AQ Pandora Column Observations - Dataset - NASA Open Data Portal [Dataset]. https://data.nasa.gov/dataset/korus-aq-pandora-column-observations-dda12
    Explore at:
    Dataset updated
    Apr 1, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Description

    KORUSAQ_Ground_Pandora_Data contains all of the Pandora instrumentation data collected during the KORUS-AQ field study. Contained in this dataset are column measurements of NO2, O3, and HCHO. Pandoras were situated at various ground sites across the study area, including NIER-Taehwa, NIER-Olympic Park, NIER-Gwangju, NIER-Anmyeon, Busan, Yonsei University, Songchon, and Yeoju. Data collection for this product is complete.

    The KORUS-AQ field study was conducted in South Korea during May-June 2016. The study was jointly sponsored by NASA and Korea’s National Institute of Environmental Research (NIER). The primary objectives were to investigate the factors controlling air quality in Korea (e.g., local emissions, chemical processes, and transboundary transport) and to assess future air quality observing strategies incorporating geostationary satellite observations. To achieve these science objectives, KORUS-AQ adopted a highly coordinated sampling strategy involving surface and airborne measurements with both in-situ and remote sensing instruments.

    Surface observations provided details on ground-level air quality conditions, while airborne sampling provided an assessment of conditions aloft relevant to satellite observations and necessary to understand the role of emissions, chemistry, and dynamics in determining air quality outcomes. The sampling region covers the South Korean peninsula and surrounding waters, with a primary focus on the Seoul Metropolitan Area. Airborne sampling was primarily conducted from near the surface to about 8 km, with extensive profiling to characterize the vertical distribution of pollutants and their precursors. The airborne observational data were collected from three aircraft platforms: the NASA DC-8, NASA B-200, and Hanseo King Air. Surface measurements were conducted from 16 ground sites and 2 ships: R/V Onnuri and R/V Jang Mok.

    The major data products collected from both the ground and air include in-situ measurements of trace gases (e.g., ozone, reactive nitrogen species, carbon monoxide and dioxide, methane, non-methane and oxygenated hydrocarbon species), aerosols (e.g., microphysical and optical properties and chemical composition), active remote sensing of ozone and aerosols, and passive remote sensing of NO2, CH2O, and O3 column densities. These data products support research focused on examining the impact of photochemistry and transport on ozone and aerosols, evaluating emissions inventories, and assessing the potential use of satellite observations in air quality studies.

  16. Waterproof and Anti-corrosion Operation Column Report

    • archivemarketresearch.com
    doc, pdf, ppt
    Updated Jun 9, 2025
    Cite
    Archive Market Research (2025). Waterproof and Anti-corrosion Operation Column Report [Dataset]. https://www.archivemarketresearch.com/reports/waterproof-and-anti-corrosion-operation-column-438737
    Explore at:
    ppt, pdf, docAvailable download formats
    Dataset updated
    Jun 9, 2025
    Dataset authored and provided by
    Archive Market Research
    License

    https://www.archivemarketresearch.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    Discover the booming waterproof & anti-corrosion operation column market! This report analyzes market size ($1.5B in 2025, projected to reach $2.5B by 2033 at a 7% CAGR), key trends, leading companies (Eaton, Emerson, etc.), and regional insights. Learn about the drivers, restraints, and future growth potential.

  17. EK60 Water Column Sonar Data Collected During SH1806

    • catalog.data.gov
    • gimi9.com
    • +1more
    Updated Nov 1, 2024
    Cite
    NOAA National Centers for Environmental Information (Point of Contact) (2024). EK60 Water Column Sonar Data Collected During SH1806 [Dataset]. https://catalog.data.gov/dataset/ek60-water-column-sonar-data-collected-during-sh1806
    Explore at:
    Dataset updated
    Nov 1, 2024
    Dataset provided by
    National Centers for Environmental Information (https://www.ncei.noaa.gov/)
    National Oceanic and Atmospheric Administration (http://www.noaa.gov/)
    Description

    This survey is part of a long-term Bonneville Power Administration-funded effort by academic and federal scientists to understand coastal ecosystems and the biological and physical processes that may influence recruitment variability of salmon in Pacific Northwest waters. Prior to a potential switch to the R/V Shimada as a long-term platform, we intend to compare catches between vessels (the R/V Shimada and the F/V Frosti) on the continental shelf of Washington. Sampling will occur during the day for 3 days (23-25 May) with surface trawls along pre-specified transects. One trawl will be performed as we leave the Strait of Juan de Fuca on the afternoon of the 22nd. In addition, a bongo net will be towed several times each night between 2300 and 0500 (nights of 19-22 June).

  18. Tennessee Eastman Process Simulation Dataset

    • kaggle.com
    zip
    Updated Feb 9, 2020
    Cite
    Sergei Averkiev (2020). Tennessee Eastman Process Simulation Dataset [Dataset]. https://www.kaggle.com/averkij/tennessee-eastman-process-simulation-dataset
    Explore at:
    zip(1370814903 bytes)Available download formats
    Dataset updated
    Feb 9, 2020
    Authors
    Sergei Averkiev
    Description

    Intro

    This dataverse contains the data referenced in Rieth et al. (2017). Issues and Advances in Anomaly Detection Evaluation for Joint Human-Automated Systems. To be presented at Applied Human Factors and Ergonomics 2017.

    Content

    Each .RData file is an external representation of an R dataframe that can be read into an R environment with the 'load' function. The variables loaded are named ‘fault_free_training’, ‘fault_free_testing’, ‘faulty_testing’, and ‘faulty_training’, corresponding to the RData files.

    Each dataframe contains 55 columns:

    Column 1 ('faultNumber') ranges from 1 to 20 in the “Faulty” datasets and represents the fault type in the TEP. The “FaultFree” datasets only contain fault 0 (i.e. normal operating conditions).

    Column 2 ('simulationRun') ranges from 1 to 500 and represents a different random number generator state from which a full TEP dataset was generated (Note: the actual seeds used to generate training and testing datasets were non-overlapping).

    Column 3 ('sample') ranges either from 1 to 500 (“Training” datasets) or 1 to 960 (“Testing” datasets). The TEP variables (columns 4 to 55) were sampled every 3 minutes for a total duration of 25 hours and 48 hours respectively. Note that the faults were introduced 1 and 8 hours into the Faulty Training and Faulty Testing datasets, respectively.

    Columns 4 to 55 contain the process variables; the column names retain the original variable names.
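    The timing described above can be cross-checked with a little arithmetic (a sanity sketch; the helper names are invented):

```python
SAMPLE_MINUTES = 3  # TEP variables were sampled every 3 minutes

def duration_hours(n_samples):
    """Total run duration implied by a sample count."""
    return n_samples * SAMPLE_MINUTES / 60

def samples_before_fault(hours):
    """Number of fault-free samples before a fault introduced after `hours`."""
    return hours * 60 // SAMPLE_MINUTES

assert duration_hours(500) == 25  # training runs: 500 samples over 25 hours
assert duration_hours(960) == 48  # testing runs: 960 samples over 48 hours
assert samples_before_fault(1) == 20    # faulty training: fault starts after sample 20
assert samples_before_fault(8) == 160   # faulty testing: fault starts after sample 160
```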

    Acknowledgements

    This work was sponsored by the Office of Naval Research, Human & Bioengineered Systems (ONR 341), program officer Dr. Jeffrey G. Morrison under contract N00014-15-C-5003. The views expressed are those of the authors and do not reflect the official policy or position of the Office of Naval Research, Department of Defense, or US Government.

    User Agreement

    By accessing or downloading the data or work provided here, you, the User, agree that you have read this agreement in full and agree to its terms.

    The person who owns, created, or contributed a work to the data or work provided here dedicated the work to the public domain and has waived his or her rights to the work worldwide under copyright law. You can copy, modify, distribute, and perform the work, for any lawful purpose, without asking permission.

    In no way are the patent or trademark rights of any person affected by this agreement, nor are the rights that any other person may have in the work or in how the work is used, such as publicity or privacy rights.

    Pacific Science & Engineering Group, Inc., its agents and assigns, make no warranties about the work and disclaim all liability for all uses of the work, to the fullest extent permitted by law.

    When you use or cite the work, you shall not imply endorsement by Pacific Science & Engineering Group, Inc., its agents or assigns, or by another author or affirmer of the work.

    This Agreement may be amended, and the use of the data or work shall be governed by the terms of the Agreement at the time that you access or download the data or work from this Website.

  19. Council; Council Files September 22, 1843, Case of Isaac Leavitt, GC3/series...

    • search.dataone.org
    • dataverse.harvard.edu
    Updated Nov 21, 2023
    Cite
    Digital Archive of Massachusetts Anti-Slavery and Anti-Segregation Petitions, Massachusetts Archives, Boston MA (2023). Council; Council Files September 22, 1843, Case of Isaac Leavitt, GC3/series 378, Petition of Charles W. Lillie [Dataset]. http://doi.org/10.7910/DVN/2RMA9
    Explore at:
    Dataset updated
    Nov 21, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    Digital Archive of Massachusetts Anti-Slavery and Anti-Segregation Petitions, Massachusetts Archives, Boston MA
    Time period covered
    Sep 11, 1843
    Description

    Petition subject: Execution case
    Original: http://nrs.harvard.edu/urn-3:FHCL:12232985
    Date of creation: 1843-09-11
    Petition location: Roxbury
    Selected signatures: Charles W. Lillie; Stephen R. Doggett; Caroline Williams
    Total signatures: 13
    Legal voter signatures (males not identified as non-legal): 9
    Female signatures: 4
    Female only signatures: No
    Identifications of signatories: inhabitants, [females]
    Prayer format was printed vs. manuscript: Manuscript
    Signatory column format: not column separated
    Additional non-petition or unrelated documents available at archive: additional documents available
    Additional archivist notes: Isaac Leavitt
    Location of the petition at the Massachusetts Archives of the Commonwealth: Governor Council Files, September 22, 1843, Case of Isaac Leavitt
    Acknowledgements: Supported by the National Endowment for the Humanities (PW-5105612), Massachusetts Archives of the Commonwealth, Radcliffe Institute for Advanced Study at Harvard University, Center for American Political Studies at Harvard University, Institutional Development Initiative at Harvard University, and Harvard University Library.

  20. R/V Ron Brown Ozone Column Data

    • data.ucar.edu
    • ckanprod.data-commons.k8s.ucar.edu
    ascii
    Updated Oct 7, 2025
    Cite
    Anne M. Thompson; James E. Johnson (2025). R/V Ron Brown Ozone Column Data [Dataset]. http://doi.org/10.26023/QE6B-M46P-9B0V
    Explore at:
    asciiAvailable download formats
    Dataset updated
    Oct 7, 2025
    Authors
    Anne M. Thompson; James E. Johnson
    Time period covered
    Jan 17, 1999 - Feb 19, 1999
    Area covered
    Description

    This dataset contains the Ron Brown ozonesonde profile data.

Cite
Jeremy Jeanne (2019). The Pizza Problem [Dataset]. https://www.kaggle.com/jeremyjeanne/google-hashcode-pizza-training-2019

The Pizza Problem

A dataset to find the best way to cut a rectangular pizza

Explore at:
zip(178852 bytes)Available download formats
Dataset updated
Feb 8, 2019
Authors
Jeremy Jeanne
License

https://creativecommons.org/publicdomain/zero/1.0/

Description

Problem description

Pizza

The pizza is represented as a rectangular, 2-dimensional grid of R rows and C columns. The cells within the grid are referenced using a pair of 0-based coordinates [r, c] , denoting respectively the row and the column of the cell.

Each cell of the pizza contains either:

mushroom, represented in the input file as M
tomato, represented in the input file as T

Slice

A slice of pizza is a rectangular section of the pizza delimited by two rows and two columns, without holes. The slices we want to cut out must contain at least L cells of each ingredient (that is, at least L cells of mushroom and at least L cells of tomato) and at most H cells of any kind in total - surprising as it is, there is such a thing as too much pizza in one slice. The slices being cut out cannot overlap. The slices being cut do not need to cover the entire pizza.

Goal

The goal is to cut correct slices out of the pizza, maximizing the total number of cells in all slices.

Input data set

The input data is provided as a data set file - a plain text file containing exclusively ASCII characters, with lines terminated by a single ‘\n’ character at the end of each line (UNIX-style line endings).

File format

The file consists of:

one line containing the following natural numbers separated by single spaces:
R (1 ≤ R ≤ 1000) is the number of rows
C (1 ≤ C ≤ 1000) is the number of columns
L (1 ≤ L ≤ 1000) is the minimum number of each ingredient cells in a slice
H (1 ≤ H ≤ 1000) is the maximum total number of cells of a slice

Google 2017, All rights reserved.

R lines describing the rows of the pizza (one after another). Each of these lines contains C characters describing the ingredients in the cells of the row (one cell after another). Each character is either ‘M’ (for mushroom) or ‘T’ (for tomato).

Example

3 5 1 6
TTTTT
TMMMT
TTTTT

3 rows, 5 columns, min 1 of each ingredient per slice, max 6 cells per slice

Example input file.
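A minimal parser for this format might look as follows (an illustrative sketch, not official tooling; the function name parse_pizza is hypothetical):

```python
def parse_pizza(text):
    """Parse a pizza input file: header 'R C L H', then R rows of 'M'/'T'."""
    lines = text.strip().split("\n")
    r, c, l, h = (int(x) for x in lines[0].split())
    grid = lines[1:1 + r]
    assert all(len(row) == c for row in grid), "each row must have C cells"
    return r, c, l, h, grid

example = "3 5 1 6\nTTTTT\nTMMMT\nTTTTT\n"
r, c, l, h, grid = parse_pizza(example)
# r=3, c=5, l=1, h=6; grid[1] == "TMMMT"
```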

Submissions

File format

The file must consist of:

one line containing a single natural number S (0 ≤ S ≤ R × C), representing the total number of slices to be cut,
S lines describing the slices. Each of these lines must contain the following natural numbers separated by single spaces:
r1, c1, r2, c2 (0 ≤ r1, r2 < R, 0 ≤ c1, c2 < C) describe a slice of pizza delimited by the rows r1 and r2 and the columns c1 and c2, including the cells of the delimiting rows and columns. The rows (r1 and r2) can be given in any order. The columns (c1 and c2) can be given in any order too.

Example

3
0 0 2 1
0 2 2 2
0 3 2 4

3 slices.

First slice between rows (0,2) and columns (0,1).
Second slice between rows (0,2) and columns (2,2).
Third slice between rows (0,2) and columns (3,4).
Example submission file.


Slices described in the example submission file are marked in green, orange and purple in the accompanying figure (not reproduced here).

Validation

For the solution to be accepted:

the format of the file must match the description above,
each cell of the pizza must be included in at most one slice,
each slice must contain at least L cells of mushroom,
each slice must contain at least L cells of tomato,
total area of each slice must be at most H

Scoring

The submission gets a score equal to the total number of cells in all slices. Note that there are multiple data sets representing separate instances of the problem. The final score for your team is the sum of your best scores on the individual data sets.

Scoring example

The example submission file given above cuts the slices of 6, 3 and 6 cells, earning 6 + 3 + 6 = 15 points.
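The validation rules and scoring above can be combined into one checker (a hedged sketch under the stated rules; score_slices is a hypothetical helper, not reference code):

```python
def score_slices(grid, L, H, slices):
    """Check each slice against the rules above and return the total score."""
    used = set()
    total = 0
    for r1, c1, r2, c2 in slices:
        rlo, rhi = sorted((r1, r2))   # rows may be given in any order
        clo, chi = sorted((c1, c2))   # columns may be given in any order
        cells = {(r, c) for r in range(rlo, rhi + 1) for c in range(clo, chi + 1)}
        assert len(cells) <= H, "slice exceeds H cells"
        assert not (used & cells), "slices overlap"
        counts = {"M": 0, "T": 0}
        for r, c in cells:
            counts[grid[r][c]] += 1
        assert counts["M"] >= L and counts["T"] >= L, "too few of an ingredient"
        used |= cells
        total += len(cells)
    return total

grid = ["TTTTT", "TMMMT", "TTTTT"]
# The three slices from the example submission cover 6 + 3 + 6 = 15 cells.
score = score_slices(grid, 1, 6, [(0, 0, 2, 1), (0, 2, 2, 2), (0, 3, 2, 4)])
# score == 15
```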
