License: CC0 1.0 (https://creativecommons.org/publicdomain/zero/1.0/)
Problem description
Pizza
The pizza is represented as a rectangular, 2-dimensional grid of R rows and C columns. The cells within the grid are referenced using a pair of 0-based coordinates [r, c], denoting respectively the row and the column of the cell.
Each cell of the pizza contains either:
mushroom, represented in the input file as M
tomato, represented in the input file as T
Slice
A slice of pizza is a rectangular section of the pizza delimited by two rows and two columns, without holes. The slices we want to cut out must contain at least L cells of each ingredient (that is, at least L cells of mushroom and at least L cells of tomato) and at most H cells of any kind in total - surprising as it is, there is such a thing as too much pizza in one slice. The slices being cut out cannot overlap. The slices being cut do not need to cover the entire pizza.
Goal
The goal is to cut correct slices out of the pizza maximizing the total number of cells in all slices.
Input data set
The input data is provided as a data set file - a plain text file containing exclusively ASCII characters with lines terminated with a single '\n' character at the end of each line (UNIX-style line endings).
File format
The file consists of:
one line containing the following natural numbers separated by single spaces:
R (1 ≤ R ≤ 1000) is the number of rows
C (1 ≤ C ≤ 1000) is the number of columns
L (1 ≤ L ≤ 1000) is the minimum number of cells of each ingredient in a slice
H (1 ≤ H ≤ 1000) is the maximum total number of cells in a slice
R lines describing the rows of the pizza (one after another). Each of these lines contains C characters describing the ingredients in the cells of the row (one cell after another). Each character is either ‘M’ (for mushroom) or ‘T’ (for tomato).
Example
3 5 1 6
TTTTT
TMMMT
TTTTT
3 rows, 5 columns, min 1 of each ingredient per slice, max 6 cells per slice
Example input file.
Submissions
File format
The file must consist of:
one line containing a single natural number S (0 ≤ S ≤ R × C), representing the total number of slices to be cut,
S lines describing the slices. Each of these lines must contain the following natural numbers separated by single spaces:
r1 c1 r2 c2 describing a slice of pizza delimited by the rows r1 and r2 (0 ≤ r1, r2 < R) and the columns c1 and c2 (0 ≤ c1, c2 < C), including the cells of the delimiting rows and columns. The rows (r1 and r2) can be given in any order. The columns (c1 and c2) can be given in any order too.
Example
3
0 0 2 1
0 2 2 2
0 3 2 4
3 slices.
First slice between rows (0,2) and columns (0,1).
Second slice between rows (0,2) and columns (2,2).
Third slice between rows (0,2) and columns (3,4).
Example submission file.
Figure: slices described in the example submission file, marked in green, orange and purple.
Validation
For the solution to be accepted:
the format of the file must match the description above,
each cell of the pizza must be included in at most one slice,
each slice must contain at least L cells of mushroom,
each slice must contain at least L cells of tomato,
the total area of each slice must be at most H.
Scoring
The submission gets a score equal to the total number of cells in all slices. Note that there are multiple data sets representing separate instances of the problem. The final score for your team is the sum of your best scores on the individual data sets.
Scoring example
The example submission file given above cuts the slices of 6, 3 and 6 cells, earning 6 + 3 + 6 = 15 points.
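To make the file formats and the validation rules concrete, here is a minimal sketch in Python (not part of the official statement) that parses an input file, checks a submission against the rules above, and computes its score. The file names pizza.in and pizza.out are placeholders.

def read_pizza(path):
    # First line: R C L H; then R rows of 'M'/'T' characters.
    with open(path) as f:
        r, c, l, h = map(int, f.readline().split())
        grid = [f.readline().strip() for _ in range(r)]
    return r, c, l, h, grid

def score_submission(in_path, out_path):
    r, c, l, h, grid = read_pizza(in_path)
    used = [[False] * c for _ in range(r)]
    total = 0
    with open(out_path) as f:
        s = int(f.readline())          # number of slices
        for _ in range(s):
            r1, c1, r2, c2 = map(int, f.readline().split())
            r1, r2 = sorted((r1, r2))  # rows/columns may be given in any order
            c1, c2 = sorted((c1, c2))
            cells = (r2 - r1 + 1) * (c2 - c1 + 1)
            mushrooms = sum(grid[i][j] == 'M'
                            for i in range(r1, r2 + 1)
                            for j in range(c1, c2 + 1))
            tomatoes = cells - mushrooms
            if mushrooms < l or tomatoes < l or cells > h:
                raise ValueError("slice violates the L/H constraints")
            for i in range(r1, r2 + 1):
                for j in range(c1, c2 + 1):
                    if used[i][j]:
                        raise ValueError("slices overlap")
                    used[i][j] = True
            total += cells
    return total

# For the example files above, score_submission("pizza.in", "pizza.out") returns 15.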
SCOAPE_Pandora_Data contains the column NO2 and ozone data collected by Pandora spectrometers during the Satellite Coastal and Oceanic Atmospheric Pollution Experiment (SCOAPE). Pandora instruments were located on the University of Southern Mississippi's Research Vessel (R/V) Point Sur and at the Louisiana Universities Marine Consortium (LUMCON; Cocodrie, LA). Data collection for this product is complete.
The Outer Continental Shelf Lands Act (OCSLA) requires the US Department of Interior Bureau of Ocean Energy Management (BOEM) to ensure compliance with the US National Ambient Air Quality Standard (NAAQS) so that Outer Continental Shelf (OCS) oil and natural gas (ONG) exploration, development, and production do not significantly impact the air quality of any US state. In 2017, BOEM and NASA entered into an interagency agreement to begin a study to scope out the feasibility of BOEM personnel using a suite of NASA and non-NASA resources to assess how pollutants from ONG exploration, development, and production activities affect air quality. An important activity of this interagency agreement was SCOAPE, a field deployment that took place in May 2019 and aimed to assess the capability of satellite observations for monitoring offshore air quality. The outcomes of the study are documented in two BOEM reports (Duncan, 2020; Thompson, 2020).
To address BOEM's goals, the SCOAPE science team conducted surface-based remote sensing and in-situ measurements, which enabled a systematic assessment of the application of satellite observations, primarily NO2, for monitoring air quality. The SCOAPE field measurements consisted of onshore ground sites, including in the vicinity of LUMCON, as well as those from the University of Southern Mississippi's R/V Point Sur, which cruised in the Gulf of America from 10-18 May 2019. Based on the 2014 and 2017 BOEM emissions inventories as well as daily air quality and meteorological forecasts, the cruise track was designed to sample both areas with large oil drilling platforms and areas with dense small natural gas facilities. The R/V Point Sur was instrumented to carry out both remote sensing and in-situ measurements of NO2 and O3 along with in-situ CH4, CO2, CO, and VOC tracers, which allowed detailed characterization of airmass type and emissions. In addition, there were also measurements of multi-wavelength AOD and black carbon as well as planetary boundary layer structure and meteorological variables, including surface temperature, humidity, and winds. A ship-based spectrometer instrument provided remotely-sensed total column amounts of NO2 and O3 for direct comparison with satellite measurements. Ozonesondes and radiosondes were also launched 1-3 times daily from the R/V Point Sur to provide O3 and meteorological vertical profile observations. The ground-based observations, primarily at LUMCON, included spectrometer-measured column NO2 and O3, in-situ NO2, VOCs, and planetary boundary layer structure. A NO2sonde was also mounted on a vehicle with the goal to detect pollution onshore from offshore ONG activities during onshore flow; data were collected along coastal Louisiana from Burns Point Park to Grand Isle to the tip of the Mississippi River delta. The in-situ measurements were reported in ICARTT files or Excel files. The remote sensing data are in either HDF or netCDF files.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset includes ALL the abundance values, zero and non-zero. Taxonomic groups are displayed in the 'taxon' column, rather than in separate columns, with abundances in the 'abund_L' column. For the original presentation of the data, see VPR_ashjian_orig. For a version of the data with only non-zero data, see VPR_ashjian_nonzero. In the 'nonzero' dataset, values of 0 in the abund_L column (taxon abundance) have been removed.
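For reference, the 'nonzero' variant described above can be reproduced from this full dataset with a one-line pandas filter. This is only a sketch; the file name used here is a placeholder for however the data are exported (e.g., as CSV).

import pandas as pd

# Placeholder file name for an export of this (full, zero-inclusive) dataset.
df = pd.read_csv("VPR_ashjian_all.csv")

# Dropping rows whose abundance is zero reproduces the 'nonzero' version.
nonzero = df[df["abund_L"] > 0]
print(len(df), "rows in total;", len(nonzero), "rows with non-zero abundance")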
Methodology
The following information was extracted from C.J. Ashjian et al., Deep-Sea Research II 48 (2001) 245-282. An in-depth discussion of the data and sampling methods can be found there.
The Video Plankton Recorder was towed at 2 m/s, collecting data from the surface to the bottom (towyo). The VPR was equipped with 2-4 cameras, temperature and conductivity probes, fluorometer and transmissometer. Environmental data was collected at 0.25 Hz (CI9407) or 0.5 Hz (EN259, EN262). Video images were recorded at 60 fields per second (fps).
Video tapes were analyzed for plankton abundances using a semi-automated method discussed in Davis, C.S. et al., Deep-Sea Research II 43 (1996) 1946-1970. In-focus images were extracted from the video tapes and identified by hand to particle type, taxon, or species. Plankton and particle observations were merged with environmental and navigational data by binning the observations for each category into the time intervals at which the environmental data were collected (again see above Davis citation). Concentrations were calculated utilizing the total volume (liters) imaged during that period. For less-abundant categories, usually only a single organism was observed during each time interval so that the resulting concentrations are close to presence or absence data rather than covering a range of values.
License: CC0 1.0 (https://creativecommons.org/publicdomain/zero/1.0/)
By Reddit [source]
This dataset provides an in-depth look into what communities find important and engaging in the news. With this data, researchers can discover trends related to user engagement and popular topics within subreddits. By examining the “score” and “comms_num” columns, researchers will be able to pinpoint which topics are most liked, discussed, or shared within the various subreddits. Researchers may also gain insights into not only how popular a topic is but how it is growing over time. Additionally, by exploring the body column of the dataset, researchers can understand more about which types of news stories drive conversation within particular subreddits, providing an opportunity for deeper analysis of each subreddit’s diverse community dynamics.
The dataset includes eight columns: title, score, id, url, comms_num, created, body and timestamp, which can help identify key insights into user engagement among popular subreddits. With this data we may also determine relationships between topics of discussion and their impact on user engagement, allowing us to build a better understanding of issue-based conversations online as well as uncover emerging trends in online news consumption habits.
This dataset is useful for those who are looking to gain insight into the popularity and user engagement of specific subreddits. The data includes 8 different columns including title, score, id, url, comms_num, created, body and timestamp. This can provide valuable information about how users view and interact with particular topics across various subreddits.
In this guide we’ll look at how you can use this dataset to uncover trends in user engagement on topics within specific subreddits as well as measure the overall popularity of these topics within a subreddit.
1) Analyzing Score: By analyzing the “score” column you can determine which news stories are popular in a particular subreddit and which ones aren't by looking at how many upvotes each story has received. With this data you will be able to determine trends in what types of stories users preferred within a particular subreddit over time.
2) Analyzing Comms_Num: Similarly to the score column, you can analyze the “comms_num” column to see which news stories had more engagement from users by tracking the number of comments received on each post. This can provide insight into what types of stories tend to draw more comment activity from users in certain subreddits, whether over a single day or an extended period such as multiple weeks or months.
3) Analyzing Body: Additionally, by looking at the “body” column for each post, researchers can gain a better understanding of which kinds of topics and news draw attention among specific Reddit communities. With that complete picture, researchers have access not only to data measuring Reddit buzz but also to the topic discussion and comments, helping generate further insights into why certain posts might be popular or receive more comments than others.
Overall, this dataset provides valuable insights about user engagement related specifically to topics trending across subreddits, giving anyone interested in researching such questions an easier way to access those insights in one place.
- Grouping news topics within particular subreddits and assessing the overall popularity of those topics in terms of scores/user engagement.
- Correlating user engagement with certain news topics to understand how they influence discussion or reactions on a subreddit.
- Examining the potential correlation between score and the actual body content of a given post to assess what types of content are most successful in gaining interest from users and creating positive engagement for posts.
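As a starting point for the score and comms_num analyses described above, the following pandas snippet (a sketch, assuming the data are read from the news.csv file listed below with the columns named in this description) ranks stories by upvotes and by comment count and checks how the two relate.

import pandas as pd

df = pd.read_csv("news.csv")

# 1) Most popular stories by score (upvotes).
top_by_score = df.sort_values("score", ascending=False).head(10)[["title", "score"]]

# 2) Stories drawing the most discussion, by number of comments.
top_by_comments = df.sort_values("comms_num", ascending=False).head(10)[["title", "comms_num"]]

# How strongly upvotes and comment activity move together.
print(df[["score", "comms_num"]].corr())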
If you use this dataset in your research, please credit the original authors.
License: CC0 1.0 Universal (CC0 1.0) - Public Domain Dedication No Copyright - You can copy, modify, distribute and perform the work, even for commercial purposes, all without asking permission. See Other Information.
File: news.csv | Column name | Description ...
Sea Scout Hydrographic Survey, H13177 (EM2040). Mainline coverage within the survey area consisted of Complete Coverage (100% side scan sonar with concurrent multibeam data) acquisition. The assigned Fish Haven area and associated debris area were surveyed with Object Detection MBES coverage. Bathymetric and water column data were acquired with a Kongsberg EM2040C multibeam echo sounder aboard the R/V Sea Scout, and bathymetry data were acquired with a Kongsberg EM3002 multibeam echo sounder aboard the R/V C-Wolf. Side scan sonar acoustic imagery was collected with a Klein 5000 V2 system aboard the R/V Sea Scout and an EdgeTech 4200 aboard the R/V C-Wolf.
Datasets generated for the Physical Review E article with title: "Traveling Bubbles and Vortex Pairs within Symmetric 2D Quantum Droplets" by Paredes, Guerra-Carmenate, Salgueiro, Tommasini and Michinel. In particular, we provide the data needed to generate the figures in the publication, which illustrate the numerical results found during this work.
We also include python code in the file "plot_from_data_for_repository.py" that generates a version of the figures of the paper from .pt data sets. Data can be read and plots can be produced with a simple modification of the python code.
Figure 1: Data are in fig1.csv
The csv file has four columns separated by commas. The four columns correspond to values of r (first column) and the function psi(r) for the three cases depicted in the figure (columns 2-4).
Figures 2 and 4: Data are in data_figs_2_and_4.pt
This is a data file generated with the torch module of python. It includes eight torch tensors for the spatial grid "x" and "y" and for the complex values of psi for the six eigenstates depicted in figures 2 and 4 ("psia", "psib", "psic", "psid", "psie", "psif"). Notice that figure 2 shows the square of the modulus and figure 4 the argument; both are obtained from the same data sets.
Figure 3: Data are in fig3.csv
The csv file has three columns separated by commas. The three columns correspond to values of momentum p (first column), energy E (second column) and velocity U (third column).
Figure 5: Data are in fig5.csv
The csv file has three columns separated by commas. The three columns correspond to values of momentum p (first column), the minimum value of |psi|^2 (second column) and the value of |psi|^2 at the center (third column).
Figure 6: Data are in data_fig_6.pt
This is a data file generated with the torch module of python. It includes six torch tensors for the spatial grid "x" and "y" and for the complex values of psi for the four instants of time depicted in figure 6 ("psia", "psib", "psic", "psid").
Figure 7: Data are in data_fig_7.pt
This is a data file generated with the torch module of python. It includes six torch tensors for the spatial grid "x" and "y" and for the complex values of psi for the four instants of time depicted in figure 7 ("psia", "psib", "psic", "psid").
Figures 8 and 10: Data are in data_figs_8_and_10.pt
This is a data file generated with the torch module of python. It includes eight torch tensors for the spatial grid "x" and "y" and for the complex values of psi for the six eigenstates depicted in figures 8 and 10 ("psia", "psib", "psic", "psid", "psie", "psif"). Notice that figure 8 shows the square of the modulus and figure 10 the argument; both are obtained from the same data sets.
Figure 9: Data are in fig9.csv
The csv file has two columns separated by commas. The two columns correspond to values of momentum p (first column) and energy (second column).
Figure 11: Data are in data_fig_11.pt
This is a data file generated with the torch module of python. It includes ten torch tensors for the spatial grid "x" and "y" and for the complex values of psi for the two cases, four instants of time for each case, depicted in figure 11 ("psia", "psib", "psic", "psid", "psie", "psif", "psig", "psih").
Figure 12: Data are in data_fig_12.pt
This is a data file generated with the torch module of python. It includes eight torch tensors for the spatial grid "x" and "y" and for the complex values of psi for the six instants of time depicted in figure 12 ("psia", "psib", "psic", "psid", "psie", "psif").
Figure 13: Data are in data_fig_13.pt
This is a data file generated with the torch module of python. It includes ten torch tensors for the spatial grid "x" and "y" and for the complex values of psi for the eight instants of time depicted in figure 13 ("psia", "psib", "psic", "psid", "psie", "psif", "psig", "psih").
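Independently of the plotting script shipped with the data, one of the .pt files can be inspected directly. The sketch below assumes the tensors are stored under the names documented above and uses torch and matplotlib.

import torch
import matplotlib.pyplot as plt

# Load one of the .pt files; keys assume the documented tensor names.
data = torch.load("data_figs_2_and_4.pt")
x = data["x"].numpy()
y = data["y"].numpy()
psi = data["psia"]                      # one of the six eigenstates

density = (psi.abs() ** 2).numpy()      # square of the modulus (as in figure 2)
phase = psi.angle().numpy()             # argument of psi (as in figure 4)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
ax1.pcolormesh(x, y, density, shading="auto")
ax1.set_title("|psi|^2")
ax2.pcolormesh(x, y, phase, shading="auto")
ax2.set_title("arg(psi)")
plt.show()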
License: Attribution-NonCommercial 4.0 (CC BY-NC 4.0), https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Data acquisition was performed using the multibeam echosounder Kongsberg EM122. Raw data are delivered in Kongsberg .wcd format. The data acquisition was part of the international project JPI Oceans - MiningImpact (Environmental Impacts and Risks of Deep-Sea Mining).
License: CC0 1.0 (https://creativecommons.org/publicdomain/zero/1.0/)
Important Note: The columns 'title' and 'comments' follow specific conventions, and it is important to understand their values in order to interpret the data correctly.
In the 'title' column, there may be a significant number of null values. A null value in this column indicates that the corresponding row pertains to a comment rather than a post. To identify the relationship between comment rows and their associated posts, you can examine the 'post_id' column. Rows with the same 'post_id' value refer to comments that are associated with the post identified by that 'post_id'.
Similarly, in the 'comments' column, the presence or absence of null values is crucial for determining whether a row represents a comment or a post. If the 'comments' column is null, it signifies a comment row. Conversely, if the 'comments' column is populated (including cases where the value is 0), it indicates a post row.
Understanding these conventions will enable accurate analysis and interpretation of the dataset.
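A short pandas sketch of the convention described above (the file name is a placeholder; the column names are the ones discussed here):

import pandas as pd

df = pd.read_csv("reddit_data.csv")  # placeholder file name

# A null 'title' marks a comment row; a non-null 'comments' value (even 0) marks a post row.
comments = df[df["title"].isna()]
posts = df[df["comments"].notna()]

# Link comments to their parent post via the shared 'post_id' value.
# (This assumes post rows also carry their own id in the 'post_id' column.)
linked = comments.merge(posts[["post_id", "title"]], on="post_id",
                        how="left", suffixes=("", "_post"))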
Discover the booming market for waterproof & anti-corrosion operation columns! Learn about its $2.5 billion (2025 est.) size, 7% CAGR, key drivers, and top players like Eaton & Emerson. Explore regional market shares and future growth projections in this detailed analysis.
Data collected for this research provides information on mixing heights, surface and column formaldehyde during the KORUS-AQ field campaign and over two research sites in South Korea. This dataset is associated with the following publication: Spinei, E., A. Whitehill, A. Fried, M. Tiefengraber, T. Knepp, S. Herndon, J. Herman, M. Muller, N. Abuhassan, A. Cede, D. Richter, J. Walega, J. Crawford, J. Szykman, L. Valin, D. Williams, R. Long, R. Swap, Y. Lee, N. Nowak, and B. Poche. The first evaluation of formaldehyde column observations by improved Pandora spectrometers during the KORUS-AQ field study. Atmospheric Measurement Techniques. Copernicus Publications, Katlenburg-Lindau, GERMANY, 11(9): 4943-4961, (2018).
License: U.S. Government Works, https://www.usa.gov/government-works
License information was derived automatically
The United States Geological Survey (USGS) is conducting a study on the effects of climate change on ocean acidification within the Gulf of Mexico, dealing specifically with the effect of ocean acidification on marine organisms and habitats. To investigate this, the USGS participated in two cruises in the West Florida Shelf and northern Gulf of Mexico regions aboard the R/V Weatherbird II, a ship of opportunity led by Dr. Kendra Daly of the University of South Florida (USF). The cruises occurred September 20 - 28 and November 2 - 4, 2011. Both left from and returned to Saint Petersburg, Florida, but followed different routes (see Trackline). On both cruises the USGS collected data pertaining to pH, dissolved inorganic carbon (DIC), and total alkalinity in discrete samples. Discrete surface samples were taken approximately hourly during transit on both cruises: 95 samples were collected over a span of 2127 km in September, and 7 over a 732 km trackline on the November cruise. Along wit ...
There are a total of 17 datasets used to produce all the Figures in the article. There are mainly two different types of data files: GUP White Dwarf Mass-Radius (GUPWD_M-R) data and GUP White Dwarf Profile (GUPWD_Profile) data.
The file GUPWD_M-R gives only the Mass-Radius relation with Radius (km) in the first column and Mass (solar mass) in the second.
On the other hand, GUPWD_Profile provides the complete profile with the following columns.
column 1: Dimensionless central Fermi Momentum $\xi_c$
column 2: Central Density $\rho_c$ (Log10[$\rho_c$ g cm$^{-3}$])
column 3: Radius $R$ (km)
column 4: Mass $M$ (solar mass)
column 5: Square of fundamental frequency $\omega_0^2$ (sec$^{-2}$)
=====================================================================================
Figure 1 (a) gives Mass-Radius (M-R) curves for $\beta_0=10^{42}$, $10^{41}$ and $10^{40}$. The filenames of the corresponding dataset are
GUPWD_M-R[Beta0=E42].dat
GUPWD_M-R[Beta0=E41].dat
GUPWD_M-R[Beta0...
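A brief sketch of loading these plain-text .dat files with numpy; the M-R file name is one listed above, while the profile file name is hypothetical, and the column order follows the descriptions given earlier.

import numpy as np

# Mass-Radius file: column 1 = Radius (km), column 2 = Mass (solar mass).
radius_km, mass_msun = np.loadtxt("GUPWD_M-R[Beta0=E42].dat", unpack=True)

# Profile file (hypothetical file name): xi_c, log10 rho_c, R (km), M (solar mass), omega0^2 (s^-2).
xi_c, log_rho_c, r_km, m_msun, omega0_sq = np.loadtxt("GUPWD_Profile[Beta0=E42].dat", unpack=True)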
KORUSAQ_Ground_Pandora_Data contains all of the Pandora instrumentation data collected during the KORUS-AQ field study. Contained in this dataset are column measurements of NO2, O3, and HCHO. Pandoras were situated at various ground sites across the study area, including NIER-Taehwa, NIER-Olympic Park, NIER-Gwangju, NIER-Anmyeon, Busan, Yonsei University, Songchon, and Yeoju. Data collection for this product is complete.
The KORUS-AQ field study was conducted in South Korea during May-June, 2016. The study was jointly sponsored by NASA and Korea's National Institute of Environmental Research (NIER). The primary objectives were to investigate the factors controlling air quality in Korea (e.g., local emissions, chemical processes, and transboundary transport) and to assess future air quality observing strategies incorporating geostationary satellite observations. To achieve these science objectives, KORUS-AQ adopted a highly coordinated sampling strategy involving surface and airborne measurements, including both in-situ and remote sensing instruments.
Surface observations provided details on ground-level air quality conditions while airborne sampling provided an assessment of conditions aloft relevant to satellite observations and necessary to understand the role of emissions, chemistry, and dynamics in determining air quality outcomes. The sampling region covers the South Korean peninsula and surrounding waters with a primary focus on the Seoul Metropolitan Area. Airborne sampling was primarily conducted from near surface to about 8 km with extensive profiling to characterize the vertical distribution of pollutants and their precursors. The airborne observational data were collected from three aircraft platforms: the NASA DC-8, NASA B-200, and Hanseo King Air. Surface measurements were conducted from 16 ground sites and 2 ships: R/V Onnuri and R/V Jang Mok.
The major data products collected from both the ground and air include in-situ measurements of trace gases (e.g., ozone, reactive nitrogen species, carbon monoxide and dioxide, methane, non-methane and oxygenated hydrocarbon species), aerosols (e.g., microphysical and optical properties and chemical composition), active remote sensing of ozone and aerosols, and passive remote sensing of NO2, CH2O, and O3 column densities. These data products support research focused on examining the impact of photochemistry and transport on ozone and aerosols, evaluating emissions inventories, and assessing the potential use of satellite observations in air quality studies.
Discover the booming waterproof & anti-corrosion operation column market! This report analyzes market size ($1.5B in 2025, projected to reach $2.5B by 2033 at a 7% CAGR), key trends, leading companies (Eaton, Emerson, etc.), and regional insights. Learn about the drivers, restraints, and future growth potential.
This survey is part of a long-term Bonneville Power Administration-funded effort by academic and federal scientists to understand coastal ecosystems and biological and physical processes that may influence recruitment variability of salmon in Pacific Northwest waters. Prior to a potential switch to the R/V Shimada as a long-term platform, we intend to compare catches between vessels (the R/V Shimada and the F/V Frosti) on the continental shelf of Washington. Sampling will occur during the day for 3 days (23-25 May) with surface trawls along pre-specified transects. One trawl will be performed as we leave the Strait of Juan de Fuca on the afternoon of the 22nd. In addition, a bongo net will be towed several times each night between 2300 and 0500 (nights of 19-22 June).
This dataverse contains the data referenced in Rieth et al. (2017). Issues and Advances in Anomaly Detection Evaluation for Joint Human-Automated Systems. To be presented at Applied Human Factors and Ergonomics 2017.
Each .RData file is an external representation of an R dataframe that can be read into an R environment with the 'load' function. The variables loaded are named ‘fault_free_training’, ‘fault_free_testing’, ‘faulty_testing’, and ‘faulty_training’, corresponding to the RData files.
Each dataframe contains 55 columns:
Column 1 ('faultNumber') ranges from 1 to 20 in the “Faulty” datasets and represents the fault type in the TEP. The “FaultFree” datasets only contain fault 0 (i.e. normal operating conditions).
Column 2 ('simulationRun') ranges from 1 to 500 and represents a different random number generator state from which a full TEP dataset was generated (Note: the actual seeds used to generate training and testing datasets were non-overlapping).
Column 3 ('sample') ranges either from 1 to 500 (“Training” datasets) or 1 to 960 (“Testing” datasets). The TEP variables (columns 4 to 55) were sampled every 3 minutes for a total duration of 25 hours and 48 hours respectively. Note that the faults were introduced 1 and 8 hours into the Faulty Training and Faulty Testing datasets, respectively.
Columns 4 to 55 contain the process variables; the column names retain the original variable names.
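Because these are standard .RData files, they can also be read outside of R. The sketch below uses the pyreadr package; the file name is hypothetical, while the variable name 'fault_free_training' is one of those listed above.

import pyreadr

# pyreadr.read_r returns a dict-like object of pandas DataFrames keyed by the
# R variable names stored in the file.
result = pyreadr.read_r("TEP_FaultFree_Training.RData")   # hypothetical file name
df = result["fault_free_training"]

# One fault-free training run: 500 samples x 55 columns.
run1 = df[(df["faultNumber"] == 0) & (df["simulationRun"] == 1)]
print(run1.shape)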
This work was sponsored by the Office of Naval Research, Human & Bioengineered Systems (ONR 341), program officer Dr. Jeffrey G. Morrison under contract N00014-15-C-5003. The views expressed are those of the authors and do not reflect the official policy or position of the Office of Naval Research, Department of Defense, or US Government.
By accessing or downloading the data or work provided here, you, the User, agree that you have read this agreement in full and agree to its terms.
The person who owns, created, or contributed a work to the data or work provided here dedicated the work to the public domain and has waived his or her rights to the work worldwide under copyright law. You can copy, modify, distribute, and perform the work, for any lawful purpose, without asking permission.
In no way are the patent or trademark rights of any person affected by this agreement, nor are the rights that any other person may have in the work or in how the work is used, such as publicity or privacy rights.
Pacific Science & Engineering Group, Inc., its agents and assigns, make no warranties about the work and disclaim all liability for all uses of the work, to the fullest extent permitted by law.
When you use or cite the work, you shall not imply endorsement by Pacific Science & Engineering Group, Inc., its agents or assigns, or by another author or affirmer of the work.
This Agreement may be amended, and the use of the data or work shall be governed by the terms of the Agreement at the time that you access or download the data or work from this Website.
Petition subject: Execution case
Original: http://nrs.harvard.edu/urn-3:FHCL:12232985
Date of creation: 1843-09-11
Petition location: Roxbury
Selected signatures: Charles W. Lillie; Stephen R. Doggett; Caroline Williams
Total signatures: 13
Legal voter signatures (males not identified as non-legal): 9
Female signatures: 4
Female only signatures: No
Identifications of signatories: inhabitants, [females]
Prayer format was printed vs. manuscript: Manuscript
Signatory column format: not column separated
Additional non-petition or unrelated documents available at archive: additional documents available
Additional archivist notes: Isaac Leavitt
Location of the petition at the Massachusetts Archives of the Commonwealth: Governor Council Files, September 22, 1843, Case of Isaac Leavitt
Acknowledgements: Supported by the National Endowment for the Humanities (PW-5105612), Massachusetts Archives of the Commonwealth, Radcliffe Institute for Advanced Study at Harvard University, Center for American Political Studies at Harvard University, Institutional Development Initiative at Harvard University, and Harvard University Library.
This dataset contains the Ron Brown ozonesonde profile data.