License: public domain, https://fred.stlouisfed.org/legal/#copyright-public-domain
View data of PCE, an index that measures monthly changes in the price of consumer goods and services as a means of analyzing inflation.
Personal consumption expenditures (PCE) is the value of the goods and services purchased by, or on behalf of, Iowa residents. PCE is divided by the Census Bureau's annual midyear (July 1) population estimates to calculate per capita PCE.
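As a minimal illustration of the division described above (the figures are made up, not actual Iowa data):

statePCE = 150e9;             % hypothetical total state PCE, in dollars
midyearPopulation = 3.2e6;    % hypothetical July 1 population estimate
perCapitaPCE = statePCE / midyearPopulation   % about 46,875 dollars per resident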
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
PCE Price Index Annual Change in the United States increased to 2.74 percent in August from 2.60 percent in July of 2025. This dataset includes a chart with historical data for the United States PCE Price Index Annual Change.
I was looking for data to add to a visualization about minimum wage in the U.S. over the years and found this inflation rate data useful.
Investopedia summary: The inflation rate is the percentage change in the price of products and services from one year to the next. Two of the most common ways to measure inflation are the Consumer Price Index (CPI) calculated by the Bureau of Labor Statistics (BLS) and the personal consumption expenditures (PCE) price index from the Bureau of Economic Analysis (BEA). The CPI measures the change in prices paid by U.S. consumers over time, and it is the most popular way to gauge inflation.
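The year-over-year inflation rate is simply the percent change of the chosen price index. A minimal sketch with made-up index levels:

idxPrev = 120.0;    % hypothetical index level twelve months ago
idxNow  = 123.3;    % hypothetical current index level
inflationRate = (idxNow - idxPrev) / idxPrev * 100   % = 2.75 (percent, year over year)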
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset is related to the Ishigami function benchmark case. A detailed description of the benchmark case can be found on the public online community website UQWorld: https://uqworld.org/t/benchmark-case-ishigami-function/.
The experimental designs include datasets with 40, 80, 120, 160, and 200 samples, each generated using optimized maximin distance Latin Hypercube Sampling (LHS) with 1000 iterations. Each dataset is replicated 20 times. The validation set contains 100,000 samples generated by Monte Carlo simulation. Each dataset contains input samples and the corresponding computational model responses.
The dataset file includes two variables: ExpDesigns and ValidationSet.
Both variables are MATLAB structures with fields X, Y, and nSamples. ExpDesigns is a non-scalar struct array with one element per experimental design group. For the i-th element, the field X holds the replicated input datasets as a matrix of size [number of samples] x [dimensionality] x [number of replications], and the field Y holds the corresponding computational model responses, sized [number of samples] x [number of model outputs] x [number of replications]; each slice of Y matches the experimental design of the same replication. The same logic applies to ValidationSet, except that it contains only one dataset per benchmark case.
The structure can be summarized as follows:
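A schematic of the layout, reconstructed from the description above (the Ishigami function's three inputs and scalar output, and the exact shape of the validation arrays, are assumptions):

% ExpDesigns(i).X        : [nSamples x 3 x 20]   input samples for the i-th group
% ExpDesigns(i).Y        : [nSamples x 1 x 20]   matching model responses
% ExpDesigns(i).nSamples : one of 40, 80, 120, 160, 200
% ValidationSet.X        : [100000 x 3]          Monte Carlo validation inputs
% ValidationSet.Y        : [100000 x 1]          validation responses
% ValidationSet.nSamples : 100000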
The selection of competitors was based on our experience with metamodeling and includes various metamodel types: Polynomial Chaos Expansions (PCE), Polynomial Chaos Kriging (PCK), and Kriging. Given that each metamodel has many hyperparameters, we chose the most general settings to address different benchmark case difficulties, including dimensionality, nonlinearity, and non-monotonicity.
For Polynomial Chaos Expansions (PCE), we used a degree- and q-norm-adaptive approach, which adaptively increases the maximum polynomial degree and the truncation q-norm until the estimated leave-one-out error starts increasing. The maximum number of polynomial interaction terms was limited to 2 because of the memory requirements of large model dimensionalities and large experimental designs. We tested three solvers for computing the PCE coefficients: Least Angle Regression (LARS), Orthogonal Matching Pursuit (OMP), and Subspace Pursuit (SP).
Polynomial Chaos Kriging (PCK) employs a sequential combination of PCE and Kriging. The PCE part uses degree adaptivity with a fixed q-norm; the maximum number of interaction terms is again set to 2, with the LARS solver. Ordinary Kriging is applied with the Matérn-5/2 correlation family and an ellipsoidal (anisotropic) correlation function. We used a hybrid genetic algorithm to optimize the hyperparameters.
We benchmarked both linear and ordinary Kriging, covering the Matérn-5/2 and Gaussian correlation families and separable and ellipsoidal correlation functions, resulting in eight Kriging competitors (2 trend types × 2 correlation families × 2 correlation structures). The hyperparameters were optimized using a hybrid covariance matrix adaptation evolution strategy (CMA-ES).
For further details on the settings, please refer to the competitors.m file and the UQLab user manuals.
The results file contains one variable, Metrics: a MATLAB structure with one field per competitor (currently 12). Each competitor field is a non-scalar struct array. The performance metrics included are RelMSE, RelRMSE, RelMAE, MAPE, Q2, and RelCVErr. For the i-th element of the struct array, each field of Metrics.(CompetitorName) contains the metrics for the corresponding replicated dataset and competitor, structured as follows:
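A schematic of one competitor's entry (the field name PCE_LARS is a hypothetical example, and one metric value per replication is an assumption):

% Metrics.PCE_LARS(i).RelMSE : [1 x 20]  relative MSE, one value per replication
% Metrics.PCE_LARS(i).Q2     : [1 x 20]  predictivity coefficient per replication
% (RelRMSE, RelMAE, MAPE, and RelCVErr follow the same pattern;
%  i indexes the experimental design group, here 40 to 200 samples.)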
The description of the performance measures (metrics) can be found here: https://uqworld.org/t/metamodel-performance-measures/.
We provide files in three languages (MATLAB, Python, and Julia) to showcase how to work with datasets, results, and their visualization. The files are called working_with_datafiles.* (the extension depends on the selected language).
This project was supported by the Open Research Data Program of the ETH Board under Grant number EPFL SCR0902285. The calculations were run on the Euler cluster of ETH Zürich using the MATLAB-based UQLab software developed at the Chair of Risk, Safety and Uncertainty Quantification of ETH Zürich.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset is related to the Two-dimensional heat diffusion model benchmark case. A detailed description of the benchmark case can be found on the public online community website UQWorld: https://uqworld.org/t/benchmark-case-two-dimensional-heat-diffusion-model/.
The experimental designs include datasets with 400, 800, 1200, 1600, and 2000 samples, each generated using optimized maximin distance Latin Hypercube Sampling (LHS) with 1000 iterations. Each dataset is replicated 20 times. The validation set contains 100,000 samples generated by Monte Carlo simulation. Each dataset contains input samples and the corresponding computational model responses.
The dataset file includes two variables: ExpDesigns and ValidationSet.
Both variables are MATLAB structures with fields X, Y, and nSamples. ExpDesigns is a non-scalar struct array with one element per experimental design group. For the i-th element, the field X holds the replicated input datasets as a matrix of size [number of samples] x [dimensionality] x [number of replications], and the field Y holds the corresponding computational model responses, sized [number of samples] x [number of model outputs] x [number of replications]; each slice of Y matches the experimental design of the same replication. The same logic applies to ValidationSet, except that it contains only one dataset per benchmark case.
The structure can be summarized as follows:
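A minimal loading sketch in MATLAB; the file name is hypothetical and must be replaced with the actual dataset file:

S = load('heat_diffusion_2d_dataset.mat');  % hypothetical name; loads ExpDesigns and ValidationSet
ed = S.ExpDesigns(1);                       % smallest group (400 samples)
X1 = ed.X(:, :, 1);                         % inputs of replication 1
Y1 = ed.Y(:, :, 1);                         % matching model responses
fprintf('samples: %d, inputs: %d, replications: %d\n', ...
    ed.nSamples, size(ed.X, 2), size(ed.X, 3));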
The selection of competitors was based on our experience with metamodeling and includes various metamodel types: Polynomial Chaos Expansions (PCE), Polynomial Chaos Kriging (PCK), and Kriging. Given that each metamodel has many hyperparameters, we chose the most general settings to address different benchmark case difficulties, including dimensionality, nonlinearity, and non-monotonicity.
For Polynomial Chaos Expansions (PCE), we used a degree- and q-norm-adaptive approach, which adaptively increases the maximum polynomial degree and the truncation q-norm until the estimated leave-one-out error starts increasing. The maximum number of polynomial interaction terms was limited to 2 because of the memory requirements of large model dimensionalities and large experimental designs. We tested three solvers for computing the PCE coefficients: Least Angle Regression (LARS), Orthogonal Matching Pursuit (OMP), and Subspace Pursuit (SP).
Polynomial Chaos Kriging (PCK) employs a sequential combination of PCE and Kriging. The PCE part uses degree adaptivity with a fixed q-norm; the maximum number of interaction terms is again set to 2, with the LARS solver. Ordinary Kriging is applied with the Matérn-5/2 correlation family and an ellipsoidal (anisotropic) correlation function. We used a hybrid genetic algorithm to optimize the hyperparameters.
We benchmarked both linear and ordinary Kriging, covering the Matérn-5/2 and Gaussian correlation families and separable and ellipsoidal correlation functions, resulting in eight Kriging competitors (2 trend types × 2 correlation families × 2 correlation structures). The hyperparameters were optimized using a hybrid covariance matrix adaptation evolution strategy (CMA-ES).
For further details on the settings, please refer to the competitors.m file and the UQLab user manuals.
The results file contains one variable, Metrics: a MATLAB structure with one field per competitor (currently 12). Each competitor field is a non-scalar struct array. The performance metrics included are RelMSE, RelRMSE, RelMAE, MAPE, Q2, and RelCVErr. For the i-th element of the struct array, each field of Metrics.(CompetitorName) contains the metrics for the corresponding replicated dataset and competitor, structured as follows:
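A sketch of summarizing one metric over the 20 replications; the results file name and the competitor field name PCK are assumptions:

R = load('heat_diffusion_2d_results.mat');  % hypothetical name; loads Metrics
m = R.Metrics.PCK(3);                       % hypothetical competitor, 3rd design group
fprintf('mean RelMSE: %.3e (std %.3e)\n', mean(m.RelMSE), std(m.RelMSE));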
The description of the performance measures (metrics) can be found here: https://uqworld.org/t/metamodel-performance-measures/.
We provide files in three languages (MATLAB, Python, and Julia) to showcase how to work with datasets, results, and their visualization. The files are called working_with_datafiles.* (the extension depends on the selected language).
This project was supported by the Open Research Data Program of the ETH Board under Grant number EPFL SCR0902285. The calculations were run on the Euler cluster of ETH Zürich using the MATLAB-based UQLab software developed at the Chair of Risk, Safety and Uncertainty Quantification of ETH Zürich.
License: Attribution-NonCommercial 4.0 (CC BY-NC 4.0), https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
On the basis of theoretical models and calculations, several alternating polymeric structures have been investigated to develop optimized poly(2,7-carbazole) derivatives for solar cell applications. Selected low band gap alternating copolymers have been obtained via a Suzuki coupling reaction. A good correlation between DFT theoretical calculations performed on model compounds and the experimental HOMO, LUMO, and band gap energies of the corresponding polymers has been obtained. This study reveals that the alternating copolymer HOMO energy level is mainly fixed by the carbazole moiety, whereas the LUMO energy level is mainly related to the nature of the electron-withdrawing comonomer. However, solar cell performance is not solely driven by the energy levels of the materials. Clearly, the molecular weight and the overall organization of the polymers are other key parameters to consider when developing new polymers for solar cells. Preliminary measurements have revealed hole mobilities of about 1 × 10⁻³ cm²·V⁻¹·s⁻¹ and a power conversion efficiency (PCE) of up to 3.6%. Further improvements are anticipated through the rational design of new symmetric low band gap poly(2,7-carbazole) derivatives.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset is related to the One-dimensional diffusion model benchmark case. A detailed description of the benchmark case can be found on the public online community website UQWorld: https://uqworld.org/t/benchmark-case-one-dimensional-diffusion-model.
The experimental designs include datasets with 200, 400, 600, 800, and 1000 samples, each generated using optimized maximin distance Latin Hypercube Sampling (LHS) with 1000 iterations. Each dataset is replicated 20 times. The validation set contains 100,000 samples generated by Monte Carlo simulation. Each dataset contains input samples and the corresponding computational model responses.
The dataset file includes two variables: ExpDesigns and ValidationSet.
Both variables are MATLAB structures with fields X, Y, and nSamples. ExpDesigns is a non-scalar struct array with one element per experimental design group. For the i-th element, the field X holds the replicated input datasets as a matrix of size [number of samples] x [dimensionality] x [number of replications], and the field Y holds the corresponding computational model responses, sized [number of samples] x [number of model outputs] x [number of replications]; each slice of Y matches the experimental design of the same replication. The same logic applies to ValidationSet, except that it contains only one dataset per benchmark case.
The structure can be summarized as follows:
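A sketch that walks over the five experimental design groups; the file name is hypothetical:

S = load('diffusion_1d_dataset.mat');       % hypothetical name; loads ExpDesigns
for i = 1:numel(S.ExpDesigns)
    ed = S.ExpDesigns(i);
    fprintf('group %d: %d samples, %d replications\n', ...
        i, ed.nSamples, size(ed.X, 3));
end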
The selection of competitors was based on our experience with metamodeling and includes various metamodel types: Polynomial Chaos Expansions (PCE), Polynomial Chaos Kriging (PCK), and Kriging. Given that each metamodel has many hyperparameters, we chose the most general settings to address different benchmark case difficulties, including dimensionality, nonlinearity, and non-monotonicity.
For Polynomial Chaos Expansions (PCE), we used a degree- and q-norm-adaptive approach, which adaptively increases the maximum polynomial degree and the truncation q-norm until the estimated leave-one-out error starts increasing. The maximum number of polynomial interaction terms was limited to 2 because of the memory requirements of large model dimensionalities and large experimental designs. We tested three solvers for computing the PCE coefficients: Least Angle Regression (LARS), Orthogonal Matching Pursuit (OMP), and Subspace Pursuit (SP).
Polynomial Chaos Kriging (PCK) employs a sequential combination of PCE and Kriging. The PCE part uses degree adaptivity with a fixed q-norm; the maximum number of interaction terms is again set to 2, with the LARS solver. Ordinary Kriging is applied with the Matérn-5/2 correlation family and an ellipsoidal (anisotropic) correlation function. We used a hybrid genetic algorithm to optimize the hyperparameters.
We benchmarked both linear and ordinary Kriging, covering the Matérn-5/2 and Gaussian correlation families and separable and ellipsoidal correlation functions, resulting in eight Kriging competitors (2 trend types × 2 correlation families × 2 correlation structures). The hyperparameters were optimized using a hybrid covariance matrix adaptation evolution strategy (CMA-ES).
For further details on the settings, please refer to the competitors.m file and the UQLab user manuals.
The results file contains one variable, Metrics: a MATLAB structure with one field per competitor (currently 12). Each competitor field is a non-scalar struct array. The performance metrics included are RelMSE, RelRMSE, RelMAE, MAPE, Q2, and RelCVErr. For the i-th element of the struct array, each field of Metrics.(CompetitorName) contains the metrics for the corresponding replicated dataset and competitor, structured as follows:
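A sketch comparing the competitors on one metric; the results file name is hypothetical, and each metric field is assumed to hold one value per replication:

R = load('diffusion_1d_results.mat');       % hypothetical name; loads Metrics
names = fieldnames(R.Metrics);              % the 12 competitor names
for k = 1:numel(names)
    m = R.Metrics.(names{k})(end);          % largest experimental design group
    fprintf('%-12s mean RelMSE: %.3e\n', names{k}, mean(m.RelMSE));
end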
The description of the performance measures (metrics) can be found here: https://uqworld.org/t/metamodel-performance-measures/.
We provide files in three languages (MATLAB, Python, and Julia) to showcase how to work with datasets, results, and their visualization. The files are called working_with_datafiles.* (the extension depends on the selected language).
This project was supported by the Open Research Data Program of the ETH Board under Grant number EPFL SCR0902285. The calculations were run on the Euler cluster of ETH Zürich using the MATLAB-based UQLab software developed at the Chair of Risk, Safety and Uncertainty Quantification of ETH Zürich.