The average energy consumption of a ChatGPT request was estimated at *** watt-hours, nearly ** times that of a regular Google search, which reportedly consumes *** Wh per request. BLOOM had a similar energy consumption, at around **** Wh per request. Meanwhile, incorporating generative AI into every Google search could lead to a power consumption of *** Wh per request, based on server power consumption estimations.
Google’s energy consumption has increased over the last few years, reaching 25.9 terawatt hours in 2023, up from 12.8 terawatt hours in 2019. The company has made efforts to make its data centers more efficient through customized high-performance servers, smart temperature and lighting controls, advanced cooling techniques, and machine learning.

Datacenters and energy

Through its operations, Google pursues a more sustainable impact on the environment by creating efficient data centers that use less energy than average, transitioning toward renewable energy, creating sustainable workplaces, and providing its users with the technological means toward a cleaner future for future generations. Through its efficient data centers, Google has also managed to divert waste from its operations away from landfills.

Reducing Google’s carbon footprint

Google’s clean energy efforts are also related to its efforts to reduce its carbon footprint. Since its commitment to using 100 percent renewable energy, the company has met its targets largely through solar and wind power purchase agreements and by buying renewable power from utilities. Google is one of the largest corporate purchasers of renewable energy in the world.
Attribution-ShareAlike 4.0 (CC BY-SA 4.0)https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
To power each Google Cloud region and/or data center, we use electricity from the grid where the region is located. This electricity generates more or less carbon emissions (gCO2eq), depending on the type of power plants generating electricity for that grid and when we consume it. In 2020, we set a goal to match our energy consumption with carbon-free energy (CFE), every hour and in every region, by 2030. As we work towards our 2030 goal, we want to provide transparency on our progress.

To characterize each region we use a metric, "CFE%". This metric is calculated for every hour in every region and tells us what percentage of the energy we consumed during that hour was carbon-free. We take into account the carbon-free energy that’s already supplied by the grid, in addition to the investments we have made in renewable energy in that location to reach our 24/7 carbon-free objective. We then aggregate the available average hourly CFE percentage for each region for the year. We do not currently have the hourly energy information needed to calculate the metrics for all regions; we anticipate rolling out the calculated metrics to additional regions as the hourly data becomes available. The hourly grid mix data used to calculate these metrics is from Electricity Maps.

This public dataset is hosted in Google BigQuery and is included in BigQuery's 1TB/mo of free tier processing. This means that each user receives 1TB of free BigQuery processing every month, which can be used to run queries on this public dataset. Watch this short video to learn how to get started quickly using BigQuery to access public datasets.
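The hourly-to-annual aggregation behind the CFE% metric can be sketched in a few lines of Python. The region names and hourly values below are invented for illustration; they are not real Google data:

```python
# Sketch: aggregate hourly carbon-free fractions into an annual CFE% per region.
# The numbers below are made up for illustration only.
hourly_cfe = {
    "europe-west1": [0.92, 0.88, 0.75, 0.81],  # one entry per hour with data
    "asia-south1": [0.31, 0.28, 0.35],
}

def annual_cfe_percent(hourly_fractions):
    """Average the available hourly carbon-free fractions, expressed as a percent."""
    return round(100 * sum(hourly_fractions) / len(hourly_fractions), 1)

annual = {region: annual_cfe_percent(hours) for region, hours in hourly_cfe.items()}
```

Note that, as described above, only hours with available data enter the average, so regions with incomplete hourly data are excluded from the published metrics.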
Google’s total greenhouse gas (GHG) emissions increased by 13 percent in 2023, to 14.31 million metric tons of carbon dioxide equivalent (MtCO₂e). That year, Google’s carbon intensity was approximately 11.4 tCO₂e per unit of revenue.

Google’s emissions surge

Google’s GHG emissions have increased by 48 percent since 2019, the base year for the tech giant’s goal of reaching net zero. The main reason for these rising emissions is the soaring energy demand at Google’s data centers, primarily driven by the company’s expanding artificial intelligence (AI) services. AI requires considerable amounts of energy for computation and data storage.

Google’s climate targets at risk

Google has set the target of reaching net zero emissions across all its operations and value chain by 2030. This includes slashing Scope 1, 2, and 3 emissions by 50 percent from the 2019 base year. But with the company’s emissions currently rising and energy demand from its AI services set to grow further, these targets are being put at risk. One way in which Google is aiming to address its rising emissions is to move toward purchasing high-quality carbon removal credits.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Important: As a not-for-profit research organisation, if you found this dataset useful we would appreciate your time in filling out this short survey.
This dataset contains 3 aggregate datasets from the electricity smart meter data of over 25,000 customers in Great Britain (GB) from March 2021 - March 2022.
For each consumer, we know (via a survey) what low carbon technologies (LCTs) they own. The potential LCT options are: Solar PV, Heat Pump (Air Source, or Ground Source), Electric Vehicle, Battery, Electric Storage Heaters.
For simplicity, this dataset contains only customers with one type of LCT (with the exception of Solar PV, where we include Solar PV + Battery customers, as is common in GB). We do not include customers with multiple LCTs (for example, home battery + EV).
We include quantiles of usage for each half hour (the "profile") for each type of LCT ownership "archetype", both overall (when season=None) and by season. As is common in the literature, we normalise by the floor area of the house in square meters, using open EPC data in GB (https://epc.opendatacommunities.org/), to get watt-hours per square meter. You can also find the raw, unnormalised kWh values by quantile in this release. These two datasets have the quantiles for each half hour period. In addition, we release the daily quantiles of electricity consumption, in kWh per square meter, by LCT type.
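The normalisation step described above can be sketched as follows. The half-hourly readings and floor area are invented for illustration; the released dataset aggregates quantiles over 25,000 customers:

```python
import statistics

# Half-hourly consumption for one hypothetical customer, in kWh.
half_hourly_kwh = [0.12, 0.34, 0.50, 0.28]
floor_area_m2 = 80.0  # from open EPC data in the real pipeline

# Convert each half-hour reading to watt-hours per square meter,
# as in the released normalised profiles (1 kWh = 1000 Wh).
wh_per_m2 = [1000 * kwh / floor_area_m2 for kwh in half_hourly_kwh]

# Daily total in kWh per square meter, as in the daily-quantile release.
daily_kwh_per_m2 = sum(half_hourly_kwh) / floor_area_m2

# Across many customers, the release reports quantiles per half hour
# (the median is shown here as one example quantile).
median_wh_per_m2 = statistics.median(wh_per_m2)
```

In the actual release, the quantiles are taken across customers within each archetype for each half-hour period, not within a single customer's day as in this toy example.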
In summary, the data we are releasing, aggregated over 25,000 customers over 1 year of usage from March 2021 - March 2022, is:
We believe this data will be useful for modelling efforts, as customers with different types of LCTs use energy at different times of the day, and by different amounts daily. By releasing this data openly, we hope forecasting scenarios for the future energy system are more accurate. We have a supporting blog post on our website at https://www.centrefornetzero.org/res/lessons-from-early-adopters-electricity-consumption-profiles/.
The data set records the per capita electricity consumption from 1971 to 2014 for 65 countries along the Belt and Road. Data source: IEA, http://www.iea.org/stats/index.asp. Data on electric power production and consumption are collected from national energy agencies by the International Energy Agency (IEA) and adjusted by the IEA to meet international definitions. Data are reported as net consumption as opposed to gross consumption. Net consumption excludes the energy consumed by the generating units. For all countries except the United States, total electric power consumption is equal to total net electricity generation plus electricity imports minus electricity exports minus electricity distribution losses.
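The accounting identity in that definition can be written as a small function. The figures in the example are invented, purely to show the arithmetic:

```python
def net_consumption(net_generation, imports, exports, distribution_losses):
    """Total electric power consumption per the IEA definition quoted above
    (all countries except the United States). All inputs share one unit (e.g. GWh)."""
    return net_generation + imports - exports - distribution_losses

# Example with made-up figures, in GWh:
total = net_consumption(net_generation=500.0, imports=40.0,
                        exports=25.0, distribution_losses=35.0)
```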
# LQN4Energy-Replication-Package
This repository contains the replication package and dataset of the paper titled "An approach using performance models for supporting energy analysis of software systems".
This study has been developed by:
1. [Vincenzo Stoico](https://scholar.google.com/citations?user=E8C9Uz4AAAAJ&hl=en)(University of L'Aquila)
2. [Vittorio Cortellessa](https://scholar.google.com/citations?hl=en&user=s4JPUOEAAAAJ)(University of L'Aquila)
3. [Ivano Malavolta](https://scholar.google.com/citations?hl=en&user=ya3htIoAAAAJ)(Vrije University Amsterdam)
4. [Daniele Di Pompeo](https://scholar.google.com/citations?hl=en&user=E2dr5vIAAAAJ)(University of L'Aquila)
5. [Luigi Pomante](https://scholar.google.com/citations?hl=en&user=q2_sZiMAAAAJ)(University of L'Aquila)
For further details, comments, and/or suggestions, you can write an email to the following address:
## Repository Description
This repository contains three directories:
- `code`: it contains the scripts that read the dataset and generate the results for the Digital Camera and Train Ticket Booking System, namely the response time for the supplied workloads, the CPU utilization, the average power (i.e., the multiplier), and the average energy consumption.
- `dc_energy_estimation.py`: generates the energy estimates for Digital Camera
- `dc_overall_stats.py`: calculates the performance and the energy metrics from the measurements collected for Digital Camera
- `ttbs_performance_stats.py`: calculates the performance metrics from the measurements retrieved for Train Ticket Booking System
- `ttbs_energy_stats.py`: calculates the energy metrics from the measurements taken for Train Ticket Booking System
- `ttbs_overall_stats.py`: generates the energy estimates and the charts comparing estimates and measurements for Train Ticket Booking System. It prints the Root Mean Square Error (RMSE) and the Mean Absolute Percentage Error (MAPE).
- `dataset`: it has two subdirectories: `dc` and `ttbs` containing the data collected during the experiments performed for Digital Camera and Train Ticket Booking System, respectively;
- `model`: it includes the Layered Queuing Networks we used to retrieve CPU Utilization and the response time for both case studies;
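The RMSE and MAPE printed by `ttbs_overall_stats.py` follow the standard definitions; a minimal sketch is shown below. The sample measurements and estimates are invented for illustration and are not results from the paper:

```python
import math

def rmse(measured, estimated):
    """Root Mean Square Error between paired measurements and estimates."""
    return math.sqrt(sum((m - e) ** 2 for m, e in zip(measured, estimated)) / len(measured))

def mape(measured, estimated):
    """Mean Absolute Percentage Error, in percent."""
    return 100 * sum(abs((m - e) / m) for m, e in zip(measured, estimated)) / len(measured)

# Invented example values (e.g., response times in ms):
measured = [10.0, 20.0, 40.0]
estimated = [12.0, 18.0, 44.0]
```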
## How do I run this?
The scripts are written in Python, so you must have a recent version of Python installed to run them. In addition, they require `pandas`, `matplotlib`, `numpy`, and `scipy`.
The suite for solving the Layered Queuing Networks must be installed to retrieve the performance estimates. It can be found at the following repository:
[https://github.com/layeredqueuing/V5](https://github.com/layeredqueuing/V5)
After installation is complete, you can execute the list of commands indicated below to obtain the results for the case studies. The commands must be executed in the described order. The results will be generated in the `results/` directory.
### Digital Camera
1. move to the `~/code` folder
2. execute `python dc_overall_stats.py` (it takes ~1 minute)
3. move to the `~/model` directory and execute `lqns dc.lqnx > ../results/dc_estimates.csv`
4. go back to the `~/code` folder and execute `python dc_energy_estimation.py`
### Train Ticket Booking System
1. move to the `~/code` folder
2. execute `python ttbs_performance_stats.py`
3. execute `python ttbs_energy_stats.py`
4. move to the `~/model` directory and execute `lqns ttbs.lqnx > ../results/ttbs_performance_estimates.csv`
5. go back to the `~/code` folder and execute `python ttbs_overall_stats.py`
This data set records the statistical data on the total amount and composition of terminal energy consumption by industry in Qinghai Province from 1997 to 2020. The data are divided by agriculture, forestry, animal husbandry and fishery; industry; construction; transportation, storage and postal services; wholesale and retail; accommodation and catering; other industries; and the life of urban and rural residents. The data are compiled from the Statistical Yearbook of Qinghai Province issued by the Qinghai Provincial Bureau of Statistics.

The set contains 17 data tables with the same structure. For example, the data table for 2010 has five fields:
Field 1: Industry
Field 2: total energy consumption
Field 3: raw coal consumption
Field 4: gasoline consumption
Field 5: electricity consumption
The Global Power Plant Database is a comprehensive, open source database of power plants around the world. It centralizes power plant data to make it easier to navigate, compare and draw insights. Each power plant is geolocated and entries contain information on plant capacity, generation, ownership, and fuel type. As …
The revenue of the energy management market is forecast to grow significantly until 2029. Revenue is estimated to rise continuously in the segments for smart thermostats as well as smart AC & heater controls. The Smart AC & Heater Controls segment reaches the highest value, at 8.31 billion U.S. dollars in 2029.
This data package includes the underlying data to replicate the charts presented in Energy transition: The race between technology and political backlash, PIIE Working Paper 24-4.
If you use the data, please cite as: Gourinchas, Pierre-Olivier, Gregor Schwerhoff, and Antonio Spilimbergo (2024). Energy transition: The race between technology and political backlash, PIIE Working Paper 24-4. Peterson Institute for International Economics.
This data package includes the underlying data and files to replicate the calculations, charts, and tables presented in Against the Wind: China's Struggle to Integrate Wind Energy into Its National Grid, PIIE Policy Brief 17-5. If you use the data, please cite as: Lam, Long, Lee G. Branstetter, and Inês M. L. Azevedo. (2017). Against the Wind: China's Struggle to Integrate Wind Energy into Its National Grid. PIIE Policy Brief 17-5. Peterson Institute for International Economics.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains research data and software (re)use indications (formal citations, informal mentions) in scholarly works related to High Energy Physics. 1,411 research and software indications were identified by a mix of approaches: use of citation discovery services and multiple search approaches in Google Scholar. The dataset contains indications by what approach the (re)use indications were found. All identified research data and software (re)use indications were classified according to their purpose, location, and elements.
The data was collected in 2018 for a PhD thesis on research data and software (re)use indications in scholarly works.
This dataset records statistical data on the total energy consumption and composition of Qinghai Province from 1980 to 2022, divided by major years. The data is compiled from the Qinghai Provincial Statistical Yearbook released by the Qinghai Provincial Bureau of Statistics. The energy consumption and related data from 2005 to 2013 were revised based on the results of the second and third national economic censuses. For 2015 and earlier years, electricity is the sum of primary electricity and net electricity inflows; from 2016 onward, electricity is primary electricity consumption, calculated at equal value (using the coal consumption for power generation in the current year). Coal includes: raw coal, washed coal, other washed coal, coal products, coke, coke oven gas, coal gangue, blast furnace gas, other coking products, converter gas, other gas, etc. Petroleum includes: crude oil, gasoline, kerosene, diesel, fuel oil, naphtha, lubricating oil, paraffin, solvent oil, petroleum asphalt, petroleum coke, liquefied petroleum gas, refinery dry gas, etc.

The dataset contains one data table; the total energy consumption and composition table has six fields:
Field 1: Year
Field 2: Total energy consumption
Field 3: Composition of raw coal
Field 4: Composition of crude oil
Field 5: Composition of natural gas
Field 6: Composition of hydropower
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains High Energy Physics related research data and software (re)use indications (formal citations, informal mentions) in scholarly works. All research data and software resources were identified and extracted from INSPIRE-HEP. The (re)use indications were identified by a mix of approaches: use of citation discovery services and multiple search approaches in Google Scholar. All identified research data and software (re)use indications were classified according to their purpose, location, and elements.
The data was collected in 2018 for a PhD thesis on research data and software (re)use indications in scholarly works.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains High Energy Physics related research data and software (re)use indications (formal citations, informal mentions) in scholarly works. All research data and software resources were identified and extracted from Zenodo. The (re)use indications were identified by a mix of approaches: use of citation discovery services and multiple search approaches in Google Scholar. All identified research data and software (re)use indications were classified according to their purpose, location, and elements.
The data was collected in 2018 for a PhD thesis on research data and software (re)use indications in scholarly works.
This repository includes python scripts and input/output data associated with the following publication:
[1] Brown, P.R.; O'Sullivan, F. "Shaping photovoltaic array output to align with changing wholesale electricity price profiles." Applied Energy 2019. https://doi.org/10.1016/j.apenergy.2019.113734
Please cite reference [1] for full documentation if the contents of this repository are used for subsequent work.
Some of the scripts and data are also used in the following working paper:
[2] Brown, P.R.; O'Sullivan, F. "Spatial and temporal variation in the value of solar power across United States electricity markets". Working Paper, MIT Center for Energy and Environmental Policy Research. 2019. http://ceepr.mit.edu/publications/working-papers/705
All code is in python 3 and relies on a number of dependencies that can be installed using pip or conda.
Contents
pvvm.zip : Python module with functions for modeling PV generation, calculating PV revenues and capacity factors, and optimizing PV orientation.
notebooks.zip : Jupyter notebooks, including:
pvvm-pvtos-data.ipynb: Example scripts used to download and clean input LMP data, determine LMP node locations, and reproduce some figures in reference [1]
pvvm-pvtos-analysis.ipynb: Example scripts used to perform the calculations and reproduce some figures in reference [1]
pvvm-pvtos-plots.ipynb: Scripts used to produce additional figures in reference [1]
pvvm-example-generation.ipynb: Example scripts demonstrating the usage of the PV generation model and orientation optimization
html.zip : Static images of the above Jupyter notebooks for viewing without a python kernel
data.zip : Day-ahead and real-time nodal locational marginal prices (LMPs) for CAISO, ERCOT, MISO, NYISO, and ISONE.
At the time of publication of this repository, permission had not been received from PJM to republish their LMP data. If permission is received in the future, a new version of this repository will be linked here with the complete dataset.
results.zip : Simulation results associated with reference [1] above, including modeled revenue, capacity factor, and optimized orientations for PV systems at all LMP nodes
Data terms and usage notes
ISO LMP data are used with permission from the different ISOs. Adapting the MIT License (https://opensource.org/licenses/MIT), "The data are provided 'as is', without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose and noninfringement. In no event shall the authors or sources be liable for any claim, damages or other liability, whether in an action of contract, tort or otherwise, arising from, out of or in connection with the data or other dealings with the data." Copyright and usage permissions for the LMP data are available on the ISO websites, linked below.
ISO-specific notes:
CAISO data from http://oasis.caiso.com/mrioasis/logon.do are used pursuant to the terms at http://www.caiso.com/Pages/PrivacyPolicy.aspx#TermsOfUse.
ERCOT data are from http://www.ercot.com/mktinfo/prices.
MISO data are from https://www.misoenergy.org/markets-and-operations/real-time--market-data/market-reports/ and https://www.misoenergy.org/markets-and-operations/real-time--market-data/market-reports/market-report-archives/.
PJM data were originally downloaded from https://www.pjm.com/markets-and-operations/energy/day-ahead/lmpda.aspx and https://www.pjm.com/markets-and-operations/energy/real-time/lmp.aspx. At the time of this writing these data are currently hosted at https://dataminer2.pjm.com/feed/da_hrl_lmps and https://dataminer2.pjm.com/feed/rt_hrl_lmps.
NYISO data from http://mis.nyiso.com/public/ are used subject to the disclaimer at https://www.nyiso.com/legal-notice.
ISONE data are from https://www.iso-ne.com/isoexpress/web/reports/pricing/-/tree/lmps-da-hourly and https://www.iso-ne.com/isoexpress/web/reports/pricing/-/tree/lmps-rt-hourly-final. The Material is provided on an "as is" basis. ISO New England Inc., to the fullest extent permitted by law, disclaims all warranties, either express or implied, statutory or otherwise, including but not limited to the implied warranties of merchantability, non-infringement of third parties' rights, and fitness for particular purpose. Without limiting the foregoing, ISO New England Inc. makes no representations or warranties about the accuracy, reliability, completeness, date, or timeliness of the Material. ISO New England Inc. shall have no liability to you, your employer or any other third party based on your use of or reliance on the Material.
Data workup: LMP data were downloaded directly from the ISOs using scripts similar to the pvvm.data.download_lmps() function (see below for caveats), then repackaged into single-node single-year files using the pvvm.data.nodalize() function. These single-node single-year files were then combined into the dataframes included in this repository, using the procedure shown in the pvvm-pvtos-data.ipynb notebook for MISO. We provide these yearly dataframes, rather than the long-form data, to minimize file size and number. These dataframes can be unpacked into the single-node files used in the analysis using the pvvm.data.copylmps() function.
Code license and usage notes
Code (*.py and *.ipynb files) is provided under the MIT License, as specified in the pvvm/LICENSE file.
Updates to the code, if any, will be posted in the non-static repository at https://github.com/patrickbrown4/pvvm_pvtos. The code in the present repository has the following version-specific dependencies:
matplotlib: 3.0.3
numpy: 1.16.2
pandas: 0.24.2
pvlib: 0.6.1
scipy: 1.2.1
tqdm: 4.31.1
To use the NSRDB download functions, modify the "settings.py" file to insert a valid NSRDB API key, which can be requested from https://developer.nrel.gov/signup/. Locations can be specified by passing latitude, longitude floats to pvvm.data.downloadNSRDBfile(), or by passing a string googlemaps query to pvvm.io.queryNSRDBfile(). To use the googlemaps functionality, request a googlemaps API key (https://developers.google.com/maps/documentation/javascript/get-api-key) and insert it in the "settings.py" file.
Note that many of the ISO websites have changed in the time since the functions in the pvvm.data module were written and the LMP data used in the above papers were downloaded. As such, the pvvm.data.download_lmps() function no longer works for all ISOs and years. We provide this function to illustrate the general procedure used, and do not intend to maintain it or keep it up to date with the changing ISO websites. For up-to-date functions for accessing ISO data, the following repository (no connection to the present work) may be helpful: https://github.com/catalyst-cooperative/pudl.
Of the leading ten technology companies worldwide based on market capitalization, Samsung is the company consuming the most electricity at nearly ** million megawatt-hours (MWh) based on the company's most recent 2023 figures. Google, Taiwan Semiconductor Manufacturing Company (TSMC), and Microsoft came in second, third, and fourth place in electricity consumption, respectively.
https://creativecommons.org/publicdomain/zero/1.0/
The dataset consists of green technology patents sourced from the "patents-public-data.patents.publications" dataset and is structured in three versions:
All versions are sorted by publication date, with the most recent patents listed first and provided in JSON format.
Selection Criteria: The patents included in this dataset are filtered based on keywords related to renewable energy and sustainable technology solutions. The SQL query utilizes regular expressions to search for terms such as "solar energy," "photovoltaics," "hydropower," "hydrogen energy," "geothermal energy," "wind energy," and "carbon capture and storage/e-mobility" within both the abstract and title of the patents.
Data Source: The data is sourced from the publicly accessible Google Patents dataset, which aggregates global patent information.
The following information and metadata applies to both the Phase I (Hydrodynamics) and Phase II (Full System Power Take-Off) zip folders, which contain testing data from the OSU (Oregon State University) O.H. Hinsdale Wave Research Laboratory, from both OSU and the University of Hawaii at Manoa (UH). See the zip folders provided in the downloads section below. For experimental data of the full system, including PTO, see the Phase II dataset.

There are two main directories in each Phase's zip folder: "OSU_data" and "UH_data". The "OSU_data" directory contains data collected from their DAQ (data acquisition system), which includes all wave gauge observations, as well as body motions derived from their Qualisys motion tracking system. The organization of the directory follows OSU's convention. Detailed information on the instrument setup can be found under "OSU_data/docs/setup/instm_locations". The experiments conducted are documented in "OSU_data/docs/daq_logs", which maps each trial number to the corresponding data located under "OSU_data/data" in several formats (e.g., ".mat" and ".txt"). Inside each trial directory, data is provided for each of the instruments defined in "OSU_data/docs/setup/instm_locations".

The "UH_data" directory contains data collected from their DAQ. The data is stored in the ".tdms" file format. There are free plug-ins for Microsoft Excel and MathWorks MATLAB to read the ".tdms" format. Below are a few links providing methods to read in the data, but a Google search should identify alternative sources if these no longer exist (valid as of January 2024):
Excel: http://www.ni.com/example/27944/en/
MATLAB: https://www.mathworks.com/matlabcentral/fileexchange/30023-tdms-reader
The Excel plug-in is recommended for getting a quick overview of the data. The UH data is organized by directory name, in which the sub-directories for each experiment contain a directory whose name defines the wave height and period for the experimental data within.
For example, a directory name "H02_T0275" corresponds to an experiment with a wave height of 0.2m and a period of 2.75s. For random wave data, the gamma value is also included in the directory name. For example, a directory name "H02_T0225_G18" corresponds to an experiment with a significant wave height of 0.2m, a peak period of 2.25s, and a gamma value of 1.8, with each spectrum being a TMA spectrum. For the free decay experiments, the directory name is defined by the initial angular displacement. For example, a directory name "ang05_run01" corresponds to an experiment with an initial angular displacement of 5 degrees. There is a dataset in the UH data for each corresponding experiment defined in the OSU DAQ logs.

The ".tdms" data is output from the DAQ at fixed intervals. Therefore, if multiple files are contained within the folder, the data will need to be stitched together. Within the UH dataset, there are two input channels from the OSU DAQ providing a random square wave signal for time synchronization ("ENV-WHT-0010") and a high/low signal ("ENV-WHT-0012") to identify when the wave maker is active (+5V). The UH data is logged as a collection of channel outputs. Channels not in use for the OSU testing (either Phase I or Phase II) are marked "nan" below. If a sensor is disconnected, it will record noise throughout the experiment. Below are the channel definitions in terms of what they measure:
GPS Time = time
CYL-POS-0001 = position between flap and fixed reference
CYL-LCA-0001 = force between flap and hydraulic cylinder
REC-LPT-0001 = nan
REC-HPT-0001 = nan
REC-HPT-0002 = nan
REC-HPT-0003 = nan
HHT-HPT-0001 = pressure at exhaust ("head" only)
REC-FQC-0001 = nan
REC-FQC-0002 = nan
HHT-FQC-0001 = flow at exhaust ("head" only)
ENV-WHT-0001 = nan
ENV-WHT-0002 = nan
ENV-WHT-0003 = nan
ENV-WHT-0010 = random signal from OSU DAQ
ENV-WHT-0012 = high/low signal from OSU DAQ
Also included is a calibration curve to convert the string pot data to flap pi...
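Because the ".tdms" files are written at fixed intervals, one run's channel data may span several files and must be concatenated in time order. A minimal sketch of that stitching step is below; the per-file contents are stand-in lists (in practice a TDMS reader, such as the npTDMS Python package, would supply the channel arrays, and the channel names here are just examples from the list above):

```python
# Each element stands for one .tdms interval file's channel data,
# already ordered by file creation time. Real values would come from a TDMS reader.
interval_files = [
    {"CYL-POS-0001": [0.0, 0.1], "CYL-LCA-0001": [5.0, 5.2]},
    {"CYL-POS-0001": [0.2, 0.3], "CYL-LCA-0001": [5.1, 4.9]},
]

def stitch(files):
    """Concatenate each channel's samples across consecutive interval files."""
    stitched = {}
    for f in files:
        for channel, samples in f.items():
            stitched.setdefault(channel, []).extend(samples)
    return stitched

run = stitch(interval_files)
```

The random square wave on "ENV-WHT-0010" can then be cross-correlated against the OSU record to align the stitched UH time series with the OSU data.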