- t2m: Air temperature at 2 meters (`2m_temperature`, Celsius degrees)
- ssrd: Surface solar radiation (`surface_solar_radiation_downwards`, Watt per square meter)
- ssrdc: Surface solar radiation clear-sky (`surface_solar_radiation_downward_clear_sky`, Watt per square meter)
- ro: Runoff (`runoff`, millimeters)

There is also a set of derived variables:

- ws10: Wind speed at 10 meters (derived from `10m_u_component_of_wind` and `10m_v_component_of_wind`, meters per second)
- ws100: Wind speed at 100 meters (derived from `100m_u_component_of_wind` and `100m_v_component_of_wind`, meters per second)
- CS: Clear-sky index (the ratio between the surface solar radiation and its clear-sky counterpart)
- HDD/CDD: Heating/Cooling Degree Days (derived from the 2-meter temperature following the EUROSTAT definition)

For each variable there are 367 440 hourly samples (from 01-01-1980 00:00:00 to 31-12-2021 23:00:00) for 34/115/309 regions (NUTS 0/1/2). The data is provided in two formats:

- NetCDF version 4 (all the variables hourly, CDD/HDD daily). NOTE: the variables are stored as `int16` using a `scale_factor` to minimise the size of the files.
- Comma Separated Values ("single index" format for all the variables and time frequencies, "stacked" format only for daily and monthly). All the CSV files for each variable are stored in a single zipped file.
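For a quick look at the NetCDF files described above, a minimal `xarray` sketch follows. The file and variable names are illustrative placeholders, and opening with `chunks` assumes Dask is installed; note that xarray unpacks the `int16` values via the stored `scale_factor` automatically:

```python
import xarray as xr

# Open one of the NetCDF files lazily (chunks requires Dask).
# decode_cf is on by default, so the int16 values are unpacked to floats
# via the stored scale_factor; no manual conversion is needed.
ds = xr.open_dataset("era-nuts-t2m-hourly.nc", chunks={"time": 8760})
print(ds)

t2m = ds["t2m"]                                # one time-series per NUTS region
print(t2m.sel(time="2021-06").mean("time"))    # June 2021 mean, per region
```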
## Methodology

The time-series have been generated using the following workflow:

1. The NetCDF files are downloaded from the Copernicus Data Store ("ERA5 hourly data on single levels from 1979 to present" dataset).
2. The data is read in R with the climate4R packages and aggregated using the function `get_ts_from_shp` from panas. All the variables are aggregated at the NUTS boundaries using the average, except for the runoff, which is the sum over all the grid points within the regional/national borders.
3. The derived variables (wind speed, CDD/HDD, clear-sky index) are computed and all the CSV files are generated using R.
4. The NetCDF files are created using `xarray` in Python 3.8.
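The derivations in step 3 are done in R by the authors; the sketch below shows equivalent formulas in Python with toy input values. The degree-day thresholds follow the EUROSTAT definition as I understand it (heating: 18 degC base, 15 degC threshold; cooling: 21 degC base, 24 degC threshold), so treat them as an assumption to verify against the EUROSTAT methodology:

```python
import numpy as np

# Derived variables as described in step 3, sketched with toy values.
# u10/v10: wind components; ssrd/ssrdc: all-sky and clear-sky radiation;
# t2m_daily: daily-mean 2-meter temperature in Celsius.
u10, v10 = np.array([2.0, -1.5]), np.array([3.0, 0.5])
ws10 = np.hypot(u10, v10)                     # wind speed from the components

ssrd, ssrdc = np.array([420.0, 0.0]), np.array([500.0, 0.0])
cs = np.divide(ssrd, ssrdc,                   # clear-sky index, guarding
               out=np.zeros_like(ssrd),       # against division by zero
               where=ssrdc > 0)

t2m_daily = np.array([3.0, 27.5])
hdd = np.where(t2m_daily <= 15.0, 18.0 - t2m_daily, 0.0)   # heating degree days
cdd = np.where(t2m_daily >= 24.0, t2m_daily - 21.0, 0.0)   # cooling degree days
```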
## Example notebooks

The folder `notebooks` in the associated GitHub repository contains two Jupyter notebooks showing how to work effectively with the NetCDF data in `xarray` and how to visualise it in several ways using matplotlib or the enlopy package:

- exploring-ERA-NUTS: shows how to open the NetCDF files (with Dask) and how to manipulate and visualise them.
- ERA-NUTS-explore-with-widget: explores the datasets interactively with Jupyter and ipywidgets.

The notebook exploring-ERA-NUTS is also available rendered as HTML.

## Additional files

The folder `additional files` in the associated GitHub repository contains a map showing the spatial resolution of the ERA5 reanalysis and a CSV file listing the number of grid points within each NUTS 0/1/2 region.

## License

This dataset is released under the CC-BY-4.0 license.
- t2m: Air temperature at 2 meters (`2m_temperature`, Celsius degrees)
- ssrd: Surface solar radiation (`surface_solar_radiation_downwards`, Watt per square meter)
- ssrdc: Surface solar radiation clear-sky (`surface_solar_radiation_downward_clear_sky`, Watt per square meter)
- ro: Runoff (`runoff`, millimeters)

There is also a set of derived variables:

- ws10: Wind speed at 10 meters (derived from `10m_u_component_of_wind` and `10m_v_component_of_wind`, meters per second)
- ws100: Wind speed at 100 meters (derived from `100m_u_component_of_wind` and `100m_v_component_of_wind`, meters per second)
- CS: Clear-sky index (the ratio between the surface solar radiation and its clear-sky counterpart)
- HDD/CDD: Heating/Cooling Degree Days (derived from the 2-meter temperature following the EUROSTAT definition)

For each variable there are 350 599 hourly samples (from 01-01-1980 00:00:00 to 31-12-2019 23:00:00) for 34/115/309 regions (NUTS 0/1/2). The data is provided in two formats:

- NetCDF version 4 (all the variables hourly, CDD/HDD daily). NOTE: the variables are stored as `int16` using a `scale_factor` of 0.01 to minimise the size of the files.
- Comma Separated Values ("single index" format for all the variables and time frequencies, "stacked" format only for daily and monthly). All the CSV files for each variable are stored in a single zipped file.
## Methodology

The time-series have been generated using the following workflow:

1. The NetCDF files are downloaded from the Copernicus Data Store ("ERA5 hourly data on single levels from 1979 to present" dataset).
2. The data is read in R with the climate4R packages and aggregated using the function `get_ts_from_shp` from panas. All the variables are aggregated at the NUTS boundaries using the average, except for the runoff, which is the sum over all the grid points within the regional/national borders.
3. The derived variables (wind speed, CDD/HDD, clear-sky index) are computed and all the CSV files are generated using R.
4. The NetCDF files are created using `xarray` in Python 3.7.

NOTE: the air temperature, solar radiation, runoff and wind speed hourly data have been rounded to two decimal digits.
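The `int16` packing with a `scale_factor` of 0.01 can be reproduced with `xarray` itself. Below is a minimal, illustrative round-trip sketch; the variable name `t2m` and the file name are placeholders, not the dataset's actual encoding script:

```python
import numpy as np
import pandas as pd
import xarray as xr

# Build a toy hourly series, rounded to two decimals as in the dataset.
time = pd.date_range("1980-01-01", periods=24, freq="h")
ds = xr.Dataset({"t2m": ("time", np.round(np.random.randn(24) * 10, 2))},
                coords={"time": time})

# Pack as int16 with scale_factor=0.01 when writing; this is what keeps
# the files small while preserving two decimal digits.
encoding = {"t2m": {"dtype": "int16", "scale_factor": 0.01,
                    "_FillValue": -32767, "zlib": True}}
ds.to_netcdf("t2m_sample.nc", encoding=encoding)

# On read, xarray undoes the packing transparently.
print(xr.open_dataset("t2m_sample.nc")["t2m"].dtype)
```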
## Example notebooks

The folder `notebooks` in the associated GitHub repository contains two Jupyter notebooks showing how to work effectively with the NetCDF data in `xarray` and how to visualise it in several ways using matplotlib or the enlopy package:

- exploring-ERA-NUTS: shows how to open the NetCDF files (with Dask) and how to manipulate and visualise them.
- ERA-NUTS-explore-with-widget: explores the datasets interactively with Jupyter and ipywidgets.

The notebook exploring-ERA-NUTS is also available rendered as HTML.

## Additional files

The folder `additional files` in the associated GitHub repository contains a map showing the spatial resolution of the ERA5 reanalysis and a CSV file listing the number of grid points within each NUTS 0/1/2 region.

## License

This dataset is released under the CC-BY-4.0 license.

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Definition of important variables.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
These are the data and code accompanying "Rapid evolution of thermal tolerance and phenotypic plasticity in variable environments".
Figure 01 takes the following data/scripts:
20210610_Thally02_Figure01_plot_and_stats.R with track_keeper.csv and corr_Response growth .csv. These files contain growth rates per transfer for all selection environments throughout the experiment and growth rates in correlated environments during reciprocal transplants, respectively.
Figure 02 takes the following data/scripts:
20210610_Thally02_Figure02_plot_and_stats.R with 20161120_res_logis_t000.csv and 20161123_resloglint300.csv. These files contain the output describing the shapes of the growth curves (i.e. information on lag time, growth at µmax, K, etc.) for all samples in all selection environments at t0 and t300, respectively.
The remaining figures (position not clear at time of submission) take the following data/scripts.
For plasticity in FRRF data, the script 20181204_FRRF_plasticity.R takes the FRRF raw data contained in allfvfmdata_thally_t300_t000.csv. Extracted parameters are in the files CvaluesThally02.csv, psiPSI_slope_intercept.csv, and rP_extracted_values.csv and can be analysed using the R script extracted parameter plots.R. The R script FRRF visualisation only .R is for visualisation only, as the title suggests.
For comparing plasticity and growth, the data are in 20170327_giantbigtable.csv and can be visualised/analysed with plast vs growth.R.
To recreate the AMOVAs based on SNVs, use all_variants_fixed-only_using_5x_depth_threshold.csv with amova thally02.R.
For additional information, please contact elisa.schaum@uni-hamburg.de
Terms of use: https://ora.ox.ac.uk/terms_of_use
See text in Content.txt.

Prospective gating and automatic reacquisition of data corrupted by respiration motion were implemented in variable flip angle (VFA) and actual flip angle imaging (AFI) MRI scans to enable cardio-respiratory synchronised T1 mapping of the whole mouse. Stability tests of cardio-respiratory gating (CR-gating) and respiratory gating (R-gating), with and without reacquisition, were compared with un-gated scans in 4 mice. The automatic and immediate reacquisition of data corrupted by respiration motion is observed to properly eliminate respiration motion artefacts. CR-gated VFA scans with 16 flip angles and 32 k-lines per cardiac R-wave were acquired, together with R-gated AFI scans, in a total scan time of less than 14 minutes. The VFA data were acquired with a voxel size of 0.075 mm³. T1 was calculated in the whole mouse with a robust and efficient nonlinear least-squares fit of the data. The standard deviation of the T1 measurement is conservatively estimated to be less than 6.2%. The T1 values measured from VFA scans with 32 k-lines per R-wave are in very good agreement with those measured from VFA scans with 8 k-lines per R-wave, even for myocardium. As such, it is demonstrated that prospective gating and reacquisition enable fast and accurate T1 mapping of small animals.
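No code accompanies this abstract; as a rough illustration of the kind of nonlinear least-squares T1 fit it describes, here is a minimal Python sketch based on the standard SPGR/VFA signal model. The TR, flip angles, and noise level are invented for the example and are not taken from the study; in the study the AFI scan additionally corrects the actual flip angles, which is omitted here:

```python
import numpy as np
from scipy.optimize import curve_fit

TR = 0.005  # repetition time in seconds (illustrative, not from the study)

def spgr_signal(alpha, m0, t1):
    """Steady-state SPGR/VFA signal for flip angle alpha in radians."""
    e1 = np.exp(-TR / t1)
    return m0 * np.sin(alpha) * (1.0 - e1) / (1.0 - e1 * np.cos(alpha))

# Synthetic data: 16 flip angles, echoing the 16-angle VFA protocol above.
alphas = np.deg2rad(np.linspace(2.0, 32.0, 16))
signal = spgr_signal(alphas, 1000.0, 1.2)                 # "true" M0 and T1
signal += np.random.default_rng(0).normal(0.0, 5.0, 16)   # measurement noise

# Per-voxel nonlinear least-squares fit of M0 and T1.
(m0_hat, t1_hat), _ = curve_fit(spgr_signal, alphas, signal, p0=[1000.0, 1.0])
print(f"fitted T1 = {t1_hat:.3f} s")
```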
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Environmental variables retained in final generalized additive models for juvenile (age-0), sub-adult (age-1-2), and adult (age-2+) red snapper over unconsolidated substrate of the eastern and western GoM.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
The file set is a freely downloadable aggregation of information about Australian schools. The individual files represent a series of tables which, when considered together, form a relational database. The records cover the years 2008-2014 and include information on approximately 9500 primary and secondary school main-campuses and around 500 subcampuses. The records all relate to school-level data; no data about individuals is included. All the information has previously been published and is publicly available, but it has not previously been released as a documented, useful aggregation. The information includes: (a) the names of schools (b) staffing levels, including full-time and part-time teaching and non-teaching staff (c) student enrolments, including the number of boys and girls (d) school financial information, including Commonwealth government, state government, and private funding (e) test data, potentially for school years 3, 5, 7 and 9, relating to an Australian national testing programme known by the trademark 'NAPLAN'.
Documentation of this Edition 2016.1 is incomplete but the organization of the data should be readily understandable to most people. If you are a researcher, the simplest way to study the data is to make use of the SQLite3 database called 'school-data-2016-1.db'. If you are unsure how to use an SQLite database, ask a guru.
The database was constructed directly from the other included files by running the following command at a command-line prompt: sqlite3 school-data-2016-1.db < school-data-2016-1.sql Note that a few non-consequential errors will be reported if you run this command yourself. The reason for the errors is that the SQLite database is created by importing a series of '.csv' files. Each of the .csv files contains a header line with the names of the variables relevant to each column. The header information is useful for many statistical packages but it is not what SQLite expects, so it complains about the header. Despite the complaint, the database will be created correctly.
Briefly, the data are organized as follows.
(1) The .csv files ('comma separated values') do not actually use a comma as the field delimiter. Instead, the vertical bar character '|' (ASCII Octal 174 Decimal 124 Hex 7C) is used. If you read the .csv files using Microsoft Excel, Open Office, or Libre Office, you will need to set the field-separator to be '|'. Check your software documentation to understand how to do this.
(2) Each school-related record is indexed by an identifier called 'ageid'. The ageid uniquely identifies each school and consequently serves as the appropriate variable for JOIN-ing records in different data files. For example, the first school-related record after the header line in file 'students-headed-bar.csv' shows the ageid of the school as 40000. The relevant school name can be found by looking in the file 'ageidtoname-headed-bar.csv' to discover that the ageid of 40000 corresponds to a school called 'Corpus Christi Catholic School'.
(3) In addition to the variable 'ageid', each record is also identified by one or two 'year' variables. The most important purpose of a year identifier is to indicate the year that is relevant to the record. For example, if one turns again to file 'students-headed-bar.csv', one sees that the first seven school-related records after the header line all relate to the school Corpus Christi Catholic School with ageid of 40000. The variable that identifies the important differences between these seven records is 'studentsyear', which shows the year to which the student data refer. One can see, for example, that in 2008 there were a total of 410 students enrolled, of whom 185 were girls and 225 were boys (look at the variable names in the header line).
(4) The variables relating to years are given different names in each of the different files ('studentsyear' in the file 'students-headed-bar.csv', 'financesummaryyear' in the file 'financesummary-headed-bar.csv'). Despite the different names, the year variables provide the second-level means for joining information across files. For example, if you wanted to relate the enrolments at a school in each year to its financial state, you might wish to JOIN records using 'ageid' in the two files and, secondarily, matching 'studentsyear' with 'financesummaryyear'.
(5) The manipulation of the data is most readily done using the SQL language with the SQLite database, but it can also be done in a variety of statistical packages.
(6) It is our intention for Edition 2016-2 to create large 'flat' files suitable for use by non-researchers who want to view the data with spreadsheet software. The disadvantage of such 'flat' files is that they contain vast amounts of redundant information and might not display the data in the form that the user most wants.
(7) Geocoding of the schools is not available in this edition.
(8) Some files, such as 'sector-headed-bar.csv', are not used in the creation of the database but are provided as a convenience for researchers who might wish to recode some of the data to remove redundancy.
(9) A detailed example of a suitable SQLite query can be found in the file 'school-data-sqlite-example.sql'. The same query, used in the context of analyses done with the excellent, freely available R statistical package (http://www.r-project.org), can be seen in the file 'school-data-with-sqlite.R'.
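As a sketch of the two access paths described above, the snippet below reads one of the '|'-delimited .csv files with pandas and runs a JOIN against the SQLite database. The table names are assumptions inferred from the .csv file names; the authoritative names are defined in school-data-2016-1.sql:

```python
import sqlite3
import pandas as pd

# The '.csv' files use '|' as the field delimiter, so override the default.
students = pd.read_csv("students-headed-bar.csv", sep="|")
print(students.head())

# JOIN enrolment records to school names via the shared 'ageid' key.
# Table names here are assumptions inferred from the .csv file names.
con = sqlite3.connect("school-data-2016-1.db")
rows = con.execute(
    """
    SELECT n.ageid, s.studentsyear
    FROM students AS s
    JOIN ageidtoname AS n ON n.ageid = s.ageid
    WHERE s.ageid = 40000
    ORDER BY s.studentsyear
    """
).fetchall()
con.close()
print(rows)
```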