CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository contains raw data files and the base codes to analyze them.

A. The 'powerx_y.xlsx' files are the data files with the one-dimensional trajectory of optically trapped probes modulated by an Ornstein-Uhlenbeck noise of given amplitude 'x'. For the corresponding diffusion amplitude A = 0.1 × (0.6 × 10⁻⁶)² m²/s, x is labelled as '1'.

B. The codes are of three types. The skewness codes are used to calculate the skewness of the trajectory. The error_in_fit codes are used to calculate deviations from arcsine behavior. The sigma_exp codes quantify the deviation of the mean from 0.5. All the codes are written three times, to look at T+, Tlast and Tmax.

C. More information can be found in the manuscript.
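The repository's own analysis codes are not reproduced here; as a rough orientation, a minimal Python sketch of the skewness computation could look as follows. The file name and single-column layout are hypothetical placeholders, not the repository's actual conventions.

```python
# Minimal sketch: sample skewness of a 1-D trajectory stored in an xlsx file.
# File name and column layout are hypothetical placeholders.
import pandas as pd
from scipy.stats import skew

def trajectory_skewness(path, column=0):
    """Return the sample skewness of the trajectory in the given column."""
    df = pd.read_excel(path, header=None)   # raw data file, no header assumed
    x = df.iloc[:, column].dropna().to_numpy()
    return skew(x)

# Example usage (hypothetical file following the 'powerx_y.xlsx' pattern):
# print(trajectory_skewness("power1_1.xlsx"))
```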
https://spdx.org/licenses/CC0-1.0.html
Observed phenotypic responses to selection in the wild often differ from predictions based on measurements of selection and genetic variance. An overlooked hypothesis to explain this paradox of stasis is that a skewed phenotypic distribution affects natural selection and evolution. We show through mathematical modelling that, when a trait selected for an optimum phenotype has a skewed distribution, directional selection is detected even at evolutionary equilibrium, where it causes no change in the mean phenotype. When environmental effects are skewed, Lande and Arnold’s (1983) directional gradient is in the direction opposite to the skew. In contrast, skewed breeding values can displace the mean phenotype from the optimum, causing directional selection in the direction of the skew. These effects can be partitioned out using alternative selection estimates based on average derivatives of individual relative fitness, or additive genetic covariances between relative fitness and trait (Robertson-Price identity). We assess the validity of these predictions using simulations of selection estimation under moderate sample sizes. Ecologically relevant traits may commonly have skewed distributions, as we here exemplify with avian laying date – repeatedly described as more evolutionarily stable than expected – so this skewness should be accounted for when investigating evolutionary dynamics in the wild.
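As a toy numeric illustration of the abstract's central claim (not the authors' model or code): with a right-skewed trait held at a Gaussian optimum, the regression-based directional gradient and the Robertson-Price covariance come out nonzero even though the mean phenotype sits exactly at the optimum. The gamma-distributed trait and the fitness width below are arbitrary assumptions.

```python
# Toy check: directional selection "detected" at the optimum for a skewed trait.
import numpy as np

rng = np.random.default_rng(1)
z = rng.gamma(shape=2.0, scale=1.0, size=100_000)   # right-skewed trait
z -= z.mean()                                       # mean phenotype at optimum 0

W = np.exp(-z**2 / (2 * 5.0))                       # Gaussian stabilizing fitness
w = W / W.mean()                                    # relative fitness

S = np.mean((w - w.mean()) * z)                     # Robertson-Price: cov(w, z)
beta = S / z.var()                                  # Lande-Arnold directional gradient

print(f"trait skew: {np.mean(z**3) / z.std()**3:.3f}")
print(f"directional gradient: {beta:.4f}  (nonzero, opposite in sign to the skew)")
```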
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This section presents a discussion of the research data. The data were received as secondary data; however, they were originally collected using time study techniques. Data validation is a crucial step in the data analysis process to ensure that the data are accurate, complete, and reliable. Descriptive statistics were used to validate the data: the mean, mode, standard deviation, variance, and range provide a summary of the data distribution and assist in identifying outliers or unusual patterns.

The dataset reports the measures of central tendency: the mean, the median, and the mode. The mean signifies the average value of each of the factors presented in the tables; it is the balance point of the dataset and its typical value. The median is the middle value of the dataset for each factor: half of the values lie below it and half above it, which makes it especially informative for skewed distributions. The mode shows the most common value in the dataset and was used to describe the most typical observation. Together, these values describe the central value around which the data are distributed. Because the mean, median, and mode are neither equal nor close to one another, they indicate a skewed distribution.

The dataset also presents the results and their discussion. This section focuses on the customisation of the DMAIC (Define, Measure, Analyse, Improve, Control) framework to address the specific concerns outlined in the problem statement. To gain a comprehensive understanding of the current process, value stream mapping was employed, enhanced by measuring the factors that contribute to inefficiencies. These factors were then analysed and ranked by impact using factor analysis. To mitigate the impact of the most influential factor on project inefficiencies, a solution is proposed using the EOQ (Economic Order Quantity) model (a sketch of the EOQ calculation follows). The implementation of the 'CiteOps' software facilitates improved scheduling, monitoring, and task delegation in the construction project through digitalisation, and project progress and efficiency are monitored remotely and in real time. In summary, the DMAIC framework was tailored to the requirements of the specific project, incorporating techniques from inventory management, project management, and statistics to effectively minimise inefficiencies within the construction project.
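The EOQ step lends itself to a one-line formula; below is a minimal sketch of the classic square-root EOQ model. The demand and cost figures are hypothetical placeholders, not values from the study.

```python
# Classic EOQ (Economic Order Quantity): EOQ = sqrt(2 * D * S / H),
# where D = annual demand, S = cost per order, H = holding cost per unit-year.
# The numbers below are hypothetical placeholders.
from math import sqrt

def eoq(annual_demand, order_cost, holding_cost_per_unit):
    return sqrt(2 * annual_demand * order_cost / holding_cost_per_unit)

print(eoq(annual_demand=12_000, order_cost=150.0, holding_cost_per_unit=2.5))  # 1200.0
```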
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
A common descriptive statistic in cluster analysis is the $R^2$ that measures the overall proportion of variance explained by the cluster means. This note highlights properties of the $R^2$ for clustering. In particular, we show that the $R^2$ can generally be artificially inflated by linearly transforming the data by ``stretching'' and by projecting. Also, the $R^2$ for clustering will often be a poor measure of clustering quality in high-dimensional settings. We also investigate the $R^2$ for clustering for misspecified models. Several simulation illustrations are provided highlighting weaknesses in the clustering $R^2$, especially in high-dimensional settings. A functional data example is given showing how the $R^2$ for clustering can vary dramatically depending on how the curves are estimated.
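A minimal sketch of the quantity under discussion and of the stretching effect, assuming the usual definition of the clustering $R^2$ as one minus the ratio of within-cluster to total sum of squares (k-means with k = 3 on structureless Gaussian data is an arbitrary choice for illustration):

```python
# Clustering R^2 = 1 - SS_within / SS_total, and how linearly "stretching"
# one coordinate can inflate it even without real cluster structure.
import numpy as np
from scipy.cluster.vq import kmeans2

def cluster_r2(X, labels):
    total = ((X - X.mean(axis=0)) ** 2).sum()
    within = sum(((X[labels == k] - X[labels == k].mean(axis=0)) ** 2).sum()
                 for k in np.unique(labels))
    return 1.0 - within / total

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))                    # no real cluster structure
_, lab = kmeans2(X, 3, minit="++", seed=0)
print("R^2, raw:      ", round(cluster_r2(X, lab), 3))

Xs = X * np.array([10.0, 1.0])                   # stretch the first coordinate
_, lab_s = kmeans2(Xs, 3, minit="++", seed=0)
print("R^2, stretched:", round(cluster_r2(Xs, lab_s), 3))
```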
U.S. Government Works: https://www.usa.gov/government-works
License information was derived automatically
This dataset contains site information, basin characteristics, results of flood-frequency analysis, and a generalized (regional) flood skew for 76 selected streamgages operated by the U.S. Geological Survey (USGS) in the upper White River basin (4-digit hydrologic unit 1101) in southern Missouri and northern Arkansas. The Little Rock District U.S. Army Corps of Engineers (USACE) needed updated estimates of streamflows corresponding to selected annual exceedance probabilities (AEPs) and a basin-specific regional flood skew. USGS selected 111 candidate streamgages in the study area that had 20 or more years of gaged annual peak-flow data available through the 2020 water year. After screening for regulation, urbanization, redundant/nested basins, drainage areas greater than 2,500 square miles, and streamgage basins located in the Mississippi Alluvial Plain (8-digit hydrologic unit 11010013), 77 candidate streamgages remained. After conducting the initial flood-frequency analysis ...
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Reproducibility package for the article:
Reaction times and other skewed distributions: problems with the mean and the median
Guillaume A. Rousselet & Rand R. Wilcox
Preprint: https://psyarxiv.com/3y54r
DOI: 10.31234/osf.io/3y54r
This package contains all the code and data to reproduce the figures and analyses in the article.
https://spdx.org/licenses/CC0-1.0.html
Intensive skewness of $\langle p_\mathrm{T}\rangle$ as a function of $\langle\mathrm{d}N_\mathrm{ch}/\mathrm{d}\eta\rangle^{1/3}_{|\eta|<0.5}$ in pp collisions at $\sqrt{s}$ = 5.02 TeV.
This dataset contains site information, basin characteristics, results of flood-frequency analysis, and results of Bayesian weighted least-squares/Bayesian generalized least-squares (B-WLS/B-GLS) analysis of regional skewness of the annual peak flows for 785 streamflow gaging stations (streamgages) operated by the U.S. Geological Survey (USGS) in the Tennessee and parts of the Ohio and Lower Mississippi River basins (hydrologic unit codes 06, 05, and 08, respectively) in Tennessee, Kentucky, western Virginia, western West Virginia, far western Maryland and parts of North Carolina, Georgia, Alabama, and Mississippi. Annual peak-flow data through the 2021 water year (a water year is defined as the period October 1-September 30 and named for the year in which it ends) were used in the study. For regional skew analysis, 283 of the 785 candidate streamgages were removed for pseudo record length (PRL; Veilleux and Wagner, 2021) less than 30 years, 108 were removed for redundancy, 4 were removed for regulation and 2 were removed for urbanization (see file "VAskew_Region2.csv" in this dataset). For the remaining 387 of 785 candidate streamgages, B-WLS/B-GLS regression (Veilleux and Wagner, 2021) was used to relate flood skew to a suite of 32 explanatory variables. None of the explanatory variables tested had sufficient predictive power in explaining the variability in skew in the region; thus, a constant model of regional skew, 0.048 (average variance of prediction 0.16, standard error 0.4) was selected for the study area (Messinger and others, 2025). For the 785 candidate streamgages, annual peak-flow data through the 2021 water year ("VAskew_region2.pkf") and specification ("VAskew_region2.psf"), output ("VASKEW_REGION2.PRT"), and export ("VASKEW_REGION2.EXP") files from flood-frequency analysis in version 7.4.1 of USGS PeakFQ software (hereafter referred to as "PeakFQ"; Veilleux and others, 2014; Flynn and others, 2006) are provided. Two .csv files are provided, one describing the basin characteristics tested ("BasinCharsTested.csv") and the other ("VAskew_Region2.csv") containing site information (U.S. Geological Survey, 2023), results of flood-frequency analysis in PeakFQ, and, for the 387 streamgages used in the B-WLS/B-GLS regression, PRL, unbiased at-site skew, unbiased mean squared error of the at-site skew, the B-WLS/B-GLS residual, and metrics of leverage and influence. A geographic information systems (GIS) shapefile ("VA_SkewRegion2.shp") containing a polygon representing the geographic extent of the skew region is also included.
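For orientation, "unbiased at-site skew" refers to a small-sample bias correction of the sample skewness of the log peak flows; a generic sketch is below (the study's exact unbiasing procedure, tied to pseudo record length, may differ).

```python
# Generic bias-corrected (unbiased) skew of log10 annual peak flows:
# G = n^2 / ((n-1)(n-2)) * m3 / s^3. Peak-flow values below are hypothetical.
import numpy as np

def station_skew(peaks):
    x = np.log10(np.asarray(peaks, dtype=float))
    n, s = x.size, x.std(ddof=1)
    m3 = ((x - x.mean()) ** 3).mean()
    return (n**2 / ((n - 1) * (n - 2))) * m3 / s**3

print(round(station_skew([1200, 950, 3400, 780, 2100, 1500, 640, 5200]), 3))
```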
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This database includes simulated data showing the accuracy of estimated probability distributions of project durations when limited data are available for the project activities. The base project networks are taken from PSPLIB. Then, various stochastic project networks are synthesized by changing the variability and skewness of project activity durations.
Number of variables: 20
Number of cases/rows: 114240
Variable List:
• Experiment ID: The ID of the experiment
• Experiment for network: The ID of the experiment for each of the synthesized networks
• Network ID: ID of the synthesized network
• #Activities: Number of activities in the network, including start and finish activities
• Variability: Variance of the activity durations in the network (high, low, medium, or rand, where rand denotes a random combination of low, medium, and high variance across the network activities)
• Skewness: Skewness of the activity durations in the network (right, left, none, or rand, where rand denotes a random combination of right-skewed, left-skewed, and non-skewed activity durations)
• Fitted distribution type: Distribution type fitted to the sampled data
• Sample size: Number of samples used in the experiment, representing the limited-data condition
• Benchmark 10th percentile: 10th percentile of project duration in the benchmark stochastic project network
• Benchmark 50th percentile: 50th percentile of project duration in the benchmark stochastic project network
• Benchmark 90th percentile: 90th percentile of project duration in the benchmark stochastic project network
• Benchmark mean: Mean project duration in the benchmark stochastic project network
• Benchmark variance: Variance of project duration in the benchmark stochastic project network
• Experiment 10th percentile: 10th percentile of project duration distribution for the experiment
• Experiment 50th percentile: 50th percentile of project duration distribution for the experiment
• Experiment 90th percentile: 90th percentile of project duration distribution for the experiment
• Experiment mean: Mean of project duration distribution for the experiment
• Experiment variance: Variance of project duration distribution for the experiment
• K-S: Kolmogorov–Smirnov test statistic comparing the benchmark distribution and the project duration distribution of the experiment (see the sketch below)
• P_value: the p-value based on the distance calculated in the K-S test
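A minimal sketch of how the last two columns could be produced with a two-sample Kolmogorov-Smirnov test (the samples below are synthetic placeholders, not the dataset's simulated durations):

```python
# Two-sample K-S test comparing benchmark and experiment duration samples.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
benchmark = rng.lognormal(mean=4.0, sigma=0.30, size=5000)   # synthetic
experiment = rng.lognormal(mean=4.05, sigma=0.35, size=200)  # synthetic

stat, p_value = ks_2samp(benchmark, experiment)
print(f"K-S distance: {stat:.4f}, P_value: {p_value:.4f}")
```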
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This paper evaluates the claim that Welch’s t-test (WT) should replace the independent-samples t-test (IT) as the default approach for comparing sample means. Simulations involving unequal and equal variances, skewed distributions, and different sample sizes were performed. For normal distributions, we confirm that the WT maintains the false positive rate close to the nominal level of 0.05 when sample sizes and standard deviations are unequal. However, the WT was found to yield inflated false positive rates under skewed distributions with unequal sample sizes. A complementary empirical study based on gender differences in two psychological scales corroborates these findings. Finally, we contend that the null hypothesis of unequal variances together with equal means lacks plausibility, and that empirically, a difference in means typically coincides with differences in variance and skewness. An additional analysis using the Kolmogorov-Smirnov and Anderson-Darling tests demonstrates that examining entire distributions, rather than just their means, can provide a more suitable alternative when facing unequal variances or skewed distributions. Given these results, researchers should remain cautious with software defaults, such as R favoring Welch’s test.
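A compact simulation in the spirit of the paper's design (the distribution, sample sizes, and replication count here are assumed, not the authors' exact settings): equal means, a shared skewed distribution, unequal sample sizes.

```python
# False positive rates of Student's t vs Welch's t under a skewed null:
# both groups share a (shifted) lognormal with equal means, n1 != n2.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
reps, n1, n2, alpha = 20_000, 10, 40, 0.05
fp_student = fp_welch = 0
for _ in range(reps):
    x = rng.lognormal(0, 1, n1) - np.exp(0.5)   # mean 0 under H0
    y = rng.lognormal(0, 1, n2) - np.exp(0.5)
    fp_student += ttest_ind(x, y, equal_var=True).pvalue < alpha
    fp_welch += ttest_ind(x, y, equal_var=False).pvalue < alpha

print(f"Student's t FPR: {fp_student / reps:.4f}")
print(f"Welch's t FPR:   {fp_welch / reps:.4f}")
```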
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Remarkable variation exists in the distribution of reproduction (skew) among members of cooperatively breeding groups, both within and between species. Reproductive skew theory has provided an important framework for understanding this variation. In the primitively eusocial Hymenoptera, two models have been routinely tested: concessions models, which assume complete control of reproduction by a dominant individual, and tug-of-war models, which assume on-going competition among group members over reproduction. Current data provide little support for either model, but uncertainty about the ability of individuals to detect genetic relatedness and difficulties in identifying traits conferring competitive ability mean that the relative importance of concessions versus tug-of-war remains unresolved. Here, we suggest that the use of social parasitism to generate meaningful variation in key social variables represents a valuable opportunity to explore the mechanisms underpinning reproductive skew within the social Hymenoptera. We present a direct test of concessions and tug-of-war models in the paper wasp Polistes dominulus by exploiting pronounced changes in relatedness and power structures that occur following replacement of the dominant by a congeneric social parasite. Comparisons of skew in parasitized and unparasitized colonies are consistent with a tug-of-war over reproduction within P. dominulus groups, but provide no evidence for reproductive concessions.
Flood-frequency analyses for 141 streamgages in Connecticut were updated using the U.S. Geological Survey program PeakFQ, version 7.2 (https://water.usgs.gov/software/PeakFQ/; Veilleux and others, 2014). The PeakFQ program follows Bulletin 17C national guidelines for flood-frequency analysis (https://doi.org/10.3133/tm4B5). The input and output files to PeakFQ that were used in the Connecticut flood-frequency update are presented. Individual file folders for the 141 streamgages, named by streamgage identification number, each contain three files: a ".TXT" file, used as input to PeakFQ, containing the annual peak flows for the streamgage in standard PeakFQ (WATSTORE) text format available from NWIS web at https://nwis.waterdata.usgs.gov/usa/nwis/peak; a ".PRT" text file providing estimates of flood magnitudes and their corresponding variance for a range of annual exceedance probabilities, together with estimates of the parameters of the log-Pearson Type III frequency distribution, including the logarithmic mean, standard deviation, skew, and mean square error of the skew; and a ".JPG" image file showing the fitted frequency curve, systematic peaks, confidence limits, and associated information on low outliers, censored peaks, interval peaks, historic peaks, and thresholds if applicable.
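For orientation only: at its core, the log-Pearson Type III fit uses the mean, standard deviation, and skew of the log10 peak flows. A bare method-of-moments sketch is below; PeakFQ under Bulletin 17C layers the Expected Moments Algorithm, Multiple Grubbs-Beck low-outlier screening, and skew weighting on top of this, and the peak-flow values here are hypothetical.

```python
# Bare method-of-moments log-Pearson Type III fit and a 1% AEP quantile.
import numpy as np
from scipy.stats import pearson3, skew

def lp3_quantile(peaks, aep=0.01):
    logq = np.log10(np.asarray(peaks, dtype=float))
    g = skew(logq, bias=False)       # bias-corrected station skew
    return 10 ** pearson3.ppf(1 - aep, g, loc=logq.mean(), scale=logq.std(ddof=1))

peaks = [3200, 1800, 5400, 2500, 980, 4100, 2900, 7600, 1500, 3600]  # hypothetical cfs
print(f"100-year flood estimate: {lp3_quantile(peaks):,.0f} cfs")
```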
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Measuring changes of the T cell receptor (TCR) repertoire is important to many fields of medicine. Flow cytometry is a popular technique to study the TCR repertoire, as it quickly provides insight into the TCR-Vβ usage among well-defined populations of T cells. However, the interpretation of the flow cytometric data remains difficult, and subtle TCR repertoire changes may go undetected. Here, we introduce a novel means for analyzing the flow cytometric data on TCR-Vβ usage. By applying economic statistics, we calculated the Gini-TCR skewing index from the flow cytometric TCR-Vβ analysis. The Gini-TCR skewing index, which is a direct measure of TCR-Vβ distribution among T cells, allowed us to track subtle changes of the TCR repertoire among distinct populations of T cells. Application of the Gini-TCR skewing index to the flow cytometric TCR-Vβ analysis will greatly help to gain better understanding of the TCR repertoire in health and disease.
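The Gini index itself is standard economic machinery; a sketch over a vector of TCR-Vβ family frequencies could look like this (the 24 families and example frequencies are placeholders, and the paper's exact Gini-TCR computation may differ in detail):

```python
# Gini coefficient over TCR-Vbeta family frequencies: 0 = perfectly even
# usage, values near 1 = usage dominated by a few families.
import numpy as np

def gini(frequencies):
    x = np.sort(np.asarray(frequencies, dtype=float))   # ascending
    n = x.size
    i = np.arange(1, n + 1)
    return ((2 * i - n - 1) * x).sum() / (n * x.sum())

even = [1 / 24] * 24                 # even Vbeta usage
skewed = [0.5] + [0.5 / 23] * 23     # one dominant family
print(round(gini(even), 3), round(gini(skewed), 3))   # 0.0 vs ~0.46
```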
Premise: Flowering phenology strongly influences reproductive success in plants. Days to first flower is easy to quantify and widely used to characterize phenology, but reproductive fitness depends on the full schedule of flower production over time. Methods: We examined floral display traits associated with rapid adaptive evolution and range expansion among thirteen populations of Lythrum salicaria, sampled along a 10-degree latitudinal gradient in eastern North America. We grew these collections in a common garden field experiment at a mid-latitude site and quantified variation in flowering schedule shape using Principal Coordinates Analysis (PCoA) and quantitative metrics analogous to central moments of probability distributions (i.e., mean, variance, skew, and kurtosis). Key Results: Consistent with earlier evidence for adaptation to shorter growing seasons, we found that populations from higher latitudes had earlier start and mean flowering day, on average, when compared to popul...

Data and analysis files from: Latitudinal clines in floral display associated with adaptive evolution during a biological invasion
https://doi.org/10.5061/dryad.jdfn2z3jz
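The moment-style schedule metrics the abstract mentions can be sketched by treating each plant's daily flower counts as weights over days of the season (the data and exact definitions below are placeholders, not the study's):

```python
# Flowering-schedule shape as weighted moments over days: mean (timing),
# variance (spread), skew (asymmetry), excess kurtosis (peakedness).
import numpy as np

def schedule_moments(days, flowers):
    d = np.asarray(days, dtype=float)
    w = np.asarray(flowers, dtype=float)
    w = w / w.sum()
    mean = (w * d).sum()
    var = (w * (d - mean) ** 2).sum()
    skew = (w * (d - mean) ** 3).sum() / var ** 1.5
    kurt = (w * (d - mean) ** 4).sum() / var ** 2 - 3.0
    return mean, var, skew, kurt

days = np.arange(1, 11)                        # day of flowering season
flowers = [0, 2, 8, 15, 20, 14, 9, 5, 2, 1]    # hypothetical daily counts
print(schedule_moments(days, flowers))
```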
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Introduction
There are challenges in statistically modelling immune responses to longitudinal HIV viral load exposure as a function of covariates. We define Bayesian Markov chain Monte Carlo mixed-effects models to incorporate priors and examine the effect of different distributional assumptions. We prospectively fit these models to as-yet-unpublished data from the Tshwane District Hospital HIV treatment clinic in South Africa, to determine whether cumulative log viral load, an indicator of long-term viral exposure, is a valid predictor of immune response.
Methods
Models are defined to express 'slope', i.e. mean annual increase in CD4 counts, and 'asymptote', i.e. the odds of having a CD4 count ≥500 cells/μL during antiretroviral treatment, as a function of covariates and random effects. We compare the effect of using informative versus non-informative prior distributions on model parameters. Models with cubic splines or skew-normal distributions are also compared using the conditional Deviance Information Criterion.
Results
The data of 750 patients are analyzed. Overall, models adjusting for cumulative log viral load provide a significantly better fit than those that do not. An increase in cumulative log viral load is associated with a decrease in CD4 count slope (19.6 cells/μL (95% credible interval: 28.26, 10.93)) and a reduction in the odds of achieving a CD4 count ≥500 cells/μL (0.42 (95% CI: 0.236, 0.730)) during 5 years of therapy. Using informative priors improves the cumulative log viral load estimate, and a skew-normal distribution for the random intercept and measurement error is a better fit compared to classical Gaussian distributions.
Discussion
We demonstrate in an unpublished South African cohort that cumulative log viral load is a strong and significant predictor of both CD4 count slope and asymptote. We argue that Bayesian methods should be used more frequently for such data, given their flexibility to incorporate prior information and non-Gaussian distributions.
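As a frequentist toy (the study fits Bayesian MCMC mixed-effects models, which this does not reproduce), the distributional choice at stake can be shown with scipy: a skew-normal clearly out-fits a Gaussian on skewed residual-like data. All values below are simulated assumptions, not the clinic's data.

```python
# Why a skew-normal error/intercept distribution can fit better than a
# Gaussian: compare maximized log-likelihoods on simulated skewed data.
import numpy as np
from scipy.stats import norm, skewnorm

rng = np.random.default_rng(7)
resid = skewnorm.rvs(a=5, loc=0, scale=50, size=750, random_state=rng)

ll_norm = norm.logpdf(resid, *norm.fit(resid)).sum()
ll_skew = skewnorm.logpdf(resid, *skewnorm.fit(resid)).sum()
print(f"log-likelihood: normal {ll_norm:.1f}, skew-normal {ll_skew:.1f}")
```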
Sediment particle size frequency distributions from the USNL (United States Naval Laboratory) box cores were determined optically using a Malvern Mastersizer 2000 He-Ne LASER diffraction sizer and were used to resolve mean particle size, sorting, skewness and kurtosis.
Samples were collected on cruises JR16006 and JR17007.
Funding was provided by ''The Changing Arctic Ocean Seafloor (ChAOS) - how changing sea ice conditions impact biological communities, biogeochemical processes and ecosystems'' project (NE/N015894/1 and NE/P006426/1, 2017-2021), part of the NERC funded Changing Arctic Ocean programme.
https://spdx.org/licenses/CC0-1.0.html
GC skew denotes the relative excess of G nucleotides over C nucleotides on the leading versus the lagging replication strand of eubacteria. While the effect is small, typically around 2.5%, it is robust and pervasive. GC skew and the analogous TA skew are a localized deviation from Chargaff’s second parity rule, which states that G and C, and T and A occur with (mostly) equal frequency even within a strand.
Most bacteria also show the analogous TA skew. Different phyla show different kinds of skew and differing relations between TA and GC skew. This article introduces an open access database (https://skewdb.org) of GC and 10 other skews for over 28,000 chromosomes and plasmids. Further details like codon bias, strand bias, strand lengths and taxonomic data are also included.
The SkewDB database can be used to generate or verify hypotheses. Since the origins of both the second parity rule, as well as GC skew itself, are not yet satisfactorily explained, such a database may enhance our understanding of microbial DNA.
Methods
The SkewDB analysis relies exclusively on the tens of thousands of FASTA and GFF3 files available through the NCBI download service, which covers both GenBank and RefSeq. The database includes bacteria, archaea and their plasmids. Furthermore, to ease analysis, the NCBI Taxonomy database is sourced and merged so output data can quickly be related to (super)phyla or specific species. No other data is used, which greatly simplifies processing. Data is read directly in the compressed format provided by NCBI.
All results are emitted as standard CSV files. In the first step of the analysis, for each organism the FASTA sequence and the GFF3 annotation file are parsed. Every chromosome in the FASTA file is traversed from beginning to end, while a running total is kept for cumulative GC and TA skew. In addition, within protein coding genes, such totals are also kept separately for these skews on the first, second and third codon position. Furthermore, separate totals are kept for regions which do not code for proteins. In addition, to enable strand bias measurements, a cumulative count is maintained of nucleotides that are part of a positive or negative sense gene. The counter is increased for positive sense nucleotides, decreased for negative sense nucleotides, and left alone for non-genic regions.
A separate counter is kept for non-genic nucleotides. Finally, G and C nucleotides are counted, regardless of whether they are part of a gene. These running totals are emitted at 4096-nucleotide intervals, a resolution suitable for determining skews and shifts. In addition, one-line summaries are stored for each chromosome. These lines include the RefSeq identifier of the chromosome, the full name mentioned in the FASTA file, plus counts of A, C, G and T nucleotides. Finally, five levels of taxonomic data are stored.
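A stripped-down sketch of that running total (GC only, ignoring the GFF3-based codon-position and strand splits):

```python
# Cumulative GC skew emitted every 4096 nucleotides: +1 per G, -1 per C.
def cumulative_gc_skew(sequence, interval=4096):
    total = 0
    for pos, base in enumerate(sequence.upper(), start=1):
        if base == "G":
            total += 1
        elif base == "C":
            total -= 1
        if pos % interval == 0:
            yield pos, total

# Toy chromosome: a G-rich half followed by a C-rich half.
toy = "GGCA" * 2048 + "CCTA" * 2048
for pos, running in cumulative_gc_skew(toy):
    print(pos, running)   # rises, then falls: 1024, 2048, 0, -2048
```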
Chromosomes and plasmids of fewer than 100 thousand nucleotides are ignored, as these are too noisy to model faithfully. Plasmids are clearly marked in the database, enabling researchers to focus on chromosomes if so desired.
Fitting
Once the genomes have been summarised at 4096-nucleotide resolution, the skews are fitted to a simple model based on four parameters. Alpha1 and alpha2 denote the relative excess of G over C on the leading and lagging strands, respectively. If alpha1 is 0.046, this means that for every 1000 nucleotides on the leading strand, the cumulative count of G excess increases by 46. The third parameter, div, describes how the chromosome is divided over leading and lagging strands. If this number is 0.557, the leading replication strand is modeled to make up 55.7% of the chromosome. The final parameter, shift, denotes the offset of the origin of replication relative to the start of the DNA FASTA file. This parameter has no biological meaning in itself and is an artifact of the DNA assembly process.
The goodness-of-fit number is the root mean squared error of the fit divided by the absolute mean skew; this correction avoids penalizing good fits for bacteria showing significant skew. GC skew tends to be defined very strongly, so it is used to pick the div and shift parameters of the DNA sequence, which are then kept as a fixed constraint for all the other skews, which may not be present as clearly. The fitting process itself is a downhill simplex optimization over the three dimensions, seeded with the average observed skew over the whole genome and with the assumptions that there is no shift and that the leading and lagging strands are evenly distributed. The simplex optimization is tuned to take sufficiently large steps that it can reach the optimum even if some initial assumptions are off. A simplified sketch of such a fit follows.
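Hedged sketch of the fit (a simplified re-parameterization: shift expressed as a fraction of the genome, and alpha2 tied to the observed end-to-end skew so the simplex runs over three dimensions; SkewDB's exact formulation may differ):

```python
# Nelder-Mead (downhill simplex) fit of a piecewise-linear cumulative-skew
# model with parameters (alpha1, div, shift_frac).
import numpy as np
from scipy.optimize import minimize

def model_curve(params, L, total):
    alpha1, div, shift_frac = params
    lead = int(div * L)                               # leading-strand length
    alpha2 = (total - alpha1 * lead) / (L - lead)     # curve must end at `total`
    slopes = np.full(L, alpha2)
    slopes[(int(shift_frac * L) + np.arange(lead)) % L] = alpha1
    return np.cumsum(slopes)

def fit_skew(cumskew):
    L, total = cumskew.size, cumskew[-1]
    def cost(p):
        if not (0.05 < p[1] < 0.95 and 0.0 <= p[2] < 1.0):
            return 1e18                               # crude bounds for the simplex
        return np.sqrt(np.mean((model_curve(p, L, total) - cumskew) ** 2))
    seed = [total / L, 0.5, 0.0]   # average skew, even strands, no shift
    simplex = np.array([seed, [seed[0] * 2 + 0.01, 0.5, 0.0],
                        [seed[0], 0.8, 0.0], [seed[0], 0.5, 0.4]])
    res = minimize(cost, seed, method="Nelder-Mead",
                   options={"initial_simplex": simplex})  # large steps, as described
    return res.x, res.fun

# Try to recover known parameters from a noiseless toy curve.
L, lead = 1000, 557
toy = model_curve((0.046, 0.557, 0.1), L, 0.046 * lead - 0.04 * (L - lead))
params, rmse = fit_skew(toy)
print(np.round(params, 3), round(rmse, 5))
```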
https://vocab.nerc.ac.uk/collection/L08/current/UN/
This database, and the accompanying website called ‘SurgeWatch’ (http://surgewatch.stg.rlp.io), provides a systematic UK-wide record of high sea level and coastal flood events over the last 100 years (1915-2014). Derived using records from the National Tide Gauge Network, a dataset of exceedance probabilities from the Environment Agency, and meteorological fields from the 20th Century Reanalysis, the database captures information on 96 storm events that generated the highest sea levels around the UK since 1915. For each event, the database contains information about: (1) the storm that generated the event; (2) the sea levels recorded around the UK during the event; and (3) the occurrence and severity of coastal flooding as a consequence of the event. The data are presented to be easily accessible and understandable to a wide range of interested parties.

The database contains 100 files: four CSV files and 96 PDF files. Two CSV files contain the meteorological and sea level data for each of the 96 events. A third file contains the list of the top 20 largest skew surges at each of the 40 study tide gauge sites. In the file containing the sea level and skew surge data, the tide gauge sites are numbered 1 to 40. A fourth accompanying CSV file lists, for reference, each site's name and location (longitude and latitude). A description of the parameters in each of the four CSV files is given in the table below. There are also 96 separate PDF files containing the event commentaries. For each event, these contain a concise narrative of the meteorological and sea level conditions experienced during the event, and a succinct description of the evidence available in support of coastal flooding, with a brief account of the recorded consequences for people and property. In addition, these contain a graphical representation of the storm track and mean sea level pressure and wind fields at the time of maximum high water, the return period and skew surge magnitudes at sites around the UK, and a table of the date and time, offset return period, water level, predicted tide and skew surge for each site where the 1 in 5 year threshold was reached or exceeded during the event. A detailed description of how the database was created is given in Haigh et al. (2015).

Coastal flooding caused by extreme sea levels can be devastating, with long-lasting and diverse consequences. The UK has a long history of severe coastal flooding, and the 2013-14 winter in particular produced a sequence of some of the worst coastal flooding the UK has experienced in the last 100 years. At present, 2.5 million properties and £150 billion of assets are potentially exposed to coastal flooding. Yet despite these concerns, there is no formal, national framework in the UK to record flood severity and consequences and thus benefit an understanding of coastal flooding mechanisms and consequences. Without a systematic record of flood events, assessment of coastal flooding around the UK coast is limited. The database was created at the School of Ocean and Earth Science, National Oceanography Centre, University of Southampton, with help from the Faculty of Engineering and the Environment, University of Southampton, the National Oceanography Centre and the British Oceanographic Data Centre. Collation of the database and the development of the website were funded through a Natural Environment Research Council (NERC) impact acceleration grant.
The database contributes to the objectives of UK Engineering and Physical Sciences Research Council (EPSRC) consortium project FLOOD Memory (EP/K013513/1).
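'Skew surge', the quantity tabulated throughout the database, is conventionally the difference between the maximum observed sea level and the maximum predicted tidal level within a tidal cycle, irrespective of their timing; a toy sketch with synthetic data:

```python
# Skew surge = max observed sea level minus max predicted tide in one
# tidal cycle, regardless of timing (synthetic series below).
import numpy as np

t = np.linspace(0.0, 12.42, 500)                       # hours, ~one M2 cycle
predicted = 2.0 * np.cos(2 * np.pi * t / 12.42)        # predicted tide (m)
observed = predicted + 0.4 + 0.1 * np.sin(2 * np.pi * t / 6.0)  # storm offset

print(f"skew surge: {observed.max() - predicted.max():.2f} m")
```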
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically