Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Genetic approaches for controlling disease vectors have aimed either to reduce wild-type populations or to replace wild-type populations with insects that cannot transmit pathogens. Here, we propose a Reduce and Replace (R&R) strategy in which released insects have both female-killing and anti-pathogen genes. We develop a mathematical model to numerically explore release strategies involving an R&R strain of the dengue vector Aedes aegypti. We show that repeated R&R releases may lead to a temporary decrease in mosquito population density and, in the absence of fitness costs associated with the anti-pathogen gene, a long-term decrease in competent vector population density. We find that R&R releases more rapidly reduce the transient and long-term competent vector densities than female-killing releases alone. We show that releases including R&R females lead to greater reduction in competent vector density than male-only releases. The magnitude of reduction in total and competent vectors depends upon the release ratio, release duration, and whether females are included in releases. Even when the anti-pathogen allele has a fitness cost, R&R releases lead to greater reduction in competent vectors than female-killing releases during the release period; however, continued releases are needed to maintain low density of competent vectors long-term. We discuss the results of the model as motivation for more detailed studies of R&R strategies.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
To enhance the mechanical properties of recycled concrete after exposure to high temperature and to address the problem of large-scale stockpiling of lithium slag (LS), this paper proposes lithium slag recycled aggregate concrete (LSRAC). In this research, LS replaced part of the cement (γL = 10%, 20%, and 30%), recycled coarse aggregate (RCA) completely replaced the natural aggregate (γR = 100%), and the heating temperatures were 200°C, 400°C, and 600°C. Heating tests and strength tests were carried out. The results indicated that, for the same heating temperature, the loss of strength of LSRAC was less than that of RAC, and the compressive strengths and splitting strength of LSRAC with a 20% lithium slag replacement rate were improved by 33.9%, 36.5% and 34.5%, respectively. The increase in flexural strength of LSRAC with 10% lithium slag dosage reached 24.1%. These results indicate that LSRAC can effectively improve the bearing capacity of structural concrete subjected to high temperature. Strength retention equations for LSRAC were established by comparison with the strengths at 20°C, and the results calculated from these equations matched the measured results well. This paper therefore provides a reliable experimental basis and theoretical guidance for on-site rescue, post-disaster assessment and reinforcement of RAC used in pavement bases and public facilities, as well as an eco-friendly route toward sustainable development.
Attribution 2.5 (CC BY 2.5): https://creativecommons.org/licenses/by/2.5/
License information was derived automatically
Search API for looking up addresses and roads within the catchment. The API can search for addresses, roads, or both. This dataset is updated weekly from Vicmap Roads and Addresses, sourced via www.data.vic.gov.au.

The Search API uses a data.gov.au datastore and allows a user to take full advantage of full-text search functionality.

An sql attribute is passed to the URL to define the query against the API. Please note that the attribute must be URL encoded. The SQL statement takes the form below:

SELECT distinct display, x, y
FROM "4bf30358-6dc6-412c-91ee-a6f15aaee62a"
WHERE _full_text @@ to_tsquery(replace('[term]', ' ', ' %26 '))
LIMIT 10

The above will select the top 10 results from the API matching the input 'term', and return the display name as well as an x and y coordinate.

The full URL for the above query would be:

https://data.gov.au/api/3/action/datastore_search_sql?sql=SELECT display, x, y FROM "4bf30358-6dc6-412c-91ee-a6f15aaee62a" WHERE _full_text @@ to_tsquery(replace('[term]', ' ', ' %26 ')) LIMIT 10

Any field in the source dataset can be returned via the API. Display, x and y are used in the example above, but any other field can be returned by altering the SELECT component of the SQL statement. See examples below.

Source datasets and LGA can also be used to filter results. When no filter is applied, the API defaults to searching all records. See examples below.

A filter can be applied to select a particular source dataset using the 'src' field. The currently available datasets are as follows:
- 1 for Roads
- 2 for Addresses
- 3 for Localities
- 4 for Parcels (CREF and SPI)
- 5 for Localities (Propnum)

Filters can be applied to select a specific local government area using the 'lga_code' field. LGA codes are derived from Vicmap LGA datasets. Wimmera LGAs include:
- 332 Horsham Rural City Council
- 330 Hindmarsh Shire Council
- 357 Northern Grampians Shire Council
- 371 West Wimmera Shire Council
- 378 Yarriambiack Shire Council
Search for the top 10 addresses and roads with the word 'darlot' in their names:

SELECT distinct display, x, y FROM "4bf30358-6dc6-412c-91ee-a6f15aaee62a" WHERE _full_text @@ to_tsquery(replace('darlot', ' ', ' %26 ')) LIMIT 10
\r
Search for all roads with the word 'perkins' in their names:

SELECT distinct display, x, y FROM "4bf30358-6dc6-412c-91ee-a6f15aaee62a" WHERE _full_text @@ to_tsquery(replace('perkins', ' ', ' %26 ')) AND src=1
\r
Search for all addresses with the word 'kalimna' in their names, within Horsham Rural City Council:

SELECT distinct display, x, y FROM "4bf30358-6dc6-412c-91ee-a6f15aaee62a" WHERE _full_text @@ to_tsquery(replace('kalimna', ' ', ' %26 ')) AND src=2 AND lga_code=332
\r
Search for the top 10 addresses and roads with the word 'green' in their names, returning just their display name, locality, x and y:

SELECT distinct display, locality, x, y FROM "4bf30358-6dc6-412c-91ee-a6f15aaee62a" WHERE _full_text @@ to_tsquery(replace('green', ' ', ' %26 ')) LIMIT 10
\r
Search all addresses in Hindmarsh Shire:

SELECT distinct display, locality, x, y FROM "4bf30358-6dc6-412c-91ee-a6f15aaee62a" WHERE lga_code=330
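As a minimal illustration of assembling these queries programmatically, the sketch below (Python; the helper name and its client-side handling of multi-word terms are illustrative assumptions, not part of the API documentation) builds a URL-encoded request for the datastore_search_sql endpoint. Note that the examples above perform the space-to-'&' substitution server-side via SQL replace(); this sketch does it client-side before encoding.

```python
# Sketch of building a URL-encoded query for the data.gov.au
# datastore_search_sql endpoint described above. The helper name and the
# client-side handling of multi-word terms are illustrative assumptions.
from typing import Optional
from urllib.parse import quote

ENDPOINT = "https://data.gov.au/api/3/action/datastore_search_sql"
RESOURCE = "4bf30358-6dc6-412c-91ee-a6f15aaee62a"

def build_search_url(term: str, src: Optional[int] = None,
                     lga_code: Optional[int] = None, limit: int = 10) -> str:
    # Join multi-word terms with ' & ' for to_tsquery (the docs do this
    # server-side with SQL replace()); quote() then percent-encodes the
    # whole statement, so '&' travels as %26, as the notes require.
    ts_term = term.replace(" ", " & ")
    sql = (f'SELECT distinct display, x, y FROM "{RESOURCE}" '
           f"WHERE _full_text @@ to_tsquery('{ts_term}')")
    if src is not None:
        sql += f" AND src={src}"
    if lga_code is not None:
        sql += f" AND lga_code={lga_code}"
    sql += f" LIMIT {limit}"
    return ENDPOINT + "?sql=" + quote(sql)

url = build_search_url("green valley", src=2, lga_code=332)
```

Fetching the resulting URL with any HTTP client returns the matching records as JSON.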
The linked bookdown contains the notes and most exercises for a course on data analysis techniques in hydrology using the programming language R. The material will be updated each time the course is taught. If new topics are added, the topics they replace will remain, in case they are useful to others.
I hope these materials can be a resource to those teaching themselves R for hydrologic analysis and/or for instructors who may want to use a lesson or two, or the entire course. At the top of each chapter there is a link to a GitHub repository. Each repository contains the code that produces the chapter, plus a version in which the code chunks are blank. These repositories are all template repositories, so you can easily copy them to your own GitHub space by clicking Use This Template on the repo page.
In my class, I work through each document, live coding with students following along. Typically I ask students to watch as I code and explain the chunk, and then replicate it on their computers. Depending on the lesson, I will ask students to try some of the chunks before I show them the code, as an in-class activity. Some chunks are explicitly designed for this purpose and are typically labeled a “challenge.”
Chapters called ACTIVITY are either homework or class-period-long in-class activities. The code chunks in these are therefore blank. If you would like a key for any of these, please just send me an email.
If you have questions, suggestions, or would like activity answer keys, etc. please email me at jpgannon at vt.edu
Finally, if you use this resource, please fill out the survey on the first page of the bookdown (https://forms.gle/6Zcntzvr1wZZUh6S7). This will help me get an idea of how people are using this resource, how I might improve it, and whether or not I should continue to update it.
Attribution 2.5 (CC BY 2.5): https://creativecommons.org/licenses/by/2.5/
License information was derived automatically
The "Expanded JobSeeker Payment and Youth Allowance - monthly profile" publication has introduced expanded reporting populations for income support recipients. As a result, the reporting population for JobSeeker Payment has changed to include recipients who are current but on a zero rate of payment and those who are suspended from payment. The reporting population for Youth Allowance has changed to include those who are suspended from payment. The expanded report will replace the standard report after June 2023.

Additional data for JobSeeker Payment and Youth Allowance (other) recipients in the monthly profile includes:

- A monthly time series by rate of payment, providing details of recipients who are current on payment and in receipt of a full, part or zero rate of payment, and those who are suspended from payment (table 2)
- By work capacity status, showing those who have a partial capacity to work and those who have full capacity (table 7)
- By payment duration (table 8)

The "JobSeeker Payment and Youth Allowance recipients – monthly profile" is a monthly report covering the income support payments of JobSeeker Payment and Youth Allowance (other). It also includes data on Youth Allowance (student and apprentice), Sickness Allowance and Bereavement Allowance. The report includes payment recipient numbers by demographics such as age, gender, state, earnings and Statistical Area Level 2.
Comparison of changes in R-square among different models and relative risks (95% confidence intervals) of the variables when vapour pressure was used to replace temperature and relative humidity.
Database Contents License (DbCL) v1.0: http://opendatacommons.org/licenses/dbcl/1.0/
# https://www.kaggle.com/c/facial-keypoints-detection/details/getting-started-with-r
#################################
###Variables for downloaded files
data.dir <- ' '
train.file <- paste0(data.dir, 'training.csv')
test.file <- paste0(data.dir, 'test.csv')
#################################
###Load csv -- creates a data.frame matrix where each column can have a different type.
d.train <- read.csv(train.file, stringsAsFactors = F)
d.test <- read.csv(test.file, stringsAsFactors = F)
###In training.csv, we have 7049 rows, each one with 31 columns.
###The first 30 columns are keypoint locations, which R correctly identified as numbers.
###The last one is a string representation of the image, identified as a string.
###To look at samples of the data, uncomment this line:
###Let's save the Image column as another variable, and remove it from d.train:
###d.train is our dataframe, and we want the column called Image.
###Assigning NULL to a column removes it from the dataframe
im.train <- d.train$Image
d.train$Image <- NULL #removes 'Image' from the dataframe
im.test <- d.test$Image
d.test$Image <- NULL #removes 'Image' from the dataframe
#################################
#The image is represented as a series of numbers, stored as a string
#Convert these strings to integers by splitting them and converting the result to integer
#strsplit splits the string
#unlist simplifies its output to a vector of strings
#as.integer converts it to a vector of integers
as.integer(unlist(strsplit(im.train[1], " ")))
as.integer(unlist(strsplit(im.test[1], " ")))
###Install and activate appropriate libraries
###The tutorial is meant for Linux and OSX, where a different library is used, so:
###Replace all instances of %dopar% with %do%.
library("foreach", lib.loc="~/R/win-library/3.3")
###convert every image with a foreach loop
im.train <- foreach(im = im.train, .combine=rbind) %do% { as.integer(unlist(strsplit(im, " "))) }
im.test <- foreach(im = im.test, .combine=rbind) %do% { as.integer(unlist(strsplit(im, " "))) }
#The foreach loop will evaluate the inner command for each row in im.train, and combine the results with rbind (combine by rows).
#%do% evaluates sequentially; %dopar% (used in the original tutorial) would run the evaluations in parallel.
#im.train is now a matrix with 7049 rows (one for each image) and 9216 columns (one for each pixel):
###Save all four variables in a data.Rd file
save(d.train, d.test, im.train, im.test, file='data.Rd')
###Can reload them at any time with load('data.Rd')
#each image is a vector of 96*96 pixels (96*96 = 9216).
#convert these 9216 integers into a 96x96 matrix:
im <- matrix(data=rev(im.train[1,]), nrow=96, ncol=96)
#im.train[1,] returns the first row of im.train, which corresponds to the first training image.
#rev reverses the resulting vector to match the interpretation of R's image function
#(which expects the origin to be in the lower left corner).
#To visualize the image we use R's image function:
image(1:96, 1:96, im, col=gray((0:255)/255))
#Let's color the coordinates for the eyes and nose
points(96-d.train$nose_tip_x[1], 96-d.train$nose_tip_y[1], col="red")
points(96-d.train$left_eye_center_x[1], 96-d.train$left_eye_center_y[1], col="blue")
points(96-d.train$right_eye_center_x[1], 96-d.train$right_eye_center_y[1], col="green")
#Another good check is to see how variable our data is.
#For example, where are the centers of each nose in the 7049 images? (this takes a while to run):
for(i in 1:nrow(d.train)) { points(96-d.train$nose_tip_x[i], 96-d.train$nose_tip_y[i], col="red") }
#There are quite a few outliers -- they could be labeling errors. Looking at one extreme example:
idx <- which.max(d.train$nose_tip_x)
im <- matrix(data=rev(im.train[idx,]), nrow=96, ncol=96)
image(1:96, 1:96, im, col=gray((0:255)/255))
points(96-d.train$nose_tip_x[idx], 96-d.train$nose_tip_y[idx], col="red")
#In this case there's no labeling error, but this shows that not all faces are centered
#One of the simplest things to try is to compute the mean of the coordinates of each keypoint
#in the training set and use that as a prediction for all images
colMeans(d.train, na.rm=T)
#To build a submission file we need to apply these computed coordinates to the test instances:
p <- matrix(data=colMeans(d.train, na.rm=T), nrow=nrow(d.test), ncol=ncol(d.train), byrow=T)
colnames(p) <- names(d.train)
predictions <- data.frame(ImageId = 1:nrow(d.test), p)
head(predictions)
#The expected submission format has one keypoint per row, but we can easily get that with the help of the reshape2 library:
library(reshape2)
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The Bland and Altman plot method is a widely cited and applied graphical approach to assess the equivalence of quantitative measurement techniques, usually aiming to replace a traditional technique with a new, less invasive or less expensive one. However, the Bland and Altman plot is often misinterpreted due to a lack of suitable inferential statistical support. Usual alternatives, such as Pearson's correlation or ordinary least-squares linear regression, also fail to identify the weaknesses of each measurement technique. This is a package designed for the analysis of equivalence between measurement techniques. It should be noted that this package does not introduce another iteration of the Bland-Altman plot method. The package's name and our intention were simply inspired by the shared objective of establishing equivalence. This objective revolves around comparing single or repeated interval-scaled measures from two measurement techniques applied to the same subjects. We have developed a completely different inferential test in contrast to the original Bland-Altman proposal. We have highlighted certain criticisms of the original Bland-Altman plot method, which heavily relies on visual inspection and subjectivity for determining equivalence. Our goal is to empower the reader to make an informed decision regarding the validity of this new measurement technique. Here, inferential statistical support for equivalence between measurement techniques is proposed in three nested tests based on structural regressions, assessing the equivalence of structural means (accuracy), the equivalence of structural variances (precision), and concordance with the structural bisector line (agreement in measurements obtained from the same subject), by analytical methods and by a robust bootstrapping approach. Graphical outputs are also implemented, following Bland and Altman's principles for easy communication.
The related publication shows that this approach was tested on five datasets from articles that used Bland and Altman's method. In one case, where authors concluded disagreement, the approach identified equivalence by addressing bias correction. In another case, it aligned with the original assessment but refined the original authors’ results. In a specific case, unnecessary numerical transformations led to a conclusion of equivalence, but this approach, which naturally generates slanted bands, found non-equivalence in precision and agreement. In one case where authors claimed disagreement, the approach revealed precision issues, rendering the comparison invalid.
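For readers unfamiliar with the original method being critiqued, the classic Bland-Altman quantities are the mean of the paired differences (the bias) and the 95% limits of agreement at bias ± 1.96 SD. A minimal sketch (Python; the function is illustrative only and is not part of the package described here):

```python
# Minimal sketch of the classic Bland-Altman quantities referenced above:
# bias (mean paired difference) and 95% limits of agreement. Illustrative
# only; not the inferential structural-regression tests the package implements.
import statistics

def bland_altman_limits(a, b):
    # Paired differences between the two measurement techniques
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.fmean(diffs)   # systematic difference between techniques
    sd = statistics.stdev(diffs)     # spread of the disagreement
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

bias, lo, hi = bland_altman_limits([102, 98, 110, 95], [100, 97, 108, 96])
```

Points falling outside [lo, hi] on the difference-versus-mean plot are what the original method flags by visual inspection; the package described here replaces that visual judgement with nested hypothesis tests.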
Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
Selected confidentialised extracts of the number of vehicles registered for road use in Australia on 31 January 2025. The statistics have been derived from state and territory motor vehicle registry data. These statistics replace the ABS Motor Vehicle Census, discontinued in 2021.

Disclaimers: https://www.infrastructure.gov.au/disclaimers.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
File List
R script: Shipley.script (md5: 3f49ef8fa2c8169aca9d9bc07a7c7bf9)
R function: maxent2 (md5: f4502d0134c1f7c0c59ef6b85e6235c9)
R function: maxenttest2 (md5: 33323061f78743cbb5fff8e254fc9e97)

Description
Shipley.script is a small R script that generates a small simulated data set consisting of six "species", one "trait" and a relative abundance distribution in the meta-community. This script then calls maxent2 and maxenttest2 with different arguments and performs the decomposition described in the main paper. maxent2 is a modified version of the maxent function of the FD library in R, as cited in the main paper; the FD library will eventually be updated, and maxent2 will replace the original maxent function. maxenttest2 is likewise a modified version of the maxent.test function of the FD library, and will replace the original maxent.test function when the library is updated.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
R script used to generate Tables 1 and 2 and Figures 3 and 4 in the "Semantic and Cognitive Tools to Aid Statistical Inference: Replace Confidence and Significance by Compatibility and Surprise" manuscript: https://arxiv.org/abs/1909.08579
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
R files containing BART models for several outcome prediction models described in the manuscript.