Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Regression ranks among the most popular statistical analysis methods across many research areas, including psychology. Typically, regression coefficients are displayed in tables. While this mode of presentation is information-dense, extensive tables can be cumbersome to read and difficult to interpret. Here, we introduce three novel visualizations for reporting regression results. Our methods allow researchers to arrange large numbers of regression models in a single plot. Using regression results from real-world as well as simulated data, we demonstrate the transformations which are necessary to produce the required data structure and how to subsequently plot the results. The proposed methods provide visually appealing ways to report regression results efficiently and intuitively. Potential applications range from visual screening in the model selection stage to formal reporting in research papers. The procedure is fully reproducible using the provided code and can be executed via free-of-charge, open-source software routines in R.
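The kind of multi-model coefficient display described above can be sketched in base R. The models, dataset, and layout below are illustrative assumptions, not the authors' actual method:

```r
# Fit several nested regression models on a built-in dataset (mtcars).
models <- list(
  m1 = lm(mpg ~ wt, data = mtcars),
  m2 = lm(mpg ~ wt + hp, data = mtcars),
  m3 = lm(mpg ~ wt + hp + qsec, data = mtcars)
)

# Collect coefficients and 95% confidence intervals into one long data
# frame, the structure typically needed to arrange many models in a plot.
coef_table <- do.call(rbind, lapply(names(models), function(nm) {
  fit <- models[[nm]]
  ci  <- confint(fit)
  data.frame(
    model     = nm,
    term      = names(coef(fit)),
    estimate  = unname(coef(fit)),
    lower     = ci[, 1],
    upper     = ci[, 2],
    row.names = NULL
  )
}))

# One dot-and-whisker panel: estimates with confidence intervals,
# models offset vertically within each term.
terms <- unique(coef_table$term)
y <- as.numeric(factor(coef_table$term, levels = terms)) +
     (as.numeric(factor(coef_table$model)) - 2) * 0.15
plot(coef_table$estimate, y, yaxt = "n", xlab = "Estimate", ylab = "",
     pch = 19, xlim = range(coef_table$lower, coef_table$upper))
segments(coef_table$lower, y, coef_table$upper, y)
axis(2, at = seq_along(terms), labels = terms, las = 1)
abline(v = 0, lty = 2)
```

The same long-format table scales to large numbers of models, which is the data structure the abstract's transformations produce.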
This child page contains a zipped folder with all items necessary to run trend models and produce the results published in U.S. Geological Survey Scientific Investigations Report 2022–XXXX [Nustad, R.A., and Tatge, W.S., 2023, Comprehensive Water-Quality Trend Analysis for Selected Sites and Constituents in the International Souris River Basin, Saskatchewan and Manitoba, Canada and North Dakota, United States, 1970-2020: U.S. Geological Survey Scientific Investigations Report 2023-XXXX, XX p.]. To run the R–QWTREND program in R, 6 files are required and each is included in this child page: prepQWdataV4.txt, runQWmodelV4.txt, plotQWtrendV4.txt, qwtrend2018v4.exe, salflibc.dll, and StartQWTrendV4.R (Vecchia and Nustad, 2020). The folder contains: three items required to run the R–QWTREND trend analysis tool; a README.txt file; a folder called "dataout"; and a folder called "scripts". The "scripts" folder contains the scripts that can be used to reproduce the results found in the USGS Scientific Investigations Report referenced above. The "dataout" folder contains a folder for each site holding .RData files named site_flow for streamflow data and site_qw_XXX according to the constituent group (MI, NUT, or TM). R–QWTREND is a software package for analyzing trends in stream-water quality. The package is a collection of functions written in R (R Development Core Team, 2019), an open-source language and general environment for statistical computing and graphics. The following system requirements are necessary for using R–QWTREND: • Windows 10 operating system • R (version 3.4 or later; 64-bit recommended) • RStudio (version 1.1.456 or later). An accompanying report (Vecchia and Nustad, 2020) serves as the formal documentation for R–QWTREND.
Vecchia, A.V., and Nustad, R.A., 2020, Time-series model, statistical methods, and software documentation for R–QWTREND—An R package for analyzing trends in stream-water quality: U.S. Geological Survey Open-File Report 2020–1014, 51 p., https://doi.org/10.3133/ofr20201014
R Development Core Team, 2019, R—A language and environment for statistical computing: Vienna, Austria, R Foundation for Statistical Computing, accessed December 7, 2020, at https://www.r-project.org.
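As a minimal illustration of the site_flow / site_qw_XXX naming convention used in the "dataout" folder, the sketch below saves and reloads a toy object (the folder path, site number, and data values are hypothetical; R–QWTREND itself requires the files listed above):

```r
# Illustrate the .RData naming convention with a toy streamflow object.
dataout <- tempfile("dataout")
dir.create(dataout)
site <- "05114000"  # hypothetical site number

# Save a toy streamflow object under the site_flow convention.
flow <- data.frame(date = as.Date("2020-01-01") + 0:2, q = c(10, 12, 9))
save(flow, file = file.path(dataout, paste0(site, "_flow.RData")))

# Later, locate and load all streamflow files for analysis.
flow_files <- list.files(dataout, pattern = "_flow\\.RData$", full.names = TRUE)
for (f in flow_files) load(f)  # restores the saved object(s) into the workspace
```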
analyze the current population survey (cps) annual social and economic supplement (asec) with r. the annual march cps-asec has been supplying the statistics for the census bureau's report on income, poverty, and health insurance coverage since 1948. wow. the us census bureau and the bureau of labor statistics (bls) tag-team on this one. until the american community survey (acs) hit the scene in the early aughts (2000s), the current population survey had the largest sample size of all the annual general demographic data sets outside of the decennial census - about two hundred thousand respondents. this provides enough sample to conduct state- and a few large metro area-level analyses. your sample size will vanish if you start investigating subgroups by state - consider pooling multiple years. county-level is a no-no. despite the american community survey's larger size, the cps-asec contains many more variables related to employment, sources of income, and insurance - and can be trended back to harry truman's presidency. aside from questions specifically asked about an annual experience (like income), many of the questions in this march data set should be treated as point-in-time statistics. cps-asec generalizes to the united states non-institutional, non-active duty military population. the national bureau of economic research (nber) provides sas, spss, and stata importation scripts to create a rectangular file (rectangular data means only person-level records; household- and family-level information gets attached to each person). to import these files into r, the parse.SAScii function uses nber's sas code to determine how to import the fixed-width file, then RSQLite to put everything into a schnazzy database. you can try reading through the nber march 2012 sas importation code yourself, but it's a bit of a proc freak show.
this new github repository contains three scripts:

2005-2012 asec - download all microdata.R
- download the fixed-width file containing household, family, and person records
- import by separating this file into three tables, then merge 'em together at the person-level
- download the fixed-width file containing the person-level replicate weights
- merge the rectangular person-level file with the replicate weights, then store it in a sql database
- create a new variable - one - in the data table

2012 asec - analysis examples.R
- connect to the sql database created by the 'download all microdata' program
- create the complex sample survey object, using the replicate weights
- perform a boatload of analysis examples

replicate census estimates - 2011.R
- connect to the sql database created by the 'download all microdata' program
- create the complex sample survey object, using the replicate weights
- match the sas output shown in the png file below

2011 asec replicate weight sas output.png - statistic and standard error generated from the replicate-weighted example sas script contained in this census-provided person replicate weights usage instructions document.

click here to view these three scripts

for more detail about the current population survey - annual social and economic supplement (cps-asec), visit:
- the census bureau's current population survey page
- the bureau of labor statistics' current population survey page
- the current population survey's wikipedia article

notes: interviews are conducted in march about experiences during the previous year. the file labeled 2012 includes information (income, work experience, health insurance) pertaining to 2011. when you use the current population survey to talk about america, subtract a year from the data file name. as of the 2010 file (the interview focusing on america during 2009), the cps-asec contains exciting new medical out-of-pocket spending variables most useful for supplemental (medical spending-adjusted) poverty research.
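a minimal sketch of the fixed-width import step, using base r's read.fwf on toy data (the real scripts derive the column widths from nber's sas code via parse.SAScii and store the result with RSQLite; the widths, column names, and values below are made up for illustration):

```r
# Toy fixed-width file: person id (3 chars), age (2 chars), income (6 chars).
tf <- tempfile(fileext = ".dat")
writeLines(c("00135042500", "00228061000"), tf)

# In the real scripts the widths come from parsing NBER's SAS input
# statements; here they are hard-coded for illustration.
persons <- read.fwf(tf, widths = c(3, 2, 6),
                    col.names = c("person_id", "age", "income"))
```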
confidential to sas, spss, stata, sudaan users: why are you still rubbing two sticks together after we've invented the butane lighter? time to transition to r. :D
This child page contains a zipped folder with all items necessary to run trend models and produce the results published in U.S. Geological Survey Scientific Investigations Report 2021–XXXX [Tatge, W.S., Nustad, R.A., and Galloway, J.M., 2021, Evaluation of Salinity and Nutrient Conditions in the Heart River Basin, North Dakota, 1970-2020: U.S. Geological Survey Scientific Investigations Report 2021-XXXX, XX p.]. To run the R–QWTREND program in R, 6 files are required and each is included in this child page: prepQWdataV4.txt, runQWmodelV4XXUEP.txt, plotQWtrendV4XXUEP.txt, qwtrend2018v4.exe, salflibc.dll, and StartQWTrendV4.R (Vecchia and Nustad, 2020). The folder contains: six items required to run the R–QWTREND trend analysis tool; a readme.txt file; a flowtrendData.RData file; an allsiteinfo.table.csv file; a folder called "scripts"; and a folder called "waterqualitydata". The "scripts" folder contains the scripts that can be used to reproduce the results found in the USGS Scientific Investigations Report referenced above. The "waterqualitydata" folder contains machine-readable .csv files with the water-quality data used for the trend analysis at each site, named site_ions for major-ion constituents and site_nuts for nutrient constituents. R–QWTREND is a software package for analyzing trends in stream-water quality. The package is a collection of functions written in R (R Development Core Team, 2019), an open-source language and general environment for statistical computing and graphics. The following system requirements are necessary for using R–QWTREND: • Windows 10 operating system • R (version 3.4 or later; 64-bit recommended) • RStudio (version 1.1.456 or later). An accompanying report (Vecchia and Nustad, 2020) serves as the formal documentation for R–QWTREND.
Vecchia, A.V., and Nustad, R.A., 2020, Time-series model, statistical methods, and software documentation for R–QWTREND—An R package for analyzing trends in stream-water quality: U.S. Geological Survey Open-File Report 2020–1014, 51 p., https://doi.org/10.3133/ofr20201014
R Development Core Team, 2019, R—A language and environment for statistical computing: Vienna, Austria, R Foundation for Statistical Computing, accessed December 7, 2020, at https://www.r-project.org.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This report describes support for a new type of variable-width line in the 'vwline' package for R that is based on Bezier curves. There is also a new function for specifying the width of a variable-width line based on Bezier curves and there is a new linejoin and lineend style, called "extend", that is available when both the line and the width of the line are based on Bezier curves. This report also introduces a small 'gridBezier' package for drawing Bezier curves in R.
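Base R's grid package includes a Bezier approximation that illustrates the idea; note that grid.bezier/bezierGrob use an X-spline approximation, whereas the report's 'gridBezier' package draws true Bezier curves:

```r
library(grid)

# Four control points of a cubic Bezier curve (in normalized units).
x <- c(0.1, 0.3, 0.7, 0.9)
y <- c(0.1, 0.9, 0.9, 0.1)

# bezierGrob builds the curve as a graphical object; grid.draw renders it.
g <- bezierGrob(x, y, gp = gpar(lwd = 2))
grid.newpage()
grid.draw(g)
```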
This child page contains a zipped folder with all files necessary to run trend models and produce the results published in U.S. Geological Survey Scientific Investigations Report 2020–5079 [Nustad, R.A., and Vecchia, A.V., 2020, Water-quality trends for selected sites and constituents in the international Red River of the North Basin, Minnesota and North Dakota, United States, and Manitoba, Canada, 1970–2017: U.S. Geological Survey Scientific Investigations Report 2020–5079, 75 p., https://doi.org/10.3133/sir20205079]. The folder contains: six files required to run the R–QWTREND trend analysis tool; a readme.txt file; an alldata.RData file; a siteinfo_appendix.txt file; and a folder called "scripts". R–QWTREND is a software package for analyzing trends in stream-water quality. The package is a collection of functions written in R (R Development Core Team, 2019), an open-source language and general environment for statistical computing and graphics. The following system requirements are necessary for using R–QWTREND: • Windows 10 operating system • R (version 3.4 or later; 64-bit recommended) • RStudio (version 1.1.456 or later). An accompanying report (Vecchia and Nustad, 2020) serves as the formal documentation for R–QWTREND. Vecchia, A.V., and Nustad, R.A., 2020, Time-series model, statistical methods, and software documentation for R–QWTREND—An R package for analyzing trends in stream-water quality: U.S. Geological Survey Open-File Report 2020–1014, 51 p., https://doi.org/10.3133/ofr20201014 R Development Core Team, 2019, R—A language and environment for statistical computing: Vienna, Austria, R Foundation for Statistical Computing, accessed June 12, 2019, at https://www.r-project.org.
clinicaltrials.gov_search: This is the complete original dataset.
identify completed trials: This is the R script which, when run on "clinicaltrials.gov_search.txt", will produce a .csv file listing all the completed trials.
FDA_table_with_sens: This is the final dataset after cross-referencing the trials. An explanation of the variables is included in the supplementary file "2011-10-31 Prayle Hurley Smyth Supplementary file 3 variables in the dataset".
analysis_after_FDA_categorization_and_sens: This R script reproduces the analysis from the paper, including the tables and statistical tests. The comments should make it self-explanatory.
2011-11-02 prayle hurley smyth supplementary file 1 STROBE checklist: This is a STROBE checklist for the study.
2011-10-31 Prayle Hurley Smyth Supplementary file 2 examples of categorization: This is a supplementary file which illustrates some of the decisions which had to be made when categorizing trials.
2011-10-31 Prayle Hurley Smyth Supplementary file 3 variables in th...
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This data contains general government sector operating expenses, sourced from Australian Bureau of Statistics historical data and the Department of Treasury and Finance, categorised by ‘government purpose classification’ (GPC) and ‘classification of the functions of government’ (COFOG).

The Australian system of Government Finance Statistics (GFS) was revised by the Australian Bureau of Statistics with the release of the Australian System of Government Finance Statistics: Concepts, Sources and Methods 2015, Cat. No. 5514.0.

Implementation of the updated GFS manual has resulted in the COFOG framework replacing the former GPC framework, with effect from the 2018-19 financial year for financial reporting under AASB 1049.

The underlying data from 1961-62 to 1997-98 represent a conversion from the original cash series to an accruals basis, with depreciation and superannuation expenses estimated by statistical modelling.

Although the conversion provides a basis for comparison with total expenses in the current series of accrual GFS information from 1998 (in the attached table), the estimated accrued expense items have not been apportioned to individual purpose classifications.

The absence of these splits between functional classifications in the attached table therefore represents a break in the series, and it is not possible to compare individual purpose categories with those in other tables.

Similarly, the transition from GPC to COFOG represents an additional break in the series, and comparability between the two frameworks will not be possible.

The key reporting changes from GPC to COFOG are as follows:

- the number of categories has reduced from 12 under GPC to 10 under COFOG;
- the fuel and energy and the agriculture, forestry, fishing and hunting categories have been abolished and are now part of the new economic affairs category. The majority of the outputs in other economic affairs are also included in this new category;
- public debt transactions (i.e. primarily interest expense on borrowings) have moved from the other purposes category to the general public services category;
- a new environmental protection category was created to include functions such as waste management, waste water management, pollution abatement and protection of biodiversity and landscape, which were previously classified under the housing and community amenities category, as well as national and state parks functions from the recreation and culture category; and
- housing functions such as housing assistance and housing concessions are now part of the social protection category.
This data set includes water quality data and microbial community abundance tables for periphyton samples from this project. The data set also includes extensive R markdown code used to process the data and generate the results included in the report. This dataset is associated with the following publication: Hagy, J., R. Devereux, K. Houghton, D. Beddick, T. Pierce, and S. Friedman. Developing Microbial Community Indicators of Nutrient Exposure in Southeast Coastal Plain Streams using a Molecular Approach. US EPA Office of Research and Development, Washington, DC, USA, 2018.
Attribution 3.0 (CC BY 3.0)https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
The Department of Human Services Annual Report (Annual Report) sets out the department’s activities for the financial year with a focus on the department’s performance and financial details, amongst other information.

The Annual Report is prepared in accordance with the Requirements for Annual Reports, issued by the Department of the Prime Minister and Cabinet. Under the Public Service Act 1999 (Cth.) the Secretary of the Department of Human Services is required, after the end of each financial year, to provide a report on the department’s activities to the Minister, for presentation to the Parliament. Annual Reports are usually tabled in parliament in late October and are available to the public online shortly after tabling.

Reports prior to 2012-2013 can be found on the Department of Human Services website.

If you require statistics at a more detailed level, please contact statistics@humanservices.gov.au. The department charges on a cost recovery basis for providing more detailed statistics and their provision is subject to privacy considerations.

Disclaimer:
This data is provided by the Department of Human Services (Human Services) for general information purposes only. While Human Services has taken care to ensure the information is as correct and accurate as possible, we do not guarantee, or accept legal liability whatsoever arising from or connected to, its use.

We recommend that users exercise their own skill and care with respect to the use of this data and that users carefully evaluate the accuracy, currency, completeness and relevance of the data for their needs.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Initial data analysis (IDA) is the part of the data pipeline that takes place between the end of data retrieval and the beginning of data analysis that addresses the research question. Systematic IDA and clear reporting of the IDA findings are important steps towards reproducible research. A general framework of IDA for observational studies includes data cleaning, data screening, and possible updates of pre-planned statistical analyses. Longitudinal studies, where participants are observed repeatedly over time, pose additional challenges, as they have special features that should be taken into account in the IDA steps before addressing the research question. We propose a systematic approach in longitudinal studies to examine data properties prior to conducting planned statistical analyses. In this paper we focus on the data screening element of IDA, assuming that the research aims are accompanied by an analysis plan, meta-data are well documented, and data cleaning has already been performed. IDA data screening comprises five types of explorations, covering the analysis of participation profiles over time, evaluation of missing data, presentation of univariate and multivariate descriptions, and the depiction of longitudinal aspects. Executing the IDA plan will result in an IDA report to inform data analysts about data properties and possible implications for the analysis plan—another element of the IDA framework. Our framework is illustrated focusing on hand grip strength outcome data from a data collection across several waves in a complex survey. We provide reproducible R code on a public repository, presenting a detailed data screening plan for the investigation of the average rate of age-associated decline of grip strength. With our checklist and reproducible R code we provide data analysts a framework to work with longitudinal data in an informed way, enhancing the reproducibility and validity of their work.
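Two of the screening explorations, evaluating missing data and summarizing participation profiles across waves, can be sketched on toy longitudinal data (the variable names and values below are hypothetical, not the survey's own):

```r
# Toy longitudinal data: grip strength measured over three waves.
long <- data.frame(
  id   = rep(1:4, each = 3),
  wave = rep(1:3, times = 4),
  grip = c(30, 29, NA, 25, NA, NA, 40, 38, 37, NA, 33, 32)
)

# Missing-data evaluation: number of missing measurements per wave.
miss_by_wave <- tapply(is.na(long$grip), long$wave, sum)

# Participation profiles: observed measurements per participant.
obs_per_id <- tapply(!is.na(long$grip), long$id, sum)
```

Tables like these would feed directly into the IDA report the abstract describes.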
Road Safety Statistics releases and guidance about the data collection. Collision analysis tool for bespoke breakdowns of our data. STATS19 R package developed independently of DfT, offering an alternative way to access this data for those familiar with the R language.

Latest data: Provisional data for the first 6 months of 2024 published 28 November 2024. These are provisional un-validated data.

Data included: These files provide detailed road safety data about the circumstances of personal injury road collisions in Great Britain from 1979, the types of vehicles involved and the consequential casualties. The statistics relate only to personal injury collisions on public roads that are reported to the police, and subsequently recorded, using the STATS19 collision reporting form. This data contains all the non-sensitive fields that can be made public. Sensitive data fields, for example contributory factors data, can be requested by completing the sensitive data form and contacting the road safety statistics team at roadacc.stats@dft.gov.uk. All the data variables are coded rather than containing textual strings. The lookup tables are available in the supporting documents section towards the bottom of the table. Data relating to the casualty and collision severity adjustment to account for changes in police reporting of severity is provided in separate files and can be joined using the appropriate record identifiers.

Timing of data release: Final annual data is released annually in late September following the publication of the annual reported road casualties Great Britain statistical publication. Individual years' data is available for each of the last 5 years, with earlier years available as part of a single download.
In addition, un-validated provisional mid-year data (covering January to June) is released at the end of November, to provide more up-to-date information.

Data revisions: Except for the severity adjustments, data are not routinely revised, though occasionally minor amendments to previous years can be made. Details of recent revisions are available, together with a request for any feedback on the approach to revising the data. The files published here represent the latest data.
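Joining the severity-adjustment files to the collision records by record identifier can be sketched with base merge; the column names and values below are hypothetical, not the published schema:

```r
# Toy collision records keyed by a record identifier.
collisions <- data.frame(
  accident_index = c("A1", "A2", "A3"),
  severity       = c(2, 3, 3)  # coded values, per the lookup tables
)

# Toy severity-adjustment file, keyed by the same identifier.
adjustments <- data.frame(
  accident_index   = c("A1", "A2", "A3"),
  adjusted_serious = c(0.10, 0.85, 0.05)
)

# Join the adjustment probabilities onto the collision records.
joined <- merge(collisions, adjustments, by = "accident_index")
```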
This page contains a zipped folder with all items necessary to run trend models and produce the results published in U.S. Geological Survey Scientific Investigations Report 2021–XXXX [Tatge, W.S., Hoogestraat, G., and Nustad, R.A., 2022, Water-Quality Data and Trends in the Rapid Creek Basin, South Dakota, 1970–2020: U.S. Geological Survey Scientific Investigations Report 2022-XXXX, XX p.]. To run the R–QWTREND program in R, 6 files are required and each is included in this child page: prepQWdataV4.txt, runQWmodelV4XXUEP.txt, plotQWtrendV4XXUEP.txt, qwtrend2018v4.exe, salflibc.dll, and StartQWTrendV4.R (Vecchia and Nustad, 2020). The folder contains: five items required to run the R–QWTREND trend analysis tool; a readme.txt file; an alldata.csv file; a finalsites2.csv file; a folder called "scripts"; and a folder called "dataout". The "scripts" folder contains the scripts that can be used to reproduce the results found in the USGS Scientific Investigations Report referenced above. The "dataout" folder contains a folder for each site evaluated for trends, holding .RData files named site_flow for streamflow data and site_qw_MI, site_qw_NUT, site_qw_SED, and site_qw_PHY for major-ion, nutrient, sediment, and physical-parameter constituents. R–QWTREND is a software package for analyzing trends in stream-water quality. The package is a collection of functions written in R (R Development Core Team, 2019), an open-source language and general environment for statistical computing and graphics. The following system requirements are necessary for using R–QWTREND: • Windows 10 operating system • R (version 3.4 or later; 64-bit recommended) • RStudio (version 1.1.456 or later). An accompanying report (Vecchia and Nustad, 2020) serves as the formal documentation for R–QWTREND.
Vecchia, A.V., and Nustad, R.A., 2020, Time-series model, statistical methods, and software documentation for R–QWTREND—An R package for analyzing trends in stream-water quality: U.S. Geological Survey Open-File Report 2020–1014, 51 p., https://doi.org/10.3133/ofr20201014
R Development Core Team, 2019, R—A language and environment for statistical computing: Vienna, Austria, R Foundation for Statistical Computing, accessed December 7, 2020, at https://www.r-project.org.
Data and Analysis Repository for
Developing Virtual Reality and Computer Screen Experiments One to One Using Selective Attention as a Case Study
June 2023, Rasmus Ahmt Hansen and Marta Topor
The current repository holds all data and analysis scripts used in the report named above. Data files are saved in .csv format and analysis scripts were written using R and R Markdown.
The report preprint can be accessed at:
The study aimed to develop a reliable PC control condition for a VR experiment assessing selective attention in grade 0 children.
The selective attention task we developed and implemented can be accessed here:
Participants
73 grade 0 children from Danish primary schools completed the selective attention test in both VR and PC environments. Performance quality was low, so only 19 participants were included in the final analyses. All data, included and excluded, are openly available in this repository.
Data
Analysis
Attribution 3.0 (CC BY 3.0) https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
Medicare provides access to medical and hospital services for all Australian residents and certain categories of visitors to Australia. The Medicare Benefits Schedule (MBS) lists services that are subsidised by the Australian Government under Medicare.

This data provides statistics on groups of MBS services. MBS groups (i.e. category, group and subgroup) are described in MBS online.

Data is provided in the following formats:

- Excel: The human-readable data for the current year is provided in an individual Excel file. Historical data (1993-2015) may be found in the Excel zipped file.
- CSV: The machine-readable data for the current year is provided in an individual CSV file. Historical data (1993-2015) may be found in the CSV zipped file.

Additional Medicare statistics may be found on the Department of Human Services website.

Disclaimer: The information and data contained in the reports and tables have been provided by Medicare Australia for general information purposes only. While Medicare Australia takes care in the compilation and provision of the information and data, it does not assume or accept liability for the accuracy, quality, suitability and currency of the information or data, or for any reliance on the information and data. Medicare Australia recommends that users exercise their own care, skill and diligence with respect to the use and interpretation of the information and data.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
# README
These files contain R data objects and R files that represent the key details of the paper, "Evaluating health facility access using Bayesian spatial models and location analysis methods".
The following data sources are available for simulation of some of the ideas in the paper.
- dat_grid_sim: simulated data of the grid and grid cells
- dat_ohca_cv_sim: simulated data containing the cross validated test/training sets of OHCA data
- dat_ohca_sim: simulated OHCA event data
- dat_aed_sim: simulated AED location data
- dat_bldg_sim: simulated building location data
- dat_municipality_sim: simulated municipality information
- table_1: Table 1 information containing key demographic data
These data were produced using the code in 01-create-sim-data.R, and one of the statistical models is demonstrated in 02-demo-inla-model.R.
In terms of the paper itself, the functions and code used in the manuscript are located in:
* 01_tidy.Rmd - analysis code used to tidy up the data
* 02_fit_fixed_all_cv.Rmd - analysis code used to place AEDs
* 02_model.Rmd - analysis code used to fit the model in INLA
* 03_manuscript.Rmd - Full code and text used to create the paper
* 04_supp_materials.Rmd - full code and text used to create the supplementary materials
The following files are a part of an R package "swatial" that was developed along with the paper. These files are:
* DESCRIPTION
* NAMESPACE
* LICENSE
* LICENSE.md
* decay.R
* spherical-distance.R
* test-figure-data-matches.R
* test-table-data-matches.R
* testthat.R
* tidy-inla.R
* tidy-posterior-coefs.R
* tidy-predictions.R
* utils-pipe.R
* All files that end in .Rd are documentation files for the functions.
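A great-circle distance of the kind implemented in spherical-distance.R can be sketched with the haversine formula; the function name, signature, and Earth radius below are illustrative assumptions, not the package's actual API:

```r
# Haversine great-circle distance between two lon/lat points, in km.
spherical_distance <- function(lon1, lat1, lon2, lat2, r = 6371) {
  to_rad <- pi / 180
  dlat <- (lat2 - lat1) * to_rad
  dlon <- (lon2 - lon1) * to_rad
  a <- sin(dlat / 2)^2 +
       cos(lat1 * to_rad) * cos(lat2 * to_rad) * sin(dlon / 2)^2
  2 * r * asin(sqrt(a))
}

# Quarter of a great circle: equator to the north pole (~10,007.5 km).
d <- spherical_distance(0, 0, 0, 90)
```

Distances like these feed the decay functions (decay.R) used in the location analysis.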
## Regarding data sources
Census information for Ticino was transcribed from the Annual Statistical Report of Canton Ticino for the years 2010 to 2015. This data was taken from their publicly accessible annual reports - for example: https://www3.ti.ch/DFE/DR/USTAT/allegati/volume/ast_2015.pdf. The raw data was extracted from these annual reports and placed into the file "swiss_census_popn_2010_2015.xlsx". These data are put into analysis-ready format in the file "01_tidy.Rmd".
Housing and other relevant geospatial data can be accessed via http://map.housing-stat.ch/ and https://data.geo.admin.ch/. The maps of buildings from the REA (Register of Buildings and Dwellings) can be found here: https://map.geo.admin.ch/?zoom=11&bgLayer=ch.swisstopo.pixelkarte-grau&lang=en&topic=ech&layers=ch.bfs.gebaeude_wohnungs_register,ch.swisstopo.swissboundaries3d-gemeinde-flaeche.fill,ch.bfs.volkszaehlung-gebaeudestatistik_gebaeude,ch.bfs.volkszaehlung-gebaeudestatistik_wohnungen,ch.swisstopo.swissbuildings3d_1.metadata,ch.swisstopo.swissbuildings3d_2.metadata&E=2717616.28&N=1096597.25&catalogNodes=687,696&layers_timestamp=,,2016,2016,,&layers_visibility=true,false,false,false,false,false&layers_opacity=1,1,1,1,1,0.75
For further enquiries on this data, contact the Swiss Federal Statistical Office at the details listed here: https://www.bfs.admin.ch/bfs/en/home/services/contact.html
The shapefiles of the Comuni can be accessed here: https://www4.ti.ch/dfe/de/ucr/documentazione/download-file/?noMobile=1
Data from the people living in the Municipalities in Ticino can be downloaded here: https://www3.ti.ch/DFE/DR/USTAT/index.php?fuseaction=dati.home&tema=33&id2=61&id3=65&c1=01&c2=02&c3=02
## Future work
In the future, these functions from the paper may be generalised and put into their own package. If that happens, this repository will be updated with a link to updated functions.
Attribution 3.0 (CC BY 3.0) https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
In March 2003, banks and selected Registered Financial Corporations (RFCs) began reporting their international assets, liabilities and country exposures to APRA in ARF/RRF 231 International Exposures. This return is the basis of the data provided by Australia to the Bank for International Settlements (BIS) for its International Banking Statistics (IBS) data collection. APRA ceased the RFC data collection after September 2010.

The IBS data are based on the methodology described in the BIS Guide on International Financial Statistics (see http://www.bis.org/statistics/intfinstatsguide.pdf; Part II, International banking statistics). Data reported for Australia, and other countries, on the BIS website are expressed in United States dollars (USD).

Data are recorded on an end-quarter basis.

This statistical table contains two data worksheets: one presenting data expressed in Australian dollar (AUD) terms and the other in USD terms.

There are two sets of IBS data: locational data, which are used to gauge the role of banks and financial centres in the intermediation of international capital flows; and consolidated data, which can be used to monitor the country risk exposure of national banking systems. Only consolidated data are reported in this statistical table.

‘Total banks and RFCs’ is also reported in USD equivalent amounts, using the end-quarter AUD/USD exchange rate from statistical table F11.

The consolidated data reported in this statistical table are on the international exposures of banks (and RFCs between March 2003 and September 2010) operating in Australia. The types of assets included here are consistent with the locational data in statistical table B12.1. However, the consolidated data differ from the locational data in three key ways: foreign currency positions with Australian residents are excluded (whereas they are included in the locational data); claims between different offices of the same institution (e.g. between the head office and its subsidiary) are netted (whereas positions, including intra-group positions, are reported on a gross basis in the locational data); and on-balance sheet derivatives are not included in international claims or foreign claims, but are included separately under ‘Derivatives’ in statistical table B13.2. Foreign-owned reporting entities report on an unconsolidated basis.

The consolidated data are split by type of exposure. ‘International claims’ refers to all cross-border claims plus foreign offices’ local claims on residents in foreign currencies; ‘foreign claims’ refers to all cross-border claims plus foreign offices’ local claims on residents in both local and foreign currencies; immediate risk claims (expressed by the BIS as claims on an immediate borrower basis) cover claims based on the country where the immediate counterparty resides; and ultimate risk claims cover immediate exposures adjusted (via guarantees and other risk transfers) to reflect the location of the ultimate counterparty/risk.

Foreign offices include the overseas branches, subsidiaries and joint ventures of a bank (or RFC between March 2003 and September 2010).

Risk transfers are those transfers of risk from the country of the immediate borrower to the country of ultimate risk as a result of guarantees, collateral, and where the counterparty is a legally dependent branch of a bank headquartered in another country. The risk reallocation includes loans to Australian borrowers that are guaranteed by foreign entities and therefore represent outward risk transfers from Australia, which increase the ultimate exposure to the country of the guarantor. Similarly, foreign lending that is guaranteed by Australian entities is reported as an inward risk transfer to Australia, which reduces the ultimate exposure to the country of the foreign borrower. The risk reallocation also includes transfers between different economic sectors (banks, public sector and non-bank private sector) in the same country.

Foreign claims on an ultimate risk basis are shown for the following types of reporting entity: Australian-owned banks (i.e. those with their parent entity legally incorporated in Australia); foreign subsidiary banks; branches of foreign banks; RFCs; and Australian-owned entities (i.e. Australian-owned banks and RFCs). The RFC data are only available between March 2003 and September 2010.

‘Foreign claims (ultimate risk basis) – Aust-owned entities’ is also reported in USD equivalent amounts, using the end-quarter AUD/USD exchange rate from statistical table F11.
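The relationship between immediate risk claims and ultimate risk claims described above can be sketched as a simple reallocation: for each country, exposures guaranteed elsewhere are moved out, and exposures it guarantees are moved in, leaving the total unchanged. The following Python sketch is purely illustrative and is not part of the APRA/BIS data or methodology; the country labels and figures are hypothetical.

```python
# Illustrative sketch: reallocating immediate risk claims to an
# ultimate risk basis via outward and inward risk transfers.
# Countries and amounts are hypothetical, not actual reported data.

def ultimate_risk_claims(immediate, outward, inward):
    """For each country: ultimate = immediate - outward + inward.

    immediate: claims on an immediate borrower basis, by counterparty country
    outward:   exposure reallocated away from a country (its borrowers'
               loans are guaranteed by entities in another country)
    inward:    exposure reallocated into a country (its entities guarantee
               borrowers located elsewhere)
    """
    countries = set(immediate) | set(outward) | set(inward)
    return {
        c: immediate.get(c, 0) - outward.get(c, 0) + inward.get(c, 0)
        for c in countries
    }

# A loan to a borrower in country A guaranteed by an entity in country B
# shifts 20 of exposure from A (outward transfer) to B (inward transfer).
immediate = {"A": 100, "B": 50}
outward = {"A": 20}
inward = {"B": 20}
print(ultimate_risk_claims(immediate, outward, inward))
```

Note that total exposure is unchanged by the reallocation; only its distribution across counterparty countries moves from the immediate borrower to the ultimate guarantor.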
MIT Licensehttps://opensource.org/licenses/MIT
License information was derived automatically
Data and R source for the 2022 Aotea Bird Count (ABC) citizen science project. For more information and previous years' reports, see: https://www.gbiet.org/bird-count
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Script containing all analyses reported in the text, with full statistical outputs included as comments
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This report discusses some problems that can arise when attempting to import PostScript images into R, when the PostScript image contains coordinate transformations that skew the image. There is a description of some new features in the ‘grImport’ package for R that allow these sorts of images to be imported into R successfully.