Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0)https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
Publication
Primahadi Wijaya R., Gede. 2014. Visualisation of diachronic constructional change using Motion Chart. In Zane Goebel, J. Herudjati Purwoko, Suharno, M. Suryadi & Yusuf Al Aried (eds.), Proceedings: International Seminar on Language Maintenance and Shift IV (LAMAS IV), 267-270. Semarang: Universitas Diponegoro. doi: https://doi.org/10.4225/03/58f5c23dd8387
Description of R code and data files in the repository
This repository is imported from its GitHub repo. Versioning of this figshare repository is associated with the GitHub repo's Releases, so check the Releases page for updates (the next version is to include a unified, tidyverse-based version of the code in the first release).
The raw input data consist of two files (will_INF.txt and go_INF.txt). They contain the co-occurrence frequencies of the top-200 infinitival collocates of will and be going to, respectively, across the twenty decades of the Corpus of Historical American English (COHA, from the 1810s to the 2000s).
These two input files are used in the R script 1-script-create-input-data-raw.r, which preprocesses and combines them into a long-format data frame with the following columns: (i) decade, (ii) coll (for "collocate"), (iii) BE going to (frequency of the collocates with be going to), and (iv) will (frequency of the collocates with will); the result is available in input_data_raw.txt. The script 2-script-create-motion-chart-input-data.R then processes input_data_raw.txt to normalise the co-occurrence frequencies of the collocates per million words (the COHA sizes used as the normalising base frequencies are available in coha_size.txt). The output of the second script is input_data_futurate.txt.
input_data_futurate.txt contains the input data for generating (i) the static motion chart used as an image plot in the publication (script 3-script-create-motion-chart-plot.R) and (ii) the dynamic motion chart (script 4-script-motion-chart-dynamic.R).
The repository adopts the project-oriented workflow in RStudio; double-click the Future Constructions.Rproj file to open an RStudio session whose working directory is set to the contents of this repository.
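For orientation, here is a minimal R sketch of the preprocessing and normalisation steps described above. It is not the repository's code: the column layout assumed for will_INF.txt, go_INF.txt and coha_size.txt is a guess, and the scripts 1-script-create-input-data-raw.r and 2-script-create-motion-chart-input-data.R remain the authoritative versions.

```r
# Minimal sketch only; assumed layout of the raw files: a tab-separated table
# with a collocate column ("coll") followed by one frequency column per decade.
library(tidyverse)

read_raw <- function(path, construction) {
  read_tsv(path) %>%
    pivot_longer(-coll, names_to = "decade", values_to = "freq") %>%
    mutate(construction = construction)
}

input_data_raw <- bind_rows(
  read_raw("will_INF.txt", "will"),
  read_raw("go_INF.txt", "BE going to")
) %>%
  pivot_wider(names_from = construction, values_from = freq)

# Normalise to frequency per million words using the decade sizes of COHA;
# coha_size.txt is assumed to hold the columns "decade" and "size".
coha_size <- read_tsv("coha_size.txt") %>%
  mutate(decade = as.character(decade))   # match the character decades above

input_data_futurate <- input_data_raw %>%
  left_join(coha_size, by = "decade") %>%
  mutate(across(c(`BE going to`, will), ~ .x / size * 1e6))
```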
U.S. Government Workshttps://www.usa.gov/government-works
License information was derived automatically
A machine learning streamflow (MLFLOW) model was developed in R (the model is in the Rscripts folder) for modeling monthly streamflow from 2012 to 2017 in three watersheds on the Wyoming Range in the upper Green River basin. Geospatial information for 125 site features (vector data are in the Sites.shp file), discrete streamflow observations, and environmental predictor data were used to fit the MLFLOW model and to predict with the fitted model. Tabular calibration and validation data are in the Model_Fitting_Site_Data.csv file, totaling 971 discrete observations and predictions of monthly streamflow. Geospatial information for 17,518 stream grid cells (raster data are in the Streams.tif file) and environmental predictor data were used for continuous streamflow predictions with the MLFLOW model. Tabular prediction data for the entire study area (17,518 stream grid cells) and study period (72 months; 2012–17) are in the Model_Prediction_Stream_Data.csv file, totaling 1,261,296 p ...
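As an illustration of how the calibration/validation table could be inspected in R, the sketch below computes a common streamflow skill metric; the observed and predicted column names are hypothetical placeholders, since the actual field names in Model_Fitting_Site_Data.csv are not listed here.

```r
# Illustration only: the column names "obs_cfs" and "pred_cfs" are
# hypothetical placeholders for the observed and predicted monthly
# streamflow fields in Model_Fitting_Site_Data.csv.
fit <- read.csv("Model_Fitting_Site_Data.csv")

# Nash-Sutcliffe efficiency, a common skill metric for streamflow models
nse <- function(obs, pred) {
  1 - sum((obs - pred)^2, na.rm = TRUE) /
    sum((obs - mean(obs, na.rm = TRUE))^2, na.rm = TRUE)
}

nse(fit$obs_cfs, fit$pred_cfs)
```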
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Input dataset for R code (first sheet), and BOLD spreadsheet downloaded on April 11, 2022 (next sheets) for "Facing the Infinity".
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Input data for R script.
Trends in nutrient fluxes and streamflow for selected tributaries in the Lake Erie watershed were calculated using monitoring data at 10 locations. Trends in flow-normalized nutrient fluxes were determined by applying a weighted regression approach called WRTDS (Weighted Regression on Time, Discharge, and Season). Site information and streamflow and water-quality records are contained in 3 zipped files named as follows: INFO (site information), Daily (daily streamflow records), and Sample (water-quality records). The INFO, Daily (flow), and Sample files contain the input data, by water-quality parameter and by site as .csv files, used to run trend analyses. These files were generated by the R (version 3.1.2) software package called EGRET - Exploration and Graphics for River Trends (version 2.5.1) (Hirsch and DeCicco, 2015), and can be used directly as input to run graphical procedures and WRTDS trend analyses using EGRET R software. The .csv files are identified according to water-quality parameter (TP, SRP, TN, NO23, and TKN) and site reference number (e.g. TPfiles.1.INFO.csv, SRPfiles.1.INFO.csv, TPfiles.2.INFO.csv, etc.). Water-quality parameter abbreviations and site reference numbers are defined in the file "Site-summary_table.csv" on the landing page, where there is also a site-location map ("Site_map.pdf"). Parameter information details, including abbreviation definitions, appear in the abstract on the landing page. SRP data records were available at only 6 of the 10 trend sites, which are identified in the file "Site-summary_table.csv" (see landing page) as monitored by the organization NCWQR (National Center for Water Quality Research). The SRP sites are: RAIS, MAUW, SAND, HONE, ROCK, and CUYA. The model-input dataset is presented in 3 parts: 1. INFO.zip (site information) 2. Daily.zip (daily streamflow records) 3. Sample.zip (water-quality records) Reference: Hirsch, R.M., and De Cicco, L.A., 2015 (revised). User Guide to Exploration and Graphics for RivEr Trends (EGRET) and dataRetrieval: R Packages for Hydrologic Data, Version 2.0, U.S. Geological Survey Techniques and Methods 4-A10. U.S. Geological Survey, Reston, VA., 93 p. (at: http://dx.doi.org/10.3133/tm4A10).
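As an illustration, the sketch below shows how one site/parameter combination of these files can typically be loaded into EGRET and run through WRTDS; the Daily and Sample file names are assumed to follow the same naming pattern as the INFO example above.

```r
# Sketch of loading one parameter/site combination into EGRET and running
# WRTDS. Only the INFO file name pattern is documented above; the Daily and
# Sample file names are assumed to follow the same pattern.
library(EGRET)

INFO   <- readUserInfo("INFO", "TPfiles.1.INFO.csv")
Daily  <- readUserDaily("Daily", "TPfiles.1.Daily.csv")
Sample <- readUserSample("Sample", "TPfiles.1.Sample.csv")

eList <- mergeReport(INFO, Daily, Sample)
eList <- modelEstimation(eList)   # fits the WRTDS model
plotFluxHist(eList)               # flow-normalized flux history
```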
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data to run the IMACLIM-R France code
This data set includes input data for the development of regression models to predict chloride from specific conductance (SC) data at 56 U.S. Geological Survey water-quality monitoring stations in the eastern United States. Each site has 20 or more simultaneous observations of SC and chloride. Data were downloaded from the National Water Information System (NWIS) using the R package dataRetrieval. Datasets for each site were evaluated and outliers were removed prior to the development of the regression models. This file contains only the final input dataset for the regression models. Please refer to Moore and others (in review) for more details. Moore, J., R. Fanelli, and A. Sekellick. In review. High-frequency data reveal deicing salts drive elevated conductivity and chloride along with pervasive and frequent exceedances of the EPA aquatic life criteria for chloride in urban streams. Submitted to Environmental Science and Technology.
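For illustration only, a sketch of a per-site regression of chloride on SC is given below; the file name, column names, and the linear functional form are assumptions, not a description of the published models.

```r
# Illustration only: a simple per-site linear regression of chloride on
# specific conductance. The file and column names ("sc_chloride_input.csv",
# "site_no", "spec_cond", "chloride") are hypothetical placeholders, and the
# published models may use a different functional form.
inp <- read.csv("sc_chloride_input.csv")

models <- lapply(split(inp, inp$site_no), function(d) {
  lm(chloride ~ spec_cond, data = d)
})

summary(models[[1]])   # coefficients for the first site
```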
The dataset was derived by the Bioregional Assessment Programme from multiple source datasets. The source datasets are identified in the Lineage field in this metadata statement. The processes undertaken to produce this derived dataset are described in the History field in this metadata statement.
AWRA-R model implementation.
The metadata within the dataset contains the workflow, processes, input and output data and instructions to implement the Namoi AWRA-R model for model calibration or simulation. In the Namoi, the AWRA-R simulation was done twice: first without the baseflow input from the groundwater modelling, and then a second set of runs was carried out with that input.
Each sub-folder in the associated data has a readme file indicating folder contents and providing general instructions about the workflow performed.
Detailed documentation of the AWRA-R model is provided in: https://publications.csiro.au/rpr/download?pid=csiro:EP154523&dsid=DS2
Documentation about the implementation of AWRA-R in the Namoi bioregion is provided in BA NAM 2.6.1.3 and 2.6.1.4 products.
'..\AWRAR_Metadata_WGW\Namoi_Model_Sequence.pptx' shows the AWRA-L/R modelling sequence.
BA Surface water modelling for Namoi bioregion
The directories within contain the input data and outputs of the Namoi AWRA-R model for model calibration and simulation. The calibration folders are used as an example; simulation uses mirror files of these data, albeit with longer time series depending on the simulation period.
Detailed documentation of the AWRA-R model is provided in: https://publications.csiro.au/rpr/download?pid=csiro:EP154523&dsid=DS2
Documentation about the implementation of AWRA-R in the Namoi subregion is provided in BA NAM 2.6.1.3 and 2.6.1.4 products.
Additional data needed to generate some of the inputs needed to implement AWRA-R are detailed in the corresponding metadata statement as stated below.
Here is the parent folder:
'..\AWRAR_Metadata_WGW..'
Input data needed:
Gauge/node topological information in '...\model calibration\NAM5.3.1_low_calib\gis\sites\AWRARv5.00_reaches.csv'.
Look up table for soil thickness in '...\model calibration\NAM5.3.1_low_calib\gis\ASRIS_soil_properties\NAM_AWRAR_ASRIS_soil_thickness_v5.00.csv'. (check metadata statement)
Look up tables of AWRA-LG groundwater parameters in '...\model calibration\NAM5.3.1_low_calib\gis\AWRA-LG_gw_parameters\'.
Look up table of AWRA-LG catchment grid cell contribution in '...\model calibration\NAM5.3.1_low_calib\gis\catchment-boundary\AWRA-R_catchment_x_AWRA-L_weight.csv'. (check metadata statement)
Look up tables of link lengths for main river, tributaries and distributaries within a reach in '...\model calibration\NAM5.3.1_low_calib\gis\rivers\'. (check metadata statement)
Time series data of AWRA-LG outputs: evaporation, rainfall, runoff and depth to groundwater.
Gridded data of AWRA-LG groundwater parameters; refer to the explanation in '...\model calibration\NAM5.3.1_low_calib\rawdata\AWRA_LG_output\gw_parameters\README.txt'.
Time series of observed or simulated reservoir level, volume and surface area for reservoirs used in the simulation (Keepit Dam, Split Rock and Chaffey Creek Dam),
located in '...\model calibration\NAM5.3.1_low_calib\rawdata\reservoirs\'.
Gauge station cross sections in '...\model calibration\NAM5.3.1_low_calib\rawdata\Site_Station_Sections\'. (check metadata statement)
Daily streamflow and level time series in '...\model calibration\NAM5.3.1_low_calib\rawdata\streamflow_and_level_all_processed\'.
Irrigation input, configuration and parameter files in '...\model calibration\NAM5.3.1_low_calib\inputs\NAM\irrigation\'.
These come from the separate calibration of the AWRA-R irrigation module in
'...\irrigation calibration simulation\'; refer to the explanation in the readme.txt file therein.
For the dam simulation script, read the following readme.txt files:
'..\AWRAR_Metadata_WGW\dam model calibration simulation\Chaffey\readme.txt'
'...\AWRAR_Metadata_WGW\dam model calibration simulation\Split_Rock_and_Keepit\readme.txt'
Relevant outputs include (see the reading sketch after this list):
AWRA-R time series of stores and fluxes in river reaches ('...\AWRAR_Metadata_WGW\model calibration\NAM5.3.1_low_calib\outputs\jointcalibration\v00\NAM\simulations\')
including simulated streamflow in files denoted XXXXXX_full_period_states_nonrouting.csv where XXXXXX denotes gauge or node ID.
AWRA-R time series of stores and fluxes for irrigation/mining in the same directory as above in files XXXXXX_irrigation_states.csv
AWRA-R calibration and validation goodness-of-fit metrics ('...\AWRAR_Metadata_WGW\model calibration\NAM5.3.1_low_calib\outputs\jointcalibration\v00\NAM\postprocessing\')
in files calval_results_XXXXXX_v5.00.csv
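As a reading sketch (referenced above), the following R snippet shows one way to load the per-gauge simulated-streamflow files using the file pattern described; the directory path is a placeholder and no column structure is assumed.

```r
# Sketch of reading the per-gauge simulation outputs in R. The directory
# below stands in for the '...\outputs\jointcalibration\v00\NAM\simulations\'
# path above; no column structure is assumed.
sim_dir <- "outputs/jointcalibration/v00/NAM/simulations"
files <- list.files(sim_dir, pattern = "_full_period_states_nonrouting\\.csv$",
                    full.names = TRUE)

sims <- lapply(files, read.csv)
names(sims) <- sub("_full_period_states_nonrouting\\.csv$", "", basename(files))
str(sims[[1]])   # inspect the first gauge's time series
```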
Bioregional Assessment Programme (2017) Namoi AWRA-R model implementation (post groundwater input). Bioregional Assessment Derived Dataset. Viewed 12 March 2019, http://data.bioregionalassessments.gov.au/dataset/8681bd56-1806-40a8-892e-4da13cda86b8.
Derived From Historical Mining Footprints DTIRIS NAM 20150914
Derived From GEODATA 9 second DEM and D8: Digital Elevation Model Version 3 and Flow Direction Grid 2008
Derived From Namoi Environmental Impact Statements - Mine footprints
Derived From Namoi Surface Water Mine Footprints - digitised
Derived From River Styles Spatial Layer for New South Wales
Derived From National Surface Water sites Hydstra
Derived From Namoi AWRA-L model
Derived From Namoi Hydstra surface water time series v1 extracted 140814
Derived From Namoi AWRA-R (restricted input data implementation)
Derived From Namoi Existing Mine Development Surface Water Footprints
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
U.S. Government Workshttps://www.usa.gov/government-works
License information was derived automatically
This USGS data release represents the input data, R script, and output data for WRTDS analyses used to identify trends in suspended sediment loads of Coastal Plain streams and rivers in the eastern United States.
The dataset was compiled by the Bioregional Assessment Programme from multiple sources referenced within the dataset and/or metadata. The processes undertaken to compile this dataset are described in the History field in this metadata statement.
Namoi AWRA-R (restricted input data implementation)
This dataset was supplied to the Bioregional Assessment Programme by DPI Water (NSW Government). Metadata was not provided and has been compiled by the Bioregional Assessment Programme based on known details at the time of acquisition. The metadata within the dataset contains the restricted input data to implement the Namoi AWRA-R model for model calibration or simulation. The restricted input contains simulated time-series extracted from the Namoi Integrated Quantity and Quality Model (IQQM) including: irrigation and other diversions (town water supply, mining), reservoir information (volumes, inflows, releases) and allocation information.
Each sub-folder in the associated data has a readme file indicating folder contents and providing general instructions about the use of the data and how it was sourced.
Detailed documentation of the AWRA-R model, is provided in: https://publications.csiro.au/rpr/download?pid=csiro:EP154523&dsid=DS2
Documentation about the implementation of AWRA-R in the Namoi bioregion is provided in BA NAM 2.6.1.3 and 2.6.1.4 products.
The resource is used in the development of river system models.
This dataset was supplied to the Bioregional Assessment Programme by DPI Water (NSW Government). The data was extracted from the IQQM interface and formatted accordingly. It is considered a source dataset because the IQQM model cannot be registered as it was provided under a formal agreement between CSIRO and DPI Water with confidentiality clauses.
Bioregional Assessment Programme (2017) Namoi AWRA-R (restricted input data implementation). Bioregional Assessment Source Dataset. Viewed 12 March 2019, http://data.bioregionalassessments.gov.au/dataset/04fc0b56-ba1d-4981-aaf2-ca6c8eaae609.
The model.zip file contains input data and code supporting the cod_v2 population estimates. The file modelData.RData provides the input data to the JAGS model and the file modelCode.R contains the source code for the model in the JAGS language. The files can be used to run the model for further assessments and as a starting point for further model development. The data and the model were developed using the statistical software R version 4.0.2 (https://cran.r-project.org/bin/windows/base/old/4.0.2) and JAGS 4.3.0 (https://mcmc-jags.sourceforge.io), a program for analysis of Bayesian graphical models using Gibbs sampling, through the R package runjags 2.2.0 (https://cran.r-project.org/web/packages/runjags).
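For illustration, a minimal sketch of re-running the model through runjags is given below; the name of the data object inside modelData.RData and the monitored node names are assumptions.

```r
# Sketch of re-running the model with runjags. The name of the data object
# inside modelData.RData and the monitored node names ("N", "p") are
# assumptions; the actual names depend on modelCode.R.
library(runjags)

load("modelData.RData")          # assumed to load a data list named modelData

results <- run.jags(model    = "modelCode.R",
                    data     = modelData,
                    monitor  = c("N", "p"),
                    n.chains = 3)
summary(results)
```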
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This data and code archive provides all the data and code for replicating the empirical analysis that is presented in the journal article "A Ray-Based Input Distance Function to Model Zero-Valued Output Quantities: Derivation and an Empirical Application" authored by Juan José Price and Arne Henningsen and published in the Journal of Productivity Analysis (DOI: 10.1007/s11123-023-00684-1).
We conducted the empirical analysis with the "R" statistical software (version 4.3.0) using the add-on packages "combinat" (version 0.0.8), "miscTools" (version 0.6.28), "quadprog" (version 1.5.8), "sfaR" (version 1.0.0), "stargazer" (version 5.2.3), and "xtable" (version 1.8.4), all of which are available on CRAN. We created the R package "micEconDistRay", which provides the functions for empirical analyses with ray-based input distance functions that we developed for the above-mentioned paper. This R package is also available on CRAN (https://cran.r-project.org/package=micEconDistRay).
This replication package contains the following files and folders:
README This file
MuseumsDk.csv The original data obtained from the Danish Ministry of Culture and from Statistics Denmark. It includes the following variables:
museum: Name of the museum.
type: Type of museum (Kulturhistorisk museum = cultural history museum; Kunstmuseer = arts museum; Naturhistorisk museum = natural history museum; Blandet museum = mixed museum).
munic: Municipality, in which the museum is located.
yr: Year of the observation.
units: Number of visit sites.
resp: Whether or not the museum has special responsibilities (0 = no special responsibilities; 1 = at least one special responsibility).
vis: Number of (physical) visitors.
aarc: Number of articles published (archeology).
ach: Number of articles published (cultural history).
aah: Number of articles published (art history).
anh: Number of articles published (natural history).
exh: Number of temporary exhibitions.
edu: Number of primary school classes on educational visits to the museum.
ev: Number of events other than exhibitions.
ftesc: Scientific labor (full-time equivalents).
ftensc: Non-scientific labor (full-time equivalents).
expProperty: Running and maintenance costs [1,000 DKK].
expCons: Conservation expenditure [1,000 DKK].
ipc: Consumer Price Index in Denmark (the value for year 2014 is set to 1).
prepare_data.R This R script imports the data set MuseumsDk.csv, prepares it for the empirical analysis (e.g., removing unsuitable observations, preparing variables), and saves the resulting data set as DataPrepared.csv.
DataPrepared.csv This data set is prepared and saved by the R script prepare_data.R. It is used for the empirical analysis.
make_table_descriptive.R This R script imports the data set DataPrepared.csv and creates the LaTeX table /tables/table_descriptive.tex, which provides summary statistics of the variables that are used in the empirical analysis.
IO_Ray.R This R script imports the data set DataPrepared.csv, estimates a ray-based Translog input distance function with the 'optimal' ordering of outputs, imposes monotonicity on this distance function, creates the LaTeX table /tables/idfRes.tex that presents the estimated parameters of this function, and creates several figures in the folder /figures/ that illustrate the results.
IO_Ray_ordering_outputs.R This R script imports the data set DataPrepared.csv, estimates a ray-based Translog input distance function and imposes monotonicity on it for each of the 720 possible orderings of the outputs, and saves all the estimation results as (a huge) R object allOrderings.rds (a sketch of how the 720 orderings can be enumerated appears after this file list).
allOrderings.rds (not included in the ZIP file, uploaded separately) This is a saved R object created by the R script IO_Ray_ordering_outputs.R that contains the estimated ray-based Translog input distance functions (with and without monotonicity imposed) for each of the 720 possible orderings.
IO_Ray_model_averaging.R This R script loads the R object allOrderings.rds that contains the estimated ray-based Translog input distance functions for each of the 720 possible orderings, does model averaging, and creates several figures in the folder /figures/ that illustrate the results.
/tables/ This folder contains the two LaTeX tables table_descriptive.tex and idfRes.tex (created by R scripts make_table_descriptive.R and IO_Ray.R, respectively) that provide summary statistics of the data set and the estimated parameters (without and with monotonicity imposed) for the 'optimal' ordering of outputs.
/figures/ This folder contains 48 figures (created by the R scripts IO_Ray.R and IO_Ray_model_averaging.R) that illustrate the results obtained with the 'optimal' ordering of outputs and the model-averaged results and that compare these two sets of results.
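For orientation, the sketch below shows how the 720 possible orderings of the outputs (used by IO_Ray_ordering_outputs.R) can be enumerated with the "combinat" package; the six output names are hypothetical placeholders.

```r
# Sketch of enumerating the 720 = 6! output orderings with combinat::permn().
# The six output names below are hypothetical placeholders, not the actual
# output set used in IO_Ray_ordering_outputs.R.
library(combinat)

outputs   <- c("y1", "y2", "y3", "y4", "y5", "y6")
orderings <- permn(outputs)   # list of all 720 permutations
length(orderings)             # 720

# Each ordering would then be used to estimate one ray-based input distance
# function, e.g. by looping over seq_along(orderings).
```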
Objective: To develop a clinical informatics pipeline designed to capture large-scale structured EHR data for a national patient registry.
Materials and Methods: The EHR-R-REDCap pipeline is implemented using R-statistical software to remap and import structured EHR data into the REDCap-based multi-institutional Merkel Cell Carcinoma (MCC) Patient Registry using an adaptable data dictionary.
Results: Clinical laboratory data were extracted from EPIC Clarity across several participating institutions. Labs were transformed, remapped and imported into the MCC registry using the EHR labs abstraction (eLAB) pipeline. Forty-nine clinical tests encompassing 482,450 results were imported into the registry for 1,109 enrolled MCC patients. Data-quality assessment revealed highly accurate, valid labs. Univariate modeling was performed for labs at baseline on overall survival (N=176) using this clinical informatics pipeline.
Conclusion: We demonstrate feasibility of the facile eLAB workflow. EHR...
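For illustration only, the sketch below shows the general remap-and-import idea with a data dictionary; it is not the published eLAB code, and all file and column names are hypothetical placeholders.

```r
# Illustration of the remap-and-import idea only; this is not the published
# eLAB code. The file and column names ("clarity_labs.csv",
# "lab_dictionary.csv", "local_lab_name", "registry_field", "mrn",
# "result_value") are hypothetical placeholders.
library(dplyr)

labs <- read.csv("clarity_labs.csv")      # structured lab extract
dict <- read.csv("lab_dictionary.csv")    # maps local lab names to registry fields

redcap_import <- labs %>%
  inner_join(dict, by = "local_lab_name") %>%
  transmute(record_id  = mrn,
            field_name = registry_field,
            value      = result_value)

# Flat file suitable for REDCap's data import tool
write.csv(redcap_import, "mcc_labs_import.csv", row.names = FALSE)
```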
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Input data for openSTARS.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository contains a zip file with input and output data for an experiment with OCALM (https://github.com/fanavarro/ocalm).
The SNOMED ontology and the medical text corpus used for input were not included due to licensing issues; however, the results are included in this repository.
This collection includes input and output data files from a SWAT model calibrated for the Upper Rogue River, OR, USA with predictions generated for the 2040s. Files are archived in a model instance folder in CUAHSI's HydroShare site. See link for more details.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Input dataset of BIN data for R script.
Abstract
This dataset contains the excel spreadsheet extract and shapefile of all NAM coal deposits and resources.
Namoi AWRA-R model implementation.
The metadata within the dataset contains the workflow, processes, input and output data and instructions to implement the Namoi AWRA-R model for model calibration or simulation. In the Namoi, the AWRA-R simulation was done twice: first without the baseflow input from the groundwater modelling, and then a second set of runs was carried out with that input.
Each sub-folder in the associated data has a readme file indicating folder contents and providing general instructions about the workflow performed.
Detailed documentation of the AWRA-R model is provided in: https://publications.csiro.au/rpr/download?pid=csiro:EP154523&dsid=DS2
Documentation about the implementation of AWRA-R in the Namoi bioregion is provided in BA NAM 2.6.1.3 and 2.6.1.4 products.
'..\AWRAR_Metadata_WGW\Namoi_Model_Sequence.pptx' shows the AWRA-L/R modelling sequence.
Dataset History
The directories within contain the input data and outputs of the Namoi AWRA-R model for model calibration and simulation. The calibration folders are used as an example; simulation uses mirror files of these data, albeit with longer time series depending on the simulation period.
Detailed documentation of the AWRA-R model is provided in: https://publications.csiro.au/rpr/download?pid=csiro:EP154523&dsid=DS2
Documentation about the implementation of AWRA-R in the Namoi subregion is provided in BA NAM 2.6.1.3 and 2.6.1.4 products.
Additional data needed to generate some of the inputs needed to implement AWRA-R are detailed in the corresponding metadata statement as stated below.
Here is the parent folder:
'..\AWRAR_Metadata_NGW..'
Input data needed:
Gauge/node topological information in '...\model calibration\NAM5.3.1_low_calib\gis\sites\AWRARv5.00_reaches.csv'.
Look up table for soil thickness in '...\model calibration\NAM5.3.1_low_calib\gis\ASRIS_soil_properties\NAM_AWRAR_ASRIS_soil_thickness_v5.00.csv'. (check metadata statement)
Look up tables of AWRA-LG groundwater parameters in '...\model calibration\NAM5.3.1_low_calib\gis\AWRA-LG_gw_parameters'.
Look up table of AWRA-LG catchment grid cell contribution in '...\model calibration\NAM5.3.1_low_calib\gis\catchment-boundary\AWRA-R_catchment_x_AWRA-L_weight.csv'. (check metadata statement)
Look up tables of link lengths for main river, tributaries and distributaries within a reach in '...\model calibration\NAM5.3.1_low_calib\gis\rivers'. (check metadata statement)
Time series data of AWRA-LG outputs: evaporation, rainfall, runoff and depth to groundwater.
Gridded data of AWRA-LG groundwater parameters; refer to the explanation in '...\model calibration\NAM5.3.1_low_calib\rawdata\AWRA_LG_output\gw_parameters\README.txt'.
Time series of observed or simulated reservoir level, volume and surface area for reservoirs used in the simulation (Keepit Dam, Split Rock and Chaffey Creek Dam),
located in '...\model calibration\NAM5.3.1_low_calib\rawdata\reservoirs'.
Gauge station cross sections in '...\model calibration\NAM5.3.1_low_calib\rawdata\Site_Station_Sections'. (check metadata statement)
Daily streamflow and level time series in '...\model calibration\NAM5.3.1_low_calib\rawdata\streamflow_and_level_all_processed'.
Irrigation input, configuration and parameter files in '...\model calibration\NAM5.3.1_low_calib\inputs\NAM\irrigation'.
These come from the separate calibration of the AWRA-R irrigation module in
'...\irrigation calibration simulation\'; refer to the explanation in the readme.txt file therein.
For the dam simulation script, read the following readme.txt files:
'..\AWRAR_Metadata_NGW\dam model calibration simulation\Chaffey\readme.txt'
'...\AWRAR_Metadata_NGW\dam model calibration simulation\Split_Rock_and_Keepit\readme.txt'
Relevant outputs include:
AWRA-R time series of stores and fluxes in river reaches ('...\AWRAR_Metadata_NGW\model calibration\NAM5.3.1_low_calib\outputs\jointcalibration\v00\NAM\simulations'),
including simulated streamflow in files named XXXXXX_full_period_states_nonrouting.csv, where XXXXXX denotes the gauge or node ID.
AWRA-R time series of stores and fluxes for irrigation/mining in the same directory as above, in files XXXXXX_irrigation_states.csv.
AWRA-R calibration and validation goodness-of-fit metrics ('...\AWRAR_Metadata_NGW\model calibration\NAM5.3.1_low_calib\outputs\jointcalibration\v00\NAM\postprocessing'),
in files calval_results_XXXXXX_v5.00.csv.
Dataset Citation
Bioregional Assessment Programme (2017) Namoi AWRA-R model implementation (pre groundwater input). Bioregional Assessment Derived Dataset. Viewed 12 March 2019, http://data.bioregionalassessments.gov.au/dataset/433a27f1-cee8-499e-970a-607c6a25e979.
Dataset Ancestors
Derived From Historical Mining Footprints DTIRIS NAM 20150914
Derived From Namoi AWRA-L model
Derived From River Styles Spatial Layer for New South Wales
Derived From Namoi Surface Water Mine Footprints - digitised
Derived From Namoi Environmental Impact Statements - Mine footprints
Derived From National Surface Water sites Hydstra
Derived From Namoi AWRA-R (restricted input data implementation)
Derived From Namoi Hydstra surface water time series v1 extracted 140814
Derived From GEODATA 9 second DEM and D8: Digital Elevation Model Version 3 and Flow Direction Grid 2008
Derived From Namoi Existing Mine Development Surface Water Footprints
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Input data relating to the R workflow for predicting substrate types on the Norwegian continental margin (https://github.com/diesing-ngu/GrainSizeReg). The following files are included (see the loading sketch after this list):
AoI_Harris_mod - Polygon shapefile delimiting the area of interest
GrainSize_4km_MaxCombArea_folk8_point_20230628 - Point shapefile of the response data (substrate type). Note that these data points were derived from mapped products and are not sample points as such.
predictors_ngb.tif - Multi-band georeferenced TIFF-file of predictor variables
predictors_description.txt - Information on the variables stored in predictors_ngb.tif, including units, statistics, time period and sources.
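As a loading sketch (referenced above), the following R snippet shows one way these files could be assembled into model input using the terra package; the .shp file extensions and the workflow details are assumptions, and the GrainSizeReg repository contains the actual code.

```r
# Sketch of assembling response and predictor data with the terra package;
# the GrainSizeReg repository linked above contains the actual workflow.
# The .shp extensions are assumed, and no attribute field names are assumed.
library(terra)

aoi   <- vect("AoI_Harris_mod.shp")
resp  <- vect("GrainSize_4km_MaxCombArea_folk8_point_20230628.shp")
preds <- rast("predictors_ngb.tif")

preds <- crop(preds, aoi)          # limit predictors to the area of interest
train <- cbind(as.data.frame(resp), extract(preds, resp))
str(train)                         # response attributes plus extracted predictor values
```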