Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
data file in SAS format
The Fiscal Intermediary maintains the Provider Specific File (PSF). The file contains provider-specific information that affects computations for the Prospective Payment System. Provider Specific Files in SAS format are located in the Download section below for the following provider types: Inpatient, Skilled Nursing Facility, Home Health Agency, Hospice, Inpatient Rehab, Long Term Care, and Inpatient Psychiatric Facility.
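A minimal sketch of reading one of these SAS-format files into a DataFrame; the filename and transport format below are assumptions, not the actual names used in the Download section.

```python
import pandas as pd

# Sketch only: assumes the downloaded PSF is a SAS transport (.xpt) file;
# the filename is a placeholder.
psf = pd.read_sas("provider_specific_file.xpt", format="xport", encoding="latin-1")
print(psf.shape)
print(psf.head())
```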
We compiled macroinvertebrate assemblage data collected from 1995 to 2014 from the St. Louis River Area of Concern (AOC) of western Lake Superior. Our objective was to define depth-adjusted cutoff values for benthos condition classes (poor, fair, reference) to provide a tool useful for assessing progress toward achieving removal targets for the degraded benthos beneficial use impairment in the AOC. The relationship between depth and benthos metrics was wedge-shaped. We therefore used quantile regression to model the limiting effect of depth on selected benthos metrics, including taxa richness, percent non-oligochaete individuals, combined percent Ephemeroptera, Trichoptera, and Odonata individuals, and density of ephemerid mayfly nymphs (Hexagenia). We created a scaled trimetric index from the first three metrics. Metric values at or above the 90th percentile quantile regression model prediction were defined as reference condition for that depth. We set the cutoff between poor and fair condition as the 50th percentile model prediction. We examined sampler type, exposure, geographic zone of the AOC, and substrate type for confounding effects. Based on these analyses we combined data across sampler type and exposure classes and created separate models for each geographic zone. We used the resulting condition class cutoff values to assess the relative benthic condition for three habitat restoration project areas. The depth-limited pattern of ephemerid abundance we observed in the St. Louis River AOC also occurred elsewhere in the Great Lakes. We provide tabulated model predictions for application of our depth-adjusted condition class cutoff values to new sample data. This dataset is associated with the following publication: Angradi, T., W. Bartsch, A. Trebitz, V. Brady, and J. Launspach. A depth-adjusted ambient distribution approach for setting numeric removal targets for a Great Lakes Area of Concern beneficial use impairment: Degraded benthos. JOURNAL OF GREAT LAKES RESEARCH. International Association for Great Lakes Research, Ann Arbor, MI, USA, 43(1): 108-120, (2017).
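The cutoff logic described above (90th percentile for reference condition, 50th percentile for the poor/fair boundary) can be illustrated with a linear quantile regression; this is a sketch under assumed file and column names and a linear model form, not the authors' code or model specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical benthos samples: one row per sample with depth (m) and taxa richness.
samples = pd.read_csv("benthos_samples.csv")  # placeholder file and column names

model = smf.quantreg("richness ~ depth", data=samples)
ref_fit = model.fit(q=0.90)   # 90th percentile: reference-condition boundary
fair_fit = model.fit(q=0.50)  # 50th percentile: poor/fair cutoff

new_depths = pd.DataFrame({"depth": [2, 5, 10, 15]})
print(ref_fit.predict(new_depths))   # depth-adjusted reference cutoffs
print(fair_fit.predict(new_depths))  # depth-adjusted poor/fair cutoffs
```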
The raw data for each of the analyses are presented: baseline severity difference (probands only; Figure A in S1 Dataset), repeated measures analysis of change in lesion severity (Figure B in S1 Dataset), logistic regression of survivorship (Figure C in S1 Dataset), and time to cure (Figure D in S1 Dataset). Each data set is given as SAS code for the data itself, together with the equivalent analysis to that performed in JMP (and reported in the text). Data are presented in SAS format because this is a simple text format. The data and code were generated as direct exports from JMP, and additional SAS code was added as needed (for instance, JMP does not export code for post-hoc tests). Note, however, that SAS rounds to less precision than JMP and can give slightly different results, especially for REML methods. (DOCX)
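For readers without SAS, an analysis such as the survivorship model can be reproduced in other tools once the datalines have been exported to a delimited text file; the sketch below is illustrative only, with placeholder file and variable names rather than the authors' ones.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder: survivorship data exported from the SAS datalines to a CSV file,
# with a 0/1 outcome and a treatment-group column (names assumed).
df = pd.read_csv("survivorship.csv")

fit = smf.logit("survived ~ C(treatment)", data=df).fit()
print(fit.summary())
```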
The simulated synthetic aperture sonar (SAS) data presented here were generated using PoSSM [Johnson and Brown 2018]. The data are suitable for bistatic, coherent signal processing and can be beamformed into acoustic seafloor imagery. Included in this data package are simulated sonar data in Generic Data Format (GDF) files, a description of the GDF file contents, example SAS imagery, and supporting information about the simulated scenes. In total, there are eleven 60 m x 90 m scenes, labeled scene00 through scene10, with scene00 provided with the scatterers in isolation, i.e. no seafloor texture. This scene is provided for beamformer testing purposes and should result in an image similar to the one labeled "PoSSM-scene00-scene00-starboard-0.tif" in the Related Data Sets tab. The ten other scenes have varying degrees of model variation as described in "Description_of_Simulated_SAS_Data_Package.pdf". A description of the data and the model is found in the associated document "Description_of_Simulated_SAS_Data_Package.pdf", and a description of the format in which the raw binary data are stored is found in the related document "PSU_GDF_Format_20240612.pdf". The format description also includes MATLAB code that will parse the data to aid in signal processing and image reconstruction. It is left to the researcher to develop a beamforming algorithm suitable for coherent signal and image processing. Each 60 m x 90 m scene is represented by 4 raw (not beamformed) GDF files, labeled sceneXX-STARBOARD-000000 through 000003. It is possible to beamform smaller scenes from any one of these 4 files; the four files are combined sequentially to form a 60 m x 90 m image. Also included are comma-separated value spreadsheets describing the locations of scatterers and objects of interest within each scene. In addition to the binary GDF data, a beamformed GeoTIFF image and single-look complex (SLC, "science" file) data for each scene are provided. The SLC (science) data are stored in the Hierarchical Data Format 5 (https://www.hdfgroup.org/), and the files are appended with ".hdf5" to indicate the HDF5 format. The data are stored as 32-bit real and 32-bit complex values. A viewer is available that provides basic graphing, image display, and directory navigation functions (https://www.hdfgroup.org/downloads/hdfview/). The HDF file contains all the information necessary to reconstruct a synthetic aperture sonar image. All major and contemporary programming languages have library support for encoding/decoding the HDF5 format. Supporting documentation that outlines the positions of the seafloor scatterers is included in "Scatterer_Locations_Scene00.csv", while the locations of the objects of interest for scene01-scene10 are included in "Object_Locations_All_Scenes.csv". Portable Network Graphic (PNG) images that plot the locations of all the objects of interest in each scene in Along-Track and Cross-Track notation are also provided.
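As one example of that language support, the sketch below opens an SLC science file with h5py; the filename and internal dataset path are assumptions for illustration only — the actual layout is documented in "PSU_GDF_Format_20240612.pdf".

```python
import h5py
import numpy as np

# Quick-look sketch for one SLC ".hdf5" science file; filename and dataset path
# are placeholders, not the documented layout.
with h5py.File("scene01_slc.hdf5", "r") as f:
    f.visit(print)                          # list the file's groups/datasets first
    slc = f["slc/complex_image"][...]       # hypothetical dataset path

# Complex SLC pixels -> magnitude image in dB for a quick look.
magnitude_db = 20 * np.log10(np.abs(slc) + 1e-12)
print(magnitude_db.shape, magnitude_db.dtype)
```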
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This formatted dataset (AnalysisDatabaseGBD) originates from raw data files from the Institute for Health Metrics and Evaluation (IHME) Global Burden of Disease Study (GBD2017), affiliated with the University of Washington. We are volunteer collaborators with IHME and are not employed by IHME or the University of Washington.
The population-weighted GBD2017 data cover male and female cohorts ages 15-69 years and include noncommunicable diseases (NCDs), body mass index (BMI), cardiovascular disease (CVD), and other health outcomes and associated dietary, metabolic, and other risk factors. The purpose of creating this population-weighted, formatted database is to explore the univariate and multiple regression correlations of health outcomes with risk factors. Our research hypothesis is that we can successfully model NCDs, BMI, CVD, and other health outcomes with their attributable risks; a minimal regression sketch follows the numbered list below.
These Global Burden of Disease data relate to the preprint: The EAT-Lancet Commission Planetary Health Diet compared with Institute of Health Metrics and Evaluation Global Burden of Disease Ecological Data Analysis.
The data include the following:
1. Analysis database of population weighted GBD2017 data that includes over 40 health risk factors, noncommunicable disease deaths/100k/year of male and female cohorts ages 15-69 years from 195 countries (the primary outcome variable that includes over 100 types of noncommunicable diseases) and over 20 individual noncommunicable diseases (e.g., ischemic heart disease, colon cancer, etc).
2. A text file to import the analysis database into SAS
3. The SAS code to format the analysis database to be used for analytics
4. SAS code for deriving Tables 1, 2, 3 and Supplementary Tables 5 and 6
5. SAS code for deriving the multiple regression formula in Table 4.
6. SAS code for deriving the multiple regression formula in Table 5
7. SAS code for deriving the multiple regression formula in Supplementary Table 7
8. SAS code for deriving the multiple regression formula in Supplementary Table 8
9. The Excel files that accompanied the above SAS code to produce the tables
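The modeling goal described above can be pictured with an ordinary least squares multiple regression; this sketch assumes the analysis database has been exported to CSV and uses placeholder variable names, not the GBD2017 column names or the model specification in the SAS code listed above.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder export of the analysis database and illustrative risk-factor names.
gbd = pd.read_csv("AnalysisDatabaseGBD.csv")

formula = "ncd_deaths_per_100k ~ bmi + smoking_prevalence + sodium_intake"
fit = smf.ols(formula, data=gbd).fit()
print(fit.summary())
```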
For questions, please email davidkcundiff@gmail.com. Thanks.
This SAS code extracts data from the EU-SILC User Database (UDB) longitudinal files and edits it to produce a file that can be used for differential mortality analyses. Information from the original D, R, H and P files is merged per person and possibly pooled over several longitudinal data releases. Vital status information is extracted from target variables DB110 and RB110, and time at risk between the first interview and either death or censoring is estimated based on quarterly date information. Apart from path specifications, the SAS code consists of several SAS macros. Two of them require parameter specification from the user; the others are simply executed. The code was written in Base SAS, Version 9.4. By default, the output file contains several variables that are necessary for differential mortality analyses, such as sex, age, country, year of first interview, and vital status information. In addition, the user may specify the analytical variables by which mortality risk should be compared later, for example educational level or occupational class. These analytical variables may be measured either at the first interview (the baseline) or at the last interview of a respondent. The output file is available in SAS format and, by default, also in CSV format.
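The time-at-risk step can be sketched as follows; this is a rough illustration under assumed column names and a mid-quarter dating convention, not the SAS macros themselves.

```python
import pandas as pd

# Placeholder extract with one row per person; column names are assumptions.
df = pd.read_csv("silc_longitudinal_extract.csv")

def quarter_midpoint(year, quarter):
    # Approximate each quarter by its middle month (Feb, May, Aug, Nov).
    return pd.Timestamp(year=int(year), month=int(quarter) * 3 - 1, day=15)

start = df.apply(lambda r: quarter_midpoint(r["first_interview_year"],
                                            r["first_interview_quarter"]), axis=1)
end = df.apply(lambda r: quarter_midpoint(r["end_year"], r["end_quarter"]), axis=1)

df["years_at_risk"] = (end - start).dt.days / 365.25
df["died"] = df["vital_status"].eq("dead").astype(int)  # derived from DB110/RB110
print(df[["years_at_risk", "died"]].head())
```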
The SAS2RAW database is a log of the 28 SAS-2 observation intervals and contains target names, sky coordinates, start times, and other information for all 13056 photons detected by SAS-2. The original data came from two sources: the photon information was obtained from the Event Encyclopedia, and the exposures were derived from the original "Orbit Attitude Live Time" (OALT) tapes stored at NASA/GSFC. These data sets were combined into FITS-format images at HEASARC. The images were formed by making the center pixel of a 512 x 512 pixel image correspond to the RA and DEC given in the event file. Each photon's RA and DEC was converted to a relative pixel in the image using Aitoff projections. All the raw data from the original SAS-2 binary data files are now stored in 28 FITS files. These images can be accessed and plotted using XIMAGE, and other columns of the FITS file extensions can be plotted with the FTOOL FPLOT. This is a service provided by NASA HEASARC.
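Outside of XIMAGE/FPLOT, a quick look at one of the 28 FITS files is possible with astropy; the filename is a placeholder and the image is assumed to sit in the primary HDU.

```python
from astropy.io import fits
import matplotlib.pyplot as plt

# Placeholder filename; primary-HDU image assumed.
with fits.open("sas2_interval01.fits") as hdul:
    hdul.info()                              # list image and table extensions
    image = hdul[0].data.astype(float)       # 512 x 512 photon map
    target = hdul[0].header.get("OBJECT", "SAS-2 observation")

plt.imshow(image, origin="lower", cmap="gray")
plt.colorbar(label="counts")
plt.title(target)
plt.show()
```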
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Introduction: A required step in presenting the results of clinical studies is the reporting of participants' demographic and baseline characteristics, as required by FDAAA 801. The common workflow for this task is to export the clinical data from the electronic data capture system in use and import it into statistical software such as SAS or IBM SPSS. This software requires trained users, who have to implement the analysis individually for each item. These expenditures may become an obstacle for small studies. The objective of this work is to design, implement, and evaluate an open source application, called ODM Data Analysis, for the semi-automatic analysis of clinical study data.
Methods: The system requires clinical data in the CDISC Operational Data Model (ODM) format. After the file is uploaded, its syntax and the data type conformity of the collected data are validated. The completeness of the study data is determined, and basic statistics, including illustrative charts for each item, are generated. Datasets from four clinical studies were used to evaluate the application's performance and functionality.
Results: The system is implemented as an open source web application (available at https://odmanalysis.uni-muenster.de) and is also provided as a Docker image, which enables easy distribution and installation on local systems. Study data is stored in the application only as long as the calculations are being performed, which complies with data protection requirements. Analysis times are below half an hour, even for larger studies with over 6000 subjects.
Discussion: Medical experts have confirmed the usefulness of this application for gaining an overview of their collected study data for monitoring purposes and for generating descriptive statistics without further user interaction. The semi-automatic analysis has its limitations and cannot replace the complex analyses of statisticians, but it can serve as a starting point for their examination and reporting.
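The per-item completeness step can be pictured with a few lines of ODM parsing; this is not the application's own code — the export filename is a placeholder and values are assumed to be stored in the ItemData Value attribute.

```python
import xml.etree.ElementTree as ET
from collections import Counter

NS = {"odm": "http://www.cdisc.org/ns/odm/v1.3"}   # CDISC ODM 1.3 namespace
tree = ET.parse("study_export.xml")                # placeholder filename

counts = Counter()
for item in tree.getroot().iterfind(".//odm:ItemData", NS):
    if item.get("Value") not in (None, ""):        # assumes Value-attribute storage
        counts[item.get("ItemOID")] += 1           # non-empty entries per item

for oid, n in counts.most_common():
    print(f"{oid}: {n} completed entries")
```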
https://www.wiseguyreports.com/pages/privacy-policy
| Report Attribute | Details |
|---|---|
| BASE YEAR | 2024 |
| HISTORICAL DATA | 2019 - 2023 |
| REGIONS COVERED | North America, Europe, APAC, South America, MEA |
| REPORT COVERAGE | Revenue Forecast, Competitive Landscape, Growth Factors, and Trends |
| MARKET SIZE 2024 | 3.82 (USD Billion) |
| MARKET SIZE 2025 | 4.06 (USD Billion) |
| MARKET SIZE 2035 | 7.5 (USD Billion) |
| SEGMENTS COVERED | Controller Type, Storage Interface, Deployment Type, End User, Regional |
| COUNTRIES COVERED | US, Canada, Germany, UK, France, Russia, Italy, Spain, Rest of Europe, China, India, Japan, South Korea, Malaysia, Thailand, Indonesia, Rest of APAC, Brazil, Mexico, Argentina, Rest of South America, GCC, South Africa, Rest of MEA |
| KEY MARKET DYNAMICS | Rising data storage needs, Increasing cloud adoption, Demand for high-speed data access, Expanding enterprise applications, Technological advancements in storage solutions |
| MARKET FORECAST UNITS | USD Billion |
| KEY COMPANIES PROFILED | Broadcom, Microchip Technology, NetApp, Oracle, Samsung Electronics, Dell Technologies, Seagate Technology, Red Hat, Hewlett Packard Enterprise, Western Digital, Intel, IBM |
| MARKET FORECAST PERIOD | 2025 - 2035 |
| KEY MARKET OPPORTUNITIES | Increasing data storage demands, Growing adoption of cloud infrastructure, Rising need for data security, Advancements in RAID technology, Expansion in emerging markets |
| COMPOUND ANNUAL GROWTH RATE (CAGR) | 6.4% (2025 - 2035) |
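As a quick consistency check (an illustration, not part of the source table), the quoted CAGR follows from the 2025 and 2035 market-size rows:

\[
\mathrm{CAGR} = \left(\frac{7.5}{4.06}\right)^{1/10} - 1 \approx 0.063,
\]

i.e. about 6.3% per year, in line with the stated 6.4% for 2025 - 2035; the same relation links the market-size and CAGR rows in the other report tables below.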
https://www.wiseguyreports.com/pages/privacy-policy
| Report Attribute | Details |
|---|---|
| BASE YEAR | 2024 |
| HISTORICAL DATA | 2019 - 2023 |
| REGIONS COVERED | North America, Europe, APAC, South America, MEA |
| REPORT COVERAGE | Revenue Forecast, Competitive Landscape, Growth Factors, and Trends |
| MARKET SIZE 2024 | 3.46 (USD Billion) |
| MARKET SIZE 2025 | 3.64 (USD Billion) |
| MARKET SIZE 2035 | 6.0 (USD Billion) |
| SEGMENTS COVERED | Application, Technology, End Use, Circuit Type, Regional |
| COUNTRIES COVERED | US, Canada, Germany, UK, France, Russia, Italy, Spain, Rest of Europe, China, India, Japan, South Korea, Malaysia, Thailand, Indonesia, Rest of APAC, Brazil, Mexico, Argentina, Rest of South America, GCC, South Africa, Rest of MEA |
| KEY MARKET DYNAMICS | Technological advancements, Increasing data center demand, Rising storage solutions market, Growing adoption of cloud computing, Enhanced data transfer speeds |
| MARKET FORECAST UNITS | USD Billion |
| KEY COMPANIES PROFILED | Broadcom, Infineon Technologies, STMicroelectronics, NXP Semiconductors, Skyworks Solutions, Nordic Semiconductor, Renesas Electronics, Analog Devices, Texas Instruments, ON Semiconductor, Maxim Integrated, Microchip Technology, Cypress Semiconductor, Marvell Technology, Semtech |
| MARKET FORECAST PERIOD | 2025 - 2035 |
| KEY MARKET OPPORTUNITIES | Growing data center investments, Increasing demand for high-speed storage, Adoption of cloud computing solutions, Rising need for data redundancy systems, Advancements in storage technology |
| COMPOUND ANNUAL GROWTH RATE (CAGR) | 5.1% (2025 - 2035) |
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Table of dataset characteristics used for the benchmark.
The Emerging Pathogens Initiative (EPI) database contains emerging pathogens information from the local Veterans Affairs Medical Centers (VAMCs). The EPI software package allows the VA to track emerging pathogens at the national level without additional data entry at the local level. The results from aggregating the data can be shared with the appropriate public health authorities, including non-VA and private health care sector organizations, allowing national planning, formulation of intervention strategies, and resource allocation. EPI is designed to automatically collect data on emerging diseases for Veterans Affairs Central Office (VACO) to analyze. The data is sent to the Austin Information Technology Center (AITC) from all Veterans Health Information Systems and Technology Architecture (VistA) systems for initial processing and combination with related workload data. VACO data retrieval and analysis is then carried out. The AITC creates two file structures, both in Statistical Analysis Software (SAS) file format, which are used as a source of data for the Veterans Affairs Headquarters (VAHQ) Infectious Diseases Program Office. These files are manipulated and used for analysis and reporting by the National Infectious Diseases Service. Emerging pathogens (as characterized by VACO) act as triggers for data acquisition activities in the automated program. The system retrieves relevant, predetermined, patient-specific information in the form of a Health Level Seven (HL7) message that is transmitted to the central data repository at the AITC. Once at that location, the data is converted to a SAS dataset for analysis by the VACO National Infectious Diseases Service. Before data transmission, an Emerging Pathogens Verification Report is produced for the local sites to review, verify, and correct as needed. After transmission to the AITC, the data is added to the EPI database.
https://dataverse.harvard.edu/api/datasets/:persistentId/versions/10.0/customlicense?persistentId=doi:10.7910/DVN/PNOFKI
InfoGroup’s Historical Business Backfile consists of geo-coded records of millions of US businesses and other organizations that contain basic information on each entity, such as contact information, industry description, annual revenues, number of employees, year established, and other data. Each annual file consists of a “snapshot” of InfoGroup’s data as of the last day of each year, creating a time series of data for 1997-2019. Access is restricted to current Harvard University community members. Use of Infogroup US Historical Business Data is subject to the terms and conditions of a license agreement (effective March 16, 2016) between Harvard and Infogroup Inc. and to applicable laws. Most data files are available in either .csv or .sas format. All data files are compressed into an archive in .gz (GZIP) format. Extraction software such as 7-Zip is required to unzip these archives.
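Python's standard library can stand in for 7-Zip when unpacking the .gz archives, after which pandas reads either file format; the filenames below are placeholders for the actual annual snapshots.

```python
import gzip
import shutil
import pandas as pd

# Extract one annual snapshot from its .gz archive (placeholder filenames).
with gzip.open("infogroup_1997.csv.gz", "rb") as src, open("infogroup_1997.csv", "wb") as dst:
    shutil.copyfileobj(src, dst)

csv_year = pd.read_csv("infogroup_1997.csv")                 # .csv version
sas_year = pd.read_sas("infogroup_1997.sas7bdat",            # .sas version, assuming
                       encoding="latin-1")                   # a sas7bdat file
print(csv_year.shape)
```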
https://www.wiseguyreports.com/pages/privacy-policy
| Report Attribute | Details |
|---|---|
| BASE YEAR | 2024 |
| HISTORICAL DATA | 2019 - 2023 |
| REGIONS COVERED | North America, Europe, APAC, South America, MEA |
| REPORT COVERAGE | Revenue Forecast, Competitive Landscape, Growth Factors, and Trends |
| MARKET SIZE 2024 | 2.29 (USD Billion) |
| MARKET SIZE 2025 | 2.49 (USD Billion) |
| MARKET SIZE 2035 | 5.8 (USD Billion) |
| SEGMENTS COVERED | Interface Type, Application, End Use Industry, Form Factor, Regional |
| COUNTRIES COVERED | US, Canada, Germany, UK, France, Russia, Italy, Spain, Rest of Europe, China, India, Japan, South Korea, Malaysia, Thailand, Indonesia, Rest of APAC, Brazil, Mexico, Argentina, Rest of South America, GCC, South Africa, Rest of MEA |
| KEY MARKET DYNAMICS | Growing data storage needs, Rising demand for high-speed connectivity, Increasing adoption of cloud services, Advancements in data transfer technologies, Expanding applications in various industries |
| MARKET FORECAST UNITS | USD Billion |
| KEY COMPANIES PROFILED | Lcom, TE Connectivity, Molex, Nexans, Cinch Connectivity Solutions, 3M, Samtec, Broadcom, JAE, Phoenix Contact, Hirose Electric, Amphenol |
| MARKET FORECAST PERIOD | 2025 - 2035 |
| KEY MARKET OPPORTUNITIES | Rising demand for data centers, Increasing adoption of cloud storage, Growth in enterprise data management, Advancements in high-speed connectivity, Expansion of IoT applications |
| COMPOUND ANNUAL GROWTH RATE (CAGR) | 8.8% (2025 - 2035) |
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This online supplement contains data files and computer code, enabling the public to reproduce the results of the analysis described in the report titled “Thrifty Food Plan Cost Estimates for Alaska and Hawaii” published by USDA FNS in July 2023. The report is available at: https://www.fns.usda.gov/cnpp/tfp-akhi. The online supplement contains a user guide, which describes the contents of the online supplement in detail, provides a data dictionary, and outlines the methodology used in the analysis; a data file in CSV format, which contains the most detailed information on food price differentials between the mainland U.S. and Alaska and Hawaii derived from Circana (formerly Information Resources Inc) retail scanner data as could be released without disclosing proprietary information; SAS and R code, which use the provided data file to reproduce the results of the report; and an Excel spreadsheet containing the reproduced results from the SAS or R code. For technical inquiries, contact: FNS.FoodPlans@usda.gov. A short sketch for loading the data file follows the resource list below. Resources in this dataset:
Resource title: Thrifty Food Plan Cost Estimates for Alaska and Hawaii Online Supplement User Guide File name: TFPCostEstimatesForAlaskaAndHawaii-UserGuide.pdf Resource description: The online supplement user guide describes the contents of the online supplement in detail, provides a data dictionary, and outlines the methodology used in the analysis.
Resource title: Thrifty Food Plan Cost Estimates for Alaska and Hawaii Online Supplement Data File File name: TFPCostEstimatesforAlaskaandHawaii-OnlineSupplementDataFile.csv Resource description: The online supplement data file contains food price differentials between the mainland United States and Anchorage and Honolulu derived from Circana (formerly Information Resources Inc) retail scanner data. The data was aggregated to prevent disclosing proprietary information.
Resource title: Thrifty Food Plan Cost Estimates for Alaska and Hawaii Online Supplement R Code File name: TFPCostEstimatesforAlaskaandHawaii-OnlineSupplementRCode.R Resource description: The online supplement R code enables users to read in the online supplement data file and reproduce the results of the analysis as described in the Thrifty Food Plan Cost Estimates for Alaska and Hawaii report using the R programming language.
Resource title: Thrifty Food Plan Cost Estimates for Alaska and Hawaii Online Supplement SAS Code (zipped) File name: TFPCostEstimatesforAlaskaandHawaii-OnlineSupplementSASCode.zip Resource description: The online supplement SAS code enables users to read in the online supplement data file and reproduce the results of the analysis as described in the Thrifty Food Plan Cost Estimates for Alaska and Hawaii report using the SAS programming language. This SAS file is provided in zip format for compatibility with Ag Data Commons; users will need to unzip the file prior to its use.
Resource title: Thrifty Food Plan Cost Estimates for Alaska and Hawaii Online Supplement Reproduced Results File name: TFPCostEstimatesforAlaskaandHawaii-ReproducedResults.xlsx Resource description: The online supplement reproduced results are output from either the online supplement R or SAS code and contain the results of the analysis described in the Thrifty Food Plan Cost Estimates for Alaska and Hawaii report.
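A minimal way to load the online supplement data file for inspection, independent of the provided R and SAS code; only the published filename is taken from the resource list above, and no column names are assumed.

```python
import pandas as pd

prices = pd.read_csv("TFPCostEstimatesforAlaskaandHawaii-OnlineSupplementDataFile.csv")
print(prices.head())
print(prices.describe(include="all"))
```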
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Table of performance benchmarks.
https://www.wiseguyreports.com/pages/privacy-policy
| Report Attribute | Details |
|---|---|
| BASE YEAR | 2024 |
| HISTORICAL DATA | 2019 - 2023 |
| REGIONS COVERED | North America, Europe, APAC, South America, MEA |
| REPORT COVERAGE | Revenue Forecast, Competitive Landscape, Growth Factors, and Trends |
| MARKET SIZE 2024 | 935.9 (USD Million) |
| MARKET SIZE 2025 | 1023.0 (USD Million) |
| MARKET SIZE 2035 | 2500.0 (USD Million) |
| SEGMENTS COVERED | Application, Connector Type, Cable Length, End Use, Regional |
| COUNTRIES COVERED | US, Canada, Germany, UK, France, Russia, Italy, Spain, Rest of Europe, China, India, Japan, South Korea, Malaysia, Thailand, Indonesia, Rest of APAC, Brazil, Mexico, Argentina, Rest of South America, GCC, South Africa, Rest of MEA |
| KEY MARKET DYNAMICS | Increasing data transmission needs, Rising demand for high-performance computing, Growing adoption of data centers, Technological advancements in connectivity, Expansion of cloud services |
| MARKET FORECAST UNITS | USD Million |
| KEY COMPANIES PROFILED | Nexus Technologies, Lcom, Siemon, Systimax, TE Connectivity, Molex, C2G, 3M, Broadcom, StarTech, Hirschmann, Amphenol |
| MARKET FORECAST PERIOD | 2025 - 2035 |
| KEY MARKET OPPORTUNITIES | Rising data center demand, Increased adoption of SSDs, Expansion of cloud computing services, Growth in high-performance computing, Need for efficient data storage solutions |
| COMPOUND ANNUAL GROWTH RATE (CAGR) | 9.3% (2025 - 2035) |
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The SAS code (Supplementary File 1) and R program code (Supplementary File 2) are provided. For the analysis to proceed, this code requires an input data file (Supplementary Files 3-5) prepared in comma-separated values (CSV) format; the data can also be stored in other formats such as xlsx, txt, or xls. Economic values in the SAS code are entered manually in the code, whereas in the R code they are read from an Excel file (Supplementary File 6).
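A sketch of the input pattern just described, mirroring the R code's approach of keeping economic values in a spreadsheet; the filenames and reading library are assumptions, not the supplementary files' own code.

```python
import pandas as pd

# Placeholder filenames for the input records (CSV) and economic values (Excel).
records = pd.read_csv("input_data.csv")
economic_values = pd.read_excel("economic_values.xlsx")   # requires openpyxl

print(records.head())
print(economic_values.head())
```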