Merging (in R) the data published at https://www.data.gouv.fr/fr/datasets/ventes-de-pesticides-par-departement/ with two other sources of information associated with marketing authorisations (MAs): — uses: https://www.data.gouv.fr/fr/datasets/usages-des-produits-phytosanitaires/ — the "Biocontrol" status of the product, from document DGAL/SDQSPV/2020-784 published on 18/12/2020 at https://agriculture.gouv.fr/quest-ce-que-le-biocontrole
All the initial files (.csv transformed into .txt), the R code used to merge the data, and the various output files are collected in a zip archive.
NB:
1) "YASCUB" stands for {year, AMM, Substance_active, Classification, Usage, Statut_"BioControl"}; substances not on the DGAL/SDQSPV list are coded NA.
2) The biocontrol products file must be cleaned of the duplicates generated by marketing authorisations that lead to several trade names.
3) The BNVD_BioC_DY3 table and the output file BNVD_BioC_DY3.txt contain the fields {Code_Region,Region,Dept,Code_Dept,Anne,Usage,Classification,Type_BioC,Quantite_substance}
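As an illustration of the kind of merge performed here, a minimal R sketch; the file names, separator, and key column are illustrative assumptions, and the actual code and files are in the zip:

# read the sales, usages, and biocontrol tables (.csv exported as .txt)
ventes <- read.table("BNVD_ventes.txt", header = TRUE, sep = ";", stringsAsFactors = FALSE)
usages <- read.table("usages.txt", header = TRUE, sep = ";", stringsAsFactors = FALSE)
bioc   <- read.table("biocontrole.txt", header = TRUE, sep = ";", stringsAsFactors = FALSE)

# NB 2: drop duplicate biocontrol rows created when one marketing
# authorisation (AMM) covers several trade names
bioc <- bioc[!duplicated(bioc$AMM), ]

# left joins on the AMM number; per NB 1, substances absent from the
# DGAL/SDQSPV list end up with NA in the biocontrol status column
bnvd <- merge(ventes, usages, by = "AMM", all.x = TRUE)
bnvd_bioc <- merge(bnvd, bioc, by = "AMM", all.x = TRUE)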
analyze the current population survey (cps) annual social and economic supplement (asec) with r

the annual march cps-asec has been supplying the statistics for the census bureau's report on income, poverty, and health insurance coverage since 1948. wow. the us census bureau and the bureau of labor statistics (bls) tag-team on this one. until the american community survey (acs) hit the scene in the early aughts (2000s), the current population survey had the largest sample size of all the annual general demographic data sets outside of the decennial census - about two hundred thousand respondents. this provides enough sample to conduct state- and a few large metro area-level analyses. your sample size will vanish if you start investigating subgroups by state - consider pooling multiple years. county-level is a no-no.

despite the american community survey's larger size, the cps-asec contains many more variables related to employment, sources of income, and insurance - and can be trended back to harry truman's presidency. aside from questions specifically asked about an annual experience (like income), many of the questions in this march data set should be treated as point-in-time statistics. cps-asec generalizes to the united states non-institutional, non-active-duty military population.

the national bureau of economic research (nber) provides sas, spss, and stata importation scripts to create a rectangular file (rectangular data means only person-level records; household- and family-level information gets attached to each person). to import these files into r, the parse.SAScii function uses nber's sas code to determine how to import the fixed-width file, then RSQLite puts everything into a schnazzy database. you can try reading through the nber march 2012 sas importation code yourself, but it's a bit of a proc freak show.

this new github repository contains three scripts:

2005-2012 asec - download all microdata.R
download the fixed-width file containing household, family, and person records
import by separating this file into three tables, then merge 'em together at the person-level
download the fixed-width file containing the person-level replicate weights
merge the rectangular person-level file with the replicate weights, then store it in a sql database
create a new variable - one - in the data table

2012 asec - analysis examples.R
connect to the sql database created by the 'download all microdata' program
create the complex sample survey object, using the replicate weights
perform a boatload of analysis examples

replicate census estimates - 2011.R
connect to the sql database created by the 'download all microdata' program
create the complex sample survey object, using the replicate weights
match the sas output shown in the png file below

2011 asec replicate weight sas output.png
statistic and standard error generated from the replicate-weighted example sas script contained in this census-provided person replicate weights usage instructions document.

click here to view these three scripts

for more detail about the current population survey - annual social and economic supplement (cps-asec), visit:
the census bureau's current population survey page
the bureau of labor statistics' current population survey page
the current population survey's wikipedia article

notes: interviews are conducted in march about experiences during the previous year. the file labeled 2012 includes information (income, work experience, health insurance) pertaining to 2011. when you use the current population survey to talk about america, subtract a year from the data file name. as of the 2010 file (the interview focusing on america during 2009), the cps-asec contains exciting new medical out-of-pocket spending variables most useful for supplemental (medical spending-adjusted) poverty research.

confidential to sas, spss, stata, sudaan users: why are you still rubbing two sticks together after we've invented the butane lighter? time to transition to r. :D
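as a taste of what the download script automates, here's a minimal r sketch using the SAScii and RSQLite packages - the nber urls and table name are illustrative, not the exact ones in the repository:

# parse the nber sas importation script to recover the fixed-width layout
library(SAScii)
layout <- parse.SAScii("https://data.nber.org/data/progs/cps/cpsmar2012.sas")

# read the fixed-width microdata using that same sas script
asec <- read.SAScii(
    "https://data.nber.org/cps/cpsmar2012.zip",
    "https://data.nber.org/data/progs/cps/cpsmar2012.sas",
    zipped = TRUE
)

# stash the person-level table in a sqlite database
library(RSQLite)
db <- dbConnect(SQLite(), "asec.db")
dbWriteTable(db, "asec12", asec)
dbDisconnect(db)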
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset for the research paper "Anthropogenic Specular Interference in the Operational GOES-R Fire Product".
Large reflective structures like solar power plants and commercial greenhouses sometimes reflect sunlight directly into GOES-R sensors. These anthropogenic specular reflections, or "sparkles", cause commission errors in operational GOES-R ABI products like the Fire Detection and Characterization Algorithm (FDCA). Using the abi-sparkle library for Python (Dove-Robinson, 2023), we generated a dataset containing both detected anthropogenic specular reflection pixels and the coincident FDCA commission errors caused by them for the GOES-16 CONUS domain during the 2020 calendar year.
The dataset consists of two exported PostgreSQL tables: sparkle_pixels_g16_abi_conus_2020, which contains the detected anthropogenic specular reflection pixels at 500 m resolution, and fdca_commission_error_clusters_g16_abi_conus_2020, which contains clustered FDCA false alarm fire pixels caused by anthropogenic specular reflection at 2 km resolution. The FDCA pixels were only processed for fire mask codes 10-15 and 30-35; see Table 3.11 in Schmidt et al., 2013 for fire code definitions.
Each row in sparkle_pixels_g16_abi_conus_2020 is a detected specular reflection pixel in a GOES-16 CONUS image from the 2020 calendar year with a unique numeric ID sparkle_id and associated metadata from the detection algorithm abi-sparkle. The column sparkle_geom is a PostGIS geometry ST_Point object that can be used to plot the pixels on a map.
FDCA fire pixels at 2 km resolution were clustered based on their connectivity in a 3x3 pixel kernel and assigned a UUID fire_cluster_id in the table fdca_commission_error_clusters_g16_abi_conus_2020. Only the fire clusters that overlapped with sparkle pixels in time and space were retained in the table. In this way, each row of fdca_commission_error_clusters_g16_abi_conus_2020 is a unique cluster of errant FDCA fire pixels caused by anthropogenic specular reflection in every available scan start time for the GOES-16 CONUS domain in 2020. Every fire cluster centroid has a PostGIS geometry object fire_cluster_centroid_geom that can be used to plot the errant fire pixel clusters on a map.
The two tables are related through the column sparkle_ids in fdca_commission_error_clusters_g16_abi_conus_2020, which is an array of overlapping sparkle IDs from the sparkle_pixels_g16_abi_conus_2020 table. The combined dataset can therefore be generated with a simple SQL INNER JOIN:
SELECT * FROM fdca_commission_error_clusters_g16_abi_conus_2020 fcecgac
INNER JOIN sparkle_pixels_g16_abi_conus_2020 spgac ON spgac.sparkle_id = ANY(fcecgac.sparkle_ids);
The tables can be imported into a PostgreSQL database version 12 or newer with PostGIS extensions installed. For example, to import the tables into a database in a Linux environment, run the following commands:
gunzip -c sparkle_pixels_g16_abi_conus_2020.sql.gz | psql -d your_database_name
gunzip -c fdca_commission_error_clusters_g16_abi_conus_2020.sql.gz | psql -d your_database_name
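Once imported, the tables can also be queried from R; a minimal sketch assuming the DBI and RPostgres packages and that the centroid geometries are stored in geographic (lon/lat) coordinates:

library(DBI)
# connect to the database the tables were imported into
con <- dbConnect(RPostgres::Postgres(), dbname = "your_database_name")

# pull the errant fire cluster centroids as lon/lat for plotting
clusters <- dbGetQuery(con, "
  SELECT fire_cluster_id,
         ST_X(fire_cluster_centroid_geom) AS lon,
         ST_Y(fire_cluster_centroid_geom) AS lat
  FROM   fdca_commission_error_clusters_g16_abi_conus_2020;")

plot(clusters$lon, clusters$lat, pch = ".")   # quick-look map of the false alarms
dbDisconnect(con)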
We provide instructions, code, and datasets for replicating the article by Kim, Lee and McCulloch (2024), "A Topic-based Segmentation Model for Identifying Segment-Level Drivers of Star Ratings from Unstructured Text Reviews." This repository provides a user-friendly R package for researchers or practitioners to apply the topic-based segmentation model with unstructured texts (latent class regression with group variable selection) to their own datasets. First, we provide R code to replicate the illustrative simulation study: see file 1. Second, we provide the user-friendly R package with a very simple example to help apply the model to real-world datasets: see file 2, Package_MixtureRegression_GroupVariableSelection.R and Dendrogram.R. Third, we provide a set of codes and instructions to replicate the empirical studies of customer-level and restaurant-level segmentation with Yelp reviews data: see files 3-a, 3-b, 4-a, 4-b. Note that, owing to Yelp's dataset terms of use and data-size restrictions, we instead provide a link to download the same Yelp datasets (https://www.kaggle.com/datasets/yelp-dataset/yelp-dataset/versions/6). Fourth, we provide a set of codes and datasets to replicate the empirical study with professor ratings reviews data: see file 5. Please see further details in the description text and comments of each file.

[A guide on how to use the code to reproduce each study in the paper]

1. Full codes for replicating Illustrative simulation study.txt [see Table 2 and Figure 2 in main text]: R source code to replicate the illustrative simulation study. Please run it from beginning to end in R. In addition to estimated coefficients (posterior means of coefficients), indicators of variable selection, and segment memberships, you will get the dendrograms of selected groups of variables in Figure 2. Computing time is approximately 20 to 30 minutes.

3-a. Preprocessing raw Yelp Reviews for Customer-level Segmentation.txt: Code for preprocessing the downloaded unstructured Yelp review data and preparing the DV and IV matrices for the customer-level segmentation study.

3-b. Instruction for replicating Customer-level Segmentation analysis.txt [see Table 10 in main text; Tables F-1, F-2, and F-3 and Figure F-1 in Web Appendix]: Code for replicating the customer-level segmentation study with Yelp data. You will get estimated coefficients (posterior means of coefficients), indicators of variable selection, and segment memberships. Computing time is approximately 3 to 4 hours.

4-a. Preprocessing raw Yelp reviews_Restaruant Segmentation (1).txt: R code for preprocessing the downloaded unstructured Yelp data and preparing the DV and IV matrices for the restaurant-level segmentation study.

4-b. Instructions for replicating restaurant-level segmentation analysis.txt [see Tables 5, 6 and 7 in main text; Tables E-4 and E-5 and Figure H-1 in Web Appendix]: Code for replicating the restaurant-level segmentation study with Yelp data. You will get estimated coefficients (posterior means of coefficients), indicators of variable selection, and segment memberships. Computing time is approximately 10 to 12 hours.

[Guidelines for running benchmark models in Table 6]

Unsupervised topic model: 'topicmodels' package in R. After determining the number of topics (e.g., with the 'ldatuning' R package), run the 'LDA' function in the 'topicmodels' package, then compute topic probabilities per restaurant (with the 'posterior' function in the package), which can be used as predictors. Then conduct prediction with regression (see the R sketch at the end of this entry).

Hierarchical topic model (HDP): 'gensimr' R package; use the 'model_hdp' function to identify topics (see https://radimrehurek.com/gensim/models/hdpmodel.html or https://gensimr.news-r.org/).

Supervised topic model: 'lda' R package; 'slda.em' function for training and 'slda.predict' for prediction.

Aggregate regression: the default 'lm' function in R.

Latent class regression without variable selection: 'flexmix' function in the 'flexmix' R package. Run flexmix with a given number of segments (e.g., 3 segments in this study). Then, with the estimated coefficients and memberships, predict the dependent variable for each segment.

Latent class regression with variable selection: 'Unconstraind_Bayes_Mixture' function in Kim, Fong and DeSarbo's (2012) package. Run the Kim et al. (2012) model with a given number of segments (e.g., 3 segments in this study). Then, with the estimated coefficients and memberships, predict the dependent variable for each segment. The same R package ('KimFongDeSarbo2012.zip') can be downloaded at: https://sites.google.com/scarletmail.rutgers.edu/r-code-packages/home

5. Instructions for replicating Professor ratings review study.txt [see Tables G-1, G-2, G-4 and G-5, and Figures G-1 and H-2 in Web Appendix]: Code to replicate the professor ratings reviews study. Computing time is approximately 10 hours.

[A list of the versions of R, packages, and computer...
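For the unsupervised-topic-model benchmark, a minimal R sketch; the document-term matrix 'dtm', the rating vector 'rating', and the number of topics are illustrative assumptions:

library(topicmodels)

# fit LDA after choosing the number of topics (e.g., via 'ldatuning')
lda_fit <- LDA(dtm, k = 10, control = list(seed = 1))

# per-document (here, per-restaurant) topic probabilities as predictors
topic_probs <- posterior(lda_fit)$topics

# aggregate regression of star ratings on topic probabilities
bench <- data.frame(rating = rating, topic_probs)
summary(lm(rating ~ ., data = bench))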
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Wide-field Infrared Survey Explorer (WISE) Catalog of Periodic Variable Stars
Xiaodian Chen, Shu Wang, Licai Deng, Richard de Grijs and Ming Yang
We have compiled the first all-sky mid-infrared variable-star catalog based on Wide-field Infrared Survey Explorer (WISE) five-year survey data. Requiring more than 100 detections for a given object, 50,282 carefully and robustly selected periodic variables are discovered, of which 34,769 (69%) are new. Most are located in the Galactic plane and near the equatorial poles. A method to classify variables based on their mid-infrared light curves is established using known variable types in the General Catalog of Variable Stars. Careful classification of the new variables results in a tally of 21,427 new EW-type eclipsing binaries, 5654 EA-type eclipsing binaries, 1312 Cepheids, and 1231 RR Lyraes. By comparison with known variables available in the literature, we estimate that the misclassification rate is 5% and 10% for short- and long-period variables, respectively. A detailed comparison of the types, periods, and amplitudes with variables in the Catalina catalog shows that the independently obtained classification parameters are in excellent agreement. This enlarged sample of variable stars will not only be helpful for studying Galactic structure and extinction properties, but can also be used to constrain stellar evolution theory and as potential candidates for the James Webb Space Telescope.
These supplementary materials contain ALLWISE and NEOWISE-R single-exposure photometry tables for the variables listed in Tables 2 and 6 of the paper, and light-curve figures for the 50,282 periodic variables in Table 2.
SourceID is the identifier that joins these attachments to Tables 2 and 6.
Example: for the variable star WISEJ094812.4+093448 in Table 2, SourceID=170 is used to look up the corresponding single-exposure photometry in both 'allwise12.txt' and 'neowise12.txt'.
File Description:
allwise12.txt Single exposure photometry data of variables from ALLWISE.
Bytes Format Units Label Explanations
-----------------------------------------------------------------------------------------
1- 8 I5 --- SourceID Internal source identifier
10- 20 F11.7 deg RAdeg Right Ascension in decimal degrees (J2000)
22- 32 F11.7 deg DEdeg Declination in decimal degrees (J2000)
34- 47 F14.8 day MJD Modified Julian date of the mid-point of the observation
49- 54 F6.3 mag W1mag Single exposure WISE W1 (3.35 micron) band magnitude
56- 63 F6.3 mag eW1mag W1 band uncertainty
65- 77 F6.3 mag W2mag Single exposure WISE W2 (4.6 micron) band magnitude
79- 86 F6.3 mag eW2mag W2 band uncertainty
-----------------------------------------------------------------------------------------
neowise12.txt Single exposure photometry data of variables from NEOWISE-R.
Bytes Format Units Label Explanations
-----------------------------------------------------------------------------------------
1- 8 I5 --- SourceID Internal source identifier
10- 21 F11.7 deg RAdeg Right Ascension in decimal degrees (J2000)
23- 34 F11.7 deg DEdeg Declination in decimal degrees (J2000)
36- 44 F6.3 mag W1mag Single exposure WISE W1 (3.35 micron) band magnitude
46- 54 F6.3 mag eW1mag W1 band uncertainty
56- 64 F6.3 mag W2mag Single exposure WISE W2 (4.6 micron) band magnitude
66- 74 F6.3 mag eW2mag W2 band uncertainty
76- 90 F14.8 day MJD Modified Julian date of the mid-point of the observation
-----------------------------------------------------------------------------------------
figure0.zip -- figure23.zip Full light-curve figures of the 50,282 WISE variables, divided into 24 packages in order of Right Ascension.
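A minimal R sketch for reading 'allwise12.txt' with the byte layout above; the widths follow the table, with negative entries skipping the separator columns:

# columns: SourceID, RAdeg, DEdeg, MJD, W1mag, eW1mag, W2mag, eW2mag
allwise <- read.fwf("allwise12.txt",
                    widths = c(8, -1, 11, -1, 11, -1, 14, -1, 6, -1, 8, -1, 13, -1, 8),
                    col.names = c("SourceID", "RAdeg", "DEdeg", "MJD",
                                  "W1mag", "eW1mag", "W2mag", "eW2mag"))

# light curve of WISEJ094812.4+093448 (SourceID = 170 in Table 2)
lc <- subset(allwise, SourceID == 170)
plot(lc$MJD, lc$W1mag, ylim = rev(range(lc$W1mag)))   # magnitudes: bright is up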
The "SPM LTDS Appendix 2 Transformer Data (Table 2) 33 to 11kV" data table provides the parameters for each group of transformers on the SP Manweb (SPM) system.Click here to access our full Long Term Development Statements for both SP Distribution (SPD) & SP Manweb (SPM).The table gives the following information:Node 1 & 2 per substation groupPositive sequence impedance R & X per substation groupZero sequence reactance per substation groupMinimum and maximum tap percentage per substation groupTransformer ratingReverse power capabilityFor additional information on column definitions, please click on the Dataset schema link below. DisclaimerWhilst all reasonable care has been taken in the preparation of this data, SP Energy Networks does not accept any responsibility or liability for the accuracy or completeness of this data, and is not liable for any loss that may be attributed to the use of this data. For the avoidance of doubt, this data should not be used for safety critical purposes without the use of appropriate safety checks and services e.g. LineSearchBeforeUDig etc. Please raise any potential issues with the data which you have received via the feedback form available at the Feedback tab above (must be logged in to see this). Data TriageAs part of our commitment to enhancing the transparency, and accessibility of the data we share, we publish the results of our Data Triage process.Our Data Triage documentation includes our Risk Assessments; detailing any controls we have implemented to prevent exposure of sensitive information. Click here to access the Data Triage documentation for the Long Term Development Statement dataset.To access our full suite of Data Triage documentation, visit the SP Energy Networks Data & Information page.Download dataset metadata (JSON)
https://spdx.org/licenses/etalab-2.0.html
The "slakestable" package provides quick formatting of the raw data produced by the "Slakes" smartphone app (Fajardo et al., 2016). The "tablecourbe" function creates a single table containing the coefficients a, b, and c from the Gompertz fit of the raw data, together with the SI600 for each aggregate. The data can also be aggregated by site/location with a mean or median, either before or after the Gompertz fit, in which case two independent tables are created; they can be joined with the "jointurefeuilles" function.
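This is not the slakestable API, but a generic R illustration of the Gompertz fit that "tablecourbe" performs, assuming a data frame 'raw' with columns 'time' (in seconds) and 'si' (slaking index):

# fit the three-parameter Gompertz curve si(t) = a * exp(-b * exp(-c * t))
fit <- nls(si ~ a * exp(-b * exp(-c * time)),
           data = raw,
           start = list(a = 1, b = 5, c = 0.01))

coef(fit)                                         # the a, b, c coefficients
predict(fit, newdata = data.frame(time = 600))    # SI600: fitted index at 600 s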
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The number of known periodic variables has grown rapidly in recent years. Thanks to its large field of view and faint limiting magnitude, the Zwicky Transient Facility (ZTF) offers a unique opportunity to detect variable stars in the northern sky. Here, we exploit ZTF Data Release 2 (DR2) to search for and classify variables down to r ∼ 20.6 mag. We classify 781,602 periodic variables into 11 main types using an improved classification method. Comparison with previously published catalogs shows that 621,702 objects (79.5%) are newly discovered or newly classified, including ∼700 Cepheids, ∼5000 RR Lyrae stars, ∼15,000 δ Scuti variables, ∼350,000 eclipsing binaries, ∼100,000 long-period variables, and ∼150,000 rotational variables. The typical misclassification rate and period accuracy are on the order of 2% and 99%, respectively. 74% of our variables are located at Galactic latitudes |b| < 10°. This large sample of Cepheids, RR Lyrae stars, δ Scuti stars, and contact (EW-type) eclipsing binaries is helpful for investigating the Galaxy's disk structure and evolution with improved completeness, areal coverage, and age resolution. Specifically, the northern warp and the disk's edge at distances of 15-20 kpc are significantly better covered than previously. Among rotational variables, RS Canum Venaticorum and BY Draconis-type variables can be separated easily. Our knowledge of stellar chromospheric activity would benefit greatly from a statistical analysis of these types of variables.
These supplementary materials contain g- and r-band single-exposure photometry for the 781,602 periodic variables in Table 2 of the paper.
SourceID is the internal source identifier that joins these attachments to Table 2. Example: for the variable star ZTFJ000000.19+320847.2 in Table 2, SourceID=3 is used to look up the corresponding single-exposure photometry in both 'ztf2g' and 'ztf2r'.
File Description:
ztf2g g band single exposure photometry data of variables from ZTF2.
SourceID, RAdeg, DEdeg, HJD, gmag, e_gmag, g_flag
ztf2r r band single exposure photometry data of variables from ZTF2.
SourceID, RAdeg, DEdeg, HJD, rmag, e_rmag, r_flag
Table2 ZTF Variables Catalog.
Table6 ZTF Suspected Variables Catalog.
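A minimal R sketch for extracting one light curve, assuming the photometry files are whitespace-delimited with the columns listed above:

ztf_g <- read.table("ztf2g", col.names = c("SourceID", "RAdeg", "DEdeg",
                                           "HJD", "gmag", "e_gmag", "g_flag"))

# light curve of ZTFJ000000.19+320847.2 (SourceID = 3 in Table 2)
lc <- subset(ztf_g, SourceID == 3)
plot(lc$HJD, lc$gmag, ylim = rev(range(lc$gmag)))   # magnitudes: bright is up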
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
(a) Species model results with transformation (L = linear, Q = quadratic, C = cubic, Qu = quartic) and coefficient sign for included model variables. Predictive models used a two-part model combining logistic and negative binomial regressions for krill (n = 2991) and a negative binomial regression for blue and humpback whales. Bold variables represent the interaction of that variable with year. P-values: ‡ < 0.06; * < 0.05; ** < 0.01; *** < 0.0001. Model assessment metrics for goodness of fit and predictive ability are also reported. RMSE = root mean squared error; MAE = mean absolute error. Climate indices are abbreviated as in Table 1.
Attribution 2.5 (CC BY 2.5) https://creativecommons.org/licenses/by/2.5/
License information was derived automatically
The Galilee Basin Operators' Forum (GBOF) is a group of petroleum companies exploring the Galilee Basin for commercial quantities of hydrocarbons. Exploration activities include the search for conventional hydrocarbons, and increasingly non-conventional hydrocarbon sources such as coal seam gas (CSG). The CSG target is the Permian coal measures as shown in Figure 1.1.

Understanding and protecting groundwater is a key issue and community concern. As part of the early exploration activities in the Galilee Basin, the GBOF companies have initiated this study to assist in developing a regional and consistent subsurface description, and to document the existing data for the groundwater systems in the Galilee Basin study area. RPS, as an independent company, was contracted to perform the study and prepare a report.

This initial study should not be confused with a "baseline assessment" or "underground water impact report", which are specific requirements under the Water Act 2000, triggered once production testing is underway or production has commenced. This study gathers and assembles all the base historical data which may be used in further studies. For the Galilee Basin study area, this investigation is specifically designed to:

Review stratigraphy and identify possible aquifers beneath the GBOF member company tenures;
Delineate aquifers that warrant further monitoring; and
Obtain and tabulate current Department of Environment and Resource Management Groundwater Database (DERM GWDB; now the Department of Environment and Heritage Protection, EHP) registered bore data, including:
» Water bore location and summary statistics;
» Groundwater levels and artesian flow data; and
» Groundwater quality.

Data sources for this report include:

Groundwater data available in the DERM GWDB;
Petroleum exploration wells recorded in Queensland Petroleum Exploration Data (QPED);
The DERM groundwater data logger/tipping bucket rain gauge program;
The Springs of Queensland Dataset (version 4.0) held by DERM;
PressurePlot Version 2, developed by CSIRO and linked to a Pressure-Hydrodynamics database; and
Direct communication with GBOF members.

Data was sourced in January 2011. Since then there has been considerable additional drilling by GBOF members, which is not incorporated in this report. All data has been used by RPS as provided, without independent investigations to validate the data. It is recognised that historical data may be subject to inaccuracies; however, as work progresses in the region, an improvement in data integrity should be realised.
Tables as taken from Appendices B to F of the Galilee Basin: Report on the Hydrogeological Investigations, prepared by RPS Australia Pty Ltd for RLMS. PR102603-1: Rev 1 / December 2012.

Spatial datasets were created for each appendix table using supplied coordinate values (MGA Zone 54, MGA Zone 55, GDA94 geographics) where available, or by spatially referencing (spatial join) the NGIS QLD core bores dataset via the unique DERM Registered Bore Number attribute field.
Geoscience Australia (XXXX) RPS Galilee Basin: Report on the Hydrogeological Investigations - Appendix tables B to F (Spatial). Bioregional Assessment Derived Dataset. Viewed 16 November 2016, http://data.bioregionalassessments.gov.au/dataset/d3d92616-c0b8-4cfb-9eb5-4031915e5e41.

* Derived From National Groundwater Information System, Queensland Core dataset (superseded)
* Derived From RPS Galilee Hydrogeological Investigations - Appendix tables B to F (original)
Table 2 | Extrapolated tree species hyperdominance results for African, Amazonian, and Southeast Asian tropical forests at the regional scale

Region           Number of hyperdominants   Total species              Hyperdominant percentage
Africa           104 [101, 107]             4,638 [4,511, 4,764]       2.23
Amazonia         299 [295, 304]             13,826 [13,615, 14,036]    2.16
Southeast Asia   278 [268, 289]             11,963 [11,451, 12,475]    2.32
Total (a)        681 [664, 700]             30,427 [29,577, 31,275]    2.24

(a) Calculated as the sum of the number of hyperdominants and total species across the three major tropical forest regions, with the hyperdominance percentage derived therefrom.
Prediction intervals (in brackets) combine uncertainty from the standard error of predicted means and the residual s.d. of the regression of the bias-correction fit.
Attribution 3.0 (CC BY 3.0) https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
The 2015-16 Budget is officially available at budget.gov.au as the authoritative source of Budget Papers and Portfolio Budget Statement (PBS) documents. This dataset is a collection of data sources from the 2015-16 Budget, including:

* The Portfolio Budget Statement Excel spreadsheets – available after PBSs are tabled in the Senate (~8.30pm Budget night).
* A machine-readable CSV of all PBS Excel spreadsheet line items – available after PBSs are tabled in the Senate and translated (~8.30pm Budget night).

Data from the 2015-16 Budget are provided to assist those who wish to analyse, visualise and programmatically access the 2015-16 Budget.

Data users should refer to footnotes and memoranda in the original files, as these are not usually captured in machine-readable CSVs.

We welcome your feedback and comments below.

This dataset was prepared by the Department of Finance and the Department of the Treasury.
The PBS Excel files published should include the following financial tables with headings and footnotes. Only the line item data (Table 2.2) is available in CSV at this stage. Much of the other data is also available in Budget Papers 1 and 4 in aggregate form:

* Table 1.1: Entity Resource Statement;
* Table 1.2: Entity 2015-16 Budget Measures;
* Table 2.1: Budgeted Expenses for Outcome X;
* Table 2.2: Programme Expenses and Programme Components;
* Table 3.1.1: Movement of Administered Funds Between Years;
* Table 3.1.2: Estimates of Special Account Flows and Balances;
* Table 3.1.3: Australian Government Indigenous Expenditure (AGIE);
* Tables 3.2.1 to 3.2.6: Departmental Budgeted Financial Statements; and
* Tables 3.2.7 to 3.2.11: Administered Budgeted Financial Statements.

Please note, total expenses reported in the CSV file '2015-16 PBS line items dataset' were prepared from individual entity programme expense tables. Totalling these figures does not produce the total expense figure in 'Table 1: Estimates of General Government Expenses' (Statement 6, Budget Paper 1).

Differences relate to:

1. Intra-entity charging for services, which is eliminated for the reporting of general government financial statements;
2. Entity expenses that involve revaluation of assets and liabilities, which are reported as other economic flows in general government financial statements; and
3. Additional entities' expenses included in general government sector expenses (e.g. Australian Strategic Policy Institute Limited and other entities), noting that only entities that are directly government funded are required to prepare a PBS.

The original PBS Excel files and published documents include sub-totals and totals by entity and appropriation type which are not included in the line item CSV. These can be calculated programmatically (a short R example follows the portfolio list below). Where modifications are identified they will be updated as required.

If a corrigendum to an entity's PBS is issued after budget night, tables will be updated as necessary.

The structure of the line item CSV is:

* Portfolio
* Department/Entity
* Outcome
* Program
* Expense type
* Appropriation type
* Description
* 2014-15
* 2015-16
* 2016-17
* 2017-18
* 2018-19
* Source document
* Source table
* URL

The data transformation is expected to be complete by midday 13 May. We may put up an incomplete CSV which will continue to be updated as additional PBSs are transformed into data form.

The following Portfolios are included in the line item CSV:

* Agriculture
* Attorney-General's
* Communications
* Defence
* Education and Training
* Employment
* Environment
* Finance
* Foreign Affairs and Trade
* Health
* Human Services
* Immigration and Border Protection
* Industry and Science
* Infrastructure and Regional Development
* Parliamentary Departments
* Prime Minister and Cabinet
* Social Services
* Treasury
* Veterans' Affairs
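A minimal R sketch of working with the line-item CSV, assuming the column layout above; the file name is illustrative:

pbs <- read.csv("2015-16-pbs-line-items.csv", check.names = FALSE)

# total 2015-16 programme expenses by portfolio
# (sub-totals are excluded from the CSV, so they can be computed like this)
aggregate(`2015-16` ~ Portfolio, data = pbs, FUN = sum)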
We have made a number of data tables from the Budget Papers available in Excel and CSV formats.

Below is the list of the tables published and whether we've translated them into CSV form this year:

* Budget Paper 1: Appendix A1 - Estimates of expenses by function and sub-function
* Budget Paper 1: Overview - Appendix C Major Initiatives
* Budget Paper 1: Overview - Appendix D Major Savings
* Budget Paper 1: Statement 3 - Table 7: Reconciliation of underlying cash balance estimates
* Budget Paper 1: Statement 4 - Table 1: Australian Government general government receipts
* Budget Paper 1: Statement 4 - Table 7: Australian Government general government (cash) receipts
* Budget Paper 1: Statement 4 - Table 10: Reconciliation of 2015-16 general government (accrual) revenue
* Budget Paper 1: Statement 4 - Supplementary table 3: Australian Government (accrual) revenue
* Budget Paper 1: Statement 10 - Table 1: Australian Government general government sector receipts, payments, net Future Fund earnings and underlying cash balance
* Budget Paper 1: Statement 10 - Table 4: Australian Government general government sector taxation receipts, non-taxation receipts and total receipts
* Budget Paper 1: Statement 10 - Table 5: Australian Government general government sector net debt and net interest payments
* Budget Paper 4: Table 1.1 - Agency Resourcing
* Budget Paper 4: Table 1.2 - Special Appropriations
* Budget Paper 4: Table 1.3 - Special Accounts
* Budget Paper 4: Table 2.2 - Staffing Tables
* Budget Paper 4: Table 3.1 - Departmental Expenses
* Budget Paper 4: Table 3.2 - Net Capital Investment
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
[COM_block] Center of mass (COM) sway data for each subject. The baseline data were collected during stepping on the treadmill prior to connecting the cables for the force-field perturbations, and then after connecting the cables with low tension in them (6 N). COM sway is shown for stabilizing and destabilizing forces at two levels, 2.5% and 5% body weight (BW), when participants were holding (Hold) or not holding (No Hold) the handle at the front of the treadmill. Data are shown for trials before (Pre), during (Pull) and after (Post) the force field block.

[SW_block] Step width (SW) data for each subject. The baseline data were collected during stepping on the treadmill prior to connecting the cables for the force-field perturbations, and then after connecting the cables with low tension in them (6 N). SW is shown for stabilizing and destabilizing forces at two levels, 2.5% and 5% body weight (BW), when participants were holding (Hold) or not holding (No Hold) the handle at the front of the treadmill. Data are shown for trials before (Pre), during (Pull) and after (Post) the force field block.

[COM_catch] Center of mass (COM) sway data for catch trials for each subject. COM sway is shown for catch trials in stabilizing and destabilizing forces at two levels, 2.5% and 5% body weight (BW), when participants were holding (Hold) or not holding (No Hold) the handle at the front of the treadmill. Data are shown for catch trials before (Pre; forces applied), during (Pull; no forces) and after (Post; forces applied) the force field block.

[SW_catch] Step width (SW) data for catch trials for each subject. SW is shown for catch trials in stabilizing and destabilizing forces at two levels, 2.5% and 5% body weight (BW), when participants were holding (Hold) or not holding (No Hold) a handle at the front of the treadmill. Data are shown for catch trials before (Pre; forces applied), during (Pull; no forces) and after (Post; forces applied) the force field block.

(XLSX)
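A minimal R sketch for loading one of these data blocks, assuming the readxl package and that the bracketed labels above are the sheet names in the XLSX file; the file name is illustrative:

library(readxl)
com_block <- read_excel("COM_sway_data.xlsx", sheet = "COM_block")
head(com_block)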
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The data table for the EoS DS(CMF)-2 of the family Cold Neutron Star EoS with 1 parameter can be found via this link, compose.obspm.fr/eos/181, together with additional information on the EoS.
Please note that the authors of the equation of state are given in the references below and in the CompOSE database (compose.obspm.fr/eos/181). If you use this entry, please cite the original references together with the link to the table. We are currently working on rectifying the automatic Zenodo citation with the author information for the equation of state.
The online service CompOSE provides data tables for different state-of-the-art equations of state (EoS), ready for further use in astrophysical applications, nuclear physics and beyond.
CompOSE would not be possible without the financial and organizational support of a large number of institutions and individual contributors. In particular, we gratefully acknowledge support by:
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The data table for the EoS NSXM(PK1r) cold matter of the family Cold Matter EoS with 2 parameters can be found via this link, compose.obspm.fr/eos/342, together with additional information on the EoS.
Please note that the authors of the equation of state are given in the references below and in the CompOSE database (compose.obspm.fr/eos/342). If you use this entry, please cite the original references together with the link to the table. We are currently working on rectifying the automatic Zenodo citation with the author information for the equation of state.
The online service CompOSE provides data tables for different state-of-the-art equations of state (EoS), ready for further use in astrophysical applications, nuclear physics and beyond.
CompOSE would not be possible without the financial and organizational support of a large number of institutions and individual contributors. In particular, we gratefully acknowledge support by:
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The data table for the EoS NSXM(TW99) of the family Cold Matter EoS with 2 parameters can be found via this link, compose.obspm.fr/eos/345, together with additional information on the EoS.
Please note that the authors of the equation of state are given in the references below and in the CompOSE database (compose.obspm.fr/eos/345). If you use this entry, please cite the original references together with the link to the table. We are currently working on rectifying the automatic Zenodo citation with the author information for the equation of state.
The online service CompOSE provides data tables for different state-of-the-art equations of state (EoS), ready for further use in astrophysical applications, nuclear physics and beyond.
CompOSE would not be possible without the financial and organizational support of a large number of institutions and individual contributors. In particular, we gratefully acknowledge support by:
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The data table for the EoS HS(FSG) neutron matter (no electrons) of the family Neutron Matter EoS with 2 parameters can be found via this link, compose.obspm.fr/eos/3, together with additional information on the EoS.
Please note that the authors of the equation of state are given in the references below and in the CompOSE database (compose.obspm.fr/eos/3). If you use this entry, please cite the original references together with the link to the table. We are currently working on rectifying the automatic Zenodo citation with the author information for the equation of state.
The online service CompOSE provides data tables for different state-of-the-art equations of state (EoS), ready for further use in astrophysical applications, nuclear physics and beyond.
CompOSE would not be possible without the financial and organizational support of a large number of institutions and individual contributors. In particular, we gratefully acknowledge support by:
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The data table for the EoS SHT(NL3) (with electrons) of the family Neutron Matter EoS with 2 parameters can be found via this link, compose.obspm.fr/eos/42, together with additional information on the EoS.
Please note that the authors of the equation of state are given in the references below and in the CompOSE database (compose.obspm.fr/eos/42). If you use this entry, please cite the original references together with the link to the table. We are currently working on rectifying the automatic Zenodo citation with the author information for the equation of state.
The online service CompOSE provides data tables for different state-of-the-art equations of state (EoS), ready for further use in astrophysical applications, nuclear physics and beyond.
CompOSE would not be possible without the financial and organizational support of a large number of institutions and individual contributors. In particular, we gratefully acknowledge support by:
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The data table for the EoS SFH(SFHx) neutron matter (no electrons) of the family Neutron Matter EoS with 2 parameters can be found via this link, compose.obspm.fr/eos/15, together with additional information on the EoS.
Please note that the authors of the equation of state are given in the references below and in the CompOSE database (compose.obspm.fr/eos/15). If you use this entry, please cite the original references together with the link to the table. We are currently working on rectifying the automatic Zenodo citation with the author information for the equation of state.
The online service CompOSE provides data tables for different state-of-the-art equations of state (EoS), ready for further use in astrophysical applications, nuclear physics and beyond.
CompOSE would not be possible without the financial and organizational support of a large number of institutions and individual contributors. In particular, we gratefully acknowledge support by: