13 datasets found
  1. Quantitative Research Methods and Data Analysis Workshop 2020

    • unisa.figshare.com
    pdf
    Updated Jun 12, 2025
    Cite
    Tracy Probert; Maxine Schaefer; Anneke Carien Wilsenach (2025). Quantitative Research Methods and Data Analysis Workshop 2020 [Dataset]. http://doi.org/10.25399/UnisaData.12581483.v1
    Explore at:
    Available download formats: pdf
    Dataset updated
    Jun 12, 2025
    Dataset provided by
    University of South Africa
    Authors
    Tracy Probert; Maxine Schaefer; Anneke Carien Wilsenach
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    We include the course syllabus used to teach quantitative research design and analysis methods to graduate Linguistics students using a blended teaching and learning approach. The blended course took place over two weeks and builds on a face-to-face course presented over two days in 2019. Students worked through the topics in preparation for a live interactive video session each Friday to go through the activities. Additional communication took place on Slack for two hours each week. A survey was conducted at the start and end of the course to ascertain participants' perceptions of the usefulness of the course. The links to online elements and the evaluations have been removed from the uploaded course guide.

    Participants who complete this workshop will be able to:
    - outline the steps and decisions involved in quantitative data analysis of linguistic data
    - explain common statistical terminology (sample, mean, standard deviation, correlation, nominal, ordinal and scale data)
    - perform common statistical tests using jamovi (e.g. t-test, correlation, ANOVA, regression)
    - interpret and report common statistical tests
    - describe and choose from the various graphing options used to display data
    - use jamovi to perform common statistical tests and graph results

    Evaluation

    Participants who complete the course will use these skills and knowledge to complete the following activities for evaluation:
    - analyse the data for a project and/or assignment (in part or in whole)
    - plan the results section of an Honours research project (where applicable)

    Feedback and suggestions can be directed to M Schaefer, schaemn@unisa.ac.za
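The tests listed above (t-test, correlation, ANOVA, regression) are run in jamovi's point-and-click interface during the workshop; as a rough illustration only, the same statistics can be computed in Python with scipy. The data below are invented for this sketch, not from the course.

```python
# Toy illustration of the workshop's core tests; values are invented.
import statistics
from scipy import stats

# Hypothetical reaction-time samples (ms) for two groups of learners.
group_a = [512, 498, 530, 505, 521, 490, 515]
group_b = [545, 560, 538, 552, 549, 566, 541]

mean_a = statistics.mean(group_a)   # sample mean
sd_a = statistics.stdev(group_a)    # sample standard deviation

# Independent-samples t-test comparing the two group means.
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# Pearson correlation between two paired variables (invented pairing).
hours_study = [1, 2, 3, 4, 5, 6, 7]
r, p_corr = stats.pearsonr(hours_study, group_a)

print(f"mean={mean_a:.1f}, sd={sd_a:.1f}, t={t_stat:.2f}, p={p_value:.4f}")
```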

  2. SIA23 - Nominal Median and Nominal Mean Income Measures by National Income...

    • data.wu.ac.at
    json-stat, px
    Updated Mar 5, 2018
    Cite
    Central Statistics Office (2018). SIA23 - Nominal Median and Nominal Mean Income Measures by National Income Definition, Year and Statistic [Dataset]. https://data.wu.ac.at/schema/data_gov_ie/NzE3MThjMDktMTc2MS00YWFmLWI1MTUtMzQyMWM2MDU4OWRh
    Explore at:
    Available download formats: px, json-stat
    Dataset updated
    Mar 5, 2018
    Dataset provided by
    Central Statistics Office
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Nominal Median and Nominal Mean Income Measures by National Income Definition, Year and Statistic

    View data using web pages

    Download .px file (Software required)

  3. Integrated Global Radiosonde Archive (IGRA) - Monthly Means (Version...

    • datasets.ai
    • access.earthdata.nasa.gov
    • +4more
    0, 33
    Updated Aug 27, 2024
    + more versions
    Cite
    National Oceanic and Atmospheric Administration, Department of Commerce (2024). Integrated Global Radiosonde Archive (IGRA) - Monthly Means (Version Superseded) [Dataset]. https://datasets.ai/datasets/integrated-global-radiosonde-archive-igra-monthly-means-version-superseded2
    Explore at:
    Available download formats: 0, 33
    Dataset updated
    Aug 27, 2024
    Dataset provided by
    National Oceanic and Atmospheric Administration, http://www.noaa.gov/
    Authors
    National Oceanic and Atmospheric Administration, Department of Commerce
    Description

    Please note, this dataset has been superseded by a newer version (see below). Users should not use this version except in rare cases (e.g., when reproducing previous studies that used this version). Integrated Global Radiosonde Archive is a digital data set archived at the former National Climatic Data Center (NCDC), now National Centers for Environmental Information (NCEI). This dataset contains monthly means of geopotential height, temperature, zonal wind, and meridional wind derived from the Integrated Global Radiosonde Archive (IGRA). IGRA consists of radiosonde and pilot balloon observations at over 1500 globally distributed stations, and monthly means are available for the surface and mandatory levels at many of these stations. The period of record varies from station to station, with many extending from 1970 to 2016. Monthly means are computed separately for the nominal times of 0000 and 1200 UTC, considering data within two hours of each nominal time. A mean is provided, along with the number of values used to calculate it, whenever there are at least 10 values for a particular station, month, nominal time, and level.
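The averaging rule described above (means per nominal time from observations within two hours, reported only when at least 10 values exist) can be sketched as follows; the data layout is assumed for illustration and this is not the NCEI production code.

```python
# Sketch of the IGRA monthly-mean rule; field layout is an assumption.
from statistics import mean

MIN_COUNT = 10  # minimum observations required before a mean is reported

def monthly_mean(obs, nominal_hour):
    """obs: list of (hour_utc, value) pairs for one station/month/level.

    Returns (mean or None, count) for the given nominal time (0 or 12 UTC).
    """
    # Keep observations within +/- 2 hours of the nominal time (wrap at 24 h).
    window = [v for h, v in obs
              if min(abs(h - nominal_hour), 24 - abs(h - nominal_hour)) <= 2]
    if len(window) < MIN_COUNT:
        return None, len(window)   # too few values: no mean reported
    return mean(window), len(window)
```

The count is returned alongside the mean because, as the description notes, the archive reports the number of values used for each mean.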

  4. Nominal unit labour cost (NULC) per hour worked - quarterly data

    • data.europa.eu
    • db.nomics.world
    • +1more
    csv, html, tsv, xml
    Updated Oct 17, 2025
    + more versions
    Cite
    Eurostat (2025). Nominal unit labour cost (NULC) per hour worked - quarterly data [Dataset]. https://data.europa.eu/data/datasets/zndkorck0xzbjzicolzr5g?locale=en
    Explore at:
    Available download formats: xml(35401), csv(39146), tsv(17102), xml(8699), html
    Dataset updated
    Oct 17, 2025
    Dataset authored and provided by
    Eurostat, https://ec.europa.eu/eurostat
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The nominal unit labour cost (NULC) index is defined as the ratio of labour cost to labour productivity, where labour cost is the ratio of compensation of employees (current prices) to hours worked by employees, and labour productivity is the ratio of gross domestic product (at market prices in millions, chain-linked volumes reference year 2015) to total hours worked.

    The input data are obtained through official transmissions of national accounts' country data in the ESA 2010 transmission programme.

    The data are expressed as % change on previous year and as index 2015=100.
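The NULC definition above can be written out directly as the ratio of hourly labour cost to hourly productivity; the figures below are invented for illustration, not Eurostat data.

```python
# Worked sketch of the NULC definition; all numbers are invented.
def nulc(compensation, employee_hours, gdp_clv, total_hours):
    """Nominal unit labour cost: hourly labour cost / hourly productivity."""
    labour_cost = compensation / employee_hours   # current prices per hour
    productivity = gdp_clv / total_hours          # chain-linked volume per hour
    return labour_cost / productivity

# Hypothetical economy: 2,400 bn compensation over 60 bn employee hours,
# 9,000 bn GDP (chain-linked volumes) over 75 bn total hours worked.
value_t = nulc(2400, 60, 9000, 75)      # (2400/60) / (9000/75) = 1/3
value_2015 = nulc(2300, 60, 9000, 76)   # hypothetical base-year value
index_t = 100 * value_t / value_2015    # expressed as index 2015=100
```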

  5. Integrated Global Radiosonde Archive (IGRA) - Monthly Means (Version...

    • gimi9.com
    Cite
    Integrated Global Radiosonde Archive (IGRA) - Monthly Means (Version Superseded) | gimi9.com [Dataset]. https://gimi9.com/dataset/data-gov_integrated-global-radiosonde-archive-igra-monthly-means-version-superseded2
    Explore at:
    License

    CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Please note, this dataset has been superseded by a newer version (see below). Users should not use this version except in rare cases (e.g., when reproducing previous studies that used this version). Integrated Global Radiosonde Archive is a digital data set archived at the former National Climatic Data Center (NCDC), now National Centers for Environmental Information (NCEI). This dataset contains monthly means of geopotential height, temperature, zonal wind, and meridional wind derived from the Integrated Global Radiosonde Archive (IGRA). IGRA consists of radiosonde and pilot balloon observations at over 1500 globally distributed stations, and monthly means are available for the surface and mandatory levels at many of these stations. The period of record varies from station to station, with many extending from 1970 to 2016. Monthly means are computed separately for the nominal times of 0000 and 1200 UTC, considering data within two hours of each nominal time. A mean is provided, along with the number of values used to calculate it, whenever there are at least 10 values for a particular station, month, nominal time, and level.

  6. SAMS/Nimbus-7 Level 3 Zonal Means Composition Data V001 (SAMSN7L3ZMTG) at...

    • catalog.data.gov
    • access.earthdata.nasa.gov
    • +2more
    Updated Sep 18, 2025
    + more versions
    Cite
    NASA/GSFC/SED/ESD/TISL/GESDISC (2025). SAMS/Nimbus-7 Level 3 Zonal Means Composition Data V001 (SAMSN7L3ZMTG) at GES DISC [Dataset]. https://catalog.data.gov/dataset/sams-nimbus-7-level-3-zonal-means-composition-data-v001-samsn7l3zmtg-at-ges-disc-f8ae6
    Explore at:
    Dataset updated
    Sep 18, 2025
    Dataset provided by
    NASA, http://nasa.gov/
    Description

    SAMSN7L3ZMTG is the Nimbus-7 Stratospheric and Mesospheric Sounder (SAMS) Level 3 Zonal Means Composition Data Product. The Earth's surface is divided into 2.5-deg latitudinal zones that extend from 50 deg South to 67.5 deg North. Retrieved mixing ratios of nitrous oxide (N2O) and methane (CH4) are averaged over day and night, along with errors, at 31 pressure levels between 50 and 0.125 mbar. Because the N2O and CH4 channels cannot function simultaneously, only one type of measurement is made for any nominal day. The data were recovered from the original magnetic tapes, and are now stored online as one file in its original proprietary binary format.

    The data for this product are available from 1 January 1979 through 30 December 1981. The principal investigators for the SAMS experiment were Prof. John T. Houghton and Dr. Fredric W. Taylor from Oxford University. This product was previously available from the NSSDC with the identifier ESAD-00180 (old ID 78-098A-02C).
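The zonal grid described above (2.5-deg zones from 50 deg S to 67.5 deg N) implies 47 zones. A small sketch, assuming zone edges align exactly with the stated bounds:

```python
# Latitude-to-zone mapping for the SAMS zonal-mean grid; edge alignment
# with the stated bounds is an assumption.
ZONE_WIDTH = 2.5
LAT_MIN, LAT_MAX = -50.0, 67.5   # 50 deg S to 67.5 deg N

def zone_index(lat):
    """0-based index of the 2.5-deg zone containing latitude `lat`."""
    if not LAT_MIN <= lat < LAT_MAX:
        raise ValueError("latitude outside the SAMS zonal-mean domain")
    return int((lat - LAT_MIN) // ZONE_WIDTH)

n_zones = int((LAT_MAX - LAT_MIN) / ZONE_WIDTH)   # (67.5 + 50) / 2.5 = 47
```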

  7. Global mean price data by country, sector and species groups

    • figshare.com
    txt
    Updated Sep 4, 2020
    Cite
    Reg Watson (2020). Global mean price data by country, sector and species groups [Dataset]. http://doi.org/10.6084/m9.figshare.12907307.v2
    Explore at:
    Available download formats: txt
    Dataset updated
    Sep 4, 2020
    Dataset provided by
    Figshare, http://figshare.com/
    Authors
    Reg Watson
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The price data comprise a large data set compiled by the Sea Around Us Project and described in Swartz et al. (2012) and Sumaila et al. (2007):

    Sumaila, U.R. et al. (2007). A global ex-vessel fish price database: construction and applications. Journal of Bioeconomics 9(1), 39-51.

    Swartz, W. et al. (2012). Global Ex-vessel Fish Price Database Revisited: A New Approach for Estimating 'Missing' Prices. Environmental and Resource Economics, 1-14.

    The database contains nominal price data as well as inflation-adjusted real prices (used here), with base year 2000 = 100. Data are for data envelopment analysis, where sector A = Artisanal, I = Industrial, cnumber = United Nations country number, kw = kilowatt power, days = days at sea, vessno = number of vessels. Mean price per tonne in $US.

    See also Rousseau, Y., Watson, R.A., Blanchard, J.L., Fulton, E.A. (2019). Evolution of global marine fishing fleets and the response of fished resources. Proceedings of the National Academy of Sciences, 201820344, DOI: 10.1073/pnas.1820344116, and https://figshare.com/articles/dataset/vessel_effort_csv/12905930
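The nominal-to-real adjustment mentioned above (base year 2000 = 100) is a simple deflation by a price index. A minimal sketch with an invented index value:

```python
# Deflating a nominal price to base-year terms; the index value is invented.
def real_price(nominal, price_index, base=100.0):
    """Convert a nominal price to base-year terms (index: base year = 100)."""
    return nominal * base / price_index

# A nominal ex-vessel price of 1,250 $US/tonne in a year when the price
# index stood at 125 corresponds to 1,000 $US/tonne in year-2000 terms.
real_price(1250, 125)   # -> 1000.0
```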

  8. United States FRBOP Forecast: Nominal GDP: saar: Mean: Plus 1 Qtr

    • ceicdata.com
    Updated Dec 15, 2018
    + more versions
    Cite
    CEICdata.com (2018). United States FRBOP Forecast: Nominal GDP: saar: Mean: Plus 1 Qtr [Dataset]. https://www.ceicdata.com/en/united-states/nipa-2018-gdp-by-expenditure-current-price-saar-forecast-federal-reserve-bank-of-philadelphia/frbop-forecast-nominal-gdp-saar-mean-plus-1-qtr
    Explore at:
    Dataset updated
    Dec 15, 2018
    Dataset provided by
    CEICdata.com
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jun 1, 2016 - Mar 1, 2019
    Area covered
    United States
    Description

    United States FRBOP Forecast: Nominal GDP: saar: Mean: Plus 1 Qtr data was reported at 21,315.720 USD bn in Mar 2019. This records an increase from the previous number of 21,154.941 USD bn for Dec 2018. United States FRBOP Forecast: Nominal GDP: saar: Mean: Plus 1 Qtr data is updated quarterly, averaging 6,642.642 USD bn from Dec 1968 (Median) to Mar 2019, with 202 observations. The data reached an all-time high of 21,315.720 USD bn in Mar 2019 and a record low of 896.540 USD bn in Dec 1968. United States FRBOP Forecast: Nominal GDP: saar: Mean: Plus 1 Qtr data remains active status in CEIC and is reported by Federal Reserve Bank of Philadelphia. The data is categorized under Global Database’s United States – Table US.A003: NIPA 2018: GDP by Expenditure: Current Price: saar: Forecast: Federal Reserve Bank of Philadelphia.

  9. ENSEMBLES CNRM-CM3 1PCTTO4X run1, monthly mean values

    • wdc-climate.de
    Updated Sep 14, 2007
    + more versions
    Cite
    Royer, Jean-Francois (2007). ENSEMBLES CNRM-CM3 1PCTTO4X run1, monthly mean values [Dataset]. https://www.wdc-climate.de/ui/entry?acronym=ENSEMBLES_CNCM3_1PTO4X_1_MM
    Explore at:
    Dataset updated
    Sep 14, 2007
    Dataset provided by
    World Data Center for Climate (WDCC) at DKRZ
    Authors
    Royer, Jean-Francois
    License

    http://ensembles-eu.metoffice.com/docs/Ensembles_Data_Policy_261108.pdf

    Time period covered
    Jan 1, 1930 - Dec 31, 2150
    Area covered
    Description

    These data represent monthly averaged values (monthly mean (MM) and diurnal cycle (DC)) of selected variables for ENSEMBLES (http://www.ensembles-eu.org). The list of output variables can be found in: http://ensembles.wdc-climate.de/output-variables

    The 1PCTTO4X simulation (including year 2150) was initiated from nominal year 1970 of the preindustrial run, when equilibrium was reached (corresponding to nominal year 1860 of the CO2-quadrupling experiment). Forcing agents included: CO2, CH4, N2O, O3, CFC11 (including other CFCs and HFCs), CFC12; sulfate (Boucher), BC, sea salt, and desert dust aerosols.

    These datasets are available in netCDF format. The dataset names are composed of:
    - centre/model acronym (e.g. CNCM3: CNRM/CM3)
    - scenario acronym (e.g. SRA1B: SRES A1B)
    - run number (e.g. 1: run 1)
    - time interval (MM: monthly mean, DM: daily mean, DC: diurnal cycle, 6H: 6 hourly, 12h: 12 hourly)
    - variable acronym with level value
    Example: CNCM3_SRA1B_1_MM_hur850
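The naming convention above can be parsed mechanically; the field meanings below follow the text, and the interval table repeats the acronyms given there.

```python
# Parser for the ENSEMBLES dataset naming convention described above.
INTERVALS = {"MM": "monthly mean", "DM": "daily mean", "DC": "diurnal cycle",
             "6H": "6 hourly", "12h": "12 hourly"}

def parse_dataset_name(name):
    """Split e.g. 'CNCM3_SRA1B_1_MM_hur850' into its named components."""
    model, scenario, run, interval, variable = name.split("_", 4)
    return {"model": model, "scenario": scenario, "run": int(run),
            "interval": INTERVALS.get(interval, interval),
            "variable": variable}

parse_dataset_name("CNCM3_SRA1B_1_MM_hur850")
# -> {'model': 'CNCM3', 'scenario': 'SRA1B', 'run': 1,
#     'interval': 'monthly mean', 'variable': 'hur850'}
```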

    Technical details for this experiment: CNRM-CM3 (2004): atmosphere: Arpege-Climat v3 (T42L45, cy 22b+); ocean: OPA8.1; sea ice: Gelato 3.10; river routing: TRIP

  10. Demographic and Clinical Characteristics of Study Participants.

    • plos.figshare.com
    • figshare.com
    xls
    Updated Jun 10, 2023
    Cite
    Tiziano Colibazzi; Bruce E. Wexler; Ravi Bansal; Xuejun Hao; Jun Liu; Juan Sanchez-Peña; Cheryl Corcoran; Jeffrey A. Lieberman; Bradley S. Peterson (2023). Demographic and Clinical Characteristics of Study Participants. [Dataset]. http://doi.org/10.1371/journal.pone.0055783.t001
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 10, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Tiziano Colibazzi; Bruce E. Wexler; Ravi Bansal; Xuejun Hao; Jun Liu; Juan Sanchez-Peña; Cheryl Corcoran; Jeffrey A. Lieberman; Bradley S. Peterson
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data are reported as mean (SD). F and t values are reported for independent t-tests on means, and chi-square values (χ2) for nominal data. An asterisk denotes significant p values. N = number.
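The analysis choices described above (t-tests for means, chi-square for nominal data) can be illustrated with scipy on toy counts; none of these numbers are from the study.

```python
# Toy illustration of test selection by measurement scale; data are invented.
from scipy import stats

# Nominal variable: contingency table of counts (rows: groups, cols: categories).
table = [[40, 10],
         [35, 15]]
chi2, p_nominal, dof, expected = stats.chi2_contingency(table)

# Continuous variable: compare group means with an independent t-test.
t, p_means = stats.ttest_ind([101, 98, 105, 99, 103],
                             [96, 94, 98, 92, 95])
```

The choice follows the table's convention: chi-square for category counts, t-tests for interval/ratio measurements.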

  11. ENSEMBLES CNRM-CM3 SRESB1 run1, daily values

    • wdc-climate.de
    Updated Jan 16, 2007
    + more versions
    Cite
    Royer, Jean-Francois (2007). ENSEMBLES CNRM-CM3 SRESB1 run1, daily values [Dataset]. https://www.wdc-climate.de/ui/entry?acronym=ENSEMBLES_CNCM3_SRB1_1_D
    Explore at:
    Dataset updated
    Jan 16, 2007
    Dataset provided by
    World Data Center for Climate (WDCC) at DKRZ
    Authors
    Royer, Jean-Francois
    License

    http://ensembles-eu.metoffice.com/docs/Ensembles_Data_Policy_261108.pdf

    Time period covered
    Jan 1, 2000 - Dec 31, 2100
    Area covered
    Description

    These data represent daily values (daily mean, instantaneous daily output, diurnal cycle) of selected variables for ENSEMBLES (http://www.ensembles-eu.org). The list of output variables can be found in: http://ensembles.wdc-climate.de/output-variables

    The SRES-B1 simulation (including year 2100) was initiated from nominal year 2000 of the 20C3M run1. It corresponds to nominal year 2000 of the SRES-B1 experiment. Forcing agents included: CO2, CH4, N2O, O3, CFC11 (including other CFCs and HFCs), CFC12; sulfate (Boucher), BC, sea salt, and desert dust aerosols. This 550 ppm stabilization experiment continued until 2300 with all concentrations fixed at their levels of year 2100.

    These datasets are available in netCDF format. The dataset names are composed of:
    - centre/model acronym (e.g. CNCM3: CNRM/CM3)
    - scenario acronym (e.g. SRB1: SRES B1)
    - run number (e.g. 1: run 1)
    - time interval (MM: monthly mean, DM: daily mean, DC: diurnal cycle, 6H: 6 hourly, 12h: 12 hourly)
    - variable acronym with level value
    Example: CNCM3_SRB1_1_MM_hur850

    Technical details for this experiment: CNRM-CM3 (2004): atmosphere: Arpege-Climat v3 (T42L45, cy 22b+); ocean: OPA8.1; sea ice: Gelato 3.10; river routing: TRIP

  12. Peatland Decomposition Database (1.1.0)

    • data.niaid.nih.gov
    Updated Mar 5, 2025
    + more versions
    Cite
    Teickner, Henning; Knorr, Klaus-Holger (2025). Peatland Decomposition Database (1.1.0) [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_11276064
    Explore at:
    Dataset updated
    Mar 5, 2025
    Dataset provided by
    University of Münster
    Authors
    Teickner, Henning; Knorr, Klaus-Holger
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    1 Introduction

    The Peatland Decomposition Database (PDD) stores data from published litterbag experiments related to peatlands. Currently, the database focuses on northern peatlands and Sphagnum litter and peat, but it also contains data from some vascular plant litterbag experiments. It contains entries from 34 studies, 2,160 litterbag experiments, and 7,297 individual samples with 117,841 measurements for various attributes (e.g. relative mass remaining, N content, holocellulose content, mesh size). The aim is to provide a harmonized data source that can be used to re-analyse existing data and to plan future litterbag experiments.

    The Peatland Productivity and Decomposition Parameter Database (PPDPD) (Bona et al. 2018) is similar to the PDD in that both contain data from peatland litterbag experiments. The differences are that the two databases partly contain different data; that PPDPD additionally contains information on vegetation productivity, which PDD does not; and that PDD provides more information and metadata on litterbag experiments, including measurement errors.

    2 Updates

    Compared to version 1.0.0, this version has a new structure for table experimental_design_format, contains additional metadata on the experimental design (these were omitted in version 1.0.0), and contains the scripts that were used to import the data into the database.

    3 Methods

    3.1 Data collection

    Data for the database was collected from published litterbag studies, by extracting published data from figures, tables, or other data sources, and by contacting the authors of the studies to obtain raw data. All data processing was done with R (R version 4.2.0 (2022-04-22)) (R Core Team 2022).

    Studies were identified via a Scopus search with search string (TITLE-ABS-KEY ( peat* AND ( "litter bag" OR "decomposition rate" OR "decay rate" OR "mass loss")) AND NOT ("tropic*")) (2022-12-17). These studies were further screened to exclude those which do not contain litterbag data or which recycle data from other studies that had already been considered. Additional studies with litterbag experiments in northern peatlands that we were aware of, but which were not identified in the literature search, were added to the list of publications. For studies not older than 10 years, authors were contacted to obtain raw data; however, this was successful in only a few cases. To date, the database focuses on Sphagnum litterbag experiments, and data from not all studies identified by the literature search have yet been included.

    Data from figures were extracted using the package ‘metaDigitise’ (1.0.1) (Pick, Nakagawa, and Noble 2018). Data from tables were extracted manually.

    Data from the following studies are currently included: Farrish and Grigal (1985), Bartsch and Moore (1985), Farrish and Grigal (1988), Vitt (1990), Hogg, Lieffers, and Wein (1992), Sanger, Billett, and Cresser (1994), Hiroki and Watanabe (1996), Szumigalski and Bayley (1996), Prevost, Belleau, and Plamondon (1997), Arp, Cooper, and Stednick (1999), Robbert A. Scheffer and Aerts (2000), R. A. Scheffer, Van Logtestijn, and Verhoeven (2001), Limpens and Berendse (2003), Waddington, Rochefort, and Campeau (2003), Asada, Warner, and Banner (2004), Thormann, Bayley, and Currah (2001), Trinder, Johnson, and Artz (2008), Breeuwer et al. (2008), Trinder, Johnson, and Artz (2009), Bragazza and Iacumin (2009), Hoorens, Stroetenga, and Aerts (2010), Straková et al. (2010), Straková et al. (2012), Orwin and Ostle (2012), Lieffers (1988), Manninen et al. (2016), Johnson and Damman (1991), Bengtsson, Rydin, and Hájek (2018a), Bengtsson, Rydin, and Hájek (2018b), Asada and Warner (2005), Bengtsson, Granath, and Rydin (2017), Bengtsson, Granath, and Rydin (2016), Hagemann and Moroni (2015), Hagemann and Moroni (2016), B. Piatkowski et al. (2021), B. T. Piatkowski et al. (2021), Mäkilä et al. (2018), Golovatskaya and Nikonova (2017), Golovatskaya and Nikonova (2017).

    4 Database records

    The database is a ‘MariaDB’ database and the database schema was designed to store data and metadata following the Ecological Metadata Language (EML) (Jones et al. 2019). Descriptions of the tables are shown in Tab. 1.

    The database contains general metadata relevant for litterbag experiments (e.g., geographical, temporal, and taxonomic coverage, mesh sizes, experimental design). However, it does not contain detailed descriptions of sample handling, sample preprocessing methods, or sites, because there currently are no discipline-specific metadata and reporting standards.

    Table 1: Description of the individual tables in the database.

    Name Description

    attributes Defines the attributes of the database and the values in column attribute_name in table data.

    citations Stores bibtex entries for references and data sources.

    citations_to_datasets Links entries in table citations with entries in table datasets.

    custom_units Stores custom units.

    data Stores measured values for samples, for example remaining masses.

    datasets Lists the individual datasets.

    experimental_design_format Stores information on the experimental design of litterbag experiments.

    measurement_scales, measurement_scales_date_time, measurement_scales_interval, measurement_scales_nominal, measurement_scales_ordinal, measurement_scales_ratio Defines data value types.

    missing_value_codes Defines how missing values are encoded.

    samples Stores information on individual samples.

    samples_to_samples Links samples to other samples, for example litter samples collected in the field to litter samples collected during the incubation of the litterbags.

    units, unit_types Stores information on measurement units.
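The long-format layout implied by Table 1 (each row in `data` names its attribute, which is defined in `attributes`) can be sketched relationally. This uses SQLite as a stand-in for the actual MariaDB database; the table and column names follow the text, but the attribute name and all rows are invented.

```python
# Minimal relational sketch of the PDD long format; SQLite stands in for
# MariaDB, and the attribute name and values are invented examples.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE attributes (attribute_name TEXT PRIMARY KEY,
                         attribute_definition TEXT);
CREATE TABLE data (sample_id INTEGER, attribute_name TEXT, value REAL);
INSERT INTO attributes VALUES
  ('mass_relative_mass', 'Relative mass remaining of the litterbag sample');
INSERT INTO data VALUES (1, 'mass_relative_mass', 0.83),
                        (2, 'mass_relative_mass', 0.71);
""")

# Each measurement joins to its definition via attribute_name.
rows = con.execute("""
    SELECT d.sample_id, d.value, a.attribute_definition
    FROM data d JOIN attributes a USING (attribute_name)
    ORDER BY d.sample_id
""").fetchall()
```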

    5 Attributes

    Table 2: Definition of attributes in the Peatland Decomposition Database and entries in the column attribute_name in table data.

    Name Definition Example value Unit Measurement scale Number type Minimum value Maximum value String format

    4_hydroxyacetophenone_mass_absolute A numeric value representing the content of 4-hydroxyacetophenone, as described in Straková et al. (2010). 0.26 g ratio real 0 Inf NA

    4_hydroxyacetophenone_mass_relative_mass A numeric value representing the content of 4-hydroxyacetophenone, as described in Straková et al. (2010). 0.26 g/g ratio real 0 1 NA

    4_hydroxybenzaldehyde_mass_absolute A numeric value representing the content of 4-hydroxybenzaldehyde, as described in Straková et al. (2010). 0.26 g ratio real 0 Inf NA

    4_hydroxybenzaldehyde_mass_relative_mass A numeric value representing the content of 4-hydroxybenzaldehyde, as described in Straková et al. (2010). 0.26 g/g ratio real 0 1 NA

    4_hydroxybenzoic_acid_mass_absolute A numeric value representing the content of 4-hydroxybenzoic acid, as described in Straková et al. (2010). 0.26 g ratio real 0 Inf NA

    4_hydroxybenzoic_acid_mass_relative_mass A numeric value representing the content of 4-hydroxybenzoic acid, as described in Straková et al. (2010). 0.26 g/g ratio real 0 1 NA

    abbreviation In table custom_units: A string representing an abbreviation for the custom unit. gC NA nominal NA NA NA NA

    acetone_extractives_mass_absolute A numeric value representing the content of acetone extractives, as described in Straková et al. (2010). 0.26 g ratio real 0 Inf NA

    acetone_extractives_mass_relative_mass A numeric value representing the content of acetone extractives, as described in Straková et al. (2010). 0.26 g/g ratio real 0 1 NA

    acetosyringone_mass_absolute A numeric value representing the content of acetosyringone, as described in Straková et al. (2010). 0.26 g ratio real 0 Inf NA

    acetosyringone_mass_relative_mass A numeric value representing the content of acetosyringone, as described in Straková et al. (2010). 0.26 g/g ratio real 0 1 NA

    acetovanillone_mass_absolute A numeric value representing the content of acetovanillone, as described in Straková et al. (2010). 0.26 g ratio real 0 Inf NA

    acetovanillone_mass_relative_mass A numeric value representing the content of acetovanillone, as described in Straková et al. (2010). 0.26 g/g ratio real 0 1 NA

    arabinose_mass_absolute A numeric value representing the content of arabinose, as described in Straková et al. (2010). 0.26 g ratio real 0 Inf NA

    arabinose_mass_relative_mass A numeric value representing the content of arabinose, as described in Straková et al. (2010). 0.26 g/g ratio real 0 1 NA

    ash_mass_absolute A numeric value representing the content of ash (after burning at 550°C). 4 g ratio real 0 Inf NA

    ash_mass_relative_mass A numeric value representing the content of ash (after burning at 550°C). 0.05 g/g ratio real 0 Inf NA

    attribute_definition A free text field with a textual description of the meaning of attributes in the dpeatdecomposition database. NA NA nominal NA NA NA NA

    attribute_name A string describing the names of the attributes in all tables of the dpeatdecomposition database. attribute_name NA nominal NA NA NA NA

    bibtex A string representing the bibtex code used for a literature reference throughout the dpeatdecomposition database. Galka.2021 NA nominal NA NA NA NA

    bounds_maximum A numeric value representing the maximum possible value for a numeric attribute. 0 NA interval real Inf Inf NA

    bounds_minimum A numeric value representing the minimum possible value for a numeric attribute. INF NA interval real Inf Inf NA

    bulk_density A numeric value representing the bulk density of the sample [g cm-3]. 0.2 g/cm^3 ratio real 0 Inf NA

    C_absolute The absolute mass of C in the sample. 1 g ratio real 0 Inf NA

    C_relative_mass The mass of C relative to the total mass of the sample. 1 g/g ratio real 0 Inf NA

    C_to_N A numeric value representing the C to N ratio of the sample. 35 g/g ratio real 0 Inf NA

    C_to_P A numeric value representing the C to P ratio of the sample. 35 g/g ratio real 0 Inf NA

    Ca_absolute The

  13. TrajectoryProfile - R5.x279.000.0013 - os75nb_long - 28.81N, 89.48W -...

    • erddap.griidc.org
    Updated Feb 3, 2021
    + more versions
    Cite
    Kurt Polzin (2021). TrajectoryProfile - R5.x279.000.0013 - os75nb_long - 28.81N, 89.48W - 2018-06-21 [Dataset]. https://erddap.griidc.org/erddap/info/R5_x279_000_0013_os75nb_long/index.html
    Explore at:
    Dataset updated
    Feb 3, 2021
    Dataset provided by
    Gulf of Mexico Research Initiative Information and Data (GRIIDC)
    Authors
    Kurt Polzin
    Time period covered
    Jun 21, 2018 - Jun 28, 2018
    Area covered
    Variables measured
    crs, BT_u, BT_v, flag, time, depth, BT_depth, latitude, platform, NAV_speed, and 62 more
    Description

    This dataset contains ocean currents data from Shipboard Acoustic Doppler Current Profilers (SADCP) collected during the R/V Point Sur cruise PS18_28 in the northern Gulf of Mexico from 2018-06-21 to 2018-06-28. The experimental site is on the continental slope of the northern Gulf of Mexico, next to the Deepwater Horizon spill site. The raw ADCP data were collected and processed using the University of Hawaii data acquisition system (UHDAS) during the cruise. The post-cruise data processing was conducted by the University of Hawaii using the Common Oceanographic Data Analysis System (CODAS). This dataset contains both raw and processed data. There were two vessel-mounted ADCPs on Point Sur, operating at 75 kHz and 300 kHz respectively; both were manufactured by RD Instruments. cdm_data_type=TrajectoryProfile; cdm_profile_variables=time; cdm_trajectory_variables=trajectory, latitude, longitude; software: pycurrents. Variables in this CODAS long-form netCDF file are taken directly from the original CODAS database used in processing. For additional information see the CODAS_processing_note global attribute and the attributes of each of the variables.

    The term "bin" refers to the depth cell index, starting from 1 nearest the transducer. Bin depths correspond to the centers of the depth cells.
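The bin geometry described above can be sketched numerically. The exact CODAS depth formula is not given in this listing, so the sketch below uses a common approximation, depth(bin i) = transducer depth + blank length + (i - 0.5) * bin length, with bin index 1 nearest the transducer; the example values are illustrative only, not taken from this dataset.

```python
# Hedged sketch: approximate bin-center depths from the CONFIG1-style
# geometry (transducer depth, blank length, bin length, number of bins).
# The formula is a common approximation, not the documented CODAS one.
def bin_center_depths(tr_depth, blank_length, bin_length, num_bins):
    # Bin i (1-based, nearest the transducer first) is centered
    # (i - 0.5) bin lengths below the end of the blanking interval.
    return [tr_depth + blank_length + (i - 0.5) * bin_length
            for i in range(1, num_bins + 1)]

# Illustrative values only (not from this dataset):
depths = bin_center_depths(tr_depth=5.0, blank_length=8.0,
                           bin_length=16.0, num_bins=4)
print(depths)  # first bin center at 5 + 8 + 8 = 21 m
```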

    Variable descriptions below follow the pattern "short_name : description".

    time : Time at the end of the ensemble, days from start of year.
    lon, lat : Longitude, Latitude at the end of the ensemble.
    u, v : Zonal and meridional velocity component profiles relative to the moving ship, not to the earth.
    w : Vertical velocity -- Caution: usually dominated by ship motion and other artifacts.
    error_vel : Error velocity -- diagnostic, scaled difference between 2 estimates of vertical velocity (w).
    amp_sound_scat : Received signal strength (ADCP units; not corrected for spreading or attenuation).
    profile_flags : Editing flags for averaged data.
    percent_good : Percentage of pings used for averaging u, v after editing.
    spectral_width : Spectral width for NB instruments; correlation for WH, BB, OS instruments.

    CONFIG1_tr_depth : Transducer depth, meters.
    CONFIG1_top_ref_bin : Reference layer averaging: top bin.
    CONFIG1_bot_ref_bin : Reference layer averaging: bottom bin.
    CONFIG1_pls_length : Pulse length projected on vertical (meters).
    CONFIG1_blank_length : Blank length (vertical; meters).
    CONFIG1_bin_length : Bin length (vertical; meters).
    CONFIG1_num_bins : Number of bins.
    CONFIG1_ping_interval : Approximate mean time between pings or ping groups.
    CONFIG1_hd_offset : Transducer azimuth approximation prior to data processing, clockwise rotation of beam 3 from forward.
    CONFIG1_freq_transmit : Nominal (round number) instrument frequency.
    CONFIG1_ev_threshold : Error velocity editing threshold (if known).
    CONFIG1_bot_track : Flag: does any bottom track data exist?
    CONFIG1_avg_interval : Ensemble-averaging interval (seconds).

    BT_u : Eastward ship velocity from bottom tracking.
    BT_v : Northward ship velocity from bottom tracking.
    BT_depth : Depth from bottom tracking.

    ANCIL2_watrk_scale_factor : Scale factor; multiplier applied to measured velocity.
    ANCIL2_watrk_hd_misalign : Azimuth correction used to rotate measured velocity.
    ANCIL2_botrk_scale_factor : Scale factor for bottom tracking.
    ANCIL2_botrk_hd_misalign : Azimuth correction for bottom tracking.
    ANCIL2_mn_roll : Ensemble-mean roll.
    ANCIL2_mn_pitch : Ensemble-mean pitch.
    ANCIL1_mn_heading : Ensemble-mean heading.
    ANCIL1_tr_temp : Ensemble-mean transducer temperature.
    ANCIL2_std_roll : Standard deviation of roll.
    ANCIL2_std_pitch : Standard deviation of pitch.
    ANCIL2_std_heading : Standard deviation of heading.
    ANCIL2_std_temp : Standard deviation of transducer temperature.
    ANCIL2_last_roll : Last measurement of roll in the ensemble.
    ANCIL2_last_pitch : Last measurement of pitch.
    ANCIL2_last_heading : Last measurement of heading.
    ANCIL2_last_temp : Last measurement of transducer temperature.
    ANCIL2_last_good_bin : Deepest bin with good velocities.
    ANCIL2_max_amp_bin : Bin with maximum amplitude based on bottom detection, if the bottom is within range.
    ANCIL1_snd_spd_used : Sound speed used for velocity calculations.
    ANCIL1_pgs_sample : Number of pings averaged in the ensemble.

    ACCESS_last_good_bin : Last bin with good data (-1 if the entire profile is bad).
    ACCESS_first_good_bin : First bin with good data.
    ACCESS_U_ship_absolute : Ship's mean eastward velocity component.
    ACCESS_V_ship_absolute : Ship's mean northward velocity component.
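Since u and v in the long form are described above as relative to the moving ship, earth-relative ocean velocity can be recovered per ensemble by adding the ship's absolute velocity components (ACCESS_U_ship_absolute, ACCESS_V_ship_absolute). A minimal sketch with illustrative numbers, assuming the sign conventions line up as described:

```python
import numpy as np

# Sketch (not from the CODAS source): earth-relative ocean velocity
# from the long-form variables described above:
#   u_ocean = u (ship-relative) + ACCESS_U_ship_absolute
# Illustrative arrays: 2 ensembles x 3 bins, in m/s.
u = np.array([[0.10, 0.05, 0.02],
              [0.08, 0.04, 0.01]])          # ship-relative eastward velocity
u_ship_abs = np.array([2.00, 2.10])         # ship's eastward velocity per ensemble

u_ocean = u + u_ship_abs[:, None]           # broadcast ship velocity over bins
print(u_ocean)
```

The same addition applies to v with ACCESS_V_ship_absolute.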

    The following historical variables are not currently used.

    NAV_speed :
    NAV_longitude :
    NAV_latitude :
    NAV_direction :

    CONFIG1_rol_offset :
    CONFIG1_pit_offset :
    CONFIG1_compensation :
    CONFIG1_pgs_ensemble : Number of pings averaged in the instrument; always 1 for SADCP.
    CONFIG1_heading_bias : Only relevant for narrowband ADCP data collected with DAS2.48 or earlier (MS-DOS).
    CONFIG1_ens_threshold :

    ANCIL2_rol_misalign :
    ANCIL2_pit_misalign :
    ANCIL2_ocean_depth :
    ANCIL1_best_snd_spd :

    percent_3_beam : This may have different meanings depending on the data acquisition system, processing method, and software versions; it is not useful without this context.

    comment3=CODAS_processing_note

    comment4=CODAS processing note:

    Overview

    The CODAS database is a specialized storage format designed for shipboard ADCP data. "CODAS processing" uses this format to hold averaged shipboard ADCP velocities and other variables during the stages of data processing. The CODAS database stores velocity profiles relative to the ship as east and north components along with position, ship speed, heading, and other variables. The netCDF short form contains ocean velocities relative to earth, time, position, transducer temperature, and ship heading; these are designed to be "ready for immediate use". The netCDF long form is just a dump of the entire CODAS database. Some variables are no longer used, and all have names derived from their original CODAS names, dating back to the late 1980s.

    Post-processing

    CODAS post-processing, i.e. that which occurs after the single-ping profiles have been vector-averaged and loaded into the CODAS database, includes editing (using automated algorithms and manual tools), rotation and scaling of the measured velocities, and application of a time-varying heading correction. Additional algorithms developed more recently include translation of the GPS positions to the transducer location, and averaging of ship's speed over the times of valid pings when Percent Good is reduced. Such post-processing is needed prior to submission of "processed ADCP data" to JASADCP or other archives.
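The rotation and scaling step above corresponds to the ANCIL2 calibration variables in the long form (a scale factor multiplying the measured velocity and a heading-misalignment azimuth correction). A hedged sketch of one common way to express such a correction, as a complex amplitude-and-phase multiplication; the sign convention for the misalignment angle is an assumption here:

```python
import cmath
import math

# Sketch of a rotation-and-scaling calibration: multiply each measured
# velocity (u + iv) by scale_factor * exp(i * misalignment). The sign
# convention of the angle is an assumption, not taken from CODAS docs.
def apply_calibration(u, v, scale_factor, misalign_deg):
    w = (u + 1j * v) * scale_factor * cmath.exp(1j * math.radians(misalign_deg))
    return w.real, w.imag

# A 90-degree misalignment turns purely eastward flow into northward flow.
u2, v2 = apply_calibration(1.0, 0.0, scale_factor=1.0, misalign_deg=90.0)
```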

    Full CODAS processing

    Whenever single-ping data have been recorded, full CODAS processing provides the best end product.

    Full CODAS processing starts with the single-ping velocities in beam coordinates. Based on the transducer orientation relative to the hull, the beam velocities are transformed to horizontal, vertical, and "error velocity" components. Using a reliable heading (typically from the ship's gyro compass), the velocities in ship coordinates are rotated into earth coordinates.
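The ship-to-earth rotation described above can be sketched in a few lines. This assumes heading is the clockwise angle of the bow from true north and that ship coordinates are (starboard, forward); both sign conventions are assumptions for illustration, not taken from the CODAS documentation.

```python
import math

# Rotate a velocity from ship coordinates (starboard, forward) into
# earth coordinates (east, north) using heading in degrees clockwise
# from true north. Conventions here are illustrative assumptions.
def ship_to_earth(u_stbd, v_fwd, heading_deg):
    h = math.radians(heading_deg)
    u_east = u_stbd * math.cos(h) + v_fwd * math.sin(h)
    v_north = -u_stbd * math.sin(h) + v_fwd * math.cos(h)
    return u_east, v_north

# Bow pointing east (heading 90): "forward" maps to eastward.
ue, vn = ship_to_earth(0.0, 1.0, 90.0)
```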

    Pings are grouped into an "ensemble" (usually 2-5 minutes duration) and undergo a suite of automated editing algorithms (removal of acoustic interference; identification of the bottom; editing based on thresholds; and specialized editing that targets CTD wire interference and "weak, biased profiles"). The ensemble of single-ping velocities is then averaged using an iterative reference layer averaging scheme. Each ensemble is approximated as a single function of depth, with a zero-average over a reference layer plus a reference layer velocity for each ping. Adding the average of the single-ping reference layer velocities to the function of depth yields the ensemble-average velocity profile. These averaged profiles, along with ancillary measurements, are written to disk, and subsequently loaded into the CODAS database. Everything after this stage is "post-processing".
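The reference-layer averaging idea above can be sketched in a simplified, single-pass form (no editing, no iteration): each ping profile is split into its reference-layer mean plus a deviation profile, the deviations are averaged across pings, and the mean reference velocity is added back.

```python
import numpy as np

# Hedged, simplified sketch of reference-layer ensemble averaging
# (single pass, no editing or iteration, unlike the real CODAS scheme).
def ensemble_average(pings, ref_bins):
    pings = np.asarray(pings)                    # (n_pings, n_bins)
    ref = pings[:, ref_bins].mean(axis=1)        # reference-layer velocity per ping
    shape = (pings - ref[:, None]).mean(axis=0)  # profile with zero mean over ref layer
    return shape + ref.mean()                    # add mean reference velocity back

# Two illustrative pings, three bins; reference layer = first two bins.
pings = [[1.0, 1.0, 0.5],
         [1.2, 1.2, 0.7]]
avg = ensemble_average(pings, ref_bins=slice(0, 2))
print(avg)
```

Subtracting each ping's own reference velocity before averaging removes ping-to-ping ship-motion noise from the profile shape.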

    note (time):

    Time is stored in the database using UTC Year, Month, Day, Hour, Minute, Seconds. Floating point time "Decimal Day" is the floating point interval in days since the start of the year, usually the year of the first day of the cruise.
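The "Decimal Day" convention above can be illustrated directly. This sketch assumes day 0.0 corresponds to 00:00 UTC on 1 January of the year base, which the wording "interval in days since the start of the year" suggests but does not state explicitly; treat the zero offset as an assumption.

```python
from datetime import datetime, timedelta

# Convert a CODAS-style "Decimal Day" to a UTC datetime, assuming
# decimal day 0.0 = 00:00 on 1 January of the given year base.
def decimal_day_to_datetime(decimal_day, year):
    return datetime(year, 1, 1) + timedelta(days=decimal_day)

# Decimal day 171.5 of 2018 falls at noon on 21 June,
# consistent with this cruise's start date.
t = decimal_day_to_datetime(171.5, 2018)
print(t)  # 2018-06-21 12:00:00
```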

    note (heading):

    CODAS processing uses heading from a reliable device, and (if available) uses a time-dependent correction by an accurate heading device. The reliable heading device is typically a gyro compass (for example, the Bridge gyro). Accurate heading devices

