Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
This dataset documents the spatial and temporal variability of resuspension events and sediment dynamics at seven Great Barrier Reef Lagoon inshore locations, using continuous logger data (10-minute sampling intervals) collected over 2½ years, together with analyses of the quantity of sediment collected in newly designed sediment traps. The dataset highlights the influence of river discharge events on sediment dynamics across these locations.
*This dataset is under an embargo period until the end of the project extension
Methods: Nephelometers were sourced from the Marine Geophysics Laboratory, James Cook University. Prior to deployment, each instrument was calibrated using the laboratory's standard procedure, in which calibrations for turbidity (to normalise readings for standardisation across all instruments), pressure and light were performed. The instruments were deployed for periods spanning 2 to 5 months at seven inshore locations. On retrieval, the data from each instrument were downloaded into a spreadsheet where the pre-deployment calibrations were applied to produce the time series.
Site-specific calibrations of benthic sediment against instrument-normalised turbidity readings were applied to convert the NTU turbidity measurements to suspended sediment concentrations (SSC). The spreadsheet was also used to remove spurious data from the time series caused by instrument fouling, malfunction or obstruction. In some cases the instrument flooded or was lost, so no data are available for that deployment period. In other cases, one or more of the parameters was not recorded by the instrument, so only the reliable data have been plotted. The current meter (Marotte) was also sourced from the Marine Geophysics Laboratory, James Cook University, and its data were downloaded using the laboratory's software.
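The NTU-to-SSC conversion described here is, in many deployments, a simple linear fit of gravimetric SSC samples against instrument turbidity. The exact calibration model used for this dataset is not stated, so the following is only an illustrative sketch with made-up calibration values:

```python
import numpy as np

# Hypothetical site-calibration samples: paired instrument turbidity (NTU)
# readings and gravimetric suspended sediment concentrations (mg/L).
ntu = np.array([1.0, 5.0, 10.0, 20.0, 40.0])
ssc = np.array([1.2, 6.0, 12.0, 24.0, 48.0])

# Fit a linear site calibration SSC = a * NTU + b by least squares.
a, b = np.polyfit(ntu, ssc, 1)

def ntu_to_ssc(turbidity):
    """Convert a turbidity reading (NTU) to SSC (mg/L) via the site calibration."""
    return a * turbidity + b

# Apply the calibration to example 10-minute logger readings.
logged = np.array([2.5, 15.0, 33.0])
ssc_series = ntu_to_ssc(logged)
```

In practice the fit would be done per site, since benthic sediment properties differ between locations.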
We note that the turbidity, wave pressure and light data provided have been thoroughly checked through QA/QC procedures. However, the temperature and current meter data have not been thoroughly checked, and there will be instances where the data from these instruments are spurious. We caution against using these data unless a thorough QA/QC process is implemented. In some cases, on longer deployments, the earlier data were overwritten on the nephelometer and so have been lost.
Format: The data are provided as Microsoft Excel files (a separate file for each site). Due to Excel's limit on the number of rows per worksheet, the time series data are spread across three worksheets in each file.
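Because each site's time series is split across three worksheets, the sheets need to be recombined before analysis. A minimal pandas sketch (synthetic frames stand in for the worksheets here; for a real file, `pd.read_excel(path, sheet_name=None)` returns the same kind of dict):

```python
import pandas as pd

# Synthetic stand-ins for the three worksheets of one site file.
# For a real file: sheets = pd.read_excel("site1.xlsx", sheet_name=None)
sheets = {
    "Sheet1": pd.DataFrame({"TIMESTAMP": pd.date_range("2017-01-01", periods=3, freq="10min"),
                            "NTUe": [1.1, 1.3, 1.2]}),
    "Sheet2": pd.DataFrame({"TIMESTAMP": pd.date_range("2017-01-01 00:30", periods=3, freq="10min"),
                            "NTUe": [1.4, 1.6, 1.5]}),
    "Sheet3": pd.DataFrame({"TIMESTAMP": pd.date_range("2017-01-01 01:00", periods=3, freq="10min"),
                            "NTUe": [1.7, 1.9, 1.8]}),
}

# Concatenate the worksheets into one continuous 10-minute time series.
ts = pd.concat(list(sheets.values()), ignore_index=True).sort_values("TIMESTAMP")
```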
Data Dictionary:
TIMESERIES.XLSX for each location
TIMESTAMP: date and time of measurement at 10 minute frequency [DD/MM/YYYY Hour:Minute]
NEPHELOMETER DATA
NTUe: turbidity measurements, in filter effluent units (NTUe)
SSC (mg.L-1): suspended sediment concentrations converted from NTUe measurements
LIGHT (uE/cm2): light measured per 10-minute sensor reading
DEPTH (m): depth of instrument from the surface
RMS: measure of wave pressure
TEMP (degrees C): water temperature
CURRENT METER DATA
speed (m/s)
heading (degrees CW from North)
speed upper (m/s)
speed lower (m/s)
tilt (radians)
direction (radians CCW from East)
batt (volts)
temp (Celsius)
References: Lewis, S., Bainbridge, Z., Stevens, T., Garzon-Garcia, A., Chen, C., Burton, J., Bahadori, M., Rezaei Rashti, M., Gorman, J., Smithers, S., Olley, J., Moody, P., Dehayr, R. (2018) Sediment tracing from the catchment to reef: preliminary results from 2018 flood plume case studies, logger and sediment trap time series and an overview of project progress. Report to the National Environmental Science Programme. Reef and Rainforest Research Centre Limited, Cairns.
Data Location:
This dataset is filed in the eAtlas enduring data repository at: \data\2016-18-NESP-TWQ-2\2.1.5_Origin-detrimental-sediment
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
With a step-by-step approach, learn to prepare Excel files, data worksheets, and individual data columns for data analysis; practice conditional formatting and creating pivot tables/charts; and review basic principles of Research Data Management as they might apply to an Excel project.
This is a computer exercise that takes you through retrieving multiple time series in CANSIM.
Learn to decide which CSV version of a Statistics Canada data table to download depending on your goals and needs, and learn how to best work with the file in Excel once downloaded.
https://creativecommons.org/publicdomain/zero/1.0/
This dataset contains monthly average U.S. Treasury yields across the curve from 1994-01 through 2024-12, compiled from FRED (Federal Reserve Economic Data, Federal Reserve Bank of St. Louis) and exported to a single formatted Excel file.
Treasury Yields

The file is built from these FRED series (downloaded via FRED's CSV endpoint):

DTB3 (3-Month T-Bill; used as closest proxy to 0.25y)
DTB6 (6-Month T-Bill)
DGS1, DGS2, DGS3, DGS5, DGS7, DGS10, DGS20, DGS30

A Python script compiles these series into the single Excel file.
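The description does not detail the compilation script, so the following is only a hedged sketch of the daily-to-monthly averaging such a script would plausibly perform, using a synthetic series in place of the live download (which for a real series would be, e.g., `pd.read_csv("https://fred.stlouisfed.org/graph/fredgraph.csv?id=DGS10")`):

```python
import pandas as pd

# Synthetic daily yield series standing in for one FRED download:
# 3.0% through January 1994, 4.0% through February 1994.
idx = pd.date_range("1994-01-01", "1994-02-28", freq="D")
daily = pd.Series([3.0] * 31 + [4.0] * 28, index=idx, name="DGS10")

# Average the daily observations to monthly frequency
# ("MS" labels each month by its first day).
monthly = daily.resample("MS").mean()
```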
A comprehensive Quality Assurance (QA) and Quality Control (QC) statistical framework consists of three major phases: Phase 1, preliminary exploration of the raw datasets, including time formatting and combining datasets of different lengths and different time intervals; Phase 2, QA of the datasets, including detecting and flagging duplicates, outliers, and extreme values; and Phase 3, development of a time series of the desired frequency, imputation of missing values, visualization, and a final statistical summary. The time series data collected at the Billy Barr meteorological station (East River Watershed, Colorado) were analyzed. The developed statistical framework is suitable for both real-time and post-data-collection QA/QC analysis of meteorological datasets.

The files in this data package include one Excel file converted to CSV format (Billy_Barr_raw_qaqc.csv) that contains the raw meteorological data, i.e., the input data for the QA/QC analysis. The second CSV file (Billy_Barr_1hr.csv) contains the QA/QC-processed and flagged meteorological data, i.e., the output of the QA/QC analysis. The last file (QAQC_Billy_Barr_2021-03-22.R) is an R script that implements the QA/QC and flagging process. The CSV data files included in this package provide the input and output files used by the R script.
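The released R script implements the actual flagging. Purely as an illustration (not the authors' code), the Phase 2 duplicate and extreme-value checks could be sketched in pandas with synthetic data:

```python
import pandas as pd

# Synthetic hourly temperature record with one duplicated timestamp
# and one physically implausible value.
df = pd.DataFrame({
    "timestamp": pd.to_datetime(["2021-01-01 00:00", "2021-01-01 01:00",
                                 "2021-01-01 01:00", "2021-01-01 02:00",
                                 "2021-01-01 03:00", "2021-01-01 04:00"]),
    "air_temp": [-5.1, -5.3, -5.3, 45.0, -5.0, -4.8],
})

# Phase 2: flag duplicated timestamps and extreme values
# (threshold bounds here are illustrative, not the station's limits).
df["dup_flag"] = df["timestamp"].duplicated(keep="first")
df["extreme_flag"] = (df["air_temp"] < -40) | (df["air_temp"] > 40)

# Rows passing both checks feed into Phase 3 (resampling, imputation).
clean = df[~(df["dup_flag"] | df["extreme_flag"])]
```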
List of the data tables released as part of the Home Office's Immigration system statistics. Summary and detailed data tables covering the immigration system, including out-of-country and in-country visas, asylum, detention, and returns.
If you have any feedback, please email MigrationStatsEnquiries@homeoffice.gov.uk.
The Microsoft Excel .xlsx files may not be suitable for users of assistive technology.
If you use assistive technology (such as a screen reader) and need a version of these documents in a more accessible format, please email MigrationStatsEnquiries@homeoffice.gov.uk
Please tell us what format you need. It will help us if you say what assistive technology you use.
Immigration system statistics, year ending December 2025
Immigration system statistics quarterly release
Immigration system statistics user guide
Publishing detailed data tables in migration statistics
Policy and legislative changes affecting migration to the UK: timeline
Immigration statistics data archives
Passenger arrivals summary tables, year ending December 2025 (ODS, 31.9 KB): https://assets.publishing.service.gov.uk/media/69959366af0772e74df8d2f9/passenger-arrivals-summary-dec-2025-tables.ods
‘Passengers refused entry at the border summary tables’ and ‘Passengers refused entry at the border detailed datasets’ have been discontinued. The latest published versions of these tables are from February 2025 and are available in the ‘Passenger refusals – release discontinued’ section. A similar data series, ‘Refused entry at port and subsequently departed’, is available within the Returns detailed and summary tables.
Electronic travel authorisation detailed datasets, year ending December 2025 (MS Excel Spreadsheet, 58.6 KB): https://assets.publishing.service.gov.uk/media/6995909aa58a315dbe72bf02/electronic-travel-authorisation-datasets-dec-2025.xlsx
ETA_D01: Applications for electronic travel authorisations, by nationality
ETA_D02: Outcomes of applications for electronic travel authorisations, by nationality
Entry clearance visas summary tables, year ending December 2025 (ODS, 58.7 KB): https://assets.publishing.service.gov.uk/media/6996f283a58a315dbe72bfea/visas-summary-dec-2025-tables.ods
Entry clearance visa applications and outcomes detailed datasets, year ending December 2025 (MS Excel Spreadsheet, 29.2 MB): https://assets.publishing.service.gov.uk/media/699590deaf0772e74df8d2f5/entry-clearance-visa-outcomes-datasets-dec-2025.xlsx
Vis_D01: Entry clearance visa applications, by nationality and visa type
Vis_D02: Outcomes of entry clearance visa applications, by nationality, visa type, and outcome
Additional data relating to in country and overseas Vis
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains global sales and profit-related information along with customer, product, and regional details. It is suitable for business analytics, sales performance tracking, and profitability insights.
📊 Included Files: - Excel file (.xlsx) → Contains both the dataset (Sheet 1) and an Excel Dashboard (Sheet 2). - Power BI Dashboard (.pbix) → Built using the same dataset (shared via GitHub/Drive link below). - Screenshots → Sample visuals from the dashboards for quick preview.
📌 Columns in the Dataset: - Customer ID, Customer Name - Quantity Ordered - MSRP, Cost Price, Selling Price - Sales, Profit per Unit, Total Profit/Loss - Status (Completed/Cancelled/Returned) - Order Date, Month, Year - Product, Product Code - City, Country - Deal Size (Small/Medium/Large)
📈 Possible Use Cases: - Sales and profit trend analysis (monthly/yearly) - Customer profitability & segmentation - Regional performance (city & country-level) - Product-wise profitability and sales performance - Deal size impact on revenue and profit - Dashboard creation in Excel and Power BI
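As an illustration of the trend-analysis and regional-performance use cases, a short pandas sketch using column names from the description above ("Sales", "Year", "Month", "Country"); the row values are made up:

```python
import pandas as pd

# Hypothetical rows shaped like the dataset's columns.
df = pd.DataFrame({
    "Year":    [2023, 2023, 2023, 2024],
    "Month":   [1, 1, 2, 1],
    "Country": ["USA", "France", "USA", "USA"],
    "Sales":   [1200.0, 800.0, 950.0, 1500.0],
})

# Monthly sales trend (trend-analysis use case).
monthly_sales = df.groupby(["Year", "Month"])["Sales"].sum()

# Country-level totals, best performer first (regional-performance use case).
by_country = df.groupby("Country")["Sales"].sum().sort_values(ascending=False)
```

The same aggregations are what an Excel pivot table or a Power BI measure would compute over this data.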
👉 Note: This dataset has been used to build both Excel and Power BI Dashboards.
- Excel Dashboard is included inside the .xlsx file.
- Power BI Dashboard (.pbix) is also provided in PDF format.
"This dataset can be used for Business Analytics, Customer Analysis, and building Dashboards in Power BI & Excel."
The USDA Agricultural Research Service (ARS) recently established SCINet, which consists of a shared high performance computing resource, Ceres, and the dedicated high-speed Internet2 network used to access Ceres. Current and potential SCINet users are using and generating very large datasets, so SCINet needs to be provisioned with adequate data storage for their active computing. It is not designed to hold data beyond active research phases. At the same time, the National Agricultural Library has been developing the Ag Data Commons, a research data catalog and repository designed for public data release and professional data curation. Ag Data Commons needs to anticipate the size and nature of data it will be tasked with handling. The ARS Web-enabled Databases Working Group, organized under the SCINet initiative, conducted a study to establish baseline data storage needs and practices, and to make projections that could inform future infrastructure design, purchases, and policies. The SCINet Web-enabled Databases Working Group helped develop the survey, which is the basis for an internal report. While the report was for internal use, the survey and resulting data may be generally useful and are being released publicly.

From October 24 to November 8, 2016, we administered a 17-question survey (Appendix A) by emailing a Survey Monkey link to all ARS Research Leaders, intending to cover the data storage needs of all 1,675 SY (Category 1 and Category 4) scientists. We designed the survey to accommodate either individual researcher responses or group responses. Research Leaders could decide, based on their unit's practices or their management preferences, whether to delegate the response to a data management expert in their unit, to ask all members of their unit to respond, or to collate responses from their unit themselves before reporting in the survey.
Larger storage ranges cover vastly different amounts of data, so the implications could be significant depending on whether the true amount is at the lower or higher end of the range. Therefore, we requested more detail from "Big Data users," those 47 respondents who indicated they had more than 10 to 100 TB, or over 100 TB, of total current data (Q5). All other respondents are called "Small Data users." Because not all of these follow-up requests were successful, we used actual follow-up responses to estimate likely responses for those who did not respond. We defined active data as data that would be used within the next six months; all other data would be considered inactive, or archival. To calculate per-person storage needs we used the high end of the reported range divided by 1 for an individual response, or by G, the number of individuals in a group response. For Big Data users we used the actual reported values or estimated likely values.

Resources in this dataset:

Resource Title: Appendix A: ARS data storage survey questions.
File Name: Appendix A.pdf
Resource Description: The full list of questions asked with the possible responses. The survey was not administered using this PDF; the PDF was generated directly from the administered survey using the Print option under Design Survey. Asterisked questions were required. A list of Research Units and their associated codes was provided in a drop-down not shown here.
Resource Software Recommended: Adobe Acrobat, url: https://get.adobe.com/reader/

Resource Title: CSV of Responses from ARS Researcher Data Storage Survey.
File Name: Machine-readable survey response data.csv
Resource Description: CSV file that includes raw responses from the administered survey, as downloaded unfiltered from Survey Monkey, including incomplete responses. Also includes additional classification and calculations to support analysis. Individual email addresses and IP addresses have been removed. This is the same data as in the Excel spreadsheet (also provided).

Resource Title: Responses from ARS Researcher Data Storage Survey.
File Name: Data Storage Survey Data for public release.xlsx
Resource Description: MS Excel worksheet that includes raw responses from the administered survey, as downloaded unfiltered from Survey Monkey, including incomplete responses. Also includes additional classification and calculations to support analysis. Individual email addresses and IP addresses have been removed.
Resource Software Recommended: Microsoft Excel, url: https://products.office.com/en-us/excel
The zip file contains two Excel workbooks for simulated historic and climate-change-perturbed runoff inflow series. Each Excel workbook has a read-me file that describes the various entries.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Linear trend analysis of time series is standard procedure in many scientific disciplines. If the number of data points is large, a trend may be statistically significant even if the data are scattered far from the trend line. This study introduces and tests a quality criterion for time trends referred to as statistical meaningfulness, which is a stricter quality criterion for trends than high statistical significance. The time series is divided into intervals and interval mean values are calculated. Thereafter, r2 and p values are calculated from regressions concerning time and interval mean values. If r2≥0.65 at p≤0.05 in any of these regressions, then the trend is regarded as statistically meaningful. Out of ten investigated time series from different scientific disciplines, five displayed statistically meaningful trends. A Microsoft Excel application (add-in) was developed which can perform statistical meaningfulness tests and which may increase the operationality of the test. The presented method for distinguishing statistically meaningful trends should be reasonably uncomplicated for researchers with basic statistics skills and may thus be useful for determining which trends are worth analysing further, for instance with respect to causal factors. The method can also be used for determining which segments of a time trend may be particularly worthwhile to focus on.
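The paper's Excel add-in performs the test; the procedure as described, interval means followed by a regression against time with an r2 ≥ 0.65 and p ≤ 0.05 cutoff, can be sketched independently (this is not the authors' implementation, and the interval-splitting details are assumptions):

```python
import numpy as np
from scipy.stats import linregress

def statistically_meaningful(t, y, n_intervals, r2_min=0.65, p_max=0.05):
    """Regress interval means against interval mean times and apply
    the r2 >= 0.65 at p <= 0.05 criterion described above."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    # Split the series into consecutive intervals and average each one.
    t_means = np.array([c.mean() for c in np.array_split(t, n_intervals)])
    y_means = np.array([c.mean() for c in np.array_split(y, n_intervals)])
    res = linregress(t_means, y_means)
    return bool(res.rvalue ** 2 >= r2_min and res.pvalue <= p_max)

# A noisy but steady upward trend: interval averaging smooths the scatter,
# so the trend should qualify as statistically meaningful.
rng = np.random.default_rng(0)
t = np.arange(100.0)
y = 0.5 * t + rng.normal(0, 5.0, size=100)
meaningful = statistically_meaningful(t, y, n_intervals=5)
```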
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
File 2 includes seven Excel files, named Data for Time-series regression in 2007 through Data for Time-series regression in 2013. Specifically, we average all 25 portfolios' realized jump measures in 2007 from File 1 into a single series in the file "Data for Horse running regression in 2007". The Excel files contain the jump variables and the other variables, among which the value-weighted monthly returns of the 25 portfolios are taken directly from the RESSET database.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The folder named data-C-galaxies input contains the input data, reported in Sofue's database in Excel format, for the 291 galaxies of this series, while the corresponding data-C-galaxies-output files present the detailed fitting results for each galaxy of the C-Series. The first plot is the global dispersion curve. The Excel file following this dispersion curve summarizes the fitting parameters, the estimated mass, the maximal velocity and the SNR results obtained for the individual best-fitting plots for each galaxy. These plots are presented successively after the Excel files. Similarly, the files named data-P-galaxies input and data-P-galaxies-output, as well as data-S-galaxies input and data-S-galaxies-output, report the input data and the best fitting results for the 31 galaxies of the P-Series and the 229 galaxies of the S-Series respectively. As seen in the last lines of the Excel files, overall, the mean SNR and its standard deviation are 25.2 (3.8) dB for the C-series, 23.6 (5.2) dB for the P-series and 22.1 (5.9) dB for the S-series, which can be considered very good for a two-parameter fit.
Data are transition probabilities of moving across full-time employment, voluntary part-time employment, involuntary part-time employment, unemployment, and nonparticipation. Data are calculated at the monthly frequency and cover U.S. workers over the period from 1976 until 2019. The content of each *_baseline MS Excel data file is as follows: time series of seasonally adjusted stocks (normalized by the corresponding population size), and time series of seasonally adjusted transition probabilities, corrected for margin error and time aggregation bias. The content of each *_reclassified MS Excel data file is identical, but transition probabilities are in addition adjusted for potentially spurious transitions.
These tables present high-level breakdowns and time series. A list of all tables, including those discontinued, is available in the table index. More detailed data is available in our data tools, or by downloading the open dataset.
We are proposing to make some changes to these tables in future; further details can be found alongside the latest provisional statistics.
The tables below are the final annual statistics for 2024, currently the latest available data. Provisional statistics for the first half of 2025 are also available, with provisional data for the whole of 2025 scheduled for publication in May 2026.
A list of all reported road collisions and casualties data tables and variables in our data download tool is available in the Tables index (ODS, 28.9 KB): https://assets.publishing.service.gov.uk/media/6925869422424e25e6bc3105/reported-road-casualties-gb-index-of-tables.ods
Reported road collisions and casualties data tables (ZIP, 11.2 MB): https://assets.publishing.service.gov.uk/media/68d42292b6c608ff9421b2d2/ras-all-tables-excel.zip
RAS0101: Collisions, casualties and vehicles involved by road user type since 1926 (ODS, 34.7 KB): https://assets.publishing.service.gov.uk/media/68d3cdeeca266424b221b253/ras0101.ods
RAS0102: Casualties and casualty rates, by road user type and age group, since 1979 (ODS, 129 KB): https://assets.publishing.service.gov.uk/media/68d3cdfee65dc716bfb1dcf3/ras0102.ods
RAS0201: Numbers and rates (ODS, 37.5 KB): https://assets.publishing.service.gov.uk/media/68d3ce0bc908572e81248c1f/ras0201.ods
RAS0202: Sex and age group (ODS, 178 KB): https://assets.publishing.service.gov.uk/media/68d3ce17b6c608ff9421b25e/ras0202.ods
RAS0203: Rates by mode, including air, water and rail modes (ODS, 24.5 KB): https://assets.publishing.service.gov.uk/media/6937f3b0e447374889cd8f3d/ras0203.ods
RAS0301: Speed limit, built-up and non-built-up roads (ODS): https://assets.publishing.service.gov.uk/media/68d3ce2b8c739d679fb1dcf6/ras0301.ods
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
1. Introduction
Sales data collection is a crucial aspect of any manufacturing industry as it provides valuable insights about the performance of products, customer behaviour, and market trends. By gathering and analysing this data, manufacturers can make informed decisions about product development, pricing, and marketing strategies in Internet of Things (IoT) business environments like the dairy supply chain.
One of the most important benefits of the sales data collection process is that it allows manufacturers to identify their most successful products and target their efforts towards those areas. For example, if a manufacturer notices that a particular product is selling well in a certain region, this information can be used to develop new products, improve existing ones, or optimise the supply chain to meet the changing needs of customers.
This dataset includes information about 7 of MEVGAL's products [1]. The published data will help researchers understand the dynamics of the dairy market and its consumption patterns, creating fertile ground for synergies between academia and industry and eventually helping the industry make informed decisions regarding product development, pricing and market strategies in the IoT playground. The dataset could also be used to understand the impact of external factors on the dairy market, such as economic, environmental, and technological factors, and to assess the current state of the dairy industry and identify potential opportunities for growth and development.
Please cite the following papers when using this dataset:
I. Siniosoglou, K. Xouveroudis, V. Argyriou, T. Lagkas, S. K. Goudos, K. E. Psannis and P. Sarigiannidis, "Evaluating the Effect of Volatile Federated Timeseries on Modern DNNs: Attention over Long/Short Memory," in the 12th International Conference on Circuits and Systems Technologies (MOCAST 2023), April 2023, Accepted
The dataset includes data regarding the daily sales of a series of dairy product codes offered by MEVGAL. In particular, the dataset includes information gathered by the logistics division and agencies within the industrial infrastructures overseeing the production of each product code. The products included in this dataset represent the daily sales and logistics of a variety of yogurt-based stock. Each file includes the logistics for one product on a daily basis for three years, from 2020 to 2022.
3.1 Data Collection
The process of building this dataset involves several steps to ensure that the data is accurate, comprehensive and relevant.
The first step is to determine the specific data that is needed to support the business objectives of the industry, i.e., in this publication’s case the daily sales data.
Once the data requirements have been identified, the next step is to implement an effective sales data collection method. In MEVGAL’s case this is conducted through direct communication and reports generated each day by representatives & selling points.
It is also important for MEVGAL to ensure that the data collection process is conducted in an ethical and compliant manner, adhering to data privacy laws and regulations. The industry also has a data management plan in place to ensure that the data is securely stored and protected from unauthorised access.
The published dataset consists of 13 features providing information about the date and the number of products that were sold. Finally, the dataset was anonymised in consideration of the privacy requirements of the data owner (MEVGAL).
File | Period | Number of Samples (days)
product 1 2020.xlsx | 01/01/2020–31/12/2020 | 363
product 1 2021.xlsx | 01/01/2021–31/12/2021 | 364
product 1 2022.xlsx | 01/01/2022–31/12/2022 | 365
product 2 2020.xlsx | 01/01/2020–31/12/2020 | 363
product 2 2021.xlsx | 01/01/2021–31/12/2021 | 364
product 2 2022.xlsx | 01/01/2022–31/12/2022 | 365
product 3 2020.xlsx | 01/01/2020–31/12/2020 | 363
product 3 2021.xlsx | 01/01/2021–31/12/2021 | 364
product 3 2022.xlsx | 01/01/2022–31/12/2022 | 365
product 4 2020.xlsx | 01/01/2020–31/12/2020 | 363
product 4 2021.xlsx | 01/01/2021–31/12/2021 | 364
product 4 2022.xlsx | 01/01/2022–31/12/2022 | 364
product 5 2020.xlsx | 01/01/2020–31/12/2020 | 363
product 5 2021.xlsx | 01/01/2021–31/12/2021 | 364
product 5 2022.xlsx | 01/01/2022–31/12/2022 | 365
product 6 2020.xlsx | 01/01/2020–31/12/2020 | 362
product 6 2021.xlsx | 01/01/2021–31/12/2021 | 364
product 6 2022.xlsx | 01/01/2022–31/12/2022 | 365
product 7 2020.xlsx | 01/01/2020–31/12/2020 | 362
product 7 2021.xlsx | 01/01/2021–31/12/2021 | 364
product 7 2022.xlsx | 01/01/2022–31/12/2022 | 365
3.2 Dataset Overview
The following table enumerates and explains the features included in all of the files.
Feature | Description | Unit
Day | Day of the month | -
Month | Month | -
Year | Year | -
daily_unit_sales | Number of units of the product sold on that day | units
previous_year_daily_unit_sales | Number of units sold on the same day of the previous year | units
percentage_difference_daily_unit_sales | Percentage difference between the two values above | %
daily_unit_sales_kg | Quantity of the product sold on that day, in kilograms | kg
previous_year_daily_unit_sales_kg | Quantity sold, in kilograms, on the same day of the previous year | kg
percentage_difference_daily_unit_sales_kg | Percentage difference between the two values above | %
daily_unit_returns_kg | Percentage of the products shipped to selling points that were returned | %
previous_year_daily_unit_returns_kg | Percentage of the products shipped to selling points that were returned in the previous year | %
points_of_distribution | Number of sales representatives through which the product was sold to the market this year | -
previous_year_points_of_distribution | Number of sales representatives through which the product was sold to the market on the same day of the previous year | -

Table 1 – Dataset Feature Description
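The exact formula behind the percentage-difference features is not stated in the documentation; assuming the common convention of difference relative to the previous year's value, the relation between the three sales columns can be sketched as:

```python
import pandas as pd

# Hypothetical daily rows using the feature names from Table 1.
df = pd.DataFrame({
    "daily_unit_sales":               [110.0, 90.0],
    "previous_year_daily_unit_sales": [100.0, 120.0],
})

# Assumed convention: percentage change relative to last year's value.
df["percentage_difference_daily_unit_sales"] = (
    (df["daily_unit_sales"] - df["previous_year_daily_unit_sales"])
    / df["previous_year_daily_unit_sales"] * 100.0
)
```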
4.1 Dataset Structure
The provided dataset has the following structure:
Where:
Name | Type | Property
Readme.docx | Report | A file that contains the documentation of the dataset.
product X | Folder | A folder containing the data of product X.
product X YYYY.xlsx | Data file | An Excel file containing the sales data of product X for year YYYY.

Table 2 – Dataset File Description
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 957406 (TERMINET).
References
[1] MEVGAL is a Greek dairy production company
This data set was acquired with a DSPL HOBO HighTemp Temperature Probe and Major Fluid Sampler assembled as part of the 1991 EPR:9N_VonDamm data compilation (Chief Scientist: Dr. Karen Von Damm; Investigators: Dr. Julie Bryce, Florencia Prado, and Dr. Karen Von Damm). The data files are in Microsoft Excel format and include Fluid Chemistry and Temperature time series data and were processed after data collection. Funding was provided by NSF grant OCE03-27126. This data was cited by Oosting and Von Damm, 1996, Von Damm et al., 1997, Ravizza et al., 2001, Von Damm, 2000, Von Damm, 2004, Von Damm and Lilley, 2004, and Haymon et al., 1993.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This data product contains statistics on wheat (including the five classes of wheat: hard red winter, hard red spring, soft red winter, white, and durum) and rye. It includes data published in the monthly Wheat Outlook and the previously annual Wheat Yearbook. Data are monthly, quarterly, and/or annual depending upon the data series. Most data are on a marketing year basis, but some are calendar year. This record was taken from the USDA Enterprise Data Inventory that feeds into the https://data.gov catalog. Data for this record includes the following resources: web page with links to Excel files. For complete information, please visit https://data.gov.
Raw data from six species of raptors used to evaluate the percentage of GPS fixes and the percentage of the home range area, or 95% kernel area, captured by the buffer circle. In each file, the first worksheet gives details on the length of each year and season per individual; the second worksheet counts the individual and average home range and the percentage of GPS fixes captured by the calculated buffer circle during each year and season; the third worksheet calculates the individual and average percentage of the area captured by the calculated buffer circle; and the fourth worksheet calculates the individual-year and average annual percentage of the home range covered by the calculated buffer circle, and the size at which the buffer circle captures 95% of the species' home range.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset included with this article contains three files describing and defining the sample and variables for the VAT impact study. Excel file 1 consists of all raw and filtered data for the variables in the panel data sample. Excel file 2 depicts time-series and cross-sectional data for nonfinancial firms listed on the Saudi market for the second and third quarters of 2019 and the third and fourth quarters of 2020. Excel file 3 presents the raw material of the variables used in measuring company profitability for the panel data sample.
Attribution 3.0 (CC BY 3.0) https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
This dataset documents the spatial and temporal variability of resuspension events and sediment dynamics at seven Great Barrier Reef Lagoon inshore locations using continuous logger data (10 min sampling intervals) over 2 ½ years, and analyses the quantity of sediment collected in newly designed sediment traps. The dataset highlights the influence of river discharge events on sediment dynamics across these locations.
*This dataset is under an embargo period until the end of the project extension
Methods: Nephelometers were sourced from the Marine Geophysics Laboratory, James Cook University. Prior to deployment, each instrument is calibrated using the laboratory's standard procedure, with calibrations performed for turbidity (to normalise readings for standardisation across all instruments), pressure and light. The instruments were deployed for periods of 2 to 5 months at seven inshore locations. On retrieval, the data from each instrument are downloaded and pasted into a spreadsheet where the pre-deployment calibrations are applied to produce the time series.
Site calibrations of benthic sediment to instrument-normalised turbidity readings are applied to convert the NTU turbidity measurements to suspended sediment concentrations (SSC). This spreadsheet is also used to remove spurious data from the time series caused by instrument fouling, malfunction or obstruction. In some cases the instrument was flooded or lost, so no data are available for that deployment period. In other cases, one or more of the parameters was not recorded by the instrument, so only the reliable data have been plotted. The current meter (Marotte) was also sourced from the Marine Geophysics Laboratory, James Cook University, and its data were downloaded using the laboratory's software.
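The site calibration step described above can be sketched as a simple linear conversion. The slope and intercept below are illustrative placeholders only, not the project's actual site coefficients, which differ per location:

```python
# Minimal sketch of a linear site calibration converting normalised
# turbidity (NTUe) to suspended sediment concentration (SSC, mg/L).
# The coefficients are hypothetical placeholders for illustration.
def ntue_to_ssc(ntue, slope=1.2, intercept=0.3):
    """Apply a linear site calibration: SSC = slope * NTUe + intercept."""
    return slope * ntue + intercept
```

In practice each site has its own fitted slope and intercept, derived from resuspending local benthic sediment against the instrument's normalised turbidity readings.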
We note that the turbidity, wave pressure and light data provided have been thoroughly checked following QA/QC procedures. However, the temperature and current meter data have not been thoroughly checked, and there will be instances where the data from these instruments are spurious. We caution against using these data unless a thorough QA/QC process is implemented. In some cases on longer deployments, the earliest data were overwritten on the nephelometer and have therefore been lost.
Format: The data are provided as Microsoft Excel files (a separate file for each site). Because Excel limits the number of rows per worksheet, the time series data are spread across three worksheets in each file.
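Recombining a site's time series from the three worksheets can be sketched with pandas; the file name, sheet layout and column names below follow the data dictionary, but treat them as assumptions to verify against the actual files:

```python
import pandas as pd

# Hypothetical sketch: stitch one site's time series back together after
# it was split across worksheets to stay under Excel's row limit.
def combine_worksheets(sheets):
    """Concatenate a {sheet_name: DataFrame} dict into one time series,
    parsed and sorted by the TIMESTAMP column (DD/MM/YYYY format)."""
    df = pd.concat(sheets.values(), ignore_index=True)
    df["TIMESTAMP"] = pd.to_datetime(df["TIMESTAMP"], dayfirst=True)
    return df.sort_values("TIMESTAMP").reset_index(drop=True)

# Usage with an actual site file (sheet_name=None loads all worksheets):
# sheets = pd.read_excel("TIMESERIES.XLSX", sheet_name=None)
# ts = combine_worksheets(sheets)
```

Note `dayfirst=True`, since the timestamps use DD/MM/YYYY order.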
Data Dictionary:
TIMESERIES.XLSX for each location
TIMESTAMP: date and time of measurement at 10 minute frequency [DD/MM/YYYY Hour:Minute]
NEPHELOMETER DATA
NTUe: turbidity measurements (NTUe)
SSC (mg/L): suspended sediment concentrations converted from NTUe measurements
LIGHT (uE/cm2): measure of light per 10 minute sensor reading
DEPTH (m): depth of instrument below the surface
RMS: measure of wave pressure
TEMP (degrees C): water temperature
CURRENT METER DATA
speed (m/s)
heading (degrees CW from North)
speed upper (m/s)
speed lower (m/s)
tilt (radians)
direction (radians CCW from East)
batt (volts)
temp (Celsius)
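The current meter reports flow direction in two conventions: heading in degrees clockwise from North, and direction in radians counter-clockwise from East. A hedged sketch of the standard conversion between the two (not taken from the Marotte software, whose internals are not documented here):

```python
import math

# Sketch of the usual compass-to-mathematical angle conversion:
# degrees clockwise from North -> radians counter-clockwise from East.
def heading_to_direction(heading_deg):
    """Return the direction in radians CCW from East, in [0, 2*pi)."""
    return math.radians((90.0 - heading_deg) % 360.0)
```

For example, a heading of 90 degrees (due East) maps to a direction of 0 radians, and a heading of 0 degrees (due North) maps to pi/2 radians.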
References: Lewis, S., Bainbridge, Z., Stevens, T., Garzon-Garcia, A., Chen, C., Burton, J., Bahadori, M., Rezaei Rashti, M., Gorman, J., Smithers, S., Olley, J., Moody, P., Dehayr, R. (2018) Sediment tracing from the catchment to reef: preliminary results from 2018 flood plume case studies, logger and sediment trap time series and an overview of project progress. Report to the National Environmental Science Programme. Reef and Rainforest Research Centre Limited, Cairns.
Data Location:
This dataset is filed in the eAtlas enduring data repository at: \data\2016-18-NESP-TWQ-2\2.1.5_Origin-detrimental-sediment