The USDA Agricultural Research Service (ARS) recently established SCINet, which consists of a shared high-performance computing resource, Ceres, and the dedicated high-speed Internet2 network used to access Ceres. Current and potential SCINet users are using and generating very large datasets, so SCINet needs to be provisioned with adequate data storage for their active computing. It is not designed to hold data beyond active research phases. At the same time, the National Agricultural Library has been developing the Ag Data Commons, a research data catalog and repository designed for public data release and professional data curation. Ag Data Commons needs to anticipate the size and nature of data it will be tasked with handling. The ARS Web-enabled Databases Working Group, organized under the SCINet initiative, conducted a study to establish baseline data storage needs and practices, and to make projections that could inform future infrastructure design, purchases, and policies. The working group helped develop the survey on which an internal report is based. While the report was for internal use, the survey and resulting data may be generally useful and are being released publicly.
From October 24 to November 8, 2016, we administered a 17-question survey (Appendix A) by emailing a Survey Monkey link to all ARS Research Leaders, intending to cover the data storage needs of all 1,675 SY (Category 1 and Category 4) scientists. We designed the survey to accommodate either individual researcher responses or group responses. Research Leaders could decide, based on their unit's practices or their management preferences, whether to delegate the response to a data management expert in their unit, to delegate it to all members of their unit, or to collate responses from their unit themselves before reporting in the survey.
Larger storage ranges cover vastly different amounts of data, so the implications here could be significant depending on whether the true amount is at the lower or higher end of the range. We therefore requested more detail from "Big Data users," the 47 respondents who reported total current data in the "more than 10 to 100 TB" or "over 100 TB" ranges (Q5). All other respondents are called "Small Data users." Because not all of these follow-up requests were successful, we used the actual follow-up responses to estimate likely responses for those who did not respond. We defined active data as data that would be used within the next six months; all other data were considered inactive, or archival. To calculate per-person storage needs we used the high end of the reported range, divided by 1 for an individual response or by G, the number of individuals in a group response. For Big Data users we used the actual reported values or estimated likely values.
Resources in this dataset:
Resource Title: Appendix A: ARS data storage survey questions.
File Name: Appendix A.pdf
Resource Description: The full list of questions asked, with the possible responses. The survey was not administered using this PDF; the PDF was generated directly from the administered survey using the Print option under Design Survey. Asterisked questions were required. A list of Research Units and their associated codes was provided in a drop-down not shown here.
Resource Software Recommended: Adobe Acrobat, url: https://get.adobe.com/reader/
Resource Title: CSV of Responses from ARS Researcher Data Storage Survey.
File Name: Machine-readable survey response data.csv
Resource Description: CSV file that includes raw responses from the administered survey, as downloaded unfiltered from Survey Monkey, including incomplete responses. Also includes additional classification and calculations to support analysis. Individual email addresses and IP addresses have been removed. This is the same data as in the Excel spreadsheet (also provided).
Resource Title: Responses from ARS Researcher Data Storage Survey.
File Name: Data Storage Survey Data for public release.xlsx
Resource Description: MS Excel worksheet that includes raw responses from the administered survey, as downloaded unfiltered from Survey Monkey, including incomplete responses. Also includes additional classification and calculations to support analysis. Individual email addresses and IP addresses have been removed.
Resource Software Recommended: Microsoft Excel, url: https://products.office.com/en-us/excel
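As a minimal illustration of the per-person calculation described above (the function name and the example range are ours, not from the survey), the rule is simply the high end of the reported range divided by the number of respondents covered:

```python
# Hypothetical sketch of the per-person storage rule described above: the high
# end of the reported range, divided by 1 for an individual response or by G,
# the number of individuals covered by a group response.
def per_person_storage_tb(range_high_tb: float, group_size: int = 1) -> float:
    """Per-person storage estimate (TB) from the high end of a reported range."""
    return range_high_tb / group_size

# Example: a group response covering 5 scientists who selected an illustrative
# 1 to 10 TB range.
print(per_person_storage_tb(10.0, group_size=5))  # -> 2.0 TB per person
```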
This dataset was created by denggui feng613.
In order to test hypotheses about groundwater flow under and into estuaries and the Atlantic Ocean, geophysical surveys, geophysical probing, submarine groundwater sampling, and sediment coring were conducted by U.S. Geological Survey (USGS) scientists at Cape Cod National Seashore (CCNS) from 2004 through 2006. Coastal resource managers at CCNS and elsewhere are concerned about nutrients entering coastal waters via submarine groundwater discharge, which contribute to eutrophication and harmful algal blooms. The research carried out as part of the study described here was designed, in part, to help refine assumptions required by earlier versions of models about the nature of submarine groundwater flow and discharge at CCNS. This study was conducted in four phases, with a variety of field techniques and equipment employed in each phase. Phase 1 consisted of continuous resistivity profiling (CRP) surveys of the entire study area, conducted in 2004. Phase 2 consisted of CRP ground-truthing via resistivity probe measurements and submarine groundwater sampling from hydraulically-driven piezometers using a barge in the Salt Pond/Nauset Marsh area in 2005. Phase 3 consisted of supplemental detailed CRP surveys in the Salt Pond/Nauset Marsh area in 2006. Finally, Phase 4 consisted of sediment coring and porewater extraction in the Salt Pond/Nauset Marsh area later in 2006 to supplement the 2005 sampling.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset is about book series. It has 1 row and is filtered where the book is Microsoft Excel 2000 : introductory concepts and techniques. It features 10 columns including number of authors, number of books, earliest publication date, and latest publication date.
The annual Retail store data CD-ROM is an easy-to-use tool for quickly discovering retail trade patterns and trends. The current product presents results from the 1999 and 2000 Annual Retail Store and Annual Retail Chain surveys. This product contains numerous cross-classified data tables using the North American Industry Classification System (NAICS). The data tables provide access to a wide range of financial variables, such as revenues, expenses, inventory, sales per square footage (chain stores only) and the number of stores. Most data tables contain detailed information on industry (as low as 5-digit NAICS codes), geography (Canada, provinces and territories) and store type (chains, independents, franchises). The electronic product also contains survey metadata, questionnaires, information on industry codes and definitions, and the list of retail chain store respondents.
Public Domain Dedication (CC0 1.0): https://creativecommons.org/publicdomain/zero/1.0/
PROJECT OBJECTIVE
We are part of XYZ Co Pvt Ltd, a company in the business of organizing sports events at the international level. Countries nominate sportsmen from different departments, and our team has been given the responsibility to systematize the membership roster and generate different reports as per business requirements.
Questions (KPIs)
TASK 1: STANDARDIZING THE DATASET
TASK 2: DATA FORMATTING
TASK 3: SUMMARIZE DATA - PIVOT TABLE (Use SPORTSMEN worksheet after attempting TASK 1)
• Create a PIVOT table in the worksheet ANALYSIS, starting at cell B3, with the following details:
TASK 4: SUMMARIZE DATA - EXCEL FUNCTIONS (Use SPORTSMEN worksheet after attempting TASK 1)
• Create a SUMMARY table in the worksheet ANALYSIS, starting at cell G4, with the following details:
TASK 5: GENERATE REPORT - PIVOT TABLE (Use SPORTSMEN worksheet after attempting TASK 1)
• Create a PIVOT table report in the worksheet REPORT, starting at cell A3, with the following information:
Process
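The tasks above are meant to be performed directly in Excel, but the same pivot summarization can be sketched in Python with pandas. This is a minimal, hedged illustration: the workbook name and the column names (Country, Department, Name) are assumptions, since the actual SPORTSMEN layout ships with the task files.

```python
import pandas as pd

# Load the membership roster (hypothetical file and column names).
df = pd.read_excel("membership_roster.xlsx", sheet_name="SPORTSMEN")

# TASK 3 analogue: count sportsmen per country and department.
pivot = pd.pivot_table(df, index="Country", columns="Department",
                       values="Name", aggfunc="count", fill_value=0)

# Write the summary so it starts at cell B3 of an ANALYSIS sheet,
# mirroring the workbook structure the tasks describe.
with pd.ExcelWriter("membership_roster.xlsx", mode="a",
                    engine="openpyxl", if_sheet_exists="replace") as writer:
    pivot.to_excel(writer, sheet_name="ANALYSIS", startrow=2, startcol=1)
```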
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
1. Introduction
Sales data collection is a crucial aspect of any manufacturing industry as it provides valuable insights about the performance of products, customer behaviour, and market trends. By gathering and analysing this data, manufacturers can make informed decisions about product development, pricing, and marketing strategies in Internet of Things (IoT) business environments like the dairy supply chain.
One of the most important benefits of the sales data collection process is that it allows manufacturers to identify their most successful products and target their efforts towards those areas. For example, if a manufacturer notices that a particular product is selling well in a certain region, this information can be used to develop new products or improve existing ones, and to optimise the supply chain to meet the changing needs of customers.
This dataset includes information about 7 of MEVGAL's products [1]. The published data will help researchers understand the dynamics of the dairy market and its consumption patterns, creating fertile ground for synergies between academia and industry and eventually helping the industry make informed decisions regarding product development, pricing and market strategies in the IoT playground. The dataset could also be used to understand the impact of various external factors on the dairy market, such as economic, environmental, and technological factors, and could help in understanding the current state of the dairy industry and identifying potential opportunities for growth and development.
Please cite the following papers when using this dataset:
I. Siniosoglou, K. Xouveroudis, V. Argyriou, T. Lagkas, S. K. Goudos, K. E. Psannis and P. Sarigiannidis, "Evaluating the Effect of Volatile Federated Timeseries on Modern DNNs: Attention over Long/Short Memory," in the 12th International Conference on Circuits and Systems Technologies (MOCAST 2023), April 2023, Accepted
The dataset includes data regarding the daily sales of a series of dairy product codes offered by MEVGAL. In particular, the dataset includes information gathered by the logistics division and agencies within the industrial infrastructures overseeing the production of each product code. The products included in this dataset represent the daily sales and logistics of a variety of yogurt-based stock. Each file includes the logistics for one product on a daily basis for three years, from 2020 to 2022.
3.1 Data Collection
The process of building this dataset involves several steps to ensure that the data is accurate, comprehensive and relevant.
The first step is to determine the specific data that is needed to support the business objectives of the industry, i.e., in this publication’s case the daily sales data.
Once the data requirements have been identified, the next step is to implement an effective sales data collection method. In MEVGAL's case this is conducted through direct communication and reports generated each day by representatives and selling points.
It is also important for MEVGAL to ensure that the data collection process is conducted in an ethical and compliant manner, adhering to data privacy laws and regulations. The industry also has a data management plan in place to ensure that the data is securely stored and protected from unauthorised access.
The published dataset consists of 13 features providing information about the date and the number of products that have been sold. Finally, the dataset was anonymised in consideration of the privacy requirements of the data owner (MEVGAL).
File                 Period                  Number of Samples (days)
product 1 2020.xlsx  01/01/2020–31/12/2020   363
product 1 2021.xlsx  01/01/2021–31/12/2021   364
product 1 2022.xlsx  01/01/2022–31/12/2022   365
product 2 2020.xlsx  01/01/2020–31/12/2020   363
product 2 2021.xlsx  01/01/2021–31/12/2021   364
product 2 2022.xlsx  01/01/2022–31/12/2022   365
product 3 2020.xlsx  01/01/2020–31/12/2020   363
product 3 2021.xlsx  01/01/2021–31/12/2021   364
product 3 2022.xlsx  01/01/2022–31/12/2022   365
product 4 2020.xlsx  01/01/2020–31/12/2020   363
product 4 2021.xlsx  01/01/2021–31/12/2021   364
product 4 2022.xlsx  01/01/2022–31/12/2022   364
product 5 2020.xlsx  01/01/2020–31/12/2020   363
product 5 2021.xlsx  01/01/2021–31/12/2021   364
product 5 2022.xlsx  01/01/2022–31/12/2022   365
product 6 2020.xlsx  01/01/2020–31/12/2020   362
product 6 2021.xlsx  01/01/2021–31/12/2021   364
product 6 2022.xlsx  01/01/2022–31/12/2022   365
product 7 2020.xlsx  01/01/2020–31/12/2020   362
product 7 2021.xlsx  01/01/2021–31/12/2021   364
product 7 2022.xlsx  01/01/2022–31/12/2022   365
3.2 Dataset Overview
The following table enumerates and explains the features included in all of the files.

Feature                                     Description                                                                    Unit
Day                                         Day of the month                                                               -
Month                                       Month                                                                          -
Year                                        Year                                                                           -
daily_unit_sales                            Daily sales: the number of products, measured in units, sold on that day      units
previous_year_daily_unit_sales              Units sold on the same day of the previous year                                units
percentage_difference_daily_unit_sales      Percentage difference between the two values above                             %
daily_unit_sales_kg                         The amount of products, measured in kilograms, sold on that day               kg
previous_year_daily_unit_sales_kg           Kilograms sold on the same day of the previous year                            kg
percentage_difference_daily_unit_sales_kg   Percentage difference between the two values above                             %
daily_unit_returns_kg                       Percentage of the products shipped to selling points that were returned       %
previous_year_daily_unit_returns_kg         The same return percentage for the previous year                               %
points_of_distribution                      Number of sales representatives through which the product was sold this year  -
previous_year_points_of_distribution        Number of sales representatives for the same day of the previous year         -

Table 1 – Dataset Feature Description
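As a sanity check, the percentage-difference features can presumably be reconstructed from the paired sales columns. The sketch below assumes the conventional formula (current - previous) / previous * 100; the documentation does not state the formula, so this is an assumption to verify against the published column.

```python
import pandas as pd

# Load one product-year file (first sheet assumed to hold the features above).
df = pd.read_excel("product 1 2020.xlsx")

# Assumed definition: (current - previous) / previous * 100.
reconstructed = (
    (df["daily_unit_sales"] - df["previous_year_daily_unit_sales"])
    / df["previous_year_daily_unit_sales"] * 100
)

# Largest deviation from the published column; ~0 would confirm the assumption.
print((reconstructed - df["percentage_difference_daily_unit_sales"]).abs().max())
```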
4.1 Dataset Structure
The provided dataset has the following structure:

Name                 Type       Property
Readme.docx          Report     A file that contains the documentation of the dataset.
product X            Folder     A folder containing the data of product X.
product X YYYY.xlsx  Data file  An Excel file containing the sales data of product X for year YYYY.

Table 2 – Dataset File Description
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 957406 (TERMINET).
References
[1] MEVGAL is a Greek dairy production company.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
About Dataset
Authors: Safa S. Abdul-Jabbar, Alaa K. Farhan
Context: This is the first dataset of various ordinary patients in Iraq. The dataset provides the patients' Complete Blood Count (CBC) test information, which can be used to create a hematology diagnosis/prediction system. The data were collected in 2022 from Al-Zahraa Al-Ahly Hospital. They can be cleaned and analyzed using any programming language, because they are provided in an Excel file that can be accessed and manipulated easily (a loading sketch follows the feature list below). Users just need to understand how rows and columns are arranged, because the data were collected as CBC images from the laboratories and the extracted values were then stored in an Excel file.
Content: This dataset contains 500 rows. For each row (one patient's information), there are 21 columns containing CBC test features, described as follows:
ID: Patients Identifier
WBC: White Blood Cell, Normal Ranges: 4.0 to 10.0, Unit: 10^9/L.
LYMp: Lymphocytes percentage, which is a type of white blood cell, Normal Ranges: 20.0 to 40.0, Unit: %
MIDp: Indicates the percentage combined value of the other types of white blood cells not classified as lymphocytes or granulocytes, Normal Ranges: 1.0 to 15.0, Unit: %
NEUTp: Neutrophils are a type of white blood cell (leukocytes); neutrophils percentage, Normal Ranges: 50.0 to 70.0, Unit: %
LYMn: Lymphocytes number are a type of white blood cell, Normal Ranges: 0.6 to 4.1, Unit: 10^9/L.
MIDn: Indicates the combined number of other white blood cells not classified as lymphocytes or granulocytes, Normal Ranges: 0.1 to 1.8, Unit: 10^9/L.
NEUTn: Neutrophils Number, Normal Ranges: 2.0 to 7.8, Unit: 10^9/L.
RBC: Red Blood Cell, Normal Ranges: 3.50 to 5.50, Unit: 10^12/L
HGB: Hemoglobin, Normal Ranges: 11.0 to 16.0, Unit: g/dL
HCT: Hematocrit is the proportion, by volume, of the Blood that consists of red blood cells, Normal Ranges: 36.0 to 48.0, Unit: %
MCV: Mean Corpuscular Volume, Normal Ranges: 80.0 to 99.0, Unit: fL
MCH: Mean Corpuscular Hemoglobin is the average amount of haemoglobin in the average red cell, Normal Ranges: 26.0 to 32.0, Unit: pg
MCHC: Mean Corpuscular Hemoglobin Concentration, Normal Ranges: 32.0 to 36.0, Unit: g/dL
RDWSD: Red Blood Cell Distribution Width (standard deviation), Normal Ranges: 37.0 to 54.0, Unit: fL
RDWCV: Red Blood Cell Distribution Width (coefficient of variation), Normal Ranges: 11.5 to 14.5, Unit: %
PLT: Platelet Count, Normal Ranges: 100 to 400, Unit: 10^9/L
MPV: Mean Platelet Volume, Normal Ranges: 7.4 to 10.4, Unit: fL
PDW: Platelet Distribution Width, Normal Ranges: 10.0 to 17.0, Unit: %
PCT: Plateletcrit, the proportion of blood volume occupied by platelets, Normal Ranges: 0.10 to 0.28, Unit: %
PLCR: Platelet Large Cell Ratio, Normal Ranges: 13.0 to 43.0, Unit: %
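As promised above, a minimal loading sketch. The file name is a placeholder, the assumption that column headers match the feature abbreviations above should be checked against the actual spreadsheet, and only a few of the normal ranges are wired in here.

```python
import pandas as pd

# Normal ranges copied from the feature list above (subset; extend as needed).
normal_ranges = {
    "WBC": (4.0, 10.0),
    "RBC": (3.50, 5.50),
    "HGB": (11.0, 16.0),
    "PLT": (100, 400),
}

df = pd.read_excel("cbc_dataset.xlsx")  # hypothetical file name

# Flag values falling outside their normal range.
for col, (lo, hi) in normal_ranges.items():
    df[f"{col}_abnormal"] = ~df[col].between(lo, hi)

print(df.filter(like="_abnormal").sum())  # abnormal-result counts per test
```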
Acknowledgements: We thank the entire Al-Zahraa Al-Ahly Hospital team, especially the hospital manager, for cooperating with us in collecting this data while maintaining patients' confidentiality.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The open repository consists of two folders: Dataset and Picture. The Dataset folder contains the file "AWS Dataset Pangandaraan.xlsx". There are 10 columns, of which the first three are time attributes and another six are atmosphere parameters. Each parameter has 8,085 data points, and at the bottom of each column we added summary indices: minimum, maximum, and average values.
For further use, users can choose one or more parameters to calculate or analyze. For example, wind data (speed and direction) can be used to estimate waves with the hindcast method. Users can also filter the data using Excel's filter feature to extract an exact time range for analyzing various phenomena considered correlated with atmospheric conditions around Pangandaran, Indonesia.
The second folder, named "Picture," contains three figures: the monthly distribution of the datasets, the temporal data, and a wind rose.
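For the Excel-based time-range filtering described above, an equivalent pandas sketch is shown below; the file name is taken from the description, but the time-attribute column names (Year, Month, Day) are assumptions about the layout.

```python
import pandas as pd

# Load the atmosphere data (column names are assumed, not confirmed).
df = pd.read_excel("AWS Dataset Pangandaraan.xlsx")
df = df.rename(columns={"Year": "year", "Month": "month", "Day": "day"})

# Drop the summary rows the authors appended at the bottom of each column,
# then build a timestamp from the three time-attribute columns.
df = df.dropna(subset=["year", "month", "day"])
df["timestamp"] = pd.to_datetime(df[["year", "month", "day"]])

# Extract an exact time range, e.g. one month, for analysis.
subset = df[(df["timestamp"] >= "2021-01-01") & (df["timestamp"] < "2021-02-01")]
print(subset.describe())
```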
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset is about book series. It has 1 row and is filtered where the book is Excel at problem solving. It features 10 columns including number of authors, number of books, earliest publication date, and latest publication date.
The ITEX experiment at Audkuluheidi was started in 1996, when control and OTC plots 1-5 were set up. In 1997, control and OTC plots 6-10 were set up in the protected area (No Graze). Also in 1997, 10 control plots were set up in the adjacent grazed area (Graze). In 2000, all plots were sampled again. This dataset is in Excel format. For more information, please see the readme file.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
With a step-by-step approach, learn to prepare Excel files, data worksheets, and individual data columns for data analysis; practice conditional formatting and creating pivot tables/charts; and go over basic principles of Research Data Management as they might apply to an Excel project.
No description is available. Visit https://dataone.org/datasets/6ffb72520e80a412991cd50d38f324d6 for complete metadata about this dataset.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains sales records for fast-moving consumer goods, including daily sales, product identifiers, and store locations. The data were originally sourced from dunnhumby and processed for forecasting analysis. Excel files provide the raw data.
The Alaska Geochemical Database Version 2.0 (AGDB2) contains new geochemical data compilations in which each geologic material sample has one "best value" determination for each analyzed species, greatly improving speed and efficiency of use. Like the Alaska Geochemical Database (AGDB) before it, the AGDB2 was created and designed to compile and integrate geochemical data from Alaska in order to facilitate geologic mapping, petrologic studies, mineral resource assessments, definition of geochemical baseline values and statistics, environmental impact assessments, and studies in medical geology. This relational database, created from the Alaska Geochemical Database (AGDB) that was released in 2011, serves as a data archive in support of present and future Alaskan geologic and geochemical projects, and contains data tables in several different formats describing historical and new quantitative and qualitative geochemical analyses. The analytical results were determined by 85 laboratory and field analytical methods on 264,095 rock, sediment, soil, mineral and heavy-mineral concentrate samples. Most samples were collected by U.S. Geological Survey (USGS) personnel and analyzed in USGS laboratories or, under contracts, in commercial analytical laboratories. These data represent analyses of samples collected as part of various USGS programs and projects from 1962 through 2009. In addition, mineralogical data from 18,138 nonmagnetic heavy mineral concentrate samples are included in this database. The AGDB2 includes historical geochemical data originally archived in the USGS Rock Analysis Storage System (RASS) database, used from the mid-1960s through the late 1980s, and the USGS PLUTO database used from the mid-1970s through the mid-1990s. All of these data are currently maintained in the National Geochemical Database (NGDB). Retrievals from the NGDB were used to generate most of the AGDB data set. These data were checked for accuracy regarding sample location, sample media type, and analytical methods used. This arduous process of reviewing, verifying and, where necessary, editing all USGS geochemical data resulted in a significantly improved Alaska geochemical dataset. USGS data that were not previously in the NGDB because the data predate the earliest USGS geochemical databases, or were once excluded for programmatic reasons, are included here in the AGDB2 and will be added to the NGDB. The AGDB2 data provided here are the most accurate and complete to date, and should be useful for a wide variety of geochemical studies. The AGDB2 data provided in the linked database may be updated or changed periodically.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
File name definitions:
'...v_50_175_250_300...' - dataset for velocity ranges [50, 175] + [250, 300] m/s
'...v_175_250...' - dataset for velocity range [175, 250] m/s
'ANNdevelop...' - used to perform 9 parametric sub-analyses where, in each one, many ANNs are developed (trained, validated and tested) and the one yielding the best results is selected
'ANNtest...' - used to test the best ANN from each aforementioned parametric sub-analysis, aiming to find the best ANN model; this dataset includes the 'ANNdevelop...' counterpart
Where to find the input (independent) and target (dependent) variable values for each dataset/Excel file?
input values in 'IN' sheet
target values in 'TARGET' sheet
Where to find the results from the best ANN model (for each target/output variable and each velocity range)?
open the corresponding Excel file; the expected (target) vs ANN (output) results are written in the 'TARGET vs OUTPUT' sheet
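For orientation, a minimal pandas sketch of pulling those sheets out of one workbook; the file name below is a hypothetical stand-in for one of the actual '...v_175_250...' files.

```python
import pandas as pd

xlsx = "ANNtest_v_175_250.xlsx"  # hypothetical stand-in file name

inputs = pd.read_excel(xlsx, sheet_name="IN")       # independent variables
targets = pd.read_excel(xlsx, sheet_name="TARGET")  # dependent variables
# Best-ANN expected (target) vs ANN (output) results:
results = pd.read_excel(xlsx, sheet_name="TARGET vs OUTPUT")

print(inputs.shape, targets.shape, results.shape)
```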
Check reference below (to be added when the paper is published)
https://www.researchgate.net/publication/328849817_11_Neural_Networks_-_Max_Disp_-_Railway_Beams
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A range of quarterly Excel spreadsheets and SuperTABLE datacubes. The spreadsheets contain broad level data covering all the major items of the Labour Force Survey in time series format, including seasonally adjusted and trend estimates. The datacubes contain more detailed and cross classified original data than the spreadsheets.
Excel Age-Range creator for Office for National Statistics (ONS) mid-year population estimates (MYE), covering each year between 1999 and 2016.

These files take into account the revised estimates for 2002-2010 released in April 2013, down to Local Authority level, and the post-2011 estimates based on the Census results. Scotland and Northern Ireland data have not been revised, so Great Britain and United Kingdom totals comprise the original data for these plus revised England and Wales figures.

This Excel-based tool enables users to query the single-year-of-age raw data so that any age range can easily be calculated, without having to carry out often complex and time-consuming formulas that could also be open to human error. Simply select the lower and upper age range for both males and females and the spreadsheet will return the total population for the range. Please adhere to the terms and conditions of supply contained within the file.

Tip: You can copy and paste the rows you are interested in to another worksheet by using the filters at the top of the columns and then selecting all by pressing Ctrl+A. Then simply copy and paste the cells to a new location.

ONS mid-year population estimates: Open Excel tool (London Boroughs, Regions and National, 1999-2016). Also available is a custom-age tool for all geographies in the UK. This full MYE dataset by single year of age (SYA) and gender is available as a Datastore package here.

Ward-level population estimates: single-year-of-age population tool for 2002 to 2015 for all wards in London.

New 2014 ward boundary estimates: Ward boundary changes in May 2014 only affected three London boroughs - Hackney, Kensington and Chelsea, and Tower Hamlets. The estimates for 2001-2013 have been calculated by the GLA by taking the proportion of the old ward that falls within the new ward, based on the proportion of population living in each area at the 2011 Census. Therefore, these estimates are purely indicative; they are not official statistics and are not endorsed by ONS. From 2014 onwards, ONS began publishing official estimates for the new ward boundaries. Download here.
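For users who prefer code to the spreadsheet, the core of what the tool automates can be sketched in pandas; the file name and column labels (Age, Males, Females) are assumptions, not the tool's actual layout.

```python
import pandas as pd

# Hypothetical single-year-of-age extract (column names assumed).
df = pd.read_excel("mye_single_year_of_age.xlsx")

def population_in_range(data: pd.DataFrame, low: int, high: int) -> pd.Series:
    """Total male and female population for ages low..high inclusive."""
    in_range = data[data["Age"].between(low, high)]
    return in_range[["Males", "Females"]].sum()

print(population_in_range(df, 16, 64))  # e.g. a working-age range
```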
Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
This dataset was supplied to the Bioregional Assessment Programme by a third party and is presented here as originally supplied. Metadata was not provided and has been compiled by the Bioregional Assessment Programme based on known details at the time of acquisition.
Mean monthly flow (ML/month) and Annual flow (ML/yr) data at key gauges in the Macalister Irrigation District (MID) as monitored by SRW. The data are provided in MS Excel format in worksheets and charts.
Time-series drainage volume data were provided by a third party. Site information and drainage flow monitoring data provided by Southern Rural Water are specific to the Macalister Irrigation District.
Time-specific data are provided for the range 23/07/1997 to 31/12/2013.
The following text has been copied from a draft of the BA-GIP report.
A total of 197 river gauges were identified within the model area, representing all of the major rivers. Daily gauge level data were sourced from the Victorian Department of Environment, Land, Water and Planning Water Measurement Information System (WMIS, 2015). A list of the river gauges for key river basins is provided in the report.
Only main stems of the major rivers were included in the model. These river reaches were identified using the DEPI hydro25 spatial data set (DEPI, 2014). The river classification was used to vary river incision depth (depth below the ground surface as defined by the digital elevation model) and width attributes. In the absence of recorded stage height information, river classification was used to estimate river stage heights. A total of 22,573 river cells are included in the model. Fifty-one gauges were selected to calibrate the catchment modelling framework in unregulated catchments based on Base Flow Indexes and observed stream flows.
Drainage channels and man-made drainage features in the Macalister Irrigation District (MID) were included in the model based on available drainage network mapping. This information was sourced from Southern Rural Water (SRW) and the DEPI Corporate Spatial Data library. Drainage cells are assigned to the uppermost cells within the model to capture groundwater discharge processes. Drain cells in Modflow can only act as groundwater discharge points, and as such those cells outside drainage channels will be characterised as having a bed elevation equivalent to ground surface elevation. A total of 410,504 drainage cells are incorporated in the model. Apart from 3 river gauges sourced from the WMIS, SRW also has 15 gauges monitoring drainage from the MID. The measurements commenced between 1997 and 2005. Of the 15 gauges, six were selected to calibrate the catchment modelling framework based on observed discharge.
Victorian Department of Economic Development, Jobs, Transport and Resources (2015) Mean monthly flow & annual flow data - Macalister Irrigation District. Bioregional Assessment Source Dataset. Viewed 05 October 2018, http://data.bioregionalassessments.gov.au/dataset/6ba89d78-1e42-4e02-bd5c-a435ee15bef4.