Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Excel spreadsheet containing raw data, organized by figure.
About this webinar
We rarely receive research data in an appropriate form. Often the data is messy. Sometimes it is incomplete. Sometimes there is too much of it. Frequently, it contains errors. This webinar targets beginners and presents a quick demonstration of using the most widespread data wrangling tool, Microsoft Excel, to sort, filter, copy, protect, transform, aggregate, summarise, and visualise research data.
Webinar Topics
• Introduction to the Microsoft Excel user interface
• Interpret data using sorting, filtering, and conditional formatting
• Summarise data using functions
• Analyse data using pivot tables
• Manipulate and visualise data
• Handy tips to speed up your work
Licence
Copyright © 2021 Intersect Australia Ltd. All rights reserved.
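For readers who later outgrow Excel, here is a minimal pandas sketch of the same operations covered in the webinar (sorting, filtering, summarising with functions, and pivot tables); the example table and its column names are invented for illustration.

```python
# Minimal pandas sketch of the webinar's Excel operations.
# The table and column names ("site", "species", "count") are invented.
import pandas as pd

df = pd.DataFrame({
    "site":    ["A", "A", "B", "B", "C"],
    "species": ["oak", "pine", "oak", "pine", "oak"],
    "count":   [12, 7, 30, 4, 18],
})

df_sorted = df.sort_values("count", ascending=False)   # Data > Sort
oaks = df[df["species"] == "oak"]                      # Data > Filter
total, mean = df["count"].sum(), df["count"].mean()    # SUM, AVERAGE

# Insert > PivotTable: counts summed per site and species
pivot = df.pivot_table(index="site", columns="species",
                       values="count", aggfunc="sum")
print(df_sorted, oaks, total, mean, pivot, sep="\n\n")
```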
This provides a link to the Washington Secretary of State's Corporations Search tool. The Corporations Data Extract feature is no longer available. Customers needing a list of multiple businesses can use our advanced search to create a list of businesses under specific parameters, then export the results to an Excel spreadsheet to sort and search more extensively. More specific search parameters yield narrower results. The steps are as follows:
1. Visit our Corporations and Charities Filing System at https://ccfs.sos.wa.gov/
2. Scroll down to the “Corporation Search” section and click the “Advanced Search” button on the right.
3. Under the first section, specify how the business name should be searched. Use this only for single-business lookups, unless all the businesses you are searching for share a common name (use the “contains” selection).
4. Select the appropriate business type from the dropdown if you are looking for a list of a specific business type. For a list of a particular business type with a specific status, select that status under “Business Status.” You can also search by expiration date in this section.
5. Under “Date of Incorporation/Formation/Registration,” you can search by start or end date.
6. Under the “Registered Agent/Governor Search” section, you can search all businesses with the same registered agent or governor on record.
7. Once you have made all your selections, click the green “Search” button at the bottom right of the page.
8. A list will populate; scroll to the bottom and select the green Excel document icon labelled CSV. An Excel-compatible document should download automatically. If you have popups blocked, please unblock our site and try again.
9. Once you have opened the downloaded spreadsheet, you can adjust the width of each column and sort the data using the Data tab. You can also search by pressing CTRL+F on a Windows keyboard.
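If the exported CSV grows too large to browse comfortably in Excel, the same sorting and CTRL+F-style searching can be scripted. The sketch below assumes hypothetical file and column names ("corporation_search_results.csv", "BusinessName", "BusinessStatus"); check the headers of your actual export.

```python
# Hedged sketch: sort and search the exported list with pandas instead
# of Excel. File and column names are assumptions; check your export.
import pandas as pd

businesses = pd.read_csv("corporation_search_results.csv")

# Equivalent of sorting via the Data tab
businesses = businesses.sort_values("BusinessName")

# Equivalent of CTRL+F: keep rows whose name contains a search term
matches = businesses[businesses["BusinessName"]
                     .str.contains("coffee", case=False, na=False)]

# Narrow further, mirroring the "Business Status" filter
active = matches[matches["BusinessStatus"] == "Active"]
print(active.head())
```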
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The GAPs Data Repository provides a comprehensive overview of available qualitative and quantitative data on national return regimes, now accessible through an advanced web interface at https://data.returnmigration.eu/.
This updated guideline outlines the complete process, from the initial data collection for the return migration data repository to the development of a comprehensive web-based platform. Through iterative development, participatory approaches, and rigorous quality checks, we have ensured a systematic representation of return migration data at both national and comparative levels.
The Repository organizes data into five main categories, covering diverse aspects and offering a holistic view of return regimes: country profiles, legislation, infrastructure, international cooperation, and descriptive statistics. These categories, further divided into subcategories, are based on insights from a literature review, existing datasets, and empirical data collection from 14 countries. The selection of categories prioritizes relevance for understanding return and readmission policies and practices, data accessibility, reliability, clarity, and comparability. Raw data are meticulously collected by national experts.
The transition to a web-based interface builds upon the Repository’s original structure, which was initially developed using REDCap (Research Electronic Data Capture), a secure web application for building and managing online surveys and databases. REDCap ensures systematic data entry and stores the data on Uppsala University’s servers, while significantly improving accessibility, usability, and data security. It also enables users to export any or all data from the Project when granted full data export privileges. Data can be exported in various ways and formats, including Microsoft Excel, SAS, Stata, R, or SPSS, for analysis. At this stage, the Data Repository design team also converted tailored records of available data into public reports accessible to anyone with a unique URL, without the need to log in to REDCap or obtain permission to access the GAPs Project Data Repository. Public reports can be used to share information with stakeholders or external partners without granting them access to the Project or requiring them to set up a personal account. Currently, all public report links inserted in this report are also available on the Repository’s webpage, allowing users to export the original data.
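As a hedged illustration of the export route described above, the following sketch pulls records through REDCap's standard export API and loads them for analysis. The server URL and API token are placeholders issued per project by REDCap administrators; exact export options depend on the project's configuration.

```python
# Hedged sketch of a record export via REDCap's API, assuming full data
# export privileges. URL and token are placeholders issued per project.
import io
import pandas as pd
import requests

REDCAP_URL = "https://redcap.example.edu/api/"   # placeholder server
payload = {
    "token": "YOUR_PROJECT_API_TOKEN",           # placeholder token
    "content": "record",                         # export records
    "format": "csv",                             # also: json, xml
    "type": "flat",                              # one row per record
}
response = requests.post(REDCAP_URL, data=payload, timeout=30)
response.raise_for_status()

# Load the CSV export for analysis, as with Excel/SAS/Stata/R/SPSS exports
records = pd.read_csv(io.StringIO(response.text))
print(records.head())
```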
This report also includes a detailed codebook to help users understand the structure, variables, and methodologies used in data collection and organization. This addition ensures transparency and provides a comprehensive framework for researchers and practitioners to effectively interpret the data.
The GAPs Data Repository is committed to providing accessible, well-organized, and reliable data by moving to a centralized web platform and incorporating advanced visuals. This Repository aims to contribute inputs for research, policy analysis, and evidence-based decision-making in the return and readmission field.
Explore the GAPs Data Repository at https://data.returnmigration.eu/.
This archive contains raw data of visual and acoustic mapping of perforations in Utah FORGE well 16A(78)-32 acquired during the August 2024 circulation program. The dataset includes downhole images captured by EV, a downhole visual analytics company, providing visual records of each perforation. Images are organized in two folders: one set with perforation visualization overlays and one without. An included Excel spreadsheet provides the organized raw data.
Attribution-ShareAlike 4.0 (CC BY-SA 4.0) https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
The data file entitled “Emergy analysis of maize production in Ghana” is based on an empirical study assessing the resource and energy use efficiency of maize production systems using the Emergy-Data Envelopment Analysis approach, which was developed within the context of the BiomassWeb Project. The study area was the Bolgatanga and Bongo Districts, Ghana, sub-Saharan Africa. The approach couples the Emergy Analysis and Data Envelopment Analysis methods into a framework and integrates the concept of eco-efficiency into that framework to assess the resource and energy use efficiency and sustainability of agroecosystems as a whole. In this data file, the Emergy Analysis method is applied to achieve environmental and economic accounting of maize production systems in Ghana. The Agricultural Production Systems sIMulator (APSIM) was used to model five maize-based production scenarios:
1. Extensive rainfed maize system with an external input of 0 kg/ha/yr urea, with or without manure (Extensive0).
2. Extensive rainfed maize system with an external input of 12 kg/ha/yr NPK, with or without manure (Extensive12).
3. Rainfed maize-legume (cowpea - Vigna unguiculata, soybean - Glycine max, or groundnut - Arachis hypogaea) intercropping system with an external input of 20 kg/ha/yr urea, with or without manure (Intercrop20).
4. Intensive maize system with an external input of 50 kg/ha/yr urea, including supplemental irrigation (Intensive50).
5. Intensive maize system with an external input of 100 kg/ha/yr urea, including supplemental irrigation (Intensive100).
The five scenarios were compared on the basis of the Emergy Analysis evaluation, which accounts for resource and energy use efficiency and sustainability. The data were processed using mathematical functions in Microsoft Excel. The data file is organized into seven linked sheet tabs. Comments have been added to make the content self-explanatory. Where secondary data have been used, the sources are cited. This data file was authored by Mwambo, Francis Molua.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A heterogeneous big dataset is presented in this work: electrocardiogram (ECG) signals, blood pressure signals, oxygen saturation (SpO2) signals, and text input. This work extends our earlier formulation of the dataset presented in [1]; a trustworthy and relevant medical dataset library (PhysioNet [2]) was used to acquire these signals. The dataset includes medical features from heterogeneous sources (sensory and non-sensory data). First, ECG sensor signals, which contain QRS width, ST elevation, peak number, and cycle interval. Second, the SpO2 level from SpO2 sensor signals. Third, blood pressure sensor signals, which contain high (systolic) and low (diastolic) values. Finally, text input, which constitutes the non-sensory data. The text inputs were formulated based on doctors' diagnostic procedures for chronic heart diseases. The Python software environment was used, and the simulated big data is presented along with analyses.
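As a hedged illustration of acquiring such signals, the sketch below reads an ECG record from PhysioNet with the open-source wfdb package. The record ("100") and database ("mitdb") are illustrative choices, not necessarily those used in this work.

```python
# Hedged sketch: read an ECG record from PhysioNet with the wfdb package.
# Record "100" from the "mitdb" database is an illustrative choice only.
import wfdb

record = wfdb.rdrecord("100", pn_dir="mitdb", sampto=3600)
ecg = record.p_signal[:, 0]   # first channel, in physical units

print(f"sampling frequency: {record.fs} Hz")
print(f"samples read: {len(ecg)}")
print(f"amplitude range: {ecg.min():.3f} to {ecg.max():.3f} {record.units[0]}")
```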
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset consists of three data files, one for each research instrument used to collect data: a questionnaire, document analysis, and interviews.
The questionnaire was created on the Qualtrics online platform and exported to an Excel spreadsheet. A total of 11 participants completed the questionnaire. The aim was to determine whether the educators made provision for developing 21st-century skills in their fully online modules and, if so, what technologies they were using at the time to develop these skills. The questionnaire consisted of multiple-choice, multiple-option, and closed and open-ended questions.
The second research instrument used in this study was document analysis, a process of reviewing the curriculum documents and resources in the participants’ online modules. Access to 18 online modules on the learning management system of the higher education institution was granted so that the researcher could collect all the necessary resources. All the curriculum documents and resources that could be downloaded were gathered and saved in the cloud-storage drive. Other components, such as discussion groups and online assessments, could not be downloaded; these resources were observed, and relevant details were noted in an Excel spreadsheet by the researcher and saved on the cloud-storage drive.
The third research instrument used in this study was interviews. Semi-structured interviews were conducted after the questionnaire and document analysis. The researcher pre-developed a few guiding questions, but also allowed the participants to give their own feedback and perspective on 21st-century skills and on using DET to develop these skills in fully online learning. Only the five participants who voluntarily agreed were invited and participated in a follow-up interview. The interviews took place over a Zoom video call and were approximately 30 minutes long.
For data analysis, the researcher used a deductive approach to organise and analyse the findings from the questionnaire, document analysis, and interviews. The data were first divided into categories, determined by identifying the most commonly used words in the educators' responses. The responses in each category were then translated into a percentage value to determine the highest- and lowest-ranking responses. Categories expressing the same concept were grouped together, and the percentages of the grouped categories were merged, recalculated, and organised from large to small. The analysed data were then converted to charts to visually illustrate the trends and patterns in the data. Finally, the analysed data were stored on the cloud-storage drive with a password for protection.
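A minimal sketch of the word-frequency category step described above, using invented example responses rather than study data: the most common words are tallied and each tally is expressed as a percentage.

```python
# Minimal sketch of the word-frequency category step; the responses
# below are invented examples, not study data.
from collections import Counter

responses = [
    "collaboration tools and online discussion",
    "discussion forums support collaboration",
    "creativity through video projects",
]

words = Counter(w for r in responses for w in r.split())
total = sum(words.values())
for word, n in words.most_common(5):
    print(f"{word}: {n} ({100 * n / total:.1f}%)")
```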
The next data analysis process was for the documents, where the curriculum and resource documents of the educators’ online modules were downloaded and captured from the learning management system of the institution. All the files were worked through, and the relevant content was evaluated and compared against a checklist designed on the basis of the P21 Framework definitions of the 4Cs (Partnership for 21st Century Skills, 2009) and the affordances of DET (National Academies of Sciences, Engineering & Medicine, 2018). The curriculum documents and the analysis of these documents were all captured in an Excel spreadsheet, saved on the cloud-storage drive, and assigned a unique password to secure access.
Lastly, the video recordings were transcribed with transcription software called Descript. The transcript text was then captured into an Excel spreadsheet under each main topic and question discussed. The Excel spreadsheet was then stored with a unique password on the cloud-storage drive. Once the spreadsheet was saved, the analysis continued by grouping each topic’s results per theme and summarising the discussion to draw conclusions about the topic. The final spreadsheet with findings was safely stored on the cloud-storage drive with a password to secure the data.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Soybean aphid (Aphis glycines Matsumura; SA) is a major invasive pest of soybean [Glycine max (L.) Merr.] in northern production regions of North America. Although insecticides are currently the main method for controlling this pest, SA-resistant cultivars are being developed to sustainably manage SA in the future. The viability of SA-resistant cultivars may depend on identifying a diverse set of resistance genes by screening various germplasm sources, including wild soybean (Glycine soja Siebold and Zucc.), the progenitor of cultivated soybean. Data consist of infestation ratings generated for a total of 337 distinct plant introduction lines of wild soybean that were exposed to avirulent SA biotype 1 for 14 d in 25 separate tests. Individual plants of the test lines were given a common rating by two researchers, based on the following scale of SA per test plant: 1 = 0-50, 2 = 51-100, 3 = 101-150, 4 = 151-200, 5 = 201-250, and 6 = more than 250. Public dissemination of this dataset will allow further analyses and evaluation of resistance among the test lines.
Resources in this dataset:
Resource Title: Infestation ratings for individual plants of various wild soybean lines. File Name: Web Page, url: https://ars.els-cdn.com/content/image/1-s2.0-S2352340917304432-mmc2.xlsx (MS Excel spreadsheet showing infestation ratings for individual plants of 337 distinct plant introduction (PI) wild soybean lines following 14 d of exposure to SA.)
Resource Software Recommended: Microsoft Excel, url: https://office.microsoft.com/excel/
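The rating scale above can be encoded directly; the following minimal Python function maps a per-plant aphid count to its 1-6 rating.

```python
# Direct encoding of the quoted rating scale: SA counts per test plant
# map to ratings 1-6.
def infestation_rating(aphid_count: int) -> int:
    """Return the 1-6 infestation rating for a per-plant SA count."""
    for rating, upper in enumerate([50, 100, 150, 200, 250], start=1):
        if aphid_count <= upper:
            return rating
    return 6   # more than 250 SA per test plant

assert infestation_rating(0) == 1
assert infestation_rating(120) == 3
assert infestation_rating(300) == 6
```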
The Ontario government generates and maintains thousands of datasets. Since 2012, we have shared data with Ontarians via a data catalogue. Open data is data that is shared with the public. Learn more about open data and why Ontario releases it. Ontario’s Open Data Directive states that all data must be open, unless there is good reason for it to remain confidential. Ontario’s Chief Digital and Data Officer also has the authority to make certain datasets available publicly. Datasets listed in the catalogue that are not open are labelled accordingly. If you want to use data you find in the catalogue, that data must have a licence, a set of rules that describes how you can use it. Most of the data available in the catalogue is released under Ontario’s Open Government Licence. However, each dataset may be shared with the public under other kinds of licences or no licence at all. If a dataset doesn’t have a licence, you don’t have the right to use the data. If you have questions about how you can use a specific dataset, please contact us. The Ontario Data Catalogue endeavours to publish open data in a machine-readable format. For machine-readable datasets, you can simply retrieve the file you need using the file URL. The Ontario Data Catalogue is built on CKAN, which means the catalogue has the following features you can use when building applications. APIs (application programming interfaces) let software applications communicate directly with each other. If you are using the catalogue in a software application, you might want to extract data from the catalogue through the catalogue API. Note: all Datastore API requests to the Ontario Data Catalogue must be made server-side. The catalogue’s collection of dataset metadata (and dataset files) is searchable through the CKAN API. The Ontario Data Catalogue has more than just CKAN’s documented search fields; you can also search its custom fields. You can also use the CKAN API to retrieve metadata about a particular dataset and check for updated files. Read the complete documentation for CKAN’s API. Some of the open data in the Ontario Data Catalogue is available through the Datastore API, which lets you search and access the machine-readable open data in the catalogue; read the complete documentation for CKAN’s Datastore API. The Ontario Data Catalogue contains a record for each dataset that the Government of Ontario possesses. Some of these datasets will be available to you as open data. Others will not be available to you, because the Government of Ontario is unable to share data that would break the law or put someone’s safety at risk. You can search for a dataset with a word that might describe a dataset or topic. Use words like “taxes” or “hospital locations” to discover what datasets the catalogue contains. You can search for a dataset from three spots on the catalogue: the homepage, the dataset search page, or the menu bar available across the catalogue. On the dataset search page, you can also filter your search results. You can select filters on the left-hand side of the page to limit your search to datasets with your favourite file format, datasets that are updated weekly, datasets released by a particular organization, or datasets that are released under a specific licence. Go to the dataset search page to see the filters that are available to make your search easier.
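As a hedged illustration of the CKAN API mentioned above, the sketch below runs a package_search query against the catalogue. The base URL is an assumption about the catalogue's API endpoint; note that Datastore API requests, unlike this metadata search, must be made server-side.

```python
# Hedged sketch of a CKAN package_search query against the catalogue.
# The base URL is an assumption about the catalogue's API endpoint.
import requests

BASE = "https://data.ontario.ca/api/3/action"   # assumed endpoint

resp = requests.get(f"{BASE}/package_search",
                    params={"q": "hospital locations", "rows": 5},
                    timeout=30)
resp.raise_for_status()
result = resp.json()["result"]

print(f"{result['count']} matching datasets")
for dataset in result["results"]:
    print("-", dataset["title"])
```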
You can also do a quick search by selecting one of the catalogue’s categories on the homepage. These categories can help you see the types of data we have on key topic areas. When you find the dataset you are looking for, click on it to go to the dataset record. Each dataset record will tell you whether the data is available and, if so, tell you about the data available. An open dataset might contain several data files. These files might represent different periods of time, different subsets of the dataset, different regions, language translations, or other breakdowns. You can select a file and either download it or preview it. Make sure to read the licence agreement to confirm you have permission to use it the way you want. Read more about previewing data. A non-open dataset may be unavailable for many reasons. Read more about non-open data. Read more about restricted data. Data that is non-open may still be subject to freedom-of-information requests. The catalogue has tools that enable all users to visualize the data in the catalogue without leaving the catalogue; no additional software is needed. Have a look at our walk-through of how to make a chart in the catalogue. Get automatic notifications when datasets are updated. You can choose to get notifications for individual datasets, an organization’s datasets, or the full catalogue. You don’t have to provide any personal information; just subscribe to our feeds using any feed reader you like, via the corresponding notification web addresses. Copy those addresses and paste them into your reader. Your feed reader will let you know when the catalogue has been updated. The catalogue provides open data in several file formats (e.g., spreadsheets, geospatial data, etc.). Learn about each format and how you can access and use the data each file contains.
CSV: a file that has a list of items and values separated by commas, without formatting (e.g., colours, italics) or extra visual features. This format provides just the data that you would display in a table. XLSX (Excel) files may be converted to CSV so they can be opened in a text editor. How to access the data: open with any spreadsheet software application (e.g., Open Office Calc, Microsoft Excel) or text editor. Note: this format is machine-readable; it can be easily processed and used by a computer. Files that have visual formatting (e.g., bolded headers and colour-coded rows) can be hard for machines to understand; these elements make a file more human-readable and less machine-readable.
TXT: a file that provides information without formatted text or extra visual features, and that may not follow a pattern of separated values like a CSV. How to access the data: open with any word processor or text editor available on your device (e.g., Microsoft Word, Notepad).
XLSX: a spreadsheet file that may also include charts, graphs, and formatting. How to access the data: open with a spreadsheet software application that supports this format (e.g., Open Office Calc, Microsoft Excel). Data can be converted to CSV for a non-proprietary version of the same data without formatted text or extra visual features.
SHP: a shapefile provides geographic information that can be used to create a map or perform geospatial analysis based on location, points/lines, and other data about the shape and features of the area. It includes required files (.shp, .shx, .dbf) and might include corresponding files (e.g., .prj). How to access the data: open with a geographic information system (GIS) software program (e.g., QGIS).
ZIP: a package of files and folders that can contain any number of different file types. How to access the data: open with an unzipping software application (e.g., WinZip, 7-Zip). Note: if a ZIP file contains .shp, .shx, and .dbf file types, it is an ArcGIS ZIP: a package of shapefiles which provide information to create maps or perform geospatial analysis, and which can be opened with ArcGIS (a geographic information system software program).
GeoJSON: a file that provides information related to a geographic area (e.g., phone number, address, average rainfall, number of owl sightings in 2011) and its geospatial location (i.e., points/lines). How to access the data: open using a GIS software application to create a map or do geospatial analysis; it can also be opened with a text editor to view the raw information. Note: this format is machine-readable and can be easily processed and used by a computer. Human-readable data (including visual formatting) is easy for users to read and understand.
JSON: a text-based format for sharing data in a machine-readable way that can store data with more unconventional structures, such as complex lists. How to access the data: open with any text editor (e.g., Notepad) or access through a browser. Note: this format is machine-readable and can be easily processed and used by a computer.
XML: a text-based format to store and organize data in a machine-readable way that can store data with more unconventional structures (not just data organized in tables). How to access the data: open with any text editor (e.g., Notepad). Note: this format is machine-readable and can be easily processed and used by a computer.
KML: a file that provides information related to an area (e.g., phone number, address, average rainfall, number of owl sightings in 2011) and its geospatial location (i.e., points/lines). How to access the data: open with a geospatial software application that supports the KML format (e.g., Google Earth). Note: this format is machine-readable and can be easily processed and used by a computer.
IVT: this format contains files with data from tables used for statistical analysis and data visualization of Statistics Canada census data. How to access the data: open with the Beyond 20/20 application.
MDB: a database which links and combines data from different files or applications (including HTML, XML, Excel, etc.). The database file can be converted to CSV/TXT to make the data machine-readable, but human-readable formatting will be lost. How to access the data: open with Microsoft Office Access (a database management system used to develop application software).
PDF: a file that keeps the original layout and formatting of a document.
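As a hedged sketch of opening these machine-readable formats programmatically rather than in the desktop applications listed above, the example below uses pandas and the optional geopandas package; all file names are placeholders.

```python
# Hedged sketch of opening the main machine-readable formats in code.
# File names are placeholders; geopandas is an optional dependency.
import json
import pandas as pd
import geopandas as gpd

table = pd.read_csv("dataset.csv")         # CSV: plain comma-separated table
workbook = pd.read_excel("dataset.xlsx")   # XLSX: spreadsheet

with open("dataset.json", encoding="utf-8") as f:
    nested = json.load(f)                  # JSON: may hold complex lists

shapes = gpd.read_file("dataset.shp")      # shapefile (.shp/.shx/.dbf)
regions = gpd.read_file("dataset.geojson") # geospatial points/lines

print(table.head(), workbook.head(), shapes.head(), sep="\n\n")
```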
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The SOM model code operates on the SOM_data.xlsx file.
The RE_data Excel file contains the RE7 result data for all samples.
The RE_BySection Excel file contains all the RE7 result data organized by section type (excluding results from undefined sections).
The Resp_BySection Excel file contains all the experimental incubation results, organized by section type.
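A minimal sketch of loading the files listed above; the .xlsx extensions and default sheet layouts are assumptions, as neither is specified here.

```python
# Sketch of loading the files named above; .xlsx extensions and default
# sheet layouts are assumptions.
import pandas as pd

som_data = pd.read_excel("SOM_data.xlsx")            # input for the SOM model code
re_data = pd.read_excel("RE_data.xlsx")              # RE7 results, all samples
re_by_section = pd.read_excel("RE_BySection.xlsx")   # RE7 results by section type
resp_by_section = pd.read_excel("Resp_BySection.xlsx")  # incubation results
print(som_data.head())
```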
The raw data are organized in one Microsoft Excel file and the raw images in one Microsoft Word file, which can be opened using Microsoft Excel and Microsoft Word, respectively.
Data are organized in an Excel spreadsheet. We included a METADATA sheet in the data file to document what is in each sheet. Data collected include phosphorus loads/concentrations, heavy metals, and hydraulic data from bioretention cells. This dataset is associated with the following publication: Ament, M.R., E.D. Roy, Y. Yuan, and S.E. Hurley. Phosphorus Removal, Metals Dynamics, and Hydraulics in Stormwater Bioretention Systems Amended with Drinking Water Treatment Residuals. Journal of Sustainable Water in the Built Environment. American Society of Civil Engineers (ASCE), New York, NY, USA, 8(3): 04022003, (2022).
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Following a request from the European Commission, in 2015 EFSA created a database of host plant species of Xylella fastidiosa (EFSA, 2015). In 2018, a revised and updated database of Xylella spp. (including both species X. fastidiosa and X. taiwanensis) and the related scientific report were published (EFSA, 2018). EFSA will maintain and update this database periodically.
In April 2020, EFSA released a new update of the Xylella spp. host plants database, with information retrieved from a literature search up to June 2019, Europhyt outbreak notifications up to 15 October 2019, and personal communications of experts (EFSA, 2020). The protocol applied for the extensive literature review, data collection, and reporting, as well as the results and lists of host plants, are described in detail in the scientific report (EFSA, 2020).
The current database includes 343 host plant species in which infection was assessed with at least two highly reliable detection methods (category A; see section 2.5.2 of EFSA (2020)), and up to 595 host plant species regardless of the detection methods applied (category E; see section 2.5.2 of EFSA (2020)).
The Excel files attached here represent VERSION 3 of the Xylella spp. host plants database. For a detailed description of the information included in the database, please consult the related scientific report (EFSA, 2020).
The Excel file “Xylella spp. host plants database - VERSION 3” contains several sheets: the LEGENDA (with an extensive description of each table), the full detailed raw data of the Xylella spp. host plant database (sheet “observation”), and several examples of data extraction.
Additional Excel files contain lists of host plant species of X. fastidiosa (subsp. unknown (i.e. not reported), fastidiosa, multiplex, pauca, morus, sandyi, tashke, fastidiosa/sandyi) and X. taiwanensis with different infection methods (natural, artificial, and not specified) and according to different categories (A, B, C, D, E; see section 2.5.2 of EFSA (2020)).
Bibliography:
EFSA (European Food Safety Authority), 2015. Categorisation of plants for planting, excluding seeds, according to the risk of introduction of Xylella fastidiosa. EFSA Journal 2015;13(3):4061, 31 pp. https://doi.org/10.2903/j.efsa.2015.4061
EFSA (European Food Safety Authority), 2018. Scientific report on the update of the Xylella spp. host plant database. EFSA Journal 2018;16(9):5408, 87 pp. https://doi.org/10.2903/j.efsa.2018.5408
EFSA (European Food Safety Authority), 2020. Scientific report on the update of the Xylella spp. host plant database – systematic literature search up to 30 June 2019. EFSA Journal 2020;18(4):6114, 61 pp. https://doi.org/10.2903/j.efsa.2020.6114
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Excel spreadsheet of organized data and the graphs included in the study.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The file contains all data presented in the manuscript. Every tab in the Excel file corresponds to a different panel. Download the whole file to access the raw data for all panels.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Learning where to find nutrients while at the same time avoiding toxic food is essential for the survival of any animal. Using Drosophila melanogaster larvae as a case study, we investigate the role of gustatory sensory neurons expressing IR76b for associative learning of amino acids, the building blocks of proteins. We found surprising complexity in the neuronal underpinnings of sensing amino acids, and a functional division of sensory neurons. We found that the IR76b receptor is dispensable for amino acid learning, whereas the neurons expressing IR76b are specifically required for the rewarding but not the punishing effect of amino acids. This unexpected dissociation in neuronal processing of amino acids for different behavioural functions provides a case study for functional divisions of labour in gustatory systems.
The dataset comprises two forms of data collected across four African countries: Ghana, Nigeria, Mozambique, and Kenya. These were: • The results of a business survey administered to both migrant-owned and non-migrant-owned businesses in the four case study countries. The survey data is contained within an Excel spreadsheet with responses organised in four separate sheets by case study country. The code '777' is used in individual cells to denote that no answer was given for that particular question. • Transcripts of, or fieldnotes from, semi-structured interviews with migrants, organisations connected to migration, host nationals working for migrant businesses, and selected government Ministries and Departments connected to migration policy in the four case study countries. The interview data is organised by country and sub-divided into five separate folders categorised by key informant group: i) Government Ministries, Departments and Agencies; ii) Civil Society Organisations; iii) Migrant Community Representatives (organisations or leaders); iv) Migrant Business Owners; and v) Host Nationals Working for Migrant Business Owners.
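Because the survey uses '777' as a no-answer code, it helps to declare it as missing when loading the spreadsheet. The sketch below assumes a hypothetical file name; the sheet name follows the per-country organisation described above.

```python
# Sketch: load one country's sheet with the '777' no-answer code treated
# as missing. The file name is a placeholder.
import pandas as pd

ghana = pd.read_excel("business_survey.xlsx",
                      sheet_name="Ghana",
                      na_values=[777, "777"])

print(ghana.isna().sum())   # non-responses per question
```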
This repository was created to store, organize, and share data collected for the Eastern Kentucky Project, focusing on hydrological research in the region. It serves as a centralized platform to manage data efficiently and facilitate collaboration among researchers and stakeholders involved in the project.
The repository primarily contains data from level loggers, which are crucial for monitoring and recording water levels, temperature, and other hydrological parameters over time. The collected data has been carefully extracted, processed, and stored in Excel files to ensure compatibility with various analysis tools. This structured format enables easy access and seamless integration into research workflows.
In addition to providing secure storage, the repository is designed to support efficient data sharing, transparency, and interdisciplinary collaboration. By offering a well-organized dataset, it enables researchers to analyze and build upon existing data, promoting high-quality research outputs. The repository ultimately aims to advance understanding and inform decision-making in water resource management for Eastern Kentucky.
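As a hedged illustration of a typical workflow for the level-logger Excel files described above, the sketch below resamples a water-level record to daily means. The file name and column names ("timestamp", "water_level_m") are assumptions about the processed layout.

```python
# Hedged sketch of a typical level-logger workflow; file and column
# names ("timestamp", "water_level_m") are assumptions.
import pandas as pd

logger = pd.read_excel("level_logger_site1.xlsx",
                       parse_dates=["timestamp"],
                       index_col="timestamp")

# Daily mean water level from the high-frequency record
daily = logger["water_level_m"].resample("D").mean()
print(daily.describe())
```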
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A large dataset of index and dynamic parameters measured from resonant column (RC) and cyclic torsional shear (CTS) tests on 170 undisturbed, isotropically consolidated fine-grained specimens from 90 sites in Central and Northern Italy is made available. All tests were performed over the past 20 years at the Geotechnical Laboratory of the Civil and Environmental Engineering Department of the University of Florence, using the same apparatus and following the same standardized procedures.
The experimental data are organized in an Excel file (named “Italian_Clays_Archive.xlsx”). For each tested sample, the main physical, index, and dynamic properties measured are archived with the code number of the sample (No) in the sheet named “Dataset”, together with any available information about the borehole from which the sample was taken. The list and the meaning of the symbols used can be found in the sheet named “Legend”. Other sheets containing borehole stratigraphy are named “XX-ST” (where “XX” stands for the borehole code, BH), and they can be accessed directly from the “Dataset” sheet. Note that stratigraphy is given in its original format, when available; however, the depth and thickness of each layer can be easily deduced from the figure provided, and the soil lithology is well represented by the symbols used, which are those generally adopted internationally. Finally, the sheets named “YY-CTS-STEPZ” (where “YY” and “Z” stand for the sample code, No, and the step number, respectively) contain the shear stress and strain values measured during CTS tests at different steps (i.e., amplitudes of the applied cyclic dynamic torsional loading) during the 1st, 5th, 15th, 20th, and 25th cycles, and/or the corresponding shear modulus and damping ratio calculated from the same cycles.
The selected samples were taken mostly from Holocene and Pleistocene fluvio-lacustrine soil deposits at depths ranging from 1 m to 75 m below ground level, and they mainly consist of normally and over-consolidated clayey silts or clays (1 < OCR < 9.4) of medium-to-high plasticity (4 < PI < 84), with very low-to-high consistency (−1 < Ic < 1.9) and initial void ratio, e0, ranging between 0.175 and 2.456. The database also includes some samples of organic clays of low consistency, very high water content and void ratio, and low unit weight. The initial (small-strain) values of shear modulus, G0, and damping ratio, D0, range between 21 MPa and 292 MPa and between 0.8% and 5.1%, respectively. The smallest and largest shear strain values induced by RC and CTS tests are 1.9×10⁻⁵% and 6.3×10⁻¹%, respectively.
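The documented file and sheet naming makes the archive easy to navigate programmatically; a minimal sketch follows (only the names quoted above are taken from the dataset description).

```python
# Sketch of navigating the archive with the documented file/sheet names.
import pandas as pd

archive = pd.ExcelFile("Italian_Clays_Archive.xlsx")

dataset = archive.parse("Dataset")   # per-sample properties, code number No
legend = archive.parse("Legend")     # meaning of the symbols used

strat_sheets = [s for s in archive.sheet_names if s.endswith("-ST")]
cts_sheets = [s for s in archive.sheet_names if "-CTS-STEP" in s]
print(len(strat_sheets), "stratigraphy sheets;", len(cts_sheets), "CTS step sheets")
```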
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Excel spreadsheet containing raw data, organized by figure.