Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository contains a collection of data about 454 value chains from 23 rural European areas of 16 countries. The data were obtained through a semi-automatic workflow that transforms raw textual data from an unstructured MS Excel sheet into semantic knowledge graphs. In particular, the repository contains:
- An MS Excel sheet containing details of the value chains, provided by the MOuntain Valorisation through INterconnectedness and Green growth (MOVING) European project;
- 454 CSV files containing the events, titles, entities, and coordinates of the narratives of each value chain, obtained by pre-processing the MS Excel sheet;
- 454 Web Ontology Language (OWL) files. This collection of files is the result of the semi-automatic workflow and is organized as a semantic knowledge graph of narratives, where each narrative is a sub-graph describing one of the 454 value chains and aspects of its territory. The knowledge graph is based on the Narrative Ontology, developed by the Institute of Information Science and Technologies (ISTI-CNR) as an extension of CIDOC CRM, FRBRoo, and OWL Time;
- Two CSV files that compile all the available information extracted from the 454 OWL files;
- GeoPackage files with the geographic coordinates related to the narratives;
- HTML files that show the different SPARQL and GeoSPARQL queries;
- HTML files that show the story maps about the 454 value chains;
- An image showing how the various components of the dataset interact with each other.
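As a hedged illustration of how the OWL narrative files might be queried programmatically (the repository's own HTML files show the actual SPARQL and GeoSPARQL queries), here is a minimal Python/rdflib sketch. The file name, serialization format, and the use of crm:E5_Event as a query target are assumptions, since the Narrative Ontology's exact terms are not reproduced in this description.

```python
from rdflib import Graph

g = Graph()
# Hypothetical file name; the repository contains 454 OWL files.
g.parse("value_chain_001.owl", format="xml")  # assumes RDF/XML serialization

# CIDOC CRM models events as crm:E5_Event; since the Narrative Ontology
# extends CIDOC CRM, this class is a plausible (but unverified) target.
query = """
PREFIX crm: <http://www.cidoc-crm.org/cidoc-crm/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?event ?label WHERE {
  ?event a crm:E5_Event ;
         rdfs:label ?label .
}
"""
for event, label in g.query(query):
    print(event, label)
```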
The USDA Agricultural Research Service (ARS) recently established SCINet, which consists of a shared high-performance computing resource, Ceres, and the dedicated high-speed Internet2 network used to access Ceres. Current and potential SCINet users are using and generating very large datasets, so SCINet needs to be provisioned with adequate data storage for their active computing. It is not designed to hold data beyond active research phases. At the same time, the National Agricultural Library has been developing the Ag Data Commons, a research data catalog and repository designed for public data release and professional data curation. Ag Data Commons needs to anticipate the size and nature of data it will be tasked with handling. The ARS Web-enabled Databases Working Group, organized under the SCINet initiative, conducted a study to establish baseline data storage needs and practices, and to make projections that could inform future infrastructure design, purchases, and policies. The SCINet Web-enabled Databases Working Group helped develop the survey, which is the basis for an internal report. While the report was for internal use, the survey and resulting data may be generally useful and are being released publicly.

From October 24 to November 8, 2016, we administered a 17-question survey (Appendix A) by emailing a Survey Monkey link to all ARS Research Leaders, intending to cover the data storage needs of all 1,675 SY (Category 1 and Category 4) scientists. We designed the survey to accommodate either individual researcher responses or group responses. Research Leaders could decide, based on their unit's practices or their management preferences, whether to delegate the response to a data management expert in their unit, to all members of their unit, or to collate responses from their unit themselves before reporting in the survey. Larger storage ranges cover vastly different amounts of data, so the implications here could be significant depending on whether the true amount is at the lower or higher end of the range. Therefore, we requested more detail from "Big Data users": those 47 respondents who indicated they had more than 10 to 100 TB or over 100 TB of total current data (Q5). All other respondents are called "Small Data users." Because not all of these follow-up requests were successful, we used actual follow-up responses to estimate likely responses for those who did not respond. We defined active data as data that would be used within the next six months; all other data is considered inactive, or archival. To calculate per-person storage needs, we used the high end of the reported range divided by 1 for an individual response, or by G, the number of individuals in a group response. For Big Data users we used the actual reported values or estimated likely values.

Resources in this dataset:

Resource Title: Appendix A: ARS data storage survey questions.
File Name: Appendix A.pdf
Resource Description: The full list of questions asked, with the possible responses. The survey was not administered using this PDF; the PDF was generated directly from the administered survey using the Print option under Design Survey. Asterisked questions were required. A list of Research Units and their associated codes was provided in a drop-down not shown here.
Resource Software Recommended: Adobe Acrobat, url: https://get.adobe.com/reader/

Resource Title: CSV of Responses from ARS Researcher Data Storage Survey.
File Name: Machine-readable survey response data.csv
Resource Description: CSV file that includes raw responses from the administered survey, as downloaded unfiltered from Survey Monkey, including incomplete responses. Also includes additional classification and calculations to support analysis. Individual email addresses and IP addresses have been removed. This is the same data as in the Excel spreadsheet (also provided).

Resource Title: Responses from ARS Researcher Data Storage Survey.
File Name: Data Storage Survey Data for public release.xlsx
Resource Description: MS Excel worksheet that includes raw responses from the administered survey, as downloaded unfiltered from Survey Monkey, including incomplete responses. Also includes additional classification and calculations to support analysis. Individual email addresses and IP addresses have been removed.
Resource Software Recommended: Microsoft Excel, url: https://products.office.com/en-us/excel
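As a rough illustration of the per-person calculation described above, here is a minimal Python sketch; the range bound and group size are placeholders, not survey data.

```python
# Minimal sketch of the per-person storage calculation described above.
def per_person_tb(range_high_tb: float, group_size: int = 1) -> float:
    """High end of the reported storage range (TB) divided by the number of
    individuals covered by the response: 1 for an individual response,
    G for a group response."""
    return range_high_tb / max(group_size, 1)

# Placeholder example: a group response covering 5 scientists who reported
# a range whose high end is 10 TB -> 2 TB per person.
print(per_person_tb(10, 5))  # 2.0
```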
The MAR Web Geocoder is a web browser-based tool for geocoding locations, typically addresses, in Washington, DC. It is developed by the Office of the Chief Technology Officer (OCTO) and accepts Excel or CSV files as input, producing an Excel file as output. Geocoding is the process of assigning a location, in the form of geographic coordinates (often expressed as latitude and longitude), to spreadsheet data. This is done by comparing the descriptive geographic data to known geographic locations such as addresses, blocks, intersections, or place names.
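As a toy illustration of that comparison step (not the MAR Web Geocoder's actual matching logic, which handles addresses, blocks, intersections, and place names far more robustly), the following Python sketch matches addresses against a small table of made-up known locations.

```python
# Toy lookup table; entries and coordinates are illustrative, not real
# Master Address Repository data.
known_locations = {
    "1350 PENNSYLVANIA AVE NW": (38.8951, -77.0311),
    "441 4TH ST NW": (38.8950, -77.0164),
}

def geocode(address: str):
    """Return (latitude, longitude) for an exact-match address, else None."""
    return known_locations.get(address.strip().upper())

print(geocode("1350 Pennsylvania Ave NW"))  # (38.8951, -77.0311)
```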
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Escaped vs. unescaped text import into Excel.
https://www.archivemarketresearch.com/privacy-policy
The global spreadsheet editor market is experiencing robust growth, driven by increasing digitalization across industries and the rising adoption of cloud-based solutions. While precise figures for market size and CAGR are unavailable from the provided data, a reasonable estimate, considering the presence of major players like Microsoft, Google, and Apple alongside numerous smaller competitors, points to a substantial market. Assuming a 2025 market size of $50 billion, a figure consistent with the widespread use of spreadsheets across sectors, and given consistent technological advancements and expanding user bases, a conservative compound annual growth rate (CAGR) of 8% over the forecast period (2025-2033) seems plausible. This growth is fueled by several factors, including the increasing demand for data analysis tools across business functions, the integration of spreadsheet software with other productivity applications, and the growing popularity of collaborative features enabling real-time teamwork on spreadsheets. Furthermore, the development of advanced features such as improved data visualization, enhanced automation (e.g., macros, scripting), and robust mobile accessibility contributes significantly to market expansion.

The market's segmentation reflects this diversified demand, encompassing various deployment models (cloud, on-premise), operating systems (Windows, macOS, iOS, Android), and pricing tiers (free, subscription-based). Key players are continuously innovating to gain a competitive edge, focusing on user experience improvements, enhanced security features, and integration with other software ecosystems. The competitive landscape is highly dynamic, with established players facing challenges from both smaller, niche providers and the increasing adoption of free and open-source alternatives. Despite potential restraints such as data security concerns and the learning curve associated with advanced features, the overall outlook for the spreadsheet editor market remains positive, promising significant growth over the coming decade.
https://whoisdatacenter.com/terms-of-use/
Explore historical ownership and registration records by performing a reverse Whois lookup for the email address get-internet-excel.com@wix-domains.com.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset in the Excel spreadsheet accompanying this article consists of 207 rows and 24 columns. Each row represents an individual response to the questionnaire's items.
Excel document of results from an online survey, imported from Survey Monkey.
Open Database License (ODbL) v1.0: https://www.opendatacommons.org/licenses/odbl/1.0/
License information was derived automatically
The FDA shall publish, in a format that is understandable and not misleading to a lay person, and place on public display, a list of 93 harmful and potentially harmful constituents (HPHCs) in each tobacco product and smoke, as established under section 904(e) of the TCA. CTP has determined that the best means of displaying the data is a web-based list on FDA.gov. The HPHC back-end database and template for industry will be created in a future release of eSubmissions. The scope of this project is: Phase 1 (current POP): 1) build a website to support the display of the HPHC constituents, and 2) allow the user to access educational information about the health effects of the chemicals; Phase 2 (next POP): 1) allow users of the website to search by brand and sub-brand, 2) display a report on the HPHCs contained in that tobacco product, and 3) create an admin module to allow for an import of HPHC data via an Excel spreadsheet and to update the list of constituents.
Excel file showing a database of intellectual books about the internet published between 1994 and 2006, sampled from 10 different academic publishers using the keywords: internet; web; virtual; digital. The columns include: title; author(s)/editor(s); year of publication; publisher (house); number of editions and type; number of translations and languages; number of pages; number of citations (at time of sampling, June and July 2023); Google Books URL.
This dataset is a compilation of drill stem test observations, compiled by the Illinois State Geological Survey and published as a web feature service, an ESRI Service, a web map service, and as an Excel spreadsheet for the National Geothermal Data System. The downloadable Excel spreadsheets include information about the template, notes related to revisions of the template, Resource provider information, the data, a field list (data mapping view) and a worksheet with vocabularies for use in populating the spreadsheet (data valid terms).
https://whoisdatacenter.com/terms-of-use/
Explore historical ownership and registration records by performing a reverse Whois lookup for the email address high-speed-internet-excel.com@wix-domains.com.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Ziemann’s supplementary file. Tab-separated, plain text version of the Ziemann et al. [2] supplementary file. (TSV 148 kb)
This Excel template is an example taken from the GEO web site (http://www.ncbi.nlm.nih.gov/geo/info/spreadsheet.html#GAtemplates) which has been modified to conform to the SysMO JERM (Just Enough Results Model). Using templates helps with searching and comparing data as well as making it easier to submit data to public repositories for publications.
This program converts several Cufflinks output files into easily readable Microsoft Excel tables using the Apache POI library. Only the "cuffdiff" output format is currently supported, but future versions may include other output formats. The source code and executable directory structure must be downloaded at the GitHub repository using the "Download ZIP" button on the right-hand side of the page: https://github.com/njdbickhart/ConvertCufflinksToExcel. Installation and usage information can be found at https://github.com/njdbickhart/ConvertCufflinksToExcel/blob/master/README.md.
Resources in this dataset:
Resource Title: ConvertCufflinksToExcel.
File Name: Web Page, url: https://www.ars.usda.gov/research/software/download/?softwareid=493&modecode=80-42-05-30 (download page)
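The tool itself is written in Java against Apache POI, but the underlying idea (read a tab-separated cuffdiff table, write an Excel workbook) can be sketched in a few lines of Python with pandas; gene_exp.diff is cuffdiff's standard gene-level output, and the output file name is arbitrary. This is a minimal sketch of the concept, not the tool's implementation.

```python
import pandas as pd

# cuffdiff output is tab-separated plain text.
df = pd.read_csv("gene_exp.diff", sep="\t")

# Write to an Excel workbook (requires the openpyxl package).
df.to_excel("gene_exp.xlsx", index=False)
```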
With this add-in it is possible to create map templates from GIS files in KML format, and to create choropleths with them.
Provided you have access to KML format map boundary files, it is possible to create your own quick and easy choropleth maps in Excel. KML files can be converted from 'shape' files. Many shape files are available to download for free from the web, including from Ordnance Survey and the London Datastore. Standard mapping packages such as QGIS (free to download) and ArcGIS can convert the files to KML format, or the conversion can be scripted as sketched below.
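For users comfortable with scripting, here is a minimal Python/geopandas sketch of the same shapefile-to-KML conversion, as an alternative to QGIS or ArcGIS; the file names are placeholders.

```python
import fiona
import geopandas as gpd

# fiona does not enable the KML driver by default; switch it on first.
fiona.supported_drivers["KML"] = "rw"

boundaries = gpd.read_file("london_wards.shp")  # placeholder file name
boundaries = boundaries.to_crs(epsg=4326)       # KML expects WGS84 lon/lat
boundaries.to_file("london_wards.kml", driver="KML")
```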
A sample of a KML file (London wards) can be downloaded from this page, so that users can easily test the tool out.
Macros must be enabled for the tool to function.
When creating the map using the Excel tool, the 'unique ID' should normally be the area code and the 'Name' should be the area name; then, if required and there is additional data in the KML file, further 'data' fields can be added. These columns will appear below and to the right of the map. Otherwise, data can be added later next to the codes and names.
In the add-in version of the tool the final control, 'Scale (% window)' should not normally be changed. With the default value 0.5, the height of the map is set to be half the total size of the user's Excel window.
To run a choropleth, select the menu option 'Run Choropleth' to open the choropleth form.
To specify the colour ramp for the choropleth, the user enters the number of boxes into which the range is to be divided and the colours for the high and low ends of the range, selected via the coloured option boxes. If desired, hit the 'Swap' button to exchange the colours at the two ends of the range. Then hit the 'Choropleth' button.
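Conceptually, the ramp is a linear interpolation between the two end colours across the chosen number of boxes. The Python sketch below shows the idea; the add-in itself implements this in VBA, and the colours here are placeholders.

```python
def colour_ramp(low, high, boxes):
    """Return `boxes` RGB tuples grading linearly from `low` to `high`."""
    steps = max(boxes - 1, 1)
    return [
        tuple(round(lo + (hi - lo) * i / steps) for lo, hi in zip(low, high))
        for i in range(boxes)
    ]

# Example: five boxes from pale yellow to dark red.
print(colour_ramp((255, 255, 204), (153, 0, 13), 5))
```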
The default colours for the ends of the choropleth colour range are saved in the add-in, but different values can be selected by setting up a column range of up to twelve cells, anywhere in Excel, filled with the desired option colours. Then use the 'Colour range' control to select this range and hit 'Apply', having selected high or low values as wished. The 'Copy' button sets up a 'ColourRamp' sheet in the active workbook with the default colours, which can be extended or trimmed by just a few cells, saving the user time.
The add-in was developed entirely within the Excel VBA IDE by Tim Lund. He is kindly distributing the tool for free on the Datastore but suggests that users who find it useful make a donation to the Shelter charity. The tool is not intended to be actively maintained, but any users or developers who would like to add more features are welcome to email the author.
Acknowledgments
Calculation of Excel freeform shapes from latitudes and longitudes is done using calculations from the Ordnance Survey.
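The add-in's VBA implements the Ordnance Survey's projection formulas directly; for reference, the same conversion from WGS84 latitude/longitude to British National Grid eastings/northings can be sketched with pyproj. The coordinates below are illustrative.

```python
from pyproj import Transformer

# EPSG:4326 = WGS84 lon/lat; EPSG:27700 = OSGB36 / British National Grid.
to_bng = Transformer.from_crs("EPSG:4326", "EPSG:27700", always_xy=True)

easting, northing = to_bng.transform(-0.1278, 51.5074)  # central London
print(round(easting), round(northing))  # roughly 530000, 180000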
The U.S. Geological Survey (USGS), in cooperation with Connecticut Department of Transportation, completed a study to improve flood-frequency estimates in Connecticut. This companion data release is a Microsoft Excel workbook for: (1) computing flood discharges for the 50- to 0.2-percent annual exceedance probabilities from peak-flow regression equations, and (2) computing additional prediction intervals, not available through the USGS StreamStats web application. The current StreamStats application (version 4) only computes the 90-percent prediction interval for stream sites in Connecticut. The Excel workbook can be used to compute the 70-, 80-, 90-, 95-, and 99-percent prediction intervals. The prediction interval provides upper and lower limits of the estimated flood discharge with a certain probability, or level of confidence in the accuracy of the estimate. The standard error of prediction for the Connecticut peak-flow regression equations ranged from 26.3 to 45.0 percent (Ahearn and Hodgkins, 2020). The Excel workbook consists of four worksheets. The worksheets provide an overview of how the application works; input and output tables of the explanatory variables and flood discharges, and graphical display of the results; and the computational formulas used to estimate the flood discharges and prediction intervals.
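For readers wanting the general shape of the computation, a prediction interval for a regression estimate fit in log10 space runs from Q / 10^(t·Sp) to Q × 10^(t·Sp), where Sp is the standard error of prediction in log10 units and t is the Student's t critical value. The Python sketch below is a generic illustration with placeholder numbers; the workbook's exact formulas, degrees of freedom, and Sp values follow Ahearn and Hodgkins (2020).

```python
from scipy.stats import t

def prediction_interval(q_cfs, sp_log10, df, level=0.90):
    """Lower/upper limits for a discharge estimate q_cfs (ft3/s), given a
    standard error of prediction in log10 units and residual degrees of
    freedom. Generic form only; not the workbook's exact computation."""
    t_crit = t.ppf(1 - (1 - level) / 2, df)
    half_width = t_crit * sp_log10
    return q_cfs / 10 ** half_width, q_cfs * 10 ** half_width

# Placeholder numbers: 5,000 ft3/s estimate, Sp = 0.12 log10 units, df = 40.
print(prediction_interval(5000, 0.12, 40))
```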
Project: Online Sales Analysis
Tool: Microsoft Excel
Overview: Built a dynamic dashboard using Excel to analyze 10,889 sales records across multiple years, locations, and products.
Key Features:
- KPIs: Orders, Sales, Profit, Tax, Product Count
- 10+ interactive visuals: Pie Chart, Stacked Bar, Region-wise Comparison
- Slicers for Year, Month, and Location
Download the Excel file to interact with slicers and explore insights.
Excel spreadsheet listing university hospitals in Japan, including their names and website URLs.
This dataset is a compilation of drill stem test observations, compiled by the Kansas Geological Survey and published as a web feature service, an ESRI Service, a web map service, and as an Excel spreadsheet for the National Geothermal Data System. The downloadable Excel spreadsheets include information about the template, notes related to revisions of the template, Resource provider information, the data, a field list (data mapping view) and a worksheet with vocabularies for use in populating the spreadsheet (data valid terms).