Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background: Microsoft Excel automatically converts certain gene symbols, database accessions, and other alphanumeric text into dates, scientific notation, and other numerical representations. These conversions lead to subsequent, irreversible corruption of the imported text. A recent survey of popular genomic literature estimates that one-fifth of all papers with supplementary gene lists suffer from this issue.

Results: Here, we present an open-source tool, Escape Excel, which prevents these erroneous conversions by generating an escaped text file that can be safely imported into Excel. Escape Excel is implemented in a variety of formats (http://www.github.com/pstew/escape_excel), including a command-line Perl script, a Windows-only Excel Add-In, an OS X drag-and-drop application, a simple web server, and a Galaxy web environment interface. Test server implementations are accessible as a Galaxy interface (http://apostl.moffitt.org) and a simple non-Galaxy web server (http://apostl.moffitt.org:8000/).

Conclusions: Escape Excel detects and escapes a wide variety of problematic text strings so that they are not erroneously converted into other representations upon importation into Excel. Examples of problematic strings include date-like strings, time-like strings, numbers with leading zeroes, and long numeric and alphanumeric identifiers that should not be automatically converted into scientific notation. It is hoped that greater awareness of these potential data corruption issues, together with diligent escaping of text files prior to importation into Excel, will help to reduce the amount of Excel-corrupted data in scientific analyses and publications.
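The escaping idea itself is simple: any field that Excel might reinterpret is wrapped as ="value", which Excel imports as literal text. The following Python sketch illustrates the approach on a tab-delimited file; the patterns are simplified assumptions for illustration, not Escape Excel's actual rule set (see the Perl script in the repository for that).

```python
import re
import sys

# Heuristics for strings Excel is known to mangle on import: date-like
# symbols (e.g., SEPT1 -> 1-Sep), time-like strings, leading-zero
# identifiers, and long digit runs collapsed into scientific notation.
# These patterns are illustrative, not Escape Excel's actual rules.
RISKY = [
    re.compile(r"^\d{1,4}[-/]\d{1,2}([-/]\d{1,4})?$"),  # date-like, e.g. 1/2/2021
    re.compile(r"^\d{1,2}:\d{2}(:\d{2})?$"),            # time-like, e.g. 1:10
    re.compile(r"^0\d+$"),                              # leading zero, e.g. 007
    re.compile(r"^\d{12,}$"),                           # scientific-notation risk
    re.compile(r"^[A-Za-z]{3,4}\d{1,2}$"),              # symbol+digit, e.g. SEPT1
]

def escape_field(field: str) -> str:
    """Wrap risky fields as ="value" so Excel imports them as literal text."""
    if any(p.match(field) for p in RISKY):
        return '="' + field + '"'
    return field

# Usage: python escape_sketch.py input.tsv output.tsv
with open(sys.argv[1]) as src, open(sys.argv[2], "w") as dst:
    for line in src:
        fields = line.rstrip("\n").split("\t")
        dst.write("\t".join(escape_field(f) for f in fields) + "\n")
```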
This data release comprises geospatial and tabular data developed for the HayWired communities at risk analysis. The HayWired earthquake scenario is a magnitude 7.0 earthquake hypothesized to occur on the Hayward Fault on April 18, 2018, with an epicenter in the city of Oakland, CA. The following 17 counties are included in this analysis unless otherwise specified: Alameda, Contra Costa, Marin, Merced, Monterey, Napa, Sacramento, San Benito, San Francisco, San Joaquin, San Mateo, Santa Clara, Santa Cruz, Solano, Sonoma, Stanislaus, and Yolo. The vector data are a geospatial representation of building damage based on square footage damage estimates by Hazus occupancy class for developed areas covering all census tracts in 17 counties in and around the San Francisco Bay region in California, for (1) earthquake hazards (ground shaking, landslide, and liquefaction) and (2) all hazards (ground shaking, landslide, liquefaction, and fire) resulting from the HayWired earthquake scenario mainshock. The tabular data cover: (1) damage estimates, by Hazus occupancy class, of square footage, building counts, and households affected by the HayWired earthquake scenario mainshock for all census tracts in 17 counties in and around the San Francisco Bay region in California; (2) potential total population residing in block groups in nine counties in the San Francisco Bay region in California (Alameda, Contra Costa, Marin, Napa, San Francisco, San Mateo, Santa Clara, Solano, and Sonoma); (3) a subset of select tables for 17 counties in and around the San Francisco Bay region in California from the U.S. Census Bureau American Community Survey 5-year (2012-2016) estimates at the block group level selected to represent potentially vulnerable populations that may, in the event of a major disaster, leave an area rather than stay; and (4) building and contents damage estimates (in thousands of dollars, 2005 vintage), by Hazus occupancy class, for the HayWired earthquake scenario mainshock for 17 counties in and around the San Francisco Bay region in California. The vector .SHP datasets were developed and intended for use in GIS applications such as ESRI's ArcGIS software suite. The tab-delimited .TXT datasets were developed and intended for use in standalone spreadsheet or database applications (such as Microsoft Excel or Access). Please note that some of these data are not optimized for use in GIS applications (such as ESRI's ArcGIS software suite) as-is--census tracts or counties are repeated (the data are not "one-to-one"), so not all information belonging to a tract or county would necessarily be associated with a single record. Separate preparation is needed in a standalone spreadsheet or database application like Microsoft Excel or Microsoft Access before using these data in a GIS. These data support the following publications: Johnson, L.A., Jones, J.L., Wein, A.M., and Peters, J., 2020, Communities at risk analysis of the HayWired scenario, chaps. U1-U5 of Detweiler, S.T., and Wein, A.M., eds., The HayWired earthquake scenario--Societal consequences: U.S. Geological Survey Scientific Investigations Report 2017-5013, https://doi.org/10.3133/sir20175013.
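Because the tab-delimited tables are not one-to-one, a small aggregation step is one way to prepare them for a GIS join. The pandas sketch below illustrates this; the file and column names are hypothetical placeholders, and the actual schemas are documented in the release's metadata.

```python
import pandas as pd

# Hypothetical file and column names for illustration; see the metadata
# accompanying each HayWired .TXT table for the real schema.
df = pd.read_csv(
    "haywired_damage_by_tract.txt",
    sep="\t",
    dtype={"TRACT_FIPS": str},  # keep FIPS codes as text to preserve leading zeroes
)

# Each census tract repeats once per Hazus occupancy class, so collapse
# to one record per tract before joining to tract polygons in a GIS.
per_tract = df.groupby("TRACT_FIPS", as_index=False)["DAMAGED_SQFT"].sum()
per_tract.to_csv("haywired_damage_per_tract.csv", index=False)
```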
Microsoft 365 is used by over * million companies worldwide, with over *** million customers in the United States alone using the office suite software. Office 365 is the brand name previously used by Microsoft for a group of software applications providing productivity-related services to its subscribers. Office 365 applications include Outlook, OneDrive, Word, Excel, PowerPoint, OneNote, SharePoint, and Microsoft Teams. The consumer and small business plans of Office 365 were renamed Microsoft 365 on April 21, 2020.

Global office suite market share: An office suite is a collection of software applications (word processing, spreadsheets, database, etc.) designed to be used for tasks within an organization. Worldwide market share of office suite technologies is split between Google's G Suite and Microsoft's Office 365, with G Suite controlling around ** percent of the global market and Office 365 holding around ** percent. This trend is similar across most worldwide regions.
The Adventure Works dataset is a comprehensive and widely used sample database provided by Microsoft for educational and testing purposes. It's designed to represent a fictional company, Adventure Works Cycles, which is a global manufacturer of bicycles and related products. The dataset is often used for learning and practicing various data management, analysis, and reporting skills.
1. Company Overview:
   - Industry: Bicycle manufacturing
   - Operations: Global presence with various departments such as sales, production, and human resources.

2. Data Structure:
   - Tables: The dataset includes a variety of tables, typically organized into categories such as:
     - Sales: Information about sales orders, products, and customer details.
     - Production: Data on manufacturing processes, inventory, and product specifications.
     - Human Resources: Employee details, departments, and job roles.
     - Purchasing: Vendor information and purchase orders.

3. Sample Tables:
   - Sales.SalesOrderHeader: Contains information about sales orders, including order dates, customer IDs, and total amounts.
   - Sales.SalesOrderDetail: Details of individual items within each sales order, such as product ID, quantity, and unit price.
   - Production.Product: Information about the products being manufactured, including product names, categories, and prices.
   - Production.ProductCategory: Data on product categories, such as bicycles and accessories.
   - Person.Person: Contains personal information about employees and contacts, including names and addresses.
   - Purchasing.Vendor: Information on vendors that supply the company with materials.

4. Usage:
   - Training and Education: It's widely used for teaching SQL, data analysis, and database management (a sample query sketch follows this overview).
   - Testing and Demonstrations: Useful for testing software features and demonstrating data-related functionalities.

5. Tools:
   - The dataset is often used with Microsoft SQL Server, but it's also compatible with other relational database systems.
The Adventure Works dataset provides a rich and realistic environment for practicing a range of data-related tasks, from querying and reporting to data modeling and analysis.
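As a concrete illustration of the kind of teaching query the schema supports, the sketch below totals revenue by product. It is a hedged example: the connection string is a placeholder, and it assumes a local SQL Server instance with a restored AdventureWorks database and the pyodbc package installed.

```python
import pyodbc  # assumes the Microsoft ODBC Driver for SQL Server is installed

# Placeholder connection details; point these at your own server.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=AdventureWorks;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# Top ten products by revenue, joining the order-detail and product
# tables described above (LineTotal is a computed column in
# Sales.SalesOrderDetail).
cursor.execute("""
    SELECT TOP 10 p.Name, SUM(sod.LineTotal) AS Revenue
    FROM Sales.SalesOrderDetail AS sod
    JOIN Production.Product AS p ON p.ProductID = sod.ProductID
    GROUP BY p.Name
    ORDER BY Revenue DESC;
""")
for name, revenue in cursor.fetchall():
    print(f"{name}: {revenue:,.2f}")
```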
The datasets in the .pdf and .zip files attached to this record are in support of Intelligent Transportation Systems Joint Program Office (ITS JPO) report FHWA-JPO-15-222, "Impacts Assessment of Dynamic Speed Harmonization with Queue Warning: Task 3, Impacts Assessment Report". The files in these zip files are specifically related to the US-101 Testbed, near San Mateo, CA. The uncompressed and compressed files total 2.0265 GB in size. The files have been uploaded as-is; no further documentation was supplied by NTL. All located .docx files were converted to .pdf document files, which are an open, archival format; these .pdfs were then added to the zip file alongside the original .docx files. The attached zip files can be unzipped using any zip compression/decompression software. These zip files contain files in the following formats: .pdf document files, which can be read using any PDF reader; .xlsm macro-enabled spreadsheet files, which can be read in Microsoft Excel and some open-source spreadsheet programs; .accdb database files, which may be opened with Microsoft Access and some open-source database applications; as well as .db generic database files, often associated with thumbnail images in the Windows operating environment. These files were last accessed in 2017. File and .zip file names include: FHWA_JPO_15_222_INFLO_Performance_Measure_METADATA.pdf ; FHWA_JPO_15_222_INFLO_Performance_Measure_METADATA.docx ; FHWA_JPO_15_222_INFLO_VISSIM_Output_and_Analysis_Spreadsheets.zip ; FHWA_JPO_15_222_INFLO_Spreadsheet_PDFs.zip ; FHWA_JPO_15_222_DATA_CV50.zip ; and, FHWA_JPO_15_222_DATA_CV25.zip
The Texas Water Development Board (TWDB) Database Reports Application, also known as the Secure Agency Reporting Application (SARA), can be used to explore available reports, run reports, and export report data in formats such as PDF and Excel. SARA contains reports pertaining to the Water Use Survey, Regional Water Plans, and State Water Plans. Contact Email: webmaster@twdb.texas.gov
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Tandem mass spectrometry-based proteomics experiments produce large amounts of raw data, and different database search engines are needed to reliably identify all the proteins from these data. Here, we present Compid, an easy-to-use software tool that can be used to integrate and compare protein identification results from two search engines, Mascot and Paragon. Additionally, Compid enables extraction of information from large Mascot result files that cannot be opened via the Web interface, and calculation of general statistical information about peptide and protein identifications in a data set. To demonstrate the usefulness of this tool, we used Compid to compare Mascot and Paragon database search results for a mitochondrial proteome sample of human keratinocytes. The reports generated by Compid can be exported and opened as Excel documents or as text files using configurable delimiters, allowing the analysis and further processing of Compid output with a multitude of programs. Compid is freely available and can be downloaded from http://users.utu.fi/lanatr/compid. It is released under an open source license (GPL), enabling modification of the source code. Its modular architecture allows for the creation of supplementary software components, e.g., to enable support for additional input formats and report categories.
The Ontario government generates and maintains thousands of datasets. Since 2012, we have shared data with Ontarians via a data catalogue. Open data is data that is shared with the public.

Ontario's Open Data Directive states that all data must be open, unless there is good reason for it to remain confidential. Ontario's Chief Digital and Data Officer also has the authority to make certain datasets available publicly. Datasets listed in the catalogue that are not open are labelled accordingly.

If you want to use data you find in the catalogue, that data must have a licence – a set of rules that describes how you can use it. Most of the data available in the catalogue is released under Ontario's Open Government Licence. However, each dataset may be shared with the public under other kinds of licences or no licence at all. If a dataset doesn't have a licence, you don't have the right to use the data. If you have questions about how you can use a specific dataset, please contact us.

The Ontario Data Catalogue endeavors to publish open data in a machine-readable format. For machine-readable datasets, you can simply retrieve the file you need using the file URL.

The Ontario Data Catalogue is built on CKAN, which means the catalogue has the following features you can use when building applications. APIs (application programming interfaces) let software applications communicate directly with each other. If you are using the catalogue in a software application, you might want to extract data from the catalogue through the catalogue API (a minimal example of calling the API appears at the end of this description). Note: all Datastore API requests to the Ontario Data Catalogue must be made server-side. The catalogue's collection of dataset metadata (and dataset files) is searchable through the CKAN API. The Ontario Data Catalogue has more than just CKAN's documented search fields; you can also search these custom fields. You can also use the CKAN API to retrieve metadata about a particular dataset and check for updated files. Read the complete documentation for CKAN's API. Some of the open data in the Ontario Data Catalogue is available through the Datastore API, which lets you search and access the machine-readable open data in the catalogue. Read the complete documentation for CKAN's Datastore API.

The Ontario Data Catalogue contains a record for each dataset that the Government of Ontario possesses. Some of these datasets will be available to you as open data. Others will not be available to you, because the Government of Ontario is unable to share data that would break the law or put someone's safety at risk.

You can search for a dataset with a word that might describe a dataset or topic. Use words like "taxes" or "hospital locations" to discover what datasets the catalogue contains. You can search for a dataset from 3 spots on the catalogue: the homepage, the dataset search page, or the menu bar available across the catalogue. On the dataset search page, you can also filter your search results. You can select filters on the left-hand side of the page to limit your search to datasets with your favourite file format, datasets that are updated weekly, datasets released by a particular organization, or datasets that are released under a specific licence. Go to the dataset search page to see the filters that are available to make your search easier. You can also do a quick search by selecting one of the catalogue's categories on the homepage. These categories can help you see the types of data we have on key topic areas.

When you find the dataset you are looking for, click on it to go to the dataset record. Each dataset record will tell you whether the data is available and, if so, tell you about the data available. An open dataset might contain several data files. These files might represent different periods of time, different subsets of the dataset, different regions, language translations, or other breakdowns. You can select a file and either download it or preview it. Make sure to read the licence agreement to make sure you have permission to use it the way you want. A non-open dataset may be not available for many reasons; data that is non-open may still be subject to freedom of information requests.

The catalogue has tools that enable all users to visualize the data in the catalogue without leaving the catalogue – no additional software needed. Have a look at our walk-through of how to make a chart in the catalogue.

Get automatic notifications when datasets are updated. You can choose to get notifications for individual datasets, an organization's datasets, or the full catalogue. You don't have to provide any personal information – just subscribe to our feeds using any feed reader you like, using the corresponding notification web addresses. Copy those addresses and paste them into your reader. Your feed reader will let you know when the catalogue has been updated.

The catalogue provides open data in several file formats (e.g., spreadsheets, geospatial data, etc.). Learn about each format and how you can access and use the data each file contains:

- CSV: A file that has a list of items and values separated by commas without formatting (e.g., colours, italics) or extra visual features. This format provides just the data that you would display in a table. XLSX (Excel) files may be converted to CSV so they can be opened in a text editor. How to access the data: open with any spreadsheet software application (e.g., Open Office Calc, Microsoft Excel) or text editor. Note: this format is considered machine-readable; it can be easily processed and used by a computer. Files that have visual formatting (e.g., bolded headers and colour-coded rows) can be hard for machines to understand; these elements make a file more human-readable and less machine-readable.

- TXT: A file that provides information without formatted text or extra visual features and that may not follow a pattern of separated values like a CSV. How to access the data: open with any word processor or text editor available on your device (e.g., Microsoft Word, Notepad).

- XLSX: A spreadsheet file that may also include charts, graphs, and formatting. How to access the data: open with a spreadsheet software application that supports this format (e.g., Open Office Calc, Microsoft Excel). Data can be converted to a CSV for a non-proprietary format of the same data without formatted text or extra visual features.

- SHP: A shapefile provides geographic information that can be used to create a map or perform geospatial analysis based on location, points/lines, and other data about the shape and features of the area. It includes required files (.shp, .shx, .dbf) and might include corresponding files (e.g., .prj). How to access the data: open with a geographic information system (GIS) software program (e.g., QGIS).

- ZIP: A package of files and folders that can contain any number of different file types. How to access the data: open with an unzipping software application (e.g., WinZip, 7Zip). Note: if a ZIP file contains .shp, .shx, and .dbf file types, it is an ArcGIS ZIP: a package of shapefiles which provide information to create maps or perform geospatial analysis that can be opened with ArcGIS (a geographic information system software program).

- A file that provides information related to a geographic area (e.g., phone number, address, average rainfall, number of owl sightings in 2011, etc.) and its geospatial location (i.e., points/lines). How to access the data: open using a GIS software application to create a map or do geospatial analysis; it can also be opened with a text editor to view raw information. Note: this format is machine-readable; it can be easily processed and used by a computer.

- A text-based format for sharing data in a machine-readable way that can store data with more unconventional structures such as complex lists. How to access the data: open with any text editor (e.g., Notepad) or access through a browser. Note: this format is machine-readable.

- A text-based format to store and organize data in a machine-readable way that can store data with more unconventional structures (not just data organized in tables). How to access the data: open with any text editor (e.g., Notepad). Note: this format is machine-readable.

- KML: A file that provides information related to an area (e.g., phone number, address, average rainfall, number of owl sightings in 2011, etc.) and its geospatial location (i.e., points/lines). How to access the data: open with a geospatial software application that supports the KML format (e.g., Google Earth).

- Beyond 20/20 (IVT): This format contains files with data from tables used for statistical analysis and data visualization of Statistics Canada census data. How to access the data: open with the Beyond 20/20 application.

- Access database: A database which links and combines data from different files or applications (including HTML, XML, Excel, etc.). The database file can be converted to a CSV/TXT to make the data machine-readable, but human-readable formatting will be lost. How to access the data: open with Microsoft Office Access (a database management system used to develop application software).

- PDF: A file that keeps the original layout and formatting of a document regardless of the software used to view it. How to access the data: open with any PDF reader (e.g., Adobe Acrobat Reader).
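As mentioned above, the catalogue's CKAN API can be called from any HTTP client. The Python sketch below shows the standard CKAN action-API pattern; the base URL is assumed to be the Ontario Data Catalogue's public endpoint, and the dataset id is a placeholder.

```python
import requests

# Assumed public endpoint; the /api/3/action/ path is standard CKAN.
BASE = "https://data.ontario.ca/api/3/action"

# Full-text search of dataset metadata (package_search).
resp = requests.get(f"{BASE}/package_search", params={"q": "hospital locations", "rows": 5})
resp.raise_for_status()
for pkg in resp.json()["result"]["results"]:
    print(pkg["title"], "|", pkg.get("license_title", "no licence"))

# Retrieve one dataset's metadata, including its resource file URLs
# (package_show); the id below is a placeholder dataset name.
resp = requests.get(f"{BASE}/package_show", params={"id": "example-dataset-name"})
resp.raise_for_status()
for res in resp.json()["result"]["resources"]:
    print(res["format"], res["url"])
```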
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This compilation of additional lecture materials offers a practical introduction to key Computer Science (CS) and digital tools and concepts aimed at enhancing research, teaching, and administrative efficiency. Prepared by Dr. I. de Zarzà and reviewed and edited by Dr. J. de Curtò, the materials are designed as a transversal resource to support students from diverse disciplines, ranging from engineering and business to public management and health sciences. Topics include:

· Introduction to Programming
· Spreadsheet software and Excel functions
· Word processing and Overleaf (LaTeX)
· Presentation tools including PowerPoint, SlidesAI, and Genially
· Prompt engineering and AI-assisted writing with Copilot and ChatGPT
· Web and blog creation using HTML and Blogger
· Introduction to databases (SQL and NoSQL)
· Cybersecurity fundamentals and safe digital practices
· Multimedia generation with AI (voice, video, and music tools like Suno and Sora)

Developed across various undergraduate programs at the Universidad de Zaragoza, the notes combine technical know-how with real-world applications in academic and public sector contexts.
Historical data from 1980, with forecasts out ten years, for all major macroeconomic indicators. The data are accessible by software that graphs and builds country and regional Excel spreadsheets, and by other software that simulates the model results and is more efficient for developing spreadsheets for a small number of variables.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A readme.txt is available alongside the dataset.
File List: A folder for each thesis chapter and appendix. Within each folder are folders pertaining to each figure, except for SEM and optical microscope images, which are available in the thesis (main text and appendix).
- Raw (.raw) files generated directly from the PXRD diffractometer; these were then converted to .xy files using PowDLL (a minimal example of reading these two-column files appears after this list).
- .UA files (SDT files) generated directly from the Discovery SDT 650; raw data were opened using TA Instruments Thermal Analysis software (Advantage/Universal Analysis (UA) Software) and exported as an Excel spreadsheet.
- .ngb-ddg files generated directly from the NETZSCH DSC 214 Polyma calorimeter; these files were opened in the Netzsch Proteus® software package and exported as .csv files.
- FTIR .txt files were exported directly from the FTIR spectrometers (Bruker Tensor 27 FTIR spectrometer or Thermo Fisher Scientific Nicolet iS50 FT-IR spectrometer). These can be opened in Notepad, Excel, or equivalent programmes.
- Raman spectra sent directly from collaborators (University of Jena) as .xlsx data, which can be opened using Microsoft Excel.
- .int01 and .dofr files generated using GudRunX software. Raw data from the synchrotron were opened as .xy files in GudRunX; the data were processed (as described in the thesis) and .int01 and .dofr files were automatically generated.
- PCA and MLR analysis performed in Origin Pro Graphing software; the output data were copied into .xlsx spreadsheets as described in the thesis.
- .cif files downloaded directly from the CCDC database; distances (Figure 4.54) were generated using CCDC's Mercury software and exported directly from Mercury.
- 31P NMR data sent directly from collaborators (Universidad Autónoma de Madrid) as .opju files, which were opened in Origin Pro Graphing software. Gaussian fitting was performed in Origin Pro Graphing software and data were copied to Microsoft Excel.
- Nanoindentation data were sent directly from collaborators (University of Jena) as .opju files, which were opened in Origin Pro Graphing software (Figures 4.71-4.72), and data were copied to Microsoft Excel.
- EDX .spx files generated on an FEI Nova Nano SEM 450 electron microscope; these can be opened in ESPRIT software (Bruker).
- Data for CO2 isotherms and N2 isotherms (50-70 wt% composites, Ch5 and Ch7 Fig 7.14) were sent directly from collaborators (National Institute of Chemistry, Slovenia) as .opju files, which were opened in Origin Pro Graphing software and copied and pasted into Microsoft Excel.
- .smp files were generated by the Micromeritics ASAP 2020 instrument; data were then exported into Microsoft Excel using Micromeritics' MicroActive software.
- Rietveld refinement of VT-PXRD data (Ch5, Fig 5.47-5.50): '.xy' files are the raw data from the diffractometer (converted from .raw using PowDLL); these were used in the Rietveld refinement Topas input file (.inp) generated using the jEdit Programmer's text editor.
- Compression test results (Ch6, Fig 6.26) were sent directly as an .opju file by collaborators (University of Jena). Data were copied into Microsoft Excel and saved as .xlsx files.
- Gas adsorption data (CO2, N2) for Ch6 samples (Fig 6.27-6.28) were sent as .SMP files and the extracted .xlsx files by collaborators (University of Valencia).
- .dsw and .bsw files were generated directly on an Agilent UV-VIS spectrophotometer; data were exported as .xlsx or .csv files using Agilent's Cary WinUV Software platform. (The .BSW file in Ch6 Fig 6.29 contains UV/VIS absorbances of PIM-1, agZIF-62 and (PIM-1)0.5(agZIF-62)0.5 composite.)
- .spa files (Ch7 FTIR) were generated directly using a Nicolet iS50 FT-IR spectrometer. These were exported as .csv files using Thermo Fisher Scientific’s Omnic Series Software.
- Mass loss data (Fig 7.18) were generated by physically weighing samples and recording the mass in a Microsoft Excel spreadsheet.
- pH values (Fig 7.23) were recorded manually and entered into a Microsoft Excel spreadsheet.
- Microsphere sizes (Fig 7.6, 7.26) were measured from the associated SEM images and entered into a Microsoft Excel spreadsheet.
- Folder: Appendices: A3, A6, A26-28, A39, A56-60, A79, A82-83, A86-89: raw 1H NMR spectra obtained directly from a Bruker Avance III HD 500 MHz spectrometer at the Department of Chemistry, University of Cambridge.
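As noted for the PXRD entries above, the converted .xy files are plain two-column text (2-theta versus intensity), so they can be inspected without the instrument software. A minimal sketch, with a placeholder file name:

```python
import numpy as np
import matplotlib.pyplot as plt

# Load a two-column .xy pattern (2-theta, intensity); the file name
# here is a placeholder for any of the converted PXRD files.
two_theta, intensity = np.loadtxt("sample_pattern.xy", unpack=True)

plt.plot(two_theta, intensity, lw=0.8)
plt.xlabel(r"2$\theta$ (degrees)")
plt.ylabel("Intensity (a.u.)")
plt.show()
```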
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Fully 2D-curated database of 313 QSAR-ready hair dye substances available for download as an Excel spreadsheet or SDF (structure-data file). Includes information such as substance name, structure, CASRNs, DTXSIDs, classes, color (if applicable), and computed properties. The SDF can be opened using chemical drawing software such as ChemDraw Professional or MarvinView; Marvin (ChemAxon) can be downloaded at: https://chemaxon.com/products/marvin/download. New with Version 3: better-defined structures for substances with HDSD ID 91, 99, and 110 (aromaticity issues resolved). For details regarding database development, analysis, and potential applications, see https://dx.doi.org/10.1021/acssuschemeng.7b03795. We are seeking to improve the HDSD. Please contact Tova N. Williams at tovanwilliams@gmail.com for suggestions.
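For those who prefer a programmatic route to the drawing packages named above, the SDF can also be parsed with an open-source toolkit. A minimal sketch using RDKit (our suggestion, not something the database authors mention), with a placeholder file name:

```python
from rdkit import Chem  # open-source cheminformatics toolkit

# Placeholder file name for the downloaded structure-data file.
for mol in Chem.SDMolSupplier("hdsd_v3.sdf"):
    if mol is None:
        continue  # skip any records RDKit cannot parse
    # Print each record's title line and a canonical SMILES string.
    print(mol.GetProp("_Name"), Chem.MolToSmiles(mol))
```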
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This study presents the development of a novel ecodesign approach based on a parametric life cycle assessment (LCA). The developed method allows for the comparison of environmental impacts across a vast number of different product configurations, which are derived automatically by determining every possible combination of the given design options. The life cycle model features a stochastic failure and repair simulation to account for a wide range of use cases, as well as a recycling simulation that can determine the environmentally optimal recycling route. The developed method is tested on an exemplary case study of a smartphone. Despite efficiency limitations of the accompanying software tool prototype that was developed and used for the case study, the method was shown to identify the environmental influence of different design options as well as the product configuration with the least annual global warming potential.
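The core enumeration step, deriving every product configuration as the Cartesian product of the design options, can be sketched in a few lines. The option names below are hypothetical and are not taken from the study.

```python
from itertools import product

# Hypothetical design-option space for a smartphone-like product.
design_options = {
    "battery": ["standard", "high_capacity"],
    "housing": ["plastic", "aluminium"],
    "display": ["lcd", "oled"],
}

# Every possible configuration is one combination of the options.
configurations = [
    dict(zip(design_options, combo))
    for combo in product(*design_options.values())
]
print(len(configurations))  # 2 * 2 * 2 = 8 configurations to assess
```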
This file contains the database Excel file with data and calculations on failure and repair statistics, material compositions, and input tables for the software tool prototype developed in the study. It can be inspected as-is to understand the underlying data and procedure presented in the study, or used as an input for the Python source code to run the LCA model, which can be found here: https://doi.org/10.5281/zenodo.10611008
Note: References to licensed environmental datasets from the Sphera and ecoinvent databases have been deleted in the published version. In order to run the software tool, please add the respective values for the Global Warming Potential (or alternative impact categories) in the "processes_data" sheet and delete the suffix "_noLCIA" from the file name.
The data presented in this data release represent observations of postfire debris flows that have been collected from publicly available datasets. Data originate from 13 different countries: the United States, Australia, China, Italy, Greece, Portugal, Spain, the United Kingdom, Austria, Switzerland, Canada, South Korea, and Japan. The data are located in the file called "PFDF_database_sortedbyReference.txt", and a description of each column header can be found in both the file "column_headers.txt" and the metadata file ("Post-fire Debris-Flow Database (Literature Derived).xml"). The observations are derived from areas that have been burned by wildfire and are global in nature. However, this dataset is synthesized from information collected by many different researchers for different purposes, and therefore not all fields are available for each of the observations. Missing information is indicated by the value "-9999" in the "PFDF_database_sortedbyReference.txt" file. Note that the text file contains special characters and a mix of date-time formats that reflect the original data provided by the authors. The text may not be displayed correctly if it is opened by proprietary software such as Microsoft Excel, but will appear correctly when opened in a text editor.
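Given that warning, a plain-text-aware reader is the safer route into the file. A minimal pandas sketch; the tab delimiter and UTF-8 encoding are assumptions to verify against the metadata file.

```python
import pandas as pd

# Read everything as text to preserve the mixed date-time formats, and
# map the -9999 sentinel to missing values.
df = pd.read_csv(
    "PFDF_database_sortedbyReference.txt",
    sep="\t",            # assumed delimiter
    dtype=str,           # avoid any automatic type conversion
    na_values=["-9999"],
    encoding="utf-8",    # assumed; the file contains special characters
)
print(df.shape)
```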
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
We have collected the available data from the literature and from other more restricted databases. We try to incorporate newly published chronometric dates collected from all kinds of available publications. See the list of journals inspected in the downloadable file "inspected-journals.xls". The database, version 24 (the first version was available in 2002), now contains 12,888 site forms (most of them with their geographical coordinates), comprising 15,500 radiometric data: Conv. 14C and AMS 14C (12,024 items), TL (997 items), OSL (461 items), and ESR, Th/U and AAR (2,015 items) from the European (Russian Siberia included) Lower, Middle and Upper Palaeolithic. All 14C dates are conventional dates BP. 141 new sites are incorporated and 322 sites have a corrected or an updated content. For citation, please use: Vermeersch, P.M., 2018. Radiocarbon Palaeolithic Europe Database, Version 24. Available at: http://ees.kuleuven.be/geography/projects/14c-palaeolithic/index.html. The database uses Microsoft Access. After downloading, you can use the database with a procedure as described below. For those who have no access to Microsoft Access software, there is also an Excel file (Palaeolithic Europe Database v24 Oct 2018 extract.xls), with restricted content but with all dates. It can be directly downloaded from http://ees.kuleuven.be/geography/projects/14c-palaeolithic/index.html. The Excel file is also available via academia.edu and researchgate.net (https://www.researchgate.net/project/Radiocarbon-Palaeolithic-Europe-Database)
Hospitals in Chicago. To view or use these files, compression software (such as WinZip) and special GIS software (such as ESRI ArcGIS) are required. The .dbf file may also be opened in Excel, Access, or other database programs.
The Bureau of the Census has released Census 2000 Summary File 1 (SF1) 100-Percent data. The file includes the following population items: sex, age, race, Hispanic or Latino origin, household relationship, and household and family characteristics. Housing items include occupancy status and tenure (whether the unit is owner or renter occupied). SF1 does not include information on incomes, poverty status, overcrowded housing or age of housing. These topics will be covered in Summary File 3. Data are available for states, counties, county subdivisions, places, census tracts, block groups, and, where applicable, American Indian and Alaskan Native Areas and Hawaiian Home Lands. The SF1 data are available on the Bureau's web site and may be retrieved from American FactFinder as tables, lists, or maps. Users may also download a set of compressed ASCII files for each state via the Bureau's FTP server.

There are over 8,000 data items available for each geographic area. The full listing of these data items is available here as a downloadable compressed database file named TABLES.ZIP. The uncompressed file is in FoxPro database file (.dbf) format and may be imported to Access, Excel, and other software formats. While all of this information is useful, the Office of Community Planning and Development has downloaded selected information for all states and areas and is making this information available on the CPD web pages. The tables and data items selected are those items used in the CDBG and HOME allocation formulas plus topics most pertinent to the Comprehensive Housing Affordability Strategy (CHAS), the Consolidated Plan, and similar overall economic and community development plans.

The information is contained in five compressed (zipped) .dbf tables for each state. When uncompressed, the tables are ready for use with FoxPro and they can be imported into Access, Excel, and other spreadsheet, GIS and database software. The data are at the block group summary level. The first two characters of the file name are the state abbreviation. The next two letters are BG for block group. Each record is labeled with the code and name of the city and county in which it is located so that the data can be summarized to higher-level geography. The last part of the file name describes the contents. The GEO file contains standard Census Bureau geographic identifiers for each block group, such as the metropolitan area code and congressional district code. The only data included in this table is total population and total housing units. POP1 and POP2 contain selected population variables, and selected housing items are in the HU file. The MA05 table data is only for use by State CDBG grantees for the reporting of the racial composition of beneficiaries of Area Benefit activities. The complete package for a state consists of the dictionary file named TABLES and the five data files for the state. The logical record number (LOGRECNO) links the records across tables.
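To illustrate the LOGRECNO linkage, the sketch below reads two of the uncompressed state tables and joins them. It uses the third-party dbfread package as one possible .dbf reader (an assumption; any dBase-compatible reader works), and the file names follow the naming pattern described above for a hypothetical state.

```python
import pandas as pd
from dbfread import DBF  # third-party .dbf reader, one option among several

# Hypothetical California block-group tables following the naming
# pattern above: state abbreviation + BG + contents.
geo = pd.DataFrame(iter(DBF("CABGGEO.dbf")))
pop1 = pd.DataFrame(iter(DBF("CABGPOP1.dbf")))

# LOGRECNO links records for the same block group across tables.
merged = geo.merge(pop1, on="LOGRECNO")
print(merged.head())
```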
A free mapping tool that allows you to create a thematic map of London without any specialist GIS skills or software - all you need is Microsoft Excel. Templates are available for London's Boroughs and Wards. Full instructions are contained within the spreadsheets.

Macros: The tool works in any version of Excel, but the user MUST ENABLE MACROS for the features to work. There are some restrictions on functionality in the ward maps in Excel 2003 and earlier - full instructions are included in the spreadsheet. To check whether macros are enabled in Excel 2003, click Tools, Macro, Security and change the setting to Medium. Then you have to re-start Excel for the changes to take effect. When Excel starts up, a prompt will ask if you want to enable macros - click yes. In Excel 2007 and later, it should be set by default to the correct setting, but if it has been changed, click on the Windows Office button in the top corner, then Excel Options (at the bottom), Trust Centre, Trust Centre Settings, and make sure it is set to 'Disable all macros with notification'. Then when you open the spreadsheet, a prompt labelled 'Options' will appear at the top for you to enable macros.

To create your own thematic borough maps in Excel using the ward map tool as a starting point, read these instructions. You will need to be a confident Excel user and have access to your boundaries as a picture file from elsewhere. The mapping tools created here are all fully open access with no passwords.

Copyright notice: If you publish these maps, a copyright notice must be included within the report saying: "Contains Ordnance Survey data © Crown copyright and database rights." NOTE: Excel 2003 users must 'ungroup' the map for it to work.
Data were collected from two sources. Specimens of fish and invertebrates collected from the San Francisco Estuary were used for Sanger DNA sequencing. DNA extractions were performed using the Qiagen Blood and Tissue kit, and PCR was performed using primers to amplify the entire barcode sequence. Raw chromatogram data files were manually examined for quality control, aligned, and flanking and primer sequences were trimmed using CodonCode Aligner. For species without physical specimens, or for those specimens that failed PCR/sequencing/QC, publicly available DNA sequences were downloaded from GenBank, and aligned and trimmed to the barcode region using CodonCode Aligner. The combined experimental and downloaded sequences for each barcode were placed into a single .txt file formatted for use with the DADA2 metabarcoding software. For all sequences, an additional verification step was performed by querying the BLASTn database. A separate metadata file (.csv) was also generated for each barcode.
This data set contains vector point information. The original data set was collected through visual field observation by Jenneke Visser (University of Louisiana-Lafayette). The observations were made while flying over the study area in a helicopter. Flights were along north/south transects spaced 2,000 meters apart, from the Texas/Louisiana state line to Corpus Christi Bay. Vegetative data were obtained at pre-determined stations spaced at 1,500 meters along each transect. The stations were located using a Global Positioning System (GPS) and a computer running ArcGIS. This information was recorded manually onto field tally sheets, and later entered into a Microsoft Excel database using Capturx software and imported into ArcGIS.