The USDA Agricultural Research Service (ARS) recently established SCINet, which consists of a shared high-performance computing resource, Ceres, and the dedicated high-speed Internet2 network used to access Ceres. Current and potential SCINet users are using and generating very large datasets, so SCINet needs to be provisioned with adequate data storage for their active computing. It is not designed to hold data beyond active research phases. At the same time, the National Agricultural Library has been developing the Ag Data Commons, a research data catalog and repository designed for public data release and professional data curation. Ag Data Commons needs to anticipate the size and nature of data it will be tasked with handling.
The ARS Web-enabled Databases Working Group, organized under the SCINet initiative, conducted a study to establish baseline data storage needs and practices, and to make projections that could inform future infrastructure design, purchases, and policies. The working group helped develop the survey that is the basis for an internal report. While the report was for internal use, the survey and resulting data may be generally useful and are being released publicly.
From October 24 to November 8, 2016, we administered a 17-question survey (Appendix A) by emailing a Survey Monkey link to all ARS Research Leaders, intending to cover the data storage needs of all 1,675 SY (Category 1 and Category 4) scientists. We designed the survey to accommodate either individual researcher responses or group responses. Research Leaders could decide, based on their unit's practices or their management preferences, whether to delegate the response to a data management expert in their unit, to ask all members of their unit to respond, or to collate responses from their unit themselves before reporting in the survey.
Larger storage ranges cover vastly different amounts of data, so the implications here could be significant depending on whether the true amount is at the lower or higher end of the range. Therefore, we requested more detail from "Big Data users," the 47 respondents who indicated they had more than 10 to 100 TB or over 100 TB of total current data (Q5). All other respondents are called "Small Data users." Because not all of these follow-up requests were successful, we used the actual follow-up responses to estimate likely responses for those who did not respond. We defined active data as data that would be used within the next six months; all other data would be considered inactive, or archival. To calculate per-person storage needs we used the high end of the reported range divided by 1 for an individual response, or by G, the number of individuals in a group response. For Big Data users we used the actual reported values or estimated likely values.
Resources in this dataset:
Resource Title: Appendix A: ARS data storage survey questions. File Name: Appendix A.pdf. Resource Description: The full list of questions asked with the possible responses. The survey was not administered using this PDF, but the PDF was generated directly from the administered survey using the Print option under Design Survey. Asterisked questions were required. A list of Research Units and their associated codes was provided in a drop-down not shown here. Resource Software Recommended: Adobe Acrobat, url: https://get.adobe.com/reader/
Resource Title: CSV of Responses from ARS Researcher Data Storage Survey. File Name: Machine-readable survey response data.csv. Resource Description: CSV file that includes raw responses from the administered survey, as downloaded unfiltered from Survey Monkey, including incomplete responses. Also includes additional classification and calculations to support analysis. Individual email addresses and IP addresses have been removed. This is the same data as in the Excel spreadsheet (also provided).
Resource Title: Responses from ARS Researcher Data Storage Survey. File Name: Data Storage Survey Data for public release.xlsx. Resource Description: MS Excel workbook that includes raw responses from the administered survey, as downloaded unfiltered from Survey Monkey, including incomplete responses. Also includes additional classification and calculations to support analysis. Individual email addresses and IP addresses have been removed. Resource Software Recommended: Microsoft Excel, url: https://products.office.com/en-us/excel
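The per-person storage calculation described above (the high end of the reported range divided by 1 for an individual response, or by G for a group response) can be sketched as follows. This is a minimal illustration only; the column names used here are hypothetical and would need to be mapped to the actual headers in the released CSV.

import pandas as pd

# Minimal sketch of the per-person storage calculation described above.
# Column names are hypothetical; map them to the headers in
# "Machine-readable survey response data.csv" before use.
df = pd.read_csv("Machine-readable survey response data.csv")

# High end of the reported storage range (TB), divided by 1 for an individual
# response or by G, the number of individuals in a group response.
df["group_size"] = df["group_size"].fillna(1).clip(lower=1)
df["per_person_tb"] = df["storage_range_high_tb"] / df["group_size"]

print(df["per_person_tb"].describe())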
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
File name definitions:
'...v_50_175_250_300...' - dataset for velocity ranges [50, 175] + [250, 300] m/s
'...v_175_250...' - dataset for velocity range [175, 250] m/s
'ANNdevelop...' - used to perform 9 parametric sub-analyses where, in each one, many ANNs are developed (trained, validated and tested) and the one yielding the best results is selected
'ANNtest...' - used to test the best ANN from each aforementioned parametric sub-analysis, aiming to find the best ANN model; this dataset includes the 'ANNdevelop...' counterpart
Where to find the input (independent) and target (dependent) variable values for each dataset/Excel file?
input values in 'IN' sheet
target values in 'TARGET' sheet
Where to find the results from the best ANN model (for each target/output variable and each velocity range)?
Open the corresponding Excel file; the expected (target) vs. ANN (output) results are written in the 'TARGET vs OUTPUT' sheet.
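The sheets described above can be loaded with any spreadsheet-aware library; the following is a minimal pandas sketch, assuming a hypothetical file name (substitute the actual 'ANNdevelop...' or 'ANNtest...' Excel file from the repository).

import pandas as pd

xlsx = "ANNdevelop_v_175_250.xlsx"  # hypothetical file name

inputs = pd.read_excel(xlsx, sheet_name="IN")        # independent (input) variables
targets = pd.read_excel(xlsx, sheet_name="TARGET")   # dependent (target) variables

# Best-ANN results: expected (target) vs ANN (output) values.
results = pd.read_excel(xlsx, sheet_name="TARGET vs OUTPUT")
print(inputs.shape, targets.shape, results.head())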
Check reference below (to be added when the paper is published)
https://www.researchgate.net/publication/328849817_11_Neural_Networks_-_Max_Disp_-_Railway_Beams
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Context
The dataset tabulates the Excel population distribution across 18 age groups. It lists the population in each age group along with each group's percentage of the total population of Excel. The dataset can be utilized to understand the population distribution of Excel by age. For example, using this dataset, we can identify the largest age group in Excel.
Key observations
The largest age group in Excel, AL was 45 to 49 years, with a population of 74 (15.64%), according to the ACS 2018-2022 5-Year Estimates. At the same time, the smallest age group in Excel, AL was 85 years and over, with a population of 2 (0.42%). Source: U.S. Census Bureau American Community Survey (ACS) 2018-2022 5-Year Estimates.
When available, the data consists of estimates from the U.S. Census Bureau American Community Survey (ACS) 2018-2022 5-Year Estimates.
Age groups:
Variables / Data Columns
Good to know
Margin of Error
Data in the dataset are based on estimates and are subject to sampling variability and thus a margin of error. Neilsberg Research recommends using caution when presenting these estimates in your research.
Custom data
If you need custom data for your research project, report, or presentation, you can contact our research staff at research@neilsberg.com to discuss the feasibility of a custom tabulation on a fee-for-service basis.
The Neilsberg Research Team curates, analyzes, and publishes demographic and economic data from a variety of public and proprietary sources, each of which often includes multiple surveys and programs. The large majority of Neilsberg Research aggregated datasets and insights are made available for free download at https://www.neilsberg.com/research/.
This dataset is a part of the main dataset for Excel Population by Age. You can refer to the same here
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The open repository consists of two folders, Dataset and Picture. The Dataset folder contains the file "AWS Dataset Pangandaraan.xlsx". There are 10 columns, with the first three columns as time attributes and the other six as atmospheric parameters. Each parameter has 8,085 data points, and at the bottom of each column we added summary values: the Minimum, Maximum, and Average.
For further use, the user can choose one or more parameters for calculation or analysis. For example, wind data (speed and direction) can be utilized to calculate waves using the hindcast method. Furthermore, the user can filter the data using Excel's filtering features to extract an exact time range for analyzing various phenomena considered correlated with atmospheric conditions around Pangandaran, Indonesia.
The second folder, named "Picture," contains three figures: the monthly distribution of the datasets, the temporal data, and a wind rose.
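As a rough illustration of the filtering step outside Excel, the sketch below loads the workbook with pandas and extracts a time window of wind speed and direction. The column names (and the three summary rows at the bottom) are assumptions; check the actual headers in the xlsx file first.

import pandas as pd

df = pd.read_excel("AWS Dataset Pangandaraan.xlsx")

# Drop the Minimum/Maximum/Average summary rows assumed to sit at the bottom.
df = df.iloc[:-3]

# Build a timestamp from the time-attribute columns (names hypothetical).
df["timestamp"] = pd.to_datetime(df[["year", "month", "day"]])

# Extract a time range, e.g. wind speed/direction for wave hindcasting.
window = df[(df["timestamp"] >= "2020-01-01") & (df["timestamp"] < "2020-02-01")]
print(window[["wind_speed", "wind_direction"]].describe())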
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
This dataset provides a comprehensive collection of synthetic job postings to facilitate research and analysis in the field of job market trends, natural language processing (NLP), and machine learning. Created for educational and research purposes, this dataset offers a diverse set of job listings across various industries and job types.
We would like to express our gratitude to the Python Faker library for its invaluable contribution to the dataset generation process. Additionally, we appreciate the guidance provided by ChatGPT in fine-tuning the dataset, ensuring its quality, and adhering to ethical standards.
Please note that the postings are fictional and provided for illustrative purposes only; the dataset is not suitable for real-world applications and should be used only within the scope of research and experimentation. You can tailor the descriptions and examples to match the specifics of your own dataset. You can also reach me via email at rrana157@gmail.com.
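As a rough illustration of how a synthetic posting could be generated with the Faker library mentioned above (this is not the original generation script, just a minimal sketch with made-up field names):

from faker import Faker

fake = Faker()

def synthetic_posting() -> dict:
    # All fields are fictional and for illustration only.
    return {
        "title": fake.job(),
        "company": fake.company(),
        "location": f"{fake.city()}, {fake.country()}",
        "description": fake.paragraph(nb_sentences=5),
        "posted_on": fake.date_this_year().isoformat(),
        "contact_email": fake.company_email(),
    }

print(synthetic_posting())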
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Welcome! This is the dataset sharing program for the BitTorrent Threat Network (https://sparkle.pbh-btn.com)! We make public datasets available for non-commercial use to students, researchers, and scientists to assist with any research on BitTorrent. This dataset is licensed under CC BY-NC, and you can find the legal text in LICENSE.txt.
Dataset time range:
- banhistory: 2024/12/28 - 2025/06/15
- client_discovery: since the last migration
- torrent: since the last migration
Due to its size, the dataset cannot feasibly be opened in a text editor or Excel; we recommend reading it with a dedicated tool or importing it into a database such as SQLite for analysis.
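A minimal sketch of the SQLite import route, assuming the data ships as large CSV files (the file and table names below are placeholders):

import sqlite3
import pandas as pd

# Stream a large CSV into SQLite in chunks so it never has to fit in memory.
con = sqlite3.connect("btn.sqlite")
for chunk in pd.read_csv("banhistory.csv", chunksize=100_000):
    chunk.to_sql("banhistory", con, if_exists="append", index=False)

# Example query once loaded.
print(pd.read_sql_query("SELECT COUNT(*) AS n_rows FROM banhistory", con))
con.close()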
Github Org: https://github.com/PBH-BTN About BTN: https://docs.pbh-btn.com/en/docs/btn/intro/
NOTE: Kaggle's preview appears to drop some of our CSV fields.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Context
The dataset presents the detailed breakdown of the count of individuals within distinct income brackets, categorizing them by gender (men and women) and employment type - full-time (FT) and part-time (PT), offering valuable insights into the diverse income landscapes within Excel. The dataset can be utilized to gain insights into gender-based income distribution within the Excel population, aiding in data analysis and decision-making.
Key observations
[Chart: Excel, AL gender and employment-based income distribution analysis (Ages 15+), https://i.neilsberg.com/ch/excel-al-income-distribution-by-gender-and-employment-type.jpeg]
When available, the data consists of estimates from the U.S. Census Bureau American Community Survey (ACS) 2017-2021 5-Year Estimates.
Income brackets:
Variables / Data Columns
Employment type classifications include:
Good to know
Margin of Error
Data in the dataset are based on estimates and are subject to sampling variability and thus a margin of error. Neilsberg Research recommends using caution when presenting these estimates in your research.
Custom data
If you need custom data for your research project, report, or presentation, you can contact our research staff at research@neilsberg.com to discuss the feasibility of a custom tabulation on a fee-for-service basis.
The Neilsberg Research Team curates, analyzes, and publishes demographic and economic data from a variety of public and proprietary sources, each of which often includes multiple surveys and programs. The large majority of Neilsberg Research aggregated datasets and insights are made available for free download at https://www.neilsberg.com/research/.
This dataset is a part of the main dataset for Excel median household income by gender. You can refer to the same here
We used STATA v12.1 to analyze the data, but the files can also be opened with Microsoft Excel or R.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This is digital research data corresponding to a published manuscript, "Pilot-scale H2S and swine odor removal system using commercially available biochar," Agronomy 2021, 11, 1611. The dataset may be accessed via the included link at the Dryad data repository. Although biochars made in the laboratory seem to remove H2S and odorous compounds effectively, very few studies are available for commercial biochars. This study evaluated the efficacy of a commercial biochar (CBC) for removing H2S. Methods are described in the manuscript (https://www.mdpi.com/2073-4395/11/8/1611). Descriptions corresponding to each figure and table in the manuscript are placed on separate tabs to clarify abbreviations and summarize the data headings and units.
The data file, Table1-4Fig3-5.xlsx, is an Excel spreadsheet consisting of multiple sub-tabs associated with Tables 1-4 and Figures 3-5:
Tab "Table1" - raw data for the physico-chemical characteristics of the commercial pine biochar for Table 1.
Tab "Table2" - raw data for the laboratory absorption column variables for Table 2. For dry or humid conditions, "Dry" or "Humid" is prefixed to each parameter name.
Tab "Table3" - analytical results for odorous volatile organic compounds over 21 days of operation for Table 3. To avoid unnecessary complexity, single values are not repeated in the data; the multiple raw influent and effluent concentrations of organic compounds above the detection limits are presented in this worksheet.
Tab "Table4" - raw data (RH, influent and effluent concentrations) for adsorption of H2S using the pilot biochar system for Table 4. All effluent concentrations were below the detection limit and are not listed.
Tab "Fig 3" - raw data for observed pressure drop ratios predicted by the Ergun and Classen equations, i.e., (Ergun)/(Obs) or (Classen)/(Obs), for various gas velocities (U = 0.41, 0.025, 0.164, and 0.370 m/s) in Figure 3 (a generic Ergun calculation is sketched below).
Tab "Fig4" - breakthrough sorption capacity data for two different inlet concentrations (25 and 100 ppm) used for Figure 4.
Tab "Fig5" - raw data for the daily sum of influent and effluent SCOAVs used for Figure 5.
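For context, the Ergun prediction referenced for the "Fig 3" tab can be sketched as a generic packed-bed pressure-drop calculation. The bed and gas properties below are assumed illustrative values, not taken from the dataset, and the Classen correlation is not reproduced here.

def ergun_pressure_drop(u, bed_length, particle_diameter, void_fraction,
                        gas_density=1.2, gas_viscosity=1.8e-5):
    # Ergun equation: pressure drop (Pa) across a packed bed.
    # u: superficial gas velocity (m/s); bed_length: bed height (m);
    # particle_diameter: effective particle diameter (m); void_fraction: bed porosity;
    # gas_density (kg/m^3) and gas_viscosity (Pa*s) default to air near 20 C (assumptions).
    eps = void_fraction
    viscous = 150 * gas_viscosity * (1 - eps) ** 2 * u / (eps ** 3 * particle_diameter ** 2)
    inertial = 1.75 * (1 - eps) * gas_density * u ** 2 / (eps ** 3 * particle_diameter)
    return (viscous + inertial) * bed_length

# Illustrative values only: 0.164 m/s gas velocity, 0.5 m bed, 3 mm particles, 40% porosity.
print(ergun_pressure_drop(u=0.164, bed_length=0.5, particle_diameter=3e-3, void_fraction=0.4))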
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Categorical scatterplots with R for biologists: a step-by-step guide
Benjamin Petre1, Aurore Coince2, Sophien Kamoun1
1 The Sainsbury Laboratory, Norwich, UK; 2 Earlham Institute, Norwich, UK
Weissgerber and colleagues (2015) recently stated that ‘as scientists, we urgently need to change our practices for presenting continuous data in small sample size studies’. They called for more scatterplot and boxplot representations in scientific papers, which ‘allow readers to critically evaluate continuous data’ (Weissgerber et al., 2015). In the Kamoun Lab at The Sainsbury Laboratory, we recently implemented a protocol to generate categorical scatterplots (Petre et al., 2016; Dagdas et al., 2016). Here we describe the three steps of this protocol: 1) formatting of the data set in a .csv file, 2) execution of the R script to generate the graph, and 3) export of the graph as a .pdf file.
Protocol
• Step 1: format the data set as a .csv file. Store the data in a three-column Excel file as shown in the PowerPoint slide. The first column 'Replicate' indicates the biological replicates. In the example, the month and year during which the replicate was performed is indicated. The second column 'Condition' indicates the conditions of the experiment (in the example, a wild type and two mutants called A and B). The third column 'Value' contains the continuous values. Save the Excel file as a .csv file (File -> Save as -> in 'File Format', select .csv). This .csv file is the input file to import into R.
• Step 2: execute the R script (see Notes 1 and 2). Copy the script shown in the PowerPoint slide and paste it into the R console. Execute the script. In the dialog box, select the input .csv file from step 1. The categorical scatterplot will appear in a separate window. Dots represent the values for each sample; colors indicate replicates. Boxplots are superimposed; black dots indicate outliers.
• Step 3: save the graph as a .pdf file. Shape the window at your convenience and save the graph as a .pdf file (File -> Save as). See the PowerPoint slide for an example.
Notes
• Note 1: install the ggplot2 package. The R script requires the package ‘ggplot2’ to be installed. To install it, Packages & Data -> Package Installer -> enter ‘ggplot2’ in the Package Search space and click on ‘Get List’. Select ‘ggplot2’ in the Package column and click on ‘Install Selected’. Install all dependencies as well.
• Note 2: use a log scale for the y-axis. To use a log scale for the y-axis of the graph, use the command line below in place of command line #7 in the script.
graph + geom_boxplot(outlier.colour = 'black', colour = 'black') + geom_jitter(aes(col = Replicate)) + scale_y_log10() + theme_bw()
References
Dagdas YF, Belhaj K, Maqbool A, Chaparro-Garcia A, Pandey P, Petre B, et al. (2016) An effector of the Irish potato famine pathogen antagonizes a host autophagy cargo receptor. eLife 5:e10856.
Petre B, Saunders DGO, Sklenar J, Lorrain C, Krasileva KV, Win J, et al. (2016) Heterologous Expression Screens in Nicotiana benthamiana Identify a Candidate Effector of the Wheat Yellow Rust Pathogen that Associates with Processing Bodies. PLoS ONE 11(2):e0149035
Weissgerber TL, Milic NM, Winham SJ, Garovic VD (2015) Beyond Bar and Line Graphs: Time for a New Data Presentation Paradigm. PLoS Biol 13(4):e1002128
With this add-in it is possible to create map templates from GIS files in KML format, and to create choropleths with them. Providing you have access to KML-format map boundary files, it is possible to create your own quick and easy choropleth maps in Excel. The KML-format files can be converted from 'shape' files. Many shape files are available to download for free from the web, including from Ordnance Survey and the London Datastore. Standard mapping packages such as QGIS (free to download) and ArcGIS can convert the files to KML format. A sample KML file (London wards) can be downloaded from this page so that users can easily test the tool out. Macros must be enabled for the tool to function.
When creating the map using the Excel tool, the 'unique ID' should normally be the area code, the 'Name' should be the area name, and then, if required and there is additional data in the KML file, further 'data' fields can be added. These columns will appear below and to the right of the map. If not, data can be added later next to the codes and names. In the add-in version of the tool the final control, 'Scale (% window)', should not normally be changed. With the default value of 0.5, the height of the map is set to half the total size of the user's Excel window.
To run a choropleth, select the menu option 'Run Choropleth' to get this form. To specify the colour ramp for the choropleth, the user needs to enter the number of boxes into which the range is to be divided, and the colours for the high and low ends of the range, which is done by selecting coloured option boxes as appropriate. If wished, hit the 'Swap' button to change which colours are for the different ends of the range. Then hit the 'Choropleth' button. The default options for the colours of the ends of the choropleth colour range are saved in the add-in, but different values can be selected by setting up a column range of up to twelve cells, anywhere in Excel, filled with the option colours wanted. Then use the 'Colour range' control to select this range and hit apply, having selected high or low values as wished. The 'Copy' button sets up a sheet 'ColourRamp' in the active workbook with the default colours, which can be extended or trimmed with just a few cells, saving the user time.
The add-in was developed entirely within the Excel VBA IDE by Tim Lund. He is kindly distributing the tool for free on the Datastore but suggests that users who find the tool useful make a donation to the Shelter charity. It is not intended to be actively maintained, but if any users or developers would like to add more features, email the author.
Acknowledgments: calculation of Excel freeform shapes from latitudes and longitudes is done using calculations from the Ordnance Survey.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Context
The dataset tabulates the median household income in Excel. It can be utilized to understand the trend in median household income and to analyze the income distribution in Excel by household type, size, and across various income brackets.
The dataset will have the following datasets when applicable
Please note: The 2020 1-Year ACS estimates data was not reported by the Census Bureau due to the impact on survey collection and analysis caused by COVID-19. Consequently, median household income data for 2020 is unavailable for large cities (population 65,000 and above).
Good to know
Margin of Error
Data in the dataset are based on estimates and are subject to sampling variability and thus a margin of error. Neilsberg Research recommends using caution when presenting these estimates in your research.
Custom data
If you need custom data for your research project, report, or presentation, you can contact our research staff at research@neilsberg.com to discuss the feasibility of a custom tabulation on a fee-for-service basis.
The Neilsberg Research Team curates, analyzes, and publishes demographic and economic data from a variety of public and proprietary sources, each of which often includes multiple surveys and programs. The large majority of Neilsberg Research aggregated datasets and insights are made available for free download at https://www.neilsberg.com/research/.
Explore our comprehensive data analysis and visual representations for a deeper understanding of Excel median household income. You can refer to the same here
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Context
The dataset presents the household distribution across 16 income brackets among four distinct age groups in Excel: Under 25 years, 25-44 years, 45-64 years, and over 65 years. The dataset highlights the variation in household income, offering valuable insights into economic trends and disparities within different age categories, aiding in data analysis and decision-making.
Key observations
When available, the data consists of estimates from the U.S. Census Bureau American Community Survey (ACS) 2018-2022 5-Year Estimates.
Income brackets:
Variables / Data Columns
Good to know
Margin of Error
Data in the dataset are based on estimates and are subject to sampling variability and thus a margin of error. Neilsberg Research recommends using caution when presenting these estimates in your research.
Custom data
If you need custom data for your research project, report, or presentation, you can contact our research staff at research@neilsberg.com to discuss the feasibility of a custom tabulation on a fee-for-service basis.
The Neilsberg Research Team curates, analyzes, and publishes demographic and economic data from a variety of public and proprietary sources, each of which often includes multiple surveys and programs. The large majority of Neilsberg Research aggregated datasets and insights are made available for free download at https://www.neilsberg.com/research/.
This dataset is a part of the main dataset for Excel median household income by age. You can refer to the same here
No description is available. Visit https://dataone.org/datasets/6ffb72520e80a412991cd50d38f324d6 for complete metadata about this dataset.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Context
The dataset presents the household distribution across 16 income brackets among four distinct age groups in Excel: Under 25 years, 25-44 years, 45-64 years, and over 65 years. The dataset highlights the variation in household income, offering valuable insights into economic trends and disparities within different age categories, aiding in data analysis and decision-making.
Key observations
When available, the data consists of estimates from the U.S. Census Bureau American Community Survey (ACS) 2017-2021 5-Year Estimates.
Income brackets:
Variables / Data Columns
Good to know
Margin of Error
Data in the dataset are based on estimates and are subject to sampling variability and thus a margin of error. Neilsberg Research recommends using caution when presenting these estimates in your research.
Custom data
If you need custom data for your research project, report, or presentation, you can contact our research staff at research@neilsberg.com to discuss the feasibility of a custom tabulation on a fee-for-service basis.
The Neilsberg Research Team curates, analyzes, and publishes demographic and economic data from a variety of public and proprietary sources, each of which often includes multiple surveys and programs. The large majority of Neilsberg Research aggregated datasets and insights are made available for free download at https://www.neilsberg.com/research/.
This dataset is a part of the main dataset for Excel median household income by age. You can refer to the same here
The units and descriptions of variables are in the first sheet of the Excel file, "Metadata". There are 4 data sheets in total.
This Excel-based tool enables users to query the raw single-year-of-age data so that any age range can easily be calculated, without having to carry out often complex and time-consuming formulas that could also be open to human error. Each year the GLA demography team produce sets of population projections. The full raw data by single year of age (SYA) and gender are available as Datastore packages at the links below.
How to use the tool: simply select the lower and upper age range for both males and females (starting in cell C3) and the spreadsheet will return the total population for the range. A pandas sketch of the same age-range query is given after the projection list below.
Find out more about GLA population projections on the GLA Demographic Projections page. Click here for an archive of population projections from previous years that have since been superseded.
2019-based projections (published November 2020):
- Central range (upper bound)
- Central range (lower bound)
- Low population variant
- High population variant
2016-based projections (published July 2017):
- Housing-linked projection incorporating data from the 2016 SHLAA
- Ward-level projections consistent with the borough housing-led model
- Ethnic group projections consistent with the borough housing-led model (50MB file)
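A minimal pandas sketch of the same age-range query, assuming the SYA data has been exported with hypothetical columns 'age', 'sex', and 'population' (the actual Datastore packages define their own headers):

import pandas as pd

# Hypothetical export of the single-year-of-age (SYA) projections.
sya = pd.read_csv("sya_projection.csv")

def population_in_range(df, low, high, sex=None):
    # Total projected population for ages low..high inclusive, optionally by sex.
    mask = df["age"].between(low, high)
    if sex is not None:
        mask &= df["sex"].eq(sex)
    return df.loc[mask, "population"].sum()

print(population_in_range(sya, 0, 17))             # all children
print(population_in_range(sya, 65, 90, "female"))  # older women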
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data set for: Yamashita T, Vavladeli A, Pala A, Galan K, Crochet S, Petersen SSA, Petersen CCH (2018) Diverse long-range axonal projections of excitatory layer 2/3 neurons in mouse barrel cortex. Front Neuroanat 12: 33. https://doi.org/10.3389/fnana.2018.00033
There are 25 files in this data upload:
'2018_Yamashita_FrontNeuroanat.pdf' - this is a pdf version of the online publication.
'Yamashita_Figure2_Quantification.xlsx' - this is a Microsoft Excel file giving the locations of high density axonal projections from layer 2/3 pyramidal neurons in the mouse C2 barrel column in the coordinate frame of Paxinos & Franklin (2001) The mouse brain in stereotaxic coordinates. Academic Press. The data are plotted in Figure 2 of Yamashita et al., 2018.
'Yamashita_Figure7_Quantification.xlsx' - this is a Microsoft Excel file giving the dendritic length, number of dendrites, number of dendritic nodes and total axonal length, as well as the axonal length in the different projection zones for each reconstructed neuron. The data are plotted in Figure 7 of Yamashita et al., 2018.
'Yamashita_SupMov1_S2P_AP049.mov' - this is a QuickTime video file, showing the 3D structure of neuron AP049 featured in Figure 3 of Yamashita et al., 2018.
'Yamashita_SupMov2_M1P_TY308.mov' - this is a QuickTime video file, showing the 3D structure of neuron TY308 featured in Figure 5 of Yamashita et al., 2018.
'AV198.zip' - this zipped folder contains data relating to mouse AV198: a) 'AV198_stack.tif' the z-stack of whole-brain fluorescence images from expression of tdTomato in layer 2/3 neurons of the C2 barrel column of mouse AV198. b) 'AV198_ROI_Box.zip' can be loaded into FIJI (https://fiji.sc) and indicates projection regions by a box. c) 'AV198_ROI_Point.zip' can be loaded into FIJI (https://fiji.sc) and indicates projection regions by a point. d) 'AV198_Paxinos' is a folder showing the coronal fluorescent brain sections in pdf format overlaid on the equivalent drawing from Paxinos & Franklin (2001) The mouse brain in stereotaxic coordinates. Academic Press.
'AV199.zip' - same as 'AV198.zip' but for mouse AV199.
'AV201.zip' - same as 'AV198.zip' but for mouse AV201.
'AV202.zip' - same as 'AV198.zip' but for mouse AV202.
'AV203.zip' - same as 'AV198.zip' but for mouse AV203.
'AP042.ASC' - Neurolucida (http://www.mbfbioscience.com/neurolucida) data file of the 3D reconstruction of axon and dendrite from the single neuron labelled in mouse AP042. Brain contours are also traced.
'AP044.ASC' - Neurolucida data file of the 3D reconstruction of axon and dendrite from the single neuron labelled in mouse AP044. Brain contours are also traced.
'AP046.ASC' - Neurolucida data file of the 3D reconstruction of axon and dendrite from the single neuron labelled in mouse AP046. Brain contours are also traced.
'AP047.ASC' - Neurolucida data file of the 3D reconstruction of axon and dendrite from the single neuron labelled in mouse AP047. Brain contours are also traced.
'AP049.ASC' - Neurolucida data file of the 3D reconstruction of axon and dendrite from the single neuron labelled in mouse AP049. Brain contours are also traced.
'TY220.ASC' - Neurolucida data file of the 3D reconstruction of axon and dendrite from the single neuron labelled in mouse TY220. Brain contours are also traced.
'TY288.ASC' - Neurolucida data file of the 3D reconstruction of axon and dendrite from the single neuron labelled in mouse TY288. Brain contours are also traced.
'TY300.ASC' - Neurolucida data file of the 3D reconstruction of axon and dendrite from the single neuron labelled in mouse TY300. Brain contours are also traced.
'TY302.ASC' - Neurolucida data file of the 3D reconstruction of axon and dendrite from the single neuron labelled in mouse TY302. Brain contours are also traced.
'TY308.ASC' - Neurolucida data file of the 3D reconstruction of axon and dendrite from the single neuron labelled in mouse TY308. Brain contours are also traced.
'TY310.ASC' - Neurolucida data file of the 3D reconstruction of axon and dendrite from the single neuron labelled in mouse TY310. Brain contours are also traced.
'TY337.ASC' - Neurolucida data file of the 3D reconstruction of axon and dendrite from the single neuron labelled in mouse TY337. Brain contours are also traced.
'TY345.ASC' - Neurolucida data file of the 3D reconstruction of axon and dendrite from the single neuron labelled in mouse TY345. Brain contours are also traced.
'TY367.ASC' - Neurolucida data file of the 3D reconstruction of axon and dendrite from the single neuron labelled in mouse TY367. Brain contours are also traced.
'TY369.ASC' - Neurolucida data file of the 3D reconstruction of axon and dendrite from the single neuron labelled in mouse TY369. Brain contours are also traced.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
The van der Waals volume is a widely used descriptor in modeling physicochemical properties. However, the calculation of the van der Waals volume (VvdW) from Bondi group contributions is rather time-consuming for a large data set. A new method for calculating the van der Waals volume has been developed, based on Bondi radii. The method, termed Atomic and Bond Contributions of van der Waals volume (VABC), is very simple and fast. The only information needed for calculating VABC is the atomic contributions and the number of atoms, bonds, and rings. The van der Waals volume (Å3/molecule) can then be calculated from the following formula: VvdW = Σ(all atom contributions) − 5.92 NB − 14.7 RA − 3.8 RNA, where NB is the number of bonds, RA is the number of aromatic rings, and RNA is the number of nonaromatic rings. The number of bonds present (NB) can be simply calculated as NB = N − 1 + RA + RNA, where N is the total number of atoms. A simple Excel spreadsheet has been made to calculate van der Waals volumes for a wide range of 677 organic compounds, including 237 drug compounds. The results show that the van der Waals volumes calculated from VABC are equivalent to the computer-calculated van der Waals volumes for organic compounds.
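A minimal sketch of the VABC formula in code form. The atomic contribution values shown are illustrative assumptions only; the full Bondi-based contribution table is given in the source publication and the accompanying spreadsheet.

def vabc_volume(atom_counts, contributions, aromatic_rings=0, nonaromatic_rings=0):
    # VvdW = sum(atom contributions) - 5.92*NB - 14.7*RA - 3.8*RNA,
    # with NB = N - 1 + RA + RNA (N = total number of atoms).
    n_atoms = sum(atom_counts.values())
    atom_sum = sum(contributions[el] * n for el, n in atom_counts.items())
    n_bonds = n_atoms - 1 + aromatic_rings + nonaromatic_rings
    return atom_sum - 5.92 * n_bonds - 14.7 * aromatic_rings - 3.8 * nonaromatic_rings

# Illustrative (assumed) atomic contributions in cubic angstroms; consult the paper's table for real values.
contrib = {"C": 20.58, "H": 7.24, "O": 14.71}

# Example: ethanol (C2H6O), no rings.
print(vabc_volume({"C": 2, "H": 6, "O": 1}, contrib))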
Success.ai presents our Tech Install Data offering, a comprehensive dataset drawn from 28 million verified company profiles worldwide. Our meticulously curated Tech Install Data is designed to empower your sales and marketing strategies by providing in-depth insights into the technology stacks used by companies across various industries. Whether you're targeting small businesses or large enterprises, our data encompasses a diverse range of sectors, ensuring you have the necessary tools to refine your outreach and engagement efforts.
Comprehensive Coverage: Our Tech Install Data includes crucial information on technology installations used by companies. This encompasses software solutions, SaaS products, hardware configurations, and other technological setups critical for businesses. With data spanning industries such as finance, technology, healthcare, manufacturing, education, and more, our database offers unparalleled insights into corporate tech ecosystems.
Data Accuracy and Compliance: At Success.ai, we prioritize data integrity and compliance. Our datasets are not only GDPR-compliant but also adhere to various international data protection regulations, making them safe for use across geographic boundaries. Each profile is AI-validated to ensure the accuracy and timeliness of the information provided, with regular updates to reflect any changes in company tech stacks.
Tailored for Business Development: Leverage our Tech Install Data to enhance your account-based marketing (ABM) campaigns, improve sales prospecting, and execute targeted advertising strategies. Understanding a company's tech stack can help you tailor your messaging, align your product offerings, and address potential needs more effectively. Our data enables you to:
- Identify prospects using competing or complementary products.
- Customize pitches based on the prospect's existing technology environment.
- Enhance product recommendations with insights into potential tech gaps in target companies.
Data Points and Accessibility: Our Tech Install Data offers detailed fields such as:
- Company name and contact information.
- Detailed descriptions of installed technologies.
- Usage metrics for software and hardware.
- Decision-makers' contact details related to tech purchases.
This data is delivered in easily accessible formats, including CSV, Excel, or directly through our API, allowing seamless integration with your CRM or any other marketing automation tools.
Guaranteed Best Price and Service: Success.ai is committed to providing high-quality data at the most competitive prices in the market. Our best price guarantee ensures that you receive the most value from your investment in our data solutions. Additionally, our customer support team is always ready to assist with any queries or custom data requests, ensuring you maximize the utility of your purchased data.
Sample Dataset and Custom Requests: To demonstrate the quality and depth of our Tech Install Data, we offer a sample dataset for preliminary review upon request. For specific needs or custom data solutions, our team is adept at creating tailored datasets that precisely match your business requirements.
Engage with Success.ai Today: Connect with us to discover how our Tech Install Data can transform your business strategy and operational efficiency. Our experts are ready to assist you in navigating the data landscape and unlocking actionable insights to drive your company's growth.
Start exploring the potential of detailed tech stack insights with Success.ai and gain the competitive edge necessary to thrive in today’s fast-paced business environment.