31 datasets found
  1. 18 excel spreadsheets by species and year giving reproduction and growth...

    • catalog.data.gov
    • data.wu.ac.at
    Updated Aug 17, 2024
    + more versions
    Cite
    U.S. EPA Office of Research and Development (ORD) (2024). 18 excel spreadsheets by species and year giving reproduction and growth data. One excel spreadsheet of herbicide treatment chemistry. [Dataset]. https://catalog.data.gov/dataset/18-excel-spreadsheets-by-species-and-year-giving-reproduction-and-growth-data-one-excel-sp
    Explore at:
    Dataset updated
    Aug 17, 2024
    Dataset provided by
    United States Environmental Protection Agency (http://www.epa.gov/)
    Description

    Excel spreadsheets by species (the 4-letter code is the abbreviation for the genus and species used in the study, the year 2010 or 2011 is the year the data were collected, SH indicates data for Science Hub, and the date is the date of file preparation). The data in each file are described in a read-me file, which is the first worksheet in the file. Each row in a species spreadsheet is for one plot (plant). The data themselves are in the data worksheet. One file includes a read-me description of the columns in the data set for chemical analysis; in this file one row is an herbicide treatment and sample for chemical analysis (if taken). This dataset is associated with the following publication: Olszyk, D., T. Pfleeger, T. Shiroyama, M. Blakely-Smith, E. Lee, and M. Plocher. Plant reproduction is altered by simulated herbicide drift to constructed plant communities. ENVIRONMENTAL TOXICOLOGY AND CHEMISTRY. Society of Environmental Toxicology and Chemistry, Pensacola, FL, USA, 36(10): 2799-2813, (2017).

  2. Data from: Excel Templates: A Helpful Tool for Teaching Statistics

    • tandf.figshare.com
    zip
    Updated May 30, 2023
    Cite
    Alejandro Quintela-del-Río; Mario Francisco-Fernández (2023). Excel Templates: A Helpful Tool for Teaching Statistics [Dataset]. http://doi.org/10.6084/m9.figshare.3408052.v2
    Explore at:
    Available download formats: zip
    Dataset updated
    May 30, 2023
    Dataset provided by
    Taylor & Francis
    Authors
    Alejandro Quintela-del-Río; Mario Francisco-Fernández
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This article describes a free, open-source collection of templates for the popular Excel (2013 and later versions) spreadsheet program. These templates are spreadsheet files that allow easy and intuitive learning and the implementation of practical examples concerning descriptive statistics, random variables, confidence intervals, and hypothesis testing. Although they are designed to be used with Excel, they can also be employed with other free spreadsheet programs (changing some particular formulas). Moreover, we exploit some possibilities of the ActiveX controls of the Excel Developer Menu to perform interactive Gaussian density charts. Finally, it is important to note that they can often be embedded in a web page, so it is not necessary to employ Excel software for their use. These templates have been designed as a useful tool to teach basic statistics and to carry out data analysis even when the students are not familiar with Excel. Additionally, they can be used as a complement to other analytical software packages. They aim to assist students in learning statistics, within an intuitive working environment. Supplementary materials with the Excel templates are available online.
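    For readers who want to reproduce the kinds of calculations the templates cover outside a spreadsheet, a minimal Python sketch of a descriptive summary, a confidence interval, and a one-sample t-test might look like the following; the sample values are made up for illustration and are not taken from the templates.

```python
import numpy as np
from scipy import stats

# Illustrative sample data (hypothetical measurements)
x = np.array([4.8, 5.1, 5.3, 4.9, 5.0, 5.4, 5.2, 4.7])

# Descriptive statistics
mean, sd, n = x.mean(), x.std(ddof=1), len(x)

# 95% confidence interval for the mean (t distribution)
ci = stats.t.interval(0.95, df=n - 1, loc=mean, scale=sd / np.sqrt(n))

# One-sample t-test against a hypothesised mean of 5.0
t_stat, p_value = stats.ttest_1samp(x, popmean=5.0)

print(f"mean={mean:.3f}, sd={sd:.3f}, 95% CI={ci}, t={t_stat:.3f}, p={p_value:.3f}")
```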

  3. Enterprise Survey 2009-2019, Panel Data - Slovenia

    • microdata.worldbank.org
    • catalog.ihsn.org
    Updated Aug 6, 2020
    Cite
    World Bank Group (WBG) (2020). Enterprise Survey 2009-2019, Panel Data - Slovenia [Dataset]. https://microdata.worldbank.org/index.php/catalog/3762
    Explore at:
    Dataset updated
    Aug 6, 2020
    Dataset provided by
    European Investment Bank (http://eib.org/)
    World Bank (http://worldbank.org/)
    European Bank for Reconstruction and Development (http://ebrd.com/)
    Time period covered
    2008 - 2019
    Area covered
    Slovenia
    Description

    Abstract

    The documentation covers Enterprise Survey panel datasets that were collected in Slovenia in 2009, 2013 and 2019.

    The Slovenia ES 2009 was conducted between 2008 and 2009. The Slovenia ES 2013 was conducted between March 2013 and September 2013. Finally, the Slovenia ES 2019 was conducted between December 2018 and November 2019. The objective of the Enterprise Survey is to gain an understanding of what firms experience in the private sector.

    As part of its strategic goal of building a climate for investment, job creation, and sustainable growth, the World Bank has promoted improving the business environment as a key strategy for development, which has led to a systematic effort in collecting enterprise data across countries. The Enterprise Surveys (ES) are an ongoing World Bank project in collecting both objective data based on firms' experiences and enterprises' perception of the environment in which they operate.

    Geographic coverage

    National

    Analysis unit

    The primary sampling unit of the study is the establishment. An establishment is a physical location where business is carried out and where industrial operations take place or services are provided. A firm may be composed of one or more establishments. For example, a brewery may have several bottling plants and several establishments for distribution. For the purposes of this survey an establishment must take its own financial decisions and have its own financial statements separate from those of the firm. An establishment must also have its own management and control over its payroll.

    Universe

    As is standard for the ES, the Slovenia ES was based on the following size stratification: small (5 to 19 employees), medium (20 to 99 employees), and large (100 or more employees).

    Kind of data

    Sample survey data [ssd]

    Sampling procedure

    The samples for the Slovenia ES 2009, 2013, and 2019 were selected using stratified random sampling, following the methodology explained in the Sampling Manual for the Slovenia 2009 ES and the Slovenia 2013 ES, and in the Sampling Note for the 2019 Slovenia ES.

    Three levels of stratification were used in this country: industry, establishment size, and region. The original sample designs with specific information on the industries and regions chosen are included in the attached Excel file (Sampling Report.xls) for the Slovenia 2009 ES. For the Slovenia 2013 and 2019 ES, specific information on the industries and regions chosen is described in the "The Slovenia 2013 Enterprise Surveys Data Set" and "The Slovenia 2019 Enterprise Surveys Data Set" reports respectively, Appendix E.

    For the Slovenia 2009 ES, industry stratification was designed in the way that follows: the universe was stratified into manufacturing industries, services industries, and one residual (core) sector as defined in the sampling manual. Each industry had a target of 90 interviews. For the manufacturing industries sample sizes were inflated by about 17% to account for potential non-response cases when requesting sensitive financial data and also because of likely attrition in future surveys that would affect the construction of a panel. For the other industries (residuals) sample sizes were inflated by about 12% to account for under sampling in firms in service industries.

    For Slovenia 2013 ES, industry stratification was designed in the way that follows: the universe was stratified into one manufacturing industry, and two service industries (retail, and other services).

    Finally, for Slovenia 2019 ES, three levels of stratification were used in this country: industry, establishment size, and region. The original sample design with specific information of the industries and regions chosen is described in "The Slovenia 2019 Enterprise Surveys Data Set" report, Appendix C. Industry stratification was done as follows: Manufacturing – combining all the relevant activities (ISIC Rev. 4.0 codes 10-33), Retail (ISIC 47), and Other Services (ISIC 41-43, 45, 46, 49-53, 55, 56, 58, 61, 62, 79, 95).

    For Slovenia 2009 and 2013 ES, size stratification was defined following the standardized definition for the rollout: small (5 to 19 employees), medium (20 to 99 employees), and large (more than 99 employees). For stratification purposes, the number of employees was defined on the basis of reported permanent full-time workers. This seems to be an appropriate definition of the labor force since seasonal/casual/part-time employment is not a common practice, except in the sectors of construction and agriculture.

    For Slovenia 2009 ES, regional stratification was defined in 2 regions. These regions are Vzhodna Slovenija and Zahodna Slovenija. The Slovenia sample contains panel data. The wave 1 panel “Investment Climate Private Enterprise Survey implemented in Slovenia” consisted of 223 establishments interviewed in 2005. A total of 57 establishments have been re-interviewed in the 2008 Business Environment and Enterprise Performance Survey.

    For Slovenia 2013 ES, regional stratification was defined in 2 regions (city and the surrounding business area) throughout Slovenia.

    Finally, for Slovenia 2019 ES, regional stratification was done across two regions: Eastern Slovenia (NUTS code SI03) and Western Slovenia (SI04).
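    As an illustration of the three-way stratification described above (industry, size class, and region), a minimal pandas sketch of a stratified random draw from a hypothetical establishment frame might look like this; the frame, the stratum labels, and the target of 30 interviews per stratum are assumptions for illustration, not the actual ES design.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=1)
n_frame = 10_000

# Hypothetical sampling frame: one row per establishment (not the real ES frame)
frame = pd.DataFrame({
    "establishment_id": np.arange(n_frame),
    "industry": rng.choice(["Manufacturing", "Retail", "Other Services"], n_frame),
    "size": rng.choice(["small", "medium", "large"], n_frame, p=[0.7, 0.2, 0.1]),
    "region": rng.choice(["Eastern Slovenia", "Western Slovenia"], n_frame),
})

# Illustrative target of 30 interviews per stratum
target_per_stratum = 30

sample = (
    frame.groupby(["industry", "size", "region"], group_keys=False)
         .apply(lambda g: g.sample(n=min(target_per_stratum, len(g)), random_state=1))
)
print(sample.groupby(["industry", "size", "region"]).size())
```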

    Mode of data collection

    Computer Assisted Personal Interview [capi]

    Research instrument

    Questionnaires have common questions (core module) and, respectively, additional manufacturing- and services-specific questions. The eligible manufacturing industries have been surveyed using the Manufacturing questionnaire (includes the core module, plus manufacturing-specific questions). Retail firms have been interviewed using the Services questionnaire (includes the core module plus retail-specific questions) and the residual eligible services have been covered using the Services questionnaire (includes the core module). Each variation of the questionnaire is identified by the index variable, a0.

    Response rate

    Survey non-response must be differentiated from item non-response. The former refers to refusals to participate in the survey altogether whereas the latter refers to the refusals to answer some specific questions. Enterprise Surveys suffer from both problems and different strategies were used to address these issues.

    Item non-response was addressed by two strategies: a- For sensitive questions that may generate negative reactions from the respondent, such as corruption or tax evasion, enumerators were instructed to collect the refusal to respond as (-8). b- Establishments with incomplete information were re-contacted in order to complete this information, whenever necessary. However, there were clear cases of low response.

    For 2009 and 2013 Slovenia ES, the survey non-response was addressed by maximizing efforts to contact establishments that were initially selected for interview. Up to 4 attempts were made to contact the establishment for interview at different times/days of the week before a replacement establishment (with similar strata characteristics) was suggested for interview. Survey non-response did occur but substitutions were made in order to potentially achieve strata-specific goals. Further research is needed on survey non-response in the Enterprise Surveys regarding potential introduction of bias.

    For 2009, the number of contacted establishments per realized interview was 6.18. This number is the result of two factors: explicit refusals to participate in the survey, as reflected by the rate of rejection (which includes rejections of the screener and the main survey), and the quality of the sample frame, as represented by the presence of ineligible units. The relatively low ratio of contacted establishments per realized interview (6.18) suggests that the main source of error in estimates for Slovenia may be selection bias and not frame inaccuracy.

    For 2013, the number of realized interviews per contacted establishment was 25%. This number is the result of two factors: explicit refusals to participate in the survey, as reflected by the rate of rejection (which includes rejections of the screener and the main survey) and the quality of the sample frame, as represented by the presence of ineligible units. The number of rejections per contact was 44%.

    Finally, for 2019, the number of interviews per contacted establishments was 9.7%. This number is the result of two factors: explicit refusals to participate in the survey, as reflected by the rate of rejection (which includes rejections of the screener and the main survey) and the quality of the sample frame, as represented by the presence of ineligible units. The share of rejections per contact was 75.2%.

  4. Data Cleaning Sample

    • borealisdata.ca
    Updated Jul 13, 2023
    Cite
    Rong Luo (2023). Data Cleaning Sample [Dataset]. http://doi.org/10.5683/SP3/ZCN177
    Explore at:
    Available download formats: Croissant (a machine-learning dataset format; see mlcommons.org/croissant)
    Dataset updated
    Jul 13, 2023
    Dataset provided by
    Borealis
    Authors
    Rong Luo
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Sample data for exercises in Further Adventures in Data Cleaning.

  5. Students Test Data

    • kaggle.com
    Updated Sep 12, 2023
    Cite
    ATHARV BHARASKAR (2023). Students Test Data [Dataset]. https://www.kaggle.com/datasets/atharvbharaskar/students-test-data
    Explore at:
    Available download formats: Croissant (a machine-learning dataset format; see mlcommons.org/croissant)
    Dataset updated
    Sep 12, 2023
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    ATHARV BHARASKAR
    License

    ODC Public Domain Dedication and Licence (PDDL) v1.0: http://www.opendatacommons.org/licenses/pddl/1.0/
    License information was derived automatically

    Description

    Dataset Overview: This dataset pertains to the examination results of students who participated in a series of academic assessments at a fictitious educational institution named "University of Exampleville." The assessments were administered across various courses and academic levels, with a focus on evaluating students' performance in general management and domain-specific topics.

    Columns: The dataset comprises 12 columns, each representing specific attributes and performance indicators of the students. These columns encompass information such as the students' names (which have been anonymized), their respective universities, academic program names (including BBA and MBA), specializations, the semester of the assessment, the type of examination domain (general management or domain-specific), general management scores (out of 50), domain-specific scores (out of 50), total scores (out of 100), student ranks, and percentiles.

    Data Collection: The examination data was collected during a standardized assessment process conducted by the University of Exampleville. The exams were designed to assess students' knowledge and skills in general management and their chosen domain-specific subjects. It involved students from both BBA and MBA programs who were in their final year of study.

    Data Format: The dataset is available in a structured format, typically as a CSV file. Each row represents a unique student's performance in the examination, while columns contain specific information about their results and academic details.

    Data Usage: This dataset is valuable for analyzing and gaining insights into the academic performance of students pursuing BBA and MBA degrees. It can be used for various purposes, including statistical analysis, performance trend identification, program assessment, and comparison of scores across domains and specializations. Furthermore, it can be employed in predictive modeling or decision-making related to curriculum development and student support.

    Data Quality: The dataset has undergone preprocessing and anonymization to protect the privacy of individual students. Nevertheless, it is essential to use the data responsibly and in compliance with relevant data protection regulations when conducting any analysis or research.

    Data Format: The exam data is typically provided in a structured format, commonly as a CSV (Comma-Separated Values) file. Each row in the dataset represents a unique student's examination performance, and each column contains specific attributes and scores related to the examination. The CSV format allows for easy import and analysis using various data analysis tools and programming languages like Python, R, or spreadsheet software like Microsoft Excel.

    Here's a column-wise description of the dataset:

    Name OF THE STUDENT: The full name of the student who took the exam. (Anonymized)

    UNIVERSITY: The university where the student is enrolled.

    PROGRAM NAME: The name of the academic program in which the student is enrolled (BBA or MBA).

    Specialization: If applicable, the specific area of specialization or major that the student has chosen within their program.

    Semester: The semester or academic term in which the student took the exam.

    Domain: Indicates which part of the exam the score refers to; the exam was divided into two parts, general management and domain-specific.

    GENERAL MANAGEMENT SCORE (OUT of 50): The score obtained by the student in the general management part of the exam, out of a maximum possible score of 50.

    Domain-Specific Score (Out of 50): The score obtained by the student in the domain-specific part of the exam, also out of a maximum possible score of 50.

    TOTAL SCORE (OUT of 100): The total score obtained by adding the scores from the general management and domain-specific parts, out of a maximum possible score of 100.
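    Since the description notes that the CSV can be read with Python, R, or spreadsheet software, a minimal pandas sketch for loading the file and sanity-checking that the total equals the sum of the two part scores might look like this; the file name and the exact column headers are assumptions based on the column descriptions above.

```python
import pandas as pd

# File name is hypothetical; adjust to the actual download
df = pd.read_csv("students_test_data.csv")

# Column names below follow the description above; verify against df.columns
parts = df["GENERAL MANAGEMENT SCORE (OUT of 50)"] + df["Domain-Specific Score (Out of 50)"]
mismatch = df[parts != df["TOTAL SCORE (OUT of 100)"]]
print(f"{len(mismatch)} rows where total != sum of parts")

# Average total score by program (BBA vs MBA)
print(df.groupby("PROGRAM NAME")["TOTAL SCORE (OUT of 100)"].mean())
```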

  6. COVID-19 Case Surveillance Public Use Data

    • data.cdc.gov
    • opendatalab.com
    • +5more
    application/rdfxml +5
    Updated Jul 9, 2024
    + more versions
    Cite
    CDC Data, Analytics and Visualization Task Force (2024). COVID-19 Case Surveillance Public Use Data [Dataset]. https://data.cdc.gov/Case-Surveillance/COVID-19-Case-Surveillance-Public-Use-Data/vbim-akqf
    Explore at:
    Available download formats: application/rdfxml, tsv, csv, json, xml, application/rssxml
    Dataset updated
    Jul 9, 2024
    Dataset provided by
    Centers for Disease Control and Prevention (http://www.cdc.gov/)
    Authors
    CDC Data, Analytics and Visualization Task Force
    License

    https://www.usa.gov/government-works

    Description

    Note: Reporting of new COVID-19 Case Surveillance data will be discontinued July 1, 2024, to align with the process of removing SARS-CoV-2 infections (COVID-19 cases) from the list of nationally notifiable diseases. Although these data will continue to be publicly available, the dataset will no longer be updated.

    Authorizations to collect certain public health data expired at the end of the U.S. public health emergency declaration on May 11, 2023. The following jurisdictions discontinued COVID-19 case notifications to CDC: Iowa (11/8/21), Kansas (5/12/23), Kentucky (1/1/24), Louisiana (10/31/23), New Hampshire (5/23/23), and Oklahoma (5/2/23). Please note that these jurisdictions will not routinely send new case data after the dates indicated. As of 7/13/23, case notifications from Oregon will only include pediatric cases resulting in death.

    This case surveillance public use dataset has 12 elements for all COVID-19 cases shared with CDC and includes demographics, any exposure history, disease severity indicators and outcomes, and presence of any underlying medical conditions and risk behaviors; no geographic data are included.

    CDC has three COVID-19 case surveillance datasets:

    The following apply to all three datasets:

    Overview

    The COVID-19 case surveillance database includes individual-level data reported to U.S. states and autonomous reporting entities, including New York City and the District of Columbia (D.C.), as well as U.S. territories and affiliates. On April 5, 2020, COVID-19 was added to the Nationally Notifiable Condition List and classified as “immediately notifiable, urgent (within 24 hours)” by a Council of State and Territorial Epidemiologists (CSTE) Interim Position Statement (Interim-20-ID-01). CSTE updated the position statement on August 5, 2020, to clarify the interpretation of antigen detection tests and serologic test results within the case classification (Interim-20-ID-02). The statement also recommended that all states and territories enact laws to make COVID-19 reportable in their jurisdiction, and that jurisdictions conducting surveillance should submit case notifications to CDC. COVID-19 case surveillance data are collected by jurisdictions and reported voluntarily to CDC.

    For more information: NNDSS Supports the COVID-19 Response | CDC.

    The deidentified data in the “COVID-19 Case Surveillance Public Use Data” include demographic characteristics, any exposure history, disease severity indicators and outcomes, clinical data, laboratory diagnostic test results, and presence of any underlying medical conditions and risk behaviors. All data elements can be found on the COVID-19 case report form located at www.cdc.gov/coronavirus/2019-ncov/downloads/pui-form.pdf.

    COVID-19 Case Reports

    COVID-19 case reports have been routinely submitted using nationally standardized case reporting forms. On April 5, 2020, CSTE released an Interim Position Statement with national surveillance case definitions for COVID-19 included. Current versions of these case definitions are available here: https://ndc.services.cdc.gov/case-definitions/coronavirus-disease-2019-2021/.

    All cases reported on or after were requested to be shared by public health departments to CDC using the standardized case definitions for laboratory-confirmed or probable cases. On May 5, 2020, the standardized case reporting form was revised. Case reporting using this new form is ongoing among U.S. states and territories.

    Data are Considered Provisional

    • The COVID-19 case surveillance data are dynamic; case reports can be modified at any time by the jurisdictions sharing COVID-19 data with CDC. CDC may update prior cases shared with CDC based on any updated information from jurisdictions. For instance, as new information is gathered about previously reported cases, health departments provide updated data to CDC. As more information and data become available, analyses might find changes in surveillance data and trends during a previously reported time window. Data may also be shared late with CDC due to the volume of COVID-19 cases.
    • Annual finalized data: To create the final NNDSS data used in the annual tables, CDC works carefully with the reporting jurisdictions to reconcile the data received during the year until each state or territorial epidemiologist confirms that the data from their area are correct.
    • Access Addressing Gaps in Public Health Reporting of Race and Ethnicity for COVID-19, a report from the Council of State and Territorial Epidemiologists, to better understand the challenges in completing race and ethnicity data for COVID-19 and recommendations for improvement.

    Data Limitations

    To learn more about the limitations in using case surveillance data, visit FAQ: COVID-19 Data and Surveillance.

    Data Quality Assurance Procedures

    CDC’s Case Surveillance Section routinely performs data quality assurance procedures (i.e., ongoing corrections and logic checks to address data errors). To date, the following data cleaning steps have been implemented:

    • Questions that have been left unanswered (blank) on the case report form are reclassified to a Missing value, if applicable to the question. For example, in the question “Was the individual hospitalized?” where the possible answer choices include “Yes,” “No,” or “Unknown,” the blank value is recoded to Missing because the case report form did not include a response to the question.
    • Logic checks are performed for date data. If an illogical date has been provided, CDC reviews the data with the reporting jurisdiction. For example, if a symptom onset date in the future is reported to CDC, this value is set to null until the reporting jurisdiction updates the date appropriately.
    • Additional data quality processing to recode free text data is ongoing. Data on symptoms, race and ethnicity, and healthcare worker status have been prioritized.
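    A minimal pandas sketch of the first two cleaning steps listed above (blank answers recoded to Missing, and future symptom-onset dates set to null) could look like the following; the local file name and column names are assumptions based on this public use data description, not guaranteed field names.

```python
import pandas as pd

# Hypothetical local copy of the public use file
df = pd.read_csv("COVID-19_Case_Surveillance_Public_Use_Data.csv")

# Recode blank answers to "Missing" for a categorical report-form question (assumed column name)
df["hosp_yn"] = df["hosp_yn"].replace("", pd.NA).fillna("Missing")

# Logic check on dates: a symptom onset date in the future is set to null (assumed column name)
df["onset_dt"] = pd.to_datetime(df["onset_dt"], errors="coerce")
df.loc[df["onset_dt"] > pd.Timestamp.today(), "onset_dt"] = pd.NaT

print(df["hosp_yn"].value_counts(dropna=False).head())
```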

    Data Suppression

    To prevent release of data that could be used to identify people, data cells are suppressed for low frequency (<5) records and indirect identifiers (e.g., date of first positive specimen). Suppression includes rare combinations of demographic characteristics (sex, age group, race/ethnicity). Suppressed values are re-coded to the NA answer option; records with data suppression are never removed.

    For questions, please contact Ask SRRG (eocevent394@cdc.gov).

    Additional COVID-19 Data

    COVID-19 data are available to the public as summary or aggregate count files, including total counts of cases and deaths by state and by county. These

  7. Create your own mapping templates - Excel Add-In

    • data.europa.eu
    Updated Jun 9, 2025
    Cite
    Greater London Authority (2025). Create your own mapping templates - Excel Add-In [Dataset]. https://data.europa.eu/data/datasets/create-your-own-mapping-templates-excel-add-in~~1?locale=lt
    Explore at:
    Dataset updated
    Jun 9, 2025
    Dataset authored and provided by
    Greater London Authority
    Description

    With this add-in it is possible to create map templates from GIS files in KML format, and to create choropleths with them.

    Providing you have access to KML format map boundary files, it is possible to create your own quick and easy choropleth maps in Excel. The KML format files can be converted from 'shape' files. Many shape files are available to download for free from the web, including from Ordnance Survey and the London Datastore. Standard mapping packages such as QGIS (free to download) and ArcGIS can convert the files to KML format.

    A sample of a KML file (London wards) can be downloaded from this page, so that users can easily test the tool out.

    Macros must be enabled for the tool to function.

    When creating the map using the Excel tool, the 'unique ID' should normally be the area code, the 'Name' should be the area name and then if required and there is additional data in the KML file, further 'data' fields can be added. These columns will appear below and to the right of the map. If not, data can be added later on next to the codes and names.

    In the add-in version of the tool the final control, 'Scale (% window)' should not normally be changed. With the default value 0.5, the height of the map is set to be half the total size of the user's Excel window.

    To run a choropleth, select the menu option 'Run Choropleth' to get this form.

    To specify the colour ramp for the choropleth, the user needs to enter the number of boxes into which the range is to be divided, and the colours for the high and low ends of the range, which is done by selecting coloured option boxes as appropriate. If wished, hit the 'Swap' button to change which colours are for the different ends of the range. Then hit the 'Choropleth' button.

    The default options for the colours of the ends of the choropleth colour range are saved in the add-in, but different values can be selected by setting up a column range of up to twelve cells, anywhere in Excel, filled with the option colours wanted. Then use the 'Colour range' control to select this range, and hit apply, having selected high or low values as wished. The button 'Copy' sets up a sheet 'ColourRamp' in the active workbook with the default colours, which can be extended or trimmed with just a few cells, saving the user time.
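    The colour-ramp logic the form implements (dividing the data range into a chosen number of boxes and interpolating between a low-end and a high-end colour) can be sketched in a few lines of Python; this is only an illustration of the idea, not the add-in's VBA code, and the colours and values are made up.

```python
def colour_ramp(low_rgb, high_rgb, n_boxes):
    """Interpolate n_boxes colours between a low-end and a high-end RGB colour."""
    return [
        tuple(int(lo + (hi - lo) * i / (n_boxes - 1)) for lo, hi in zip(low_rgb, high_rgb))
        for i in range(n_boxes)
    ]

def classify(value, vmin, vmax, n_boxes):
    """Assign a value to one of n_boxes equal-width classes over [vmin, vmax]."""
    i = int((value - vmin) / (vmax - vmin) * n_boxes)
    return min(max(i, 0), n_boxes - 1)

ramp = colour_ramp((255, 255, 204), (0, 69, 41), n_boxes=5)  # pale yellow to dark green
print(ramp)
print(classify(7.2, vmin=0, vmax=10, n_boxes=5))  # value 7.2 falls in class index 3
```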

    The add-in was developed entirely within the Excel VBA IDE by Tim Lund. He is kindly distributing the tool for free on the Datastore but suggests that users who find the tool useful make a donation to the Shelter charity. It is not intended that the tool be actively maintained, but if any users or developers would like to add more features, email the author.

    Acknowledgments

    Calculation of Excel freeform shapes from latitudes and longitudes is done using calculations from the Ordnance Survey.

  8. Supplementary Data supporting: "Obstacle avoidance in aerial pursuit".

    • datasetcatalog.nlm.nih.gov
    • figshare.com
    Updated Aug 2, 2023
    Cite
    KleinHeerenbrink, Marco; Brighton, Caroline; Kempton, James; France, Lydia; Minano, Sofia; Taylor, Graham (2023). Supplementary Data supporting: "Obstacle avoidance in aerial pursuit". [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0000975109
    Explore at:
    Dataset updated
    Aug 2, 2023
    Authors
    KleinHeerenbrink, Marco; Brighton, Caroline; Kempton, James; France, Lydia; Minano, Sofia; Taylor, Graham
    Description

    Supplementary Data supporting "Obstacle avoidance in aerial pursuit" by Caroline H. Brighton, James A. Kempton, Lydia A. France, Marco KleinHeerenbrink, Sofía Miñano, and Graham K. Taylor. Published in Current Biology (2023). https://doi.org/10.1016/j.cub.2023.06.047. The zipped folder SupplementaryData.zip contains the following data files, which are readable in MATLAB (The Mathworks Inc., Natick, MA, USA) and Excel (Microsoft Inc., Redmond, WA, USA). Please see the paper for definitions and descriptions.

    flightsOF.mat: this MATLAB workspace contains data on n=231 flights without obstacles, comprising the n=128 obstacle-free training flights (flights OF001 to OF128) and the n=103 obstacle-free test flights (flights OF129 to OF231). The data are contained in the following arrays:

    • birdPosUpsampled - [X, Y, 0] horizontal position of bird; 20 kHz spline fit
    • birdVelupsampled - [U, V, 0] horizontal velocity of bird; 20 kHz spline fit
    • lurePosUpsampled - [X, Y, 0] horizontal position of lure; 20 kHz spline fit
    • lureVelupsampled - [U, V, 0] horizontal velocity of lure; 20 kHz spline fit

    flightsWO.mat: this MATLAB workspace contains data on n=155 flights with obstacles, including the n=154 obstacle test flights (flights WO001 to WO155, excluding flight WO119). The data are contained in the same four arrays as above (birdPosUpsampled, birdVelupsampled, lurePosUpsampled, lureVelupsampled), plus:

    • obstaclesPos - [f, X, Y, Z] XYZ positions of all obstacle markers identified in each frame (f)

    modelFitsSummary.xlsx: this Excel spreadsheet contains summaries of the fitted models referred to in the main text, with flights numbered according to the scheme above. The tab names refer to the relevant sections of the Results.
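    The .mat workspaces can also be read outside MATLAB; a minimal Python sketch using scipy might look like this. The array names are taken from the description above, the path is assumed, and whether the arrays load as plain numeric arrays or nested objects depends on how the files were saved.

```python
from scipy.io import loadmat

# Load the obstacle-free flights workspace (path is assumed; adjust to the unzipped folder)
data = loadmat("SupplementaryData/flightsOF.mat", squeeze_me=True)

bird_pos = data["birdPosUpsampled"]   # [X, Y, 0] horizontal position of bird, 20 kHz spline fit
lure_pos = data["lurePosUpsampled"]   # [X, Y, 0] horizontal position of lure, 20 kHz spline fit

print(type(bird_pos), getattr(bird_pos, "shape", None))
```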

  9. Repeated Measures data files

    • auckland.figshare.com
    zip
    Updated Nov 9, 2020
    Cite
    Gavin T. L. Brown (2020). Repeated Measures data files [Dataset]. http://doi.org/10.17608/k6.auckland.13211120.v1
    Explore at:
    Available download formats: zip
    Dataset updated
    Nov 9, 2020
    Dataset provided by
    The University of Auckland
    Authors
    Gavin T. L. Brown
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This zip file contains data files for 3 activities described in the accompanying PPT slides:

    1. An Excel spreadsheet for analysing gain scores in a 2-group, 2-times data array. This activity requires access to https://campbellcollaboration.org/research-resources/effect-size-calculator.html to calculate the effect size.

    2. An AMOS path model and SPSS data set for an autoregressive, bivariate path model with cross-lagging. This activity is related to the following article: Brown, G. T. L., & Marshall, J. C. (2012). The impact of training students how to write introductions for academic essays: An exploratory, longitudinal study. Assessment & Evaluation in Higher Education, 37(6), 653-670. doi:10.1080/02602938.2011.563277

    3. An AMOS latent curve model and SPSS data set for a 3-time latent factor model with an interaction mixed model that uses GPA as a predictor of the LCM start and slope or change factors. This activity makes use of data reported previously and a published data analysis case: Peterson, E. R., Brown, G. T. L., & Jun, M. C. (2015). Achievement emotions in higher education: A diary study exploring emotions across an assessment event. Contemporary Educational Psychology, 42, 82-96. doi:10.1016/j.cedpsych.2015.05.002; and Brown, G. T. L., & Peterson, E. R. (2018). Evaluating repeated diary study responses: Latent curve modeling. In SAGE Research Methods Cases Part 2. Retrieved from http://methods.sagepub.com/case/evaluating-repeated-diary-study-responses-latent-curve-modeling doi:10.4135/9781526431592
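    For activity 1, the effect size for a 2-group, 2-times gain-score comparison can also be computed directly; a minimal sketch of Cohen's d on the gain scores is shown below with made-up scores, noting that the activity itself uses the Campbell Collaboration online calculator instead.

```python
import numpy as np

# Hypothetical pre/post scores for two groups
pre_treat, post_treat = np.array([12, 15, 11, 14, 13]), np.array([18, 20, 16, 19, 17])
pre_control, post_control = np.array([13, 14, 12, 15, 13]), np.array([14, 16, 13, 16, 14])

gain_t = post_treat - pre_treat
gain_c = post_control - pre_control

# Cohen's d on gain scores with a pooled standard deviation
pooled_sd = np.sqrt(((len(gain_t) - 1) * gain_t.var(ddof=1) +
                     (len(gain_c) - 1) * gain_c.var(ddof=1)) /
                    (len(gain_t) + len(gain_c) - 2))
d = (gain_t.mean() - gain_c.mean()) / pooled_sd
print(f"Cohen's d on gain scores = {d:.2f}")
```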

  10. Background music and cognitive task performance: systematic review dataset

    • data.niaid.nih.gov
    • explore.openaire.eu
    Updated Nov 29, 2023
    Cite
    Eduardo Coutinho (2023). Background music and cognitive task performance: systematic review dataset [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_6301060
    Explore at:
    Dataset updated
    Nov 29, 2023
    Dataset provided by
    Hoo Keat Wong
    Yiting Cheah
    Eduardo Coutinho
    Michael Spitzer
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This repository contains the raw data used for a systematic review of the impact of background music on cognitive task performance (Cheah et al., 2022). Our intention is to facilitate future updates to this work. Contents description This repository contains eight Microsoft Excel files, each containing the synthesised data pertaining to each of the six cognitive domains analysed in the review, as well as task difficulty, and population characteristics:

    • raw-data-attention
    • raw-data-inhibition
    • raw-data-language
    • raw-data-memory
    • raw-data-thinking
    • raw-data-processing-speed
    • raw-data-task-difficulty
    • raw-data--population

    Files description

    Tabs organisation: the files pertaining to each cognitive domain include individual tabs for each cognitive task analysed (cf. Figure 2 in the original paper for the list of cognitive tasks). The file with the population characteristics data also contains separate tabs for each characteristic (extraversion, music training, gender, and working memory capacity).

    Tabs contents: in all files and tabs, each row corresponds to the data of a test. The same article can have more than one row if it reports multiple tests. For instance, the study by Cassidy and MacDonald (2007; cf. Memory.xlsx, tab: Memory-all) contains two experiments (immediate and delayed free recall), each with multiple tests (immediate free recall: tests 25-32; delayed free recall: tests 58-61). Each test (one per row), in this experiment, pertains to comparisons between conditions where the background music has different levels of arousal, between groups of participants with different extraversion levels, between different task materials (words or paragraphs), and different combinations of the previous (e.g., a high-arousing music vs silence test among extraverts whilst completing an immediate free recall task involving paragraphs; cf. test 30). The columns are organised as follows:

    "TESTS": the index of the test in a particular tab (for easy reference); "ID": abbreviation of the cognitive tasks involved in a specific experiment (see glossary for meaning); "REFERENCE": the article where the data was taken from (see main publications for list of articles); "CONDITIONS": an abbreviated description of the music condition of a given test; "MEANS (music)": the average performance across all participants in a given experiment with background music; "MEANS (silence)": the average performance across all participants in a given experiment without background music. Then, in horizontal arrangement, we also include groups of two columns that breakdown specific comparisons related to each test (i.e., all tests comparing the same two types of condition, e.g., L-BgM vs I-BgM, will appear under the same set of columns). For each one, we indicate mean difference between the respective conditions ("MD" column) and the direction of effect ("Standard Metric" column). Each file also contains a "Glossary" tab that explains all the abbreviations used in each document. Bibliography Cheah, Y., Wong, H. K., Spitzer, M., & Coutinho, E. (2022). Background music and cognitive task performance: A systematic review of task, music and population impact. Music & Science, 5(1), 1-38. https://doi.org/10.1177/20592043221134392

  11. GP Practice Prescribing Presentation-level Data - July 2014

    • digital.nhs.uk
    csv, zip
    Updated Oct 31, 2014
    + more versions
    Cite
    (2014). GP Practice Prescribing Presentation-level Data - July 2014 [Dataset]. https://digital.nhs.uk/data-and-information/publications/statistical/practice-level-prescribing-data
    Explore at:
    Available download formats: csv (1.4 GB), zip (257.7 MB), csv (1.7 MB), csv (275.8 kB)
    Dataset updated
    Oct 31, 2014
    License

    https://digital.nhs.uk/about-nhs-digital/terms-and-conditions

    Time period covered
    Jul 1, 2014 - Jul 31, 2014
    Area covered
    United Kingdom
    Description

    Warning: Large file size (over 1 GB). Each monthly data set is large (over 4 million rows), but can be viewed in standard software such as Microsoft WordPad (save by right-clicking on the file name and selecting 'Save Target As', or equivalent on Mac OSX). It is then possible to select the required rows of data and copy and paste the information into another software application, such as a spreadsheet. Alternatively, add-ons to existing software, such as the Microsoft PowerPivot add-on for Excel, can be used to handle larger data sets. The Microsoft PowerPivot add-on for Excel is available from Microsoft: http://office.microsoft.com/en-gb/excel/download-power-pivot-HA101959985.aspx

    Once PowerPivot has been installed, to load the large files, please follow the instructions below. Note that it may take at least 20 to 30 minutes to load one monthly file.

    1. Start Excel as normal
    2. Click on the PowerPivot tab
    3. Click on the PowerPivot Window icon (top left)
    4. In the PowerPivot Window, click on the "From Other Sources" icon
    5. In the Table Import Wizard, scroll to the bottom and select Text File
    6. Browse to the file you want to open and choose the file extension you require, e.g. CSV

    Once the data has been imported you can view it in a spreadsheet.

    What does the data cover?

    General practice prescribing data is a list of all medicines, dressings and appliances that are prescribed and dispensed each month. A record will only be produced when this has occurred and there is no record for a zero total. For each practice in England, the following information is presented at presentation level for each medicine, dressing and appliance (by presentation name):

    • the total number of items prescribed and dispensed
    • the total net ingredient cost
    • the total actual cost
    • the total quantity

    The data covers NHS prescriptions written in England and dispensed in the community in the UK. Prescriptions written in England but dispensed outside England are included. The data includes prescriptions written by GPs and other non-medical prescribers (such as nurses and pharmacists) who are attached to GP practices. GP practices are identified only by their national code, so an additional data file - linked to the first by the practice code - provides further detail in relation to the practice. Presentations are identified only by their BNF code, so an additional data file - linked to the first by the BNF code - provides the chemical name for that presentation.
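    As an alternative to WordPad or PowerPivot, the monthly file can be processed in chunks with pandas so it never has to fit in memory at once; a minimal sketch is shown below, where the file name and the practice-code and items column headers are assumptions about the download rather than documented field names.

```python
import pandas as pd

total_rows = 0
per_practice_items = {}

# Read the multi-million row CSV in 500,000-row chunks (file name is hypothetical)
for chunk in pd.read_csv("prescribing_july_2014.csv", chunksize=500_000):
    total_rows += len(chunk)
    # Column names vary by release; "PRACTICE" and "ITEMS" are assumed here
    counts = chunk.groupby("PRACTICE")["ITEMS"].sum()
    for practice, items in counts.items():
        per_practice_items[practice] = per_practice_items.get(practice, 0) + items

print(f"rows read: {total_rows}, practices seen: {len(per_practice_items)}")
```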

  12. Field Variable Permeability Tests (Slug Tests) in Boreholes Made by Driven...

    • borealisdata.ca
    • search.dataone.org
    Updated Oct 29, 2024
    Cite
    Robert P. Chapuis (2024). Field Variable Permeability Tests (Slug Tests) in Boreholes Made by Driven Flush-Joint Casings, or Driven Flush-Joint Casing Permeameters, or Between Packers in Cored Rock Boreholes, or in Monitoring Wells ― Overdamped Response / Essais de perméabilité à niveau variable (Slug Tests) dans des forages faits avec un tubage battu à joints lisses, ou un perméamètre battu à joints lisses, ou entre des obturateurs dans un trou foré dans le roc, ou dans un puits de surveillance ― Cas de la réponse suramortie [Dataset]. http://doi.org/10.5683/SP2/YUAUGX
    Explore at:
    Available download formats: Croissant (a machine-learning dataset format; see mlcommons.org/croissant)
    Dataset updated
    Oct 29, 2024
    Dataset provided by
    Borealis
    Authors
    Robert P. Chapuis
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Civil and geological engineers have used field variable-head permeability tests (VH tests or slug tests) for over one century to assess the local hydraulic conductivity of tested soils and rocks. The water level in the pipe or riser casing reaches, after some rest time, a static position or elevation, z2. Then, the water level position is changed rapidly, by adding or removing some water volume, or by inserting or removing a solid slug. Afterward, the water level position or elevation z1(t) is recorded vs. time t, yielding a difference in hydraulic head or water column defined as Z(t) = z1(t) - z2. The water level at rest is assumed to be the piezometric level or PL for the tested zone, before drilling a hole and installing test equipment. All equations use Z(t) or Z*(t) = Z(t) / Z(t=0). The water-level response vs. time may be a slow return to equilibrium (overdamped test), or an oscillation back to equilibrium (underdamped test). This document deals exclusively with overdamped tests.

    Their data may be analyzed using several methods, known to yield different results for the hydraulic conductivity. The methods fit in three groups: group 1 neglects the influence of the solid matrix strain, group 2 is for tests in aquitards with delayed strain caused by consolidation, and group 3 takes into account some elastic and instant solid matrix strain. This document briefly explains what is wrong with certain theories and why. It shows three ways to plot the data, which are the three diagnostic graphs. According to experience with thousands of tests, most test data are biased by an incorrect estimate z2 of the piezometric level at rest. The derivative or velocity plot does not depend upon this assumed piezometric level, but can verify its correctness. The document presents experimental results and explains the three-diagnostic-graphs approach, which unifies the theories and, most important, yields a user-independent result.

    Two free spreadsheet files are provided. The spreadsheet "Lefranc-Test-English-Model" follows the Canadian standards and is used to explain how to treat the test data correctly to reach a user-independent result. The user does not modify this model spreadsheet but can make as many copies as needed, with different names. The user can treat any other data set in a copy, and can also modify any copy if needed. The second Excel spreadsheet contains several sets of data that can be used to practice with the copies of the model spreadsheet.
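    The normalisation Z*(t) = Z(t) / Z(t=0) and the derivative (velocity) plot described above are straightforward to compute from a recorded water-level series; a minimal sketch, assuming a simple two-column record of time and water-level elevation z1(t) plus an assumed static level z2, might look like this (the numbers are invented, not from the provided spreadsheets).

```python
import numpy as np

# Hypothetical overdamped test record: time (s) and water-level elevation z1(t) (m)
t = np.array([0.0, 10.0, 20.0, 40.0, 80.0, 160.0])
z1 = np.array([10.50, 10.38, 10.28, 10.13, 9.93, 9.75])
z2 = 9.70  # assumed piezometric level at rest (m)

Z = z1 - z2        # head difference Z(t) = z1(t) - z2
Z_star = Z / Z[0]  # normalised head Z*(t) = Z(t) / Z(t=0)

# Velocity plot data: dZ/dt vs Z, which does not depend on the assumed z2
dZ_dt = np.gradient(Z, t)
for zi, vi in zip(Z, dZ_dt):
    print(f"Z = {zi:.3f} m, dZ/dt = {vi:.5f} m/s")
```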

  13. Datasheet.Excel.Pretest-posttest.Statistical t-test results.xlsx

    • figshare.com
    xlsx
    Updated Nov 13, 2023
    Cite
    Theophile Shyiramunda (2023). Datasheet.Excel.Pretest-posttest.Statistical t-test results.xlsx [Dataset]. http://doi.org/10.6084/m9.figshare.24486685.v1
    Explore at:
    Available download formats: xlsx
    Dataset updated
    Nov 13, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Theophile Shyiramunda
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This data set is for the research entitled "Group Discussions in Secondary School Chemistry: Unveiling Pedagogical Alchemy for Academic Advancement".

  14. Tokyo's Test examination (2000 to 2017)

    • en.graphtochart.com
    csv
    Updated Apr 24, 2021
    Cite
    LBB Limited Liability Company (2021). Tokyo's Test examination(2000 to 2017) [Dataset]. https://en.graphtochart.com/japan/tokyo-test-examination.php
    Explore at:
    Available download formats: csv
    Dataset updated
    Apr 24, 2021
    Dataset authored and provided by
    LBB Limited Liability Company
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    2000 - 2017
    Area covered
    Description

    Tokyo's Test examination is 934,828 [number of contracts], which is the highest in Japan (by prefecture). Transition graphs and a comparison chart between Tokyo and Kanagawa (the closest prefecture in population) are available. Various data can be downloaded and output in CSV format for use in Excel free of charge.

  15. DANS Data Station Life Sciences

    • b2find.eudat.eu
    Updated Jul 30, 2025
    + more versions
    Cite
    (2025). DANS Data Station Life Sciences [Dataset]. https://b2find.eudat.eu/dataset/611e4162-5508-5616-baf5-22d0017985cd
    Explore at:
    Dataset updated
    Jul 30, 2025
    Description

    This study was part of the Horizon 2020 Poultry and Pig Low-input and Organic production systems' Welfare (PPILOW) project. In this study, the effect of early-life conditions on the visual discrimination abilities of adult, free-ranging laying hens was examined. These early-life treatments entailed incubation in a 12/12 h green light/dark cycle and rearing with Black soldier fly larvae (BSFL) as foraging enrichment. Through a modified pebble-floor test, 171 hens of 41 to 42 wk of age, housed in mobile stables with outdoor access, were tested for their ability to discriminate between food and nonfood items (mealworms and decoy mealworms). Each hen was allowed 60 pecks during the trial, from which the overall success rate, as well as within-trial learning, was investigated. The latter was accomplished by dividing the 60 pecks into 3 blocks of 20 pecks and comparing the success rate between these blocks. Due to another ongoing experiment on range use, roughly half the hens received range enrichment (mealworms) at the time of testing, so this was included as a covariate in the analysis. All data regarding this experiment can be found in the Excel file Data_VDA trial. All feeding schemes can be found in the Excel file Feed.
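    The within-trial learning analysis described above (60 pecks split into 3 blocks of 20, with success rates compared across blocks) can be sketched in a few lines; the long-format layout and column names below are assumptions for illustration, not the actual structure of the Data_VDA trial file.

```python
import pandas as pd

# Hypothetical long-format data: one row per peck, flagged 1 if it hit a real mealworm
pecks = pd.DataFrame({
    "hen_id": [1] * 60,
    "peck_no": range(1, 61),
    "success": [1, 0, 1] * 20,  # made-up outcomes
})

# Assign each peck to one of three 20-peck blocks and compare success rates
pecks["block"] = (pecks["peck_no"] - 1) // 20 + 1
print(pecks.groupby("block")["success"].mean())
```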

  16. Road safety statistics: data tables

    • gov.uk
    Updated Jul 31, 2025
    Cite
    Department for Transport (2025). Road safety statistics: data tables [Dataset]. https://www.gov.uk/government/statistical-data-sets/reported-road-accidents-vehicles-and-casualties-tables-for-great-britain
    Explore at:
    Dataset updated
    Jul 31, 2025
    Dataset provided by
    GOV.UK (http://gov.uk/)
    Authors
    Department for Transport
    Description

    These tables present high-level breakdowns and time series. A list of all tables, including those discontinued, is available in the table index. More detailed data is available in our data tools, or by downloading the open dataset.

    Latest data and table index

    The tables below are the latest final annual statistics for 2023. The latest data currently available are provisional figures for 2024. These are available from the latest provisional statistics.

    A list of all reported road collisions and casualties data tables and variables in our data download tool is available in the Tables index (ODS, 30.1 KB): https://assets.publishing.service.gov.uk/media/683709928ade4d13a63236df/reported-road-casualties-gb-index-of-tables.ods

    All collision, casualty and vehicle tables

    Reported road collisions and casualties data tables (zip file) (ZIP, 16.6 MB): https://assets.publishing.service.gov.uk/media/66f44e29c71e42688b65ec43/ras-all-tables-excel.zip

    Historic trends (RAS01)

    RAS0101: Collisions, casualties and vehicles involved by road user type since 1926 (ODS, 52.1 KB): https://assets.publishing.service.gov.uk/media/66f44bd130536cb927482733/ras0101.ods

    RAS0102: Casualties and casualty rates, by road user type and age group, since 1979 (ODS, 142 KB): https://assets.publishing.service.gov.uk/media/66f44bd1080bdf716392e8ec/ras0102.ods

    Road user type (RAS02)

    RAS0201: Numbers and rates (ODS, 60.7 KB): https://assets.publishing.service.gov.uk/media/66f44bd1a31f45a9c765ec1f/ras0201.ods

    RAS0202: Sex and age group (ODS, 167 KB): https://assets.publishing.service.gov.uk/media/66f44bd1e84ae1fd8592e8f0/ras0202.ods

    RAS0203: Rates by mode, including air, water and rail modes (ODS, 24.2 KB): https://assets.publishing.service.gov.uk/media/67600227b745d5f7a053ef74/ras0203.ods

    Road type (RAS03)

    RAS0301: Speed limit, built-up and non-built-up roads (ODS, 49.3 KB): https://assets.publishing.service.gov.uk/media/66f44bd1c71e42688b65ec3e/ras0301.ods

    RAS0302: Urban and rural roads: https://assets.publishing.service.gov.uk/media/66f44bd1080bdf716392e8ee/ras0302.ods

  17. Data from: Lifestyle and sense of coherence: A comparative analysis among...

    • datadryad.org
    • data.niaid.nih.gov
    zip
    Updated Jun 14, 2023
    Cite
    João Paulo Costa Braga; Eduardo Wolfgram; João Paulo Batista de Souza; Roberto de Almeida; Cezar Rangel Pestana (2023). Lifestyle and sense of coherence: A comparative analysis among university students in different areas of knowledge [Dataset]. http://doi.org/10.5061/dryad.bcc2fqzhd
    Explore at:
    Available download formats: zip
    Dataset updated
    Jun 14, 2023
    Dataset provided by
    Dryad
    Authors
    João Paulo Costa Braga; Eduardo Wolfgram; João Paulo Batista de Souza; Roberto de Almeida; Cezar Rangel Pestana
    Time period covered
    Jun 2, 2023
    Description

    Data gathering

    The researchers invited the students to answer an online form - through the Google Forms virtual platform - containing the questionnaires: sociodemographic information, the FANTASTIC questionnaire on lifestyle, and a questionnaire on sense of coherence. The researchers clearly explained the research objectives and collection procedures on the home page, and the participants were given the Free and Informed Consent Form. The data gathered in the online form were transferred to a spreadsheet in Microsoft Excel. The results were filtered, classified, and treated so as to be in line with the desired statistical analysis and to feed the statistical programs used.

    Statistical analysis

    The statistical analyses were performed with the JASP statistical software, and part of the graphics with the SPSS software. First, the researchers submitted the results to normality (Shapiro-Wilk) and homogeneity (Levene test) analysis. Next, the normal homogeneous data were submitted to the ANOVA anal...
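    The analysis pipeline described above (normality and homogeneity checks followed by ANOVA) was run in JASP and SPSS, but the same steps can be sketched with scipy; the group labels and scores below are made up for illustration and are not the study's data.

```python
from scipy import stats

# Hypothetical lifestyle scores for three areas of knowledge
health_sci = [78, 82, 75, 80, 84, 79]
exact_sci = [70, 73, 68, 74, 71, 69]
humanities = [72, 76, 74, 71, 77, 73]

# Normality (Shapiro-Wilk) per group and homogeneity of variances (Levene)
for name, group in [("health", health_sci), ("exact", exact_sci), ("humanities", humanities)]:
    print(name, stats.shapiro(group))
print("Levene:", stats.levene(health_sci, exact_sci, humanities))

# One-way ANOVA across the three groups
print("ANOVA:", stats.f_oneway(health_sci, exact_sci, humanities))
```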

  18. The underlying data used for the analysis in this article.

    • plos.figshare.com
    zip
    Updated Jun 21, 2023
    Cite
    PLOS ONE (2023). The underlying data used for the analysis in this article. [Dataset]. http://doi.org/10.1371/journal.pone.0279960.s001
    Explore at:
    Available download formats: zip
    Dataset updated
    Jun 21, 2023
    Dataset authored and provided by
    PLOS ONE
    License

    Attribution 4.0 (CC BY 4.0) - https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    PubMed, Embase and Cochrane Library databases were searched up until 2017; the results are shown in the list in folder “Search strategy”. These results were imported into EndNote, resulting in 34 articles being included in this study (folder “Full texts extraction”). Data were extracted into Excel (file “HER2 New”), including hazard ratios (HRs) for survival with 95% confidence intervals (CIs). Subgroup analyses (file “HER2 New”), publication bias and sensitivity analyses (folder “Figures”) were carried out. Estimates of overall survival (OS), progression-free survival (PFS) and disease-free survival (DFS) were weighted and pooled using the DerSimonian-Laird random-effects model (file “HER2 New”). Stata was used to draw figures (folder “Figures”). (ZIP)
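    For readers unfamiliar with the pooling step, the following is a minimal sketch of DerSimonian-Laird random-effects pooling of hazard ratios in Python; it is not the authors' Stata code, and the example HRs and confidence intervals are hypothetical.

    ```python
    # Illustrative DerSimonian-Laird random-effects pooling of hazard ratios.
    # The study HRs and 95% CIs below are hypothetical.
    import numpy as np

    hr    = np.array([1.45, 1.20, 1.80, 0.95])   # hypothetical study HRs
    ci_lo = np.array([1.10, 0.90, 1.30, 0.70])   # hypothetical lower 95% CI bounds
    ci_hi = np.array([1.91, 1.60, 2.49, 1.29])   # hypothetical upper 95% CI bounds

    y  = np.log(hr)                                    # analyse on the log-HR scale
    se = (np.log(ci_hi) - np.log(ci_lo)) / (2 * 1.96)  # SE recovered from the CI width
    v  = se ** 2

    # Fixed-effect weights and Cochran's Q
    w    = 1.0 / v
    y_fe = np.sum(w * y) / np.sum(w)
    q    = np.sum(w * (y - y_fe) ** 2)

    # DerSimonian-Laird estimate of between-study variance tau^2
    k    = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

    # Random-effects weights, pooled log-HR and its standard error
    w_re  = 1.0 / (v + tau2)
    y_re  = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))

    print(f"Pooled HR = {np.exp(y_re):.2f} "
          f"(95% CI {np.exp(y_re - 1.96 * se_re):.2f} to {np.exp(y_re + 1.96 * se_re):.2f}), "
          f"tau^2 = {tau2:.3f}")
    ```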

  19. m

    Surface plasmon resonance dataset of Trypanosoma evansi RoTat1.2 variant...

    • data.mendeley.com
    Updated May 18, 2021
    + more versions
    Cite
    Ajit Singh (2021). Surface plasmon resonance dataset of Trypanosoma evansi RoTat1.2 variant surface glycoprotein antigen interaction with monoclonal antibody- coated biosensors [Dataset]. http://doi.org/10.17632/scmz2vr49s.2
    Explore at:
    Dataset updated
    May 18, 2021
    Authors
    Ajit Singh
    License

    Attribution 4.0 (CC BY 4.0) - https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Surface plasmon resonance (SPR) bio-(immuno-)sensors are being developed for the diagnosis of infectious diseases, cancers, food safety issues, etc. An SPR immunosensor using a monoclonal antibody (mAb) as the capture biomolecule coated onto the gold chip allows direct, rapid, real-time, label-free, quantitative and cost-effective detection of the target antigen as analyte in a test sample. We developed for the first time SPR immunosensors using two monoclonal antibodies, viz., 2E11 (IgG1) and 1C2 (IgG1), produced in our laboratory for real-time, label-free, and rapid detection of their target antigen, i.e., Trypanosoma evansi RoTat 1.2 variant surface glycoprotein (VSG), in sera samples from laboratory rodents and field bovines [File 1 pdf].

    First, we produced by the hybridoma technique several mAbs that reacted with the T. evansi RoTat 1.2 lysate Ags [File 2 pdf]. One of these mAbs, viz., the 2E11 mAb, was then used to immunoprecipitate the target Ag in the parasite lysate [Fig. 1A & 1B; File 3 pdf], which was identified as T. evansi VSG by mass spectrometry [Fig. 2; File 3 pdf & File 4 Excel data]. Both 2E11 and 1C2 mAbs reacted with the VSG Ag in Western blots [Fig. 3; File 3 pdf]. Then, the interactions of these mAbs with the above VSG Ag in the parasite lysate were analyzed by the respective SPR immunosensor. The immunosensor was developed by binding the biotinylated mAbs onto streptavidin immobilized on the gold chip [Dutra, RF and Kubota, LT. (2006). Clinica Chimica Acta. 379, 114-120]. The equilibrium dissociation constants (KD = kd/ka) of the mAb-VSG interactions were determined to be 127 nM (ka = 196.4 ± 61.9 s⁻¹M⁻¹; kd = 2.51E-05 s⁻¹) for the 2E11 mAb and 290 pM (ka = 4616.1 ± 170.1 s⁻¹M⁻¹; kd = 1.36E-06 s⁻¹) for the 1C2 mAb (Table 1 pdf; Files 5 & 6 Excel data; Fig. 4-5 pdf).

    Further, we produced the SPR data and sensorgrams of the interactions of the 2E11 and 1C2 mAbs with the VSG Ag in sera samples of parasite-infected laboratory rodents as well as test sera samples from field cattle and buffaloes [File 7 with Fig. 6-11 Excel data; Fig. 6-11 pdf; File 8 with Fig. 13-17 Excel data; Fig. 12-17 pdf]. In addition, the kinetic parameters of the mAb interactions with two synthetic peptide mimotopes of the VSG were determined (Table 1 pdf; File 9 with Fig. 18-21 Excel data; Fig. 18-21 pdf). The peptide mimotopes of the VSG were previously selected by PhD-12 phage display library panning against the 2E11 and 1C2 mAbs in another study [Dataset reference: Mendeley Data, V2, doi: 10.17632/bs6pbskc8n.2]. These data provide valuable information for developing real-time, label-free, SPR-based immunosensors for the diagnosis of surra caused by Trypanosoma evansi infection in a wide variety of domestic, zoo, and wildlife animal species.
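    As a quick arithmetic check of the kinetic constants quoted above, the equilibrium dissociation constant KD = kd/ka can be recomputed directly; the small differences from the reported 127 nM and 290 pM reflect rounding.

    ```python
    # Worked check of KD = kd / ka using the rate constants reported above.
    ka_2e11, kd_2e11 = 196.4, 2.51e-5      # ka in 1/(M*s), kd in 1/s
    ka_1c2,  kd_1c2  = 4616.1, 1.36e-6

    kd_const_2e11 = kd_2e11 / ka_2e11      # equilibrium dissociation constant, in molar
    kd_const_1c2  = kd_1c2 / ka_1c2

    print(f"2E11 mAb: KD = {kd_const_2e11 * 1e9:.0f} nM")   # ~128 nM (reported as 127 nM)
    print(f"1C2 mAb:  KD = {kd_const_1c2 * 1e12:.0f} pM")   # ~295 pM (reported as 290 pM)
    ```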

  20. 🦈 Shark Tank India dataset 🇮🇳

    • kaggle.com
    Updated Apr 20, 2025
    Cite
    Satya Thirumani (2025). 🦈 Shark Tank India dataset 🇮🇳 [Dataset]. https://www.kaggle.com/datasets/thirumani/shark-tank-india
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Apr 20, 2025
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Satya Thirumani
    License

    CC0 1.0 (Public Domain Dedication) - https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Shark Tank India dataset.

    Shark Tank India - Season 1 to Season 4 information, with 80 fields/columns and 630+ records.

    All seasons/episodes of 🦈 SHARK TANK INDIA 🇮🇳 were broadcast on the SonyLiv OTT platform and Sony TV.

    Here is the data dictionary for the (Indian) Shark Tank seasons dataset; a minimal loading sketch follows the field list below.

    • Season Number - Season number
    • Startup Name - Company name or product name
    • Episode Number - Episode number within the season
    • Pitch Number - Overall pitch number
    • Season Start - Season first aired date
    • Season End - Season last aired date
    • Original Air Date - Episode original/first aired date, on OTT/TV
    • Episode Title - Episode title in SonyLiv
    • Anchor - Name of the episode presenter/host
    • Industry - Industry name or type
    • Business Description - Business Description
    • Company Website - Company Website URL
    • Started in - Year in which startup was started/incorporated
    • Number of Presenters - Number of presenters
    • Male Presenters - Number of male presenters
    • Female Presenters - Number of female presenters
    • Transgender Presenters - Number of transgender/LGBTQ presenters
    • Couple Presenters - Whether the presenters are a married couple (1-yes, 0-no)
    • Pitchers Average Age - Average age of all pitchers (<30 young, 30-50 middle, >50 old)
    • Pitchers City - Presenter's town/city or place where company head office exists
    • Pitchers State - Indian state pitcher hails from or state where company head office exists
    • Yearly Revenue - Yearly revenue, in lakhs INR, -1 means negative revenue, 0 means pre-revenue
    • Monthly Sales - Total monthly sales, in lakhs
    • Gross Margin - Gross margin/profit of company, in percentages
    • Net Margin - Net margin/profit of company, in percentages
    • EBITDA - Earnings Before Interest, Taxes, Depreciation, and Amortization
    • Cash Burn - Whether the startup is loss-making in the current year and burning cash out of its own pocket (yes/no)
    • SKUs - Stock Keeping Units or number of varieties, at the time of pitch
    • Has Patents - Pitcher has Patents/Intellectual property (filed/granted), at the time of pitch
    • Bootstrapped - Startup is bootstrapped or not (yes/no)
    • Part of Match off - Whether the pitch was part of a match-off (two similar brands pitching at the same time)
    • Original Ask Amount - Original Ask Amount, in lakhs INR
    • Original Offered Equity - Original Offered Equity, in percentages
    • Valuation Requested - Valuation Requested, in lakhs INR
    • Received Offer - Received offer or not, 1-received, 0-not received
    • Accepted Offer - Accepted offer or not, 1-accepted, 0-rejected
    • Total Deal Amount - Total Deal Amount, in lakhs INR
    • Total Deal Equity - Total Deal Equity, in percentages
    • Total Deal Debt - Total Deal debt/loan amount, in lakhs INR
    • Debt Interest - Debt interest rate, in percentages
    • Deal Valuation - Deal Valuation, in lakhs INR
    • Number of sharks in deal - Number of sharks involved in deal
    • Deal has conditions - Deal has conditions or not? (yes or no)
    • Royalty Percentage - Royalty percentage, if it's royalty deal
    • Royalty Recouped Amount - Royalty recouped amount, if it's royalty deal, in lakhs
    • Advisory Shares Equity - Deal with Advisory shares or equity, in percentages
    • Namita Investment Amount - Namita Investment Amount, in lakhs INR
    • Namita Investment Equity - Namita Investment Equity, in percentages
    • Namita Debt Amount - Namita Debt Amount, in lakhs INR
    • Vineeta Investment Amount - Vineeta Investment Amount, in lakhs INR
    • Vineeta Investment Equity - Vineeta Investment Equity, in percentages
    • Vineeta Debt Amount - Vineeta Debt Amount, in lakhs INR
    • Anupam Investment Amount - Anupam Investment Amount, in lakhs INR
    • Anupam Investment Equity - Anupam Investment Equity, in percentages
    • Anupam Debt Amount - Anupam Debt Amount, in lakhs INR
    • Aman Investment Amount - Aman Investment Amount, in lakhs INR
    • Aman Investment Equity - Aman Investment Equity, in percentages
    • Aman Debt Amount - Aman Debt Amount, in lakhs INR
    • Peyush Investment Amount - Peyush Investment Amount, in lakhs INR
    • Peyush Investment Equity - Peyush Investment Equity, in percentages
    • Peyush Debt Amount - Peyush Debt Amount, in lakhs INR
    • Ritesh Investment Amount - Ritesh Investment Amount, in lakhs INR
    • Ritesh Investment Equity - Ritesh Investment Equity, in percentages
    • Ritesh Debt Amount - Ritesh Debt Amount, in lakhs INR
    • Amit Investment Amount - Amit Investment Amount, in lakhs INR
    • Amit Investment Equity - Amit Investment Equity, in percentages
    • Amit Debt Amount - Amit Debt Amount, in lakhs INR
    • Guest Investment Amount - Guest Investment Amount, in lakhs INR
    • Guest Investment Equity - Guest Investment Equity, in percentages
    • Guest Debt Amount - Guest Debt Amount, in lakhs INR
    • Invested Guest Name - Name of the guest(s) who invested in deal
    • All Guest Names - Name of all guests, who are present in episode
    • Namita Present - Whether Namita present in episode or not
    • Vineeta Present - Whether Vineeta present in episode or not
    • Anupam ...
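    The following is a minimal sketch of loading and sanity-checking the dataset with pandas, assuming the Kaggle file has been downloaded locally; the CSV file name is hypothetical and the column names follow the data dictionary above.

    ```python
    # Load the Shark Tank India data and cross-check the implied deal valuation.
    # The local CSV file name is hypothetical; column names follow the data dictionary.
    import pandas as pd

    df = pd.read_csv("shark_tank_india.csv")

    # Keep accepted equity deals only (Total Deal Equity > 0 avoids pure-debt deals)
    deals = df[(df["Accepted Offer"] == 1) & (df["Total Deal Equity"] > 0)].copy()

    # Deal Valuation (lakhs INR) should roughly equal Total Deal Amount / Total Deal Equity * 100
    deals["Implied Valuation"] = deals["Total Deal Amount"] / deals["Total Deal Equity"] * 100

    print(deals[["Startup Name", "Total Deal Amount", "Total Deal Equity",
                 "Deal Valuation", "Implied Valuation"]].head())
    ```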