23 datasets found
  1. A/B testing among leading Google Play apps in the U.S. 2023, by frequency

    • statista.com
    Updated Sep 4, 2024
    Cite
    Statista (2024). A/B testing among leading Google Play apps in the U.S. 2023, by frequency [Dataset]. https://www.statista.com/statistics/1490294/us-google-play-ab-testing-frequency/
    Explore at:
    Dataset updated
    Sep 4, 2024
    Dataset authored and provided by
    Statista (statista.com)
    Time period covered
    2023
    Area covered
    United States
    Description

    According to a study conducted in 2023 among the most popular Android mobile apps on the Google Play Store in the United States, 40 percent of apps had run two or more A/B tests on their displayed screenshots in the past year. A/B testing, also called split testing, was not a popular App Store Optimization practice for testing which icons, videos, and feature graphics were more effective at onboarding users on the Google Play Store.

  2. Market Survey on AB Testing Software Market Covering Sales Outlook,...

    • futuremarketinsights.com
    pdf
    Updated Aug 4, 2023
    Cite
    Future Market Insights (2023). Market Survey on AB Testing Software Market Covering Sales Outlook, Up-to-date Key Trends, Market Size and Forecast, Market Statistics, Penetration Analysis, Pricing Analysis and Company Ecosystem [Dataset]. https://www.futuremarketinsights.com/reports/ab-testing-software-market
    Explore at:
    pdf
    Dataset updated
    Aug 4, 2023
    Dataset authored and provided by
    Future Market Insights
    License

    https://www.futuremarketinsights.com/privacy-policy

    Time period covered
    2023 - 2033
    Area covered
    Worldwide
    Description

    The newly released AB Testing Software Market analysis report by Future Market Insights estimates global sales of AB testing software at USD 1,211.3 million in 2023. With an 11.7% projected growth rate during 2023 to 2033, the market is expected to reach a valuation of USD 3,673.5 million by 2033.

    Attributes and details:
    • Global AB Testing Software Market Size (2023): USD 1,211.3 million
    • Global AB Testing Software Market Size (2033): USD 3,673.5 million
    • Global AB Testing Software Market CAGR (2023 to 2033): 11.7%
    • United States AB Testing Software Market Size (2033): USD 1.2 billion
    • United States AB Testing Software Market CAGR (2023 to 2033): 11.6%
    • Key Companies Covered: Optimizely; VWO; AB Tasty; Instapage; Dynamic Yield; Adobe; Freshmarketer; Unbounce; Monetate; Kameleoon; Evergage; SiteSpect; Evolv Ascend; Omniconvert; Landingi
  3. data set for A/B testing for web developer

    • kaggle.com
    zip
    Updated Jul 17, 2022
    Cite
    Rami Ashraf (2022). data set for A/B testing for web developer [Dataset]. https://www.kaggle.com/datasets/ramiashraf/data-set-for-ab-testing-for-web-developer/suggestions
    Explore at:
    zip (5,328,022 bytes)
    Dataset updated
    Jul 17, 2022
    Authors
    Rami Ashraf
    Description

    Dataset

    This dataset was created by Rami Ashraf


  4. A/B Testing Data

    • kaggle.com
    Updated May 27, 2022
    Cite
    Nicole Miller (2022). A/B Testing Data [Dataset]. https://www.kaggle.com/datasets/nmiller0714/ab-testing-data/suggestions
    Explore at:
    Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    May 27, 2022
    Dataset provided by
    Kaggle (kaggle.com)
    Authors
    Nicole Miller
    Description

    Dataset

    This dataset was created by Nicole Miller


  5. Personalization tools used by marketers in the U.S. 2020

    • statista.com
    Updated Dec 10, 2024
    Cite
    Statista (2024). Personalization tools used by marketers in the U.S. 2020 [Dataset]. https://www.statista.com/statistics/1208504/personalization-tools-used-marketers/
    Explore at:
    Dataset updated
    Dec 10, 2024
    Dataset authored and provided by
    Statista (statista.com)
    Time period covered
    Feb 20, 2020 - Mar 27, 2020
    Area covered
    United States
    Description

    In a survey concluded in March 2020, marketers in the United States were asked about the types of tools they used to execute personalization across their channels. According to the findings, 67 percent of survey participants used e-mail marketing solutions, and the same share used an A/B testing tool. Some 30 percent of marketers indicated using a customer data platform (CDP).

  6. A/B Testing Data Conversinon

    • kaggle.com
    zip
    Updated Aug 3, 2022
    Cite
    Igor Kocic (2022). A/B Testing Data Conversinon [Dataset]. https://www.kaggle.com/datasets/igorkocic/ab-testing-data-conversinon/discussion
    Explore at:
    zip (1,765,489 bytes)
    Dataset updated
    Aug 3, 2022
    Authors
    Igor Kocic
    Description

    Dataset

    This dataset was created by Igor Kocic


  7. AB Testing Market By Type (Web Based and Cloud Based) and By Application...

    • fnfresearch.com
    pdf
    Updated Mar 17, 2025
    Cite
    Facts and Factors (2025). AB Testing Market By Type (Web Based and Cloud Based) and By Application (SMEs and Large Enterprises): Global Industry Outlook, Market Size, Business Intelligence, Consumer Preferences, Statistical Surveys, Comprehensive Analysis, Historical Developments, Current Trends, and Forecast 2020–2026 [Dataset]. https://www.fnfresearch.com/ab-testing-market-by-type-web-based-and-889
    Explore at:
    pdf
    Dataset updated
    Mar 17, 2025
    Dataset provided by
    Authors
    Facts and Factors
    License

    https://www.fnfresearch.com/privacy-policy

    Time period covered
    2022 - 2030
    Area covered
    Global
    Description

    The global AB testing market was valued at approximately USD 570 million in 2019. The market is expected to grow at a CAGR of 9% and reach around USD 1,040 million by 2026.

  8. Grocery website data for AB test

    • kaggle.com
    Updated Sep 19, 2020
    + more versions
    Cite
    Tetiana Klimonova (2020). Grocery website data for AB test [Dataset]. https://www.kaggle.com/datasets/tklimonova/grocery-website-data-for-ab-test/suggestions
    Explore at:
    Croissant
    Dataset updated
    Sep 19, 2020
    Dataset provided by
    Kaggle (kaggle.com)
    Authors
    Tetiana Klimonova
    Description

    Dataset

    This dataset was created by Tetiana Klimonova


  9. A/B test data

    • kaggle.com
    Updated Jul 29, 2021
    Cite
    Mohamed-El haddad (2021). A/B test data [Dataset]. https://www.kaggle.com/datasets/mohamedahmed10000/ab-test-data/data
    Explore at:
    Croissant
    Dataset updated
    Jul 29, 2021
    Dataset provided by
    Kaggle (kaggle.com)
    Authors
    Mohamed-El haddad
    Description

    Dataset

    This dataset was created by Mohamed-El haddad


  10. Types of marketing analytics performed in-house worldwide 2024

    • statista.com
    Updated Oct 27, 2024
    Cite
    Statista (2024). Types of marketing analytics performed in-house worldwide 2024 [Dataset]. https://www.statista.com/statistics/1467476/marketing-analytics-worldwide/
    Explore at:
    Dataset updated
    Oct 27, 2024
    Dataset authored and provided by
    Statista (statista.com)
    Area covered
    World
    Description

    During a global 2024 survey, A/B and multivariate testing was found to be the type of advanced analytics most often performed in-house, named by 55 percent of respondents. Ad platform optimization ranked second, mentioned by 42 percent of respondents.


    Data from: Randomization in Online Experiments

    • journaldata.zbw.eu
    zip
    Updated Mar 3, 2021
    Cite
    Konstantin Golyaev (2021). Randomization in Online Experiments [Dataset]. http://doi.org/10.15456/jbnst.2018192.235844
    Explore at:
    zip
    Dataset updated
    Mar 3, 2021
    Dataset provided by
    ZBW - Leibniz Informationszentrum Wirtschaft
    Authors
    Konstantin Golyaev
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Most scientists consider randomized experiments to be the best method available to establish causality. On the Internet, during the past twenty-five years, randomized experiments have become common, often referred to as A/B testing. For practical reasons, much A/B testing does not use pseudo-random number generators to implement randomization. Instead, hash functions are used to transform the distribution of identifiers of experimental units into a uniform distribution. Using two large, industry data sets, I demonstrate that the success of hash-based quasi-randomization strategies depends greatly on the hash function used: MD5 yielded good results, while SHA512 yielded less impressive ones.
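The hash-based quasi-randomization the abstract describes can be sketched in a few lines (a minimal illustration, not the author's code; the salt and bucket count are hypothetical choices):

```python
import hashlib

def assign_variant(unit_id: str, salt: str = "exp-001", n_variants: int = 2) -> int:
    """Hash a unit identifier into a uniform bucket to pick its A/B variant.

    MD5 is used here because the study found it produced well-balanced
    assignments; the salt keeps assignments independent across experiments.
    """
    digest = hashlib.md5(f"{salt}:{unit_id}".encode()).hexdigest()
    return int(digest, 16) % n_variants

# Assignment is deterministic per unit and roughly uniform across units.
counts = [0, 0]
for i in range(10_000):
    counts[assign_variant(f"user-{i}")] += 1
```

Because the bucket depends only on the identifier and the salt, a returning user always sees the same variant without any state being stored.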

  12. Share of public high school students scoring above avg. on Calculus AB Exams...

    • statista.com
    Updated Feb 3, 2012
    Cite
    Statista (2012). Share of public high school students scoring above avg. on Calculus AB Exams [Dataset]. https://www.statista.com/statistics/219511/share-of-public-high-school-students-scoring-above-avg-on-calculus-ab-exams/
    Explore at:
    Dataset updated
    Feb 3, 2012
    Dataset authored and provided by
    Statista (statista.com)
    Time period covered
    2010
    Area covered
    United States
    Description

    The statistic shows the percentage of public high school students in the United States scoring 3 or higher on at least one Advanced Placement Calculus Exam in 2010 by state. Nationally, the share of the graduating class that demonstrated a mastery of Calculus AB by scoring a 3 or higher on the AP Exam was 3.5 percent in 2010.

  13. data_AB_testing

    • kaggle.com
    zip
    Updated Sep 26, 2023
    + more versions
    Cite
    Muharrem Görkem (2023). data_AB_testing [Dataset]. https://www.kaggle.com/datasets/muharremg/data-ab-testing
    Explore at:
    zip (18,717 bytes)
    Dataset updated
    Sep 26, 2023
    Authors
    Muharrem Görkem
    Description

    Dataset

    This dataset was created by Muharrem Görkem



    Data from: Factorial Designs for Online Experiments

    • tandf.figshare.com
    txt
    Updated Jun 2, 2023
    Cite
    Tamar Haizler; David M. Steinberg (2023). Factorial Designs for Online Experiments [Dataset]. http://doi.org/10.6084/m9.figshare.11348135.v1
    Explore at:
    txt
    Dataset updated
    Jun 2, 2023
    Dataset provided by
    Taylor & Francis
    Authors
    Tamar Haizler; David M. Steinberg
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Online experiments, and specifically A/B testing, are commonly used to identify whether a proposed change to a web page is in fact an effective one. This study focuses on basic settings in which a binary outcome is obtained from each user who visits the website and the probability of a response may be affected by numerous factors. We use Bayesian probit regression to model the factor effects and combine elements from traditional two-level factorial experiments and multi-armed bandits to construct sequential designs that embed attractive features of estimation and exploitation.
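The traditional two-level factorial structure the abstract builds on can be illustrated by enumerating coded factor levels (a generic sketch; the paper's designs are sequential and Bayesian rather than this exhaustive form):

```python
from itertools import product

def full_factorial(n_factors: int) -> list:
    """All 2^k runs of a two-level full factorial design, with each
    factor's low/high level coded as -1/+1."""
    return list(product((-1, 1), repeat=n_factors))

runs = full_factorial(3)  # 2^3 = 8 candidate web-page configurations
```

Each run is one combination of page elements to test; a sequential design would pick which of these runs to show next based on the responses observed so far.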

  15. Low level hydrogen peroxide vapor data for Cary test house decontamination...

    • catalog.data.gov
    • s.cnmilf.com
    Updated Nov 12, 2020
    + more versions
    Cite
    U.S. EPA Office of Research and Development (ORD) (2020). Low level hydrogen peroxide vapor data for Cary test house decontamination study, using a Bacillus anthracis surrogate [Dataset]. https://catalog.data.gov/dataset/low-level-hydrogen-peroxide-vapor-data-for-cary-test-house-decontamination-study-using-a-b
    Explore at:
    Dataset updated
    Nov 12, 2020
    Dataset provided by
    United States Environmental Protection Agency (epa.gov)
    Description

    The data set comprises five Excel spreadsheets, one for each of the tests described in the research article. The data in the spreadsheets are the colony forming unit (CFU) data for each coupon material replicate and location. This dataset is associated with the following publication: Mickelsen, L., J. Wood, W. Calfee, S. Serre, S. Ryan, A. Touati, F. Delafield, and D. Aslett. Low-concentration hydrogen peroxide decontamination for Bacillus spore contamination in buildings. Remediation Journal. John Wiley & Sons, Inc., Hoboken, NJ, USA, 30(1): 47-56, (2019).

  16. Time to Update the Split-Sample Approach in Hydrological Model Calibration...

    • zenodo.org
    zip
    Updated May 31, 2022
    Cite
    Hongren Shen; Bryan A. Tolson; Juliane Mai (2022). Time to Update the Split-Sample Approach in Hydrological Model Calibration v1.0 [Dataset]. http://doi.org/10.5281/zenodo.5915374
    Explore at:
    zip
    Dataset updated
    May 31, 2022
    Dataset provided by
    Zenodo (zenodo.org)
    Authors
    Hongren Shen; Bryan A. Tolson; Juliane Mai
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Time to Update the Split-Sample Approach in Hydrological Model Calibration

    Hongren Shen1, Bryan A. Tolson1, Juliane Mai1

    1Department of Civil and Environmental Engineering, University of Waterloo, Waterloo, Ontario, Canada

    Corresponding author: Hongren Shen (hongren.shen@uwaterloo.ca)

    Abstract

    Model calibration and validation are critical in hydrological model robustness assessment. Unfortunately, the commonly-used split-sample test (SST) framework for data splitting requires modelers to make subjective decisions without clear guidelines. This large-sample SST assessment study empirically assesses how different data splitting methods influence post-validation model testing period performance, thereby identifying optimal data splitting methods under different conditions. This study investigates the performance of two lumped conceptual hydrological models calibrated and tested in 463 catchments across the United States using 50 different data splitting schemes. These schemes are established regarding the data availability, length and data recentness of the continuous calibration sub-periods (CSPs). A full-period CSP is also included in the experiment, which skips model validation. The assessment approach is novel in multiple ways including how model building decisions are framed as a decision tree problem and viewing the model building process as a formal testing period classification problem, aiming to accurately predict model success/failure in the testing period. Results span different climate and catchment conditions across a 35-year period with available data, making conclusions quite generalizable. Calibrating to older data and then validating models on newer data produces inferior model testing period performance in every single analysis conducted and should be avoided. Calibrating to the full available data and skipping model validation entirely is the most robust split-sample decision. Experimental findings remain consistent no matter how model building factors (i.e., catchments, model types, data availability, and testing periods) are varied. Results strongly support revising the traditional split-sample approach in hydrological modeling.

    Data description

    This data was used in the paper entitled "Time to Update the Split-Sample Approach in Hydrological Model Calibration" by Shen et al. (2022).

    Catchment, meteorological forcing and streamflow data are provided for hydrological modeling use. Specifically, the forcing and streamflow data are archived in the Raven hydrological modeling required format. The GR4J and HMETS model building results in the paper, i.e., reference KGE and KGE metrics in calibration, validation and testing periods, are provided for replication of the split-sample assessment performed in the paper.

    Data content

    The data folder contains a gauge info file (CAMELS_463_gauge_info.txt), which reports basic information of each catchment, and 463 subfolders, each having four files for a catchment, including:

    (1) Raven_Daymet_forcing.rvt, which contains Daymet meteorological forcing (i.e., daily precipitation in mm/d, minimum and maximum air temperature in deg_C, shortwave in MJ/m2/day, and day length in day) from Jan 1st 1980 to Dec 31 2014 in a Raven hydrological modeling required format.

    (2) Raven_USGS_streamflow.rvt, which contains daily discharge data (in m3/s) from Jan 1st 1980 to Dec 31 2014 in a Raven hydrological modeling required format.

    (3) GR4J_metrics.txt, which contains reference KGE and GR4J-based KGE metrics in calibration, validation and testing periods.

    (4) HMETS_metrics.txt, which contains reference KGE and HMETS-based KGE metrics in calibration, validation and testing periods.

    Data collection and processing methods

    Data source

    • Catchment information and the Daymet meteorological forcing are retrieved from the CAMELS data set, which can be found here.
    • The USGS streamflow data are collected from the U.S. Geological Survey's (USGS) National Water Information System (NWIS), which can be found here.
    • The GR4J and HMETS performance metrics (i.e., reference KGE and KGE) are produced in the study by Shen et al. (2022).

    Forcing data processing

    • A quality assessment procedure was performed. For example, daily maximum air temperature should be larger than the daily minimum air temperature; otherwise, these two values will be swapped.
    • Units are converted to Raven-required ones. Precipitation: mm/day, unchanged; daily minimum/maximum air temperature: deg_C, unchanged; shortwave: W/m2 to MJ/m2/day; day length: seconds to days.
    • Data for a catchment is archived in a RVT (ASCII-based) file, in which the second line specifies the start time of the forcing series, the time step (= 1 day), and the total time steps in the series (= 12784), respectively; the third and the fourth lines specify the forcing variables and their corresponding units, respectively.
    • More details of Raven formatted forcing files can be found in the Raven manual (here).

    Streamflow data processing

    • Units are converted to Raven-required ones. Daily discharge originally in cfs is converted to m3/s.
    • Missing data are replaced with -1.2345 as Raven requires. Those missing time steps will not be counted in performance metrics calculation.
    • Streamflow series are archived in RVT (ASCII-based) files, which open with eight commented lines specifying relevant gauge and streamflow information, such as gauge name, gauge ID, USGS-reported catchment area, calculated catchment area (based on the catchment shapefiles in the CAMELS dataset), streamflow data range, data time step, and missing data periods. The first line after the commented lines specifies the data type (default is HYDROGRAPH), subbasin ID (i.e., SubID), and discharge unit (m3/s), respectively; the next line specifies the start of the streamflow data, the time step (= 1 day), and the total time steps in the series (= 12784), respectively.

    GR4J and HMETS metrics

    The GR4J and HMETS metrics files consist of reference KGE and KGE values in the model calibration, validation, and testing periods, derived in the massive split-sample test experiment performed in the paper.

    • Columns in these metrics files are gauge ID, calibration sub-period (CSP) identifier, KGE in calibration, validation, testing1, testing2, and testing3, respectively.
    • We proposed 50 different CSPs in the experiment. "CSP_identifier" is a unique name for each CSP; e.g., the CSP identifier "CSP-3A_1990" indicates that the model is built on Jan 1st 1990, calibrated on the first 3-year sample (1981-1983), and validated on the remaining years of the 1980 to 1989 period. Note that 1980 is always used for spin-up.
    • We defined three testing periods (independent to calibration and validation periods) for each CSP, which are the first 3 years from model build year inclusive, the first 5 years from model build year inclusive, and the full years from model build year inclusive. e.g., "testing1", "testing2", and "testing3" for CSP-3A_1990 are 1990-1992, 1990-1994, and 1990-2014, respectively.
    • Reference flow is the interannual mean daily flow based on a specific period, which is derived for a one-year period and then repeated in each year in the calculation period.
      • For calibration, its reference flow is based on spin-up + calibration periods.
      • For validation, its reference flow is based on spin-up + calibration periods.
      • For testing, its reference flow is based on spin-up +calibration + validation periods.
    • Reference KGE is calculated from the reference flow and observed streamflow in a specific calculation period (e.g., calibration). It is computed using the KGE equation with the reference flow substituted for the simulated flow in that period. Note that the reference KGEs for the three testing periods correspond to the same historical period but are different, because each testing period spans a different time window and covers a different series of observed flow.

    More details of the split-sample test experiment and the modeling results analysis can be found in the paper by Shen et al. (2022).

    Citation

    Journal Publication

    This study:

    Shen, H., Tolson, B. A., & Mai, J.(2022). Time to update the split-sample approach in hydrological model calibration. Water Resources Research, 58, e2021WR031523. https://doi.org/10.1029/2021WR031523

    Original CAMELS dataset:

    A. J. Newman, M. P. Clark, K. Sampson, A. Wood, L. E. Hay, A. Bock, R. J. Viger, D. Blodgett, L. Brekke, J. R. Arnold, T. Hopson, and Q. Duan (2015). Development of a large-sample watershed-scale hydrometeorological dataset for the contiguous USA: dataset characteristics and assessment of regional variability in hydrologic model performance. Hydrol. Earth Syst. Sci., 19, 209-223, http://doi.org/10.5194/hess-19-209-2015

    Data Publication

    This study:

    H. Shen, B.


    MD COVID-19 - Total Testing Volume by County 2022 Archive V1

    • opendata.maryland.gov
    • catalog.data.gov
    application/rdfxml +5
    Updated Jan 12, 2023
    + more versions
    Cite
    Maryland Department of Health Prevention and Health Promotion Administration, MDH PHPA (2023). MD COVID-19 - Total Testing Volume by County 2022 Archive V1 [Dataset]. https://opendata.maryland.gov/Health-and-Human-Services/MD-COVID-19-Total-Testing-Volume-by-County-2022-Ar/3he6-e37c
    Explore at:
    application/rdfxml, xml, csv, json, tsv, application/rssxml
    Dataset updated
    Jan 12, 2023
    Dataset authored and provided by
    Maryland Department of Health Prevention and Health Promotion Administration, MDH PHPA
    License

    U.S. Government Works (usa.gov/government-works)
    License information was derived automatically

    Area covered
    Maryland
    Description

    NOTE: This dataset is no longer being updated as of 4/27/2023. It is retired and no longer included in public COVID-19 data dissemination.

    See this link for more information: https://imap.maryland.gov/pages/covid-data

    Summary: The total number of COVID-19 tests administered and the 7-day average percent positive rate in each Maryland jurisdiction.

    Description: Testing volume data represent the total number of PCR COVID-19 tests electronically reported for Maryland residents; this count does not include test results submitted by labs and other clinical facilities through non-electronic means. The 7-day percent positive rate is a rolling average of each day's positivity percentage. The percentage is calculated using the total number of tests electronically reported to MDH (by date of report) and the number of positive tests electronically reported to MDH (by date of report). Electronic lab reports come from NEDSS. Upon reaching a limit of the Socrata platform, the data were broken into multiple parts: "MD COVID-19 - Total Testing Volume by County" (for 2023), "MD COVID-19 - Total Testing Volume by County 2022 Archive", "MD COVID-19 - Total Testing Volume by County 2021 Archive", and "MD COVID-19 - Total Testing Volume by County 2020 Archive".
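The 7-day percent positive rate described above, a rolling average of each day's positivity percentage, can be sketched as follows (an illustrative computation, not MDH's code; the variable names are hypothetical):

```python
def rolling_percent_positive(positives, totals, window=7):
    """Average the last `window` days of daily positivity percentages.

    positives[i] and totals[i] are the positive and total tests reported
    on day i; days without a full window behind them yield None.
    """
    daily = [100.0 * p / t for p, t in zip(positives, totals)]
    return [
        sum(daily[i + 1 - window : i + 1]) / window if i + 1 >= window else None
        for i in range(len(daily))
    ]

# A constant 1-in-10 daily positivity yields a 10.0% rolling rate
# once seven days of reports exist.
rates = rolling_percent_positive([1] * 10, [10] * 10)
```

Note this averages the daily percentages, as the description states, rather than pooling seven days of counts into a single ratio.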

    Terms of Use The Spatial Data, and the information therein, (collectively the "Data") is provided "as is" without warranty of any kind, either expressed, implied, or statutory. The user assumes the entire risk as to quality and performance of the Data. No guarantee of accuracy is granted, nor is any responsibility for reliance thereon assumed. In no event shall the State of Maryland be liable for direct, indirect, incidental, consequential or special damages of any kind. The State of Maryland does not accept liability for any damages or misrepresentation caused by inaccuracies in the Data or as a result to changes to the Data, nor is there responsibility assumed to maintain the Data in any manner or form. The Data can be freely distributed as long as the metadata entry is not modified or deleted. Any data derived from the Data must acknowledge the State of Maryland in the metadata.

  18. datatrove-tests

    • huggingface.co
    Updated May 5, 2024
    Cite
    Hugging Face (2024). datatrove-tests [Dataset]. https://huggingface.co/datasets/huggingface/datatrove-tests
    Explore at:
    Croissant
    Dataset updated
    May 5, 2024
    Dataset authored and provided by
    Hugging Face (huggingface.co)
    Description

    Datasets used for datatrove testing. Each split contains the same data:

    dst = [
        {"text": "hello"},
        {"text": "world"},
        {"text": "how"},
        {"text": "are"},
        {"text": "you"},
    ]

    Based on the split name, the data are sharded into n bins.
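The split-based sharding described above can be illustrated with a simple round-robin scheme (one plausible assignment for illustration; datatrove's actual bin logic may differ):

```python
def shard(records: list, n_bins: int) -> list:
    """Distribute records into n bins round-robin, preserving order within each bin."""
    bins = [[] for _ in range(n_bins)]
    for i, record in enumerate(records):
        bins[i % n_bins].append(record)
    return bins

# The five test records from the dataset description, split into two bins.
dst = [{"text": w} for w in ["hello", "world", "how", "are", "you"]]
shards = shard(dst, 2)
```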


    Dataset for Cost-effective Simulation-based Test Selection in Self-driving...

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jan 31, 2022
    Cite
    Ganz, Nicolas (2022). Dataset for Cost-effective Simulation-based Test Selection in Self-driving Cars Software with SDC-Scissor [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_5903160
    Explore at:
    Dataset updated
    Jan 31, 2022
    Dataset provided by
    Khatiri, Sajad
    Panichella, Sebastiano
    Birchler, Christian
    Gambi, Alessio
    Ganz, Nicolas
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    SDC-Scissor tool for Cost-effective Simulation-based Test Selection in Self-driving Cars Software

    This dataset provides test cases for self-driving cars with the BeamNG simulator. Check out the repository and demo video to get started.

    GitHub: github.com/ChristianBirchler/sdc-scissor

    This project extends the tool competition platform from the Cyber-Physical Systems Testing Competition, which was part of the SBST Workshop in 2021.

    Usage

    Demo

    YouTube Link

    Installation

    The tool can either be run with Docker or locally using Poetry.

    When running simulations, a working installation of BeamNG.research is required. Note that the simulation itself cannot run in a Docker container; it must run locally.

    To install the application use one of the following approaches:

    Docker: docker build --tag sdc-scissor .

    Poetry: poetry install

    Using the Tool

    The tool can be used with the following two commands:

    Docker: docker run --volume "$(pwd)/results:/out" --rm sdc-scissor [COMMAND] OPTIONS

    Poetry: poetry run python sdc-scissor.py [COMMAND] [OPTIONS]

    There are multiple commands available. To keep the documentation simple, only the commands and their options are described below.

    Generation of tests:

    generate-tests --out-path /path/to/store/tests

    Automated labeling of Tests:

    label-tests --road-scenarios /path/to/tests --result-folder /path/to/store/labeled/tests

    Note: This only works locally with BeamNG.research installed

    Model evaluation:

    evaluate-models --dataset /path/to/train/set --save

    Split train and test data:

    split-train-test-data --scenarios /path/to/scenarios --train-dir /path/for/train/data --test-dir /path/for/test/data --train-ratio 0.8

    Test outcome prediction:

    predict-tests --scenarios /path/to/scenarios --classifier /path/to/model.joblib

    Evaluation based on random strategy:

    evaluate --scenarios /path/to/test/scenarios --classifier /path/to/model.joblib

    The possible parameters are always documented with --help.

    Linting

    The tool is verified with the linters flake8 and pylint. These are automatically enabled in Visual Studio Code and can be run manually with the following commands:

    poetry run flake8 .
    poetry run pylint **/*.py

    License

    The software we developed is distributed under the GNU GPL license. See the LICENSE.md file for details.

    Contacts

    Christian Birchler - Zurich University of Applied Sciences (ZHAW), Switzerland - birc@zhaw.ch

    Nicolas Ganz - Zurich University of Applied Sciences (ZHAW), Switzerland - gann@zhaw.ch

    Sajad Khatiri - Zurich University of Applied Sciences (ZHAW), Switzerland - mazr@zhaw.ch

    Dr. Alessio Gambi - Passau University, Germany - alessio.gambi@uni-passau.de

    Dr. Sebastiano Panichella - Zurich University of Applied Sciences (ZHAW), Switzerland - panc@zhaw.ch

    References

    Christian Birchler, Nicolas Ganz, Sajad Khatiri, Alessio Gambi, and Sebastiano Panichella. 2022. Cost-effective Simulation-based Test Selection in Self-driving Cars Software with SDC-Scissor. In 2022 IEEE 29th International Conference on Software Analysis, Evolution and Reengineering (SANER), IEEE.

    If you use this tool in your research, please cite the following paper:

    @INPROCEEDINGS{Birchler2022,
      author={Birchler, Christian and Ganz, Nicolas and Khatiri, Sajad and Gambi, Alessio and Panichella, Sebastiano},
      booktitle={2022 IEEE 29th International Conference on Software Analysis, Evolution and Reengineering (SANER)},
      title={Cost-effective Simulation-based Test Selection in Self-driving Cars Software with SDC-Scissor},
      year={2022},
    }

  20. Data from: A comparison of new cardiovascular endurance test using the 2-minute marching test vs. 6-minute walk test in healthy volunteers: A crossover randomized controlled trial

    • search.dataone.org
    • data.niaid.nih.gov
    Updated Jul 25, 2024
    Suchai Surapichpong; Sucheela Jisarojito; Chaiyanut Surapichpong (2024). A comparison of new cardiovascular endurance test using the 2-minute marching test vs. 6-minute walk test in healthy volunteers: A crossover randomized controlled trial [Dataset]. http://doi.org/10.5061/dryad.31zcrjdv2
    Explore at:
    Dataset updated
    Jul 25, 2024
    Dataset provided by
    Dryad Digital Repository
    Authors
    Suchai Surapichpong; Sucheela Jisarojito; Chaiyanut Surapichpong
    Description

    This was a 2×2 randomized crossover controlled trial comparing the cardiovascular endurance of healthy volunteers using a 2-minute marching test (2MMT) and a 6-minute walk test (6MWT). The study included 254 participants of both sexes, aged 20–50 years, with a height of ≥150 cm and a body mass index (BMI) of ≤25 kg/m2. Participants could perform activities independently and had normal annual chest radiographs and electrocardiograms. A group-randomized design was used to assign participants to Sequence 1 (AB) or Sequence 2 (BA). The tests were conducted over 2 consecutive days, with a 1-day washout period. On day 1, participants randomly underwent either a 6MWT or 2MMT in a single-anonymized setup, and on day 2, the tests were performed in reverse order. Maximal oxygen consumption (VO2max) was analyzed as the primary outcome; heart rate (HR), respiratory rate (RR), blood pressure (BP), oxygen saturation, dyspnea, and leg fatigue were secondary outcomes. Data were collected from 12...

    Sample size

    The sample size required for the equivalence study was estimated using nQuery software and calculated using two one-sided equivalence tests for a crossover design. The alpha error probability, statistical power, lower equivalence limit, and upper equivalence limit were set at 5%, 90%, -2.00, and +2.00, respectively, using the clinical margin (minimal clinically important difference [MCID]) of VO2max from a previous study, which was 2 ml/kg/min [15], with a standard deviation of 8.6 [16]. Based on these values, 101 participants were needed for the crossover design; allowing for a 20% dropout rate, 127 patients were randomized per arm, resulting in 254 participants. However, due to the COVID-19 pandemic, data collection was incomplete, and only 127 data sets could be analyzed in this study.
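As a back-of-envelope check of the reported sample size, the per-sequence size for two one-sided equivalence tests (TOST) in a 2×2 crossover can be approximated as (z_{1-α} + z_{1-β/2})² · SD² / (2 · margin²). The exact nQuery calculation is not given in the description, so this formula is an assumption; z = 1.6449 is the 95th normal percentile, which serves both α = 0.05 and the 90% power (β/2 = 0.05) terms.

```shell
# Approximate TOST sample size for a 2x2 crossover (assumed formula, not the
# exact nQuery calculation). z = 1.6449 covers both alpha = 0.05 and
# power = 90% (beta/2 = 0.05); sd = 8.6 and margin = 2 are from the text.
awk -v z=1.6449 -v sd=8.6 -v margin=2 -v dropout=0.2 'BEGIN {
    n = (2 * z)^2 * sd^2 / (2 * margin^2)   # per-sequence size before dropout
    n = (n > int(n)) ? int(n) + 1 : n       # ceil -> 101
    m = n / (1 - dropout)                   # inflate for 20% dropout
    m = (m > int(m)) ? int(m) + 1 : m       # ceil -> 127 per sequence
    print n, m, 2 * m                       # prints: 101 127 254
}'
```

Under these assumptions the calculation reproduces the reported figures: 101 participants per sequence, 127 after dropout inflation, and 254 in total.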
    Inclusion and exclusion criteria

    The inclusion criteria were male and female healthy volunteers, aged 20–50 years, with height ≥150 cm and BMI ≤...

    # A comparison of new cardiovascular endurance test using the 2-minute marching test vs. 6-minute walk test in healthy volunteers: A crossover randomized controlled trial

    https://doi.org/10.5061/dryad.31zcrjdv2

    We have submitted three figures describing the study (Fig 1 CONSORT diagram of the study_figure.TIFF, Fig 2 The trial design_figure.TIFF, and Fig 3 The study protocol_figure.TIFF) and four data tables for the analysis (Table 1 Baseline characteristics_data.CVS, Table 2 Equivalence test of VO2max between 2MMT and 6MWT_data.CVS, Table 3 Mean and standard deviation of 6MWT and 2MMT_data.CVS, and Table 4 Comparison of secondary outcomes between 6MWT and 2MMT_data.CVS).

    Description

    Fig 1 CONSORT diagram of the study (Figure)

    Materials and Methods

    The trial protocol and supporting Consolidated Standards of Reporting Trials (CONSORT) checklist are available as supporting information (S1 File CONSORT Checklist) and the CO...
