License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
We modeled a simultaneous multi-round auction with BPMN models, transformed the latter to Petri nets, and used a model checker to verify whether certain outcomes of the auction are possible or not.
Dataset Characteristics: Tabular
Subject Area: Computer Science
Associated Tasks: Classification, Regression
Instances: 2043
Features: 7
For what purpose was the dataset created? The dataset was created as part of a scientific study. The goal was to find out whether one could replace costly verification of complex process models (here: simultaneous multi-round auctions, as used for auctioning frequency spectra) with predictions of the outcome.
What do the instances in this dataset represent? Each instance represents one verification run. Verification checks whether a particular price is possible for a particular product, and (for only some of the instances) whether a particular bidder might win the product at that price.
Additional Information Our code to prepare the dataset and to make predictions is available here: https://github.com/Jakob-Bach/Analyzing-Auction-Verification
Has Missing Values? No
Title: Analyzing and Predicting Verification of Data-Aware Process Models – a Case Study with Spectrum Auctions
Authors: Elaheh Ordoni, Jakob Bach, Ann-Katrin Fleck (2022)
Journal: Published as a journal article
Verification techniques play an essential role in detecting undesirable behaviors in many applications like spectrum auctions. By verifying an auction design, one can detect the least favorable outcomes, e.g., the lowest revenue of an auctioneer. However, verification may be infeasible in practice, given the vast size of the state space on the one hand and the large number of properties to be verified on the other hand. To overcome this challenge, we leverage machine-learning techniques. In particular, we create a dataset by verifying properties of a spectrum auction first. Second, we use this dataset to analyze and predict outcomes of the auction and characteristics of the verification procedure. To evaluate the usefulness of machine learning in the given scenario, we consider prediction quality and feature importance. In our experiments, we observe that prediction models can capture relationships in our dataset well, though one needs to be careful to obtain a representative and sufficiently large training dataset. While the focus of this article is on a specific verification scenario, our analysis approach is general and can be adapted to other domains.
Citation: Ordoni, Elaheh, Bach, Jakob, and Fleck, Ann-Katrin. (2022). Auction Verification. UCI Machine Learning Repository. https://doi.org/10.24432/C52K6N.
BibTeX:@misc{misc_auction_verification_713,
author = {Ordoni, Elaheh and Bach, Jakob and Fleck, Ann-Katrin},
title = {{Auction Verification}},
year = {2022},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: https://doi.org/10.24432/C52K6N}
}
pip install ucimlrepo
from ucimlrepo import fetch_ucirepo

# fetch the Auction Verification dataset (UCI id 713)
auction_verification = fetch_ucirepo(id=713)
# data (as pandas dataframes)
X = auction_verification.data.features
y = auction_verification.data.targets
# metadata and variable information
print(auction_verification.metadata)
print(auction_verification.variables)
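Since the repository lists classification and regression as associated tasks, a minimal follow-on sketch is shown below. The choice of a random forest, the 80/20 split, and the assumption that the first target column holds the binary verification result are illustrative and not part of the original study.

# Minimal sketch of the associated classification task (continues the snippet above).
# Assumptions: scikit-learn is installed and the features are numeric.
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

target = y.iloc[:, 0]  # assumed to be the binary verification result
X_train, X_test, y_train, y_test = train_test_split(X, target, test_size=0.2, random_state=42)
model = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)
print("Held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))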
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The construction of a robust healthcare information system is fundamental to enhancing countries' capabilities in the surveillance and control of hepatitis B virus (HBV). Making use of China's rapidly expanding primary healthcare system, this innovative approach using big data and machine learning (ML) could help towards the World Health Organization's (WHO) HBV infection elimination goals of reaching 90% diagnosis and treatment rates by 2030. We aimed to develop and validate HBV detection models using routine clinical data to improve the detection of HBV and support the development of effective interventions to mitigate the impact of this disease in China. Relevant data records extracted from the Hospital Information System of the Family Medicine Clinic of the University of Hong Kong-Shenzhen Hospital were structured using state-of-the-art natural language processing (NLP) techniques. Several ML models were used to develop HBV risk assessment models. The performance of the ML models was then interpreted using Shapley values (SHAP) and validated on cohort data randomly divided at a ratio of 2:1 within a five-fold cross-validation framework. The patterns of physical complaints of patients with and without HBV infection were identified by processing 158,988 clinic attendance records. After removing cases without any clinical parameters from the derivation sample (n = 105,992), 27,392 cases were analysed using six modelling methods. A simplified model for HBV using patients' physical complaints and parameters was developed with good discrimination (AUC = 0.78) and calibration (goodness-of-fit test p-value > 0.05). Suspected-case detection models for HBV, showing potential for clinical deployment, have been developed to improve HBV surveillance in the primary care setting in China. This study has developed a suspected-case detection model for HBV, which can facilitate early identification and treatment of HBV in the primary care setting in China, contributing towards the achievement of the WHO's HBV elimination goals. We utilized state-of-the-art natural language processing techniques to structure the data records, leading to the development of a robust healthcare information system that enhances the surveillance and control of HBV in China.
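As a rough illustration of the kind of cross-validated discrimination check reported above (AUC under five-fold cross-validation), a hedged sketch with scikit-learn on synthetic data follows; the logistic-regression model and the synthetic class imbalance are assumptions and do not reproduce the study's pipeline or data.

# Sketch: stratified five-fold cross-validated AUC on synthetic, imbalanced data.
# Logistic regression stands in for the study's six modelling methods.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

X_demo, y_demo = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1], random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
aucs = cross_val_score(LogisticRegression(max_iter=1000), X_demo, y_demo, cv=cv, scoring="roc_auc")
print("Mean AUC over five folds: %.2f" % aucs.mean())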
https://www.marketresearchforecast.com/privacy-policy
The email validation tools market is experiencing robust growth, driven by the increasing need for businesses to maintain clean and accurate email lists for effective marketing campaigns. The rising adoption of email marketing as a primary communication channel, coupled with stricter data privacy regulations like GDPR and CCPA, necessitates the use of tools that ensure email deliverability and prevent bounces. This market, estimated at $500 million in 2025, is projected to grow at a Compound Annual Growth Rate (CAGR) of 15% from 2025 to 2033, reaching approximately $1.5 billion by 2033. This expansion is fueled by the growing sophistication of email validation techniques, including real-time verification, syntax checks, and mailbox monitoring, offering businesses more robust solutions to improve their email marketing ROI. Key market segments include small and medium-sized businesses (SMBs), large enterprises, and email marketing agencies, each exhibiting varying levels of adoption and spending based on their specific needs and email marketing strategies. The competitive landscape is characterized by a mix of established players and emerging startups, offering a range of features and pricing models to cater to diverse customer requirements. The market's growth is, however, subject to factors like increasing costs associated with maintaining data accuracy and the potential for false positives in email verification. The key players in this dynamic market, such as Mailgun, BriteVerify, and similar companies, are continuously innovating to improve accuracy, speed, and integration with other marketing automation platforms. The market's geographical distribution is diverse, with North America and Europe currently holding significant market share due to higher email marketing adoption rates and a robust technological infrastructure. However, Asia-Pacific and other emerging markets are poised for considerable growth in the coming years due to increasing internet penetration and rising adoption of digital marketing techniques. The ongoing evolution of email marketing strategies, the increasing emphasis on data hygiene, and the rise of artificial intelligence in email verification are likely to further shape the trajectory of this market in the years to come, leading to further innovation and growth.
A method of verifying the accuracy or authenticity of alphanumeric magnetic data on a document, wherein the configuration of a pictorial or graphic magnetic reference image in the document is made visible by bringing movable particulate magnetic material into proximity therewith such that the particulate magnetic material takes up a distribution corresponding to the magnetic field of the reference image; and the magnetic image configuration thus revealed is compared with the reference image to identify any significant disconformity suggesting past exposure of the document to a magnetic field capable of altering said magnetic data.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Advances in neuroimaging, genomics, motion tracking, eye-tracking and many other technology-based data collection methods have led to a torrent of high-dimensional datasets, which commonly have a small number of samples because of the intrinsically high cost of data collection involving human participants. High-dimensional data with a small number of samples is of critical importance for identifying biomarkers and conducting feasibility and pilot work; however, it can lead to biased machine learning (ML) performance estimates. Our review of studies which have applied ML to predict autistic from non-autistic individuals showed that small sample size is associated with higher reported classification accuracy. Thus, we investigated whether this bias could be caused by the use of validation methods which do not sufficiently control overfitting. Our simulations show that K-fold Cross-Validation (CV) produces strongly biased performance estimates with small sample sizes, and the bias is still evident with a sample size of 1000. Nested CV and train/test split approaches produce robust and unbiased performance estimates regardless of sample size. We also show that feature selection, if performed on pooled training and testing data, contributes considerably more to the bias than parameter tuning. In addition, the contribution to bias by data dimensionality, hyper-parameter space and number of CV folds was explored, and validation methods were compared with discriminable data. The results suggest how to design robust testing methodologies when working with small datasets and how to interpret the results of other studies based on what validation method was used.
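The leakage effect described above can be made concrete with a small simulation; the sketch below uses pure-noise data and an SVM purely for illustration and is not the study's simulation code.

# Sketch: with pure-noise data, selecting features on the pooled data before
# cross-validation inflates accuracy, while keeping selection inside each fold
# (via a Pipeline) yields the expected chance-level estimate.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_noise = rng.normal(size=(50, 5000))       # small sample, high dimensionality
y_noise = rng.integers(0, 2, size=50)       # labels carry no signal

X_leaky = SelectKBest(f_classif, k=20).fit_transform(X_noise, y_noise)   # selection sees test folds
biased = cross_val_score(SVC(), X_leaky, y_noise, cv=5).mean()

honest_model = make_pipeline(SelectKBest(f_classif, k=20), SVC())
unbiased = cross_val_score(honest_model, X_noise, y_noise, cv=5).mean()  # selection refit per fold

print(f"leaky selection: {biased:.2f}, selection inside CV: {unbiased:.2f}")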
License: Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0), https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
This dataset covers Text Function, Date, and Data Validation.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This brief literature survey groups the (numerical) validation methods and emphasizes the contradictions and confusion concerning bias, variance and predictive performance. A multicriteria decision-making analysis has been made using the sum of absolute ranking differences (SRD), illustrated with five case studies (seven examples). SRD was applied to compare external and cross-validation techniques, indicators of predictive performance, and to select optimal methods to determine the applicability domain (AD). The ordering of model validation methods was in accordance with the statements of the original authors, but these contradict one another, suggesting that any variant of cross-validation can be superior or inferior to other variants depending on the algorithm, data structure and circumstances applied. A simple fivefold cross-validation proved to be superior to the Bayesian Information Criterion in the vast majority of situations. It is simply not sufficient to test a numerical validation method in one situation only, even if it is a well-defined one. SRD, as a preferable multicriteria decision-making algorithm, is suitable for tailoring the techniques for validation, and for the optimal determination of the applicability domain according to the dataset in question.
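The SRD statistic compares each method's ranking of the cases against a reference ranking, often taken as the row-wise average. The following is a minimal sketch of that calculation only, with toy numbers; it omits ties handling and the randomization test used in the published SRD methodology and is not the authors' implementation.

# Minimal sketch of the SRD statistic: rank each column and the reference
# column (here the row-wise mean), then sum the absolute rank differences.
# Smaller SRD means the method is closer to the reference ordering.
import numpy as np
from scipy.stats import rankdata

def srd(values, reference):
    return np.abs(rankdata(values) - rankdata(reference)).sum()

# rows = cases, columns = validation methods being compared (toy numbers)
data = np.array([[0.81, 0.79, 0.60],
                 [0.75, 0.74, 0.72],
                 [0.90, 0.88, 0.65],
                 [0.70, 0.72, 0.71]])
reference = data.mean(axis=1)            # consensus reference ranking
scores = [srd(data[:, j], reference) for j in range(data.shape[1])]
print(scores)                            # the third method deviates most from the consensus here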
A method and apparatus for determining the time behavior of a digital circuit based on a starting assumption is disclosed. Generally, in a formal verification of a digital circuit, the time behavior of a digital circuit is monitored to verify or refute whether formulated properties, which comprise an assumption and an assertion, result as a consequence of a presence of an assumption in the digital circuit. In order to determine the behavior of the digital circuit, the time behavior of the digital circuit is examined from a starting initial state of the digital circuit. A relevant auxiliary property is activated and the assertion of the auxiliary property is added to the digital circuit. The digital circuit is then monitored over a period of time.
License: MIT License, https://opensource.org/licenses/MIT
License information was derived automatically
This dataset contains cropped face images of actors sourced from online film databases and public web resources. The data was collected and processed to support tasks such as face recognition, identification, or verification. Each image has been automatically cropped using Haar cascade classifiers to focus on the facial area, and then filtered for quality and minimum quantity per identity. All images are standardized to 100x100 pixels to ensure uniformity across the dataset. The dataset is approximately balanced, with 3,339 different-person and 3,318 same-person image pairs.
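The cropping and standardization step described above can be reproduced approximately with OpenCV. The cascade file, detection parameters, and file names below are assumptions made for illustration; the dataset authors' exact settings are not stated.

# Sketch of the described preprocessing: detect the face with a Haar cascade,
# crop it, and resize to 100x100 pixels (detection parameters are assumptions).
import cv2

cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_face(path, size=(100, 100)):
    image = cv2.imread(path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                                      # filtered out, as in the quality step above
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # keep the largest detection
    return cv2.resize(image[y:y + h, x:x + w], size)

face = crop_face("actor_frame.jpg")                      # hypothetical input file
if face is not None:
    cv2.imwrite("actor_face_100x100.jpg", face)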
One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation techniques address this problem: given a vector of sensor readings, decide whether sensors have failed and are therefore producing bad data. In this paper we take a probabilistic approach, using Bayesian networks, to diagnosis and sensor validation, and investigate several relevant but slightly different Bayesian network queries. We emphasize that on-board inference can be performed on a compiled model, giving fast and predictable execution times. Our results are illustrated using an electrical power system, and we show that a Bayesian network with over 400 nodes can be compiled into an arithmetic circuit that can correctly answer queries in less than 500 microseconds on average.
Reference: O. J. Mengshoel, A. Darwiche, and S. Uckun, "Sensor Validation using Bayesian Networks." In Proc. of the 9th International Symposium on Artificial Intelligence, Robotics, and Automation in Space (iSAIRAS-08), Los Angeles, CA, 2008.
BibTeX Reference:
@inproceedings{mengshoel08sensor,
author = {Mengshoel, O. J. and Darwiche, A. and Uckun, S.},
title = {Sensor Validation using {Bayesian} Networks},
booktitle = {Proceedings of the 9th International Symposium on Artificial Intelligence, Robotics, and Automation in Space (iSAIRAS-08)},
year = {2008}
}
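As a toy illustration of the kind of Bayesian-network query used for sensor validation, one might pose P(sensor healthy | reading). The two-node structure, the probabilities, and the choice of the pgmpy library below are assumptions for illustration; they are not the 400-node NASA model or its compiled arithmetic circuit.

# Toy sensor-validation query: is the sensor healthy given an implausible reading?
# Structure and numbers are illustrative only. Requires the pgmpy package.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("Health", "Reading")])
cpd_health = TabularCPD("Health", 2, [[0.95], [0.05]])            # states: healthy, failed
cpd_reading = TabularCPD("Reading", 2,                            # states: plausible, implausible
                         [[0.99, 0.30],
                          [0.01, 0.70]],
                         evidence=["Health"], evidence_card=[2])
model.add_cpds(cpd_health, cpd_reading)

infer = VariableElimination(model)
print(infer.query(["Health"], evidence={"Reading": 1}))           # P(Health | implausible reading)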
https://www.wiseguyreports.com/pages/privacy-policy
| BASE YEAR | 2024 |
| HISTORICAL DATA | 2019 - 2023 |
| REGIONS COVERED | North America, Europe, APAC, South America, MEA |
| REPORT COVERAGE | Revenue Forecast, Competitive Landscape, Growth Factors, and Trends |
| MARKET SIZE 2024 | 1042.9 (USD Million) |
| MARKET SIZE 2025 | 1129.5 (USD Million) |
| MARKET SIZE 2035 | 2500.0 (USD Million) |
| SEGMENTS COVERED | Service Type, Deployment Type, End User, Verification Method, Regional |
| COUNTRIES COVERED | US, Canada, Germany, UK, France, Russia, Italy, Spain, Rest of Europe, China, India, Japan, South Korea, Malaysia, Thailand, Indonesia, Rest of APAC, Brazil, Mexico, Argentina, Rest of South America, GCC, South Africa, Rest of MEA |
| KEY MARKET DYNAMICS | growing demand for accurate marketing, rising concerns over email fraud, increasing regulations on data privacy, need for enhanced customer engagement, emergence of AI-driven verification solutions |
| MARKET FORECAST UNITS | USD Million |
| KEY COMPANIES PROFILED | NeverBounce, EmailChecker, Debounce, BulkEmailVerifier, VerifyBee, EmailOnDeck, DataValidation, ZeroBounce, ListWise, QuickEmailVerification, MyEmailVerifier, MailboxValidator, BriteVerify, Hunter, EmailListVerify |
| MARKET FORECAST PERIOD | 2025 - 2035 |
| KEY MARKET OPPORTUNITIES | Growth in e-commerce platforms, Increasing focus on data quality, Rising demand for digital marketing, Expansion of cloud-based services, Need for GDPR compliance solutions |
| COMPOUND ANNUAL GROWTH RATE (CAGR) | 8.3% (2025 - 2035) |
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains the PEN-Predictor-Keras-Model as well as the 100 validation data sets.
License: https://catalogue.elra.info/static/from_media/metashare/licences/ELRA_END_USER.pdf
The Multi Modal Verification for Teleservices and Security applications project (M2VTS), running under the European ACTS programme, has produced a database designed to facilitate access control using multimodal identification of human faces. This technique improves recognition efficiency by combining individual modalities (i.e. face and voice). Its relative novelty means that new test material had to be created, since no existing database could offer all the modalities needed. The M2VTS database comprises 37 different faces, with 5 shots of each taken at one-week intervals, or whenever drastic face changes occurred in the meantime. During each shot, subjects were asked to count from 0 to 9 in their native language (generally French) and to move their heads from left to right, both with and without glasses. The data were then used to create three sequences: voice, motion and "glasses off". The first sequence can be used for speech verification, 2-D dynamic face verification and speech/lip-movement correlation, while the second and third provide information for 3-D face recognition and may also be used to compare other recognition techniques.
https://www.promarketreports.com/privacy-policy
The Identity Verification Market encompasses a diverse range of sophisticated products, each designed to address specific security and authentication needs. These solutions leverage advanced technologies to ensure accurate and reliable identity verification across various applications and industries.
Biometrics: Biometric authentication utilizes unique biological traits, including fingerprints, facial recognition, iris scanning, voice recognition, and behavioral biometrics (typing patterns, gait analysis), offering robust and secure identity verification. The ongoing advancement of biometric technologies ensures high accuracy and resistance to spoofing attempts.
Document Verification: These solutions go beyond simple visual inspection, employing advanced image analysis and data validation techniques to authenticate identity documents such as passports, driver's licenses, national IDs, and other official documentation. This includes verifying document integrity, detecting forgeries, and confirming data consistency across multiple sources.
Facial Recognition: Facial recognition systems utilize sophisticated algorithms to identify and verify individuals based on their facial features. These systems are constantly evolving, incorporating AI and machine learning to improve accuracy and adapt to variations in lighting, age, and expression.
Knowledge-Based Authentication (KBA): While traditionally vulnerable to data breaches, modern KBA solutions utilize dynamic question sets and sophisticated risk assessment techniques to enhance security. These methods are often combined with other authentication factors for improved protection.
Multi-Factor Authentication (MFA): MFA significantly strengthens security by requiring users to provide multiple forms of verification, combining factors like something they know (a password), something they have (a mobile device), and something they are (biometrics). This layered approach significantly reduces the risk of unauthorized access.
Address Verification Systems (AVS): These systems verify the validity and accuracy of provided addresses, helping to prevent fraudulent activities and ensure accurate record-keeping.
Device Fingerprinting: This technology creates a unique identifier for each device, providing an additional layer of security and enabling risk-based authentication.
Recent developments include: July 2020: Experian teamed up with Data Consortium to improve its consumer identity verification services. Through this agreement, Experian clients can better meet Know Your Customer (KYC) and Anti-Money Laundering (AML) regulatory requirements, enroll customers more quickly, and increase anti-fraud safeguards. June 2022: Onfido joined TISA to support the organization's Digital ID program, which encourages reusable identification. Following its swift development of the Digital ID program, which now includes Barclays, Signicat, OneSpan, and Daon, Onfido is the newest member to join TISA.
Key drivers for this market are: Rising Digitalization with Initiatives; Increasing Adoption of BYOD Trends in Enterprises. Potential restraints include privacy concerns and data protection regulations, such as the General Data Protection Regulation (GDPR) in Europe. Notable trends are: Growing shift toward mobile-based identity verification solutions.
We used this dataset to verify and validate functions in the USEPA National Stormwater Calculator, and then applied field data and commonly-available datasets to illustrate calibration techniques and uncertainty evaluation. This dataset is associated with the following publication: Schifman, L., M. Tryby, J. Berner, and W. Shuster. Managing Uncertainty in Runoff Estimation with the U.S. Environmental Protection Agency National Stormwater Calculator. JOURNAL OF THE AMERICAN WATER RESOURCES ASSOCIATION. American Water Resources Association, Middleburg, VA, USA, 54(1): 148-159, (2018).
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains the programs on which the Combined Approach was evaluated.
The Java source code of each evaluated program is in the "src" folder.
The compiled jar file of each evaluated program is in the "testdata" folder.
The .joak file contains the annotated sources and sinks for each evaluated program.
Note that you may not be able to load the .joak files directly into the Combined Approach program, as the various paths in the .joak files need to be fixed first.
The purpose of the .joak files in this repository is to document the sources and sinks.
You can also generate new .joak files with the Combined Approach application.
This data package contains information on Structured Product Labeling (SPL) Terminology for SPL validation procedures and information on performing SPL validations.
Software Health Management (SWHM) is a new field that is concerned with the development of tools and technologies to enable automated detection, diagnosis, prediction, and mitigation of adverse events due to software anomalies. Significant effort has been expended in the last several decades in the development of verification and validation methods for software-intensive systems, but it is becoming increasingly apparent that this is not enough to guarantee that a complex software system meets all safety and reliability requirements. Modern software systems can exhibit a variety of failure modes which can go undetected in a verification and validation process. While standard techniques for error handling, fault detection and isolation can have significant benefits for many systems, it is becoming increasingly evident that new technologies and methods are necessary to detect, diagnose, predict, and then mitigate adverse events due to software that has already undergone significant verification and validation procedures. These software faults often arise due to the interaction between the software and the operating environment. Unanticipated environmental changes lead to software anomalies that may have significant impact on the overall success of the mission. Because software is ubiquitous, it is not sufficient that errors are detected only after they occur. Rather, software must be instrumented and monitored for failures before they happen. This prognostic capability will yield safer and more dependable systems for the future. This paper addresses the motivation, needs, and requirements of software health management as a new discipline. Published in the Proceedings of the IEEE Conference on Space Mission Challenges for Information Technology, Palo Alto, CA, August 2011.
https://www.verifiedmarketresearch.com/privacy-policy/
Email Verification Market size was valued at USD 5.243 Billion in 2023 and is projected to reach USD 9.849 Billion by 2031, growing at a CAGR of 8.2% during the forecast period 2024-2031.
Email Verification Market: Definition/ Overview
Email verification is a crucial process that confirms the existence, authenticity, and deliverability of an email address. It involves procedures such as syntax validation, domain validation, and mailbox validation. Email verification can be automated or manual and is used in various applications such as email marketing, account registration, customer communication, and fraud prevention.
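The three checks named above can be sketched in a few lines. The regular expression below is deliberately simplified, the MX lookup assumes the dnspython package, and true mailbox validation (an SMTP dialogue with the receiving server) is omitted; none of this represents any particular vendor's implementation.

# Sketch of the first two verification layers described above: a coarse syntax
# check and an MX-record (domain) check. Requires the dnspython package.
import re
import dns.exception
import dns.resolver

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")   # intentionally simplified

def has_valid_syntax(address: str) -> bool:
    return EMAIL_RE.match(address) is not None

def has_mail_server(address: str) -> bool:
    domain = address.rsplit("@", 1)[-1]
    try:
        return len(dns.resolver.resolve(domain, "MX")) > 0
    except dns.exception.DNSException:                  # no domain, no MX record, or timeout
        return False

address = "user@example.com"                            # hypothetical address
print(has_valid_syntax(address) and has_mail_server(address))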
As technology advances, email verification processes become more sophisticated, leading to higher accuracy in identifying valid email addresses and reducing false positives. AI-powered algorithms can analyze patterns and behaviors to enhance email verification processes, identifying anomalies and potential fraud more effectively.
Future email verification methods may place greater emphasis on data privacy and security, favoring approaches that protect user privacy while ensuring verification accuracy. Blockchain technology offers potential for secure and decentralized email verification systems, reducing reliance on centralized authorities and enhancing trust in the verification process.
License: U.S. Government Works, https://www.usa.gov/government-works
License information was derived automatically
Verification and Validation is a multi-disciplinary activity that encompasses elements of systems engineering, safety, software engineering and test. The elements that go into the V&V of a complex, software-intensive product come out of activities that are performed by all of these disciplines while also spanning the complete system development cycle. As modern systems become more reliant on software-intensive solutions to perform mission- and safety-critical functions, the effort that is required for system certification experiences a corresponding increase. These systems are expected to perform correctly and safely while being flexible and portable enough to go through system refresh cycles and evolvable enough to take on new system functionality throughout the system lifecycle. We propose a method of addressing this challenge with advanced modular safety cases to specify system safety properties and support the V&V of those properties with argument and evidence chains. The modular safety cases make use of formal specification of safety claims and use contracts to formalize the dependencies between the case modules. These cases can be used to form powerful verification and validation arguments for a system that are maintainable and can be used to support incremental V&V techniques.