SQL-Eval is an open-source PostgreSQL evaluation dataset released by Defog, constructed on the basis of Spider. The original repository can be found at https://github.com/defog-ai/sql-eval. Our evaluation methodology is more stringent, as it scores execution accuracy by comparing the results of each predicted SQL query against a single ground-truth SQL query.
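For illustration, the sketch below shows one way an execution-accuracy check against a single ground-truth query can be implemented. It is a minimal sketch, assuming a PostgreSQL connection via psycopg2 and pandas; the function name, DSN, and comparison rules are illustrative and not the SQL-Eval harness itself.

```python
# Minimal sketch of execution-accuracy checking against a single ground-truth
# query. The DSN and helper name are illustrative assumptions, not part of the
# SQL-Eval harness.
import pandas as pd
import psycopg2

def results_match(predicted_sql: str, gold_sql: str, dsn: str) -> bool:
    """Run both queries and compare their result sets, ignoring row order."""
    with psycopg2.connect(dsn) as conn:
        pred = pd.read_sql_query(predicted_sql, conn)
        gold = pd.read_sql_query(gold_sql, conn)
    if pred.shape != gold.shape:
        return False
    # Compare cell values only; row order and column names are ignored.
    pred_rows = sorted(pred.itertuples(index=False, name=None))
    gold_rows = sorted(gold.itertuples(index=False, name=None))
    return pred_rows == gold_rows

# Usage sketch: execution accuracy over (predicted, gold) pairs.
# accuracy = sum(results_match(p, g, dsn) for p, g in pairs) / len(pairs)
```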
https://dataintelo.com/privacy-and-policy
In 2023, the global market size for Execution Management Systems (EMS) was valued at approximately $3.5 billion, and it is projected to reach around $10.2 billion by 2032, growing at a robust CAGR of 12.5%. The primary growth factors driving this expansion include the increasing need for automation in business processes, the rising complexity of trading operations, and the growing demand for real-time data analytics and decision-making tools.
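As a quick arithmetic check on the quoted figures (an illustration added here, not part of the report), the implied compound annual growth rate over the nine years from 2023 to 2032 is

\[ \text{CAGR} = \left( \frac{10.2}{3.5} \right)^{1/9} - 1 \approx 0.126, \]

i.e. roughly 12.6% per year, consistent with the stated CAGR of about 12.5%.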
One of the key growth drivers for the EMS market is the escalating complexity of trading operations in financial markets. As financial instruments and trading strategies become more sophisticated, there is a burgeoning need for advanced systems that can manage and execute trades with high efficiency and accuracy. Execution Management Systems provide the necessary infrastructure to handle large volumes of trades, optimize execution, and comply with regulatory requirements, thus making them indispensable for modern trading environments.
Another significant factor contributing to the growth of the EMS market is the increasing adoption of automation and artificial intelligence (AI) in various industries. Businesses are progressively leveraging AI and machine learning algorithms to analyze vast amounts of data and make informed decisions. Execution Management Systems, equipped with advanced AI capabilities, enable organizations to automate routine tasks, thereby increasing operational efficiency and reducing the likelihood of human errors. This trend is particularly pronounced in sectors like finance, healthcare, and manufacturing, where precision and speed are critical.
The surge in the demand for real-time data analytics is also a pivotal growth factor for the EMS market. Companies are increasingly relying on real-time data to make strategic business decisions. Execution Management Systems provide a platform for the seamless integration of real-time data from various sources, offering comprehensive analytics and insights. This capability is essential for industries such as retail and telecommunications, where timely and accurate data can significantly impact customer satisfaction and operational efficiency.
In the context of the manufacturing industry, the integration of a Manufacturing Execution System (MES) is becoming increasingly vital. MES solutions provide manufacturers with real-time insights into production processes, enabling them to optimize operations, enhance product quality, and reduce production costs. By bridging the gap between enterprise resource planning (ERP) systems and the shop floor, MES facilitates seamless communication and data exchange, ensuring that production schedules are adhered to and resources are utilized efficiently. This level of integration is crucial for manufacturers aiming to remain competitive in a rapidly evolving market landscape.
Geographically, North America holds a dominant position in the EMS market, driven by the presence of major financial institutions and advanced technological infrastructure. The region is followed by Europe and Asia Pacific, where the market is growing rapidly due to increasing digitalization and the adoption of advanced trading systems. The Middle East & Africa and Latin America are also witnessing steady growth, albeit at a slower pace, as these regions continue to develop their technological capabilities and infrastructure.
The EMS market is segmented into software and services based on components. The software segment holds a substantial share of the market, driven by its critical role in managing and optimizing trade execution processes. Advanced EMS software solutions offer a range of functionalities, including order routing, trade execution, performance analysis, and compliance monitoring. These capabilities are essential for financial institutions that aim to enhance trading efficiency and comply with stringent regulatory requirements. The continuous advancements in software technology, such as the integration of AI and machine learning, further propel the growth of this segment.
On the other hand, the services segment, although smaller than the software segment, is gaining traction due to the growing need for professional services such as consulting, implementation, and maintenance. As businesses increasingly adopt execution management systems, they require expert guidance to ensure seamless integration with existing infrastructure.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Appendix, implementation, benchmark programs, and experimental data for the paper currently in submission.
1. dynaboost.zip: Main DynaBoost implementation
2. bingo-ci-exp.tgz: Benchmarks + Sparrow output
3. dynaout.zip: Output for DynaBoost
4. dynaout-sample.zip: DynaBoost output for Figure 7
5. workspace.zip: Instrumented programs and testing logs
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The uploaded data set was generated by the fuzzy model published in T. Galli, F. Chiclana, and F. Siewe. Genetic algorithm-based fuzzy inference system for describing execution tracing quality. Mathematics, 9(21), 2021. ISSN 2227-7390. doi: https://doi.org/10.3390/math9212822. URL https://www.mdpi.com/2571-5577/4/1/20.
The goal of the data generation is to make the published model available in the form of data points in a 5D space, which facilitates the construction of simpler models to approximate the original model. The columns of the .csv file correspond to the quality properties of execution tracing: (1) accuracy, (2) legibility, (3) implementation, and (4) security, while column (5) contains the execution tracing quality derived from the fuzzy model. The indices in brackets give the column indices in the .csv file.
All variables lie in the continuous range [0, 100], where 100 means the best possible quality value and 0 the complete lack of quality or of the given quality property. During data generation, each of the 4 inputs was varied from 0 to 100 inclusive in steps of 5, giving 21 values per input and 21^4 = 194,481 input combinations, and the model's output was collected for each combination.
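The sketch below shows how such a grid can be regenerated. It is a minimal sketch; the `tracing_quality` function is a labeled stand-in, not the published fuzzy inference model, and the output file name is an assumption.

```python
# Sketch of regenerating the 21^4 = 194,481-row grid described above.
import csv
from itertools import product

def tracing_quality(accuracy, legibility, implementation, security):
    # Stand-in only: the real value comes from the genetic-algorithm-based
    # fuzzy inference system of Galli, Chiclana, and Siewe (2021).
    return min(accuracy, legibility, implementation, security)

levels = range(0, 101, 5)  # 0, 5, ..., 100 -> 21 values per input
with open("tracing_quality_grid.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["accuracy", "legibility", "implementation",
                     "security", "execution_tracing_quality"])
    for acc, leg, impl, sec in product(levels, repeat=4):
        writer.writerow([acc, leg, impl, sec,
                         tracing_quality(acc, leg, impl, sec)])
```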
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Comparison of error values and execution time of Chicago, New York, and Lahore datasets for monthly crime prediction using BiLSTM.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Abstract: Current architectures provide many control knobs for reducing the power consumption of applications, such as reducing the number of cores used or scaling down their frequency. However, choosing the right values for these knobs in order to satisfy requirements on performance and/or power consumption is a complex task, and trying all possible combinations of these values is infeasible since it would require too much time. For this reason, there is a need for techniques that allow an accurate estimation of the performance and power consumption of an application when a specific configuration of the control knob values is used. Usually, this is done by executing the application with different configurations and using this information to predict its behaviour when the knob values are changed. However, since this is a time-consuming process, we would like to execute the application in as few configurations as possible. In this work, we consider as control knobs the number of cores used by the application and the frequency of those cores. We show that, on most Parsec benchmark programs, by executing the application in 1% of the total possible configurations and applying a multiple linear regression model, we are able to achieve an average accuracy of 96% in predicting its execution time and power consumption in all the other possible knob combinations.
This dataset includes the raw data of the experiments as well as the scripts used to plot them.
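As an illustration of the modelling approach described in the abstract, the sketch below fits a multiple linear regression on a small sample of configurations and predicts execution time for the rest. It is a hedged sketch, not the authors' scripts: the CSV file, column names, and error metric are assumptions.

```python
# Sketch: fit a linear model on ~1% of (cores, frequency) configurations and
# predict execution time for all remaining configurations.
# File name and column names are illustrative assumptions.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("profiling_runs.csv")        # columns: cores, frequency_mhz, exec_time_s
train = df.sample(frac=0.01, random_state=0)  # ~1% of all configurations
test = df.drop(train.index)

model = LinearRegression()
model.fit(train[["cores", "frequency_mhz"]], train["exec_time_s"])
pred = model.predict(test[["cores", "frequency_mhz"]])

# Mean absolute percentage error as a simple accuracy proxy.
mape = ((pred - test["exec_time_s"]).abs() / test["exec_time_s"]).mean()
print(f"Prediction accuracy ~ {100 * (1 - mape):.1f}%")
```

The same fit can be repeated with power consumption as the target; the paper reports an average accuracy of about 96% for both targets.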
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background: The use of Artificial Intelligence (AI) is rising exponentially in the healthcare sector. This change influences various domains of early identification, diagnosis, and treatment of diseases.
Purpose: This study examines the integration of AI in healthcare, focusing on its transformative potential in diagnostics and treatment, and on the challenges and methodologies shaping its future development.
Methods: The review included 68 academic studies retrieved from different databases (WOS, Scopus, and PubMed) between January 2020 and April 2024. After careful review and data analysis, AI methodologies, benefits, and challenges were summarized.
Results: The number of studies showed a steady rise from 2020 to 2023. Most of them (92.1%) were the result of collaborative work with international universities. The majority (66.7%) were published in top-tier (Q1) journals, and 40% were cited 2–10 times. The results show that AI tools such as deep learning and machine learning methods continue to significantly improve the accuracy and timely execution of medical processes. Benefits were discussed from both the organizational and the patient perspective in the categories of diagnosis, treatment, consultation, and health monitoring of diseases. Despite these benefits, challenges remain, related to data integration, errors in data processing and decision making, and patient safety.
Conclusion: The article examines the present status of AI in medical applications and explores its potential future applications. The findings of this review are useful for healthcare professionals seeking deeper knowledge of the use of medical AI from the design to the implementation stage. However, a thorough assessment is essential to gather more insights into whether AI benefits outweigh its risks. Additionally, ethical and privacy issues need careful consideration.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Execution time (seconds) of machine learning models.
Salient Features of Dentists Email Addresses
So don't look for excuses for failing at global marketing campaigns or at reaching targeted medical practitioners and healthcare specialists. With our Dentists Email Leads, you will seldom have a reason not to succeed. So make haste and take action today!
How Can Our Dentists Data Help You to Market to Dentists?
We provide a variety of methods for marketing your dental appliances or products to the top-rated dentists in the United States. Take a glance at some of the available channels:
• Email blast
• Marketing viability
• Test campaigns
• Direct mail
• Sales leads
• Drift campaigns
• ABM campaigns
• Product launches
• B2B marketing
Data Sources
The contact details of your targeted healthcare professionals are compiled from highly credible resources like:
• Websites
• Medical seminars
• Medical records
• Trade shows
• Medical conferences
What's in it for you? By choosing us, you gain the following advantages:
• Locate, target, and prospect leads from 170+ countries
• Design and execute ABM and multi-channel campaigns
• Seamless and smooth pre- and post-sale customer service
• Connect with old leads and build fruitful customer relationships
• Analyze the market for product development and sales campaigns
• Boost sales and ROI with increased customer acquisition and retention
Our security compliance
We comply with globally recognized data laws such as GDPR, CCPA, ACMA, EDPS, CAN-SPAM, and Anti CAN-SPAM to ensure the privacy and security of our database. We engage certified auditors to validate our security and privacy practices; the certificates they issue attest to our compliance.
Our USPs: what makes us your ideal choice?
At DataCaptive™, we strive consistently to improve our services and cater to the needs of businesses around the world while keeping up with industry trends.
• Elaborate data mining from credible sources
• 7-tier verification, including manual quality check
• Strict adherence to global and local data policies
• Guaranteed 95% accuracy or cash-back
• Free sample database available on request
Guaranteed benefits of our Dentists email database!
85% email deliverability and 95% accuracy on other data fields
We understand the importance of data accuracy and employ every avenue to keep our database fresh and updated. We run a multi-step QC process backed by our patented AI and machine learning tools to prevent anomalies in consistency and data precision, and this cycle repeats every 45 days. Although maintaining 100% accuracy is impractical, since data such as email addresses, physical addresses, and phone numbers are subject to change, we guarantee 85% email deliverability and 95% accuracy on other data points.
100% replacement in case of hard bounces
Every data point is meticulously verified and then re-verified to ensure you get the best. Data Accuracy is paramount in successfully penetrating a new market or working within a familiar one. We are committed to precision. However, in an unlikely event where hard bounces or inaccuracies exceed the guaranteed percentage, we offer replacement with immediate effect. If need be, we even offer credits and/or refunds for inaccurate contacts.
Other promised benefits
• Contacts are available for perpetual usage
• The database comprises consent-based opt-in contacts only
• The list is free of duplicate contacts and generic emails
• Round-the-clock customer service assistance
• 360-degree database solutions
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This paper proposes a feature selection method based on a hybrid optimization algorithm that combines the Golden Jackal Optimization (GJO) and Grey Wolf Optimizer (GWO). The primary objective of this method is to create an effective data dimensionality reduction technique for eliminating redundant, irrelevant, and noisy features within high-dimensional datasets. Drawing inspiration from the Chinese idiom “Chai Lang Hu Bao,” hybrid algorithm mechanisms, and cooperative behaviors observed in natural animal populations, we amalgamate the GWO algorithm, the Lagrange interpolation method, and the GJO algorithm to propose the multi-strategy fusion GJO-GWO algorithm. In Case 1, the GJO-GWO algorithm addressed eight complex benchmark functions. In Case 2, GJO-GWO was utilized to tackle ten feature selection problems. Experimental results consistently demonstrate that under identical experimental conditions, whether solving complex benchmark functions or addressing feature selection problems, GJO-GWO exhibits smaller means, lower standard deviations, higher classification accuracy, and reduced execution times. These findings affirm the superior optimization performance, classification accuracy, and stability of the GJO-GWO algorithm.
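Since the abstract describes the hybrid only at a high level, the sketch below illustrates just the standard Grey Wolf Optimizer position update, one of the components being combined; it is not the authors' GJO-GWO algorithm, and the test function and parameters are assumptions.

```python
# Textbook Grey Wolf Optimizer (GWO) step: candidate solutions move toward the
# three best wolves (alpha, beta, delta) found so far.
import numpy as np

def gwo_minimize(f, dim, bounds, n_wolves=30, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    low, high = bounds
    X = rng.uniform(low, high, size=(n_wolves, dim))
    for t in range(n_iter):
        fitness = np.apply_along_axis(f, 1, X)
        order = np.argsort(fitness)
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2 - 2 * t / n_iter  # control parameter decays linearly from 2 to 0
        for i in range(n_wolves):
            X_new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                A = 2 * a * rng.random(dim) - a
                C = 2 * rng.random(dim)
                D = np.abs(C * leader - X[i])
                X_new += leader - A * D
            X[i] = np.clip(X_new / 3.0, low, high)
    fitness = np.apply_along_axis(f, 1, X)
    best = int(np.argmin(fitness))
    return X[best], float(fitness[best])

# Example: minimize the sphere function in 10 dimensions.
best_x, best_f = gwo_minimize(lambda x: float(np.sum(x ** 2)), dim=10, bounds=(-5.0, 5.0))
```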
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Mean uni-modal (train and test on listening data) and cross-modal (train on listening, test on execution) classification accuracy and p-values of the other areas.
Global Email Address & Contact B2B Email Data Solutions: 230M+ verified emails and phone numbers for B2B email outreach. Boost your marketing and sales strategies with Forager.ai's global contact data and email address data. Our comprehensive database offers access to over 230 million verified email addresses, along with phone number data and detailed B2B email and contact information. Whether you're focused on expanding your B2B email outreach or improving lead generation, our solutions provide the tools you need to engage decision-makers and drive success.
Designed to support your data-driven email marketing efforts, Forager.ai delivers valuable insights with email data, phone number data, and contact details for both B2B and B2C audiences. Build meaningful connections and leverage high-quality, verified email data to execute precise and effective outreach strategies.
Core Features of Forager.ai B2B Email Data Solutions:
Targeted B2B Email Data: Gain access to a diverse collection of email addresses that help you execute personalized email campaigns targeting key decision-makers across industries.
Comprehensive Phone Number Data: Enhance your sales and telemarketing strategies with our extensive phone number database, perfect for direct outreach and boosting customer engagement.
B2B and B2C Contact Data: Tailor your messaging with B2B and B2C contact and email address data that allows you to connect effectively with C-suite executives, decision-makers, and key consumer groups.
CEO Contact Information: Unlock direct access to CEO contact details, ideal for high-level networking, partnership building, and executive outreach.
Strategic Applications of Forager.ai Data:
Online Marketing & Campaigns: Utilize our email address data and phone number information to run targeted online marketing campaigns, increasing conversion rates and boosting outreach effectiveness.
Database Enrichment: Improve your sales databases and CRM systems by enriching them with accurate and up-to-date contact data, supporting more informed decision-making.
B2B Lead Generation: Tap into our rich B2B Email data to expand your business networks, refine your outreach efforts, and generate high-quality leads.
Sales Data Amplification: Supercharge your sales strategies by integrating enriched contact data for better targeting and higher sales conversion rates.
Competitive Market Intelligence: Gain valuable insights into your competitors by leveraging our comprehensive contact data to analyze trends and shifts in the market.
Why Forager.ai Stands Out:
Precision & Accuracy: With a 95%+ accuracy rate, Forager.ai ensures that your email data and contact information is always fresh, reliable, and ready to be used for maximum impact.
Global Reach, Local Relevance: Our Email address data solutions cover global markets while allowing you to focus on specific regions, industries, and audience segments tailored to your business needs.
Cost-Effective Solutions: We offer scalable, affordable B2B email data and B2B contact data packages, ensuring you get high-value results without breaking your budget.
Ethical, Compliant Data: We strictly adhere to GDPR guidelines, ensuring that all contact data is ethically sourced and legally compliant, protecting both your business and your customers.
Unlock the Power of Verified Email and Contact Data (Personal and Business) with Forager.ai: Explore the potential of our 230M+ verified email addresses and phone numbers to elevate your B2B email marketing, sales outreach, and data-driven initiatives. Our contact data solutions are tailored to support your lead generation, sales pipeline, and competitive intelligence efforts, giving you the tools to execute more effective and impactful campaigns.
Top Use Cases for Forager.ai Data Solutions:
Lead Generation & B2B Prospecting
Cold B2B Email Outreach
CRM Enrichment & Marketing Automation
Account-Based Marketing (ABM)
Recruiting & Executive Search
Market Research & Competitive Intelligence
Flexible Data Licensing & Access Options:
One-Time Data Files available upon request
24/7 API Access for seamless integration
Monthly & Annual Plans tailored to your needs
API Credits Roll Over with no expiration
Reach out to us today to discover how Forager.ai's high-quality B2B Email data and contact data can transform your outreach strategies and drive greater business success.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
About the Data
Overview of Data
This data set contains execution logs of 12,000 Android applications and machine learning feature matrices constructed from them.
Paper Abstract
With Android being the most widespread mobile platform, protecting it against malicious applications is essential. Android users typically install applications from large remote repositories, which provides ample opportunities for malicious newcomers. In this paper, we evaluate a few techniques for detecting malicious Android applications on a repository level. The techniques perform automatic classification based on tracking system calls while applications are executed in a sandbox environment. We implemented the techniques in the maline tool, and performed extensive empirical evaluation on a suite of around 12,000 applications. The evaluation considers the size and type of inputs used in analyses. We show that simple and relatively small inputs result in an overall detection accuracy of 93% with a 5% benign application classification error, while results are improved to a 96% detection accuracy with up-sampling. This indicates that system-call based techniques are viable to be used in practice. Finally, we show that even simplistic feature choices are effective, suggesting that more heavyweight approaches should be thoroughly (re)evaluated.
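A minimal sketch of how system-call-count feature matrices like those described above can be built and fed to a classifier is shown below. The strace-style log parsing, file layout, and classifier choice are assumptions for illustration, not the maline tool's actual pipeline.

```python
# Sketch: per-application system-call frequency vectors -> feature matrix ->
# off-the-shelf classifier. Parsing and classifier choice are illustrative.
from collections import Counter
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction import DictVectorizer

def syscall_counts(log_path: str) -> Counter:
    """Count system-call names, assuming one strace-style line per call."""
    counts = Counter()
    with open(log_path) as f:
        for line in f:
            name = line.split("(", 1)[0].strip()
            if name:
                counts[name] += 1
    return counts

def build_matrix(log_paths):
    """Turn a list of log files into a sparse (apps x syscalls) count matrix."""
    vectorizer = DictVectorizer(sparse=True)
    X = vectorizer.fit_transform(syscall_counts(p) for p in log_paths)
    return X, vectorizer

# Usage sketch:
# X, vec = build_matrix(log_paths)             # one log per application
# labels = [...]                               # 0 = benign, 1 = malicious
# clf = RandomForestClassifier(n_estimators=200).fit(X, labels)
```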
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Average classification accuracy of different algorithms on datasets.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository contains the data and code necessary to derive the main results of the paper “An optical neural network using less than 1 photon per multiplication”. It includes the datasets generated during the characterization of dot-product accuracy (Figure 2) and the execution of the ONN using different photon budgets (Figure 3), along with the code used to analyze them. In addition, the code for training the neural network model and for collecting data in the experimental setup is also included.
The scripts and data in "Figure 2.zip" ("Figure 3.zip") were used to generate Figure 2 (Figure 3) in the main manuscript.
"ONN-device-control.zip" contains the device control codes in the experiments for data collection, available at https://github.com/mcmahon-lab/ONN-device-control.
"ONN-QAT-SQL.zip" contains the trained model and training scripts (Python 3.8.3, PyTorch version 1.7.0, torchvision version 0.8.1) for the neural network executed in the experiment, available at https://github.com/mcmahon-lab/ONN-QAT-SQL.
For more details, please refer to the README files in the root directory and each separate folder.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Comparison of Chicago district-wise prediction error values for a month and a week using statistical and deep learning methods.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
AI-enabled benefits and challenges in the healthcare setting.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Mean accuracy (%) and back-transformed mean RT data (ms) with 95% confidence intervals (CI) for the secondary task by domain, dual task load, age group and strategy.