License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
AcTBeCalf Dataset Description
The AcTBeCalf dataset is a comprehensive dataset designed to support the classification of pre-weaned calf behaviors from accelerometer data. It contains detailed accelerometer readings aligned with annotated behaviors, providing a valuable resource for research in multivariate time-series classification and animal behavior analysis. The dataset includes accelerometer data collected from 30 pre-weaned Holstein Friesian and Jersey calves, housed in group pens at the Teagasc Moorepark Research Farm, Ireland. Each calf was equipped with a 3D accelerometer sensor (AX3, Axivity Ltd, Newcastle, UK) sampling at 25 Hz and attached to a neck collar from one week of age for 13 weeks.
This dataset encompasses 27.4 hours of accelerometer data aligned with calf behaviors, including both prominent behaviors like lying, standing, and running, as well as less frequent behaviors such as grooming, social interaction, and abnormal behaviors.
The dataset consists of a single CSV file with the following columns:
dateTime: Timestamp of the accelerometer reading, sampled at 25 Hz.
calfid: Identification number of the calf (1-30).
accX: Accelerometer reading for the X axis (top-bottom direction)*.
accY: Accelerometer reading for the Y axis (backward-forward direction)*.
accZ: Accelerometer reading for the Z axis (left-right direction)*.
behavior: Annotated behavior based on an ethogram of 23 behaviors.
segId: Segment identification number associated with each accelerometer reading/row, representing all readings of the same behavior segment.
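A minimal Python sketch of loading the CSV and summarizing the behavior segments (the file name used here is an assumption; adjust it to the actual CSV name in the repository):

```python
import pandas as pd

# Load the annotated accelerometer data (file name assumed; adjust to the actual CSV name).
df = pd.read_csv("AcTBeCalf.csv", parse_dates=["dateTime"])

# Inspect the behaviour classes and how many 25 Hz readings each one covers.
print(df["behavior"].value_counts())

# Group readings into behaviour segments using segId, e.g. to compute segment durations in seconds.
seg_durations = df.groupby("segId")["dateTime"].agg(lambda t: (t.max() - t.min()).total_seconds())
print(seg_durations.describe())
```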
Code Files Description
The dataset is accompanied by several code files to facilitate the preprocessing and analysis of the accelerometer data and to support the development and evaluation of machine learning models. The main code files included in the dataset repository are:
accelerometer_time_correction.ipynb: This script corrects the accelerometer time drift, ensuring the alignment of the accelerometer data with the reference time.
shake_pattern_detector.py: This script includes an algorithm to detect shake patterns in the accelerometer signal for aligning the accelerometer time series with reference times.
aligning_accelerometer_data_with_annotations.ipynb: This notebook aligns the accelerometer time series with the annotated behaviors based on timestamps.
manual_inspection_ts_validation.ipynb: This notebook provides a manual inspection process for ensuring the accurate alignment of the accelerometer data with the annotated behaviors.
additional_ts_generation.ipynb: This notebook generates additional time-series data from the original X, Y, and Z accelerometer readings, including Magnitude, ODBA (Overall Dynamic Body Acceleration), VeDBA (Vectorial Dynamic Body Acceleration), pitch, and roll (a sketch of these derivations is shown below, after this list).
genSplit.py: This script provides the logic used for the generalized subject separation for machine learning model training, validation and testing.
active_inactive_classification.ipynb: This notebook details the process of classifying behaviors into active and inactive categories using a RandomForest model, achieving a balanced accuracy of 92%.
four_behv_classification.ipynb: This notebook employs the mini-ROCKET feature derivation mechanism and a RidgeClassifierCV to classify behaviors into four categories: drinking milk, lying, running, and other, achieving a balanced accuracy of 84%.
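For illustration, a hedged sketch of how the additional time series described in additional_ts_generation.ipynb could be derived from the X, Y, and Z readings, using common formulations of Magnitude, ODBA, VeDBA, pitch, and roll (the window length and axis conventions are assumptions; the notebook's exact implementation may differ):

```python
import numpy as np
import pandas as pd

def derive_signals(df: pd.DataFrame, window: int = 50) -> pd.DataFrame:
    # window = 50 samples = 2 s at 25 Hz (assumed smoothing window for the static component)
    out = df.copy()
    acc = out[["accX", "accY", "accZ"]]
    # Magnitude of the raw acceleration vector.
    out["Magnitude"] = np.sqrt((acc ** 2).sum(axis=1))
    # Estimate the static (gravitational) component with a running mean, then remove it.
    static = acc.rolling(window, center=True, min_periods=1).mean()
    dyn = acc - static
    # ODBA: sum of absolute dynamic accelerations; VeDBA: vector norm of dynamic accelerations.
    out["ODBA"] = dyn.abs().sum(axis=1)
    out["VeDBA"] = np.sqrt((dyn ** 2).sum(axis=1))
    # Pitch and roll from the static component (axis convention assumed; adjust to the collar mounting).
    out["pitch"] = np.degrees(np.arctan2(static["accX"], np.sqrt(static["accY"] ** 2 + static["accZ"] ** 2)))
    out["roll"] = np.degrees(np.arctan2(static["accY"], static["accZ"]))
    return out
```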
Kindly cite one of the following papers when using this data:
Dissanayake, O., McPherson, S. E., Allyndrée, J., Kennedy, E., Cunningham, P., & Riaboff, L. (2024). Evaluating ROCKET and Catch22 features for calf behaviour classification from accelerometer data using Machine Learning models. arXiv preprint arXiv:2404.18159.
Dissanayake, O., McPherson, S. E., Allyndrée, J., Kennedy, E., Cunningham, P., & Riaboff, L. (2024). Development of a digital tool for monitoring the behaviour of pre-weaned calves using accelerometer neck-collars. arXiv preprint arXiv:2406.17352
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Ground reaction forces are often used by sport scientists and clinicians to analyze the mechanical risk factors of running-related injuries or athletic performance during a running analysis. An interesting ground reaction force-derived variable to track is the maximal vertical instantaneous loading rate (VILR). This impact characteristic is traditionally derived from a fixed force platform, but wearable inertial sensors may now approximate its magnitude while running outside the lab. The time-discrete axial peak tibial acceleration (APTA) has been proposed as a good surrogate that can be measured using wearable accelerometers in the field. This paper explores the hypothesis that applying machine learning to time-continuous data (generated from bilateral tri-axial shin-mounted accelerometers) would result in a more accurate estimation of the VILR. Therefore, the purpose of this study was to evaluate the performance of accelerometer-based predictions of the VILR with various machine learning models trained on data of 93 rearfoot runners. A subject-dependent gradient boosted regression trees (XGB) model provided the most accurate estimates (mean absolute error: 5.39 ± 2.04 BW·s⁻¹, mean absolute percentage error: 6.08%). A similar subject-independent model had a mean absolute error of 12.41 ± 7.90 BW·s⁻¹ (mean absolute percentage error: 11.09%). All of our models correlated more strongly with the VILR than the APTA did (p < 0.01), indicating that multiple 3D acceleration features in a learning setting predict the lab-based impact loading more accurately than the APTA alone.
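As a rough illustration of the modelling approach described above (not the authors' code), a gradient boosted regression trees model can be fit to per-step acceleration features to estimate the VILR. The features and data here are synthetic placeholders, and scikit-learn's GradientBoostingRegressor stands in for the XGB implementation used in the study:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor  # stands in for the XGB model used in the study
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# X: per-step features derived from bilateral 3D shin acceleration (hypothetical placeholder data);
# y: force-platform VILR in BW/s for the same steps (also placeholder values).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 24))
y = rng.normal(loc=75, scale=15, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
model.fit(X_tr, y_tr)
print("MAE (BW/s):", mean_absolute_error(y_te, model.predict(X_te)))
```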
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Aggressive driving behavior is the leading factor in road traffic accidents. As reported by the AAA Foundation for Traffic Safety, 106,727 fatal crashes (55.7 percent of the total) during a recent four-year period involved drivers who committed one or more aggressive driving actions. This raises the question: how can dangerous driving behavior be predicted quickly and accurately?
Aggressive driving includes speeding, sudden braking, and sudden left or right turns. All of these events are reflected in accelerometer and gyroscope data. Since almost everyone nowadays owns a smartphone with a wide variety of sensors, we designed an Android data collector application based on the accelerometer and gyroscope sensors.
The dataset, acquired from WISDM Lab, consists of data collected from 36 different users performing six types of human activities (ascending and descending stairs, sitting, walking, jogging, and standing) for specific periods of time.
These data were acquired from accelerometers, which detect the orientation of the device by measuring acceleration along three dimensions. They were collected at a sampling rate of 20 Hz (one sample every 50 milliseconds).
These time-series data can be used for various tasks, such as human activity recognition.
activity: the activity that the user was carrying out. It can be one of: walking, jogging, ascending stairs, descending stairs, sitting, or standing.
timestamp: generally the phone's uptime in nanoseconds.
x-axis: The acceleration in the x direction as measured by the android phone's accelerometer.
Floating-point values between -20 and 20. A value of 10 = 1g = 9.81 m/s^2, and 0 = no acceleration.
The acceleration recorded includes gravitational acceleration toward the center of the Earth, so when the phone is at rest on a flat surface the vertical axis registers approximately ±10.
y-axis: same as x-axis, but along y axis.
z-axis: same as x-axis, but along z axis.
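A minimal sketch of reading the cleaned CSV and converting the raw readings to g units, using the column names listed above (the file name is an assumption; adjust it to the actual file):

```python
import pandas as pd

# Cleaned WISDM data; file name assumed, column names follow the description above.
df = pd.read_csv("WISDM_cleaned.csv")

# Raw values are roughly in m/s^2 (10 ~= 1 g), so divide by 9.81 to express readings in g.
for axis in ["x-axis", "y-axis", "z-axis"]:
    df[axis + "_g"] = df[axis] / 9.81

# Number of 20 Hz samples available per activity class.
print(df.groupby("activity").size())
```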
The data can be used to perform human activity prediction. I strongly suggest taking a look at this article if you want a reference for performing this task, keeping in mind that the given dataset has already been cleaned. In addition, you can try other feature engineering and selection techniques, and use more complex models for prediction.
Data were fetched from the WISDM dataset website and cleaned by deleting missing values, replacing inconsistent strings, and converting the dataset to CSV.
Jeffrey W. Lockhart, Tony Pulickal, and Gary M. Weiss (2012). "Applications of Mobile Activity Recognition," Proceedings of the ACM UbiComp International Workshop on Situation, Activity, and Goal Awareness, Pittsburgh, PA.
Gary M. Weiss and Jeffrey W. Lockhart (2012). "The Impact of Personalization on Smartphone-Based Activity Recognition," Proceedings of the AAAI-12 Workshop on Activity Context Representation: Techniques and Languages, Toronto, CA.
Jennifer R. Kwapisz, Gary M. Weiss and Samuel A. Moore (2010). "Activity Recognition using Cell Phone Accelerometers," Proceedings of the Fourth International Workshop on Knowledge Discovery from Sensor Data (at KDD-10), Washington DC.
According to our latest research, the global Ground Data Processing Acceleration market size reached USD 6.42 billion in 2024, reflecting the rapid adoption of advanced data processing solutions across critical industries. The market is projected to grow at a robust CAGR of 12.7% from 2025 to 2033, with the market size forecasted to reach USD 18.85 billion by 2033. This strong growth is primarily driven by the increasing demand for real-time analytics, the proliferation of satellite and remote sensing data, and the growing necessity for high-performance computing in earth observation, defense, and commercial applications.
A key growth factor for the Ground Data Processing Acceleration market is the explosive rise in satellite launches and the subsequent surge in data generation. The advent of small satellite constellations and the integration of high-resolution sensors have exponentially increased the volume of raw data transmitted to ground stations. Processing this data efficiently requires advanced acceleration technologies, including specialized hardware, optimized software algorithms, and scalable cloud-based platforms. Organizations in sectors such as earth observation, weather forecasting, and defense are increasingly investing in these solutions to derive actionable insights in near real-time, thereby enhancing mission outcomes, operational efficiency, and decision-making accuracy.
Another significant driver is the growing adoption of artificial intelligence (AI) and machine learning (ML) for automated data analysis and anomaly detection. As satellite and remote sensing data become more complex, traditional processing methods struggle to deliver timely results. The integration of AI/ML with ground data processing acceleration solutions enables automated feature extraction, image classification, and predictive analytics at unprecedented speeds. This not only improves the accuracy of applications such as disaster management and environmental monitoring but also opens new avenues for commercial exploitation, including precision agriculture, resource exploration, and smart city planning.
The market is further propelled by advancements in high-performance computing (HPC) infrastructure and the increasing shift towards hybrid and cloud-based deployment models. Organizations seek scalable, flexible, and cost-effective solutions that can handle fluctuating workloads and diverse data types. Cloud-based processing acceleration platforms offer seamless access to powerful computing resources, facilitating collaboration, data sharing, and integration with other digital ecosystems. This trend is particularly evident in research institutes, commercial enterprises, and government agencies that require agility and scalability for large-scale data processing projects.
From a regional perspective, North America currently dominates the Ground Data Processing Acceleration market, supported by substantial investments in space exploration, defense modernization, and commercial satellite ventures. However, Asia Pacific is emerging as a high-growth region, driven by increasing government initiatives, expanding satellite programs, and the rapid adoption of digital technologies across industries. Europe also holds a significant market share, benefiting from robust research and development activities and strong collaborations among space agencies, academia, and the private sector.
The Component segment of the Ground Data Processing Acceleration market is segmented into hardware, software, and services. Hardware solutions, including specialized processors, field-programmable gate arrays (FPGAs), and high-speed storage systems, play a crucial role in enabling real-time data ingestion, processing, and transmission. These components are engineered to handle massive data throughput and complex computations, making them indispensable for applications requiring low latency and high reliability. As the volume and complexi
Objective: A method to estimate absolute left ventricular (LV) pressure and its maximum rate of rise (LV dP/dtmax) from epicardial accelerometer data and machine learning is proposed.
Methods: Five acute experiments were performed on pigs. Custom-made accelerometers were sutured epicardially onto the right ventricle, LV, and right atrium. Different pacing configurations and contractility modulations, using isoflurane and dobutamine infusions, were performed to create a wide variety of hemodynamic conditions. Automated beat-by-beat analysis was performed on the acceleration signals to extract amplitude, time, and energy-based features. For each sensing location, bootstrap aggregated classification tree ensembles were trained on these features to estimate absolute maximum LV pressure (LVPmax) and its maximum rate of rise (LV dP/dtmax).
Results: With a dataset of over 6,000 beats, the algorithm narrowed the selection of 17 predefined features to the most suitable 3 for each sensor location. Validation tests showed minimal estimation accuracies of 93% and 86% for LVPmax at estimation intervals of 20 and 10 mmHg, respectively. Models estimating LV dP/dtmax achieved minimal accuracies of 93% and 87% at estimation intervals of 100 and 200 mmHg/s, respectively. Accuracies were similar for all sensor locations used.
Conclusion: Under pre-clinical conditions, the developed estimation method, employing epicardial accelerometers in conjunction with machine learning, can reliably estimate absolute LV pressure and its first derivative.
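A hedged sketch of the kind of bootstrap-aggregated classification tree ensemble described above, with LVPmax binned into fixed-width estimation intervals (the features and pressure values below are synthetic placeholders; this is not the study's code):

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# X: per-beat amplitude/time/energy features from one epicardial accelerometer (synthetic placeholders);
# y: LVPmax binned into 20 mmHg estimation intervals, as in the text.
rng = np.random.default_rng(1)
X = rng.normal(size=(6000, 17))            # 17 predefined features per beat
lvp_max = rng.uniform(60, 160, size=6000)  # hypothetical LVPmax values in mmHg
y = (lvp_max // 20).astype(int)            # class label = 20 mmHg pressure bin

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
ensemble = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=1)
ensemble.fit(X_tr, y_tr)
print("Bin accuracy:", ensemble.score(X_te, y_te))
```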
License: Open Database License (ODbL) v1.0, https://www.opendatacommons.org/licenses/odbl/1.0/
License information was derived automatically
This dataset includes time-series data generated by accelerometer and gyroscope sensors (attitude, gravity, userAcceleration, and rotationRate). It was collected with an iPhone 6s kept in the participant's front pocket using SensingKit, which collects information from the Core Motion framework on iOS devices. A total of 24 participants of varied gender, age, weight, and height performed 6 activities in 15 trials in the same environment and conditions: downstairs, upstairs, walking, jogging, sitting, and standing. With this dataset, we aim to look for personal attribute fingerprints in time-series of sensor data, i.e. attribute-specific patterns that can be used to infer the gender or personality of the data subjects in addition to their activities.
[A simple code example for importing the dataset and getting started][1]
For each participant, the study commenced by collecting their demographic (age and gender) and physical (height and weight) information. We then provided them with a dedicated smartphone (iPhone 6) and asked them to store it in their trousers' front pocket during the experiment. All participants were asked to wear flat shoes. We then asked them to perform 6 different activities (walking downstairs, walking upstairs, sitting, standing, walking, and jogging) around Queen Mary University of London's Mile End campus. For each trial, the researcher set up the phone, gave it to the participant, and then stood in a corner. The participant pressed the start button of the Crowdsense app, put the phone in their trousers' front pocket, and performed the specified activity. We asked them to behave as naturally as possible, as in their everyday life. At the end of each trial, they took the phone out of their pocket and pressed the stop button. The exact places and routes for all the activities are shown in the illustrative map in the following Figure.
As we can see, there are 15 trials:
There are 24 data subjects. The A_DeviceMotion_data folder contains time-series collected by both Accelerometer and Gyroscope for all 15 trials. For every trial we have a multivariate time-series. Thus, we have time-series with 12 features: attitude.roll, attitude.pitch, attitude.yaw, gravity.x, gravity.y, gravity.z, rotationRate.x, rotationRate.y, rotationRate.z, userAcceleration.x, userAcceleration.y, userAcceleration.z.
The accelerometer measures the sum of two acceleration vectors: gravity and user acceleration. User acceleration is the acceleration that the user imparts to the device. Because Core Motion is able to track a device’s attitude using both the gyroscope and the accelerometer, it can differentiate between gravity and user acceleration. A CMDeviceMotion object provides both measurements in the gravity and userAcceleration properties. ([More info][3])
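A minimal sketch of loading one trial and reconstructing the total sensed acceleration from the gravity and userAcceleration components (the folder/file path is an assumption about the dataset's layout; adjust it to the actual trial files):

```python
import pandas as pd

# Path assumed to follow an <activity>_<trial>/sub_<participant>.csv layout inside A_DeviceMotion_data.
trial = pd.read_csv("A_DeviceMotion_data/dws_1/sub_1.csv", index_col=0)

# Core Motion splits the measured acceleration into gravity and user acceleration (both in g).
# Summing them recovers the total acceleration sensed by the accelerometer.
for axis in ["x", "y", "z"]:
    trial[f"totalAcc.{axis}"] = trial[f"gravity.{axis}"] + trial[f"userAcceleration.{axis}"]

print(trial[["attitude.roll", "attitude.pitch", "attitude.yaw",
             "totalAcc.x", "totalAcc.y", "totalAcc.z"]].head())
```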
There are 6 different labels:
dws: downstairs
ups: upstairs
sit: sitting
std: standing
wlk: walking
jog: jogging
If you use this dataset, please cite the following paper:
@inproceedings{Malekzadeh:2019:MSD:3302505.3310068,
author = {Malekzadeh, Mohammad and Clegg, Richard G. and Cavallaro, Andrea and Haddadi, Hamed},
title = {Mobile Sensor Data Anonymization},
booktitle = {Proceedings of the International Conference on Internet of Things Design and Implementation},
series = {IoTDI '19},
year = {2019},
isbn = {978-1-4503-6283-2},
location = {Montreal, Quebec, Canada},
pages = {49--58},
numpages = {10},
doi = {10.1145/3302505.3310068},
acmid = {3310068},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {adversarial training, deep learning, edge computing, sensor data privacy, time series analysis},
}
Or
@inproceedings{Malekzadeh:2018:PSD:3195258.3195260,
author = {Malekzadeh, Mohammad and Clegg, Richard G. and Cavallaro, Andrea and Haddadi, Hamed},
title = {Protecting Sensory Data Against Sensitive Inferences},
booktitle = {Proceedings of the 1st Workshop on Privacy by Design in Distributed Systems},
series = {W-P2DS'18},
year = {2018},
isbn = {978-1-4503-5654-1},
location = {Porto, Portugal},
pages = {2:1--2:6},
articleno = {2},
numpages = {6},
url = {http://doi.acm.org/10.1145/3195258.3195...
The global market for Processors for AI Acceleration is estimated to reach a value of XXX million by 2033, with a CAGR of XX% over the forecast period 2025-2033. The market growth is primarily fueled by the increasing adoption of Artificial Intelligence (AI) in various industries, such as healthcare, automotive, and manufacturing. The growing demand for real-time analytics and machine learning applications is also driving the need for processors that can handle complex algorithms and large datasets efficiently. Key market trends include the increasing adoption of edge computing, the rise of cloud-based AI services, and the development of specialized AI accelerators. The increasing popularity of self-driving cars, smart homes, and wearable devices is also driving the demand for Processors for AI Acceleration. The market is expected to continue to grow in the coming years as more industries adopt AI technology and as the demand for real-time analytics increases. North America is the largest regional market, followed by Asia Pacific and Europe. The key players in the market include Intel, NXP Semiconductors, XMOS, Texas Instruments, Nvidia, Kneron Inc, Gyrfalcon Technology Inc, Eta Compute Inc, Syntiant Corp, and GreenWaves Technologies.
According to our latest research, the global on-instrument secondary analysis acceleration market size reached USD 1.12 billion in 2024, reflecting robust growth driven by technological advancements and the rising adoption of high-throughput sequencing platforms. The market is projected to expand at a CAGR of 13.7% from 2025 to 2033, reaching a forecasted value of USD 3.48 billion by 2033. The primary growth factor is the increasing demand for rapid, accurate, and scalable data analysis solutions in genomics and related life sciences fields, as laboratories and research institutions prioritize efficiency and precision in large-scale omics studies.
The growth trajectory of the on-instrument secondary analysis acceleration market is strongly influenced by the surging volume of next-generation sequencing (NGS) data generated worldwide. As sequencing costs continue to decrease and throughput increases, the bottleneck has shifted from data generation to data interpretation and analysis. This shift has created an urgent need for advanced hardware accelerators and software solutions capable of performing real-time or near real-time secondary data analysis directly on sequencing instruments. The integration of these accelerators reduces turnaround times, minimizes data transfer bottlenecks, and enables researchers and clinicians to make faster, more informed decisions, thereby driving widespread adoption across both clinical and research applications.
Another significant growth factor is the ongoing convergence of artificial intelligence, machine learning, and high-performance computing with omics data analysis. The implementation of AI-driven algorithms and parallel processing architectures within on-instrument acceleration platforms allows for the efficient handling of increasingly complex and voluminous datasets. This convergence not only enhances the accuracy of variant calling, alignment, and quantification tasks but also supports the development of new analytical pipelines tailored to emerging applications such as single-cell genomics, spatial transcriptomics, and multi-omics integration. As a result, both established institutions and emerging biotech startups are investing heavily in upgrading their analytical infrastructure, further fueling market expansion.
Additionally, the growing emphasis on personalized medicine and precision healthcare is catalyzing demand for on-instrument secondary analysis acceleration solutions. Healthcare providers, pharmaceutical companies, and diagnostic laboratories are seeking to leverage genomic and proteomic insights to guide treatment decisions, drug development, and patient stratification. The ability to accelerate secondary analysis workflows directly on sequencing or mass spectrometry instruments is critical for achieving rapid turnaround times in clinical settings, particularly for applications such as oncology, rare disease diagnosis, and infectious disease surveillance. Regulatory support for clinical genomics and increasing investments in translational research are expected to further boost market growth in the coming years.
From a regional perspective, North America currently dominates the on-instrument secondary analysis acceleration market, accounting for the largest share in 2024, followed closely by Europe and the Asia Pacific. The robust presence of leading sequencing technology providers, high R&D expenditure, and favorable reimbursement frameworks in the United States and Canada have contributed to rapid market adoption. Europe is also witnessing substantial growth, supported by strong government initiatives and collaborative research networks. Meanwhile, the Asia Pacific region is emerging as a lucrative market, driven by expanding genomics research programs, increasing healthcare investments, and a growing focus on precision medicine in countries such as China, Japan, and India.
The product landscape of the on-instrument secondary analysis a
According to our latest research, the global On-Instrument Secondary Analysis Acceleration market size reached USD 1.38 billion in 2024, driven by the increasing demand for rapid and accurate data analysis in omics research. The market is expected to grow at a robust CAGR of 13.4% from 2025 to 2033, projecting a value of USD 4.14 billion by 2033. This growth is primarily fueled by the surge in next-generation sequencing (NGS) adoption, the expansion of multi-omics applications, and the need for streamlined data processing directly at the instrument level, which enables faster turnaround times and improved clinical and research outcomes.
One of the primary growth drivers for the On-Instrument Secondary Analysis Acceleration market is the exponential increase in genomic and multi-omics data generated by advanced sequencing platforms. Modern sequencing instruments, including those used for genomics, proteomics, and metabolomics, are capable of producing terabytes of data per run. Traditional data analysis pipelines, which often require manual data transfer and off-instrument computation, are becoming bottlenecks in high-throughput environments. On-instrument acceleration solutions, integrating hardware and software directly into sequencing devices, enable real-time or near-real-time analysis, significantly reducing the time from data acquisition to actionable insights. This efficiency is crucial in clinical diagnostics, infectious disease surveillance, and personalized medicine, where rapid turnaround can directly impact patient outcomes.
Another significant growth factor is the increasing integration of artificial intelligence (AI) and machine learning (ML) algorithms into on-instrument analysis platforms. These technologies have revolutionized secondary analysis by automating variant calling, error correction, and data interpretation. AI-powered accelerators can process complex datasets with higher accuracy and speed compared to traditional computational methods. Furthermore, the ongoing miniaturization and cost reduction of high-performance computing components, such as GPUs and FPGAs, have made it feasible to embed powerful analytics directly within sequencing instruments. This democratization of advanced analytics is expanding adoption not only in large academic centers but also in smaller hospitals and decentralized laboratories, further propelling market growth.
The growing emphasis on precision medicine and translational research is also catalyzing the adoption of on-instrument secondary analysis acceleration solutions. Healthcare providers and pharmaceutical companies are increasingly leveraging omics data to develop targeted therapies and diagnostics. On-instrument acceleration enables more efficient workflow integration, reducing the need for specialized bioinformatics expertise and infrastructure. This is particularly valuable in clinical settings where rapid decision-making is essential. Additionally, regulatory agencies are beginning to recognize the value of integrated analysis for compliance and traceability, which is incentivizing the adoption of validated, on-instrument solutions across regulated environments.
From a regional perspective, North America currently leads the On-Instrument Secondary Analysis Acceleration market due to its advanced healthcare infrastructure, significant investments in genomics research, and strong presence of key market players. Europe follows closely, driven by robust funding for biomedical research and increasing adoption of precision medicine initiatives. The Asia Pacific region is witnessing the fastest growth, attributed to expanding healthcare access, rising investments in biotechnology, and government initiatives to promote omics research. Latin America and the Middle East & Africa are also emerging markets, with growing awareness and adoption of advanced sequencing technologies, albeit at a comparatively slower pace.
The Product Type segment of the On-Instrument Secondary Analysis Acceleration market comprises hardware accelerators, software solutions, and integrated systems. Hardware accelerators, such as GPUs, FPGAs, and ASICs, are designed to perform computationally intensive tasks like sequence alignment, variant calling, and data compression directly within the instrument. These accelerators dramatically reduce analysis time and energy consumption, making them esse
License: Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0), https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
**SUBF Dataset v1.0: Bearing Fault Diagnosis using Vibration Signals**
Description
The SUBF dataset v1.0 has been designed for the analysis and diagnosis of mechanical bearing faults. The mechanical setup consists of a motor, a frame/base, bearings, and a shaft, simulating different machine conditions such as a healthy state, inner race fault, and outer race fault. This dataset aims to facilitate reproducibility and support research in mechanical fault diagnosis and machine condition monitoring.
The dataset is part of the research paper "Aziz, S., Khan, M. U., Faraz, M., & Montes, G. A. (2023). Intelligent bearing faults diagnosis featuring automated relative energy-based empirical mode decomposition and novel cepstral autoregressive features. Measurement, 216, 112871." DOI: https://doi.org/10.1016/j.measurement.2023.112871
The dataset can be used with MATLAB and Python.
Experimental Setup
Motor: A 3-phase AC motor, 0.25 HP, operating at 1440 RPM, 50 Hz frequency, and 440 Volts.
Target Bearings: The left-side bearing was replaced to represent three categories:
- Normal Bearings
- Inner Race Fault Bearings
- Outer Race Fault Bearings
Instrumentation
- Sensor: BeanDevice 2.4 GHz AX-3D, a wireless vibration sensor, was used to record vibration data.
- Recording: Data collected via BeanGateway and stored on a PC.
- Sampling rate: 1000 Hz.
Data Acquisition
- Duration: 18 hours of data collection (6 hours per class).
- Segmenting: Signals were divided into 10-second segments, resulting in 2160 signals for each fault category.
- Classes: Healthy state, inner race fault, and outer race fault.
Dataset Organization
The dataset is structured as follows:
Main Folder: Contains two subfolders for .mat and .csv file formats to accommodate different user preferences.
Subfolder 1: .mat Files
- Healthy: Contains .mat files representing vibration signals for the healthy state.
- Inner Race Fault: Contains .mat files representing vibration signals for bearings with an inner race fault.
- Outer Race Fault: Contains .mat files representing vibration signals for bearings with an outer race fault.
Subfolder 2: .csv Files
- Healthy: Contains .csv files representing vibration signals for the healthy state.
- Inner Race Fault: Contains .csv files representing vibration signals for bearings with an inner race fault.
- Outer Race Fault: Contains .csv files representing vibration signals for bearings with an outer race fault.
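A minimal sketch of loading one 10-second segment from either file format (the file names, the CSV layout, and the variable name inside the .mat file are assumptions; inspect the files to confirm):

```python
import pandas as pd
from scipy.io import loadmat

FS = 1000  # sampling rate in Hz, as stated above

# CSV variant: each file is assumed to hold one 10-second vibration signal (10 000 samples).
csv_signal = pd.read_csv("csv/Healthy/signal_0001.csv", header=None).to_numpy().ravel()
print("duration (s):", csv_signal.size / FS)

# MAT variant: the variable name inside the .mat file is an assumption; list the keys to find it.
mat = loadmat("mat/Healthy/signal_0001.mat")
print([k for k in mat.keys() if not k.startswith("__")])
```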
Applications
This dataset is suitable for tasks such as:
- Fault detection and diagnosis
- Signal processing and feature extraction research
- Development and benchmarking of machine learning and deep learning models
Usage
This dataset can be used for academic research, industrial fault diagnosis applications, and algorithm development. Please cite the following reference when using this dataset: Aziz, S., Khan, M. U., Faraz, M., & Montes, G. A. (2023). Intelligent bearing faults diagnosis featuring automated relative energy-based empirical mode decomposition and novel cepstral autoregressive features. Measurement, 216, 112871. DOI: https://doi.org/10.1016/j.measurement.2023.112871
Licence
This dataset is made publicly available for research purposes. Ensure appropriate citation and credit when using the data.
According to our latest research, the global Adaptive Compute Acceleration Platform market size reached USD 8.42 billion in 2024, registering a robust growth trajectory. The market is projected to expand at a CAGR of 18.7% during the forecast period, reaching approximately USD 43.19 billion by 2033. This significant growth is driven by the soaring demand for high-performance, energy-efficient computing across diverse industries, including artificial intelligence, data centers, and automotive applications. The widespread adoption of cloud computing, increasing data volumes, and the need for real-time analytics are further propelling the market forward as per our comprehensive industry analysis.
One of the primary growth factors for the Adaptive Compute Acceleration Platform market is the exponential rise in data generation and processing requirements across industries. Enterprises are increasingly leveraging big data analytics, artificial intelligence, and machine learning, all of which demand scalable and flexible compute resources. Adaptive Compute Acceleration Platforms (ACAPs) deliver a unique blend of flexibility and high performance, allowing organizations to efficiently process massive datasets in real time. The growing complexity of workloads and the need for rapid data-driven decision-making are compelling organizations to invest in advanced compute acceleration solutions, further fueling market expansion.
Another major driver is the proliferation of edge computing and the Internet of Things (IoT). As businesses seek to process data closer to the source, there is a growing need for platforms that can deliver low-latency, high-throughput compute capabilities at the edge. ACAPs, with their reconfigurable architectures, are uniquely positioned to address these requirements, enabling real-time data processing for applications such as autonomous vehicles, smart factories, and connected healthcare devices. This trend is particularly pronounced in the automotive and industrial segments, where adaptive compute solutions are critical for enabling next-generation functionalities and enhancing operational efficiency.
The rapid advancements in artificial intelligence and high-performance computing are also acting as significant catalysts for the market. ACAPs are increasingly being integrated into AI workflows to accelerate deep learning, neural network inference, and other compute-intensive tasks. Their ability to dynamically adapt to different workloads makes them ideal for heterogeneous computing environments, where multiple types of processing are required. As organizations continue to push the boundaries of AI and machine learning, the demand for adaptive compute acceleration platforms is expected to witness sustained growth, especially in data centers and research institutions.
From a regional perspective, North America currently dominates the Adaptive Compute Acceleration Platform market, owing to the presence of leading technology companies, robust infrastructure, and high adoption rates of advanced computing solutions. Europe and Asia Pacific are also witnessing significant growth, driven by increasing investments in digital transformation, smart manufacturing, and automotive innovation. The Asia Pacific region, in particular, is expected to record the fastest CAGR over the forecast period, fueled by rapid industrialization, expanding IT ecosystems, and government initiatives promoting technological advancements. Meanwhile, Latin America and the Middle East & Africa are gradually emerging as promising markets, supported by growing digitalization and infrastructure development.
The Adaptive Compute Acceleration Platform market by component is segmented into hardware, software, and services, each playing a pivotal role in the overall value chain. Hardware forms the backbone of ACAPs, encompassing field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), and other acceleration modules. These hardware components are engineered to deliver high-throughput, low-latency performance, making them indispensable for compute-intensive applications. The increasing demand for customized, energy-efficient hardware solutions in data centers and edge environments is driving substantial investments in this segment. Major industry players are focusing on developing next-generation hardware that supports dynamic reconfiguration, enab
The Edge Inference Chips and Acceleration Cards market is experiencing robust growth, driven by the increasing demand for real-time data processing and analysis at the edge of the network. This market is projected to reach $15 billion by 2025 and is expected to exhibit a Compound Annual Growth Rate (CAGR) of 25% from 2025 to 2033. This significant growth is fueled by several key factors, including the proliferation of IoT devices, the rise of AI-powered applications in various sectors like automotive, healthcare, and industrial automation, and the need for lower latency and enhanced security in data processing. Major players like Nvidia, Intel, Qualcomm, and AMD are actively investing in research and development to enhance the performance and efficiency of edge inference chips, leading to a competitive landscape characterized by innovation and continuous improvement. The market segmentation reveals a strong demand across various sectors, with automotive and industrial applications leading the charge. Furthermore, advancements in deep learning algorithms and the development of more energy-efficient chips are expected to further accelerate market expansion throughout the forecast period. The restraints to growth primarily involve the high initial investment costs associated with deploying edge inference solutions and the complexity of integrating these solutions into existing infrastructure. However, these challenges are expected to be mitigated by ongoing technological advancements that lead to cost reductions and simplified integration processes. The competitive landscape continues to evolve, with companies focusing on developing specialized chips for specific applications and forging strategic partnerships to expand market reach. The regional distribution shows strong growth across North America, Europe, and Asia-Pacific, reflecting the global adoption of edge computing technologies. Overall, the long-term outlook for the Edge Inference Chips and Acceleration Cards market remains exceptionally positive, promising substantial growth opportunities for stakeholders across the value chain.
The Edge AI Acceleration Card market is experiencing robust growth, driven by the increasing demand for real-time AI processing at the edge of networks. This demand stems from various sectors including manufacturing, healthcare, autonomous vehicles, and smart cities, all requiring immediate, low-latency AI capabilities. The market's expansion is fueled by the proliferation of IoT devices generating massive data volumes, necessitating on-site processing to reduce bandwidth consumption and latency. Furthermore, advancements in AI algorithms, miniaturization of hardware, and the development of more energy-efficient processors are contributing to the market's rapid expansion. Major players like NVIDIA, AMD, and Intel are heavily investing in R&D, fostering competition and innovation within the sector. However, challenges remain, including the high initial cost of implementation and the need for skilled professionals to deploy and manage these systems. Despite these hurdles, the long-term outlook for the Edge AI Acceleration Card market remains incredibly positive, projected for significant growth over the next decade. The market's segmentation is likely diverse, encompassing cards based on different processing architectures (e.g., ARM, x86), power consumption levels (high-power, low-power), and target applications. Companies are focusing on developing specialized solutions to cater to specific needs within various industries. The competitive landscape is highly dynamic, with established players and emerging startups vying for market share. Regional variations in adoption rates are also expected, with regions exhibiting strong technological infrastructure and a high concentration of AI-focused businesses leading the charge. The successful companies in this space will be those that can offer a balance of performance, power efficiency, cost-effectiveness, and ease of integration into existing infrastructure. Future growth hinges on continued technological advancements, the development of industry standards, and the successful addressing of security and data privacy concerns.
| BASE YEAR | 2024 |
| HISTORICAL DATA | 2019 - 2023 |
| REGIONS COVERED | North America, Europe, APAC, South America, MEA |
| REPORT COVERAGE | Revenue Forecast, Competitive Landscape, Growth Factors, and Trends |
| MARKET SIZE 2024 | 2397.5 (USD Million) |
| MARKET SIZE 2025 | 2538.9 (USD Million) |
| MARKET SIZE 2035 | 4500.0 (USD Million) |
| SEGMENTS COVERED | Technology, Application, Industry, Deployment Type, Regional |
| COUNTRIES COVERED | US, Canada, Germany, UK, France, Russia, Italy, Spain, Rest of Europe, China, India, Japan, South Korea, Malaysia, Thailand, Indonesia, Rest of APAC, Brazil, Mexico, Argentina, Rest of South America, GCC, South Africa, Rest of MEA |
| KEY MARKET DYNAMICS | increasing demand for low latency, rising cloud adoption, need for enhanced performance, growing AI and ML applications, expansion of IoT devices |
| MARKET FORECAST UNITS | USD Million |
| KEY COMPANIES PROFILED | Advanced Micro Devices, IBM, Amazon Web Services, Oracle, NVIDIA, Salesforce, Qualcomm, SAP, Intel, Microsoft, Google, Cisco Systems |
| MARKET FORECAST PERIOD | 2025 - 2035 |
| KEY MARKET OPPORTUNITIES | AI-driven acceleration solutions, Increased demand for cloud optimization, Growth in edge computing applications, Expansion in IoT devices integration, Adoption in automotive technology advancements |
| COMPOUND ANNUAL GROWTH RATE (CAGR) | 5.9% (2025 - 2035) |
The data center GPU market is booming, projected to reach $96.5 billion by 2025 and grow at a CAGR of 35.5% through 2033. Fueled by AI, cloud computing, and innovation from NVIDIA, AMD, and Intel, this report reveals key market trends, regional insights, and growth projections.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
(Always use the latest version of the dataset.)
Human Activity Recognition (HAR) refers to the capacity of machines to perceive human actions. This dataset contains information on 18 different activities collected from 90 participants (75 male and 15 female) using smartphone sensors (Accelerometer and Gyroscope). It has 1945 raw activity samples collected directly from the participants, and 20750 subsamples extracted from them. The activities are:
Stand ➞ Standing still (1 min)
Sit ➞ Sitting still (1 min)
Talk-sit ➞ Talking with hand movements while sitting (1 min)
Talk-stand ➞ Talking with hand movements while standing or walking (1 min)
Stand-sit ➞ Repeatedly standing up and sitting down (5 times)
Lay ➞ Laying still (1 min)
Lay-stand ➞ Repeatedly standing up and laying down (5 times)
Pick ➞ Picking up an object from the floor (10 times)
Jump ➞ Jumping repeatedly (10 times)
Push-up ➞ Performing full push-ups (5 times)
Sit-up ➞ Performing sit-ups (5 times)
Walk ➞ Walking 20 meters (≈12 s)
Walk-backward ➞ Walking backward for 20 meters (≈20 s)
Walk-circle ➞ Walking along a circular path (≈20 s)
Run ➞ Running 20 meters (≈7 s)
Stair-up ➞ Ascending a set of stairs (≈1 min)
Stair-down ➞ Descending a set of stairs (≈50 s)
Table-tennis ➞ Playing table tennis (1 min)
Contents of the attached .zip files are:
1. Raw_time_domian_data.zip ➞ Originally collected 1945 time-domain samples in separate .csv files. The arrangement of information in each .csv file is:
Column 1, 5 ➞ exact time (elapsed since the start) when the Accelerometer & Gyro output was recorded (in ms)
Col. 2, 3, 4 ➞ Acceleration along X, Y, Z axes (in m/s^2)
Col. 6, 7, 8 ➞ Rate of rotation around X, Y, Z axes (in rad/s)
2. Trimmed_interpolated_raw_data.zip ➞ Unnecessary parts of the samples were trimmed (only from the beginning and the end). The samples were interpolated to keep a constant sampling rate of 100 Hz. The arrangement of information is the same as above.
3. Time_domain_subsamples.zip ➞ 20750 subsamples extracted from the 1945 collected samples, provided in a single .csv file. Each contains 3 seconds of non-overlapping data of the corresponding activity. Arrangement of information:
Col. 1–300, 301–600, 601–900 ➞ Accelerometer X, Y, Z axes readings
Col. 901–1200, 1201–1500, 1501–1800 ➞ Gyro X, Y, Z axes readings
Col. 1801 ➞ Class ID (0 to 17, in the order mentioned above)
Col. 1802 ➞ Length of each channel's data in the subsample
Col. 1803 ➞ Serial number of the subsample
Gravity acceleration was omitted from the Acc.meter data, and no filter was applied to remove noise. The dataset is free to download, modify, and use.
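A minimal sketch of loading the subsample file and reshaping each row into per-axis channels, following the column layout above (the CSV file name inside the zip is an assumption):

```python
import numpy as np
import pandas as pd

# Single CSV of 20750 subsamples; file name assumed, adjust to the extracted file.
data = pd.read_csv("KU-HAR_time_domain_subsamples.csv", header=None).to_numpy()

signals = data[:, :1800]            # columns 1-1800: 6 channels x 300 samples (3 s at 100 Hz)
labels = data[:, 1800].astype(int)  # column 1801: class ID (0-17)

# Reshape to (subsamples, channels, timesteps): acc X/Y/Z followed by gyro X/Y/Z.
signals = signals.reshape(-1, 6, 300)
print(signals.shape, np.bincount(labels))
```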
More information is provided in the data paper which is currently under review: N. Sikder, A.-A. Nahid, KU-HAR: An open dataset for heterogeneous human activity recognition, Pattern Recognit. Lett. (submitted).
A preprint will be available soon.
Backup: drive.google.com/drive/folders/1yrG8pwq3XMlyEGYMnM-8xnrd6js0oXA7
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Additional File 5: Script S1. ZIP folder containing everything necessary to run the model with the highest macro balanced accuracy that was generated as part of this study. We included an R script, the model file, and an example dataset (acceleration values and simultaneous behavior of a red deer individual) to run this model. The most accessible approach is to unzip the folder and open the Behavioral_classification.Rproj file directly in RStudio.
License: MIT License, https://opensource.org/licenses/MIT
License information was derived automatically
This dataset contains 3D voxel data extracted from Minecraft world saves, providing detailed block-level information for machine learning and data analysis applications. Each sample represents a 16×32×16 chunk section combining two vertical layers of the Minecraft world (Y-sections 3-4, corresponding to world heights 48-79).
Flat CSV file (minecraft_voxel_dataset_flat.csv):
- biome: Integer biome ID
- block_0 to block_8191: Block IDs in flattened order
NumPy archive (minecraft_voxel_dataset.npz):
- blocks: (num_chunks, 16, 32, 16) - 3D block data
- biomes: (num_chunks,) - Biome IDs
- y_sections: Which Y-sections were combined [3, 4]
- num_chunks: Total number of chunks
- block_shape: Dimensions (16, 32, 16)
PyTorch tensor file (minecraft_voxel_dataset.pt)
Please refer to the following Kaggle notebook for more details:
Example Notebook
This dataset is suitable for:
pip install numpy pandas torch scikit-learn matplotlib
pip install anvil-parser # If you want to generate your own data
pip install plotly # For 3D visualizations
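A minimal sketch of loading the NumPy archive listed above and inspecting one chunk (the array names follow the field list above):

```python
import numpy as np

data = np.load("minecraft_voxel_dataset.npz")

blocks = data["blocks"]  # shape (num_chunks, 16, 32, 16): block IDs per voxel
biomes = data["biomes"]  # shape (num_chunks,): biome ID per chunk
print("chunks:", blocks.shape[0], "voxel shape:", blocks.shape[1:])

# Frequency of block IDs in the first chunk, e.g. for class-imbalance checks.
ids, counts = np.unique(blocks[0], return_counts=True)
print(dict(zip(ids.tolist(), counts.tolist())))
```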
Legal notice: https://ec.europa.eu/info/legal-notice_en
The main scope of this project is to implement a Human Activity Recognition System (HAR System). The purpose of this system is to identify the action the user is performing, based solely on the changes in motion of the user's body during the performance of specific actions.
Most implemented HAR systems use specialised motion sensors secured to the user's body, including, but not limited to, the waist, chest, arms, and legs. However, the main problem with this type of system is the complex setup the user is required to wear during the activity, in addition to the added expense of purchasing these sensors. Considering the simplicity of the application, many users are likely to be discouraged by such a complex, albeit excessive, setup. As a result of rapid technological advancements and the efforts of many researchers, this setup has been reduced to needing only a smartphone. This initial setup made use of a bulkier mounting system in the form of a belt, an aspect that can be improved upon with the user's comfort as the main priority.
Therefore, for our project, we aimed to develop a simple Human Activity Recognition prototype that uses only the built-in sensors found in an average smartphone and eliminates the belt mount, allowing users to carry their phone in their pockets. While this may result in less accurate predictions, it allows users to retain their usual habit of keeping their phone in their pockets.
Moreover, two separate datasets were gathered by the three members working on this APT. The first dataset was collected to mimic the one described in the paper [1], with six total actions: Walking, Walking Downstairs, Walking Upstairs, Sitting, Standing, and Laying. This dataset was created in order to compare our results with those gathered and processed by Anguita et al. We also collected a second dataset for which we chose physical activities not included in the existing data. These also required body movement from the user and were recorded through the accelerometer and gyroscope sensors found in the smartphone. The final new activities are Cycling, Football, Swimming, Tennis, Jump Rope, and Push-ups. In summary, the main aim of this project was not only to interpret the original six activities that most Human Activity Recognition papers tend to focus on, but also to recognise another six unique physical activities.
The process of classifying the data with high accuracy can be divided into two steps: data collection and modelling. To collect a sufficient amount of data, the free app 'AndroSensor' was used. This app allowed for the collection of data from the four main inertial sensors: gyroscope, gravity, accelerometer, and linear acceleration. The data collected consist of roughly one hour's worth of data for each of the 12 activities mentioned above, allowing the model to be developed with an even distribution of data across all categories.
After the data is collected, it is pre-processed. The pre-processing entails the removal of “NaN” and duplicated values while also generating statistical readings from the ‘csv’ file produced by ‘AndroSensor’. After being processed, the data is analysed via a t-SNE algorithm which aids the visualisation of data clusters. Finally, the data is modelled and classified using four different supervised machine learning algorithms: Logistic Regression, Support Vector Machines, Decision Trees and K-Nearest Neighbours.
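A hedged sketch of this pipeline on a generic feature table (the file name and column names are placeholders; the actual AndroSensor export and derived statistical features will differ):

```python
import pandas as pd
from sklearn.manifold import TSNE
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

# Pre-processed feature table: one row per window of sensor data plus an 'activity' label (placeholder file).
df = pd.read_csv("features.csv").dropna().drop_duplicates()
X = StandardScaler().fit_transform(df.drop(columns=["activity"]))
y = df["activity"]

# t-SNE projection for visual inspection of activity clusters.
embedding = TSNE(n_components=2, random_state=0).fit_transform(X)
print("t-SNE embedding shape:", embedding.shape)

# Compare the four supervised classifiers mentioned above.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
for name, clf in [("LogReg", LogisticRegression(max_iter=1000)), ("SVM", SVC()),
                  ("Tree", DecisionTreeClassifier()), ("kNN", KNeighborsClassifier())]:
    print(name, clf.fit(X_tr, y_tr).score(X_te, y_te))
```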
Do not hesitate to contact me at owenagius24@gmail.com if you wish to see the whole report.