The global number of households with a computer was forecast to increase continuously between 2024 and 2029 by a total of 88.6 million households (+8.6 percent). After the fifteenth consecutive year of growth, the number of computer households is estimated to reach 1.1 billion, a new peak, in 2029. Notably, the number of households with a computer has been increasing continuously over the past years. Computer households are defined as households possessing at least one computer. The data shown are an excerpt of Statista's Key Market Indicators (KMI). The KMI are a collection of primary and secondary indicators on the macro-economic, demographic, and technological environment in up to 150 countries and regions worldwide. All indicators are sourced from international and national statistical offices, trade associations, and the trade press, and they are processed to generate comparable data sets (see supplementary notes under details for more information). Find more key insights for the number of households with a computer in regions like the Caribbean and Africa.
https://dataintelo.com/privacy-and-policy
The global laptop market size is projected to grow significantly from USD 150 billion in 2023 to an estimated USD 230 billion by 2032, representing a compound annual growth rate (CAGR) of 4.9%. This growth is driven by a variety of factors including technological advancements, increasing demand for remote work solutions, and the continuous rise in e-learning. The proliferation of digital content, coupled with the need for portable computing devices, is further propelling the market. The shift towards more powerful, energy-efficient, and lightweight devices is also contributing to this upward trend. As consumers and businesses alike continue to value mobility without compromising on performance, the laptop market is poised for sustained growth over the forecast period.
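As a quick sanity check on the quoted rate (treating 2023 to 2032 as nine compounding years), the standard CAGR formula reproduces the figure:

$$\mathrm{CAGR} = \left(\frac{V_{2032}}{V_{2023}}\right)^{1/9} - 1 = \left(\frac{230}{150}\right)^{1/9} - 1 \approx 0.049 \approx 4.9\%$$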
One of the primary growth factors for the laptop market is the increase in remote work and distance learning opportunities. With the global shift towards remote working environments, particularly accelerated by the COVID-19 pandemic, there has been an unprecedented demand for portable and efficient computing devices. Laptops offer the flexibility needed for remote work, enabling users to access work-related resources from any location. The education sector has also witnessed a surge in demand as educational institutions have increasingly adopted digital learning platforms, necessitating the widespread use of laptops for students and educators. This trend is expected to continue as both the corporate world and educational institutions recognize the long-term benefits of flexible work and learning models.
Technological advancements in the laptop market are another critical growth driver. The development of high-performance processors, enhanced graphics capabilities, and longer battery life are setting new benchmarks in the industry. Manufacturers are focusing on innovation to meet the increasing expectations of consumers who are seeking devices that can handle more complex tasks. The advent of 5G technology and its integration into laptops is also expected to create new opportunities by enabling faster connectivity and improved performance. Additionally, the trend towards thinner and lighter laptops, such as ultrabooks, continues to gain traction, appealing to consumers who prioritize both portability and power.
The growth of the gaming industry is also significantly impacting the laptop market. Gaming laptops, which boast powerful processors and high-end graphics cards, are increasingly in demand. The rise of eSports and competitive gaming has fueled the need for devices that can deliver immersive gaming experiences. Moreover, the growing popularity of virtual reality (VR) and augmented reality (AR) has further driven the demand for high-performance laptops. As gaming becomes more mainstream and diverse, manufacturers are investing in developing specialized gaming laptops that cater to different segments of gamers, from casual players to serious enthusiasts.
Regionally, Asia Pacific is expected to exhibit the highest growth in the laptop market, driven by a burgeoning middle-class population and rapid digitalization. Countries like China and India are witnessing an increased adoption of laptops across various sectors, including education and business. North America and Europe remain key markets due to their technological infrastructure and high adoption rates of new technologies. Meanwhile, Latin America and the Middle East & Africa are gradually emerging as potential growth areas, with increasing investments in technology and education sectors. As the global landscape evolves, regional dynamics will continue to play a significant role in shaping the future of the laptop market.
The laptop market is broadly segmented into various product types, each catering to distinct consumer needs and preferences. Traditional laptops continue to dominate the market due to their versatility and affordability. These devices are favored by a wide range of consumers, from students to professionals, due to their well-balanced features that offer adequate performance for everyday tasks. Manufacturers have been focusing on enhancing the specifications of traditional laptops to include better processors, increased storage, and improved battery life, ensuring they remain competitive in a market with evolving consumer demands.
2-in-1 laptops, also known as convertible laptops, are gaining popularity due to their multifunctionality. These devices can switch between laptop and tablet modes, offering users the flexibility to use them for both work and entertainment purposes. The
The global household computer penetration rate was forecast to increase continuously between 2024 and 2029 by a total of 2.4 percentage points. After the eleventh consecutive year of growth, the computer penetration rate is estimated to reach 52.78 percent, a new peak, in 2029. Depicted is the estimated share of households owning at least one computer. The data shown are an excerpt of Statista's Key Market Indicators (KMI). The KMI are a collection of primary and secondary indicators on the macro-economic, demographic, and technological environment in up to 150 countries and regions worldwide. All indicators are sourced from international and national statistical offices, trade associations, and the trade press, and they are processed to generate comparable data sets (see supplementary notes under details for more information). Find more key insights for household computer penetration in regions like Australia & Oceania and the Caribbean.
https://creativecommons.org/publicdomain/zero/1.0/
"Amazon Laptop Specs" is a comprehensive dataset containing detailed specifications of various laptop models sold on Amazon. The dataset consists of about 100 laptop models and covers a wide range of brands, including Dell, HP, Lenovo, Apple, Acer, Asus, and more.
The data includes various attributes of each laptop, such as the processor type, RAM size, hard disk size, screen size, graphics card, operating system, battery life, and more. Additionally, the dataset includes information on the price, customer reviews, and ratings for each laptop model.
The dataset is suitable for researchers, analysts, and data scientists who are interested in exploring the market trends, comparing the performance of different laptop models, or building predictive models to understand customer behavior.
This dataset can also be used by e-commerce businesses to analyze customer preferences and identify the most popular laptop models, which can help in making informed decisions about inventory management, pricing, and marketing strategies.
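A minimal exploration sketch, assuming the dataset ships as a CSV whose column names roughly match the attributes listed above (the file name and exact column names here are assumptions):

```python
import pandas as pd

# Assumed file and column names; adjust to the actual CSV schema.
df = pd.read_csv("amazon_laptop_specs.csv")

# Most common brands and their typical prices.
print(df["brand"].value_counts().head(10))
print(df.groupby("brand")["price"].median().sort_values(ascending=False))

# Does more RAM correlate with higher ratings? (both columns assumed numeric)
print(df.groupby("ram_gb")["rating"].mean())
```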
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
ODDS Smart Building Depth Dataset
#Introduction:
The goal of this dataset is to facilitate research focusing on recognizing objects in smart buildings using the depth sensor mounted at the ceiling. This dataset contains annotations of depth images for eight frequently seen object classes. The classes are: person, backpack, laptop, gun, phone, umbrella, cup, and box.
#Data Collection:
We collected data in two settings. In the first, a Kinect was mounted on a 9.3-foot ceiling near a 6-foot-wide door. In the second, we used a tripod with a horizontal extender holding the Kinect at a similar height, looking downwards. We asked about 20 volunteers to enter and exit a number of times each in different directions (3 times walking straight, 3 times toward the left side, 3 times toward the right side), holding objects in many different ways and poses underneath the Kinect. Each subject used his/her own backpack, purse, laptop, etc. As a result, we captured variety within the same object class: for laptops, we included MacBooks and HP and Lenovo laptops of different years and models; for backpacks, we included backpacks, side bags, and women's purses. We asked the subjects to walk while holding each object in many ways; for example, laptops were carried fully open, partially closed, and fully closed, held in front of and beside the body, and tucked under the elbow. The subjects carried their backpacks on their backs and at their sides at different heights, from foot to shoulder level. We wanted to collect data with real guns; however, bringing real guns to the office is prohibited, so we obtained a few Nerf guns, and the subjects carried them pointing front, side, up, and down while walking.
#Annotated Data Description:
The annotated dataset follows the structure of the Pascal VOC devkit, so data preparation is simple and it can be used directly with object detection libraries that accept Pascal VOC-style annotations (e.g., Faster R-CNN, YOLO, SSD); a minimal parsing sketch follows the directory listing below. The annotated data consists of a set of images; each image has an annotation file giving a bounding box and object class label for each object from the eight classes present in the image. Multiple objects from multiple classes may be present in the same image. The dataset has 3 main directories:
1) DepthImages: Contains all the images of the training and validation sets.
2) Annotations: Contains one XML file per image file (e.g., 1.xml for image file 1.png). The XML file includes the bounding box annotations for all objects in the corresponding image.
3) ImagesSets: Contains two text files, training_samples.txt and testing_samples.txt. The training_samples.txt file lists the images used for training and testing_samples.txt lists the images used for testing. (We randomly chose an 80%/20% split.)
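A minimal sketch for reading one of these annotation files, assuming the XML uses the standard Pascal VOC tags (object, name, bndbox with xmin/ymin/xmax/ymax); the example file path follows the directory layout above:

```python
import xml.etree.ElementTree as ET

def parse_voc_annotation(xml_path):
    """Parse one Pascal VOC-style annotation file into a list of
    (class_name, xmin, ymin, xmax, ymax) tuples."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.findall("object"):
        name = obj.find("name").text
        bb = obj.find("bndbox")
        coords = tuple(int(float(bb.find(tag).text)) for tag in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((name, *coords))
    return boxes

# Example usage (file name assumed): boxes for DepthImages/1.png
for name, xmin, ymin, xmax, ymax in parse_voc_annotation("Annotations/1.xml"):
    print(name, xmin, ymin, xmax, ymax)
```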
#UnAnnotated Data Description:
The un-annotated data consists of several sets of depth images. No ground-truth annotation is available for these images yet. These un-annotated sets contain several challenging scenarios, and no data was collected from this office during annotated dataset construction; hence, they provide a way to test the generalization performance of an algorithm.
#Citation:
If you use ODDS Smart Building dataset in your work, please cite the following reference in any publications:
@inproceedings{mithun2018odds,
title={ODDS: Real-Time Object Detection using Depth Sensors on Embedded GPUs},
author={Niluthpol Chowdhury Mithun and Sirajum Munir and Karen Guo and Charles Shelton},
booktitle={ACM/IEEE Conference on Information Processing in Sensor Networks (IPSN)},
year={2018},
}
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
We present two urban road disease datasets: DURDD for road disease detection and CURDD for road disease classification. DURDD includes four main types of underground road diseases: cavity, detachment, water-rich, and looseness. It also contains disease detection datasets in three base formats: COCO, Pascal VOC, and YOLO. In CURDD, the dataset is divided into two levels: level 0 and level 1, corresponding to the "Cls0" and "Cls1" catalogs, respectively. Level 1 includes cavity, detachment, water-rich, looseness, and background. Level 0 categories combine the four main disease types mentioned earlier into a single "diseases" category, with the other category being "background." This dataset was jointly published by Hebei University and the 519 Team of North China Geological Exploration Bureau. We support individuals or teams using the data for research purposes. We also welcome collaboration for commercial use. For commercial inquiries, please contact us for authorization.
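As a small illustration of the two classification levels described above, here is a hedged sketch of the label mapping (the list literal and helper name are ours, not part of the dataset's code):

```python
# Level-1 ("Cls1") classes as described for CURDD.
LEVEL1_CLASSES = ["cavity", "detachment", "water-rich", "looseness", "background"]

def to_level0(level1_label: str) -> str:
    """Collapse the four disease types into the single level-0 'diseases' class."""
    return "background" if level1_label == "background" else "diseases"

print({c: to_level0(c) for c in LEVEL1_CLASSES})
```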
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Depression is a psychological state of mind that often influences a person in an unfavorable manner. While it can occur in people of all ages, students are especially vulnerable to it throughout their academic careers. Beginning in 2020, the COVID-19 pandemic caused major problems in people's lives by driving them into quarantine and forcing them to be continually connected through mobile devices, such that mobile connectivity became the new norm during the pandemic and beyond. This situation is further accelerated for students as universities move towards a blended learning mode. In these circumstances, monitoring student mental health in terms of mobile and Internet connectivity is crucial for their wellbeing. This study focuses on students attending an international university in Bangladesh and investigates their mental health in relation to their continual use of mobile devices (e.g., smartphones, tablets, laptops, etc.). A cross-sectional survey method was employed to collect data from 444 participants. Following exploratory data analysis, eight machine learning (ML) algorithms were used to develop an automated normal-to-extreme-severe depression identification and classification system. When the automated detection incorporated feature selection, such as the Chi-square test and Recursive Feature Elimination (RFE), an increase in accuracy of about 3 to 5% was observed. Similarly, a 5 to 15% increase in accuracy was observed when a feature extraction method such as Principal Component Analysis (PCA) was applied. The SparsePCA feature extraction technique in combination with the CatBoost classifier showed the best results in terms of accuracy, F1-score, and ROC-AUC. The data analysis revealed no sign of depression in about 44% of the participants, while about 25% of students showed mild-to-moderate and 31% showed severe-to-extreme signs of depression. The results suggest that ML models incorporating a proper feature engineering method can serve adequately in multi-stage depression detection among students. Such a model might also be utilized in other disciplines for detecting early signs of depression.
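A hedged sketch of the kind of pipeline the study describes: a feature extraction step (SparsePCA) feeding a boosted-tree classifier, evaluated with accuracy and macro F1. The paper pairs SparsePCA with CatBoost; a scikit-learn GradientBoostingClassifier is substituted here so the sketch stays self-contained, and the file name, target column, and numeric encoding of the survey responses are all assumptions.

```python
import pandas as pd
from sklearn.decomposition import SparsePCA
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Assumed file and target column; the survey data itself is not included here.
df = pd.read_csv("student_depression_survey.csv")
X, y = df.drop(columns=["depression_level"]), df["depression_level"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# SparsePCA feature extraction followed by a boosted-tree classifier
# (the study used CatBoost; GradientBoostingClassifier stands in here).
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("spca", SparsePCA(n_components=10, random_state=42)),
    ("clf", GradientBoostingClassifier(random_state=42)),
])
pipeline.fit(X_train, y_train)
pred = pipeline.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print("macro F1:", f1_score(y_test, pred, average="macro"))
```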
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A low-resolution infrared thermal dataset of people and thermal objects, such as a working laptop, in indoor environments. The dataset was collected with a far-infrared thermal camera (32 by 24 pixels), which can capture the position and shape of thermal objects without privacy issues, enabling trustworthy computer vision applications. The dataset consists of 1770 thermal images with high-quality annotations, collected in an indoor room at around 15 degrees.
https://www.ontario.ca/page/open-government-licence-ontario
Self-reported data from approximately 380 public libraries, First Nation public libraries and contracting organizations. The data includes:
Data from 2011 and onwards is from a refreshed database. New fields were added for:
In 2012, new fields were added for:
In 2013 more fields were added for social media visits and other professional staff.
In 2016 a field was added for indigenous language training and retention, while circulating and reference holdings information was combined.
In 2017 fields were added for e-learning services, students hired for a summer or semester, circulating wireless hot spots, and library service visits to residence-bound people.
In 2019 fields were added for Facility Rentals and Bookings, ‘Pop-up’ Libraries, Extended Services and Facilities, Government Services Partnerships, and Business and Economic Sector Partnerships.
The database uses the common name "LibStats".
The product has been discontinued since: 03 Jul 2018. Access to the internet via wireless connection using a portable computer away from home or work.
The Room environment - v0
We have released a challenging Gymnasium-compatible environment. The best strategy for this environment is to have both episodic and semantic memory systems. See the paper for more information.
Prerequisites
A unix or unix-like x86 machine with Python 3.10 or higher. Running in a virtual environment (e.g., conda, virtualenv, etc.) is highly recommended so that you don't mess up the system Python. This env is published on PyPI; just run `pip install room-env`.
Data collection

Data is collected by querying the ConceptNet API. For simplicity, we only collect triples whose format is (head, atlocation, tail). Here head is one of the 80 MS COCO dataset categories. This was kept in mind so that later on we can use images as well.
If you want to collect the data manually, then run below:
python collect_data.py
How does this environment work?

The Gymnasium-compatible Room environment is one big room with N_people people who can freely move around. Each of them selects one object, among N_objects objects, and places it in one of N_locations locations. N_agents agent(s) are also in this room. They can only observe one human placing an object at a time, x(t). At the same time, they are given one question about the location of an object, q(t). x(t) is given as a quadruple, (h(t), r(t), t(t), t).
The reason why the observations and questions are given in an RDF-triple-like format is twofold. First, this structured format is easily readable and writable by both humans and machines. Second, we can use existing knowledge graphs, such as ConceptNet.
To simplify the environment, the agents themselves are not actually moving, but the room is continuously changing. There are several random factors in this environment to be considered:
With a chance of p_commonsense, a human places an object in a commonsense location (e.g., a laptop on a desk). The commonsense knowledge we use is from ConceptNet. With a chance of 1 − p_commonsense, an object is placed at a non-commonsense random location (e.g., a laptop on a tree).
With a chance of p_new_location, a human changes the object's location.
With a chance of p_new_object, a human switches his/her object to another one.
With a chance of p_switch_person, two people switch their locations. This is done to mimic an agent moving around the room.
All four probabilities are parameters of Bernoulli distributions.
Consider there is only one agent. Then this is a POMDP, where S_t = (x(t), q(t)), A_t = (do something with x(t), answer q(t)), and R_t ∈ {0, 1}.
Currently no RL agent has been trained for this; we only have some heuristics. Take a look at the paper for more details.
RoomEnv-v0

```python
import gymnasium as gym

env = gym.make("room_env:RoomEnv-v0")
(observation, question), info = env.reset()
rewards = 0

while True:
    (observation, question), reward, done, truncated, info = env.step("This is my answer!")
    rewards += reward
    if done:
        break

print(rewards)
```
Every time an agent takes an action, the environment gives it an observation and a question to answer. You can try directly answering the question, as in env.step("This is my answer!"), but a better strategy is to keep the observations in memory systems and take advantage of both the current observation and the history of observations stored in those memory systems.
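As an illustration of that strategy, here is a minimal memory-keeping agent sketch. It assumes the observation unpacks as the (h(t), r(t), t(t), t) quadruple described above and that the queried object is the first element of the question; both are assumptions about the exact data layout, so check the linked repo for the real interfaces.

```python
import gymnasium as gym

env = gym.make("room_env:RoomEnv-v0")
(observation, question), info = env.reset()

memory = {}   # naive memory: latest known location per object (head -> tail)
rewards = 0

while True:
    head, relation, tail, timestamp = observation  # assumed quadruple layout
    memory[head] = tail                            # keep only the newest placement

    # Answer with the remembered location if we have one
    # (treating question[0] as the queried object is an assumption).
    answer = memory.get(question[0], "I don't know")

    (observation, question), reward, done, truncated, info = env.step(answer)
    rewards += reward
    if done or truncated:
        break

print(rewards)
```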
Take a look at this repo for an actual interaction with this environment to learn a policy.
Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
1) Fork the Project
2) Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
3) Run `make test && make style && make quality` in the root repo directory to ensure code quality
4) Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
5) Push to the Branch (`git push origin feature/AmazingFeature`)
6) Open a Pull Request
Cite our paper (BibTeX):

@misc{https://doi.org/10.48550/arxiv.2204.01611,
  doi = {10.48550/ARXIV.2204.01611},
  url = {https://arxiv.org/abs/2204.01611},
  author = {Kim, Taewoon and Cochez, Michael and Francois-Lavet, Vincent and Neerincx, Mark and Vossen, Piek},
  keywords = {Artificial Intelligence (cs.AI), FOS: Computer and information sciences},
  title = {A Machine With Human-Like Memory Systems},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
Authors
Taewoon Kim Michael Cochez Vincent Francois-Lavet Mark Neerincx Piek Vossen
License MIT
https://creativecommons.org/publicdomain/zero/1.0/
I'm not the only one blessed with an old computer. :) People on the other side of a meeting wonder where the noise is coming from. :) Well, this dataset is an effort to build a model that can eliminate fan noise from audio.
As of yet, there are 3 WAV audio files that were recorded using Audacity and exported as Signed 16-bit PCM (whatever that means), at a recording volume of 0.70. Future audio files will include recordings at higher volume to capture as much noise as possible.
To all those blessed with old hardware. :) You are amazing.
You never know what kind of knowledge is hiding in the data. I once read about using fan noise to extract computer data! Isn't that amazing? You are welcome to share your findings.
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0) https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
First-person video dataset recorded in daily life situations of 17 participants, annotated by themselves for privacy sensitivity. The dataset of Steil et al. contains more than 90 hours of data recorded continuously from 20 participants (six females, aged 22-31) over more than four hours each. Participants were students with different backgrounds and subjects with normal or corrected-to-normal vision. During the recordings, participants roamed a university campus and performed their everyday activities, such as meeting people, eating, or working as they normally would on any day at the university. To obtain some data from multiple, and thus also “privacy-sensitive”, places on the university campus, participants were asked to not stay in one place for more than 30 minutes. Participants were further asked to stop the recording after about one and a half hours so that the laptop’s battery packs could be changed and the eye tracker re-calibrated. This yielded three recordings of about 1.5 hours per participant. Participants regularly interacted with a mobile phone provided to them and were also encouraged to use their own laptop, desktop computer, or music player if desired. The dataset thus covers a rich set of representative real-world situations, including sensitive environments and tasks.
The product has been discontinued since: 03 Jul 2018.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Here are a few use cases for this project:
Classroom Attendance Management: "kmutnb_person_com" can be used in educational institutions to monitor and track student attendance by analyzing images or video feeds from classrooms, identifying the presence of students (person class) and checking them against the registered list of students in the class.
Workspace Productivity Analysis: Businesses can utilize this model to analyze office spaces and monitor employee activities. By identifying individuals (person class) and their interaction with computers (com_on), companies can better understand and optimize employee productivity.
Smart Building Management: The computer vision model can be integrated into smart building systems to detect and monitor the occupancy of shared spaces such as conference rooms, study areas, or libraries. The model can identify the presence of people and their activities, providing building managers information on space utilization and energy consumption.
Retail Space Analysis: Retailers can use "kmutnb_person_com" to analyze and improve their store layouts. By detecting the presence of customers (person class) and their activities near products or computers (com_on), retailers can optimize the arrangement of merchandise and digital displays to increase customer engagement and sales.
Security and Surveillance: The computer vision model could be implemented in security systems to monitor and detect unauthorized access to sensitive areas where computers or critical infrastructure are present. By detecting the presence of individuals (person class), security personnel can quickly respond to potential threats.
This database automatically captures metadata, the source of which is the Statistical Office of the Republic of Slovenia, corresponding to the source database entitled “Frequency and place of use of computers by individuals, by education and sex, Slovenia, 2007-2017”.
Actual data are available in PC-Axis format (.px). Additional links provide access to the source portal page for viewing and selecting data, as well as to the PX-Win program, which can be downloaded free of charge. Both allow you to select data for display, change the format of the printout, and store it in different formats; they also support viewing and printing tables of unlimited size, as well as some basic statistical analyses and graphics.
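For readers who prefer to work outside PX-Win, the .px files can also be read programmatically. A minimal sketch using the third-party pyaxis package is below; the file name and encoding are assumptions, and the parse() call should be checked against the documentation of the installed package version.

```python
from pyaxis import pyaxis  # third-party PC-Axis (.px) reader: pip install pyaxis

# File name and encoding are assumptions; point this at the .px file
# downloaded from the source portal.
px = pyaxis.parse(uri="computer_use_slovenia_2007_2017.px", encoding="ISO-8859-2")

df = px["DATA"]  # tidy pandas DataFrame with one row per table cell
print(df.head())
```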
Apache License, v2.0 https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
In this competition, you'll write an algorithm to classify whether images contain either a dog or a cat. This is easy for humans, dogs, and cats. Your computer will find it a bit more difficult.
(Image: example cat-and-dog photo, https://www.ethosvet.com/wp-content/uploads/cat-dog-625x375.png)
The Asirra data set
Web services are often protected with a challenge that's supposed to be easy for people to solve, but difficult for computers. Such a challenge is often called a CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) or HIP (Human Interactive Proof). HIPs are used for many purposes, such as to reduce email and blog spam and prevent brute-force attacks on web site passwords.
Asirra (Animal Species Image Recognition for Restricting Access) is a HIP that works by asking users to identify photographs of cats and dogs. This task is difficult for computers, but studies have shown that people can accomplish it quickly and accurately. Many even think it's fun! Here is an example of the Asirra interface:
Asirra is unique because of its partnership with Petfinder.com, the world's largest site devoted to finding homes for homeless pets. They've provided Microsoft Research with over three million images of cats and dogs, manually classified by people at thousands of animal shelters across the United States. Kaggle is fortunate to offer a subset of this data for fun and research.

Image recognition attacks
While random guessing is the easiest form of attack, various forms of image recognition can allow an attacker to make guesses that are better than random. There is enormous diversity in the photo database (a wide variety of backgrounds, angles, poses, lighting, etc.), making accurate automatic classification difficult. In an informal poll conducted many years ago, computer vision experts posited that a classifier with better than 60% accuracy would be difficult without a major advance in the state of the art. For reference, a 60% classifier improves the guessing probability of a 12-image HIP from 1/4096 to 1/459.
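The quoted figures follow from raising the per-image success probability to the 12th power for a 12-image HIP:

$$0.5^{12} = \frac{1}{4096} \approx 0.000244, \qquad 0.6^{12} \approx 0.00218 \approx \frac{1}{459}$$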
State of the art

The current literature suggests machine classifiers can score above 80% accuracy on this task [1]. Therefore, Asirra is no longer considered safe from attack. We have created this contest to benchmark the latest computer vision and deep learning approaches to this problem. Can you crack the CAPTCHA? Can you improve the state of the art? Can you create lasting peace between cats and dogs?
Submission Format
Your submission should have a header. For each image in the test set, predict a label for its id (1 = dog, 0 = cat):
id,label
1,0
2,0
3,0
etc...
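A minimal sketch for writing a correctly formatted submission file, assuming you already have a mapping from image id to a 0/1 prediction (the placeholder labels below are illustrative only):

```python
import csv

# Replace with your classifier's predictions: labels[image_id] is 1 (dog) or 0 (cat).
labels = {1: 0, 2: 0, 3: 1}

with open("submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "label"])  # required header row
    for image_id in sorted(labels):
        writer.writerow([image_id, labels[image_id]])
```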
Computer price index, by type of purchaser (CPPI), by North American Product Classification System (NAPCS). Monthly data are available from January 2010. The table presents data for the most recent reference period and the last four periods. The base period for the index is 2015=100.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The expansion of Internet connectivity has revolutionized our daily lives, with people increasingly relying on smartphones and laptops for various tasks. This technological evolution has prompted the development of innovative solutions to enhance the quality of life for diverse populations, including the elderly and individuals with disabilities. Among the most impactful advancements are voice-command-enabled technologies such as SIRI and Google voice commands, which are built upon the foundation of Speech Recognition modules, a critical component in facilitating human-machine communication.

Automatic Speech Recognition (ASR) has witnessed significant progress in achieving human-like performance through data-driven methods. In the context of our research, we have meticulously crafted an Arabic voice command dataset to facilitate advancements in ASR and other speech processing tasks. This dataset comprises 10 distinct commands spoken by 10 unique speakers, each repeated 10 times. Despite its modest size, the dataset has demonstrated remarkable performance across a range of speech processing tasks.

The dataset was rigorously evaluated, yielding exceptional results. In ASR, it achieved an accuracy of 95.9%, showcasing its potential for effectively transcribing spoken Arabic commands. Furthermore, the dataset excelled in speaker identification, gender recognition, accent recognition, and spoken language understanding, with macro F1 scores of 99.67%, 100%, 100%, and 97.98%, respectively.

This Arabic Voice Command Dataset represents a valuable resource for researchers and developers in the field of speech processing and human-machine interaction. Its quality and diversity make it a robust foundation for developing and testing ASR and other related systems, ultimately contributing to the advancement of voice-command technologies and their widespread accessibility.
Apache License, v2.0 https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
The provided dataset, titled "product_price_dataset.csv," contains information about various products across different categories. It can be used for a project titled "Dynamic Product Price Adjustment Using Machine Learning." The dataset includes the following columns:
1) ProductID: A unique identifier for each product.
2) ProductName: The name of the product.
3) Brand: The brand or company that manufactures the product.
4) Category: The category to which the product belongs (e.g., Laptops, Mobile Phones, Wearable Tech, Home Appliances, etc.).
5) Weight: The weight of the product, typically in kilograms.
6) Dimensions: The dimensions of the product, specified as length x width x height.
7) Material: The primary material used in the construction of the product.
8) Color: The color of the product.
9) Rating: The average rating of the product based on customer reviews, usually on a scale of 1 to 5.
10) NumReviews: The number of customer reviews for the product.
11) Price: The current price of the product.
This dataset contains information about 120 different products spanning various categories such as electronics, home appliances, fitness and health, outdoor and sports equipment, and more. The dataset includes products like laptops, smartphones, headphones, smartwatches, gaming consoles, tablets, cameras, drones, fitness trackers, wireless mice, external hard drives, and many others. With this comprehensive dataset, machine learning techniques can be applied to analyze the relationships between product features (such as brand, category, weight, dimensions, material, color, rating, and number of reviews) and the price. The goal would be to develop a dynamic pricing model that can adjust product prices based on these features, potentially helping businesses optimize their pricing strategies and increase profitability. Additionally, the dataset can be used for other tasks such as product recommendation systems, market segmentation, and demand forecasting, among others.
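A hedged sketch of the dynamic-pricing idea described above: predict Price from the other listed columns with a simple regression pipeline. The column names come from the description; the file name, the choice of model, and dropping Dimensions for simplicity are our assumptions.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

df = pd.read_csv("product_price_dataset.csv")

# Feature/target split based on the columns listed above; Dimensions is
# dropped here for simplicity (it would need parsing into numeric fields).
categorical = ["Brand", "Category", "Material", "Color"]
numeric = ["Weight", "Rating", "NumReviews"]
X = df[categorical + numeric]
y = df["Price"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = Pipeline([
    ("prep", ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), categorical)],
        remainder="passthrough",
    )),
    ("reg", RandomForestRegressor(n_estimators=200, random_state=42)),
])
model.fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```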