The number of smartphone users in the United States was forecast to continuously increase between 2024 and 2029 by a total of 17.4 million users (+5.61 percent). After the fifteenth consecutive year of growth, the smartphone user base is estimated to reach 327.54 million users, a new peak, in 2029. Notably, the number of smartphone users has increased continuously over the past years. Smartphone users here are limited to internet users of any age using a smartphone. The shown figures have been derived from survey data that has been processed to estimate missing demographics. The shown data are an excerpt of Statista's Key Market Indicators (KMI). The KMI are a collection of primary and secondary indicators on the macro-economic, demographic and technological environment in up to 150 countries and regions worldwide. All indicators are sourced from international and national statistical offices, trade associations and the trade press, and they are processed to generate comparable data sets (see supplementary notes under details for more information). Find more key insights for the number of smartphone users in countries like Mexico and Canada.
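As a quick sanity check on the forecast arithmetic above, a minimal sketch (using only figures stated in the text: a 2029 peak of 327.54 million after a total gain of 17.4 million) shows the implied 2024 base and the relative increase:

```python
# Values taken from the description above (millions of users).
users_2029 = 327.54   # forecast peak in 2029
total_gain = 17.4     # total increase over 2024-2029

# The implied 2024 base and the relative increase it yields.
users_2024 = users_2029 - total_gain
pct_increase = 100 * total_gain / users_2024

print(f"Implied 2024 base: {users_2024:.2f} million")
print(f"Relative increase: {pct_increase:.2f}%")
```

The +5.61 percent figure in the text is consistent with this implied 2024 base of roughly 310.14 million users.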
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The ORBIT (Object Recognition for Blind Image Training)-India Dataset is a collection of 105,243 images of 76 commonly used objects, collected by 12 individuals in India who are blind or have low vision. This dataset is an "Indian subset" of the original ORBIT dataset [1, 2], which was collected in the UK and Canada. In contrast to the ORBIT dataset, which was created in a Global North, Western, and English-speaking context, the ORBIT-India dataset features images taken in a low-resource, non-English-speaking, Global South context, which is home to 90% of the world's population of people with blindness. Since it is easier for blind or low-vision individuals to gather high-quality data by recording videos, this dataset, like the ORBIT dataset, contains images (each sized 224x224) derived from 587 videos. These videos were taken by our data collectors from various parts of India using the Find My Things [3] Android app. Each data collector was asked to record eight videos of at least 10 objects of their choice.
Collected between July and November 2023, this dataset represents a set of objects commonly used by people who are blind or have low vision in India, including earphones, talking watches, toothbrushes, and typical Indian household items like a belan (rolling pin) and a steel glass. These videos were taken in various settings of the data collectors' homes and workspaces using the Find My Things Android app.
The image dataset is stored in the ‘Dataset’ folder, organized into folders assigned to each data collector (P1, P2, ...P12) who collected them. Each collector's folder includes sub-folders named with the object labels as provided by our data collectors. Within each object folder, there are two subfolders: ‘clean’ for images taken on clean surfaces and ‘clutter’ for images taken in cluttered environments where the objects are typically found. The annotations are saved in an ‘Annotations’ folder containing one JSON file per video (e.g., P1--coffee mug--clean--231220_084852_coffee mug_224.json), with keys corresponding to all frames/images in that video (e.g., "P1--coffee mug--clean--231220_084852_coffee mug_224--000001.jpeg": {"object_not_present_issue": false, "pii_present_issue": false}, "P1--coffee mug--clean--231220_084852_coffee mug_224--000002.jpeg": {"object_not_present_issue": false, "pii_present_issue": false}, ...). The ‘object_not_present_issue’ key is true if the object is not present in the image, and the ‘pii_present_issue’ key is true if personally identifiable information (PII) is present in the image. Note that all PII present in the images has been blurred to protect the identity and privacy of our data collectors. This dataset version was created by cropping images originally sized at 1080 × 1920; an unscaled version of the dataset will therefore follow soon.
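The per-video annotation format described above can be consumed with a few lines of standard-library Python. A minimal sketch, using the frame keys from the example in the text (for a real annotation file, read it from the ‘Annotations’ folder with json.load):

```python
import json

# Inline sample mimicking one per-video annotation file; the frame keys
# follow the naming scheme shown in the description above.
sample = '''
{
  "P1--coffee mug--clean--231220_084852_coffee mug_224--000001.jpeg":
      {"object_not_present_issue": false, "pii_present_issue": false},
  "P1--coffee mug--clean--231220_084852_coffee mug_224--000002.jpeg":
      {"object_not_present_issue": true, "pii_present_issue": false}
}
'''
annotations = json.loads(sample)

# Keep only frames where the object is visible and no PII flag is set.
usable = [
    frame for frame, flags in annotations.items()
    if not flags["object_not_present_issue"] and not flags["pii_present_issue"]
]
print(f"{len(usable)} of {len(annotations)} frames usable")
```

Filtering on these two flags is the natural first preprocessing step before feeding the frames to a few-shot learning pipeline.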
This project was funded by the Engineering and Physical Sciences Research Council (EPSRC) Industrial ICASE Award with Microsoft Research UK Ltd. as the Industrial Project Partner. We would like to acknowledge and express our gratitude to our data collectors for their efforts and time invested in carefully collecting videos to build this dataset for their community. The dataset is designed for developing few-shot learning algorithms, aiming to support researchers and developers in advancing object-recognition systems. We are excited to share this dataset and would love to hear from you if and how you use this dataset. Please feel free to reach out if you have any questions, comments or suggestions.
REFERENCES:
Daniela Massiceti, Lida Theodorou, Luisa Zintgraf, Matthew Tobias Harris, Simone Stumpf, Cecily Morrison, Edward Cutrell, and Katja Hofmann. 2021. ORBIT: A real-world few-shot dataset for teachable object recognition collected from people who are blind or low vision. DOI: https://doi.org/10.25383/city.14294597
microsoft/ORBIT-Dataset. https://github.com/microsoft/ORBIT-Dataset
Linda Yilin Wen, Cecily Morrison, Martin Grayson, Rita Faia Marques, Daniela Massiceti, Camilla Longden, and Edward Cutrell. 2024. Find My Things: Personalized Accessibility through Teachable AI for People who are Blind or Low Vision. In Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems (CHI EA '24). Association for Computing Machinery, New York, NY, USA, Article 403, 1–6. https://doi.org/10.1145/3613905.3648641
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Introduction
This dataset was gathered during the Vid2Real online video-based study, which investigates humans’ perception of robots' intelligence in the context of an incidental human-robot encounter. The dataset contains participants' questionnaire responses to four video study conditions, namely Baseline, Verbal, Body Language, and Body Language + Verbal. The videos depict a scenario where a pedestrian incidentally encounters a quadruped robot trying to enter a building. Depending on the study condition, the robot uses verbal commands or body language to ask the pedestrian for help. The differences between the conditions were manipulated using the robot’s verbal and expressive movement functionalities.

Dataset Purpose
The dataset includes human subjects' responses about the robot's social intelligence, used to validate the hypothesis that robot social intelligence is positively correlated with human compliance in an incidental human-robot encounter context. The video-based dataset was also developed to obtain empirical evidence that can be used to design future real-world HRI studies.

Dataset Contents
Four videos, each corresponding to a study condition. Four sets of Perceived Social Intelligence Scale (PSI) data, each corresponding to one study condition. Four sets of compliance likelihood questions; each set includes one Likert question and one free-form question. One set of Godspeed questionnaire data. One set of Anthropomorphism questionnaire data. A csv file containing the participants' demographic data, Likert scale data, and text responses. A data dictionary explaining the meaning of each of the fields in the csv file.

Study Conditions
There are four videos (i.e., study conditions); the scenarios are as follows. Baseline: The robot walks up to the entrance and waits for the pedestrian to open the door, without any additional behaviors. This is also the "control" condition. Verbal: The robot walks up to the entrance, says ”Can you please open the door for me” to the pedestrian while facing the same direction, then waits for the pedestrian to open the door. Body Language: The robot walks up to the entrance, turns its head to look at the pedestrian, then turns its head to face the door and waits for the pedestrian to open the door. Body Language + Verbal: The robot walks up to the entrance, turns its head to look at the pedestrian, says ”Can you open the door for me” to the pedestrian, then waits for the pedestrian to open the door. (Images illustrate the Verbal and Body Language conditions.)

A within-subject design was adopted, and all participants experienced all conditions. The order of the videos, as well as the PSI scales, was randomized. After giving consent, participants were presented with one video, followed by the PSI questions and the two exploratory (compliance likelihood) questions described above. This set was repeated four times, after which the participants answered questions about their general perceptions of the robot via the Godspeed and AMPH questionnaires. Each video was around 20 seconds, and the total study time was around 10 minutes.

Video as a Study Method
Video-based studies are a common data-collection method in human-robot interaction research. Videos can easily be distributed via online participant-recruiting platforms and can reach a larger sample than in-person or lab-based studies. It is therefore a fast and practical method for research aiming to obtain empirical evidence.

Video Filming
The videos were filmed from a first-person point of view to maximize the alignment between the video and real-world settings. The recording device was an iPhone 12 Pro, and the videos were shot in 4K at 60 fps. For better accessibility, the videos have been converted to lower resolutions.

Instruments
The questionnaires used in the study include the Perceived Social Intelligence Scale (PSI), the Godspeed Questionnaire, and the Anthropomorphism Questionnaire (AMPH). In addition to these questionnaires, a 5-point Likert question and a free-text response measuring human compliance were added for the purpose of the video-based study. Participant demographic data was also collected. Questionnaire items are attached as part of this dataset.

Human Subjects
Participants were recruited through Prolific and are therefore Prolific users. They were restricted to people currently living in the United States who are fluent in English and have no hearing or visual impairments; no other restrictions were imposed. Among the 385 participants, 194 identified as female and 191 as male; ages ranged from 19 to 75 (M = 38.53, SD = 12.86). Human subjects remained anonymous. Participants were compensated with $4 upon submission approval. This study was reviewed and approved by the UT Austin Institutional Review Board.

Robot
The dataset contains data about humans’ perceived...
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Socially-assistive robots (SARs) hold significant potential to transform the management of chronic healthcare conditions (e.g. diabetes, Alzheimer's, dementia) outside the clinic walls. However, doing so entails embedding such autonomous robots into people's daily lives and home living environments, which are deeply shaped by the cultural and geographic locations within which they are situated. This raises the question of whether we can design autonomous interactive behaviors between SARs and humans based on universal machine learning (ML) and deep learning (DL) models of robotic sensor data that would work across such diverse environments. To investigate this, we conducted a long-term user study with 26 participants across two diverse locations (the United States and South Korea), with SARs deployed in each user's home for several weeks. We collected robotic sensor data every second of every day, combined with sophisticated ecological momentary assessment (EMA) sampling techniques, to generate a large-scale dataset of over 270 million data points representing 173 hours of randomly sampled naturalistic interaction data between the human and the robot pet. Interaction behaviors included activities such as playing, petting, talking, and cooking.
Mobile devices connected to the internet are vulnerable to targeted attacks and security threats. In December 2023, the number of global mobile cyberattacks was approximately 5.4 million, up by 147 percent compared to December 2022. Cyberattacks targeting mobile devices had been decreasing since the end of 2020, after an annual peak of almost 6.4 million in October 2020.

Mobile concerns: Smishing
While mobile operating systems come with vulnerabilities requiring patching and regular maintenance, watchful usage can reduce users' risk of incurring security threats. Smishing attacks are especially reliant on users’ accidental mistakes or naivety. Smishing, or SMS phishing, uses text messages to lure users into accessing fake websites that request personal data, or into clicking on malicious download links that could infect the device with malware. In the first quarter of 2024, AdWare and RiskTool were the most frequently encountered types of mobile malware worldwide, while Trojan malware accounted for 11 percent of the total. Smishing attacks do not target regular users alone; they can also target organizations and professionals. In 2023, the share of IT professionals and organizations targeted by smishing attacks was found to be 75 percent.

Mobile app privacy
According to a survey of global consumers carried out in August 2021, both Android and iOS users appeared equally keen to stop using an app if their privacy expectations were not met. Mobile apps have to collect different types of data for functionality purposes, including app diagnostic and device data for location-based services. However, mobile apps also collect other types of more personal user data, such as search history, browsing history, health data, and financial information.
The data can then be used by the company that collected it in the first place (1st-party data), or shared with entities that do not have a direct relationship with the users and obtain the data from the main tracking source (3rd-party data). Social media apps, like other app categories, rely on acquiring 3rd-party data from users for their advertising business. As of February 2022, TikTok was found to have the highest number of potential 3rd-party trackers, followed by Telegram and Twitter.
The number of smartphone users in the Philippines was forecast to increase between 2024 and 2029 by a total of 5.6 million users (+7.29 percent). This overall increase does not happen continuously, notably not in 2026, 2027, 2028 and 2029. The smartphone user base is estimated to amount to 82.33 million users in 2029. Smartphone users here are limited to internet users of any age using a smartphone. The shown figures have been derived from survey data that has been processed to estimate missing demographics. The shown data are an excerpt of Statista's Key Market Indicators (KMI). The KMI are a collection of primary and secondary indicators on the macro-economic, demographic and technological environment in up to 150 countries and regions worldwide. All indicators are sourced from international and national statistical offices, trade associations and the trade press, and they are processed to generate comparable data sets (see supplementary notes under details for more information). Find more key insights for the number of smartphone users in countries like Thailand and Indonesia.
The number of mobile broadband connections in the Philippines was forecast to continuously increase between 2024 and 2029 by a total of 18.3 million connections (+20.46 percent). After the ninth consecutive year of growth, the number of connections is estimated to reach 107.69 million, a new peak, in 2029. Mobile broadband connections include cellular connections with a download speed of at least 256 kbit/s (without satellite or fixed-wireless connections). Cellular Internet-of-Things (IoT) or machine-to-machine (M2M) connections are excluded. The shown data are an excerpt of Statista's Key Market Indicators (KMI). The KMI are a collection of primary and secondary indicators on the macro-economic, demographic and technological environment in up to 150 countries and regions worldwide. All indicators are sourced from international and national statistical offices, trade associations and the trade press, and they are processed to generate comparable data sets (see supplementary notes under details for more information). Find more key insights for the number of mobile broadband connections in countries like Vietnam and Laos.
The smartphone penetration in the Philippines was forecast to continuously decrease between 2024 and 2029 by a total of 6.4 percentage points. According to this forecast, in 2029 the penetration will have decreased for the fourth consecutive year, to 65.75 percent. The penetration rate refers to the share of the total population. The shown data are an excerpt of Statista's Key Market Indicators (KMI). The KMI are a collection of primary and secondary indicators on the macro-economic, demographic and technological environment in up to 150 countries and regions worldwide. All indicators are sourced from international and national statistical offices, trade associations and the trade press, and they are processed to generate comparable data sets (see supplementary notes under details for more information). Find more key insights for the smartphone penetration in countries like Laos and Malaysia.