As of April 2024, almost 32 percent of global Instagram audiences were aged between 18 and 24 years, and 30.6 percent of users were aged between 25 and 34 years. Overall, 16 percent of users belonged to the 35 to 44 year age group.

Instagram users
With roughly one billion monthly active users, Instagram is among the most popular social networks worldwide. The social photo sharing app is especially popular in India and the United States, which have 362.9 million and 169.7 million Instagram users respectively.

Instagram features
One of the most popular features of Instagram is Stories. Users can post photos and videos to their Stories stream and the content is live for others to view for 24 hours before it disappears. In January 2019, the company reported that there were 500 million daily active Instagram Stories users. Instagram Stories directly competes with Snapchat, another photo sharing app that initially became famous for its "vanishing photos" feature. As of the second quarter of 2021, Snapchat had 293 million daily active users.
https://www.futurebeeai.com/data-license-agreement
Welcome to the Native American Child Faces Dataset, meticulously curated to enhance face recognition models and support the development of advanced biometric identification systems, child identification models, and other facial recognition technologies.
This dataset comprises over 3,000 child image sets, divided into participant-wise sets with each set including:
The dataset includes contributions from a diverse network of children across Native American countries:
To ensure high utility and robustness, all images are captured under varying conditions:
Each facial image set is accompanied by detailed metadata for each participant, including:
This metadata is essential for training models that can accurately recognize and identify children's faces across different demographics and conditions.
This facial image dataset is ideal for various applications in the field of computer vision, including but not limited to:
https://www.futurebeeai.com/data-license-agreement
Welcome to the Native American Facial Images from Past Dataset, meticulously curated to enhance face recognition models and support the development of advanced biometric identification systems, KYC models, and other facial recognition technologies.
This dataset comprises over 5,000 images, divided into participant-wise sets with each set including:
The dataset includes contributions from a diverse network of individuals across Native American countries:
To ensure high utility and robustness, all images are captured under varying conditions:
Each image set is accompanied by detailed metadata for each participant, including:
This metadata is essential for training models that can accurately recognize and identify Native American faces across different demographics and conditions.
This facial image dataset is ideal for various applications in the field of computer vision, including but not limited to:
The image data of captive giant pandas used in this study all come from the Chengdu Research Base of Giant Panda Breeding and its partner units, such as Yunnan Wildlife Park, Suzhou Wildlife Park and Shenzhen Wildlife Park. Using a Panasonic DVX200 video camera and three digital cameras (Canon 1DX Mark II, Canon 5D Mark III, and a Panasonic Lumix DMC-GH4), image data of 218 captive pandas were collected. All data were cleaned and annotated to establish a larger giant panda age image database.
https://www.futurebeeai.com/data-license-agreement
Welcome to the East Asian Facial Images from Past Dataset, meticulously curated to enhance face recognition models and support the development of advanced biometric identification systems, KYC models, and other facial recognition technologies.
This dataset comprises over 10,000 images, divided into participant-wise sets with each set including:
The dataset includes contributions from a diverse network of individuals across East Asian countries:
To ensure high utility and robustness, all images are captured under varying conditions:
Each image set is accompanied by detailed metadata for each participant, including:
This metadata is essential for training models that can accurately recognize and identify East Asian faces across different demographics and conditions.
This facial image dataset is ideal for various applications in the field of computer vision, including but not limited to:
Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
The dataset for face anti-spoofing and face recognition includes images and videos of Caucasian people. The dataset helps enhance model performance by providing a wider range of data for a specific ethnic group.
The videos were gathered by capturing the faces of genuine individuals as well as spoofed facial presentations. Our dataset proposes a novel approach that learns and detects spoofing techniques, extracting features from the genuine facial images to prevent such information from being captured by fraudulent users.
The dataset contains images and videos of real humans with various resolutions, views, and colors, making it a comprehensive resource for researchers working on anti-spoofing technologies.
Our dataset also explores the use of neural architectures, such as deep neural networks, to facilitate the identification of distinguishing patterns and textures in different regions of the face, increasing the accuracy and generalizability of the anti-spoofing models.
The dataset consists of:
- files: 10 folders, one per person, each containing 1 image and 1 video
- .csv file: information about the files and people in the dataset
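As a rough illustration, one way to iterate the layout described above (per-person subfolders plus a CSV index) is sketched below; the file name meta.csv and the person_id column are assumptions, not documented parts of the dataset.

```python
# Hypothetical walk of the folder/CSV layout described above.
import csv
from pathlib import Path

root = Path("files")                       # assumed root folder name
with open("meta.csv", newline="") as f:    # assumed CSV index file name
    index = {row["person_id"]: row for row in csv.DictReader(f)}

for person_dir in sorted(root.iterdir()):
    if not person_dir.is_dir():
        continue
    images = list(person_dir.glob("*.jpg")) + list(person_dir.glob("*.png"))
    videos = list(person_dir.glob("*.mp4"))
    meta = index.get(person_dir.name, {})
    print(person_dir.name, f"{len(images)} image(s), {len(videos)} video(s)", meta)
```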
keywords: liveness detection systems, liveness detection dataset, biometric dataset, biometric data dataset, biometric system attacks, anti-spoofing dataset, face liveness detection, deep learning dataset, face spoofing database, face anti-spoofing, ibeta dataset, face anti spoofing, large-scale face anti spoofing, rich annotations anti spoofing dataset
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
RESEARCH APPROACH
The research approach adopted for the study consists of seven phases, as shown in Figure 1:
Pre-acquisition
Data pre-processing
Raw images collection
Image pre-processing
Naming of images
Dataset Repository
Performance Evaluation
The different phases in the study are discussed in the sections below.
PRE-ACQUISITION
The volunteers are given a brief orientation on how their data will be managed and used for research purposes only. After a volunteer agrees, a consent form is given to be read and signed. A sample of the consent form filled in by the volunteers is shown in Figure 1.
The capturing of images started with the setup of the imaging device. The camera was set up on a tripod stand in a stationary position at a height of 90 from the floor and a distance of 20 cm from the subject.
EAR IMAGE ACQUISITION
Image acquisition is the act of retrieving an image from an external source for further processing. It is a purely hardware-dependent process in which unprocessed images of the volunteers are captured using a professional camera, with the subject posing in front of the camera. It is also the process through which a digital representation of a scene is obtained; this representation is known as an image and its elements are called pixels (picture elements). The imaging sensor/camera used in this study is a Canon EOS 60D professional camera, placed at a distance of 3 feet from the subject and 20m from the ground.
This is the first step of the project towards its aim of developing an occlusion- and pose-sensitive image dataset for black ear recognition (the OPIB ear dataset). To achieve the objectives of this study, a set of black ear images was collected, mostly from undergraduate students at a public university in Nigeria.
The image dataset required is captured in two scenarios:
The captured dataset consists purely of black ear images with partial occlusion, collected in constrained and unconstrained environments.
The ear images were captured from black subjects in a controlled environment. To make the OPIB dataset pose invariant, the volunteers stood on marked positions on the floor indicating the angles at which the imaging sensor captured the volunteers' ears. Capturing the images in this category required each subject to stand and rotate to angles of 60°, 30° and 0° towards their right side to capture the left ear, and then towards the left to capture the right ear (Fernando et al., 2017), as shown in Figure 4. Six (6) images were captured per subject at angles 60°, 30° and 0° for the left and right ears of 152 volunteers, making a total of 907 images (five volunteers had 5 images instead of 6; hence folders 34, 22, 51, 99 and 102 contain 5 images).
To make the OPIB dataset occlusion and pose sensitive, partial occlusion of the subjects' ears was simulated using rings, hearing aids, scarves, earphones/ear pods, etc. before the images were captured.
CONSENT FORM
This form was designed to obtain participants' consent for the project titled: An Occlusion and Pose Sensitive Image Dataset for Black Ear Recognition (OPIB). The information is needed purely for academic research purposes; the ear images collected will be curated anonymously and the identity of the volunteers will not be shared with anyone. The images will be uploaded to an online repository to aid research in ear biometrics.
The participation is voluntary, and the participant can withdraw from the project any time before the final dataset is curated and warehoused.
Kindly sign the form to signify your consent.
I consent to my image being recorded in form of still images or video surveillance as part of the OPIB ear images project.
Tick as appropriate:
GENDER Male Female
AGE (18-25) (26-35) (36-50)
………………………………..
SIGNED
Figure 1: Sample of Subject’s Consent Form for the OPIB ear dataset
RAW IMAGE COLLECTION
The ear images were captured using a digital camera set to JPEG. If the camera format were set to raw, no in-camera processing would be applied and the stored file would retain more tonal and colour data; with JPEG, however, the image data is processed, compressed and stored in the appropriate folders.
IMAGE PRE-PROCESSING
The aim of pre-processing is to improve the quality of the images with regard to contrast, brightness and other metrics. It also includes operations such as cropping, resizing and rescaling, which are important aspects of image analysis aimed at dimensionality reduction. The images were downloaded to a laptop for processing in MATLAB.
Image Cropping
The first step in image pre-processing is image cropping. Irrelevant parts of the image are removed so that the Region of Interest (ROI) is retained, and the cropping tool provides the user with the size of the cropped image. MATLAB's image cropping function performs this operation interactively, waiting for the user to specify the crop rectangle with the mouse on the current axes. The output images of the cropping process are of the same class as the input image.
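For illustration, a minimal equivalent of this crop-and-resize step is sketched below in Python (the study itself used MATLAB's interactive cropping). The folder names and ROI coordinates are placeholders; the 224x224 target matches the resolution reported in Table 2.

```python
# Sketch: crop a fixed, hypothetical ear ROI and resize to 224x224.
from pathlib import Path
from PIL import Image

ROI = (250, 180, 650, 780)      # (left, upper, right, lower) - placeholder coordinates
OUT_SIZE = (224, 224)           # resolution reported for the OPIB dataset

out_dir = Path("processed")
out_dir.mkdir(exist_ok=True)

for src in Path("raw_images").glob("*.jpg"):   # assumed input folder
    img = Image.open(src).convert("RGB")
    ear = img.crop(ROI).resize(OUT_SIZE)       # crop the ear region, then downscale
    ear.save(out_dir / src.name)
```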
Naming of OPIB Ear Images
The OPIB ear images were labelled following the naming convention formulated in this study, as shown in Figure 5. The images are given unique names that specify the subject, the side of the ear (left or right) and the angle of capture. The first and second letters (SU) in the image names are block letters representing "subject", for subjects 1 to n in the dataset, while the left and right ears are distinguished using L1, L2, L3 and R1, R2, R3 for angles 60°, 30° and 0°, respectively, as shown in Table 1.
Table 1: Naming Convention for OPIB ear images
NAMING CONVENTION

Label | Meaning
Degrees | 60°, 30°, 0°
No. of the degree | 1, 2, 3
SU1 | Subject 1 (first subject in the dataset)
SUn | Subject n (last subject in the dataset)
L1 ... Ln | Left ear image 1 ... n
R1 ... Rn | Right ear image 1 ... n

Example image names for one subject:
SU1L1  SU1R1
SU1L2  SU1R2
SU1L3  SU1R3
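A short sketch of how names following this convention (Table 1) could be generated programmatically; the function name is ours, for illustration only.

```python
# Map capture angle (degrees) to the index used in OPIB file names.
ANGLE_INDEX = {60: 1, 30: 2, 0: 3}

def opib_name(subject: int, side: str, angle_deg: int) -> str:
    """Build a name like SU1L1 from subject number, ear side and angle."""
    side = side.upper()
    assert side in ("L", "R") and angle_deg in ANGLE_INDEX
    return f"SU{subject}{side}{ANGLE_INDEX[angle_deg]}"

# All six names for subject 1: SU1L1, SU1L2, SU1L3, SU1R1, SU1R2, SU1R3
names = [opib_name(1, s, a) for s in "LR" for a in (60, 30, 0)]
```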
OPIB EAR DATASET EVALUATION
The prominent challenges with current evaluation practices in the field of ear biometrics are the use of different databases, different evaluation metrics, and different classifiers that mask the feature extraction performance, as well as the time spent developing evaluation frameworks (Abaza et al., 2013; Emeršič et al., 2017).
The toolbox provides an environment in which the evaluation of methods for person recognition based on ear biometric data is simplified. It executes all the dataset reads and classification based on ear descriptors.
DESCRIPTION OF OPIB EAR DATASET
The OPIB ear dataset is organised into folders, each containing 6 images of the same person. The images were captured for both the left and right ear at angles of 0, 30 and 60 degrees, with occlusions such as earrings, scarves and headphones. The dataset was collected both indoors and outdoors from students at a public university in Nigeria; 40.35% of the volunteers were female and 59.65% male. The images were captured with a professional camera (Nikon D 350) set up on a camera stand, with individuals captured in an orderly process. A total of 907 images was gathered.
Challenges were encountered in gathering students for capture, processing the images and annotating them. The volunteers were given a brief orientation on what their ear images would be used for before capture and processing. Arranging the ear images into folders and naming them accordingly was a considerable task.
Table 2: Overview of the OPIB Ear Dataset
Location: Both indoor and outdoor environments
Information about volunteers: Students
Gender: Female (40.35%) and male (59.65%)
Head side: Left and right
Total number of volunteers: 152
Images per subject: 3 images of left ear and 3 images of right ear
Total images: 907
Age group: 18 to 35 years
Colour representation: RGB
Image resolution: 224x224
An MRI data set that demonstrates the utility of a mega-analytic approach by identifying the effects of age and gender on the resting-state networks (RSNs) of 603 healthy adolescents and adults (mean age: 23.4 years, range: 12-71 years). Data were collected on the same scanner, preprocessed using an automated analysis pipeline based in SPM, and studied using group independent component analysis. RSNs were identified and evaluated in terms of three primary outcome measures: time course spectral power, spatial map intensity, and functional network connectivity. Results revealed robust effects of age on all three outcome measures, largely indicating decreases in network coherence and connectivity with increasing age. Gender effects were of smaller magnitude but suggested stronger intra-network connectivity in females and more inter-network connectivity in males, particularly with regard to sensorimotor networks. These findings, along with the analysis approach and statistical framework described, provide a useful baseline for future investigations of brain networks in health and disease.
https://www.futurebeeai.com/data-license-agreement
Welcome to the East Asian Child Faces Dataset, meticulously curated to enhance face recognition models and support the development of advanced biometric identification systems, child identification models, and other facial recognition technologies.
This dataset comprises over 5,000 child image sets, divided into participant-wise sets with each set including:
The dataset includes contributions from a diverse network of children across East Asian countries:
To ensure high utility and robustness, all images are captured under varying conditions:
Each facial image set is accompanied by detailed metadata for each participant, including:
This metadata is essential for training models that can accurately recognize and identify children's faces across different demographics and conditions.
This facial image dataset is ideal for various applications in the field of computer vision, including but not limited to:
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The dataset contains x-ray images, mammography, from breast cancer screening at the Karolinska University Hospital, Stockholm, Sweden, collected by principal investigator Fredrik Strand at Karolinska Institutet. The purpose of compiling the dataset was to perform AI research to improve screening, diagnostics and prognostics of breast cancer. The dataset is based on a selection of cases with and without a breast cancer diagnosis, taken from a more comprehensive source dataset. 1,103 cases of first-time breast cancer for women in the screening age range (40-74 years) during the included time period (November 2008 to December 2015) were included; of these, a random selection of 873 cases have been included in the published dataset. A random selection of 10,000 healthy controls during the same time period were included; of these, a random selection of 7,850 have been included in the published dataset. For each individual, all screening mammograms, also repeated over time, were included, as well as the date of screening and the age. In addition, there are pixel-level annotations of the tumors created by a breast radiologist (small lesions such as micro-calcifications have been annotated as an area). Annotations were also drawn in mammograms prior to diagnosis; if these contain a single pixel, it means no cancer was seen but the estimated location of the center of the future cancer was indicated by a single-pixel annotation. In addition to images, the dataset also contains cancer data created at the Karolinska University Hospital and extracted through the Regional Cancer Center Stockholm-Gotland. This data contains information about the time of diagnosis and cancer characteristics including tumor size, histology and lymph node metastasis. The precision of non-image data was decreased, through categorisation and jittering, to ensure that no single individual can be identified. The following types of files are available:
- CSV: The following data is included (if applicable): cancer/no cancer (meaning breast cancer during 2008 to 2015), age group at screening, days from image to diagnosis (if any), cancer histology, cancer size group, ipsilateral axillary lymph node metastasis. There is one CSV file for the entire dataset, with one row per image. Any information about cancer diagnosis is repeated for all rows for an individual who was diagnosed (i.e., it is also included in rows before diagnosis). For each exam date there is the assessment by radiologist 1, radiologist 2 and the consensus decision.
- DICOM: Mammograms. For each screening, four images for the standard views were acquired: left and right, mediolateral oblique and craniocaudal. There should be four files per examination date.
- PNG: Cancer annotations, for each DICOM image containing a visible tumor.
Access: The dataset is available upon request due to the size of the material. The image files in DICOM and PNG format comprise approximately 2.5 TB. Access to the CSV file including parametric data is possible via download as associated documentation.
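A hedged sketch of how the per-image CSV described above might be used with pandas; the file and column names (dataset_index.csv, anon_id, cancer, days_to_diagnosis) are placeholders, since the exact schema is not given in this summary.

```python
# Sketch under assumed column names: one row per DICOM image,
# cancer coded 0/1 and repeated across all rows of a diagnosed individual.
import pandas as pd

rows = pd.read_csv("dataset_index.csv")

# Per-individual label: an individual counts as a case if any row is marked cancer.
cases = rows.groupby("anon_id")["cancer"].max()
print("cases:", int(cases.sum()), "controls:", int((cases == 0).sum()))

# Example: keep only mammograms acquired before the diagnosis (prior exams).
priors = rows[(rows["cancer"] == 1) & (rows["days_to_diagnosis"] > 0)]
```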
Population distribution : the race distribution is Asians, Caucasians and black people, the gender distribution is male and female, the age distribution is from children to the elderly
Collecting environment : including indoor and outdoor scenes (such as supermarket, mall and residential area, etc.)
Data diversity : different ages, different time periods, different cameras, different human body orientations and postures, different collecting environments
Device : surveillance cameras, the image resolution is not less than 1,920 x 1,080
Data format : the image data format is .jpg, the annotation file format is .json
Annotation content : human body rectangular bounding boxes, 15 human body attributes
Quality Requirements : a rectangular bounding box of a human body is qualified when the deviation is not more than 3 pixels, and the qualified rate of the bounding boxes shall not be lower than 97%; annotation accuracy of attributes is over 97% (see the sketch below)
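The stated quality requirement can be expressed as a simple check. The sketch below assumes boxes are given as (x1, y1, x2, y2) corner coordinates and compares annotations against reference boxes; this is illustrative, not part of the dataset's tooling.

```python
# A box qualifies when every edge deviates from the reference by at most 3 pixels;
# the acceptance criterion is that at least 97% of boxes qualify.
def box_qualified(annotated, reference, tol=3):
    return all(abs(a - r) <= tol for a, r in zip(annotated, reference))

def qualified_rate(annotated_boxes, reference_boxes):
    checks = [box_qualified(a, r) for a, r in zip(annotated_boxes, reference_boxes)]
    return sum(checks) / len(checks)

# Acceptance: qualified_rate(annotated_boxes, reference_boxes) >= 0.97
```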
| Fieldname | Definition |
| --- | --- |
| age_group | the lowest limit of the age group (e.g. 0 represents 0 up to 5, and 5 represents 5 up to 10) |
| population_percent | proportion of the population in the age group to the total population |
| age_count | |
Race distribution : Asians, Caucasians, black people
Gender distribution : gender balance
Age distribution : ranging from teenagers to the elderly; the middle-aged and young people are the majority
Collecting environment : including indoor and outdoor scenes
Data diversity : different shooting heights, different ages, different light conditions, different collecting environment, clothes in different seasons, multiple human poses
Device : cameras
Data format : the data format is .jpg/mp4, the annotation file format is .json, the camera parameter file format is .json, the point cloud file format is .pcd
Accuracy : based on the accuracy of the poses, the accuracy exceeds 97%; the accuracy of labels for gender, race, age, collecting environment and clothes is more than 97%
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A verb, as a fundamental part of a sentence, is important, and its retrieval consists of different cognitive stages. Verb retrieval difficulty is reported in some types of aphasia and other neurological diseases, and some psycholinguistic variables can influence the verb retrieval process. This study aimed to provide a normative database in the Persian language for 92 black-and-white action pictures and related verbs in two age groups: young (20 to 40 years old) and middle-aged (41 to 64 years old). A total of 150 volunteers participated in this study, and the groups had similar education levels. The pictures were normed for variables such as name agreement, familiarity, visual complexity, age of acquisition, and image agreement. Correlation coefficients were calculated among these measures, and comparisons were made between the two age groups. The comparisons showed that name agreement and familiarity were age-dependent, and all measures varied with age. The present study thus provides a set of verbs and their pictures in the Persian language, with normative data on the psycholinguistic variables, that can be used for clinical practice and research in the areas of verb processing and naming.
GapMaps Live is an easy-to-use location intelligence platform available across 25 countries globally that allows you to visualise your own store data, combined with the latest demographic, economic and population movement intel right down to the micro level so you can make faster, smarter and surer decisions when planning your network growth strategy.
With one single login, you can access the latest estimates on resident and worker populations, census metrics (e.g. age, income, ethnicity), consuming class, retail spend insights and point-of-interest data across a range of categories including fast food, cafe, fitness, supermarket/grocery and more.
Some of the world's biggest brands including McDonalds, Subway, Burger King, Anytime Fitness and Dominos use GapMaps Live as a vital strategic tool where business success relies on up-to-date, easy to understand, location intel that can power business case validation and drive rapid decision making.
Primary use cases for GapMaps Live include:
Some of the features our clients love about GapMaps Live include:
- View business locations, competitor locations, demographic, economic and social data around your business or selected location
- Understand consumer visitation patterns ("where from" and "where to"), frequency of visits, dwell time of visits, profiles of consumers and much more
- Save searched locations and drop pins
- Turn on/off all location listings by category
- View and filter data by metadata tags, for example hours of operation, contact details, services provided
- Combine public data in GapMaps with views of private data layers
- View data in layers to understand the impact of different data sources
- Share maps with teams
- Generate demographic reports and comparative analyses on different locations based on drive time, walk time or radius
- Access multiple countries and brands with a single logon
- Access multiple brands under a parent login
- Capture field data such as photos, notes and documents using GapMaps Connect and integrate with GapMaps Live to get detailed insights on existing and proposed store locations
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Content
This work is a derivative of the COBRE sample found in the International Neuroimaging Data-sharing Initiative (INDI), originally released under Creative Commons Attribution Non-Commercial. It includes preprocessed resting-state functional magnetic resonance images for 72 patients diagnosed with schizophrenia (58 males, age range = 18-65 yrs) and 74 healthy controls (51 males, age range = 18-65 yrs). The fMRI dataset for each subject is a single nifti file (.nii.gz) featuring 150 EPI blood-oxygenation level dependent (BOLD) volumes obtained in 5 min (TR = 2 s, TE = 29 ms, FA = 75°, 32 slices, voxel size = 3x3x4 mm3, matrix size = 64x64, FOV = mm2). The data processing as well as packaging was implemented by Pierre Bellec, CRIUGM, Department of Computer Science and Operations Research, University of Montreal, 2016.

The COBRE preprocessed fMRI release more specifically contains the following files:
- README.md: a markdown (text) description of the release.
- phenotypic_data.tsv.gz: a gzipped tab-separated value file, with each column representing a phenotypic variable as well as measures of data quality (related to motion). Each row corresponds to one participant, except the first row, which contains the names of the variables (see file below for a description).
- keys_phenotypic_data.json: a json file describing each variable found in phenotypic_data.tsv.gz.
- fmri_XXXXXXX.tsv.gz: a gzipped tab-separated value file, with each column representing a confounding variable for the time series of participant XXXXXXX (the same participant ID found in phenotypic_data.tsv.gz). Each row corresponds to a time frame, except for the first row, which contains the names of the variables (see file below for a definition).
- keys_confounds.json: a json file describing each variable found in the files fmri_XXXXXXX.tsv.gz.
- fmri_XXXXXXX.nii.gz: a 3D+t nifti volume at 6 mm isotropic resolution, stored as short (16 bit) integers, in the MNI non-linear 2009a symmetric space (http://www.bic.mni.mcgill.ca/ServicesAtlases/ICBM152NLin2009). Each fMRI dataset features 150 volumes.

Usage recommendations
Individual analyses: You may want to remove some time frames with excessive motion for each subject; see the confounding variable called scrub in fmri_XXXXXXX.tsv.gz. After removing these time frames there may not be enough usable data, so we recommend a minimum of 60 time frames. A fairly large number of confounds have been made available as part of the release (slow time drifts, motion parameters, frame displacement, scrubbing, average WM/Vent signal, COMPCOR, global signal). We strongly recommend regression of slow time drifts. Everything else is optional.
Group analyses: There will also be some residual effects of motion, which you may want to regress out from connectivity measures at the group level. The number of acceptable time frames as well as a measure of residual motion (called frame displacement, as described by Power et al., NeuroImage 2012) can be found in the variables Frames OK and FD scrubbed in phenotypic_data.tsv.gz. Finally, the simplest use case with these data is to predict the overall presence of a diagnosis of schizophrenia (values Control or Patient in the phenotypic variable Subject Type). You may want to try to match the control and patient samples in terms of amounts of motion, as well as age and sex.
Note that more detailed diagnostic categories are available in the variable Diagnosis.

Preprocessing
The datasets were analysed using the NeuroImaging Analysis Kit (NIAK, https://github.com/SIMEXP/niak) version 0.17, under CentOS version 6.3 with Octave (http://gnu.octave.org) version 4.0.2 and the Minc toolkit (http://www.bic.mni.mcgill.ca/ServicesSoftware/ServicesSoftwareMincToolKit) version 0.3.18. Each fMRI dataset was corrected for inter-slice differences in acquisition time and the parameters of a rigid-body motion were estimated for each time frame. Rigid-body motion was estimated within as well as between runs, using the median volume of the first run as a target. The median volume of one selected fMRI run for each subject was coregistered with a T1 individual scan using Minctracc (Collins and Evans, 1998), which was itself non-linearly transformed to the Montreal Neurological Institute (MNI) template (Fonov et al., 2011) using the CIVET pipeline (Ad-Dab'bagh et al., 2006). The MNI symmetric template was generated from the ICBM152 sample of 152 young adults, after 40 iterations of non-linear coregistration. The rigid-body transform, fMRI-to-T1 transform and T1-to-stereotaxic transform were all combined, and the functional volumes were resampled in the MNI space at a 6 mm isotropic resolution.

Note that a number of confounding variables were estimated and are made available as part of the release. WARNING: no confounds were actually regressed from the data, so this can be done interactively by the user, who will be able to explore different analytical paths easily. The "scrubbing" method of Power et al. (2012) was used to identify the volumes with excessive motion (frame displacement greater than 0.5 mm). A minimum of 60 unscrubbed volumes per run, corresponding to ~120 s of acquisition, is recommended for further analysis. The following nuisance parameters were estimated: slow time drifts (basis of discrete cosines with a 0.01 Hz high-pass cut-off), average signals in conservative masks of the white matter and the lateral ventricles as well as the six rigid-body motion parameters (Giove et al., 2009), anatomical COMPCOR signal in the ventricles and white matter (Chai et al., 2012), and a PCA-based estimator of the global signal (Carbonell et al., 2011). The fMRI volumes were not spatially smoothed.

References
Ad-Dab'bagh, Y., Einarson, D., Lyttelton, O., Muehlboeck, J. S., Mok, K., Ivanov, O., Vincent, R. D., Lepage, C., Lerch, J., Fombonne, E., Evans, A. C., 2006. The CIVET Image-Processing Environment: A Fully Automated Comprehensive Pipeline for Anatomical Neuroimaging Research. In: Corbetta, M. (Ed.), Proceedings of the 12th Annual Meeting of the Human Brain Mapping Organization. NeuroImage, Florence, Italy.
Bellec, P., Rosa-Neto, P., Lyttelton, O. C., Benali, H., Evans, A. C., Jul. 2010. Multi-level bootstrap analysis of stable clusters in resting-state fMRI. NeuroImage 51 (3), 1126-1139. http://dx.doi.org/10.1016/j.neuroimage.2010.02.082
Carbonell, F., Bellec, P., Shmuel, A., 2011. Validation of a superposition model of global and system-specific resting state activity reveals anti-correlated networks. Brain Connectivity 1 (6), 496-510. doi:10.1089/brain.2011.0065
Chai, X. J., Castañón, A. N., Öngür, D., Whitfield-Gabrieli, S., Jan. 2012. Anticorrelations in resting state networks without global signal regression. NeuroImage 59 (2), 1420-1428. http://dx.doi.org/10.1016/j.neuroimage.2011.08.048
Collins, D. L., Evans, A. C., 1997. Animal: validation and applications of nonlinear registration-based segmentation. International Journal of Pattern Recognition and Artificial Intelligence 11, 1271-1294.
Fonov, V., Evans, A. C., Botteron, K., Almli, C. R., McKinstry, R. C., Collins, D. L., Jan. 2011. Unbiased average age-appropriate atlases for pediatric studies. NeuroImage 54 (1), 313-327. http://dx.doi.org/10.1016/j.neuroimage.2010.07.033
Giove, F., Gili, T., Iacovella, V., Macaluso, E., Maraviglia, B., Oct. 2009. Images-based suppression of unwanted global signals in resting-state functional connectivity studies. Magnetic Resonance Imaging 27 (8), 1058-1064. http://dx.doi.org/10.1016/j.mri.2009.06.004
Power, J. D., Barnes, K. A., Snyder, A. Z., Schlaggar, B. L., Petersen, S. E., Feb. 2012. Spurious but systematic correlations in functional connectivity MRI networks arise from subject motion. NeuroImage 59 (3), 2142-2154. http://dx.doi.org/10.1016/j.neuroimage.2011.10.018
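To make the individual-analysis recommendations above concrete, here is a hedged Python sketch of dropping scrubbed frames and regressing slow time drifts. The confound column names used ("scrub", columns starting with "slow_drift") are assumptions and should be checked against keys_confounds.json.

```python
# Sketch: load one subject's confounds and fMRI volume, keep unscrubbed
# frames, and regress slow-time-drift confounds from the voxel time series.
import numpy as np
import pandas as pd
import nibabel as nib

subject = "0040000"                                   # hypothetical participant ID
conf = pd.read_csv(f"fmri_{subject}.tsv.gz", sep="\t")
img = nib.load(f"fmri_{subject}.nii.gz")
data = img.get_fdata()                                # 4D array: x, y, z, t

keep = conf["scrub"].to_numpy() == 0                  # assumed coding: 1 flags a frame to drop
n_keep = int(keep.sum())
if n_keep < 60:
    raise ValueError("fewer than 60 usable frames; subject not recommended for analysis")

ts = data[..., keep].reshape(-1, n_keep).T            # time x voxels

# Regress out slow time drifts (discrete cosine confounds); column names assumed.
drift_cols = [c for c in conf.columns if c.startswith("slow_drift")]
X = np.column_stack([np.ones(n_keep)] + [conf.loc[keep, c].to_numpy() for c in drift_cols])
beta, *_ = np.linalg.lstsq(X, ts, rcond=None)
ts_clean = ts - X @ beta
```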
This data set was collected from 2004 to 2006 in the United Kingdom. Subjects were adult males and females: some were healthy (control group), some had age-related macular degeneration (AMD group), and some were diabetic patients (diabetic group). Unfortunately, no other information from this time exists about these subjects.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Age Classification is a dataset for classification tasks - it contains Age Groups annotations for 2,772 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Purpose
For the purpose of informing tobacco intervention programs, this dataset was created and used to explore how online social networks of smokers differed from those of nonsmokers. The study was a secondary analysis of data collected as part of a randomized control trial conducted within Facebook. (See "Other References" in "Metadata" for parent study information.)

Basic description of 4 anonymized data files of study participants:
- fbr_friends: Anonymized Facebook friends networks, basic ego demographics, basic ego social media activity
- fbr_family: Anonymized Facebook family networks, basic ego demographics, basic ego social media activity
- fbr_photos: Anonymized Facebook photo networks, basic ego demographics, basic ego social media activity
- fbr_groups: Anonymized Facebook group networks, basic ego demographics, basic ego social media activity

Each network comprises the ego, the ego's first-degree connections, and the (second-degree) connections between the ego's friends. Missing data and users who did not have friend, family, photo, or group networks were cleaned from the data beforehand. Each data file contains the following columns of data, taken with participant knowledge and consent:
- participant_id: Non-identifying IDs assigned to different study participants.
- is_smoker: Binary value (0,1) that takes the value 1 if the participant was a smoker and 0 otherwise.
- gender: One of three categories: male, female, or blank, which signified Other (different from missing data).
- country: One of four categories: Canada (ca), US (us), Mexico (mx), or Other (xx).
- likes_count: Numeric data indicating the number of Facebook likes the participant had made up to the date the data was collected.
- wall_count: Numeric data indicating the number of Facebook wall posts the participant had made up to the date the data was collected.
- t_count_page_views: Numeric data indicating the number of pages the participant had visited in the UbiQUITous app up to the date the data was collected.
- yearsOld: Numeric data indicating the age in years of the participant; right-censored at 90 years for data anonymity.
- vertices: Number of people in the participant's network.
- edges: Number of connections between people in the network.
- density: The portion of potential connections in a network that are actual connections; a network-level metric; calculated after removing ego and isolates.
- mean_betweenness_centrality: An average of the relative importance of all individuals within their own network; a network-level metric; calculated after removing ego and isolates.
- transitivity: The extent to which the relationship between two nodes in a network that are connected by an edge is transitive (calculated as the number of triads divided by all possible connections); a network-level metric; calculated after removing ego and isolates.
- mean_closeness: Average of how closely associated members are to one another; a network-level metric; calculated after removing ego and isolates.
- isolates2: Number of individuals with no connections other than to the ego; a network-level metric.
- diameter3: Maximum degree of separation between any two individuals in the network; a network-level metric; calculated after removing ego and isolates.
- clusters3: Number of subnetworks; a network-level metric; calculated after removing ego and isolates.
- communities3: Number of groups, sorted to increase dense connections within the group and decrease sparse connections outside it (i.e., to maximize modularity); a network-level metric; calculated after removing ego and isolates.
- modularity3: The strength of division of a network into communities (calculated as the fraction of ties between community members in excess of the expected number of ties within communities if ties were random); a network-level metric.

Detailed information on network metrics is in the associated manuscript: "An exploration of the Facebook social networks of smokers and non-smokers" by Fu L, Jacobs MA, Brookover J, Valente TW, Cobb NK, and Graham AL.
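For illustration only (this is not the study's code), several of the network-level metrics defined above could be computed for a single ego network with networkx roughly as follows, removing the ego and isolates first as described.

```python
# Sketch: compute dataset-style metrics for one ego network.
import networkx as nx

def network_metrics(g: nx.Graph, ego):
    h = g.copy()
    h.remove_node(ego)                      # drop the ego
    isolates = list(nx.isolates(h))         # nodes connected only to the ego
    n_isolates = len(isolates)              # cf. isolates2
    h.remove_nodes_from(isolates)
    n = max(h.number_of_nodes(), 1)
    return {
        "vertices": g.number_of_nodes(),
        "edges": g.number_of_edges(),
        "density": nx.density(h),
        "mean_betweenness_centrality": sum(nx.betweenness_centrality(h).values()) / n,
        "transitivity": nx.transitivity(h),
        "mean_closeness": sum(nx.closeness_centrality(h).values()) / n,
        "isolates2": n_isolates,
        "clusters3": nx.number_connected_components(h),
    }
```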
https://spdx.org/licenses/CC0-1.0.html
Wildlife that share habitats with humans with limited options for spatial avoidance must either tolerate frequent human encounters or concentrate their activity on those periods with the least risk of encountering people. Based on 5,259 camera trap images of adult wolves from eight territories, we analyzed the extent to which diel activity patterns in a highly cultivated landscape with extensive public access (Denmark) could be explained by diel variation in darkness, human activity, and prey (deer) activity. A resource selection function that contrasted every camera observation (use) with 24 alternative hourly observations from the same day (availability), revealed that diel activity correlated with all three factors simultaneously, with human activity having the strongest effect (negative), followed by darkness (positive) and deer activity (positive). A model incorporating these three effects had lower parsimony and classified use and availability observations just as well as a 'circadian' model that smoothed the use-availability ratio as a function of time of the day. Most of the selection for darkness was explained by variation in human activity, supporting the notion that nocturnality (proportion of observations registered at night vs. day at the equinox) is a proxy for temporal human avoidance. Contrary to our expectations, wolves were no more nocturnal in territories with unrestricted public access than in territories where public access was restricted to roads, possibly because wolves in all territories had few possibilities to walk more than a few hundred meters without crossing roads. Overall, Danish wolf packs were 6.5 (95% CI: 4.6-9.6) times more active at night than at daylight, which makes them amongst the most nocturnally active wolves reported so far. These results confirm the prediction that wolves in habitats with limited options for spatial human avoidance invest more in temporal avoidance.

Methods
Population monitoring and data collection
Since 2017, the Natural History Museum Aarhus and Aarhus University have monitored all wolves in Denmark for the Danish Environmental Protection Agency. The occurrence and turnover of individuals are registered from genetic markers obtained from scat, hair, saliva, or urine samples collected by systematic patrolling of forest roads and by snow tracking (active monitoring) as well as saliva samples from livestock kills obtained by the Danish Nature Agency. A territory was defined as the area patrolled by a single wolf, pair, or pack for a minimum of six months. The core areas and approximate territory extensions were estimated from the distribution of wolf signs (scats, tracks, kills, photos, etc.) within the landscape. With permission from the landowners, we placed wildlife cameras in places known (from the appearance of footprints, scats, or other signs) or suspected (leading lines in the landscape which from experience are known to be used by wolves when commuting, e.g. forest roads) to be used by the wolves within the territories. At locations with public access, visitors were informed about the presence of the cameras through signs containing project information and our contact details. We used cameras with fast trigger times able to record fast-moving species, recording videos and/or multiple pictures. Cameras were usually visited every two to six weeks, checking battery levels and changing memory cards.
Where possible, the wolves on the images were identified to age, defined as pup (born the same calendar year) or adult (not a pup; hence all grown-up wolves observed January-June were coded as adults) and coded in the database. If multiple wolves on the same photo or video sequence were identified as different ages or individuals, they were registered as different records in the database. Prior to the analyses, such doublets or triplets were removed, so only one unique camera observation entered the analysis as an observation unit. As the cameras were placed to maximize the number of wolf observations, sampling effort was concentrated in the central parts of the territories where wolf sign concentrations were highest. Cameras aimed at recording wolves were usually placed along trails and forest roads used by wolves when traversing their territories and at places with a high density of scats and footprints that indicated frequent use by wolves at a given time. In a subset of the territories we also had cameras placed in the terrain, optimized to register all large and medium-sized mammal species. For wolf population monitoring purposes, observations from both types of surveys were entered into the Danish National Database of Wolf Observations. The effort expended in terms of camera days was not registered in this database. Observations of general wildlife were logged in a separate database, which also included information on the effort expended in terms of the number of camera days (26,210 in total). Due to resource constraints, this database only contained a subset of the total number of camera observations available in the raw data. The wolf data used for this analysis were therefore drawn from the first database. The number of different camera locations resulting in wolf observations varied from 26 to 198 per territory (median: 51), and the total area covered (100% minimum convex polygon) by cameras delivering wolf data for the analysis ranged from 6.5 to 79.3 (median: 21.3) km2 per territory.

Selection of observations for analyses
We selected wolf camera trap data separated by a minimum of 5 minutes from eight independent territories from six areas (one territorial area was occupied by three different constellations of individuals during different time periods). This selection resulted in 5,259 camera observations of adults, 1,814 observations only showing pups (representing five litters from four territories), and 158 observations where the age could not be determined (excluded from the analyses). Of the 5,259 observations of adult wolves, 3,280 (62%) originated from cameras for monitoring wolves, 1,257 (24%) from cameras for monitoring general wildlife, and 722 (14%) from cameras where the initial purpose had not been recorded. As any dependence between observations within territories was accounted for in the statistical analyses by stating territory as a random effect (see below), we decided to use the full data set rather than a reduced data set based on observations separated by 30 minutes, as often recommended to avoid serial dependence of observations. Under all circumstances, increasing the minimum sampling interval from 5 to 30 minutes only reduced the data set by 250 observations and did not change the outcome of any of the statistical analyses. Digitized data on wildlife and human activity was available from three of the six territory areas (five of eight territories).
As ungulates, especially cervids (Cervidae, 'deer' hereafter) constitute the main and most selected prey type of wolves in Central Europe, we used 11,315 camera observations of deer (species composition: red deer Cervus elaphus: 60%, roe deer Capreolus capreolus: 31%, fallow deer Dama dama: 4%, unidentified deer species: 5%) to represent diel activity of prey. The 15,017 observations of humans were divided between 48% pedestrians, 16% bicyclists, 31% motorized vehicles, and 5% horse riders.

Seasonal definition
To account for seasonal variation, we divided the year into quartiles: November-January (mean day length at 56°N: 8.47 hours; range: 7.92-9.68 hours), February-April (11.97; 9.22-14.77), May-July (16.01; 14.83-16.55) and August-October (12.57; 9.73-15.32). These not only contrasted the two three-month periods with the shortest and longest daylengths, but also provided a good division of the ecological and reproductive annual cycle for wolf packs, which have young offspring in May-July, mobile offspring (frequenting rendezvous sites) in August-October, increasingly independent offspring through November-January, and a pre-parturition period from February-April, when last year's offspring have attained full independence.

Statistical analysis of diel activity patterns of wolves, deer, and humans
For each three-month period, we quantified the general variation in the diel activity of juvenile and adult wolves (W hereafter), deer (D), and humans (H), by modelling the relative frequency of observations per one-hour interval from midnight to midnight (0: 00:00-00:59, 1: 01:00-01:59, etc.). We used the R package 'mgcv' v. 1.8 to fit generalized additive models (GAM) with beta distribution and logit-link, an adaptive cyclical cubic smoothing spline for time, and territory ID as random effect. Models were visually validated by plotting standardized model residuals against fitted values. As data for human and deer activity was not available for all areas, extrapolation was necessary. As the season-specific diel activity curves for humans were highly correlated between the different study areas (Supporting information), we produced one season-specific diel activity function based on all data pooled. Among the deer, diel activity correlated less between fenced and unfenced areas. As we know from GPS data that red deer in fenced areas move shorter hourly distances around dusk and dawn than red deer in unfenced nature areas (P. Sunde and R.M. Mortensen, unpublished), we created one activity distribution for deer based on data from all three areas ("Deer-Total", abbreviated to "DT"), and one differentiated between fenced and unfenced areas ("Deer-Local", abbreviated to "DL"). To quantify the extent to which diel activity levels of adult wolves, deer, and humans were associated with light conditions and correlated internally throughout the year, we created a correlation matrix comprised of 24 * 365 = 8,760 hourly time observations, covering the entire year (1 January-31 December). For each hourly time observation, we assigned light conditions