Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
YouTube was launched in 2005. It was founded by three PayPal employees: Chad Hurley, Steve Chen, and Jawed Karim, who ran the company from an office above a small restaurant in San Mateo. The first...
In the most recently reported fiscal year, Google's revenue amounted to 348.16 billion U.S. dollars, largely made up of advertising revenue, which amounted to 264.59 billion U.S. dollars in 2024. As of October 2024, parent company Alphabet ranked first among worldwide internet companies, with a market capitalization of 2.02 trillion U.S. dollars.
Google’s revenue
Founded in 1998, Google is a multinational internet services corporation headquartered in California, United States. Initially conceived as a web search engine based on the PageRank algorithm, Google now offers a multitude of desktop, mobile, and online products. Google Search remains the company’s core web-based product, alongside advertising services, communication and publishing tools, development and statistical tools, and map-related products. Google also produces the mobile operating system Android, Chrome OS, and Google TV, as well as desktop and mobile applications such as the Google Chrome browser and mobile web applications based on pre-existing Google products. More recently, Google has been developing selected pieces of hardware, ranging from the Nexus series of mobile devices to smart home devices and driverless cars. Owing to its immense scale, Google also offers a crisis response service covering disasters, turmoil, and emergencies, as well as an open-source missing-person finder for times of disaster. Despite the vast scope of its products, the company still collects the majority of its revenue through online advertising on Google sites and Google Network websites. Other revenues are generated via product licensing and, most recently, digital content and mobile apps via the Google Play Store, a distribution platform for digital content. As of September 2020, some of the highest-grossing Android apps worldwide included mobile games such as Candy Crush Saga, Pokémon GO, and Coin Master.
Database Contents License (DbCL) v1.0: http://opendatacommons.org/licenses/dbcl/1.0/
This dataset provides a synthetic, daily record of financial market activity for companies involved in Artificial Intelligence (AI). It covers key financial metrics and events that could influence a company's stock performance, such as the launch of Llama by Meta, the launch of GPT by OpenAI, and the launch of Gemini by Google. It records how much each company spends on research and development (R&D) of its AI products and services, and how much revenue those products generate. The data spans January 1, 2015 to December 31, 2024 and includes information for three companies: OpenAI, Google, and Meta.
This data is available as a CSV file, which we analyze using pandas DataFrames. The analysis should be helpful for those working in the finance or stock market domains. From this dataset, we extract the following insights using Python (a minimal pandas sketch appears after the column list below):
1) How much did each company spend on R&D?
2) Revenue earned by each company
3) Date-wise impact on the stock
4) Events when the maximum stock impact was observed
5) AI revenue growth of each company
6) Correlation between the columns
7) Expenditure vs. revenue, year by year
8) Event impact analysis
9) Change in the index with respect to year and company
These are the main features/columns available in the dataset:
1) Date: This column indicates the specific calendar day for which the financial and AI-related data is recorded. It allows for time-series analysis of the trends and impacts.
2) Company: This column specifies the name of the company to which the data in that particular row belongs. Examples include "OpenAI" and "Meta".
3) R&D_Spending_USD_Mn: This column represents the Research and Development (R&D) spending of the company, measured in Millions of USD. It serves as an indicator of a company's investment in innovation and future growth, particularly in the AI sector.
4) AI_Revenue_USD_Mn: This column denotes the revenue generated specifically from AI-related products or services, also measured in Millions of USD. This metric highlights the direct financial success derived from AI initiatives.
5) AI_Revenue_Growth_%: This column shows the percentage growth of AI-related revenue for the company on a daily basis. It indicates the pace at which a company's AI business is expanding or contracting.
6) Event: This column captures any significant events or announcements made by the company that could potentially influence its financial performance or market perception. Examples include "Cloud AI launch," "AI partnership deal," "AI ethics policy update," and "AI speech recognition release." These events are crucial for understanding sudden shifts in stock impact.
7) Stock_Impact_%: This column quantifies the percentage change in the company's stock price on a given day, likely in response to the recorded financial metrics or events. It serves as a direct measure of market reaction.
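As a starting point, here is a minimal pandas sketch covering a few of the insights listed above (total R&D spending, yearly AI revenue, highest-impact events, and column correlations). The CSV file name is illustrative; replace it with the actual file shipped with the dataset.

```python
import pandas as pd

# Illustrative file name; use the actual CSV from the dataset.
df = pd.read_csv("ai_financial_market_daily.csv", parse_dates=["Date"])

# 1) Total R&D spending per company (USD Mn)
rd_total = df.groupby("Company")["R&D_Spending_USD_Mn"].sum()

# 5) Yearly AI revenue per company (USD Mn)
yearly_rev = (
    df.assign(Year=df["Date"].dt.year)
      .groupby(["Company", "Year"])["AI_Revenue_USD_Mn"].sum()
      .unstack("Company")
)

# 4) Five events with the largest absolute stock impact
top_events = df.loc[df["Stock_Impact_%"].abs().nlargest(5).index,
                    ["Date", "Company", "Event", "Stock_Impact_%"]]

# 6) Correlation between the numeric columns
corr = df[["R&D_Spending_USD_Mn", "AI_Revenue_USD_Mn",
           "AI_Revenue_Growth_%", "Stock_Impact_%"]].corr()

print(rd_total, yearly_rev, top_events, corr, sep="\n\n")
```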
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
If you have to have a G+ account (for YouTube, location services, or other reasons), here's how you can make it totally private! No one will be able to add you, send you spammy links, or otherwise annoy you. You need to visit the "Audience Settings" page: https://plus.google.com/u/0/settings/audience. You can then set a "custom audience"; usually you would use this to restrict your account to people from a specific geographic location or within a specific age range. In this case, we're going to choose a custom audience of "No-one". Check the box and hit save. Now, when people try to visit your Google+ profile, they'll see a "restricted" message. You can visit my G+ profile if you want to see this working (https://plus.google.com/114725651137252000986).
This action recognition dataset contains short video clips sourced from CCTV footage in existing CCTV datasets, as well as from YouTube and Google. Thirteen action categories are present: Fall, Grab, Gun, Hit, Kick, LyingDown, Run, Sit, Stand, Sneak, Struggle, Throw, Walk. Each category is represented by 200 video clips. Note that Throw, Kick, and Sneak contain 100 unique video clips each, which have been duplicated to reach 200.
For a given video clip name such as "NTU_fight0003_fall_2":
NTU: the source of the data
fight0003: the name of the video clip
fall: the action category
2: the clip number among clips sourced from the same original video file; in this case, the second clip.
Train and test splits have been generated for your convenience, including splits using 50% and 75% of the original dataset size. Within the text files, each video clip is marked 1 for training and 2 for evaluation.
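A small Python sketch of how these conventions could be consumed; the split-file layout (one "clip_name flag" pair per line) is an assumption, not documented above.

```python
def parse_clip_name(name: str) -> dict:
    """Split e.g. 'NTU_fight0003_fall_2' into its four components."""
    source, video, action, clip_no = name.split("_")
    return {"source": source, "video": video,
            "action": action, "clip": int(clip_no)}

def read_split(path: str):
    """Read a split file, assuming one 'clip_name flag' pair per line,
    where flag 1 = training and 2 = evaluation."""
    train, test = [], []
    with open(path) as f:
        for line in f:
            name, flag = line.split()
            (train if flag == "1" else test).append(name)
    return train, test

print(parse_clip_name("NTU_fight0003_fall_2"))
# {'source': 'NTU', 'video': 'fight0003', 'action': 'fall', 'clip': 2}
```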
This dataset was used in my PhD research, which identified that multiple actions occurring in a scene make the singular annotations of the video clips wildly impractical. YOLOv5 + StrongSORT was used to isolate each person instance in each video clip: the area each person occupies is cropped, extracted, and overlaid onto a blacked-out background in a new video clip, thereby ensuring only one person is ever present in a given clip. This approach improved action recognition performance by approximately 8% when using OpenPose/AlphaPose skeletal data combined with STGCN and 2sAGCN.
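A minimal sketch of the crop-and-overlay step described above, assuming per-frame bounding boxes have already been produced by a tracker such as YOLOv5 + StrongSORT; the function and variable names are illustrative.

```python
import numpy as np

def isolate_person(frame: np.ndarray, box: tuple) -> np.ndarray:
    """Copy the person's bounding-box region onto a black background.

    frame: HxWx3 image array (e.g., read with cv2.VideoCapture);
    box:   (x1, y1, x2, y2) pixel coordinates from the tracker.
    """
    x1, y1, x2, y2 = box
    out = np.zeros_like(frame)          # blacked-out background
    out[y1:y2, x1:x2] = frame[y1:y2, x1:x2]
    return out
```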
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
If using this dataset, please cite the following paper and the current Zenodo repository.
This dataset is described in detail in the following paper:
[1] Yao, Y., Stebner, A., Tuytelaars, T., Geirnaert, S., & Bertrand, A. (2024). Identifying temporal correlations between natural single-shot videos and EEG signals. Journal of Neural Engineering, 21(1), 016018. doi:10.1088/1741-2552/ad2333
The associated code is available at: https://github.com/YYao-42/Identifying-Temporal-Correlations-Between-Natural-Single-shot-Videos-and-EEG-Signals?tab=readme-ov-file
Introduction
The research work leading to this dataset was conducted at the Department of Electrical Engineering (ESAT), KU Leuven.
This dataset contains electroencephalogram (EEG) data collected from 19 young participants with normal or corrected-to-normal eyesight when they were watching a series of carefully selected YouTube videos. The videos were muted to avoid the confounds introduced by audio. For synchronization, a square box was encoded outside of the original frames and flashed every 30 seconds in the top right corner of the screen. A photosensor, detecting the light changes from this flashing box, was affixed to that region using black tape to ensure that the box did not distract participants. The EEG data was recorded using a BioSemi ActiveTwo system at a sample rate of 2048 Hz. Participants wore a 64-channel EEG cap, and 4 electrooculogram (EOG) sensors were positioned around the eyes to track eye movements.
The dataset includes a total of 19 subjects × 63 min plus 9 subjects × 24 min of data (roughly 23.5 hours). Further details can be found in the following section.
Content
YouTube Videos: Due to copyright constraints, the dataset includes links to the original YouTube videos along with precise timestamps for the segments used in the experiments. The features proposed in [1] have been extracted and can be downloaded here: https://drive.google.com/file/d/1J1tYrxVizrl1xP-W1imvlA_v-DPzZ2Qh/view?usp=sharing.
Raw EEG Data: Organized by subject ID, the dataset contains EEG segments corresponding to the presented videos. Both EEGLAB .set files (containing metadata) and .fdt files (containing raw data) are provided, which can also be read by popular EEG analysis Python packages such as MNE.
The naming convention links each EEG segment to its corresponding video. E.g., the EEG segment 01_eeg corresponds to video 01_Dance_1, 03_eeg corresponds to video 03_Acrob_1, Mr_eeg corresponds to video Mr_Bean, etc.
The raw data have 68 channels. The first 64 channels are EEG data, and the last 4 channels are EOG data. The position coordinates of the standard BioSemi headcaps can be downloaded here: https://www.biosemi.com/download/Cap_coords_all.xls.
Due to minor synchronization ambiguities, different clocks in the PC and the EEG recorder, and (rarely) missing or extra video frames during playback, the length of the EEG data may not perfectly match that of the corresponding video. The difference, typically within a few milliseconds, can be resolved by truncating the modality with the excess samples, as sketched below.
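A minimal MNE-based loading sketch following the conventions above; the subject folder name and the exact sample rate after any preprocessing are assumptions.

```python
import mne

# Illustrative path; segments follow the <ID>_eeg naming convention.
raw = mne.io.read_raw_eeglab("sub-01/01_eeg.set", preload=True)
fs = raw.info["sfreq"]                 # nominally 2048 Hz

data = raw.get_data()                  # 68 channels x n_samples
eeg, eog = data[:64], data[64:]        # EEG first, then 4 EOG channels

# Truncate to the video length (01_Dance_1 spans 8.54-231.20 s).
video_dur_s = 231.20 - 8.54
n = min(data.shape[1], int(round(video_dur_s * fs)))
eeg, eog = eeg[:, :n], eog[:, :n]
```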
Signal Quality Information: A supplementary .txt file details potential bad channels. Users can define their own criteria for identifying and handling bad channels.
The dataset is divided into two subsets: Single-shot and MrBean, based on the characteristics of the video stimuli.
Single-shot Dataset
The stimuli of this dataset consist of 13 single-shot videos (63 min in total), each depicting a single individual engaging in various activities such as dancing, mime, acrobatics, and magic shows. All the participants watched this video collection.
Video ID Link Start time (s) End time (s)
01_Dance_1 https://youtu.be/uOUVE5rGmhM 8.54 231.20
03_Acrob_1 https://youtu.be/DjihbYg6F2Y 4.24 231.91
04_Magic_1 https://youtu.be/CvzMqIQLiXE 3.68 348.17
05_Dance_2 https://youtu.be/f4DZp0OEkK4 5.05 227.99
06_Mime_2 https://youtu.be/u9wJUTnBdrs 5.79 347.05
07_Acrob_2 https://youtu.be/kRqdxGPLajs 183.61 519.27
08_Magic_2 https://youtu.be/FUv-Q6EgEFI 3.36 270.62
09_Dance_3 https://youtu.be/LXO-jKksQkM 5.61 294.17
12_Magic_3 https://youtu.be/S84AoWdTq3E 1.76 426.36
13_Dance_4 https://youtu.be/0wc60tA1klw 14.28 217.18
14_Mime_3 https://youtu.be/0Ala3ypPM3M 21.87 386.84
15_Dance_5 https://youtu.be/mg6-SnUl0A0 15.14 233.85
16_Mime_6 https://youtu.be/8V7rhAJF6Gc 31.64 388.61
MrBean Dataset
Additionally, 9 participants watched an extra 24-minute clip from the first episode of Mr. Bean, where multiple (moving) objects may exist and interact, and the camera viewpoint may change. The subject IDs and the signal quality files are inherited from the single-shot dataset.
Video ID Link Start time (s) End time (s)
Mr_Bean https://www.youtube.com/watch?v=7Im2I6STbms 39.77 1495.00
Acknowledgement
This research is funded by the Research Foundation - Flanders (FWO) project No G081722N, junior postdoctoral fellowship fundamental research of the FWO (for S. Geirnaert, No. 1242524N), the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No 802895), the Flemish Government (AI Research Program), and the PDM mandate from KU Leuven (for S. Geirnaert, No PDMT1/22/009).
We also thank the participants for their time and effort in the experiments.
Contact Information
Executive researcher: Yuanyuan Yao, yuanyuan.yao@kuleuven.be
Led by: Prof. Alexander Bertrand, alexander.bertrand@kuleuven.be
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Overview
This data set consists of links to social network items for 34 different forensic events that took place between August 14, 2018 and January 6, 2021. The majority of the text and images are from Twitter (a minor part is from Flickr, Facebook, and Google+), and every video is from YouTube.
Data Collection
We used Social Tracker (https://github.com/MKLab-ITI/mmdemo-dockerized), along with the social media platforms' APIs, to gather most of the collections. For a minor part, we used Twint (https://github.com/twintproject/twint). In both cases, we provided keywords related to the event to retrieve the data; a minimal Twint sketch follows below.
It is important to mention that, in procedures like this one, usually only a small fraction of the collected data is actually related to the event and useful for further forensic analysis.
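For illustration, here is a minimal keyword-driven Twint collection sketch; the keyword, date range, and output file name are hypothetical, not taken from the dataset.

```python
import twint

# Hypothetical event keyword and date window.
c = twint.Config()
c.Search = "warehouse fire"
c.Since = "2019-04-15"
c.Until = "2019-04-20"
c.Store_csv = True
c.Output = "items_full.csv"   # mirrors the per-event file naming

twint.run.Search(c)
```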
Content
We have data from 34 events, and for each of them we provide the following files:
items_full.csv: Contains links to every social media post that was collected.
images.csv: Lists the collected images. Some files include a field called "ItemUrl", which refers to the social network post (e.g., a tweet) that mentions that media.
video.csv: URLs of the YouTube videos gathered about the event.
video_tweet.csv: Contains IDs of tweets and IDs of YouTube videos. A tweet whose ID is in this file has a video in its content; in turn, a YouTube video whose ID is in this file was mentioned by at least one collected tweet. Only two collections have this file.
description.txt: Contains standard information about the event, and possibly comments about any specific issues related to it.
Note that most of the collections do not have all the files above, owing to changes in our collection procedure over the course of this work.
Events
We divided the events into six groups:
1. Fire
Devastating fire is the main issue of the event, so most of the informative pictures show flames or burned structures.
14 events
2. Collapse
Most of the relevant images depict collapsed buildings, bridges, etc. (not caused by fire).
5 events
3. Shooting
Images likely show guns and police officers, with little or no destruction of the environment.
5 events
4. Demonstration
Crowds of people on the streets. Some incident may have occurred during it, but in most cases the demonstration itself is the event.
7 events
5. Collision
Traffic collision: pictures of damaged vehicles in an urban landscape, possibly including images of victims on the street.
1 event
6. Flood
Events ranging from fierce rain to a tsunami. Many pictures depict water.
2 events
The events are listed in the file recod-ai-events-dataset-list.pdf.
Media Content
Due to the social networks' terms of use, we do not make the collected texts, images, and videos publicly available. However, we can provide some additional media content related to one or more events; please contact the authors.
Funding
DéjàVu thematic project, São Paulo Research Foundation (grants 2017/12646-3, 2018/18264-8 and 2020/02241-9)
Public Domain Dedication (CC0 1.0): https://creativecommons.org/publicdomain/zero/1.0/
We wanted to do something for our COVID warriors, so we decided to use machine learning to detect the PPE kit (Mask, Face Shield, Full Cover, Gloves, Goggles) they wear before entering a ward, so that anyone without PPE can be detected and does not accidentally enter a COVID ward. The dataset didn't exist, so we decided to create one.
We collected data from Google Image Search and from images extracted from YouTube videos (tutorials showing how to wear PPE). The images were then labeled for five classes (Mask, Face Shield, Full Cover, Gloves, Goggles) using LabelImg.
We have included a sample trained model and TensorFlow record files.
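Since LabelImg saves Pascal VOC XML by default, a minimal annotation-parsing sketch might look as follows; the annotation format and file name are assumptions.

```python
import xml.etree.ElementTree as ET

# Illustrative file name; LabelImg writes one XML file per labeled image.
root = ET.parse("ppe_sample.xml").getroot()

for obj in root.iter("object"):
    name = obj.find("name").text          # e.g., "Mask", "Gloves"
    b = obj.find("bndbox")
    box = [int(b.find(tag).text) for tag in ("xmin", "ymin", "xmax", "ymax")]
    print(name, box)
```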
We thank the creators of these YouTube videos, the newspaper cuttings with PPE kits, and the public images available on Google Search. 1. https://youtu.be/eCcX1oIIXPE 2. https://youtu.be/R8lmIdLHEgI 3. https://youtu.be/FrauHnD9pPU 4. https://youtu.be/H4jQUBAlBrI
We would love to see how the Kaggle community utilizes this dataset! We have trained a model that gives good results (find it in the model folder).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Update (December 7, 2014): Evidence-based medicine (EBM) is not working, for many reasons. For example: 1. It is incorrect in its foundations (a paradox): hierarchical levels of evidence are supported by opinions (i.e., the lowest strength of evidence according to EBM) instead of real data collected from different types of study designs (i.e., evidence). http://dx.doi.org/10.6084/m9.figshare.1122534 2. The effect of criminal practices by pharmaceutical companies is only possible because of the complicity of others: healthcare systems, professional associations, and governmental and academic institutions. Pharmaceutical companies also corrupt at the personal level: politicians and political parties are on their payroll, and medical professionals are seduced by different types of gifts in exchange for prescriptions (i.e., bribery), which very likely results in patients not receiving the proper treatment for their disease; many times there is no such disease, as healthy persons needing no pharmacological treatment of any kind are constantly misdiagnosed and treated with unnecessary drugs. Some medical professionals are converted into K.O.L.s (key opinion leaders), mere puppets appearing on stage to spread lies to their peers: persons supposedly trained to improve the well-being of others now deceive on behalf of pharmaceutical companies. Probably the saddest thing is that many honest doctors are being misled by these lies, created by the rules of pharmaceutical marketing instead of scientific, medical, and ethical principles. Interpretation of EBM in this context was not anticipated by its creators. “The main reason we take so many drugs is that drug companies don’t sell drugs, they sell lies about drugs.” ―Peter C. Gøtzsche “doctors and their organisations should recognise that it is unethical to receive money that has been earned in part through crimes that have harmed those people whose interests doctors are expected to take care of. Many crimes would be impossible to carry out if doctors weren’t willing to participate in them.” ―Peter C. Gøtzsche, The BMJ, 2012, Big pharma often commits corporate crime, and this must be stopped. Pending (Colombia): Health Promoter Entities (in Spanish: EPS, Empresas Promotoras de Salud).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Accident Detection Model was built using YOLOv8, Google Colab, Python, Roboflow, deep learning, OpenCV, machine learning, and artificial intelligence. It can detect an accident from a live camera feed, or from any image or video provided. The model was trained on a dataset of 3,200+ images, annotated on Roboflow.
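A minimal inference sketch with the Ultralytics YOLOv8 API; the weights file name and confidence threshold are assumptions, with the actual checkpoint expected in the dataset's model folder.

```python
from ultralytics import YOLO

# Hypothetical checkpoint name; use the trained weights from the dataset.
model = YOLO("accident_best.pt")

# source can be an image path, a video file, or 0 for a live camera.
results = model.predict(source="crash_frame.jpg", conf=0.5)

for r in results:
    print(r.boxes.xyxy, r.boxes.conf, r.boxes.cls)
```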
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data about water are found in many types of formats distributed by many different sources and depicting different spatial representations such as points, polygons and grids. How do we find and explore the data we need for our specific research or application? This seminar will present common challenges and strategies for finding and accessing relevant datasets, focusing on time series data from sites commonly represented as fixed geographical points. This type of data may come from automated monitoring stations such as river gauges and weather stations, from repeated in-person field observations and samples, or from model output and processed data products. We will present and explore useful data catalogs, including the CUAHSI HIS catalog accessible via HydroClient, CUAHSI HydroShare, the EarthCube Data Discovery Studio, Google Dataset search, and agency-specific catalogs. We will also discuss programmatic data access approaches and tools in Python, particularly the ulmo data access package, touching on the role of community standards for data formats and data access protocols. Once we have accessed datasets we are interested in, the next steps are typically exploratory, focusing on visualization and statistical summaries. This seminar will illustrate useful approaches and Python libraries used for processing and exploring time series data, with an emphasis on the distinctive needs posed by temporal data. Core Python packages used include Pandas, GeoPandas, Matplotlib and the geospatial visualization tools introduced at the last seminar. Approaches presented can be applied to other data types that can be summarized as single time series, such as averages over a watershed or data extracts from a single cell in a gridded dataset – the topic for the next seminar.
The cyberseminar recording is available on YouTube at https://youtu.be/uQXuS1AB2M0.
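To make the workflow concrete, here is a minimal pandas time-series sketch of the kind of exploration described above; the file name, column names, and data source are illustrative only.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Illustrative export, e.g., a river-gauge series downloaded via HydroClient.
ts = pd.read_csv("site_discharge.csv",
                 parse_dates=["datetime"], index_col="datetime")["value"]

print(ts.describe())                   # basic statistical summary

monthly = ts.resample("MS").mean()     # aggregate to monthly means
monthly.plot(title="Monthly mean discharge")
plt.show()
```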
As of February 2025, India was the country with the largest YouTube audience by far, with approximately 491 million users engaging with the popular social video platform. The United States followed, with around 253 million YouTube viewers, and Brazil came in third, with 144 million users watching content on YouTube. The United Kingdom saw around 54.8 million internet users engaging with the platform in the examined period.
What country has the highest percentage of YouTube users?
In July 2024, the United Arab Emirates was the country with the highest YouTube penetration worldwide, with around 94 percent of the country's digital population engaging with the service. In 2024, YouTube counted around 100 million paid subscribers for its YouTube Music and YouTube Premium services.
YouTube mobile markets
In 2024, YouTube was among the most popular social media platforms worldwide. In terms of revenue, the YouTube app generated approximately 28 million U.S. dollars in the United States in January 2024, as well as 19 million U.S. dollars in Japan.
As of February 2025, around 322 million people in the United States accessed the internet, making it one of the largest online markets worldwide. The country currently ranks third after China and India by online audience size.
Overview of internet usage in the United States
The digital population in the United States has constantly increased in recent years. Among the most common reasons is the growing accessibility of broadband internet. A big part of the country's digital audience accesses the web via mobile phones. In 2024, the country saw an estimated 97.1 percent mobile internet user penetration. According to a 2024 survey, over 51 percent of U.S. women and 43 percent of men said it is important to them to have mobile internet access anywhere, at any time. Another 41 percent of respondents could not imagine their everyday life without the internet. Google and YouTube are the most visited websites in the country, while music, food, and drinks were the most discussed online topics.
Internet usage demographics in the United States
While some users can no longer imagine their life without the internet, others do not use it at all. According to 2021 data, 25 percent of U.S. adults 65 and older reported not using the internet. Despite this, online usage was strong across other age groups, especially young adults aged 18 to 49. This age group also reported the highest percentage of smartphone usage in the country as of 2023. Due to a persistent lack of connectivity in rural areas, more online users were based in urban areas of the U.S. than in the countryside.
As of the third quarter of 2024, internet users in South Africa spent more than **** hours and ** minutes online per day, ranking first among the regions worldwide. Brazil followed, with roughly **** hours of daily online usage. Japan registered the lowest number of daily hours spent online in the examined period, with users spending an average of over **** hours per day on the internet. The data includes daily time spent online on any device.
Social media usage
In recent years, social media has become integral to internet users' daily lives, with users spending an average of *** minutes daily on social media activities. In April 2024, global social network penetration reached **** percent, highlighting its widespread adoption. Among the various platforms, YouTube stands out, with over *** billion monthly active users, making it one of the most popular social media platforms.
YouTube’s global popularity
In 2023, the keyword "YouTube" ranked among the most popular search queries on Google, highlighting the platform's immense popularity. YouTube generated most of its traffic through mobile devices, with about 98 billion visits. This popularity was particularly evident in the United Arab Emirates, where YouTube penetration reached approximately **** percent, the highest in the world.
In 2024, children in the United Kingdom spent an average of *** minutes per day on TikTok. This was followed by Instagram, which children in the UK reported using for an average of ** minutes daily. Children in the UK aged between four and 18 years also used Facebook for ** minutes a day on average in the measured period.
Mobile ownership and usage among UK children
In 2021, around ** percent of kids aged eight to 11 years in the UK owned a smartphone, while approximately ** percent of children aged five to seven had access to their own device. Mobile phones were also the second most popular devices used by children aged eight to 11 to access the web, while tablet computers remained the most popular option for users aged three to 11. Children were not immune to the popularity of short-form video content in 2020 and 2021, spending an average of ** minutes per day engaging with TikTok, as well as over ** minutes on the YouTube app in 2021.
Children's data protection
In 2021, ** percent of U.S. parents and ** percent of UK parents reported being slightly concerned about their children's device usage habits. While the share of parents reporting being very or extremely concerned was considerably smaller, children are considered among the most vulnerable digital audiences and need additional attention when it comes to data and privacy protection. According to a study conducted during the first quarter of 2022, ** percent of children's apps hosted in the Google Play Store and ** percent of apps hosted in the Apple App Store transmitted users' locations to advertisers. Additionally, ** percent of kids' apps were found to collect persistent identifiers, such as users' IP addresses, which could potentially lead to violations of the Children's Online Privacy Protection Act (COPPA) in the United States. In the United Kingdom, companies must take into account several obligations when designing online environments for children, including age-appropriate design and avoiding sharing children's data.
As of April 2024, men between the ages of 25 and 34 years made up Facebook's largest audience, accounting for 18.4 percent of global users. Facebook's second-largest audience base was men aged 18 to 24 years.
Facebook connects the world
Founded in 2004 and going public in 2012, Facebook is one of the biggest internet companies in the world, with influence that goes beyond social media. It is widely considered one of the Big Four tech companies, along with Google, Apple, and Amazon (together known under the acronym GAFA). Facebook is the most popular social network worldwide, and the company also owns three other billion-user properties: the mobile messaging apps WhatsApp and Facebook Messenger, as well as the photo-sharing app Instagram.
Facebook users
The vast majority of Facebook users connect to the social network via mobile devices. This is unsurprising, as Facebook has many users in mobile-first online markets. Currently, India ranks first in terms of Facebook audience size, with 378 million users. The United States, Brazil, and Indonesia also each have more than 100 million Facebook users.