In 2022, the average data used per smartphone per month worldwide amounted to ** gigabytes (GB). The source forecasts that this will increase almost four times reaching ** GB per smartphone per month globally in 2028.
This statistic shows the average monthly wireless data usage per user in the United States by age in the first two quarters of 2018. In the first half of 2018, users 25 years and younger used *** GB of cellular and **** GB of Wi-Fi wireless data.
North America registered the highest mobile data consumption per connection in 2023, with the average connection consuming ** gigabytes per month. This figure is set to triple by 2030, driven by the adoption of data-intensive activities such as 4K streaming.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Average per capita monthly mobile data use in the United States rose 107.6% in 2014 compared to the previous year.
This statistic displays the average monthly smartphone cellular data usage from 2016 to 2021. In 2016, on average a smartphone consumed *** GBs of cellular data per month. That number is projected to reach *** GBs in 2021.
https://data.gov.tw/license
Average monthly data usage per mobile broadband user (including 4G, 5G)
The statistic shows estimated internet data traffic per month in the United States from 2018 to 2023. In 2018, total internet data traffic was estimated to amount to 33.45 million exabytes per month.
This statistic shows the amount of monthly mobile data used per mobile subscription in Sweden in 2020, by subscription type. Private subscriptions used an average of ** gigabytes of mobile data per month, while business subscriptions used ** gigabytes.
In the observed period, the average amount of mobile data used per SIM card in Czechia increased rapidly. While in 2017 the average monthly usage was just 0.8 gigabytes, by 2021 this number had risen to 4.6 GB, an increase of 475 percent. In 2023, it was estimated to have increased to 9.8 GB.
This statistic shows the average monthly wireless data usage per Android user in the United States in *************, broken down by plan type. In *************, Android users on the service plan with monthly allowance consumed an average of ****** megabytes of Wifi data per month in the United States.
Residential customers use an average of about 1,000 kWh of electricity per month, with usage higher during hot summer months and lower in the winter. Tables show monthly average usage in kWh for residential customers starting in 2000, including monthly fuel charges and electric bill amounts.
The average mobile data usage per capita in 2018 was significantly less for the ETNO perimeter of Europe than for Japan, South Korea, and the United States. Europeans on average used *** gigabytes per month of mobile data compared to that of ****, ****, and **** gigabytes per month in Japan, South Korea, and the United States, respectively. It is important to note that there is a huge variation between European countries in terms of average usage, as Europe* is a regional representation compared to the selected countries included in this study.
In November 2024, each broadband subscription in Hong Kong had an average data usage of ***** megabytes, a slight decrease from ***** megabytes in the previous month. The total data usage in Hong Kong reached over *** million megabytes in that month.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
The average Facebook user spends about 19.6 hours per month on Facebook. This works out to about 39 minutes per day.
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
You are an analyst at "Megaline," a federal mobile operator. The company offers two tariff plans to customers: "Smart" and "Ultra." To adjust the advertising budget, the commercial department wants to understand which tariff generates more revenue.
You need to conduct a preliminary analysis of the tariffs on a small sample of customers. You have data on 500 users of "Megaline": who they are, where they are from, which tariff they use, how many calls and messages they sent in 2018. You need to analyze customer behavior and conclude which tariff is better.
"Smart" Tariff: - Monthly fee: 550 rubles - Included: 500 minutes of calls, 50 messages, and 15 GB of internet traffic - Cost of services beyond the tariff package: 1. Call minute: 3 rubles (Megaline always rounds up minutes and megabytes. If the user talked for just 1 second, it counts as a whole minute); 2. Message: 3 rubles; 3. 1 GB of internet traffic: 200 rubles.
"Ultra" Tariff: - Monthly fee: 1950 rubles - Included: 3000 minutes of calls, 1000 messages, and 30 GB of internet traffic - Cost of services beyond the tariff package: 1. Call minute: 1 ruble; 2. Message: 1 ruble; 3. 1 GB of internet traffic: 150 rubles.
Note: Megaline always rounds up seconds to minutes and megabytes to gigabytes. Each call is rounded up individually: even if it lasted just 1 second, it is counted as 1 minute. For web traffic, separate sessions are not counted. Instead, the total amount for the month is rounded up. If a subscriber uses 1025 megabytes in a month, they are charged for 2 gigabytes.
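The rounding rules above can be sketched as follows. This is a minimal illustration (the function names are ours, not part of the task): each call is rounded up individually, while web traffic is summed for the month first and only then rounded up to whole gigabytes.

```python
import math

def billed_minutes(durations_min):
    """Each call is rounded up individually to a whole minute."""
    # Zero-duration entries are missed calls and add nothing to the bill.
    return sum(math.ceil(d) for d in durations_min if d > 0)

def billed_gigabytes(mb_per_session):
    """Traffic is summed for the month first, then rounded up to whole GB."""
    total_mb = sum(mb_per_session)
    return math.ceil(total_mb / 1024)

# 1025 MB in a month is billed as 2 GB, as in the task's own example:
assert billed_gigabytes([1000, 25]) == 2
# A 1-second call (~0.017 min) counts as a full minute:
assert billed_minutes([0.017, 10.2]) == 12
```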
Step 1: Open the file with data and study the general information
File paths:
- /datasets/calls.csv
- /datasets/internet.csv
- /datasets/messages.csv
- /datasets/tariffs.csv
- /datasets/users.csv
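Step 1 could start with a small loading helper like the one below. The paths are the ones given above; the date columns are taken from the table descriptions later in this brief. The helper and the in-memory sample are ours, shown only to make the sketch self-contained.

```python
import io
import pandas as pd

def load_table(path_or_buf, date_cols=()):
    """Read one source table, parsing the given columns as datetimes."""
    return pd.read_csv(path_or_buf, parse_dates=list(date_cols))

# In the notebook the real paths from the task would be used, e.g.:
#   calls = load_table('/datasets/calls.csv', ['call_date'])
#   internet = load_table('/datasets/internet.csv', ['session_date'])
#   messages = load_table('/datasets/messages.csv', ['message_date'])
#   tariffs = load_table('/datasets/tariffs.csv')
#   users = load_table('/datasets/users.csv', ['reg_date', 'churn_date'])

# Tiny in-memory sample in the calls.csv layout (column names from the task):
sample = io.StringIO("id,call_date,duration,user_id\n1000_0,2018-07-25,4.52,1000\n")
calls = load_table(sample, ['call_date'])
```

Calling `df.info()` on each loaded table is a quick way to study the general information the step asks for.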
Step 2: Prepare the data - Convert data to the required types; - Find and fix errors in the data, if any. Explain what errors you found and how you fixed them. You will find calls with zero duration in the data. This is not an error: missed calls are indicated by zeros, so they do not need to be deleted.
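A sketch of the checks this step describes, on a made-up three-row sample (the real checks would run on the loaded `calls` table): zero-duration rows are kept because they are missed calls, while full-row duplicates, if any, would be genuine errors.

```python
import pandas as pd

# Hypothetical sample in the calls-table layout.
calls = pd.DataFrame({
    'id': ['1000_0', '1000_1', '1000_2'],
    'duration': [0.0, 4.52, 12.37],
    'user_id': [1000, 1000, 1000],
})

# Zero-duration rows are missed calls, not errors: keep them.
missed_share = (calls['duration'] == 0).mean()

# Duplicates, if any, would be genuine errors worth dropping.
n_dupes = calls.duplicated().sum()
print(f'missed calls: {missed_share:.0%}, duplicates: {n_dupes}')
```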
For each user, calculate: - Number of calls made and minutes spent per month; - Number of messages sent per month; - Amount of internet traffic used per month; - Monthly revenue from each user (subtract the free limit from the total number of calls, messages, and internet traffic; multiply the remainder by the value from the tariff plan; add the corresponding tariff plan's subscription fee).
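The revenue formula above can be written out directly. The tariff parameters below are copied from the task description; the helper itself is ours, and it assumes minutes have already been rounded per call while megabytes are rounded to whole gigabytes at the monthly level.

```python
import math

# Tariff parameters as given in the task description.
TARIFFS = {
    'smart': dict(fee=550, minutes=500, messages=50, gb=15,
                  rub_min=3, rub_msg=3, rub_gb=200),
    'ultra': dict(fee=1950, minutes=3000, messages=1000, gb=30,
                  rub_min=1, rub_msg=1, rub_gb=150),
}

def monthly_revenue(tariff, minutes_used, messages_sent, mb_used):
    """Subscription fee plus overage charges for one user-month."""
    t = TARIFFS[tariff]
    extra_min = max(0, minutes_used - t['minutes'])
    extra_msg = max(0, messages_sent - t['messages'])
    extra_gb = max(0, math.ceil(mb_used / 1024) - t['gb'])
    return (t['fee'] + extra_min * t['rub_min']
            + extra_msg * t['rub_msg'] + extra_gb * t['rub_gb'])

# A Smart user with 600 billed minutes, 40 messages and 17 000 MB pays
# 100 extra minutes (300 rub) + 2 extra GB (400 rub) on top of the 550 rub fee.
```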
Step 3: Analyze the data Describe the behavior of the operator's customers based on the sample. How many minutes of calls, how many messages, and how much internet traffic do users of each tariff need per month? Calculate the average, variance, and standard deviation. Create histograms. Describe the distributions.
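The step-3 statistics can be sketched on a made-up sample of monthly minutes; in the notebook this would be a column of the per-user monthly table, and a histogram (e.g. `plt.hist` from matplotlib) would follow.

```python
import numpy as np

# Hypothetical sample of monthly call minutes for a handful of users.
minutes = np.array([410, 480, 520, 390, 610, 450])

mean = minutes.mean()
variance = minutes.var(ddof=1)   # sample variance
std = minutes.std(ddof=1)        # sample standard deviation

print(f'mean={mean:.1f}, var={variance:.1f}, std={std:.1f}')
```

Using `ddof=1` gives the unbiased sample estimates, which is the usual choice when describing a sample rather than a full population.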
Step 4: Test hypotheses - The average revenue of users of the "Ultra" and "Smart" tariffs differs; - The average revenue of users from Moscow differs from that of users from other regions. Moscow appears in the data as 'Москва'; use this value when testing the hypothesis.
Set the threshold alpha value yourself.
Explain: - How you formulated the null and alternative hypotheses; - Which criterion you used to test the hypotheses and why.
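One reasonable choice of criterion is a two-sample t-test. The sketch below runs it on synthetic revenue samples (the real inputs would be the per-user monthly revenues from step 2); H0 is that the means are equal, H1 that they differ, two-sided.

```python
import numpy as np
from scipy import stats

# Synthetic stand-ins for the two revenue samples (assumed values, for
# illustration only).
rng = np.random.default_rng(42)
revenue_smart = rng.normal(1200, 300, 200)
revenue_ultra = rng.normal(2000, 250, 200)

alpha = 0.05  # chosen significance threshold

# Welch's t-test (equal_var=False) avoids assuming equal variances
# in the two groups.
result = stats.ttest_ind(revenue_smart, revenue_ultra, equal_var=False)

if result.pvalue < alpha:
    print('Reject H0: average revenues differ')
else:
    print('Fail to reject H0')
```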
Step 5: Write a general conclusion
Formatting: Perform the task in a Jupyter Notebook. Put the program code in code cells and the textual explanations in markdown cells. Apply formatting and headers.
Table `users` (user information):
- `user_id`: unique user identifier
- `first_name`: user's first name
- `last_name`: user's last name
- `age`: user's age (years)
- `reg_date`: date of tariff connection (day, month, year)
- `churn_date`: date of tariff discontinuation (if the value is missing, the tariff was still active at the time of data extraction)
- `city`: user's city of residence
- `tariff`: name of the tariff plan

Table `calls` (call information):
- `id`: unique call number
- `call_date`: call date
- `duration`: call duration in minutes
- `user_id`: identifier of the user who made the call

Table `messages` (message information):
- `id`: unique message number
- `message_date`: message date
- `user_id`: identifier of the user who sent the message

Table `internet` (internet session information):
- `id`: unique session number
- `mb_used`: amount of internet traffic used during the session (in megabytes)
- `session_date`: internet session date
- `user_id`: user identifier

Table `tariffs` (tariff information):
- `tariff_name`: tariff name
- `rub_monthly_fee`: monthly subscription fee in rubles
- `minutes_included`: number of call minutes included per month
- `messages_included...
https://object-store.os-api.cci2.ecmwf.int:443/cci2-prod-catalogue/licences/cc-by/cc-by_f24dc630aa52ab8c52a0ac85c03bc35e0abc850b4d7453bdc083535b41d5a5c3.pdf
ERA5 is the fifth generation ECMWF reanalysis for the global climate and weather for the past 8 decades. Data is available from 1940 onwards. ERA5 replaces the ERA-Interim reanalysis. Reanalysis combines model data with observations from across the world into a globally complete and consistent dataset using the laws of physics. This principle, called data assimilation, is based on the method used by numerical weather prediction centres, where every so many hours (12 hours at ECMWF) a previous forecast is combined with newly available observations in an optimal way to produce a new best estimate of the state of the atmosphere, called analysis, from which an updated, improved forecast is issued. Reanalysis works in the same way, but at reduced resolution to allow for the provision of a dataset spanning back several decades. Reanalysis does not have the constraint of issuing timely forecasts, so there is more time to collect observations, and when going further back in time, to allow for the ingestion of improved versions of the original observations, which all benefit the quality of the reanalysis product. ERA5 provides hourly estimates for a large number of atmospheric, ocean-wave and land-surface quantities. An uncertainty estimate is sampled by an underlying 10-member ensemble at three-hourly intervals. Ensemble mean and spread have been pre-computed for convenience. Such uncertainty estimates are closely related to the information content of the available observing system which has evolved considerably over time. They also indicate flow-dependent sensitive areas. To facilitate many climate applications, monthly-mean averages have been pre-calculated too, though monthly means are not available for the ensemble mean and spread. ERA5 is updated daily with a latency of about 5 days (monthly means are available around the 6th of each month). 
If serious flaws are detected in this early release (called ERA5T), the data could differ from the final release 2 to 3 months later; users are notified if this occurs. The data set presented here is a regridded subset of the full ERA5 data set on native resolution. It is online on spinning disk, which should ensure fast and easy access. It should satisfy the requirements for most common applications. An overview of all ERA5 datasets can be found in this article. Information on access to ERA5 data on native resolution is provided in these guidelines. Data has been regridded to a regular lat-lon grid of 0.25 degrees for the reanalysis and 0.5 degrees for the uncertainty estimate (0.5 and 1 degree respectively for ocean waves). There are four main subsets: hourly and monthly products, both on pressure levels (upper air fields) and single levels (atmospheric, ocean-wave and land surface quantities). The present entry is "ERA5 monthly mean data on single levels from 1940 to present".
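A hedged sketch of requesting this entry ("ERA5 monthly mean data on single levels") through the Climate Data Store API. The dataset id and request keys below follow CDS conventions as we understand them and should be verified against the CDS catalogue entry before use; the variable and year are arbitrary examples.

```python
# Request specification for the CDS API (values are illustrative assumptions).
dataset = 'reanalysis-era5-single-levels-monthly-means'
request = {
    'product_type': 'monthly_averaged_reanalysis',
    'variable': '2m_temperature',   # example variable
    'year': '2020',                 # example year
    'month': [f'{m:02d}' for m in range(1, 13)],  # all twelve months
    'time': '00:00',
    'format': 'netcdf',
}

# The actual download would then be (requires a CDS account and cdsapi):
# import cdsapi
# cdsapi.Client().retrieve(dataset, request, 'era5_monthly_t2m.nc')
```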
As of the second quarter of 2024, the average residential high-speed broadband subscription in Canada downloaded around *** gigabytes of data per month. This was a decrease from the previous quarter, when the average download volume reached a record *** gigabytes per month.
The locations of approximately 23,000 current and historical U.S. Geological Survey (USGS) streamgages in the United States and Puerto Rico (with the exception of Alaska) have been snapped to the medium resolution National Hydrography Dataset (NHD). The NHD contains geospatial information about mapped surface-water features, such as streams, lakes, and reservoirs, etc., creating a hydrologic network that can be used to determine what is upstream or downstream from a point of interest on the NHD network. An automated snapping process made the initial determination of the NHD location of each streamgage. These initial NHD locations were comprehensively reviewed by local USGS personnel to ensure that streamgages were snapped to the correct NHD reaches. About 75 percent of the streamgages snapped to the appropriate NHD reach location initially and 25 percent required adjustment and relocation. This process resulted in approximately 23,000 gages being successfully snapped to the NHD. This dataset contains the latitude and longitude coordinates of the point on the NHD to which the streamgage is snapped and the location of the gage house for each streamgage. A process known as indexing may be used to create reference points (event tables) to the NHD reaches, expressed as a reach code and measure (distance along the reach). Indexing is dependent on the version of NHD to which the indexing is referenced. These data are well suited for use in indexing because nearly all the streamgage NHD locations have been reviewed and adjusted if necessary, to ensure they will index to the appropriate NHD reach. Flow characteristics were computed from the daily streamflow data recorded at each streamgage for the period of record. 
The flow characteristics associated with each streamgage include:
- First date (year, month, day) of streamflow data
- Last date (year, month, day) of streamflow data
- Number of days of streamflow data
- Number of days of non-zero streamflow data
- Minimum and maximum daily flow for the period of record (cubic feet per second)
- Percentiles (1, 5, 10, 20, 25, 50, 75, 80, 90, 95, 99) of daily flow for the period of record (cubic feet per second)
- Average and standard deviation of daily flow for the period of record (cubic feet per second)
- Mean annual base-flow index (BFI) computed for the period of record (fraction, ranging from 0 to 1)
- Year-to-year standard deviation of the annual base-flow index computed for the period of record (fraction)
- Number of years of data used to compute the base-flow index (years)

The streamflow data used to compute flow characteristics were copied from the NWIS-Web historical daily discharge archive (http://waterdata.usgs.gov/nwis/sw) on June 15, 2005.
Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
ChatGPT has taken the world by storm, setting a record as the fastest app to reach 100 million users, which it hit in two months. The implications of this tool are far-reaching, universities...
This dataset consists of raster geotiff outputs from a series of modeling simulations for the California Central Valley. The full methods and results of this research are described in detail in “Integrated modeling of climate, land use, and water availability scenarios and their impacts on managed wetland habitat: A case study from California’s Central Valley” (2021). Land-use and land-cover change for California's Central Valley were modeled using the LUCAS model and five different scenarios were simulated from 2011 to 2101 across the entirety of the valley. The five future scenario projections originated from the four scenarios developed as part of the Central Valley Landscape Conservation Project (http://climate.calcommons.org/cvlcp ). The 4 original scenarios include a Bad-Business-As-Usual (BBAU; high water availability, poor management), California Dreamin’ (DREAM; high water availability, good management), Central Valley Dustbowl (DUST; low water availability, poor management), and Everyone Equally Miserable (EEM; low water availability, good management). These scenarios represent alternative plausible futures, capturing a range of climate variability, land management activities, and habitat restoration goals. We parameterized our models based on close interpretation of these four scenario narratives to best reflect stakeholder interests, adding a baseline Historical Business-As-Usual scenario (HBAU) for comparison. The flood probability raster maps represent the average annual flooding probability of a cell over a specified time period for a specified land use and land cover group and type. 
Each filename encodes the associated scenario ID (scn418 = DUST, scn419 = DREAM, scn420 = HBAU, scn421 = BBAU, and scn426 = EEM), the flooding probability per pixel per month over a 30-year period, the model iteration (= it0 in all cases, as only 1 Monte Carlo simulation was modeled and no iteration data were used in calculating the probability value), and the timestep of the 30-year transition summary end date (ts2041 = average annual 30-year transition probability from modeled timesteps 2012 to 2041, ts2071 = average annual 30-year flooding probability from modeled timesteps 2042 to 2071, and ts2101 = average annual 30-year flooding probability from modeled timesteps 2072 to 2101). The filename also includes one of the 12 monthly flooding designations (e.g. Apr = April; Nov = November). For example, the filename “scn418_DUST_tgapFLOODING_30yr_Apr_2041.tif” represents the 30-year average annual flooding probability for the month of April, for the modeled scenario 418 DUST, over the 2011 to 2041 model period. More information about the LUCAS model can be found here: https://geography.wr.usgs.gov/LUCC/the_lucas_model.php. For more information on the specific parameter settings used in the model, contact Tamara S. Wilson (tswilson@usgs.gov).
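The filename convention above can be unpacked with a small helper (ours, not part of the dataset), shown here on the example filename from the description.

```python
import re

# Scenario IDs and labels as listed in the dataset description.
SCENARIOS = {'scn418': 'DUST', 'scn419': 'DREAM', 'scn420': 'HBAU',
             'scn421': 'BBAU', 'scn426': 'EEM'}

def parse_flood_filename(name):
    """Extract scenario, month and summary end year from a raster filename."""
    m = re.match(
        r'(scn\d+)_([A-Z]+)_tgapFLOODING_30yr_([A-Za-z]+)_(\d{4})\.tif$', name)
    if m is None:
        raise ValueError(f'unexpected filename: {name}')
    scn, label, month, end_year = m.groups()
    return {'scenario_id': scn, 'scenario': SCENARIOS.get(scn, label),
            'month': month, 'summary_end_year': int(end_year)}

info = parse_flood_filename('scn418_DUST_tgapFLOODING_30yr_Apr_2041.tif')
```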