Facebook
TwitterSocial media companies are starting to offer users the option to subscribe to their platforms in exchange for monthly fees. Until recently, social media has been predominantly free to use, with tech companies relying on advertising as their main revenue generator. However, advertising revenues have been dropping following the COVID-induced boom. As of July 2023, Meta Verified is the most costly of the subscription services, setting users back almost 15 U.S. dollars per month on iOS or Android. Twitter Blue costs between eight and 11 U.S. dollars per month and ensures users will receive the blue check mark, and have the ability to edit tweets and have NFT profile pictures. Snapchat+, drawing in four million users as of the second quarter of 2023, boasts a Story re-watch function, custom app icons, and a Snapchat+ badge.
Facebook
Twitterhttps://creativecommons.org/publicdomain/zero/1.0/https://creativecommons.org/publicdomain/zero/1.0/
Description:
The "Daily Social Media Active Users" dataset provides a comprehensive and dynamic look into the digital presence and activity of global users across major social media platforms. The data was generated to simulate real-world usage patterns for 13 popular platforms, including Facebook, YouTube, WhatsApp, Instagram, WeChat, TikTok, Telegram, Snapchat, X (formerly Twitter), Pinterest, Reddit, Threads, LinkedIn, and Quora. This dataset contains 10,000 rows and includes several key fields that offer insights into user demographics, engagement, and usage habits.
Dataset Breakdown:
Platform: The name of the social media platform where the user activity is tracked. It includes globally recognized platforms, such as Facebook, YouTube, and TikTok, that are known for their large, active user bases.
Owner: The company or entity that owns and operates the platform. Examples include Meta for Facebook, Instagram, and WhatsApp, Google for YouTube, and ByteDance for TikTok.
Primary Usage: This category identifies the primary function of each platform. Social media platforms differ in their primary usage, whether it's for social networking, messaging, multimedia sharing, professional networking, or more.
Country: The geographical region where the user is located. The dataset simulates global coverage, showcasing users from diverse locations and regions. It helps in understanding how user behavior varies across different countries.
Daily Time Spent (min): This field tracks how much time a user spends on a given platform on a daily basis, expressed in minutes. Time spent data is critical for understanding user engagement levels and the popularity of specific platforms.
Verified Account: Indicates whether the user has a verified account. This feature mimics real-world patterns where verified users (often public figures, businesses, or influencers) have enhanced status on social media platforms.
Date Joined: The date when the user registered or started using the platform. This data simulates user account history and can provide insights into user retention trends or platform growth over time.
Context and Use Cases:
Researchers, data scientists, and developers can use this dataset to:
Model User Behavior: By analyzing patterns in daily time spent, verified status, and country of origin, users can model and predict social media engagement behavior.
Test Analytics Tools: Social media monitoring and analytics platforms can use this dataset to simulate user activity and optimize their tools for engagement tracking, reporting, and visualization.
Train Machine Learning Algorithms: The dataset can be used to train models for various tasks like user segmentation, recommendation systems, or churn prediction based on engagement metrics.
Create Dashboards: This dataset can serve as the foundation for creating user-friendly dashboards that visualize user trends, platform comparisons, and engagement patterns across the globe.
Conduct Market Research: Business intelligence teams can use the data to understand how various demographics use social media, offering valuable insights into the most engaged regions, platform preferences, and usage behaviors.
Sources of Inspiration: This dataset is inspired by public data from industry reports, such as those from Statista, DataReportal, and other market research platforms. These sources provide insights into the global user base and usage statistics of popular social media platforms. The synthetic nature of this dataset allows for the use of realistic engagement metrics without violating any privacy concerns, making it an ideal tool for educational, analytical, and research purposes.
The structure and design of the dataset are based on real-world usage patterns and aim to represent a variety of users from different backgrounds, countries, and activity levels. This diversity makes it an ideal candidate for testing data-driven solutions and exploring social media trends.
Future Considerations:
As the social media landscape continues to evolve, this dataset can be updated or extended to include new platforms, engagement metrics, or user behaviors. Future iterations may incorporate features like post frequency, follower counts, engagement rates (likes, comments, shares), or even sentiment analysis from user-generated content.
By leveraging this dataset, analysts and data scientists can create better, more effective strategies ...
Facebook
TwitterCC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The USDA Agricultural Research Service (ARS) recently established SCINet , which consists of a shared high performance computing resource, Ceres, and the dedicated high-speed Internet2 network used to access Ceres. Current and potential SCINet users are using and generating very large datasets so SCINet needs to be provisioned with adequate data storage for their active computing. It is not designed to hold data beyond active research phases. At the same time, the National Agricultural Library has been developing the Ag Data Commons, a research data catalog and repository designed for public data release and professional data curation. Ag Data Commons needs to anticipate the size and nature of data it will be tasked with handling.
The ARS Web-enabled Databases Working Group, organized under the SCINet initiative, conducted a study to establish baseline data storage needs and practices, and to make projections that could inform future infrastructure design, purchases, and policies. The SCINet Web-enabled Databases Working Group helped develop the survey which is the basis for an internal report. While the report was for internal use, the survey and resulting data may be generally useful and are being released publicly.
From October 24 to November 8, 2016 we administered a 17-question survey (Appendix A) by emailing a Survey Monkey link to all ARS Research Leaders, intending to cover data storage needs of all 1,675 SY (Category 1 and Category 4) scientists. We designed the survey to accommodate either individual researcher responses or group responses. Research Leaders could decide, based on their unit's practices or their management preferences, whether to delegate response to a data management expert in their unit, to all members of their unit, or to themselves collate responses from their unit before reporting in the survey.
Larger storage ranges cover vastly different amounts of data so the implications here could be significant depending on whether the true amount is at the lower or higher end of the range. Therefore, we requested more detail from "Big Data users," those 47 respondents who indicated they had more than 10 to 100 TB or over 100 TB total current data (Q5). All other respondents are called "Small Data users." Because not all of these follow-up requests were successful, we used actual follow-up responses to estimate likely responses for those who did not respond.
We defined active data as data that would be used within the next six months. All other data would be considered inactive, or archival.
To calculate per person storage needs we used the high end of the reported range divided by 1 for an individual response, or by G, the number of individuals in a group response. For Big Data users we used the actual reported values or estimated likely values.
Resources in this dataset:Resource Title: Appendix A: ARS data storage survey questions. File Name: Appendix A.pdfResource Description: The full list of questions asked with the possible responses. The survey was not administered using this PDF but the PDF was generated directly from the administered survey using the Print option under Design Survey. Asterisked questions were required. A list of Research Units and their associated codes was provided in a drop down not shown here. Resource Software Recommended: Adobe Acrobat,url: https://get.adobe.com/reader/ Resource Title: CSV of Responses from ARS Researcher Data Storage Survey. File Name: Machine-readable survey response data.csvResource Description: CSV file includes raw responses from the administered survey, as downloaded unfiltered from Survey Monkey, including incomplete responses. Also includes additional classification and calculations to support analysis. Individual email addresses and IP addresses have been removed. This information is that same data as in the Excel spreadsheet (also provided).Resource Title: Responses from ARS Researcher Data Storage Survey. File Name: Data Storage Survey Data for public release.xlsxResource Description: MS Excel worksheet that Includes raw responses from the administered survey, as downloaded unfiltered from Survey Monkey, including incomplete responses. Also includes additional classification and calculations to support analysis. Individual email addresses and IP addresses have been removed.Resource Software Recommended: Microsoft Excel,url: https://products.office.com/en-us/excel
Facebook
Twitterhttps://www.ibisworld.com/about/termsofuse/https://www.ibisworld.com/about/termsofuse/
The Data Processing and Hosting Services industry has transformed over the past decade, with the growth of cloud computing creating new markets. Demand surged in line with heightened demand from banks and a rising number of mobile connections across Europe. Many companies regard cloud computing as an innovative way of reducing their operating costs, which has led to the introduction of new services that make the sharing of data more efficient. Over the five years through 2025, revenue is expected to hike at a compound annual rate of 4.3% to €113.5 billion, including a 5.6% jump in 2025. Industry profit has been constrained by pricing pressures between companies and regions. Investments in new-generation data centres, especially in digital hubs like Frankfurt, London, and Paris, have consistently outpaced available supply, underlining the continent’s insatiable appetite for processing power. Meanwhile, 5G network roll-outs and heightened consumer expectations for real-time digital services have made agile hosting and robust cloud infrastructure imperative, pushing providers to invest in both core and edge data solutions. Robust growth has been fuelled by rapid digitalisation, widespread cloud adoption, and exploding demand from sectors such as e-commerce and streaming. Scaling cloud infrastructure, driven by both established giants, like Amazon Web Services (AWS), Microsoft Azure and Google Cloud and nimble local entrants, has allowed the industry to keep pace with unpredictable spikes in online activity and increasingly complex data needs. Rising investment in data centre capacity and the proliferation of high-availability hosting have significantly boosted operational efficiency and market competitiveness, with revenue growth closely tracking the boom in cloud and streaming services across the continent. Industry revenue is set to grow moving forward as European businesses incorporate data technology into their operations. Revenue is projected to boom, growing at a compound annual rate of 10.3% over the five years through 2030, to reach €185.4 billion. Growth is likely to be assisted by ongoing cloud adoption, accelerated 5G expansion, and soaring investor interest in hyperscale and sovereign data centres. Technical diversification seen in hybrid cloud solutions, edge computing deployments, and sovereign clouds, will create significant opportunities for incumbents and disruptors alike. Pricing pressures, intensified by global hyperscalers’ economies of scale and assertive licensing strategies, will pressurise profit, especially for smaller participants confronting rising capital expenditure and compliance costs.
Facebook
TwitterThe data were created by transformation of vector cadastral component of SM 5 to raster file. In territories, where vector SM 5 has not been created yet, the cadastral and altimetry components were created by scanning of individual printing masters of planimetry and altimetry from the last issue of the State Map 1:5,000 - derived. The cadastral component does not contain parcel numbers.
Facebook
TwitterThe National Land Cover Database products are created through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of federal agencies (www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (EPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (FWS), the Bureau of Land Management (BLM) and the USDA Natural Resources Conservation Service (NRCS). Previously, NLCD consisted of three major data releases based on a 10-year cycle. These include a circa 1992 conterminous U.S. land cover dataset with one thematic layer (NLCD 1992), a circa 2001 50-state/Puerto Rico updated U.S. land cover database (NLCD 2001) with three layers including thematic land cover, percent imperviousness, and percent tree canopy, and a 1992/2001 Land Cover Change Retrofit Product. With these national data layers, there is often a 5-year time lag between the image capture date and product release. In some areas, the land cover can undergo significant change during production time, resulting in products that may be perpetually out of date. To address these issues, this circa 2006 NLCD land cover product (NLCD 2006) was conceived to meet user community needs for more frequent land cover monitoring (moving to a 5-year cycle) and to reduce the production time between image capture and product release. NLCD 2006 is designed to provide the user both updated land cover data and additional information that can be used to identify the pattern, nature, and magnitude of changes occurring between 2001 and 2006 for the conterminous United States at medium spatial resolution. For NLCD 2006, there are 3 primary data products: 1) NLCD 2006 Land Cover map; 2) NLCD 2001/2006 Change Pixels labeled with the 2006 land cover class; and 3) NLCD 2006 Percent Developed Imperviousness. Four additional data products were developed to provide supporting documentation and to provide information for land cover change analysis tasks: 4) NLCD 2001/2006 Percent Developed Imperviousness Change; 5) NLCD 2001/2006 Maximum Potential Change derived from the raw spectral change analysis; 6) NLCD 2001/2006 From-To Change pixels; and 7) NLCD 2006 Path/Row Index vector file showing the footprint of Landsat scene pairs used to derive 2001/2006 spectral change with change pair acquisition dates and scene identification numbers included in the attribute table. In addition to the 2006 data products listed in the paragraph above, two of the original release NLCD 2001 data products have been revised and reissued. Generation of NLCD 2006 data products helped to identify some update issues in the NLCD 2001 land cover and percent developed imperviousness data products. These issues were evaluated and corrected, necessitating a reissue of NLCD 2001 data products (NLCD 2001 Version 2.0) as part of the NLCD 2006 release. A majority of NLCD 2001 updates occur in coastal mapping zones where NLCD 2001 was published prior to the National Oceanic and Atmospheric Administration (NOAA) Coastal Change Analysis Program (C-CAP) 2001 land cover products. NOAA C-CAP 2001 land cover has now been seamlessly integrated with NLCD 2001 land cover for all coastal zones. NLCD 2001 percent developed imperviousness was also updated as part of this process. Land cover maps, derivatives and all associated documents are considered "provisional" until a formal accuracy assessment can be conducted. The NLCD 2006 is created on a path/row basis and mosaicked to create a seamless national product. Questions about the NLCD 2006 land cover product can be directed to the NLCD 2006 land cover mapping team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.
Facebook
TwitterThe harmonized data set on health, created and published by the ERF, is a subset of Iraq Household Socio Economic Survey (IHSES) 2012. It was derived from the household, individual and health modules, collected in the context of the above mentioned survey. The sample was then used to create a harmonized health survey, comparable with the Iraq Household Socio Economic Survey (IHSES) 2007 micro data set.
----> Overview of the Iraq Household Socio Economic Survey (IHSES) 2012:
Iraq is considered a leader in household expenditure and income surveys where the first was conducted in 1946 followed by surveys in 1954 and 1961. After the establishment of Central Statistical Organization, household expenditure and income surveys were carried out every 3-5 years in (1971/ 1972, 1976, 1979, 1984/ 1985, 1988, 1993, 2002 / 2007). Implementing the cooperation between CSO and WB, Central Statistical Organization (CSO) and Kurdistan Region Statistics Office (KRSO) launched fieldwork on IHSES on 1/1/2012. The survey was carried out over a full year covering all governorates including those in Kurdistan Region.
The survey has six main objectives. These objectives are:
The raw survey data provided by the Statistical Office were then harmonized by the Economic Research Forum, to create a comparable version with the 2006/2007 Household Socio Economic Survey in Iraq. Harmonization at this stage only included unifying variables' names, labels and some definitions. See: Iraq 2007 & 2012- Variables Mapping & Availability Matrix.pdf provided in the external resources for further information on the mapping of the original variables on the harmonized ones, in addition to more indications on the variables' availability in both survey years and relevant comments.
National coverage: Covering a sample of urban, rural and metropolitan areas in all the governorates including those in Kurdistan Region.
1- Household/family. 2- Individual/person.
The survey was carried out over a full year covering all governorates including those in Kurdistan Region.
Sample survey data [ssd]
----> Design:
Sample size was (25488) household for the whole Iraq, 216 households for each district of 118 districts, 2832 clusters each of which includes 9 households distributed on districts and governorates for rural and urban.
----> Sample frame:
Listing and numbering results of 2009-2010 Population and Housing Survey were adopted in all the governorates including Kurdistan Region as a frame to select households, the sample was selected in two stages: Stage 1: Primary sampling unit (blocks) within each stratum (district) for urban and rural were systematically selected with probability proportional to size to reach 2832 units (cluster). Stage two: 9 households from each primary sampling unit were selected to create a cluster, thus the sample size of total survey clusters was 25488 households distributed on the governorates, 216 households in each district.
----> Sampling Stages:
In each district, the sample was selected in two stages: Stage 1: based on 2010 listing and numbering frame 24 sample points were selected within each stratum through systematic sampling with probability proportional to size, in addition to the implicit breakdown urban and rural and geographic breakdown (sub-district, quarter, street, county, village and block). Stage 2: Using households as secondary sampling units, 9 households were selected from each sample point using systematic equal probability sampling. Sampling frames of each stages can be developed based on 2010 building listing and numbering without updating household lists. In some small districts, random selection processes of primary sampling may lead to select less than 24 units therefore a sampling unit is selected more than once , the selection may reach two cluster or more from the same enumeration unit when it is necessary.
Face-to-face [f2f]
----> Preparation:
The questionnaire of 2006 survey was adopted in designing the questionnaire of 2012 survey on which many revisions were made. Two rounds of pre-test were carried out. Revision were made based on the feedback of field work team, World Bank consultants and others, other revisions were made before final version was implemented in a pilot survey in September 2011. After the pilot survey implemented, other revisions were made in based on the challenges and feedbacks emerged during the implementation to implement the final version in the actual survey.
----> Questionnaire Parts:
The questionnaire consists of four parts each with several sections: Part 1: Socio – Economic Data: - Section 1: Household Roster - Section 2: Emigration - Section 3: Food Rations - Section 4: housing - Section 5: education - Section 6: health - Section 7: Physical measurements - Section 8: job seeking and previous job
Part 2: Monthly, Quarterly and Annual Expenditures: - Section 9: Expenditures on Non – Food Commodities and Services (past 30 days). - Section 10 : Expenditures on Non – Food Commodities and Services (past 90 days). - Section 11: Expenditures on Non – Food Commodities and Services (past 12 months). - Section 12: Expenditures on Non-food Frequent Food Stuff and Commodities (7 days). - Section 12, Table 1: Meals Had Within the Residential Unit. - Section 12, table 2: Number of Persons Participate in the Meals within Household Expenditure Other Than its Members.
Part 3: Income and Other Data: - Section 13: Job - Section 14: paid jobs - Section 15: Agriculture, forestry and fishing - Section 16: Household non – agricultural projects - Section 17: Income from ownership and transfers - Section 18: Durable goods - Section 19: Loans, advances and subsidies - Section 20: Shocks and strategy of dealing in the households - Section 21: Time use - Section 22: Justice - Section 23: Satisfaction in life - Section 24: Food consumption during past 7 days
Part 4: Diary of Daily Expenditures: Diary of expenditure is an essential component of this survey. It is left at the household to record all the daily purchases such as expenditures on food and frequent non-food items such as gasoline, newspapers…etc. during 7 days. Two pages were allocated for recording the expenditures of each day, thus the roster will be consists of 14 pages.
----> Raw Data:
Data Editing and Processing: To ensure accuracy and consistency, the data were edited at the following stages: 1. Interviewer: Checks all answers on the household questionnaire, confirming that they are clear and correct. 2. Local Supervisor: Checks to make sure that questions has been correctly completed. 3. Statistical analysis: After exporting data files from excel to SPSS, the Statistical Analysis Unit uses program commands to identify irregular or non-logical values in addition to auditing some variables. 4. World Bank consultants in coordination with the CSO data management team: the World Bank technical consultants use additional programs in SPSS and STAT to examine and correct remaining inconsistencies within the data files. The software detects errors by analyzing questionnaire items according to the expected parameter for each variable.
----> Harmonized Data:
Iraq Household Socio Economic Survey (IHSES) reached a total of 25488 households. Number of households refused to response was 305, response rate was 98.6%. The highest interview rates were in Ninevah and Muthanna (100%) while the lowest rates were in Sulaimaniya (92%).
Facebook
TwitterThis dataset provides locations of open spaces in London identified by research and data analysis as Privately Owned Public Spaces (POPS), based on the definition below and available data in 2017. This is not a fully comprehensive dataset and is based on multiple sources of information. Subsequent versions will provide updates as more information becomes available. Read more here. The dataset has been created by Greenspace Information for Greater London CIC (GiGL). GiGL mobilises, curates and shares data that underpin our knowledge of London’s natural environment. We provide impartial evidence to enable informed discussion and decision-making in policy and practice. GiGL maps under licence from the Greater London Authority. Research for this dataset has been assisted by The Guardian Cities team. Data sources Boundaries and attributes are based on GiGL’s Open Space dataset, which is a collated dataset of spatial and attribute information from various sources, including: habitat and open space survey information provided to GiGL by the GLA and London boroughs, borough open space survey information where provided to GiGL or available under open licence, other attribute information inferred from field visits or research. Available open space information has been analysed by GiGL to identify POPS included in this dataset. Future updates to the GiGL Open Space dataset will inform future, improved releases of the POPS dataset. Definition For the purposes of creating the dataset, POPS have been carefully defined as below. The definition is based on review of similar definitions internationally and appropriateness for application to available London data. Privately Owned Public Spaces (POPS): publicly accessible spaces which are provided and maintained by private developers, offices or residential building owners. They include city squares, atriums and small parks. The spaces provide several functional amenities for the public. They are free to enter and may be open 24 hours or have restricted access arrangements. Whilst the spaces look public, there are often constraints to use. For the Greater London dataset no consideration is taken as to a site’s formal status in planning considerations, and only unenclosed POPS are included. POPS may be destination spaces, which attract visitors from outside of the space’s immediate area and are designed for use by a broad audience, or neighbourhood spaces, which draw residents and employees from the immediate locale and are usually strongly linked with the adjacent street or host building. These spaces are of high quality and include a range of amenities. The POPS may also be a hiatus space, accommodating the passing user for a brief stop only – for example it may include seating but few other amenities, a circulation space, designed to improve a pedestrian’s journey from A to B, or a marginal space, which whilst a public space is not very accommodating and experiences low levels of usage. (Ref: Privately Owned Public Space: The New York City Experience, by Jerold S. Kayden, The New York City Department of City Planning, and the Municipal Art Society of New York, published by John Wiley & Sons, 2000). NOTE: The boundaries are based on Ordnance Survey mapping and the data is published under Ordnance Survey's 'presumption to publish'.Contains OS data © Crown copyright and database rights 2017.
Facebook
TwitterThe capstone was completed in Power Bi. Due to restrictions on sharing, I've made a powerpoint of the report that demonstrates the data in use and the insight gained from the research.
dailyActivity_merged contains a summary of daily activity such as total distance, intensities (i.e., very active, sedentary), and total minutes in intensities.
There is a discrepancy between the total distance and the sum of VeryActiveDistance, ModeratelyActiveDistance, LightActiveDistance and SedentaryActiveDistance. With an average of 5.489702122 miles in tracker distance, this can be off on average up to .077053 miles or 370 feet.
1.6% (15/940) of tracker distances listed do not match total distance. I will need clarification between total distance and tracker distance. For my report, I will be using total distance.
Aggregated daily data does not contain null values. No assumptions need to be made based on this.
98/940 records are <= 500 feet. 77/98 have a total of 0 steps and the remaining data is 0. A filter has been added to void records where total steps are <= 500.
I removed 5/12/2016 due to lack of sufficient user data.
dailyActivity_merged contains the same calories as dailyCalories_merged when using activity date and ID as a primary key.
dailyActivity_merged does not contain the same calories as hourlyCalories_merged when summing the calories per day in the hourly table.
PseudoData contains mock data I created for users. Pseudo names were created for the ID's to make data relatable for the audience. Teams were generated in the event the analysis discussed this possibility.
heartrate_seconds_merged contains heart rate value every 15 seconds over time.
I removed 5/12/2016 due to lack of sufficient user data. Events were averaged to the nearest hour. The windows function lag() was used to find time between events to determine usage. The visuals will show lag, or time when the device is not used, if it's greater than the total charge time, 2 hours.
hourlyCalories_merged contains calories per hour per ID. The Date and Time were separated into two columns.
Facebook
TwitterIdaho’s landscape-scale wetland condition assessment tool— Methods and applications in conservation and restoration planningLandscape-scale wetland threat and impairment assessment has been widely applied, both at the national level (NatureServe 2009) and in various states, including Colorado (Lemly et al. 2011), Delaware and Maryland (Tiner 2002 and 2005; Weller et al. 2007), Minnesota (Sands 2002), Montana (Daumiller 2003, Vance 2009), North Dakota (Mita et al. 2007), Ohio (Fennessy et al. 2007), Pennsylvania (Brooks et al. 2002 and 2004; Hychka et al. 2007; Wardrop et al. 2007), and South Dakota (Troelstrup and Stueven 2007). Most of these landscape-scale analyses use a relatively similar list of spatial layer inputs to calculate metrics for condition analyses. This is a cost-effective, objective way to obtain this information from all wetlands in a broad geographic area. Similar landscape-scale assessment projects in Idaho (Murphy and Schmidt 2010) used spatial analysis to estimate the relative condition of wetlands habitats throughout Idaho. Spatial data sources: Murphy and Schmidt (2010) reviewed literature and availability of spatial data to choose which spatial layers to include in their model of landscape integrity. Spatial layers preferably had statewide coverage for inclusion in the analysis. Nearly all spatial layers were downloaded from the statewide geospatial data clearinghouse, the Interactive Numeric and Spatial Information Data Engine for Idaho (INSIDE Idaho; http://inside.uidaho.edu/index.html). A complete list of layers used in the landscape integrity model is in Table 1. Statewide spatial layers were lacking for some important potential condition indicators, such as mine tailings, beaver presence, herbicide or pesticide use, non-native species abundance, nutrient loading, off-highway vehicle use, recreational and boating impacts, and sediment accumulation. Statewide spatial layers were also lacking for two presumably important potential indicators of wetland/riparian condition, recent timber harvest and livestock grazing. To rectify this, GIS models of potential recent timber harvest and livestock grazing were created using National Land Cover Data, grazing allotment maps, and NW ReGAP land cover maps. Calculation of landscape and disturbance metrics: We used a landscape integrity model approach similar to that used by Lemly et al. (2011), Vance (2009), and Faber-Langendoen et al. (2006). Spatial analysis in GIS was used to calculate human land use, or disturbance, metrics for every 30 m2 pixel across Idaho. A single raster layer that indicated threats and impairments for that pixel was produced. This was accomplished by first calculating the distance from each human land use category, development type, or disturbance for each pixel. This inverse weighted distance model is based on the assumption that ecological condition will be poorer in areas of the landscape with the most cumulative human activities and disturbances. Condition improves as you move toward least developed areas (Faber-Langendoen et al. 2006, Vance 2009, Lemly et al. 2011). Land uses or disturbances within 50 m were considered to have twice the impact of those 50 - 100 m away. For this model, land uses and disturbances > 100 m away were assumed to have zero or negligible impact. Because not all land uses impact wetlands the same way, weights for each land use or disturbance type were then determined using published literature (Hauer et al. 2002, Brown and Vivas 2005, Fennessy et al. 2007, Durkalec et al. 2009). A list of weights applied to each land use or disturbance type is in Table 2. A condition value for each pixel was then calculated. For example, the value for a pixel with a 2-lane highway and railroad within 50 m and a home and urban park between 50 and 100 m would be: Weight x Distance = Impact Factor2-lane highway = 7.81 2 15.62railroad = 7.81 2 + 15.62single family home - low density = 6.91 1 + 6.91recreation / open space - medium intensity = 4.38 1 + 4.38 Total Disturbance Value = 42.53The integrity of each pixel was then ranked relative to all others in Idhao using methods analogous to Stoddard et al. (2005), Fennessy et al. (2007), Mita et al. (2007), and Troelstrup and Stueven (2007). Five condition categories based on the sum of weighted impacts present in each pixel were used: 1 = minimally disturbed (top 1% of wetlands); wetland present in the absence or near absence of human disturbances; zero to few stressors are present; land use is almost completely not human-created; equivalent to reference condition; conservation priority;2 = lightly disturbed (2 - 5%); wetland deviates the least from that in the minimally disturbed class based on existing landscape impacts; few stressors are present; majority of land use is not human-created; these are the best wetlands in areas where human influences are present; ecosystem processes and functions are within natural ranges of variation found in the reference condition, but threats exist; conservation and/or restoration priority; 3 = moderately disturbed (6 - 15%); several stressors are present; land use is roughly split between human-created and non-human land use; ecosystem processes and functions are impaired and somewhat outside the range of variation found in the reference condition, but are still present; ecosystem processes are restorable;4 = severely disturbed (16 - 40%); numerous stressors are present; land use is majority human-created; ecosystem processes and functions are severely altered or disrupted and outside the range of variation found in the reference condition; ecosystem processes are restorable, but may require large investments of energy and money for successful restoration; 5 = completely disturbed (bottom 41 - 100%); many stressors are present; land use is nearly completely human-created; ecosystem processes and functions are disrupted and outside the range of variation in the reference condition; ecosystem processes are very difficult to restore.The resulting layer was then filtered using the map of potential wetland occurrence to show only those pixels potentially supporting wetlands.Results of GIS landscape-scale assessment were verified by comparing results with the condition of wetlands determined by in the field using rapid assessment methods. The landscape assessment matched the rapidly assessed condition estimated in the field 61% of the time (Murphy et al. 2012). Thirty-one percent of the sites were misclassified by one condition class and 8% misclassified by two condition classes. These results were similar to an accuracy assessment of landscape scale assessment performed by Mita et al. (2007) in North Dakota. When sites classified correctly and those only off by one condition class were combined (92% of the samples), results were similar to Vance (2009) in Montana (85%). The model of landscape integrity performed much better than the initial prototype model produced for Idaho by Murphy and Schmidt (2010).
Facebook
TwitterThe National Child Development Study (NCDS) is a continuing longitudinal study that seeks to follow the lives of all those living in Great Britain who were born in one particular week in 1958. The aim of the study is to improve understanding of the factors affecting human development over the whole lifespan.
The NCDS has its origins in the Perinatal Mortality Survey (PMS) (the original PMS study is held at the UK Data Archive under SN 2137). This study was sponsored by the National Birthday Trust Fund and designed to examine the social and obstetric factors associated with stillbirth and death in early infancy among the 17,000 children born in England, Scotland and Wales in that one week. Selected data from the PMS form NCDS sweep 0, held alongside NCDS sweeps 1-3, under SN 5565.
Survey and Biomeasures Data (GN 33004):
To date there have been ten attempts to trace all members of the birth cohort in order to monitor their physical, educational and social development. The first three sweeps were carried out by the National Children's Bureau, in 1965, when respondents were aged 7, in 1969, aged 11, and in 1974, aged 16 (these sweeps form NCDS1-3, held together with NCDS0 under SN 5565). The fourth sweep, also carried out by the National Children's Bureau, was conducted in 1981, when respondents were aged 23 (held under SN 5566). In 1985 the NCDS moved to the Social Statistics Research Unit (SSRU) - now known as the Centre for Longitudinal Studies (CLS). The fifth sweep was carried out in 1991, when respondents were aged 33 (held under SN 5567). For the sixth sweep, conducted in 1999-2000, when respondents were aged 42 (NCDS6, held under SN 5578), fieldwork was combined with the 1999-2000 wave of the 1970 Birth Cohort Study (BCS70), which was also conducted by CLS (and held under GN 33229). The seventh sweep was conducted in 2004-2005 when the respondents were aged 46 (held under SN 5579), the eighth sweep was conducted in 2008-2009 when respondents were aged 50 (held under SN 6137), the ninth sweep was conducted in 2013 when respondents were aged 55 (held under SN 7669), and the tenth sweep was conducted in 2020-24 when the respondents were aged 60-64 (held under SN 9412).
A Secure Access version of the NCDS is available under SN 9413, containing detailed sensitive variables not available under Safeguarded access (currently only sweep 10 data). Variables include uncommon health conditions (including age at diagnosis), full employment codes and income/finance details, and specific life circumstances (e.g. pregnancy details, year/age of emigration from GB).
Four separate datasets covering responses to NCDS over all sweeps are available. National Child Development Deaths Dataset: Special Licence Access (SN 7717) covers deaths; National Child Development Study Response and Outcomes Dataset (SN 5560) covers all other responses and outcomes; National Child Development Study: Partnership Histories (SN 6940) includes data on live-in relationships; and National Child Development Study: Activity Histories (SN 6942) covers work and non-work activities. Users are advised to order these studies alongside the other waves of NCDS.
From 2002-2004, a Biomedical Survey was completed and is available under Safeguarded Licence (SN 8731) and Special Licence (SL) (SN 5594). Proteomics analyses of blood samples are available under SL SN 9254.
Linked Geographical Data (GN 33497):
A number of geographical variables are available, under more restrictive access conditions, which can be linked to the NCDS EUL and SL access studies.
Linked Administrative Data (GN 33396):
A number of linked administrative datasets are available, under more restrictive access conditions, which can be linked to the NCDS EUL and SL access studies. These include a Deaths dataset (SN 7717) available under SL and the Linked Health Administrative Datasets (SN 8697) available under Secure Access.
Multi-omics Data and Risk Scores Data (GN 33592)
Proteomics analyses were run on the blood samples collected from NCDS participants in 2002-2004 and are available under SL SN 9254. Metabolomics analyses were conducted on respondents of sweep 10 and are available under SL SN 9411. Polygenic indices are available under SL SN 9439. Derived summary scores have been created that combine the estimated effects of many different genes on a specific trait or characteristic, such as a person's risk of Alzheimer's disease, asthma, substance abuse, or mental health disorders, for example. These scores can be combined with existing survey data to offer a more nuanced understanding of how cohort members' outcomes may be shaped.
Additional Sub-Studies (GN 33562):
In addition to the main NCDS sweeps, further studies have also been conducted on a range of subjects such as parent migration, unemployment, behavioural studies and respondent essays. The full list of NCDS studies available from the UK Data Service can be found on the NCDS series access data webpage.
How to access genetic and/or bio-medical sample data from a range of longitudinal surveys:
For information on how to access biomedical data from NCDS that are not held at the UKDS, see the CLS Genetic data and biological samples webpage.
Further information about the full NCDS series can be found on the Centre for Longitudinal Studies website.
The National Child Development Study: Biomedical Survey 2002-2004 was funded under the Medical Research Council 'Health of the Public' initiative, and was carried out in 2002-2004 in collaboration with the Institute of Child Health, St George's Hospital Medical School, and NatCen. The survey was designed to obtain objective measures of ill-health and biomedical risk factors in order to address a wide range of specific hypotheses relating to anthropometry: cardiovascular, respiratory and allergic diseases; visual and hearing impairment; and mental ill-health.
The majority of the biomedical data (1,064 variables) are now available under EUL (SN 8731), with some data considered sensitive still available under Special Licence (SN 5594). This decision was the result of the CLS's disclosure assessment of each variable and the broad aim to make as much data available with the lowest possible barriers. Information about the medication taken by the cohort members of the study is also available under EUL for the first time. These data were collected in 2002-2004, but they were never released via the UKDS.
Facebook
Twitterhttps://www.technavio.com/content/privacy-noticehttps://www.technavio.com/content/privacy-notice
Download Free Sample
The private cloud services market is expected to grow at a CAGR of 25% during the forecast period. Growing adoption of cloud among smes, drivers.2, and drivers.3 are some of the significant factors fueling private cloud services market growth.
Growing adoption of cloud among smes
To estimate the size of the global private cloud services market, Technavio has tracked the recent trends and developments in the industry. The market size has been developed in terms of value by considering the following factors: The market size has been calculated based on investments made by cloud service providers, colocation service providers, and enterprises in setting up new data centers or upgrading their existing data centers. The market size excludes all discounts and allowances, and government subsidies. Revenues: Taken in local currencies, if not available in US dollars, for each country and vendor and then converted to US dollars using the yearly average currency exchange rate of 2019, the base year. This implies that the figures reflect industry trends, not distorted by fluctuations in international exchange rates. Exclusions: The report does not consider the effect of inflation and price fluctuation over the forecast period. Currency: Unless explicitly mentioned, all revenues are represented in US dollars. The market sizing has been built and validated using multiple demand-side and supply-side approaches for a detailed understanding of the global private cloud services market. The specific market sizing approaches used for evaluating the global private cloud services market are: Top down: Validated the market on the basis of the contribution of global private cloud services market to the overall IT spending market Bottom up: Validated the market on the basis of the revenue of key technology solution providers from the global private cloud services market. Combination: Using a combination of more than one approaches described above and integrating the results in a data model Within the above-mentioned market sizing models, analysts have made assumptions and estimates listed below: Extensive use of private cloud services for storage purpose Stringent government regulations For this report, we have also used the following macro data in modeling the market size for 2019: GDP growth Mobile penetration rates Internet penetration rates Broadband penetration rates Based on the above data models, Technavio has estimated the total market for private cloud services as $34.14 billion in 2019.
Facebook
TwitterAttribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The collection of in situ data is generally a costly process, with the Arctic being no exception. Indeed, there has been a perception that the Arctic lacks for in situ sampling; however, after many years of concerted effort and international collaboration, the Arctic is now rather well sampled with many cruise expeditions every year. For example, the GLODAP product has a greater density of in situ sample points within the Arctic than along the equator. While this is useful for open ocean processes, the fjords of the Arctic, which serve as crucially important intersections of terrestrial, coastal, and marine processes, are sampled in a much more ad hoc process. This is not to say they are not well sampled, but rather that the data are more difficult to source and combine for further analysis. It was therefore noted that the fjords of the Arctic are lacking in FAIR (Findable, Accessible, Interoperable, and Reusable) data. To address this issue a single dataset has been created from publicly available, predominantly in situ data from a number of online platforms. After finding and accessing the data, they were amalgamated into a single project-wide standard, ensuring their interoperability. The dataset was then uploaded to PANGAEA so that it itself can be findable and reusable into the future. The focus of the data collection was driven by the key drivers of change in Arctic fjords identified in a companion review paper. This dataset is a work in progress and as new datasets containing the relevant key drivers are released they will be added to an updated version planned for late 2024.
Facebook
TwitterODC Public Domain Dedication and Licence (PDDL) v1.0http://www.opendatacommons.org/licenses/pddl/1.0/
License information was derived automatically
A. SUMMARY This archived dataset includes data for population characteristics that are no longer being reported publicly. The date on which each population characteristic type was archived can be found in the field “data_loaded_at”.
B. HOW THE DATASET IS CREATED Data on the population characteristics of COVID-19 cases are from: * Case interviews * Laboratories * Medical providers These multiple streams of data are merged, deduplicated, and undergo data verification processes.
Race/ethnicity * We include all race/ethnicity categories that are collected for COVID-19 cases. * The population estimates for the "Other" or “Multi-racial” groups should be considered with caution. The Census definition is likely not exactly aligned with how the City collects this data. For that reason, we do not recommend calculating population rates for these groups.
Gender * The City collects information on gender identity using these guidelines.
Skilled Nursing Facility (SNF) occupancy * A Skilled Nursing Facility (SNF) is a type of long-term care facility that provides care to individuals, generally in their 60s and older, who need functional assistance in their daily lives. * This dataset includes data for COVID-19 cases reported in Skilled Nursing Facilities (SNFs) through 12/31/2022, archived on 1/5/2023. These data were identified where “Characteristic_Type” = ‘Skilled Nursing Facility Occupancy’.
Sexual orientation * The City began asking adults 18 years old or older for their sexual orientation identification during case interviews as of April 28, 2020. Sexual orientation data prior to this date is unavailable. * The City doesn’t collect or report information about sexual orientation for persons under 12 years of age. * Case investigation interviews transitioned to the California Department of Public Health, Virtual Assistant information gathering beginning December 2021. The Virtual Assistant is only sent to adults who are 18+ years old. https://www.sfdph.org/dph/files/PoliciesProcedures/COM9_SexualOrientationGuidelines.pdf">Learn more about our data collection guidelines pertaining to sexual orientation.
Comorbidities * Underlying conditions are reported when a person has one or more underlying health conditions at the time of diagnosis or death.
Homelessness Persons are identified as homeless based on several data sources: * self-reported living situation * the location at the time of testing * Department of Public Health homelessness and health databases * Residents in Single-Room Occupancy hotels are not included in these figures. These methods serve as an estimate of persons experiencing homelessness. They may not meet other homelessness definitions.
Single Room Occupancy (SRO) tenancy * SRO buildings are defined by the San Francisco Housing Code as having six or more "residential guest rooms" which may be attached to shared bathrooms, kitchens, and living spaces. * The details of a person's living arrangements are verified during case interviews.
Transmission Type * Information on transmission of COVID-19 is based on case interviews with individuals who have a confirmed positive test. Individuals are asked if they have been in close contact with a known COVID-19 case. If they answer yes, transmission category is recorded as contact with a known case. If they report no contact with a known case, transmission category is recorded as community transmission. If the case is not interviewed or was not asked the question, they are counted as unknown.
C. UPDATE PROCESS This dataset has been archived and will no longer update as of 9/11/2023.
D. HOW TO USE THIS DATASET Population estimates are only available for age groups and race/ethnicity categories. San Francisco population estimates for race/ethnicity and age groups can be found in a view based on the San Francisco Population and Demographic Census dataset. These population estimates are from the 2016-2020 5-year American Community Survey (ACS).
This dataset includes many different types of characteristics. Filter the “Characteristic Type” column to explore a topic area. Then, the “Characteristic Group” column shows each group or category within that topic area and the number of cases on each date.
New cases are the count of cases within that characteristic group where the positive tests were collected on that specific specimen collection date. Cumulative cases are the running total of all San Francisco cases in that characteristic group up to the specimen collection date listed.
This data may not be immediately available for recently reported cases. Data updates as more information becomes available.
To explore data on the total number of cases, use the ARCHIVED: COVID-19 Cases Over Time dataset.
E. CHANGE LOG
Facebook
Twitterhttps://search.gesis.org/research_data/datasearch-httpwww-da-ra-deoaip--oaioai-da-ra-de456864https://search.gesis.org/research_data/datasearch-httpwww-da-ra-deoaip--oaioai-da-ra-de456864
Abstract (en): The purpose of this data collection is to provide an official public record of the business of the federal courts. The data originate from 94 district and 12 appellate court offices throughout the United States. Information was obtained at two points in the life of a case: filing and termination. The termination data contain information on both filing and terminations, while the pending data contain only filing information. For the appellate and civil data, the unit of analysis is a single case. The unit of analysis for the criminal data is a single defendant. ICPSR data undergo a confidentiality review and are altered when necessary to limit the risk of disclosure. ICPSR also routinely creates ready-to-go data files along with setups in the major statistical software formats as well as standard codebooks to accompany the data. In addition to these procedures, ICPSR performed the following processing steps for this data collection: Performed consistency checks.; Standardized missing values.; Checked for undocumented or out-of-range codes.. All federal court cases, 1970-2000. 2012-05-22 All parts are being moved to restricted access and will be available only using the restricted access procedures.2005-04-29 The codebook files in Parts 57, 94, and 95 have undergone minor edits and been incorporated with their respective datasets. The SAS files in Parts 90, 91, 227, and 229-231 have undergone minor edits and been incorporated with their respective datasets. The SPSS files in Parts 92, 93, 226, and 228 have undergone minor edits and been incorporated with their respective datasets. Parts 15-28, 34-56, 61-66, 70-75, 82-89, 96-105, 107, 108, and 115-121 have had identifying information removed from the public use file and restricted data files that still include that information have been created. These parts have had their SPSS, SAS, and PDF codebook files updated to reflect the change. The data, SPSS, and SAS files for Parts 34-37 have been updated from OSIRIS to LRECL format. The codebook files for Parts 109-113 have been updated. The case counts for Parts 61-66 and 71-75 have been corrected in the study description. The LRECL for Parts 82, 100-102, and 105 have been corrected in the study description.2003-04-03 A codebook was created for Part 105, Civil Pending, 1997. Parts 232-233, SAS and SPSS setup files for Civil Data, 1996-1997, were removed from the collection since the civil data files for those years have corresponding SAS and SPSS setup files.2002-04-25 Criminal data files for Parts 109-113 have all been replaced with updated files. The updated files contain Criminal Terminations and Criminal Pending data in one file for the years 1996-2000. Part 114, originally Criminal Pending 2000, has been removed from the study and the 2000 pending data are now included in Part 113.2001-08-13 The following data files were revised to include plaintiff and defendant information: Appellate Terminations, 2000 (Part 107), Appellate Pending, 2000 (Part 108), Civil Terminations, 1996-2000 (Parts 103, 104, 115-117), and Civil Pending, 2000 (Part 118). The corresponding SAS and SPSS setup files and PDF codebooks have also been edited.2001-04-12 Criminal Terminations (Parts 109-113) data for 1996-2000 and Criminal Pending (Part 114) data for 2000 have been added to the data collection, along with corresponding SAS and SPSS setup files and PDF codebooks.2001-03-26 Appellate Terminations (Part 107) and Appellate Pending (Part 108) data for 2000 have been added to the data collection, along with corresponding SAS and SPSS setup files and PDF codebooks.1997-07-16 The data for 18 of the Criminal Data files were matched to the wrong part numbers and names, and now have been corrected. Funding insitution(s): United States Department of Justice. Office of Justice Programs. Bureau of Justice Statistics. (1) Several, but not all, of these record counts include a final blank record. Researchers may want to detect this occurrence and eliminate this record before analysis. (2) In July 1984, a major change in the recording and disposition of an appeal occurred, and several data fields dealing with disposition were restructured or replaced. The new structure more clearly delineates mutually exclusive dispositions. Researchers must exercise care in using these fields for comparisons. (3) In 1992, the Administrative Office of the United States Courts changed the reporting period for statistical data. Up to 1992, the reporting period...
Facebook
TwitterCC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This dataset contains a subset of data from our Building Permit Application dataset. The data has been filtered to only include permit applications for solar (permit type = PVRS).To learn more about solar energy in Cary check out our Solar Energy webpage.This file is created from the Town of Cary permit application data. It has been created to conform to the BLDS open data specification for building permit data (permitdata.org). In the Town of Cary a permit application may result in the creation of several permits. Rows in this table represent applications for permits, not individual permits. Individual permits may be released as a separate dataset. With the exception of a few fields, we have applied all of the required and preferred fields of the required dataset for permits. This data is updated daily.Used as a part of the Solar, Cary, and You Dashboard
Facebook
TwitterMap Information
This nowCOAST time-enabled map service provides maps depicting the latest global forecast guidance of water currents, water temperature, and salinity at forecast projections: 0, 12, 24, 36, 48, 60, 72, 84, and 96-hours from the NWS/NCEP Global Real-Time Ocean Forecast System (GRTOFS). The surface water currents velocity maps displays the direction using white or black streaklets. The magnitude of the current is indicated by the length and width of the streaklet. The maps of the GRTOFS surface forecast guidance are updated on the nowCOAST map service once per day. For more detailed information about the update schedule, see: http://new.nowcoast.noaa.gov/help/#section=updateschedule
Background Information
GRTOFS is based on the Hybrid Coordinates Ocean Model (HYCOM), an eddy resolving, hybrid coordinate numerical ocean prediction model.
GRTOFS has global coverge and a horizontal resolution of 1/12 degree and 32 hybrid vertical layers. It has one forecast cycle per day
(i.e. 0000 UTC) which generates forecast guidance out to 144 hours (6 days). However, nowCOAST only provides guidance out to 96 hours (4 days).
The forecast cycle uses 3-hourly momentum and radiation fluxes along
with precipitation predictions from the NCEP Global Forecast System (GFS). Each forecast cycle is preceded with a 48-hr long nowcast cycle.
The nowcast cycle uses daily initial 3-D fields from the NAVOCEANO operational HYCOM-based forecast system which assimilates situ profiles
of temperature and salinity from a variety of sources and remotely sensed SST, SSH and sea-ice concentrations. GRTOFS was developed by
NCEP/EMC/Marine Modeling and Analysis Programs. GRTOFS is run once per day (0000 UTC forecast cycle) on the NOAA Weather and Climate Operational Supercomputer System
(WCOSS) operated by NWS/NCEP Central Operations.
The maps are generated using a visualization technique was developed by the Data Visualization Research Lab at The University of New Hampshire Center for Coastal and Ocean Mapping (http://www.ccom.unh.edu/vislab/). The method combines two techniques. First, equally spaced streamlines are computed in the flow field using Jobard and Lefer's (1977) algorithm. Second, a series of "streaklets" are rendered head to tail along each streamline to show the direction of flow. Each of these varies along its length in size, color and transparency using a method developed by Fowler and Ware (1989), and later refined by Mr. Pete Mitchell and Dr. Colin Ware (Mitchell, 2007).
Time Information
This map is time-enabled, meaning that each individual layer contains time-varying data and can be utilized by clients capable of making map requests that include a time component.
This particular service can be queried with or without the use of a time component. If the time parameter is specified in a request, the data or imagery most relevant to the provided time value, if any, will be returned. If the time parameter is not specified in a request, the latest data or imagery valid for the present system time will be returned to the client. If the time parameter is not specified and no data or imagery is available for the present time, no data will be returned.
In addition to ArcGIS Server REST access, time-enabled OGC WMS 1.3.0 access is also provided by this service.
Due to software limitations, the time extent of the service and map layers displayed below does not provide the most up-to-date start and end times of available data. Instead, users have three options for determining the latest time information about the service:
Issue a returnUpdates=true request for an individual layer or for
the service itself, which will return the current start and end times of
available data, in epoch time format (milliseconds since 00:00 January 1,
1970). To see an example, click on the "Return Updates" link at the bottom of
this page under "Supported Operations". Refer to the
ArcGIS REST API Map Service Documentation
for more information.
Issue an Identify (ArcGIS REST) or GetFeatureInfo (WMS) request against
the proper layer corresponding with the target dataset. For raster
data, this would be the "Image Footprints with Time Attributes" layer
in the same group as the target "Image" layer being displayed. For
vector (point, line, or polygon) data, the target layer can be queried
directly. In either case, the attributes returned for the matching
raster(s) or vector feature(s) will include the following:
validtime: Valid timestamp.
starttime: Display start time.
endtime: Display end time.
reftime: Reference time (sometimes reffered to as
issuance time, cycle time, or initialization time).
projmins: Number of minutes from reference time to valid
time.
desigreftime: Designated reference time; used as a
common reference time for all items when individual reference
times do not match.
desigprojmins: Number of minutes from designated
reference time to valid time.
Query the nowCOAST LayerInfo web service, which has been created to
provide additional information about each data layer in a service,
including a list of all available "time stops" (i.e. "valid times"),
individual timestamps, or the valid time of a layer's latest available
data (i.e. "Product Time"). For more information about the LayerInfo
web service, including examples of various types of requests, refer to
the nowCOAST help documentation at:
http://new.nowcoast.noaa.gov/help/#section=layerinfo
References
Fowler, D. and C. Ware, 1989: Strokes for Representing Vector Field Maps. Proceedings: Graphics Interface '98 249-253. Jobard, B and W. Lefer,1977: Creating evenly spaced streamlines of arbitrary density. Proceedings: Eurographics workshop on Visualization in Scientific Computing. 43-55. Mitchell, P.W., 2007: The Perceptual optimization of 2D Flow Visualizations Using Human in the Loop Local Hill Climbing. University of New Hampshire Masters Thesis. Department of Computer Science. NWS, 2013: About Global RTOFS, NCEP/EMC/MMAB, College Park, MD (Available at http://polar.ncep.noaa.gov/global/about/). Chassignet, E.P., H.E. Hurlburt, E.J. Metzger, O.M. Smedstad, J. Cummings, G.R. Halliwell, R. Bleck, R. Baraille, A.J. Wallcraft, C. Lozano, H.L. Tolman, A. Srinivasan, S. Hankin, P. Cornillon, R. Weisberg, A. Barth, R. He, F. Werner, and J. Wilkin, 2009: U.S. GODAE: Global Ocean Prediction with the HYbrid Coordinate Ocean Model (HYCOM). Oceanography, 22(2), 64-75. Mehra, A, I. Rivin, H. Tolman, T. Spindler, and B. Balasubramaniyan, 2011: A Real-Time Operational Global Ocean Forecast System, Poster, GODAE OceanView –GSOP-CLIVAR Workshop in Observing System Evaluation and Intercomparisons, Santa Cruz, CA.
Facebook
TwitterThe National Child Development Study (NCDS) is a continuing longitudinal study that seeks to follow the lives of all those living in Great Britain who were born in one particular week in 1958. The aim of the study is to improve understanding of the factors affecting human development over the whole lifespan.
The NCDS has its origins in the Perinatal Mortality Survey (PMS) (the original PMS study is held at the UK Data Archive under SN 2137). This study was sponsored by the National Birthday Trust Fund and designed to examine the social and obstetric factors associated with stillbirth and death in early infancy among the 17,000 children born in England, Scotland and Wales in that one week. Selected data from the PMS form NCDS sweep 0, held alongside NCDS sweeps 1-3, under SN 5565.
Survey and Biomeasures Data (GN 33004):
To date there have been ten attempts to trace all members of the birth cohort in order to monitor their physical, educational and social development. The first three sweeps were carried out by the National Children's Bureau, in 1965, when respondents were aged 7, in 1969, aged 11, and in 1974, aged 16 (these sweeps form NCDS1-3, held together with NCDS0 under SN 5565). The fourth sweep, also carried out by the National Children's Bureau, was conducted in 1981, when respondents were aged 23 (held under SN 5566). In 1985 the NCDS moved to the Social Statistics Research Unit (SSRU) - now known as the Centre for Longitudinal Studies (CLS). The fifth sweep was carried out in 1991, when respondents were aged 33 (held under SN 5567). For the sixth sweep, conducted in 1999-2000, when respondents were aged 42 (NCDS6, held under SN 5578), fieldwork was combined with the 1999-2000 wave of the 1970 Birth Cohort Study (BCS70), which was also conducted by CLS (and held under GN 33229). The seventh sweep was conducted in 2004-2005 when the respondents were aged 46 (held under SN 5579), the eighth sweep was conducted in 2008-2009 when respondents were aged 50 (held under SN 6137), the ninth sweep was conducted in 2013 when respondents were aged 55 (held under SN 7669), and the tenth sweep was conducted in 2020-24 when the respondents were aged 60-64 (held under SN 9412).
A Secure Access version of the NCDS is available under SN 9413, containing detailed sensitive variables not available under Safeguarded access (currently only sweep 10 data). Variables include uncommon health conditions (including age at diagnosis), full employment codes and income/finance details, and specific life circumstances (e.g. pregnancy details, year/age of emigration from GB).
Four separate datasets covering responses to NCDS over all sweeps are available. National Child Development Deaths Dataset: Special Licence Access (SN 7717) covers deaths; National Child Development Study Response and Outcomes Dataset (SN 5560) covers all other responses and outcomes; National Child Development Study: Partnership Histories (SN 6940) includes data on live-in relationships; and National Child Development Study: Activity Histories (SN 6942) covers work and non-work activities. Users are advised to order these studies alongside the other waves of NCDS.
From 2002-2004, a Biomedical Survey was completed and is available under Safeguarded Licence (SN 8731) and Special Licence (SL) (SN 5594). Proteomics analyses of blood samples are available under SL SN 9254.
Linked Geographical Data (GN 33497):
A number of geographical variables are available, under more restrictive access conditions, which can be linked to the NCDS EUL and SL access studies.
Linked Administrative Data (GN 33396):
A number of linked administrative datasets are available, under more restrictive access conditions, which can be linked to the NCDS EUL and SL access studies. These include a Deaths dataset (SN 7717) available under SL and the Linked Health Administrative Datasets (SN 8697) available under Secure Access.
Multi-omics Data and Risk Scores Data (GN 33592)
Proteomics analyses were run on the blood samples collected from NCDS participants in 2002-2004 and are available under SL SN 9254. Metabolomics analyses were conducted on respondents of sweep 10 and are available under SL SN 9411. Polygenic indices are available under SL SN 9439. Derived summary scores have been created that combine the estimated effects of many different genes on a specific trait or characteristic, such as a person's risk of Alzheimer's disease, asthma, substance abuse, or mental health disorders, for example. These scores can be combined with existing survey data to offer a more nuanced understanding of how cohort members' outcomes may be shaped.
Additional Sub-Studies (GN 33562):
In addition to the main NCDS sweeps, further studies have also been conducted on a range of subjects such as parent migration, unemployment, behavioural studies and respondent essays. The full list of NCDS studies available from the UK Data Service can be found on the NCDS series access data webpage.
How to access genetic and/or bio-medical sample data from a range of longitudinal surveys:
For information on how to access biomedical data from NCDS that are not held at the UKDS, see the CLS Genetic data and biological samples webpage.
Further information about the full NCDS series can be found on the Centre for Longitudinal Studies website.
The National Child Development Study: Age 55, Sweep 9 Geographical Identifiers, 2011 Census Boundaries, 2013-2014: Secure Access data held under SN 7869 include sweep 9 detailed geographical variables that can be linked to the NCDS End User Licence (EUL) and Special Licence (SL) access studies listed on the NCDS series page. Besides SN 7669 - National Child Development Study: Age 55, Sweep 9, 2013, which is provided by default, users should indicate on their ESRC Research Proposal form all other Safeguarded dataset(s) that they wish to access alongside the study.
International Data Access Network (IDAN)
These data are now available to researchers based outside the UK. Selected UKDS SecureLab/controlled datasets from the Institute for Social and Economic Research (ISER) and the Centre for Longitudinal Studies (CLS) have been made available under the International Data Access Network (IDAN) scheme, via a Safe Room access point at one of the UKDS IDAN partners. Prospective users should read the UKDS SecureLab application guide for non-ONS data for researchers outside of the UK via Safe Room Remote Desktop Access. Further details about the IDAN scheme can be found on the UKDS International Data Access Network webpage and on the IDAN website.
Facebook
TwitterAttribution-ShareAlike 4.0 (CC BY-SA 4.0)https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
The Sikyon Survey Project is a fully integrated multidisciplinary research program to study the human presence and activity on the plateau of ancient Sikyon, a city in northeastern Peloponnese between Corinth and Achaia. The urban survey was begun in the summer of 2004 by the University of Thessaly in collaboration with the 37th Ephoreia of Prehistoric and Classical Antiquities, the Institute of Mediterranean Studies at FORTH, and the University of York (UK). A previous extensive regional survey was conducted between 1996 and 2002 in the ca. 360 km2 territory of the ancient city. The goal of the research was twofold: the primary aim was to produce a multidisciplinary study of the intra-mural area across the ages, and to trace human presence and activity from prehistoric times to the early modern era. The second and more broad-ranging aim was to investigate the plateau in its context within the landscape and thus build upon the framework of the previous extensive survey of the territory of Sikyon.
Background
The plateau, which rises some 3.5 km southwest of the Corinthian gulf between the Asopos and Helisson rivers, was according to the ancient sources the acropolis of the Archaic and Classical city, which was itself located on the coast. In Archaic times, when ruled by the tyrannic family of the Orthagorids, Sikyon was one of the most powerful states of the Greek world and a cradle of the arts. Its artistic reputation carried on through the Classical and Hellenistic ages thanks to such famous painters and sculptors as Pausias, Kanachos and Lysippos. In 303 BCE, Demetrios Poliorketes, son of Antigonos I, destroyed the city in the plain and transferred it to the site of its acropolis. This initiative, beyond its practical purposes, conveyed a strong political message since Sikyon-Demetrias is one of the only two cities ever founded, or more precisely refounded, by a Macedonian ruler in the Peloponnese. The city grew in its new setting during the Hellenistic and Roman periods and witnessed a golden age in the third century BCE under general Aratos, head of the Achaian Confederacy. During the Roman Empire Sikyon lived in the shadow of Corinth, which was the capital of the province of Achaia. Likewise her bishopric attested already from the late 4th century CE depended on the archibishopric of Corinth. After the collapse of the Roman Empire Sikyon appears again in sources related to Frankish possessions in the Corinthia of the 13th and 14th centuries, this time under the name of Vasilika or Vasiliko. The village of Vasiliko, which presently occupies the southeastern corner of the plateau, is often mentioned in archives of the Ottoman and Second Venetian period (15th-18th centuries).
Data
The data in this collection is the "dissemination" data for the project, which is data that can be easily and immediately re-used in the applications that created the data. It is understood that this data is unlikely to maintain usability in the long-term due to the nature of software/format obsolescence. A companion collection of "preservation" data (10.5281/zenodo.1054552) has been created which is an exact replica of the data content but in more stable open formats for long-term preservation. These collections will not be actively migrated like a proper archive kept at the ADS or tDAR, but best efforts have been made to ensure their re-usability.
The collection has the following data:
Database
Microsoft Access Database files of the project databases
An Entity Relationship Diagram showing table relationships
GIS
ESRI Shapefiles of the project spatial data
Geophysics
Geoplot data of the magnetometry data
Plots of the processed surveys
Ground Penetrating Radar data/plots
Photos
Square photographs, including pottery finds and landscape features
Tract photographs with vegetation and landscape conditions
Architectural Features
General photos of the team working in the field and off-site
Facebook
Twitterhttps://www.icpsr.umich.edu/web/ICPSR/studies/36489/termshttps://www.icpsr.umich.edu/web/ICPSR/studies/36489/terms
In 2008, the Administrative Office of the United States Courts (AOUSC) began implementing the NewSTATS (New Streamline Timely Access to Statistics) Project with respect to bankruptcy data. The project's goals were to modernize the system for collecting, processing, analyzing, and reporting statistics of the federal court system. Based on the records for bankruptcy cases in NewSTATS, a data base for internal use in the Research Division of the Federal Judicial Center has been created. That data base is the Bankruptcy Petition NewSTATS Snapshots [BPNS] Data Base.
Facebook
TwitterSocial media companies are starting to offer users the option to subscribe to their platforms in exchange for monthly fees. Until recently, social media has been predominantly free to use, with tech companies relying on advertising as their main revenue generator. However, advertising revenues have been dropping following the COVID-induced boom. As of July 2023, Meta Verified is the most costly of the subscription services, setting users back almost 15 U.S. dollars per month on iOS or Android. Twitter Blue costs between eight and 11 U.S. dollars per month and ensures users will receive the blue check mark, and have the ability to edit tweets and have NFT profile pictures. Snapchat+, drawing in four million users as of the second quarter of 2023, boasts a Story re-watch function, custom app icons, and a Snapchat+ badge.