Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Zoom is a dataset for object detection tasks; it contains 1 annotation for 437 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Action recognition has received increasing attention from the computer vision and machine learning communities over the last decades. The recognition task has evolved from single-view recordings under controlled laboratory environments to unconstrained environments (i.e., surveillance footage or user-generated videos). Furthermore, recent work has focused on other aspects of the action recognition problem, such as cross-view classification, cross-domain learning, multi-modality learning, and action localization. Despite this large variety of studies, we observed little work exploring the open-set and open-view classification problem, which is an inherent property of action recognition. In other words, a well-designed algorithm should robustly identify an unfamiliar action as “unknown” and achieve similar performance across sensors with similar fields of view. The Multi-Camera Action Dataset (MCAD) is designed to evaluate the open-view classification problem under a surveillance environment.
In our multi-camera action dataset, unlike common action datasets, we use a total of five cameras of two types (Static and PTZ) to record actions. Specifically, there are three static cameras (Cam04, Cam05, and Cam06) with fish-eye distortion and two Pan-Tilt-Zoom (PTZ) cameras (PTZ04 and PTZ06). The static cameras have a resolution of 1280×960 pixels, while the PTZ cameras have a resolution of 704×576 pixels and a smaller field of view. Moreover, we do not control the illumination: recordings were made under two contrasting conditions (daytime and nighttime), which makes our dataset more challenging than many datasets with strongly controlled illumination. The distribution of the cameras is shown in the picture on the right.
We identified 18 single-person daily actions, with and without objects, inherited from the KTH, IXMAS, and TRECVID datasets, among others. The list and definitions of the actions are shown in the table. These actions can be divided into four types: micro actions without an object (action IDs 01, 02, 05) and with an object (action IDs 10, 11, 12, 13), and intense actions without an object (action IDs 03, 04, 06, 07, 08, 09) and with an object (action IDs 14, 15, 16, 17, 18). We recruited a total of 20 human subjects. Each subject repeated each action 8 times (4 times during the day and 4 times in the evening) under each camera, and all five cameras recorded each action sample separately. During the recording stage we told subjects only the action name, so they could perform the action freely according to their own habits, provided they stayed within the field of view of the current camera. This brings our dataset much closer to reality; as a result, there is high intra-class variation among action samples, as shown in the picture of action samples.
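Based on the counts above, and assuming every repetition of every action is captured by all five cameras as described, the expected size of the corpus can be tallied with a quick sketch:

```python
# Hypothetical tally of MCAD recordings, assuming every repetition of every
# action is captured by all five cameras (as the description suggests).
subjects = 20
actions = 18          # 18 single-person daily actions
repetitions = 8       # 4 daytime + 4 nighttime per action
cameras = 5           # Cam04-06 (static) + PTZ04, PTZ06

samples_per_camera = subjects * actions * repetitions
total_clips = samples_per_camera * cameras
print(samples_per_camera, total_clips)  # 2880 14400
```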
URL: http://mmas.comp.nus.edu.sg/MCAD/MCAD.html
Resources:
How to Cite:
Please cite the following paper if you use the MCAD dataset in your work (papers, articles, reports, books, software, etc):
By Centers for Disease Control and Prevention [source]
This dataset offers an in-depth look into the National Health and Nutrition Examination Survey (NHANES), which provides valuable insights on various health indicators throughout the United States. It includes important information such as the year the data was collected, the location of the survey, the data source and value, priority areas of focus, the category and topic of the survey, breakout categories of data values, geographic location coordinates, and other key indicators. Discover patterns in mortality rates from cardiovascular disease, or analyze whether pregnant women are more likely to report poor health than those who are not expecting, with this NHANES dataset: a powerful collection for understanding personal health behaviors.
For more datasets, click here.
Step 1: Understand the Data Format - Before beginning to work with NHANES data, you should become familiar with the different columns in the dataset. Each column contains a specific type of information, such as the year collected, geographic location abbreviations and descriptions, the sources used for collecting the data, priority areas assigned by researchers or institutions studying health trends in a given area or population group, and indicator values related to nutrition and health.
Step 2: Choose an Indicator - Once you understand what is included in each column and what type of values correspond to each field, select which indicator(s) you would like to plot or visualize against the demographic and geographic characteristics represented by the NHANES data. Selecting an appropriate indicator helps narrow your search criteria when analyzing health and nutrition trends over time, across locations, or among demographic groups.
Step 3: Utilize Subsets - When narrowing down your search criteria, it may be beneficial to break large datasets into smaller subsets that focus on a single area or topic of study (e.g., nutrition trends among rural communities). This lets you zoom into particular slices of the data and drill down on topics relevant to your research objectives, without losing the broader context provided by the full dataset, which spans all locations and many years of NHANES records.
- Creating a health calculator to help people measure their health risk. The indicator and data value fields can be used to create an algorithm that will generate a personalized label for each user's health status.
- Developing a visual representation of the nutritional habits of different populations based on the DataSource, LocationAbbr, and PriorityArea fields from this dataset.
- Employing machine learning to discern patterns in the data or predict potential health risks in different regions or populations by using the GeoLocation field as inputs for geographic analysis.
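The subsetting described in Step 3 can be sketched in pandas. The toy frame below uses column names mentioned in this description (LocationAbbr, Topic, Data_Value); the real NHANES export may name its columns differently:

```python
import pandas as pd

# Toy stand-in for the NHANES export; real column names may differ.
df = pd.DataFrame({
    "Year": [2013, 2013, 2015, 2015],
    "LocationAbbr": ["US", "CA", "US", "CA"],
    "Topic": ["Nutrition", "Nutrition", "Nutrition", "Smoking"],
    "Data_Value": [42.0, 38.5, 44.1, 17.2],
})

# Step 3: carve out a subset for one topic in one location.
subset = df[(df["Topic"] == "Nutrition") & (df["LocationAbbr"] == "CA")]
print(subset["Data_Value"].tolist())  # [38.5]
```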
If you use this dataset in your research, please credit the original authors.
Unknown License - please check the dataset description for more information.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
This data is used for a broadband mapping initiative conducted by the Washington State Broadband Office. This dataset provides global fixed broadband and mobile (cellular) network performance metrics in zoom level 16 web mercator tiles (approximately 610.8 meters by 610.8 meters at the equator). Data is projected in EPSG:4326. Download speed, upload speed, and latency are collected via the Speedtest by Ookla applications for Android and iOS and averaged for each tile. Measurements are filtered to results containing GPS-quality location accuracy. The data was processed and published to ArcGIS Living Atlas by Esri.

About

Speedtest data is used today by commercial fixed and mobile network operators around the world to inform network buildout, improve global Internet quality, and increase Internet accessibility. Government regulators such as the United States Federal Communications Commission and the Malaysian Communications and Multimedia Commission use Speedtest data to hold telecommunications entities accountable and direct funds for rural and urban connectivity development. Ookla licenses data to NGOs and educational institutions to fulfill its mission: to help make the internet better, faster and more accessible for everyone. Ookla hopes to further this mission by distributing the data to make it easier for individuals and organizations to use it for the purposes of bridging the social and economic gaps between those with and without modern Internet access.

Data

Hundreds of millions of Speedtests are taken on the Ookla platform each month. In order to create a manageable dataset, raw data is aggregated into tiles. The size of a data tile is defined as a function of "zoom level" (or "z"). At z=0, the size of a tile is the size of the whole world. At z=1, the tile is split in half vertically and horizontally, creating 4 tiles that cover the globe.
This tile-splitting continues as zoom level increases, causing tiles to become exponentially smaller as we zoom into a given region. By this definition, tile sizes are actually some fraction of the width/height of Earth according to the Web Mercator projection (EPSG:3857). As such, tile size varies slightly depending on latitude, but tile sizes can be estimated in meters. For the purposes of these layers, a zoom level of 16 (z=16) is used for the tiling. This equates to a tile that is approximately 610.8 meters by 610.8 meters at the equator (18 arcsecond blocks). The geometry of each tile is represented in WGS 84 (EPSG:4326) in the tile field. The data can be found at: https://github.com/teamookla/ookla-open-data

Update Cadence

The tile aggregates start in Q1 2019 and go through the most recent quarter. They will be updated shortly after the conclusion of the quarter.

Esri Processing

This layer is a best-available aggregation of the original Ookla dataset: for each tile with data, the most recent data is used. For instance, if data is available for a tile for Q2 2019 and for Q4 2020, the Q4 2020 data is awarded to the tile. The default visualization for the layer is the "broadband index", a bivariate index based on both the average download speed and the average upload speed. For Mobile, the score is indexed to a standard of 25 megabits per second (Mbps) download and 3 Mbps upload. A tile with average Speedtest results of 25/3 Mbps is awarded 100 points. Tiles with average speeds above 25/3 are shown in green; tiles with average speeds below this are shown in fuchsia. For Fixed, the score is indexed to a standard of 100 Mbps download and 20 Mbps upload. A tile with average Speedtest results of 100/20 Mbps is awarded 100 points.
Tiles with average speeds above 100/20 are shown in green; tiles with average speeds below this are shown in fuchsia.

Tile Attributes

Each tile contains the following attributes: the year and quarter the tests were performed; the average download speed of all tests performed in the tile, in megabits per second; the average upload speed of all tests performed in the tile, in megabits per second; the average latency of all tests performed in the tile, in milliseconds; the number of tests taken in the tile; the number of unique devices contributing tests in the tile; and the quadkey representing the tile.

Quadkeys

Quadkeys can act as a unique identifier for the tile. This can be useful for joining data spatially across multiple periods (quarters), creating coarser spatial aggregations without using geospatial functions, spatial indexing, partitioning, and as an alternative for storing and deriving the tile geometry.

Layers

There are two layers: Ookla_Mobile_Tiles - tiles containing tests taken from mobile devices with GPS-quality location and a cellular connection type (e.g. 4G LTE, 5G NR); and Ookla_Fixed_Tiles - tiles containing tests taken from mobile devices with GPS-quality location and a non-cellular connection type (e.g. WiFi, ethernet). The layers are set to draw at scales 1:3,000,000 and larger.

Time Period and Update Frequency

Layers are generated based on a quarter year of data (three months), and files will be updated and added on a quarterly basis. A /year=2020/quarter=1/ period, the first quarter of 2020, would include all data generated on or after 2020-01-01 and before 2020-04-01.
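The zoom-level arithmetic and the quadkey encoding described above can be sketched as follows. This is a minimal illustration of the standard Web Mercator tiling scheme, not code shipped with the dataset:

```python
import math

# Width of the Web Mercator world in meters (Earth's equatorial circumference).
WORLD_M = 2 * math.pi * 6378137  # ~40,075,017 m

def tile_size_m(z: int) -> float:
    """Approximate tile edge length at the equator for zoom level z."""
    return WORLD_M / (2 ** z)

def quadkey(x: int, y: int, z: int) -> str:
    """Encode tile column x and row y at zoom z as a Bing-style quadkey."""
    digits = []
    for i in range(z, 0, -1):
        mask = 1 << (i - 1)
        digit = (1 if x & mask else 0) + (2 if y & mask else 0)
        digits.append(str(digit))
    return "".join(digits)

# Each z=16 tile spans roughly 611 m at the equator (~610.8 m per the docs),
# and its quadkey is a 16-digit base-4 string.
print(round(tile_size_m(16), 1))
print(quadkey(1, 1, 2))
```

Because each quadkey digit encodes one zoom level, truncating a z=16 quadkey to its first n digits yields the enclosing tile at zoom n, which is why quadkeys support coarser aggregation without geospatial functions.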
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In this research, we are interested in the use of what we call pan-scalar maps, i.e. interactive, zoomable, multi-scale maps such as Google Maps. Disorientation is a frequent experience when using pan-scalar maps, and the absence of consistent landmarks or anchors across scales can be one of its causes (Touya et al., 2023). As a consequence, within the virtual environment of a pan-scalar map, we hypothesize that map objects, parts of objects, or groups of objects can function comparably to anchors or landmarks in real space for spatialization purposes. To that end, we designed a user study in which participants were asked to draw on top of the memorable, salient landmarks they saw on the map.
- RAW_DATA contains 2 CSV files: the first contains all drawings, the second all participations.
- MAP_DRAWING contains all drawings split by view (location, style, zoom).
- DRAWING_ANCHORS splits drawings by view into pan-scalar anchors (Location, style, zoom, drawings_anchor).
- ANCHORS contains the vector delineation of pan-scalar anchors (Location, style, zoom, anchor).
- STATISTIC_DRAWING contains attribute information about the drawings (anchorness, presence, ...) in xls format (Location, style, zoom, drawings_statistics).
- BOUNDED_ANCHOR contains vector data for anchor lines drawn in the same hue (Location, style, zoom, bounded_anchor).
- WORFLOW_ANCHOR contains all QGIS workflows used for the AnchorWhat analysis.
- ILLUSTATIONS contains some illustrations from the AnchorWhat analysis.
Use the 3D Viewer template to showcase your scene with default 3D navigation tools, including zoom controls, pan, rotate, and compass. Include a locator map and bookmarks to provide context for your scene and guide app viewers to points of interest. Line of sight, measure, and slice tools allow viewers to interpret 3D data. Set the option to disable scrolling in the app to seamlessly embed this app in another app or site.

Examples: Present a detailed 3D view of a mountainous region at a large scale while the 2D inset map provides context of where you are in the world. Display a 3D plan for new urban development that app viewers can explore with slice and measurement tools. Allow users to visualize the impact of shadows on a scene using daylight animation.

Data requirements: The 3D Viewer template requires a web scene.

Key app capabilities:
- 3D navigation and Compass tool - allow app users to pan or rotate the scene and orient their view to north.
- Locator map - display an inset map showing the app's map area in the context of a broader area.
- Line of sight - visualize whether one or multiple targets are visible from an observer point.
- Measurement tools - measure distance and area, and find and convert coordinates.
- Slice - exclude specific layers to change the view of a scene.
- Bookmarks - provide a collection of preset extents saved in the scene to which users can navigate.
- Disable scroll - prevent the map from zooming when app users scroll.
- Language switcher - provide translations for custom text and create a multilingual app.
- Home, Zoom controls, Legend, Layer List, Search.

Supportability: This web app is designed responsively to be used in browsers on desktops, mobile phones, and tablets. We are committed to ongoing efforts towards making our apps as accessible as possible. Please feel free to leave a comment on how we can improve the accessibility of our apps for those who use assistive technologies.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains the supporting data for the following article: Zhang, C., Jepson, K.M., Lohfink, G. & Arvaniti, A. (2021). Comparing acoustic analyses of speech data collected remotely. The Journal of the Acoustical Society of America, 149 (6), 3910-3916. doi: https://doi.org/10.1121/10.0005132

Face-to-face speech data collection has been next to impossible globally as a result of the COVID-19 restrictions. To address this problem, simultaneous recordings of three repetitions of the cardinal vowels were made using a Zoom H6 Handy Recorder with an external microphone (henceforth, H6) and compared with two alternatives accessible to potential participants at home: the Zoom meeting application (henceforth, Zoom) and two lossless mobile phone applications (Awesome Voice Recorder, and Recorder; henceforth, Phone). F0 was tracked accurately by all of the devices; however, for formant analysis (F1, F2, F3), Phone performed better than Zoom, i.e., more similarly to H6, although the data extraction method (VoiceSauce, Praat) also resulted in differences. In addition, Zoom recordings exhibited unexpected drops in intensity. The results suggest that lossless-format phone recordings present a viable option for at least some phonetic studies.

This dataset contains two data files:
- "data_Praat.csv" contains data extracted using Praat
- "data_VoiceSauce.csv" contains data extracted using VoiceSauce

Full information on participants, materials, recording procedures, and measurements is given in the above-mentioned article.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Summary
This submission contains three tomographic datasets of a fragment of fabric woven in tapestry weave. The data was collected at three different zoom levels to achieve different reconstructed image resolutions.
The data is made available as part of [Bossema 2020].
Apparatus
The dataset is acquired using the custom-built and highly flexible CT scanner, FleX-ray Laboratory, developed by TESCAN-XRE, located at CWI in Amsterdam. This apparatus consists of a cone-beam microfocus X-ray point source that projects polychromatic X-rays onto a 1944-by-1536 pixels, 14-bit, flat detector panel. Full details can be found in [Coban 2020].
Sample Information
The sample is a fragment of woven fabric approximately 7cm x 15cm in size. The fabric was hung vertically on a piece of foam, held on the top by a wooden stick through one of the holes and on the bottom by a piece of plastic tape. See Figure 5 in [Bossema 2020] for a picture of the object and examples of the reconstruction.
Experimental Plan
The data in this submission was collected to illustrate the use of zooming for the investigation of cultural heritage objects. Three region-of-interest scans of the lower part of the fabric, containing a hole, were collected at different zoom levels. For each scan, the sample was rotated 360° in circular and continuous motion, with a dark-field (closed-shutter) and flat-field (open-shutter) image taken before the acquisition. Each dataset consists of 1200 projections. The source-detector distance was kept at 1098mm. At the first level of zooming, the object was placed at 963mm from the source, yielding a magnification of 1.14 and 131 micron resolution. For the second level, the object was moved closer to the source so that the source-object distance was 603mm, yielding a magnification of 1.82 and 82 micron resolution. At the third level, the source-object distance was reduced to 243mm, yielding a magnification of 4.5 and 33 micron resolution.
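The geometry above follows the usual cone-beam relations: magnification = source-detector distance / source-object distance, and effective resolution ≈ detector pixel pitch / magnification. The quick check below assumes a detector pixel pitch of roughly 149.6 micron, which is consistent with the reported numbers but not stated explicitly in the text:

```python
# Cone-beam CT zoom levels from the text. The ~149.6 um pixel pitch is an
# assumption back-computed from the reported resolutions, not a stated value.
SDD = 1098.0            # source-detector distance, mm
PIXEL_PITCH_UM = 149.6  # assumed detector pixel pitch, micron

for sod in (963.0, 603.0, 243.0):  # source-object distances, mm
    mag = SDD / sod
    res_um = PIXEL_PITCH_UM / mag
    print(f"SOD={sod:.0f}mm  magnification={mag:.2f}  resolution={res_um:.0f}um")
```

Running this reproduces the three reported (magnification, resolution) pairs to within rounding, which is a useful sanity check when planning a region-of-interest scan.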
All raw data (i.e. no corrections) is made available in .tif format.
List of Contents
The content of the submission is given below.
Each data folder contains:
Additional Links
These datasets are produced by the Computational Imaging group at Centrum Wiskunde & Informatica (CI-CWI). For any relevant Python/MATLAB scripts for the FleX-ray datasets, we refer the reader to our group's GitHub page.
Contact Details
For more information or guidance in using these datasets, please get in touch with
Acknowledgments
We thank Suzan Meijer of the Rijksmuseum for providing this sample.
This research project explores the rhetorical function of contemporary Anglophone speculative fiction in southern Africa. Focusing on short fiction produced between 2008 and 2018, the project delineates this literary production both theoretically and historically. It is the “difference” of contemporary African speculative fiction that needs attention, the thesis argues, and through such difference we might evaluate how this literature manifests as a prominent, collective call to de-colonise dominant ways of seeing. Moreover, the contemporary speculative fiction scene in the southern region of the continent is not well represented in scholarship. To date, much more work has been done on, for example, speculative fiction in Nigeria, or indeed in South Africa. Similarly, far more studies exist on the novel form. And yet, it is a contention of this project that short sf is far more abundant in Africa today. The project therefore addresses a number of important gaps by providing perspective on short speculative fiction in Malawi, Zimbabwe, and South Africa. It also draws attention to various questions that will need further study in the field of Afrofuturism and Africanfuturism. The methodological choice to view this literature within a rhetorical framework complicates the duality of culture and text characteristic of cultural studies and focuses instead on how relationships among texts, writers, publishing agents and readers function. This is especially relevant in an African literary studies context because such approach works to bypass practices of silencing and the appropriation of certain perspectives, reorients agency, and redirects the conversation to focus on interactions between literary actors more closely. 
Since little attention has been paid to African literary narratives from a rhetorical angle, even less to African speculative texts, an effort is made to fill this gap using pragmatic frames (Bitzer 1968), rhetorical narrative (Phelan 2007), and the “literary field” (Bourdieu 1993). In addition, the project draws insights from the social sciences to combine quantitative and qualitative methods of investigation. The thesis consists of an introduction and three main chapters: “Mapping the Field”, “Ways of Seeing – Temporalities”, and “Ways of Seeing – Spatialities”. It also includes an Excel spreadsheet, in which much mapping of the scene occurs. Along with notes on authorship and production, the spreadsheet contains documentation of keywords noted while reading the sf short stories. Three keywords made themselves most manifest: time, space and ways of seeing, and thereafter helped to structure the latter chapters of the research project. The findings in the spreadsheet and interview material particularly inform Chapter One, which illuminates the emerging field of speculative fiction in the southern region of Africa by mapping the scene of production (2008-2018). Chapter Two investigates the rhetorical function of time and temporalities as nodes of interest in five speculative fiction texts. And, finally, the third chapter, explores to what effect space and spatialities are employed by sf writers in another five short stories.
The files contain interviewees speaking about short stories and speculative fiction production in southern Africa. The dataset comprises both audio-visual Zoom recordings (mp4, 9 files; sizes range between 49 MB and 794.5 MB) and transcriptions of those recordings as MS Word documents (docx, 10 files; sizes range between 22 KB and 38 KB). For most audio-visual files there is a corresponding MS Word document (i.e., a transcript of the audio), but there is one MS Word file without a corresponding audio-visual file. Software capable of playing mp4 files is needed to listen to or watch the audio-visual Zoom files; e.g., the files will open with QuickTime Player.
Energy systems are changing rapidly, bringing new types of risks and new forms of potential disruption to energy supplies. Our growing dependence on energy, particularly electricity, means that more than ever we need to plan for disruptions and be prepared for them. What happens during the disruption is important: we need to understand how individuals, communities and organisations experience the event, and what measures can be taken to reduce the overall impacts. This study investigates how people and communities in the city of Glasgow (Scotland) might be expected to respond to a lengthy, widespread disruption to energy supplies. A novel three-stage diary-interview methodology was used to explore energy practices, expectations and dependency, and to understand the ways in which people’s experience of disruptions may change in the coming decade. The results show that the most consistent determinant of participants’ perceived resilience, over and above socio-demographic factors, is their expectations and their degree of dependency on routine. In addition, the results suggest that common assumptions regarding people’s vulnerability are sometimes misplaced and are shifting rapidly as digital dependency grows: in particular, determinants such as age and income should not be seen as straightforward proxies for vulnerability. A new set of ‘indicators of vulnerability’ is identified. For longer outages, people’s ability to cope will likely decrease with duration in a non-linear ‘step-change’ fashion, as interdependent infrastructures and services are affected.
Community-level actions can improve resilience, and local scales may be more appropriate for identifying vulnerabilities than socio-demographic proxies, but this is only feasible if organisations and institutions are adequately resourced.
Recent events have highlighted the potential impact of long, widespread energy supply interruptions, and the need for resilience is likely to create a requirement for greater flexibility from both the electricity and gas systems. This project will examine the engineering risks, and assess the need for new industry standards to drive appropriate action; and conduct a systematic assessment of the impacts of a serious energy disruption on consumers and critical services, such as heating, water, communications, health and transport.
24 diary-interviews with members of the general public (aged 18 to 85) living in the Greater Glasgow area. Three-stage diary interview method, comprising a 1-hour semi-structured interview (on Zoom), followed by a home-based diary task to be completed by the participant on two days of the week, then a 1-hour follow-up interview a week later (on Zoom). Recruitment used topic-blind random sampling, conducted by a third-party professional recruitment company. Although due to its size the sample was not intended to be representative of the population, the aim was to ensure a balance of age, ethnicity, gender, income, and location (inner city, suburbs, outskirts). Participants were offered a £70 honorarium for their time. 25 participants were recruited, with one no-show; everyone else completed all three stages. All interviews and diaries were conducted during Covid-19 restrictions on socialising, movement, and non-essential businesses and services.
This study used a mixed-methods approach comprising an online survey with public contributors involved in health and social care research; an online survey with public involvement professionals (those employed by organisations); and qualitative interviews with public contributors. We received 244 responses to the public contributor survey and 65 to the public involvement professionals (PIPs) survey, and conducted 22 qualitative interviews.
This study was prompted by the shift to non-face-to-face (remote) forms of working in patient public involvement and engagement (PPIE) brought on by Covid-19 prevention measures (such as social distancing). Working remotely includes using digital technologies such as online conferencing software (Zoom, Microsoft Teams), emails, telephone calls and social media (WhatsApp, Facebook). Due to measures such as shielding and social distancing, the usual ways of involving the public in research, which included face-to-face meetings and events, are not possible, and even with the eventual easing of lockdown, remote working is likely to continue. This creates particular challenges for ensuring access and engagement from all parts of society in health and social care research. There is a well-documented digital divide between those who use or have access to digital technologies and those who do not. This digital divide reflects existing socio-economic inequalities, and PPIE that takes place remotely has the potential to further exclude already disadvantaged groups. This project aims to facilitate and improve ways of doing PPIE remotely and increase the diversity of public contributors involved in health and social care research. Our objectives are to: 1. Understand the barriers and facilitators to remote working, by: a. Exploring public contributors' and PPIE professionals' experiences of remote PPIE. b. Exploring public contributors' preferences for different types of remote working. 2.
Develop mechanisms for implementing improvements in remote working and ways to increase diversity in PPIE by: a. Conducting a rapid review of research and 'how to' guides. b. Developing training packages. We will recruit public contributors involved in research projects across the UK: the NIHR, charities, universities and other research organisations, and people involved professionally with PPIE. This is a mixed-methods study with surveys, qualitative interviews, and a discrete choice experiment. We will produce an analysis of how remote working in PPIE is affected by socio-economic and health inequalities, make recommendations for improving practice and develop training packages. The public contributor survey consisted of tick-box questions, Likert-scale questions and open-ended questions where participants could enter free-text responses. The survey asked general questions about role and PPIE experience, digital literacy and different aspects of remote working. We collected demographic information to enable us to draw conclusions from the data on how age, ethnicity, living arrangements and socio-economic status affect participants' use of remote communication tools. The survey ran from September to November 2020. The survey for PPIE professionals (those who work in PPIE, organising PPIE activities) was developed with input from our public contributors and PPIE professionals from the ARC NWC and the NIHR Research Design Service. Again, the development of this survey drew on our own experiences. We piloted the survey with members of the ARC team and a public contributor (NT) to check for sense, consistency and readability. Like the public contributor survey, the professional version consisted of tick-box questions, Likert-scale questions, and open-ended questions for additional responses. We asked what support and training they offered their public contributors, and what suggestions they had for improving remote working in PPIE.
After the survey conducted with public contributors had closed, we purposively sampled informants from key communities and conducted 22 semi-structured qualitative interviews with public contributors from across the UK. The topic guide was co-developed by the research team and a public contributor (NT) from a preliminary analysis of the survey results, and was designed to probe and explore the issues raised by the survey. The interviews were conducted via Zoom and audio recorded with the participants' consent. The interviews were transcribed, then checked for accuracy and anonymised. The interviews lasted an average of 60 minutes.
Use the Chart Viewer template to display bar charts, line charts, pie charts, histograms, and scatterplots to complement a map. Include multiple charts to view with a map or side by side with other charts for comparison. Up to three charts can be viewed side by side or stacked, but you can access and view all the charts that are authored in the map.
Examples:
- Present a bar chart representing average property value by county for a given area.
- Compare charts based on multiple population statistics in your dataset.
- Display an interactive scatterplot based on two values in your dataset along with an essential set of map exploration tools.
Data requirements: The Chart Viewer template requires a map with at least one chart configured.
Key app capabilities:
- Multiple layout options - Choose Stack to display charts stacked with the map, or choose Side by side to display charts side by side with the map.
- Manage chart - Reorder, rename, or turn charts on and off in the app.
- Multiselect chart - Compare two charts in the panel at the same time.
- Bookmarks - Allow users to zoom and pan to a collection of preset extents that are saved in the map.
- Home, Zoom controls, Legend, Layer List, Search
Supportability: This web app is designed responsively to be used in browsers on desktops, mobile phones, and tablets. We are committed to ongoing efforts towards making our apps as accessible as possible. Please feel free to leave a comment on how we can improve the accessibility of our apps for those who use assistive technologies.
Attribution-NoDerivs 4.0 (CC BY-ND 4.0) https://creativecommons.org/licenses/by-nd/4.0/
License information was derived automatically
Priest Map Series {title at top of page}
Data Developers: Burhans, Molly A., Cheney, David M., Emege, Thomas, Gerlt, R. “Priest Map Series {title at top of page}”. Scale not given. Version 1.0. MO and CT, USA: GoodLands Inc., Catholic Hierarchy, Environmental Systems Research Institute, Inc., 2019.
Web map developer: Molly Burhans, October 2019
Web app developer: Molly Burhans, October 2019
GoodLands’ polygon data layers, version 2.0, for global ecclesiastical boundaries of the Roman Catholic Church: Although care has been taken to ensure the accuracy, completeness and reliability of the information provided, because this is the first developed dataset of global ecclesiastical boundaries curated from many sources, it may have a higher margin of error than established geopolitical administrative boundary maps. Boundaries need to be verified with appropriate Ecclesiastical Leadership. The current information is subject to change without notice. No parties involved with the creation of this data are liable for indirect, special or incidental damage resulting from, arising out of or in connection with the use of the information. We referenced 1960 sources to build our global datasets of ecclesiastical jurisdictions. Often, they were isolated images of dioceses, historical documents and information about parishes that were cross-checked. These sources can be viewed here: https://docs.google.com/spreadsheets/d/11ANlH1S_aYJOyz4TtG0HHgz0OLxnOvXLHMt4FVOS85Q/edit#gid=0
To learn more or contact us please visit: https://good-lands.org/
The Catholic Leadership global maps information is derived from the Annuario Pontificio, which is curated and published by the Vatican Statistics Office annually, and digitized by David Cheney at Catholic-Hierarchy.org; updates are supplemented with diocesan and news announcements. GoodLands maps this into global ecclesiastical boundaries.
Admin 3 Ecclesiastical Territories: Burhans, Molly A., Cheney, David M., Gerlt, R. “Admin 3 Ecclesiastical Territories For Web”. Scale not given. Version 1.2. MO and CT, USA: GoodLands Inc., Environmental Systems Research Institute, Inc., 2019.
Derived from: Global Diocesan Boundaries: Burhans, M., Bell, J., Burhans, D., Carmichael, R., Cheney, D., Deaton, M., Emge, T., Gerlt, B., Grayson, J., Herries, J., Keegan, H., Skinner, A., Smith, M., Sousa, C., Trubetskoy, S. “Diocesean Boundaries of the Catholic Church” [Feature Layer]. Scale not given. Version 1.2. Redlands, CA, USA: GoodLands Inc., Environmental Systems Research Institute, Inc., 2016. Using: ArcGIS 10.4. Version 10.0. Redlands, CA: Environmental Systems Research Institute, Inc., 2016.
Boundary Provenance
Statistics and Leadership Data
Cheney, D.M. “Catholic Hierarchy of the World” [Database]. Date Updated: August 2019. Catholic Hierarchy. Using: Paradox. Retrieved from Original Source.
Catholic Hierarchy
Annuario Pontificio per l’Anno .. Città del Vaticano: Tipografia Poliglotta Vaticana, Multiple Years.
The data for these maps was extracted from the gold standard of Church data, the Annuario Pontificio, published yearly by the Vatican. The collection and data development methods of the Vatican Statistics Office are unknown. GoodLands is not responsible for errors within this data. We encourage people to document and report errant information to us at data@good-lands.org or directly to the Vatican. Additional information about regular changes in bishops and sees comes from a variety of public diocesan and news announcements.
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0) https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
About
Speedtest data is used today by commercial fixed and mobile network operators around the world to inform network buildout, improve global Internet quality, and increase Internet accessibility. Government regulators such as the United States Federal Communications Commission and the Malaysian Communications and Multimedia Commission use Speedtest data to hold telecommunications entities accountable and direct funds for rural and urban connectivity development. Ookla licenses data to NGOs and educational institutions to fulfill its mission: to help make the internet better, faster and more accessible for everyone. Ookla hopes to further this mission by distributing the data to make it easier for individuals and organizations to use it for the purposes of bridging the social and economic gaps between those with and without modern Internet access.
Data Overview
Tiles
Hundreds of millions of Speedtests are taken on the Ookla platform each month. In order to create a manageable dataset, we aggregate raw data into tiles. The size of a data tile is defined as a function of "zoom level" (or "z"). At z=0, the size of a tile is the size of the whole world. At z=1, the tile is split in half vertically and horizontally, creating 4 tiles that cover the globe. This tile-splitting continues as zoom level increases, causing tiles to become exponentially smaller as we zoom into a given region. By this definition, tile sizes are actually some fraction of the width/height of Earth according to the Web Mercator projection (EPSG:3857). As such, tile size varies slightly depending on latitude, but tile sizes can be estimated in meters. For the purposes of these layers, a zoom level of 16 (z=16) is used for the tiling. This equates to a tile that is approximately 610.8 meters by 610.8 meters at the equator (18 arcsecond blocks).
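The zoom-level arithmetic above can be sketched in a few lines of Python. This is a rough sketch using the EPSG:3857 world width (Earth's equatorial circumference); the ~610.8 m figure quoted in the text likely reflects slightly different constants, so the helper below is illustrative rather than the exact method used to produce the tiles.

```python
# Approximate edge length, in meters at the equator, of a Web Mercator
# tile at a given zoom level. The world in EPSG:3857 spans roughly
# 40,075,016.686 m; each zoom level halves the tile edge, so a z-level
# tile is 1/2^z of the world width.
WEB_MERCATOR_WORLD_M = 40_075_016.686

def tile_size_m(zoom: int) -> float:
    return WEB_MERCATOR_WORLD_M / (2 ** zoom)

print(round(tile_size_m(0)))      # z=0: one tile covers the whole world
print(round(tile_size_m(16), 1))  # z=16: ~611.5 m, close to the ~610.8 m cited
```

Note that this equatorial estimate shrinks with latitude on the ground, which is why the text describes tile size as varying slightly by latitude.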
The geometry of each tile is represented in WGS 84 (EPSG:4326) in the tile field. The data can be found at: https://github.com/teamookla/ookla-open-data
Update Cadence
The tile aggregates start in Q1 2019 and go through the most recent quarter. They will be updated shortly after the conclusion of the quarter.
Esri Processing
This layer is a best-available aggregation of the original Ookla dataset. This means that for each tile for which data is available, the most recent data is used. So, for instance, if data is available for a tile for Q2 2019 and for Q4 2020, the Q4 2020 data is awarded to the tile. The default visualization for the layer is the "broadband index". The broadband index is a bivariate index based on both the average download speed and the average upload speed. For Mobile, the score is indexed to a standard of 25 megabits per second (Mbps) download and 3 Mbps upload. A tile with average Speedtest results of 25/3 Mbps is awarded 100 points. Tiles with average speeds above 25/3 are shown in green, tiles with average speeds below this are shown in fuchsia. For Fixed, the score is indexed to a standard of 100 Mbps download and 20 Mbps upload. A tile with average Speedtest results of 100/20 Mbps is awarded 100 points. Tiles with average speeds above 100/20 are shown in green, tiles with average speeds below this are shown in fuchsia.
Tile Attributes
Each tile contains the following attributes:
- The year and the quarter that the tests were performed.
- The average download speed of all tests performed in the tile, represented in megabits per second.
- The average upload speed of all tests performed in the tile, represented in megabits per second.
- The average latency of all tests performed in the tile, represented in milliseconds.
- The number of tests taken in the tile.
- The number of unique devices contributing tests in the tile.
- The quadkey representing the tile.
Quadkeys
Quadkeys can act as a unique identifier for the tile.
This can be useful for joining data spatially from multiple periods (quarters), creating coarser spatial aggregations without using geospatial functions, spatial indexing, partitioning, and as an alternative for storing and deriving the tile geometry.
Layers
There are two layers:
- Ookla_Mobile_Tiles - Tiles containing tests taken from mobile devices with GPS-quality location and a cellular connection type (e.g. 4G LTE, 5G NR).
- Ookla_Fixed_Tiles - Tiles containing tests taken from mobile devices with GPS-quality location and a non-cellular connection type (e.g. WiFi, ethernet).
The layers are set to draw at scales 1:3,000,000 and larger.
Time Period and Update Frequency
Layers are generated based on a quarter year of data (three months), and files will be updated and added on a quarterly basis. A year=2020/quarter=1 period, the first quarter of the year 2020, would include all data generated on or after 2020-01-01 and before 2020-04-01. Data is subject to be reaggregated regularly in order to honor Data Subject Access Requests (DSAR) as applicable in certain jurisdictions under laws including but not limited to the General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and the Lei Geral de Proteção de Dados (LGPD). Therefore, data accessed at different times may result in variation in the total number of tests, tiles, and resulting performance metrics.
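The quadkey idea can be illustrated with the standard Bing Maps tile-system construction, which interleaves the bits of a tile's x/y coordinates into one base-4 digit per zoom level. The function below is an illustrative sketch of that scheme, not part of the Ookla tooling:

```python
def quadkey(tile_x: int, tile_y: int, zoom: int) -> str:
    """Build the quadkey for a tile: one base-4 digit per zoom level,
    most significant level first (Bing Maps tile-system convention)."""
    digits = []
    for i in range(zoom, 0, -1):
        mask = 1 << (i - 1)
        digit = 0
        if tile_x & mask:
            digit += 1   # x bit contributes 1
        if tile_y & mask:
            digit += 2   # y bit contributes 2
        digits.append(str(digit))
    return "".join(digits)

key = quadkey(3, 5, 3)
print(key)        # "213"
# Truncating a quadkey yields the parent tile at a coarser zoom, which
# is why quadkeys support cheap spatial aggregation without geometry:
print(key[:2])    # "21" -- the z=2 parent of tile (3, 5) at z=3
```

Because a z=16 quadkey is a fixed 16-character string, it also works well as a join key across quarterly files and as a partitioning column.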
Open Government Licence – Ontario: https://www.ontario.ca/page/open-government-licence-ontario
Many Ontario lidar point cloud datasets have been made available for direct download by the Government of Canada through the federal Open Government Portal under the LiDAR Point Clouds – CanElevation Series record. Instructions for bulk data download are available in the Download Instructions document linked from that page. To download individual tiles, zoom in on the map in GeoHub and click a tile for a pop-up containing a download link. See the LIO Support - Large Data Ordering Instructions to obtain a copy of data for projects that are not yet available for direct download. Data can be requested by project area or a set of tiles. To determine which project contains your area of interest or to view single tiles, zoom in on the map above and click. For bulk tile orders, follow the link in the Additional Documentation section below to download the tile index in shapefile format. Data sizes by project area are listed below. The Ontario Point Cloud (Lidar-Derived) consists of points containing elevation and intensity information derived from returns collected by an airborne topographic lidar sensor. The minimum point cloud classes are Unclassified, Ground, Water, High and Low Noise. The data is structured into non-overlapping 1-km by 1-km tiles in LAZ format. This dataset is a compilation of lidar data from multiple acquisition projects; as such, specifications, parameters, accuracy and sensors vary by project. Some projects have additional classes, such as vegetation and buildings. See the detailed User Guide and contractor metadata reports linked below for additional information, including information about interpreting the index for placement of data orders. Raster derivatives have been created from the point clouds. These products may meet your needs and are available for direct download. For a representation of bare earth, see the Ontario Digital Terrain Model (Lidar-Derived).
For a model representing all surface features, see the Ontario Digital Surface Model (Lidar-Derived). You can monitor the availability and status of lidar projects on the Ontario Lidar Coverage map on the Ontario Elevation Mapping Program hub page.
Additional Documentation
Ontario Classified Point Cloud (Lidar-Derived) - User Guide (DOCX)
OMAFRA Lidar 2016-18 - Cochrane - Additional Metadata (PDF)
OMAFRA Lidar 2016-18 - Peterborough - Additional Metadata (PDF)
OMAFRA Lidar 2016-18 - Lake Erie - Additional Metadata (PDF)
CLOCA Lidar 2018 - Additional Contractor Metadata (PDF)
South Nation Lidar 2018-19 - Additional Contractor Metadata (PDF)
OMAFRA Lidar 2022 - Lake Huron - Additional Metadata (PDF)
OMAFRA Lidar 2022 - Lake Simcoe - Additional Metadata (PDF)
Huron-Georgian Bay Lidar 2022-23 - Additional Metadata (Word)
Kawartha Lakes Lidar 2023 - Additional Metadata (Word)
Sault Ste Marie Lidar 2023-24 - Additional Metadata (Word)
Thunder Bay Lidar 2023-24 - Additional Metadata (Word)
Timmins Lidar 2024 - Additional Metadata (Word)
OMAFRA Lidar Point Cloud 2016-18 - Cochrane - Lift Metadata (SHP)
OMAFRA Lidar Point Cloud 2016-18 - Peterborough - Lift Metadata (SHP)
OMAFRA Lidar Point Cloud 2016-18 - Lake Erie - Lift Metadata (SHP)
CLOCA Lidar Point Cloud 2018 - Lift Metadata (SHP)
South Nation Lidar Point Cloud 2018-19 - Lift Metadata (SHP)
York-Lake Simcoe Lidar Point Cloud 2019 - Lift Metadata (SHP)
Ottawa River Lidar Point Cloud 2019-20 - Lift Metadata (SHP)
OMAFRA Lidar Point Cloud 2022 - Lake Huron - Lift Metadata (SHP)
OMAFRA Lidar Point Cloud 2022 - Lake Simcoe - Lift Metadata (SHP)
Eastern Ontario Lidar Point Cloud 2021-22 - Lift Metadata (SHP)
DEDSFM Huron-Georgian Bay Lidar Point Cloud 2022-23 - Lift Metadata (SHP)
DEDSFM Kawartha Lakes Lidar Point Cloud 2023 - Lift Metadata (SHP)
DEDSFM Sault Ste Marie Lidar Point Cloud 2023-24 - Lift Metadata (SHP)
DEDSFM Sudbury Lidar Point Cloud 2023-24 - Lift Metadata (SHP)
DEDSFM Thunder Bay Lidar Point Cloud 2023-24 - Lift Metadata (SHP)
DEDSFM Timmins Lidar Point Cloud 2024 - Lift Metadata (SHP)
DEDSFM Cataraqui Lidar Point Cloud 2024 - Lift Metadata (SHP)
DEDSFM Chapleau Lidar Point Cloud 2024 - Lift Metadata (SHP)
DEDSFM Dryden Lidar Point Cloud 2024 - Lift Metadata (SHP)
DEDSFM Ignace Lidar Point Cloud 2024 - Lift Metadata (SHP)
DEDSFM Sioux Lookout Lidar Point Cloud 2024 - Lift Metadata (SHP)
DEDSFM Northeastern Ontario Lidar Point Cloud 2024 - Lift Metadata (SHP)
DEDSFM Atikokan Lidar Point Cloud 2024 - Lift Metadata (SHP)
GTA 2023 - Lift Metadata (SHP)
Ontario Classified Point Cloud (Lidar-Derived) - Tile Index (SHP)
Ontario Lidar Project Extents (SHP)
Data Package Sizes
LEAP 2009 - 22.9 GB
OMAFRA Lidar 2016-18 - Cochrane - 442 GB
OMAFRA Lidar 2016-18 - Lake Erie - 1.22 TB
OMAFRA Lidar 2016-18 - Peterborough - 443 GB
GTA 2014 - 57.6 GB
GTA 2015 - 63.4 GB
Brampton 2015 - 5.9 GB
Peel 2016 - 49.2 GB
Milton 2017 - 15.3 GB
Halton 2018 - 73 GB
CLOCA 2018 - 36.2 GB
South Nation 2018-19 - 72.4 GB
York Region-Lake Simcoe Watershed 2019 - 75 GB
Ottawa River 2019-20 - 836 GB
Lake Nipissing 2020 - 700 GB
Ottawa-Gatineau 2019-20 - 551 GB
Hamilton-Niagara 2021 - 660 GB
OMAFRA Lidar 2022 - Lake Huron - 204 GB
OMAFRA Lidar 2022 - Lake Simcoe - 154 GB
Belleville 2022 - 1.09 TB
Eastern Ontario 2021-22 - 1.5 TB
Huron Shores 2021 - 35.5 GB
Muskoka 2018 - 72.1 GB
Muskoka 2021 - 74.2 GB
Muskoka 2023 - 532 GB
Digital Elevation Data to Support Flood Mapping 2022-26:
Huron-Georgian Bay 2022 - 1.37 TB
Huron-Georgian Bay 2023 - 257 GB
Huron-Georgian Bay 2023 Bruce - 95.2 GB
Kawartha Lakes 2023 - 385 GB
Sault Ste Marie 2023-24 - 1.15 TB
Sudbury 2023-24 - 741 GB
Thunder Bay 2023-24 - 654 GB
Timmins 2024 - 318 GB
Cataraqui 2024 - 50.5 GB
Chapleau 2024 - 127 GB
Dryden 2024 - 187 GB
Ignace 2024 - 10.7 GB
Northeastern Ontario 2024 - 82.3 GB
Sioux Lookout 2024 - 112 GB
Atikokan 2024 - 64 GB
GTA 2023 - 985 GB
Status
Ongoing: Data is continually being updated
Maintenance and Update Frequency
As needed: Data is updated as deemed necessary
Contact
Ontario Ministry of Natural Resources - Geospatial Ontario, geospatial@ontario.ca
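If the tiles follow the standard ASPRS LAS classification codes (which LAZ point clouds normally do), the minimum classes named above map onto well-known numeric codes. The sketch below shows that mapping and a hypothetical filtering helper; the code numbers come from the LAS 1.4 specification, and the helper is illustrative, not part of any Ontario tooling:

```python
# Standard ASPRS LAS 1.4 classification codes for the minimum classes
# named in this dataset. Projects with additional classes typically use
# 3-5 for vegetation and 6 for buildings.
LAS_CLASSES = {
    1: "Unclassified",
    2: "Ground",
    7: "Low Noise",    # "Low Point (Noise)" in the LAS specification
    9: "Water",
    18: "High Noise",
}

def keep_ground(points):
    """Filter (x, y, z, classification) tuples down to ground returns,
    e.g. as a first step toward a bare-earth terrain model."""
    return [p for p in points if p[3] == 2]

# Hypothetical sample points from one 1-km tile:
sample = [(0.0, 0.0, 101.2, 2), (1.0, 0.0, 112.9, 1), (2.0, 0.0, 100.9, 9)]
print(keep_ground(sample))  # only the classification-2 (Ground) point survives
```

In practice a library such as laspy would be used to read the LAZ tiles and expose per-point classification arrays, but the class codes themselves are the same.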