Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Small uncrewed aircraft systems (sUAS, a.k.a. 'drones') are being employed to support an expanding array of applications and business needs. Natural resource professionals are increasingly incorporating drones as a data collection tool to support management decisions. This educational guide is based on the sUAS Operations Technician DACUM and was developed to respond directly to the needs of educators. The information and activities included in this workbook are provided as an 'educational buffet': educators can use the entire resource, or they can select and extract specific activities to target their needs. This resource can be used as a follow-up to the publication "sUAS Manual Flight Exercises".
Introducing Job Posting Datasets: Uncover labor market insights!
Elevate your recruitment strategies, forecast future labor industry trends, and unearth investment opportunities with Job Posting Datasets.
Job Posting Dataset Sources:
Indeed: Access datasets from Indeed, a leading employment website known for its comprehensive job listings.
Glassdoor: Receive ready-to-use employee reviews, salary ranges, and job openings from Glassdoor.
StackShare: Access StackShare datasets to make data-driven technology decisions.
Job Posting Datasets provide meticulously acquired and parsed data, freeing you to focus on analysis. You'll receive clean, structured, ready-to-use job posting data, including job titles, company names, seniority levels, industries, locations, salaries, and employment types.
Choose your preferred dataset delivery options for convenience:
Receive datasets in various formats, including CSV, JSON, and more. Opt for storage solutions such as AWS S3, Google Cloud Storage, and more. Customize data delivery frequencies, whether one-time or per your agreed schedule.
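As a minimal sketch of working with a delivered dataset, the snippet below parses a JSON job-posting extract and summarizes it by seniority level. The field names (job_title, company_name, seniority_level, location, salary) are assumptions based on the fields listed above, not the actual Oxylabs schema.

```python
import json

# Hypothetical sample payload; field names are illustrative assumptions.
sample = '''[
  {"job_title": "Data Engineer", "company_name": "Acme",
   "seniority_level": "Mid", "location": "Austin, TX", "salary": 120000},
  {"job_title": "ML Engineer", "company_name": "Globex",
   "seniority_level": "Senior", "location": "Remote", "salary": 165000}
]'''

postings = json.loads(sample)

# Group posting titles by seniority level for a quick distribution.
by_seniority = {}
for p in postings:
    by_seniority.setdefault(p["seniority_level"], []).append(p["job_title"])

print(by_seniority)
```

The same grouping pattern applies to a CSV delivery; only the parsing step changes.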
Why Choose Oxylabs Job Posting Datasets:
Fresh and accurate data: Access clean and structured job posting datasets collected by our seasoned web scraping professionals, enabling you to dive into analysis.
Time and resource savings: Focus on data analysis and your core business objectives while we handle the data extraction process efficiently and cost-effectively.
Customized solutions: Tailor our approach to your business needs, ensuring your goals are met.
Legal compliance: Partner with a trusted leader in ethical data collection. Oxylabs is a founding member of the Ethical Web Data Collection Initiative, aligning with GDPR and CCPA best practices.
Pricing Options:
Standard Datasets: Choose from various ready-to-use datasets with standardized data schemas, priced from $1,000/month.
Custom Datasets: Tailor datasets from any public web domain to your unique business needs. Contact our sales team for custom pricing.
Experience a seamless journey with Oxylabs:
Effortlessly access fresh job posting data with Oxylabs Job Posting Datasets.
Envestnet® | Yodlee®'s Retail Transaction Data (Aggregate/Row) Panels consist of de-identified, near-real-time (T+1) USA credit/debit/ACH transaction-level data – offering a wide view of the consumer activity ecosystem. The underlying data is sourced from end users leveraging the aggregation portion of the Envestnet® | Yodlee® financial technology platform.
Envestnet | Yodlee Consumer Panels (Aggregate/Row) include data relating to millions of transactions, including ticket size and merchant location. The dataset includes de-identified credit/debit card and bank transactions (such as a payroll deposit, account transfer, or mortgage payment). Our coverage offers insights into areas such as consumer, TMT, energy, REITs, internet, utilities, ecommerce, MBS, CMBS, equities, credit, commodities, FX, and corporate activity. We apply rigorous data science practices to deliver key KPIs daily that are focused, relevant, and ready to put into production.
We offer free trials. Our team is available to provide support for loading, validation, sample scripts, or other services you may need to generate insights from our data.
Investors, corporate researchers, and corporates can use our data to answer key business questions such as:
- How much are consumers spending with specific merchants/brands, and how is that changing over time?
- Is the share of consumer spend at a specific merchant increasing or decreasing?
- How are consumers reacting to new products or services launched by merchants?
- For loyal customers, how is the share of spend changing over time?
- What is the company’s market share in a region for similar customers?
- Is the company’s loyal user base increasing or decreasing?
- Is the lifetime customer value increasing or decreasing?
Additional Use Cases:
- Use spending data to analyze sales/revenue broadly (sector-wide) or granularly (company-specific). Historically, our tracked consumer spend has correlated above 85% with company-reported data from thousands of firms. Users can sort and filter by many metrics and KPIs, such as sales and transaction growth rates and online or offline transactions, as well as view customer behavior within a geographic market at a state or city level.
- Reveal cohort consumer behavior to decipher long-term behavioral consumer spending shifts. Measure market share, wallet share, loyalty, consumer lifetime value, retention, demographics, and more.
- Study the effects of inflation via metrics such as increased total spend, ticket size, and number of transactions.
- Seek out alpha-generating signals or manage your business strategically with essential, aggregated transaction and spending data analytics.
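The "share of spend" questions above reduce to simple aggregation over transaction rows. The sketch below computes one merchant's monthly share of total tracked spend; the row layout (month, merchant, amount) and the merchant names are illustrative assumptions, not the actual panel schema.

```python
from collections import defaultdict

# Hypothetical transaction rows: (month, merchant, amount in USD).
transactions = [
    ("2024-01", "MerchantA", 120.0), ("2024-01", "MerchantB", 80.0),
    ("2024-02", "MerchantA", 90.0),  ("2024-02", "MerchantB", 110.0),
]

totals = defaultdict(float)      # total tracked spend per month
merchant_a = defaultdict(float)  # MerchantA spend per month
for month, merchant, amount in transactions:
    totals[month] += amount
    if merchant == "MerchantA":
        merchant_a[month] += amount

# MerchantA's share of spend in each month.
share = {m: merchant_a[m] / totals[m] for m in sorted(totals)}
print(share)
```

On this toy data the share falls from 0.60 in January to 0.45 in February, the kind of month-over-month shift the questions above target.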
Use Case Categories (our data supports countless use cases, and we look forward to working with new ones):
1. Market Research: Company Analysis, Company Valuation, Competitive Intelligence, Competitor Analysis, Competitor Analytics, Competitor Insights, Customer Data Enrichment, Customer Data Insights, Customer Data Intelligence, Demand Forecasting, Ecommerce Intelligence, Employee Pay Strategy, Employment Analytics, Job Income Analysis, Job Market Pricing, Marketing, Marketing Data Enrichment, Marketing Intelligence, Marketing Strategy, Payment History Analytics, Price Analysis, Pricing Analytics, Retail, Retail Analytics, Retail Intelligence, Retail POS Data Analysis, and Salary Benchmarking
2. Investment Research: Financial Services, Hedge Funds, Investing, Mergers & Acquisitions (M&A), Stock Picking, Venture Capital (VC)
3. Consumer Analysis: Consumer Data Enrichment, Consumer Intelligence
4. Market Data: Analytics, B2C Data Enrichment, Bank Data Enrichment, Behavioral Analytics, Benchmarking, Customer Insights, Customer Intelligence, Data Enhancement, Data Enrichment, Data Intelligence, Data Modeling, Ecommerce Analysis, Ecommerce Data Enrichment, Economic Analysis, Financial Data Enrichment, Financial Intelligence, Local Economic Forecasting, Location-based Analytics, Market Analysis, Market Analytics, Market Intelligence, Market Potential Analysis, Market Research, Market Share Analysis, Sales, Sales Data Enrichment, Sales Enablement, Sales Insights, Sales Intelligence, Spending Analytics, Stock Market Predictions, and Trend Analysis
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Sound-based taxon occurrences provided by citizen scientists through the PlutoF workbench and the connected mobile application "Minu loodusheli" (My Nature Sound). Every occurrence can be validated via its accompanying sound recording.
These are mostly opportunistic observations by citizen scientists, who use the mobile application to annotate the coordinates and time of each observation and to attach a sound recording.
All observations are provided with a sound recording, which is used to identify the species.
Observations are recorded with the mobile app "Minu loodusheli" (My Nature Sound); observation records with sound files are transferred to the PlutoF cloud database, where they are moderated by appointed persons.
Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
The dataset contains 65,000+ photos of more than 5,000 people from 40 countries, making it a valuable resource for researchers and developers exploring identity verification solutions, especially in areas like facial recognition and financial services.
By utilizing this dataset, researchers can develop more robust re-identification algorithms, a key factor in ensuring privacy and security in various applications.
This dataset offers an opportunity to explore re-identification challenges by providing 13 selfies of each individual against diverse backgrounds and lighting conditions, paired with 2 ID photos from different document types.
Devices: Samsung M31, Infinix Note 11, Tecno Pop 7, Samsung A05, iPhone 15 Pro Max, and others
Resolution: 1000 x 750 and higher
This dataset enables the development of more robust and reliable authentication systems. It ultimately contributes to better customer onboarding by streamlining verification processes, minimizing fraud, and improving overall security for a wide range of services, including online platforms, financial institutions, and government agencies.
FutureBeeAI AI Data License Agreement: https://www.futurebeeai.com/policies/ai-data-license-agreement
Introducing the Finnish Product Image Dataset - a diverse and comprehensive collection of images meticulously curated to propel the advancement of text recognition and optical character recognition (OCR) models designed specifically for the Finnish language.
Dataset Contents & Diversity: Containing a total of 2,000 images, this Finnish OCR dataset offers a diverse distribution across different types of product front images. In this dataset, you'll find a variety of text, including product names, taglines, logos, company names, addresses, and product content. Images in this dataset showcase distinct fonts, writing formats, colors, designs, and layouts.
To ensure the diversity of the dataset and to build a robust text recognition model, we allow a limited number (fewer than five) of unique images from a single source. Stringent measures have been taken to exclude any personally identifiable information (PII) and to ensure that a minimum of 80% of each image contains visible Finnish text.
Images have been captured under varying lighting conditions – both day and night – along with different capture angles and backgrounds, to build a balanced OCR dataset. The collection features images in portrait and landscape modes.
All images were captured by native Finnish speakers to ensure text quality and to avoid toxic content and PII. We used recent iOS and Android mobile devices with cameras above 5 MP to capture the images and maintain image quality. Images in this training dataset are available in both JPEG and HEIC formats.
Metadata: Along with the image data, you will also receive detailed structured metadata in CSV format. For each image, it includes metadata such as image orientation, country, language, and device information. Each image is named to correspond with its metadata.
The metadata serves as a valuable tool for understanding and characterizing the data, facilitating informed decision-making in the development of Finnish text recognition models.
Update & Custom Collection: We're committed to expanding this dataset by continuously adding more images with the assistance of our native Finnish crowd community.
If you require a custom product image OCR dataset tailored to your guidelines or specific device distribution, feel free to contact us. We're equipped to curate specialized data to meet your unique needs.
Furthermore, we can annotate the images with bounding boxes or transcribe the text in the images to align with your specific project requirements, using our crowd community.
License: This image dataset, created by FutureBeeAI, is now available for commercial use.
Conclusion: Leverage the power of this product image OCR dataset to elevate the training and performance of text recognition, text detection, and optical character recognition models within the realm of the Finnish language. Your journey to enhanced language understanding and processing starts here.
FutureBeeAI AI Data License Agreement: https://www.futurebeeai.com/policies/ai-data-license-agreement
Introducing the Arabic Newspaper, Books, and Magazine Image Dataset - a diverse and comprehensive collection of images meticulously curated to propel the advancement of text recognition and optical character recognition (OCR) models designed specifically for the Arabic language.
Dataset Contents & Diversity: Containing a total of 5,000 images, this Arabic OCR dataset offers an equal distribution across newspapers, books, and magazines. Within, you'll find a diverse collection of content, including articles, advertisements, cover pages, headlines, callouts, and author sections from a variety of newspapers, books, and magazines. Images in this dataset showcase distinct fonts, writing formats, colors, designs, and layouts.
To ensure the diversity of the dataset and to build a robust text recognition model, we allow a limited number (fewer than five) of unique images from a single source. Stringent measures have been taken to exclude any personally identifiable information (PII) and to ensure that a minimum of 80% of each image contains visible Arabic text.
Images have been captured under varying lighting conditions – both day and night – along with different capture angles and backgrounds, further enhancing dataset diversity. The collection features images in portrait and landscape modes.
All images were captured by native Arabic speakers to ensure text quality and to avoid toxic content and PII. We used recent iOS and Android mobile devices with cameras above 5 MP to capture the images and maintain image quality. Images in this training dataset are available in both JPEG and HEIC formats.
Metadata: Along with the image data, you will also receive detailed structured metadata in CSV format. For each image, it includes metadata such as device information, source type (newspaper, magazine, or book), and image orientation (portrait or landscape). Each image is named to correspond with its metadata.
The metadata serves as a valuable tool for understanding and characterizing the data, facilitating informed decision-making in the development of Arabic text recognition models.
Update & Custom Collection: We're committed to expanding this dataset by continuously adding more images with the assistance of our native Arabic crowd community.
If you require a custom dataset tailored to your guidelines or specific device distribution, feel free to contact us. We're equipped to curate specialized data to meet your unique needs.
Furthermore, we can annotate the images with bounding boxes or transcribe the text in the images to align with your specific requirements, using our crowd community.
License: This image dataset, created by FutureBeeAI, is now available for commercial use.
Conclusion: Leverage the power of this image dataset to elevate the training and performance of text recognition, text detection, and optical character recognition models within the realm of the Arabic language. Your journey to enhanced language understanding and processing starts here.
ODC Public Domain Dedication and Licence (PDDL) v1.0: http://www.opendatacommons.org/licenses/pddl/1.0/
License information was derived automatically
A. SUMMARY Case information on COVID-19 laboratory testing. This data includes a daily count of test results reported, and how many of those were positive, negative, and indeterminate. Reported tests include tests with a positive, negative, or indeterminate result. Indeterminate results, which could not conclusively determine whether the COVID-19 virus was present, are not included in the calculation of percent positive. Testing for the novel coronavirus is available through commercial, clinical, and hospital laboratories, as well as the SFDPH Public Health Laboratory.
Tests are de-duplicated by individual and specimen collection date. This means that if a person is tested multiple times on different dates, each of those tests is counted separately (one per specimen collection date), while repeat tests for the same person on the same date are counted once.
The total number of positive test results is not equal to the total number of COVID-19 cases in San Francisco.
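The de-duplication rule described above can be sketched as keeping one record per (person, specimen collection date) pair. This is an illustrative sketch with hypothetical field names, not SFDPH's actual pipeline.

```python
# Hypothetical test records; person_id and collection_date are assumed names.
records = [
    {"person_id": 1, "collection_date": "2020-04-01", "result": "positive"},
    {"person_id": 1, "collection_date": "2020-04-01", "result": "positive"},  # same-day repeat
    {"person_id": 1, "collection_date": "2020-04-15", "result": "negative"},  # new date, kept
]

seen = set()
deduped = []
for r in records:
    key = (r["person_id"], r["collection_date"])  # dedup key: person + date
    if key not in seen:
        seen.add(key)
        deduped.append(r)

print(len(deduped))  # 2: the same-day repeat collapses, the later test stays
```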
B. HOW THE DATASET IS CREATED Laboratory test volume and positivity for COVID-19 is based on electronic laboratory test reports. Deduplication, quality assurance measures and other data verification processes maximize accuracy of laboratory test information.
C. UPDATE PROCESS Updates automatically at 05:00 Pacific Time each day. A redundant run is scheduled at 09:00 in case of pipeline failure.
D. HOW TO USE THIS DATASET Due to the high degree of variation in the time different labs need to complete tests, there is a delay in this reporting. On March 24, the Health Officer ordered all labs in the City to report complete COVID-19 testing information to the local and state health departments. To track trends over time, a data user can analyze this data by "result_date" and see how the count of reported results and the positivity rate have changed.
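As a minimal sketch of the percent-positive calculation described in the summary, indeterminate results are dropped from the denominator. The daily counts here are made up for illustration.

```python
# Illustrative daily counts for one result_date (not real SFDPH data).
daily = {"tests": 250, "pos": 20, "neg": 225, "indeterminate": 5}

# Indeterminate results are excluded from the percent-positive calculation.
conclusive = daily["pos"] + daily["neg"]
pct_positive = 100 * daily["pos"] / conclusive
print(round(pct_positive, 2))  # 8.16
```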
E. CHANGE LOG
CC0 1.0: https://spdx.org/licenses/CC0-1.0.html
Concentrations of the rare earth elements (REE) and Thorium-232 (232Th) are presented for filtered air (dust) samples collected from the northern Gulf of Alaska region, including from Middleton Island (AK) (59.4214 N, 146.3493 W) and the Copper River delta (60.4324 N, 145.0954 W). Size-fractionated samples were collected in November 2019, using a Tisch Volumetric Flow Controlled (VFC) high volume sampler (Tisch Environmental, TE-5170V-BL) outfitted with a Cascade impactor. The six size fractions collected ranged from <0.49 micrometers (um) to >7.2 um in diameter. This sampler technology is discussed in greater detail in Morton et al. (2013). Samples were filtered with acid-washed Whatman 41 (W41) cellulose fiber filters. Additional bulk dust samples were collected in October 2012, using a Thermo Partisol Plus 2025 with Teflon filters. Samples were fully digested using concentrated nitric and hydrofluoric acids, following the approach of Morton et al. (2013). Samples were analyzed using a Thermo Fisher iCAP inductively coupled plasma mass spectrometer (ICP-MS) in KED mode, with He as a collision cell gas, adapted from the approach of Trommetter et al. (2020). Concentrations were determined from standard curves using a REE ICP-MS standard from High-Purity Standards (which also contained 232Th). Three internal standards (Ge, In, and Bi) were added to both samples and standards to correct for short-term variability in the instrument response and to evaluate the stability of the mass response during the ICP-MS run. Concentration estimates for the REE and 232Th were blank-corrected using full-process blanks that included filters deployed during times when there was no known dust deposition. Most of the full-process blank concentrations were 100 times or more smaller than the concentrations of our lowest standard (with the exception of Ce, the concentration of which was ~7 times smaller than our lowest standard).
This means that our blank concentrations were very low but also not quantified extremely accurately. Our best estimates are that the full-process blanks, including filters, ranged from 0.02 picograms per square centimeter (pg cm-2) for Eu, Tb, and Ho, to 2 pg cm-2 for Ce. These blank concentrations were in all cases 40 times or more smaller than our lowest REE sample concentration for the <0.49 um size fraction with the smallest amount of dust, and ~3 orders of magnitude smaller than the signal of the largest samples. The REE data are also presented in a double-normalized format that first normalizes to concentrations of Post-Archean Australian Shale and then normalizes to the mean REE concentration. The normalization approach is slightly modified from that of Serno et al. (2014).

Methods

Sampling approaches: Alaskan glacial dust samples were collected in a variety of ways from two different locations, during multiple years. Bulk dust was collected using a Thermo Partisol Plus 2025 on Middleton Island, Alaska (59.42144 °N, 146.3493 °W) continuously during two years from 2011-2012, as described in Schroth et al. (2017). The sampling location, at a high point in the southwestern third of the ~50-meter (m) high island, was selected to be close to a source of needed electricity but far from frequently traveled roads and other local sources of aerosol (e.g. diesel generators at the northeastern end). Size-fractionated samples were also collected on Middleton Island (same location) during a large glacial dust event from 10-12 November 2019 using a Tisch Volumetric Flow Controlled (VFC) high volume sampler (Tisch Environmental, TE-5170V-BL) outfitted with a Cascade impactor. The six size fractions collected ranged from <0.49 micrometers (um) to >7.2 um in diameter. This sampler technology is discussed in greater detail in Morton et al. (2013). Samples were filtered with Whatman 41 (W41) cellulose fiber filters.
Filters were acid-washed using environmental grade 0.5 M HCl, very thoroughly rinsed with Milli-Q water, and then dried in a HEPA-filtered laminar flow hood. Finally, one large-volume sample was generated by collecting dust on July 29, 2011, from the shelves of a shed in the Copper River delta (close to Million Dollar Bridge, at 60.67454 °N, 144.74811 °W) that was in the path of the Copper River-derived dust plume and had a hole in its roof, which served as the sample collection device. This large dust sample probably integrated over a few years prior to collection. The validity of this sample, despite a very unconventional sampling approach, is confirmed by the striking similarity of its rare earth element (REE) signature to the REE signature of the other samples collected from Middleton Island (especially to the >7.2 um diameter sample).

Digestion methods: Samples of a few milligrams (mg) of dust (typical for the largest size fractions; sometimes less) were weighed after equilibration in lab air overnight in a laminar flow hood to minimize moisture content fluctuations. Moisture content of the dust has the potential to alter mass estimates substantially, hence this step was essential. Total particle digestions were carried out in Savillex 15-milliliter (mL) Teflon vials on a hotplate using a three-step digestion, at 140-150 degrees Celsius (°C), patterned after Morton et al. (2013), using: 1) Optima concentrated HNO3; 2) a 4:1 mixture of Optima concentrated HNO3 and Optima concentrated HF; 3) Optima concentrated HNO3, followed in each case by evaporation to dryness. Samples were then redissolved in 4 M HNO3 at 90 °C for two hours. Digestions and evaporation steps were carried out within a polycarbonate enclosure, with HEPA-filtered air intake, on top of a clean polyethylene sheet, within an exhausting fume hood.
Analyses: Concentrations of the REE and 232Th were determined on a Thermo Fisher iCAP inductively coupled plasma mass spectrometer (ICP-MS) in KED mode, with He as a collision cell gas, adapted from the approach of Trommetter et al. (2020). Concentrations were determined from standard curves using a REE ICP-MS standard from High-Purity Standards (which also contained 232Th). Three internal standards (Ge, In, and Bi) were added to both samples and standards to correct for short-term variability in the instrument response and to evaluate the stability of the mass response during the ICP-MS run. Concentration estimates for the REE and 232Th were blank-corrected using full-process blanks that included filters deployed during times when there was no known dust deposition. Most of the full-process blank concentrations were 100 times or more smaller than the concentrations of our lowest standard (with the exception of Ce, the concentration of which was ~7 times smaller than our lowest standard). This means that our blank concentrations were very low but also not quantified extremely accurately. Our best estimates are that the full-process blanks, including filters, ranged from 0.02 picograms per square centimeter (pg cm-2) for Eu, Tb, and Ho, to 2 pg cm-2 for Ce. These blank concentrations were in all cases 40 times or more smaller than our lowest REE sample concentration for the <0.49 um size fraction with the smallest amount of dust, and ~3 orders of magnitude smaller than the signal of the largest samples. Hence, the blanks did not impact our REE concentration estimates significantly. Solid reference materials were analyzed to evaluate the completeness of the particle digestion process and the accuracy of the standard solution calibrations. These included PACS-3 and a large sample of Columbia River Basalt (BCR-UW) collected, ground, and homogenized by Prof. Bruce Nelson of UW Earth and Space Sciences, from the same location as BCR-1 and BCR-2.
In addition, Arizona test dust and a large-volume dust sample from the Copper River delta (see the sampling methods discussion) were analyzed multiple times as additional constraints on reproducibility. Our REE concentration estimates agree within roughly four percent of the published concentrations for the reference materials PACS-3 and BCR-UW, while 232Th concentrations agree within ~5%. We used BCR-UW as a reference material because the USGS Geochemical Reference Materials lab was not able to provide us with any BCR-2 upon request; we assume that this sample is of the same composition as BCR-2, as is suggested by the REE concentrations.

Double-normalized REE data processing: REE concentrations were double-normalized as follows. They were first normalized to the average Post-Archean Australian Shale (PAAS) REE concentrations (McLennan, 1989), following recent practice (Grenier et al., 2018; Zhang et al., 2008; Friend et al., 2008). Such normalization generates smooth plots of the REE because it helps to reduce the effect whereby even-atomic-numbered REE are more abundant than odd-atomic-numbered REE (see Grenier et al., 2018). Concentrations were then normalized again to the mean PAAS-normalized REE concentration of each sample, similar to the approach of Serno et al. (2014), who normalized to average Upper Continental Crust. See Serno et al. (2014) for more detail on double normalization. This normalization approach differs slightly from the Upper Continental Crust normalization used by Serno et al. (2014), although there is only a very minor impact on the REE patterns (Garcia-Solsona et al., 2014, Appx. A). Another reason we used this PAAS normalization approach is that it has been used in recent publications where the europium anomaly, Eu/Eu*, is estimated (e.g. Friend et al., 2008; Zhang et al., 2008; Grenier et al., 2018), and it is thus consistent with recent practice in the oceanographic community.
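The double normalization has two steps: divide each measured REE concentration by its PAAS value, then divide each result by the sample's mean PAAS-normalized value. A minimal sketch follows; the concentration numbers are illustrative placeholders, not the actual PAAS (McLennan, 1989) values or the dataset's measurements.

```python
# Illustrative concentrations in ppm (not real PAAS or sample values).
sample = {"La": 30.0, "Ce": 64.0, "Nd": 26.0}
paas   = {"La": 38.2, "Ce": 79.6, "Nd": 33.9}

# Step 1: PAAS-normalize each element.
step1 = {el: sample[el] / paas[el] for el in sample}

# Step 2: normalize again to the sample's mean PAAS-normalized value.
mean1 = sum(step1.values()) / len(step1)
double_norm = {el: v / mean1 for el, v in step1.items()}

# By construction, the double-normalized values average to 1,
# so samples with very different absolute dust loads plot on one scale.
print(sum(double_norm.values()) / len(double_norm))
```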
FutureBeeAI AI Data License Agreement: https://www.futurebeeai.com/policies/ai-data-license-agreement
Introducing the Korean Product Image Dataset - a diverse and comprehensive collection of images meticulously curated to propel the advancement of text recognition and optical character recognition (OCR) models designed specifically for the Korean language.
Dataset Contents & Diversity: Containing a total of 2,000 images, this Korean OCR dataset offers a diverse distribution across different types of product front images. In this dataset, you'll find a variety of text, including product names, taglines, logos, company names, addresses, and product content. Images in this dataset showcase distinct fonts, writing formats, colors, designs, and layouts.
To ensure the diversity of the dataset and to build a robust text recognition model, we allow a limited number (fewer than five) of unique images from a single source. Stringent measures have been taken to exclude any personally identifiable information (PII) and to ensure that a minimum of 80% of each image contains visible Korean text.
Images have been captured under varying lighting conditions – both day and night – along with different capture angles and backgrounds, to build a balanced OCR dataset. The collection features images in portrait and landscape modes.
All images were captured by native Korean speakers to ensure text quality and to avoid toxic content and PII. We used recent iOS and Android mobile devices with cameras above 5 MP to capture the images and maintain image quality. Images in this training dataset are available in both JPEG and HEIC formats.
Metadata: Along with the image data, you will also receive detailed structured metadata in CSV format. For each image, it includes metadata such as image orientation, country, language, and device information. Each image is named to correspond with its metadata.
The metadata serves as a valuable tool for understanding and characterizing the data, facilitating informed decision-making in the development of Korean text recognition models.
Update & Custom Collection: We're committed to expanding this dataset by continuously adding more images with the assistance of our native Korean crowd community.
If you require a custom product image OCR dataset tailored to your guidelines or specific device distribution, feel free to contact us. We're equipped to curate specialized data to meet your unique needs.
Furthermore, we can annotate the images with bounding boxes or transcribe the text in the images to align with your specific project requirements, using our crowd community.
License: This image dataset, created by FutureBeeAI, is now available for commercial use.
Conclusion: Leverage the power of this product image OCR dataset to elevate the training and performance of text recognition, text detection, and optical character recognition models within the realm of the Korean language. Your journey to enhanced language understanding and processing starts here.
Our new Strategic Plan serves as a blueprint for enhancing our department and services over the next five years and beyond, aligning our programs, projects, and activities with the City Council's goals and priorities. Developed with significant input from our employees, this Plan reflects a bottom-up process that incorporates your ideas and contributions. The focus of this Plan is to set goals and objectives that are actionable, aspirational, and achievable by our Public Works department. Your thoughtful input was fundamental in shaping our mission, vision, and values. The Plan encompasses service delivery to our customers as well as internal factors such as training, employee development, public outreach, customer service, and communication. With your participation, we have identified our strengths and weaknesses and developed strategies to become a more effective, responsive, and transparent organization, and a better place to work.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
Note: The schema changed in February 2025 - please see below. We will post a roadmap of upcoming changes, but service URLs and schema are now stable. For deployment status of new services beginning in February 2025, see https://gis.data.ca.gov/pages/city-and-county-boundary-data-status. Additional roadmap and status links are at the bottom of this metadata.
This dataset is regularly updated as the source data from CDTFA is updated, as often as many times a month. If you require unchanging point-in-time data, export a copy for your own use rather than using the service directly in your applications.

Purpose
County boundaries along with third-party identifiers used to join in external data. Boundaries are from the California Department of Tax and Fee Administration (CDTFA). These boundaries are the best available statewide data source in that CDTFA receives changes in incorporation and boundary lines from the Board of Equalization, which receives them from local jurisdictions for tax purposes. Boundary accuracy is not guaranteed, and though CDTFA works to align boundaries based on historical records and local changes, errors will exist. If you require a legal assessment of boundary location, contact a licensed surveyor.
This dataset joins in multiple attributes and identifiers from the US Census Bureau and Board on Geographic Names to facilitate adding additional third-party data sources. In addition, we attach attributes of our own to ease and reduce common processing needs and questions. Finally, coastal buffers are separated into separate polygons, leaving the land-based portions of jurisdictions and coastal buffers in adjacent polygons. This feature layer is for public use.
Related Layers
This dataset is part of a grouping of many datasets:
- Cities: Only the city boundaries and attributes, without any unincorporated areas
  - With Coastal Buffers
  - Without Coastal Buffers
- Counties: Full county boundaries and attributes, including all cities within as a single polygon
  - With Coastal Buffers (this dataset)
  - Without Coastal Buffers
- Cities and Full Counties: A merge of the other two layers, so polygons overlap within city boundaries. Some customers require this behavior, so we provide it as a separate service.
  - With Coastal Buffers
  - Without Coastal Buffers
- City and County Abbreviations
- Unincorporated Areas (Coming Soon)
- Census Designated Places
- Cartographic Coastline
  - Polygon
  - Line source (Coming Soon)

Working with Coastal Buffers
The dataset you are currently viewing includes the coastal buffers for cities and counties that have them in the source data from CDTFA. In the versions where they are included, they remain as a second polygon on cities or counties that have them, with all the same identifiers, and a value in the COASTAL field indicating whether it's an ocean or a bay buffer. If you wish to have a single polygon per jurisdiction that includes the coastal buffers, you can run a Dissolve on the version that has the coastal buffers, on all the fields except OFFSHORE and AREA_SQMI, to get a version with the correct identifiers.

Point of Contact
California Department of Technology, Office of Digital Services, odsdataservices@state.ca.gov

Field and Abbreviation Definitions
- CDTFA_COUNTY: CDTFA county name. For counties, this will be the name of the polygon itself. For cities, it is the name of the county the city polygon is within.
- CDTFA_COPRI: county number followed by the 3-digit city primary number used in the Board of Equalization's 6-digit tax rate area numbering system.
The boundary data originate with CDTFA's teams managing tax rate information, so this field is preserved and flows into this dataset.
- CENSUS_GEOID: numeric geographic identifiers from the US Census Bureau
- CENSUS_PLACE_TYPE: City, County, or Town, stripped off the census name for identification purposes.
- GNIS_PLACE_NAME: Board on Geographic Names authorized nomenclature for area names published in the Geographic Name Information System
- GNIS_ID: The numeric identifier from the Board on Geographic Names that can be used to join these boundaries to other datasets utilizing this identifier.
- CDT_COUNTY_ABBR: Abbreviations of county names - originally derived from CalTrans Division of Local Assistance and now managed by CDT. Abbreviations are 3 characters.
- CDT_NAME_SHORT: The name of the jurisdiction (city or county) with the word "City" or "County" stripped off the end. Some changes may come to how we process this value to make it more consistent.
- AREA_SQMI: The area of the administrative unit (city or county) in square miles, calculated in EPSG 3310 California Teale Albers.
- OFFSHORE: Indicates if the polygon is a coastal buffer. Null for land polygons. Additional values include "ocean" and "bay".
- PRIMARY_DOMAIN: Currently empty/null for all records. Placeholder field for the official URL of the city or county.
- CENSUS_POPULATION: Currently null for all records. In the future, it will include the most recent US Census population estimate for the jurisdiction.
- GlobalID: While all of the layers we provide in this dataset include a GlobalID field with unique values, we do not recommend you make any use of it. The GlobalID field exists to support offline sync, but is not persistent, so data keyed to it will be orphaned at our next update. Use one of the other persistent identifiers, such as GNIS_ID or GEOID, instead.
Boundary Accuracy
County boundaries were originally derived from a 1:24,000 accuracy dataset, with improvements made in some places to boundary alignments based on research into historical records and boundary changes as CDTFA learns of them. City boundary data are derived from pre-GIS tax maps, digitized at BOE and CDTFA, with adjustments made directly in GIS for new annexations, detachments, and corrections.

Boundary accuracy within the dataset varies. While CDTFA strives to correctly include or exclude parcels from jurisdictions for accurate tax assessment, this dataset does not guarantee that a parcel is placed in the correct jurisdiction. Even when a parcel is in the correct jurisdiction, this dataset cannot guarantee accurate placement of boundary lines within or between parcels or rights of way. This dataset also provides no information on parcel boundaries. For exact jurisdictional or parcel boundary locations, please consult the county assessor's office and a licensed surveyor.

CDTFA's data is used as the best available source because BOE and CDTFA receive information about changes in jurisdictions which would otherwise need to be collected independently by an agency or company to compile into usable map boundaries. CDTFA maintains the best available statewide boundary information.

CDTFA's source data notes the following about accuracy: City boundary changes and county boundary line adjustments filed with the Board of Equalization per Government Code 54900. This GIS layer contains the boundaries of the unincorporated county and incorporated cities within the state of California. The initial dataset was created in March of 2015 and was based on the State Board of Equalization tax rate area boundaries. As of April 1, 2024, the maintenance of this dataset is provided by the California Department of Tax and Fee Administration for the purpose of determining sales and use tax rates.
The boundaries are continuously being revised to align with aerial imagery when areas of conflict are discovered between the original boundary provided by the California State Board of Equalization and the boundary made publicly available by local, state, and federal government. Some differences may occur between actual recorded boundaries and the boundaries used for sales and use tax purposes. The boundaries in this map are representations of taxing jurisdictions for the purpose of determining sales and use tax rates and should not be used to determine precise city or county boundary line locations.

Boundary Processing
These data make a structural change from the source data. While the full boundaries provided by CDTFA include coastal buffers of varying sizes, many users need boundaries to end at the shoreline of the ocean or a bay. As a result, after examining existing city and county boundary layers, these datasets provide a coastline cut generally along the ocean-facing coastline. For county boundaries in northern California, the cut runs near the Golden Gate Bridge, while for cities, we cut along the bay shoreline and into the edge of the Delta at the boundaries of Solano, Contra Costa, and Sacramento counties. In the services linked above, the versions that include the coastal buffers contain them as a second (or third) polygon for the city or county, with the value in the COASTAL field set to whether it's a bay or ocean polygon. These can be processed back into a single polygon by dissolving on all the fields you wish to keep, since the attributes, other than the COASTAL field and geometry attributes (like areas), remain the same between the polygons for this purpose.

Slivers
In cases where a city or county's boundary ends near a coastline, our coastline data may cross back and forth many times while roughly paralleling the jurisdiction's boundary, resulting in many polygon slivers.
We post-process the data to remove these slivers using a city/county boundary priority algorithm. That is, when the data run parallel to each other, we discard the coastline cut and keep the CDTFA-provided boundary, even if it extends into the ocean a small amount. This processing supports consistent boundaries for Fort Bragg, Point Arena, San Francisco, Pacifica, Half Moon Bay, and Capitola, in addition to others. More information on this algorithm will be provided soon.

Coastline Caveats
Some cities have buffers extending into water bodies that we do not cut at the shoreline. These include South Lake Tahoe and Folsom, which extend into neighboring lakes, and San Diego and surrounding cities that extend into San Diego Bay, which our shoreline encloses. If you have feedback on the exclusion of these items, or others, from the shoreline
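The dissolve step described earlier (grouping on every field except OFFSHORE and AREA_SQMI so a jurisdiction's land polygon and its coastal buffer collapse back into one record) can be sketched on the attribute table with plain pandas. The rows, identifiers, and area values below are illustrative toy data, not real CDTFA records; a true spatial Dissolve in ArcGIS, QGIS, or geopandas would union the geometries as well.

```python
# Attribute-only sketch of the dissolve logic: a county with a coastal
# buffer appears as two rows sharing the same identifiers; grouping on
# every field except OFFSHORE and AREA_SQMI merges them into one record.
import pandas as pd

rows = pd.DataFrame([
    # Toy rows (identifiers and areas are made up for illustration).
    {"CDTFA_COUNTY": "Mendocino", "GNIS_ID": "0001", "OFFSHORE": None,    "AREA_SQMI": 3506.0},
    {"CDTFA_COUNTY": "Mendocino", "GNIS_ID": "0001", "OFFSHORE": "ocean", "AREA_SQMI": 12.0},
    {"CDTFA_COUNTY": "Inyo",      "GNIS_ID": "0002", "OFFSHORE": None,    "AREA_SQMI": 10192.0},
])

# Group on every identifier field; sum the derived area attribute.
group_fields = [c for c in rows.columns if c not in ("OFFSHORE", "AREA_SQMI")]
merged = rows.groupby(group_fields, as_index=False)["AREA_SQMI"].sum()
```

After the grouping, `merged` holds one row per jurisdiction, with the buffer's area folded into the land polygon's total.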
http://www.apache.org/licenses/LICENSE-2.0
DeepWeeds: A Multiclass Weed Species Image Dataset for Deep Learning
This repository makes available the source code and public dataset for the work, "DeepWeeds: A Multiclass Weed Species Image Dataset for Deep Learning", published with open access by Scientific Reports: https://www.nature.com/articles/s41598-018-38343-3. The DeepWeeds dataset consists of 17,509 images capturing eight different weed species native to Australia in situ with neighbouring flora. In our work, the dataset was classified to an average accuracy of 95.7% with the ResNet50 deep convolutional neural network.
The source code, images and annotations are licensed under CC BY 4.0 license. The contents of this repository are released under an Apache 2 license.
Download the dataset images and our trained models
images.zip (468 MB)
models.zip (477 MB)
Due to the size of the images and models, they are hosted outside of the GitHub repository. The images and models must be downloaded into directories named "images" and "models", respectively, at the root of the repository. If you execute the Python script (deepweeds.py) as instructed below, this step will be performed for you automatically.
TensorFlow Datasets
Alternatively, you can access the DeepWeeds dataset with TensorFlow Datasets, TensorFlow's official collection of ready-to-use datasets. DeepWeeds was officially added to the TensorFlow Datasets catalog in August 2019.
Weeds and locations
The selected weed species are local to pastoral grasslands across the state of Queensland. They include: "Chinee apple", "Snake weed", "Lantana", "Prickly acacia", "Siam weed", "Parthenium", "Rubber vine" and "Parkinsonia". The images were collected from weed infestations at the following sites across Queensland: "Black River", "Charters Towers", "Cluden", "Douglas", "Hervey Range", "Kelso", "McKinlay" and "Paluma". The table and figure below break down the dataset by weed, location and geographical distribution.
Data organization
Images are assigned unique filenames that include the date/time the image was photographed and an ID number for the instrument that produced the image. The format is YYYYMMDD-HHMMSS-ID, where the ID is an integer from 0 to 3. The unique filenames are strings of 17 characters, such as 20170320-093423-1.
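A filename in this convention can be split back into its capture timestamp and instrument ID with the standard library. The helper name below is our own, not part of the DeepWeeds code:

```python
# Parse a DeepWeeds filename stem (YYYYMMDD-HHMMSS-ID) into its parts.
from datetime import datetime

def parse_deepweeds_name(stem):
    """Parse e.g. '20170320-093423-1' into (capture datetime, camera ID)."""
    date_part, time_part, cam_id = stem.split("-")
    taken = datetime.strptime(date_part + time_part, "%Y%m%d%H%M%S")
    return taken, int(cam_id)

taken, cam = parse_deepweeds_name("20170320-093423-1")
# taken is datetime(2017, 3, 20, 9, 34, 23); cam is 1
```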
labels
The labels.csv file assigns species labels to each image. It is a comma separated text file in the format:
Filename,Label,Species
...
20170207-154924-0.jpg,7,Snake weed
20170610-123859-1.jpg,1,Lantana
20180119-105722-1.jpg,8,Negative
...
Note: The specific label subsets of training (60%), validation (20%) and testing (20%) for the five-fold cross validation used in the paper are also provided here as CSV files in the same format as "labels.csv".
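Given this format, the label file can be read with the standard csv module. The sample text below mirrors the excerpt above; in practice you would open "labels.csv" (or one of the per-fold CSV files) instead of an inline string:

```python
# Read rows in the labels.csv format and tally images per species.
import csv
import io
from collections import Counter

SAMPLE = """Filename,Label,Species
20170207-154924-0.jpg,7,Snake weed
20170610-123859-1.jpg,1,Lantana
20180119-105722-1.jpg,8,Negative
"""

def species_counts(csv_text):
    """Count images per species in labels.csv-formatted text."""
    reader = csv.DictReader(io.StringIO(csv_text))  # header: Filename,Label,Species
    return Counter(row["Species"] for row in reader)

counts = species_counts(SAMPLE)
# counts == Counter({'Snake weed': 1, 'Lantana': 1, 'Negative': 1})
```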
models
We provide the most successful ResNet50 and InceptionV3 models saved in Keras' hdf5 model format. The ResNet50 model, which provided the best results, has also been converted to UFF format in order to construct a TensorRT inference engine.
resnet.hdf5
inception.hdf5
resnet.uff
deepweeds.py
This Python script trains and evaluates Keras' base implementation of ResNet50 and InceptionV3 on the DeepWeeds dataset, pre-trained with ImageNet weights. The performance of the networks is cross-validated over five folds. The final classification accuracy is taken to be the average across the five folds. Similarly, the final confusion matrix from the associated paper aggregates across the five independent folds. The script also provides the ability to measure inference speeds within the TensorFlow environment.
The script can be executed to carry out these computations using the following commands.
To train and evaluate the ResNet50 model with five-fold cross validation, use python3 deepweeds.py cross_validate --model resnet.
To train and evaluate the InceptionV3 model with five-fold cross validation, use python3 deepweeds.py cross_validate --model inception.
To measure inference times for the ResNet50 model, use python3 deepweeds.py inference --model models/resnet.hdf5.
To measure inference times for the InceptionV3 model, use python3 deepweeds.py inference --model models/inception.hdf5.
Dependencies
The required Python packages to execute deepweeds.py are listed in requirements.txt.
tensorrt
This folder includes C++ source code for creating and executing a ResNet50 TensorRT inference engine on an NVIDIA Jetson TX2 platform. To build and run on your Jetson TX2, execute the following commands:
cd tensorrt/src
make -j4
cd ../bin
./resnet_inference
Citations
If you use the DeepWeeds dataset in your work, please cite it as:
IEEE style citation: “A. Olsen, D. A. Konovalov, B. Philippa, P. Ridd, J. C. Wood, J. Johns, W. Banks, B. Girgenti, O. Kenny, J. Whinney, B. Calvert, M. Rahimi Azghadi, and R. D. White, “DeepWeeds: A Multiclass Weed Species Image Dataset for Deep Learning,” Scientific Reports, vol. 9, no. 2058, 2 2019. [Online]. Available: https://doi.org/10.1038/s41598-018-38343-3 ”
BibTeX
@article{DeepWeeds2019,
  author  = {Alex Olsen and Dmitry A. Konovalov and Bronson Philippa and Peter Ridd and Jake C. Wood and Jamie Johns and Wesley Banks and Benjamin Girgenti and Owen Kenny and James Whinney and Brendan Calvert and Mostafa {Rahimi Azghadi} and Ronald D. White},
  title   = {{DeepWeeds: A Multiclass Weed Species Image Dataset for Deep Learning}},
  journal = {Scientific Reports},
  year    = 2019,
  number  = 2058,
  month   = 2,
  volume  = 9,
  issue   = 1,
  day     = 14,
  url     = "https://doi.org/10.1038/s41598-018-38343-3",
  doi     = "10.1038/s41598-018-38343-3"
}
By downloading the data, you agree with the terms & conditions mentioned below:
Data Access: The data in the research collection may only be used for research purposes. Portions of the data are copyrighted and have commercial value as data, so you must be careful to use them only for research purposes.
Summaries, analyses, and interpretations of the linguistic properties of the information may be derived and published, provided it is impossible to reconstruct the information from these summaries. You may not attempt to identify the individuals whose texts are included in this dataset, nor the original entries on the fact-checking site. You are not permitted to publish any portion of the dataset besides summary statistics, or to share it with anyone else.
We grant you the right to access the collection's content as described in this agreement. You may not otherwise make unauthorised commercial use of, reproduce, prepare derivative works, distribute copies, perform, or publicly display the collection or parts of it. You are responsible for keeping and storing the data in a way that others cannot access. The data is provided free of charge.
Citation
Please cite our work as
@InProceedings{clef-checkthat:2022:task3,
  author    = {K{\"o}hler, Juliane and Shahi, Gautam Kishore and Stru{\ss}, Julia Maria and Wiegand, Michael and Siegel, Melanie and Mandl, Thomas},
  title     = {Overview of the {CLEF}-2022 {CheckThat}! Lab Task 3 on Fake News Detection},
  year      = {2022},
  booktitle = {Working Notes of CLEF 2022---Conference and Labs of the Evaluation Forum},
  series    = {CLEF~'2022},
  address   = {Bologna, Italy},
}
@article{shahi2021overview,
  title   = {Overview of the CLEF-2021 CheckThat! lab task 3 on fake news detection},
  author  = {Shahi, Gautam Kishore and Stru{\ss}, Julia Maria and Mandl, Thomas},
  journal = {Working Notes of CLEF},
  year    = {2021}
}
Problem Definition: Given the text of a news article, determine whether the main claim made in the article is true, partially true, false, or other (e.g., claims in dispute) and detect the topical domain of the article. This task will run in English and German.
Task 3: Multi-class fake news detection of news articles (English). Sub-task A addresses fake news detection as a four-class classification problem: given the text of a news article, determine whether the main claim made in the article is true, partially true, false, or other. The training data will be released in batches and comprises roughly 1,264 English-language articles with their respective labels. Our definitions for the categories are as follows:
False - The main claim made in an article is untrue.
Partially False - The main claim of an article is a mixture of true and false information. The article contains partially true and partially false information but cannot be considered 100% true. It includes all articles in categories like partially false, partially true, mostly true, miscaptioned, misleading etc., as defined by different fact-checking services.
True - This rating indicates that the primary elements of the main claim are demonstrably true.
Other - An article that cannot be categorised as true, false, or partially false due to a lack of evidence about its claims. This category includes articles in dispute and unproven articles.
Cross-Lingual Task (German)
Along with the multi-class task for the English language, we have introduced a task for a low-resource language. We will provide test data in German. The idea of the task is to use the English data and transfer learning to build a classification model for the German language.
Input Data
The data will be provided in the format of ID, title, text, rating, and domain; the description of the columns is as follows:
ID- Unique identifier of the news article
Title- Title of the news article
text- Text mentioned inside the news article
our rating - class of the news article as false, partially false, true, other
Output data format
public_id- Unique identifier of the news article
predicted_rating- predicted class
Sample File
public_id, predicted_rating
1, false
2, true
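A submission in this format can be produced with the standard csv module. The predictions dict below is illustrative, and exact whitespace handling after commas is an assumption; the ratings come from the four classes defined above:

```python
# Write predictions in the (public_id, predicted_rating) format.
import csv
import io

# Hypothetical model output: article ID -> predicted class.
predictions = {1: "false", 2: "true", 3: "partially false"}

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["public_id", "predicted_rating"])
for public_id in sorted(predictions):
    writer.writerow([public_id, predictions[public_id]])

submission = buf.getvalue()
```

Writing through `io.StringIO` keeps the sketch self-contained; swap in `open("submission.csv", "w", newline="")` to write an actual file.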
IMPORTANT!
We have used data from 2010 to 2022, and the fake news content covers several topics, such as elections and COVID-19.
Baseline: For this task, we have created a baseline system. The baseline system can be found at https://zenodo.org/record/6362498
Related Work
Shahi GK. AMUSED: An Annotation Framework of Multi-modal Social Media Data. arXiv preprint arXiv:2010.00502. 2020 Oct 1.https://arxiv.org/pdf/2010.00502.pdf
G. K. Shahi and D. Nandini, “FakeCovid – a multilingual cross-domain fact check news dataset for covid-19,” in workshop Proceedings of the 14th International AAAI Conference on Web and Social Media, 2020. http://workshop-proceedings.icwsm.org/abstract?id=2020_14
Shahi, G. K., Dirkson, A., & Majchrzak, T. A. (2021). An exploratory study of covid-19 misinformation on twitter. Online Social Networks and Media, 22, 100104. doi: 10.1016/j.osnem.2020.100104
Shahi, G. K., Struß, J. M., & Mandl, T. (2021). Overview of the CLEF-2021 CheckThat! lab task 3 on fake news detection. Working Notes of CLEF.
Nakov, P., Da San Martino, G., Elsayed, T., Barrón-Cedeno, A., Míguez, R., Shaar, S., ... & Mandl, T. (2021, March). The CLEF-2021 CheckThat! lab on detecting check-worthy claims, previously fact-checked claims, and fake news. In European Conference on Information Retrieval (pp. 639-649). Springer, Cham.
Nakov, P., Da San Martino, G., Elsayed, T., Barrón-Cedeño, A., Míguez, R., Shaar, S., ... & Kartal, Y. S. (2021, September). Overview of the CLEF–2021 CheckThat! Lab on Detecting Check-Worthy Claims, Previously Fact-Checked Claims, and Fake News. In International Conference of the Cross-Language Evaluation Forum for European Languages (pp. 264-291). Springer, Cham.
Harness the Power of Fresh New Homeowner Audience Data
Our comprehensive New Homeowner Audience Data file is a meticulously curated compilation of Direct Marketing data, enriched with valuable Email Address Data. This essential resource offers unparalleled access to Consumers and Prospects who have recently moved into new homes or apartments.
Averaging an impressive 1.1 million records monthly, our dataset is continually updated with the latest information, including a dedicated 30-day hotline file for the most recent movers. This ensures you're always working with the freshest and most relevant data.
With an average income surpassing $55K and a high concentration of families, these new homeowners present a prime opportunity for businesses across various sectors. From healthcare providers and home improvement specialists to financial advisors and interior designers, our data empowers you to identify and reach your ideal customer.
Benefit from our flexible pricing options, allowing you to tailor your data acquisition to your specific business needs. Choose from transactional purchases or opt for annual licensing with unlimited use cases for marketing and analytics.
Unlock the full potential of your marketing campaigns with our New Homeowner Audience Data.
COVID-19 case information is reported through the Pennsylvania Department of Health's National Electronic Disease Surveillance System (PA-NEDSS). As new cases are passed to the Allegheny County Health Department, they are investigated by case investigators. During investigation, some cases initially determined by the State to be in the Allegheny County jurisdiction may change, which can account for differences in the number of cases, deaths, and tests between publications of the files. Additionally, information is not always reported to the State in a timely manner; delays can range from days to weeks, which can also account for discrepancies between previous and current files.

Test and case information is updated daily. This resource contains individuals who received a COVID-19 test and individuals who are probable cases. Every day, these records are overwritten with updates. Each row in the data reflects a person that is tested, not tests that are conducted. People tested more than once have their testing and case data updated using the following rules:
- Positive tests overwrite negative tests.
- Polymerase chain reaction (PCR) tests overwrite antibody or antigen (AG) tests.
- The first positive PCR test is never overwritten; data collected from additional tests do not replace it.

Note: On April 4, 2022, the Pennsylvania Department of Health stopped requiring labs to report negative AG tests. Therefore, aggregated counts that included AG tests have been removed from the Municipality/Neighborhood files going forward. Versions of this data up to this cut-off have been retained as archived files.

Individual test information is also updated daily. This resource contains the details and results of individual tests along with demographic information of the individual tested. Only PCR and AG tests are included. Every day, these records are overwritten with updates. This resource should be used to determine positivity rates.
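The per-person update rules above (positives overwrite negatives, PCR overwrites AG, and the first positive PCR result on file is never replaced) can be sketched as a small comparison function. The record shape and the order in which the rules are applied are our own illustrative assumptions, not the county's actual schema or pipeline:

```python
# Sketch of the per-person test-update rules, applied in order.
# A record is a (test_type, result) tuple, e.g. ("PCR", "positive");
# this shape is illustrative, not the actual schema.
def keep_record(current, new):
    """Return whichever of two test records should be kept for a person."""
    if current is None:
        return new
    if current == ("PCR", "positive"):
        return current          # the first positive PCR is final
    if new[1] == "positive" and current[1] == "negative":
        return new              # positive overwrites negative
    if new[0] == "PCR" and current[0] == "AG":
        return new              # PCR overwrites AG
    return current

# A person tested four times ends up recorded by their first positive PCR.
record = None
for test in [("AG", "negative"), ("AG", "positive"),
             ("PCR", "positive"), ("PCR", "negative")]:
    record = keep_record(record, test)
# record == ("PCR", "positive")
```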
The remaining datasets provide statistics on death demographics. Demographic, municipality, and neighborhood information for deaths is reported on a weekly schedule and is not included with individual cases or tests. This has been done to protect the privacy and security of individuals and their families in accordance with the Health Insurance Portability and Accountability Act (HIPAA).

Municipality or City of Pittsburgh Neighborhood is based on the geocoded home address of the individual tested. Individuals whose home address is incomplete, who may not be in Allegheny County but whose temporary residency, work, or other mitigating circumstance is determined to be in Allegheny County by the Pennsylvania Department of Health, are counted as "Undefined".

Since the start of the pandemic, the ACHD has mapped every day's COVID tests, cases, and deaths to their Allegheny County municipality and neighborhood. Tests were mapped to the patient address and, if this was not available, to the provider location. This has recently resulted in apparent testing rates that exceeded the populations of various municipalities -- mostly those with healthcare providers. As this was brought to our attention, the health department and our data partners began researching and comparing methods to most accurately display the data. This has led us to leave those with missing home addresses off the map. Although these data will still appear in test, case, and death counts, there will be over 20,000 fewer tests and almost 1,000 fewer cases on the map. In addition to these map changes, we have identified specific health systems and laboratories that had data uploading errors that resulted in missing locations, and we are working with them to correct these errors.

Due to minor discrepancies between the Municipal boundary and City of Pittsburgh Neighborhood files, individuals whose City Neighborhood cannot be identified are counted as "Undefined (Pittsburgh)".
CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Acclime is an Asia-focused corporate services specialist trusted to deliver with speed, flexibility, and precision. With a genuine, on-the-ground presence in all of Asia's hardest-to-navigate markets, we help clients manage local governmental and administrative compliance issues quickly, with a minimum of fuss. Our years of in-market experience and in-depth knowledge enable us to navigate the complexities and challenges of the regional regulatory environment, making your Asia expansion fully compliant and seamless.
Whether you're looking to establish an investment vehicle, provide support for cross-border transactions, or set up a fully operational local or regional presence, we'll help you make the most out of every incentive and benefit this unique region has to offer. With our multidisciplinary team of real experts on the ground, you can trust that our advice, management, and support will be highly relevant, market-specific, and delivered to the highest international standards.
Florida COVID-19 Cases by County exported from the Florida Department of Health GIS Layer on the date seen in the file name. Archived by the University of South Florida Libraries, Digital Heritage and Humanities Collections. Contact: LibraryGIS@usf.edu.

Please Cite Our GIS HUB. If you are a researcher or other party utilizing our Florida COVID-19 HUB as a tool, or accessing and utilizing the data provided herein, please provide an acknowledgement of such in any publication or re-publication. The following citation is suggested: University of South Florida Libraries, Digital Heritage and Humanities Collections. 2020. Florida COVID-19 Hub. Available at https://covid19-usflibrary.hub.arcgis.com/. https://doi.org/10.5038/USF-COVID-19-GIS

Live FDOH Data Source: https://services1.arcgis.com/CY1LXxl9zlJeBuRZ/arcgis/rest/services/Florida_COVID19_Cases/FeatureServer

For data 5/10/2020 or after: Archived data was exported directly from the live FDOH layer into the archive. For data prior to 5/10/2020: Data was exported by the University of South Florida - Digital Heritage and Humanities Collections using ArcGIS Pro software, then converted to shapefile and CSV and uploaded into the ArcGIS Online archive. Up until 3/25 the FDOH Cases by County layer was updated twice a day; archives are taken from the 11 AM update.

For data definitions please visit the following Box folder: https://usf.box.com/s/vfjwbczkj73ucj19yvwz53at6v6w614h
Data definition file names include the relative date they were published. The information below was taken from ancillary documents associated with the original layer from FDOH.

Persons Under Investigation/Surveillance (PUI): Essentially, PUIs are any person who has been or is waiting to be tested. This includes persons who are considered high-risk for COVID-19 due to recent travel, contact with a known case, exhibiting symptoms of COVID-19 as determined by a healthcare professional, or some combination thereof.
PUIs also include people who meet laboratory testing criteria based on symptoms and exposure, as well as confirmed cases with positive test results. PUIs include any person who is or was being tested, including those with negative and pending results. All PUIs fit into one of three residency types:
1. Florida residents tested in Florida
2. Non-Florida residents tested in Florida
3. Florida residents tested outside of Florida

Florida Residents Tested Elsewhere: The total number of Florida residents with positive COVID-19 test results who were tested outside of Florida and were not exposed/infectious in Florida.
Non-Florida Residents Tested in Florida: The total number of people with positive COVID-19 test results who were tested, exposed, and/or infectious while in Florida, but are legal residents of another state.
Total Cases: The total (sum) number of Persons Under Investigation (PUI) who tested positive for COVID-19 while in Florida, as well as Florida residents who tested positive or were exposed/contagious while outside of Florida, and out-of-state residents who were exposed, contagious, and/or tested in Florida.
Deaths: The Deaths by Day chart shows the total number of Florida residents with confirmed COVID-19 that died on each calendar day (12:00 AM - 11:59 PM). Caution should be used in interpreting recent trends, as deaths are added as they are reported to the Department.
Death data often has significant delays in reporting, so data within the past two weeks will be updated frequently.

Prefix guide:
"PUI" = PUI: Persons under surveillance (any person for whom we have data)
"T_" = Testing: Testing information for all PUIs and cases.
"C_" = Cases only: Information about cases, which are those persons who have COVID-19 positive test results on file
"W_" = Surveillance and syndromic data

Key Data about Testing:
T_negative: Testing: Total negative persons tested for all Florida and non-Florida residents, including Florida residents tested outside of the state, and those tested at private facilities.
T_positive: Testing: Total positive persons tested for all Florida and non-Florida resident types, including Florida residents tested outside of the state, and those tested at private facilities.
PUILab_Yes: All persons tested with lab results on file, including negative, positive, and inconclusive. This total does NOT include those who are waiting to be tested or have submitted tests to labs for which results are still pending.

Key Data about Confirmed COVID-19 Positive Cases:
CasesAll: Cases only: The sum total of all positive cases, including Florida residents in Florida, Florida residents outside Florida, and non-Florida residents in Florida
FLResDeaths: Deaths of Florida residents
C_Hosp_Yes: Cases (confirmed positive) with a hospital admission noted
C_AgeRange: Cases only: Age range for all cases, regardless of residency type
C_AgeMedian: Cases only: Median age for all cases, regardless of residency type
C_AllResTypes: Cases only: Sum of COVID-19 positive Florida residents; includes in- and out-of-state Florida residents, but does not include out-of-state residents who were treated/tested/isolated in Florida.

All questions regarding this dataset should be directed to the Florida Department of Health.
Florida COVID-19 Cases by County exported from the Florida Department of Health GIS Layer on date seen in file name. Archived by the University of South Florida Libraries, Digital Heritage and Humanities Collections. Contact: LibraryGIS@usf.edu.Please Cite Our GIS HUB. If you are a researcher or other utilizing our Florida COVID-19 HUB as a tool or accessing and utilizing the data provided herein, please provide an acknowledgement of such in any publication or re-publication. The following citation is suggested: University of South Florida Libraries, Digital Heritage and Humanities Collections. 2020. Florida COVID-19 Hub. Available at https://covid19-usflibrary.hub.arcgis.com/ . https://doi.org/10.5038/USF-COVID-19-GISLive FDOH DataSource: https://services1.arcgis.com/CY1LXxl9zlJeBuRZ/arcgis/rest/services/Florida_COVID19_Cases/FeatureServerFor data 5/10/2020 or after: Archived data was exported directly from the live FDOH layer into the archive. For data prior to 5/10/2020: Data was exported by the University of South Florida - Digital Heritage and Humanities Collection using ArcGIS Pro Software. Data was then converted to shapefile and csv and uploaded into ArcGIS Online archive. Up until 3/25 the FDOH Cases by County layer was updated twice a day, archives are taken from the 11AM update.For data definitions please visit the following box folder: https://usf.box.com/s/vfjwbczkj73ucj19yvwz53at6v6w614hData definition files names include the relative date they were published. The below information was taken from ancillary documents associated with the original layer from FDOH.Persons Under Investigation/Surveillance (PUI):Essentially, PUIs are any person who has been or is waiting to be tested. This includes: persons who are considered high-risk for COVID-19 due to recent travel, contact with a known case, exhibiting symptoms of COVID-19 as determined by a healthcare professional, or some combination thereof. 
PUIs also include people who meet laboratory testing criteria based on symptoms and exposure, as well as confirmed cases with positive test results. PUIs include any person who is or was being tested, including those with negative and pending results. All PUIs fit into one of three residency types:
1. Florida residents tested in Florida
2. Non-Florida residents tested in Florida
3. Florida residents tested outside of Florida

Florida Residents Tested Elsewhere: The total number of Florida residents with positive COVID-19 test results who were tested outside of Florida and were not exposed/infectious in Florida.

Non-Florida Residents Tested in Florida: The total number of people with positive COVID-19 test results who were tested, exposed, and/or infectious while in Florida, but are legal residents of another state.

Total Cases: The total (sum) number of Persons Under Investigation (PUI) who tested positive for COVID-19 while in Florida, as well as Florida residents who tested positive or were exposed/contagious while outside of Florida, and out-of-state residents who were exposed, contagious, and/or tested in Florida.

Deaths: The Deaths by Day chart shows the total number of Florida residents with confirmed COVID-19 who died on each calendar day (12:00 AM - 11:59 PM). Caution should be used in interpreting recent trends, as deaths are added as they are reported to the Department.
Death data often has significant delays in reporting, so data within the past two weeks will be updated frequently.

Prefix guide:
"PUI" = Persons under surveillance (any person for whom we have data)
"T_" = Testing: testing information for all PUIs and cases
"C_" = Cases only: information about cases, i.e., persons who have COVID-19 positive test results on file
"W_" = Surveillance and syndromic data

Key data about testing:
T_negative: Total negative persons tested for all Florida and non-Florida residents, including Florida residents tested outside of the state and those tested at private facilities.
T_positive: Total positive persons tested for all Florida and non-Florida resident types, including Florida residents tested outside of the state and those tested at private facilities.
PUILab_Yes: All persons tested with lab results on file, including negative, positive, and inconclusive. This total does NOT include those who are waiting to be tested or have submitted tests to labs for which results are still pending.

Key data about confirmed COVID-19 positive cases:
CasesAll: The sum total of all positive cases, including Florida residents in Florida, Florida residents outside Florida, and non-Florida residents in Florida.
FLResDeaths: Deaths of Florida residents.
C_Hosp_Yes: Cases (confirmed positive) with a hospital admission noted.
C_AgeRange: Age range for all cases, regardless of residency type.
C_AgeMedian: Median age for all cases, regardless of residency type.
C_AllResTypes: Sum of COVID-19 positive Florida residents; includes in-state and out-of-state Florida residents, but does not include out-of-state residents who were treated/tested/isolated in Florida.

All questions regarding this dataset should be directed to the Florida Department of Health.
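Given the prefix guide and field definitions above, a row of the archived CSV can be grouped by prefix and sanity-checked in a few lines. The field names (T_positive, T_negative, PUILab_Yes, CasesAll, C_AllResTypes) come from the definitions above; the numeric values in the example row are invented for illustration.

```python
# Example row using field names documented in the data definitions above.
# The numeric values here are invented for illustration only.
row = {
    "T_positive": 120,
    "T_negative": 2380,
    "PUILab_Yes": 2500,   # all persons with lab results on file
    "CasesAll": 120,
    "C_AllResTypes": 115,
}

def fields_with_prefix(row: dict, prefix: str) -> dict:
    """Select fields by the prefix conventions in the guide (T_, C_, W_)."""
    return {k: v for k, v in row.items() if k.startswith(prefix)}

def positivity_rate(row: dict) -> float:
    """Share of persons with lab results on file who tested positive."""
    return row["T_positive"] / row["PUILab_Yes"]

# Results on file should cover at least the positives plus negatives;
# inconclusive results may account for any remainder.
assert row["PUILab_Yes"] >= row["T_positive"] + row["T_negative"]

testing_fields = fields_with_prefix(row, "T_")
rate = positivity_rate(row)
```

Note that per the definitions, CasesAll and C_AllResTypes count different populations (all positives vs. Florida residents only), so they should not be expected to match.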