Research Ship Laurence M. Gould Underway Meteorological Data (delayed ~10 days for quality control) are from the Shipboard Automated Meteorological and Oceanographic System (SAMOS) program. IMPORTANT: ALWAYS USE THE QUALITY FLAG DATA! Each data variable's metadata includes a qcindex attribute which indicates a character number in the flag data. ALWAYS check the flag data for each row of data to see which data is good (flag='Z') and which data isn't. For example, to extract just data where time (qcindex=1), latitude (qcindex=2), longitude (qcindex=3), and airTemperature (qcindex=12) are 'good' data, include this constraint in your ERDDAP query: flag=~"ZZZ........Z.*" '=~' indicates this is a regular expression constraint. The 'Z's are literal characters. In this dataset, 'Z' indicates 'good' data. The '.'s say to match any character. The '*' says to match the previous character 0 or more times. (Don't include backslashes in your query.) See the tutorial for regular expressions at https://www.vogella.com/tutorials/JavaRegularExpressions/article.html
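The flag constraint can be checked locally before building a query. A minimal sketch with Python's re module, using invented flag strings (not real SAMOS output):

```python
import re

# The SAMOS flag column is a string of per-variable quality characters;
# positions are given by each variable's qcindex attribute. This pattern
# requires 'Z' (good) at qcindex 1, 2, 3, and 12, and accepts anything else.
pattern = re.compile(r"ZZZ........Z.*")

# Illustrative flag strings (invented for this example):
rows = [
    "ZZZZZZZZZZZZZZ",   # all good -> kept
    "ZZZSSSSSSSSZZZ",   # good at positions 1-3 and 12 -> kept
    "ZZBZZZZZZZZZZZ",   # bad flag at position 3 (longitude) -> dropped
    "ZZZZZZZZZZZBZZ",   # bad flag at position 12 (airTemperature) -> dropped
]

good = [r for r in rows if pattern.fullmatch(r)]
print(good)
```

Only the first two rows survive, because the pattern pins 'Z' at the four positions of interest and leaves the rest unconstrained.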
A list of the studies from which data were obtained.
GLAH06 is used in conjunction with GLAH05 to create the Level-2 altimetry products. Level-2 altimetry data provide surface elevations for ice sheets (GLAH12), sea ice (GLAH13), land (GLAH14), and oceans (GLAH15). Data also include the laser footprint geolocation and reflectance, as well as geodetic, instrument, and atmospheric corrections for range measurements. The Level-2 elevation products are regional products archived at 14 orbits per granule, starting and stopping at the same demarcation (±50° latitude) as GLAH05 and GLAH06. Each regional product is processed with algorithms specific to that surface type. Surface type masks define which data are written to each of the products. If any data within a given record fall within a specific mask, the entire record is written to the product. Masks can overlap: for example, non-land data in the sea ice region may be written to both the sea ice and ocean products. This means that an algorithm may write the same data to more than one Level-2 product; in this case, different algorithms calculate the elevations in their respective products. The surface type masks are versioned and archived at NSIDC, so users can tell which data to expect in each product. Each data granule has an associated browse product.
This dataset contains certain agency information for each NTD reporter filing an annual report for Report Year 2024. It is a subset of the data provided in the dataset at https://www.transit.dot.gov/ntd/data-product/2024-annual-database-agency-information
This data provides the basic information needed to establish the location of a reporting entity and the basis for its reporting to the NTD, along with key indicators such as the Universal Entity ID that uniquely identify each reporter. Finally, it describes the fiscal year period over which the data contained in each agency's annual report was collected.
This dataset contains raw (binary "filmstrip" imagery) files of PMS-2D data collected by the C-130 during ICE-L. Summary data have been merged with the "NCAR C-130 Navigation, State Parameter, and Microphysics LRT (1-sps) Data" dataset. These data are in a format compatible with xpms2d, available from the EOL xpms2d download page. Click on "Order" to see a table listing flight dates and times during which data are available.
NOAA Ship Oregon II Underway Meteorological Data (delayed ~10 days for quality control) are from the Shipboard Automated Meteorological and Oceanographic System (SAMOS) program. IMPORTANT: ALWAYS USE THE QUALITY FLAG DATA! Each data variable's metadata includes a qcindex attribute which indicates a character number in the flag data. ALWAYS check the flag data for each row of data to see which data is good (flag='Z') and which data isn't. For example, to extract just data where time (qcindex=1), latitude (qcindex=2), longitude (qcindex=3), and airTemperature (qcindex=12) are 'good' data, include this constraint in your ERDDAP query: flag=~"ZZZ........Z.*" "=~" indicates this is a regular expression constraint. The 'Z's are literal characters. In this dataset, 'Z' indicates 'good' data. The '.'s say to match any character. The '*' says to match the previous character 0 or more times. See the tutorial for regular expressions at https://www.vogella.com/tutorials/JavaRegularExpressions/article.html
U.S. Government Works: https://www.usa.gov/government-works
License information was derived automatically
Research Ship Knorr Underway Meteorological Data (delayed ~10 days for quality control) are from the Shipboard Automated Meteorological and Oceanographic System (SAMOS) program.
IMPORTANT: ALWAYS USE THE QUALITY FLAG DATA! Each data variable's metadata includes a qcindex attribute which indicates a character number in the flag data. ALWAYS check the flag data for each row of data to see which data is good (flag='Z') and which data isn't. For example, to extract just data where time (qcindex=1), latitude (qcindex=2), longitude (qcindex=3), and airTemperature (qcindex=12) are 'good' data, include this constraint in your ERDDAP query: flag=~"ZZZ........Z.*" in your query. '=~' indicates this is a regular expression constraint. The 'Z's are literal characters. In this dataset, 'Z' indicates 'good' data. The '.'s say to match any character. The '*' says to match the previous character 0 or more times. (Don't include backslashes in your query.) See the tutorial for regular expressions at http://www.vogella.com/tutorials/JavaRegularExpressions/article.html
NOAA Ship Fairweather Underway Meteorological Data (Near Real Time, updated daily) are from the Shipboard Automated Meteorological and Oceanographic System (SAMOS) program.
IMPORTANT: ALWAYS USE THE QUALITY FLAG DATA! Each data variable's metadata includes a qcindex attribute which indicates a character number in the flag data. ALWAYS check the flag data for each row of data to see which data is good (flag='Z') and which data isn't. For example, to extract just data where time (qcindex=1), latitude (qcindex=2), longitude (qcindex=3), and airTemperature (qcindex=12) are 'good' data, include this constraint in your ERDDAP query:
flag=~"ZZZ........Z.*"
in your query.
"=~" indicates this is a regular expression constraint.
The 'Z's are literal characters. In this dataset, 'Z' indicates 'good' data.
The '.'s say to match any character.
The '*' says to match the previous character 0 or more times.
See the tutorial for regular expressions at
https://www.vogella.com/tutorials/JavaRegularExpressions/article.html
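A hedged sketch of embedding this constraint in a tabledap request URL. The server address and dataset ID below are placeholders, not the real SAMOS endpoint; substitute the actual ERDDAP server and dataset before use:

```python
from urllib.parse import quote

# Placeholder ERDDAP server and dataset ID -- not a real endpoint.
base = "https://example-erddap.org/erddap/tabledap/shipSAMOS.csv"

variables = "time,latitude,longitude,airTemperature,flag"
# The regex constraint; '=~' marks it as a regular-expression constraint.
constraint = 'flag=~"ZZZ........Z.*"'

# Percent-encode the quotes and '*' so the URL is well-formed.
url = base + "?" + quote(variables, safe=",") + "&" + quote(constraint, safe="=~")
print(url)
```

The constraint is appended after the variable list, exactly as described above, with the quotation marks and `*` percent-encoded.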
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
AUTHOR RETRACTION STATEMENT: The data files described in this data record were retracted on 3 March 2020, in connection with the retraction of the related article. The article retraction note can be found here: https://doi.org/10.1038/s41586-020-1945-1. For full transparency, we leave the retracted data in place. The data file errors fall into three categories: (1) wrong calculation of water yield using the reported values in the source literature; (2) disparate study designs that proved limiting in the categorical binning of the type of forest treatment or ground cover change; (3) epistemic uncertainty in the source papers. An example of the latter is that, after our paper was published, we were informed that the underlying data in one of the source papers showed completely opposite trends to the trends reported in that paper. We caution against use of these data for any further analysis. To that end, we are working on a new data compilation together with all parties associated with the Matters Arising and Retraction, and will alert the community here when it is available.
The description below remains unchanged. This dataset contains two .xlsx spreadsheets and two .txt files relating to the prediction of streamflow response to forest cover management. The two .xlsx spreadsheets comprise a Paired Watershed Studies (PWS) database for 502 catchments, tabulated as 251 treatment-control catchment pairs, as follows:
- pws data planting.xlsx: data compiled from 90 paired watershed studies in which the intervention schemes involved planting (conversion, regrowth, afforestation/forestation). References to the original studies are provided, along with pertinent data such as site location, catchment area, and water yield response.
- pws data removal.xlsx: data compiled from 161 paired watershed studies in which the intervention schemes involved removal (deforestation). The spreadsheet layout is identical to pws data planting.xlsx.
The two .txt files contain outputs of statistical models aimed at predicting water yield response. These are also spreadsheets, but are stored as .txt due to their large size. Contained data are as follows:
- pws model complete.txt: model predictions for >400 K catchments worldwide where data for all predictor variables are available. Predictor variables were: potential storage, PET (potential evapotranspiration), AET (actual evapotranspiration), rootzone storage, runoff coefficient, permeability, and catchment area.
- pws model complete_incomplete.txt: model predictions for >2 million catchments worldwide. This includes catchments where data for all predictor variables are available ('complete') and not available ('incomplete').
The related study was a global synthesis of PWS, which are watershed studies in which one watershed serves as a reference while the adjacent watershed(s) are treated by various forest management approaches, such as forest harvesting, conversion, or afforestation. The authors aimed to assess the factors controlling streamflow response to forest planting and removal. They introduced a vegetation-to-bedrock model to explain the impacts of forest removal and planting on water yield.
Acronyms: PWS=paired watershed studies; AET=actual evapotranspiration; PET=potential evapotranspiration; P=precipitation; SDG=sustainable development goal; BRIC+US=Brazil, Russia, India, China and the United States; IQR=interquartile range; SFRA=streamflow reduction activities; RASE=root average squared error; AAE=average absolute error; RC=runoff coefficient
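A water yield response in a paired watershed study is typically derived by calibrating the treated catchment against its control over the pre-treatment period, then comparing post-treatment observations with the flow that calibration predicts. A sketch of that standard calculation with invented streamflow values (this is an illustration of the general PWS method, not the retracted study's actual computation or data):

```python
import numpy as np

# Synthetic annual streamflow (mm/yr) -- illustrative values only.
control_pre  = np.array([400., 520., 610., 480., 550.])
treated_pre  = np.array([420., 540., 640., 500., 575.])
control_post = np.array([500., 470., 530.])
treated_post = np.array([610., 560., 640.])

# Calibrate the treated-vs-control relation on the pre-treatment years.
slope, intercept = np.polyfit(control_pre, treated_pre, deg=1)

# Expected post-treatment flow had no treatment occurred.
expected = slope * control_post + intercept

# Water yield response: observed minus expected, averaged over post years.
response = float(np.mean(treated_post - expected))
print(round(response, 1))
```

A positive response indicates increased water yield after treatment (here, consistent with forest removal); a negative one indicates reduced yield (typical after planting).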
U.S. Government Works: https://www.usa.gov/government-works
License information was derived automatically
The Sloan Digital Sky Survey (SDSS) Moving Object Catalog lists astrometric and photometric data for moving objects detected in the SDSS. The catalog includes various identification parameters, SDSS astrometric and photometric measurements (five SDSS magnitudes and their errors), and orbital elements for previously cataloged asteroids. The data set also includes a list of the runs from which data are included, and filter response curves.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Lebanon LB: Proportion of People Living Below 50 Percent Of Median Income: % data was reported at 10.700 % in 2011. Lebanon LB: Proportion of People Living Below 50 Percent Of Median Income: % data is updated yearly, averaging 10.700 % from Dec 2011 (Median) to 2011, with 1 observation. The data reached an all-time high of 10.700 % in 2011 and a record low of 10.700 % in 2011. Lebanon LB: Proportion of People Living Below 50 Percent Of Median Income: % data remains active status in CEIC and is reported by World Bank. The data is categorized under Global Database’s Lebanon – Table LB.World Bank.WDI: Social: Poverty and Inequality. The percentage of people in the population who live in households whose per capita income or consumption is below half of the median income or consumption per capita. The median is measured at 2017 Purchasing Power Parity (PPP) using the Poverty and Inequality Platform (http://www.pip.worldbank.org). For some countries, medians are not reported due to grouped and/or confidential data. The reference year is the year in which the underlying household survey data was collected. In cases for which the data collection period bridged two calendar years, the first year in which data were collected is reported. Source: World Bank, Poverty and Inequality Platform. Data are based on primary household survey data obtained from government statistical agencies and World Bank country departments. Data for high-income economies are mostly from the Luxembourg Income Study database. For more information and methodology, please see http://pip.worldbank.org. The World Bank’s internationally comparable poverty monitoring database now draws on income or detailed consumption data from more than 2000 household surveys across 169 countries. See the Poverty and Inequality Platform (PIP) for details (www.pip.worldbank.org).
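The indicator definition above (share of people below half the median per capita income) can be illustrated in a few lines; the incomes below are invented, not Lebanese survey data:

```python
import numpy as np

# Synthetic per-capita incomes (illustrative values only).
income = np.array([3.0, 5.0, 6.0, 8.0, 10.0, 12.0, 15.0, 20.0, 40.0, 100.0])

median = np.median(income)        # 11.0 for this sample
threshold = 0.5 * median          # half the median: 5.5
share = float(np.mean(income < threshold)) * 100

print(f"{share:.1f}% live below half the median")
```

Two of the ten synthetic incomes (3.0 and 5.0) fall below the 5.5 threshold, so the indicator is 20% for this sample.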
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
IT: Proportion of People Living Below 50 Percent Of Median Income: % data was reported at 15.300 % in 2021. This records a decrease from the previous number of 15.600 % for 2020. IT: Proportion of People Living Below 50 Percent Of Median Income: % data is updated yearly, averaging 14.050 % from Dec 1977 (Median) to 2021, with 36 observations. The data reached an all-time high of 16.200 % in 1993 and a record low of 9.700 % in 1982. IT: Proportion of People Living Below 50 Percent Of Median Income: % data remains active status in CEIC and is reported by World Bank. The data is categorized under Global Database’s Italy – Table IT.World Bank.WDI: Social: Poverty and Inequality. The percentage of people in the population who live in households whose per capita income or consumption is below half of the median income or consumption per capita. The median is measured at 2017 Purchasing Power Parity (PPP) using the Poverty and Inequality Platform (http://www.pip.worldbank.org). For some countries, medians are not reported due to grouped and/or confidential data. The reference year is the year in which the underlying household survey data was collected. In cases for which the data collection period bridged two calendar years, the first year in which data were collected is reported. Source: World Bank, Poverty and Inequality Platform. Data are based on primary household survey data obtained from government statistical agencies and World Bank country departments. Data for high-income economies are mostly from the Luxembourg Income Study database. For more information and methodology, please see http://pip.worldbank.org. The World Bank’s internationally comparable poverty monitoring database now draws on income or detailed consumption data from more than 2000 household surveys across 169 countries. See the Poverty and Inequality Platform (PIP) for details (www.pip.worldbank.org).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Essure: database of biomedical studies. - New versions of this database will be uploaded in the future. - History of changes will be available in this figshare article. - Version 1, September 16, 2014. Original data values downloaded from ClinicalTrials.gov and PubMed. - Google Drive Spreadsheet is open (worldwide) & comments are also allowed (1st URL).
Dec 29, 2014 - Update: 1. Evidence does not support Bayer's statements: the use of Essure as a safe contraceptive method in women is highly questionable. 2. Bayer is withholding Essure safety data. 3. Agencies from several countries (e.g., FDA and INVIMA) acted as facilitators (not regulators): their actions accelerated the entry of Essure into the market. 4. Regulators are also withholding information involving the market approval of Essure. 5. Let's be straight up about the Essure case -- no need to be over-polite here -- this discussion is about pharmaceutical crimes committed against thousands of women. These criminal actions were made possible only by the complicity of others (e.g., drug regulators and authors of hidden clinical studies). - I have evidence to support the above statements. - Thousands of women have reported serious adverse events after receiving the Essure placement procedure. - Bayer's responses to these women are disrespectful, offensive, and shameful. I am here to say that these women are real cases of serious adverse events associated with Essure: - More than one paper will be submitted to peer-reviewed medical journals. - More than one post about Essure will be published in this blog (in addition to journal articles). - This is evidence of misleading advertising: http://gilmedica.com/nuestros-productos/quirurgica/pelvis-femenina-2/essure/ Notes: - Please take a look at other types of medical devices advertised by the company promoting the use of Essure in Colombia. - The geographical location of Gilmedica is very close to my home.
Gridded Population of the World (GPW) translates census population data to a latitude-longitude grid so that population data may be used in cross-disciplinary studies. There are three data files with this data set for the reference years 1990 and 1995. Over 127,000 administrative units and population counts were collected and integrated from various sources to create the gridded data. In brief, GPW was created using the following steps:
* Population data were estimated for the product reference years, 1990 and 1995, either by the data source or by interpolating or extrapolating the given estimates for other years.
* Additional population estimates were created by adjusting the source population data to match UN national population estimates for the reference years.
* Borders and coastlines of the spatial data were matched to the Digital Chart of the World where appropriate, and lakes from the Digital Chart of the World were added.
* The resulting data were then transformed into grids of UN-adjusted and unadjusted population counts for the reference years.
* Grids containing the area of administrative boundary data in each cell (net of lakes) were created and used with the count grids to produce population densities.
As with any global data set based on multiple data sources, the spatial and attribute precision of GPW is variable. The level of detail and accuracy, both in time and space, vary among the countries for which data were obtained.
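The count-to-grid and density steps can be sketched as follows. The unit-to-cell assignments, population counts, and cell areas below are invented for illustration, and each unit's count is split evenly across its cells (a simplification of GPW's area-weighted allocation):

```python
import numpy as np

# Invented administrative units: unit id -> list of (row, col) grid cells covered.
cells_of_unit = {
    "A": [(0, 0), (0, 1)],
    "B": [(1, 0)],
    "C": [(1, 1), (0, 1)],
}
population = {"A": 1000.0, "B": 300.0, "C": 800.0}  # invented counts

# Distribute each unit's count over its cells (equal split per cell here).
counts = np.zeros((2, 2))
for unit, cells in cells_of_unit.items():
    for cell in cells:
        counts[cell] += population[unit] / len(cells)

# Density = count grid divided by a cell-area grid (net of lakes).
cell_area = np.array([[10.0, 10.0], [12.0, 12.0]])  # km^2, invented
density = counts / cell_area
print(counts)
```

Cell (0, 1) receives contributions from both units A and C, mirroring how overlapping administrative units accumulate in a shared grid cell.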
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Djibouti DJ: Proportion of People Living Below 50 Percent Of Median Income: % data was reported at 17.200 % in 2017. This records a decrease from the previous number of 18.900 % for 2013. Djibouti DJ: Proportion of People Living Below 50 Percent Of Median Income: % data is updated yearly, averaging 18.050 % from Dec 2002 (Median) to 2017, with 4 observations. The data reached an all-time high of 18.900 % in 2013 and a record low of 15.400 % in 2002. Djibouti DJ: Proportion of People Living Below 50 Percent Of Median Income: % data remains active status in CEIC and is reported by World Bank. The data is categorized under Global Database’s Djibouti – Table DJ.World Bank.WDI: Social: Poverty and Inequality. The percentage of people in the population who live in households whose per capita income or consumption is below half of the median income or consumption per capita. The median is measured at 2017 Purchasing Power Parity (PPP) using the Poverty and Inequality Platform (http://www.pip.worldbank.org). For some countries, medians are not reported due to grouped and/or confidential data. The reference year is the year in which the underlying household survey data was collected. In cases for which the data collection period bridged two calendar years, the first year in which data were collected is reported. Source: World Bank, Poverty and Inequality Platform. Data are based on primary household survey data obtained from government statistical agencies and World Bank country departments. Data for high-income economies are mostly from the Luxembourg Income Study database. For more information and methodology, please see http://pip.worldbank.org. The World Bank’s internationally comparable poverty monitoring database now draws on income or detailed consumption data from more than 2000 household surveys across 169 countries. See the Poverty and Inequality Platform (PIP) for details (www.pip.worldbank.org).
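These WDI series can also be retrieved programmatically from the World Bank API (v2). A sketch of the request URL; the indicator code SI.DST.50MD is our assumption for this series and should be verified against the WDI catalog before use:

```python
# Build a World Bank API v2 request for this series.
country = "DJ"              # ISO alpha-2 code for Djibouti
indicator = "SI.DST.50MD"   # assumed WDI code for this series -- verify

url = (
    "https://api.worldbank.org/v2/country/"
    f"{country}/indicator/{indicator}?format=json&per_page=100"
)
print(url)
# A GET request on this URL returns [metadata, observations] as JSON.
```

Swapping the country code (e.g., "LB" or "IT") retrieves the corresponding series for the other entries above.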
The data comes from The Humane League's US Egg Production dataset by Samara Mendez. The dataset and code for this project are available on OSF at the US Egg Production Data Set project.
This dataset tracks the supply of cage-free eggs in the United States from December 2007 to February 2021. For TidyTuesday we've used data through February 2021, but the full dataset, with data through the present, is available in the OSF project.
egg-production.csv

| variable | class | description |
|---|---|---|
| observed_month | double | Month in which report observations are collected. Dates are recorded in ISO 8601 format (YYYY-MM-DD) |
| prod_type | character | type of egg product: hatching, table eggs |
| prod_process | character | type of production process and housing: cage-free (organic), cage-free (non-organic), all. The value 'all' includes cage-free and conventional housing. |
| n_hens | double | number of hens for a given month-type-process combo |
| n_eggs | double | number of eggs produced for a given month-type-process combo |
| source | character | Original USDA report from which data are sourced. Values correspond to titles of PDF reports. Date of report is included in title. |
cage-free-percentages.csv

| variable | class | description |
|---|---|---|
| observed_month | double | Month in which report observations are collected. Dates are recorded in ISO 8601 format (YYYY-MM-DD) |
| percent_hens | double | observed or computed percentage of cage-free hens relative to all table-egg-laying hens |
| percent_eggs | double | computed percentage of cage-free eggs relative to all table eggs. This variable is not available for data sourced from the Egg Markets Overview report |
| source | character | Original USDA report from which data are sourced. Values correspond to titles of PDF reports. Date of report is included in title. |
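The percent_hens variable can be sketched with pandas on a tiny synthetic frame mirroring egg-production.csv (the values are invented, not actual USDA figures):

```python
import pandas as pd

# Tiny synthetic slice of egg-production.csv (invented values).
prod = pd.DataFrame({
    "observed_month": ["2021-02-01", "2021-02-01"],
    "prod_type": ["table eggs", "table eggs"],
    "prod_process": ["cage-free (organic)", "all"],
    "n_hens": [5_000_000, 330_000_000],
})

# percent_hens: cage-free hens relative to all table-egg-laying hens.
cage_free = prod.loc[prod.prod_process.str.startswith("cage-free"), "n_hens"].sum()
all_hens = prod.loc[prod.prod_process.eq("all"), "n_hens"].sum()
percent_hens = 100 * cage_free / all_hens
print(round(percent_hens, 2))
```

Because 'all' already includes cage-free and conventional housing, the cage-free rows are divided by the 'all' row rather than summed with it.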
As per our latest research, the global Data Offload Station (Automotive) market size reached USD 1.18 billion in 2024, reflecting robust adoption across automotive OEMs, fleet operators, and service providers. The market is anticipated to grow at a CAGR of 17.3% during the forecast period, with projections indicating a value of USD 5.13 billion by 2033. This remarkable expansion is being driven by the exponential growth in vehicle data generation, the proliferation of advanced driver assistance systems (ADAS), and the increasing integration of connected and autonomous vehicle technologies.
The primary growth factor for the Data Offload Station (Automotive) market is the surging volume of data generated by modern vehicles, especially with the integration of high-resolution cameras, LIDAR, radar, and telematics devices. Vehicles today are equipped with a suite of sensors and infotainment systems that continuously collect and transmit data related to vehicle performance, driver behavior, and environmental conditions. This data is critical for real-time analytics, predictive maintenance, and enhancing the overall driving experience. However, the sheer volume of data exceeds the capacity of traditional on-board storage and wireless transmission methods, necessitating the deployment of dedicated data offload stations. These stations enable rapid, secure, and efficient transfer of large datasets from vehicles to cloud or local servers, underpinning the digital transformation in the automotive sector.
Another significant driver is the evolving regulatory landscape and the push towards vehicle safety, emissions compliance, and smart mobility. Regulatory bodies across North America, Europe, and Asia Pacific are mandating the adoption of advanced telematics, diagnostics, and ADAS features in both passenger and commercial vehicles. This regulatory pressure compels automakers and fleet operators to invest in robust data management infrastructure, in which data offload stations play a pivotal role. Furthermore, the rise of electric vehicles (EVs) and autonomous vehicles (AVs) is amplifying the need for real-time data offloading to ensure safe and efficient operation. As these vehicle types become mainstream, the demand for high-capacity, low-latency offload solutions will continue to surge, further fueling market growth.
Technological advancements are also propelling the Data Offload Station (Automotive) market forward. The integration of 5G, edge computing, and advanced cybersecurity protocols is enhancing the speed, reliability, and security of data transfer processes. Automotive OEMs and technology providers are collaborating to develop scalable and interoperable offload solutions that cater to diverse vehicle types and use cases. Additionally, the growing trend of fleet digitalization and the emergence of Mobility-as-a-Service (MaaS) are creating new opportunities for data-driven services, such as predictive analytics, remote diagnostics, and personalized infotainment. These innovations are not only optimizing vehicle operations but also unlocking new revenue streams for stakeholders across the automotive value chain.
The role of Automotive Data Communication is becoming increasingly crucial as vehicles become more connected and autonomous. This communication involves the seamless exchange of data between various vehicle systems, external networks, and cloud platforms. As vehicles generate vast amounts of data from sensors, cameras, and other devices, efficient data communication protocols are essential to ensure that this information is transmitted accurately and in real-time. This capability not only supports advanced driver assistance systems (ADAS) and telematics but also enhances vehicle safety, performance, and user experience. The integration of robust data communication solutions is thus a key enabler for the digital transformation of the automotive industry, allowing for more intelligent and responsive vehicle systems.
From a regional perspective, Asia Pacific dominates the Data Offload Station (Automotive) market, accounting for over 38% of the global revenue in 2024. The region's leadership is underpinned by the rapid adoption of connected vehicles, the presence of leading automotive OEMs, and significant investments in smart mobility infrastructure.
The Global Historical Climatology Network daily (GHCNd) is an integrated database of daily climate summaries from land surface stations across the globe. GHCNd is made up of daily climate records from numerous sources that have been integrated and subjected to a common suite of quality assurance reviews.
GHCNd contains records from more than 100,000 stations in 180 countries and territories. NCEI provides numerous daily variables, including maximum and minimum temperature, total daily precipitation, snowfall, and snow depth. About half the stations only report precipitation. Both record length and period of record vary by station and cover intervals ranging from less than a year to more than 175 years.
The process of integrating data from multiple sources into GHCNd takes place in three steps: assessing stations in each source dataset for inclusion, determining whether each qualifying station matches an existing GHCNd station or represents a new site, and mingling the data from the various sources.
The first two of these steps are performed whenever a new source dataset or additional stations become available, while the mingling of data is part of the automated processing that creates GHCNd on a regular basis.
A station within a source dataset is considered for inclusion in GHCNd only if it meets a number of conditions.
The next step is to determine, for each station in the source dataset, whether data for the same location are already contained in GHCNd or the station represents a new site. Whenever possible, stations are matched on the basis of network affiliation and station identification number. If no such match exists, cross-reference lists that identify the correspondence of station identification numbers across networks are consulted.
For example, data for Alabaster Shelby County Airport, Alabama, USA, is stored under Cooperative station ID 010116 in NCEI's datasets 3200 and 3206 as well as in the data stream from the High Plains Regional Climate Center; they are combined into one GHCNd record based on the ID. In data set 3210 and the various sources for ASOS stations, however, the data for this location are stored under WBAN ID 53864 and must be matched with the corresponding Cooperative station ID using NCEI's Master Station History Record.
A third approach is to match stations on the basis of their names and locations. This strategy is more difficult to automate than the other two because multiple stations within the same city or town, with the same name and small differences in coordinates, can reflect either differences in coordinate accuracy or genuinely distinct stations in close proximity. As a result, the third approach is used only when stations cannot be matched on the basis of station identification numbers or cross-reference information. This is the case, for example, for stations outside the U.S. whose data originate from the Global Summary of the Day dataset and from the International Collection.
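The three matching approaches can be sketched as a fallback chain. The data structures and field names below are illustrative, not GHCNd's actual implementation, though the Cooperative/WBAN IDs come from the Alabaster example above:

```python
import math

def km_between(lat1, lon1, lat2, lon2):
    # Small-distance flat-earth approximation; adequate for a proximity check.
    dlat = (lat2 - lat1) * 111.0
    dlon = (lon2 - lon1) * 111.0 * math.cos(math.radians((lat1 + lat2) / 2))
    return math.hypot(dlat, dlon)

def match_station(cand, stations, cross_refs, tol_km=1.0):
    """Return the matching station record, or None if cand is a new site."""
    # 1) Network affiliation + station identification number.
    for st in stations:
        if (st["network"], st["station_id"]) == (cand["network"], cand["station_id"]):
            return st
    # 2) Cross-reference lists mapping IDs between networks.
    xref = cross_refs.get((cand["network"], cand["station_id"]))
    if xref:
        for st in stations:
            if (st["network"], st["station_id"]) == xref:
                return st
    # 3) Last resort: same name and nearly identical coordinates.
    for st in stations:
        if st["name"] == cand["name"] and km_between(
            st["lat"], st["lon"], cand["lat"], cand["lon"]
        ) <= tol_km:
            return st
    return None

stations = [
    {"network": "COOP", "station_id": "010116",
     "name": "ALABASTER SHELBY CO AP", "lat": 33.18, "lon": -86.78},
]
cross_refs = {("WBAN", "53864"): ("COOP", "010116")}

# A WBAN-keyed record resolves to the COOP station via the cross-reference list.
hit = match_station(
    {"network": "WBAN", "station_id": "53864",
     "name": "ALABASTER SHELBY CO AP", "lat": 33.18, "lon": -86.78},
    stations, cross_refs,
)
print(hit["station_id"])
```

Step 3 fires only when the first two fail, mirroring the caution described above about same-name stations in close proximity.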
The implementation of the above classification strategies yields a list of GHCNd stations and an inventory of the source datasets to integrate for each station. This list forms the basis for integrating, or mingling, the data from the various sources to create GHCNd. Mingling takes place according to a hierarchy of data sources and in a manner that attempts to maximize the amount of data included while minimizing the degree to which data from sources with different characteristics are mixed. Precipitation, snowfall, and snow depth are mingled separately, but maximum and minimum temperatures are considered together to ensure that the temperatures for a particular station and day always originate from the same source. Data from the Global Summary of the Day dataset are used only if no observations are available from any other source for that station, month, and element. Among the other sources, each day is considered individually; if an observation for a particular station and day is available from more than one source, GHCNd uses the observation from the most preferred source available.
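The source-preference mingling described above can be sketched as follows; the source names and their ordering are illustrative only (GSOD placed last to reflect its last-resort role):

```python
# Illustrative preference order, best source first; 'gsod' is last resort.
PREFERENCE = ["dataset_3200", "dataset_3206", "hprcc", "gsod"]

def mingle(observations):
    """observations: {(station, day, element): {source: value}}.
    Returns one value per key, taken from the most preferred source present."""
    merged = {}
    for key, by_source in observations.items():
        for source in PREFERENCE:
            if source in by_source:
                merged[key] = by_source[source]
                break
    return merged

# Invented observations: two sources report the same station-day precipitation.
obs = {
    ("USC00010116", "2021-01-01", "PRCP"): {"hprcc": 5.1, "gsod": 4.8},
    ("USC00010116", "2021-01-02", "PRCP"): {"gsod": 0.0},
}
merged = mingle(obs)
print(merged)
```

On the first day the HPRCC value wins over GSOD; on the second, GSOD is used only because no other source reported.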
http://opendatacommons.org/licenses/dbcl/1.0/
This dataset reflects reported incidents of crime (with the exception of murders where data exists for each victim) that occurred in the City of Chicago from 2001 to present, minus the most recent seven days. Data is extracted from the Chicago Police Department's CLEAR (Citizen Law Enforcement Analysis and Reporting) system.
https://data.cityofchicago.org/Public-Safety/Crimes-2001-to-present/ijzp-q8t2/data
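The dataset can also be queried through Socrata's SODA API using the resource ID from the URL above. A sketch of building such a request; the field names are taken from the published dataset and should be verified against its schema:

```python
from urllib.parse import urlencode

# Resource ID ijzp-q8t2 comes from the dataset URL above.
base = "https://data.cityofchicago.org/resource/ijzp-q8t2.json"
params = {
    "$where": "date >= '2024-01-01T00:00:00'",     # SoQL filter on the date field
    "$select": "date,primary_type,arrest,community_area",
    "$limit": 1000,
}
url = base + "?" + urlencode(params)
print(url)
```

A GET request on this URL returns up to 1,000 matching incident records as JSON; pagination uses `$offset`.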
This dataset is part of the Cadastral National Spatial Data Infrastructure (CadNSDI) publication dataset for rectangular and non-rectangular Public Land Survey System (PLSS) data.
This dataset represents the GIS version of the Public Land Survey System, including both rectangular and non-rectangular surveys. The primary source for the data is cadastral survey records housed by the BLM, supplemented with local records and geographic control coordinates from states and counties as well as other federal agencies such as the USGS and USFS. The data has been converted from source documents to digital form and transferred into a GIS format that is compliant with FGDC Cadastral Data Content Standards and Guidelines for publication. This data is optimized for data publication and sharing rather than for specific "production" or operation and maintenance. This data set includes the following: PLSS Fully Intersected (all of the PLSS features at the atomic or smallest polygon level), PLSS Townships, First Divisions and Second Divisions (the hierarchical breakdown of the PLSS rectangular surveys), PLSS Special Surveys (non-rectangular components of the PLSS), Meandered Water, Corners, and Conflicted Areas (known areas of gaps or overlaps between Townships or state boundaries). The Entity-Attribute section of this metadata describes these components in greater detail.
The CadNSDI, or Cadastral Publication Data Standard, is the cadastral data component of the NSDI. It is the publication guideline for cadastral data, intended to provide a common format, structure, and content for cadastral information that can be made available across jurisdictional boundaries, providing consistent and uniform cadastral data to meet business needs, with connections to the source information from the data stewards. The data stewards determine which data are published and should be contacted with any questions on data content or for additional information. The cadastral publication data are data provided by cadastral data producers in a standard form on a regular basis.
Cadastral publication data has two primary components, land parcel data and cadastral reference data. It is important to recognize that the publication data are not the same as the operation and maintenance or production data. The production data is structured to optimize maintenance processes, is integrated with internal agency operations and contains much more detail than the publication data. The publication data is a subset of the more complete production data and is reformatted to meet a national standard so data can be integrated across jurisdictional boundaries and be presented in a consistent and standard form nationally.