99 datasets found
  1. Clustering of samples and variables with mixed-type data

    • plos.figshare.com
    tiff
    Updated Jun 1, 2023
    Cite
    Manuela Hummel; Dominic Edelmann; Annette Kopp-Schneider (2023). Clustering of samples and variables with mixed-type data [Dataset]. http://doi.org/10.1371/journal.pone.0188274
    Explore at:
    tiff (available download formats)
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Manuela Hummel; Dominic Edelmann; Annette Kopp-Schneider
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Analysis of data measured on different scales is a relevant challenge. Biomedical studies often focus on high-throughput datasets of, e.g., quantitative measurements. However, the need for integration of other features possibly measured on different scales, e.g. clinical or cytogenetic factors, becomes increasingly important. The analysis results (e.g. a selection of relevant genes) are then visualized, while adding further information, like clinical factors, on top. However, a more integrative approach is desirable, where all available data are analyzed jointly, and where also in the visualization different data sources are combined in a more natural way. Here we specifically target integrative visualization and present a heatmap-style graphic display. To this end, we develop and explore methods for clustering mixed-type data, with special focus on clustering variables. Clustering of variables does not receive as much attention in the literature as does clustering of samples. We extend the variables clustering methodology by two new approaches, one based on the combination of different association measures and the other on distance correlation. With simulation studies we evaluate and compare different clustering strategies. Applying specific methods for mixed-type data proves to be comparable and in many cases beneficial as compared to standard approaches applied to corresponding quantitative or binarized data. Our two novel approaches for mixed-type variables show similar or better performance than the existing methods ClustOfVar and bias-corrected mutual information. Further, in contrast to ClustOfVar, our methods provide dissimilarity matrices, which is an advantage, especially for the purpose of visualization. Real data examples aim to give an impression of various kinds of potential applications for the integrative heatmap and other graphical displays based on dissimilarity matrices. 
We demonstrate that the presented integrative heatmap provides more information than common data displays about the relationship among variables and samples. The described clustering and visualization methods are implemented in our R package CluMix available from https://cran.r-project.org/web/packages/CluMix.
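    The kind of dissimilarity such mixed-type methods rest on can be sketched outside R as well. Below is a minimal Python illustration of a Gower-style dissimilarity matrix for samples described by numeric and categorical variables; this is not the CluMix implementation (an R package), and the function and variable names are illustrative.

```python
# Illustrative sketch (not the CluMix implementation): a Gower-style
# dissimilarity matrix for samples with mixed-type variables, which
# could feed any distance-based clustering or heatmap ordering.
import numpy as np

def gower_matrix(numeric, categorical):
    """numeric: (n, p) float array; categorical: (n, q) object array."""
    n = numeric.shape[0]
    # Range-normalise each numeric variable so it contributes in [0, 1].
    rng = numeric.max(axis=0) - numeric.min(axis=0)
    rng[rng == 0] = 1.0
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # Numeric part: range-scaled absolute differences.
            num_part = np.abs(numeric[i] - numeric[j]) / rng
            # Categorical part: simple mismatch (0 = same, 1 = different).
            cat_part = (categorical[i] != categorical[j]).astype(float)
            # Gower dissimilarity: mean contribution over all variables.
            d[i, j] = np.concatenate([num_part, cat_part]).mean()
    return d

num = np.array([[1.0, 10.0], [2.0, 30.0], [3.0, 20.0]])
cat = np.array([["a"], ["a"], ["b"]], dtype=object)
D = gower_matrix(num, cat)  # symmetric, zero diagonal
```

    With such a dissimilarity matrix in hand, hierarchical clustering or heatmap row/column ordering can be applied directly, which is the advantage the description above attributes to methods that return dissimilarity matrices.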

  2. Quantitative data.pdf

    • figshare.com
    pdf
    Updated Aug 16, 2024
    Cite
    Zaitunnatakhin Zamli (2024). Quantitative data.pdf [Dataset]. http://doi.org/10.6084/m9.figshare.26711965.v1
    Explore at:
    pdf (available download formats)
    Dataset updated
    Aug 16, 2024
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Zaitunnatakhin Zamli
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset shows the individual SUS and preference scores of the HMD and DB applications.

  3. Doctor Doom In The Marvel Age: Data

    • figshare.arts.ac.uk
    pdf
    Updated Apr 7, 2023
    Cite
    Mark Hibbett (2023). Doctor Doom In The Marvel Age: Data [Dataset]. http://doi.org/10.25441/arts.16676830.v3
    Explore at:
    pdf (available download formats)
    Dataset updated
    Apr 7, 2023
    Dataset provided by
    University of the Arts London
    Authors
    Mark Hibbett
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This is data collected as part of the PhD thesis 'Doctor Doom In The Marvel Age: An Empirical Approach To Transmedia Character Coherence'. 266 texts were identified in which Doctor Doom appeared, taken from comics dated November 1961 to October 1987 - 'The Marvel Age' - and from other media texts issued contemporaneously. From this corpus, a sample of 69 texts was selected using stratified random sampling. Each text in the sample was examined for signifiers to do with Doctor Doom.

    The data was recorded using a unified catalogue of transmedia character components, which brings together aspects of the models devised by Pearson and Uricchio, Klastrup and Tosca, Marie-Laure Ryan, Paolo Bertetti and Matthew Freeman within a framework based on Jan-Noël Thon's ideas of Transmedia Character Networks, which extends Henry Jenkins's formulation of 'transmedia' in line with Scolari, Bertetti and Freeman's Transmedia Archaeology. Where gaps were identified within these definitions, specifically around the area of 'behaviour', additional definitions were brought in using the psycholexical approach, the Big Five Index, and the idea of character motivations from creative writing practice. Where necessary the components were re-named for clarity, and finally they were placed into groups based on Matthew Freeman's classification of transmedia, with 'behaviour' extracted into a group of its own.

    In theory this catalogue can be used as a tool for mapping the coherence of transmedia characters as they move across time and media. Used over the course of a sample of texts, and by recording the signifiers within each component for each text, it should be possible not only to identify a character's core components across time, but also to see whether they vary across different media or storyworlds. This idea is investigated within the thesis itself.

  4. Make Your Own Smart Maps

    • hub.arcgis.com
    Updated May 17, 2019
    Cite
    State of Delaware (2019). Make Your Own Smart Maps [Dataset]. https://hub.arcgis.com/documents/489f36a63ace4785a31d08051fb3523c
    Explore at:
    Dataset updated
    May 17, 2019
    Dataset authored and provided by
    State of Delaware
    Description

    Use the resources below to understand how smart mapping tools work and how to apply them to enhance your mapping projects.

    Goals

    • Access available smart mapping options to explore and better understand your data.
    • Style maps using cartographically correct symbology for qualitative and quantitative data.
    • Use Arcade expressions to quickly customize data display and map elements.

  5. Water quality monitoring data GC-MS and LC-MS semi-quantitative screen

    • environment.data.gov.uk
    • data.europa.eu
    Updated Aug 28, 2024
    + more versions
    Cite
    Environment Agency (2024). Water quality monitoring data GC-MS and LC-MS semi-quantitative screen [Dataset]. https://environment.data.gov.uk/dataset/e85a7a52-7a75-4856-a0b3-8c6e4e303858
    Explore at:
    Dataset updated
    Aug 28, 2024
    Dataset authored and provided by
    Environment Agency
    License

    Open Government Licence 3.0: http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/
    License information was derived automatically

    Description

    The Environment Agency uses semi-quantitative gas chromatography-mass spectrometry (GC-MS) and liquid chromatography-mass spectrometry (LC-MS) targeted screening methods to analyse water samples for a wide range of substances at once. This dataset represents positive GC-MS and LC-MS results for samples collected and analysed by the Environment Agency since 2007.

    More specifically, the data shows what substance has been detected in a sample, the specific method used, the analytical method limit of detection, the sampling location and time, sample type, purpose and material. Where available, a semi-quantitative concentration measurement (µg/l) is reported. As the target database is continually being updated, the date of addition is also listed against each substance.

    Users should ensure they have read the 'Read Me' file included in the downloaded package in full before viewing or making use of the data, as it contains important information.

  6. Data from: Quantitative analysis of tumour spheroid structure

    • researchdatafinder.qut.edu.au
    Updated Feb 2, 2022
    + more versions
    Cite
    Alexander Browning (2022). Quantitative analysis of tumour spheroid structure [Dataset]. https://researchdatafinder.qut.edu.au/display/n26538
    Explore at:
    Dataset updated
    Feb 2, 2022
    Dataset provided by
    Queensland University of Technology (QUT)
    Authors
    Alexander Browning
    Description

    Code and associated data for the following preprint:

    AP Browning, JA Sharp, RJ Murphy, G Gunasingh, B Lawson, K Burrage, NK Haass, MJ Simpson. 2021 Quantitative analysis of tumour spheroid structure. eLife https://doi.org/10.7554/eLife.73020

    Data comprises measurements relating to the size and inner structure of spheroids grown from WM793b and WM983b melanoma cells over up to 24 days.

    Code, data, and interactive figures are available as a Julia module on GitHub:

    Browning AP (2021) Github ID v.0.6.2. Quantitative analysis of tumour spheroid structure. https://github.com/ap-browning/Spheroids

    (copy archived here)

    Code used to process the experimental images is available on Zenodo:

    Browning AP, Murphy RJ (2021) Zenodo Image processing algorithm to identify structure of tumour spheroids with cell cycle labelling. https://doi.org/10.5281/zenodo.5121093

  7. 6-Hour Quantitative Precipitation Amount (inches)

    • data.amerigeoss.org
    csv, esri rest +2
    Updated Jul 5, 2017
    Cite
    AmeriGEO ArcGIS (2017). 6-Hour Quantitative Precipitation Amount (inches) [Dataset]. https://data.amerigeoss.org/ca/dataset/fa7ecc3b-a536-471e-ae96-00c50666f024
    Explore at:
    geojson, csv, esri rest, html (available download formats)
    Dataset updated
    Jul 5, 2017
    Dataset provided by
    AmeriGEO ArcGIS
    Description

    Map Information

    This nowCOAST time-enabled map service provides maps depicting NWS gridded forecasts of the following selected sensible surface weather variables or elements: air temperature (including daily maximum and minimum), apparent air temperature, dew point temperature, relative humidity, wind velocity, wind speed, wind gust, total sky cover, and significant wave height for the next 6-7 days. Additional forecast maps are available for 6-hr quantitative precipitation (QPF), 6-hr quantitative snowfall, and 12-hr probability of precipitation. These NWS forecasts are from the National Digital Forecast Database (NDFD) at a 2.5 km horizontal spatial resolution. Surface is defined as 10 m (33 feet) above ground level (AGL) for wind variables and 2 m (5.5 ft) AGL for air temperature, dew point temperature, and relative humidity variables. The forecasts extend out to 7 days from 0000 UTC on Day 1 (current day). The forecasts are updated in the nowCOAST map service four times per day. For more detailed information about the update schedule, please see: http://new.nowcoast.noaa.gov/help/#section=updateschedule

    The forecast projection availability times listed below are generally accurate; however, the forecast interval and forecast horizon vary by region and variable. For the most up-to-date information, please see http://www.nws.noaa.gov/ndfd/resources/NDFD_element_status.pdf and http://graphical.weather.gov/docs/datamanagement.php.

    The forecasts of the air, apparent, and dew point temperatures are displayed using different colors at 2 degree Fahrenheit increments from -30 to 130 degrees F in order to use the same color legend throughout the year for the United States. This is the same color scale used for displaying the NDFD maximum and minimum air temperature forecasts. Air and dew point temperature forecasts are available every hour out to +36 hours from forecast issuance time, at 3-hour intervals from +36 to +72 hours, and at 6-hour intervals from +72 to +168 hours (7 days). Maximum and minimum air temperature forecasts are each available every 24 hours out to +168 hours (7 days) from 0000 UTC on Day 1 (current day).

    The relative humidity (RH) forecasts are depicted using different colors for every 5-percent interval. The increment and color scale used to display the RH forecasts were developed to highlight NWS local fire weather watch/red flag warning RH criteria at the low end (e.g. 15, 25, & 35% thresholds) and important high end RH thresholds for other users (e.g. agricultural producers) such as 95%. The RH forecasts are available every hour out to +36 hours from 0000 UTC on Day 1 (current day), at 3-hour intervals from +36 to +72 hours, and at 6-hour intervals from +72 to +168 hours (7 days).

    The 6-hr total precipitation amount forecasts or QPFs are symbolized using different colors at 0.01, 0.10, 0.25 inch intervals, at 1/4 inch intervals up to 4.0 (e.g. 0.50, 0.75, 1.00, 1.25, etc.), at 1-inch intervals from 4 to 10 inches and then at 2-inch intervals up to 14 inches. The increments from 0.01 to 1.00 or 2.00 inches are similar to what are used on NCEP/Weather Prediction Center's QPF products and the NWS River Forecast Center (RFC) daily precipitation analysis. Precipitation forecasts are available for each 6-hour period out to +72 hours (3 days) from 0000 UTC on Day 1 (current day).
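    For readers who want to reproduce the colour-class boundaries just described, the short Python sketch below builds the threshold list (0.01, 0.10, 0.25 inch; quarter-inch steps to 4 inches; 1-inch steps to 10 inches; 2-inch steps to 14 inches) and bins a precipitation amount against it. This is an illustrative reconstruction from the text, not nowCOAST code, and the function names are hypothetical.

```python
# Hypothetical reconstruction of the 6-hr QPF colour-break thresholds
# described above. Names are illustrative, not taken from nowCOAST.
def qpf_breaks():
    breaks = [0.01, 0.10, 0.25]
    breaks += [0.25 * i for i in range(2, 17)]  # 0.50 .. 4.00 in quarter-inch steps
    breaks += list(range(5, 11))                # 5 .. 10 in 1-inch steps
    breaks += [12, 14]                          # 12, 14 in 2-inch steps
    return breaks

def qpf_bin(amount_inches):
    """Return the index of the colour class for a 6-hr QPF value
    (-1 means below the lowest break, i.e. no colour)."""
    idx = -1
    for i, threshold in enumerate(qpf_breaks()):
        if amount_inches >= threshold:
            idx = i
    return idx
```

    For example, `qpf_bin(1.0)` falls in the class starting at the 1.00-inch break, while anything under 0.01 inches maps to no colour at all.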

    The 6-hr total snowfall amount forecasts are depicted using different colors at 1-inch intervals for snowfall greater than 0.01 inches. Snowfall forecasts are available for each 6-hour period out to +48 hours (2 days) from 0000 UTC on Day 1 (current day).

    The 12-hr probability of precipitation (PoP) forecasts are displayed for probabilities over 10 percent using different colors at 10, 20, 30, 60, and 85+ percent. The probability of precipitation forecasts are available for each 12-hour period out to +72 hours (3 days) from 0000 UTC on Day 1 (current day).

    The wind speed and wind gust forecasts are depicted using different colors at 5 knots increment up to 115 knots. The legend includes tick marks for both knots and miles per hour. The same color scale is used for displaying the RTMA surface wind speed forecasts. The wind velocity is depicted by curved wind barbs along streamlines. The direction of the wind is indicated with an arrowhead on the wind barb. The flags on the wind barb are the standard meteorological convention in units of knots. The wind speed and wind velocity forecasts are available hourly out to +36 hours from 00:00 UTC on Day 1 (current day), at 3-hour intervals out to +72 hours, and at 6-hour intervals from +72 to +168 hours (7 days). The wind gust forecasts are available hourly out to +36 hours from 0000 UTC on Day 1 (current day) and at 3-hour intervals out to +72 hours (3 days).

    The total sky cover forecasts are displayed using progressively darker shades of gray for 10, 30, 60, and 80+ percentage values. Sky cover values under 10 percent are shown as transparent. The sky cover forecasts are available for each hour out to +36 hours from 0000 UTC on Day 1 (current day), every 3 hours from +36 to +72 hours, and every 6 hours from +72 to +168 hours (7 days).

    The significant wave height forecasts are symbolized with different colors at 1-foot intervals up to 20 feet and at 5-foot intervals from 20 feet to 35+ feet. The significant wave height forecasts are available for each hour out to +36 hours from 0000 UTC on Day 1 (current day), every 3 hours from +36 to +72 hours, and every 6 hours from +72 to +144 hours (6 days).

    Background Information

    The NDFD is a seamless composite or mosaic of gridded forecasts from individual NWS Weather Forecast Offices (WFOs) from around the U.S. as well as the NCEP/Ocean Prediction Center and National Hurricane Center/TAFB. NDFD has a spatial resolution of 2.5 km. The 2.5km resolution NDFD forecasts are presently experimental, but are scheduled to become operational in May/June 2014. The time resolution of forecast projections varies by variable (element) based on user needs, forecast skill, and forecaster workload. Each WFO prepares gridded NDFD forecasts for their specific geographic area of responsibility. When these locally generated forecasts are merged into a national mosaic, occasionally areas of discontinuity will be evident. Staff at NWS forecast offices attempt to resolve discontinuities along the boundaries of the forecasts by coordinating with forecasters at surrounding WFOs and using workstation forecast tools that identify and resolve some of these differences. The NWS is making progress in this area, and recognizes that this is a significant issue in which improvements are still needed. The NDFD was developed by NWS Meteorological Development Laboratory.

    As mentioned above, a curved wind barb with an arrow head is used to display the wind velocity forecasts instead of the traditional wind barb. The curved wind barb was developed and evaluated at the Data Visualization Laboratory of the NOAA-UNH Joint Hydrographic Center/Center for Coastal and Ocean Mapping (Ware et al., 2014). The curved wind barb combines the best features of the wind barb, that it displays speed in a readable form, with the best features of the streamlines which shows wind patterns. The arrow head helps to convey the flow direction.

    Time Information

    This map is time-enabled, meaning that each individual layer contains time-varying data and can be utilized by clients capable of making map requests that include a time component.

    This particular service can be queried with or without the use of a time component. If the time parameter is specified in a request, the data or imagery most relevant to the provided time value, if any, will be returned. If the time parameter is not specified in a request, the latest data or imagery valid for the present system time will be returned to the client. If the time parameter is not specified and no data or imagery is available for the present time, no data will be returned.

    In addition to ArcGIS Server REST access, time-enabled OGC WMS 1.3.0 access is also provided by this service.

    Due to software limitations, the time extent of the service and map layers displayed below does not provide the most up-to-date start and end times of available data. Instead, users have three options for determining the latest time information about the service:

    1. Issue a returnUpdates=true request for an individual layer or for the service itself, which will return the current start and end times of available data, in epoch time format (milliseconds since 00:00 January 1, 1970). To see an example, click on the "Return Updates" link at the bottom of this page under "Supported Operations". Refer to the ArcGIS REST API Map Service Documentation for more information.
    2. Issue an Identify (ArcGIS REST) or GetFeatureInfo (WMS) request against the proper layer corresponding with the target dataset. For raster data, this would be the "Image Footprints with Time Attributes" layer in the same group as the target "Image" layer being displayed. For vector (point, line, or polygon) data, the target layer can be queried directly. In either case, the attributes
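    The epoch-time convention used by these responses (milliseconds since 00:00 January 1, 1970 UTC) is easy to misread; the small Python sketch below converts such a value into a readable UTC timestamp. The variable name and example value are illustrative, not taken from an actual nowCOAST response.

```python
# Convert epoch milliseconds (as returned by a returnUpdates=true
# request) into an ISO-8601 UTC timestamp. Names are illustrative.
from datetime import datetime, timezone

def epoch_ms_to_utc(ms):
    """Epoch milliseconds -> ISO-8601 UTC timestamp string."""
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc).isoformat()

start_ms = 1625443200000          # hypothetical 'start' field value
print(epoch_ms_to_utc(start_ms))  # -> 2021-07-05T00:00:00+00:00
```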

  8. Data Visualization Cheat sheets and Resources

    • kaggle.com
    zip
    Updated Feb 20, 2021
    Cite
    Kash (2021). Data Visualization Cheat sheets and Resources [Dataset]. https://www.kaggle.com/kaushiksuresh147/data-visualization-cheat-cheats-and-resources
    Explore at:
    zip, 133638507 bytes (available download formats)
    Dataset updated
    Feb 20, 2021
    Authors
    Kash
    License

    Public Domain Dedication (CC0 1.0): https://creativecommons.org/publicdomain/zero/1.0/

    Description

    The Data Visualization Corpus


    Data Visualization

    Data visualization is the graphical representation of information and data. By using visual elements like charts, graphs, and maps, data visualization tools provide an accessible way to see and understand trends, outliers, and patterns in data.

    In the world of Big Data, data visualization tools and technologies are essential to analyze massive amounts of information and make data-driven decisions

    The Data Visualization Corpus

    The Data Visualization corpus consists of:

    • 32 cheat sheets: These cover, from A to Z, the techniques and tricks that can be used for visualization, Python and R visualization cheat sheets, types of charts and their significance, storytelling with data, etc.

    • 32 charts: The corpus also contains information on a significant number of data visualization charts, along with their Python code, d3.js code, and presentations relating to the respective charts, each explained in a clear manner.

    • Some recommended books on data visualization that every data scientist should read:

      1. Beautiful Visualization by Julie Steele and Noah Iliinsky
      2. Information Dashboard Design by Stephen Few
      3. Knowledge Is Beautiful by David McCandless (short abstract)
      4. The Functional Art: An Introduction to Information Graphics and Visualization by Alberto Cairo
      5. The Visual Display of Quantitative Information by Edward R. Tufte
      6. Storytelling with Data: A Data Visualization Guide for Business Professionals by Cole Nussbaumer Knaflic
      7. Research paper - Cheat Sheets for Data Visualization Techniques by Zezhong Wang, Lovisa Sundin, Dave Murray-Rust, Benjamin Bach

    Suggestions:

    If you find any books, cheat sheets, or charts missing, or if you would like to suggest new documents, please let me know in the discussion section!

    Resources:

    Request to kaggle users:

    • A kind request to Kaggle users: create notebooks on different visualization charts, choosing a dataset of your own interest, as many beginners and experts alike could find them useful.

    • Consider creating interactive EDA notebooks that use animation together with a combination of data visualization charts, to give an idea of how to tackle data and extract insights from it.

    Suggestion and queries:

    Feel free to use the discussion platform of this dataset to ask questions or raise queries related to the data visualization corpus and data visualization techniques.

    Kindly upvote the dataset if you find it useful or if you wish to appreciate the effort taken to gather this corpus! Thank you and have a great day!

  9. Data from: Clusters Beat Trend!? Testing Feature Hierarchy in Statistical Graphics

    • tandf.figshare.com
    txt
    Updated Jun 1, 2023
    Cite
    Susan VanderPlas; Heike Hofmann (2023). Clusters Beat Trend!? Testing Feature Hierarchy in Statistical Graphics [Dataset]. http://doi.org/10.6084/m9.figshare.3485534
    Explore at:
    txt (available download formats)
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    Taylor & Francis
    Authors
    Susan VanderPlas; Heike Hofmann
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Graphics are very effective for communicating numerical information quickly and efficiently, but many of the design choices we make are based on subjective measures, such as personal taste or conventions of the discipline rather than objective criteria. We briefly introduce perceptual principles such as preattentive features and gestalt heuristics, and then discuss the design and results of a factorial experiment examining the effect of plot aesthetics such as color and trend lines on participants’ assessment of ambiguous data displays. The quantitative and qualitative experimental results strongly suggest that plot aesthetics have a significant impact on the perception of important features in data displays. Supplementary materials for this article are available online.

  10. Surface Meteorological and Hydrologic Analyses - Quantitative Precipitation Estimates

    • gis-calema.opendata.arcgis.com
    Updated Sep 10, 2022
    + more versions
    Cite
    CA Governor's Office of Emergency Services (2022). Surface Meteorological and Hydrologic Analyses - Quantitative Precipitation Estimates [Dataset]. https://gis-calema.opendata.arcgis.com/maps/d0eb629f909e490ab699371b2767b2ea
    Explore at:
    Dataset updated
    Sep 10, 2022
    Dataset provided by
    California Governor's Office of Emergency Services
    Authors
    CA Governor's Office of Emergency Services
    Description

    Map Information

    This nowCOAST time-enabled map service provides maps depicting the NWS Multi-Radar Multi-Sensor (MRMS) quantitative precipitation estimate mosaics for 1-, 3-, 6-, 12-, 24-, 48-, and 72-hr time periods at a 1 km (0.6 miles) horizontal resolution for CONUS and the southern part of Canada. The precipitation estimates are based only on radar data. The total precipitation amount is indicated by different colors at 0.01, 0.10, and 0.25 inches, then at 1/4-inch intervals up to 4.0 inches (e.g. 0.50, 0.75, 1.00, 1.25, etc.), at 1-inch intervals from 4 to 10 inches, and then at 2-inch intervals up to 14 inches. The increments from 0.01 to 1.00 or 2.00 inches are similar to those used on NCEP's Weather Prediction Center QPF products and the NWS River Forecast Center (RFC) daily precipitation analysis. The 1-hr mosaic is updated every 4 minutes with a latency on nowCOAST of about 6-7 minutes from valid time. The 3-, 6-, 12-, and 24-hr QPEs are updated on nowCOAST every hour for the period ending at the top of the hour. The 48- and 72-hr QPEs are generated daily for the period ending at 12 UTC (i.e. 7 AM EST) and are available on nowCOAST shortly afterwards. For more detailed information about the update schedule, please see: http://new.nowcoast.noaa.gov/help/#section=updateschedule

    Background Information

    The NWS Multi-Radar Multi-Sensor System (MRMS)/Q3 QPEs are radar-only quantitative precipitation analyses. The 1-hr precipitation accumulation is obtained by aggregating 12 instantaneous rate fields. Missing rate fields are filled with the neighboring rate fields if the data gap is not significantly large (e.g. <= 15 minutes). The instantaneous rate is computed from the hybrid scan reflectivity and the precipitation flag fields. (Both are 2-D derivative products from the National 3-D Reflectivity Mosaic grid, which has a 1-km horizontal resolution, 31 vertical levels, and a 5-minute update cycle.) The instantaneous rate currently uses four Z-R relationships (i.e. tropical, convective, stratiform, or snow). The particular Z-R relationship used in any grid cell depends on the precipitation type, which is indicated by the precipitation flag. The other accumulation products are derived by aggregating the hourly accumulations. The 1-hr QPE is generated every 4 minutes, while the 3-, 6-, 12-, and 24-hr accumulations are generated every hour at the top of the hour. The 48- and 72-hr QPEs are updated daily at approximately 12 UTC. MRMS was developed by the NOAA/OAR/National Severe Storms Laboratory and migrated into NWS operations at the NOAA Integrated Dissemination Program.

    Time Information

    This map is time-enabled, meaning that each individual layer contains time-varying data and can be utilized by clients capable of making map requests that include a time component.

    This particular service can be queried with or without the use of a time component. If the time parameter is specified in a request, the data or imagery most relevant to the provided time value, if any, will be returned. If the time parameter is not specified in a request, the latest data or imagery valid for the present system time will be returned to the client. If the time parameter is not specified and no data or imagery is available for the present time, no data will be returned.

    In addition to ArcGIS Server REST access, time-enabled OGC WMS 1.3.0 access is also provided by this service.

    Due to software limitations, the time extent of the service and map layers displayed below does not provide the most up-to-date start and end times of available data. Instead, users have three options for determining the latest time information about the service:

    1. Issue a returnUpdates=true request for an individual layer or for the service itself, which will return the current start and end times of available data, in epoch time format (milliseconds since 00:00 January 1, 1970). To see an example, click on the "Return Updates" link at the bottom of this page under "Supported Operations". Refer to the ArcGIS REST API Map Service Documentation for more information.
    2. Issue an Identify (ArcGIS REST) or GetFeatureInfo (WMS) request against the proper layer corresponding with the target dataset. For raster data, this would be the "Image Footprints with Time Attributes" layer in the same group as the target "Image" layer being displayed. For vector (point, line, or polygon) data, the target layer can be queried directly. In either case, the attributes returned for the matching raster(s) or vector feature(s) will include the following:
       • validtime: Valid timestamp.
       • starttime: Display start time.
       • endtime: Display end time.
       • reftime: Reference time (sometimes referred to as issuance time, cycle time, or initialization time).
       • projmins: Number of minutes from reference time to valid time.
       • desigreftime: Designated reference time; used as a common reference time for all items when individual reference times do not match.
       • desigprojmins: Number of minutes from designated reference time to valid time.
    3. Query the nowCOAST LayerInfo web service, which has been created to provide additional information about each data layer in a service, including a list of all available "time stops" (i.e. "valid times"), individual timestamps, or the valid time of a layer's latest available data (i.e. "Product Time"). For more information about the LayerInfo web service, including examples of various types of requests, refer to the nowCOAST help documentation.

    References

    For more information about the MRMS/Q3 system.

  11. 6-Hour Quantitative Snowfall Amount (inches)

    • data.amerigeoss.org
    csv, esri rest +2
    Updated Jul 5, 2017
    Cite
    AmeriGEO ArcGIS (2017). 6-Hour Quantitative Snowfall Amount (inches) [Dataset]. https://data.amerigeoss.org/hu/dataset/showcases/6-hour-quantitative-snowfall-amount-inches
    Explore at:
    html, csv, esri rest, geojson (available download formats)
    Dataset updated
    Jul 5, 2017
    Dataset provided by
    AmeriGEO ArcGIS
    Description

    Map Information

    This nowCOAST time-enabled map service provides maps depicting NWS gridded forecasts of the following selected sensible surface weather variables or elements: air temperature (including daily maximum and minimum), apparent air temperature, dew point temperature, relative humidity, wind velocity, wind speed, wind gust, total sky cover, and significant wave height for the next 6-7 days. Additional forecast maps are available for 6-hr quantitative precipitation (QPF), 6-hr quantitative snowfall, and 12-hr probability of precipitation. These NWS forecasts are from the National Digital Forecast Database (NDFD) at a 2.5 km horizontal spatial resolution. Surface is defined as 10 m (33 feet) above ground level (AGL) for wind variables and 2 m (5.5 ft) AGL for air temperature, dew point temperature, and relative humidity variables. The forecasts extend out to 7 days from 0000 UTC on Day 1 (current day). The forecasts are updated in the nowCOAST map service four times per day. For more detailed information about the update schedule, please see: http://new.nowcoast.noaa.gov/help/#section=updateschedule

    The forecast projection availability times listed below are generally accurate; however, the forecast interval and forecast horizon vary by region and variable. For the most up-to-date information, please see http://www.nws.noaa.gov/ndfd/resources/NDFD_element_status.pdf and http://graphical.weather.gov/docs/datamanagement.php.

    The forecasts of the air, apparent, and dew point temperatures are displayed using different colors at 2 degree Fahrenheit increments from -30 to 130 degrees F in order to use the same color legend throughout the year for the United States. This is the same color scale used for displaying the NDFD maximum and minimum air temperature forecasts. Air and dew point temperature forecasts are available every hour out to +36 hours from forecast issuance time, at 3-hour intervals from +36 to +72 hours, and at 6-hour intervals from +72 to +168 hours (7 days). Maximum and minimum air temperature forecasts are each available every 24 hours out to +168 hours (7 days) from 0000 UTC on Day 1 (current day).

    The relative humidity (RH) forecasts are depicted using different colors for every 5-percent interval. The increment and color scale used to display the RH forecasts were developed to highlight NWS local fire weather watch/red flag warning RH criteria at the low end (e.g. 15, 25, & 35% thresholds) and important high end RH thresholds for other users (e.g. agricultural producers) such as 95%. The RH forecasts are available every hour out to +36 hours from 0000 UTC on Day 1 (current day), at 3-hour intervals from +36 to +72 hours, and at 6-hour intervals from +72 to +168 hours (7 days).

    The 6-hr total precipitation amount forecasts or QPFs are symbolized using different colors at 0.01, 0.10, and 0.25 inches, at 1/4-inch intervals up to 4.0 inches (e.g. 0.50, 0.75, 1.00, 1.25, etc.), at 1-inch intervals from 4 to 10 inches, and then at 2-inch intervals up to 14 inches. The increments from 0.01 to 1.00 or 2.00 inches are similar to those used on NCEP/Weather Prediction Center's QPF products and the NWS River Forecast Center (RFC) daily precipitation analysis. Precipitation forecasts are available for each 6-hour period out to +72 hours (3 days) from 0000 UTC on Day 1 (current day).

    The 6-hr total snowfall amount forecasts are depicted using different colors at 1-inch intervals for snowfall greater than 0.01 inches. Snowfall forecasts are available for each 6-hour period out to +48 hours (2 days) from 0000 UTC on Day 1 (current day).

    The 12-hr probability of precipitation (PoP) forecasts are displayed for probabilities over 10 percent using different colors at 10, 20, 30, 60, and 85+ percent. The probability of precipitation forecasts are available for each 12-hour period out to +72 hours (3 days) from 0000 UTC on Day 1 (current day).

    The wind speed and wind gust forecasts are depicted using different colors at 5-knot increments up to 115 knots. The legend includes tick marks for both knots and miles per hour. The same color scale is used for displaying the RTMA surface wind speed forecasts. The wind velocity is depicted by curved wind barbs along streamlines. The direction of the wind is indicated with an arrowhead on the wind barb. The flags on the wind barb follow the standard meteorological convention in units of knots. The wind speed and wind velocity forecasts are available hourly out to +36 hours from 0000 UTC on Day 1 (current day), at 3-hour intervals out to +72 hours, and at 6-hour intervals from +72 to +168 hours (7 days). The wind gust forecasts are available hourly out to +36 hours from 0000 UTC on Day 1 (current day) and at 3-hour intervals out to +72 hours (3 days).

    The total sky cover forecasts are displayed using progressively darker shades of gray for 10, 30, 60, and 80+ percentage values. Sky cover values under 10 percent are shown as transparent. The sky cover forecasts are available for each hour out to +36 hours from 0000 UTC on Day 1 (current day), every 3 hours from +36 to +72 hours, and every 6 hours from +72 to +168 hours (7 days).

    The significant wave height forecasts are symbolized with different colors at 1-foot intervals up to 20 feet and at 5-foot intervals from 20 feet to 35+ feet. The significant wave height forecasts are available for each hour out to +36 hours from 0000 UTC on Day 1 (current day), every 3 hours from +36 to +72 hours, and every 6 hours from +72 to +144 hours (6 days).

    Background Information

    The NDFD is a seamless composite or mosaic of gridded forecasts from individual NWS Weather Forecast Offices (WFOs) from around the U.S. as well as the NCEP/Ocean Prediction Center and National Hurricane Center/TAFB. NDFD has a spatial resolution of 2.5 km. The 2.5km resolution NDFD forecasts are presently experimental, but are scheduled to become operational in May/June 2014. The time resolution of forecast projections varies by variable (element) based on user needs, forecast skill, and forecaster workload. Each WFO prepares gridded NDFD forecasts for their specific geographic area of responsibility. When these locally generated forecasts are merged into a national mosaic, occasionally areas of discontinuity will be evident. Staff at NWS forecast offices attempt to resolve discontinuities along the boundaries of the forecasts by coordinating with forecasters at surrounding WFOs and using workstation forecast tools that identify and resolve some of these differences. The NWS is making progress in this area, and recognizes that this is a significant issue in which improvements are still needed. The NDFD was developed by NWS Meteorological Development Laboratory.

    As mentioned above, a curved wind barb with an arrowhead is used to display the wind velocity forecasts instead of the traditional wind barb. The curved wind barb was developed and evaluated at the Data Visualization Laboratory of the NOAA-UNH Joint Hydrographic Center/Center for Coastal and Ocean Mapping (Ware et al., 2014). The curved wind barb combines the best feature of the wind barb, displaying speed in a readable form, with the best feature of streamlines, showing the wind flow pattern. The arrowhead helps to convey the flow direction.

    Time Information

    This map is time-enabled, meaning that each individual layer contains time-varying data and can be utilized by clients capable of making map requests that include a time component.

    This particular service can be queried with or without the use of a time component. If the time parameter is specified in a request, the data or imagery most relevant to the provided time value, if any, will be returned. If the time parameter is not specified in a request, the latest data or imagery valid for the present system time will be returned to the client. If the time parameter is not specified and no data or imagery is available for the present time, no data will be returned.

    In addition to ArcGIS Server REST access, time-enabled OGC WMS 1.3.0 access is also provided by this service.

    Due to software limitations, the time extent of the service and map layers displayed below does not provide the most up-to-date start and end times of available data. Instead, users have three options for determining the latest time information about the service:

    1. Issue a returnUpdates=true request for an individual layer or for the service itself, which will return the current start and end times of available data, in epoch time format (milliseconds since 00:00 January 1, 1970). To see an example, click on the "Return Updates" link at the bottom of this page under "Supported Operations". Refer to the ArcGIS REST API Map Service Documentation for more information.
    2. Issue an Identify (ArcGIS REST) or GetFeatureInfo (WMS) request against the proper layer corresponding with the target dataset. For raster data, this would be the "Image Footprints with Time Attributes" layer in the same group as the target "Image" layer being displayed. For vector (point, line, or polygon) data, the target layer can be queried directly. In either case, the attributes returned for the matching raster(s) or vector feature(s) will include time fields such as validtime, starttime, endtime, reftime, projmins, desigreftime, and desigprojmins.
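The returnUpdates=true option described above returns times in epoch milliseconds; converting them to UTC timestamps is straightforward. A minimal sketch, where the sample response fragment is hypothetical but shaped like a service's time-extent field:

```python
from datetime import datetime, timezone

def epoch_ms_to_utc(ms):
    """Convert epoch milliseconds (as returned by returnUpdates=true) to a UTC datetime."""
    return datetime.fromtimestamp(ms / 1000.0, tz=timezone.utc)

# Hypothetical returnUpdates=true response fragment: [start, end] in epoch ms.
sample_update = {"timeExtent": [1499212800000, 1499817600000]}

start, end = (epoch_ms_to_utc(ms) for ms in sample_update["timeExtent"])
print(start.isoformat())  # 2017-07-05T00:00:00+00:00
```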

  12. Siletz Bay NWR: Tidal Marsh Restoration and Reference Sites: Baseline Plant...

    • catalog.data.gov
    Updated Feb 21, 2025
    Cite
    U.S. Fish and Wildlife Service (2025). Siletz Bay NWR: Tidal Marsh Restoration and Reference Sites: Baseline Plant Community (Vegetation) Monitoring and Mapping - Geospatial Data, 2001 [Dataset]. https://catalog.data.gov/dataset/siletz-bay-nwr-tidal-marsh-restoration-and-reference-sites-baseline-plant-community-vegeta
    Explore at:
    Dataset updated
    Feb 21, 2025
    Dataset provided by
    U.S. Fish and Wildlife Servicehttp://www.fws.gov/
    Description

    This reference contains geospatial datasets (shapefiles) for the Siletz Bay NWR: Tidal Marsh Restoration and Reference Sites: Baseline Plant Community Monitoring and Mapping inventory performed by Green Point Consulting in 2001. During summer 2001, Green Point Consulting (GPC) conducted baseline monitoring and computerized (GIS) mapping of emergent wetland plant communities at USFWS tidal marsh restoration sites and matched reference sites in the Siletz and Nestucca estuaries. The monitoring work used a robust, quantitative protocol which allows comparison of results to tidal marsh monitoring activities elsewhere on the west coast. Plant community composition was measured, and results were used to map plant communities throughout the restoration and reference sites. The goals of the project were to assist USFWS in restoration planning, implementation, evaluation, and adaptive management; and to advance scientific understanding of west coast estuaries. Plant community classification and mapping also provides the basis for stratification of further sampling in future monitoring projects. This project's vegetation analysis and mapping modifies and adds detail to the estuary-wide mapping of Siletz tidal marsh completed by GPC for the Confederated Tribes of Siletz Indians in 2001. Products include GIS layers of plant communities and sampling locations; paper maps of plant communities; graphic displays of plant community composition; photodocumentation of transects; spreadsheets of quantitative data collected and analysis of that data; and this narrative report. This project was not designed to provide data for regulatory requirements (such as wetland fill-removal permits), but its results may be useful as supplementary data in a regulatory context.

  13. Surface Meteorological and Hydrologic Analyses - Quantitative Precipitation...

    • data.amerigeoss.org
    • disasterpartners.org
    arcgis online map
    Updated Aug 25, 2022
    + more versions
    Cite
    United States (2022). Surface Meteorological and Hydrologic Analyses - Quantitative Precipitation Estimates [Dataset]. https://data.amerigeoss.org/dataset/surface-meteorological-and-hydrologic-analyses-quantitative
    Explore at:
    arcgis online mapAvailable download formats
    Dataset updated
    Aug 25, 2022
    Dataset provided by
    United States
    Description

    Map Information

    This nowCOAST time-enabled map service provides maps depicting the NWS Multi-Radar Multi-Sensor (MRMS) quantitative precipitation estimate mosaics for 1-, 3-, 6-, 12-, 24-, 48-, and 72-hr time periods at a 1 km (0.6 miles) horizontal resolution for CONUS and the southern part of Canada. The precipitation estimates are based only on radar data. The total precipitation amount is indicated by different colors at 0.01, 0.10, and 0.25 inches, then at 1/4-inch intervals up to 4.0 inches (e.g. 0.50, 0.75, 1.00, 1.25, etc.), at 1-inch intervals from 4 to 10 inches, and then at 2-inch intervals up to 14 inches. The increments from 0.01 to 1.00 or 2.00 inches are similar to those used on NCEP's Weather Prediction Center QPF products and the NWS River Forecast Center (RFC) daily precipitation analysis. The 1-hr mosaic is updated every 4 minutes with a latency on nowCOAST of about 6-7 minutes from valid time. The 3-, 6-, 12-, and 24-hr QPEs are updated on nowCOAST every hour for the period ending at the top of the hour. The 48- and 72-hr QPEs are generated daily for the period ending at 12 UTC (i.e. 7 AM EST) and are available on nowCOAST shortly afterwards. For more detailed information about the update schedule, please see: http://new.nowcoast.noaa.gov/help/#section=updateschedule

    Background Information

    The NWS Multi-Radar Multi-Sensor System (MRMS)/Q3 QPEs are radar-only based quantitative precipitation analyses. The 1-hr precipitation accumulation is obtained by aggregating 12 instantaneous rate fields. Missing rate fields are filled with the neighboring rate fields if the data gap is not significantly large (e.g., no more than 15 minutes). The instantaneous rate is computed from the hybrid scan reflectivity and the precipitation flag fields. (Both are 2-D derivative products from the National 3-D Reflectivity Mosaic grid, which has a 1-km horizontal resolution, 31 vertical levels, and a 5-minute update cycle.) The instantaneous rate currently uses four Z-R relationships (i.e. tropical, convective, stratiform, or snow). The particular Z-R relationship used in any grid cell depends on the precipitation type, which is indicated by the precipitation flag. The other accumulation products are derived by aggregating the hourly accumulations. The 1-hr QPEs are generated every 4 minutes, while the 3-, 6-, 12-, and 24-hr accumulations are generated every hour at the top of the hour. The 48- and 72-hr QPEs are updated daily at approximately 12 UTC. MRMS was developed by NOAA/OAR/National Severe Storms Laboratory and migrated into NWS operations at the NOAA Integrated Dissemination Program.

    Time Information

    This map is time-enabled, meaning that each individual layer contains time-varying data and can be utilized by clients capable of making map requests that include a time component.

    This particular service can be queried with or without the use of a time component. If the time parameter is specified in a request, the data or imagery most relevant to the provided time value, if any, will be returned. If the time parameter is not specified in a request, the latest data or imagery valid for the present system time will be returned to the client. If the time parameter is not specified and no data or imagery is available for the present time, no data will be returned.

    In addition to ArcGIS Server REST access, time-enabled OGC WMS 1.3.0 access is also provided by this service.

    Due to software limitations, the time extent of the service and map layers displayed below does not provide the most up-to-date start and end times of available data. Instead, users have three options for determining the latest time information about the service:

    1. Issue a returnUpdates=true request for an individual layer or for the service itself, which will return the current start and end times of available data, in epoch time format (milliseconds since 00:00 January 1, 1970). To see an example, click on the "Return Updates" link at the bottom of this page under "Supported Operations". Refer to the ArcGIS REST API Map Service Documentation for more information.
    2. Issue an Identify (ArcGIS REST) or GetFeatureInfo (WMS) request against the proper layer corresponding with the target dataset. For raster data, this would be the "Image Footprints with Time Attributes" layer in the same group as the target "Image" layer being displayed. For vector (point, line, or polygon) data, the target layer can be queried directly. In either case, the attributes returned for the matching raster(s) or vector feature(s) will include the following: validtime: Valid timestamp. starttime: Display start time. endtime: Display end time. reftime: Reference time (sometimes referred to as issuance time, cycle time, or initialization time). projmins: Number of minutes from reference time to valid time. desigreftime: Designated reference time; used as a common reference time for all items when individual reference times do not match. desigprojmins: Number of minutes from designated reference time to valid time.
    3. Query the nowCOAST LayerInfo web service, which has been created to provide additional information about each data layer in a service, including a list of all available "time stops" (i.e. "valid times"), individual timestamps, or the valid time of a layer's latest available data (i.e. "Product Time"). For more information about the LayerInfo web service, including examples of various types of requests, refer to the nowCOAST help documentation at: http://new.nowcoast.noaa.gov/help/#section=layerinfo

    References

    For more information about the MRMS/Q3 system, please see http://nmq.ou.edu and http://www.nssl.noaa.gov/projects/mrms.
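Since the service also supports time-enabled OGC WMS 1.3.0, a GetMap request with a TIME parameter can be composed as below. This is a sketch only: the endpoint and layer name are placeholders, while the VERSION/CRS/TIME parameters follow the WMS 1.3.0 convention. Omitting TIME corresponds to the "latest data for the present system time" behavior described above.

```python
from urllib.parse import urlencode

# Hypothetical endpoint and layer name; actual nowCOAST layer names differ.
WMS_URL = "https://example.com/arcgis/services/demo/MapServer/WMSServer"

def wms_getmap_url(layer, bbox, time_iso=None, size=(512, 512)):
    """Build a time-enabled WMS 1.3.0 GetMap request URL.

    bbox is (min_lat, min_lon, max_lat, max_lon) in EPSG:4326 axis order.
    If time_iso is omitted, the service falls back to the latest valid data."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": size[0],
        "HEIGHT": size[1],
        "FORMAT": "image/png",
    }
    if time_iso is not None:
        params["TIME"] = time_iso  # e.g. "2022-08-25T12:00:00Z"
    return WMS_URL + "?" + urlencode(params)
```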

  14. Data from: Dominance in Domestic Dogs: A Quantitative Analysis of Its...

    • omicsdi.org
    Cite
    Dominance in Domestic Dogs: A Quantitative Analysis of Its Behavioural Measures. [Dataset]. https://www.omicsdi.org/dataset/biostudies/S-EPMC4556277
    Explore at:
    Variables measured
    Unknown
    Description

    A dominance hierarchy is an important feature of the social organisation of group living animals. Although formal and/or agonistic dominance has been found in captive wolves and free-ranging dogs, applicability of the dominance concept in domestic dogs is highly debated, and quantitative data are scarce. Therefore, we investigated 7 body postures and 24 behaviours in a group of domestic dogs for their suitability as formal status indicators. The results showed that high posture, displayed in most dyadic relationships, and muzzle bite, displayed exclusively by the highest ranking dogs, qualified best as formal dominance indicators. The best formal submission indicator was body tail wag, covering most relationships, and two low postures, covering two-thirds of the relationships. In addition, both mouth lick, as included in Schenkel's active submission, and pass under head qualified as formal submission indicators but were shown almost exclusively towards the highest ranking dogs. Furthermore, a status assessment based on changes in posture displays, i.e., lowering of posture (LoP) into half-low, low, low-on-back or on-back, was the best status indicator for most relationships as it showed good coverage (91% of the dyads), a nearly linear hierarchy (h' = 0.94, p<0.003) and strong unidirectionality (DCI = 0.97). The associated steepness of 0.79 (p<0.0001) indicated a tolerant dominance style for this dog group. No significant correlations of rank with age or weight were found. Strong co-variation between LoP, high posture, and body tail wag justified the use of dominance as an intervening variable. Our results are in line with previous findings for captive wolves and free-ranging dogs, for formal dominance with strong linearity based on submission but not aggression. They indicate that the ethogram for dogs is best redefined by distinguishing body postures from behavioural activities. 
    A good insight into dominance hierarchies and their indicators will be helpful in properly interpreting dog-dog relationships and diagnosing problem behaviour in dogs.
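For readers unfamiliar with the unidirectionality measure cited above (DCI = 0.97), the directional consistency index can be computed from a matrix of dyadic counts. A sketch under the standard definition DCI = (H - L) / (H + L), where H and L are the total counts in each dyad's more and less frequent direction; the example matrix is made up, not the paper's data:

```python
def directional_consistency_index(m):
    """DCI = (H - L) / (H + L), summed over all dyads, where m[i][j] is the
    number of times individual i displayed the behaviour toward individual j."""
    n = len(m)
    h = l = 0
    for i in range(n):
        for j in range(i + 1, n):
            a, b = m[i][j], m[j][i]
            h += max(a, b)  # count in the dyad's more frequent direction
            l += min(a, b)  # count in the dyad's less frequent direction
    return (h - l) / (h + l) if (h + l) else 0.0

# Fully one-directional behaviour gives DCI = 1; fully symmetric gives 0.
counts = [[0, 10, 4],
          [1,  0, 6],
          [0,  0, 0]]
print(directional_consistency_index(counts))
```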

  15. BIOGRID CURATED DATA FOR PUBLICATION: Quantitative genome-wide genetic...

    • thebiogrid.org
    zip
    Updated Feb 1, 2014
    Cite
    BioGRID Project (2014). BIOGRID CURATED DATA FOR PUBLICATION: Quantitative genome-wide genetic interaction screens reveal global epistatic relationships of protein complexes in Escherichia coli. [Dataset]. https://thebiogrid.org/188434/publication/quantitative-genome-wide-genetic-interaction-screens-reveal-global-epistatic-relationships-of-protein-complexes-in-escherichia-coli.html
    Explore at:
    zipAvailable download formats
    Dataset updated
    Feb 1, 2014
    Dataset authored and provided by
    BioGRID Project
    License

    MIT Licensehttps://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    Protein-Protein, Genetic, and Chemical Interactions for Babu M (2014): Quantitative genome-wide genetic interaction screens reveal global epistatic relationships of protein complexes in Escherichia coli, curated by BioGRID (https://thebiogrid.org); ABSTRACT: Large-scale proteomic analyses in Escherichia coli have documented the composition and physical relationships of multiprotein complexes, but not their functional organization into biological pathways and processes. Conversely, genetic interaction (GI) screens can provide insights into the biological role(s) of individual gene and higher order associations. Combining the information from both approaches should elucidate how complexes and pathways intersect functionally at a systems level. However, such integrative analysis has been hindered due to the lack of relevant GI data. Here we present a systematic, unbiased, and quantitative synthetic genetic array screen in E. coli describing the genetic dependencies and functional cross-talk among over 600,000 digenic mutant combinations. Combining this epistasis information with putative functional modules derived from previous proteomic data and genomic context-based methods revealed unexpected associations, including new components required for the biogenesis of iron-sulphur and ribosome integrity, and the interplay between molecular chaperones and proteases. We find that functionally-linked genes co-conserved among γ-proteobacteria are far more likely to have correlated GI profiles than genes with divergent patterns of evolution. Overall, examining bacterial GIs in the context of protein complexes provides avenues for a deeper mechanistic understanding of core microbial systems.

  16. Data from: Quantitative profiling of protease specificity

    • datadryad.org
    zip
    Updated Feb 3, 2021
    Cite
    Piotr Cieplak; Boris Ratnikov; Jeffrey Smith; Albert Remacle; Elise Nguyen (2021). Quantitative profiling of protease specificity [Dataset]. http://doi.org/10.5061/dryad.ns1rn8pq1
    Explore at:
    zipAvailable download formats
    Dataset updated
    Feb 3, 2021
    Dataset provided by
    Dryad
    Authors
    Piotr Cieplak; Boris Ratnikov; Jeffrey Smith; Albert Remacle; Elise Nguyen
    Time period covered
    Jun 19, 2020
    Description

    See information included in the text of our manuscript being submitted to PLOS Computational Biology.

    There are no missing values.

  17. Data from: A proteome-wide quantitative platform for nanoscale spatially...

    • data.niaid.nih.gov
    • zenodo.org
    Updated Aug 6, 2024
    Cite
    Gupta, Kallol (2024). A proteome-wide quantitative platform for nanoscale spatially resolved extraction of membrane proteins into native nanodiscs [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_11175124
    Explore at:
    Dataset updated
    Aug 6, 2024
    Dataset provided by
    Gupta, Kallol
    GHOSH, SNEHASISH
    Brown, Caroline
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    EM Quantitation:

    Raw data gathered from EM images taken to determine nanodisc population size distribution.

    NNB TGN46 analysis:

    Data analysis of the Native Nanobleach experiments of TGN46 in native nanodiscs to determine population distribution of oligomeric organizations.

    Polymer conditions:

    Physicochemical characteristics and extraction conditions for all polymers in the library, both commercially available and in-house.

    Protein groups polymer screen original file:

    Original output of MaxQuant data processing of polymer screen data.

    Organelle matching:

    Code used for matching proteins identified in the proteomics output to their organelle of residence for all organellar annotations.

    Polymer code:

    Code used to process and normalize the MaxQuant output and calculate extraction efficiency across all detected proteins.

  18. Data from: Time resolved quantitative phospho-tyrosine analysis reveals...

    • ebi.ac.uk
    • data.niaid.nih.gov
    Updated Nov 7, 2018
    Cite
    Pankaj Dwivedi (2018). Time resolved quantitative phospho-tyrosine analysis reveals Bruton’s Tyrosine kinase mediated signaling downstream of the mutated granulocyte-colony stimulating factor receptors [Dataset]. https://www.ebi.ac.uk/pride/archive/projects/PXD009662
    Explore at:
    Dataset updated
    Nov 7, 2018
    Authors
    Pankaj Dwivedi
    Variables measured
    Proteomics
    Description

    Granulocyte-colony stimulating factor receptor (G-CSFR) controls myeloid progenitor proliferation and differentiation to neutrophils. Mutations in CSF3R (encoding G-CSFR) have been reported in patients with chronic neutrophilic leukemia (CNL) and acute myeloid leukemia (AML); however, despite years of research, the malignant downstream signaling of the mutated G-CSFRs is not well understood. Here, we utilized a quantitative phospho-tyrosine analysis to generate a comprehensive signaling map of G-CSF induced tyrosine phosphorylation in the normal versus mutated (proximal: T618I and truncated: Q741x) G-CSFRs. Unbiased clustering and kinase enrichment analysis identified rapid induction of phospho-proteins associated with endocytosis by the wild-type G-CSFR only; while G-CSFR mutants showed abnormal kinetics of canonical STAT3, STAT5 and MAPK phosphorylation, and aberrant activation of Bruton’s Tyrosine Kinase (Btk). Mutant-G-CSFR-expressing cells displayed enhanced sensitivity (5-fold lower IC50) for Ibrutinib-based chemical inhibition of Btk. Finally, primary murine progenitor cells from G-CSFR-d715x knock-in mice validate activation of Btk by the mutant receptor, and display enhanced sensitivity to Ibrutinib. Together, these data demonstrate the strength of unsupervised proteomics analyses in dissecting oncogenic pathways, and suggest repositioning Ibrutinib for therapy of myeloid leukemia bearing CSF3R mutations.

  19. Data from: Quantitative support for the benefits of proactive management for...

    • datadryad.org
    • data.niaid.nih.gov
    zip
    Updated Jun 5, 2024
    Cite
    Molly Bletz; Graziella DiRenzo; Evan Grant (2024). Quantitative support for the benefits of proactive management for wildlife disease control [Dataset]. http://doi.org/10.5061/dryad.bk3j9kdhk
    Explore at:
    zipAvailable download formats
    Dataset updated
    Jun 5, 2024
    Dataset provided by
    Dryad
    Authors
    Molly Bletz; Graziella DiRenzo; Evan Grant
    Time period covered
    Jan 9, 2024
    Description

    Start early and stay the course: Proactive management outperforms reactive actions for wildlife disease control

    To understand the impact of different management scenarios on host and pathogen persistence, we used a multistate dynamic occupancy model. We incorporated management actions (or the lack thereof) via their estimated effects on parameters in the transition matrix. We considered four scenarios for the timing of management interventions on host and pathogen persistence: (i) a no-management scenario, (ii) a proactive management scenario, (iii) a reactive management scenario, and (iv) a proactive + reactive management scenario. Expert elicitation was used to obtain estimates for the majority of the parameters given the limited empirical data available. The raw estimates and confidence levels from the 4-point elicitation method are provided in the two data files provided here. Methodologies are further explained in the main text of the paper.
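The transition-matrix mechanics described above can be illustrated with a toy Markov projection of site states. The states and rates below are invented for illustration and are not the paper's expert-elicited estimates:

```python
# Illustrative states: 0 = unoccupied, 1 = host only, 2 = host + pathogen.
# Each row gives transition probabilities out of one state and sums to 1.
T = [
    [0.80, 0.15, 0.05],   # from unoccupied
    [0.10, 0.70, 0.20],   # from host only
    [0.20, 0.10, 0.70],   # from host + pathogen
]

def step(dist, T):
    """One transition of a state distribution: new_j = sum_i dist_i * T[i][j]."""
    n = len(dist)
    return [sum(dist[i] * T[i][j] for i in range(n)) for j in range(n)]

# Project 10 time steps starting from all sites host-only; a management
# action would be modeled by modifying entries of T before projecting.
dist = [0.0, 1.0, 0.0]
for _ in range(10):
    dist = step(dist, T)
```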

    Description of the Data and file structure

    ...

  20. Data from: A hierarchical statistical model for estimating population...

    • catalog.data.gov
    Updated Jul 24, 2025
    Cite
    National Institutes of Health (2025). A hierarchical statistical model for estimating population properties of quantitative genes [Dataset]. https://catalog.data.gov/dataset/a-hierarchical-statistical-model-for-estimating-population-properties-of-quantitative-gene
    Explore at:
    Dataset updated
    Jul 24, 2025
    Dataset provided by
    National Institutes of Health
    Description

    Background: Earlier methods for detecting major genes responsible for a quantitative trait rely critically upon a well-structured pedigree in which the segregation pattern of genes exactly follows Mendelian inheritance laws. However, for many outcrossing species, such pedigrees are not available and genes also display population properties.

    Results: In this paper, a hierarchical statistical model is proposed to monitor the existence of a major gene based on its segregation and transmission across two successive generations. The model is implemented with an EM algorithm to provide maximum likelihood estimates for genetic parameters of the major locus. This new method is successfully applied to identify an additive gene having a large effect on stem height growth of aspen trees. The estimates of population genetic parameters for this major gene can be generalized to the original breeding population from which the parents were sampled. A simulation study is presented to evaluate finite sample properties of the model.

    Conclusions: A hierarchical model was derived for detecting major genes affecting a quantitative trait based on progeny tests of outcrossing species. The new model takes into account the population genetic properties of genes and is expected to enhance the accuracy, precision and power of gene detection.
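As a generic illustration of the EM idea mentioned above (not the paper's hierarchical model), the sketch below fits a two-component normal mixture by maximum likelihood, e.g. progeny trait values segregating into two major-locus genotype classes. All parameter names, starting values, and data are illustrative:

```python
import math

def normal_pdf(x, mu, sd):
    """Density of a normal distribution with mean mu and std dev sd."""
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def em_two_component(xs, mu1, mu2, sd=1.0, pi=0.5, iters=50):
    """EM for a two-component normal mixture with fixed, shared sd."""
    for _ in range(iters):
        # E-step: posterior probability each observation came from component 1
        resp = []
        for x in xs:
            p1 = pi * normal_pdf(x, mu1, sd)
            p2 = (1 - pi) * normal_pdf(x, mu2, sd)
            resp.append(p1 / (p1 + p2))
        # M-step: update mixing proportion and component means
        n1 = sum(resp)
        pi = n1 / len(xs)
        mu1 = sum(r * x for r, x in zip(resp, xs)) / n1
        mu2 = sum((1 - r) * x for r, x in zip(resp, xs)) / (len(xs) - n1)
    return pi, mu1, mu2
```

With well-separated groups the estimates converge quickly; a real major-gene analysis would additionally model segregation ratios and pedigree structure, as the paper describes.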

Cite
Manuela Hummel; Dominic Edelmann; Annette Kopp-Schneider (2023). Clustering of samples and variables with mixed-type data [Dataset]. http://doi.org/10.1371/journal.pone.0188274

Clustering of samples and variables with mixed-type data

Explore at:
24 scholarly articles cite this dataset
Available download formats
tiff
Dataset updated
Jun 1, 2023
Dataset provided by
PLOS ONE
Authors
Manuela Hummel; Dominic Edelmann; Annette Kopp-Schneider
License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically

Description

Analysis of data measured on different scales is a recurring challenge. Biomedical studies often focus on high-throughput datasets of, e.g., quantitative measurements. However, the integration of other features, possibly measured on different scales, e.g. clinical or cytogenetic factors, becomes increasingly important. Typically, analysis results (e.g. a selection of relevant genes) are visualized first, and further information, such as clinical factors, is added on top. A more integrative approach is desirable, in which all available data are analyzed jointly and the different data sources are also combined more naturally in the visualization. Here we specifically target integrative visualization and present a heatmap-style graphic display. To this end, we develop and explore methods for clustering mixed-type data, with special focus on clustering variables. Clustering of variables receives less attention in the literature than clustering of samples. We extend the variable-clustering methodology by two new approaches, one based on the combination of different association measures and the other on distance correlation. With simulation studies we evaluate and compare different clustering strategies. Applying methods specific to mixed-type data proves comparable, and in many cases beneficial, compared to standard approaches applied to the corresponding quantitative or binarized data. Our two novel approaches for mixed-type variables show similar or better performance than the existing methods ClustOfVar and bias-corrected mutual information. Further, in contrast to ClustOfVar, our methods provide dissimilarity matrices, which is an advantage especially for the purpose of visualization. Real data examples give an impression of various potential applications of the integrative heatmap and other graphical displays based on dissimilarity matrices.
We demonstrate that the presented integrative heatmap provides more information than common data displays about the relationships among variables and samples. The described clustering and visualization methods are implemented in our R package CluMix, available from https://cran.r-project.org/web/packages/CluMix.
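The distance-correlation approach to mixed-type variable dissimilarity can be sketched in a few lines: numeric variables contribute absolute-difference distances, categorical variables contribute 0/1 mismatch distances, and each variable pair receives dissimilarity 1 - dCor, which yields the dissimilarity matrix used for clustering and heatmap ordering. This is a plain illustrative sketch of the idea only (the CluMix package itself is implemented in R, and all names here are hypothetical).

```python
import math

def _dist_matrix(v, categorical=False):
    # Pairwise distances: absolute difference for numeric values,
    # 0/1 mismatch for categorical levels.
    if categorical:
        return [[0.0 if a == b else 1.0 for b in v] for a in v]
    return [[abs(a - b) for b in v] for a in v]

def _double_center(d):
    # Subtract row and column means, add back the grand mean.
    n = len(d)
    row = [sum(r) / n for r in d]
    grand = sum(row) / n
    return [[d[i][j] - row[i] - row[j] + grand for j in range(n)]
            for i in range(n)]

def dcor(x, y, x_cat=False, y_cat=False):
    """Sample distance correlation between two variables, which may be
    of different types (numeric or categorical)."""
    n = len(x)
    A = _double_center(_dist_matrix(x, x_cat))
    B = _double_center(_dist_matrix(y, y_cat))
    dcov2 = max(sum(A[i][j] * B[i][j]
                    for i in range(n) for j in range(n)), 0.0) / n ** 2
    dvarx = sum(a * a for r in A for a in r) / n ** 2
    dvary = sum(b * b for r in B for b in r) / n ** 2
    if dvarx * dvary == 0.0:
        return 0.0  # a constant variable carries no dependence information
    return math.sqrt(dcov2 / math.sqrt(dvarx * dvary))

def dissimilarity(x, y, **kw):
    # Entry of the variable dissimilarity matrix: 1 - dCor.
    return 1.0 - dcor(x, y, **kw)
```

Because distance correlation is defined for arbitrary metric spaces, the same formula applies whether a pair is numeric/numeric, numeric/categorical, or categorical/categorical; the resulting matrix can be fed directly to any hierarchical clustering routine that accepts precomputed dissimilarities.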
