We compiled macroinvertebrate assemblage data collected from 1995 to 2014 from the St. Louis River Area of Concern (AOC) of western Lake Superior. Our objective was to define depth-adjusted cutoff values for benthos condition classes (poor, fair, reference) to provide a tool useful for assessing progress toward achieving removal targets for the degraded benthos beneficial use impairment in the AOC. The relationship between depth and benthos metrics was wedge-shaped. We therefore used quantile regression to model the limiting effect of depth on selected benthos metrics, including taxa richness, percent non-oligochaete individuals, combined percent Ephemeroptera, Trichoptera, and Odonata individuals, and density of ephemerid mayfly nymphs (Hexagenia). We created a scaled trimetric index from the first three metrics. Metric values at or above the 90th percentile quantile regression model prediction were defined as reference condition for that depth. We set the cutoff between poor and fair condition as the 50th percentile model prediction. We examined sampler type, exposure, geographic zone of the AOC, and substrate type for confounding effects. Based on these analyses we combined data across sampler type and exposure classes and created separate models for each geographic zone. We used the resulting condition class cutoff values to assess the relative benthic condition for three habitat restoration project areas. The depth-limited pattern of ephemerid abundance we observed in the St. Louis River AOC also occurred elsewhere in the Great Lakes. We provide tabulated model predictions for application of our depth-adjusted condition class cutoff values to new sample data. This dataset is associated with the following publication: Angradi, T., W. Bartsch, A. Trebitz, V. Brady, and J. Launspach. A depth-adjusted ambient distribution approach for setting numeric removal targets for a Great Lakes Area of Concern beneficial use impairment: Degraded benthos.
JOURNAL OF GREAT LAKES RESEARCH. International Association for Great Lakes Research, Ann Arbor, MI, USA, 43(1): 108-120, (2017).
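The depth-adjusted cutoff idea can be sketched with binned empirical quantiles standing in for the paper's quantile-regression predictions. This is a simplification for illustration only: the data, the wedge shape, and the 5 m bin width below are all invented, and the paper's actual cutoffs come from its tabulated model predictions.

```python
import random
from statistics import quantiles

random.seed(1)

# Hypothetical samples: (depth_m, metric) pairs with a wedge-shaped upper
# bound, i.e. the metric's ceiling declines with depth as in the abstract.
samples = []
for _ in range(500):
    depth = random.uniform(1, 25)
    samples.append((depth, random.uniform(0, 30 - depth)))

BIN_W = 5.0  # depth bin width in meters (an arbitrary choice for this sketch)

def depth_bin(depth):
    return int(depth // BIN_W)

# Group metric values by depth bin, then take per-bin 50th and 90th
# percentiles: the poor/fair and fair/reference cutoffs at that depth.
bins = {}
for depth, metric in samples:
    bins.setdefault(depth_bin(depth), []).append(metric)

cutoffs = {}
for b, values in bins.items():
    deciles = quantiles(values, n=10)             # deciles of the metric
    cutoffs[b] = {"poor_fair": deciles[4],        # ~50th percentile
                  "fair_reference": deciles[8]}   # ~90th percentile

def condition_class(depth, metric):
    """Classify a new sample against the depth-appropriate cutoffs."""
    c = cutoffs[depth_bin(depth)]
    if metric >= c["fair_reference"]:
        return "reference"
    return "fair" if metric >= c["poor_fair"] else "poor"
```

Applying the published tabulated predictions to new sample data amounts to the same lookup: find the prediction for the sample's depth, then compare the metric value against the 50th and 90th percentile cutoffs.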
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Semantic Artist Similarity dataset consists of two datasets of artist entities with their corresponding biography texts, and the lists of the top-10 most similar artists within the datasets used as ground truth. The dataset is composed of a corpus of 268 artists and a slightly larger one of 2,336 artists, both gathered from Last.fm in March 2015. The former is mapped to the MIREX Audio and Music Similarity evaluation dataset, so that its similarity judgments can be used as ground truth. For the latter corpus we use the similarity between artists as provided by the Last.fm API. For every artist there is a list with the top-10 most related artists. In the MIREX dataset there are 188 artists with at least 10 similar artists; the other 80 artists have fewer than 10 similar artists. In the Last.fm API dataset all artists have a list of 10 similar artists. There are 4 files in the dataset. mirex_gold_top10.txt and lastfmapi_gold_top10.txt have the top-10 lists of artists for every artist of both datasets. Artists are identified by MusicBrainz ID. The format of the file is one line per artist, with the artist mbid separated by a tab from the list of top-10 related artists, identified by their mbids separated by spaces: artist_mbid \t artist_mbid_top10_list_separated_by_spaces. mb2uri_mirex and mb2uri_lastfmapi.txt have the list of artists. In each line there are three fields separated by tabs: the first field is the MusicBrainz ID, the second field is the last.fm name of the artist, and the third field is the DBpedia URI: artist_mbid \t lastfm_name \t dbpedia_uri. There are also 2 folders in the dataset with the biography texts of each dataset. Each .txt file in the biography folders is named with the MusicBrainz ID of the biographied artist.
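The two file layouts described above are straightforward to parse. A minimal Python sketch, with the field layout taken directly from the description (the sample lines in the usage note are made up):

```python
def parse_top10(line):
    """Parse one line of mirex_gold_top10.txt / lastfmapi_gold_top10.txt:
    artist_mbid \t mbid1 mbid2 ... mbid10
    Returns (artist_mbid, [related_mbids]).
    """
    artist, _, rest = line.rstrip("\n").partition("\t")
    return artist, rest.split()

def parse_mb2uri(line):
    """Parse one line of mb2uri_mirex / mb2uri_lastfmapi.txt:
    artist_mbid \t lastfm_name \t dbpedia_uri
    """
    mbid, name, uri = line.rstrip("\n").split("\t")
    return {"mbid": mbid, "lastfm_name": name, "dbpedia_uri": uri}
```

For example, `parse_top10("a1\tb1 b2 b3\n")` yields `("a1", ["b1", "b2", "b3"])`; iterating over a file with these functions builds the ground-truth similarity lists and the mbid-to-URI mapping.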
Biographies were gathered from the Last.fm wiki page of every artist. Using this dataset: we would highly appreciate it if scientific publications of works partly based on the Semantic Artist Similarity dataset cite the following publication: Oramas, S., Sordo, M., Espinosa-Anke, L., & Serra, X. (In Press). A Semantic-based Approach for Artist Similarity. 16th International Society for Music Information Retrieval Conference. We are interested in knowing if you find our datasets useful! If you use our dataset please email us at mtg-info@upf.edu and tell us about your research. https://www.upf.edu/web/mtg/semantic-similarity
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This formatted dataset (AnalysisDatabaseGBD) originates from raw data files from the Institute for Health Metrics and Evaluation (IHME) Global Burden of Disease Study (GBD2017), affiliated with the University of Washington. We are volunteer collaborators with IHME and are not employed by IHME or the University of Washington.
The population weighted GBD2017 data are on male and female cohorts ages 15-69 years including noncommunicable diseases (NCDs), body mass index (BMI), cardiovascular disease (CVD), and other health outcomes and associated dietary, metabolic, and other risk factors. The purpose of creating this population-weighted, formatted database is to explore the univariate and multiple regression correlations of health outcomes with risk factors. Our research hypothesis is that we can successfully model NCDs, BMI, CVD, and other health outcomes with their attributable risks.
These Global Burden of Disease data relate to the preprint: The EAT-Lancet Commission Planetary Health Diet compared with Institute of Health Metrics and Evaluation Global Burden of Disease Ecological Data Analysis.
The data include the following:
1. Analysis database of population weighted GBD2017 data that includes over 40 health risk factors, noncommunicable disease deaths/100k/year of male and female cohorts ages 15-69 years from 195 countries (the primary outcome variable that includes over 100 types of noncommunicable diseases) and over 20 individual noncommunicable diseases (e.g., ischemic heart disease, colon cancer, etc).
2. A text file to import the analysis database into SAS
3. The SAS code to format the analysis database to be used for analytics
4. SAS code for deriving Tables 1, 2, 3 and Supplementary Tables 5 and 6
5. SAS code for deriving the multiple regression formula in Table 4.
6. SAS code for deriving the multiple regression formula in Table 5
7. SAS code for deriving the multiple regression formula in Supplementary Table 7
8. SAS code for deriving the multiple regression formula in Supplementary Table 8
9. The Excel files that accompanied the above SAS code to produce the tables
For questions, please email davidkcundiff@gmail.com. Thanks.
The 1980 SAS Transport Files portion of the Archive of Census Related Products (ACRP) contains housing and population demographics from the 1980 Summary Tape File (STF3A) database and is organized by state. The population data include education levels, ethnicity, income distribution, nativity, labor force status, means of transportation, and family structure, while the housing data cover size, state, and structure of the housing unit, value of the unit, tenure and occupancy status, source of water, sewage disposal, availability of telephone, heating and air conditioning, kitchen facilities, rent, mortgage status, and monthly owner costs. This portion of the ACRP is produced by the Columbia University Center for International Earth Science Information Network (CIESIN).
In 2023, the number of pilots employed by Scandinavian Airlines was 781, a decrease of 24 full-time equivalent pilots compared with the previous year. Since 2020, Scandinavian Airlines has had one of the lowest pilot counts in its 14-year reporting history.
In a shared spectrum environment, as is the case in the 3.5 GHz Citizens Broadband Radio Service (CBRS), the secondary users with lower priority are managed by independent spectrum access systems (SASs) in order to protect the incumbents with higher priority from interference. The interference protection is guaranteed in terms of a percentile of the aggregate interference power. The current practice requires each SAS to obtain a global snapshot of interference and use a common algorithm to manage it. We present a simplified method that permits each SAS to independently manage its users while still meeting overall aggregate interference protection requirements. The data include a statistical upper bound on aggregate interference for some known distributions, which is the core idea of the proposed method. The data also include numerical results of using the proposed interference protection criterion in terms of two metrics: the total number of users moved from the channel in order to protect the incumbent (i.e., the size of the move list) and the realized aggregate interference of all co-channel users at the incumbent. The data are associated with the letter, "Independent Calculation of Move Lists for Incumbent Protection in a Multi-SAS Shared Spectrum Environment," M. R. Souryal and T. T. Nguyen, IEEE Wireless Communications Letters, Jan. 2021.
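The percentile-based protection criterion and the move-list metric can be illustrated with a toy Monte Carlo sketch. This is a greedy simplification for intuition only, not the letter's algorithm; the user counts, interference threshold, and 4 dB shadowing spread are all assumed:

```python
import math
import random

random.seed(7)

N_TRIALS = 2000
THRESHOLD_DBM = -80.0   # hypothetical aggregate interference limit at the incumbent
PERCENTILE = 0.95       # protection defined at a percentile of aggregate power

# Hypothetical co-channel users; mean received interference at the incumbent, dBm.
users = [{"id": i, "mean_dbm": random.uniform(-110, -90)} for i in range(30)]

def draw_mw(user):
    """One Monte Carlo draw of a user's interference, converted to milliwatts."""
    dbm = random.gauss(user["mean_dbm"], 4.0)  # assumed 4 dB shadowing spread
    return 10 ** (dbm / 10)

def aggregate_p95_dbm(active):
    """Estimate the protection percentile of aggregate interference (dBm)."""
    totals = sorted(sum(draw_mw(u) for u in active) for _ in range(N_TRIALS))
    return 10 * math.log10(totals[int(PERCENTILE * N_TRIALS) - 1])

# Greedy move list: remove the strongest users until the percentile criterion
# holds. The move-list size is one of the two metrics reported in the data.
active = sorted(users, key=lambda u: u["mean_dbm"])  # weakest first
move_list = []
p95 = aggregate_p95_dbm(active)
while p95 > THRESHOLD_DBM and active:
    move_list.append(active.pop())  # move the strongest remaining user
    p95 = aggregate_p95_dbm(active)
```

The letter's contribution is letting each SAS reach a protective move list independently, via statistical upper bounds on the aggregate, rather than via the shared global snapshot this sketch assumes.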
This research work studied the effect of timing constraint and overloading of Spectrum Access System (SAS) on the SAS-CBSD protocol. Specifically, it studies how Heartbeat and Grant Request fail as number of CBSDs served by a SAS becomes large for a given service rate. It also looks at the time taken by CBSDs to vacate a channel (on which an incumbent has appeared) at different Heartbeat Interval. These study results are captured in the following files. (1) Number of CBSD vs number of Heartbeat timeout when SAS service rate is 40 requests/sec (2) Number of CBSD vs number of Heartbeat timeout when SAS service rate is 60 requests/sec (3) Number of CBSD vs number of failed grants when SAS service rate is 40 requests/sec (4) Number of CBSD vs number of failed grants when SAS service rate is 60 requests/sec (5) CDF of duration of CBSDs vacating a channel when number of CBSD=700, mean heartbeat interval = 90 s (6) CDF of duration of CBSDs vacating a channel when number of CBSD=1200, mean heartbeat interval = 150 s (7) CDF of duration of CBSDs vacating a channel when number of CBSD=1500, mean heartbeat interval = 220 s
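The per-configuration CDFs of channel-vacate duration listed in items (5) through (7) can be summarized with a plain empirical CDF. A minimal sketch; the duration values below are invented for illustration, not taken from the study files:

```python
def empirical_cdf(durations):
    """Return (sorted_values, cumulative_probabilities) for plotting a CDF."""
    xs = sorted(durations)
    n = len(xs)
    return xs, [(i + 1) / n for i in range(n)]

# Hypothetical vacate times (seconds) for CBSDs leaving an incumbent's
# channel; roughly bounded by the heartbeat interval plus processing delay.
times = [12.0, 35.5, 61.2, 80.1, 88.9, 90.4, 93.7]
xs, ps = empirical_cdf(times)
print(xs[-1], ps[-1])   # largest observed duration has cumulative probability 1.0
```

Plotting `ps` against `xs` as a step function reproduces the kind of CDF curves captured in the study's result files.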
U.S. Government Workshttps://www.usa.gov/government-works
License information was derived automatically
Description of the experiment setting (location, influential climatic conditions, controlled conditions, e.g. temperature, light cycle): In 1986, the Congress enacted Public Laws 99-500 and 99-591, requiring a biennial report on the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC). In response to these requirements, FNS developed a prototype system that allowed for the routine acquisition of information on WIC participants from WIC State Agencies. Since 1992, State Agencies have provided electronic copies of these data to FNS on a biennial basis. FNS and the National WIC Association (formerly National Association of WIC Directors) agreed on a set of data elements for the transfer of information. In addition, FNS established a minimum standard dataset for reporting participation data. For each biennial reporting cycle, each State Agency is required to submit a participant-level dataset containing standardized information on persons enrolled at local agencies for the reference month of April. The 2016 Participant and Program Characteristics (PC2016) is the thirteenth data submission to be completed using the WIC PC reporting system. In April 2016, there were 90 State agencies: the 50 States, American Samoa, the District of Columbia, Guam, the Northern Mariana Islands, Puerto Rico, the American Virgin Islands, and 34 Indian tribal organizations.
Processing methods and equipment used: Specifications on formats ("Guidance for States Providing Participant Data") were provided to all State agencies in January 2016. This guide specified 20 minimum dataset (MDS) elements and 11 supplemental dataset (SDS) elements to be reported on each WIC participant. Each State Agency was required to submit all 20 MDS items and any SDS items collected by the State agency.
Study date(s) and duration: The information for each participant was from the participant's most current WIC certification as of April 2016. Due to management information constraints, Connecticut provided data for a month other than April 2016, specifically August 16 – September 15, 2016.
Study spatial scale (size of replicates and spatial scale of study area): In April 2016, there were 90 State agencies: the 50 States, American Samoa, the District of Columbia, Guam, the Northern Mariana Islands, Puerto Rico, the American Virgin Islands, and 34 Indian tribal organizations.
Level of true replication: Unknown.
Sampling precision (within-replicate sampling or pseudoreplication): State Agency Data Submissions. PC2016 is a participant dataset consisting of 8,815,472 active records. The records, submitted to USDA by the State Agencies, comprise a census of all WIC enrollees, so there is no sampling involved in the collection of these data. PII Analytic Datasets. State agency files were combined to create a national census participant file of approximately 8.8 million records. The census dataset contains potentially personally identifiable information (PII) and is therefore not made available to the public. National Sample Dataset. The public use SAS analytic dataset made available to the public has been constructed from a nationally representative sample drawn from the census of WIC participants, selected by participant category. The nationally representative sample is composed of 60,003 records. The distribution by category is 5,449 pregnant women, 4,661 breastfeeding women, 3,904 postpartum women, 13,999 infants, and 31,990 children.
Level of subsampling (number and repeat or within-replicate sampling): The proportionate (or self-weighting) sample was drawn by WIC participant category: pregnant women, breastfeeding women, postpartum women, infants, and children. In this type of sample design, each WIC participant has the same probability of selection across all strata. Sampling weights are not needed when the data are analyzed. In a proportionate stratified sample, the largest stratum accounts for the highest percentage of the analytic sample.
Study design (before–after, control–impacts, time series, before–after-control–impacts): None; non-experimental.
Description of any data manipulation, modeling, or statistical analysis undertaken: Each entry in the dataset contains all MDS and SDS information submitted by the State agency on the sampled WIC participant. In addition, the file contains constructed variables used for analytic purposes. To protect individual privacy, the public use file does not include State agency, local agency, or case identification numbers.
Description of any gaps in the data or other limiting factors: Due to management information constraints, Connecticut provided data for a month other than April 2016, specifically August 16 – September 15, 2016.
Outcome measurement methods and equipment used: None.
Resources in this dataset:
Resource Title: WIC Participant and Program Characteristics 2016. File Name: wicpc_2016_public.csv. Resource Description: The 2016 Participant and Program Characteristics (PC2016) is the thirteenth data submission to be completed using the WIC PC reporting system. In April 2016, there were 90 State agencies: the 50 States, American Samoa, the District of Columbia, Guam, the Northern Mariana Islands, Puerto Rico, the American Virgin Islands, and 34 Indian tribal organizations. Resource Software Recommended: SAS, version 9.4, url: https://www.sas.com/en_us/software/sas9.html
Resource Title: WIC Participant and Program Characteristics 2016 Codebook. File Name: WICPC2016_PUBLIC_CODEBOOK.xlsx. Resource Software Recommended: SAS, version 9.4, url: https://www.sas.com/en_us/software/sas9.html
Resource Title: WIC Participant and Program Characteristics 2016 - Zip File with SAS, SPSS and STATA data. File Name: WIC_PC_2016_SAS_SPSS_STATA_Files.zip. Resource Description: WIC Participant and Program Characteristics 2016 - Zip File with SAS, SPSS and STATA data.
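The proportionate (self-weighting) allocation described above works as n_h = n × N_h / N for each stratum h. A sketch with assumed census counts (the true per-category census counts are not published here; only the resulting sample sizes are):

```python
# Hypothetical stratum sizes (census counts by WIC participant category);
# these values are for illustration only.
census = {
    "pregnant": 800_000,
    "breastfeeding": 700_000,
    "postpartum": 580_000,
    "infants": 2_060_000,
    "children": 4_700_000,
}
n_sample = 60_003                 # published total sample size
total = sum(census.values())

# Proportionate allocation: n_h = n * N_h / N for each stratum h.
allocation = {k: round(n_sample * v / total) for k, v in census.items()}

# Every participant has the same selection probability across strata,
# which is why no sampling weights are needed at analysis time.
prob = n_sample / total
```

As the text notes, the largest stratum (children, in the published distribution) necessarily receives the largest share of the analytic sample under this design.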
The simulated synthetic aperture sonar (SAS) data presented here were generated using PoSSM [Johnson and Brown 2018]. The data are suitable for bistatic, coherent signal processing and will form acoustic seafloor imagery. Included in this data package are simulated sonar data in Generic Data Format (GDF) files, a description of the GDF file contents, example SAS imagery, and supporting information about the simulated scenes. In total, there are eleven 60 m x 90 m scenes, labeled scene00 through scene10, with scene00 provided with the scatterers in isolation, i.e. no seafloor texture. This is provided for beamformer testing purposes and should result in an image similar to the one labeled "PoSSM-scene00-scene00-starboard-0.tif" in the Related Data Sets tab. The ten other scenes have varying degrees of model variation as described in "Description_of_Simulated_SAS_Data_Package.pdf". A description of the data and the model is found in the associated document called "Description_of_Simulated_SAS_Data_Package.pdf" and a description of the format in which the raw binary data is stored is found in the related document "PSU_GDF_Format_20240612.pdf". The format description also includes MATLAB code that will effectively parse the data to aid in signal processing and image reconstruction. It is left to the researcher to develop a beamforming algorithm suitable for coherent signal and image processing. Each 60 m x 90 m scene is represented by 4 raw (not beamformed) GDF files, labeled sceneXX-STARBOARD-000000 through 000003. It is possible to beamform smaller scenes from any one of these 4 files, i.e. the four files are combined sequentially to form a 60 m x 90 m image. Also included are comma separated value spreadsheets describing the locations of scatterers and objects of interest within each scene. In addition to the binary GDF data, a beamformed GeoTIFF image and single-look complex (SLC, science file) data of each scene are provided.
The SLC (science) data are stored in the Hierarchical Data Format 5 (https://www.hdfgroup.org/), and appended with ".hdf5" to indicate the HDF5 format. The data are stored as 32-bit real and 32-bit complex values. A viewer is available that provides basic graphing, image display, and directory navigation functions (https://www.hdfgroup.org/downloads/hdfview/). The HDF file contains all the information necessary to reconstruct a synthetic aperture sonar image. All major and contemporary programming languages have library support for encoding/decoding the HDF5 format. Supporting documentation that outlines positions of the seafloor scatterers is included in "Scatterer_Locations_Scene00.csv", while the locations of the objects of interest for scene01-scene10 are included in "Object_Locations_All_Scenes.csv". Portable Network Graphic (PNG) images that plot the locations of all the objects of interest in each scene in Along-Track and Cross-Track notation are provided.
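Once the SLC science file has been read with any HDF5 library, display-ready imagery is typically formed from the magnitude of the complex samples in decibels. A minimal, library-free sketch of that conversion step; the tiny 2x2 patch below is synthetic, and real scenes come from the .hdf5 files:

```python
import math

def slc_to_db(slc_rows, floor_db=-60.0):
    """Convert single-look complex (SLC) samples to magnitude in dB for
    display, clamping zeros and very weak returns to a floor value.

    slc_rows: 2-D list of complex numbers, as decoded from the science file.
    """
    out = []
    for row in slc_rows:
        out.append([
            max(20 * math.log10(abs(z)) if z != 0 else float("-inf"), floor_db)
            for z in row
        ])
    return out

# Tiny synthetic SLC patch standing in for a decoded scene.
patch = [[1 + 0j, 0.1 + 0.1j], [0j, 10j]]
db = slc_to_db(patch)
```

Mapping the resulting dB values to a grayscale colormap yields imagery comparable in character to the provided beamformed GeoTIFFs; the beamforming itself, as the description notes, is left to the researcher.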
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
7,005 global import shipment records of SAS hard disk drives, with prices, volumes, and current buyer and supplier relationships, based on an actual global export trade database.
CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
analyze the survey of income and program participation (sipp) with r

if the census bureau's budget was gutted and only one complex sample survey survived, pray it's the survey of income and program participation (sipp). it's giant. it's rich with variables. it's monthly. it follows households over three, four, now five year panels. the congressional budget office uses it for their health insurance simulation. analysts read that sipp has person-month files, get scurred, and retreat to inferior options. the american community survey may be the mount everest of survey data, but sipp is most certainly the amazon. questions swing wild and free through the jungle canopy i mean core data dictionary. legend has it that there are still species of topical module variables that scientists like you have yet to analyze. ponce de león would've loved it here. ponce. what a name. what a guy.

the sipp 2008 panel data started from a sample of 105,663 individuals in 42,030 households. once the sample gets drawn, the census bureau surveys one-fourth of the respondents every four months, over four or five years (panel durations vary). you absolutely must read and understand pdf pages 3, 4, and 5 of this document before starting any analysis (start at the header 'waves and rotation groups'). if you don't comprehend what's going on, try their survey design tutorial. since sipp collects information from respondents regarding every month over the duration of the panel, you'll need to be hyper-aware of whether you want your results to be point-in-time, annualized, or specific to some other period. the analysis scripts below provide examples of each. at every four-month interview point, every respondent answers every core question for the previous four months. after that, wave-specific addenda (called topical modules) get asked, but generally only regarding a single prior month. to repeat: core wave files contain four records per person, topical modules contain one.

if you stacked every core wave, you would have one record per person per month for the duration of the panel. mmmassive. ~100,000 respondents x 12 months x ~4 years. have an analysis plan before you start writing code so you extract exactly what you need, nothing more. better yet, modify something of mine. cool? this new github repository contains eight, you read me, eight scripts:

1996 panel - download and create database.R
2001 panel - download and create database.R
2004 panel - download and create database.R
2008 panel - download and create database.R
- since some variables are character strings in one file and integers in another, initiate an r function to harmonize variable class inconsistencies in the sas importation scripts
- properly handle the parentheses seen in a few of the sas importation scripts, because the SAScii package currently does not
- create an rsqlite database, initiate a variant of the read.SAScii function that imports ascii data directly into a sql database (.db)
- download each microdata file - weights, topical modules, everything - then read 'em into sql

2008 panel - full year analysis examples.R
- define which waves and specific variables to pull into ram, based on the year chosen
- loop through each of twelve months, constructing a single-year temporary table inside the database
- read that twelve-month file into working memory, then save it for faster loading later if you like
- read the main and replicate weights columns into working memory too, merge everything
- construct a few annualized and demographic columns using all twelve months' worth of information
- construct a replicate-weighted complex sample design with a fay's adjustment factor of one-half, again save it for faster loading later, only if you're so inclined
- reproduce census-published statistics, not precisely (due to topcoding described here on pdf page 19)

2008 panel - point-in-time analysis examples.R
- define which wave(s) and specific variables to pull into ram, based on the calendar month chosen
- read that interview point (srefmon)- or calendar month (rhcalmn)-based file into working memory
- read the topical module and replicate weights files into working memory too, merge it like you mean it
- construct a few new, exciting variables using both core and topical module questions
- construct a replicate-weighted complex sample design with a fay's adjustment factor of one-half
- reproduce census-published statistics, not exactly cuz the authors of this brief used the generalized variance formula (gvf) to calculate the margin of error - see pdf page 4 for more detail - the friendly statisticians at census recommend using the replicate weights whenever possible. oh hayy, now it is.

2008 panel - median value of household assets.R
- define which wave(s) and specific variables to pull into ram, based on the topical module chosen
- read the topical module and replicate weights files into working memory too, merge once again
- construct a replicate-weighted complex sample design with a...
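the point-in-time versus annualized distinction above boils down to which person-month records you keep. a toy sketch in python (the repository's actual scripts are in r; the records below are invented, only the srefmon/rhcalmn variable names come from sipp):

```python
from collections import defaultdict

# Toy person-month records shaped like stacked SIPP core waves:
# one record per person per reference month (srefmon runs 1-4 within a wave).
records = [
    {"person": 1, "wave": 1, "srefmon": m, "rhcalmn": m, "income": 1000 + 10 * m}
    for m in range(1, 5)
] + [
    {"person": 1, "wave": 2, "srefmon": m, "rhcalmn": 4 + m, "income": 1100 + 10 * m}
    for m in range(1, 5)
]

# Point-in-time estimate: keep only the interview-month record of each wave
# (srefmon == 4), one snapshot per four-month interview point.
point_in_time = [r for r in records if r["srefmon"] == 4]

# Annualized estimate: aggregate each person's records across all the
# calendar months in the period of interest.
annual = defaultdict(float)
for r in records:
    annual[r["person"]] += r["income"]
```

same data, two defensible answers - which is exactly why you pick point-in-time or annualized before you write any extraction code.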
Multienvironment trials (METs) enable the evaluation of the same genotypes under a variety of environments and management conditions. We present META (Multi Environment Trial Analysis), a suite of 31 SAS programs that analyze METs with complete or incomplete block designs, with or without adjustment by a covariate. The entire program is run through a graphical user interface. The program can produce boxplots or histograms for all traits, as well as univariate statistics. It also calculates best linear unbiased estimators (BLUEs) and best linear unbiased predictors (BLUPs) for the main response variable and BLUEs for all other traits. For all traits, it calculates variance components by restricted maximum likelihood, least significant difference, coefficient of variation, and broad-sense heritability using PROC MIXED. The program can analyze each location separately, combine the analysis by management conditions, or combine all locations. The flexibility and simplicity of use of this program make it a valuable tool for analyzing METs in breeding and agronomy. The META program can be used by any researcher who knows only a few fundamental principles of SAS.
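Broad-sense heritability from REML variance components, as computed in pipelines like the one described, commonly follows the standard entry-mean formula for METs. A sketch under that assumption (whether META uses exactly this form is not stated here, and the variance components below are invented):

```python
def broad_sense_heritability(var_g, var_gl, var_e, n_loc, n_rep):
    """Entry-mean broad-sense heritability for a multienvironment trial:

        H^2 = var_g / (var_g + var_gl / n_loc + var_e / (n_loc * n_rep))

    var_g: genotypic variance; var_gl: genotype-by-location variance;
    var_e: residual variance; n_loc locations with n_rep replicates each.
    """
    return var_g / (var_g + var_gl / n_loc + var_e / (n_loc * n_rep))

# Example with assumed REML variance components:
h2 = broad_sense_heritability(var_g=4.0, var_gl=2.0, var_e=6.0, n_loc=4, n_rep=2)
```

With these toy components, H^2 is about 0.76: most of the entry-mean variation is genotypic, the regime in which BLUE/BLUP-based selection is most effective.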
After the number of flights decreased by 48 percent in 2020 due to the impact of the coronavirus pandemic and fell further in 2021, flight numbers began to recover in 2022. In 2022, SAS operated 183,500 scheduled flights. The positive trend persisted in the subsequent year, 2023, with a total of 217,000 flights.
The 1990 SAS Transport Files portion of the Archive of Census Related Products (ACRP) contains housing and population data from the U.S. Census Bureau's 1990 Summary Tape File (STF3A) database. The data are available by state, county, county subdivision/MCD, block group, and place, as well as by Indian reservation, tribal district, and congressional district. This portion of the ACRP is produced by the Columbia University Center for International Earth Science Information Network (CIESIN).
https://www.datainsightsmarket.com/privacy-policy
The SAS (Serial Attached SCSI) switch market, while experiencing a period of transition due to the rise of NVMe-over-Fabrics technologies, still holds significant value and is projected for steady growth. The market, estimated at $2.5 billion in 2025, is driven by the continued demand for high-performance storage solutions in data centers, particularly in sectors like cloud computing, high-performance computing (HPC), and big data analytics. Key drivers include the increasing need for higher bandwidth and lower latency in data storage networks, supporting ever-growing data volumes and demanding applications. Leading vendors like Inspur, Huawei, and IBM are actively involved in developing and delivering advanced SAS switch solutions, incorporating features like enhanced scalability, power efficiency, and improved management capabilities. However, the market faces constraints, primarily the gradual adoption of NVMe/TCP and NVMe-oF, which are emerging as compelling alternatives offering superior performance and scalability. This competitive pressure will likely influence the future growth trajectory of the SAS switch market. Despite the emergence of competing technologies, the SAS switch market will continue to maintain relevance for the foreseeable future. The substantial installed base of SAS-based storage infrastructure ensures continued demand for upgrade and replacement cycles. Furthermore, the cost-effectiveness of SAS technology, compared to newer alternatives, especially for certain applications and deployment scenarios, will continue to support market growth. Segmentation within the market, based on factors like port density, switch architecture, and application type, offers opportunities for specialized vendors to cater to niche demands. Geographic expansion, particularly in developing regions with growing data center infrastructure investments, represents another area of potential market expansion. 
The forecast period of 2025-2033 suggests a moderate but consistent growth rate, influenced by a balance of sustained demand and competitive pressure. A CAGR of 5% is a reasonable estimate given these market dynamics, resulting in a market size projected to exceed $3.5 billion by 2033.
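The arithmetic behind that projection is straightforward compound growth from the 2025 base:

```python
base_2025_bn = 2.5       # estimated 2025 market size, USD billions (from the text)
cagr = 0.05              # assumed compound annual growth rate
years = 2033 - 2025      # 8-year forecast horizon

# Compound growth: size_2033 = size_2025 * (1 + CAGR)^years
projected_2033_bn = base_2025_bn * (1 + cagr) ** years
print(round(projected_2033_bn, 2))   # 3.69, consistent with "exceed $3.5 billion"
```

A 5% CAGR over eight years multiplies the base by about 1.48, which is how the $2.5 billion 2025 estimate clears the $3.5 billion mark by 2033.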
Open Database License (ODbL) v1.0https://www.opendatacommons.org/licenses/odbl/1.0/
License information was derived automatically
List of bicycle boxes (SAS) and cyclist right-turn ("tourne-à-droite") authorizations for the territory of the city of Angers * * * * Definition of SAS: inscribed in the road code since 1998, the bicycle box or "sas vélo" (the regulatory term) is a demarcated space which, at an intersection with a traffic light, allows cyclists to position themselves between the stop line of the cars and the pedestrian crossing, and to move to the right or left of the box if they want to turn. Definition of the right-turn authorization: signage entirely dedicated to cyclists. Placed at a red light, it allows them to continue on their way without having to stop, even if the light is red. This authorization is permitted only for the direction(s) indicated by an arrow on the sign (conditional crossing authorization sign M12), and the cyclist must always give priority to other users, including pedestrians. * * * * Description of certain fields in the dataset: VEL_TAD: presence of a cyclist right-turn authorization. VEL_SAS: presence of a bicycle box. POST: signal support > FEUX (automotive tricolor light); VELO (tricolor light on a bicycle facility); TRAM (tricolor light on a tram line). ID_VOIE: road ID; this identifier makes it possible to join with the reference layer Tronçons des voie d'Angers Loire Métropole.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Credit report of I F - Sas contains unique and detailed export-import market intelligence, with its phone, email, LinkedIn, and details of each import and export shipment, such as product, quantity, price, buyer and supplier names, country, and date of shipment.
The simulated synthetic aperture sonar (SAS) data presented here was generated using PoSSM [Johnson and Brown 2018]. The data is suitable for bistatic, coherent signal processing and will form acoustic seafloor imagery. Included in this data package is simulated sonar data in Generic Data Format (GDF) files, a description of the GDF file contents, example SAS imagery, and supporting information about the simulated scenes. In total, there are eleven 60 m x 90 m scenes, labeled scene00 through scene10, with scene00 provided with the scatterers in isolation, i.e. no seafloor texture. This is provided for beamformer testing purposes and should result in an image similar to the one labeled "PoSSM-scene00-scene00-starboard-0.tif" in the Related Data Sets tab. The ten other scenes have varying degrees of model variation as described in "Description_of_Simulated_SAS_Data_Package.pdf". A description of the data and the model is found in the associated document called "Description_of_Simulated_SAS_Data_Package.pdf" and a description of the format in which the raw binary data is stored is found in the related document "PSU_GDF_Format_20240612.pdf". The format description also includes MATLAB code that will effectively parse the data to aid in signal processing and image reconstruction. It is left to the researcher to develop a beamforming algorithm suitable for coherent signal and image processing. Each 60 m x 90 m scene is represented by 4 raw (not beamformed) GDF files, labeled sceneXX-STARBOARD-000000 through 000003. It is possible to beamform smaller scenes from any one of these 4 files, i.e. the four files are combined sequentially to form a 60 m x 90 m image. Also included are comma separated value spreadsheets describing the locations of scatterers and objects of interest within each scene. In addition to the binary GDF data, a beamformed GeoTIFF image and a single-look complex (SLC, science file) data of each scene is provided. 
The SLC (science) data are stored in Hierarchical Data Format 5 (https://www.hdfgroup.org/), with the extension ".hdf5" indicating the HDF5 format. The data are stored as 32-bit real and 32-bit complex values. A viewer is available that provides basic graphing, image display, and directory navigation functions (https://www.hdfgroup.org/downloads/hdfview/). The HDF file contains all the information necessary to reconstruct a synthetic aperture sonar image, and all major contemporary programming languages have library support for encoding/decoding the HDF5 format. The positions of the seafloor scatterers are listed in "Scatterer_Locations_Scene00.csv", while the locations of the objects of interest for scene01 through scene10 are listed in "Object_Locations_All_Scenes.csv". Portable Network Graphics (PNG) images plotting the locations of all objects of interest in each scene, in along-track and cross-track coordinates, are also provided.
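To illustrate reading complex-valued SLC data from HDF5, the sketch below round-trips a small complex64 array (32-bit real plus 32-bit imaginary per pixel) using the widely used `h5py` library. The file name and the dataset name "slc" are assumptions for illustration; the actual group and dataset layout is described in the package documentation.

```python
import numpy as np
import h5py

# Toy complex64 image standing in for an SLC scene: 32-bit real and
# 32-bit imaginary components per pixel, as described for the science files.
slc = (np.random.randn(4, 8) + 1j * np.random.randn(4, 8)).astype(np.complex64)

# Write the array to an HDF5 file; "slc" is a hypothetical dataset name.
with h5py.File("toy_slc.hdf5", "w") as f:
    f.create_dataset("slc", data=slc)

# Read it back and derive an intensity image from the complex samples.
with h5py.File("toy_slc.hdf5", "r") as f:
    data = f["slc"][:]          # load the full complex image into memory
    magnitude = np.abs(data)    # e.g., for display as seafloor intensity imagery

print(data.dtype, data.shape)
```

An HDF5 browser such as HDFView (linked above) can be used first to discover the real group and dataset names before hard-coding them.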
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/