Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains simulations of brain connectivity with known population differences at local and global levels. It is research data accompanying a submitted paper.
analyze the current population survey (cps) annual social and economic supplement (asec) with r

the annual march cps-asec has been supplying the statistics for the census bureau's report on income, poverty, and health insurance coverage since 1948. wow. the us census bureau and the bureau of labor statistics (bls) tag-team on this one. until the american community survey (acs) hit the scene in the early aughts (2000s), the current population survey had the largest sample size of all the annual general demographic data sets outside of the decennial census - about two hundred thousand respondents. this provides enough sample to conduct state- and a few large metro area-level analyses. your sample size will vanish if you start investigating subgroups by state - consider pooling multiple years. county-level is a no-no.

despite the american community survey's larger size, the cps-asec contains many more variables related to employment, sources of income, and insurance - and can be trended back to harry truman's presidency. aside from questions specifically asked about an annual experience (like income), many of the questions in this march data set should be treated as point-in-time statistics. cps-asec generalizes to the united states non-institutional, non-active duty military population.

the national bureau of economic research (nber) provides sas, spss, and stata importation scripts to create a rectangular file (rectangular data means only person-level records; household- and family-level information gets attached to each person). to import these files into r, the parse.SAScii function uses nber's sas code to determine how to import the fixed-width file, then RSQLite to put everything into a schnazzy database. you can try reading through the nber march 2012 sas importation code yourself, but it's a bit of a proc freak show.

this new github repository contains three scripts:

2005-2012 asec - download all microdata.R
- download the fixed-width file containing household, family, and person records
- import by separating this file into three tables, then merge 'em together at the person-level
- download the fixed-width file containing the person-level replicate weights
- merge the rectangular person-level file with the replicate weights, then store it in a sql database
- create a new variable - one - in the data table

2012 asec - analysis examples.R
- connect to the sql database created by the 'download all microdata' program
- create the complex sample survey object, using the replicate weights
- perform a boatload of analysis examples

replicate census estimates - 2011.R
- connect to the sql database created by the 'download all microdata' program
- create the complex sample survey object, using the replicate weights
- match the sas output shown in the png file below

2011 asec replicate weight sas output.png: statistic and standard error generated from the replicate-weighted example sas script contained in this census-provided person replicate weights usage instructions document.

click here to view these three scripts

for more detail about the current population survey - annual social and economic supplement (cps-asec), visit:
- the census bureau's current population survey page
- the bureau of labor statistics' current population survey page
- the current population survey's wikipedia article

notes: interviews are conducted in march about experiences during the previous year. the file labeled 2012 includes information (income, work experience, health insurance) pertaining to 2011.
when you use the current population survey to talk about america, subtract a year from the data file name. as of the 2010 file (the interview focusing on america during 2009), the cps-asec contains exciting new medical out-of-pocket spending variables most useful for supplemental (medical spending-adjusted) poverty research. confidential to sas, spss, stata, sudaan users: why are you still rubbing two sticks together after we've invented the butane lighter? time to transition to r. :D
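not sure what those scripts boil down to? here's a minimal sketch (not the repository's actual code) of the core import-and-design step on an already-rectangular person-level file. the nber script url, the data file name, and the variable names (marsupwt, pwwgt replicate weights, ptotval) are assumptions to double-check against the census replicate weight documentation, and the replication type passed to svrepdesign is likewise only illustrative.

# load the two workhorse packages
library(SAScii)    # reads fixed-width files using a sas INPUT block
library(survey)    # builds and analyzes complex sample survey designs

# hypothetical locations - substitute the real nber script and asec data file
sas_script <- "http://www.nber.org/data/progs/cps/cpsmar2012.sas"
fwf_file   <- "asec2012_pubuse.dat"

# read.SAScii() parses the sas importation code to find every column's position
asec <- read.SAScii( fwf_file , sas_ri = sas_script )

# construct the replicate-weight design; the weight names, replication type,
# and rho shown here are assumptions, not the scripts' confirmed settings
asec_design <-
	svrepdesign(
		data             = asec ,
		weights          = ~marsupwt ,
		repweights       = "pwwgt[0-9]+" ,
		type             = "Fay" ,
		rho              = 0.5 ,
		combined.weights = TRUE
	)

# an illustrative estimate: mean total person income among all records
svymean( ~ptotval , asec_design , na.rm = TRUE )

once the design object exists, svymean() and friends behave exactly as they would for any other survey package design.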
https://dataintelo.com/privacy-and-policy
According to our latest research, the global SAS HBA market size was valued at USD 1.62 billion in 2024, reflecting robust demand across data-intensive sectors. The market is projected to expand at a CAGR of 7.1% from 2025 to 2033, reaching a forecasted value of USD 3.02 billion by 2033. This growth is primarily driven by the increasing need for high-speed data transfer, rising adoption of big data analytics, and the proliferation of data centers worldwide.
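As a quick arithmetic check of the figures quoted above, compounding the 2024 base at the stated CAGR over the nine years from 2025 through 2033 lands at roughly the forecast value; the short R snippet below illustrates the calculation.

# compound-growth check of the quoted market-size forecast
base_2024 <- 1.62        # USD billion, 2024 market size
cagr      <- 0.071       # 7.1% compound annual growth rate
years     <- 9           # 2025 through 2033 inclusive
base_2024 * (1 + cagr)^years   # ~3.0, in line with the USD 3.02 billion forecast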
A significant growth factor propelling the SAS HBA market is the exponential surge in data generation and storage requirements across enterprises. As organizations shift towards digital transformation, the volume of data created, processed, and stored has skyrocketed, necessitating advanced storage solutions. SAS HBAs (Serial Attached SCSI Host Bus Adapters) play a vital role in ensuring seamless connectivity between servers and storage devices, delivering high throughput, low latency, and robust reliability. Enterprises in sectors such as BFSI, healthcare, and e-commerce are increasingly investing in scalable storage infrastructure, further fueling the demand for SAS HBAs. Additionally, the rise in virtualization and cloud computing has placed greater emphasis on high-performance storage networks, where SAS HBAs are indispensable.
Another critical driver for the SAS HBA market is the ongoing evolution of IT infrastructure, particularly in data centers and enterprise environments. As organizations upgrade to next-generation servers and storage arrays, there is a corresponding need for advanced connectivity solutions that can handle higher data rates and larger workloads. The transition from traditional HDDs to hybrid and all-flash storage arrays has amplified the requirement for reliable and high-speed SAS interfaces. SAS HBAs are integral to these deployments, offering superior scalability, backward compatibility, and enhanced data protection features. Furthermore, the emergence of hyper-converged infrastructure and software-defined storage is creating new opportunities for SAS HBA vendors to innovate and capture market share.
The SAS HBA market is also benefiting from the growing adoption of analytics, artificial intelligence, and machine learning applications across industries. These technologies demand rapid access to vast datasets, driving the need for fast and efficient storage connectivity. SAS HBAs, with their ability to support multiple devices and high data transfer rates, are increasingly preferred in environments where performance and reliability are paramount. Moreover, advancements in SAS technology, such as 24G SAS, are further enhancing the capabilities of HBAs, enabling them to meet the evolving requirements of modern workloads. This technological progress is expected to sustain the growth momentum of the SAS HBA market over the forecast period.
From a regional perspective, North America continues to dominate the SAS HBA market, owing to its advanced IT infrastructure, high concentration of data centers, and early adoption of cutting-edge technologies. The Asia Pacific region, however, is witnessing the fastest growth, driven by rapid digitalization, expanding cloud services, and increasing investments in data center construction. Europe follows closely, with significant demand from the BFSI, healthcare, and manufacturing sectors. Latin America and the Middle East & Africa are emerging markets, gradually increasing their share due to rising IT investments and the modernization of enterprise infrastructure. This diverse regional landscape underscores the global relevance and growth potential of the SAS HBA market.
The SAS HBA market is segmented by product type into Internal SAS HBA, External SAS HBA, and Universal SAS HBA. Internal SAS HBAs are widely adopted in enterprise servers and storage devices, providing direct connectivity between the motherboard and SAS-enabled drives within the same chassis. These adapters are favored for their reliability, high-speed data transfer, and ease of integration into existing server architectures. As enterprises continue to scale their storage systems to accommodate growing data volumes, the demand for internal SAS HBAs remains strong, particularly in mission-critical applications where performance and data integrity are crucial. The ongoing trend of server virtualization and the deployment of high-density storage solutions further bolster the adoption of internal SAS HBAs.
The simulated synthetic aperture sonar (SAS) data presented here was generated using PoSSM [Johnson and Brown 2018]. The data is suitable for bistatic, coherent signal processing and will form acoustic seafloor imagery. Included in this data package is simulated sonar data in Generic Data Format (GDF) files, a description of the GDF file contents, example SAS imagery, and supporting information about the simulated scenes. In total, there are eleven 60 m x 90 m scenes, labeled scene00 through scene10, with scene00 provided with the scatterers in isolation, i.e. no seafloor texture. This is provided for beamformer testing purposes and should result in an image similar to the one labeled "PoSSM-scene00-scene00-starboard-0.tif" in the Related Data Sets tab. The ten other scenes have varying degrees of model variation as described in "Description_of_Simulated_SAS_Data_Package.pdf".

A description of the data and the model is found in the associated document called "Description_of_Simulated_SAS_Data_Package.pdf" and a description of the format in which the raw binary data is stored is found in the related document "PSU_GDF_Format_20240612.pdf". The format description also includes MATLAB code that will effectively parse the data to aid in signal processing and image reconstruction. It is left to the researcher to develop a beamforming algorithm suitable for coherent signal and image processing. Each 60 m x 90 m scene is represented by 4 raw (not beamformed) GDF files, labeled sceneXX-STARBOARD-000000 through 000003. It is possible to beamform smaller scenes from any one of these 4 files, i.e. the four files are combined sequentially to form a 60 m x 90 m image. Also included are comma separated value spreadsheets describing the locations of scatterers and objects of interest within each scene.

In addition to the binary GDF data, a beamformed GeoTIFF image and single-look complex (SLC, science file) data for each scene are provided. The SLC (science) data is stored in the Hierarchical Data Format 5 (https://www.hdfgroup.org/), and appended with ".hdf5" to indicate the HDF5 format. The data are stored as 32-bit real and 32-bit complex values. A viewer is available that provides basic graphing, image display, and directory navigation functions (https://www.hdfgroup.org/downloads/hdfview/). The HDF file contains all the information necessary to reconstruct a synthetic aperture sonar image. All major and contemporary programming languages have library support for encoding/decoding the HDF5 format. Supporting documentation that outlines positions of the seafloor scatterers is included in "Scatterer_Locations_Scene00.csv", while the locations of the objects of interest for scene01-scene10 are included in "Object_Locations_All_Scenes.csv". Portable Network Graphic (PNG) images that plot the locations of all the objects of interest in each scene in Along-Track and Cross-Track notation are provided.
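Because the SLC science files are plain HDF5, they can be inspected from any language with HDF5 bindings. The sketch below uses R's rhdf5 package purely as an example; the file name and the dataset path are hypothetical, and the actual group and dataset names should be taken from the h5ls() listing and the package documentation.

# list and read one single-look complex (SLC) science file - illustrative only
library(rhdf5)

slc_file <- "scene01_starboard_slc.hdf5"   # hypothetical file name

h5ls(slc_file)                             # show every group/dataset and its dimensions

slc_raw <- h5read(slc_file, "/slc")        # hypothetical dataset path taken from h5ls()

# depending on how the 32-bit complex samples are encoded, they may arrive as a
# complex matrix or as separate real/imaginary components; recombine if needed, e.g.
# slc_raw <- complex(real = slc_raw$real, imaginary = slc_raw$imag)

# quick-look magnitude image on a decibel scale (assumes a complex-valued matrix)
img_db <- 20 * log10(abs(slc_raw) + .Machine$double.eps)
image(img_db, useRaster = TRUE, col = gray.colors(256))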
The 1990 SAS Transport Files portion of the Archive of Census Related Products (ACRP) contains housing and population data from the U.S. Census Bureau's 1990 Summary Tape File (STF3A) database. The data are available by state, county, county subdivision/MCD, block group, and place, as well as by Indian reservations, tribal districts, and congressional districts. This portion of the ACRP is produced by the Columbia University Center for International Earth Science Information Network (CIESIN).
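SAS transport (XPORT) files such as these can also be read directly into R without SAS; the short sketch below uses the haven package, and the file name shown is hypothetical.

# read one 1990 STF3A transport file into a data frame - file name is hypothetical
library(haven)

stf3a_county <- read_xpt("stf3a_county.xpt")
str(stf3a_county)   # inspect variable names and types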
analyze the health and retirement study (hrs) with r

the hrs is the one and only longitudinal survey of american seniors. with a panel starting its third decade, the current pool of respondents includes older folks who have been interviewed every two years as far back as 1992. unlike cross-sectional or shorter panel surveys, respondents keep responding until, well, death do us part. paid for by the national institute on aging and administered by the university of michigan's institute for social research, if you apply for an interviewer job with them, i hope you like werther's original.

figuring out how to analyze this data set might trigger your fight-or-flight synapses if you just start clicking around on michigan's website. instead, read pages numbered 10-17 (pdf pages 12-19) of this introduction pdf and don't touch the data until you understand figure a-3 on that last page. if you start enjoying yourself, here's the whole book. after that, it's time to register for access to the (free) data. keep your username and password handy, you'll need it for the top of the download automation r script. next, look at this data flowchart to get an idea of why the data download page is such a righteous jungle.

but wait, good news: umich recently farmed out its data management to the rand corporation, who promptly constructed a giant consolidated file with one record per respondent across the whole panel. oh so beautiful. the rand hrs files make much of the older data and syntax examples obsolete, so when you come across stuff like instructions on how to merge years, you can happily ignore them - rand has done it for you.

the health and retirement study only includes noninstitutionalized adults when new respondents get added to the panel (as they were in 1992, 1993, 1998, 2004, and 2010) but once they're in, they're in - respondents have a weight of zero for interview waves when they were nursing home residents; but they're still responding and will continue to contribute to your statistics so long as you're generalizing about a population from a previous wave (for example: it's possible to compute "among all americans who were 50+ years old in 1998, x% lived in nursing homes by 2010"). my source for that 411? page 13 of the design doc. wicked.
this new github repository contains five scripts:

1992 - 2010 download HRS microdata.R
- loop through every year and every file, download, then unzip everything in one big party

import longitudinal RAND contributed files.R
- create a SQLite database (.db) on the local disk
- load the rand, rand-cams, and both rand-family files into the database (.db) in chunks (to prevent overloading ram)

longitudinal RAND - analysis examples.R
- connect to the sql database created by the 'import longitudinal RAND contributed files' program
- create two database-backed complex sample survey objects, using a taylor-series linearization design
- perform a mountain of analysis examples with wave weights from two different points in the panel

import example HRS file.R
- load a fixed-width file using only the sas importation script directly into ram with SAScii (http://blog.revolutionanalytics.com/2012/07/importing-public-data-with-sas-instructions-into-r.html)
- parse through the IF block at the bottom of the sas importation script, blank out a number of variables
- save the file as an R data file (.rda) for fast loading later

replicate 2002 regression.R
- connect to the sql database created by the 'import longitudinal RAND contributed files' program
- create a database-backed complex sample survey object, using a taylor-series linearization design
- exactly match the final regression shown in this document provided by analysts at RAND as an update of the regression on pdf page B76 of this document

click here to view these five scripts

for more detail about the health and retirement study (hrs), visit:
- michigan's hrs homepage
- rand's hrs homepage
- the hrs wikipedia page
- a running list of publications using hrs

notes: exemplary work making it this far. as a reward, here's the detailed codebook for the main rand hrs file. note that rand also creates 'flat files' for every survey wave, but really, most every analysis you can think of is possible using just the four files imported with the rand importation script above. if you must work with the non-rand files, there's an example of how to import a single hrs (umich-created) file, but if you wish to import more than one, you'll have to write some for loops yourself. confidential to sas, spss, stata, and sudaan users: a tidal wave is coming. you can get water up your nose and be dragged out to sea, or you can grab a surf board. time to transition to r. :D
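for a flavor of what the 'longitudinal RAND - analysis examples' step looks like, here's a minimal sketch (not the repository's script) of a database-backed, taylor-series-linearized design. the table name, database name, and variable names (raehsamp, raestrat, r10wtresp, r10agey_e) are assumptions - confirm them against the rand hrs codebook linked above before trusting any numbers.

# database-backed complex sample design on the consolidated rand hrs file
library(RSQLite)   # sqlite driver used by the database-backed design
library(survey)    # complex sample survey analysis

hrs_design <-
	svydesign(
		id      = ~raehsamp ,    # sampling error computation unit
		strata  = ~raestrat ,    # sampling error stratum
		weights = ~r10wtresp ,   # wave 10 (2010) person-level respondent weight
		nest    = TRUE ,
		data    = "rand_hrs" ,   # table name inside the sqlite database
		dbtype  = "SQLite" ,
		dbname  = "hrs.db"       # database file created by the import script
	)

# illustrative weighted mean of age at the 2010 wave,
# restricted to records with a nonzero wave weight
svymean( ~r10agey_e , subset( hrs_design , r10wtresp > 0 ) , na.rm = TRUE )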
According to our latest research, the global SAS HBA (Serial Attached SCSI Host Bus Adapter) market size reached USD 1.47 billion in 2024, and it is poised to grow at a CAGR of 7.1% during the forecast period, reaching an estimated USD 2.74 billion by 2033. This robust growth is driven by increasing demand for high-speed and reliable data transfer solutions across data centers, enterprise storage, and server environments. The proliferation of big data analytics, cloud computing, and the expansion of enterprise IT infrastructure are among the primary factors fueling market expansion, as organizations worldwide seek efficient and scalable storage connectivity solutions.
One of the most significant growth factors for the SAS HBA market is the exponential rise in data generation and storage requirements across various industries. With digital transformation initiatives accelerating globally, organizations are investing heavily in advanced storage systems to manage and process vast volumes of data efficiently. SAS HBAs play a crucial role in enabling high-speed, low-latency connections between servers and storage devices, ensuring seamless data flow and robust performance. The growing adoption of cloud-based services, virtualization, and high-performance computing (HPC) further amplifies the need for scalable and reliable storage connectivity, driving the demand for SAS HBA solutions in both enterprise and hyperscale data center environments.
Another critical driver propelling the SAS HBA market is the ongoing evolution of storage technologies and the increasing complexity of enterprise IT infrastructure. As businesses transition from traditional storage architectures to more sophisticated, hybrid, and software-defined storage environments, the need for versatile and high-capacity connectivity solutions has become paramount. SAS HBAs offer backward compatibility, enhanced error correction, and superior scalability compared to legacy solutions, making them an ideal choice for organizations seeking to future-proof their storage investments. The integration of advanced features such as multi-path I/O, improved power management, and support for higher data transfer rates positions SAS HBAs as essential components in modern IT ecosystems.
Furthermore, the surge in demand for mission-critical applications and real-time data processing across sectors such as BFSI, healthcare, manufacturing, and government is accelerating the adoption of SAS HBA solutions. These applications require uninterrupted access to large datasets and depend on the high reliability and performance provided by SAS HBA technology. The increasing prevalence of AI, machine learning, and IoT-driven workloads is also contributing to the market's momentum, as these technologies necessitate robust storage connectivity to handle intensive data processing requirements. As a result, vendors are continuously innovating and expanding their product portfolios to cater to the evolving needs of diverse end-users.
In addition to SAS HBAs, Fibre Channel HBA technology is gaining traction as an alternative storage connectivity solution, particularly in environments where high-speed data transfer and low latency are critical. Fibre Channel HBAs are known for their ability to provide dedicated bandwidth and enhanced reliability, making them a preferred choice for mission-critical applications in sectors such as finance, healthcare, and telecommunications. As organizations continue to seek robust and scalable storage solutions, the integration of Fibre Channel HBAs into existing IT infrastructures offers a pathway to achieving optimal performance and efficiency. The growing adoption of this technology underscores the importance of versatile connectivity options in modern data center environments.
From a regional perspective, North America continues to dominate the global SAS HBA market, accounting for the largest revenue share in 2024, followed by Europe and the Asia Pacific. The strong presence of leading technology companies, early adoption of advanced storage solutions, and significant investments in data center infrastructure are key factors supporting North America's leadership position. Meanwhile, the Asia Pacific region is witnessing the fastest growth, driven by rapid digitalization, expanding enterprise IT infrastructure, and increasing investment
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
SAS Code for Spatial Optimization of Supply Chain Network for Nitrogen Based Fertilizer in North America, by type, by mode of transportation, per county, for all major crops, using Proc OptModel. The code specifies a set of random values to run the mixed-integer stochastic spatial optimization model repeatedly and collects the results of each simulation, which are then compiled and exported to be projected in GIS (geographic information systems). Certain supply nodes (fertilizer plants) are required to operate at 70 percent of their capacities or more. Capacities are specified for the supply nodes (fertilizer plants), demand nodes (county centroids), and transshipment nodes (transfer points where the mode may change), and the actual distance travelled is specified over the arcs.
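The dataset itself ships SAS Proc OptModel code; as an illustration of the same network structure (supply nodes with a 70 percent minimum utilization, a transshipment point, county demand nodes, and costs on arcs), here is a deliberately tiny R sketch using the lpSolve package. All node names, capacities, demands, and costs are made up for the example, and integer and stochastic features are omitted.

# toy transshipment model: two plants -> one transfer point -> two counties
library(lpSolve)

# decision variables (tons shipped on each arc):
#   x1 = plant1 -> transfer, x2 = plant2 -> transfer,
#   x3 = transfer -> county1, x4 = transfer -> county2
cost <- c(4, 5, 2, 3)                  # per-ton transport cost on each arc

con <- rbind(
  c(1, 0, 0, 0),                       # plant1 shipment <= capacity (100)
  c(1, 0, 0, 0),                       # plant1 shipment >= 70% of capacity (70)
  c(0, 1, 0, 0),                       # plant2 shipment <= capacity (80)
  c(0, 1, 0, 0),                       # plant2 shipment >= 70% of capacity (56)
  c(1, 1, -1, -1),                     # flow conservation at the transfer point
  c(0, 0, 1, 0),                       # county1 demand met
  c(0, 0, 0, 1)                        # county2 demand met
)
dir <- c("<=", ">=", "<=", ">=", "=", ">=", ">=")
rhs <- c(100, 70, 80, 56, 0, 70, 60)

sol <- lp("min", cost, con, dir, rhs)
sol$solution                           # optimal tons on each arc
sol$objval                             # minimized transport cost

# the sas program re-solves a model like this many times with random inputs; the
# analogous idea here would wrap lp() in a loop over random demand draws and
# collect sol$solution from each run before exporting the results to GIS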
The 1980 SAS Transport Files portion of the Archive of Census Related Products (ACRP) contains housing and population demographics from the 1980 Summary Tape File (STF3A) database, organized by state. The population data include education levels, ethnicity, income distribution, nativity, labor force status, means of transportation, and family structure, while the housing data cover the size, state, and structure of the housing unit; value of the unit; tenure and occupancy status; source of water; sewage disposal; availability of telephone, heating, and air conditioning; kitchen facilities; rent; mortgage status; and monthly owner costs. This portion of the ACRP is produced by the Columbia University Center for International Earth Science Information Network (CIESIN).
Patient appointment information is obtained from the Veterans Health Information Systems and Technology Architecture Scheduling module. The Patient Appointment Information application gathers appointment data to be loaded into a national database for statistical reporting. Patient appointments are scanned from September 1, 2002 to the present, and appointment data meeting specified criteria are transmitted to the Austin Information Technology Center Patient Appointment Information Transmission (PAIT) national database. Subsequent bi-monthly transmissions update PAIT via Health Level Seven message transmissions through Vitria Interface Engine (VIE) connections. A Statistical Analysis Software (SAS) program in Austin utilizes PAIT data to create a bi-monthly SAS dataset on the Austin mainframe. This additional data is used to supplement the existing Clinic Appointment Wait Time and Clinic Utilization extracts created by the Veterans Health Administration Support Service Center (VSSC).
Introduction: This study aimed to investigate the possible associations between problematic smartphone use and brain functions in terms of both static and dynamic functional connectivity patterns.

Materials and methods: Resting-state functional magnetic resonance imaging data were scanned from 53 young healthy adults, all of whom completed the Short Version of the Smartphone Addiction Scale (SAS-SV) to assess their problematic smartphone use severity. Both static and dynamic functional brain network measures were evaluated for each participant. The brain network measures were correlated with the SAS-SV scores, and compared between participants with and without problematic smartphone use after adjusting for sex, age, education, and head motion.

Results: Two participants were excluded because of excessive head motion, and 56.9% (29/51) of the final analyzed participants were found to have problematic smartphone use (SAS-SV scores ≥ 31 for males and ≥ 33 for females, as proposed in prior research). At the global network level, the SAS-SV score was found to be significantly positively correlated with the global efficiency and local efficiency of static brain networks, and negatively correlated with the temporal variability using the dynamic brain network model. Large-scale subnetwork analyses indicated that a higher SAS-SV score was significantly associated with higher strengths of static functional connectivity within the frontoparietal and cinguloopercular subnetworks, as well as a lower temporal variability of dynamic functional connectivity patterns within the attention subnetwork. However, no significant differences were found when directly comparing between the groups of participants with and without problematic smartphone use.

Conclusion: Our results suggested that problematic smartphone use is associated with differences in both the static and dynamic brain network organizations in young adults. These findings may help to identify at-risk populations for smartphone addiction and guide targeted interventions for further research. Nevertheless, it might be necessary to confirm our findings in a larger sample, and to investigate if a more applicable SAS-SV cutoff point is required for defining problematic smartphone use in young Chinese adults nowadays.
This data set contains a composite of the highest resolution (i.e. the "native" resolution) upper air sounding data from all sources for the Southeast Atmosphere Study (SAS). Sounding data is included from two sources: the National Weather Service (16 sites and 1438 soundings) and the NCAR/EOL ISS GAUS radiosonde site near the SOAS Centreville site in central Alabama (1 site and 105 soundings). Included are soundings from 30 May to 15 July 2013.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This dataset provides granular records of spectrum assignments in the CBRS 3.5GHz band, detailing the three-tier access model (Incumbent, PAL, GAA), frequency blocks, locations, SAS operator coordination, and assignment status. It enables analysis of dynamic spectrum allocation, regulatory compliance, and operational insights for wireless network planning and management.
https://www.cancerimagingarchive.net/data-usage-policies-and-restrictions/
The Data Integration & Imaging Informatics (DI-Cubed) project explored the issue of lack of standardized data capture at the point of data creation, as reflected in the non-image data accompanying 4 TCIA breast cancer collections (Multi-center breast DCE-MRI data and segmentations from patients in the I-SPY 1/ACRIN 6657 trials (ISPY1), BREAST-DIAGNOSIS, Single site breast DCE-MRI data and segmentations from patients undergoing neoadjuvant chemotherapy (Breast-MRI-NACT-Pilot), The Cancer Genome Atlas Breast Invasive Carcinoma Collection (TCGA-BRCA)) and the Ivy Glioblastoma Atlas Project (IvyGAP) brain cancer collection. The work addressed the desire for semantic interoperability between various NCI initiatives by aligning on common clinical metadata elements and supporting use cases that connect clinical, imaging, and genomics data. Accordingly, clinical and measurement data imported into I2B2 were cross-mapped to industry-standard concepts for names and values, including those derived from the BRIDG, CDISC SDTM, and DICOM Structured Reporting models, using NCI Thesaurus, SNOMED CT, and LOINC controlled terminology. A subset of the standardized data was then exported from I2B2 in SDTM-compliant SAS transport files. The SDTM data were derived from both the curated TCIA spreadsheets and from tumor measurements and dates retrieved through the TCIA RESTful API. Due to the nature of the available data, not all SDTM conformance rules were applicable or adhered to. These Study Data Tabulation Model (SDTM) datasets were validated using Pinnacle 21 CDISC validation software. The validation software reviews datasets according to their degree of conformance to rules developed for the purposes of FDA submissions of electronic data. Iterative refinements were made to the datasets based upon group discussions and feedback from the validation tool. Export datasets for the following SDTM domains were generated:
Breast density is a radiologic feature that reflects fibroglandular tissue content relative to breast area or volume, and it is a breast cancer risk factor. This study employed deep learning approaches to identify histologic correlates in radiologically-guided biopsies that may underlie breast density and distinguish cancer among women with elevated and low density.

Data access: Datasets supporting figure 2, tables 2 and 3 and supplementary table 2 of the published article are publicly available in the figshare repository, as part of this data record (https://doi.org/10.6084/m9.figshare.9786152). These datasets are contained in the zip file NPJ FigShare.zip. Datasets supporting figure 3, table 1 and supplementary table 1 of the published article are not publicly available to protect patient privacy, but can be made available on request from Dr. Gretchen L. Gierach, Senior Investigator, Division of Cancer Epidemiology and Genetics, National Cancer Institute, Bethesda, MD, USA, email address: gierachg@mail.nih.gov.

Study description and aims: The study aimed to identify tissue correlates of breast density that may be important for distinguishing malignant from benign biopsy diagnoses separately among women with high and low breast density, to help inform cancer risk stratification among women undergoing a biopsy following an abnormal mammogram. Haematoxylin and eosin (H&E)-stained digitized images from image-guided breast biopsies (n=852 patients) were evaluated. Breast density was assessed as global and localized fibroglandular volume (%). A convolutional neural network characterized H&E composition. 37 features were extracted from the network output, describing tissue quantities and morphological structure. A random forest regression model was trained to identify correlates most predictive of fibroglandular volume (n=588). Correlations between predicted and radiologically quantified fibroglandular volume were assessed in 264 independent patients. A second random forest classifier was trained to predict diagnosis (invasive vs. benign); performance was assessed using area under receiver-operating characteristics curves (AUC). For more details on the methodology please see the published article.

Study approval: The Institutional Review Boards at the NCI and the University of Vermont approved the protocol for this project for either active consenting or a waiver of consent to enrol participants, link data and perform analytical studies.

Dataset descriptions: Data supporting figure 2: Datasets Figure 2A H&E.jpg, Figure 2A Mammogram.jpg, Figure 2B H&E.jpg and Figure 2B Mammogram.jpg are in .jpg file format and consist of histological whole slide H&E images and corresponding full-field digital mammograms from patients whose biopsies yielded diagnoses of atypical ductal hyperplasia and invasive carcinoma. Data supporting figure 3: Dataset Figure 3.xls is in .xls file format and contains raw data used to generate the Receiver Operating Characteristic (ROC) curves for the prediction of invasive cancer among women with high percent global fibroglandular volume, low percent global fibroglandular volume, high percent localized fibroglandular volume and low percent localized fibroglandular volume. Data supporting table 1: Dataset Table1_analysis.sas7bdat is in SAS file format and contains the characteristics of study participants in the BREAST Stamp Project, who were referred for an image-guided breast biopsy, stratified by the training and testing sets (n = 852).
Data supporting table 2: Datasets Global FGV.xls (accompanying Global FGV.png file) and Localized FGV.xls (accompanying Localized FGV.png file) are in .xls file format and the accompanying files are in .png file format. The data contain histologic features identified in the random forest model for the prediction of global and localized % fibroglandular volume.

Data supporting table 3: Datasets HighGlobal_feature_importance.xls, HighGlobal_feature_importance.pdf, HighLocal_feature_importance.xls, HighLocal_feature_importance.pdf, LowGlobal_feature_importance.xls, LowGlobal_feature_importance.pdf, LowLocal_feature_importance.xls, LowLocal_feature_importance.pdf are in .xls file format. The accompanying figures generated from the data in the .xls files are in .pdf file format. These files contain histologic features identified in the random forest model for the prediction of invasive cancer status among women with high vs. low % fibroglandular volume.

Data supporting supplementary table 1: Datasets testfeatures.xls and trainfeatures.xls are in .xls file format and include the distribution and description of the 37 histologic features extracted from the convolutional neural network deep learning output in the H&E stained whole slide images from the training and testing sets.

Data supporting supplementary table 2: Datasets All_samples_global.xls, All_samples_global.png, All_samples_local.xls, All_samples_local.png, PostMeno_global.xls, PostMeno_global.png, PostMeno_local.xls, PostMeno_local.png, PreMeno_global.xls, PreMeno_global.png, PreMeno_local.xls, PreMeno_local.png are in .xls file format. The accompanying figures generated from the data in the .xls files are in .png file format. These data include the histologic features identified in the random forest model that included BMI for the prediction of global and localized % fibroglandular volume.

Software needed to access the data: Data files in SAS file format require the SAS software to be accessed.
https://www.marketreportanalytics.com/privacy-policy
The Indonesia Big Data Analytics Software market is experiencing robust growth, projected to reach a market size of $43.15 million in 2025, exhibiting a Compound Annual Growth Rate (CAGR) of 9.35% from 2019 to 2033. This expansion is driven by several key factors. The increasing adoption of cloud-based solutions offers scalability and cost-effectiveness, appealing to both SMEs and large enterprises. Furthermore, various end-user verticals, including manufacturing, oil and gas, retail, healthcare, and others, are increasingly leveraging big data analytics to gain valuable insights from their data, improve operational efficiency, and enhance decision-making processes. Government initiatives promoting digital transformation and technological advancement within Indonesia are also contributing significantly to market growth. The preference for on-premises solutions remains, catering to organizations with stringent data security and compliance requirements. However, this segment's growth might be comparatively slower than the cloud segment due to higher initial investment costs and ongoing maintenance needs. Competition is fierce, with established players like Teradata, SAS, SAP, Tableau, IBM, Oracle, Google, Microsoft, and Cloudera, among others, vying for market share. This competitive landscape fosters innovation and drives the development of advanced analytics solutions tailored to the specific needs of the Indonesian market. The forecast period (2025-2033) anticipates continued strong growth, fueled by increasing digitalization across industries and a rising demand for data-driven insights. While precise figures for individual market segments and regional breakdowns within Indonesia are unavailable, extrapolating from the overall market size and CAGR suggests a substantial expansion across all segments. Growth will likely be unevenly distributed, with the cloud deployment mode and large enterprise segments potentially outpacing others due to their higher adoption rates and greater budgets for advanced analytics technology. The success of individual vendors will depend on factors such as their ability to adapt to the local market’s specific needs, provide strong customer support, and offer competitive pricing and technological advancements. Recent developments include: June 2024: Indosat Ooredoo Hutchison (Indosat) and Google Cloud expanded their long-term alliance to accelerate Indosat’s transformation from telco to AI Native TechCo. The collaboration will combine Indosat’s vast network, operational, and customer datasets with Google Cloud’s unified AI stack to deliver exceptional experiences to over 100 million Indosat customers and generative AI (GenAI) solutions for businesses across Indonesia. These include geospatial analytics and predictive modeling, real-time conversation analysis, and back-office transformation. Indosat’s early adoption of an AI-ready data analytics platform exemplifies its forward-thinking approach., June 2024: Palo Alto Networks launched a new cloud facility in Indonesia, catering to the rising demand for local data residency compliance. The move empowers organizations in Indonesia with access to Palo Alto Networks' Cortex XDR advanced AI and analytics platform that offers a comprehensive security solution by unifying endpoint, network, and cloud data. With this new infrastructure, Indonesian customers can ensure data residency by housing their logs and analytics within the country.. 
Key drivers for this market are: Higher Emphasis on the Use of Analytics Tools to Empower Decision Making, Rapid Increase in the Generation of Data Coupled with Availability of Several End User Specific Tools due to the Growth in the Local Landscape. Potential restraints include: Higher Emphasis on the Use of Analytics Tools to Empower Decision Making, Rapid Increase in the Generation of Data Coupled with Availability of Several End User Specific Tools due to the Growth in the Local Landscape. Notable trends are: Small and Medium Enterprises to Hold Major Market Share.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Factors associated with participants who engaged in CVI between 2008 and 2009 in China (N = 2958).
https://www.icpsr.umich.edu/web/ICPSR/studies/37171/terms
These data are part of NACJD's Fast Track Release and are distributed as they were received from the data depositor. The files have been zipped by NACJD for release, but not checked or processed except for the removal of direct identifiers. Users should refer to the accompanying readme file for a brief description of the files available with this collection and consult the investigator(s) if further information is needed. This study addressed the dearth of information about facilitators of transnational organized crime (TOC) by developing a method for identifying criminal facilitators of TOC within existing datasets and extending the available descriptive information about facilitators through analysis of pre-sentence investigation reports (PSRs). The study involved a two-step process: the first step involved the development of a methodology for identifying TOCFs; the second step involved screening PSRs to validate the methodology and systematically collect data on facilitators and their organizations. Our ultimate goal was to develop a predictive model which can be applied to identify TOC facilitators in the data efficiently. The collection contains 1 syntax text file (TOCF_Summary_Stats_NACJD.sas). No data is included in this collection.
This CD consists of a series of data files and SAS and SPSS code files containing the Public Use Microdata Sample L. It was produced by the U.S. Bureau of the Census under contract with the Louisiana Population Data Center, LSU Agricultural Center. PUMS-L contains a unique labor market area (LMA) geography delineated by Charles M. Tolbert (LSU) and Molly Sizer (University of Arkansas). PUMS-L is a minimum 0.25 percent sample. Like all PUMS geographic units, the labor market areas must have a population of at least 100,000 persons. To avoid having as few as 250 cases in smaller LMAs, the Bureau made an effort to supply at least 2000 person records per LMA. Inclusion of these additional person records resulted in a 0.45 percent sample. Sampling weights are included in the file that compensate for this oversampling of smaller LMAs. The resulting file contains information on 519,237 households and 1,139,142 persons. Weighted totals are: households - 101,916,857, persons - 248,709,867. This CD-ROM edition of PUMS-L was prepared and mastered by the Louisiana Population Data Center. The files on this CD-ROM are organized in several directories. These directories contain raw PUMS-L data files, equivalency files that document the labor market area geography, Atlas Graphics files that can be used to produce maps, and compressed, rectangularized SAS and SPSS-PC system files. One of the SAS files is an experienced civilian labor force extract that may facilitate research on labor market issues. Also included are SAS and SPSS programs configured for PUMS-L.
Note to Users: This CD is part of a collection located in the Data Archive of the Odum Institute for Research in Social Science at the University of North Carolina at Chapel Hill. The collection is located in Room 10, Manning Hall. Users may check the CDs out on the honor system. Items can be checked out for a period of two weeks. Loan forms are located adjacent to the collection.
This dataset conforms to the specification of the "Część pojazdów niskoemisyjnych w odnowie floty" ("Share of low-emission vehicles in fleet renewal") program available on schema.data.gouv.fr