Academic journal indicators developed from the information contained in the Scopus database (Elsevier B.V.). These indicators can be used to assess and analyze scientific domains.
A catalog of high-value public science and research data sets from across the Federal Government.
https://www.gnu.org/licenses/gpl-3.0.html
Gms-index-mediator is a standalone index for spatio-temporal data acting as a mediator between an application and a database. Even modern databases need several minutes to execute a spatio-temporal query against huge tables containing several million entries. Our index-mediator speeds up the execution of such queries by several orders of magnitude, resulting in response times around 100ms. This version is tailored towards the GeoMultiSens database, but can be adapted to work with custom table layouts with reasonable effort.
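The mediator's speed-up comes from answering a spatio-temporal box query against a small index instead of scanning the full table. A minimal sketch of that idea, using a toy in-memory grid-hash index (the class, cell size, and row layout are illustrative assumptions, not the GeoMultiSens schema or the mediator's actual implementation):

```python
from collections import defaultdict

class GridIndex:
    """Toy spatio-temporal grid index: points are bucketed into coarse
    cells, so a box query scans only candidate cells, not the whole table."""

    def __init__(self, cell_size=1.0):
        self.cell_size = cell_size
        self.cells = defaultdict(list)  # (ix, iy) -> [(x, y, t, row_id), ...]

    def _cell(self, x, y):
        return (int(x // self.cell_size), int(y // self.cell_size))

    def insert(self, x, y, t, row_id):
        self.cells[self._cell(x, y)].append((x, y, t, row_id))

    def query(self, xmin, xmax, ymin, ymax, tmin, tmax):
        """Return row ids whose point lies inside the spatio-temporal box."""
        ix0, iy0 = self._cell(xmin, ymin)
        ix1, iy1 = self._cell(xmax, ymax)
        hits = []
        for ix in range(ix0, ix1 + 1):
            for iy in range(iy0, iy1 + 1):
                for x, y, t, row_id in self.cells.get((ix, iy), ()):
                    if xmin <= x <= xmax and ymin <= y <= ymax and tmin <= t <= tmax:
                        hits.append(row_id)
        return hits

idx = GridIndex(cell_size=10.0)
idx.insert(5.0, 5.0, 100, "a")
idx.insert(50.0, 50.0, 200, "b")
print(idx.query(0, 10, 0, 10, 0, 150))  # only the row inside the box
```

A production mediator would use a disk-backed structure (e.g., an R-tree) rather than a flat grid, but the pre-filtering principle is the same.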
This report provides a strategy to ensure that digital scientific data can be reliably preserved for maximum use in catalyzing progress in science and society. Empowered by an array of new digital technologies, science in the 21st century will be conducted in a fully digital world. In this world, the power of digital information to catalyze progress is limited only by the power of the human mind. Data are not consumed by the ideas and innovations they spark but are an endless fuel for creativity. A few bits, well found, can drive a giant leap of creativity. The power of a data set is amplified by ingenuity through applications unimagined by the authors and distant from the original field...
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This composition appears in the I-Sc region of phase space. Its relative stability is shown in the I-Sc phase diagram (left). The relative stability of all other phases at this composition (and the combination of other stable phases, if no compound at this composition is stable) is shown in the relative stability plot (right).
https://dataintelo.com/privacy-and-policy
The global data science platform market size was valued at approximately USD 49.3 billion in 2023 and is projected to reach USD 174.4 billion by 2032, growing at a compound annual growth rate (CAGR) of 15.1% during the forecast period. This exponential growth can be attributed to the increasing demand for data-driven decision-making processes, the surge in big data technologies, and the need for more advanced analytics solutions across various industries.
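The stated figures are internally consistent: compounding the 2023 value at the stated CAGR over the nine years to 2032 reproduces the 2032 projection. A quick arithmetic check (values taken from the paragraph above):

```python
# Sanity-check the report's figures: USD 49.3B (2023) compounded at
# 15.1% per year for the 9 years to 2032 should land near USD 174.4B.
start_value = 49.3      # USD billion, 2023
cagr = 0.151            # 15.1% compound annual growth rate
years = 2032 - 2023     # 9 compounding periods

projected = start_value * (1 + cagr) ** years
print(round(projected, 1))  # ~174.8, consistent with the stated USD 174.4B
```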
One of the primary growth factors driving the data science platform market is the rapid digital transformation efforts undertaken by organizations globally. Companies are shifting towards data-centric business models to gain a competitive edge, improve operational efficiency, and enhance customer experiences. The proliferation of IoT devices and the subsequent explosion of data generated have further propelled the need for sophisticated data science platforms capable of analyzing vast datasets in real-time. This transformation is not only seen in large enterprises but also increasingly in small and medium enterprises (SMEs) that recognize the potential of data analytics in driving business growth.
Moreover, the advancements in artificial intelligence (AI) and machine learning (ML) technologies have significantly augmented the capabilities of data science platforms. These technologies enable the automation of complex data analysis processes, allowing for more accurate predictions and insights. As a result, sectors such as healthcare, finance, and retail are increasingly adopting data science solutions to leverage AI and ML for personalized services, fraud detection, and supply chain optimization. The integration of AI/ML into data science platforms is thus a critical factor contributing to market growth.
Another crucial factor is the growing regulatory and compliance requirements across various industries. Organizations are mandated to ensure data accuracy, security, and privacy, necessitating the adoption of robust data science platforms that can handle these aspects efficiently. The implementation of regulations such as the General Data Protection Regulation (GDPR) in Europe and the California Consumer Privacy Act (CCPA) in the United States has compelled organizations to invest in advanced data management and analytics solutions. These regulatory frameworks are not only a challenge but also an opportunity for the data science platform market to innovate and provide compliant solutions.
Regionally, North America dominates the data science platform market due to the early adoption of advanced technologies, a strong presence of key market players, and significant investments in research and development. However, the Asia Pacific region is expected to witness the highest growth rate during the forecast period. This growth can be attributed to the increasing digitalization initiatives, a growing number of tech startups, and the rising demand for analytics solutions in countries like China, India, and Japan. The competitive landscape and economic development in these regions are creating ample opportunities for market expansion.
The data science platform market, segmented by components, includes platforms and services. The platform segment encompasses software and tools designed for data integration, preparation, and analysis, while the services segment covers professional and managed services that support the implementation and maintenance of these platforms. The platform component is crucial as it provides the backbone for data science operations, enabling data scientists to perform data wrangling, model building, and deployment efficiently. The increasing demand for customized solutions tailored to specific business needs is driving the growth of the platform segment. Additionally, with the rise of open-source platforms, organizations have more flexibility and control over their data science workflows, further propelling this segment.
On the other hand, the services segment is equally vital as it ensures that organizations can effectively deploy and utilize data science platforms. Professional services include consulting, training, and support, which help organizations in the seamless integration of data science solutions into their existing IT infrastructure. Managed services provide ongoing support and maintenance, ensuring data science platforms operate optimally. The rising complexity of data ecosystems and the shortage of skilled data scientists are factors contributing to the growth of the services segment, as organizations often rely on external expertise.
The U.S. Geological Survey (USGS), Woods Hole Science Center (WHSC) has been an active member of the Woods Hole research community for over 40 years. In that time there have been many sediment collection projects conducted by USGS scientists and technicians for the research and study of seabed environments and processes. These samples are collected at sea or near shore and then brought back to the WHSC for study. While at the Center, samples are stored at ambient, cold, or freezing temperatures, depending on the best mode of preparation for the study being conducted or the duration of storage planned for the samples. Recently, storage methods and available storage space have become a major concern at the WHSC. The shapefile sed_archive.shp gives a geographical view of the samples in the WHSC's collections and where they were collected, along with images and hyperlinks to useful resources.
This document describes data collected from the Main Collection of the Web of Science database. Records of published studies addressing the intersection of Open Science and data repositories were searched up to January 15th, 2024, and the final dataset comprised 545 records for bibliometric analysis.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Database of bibliographic citations of multidisciplinary areas that covers various journals of the medical, scientific, and social sciences, including the humanities. A publisher-independent global citation database.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Diveboard (https://www.diveboard.com/) is an online scuba diving citizen science platform, where divers can digitize or log their dives, participate in citizen science surveys and projects, and interact with others. More than 10,000 divers have already registered with Diveboard and the community is still growing. This dataset contains all observations made by Diveboarders worldwide (mainly fishes) and is linked to the Encyclopedia of Life. The Diveboard community has dedicated the data to the public domain under a Creative Commons Zero waiver, so they can be used as widely as possible. If you have a specific survey need or question, get in touch: Diveboarders are everywhere and willing to help!
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Clarivate Analytics, managers of Web of Science, publishes an annual listing of highly cited researchers. The opening sentence of the 2019 report asks, "Who would contest that in the race for knowledge it is human capital that is most essential?". They state that "talent - including intelligence, creativity, ambition, and social competence (where needed) - outpaces other capacities such as access to funding and facilities". This contradicts the findings of Sinay et al. (2019), who found that the algorithm used by search engines, including the Web of Science, is possibly more influential than human capital. Using Clarivate Analytics' database for 2018, we investigated which factors are most relevant in the impact race. Rather than human capital alone, we found that language, gender, funding and facilities introduce bias to assessments and possibly prevent talent and discoveries from emerging. We found that the profile of the highly cited scholars is so narrow that it may compromise the validity of scientific knowledge, because it is biased towards the perception and interests of male scholars affiliated with very-highly-developed countries where English is commonly spoken. These scholars accounted for 80 percent of the random sample analyzed; absent were women from Latin-America, Africa, Asia and Oceania; and scholars affiliated with institutions in low-human-development countries. Ninety-eight percent of the published research came from institutions in very-highly-developed countries. Providing evidence that challenges the view that 'talent is the primary driver of scientific advancement' is important because search engines, such as the Web of Science, can modify their algorithms. This would ensure the work of scholars that do not fit the currently dominant profile can have their importance elevated so that their findings can more equitably contribute to knowledge development. This, in turn, will increase the validity of scientific enquiry. 
Data was collected from Clarivate Analytics
https://whoisdatacenter.com/terms-of-use/
.SCIENCE Whois Database: discover comprehensive ownership details, registration dates, and more for the .SCIENCE TLD with Whois Data Center.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data obtained from computational DFT calculations on Tetragonal ScI is provided. Available data include crystal structure, bandgap energy, stability, density of states, and calculation input/output files.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Introduction: The rising prevalence of complex chronic conditions and growing intricacies of healthcare systems emphasize the need for interdisciplinary partnerships to advance coordination and quality of rehabilitation care. Registry databases are increasingly used for clinical monitoring and quality improvement (QI) of health system change. Currently, it is unclear how interdisciplinary partnerships can best mobilize registry data to support QI across care settings for complex chronic conditions.
Purpose: We employed spinal cord injury (SCI) as a case study of a highly disruptive and debilitating complex chronic condition, with existing registry data that is underutilized for QI. We aimed to compare and converge evidence from previous reports and multidisciplinary experts in order to outline the major elements of a strategy to effectively mobilize registry data for QI of care for complex chronic conditions.
Methods: This study used a convergent parallel-database variant mixed design, whereby findings from a systematic review and a qualitative exploration were analyzed independently and then simultaneously. The scoping review used a three-stage process to review 282 records, which resulted in 28 articles reviewed for analysis. Concurrent interviews were conducted with multidisciplinary stakeholders, including leadership from condition-specific national registries, members of national SCI communities, leadership from SCI community organizations, and a person with lived experience of SCI. Descriptive analysis was used for the scoping review and qualitative description for stakeholder interviews.
Results: There were 28 articles included in the scoping review and 11 multidisciplinary stakeholders in the semi-structured interviews. The integration of the results allowed the identification of three key learnings to enhance the successful design and use of registry data to inform the planning and development of a QI initiative: enhance the utility and reliability of registry data; form a steering committee led by clinical champions; and design effective, feasible, and sustainable QI initiatives.
Conclusion: This study highlights the importance of interdisciplinary partnerships to support QI of care for persons with complex conditions. It provides practical strategies to determine mutual priorities that promote implementation and sustained use of registry data to inform QI. Learnings from this work could enhance interdisciplinary collaboration to support QI of rehabilitation care for persons with complex chronic conditions.
The U.S. Geological Survey (USGS), in partnership with several federal agencies, has developed and released four National Land Cover Database (NLCD) products over the past two decades: NLCD 1992, 2001, 2006, and 2011. These products provide spatially explicit and reliable information on the Nation's land cover and land cover change. To continue the legacy of NLCD and further establish a long-term monitoring capability for the Nation's land resources, the USGS has designed a new generation of NLCD products named NLCD 2016. The NLCD 2016 design aims to provide innovative, consistent, and robust methodologies for production of a multi-temporal land cover and land cover change database from 2001 to 2016 at 2–3-year intervals. Comprehensive research was conducted and resulted in developed strategies for NLCD 2016: a streamlined process for assembling and preprocessing Landsat imagery and geospatial ancillary datasets; multi-source integrated training data development and decision-tree based land cover classifications; a temporally, spectrally, and spatially integrated land cover change analysis strategy; a hierarchical theme-based post-classification and integration protocol for generating land cover and change products; a continuous fields biophysical parameters modeling method; and an automated scripted operational system for NLCD 2016 production. The performance of the developed strategies and methods was tested in twenty World Reference System-2 path/rows throughout the conterminous U.S. An overall agreement ranging from 71% to 97% between land cover classification and reference data was achieved for all tested areas and all years. Results from this study confirm the robustness of this comprehensive and highly automated procedure for NLCD 2016 operational mapping. Questions about the NLCD 2016 land cover product can be directed to the NLCD 2016 land cover mapping team at USGS EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.
See included spatial metadata for more details.
The Giotto Radio Science Original Experiment Data Set includes two types of data -- an Archival Tracking Data File (ATDF) from closed loop receivers and Original Data Records (ODRs) from open loop receivers.
This dataset is comprised of a collection of example DMPs from a wide array of fields; obtained from a number of different sources outlined below. Data included/extracted from the examples include the discipline and field of study, author, institutional affiliation and funding information, location, date created, title, research and data-type, description of project, link to the DMP, and where possible external links to related publications or grant pages. This CSV document serves as the content for a McMaster Data Management Plan (DMP) Database as part of the Research Data Management (RDM) Services website, located at https://u.mcmaster.ca/dmps. Other universities and organizations are encouraged to link to the DMP Database or use this dataset as the content for their own DMP Database. This dataset will be updated regularly to include new additions and will be versioned as such. We are gathering submissions at https://u.mcmaster.ca/submit-a-dmp to continue to expand the collection.
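As a sketch of how such a CSV could be consumed programmatically, the snippet below filters records by discipline using Python's standard csv module. The header names and rows here are hypothetical stand-ins for the fields listed above, not the actual column names or contents of the McMaster dataset:

```python
import csv
import io

# Hypothetical header and rows modeled on the fields described above;
# the real column names in the McMaster CSV may differ.
sample = """discipline,author,institution,title,dmp_link
Ecology,J. Doe,McMaster University,Field Survey DMP,https://example.org/dmp1
Physics,A. Roe,McMaster University,Detector Data DMP,https://example.org/dmp2
"""

with io.StringIO(sample) as fh:
    rows = list(csv.DictReader(fh))

# Filter the catalog by discipline, as a DMP database front end might.
ecology = [r["title"] for r in rows if r["discipline"] == "Ecology"]
print(ecology)  # titles of the Ecology DMPs
```

For a real deployment the same filtering could be done against the published CSV file directly, since csv.DictReader accepts any file-like object.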
This Level 1 (L1) dataset contains the Version 2.1 geo-located Delay Doppler Maps (DDMs) calibrated into Power Received (Watts) and Bistatic Radar Cross Section (BRCS) expressed in units of meters squared from the Delay Doppler Mapping Instrument aboard the CYGNSS satellite constellation. This version supersedes Version 2.0. Other useful scientific and engineering measurement parameters include the DDM of Normalized Bistatic Radar Cross Section (NBRCS), the Delay Doppler Map Average (DDMA) of the NBRCS near the specular reflection point, and the Leading Edge Slope (LES) of the integrated delay waveform. The L1 dataset contains a number of other engineering and science measurement parameters, including sets of quality flags/indicators, error estimates, and bias estimates, as well as a variety of orbital, spacecraft/sensor health, timekeeping, and geolocation parameters. At most, 8 netCDF data files (each file corresponding to a unique spacecraft in the CYGNSS constellation) are provided each day; under nominal conditions, there are typically 6-8 spacecraft retrieving data each day, but this can be maximized to 8 spacecraft under special circumstances in which higher than normal retrieval frequency is needed (i.e., during tropical storms and/or hurricanes). Latency is approximately 6 days (or better) from the last recorded measurement time. The Version 2.1 release represents the second science-quality release.
Here is a summary of improvements that reflect the quality of the Version 2.1 data release: 1) data is now available when the CYGNSS satellites are rolled away from nadir during orbital high beta-angle periods, resulting in a significant amount of additional data; 2) corrections to coordinate frames result in more accurate estimates of receiver antenna gain at the specular point; 3) improved calibration for analog-to-digital conversion results in better consistency between CYGNSS satellites' measurements at nearly the same location and time; 4) improved GPS EIRP and transmit antenna pattern calibration results in significantly reduced PRN-dependence in the observables; 5) improved estimation of the location of the specular point within the DDM; 6) an altitude-dependent scattering area is used to normalize the scattering cross section (v2.0 used a simpler scattering area model that varied with incidence and azimuth angles but not altitude); 7) corrections added for noise floor-dependent biases in the scattering cross section and leading edge slope of the delay waveform observed in the v2.0 data. Users should also note that the receiver antenna pattern calibration is not applied per-DDM-bin in this v2.1 release.
https://www.rootsanalysis.com/privacy.html
The data science platform market size is projected to grow from USD 138 billion in 2024 to USD 1,678 billion by 2035, representing a high CAGR of 25.47%.