North Saami is replacing the use of possessive suffixes on nouns with a morphologically simpler analytic construction. Our data (>2,000 examples culled from >500,000 words) track this change across three generations and along parameters of semantics, syntax, and geography. Intense contact pressure on this minority language probably promotes morphological simplification, yielding an advantage for the innovative construction. The innovative construction is additionally advantaged because it has a wider syntactic and semantic range and is indispensable, whereas its competitor can always be replaced. The one environment in which the possessive suffix is most strongly retained, even in the youngest generation, is the nominative singular, and here we find evidence that the possessive suffix is being reinterpreted as a vocative case marker. The files make it possible to see all of our data and to reproduce the statistical analysis and plots in R.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background: Robotic surgery holds particular promise for complex oncologic colorectal resections, as it can overcome many limitations of the laparoscopic approach. However, as in laparoscopic surgery, appropriate case selection (simple vs. complex) with respect to the actual robotic expertise of the team may be a critical determinant of outcome. The present study aimed to analyze the clinical outcome after robotic colorectal surgery over time based on the complexity of the surgical procedure. Methods: All robotic colorectal resections (n = 85) performed at the Department of Surgery, Medical University of Vienna, from the beginning of the program in April 2015 until December 2019 were retrospectively analyzed. To compare surgical outcome over time, the cohort was divided into two time periods based on case sequence (period 1: patients 1–43; period 2: patients 44–85). Cases were assigned a complexity level (I–IV) according to the type of resection, severity of disease, sex, and body mass index (BMI). Postoperative complications were classified using the Clavien-Dindo classification. Results: In total, 47 rectal resections (55.3%), 22 partial colectomies (25.8%), 14 abdomino-perineal resections (16.5%), and 2 proctocolectomies (2.4%) were performed. Of these, 69.4% (n = 59) were oncologic cases. The overall rate of major complications (Clavien-Dindo III–V) was 16.5%. Complex cases (complexity levels III and IV) were more often followed by major complications than cases with a low to medium complexity level (I and II; 25.0% vs. 5.4%, p = 0.016). Furthermore, the rate of major complications decreased over time from 25.6% (period 1) to 7.1% (period 2, p = 0.038). Of note, the drop in major complications was associated both with a learning effect, which was particularly pronounced in complex cases, and with a reduction of case complexity from 67.5% to 45.2% in the second period (p = 0.039). Conclusions: The risk of major complications after robotic colorectal surgery increases significantly with escalating case complexity (levels III and IV), particularly during the initial phase of a new colorectal robotic surgery program. Before robotic proficiency has been achieved, it is therefore advisable to limit robotic colorectal resection to cases with complexity levels I and II in order to keep major complication rates to a minimum.
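The between-group comparison reported above (25.0% vs. 5.4% major complications, p = 0.016) is the kind of result typically obtained with a test for two proportions such as Fisher's exact test. The sketch below is purely illustrative: the 2x2 counts are hypothetical values chosen only to roughly match the reported percentages and are not the study's actual group sizes.

```python
# Illustrative only: counts are hypothetical, chosen to roughly match the
# reported rates (about 25% vs. 5.4% major complications); they are not
# taken from the study.
from scipy.stats import fisher_exact

#                      major  no major
table = [[12, 36],   # complexity III-IV  (12/48 = 25.0%)
         [ 2, 35]]   # complexity I-II    ( 2/37 =  5.4%)

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```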
https://www.law.cornell.edu/uscode/text/17/106
Graph representation learning—especially via graph neural networks (GNNs)—has demonstrated considerable promise in modeling intricate interaction systems, such as social networks and molecular structures. However, the deployment of GNN-based frameworks in industrial settings remains challenging due to the inherent complexity and noise in real-world graph data. This dissertation systematically addresses these challenges by advancing novel methodologies to improve the comprehensiveness and robustness of graph representation learning, with a dual focus on resolving data complexity and denoising across diverse graph-learning scenarios. In addressing graph data denoising, we design auxiliary self-supervised optimization objectives that disentangle noisy topological structures and misinformation while preserving the representational sufficiency of critical graph features. These tasks operate synergistically with primary learning objectives to enhance robustness against data corruption. The efficacy of these techniques is demonstrated through their application to real-world opioid prescription time series data for predicting potential opioid over-prescription. To mitigate data complexity, the study investigates two complementary approaches: (1) multimodal fusion, which employs attentive integration of graph data with features from other modalities, and (2) hierarchical substructure mining, which extracts semantic patterns at multiple granularities to enhance model generalization in demanding contexts. Finally, the dissertation explores the adaptability of graph data in a range of practical applications, including E-commerce demand forecasting and recommendations, to further enhance prediction and reasoning capabilities.
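As a rough illustration of pairing a primary graph-learning objective with an auxiliary self-supervised denoising objective (the general strategy described above, not the dissertation's actual architecture), the sketch below runs one GCN-style propagation step on a toy graph and combines a node-classification loss with an edge-reconstruction loss. All data, dimensions, and weights are synthetic placeholders.

```python
# Minimal sketch (not the dissertation's method): combine a primary objective
# with an auxiliary self-supervised edge-reconstruction objective on a graph.
import numpy as np

rng = np.random.default_rng(0)

n, d, h = 6, 4, 3                       # nodes, input features, hidden size
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.maximum(A, A.T)                  # symmetric (possibly noisy) adjacency
np.fill_diagonal(A, 1.0)                # add self-loops
X = rng.normal(size=(n, d))             # node features
y = rng.integers(0, 2, size=n)          # binary node labels
W = rng.normal(scale=0.1, size=(d, h))  # GCN weight matrix

# One GCN-style propagation step: H = tanh(D^{-1/2} A D^{-1/2} X W)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
H = np.tanh(D_inv_sqrt @ A @ D_inv_sqrt @ X @ W)

# Primary objective: logistic loss on the node labels.
w_out = rng.normal(scale=0.1, size=h)
logits = H @ w_out
primary_loss = np.mean(np.log1p(np.exp(-(2 * y - 1) * logits)))

# Auxiliary self-supervised objective: reconstruct the observed edges from the
# embeddings, which regularizes H against noisy or missing links.
A_hat = 1.0 / (1.0 + np.exp(-(H @ H.T)))
aux_loss = np.mean((A_hat - A) ** 2)

total_loss = primary_loss + 0.1 * aux_loss  # 0.1 is an arbitrary trade-off weight
print(f"primary={primary_loss:.3f}  auxiliary={aux_loss:.3f}  total={total_loss:.3f}")
```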
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data used in analyses for "Brain structural connectivity predicts brain functional complexity: DTI derived centrality accounts for variance in fractal properties of fMRI signal"
We study the NP-hard graph problem COLLAPSED K-CORE where, given an undirected graph G and integers b, x, and k, we are asked to remove b vertices such that the k-core of the remaining graph, that is, the (uniquely determined) largest induced subgraph with minimum degree k, has size at most x. COLLAPSED K-CORE was introduced by Zhang et al. (2017) and is motivated by the study of engagement behavior of users in a social network and by measuring the resilience of a network against user dropouts. COLLAPSED K-CORE is a generalization of R-DEGENERATE VERTEX DELETION (which is known to be NP-hard for all r ≥ 0) where, given an undirected graph G and integers b and r, we are asked to remove b vertices such that the remaining graph is r-degenerate, that is, every subgraph of it has minimum degree at most r. We investigate the parameterized complexity of COLLAPSED K-CORE with respect to the parameters b, x, and k, and several structural parameters of the input graph. We reveal a dichotomy in the computational complexity of COLLAPSED K-CORE between k ≤ 2 and k ≥ 3. For the latter case it is known that for all x ≥ 0 COLLAPSED K-CORE is W[P]-hard when parameterized by b. For k ≤ 2 we show that COLLAPSED K-CORE is W[1]-hard when parameterized by b and in FPT when parameterized by (b + x). Furthermore, we outline that COLLAPSED K-CORE is in FPT when parameterized by the treewidth of the input graph and presumably does not admit a polynomial kernel when parameterized by the vertex cover number of the input graph.
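For concreteness, the k-core referred to above (the unique largest induced subgraph in which every vertex has degree at least k) can be computed by repeatedly peeling off vertices of degree below k; a minimal sketch:

```python
# Minimal sketch: compute the k-core of an undirected graph by repeatedly
# deleting vertices whose current degree is below k. The result is the
# unique largest induced subgraph with minimum degree k (possibly empty).
def k_core(adj, k):
    """adj: dict mapping each vertex to a set of neighbours (undirected)."""
    alive = set(adj)
    changed = True
    while changed:
        changed = False
        for v in list(alive):
            if sum(1 for u in adj[v] if u in alive) < k:
                alive.remove(v)
                changed = True
    return alive

# Example: a triangle with one pendant vertex; the 2-core is the triangle.
graph = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(k_core(graph, 2))  # {1, 2, 3}
```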
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Visual representation of all analyses used within this project.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Overview of the classification of the LLMs' answers to the false-belief tasks.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Important note: Information on the official Corona traffic light of the BMSGPK, including the associated open data sets, can be found at - the following application is a third-party application based on the data set COVID-19: Number of cases per district (BMSGPK) and is not directly related to the official Corona traffic light of the BMSGPK, which is available at . --- At the district level, the Corona traffic light of the Complexity Science Hub Vienna provides an overview of the number of new infections (positively tested persons) within the last two weeks. Districts are colored according to a traffic light system: green if there has been less than 1 new case per 10,000 inhabitants in the last two weeks, yellow if there have been fewer than 10 such cases, and red if there have been 10 or more. The historical development of the case numbers can be explored interactively and read off a curve for each district. Attention: the screenshot below does not represent the current situation; please open the "link to the application" for current data.
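The coloring rule is simple enough to state as code; a minimal sketch of the thresholds described above (new cases per 10,000 inhabitants over the last two weeks):

```python
# Minimal sketch of the traffic-light rule described above: color a district
# by its new cases per 10,000 inhabitants over the last two weeks.
def district_color(new_cases_14d: int, population: int) -> str:
    cases_per_10k = new_cases_14d / population * 10_000
    if cases_per_10k < 1:
        return "green"
    if cases_per_10k < 10:
        return "yellow"
    return "red"

print(district_color(8, 100_000))    # green  (0.8 per 10,000)
print(district_color(45, 100_000))   # yellow (4.5 per 10,000)
print(district_color(350, 100_000))  # red    (35.0 per 10,000)
```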
Introduction: Fragile X syndrome (FXS) is a genetic disorder caused by a mutation of the fragile X mental retardation 1 gene (FMR1). FXS is associated with neurophysiological abnormalities, including cortical hyperexcitability. Alterations in electroencephalogram (EEG) resting-state power spectral density (PSD) are well-defined in FXS and have been linked to neurodevelopmental delays. Whether non-linear dynamics of the brain signal are also altered remains to be studied. Methods: In this study, resting-state EEG power, including alpha peak frequency (APF) and theta/beta ratio (TBR), as well as signal complexity using multi-scale entropy (MSE), were compared between 26 FXS participants (ages 5–28 years) and 77 neurotypical (NT) controls with a similar age distribution. Subsequently, a replication study was carried out, comparing our cohort to 19 FXS participants independently recorded at a different site. Results: PSD results confirmed the increased gamma power and the decreased alpha power and APF in FXS participants compared to NT controls. No alterations in TBR were found. Importantly, the results revealed reduced signal complexity in FXS participants, specifically at higher scales, suggesting that altered signal complexity is sensitive to brain alterations in this population. The replication study mostly confirmed these results and suggested critical points of stagnation in the neurodevelopmental curve of FXS. Conclusion: Signal complexity is a powerful feature that can be added to the electrophysiological biomarkers of brain maturation in FXS.
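For orientation, the sketch below estimates the two resting-state measures named above, alpha peak frequency (APF) and theta/beta ratio (TBR), from a Welch power spectral density of a synthetic one-channel signal. The band limits and parameters are common conventions assumed for illustration, not the study's exact pipeline.

```python
# Illustrative sketch (not the study's pipeline): estimate alpha peak frequency
# (APF) and theta/beta ratio (TBR) from one EEG channel via a Welch PSD.
import numpy as np
from scipy.signal import welch

fs = 250                                    # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)                # 60 s of synthetic signal
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)  # 10 Hz "alpha" + noise

f, psd = welch(eeg, fs=fs, nperseg=4 * fs)  # 0.25 Hz frequency resolution

def band_power(freqs, power, lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return power[mask].sum() * (freqs[1] - freqs[0])

alpha = (f >= 8) & (f <= 13)
apf = f[alpha][np.argmax(psd[alpha])]                         # alpha peak frequency
tbr = band_power(f, psd, 4, 8) / band_power(f, psd, 13, 30)   # theta / beta

print(f"APF = {apf:.2f} Hz, TBR = {tbr:.2f}")
```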
Physiologic signals such as the electroencephalogram (EEG) demonstrate irregular behaviors due to the interaction of multiple control processes operating over different time scales. The complexity of this behavior can be quantified using multi-scale entropy (MSE). High physiologic complexity denotes health, and a loss of complexity can predict adverse outcomes. Since postoperative delirium is particularly hard to predict, we investigated whether the complexity of preoperative and intraoperative frontal EEG signals could predict postoperative delirium and its endophenotype, inattention. To calculate MSE, the sample entropy of EEG recordings was computed at different time scales and then plotted against scale; complexity is the total area under the curve. MSE of frontal EEG recordings was computed in 50 patients aged ≥ 60 years before and during surgery. Average MSE was higher intraoperatively than preoperatively (p = 0.0003). However, intraoperative EEG MSE was lower than preoperative MSE at smaller scales, but higher at larger scales (interaction p < 0.001), creating a crossover point where, by definition, the preoperative and intraoperative MSE curves met. Overall, EEG complexity was not associated with delirium or attention. In the 42/50 patients with a single crossover point, the scale at which the intraoperative and preoperative entropy curves crossed showed an inverse relationship with the change in delirium-severity score (Spearman ρ = −0.31, p = 0.054). Thus, average EEG complexity increases intraoperatively in older adults, but the change is scale dependent. The scale at which preoperative and intraoperative complexity are equal (i.e., the crossover point) may predict delirium. Future studies should assess whether the crossover point represents changes in neural control mechanisms that predispose patients to postoperative delirium.
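As a point of reference for the method described above, here is a minimal multi-scale entropy sketch: coarse-grain the signal at each scale, compute the sample entropy of the coarse-grained series, and take the area under the entropy-versus-scale curve as the complexity index. The parameters (m = 2, tolerance 0.15 x SD, scales 1 to 10) are common choices, not necessarily those used in the study, and the input is a synthetic stand-in for an EEG segment.

```python
# Minimal multi-scale entropy (MSE) sketch: coarse-grain, compute sample entropy
# at each scale, and take the area under the entropy-vs-scale curve.
import numpy as np

def sample_entropy(x, m=2, r=0.15):
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    n = len(x)

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(n - length + 1)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance to all later templates (self-matches excluded).
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= tol)
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.nan

def coarse_grain(x, scale):
    n = len(x) // scale
    return np.asarray(x[:n * scale]).reshape(n, scale).mean(axis=1)

def multiscale_entropy(x, scales=range(1, 11)):
    return np.array([sample_entropy(coarse_grain(x, s)) for s in scales])

rng = np.random.default_rng(0)
signal = rng.normal(size=3000)            # stand-in for an EEG segment
mse_curve = multiscale_entropy(signal)
complexity = np.nansum(mse_curve)         # area under the curve (unit scale step)
print(f"complexity index = {complexity:.2f}")
```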
Conventional differential expression analyses have been successfully employed to identify genes whose levels change across experimental conditions. One limitation of this approach is the inability to discover central regulators that control gene expression networks. In addition, while methods for identifying central nodes in a network are widely implemented, the bioinformatics validation process and the theoretical error estimates that reflect the uncertainty in each step of the analysis are rarely considered. Using the betweenness centrality measure, we identified Etv5 as a potential tissue-level regulator in murine neurofibromatosis type 1 (Nf1) low-grade brain tumors (optic gliomas). As such, the expression of Etv5 and of Etv5 target genes was increased in multiple independently generated mouse optic glioma models relative to non-neoplastic (normal healthy) optic nerves, as well as in the cognate human tumors (pilocytic astrocytoma) relative to normal human brain. Importantly, differential Etv5 and Etv5 network expression was not directly the result of Nf1 gene dysfunction in specific cell types, but rather reflects a property of the tumor as an aggregate tissue. Moreover, this differential Etv5 expression was independently validated at the RNA and protein levels. Taken together, the combined use of network analysis, differential RNA expression findings, and experimental validation highlights the potential of the computational network approach to provide new insights into tumor biology.
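For concreteness, the sketch below ranks the nodes of a small, entirely hypothetical co-expression network by betweenness centrality, the measure used above to nominate Etv5. The gene names and edges are placeholders, not the study's network.

```python
# Illustrative sketch: rank nodes of a hypothetical gene network by betweenness
# centrality; nodes that lie on many shortest paths score highest.
import networkx as nx

edges = [
    ("geneA", "geneB"), ("geneA", "geneC"), ("geneB", "geneC"),
    ("geneC", "hub1"), ("hub1", "geneD"), ("hub1", "geneE"),
    ("geneD", "geneE"), ("geneE", "geneF"),
]
G = nx.Graph(edges)

centrality = nx.betweenness_centrality(G, normalized=True)
for gene, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{gene:6s} {score:.3f}")   # "hub1" bridges the two cliques and ranks first
```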
We introduce a dynamic version of the NP-hard graph modification problem Cluster Editing. The essential point here is to take into account dynamically evolving input graphs: having a cluster graph (that is, a disjoint union of cliques) constituting a solution for a first input graph, can we cost-efficiently transform it into a "similar" cluster graph that is a solution for a second ("subsequent") input graph? This model is motivated by several application scenarios, including incremental clustering, the search for compromise clusterings, and local search in graph-based data clustering. We thoroughly study six problem variants (three modification scenarios: edge editing, edge deletion, and edge insertion, each combined with two distance measures between cluster graphs). We obtain both fixed-parameter tractability and (parameterized) hardness results, thus (except for three open questions) providing a fairly complete picture of the parameterized computational complexity landscape under the two perhaps most natural parameterizations: the distances of the new "similar" cluster graph to (1) the second input graph and to (2) the input cluster graph.
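As a small aid to the definitions above, the sketch below checks whether a graph is a cluster graph (no induced path on three vertices, i.e., a disjoint union of cliques) and measures the distance between two cluster graphs as the number of edge modifications separating them. The symmetric-difference distance used here is one natural choice; the paper's two distance measures may be defined differently, and the graphs are toys.

```python
# Minimal sketch: recognise cluster graphs and count edge modifications
# (insertions + deletions) between two of them.
from itertools import combinations

def is_cluster_graph(adj):
    """adj: dict vertex -> set of neighbours. True iff every component is a clique."""
    for v, nbrs in adj.items():
        for a, b in combinations(nbrs, 2):
            if b not in adj[a]:      # v-a and v-b present but a-b missing: induced P3
                return False
    return True

def edge_set(adj):
    return {frozenset((u, v)) for u, nbrs in adj.items() for v in nbrs}

def edit_distance(adj1, adj2):
    """Number of edge insertions/deletions turning one graph into the other."""
    return len(edge_set(adj1) ^ edge_set(adj2))

c1 = {1: {2}, 2: {1}, 3: {4}, 4: {3}}             # cliques {1,2} and {3,4}
c2 = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}, 4: set()}  # clique {1,2,3}, isolated 4
print(is_cluster_graph(c1), is_cluster_graph(c2))  # True True
print(edit_distance(c1, c2))                       # 3: delete {3,4}, add {1,3}, {2,3}
```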
https://dataintelo.com/privacy-and-policy
According to our latest research, the global Decline Curve Analysis Software market size in 2024 stands at USD 1.23 billion, reflecting robust industry adoption and digital transformation across the energy sector. The market is experiencing a healthy expansion with a CAGR of 8.1% during the forecast period. By 2033, the market is projected to reach USD 2.49 billion, driven primarily by increasing demand for advanced reservoir management solutions, the growing complexity of oil and gas extraction, and the integration of AI and machine learning into analytical platforms. This growth trajectory highlights the critical role that decline curve analysis software plays in optimizing hydrocarbon production and maximizing asset value in a competitive global landscape.
The rising complexity of oil and gas reservoirs, coupled with the imperative to enhance production efficiency, is a significant growth factor for the Decline Curve Analysis Software market. As mature oilfields dominate global production, operators are increasingly turning to sophisticated analytical tools to predict future production rates, optimize recovery strategies, and extend the productive life of wells. Decline curve analysis software enables engineers to model reservoir performance accurately, reducing uncertainty in production forecasting and investment planning. The software's ability to integrate with real-time data sources, automate workflows, and deliver actionable insights is transforming decision-making in both conventional and unconventional resource development. This technological evolution is further supported by the industry's shift toward digitalization, as companies seek to leverage big data analytics and cloud-based platforms to maintain a competitive edge.
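A hedged illustration of the core calculation such software automates: fitting an Arps hyperbolic decline curve, q(t) = qi / (1 + b*Di*t)**(1/b), to production history and forecasting forward. The well data below are synthetic and the parameter choices are assumptions for illustration, not vendor defaults.

```python
# Hedged sketch: fit an Arps hyperbolic decline curve to synthetic monthly
# production data and forecast a future rate. Not any real well or product.
import numpy as np
from scipy.optimize import curve_fit

def arps_hyperbolic(t, qi, di, b):
    return qi / (1.0 + b * di * t) ** (1.0 / b)

t = np.arange(0, 36, dtype=float)                 # months on production
rng = np.random.default_rng(42)
observed = arps_hyperbolic(t, 1000.0, 0.10, 0.8) * (1 + 0.05 * rng.normal(size=t.size))

params, _ = curve_fit(arps_hyperbolic, t, observed,
                      p0=[800.0, 0.05, 0.5],
                      bounds=([1.0, 1e-4, 0.01], [1e5, 1.0, 2.0]))
qi_fit, di_fit, b_fit = params
print(f"fitted qi = {qi_fit:.0f}, Di = {di_fit:.3f} 1/month, b = {b_fit:.2f}")
print(f"forecast rate at month 60 = {arps_hyperbolic(60.0, *params):.0f}")
```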
Another crucial driver is the accelerated adoption of cloud-based deployment models within the Decline Curve Analysis Software market. Cloud-based solutions offer enhanced scalability, remote accessibility, and seamless integration with other digital oilfield technologies. These advantages are particularly valuable for multinational oil and gas companies operating across geographically dispersed assets. The cloud deployment model also supports collaborative workflows, allowing teams to share data and insights in real time, which is essential for efficient reservoir management. Moreover, the subscription-based pricing models associated with cloud solutions lower the barriers to entry for small and medium-sized operators, democratizing access to advanced analytical capabilities. As cybersecurity measures for cloud platforms continue to improve, concerns about data privacy and integrity are being addressed, further fueling market growth.
The expansion of unconventional resource development, including shale gas and coal bed methane, is a major catalyst for the Decline Curve Analysis Software market. Unconventional reservoirs present unique challenges due to their heterogeneous nature and complex production profiles. Advanced decline curve analysis software is indispensable for accurately modeling these reservoirs, optimizing hydraulic fracturing operations, and maximizing recovery rates. The software's integration with machine learning algorithms enables continuous improvement of production forecasts as more data becomes available, enhancing operational efficiency and reducing costs. This trend is particularly pronounced in regions such as North America and Asia Pacific, where unconventional resource development is a strategic priority. As the energy transition accelerates, the need for efficient resource management and cost optimization will continue to drive demand for advanced analytical solutions.
From a regional perspective, North America remains the dominant market for Decline Curve Analysis Software, accounting for the largest share in 2024, followed by Europe and Asia Pacific. The strong presence of leading oil and gas companies, a mature digital ecosystem, and significant investments in unconventional resource development underpin North America's leadership. Europe is witnessing steady adoption, driven by stringent regulatory requirements and the need for efficient asset management in mature fields. Meanwhile, Asia Pacific is emerging as a high-growth region, supported by expanding exploration activities and increasing digitalization efforts across the oil and gas sector. The Middle East & Africa and Latin America are also showing promising growth, fueled by ongoing investments in upstream activities and the modernization of legacy infrastructure.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This paper investigates the computational complexity of sparse label propagation which has been proposed recently for processing network-structured data. Sparse label propagation amounts to a convex optimization problem and might be considered as an extension of basis pursuit from sparse vectors to clustered graph signals representing the label information contained in network-structured datasets. Using a standard first-order oracle model, we characterize the number of iterations for sparse label propagation to achieve a prescribed accuracy. In particular, we derive an upper bound on the number of iterations required to achieve a certain accuracy and show that this upper bound is sharp for datasets having a chain structure (e.g., time series).
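For readers unfamiliar with the setup, the underlying convex problem is, roughly, a weighted total-variation minimization over the graph subject to agreement with the observed labels. A minimal sketch using a generic convex solver follows; the chain graph, unit weights, and labels are toy assumptions, and the paper's exact weighting may differ.

```python
# Hedged sketch of the convex problem behind sparse label propagation: minimise
# the weighted graph total variation subject to matching the sampled labels.
import cvxpy as cp
import numpy as np

n = 5
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]   # a chain graph (cf. the time-series case)
weights = np.ones(len(edges))
labels = {0: 0.0, 4: 1.0}                  # nodes with observed labels

x = cp.Variable(n)
total_variation = sum(w * cp.abs(x[i] - x[j]) for (i, j), w in zip(edges, weights))
constraints = [x[i] == y for i, y in labels.items()]
problem = cp.Problem(cp.Minimize(total_variation), constraints)
problem.solve()

print("recovered signal:", np.round(x.value, 3))
```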
Outbreaks of the predator crown-of-thorns seastar (COTS) Acanthaster planci cause widespread coral mortality across the Indo-Pacific. Like many marine invertebrates, COTS is a nocturnal species whose cryptic behaviour during the day can affect its detectability, particularly in structurally complex reef habitats that provide many refuges for benthic creatures. We performed extensive day and night surveys of COTS populations in coral reef habitats showing differing levels of structural complexity and COTS abundance. We tested whether estimations of COTS density varied between day and night observations, and if the differences were related to changes in COTS abundance, reef structural complexity, and the spatial scale of observation. Estimations of COTS density were on average 27% higher at night than during the day. Differences in COTS detection varied with changing seastar abundance but not reef structural complexity or scale of observation. Underestimation of COTS abundance in daytime ...
According to our latest research, the global Decline Curve Analytics Platforms market size is valued at USD 1.45 billion in 2024 and is projected to reach USD 3.98 billion by 2033, growing at a robust CAGR of 11.7% during the forecast period. The market is experiencing strong momentum due to the increasing adoption of advanced analytics in oil and gas operations, the need for accurate production forecasting, and the rising complexity of reservoir management. These drivers are shaping a dynamic landscape for decline curve analytics platforms as the industry seeks to maximize asset value and operational efficiency.
One of the primary growth factors for the decline curve analytics platforms market is the escalating demand for data-driven decision-making in the oil and gas sector. As the industry faces volatile commodity prices and increasing operational costs, companies are under pressure to optimize production and extend the life of existing wells. Decline curve analytics platforms empower operators with predictive insights, enabling them to forecast production rates, evaluate asset performance, and make informed investment decisions. The integration of machine learning and artificial intelligence into these platforms further enhances their predictive accuracy, allowing for more granular analysis of well behavior and reservoir dynamics. This shift toward digital transformation is expected to accelerate the adoption of decline curve analytics solutions across upstream operations globally.
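To make the forecasting idea concrete, the sketch below computes the time to the economic limit and the remaining recoverable volume under a simple exponential decline, one of the classical decline models such platforms implement. All numbers are illustrative, not from any real well.

```python
# Hedged illustration: remaining reserves under exponential decline
# q(t) = qi * exp(-D * t); cumulative production down to an economic
# limit rate q_lim is (qi - q_lim) / D. Numbers are illustrative only.
import numpy as np

qi = 500.0      # current rate, bbl/day
D = 0.0015      # nominal decline rate, 1/day
q_lim = 20.0    # economic limit rate, bbl/day

t_lim = np.log(qi / q_lim) / D          # days until the economic limit is reached
remaining = (qi - q_lim) / D            # remaining recoverable volume, bbl

print(f"economic limit reached after about {t_lim / 365:.1f} years")
print(f"remaining reserves about {remaining / 1e3:.0f} thousand bbl")
```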
Another significant factor fueling market expansion is the growing complexity of reservoir management. Mature oilfields, unconventional resources, and enhanced oil recovery (EOR) projects require sophisticated analytical tools to assess well performance and reservoir potential. Decline curve analytics platforms provide advanced modeling capabilities that help operators understand production trends, identify underperforming wells, and optimize recovery strategies. The ability to integrate multiple data sources—such as historical production, pressure data, and geological information—enables a holistic approach to reservoir management. As companies increasingly focus on maximizing recovery and minimizing operational risks, the demand for comprehensive analytics platforms continues to rise.
Furthermore, the market benefits from the increasing regulatory and environmental scrutiny faced by the oil and gas industry. Regulatory bodies are mandating more transparent reporting and efficient resource utilization, prompting companies to adopt advanced analytics to demonstrate compliance and improve sustainability. Decline curve analytics platforms play a crucial role in providing the transparency and accountability required by regulators, investors, and other stakeholders. As the industry shifts toward more responsible resource management, the adoption of these platforms is expected to grow, particularly among independent operators and consulting firms seeking to differentiate themselves through technological innovation and operational excellence.
From a regional perspective, North America remains the dominant market for decline curve analytics platforms, driven by the extensive presence of unconventional oil and gas plays, particularly in the United States and Canada. The region's early adoption of digital technologies, coupled with a strong focus on shale development and enhanced recovery techniques, has created a fertile environment for analytics platform providers. Europe and the Asia Pacific regions are also witnessing significant growth, fueled by increasing investments in upstream activities and the need for efficient reservoir management in mature fields. As the global energy landscape evolves and operators seek to balance profitability with sustainability, the demand for advanced decline curve analytics platforms is expected to remain strong across all major regions.
The approximation of probability measures on compact metric spaces and in particular on Riemannian manifolds by atomic or empirical ones is a classical task in approximation and complexity theory with a wide range of applications. Instead of point measures we are concerned with the approximation by measures supported on Lipschitz curves. Special attention is paid to push-forward measures of Lebesgue measures on the unit interval by such curves. Using the discrepancy as distance between measures, we prove optimal approximation rates in terms of the curve’s length and Lipschitz constant. Having established the theoretical convergence rates, we are interested in the numerical minimization of the discrepancy between a given probability measure and the set of push-forward measures of Lebesgue measures on the unit interval by Lipschitz curves. We present numerical examples for measures on the 2- and 3-dimensional torus, the 2-sphere, the rotation group on R^3 and the Grassmannian of all 2-dimensional linear subspaces of R^4. Our algorithm of choice is a conjugate gradient method on these manifolds, which incorporates second-order information. For efficient gradient and Hessian evaluations within the algorithm, we approximate the given measures by truncated Fourier series and use fast Fourier transform techniques on these manifolds.
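To make the objects concrete, the sketch below discretizes the push-forward of the Lebesgue measure on [0,1] by a Lipschitz curve on the 2-torus and compares it with a target measure through a kernel discrepancy. The Gaussian kernel, the particular curve, and all parameters are assumptions for illustration; the paper's discrepancy kernels and manifolds are handled with different, specialized machinery (truncated Fourier series and FFTs).

```python
# Hedged sketch: empirical push-forward of Lebesgue measure on [0,1] by a
# Lipschitz curve on the 2-torus, compared to a target measure via a kernel
# discrepancy (Gaussian kernel assumed; not the paper's exact discrepancy).
import numpy as np

rng = np.random.default_rng(3)

def curve(t):
    """A closed Lipschitz curve on the torus [0,1)^2, parameterised by t in [0,1]."""
    return np.stack([(3 * t) % 1.0, (5 * t) % 1.0], axis=-1)

def torus_dist(x, y):
    d = np.abs(x[:, None, :] - y[None, :, :])
    d = np.minimum(d, 1.0 - d)                    # wrap-around distance per coordinate
    return np.sqrt((d ** 2).sum(-1))

def kernel_discrepancy(x, y, sigma=0.2):
    k = lambda a, b: np.exp(-torus_dist(a, b) ** 2 / (2 * sigma ** 2))
    return np.sqrt(k(x, x).mean() - 2 * k(x, y).mean() + k(y, y).mean())

t = (np.arange(1000) + 0.5) / 1000                 # equispaced samples of [0,1]
push_forward = curve(t)                            # empirical push-forward measure
target = rng.random((1000, 2))                     # samples of the uniform target

print(f"discrepancy = {kernel_discrepancy(push_forward, target):.4f}")
```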
https://www.shibatadb.com/license/data/proprietary/v1.0/license.txt
Network of 45 papers and 90 citation links related to "The complexity and stability of ecosystems".
https://www.shibatadb.com/license/data/proprietary/v1.0/license.txt
Network of 37 papers and 104 citation links related to "The knowledge complexity of quadratic residuosity languages".
According to our latest research, the global Capacity Market Demand Curve Modeling market size in 2024 stands at USD 1.27 billion. The market has demonstrated robust growth, driven primarily by the increasing complexity of electricity markets and the need for advanced grid reliability analysis. The market is expected to grow at a CAGR of 10.8% during the forecast period, reaching USD 3.04 billion by 2033. This expansion is underpinned by the integration of renewable energy sources, regulatory reforms, and the rising adoption of sophisticated modeling techniques to optimize capacity market outcomes.
One of the principal growth factors for the Capacity Market Demand Curve Modeling market is the accelerating transition towards renewable energy integration across global power grids. As governments and private entities push for decarbonization, the variability and unpredictability of renewables such as wind and solar have made demand curve modeling essential for maintaining grid stability and efficient capacity allocation. The need to balance intermittent generation with reliable supply has prompted utilities and independent power producers to invest in advanced software and hardware solutions, thereby fueling market growth. Additionally, the increasing complexity of market mechanisms, such as capacity auctions and forward contracts, necessitates precise modeling to optimize bidding strategies and ensure compliance with regulatory requirements.
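To ground the modeling task, the sketch below clears a sloped, administratively defined demand curve against a stack of capacity offers. The curve shape, prices, and quantities are illustrative placeholders loosely in the spirit of reliability-requirement-style curves, not the rules of any real capacity market.

```python
# Hedged sketch: clear a piecewise-linear capacity demand curve against a
# supply stack of capacity offers. All figures are illustrative placeholders.
import numpy as np

reliability_req = 10_000.0   # MW capacity target
net_cone = 100.0             # $/MW-day reference price

def demand_price(q):
    """Flat premium up to 95% of the requirement, sloping to zero at 105%."""
    return float(np.interp(q, [0.0, 0.95 * reliability_req, 1.05 * reliability_req],
                              [1.5 * net_cone, 1.5 * net_cone, 0.0]))

# Capacity offers as (price $/MW-day, quantity MW), sorted cheapest first.
offers = sorted([(20.0, 4_000), (60.0, 3_000), (90.0, 2_500), (140.0, 2_000)])

def supply_price(q):
    """Offer price of the marginal MW at cumulative quantity q (inf beyond total supply)."""
    cumulative = 0.0
    for price, qty in offers:
        cumulative += qty
        if q <= cumulative:
            return price
    return float("inf")

# Clear at the largest quantity where the demand curve still pays the marginal
# offer; the clearing price is then read off the demand curve.
grid = np.arange(0.0, sum(q for _, q in offers) + 1.0, 1.0)      # 1 MW resolution
mask = np.array([supply_price(q) <= demand_price(q) for q in grid])
cleared = grid[mask].max()
print(f"cleared {cleared:.0f} MW at about ${demand_price(cleared):.2f}/MW-day")
```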
Another significant driver is the evolution of regulatory frameworks and market structures in both established and emerging electricity markets. Regulatory bodies are increasingly mandating transparent and robust capacity market mechanisms to ensure long-term reliability and avoid supply shortfalls. This has led to a surge in demand for modeling services and software that can simulate different market scenarios, analyze policy impacts, and forecast future capacity needs. The growing emphasis on grid reliability, coupled with the need for accurate policy analysis, is pushing market participants to adopt sophisticated demand curve modeling tools that can accommodate a wide range of variables and uncertainties.
Furthermore, the proliferation of digital technologies, such as artificial intelligence, machine learning, and big data analytics, is revolutionizing the Capacity Market Demand Curve Modeling market. The adoption of these technologies enables more granular and dynamic modeling, allowing stakeholders to optimize their operational and investment decisions. The integration of real-time data feeds, advanced simulation engines, and cloud-based platforms is making demand curve modeling more accessible and scalable for a variety of end-users, including utilities, energy traders, and regulatory agencies. These innovations are expected to further accelerate market growth by enhancing the accuracy, speed, and flexibility of capacity market analyses.
Regionally, North America continues to dominate the global market, accounting for approximately 38% of the total market size in 2024, followed closely by Europe and Asia Pacific. The United States, in particular, has seen significant investments in capacity market infrastructure and modeling capabilities, driven by regulatory initiatives and the expansion of renewable energy portfolios. Europe is also witnessing rapid growth, fueled by the integration of cross-border electricity markets and ambitious decarbonization targets. Meanwhile, Asia Pacific is emerging as a high-growth region, supported by large-scale grid modernization projects and rising electricity demand. Latin America and the Middle East & Africa are gradually adopting capacity market mechanisms, presenting new opportunities for market expansion in the coming years.
The Component segment of the Capacity Market Demand Curve Modeling market is categorized into Software, Services, and Hardware. Software solutions for