https://www.datainsightsmarket.com/privacy-policy
The no-code development platform market is experiencing rapid growth, driven by the increasing demand for faster application development, reduced IT costs, and the expansion of citizen development initiatives. The market, estimated at $15 billion in 2025, is projected to maintain a robust Compound Annual Growth Rate (CAGR) of 25% throughout the forecast period (2025-2033). This growth is fueled by several key factors. Businesses are increasingly seeking to accelerate digital transformation efforts, and no-code platforms offer a powerful solution by enabling faster development cycles and reduced reliance on scarce skilled developers. The rise of citizen developers—individuals within organizations who create applications without extensive coding skills—further contributes to market expansion. Furthermore, the increasing availability of user-friendly, intuitive platforms with robust functionalities is making no-code development accessible to a wider range of users, including small and medium-sized enterprises (SMEs). The market's segmentation reflects diverse needs, encompassing platforms tailored for specific industries and application types. Leading players like Ninox, AppSheet, Appy Pie, and Microsoft Power Apps are constantly innovating, enhancing features, and expanding their market reach through strategic partnerships and acquisitions, further intensifying competition and driving market expansion. However, despite the significant growth potential, certain challenges hinder market penetration. Security concerns surrounding data integrity and application vulnerabilities remain a key restraint. Integration complexities with existing enterprise systems can also present obstacles for adoption. Moreover, the lack of customization options in some no-code platforms might limit their suitability for complex applications, potentially pushing users towards traditional coding methods. 
Addressing these concerns through robust security measures, improved integration capabilities, and enhanced customization features will be crucial for continued market growth and widespread adoption of no-code development platforms. The market's future growth trajectory will likely depend on continuous innovation, addressing security and integration concerns, and expanding the range of application possibilities offered by these platforms.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Smart home automation applications demand efficient energy management to limit unnecessary power consumption of home appliances while maintaining maximum occupant comfort. A three-tier, fog-computing-based gateway node at the network edge can provide wireless solutions for real-time remote supervision, aggregated data management with cloud platforms, and autonomous control of home apparatus, limiting unnecessary energy consumption with reduced network traffic and low latency. Historical data need to be gathered through a series of experimental analyses to investigate the performance of such gateway nodes in various aspects, such as platform load profiles (current and power consumption measurements), platform resource utilization (CPU and network bandwidth usage), and response time requirements. Measurements were carried out by deploying the gateway on Intel NUC and RPi 3B+ platforms under three test case scenarios. The corresponding raw measurement data have been recorded and reported in this dataset.
Dataset Summary:
1. RF Network Performance: Raw data of the received data packets and RSSI in dBm of the Gateway Node.
2. Current Consumption: Current consumption measurements in amperes (A) of the RPi-deployed Gateway Node.
3. Power Consumption: Power consumption measurements in watts (W) of the RPi- and NUC-deployed Gateway Nodes.
4. Historic CPU Temperature: Raw data of the historical CPU temperature measurements in degrees Celsius (°C) of the RPi-deployed Gateway Node.
5. Historic Network Utilization: Historical network bandwidth utilization measurements in megabytes per second (MB/s) of the RPi and NUC Gateway Nodes.
6. CPU Consumption: Percentage of platform CPU consumption of the RPi- and NUC-deployed Gateway Nodes.
7. Response Time Analysis: ThingSpeak and ThingsSentral cloud response time measurements of the RPi- and NUC-deployed Gateway web requests.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
With the ongoing energy transition, power grids are evolving fast. They operate more and more often close to their technical limits, under increasingly volatile conditions. Fast, essentially real-time computational approaches to evaluate their operational safety, stability, and reliability are therefore highly desirable. Machine learning methods have been advocated to solve this challenge; however, they are heavy consumers of training and testing data, while historical operational data for real-world power grids are hard, if not impossible, to access.
This dataset contains long time series for production, consumption, and line flows, amounting to 20 years of data with a time resolution of one hour, for several thousand loads and several hundred generators of various types representing the ultra-high-voltage transmission grid of continental Europe. The synthetic time series have been statistically validated against real-world data.
The algorithm is described in a Nature Scientific Data paper. It relies on the PanTaGruEl model of the European transmission network -- the admittance of its lines as well as the location, type and capacity of its power generators -- and aggregated data gathered from the ENTSO-E transparency platform, such as power consumption aggregated at the national level.
The network information is encoded in the file europe_network.json. It is given in PowerModels format, which is itself derived from MatPower and compatible with PandaPower. The network features 7822 power lines and 553 transformers connecting 4097 buses, to which are attached 815 generators of various types.
The time series forming the core of this dataset are given in CSV format. Each CSV file is a table with 8736 rows, one for each hourly time step of a 364-day year. All years are truncated to exactly 52 weeks of 7 days, and start on a Monday (the load profiles are typically different during weekdays and weekends). The number of columns depends on the type of table: there are 4097 columns in load files, 815 for generators, and 8375 for lines (including transformers). Each column is described by a header corresponding to the element identifier in the network file. All values are given in per-unit, both in the model file and in the tables, i.e. they are multiples of a base unit taken to be 100 MW.
There are 20 tables of each type, labeled with a reference year (2016 to 2020) and an index (1 to 4), zipped into archive files arranged by year. This amounts to a total of 20 years of synthetic data. When using load, generator, and line profiles together, it is important to use the same label: for instance, the files loads_2020_1.csv, gens_2020_1.csv, and lines_2020_1.csv represent the same year of the dataset, whereas gens_2020_2.csv is unrelated (it actually shares some features, such as nuclear profiles, but it is based on a dispatch with distinct loads).
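Since all values are expressed in per-unit with a 100 MW base, converting any table to physical units is a single multiplication. A minimal sketch, using a small synthetic table in place of a real file such as gens_2020_1.csv (the column names here are illustrative, not the identifiers from the network file):

```python
import pandas as pd

# Synthetic stand-in for a generator table such as gens_2020_1.csv;
# column names are illustrative, not real element identifiers.
gens_pu = pd.DataFrame({'gen_1': [0.50, 0.75], 'gen_2': [1.20, 1.10]})

BASE_MW = 100.0              # per-unit base stated in the dataset description
gens_mw = gens_pu * BASE_MW  # 0.50 p.u. -> 50 MW, etc.
```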
The time series can be used without reference to the network file, simply using all or a selection of columns of the CSV files, depending on the needs. We show below how to select series from a particular country, or how to aggregate hourly time steps into days or weeks. These examples use Python and the data analysis library pandas, but other frameworks can be used as well (Matlab, Julia). Since all the yearly time series are periodic, it is always possible to define a coherent time window modulo the length of the series.
This example illustrates how to select generation data for Switzerland in Python. This can be done without parsing the network file, but using instead gens_by_country.csv, which contains a list of all generators for any country in the network. We start by importing the pandas library, and read the column of the file corresponding to Switzerland (country code CH):
import pandas as pd
CH_gens = pd.read_csv('gens_by_country.csv', usecols=['CH'], dtype=str)
The object created in this way is a DataFrame with some null values (not all countries have the same number of generators). It can be turned into a list with:
CH_gens_list = CH_gens.dropna().squeeze().to_list()
Finally, we can import all the time series of Swiss generators from a given data table with
CH_gens_ts = pd.read_csv('gens_2016_1.csv', usecols=CH_gens_list)
The same procedure can be applied to loads using the list contained in the file loads_by_country.csv.
This second example shows how to change the time resolution of the series. Suppose that we are interested in all the loads from a given table, which are given by default with a one-hour resolution:
hourly_loads = pd.read_csv('loads_2018_3.csv')
To get a daily average of the loads, we can use:
daily_loads = hourly_loads.groupby([t // 24 for t in range(24 * 364)]).mean()
This results in series of length 364. To average further over entire weeks and get series of length 52, we use:
weekly_loads = hourly_loads.groupby([t // (24 * 7) for t in range(24 * 364)]).mean()
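The same aggregation can also be done with pandas' own time machinery: attach a synthetic hourly DatetimeIndex starting on a Monday and use resample. A sketch, with a synthetic one-column table standing in for a real load file (the start date 2018-01-01, a Monday, is an assumption matching the dataset's week alignment):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for a yearly load table; replace with
# pd.read_csv('loads_2018_3.csv') for the real data.
hourly_loads = pd.DataFrame({'load_1': np.arange(24 * 364, dtype=float)})

# 2018-01-01 is a Monday, so the index matches the dataset's 52-week layout.
hourly_loads.index = pd.date_range('2018-01-01', periods=24 * 364, freq='h')

daily_loads = hourly_loads.resample('D').mean()    # 364 rows
weekly_loads = hourly_loads.resample('7D').mean()  # 52 rows
```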
The code used to generate the dataset is freely available at https://github.com/GeeeHesso/PowerData. It consists of two packages and several documentation notebooks. The first package, written in Python, provides functions to handle the data and to generate synthetic series based on historical data. The second package, written in Julia, is used to perform the optimal power flow. The documentation, in the form of Jupyter notebooks, contains numerous examples of how to use both packages. The entire workflow used to create this dataset is also provided, starting from raw ENTSO-E data files and ending with the synthetic dataset given in the repository.
This work was supported by the Cyber-Defence Campus of armasuisse and by an internal research grant of the Engineering and Architecture domain of HES-SO.
https://www.archivemarketresearch.com/privacy-policy
The no-code development platform market is experiencing robust growth, driven by the increasing demand for rapid application development and the need to bridge the widening skills gap in software development. Businesses of all sizes are adopting these platforms to streamline their operations, build custom applications faster, and reduce reliance on expensive and scarce skilled developers. While precise market sizing data is unavailable, a reasonable estimation based on industry trends and the presence of numerous established and emerging players suggests a 2025 market size of approximately $15 billion USD. Considering a conservative Compound Annual Growth Rate (CAGR) of 25% observed in recent years, the market is projected to reach over $60 billion by 2033. This growth is fueled by several key drivers including the rising popularity of citizen development, the need for increased agility and faster time-to-market for applications, and the expanding integration capabilities of these platforms. The market also benefits from the increasing adoption of cloud-based solutions and the growing emphasis on digital transformation across various industries. However, the market also faces some restraints. These include concerns around data security and integration complexities with legacy systems. The market is highly fragmented, with numerous vendors offering diverse functionalities and pricing models, leading to potential challenges in selecting the right platform. Despite these limitations, the continued innovation in no-code platforms, including advancements in AI-powered development features and improved user experiences, is expected to drive further adoption and propel the market towards significant expansion in the coming years. 
Key players like Ninox, AppSheet, Appy Pie, and Microsoft Power Apps are continuously evolving their offerings to cater to the ever-growing demand, while smaller companies are emerging with niche functionalities, further stimulating competition and innovation. The segmentation of the market across various industries and deployment models (cloud, on-premise) also contributes to its diversified nature and overall growth potential.
https://www.datainsightsmarket.com/privacy-policy
The low-code development platform market is experiencing robust growth, driven by the increasing demand for rapid application development and the need to bridge the widening skills gap in software development. The market, estimated at $15 billion in 2025, is projected to exhibit a Compound Annual Growth Rate (CAGR) of 20% between 2025 and 2033, reaching an estimated $50 billion by 2033. This expansion is fueled by several key factors. Firstly, SMEs are increasingly adopting low-code platforms to rapidly build and deploy applications, overcoming resource constraints and accelerating digital transformation initiatives. Secondly, large enterprises leverage these tools to streamline internal processes, improve operational efficiency, and support agile development methodologies. The prevalence of cloud-based solutions contributes significantly to market growth, offering scalability, accessibility, and reduced infrastructure costs. Furthermore, the continuous innovation in features and functionalities within these platforms, including advanced integrations and AI capabilities, is further attracting a wider range of users. However, the market also faces challenges. Security concerns surrounding data privacy and application vulnerabilities remain a significant restraint. The lack of customization options compared to traditional coding methodologies also limits adoption in some segments. Despite these constraints, the ongoing trend towards digital transformation across various industries, coupled with the increasing availability of user-friendly low-code platforms, is expected to propel market growth in the coming years. The competitive landscape is characterized by a mix of established technology vendors like Microsoft and Salesforce, alongside specialized low-code platform providers like OutSystems and Mendix. The market segmentation across application types (SMEs vs. Large Enterprises) and deployment models (Cloud vs. On-premises) further underscores the diverse needs and adoption patterns across different user groups.
Abstract: In current computing systems, many applications require guarantees that their maximum power consumption will not exceed the available power budget. On the other hand, for some applications it is possible to decrease performance, while maintaining an acceptable level, in order to reduce power consumption. To provide such guarantees, a possible solution consists of changing the number of cores assigned to the application, their clock frequency, and the placement of application threads over the cores. However, power consumption and performance have different trends depending on the application considered and on its input. Finding a configuration of resources satisfying user requirements is, in the general case, a challenging task. In this paper we propose Nornir, an algorithm to automatically derive, without relying on historical data about previous executions, performance and power consumption models of an application in different configurations. By using these models, we are able to select a close-to-optimal configuration for the given user requirement, either performance or power consumption. The configuration of the application is changed on-the-fly throughout the execution to adapt to workload fluctuations, external interference, and/or the application's phase changes. We validate the algorithm by simulating it over the applications of the PARSEC benchmark suite. Then, we implement our algorithm and analyse its accuracy and overhead over some of these applications in a real execution environment. Eventually, we compare the quality of our proposal with that of the optimal algorithm and of some state-of-the-art solutions. This dataset contains the raw data of the experiments and the scripts used to plot them.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This paper presents the topology and machine-learning-based intelligent control of a high-power PV inverter for maximum power extraction and optimal energy utilization. Modular converters with reduced component counts are economical and reliable for high-power applications. The proposed integrated, machine-learning-based intelligent control delivers power conversion control with maximum power extraction and supervisory control for optimal load demand control. The topology of the inverter, operating modes, power control, and supervisory control aspects are presented. Simulation is carried out in MATLAB/Simulink to verify the feasibility of the proposed inverter and control algorithm. An experimental study is presented to validate the simulation results. The operational performance of the proposed topology is evaluated in terms of operational parameters such as regulation of output power and load relay control, and is compared to existing topologies. The economic performance is also evaluated in terms of power switch sizing and reliability in power delivery with respect to switch or power source failure.
In the realm of real estate data solutions, BatchData Property Data Search API emerges as a technical marvel, tailored for product and engineering leadership seeking robust and scalable solutions. This purpose-built API seamlessly integrates diverse datasets, offering over 600 data points, to provide a holistic view of property characteristics, valuation, homeowner information, listing data, county assessor details, photos, and foreclosure information. With state-of-the-art infrastructure and performance features, BatchData sets the standard for efficiency, reliability, and developer satisfaction.
Unraveling the Technical Prowess of BatchData Property Data Search API:
State-of-the-Art Infrastructure: At the heart of BatchData lies a state-of-the-art infrastructure that leverages the latest technologies available. Our systems are engineered to handle increased loads and growing datasets with ease, ensuring optimal performance without significant degradation. This commitment to technological advancement ensures that our data infrastructure and API systems operate at peak efficiency, even in the face of evolving demands and complexities.
Integration Capabilities: BatchData boasts integration capabilities that are second to none, thanks to our innovative data lake house architecture. This architecture empowers us to seamlessly integrate our data with any data platforms or pipelines in a matter of minutes. Whether it's connecting with existing data systems, third-party applications, or internal pipelines, our API offers limitless integration possibilities, enabling product and engineering teams to unlock the full potential of property data with minimal effort.
Developer Documentation: One of the hallmarks of BatchData is our clear and comprehensive developer documentation, which developers love. We understand the importance of providing developers with the resources they need to integrate our API seamlessly into their projects. Our documentation offers detailed guides, code samples, API reference materials, and best practices, empowering developers to hit the ground running and leverage the full capabilities of BatchData with confidence.
Performance Features: BatchData Property Search API is engineered for performance, delivering lightning-fast response times and seamless scalability. Our API is designed to efficiently handle increased loads and growing datasets, ensuring that users experience minimal latency and maximum reliability. Whether it's retrieving property data, conducting complex queries, or accessing real-time updates, our API delivers exceptional performance, empowering product and engineering teams to build high-performance applications and systems with ease. BatchData's APIs work for both residential real estate data and commercial real estate data.
Common Use Cases for BatchData Property Data Search API:
Powering Data-Driven Applications: Product and engineering teams can leverage BatchData Property Data Search API to power data-driven applications tailored for the real estate industry. Whether it's building real estate websites, mobile applications, or internal tools, our API offers comprehensive property data that can drive informed decision-making, enhance user experiences, and streamline operations.
Enabling Advanced Analytics: With BatchData, product and engineering leaders can unlock the power of advanced analytics and reporting capabilities. Our API provides access to rich property data, enabling analysts and researchers to uncover insights, identify trends, and make data-driven recommendations with confidence. Whether it's analyzing market trends, evaluating investment opportunities, or conducting competitive analysis, BatchData empowers teams to derive actionable insights from vast property datasets.
Optimizing Data Infrastructure: BatchData Property Data Search API can play a pivotal role in optimizing data infrastructure within organizations. By seamlessly integrating our API with existing data platforms and pipelines, product and engineering teams can streamline data workflows, improve data accessibility, and enhance overall data infrastructure efficiency. Our API's integration capabilities and performance features ensure that organizations can leverage property data seamlessly across their data ecosystem, driving operational excellence and innovation.
Conclusion: BatchData Property Data Search API stands at the forefront of real estate data solutions, offering product and engineering leaders a comprehensive, scalable, and high-performance API for accessing property data. With state-of-the-art infrastructure, seamless integration capabilities, clear developer documentation, and exceptional performance features, BatchData empowers teams to build data-driven applications, optimize data infrastructure, and unlock actionable insights with ease. As the real estate industry continues to evolve, BatchData remains committed to delivering innovative sol...
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Theoretical literature in finance has shown that the risk of financial time series can be well quantified by their expected shortfall, also known as the tail value-at-risk. In this paper, I construct a parametric estimator for the expected shortfall based on a flexible family of densities, called the asymmetric power distribution (APD). The APD family extends the generalized power distribution to cases where the data exhibits asymmetry. The first contribution of the paper is to provide a detailed description of the properties of an APD random variable, such as its quantiles and expected shortfall. The second contribution of the paper is to derive the asymptotic distribution of the APD maximum likelihood estimator (MLE) and construct a consistent estimator for its asymptotic covariance matrix. The latter is based on the APD score whose analytic expression is also provided. A small Monte Carlo experiment examines the small sample properties of the MLE and the empirical coverage of its confidence intervals. An empirical application to four daily financial market series reveals that returns tend to be asymmetric, with innovations which cannot be modeled by either Laplace (double-exponential) or Gaussian distribution, even if we allow the latter to be asymmetric. In an out-of-sample exercise, I compare the performances of the expected shortfall forecasts based on the APD-GARCH, Skew-t-GARCH and GPD-EGARCH models. While the GPD-EGARCH 1% expected shortfall forecasts seem to outperform the competitors, all three models perform equally well at forecasting the 5% and 10% expected shortfall.
NASA's goal in Earth science is to observe, understand, and model the Earth system to discover how it is changing, to better predict change, and to understand the consequences for life on Earth. The Applied Sciences Program, within the Earth Science Division of the NASA Science Mission Directorate, serves individuals and organizations around the globe by expanding and accelerating societal and economic benefits derived from Earth science, information, and technology research and development.
The Prediction Of Worldwide Energy Resources (POWER) Project, funded through the Applied Sciences Program at NASA Langley Research Center, gathers NASA Earth observation data and parameters related to the fields of surface solar irradiance and meteorology to serve the public in several free, easy-to-access and easy-to-use methods. POWER helps communities become resilient amid observed climate variability by improving data accessibility, aiding research in energy development, building energy efficiency, and supporting agriculture projects.
The POWER project contains over 380 satellite-derived meteorology and solar energy Analysis Ready Data (ARD) parameters at four temporal levels: hourly, daily, monthly, and climatology. The POWER data archive provides data at the native resolution of the source products. The data is updated nightly to maintain near-real-time availability (2-3 days for meteorological parameters and 5-7 days for solar). The POWER services catalog consists of a series of RESTful Application Programming Interfaces, geospatially enabled image services, and a web mapping Data Access Viewer. These three service offerings support data discovery, access, and distribution to the project's user base, as ARD and as direct application inputs to decision support tools.
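As an illustration of the RESTful interface, the sketch below constructs a daily point-data query URL. The endpoint and parameter names follow the publicly documented POWER API, but treat them as assumptions and consult the official documentation before use:

```python
from urllib.parse import urlencode

# Hypothetical daily point query; endpoint path and parameter names are
# assumptions based on the public POWER API documentation.
base = 'https://power.larc.nasa.gov/api/temporal/daily/point'
params = {
    'parameters': 'ALLSKY_SFC_SW_DWN,T2M',  # surface irradiance, 2 m air temperature
    'community': 'RE',                       # renewable-energy community
    'latitude': 36.07,
    'longitude': -79.16,
    'start': '20200101',
    'end': '20201231',
    'format': 'JSON',
}
url = f'{base}?{urlencode(params)}'
# Fetch with any HTTP client, e.g. requests.get(url).json()
```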
The latest data version update includes hourly-based source ARD, in addition to enhanced daily, monthly, annual, and climatology data. The daily time series for meteorology is available from 1981, while solar-based parameters start in 1984. The hourly source data are from the Clouds and the Earth's Radiant Energy System (CERES) and the Global Modeling and Assimilation Office (GMAO), spanning from 1984 for meteorology and from 2001 for solar-based parameters. The hourly data equip users with the ARD needed to model building system energy performance, providing information directly usable by decision support tools and introducing the industry-standard EnergyPlus Weather file format.
Data Center Rack PDU Market Size 2024-2028
The data center rack PDU market size is forecast to increase by USD 983.8 million at a CAGR of 8% between 2023 and 2028. The market is experiencing significant growth due to several key drivers. The increasing demand for edge data centers, which require efficient power management solutions, is one such factor. Additionally, the surge in mobile data traffic and the resulting need for more data consumption capacity is driving the market. Internet of Things (IoT) platforms and the proliferation of smart electric devices are also contributing to increased power usage in server rooms. To address these challenges, advanced smart PDUs are gaining popularity due to their ability to monitor and manage power consumption in real time. The proliferation of data-driven applications, cloud microservices, and IoT platforms is driving the need for edge data centers, which require PDUs that can efficiently manage power distribution and cooling systems. These devices offer precise power distribution and energy efficiency, making them an essential component of modern data center infrastructure. Overall, the market is expected to continue growing as businesses seek to optimize their power usage and reduce costs.
The market is evolving rapidly with the rise of digital traffic and data-driven applications. Modern data centers require reliable power distribution and efficient equipment cooling solutions to handle increasing bandwidth demands. Cloud-based services and IoT integration have driven the adoption of smart PDUs, which offer real-time monitoring and advanced control over power usage, ensuring optimal performance and reducing energy consumption. Fiber optic lines and copper wires are essential for high-speed data transmission, while electric devices within server rooms require constant monitoring and protection. With application dominance in sectors like cloud computing and e-commerce, the need for robust PDUs, capable of managing power distribution and cooling effectively, has never been greater. A smart PDU in a server room, integrated with cloud-based services and the IoT, enables real-time monitoring of electric devices, optimizing energy usage and supporting data-driven applications for efficient power management.
Market Segmentation
The market research report provides comprehensive data (region-wise segment analysis), with forecasts and estimates in 'USD million' for the period 2024-2028, as well as historical data from 2018-2022 for the following segments.
Product
Non-intelligent rack PDU
Intelligent rack PDU
Type
Colocation
Hosting
Geography
North America
Canada
US
APAC
China
Europe
Germany
Italy
South America
Middle East and Africa
By Product Insights
The non-intelligent rack PDU segment is estimated to witness significant growth during the forecast period. Although the segment's market share is declining relative to intelligent PDUs, demand for rack-mounted PDUs continues to rise. This trend is driven by the advantages they offer, including efficient use of space and cost-effective power distribution to network switches, servers, and other electronic devices.
The increasing adoption of cloud computing by Small and Medium Enterprises (SMEs) is leading to the proliferation of mini data centers, where basic PDUs are commonly utilized for power management. However, the market share of basic PDUs lags behind that of intelligent PDUs due to their limitations, such as the lack of remote access and monitoring capabilities. Despite this, the cost-effectiveness of basic PDUs makes them a popular choice for many organizations seeking to optimize their IT services.
The non-intelligent rack PDU segment accounted for USD 875.00 million in 2018 and showed a gradual increase during the forecast period.
Regional Insights
North America is estimated to contribute 37% to the growth of the global market during the forecast period. Technavio's analysts have elaborately explained the regional trends and drivers that shape the market during the forecast period.
In North America, the expansion of data centers is on the rise, fueled by substantial investments from cloud service providers, colocation companies, and businesses seeking to enhance their IT infrastructure. Edge computing, 5G, multi-cloud services, data analytics, and the Internet of Things (IoT) are key drivers of this growth. The US, as a leading data center hub in North America, hosts major data center markets in cities such as Atlanta, Northern Virginia, Chicago, Dallas/Ft. Worth, and Silicon Valley. Notable companies have announced their plans to expand, further boosting the server market in the region.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The traditional method for power load forecasting is susceptible to various factors, including holidays, seasonal variations, weather conditions, and more. These factors make it challenging to ensure the accuracy of forecasting results. Additionally, there is a limitation in extracting meaningful physical signs from power data, which ultimately reduces prediction accuracy. This paper aims to address these issues by introducing a novel approach called VCAG (Variable Mode Decomposition—Convolutional Neural Network—Attention Mechanism—Gated Recurrent Unit) for combined power load forecasting. In this approach, we integrate Variable Mode Decomposition (VMD) with Convolutional Neural Network (CNN). VMD is employed to decompose power load data, extracting valuable time-frequency features from each component. These features then serve as input for the CNN. Subsequently, an attention mechanism is applied to give importance to specific features generated by the CNN, enhancing the weight of crucial information. Finally, the weighted features are fed into a Gated Recurrent Unit (GRU) network for time series modeling, ultimately yielding accurate load forecasting results. To validate the effectiveness of our proposed model, we conducted experiments using two publicly available datasets. The results of these experiments demonstrate that our VCAG method achieves high accuracy and stability in power load forecasting, effectively overcoming the limitations associated with traditional forecasting techniques. As a result, this approach holds significant promise for broad applications in the field of power load forecasting.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset corresponds to data extracted from a model of the 60 W MSX-60 solar panel. The PV model was obtained using particle swarm optimisation. The accuracy of the model is excellent, as it was benchmarked against experimental curves provided by the manufacturer. Specifically, the dataset contains 27x399 data points covering variables such as temperature, irradiance, maximum-power voltage, maximum-power current, open-circuit voltage, short-circuit current, and others. The data were generated for the design and implementation of a solar Photovoltaic Emulator (PVE). The authors suggest that the data offer a broad basis for solar panel research, such as MPPT and PV performance studies.
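PV models of the kind fitted here are commonly based on the single-diode equivalent circuit, where the panel current solves an implicit equation. As a rough sketch (the single-diode form is an assumption, not necessarily the authors' exact model, and all parameter values below are hypothetical placeholders rather than MSX-60 datasheet values):

```python
import math

# Hypothetical single-diode parameters (placeholders, not MSX-60 values).
I_PH = 3.8      # photogenerated current (A)
I_0 = 1e-7      # diode saturation current (A)
N = 1.3         # diode ideality factor
NS = 36         # cells in series
R_S = 0.3       # series resistance (ohm)
R_SH = 200.0    # shunt resistance (ohm)
V_T = 0.0257    # thermal voltage per cell at ~25 C (V)

def panel_current(v, iters=80):
    """Solve I = Iph - I0*(exp((V + I*Rs)/(Ns*n*Vt)) - 1) - (V + I*Rs)/Rsh
    for the panel current at terminal voltage v (fixed-point iteration)."""
    i = I_PH
    vt_mod = NS * N * V_T
    for _ in range(iters):
        vd = v + i * R_S
        f = I_PH - I_0 * (math.exp(vd / vt_mod) - 1.0) - vd / R_SH
        i = 0.5 * i + 0.5 * f  # damped step keeps the iteration stable near Voc
    return i

# Near short circuit the current approaches the photogenerated current.
i_sc = panel_current(0.0)
```

A PVE would evaluate such a model in real time to reproduce the panel's I-V curve; Newton's method is the usual choice over fixed-point iteration when speed matters.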
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains technical specifications and performance criteria for various electric vehicle (EV) models. A total of 15 different EV models have been evaluated, each against 20 criteria, which are categorized into cost and benefit criteria. Below is a detailed description of the key criteria included in the dataset:
Price: the selling price of the vehicle (cost criterion).
Combined Consumption in Mild Weather: energy consumption under mild weather conditions (cost criterion).
Acceleration: the time to accelerate from 0 to 100 km/h (benefit criterion).
Top Speed: the maximum speed the vehicle can achieve (benefit criterion).
Total Power: the total power output capacity (benefit criterion).
Total Torque: the maximum torque the vehicle can generate (benefit criterion).
Usable Battery Capacity: the usable battery capacity (benefit criterion).
Warranty Period: the warranty period offered (benefit criterion).
Charge Power (10-80%): the power at which the vehicle can charge from 10% to 80% (benefit criterion).
Charge Time: the time required to reach a given charge level (cost criterion).
Charge Speed: the speed at which the vehicle charges (benefit criterion).
WLTP Range: the driving range determined by the Worldwide Harmonized Light Vehicles Test Procedure (WLTP) (benefit criterion).
WLTP Rated Consumption: energy consumption according to WLTP standards (cost criterion).
Adult Occupant Safety: safety performance for adult occupants (benefit criterion).
Child Occupant Safety: safety performance for child occupants (benefit criterion).
Vulnerable Road Users Protection: performance in protecting vulnerable road users such as pedestrians and cyclists (benefit criterion).
Safety Assist: the safety assist systems provided (benefit criterion).
Maximum Payload: the maximum payload capacity (benefit criterion).
Cargo Volume: the cargo volume capacity (benefit criterion).
Unladen Weight (EU): the unladen weight as per EU standards (cost criterion).
This dataset provides a comprehensive overview of the factors that can influence the decision-making process when selecting an electric vehicle, balancing both cost and benefit criteria.
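A decision matrix of this shape lends itself to multi-criteria ranking. One simple approach (not prescribed by the dataset, just a common baseline) is weighted-sum scoring with min-max normalization, inverting cost criteria so that higher is always better. The vehicles, criteria, and weights below are toy values, not entries from the dataset:

```python
import numpy as np

# Toy decision matrix: 3 hypothetical EVs x 3 criteria
# (price in EUR, WLTP range in km, top speed in km/h). Illustrative only.
X = np.array([
    [40000.0, 420.0, 160.0],
    [55000.0, 520.0, 210.0],
    [35000.0, 350.0, 150.0],
])
is_benefit = np.array([False, True, True])  # price is a cost criterion
weights = np.array([0.4, 0.4, 0.2])         # must sum to 1

# Min-max normalize each column, then flip cost columns.
lo, hi = X.min(axis=0), X.max(axis=0)
norm = (X - lo) / (hi - lo)
norm[:, ~is_benefit] = 1.0 - norm[:, ~is_benefit]

scores = norm @ weights
ranking = np.argsort(-scores)  # indices of vehicles, best first
```

With 20 criteria the same code applies unchanged; only the weight vector and the cost/benefit mask grow.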
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This article proposes a new method to extend a family of life distributions by adding a parameter, increasing the family's flexibility; the result is called the extended Modi-G family of distributions. We derive the general statistical properties of the proposed family. Several methods are presented for estimating its parameters: maximum likelihood, ordinary least squares, weighted least squares, Anderson-Darling, right-tailed Anderson-Darling, Cramér-von Mises, and maximum product of spacings. A special three-parameter sub-model, the extended Modi exponential distribution, is derived, along with the different shapes of its density and hazard functions. Randomly generated data sets and the different estimation methods are used to illustrate the behavior of the parameters of the proposed sub-model. To illustrate the advantages of the proposed family over other well-known families, applications to medicine and geology data sets are analyzed.
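Of the estimators listed above, maximum product of spacings is the least familiar: it maximizes the (log of the) product of CDF spacings between consecutive order statistics. A minimal sketch for a plain exponential distribution (the extended Modi exponential itself is not reproduced here; the simulated data, rate grid, and seed are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
true_rate = 2.0
x = np.sort(rng.exponential(scale=1.0 / true_rate, size=300))

def mps_objective(rate, xs):
    """Mean log-spacing of F(x) = 1 - exp(-rate*x), including the
    boundary spacings F(x_(1)) - 0 and 1 - F(x_(n))."""
    cdf = 1.0 - np.exp(-rate * xs)
    spacings = np.diff(np.concatenate(([0.0], cdf, [1.0])))
    spacings = np.clip(spacings, 1e-300, None)  # guard against log(0)
    return np.log(spacings).mean()

# Crude grid search over the rate parameter; a real application would
# use a proper numerical optimizer.
grid = np.linspace(0.1, 10.0, 2000)
est = grid[np.argmax([mps_objective(r, x) for r in grid])]
```

For a multi-parameter family such as the extended Modi exponential, the same objective is maximized jointly over all parameters, which is where MPS's robustness to unbounded likelihoods becomes useful.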
Addressing these concerns through robust security measures, improved integration capabilities, and enhanced customization options will be crucial for continued growth and widespread adoption of no-code development platforms. The market's trajectory will likely depend on continuous innovation and on expanding the range of applications these platforms can support.