Matlab has a reputation for running slowly. Here are some pointers on how to speed computations, to an often unexpected degree. Subjects currently covered: Matrix Coding; Implicit Multithreading on a Multicore Machine; Sparse Matrices; Sub-Block Computation to Avoid Memory Overflow.

Matrix Coding - 1

Matlab documentation notes that efficient computation depends on using the matrix facilities, and that mathematically identical algorithms can have very different runtimes, but it is a bit coy about just what these differences are. A simple but telling example: the following is the core of the GD-CLS algorithm of Berry et al., copied from Fig. 1 of Shahnaz et al., 2006, "Document clustering using nonnegative matrix factorization":

    for jj = 1:maxiter
        A = W'*W + lambda*eye(k);
        for ii = 1:n
            b = W'*V(:,ii);
            H(:,ii) = A \ b;
        end
        H = H .* (H>0);
        W = W .* (V*H') ./ (W*(H*H') + 1e-9);
    end

Replacing the columnwise update of H with a matrix update gives:

    for jj = 1:maxiter
        A = W'*W + lambda*eye(k);
        B = W'*V;
        H = A \ B;
        H = H .* (H>0);
        W = W .* (V*H') ./ (W*(H*H') + 1e-9);
    end

These were tested on an 8049 x 8660 sparse bag-of-words matrix V (fraction of non-zeros 0.0083), with W of size 8049 x 50, H of size 50 x 8660, maxiter = 50, lambda = 0.1, and identical initial W. They were run consecutively, multithreaded on an 8-processor Sun server, starting at about 7:30 PM, with tic-toc timing recorded. Runtimes were respectively 6586.2 and 70.5 seconds, a 93:1 difference. The maximum absolute pairwise difference between the resulting W matrix values was 6.6e-14.

Similar speedups have been consistently observed in other cases. In one algorithm, combining matrix operations with efficient use of the sparse matrix facilities gave a 3600:1 speedup. For speed alone, C-style iterative programming should be avoided wherever possible. In addition, when a couple of lines of matrix code can substitute for an entire C-style function, program clarity is much improved.

Matrix Coding - 2

Applied to integration, the speed gains are not so great, largely due to the time taken to set up and to deal with the boundaries; the anonymous function setup time is negligible. I demonstrate on a simple uniform-step, linearly interpolated 1-D integration of cos() from 0 to pi, which should yield zero:

    tic;
    step = .00001;
    fun = @cos;
    start = 0;
    endit = pi;
    enda = floor((endit - start)/step)*step + start;
    delta = (endit - enda)/step;
    intF = fun(start)/2;
    intF = intF + fun(endit)*delta/2;
    intF = intF + fun(enda)*(delta+1)/2;
    for ii = start+step:step:enda-step
        intF = intF + fun(ii);
    end
    intF = intF*step
    toc;

    intF = -2.910164109692914e-14
    Elapsed time is 4.091038 seconds.

Replacing the inner summation loop with the matrix equivalent speeds things up a bit:

    tic;
    step = .00001;
    fun = @cos;
    start = 0;
    endit = pi;
    enda = floor((endit - start)/step)*step + start;
    delta = (endit - enda)/step;
    intF = fun(start)/2;
    intF = intF + fun(endit)*delta/2;
    intF = intF + fun(enda)*(delta+1)/2;
    intF = intF + sum(fun(start+step:step:enda-step));
    intF = intF*step
    toc;

    intF = -2.868419946011613e-14
    Elapsed time is 0.141564 seconds.

The core computation takes
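A quick way to reproduce the flavour of the H-update comparison without the original 8049 x 8660 corpus is the minimal sketch below; the sizes and random data are illustrative assumptions, not the original benchmark:

    % Minimal sketch (illustrative sizes, random data -- not the original benchmark):
    % time the columnwise and matrix forms of the H-update and confirm they agree.
    m = 400; n = 500; k = 20; lambda = 0.1;
    V = sprand(m, n, 0.01);                 % sparse nonnegative stand-in for the bag of words
    W = rand(m, k);
    A = W'*W + lambda*eye(k);

    tic;                                    % columnwise solve, one column of H at a time
    H1 = zeros(k, n);
    for ii = 1:n
        H1(:,ii) = A \ (W'*V(:,ii));
    end
    tLoop = toc;

    tic;                                    % single matrix solve for all columns at once
    H2 = A \ (W'*V);
    tMat = toc;

    fprintf('loop %.4f s, matrix %.4f s, max|diff| = %g\n', ...
            tLoop, tMat, full(max(abs(H1(:) - H2(:)))));

Even at this small size the single matrix solve is typically faster, and the reported maximum difference should be at the level of rounding error, mirroring the 6.6e-14 figure above.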
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
These are the Matlab scripts used to import the data and perform the integration and complexity analysis on the dimension-reduced EEG data. Script files are in MATLAB .m format. An accompanying MS-Word .doc file explains the scripts.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Compositional data, which is data consisting of fractions or probabilities, is common in many fields including ecology, economics, physical science and political science. If these data would otherwise be normally distributed, their spread can be conveniently represented by a multivariate normal distribution truncated to the non-negative space under a unit simplex. Here this distribution is called the simplex-truncated multivariate normal distribution. For calculations on truncated distributions, it is often useful to obtain rapid estimates of their integral, mean and covariance; these quantities characterising the truncated distribution will generally possess different values to the corresponding non-truncated distribution.
In the paper Adams, Matthew (2022) Integral, mean and covariance of the simplex-truncated multivariate normal distribution. PLoS One, 17(7), Article number: e0272014. https://eprints.qut.edu.au/233964/, three different approaches that can estimate the integral, mean and covariance of any simplex-truncated multivariate normal distribution are described and compared. These three approaches are (1) naive rejection sampling, (2) a method described by Gessner et al. that unifies subset simulation and the Holmes-Diaconis-Ross algorithm with an analytical version of elliptical slice sampling, and (3) a semi-analytical method that expresses the integral, mean and covariance in terms of integrals of hyperrectangularly-truncated multivariate normal distributions, the latter of which are readily computed in modern mathematical and statistical packages. Strong agreement is demonstrated between all three approaches, but the most computationally efficient approach depends strongly both on implementation details and the dimension of the simplex-truncated multivariate normal distribution.
This dataset consists of all code and results for the associated article.
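For readers who want a concrete starting point, here is a minimal sketch of approach (1), naive rejection sampling; the mean, covariance and sample count are assumed example values rather than settings from the paper, and mvnrnd requires the Statistics and Machine Learning Toolbox:

    % Naive rejection sampling for a simplex-truncated multivariate normal
    % (approach 1 above). mu, Sigma and N are illustrative assumptions.
    mu    = [0.3 0.2];                      % example mean (row vector)
    Sigma = [0.05 0.01; 0.01 0.04];         % example covariance
    N     = 1e6;                            % number of proposal draws

    X    = mvnrnd(mu, Sigma, N);                 % draws from the untruncated normal
    keep = all(X >= 0, 2) & (sum(X, 2) <= 1);    % keep points under the unit simplex

    pEst     = mean(keep);                  % estimate of the truncation integral
    muEst    = mean(X(keep,:), 1);          % estimated truncated mean
    SigmaEst = cov(X(keep,:));              % estimated truncated covariance

The acceptance rate mean(keep) directly estimates the integral, so when it is very small the other two approaches are likely to be more efficient.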
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Matlab code that generates Figure 1D.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Matlab code and raw data
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
MATLAB code for optimization using neural networks, together with data and a manual.
https://www.gnu.org/licenses/gpl.html
Physiological waveforms - such as electrocardiograms (ECG), electroencephalograms (EEG), electromyograms (EMG) - are generated during the course of routine care. These signals contain information that can be used to understand underlying conditions of health. Effective processing and analysis of physiological data requires specialized software. The WaveForm DataBase (WFDB) Toolbox for MATLAB and Octave is a collection of over 30 functions and utilities that integrate PhysioNet's open-source applications and databases with the high-precision numerical computational and graphics environment of MATLAB and Octave.
http://www.gnu.org/licenses/gpl-3.0.en.html
A revised version of the Matlab implementations of the expansions for the Fermi-Dirac integral and its derivatives is presented. In the new version, our functions for computing the Kummer functions M(a, b, x) and U(a, b, x) are incorporated into the software. The algorithms for computing the Kummer functions are described in [1,2]. In this way, the implementations of the expansions for the Fermi-Dirac integral can be used in earlier Matlab versions and can be easily adapted to GNU Octave. The efficiency of the computations is also greatly improved.
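As an independent cross-check of values produced by such expansions (not the authors' implementation), the complete Fermi-Dirac integral F_j(x) = 1/Gamma(j+1) * integral from 0 to infinity of t^j / (exp(t - x) + 1) dt can be evaluated directly with Matlab's adaptive quadrature; j and x below are assumed example values:

    % Direct quadrature reference for the complete Fermi-Dirac integral F_j(x);
    % a cross-check sketch, not the authors' code. j and x are example values.
    j = 1/2;                                  % order (assumed example)
    x = 2.0;                                  % argument (assumed example)
    f  = @(t) t.^j ./ (exp(t - x) + 1);       % Fermi-Dirac integrand
    Fj = integral(f, 0, Inf) / gamma(j + 1)   % normalised by Gamma(j+1)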
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Excel file containing the topography of the CLV area written in ASCII format. (XLSX 1035 kb)
For an example, call:
>> Kegg2SBToolbox2('model_map.txt', 'reactions_compounds_final.csv', 'extracellular.txt', 'testmodel.txt')
where model_map.txt is the desired mapping of species, reactions_compounds_final.csv is the entire network, extracellular.txt is a manual mapping of which compounds are in the extracellular space, and testmodel.txt is the output model.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This package contains Matlab scripts used in the article: Mullakkal-Babu, F. A., Wang, M., van Arem, B., & Happee, R. (2016, November). Design and analysis of full range adaptive cruise control with integrated collision avoidance strategy. In 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC) (pp. 308-315). IEEE.
The code implements the control algorithm for a full-speed-range ACC with a collision avoidance function. It outputs the simulated ACC behavior in several scenarios, including stop-and-go, normal driving, emergency braking, and cut-in. Sensor delay and actuator lag values can be specified.
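As a purely generic illustration of where the sensor delay and actuator lag enter such a simulation (this is not the controller from the paper; the constant-time-gap law and all parameter values below are assumptions), a minimal car-following loop might look like:

    % Generic constant-time-gap ACC sketch with sensor delay and first-order
    % actuator lag. NOT the algorithm from the paper; all parameters assumed.
    dt = 0.01; N = 6000;                    % 60 s simulation at 100 Hz
    h = 1.2; d0 = 3;                        % time gap [s], standstill distance [m]
    k1 = 0.23; k2 = 0.07;                   % spacing-error and speed-error gains
    tauAct = 0.4;                           % actuator lag time constant [s]
    nd = round(0.2/dt);                     % 0.2 s sensor delay, in samples

    vLead = 25*ones(1,N);                   % lead vehicle: cruise, brake, hold
    vLead(2000:3000) = linspace(25, 10, 1001);
    vLead(3001:end)  = 10;
    xLead = cumsum(vLead)*dt + 50;          % lead position, starting 50 m ahead

    x = 0; v = 25; a = 0;                   % follower state
    gapBuf  = (xLead(1) - x)*ones(1, nd+1); % delayed gap measurements
    relvBuf = zeros(1, nd+1);               % delayed relative-speed measurements
    gapLog  = zeros(1, N);

    for kk = 1:N
        gapBuf  = [gapBuf(2:end),  xLead(kk) - x];         % push newest measurement
        relvBuf = [relvBuf(2:end), vLead(kk) - v];
        aCmd = k1*(gapBuf(1) - d0 - h*v) + k2*relvBuf(1);  % read nd-step-old values
        aCmd = min(max(aCmd, -6), 2);                      % actuation limits [m/s^2]
        a = a + dt*(aCmd - a)/tauAct;                      % first-order actuator lag
        v = max(v + dt*a, 0);                              % no reversing
        x = x + dt*v;
        gapLog(kk) = xLead(kk) - x;
    end
    plot((1:N)*dt, gapLog); xlabel('time [s]'); ylabel('gap [m]');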
This is a sample Matlab script for postprocessing of DHSVM bias- and low-flow-corrected data, using Integrated Scenarios Project CMIP5 climate forcing data to model future projected streamflow in the Skagit River Basin.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
These are the scripts used to import the empirical data and perform the fast Fourier transform (FFT) analyses reported in Trujillo, Stanfield, & Vela (2017), "The effect of electroencephalogram (EEG) reference choice on information-theoretic measures of the complexity and integration of EEG signals," Frontiers in Neuroscience, 11: 425, doi: 10.3389/fnins.2017.00425. Data are in Matlab M-file format. Also included are MS Word files explaining the scripts and the empirical data files, and MS Excel files containing the de-identified demographics of the study subjects and EEG trial information.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Educational resources for a litter collection service-learning project integrated into a MATLAB programming course. Students collect litter data using an app, and then process that data in MATLAB to answer a variety of research questions. The project is best conducted over several weeks and is ideal for an introduction to MATLAB course. Discussions could include MATLAB concepts in addition to plastic pollution and engineering/science for social responsibility.
Raw data, processed spectra and the numerical integration code for the publication "A Magic Angle Spinning Activated 17O DNP Raser", DOI: 10.1021/acs.jpclett.0c03457. NMR data is in the Bruker TopSpin format. Numerical integration was performed in Matlab.
No description is available. Visit https://dataone.org/datasets/e02f63dca86bfb7c29d82a5fab7e064a for complete metadata about this dataset.
A set of MATLAB scripts related to Gauss quadrature and Christoffel function for exponential integral weight functions
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Available data and code for manuscript: "Photonic-electronic integrated circuit-based coherent LiDAR engine"
arXiv version: https://arxiv.org/abs/2306.07990
Execution tested with Matlab 2019b or newer on Windows. Unzip folder to access files.
Refer to ReadME.txt for instructions.
Contact anton.lukashchuk@epfl.ch if problems with the Matlab code arise.
All matlab code remains under copyright by the authors: Anton Lukashchuk, Johann Riemensberger. The code is provided solely to be used to reproduce the figures of the aforementioned paper.
https://edmond.mpg.de/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.17617/3.TWZVMV
Repository for data and Matlab codes related to the study entitled "Disparity sensitivity and binocular integration in mouse visual cortex areas".
https://www.datainsightsmarket.com/privacy-policy
The Numerical Analysis Software market is experiencing robust growth, driven by the increasing demand for advanced computational capabilities across diverse sectors. The market, estimated at $2.5 billion in 2025, is projected to witness a Compound Annual Growth Rate (CAGR) of 8% from 2025 to 2033, reaching an estimated market value of $4.8 billion by 2033. This expansion is fueled by several key factors. The proliferation of big data and the need for efficient data analysis techniques are pushing organizations to adopt sophisticated numerical analysis software solutions. Furthermore, advancements in artificial intelligence (AI), machine learning (ML), and high-performance computing (HPC) are creating new applications and opportunities for numerical analysis software. The rising adoption of cloud-based solutions is also contributing to market growth, offering scalability and cost-effectiveness. However, the market faces certain restraints, including the high cost of advanced software licenses and the need for specialized expertise to effectively utilize these tools. The market is segmented by software type (commercial vs. open-source), application (engineering, finance, scientific research), and deployment mode (on-premise vs. cloud). Key players in the market include established names like MathWorks (MATLAB) and Analytica, alongside open-source options like GNU Octave and Scilab. The competitive landscape is characterized by a mix of large vendors offering comprehensive solutions and smaller players focusing on niche applications.

The continued growth of the Numerical Analysis Software market hinges on several key trends. The increasing integration of numerical analysis techniques within broader data science and analytics workflows is a prominent factor. This is leading to the development of more user-friendly interfaces and integrated platforms. Furthermore, the growing emphasis on data security and privacy regulations is influencing the development of secure and compliant software solutions. The market also witnesses ongoing innovation in algorithms and computational techniques, driving improvements in accuracy, speed, and efficiency. The rise of specialized applications within specific industries, such as financial modeling, weather forecasting, and drug discovery, also fuels further market growth. The adoption of advanced hardware, such as GPUs and specialized processors, is enhancing the performance and capabilities of numerical analysis software, fostering further market expansion.