This resource contains the experimental data that was previously distributed in Tecplot input files, here provided as MATLAB files. dba1_cp holds all the results and is dimensioned (7,2): the first dimension is 1-7 for each span station; the second dimension is 1 for the upper surface, 2 for the lower surface. dba1_cp(ispan,isurf).x are the x/c locations at span station ispan on the upper (isurf=1) or lower (isurf=2) surface; dba1_cp(ispan,isurf).y are the eta locations; dba1_cp(ispan,isurf).cp are the pressures. Unsteady CP is dimensioned with 4 columns: 1st column, real; 2nd column, imaginary; 3rd column, magnitude; 4th column, phase in degrees. M, Re and other pertinent variables are included as variables and also in casedata.M, etc.
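A minimal sketch of pulling one station's data out of these structures (the MAT-file name is hypothetical; field names are as described above):

    load('dba1_cp.mat');                % assumed to contain dba1_cp and casedata
    ispan = 3;  isurf = 1;              % span station 3, upper surface
    xc  = dba1_cp(ispan, isurf).x;      % x/c locations
    eta = dba1_cp(ispan, isurf).y;      % eta locations
    cp  = dba1_cp(ispan, isurf).cp;     % pressures; for unsteady CP the columns
                                        % are real, imaginary, magnitude, phase (deg)
    M = casedata.M;                     % Mach number; other case variables similar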
MATLAB is the most powerful software for scientific research, especially for scientific data analysis. It is assumed that trainees have no prior programming expertise or understanding of MATLAB. The following lectures on MATLAB are available on YouTube for international learners. https://youtube.com/playlist?list=PL4T8G4Q9_JQ8jULIl_gFOzOqlAALmaV5Q My profile: https://researchsociety20.org/founder-and-director/
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Compositional data, which is data consisting of fractions or probabilities, is common in many fields including ecology, economics, physical science and political science. If these data would otherwise be normally distributed, their spread can be conveniently represented by a multivariate normal distribution truncated to the non-negative space under a unit simplex. Here this distribution is called the simplex-truncated multivariate normal distribution. For calculations on truncated distributions, it is often useful to obtain rapid estimates of their integral, mean and covariance; these quantities characterising the truncated distribution will generally differ in value from those of the corresponding non-truncated distribution.
In the paper Adams, Matthew (2022) Integral, mean and covariance of the simplex-truncated multivariate normal distribution. PLoS One, 17(7), Article number: e0272014. https://eprints.qut.edu.au/233964/, three different approaches that can estimate the integral, mean and covariance of any simplex-truncated multivariate normal distribution are described and compared. These three approaches are (1) naive rejection sampling, (2) a method described by Gessner et al. that unifies subset simulation and the Holmes-Diaconis-Ross algorithm with an analytical version of elliptical slice sampling, and (3) a semi-analytical method that expresses the integral, mean and covariance in terms of integrals of hyperrectangularly-truncated multivariate normal distributions, the latter of which are readily computed in modern mathematical and statistical packages. Strong agreement is demonstrated between all three approaches, but the most computationally efficient approach depends strongly on both implementation details and the dimension of the simplex-truncated multivariate normal distribution.
This dataset consists of all code and results for the associated article.
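As an illustration of approach (1), here is a minimal sketch of naive rejection sampling for a simplex-truncated multivariate normal in MATLAB (example parameters and names are hypothetical; mvnrnd is from the Statistics and Machine Learning Toolbox; this is not the article's code):

    % Estimate the integral, mean and covariance of a multivariate normal
    % truncated to the unit simplex {x : x >= 0, sum(x) <= 1} by rejection.
    mu = [0.3 0.2];  Sigma = 0.05*eye(2);     % example (hypothetical) parameters
    X = mvnrnd(mu, Sigma, 1e6);               % draw from the untruncated normal
    keep = all(X >= 0, 2) & sum(X, 2) <= 1;   % accept draws inside the simplex
    p = mean(keep);                           % estimate of the integral
    muTrunc  = mean(X(keep,:));               % estimate of the truncated mean
    SigTrunc = cov(X(keep,:));                % estimate of the truncated covariance

The acceptance fraction directly estimates the integral; when it is small, very many draws are needed for accurate estimates, which is what motivates the two more sophisticated approaches.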
Scripts and data acquired at the Mirror Lake Research Site, cited by the article submitted to Water Resources Research: "Distributed Acoustic Sensing (DAS) as a Distributed Hydraulic Sensor in Fractured Bedrock", M. W. Becker(1), T. I. Coleman(2), and C. C. Ciervo(1). (1) California State University, Long Beach, Geology Department, 1250 Bellflower Boulevard, Long Beach, California, 90840, USA. (2) Silixa LLC, 3102 W Broadway St, Suite A, Missoula, MT 59808, USA. Corresponding author: Matthew W. Becker (matt.becker@csulb.edu).
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Matlab code and raw data
This data set consists of Conductivity, Temperature, Depth (CTD) data in MATLAB format from the 2002 Polar Star Mooring Cruise (AWS-02-I). These data are provided in a single MAT-file for the entire cruise.
Matlab has a reputation for running slowly. Here are some pointers on how to speed computations, to an often unexpected degree. Subjects currently covered:

- Matrix Coding
- Implicit Multithreading on a Multicore Machine
- Sparse Matrices
- Sub-Block Computation to Avoid Memory Overflow

Matrix Coding - 1

Matlab documentation notes that efficient computation depends on using the matrix facilities, and that mathematically identical algorithms can have very different runtimes, but it is a bit coy about just what these differences are. A simple but telling example: the following is the core of the GD-CLS algorithm of Berry et al., copied from Fig. 1 of Shahnaz et al., 2006, "Document clustering using nonnegative matrix factorization":

    for jj = 1:maxiter
        A = W'*W + lambda*eye(k);
        for ii = 1:n
            b = W'*V(:,ii);
            H(:,ii) = A \ b;
        end
        H = H .* (H>0);
        W = W .* (V*H') ./ (W*(H*H') + 1e-9);
    end

Replacing the column-wise update of H with a matrix update gives:

    for jj = 1:maxiter
        A = W'*W + lambda*eye(k);
        B = W'*V;
        H = A \ B;
        H = H .* (H>0);
        W = W .* (V*H') ./ (W*(H*H') + 1e-9);
    end

These were tested on an 8049 x 8660 sparse bag-of-words matrix V (fraction of non-zeros .0083), with W of size 8049 x 50, H of size 50 x 8660, maxiter = 50, lambda = 0.1, and identical initial W. They were run consecutively, multithreaded on an 8-processor Sun server, starting at ~7:30 PM, and tic-toc timing was recorded. Runtimes were respectively 6586.2 and 70.5 seconds, a 93:1 difference. The maximum absolute pairwise difference between W matrix values was 6.6e-14. Similar speedups have been consistently observed in other cases. In one algorithm, combining matrix operations with efficient use of the sparse matrix facilities gave a 3600:1 speedup. For speed alone, C-style iterative programming should be avoided wherever possible. In addition, when a couple of lines of matrix code can substitute for an entire C-style function, program clarity is much improved.

Matrix Coding - 2

Applied to integration, the speed gains are not so great, largely due to the time taken to set up and deal with the boundaries. The anonymous function setup time is negligible. I demonstrate on a simple uniform-step, linearly interpolated 1-D integration of cos() from 0 to pi, which should yield zero:

    tic;
    step = .00001;
    fun = @cos;
    start = 0;
    endit = pi;
    enda = floor((endit - start)/step)*step + start;
    delta = (endit - enda)/step;
    intF = fun(start)/2;
    intF = intF + fun(endit)*delta/2;
    intF = intF + fun(enda)*(delta+1)/2;
    for ii = start+step:step:enda-step
        intF = intF + fun(ii);
    end
    intF = intF*step
    toc;

    intF = -2.910164109692914e-14
    Elapsed time is 4.091038 seconds.

Replacing the inner summation loop with the matrix equivalent speeds things up a bit:

    tic;
    step = .00001;
    fun = @cos;
    start = 0;
    endit = pi;
    enda = floor((endit - start)/step)*step + start;
    delta = (endit - enda)/step;
    intF = fun(start)/2;
    intF = intF + fun(endit)*delta/2;
    intF = intF + fun(enda)*(delta+1)/2;
    intF = intF + sum(fun(start+step:step:enda-step));
    intF = intF*step
    toc;

    intF = -2.868419946011613e-14
    Elapsed time is 0.141564 seconds.

The core computation take
Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0) https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
EEG Data and MATLAB Scripts for Data Preprocessing, Frequency Domain Analysis, Functional Connectivity Analysis, and Plotting
Attribution-NonCommercial 4.0 (CC BY-NC 4.0) https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
TOOTSEA (Toolbox for Time Series Exploration and Analysis) is a MATLAB software package developed at LOPS (Laboratoire d'Océanographie Physique et Spatiale), Ifremer. This tool is dedicated to analysing datasets from moored oceanographic instruments (current meters, CTDs, thermistors, ...). TOOTSEA allows the user to explore the data and metadata from various instrument files, to analyse them with the multiple plots and statistics available, to apply processing/corrections and qualify the data (automatically and manually), and finally to export the work in a NetCDF file.
MATLAB led the global advanced analytics and data science software industry in 2025 with a market share of ***** percent. First launched in 1984, MATLAB is developed by the U.S. firm MathWorks.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Matlab functions for maximum likelihood estimation of a variety of probabilistic discounting models from choice experiments. Data should take the form of binary choices between immediate and delayed rewards. The available discount functions are:
1) exponential
2) hyperbolic (Mazur's one-parameter hyperbolic)
3) generalized hyperbolic
4) Laibson's beta-delta
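As an illustration of the general approach (a sketch only; the variable names and the logistic choice rule are assumptions, not the toolbox's actual interface), fitting Mazur's one-parameter hyperbolic model by maximum likelihood might look like:

    % Binary choices between an immediate reward Ai (delay 0) and a delayed
    % reward Ad at delay D; c = 1 if the delayed option was chosen.
    Ai = [10; 10; 10; 10];  Ad = [20; 20; 20; 20];   % toy data (hypothetical)
    D  = [7; 30; 90; 180];  c  = [1; 1; 0; 0];

    hyp = @(k, D) 1 ./ (1 + k.*D);                   % Mazur's hyperbolic discount
    % Logistic choice rule with inverse temperature th(2) (an assumption):
    pDelayed = @(th) 1 ./ (1 + exp(-th(2).*(Ad.*hyp(th(1), D) - Ai)));
    negLL = @(th) -sum(c.*log(pDelayed(th)) + (1-c).*log(1 - pDelayed(th)));
    thetaHat = fminsearch(negLL, [0.01; 1]);          % starting values [k; beta]
    % In practice one would constrain k > 0, e.g. by optimizing log(k).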
This is a sample MATLAB script for postprocessing of DHSVM bias- and low-flow-corrected data, using Integrated Scenarios Project CMIP5 climate forcing data to model future projected streamflow in the Skagit River Basin.
https://darus.uni-stuttgart.de/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.18419/DARUS-4812
This repository contains the MATLAB code and generated data for the manuscript "Data-driven geometric parameter optimization for PD-GMRES", which uses a quadtree approach to optimize parameters for the iterative solver PD-GMRES. It includes hardware-specific data to allow for reproducibility of our results. Our calculations were performed using MATLAB R2019a and should be reproducible up to and including version R2022a. A change in version R2022b leads to different numerical behavior; however, the code does run on newer MATLAB versions. Further information is contained in the README.
Replication files for "Job-to-Job Mobility and Inflation"
Authors: Renato Faccini and Leonardo Melosi
Review of Economics and Statistics
Date: February 2, 2023
--------------------------------------------------------------------------------------------
ORDER OF TOPICS
.Section 1. We explain the code to replicate all the figures in the paper (except Figure 6)
.Section 2. We explain how Figure 6 is constructed
.Section 3. We explain how the data are constructed

SECTION 1

Replication_Main.m is used to reproduce all the figures of the paper except Figure 6. All the primitive variables are defined in the code, and all the steps are commented in the code to facilitate the replication of our results. Replication_Main.m should be run in Matlab. The authors tested it on a DELL XPS 15 7590 laptop with the following characteristics:
--------------------------------------------------------------------------------------------
Processor: Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz
Installed RAM: 64.0 GB
System type: 64-bit operating system, x64-based processor
--------------------------------------------------------------------------------------------
It took 2 minutes and 57 seconds for this machine to construct Figures 1, 2, 3, 4a, 4b, 5, 7a, and 7b. The following versions of Matlab and Matlab toolboxes were used for the test:
--------------------------------------------------------------------------------------------
MATLAB Version: 9.7.0.1190202 (R2019b)
MATLAB License Number: 363305
Operating System: Microsoft Windows 10 Enterprise Version 10.0 (Build 19045)
Java Version: Java 1.8.0_202-b08 with Oracle Corporation Java HotSpot(TM) 64-Bit Server VM mixed mode
--------------------------------------------------------------------------------------------
MATLAB Version 9.7 (R2019b)
Financial Toolbox Version 5.14 (R2019b)
Optimization Toolbox Version 8.4 (R2019b)
Statistics and Machine Learning Toolbox Version 11.6 (R2019b)
Symbolic Math Toolbox Version 8.4 (R2019b)
--------------------------------------------------------------------------------------------
The replication code uses auxiliary files and saves the figures in various subfolders:
\JL_models: Contains the equations describing the model, including the observation equations, and the routine used to solve the model. To do so, the routine in this folder calls other routines located in some of the subfolders below.
\gensystoama: Contains a set of codes that allow us to solve linear rational expectations models. We use the AMA solver. More information is provided in the file AMASOLVE.m. The codes in this subfolder have been developed by Alejandro Justiniano.
\filters: Contains the Kalman filter, augmented with a routine to make sure that the zero lower bound constraint for the nominal interest rate is satisfied in every period in our sample.
\SteadyStateSolver: Contains a set of routines that are used to solve the steady state of the model numerically.
\NLEquations: Contains some of the equations of the model that are log-linearized using the symbolic toolbox of Matlab.
\NberDates: Contains a set of routines that allow shaded areas to be added to graphs to denote NBER recessions.
\Graphics: Contains useful codes enabling features to construct some of the graphs in the paper.
\Data: Contains the data set used in the paper.
\Params: Contains a spreadsheet with the values attributed to the model parameters.
\VAR_Estimation: Contains the forecasts implied by the Bayesian VAR model of Section 2.
The outputs of Replication_Main.m are the figures of the paper, which are stored in the subfolder \Figures.

SECTION 2

The Excel file "Figure-6.xlsx" is used to create the charts in Figure 6. All three panels of the charts (A, B, and C) plot a measure of unexpected wage inflation against the unemployment rate, then fit separate linear regressions for the periods 1960-1985, 1986-2007, and 2008-2009. Unexpected wage inflation is given by the difference between wage growth and a measure of expected wage growth. In all three panels, the unemployment rate used is the civilian unemployment rate (UNRATE), seasonally adjusted, from the BLS.

The sheet "Panel A" uses quarterly manufacturing-sector average hourly earnings growth data, seasonally adjusted (CES3000000008), from the Bureau of Labor Statistics (BLS) Employment Situation report as the measure of wage inflation. Unexpected wage inflation is given by the difference between earnings growth at time t and the average of earnings growth across the previous four months. Growth rates are annualized quarterly values.

The sheet "Panel B" uses quarterly Nonfarm Business Sector Compensation Per Hour, seasonally adjusted (COMPNFB), from the BLS Productivity and Costs report as its measure of wage inflation. As in Panel A, expected wage inflation is given by the...

Visit https://dataone.org/datasets/sha256%3A44c88fe82380bfff217866cac93f85483766eb9364f66cfa03f1ebdaa0408335 for complete metadata about this dataset.
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0) https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
This data set is uploaded as supporting information for the publication entitled "A Comprehensive Tutorial on the SOM-RPM Toolbox for MATLAB". The attached file 'case_study' includes the following:
X: data from a ToF-SIMS hyperspectral image; a stage raster containing 960 x 800 pixels with 963 associated m/z peaks.
pk_lbls: the m/z label for each of the 963 m/z peaks.
mdl and mdl_masked: SOM-RPM models created using the SOM-RPM tutorial provided within the cited article.
Additional details about the datasets can be found in the published article. V2 contains modified peak lists that show intensity-weighted m/z rather than the peak midpoint. If you use this data set in your work, please cite our work as follows: [LINK TO BE ADDED TO PAPER ONCE DOI RECEIVED]
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A set of Matlab functions to calculate simple Bayes Factors. Based on the work of Jeff Rouder and EJ Wagenmakers.
The MATLAB scripts compute parametric maps from Bruker MR images, as described in the JoVE paper published in 2017.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Supplementary material for the manuscript "A Test to Compare Interval Time Series". This includes figures and tables referred to in the manuscript, as well as details of scripts and data files used for the simulation studies and the application. All scripts are in MATLAB (.m) format, and data files are in MATLAB (.mat) and Excel (.xlsx) formats.
This submission contains the data (in .csv and .mat format) and source code (.m file compatible with MATLAB) to reproduce the analysis of the bone marrow data in Section 4. It also contains a README file explaining the variable names. The source code replaces missing entries by combinations of high and low values according to a fractional factorial structure for the estimation of the main effects of missingness and fits a logistic regression model using each of the resulting sets of factorially-completed covariate data. Estimated regression coefficients and their standard errors are stored, and the effects of missingness presented as in Tables 2 and 3 of the paper.
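A minimal sketch of the factorial-completion idea described above (file and variable names are hypothetical, the Statistics and Machine Learning Toolbox is assumed, and the actual source code in this submission may differ):

    % X: n-by-p covariates with NaN marking missing entries; y: binary outcome.
    X = readmatrix('covariates.csv');  y = readmatrix('outcome.csv');
    idx = find(isnan(X));                          % the m missing entries
    [~, colIdx] = ind2sub(size(X), idx);
    lo = prctile(X, 10)';  hi = prctile(X, 90)';   % per-column "low"/"high" values
    D = ff2n(numel(idx));                          % full two-level factorial (0/1);
                                                   % a fractional design (fracfact)
                                                   % keeps the run count manageable
    beta = zeros(size(D,1), size(X,2) + 1);  se = beta;
    for r = 1:size(D,1)
        Xr = X;
        Xr(idx) = lo(colIdx).*(1 - D(r,:)') + hi(colIdx).*D(r,:)';  % complete data
        mdl = fitglm(Xr, y, 'Distribution', 'binomial');            % logistic fit
        beta(r,:) = mdl.Coefficients.Estimate';    % store coefficients and
        se(r,:)   = mdl.Coefficients.SE';          % standard errors per run
    end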
MIT License https://opensource.org/licenses/MIT
License information was derived automatically
This dataset contains MATLAB scripts created during the work on "Design of experiments: a statistical tool for PIV uncertainty quantification". The proposed UQ approach is applied to estimate the uncertainties in time-averaged velocity and Reynolds normal stresses in planar PIV measurements of the flow over a NACA0012 airfoil. The approach is also applied to the investigation, by stereoscopic PIV, of the flow at the outlet of a ducted Boundary Layer Ingesting (BLI) propulsor. The codes in this dataset are used for these two experimental cases.