Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This paper analyzes current practices in psychology in the use of research methods and data analysis procedures (DAP) and aims to determine whether researchers are now using more sophisticated and advanced DAP than were employed previously. We reviewed empirical research published recently in prominent journals from the USA and Europe corresponding to the main psychological categories of Journal Citation Reports and examined research methods, number of studies, number and type of DAP, and statistical package. The 288 papers reviewed used 663 different DAP. Experimental and correlational studies were the most prevalent, depending on the specific field of psychology. Two-thirds of the papers reported a single study, although those in journals with an experimental focus typically described more. The papers mainly used parametric tests for comparison and statistical techniques for analyzing relationships among variables. Regarding the former, the most frequently used procedure was ANOVA, with mixed factorial ANOVA being the most prevalent. A decline in the use of non-parametric analysis was observed in relation to previous research. Relationships among variables were most commonly examined using regression models, with hierarchical regression and mediation analysis being the most prevalent procedures. There was also a decline in the use of stepwise regression and an increase in the use of structural equation modeling, confirmatory factor analysis, and hierarchical linear modeling. Overall, the results show that recent empirical studies published in journals belonging to the main areas of psychology are employing more varied and advanced statistical techniques of greater computational complexity.
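The techniques this review names most often, such as hierarchical regression, can be made concrete with a small example. Below is a hypothetical Python sketch using statsmodels: predictors are entered in blocks and the change in R-squared between steps is tested with a nested-model F-test. The variable names and data are invented for illustration and are not drawn from the reviewed papers.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Invented data: does a second predictor block add explained variance?
rng = np.random.default_rng(1)
df = pd.DataFrame({"age": rng.normal(40, 10, 150),
                   "stress": rng.normal(0, 1, 150)})
df["wellbeing"] = 50 - 2 * df["stress"] + rng.normal(0, 5, 150)

step1 = smf.ols("wellbeing ~ age", data=df).fit()           # block 1: covariate
step2 = smf.ols("wellbeing ~ age + stress", data=df).fit()  # block 2: add predictor
print(anova_lm(step1, step2))                    # F-test comparing the nested models
print(step2.rsquared - step1.rsquared)           # R-squared change between steps
```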
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Code for analysis of missing data
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Exploratory data analysis.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
ABSTRACT: The mode of production of scientific knowledge has become complex, leading to the use of methodological elements that can also investigate subjective issues. This study analyzes characteristics of PhD theses adopting a qualitative approach, defended at a Postgraduate Program in Education (PPGE) of a university in the Northeast Region of Brazil during the 2013-2016 quadrennium. The theoretical basis of the work draws on contributions from Evandro Ghedin, Marcos Zanette, Marli André and Maria Amélia Franco. To achieve the proposed objective, quali-quantitative documentary research was developed, based on the identification and analysis of the categories theme, method, data collection procedure and data analysis technique, synthesized by grouping data extracted from the theses' abstracts. Of the 57 theses defended in the period considered, 87.7% (n=50) used a qualitative approach, although only 32.0% (n=16) of these make the approach explicit in their abstracts. Public policy and teacher education are the most common themes. Only 42.0% (n=21) of the theses clearly indicate the research method, with documentary research predominating. The theses employ multiple data collection procedures, especially interviews and document collection. In 46.0% (n=23) of the theses, the data analysis technique is specified, mainly content analysis. These findings suggest that researchers in the field of Education should clearly report all the methodological elements of their theses in their abstracts.
Data Science Platform Market Size 2025-2029
The data science platform market size is forecast to increase by USD 763.9 million at a CAGR of 40.2% between 2024 and 2029.
The market is experiencing significant growth, driven by the integration of artificial intelligence (AI) and machine learning (ML). This enhancement enables more advanced data analysis and prediction capabilities, making data science platforms an essential tool for businesses seeking to gain insights from their data. Another trend shaping the market is the emergence of containerization and microservices in platforms. This development offers increased flexibility and scalability, allowing organizations to efficiently manage their projects.
However, the use of platforms also presents challenges, particularly in the areas of data privacy and security. Ensuring the protection of sensitive data is crucial for businesses, and platforms must provide strong security measures to mitigate risks. In summary, the market is witnessing substantial growth due to the integration of AI and ML technologies, containerization, and microservices, while data privacy and security remain key challenges.
What will be the Size of the Data Science Platform Market During the Forecast Period?
The market is experiencing significant growth due to the increasing demand for advanced data analysis capabilities in various industries. Cloud-based solutions are gaining popularity as they offer scalability, flexibility, and cost savings. The market encompasses the entire project life cycle, from data acquisition and preparation to model development, training, and distribution. Big data, IoT, multimedia, machine data, consumer data, and business data are prime sources fueling this market's expansion. Unstructured data, previously challenging to process, is now being effectively managed through tools and software. Relational databases and machine learning models are integral components of platforms, enabling data exploration, preprocessing, and visualization.
Moreover, artificial intelligence (AI) and machine learning (ML) technologies are essential for handling complex workflows, including data cleaning, model development, and model distribution. Data scientists benefit from these platforms by streamlining their tasks, improving productivity, and ensuring accurate and efficient model training. The market is expected to continue its growth trajectory as businesses increasingly recognize the value of data-driven insights.
How is this Data Science Platform Industry segmented and which is the largest segment?
The industry research report provides comprehensive data (region-wise segment analysis), with forecasts and estimates in 'USD million' for the period 2025-2029, as well as historical data from 2019-2023 for the following segments.
Deployment
  On-premises
  Cloud
Component
  Platform
  Services
End-user
  BFSI
  Retail and e-commerce
  Manufacturing
  Media and entertainment
  Others
Sector
  Large enterprises
  SMEs
Geography
  North America (Canada, US)
  Europe (Germany, UK, France)
  APAC (China, India, Japan)
  South America (Brazil)
  Middle East and Africa
By Deployment Insights
The on-premises segment is estimated to witness significant growth during the forecast period.
On-premises deployment is a traditional method for implementing technology solutions within an organization. This approach involves purchasing software with a one-time license fee and a service contract. On-premises solutions offer enhanced security, as they keep user credentials and data within the company's premises. They can be customized to meet specific business requirements, allowing for quick adaptation. On-premises deployment eliminates the need for third-party providers to manage and secure data, ensuring data privacy and confidentiality. Additionally, it enables rapid and easy data access, and keeps IP addresses and data confidential. This deployment model is particularly beneficial for businesses dealing with sensitive data, such as those in manufacturing and large enterprises. While cloud-based solutions offer flexibility and cost savings, on-premises deployment remains a popular choice for organizations prioritizing data security and control.
The on-premises segment was valued at USD 38.70 million in 2019 and is expected to show a gradual increase during the forecast period.
Regional Analysis
North America is estimated to contribute 48% to the growth of the global market during the forecast period.
Technavio's analysts have explained in detail the regional trends and drivers that shape the market during the forecast period.
In a 2024 survey, when asked which methods are most leveraged in cyber threat intelligence (CTI) analysis, over 67 percent of respondents indicated frequently using knowledge bases such as MITRE ATT&CK, and around 28 percent stated using this method occasionally. By contrast, structured analytic techniques, such as key assumptions checks, clustering, or Analysis of Competing Hypotheses (ACH), were the least used methods for analysis.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
A stock trading dataset
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Raw data for the data analysis, provided in a PDF file
The share of organizations using big data analytics in market research worldwide steadily increased from 2014 to 2021, despite a slight drop in 2019. During the 2021 survey, 46 percent of respondents mentioned they used big data analytics as a research method.
Spatially Adaptive Semi-Supervised Learning with Gaussian Processes for Hyperspectral Data Analysis. Goo Jun and Joydeep Ghosh. Abstract: A semi-supervised learning algorithm for the classification of hyperspectral data, Gaussian process expectation maximization (GP-EM), is proposed. Model parameters for each land cover class are first estimated by a supervised algorithm that uses Gaussian process regressions to find spatially adaptive parameters; the estimated parameters are then used to initialize a spatially adaptive mixture-of-Gaussians model. The mixture model is updated by expectation-maximization iterations using the unlabeled data, and the spatially adaptive parameters for unlabeled instances are obtained by Gaussian process regressions with soft assignments. Two sets of hyperspectral data taken from the Botswana area by the NASA EO-1 satellite are used for experiments. Empirical evaluations show that the proposed framework performs significantly better than baseline algorithms that do not use spatial information, and the results also improve on previously reported results by other algorithms on the same data.
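The GP-EM framework can be sketched with off-the-shelf components. The following hypothetical Python illustration is not the authors' implementation: per-class Gaussian process regressions map spatial coordinates to class-mean spectra (the spatially adaptive parameters), those means initialize a mixture of Gaussians, and the mixture is refined with EM-style iterations over unlabeled pixels. Since scikit-learn's GaussianProcessRegressor accepts no sample weights, the soft assignments are approximated here by keeping only high-confidence unlabeled pixels.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.mixture import GaussianMixture

def gp_em_sketch(coords_l, X_l, y_l, coords_u, X_u, n_iter=3):
    """Toy GP-EM loop: coords_* are (n, 2) pixel positions, X_* are (n, d)
    spectra, y_l holds labels for the labeled subset."""
    classes = np.unique(y_l)
    # Supervised stage: one GP per class maps position -> spectrum, letting
    # the class mean drift smoothly across the scene.
    gps = {c: GaussianProcessRegressor().fit(coords_l[y_l == c], X_l[y_l == c])
           for c in classes}
    for _ in range(n_iter):
        # Mixture means come from GP predictions at the unlabeled locations.
        means = np.vstack([gps[c].predict(coords_u).mean(axis=0) for c in classes])
        gmm = GaussianMixture(n_components=len(classes), means_init=means).fit(X_u)
        resp = gmm.predict_proba(X_u)          # E-step: soft assignments
        for k, c in enumerate(classes):
            sure = resp[:, k] > 0.8            # crude stand-in for soft weights
            if sure.sum() >= 5:                # refit the class GP with them
                gps[c].fit(np.vstack([coords_l[y_l == c], coords_u[sure]]),
                           np.vstack([X_l[y_l == c], X_u[sure]]))
    return gmm
```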
This statistic presents the leading methods of data analytics application in the mergers and acquisitions sector in the United States in 2018. At that time, 64 percent of executives surveyed were using data analytics on customers and markets.
https://www.verifiedmarketresearch.com/privacy-policy/
Data Analysis Software Market size was valued at USD 79.15 Billion in 2024 and is projected to reach USD 176.57 Billion by 2031, growing at a CAGR of 10.55% during the forecast period 2024-2031.
Global Data Analysis Software Market Drivers
Growth in the Data Analysis Software Market is influenced by various factors. These include:
Technological Developments: The rapid development of data analytics technologies, such as machine learning, artificial intelligence, and big data analytics, is driving the need for more advanced data analysis software.
Growing Data Volume: The exponential expansion of data generated from multiple sources, including social media, IoT devices, and sensors, requires powerful data analysis software to extract useful insights from massive datasets.
Business Intelligence Requirements: To gain a competitive edge, organisations in all sectors increasingly depend on data-driven decision-making. This encourages the use of data analysis software to find strategic insights by analysing and visualising large, complex datasets.
Regulatory Compliance: Rules and compliance requirements such as the GDPR and CCPA oblige firms to safeguard sensitive data, prompting investment in data analysis software with strong security capabilities.
Growing Need for Real-time Analytics: Companies are under increasing pressure to make decisions quickly, which has led to a growing need for the real-time analytics capabilities provided by sophisticated data analysis tools. These capabilities allow organisations to react quickly to market changes and gain timely insights.
Rise of Predictive Analytics: The need for data analysis tools with sophisticated predictive modelling and forecasting capabilities is driving the adoption of predictive analytics to forecast future trends, customer behaviour, and market dynamics.
Sector-specific Solutions: Businesses looking for specialised analytics to handle industry-specific opportunities and challenges are increasingly adopting vertical-specific data analysis software designed to match the particular needs of sectors like healthcare, finance, retail, and manufacturing.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
List of statistical analysis procedures in metabox.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Electronic health records (EHRs) have been widely adopted in recent years, but often include a high proportion of missing data, which can create difficulties in implementing machine learning and other tools of personalized medicine. Completed datasets are preferred for a number of analysis methods, and successful imputation of missing EHR data can improve interpretation and increase our power to predict health outcomes. However, the most popular imputation methods generally require scripting skills and are implemented using various packages and syntax, so a full suite of methods is out of reach to all except experienced data scientists. Moreover, imputation is often treated as a separate exercise from exploratory data analysis, when it should be considered part of the data exploration process. We have created a new Python-based graphical tool, ImputEHR, that allows implementation of a range of simple and sophisticated (e.g., gradient-boosted tree-based and neural network) data imputation approaches. In addition to imputation, the tool enables data exploration for informed decision-making, as well as implementing machine learning prediction tools for response data selected by the user. Although the approach works for any missing data problem, the tool is primarily motivated by problems encountered for EHR and other biomedical data. We illustrate the tool using multiple real datasets, providing performance measures of imputation and downstream predictive analysis.
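ImputEHR itself is a graphical tool, but the style of imputation it wraps can be sketched in a few lines. The snippet below is a generic Python illustration, not ImputEHR code: scikit-learn's IterativeImputer is paired with a gradient-boosted tree regressor, one of the model families the abstract mentions; the synthetic matrix and 20% missingness rate are invented.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import HistGradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X[rng.random(X.shape) < 0.2] = np.nan   # invented: 20% missing, like sparse EHR columns

imputer = IterativeImputer(
    estimator=HistGradientBoostingRegressor(),  # gradient-boosted trees per column
    max_iter=10,
    random_state=0,
)
X_complete = imputer.fit_transform(X)           # completed matrix for downstream models
```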
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This report summarises the data editing, analysis and interpretation protocols for State of NZ Garden Birds 2018 | Te Āhua o ngā Manu o te Kāri i Aotearoa, which are as follows: (1) editing the raw bird count data ready for analysis; (2) calculating changes in bird counts over the last 10-year and 5-year periods for a subset of widespread garden birds at national, regional and local scales; and (3) using a standardised set of criteria to help the user interpret the results and readily identify changes of potential concern or interest.
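As a toy illustration of the 10-year and 5-year change calculations listed above (the published protocol involves a fuller statistical treatment), a pandas sketch with invented counts might look like this:

```python
import pandas as pd

# Invented yearly mean counts for one widespread garden bird species.
counts = pd.DataFrame({
    "year":  [2008, 2009, 2013, 2016, 2017, 2018],
    "count": [11.0, 10.5, 9.0, 8.0, 7.5, 8.0],
})

def percent_change(df, span_years, end_year=2018):
    """Percent change between the first and last count in the window."""
    window = df[df["year"].between(end_year - span_years, end_year)]
    first = window.loc[window["year"].idxmin(), "count"]
    last = window.loc[window["year"].idxmax(), "count"]
    return 100.0 * (last - first) / first

for span in (10, 5):
    print(f"{span}-year change: {percent_change(counts, span):+.1f}%")
```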
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Analytical Method Development is a crucial process in the field of scientific research and quality control. It involves creating and optimizing techniques to accurately and precisely analyze substances, compounds, or materials of interest. The primary goal is to establish reliable methods that can identify, quantify, and characterize various components within a sample. During the method development phase, scientists carefully choose suitable instruments, such as chromatographs, spectrometers, or titrators, and develop specific procedures to achieve the desired results. The process often requires iterative experimentation and data analysis to fine-tune the parameters and ensure robustness and reproducibility. Accurate analytical methods are essential in various industries, including pharmaceuticals, environmental monitoring, food safety, and more. They play a vital role in ensuring product quality, safety, and compliance with regulatory standards. In summary, analytical method development is an indispensable aspect of scientific investigations, enabling researchers to derive meaningful data and make informed decisions based on the analysis of complex samples.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset and Octave/MATLAB codes/scripts for data analysis.
Background: Methods for p-value correction are criticized for either increasing Type II error or improperly reducing Type I error. This problem is worse when dealing with thousands or even hundreds of paired comparisons between waves or images performed point-to-point. This text considers patterns in probability vectors resulting from multiple point-to-point comparisons between two event-related potential (ERP) waves (mass univariate analysis) to correct p-values, where clusters of significant p-values may indicate true H0 rejection.
New method: We used ERP data from normal subjects and from subjects with attention deficit hyperactivity disorder (ADHD) under a cued forced two-choice test to study attention. The decimal logarithm of the p-vector (p') was convolved with a Gaussian window whose length was set as the shortest lag above which autocorrelation of each ERP wave may be assumed to have vanished. To verify the reliability of the present correction method, we ran Monte Carlo (MC) simulations to (1) evaluate confidence intervals of rejected and non-rejected areas of our data, (2) evaluate differences between corrected and uncorrected p-vectors or simulated ones in terms of the distribution of significant p-values, and (3) empirically verify the rate of Type I error (comparing 10,000 pairs of mixed samples with control and ADHD subjects).
Results: The present method reduced the range of p'-values that did not show covariance with neighbors (Type I and also Type II errors). The differences between the simulated or raw p-vector and the corrected p-vectors were, respectively, minimal and maximal when the window length was set by the autocorrelation in the p-vector convolution.
Comparison with existing methods: Our method was less conservative, while FDR methods rejected essentially all significant p-values for the Pz and O2 channels. The MC simulations, the gold-standard method for error correction, presented 2.78±4.83% difference (across all 20 channels) from the corrected p-vector, while the difference between the raw and corrected p-vectors was 5.96±5.00% (p = 0.0003).
Conclusion: As a cluster-based correction, the present new method seems biologically and statistically suitable for correcting p-values in mass univariate analysis of ERP waves, as it adopts adaptive parameters to set the correction.
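A minimal Python transcription of the correction described above might read as follows (the published scripts are in Octave/MATLAB). The autocorrelation threshold and the Gaussian window's standard deviation are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.signal.windows import gaussian

def first_vanishing_lag(wave, thresh=0.05):
    """Shortest lag above which the wave's autocorrelation may be assumed
    to have vanished (|acf| stays below thresh; thresh is an assumption)."""
    x = wave - wave.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf /= acf[0]
    below = np.nonzero(np.abs(acf) < thresh)[0]
    return int(below[0]) if below.size else len(x)

def correct_pvector(p, erp_wave):
    """Convolve log10(p) with a Gaussian window sized by the ERP wave's
    autocorrelation; isolated small p-values are damped while clusters
    of significant p-values survive."""
    m = max(first_vanishing_lag(erp_wave), 3)
    win = gaussian(m, std=m / 6.0)   # std choice is an assumption
    win /= win.sum()
    return 10.0 ** np.convolve(np.log10(p), win, mode="same")
```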
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This report summarises the protocols for producing the State of NZ Garden Birds 2017 | Te Āhua o ngā Manu o te Kāri i Aotearoa, which are as follows: (1) Securing the legacy of the resources required and generated when preparing and publicising the State of NZ Garden Birds 2017; (2) Editing the raw bird count data ready for analysis; (3) Calculating changes in bird counts for a subset of widespread garden birds at national, regional and local scales; (4) Using a standardised set of criteria to help the user interpret the results and readily identify changes of potential concern or interest; (5) Preparing eye-catching graphics for a non-specialist audience and publicising them via multiple channels (media outlets, Facebook, Twitter, email); and (6) Inviting feedback on the resources from NZ Garden Bird Survey 2018 participants via an online questionnaire. Citation: MacLeod CJ, Howard S, Green P, Gormley AM, Brandt AJ, Scott K, Spurr EB 2019. NZ Garden Bird Survey 2017: data editing, analysis, interpretation, visualisation and communication methods. Manaaki Whenua - Landcare Research Contract Report LC3461. https://datastore.landcareresearch.co.nz/dataset/edit/nzgbs-2017-trend-analysis-and-reporting
https://www.archivemarketresearch.com/privacy-policy
Market Size and Growth: The global AI tools for data analysis market was valued at approximately USD 24,160 million in 2025 and is projected to expand at a CAGR of XX% during the forecast period from 2025 to 2033, reaching a valuation of over USD XX million by 2033. The market growth is attributed to the increasing adoption of AI and machine learning (ML) technologies to automate and enhance data analysis processes.
Drivers, Trends, and Restraints: Key drivers of the market include the growing volume and complexity of data, the need for real-time insights, and the increasing demand for predictive analytics. Emerging trends such as cloud-based deployment, self-service analytics, and augmented data analysis are further fueling market growth. However, challenges such as data privacy concerns and the lack of skilled professionals in some regions may hinder market expansion.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Targeted and untargeted metabolic profiling of polar and semi-polar metabolites in extracts from freeze-dried sugar beet (Beta vulgaris L.) roots from six different varieties, V1-V6. The analysis was performed via RP-UPLC-PDA-FLR (reverse-phase ultra-performance liquid chromatography coupled to a photodiode array detector and a fluorescence detector) and RP-UPLC-PDA-ESI-QTOF-MS and -MS/MS (reverse-phase ultra-performance liquid chromatography coupled to a photodiode array detector and to electrospray quadrupole time-of-flight tandem mass spectrometry). The raw data files of the metabolite analysis comprise:
- “0_Metadata and methods”: three .txt documents describing the materials and methods of free amino acid, organic acid, and semi-polar metabolite analysis, as well as two .csv documents including metadata and metainformation. In these files, sample names, reagents, method details, as well as species, harvest dates and the CSFID from Madritsch et al., 2020 (https://doi.org/10.1007/s11103-020-01041-8) can be found.
- “1_Free amino acids”: three .csv documents with the raw data, the evaluation, and the summary of free amino acid analysis.
- “2_Organic acids”: four folders with raw LC-MS and -MS/MS data including washes, blanks, standards, and samples, plus three .csv documents with the raw data, the evaluation, and the summary of organic acid analysis.
- “3_Semi-polar compounds”: two folders with raw LC-MS/MS data, as well as three .csv documents with the raw data, the evaluation, and the summary of semi-polar metabolite analysis.
This work was funded by the Austrian Research Promotion Agency (FFG), grant number 855706.