100+ datasets found
  1. Data from: Integrating Data Transformation in Principal Components Analysis

    • tandf.figshare.com
    pdf
    Updated Jun 4, 2023
    Cite
    Mehdi Maadooliat; Jianhua Z. Huang; Jianhua Hu (2023). Integrating Data Transformation in Principal Components Analysis [Dataset]. http://doi.org/10.6084/m9.figshare.960499.v3
    Explore at:
    Available download formats: pdf
    Dataset updated
    Jun 4, 2023
    Dataset provided by
    Taylor & Francis
    Authors
    Mehdi Maadooliat; Jianhua Z. Huang; Jianhua Hu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Principal component analysis (PCA) is a popular dimension-reduction method to reduce the complexity and obtain the informative aspects of high-dimensional datasets. When the data distribution is skewed, data transformation is commonly used prior to applying PCA. Such transformation is usually obtained from previous studies, prior knowledge, or trial-and-error. In this work, we develop a model-based method that integrates data transformation in PCA and finds an appropriate data transformation using the maximum profile likelihood. Extensions of the method to handle functional data and missing values are also developed. Several numerical algorithms are provided for efficient computation. The proposed method is illustrated using simulated and real-world data examples. Supplementary materials for this article are available online.
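    The core idea (profile a likelihood over a family of transformations and keep the one under which a low-rank PCA model fits best) can be sketched in a few lines. The following is a minimal illustration of that idea, not the authors' algorithm: it profiles a Box-Cox parameter against the Gaussian likelihood of a rank-k probabilistic PCA model, with a Jacobian term making log-likelihoods comparable across transformations.

```python
# Minimal sketch (not the paper's algorithm): profile a Box-Cox parameter
# against the likelihood of a rank-k probabilistic PCA model.
import numpy as np

def boxcox(X, lam):
    # Elementwise Box-Cox transform; X must be strictly positive.
    return np.log(X) if np.isclose(lam, 0.0) else (X**lam - 1.0) / lam

def ppca_loglik(X, k):
    # Maximised Gaussian log-likelihood of a rank-k PCA model with an
    # isotropic residual (constant terms dropped).
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    ev = np.linalg.eigvalsh(Xc.T @ Xc / n)[::-1]      # eigenvalues, descending
    sigma2 = ev[k:].mean()                            # residual variance
    return -0.5 * n * (np.log(ev[:k]).sum() + (p - k) * np.log(sigma2))

def profile_boxcox_pca(X, k, lambdas=np.linspace(-1.0, 2.0, 31)):
    # Pick the transformation under which the low-rank model fits best.
    def objective(lam):
        jacobian = (lam - 1.0) * np.log(X).sum()      # change-of-variables term
        return ppca_loglik(boxcox(X, lam), k) + jacobian
    return max(lambdas, key=objective)

rng = np.random.default_rng(0)
X = np.exp(rng.normal(size=(200, 5)))   # skewed (lognormal) data
print(profile_boxcox_pca(X, k=2))       # expect a lambda near 0 (log transform)
```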

  2. 02.2 Transforming Data Using Extract, Transform, and Load Processes

    • hub.arcgis.com
    Updated Feb 18, 2017
    Cite
    Iowa Department of Transportation (2017). 02.2 Transforming Data Using Extract, Transform, and Load Processes [Dataset]. https://hub.arcgis.com/documents/bcf59a09380b4731923769d3ce6ae3a3
    Explore at:
    Dataset updated
    Feb 18, 2017
    Dataset authored and provided by
    Iowa Department of Transportation
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    To achieve true data interoperability is to eliminate format and data model barriers, allowing you to seamlessly access, convert, and model any data, independent of format. The ArcGIS Data Interoperability extension is based on the powerful data transformation capabilities of the Feature Manipulation Engine (FME), giving you the data you want, when and where you want it. In this course, you will learn how to leverage the ArcGIS Data Interoperability extension within ArcCatalog and ArcMap, enabling you to directly read, translate, and transform spatial data according to your needs. In addition to components that allow you to work openly with a multitude of formats, the extension also provides a complex data model solution with a level of control that would otherwise require custom software. After completing this course, you will be able to:

    Recognize when you need to use the Data Interoperability tool to view or edit your data.
    Choose and apply the correct method of reading data with the Data Interoperability tool in ArcCatalog and ArcMap.
    Choose the correct Data Interoperability tool and use it to convert your data between formats.
    Edit a data model, or schema, using the Spatial ETL tool.
    Perform any desired transformations on your data's attributes and geometry using the Spatial ETL tool.
    Verify your data transformations before, during, and after a translation by inspecting your data.
    Apply best practices when creating a workflow using the Data Interoperability extension.
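    The extract-transform-load pattern the course teaches is tool-agnostic. As a hedged illustration only (the course itself works inside ArcCatalog/ArcMap with the Data Interoperability extension, not Python), the same read, reproject, derive, write flow looks like this in geopandas; the file names and target coordinate system are invented for the example.

```python
# Hedged ETL sketch with geopandas, not the ArcGIS extension itself.
import geopandas as gpd

gdf = gpd.read_file("roads.gml")           # Extract: read a non-native format
gdf = gdf.to_crs(epsg=26915)               # Transform: reproject geometries
gdf["length_m"] = gdf.geometry.length      # Transform: derive an attribute
gdf.to_file("roads.gpkg", driver="GPKG")   # Load: write to another format
```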

  3. Data from: INTEGRATE - Inverse Network Transformations for Efficient Generation of Robust Airfoil and Turbine Enhancements

    • catalog.data.gov
    • data.openei.org
    • +3 more
    Updated Jun 11, 2023
    Cite
    National Renewable Energy Laboratory (NREL) (2023). INTEGRATE - Inverse Network Transformations for Efficient Generation of Robust Airfoil and Turbine Enhancements [Dataset]. https://catalog.data.gov/dataset/integrate-inverse-network-transformations-for-efficient-generation-of-robust-airfoil-and-t
    Explore at:
    Dataset updated
    Jun 11, 2023
    Dataset provided by
    National Renewable Energy Laboratory (NREL)
    Description

    The INTEGRATE (Inverse Network Transformations for Efficient Generation of Robust Airfoil and Turbine Enhancements) project is developing a new inverse-design capability for the aerodynamic design of wind turbine rotors using invertible neural networks. This AI-based design technology can capture complex non-linear aerodynamic effects while being 100 times faster than design approaches based on computational fluid dynamics. The project enables innovation in wind turbine design by accelerating time to market through higher-accuracy early design iterations to reduce the levelized cost of energy.

    INVERTIBLE NEURAL NETWORKS
    Researchers are leveraging a specialized invertible neural network (INN) architecture, along with the novel dimension-reduction methods and airfoil/blade shape representations developed by collaborators at the National Institute of Standards and Technology (NIST), that learns complex relationships between airfoil or blade shapes and their associated aerodynamic and structural properties. This INN architecture will accelerate designs by providing a cost-effective alternative to current industrial aerodynamic design processes, including:

    Blade element momentum (BEM) theory models: limited effectiveness for the design of offshore rotors with large, flexible blades, where nonlinear aerodynamic effects dominate.
    Direct design using computational fluid dynamics (CFD): cost-prohibitive.
    Inverse-design models based on deep neural networks (DNNs): an attractive alternative to CFD for 2D design problems, but quickly overwhelmed by the increased number of design variables in 3D problems.

    AUTOMATED COMPUTATIONAL FLUID DYNAMICS FOR TRAINING DATA GENERATION - MERCURY FRAMEWORK
    The INN is trained on data obtained using the University of Maryland's (UMD) Mercury Framework, which has robust automated mesh generation capabilities and advanced turbulence and transition models validated for wind energy applications. Mercury is a multi-mesh-paradigm, heterogeneous CPU-GPU framework. It incorporates three flow solvers at UMD: 1) OverTURNS, a structured solver on CPUs; 2) HAMSTR, a line-based unstructured solver on CPUs; and 3) GARFIELD, a structured solver on GPUs. The framework is based on Python, which is often used to wrap C or Fortran codes for interoperability with other solvers. Communication between multiple solvers is accomplished with a Topology Independent Overset Grid Assembler (TIOGA).

    NOVEL AIRFOIL SHAPE REPRESENTATIONS USING GRASSMANN SPACES
    We developed a novel representation of shapes which decouples affine-style deformations from a rich set of data-driven deformations over a submanifold of the Grassmannian. The Grassmannian representation, as an analytic generative model informed by a database of physically relevant airfoils, offers (i) a rich set of novel 2D airfoil deformations not previously captured in the data, (ii) an improved low-dimensional parameter domain for inferential statistics informing design/manufacturing, and (iii) consistent 3D blade representation and perturbation over a sequence of nominal shapes.

    TECHNOLOGY TRANSFER DEMONSTRATION - COUPLING WITH NREL WISDEM
    Researchers have integrated the inverse-design tool for 2D airfoils (INN-Airfoil) into WISDEM (Wind Plant Integrated Systems Design and Engineering Model), a multidisciplinary design and optimization framework for assessing the cost of energy, as part of a tech-transfer demonstration. The integration of INN-Airfoil into WISDEM allows for the design of airfoils along with the blades that meet the dynamic design constraints on cost of energy, annual energy production, and capital costs. Through preliminary studies, researchers have shown that the coupled INN-Airfoil + WISDEM approach reduces the cost of energy by around 1% compared to the conventional design approach.

    This page will serve as a place to easily access all the publications from this work and the repositories for the software developed and released through this project.
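    The property that makes INNs attractive for inverse design is exact invertibility by construction. A toy affine coupling layer (the building block of RealNVP-style invertible networks, shown as a sketch rather than the project's architecture) makes the mechanism concrete: half of the vector passes through unchanged, so the scale and shift applied to the other half can be recomputed exactly when inverting.

```python
# Toy affine coupling layer: a sketch of why INNs are exactly invertible,
# not the INTEGRATE project's architecture.
import numpy as np

def coupling_forward(x, w, b):
    x1, x2 = np.split(x, 2)
    s = np.tanh(w @ x1 + b)          # scale/shift computed from x1 only
    y2 = x2 * np.exp(s) + s
    return np.concatenate([x1, y2])

def coupling_inverse(y, w, b):
    y1, y2 = np.split(y, 2)
    s = np.tanh(w @ y1 + b)          # x1 == y1, so s is recoverable exactly
    x2 = (y2 - s) * np.exp(-s)
    return np.concatenate([y1, x2])

rng = np.random.default_rng(1)
w, b = rng.normal(size=(2, 2)), rng.normal(size=2)
x = rng.normal(size=4)
assert np.allclose(coupling_inverse(coupling_forward(x, w, b), w, b), x)
```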

  4. Data_Sheet_1_Impact of Data Transformation: An ECG Heartbeat Classification Approach.docx

    • figshare.com
    docx
    Updated May 30, 2023
    Cite
    Yongbo Liang; Ahmed Hussain; Derek Abbott; Carlo Menon; Rabab Ward; Mohamed Elgendi (2023). Data_Sheet_1_Impact of Data Transformation: An ECG Heartbeat Classification Approach.docx [Dataset]. http://doi.org/10.3389/fdgth.2020.610956.s001
    Explore at:
    Available download formats: docx
    Dataset updated
    May 30, 2023
    Dataset provided by
    Frontiers
    Authors
    Yongbo Liang; Ahmed Hussain; Derek Abbott; Carlo Menon; Rabab Ward; Mohamed Elgendi
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Cardiovascular diseases continue to be a significant global health threat. The electrocardiogram (ECG) signal is a physiological signal that plays a major role in preventing severe and even fatal heart diseases. The purpose of this research is to explore a simple mathematical feature transformation that could be applied to ECG signal segments in order to improve the detection accuracy of heartbeats, which could facilitate automated heart disease diagnosis. Six different mathematical transformation methods were examined and analyzed using 10 s ECG segments, showing that a reciprocal transformation results in consistently better classification performance for normal vs. atrial fibrillation beats and normal vs. atrial premature beats, when compared to untransformed features. The second-best data transformation in terms of heartbeat detection accuracy was the cubic transformation. Results showed that the logarithmic transformation, often considered the go-to data transformation, was not optimal among the six data transformations. Using the optimal data transformation, the reciprocal, can lead to a 35.6% accuracy improvement, and in an overall comparison across different feature engineering methods, classifiers, and dataset sizes the improvement still reached 4.7%. Therefore, adding a simple data transformation step, such as the reciprocal or cubic, to the extracted features can improve current automated heartbeat classification in a timely manner.
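    The transformation step itself is trivial to add to a pipeline. A hedged sketch (random stand-in features and labels, not the paper's ECG data, classifiers, or results) comparing reciprocal, cubic, and log transforms on a simple classifier:

```python
# Sketch of the transformation step only, not the paper's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

transforms = {
    "none":       lambda x: x,
    "reciprocal": lambda x: 1.0 / (x + 1e-9),   # epsilon guards against /0
    "cubic":      lambda x: x**3,
    "log":        lambda x: np.log(x + 1e-9),
}

rng = np.random.default_rng(0)
X = rng.gamma(2.0, 1.0, size=(500, 8))       # positive, skewed stand-in features
y = (X[:, 0] * X[:, 1] > 2.5).astype(int)    # stand-in heartbeat labels

for name, f in transforms.items():
    acc = cross_val_score(LogisticRegression(max_iter=1000), f(X), y, cv=5).mean()
    print(f"{name:10s} mean accuracy: {acc:.3f}")
```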

  5. List of data transformation methods.

    • plos.figshare.com
    xls
    Updated Jun 21, 2023
    Cite
    Joydeb Kumar Sana; Mohammad Zoynul Abedin; M. Sohel Rahman; M. Saifur Rahman (2023). List of data transformation methods. [Dataset]. http://doi.org/10.1371/journal.pone.0278095.t003
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 21, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Joydeb Kumar Sana; Mohammad Zoynul Abedin; M. Sohel Rahman; M. Saifur Rahman
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    List of data transformation methods.

  6. Data Transformation for Clustering Utilization for Feature Detection in MS

    • zenodo.org
    bin, csv
    Updated Mar 10, 2022
    Cite
    Vojtech Barton (2022). Data Transformation for Clustering Utilization for Feature Detection in MS [Dataset]. http://doi.org/10.5281/zenodo.6337968
    Explore at:
    Available download formats: bin, csv
    Dataset updated
    Mar 10, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Vojtech Barton
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Dataset and proof-of-concept method used for the proceedings paper titled "Data Transformation for Clustering Utilization for Feature Detection in MS" at the IWBBIO 2022 conference.

  7. Data from: A fast algorithm for computing a matrix transform used to detect trends in noisy data

    • data.mendeley.com
    • narcis.nl
    Updated Jun 9, 2020
    Cite
    Dan Kestner (2020). A fast algorithm for computing a matrix transform used to detect trends in noisy data [Dataset]. http://doi.org/10.17632/mkcxrky9jc.1
    Explore at:
    Dataset updated
    Jun 9, 2020
    Authors
    Dan Kestner
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    A recently discovered universal rank-based matrix method to extract trends from noisy time series is described in Ierley and Kostinski (2019), but the formula for the output matrix elements, implemented there as open-access supplementary MATLAB code, is O(N^4), with N the matrix dimension. This can become prohibitively large for time series with hundreds of sample points or more. Based on recurrence relations, here we derive a much faster O(N^2) algorithm and provide code implementations in MATLAB and in open-source Julia. In some cases one has the output matrix and needs to solve an inverse problem to obtain the input matrix. A fast algorithm and code for this companion problem, also based on the recurrence relations, are given. Finally, in the narrower, but common, domains of (i) trend detection and (ii) parameter estimation of a linear trend, users require not the individual matrix elements but simply their accumulated mean value. For this latter case we provide a yet faster O(N) heuristic approximation that relies on a series of rank-one matrices. These algorithms are illustrated on a time series of high-energy cosmic rays with N > 4 × 10^4.

  8. Data transformation methods, hyperparameter optimization and feature selection used in prior studies.

    • plos.figshare.com
    • figshare.com
    xls
    Updated Jun 21, 2023
    Cite
    Joydeb Kumar Sana; Mohammad Zoynul Abedin; M. Sohel Rahman; M. Saifur Rahman (2023). Data transformation methods, hyperparameter optimization and feature selection used in prior studies. [Dataset]. http://doi.org/10.1371/journal.pone.0278095.t001
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 21, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Joydeb Kumar Sana; Mohammad Zoynul Abedin; M. Sohel Rahman; M. Saifur Rahman
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data transformation methods, hyperparameter optimization and feature selection used in prior studies.

  9. Description of the data transformation methods for compositional data and forecasting models.

    • plos.figshare.com
    xls
    Updated Jun 21, 2023
    Cite
    Yigang Wei; Zhichao Wang; Huiwen Wang; Yan Li; Zhenyu Jiang (2023). Description of the data transformation methods for compositional data and forecasting models. [Dataset]. http://doi.org/10.1371/journal.pone.0212772.t003
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 21, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Yigang Wei; Zhichao Wang; Huiwen Wang; Yan Li; Zhenyu Jiang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Description of the data transformation methods for compositional data and forecasting models.

  10. Data from: Transform: A program to calculate transformations between various jj and LS coupling schemes

    • elsevier.digitalcommonsdata.com
    Updated Jan 1, 1986
    Cite
    K.G. Dyall (1986). Transform: A program to calculate transformations between various jj and LS coupling schemes [Dataset]. http://doi.org/10.17632/h7nn2n8jfj.1
    Explore at:
    Dataset updated
    Jan 1, 1986
    Authors
    K.G. Dyall
    License

    https://www.elsevier.com/about/policies/open-access-licenses/elsevier-user-license/cpc-license/

    Description

    Title of program: TRANSFORM
    Catalogue Id: AADT_v1_0

    Nature of problem: The natural coupling scheme for multi-configurational Dirac-Fock calculations is the jj coupling scheme. For comparison of results of such calculations with experiment, other coupling schemes are often more useful. The program produces the transformation between a jj coupling scheme and an alternative scheme.

    Versions of this program held in the CPC repository in Mendeley Data: AADT_v1_0; TRANSFORM; 10.1016/0010-4655(86)90169-4

    This program has been imported from the CPC Program Library held at Queen's University Belfast (1969-2019)

  11. Data from: A Hybrid Feature Location Technique for Re-engineering Single Systems into Software Product Lines

    • data.niaid.nih.gov
    Updated Nov 9, 2020
    Cite
    Egyed, Alexander (2020). A Hybrid Feature Location Technique for Re-engineering Single Systems into Software Product Lines [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_4244545
    Explore at:
    Dataset updated
    Nov 9, 2020
    Dataset provided by
    Linsbauer, Lukas
    Egyed, Alexander
    Michelon, Gabriela Karoline
    Fischer, Stefan
    Assunção, Wesley K. G.
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The dataset used for evaluating the hybrid feature location technique presented in the paper: "A Hybrid Feature Location Technique for Re-engineering Single Systems into Software Product Lines". This enables reproducibility, evaluation, and comparison of our study.

    Folder "Dataset" contains for each subject system used:

    (i) the artificial variants and their configurations;

    (ii) the ECCO repository containing the traces;

    (iii) the ground truth and composed variants;

    (iv) the metrics results.

    Folder "Scenarios" contains for each subject system used:

    (i) the videos recorded while exercising features in the GUI.

  12. MidcurveNN LineGraphs

    • kaggle.com
    Updated Jul 6, 2023
    Cite
    Anushka Kulkarni (2023). MidcurveNN LineGraphs [Dataset]. https://www.kaggle.com/datasets/anushkaykulkarni/midcurvenn-linegraphs
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jul 6, 2023
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Anushka Kulkarni
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    This dataset is related to the problem of geometric dimension reduction, from 2D profile to 1D midcurve.

    The format of the original/raw dataset is very simplistic and point-based. To make it more convenient, this dataset makes it line-based.

    Line based Shape Format is [((x1,y1,..),(x2,y2,..)), ..]

    For example: L-profile raw data looks like: 5.0 5.0 10.0 5.0 10.0 30.0 35.0 30.0 35.0 35.0 5.0 35.0

    Output Profile Format: [((5.0,5.0), (10.0,5.0)), ((10.0,5.0), (10.0,30.0)), ((10.0,30.0), (35.0,30.0)), ((35.0,30.0), (35.0, 35.0)), ((35.0, 35.0), (5.0,35.0)), ((5.0,35.0), (5.0,5.0))]

    You can see that a last line has been added to close the polygon.

    For L-midcurve raw-data looks like: 7.5 5.0 7.5 32.5 35.0 32.5 7.5 32.5

    Output Midcurve Format: [((7.5,5.0), (7.5, 32.5)), ((7.5, 32.5), (35.0, 32.5)), ((35.0, 32.5), (7.5, 32.5))]

    No closure is done. Care has been taken for shapes like 'T' and 'X' where midcurve poly-lines are not sequential.
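    A hypothetical helper (not part of the dataset's tooling) showing the conversion for sequential shapes like the L examples above: consecutive points are paired into segments, with the first point repeated at the end to close profiles but not midcurves. Non-sequential cases such as 'T' and 'X' need extra handling not shown here.

```python
# Hypothetical converter from raw whitespace-separated points to the
# line-based format; handles the sequential case only.
def to_line_format(raw, close):
    nums = [float(t) for t in raw.split()]
    pts = list(zip(nums[0::2], nums[1::2]))   # (x, y) pairs
    if close:
        pts.append(pts[0])                    # repeat first point to close
    return list(zip(pts[:-1], pts[1:]))       # consecutive points -> segments

profile = to_line_format("5.0 5.0 10.0 5.0 10.0 30.0 35.0 30.0 35.0 35.0 5.0 35.0", close=True)
midcurve = to_line_format("7.5 5.0 7.5 32.5 35.0 32.5 7.5 32.5", close=False)
print(profile[0])     # ((5.0, 5.0), (10.0, 5.0))
print(len(midcurve))  # 3 segments, no closing segment added
```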

    Once the base geometries are generated, various transformations are applied to populate the augmented dataset.

  13. Data for the Gaze error estimation and linear transformation to improve accuracy of video-based eye trackers

    • data.ncl.ac.uk
    zip
    Updated Apr 10, 2025
    Cite
    Varun Prakash Padikal (2025). Data for the Gaze error estimation and linear transformation to improve accuracy of video-based eye trackers [Dataset]. http://doi.org/10.25405/data.ncl.28669472.v1
    Explore at:
    Available download formats: zip
    Dataset updated
    Apr 10, 2025
    Dataset provided by
    Newcastle University
    Authors
    Varun Prakash Padikal
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The folder contains two subfolders: one with EyeLink 1000 Plus eye-tracking data, and the other with Tobii Nano Pro data. Each of these folders includes a file named "Gaze_position_raw", an Excel file containing all the raw data collected from participants. In this file, different trials are stored in separate sheets, and each sheet contains columns for the displayed target location and the corresponding gaze location.

    A separate folder called "Processed_data_EyeLink/Tobii" contains the results after performing a linear transformation. In this folder, each trial is saved as a separate Excel file. These files include the target location, the original gaze location, and the corrected gaze location after the corresponding linear transformation.
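    A hedged sketch of the kind of linear correction described (not necessarily the authors' exact procedure): estimate an affine transform mapping raw gaze coordinates to the displayed target coordinates by least squares, then apply it to correct the gaze data. The screen size, grid, and error model below are invented for the example.

```python
# Least-squares affine gaze correction, sketched on simulated data.
import numpy as np

def fit_affine(gaze, target):
    # Augment with a column of ones so the fit includes a translation term.
    A = np.hstack([gaze, np.ones((len(gaze), 1))])
    M, *_ = np.linalg.lstsq(A, target, rcond=None)
    return M                                           # shape (3, 2)

def apply_affine(M, gaze):
    return np.hstack([gaze, np.ones((len(gaze), 1))]) @ M

rng = np.random.default_rng(0)
target = rng.uniform(0, 1920, size=(9, 2))             # stand-in calibration grid
gaze = target @ [[1.02, 0.00], [0.01, 0.98]] + 12.0    # simulated systematic error
gaze = gaze + rng.normal(scale=2.0, size=gaze.shape)   # plus measurement noise
M = fit_affine(gaze, target)
corrected = apply_affine(M, gaze)
print(np.abs(corrected - target).mean())               # mean residual (pixels)
```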

  14. Replication Data for: Fundamentally New Methods for Transformation of the Value of Commodities into Original and Equilibrium Production Prices Using Marx's Five-Sphere Tables from Volume 3 of Capital

    • dataverse.harvard.edu
    Updated Jul 8, 2020
    Cite
    Valeriy Kalyuzhnyi (2020). Replication Data for: Fundamentally New Methods for Transformation of the Value of Commodities into Original and Equilibrium Production Prices Using Marx's Five-Sphere Tables from Volume 3 of Capital [Dataset]. http://doi.org/10.7910/DVN/QYOPDS
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jul 8, 2020
    Dataset provided by
    Harvard Dataverse
    Authors
    Valeriy Kalyuzhnyi
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Abstract: In this paper, the author presents fundamentally new methods for transforming the value of commodities into original and equilibrium prices of production using Karl Marx's five-sector tables from the third volume of Capital. Mathematical verification of the methods is performed using sequential iterations and the program Wolfram Mathematica. For the first time, a method for the inverse transformation of production prices of commodities into value prices is presented. It is proved that pricing systems based on the principles of value and price of production are not mutually exclusive; they complement each other, representing a single whole. A comprehensive solution to the transformation problem shows that Karl Marx did not make the mistakes attributed to him by critics.

    JEL classification: B14, B16, B24, E11, E20, E21, E22, P16, P17

    Keywords: transformation problem, original transformation, the individual sphere of production, the equilibrium price of production, inverse transformation

  15. The Response Scale Transformation Project

    • ssh.datastations.nl
    ods, odt +3
    Updated Dec 9, 2020
    Cite
    de . de Jonge; R. Veenhoven (2020). The Response Scale Transformation Project [Dataset]. http://doi.org/10.17026/DANS-ZX5-P7PE
    Explore at:
    Available download formats: odt, ods, tsv, zip, text/x-fixed-field
    Dataset updated
    Dec 9, 2020
    Dataset provided by
    DANS Data Station Social Sciences and Humanities
    Authors
    de . de Jonge; R. Veenhoven
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    In this project we reviewed existing methods used to homogenize data and developed several new methods for dealing with the diversity in survey questions on the same subject. The project is a spin-off from the World Database of Happiness, the main aim of which is to collate and make available research findings on the subjective enjoyment of life and to prepare these data for research synthesis. The first methods we discuss were proposed in the book 'Happiness in Nations' and were used at the inception of the World Database of Happiness. Some 10 years later a new method was introduced: the International Happiness Scale Interval Study (HSIS). Taking the HSIS as a basis, the Continuum Approach was developed. Then, building on this approach, we developed the Reference Distribution Method. A simple example of the most basic homogenization step, a linear stretch, is sketched below.
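    A minimal sketch, assuming responses on a 1..k scale are mapped linearly onto the 0-10 range (an assumed simplified form of the early methods; the HSIS, Continuum Approach, and Reference Distribution refinements are not shown):

```python
# Assumed linear-stretch homogenisation: map a response on a k-point
# scale (1..k) linearly onto a common 0-10 range.
def linear_stretch(response: int, k: int) -> float:
    if not 1 <= response <= k:
        raise ValueError("response must lie on the 1..k scale")
    return 10.0 * (response - 1) / (k - 1)

print(linear_stretch(3, 4))   # a 3 on a 4-point scale -> about 6.67
```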

  16. Data from: Transformation of Fluorotelomer Carboxylic Acids and the Development of Methods to Analyze Total Fluorine in Complex Environmental Matrices

    • curate.nd.edu
    pdf
    Updated May 16, 2025
    Cite
    Liliya Chernysheva (2025). Transformation of Fluorotelomer Carboxylic Acids and the Development of Methods to Analyze Total Fluorine in Complex Environmental Matrices [Dataset]. http://doi.org/10.7274/28801787.v1
    Explore at:
    Available download formats: pdf
    Dataset updated
    May 16, 2025
    Dataset provided by
    University of Notre Dame
    Authors
    Liliya Chernysheva
    License

    https://www.law.cornell.edu/uscode/text/17/106

    Description

    Per- and polyfluoroalkyl substances (PFAS) represent a class of synthetic compounds widely utilized in consumer products and prevalent across various environmental matrices. They are typically analyzed individually or comprehensively as a chemical family, primarily identified by their abundant fluorine content. Over the last two decades, there has been a notable trend towards utilizing partially fluorinated PFAS, which emulate the characteristics of fully fluorinated terminal PFAS but are subject to less stringent regulatory oversight. To provide context for experimental work, Chapter 1 introduces the dissertation’s objectives and overarching research questions, while Chapter 2 presents a comprehensive review of PFAS and fluorine analysis techniques. This review highlights current methodological strengths, limitations, and key parameters, laying the groundwork for the two primary research projects. The first project, described in Chapter 3, investigates the ambient transformation of PFAS precursors. It begins by exploring an elimination pathway under controlled laboratory conditions using a base, establishing a baseline understanding of the reaction. A similar experiment is then conducted using a consumer-grade cleaning agent, demonstrating that comparable transformation is possible in indoor environments. Using both targeted and non-targeted analysis, the study reveals challenges in accurately assessing PFAS composition and species abundance due to the transformation of precursors to terminal PFAS. The second project, detailed in Chapter 4, focuses on developing analytical methods for total fluorine (TF) detection using Particle-Induced Gamma-ray Emission (PIGE) spectrometry, a relatively novel technique in PFAS analysis that lacks standardized protocols for this chemical class. Despite this, PIGE demonstrated promising results comparable to those of established analytical instruments. Successful TF analysis methods were applied to various matrices relevant to PFAS treatment, including granular activated carbon, Ottawa sand, and brass. Finally, Chapter 5 synthesizes the dissertation’s findings, discusses broader implications, and identifies opportunities for future PFAS research.

  17. Data from: Transforming towards what? A review of futures thinking applied in the quest for navigating sustainability transformations

    • zenodo.org
    • data.niaid.nih.gov
    bin
    Updated May 4, 2024
    Cite
    Silvana Juri; Marais-Potgieter Andrea (2024). Transforming towards what? A review of futures thinking applied in the quest for navigating sustainability transformations [Dataset]. http://doi.org/10.5281/zenodo.11114262
    Explore at:
    Available download formats: bin
    Dataset updated
    May 4, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Silvana Juri; Marais-Potgieter Andrea
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    May 4, 2024
    Description

    This is the dataset used for the review article "Transforming towards what? A review of futures thinking applied in the quest for navigating sustainability transformations". The spreadsheet contains the bibliographic records and the data used and organized for its analysis.

  18. Data from: Mixed-Methods Evaluation of Primary Care Transformation in Scotland and China: Metadata and Documentation, 2020-2024

    • beta.ukdataservice.ac.uk
    Updated 2025
    Cite
    Stewart Mercer (2025). Mixed-Methods Evaluation of Primary Care Transformation in Scotland and China: Metadata and Documentation, 2020-2024 [Dataset]. http://doi.org/10.5255/ukda-sn-857854
    Explore at:
    Dataset updated
    2025
    Dataset provided by
    DataCite (https://www.datacite.org/)
    UK Data Service (https://ukdataservice.ac.uk/)
    Authors
    Stewart Mercer
    Area covered
    China, Scotland
    Description

    This project explored and compared recent changes in primary care in Scotland and China, focusing on how these developments addressed the needs of ageing populations and sought to reduce health inequalities. The study identified key facilitators and barriers to progress in both countries and highlighted opportunities for mutual learning.

    Data were collected by GP and patient surveys and individual qualitative interviews with GPs, patients and primary care multidisciplinary team members.

    The data cannot be made available for future reuse as participants were not informed about secondary reuse.

  19. Replication Data for: The Solution to Marx’s Transformation Problem in the Direct and Inverse Formulation (Programs for transformational calculations and illustration of the effect of the law of large numbers)

    • search.dataone.org
    Updated Nov 22, 2023
    Cite
    Kalyuzhnyi, Valeriy (2023). Replication Data for: The Solution to Marx’s Transformation Problem in the Direct and Inverse Formulation (Programs for transformational calculations and illustration of the effect of the law of large numbers) [Dataset]. http://doi.org/10.7910/DVN/UKI7SC
    Explore at:
    Dataset updated
    Nov 22, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    Kalyuzhnyi, Valeriy
    Description

    Abstract: In this paper, the author presents fundamentally new methods for transforming the value of commodities into original and equilibrium prices of production using Karl Marx's five-sector tables from the third volume of Capital. Mathematical verification of the methods is performed using sequential iterations and the program Wolfram Mathematica. For the first time, a method for the inverse transformation of production prices of commodities into value prices is presented. It is proved that pricing systems based on the principles of value and price of production are not mutually exclusive; they complement each other, representing a single whole. A comprehensive solution to the transformation problem shows that Karl Marx did not make the mistakes attributed to him by critics.

    JEL classification: B14, B16, B24, E11, E20, E21, E22, P16, P17

    Keywords: transformation problem, original transformation, the individual sphere of production, the equilibrium price of production, inverse transformation

  20. Data from: Transformation of measurement uncertainties into low-dimensional feature vector space

    • data.niaid.nih.gov
    • datadryad.org
    zip
    Updated Feb 1, 2021
    Cite
    Antonios Alexiadis; Scott Ferson; Eann A. Patterson (2021). Transformation of measurement uncertainties into low-dimensional feature vector space [Dataset]. http://doi.org/10.5061/dryad.6hdr7sqx2
    Explore at:
    Available download formats: zip
    Dataset updated
    Feb 1, 2021
    Dataset provided by
    University of Liverpool
    Authors
    Antonios Alexiadis; Scott Ferson; Eann A. Patterson
    License

    https://spdx.org/licenses/CC0-1.0.html

    Description

    Advances in technology allow the acquisition of data with high spatial and temporal resolution. These datasets are usually accompanied by estimates of the measurement uncertainty, which may be spatially or temporally varying and should be taken into consideration when making decisions based on the data. At the same time, various transformations are commonly implemented to reduce the dimensionality of the datasets for post-processing, or to extract significant features. However, the corresponding uncertainty is not usually represented in the low-dimensional or feature vector space. A method is proposed that maps the measurement uncertainty into the equivalent low-dimensional space with the aid of approximate Bayesian computation, resulting in a distribution that can be used to make statistical inferences. The method involves no assumptions about the probability distribution of the measurement error and is independent of the feature extraction process as demonstrated in three examples. In the first two examples Chebyshev polynomials were used to analyse structural displacements and soil moisture measurements; while in the third, principal component analysis was used to decompose global ocean temperature data. The uses of the method range from supporting decision making in model validation or confirmation, model updating or calibration and tracking changes in condition, such as the characterisation of the El Niño Southern Oscillation.
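    A hedged illustration of the mapping idea (the paper uses approximate Bayesian computation; plain Monte Carlo stands in here, and all data below are random stand-ins): draw samples of one measurement according to its stated uncertainty and push every draw through a fixed PCA projection, so that the spread of the resulting scores represents the uncertainty in the low-dimensional feature space.

```python
# Monte Carlo stand-in for mapping measurement uncertainty into PCA space.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 20))        # stand-in high-resolution measurements
sigma = 0.05 * np.ones(20)               # stated per-channel uncertainty

# Fixed feature extraction: top-3 principal directions of the dataset.
mean = data.mean(axis=0)
_, _, Vt = np.linalg.svd(data - mean, full_matrices=False)

def project(X):
    # Map measurements into the 3-dimensional principal-component space.
    return (X - mean) @ Vt[:3].T

# Perturb the first measurement according to its uncertainty; the spread of
# the projected draws is the uncertainty mapped into feature space.
draws = data[0] + rng.normal(scale=sigma, size=(2000, 20))
scores = project(draws)
print(scores.mean(axis=0), scores.std(axis=0))
```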
