8 datasets found
  1. darpa_sd2_perovskites Dataset

    • paperswithcode.com
    Updated May 25, 2020
    Cite
    Ian M. Pendleton; Mary K. Caucci; Michael Tynes; Aaron Dharna; Mansoor Ani Najeeb Nellikkal; Zhi Li; Emory M. Chan; Alexander J. Norquist; and Joshua Schrier (2020). darpa_sd2_perovskites Dataset [Dataset]. https://paperswithcode.com/dataset/darpa-sd2-perovskites
    Dataset updated
    May 25, 2020
    Authors
    Ian M. Pendleton; Mary K. Caucci; Michael Tynes; Aaron Dharna; Mansoor Ani Najeeb Nellikkal; Zhi Li; Emory M. Chan; Alexander J. Norquist; and Joshua Schrier
    Description

    Included in this content:

    • 0045.perovskitedata.csv - main dataset used in this article. A more detailed description can be found in the "Dataset Overview" section below.
    • Chemical Inventory.csv - the hand-curated file of all chemicals used in the construction of the perovskite dataset. This file includes identifiers, chemical properties, and other information.
    • ExcessMolarVolumeData.xlsx - record of experimental data, computations, and the final dataset used in the generation of the excess molar volume plots.
    • MLModelMetrics.xlsx - all of the ML metrics organized in one place (excludes the reactant-set-specific breakdown; see ML_Logs.zip for those files).
    • OrganoammoniumDensityDataset.xlsx - complete set of the data used to generate the density values. Example calculations included.
    • model_matchup_main.py - Python pipeline used to generate all of the ML runs associated with the article. More detailed instructions on the operation of this code are included in the "ML Code" section below. This file is also hosted on GitHub: https://github.com/ipendlet/MLScripts/blob/master/temp_densityconc/model_matchup_main_20191231.py
    • SolutionVolumeDataset - complete set of 219 solutions in the perovskite dataset. Tabs include the automatically generated reagent information from ESCALATE, hand-curated reagent information from early runs, and the generation of the dataset used in the creation of Figure 5.
    • error_auditing.zip - code and historical datasets used for reporting the dataset auditing.
    • AllCode.zip, which contains:
      • model_matchup_main_20191231.py - Python pipeline used to generate all of the ML runs associated with the article. More detailed instructions on the operation of this code are included in the "ML Code" section below. This file is also hosted on GitHub: https://github.com/ipendlet/MLScripts/blob/master/temp_densityconc/0045.perovskitedata.csv
      • VmE_CurveFitandPlot.py - Python code for generating the third-order polynomial fit to the VmE vs. mole fraction of FAH included in the main text. Requires 'MolFractionResults.csv' to function (also included).
      • Calculation_Vm_Ve_CURVEFITTING.nb - Mathematica code for generating the third-order polynomial fit to the VmE vs. mole fraction of FAH included in the main text.
      • Covariance_Analysis.py - Python code for ingesting and plotting the covariance of features and volumes in the perovskite dataset. Includes the renaming dictionaries used for the publication.
      • FeatureComparison_Plotting.py - Python code for reading in and plotting features for the 'GBT' and 'OHGBT' folders in this directory. The code parses the contents of these folders and generates the feature comparison metrics used for Figure 9 and the associated Figure S8. Some assembly required.
      • Requirements.txt - all of the packages used in the generation of this paper.
      • 0045.perovskitedata.csv - the main dataset described throughout the article. This file is required to run some of the code and is therefore kept near the code (a short loading sketch follows this listing).
    • ML_Logs.zip, which contains a folder describing every model generated for this article. In each folder there are a number of files:
      • Features_named_important.csv and features_value_importance.csv - these files are linked together and describe the weighted feature contributions from features (only present for GBT models).
      • AnalysisLog.txt - log file of the run, including all options, data curation and model training summaries.
      • LeaveOneOut_Summary.csv - results of the leave-one-reactant-set-out studies on the model (if performed).
      • LOOModelInfo.txt - hyperparameter information for each model in the study (associated with the given dataset; sometimes includes duplicate runs).
      • STTSModelInfo.txt - hyperparameter information for each model in the study (associated with the given dataset; sometimes includes duplicate runs).
      • StandardTestTrain_Summary.csv - results of the 6-fold cross-validation ML performance (for the hold-out case).
      • LeaveOneOut_FullDataset_ByAmine.csv - results of the leave-one-reactant-set-out studies performed on the full dataset (all experiments), specified by reactant set (delineated by the amine).
      • LeaveOneOut_StratifiedData_ByAmine.csv - results of the leave-one-reactant-set-out studies performed on a random stratified sample (96 random experiments), specified by reactant set (delineated by the amine).
      • model_matchup_main_*.py - code used to generate all of the runs contained in a particular folder. The code is exactly what was used at run time to generate a given dataset (requires the 0045.perovskitedata.csv file to run).
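
    A minimal way to take a first look at the main dataset file is with pandas; this sketch is not part of the released code and simply assumes 0045.perovskitedata.csv sits in the working directory:

    import pandas as pd

    # Load the main perovskite dataset; low_memory=False avoids mixed-dtype
    # warnings that pandas can emit on wide CSV files.
    df = pd.read_csv("0045.perovskitedata.csv", low_memory=False)

    print(df.shape)         # (number of experiments, number of columns)
    print(df.columns[:10])  # first few column names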

  2. the-stack

    • huggingface.co
    • opendatalab.com
    Updated Oct 27, 2022
    + more versions
    Cite
    the-stack [Dataset]. https://huggingface.co/datasets/bigcode/the-stack
    Dataset updated
    Oct 27, 2022
    Dataset authored and provided by
    BigCode
    License

    https://choosealicense.com/licenses/other/

    Description

    Dataset Card for The Stack

      Changelog
    

    Release Description

    v1.0 Initial release of the Stack. Included 30 programming languages and 18 permissive licenses. Note: Three included licenses (MPL/EPL/LGPL) are considered weak copyleft licenses. The resulting near-deduplicated dataset is 3TB in size.

    v1.1 The three copyleft licenses (MPL/EPL/LGPL) were excluded and the list of permissive licenses was extended to 193 licenses in total. The list of programming languages… See the full description on the dataset page: https://huggingface.co/datasets/bigcode/the-stack.
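
    As a minimal usage sketch (based on the standard Hugging Face datasets API; the data_dir layout and the "content" field name are assumptions, so check the dataset card before relying on them), one language subset can be streamed without downloading the full dataset:

    from datasets import load_dataset

    # Stream one language subset of The Stack; streaming=True avoids
    # materializing the multi-terabyte dataset locally.
    ds = load_dataset("bigcode/the-stack", data_dir="data/python",
                      split="train", streaming=True)

    first = next(iter(ds))
    print(first["content"][:200])  # source text of the first file (field name assumed)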

  3. PhysioNet Challenge 2020 Dataset

    • paperswithcode.com
    Updated Dec 30, 2020
    Cite
    Erick A. Perez Alday; Annie Gu; Amit Shah; Chad Robichaux; An-Kwok Ian Wong; Chengyu Liu; Feifei Liu; Ali Bahrami Rad; Andoni Elola; Salman Seyedi; Qiao Li; ASHISH SHARMA; Gari D. Clifford; Matthew A. Reyna (2020). PhysioNet Challenge 2020 Dataset [Dataset]. https://paperswithcode.com/dataset/physionet-challenge-2020
    Dataset updated
    Dec 30, 2020
    Authors
    Erick A. Perez Alday; Annie Gu; Amit Shah; Chad Robichaux; An-Kwok Ian Wong; Chengyu Liu; Feifei Liu; Ali Bahrami Rad; Andoni Elola; Salman Seyedi; Qiao Li; ASHISH SHARMA; Gari D. Clifford; Matthew A. Reyna
    Description

    Data

    The data for this Challenge are from multiple sources:

    • CPSC Database and CPSC-Extra Database
    • INCART Database
    • PTB and PTB-XL Database
    • The Georgia 12-lead ECG Challenge (G12EC) Database
    • Undisclosed Database

    The first source is the public data (CPSC Database) and unused data (CPSC-Extra Database) from the China Physiological Signal Challenge in 2018 (CPSC2018), held during the 7th International Conference on Biomedical Engineering and Biotechnology in Nanjing, China. The unused data from CPSC2018 is NOT the test data from CPSC2018; the test data of CPSC2018 is included in the final private database that has been sequestered. This training set consists of two sets of 6,877 (male: 3,699; female: 3,178) and 3,453 (male: 1,843; female: 1,610) 12-lead ECG recordings lasting from 6 seconds to 60 seconds. Each recording was sampled at 500 Hz.

    The second source set is the public dataset from St Petersburg INCART 12-lead Arrhythmia Database. This database consists of 74 annotated recordings extracted from 32 Holter records. Each record is 30 minutes long and contains 12 standard leads, each sampled at 257 Hz.

    The third source, from the Physikalisch-Technische Bundesanstalt (PTB), comprises two public databases: the PTB Diagnostic ECG Database and PTB-XL, a large publicly available electrocardiography dataset. The first PTB database contains 516 records (male: 377, female: 139). Each recording was sampled at 1000 Hz. PTB-XL contains 21,837 clinical 12-lead ECGs (male: 11,379; female: 10,458) of 10-second length with a sampling frequency of 500 Hz.

    The fourth source is a Georgia database which represents a unique demographic of the Southeastern United States. This training set contains 10,344 12-lead ECGs (male: 5,551, female: 4,793) of 10 second length with a sampling frequency of 500 Hz.

    The fifth source is an undisclosed American database that is geographically distinct from the Georgia database. This source contains 10,000 ECGs (all retained as test data).

    All data is provided in WFDB format. Each ECG recording has a binary MATLAB v4 file (see page 27) for the ECG signal data and a text file in WFDB header format describing the recording and patient attributes, including the diagnosis (the labels for the recording). The binary files can be read using the load function in MATLAB and the scipy.io.loadmat function in Python; please see our baseline models for examples of loading the data. The first line of the header provides information about the total number of leads and the total number of samples or points per lead. The following lines describe how each lead was saved, and the last lines provide information on demographics and diagnosis. Below is an example header file A0001.hea:

    A0001 12 500 7500 05-Feb-2020 11:39:16
    A0001.mat 16+24 1000/mV 16 0 28 -1716 0 I
    A0001.mat 16+24 1000/mV 16 0 7 2029 0 II
    A0001.mat 16+24 1000/mV 16 0 -21 3745 0 III
    A0001.mat 16+24 1000/mV 16 0 -17 3680 0 aVR
    A0001.mat 16+24 1000/mV 16 0 24 -2664 0 aVL
    A0001.mat 16+24 1000/mV 16 0 -7 -1499 0 aVF
    A0001.mat 16+24 1000/mV 16 0 -290 390 0 V1
    A0001.mat 16+24 1000/mV 16 0 -204 157 0 V2
    A0001.mat 16+24 1000/mV 16 0 -96 -2555 0 V3
    A0001.mat 16+24 1000/mV 16 0 -112 49 0 V4
    A0001.mat 16+24 1000/mV 16 0 -596 -321 0 V5
    A0001.mat 16+24 1000/mV 16 0 -16 -3112 0 V6
    
    Age: 74
    Sex: Male
    Dx: 426783006
    Rx: Unknown
    Hx: Unknown
    Sx: Unknown
    

    From the first line, we see that the recording number is A0001, and the recording file is A0001.mat. The recording has 12 leads, each recorded at 500 Hz sample frequency, and contains 7500 samples. From the next 12 lines, we see that each signal was written at 16 bits with an offset of 24 bits, the amplitude resolution is 1000 with units in mV, the resolution of the analog-to-digital converter (ADC) used to digitize the signal is 16 bits, and the baseline value corresponding to 0 physical units is 0. The first value of the signal, the checksum, and the lead name are included for each signal. From the final 6 lines, we see that the patient is a 74-year-old male with a diagnosis (Dx) of 426783006. The medical prescription (Rx), history (Hx), and symptom or surgery (Sx) are unknown.
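
    As an illustration (not part of the Challenge materials), the recording and header above can be read in Python; the 'val' key for the signal array is an assumption based on the Challenge's MATLAB v4 files, so adjust it if your copy differs:

    from scipy.io import loadmat

    record = "A0001"

    # 12 x 7500 array of raw ADC values, one row per lead.
    signal = loadmat(f"{record}.mat")["val"]

    with open(f"{record}.hea") as f:
        lines = f.read().splitlines()

    # First line: record name, number of leads, sampling frequency, samples per lead.
    _, n_leads, fs, n_samples = lines[0].split()[:4]

    # Lines after the per-lead rows hold demographics and diagnosis
    # (they may carry a leading '#' in the distributed files).
    meta = {}
    for line in lines[int(n_leads) + 1:]:
        if ":" in line:
            key, value = line.lstrip("#").split(":", 1)
            meta[key.strip()] = value.strip()

    print(signal.shape, fs, meta.get("Dx"))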

    Each ECG recording has one or more labels describing different types of abnormalities as SNOMED-CT codes. The full list of diagnoses for the Challenge has been posted as a 3-column CSV file: long-form description, corresponding SNOMED-CT code, and abbreviation. Although these descriptions apply to all training data, there may be fewer classes in the test data, and in different proportions. However, every class in the test data is represented in the training data.

  4. Data from: Hybrid LCA database generated using ecoinvent and EXIOBASE

    • data.subak.org
    • data.niaid.nih.gov
    • +1 more
    csv
    Updated Feb 16, 2023
    + more versions
    Cite
    International Reference Center for Life Cycle Assessment and Sustainable Transition (CIRAIG) (2023). Hybrid LCA database generated using ecoinvent and EXIOBASE [Dataset]. https://data.subak.org/dataset/hybrid-lca-database-generated-using-ecoinvent-and-exiobase
    Dataset updated
    Feb 16, 2023
    Dataset provided by
    International Reference Center for Life Cycle Assessment and Sustainable Transition (CIRAIG)
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Hybrid LCA database generated using ecoinvent and EXIOBASE: new direct inputs (coming from EXIOBASE) that were deemed missing (e.g., services) are added to each process of the original ecoinvent database. Each process of the resulting hybrid database is thus not (or at least less) truncated, and the calculated life-cycle emissions/impacts should therefore be closer to reality.

    For license reasons, only the added inputs for each process of ecoinvent are provided (and not all the inputs).

    Why are there two versions for hybrid-ecoinvent3.5?

    One of the versions corresponds to ecoinvent hybridized with the normal version of EXIOBASE, and the other to ecoinvent hybridized with a capital-endogenized version of EXIOBASE.

    What does capital endogenization do?

    It matches capital goods formation to the value chains of products where they are required. In more LCA terms, EXIOBASE in its normal version does not allocate capital use to value chains. It is as if ecoinvent processes had no inputs of buildings, etc., in their unit process inventories. For more detail on this, refer to (Södersten et al., 2019) or (Miller et al., 2019).

    So which version do I use?

    Using the "with capitals" version gives more comprehensive coverage. Using the "without capitals" version means that if an ecoinvent process is missing inputs of capital goods (e.g., a process does not include the employees' company laptops), they won't be added. The "with capitals" version comes with its fair share of assumptions and uncertainties, however.

    Why is it only available for hybrid-ecoinvent3.5?

    The work used for capital endogenization is not available for exiobase3.8.1.

    How do I use the dataset?

    First, to use it, you will need both the corresponding ecoinvent [cut-off] and EXIOBASE [product x product] versions. For the reference year of EXIOBASE to be used, take 2011 if using hybrid-ecoinvent3.5 and 2019 for hybrid-ecoinvent3.6 and 3.7.1.

    In the four datasets of this package, only added inputs are given (i.e. inputs from EXIOBASE added to ecoinvent processes). Ecoinvent and EXIOBASE processes/sectors are not included, for copyright issues. You thus need both ecoinvent and EXIOBASE to calculate life cycle emissions/impacts.

    Module to get ecoinvent in a Python format: https://github.com/majeau-bettez/ecospold2matrix (make sure to take the most up-to-date branch)

    Module to get EXIOBASE in a Python format: https://github.com/konstantinstadler/pymrio (can also be installed with pip)

    If you want to use the "with capitals" version of the hybrid database, you also need to use the capital endogenized version of EXIOBASE, available here: https://zenodo.org/record/3874309. Choose the pxp version of the year you plan to study (which should match with the year of the EXIOBASE version). You then need to normalize the capital matrix (i.e., divide by the total output x of EXIOBASE). Then, you simply add the normalized capital matrix (K) to the technology matrix (A) of EXIOBASE (see equation below).

    Once you have all the data needed, you just need to apply a slightly modified version of the Leontief equation:

    \begin{equation}
    \mathbf{q}^{hyb} =
    \begin{bmatrix} \mathbf{C}^{lca}\cdot\mathbf{S}^{lca} & \mathbf{C}^{io}\cdot\mathbf{S}^{io} \end{bmatrix}
    \cdot
    \left( \mathbf{I} -
    \begin{bmatrix} \mathbf{A}^{lca} & \mathbf{C}^{d} \\ \mathbf{C}^{u} & \mathbf{A}^{io}+\mathbf{K}^{io} \end{bmatrix}
    \right)^{-1}
    \cdot
    \begin{bmatrix} \mathbf{y}^{lca} \\ 0 \end{bmatrix}
    \end{equation}

    • q^hyb gives the hybridized impacts, i.e., the impacts of each process including the impacts generated by their new inputs.
    • C^lca and C^io are the respective characterization matrices for ecoinvent and EXIOBASE.
    • S^lca and S^io are the respective environmental extension matrices (or elementary flows in LCA terms) for ecoinvent and EXIOBASE.
    • I is the identity matrix.
    • A^lca and A^io are the respective technology matrices for ecoinvent and EXIOBASE (the ones loaded with ecospold2matrix and pymrio).
    • K^io is the capital matrix. If you do not use the endogenized version, do not include this matrix in the calculation.
    • C^u (the upstream cut-offs) is the matrix provided in this dataset.
    • C^d (the downstream cut-offs) is simply a matrix of zeros in the case of this application.
    • Finally, you define your final demand (or functional unit/set of functional units in LCA terms) as y^lca.

    A small numerical sketch of this calculation follows below.
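
    The sketch uses illustrative array names and tiny random matrices; in practice the matrices are large, sparse, and come from ecospold2matrix, pymrio, and this dataset:

    import numpy as np

    rng = np.random.default_rng(0)
    n_lca, n_io, n_flows, n_cat = 4, 3, 5, 2   # toy dimensions for illustration

    A_lca = 0.1 * rng.random((n_lca, n_lca))    # ecoinvent technology matrix (IO convention)
    A_io  = 0.1 * rng.random((n_io, n_io))      # EXIOBASE technology matrix
    K_io  = 0.05 * rng.random((n_io, n_io))     # normalized capital matrix (use zeros to skip endogenization)
    C_u   = 0.01 * rng.random((n_io, n_lca))    # upstream cut-offs (the matrix provided in this dataset)
    S_lca = rng.random((n_flows, n_lca))        # ecoinvent elementary flows
    S_io  = rng.random((n_flows, n_io))         # EXIOBASE environmental extensions
    C_lca = rng.random((n_cat, n_flows))        # characterization matrix for ecoinvent
    C_io  = rng.random((n_cat, n_flows))        # characterization matrix for EXIOBASE
    y_lca = np.zeros(n_lca)
    y_lca[0] = 1.0                              # functional unit: one unit of the first process

    # Block technology matrix; the downstream cut-offs C_d are zeros here.
    A_hyb = np.block([
        [A_lca, np.zeros((n_lca, n_io))],
        [C_u,   A_io + K_io],
    ])

    # q_hyb = [C_lca S_lca, C_io S_io] (I - A_hyb)^-1 [y_lca; 0]
    CS = np.hstack([C_lca @ S_lca, C_io @ S_io])
    y  = np.concatenate([y_lca, np.zeros(n_io)])
    q_hyb = CS @ np.linalg.solve(np.eye(n_lca + n_io) - A_hyb, y)

    print(q_hyb)  # one characterized impact score per impact category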

    Can I use it with different versions/reference years of EXIOBASE?

    Technically speaking, yes, it will work, because the temporal aspect does not intervene in the determination of the hybrid database presented here. Keep in mind, however, that there may be some inconsistencies. For example, you would need to multiply each of the inputs of the datasets by a factor to account for inflation: prices in ecoinvent (which were used to compile the hybrid databases, for all versions presented here) are defined in €2005.

    What are the weird sequences of numbers in the columns?

    Ecoinvent processes are identified through unique identifiers (UUIDs); their metadata (i.e., name, location, price, etc.) can be traced back using the appropriate metadata files in each dataset package.

    Why is the equation (I - A)^-1 and not A^-1 like in LCA?

    IO and LCA have the same computational background. In LCA, however, the convention is to represent both outputs and inputs in the technology matrix. That's why there is a diagonal of 1s (the outputs, i.e., functional units) and negative values elsewhere (the inputs). In IO, the technology matrix does not include outputs and only registers inputs, as positive values. In the end it is just a difference of convention: if we call T the technology matrix of LCA and A the technology matrix of IO, we have T = I - A. When you load ecoinvent using ecospold2matrix, the resulting version of ecoinvent will already be in the IO convention, so you won't have to bother with it.
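
    A two-line check of this identity with a toy 2x2 system (not from the dataset):

    import numpy as np

    T = np.array([[1.0, -0.2],
                  [-0.5, 1.0]])   # LCA convention: outputs on the diagonal, inputs negative
    A = np.eye(2) - T             # same system in IO convention (inputs as positive coefficients)
    y = np.array([1.0, 0.0])      # an arbitrary final demand

    # The LCA solve and the IO Leontief solve give the same scaling vector.
    assert np.allclose(np.linalg.solve(T, y), np.linalg.solve(np.eye(2) - A, y))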

    Pymrio does not provide a characterization matrix for EXIOBASE, what do I do?

    You can find an up-to-date characterization matrix (with Impact World+) for environmental extensions of EXIOBASE here: https://zenodo.org/record/3890339

    If you want to match characterization across both EXIOBASE and ecoinvent (which you should do), here you can find a characterization matrix with Impact World+ for ecoinvent: https://zenodo.org/record/3890367

    It's too complicated...

    The custom software that was used to develop these datasets already deals with some of the steps described. Go check it out: https://github.com/MaximeAgez/pylcaio. You can also generate your own hybrid version of ecoinvent with this software (you can play with parameters such as the correction for double counting, the inflation rate, the price data to be used, etc.). As of pylcaio v2.1, the resulting hybrid database (generated directly by pylcaio) can be exported to and manipulated in brightway2.

    Where can I get more information?

    The whole methodology is detailed in (Agez et al., 2021).

  5. SELTO Dataset

    • zenodo.org
    • data.niaid.nih.gov
    application/gzip
    Updated May 23, 2023
    + more versions
    Cite
    Sören Dittmer; David Erzmann; Henrik Harms; Rielson Falck; Marco Gosch (2023). SELTO Dataset [Dataset]. http://doi.org/10.5281/zenodo.7034899
    Dataset updated
    May 23, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Sören Dittmer; David Erzmann; Henrik Harms; Rielson Falck; Marco Gosch
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    A Benchmark Dataset for Deep Learning-based Methods for 3D Topology Optimization.

    One can find a description of the provided dataset partitions in Section 3 of Dittmer, S., Erzmann, D., Harms, H., Maass, P., SELTO: Sample-Efficient Learned Topology Optimization (2022) https://arxiv.org/abs/2209.05098.


    Every dataset container consists of multiple enumerated pairs of CSV files. Each pair describes a unique topology optimization problem and a corresponding binarized SIMP solution. Every file of the form {i}.csv contains all voxel-wise information about sample i. Every file of the form {i}_info.csv contains scalar parameters of the topology optimization problem, such as material parameters.


    This dataset represents topology optimization problems and solutions on the basis of voxels. We define all spatially varying quantities via the voxels' centers -- rather than via the vertices or surfaces of the voxels.
    In {i}.csv files, each row corresponds to one voxel in the design space. The columns correspond to ['x', 'y', 'z', 'design_space', 'dirichlet_x', 'dirichlet_y', 'dirichlet_z', 'force_x', 'force_y', 'force_z', 'density'].

    • x, y, z - These are three integer indices stating the index/location of the voxel within the voxel mesh.
    • design_space - This is one ternary variable indicating the type of material density constraint on the voxel within the TO problem formulation. "0" and "1" indicate a material density fixed at 0 or 1, respectively. "-1" indicates the absence of constraints.
    • dirichlet_x, dirichlet_y, dirichlet_z - These are three binary variables defining whether the voxel contains homogeneous Dirichlet constraints in the respective axis direction.
    • force_x, force_y, force_z - These are three floating point variables giving the three spatial components of the forces applied to each voxel. All forces are body forces given in [N/m^3].
    • density - This is a binary variable stating whether the voxel carries material in the solution of the topology optimization problem.

    Any of these files with the index i can be imported using pandas by executing:

    import pandas as pd
    
    directory = ...
    file_path = f'{directory}/{i}.csv'
    column_names = ['x', 'y', 'z', 'design_space','dirichlet_x', 'dirichlet_y', 'dirichlet_z', 'force_x', 'force_y', 'force_z', 'density']
    data = pd.read_csv(file_path, names=column_names)

    From this pandas dataframe one can extract the torch tensors of the forces F, the Dirichlet conditions ω_Dirichlet, and the design space information ω_design using the following functions:

    import torch
    
    def get_shape_and_voxels(data):
      shape = data[['x', 'y', 'z']].iloc[-1].values.astype(int) + 1
      vox_x = data['x'].values
      vox_y = data['y'].values
      vox_z = data['z'].values
      voxels = [vox_x, vox_y, vox_z]
      return shape, voxels
    
    
    def get_forces_boundary_conditions_and_design_space(data, shape, voxels):
      F = torch.zeros(3, *shape, dtype=torch.float32)
      F[0, voxels[0], voxels[1], voxels[2]] = torch.tensor(data['force_x'].values, dtype=torch.float32)
      F[1, voxels[0], voxels[1], voxels[2]] = torch.tensor(data['force_y'].values, dtype=torch.float32)
      F[2, voxels[0], voxels[1], voxels[2]] = torch.tensor(data['force_z'].values, dtype=torch.float32)
    
      ω_Dirichlet = torch.zeros(3, *shape, dtype=torch.float32)
      ω_Dirichlet[0, voxels[0], voxels[1], voxels[2]] = torch.tensor(data['dirichlet_x'].values, dtype=torch.float32)
      ω_Dirichlet[1, voxels[0], voxels[1], voxels[2]] = torch.tensor(data['dirichlet_y'].values, dtype=torch.float32)
      ω_Dirichlet[2, voxels[0], voxels[1], voxels[2]] = torch.tensor(data['dirichlet_z'].values, dtype=torch.float32)
    
      ω_design = torch.zeros(1, *shape, dtype=int)
      ω_design[:, voxels[0], voxels[1], voxels[2]] = torch.from_numpy(data['design_space'].values.astype(int))
      return F, ω_Dirichlet, ω_design

    The corresponding {i}_info.csv files only have one row with column labels ['E', 'ν', 'σ_ys', 'vox_size', 'p_x', 'p_y', 'p_z'].

    • E - Young's modulus [Pa]
    • ν - Poisson's ratio [-]
    • σ_ys - Yield stress [Pa]
    • vox_size - Length of the edge of a (cube-shaped) voxel [m]
    • p_x, p_y, p_z - Location of the root of the design space [m]

    Analogously to above, one can import any {i}_info.csv file by executing:

    file_path = f'{directory}/{i}_info.csv'
    data_info_column_names = ['E', 'ν', 'σ_ys', 'vox_size', 'p_x', 'p_y', 'p_z']
    data_info = pd.read_csv(file_path, names=data_info_column_names)
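
    A short usage sketch, assuming the two helper functions defined above and the dataframes data and data_info loaded for the same sample i:

    shape, voxels = get_shape_and_voxels(data)
    F, ω_Dirichlet, ω_design = get_forces_boundary_conditions_and_design_space(data, shape, voxels)

    print(F.shape, ω_Dirichlet.shape, ω_design.shape)        # (3, *shape), (3, *shape), (1, *shape)
    print(data_info[['E', 'ν', 'σ_ys']].iloc[0].to_dict())   # scalar material parameters of sample i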

  6. Data from: PyProcar: A Python library for electronic structure...

    • narcis.nl
    • data.mendeley.com
    Updated Dec 18, 2019
    Cite
    Herath, U (via Mendeley Data) (2019). PyProcar: A Python library for electronic structure pre/post-processing [Dataset]. http://doi.org/10.17632/d4rrfy3dy4.1
    Dataset updated
    Dec 18, 2019
    Dataset provided by
    Data Archiving and Networked Services (DANS)
    Authors
    Herath, U (via Mendeley Data)
    Description

    The PyProcar Python package plots the band structure and the Fermi surface as a function of site- and/or s,p,d,f-projected wavefunctions obtained for each k-point in the Brillouin zone and each band in an electronic structure calculation. This can be performed on top of any electronic structure code, as long as the band and projection information is written in the PROCAR format, as done by the VASP and ABINIT codes. PyProcar can be easily modified to read other formats as well. This package is particularly suitable for understanding atomic contributions to the band structure, Fermi surface, spin texture, etc. PyProcar can be conveniently used in a command-line mode, where each one of the parameters defines a plot property. In the case of Fermi surfaces, the package is able to plot the surface with colors depending on other properties such as the electron velocity or spin projection. The mesh used to calculate the property does not need to be the same as the one used to obtain the Fermi surface. A file with a specific property evaluated for each k-point in a k-mesh and for each band can be used to project other properties such as the electron-phonon mean free path, Fermi velocity, electron effective mass, etc. Another existing feature is the band unfolding of supercell calculations onto predefined unit cells.
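
    A minimal scripted call might look like the sketch below; this is an assumption about the package's Python API at the time of this release (function names and arguments can differ between PyProcar versions), so consult the PyProcar documentation before use:

    # Hypothetical usage sketch: plot a plain band structure from a VASP run
    # whose PROCAR and OUTCAR files sit in the current directory.
    import pyprocar

    pyprocar.bandsplot('PROCAR', outcar='OUTCAR', mode='plain', elimit=[-5, 5])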

  7. Data from: Computational 3D resolution enhancement for optical coherence...

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jul 12, 2024
    Cite
    Jeroen Kalkman (2024). Computational 3D resolution enhancement for optical coherence tomography with a narrowband visible light source [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7870794
    Dataset updated
    Jul 12, 2024
    Dataset provided by
    George-Othon Glentis
    Jos de Wit
    Jeroen Kalkman
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This repository contains the code and data underlying the publication "Computational 3D resolution enhancement for optical coherence tomography with a narrowband visible light source" in Biomedical Optics Express 14, 3532-3554 (2023) (doi.org/10.1364/BOE.487345).

    The reader is free to use the scripts and data in this repository, as long as the manuscript is correctly cited in their work. For further questions, please contact the corresponding author.

    Description of the code and datasets

    Table 1 describes all the Matlab and Python scripts in this repository. Table 2 describes the datasets. The input datasets are the phase-corrected datasets, as the raw data is large in size and phase correction using a coverslip as reference is rather straightforward. Processed datasets are also added to the repository to allow running only a limited number of scripts, or to obtain, for example, the aberration-corrected data without the need to use Python. Note that the simulation input data (input_simulations_pointscatters_SLDshape_98zf_noise75.mat) is generated with random noise, so if this file is overwritten the results may vary slightly. The aberration correction is also done with random apertures, so the processed aberration-corrected data (exp_pointscat_image_MIAA_ISAM_CAO.mat and exp_leaf_image_MIAA_ISAM_CAO.mat) will also change slightly if the aberration correction script is run anew. The current processed datasets are used as the basis for the figures in the publication. For details on the implementation we refer to the publication.

    Table 1: The Matlab and Python scripts with their description

    • MIAA_ISAM_processing.m - This script performs the DFT, RFIAA and MIAA processing of the phase-corrected data that can be loaded from the datasets. Afterwards it also applies ISAM to the DFT and MIAA data and plots the results in a figure (via the scripts plot_figure3, plot_figure5 and plot_simulationdatafigure).
    • resolution_analysis_figure4.m - This script loads the data from the point scatterers (absolute amplitude data), finds the point scatterers and fits them to obtain the resolution data. Finally it plots Figure 4 of the publication.
    • fiaa_oct_c1.m, oct_iaa_c1.m, rec_fiaa_oct_c1.m, rfiaa_oct_c1.m - These four functions are used to apply fast IAA and MIAA. See the script MIAA_ISAM_processing.m for their usage.
    • viridis.m, morgenstemning.m - These scripts define the colormaps for the figures.
    • plot_figure3.m, plot_figure5.m, plot_simulationdatafigure.m - These scripts plot Figures 3 and 5 and a figure with simulation data. They are executed at the end of the script MIAA_ISAM_processing.m.
    • computational_adaptive_optics_script.py - Python script that applies computational adaptive optics to obtain the data for Figure 6 of the manuscript.
    • zernike_functions2.py - Python script that gives the values and Cartesian derivatives of the Zernike polynomials.
    • figure6_ComputationalAdaptiveOptics.m - Script that loads the CAO data that was saved in Python, analyzes the resolution, and plots Figure 6.
    • OCTsimulations_3D_script2.py - Python script that simulates OCT data, adds noise and saves it as a .mat file for use in the Matlab script above.
    • OCTsimulations2.py - Python module that contains a class that can be used to simulate 3D OCT datasets based on a Gaussian beam.
    • Matlab toolbox DIPimage 2.9.zip - DIPimage is used in the scripts. The toolbox can be downloaded online or this zip can be used.
    
    Table 2: The datasets in this Zenodo repository

    • input_leafdisc_phasecorrected.mat - Phase-corrected input image of the leaf disc (used in Figure 5).
    • input_TiO2gelatin_004_phasecorrected.mat - Phase-corrected input image of the TiO2-in-gelatin sample.
    • input_simulations_pointscatters_SLDshape_98zf_noise75.mat - Input simulation data that, once processed, is used in Figure 4.
    • exp_pointscat_image_DFT.mat, exp_pointscat_image_DFT_ISAM.mat, exp_pointscat_image_RFIAA.mat, exp_pointscat_image_MIAA_ISAM.mat, exp_pointscat_image_MIAA_ISAM_CAO.mat - Processed experimental amplitude data for the TiO2 point-scattering sample with, respectively, DFT, DFT+ISAM, RFIAA, MIAA+ISAM and MIAA+ISAM+CAO. These datasets are used for fitting in Figure 4 (except for CAO); MIAA_ISAM and MIAA_ISAM_CAO are used for Figure 6.
    • simu_pointscat_image_DFT.mat, simu_pointscat_image_RFIAA.mat, simu_pointscat_image_DFT_ISAM.mat, simu_pointscat_image_MIAA_ISAM.mat - Processed amplitude data from the simulation dataset, used in the script for Figure 4 for the resolution analysis.
    • exp_leaf_image_MIAA_ISAM.mat, exp_leaf_image_MIAA_ISAM_CAO.mat - Processed amplitude data from the leaf sample, with and without aberration correction, used to produce Figure 6.
    • exp_leaf_zernike_coefficients_CAO_normal_wmaf.mat, exp_pointscat_zernike_coefficients_CAO_normal_wmaf.mat - Estimated Zernike coefficients and their weighted moving average, used for the computational aberration correction. Some of this data is plotted in Figure 6 of the manuscript.
    • input_zernike_modes.mat - The reference Zernike modes corresponding to the data that is loaded, used to give the modes the proper names.
    • exp_pointscat_MIAA_ISAM_complex.mat, exp_leaf_MIAA_ISAM_complex - Complex MIAA+ISAM processed data that is used as input for the computational aberration correction.
    
  8. AIMO-24: Model (openai-community/gpt2-large)

    • kaggle.com
    zip
    Updated Apr 7, 2024
    Cite
    Dinh Thoai Tran @ randrise.com (2024). AIMO-24: Model (openai-community/gpt2-large) [Dataset]. https://www.kaggle.com/datasets/dinhttrandrise/aimo-24-model-openai-community-gpt2-large
    Dataset updated
    Apr 7, 2024
    Authors
    Dinh Thoai Tran @ randrise.com
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    language: en

    license: mit

    GPT-2 Large


    Model Details

    Model Description: GPT-2 Large is the 774M-parameter version of GPT-2, a transformer-based language model created and released by OpenAI. It is pretrained on English text using a causal language modeling (CLM) objective.

    How to Get Started with the Model

    Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:

    >>> from transformers import pipeline, set_seed
    >>> generator = pipeline('text-generation', model='gpt2-large')
    >>> set_seed(42)
    >>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
    
    [{'generated_text': "Hello, I'm a language model, I can do language modeling. In fact, this is one of the reasons I use languages. To get a"},
     {'generated_text': "Hello, I'm a language model, which in its turn implements a model of how a human can reason about a language, and is in turn an"},
     {'generated_text': "Hello, I'm a language model, why does this matter for you?
    
    When I hear new languages, I tend to start thinking in terms"},
     {'generated_text': "Hello, I'm a language model, a functional language...
    
    I don't need to know anything else. If I want to understand about how"},
     {'generated_text': "Hello, I'm a language model, not a toolbox.
    
    In a nutshell, a language model is a set of attributes that define how"}]
    

    Here is how to use this model to get the features of a given text in PyTorch:

    from transformers import GPT2Tokenizer, GPT2Model
    tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large')
    model = GPT2Model.from_pretrained('gpt2-large')
    text = "Replace me by any text you'd like."
    encoded_input = tokenizer(text, return_tensors='pt')
    output = model(**encoded_input)
    

    and in TensorFlow:

    from transformers import GPT2Tokenizer, TFGPT2Model
    tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large')
    model = TFGPT2Model.from_pretrained('gpt2-large')
    text = "Replace me by any text you'd like."
    encoded_input = tokenizer(text, return_tensors='tf')
    output = model(encoded_input)
    

    Uses

    Direct Use

    In their model card about GPT-2, OpenAI wrote:

    The primary intended users of these models are AI researchers and practitioners.

    We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.

    Downstream Use

    In their model card about GPT-2, OpenAI wrote:

    Here are some secondary use cases we believe are likely:

    • Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
    • Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
    • Entertainment: Creation of games, chat bots, and amusing generations.

    Misuse and Out-of-scope Use

    In their model card about GPT-2, OpenAI wrote:

    Because large-scale language models like GPT-2 ...

