4 datasets found
  1. Replication Data for: A Rydberg atom based system for benchmarking mmWave automotive radar chips

    • search.dataone.org
    Updated Oct 29, 2025
    Cite
    Borówka, Sebastian; Krokosz, Wiktor; Mazelanik, Mateusz; Wasilewski, Wojciech; Parniak, Michał (2025). Replication Data for: A Rydberg atom based system for benchmarking mmWave automotive radar chips [Dataset]. http://doi.org/10.7910/DVN/OYUNJ1
    Dataset updated
    Oct 29, 2025
    Dataset provided by
    Harvard Dataverse
    Authors
    Borówka, Sebastian; Krokosz, Wiktor; Mazelanik, Mateusz; Wasilewski, Wojciech; Parniak, Michał
    Description

    Simulation Data

    The waveplate.hdf5 file stores the results of the FDTD simulation visualized in Fig. 3 b)-d). The simulation was performed using the Tidy3D Python library, whose methods are also used for data visualization. The following snippet can be used to visualize the data:

        import tidy3d as td
        import matplotlib.pyplot as plt

        sim_data: td.SimulationData = td.SimulationData.from_file("waveplate.hdf5")
        fig, axs = plt.subplots(1, 2, tight_layout=True, figsize=(12, 5))
        for fn, ax in zip(("Ex", "Ey"), axs):
            sim_data.plot_field("field_xz", field_name=fn, val="abs^2", ax=ax).set_aspect(1 / 10)
            ax.set_xlabel("x [$\mu$m]")
            ax.set_ylabel("z [$\mu$m]")
        fig.show()

    Measurement Data

    Signal data used for plotting Figs. 4-6. The data is stored in NetCDF, a self-describing data format that is easy to manipulate using the Xarray Python library, specifically by calling xarray.open_dataset(). Three datasets are provided, structured as follows:

    • electric_fields.nc contains the data displayed in Fig. 4. It has three data variables: the signals themselves, plus the estimated Rabi frequencies and electric fields. The freq dimension is the x-axis and holds coordinates for the probe field detuning in MHz. The n dimension labels the different configurations of applied electric field, the 0th having no EHF field.
    • detune.nc contains the data displayed in Fig. 6. It has two data variables: the signals themselves and the estimated peak separations, multiplied by the coupling factor. The freq dimension is the same as above, while the detune dimension labels the different EHF field detunings, from -100 to 100 MHz in steps of 10 MHz.
    • waveplates.nc contains the data displayed in Fig. 5: estimated Rabi frequencies calculated for different waveplate positions. The angles are stored in radians; both quarter- and half-waveplate data are provided.

    Usage examples

    Opening the datasets (numpy is imported here because the Fig. 6 inset example below uses it):

        import matplotlib.pyplot as plt
        import numpy as np
        import xarray as xr

        electric_fields_ds = xr.open_dataset("data/electric_fields.nc")
        detuned_ds = xr.open_dataset("data/detune.nc")
        waveplates_ds = xr.open_dataset("data/waveplates.nc")
        sigmas_da = xr.open_dataarray("data/sigmas.nc")
        peak_heights_da = xr.open_dataarray("data/peak_heights.nc")

    Plotting the Fig. 4 signals and printing the estimated parameters:

        fig, ax = plt.subplots()
        electric_fields_ds["signals"].plot.line(x="freq", hue="n", ax=ax)
        print(f"Rabi frequencies [Hz]: {electric_fields_ds['rabi_freqs'].values}")
        print(f"Electric fields [V/m]: {electric_fields_ds['electric_fields'].values}")
        fig.show()

    Plotting the Fig. 5 data:

        (waveplates_ds["rabi_freqs"] ** 2).plot.scatter(x="angle", col="waveplate")

    Plotting the Fig. 6 signals for chosen detunings:

        fig, ax = plt.subplots()
        detuned_ds["signals"].sel(detune=[-100, -70, -40, 40, 70, 100]).plot.line(
            x="freq", hue="detune", ax=ax
        )
        fig.show()

    Plotting the Fig. 6 inset plot:

        fig, ax = plt.subplots()
        detuned_ds["separations"].plot.scatter(x="detune", ax=ax)
        ax.plot(
            detuned_ds.detune,
            np.sqrt(detuned_ds.detune**2 + detuned_ds["separations"].sel(detune=0) ** 2),
        )
        fig.show()

    Plotting the Fig. 7 calculated peak widths:

        sigmas_da.plot.scatter()

    Plotting the Fig. 8 calculated detuned smaller peak heights:

        peak_heights_da.plot.scatter()
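    Since the structure described above is what xarray reports when a Dataset is printed, a quick first step after opening any of the files is simply printing it. The snippet below is an illustrative sketch only: it builds a small synthetic Dataset mirroring the documented layout of electric_fields.nc (dimensions freq and n, variables signals, rabi_freqs, electric_fields); the sizes and values are placeholders, not the published data.

```python
import numpy as np
import xarray as xr

# Placeholder coordinates mimicking the documented layout.
freq = np.linspace(-20.0, 20.0, 5)   # probe field detuning [MHz]
n = np.arange(3)                     # field configurations (0 = no EHF field)

ds = xr.Dataset(
    {
        "signals": (("n", "freq"), np.zeros((n.size, freq.size))),
        "rabi_freqs": ("n", np.zeros(n.size)),
        "electric_fields": ("n", np.zeros(n.size)),
    },
    coords={"freq": freq, "n": n},
)
print(ds)  # xarray lists dimensions, coordinates and data variables
```

    With the real file, replacing the synthetic construction by xr.open_dataset("data/electric_fields.nc") yields the same kind of summary.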

  2. Data product and code for: Spatiotemporal Distribution of Dissolved Inorganic Carbon in the Global Ocean Interior - Reconstructed through Machine Learning

    • zenodo.org
    nc, zip
    Updated Dec 30, 2024
    Cite
    Tobias Ehmen; Neill Mackay; Andrew Watson (2024). Data product and code for: Spatiotemporal Distribution of Dissolved Inorganic Carbon in the Global Ocean Interior - Reconstructed through Machine Learning [Dataset]. http://doi.org/10.5281/zenodo.14575969
    Available download formats: nc, zip
    Dataset updated
    Dec 30, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Tobias Ehmen; Neill Mackay; Andrew Watson
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data product and code for: Ehmen et al.: Spatiotemporal Distribution of Dissolved Inorganic Carbon in the Global Ocean Interior - Reconstructed through Machine Learning

    Note that, due to the data limit on Zenodo, only a compressed version of the ensemble mean is uploaded here (compressed_DIC_mean_15fold_ensemble_aveRMSE7.46_0.15TTcasts_1990-2023.nc). Individual ensemble members can be generated from the weight and scaler files found in weights_and_scalers_DIC_paper.zip together with the code "ResNet_DIC_loading_past_prediction_2024-12-28.py" (see description below).

    EN4_thickness_GEBCO.nc contains the scaling factors used in "plot_carbon_inventory_for_ensemble_2024-01-27.py" (see description below).
    DIC_paper_code_Ehmen_et_al.zip contains the Python code used to generate products and figures.

    Prerequisites: Python with the modules tensorflow, shap, xarray, pandas and scipy installed. Plots additionally use matplotlib, cartopy, seaborn, statsmodels, gsw and cmocean.

    The main scripts used to generate reconstructions are “ResNet_DIC_2024-12-28.py” (for new training runs) and “ResNet_DIC_loading_past_prediction_2024-12-28.py” (for already trained past weight and scaler files). Usage:

    • Assign the correct directories in the function "create_directories" according to your own system. You will not need the same if-statements for the authors' individual platforms and computers.
    • Download the most recent version of GLODAP and store it in the directory chosen in "create_directories". Check that the filename matches the one used in "import_GLODAP_dataset". Unless the GLODAP creators change their column naming scheme, newer versions can be used instead of GLODAPv2.2023.
    • Download the HOT, BATS and Drake Passage time series and ensure the filenames match those in "import_time_series_data". Store them in the time series directory chosen in "create_directories". This step is optional; the time series prediction can be commented out.
    • Download EN4 analysis files for the years you want and store them in the EN4 analysis directory chosen in "create_directories". For the reconstruction to be created from all available EN4 analysis files, the variable prediction_to_file needs to be True; otherwise only a single time slice will be predicted (but not saved) for testing and plotting.
    • If you want to generate reconstructions from pre-trained models, make sure the "scalers" and "weight_files" subdirectories are correctly stored in the "training" directory defined in "create_directories".
    • Store the synthetic dataset of ECCO-Darwin values at GLODAP locations in the directory chosen in "create_directories". For predicting the full model fields, ECCO-Darwin needs to be in a csv-style format (for use in pandas dataframes), i.e. the multi-dimensional data needs to be flattened. Store these altered csv-style files in the directory chosen in "create_directories".
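
    The flattening step in the last bullet can be sketched as follows. This is a hypothetical illustration, not the authors' code: the field name, coordinates and output filename are invented, and the real ECCO-Darwin fields are much larger.

```python
import numpy as np
import xarray as xr

# Invented stand-in for a gridded ECCO-Darwin field on (time, lat, lon).
da = xr.DataArray(
    np.arange(24.0).reshape(2, 3, 4),
    dims=("time", "lat", "lon"),
    coords={
        "time": [2000, 2001],
        "lat": [-10.0, 0.0, 10.0],
        "lon": [0.0, 90.0, 180.0, 270.0],
    },
    name="DIC",
)

# Flatten to one row per (time, lat, lon) grid point -- the csv-style
# table that pandas-based training code can consume directly.
df = da.to_dataframe().reset_index()
df.to_csv("ecco_darwin_flat.csv", index=False)  # hypothetical filename
```

    Each row of the resulting table carries its own coordinates, which is what makes the flattened file usable as a pandas dataframe.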

    Once a reconstruction has been generated the following scripts found in the subdirectory “working_with_finished_reconstructions” can be used:

    • ensemble_create_mean_and_std_2023-11-27.py: creates an ensemble mean from ideally 15 ensemble members (the number can be adjusted; if fewer reconstruction files are found, it is reduced automatically). For DIC it also calculates the uncertainty following the method of Keppler et al. 2023.
    • plot_carbon_inventory_for_ensemble_2024-01-27.py: plots the carbon inventory change for DIC from both the ensemble mean and the individual ensemble members. The most important settings are the defaults. Other options include plotting the seasonal change; the remaining options are not supported in this version, as they require additional files not supplied here.
    • depth_slices_and_zonal_means_full_prediction_2024-07-05.py: creates several world maps for individual depths, plus zonal means for the Indian, Atlantic and Pacific Oceans.
    • Hovmoeller_plots_from_predictions_2024-05-02.py: generates simplified Hovmöller plots from individual reconstructions.
    • DIC_comparison_with_other_products_2024-06-27: interpolates and compares this product with climatologies and products from other studies (these need to be downloaded first). Products can be excluded by removing them from the list "files_to_compare".
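
    The core of the ensemble-averaging step in the first script can be sketched with xarray. This is a minimal illustration under invented assumptions (three tiny synthetic members on a 2x2 grid), not the supplied script, which additionally handles file discovery and the Keppler et al. 2023 uncertainty method.

```python
import numpy as np
import xarray as xr

# Three synthetic "reconstruction" members with constant values 0, 1, 2.
members = [
    xr.DataArray(np.full((2, 2), float(i)), dims=("lat", "lon"), name="DIC")
    for i in range(3)
]

# Stack the members along a new "member" dimension, then reduce over it.
ens = xr.concat(members, dim="member")
ens_mean = ens.mean("member")   # ensemble mean field
ens_std = ens.std("member")     # spread across members
```

    Reducing over an explicit "member" dimension keeps the grid dimensions intact, so the mean and spread are fields on the same lat/lon grid as each member.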
  3. Data and Code for "Engineering Majorana bound states in coupled quantum dots in a two-dimensional electron gas"

    • zenodo.org
    zip
    Updated Nov 7, 2023
    Cite
    Sebastiaan Laurens Daniel ten Haaf (2023). Data and Code for "Engineering Majorana bound states in coupled quantum dots in a two-dimensional electron gas" [Dataset]. http://doi.org/10.5281/zenodo.10077539
    Available download formats: zip
    Dataset updated
    Nov 7, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Sebastiaan Laurens Daniel ten Haaf
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This folder contains the raw data and code used to generate the plots for the paper Engineering Majorana bound states in coupled quantum dots in a two-dimensional electron gas.

    To run the Jupyter notebooks, install Anaconda and execute:

    conda env create -f environment.yml

    followed by:

    conda activate 2DEG_Kitaev

    Finally,

    jupyter notebook

    to launch the notebooks.

    Data is stored in the QCoDeS '.db' format. Datasets are converted to the xarray data format in the plotting functions before being processed.

  4. Data and code for "Singlet and triplet Cooper pair splitting in hybrid superconducting nanowires"

    • data.niaid.nih.gov
    Updated Nov 23, 2022
    Cite
    Guanzhong Wang (2022). Data and code for "Singlet and triplet Cooper pair splitting in hybrid superconducting nanowires" [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_5774827
    Dataset updated
    Nov 23, 2022
    Dataset provided by
    TU Delft
    Authors
    Guanzhong Wang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This folder contains the raw data and code used to generate the plots for the paper Singlet and triplet Cooper pair splitting in hybrid superconducting nanowires (arXiv: 2205.03458).

    To run the Jupyter notebooks, install Anaconda and execute:

    conda env create -f cps-exp.yml

    followed by:

    conda activate cps-exp

    for the experiment data, or

    conda env create -f cps-theory.yml

    and similarly

    conda activate cps-theory

    for the theory plots. Finally,

    jupyter notebook

    to launch the corresponding notebook.

    Raw data are stored in netCDF (.nc) format. The files are directly exported by the data acquisition package QCoDeS and can be read as an xarray Dataset.
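
    The last point can be sketched in a few lines. This is an illustrative roundtrip with a synthetic file standing in for the exported raw data: the variable names ("current", "bias") and filename are invented here, not taken from the dataset.

```python
import numpy as np
import xarray as xr

# Build a tiny synthetic measurement and write it as netCDF, mimicking
# what a data acquisition export produces.
ds = xr.Dataset(
    {"current": ("bias", np.linspace(0.0, 1.0, 5))},
    coords={"bias": np.linspace(-1.0, 1.0, 5)},
)
ds.to_netcdf("example_measurement.nc")

# Reading a .nc file back gives an xarray Dataset ready for analysis.
reopened = xr.open_dataset("example_measurement.nc")
```

    The same xr.open_dataset call works on the .nc files in this archive, with the actual variable names reported when the Dataset is printed.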
