100+ datasets found
  1. First IMF Final Practice with R

    • kaggle.com
    zip
    Updated Nov 29, 2023
    Cite
    Jose Carbonell Capo (2023). First IMF Final Practice with R [Dataset]. https://www.kaggle.com/datasets/pepcarbonell/first-imf-final-practice-with-r/code
    Explore at:
Available download formats: zip (486316 bytes)
    Dataset updated
    Nov 29, 2023
    Authors
    Jose Carbonell Capo
    Description

    Dataset

    This dataset was created by Jose Carbonell Capo

    Contents

  2. Key to all columns

    • datasetcatalog.nlm.nih.gov
    • springernature.figshare.com
    Updated Aug 15, 2022
    Cite
    Douglas, Margaret R.; Soba, Sara; Lonsdorf, Eric V.; Kammerer, Melanie; Grozinger, Christina; Baisley, Paige (2022). Key to all columns [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0000427630
    Explore at:
    Dataset updated
    Aug 15, 2022
    Authors
    Douglas, Margaret R.; Soba, Sara; Lonsdorf, Eric V.; Kammerer, Melanie; Grozinger, Christina; Baisley, Paige
    Description

    This spreadsheet contains an explanation of what the columns mean in all of the datasets. The titles of tabs correspond to the shortened filenames - each file has one tab.

  3. quakes

    • rstudio-pubs-static.s3.amazonaws.com
    • rpubs.com
    Updated Dec 31, 2021
    Cite
    (2021). quakes [Dataset]. https://rstudio-pubs-static.s3.amazonaws.com/852040_3106898049c64edda3a2b49f893a0c41.html
    Explore at:
    Dataset updated
    Dec 31, 2021
    Variables measured
    lat, mag, long, depth, stations
    Description

    The dataset has N=1000 rows and 5 columns. 1000 rows have no missing values on any column.

    Table of variables

    This table contains variable names, labels, and number of missing values. See the complete codebook for more.

    name      label   n_missing
    lat       NA      0
    long      NA      0
    depth     NA      0
    mag       NA      0
    stations  NA      0

    Note

    This dataset was automatically described using the codebook R package (version 0.9.2).
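
    As a hedged illustration (not part of the catalog entry), the same variable and missing-value summary can be reproduced in R from the built-in quakes data:

    # quakes ships with base R (package 'datasets')
    data(quakes)
    str(quakes)             # 1000 obs. of 5 variables: lat, long, depth, mag, stations
    colSums(is.na(quakes))  # missing values per column (all zero, as stated above)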

  4. Reddit: /r/news

    • kaggle.com
    zip
    Updated Dec 17, 2022
    Cite
    The Devastator (2022). Reddit: /r/news [Dataset]. https://www.kaggle.com/datasets/thedevastator/uncovering-popularity-and-user-engagement-trends/discussion
    Explore at:
Available download formats: zip (146481 bytes)
    Dataset updated
    Dec 17, 2022
    Authors
    The Devastator
    License

https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Reddit: /r/news

    Exploring Topics, Scores, and Engagement

    By Reddit [source]

    About this dataset

This dataset provides an in-depth look into what communities find important and engaging in the news. With this data, researchers can discover trends related to user engagement and popular topics within subreddits. By examining the “score” and “comms_num” columns, researchers can pinpoint which topics are most liked, discussed, or shared within the various subreddits. Researchers may also gain insight not only into how popular a topic is but into how it grows over time. Additionally, by exploring the body column of the dataset, researchers can understand which types of news stories drive conversation within particular subreddits, providing an opportunity for deeper analysis of a subreddit’s community dynamics.

The dataset includes eight columns: title, score, id, url, comms_num, created, body and timestamp, which can help identify key insights into user engagement among popular subreddits. With this data we may also determine relationships between topics of discussion and their impact on user engagement, allowing a better understanding of issue-based conversations online as well as uncovering emerging trends in online news consumption habits.


    How to use the dataset

    This dataset is useful for those who are looking to gain insight into the popularity and user engagement of specific subreddits. The data includes 8 different columns including title, score, id, url, comms_num, created, body and timestamp. This can provide valuable information about how users view and interact with particular topics across various subreddits.

    In this guide we’ll look at how you can use this dataset to uncover trends in user engagement on topics within specific subreddits as well as measure the overall popularity of these topics within a subreddit.

    1) Analyzing Score: By analyzing the “score” column you can determine which news stories are popular in a particular subreddit and which ones aren't by looking at how many upvotes each story has received. With this data you will be able to determine trends in what types of stories users preferred within a particular subreddit over time.

2) Analyzing Comms_Num: Similarly to the score column, you can analyze the “comms_num” column to see which news stories had more engagement from users by tracking the number of comments received on each post. Knowing these points can provide insight into what types of stories tend to draw more comment activity from users in certain subreddits, whether over a single day or an extended period such as multiple weeks or months.

    3) Analyzing Body: Additionally, by looking at the “body” column for each post, researchers can gain a better understanding of which kinds of topics/news draw attention among specific Reddit communities. With that complete picture, researchers have access not only to data measuring Reddit buzz but also to the topic discussion and comments, helping generate further insights into why certain posts might be popular or receive more comments than others.

Overall, this dataset provides valuable insights about user engagement related to topics trending across subreddits, giving anyone researching such questions an easier way to access those insights in one place.
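
    A rough R sketch of steps 1 and 2 above (assuming the news.csv file listed under Columns loads cleanly with base R):

    news <- read.csv("news.csv", stringsAsFactors = FALSE)
    # stories ranked by upvote score
    head(news[order(-news$score), c("title", "score", "comms_num")], 10)
    # stories ranked by comment activity
    head(news[order(-news$comms_num), c("title", "score", "comms_num")], 10)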

    Research Ideas

    • Grouping news topics within particular subreddits and assessing the overall popularity of those topics in terms of scores/user engagement.
    • Correlating user engagement with certain news topics to understand how they influence discussion or reactions on a subreddit.
    • Examining the potential correlation between score and the actual body content of a given post to assess what types of content are most successful in gaining interest from users and creating positive engagement for posts

    Acknowledgements

    If you use this dataset in your research, please credit the original authors.

    Data Source

    License

    License: CC0 1.0 Universal (CC0 1.0) - Public Domain Dedication No Copyright - You can copy, modify, distribute and perform the work, even for commercial purposes, all without asking permission. See Other Information.

    Columns

    File: news.csv | Column name | Description ...

  5. Data from: Data corresponding to the paper "Traveling Bubbles and Vortex...

    • portalcientifico.uvigo.gal
    Updated 2025
    Cite
    Michinel, Humberto; Michinel, Humberto (2025). Data corresponding to the paper "Traveling Bubbles and Vortex Pairs within Symmetric 2D Quantum Droplets" [Dataset]. https://portalcientifico.uvigo.gal/documentos/682afb714c44bf76b287f3ae
    Explore at:
    Dataset updated
    2025
    Authors
    Michinel, Humberto; Michinel, Humberto
    Description

    Datasets generated for the Physical Review E article with title: "Traveling Bubbles and Vortex Pairs within Symmetric 2D Quantum Droplets" by Paredes, Guerra-Carmenate, Salgueiro, Tommasini and Michinel. In particular, we provide the data needed to generate the figures in the publication, which illustrate the numerical results found during this work.

    We also include python code in the file "plot_from_data_for_repository.py" that generates a version of the figures of the paper from .pt data sets. Data can be read and plots can be produced with a simple modification of the python code.

    Figure 1: Data are in fig1.csv

The csv file has four columns separated by commas. The four columns correspond to values of r (first column) and the function psi(r) for the three cases depicted in the figure (columns 2-4).
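
    A minimal R sketch for reading and plotting fig1.csv as described (assuming the file has no header row):

    d <- read.csv("fig1.csv", header = FALSE)   # columns: r, then psi(r) for the three cases
    matplot(d[[1]], as.matrix(d[, 2:4]), type = "l",
            xlab = "r", ylab = "psi(r)")        # one curve per case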

    Figures 2 and 4: Data are in data_figs_2_and_4.pt

    This is a data file generated with the torch module of python. It includes eight torch tensors for the spatial grid "x" and "y" and for the complex values of psi for the six eigenstates depicted in figures 2 and 4 ("psia", "psib", "psic", "psid", "psie", "psif"). Notice that figure 2 is the square of the modulus and figure 4 is the argument, both are obtained from the same data sets.

    Figure 3: Data are in fig3.csv

The csv file has three columns separated by commas. The three columns correspond to values of momentum p (first column), energy E (second column) and velocity U (third column).

    Figure 5: Data are in fig5.csv

The csv file has three columns separated by commas. The three columns correspond to values of momentum p (first column), the minimum value of |psi|^2 (second column) and the value of |psi|^2 at the center (third column).

    Figure 6: Data are in data_fig_6.pt

    This is a data file generated with the torch module of python. It includes six torch tensors for the spatial grid "x" and "y" and for the complex values of psi for the four instants of time depicted in figure 6 ("psia", "psib", "psic", "psid").

    Figure 7: Data are in data_fig_7.pt

    This is a data file generated with the torch module of python. It includes six torch tensors for the spatial grid "x" and "y" and for the complex values of psi for the four instants of time depicted in figure 7 ("psia", "psib", "psic", "psid").

    Figures 8 and 10: Data are in data_figs_8_and_10.pt

    This is a data file generated with the torch module of python. It includes eight torch tensors for the spatial grid "x" and "y" and for the complex values of psi for the six eigenstates depicted in figures 8 and 10 ("psia", "psib", "psic", "psid", "psie", "psif"). Notice that figure 8 is the square of the modulus and figure 10 is the argument, both are obtained from the same data sets.

    Figure 9: Data are in fig9.csv

The csv file has two columns separated by commas. The two columns correspond to values of momentum p (first column) and energy (second column).

    Figure 11: Data are in data_fig_11.pt

    This is a data file generated with the torch module of python. It includes ten torch tensors for the spatial grid "x" and "y" and for the complex values of psi for the two cases, four instants of time for each case, depicted in figure 11 ("psia", "psib", "psic", "psid", "psie", "psif", "psig", "psih").

    Figure 12: Data are in data_fig_12.pt

    This is a data file generated with the torch module of python. It includes eight torch tensors for the spatial grid "x" and "y" and for the complex values of psi for the six instants of time depicted in figure 12 ("psia", "psib", "psic", "psid", "psie", "psif").

    Figure 13: Data are in data_fig_13.pt

    This is a data file generated with the torch module of python. It includes ten torch tensors for the spatial grid "x" and "y" and for the complex values of psi for the eight instants of time depicted in figure 13 ("psia", "psib", "psic", "psid", "psie", "psif", "psig", "psih").

  6. Video Plankton Recorder data (formatted with taxa displayed in single...

    • bco-dmo.org
    • search.dataone.org
    csv
    Updated Jul 31, 2012
    + more versions
    Cite
    Carin J. Ashjian (2012). Video Plankton Recorder data (formatted with taxa displayed in single column); from R/V Columbus Iselin and R/V Endeavor cruises CI9407, EN259, EN262 in the Gulf of Maine and Georges Bank from 1994-1995 [Dataset]. https://www.bco-dmo.org/dataset/3685
    Explore at:
Available download formats: csv (370.26 MB)
    Dataset updated
    Jul 31, 2012
    Dataset provided by
    Biological and Chemical Data Management Office
    Authors
    Carin J. Ashjian
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Gulf of Maine
    Variables measured
    lat, lon, sal, temp, year, fluor, press, taxon, flvolt, abund_L, and 9 more
    Measurement technique
    Video Plankton Recorder
    Description

This dataset includes ALL the abundance values, zero and non-zero. Taxonomic groups are displayed in the 'taxon' column, rather than in separate columns, with abundances in the 'abund_L' column. For the original presentation of the data, see VPR_ashjian_orig. For a version of the data with only non-zero data, see VPR_ashjian_nonzero. In the 'nonzero' dataset, values of 0 in the abund_L column (taxon abundance) have been removed.
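
    A hedged R sketch of recreating the 'nonzero' view from this file (the downloaded file name is assumed):

    vpr <- read.csv("bcodmo_dataset_3685.csv", stringsAsFactors = FALSE)  # file name assumed
    vpr_nonzero <- subset(vpr, abund_L > 0)   # mirrors the VPR_ashjian_nonzero version
    head(vpr_nonzero[, c("taxon", "abund_L")])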

    Methodology
The following information was extracted from C.J. Ashjian et al., Deep-Sea Research II 48 (2001) 245-282. An in-depth discussion of the data and sampling methods can be found there.

    The Video Plankton Recorder was towed at 2 m/s, collecting data from the surface to the bottom (towyo). The VPR was equipped with 2-4 cameras, temperature and conductivity probes, fluorometer and transmissometer. Environmental data was collected at 0.25 Hz (CI9407) or 0.5 Hz (EN259, EN262). Video images were recorded at 60 fields per second (fps).

    Video tapes were analyzed for plankton abundances using a semi-automated method discussed in Davis, C.S. et al., Deep-Sea Research II 43 (1996) 1946-1970. In-focus images were extracted from the video tapes and identified by hand to particle type, taxon, or species. Plankton and particle observations were merged with environmental and navigational data by binning the observations for each category into the time intervals at which the environmental data were collected (again see above Davis citation). Concentrations were calculated utilizing the total volume (liters) imaged during that period. For less-abundant categories, usually only a single organism was observed during each time interval so that the resulting concentrations are close to presence or absence data rather than covering a range of values.

  7. Supplement 1. R and WinBUGS code for fitting the model of species occurrence...

    • figshare.com
    • wiley.figshare.com
    html
    Updated Aug 5, 2016
    Cite
    Robert M. Dorazio; J. Andrew Royle; Bo Söderström; Anders Glimskär (2016). Supplement 1. R and WinBUGS code for fitting the model of species occurrence and detection and example data sets. [Dataset]. http://doi.org/10.6084/m9.figshare.3526013.v1
    Explore at:
Available download formats: html
    Dataset updated
    Aug 5, 2016
    Dataset provided by
    Wiley
    Authors
    Robert M. Dorazio; J. Andrew Royle; Bo Söderström; Anders Glimskär
    License

CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

File List

    breedingBirdData.txt
    butterflyData.txt
    ExampleSession.txt
    MultiSpeciesSiteOcc.R
    MultiSpeciesSiteOccModel.txt
    CumNumSpeciesPresent.R

Description

    “breedingBirdData.txt” is an example data set in ASCII comma-delimited format. Each row corresponds to data for a single species observed in the avian survey. The 50 columns correspond to 50 sample locations.

    “butterflyData.txt” is an example data set in ASCII comma-delimited format. Each row corresponds to data for a single species observed in the butterfly survey. The 20 columns correspond to 20 sample locations.

    “ExampleSession.txt” illustrates an example session in R where the butterfly data are read into memory and then analyzed using the R and WinBUGS code.

    “MultiSpeciesSiteOcc.R” defines an R function for fitting the model of species occurrence and detection to data. This function specifies a Gibbs sampler wherein 55000 random draws are computed for each of 4 different Markov chains. These computations may require nontrivial execution times. For example, analysis of the avian data required about 4 hours using a computer equipped with a 3.20 GHz Pentium 4 processor. Analysis of the butterfly data required about 1.5 hours.

    “MultiSpeciesSiteOccModel.txt” contains WinBUGS code for specifying the model of species occurrence and detection.

    “CumNumSpeciesPresent.R” defines an R function for computing a sample of the posterior-predictive distribution of a species-accumulation curve whose abscissa ranges from 1 to nsites sites.
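
    A hedged R sketch of how these files might be used together, following the description above (the arguments of MultiSpeciesSiteOcc are not documented here, so the call is only indicated):

    # rows = species, columns = sample locations, comma-delimited, no header (as described)
    butterflies <- as.matrix(read.csv("butterflyData.txt", header = FALSE))
    dim(butterflies)                    # number of species x 20 sites

    source("MultiSpeciesSiteOcc.R")     # defines the model-fitting function
    source("CumNumSpeciesPresent.R")    # defines the species-accumulation function
    # fit <- MultiSpeciesSiteOcc(...)   # arguments omitted; see ExampleSession.txt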

  8. SCOAPE Pandora Column Observations - Dataset - NASA Open Data Portal

    • data.nasa.gov
    Updated Apr 1, 2025
    + more versions
    Cite
    nasa.gov (2025). SCOAPE Pandora Column Observations - Dataset - NASA Open Data Portal [Dataset]. https://data.nasa.gov/dataset/scoape-pandora-column-observations-8c90a
    Explore at:
    Dataset updated
    Apr 1, 2025
    Dataset provided by
NASA (http://nasa.gov/)
    Description

SCOAPE_Pandora_Data is the column NO2 and ozone data collected by Pandora spectrometers during the Satellite Coastal and Oceanic Atmospheric Pollution Experiment (SCOAPE). Pandora instruments were located on the University of Southern Mississippi’s Research Vessel (R/V) Point Sur and at the Louisiana Universities Marine Consortium (LUMCON; Cocodrie, LA). Data collection for this product is complete.

    The Outer Continental Shelf Lands Act (OCSLA) requires the US Department of Interior Bureau of Ocean Energy Management (BOEM) to ensure compliance with the US National Ambient Air Quality Standard (NAAQS) so that Outer Continental Shelf (OCS) oil and natural gas (ONG) exploration, development, and production do not significantly impact the air quality of any US state. In 2017, BOEM and NASA entered into an interagency agreement to begin a study to scope out the feasibility of BOEM personnel using a suite of NASA and non-NASA resources to assess how pollutants from ONG exploration, development, and production activities affect air quality. An important activity of this interagency agreement was SCOAPE, a field deployment that took place in May 2019, that aimed to assess the capability of satellite observations for monitoring offshore air quality. The outcomes of the study are documented in two BOEM reports (Duncan, 2020; Thompson, 2020).

    To address BOEM’s goals, the SCOAPE science team conducted surface-based remote sensing and in-situ measurements, which enabled a systematic assessment of the application of satellite observations, primarily NO2, for monitoring air quality. The SCOAPE field measurements consisted of onshore ground sites, including in the vicinity of LUMCON, as well as those from University of Southern Mississippi’s R/V Point Sur, which cruised in the Gulf of America from 10-18 May 2019. Based on the 2014 and 2017 BOEM emissions inventories as well as daily air quality and meteorological forecasts, the cruise track was designed to sample both areas with large oil drilling platforms and areas with dense small natural gas facilities. The R/V Point Sur was instrumented to carry out both remote sensing and in-situ measurements of NO2 and O3 along with in-situ CH4, CO2, CO, and VOC tracers which allowed detailed characterization of airmass type and emissions. In addition, there were also measurements of multi-wavelength AOD and black carbon as well as planetary boundary layer structure and meteorological variables, including surface temperature, humidity, and winds. A ship-based spectrometer instrument provided remotely-sensed total column amounts of NO2 and O3 for direct comparison with satellite measurements. Ozonesondes and radiosondes were also launched 1-3 times daily from the R/V Point Sur to provide O3 and meteorological vertical profile observations. The ground-based observations, primarily at LUMCON, included spectrometer-measured column NO2 and O3, in-situ NO2, VOCs, and planetary boundary layer structure. A NO2sonde was also mounted on a vehicle with the goal to detect pollution onshore from offshore ONG activities during onshore flow; data were collected along coastal Louisiana from Burns Point Park to Grand Isle to the tip of the Mississippi River delta. The in-situ measurements were reported in ICARTT files or Excel files. The remote sensing data are in either HDF or netCDF files.

  9. The Pizza Problem

    • kaggle.com
    zip
    Updated Feb 8, 2019
    Cite
    Jeremy Jeanne (2019). The Pizza Problem [Dataset]. https://www.kaggle.com/jeremyjeanne/google-hashcode-pizza-training-2019
    Explore at:
Available download formats: zip (178852 bytes)
    Dataset updated
    Feb 8, 2019
    Authors
    Jeremy Jeanne
    License

https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Problem description

    Pizza

The pizza is represented as a rectangular, 2-dimensional grid of R rows and C columns. The cells within the grid are referenced using a pair of 0-based coordinates [r, c], denoting respectively the row and the column of the cell.

    Each cell of the pizza contains either:

    mushroom, represented in the input file as M
    tomato, represented in the input file as T
    

    Slice

    A slice of pizza is a rectangular section of the pizza delimited by two rows and two columns, without holes. The slices we want to cut out must contain at least L cells of each ingredient (that is, at least L cells of mushroom and at least L cells of tomato) and at most H cells of any kind in total - surprising as it is, there is such a thing as too much pizza in one slice. The slices being cut out cannot overlap. The slices being cut do not need to cover the entire pizza.

    Goal

The goal is to cut correct slices out of the pizza maximizing the total number of cells in all slices.

    Input data set

    The input data is provided as a data set file - a plain text file containing exclusively ASCII characters with lines terminated with a single ‘\n’ character at the end of each line (UNIX-style line endings).

    File format

    The file consists of:

    one line containing the following natural numbers separated by single spaces:
    R (1 ≤ R ≤ 1000) is the number of rows
    C (1 ≤ C ≤ 1000) is the number of columns
    L (1 ≤ L ≤ 1000) is the minimum number of each ingredient cells in a slice
    H (1 ≤ H ≤ 1000) is the maximum total number of cells of a slice
    


    R lines describing the rows of the pizza (one after another). Each of these lines contains C characters describing the ingredients in the cells of the row (one cell after another). Each character is either ‘M’ (for mushroom) or ‘T’ (for tomato).

    Example

    3 5 1 6
    TTTTT
    TMMMT
    TTTTT
    

    3 rows, 5 columns, min 1 of each ingredient per slice, max 6 cells per slice

    Example input file.
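
    A small R sketch for parsing an input file in this format (file name assumed):

    lines  <- readLines("pizza.in")                           # file name assumed
    header <- as.integer(strsplit(lines[1], " ")[[1]])
    R <- header[1]; C <- header[2]; L <- header[3]; H <- header[4]
    pizza <- do.call(rbind, strsplit(lines[2:(R + 1)], ""))   # R x C matrix of "M"/"T"
    table(pizza)                                              # counts of mushroom and tomato cells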

    Submissions

    File format

    The file must consist of:

one line containing a single natural number S (0 ≤ S ≤ R × C), representing the total number of slices to be cut,
    S lines describing the slices. Each of these lines must contain the following natural numbers separated by single spaces:
    r1, c1, r2, c2 describe a slice of pizza delimited by the rows r1 and r2 (0 ≤ r1, r2 < R) and the columns c1 and c2 (0 ≤ c1, c2 < C), including the cells of the delimiting rows and columns. The rows (r1 and r2) can be given in any order. The columns (c1 and c2) can be given in any order too.
    

    Example

3
    0 0 2 1
    0 2 2 2
    0 3 2 4
    

    3 slices.

    First slice between rows (0,2) and columns (0,1).
    Second slice between rows (0,2) and columns (2,2).
    Third slice between rows (0,2) and columns (3,4).
    Example submission file.
    

    © Google 2017, All rights reserved.

Slices described in the example submission file are marked in green, orange and purple.

    Validation

    For the solution to be accepted:

    the format of the file must match the description above,
    each cell of the pizza must be included in at most one slice,
    each slice must contain at least L cells of mushroom,
    each slice must contain at least L cells of tomato,
    total area of each slice must be at most H
    

    Scoring

The submission gets a score equal to the total number of cells in all slices. Note that there are multiple data sets representing separate instances of the problem. The final score for your team is the sum of your best scores on the individual data sets.

    Scoring example

    The example submission file given above cuts the slices of 6, 3 and 6 cells, earning 6 + 3 + 6 = 15 points.
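
    The score of a submission can be recomputed with a few lines of R (a sketch; each row holds r1, c1, r2, c2, and the overlap and ingredient checks are omitted):

    slices <- rbind(c(0, 0, 2, 1),
                    c(0, 2, 2, 2),
                    c(0, 3, 2, 4))
    cells <- (abs(slices[, 3] - slices[, 1]) + 1) * (abs(slices[, 4] - slices[, 2]) + 1)
    sum(cells)   # 6 + 3 + 6 = 15 points, matching the example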

  10. Measured Tm values found using R experiments on systems without any abasic...

    • datasetcatalog.nlm.nih.gov
    • plos.figshare.com
    Updated Jun 26, 2015
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Danilowicz, Claudia; Coljee, Vincent; Prentiss, Mara; Peacock-Villada, Alexandra (2015). Measured Tm values found using R experiments on systems without any abasic sites (columns 2–5) and the system with 7 abasic sites dividing the system into 6 groups of 4 bases (columns 6–9). [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0001943467
    Explore at:
    Dataset updated
    Jun 26, 2015
    Authors
    Danilowicz, Claudia; Coljee, Vincent; Prentiss, Mara; Peacock-Villada, Alexandra
    Description

    Measured Tm values found using R experiments on systems without any abasic sites (columns 2–5) and the system with 7 abasic sites dividing the system into 6 groups of 4 bases (columns 6–9).

  11. Numerical code and data for the stellar structure and dynamical instability...

    • datadryad.org
    • search.dataone.org
    • +1more
    zip
    Updated May 10, 2021
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Arun Mathew; Malay K. Nandy (2021). Numerical code and data for the stellar structure and dynamical instability analysis of generalised uncertainty white dwarfs [Dataset]. http://doi.org/10.5061/dryad.dncjsxkzt
    Explore at:
Available download formats: zip
    Dataset updated
    May 10, 2021
    Dataset provided by
    Dryad
    Authors
    Arun Mathew; Malay K. Nandy
    Time period covered
    Apr 28, 2021
    Description

    There is a total of 17 datasets to produce all the Figures in the article. There are mainly two different data files: GUP White Dwarf Mass-Radius (GUPWD_M-R) data and GUP White Dwarf Profile (GUPWD_Profile) data.

    The file GUPWD_M-R gives only the Mass-Radius relation with Radius (km) in the first column and Mass (solar mass) in the second.

    On the other hand GUPWD_Profile provides the complete profile with following columns.

column 1: Dimensionless central Fermi Momentum $\xi_c$
    column 2: Central Density $\rho_c$ (Log10 [$\rho_c$ g cm$^{-3}$])
    column 3: Radius $R$ (km)
    column 4: Mass $M$ (solar mass)
    column 5: Square of fundamental frequency $\omega_0^2$ (sec$^{-2}$)

    =====================================================================================

    Figure 1 (a) gives Mass-Radius (M-R) curves for $\beta_0=10^{42}$, $10^{41}$ and $10^{40}$. The filenames of the corresponding dataset are

    GUPWD_M-R[Beta0=E42].dat GUPWD_M-R[Beta0=E41].dat GUPWD_M-R[Beta0...
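
    A hedged R sketch for plotting one of the Mass-Radius curves listed above (whitespace-delimited columns assumed):

    mr <- read.table("GUPWD_M-R[Beta0=E42].dat", col.names = c("radius_km", "mass_msun"))
    plot(mr$radius_km, mr$mass_msun, type = "l",
         xlab = "Radius (km)", ylab = "Mass (solar mass)")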

  12. Uniform Crime Reporting (UCR) Program Data: Arrests by Age, Sex, and Race,...

    • search.datacite.org
    • doi.org
    • +1more
    Updated 2018
    Cite
    Jacob Kaplan (2018). Uniform Crime Reporting (UCR) Program Data: Arrests by Age, Sex, and Race, 1980-2016 [Dataset]. http://doi.org/10.3886/e102263v5-10021
    Explore at:
    Dataset updated
    2018
    Dataset provided by
Inter-university Consortium for Political and Social Research (https://www.icpsr.umich.edu/web/pages/)
    DataCite (https://www.datacite.org/)
    Authors
    Jacob Kaplan
    Description

Version 5 release notes:
    Removes support for SPSS and Excel data. Changes the crimes that are stored in each file. There are more files now with fewer crimes per file. The files and their included crimes have been updated below.
    Adds in agencies that report 0 months of the year. Adds a column that indicates the number of months reported. This is generated by summing the number of unique months an agency reports data for. Note that this indicates the number of months an agency reported arrests for ANY crime. They may not necessarily report every crime every month. Agencies that did not report a crime will have a value of NA for every arrest column for that crime. Removes data on runaways.
    Version 4 release notes:
    Changes column names from "poss_coke" and "sale_coke" to "poss_heroin_coke" and "sale_heroin_coke" to clearly indicate that these columns include the sale of heroin as well as similar opiates such as morphine, codeine, and opium. Also changes column names for the narcotic columns to indicate that they are only for synthetic narcotics.
    Version 3 release notes:
    Adds data for 2016. Orders rows by year (descending) and ORI.
    Version 2 release notes:
    Fixes bug where Philadelphia Police Department had incorrect FIPS county code.
    The Arrests by Age, Sex, and Race data is an FBI data set that is part of the annual Uniform Crime Reporting (UCR) Program data. This data contains highly granular data on the number of people arrested for a variety of crimes (see below for a full list of included crimes). The data sets here combine data from the years 1980-2015 into a single file. These files are quite large and may take some time to load.
    All the data was downloaded from NACJD as ASCII+SPSS Setup files and read into R using the package asciiSetupReader. All work to clean the data and save it in various file formats was also done in R. For the R code used to clean this data, see here. https://github.com/jacobkap/crime_data. If you have any questions, comments, or suggestions please contact me at jkkaplan6@gmail.com.

I did not make any changes to the data other than the following. When an arrest column has a value of "None/not reported", I change that value to zero. This makes the (possibly incorrect) assumption that these values represent zero crimes reported. The original data does not have a value when the agency reports zero arrests other than "None/not reported." In other words, this data does not differentiate between real zeros and missing values. Some agencies also incorrectly report the following numbers of arrests, which I change to NA: 10000, 20000, 30000, 40000, 50000, 60000, 70000, 80000, 90000, 100000, 99999, 99998.
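
    A hedged sketch of that recoding in R, assuming a yearly file has been read into a data frame called arrests and using a hypothetical arrest-count column named tot_arrests:

    bad_counts <- c(10000, 20000, 30000, 40000, 50000, 60000,
                    70000, 80000, 90000, 100000, 99999, 99998)
    x <- arrests$tot_arrests                  # hypothetical column name
    x[x == "None/not reported"] <- 0          # treat "not reported" as zero, as described
    x <- suppressWarnings(as.numeric(x))
    x[x %in% bad_counts] <- NA                # drop the implausible reported counts
    arrests$tot_arrests <- x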

To reduce file size and make the data more manageable, all of the data is aggregated yearly. All of the data is in agency-year units such that every row indicates an agency in a given year. Columns are crime-arrest category units. For example, if you choose the data set that includes murder, you would have rows for each agency-year and columns with the number of people arrested for murder. The ASR data breaks down arrests by age and gender (e.g. Male aged 15, Male aged 18). They also provide the number of adults or juveniles arrested by race. Because most agencies and years do not report the arrestee's ethnicity (Hispanic or not Hispanic) or juvenile outcomes (e.g. referred to adult court, referred to welfare agency), I do not include these columns.

    To make it easier to merge with other data, I merged this data with the Law Enforcement Agency Identifiers Crosswalk (LEAIC) data. The data from the LEAIC add FIPS (state, county, and place) and agency type/subtype. Please note that some of the FIPS codes have leading zeros and if you open it in Excel it will automatically delete those leading zeros.

    I created 9 arrest categories myself. The categories are:
Total Male Juvenile, Total Female Juvenile, Total Male Adult, Total Female Adult, Total Male, Total Female, Total Juvenile, Total Adult, Total Arrests.

    All of these categories are based on the sums of the sex-age categories (e.g. Male under 10, Female aged 22) rather than using the provided age-race categories (e.g. adult Black, juvenile Asian). As not all agencies report the race data, my method is more accurate. These categories also make up the data in the "simple" version of the data. The "simple" file only includes the above 9 columns as the arrest data (all other columns in the data are just agency identifier columns). Because this "simple" data set needs fewer columns, I include all offenses.

    As the arrest data is very granular, and each category of arrest is its own column, there are dozens of columns per crime. To keep the data somewhat manageable, there are nine different files, eight which contain different crimes and the "simple" file. Each file contains the data for all years. The eight categories each have crimes belonging to a major crime category and do not overlap in crimes other than with the index offenses. Please note that the crime names provided below are not the same as the column names in the data. Due to Stata limiting column names to 32 characters maximum, I have abbreviated the crime names in the data. The files and their included crimes are:

Index Crimes: Murder, Rape, Robbery, Aggravated Assault, Burglary, Theft, Motor Vehicle Theft, Arson
    Alcohol Crimes: DUI, Drunkenness, Liquor
    Drug Crimes: Total Drug, Total Drug Sales, Total Drug Possession, Cannabis Possession, Cannabis Sales, Heroin or Cocaine Possession, Heroin or Cocaine Sales, Other Drug Possession, Other Drug Sales, Synthetic Narcotic Possession, Synthetic Narcotic Sales
    Grey Collar and Property Crimes: Forgery, Fraud, Stolen Property, Financial Crimes, Embezzlement, Total Gambling, Other Gambling, Bookmaking, Numbers Lottery
    Sex or Family Crimes: Offenses Against the Family and Children, Other Sex Offenses, Prostitution, Rape
    Violent Crimes: Aggravated Assault, Murder, Negligent Manslaughter, Robbery, Weapon Offenses
    Other Crimes: Curfew, Disorderly Conduct, Other Non-traffic, Suspicion, Vandalism, Vagrancy
    Simple: This data set has every crime and only the arrest categories that I created (see above).
    If you have any questions, comments, or suggestions please contact me at jkkaplan6@gmail.com.

  13. EM2040 Water Column Sonar Data Collected During H13177

    • catalog.data.gov
    • datasets.ai
    • +2more
    Updated Sep 17, 2021
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    NOAA National Centers for Environmental Information (Point of Contact) (2021). EM2040 Water Column Sonar Data Collected During H13177 [Dataset]. https://catalog.data.gov/dataset/em2040-water-column-sonar-data-collected-during-h13177
    Explore at:
    Dataset updated
    Sep 17, 2021
    Dataset provided by
National Centers for Environmental Information (https://www.ncei.noaa.gov/)
    National Oceanic and Atmospheric Administration (http://www.noaa.gov/)
    Description

    Sea Scout Hydrographic Survey, H13177 (EM2040). Mainline coverage within the survey area consisted of Complete Coverage (100% side scan sonar with concurrent multibeam data) acquisition. The assigned Fish Haven area and associated debris area were surveyed with Object Detection MBES coverage. Bathymetric and water column data were acquired with a Kongsberg EM2040C multibeam echo sounder aboard the R/V Sea Scout and bathymetry data was acquired with a Kongsberg EM3002 multibeam echo sounder aboard the R/V C-Wolf. Side scan sonar acoustic imagery was collected with a Klein 5000 V2 system aboard the R/V Sea Scout and an EdgeTech 4200 aboard the R/V C-Wolf.

  14. 2001 Crimes, with all columns

    • data.cityofchicago.org
    Updated Dec 2, 2025
    + more versions
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Chicago Police Department (2025). 2001 Crimes, with all columns [Dataset]. https://data.cityofchicago.org/Public-Safety/2001-Crimes-with-all-columns/8973-dj98
    Explore at:
Available download formats: application/geo+json, kmz, xlsx, xml, csv, kml
    Dataset updated
    Dec 2, 2025
    Authors
    Chicago Police Department
    Description

    This dataset reflects reported incidents of crime (with the exception of murders where data exists for each victim) that occurred in the City of Chicago from 2001 to present, minus the most recent seven days. Data is extracted from the Chicago Police Department's CLEAR (Citizen Law Enforcement Analysis and Reporting) system. In order to protect the privacy of crime victims, addresses are shown at the block level only and specific locations are not identified. Should you have questions about this dataset, you may contact the Research & Development Division of the Chicago Police Department at PSITAdministration@ChicagoPolice.org. Disclaimer: These crimes may be based upon preliminary information supplied to the Police Department by the reporting parties that have not been verified. The preliminary crime classifications may be changed at a later date based upon additional investigation and there is always the possibility of mechanical or human error. Therefore, the Chicago Police Department does not guarantee (either expressed or implied) the accuracy, completeness, timeliness, or correct sequencing of the information and the information should not be used for comparison purposes over time. The Chicago Police Department will not be responsible for any error or omission, or for the use of, or the results obtained from the use of this information. All data visualizations on maps should be considered approximate and attempts to derive specific addresses are strictly prohibited. The Chicago Police Department is not responsible for the content of any off-site pages that are referenced by or that reference this web page other than an official City of Chicago or Chicago Police Department web page. The user specifically acknowledges that the Chicago Police Department is not responsible for any defamatory, offensive, misleading, or illegal conduct of other users, links, or third parties and that the risk of injury from the foregoing rests entirely with the user. The unauthorized use of the words "Chicago Police Department," "Chicago Police," or any colorable imitation of these words or the unauthorized use of the Chicago Police Department logo is unlawful. This web page does not, in any way, authorize such use. Data are updated daily. To access a list of Chicago Police Department - Illinois Uniform Crime Reporting (IUCR) codes, go to http://data.cityofchicago.org/Public-Safety/Chicago-Police-Department-Illinois-Uniform-Crime-R/c7ck-438e

  15. Data from: Superconductor-ferromagnet hybrids for non-reciprocal electronics...

    • data.europa.eu
    • ekoizpen-zientifikoa.ehu.eus
    • +1more
    unknown
    Updated Jul 3, 2025
    + more versions
    Cite
    Zenodo (2025). Superconductor-ferromagnet hybrids for non-reciprocal electronics and detectors [Dataset]. https://data.europa.eu/data/datasets/oai-zenodo-org-7798143?locale=hr
    Explore at:
Available download formats: unknown (64)
    Dataset updated
    Jul 3, 2025
    Dataset authored and provided by
Zenodo (http://zenodo.org/)
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

Data for the manuscript "Superconductor-ferromagnet hybrids for non-reciprocal electronics and detectors", submitted to Superconductor Science and Technology, arXiv:2302.12732. This archive contains the data for all plots of numerical data in the manuscript.

    Fig. 4: Data in the WDX (Wolfram Data Exchange) format (unzip to extract the files). Contains critical exchange fields and critical thicknesses as functions of the temperature. Can be opened with Wolfram Mathematica with the command: Import[FileNameJoin[{NotebookDirectory[],"filename.wdx"}]]

    Fig. 5: Data in the WDX (Wolfram Data Exchange) format (unzip to extract the files). Contains theoretically calculated I(V) curves and the rectification coefficient R of N/FI/S junctions. Can be opened with Wolfram Mathematica with the command Import[FileNameJoin[{NotebookDirectory[],"filename.wdx"}]].

    Fig. 7a: Data in ASCII format. Contains G in uS as a function of B in mT and V in mV.

    Fig. 7c: Data in ASCII format. Contains G in uS as a function of B in mT and V in mV.

    Fig. 7e: Data in ASCII format. Contains G in uS as a function of B in mT and V in mV. The plots 7b, d, and f are taken from the plots a, c and e as indicated in the caption of the figure.

    Fig. 8: Data in ASCII format. Contains G in uS as a function of V in mV for several values of B in mT.

    Fig. 8 inset: Data in ASCII format. Contains G_0/G_N as a function of B in mT.

    Fig. 9a_b: First row contains magnetic field values in T; the first column is the voltage drop in V; the rest of the columns are the differential conductance in S.

    Fig. 9b_FIT: First row contains magnetic field values in T; the first column is the voltage drop in V; the rest of the columns are the differential conductance in S.

    Fig. 9c: First row contains magnetic field values in T; the first column is the voltage drop in V; the rest of the columns are R (real number).

    Fig. 9c inset: First row contains magnetic field values in T; odd columns are the voltage drop in V; even columns are the injected current in A.

    Fig. 9d: First column is the magnetic field in T; the second column is the conductance ratio (real number); the sample name is in the file name.

    Fig. 12: Data in ASCII format. Contains energy resolution as functions of temperature and tunnel resistance with current and voltage readout.

    Fig. 13: Data in ASCII format. Contains energy resolution as functions of (a) exchange field, (b) polarization, (c) Dynes parameter, and (d) absorber volume with different amplifier noises.

    Fig. 14: Data in ASCII format. Contains detector pulse current as functions of (a) temperature change and (b) time with different detector parameters.

    Fig. 17: Data in ASCII format. Contains dI/dV curves as a function of the voltage for different THz illumination frequencies and polarizations.

    Fig. 18: Data in ASCII format. Contains the current flowing through the junction as a function of time (arbitrary units) for ON and OFF illumination at 150 GHz for InPol and CrossPol polarization.

    Fig. 21: Data of Fig. 21c in ASCII format contains the magnitude of readout line S43 as a function of frequency; data of Fig. 21d in ASCII format contains the magnitude of the iKID line S21 as a function of frequency.

  16. Data from: A FAIR and modular image-based workflow for knowledge discovery...

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jul 7, 2024
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Meghan Balk; Thibault Tabarin; John Bradley; Hilmar Lapp (2024). Data from: A FAIR and modular image-based workflow for knowledge discovery in the emerging field of imageomics [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_8233379
    Explore at:
    Dataset updated
    Jul 7, 2024
    Dataset provided by
    Duke University School of Medicine
    National Ecological Observatory Network
    Authors
    Meghan Balk; Thibault Tabarin; John Bradley; Hilmar Lapp
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data and results from the Imageomics Workflow. These include data files from the Fish-AIR repository (https://fishair.org/) for purposes of reproducibility and outputs from the application-specific imageomics workflow contained in the Minnow_Segmented_Traits repository (https://github.com/hdr-bgnn/Minnow_Segmented_Traits).

    Fish-AIR: This is the dataset downloaded from Fish-AIR, filtering for Cyprinidae and the Great Lakes Invasive Network (GLIN) from the Illinois Natural History Survey (INHS) dataset. These files contain information about fish images, fish image quality, and path for downloading the images. The data download ARK ID is dtspz368c00q. (2023-04-05). The following files are unaltered from the Fish-AIR download. We use the following files:

    extendedImageMetadata.csv: A CSV file containing information about each image file. It has the following columns: ARKID, fileNameAsDelivered, format, createDate, metadataDate, size, width, height, license, publisher, ownerInstitutionCode. Column definitions are defined https://fishair.org/vocabulary.html and the persistent column identifiers are in the meta.xml file.

    imageQualityMetadata.csv: A CSV file containing information about the quality of each image. It has the following columns: ARKID, license, publisher, ownerInstitutionCode, createDate, metadataDate, specimenQuantity, containsScaleBar, containsLabel, accessionNumberValidity, containsBarcode, containsColorBar, nonSpecimenObjects, partsOverlapping, specimenAngle, specimenView, specimenCurved, partsMissing, allPartsVisible, partsFolded, brightness, uniformBackground, onFocus, colorIssue, quality, resourceCreationTechnique. Column definitions are defined https://fishair.org/vocabulary.html and the persistent column identifiers are in the meta.xml file.

    multimedia.csv: A CSV file containing information about image downloads. It has the following columns: ARKID, parentARKID, accessURI, createDate, modifyDate, fileNameAsDelivered, format, scientificName, genus, family, batchARKID, batchName, license, source, ownerInstitutionCode. Column definitions are defined https://fishair.org/vocabulary.html and the persistent column identifiers are in the meta.xml file.

meta.xml: An XML file with the metadata about the column indices and URIs for each file contained in the original downloaded zip file. This file is used in the fish-air.R script to extract the indices for column headers.
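
    A hedged R sketch of reading these files, along the lines of what fish-air.R is described as doing (the exact parsing logic is assumed):

    multimedia <- read.csv("multimedia.csv", stringsAsFactors = FALSE)
    head(multimedia[, c("ARKID", "scientificName", "accessURI")])

    library(xml2)                     # meta.xml holds column indices and URIs per file
    meta <- read_xml("meta.xml")
    xml_name(xml_children(meta))      # inspect the per-file metadata nodes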

    The outputs from the Minnow_Segmented_Traits workflow are:

    sampling.df.seg.csv: Table with tallies of the sampling of image data per species during the data cleaning and data analysis. This is used in Table S1 in Balk et al.

    presence.absence.matrix.csv: The Presence-Absence matrix from segmentation, not cleaned. This is the result of the combined outputs from the presence.json files created by the rule “create_morphological_analysis”. The cleaned version of this matrix is shown as Table S3 in Balk et al.

    heatmap.avg.blob.png and heatmap.sd.blob.png: Heatmaps of average area of biggest blob per trait (heatmap.avg.blob.png) and standard deviation of area of biggest blob per trait (heatmap.sd.blob.png). These images are also in Figure S3 of Balk et al.

    minnow.filtered.from.iqm.csv: Filtered fish image data set after filtering (see methods in Balk et al. for filter categories).

    burress.minnow.sp.filtered.from.iqm.csv: Fish image data set after filtering and selecting species from Burress et al. 2017.

  17. Swath sonar multibeam EM122 water column data of R/V SONNE cruises SO268/1...

    • doi.pangaea.de
    html, tsv
    Updated Apr 3, 2020
    + more versions
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Iason-Zois Gazis; Peter Urban (2020). Swath sonar multibeam EM122 water column data of R/V SONNE cruises SO268/1 and SO268/2 with links to wcd data files [Dataset]. http://doi.org/10.1594/PANGAEA.914390
    Explore at:
Available download formats: html, tsv
    Dataset updated
    Apr 3, 2020
    Dataset provided by
    PANGAEA
    GEOMAR - Helmholtz Centre for Ocean Research Kiel
    Authors
    Iason-Zois Gazis; Peter Urban
    License

Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Time period covered
    Feb 26, 2019 - May 20, 2019
    Area covered
    Variables measured
    LATITUDE, DATE/TIME, File name, File size, LONGITUDE, Event label, File format, Uniform resource locator/link to wcd data file
    Description

    Data acquisition was performed using the multibeam echosounder Kongsberg EM122. Raw data are delivered in Kongsberg .wcd format. The data acquisition was part of the international project JPI Oceans - MiningImpact Environmental Impacts and Risks of Deep-Sea Mining.

  18. Data from: A Graphical Goodness-of-Fit Test for Dependence Models in Higher...

    • tandf.figshare.com
    application/gzip
    Updated May 30, 2023
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Marius Hofert; Martin Mächler (2023). A Graphical Goodness-of-Fit Test for Dependence Models in Higher Dimensions [Dataset]. http://doi.org/10.6084/m9.figshare.1067049.v2
    Explore at:
Available download formats: application/gzip
    Dataset updated
    May 30, 2023
    Dataset provided by
    Taylor & Francis
    Authors
    Marius Hofert; Martin Mächler
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This article introduces a graphical goodness-of-fit test for copulas in more than two dimensions. The test is based on pairs of variables and can thus be interpreted as a first-order approximation of the underlying dependence structure. The idea is to first transform pairs of data columns with the Rosenblatt transform to bivariate standard uniform distributions under the null hypothesis. This hypothesis can be graphically tested with a matrix of bivariate scatterplots, Q-Q plots, or other transformations. Furthermore, additional information can be encoded as background color, such as measures of association or (approximate) p-values of tests of independence. The proposed goodness-of-fit test is designed as a basic graphical tool for detecting deviations from a postulated, possibly high-dimensional, dependence model. Various examples are given and the methodology is applied to a financial dataset. An implementation is provided by the R package copula. Supplementary material for this article is available online, which provides the R package copula and reproduces all the graphical results of this article.
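
    A hedged R sketch of the underlying idea using the copula package (a toy model under the null hypothesis, not the article's full test):

    library(copula)
    set.seed(1)
    cop <- claytonCopula(2, dim = 4)    # postulated dependence model
    u   <- rCopula(500, cop)            # pseudo-observations under the null hypothesis
    r   <- cCopula(u, copula = cop)     # Rosenblatt transform: ~ independent U(0,1) under H0
    pairs(r, pch = ".", gap = 0)        # matrix of bivariate scatterplots to inspect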

  19. 💎 r/Fitness Posts & Comments 💎

    • kaggle.com
    zip
    Updated Apr 2, 2025
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Curiel (2025). 💎 r/Fitness Posts & Comments 💎 [Dataset]. https://www.kaggle.com/datasets/curiel/rfitness-posts-and-comments
    Explore at:
Available download formats: zip (52567764 bytes)
    Dataset updated
    Apr 2, 2025
    Authors
    Curiel
    License

https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Important Note: The dataset contains some important information regarding the columns 'title' and 'comments'. It's important to understand their values in order to interpret the data correctly.

    In the 'title' column, there may be a significant number of null values. A null value in this column indicates that the corresponding row pertains to a comment rather than a post. To identify the relationship between comment rows and their associated posts, you can examine the 'post_id' column. Rows with the same 'post_id' value refer to comments that are associated with the post identified by that 'post_id'.

    Similarly, in the 'comments' column, the presence or absence of null values is crucial for determining whether a row represents a comment or a post. If the 'comments' column is null, it signifies a comment row. Conversely, if the 'comments' column is populated (including cases where the value is 0), it indicates a post row.

    Understanding these conventions will enable accurate analysis and interpretation of the dataset.
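
    A hedged R sketch of applying these conventions (the file name is assumed, and read.csv is assumed to turn the null values into NA):

    fitness <- read.csv("rfitness.csv", stringsAsFactors = FALSE)   # file name assumed
    posts   <- subset(fitness, !is.na(comments))   # 'comments' populated (even 0) => post row
    replies <- subset(fitness, is.na(comments))    # 'comments' null => comment row
    # attach each comment to its parent post via post_id
    joined <- merge(replies, posts[, c("post_id", "title")],
                    by = "post_id", suffixes = c("", "_post"))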

  20. Water column sample data from predefined locations of the West Florida...

    • data.usgs.gov
    • search.dataone.org
    • +2more
    Updated Feb 15, 2014
    + more versions
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    United States Geological Survey (2014). Water column sample data from predefined locations of the West Florida Shelf: USGS Cruise 11BHM03 [Dataset]. https://data.usgs.gov/datacatalog/data/USGS:94b95e3f-fe33-40d3-885f-54c70ead5714
    Explore at:
    Dataset updated
    Feb 15, 2014
    Dataset provided by
    United States Geological Survey
    License

U.S. Government Works: https://www.usa.gov/government-works
    License information was derived automatically

    Time period covered
    Sep 20, 2011 - Sep 28, 2011
    Area covered
    Florida
    Description

The United States Geological Survey (USGS) is conducting a study on the effects of climate change on ocean acidification within the Gulf of Mexico; dealing specifically with the effect of ocean acidification on marine organisms and habitats. To investigate this, the USGS participated in two cruises in the West Florida Shelf and northern Gulf of Mexico regions aboard the R/V Weatherbird II, a ship of opportunity led by Dr. Kendra Daly, of the University of South Florida (USF). The cruises occurred September 20 - 28 and November 2 - 4, 2011. Both left from and returned to Saint Petersburg, Florida, but followed different routes (see Trackline). On both cruises the USGS collected data pertaining to pH, dissolved inorganic carbon (DIC), and total alkalinity in discrete samples. Discrete surface samples were taken during transit approximately hourly on both cruises: 95 in September were collected over a span of 2127 km, and 7 over a 732 km trackline on the November cruise. Along wit ...
