This part of the data release includes graphical representation (figures) of data from sediment cores collected in 2009 offshore of Palos Verdes, California. This file graphically presents combined data for each core (one core per page). Data on each figure are continuous core photograph, CT scan (where available), graphic diagram core description (graphic legend included at right; visual grain size scale of clay, silt, very fine sand [vf], fine sand [f], medium sand [med], coarse sand [c], and very coarse sand [vc]), multi-sensor core logger (MSCL) p-wave velocity (meters per second) and gamma-ray density (grams per cc), radiocarbon age (calibrated years before present) with analytical error (years), and pie charts that present grain-size data as percent sand (white), silt (light gray), and clay (dark gray). This is one of seven files included in this U.S. Geological Survey data release that include data from a set of sediment cores acquired from the continental slope, offshore Los Angeles and the Palos Verdes Peninsula, adjacent to the Palos Verdes Fault. Gravity cores were collected by the USGS in 2009 (cruise ID S-I2-09-SC; http://cmgds.marine.usgs.gov/fan_info.php?fan=SI209SC), and vibracores were collected with the Monterey Bay Aquarium Research Institute's remotely operated vehicle (ROV) Doc Ricketts in 2010 (cruise ID W-1-10-SC; http://cmgds.marine.usgs.gov/fan_info.php?fan=W110SC). One spreadsheet (PalosVerdesCores_Info.xlsx) contains core name, location, and length. One spreadsheet (PalosVerdesCores_MSCLdata.xlsx) contains Multi-Sensor Core Logger P-wave velocity, gamma-ray density, and magnetic susceptibility whole-core logs. One zipped folder of .bmp files (PalosVerdesCores_Photos.zip) contains continuous core photographs of the archive half of each core. One spreadsheet (PalosVerdesCores_GrainSize.xlsx) contains laser particle grain size sample information and analytical results. One spreadsheet (PalosVerdesCores_Radiocarbon.xlsx) contains radiocarbon sample information, results, and calibrated ages. One zipped folder of DICOM files (PalosVerdesCores_CT.zip) contains raw computed tomography (CT) image files. One .pdf file (PalosVerdesCores_Figures.pdf) contains combined displays of data for each core, including graphic diagram descriptive logs. This particular metadata file describes the information contained in the file PalosVerdesCores_Figures.pdf. All cores are archived by the U.S. Geological Survey Pacific Coastal and Marine Science Center.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Figures in scientific publications are critically important because they often show the data supporting key findings. Our systematic review of research articles published in top physiology journals (n = 703) suggests that, as scientists, we urgently need to change our practices for presenting continuous data in small sample size studies. Papers rarely included scatterplots, box plots, and histograms that allow readers to critically evaluate continuous data. Most papers presented continuous data in bar and line graphs. This is problematic, as many different data distributions can lead to the same bar or line graph. The full data may suggest different conclusions from the summary statistics. We recommend training investigators in data presentation, encouraging a more complete presentation of data, and changing journal editorial policies. Investigators can quickly make univariate scatterplots for small sample size studies using our Excel templates.
This part of the data release includes graphical representation (figures) of data from sediment cores collected in 2014 in Monterey Canyon. It is one of five files included in this U.S. Geological Survey data release that include data from a set of sediment cores acquired from the continental slope, north of Monterey Canyon, offshore central California. Vibracores and push cores were collected with the Monterey Bay Aquarium Research Institute's (MBARI's) remotely operated vehicle (ROV) Doc Ricketts in 2014 (cruise ID 2014-615-FA). One spreadsheet (NorthernFlankMontereyCanyonCores_Info.xlsx) contains core name, location, and length. One spreadsheet (NorthernFlankMontereyCanyonCores_MSCLdata.xlsx) contains Multi-Sensor Core Logger P-wave velocity and gamma-ray density whole-core logs of vibracores. One zipped folder of .bmp files (NorthernFlankMontereyCanyonCores_Photos.zip) contains continuous core photographs of the archive half of each vibracore. One spreadsheet (NorthernFlankMontereyCanyonCores_Radiocarbon.xlsx) contains radiocarbon sample information, results, and calibrated ages. One .pdf file (NorthernFlankMontereyCanyonCores_Figures.pdf) contains combined displays of data for each vibracore, including graphic diagram descriptive logs. This particular metadata file describes the information contained in the file NorthernFlankMontereyCanyonCores_Figures.pdf. All vibracores are archived by the U.S. Geological Survey Pacific Coastal and Marine Science Center. Other remaining core material, if available, is archived at MBARI.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Companion data for the creation of a banksia plot.

Background: In research evaluating statistical analysis methods, a common aim is to compare point estimates and confidence intervals (CIs) calculated from different analyses. This can be challenging when the outcomes (and their scale ranges) differ across datasets. We therefore developed a plot to facilitate pairwise comparisons of point estimates and confidence intervals from different statistical analyses both within and across datasets.

Methods: The plot was developed and refined over the course of an empirical study. To compare results from a variety of different studies, a system of centring and scaling is used. Firstly, the point estimates from reference analyses are centred to zero, followed by scaling confidence intervals to span a range of one. The point estimates and confidence intervals from matching comparator analyses are then adjusted by the same amounts. This enables the relative positions of the point estimates and CI widths to be quickly assessed while maintaining the relative magnitudes of the difference in point estimates and confidence interval widths between the two analyses. Banksia plots can be graphed in a matrix, showing all pairwise comparisons of multiple analyses. In this paper, we show how to create a banksia plot and present two examples: the first relates to an empirical evaluation assessing the difference between various statistical methods across 190 interrupted time series (ITS) data sets with widely varying characteristics, while the second example assesses data extraction accuracy, comparing results obtained from analysing original study data (43 ITS studies) with those obtained by four researchers from datasets digitally extracted from graphs in the accompanying manuscripts.

Results: In the banksia plot of statistical method comparison, it was clear that there was no difference, on average, in point estimates, and it was straightforward to ascertain which methods resulted in smaller, similar, or larger confidence intervals than others. In the banksia plot comparing analyses from digitally extracted data to those from the original data, it was clear that both the point estimates and confidence intervals were all very similar among data extractors and original data.

Conclusions: The banksia plot, a graphical representation of centred and scaled confidence intervals, provides a concise summary of comparisons between multiple point estimates and associated CIs in a single graph. Through this visualisation, patterns and trends in the point estimates and confidence intervals can be easily identified.

This collection of files allows the user to create the images used in the companion paper and amend this code to create their own banksia plots using either Stata version 17 or R version 4.3.1.
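The centring and scaling described under Methods can be sketched in a few lines. Below is a minimal illustration in Python (the released companion code itself is in Stata 17 and R 4.3.1); the function name and the tuple layout are ours, not part of the release.

# Minimal sketch of banksia-plot centring and scaling.
# Estimates are (point, lower, upper) tuples; the reference estimate is centred
# to zero and its CI scaled to span one unit, and the comparator is transformed
# with the same shift and scale factor, preserving relative positions and widths.
def centre_and_scale(reference, comparator):
    ref_est, ref_lo, ref_hi = reference
    shift = ref_est              # centre the reference point estimate at zero
    scale = ref_hi - ref_lo      # the reference CI will span exactly one unit

    def transform(triple):
        est, lo, hi = triple
        return ((est - shift) / scale, (lo - shift) / scale, (hi - shift) / scale)

    return transform(reference), transform(comparator)

# Example: the reference CI (0.8, 0.2, 1.4) maps to (0.0, -0.5, 0.5); the
# comparator keeps its relative offset and its relative CI width (here 1.5x).
ref_scaled, comp_scaled = centre_and_scale((0.8, 0.2, 1.4), (1.0, 0.1, 1.9))
print(ref_scaled, comp_scaled)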
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The graph representation of the HRA sample linkml schema dataset.
MIT Licensehttps://opensource.org/licenses/MIT
License information was derived automatically
GraphTab Sample Dataset
This dataset contains tables from approximatelabs/tablib-v1-sample processed into graph representations.
Usage
from datasets import load_dataset
import graphtab

# Repository id taken from the dataset page linked below.
dataset = load_dataset("alexodavies/graphtab-sample")
graph_data = dataset['test'][0]
graph = graphtab.deserialize_graph(graph_data['graph_data'], graph_data['serialization'])
Citation
If you use this dataset, please cite both the… See the full description on the dataset page: https://huggingface.co/datasets/alexodavies/graphtab-sample.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Categorical scatterplots with R for biologists: a step-by-step guide
Benjamin Petre1, Aurore Coince2, Sophien Kamoun1
1 The Sainsbury Laboratory, Norwich, UK; 2 Earlham Institute, Norwich, UK
Weissgerber and colleagues (2015) recently stated that "as scientists, we urgently need to change our practices for presenting continuous data in small sample size studies". They called for more scatterplot and boxplot representations in scientific papers, which "allow readers to critically evaluate continuous data" (Weissgerber et al., 2015). In the Kamoun Lab at The Sainsbury Laboratory, we recently implemented a protocol to generate categorical scatterplots (Petre et al., 2016; Dagdas et al., 2016). Here we describe the three steps of this protocol: 1) formatting of the data set in a .csv file, 2) execution of the R script to generate the graph, and 3) export of the graph as a .pdf file.
Protocol
⢠Step 1: format the data set as a .csv file. Store the data in a three-column excel file as shown in Powerpoint slide. The first column âReplicateâ indicates the biological replicates. In the example, the month and year during which the replicate was performed is indicated. The second column âConditionâ indicates the conditions of the experiment (in the example, a wild type and two mutants called A and B). The third column âValueâ contains continuous values. Save the Excel file as a .csv file (File -> Save as -> in âFile Formatâ, select .csv). This .csv file is the input file to import in R.
⢠Step 2: execute the R script (see Notes 1 and 2). Copy the script shown in Powerpoint slide and paste it in the R console. Execute the script. In the dialog box, select the input .csv file from step 1. The categorical scatterplot will appear in a separate window. Dots represent the values for each sample; colors indicate replicates. Boxplots are superimposed; black dots indicate outliers.
⢠Step 3: save the graph as a .pdf file. Shape the window at your convenience and save the graph as a .pdf file (File -> Save as). See Powerpoint slide for an example.
Notes
⢠Note 1: install the ggplot2 package. The R script requires the package âggplot2â to be installed. To install it, Packages & Data -> Package Installer -> enter âggplot2â in the Package Search space and click on âGet Listâ. Select âggplot2â in the Package column and click on âInstall Selectedâ. Install all dependencies as well.
⢠Note 2: use a log scale for the y-axis. To use a log scale for the y-axis of the graph, use the command line below in place of command line #7 in the script.
graph + geom_boxplot(outlier.colour='black', colour='black') + geom_jitter(aes(col=Replicate)) + scale_y_log10() + theme_bw()
References
Dagdas YF, Belhaj K, Maqbool A, Chaparro-Garcia A, Pandey P, Petre B, et al. (2016) An effector of the Irish potato famine pathogen antagonizes a host autophagy cargo receptor. eLife 5:e10856.
Petre B, Saunders DGO, Sklenar J, Lorrain C, Krasileva KV, Win J, et al. (2016) Heterologous Expression Screens in Nicotiana benthamiana Identify a Candidate Effector of the Wheat Yellow Rust Pathogen that Associates with Processing Bodies. PLoS ONE 11(2):e0149035
Weissgerber TL, Milic NM, Winham SJ, Garovic VD (2015) Beyond Bar and Line Graphs: Time for a New Data Presentation Paradigm. PLoS Biol 13(4):e1002128
Contents of the data package:
- baculum (absent/present) data: morphology of absence/presence of a baculum in mammals (baculumData01.nex).
- Mammal phylogeny: mammal phylogeny published by dos Reis et al. (2012), "Phylogenomic datasets provide both precision and accuracy in estimating the timescale of placental mammal phylogeny," Proceedings of the Royal Society B: Biological Sciences 279, 3491-3500 (mammalia_dosReis.tree).
- GraphicalModels_Example_1a: RevBayes example file for the analysis shown in Figure 1.a.
- GraphicalModels_Example_1b: RevBayes example file for the analysis shown in Figure 1.b.
- GraphicalModels_Example_2: RevBayes example file for the analysis shown in Figure 2.
- Supplementary Information: "Probabilistic Graphical Model Representation in Phylogenetics" supplementary information, including additional phylogenetic graphical models and RevBayes implementation and examples (Hohna_etal_GM_SM.pdf).
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Supplementary materials for the application of DA_2DCHROM - data alignment
https://doi.org/10.5281/zenodo.7040975
Content:
data - folder for raw data
full_dataset_alignment - 100 graphical comparisons of randomly picked pairs from the full dataset
graph_results - graphical representation of the obtained results
metadata - results of intermediate steps in the data alignment process
data folder:
Subfolder Sample_dataset contains 20 sample chromatograms, each processed at S/N levels of 100, 300, and 500.
full_dataset_alignment:
100 graphical comparisons of randomly picked pairs from the full dataset. The pairs are the same for both algorithms.
graph_results folder:
To reduce the total size of the supplementary materials, only results for the S/N 500 level are exported.
Each subfolder (folder names correspond to the names of the algorithms used throughout the study) contains a numerical (K-S test) and a graphical representation of the alignment. In case of a failed alignment (for example, not enough anchor points for BiPACE2D), the graphs are left blank.
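For readers who want to reproduce the numerical check, a minimal sketch is given below. It assumes the K-S statistic is computed between the retention-time distributions of the reference and the aligned chromatogram; the file names, column names, and exact quantities compared are assumptions, since these are defined by the DA_2DCHROM scripts themselves.

# Hedged sketch: two-sample Kolmogorov-Smirnov comparison of peak retention
# times before and after alignment. File and column names are hypothetical.
import pandas as pd
from scipy.stats import ks_2samp

reference = pd.read_csv("reference_peaks.txt", sep="\t")    # hypothetical input
aligned = pd.read_csv("aligned_peaks.txt", sep="\t")        # hypothetical input

stat, p_value = ks_2samp(reference["rt1"], aligned["rt1"])  # first-dimension retention time
print(f"K-S statistic: {stat:.3f}, p-value: {p_value:.3g}")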
metadata folder:
merged_peaks folder
Folder containing formatted data with merged peaks (results of the preprocessing part of the data_alignment_chromatography_v1.4 script)
ref_data folder
Lists of manually exported reference peaks for each chromatogram. Input data for the RI algorithm.
time_correction folder
Each algorithm subfolder contains the results of the data alignment itself. For each aligned chromatogram, there are three files: the aligned chromatogram itself (the largest .txt file), a list of detected anchor peaks (a .txt file with the _anchors extension), and a simple graphical check of the alignment (.png).
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This ds-graph represents this information for the Human Reference Atlas Cell Type Populations Effort (Börner et al. 2025). It provides sample registration information submitted by consortium members in single-cell atlassing efforts, including accurate sample sizes and positions (Bueckle et al. 2025). When combined with ref-organ data, this information helps create 3D visual tissue sample placements. Additionally, the sample information is linked to datasets from researchers' assay analyses that offer deeper insights into the tissue samples. The "ds" stands for "dataset." ds-graphs represent datasets by tissue sample and donor. It is a dataset graph for the Human Reference Atlaspop Universe. It includes all datasets considered for Human Reference Atlaspop (not enriched).
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Comparison between exact and approximated solutions of Example 3.1 at β = 1.
In particular, MUTAG is a collection of nitroaromatic compounds and the goal is to predict their mutagenicity on Salmonella typhimurium. Input graphs are used to represent chemical compounds, where vertices stand for atoms and are labeled by the atom type (represented by one-hot encoding), while edges between vertices represent bonds between the corresponding atoms. It includes 188 samples of chemical compounds with 7 discrete node labels.
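To make the encoding concrete, a single compound can be represented as a labelled graph along the lines of the sketch below; this is our own illustration, not code or data from the MUTAG release, and the molecule fragment and atom vocabulary shown are illustrative.

# Sketch: one small compound as a graph with one-hot node labels, in the
# spirit of the MUTAG encoding. The molecule fragment here is made up.
import networkx as nx

atom_types = ["C", "N", "O", "F", "I", "Cl", "Br"]   # 7 discrete node labels

def one_hot(atom):
    return [1 if atom == a else 0 for a in atom_types]

g = nx.Graph()
atoms = {0: "C", 1: "C", 2: "N", 3: "O", 4: "O"}     # a toy nitro-group fragment
for idx, atom in atoms.items():
    g.add_node(idx, label=one_hot(atom))
for u, v in [(0, 1), (1, 2), (2, 3), (2, 4)]:        # bonds as undirected edges
    g.add_edge(u, v)

print(g.number_of_nodes(), "atoms,", g.number_of_edges(), "bonds")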
This package contains supplementary material associated with the paper "Understanding the Impact of Digital Technology: a Case Study in Rural Areas". Abstract: Designing systems that account for sustainability concerns demands a better understanding of the impact that digital technology interventions can have on a certain socio-technical context. However, few studies are available on eliciting impact-related information from stakeholders. This paper reports the experience of the authors in identifying the impact of digitalisation in remote mountain areas, in the context of a system for ordinary land management and hydro-geological risk control. Based on a set of interviews and a workshop, we reconstructed the current socio-technical system. Furthermore, we elicited information about the impact of digital technologies that were introduced in the specific context in recent years (GIS software, instant messaging apps, sensors, drones, etc.). Positive impacts are mostly economic and organisational. Negative ones are higher stress due to the excess of connectivity and a partial reduction of decision-making abilities. Our study contributes to the literature with a set of impacts specific to the case and a list of lessons learned from the experience. The package contains the following files: 1. Socio-technical System.pdf: a graphical representation of the socio-technical system that has been reconstructed by the authors of the paper. 2. Description of System Entities.pdf: detailed descriptions of the entities represented in the socio-technical system. 3. Incremental Data Analysis.pdf: graphs produced during data analysis to represent relations between entities, working towards the representation of the socio-technical system. 4. Impacts and quotes - selected examples (IT).pdf: sample quotes (in Italian) associated with the identified impacts.
Attribution-ShareAlike 4.0 (CC BY-SA 4.0)https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
Various datasets from the Bayesys repository.
Size: 6 groups of datasets, each containing up to 16 datasets experimentally generated from the corresponding Bayesian network, with sample sizes of 100, 1,000, ..., 100,000 observations. Ground truth is given.
Number of features: 6 - over 1000
Ground truth: Yes
Type of Graph: Directed graph
Six discrete BN case studies are used to generate data. The first three of them represent well-established examples from the BN structure learning literature, whereas the other three represent new cases and are based on recent BN real-world applications. Specifically,
Asia: A small toy network for diagnosing patients at a clinic;
Alarm: A medium-sized network based on an alarm message system for patient monitoring;
Pathfinder: A very large network that was designed to assist surgical pathologists with the diagnosis of lymph-node diseases;
Sports: A small BN that combines football team ratings with various team performance statistics to predict a series of match outcomes;
ForMed: A large BN that captures the risk of violent reoffending of mentally ill prisoners, along with multiple interventions for managing this risk;
Property: A medium BN that assesses investment decisions in the UK property market.
Data generated with noise:
| Experiment No. | Experiment | Notes |
|---|---|---|
| 1 | N | No noise |
| 2 | M5 | Missing data (5%) |
| 3 | M10 | Missing data (10%) |
| 4 | I5 | Incorrect data (5%) |
| 5 | I10 | Incorrect data (10%) |
| 6 | S5 | Merged states data (5%) |
| 7 | S10 | Merged states data (10%) |
| 8 | L5 | Latent confounders (5%) |
| 9 | L10 | Latent confounders (10%) |
| 10 | cMI | M5 and I5 |
| 11 | cMS | M5 and S5 |
| 12 | cML | M5 and L5 |
| 13 | cIS | I5 and S5 |
| 14 | cIL | I5 and L5 |
| 15 | cSL | S5 and L5 |
| 16 | cMISL | M5, I5, S5 and L5 |
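As a rough illustration of how this kind of noise can be injected into clean generated data (this is our own sketch, not the generator used by the Bayesys repository), the snippet below applies the M (missing) and I (incorrect) perturbations at a chosen rate; the column names and the random replacement mechanism are assumptions.

# Hedged sketch: inject "missing" (M) and "incorrect" (I) noise into a clean
# discrete dataset at a given rate, loosely mirroring the M5/I5 experiments.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

def add_missing(df, rate):
    mask = rng.random(df.shape) < rate                       # pick cells independently
    return df.mask(pd.DataFrame(mask, index=df.index, columns=df.columns))

def add_incorrect(df, rate):
    noisy = df.copy()
    for col in noisy.columns:
        states = noisy[col].unique()
        hit = rng.random(len(noisy)) < rate
        noisy.loc[hit, col] = rng.choice(states, size=int(hit.sum()))  # random state swap
    return noisy

clean = pd.DataFrame({"A": rng.choice(["yes", "no"], 1000),
                      "B": rng.choice(["low", "mid", "high"], 1000)})
m5 = add_missing(clean, 0.05)    # analogue of experiment M5
i5 = add_incorrect(clean, 0.05)  # analogue of experiment I5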
More information about the datasets is contained in the dataset_description.html files.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Approximated solution of Example 3.2 at t = 0.005.
Redmob's Identity Graph Data helps you bring fragmented user data into one unified view. Built in-house and refreshed weekly, the mobile identity graph connects online and offline identifiers.
Designed for adtech platforms, brands, CRM, and CDP owners, Redmob enables cross-device audience tracking, deterministic identity resolution, and more precise attribution modeling across digital touchpoints.
Use cases
The Redmob Identity Graph is a mobile-centric database of linked identifiers that enables:
Key benefits:
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In this article, Elzaki decomposition method (EDM) has been applied to approximate the analytical solution of the time-fractional gas-dynamics equation. The time-fractional derivative is used in the Caputo-Fabrizio sense. The proposed method is implemented on homogenous and non-homogenous cases of the time-fractional gas-dynamics equation. A comparison between the exact and approximate solutions is also provided to show the validity and accuracy of the technique. A graphical representation of all the retrieved solutions is shown for different values of the fractional parameter. The time development of all solutions is also represented in 2D graphs. The obtained results may help understand the physical systems governed by the gas-dynamics equation.
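For context only (standard background, not material from this dataset), the Caputo-Fabrizio fractional derivative referred to in the abstract is commonly written, for 0 < β < 1, as

{}^{CF}D_t^{\beta} u(t) = \frac{M(\beta)}{1-\beta} \int_0^{t} u'(s)\, \exp\!\left( -\frac{\beta (t-s)}{1-\beta} \right) \mathrm{d}s,

where M(β) is a normalization function satisfying M(0) = M(1) = 1.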
2.2 Full Mall Graph Clustering

Train

The sample training data for this problem is a set of 106,981 fingerprints (task2_train_fingerprints.json) and some edges between them. We have provided files that indicate three different edge types, all of which should be treated differently.
task2_train_steps.csv indicates edges that connect subsequent steps within a trajectory. These edges should be highly trusted as they indicate a certainty that two fingerprints were recorded from the same floor.
task2_train_elevations.csv indicates the opposite of the steps. These elevations indicate that the fingerprints are almost certainly from different floors. You can thus extrapolate that if fingerprint $N$ from trajectory $n$ is on a different floor to fingerprint $M$ from trajectory $m$, then all other fingerprints in both trajectories $m$ and $n$ must also be on separate floors.
task2_train_estimated_wifi_distances.csv are the pre-computed distances that we have calculated using our own distance metric. This metric is imperfect and as such we know that many of these edges will be incorrect (i.e. they will connect two floors together). We suggest that initially you use the edges in this file to construct your initial graph and compute some solution. However, if you get a high score on task1 then you might consider computing your own wifi distances to build a graph.
Your graph can be at one of two levels of detail, either trajectory level or fingerprint level. You can choose which representation you want to use, but ultimately we want to know the trajectory clusters. Trajectory level would have every node as a trajectory, and edges between nodes would occur if fingerprints in their trajectories had high similarity. Fingerprint level would have each fingerprint as a node. You can look up the trajectory id of a fingerprint using task2_train_lookup.json to convert between representations.
To help you debug and train your solution we have provided a ground truth for some of the trajectories in task2_train_GT.json. In this file the keys are the trajectory ids (the same as in task2_train_lookup.json) and the values are the real floor id of the building.
Test

The test set is in the exact same format as the training set (for a separate building, we weren't going to make it that easy ;) ), but we haven't included the equivalent ground truth file. This will be withheld to allow us to score your solution.
Points to consider

- When doing this on real data we do not know the exact number of floors to expect, so your model will need to decide this for itself as well. For this data, do not expect to find more than 20 floors or fewer than 3 floors.
- Sometimes in balcony areas the similarity between fingerprints on different floors can be deceivingly high. In these cases it may be wise to rely on the graph information rather than the individual similarity (e.g. what is the similarity of the other neighbour nodes to this candidate other-floor node?).
- To the best of our knowledge there are no outlier fingerprints in the data that do not belong to the building. Every fingerprint belongs to a floor.
2.3 Loading the data

In this section we provide some example code to open the files and construct both types of graph.
import os
import json
import csv
import networkx as nx
from tqdm import tqdm
path_to_data = "task2_for_participants/train"
with open(os.path.join(path_to_data, "task2_train_estimated_wifi_distances.csv")) as f:
    wifi = []
    reader = csv.DictReader(f)
    for line in tqdm(reader):
        wifi.append([line['id1'], line['id2'], float(line['estimated_distance'])])

with open(os.path.join(path_to_data, "task2_train_elevations.csv")) as f:
    elevs = []
    reader = csv.DictReader(f)
    for line in tqdm(reader):
        elevs.append([line['id1'], line['id2']])

with open(os.path.join(path_to_data, "task2_train_steps.csv")) as f:
    steps = []
    reader = csv.DictReader(f)
    for line in tqdm(reader):
        steps.append([line['id1'], line['id2'], float(line['displacement'])])
fp_lookup_path = os.path.join(path_to_data,"task2_train_lookup.json")
gt_path = os.path.join(path_to_data,"task2_train_GT.json")
with open(fp_lookup_path) as f:
    fp_lookup = json.load(f)

with open(gt_path) as f:
    gt = json.load(f)
Fingerprint graph

This is one way to construct the fingerprint-level graph, where each node in the graph is a fingerprint. We have added edge weights that correspond to the estimated/true distances from the wifi and pdr edges, respectively. We have also added elevation edges to mark pairs known to be on different floors. When developing your solution, you might want to explicitly enforce that no wifi edge remains wherever there is a valid elevation edge between the corresponding trajectories.
G = nx.Graph()

for id1, id2, dist in tqdm(steps):
    G.add_edge(id1, id2, ty="s", weight=dist)

for id1, id2, dist in tqdm(wifi):
    G.add_edge(id1, id2, ty="w", weight=dist)

for id1, id2 in tqdm(elevs):
    G.add_edge(id1, id2, ty="e")
Trajectory graph

The trajectory graph is arguably not as simple, as you need to think of a way to represent the many wifi connections between trajectories. In the example graph below we just take the mean distance as a weight, but is this really the best representation?
B = nx.Graph()

# Get all the trajectory ids from the lookup
valid_nodes = set(fp_lookup.values())
for node in valid_nodes:
    B.add_node(node)

# Either add an edge or append the distance to the edge data
for id1, id2, dist in tqdm(wifi):
    if not B.has_edge(fp_lookup[str(id1)], fp_lookup[str(id2)]):
        B.add_edge(fp_lookup[str(id1)],
                   fp_lookup[str(id2)],
                   ty="w", weight=[dist])
    else:
        B[fp_lookup[str(id1)]][fp_lookup[str(id2)]]['weight'].append(dist)

# Compute the mean edge weight
for edge in B.edges(data=True):
    B[edge[0]][edge[1]]['weight'] = sum(B[edge[0]][edge[1]]['weight']) / len(B[edge[0]][edge[1]]['weight'])

# If you have made a wifi connection between trajectories with an elev, delete the edge
for id1, id2 in tqdm(elevs):
    if B.has_edge(fp_lookup[str(id1)], fp_lookup[str(id2)]):
        B.remove_edge(fp_lookup[str(id1)],
                      fp_lookup[str(id2)])
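To close the loop from graph construction to floor clusters, one possible (and deliberately simple) baseline is sketched below; it is our own illustration, not the intended solution. It converts the mean wifi distances on B to similarities and runs a standard community-detection routine, reading each community as a candidate floor.

# Sketch of a naive clustering baseline on the trajectory graph B: convert the
# distance weights to similarities and apply greedy modularity communities.
from networkx.algorithms.community import greedy_modularity_communities

S = nx.Graph()
for u, v, data in B.edges(data=True):
    if data.get("ty") == "w":
        S.add_edge(u, v, weight=1.0 / (1.0 + data["weight"]))  # smaller distance -> higher similarity

communities = greedy_modularity_communities(S, weight="weight")
pred = {traj: floor for floor, members in enumerate(communities) for traj in members}
print("Found", len(communities), "candidate floors")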
Connected TV (CTV) is an abbreviation for "connected television," encompassing televisions that have the capability to connect to the internet. This enables users to access a diverse range of sources to stream shows, movies, and various video content on their CTVs.
VentiveIQ offers comprehensive viewership data for OTT/CTV, supplemented with IMDB metadata, Device Graph, and IP Addresses associated with households. This data is accessible for both the United States and select international countries. It is conveniently categorized to facilitate audience building and can be seamlessly integrated with additional data sets such as demographics, online behavior/intent data, and personally identifiable information (PII) for enhanced insights and analysis.
DRAKO is a leader in providing Device Graph Data, focusing on understanding the relationships between consumer devices and identities. Our data allows businesses to create holistic profiles of users, track engagement across platforms, and measure the effectiveness of advertising efforts.
Device Graph Data is essential for accurate audience targeting, cross-device attribution, and understanding consumer journeys. By integrating data from multiple sources, we provide a unified view of user interactions, helping businesses make informed decisions.
Key Features: - Comprehensive device mapping to understand user behaviour across multiple platforms - Detailed Identity Graph Data for cross-device identification and engagement tracking - Integration with Connected TV Data for enhanced insights into video consumption habits - Mobile Attribution Data to measure the effectiveness of mobile campaigns - Customizable analytics to segment audiences based on device usage and demographics - Some ID types offered: AAID, idfa, Unified ID 2.0, AFAI, MSAI, RIDA, AAID_CTV, IDFA_CTV
Use Cases: - Cross-device marketing strategies - Attribution modelling and campaign performance measurement - Audience segmentation and targeting - Enhanced insights for Connected TV advertising - Comprehensive consumer journey mapping
Data Compliance: All of our Device Graph Data is sourced responsibly and adheres to industry standards for data privacy and protection. We ensure that user identities are handled with care, providing insights without compromising individual privacy.
Data Quality: DRAKO employs robust validation techniques to ensure the accuracy and reliability of our Device Graph Data. Our quality assurance processes include continuous monitoring and updates to maintain data integrity and relevance.