At the end of 2023, envelopes accounted for little more than **** percent of the direct mail units sent in the United States throughout that year. Postcards followed with nearly **** percent, while self-mailers' share stood at *** percent.
Large-scale analysis of in vivo toxicology studies has been hindered by the lack of a standardized digital format for data analysis. The SEND standard enables the analysis of data from multiple studies performed by different laboratories. The objective of this work is to develop methods to transform, sort, and analyze data to automate cross-study analysis of toxicology studies. Cross-study analysis can be applied to use cases such as understanding a single compound's toxicity profile across all studies performed and/or evaluating on- versus off-target toxicity for multiple compounds intended for the same pharmacological target. This collaborative work between BioCelerate and FDA involved the development of data harmonization/transformation strategies and analytic techniques to enable cross-study analysis of both numerical and categorical SEND data. Four de-identified SEND datasets from the BioCelerate Toxicology Data Sharing module of DataCelerate® were used for the analyses. Toxicity prof...

Deidentified SEND data were donated by companies participating in BioCelerate's Toxicology Data Sharing Initiative (TDS module in DataCelerate®). The data included 1-Month Rat and 1-Month Dog SEND datasets for two different compounds intended for the same pharmacological target. To facilitate cross-study analysis of toxicology studies, it is practical to categorize findings within organ systems to provide insights into target organ toxicity. In the proof-of-concept for this application, we focused on the target organs with compound-related effects, namely the kidney, liver, hematopoietic system, endocrine system, and male reproductive tract. The body weights (BW), food and water consumption (FW), laboratory test results (LB), organ measurements (OM), and microscopic findings (MI) SEND domains were included in the analysis.

Each parameter was then assigned to the relevant organ system(s) (Table 1) based on veterinary literature (Faqi 2017; Stockham 2008) and scientific literature on ...

# Dataset for Cross Study Analyses of SEND Data: Toxicity Profile Classification
https://doi.org/10.5061/dryad.s1rn8pkgr
The data included 1-Month Rat and 1-Month Dog SEND datasets for two different compounds (Compound A and Compound B) intended for the same pharmacological target.
The files contain data from toxicology studies performed in rats and dogs to support clinical development of two different drugs intended for the same pharmacological target. The studies were donated by the pharmaceutical companies involved in development of the compounds. All proprietary and identifying information has been removed, and the data have been deidentified.
The toxicology data are organized according to the CDISC Standard for Exchange of Nonclinical Data (SEND) (https://www.cdisc.org/standards/foundational/send/sendig-v3-1) and stored in .json a...
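Since the data are SEND-structured and stored as JSON, the organ-system categorization described above can be sketched in a few lines. This is a minimal illustration, not the authors' actual pipeline: the test-code-to-organ mapping below is a hypothetical stand-in for the Table 1 assignments, while `USUBJID`, `LBTESTCD`, and `LBSTRESN` are genuine SEND LB-domain variable names.

```python
import json
from collections import defaultdict

# Hypothetical mapping of LB test codes to organ systems, in the spirit of
# the Table 1 assignments described above (the actual assignments differ).
ORGAN_SYSTEM = {
    "ALT": "liver",
    "AST": "liver",
    "CREAT": "kidney",
    "BUN": "kidney",
    "RBC": "hematopoietic",
}

def group_findings_by_organ(records):
    """Group SEND LB records by organ system, keyed on LBTESTCD."""
    grouped = defaultdict(list)
    for rec in records:
        system = ORGAN_SYSTEM.get(rec.get("LBTESTCD"))
        if system:
            grouped[system].append(rec)
    return dict(grouped)

# Example records shaped like rows of a SEND LB domain (values are made up).
records = json.loads("""[
    {"USUBJID": "A-001", "LBTESTCD": "ALT", "LBSTRESN": 52.0},
    {"USUBJID": "A-001", "LBTESTCD": "CREAT", "LBSTRESN": 0.6}
]""")
by_organ = group_findings_by_organ(records)
```

In a real cross-study analysis, the grouped records for each organ system would then feed into the numerical and categorical comparisons described in the abstract.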
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
There are lots of really cool datasets getting added to Kaggle every day, and as part of my job I want to help people find them. I've been tweeting about datasets on my personal Twitter account, @rctatman, and also releasing a weekly newsletter of interesting datasets.
I wanted to know which method was more effective at getting the word out about new datasets: Twitter or the newsletter?
This dataset contains two .csv files. One has information on the impact of tweets with links to datasets, while the other has information on the impact of the newsletter.
Twitter:
The Twitter .csv has the following information:
Fridata Newsletter:
The Fridata .csv has the following information:
This dataset was collected by the uploader, Rachael Tatman. It is released here under a CC-BY-SA license.
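With the two .csv files loaded, one simple way to compare the two channels is an aggregate hit rate (responses per exposure). The column names below are purely hypothetical, since the actual schemas are not listed here; the numbers are made-up placeholders, not values from the dataset.

```python
# Hypothetical schemas; the real column names in the two .csv files may differ.
tweets = [
    {"impressions": 1200, "engagements": 60},
    {"impressions": 800, "engagements": 32},
]
newsletter_issues = [
    {"opens": 300, "clicks": 45},
    {"opens": 350, "clicks": 70},
]

def rate(rows, hits, exposures):
    """Aggregate hit rate across all rows: total hits / total exposures."""
    return sum(r[hits] for r in rows) / sum(r[exposures] for r in rows)

tweet_rate = rate(tweets, "engagements", "impressions")
newsletter_rate = rate(newsletter_issues, "clicks", "opens")
```

Aggregating totals before dividing (rather than averaging per-row rates) keeps a single viral tweet or popular issue from dominating the comparison.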
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Hungary Sent Transactions: EUR: Volume: SCT Format data was reported at 1.216 Unit mn in Dec 2019. This records an increase from the previous number of 1.169 Unit mn for Sep 2019. Hungary Sent Transactions: EUR: Volume: SCT Format data is updated quarterly, averaging 0.899 Unit mn from Mar 2014 (Median) to Dec 2019, with 24 observations. The data reached an all-time high of 1.216 Unit mn in Dec 2019 and a record low of 0.435 Unit mn in Mar 2014. Hungary Sent Transactions: EUR: Volume: SCT Format data remains active status in CEIC and is reported by National Bank of Hungary. The data is categorized under Global Database's Hungary – Table HU.KA007: Payment System Turnover. [COVID-19-IMPACT]
As of May 16, 2022, approximately ***** million COVID-19 Green Pass certifications were downloaded by Italian users via the IO mobile app. By comparison, almost ** million Green Pass certifications were downloaded using the Immuni mobile app. The so-called "Green Pass" was introduced in *********** as part of the efforts to limit the spread of the COVID-19 pandemic in Italy.
Steps to Order River Discharge Time Series
1. Read the Policy Guidelines and agree to the GRDC User Declaration.
2. Examine the GRDC station maps (see right margin) to see whether GRDC data may be useful for your research project.
3. Download the GRDC Catalogue (XLS) from the catalogue menu item, or the KMZ files for use with Google Earth, and select your stations of interest.
4. Prepare a list of the selected stations and indicate the time period of interest, ideally in plain text (DOS ASCII) or MS Excel format (XLS). Alternatively, you can use the GRDC order form (see right margin) for your data request.
5. Write a one-page explanatory summary of your research project.
6. Send the order form, station list, and project summary to the GRDC, preferably via e-mail (grdc@bafg.de).
7. Please do not forget to send the signed User Declaration to the GRDC via fax (+49 261 13065722). As an alternative to a fax letter, electronic formats such as PDF or an image format are accepted.
https://creativecommons.org/publicdomain/zero/1.0/
The dataset is a synthetically generated server log based on the Apache server logging format. Each line corresponds to one log entry. A log entry has the following parameters:
The dataset consists of two files:
- logfiles.log is the actual log file in text format.
- TestFileGenerator.py is the synthetic log file generator. The number of log entries required can be edited in the code.
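Since each line follows the Apache logging format, entries in logfiles.log can be parsed with a regular expression. The sketch below assumes the log follows the standard Apache Common Log Format; if TestFileGenerator.py emits extra fields (e.g. referrer or user agent), the pattern would need extending.

```python
import re

# Apache Common Log Format, e.g.:
# 127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326
CLF_PATTERN = re.compile(
    r'(?P<host>\S+) (?P<ident>\S+) (?P<user>\S+) '
    r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<size>\d+|-)'
)

def parse_line(line):
    """Parse one log line into a dict of named fields, or None on no match."""
    m = CLF_PATTERN.match(line)
    return m.groupdict() if m else None

entry = parse_line(
    '127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] '
    '"GET /index.html HTTP/1.0" 200 2326'
)
```

Lines that fail to match return `None`, which makes it easy to count or inspect malformed entries when validating the generator's output.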
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset can be used to track boxes on an assembly or manufacturing line, or as a starter-dataset for package detection and "defect" detection use cases for boxes.
For the defect detection use case, one can clone the images from this project to a new one and add more examples (labels) of boxes with defects, such as damaged corners, unsealed boxes, and more. This defect detection model can be built as a single object-detection model, or broken into a "two-pass detection" pipeline: identify the box and the defects with object detection, then send the cropped detections of the defects to a classification model to confirm the classification of the defect and its severity.
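The second pass of that pipeline boils down to cropping each detection out of the frame and handing the patch to a classifier. The sketch below illustrates only that plumbing: the detector output and the threshold-based "classifier" are stand-in stubs, not trained models, and the 4x4 grayscale image is a toy placeholder.

```python
# Hypothetical two-pass pipeline: a detector returns boxes (first pass), each
# detection is cropped from the image, and a classifier scores the crop
# (second pass). Both models are stubs here.

def crop(image, box):
    """Crop a region (x0, y0, x1, y1) from an image given as a 2D pixel list."""
    x0, y0, x1, y1 = box
    return [row[x0:x1] for row in image[y0:y1]]

def classify_defect(patch):
    """Stub classifier: flag a patch as damaged if any pixel is very dark."""
    return "damaged" if any(p < 50 for row in patch for p in row) else "ok"

# 4x4 grayscale "image" with a dark (damaged) region in the top-left corner.
image = [
    [10, 20, 200, 200],
    [15, 25, 200, 200],
    [200, 200, 200, 200],
    [200, 200, 200, 200],
]
detections = [(0, 0, 2, 2), (2, 2, 4, 4)]  # stubbed detector output
labels = [classify_defect(crop(image, b)) for b in detections]
```

Splitting detection and classification this way lets each model be retrained independently, e.g. adding a new defect class without touching the box detector.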
Converting an Object Detection Dataset to Classification with Isolate Objects
Roboflow: Single Label Classification | Roboflow: Multi-Label Classification
Formats | Multi-Label Classification Format | OpenAI CLIP Classification Format
box