The values in this raster are unit-less scores ranging from 0 to 1 that represent normalized dollars-per-acre damage claims from antelope on Wyoming lands. This raster is one of 9 inputs used to calculate the "Normalized Importance Index."
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
The Handwritten Digits Pixel Dataset is a collection of numerical data representing handwritten digits from 0 to 9. Unlike image datasets that store actual image files, this dataset contains pixel intensity values arranged in a structured tabular format, making it ideal for machine learning and data analysis applications.
The dataset contains handwritten digit samples with the following distribution:
(Note: Actual distribution counts would be calculated from your specific dataset)
import pandas as pd
# Load the dataset
df = pd.read_csv('/kaggle/input/handwritten_digits_pixel_dataset/mnist.csv')
# Separate features and labels
X = df.drop('label', axis=1)
y = df['label']
# Normalize pixel values
X_normalized = X / 255.0
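If the pixel columns follow the standard 28x28 MNIST layout (an assumption about this particular CSV, not a documented property), a single sample can be reshaped and displayed as a quick sanity check:
import matplotlib.pyplot as plt
# Assumes 784 pixel columns per row (28x28); verify against the actual CSV before relying on this.
sample = X_normalized.iloc[0].to_numpy().reshape(28, 28)
plt.imshow(sample, cmap="gray")
plt.title(f"Label: {y.iloc[0]}")
plt.show()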
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
The CSV dataset contains sentence pairs for a text-to-text transformation task: given a sentence that contains 0..n abbreviations, rewrite (normalize) the sentence in full words (word forms).
Training dataset: 64,665 sentence pairs. Validation dataset: 7,185 sentence pairs. Testing dataset: 7,984 sentence pairs.
All sentences are extracted from a public web corpus (https://korpuss.lv/id/Tīmeklis2020) and contain at least one medical term.
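A minimal sketch for loading and inspecting the sentence pairs with pandas; the file name and the column names are assumptions for illustration, not documented fields of this dataset:
import pandas as pd
# Hypothetical file and column names; adjust to the actual CSV schema.
pairs = pd.read_csv("train.csv")  # e.g., the 64,665-pair training split
print(pairs.shape)
print(pairs.head())  # expected: abbreviated sentence -> fully spelled-out (normalized) sentence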
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
PCL boundaries are part of a wildfire transmission analysis that comprises the Tier I success metric within the Wildfire Crisis Strategy Prioritization Framework for the Mount Hood National Priority Landscape (NPL). This transmission analysis models the origin points of fires that burn communities, key infrastructure, and drinking water sources; how those fires move across the landscape; and which potential control lines they cross. Summarizing these results within Potential Operational Delineations (PODs) will allow prioritization of area-based treatments within PODs. Summarizing transmission along PCL boundaries will help support strategic decision-making, improve suppression effectiveness, and reduce firefighter exposure during wildfire response.
The analysis utilizes fire simulation output from the Pacific Northwest Quantitative Wildland Fire Risk Assessment (QWRA) (McEvoy et al., 2023) conducted by Pyrologix, specifically the ignitions and associated perimeters from the Large Fire Simulator (FSIM), and select values data compiled for the QWRA Highly Valued Resources and Assets (HVRAs).
PODs were selected from the Forest Service national feature service dataset on 3/21/2024. FSIM ignitions and perimeter data were based on the 12/15/2022 event set. HVRA point data was converted to a grid representing number of structures per grid cell (sum). HVRA line and polygon data was also converted to a grid where each grid cell was given a value of one. Grid resolution was ninety meters. Only ignitions-perimeters originating within PODs that intersected the analysis area were utilized.
Potential Control Location Prioritization
Potential Control Lines were prioritized by summing the perimeter HVRA counts (the number of values intersecting each perimeter) for all perimeters that intersected each PCL, then normalizing/rescaling with the same process as above. Note that flow direction is not considered in this process; that is, a PCL received the HVRA perimeter sum even if it was not located between the ignition and the impacted HVRA.
Potential Operational Delineation Prioritization
For each HVRA value, grid zonal statistics were performed on each perimeter to obtain the total impacted, regardless of land ownership or analysis area boundary. The sum of structures per ignition (perimeter) was then divided by the total number of ignition iterations to normalize across Fire Occurrence Areas (FOAs).
Finally, the number of structures per ignition iteration was summed by POD, divided by the total number of ignitions within the POD to normalize across PODs, and multiplied by ten thousand (the iteration count of the majority of FOAs) to rescale the value to represent a relative number of HVRA impacted per uncontrolled fire per POD. This value was then rescaled again to between zero and one so that HVRA could be combined based on relative importance and/or HVRA type.
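A minimal sketch of the POD-level normalization and rescaling arithmetic described above, using pandas; the table layout, column names, and example values are illustrative assumptions, not the Forest Service workflow itself:
import pandas as pd
# Hypothetical table: one row per simulated ignition (perimeter), with its POD,
# summed HVRA impacts, and the FSIM iteration count for its Fire Occurrence Area.
df = pd.DataFrame({
    "pod_id": [1, 1, 2, 2],
    "hvra_sum": [120.0, 80.0, 10.0, 0.0],
    "foa_iterations": [10000, 10000, 10000, 10000],
})
# Normalize each ignition's impact by its FOA iteration count
df["hvra_per_iteration"] = df["hvra_sum"] / df["foa_iterations"]
# Sum by POD, divide by ignition count per POD, and scale by ten thousand iterations
per_pod = df.groupby("pod_id").agg(total=("hvra_per_iteration", "sum"),
                                   n_ignitions=("hvra_per_iteration", "size"))
per_pod["hvra_per_fire"] = per_pod["total"] / per_pod["n_ignitions"] * 10_000
# Rescale to 0-1 so different HVRAs can be combined by relative importance
rng = per_pod["hvra_per_fire"].max() - per_pod["hvra_per_fire"].min()
per_pod["score_01"] = (per_pod["hvra_per_fire"] - per_pod["hvra_per_fire"].min()) / rng
print(per_pod)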
2023 PNW QWRA Methods Report: https://oe.oregonexplorer.info/externalcontent/wildfire/PNW_QWRA_2023Methods.pdf
Primary Data Contact: Ian Rickert, Regional Fire Planner, Forest Service R6/R10, ian.rickert@usda.gov
Citations:
McEvoy, Andy; Dunn, Christopher; Rickert, Ian. 2023 PNW Quantitative Wildfire Risk Assessment Methods (2023). [Unpublished Manuscript].
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The climate data set was compiled monthly for Bangladesh, from January 1961 to December 2022; it was generated from the Central Climate Information Management System of the BARC. The initial data consisted of measurements from 35 weather stations, covering a multitude of weather parameters that include solar radiation, potential evaporation (PE), evapotranspiration (ETo), maximum temperature, rainfall, humidity, wind speed, cloud cover, and sunshine duration. In the scope of the research titled "Redefining Multi-Target Weather Forecasting with a Novel Deep Learning Model: HTC-LSTM-Attn in Bangladesh," the dataset underwent several pre-processing steps to ensure its quality and suitability for deep learning-based forecasting. Some of these were:
- Data consolidation: Multiple CSV files (solar radiation, PET, sunshine, wind speed, cloud coverage, humidity, rainfall, and temperature) were merged into one dataset keyed by station code, year, and month.
- Station filtering: Eleven stations were excluded due to incomplete or unreliable records, retaining 24 stations representing various climate regions.
- Outlier treatment: Anomalies were detected with the interquartile range (IQR) method, and such values were replaced with the nearest valid value for the same station.
- Missing value imputation: Gaps were filled using k-nearest neighbours (k=5).
- Feature engineering: Seasonal indicators, lag features, and rolling averages were added to account for temporal dependencies.
- Feature selection: Redundancy was reduced by removing highly correlated variables (Pearson's r > 0.9).
- Normalization: Numerical columns were scaled to between 0 and 1 using statistics calculated over the training set.
Usage: The processed dataset is optimized for deep learning-based weather forecasting models such as HTC-LSTM-Attn, but it can also be used for climate trend analysis, seasonal prediction, and meteorological research.
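A minimal sketch of two of the pre-processing steps above (IQR-based outlier handling and 0-1 normalization fitted on training rows), assuming a per-station pandas DataFrame df and a hypothetical train_idx selector; this illustrates the described steps, not the authors' actual code:
import pandas as pd
from sklearn.impute import KNNImputer
from sklearn.preprocessing import MinMaxScaler

def clip_outliers_iqr(s: pd.Series) -> pd.Series:
    # Approximates the "nearest valid value" rule by clipping to the IQR fences
    q1, q3 = s.quantile([0.25, 0.75])
    iqr = q3 - q1
    return s.clip(lower=q1 - 1.5 * iqr, upper=q3 + 1.5 * iqr)

def preprocess(df: pd.DataFrame, train_idx) -> pd.DataFrame:
    num = df.select_dtypes("number").apply(clip_outliers_iqr)
    # k-nearest-neighbour imputation (k=5) for remaining gaps
    num = pd.DataFrame(KNNImputer(n_neighbors=5).fit_transform(num),
                       columns=num.columns, index=num.index)
    # Min-max statistics from training rows only, applied to all rows
    scaler = MinMaxScaler().fit(num.loc[train_idx])
    return pd.DataFrame(scaler.transform(num), columns=num.columns, index=num.index)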
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Normalization
# Generate a resting state (rs) timeseries (ts)
# Install / load package to make fake fMRI ts
# install.packages("neuRosim")
library(neuRosim)
# Generate a ts
ts.rs <- simTSrestingstate(nscan=2000, TR=1, SNR=1)
# 3dDetrend -normalize
# R command version for 3dDetrend -normalize -polort 0 which normalizes by making "the sum-of-squares equal to 1"
# Do for the full timeseries
ts.normalised.long <- (ts.rs-mean(ts.rs))/sqrt(sum((ts.rs-mean(ts.rs))^2));
# Do this again for a shorter version of the same timeseries
ts.shorter.length <- length(ts.normalised.long)/4
ts.normalised.short <- (ts.rs[1:ts.shorter.length]- mean(ts.rs[1:ts.shorter.length]))/sqrt(sum((ts.rs[1:ts.shorter.length]- mean(ts.rs[1:ts.shorter.length]))^2));
# By looking at the summaries, it can be seen that the median values become larger
summary(ts.normalised.long)
summary(ts.normalised.short)
# Plot results for the long and short ts
# Truncate the longer ts for plotting only
ts.normalised.long.made.shorter <- ts.normalised.long[1:ts.shorter.length]
# Give the plot a title
title <- "3dDetrend -normalize for long (blue) and short (red) timeseries";
plot(x=0, y=0, main=title, xlab="", ylab="", xaxs='i', xlim=c(1,length(ts.normalised.short)), ylim=c(min(ts.normalised.short),max(ts.normalised.short)));
# Add zero line
lines(x=c(-1,ts.shorter.length), y=rep(0,2), col='grey');
# 3dDetrend -normalize -polort 0 for long timeseries
lines(ts.normalised.long.made.shorter, col='blue');
# 3dDetrend -normalize -polort 0 for short timeseries
lines(ts.normalised.short, col='red');
Standardization/modernization
New afni_proc.py command line
afni_proc.py \
-subj_id "$sub_id_name_1" \
-blocks despike tshift align tlrc volreg mask blur scale regress \
-radial_correlate_blocks tcat volreg \
-copy_anat anatomical_warped/anatSS.1.nii.gz \
-anat_has_skull no \
-anat_follower anat_w_skull anat anatomical_warped/anatU.1.nii.gz \
-anat_follower_ROI aaseg anat freesurfer/SUMA/aparc.a2009s+aseg.nii.gz \
-anat_follower_ROI aeseg epi freesurfer/SUMA/aparc.a2009s+aseg.nii.gz \
-anat_follower_ROI fsvent epi freesurfer/SUMA/fs_ap_latvent.nii.gz \
-anat_follower_ROI fswm epi freesurfer/SUMA/fs_ap_wm.nii.gz \
-anat_follower_ROI fsgm epi freesurfer/SUMA/fs_ap_gm.nii.gz \
-anat_follower_erode fsvent fswm \
-dsets media_?.nii.gz \
-tcat_remove_first_trs 8 \
-tshift_opts_ts -tpattern alt+z2 \
-align_opts_aea -cost lpc+ZZ -giant_move -check_flip \
-tlrc_base "$basedset" \
-tlrc_NL_warp \
-tlrc_NL_warped_dsets \
anatomical_warped/anatQQ.1.nii.gz \
anatomical_warped/anatQQ.1.aff12.1D \
anatomical_warped/anatQQ.1_WARP.nii.gz \
-volreg_align_to MIN_OUTLIER \
-volreg_post_vr_allin yes \
-volreg_pvra_base_index MIN_OUTLIER \
-volreg_align_e2a \
-volreg_tlrc_warp \
-mask_opts_automask -clfrac 0.10 \
-mask_epi_anat yes \
-blur_to_fwhm -blur_size $blur \
-regress_motion_per_run \
-regress_ROI_PC fsvent 3 \
-regress_ROI_PC_per_run fsvent \
-regress_make_corr_vols aeseg fsvent \
-regress_anaticor_fast \
-regress_anaticor_label fswm \
-regress_censor_motion 0.3 \
-regress_censor_outliers 0.1 \
-regress_apply_mot_types demean deriv \
-regress_est_blur_epits \
-regress_est_blur_errts \
-regress_run_clustsim no \
-regress_polort 2 \
-regress_bandpass 0.01 1 \
-html_review_style pythonic
We used similar command lines to generate the "blurred and not censored" and the "not blurred and not censored" timeseries files (described more fully below). We will make the code used to create all derivative files available on our GitHub site (https://github.com/lab-lab/nndb). We made one choice above that is different enough from our original pipeline that it is worth mentioning here. Specifically, we have quite long runs, with the average being ~40 minutes, but this number can be variable (thus leading to the above issue with 3dDetrend's -normalize). A discussion on the AFNI message board with one of our team (starting here: https://afni.nimh.nih.gov/afni/community/board/read.php?1,165243,165256#msg-165256) led to the suggestion that '-regress_polort 2' with '-regress_bandpass 0.01 1' be used for long runs. We had previously used only a variable polort with the suggested 1 + int(D/150) approach. Our new polort 2 + bandpass approach has the added benefit of working well with afni_proc.py.
Which timeseries file you use is up to you, but I have been encouraged by Rick and Paul to include a sort of PSA about this. In Paul's own words:
- Blurred data should not be used for ROI-based analyses (and potentially not for ICA? I am not certain about standard practice).
- Unblurred data for ISC might be pretty noisy for voxelwise analyses, since blurring should effectively boost the SNR of active regions (and even good alignment won't be perfect everywhere).
- For uncensored data, one should be concerned about motion effects being left in the data (e.g., spikes in the data).
- For censored data:
  - Performing ISC requires the users to unionize the censoring patterns during the correlation calculation.
  - If wanting to calculate power spectra or spectral parameters like ALFF/fALFF/RSFA etc. (which some people might still do for naturalistic tasks), then standard FT-based methods can't be used because sampling is no longer uniform. Instead, people could use something like 3dLombScargle+3dAmpToRSFC, which calculates power spectra (and RSFC params) based on a generalization of the FT that can handle non-uniform sampling, as long as the censoring pattern is mostly random and, say, only up to about 10-15% of the data.
In sum, think very carefully about which files you use. If you find you need a file we have not provided, we can happily generate different versions of the timeseries upon request and can generally do so in a week or less.
Effect on results
Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
We used X-ray fluorescence (XRF) scanning on Site U1338 sediments from Integrated Ocean Drilling Program Expedition 321 to measure sediment geochemical compositions at 2.5 cm resolution for the 450 m of the Site U1338 spliced sediment column. This spatial resolution is equivalent to ~2 k.y. age sampling in the 0-5 Ma section and ~1 k.y. resolution from 5 to 17 Ma. Here we report the data and describe data acquisition conditions to measure Al, Si, K, Ca, Ti, Fe, Mn, and Ba in the solid phase. We also describe a method to convert the data from volume-based raw XRF scan data to a normalized mass measurement ready for calibration by other geochemical methods. Both the raw and normalized data are reported along the Site U1338 splice.
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
{ 0: 'butterfly', 1: 'cat', 2: 'chicken', 3: 'cow', 4: 'dog', 5: 'elephant', 6: 'horse', 7: 'sheep', 8: 'spider', 9: 'squirrel' }
Load data (PyTorch)
```
import numpy as np
import torch
from torch.utils.data import Dataset as DT  # DT is assumed to be torch.utils.data.Dataset

class Dataset(DT):
    def __init__(self, mode="train", size=224, augment=False, augment_rate=0.2, normalize=False, random_state=42):
        super(Dataset, self).__init__()
        self.mode = mode
        self.size = size
        self.augment = augment
        self.augment_rate = augment_rate
        self.normalize = normalize
        self.random_state = random_state
        self.X = []
        self.Y = []
        self.labels = {
            0: 'butterfly',
            1: 'cat',
            2: 'chicken',
            3: 'cow',
            4: 'dog',
            5: 'elephant',
            6: 'horse',
            7: 'sheep',
            8: 'spider',
            9: 'squirrel'
        }
        self.load_data()

    def load_data(self):
        # Load pre-exported numpy arrays for the requested split
        if self.mode == "train":
            self.X = np.load("./data/trainX.npy")
            self.Y = np.load("./data/trainY.npy")
        else:
            self.X = np.load("./data/testX.npy")
            self.Y = np.load("./data/testY.npy")
        print(self.mode, "====", self.X.shape[0])

    def __len__(self):
        return self.X.shape[0]

    def __getitem__(self, index):
        # Return a (C, H, W) float tensor scaled to [0, 1] and a one-hot label vector
        image = self.X[index]
        label = [0] * 10
        label[self.Y[index]] = 1
        return torch.tensor(image, dtype=torch.float32).permute(2, 0, 1) / 255.0, torch.tensor(label, dtype=torch.float32)

    def sample(self, index=0):
        # Return the raw numpy image and one-hot label (useful for visual checks)
        image = self.X[index]
        label = [0] * 10
        label[self.Y[index]] = 1
        return image, label
```
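A short usage sketch for the class above, assuming the ./data/*.npy files exist:
```
from torch.utils.data import DataLoader

train_set = Dataset(mode="train", size=224)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
images, labels = next(iter(train_loader))
print(images.shape, labels.shape)  # (batch, channels, H, W) and (batch, 10) one-hot labels
```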
Database Contents License (DbCL) v1.0: http://opendatacommons.org/licenses/dbcl/1.0/
Project Description:
In this project, I developed a linear regression model to predict car prices based on key features such as fuel tank capacity, width, length, and year of manufacture. The goal was to understand how these factors influence car prices and to assess the effectiveness of the model in making accurate predictions.
Key Features:
- Fuel Tank Capacity: The capacity of the car's fuel tank.
- Width: The width of the car.
- Length: The length of the car.
- Year: The year of manufacture of the car.
Target Variable:
Price: The price of the car, which is the primary variable being predicted.
Methodology:
Data Preparation:
Model Training:
Feature Scaling:
Evaluation:
Visualization:
Results:
Technologies Used:
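A minimal sketch of the workflow outlined above, assuming scikit-learn and hypothetical column names ('fuel_tank_capacity', 'width', 'length', 'year', 'price'); it illustrates the described approach, not the project's actual code:
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import r2_score, mean_absolute_error

# Hypothetical file and column names
df = pd.read_csv("cars.csv")
X = df[["fuel_tank_capacity", "width", "length", "year"]]
y = df["price"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Feature scaling fitted on the training split only
scaler = StandardScaler().fit(X_train)
model = LinearRegression().fit(scaler.transform(X_train), y_train)

pred = model.predict(scaler.transform(X_test))
print("R^2:", r2_score(y_test, pred))
print("MAE:", mean_absolute_error(y_test, pred))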
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
https://www.kaggle.com/datasets/tunguz/big-five-personality-test
https://colab.research.google.com/drive/1ZsS76ZsRjcL1tg_YvqEB_WlzvlmsiinP?usp=sharing
Challenges with the original dataset:
- Lack of Labels: The original dataset did not categorize the responses into specific personality traits, making it impossible to directly train a supervised machine learning model.
- Complexity in Interpretation: Although raw scores range from 1 to 5, they were not directly interpretable as personality traits, since different numbers of positively and negatively keyed questions meant the maximum score for each trait was different.
To overcome these challenges, I undertook the following process to convert this unlabelled data into a labelled format:
- Scoring Mechanism: I calculated scores for each of the five personality traits based on the respondent's answers to relevant questions. For each trait, a total score was computed by summing the individual question scores, taking into account whether the question was positively or negatively keyed.
- Normalization and Scaling: To ensure consistency and comparability across traits, I applied a Min-Max Scaler to normalize the scores to a range of 0 to 1 (see the sketch after this list). This step was crucial for creating uniform labels that could be used effectively in machine learning models.
- Label Assignment: Based on the scaled scores, I assigned labels to each respondent, categorizing them from highest to lowest for each personality trait.
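A minimal sketch of the scoring and min-max scaling steps for one trait; the file name, separator, item columns, and positive/negative keying shown here are assumptions for illustration, not a specification of the dataset:
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

# Hypothetical: extraversion items EXT1..EXT10 with an example keying scheme
pos_keyed = ["EXT1", "EXT3", "EXT5", "EXT7", "EXT9"]
neg_keyed = ["EXT2", "EXT4", "EXT6", "EXT8", "EXT10"]

df = pd.read_csv("data-final.csv", sep="\t")  # path and separator are assumptions
# Reverse-score negatively keyed items (answers are on a 1-5 scale), then sum per respondent
ext_score = df[pos_keyed].sum(axis=1) + (6 - df[neg_keyed]).sum(axis=1)
# Min-max scale the trait score to 0-1 to produce a uniform label
df["extraversion"] = MinMaxScaler().fit_transform(ext_score.to_frame()).ravel()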
The labelled data can play a pivotal role in training various machine learning algorithms to predict personality traits from new user responses. By transforming the dataset, the user can:
- Develop a supervised learning model: the labels enable classification algorithms, such as Logistic Regression and Support Vector Machines, to predict personality traits with high accuracy.
- Cluster for insights: the user can also apply clustering algorithms to the scaled data to uncover patterns and group users with similar personality profiles, enhancing the interpretability of the model outputs.