Eagle-Mix Dataset
Dataset Description
Eagle-Mix is a comprehensive mixed dataset created for training Eagle models. It combines high-quality conversational data from multiple sources to provide diverse training examples.
Dataset Composition
The dataset is composed of the following sources:
Dataset             Count        Mean Length   Median Length   Max Length
ShareGPT            68,623       6,128         6,445           93,262
UltraChat           207,865      5,686         5,230           53,213
OpenThoughts2-1M    1,143,205    16,175        10…

See the full description on the dataset page: https://huggingface.co/datasets/Qinghao/eagle-mix.
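From the composition table, the total row count and each source's share can be derived directly (a quick sanity check on the listed numbers, not a figure from the dataset card):

```python
# Row counts for the three sources listed in the composition table.
sources = {
    "ShareGPT": 68_623,
    "UltraChat": 207_865,
    "OpenThoughts2-1M": 1_143_205,
}

total = sum(sources.values())
shares = {name: count / total for name, count in sources.items()}

print(total)   # 1419693 examples across the listed sources
print(shares)  # OpenThoughts2-1M dominates at roughly 80% of rows
```

The dataset itself can presumably be loaded with the Hugging Face `datasets` library via `load_dataset("Qinghao/eagle-mix")`; see the dataset page for the actual schema.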
This document outlines the copyright information and usage rights for training materials developed by Eagle Technology in Content Studio Online for use in New Zealand and the South Pacific. Updated in 2025.
The automated recognition of different vehicle classes and their orientation on aerial images is an important task in the field of traffic research and also finds applications in disaster management, among other things. For the further development of corresponding algorithms that deliver reliable results not only under laboratory conditions but also in real scenarios, training data sets that are as extensive and versatile as possible play a decisive role. For this purpose, we present our dataset EAGLE (oriEnted vehicle detection using Aerial imaGery in real-worLd scEnarios).
The EAGLE dataset supports detecting vehicles of different classes, including their orientation, in aerial images. It contains high-resolution aerial images covering different real-world situations with different acquisition sensors, angles and times, flight altitudes, resolutions (5-45 cm ground pixel size), weather and lighting conditions, as well as urban and rural acquisition regions, acquired between 2006 and 2019. EAGLE contains 215,986 annotated vehicles on 318 aerial images, split into small vehicles (cars, vans, transporters, SUVs, ambulances, police vehicles) and large vehicles (trucks, large trucks, minibuses, buses, fire engines, construction vehicles, trailers), each with an oriented bounding box defined by four points. The annotation contains the coordinates of all four vehicle corners, as well as an orientation between 0° and 360° indicating the angle of the vehicle tip. In addition, for each example, the visibility (fully/partially/weakly visible) and the detectability of the vehicle's orientation (clear/unclear) are indicated.
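The tip angle can be recovered from the four corner points. A minimal sketch, assuming the first two corners span the front (tip) edge of the vehicle; the real EAGLE annotation specification defines its own corner ordering and angle convention, so both are labeled assumptions here:

```python
import math

def heading_deg(corners):
    """Estimate a vehicle heading from an oriented bounding box.

    `corners` is a list of four (x, y) points. This sketch ASSUMES the
    first two points span the front (tip) edge -- a hypothetical ordering,
    not the official EAGLE one. The angle is returned in [0, 360),
    measured clockwise from "up" in image coordinates (y grows downward).
    """
    cx = sum(x for x, _ in corners) / 4.0
    cy = sum(y for _, y in corners) / 4.0
    # Midpoint of the assumed front edge.
    fx = (corners[0][0] + corners[1][0]) / 2.0
    fy = (corners[0][1] + corners[1][1]) / 2.0
    # atan2 with the image y axis flipped so the tip pointing up gives 0 deg.
    ang = math.degrees(math.atan2(fx - cx, cy - fy))
    return ang % 360.0
```

For an axis-aligned box with its front edge on top, this returns 0°; rotating the box a quarter turn so the front edge faces right returns 90°.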
Financial overview and grant giving statistics of Eagles International Training Institute
https://www.kappasignal.com/p/legal-disclaimer.html
This analysis presents a rigorous exploration of financial data, incorporating a diverse range of statistical features. By providing a robust foundation, it facilitates advanced research and innovative modeling techniques within the field of finance.
Historical daily stock prices (open, high, low, close, volume)
Fundamental data (e.g., market capitalization, price-to-earnings (P/E) ratio, dividend yield, earnings per share (EPS), price-to-earnings growth, debt-to-equity ratio, price-to-book ratio, current ratio, free cash flow, projected earnings growth, return on equity, dividend payout ratio, price-to-sales ratio, credit rating)
Technical indicators (e.g., moving averages, RSI, MACD, average directional index, Aroon oscillator, stochastic oscillator, on-balance volume, accumulation/distribution (A/D) line, parabolic SAR, Bollinger Bands, Fibonacci retracement, Williams %R, commodity channel index)
Feature engineering based on financial data and technical indicators
Sentiment analysis data from social media and news articles
Macroeconomic data (e.g., GDP, unemployment rate, interest rates, consumer spending, building permits, consumer confidence, inflation, producer price index, money supply, home sales, retail sales, bond yields)
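As an illustration of the technical-indicators bucket above, a simple moving average and a Wilder-style RSI can be computed from a daily close series. A generic sketch; nothing here reflects this dataset's actual schema or column names:

```python
def sma(closes, window):
    """Simple moving average over the last `window` closes (None until filled)."""
    out = []
    for i in range(len(closes)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(closes[i + 1 - window:i + 1]) / window)
    return out

def rsi(closes, period=14):
    """Relative Strength Index using Wilder's smoothing of average gain/loss."""
    if len(closes) <= period:
        return None
    gains, losses = [], []
    for prev, cur in zip(closes, closes[1:]):
        change = cur - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    for g, l in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + g) / period
        avg_loss = (avg_loss * (period - 1) + l) / period
    if avg_loss == 0:
        return 100.0  # no down moves in the window
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)
```

A strictly rising series saturates RSI at 100 and a strictly falling one at 0, which is a quick way to check the implementation.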
Stock price prediction
Portfolio optimization
Algorithmic trading
Market sentiment analysis
Risk management
Researchers investigating the effectiveness of machine learning in stock market prediction
Analysts developing quantitative buy/sell trading strategies
Individuals interested in building their own stock market prediction models
Students learning about machine learning and financial applications
The dataset may include different levels of granularity (e.g., daily, hourly)
Data cleaning and preprocessing are essential before model training
Regular updates are recommended to maintain the accuracy and relevance of the data
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
Dataset Card for EagleX v2 Dataset
This dataset was used to train RWKV Eagle 7B, continuing pretraining for approximately 1.1T additional tokens (boosting the total to 2.25T), with the final model released as RWKV EagleX v2.
Dataset Details
Dataset Description
EagleX-WorldContinued is a pretraining dataset built from many of our datasets at Recursal AI, plus a few others.
Curated by: M8than, KaraKaraWitch, Darok
Funded by [optional]: Recursal.ai
Shared by [optional]: …

See the full description on the dataset page: https://huggingface.co/datasets/RWKV/EagleX-WorldContinued.
https://choosealicense.com/licenses/wtfpl/
A dataset of 2,126 1-turn conversations artificially generated using GPT-4, designed to fit the tone of the Discord bot Mr. Eagle. This dataset was used to train MrEagle-LoRA.
Learn, reconnect, and discover the latest advances in Geographic Information Systems (GIS) technology when the New Zealand Esri User Conference returns in person. Join hundreds of users from around New Zealand and the South Pacific to discover how they're leveraging GIS capabilities to solve problems, create shared understanding, and map common ground. This year's 3-day event includes not-to-be-missed opportunities for training, networking, and sharing your own stories and experiences. A 2-day option is available for those short on time, while a 4-day option includes discounted instructor-led training for migrating to ArcGIS Pro.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
This dataset includes a collection of image samples of Jacaranda, Palm and others. They were clipped from Eagle Aerial images of Orange County, California. These samples have been used for training a deep learning model to classify Jacaranda, and can also be used to train a model for Palm.
USGS is assessing the feasibility of map projections and grid systems for lunar surface operations. We propose developing a new Lunar Transverse Mercator (LTM), the Lunar Polar Stereographic (LPS), and the Lunar Grid Reference System (LGRS). We have also designed additional grids to meet NASA requirements for astronaut navigation, referred to as LGRS in Artemis Condensed Coordinates (ACC). This data release includes LGRS grids finer than 25 km (1 km, 100 m, and 10 m) in ACC format for a small number of terrestrial analog sites of interest. The grids contained in this data release are projected in the terrestrial Universal Transverse Mercator (UTM) Projected Coordinate Reference System (PCRS) using the World Geodetic System of 1984 (WGS84) as the reference datum. A small number of geotiffs used to relate the linear distortion that the UTM and WGS84 systems impose on the analog sites include: 1) a clipped USGS National Elevation Dataset (NED) Digital Elevation Model (DEM); 2) the grid scale factor of the UTM zone the data is projected in; 3) the height factor based on the USGS NED DEM; 4) the combined factor; and 5) linear distortion calculated in parts per million (PPM). Geotiffs are projected from WGS84 in a UTM PCRS zone. Distortion calculations are based on the methods of the State Plane Coordinate System of 2022. See Dennis (2021; https://www.fig.net/resources/proceedings/fig_proceedings/fig2023/papers/cinema03/CINEMA03_dennis_12044.pdf) for more information. Coarser grids (>= 25 km), such as the lunar LTM, LPS, and LGRS grids, are not released here but may be accessed from https://doi.org/10.5066/P13YPWQD and displayed using a lunar datum. LTM, LPS, and LGRS are similar in design and use to the Universal Transverse Mercator (UTM), Universal Polar Stereographic (UPS), and Military Grid Reference System (MGRS), but adhere to NASA requirements. LGRS ACC format is similar in design and structure to the historic Army Map Service Apollo orthotopophoto charts for navigation.
Terrestrial locations and associated LGRS ACC grids and files. Each location includes 1 km, 100 m, and 10 m grid shapefiles, a USGS 1/3" DEM geotiff, and UTM projection scale factor, height factor, combined factor, and linear distortion geotiff maps:
UTM 11N: Yucca Flat
UTM 12N: Buffalo Park, Cinder Lake, JETT3 Arizona, JETT5 Arizona, Meteor Crater
UTM 13N: HAATS (with additional 1 km, 100 m, and 10 m grid shapefiles clipped to Derby LZ, Eagle County Regional Airport (KEGE), and Windy Point LZ)
UTM 15N: Johnson Space Center
UTM 28N: JETT2 Icelandic Highlands
The shapefiles and rasters utilize UTM projections. For GIS use, grid shapefiles projected in lunar latitude and longitude should utilize a registered PCRS. To select the correct UTM EPSG code, determine the zone based on longitude (zones are 6° wide, numbered 1-60 from 180°W) and hemisphere (the Northern Hemisphere uses EPSG:326XX; the Southern Hemisphere uses EPSG:327XX), where XX is the zone number. For display in latitude and longitude, select a correct WGS84 EPSG code, such as EPSG:4326. Note: The Lunar Transverse Mercator (LTM) projection system is a globalized set of lunar map projections that divides the Moon into zones to provide a uniform coordinate system for accurate spatial representation. It uses a Transverse Mercator projection, which maps the Moon into 45 transverse Mercator strips, each 8° of longitude wide. These strips are subdivided at the lunar equator for a total of 90 zones: forty-five in the northern hemisphere and forty-five in the southern. LTM specifies a topocentric, rectangular coordinate system (easting and northing coordinates) for spatial referencing. This projection is commonly used in GIS and surveying for its ability to represent large areas with high positional accuracy while maintaining consistent scale. The Lunar Polar Stereographic (LPS) projection system contains projection specifications for the Moon's polar regions. It uses a polar stereographic projection, which maps the polar regions onto an azimuthal plane.
The LPS system contains two zones, located at the northern and southern poles and referred to as the LPS northern and LPS southern zones. LPS, like its equatorial counterpart LTM, specifies a topocentric, rectangular coordinate system (easting and northing coordinates) for spatial referencing. This projection is commonly used in GIS and surveying for its ability to represent large polar areas with high positional accuracy while maintaining consistent scale across the map region. LGRS is a globalized grid system for lunar navigation supported by the LTM and LPS projections. LGRS provides an alphanumeric grid coordinate structure for both the LTM and LPS systems; this labeling structure is utilized similarly to MGRS. LGRS defines a global area grid based on latitude and longitude and a 25 × 25 km grid based on LTM and LPS coordinate values. Two implementations of LGRS are used, as polar areas require an LPS projection and equatorial areas a Transverse Mercator; we describe the differences between the techniques and methods in this data release. See McClernan et al. (in press) for more information. ACC is a method of simplifying LGRS coordinates and is similar in use to the Army Map Service Apollo orthotopophoto charts for navigation. These grids are designed to condense a full LGRS coordinate to a relative coordinate six characters in length. LGRS in ACC format is produced by imposing a 1 km grid within the 25 km LGRS grid, then truncating the grid precision to 10 m. To meet the character limit, a coordinate is reported relative to the lower-left corner of the 25 km LGRS zone, without the zone information; however, zone information can be reported. As implemented, any 25 km × 25 km area on the lunar surface has a unique set of ACC coordinates for reporting locations. The shapefiles provided in this data release are projected in the LTM or LPS PCRSs and must utilize these projections to be dimensioned correctly.
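The UTM EPSG selection rule stated in this entry (6°-wide zones numbered from 180°W, 326XX for the Northern Hemisphere, 327XX for the Southern) can be sketched as a small helper. This follows the plain 6° rule only; it is not part of the data release:

```python
def utm_epsg(lon_deg: float, northern: bool) -> int:
    """Return the WGS84 UTM EPSG code for a given longitude and hemisphere.

    Zones are 6 degrees wide, numbered 1-60 starting at 180 degrees W.
    EPSG codes are 326XX (north) or 327XX (south), where XX is the zone.
    """
    zone = int((lon_deg + 180.0) // 6) + 1
    zone = min(zone, 60)  # lon == 180.0 wraps into zone 60
    return (32600 if northern else 32700) + zone
```

For example, a longitude near Yucca Flat (about 116°W) lands in zone 11, giving EPSG:32611, consistent with the UTM 11N grouping above.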
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Critical components of successful evaluation of clinical outcome assessments (COAs) in multisite clinical trials and clinical practice are standardized training, administration, and documented reliability of scoring. Experiences of evaluators, alongside patient differences from regional standards of care, may contribute to heterogeneity in clinical centers' expertise. Achieving low variability and high reliability of COAs is fundamental to clinical research and gives confidence in our ability to draw rational, interpretable conclusions from the data collected. The objective of this manuscript is to provide a framework to guide the learning process for COAs for use in clinics and clinical trials, to maximize reliability and validity of COAs in neuromuscular disease (NMD). This is a consensus-based guideline with contributions from fourteen leading experts in clinical outcomes and in clinical outcome training in NMD. This framework should guide reliable and valid assessments in NMD specialty clinics and clinical trials. This consensus aims to expedite study start-up with a progressive training pathway ranging from research-naïve to highly experienced clinical evaluators. This document includes recommendations for education guidelines and for the roles and responsibilities of key stakeholders in COA assessment and implementation, to ensure quality and consistency of outcome administration across different settings.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This New Zealand Point Cloud Classification Deep Learning Package will classify point clouds into building and background classes. This model is optimized to work with New Zealand aerial LiDAR data. The classification of point cloud datasets to identify buildings is useful in applications such as high-quality 3D basemap creation, urban planning, and climate change response planning. Buildings can have complex, irregular geometrical structures that are hard to capture using traditional means. Deep learning models are highly capable of learning these complex structures and giving superior results. This model is designed to extract buildings in both urban and rural areas in New Zealand. The training/testing/validation datasets were taken within New Zealand, resulting in high reliability in recognizing the patterns of common NZ building architecture.
Licensing requirements
ArcGIS Desktop - ArcGIS 3D Analyst extension for ArcGIS Pro
Using the model
The model can be used in ArcGIS Pro's Classify Point Cloud Using Trained Model tool. Before using this model, ensure that the supported deep learning framework libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS. Note: Deep learning is computationally intensive, and a powerful GPU is recommended to process large datasets. The model was trained using a training dataset with the full set of points. Therefore, it is important to make the full set of points available to the neural network while predicting, allowing it to better discriminate points of the class of interest versus background points. It is recommended to use the 'selective/target classification' and 'class preservation' functionalities during prediction to have better control over the classification and over scenarios with false positives. The model was trained on airborne lidar datasets and is expected to perform best with similar datasets.
Classification of terrestrial point cloud datasets may work but has not been validated. For such cases, this pre-trained model may be fine-tuned to save on cost, time, and compute resources while improving accuracy. Another example where fine-tuning this model can be useful is when the object of interest is tram wires, railway wires, etc., which are geometrically similar to electricity wires. When fine-tuning this model, the target training data characteristics, such as class structure, maximum number of points per block, and extra attributes, should match those of the data originally used for training this model (see the Training data section below).
Output
The model will classify the point cloud into the following classes, with their meaning as defined by the American Society for Photogrammetry and Remote Sensing (ASPRS):
0 Background
6 Building
Applicable geographies
The model is expected to work well in New Zealand. It has produced favorable results in many regions; however, results can vary for datasets that are statistically dissimilar to the training data.
Training dataset - Auckland, Christchurch, Kapiti, Wellington
Testing dataset - Auckland, Wellington
Validation/Evaluation dataset - Hutt City
Model architecture
This model uses the SemanticQueryNetwork model architecture implemented in ArcGIS Pro.
Accuracy metrics
The table below summarizes the accuracy of the predictions on the validation dataset.
Class              Precision   Recall     F1-score
Never Classified   0.984921    0.975853   0.979762
Building           0.951285    0.967563   0.9584
Training data
This model is trained on a classified dataset originally provided by OpenTopography, with < 1% manual labelling and correction. Train-test split: {Train: 75%, Test: 25%}; this ratio was chosen based on analysis of previous epoch statistics, which showed a decent improvement. The training data used has the following characteristics:
X, Y, and Z linear unit: meter
Z range: -137.74 m to 410.50 m
Number of returns: 1 to 5
Intensity: 16 to 65520
Point spacing: 0.2 ± 0.1
Scan angle: -17 to +17
Maximum points per block: 8192
Block size: 50 meters
Class structure: [0, 6]
Sample results
Model used to classify the Wellington city dataset with 23 pts/m density. The model's performance is directly proportional to the dataset's point density and to point clouds with noise excluded. To learn how to use this model, see this story.
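The F1-score is the harmonic mean of precision and recall; recomputing it from the reported per-class precision and recall reproduces the table values to within about 0.001 (the small residual is presumably rounding in the source):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall: F1 = 2PR / (P + R)."""
    return 2 * precision * recall / (precision + recall)

# Per-class precision/recall from the accuracy table above.
print(f1_score(0.984921, 0.975853))  # ~0.9804, vs reported 0.979762
print(f1_score(0.951285, 0.967563))  # ~0.9594, vs reported 0.9584
```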
https://www.archivemarketresearch.com/privacy-policy
The Multimodal Affective Computing market is experiencing robust growth, driven by increasing demand for advanced human-computer interaction and the rising adoption of AI across various sectors. The market is projected to reach a value of $5 billion in 2025, exhibiting a Compound Annual Growth Rate (CAGR) of 20% from 2025 to 2033. This significant expansion is fueled by several key factors. Firstly, advancements in machine learning and deep learning algorithms are enabling more accurate and nuanced emotion recognition from multiple input modalities like facial expressions, voice tone, and physiological signals. Secondly, the proliferation of smart devices and the increasing integration of affective computing into everyday technologies like smartphones, wearables, and automobiles are driving market penetration. Finally, the growing need for personalized and empathetic user experiences across sectors like education, healthcare, and customer service is fostering demand for solutions that understand and respond to human emotions. The market segmentation reveals a strong presence across various applications. Education and Training leverage affective computing to personalize learning experiences and provide adaptive feedback. Life and Health applications utilize this technology for mental health monitoring, patient care, and personalized medicine. Business Services benefit from enhanced customer experience and improved employee engagement. Industrial design incorporates it to develop intuitive and user-friendly products. The Technology Media sector is leveraging it for content personalization and targeted advertising, while Public Governance benefits from its potential for improved citizen engagement and public safety. 
While technological limitations and data privacy concerns represent potential restraints, the overall market trajectory suggests sustained growth, driven by continuous innovation and increasing acceptance of AI-powered solutions that prioritize human emotion understanding.
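The projection above implies an end-of-period market size that can be checked directly: $5 billion in 2025 compounded at 20% annually over the eight years to 2033 (a back-of-envelope check, not a figure quoted by the report):

```python
base_2025 = 5.0       # market value in 2025, USD billions
cagr = 0.20           # compound annual growth rate
years = 2033 - 2025   # eight compounding periods

value_2033 = base_2025 * (1 + cagr) ** years
print(round(value_2033, 1))  # 21.5 (USD billions)
```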
This Structures dataset, photogrammetrically compiled and published at a scale of 1" = 100', was produced from aerial imagery collected in April 2023 using an UltraCAM Eagle camera and covers all of Westchester County, NY. It is described as 'struct_poly' and is delivered as a planimetric layer from 2023 imagery, compiled/updated in a 3D environment with average elevation as an attribute. The layer includes antennas, buildings, miscellaneous structures, tanks, towers, train platforms and train stations.
The model file “palm_zip.dlpk” was initially a zip file, renamed to .dlpk to meet the requirement of the ArcGIS deep learning package format. To use the model outside the ArcGIS platform, change “.dlpk” back to “.zip” and extract all. The model was trained by transfer learning, with samples clipped from the ECW-format 2011 Eagle Aerial image of Orange County, on the basis of the Tensorflow.Keras ResNet50 model pretrained on the ImageNet dataset. There are two classes in this model, "others" and "palm", corresponding to 0 and 1 respectively. All palm species, including King Palm, Queen Palm, Mexican Palm, etc., belong to “palm”, while all other tree species and land covers belong to “others”. The training samples were resized to 224 by 224 pixels, so be aware of your image size and content. It is suggested that only those objects whose maximal-prediction-score class is ‘1’ (palm) with a score greater than 0.9 be selected as palm.
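The selection rule above ("class 1 with score greater than 0.9") is a small post-processing step on the model's softmax scores. A minimal sketch: only the thresholding helper below is concrete; the prediction call mentioned in the comment is an illustrative assumption, not this package's API:

```python
def select_palms(scores, palm_class=1, threshold=0.9):
    """Return indices of samples whose top class is `palm_class`
    and whose top score exceeds `threshold`.

    `scores` is a list of per-sample [others, palm] probability pairs,
    as produced by a 2-class softmax model.
    """
    kept = []
    for i, probs in enumerate(scores):
        top = max(range(len(probs)), key=lambda c: probs[c])
        if top == palm_class and probs[top] > threshold:
            kept.append(i)
    return kept

# Illustrative scores; in practice these would come from something like
# model.predict(...) on 224x224 tiles after renaming .dlpk back to .zip
# and loading the extracted Keras model (paths and APIs are assumptions).
scores = [[0.95, 0.05], [0.08, 0.92], [0.30, 0.70]]
print(select_palms(scores))  # [1] -- only the 0.92 palm score passes the 0.9 cutoff
```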
3,859 high-resolution YouTube videos: 2,985 training videos, 421 validation videos, and 453 test videos
An improved 40-category label set, obtained by merging eagle and owl into bird and ape into monkey, deleting hands, and adding flying disc, squirrel, and whale
8,171 unique video instances
232k high-quality manual annotations
The model file “jacaranda_zip.dlpk” was initially a zip file, renamed to .dlpk to meet the requirement of the ArcGIS deep learning package format. To use the model outside the ArcGIS platform, change “.dlpk” back to “.zip” and extract all. The model was trained by transfer learning, with samples clipped from the ECW-format 2011 Eagle Aerial image of Orange County, on the basis of the Tensorflow.Keras ResNet50 model pretrained on the ImageNet dataset. There are two classes in this model, "jacaranda" and "others", corresponding to 0 and 1 respectively. All tree species and land covers other than jacaranda belong to “others”. The training samples were resized to 224 by 224 pixel images, so be aware of your image size and content. It is suggested that only those objects whose maximal-prediction-score class is ‘0’ (jacaranda) with a score greater than 0.97 be selected as jacaranda.