cmomy is a Python package to calculate central moments and co-moments in a numerically stable and direct way. Behind the scenes, cmomy makes use of Numba to rapidly calculate moments. cmomy provides utilities to calculate central moments from individual samples, precomputed central moments, and precomputed raw moments. It also provides routines to perform bootstrap resampling based on raw data or precomputed moments. cmomy has numpy array and xarray DataArray interfaces.
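As a rough illustration of what "numerically stable" means here, the sketch below shows a plain-numpy, Welford-style running update of the mean and second central moment; it is conceptual only and does not use cmomy's own API.

```python
import numpy as np

def update_moments(count, mean, m2, new_value):
    """One Welford-style update: numerically stable running mean and
    second central moment (kept as the sum of squared deviations, m2)."""
    count += 1
    delta = new_value - mean
    mean += delta / count
    m2 += delta * (new_value - mean)  # uses the already-updated mean
    return count, mean, m2

count, mean, m2 = 0, 0.0, 0.0
for x in np.random.default_rng(0).normal(size=1000):
    count, mean, m2 = update_moments(count, mean, m2, x)

variance = m2 / count  # the second central moment
print(mean, variance)
```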
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
Author: Andrew J. Felton
Date: 11/15/2024
This R project contains the primary code and data (following pre-processing in Python) used for data production, manipulation, visualization, analysis, and figure production for the study entitled:
"Global estimates of the storage and transit time of water through vegetation"
Please note that 'turnover' and 'transit' are used interchangeably. Also note that this R project has been updated multiple times as the analysis has evolved throughout the peer review process.
#Data information:
The data folder contains key data sets used for analysis. In particular:
"data/turnover_from_python/updated/august_2024_lc/" contains the core datasets used in this study including global arrays summarizing five year (2016-2020) averages of mean (annual) and minimum (monthly) transit time, storage, canopy transpiration, and number of months of data able as both an array (.nc) or data table (.csv). These data were produced in python using the python scripts found in the "supporting_code" folder. The remaining files in the "data" and "data/supporting_data" folder primarily contain ground-based estimates of storage and transit found in public databases or through a literature search, but have been extensively processed and filtered here. The "supporting_data"" folder also contains annual (2016-2020) MODIS land cover data used in the analysis and contains separate filters containing the original data (.hdf) and then the final process (filtered) data in .nc format. The resulting annual land cover distributions were used in the pre-processing of data in python.
#Code information
Python scripts can be found in the "supporting_code" folder.
Each R script in this project has a role:
"01_start.R": This script sets the working directory, loads in the tidyverse package (the remaining packages in this project are called using the `::` operator), and can run two other scripts: one that loads the customized functions (02_functions.R) and one for importing and processing the key dataset for this analysis (03_import_data.R).
"02_functions.R": This script contains custom functions. Load this using the `source()` function in the 01_start.R script.
"03_import_data.R": This script imports and processes the .csv transit data. It joins the mean (annual) transit time data with the minimum (monthly) transit data to generate one dataset for analysis: annual_turnover_2. Load this using the
`source()` function in the 01_start.R script.
"04_figures_tables.R": This is the main workhouse for figure/table production and supporting analyses. This script generates the key figures and summary statistics used in the study that then get saved in the "manuscript_figures" folder. Note that all maps were produced using Python code found in the "supporting_code"" folder. Also note that within the "manuscript_figures" folder there is an "extended_data" folder, which contains tables of the summary statistics (e.g., quartiles and sample sizes) behind figures containing box plots or depicting regression coefficients.
"supporting_generate_data.R": This script processes supporting data used in the analysis, primarily the varying ground-based datasets of leaf water content.
"supporting_process_land_cover.R": This takes annual MODIS land cover distributions and processes them through a multi-step filtering process so that they can be used in preprocessing of datasets in python.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
The dataset of Python projects used for the study of code change patterns and their automation. The dataset lists 120 projects, divided into four domains — Web, Media, Data, and ML+DL.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
Data for paper 'Hetero-interpenetrated metal-organic frameworks'
A suite of powerful open-source Python libraries can be used to work with spatial data. Learn how to use geopandas, rasterio, and matplotlib to plot and manipulate spatial data in Python.
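A minimal sketch of that workflow, assuming a hypothetical vector file study_area.shp and raster elevation.tif:

```python
import geopandas as gpd
import matplotlib.pyplot as plt
import rasterio
from rasterio.plot import show

# Hypothetical input files; substitute your own vector and raster data.
boundary = gpd.read_file("study_area.shp")

fig, ax = plt.subplots(figsize=(8, 6))
with rasterio.open("elevation.tif") as src:
    show(src, ax=ax, cmap="terrain")                            # raster backdrop
    boundary.to_crs(src.crs).boundary.plot(ax=ax, color="red")  # reprojected overlay
plt.show()
```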
Environmental DNA (eDNA) water samples were collected at 15 tree islands containing wading bird breeding colonies (order Pelecaniformes) and 15 empty control islands in the central Everglades of Florida in spring of 2017 (April through June) and analyzed for the presence of eDNA from invasive Burmese pythons (Python bivittatus). The Burmese python is now established as a breeding population throughout south Florida, USA. Pythons can consume large quantities of prey and may be a particular threat to wading bird breeding colonies in the Everglades. To quantify python occupancy rates at tree islands where wading birds breed, we utilized environmental DNA (eDNA) analysis, a genetic tool which detects shed DNA in water samples and provides high detection probabilities compared to traditional survey methods. We fitted multi-scale Bayesian occupancy models to test the prediction that Burmese pythons occupy islands with wading bird colonies in the central Everglades at higher rates compared to representative control islands in the same region containing no breeding birds.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
Raw data and Python script
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
Python scripts were used to identify interfaces and compute the vapor film thickness. To run the scripts, the unprocessed TIFF images from both the X-ray and visible-light cameras are required. The Python scripts are reported here for transparency.
Order of code execution:
(1) interface.py - Finds liquid-vapor and sphere-fluid interfaces
(2) filmThickness.py - Calculates vapor film thickness
(3) filmCompare.py - Compares vapor film thickness for each trial
(4) make_video.py - Creates a multi-panel video of results
(5) fft_interface.py - Computes the discrete fast Fourier transform
Change in naming convention: A different naming convention was used for storing and processing data from experimental trials than for reporting. For example, we used the name A1_roughSphere_xray_C1S0001 to denote data corresponding to the first experimental trial of the rough sphere. When reporting this information, we instead used the notation RO1 for simplicity. This was done for the smooth sphere and thermoprobe as well.
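As a hedged sketch of the kind of spectral step fft_interface.py performs, the snippet below applies a discrete FFT to a synthetic interface profile; the pixel spacing and profile are stand-in values, not numbers from the experiment.

```python
import numpy as np

dx = 1e-5                                      # assumed pixel spacing (m)
x = np.arange(1024) * dx
profile = 2e-5 * np.sin(2 * np.pi * x / 2e-4)  # synthetic interface height (m)

# FFT of the mean-removed profile and the corresponding spatial frequencies
spectrum = np.fft.rfft(profile - profile.mean())
freqs = np.fft.rfftfreq(profile.size, d=dx)    # cycles per meter
dominant = freqs[np.argmax(np.abs(spectrum))]
print(f"dominant spatial frequency: {dominant:.0f} 1/m")
```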
License: MIT License, https://opensource.org/licenses/MIT
Dataset Description
Overview: This dataset contains three distinct fake datasets generated using the Faker and Mimesis libraries. These libraries are commonly used for generating realistic-looking synthetic data for testing, prototyping, and data science projects. The datasets were created to simulate real-world scenarios while ensuring no sensitive or private information is included.
Data Generation Process: The data creation process is documented in the accompanying notebook, Creating_simple_Sintetic_data.ipynb. This notebook showcases the step-by-step procedure for generating synthetic datasets with customizable structures and fields using the Faker and Mimesis libraries.
File Contents:
Datasets: CSV files containing the three synthetic datasets. Notebook: Creating_simple_Sintetic_data.ipynb detailing the data generation process and the code used to create these datasets.
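For context, generating one record with each library looks roughly like this; the field choices are illustrative, not the datasets' actual schema.

```python
from faker import Faker
from mimesis import Person

fake = Faker()
person = Person()

# One synthetic record per library -- no real personal data involved.
faker_row = {"name": fake.name(), "email": fake.email(), "city": fake.city()}
mimesis_row = {"name": person.full_name(), "email": person.email()}
print(faker_row)
print(mimesis_row)
```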
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
This archive reproduces a table titled "Table 3.1 Boone county population size, 1990 and 2000" from Wang and vom Hofe (2007, p. 58). The archive provides a Jupyter Notebook that uses Python and can be run in Google Colaboratory. The workflow uses the Census API to retrieve data, reproduce the table, and ensure reproducibility for anyone accessing this archive. The Python code was developed in Google Colaboratory (Google Colab for short), an Integrated Development Environment (IDE) for JupyterLab that streamlines package installation, code collaboration, and management. The Census API is used to obtain population counts from the 1990 and 2000 Decennial Census (Summary File 1, 100% data). All downloaded data are maintained in the notebook's temporary working directory while in use. The data are also stored separately with this archive. The notebook features extensive explanations, comments, code snippets, and code output. The notebook can be viewed in PDF format or downloaded and opened in Google Colab. References to external resources are also provided for the various functional components. The notebook features code to perform the following functions:
install/import necessary Python packages
introduce a Census API query
download Census data via the Census API
manipulate Census tabular data
calculate absolute change and percent change
format numbers
export the table to csv
The notebook can be modified to perform the same operations for any county in the United States by changing the State and County FIPS code parameters for the Census API downloads. The notebook could also be adapted for use in other environments (e.g., Jupyter Notebook), as well as for reading and writing files to a local or shared drive, or a cloud drive (e.g., Google Drive).
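As a sketch of the kind of query the notebook issues, the snippet below requests the 2000 total-population count (variable P001001 in Decennial SF1) for one county; the FIPS codes shown (state 21, county 015, i.e. Boone County, Kentucky) are an assumption and may need adjusting.

```python
import requests

# 2000 Decennial Census, Summary File 1; P001001 = total population.
url = "https://api.census.gov/data/2000/dec/sf1"
params = {"get": "P001001", "for": "county:015", "in": "state:21"}  # assumed FIPS
resp = requests.get(url, params=params, timeout=30)
header, row = resp.json()      # first row is the header, second the values
print(dict(zip(header, row)))  # e.g. {'P001001': ..., 'state': '21', 'county': '015'}
```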
Want to keep the data in your Hosted Feature Service current? Not interested in writing a lot of code? Leverage this Python script from the command line, a Windows Scheduled Task, or from within your own code to automate the replacement of data in an existing Hosted Feature Service. It can also be leveraged by your Notebook environment and automatically managed by the MNCD Tool! See the Sampler Notebook that features the OverwriteFS tool run from Online to update a Feature Service. It leverages MNCD to cache the OverwriteFS script for import to the Notebook. A great way to jump start your Feature Service update workflow!
Requirements:
Python v3.x
ArcGIS Python API
Stored Connection Profile, defined by the Python API 'GIS' module. Also accepts 'pro', to specify using the active ArcGIS Pro connection; this requires ArcGIS Pro and Arcpy!
Pre-existing Hosted Feature Service
Capabilities:
Overwrite a Feature Service, refreshing the Service Item and Data
Backup and reapply Service, Layer, and Item properties - New at v2.0.0
Manage Service to Service or Service to Data relationships - New at v2.0.0
Repair lost Service File Item to Service relationships, re-enabling Service Overwrite - New at v2.0.0
'Swap Layer' capability for Views, allowing two Services to support a View, acting in Active and Idle roles during updates - New at v2.0.0
Data Conversion capability, able to invoke following a download and before Service update - New at v2.0.0
Includes 'Rss2Json' Conversion routine, able to read an RSS or GeoRSS source and generate GeoJson for Service update - New at v2.0.0
Renamed 'Rss2Json' to 'Xml2GeoJSON' for its enhanced capabilities; 'Rss2Json' remains for compatibility - Revised at v2.1.0
Added 'Json2GeoJSON' Conversion routine, able to read and manipulate Json or GeoJSON data for Service updates - New at v2.1.0
Can update other File item types like PDF, Word, Excel, and so on - New at v2.1.0
Supports ArcGIS Python API v2.0 - New at v2.1.2
Revisions:
Sep 29, 2021: Long awaited update to v2.0.0!
Sep 30, 2021: v2.0.1, patch to correct Outcome Status when download or Conversion resulted in no change. Also updated documentation.
Oct 7, 2021: v2.0.2, workflow patch correcting Extent update of Views when overwriting a Service, discovered following a recent ArcGIS Online update. Enhancements to the 'datetimeUtil' support script.
Nov 30, 2021: v2.1.0, added new 'Json2GeoJSON' Converter, enhanced 'Xml2GeoJSON' Converter, retired 'Rss2Json' Converter, added new Option Switches 'IgnoreAge' and 'UpdateTarget' for source age control and QA/QC workflows, revised Optimization logic and CRC comparison on downloads.
Dec 1, 2021: v2.1.1, only a patch to Conversion routines: corrected handling of null Z-values in Geometries (discovered immediately following release 2.1.0), improved error trapping while processing rows, and added a deprecation message to the retired 'Rss2Json' conversion routine.
Feb 22, 2022: v2.1.2, patch to detect and re-apply case-insensitive field indexes. Update to allow swapping Layers to a Service without an associated file item. Added cache refresh following updates. Patch to support Python API 2.0 service 'table' property. Patches to 'Json2GeoJSON' and 'Xml2GeoJSON' converter routines.
Sep 5, 2024: v2.1.4, patch for service manager refresh failure issue. Added trace report to Convert execution on exception. Set 'ignore-DataItemCheck' property to True when 'GetTarget' action initiated. Hardened Async job status check. Updated 'overwriteFeatureService' to support GeoPackage type and file item type when item.name includes a period; updated retry loop to try one final overwrite after delete; fixed error stop issue on failed overwrite attempts. Removed restriction on uploading files larger than 2GB. Restores missing 'itemInfo' file on service File items. Corrected false swap success when a view has no layers. Lifted restriction on Overwrite/Swap Layers for OGC. Added 'serviceDescription' to service detail backup. Added 'thumbnail' to item backup/restore logic. Added 'byLayerOrder' parameter to 'swapFeatureViewLayers'. Added 'SwapByOrder' action switch. Patch added to overwriteFeatureService 'status' check. Patch for June 2024 update made to 'managers.overwrite' API script that blocks uploads > 25MB, API v2.3.0.3. Patch to 'overwriteFeatureService' to correctly identify the overwrite file if a service has multiple Service2Data relationships. Includes documentation updates!
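For the Stored Connection Profile requirement above, connecting and fetching the target service with the ArcGIS Python API looks roughly like this; the profile name and item id are placeholders.

```python
from arcgis.gis import GIS

# "myorg" is a placeholder profile saved beforehand, e.g. via
# GIS("https://www.arcgis.com", "user", "password", profile="myorg").
gis = GIS(profile="myorg")
item = gis.content.get("<hosted-feature-service-item-id>")  # placeholder id
print(item.title if item else "item not found")
```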
License: Attribution-NonCommercial 4.0 (CC BY-NC 4.0), https://creativecommons.org/licenses/by-nc/4.0/
Here, we develop and show the use of an open-source Python library to control commercial potentiostats. It standardizes the commands for different potentiostat models, opening the possibility to perform automated experiments independently of the instrument used. At the time of this writing, we have included potentiostats from CH Instruments (models 1205B, 1242B, 601E, and 760E) and PalmSens (model Emstat Pico), although the open-source nature of the library allows for more to be included in the future. To showcase the general workflow and implementation of a real experiment, we have automated the Randles–Ševčík methodology to determine the diffusion coefficient of a redox-active species in solution using cyclic voltammetry. This was accomplished by writing a Python script that includes data acquisition, data analysis, and simulation. The total run time was 1 min and 40 s, well below the time it would take even an experienced electrochemist to apply the methodology in a traditional manner. Our library has potential applications that expand beyond the automation of simple repetitive tasks; for example, it can interface with peripheral hardware and well-established third-party Python libraries as part of a more complex and intelligent setup that relies on laboratory automation, advanced optimization, and machine learning.
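To make the methodology concrete, here is a minimal sketch of the Randles–Ševčík analysis step with synthetic numbers: the electrode area, concentration, and peak currents are made up, and the library's actual instrument-control calls are not shown. At 25 °C the equation reduces to i_p = 2.69e5 * n^(3/2) * A * D^(1/2) * C * v^(1/2) (A in cm², C in mol/cm³, v in V/s, i_p in A), so D follows from the slope of i_p versus sqrt(v).

```python
import numpy as np

n, area, conc = 1, 0.071, 1e-6                      # electrons, cm^2, mol/cm^3 (assumed)
v = np.array([0.01, 0.025, 0.05, 0.1])              # scan rates (V/s)
i_p = np.array([4.9e-6, 7.7e-6, 1.09e-5, 1.54e-5])  # synthetic peak currents (A)

# Fit i_p = slope * sqrt(v), then invert the Randles-Sevcik slope for D.
slope, _ = np.polyfit(np.sqrt(v), i_p, 1)
D = (slope / (2.69e5 * n**1.5 * area * conc)) ** 2  # cm^2/s
print(f"D = {D:.2e} cm^2/s")
```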
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
The mainstem Logan River is a suitable habitat for cold-water fishes such as native populations of cutthroat trout (Budy & Gaeta, 2018). On the other hand, high water temperatures can harm cold-water fish populations by creating physiological stresses, intensifying metabolic demands, and limiting suitable habitats (Williams et al., 2015). In this regard, the State of Utah Department of Environmental Quality (UDEQ) has identified the Logan River as a suitable habitat for cold-water species, which can become unsuitable when the water temperature rises above 20 degrees Celsius (Rule R317-2, 2022). However, the UDEQ does not provide any details on how to evaluate violations of the standard. One way to evaluate violations is to look at water temperature distributions (i.e., histograms) along the river, from high elevations to low elevations, at different locations. In this report, I used three different Python libraries to manipulate, extract, and explore the water temperature data of the Logan River from 2014 to 2021, obtained from the Logan River Observatory website. The results (i.e., the histograms generated by executing a Jupyter Notebook in the HydroShare environment) show that the Logan River tends to experience higher water temperatures as its elevation drops, regardless of the season. This can provide some insights for the UDEQ to simultaneously consider space and time in assessing violations of the standard.
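A minimal sketch of that exploration, with placeholder file and column names standing in for the Logan River Observatory export:

```python
import matplotlib.pyplot as plt
import pandas as pd

# Placeholder file/column names for the Logan River Observatory export.
df = pd.read_csv("logan_river_temperature.csv", parse_dates=["DateTime"])
for site, group in df.groupby("Site"):
    plt.figure()
    group["WaterTemp_C"].plot.hist(bins=40)
    plt.axvline(20, color="red", linestyle="--")  # UDEQ 20 degree C threshold
    plt.title(f"Water temperature distribution: {site}")
    plt.xlabel("Temperature (degrees C)")
plt.show()
```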
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
This repository complements the identically titled paper submitted to WAMTA 2025 and allows the published results to be reproduced. For a more detailed description, please consult the README.md file.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
Dataset overview
This dataset provides data and images of snowflakes in free fall collected with a Multi-Angle Snowflake Camera (MASC). The dataset includes, for each recorded snowflake:
A triplet of gray-scale images corresponding to the three cameras of the MASC
A large quantity of geometrical and textural descriptors, the pre-compiled output of published retrieval algorithms, and basic environmental information at the location and time of each measurement.
The pre-computed descriptors and retrievals are available individually for each camera view or, for some of them, as descriptors of the triplet as a whole. A non-exhaustive list of precomputed quantities includes, for example:
Textural and geometrical descriptors as in Praz et al 2017
Hydrometeor classification, riming degree estimation, melting identification, as in Praz et al 2017
Blowing snow identification, as in Schaer et al 2020
Mass, volume, gyration estimation, as in Leinonen et al 2021
Data format and structure
The dataset is divided into four .parquet files (for scalar descriptors) and a Zarr database (for the images). A detailed description of the data content and of the data records is available here.
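Generic reads of the two formats might look like the sketch below (the file names are placeholders, and the dedicated API described under "Supporting code" is the recommended route):

```python
import pandas as pd
import xarray as xr

# Placeholder file names for the descriptor tables and the image database.
cam0 = pd.read_parquet("MASCdb_cam0.parquet")  # scalar descriptors, one camera view
images = xr.open_zarr("MASCdb.zarr")           # gray-scale image triplets
print(cam0.columns.tolist()[:10])
print(images)
```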
Supporting code
A Python-based API is available to manipulate, display, and organize the data of our dataset. It can be found on GitHub. See also the code documentation on ReadTheDocs.
Download notes
All files available here for download should be stored in the same folder if the Python-based API is used
MASCdb.zarr.zip must be unzipped after download
Field campaigns
A list of campaigns included in the dataset, with a minimal description, is given in the following table. DFIR = Double Fence Intercomparison Reference.

Campaign_name | Information | Shielded / Not shielded
APRES3-2016 & APRES3-2017 | Instrument installed in Antarctica in the context of the APRES3 project. See for example Genthon et al, 2018 or Grazioli et al 2017 | Not shielded
Davos-2015 | Instrument installed in the Swiss Alps within the context of SPICE (Solid Precipitation InterComparison Experiment) | Shielded (DFIR)
Davos-2019 | Instrument installed in the Swiss Alps within the context of RACLETS (Role of Aerosols and CLouds Enhanced by Topography on Snow) | Not shielded
ICEGENESIS-2021 | Instrument installed in the Swiss Jura at a MeteoSwiss ground measurement site, within the context of ICE-GENESIS. See for example Billault-Roux et al, 2023 | Not shielded
ICEPOP-2018 | Instrument installed in Korea, in the context of ICEPOP. See for example Gehring et al 2021 | Shielded (DFIR)
Jura-2019 & Jura-2023 | Instrument installed in the Swiss Jura within a MeteoSwiss measurement site | Not shielded
Norway-2016 | Instrument installed in Norway during the High-Latitude Measurement of Snowfall (HiLaMS) campaign. See for example Cooper et al, 2022 | Not shielded
PLATO-2019 | Instrument installed at the "Davis" Antarctic base during the PLATO field campaign | Not shielded
POPE-2020 | Instrument installed at the "Princess Elizabeth Antarctica" base during the POPE campaign. See for example Ferrone et al, 2023 | Not shielded
Remoray-2022 | Instrument installed in the French Jura | Not shielded
Valais-2016 | Instrument installed in the Swiss Alps at a ski resort | Not shielded
Version
1.0 - Two new campaigns ("Jura-2023", "Norway-2016") added. Added references and list of campaigns.
0.3 - a new campaign is added to the dataset ("Remoray-2022")
0.2 - rename of variables. Variable precision (digits) standardized
0.1 - first upload
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
This resource contains a video recording for a presentation given as part of the National Water Quality Monitoring Council conference in April 2021. The presentation covers the motivation for performing quality control for sensor data, the development of PyHydroQC, a Python package with functions for automating sensor quality control including anomaly detection and correction, and the performance of the algorithms applied to data from multiple sites in the Logan River Observatory.
The initial abstract for the presentation: Water quality sensors deployed to aquatic environments make measurements at high frequency and commonly include artifacts that do not represent the environmental phenomena targeted by the sensor. Sensors are subject to fouling from environmental conditions, often exhibit drift and calibration shifts, and report anomalies and erroneous readings due to issues with datalogging, transmission, and other unknown causes. The suitability of data for analyses and decision making often depends on subjective and time-consuming quality control processes consisting of manual review and adjustment of data. Data driven and machine learning techniques have the potential to automate identification and correction of anomalous data, streamlining the quality control process. We explored documented approaches and selected several for implementation in a reusable, extensible Python package designed for anomaly detection for aquatic sensor data. Implemented techniques include regression approaches that estimate values in a time series, flag a point as anomalous if the difference between the sensor measurement and the estimate exceeds a threshold, and offer replacement values for correcting anomalies. Additional algorithms that scaffold the central regression approaches include rules-based preprocessing, thresholds for determining anomalies that adjust with data variability, and the ability to detect and correct anomalies using forecasted and backcasted estimation. The techniques were developed and tested based on several years of data from aquatic sensors deployed at multiple sites in the Logan River Observatory in northern Utah, USA. Performance was assessed based on labels and corrections applied previously by trained technicians. In this presentation, we describe the techniques for detection and correction, report their performance, illustrate the workflow for applying them to high frequency aquatic sensor data, and demonstrate the possibility for additional approaches to help increase automation of aquatic sensor data post processing.
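To illustrate the estimate/flag/correct pattern described above, here is a generic rolling-median sketch on synthetic data; it conveys the idea only and is not PyHydroQC's actual interface.

```python
import numpy as np
import pandas as pd

# Synthetic high-frequency series with one injected anomalous spike.
rng = np.random.default_rng(1)
values = pd.Series(np.sin(np.linspace(0, 20, 500)) + rng.normal(0, 0.05, 500))
values.iloc[100] += 2.0

# Estimate each value, flag points whose residual exceeds a
# variability-scaled threshold, and offer the estimate as a correction.
estimate = values.rolling(window=15, center=True, min_periods=1).median()
residual = (values - estimate).abs()
threshold = 4 * residual.rolling(window=50, min_periods=1).std()
anomalous = residual > threshold
corrected = values.mask(anomalous, estimate)
print(f"{int(anomalous.sum())} points flagged")
```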
According to our latest research, the global HVAC Edge Controller with Python Runtime market size reached USD 1.42 billion in 2024, reflecting the rapid digital transformation in building automation and intelligent climate control. The market is expected to grow at a robust CAGR of 17.8% from 2025 to 2033, with the total market projected to reach USD 6.14 billion by 2033. This surge is primarily driven by the increasing adoption of IoT-enabled HVAC solutions, the need for real-time data analytics, and the integration of programmable runtimes such as Python for enhanced system customization and interoperability.
The growth of the HVAC Edge Controller with Python Runtime market is significantly influenced by the ongoing digitalization of commercial and industrial infrastructure worldwide. As organizations strive to optimize energy consumption, reduce operational costs, and meet stringent environmental regulations, the demand for advanced edge controllers capable of running Python scripts has escalated. These controllers enable seamless integration with IoT devices, advanced analytics, and AI-driven applications, allowing for smarter and more adaptive control of HVAC systems. The flexibility offered by Python runtime empowers facility managers and system integrators to develop custom algorithms for predictive maintenance, energy optimization, and fault detection, further accelerating market adoption.
Another major growth factor is the rapid expansion of smart building initiatives, particularly in developed regions such as North America and Europe. Governments and private sector entities are investing heavily in smart infrastructure, which necessitates the deployment of intelligent HVAC solutions to ensure occupant comfort, energy efficiency, and regulatory compliance. The ability of HVAC edge controllers with Python runtime to interface with legacy systems and modern cloud platforms makes them an attractive choice for both retrofit and new construction projects. Moreover, the proliferation of edge computing paradigms in building automation is fueling demand for controllers that can process data locally, minimize latency, and enhance system reliability.
The market is also benefiting from the rising adoption of cloud-based deployment models and wireless connectivity options. As remote monitoring and management become essential in the post-pandemic era, organizations are increasingly leveraging cloud-enabled HVAC edge controllers to gain real-time visibility and control over distributed assets. The Python runtime environment, in particular, allows for rapid application development and integration with third-party services, enabling a wide range of use cases from simple automation tasks to complex machine learning-driven optimizations. This trend is expected to continue as the ecosystem of Python-based libraries and frameworks for building automation expands.
Regionally, Asia Pacific is emerging as the fastest-growing market for HVAC Edge Controller with Python Runtime solutions, driven by urbanization, industrialization, and the proliferation of smart city projects. Countries like China, Japan, and India are witnessing significant investments in infrastructure modernization, which is translating into increased demand for intelligent HVAC control systems. Meanwhile, North America maintains a dominant market share due to its mature building automation sector and early adoption of edge computing technologies. Europe is also a key market, characterized by stringent energy efficiency regulations and a strong focus on sustainability in the built environment.
The component segment of the HVAC Edge Controller with Python Runtime market is divided into hardware, software, and services, each playing a pivotal role in shaping the overall ecosystem. Hardware forms the backbone of edge controllers, encompassing microprocessors, sensors, communication modules, and interface boards that enable real-time data acquisition and control. With the increasing complexity of building automation requirements, hardware providers are focusing on delivering robust, scalable, and energy-efficient platforms capable of running Python scripts natively. This hardware evolution is critical to ensuring compatibility with a wide range of HVAC equipment and facilitating seamless integration with both legacy and modern systems.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
The archived data 'data_figshare.zip' produces figures 3-9 of the paper under review. The data include .sqlite databases for each run and .json files that describe run starts, run stops, and events. An archive of numpy arrays ('numpy_arrays.zip') that stores captured time sequences is also included; it is a much larger file and is only needed if modified post-processing is to be applied to the ADA2200 data. See the acquisition and analysis code for more details, including information on how to configure databroker to run the analysis code. The attached local_file.yml should be placed into ~/.config/databroker/ and the placeholder 'your_directory' must be modified to point to the data_figshare directory. https://github.com/lucask07/instrbuilder/tree/master/instrbuilder/bluesky_demo/lockin_analysis (no legal or ethical requirements)
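Once local_file.yml is in place, opening the catalog with databroker's v1 interface looks roughly like this (assuming the catalog keeps the name local_file):

```python
from databroker import Broker

# Assumes ~/.config/databroker/local_file.yml exists and its
# 'your_directory' placeholder points at the unzipped data_figshare folder.
db = Broker.named("local_file")
header = db[-1]       # most recent run in the catalog
print(header.start)   # the run-start document (scan metadata)
```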
For the automated workflows, we created Jupyter notebooks for each state. In these workflows, GIS processing to merge, extract, and project GeoTIFF data was the most important process. For this process, we used ArcPy, which is a Python package to perform geographic data analysis, data conversion, and data management in ArcGIS (Toms, 2015). After creating state-scale LSS datasets in GeoTIFF format, we converted GeoTIFF to NetCDF using the xarray and rioxarray Python packages. Xarray is a Python package for working with multi-dimensional arrays, and rioxarray is its rasterio extension; rasterio is a Python library to read and write GeoTIFF and other raster formats. We used xarray to manipulate the data type and add metadata in the NetCDF file, and rioxarray to save GeoTIFF to NetCDF format. Through these procedures, we created three composite HydroShare resources to share the state-scale LSS datasets. Due to the license limitations of ArcGIS Pro, which is commercial GIS software, we developed this Jupyter notebook on Windows OS.
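A condensed sketch of that GeoTIFF-to-NetCDF step is shown below; the file names, data type, and metadata are placeholders, and a single-band raster is assumed.

```python
import rioxarray

# Placeholder file names; assumes a single-band GeoTIFF.
da = rioxarray.open_rasterio("state_lss.tif").squeeze("band", drop=True)
da = da.astype("float32")                                   # adjust the data type
da.attrs.update(long_name="state-scale LSS (placeholder)")  # add metadata
da.to_dataset(name="lss").to_netcdf("state_lss.nc")
```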
License: MIT License, https://opensource.org/licenses/MIT
This folder contains the Python code that was used to process the resource flow data of a basic oxygen steelmaking plant owned by Tata Steel.