U.S. Government Works: https://www.usa.gov/government-works/
The Poisson Process file contains the solution to an exercise from the fourth module of the Statistics and Applied Data Analysis Specialization at the University of Colorado Boulder, which I took. In these notes, I explain the most important steps.
In our project we analysed the Understanding Society data from Waves 1 and 2 to explore the uses of paradata in cross-sectional and longitudinal surveys, with the aim of gaining knowledge that leads to improvements in field process management and responsive survey designs.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Example session file. (DRFIT 46 kb)
This paper analyzes the restrictions necessary to ensure that the interest rate policy rule used by the central bank does not introduce local real indeterminacy into the economy. It conducts the analysis in a Calvo-style sticky price model. A key innovation is to add investment spending to the analysis. In this environment, local real indeterminacy is much more likely. In particular, all forward-looking interest rate rules are subject to real indeterminacy.
https://paper.erudition.co.in/terms
Question Paper Solutions of the chapter Discrete System Analysis of Digital Control System, 7th Semester, Applied Electronics and Instrumentation Engineering
This data package contains an internally consistent data product for discrete inorganic carbon, oxygen, and nutrients on the U.S. North American ocean margins, i.e., the Coastal Ocean Data Analysis Product (CODAP-NA). It was created by compiling, quality controlling (QC), and synthesizing two decades of discrete measurements of inorganic carbon, oxygen, and nutrient chemistry data from North America's U.S. coastal oceans. Due to the lack of deep-water sampling (>1500 m), cross-over analyses were not conducted as they were for the open-ocean data products. Instead, only core data sets from laboratories with known quality assurance are included. Internal consistency checks and outlier detection were used to quality control the data. We worked closely with the investigators who collected and measured these data during the QC process. This version of the CODAP-NA is composed of 3,391 oceanographic profiles from 61 research cruises covering all continental shelves in North America (U.S. west coast, U.S. east coast, Gulf of Mexico, and Alaska coast). Data for 14 variables (temperature; salinity; dissolved oxygen concentration; dissolved inorganic carbon concentration; total alkalinity; pH on the Total Scale; carbonate ion concentration; fugacity of carbon dioxide; and concentrations of silicate, phosphate, nitrate, nitrite, nitrate plus nitrite, and ammonium) have been subjected to extensive quality control. Funding for this work comes from the National Oceanic and Atmospheric Administration (NOAA) Ocean Acidification Program (OAP, Project #: OAP 1903-1903).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Abstract: This data repository contains raw data for the analysis of decomposition error in discrete-time open tandem queues. The data is formatted for the computation and validation of point and interval estimates for decomposition error, as well as for the analysis of decomposition error in bottleneck queues. Technical remarks: This data repository contains two folders: 01 Equal Traffic Intensities (raw data for the analysis of decomposition error in tandem queues with equal traffic intensities) and 02 Bottleneck Analyses (raw data for the analysis of decomposition error in tandem queues with bottlenecks). The first folder contains a training data file and a test data file. The second folder contains three files: a data set with downstream bottleneck queues, a data set with upstream bottleneck queues, and a data set with similar traffic intensities.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This study identified subgroups of bladder pain syndrome/interstitial cystitis (BPS/IC) patients and potential treatment targets by combining validated questionnaires and patient diaries with discrete mathematical techniques. Hierarchical clustering of questionnaire data revealed three distinct patient groups. Analysis of patient diaries, employing natural language processing—a form of discrete data analysis—found keywords capturing emotional and psychological experiences, complementing the questionnaire results. Integration of questionnaire and diary data visualized the relationships between symptoms and treatment targets through a network graph. This personalized approach, akin to solving the traveling salesman problem in discrete mathematics, was validated through case studies, demonstrating its utility in guiding targeted interventions. The study emphasizes the significant potential of discrete mathematics-based data integration and visualization for personalized management of this complex condition.
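The study's own code is not part of this description; as a hedged illustration of the kind of workflow it describes (hierarchical clustering of questionnaire scores plus a symptom network graph), a minimal sketch with made-up data might look like the following, where `X`, `symptoms`, and `cooccurrence` are illustrative placeholders rather than the study's variables:

```python
# Hypothetical sketch: hierarchical clustering of questionnaire scores and a
# symptom network graph. All data here is synthetic and illustrative only.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
import networkx as nx

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 20))                    # 120 patients x 20 questionnaire items

Z = linkage(X, method="ward")                     # agglomerative (hierarchical) clustering
groups = fcluster(Z, t=3, criterion="maxclust")   # cut the tree into three patient subgroups

symptoms = ["pain", "urgency", "frequency", "anxiety", "sleep"]
cooccurrence = rng.random((5, 5))                 # stand-in for diary keyword co-occurrence

G = nx.Graph()
for i, a in enumerate(symptoms):
    for j, b in enumerate(symptoms):
        if i < j and cooccurrence[i, j] > 0.5:    # keep only stronger links
            G.add_edge(a, b, weight=cooccurrence[i, j])

print(np.bincount(groups))                        # subgroup sizes
print(G.edges(data=True))                         # symptom-to-symptom links
```

In practice the linkage method, the number of clusters, and the edge-weight threshold would be chosen from the questionnaire and diary data rather than fixed as in this sketch.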
This data product is composed of data from 724 scientific cruises covering the global ocean. It includes data assembled during the previous interior ocean data synthesis efforts GLODAPv1.1 (Global Ocean Data Analysis Project version 1.1) in 2004, CARINA (CARbon IN the Atlantic) in 2009/2010, and PACIFICA (PACIFic ocean Interior CArbon) in 2013, as well as data from an additional 168 cruises. This dataset includes discrete bottle measurements of salinity, oxygen, nitrate, silicate, phosphate, dissolved inorganic carbon, total alkalinity, pH, CFC-11, CFC-12, CFC-113, and CCl4, carbon isotopes and chlorophyll. These data have been subjected to extensive primary and secondary quality control which included systematic evaluation of bias, and adjustments have been applied to remove significant biases, respecting occurrences of any known or likely time trends or variations.
https://www.mordorintelligence.com/privacy-policy
The Discrete Semiconductor Market Report is Segmented by Device Type (Diode, Small-Signal Transistor, and More), End-User Vertical (Automotive, Consumer Electronics, and More), Material (Silicon, Silicon-Carbide, Gallium-Nitride), Power Rating (Low-Power, Mid-Power, High-Power), and Geography (North America, South America, Europe, Asia-Pacific, Middle East, and Africa). The Market Forecasts are Provided in Terms of Value (USD).
U.S. Government Works: https://www.usa.gov/government-works
License information was derived automatically
High-frequency estimated chloride (Cl) and observed specific conductance (SC) data sets, along with response variables derived from those data sets, were used in an analysis to quantify the extent to which deicer applications in winter affect water quality at 93 U.S. Geological Survey water quality monitoring stations across the eastern United States. The analysis was documented in the following publication: Moore, J., R. Fanelli, and A. Sekellick. In review. High-frequency data reveal deicing salts drive elevated conductivity and chloride along with pervasive and frequent exceedances of the EPA aquatic life criteria for chloride in urban streams. Submitted to Environmental Science and Technology. This data release contains five child items: 1) Input datasets of discrete specific conductance (SC) and chloride (Cl) observations used to develop regression models describing the relationship between chloride and SC; 2) The predicted chloride concentrations generated by applying the s ...
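The regression models themselves are documented in the child items; purely as an illustration of the kind of discrete-sample regression the description refers to (not the authors' actual model form), chloride could be regressed on specific conductance as below, with made-up values and assumed column names:

```python
# Illustrative only: simple linear regression of chloride (Cl) on specific
# conductance (SC) from paired discrete samples. Values and column names are
# assumptions, not data from this release.
import pandas as pd
from sklearn.linear_model import LinearRegression

samples = pd.DataFrame({
    "SC": [150.0, 320.0, 540.0, 910.0, 1500.0],   # specific conductance, uS/cm (made up)
    "Cl": [20.0, 55.0, 110.0, 210.0, 380.0],      # chloride, mg/L (made up)
})

model = LinearRegression().fit(samples[["SC"]], samples["Cl"])
predicted_cl = model.predict(samples[["SC"]])
print(model.coef_[0], model.intercept_)
print(predicted_cl)
```

The published models may use different functional forms (for example, transformed or station-specific fits), so this sketch only shows the mechanics.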
Discrete choice (DC) methods provide a convenient approach for preference elicitation, and they lead to unbiased estimates of preference model parameters if the parameterization of the value function allows for a good description of the preferences. On the other hand, indifference elicitation (IE) was suggested as a direct trade-off estimator for preference elicitation in decision analysis decades ago, but it has not found widespread application in statistical analysis frameworks as discrete choice methods have. We develop a hierarchical, probabilistic model for IE that allows us to do Bayesian inference similar to that for DC methods. A case study with synthetically generated data allows us to investigate potential bias and to estimate parameter uncertainty over a wide range of numbers of replies and elicitation uncertainties for both DC and IE. Through an empirical case study with laboratory-scale choice and indifference experiments, we investigate the feasibility of the approach and the excess time needed for indifference replies. Our results demonstrate (i) the absence of bias of the suggested methodology, (ii) a reduction in the uncertainty of estimated parameters by about a factor of three, or a reduction of the required number of replies to achieve a similar accuracy as with DC by about a factor of ten, (iii) the feasibility of the approach, and (iv) a median increase in the time needed per indifference reply of about a factor of three. If the set of respondents is small, the higher elicitation effort may be worthwhile to achieve reasonable accuracy in the estimated value function parameters.
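The paper's hierarchical IE model is not reproduced here; as a generic, simplified illustration of Bayesian inference for a binary discrete-choice (logit) model with a single taste parameter, a toy Metropolis sampler could look like this (all names and values are illustrative):

```python
# Toy Metropolis sampler for a one-parameter binary logit choice model.
# Generic sketch, not the hierarchical DC/IE model from the study.
import numpy as np

rng = np.random.default_rng(1)
beta_true = 1.5
x = rng.normal(size=200)                                          # attribute difference between alternatives
choice = rng.random(200) < 1.0 / (1.0 + np.exp(-beta_true * x))   # simulated choice replies

def log_posterior(beta):
    utility = beta * x
    # log-likelihood of a logit model: P(choice=1) = 1 / (1 + exp(-utility))
    loglik = np.sum(np.where(choice, -np.log1p(np.exp(-utility)),
                                     -np.log1p(np.exp(utility))))
    logprior = -0.5 * beta ** 2 / 10.0                            # weak N(0, 10) prior
    return loglik + logprior

samples, beta = [], 0.0
for _ in range(5000):
    proposal = beta + 0.3 * rng.normal()
    if np.log(rng.random()) < log_posterior(proposal) - log_posterior(beta):
        beta = proposal                                           # accept the proposal
    samples.append(beta)

print(np.mean(samples[1000:]), np.std(samples[1000:]))            # posterior mean and sd
```

The actual DC and IE models in the study are hierarchical and account for elicitation uncertainty, which this sketch omits.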
DESCRIPTION
Create a model that predicts whether or not a loan will default, using historical data.
Problem Statement:
For companies like Lending Club, correctly predicting whether or not a loan will default is very important. In this project, using historical data from 2007 to 2015, you have to build a deep learning model to predict the chance of default for future loans. As you will see later, this dataset is highly imbalanced and includes a lot of features, which makes the problem more challenging.
Domain: Finance
Analysis to be done: Perform data preprocessing and build a deep learning prediction model.
Content:
Dataset columns and definition:
credit.policy: 1 if the customer meets the credit underwriting criteria of LendingClub.com, and 0 otherwise.
purpose: The purpose of the loan (takes values "credit_card", "debt_consolidation", "educational", "major_purchase", "small_business", and "all_other").
int.rate: The interest rate of the loan, as a proportion (a rate of 11% would be stored as 0.11). Borrowers judged by LendingClub.com to be more risky are assigned higher interest rates.
installment: The monthly installments owed by the borrower if the loan is funded.
log.annual.inc: The natural log of the self-reported annual income of the borrower.
dti: The debt-to-income ratio of the borrower (amount of debt divided by annual income).
fico: The FICO credit score of the borrower.
days.with.cr.line: The number of days the borrower has had a credit line.
revol.bal: The borrower's revolving balance (amount unpaid at the end of the credit card billing cycle).
revol.util: The borrower's revolving line utilization rate (the amount of the credit line used relative to total credit available).
inq.last.6mths: The borrower's number of inquiries by creditors in the last 6 months.
delinq.2yrs: The number of times the borrower had been 30+ days past due on a payment in the past 2 years.
pub.rec: The borrower's number of derogatory public records (bankruptcy filings, tax liens, or judgments).
Steps to perform:
Perform exploratory data analysis and feature engineering, then build a deep learning model to predict whether or not a loan will default using the historical data.
Tasks:
Transform categorical values into numerical values (discrete)
Exploratory data analysis of different factors of the dataset.
Additional Feature Engineering
You will check the correlation between features and will drop those features which have a strong correlation
This will help reduce the number of features and will leave you with the most relevant features
After applying EDA and feature engineering, you are now ready to build the predictive models
In this part, you will create a deep learning model using Keras with the TensorFlow backend (see the sketch below)
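A minimal sketch of the described pipeline is shown below. The file name loan_data.csv and the target column not.fully.paid are assumptions (the brief does not name them), and the network architecture and thresholds are illustrative rather than prescribed:

```python
# Hedged sketch of the described pipeline: one-hot encode the categorical
# 'purpose' column, drop one of each pair of strongly correlated features,
# then fit a small Keras network. File name and target column are assumptions.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from tensorflow import keras

df = pd.read_csv("loan_data.csv")
df = pd.get_dummies(df, columns=["purpose"], drop_first=True, dtype=int)  # categorical -> numeric
y = df.pop("not.fully.paid")

# Drop one feature from every pair with |correlation| > 0.9
corr = df.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [col for col in upper.columns if (upper[col] > 0.9).any()]
X = df.drop(columns=to_drop)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

model = keras.Sequential([
    keras.Input(shape=(X_train.shape[1],)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])

# class_weight partially compensates for the class imbalance noted above
weights = {0: 1.0, 1: float(len(y_train) - y_train.sum()) / max(float(y_train.sum()), 1.0)}
model.fit(X_train, y_train, validation_split=0.1, epochs=20,
          batch_size=256, class_weight=weights, verbose=0)
print(model.evaluate(X_test, y_test, verbose=0))
```

Because the classes are imbalanced, metrics such as AUC, precision/recall, or a confusion matrix are more informative here than plain accuracy.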
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Modern data analysis typically involves the fitting of a statistical model to data, which includes estimating the model parameters and their precision (standard errors) and testing hypotheses based on the parameter estimates. Linear mixed models (LMMs) fitted through likelihood methods have been the foundation for data analysis for well over a quarter of a century. These models allow the researcher to simultaneously consider fixed (e.g., treatment) and random (e.g., block and location) effects on the response variables and account for the correlation of observations, when it is assumed that the response variable has a normal distribution. Analysis of variance (ANOVA), which was developed about a century ago, can be considered a special case of the use of an LMM. A wide diversity of experimental and treatment designs, as well as correlations of the response variable, can be handled using these types of models. Many response variables are not normally distributed, of course, such as discrete variables that may or may not be expressed as a percentage (e.g., counts of insects or diseased plants) and continuous variables with asymmetrical distributions (e.g., survival time). As expansions of LMMs, generalized linear mixed models (GLMMs) can be used to analyze the data arising from several non-normal statistical distributions, including the discrete binomial, Poisson, and negative binomial, as well as the continuous gamma and beta. A GLMM allows the data analyst to better match the model to the data rather than to force the data to match a specific model. The increase in computer memory and processing speed, together with the development of user-friendly software and the progress in statistical theory and methodology, has made it practical for non-statisticians to use GLMMs since the late 2000s. The switch from LMMs to GLMMs is deceptively simple, however, as there are several major issues that must be considered or judged when using a GLMM, which are mostly resolved for routine analyses with LMMs. These include the consideration of conditional versus marginal distributions and means, overdispersion (for discrete data), the model-fitting method [e.g., maximum likelihood (integral approximation), restricted pseudo-likelihood, and quasi-likelihood], and the choice of link function to relate the mean to the fixed and random effects. The issues are explained conceptually with different model formulations and subsequently with an example involving the percentage of diseased plants in a field study with wheat, as well as with simulated data, starting with an LMM and transitioning to a GLMM. A brief synopsis of the published GLMM-based analyses in the plant agricultural literature is presented to give readers a sense of the range of applications of this approach to data analysis.
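As a sketch of what such a model looks like for the diseased-plants example (binomial counts per experimental unit with a random block effect), written in generic notation assumed here for illustration rather than taken from the article, the conditional GLMM with a logit link is:

```latex
% Conditional binomial GLMM with a logit link and a random block effect
% (generic notation, for illustration only).
\begin{align*}
  y_{ij} \mid u_j &\sim \mathrm{Binomial}(n_{ij}, \pi_{ij}), \\
  \operatorname{logit}(\pi_{ij}) &= \eta_{ij} = \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + u_j, \\
  u_j &\sim \mathcal{N}(0, \sigma_u^2).
\end{align*}
```

Here the conditional mean for a given block is $\pi_{ij} = \operatorname{logit}^{-1}(\eta_{ij})$, whereas the marginal mean averages $\operatorname{logit}^{-1}(\mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + u_j)$ over the distribution of $u_j$; because the inverse link is nonlinear, the two differ, which is one of the conditional-versus-marginal issues noted above, alongside possible overdispersion in the counts.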
As per our latest research, the global Genomic Results Discrete Data Integration market size reached USD 1.45 billion in 2024, demonstrating robust momentum driven by the increasing adoption of precision medicine and advanced data analytics in genomics. The market is projected to expand at a CAGR of 13.2% during the forecast period, reaching an estimated USD 4.14 billion by 2033. This impressive growth trajectory is fueled by the convergence of high-throughput sequencing technologies, the rising demand for integrated healthcare data, and the need for actionable insights from complex genomic datasets.
A primary growth factor in the Genomic Results Discrete Data Integration market is the exponential rise in genomic data generation, propelled by advancements in next-generation sequencing (NGS) and other high-throughput technologies. As the cost of sequencing continues to decline, the volume of raw genomic data produced by research laboratories, clinical settings, and biopharmaceutical companies has surged. However, the true value of this data is only realized when disparate datasets—spanning genomics, transcriptomics, proteomics, and metabolomics—are seamlessly integrated and analyzed. The integration of discrete genomic results enables researchers and clinicians to uncover complex biological relationships, identify novel biomarkers, and support the development of targeted therapies, thus driving widespread adoption of data integration platforms and solutions.
Another significant driver is the increasing focus on personalized medicine, which relies heavily on the integration of multi-omics data to tailor medical treatments to individual patients. Healthcare providers and pharmaceutical companies are leveraging integrated genomic data to stratify patient populations, predict disease susceptibility, and optimize therapeutic interventions. This shift toward data-driven healthcare is further supported by regulatory agencies encouraging the use of real-world evidence and integrated datasets for drug approval and post-market surveillance. Consequently, the demand for robust, scalable, and interoperable data integration solutions is surging, as stakeholders seek to harness the full potential of genomic and related datasets for clinical and research applications.
Furthermore, the Genomic Results Discrete Data Integration market benefits from technological innovations in artificial intelligence (AI), machine learning (ML), and cloud computing. These technologies facilitate the efficient aggregation, harmonization, and analysis of massive and heterogeneous datasets, overcoming traditional barriers to data integration such as data silos, format inconsistencies, and security concerns. The adoption of AI-driven analytics and cloud-based integration platforms is accelerating, enabling real-time data sharing, collaborative research, and scalable storage solutions. These advancements are not only enhancing the accuracy and speed of data interpretation but also democratizing access to integrated genomic insights across diverse healthcare and research environments.
From a regional perspective, North America continues to dominate the Genomic Results Discrete Data Integration market, accounting for the largest share in 2024, followed by Europe and Asia Pacific. The region’s leadership is attributed to its advanced healthcare infrastructure, significant investments in genomics research, and the presence of leading biopharmaceutical and technology companies. Meanwhile, Asia Pacific is emerging as the fastest-growing region, propelled by expanding genomic research initiatives, increasing healthcare expenditure, and government support for precision medicine. Europe also demonstrates steady growth, driven by collaborative research projects and strong regulatory frameworks supporting data integration. Latin America and Middle East & Africa represent nascent but promising markets, with growing awareness and gradual adoption of integrated genomic solutions.
The Com
This data release is focused on the analysis of surface water concentration data associated with 12 elements of concern from three hydrologic basins. Data is analyzed with respect to: a) reporting limits, b) the extent of censored data, c) co-location with USGS real-time sensor data, and d) median concentrations at the catchment spatial scale. The Proxies Project (under the Water Quality Program of the USGS Water Mission Area) is a multi-year effort designed to develop rapid and cost-effective approaches for monitoring and risk assessment of a range of aquatic contaminants in riverine surface waters at multiple spatial scales. One component of this project is focused on 12 Elements of Concern (EoC; Al, As, Cd, Cr, Cu, Fe, Hg, Mn, Pb, Se, U and Zn) in three primary hydrologic basins: Delaware River Basin (DRB), the Illinois River Basin (ILRB) and the Upper Colorado (UCOL) River Basin (USGS, 2023). Two modeling approaches being explored as part of the Proxies Project rely on the analysis of previously published EoC concentration data retrieved from the multi-agency supported Water Quality Portal (www.waterqualitydata.us/). This basin-specific retrieved data, covering the 1900-2022 timeframe, was subsequently screened, harmonized and published as part of an earlier USGS Data Release (Marvin-DiPasquale and others, 2022). The two distinct modeling approaches that leverage this previously published data are: a) machine learning statistical analysis of EoC concentration distributions as a function of geospatial attributes; and b) time series analysis in support of estimating EoC concentrations in (near)real-time at a sub-set of USGS real-time stations using discharge in combination with a range of deployed in-situ sensors. Prior to the final stages of model development, there were several data analysis steps required to further define which elements and aquatic fractions (i.e. filtered, unfiltered, and particulate) best lend themselves to further model exploration and development. These intermediate data analyses include: a) an analysis of the change in detection quantitation limits, by element and methods over time (DR_Table_1); b) an analysis of data censoring, by study basin, element, and fraction (DR_Table_2); c) a calculation of median EoC concentrations at the National Hydrography Dataset Plus (NHDPlus) catchment spatial scale (DR_Table_3); d) an analysis of the percentage of censored median EoC concentration values by study basin, element, and fraction (DR_Table_4); e) decision tree analysis associated with the geospatial machine learning modeling approach, by study basin, element and fraction (DR_Table_5); f) discrete EoC concentration data merged with continuous discharge and in-situ sensor data at USGS real-time stations, by station ID, element and fraction (DR_Table_6); and g) an analysis of the total number of observations and the percentage of censored EoC data associated with the merged discrete EoC and continuous discharge and sensor data retrieved from USGS real-time stations, by station ID, element, and fraction (DR_Table_7). The current data release documents the results of these data analyses. The associated seven data tables presented herein are provided in machine-readable comma separated value (*.csv) format and are more fully described in the associated meta-data. REFERENCES Marvin-DiPasquale, M.C., Sullivan, S.L., Platt, L. R., Gorsky, A., Agee, J.L., McCleskey, B.R., Kakouros, E., Walton-Day, K., Runkel, R. L., Morriss, M. C., Wakefield, B.
F., and Bergamaschi, B., 2022, Concentration Data for 12 Elements of Concern Used in the Development of Surrogate Models for Estimating Elemental Concentrations in Surface Water of Three Hydrologic Basins (Delaware River, Illinois River and Upper Colorado River): U.S. Geological Survey data release, https://doi.org/10.5066/P9L06M3G. USGS, 2023, Proxies Project, U.S. Geological Survey webpage, accessed 3/11/2025, https://www.usgs.gov/mission-areas/water-resources/science/proxies-project
https://dataintelo.com/privacy-and-policy
According to our latest research, the global Genomic Results Discrete Data Integration market size reached USD 2.18 billion in 2024, reflecting a robust expansion driven by the rapid adoption of precision medicine and the increasing integration of multi-omics data in healthcare and research. The market is projected to grow at a CAGR of 13.6% from 2025 to 2033, reaching an estimated USD 6.47 billion by 2033. This remarkable growth is primarily fueled by technological advancements in bioinformatics, an upsurge in clinical applications of genomics, and a growing demand for actionable insights from complex biological datasets.
One of the primary growth factors propelling the Genomic Results Discrete Data Integration market is the exponential increase in genomic data generated by next-generation sequencing (NGS) technologies. As the cost of sequencing continues to decrease, the volume of genomic, transcriptomic, proteomic, and metabolomic data being produced is rising dramatically. This surge necessitates advanced data integration solutions capable of transforming raw, heterogeneous datasets into structured, clinically relevant information. The ability to harmonize and standardize disparate data sources is crucial for supporting clinical diagnostics, personalized medicine, and drug discovery, all of which rely on robust data integration platforms to drive informed decisions and improve patient outcomes.
Another significant driver is the growing emphasis on personalized medicine and targeted therapeutics. Healthcare providers and pharmaceutical companies are increasingly leveraging discrete data integration platforms to correlate genomic variants with phenotypic outcomes, enabling more precise disease stratification and individualized treatment strategies. The integration of multi-omics data not only enhances the understanding of disease mechanisms but also accelerates the identification of novel therapeutic targets. This trend is further reinforced by regulatory agencies and reimbursement bodies that are placing greater value on the clinical utility of integrated genomic data, thereby incentivizing investments in advanced integration technologies.
Furthermore, the adoption of cloud-based solutions and artificial intelligence (AI) in genomic data integration is revolutionizing the market landscape. Cloud platforms offer scalable storage, computational power, and collaborative environments, making it feasible for institutions of all sizes to process and analyze vast datasets efficiently. AI-driven analytics are enhancing the extraction of actionable insights from integrated data, supporting applications across clinical diagnostics, research, and drug development. The convergence of these technologies is not only improving the speed and accuracy of data interpretation but also expanding the accessibility of genomic insights to a broader range of end-users, including hospitals, research institutes, and biotechnology companies.
Regionally, North America dominated the Genomic Results Discrete Data Integration market in 2024, accounting for the largest revenue share due to its advanced healthcare infrastructure, high adoption of precision medicine, and significant investments in genomics research. Europe followed closely, driven by strong government support and collaborative research initiatives. The Asia Pacific region is emerging as a high-growth market, propelled by increasing healthcare expenditure, expanding genomics research capabilities, and rising awareness of personalized medicine. Latin America and the Middle East & Africa are also witnessing gradual adoption, supported by international collaborations and capacity-building efforts. The regional outlook remains optimistic, with all major regions expected to contribute significantly to the market’s overall expansion through 2033.
The Genomic Results Discrete Data Integration market by component is segmented into software, hardware, and services, each playing a pivotal role in enabling seamless integration and interpretation of complex biological data. Software solutions represent the largest share, driven by the need for sophisticated algorithms that can harmonize, standardize, and analyze multi-omics datasets. These platforms facilitate data interoperability, support regulatory compliance, and enable advanced analytics, making them indispensable for both clinical and research applications. Key sof
This data product is composed of data from 724 scientific cruises covering the global ocean. It includes data assembled during the previous interior ocean data synthesis efforts GLODAPv1.1 (Global Ocean Data Analysis Project version 1.1) in 2004, CARINA (CARbon IN the Atlantic) in 2009/2010, and PACIFICA (PACIFic ocean Interior CArbon) in 2013, as well as data from an additional 168 cruises. NCEI Accession 0162565 includes discrete bottle measurements of salinity, oxygen, nitrate, silicate, phosphate, dissolved inorganic carbon, total alkalinity, pH, CFC-11, CFC-12, CFC-113, and CCl4, carbon isotopes and chlorophyll. These data have been subjected to extensive primary and secondary quality control which included systematic evaluation of bias, and adjustments have been applied to remove significant biases, respecting occurrences of any known or likely time trends or variations.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
Data Set Information:
This dataset contains a total of 71 images, comprising 11 types of images together with their distorted versions. Each image has its own distinct discrete-tone image properties.
Attribute Information:
Types of Images:
1. System Generated DTI created by setting distinct pixel values
2. Discrete Pixel Logo
3. Business Charts
4. Bi-Level
5. Part of Discrete Information from a Continuous Image

Colorspace models:
1. RGB
2. Grayscale
3. Binary

Distortion Types (see the sketch below for how these are commonly generated):
1. JPEG
2. Gaussian White Noise (GWN)
3. Salt and Pepper noise (SP)
4. Multiplicative Speckle Noise (MSN)
5. Poisson Noise (PN)
Target:
Use this dataset for analysis purposes
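The distortion types listed under Attribute Information can be approximated with standard tooling; the following sketch assumes scikit-image and Pillow are available and uses a placeholder file name (example_dti.png), which is not a file from this dataset:

```python
# Rough sketch: regenerate the listed distortion types for one image with
# scikit-image and Pillow. 'example_dti.png' is a placeholder, not a file
# shipped with this dataset.
import numpy as np
from PIL import Image
from skimage.util import random_noise, img_as_ubyte

image = np.asarray(Image.open("example_dti.png").convert("RGB"))

noisy_versions = {
    "gwn": random_noise(image, mode="gaussian"),   # Gaussian white noise
    "sp": random_noise(image, mode="s&p"),         # salt-and-pepper noise
    "msn": random_noise(image, mode="speckle"),    # multiplicative speckle noise
    "pn": random_noise(image, mode="poisson"),     # Poisson noise
}
for name, noisy in noisy_versions.items():
    Image.fromarray(img_as_ubyte(noisy)).save(f"example_dti_{name}.png")

# JPEG distortion is usually produced by re-encoding at a low quality setting.
Image.fromarray(image).save("example_dti_jpeg.jpg", quality=25)
```

For binary and grayscale images the conversion step would change accordingly, and the noise amounts and JPEG quality would be tuned to match the dataset's distortion levels.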
Source:
Creator:
J.Uthayakumar Research Scholar,Department of Computer Science,Pondicherry University,India. Contact: +91 9677583754 Email Id: uthayresearchscholar '@' gmail.com
Guided By,
Dr.T.Vengattaraman Assistant Professor,Department of Computer Science,Pondicherry University,India. Email Id: vengattaramant '@' gmail.com
Dr.P.Dhavachelvan Professor,Department of Computer Science,Pondicherry University,India. Email Id: dhavachelvan '@' gmail.com
keep sharing knowledge
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This integer sequence was registered and published in the On-Line Encyclopedia of Integer Sequences (OEIS.org) database on October 14, 2024, under the OEIS code A377045.
This sequence can be expressed with the help of two general formulas that use the following sequences:
1) A000041: a(n) is the number of partitions of n (the partition numbers).
2) A002407: Cuban primes: primes which are the difference of two consecutive cubes.
3) A121259: Numbers k such that (3*k^2 + 1)/4 is prime.
The two aforementioned general formulas are as follows:
a(n) = A000041(A002407(n)). (1)
a(n) = A000041((3*A121259(n)^2 + 1)/4). (2)
Some interesting properties of this sequence are:
◼ Number of partitions of prime numbers that are the difference of two consecutive cubes.
◼ Number of partitions of primes p such that p=(3*n^2 + 1) / 4 for some integer n (A121259).
◼ a(13) ≈ 1.49910×10^43.
◼ The last known integer n in A121259 is 341 and corresponds to a(60) ≈ 1.59114×10^323.
The numerical data shown in this dataset was generated by the following Mathematica program:
PartitionsP[Select[Table[(3 k^2 + 1)/4, {k, 500}], PrimeQ]]
The previous program was built with Mathematica v13.3.0.
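For readers without Mathematica, a rough Python equivalent (a sketch assuming SymPy is installed; it mirrors the program above rather than the published b-file) is:

```python
# Sketch of a Python equivalent of the Mathematica program above, using SymPy:
# number of partitions of primes p with p = (3*k^2 + 1)/4 for some integer k.
from sympy import isprime, npartitions

terms = []
for k in range(1, 501):                        # mirrors Table[..., {k, 500}]
    numerator = 3 * k * k + 1
    if numerator % 4 == 0 and isprime(numerator // 4):
        terms.append(npartitions(numerator // 4))

print(terms[:12])                              # first few terms of A377045
```

Only odd k make (3*k^2 + 1)/4 an integer, so the divisibility check plays the role that PrimeQ plays in the Mathematica program, where it silently rejects the fractional entries of the Table.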
Note: More mathematical details, graphics and technical information can be found in the notebook (.nb) & pdf files provided in this data pack.