License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically)
Abstract Given the importance of textbooks in the teaching and learning of mathematics, this article examines the tasks proposed for this topic in five 1st-year Bachillerato textbooks. The goal is to identify the meanings of the derivative conveyed by the textbooks through the proposed tasks. This is a quantitative study in which the tasks were grouped by similarity using cluster analysis. The results show that the books emphasize three meanings of the derivative: a procedural-algebraic one, an algorithmic one, and a conceptual-geometric one, all dominated by the symbolic representation system and presented exclusively in a mathematical context.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically)
Exceptional mathematical reasoning ability is one of the key features that demonstrate the power of large language models (LLMs). How to comprehensively define and evaluate the mathematical abilities of LLMs, and even reflect the user experience in real-world scenarios, has emerged as a critical issue. Current benchmarks predominantly concentrate on problem-solving capabilities, which presents a substantial risk of model overfitting and fails to accurately represent genuine mathematical… See the full description on the dataset page: https://huggingface.co/datasets/PremiLab-Math/MathCheck.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically)
Abstract Within the line of teacher training, this work presents aspects of a study with future elementary school teachers, focused on understanding how students at the University of Granada interpret the objective as an element for analysing a school mathematical task, framed within Didactic Analysis as a functional tool in initial teacher education. A qualitative methodology based on content analysis was followed. Prior work shows the importance of school tasks in fostering mathematics learning, and the results show the difficulty future teachers have in establishing and defining the objective of a school mathematical task.
License: CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/ (license information was derived automatically)
Data from a comparative judgement survey of 62 working mathematics educators (ME) at Norwegian universities or university colleges and 57 working mathematicians (WM) at Norwegian universities. A total of 3607 comparisons were made, of which 1780 were by the ME and 1827 by the WM. Respondents compared pairs of statements on mathematical definitions compiled from a literature review on mathematical definitions in the mathematics education literature. Each WM was asked to judge 40 pairs of statements with the following question: "As a researcher in mathematics, where your target group is other mathematicians, what is more important about mathematical definitions?" Each ME was asked to judge 41 pairs of statements with the following question: "For a mathematical definition in the context of teaching and learning, what is more important?" The comparative judgement was done with the No More Marking software (nomoremarking.com). The data set consists of the following data: comparisons made by ME (ME.csv), comparisons made by WM (WM.csv), and a look-up table of statement codes and statement formulations (key.csv). Each line in a comparison file represents one comparison, where the "winner" column gives the winner and the "loser" column the loser of the comparison.
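Pairwise comparison data of this winner/loser form is commonly scored with a Bradley-Terry model. The sketch below is a minimal iterative fit over (winner, loser) pairs such as the rows of ME.csv or WM.csv; the function name and the plain iterative-scaling scheme are illustrative choices, not the authors' analysis method.

```python
from collections import defaultdict

def bradley_terry(comparisons, iterations=100):
    """Fit Bradley-Terry strengths from (winner, loser) pairs via
    simple iterative scaling; higher strength = judged more important."""
    items = set()
    wins = defaultdict(int)          # total wins per statement
    pair_counts = defaultdict(int)   # comparisons per unordered pair
    for winner, loser in comparisons:
        items.update((winner, loser))
        wins[winner] += 1
        pair_counts[frozenset((winner, loser))] += 1
    strength = {i: 1.0 for i in items}
    for _ in range(iterations):
        new = {}
        for i in items:
            denom = 0.0
            for pair, n in pair_counts.items():
                if i in pair:
                    (j,) = pair - {i}
                    denom += n / (strength[i] + strength[j])
            new[i] = wins[i] / denom if denom else strength[i]
        mean = sum(new.values()) / len(new)   # normalize to mean 1
        strength = {i: s / mean for i, s in new.items()}
    return strength
```

Feeding in the winner/loser columns of a comparison file yields a strength estimate per statement code, which can then be joined against key.csv.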
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically)
Abstract In this paper we turn our attention to the different language games associated with the development of Mathematical Modelling activities and to the meanings students constitute within these language games in relation to first-order ordinary differential equations. The research is grounded in Mathematical Modelling in Mathematics Education and takes as its philosophical basis the studies of Ludwig Wittgenstein and some of his interpreters. Considering these theoretical-philosophical elements, mathematical modelling activities were developed in an Ordinary Differential Equations course of a Mathematics degree. Data were collected through written records, audio and video recordings, questionnaires, and interviews. The data analysis methodology considers the students' discursive practices and allowed us to construct trees of idea association. The results indicate that the constitution of meaning within modelling activities is associated with the students' linguistic appropriation of the rules and techniques configured in specific language games identified in the Mathematical Modelling activities.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically)
These datasets relate to the computational study presented in the paper The Berth Allocation Problem with Channel Restrictions, authored by Paul Corry and Christian Bierwirth. They consist of all the randomly generated problem instances along with the computational results presented in the paper.
Results across all problem instances assume ship separation parameters of [delta_1, delta_2, delta_3] = [0.25, 0, 0.5].
Excel Workbook Organisation:
The data is organised into separate Excel files for each table in the paper, as indicated by the file description. Within each file, each row of data presented (aggregating 10 replications) in the corresponding table is captured in two worksheets: one with the problem instance data, and the other with solution data obtained from the several solution methods described in the paper. For example, row 3 of Tab. 2 will have data for 10 problem instances on worksheet T2R3, and corresponding solution data on T2R3X.
Problem Instance Data Format:
On each problem instance worksheet (e.g. T2R3), each row of data corresponds to a different problem instance, and there are 10 replications on each worksheet.
The first column provides a replication identifier which is referenced on the corresponding solution worksheet (e.g. T2R3X).
Following this, there are n*(2c+1) columns (n = number of ships, c = number of channel segments) with headers p(i)_(j).(k)., where i references the operation (channel transit/berth visit) id, j references the ship id, and k references the index of the operation within the ship. All indexing starts at 0. These columns define the transit or dwell times on each segment. A value of -1 indicates a segment on which a berth allocation must be applied, and hence the dwell time is unknown.
There are then a further n columns with headers r(j), defining the release times of each ship.
For ChSP problems, there are a final n columns with headers b(j), defining the berth to be visited by each ship. ChSP problems with fixed berth sequencing enforced have an additional n columns with headers toa(j), indicating the order in which ship j sits within its berth sequence. For BAP-CR problems, these columns are not present, but are replaced by n*m columns (m = number of berths) with headers p(j).(b), defining the berth processing time of ship j if allocated to berth b.
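The p(i)_(j).(k). headers above can be split back into their (operation id, ship id, operation index) parts programmatically. A minimal sketch, assuming the placeholders expand to plain digits (e.g. a header literally reading "p4_1.2."); adjust the regular expression if the actual workbooks format the headers differently:

```python
import re

# Assumed concrete form of the p(i)_(j).(k). headers: "p<i>_<j>.<k>."
HEADER_RE = re.compile(r"^p(\d+)_(\d+)\.(\d+)\.?$")

def parse_op_header(header):
    """Return (operation_id, ship_id, op_index) or None for non-operation
    columns such as r(j), b(j) or toa(j)."""
    m = HEADER_RE.match(header)
    if not m:
        return None
    return tuple(int(g) for g in m.groups())

def op_columns(headers):
    """Map (ship_id, op_index) -> column name for all operation columns,
    so a ship's transit/dwell times can be read in order k = 0, 1, ..."""
    out = {}
    for h in headers:
        parsed = parse_op_header(h)
        if parsed:
            _, j, k = parsed
            out[(j, k)] = h
    return out
```

A value of -1 read from one of these columns would then flag the berth-visit segment whose dwell time is decided by the solver.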
Solution Data Format:
Each row of data corresponds to a different solution.
Column A references the replication identifier (from the corresponding instance worksheet) that the solution refers to.
Column B defines the algorithm that was used to generate the solution.
Column C shows the objective function value (total waiting and excess handling time) obtained.
Column D shows the CPU time consumed in generating the solution, rounded to the nearest second.
Column E shows the optimality gap as a proportion. A value of -1 or an empty value indicates that optimality gap is unknown.
From column F onwards, there are n*(2c+1) columns with the previously described p(i)_(j).(k). headers. The values in these columns define the entry times at each segment.
For BAP-CR problems only, following this there are a further 2n columns. For each ship j, there will be columns titled b(j) and p.b(j) defining the berth that was allocated to ship j, and the processing time on that berth respectively.
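The fixed columns A-E described above map naturally onto a small record type. This is a hypothetical helper for one solution-worksheet row read as a flat list of cell values (cells may arrive as strings when exported to CSV); it is not part of the published dataset:

```python
def parse_solution_row(row):
    """Split one solution-worksheet row into the fixed columns A-E
    plus the per-segment entry times from column F onwards.
    An optimality gap of -1 or an empty cell means 'unknown' -> None."""
    replication, algorithm, objective, cpu_seconds, gap = row[:5]
    gap = None if gap in ("", None, -1, "-1") else float(gap)
    return {
        "replication": replication,            # column A
        "algorithm": algorithm,                # column B
        "objective": float(objective),         # column C: waiting + excess handling
        "cpu_seconds": int(cpu_seconds),       # column D
        "optimality_gap": gap,                 # column E
        "entry_times": [float(v) for v in row[5:]],
    }
```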
License: CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
By Huggingface Hub [source]
This Grade School Math 8K Linguistically Diverse Training & Test Set is designed to help you develop and improve multi-step reasoning for question answering. The dataset contains three separate data files: socratic_test.csv, main_test.csv, and main_train.csv, each containing a set of grade school math questions and answers that require multiple steps. Each file contains the same two columns: question and answer. The questions are thoughtfully crafted to lead you through the reasoning journey for arriving at the correct answer, allowing ample opportunity for learning through practice. With over 8 thousand entries across training and testing, it takes advanced multi-step reasoning skills to ace these questions!
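The files described above are plain CSVs with question and answer columns, so they can be read with the standard library. A minimal sketch (the file names come from the description; nothing else is assumed about the files):

```python
import csv

def load_qa(f):
    """Yield (question, answer) pairs from an open GSM8K-style CSV
    file whose header row contains 'question' and 'answer' columns."""
    for row in csv.DictReader(f):
        yield row["question"], row["answer"]
```

Typical use would be `with open("main_train.csv", newline="", encoding="utf-8") as f: pairs = list(load_qa(f))`.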
This dataset provides a unique opportunity to study multi-step reasoning for question answering. The GSM8K Linguistically Diverse Training & Test Set consists of 8,000 questions and answers that have been created to simulate real-world scenarios in grade school mathematics. Each question is paired with one answer based on a comprehensive test set. The questions cover topics such as algebra, arithmetic, probability and more.
The dataset consists of two main files, main_train.csv and main_test.csv, both containing grade school math questions and their worked answers. Each row holds a question column and an answer column, where the answer walks through the sequential reasoning steps required by the problem. These columns can be combined with language-model representations such as ELMo or BERT to explore different formats for question answering, or used to build predictive models for numerical reasoning tasks.
To use this dataset efficiently, first get familiar with its structure by reading the documentation, so that you are aware of the content, definitions, and format of the available fields. Then study the examples that best suit your specific purpose, whether that is an experiment inspired by education research, generating insights for analytics reports, or making predictions in an artificial intelligence project. Knowing the variable definitions and the tools involved before you begin keeps the work focused and helps you move from preliminary background reading to productive analysis.
- Training language models for improving accuracy in natural language processing applications such as question answering or dialogue systems.
- Generating new grade school math questions and answers using g...
License: Attribution-NonCommercial 4.0 (CC BY-NC 4.0), https://creativecommons.org/licenses/by-nc/4.0/ (license information was derived automatically)
Supplementary Materials for Chapter 2 of the Doctoral Dissertation "How we think about numbers - Early counting and mathematical abstraction". Contains preregistrations, open data and open materials for study 1 and study 2.

As children learn to count, they make one of their first mathematical abstractions. They initially learn how numbers in the count sequence correspond to quantities of physical things if the rules of counting are followed (i.e., if you say the numbers in order "one two three four ..." as you tag each thing with a number). Around the age of four, children discover that these rules also define numbers in relation to each other, such that numbers contain meaning in themselves and without reference to the physical world (e.g., "five" is "one" more than "four"). It is through learning to count that children discover the natural numbers as mathematical symbols defined by abstract rules.

In this dissertation, I explored the developmental trajectory and the cognitive mechanisms of how we gain an understanding of the natural numbers as children. I present new methodological, empirical, and theoretical insights on how and when, in the process of learning to count, children discover that numbers represent cardinalities, that numbers can be defined in relation to each other by the successor function, and that numbers refer to units. Lastly, I explore this mathematical abstraction as the foundation of how we think about numbers as adults.

My work critically tested prominent theories on how learning to count gives meaning to numbers through analogical mapping and conceptual bootstrapping. Findings across five empirical studies suggest that the process is more gradual and continuous than previous theories have proposed. Children begin to understand numbers as cardinalities defined in relation to other numbers by the successor function before they fully grasp the rules of counting. With learning the rules of counting, this understanding continuously expands and matures. I further suggest that children may only fully understand numbers as abstract mathematical symbols once they understand how counting and numbers refer to the abstract notion of units rather than to physical things.

The central finding of this dissertation is that learning to count does not change children's understanding of numbers altogether and all at once. Nonetheless, when learning to count, children accomplish a fascinating mathematical abstraction, which builds the foundation for lifelong mathematical learning.

© Theresa Elise Wege, CC BY-NC 4.0
License: Apache License, v2.0, https://www.apache.org/licenses/LICENSE-2.0 (license information was derived automatically)
Dataset Summary GSM8K (Grade School Math 8K) is a dataset of 8.5K high quality linguistically diverse grade school math word problems. The dataset was created to support the task of question answering on basic mathematical problems that require multi-step reasoning.
These problems take between 2 and 8 steps to solve. Solutions primarily involve performing a sequence of elementary calculations using basic arithmetic operations (+ − × ÷) to reach the final answer. A bright middle school student should be able to solve every problem: from the paper, "Problems require no concepts beyond the level of early Algebra, and the vast majority of problems can be solved without explicitly defining a variable." Solutions are provided in natural language, as opposed to pure math expressions. From the paper: "We believe this is the most generally useful data format, and we expect it to shed light on the properties of large language models' internal monologues."
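Because solutions are natural-language derivations rather than bare expressions, evaluation code usually has to pull the final number out of the solution text. A minimal sketch, assuming the GSM8K convention that each solution ends with a line of the form "#### <answer>" (verify against your copy of the data before relying on it):

```python
import re

def final_answer(solution):
    """Extract the final numeric answer from a GSM8K-style solution
    string, assuming it ends with '#### <answer>'. Returns None if
    no such marker is present."""
    m = re.search(r"####\s*(-?[\d,]+(?:\.\d+)?)", solution)
    if not m:
        return None
    return float(m.group(1).replace(",", ""))
```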
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically)
G_crack_length.csv
The most abundant energy sources in the United States are hydrocarbon fossil fuels consisting of oil, gas, oil shale, and coal. Currently, the most important of these energy sources are crude oil and natural gas. Although supplies are adequate today, it must be realized that oil and gas are depletive substances. Within the next few years, the increasing demand for liquid fuels will necessitate supplementing domestic supplies of energy from crude oil and natural gas with synthetic fuels such as those from oil shale. To date, those persons working in the development of oil shale technology have found limited amounts of reference data. If data from research and development (R and D) could be made publicly available, however, several functions could be served. The duplication of work could be avoided, documented test material could serve as a basis to promote further developments, and research costs could possibly be reduced. To capture the results of Government-sponsored oil shale research programs, documents have been written to specify the data that contractors need to report and the procedures for reporting them. The documents identify and define the data from oil shale projects to be entered into the Major Plants Data Base (MPDB), Test Data Data Base (TDDB), Resource Extraction Data Base (REDB), and Math Modeling Data Base (MMDB) which will meet the needs of the users of the oil shale data system. This document addresses what information is needed and how it must be formatted so that it can be entered into the TDDB for oil shale.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically)
The First Law of Everything (LOE)/ Law of Nature (Newton)
1st LOE CLASSICAL MECHANICS viewed by homo sapiens for homo sapiens in the derived 1st local law, the Law of Human Nature, 1st LHN Completeness. Before judging, always try to get the whole Nirvana movie scenario/composition picture viewed in a thus holistic dualistic reductio ad absurdum way by mentally splitting the unsplittable of the incomplete subset Nirvana movie, you are in, of the superset Nirvana movie. 1st LOE: the only remaining axiomatic assumption is one consistent, hence completely noncontradictory, loophole-free everything/cosmos. This defines the falsifying "absurd" qualification. The largest combining superset that can and must, as exclusive parts of the whole, be described using a few slight twists by the five laws of thermodynamics: 0th LTD Mass Inertia: kg remains perpetually identical on a cosmological timescale in superset and smallest set (lifeless, timeless, meaningless, non-free-will robotics); 1st LTD Conservation of Energy: relative moving action is reaction mass on a cosmological timescale (intelligent action is intelligent reaction); 2nd LTD mounting and declining complexity: mass connections and disconnections, with permanent and non-permanent uniquely movable identical sorts of connections of identical mass, Entropy Cycle on a cosmological timescale (unique temporary local conscious memory banks); 3rd LTD in Perpetual Action Reaction, identical repetitive within limits unique motion on a cosmological timescale (identical history repeats itself in a unique way); 4th LTD every object has an Incidental Maximum Velocity, the smallest element of nigh 10c during the smallest timescale under incidental maximum resounding non-wave pressure on a cosmological timescale (life-death cycle with meaning and free will, is a deep religious dictate requiring seemingly contradictory deterministic statistics)
2nd LOE DETERMINISTIC STATISTICS (Gauss)/ 2nd LHN Normality: humans should act freely within the deterministic boundaries of "The Law". 2nd LOE The five dualistic quantified toothed wheel-like and continuous smooth wheel-like normal/"conform the norm" distributions from which all other distributions can/must be derived: 1. Combining Bell curve distribution; 2. Flat distribution; 3. Edge distribution; 4. Broken distribution; 5. Curved distribution. (Double helix DNA, Learning curve, Fair Dirty Distributions requiring a procedure for proof)
3rd LOE PROOF PROCEDURE Laplace's theorem formula works consistently, defined both deterministically and probabilistically: pure mathematical non-physics data, thus "unreasonable doubt", requires many axiomatic-assumption-based logic formulas, and Bayesian applied/dirty "beyond reasonable doubt" mathematics of the physics of nature dictates that consistent input means consistent output and inconsistent input means inconsistent output by the Laplace formula, given only one Bayesian axiomatic assumption based on a beginning of absolute proof on a consistently defined valued goal. Given the norm of risk as chance times valued consequence, acceptable input provides ratios of chance pro versus chance con as probabilities. Prior Odds multiplied by the Independent Likelihood Ratios provide the Posterior Odds as the new Prior Odds of the endless trial-and-error evidence-based cycle. 3rd LHN Procedure: it is Intuitive Common Sense to take your own robotically induced feelings as facts in the five salads that are fit for human consumption: 1. The combining One Mixed Salad, 2. Word Salads, 3. Picture Salads, 4. Course Number Salads, 5. Fine Number Salads of this Socratic Yin & Yang Harry Potter formula as the recipe/algorithm for the proper proof procedure that is consistent with the synapse of the instrument brain of all mammals, which is consistent with the everything cosmos, the symbiotics of which require the waves of this collective to come to more order than the current laws of nature can explain.
4th LOE ORDER (Euclidean Geometry)
The greatest breakthrough was due to my early-age Bildung on Evidence-Based Medicine as originally intended: rational use of gut feelings 1960-1980, my Just Proof legal model 1990, my Incomplete Higgs-graviton physics model LOE 2010, my Block-model Brain 2014, LOE 2017 & Integrating All Mathematical Mixed Salads Euclidean Geometry 1-Neutrino 2023 into "The Law", with subsequent constant tweaking of the presentation. First published under peer review following The Law in NWA 2015, search terms "casus ZPE" and "het vergeten instrument tussen de oren".
Behold the most succinct presentation of the Train Your Brain instruction manual for using the collective and individual instrument brains as the 4th LOE on one A4. The elaborated model is DOI published in the Elementary List. The 4th LOE is the soul as the order function of the cosmos and our brain. The images are in the download version.
5th LOE ZERO IS ONE LENGTH (Euler's identity) -1 + 1 = 0 or e^(iPi) + 1 = 0: zero has a length which is not empty, as part of the ruler's massive measuring device. This is consistent with, and constitutes, the fact (taken as 100% true) that every non-empty sign element of mass must exist in workability; and that irrational numbers become rational in infinity, no longer needing imaginary numbers, not being lengths "i", or measurement lengths for corner "e" or circle lengths "Pi". This is proven as a reductio ad absurdum, the strongest proof based on the beginning of absolute proof in the 3rd LOE, i.e. based on the fingerprint of the absolutely proven culprit: Mother Nature is on the mass murder weapon mass and not on matter as elementary. On a lower probative value, because more complex means less reliable, the proven way that the main suspect for further investigation is the Lego-Velcro particle built out of 500 identical massive rings in 10% double connections, 40% triple connections, 40% quadruple connections and 10% cinque connections, creating the four inertias of the fifth unique chainmail combining inertia algorithms, creating the snowflake function for the ice-wall pressure vessel/snowball larger-particle everything cosmos, solely built out of snowflakes in empty space. This is an intuitive, associative, testable, artistically creative guess. You either can or can't build such a particle out of these elements via further quick-and-dirty reverse engineering. All known constants need to be reverse-engineered into this one particle. It might then show that every particle must have 501 rings instead of 500. "The Law" dictates that the burden of proof via investigation lies with current science. The antithesis that it is proven unsolvable via John Bell's inequality theorem is consistent with "The Law" because of a proven string of fallacies in reasoning that are only consistent to be the greatest bosses of god in peer review.
The antithesis is falsified because, in breach of the 3rd LOE prohibiting leaving any Socratic questions unanswered, the wise judge on the social contract with humanity must do what science failed to do and provide, for instance, the Gravity Angel as a historic precedent solution, because claiming best practice by not taking a shot at the goal is worse than at least taking a shot. It thus does not constitute a strawman fallacy. Contrary to angels, snowflakes and beehives, etcetera, are observed. Observing many life-death Nirvana movie cycles also is the least ill-founded base for any elementary model. 5th LHN FACT Part of the 3rd LOE Bayesian procedure is the operation of hypothesis, antithesis, and thesis as a probability, having assumed something as 100% true defined as a fact. That, at an elementary level, facts exist in infinite non-imaginary workability is hereby mathematically proven in number salad. Facts exist even though they are not all directly completely observable: facts pro a probandum, facts con, and the temporarily established facts as a proven best-practice inherent circular argument on a goal. The cat is murdered and dead or not dead, maybe attempted murder. Taking the quantum-weird fact that the cat is both dead and alive at the same time is absurd and thus falsified as inconsistent, because it is a contradiction and thus proven to be anti-scientific pseudo-science on any elementary scientific claim. Having one or more elements when moving requires a volume of space.
6th LOE INFINITE SPACE There is one element of infinite 3D Euclidean empty, thus non-curved, space ether (an element that completely surrounds all objects), dynamically constantly invaded by elements of non-empty mass filling the 3D Euclidean volumes of dynamic elements. Euclid's parallel postulate is proven beyond reasonable doubt as a geometric theorem, as the only logical explanation consistent with all known data taken into evidence, and not an axiomatic assumption taking the liberty of unreasonable doubt by any majority of peers in breach of the Order of the 4th LOE. That the volume, form, and kg mass remain absolutely the same in infinity is the *only* logical solution. 6th LHN FREEDOM Taking the artistic freedom of consciously ignoring slight errors and accepting unwitting uncertainty errors to test in trial and error is a dictate of "The Law". Your freedom in all aspects of "breathing space" is limited by the rules of invading mass.
This tutorial presents an introduction to Electrochemical Impedance Spectroscopy (EIS) theory and has been kept as free from mathematics and electrical theory as possible. If you still find the material presented here difficult to understand, don't stop reading. You will get useful information from this application note, even if you don't follow all of the discussions.
Four major topics are covered in this Application Note.
AC Circuit Theory and Representation of Complex Impedance Values
Physical Electrochemistry and Circuit Elements
Common Equivalent Circuit Models
Extracting Model Parameters from Impedance Data
No prior knowledge of electrical circuit theory or electrochemistry is assumed. Each topic starts out at a quite elementary level, then proceeds to cover more advanced material.
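As a taste of the first two topics, complex impedance values for a simple equivalent circuit can be computed directly. The sketch below evaluates a simplified Randles cell (solution resistance in series with a charge-transfer resistance in parallel with a double-layer capacitance), a standard textbook equivalent circuit rather than a specific figure from this note; the parameter values in the usage note are illustrative only:

```python
import math

def z_randles_simplified(freq_hz, r_s, r_ct, c_dl):
    """Complex impedance of a simplified Randles cell:
    R_s in series with (R_ct parallel to C_dl)."""
    omega = 2 * math.pi * freq_hz
    z_c = 1 / (1j * omega * c_dl)            # capacitor impedance
    z_parallel = (r_ct * z_c) / (r_ct + z_c)  # R_ct || C_dl
    return r_s + z_parallel
```

Sweeping `freq_hz` over several decades and plotting -Im(Z) against Re(Z) gives the familiar Nyquist semicircle: at low frequency the impedance approaches R_s + R_ct, and at high frequency the capacitor shorts out R_ct, leaving only R_s.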
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically)
The Yang-Mills Existence and Mass Gap Problem is one of the most significant open problems in mathematical physics and quantum field theory. The Clay Mathematics Institute has posed the challenge of rigorously proving that a non-abelian Yang-Mills theory in four-dimensional space-time exhibits a mass gap, meaning that the lowest-energy excitations have strictly positive mass. While numerical lattice simulations strongly suggest that a mass gap exists, a non-perturbative, mathematically rigorous proof remains elusive.
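The mass gap condition described above has a compact standard formulation (a paraphrase of the Clay problem statement, not an equation taken from this paper):

```latex
% H: the quantum Yang-Mills Hamiltonian on the physical Hilbert space,
% with a unique vacuum state of energy 0. A mass gap \Delta exists if
\operatorname{spec}(H) \subseteq \{0\} \cup [\Delta, \infty),
\qquad \Delta > 0 ,
```

i.e. the lowest-lying excitation above the vacuum has strictly positive energy Δ.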
In this paper, we propose a framework for solving the Yang-Mills existence and mass gap problem by developing a constructive approach to quantum Yang-Mills theory. We begin by defining a mathematically rigorous formulation of quantum gauge fields using functional analysis, Hilbert space techniques, and the Osterwalder-Schrader reflection positivity framework. The existence of a well-defined quantum Yang-Mills Hamiltonian is established through non-perturbative renormalization techniques, ensuring a finite energy spectrum.
To demonstrate the presence of a mass gap, we employ several independent strategies: (1) Spectral analysis of the Hamiltonian operator, proving the existence of an energy gap in the vacuum state; (2) Wilson loop confinement criteria, establishing an area law for large gauge loops and demonstrating that excitations require finite energy; and (3) Schwinger-Dyson equations, applying self-consistent integral equation techniques to show the emergence of a nonzero mass scale. Additionally, insights from the Gribov-Zwanziger scenario provide supporting arguments for infrared suppression of long-wavelength gluonic modes, reinforcing the existence of a mass gap.
Our results provide a rigorous foundation for Yang-Mills theory, demonstrating that the mass gap is a necessary consequence of the structure of non-abelian gauge fields in four-dimensional space-time. The implications extend to both quantum chromodynamics (QCD) and potential applications in quantum gravity, particularly in holography and AdS/CFT duality. Finally, we outline directions for future work in constructive quantum field theory and the mathematical formulation of gauge theories.
This study advances our understanding of non-abelian gauge theories and provides a candidate solution to one of the Millennium Prize Problems. Further refinement and formal proof verification will be necessary to fully establish its correctness within the framework of rigorous mathematical physics.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically)
ABSTRACT Defining the optimum points for installing a cable logging system is a problem faced by forestry planners. This study evaluated the application of a mathematical programming model for the optimal location of cable logging in wood extraction. The study was conducted in a forestry company located in Parana State, Brazil. We collected data during timber harvesting and developed mathematical models to define the optimal location of the cable logging considering the variables "cycle time" and "extraction distance". The variable "cycle time" affected the definition of the optimal location of equipment, resulting in a reduced number of installation points with the largest coverage area. The variable "extraction distance" negatively influenced the location, with an increased number of installation points with smaller coverage. The developed model was efficient, but needs to be improved in order to ensure greater accuracy in wood extraction over long distances.
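The location problem sketched in the abstract is a covering problem: choose installation points so that every stand lies within the maximum extraction distance. The paper solves it with a mathematical programming model; below is only a greedy illustrative stand-in, with hypothetical names and a caller-supplied distance function:

```python
def greedy_installation_points(points, stands, reach, dist):
    """Greedy covering sketch: repeatedly pick the candidate installation
    point that covers the most still-uncovered stands within the maximum
    extraction distance `reach`. Illustrative only; the study's actual
    formulation is an exact mathematical program."""
    uncovered = set(stands)
    chosen = []
    while uncovered:
        best = max(points,
                   key=lambda p: sum(1 for s in uncovered if dist(p, s) <= reach))
        covered = {s for s in uncovered if dist(best, s) <= reach}
        if not covered:
            raise ValueError("some stands are unreachable from any candidate point")
        chosen.append(best)
        uncovered -= covered
    return chosen
```

A shorter `reach` (mirroring the negative influence of extraction distance reported above) forces more installation points, each with smaller coverage.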
The study of disorders of consciousness (DoC) is very complex because patients suffer from a wide variety of lesions and affected brain mechanisms, show different severities of symptoms, and are unable to communicate. Combining neuroimaging data and mathematical modeling can help us quantify and better describe some of these alterations. The goal of this study is to provide a new analysis and modeling pipeline for fMRI data leading to new diagnosis and prognosis biomarkers at the individual patient level. To do so, we project patients' fMRI data into a low-dimensional latent space. We define the latent space's dimension as the smallest dimension able to maintain the complexity, non-linearities, and information carried by the data, according to different criteria that we detail in the first part. This dimensionality reduction procedure then allows us to build biologically inspired latent whole-brain models that can be calibrated at the single-patient level. In particular, we propose a new model inspired by the regulation of neuronal activity by astrocytes in the brain. This modeling procedure leads to two types of model-based biomarkers (MBBs) that provide novel insight at different levels: (1) the connectivity matrices carry information about the severity of the patient's diagnosis, and (2) the local node parameters correlate with the patient's etiology, age, and prognosis. Altogether, this study offers a new data processing framework for resting-state fMRI which provides crucial information regarding the diagnosis and prognosis of DoC patients. Finally, this analysis pipeline could be applied to other neurological conditions.
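One common criterion for choosing the smallest adequate latent dimension — retaining a target fraction of variance under a linear, PCA-like projection — can be sketched as follows. This is only one of the several criteria the study combines, and the threshold and toy data here are illustrative, not taken from the paper:

```python
import numpy as np

def smallest_latent_dim(data, var_threshold=0.95):
    """Smallest k whose top-k principal components retain var_threshold of the variance."""
    centered = data - data.mean(axis=0)
    # Singular values squared are proportional to per-component variances.
    s = np.linalg.svd(centered, compute_uv=False)
    ratio = np.cumsum(s**2) / np.sum(s**2)
    # First index where the cumulative variance ratio reaches the threshold.
    return int(np.searchsorted(ratio, var_threshold) + 1)

# Illustrative data: 3-D points whose variance lies almost entirely in 2 directions.
data = np.array([
    [ 10.0,  0.0,  0.1],
    [-10.0,  0.0, -0.1],
    [  0.0,  5.0,  0.0],
    [  0.0, -5.0,  0.0],
])
k = smallest_latent_dim(data)  # 2 components suffice at the 95% threshold
```

In the study itself the retained dimension must also preserve non-linear structure, so a variance criterion like this would be a lower bound rather than the final choice.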
ABSTRACT The goal of this study was to define an empirical model to calculate leaf area in rice from linear leaf measurements in genotypes used by farmers in Brazil. From the leaf dimensions it is possible to estimate the final crop yield through the LAI; moreover, the shape of the leaves is closely related to the production of photoassimilates that will be converted into grain yield. Field experiments were carried out in four counties of Rio Grande do Sul with twenty-three rice varieties over four growing seasons. We measured the length and width of leaves to construct the model. The relationship between leaf area and linear dimensions was modeled using a linear model for each genotype, and a general model grouping all genotypes. The model accuracy was measured with the following statistics: Root Mean Square Error, BIAS, modified index of agreement, and coefficient r. The non-destructive method for individual leaves was appropriate for estimating leaf area in rice. Moreover, the general equation can be used for all modern rice genotypes in Brazil.
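The non-destructive estimate described above — a linear model relating leaf area to the product of length and width — can be sketched as a least-squares fit through the origin. The coefficient and the sample measurements below are illustrative, not values from the study:

```python
import numpy as np

def fit_leaf_area_model(length, width, area):
    """Fit LA = k * (L * W) by least squares through the origin; return k."""
    x = np.asarray(length) * np.asarray(width)
    y = np.asarray(area)
    return float(np.dot(x, y) / np.dot(x, x))

def estimate_leaf_area(length, width, k):
    """Predict leaf area from the linear dimensions."""
    return k * length * width

# Hypothetical leaf measurements in cm and cm^2 (not data from the study).
L = [20.0, 25.0, 30.0]       # leaf lengths
W = [1.0, 1.2, 1.4]          # leaf widths
A = [15.0, 22.5, 31.5]       # measured areas

k = fit_leaf_area_model(L, W, A)
```

The accuracy statistics named in the abstract (RMSE, BIAS, modified index of agreement, r) would then be computed by comparing `estimate_leaf_area` predictions against destructively measured areas on an independent sample.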
Existing mathematical models for glucose-insulin (G-I) dynamics often involve variables that are not susceptible to direct measurement. Standard clinical tests for measuring G-I levels to diagnose potential diseases are simple and relatively cheap, but seldom give enough information to identify model parameters within the range in which they have a biological meaning, thus generating a gap between mathematical modeling and any possible physiological explanation or clinical interpretation. In the present work, we present a synthetic mathematical model of G-I dynamics in an Oral Glucose Tolerance Test (OGTT) which, for the first time among OGTT-related models, involves delay differential equations. Our model can represent the radically different behaviors observed in a studied cohort of 407 normoglycemic patients (the largest analyzed so far in parameter-fitting experiments), all masked under the current threshold-based normality criteria. We also propose a novel approach to the parameter-fitting inverse problem, involving the clustering of different G-I profiles, a simulation-based exploration of the feasible set, and the construction of an information function that reshapes it based on clinical records, experimental uncertainties, and physiological criteria. This method allowed individual-wise recognition of the parameters of our model directly from small OGTT data sets (5 measurements), without modifying routine procedures or requiring particular clinical setups. Therefore, our methodology can easily be applied to gain parametric insights that complement the existing tools for diagnosing G-I dysregulations. We tested parameter stability and sensitivity for individual subjects, and an empirical relationship between these indexes and curve shapes was observed.
Since different G-I profiles are, in the light of our model, related to different physiological mechanisms, the present method offers a tool for personalized diagnosis and treatment, and for better defining new health criteria.
An analytical solution was developed to evaluate free-draining sloping furrows based on the volume balance approach of Walker and Skogerboe (1987). Its application allows the mathematical calculation of system performance, instead of constructing graphs from which the evaluation parameters are read. The analytical solution is based on fitting power (potential) models to the data, calculating the areas under the curves defined by the fits, and locating the curve intersection, from which the system performance is obtained. To validate the methodology, we conducted a field experiment at the Experimental Farm of the Federal University of Ceará, belonging to the Center of Agricultural Sciences, in the municipality of Pentecoste, where the field data were analyzed by both methods. The results showed that the analytical methodology can be used to evaluate free-draining sloping furrow irrigation.
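The analytical steps described — fitting power models, integrating them in closed form, and locating curve intersections — can be sketched as follows. The parameter values are hypothetical, chosen only to illustrate the closed-form calculations:

```python
def power_fit_area(a, b, t):
    """Closed-form area under x(t) = a * t**b from 0 to t (b > -1)."""
    return a * t ** (b + 1) / (b + 1)

def power_curve_intersection(a1, b1, a2, b2):
    """Positive time at which a1*t**b1 and a2*t**b2 intersect (requires b1 != b2)."""
    return (a2 / a1) ** (1.0 / (b1 - b2))

# Hypothetical fitted power-model parameters (not from the study).
t_star = power_curve_intersection(1.0, 2.0, 4.0, 1.0)  # curves cross at t = 4
area = power_fit_area(1.0, 2.0, t_star)                # area under t**2 up to t_star
```

Replacing graphical area and intersection readings with these closed-form expressions is what lets the evaluation be computed directly from the fitted coefficients.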
AIC differences (ΔAIC) are quoted relative to the minimum AIC value across all models. The model with ΔAIC = 0 has the lowest AIC and thus the most statistical support. See Table 1 for parameter definitions.
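The ΔAIC convention above amounts to subtracting the best (minimum) AIC from every model's AIC; a minimal sketch, with hypothetical model names and AIC values:

```python
def delta_aic(aics):
    """Return AIC differences relative to the best (minimum-AIC) model."""
    best = min(aics.values())
    return {name: aic - best for name, aic in aics.items()}

# Hypothetical AIC values for three candidate models.
models = {"M1": 210.4, "M2": 208.1, "M3": 215.0}
deltas = delta_aic(models)  # M2 gets deltaAIC = 0 and has the most support
```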