24 datasets found
  1. Data from: Meaning of derivative in the book tasks of 1st of “Bachillerato”

    • scielo.figshare.com
    jpeg
    Updated Jun 1, 2023
    María Fernanda Vargas; José Antonio Fernández-Plaza; Juan Francisco Ruiz-Hidalgo (2023). Meaning of derivative in the book tasks of 1st of “Bachillerato” [Dataset]. http://doi.org/10.6084/m9.figshare.14304760.v1
    Available download formats: jpeg
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    SciELO (http://www.scielo.org/)
    Authors
    María Fernanda Vargas; José Antonio Fernández-Plaza; Juan Francisco Ruiz-Hidalgo
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Abstract Given the importance of textbooks in the teaching and learning of mathematics, this article focuses on the derivative tasks proposed in five 1st of Bachillerato textbooks. The goal is to identify the meanings of the derivative conveyed in the textbooks through the proposed tasks. It is a quantitative study in which the tasks were grouped by similarity using a cluster analysis. The results show that the books emphasize three meanings of the derivative: one procedural-algebraic, one algorithmic, and one conceptual-geometric, all of them dominated by the symbolic representation system and presented exclusively in a mathematical context.

  2. Data from: MathCheck

    • huggingface.co
    Updated Jul 12, 2024
    PremiLab-Math (2024). MathCheck [Dataset]. https://huggingface.co/datasets/PremiLab-Math/MathCheck
    Available download formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jul 12, 2024
    Dataset authored and provided by
    PremiLab-Math
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Exceptional mathematical reasoning ability is one of the key features that demonstrate the power of large language models (LLMs). How to comprehensively define and evaluate the mathematical abilities of LLMs, and even reflect the user experience in real-world scenarios, has emerged as a critical issue. Current benchmarks predominantly concentrate on problem-solving capabilities, which presents a substantial risk of model overfitting and fails to accurately represent genuine mathematical… See the full description on the dataset page: https://huggingface.co/datasets/PremiLab-Math/MathCheck.

  3. Data from: The Objective Concept of a Mathematical Task for Future Teachers

    • scielo.figshare.com
    • figshare.com
    jpeg
    Updated Jun 1, 2023
    Carmen Gloria Aguayo-Amagada; Pablo Flores; Antonio Moreno (2023). The Objective Concept of a Mathematical Task for Future Teachers [Dataset]. http://doi.org/10.6084/m9.figshare.7245086.v1
    Available download formats: jpeg
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    SciELO journals
    Authors
    Carmen Gloria Aguayo-Amagada; Pablo Flores; Antonio Moreno
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Abstract Within the line of teacher training, this work presents aspects of research with future elementary school teachers, focusing on how students at the University of Granada interpret the objective as an element of analysis of a school mathematical task, framed within Didactic Analysis as a functional tool in initial teacher education. A qualitative methodology based on content analysis was followed. Prior work shows the importance of school tasks in fostering mathematics learning, and the results show the difficulty future teachers have in establishing and defining the objective of a school mathematical task.

  4. Comparative Judgement of Statements About Mathematical Definitions

    • dataverse.azure.uit.no
    • dataverse.no
    csv, txt
    Updated Sep 28, 2023
    Tore Forbregd; Hermund Torkildsen; Eivind Kaspersen; Trygve Solstad (2023). Comparative Judgement of Statements About Mathematical Definitions [Dataset]. http://doi.org/10.18710/EOZKTR
    Available download formats: txt(3623), csv(37503), csv(43566), csv(2523)
    Dataset updated
    Sep 28, 2023
    Dataset provided by
    DataverseNO
    Authors
    Tore Forbregd; Hermund Torkildsen; Eivind Kaspersen; Trygve Solstad
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Data from a comparative judgement survey of 62 working mathematics educators (ME) at Norwegian universities or university colleges and 57 working mathematicians (WM) at Norwegian universities. It contains a total of 3607 comparisons: 1780 by the ME and 1827 by the WM. Respondents compared pairs of statements about mathematical definitions compiled from a literature review on mathematical definitions in the mathematics education literature. Each WM was asked to judge 40 pairs of statements with the following question: “As a researcher in mathematics, where your target group is other mathematicians, what is more important about mathematical definitions?” Each ME was asked to judge 41 pairs of statements with the following question: “For a mathematical definition in the context of teaching and learning, what is more important?” The comparative judgement was carried out with the No More Marking software (nomoremarking.com). The data set consists of the following files: comparisons made by ME (ME.csv), comparisons made by WM (WM.csv), and a look-up table mapping statement codes to statement formulations (key.csv). Each line in a comparison file represents one comparison, where the "winner" column gives the winner and the "loser" column the loser of the comparison.
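The winner/loser layout described above lends itself to a simple first-pass analysis. A minimal sketch that tallies each statement's share of won comparisons; the statement codes S1–S3 and the sample rows are invented placeholders, not rows from ME.csv or WM.csv:

```python
from collections import defaultdict

# Hypothetical comparison rows in the "winner"/"loser" shape described
# for ME.csv and WM.csv.
comparisons = [
    {"winner": "S1", "loser": "S2"},
    {"winner": "S1", "loser": "S3"},
    {"winner": "S2", "loser": "S3"},
]

def win_fractions(rows):
    """Return each statement's share of the comparisons it appeared in."""
    wins = defaultdict(int)
    appearances = defaultdict(int)
    for row in rows:
        wins[row["winner"]] += 1
        appearances[row["winner"]] += 1
        appearances[row["loser"]] += 1
    return {s: wins[s] / appearances[s] for s in appearances}

scores = win_fractions(comparisons)  # e.g. S1 wins both of its comparisons
```

A full analysis would typically fit a Bradley-Terry model to these pairs rather than raw win fractions, but the tally above is enough to sanity-check the file structure.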

  5. Data from: Linguistic Appropriation and Meaning in Mathematical Modeling Activities

    • scielo.figshare.com
    jpeg
    Updated May 31, 2023
    Bárbara Nivalda Palharini Alvim Sousa; Lourdes Maria Werle de Almeida (2023). Linguistic Appropriation and Meaning in Mathematical Modeling Activities [Dataset]. http://doi.org/10.6084/m9.figshare.11314559.v1
    Available download formats: jpeg
    Dataset updated
    May 31, 2023
    Dataset provided by
    SciELO (http://www.scielo.org/)
    Authors
    Bárbara Nivalda Palharini Alvim Sousa; Lourdes Maria Werle de Almeida
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Abstract In this paper we turn our attention to the different language games associated with the development of Mathematical Modelling activities and to the meanings students constitute within these language games in relation to first-order ordinary differential equations. The research is grounded in Mathematical Modelling in Mathematics Education and takes as its philosophical basis the studies of Ludwig Wittgenstein and some of his interpreters. Considering these theoretical-philosophical elements, mathematical modelling activities were developed in an Ordinary Differential Equations course of a Mathematics degree program. Data were collected through written records, audio and video recordings, questionnaires, and interviews. The data analysis methodology considers the students' discursive practices and allowed us to construct trees of idea association. The results indicate that the constitution of meaning within modelling activities is associated with the students' linguistic appropriation of the rules and techniques configured in the specific language games identified in the Mathematical Modelling activities.

  6. Data from: The Berth Allocation Problem with Channel Restrictions - Datasets...

    • researchdatafinder.qut.edu.au
    • researchdata.edu.au
    Updated Jun 22, 2018
    Dr Paul Corry (2018). The Berth Allocation Problem with Channel Restrictions - Datasets [Dataset]. https://researchdatafinder.qut.edu.au/individual/n4992
    Dataset updated
    Jun 22, 2018
    Dataset provided by
    Queensland University of Technology (QUT)
    Authors
    Dr Paul Corry
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    These datasets relate to the computational study presented in the paper "The Berth Allocation Problem with Channel Restrictions", authored by Paul Corry and Christian Bierwirth. They consist of all the randomly generated problem instances along with the computational results presented in the paper.

    Results across all problem instances assume ship separation parameters of [delta_1, delta_2, delta_3] = [0.25, 0, 0.5].

    Excel Workbook Organisation:

    The data is organised into separate Excel files for each table in the paper, as indicated by the file description. Within each file, each row of the corresponding table (aggregating 10 replications) is captured in two worksheets: one with the problem instance data and the other with solution data generated by the several solution methods described in the paper. For example, row 3 of Tab. 2 will have data for 10 problem instances on worksheet T2R3 and corresponding solution data on T2R3X.

    Problem Instance Data Format:

    On each problem instance worksheet (e.g. T2R3), each row of data corresponds to a different problem instance, and there are 10 replications on each worksheet.

    The first column provides a replication identifier which is referenced on the corresponding solution worksheet (e.g. T2R3X).

    Following this, there are n*(2c+1) columns (n = number of ships, c = number of channel segments) with headers p(i)_(j).(k), where i references the operation (channel transit/berth visit) id, j references the ship id, and k references the index of the operation within the ship. All indexing starts at 0. These columns define the transit or dwell times on each segment. A value of -1 indicates a segment on which a berth allocation must be applied, and hence the dwell time is unknown.

    There are then a further n columns with headers r(j), defining the release times of each ship.

    For ChSP problems, there are a final n columns with headers b(j), defining the berth to be visited by each ship. ChSP problems with fixed berth sequencing enforced have an additional n columns with headers toa(j), indicating the order in which ship j sits within its berth sequence. For BAP-CR problems, these columns are not present, but are replaced by n*m columns (m = number of berths) with headers p(j).(b), defining the berth processing time of ship j if allocated to berth b.
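As a sanity check on this layout, the expected number of columns per instance row can be computed from n, c, and m. The helper below is a hypothetical illustration of the counts described above, not code shipped with the dataset:

```python
# Expected column counts per problem-instance row, following the described
# layout: n ships, c channel segments, m berths. The function name and
# parameters are my own illustration.
def instance_column_count(n, c, m=None, problem="ChSP", fixed_sequencing=False):
    cols = 1                  # replication identifier
    cols += n * (2 * c + 1)   # p(i)_(j).(k) transit/dwell-time columns
    cols += n                 # r(j) release times
    if problem == "ChSP":
        cols += n             # b(j) berth assignments
        if fixed_sequencing:
            cols += n         # toa(j) berth-sequence positions
    else:                     # BAP-CR
        cols += n * m         # p(j).(b) berth processing times
    return cols
```

For example, a ChSP instance with 2 ships and 3 channel segments should have 1 + 2*7 + 2 + 2 = 19 columns.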

    Solution Data Format:

    Each row of data corresponds to a different solution.

    Column A references the replication identifier (from the corresponding instance worksheet) that the solution refers to.

    Column B defines the algorithm that was used to generate the solution.

    Column C shows the objective function value (total waiting and excess handling time) obtained.

    Column D shows the CPU time consumed in generating the solution, rounded to the nearest second.

    Column E shows the optimality gap as a proportion. A value of -1 or an empty value indicates that optimality gap is unknown.

    From column F onwards, there are n*(2c+1) columns with the previously described p(i)_(j).(k) headers. The values in these columns define the entry times at each segment.

    For BAP-CR problems only, following this there are a further 2n columns. For each ship j, there will be columns titled b(j) and p.b(j) defining the berth that was allocated to ship j, and the processing time on that berth respectively.
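The p(i)_(j).(k) headers used on both the instance and solution worksheets can be decoded mechanically. A small sketch, assuming the header spelling given above (an optional trailing dot is tolerated):

```python
import re

# Parse a p(i)_(j).(k) column header into (operation id, ship id,
# operation index within the ship). All indexing starts at 0.
HEADER_RE = re.compile(r"p\((\d+)\)_\((\d+)\)\.\((\d+)\)\.?")

def parse_header(header):
    m = HEADER_RE.fullmatch(header)
    if m is None:
        raise ValueError(f"not a p(i)_(j).(k) header: {header!r}")
    return tuple(int(g) for g in m.groups())
```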

  7. GSM8K - Grade School Math 8K Q&A

    • kaggle.com
    zip
    Updated Nov 24, 2023
    The Devastator (2023). GSM8K - Grade School Math 8K Q&A [Dataset]. https://www.kaggle.com/datasets/thedevastator/grade-school-math-8k-q-a
    Available download formats: zip (3418660 bytes)
    Dataset updated
    Nov 24, 2023
    Authors
    The Devastator
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/

    Description

    GSM8K - Grade School Math 8K Q&A

    A Linguistically Diverse Dataset for Multi-Step Reasoning Question Answering

    By Huggingface Hub [source]

    About this dataset

    This Grade School Math 8K Linguistically Diverse Training & Test Set is designed to help you develop and improve multi-step reasoning for question answering. The dataset contains three separate data files: socratic_test.csv, main_test.csv, and main_train.csv, each containing grade school math questions whose answers require multiple steps. Each file has the same columns: question and answer. The questions are crafted to lead you through the reasoning journey to the correct answer, offering ample opportunity to learn through practice. With over eight thousand entries across the training and test splits, it takes solid multi-step reasoning skills to ace these questions!


    How to use the dataset

    This dataset provides an opportunity to study multi-step reasoning for question answering. It contains roughly 8,000 grade school math questions, each paired with a worked answer, covering topics such as algebra, arithmetic, and probability.

    The files main_train.csv and main_test.csv hold the training and test questions, while socratic_test.csv provides a Socratic variant of the test questions. Each file has two columns, question and answer; the answer column walks through the reasoning steps needed to reach the result. These columns can be paired with text-representation models such as ELMo or BERT to explore different input formats for question-answering tasks, or used to build predictive models over the numerical content.

    To use this dataset efficiently, first familiarize yourself with its structure by reading the documentation, so that you know the content and format of the available fields; then study the examples that best suit your purpose, whether that is an education-research experiment, an analytics report, or building predictive models.
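A minimal sketch of reading one of the described files with Python's csv module; only the column names (question, answer) come from the description above, and the two rows are invented placeholders, not actual GSM8K entries:

```python
import csv
import io

# Stand-in for e.g. open("main_train.csv"); the rows are invented examples
# in the question/answer shape described above.
sample = io.StringIO(
    "question,answer\n"
    '"What is 2 + 3?","2 + 3 = 5. The answer is 5."\n'
    '"What is 10 - 4?","10 - 4 = 6. The answer is 6."\n'
)
rows = list(csv.DictReader(sample))
questions = [r["question"] for r in rows]
```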

    Research Ideas

    • Training language models for improving accuracy in natural language processing applications such as question answering or dialogue systems.
    • Generating new grade school math questions and answers using g...
  8. Supplementary Materials "How we think about numbers - Early counting and mathematical abstraction" - Chapter 2

    • repository.lboro.ac.uk
    zip
    Updated May 28, 2025
    Theresa Wege (2025). Supplementary Materials "How we think about numbers - Early counting and mathematical abstraction" - Chapter 2 [Dataset]. http://doi.org/10.17028/rd.lboro.22354066.v1
    Available download formats: zip
    Dataset updated
    May 28, 2025
    Dataset provided by
    Loughborough University
    Authors
    Theresa Wege
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    Supplementary Materials for Chapter 2 of the doctoral dissertation "How we think about numbers - Early counting and mathematical abstraction". Contains preregistrations, open data, and open materials for study 1 and study 2.

    As children learn to count, they make one of their first mathematical abstractions. They initially learn how numbers in the count sequence correspond to quantities of physical things if the rules of counting are followed (i.e., if you say the numbers in order “one two three four …” as you tag each thing with a number). Around the age of four, children discover that these rules also define numbers in relation to each other, such that numbers contain meaning in themselves and without reference to the physical world (e.g., “five” is “one” more than “four”). It is through learning to count that children discover the natural numbers as mathematical symbols defined by abstract rules.

    In this dissertation, I explored the developmental trajectory and the cognitive mechanisms of how we come to understand the natural numbers as children. I present new methodological, empirical, and theoretical insights on how and when, in the process of learning to count, children discover that numbers represent cardinalities, that numbers can be defined in relation to each other by the successor function, and that numbers refer to units. Lastly, I explore this mathematical abstraction as the foundation of how we think about numbers as adults.

    My work critically tested prominent theories on how learning to count gives meaning to numbers through analogical mapping and conceptual bootstrapping. Findings across five empirical studies suggest that the process is more gradual and continuous than previous theories have proposed. Children begin to understand numbers as cardinalities defined in relation to other numbers by the successor function before they fully grasp the rules of counting. With learning the rules of counting, this understanding continuously expands and matures. I further suggest that children may only fully understand numbers as abstract mathematical symbols once they understand how counting and numbers refer to the abstract notion of units rather than to physical things.

    The central finding of this dissertation is that learning to count does not change children’s understanding of numbers altogether and all at once. Nonetheless, when learning to count, children accomplish a fascinating mathematical abstraction, which builds the foundation for lifelong mathematical learning.

    © Theresa Elise Wege, CC BY-NC 4.0

  9. GSM8K - Grade School Math 8K dataset for LLM

    • kaggle.com
    zip
    Updated May 21, 2024
    Johnson chong (2024). GSM8K - Grade School Math 8K dataset for LLM [Dataset]. https://www.kaggle.com/datasets/johnsonhk88/gsm8k-grade-school-math-8k-dataset-for-llm
    Available download formats: zip (5156809 bytes)
    Dataset updated
    May 21, 2024
    Authors
    Johnson chong
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    Dataset Summary GSM8K (Grade School Math 8K) is a dataset of 8.5K high-quality, linguistically diverse grade school math word problems. The dataset was created to support the task of question answering on basic mathematical problems that require multi-step reasoning.

    These problems take between 2 and 8 steps to solve. Solutions primarily involve performing a sequence of elementary calculations using basic arithmetic operations (+, −, ×, ÷) to reach the final answer. A bright middle school student should be able to solve every problem: from the paper, "Problems require no concepts beyond the level of early Algebra, and the vast majority of problems can be solved without explicitly defining a variable." Solutions are provided in natural language, as opposed to pure math expressions. From the paper: "We believe this is the most generally useful data format, and we expect it to shed light on the properties of large language models’ internal monologues."
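In the original GSM8K release, each natural-language solution ends with the final numeric answer after a "#### " marker. Assuming this copy of the data keeps that convention, a small extractor might look like:

```python
# Pull the final numeric answer off the end of a GSM8K-style solution
# string, which by convention ends with "#### <answer>".
def final_answer(solution: str) -> str:
    marker = "#### "
    if marker not in solution:
        raise ValueError("no final-answer marker found")
    return solution.rsplit(marker, 1)[1].strip()

# Invented example in the GSM8K answer format, not a real dataset entry.
example = "She sold 5 + 3 = 8 cakes.\nEach cake is $2, so 8 * 2 = 16.\n#### 16"
```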

  10. Data from: Data for Figure 4 from What is the physical origin of the gradient flow structure of variational fracture models?

    • rs.figshare.com
    application/csv
    Updated Jul 14, 2024
    Masato Kimura; Takeshi Takaishi; Yoshimi Tanaka (2024). Data for Figure 4 from What is the physical origin of the gradient flow structure of variational fracture models? [Dataset]. http://doi.org/10.6084/m9.figshare.26196649.v1
    Available download formats: application/csv
    Dataset updated
    Jul 14, 2024
    Dataset provided by
    Royal Society (http://royalsociety.org/)
    Authors
    Masato Kimura; Takeshi Takaishi; Yoshimi Tanaka
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    G_crack_length.csv

  11. Extractors manual for Oil Shale Data Base System: Test Data Data Base

    • data.wu.ac.at
    html
    Updated Sep 29, 2016
    (2016). Extractors manual for Oil Shale Data Base System: Test Data Data Base [Dataset]. https://data.wu.ac.at/odso/edx_netl_doe_gov/NGVlMGQwNjEtZDJlMy00YjU5LTg4ZTMtNzZmZmM1MDlmNzc4
    Available download formats: html
    Dataset updated
    Sep 29, 2016
    Description

    The most abundant energy sources in the United States are hydrocarbon fossil fuels consisting of oil, gas, oil shale, and coal. Currently, the most important of these energy sources are crude oil and natural gas. Although supplies are adequate today, it must be realized that oil and gas are depletable resources. Within the next few years, the increasing demand for liquid fuels will necessitate supplementing domestic supplies of crude oil and natural gas with synthetic fuels such as those from oil shale. To date, those persons working in the development of oil shale technology have found limited amounts of reference data. If data from research and development (R&D) could be made publicly available, however, several functions could be served: the duplication of work could be avoided, documented test material could serve as a basis for further developments, and research costs could possibly be reduced. To capture the results of Government-sponsored oil shale research programs, documents have been written to specify the data that contractors need to report and the procedures for reporting them. The documents identify and define the data from oil shale projects to be entered into the Major Plants Data Base (MPDB), Test Data Data Base (TDDB), Resource Extraction Data Base (REDB), and Math Modeling Data Base (MMDB), which together meet the needs of the users of the oil shale data system. This document addresses what information is needed and how it must be formatted so that it can be entered into the TDDB for oil shale.

  12. The Law

    • zenodo.org
    Updated Mar 19, 2025
    Gerhard Ris (2025). The Law [Dataset]. http://doi.org/10.5281/zenodo.15051437
    Dataset updated
    Mar 19, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Gerhard Ris
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The First Law of Everything (LOE)/ Law of Nature (Newton)

    1st LOE CLASSICAL MECHANICS viewed by homo sapiens for homo sapiens in the derived 1st local law the Law of Human Nature 1st LHN Completeness. Before judging, always try to get the whole Nirvana movie scenario/ composition picture viewed in a thus holistic dualistic reductio ad absurdum way by mentally splitting the unsplittable of the incomplete subset Nirvana movie, you are in, of the superset Nirvana movie. 1st LOE the only one remaining axiomatic assumption is one Consistent hence completely noncontradictory loophole-free everything/cosmos. This defines the falsifying ‘absurd’ qualification. The largest combining super set that can and must as exclusive parts of the whole be described using a few slight twists by the five laws of thermodynamics: 0th LTD Mass Inertia Kg remains perpetually identical on a cosmological timescale in super set and smallest set (Lifeless, timeless, meaningless, non free will robotics); 1st LTD Conservation of Energy relative moving action is reaction mass on a cosmological timescale (Intelligent action is intelligent reaction); 2nd LTD mounting & declining complexity mass connections disconnections with permanent and non-permanent uniquely movable identical sorts of connections of identical mass Entropy Cycle on a cosmological timescale (Unique temporary local Conscious memory banks); 3rd LTD in Perpetual Action Reaction identical repetitive within limits unique motion on a cosmological timescale (Identical History repeats itself in a unique way); 4th LTD every object has an Incidental Maximum Velocity the smallest element of nigh 10c during the smallest timescale under incidental maximum resounding non wave pressure on a cosmological timescale. (Life-death cycle with meaning and free will, is a deep religious dictate requiring seemingly contradictory deterministic statistics)

    2nd LOE DETERMINISTIC STATISTICS (Gauss)/ 2nd LHN Normality Humans should act freely within the deterministic boundaries of ‘The Law’. 2nd LOE The five dualistic quantified toothed wheel-like and continuous smooth wheel-like normal/ ‘conform the norm’ distributions from which all other distributions can/ must be derived: 1. Combining Bell curve distribution; 2. Flat distribution; 3. Edge distribution; 4. Broken distribution; 5. Curved distribution. (Double helix DNA, Learning curve, Fair Dirty Distributions requiring a procedure for proof)

    3rd LOE PROOF PROCEDURE Laplace’s theorem formula works consistently defined both deterministically and probabilistically: Pure mathematical non-physics data thus ‘unreasonable doubt’ requires many axiomatic assumptions based logic formula & Bayesian applied/ dirty ‘beyond reasonable doubt’ mathematics of the physics of nature dictates that consistent input means consistent output and inconsistent input means inconsistent output by the Laplace formula given only one Bayesian axiomatic assumption based on a beginning of absolute proof on a consistently defined valued goal. Given the norm of risk as chance times valued consequence acceptable input providing ratios of chance pro versus chance con as probabilities. Prior Odds multiplied by the Independent Likelihood Ratios, provide the Posterior Odds as the new Prior Odds of the endless trial and error evidence-based cycle. 3rd LHN Procedure It’s Intuitive Common Sense to take your own robotically induced feelings as facts in the five salads that are fit for human consumption: 1. The combining One Mixed Salad, 2. Word Salads 3. Picture Salads 4. Course Number Salads 5. Fine Number Salads of this Socratic Yin & Yang Harry Potter formula as the recipe/ algorithm for the proper proof procedure that is consistent with the synapse of the instrument brain of all mammals, which is consistent with the everything cosmos as the symbiotics of which require the waves of this collective to come to more order than the current laws of nature can explain.
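Setting the surrounding framework aside, the odds form of Bayes' rule that this passage invokes (posterior odds = prior odds multiplied by the independent likelihood ratios) is standard and easy to illustrate numerically; the numbers below are arbitrary illustrations, not values from the dataset:

```python
# Odds form of Bayes' rule: multiply the prior odds by each independent
# likelihood ratio to get the posterior odds.
def posterior_odds(prior_odds, likelihood_ratios):
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

odds = posterior_odds(1.0, [2.0, 3.0])  # even prior, two pieces of evidence
probability = odds / (1.0 + odds)       # convert odds back to a probability
```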

    4th LOE ORDER (Euclidean Geometry)

    The greatest breakthrough was due to my early age Bildung on Evidence-Based-Medicine as originally intended: rational use of gut feelings 1960-1980, my Just Proof legal model 1990, my Incomplete Higgs-graviton physics model LOE 2010, my Block-model Brain 2014, LOE 2017 & Integrating All Mathematical Mixed Salads Euclidean Geometry 1-Neutrino 2023 into ‘The Law’. And subsequent constant tweaking of the presentation. First published under peer review following The Law in NWA 2015 search terms “casus ZPE” “het vergeten instrument tussen de oren”.

    Behold the most succinct presentation of the Train Your Brain instruction manual for using the collective and individual instrument brains as the 4th LOE on one A4. The elaborated model is DOI published in the Elementary List. The 4th LOE is the soul as the order function of the cosmos and our brain. The images are in the download version.

    5th LOE ZERO IS ONE LENGTH (Euler’s identity) −1 + 1 = 0, or e^(iPi) + 1 = 0: zero has a length which is not empty as part of the ruler's massive measuring device. This is consistent with and constitutes the fact (taken as 100% true) that every non-empty sign element of mass must exist in workability. And that irrational numbers become rational in infinity, not needing imaginary numbers anymore, not being lengths ‘i’, or measurement lengths for corner ‘e’ or circle lengths ‘Pi’. This is proven as a reductio ad absurdum, the strongest proof based on the beginning of absolute proof in the 3rd LOE. i.e. based on the fingerprint of the absolutely proven culprit, Mother Nature is on the mass murder weapon mass and not on matter as elementary. On a lower probative value, because more complex means it is less reliable, the proven way how the main suspect for further investigation is the Lego-Velcro particle built out of 500 identical massive rings in 10% double connections, 40% triple connections, 40% quadruple connections and 10% cinque connections creating the four inertias of the fifth unique chainmail combining inertia algorithms creating the snowflake function for the ice-wall pressure vessel/ snowball larger particle everything cosmos solely built out of snowflakes in empty space. This is an intuitive associative testable artistically creative guess. You either can or can’t build such a particle out of these elements via further quick and dirty reverse engineering. All known constants need to be reverse-engineered into this one particle. It might then show that every particle must have 501 rings instead of 500. ‘The Law’ dictates that the burden of proof via investigation lies with current science. The antithesis that it’s proven unsolvable via John Bell's inequality theorem is consistent with ‘The Law’ because of a proven string of fallacies in reasoning that are only consistent to be the greatest bosses of god in peer review.
The antithesis is falsified because in breach of the 3rd LOE prohibiting leaving any Socratic questions unanswered, the wise judge on the social contract with humanity must do what science failed to do and provide for instance the Gravity Angel as a historic precedent solution because claiming best practice by not taking a shot at the goal is worse than at least taking a shot. It thus doesn’t constitute a strawman fallacy. Contrary to angels; snowflakes and beehives, etcetera, are observed. Observing many life-death Nirvana movie cycles also is the least ill-founded base for any elementary model. 5th LHN FACT Part of the 3rd LOE Bayesian procedure is the operation of hypothesis, anti-thesis, and thesis as a probability having assumed something as 100% true defined as a fact. At an elementary level, facts exist in infinite non-imaginary workability is hereby mathematically proven in number salad. Facts exist even though they are not all directly completely observable. Facts pro a probandum, facts con and the temporarily established facts as a proven best practice inherent circular argument on a goal. The cat is murdered and dead or not dead, maybe attempted murder. Taking the quantum weird fact that the cat is both dead and alive at the same time is absurd and thus falsified as inconsistent because it’s a contradiction and thus proven to be anti-scientific pseudo-science on any elementary scientific claim. Having one or more elements when moving requires a volume of space.
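For reference, the Euler identity invoked at the start of this passage is standard mathematics: it follows from Euler's formula evaluated at θ = π.

```latex
e^{i\theta} = \cos\theta + i\sin\theta
\quad\Rightarrow\quad
e^{i\pi} = \cos\pi + i\sin\pi = -1 + 0i
\quad\Rightarrow\quad
e^{i\pi} + 1 = 0
```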

    6th LOE INFINITE SPACE There is one element of infinite 3D Euclidean empty, thus non-curved, space ether (an element that completely surrounds all objects), dynamically and constantly invaded by elements of non-empty mass filling the 3D Euclidean volumes of dynamic elements. Euclid’s parallel postulate is proven beyond reasonable doubt as a geometric theorem, as the only logical explanation consistent with all known data taken into evidence, and not an axiomatic assumption taking the liberty of unreasonable doubt by any majority of peers in breach of the Order of the 4th LOE. That the volume, form, and kg mass remain absolutely the same in infinity is the *only* logical solution. 6th LHN FREEDOM Taking the artistic freedom of consciously ignoring slight errors and accepting unwitting uncertainty errors to test in trial and error is a dictate of ‘The Law’. Your freedom in all aspects of “breathing space” is limited by the rules of invading mass.

  13. Data from: Basics of Electrochemical Impedance Spectroscopy

    • gamry.com
    Updated May 9, 2006
    (2006). Basics of Electrochemical Impedance Spectroscopy [Dataset]. https://www.gamry.com/application-notes/EIS/basics-of-electrochemical-impedance-spectroscopy
    Explore at:
    Dataset updated
    May 9, 2006
    Description

    This tutorial presents an introduction to Electrochemical Impedance Spectroscopy (EIS) theory and has been kept as free from mathematics and electrical theory as possible. If you still find the material presented here difficult to understand, don't stop reading. You will get useful information from this application note, even if you don't follow all of the discussions.

    Four major topics are covered in this Application Note.

    AC Circuit Theory and Representation of Complex Impedance Values

    Physical Electrochemistry and Circuit Elements

    Common Equivalent Circuit Models

    Extracting Model Parameters from Impedance Data

    No prior knowledge of electrical circuit theory or electrochemistry is assumed. Each topic starts out at a quite elementary level, then proceeds to cover more advanced material.
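As a companion to the circuit-element topics listed above, here is a minimal sketch of the complex impedance of a simplified Randles cell, the equivalent circuit most EIS introductions begin with. The component values are illustrative defaults, not values from the application note.

```python
import cmath
import math

def randles_impedance(freq_hz, r_s=20.0, r_ct=250.0, c_dl=40e-6):
    """Complex impedance of a simplified Randles cell: solution
    resistance R_s in series with (charge-transfer resistance R_ct
    in parallel with double-layer capacitance C_dl)."""
    omega = 2 * math.pi * freq_hz
    z_c = 1 / (1j * omega * c_dl)            # capacitor impedance 1/(jwC)
    z_parallel = (r_ct * z_c) / (r_ct + z_c) # parallel combination
    return r_s + z_parallel

# At very low frequency the capacitor blocks, so |Z| -> R_s + R_ct;
# at very high frequency it shorts out R_ct, so |Z| -> R_s.
print(abs(randles_impedance(1e-3)))  # close to 270 ohm
print(abs(randles_impedance(1e6)))   # close to 20 ohm
```

Sweeping `freq_hz` over several decades and plotting -Im(Z) against Re(Z) produces the familiar Nyquist semicircle discussed in EIS tutorials.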

  14. Yang-Mills Existence and Mass Gap Problem Solutions

    • nde-dev.biothings.io
    • data.niaid.nih.gov
    • +1more
    Updated Mar 1, 2025
    Froom, Vincent (2025). Yang-Mills Existence and Mass Gap Problem Solutions [Dataset]. https://nde-dev.biothings.io/resources?id=zenodo_14948368
    Explore at:
    Dataset updated
    Mar 1, 2025
    Dataset authored and provided by
    Froom, Vincent
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Yang-Mills Existence and Mass Gap Problem is one of the most significant open problems in mathematical physics and quantum field theory. The Clay Mathematics Institute has posed the challenge of rigorously proving that a non-abelian Yang-Mills theory in four-dimensional space-time exhibits a mass gap, meaning that the lowest-energy excitations have strictly positive mass. While numerical lattice simulations strongly suggest that a mass gap exists, a non-perturbative, mathematically rigorous proof remains elusive.

    In this paper, we propose a framework for solving the Yang-Mills existence and mass gap problem by developing a constructive approach to quantum Yang-Mills theory. We begin by defining a mathematically rigorous formulation of quantum gauge fields using functional analysis, Hilbert space techniques, and the Osterwalder-Schrader reflection positivity framework. The existence of a well-defined quantum Yang-Mills Hamiltonian is established through non-perturbative renormalization techniques, ensuring a finite energy spectrum.

    To demonstrate the presence of a mass gap, we employ several independent strategies: (1) Spectral analysis of the Hamiltonian operator, proving the existence of an energy gap in the vacuum state; (2) Wilson loop confinement criteria, establishing an area law for large gauge loops and demonstrating that excitations require finite energy; and (3) Schwinger-Dyson equations, applying self-consistent integral equation techniques to show the emergence of a nonzero mass scale. Additionally, insights from the Gribov-Zwanziger scenario provide supporting arguments for infrared suppression of long-wavelength gluonic modes, reinforcing the existence of a mass gap.

    Our results provide a rigorous foundation for Yang-Mills theory, demonstrating that the mass gap is a necessary consequence of the structure of non-abelian gauge fields in four-dimensional space-time. The implications extend to both quantum chromodynamics (QCD) and potential applications in quantum gravity, particularly in holography and AdS/CFT duality. Finally, we outline directions for future work in constructive quantum field theory and the mathematical formulation of gauge theories.

    This study advances our understanding of non-abelian gauge theories and provides a candidate solution to one of the Millennium Prize Problems. Further refinement and formal proof verification will be necessary to fully establish its correctness within the framework of rigorous mathematical physics.

  15. Development of Mathematical Programming Model for Cable Logging System Location

    • scielo.figshare.com
    jpeg
    Updated Jun 1, 2023
    Alynne Rudek; Eduardo da Silva Lopes; Julio Eduardo Arce; Paulo Costa de Oliveira Filho (2023). Development of Mathematical Programming Model for Cable Logging System Location [Dataset]. http://doi.org/10.6084/m9.figshare.7451918.v1
    Explore at:
    Available download formats: jpeg
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    SciELO (http://www.scielo.org/)
    Authors
    Alynne Rudek; Eduardo da Silva Lopes; Julio Eduardo Arce; Paulo Costa de Oliveira Filho
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    ABSTRACT Defining the optimum points for installing a cable logging system is a problem faced by forestry planners. This study evaluated the application of a mathematical programming model for the optimal location of cable logging in wood extraction. The study was conducted in a forestry company located in Paraná State, Brazil. We collected data during timber harvesting and developed mathematical models to define the optimal location of the cable logging system considering the variables “cycle time” and “extraction distance”. The variable “cycle time” affected the definition of the optimal equipment location, resulting in a reduced number of installation points with the largest coverage area. The variable “extraction distance” negatively influenced the location, with an increased number of installation points with smaller coverage. The developed model was efficient, but needs to be improved in order to ensure greater accuracy in wood extraction over long distances.
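The location problem described above can be caricatured as a set-cover search: choose the fewest installation points so that every stand is within cable reach of one of them. The sketch below is a toy one-dimensional version with hypothetical positions and reach, not the study's model or data.

```python
from itertools import combinations

def locate_towers(stands, candidates, max_reach):
    """Return the smallest subset of candidate installation points
    such that every stand lies within max_reach of some chosen point
    (brute-force set cover; fine for small instances)."""
    def covers(point, stand):
        return abs(point - stand) <= max_reach
    for k in range(1, len(candidates) + 1):
        for subset in combinations(candidates, k):
            if all(any(covers(p, s) for p in subset) for s in stands):
                return list(subset)
    return None  # no feasible placement

# Hypothetical stand positions (metres along a road) and cable reach.
stands = [0, 120, 260, 410, 530]
print(locate_towers(stands, candidates=[100, 300, 500], max_reach=150))
# -> [100, 300, 500]
```

Real formulations add the cycle-time and extraction-distance terms as costs in a mixed-integer program rather than a pure coverage test.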

  16. Authorized parameter ranges.

    • plos.figshare.com
    xls
    Updated Sep 2, 2025
    + more versions
    Lou Zonca; Anira Escrichs; Gustavo Patow; Dragana Manasova; Yonathan Sanz-Perl; Jitka Annen; Olivia Gosseries; Steven Laureys; Jacobo Diego Sitt; Gustavo Deco (2025). Authorized parameter ranges. [Dataset]. http://doi.org/10.1371/journal.pone.0328219.t003
    Explore at:
    Available download formats: xls
    Dataset updated
    Sep 2, 2025
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Lou Zonca; Anira Escrichs; Gustavo Patow; Dragana Manasova; Yonathan Sanz-Perl; Jitka Annen; Olivia Gosseries; Steven Laureys; Jacobo Diego Sitt; Gustavo Deco
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The study of disorders of consciousness (DoC) is very complex because patients suffer from a wide variety of lesions, affected brain mechanisms, and symptom severities, and are unable to communicate. Combining neuroimaging data and mathematical modeling can help us quantify and better describe some of these alterations. The goal of this study is to provide a new analysis and modeling pipeline for fMRI data leading to new diagnosis and prognosis biomarkers at the individual patient level. To do so, we project patients’ fMRI data into a low-dimensional latent space. We define the latent space’s dimension as the smallest dimension able to maintain the complexity, non-linearities, and information carried by the data, according to different criteria that we detail in the first part. This dimensionality-reduction procedure then allows us to build biologically inspired latent whole-brain models that can be calibrated at the single-patient level. In particular, we propose a new model inspired by the regulation of neuronal activity by astrocytes in the brain. This modeling procedure leads to two types of model-based biomarkers (MBBs) that provide novel insight at different levels: (1) the connectivity matrices carry information about the severity of the patient’s diagnosis, and (2) the local node parameters correlate with the patient’s etiology, age, and prognosis. Altogether, this study offers a new data-processing framework for resting-state fMRI which provides crucial information regarding the diagnosis and prognosis of DoC patients. Finally, this analysis pipeline could be applied to other neurological conditions.
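One simple way to pick the "smallest dimension able to maintain the information carried by the data" is a cumulative explained-variance cutoff over component variances. The sketch below illustrates only that generic criterion; the paper's actual criteria (complexity, non-linearity) are more elaborate, and the variances here are made up.

```python
def latent_dimension(variances, threshold=0.9):
    """Smallest number of components whose (sorted) variances retain
    at least `threshold` of the total variance."""
    total = sum(variances)
    acc = 0.0
    for d, v in enumerate(sorted(variances, reverse=True), start=1):
        acc += v
        if acc / total >= threshold:
            return d
    return len(variances)

# Hypothetical component variances from a decomposition of fMRI data.
print(latent_dimension([5.0, 2.5, 1.0, 0.3, 0.2]))  # -> 3
```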

  17. Data from: Can leaf area in rice be defined by a mathematical model?

    • scielo.figshare.com
    jpeg
    Updated Jun 1, 2023
    Bruna San Martin Rolim Ribeiro; Michel Rocha da Silva; Gean Leonardo Richter; Giovana Ghisleni Ribas; Nereu Augusto Streck; Alencar Junior Zanon (2023). Can leaf area in rice be defined by a mathematical model? [Dataset]. http://doi.org/10.6084/m9.figshare.9598844.v1
    Explore at:
    Available download formats: jpeg
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    SciELO (http://www.scielo.org/)
    Authors
    Bruna San Martin Rolim Ribeiro; Michel Rocha da Silva; Gean Leonardo Richter; Giovana Ghisleni Ribas; Nereu Augusto Streck; Alencar Junior Zanon
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    ABSTRACT The goal of this study was to define an empirical model to calculate leaf area in rice from linear leaf measurements in genotypes used by farmers in Brazil. Through the leaf dimensions it is possible to estimate the final crop yield from the LAI. Therefore, leaf shape is closely related to the production of photoassimilates that will be converted into grain yield. Field experiments were carried out in four counties of Rio Grande do Sul with twelve-three varieties of rice in four growing seasons. We measured the length and width of leaves to construct the model. The relationship between leaf area and linear dimensions was modeled using a linear model for each genotype, and a general model grouping all genotypes. Model accuracy was measured using the following statistics: Root Mean Square Error, BIAS, modified index of agreement, and coefficient r. The non-destructive method for individual leaves was appropriate for estimating leaf area in rice. Moreover, the general equation was estimated and can be used for all modern genotypes of rice in Brazil.
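Non-destructive leaf-area models of this kind typically take the form A = k · L · W, with a single coefficient fitted by least squares. The sketch below fits that one-parameter model on made-up leaf measurements; the coefficient value and data are illustrative, not the paper's results.

```python
def fit_leaf_area(lengths, widths, areas):
    """Least-squares fit of k in the one-parameter model A = k * L * W.
    With a single coefficient, the normal equations reduce to
    k = sum(A * LW) / sum(LW^2)."""
    lw = [l * w for l, w in zip(lengths, widths)]
    return sum(a * x for a, x in zip(areas, lw)) / sum(x * x for x in lw)

# Synthetic leaves generated with k = 0.72 (hypothetical value).
lengths = [20.0, 30.0, 40.0]   # leaf length (cm)
widths = [1.0, 1.2, 1.5]       # leaf width (cm)
areas = [0.72 * l * w for l, w in zip(lengths, widths)]
print(round(fit_leaf_area(lengths, widths, areas), 3))  # recovers 0.72
```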

  18. Data_Sheet_1_A Novel Synthetic Model of the Glucose-Insulin System for Patient-Wise Inference of Physiological Parameters From Small-Size OGTT Data.PDF

    • figshare.com
    • frontiersin.figshare.com
    pdf
    Updated Jun 1, 2023
    Sebastiån Contreras; David Medina-Ortiz; Carlos Conca; Álvaro Olivera-Nappa (2023). Data_Sheet_1_A Novel Synthetic Model of the Glucose-Insulin System for Patient-Wise Inference of Physiological Parameters From Small-Size OGTT Data.PDF [Dataset]. http://doi.org/10.3389/fbioe.2020.00195.s001
    Explore at:
    Available download formats: pdf
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    Frontiers
    Authors
    Sebastiån Contreras; David Medina-Ortiz; Carlos Conca; Álvaro Olivera-Nappa
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Existing mathematical models for the glucose-insulin (G-I) dynamics often involve variables that are not susceptible to direct measurement. Standard clinical tests for measuring G-I levels for diagnosing potential diseases are simple and relatively cheap, but seldom give enough information to allow the identification of model parameters within the range in which they have a biological meaning, thus generating a gap between mathematical modeling and any possible physiological explanation or clinical interpretation. In the present work, we present a synthetic mathematical model to represent the G-I dynamics in an Oral Glucose Tolerance Test (OGTT), which involves, for the first time among OGTT-related models, delay differential equations. Our model can represent the radically different behaviors observed in a studied cohort of 407 normoglycemic patients (the largest analyzed so far in parameter-fitting experiments), all masked under the current threshold-based normality criteria. We also propose a novel approach to solve the parameter-fitting inverse problem, involving the clustering of different G-I profiles, a simulation-based exploration of the feasible set, and the construction of an information function which reshapes it, based on the clinical records, experimental uncertainties, and physiological criteria. This method allowed an individual-wise recognition of the parameters of our model using small-size OGTT data (5 measurements) directly, without modifying the routine procedures or requiring particular clinical setups. Therefore, our methodology can be easily applied to gain parametric insights to complement the existing tools for the diagnosis of G-I dysregulations. We tested the parameter stability and sensitivity for individual subjects, and an empirical relationship between such indexes and curve shapes was spotted. Since different G-I profiles, under the light of our model, are related to different physiological mechanisms, the present method offers a tool for personally oriented diagnosis and treatment and to better define new health criteria.
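Delay differential equations of the kind mentioned above can be integrated with a fixed-step Euler scheme that keeps a buffer of past values. The generic scalar example below is a sketch of that technique only; it is not the authors' G-I model, and all parameters are illustrative.

```python
def euler_dde(rate, delay, history, t_end, dt=0.01):
    """Fixed-step Euler integration of the scalar delay equation
    x'(t) = -rate * x(t - delay), with constant history x(t) = history
    for t <= 0. Returns x(t_end)."""
    steps = round(t_end / dt)
    lag = round(delay / dt)          # how many past steps the delay spans
    xs = [history] * (lag + 1)       # buffer of past values; xs[-1] is x(0)
    for _ in range(steps):
        xs.append(xs[-1] + dt * (-rate * xs[-1 - lag]))
    return xs[-1]

# With zero delay this reduces to plain exponential decay x(t) = e^{-t}.
print(euler_dde(rate=1.0, delay=0.0, history=1.0, t_end=1.0))  # near 0.368
```

With a sufficiently large `delay`, the same equation develops oscillations that no ordinary first-order ODE can produce, which is why delays matter for physiological modeling.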

  19. Data from: Analytical determination of the performance of free draining, sloping furrow irrigation

    • scielo.figshare.com
    jpeg
    Updated Jun 11, 2023
    Aristides Fraga Lima Filho (2023). Analytical determination of the performance of free draining, sloping furrow irrigation [Dataset]. http://doi.org/10.6084/m9.figshare.14276921.v1
    Explore at:
    Available download formats: jpeg
    Dataset updated
    Jun 11, 2023
    Dataset provided by
    SciELO (http://www.scielo.org/)
    Authors
    Aristides Fraga Lima Filho
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    An analytical solution was developed to evaluate free-draining, sloping furrows based on the Walker and Skogerboe (1987) volume balance approach. Its application allows the mathematical calculation of system performance, instead of constructing graphs that define the evaluation parameters. The mathematical solution is based on fitting equations to power models, calculating the areas under the curves defined by the fitting, and on curve intersection, from which the system performance is obtained. To validate the methodology, we conducted a field experiment at the Experimental Farm of the Federal University of Ceará, in the municipality of Pentecoste, belonging to the Center of Agricultural Sciences, where field data were analyzed by the two methods. The results showed that the analytical methodology can be used to evaluate free-draining, sloping furrow irrigation.
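The "areas under the curves" step that the analytical solution replaces with closed forms can be checked numerically with the trapezoid rule: the area between the recession and advance curves gives the total infiltration opportunity time along the furrow. The distances and times below are illustrative, not the field data from the experiment.

```python
def trapezoid_area(xs, ys):
    """Area under a sampled curve by the trapezoid rule."""
    return sum((ys[i] + ys[i + 1]) / 2 * (xs[i + 1] - xs[i])
               for i in range(len(xs) - 1))

# Hypothetical advance and recession times along a 100 m furrow.
x = [0.0, 25.0, 50.0, 75.0, 100.0]        # distance along furrow (m)
advance = [0.0, 12.0, 28.0, 50.0, 80.0]   # minutes for water to arrive
recession = [95.0, 96.0, 98.0, 101.0, 105.0]  # minutes until water recedes
opportunity = trapezoid_area(x, [r - a for r, a in zip(recession, advance)])
print(opportunity)  # total opportunity time-distance (min*m)
```

Fitting power models to the advance and recession data first, as the paper does, lets these integrals be evaluated exactly instead of numerically.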

  20. Model comparisons of adherence and CD4 recovery parameters.

    • figshare.com
    • plos.figshare.com
    xls
    Updated Jun 2, 2023
    Sinead E. Morris; Renate Strehlau; Stephanie Shiau; Elaine J. Abrams; Caroline T. Tiemessen; Louise Kuhn; Andrew J. Yates (2023). Model comparisons of adherence and CD4 recovery parameters. [Dataset]. http://doi.org/10.1371/journal.ppat.1010751.t003
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 2, 2023
    Dataset provided by
    PLOS Pathogens
    Authors
    Sinead E. Morris; Renate Strehlau; Stephanie Shiau; Elaine J. Abrams; Caroline T. Tiemessen; Louise Kuhn; Andrew J. Yates
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    AIC values (ΔAIC) are quoted relative to the minimum AIC value across all models. The model with ΔAIC = 0 is the model with the lowest AIC and thus has the most statistical support. See Table 1 for parameter definitions.
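ΔAIC values of this kind convert directly into Akaike weights, which express each model's relative support as a probability. The helper below sketches that standard computation with hypothetical AIC numbers, not the values from this dataset.

```python
import math

def akaike_table(aics):
    """Compute delta-AIC and Akaike weights from {model: AIC}.
    The model with delta = 0 has the most statistical support;
    weight_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2)."""
    best = min(aics.values())
    deltas = {m: a - best for m, a in aics.items()}
    rel = {m: math.exp(-d / 2) for m, d in deltas.items()}
    total = sum(rel.values())
    weights = {m: r / total for m, r in rel.items()}
    return deltas, weights

# Hypothetical AIC values for three candidate models.
deltas, weights = akaike_table({"null": 210.4, "adherence": 204.1, "full": 205.3})
print(deltas["adherence"], round(weights["adherence"], 2))
```

A common rule of thumb treats ΔAIC < 2 as substantial support and ΔAIC > 10 as essentially none.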
